
Training Interpretable Convolutional Neural Networks by Differentiating Class-Specific Filters

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12347)

Abstract

Convolutional neural networks (CNNs) have been successfully used in a range of tasks. However, CNNs are often viewed as “black-box” models that lack interpretability. One main reason is filter-class entanglement: an intricate many-to-many correspondence between filters and classes. Most existing works attempt post-hoc interpretation of a pre-trained model while neglecting to reduce the entanglement underlying the model. In contrast, we focus on alleviating filter-class entanglement during training. Inspired by cellular differentiation, we propose a novel strategy to train interpretable CNNs by encouraging class-specific filters, where each filter responds to only one (or few) class. Concretely, we design a learnable sparse Class-Specific Gate (CSG) structure that flexibly assigns each filter to one (or few) class. The gate allows a filter’s activation to pass only when the input sample comes from the assigned class. Extensive experiments demonstrate that our method generates a sparse and highly class-related representation of the input, which leads to stronger interpretability. Moreover, compared with the standard training strategy, our model shows benefits in applications such as object localization and adversarial-sample detection. Code link: https://github.com/hyliang96/CSGCNN.
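To make the gating mechanism concrete, below is a minimal PyTorch sketch of a class-specific gate as the abstract describes it. The class name `CSGGate`, the sigmoid parameterisation, and the sparsity penalty are illustrative assumptions rather than the authors' exact design; see the linked repository for the reference implementation.

```python
# Minimal sketch of a Class-Specific Gate (CSG): a learnable matrix that
# relates each convolutional filter to one (or few) classes, letting a
# filter's activation pass only for samples of its assigned class.
# Hypothetical names and details; not the authors' exact code.
import torch
import torch.nn as nn

class CSGGate(nn.Module):
    """Learnable gate matrix over (num_filters, num_classes)."""

    def __init__(self, num_filters: int, num_classes: int):
        super().__init__()
        # Raw logits; a sigmoid keeps each gate value in (0, 1).
        self.logits = nn.Parameter(torch.zeros(num_filters, num_classes))

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # features: (batch, num_filters, H, W); labels: (batch,) class indices.
        gates = torch.sigmoid(self.logits)   # (num_filters, num_classes)
        per_sample = gates[:, labels].t()    # (batch, num_filters)
        # Scale each filter's activation by its gate for the sample's class,
        # so a filter only "passes" for the class(es) assigned to it.
        return features * per_sample[:, :, None, None]

    def sparsity_loss(self) -> torch.Tensor:
        # Penalize open gates so each filter ends up assigned to few classes.
        return torch.sigmoid(self.logits).mean()
```

In this sketch, the gated feature map would be trained jointly with the standard (ungated) forward path, and after training the learned gate matrix itself can be read off as an interpretable filter-to-class assignment.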

Keywords

Class-specific filters · Interpretability · Disentangled representation · Filter-class entanglement · Gate

Notes

Acknowledgement

This work was supported by the National Key R&D Program of China (2017YFA0700904), NSFC Projects (61620106010, U19B2034, U1811461, U19A2081, 61673241, 61771273), Beijing NSF Project (L172037), PCL Future Greater-Bay Area Network Facilities for Large-scale Experiments and Applications (LZC0019), Beijing Academy of Artificial Intelligence (BAAI), Tsinghua-Huawei Joint Research Program, a grant from Tsinghua Institute for Guo Qiang, Tiangong Institute for Intelligent Computing, the JP Morgan Faculty Research Program, Microsoft Research Asia, Rejoice Sport Tech. co., LTD and the NVIDIA NVAIL Program with GPU/DGX Acceleration.

Supplementary material

Supplementary material 1 (PDF, 357 KB): 504434_1_En_37_MOESM1_ESM.pdf


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Department of Computer Science and Technology, BNRist Center, Institute for AI, THBI Laboratory, Tsinghua University, Beijing, China
  2. Tsinghua SIGS, Shenzhen, China
  3. Peng Cheng Laboratory, Shenzhen, China
  4. ByteDance AI Lab, Beijing, China
  5. Department of CS, University of Southern California, Los Angeles, USA
