Interpretable Neural Network Decoupling

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12360)

Abstract

The remarkable performance of convolutional neural networks (CNNs) is entangled with their huge number of uninterpretable parameters, which has become the bottleneck limiting the exploitation of their full potential. Toward network interpretation, previous endeavors mostly resort to analyzing single filters, which ignores the relationships between filters. In this paper, we propose a novel architecture decoupling method that interprets the network by investigating its calculation paths. More specifically, we introduce an architecture controlling module in each layer that encodes the network architecture as a vector. By maximizing the mutual information between these vectors and the input images, the module is trained to select specific filters, distilling a unique calculation path for each input. Furthermore, to improve the interpretability and compactness of the decoupled network, the output of each layer is encoded to align with the architecture encoding vector under a sparsity regularization constraint. Unlike conventional pixel-level or filter-level network interpretation methods, we propose a path-level analysis that explores the relationship between filter combinations and semantic concepts, which is better suited to interpreting the working rationale of the decoupled network. Extensive experiments show that the decoupled network supports several applications, i.e., network interpretation, network acceleration, and adversarial sample detection.
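To make the mechanism concrete, the following is a minimal PyTorch sketch of the per-layer filter selection described above. It is not the authors' implementation: the names (ArchControl, DecoupledConv, gate) are illustrative, and the choices of a global-average-pooling encoder and a straight-through estimator for the binary architecture vector are assumptions made here for brevity.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ArchControl(nn.Module):
        """Hypothetical architecture controlling module for one conv layer.

        Maps the layer's input features to a binary vector that selects
        which filters participate in this input's calculation path.
        """

        def __init__(self, in_channels, num_filters):
            super().__init__()
            # Lightweight encoder: global average pooling + one linear layer.
            self.fc = nn.Linear(in_channels, num_filters)

        def forward(self, x):
            summary = F.adaptive_avg_pool2d(x, 1).flatten(1)  # (N, C)
            soft = torch.sigmoid(self.fc(summary))            # (N, num_filters)
            hard = (soft > 0.5).float()                       # binary architecture vector
            # Straight-through estimator: hard 0/1 values in the forward
            # pass, gradients taken through the soft sigmoid in backward.
            return hard + soft - soft.detach()

    class DecoupledConv(nn.Module):
        """Conv layer whose output filters are selected per input."""

        def __init__(self, in_channels, out_channels):
            super().__init__()
            self.conv = nn.Conv2d(in_channels, out_channels, 3, padding=1)
            self.ctrl = ArchControl(in_channels, out_channels)

        def forward(self, x):
            gate = self.ctrl(x)                 # (N, out_channels) in {0, 1}
            out = F.relu(self.conv(x))
            # Zero out unselected filters; the surviving filters across all
            # layers form this input's unique calculation path.
            return out * gate[:, :, None, None], gate

In a full training setup the gates would additionally be driven by the paper's objectives, i.e., a mutual-information term between the architecture vectors and the inputs plus a sparsity penalty on the gates (e.g., adding a term proportional to gate.mean() to the task loss); the concatenated gate vectors of all layers then serve as the path signature used for interpretation and adversarial-sample detection. The exact formulations are given in the paper.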

Keywords

Network interpretation · Architecture decoupling

Notes

Acknowledgements

This work is supported by the National Natural Science Foundation of China (No. U1705262, No. 61772443, No. 61572410, No. 61802324 and No. 61702136), the National Key R&D Program (No. 2017YFC0113000 and No. 2016YFB1001503), the Key R&D Program of Jiangxi Province (No. 20171ACH80022), and the Natural Science Foundation of Guangdong Province, China (No. 2019B1515120049).

Supplementary material

Supplementary material 1: 504470_1_En_39_MOESM1_ESM.pdf (PDF, 5.6 MB)

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Department of Artificial Intelligence, School of Informatics, Xiamen University, Xiamen, China
  2. Peng Cheng Laboratory, Shenzhen, China
  3. National University of Singapore, Singapore
  4. Beihang University, Beijing, China
  5. BestImage, Tencent Technology (Shanghai) Co., Ltd., Shanghai, China
  6. Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, UAE
  7. Inception Institute of Artificial Intelligence, Abu Dhabi, UAE
