Propositional Rules Generated at the Top Layers of a CNN

  • Conference paper
  • In: From Bioinspired Systems and Biomedical Applications to Machine Learning (IWINAC 2019)

Abstract

So far, many rule extraction techniques have been proposed to explain the classifications of shallow Multi Layer Perceptrons (MLPs), but very few methods have been introduced for Convolutional Neural Networks (CNNs). To fill this gap, this work presents a new technique applied to a CNN architecture with two convolutional layers. The network is trained on the MNIST dataset of digit images. Rule extraction is performed at the first fully connected layer by means of the Discretized Interpretable Multi Layer Perceptron (DIMLP). This transparent MLP architecture allows us to generate symbolic rules by precisely locating axis-parallel hyperplanes. The antecedents of the extracted rules represent responses of convolutional filters, which makes it possible to determine the samples covered by each rule. Hence, we can visualize the centroid of each rule, which gives some insight into how the network works. This represents a first step towards explaining CNN responses, since a final explanation would be obtained in a further processing step by generating propositional rules with respect to the input layer. In the experiments we illustrate a generated ruleset and its characteristics in terms of accuracy, complexity and fidelity, i.e. the degree of matching between CNN classifications and rule classifications. Overall, the rules reach very high fidelity. Finally, several examples of rules are visualized and discussed.
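
To make the rule format concrete, the following is a minimal, hypothetical sketch of how a propositional rule defined over the activations of the first fully connected layer could be represented, matched against samples, and summarized by its centroid, together with a fidelity measure in the sense used above. The rule, its thresholds, the 256-unit layer size and the stand-in activations are illustrative assumptions, not rules or values extracted in the paper.

    import numpy as np

    # A propositional rule is a conjunction of antecedents, each one a threshold
    # test on the activation of one neuron of the first fully connected layer.
    # Indices, thresholds and the class label below are purely illustrative.
    EXAMPLE_RULE = {
        "antecedents": [(3, ">=", 0.40), (17, "<", 0.10), (58, ">=", 0.80)],
        "class": 7,
    }

    def rule_covers(rule, activation):
        """Return True when an activation vector satisfies every antecedent."""
        for index, op, threshold in rule["antecedents"]:
            value = activation[index]
            if op == ">=" and value < threshold:
                return False
            if op == "<" and value >= threshold:
                return False
        return True

    def rule_centroid(rule, activations):
        """Average activation vector of the covered samples (None if none)."""
        covered = np.array([a for a in activations if rule_covers(rule, a)])
        return covered.mean(axis=0) if covered.size else None

    def fidelity(rule_predictions, cnn_predictions):
        """Fraction of samples on which rule-based and CNN classifications agree."""
        return float(np.mean(np.array(rule_predictions) == np.array(cnn_predictions)))

    # Stand-in activations: 1000 samples, 256 units in the fully connected layer.
    fc1_activations = np.random.rand(1000, 256)
    print(rule_centroid(EXAMPLE_RULE, fc1_activations))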

Notes

  1. The Lasagne script that defines the CNN architecture is available at https://lasagne.readthedocs.io/en/latest/user/tutorial.html; a sketch of such a network follows these notes.

  2. See http://yann.lecun.com/exdb/mnist/ for a comparison of several models.
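
As a rough idea of the kind of network the first note refers to, the sketch below builds a two-convolutional-layer Lasagne model in the spirit of that tutorial; the filter counts and the 256-unit fully connected layer are assumptions borrowed from the tutorial example, not necessarily the exact configuration used in the paper.

    from lasagne.layers import (Conv2DLayer, DenseLayer, DropoutLayer,
                                InputLayer, MaxPool2DLayer)
    from lasagne.nonlinearities import rectify, softmax

    def build_cnn(input_var=None):
        # 28x28 grayscale MNIST digits.
        network = InputLayer(shape=(None, 1, 28, 28), input_var=input_var)
        # Two convolution + max-pooling stages, as in the tutorial example.
        network = Conv2DLayer(network, num_filters=32, filter_size=(5, 5),
                              nonlinearity=rectify)
        network = MaxPool2DLayer(network, pool_size=(2, 2))
        network = Conv2DLayer(network, num_filters=32, filter_size=(5, 5),
                              nonlinearity=rectify)
        network = MaxPool2DLayer(network, pool_size=(2, 2))
        # First fully connected layer: the representation at which rule
        # extraction is performed (256 units is an assumption).
        network = DenseLayer(DropoutLayer(network, p=0.5), num_units=256,
                             nonlinearity=rectify)
        # Softmax output over the ten digit classes.
        network = DenseLayer(DropoutLayer(network, p=0.5), num_units=10,
                             nonlinearity=softmax)
        return network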

Author information

Correspondence to Guido Bologna.

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Bologna, G. (2019). Propositional Rules Generated at the Top Layers of a CNN. In: Ferrández Vicente, J., Álvarez-Sánchez, J., de la Paz López, F., Toledo Moreo, J., Adeli, H. (eds) From Bioinspired Systems and Biomedical Applications to Machine Learning. IWINAC 2019. Lecture Notes in Computer Science, vol 11487. Springer, Cham. https://doi.org/10.1007/978-3-030-19651-6_42

  • DOI: https://doi.org/10.1007/978-3-030-19651-6_42

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-19650-9

  • Online ISBN: 978-3-030-19651-6

  • eBook Packages: Computer Science (R0)
