Re-training Deep Neural Networks to Facilitate Boolean Concept Extraction

  • Camila González
  • Eneldo Loza Mencía
  • Johannes Fürnkranz
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10558)

Abstract

Deep neural networks are accurate predictors, but their decisions are difficult to interpret, which limits their applicability in various fields. Symbolic representations in the form of rule sets are one way to illustrate their behavior as a whole, as well as the hidden concepts they model in the intermediate layers. The main contribution of this paper is to demonstrate how to facilitate rule extraction from a deep neural network by retraining it so as to encourage sparseness in the weight matrices and make the hidden units either maximally or minimally active. Instead of using datasets that combine the attributes in an unclear manner, we demonstrate the effectiveness of the methods on the task of reconstructing predefined Boolean concepts, so that it can later be assessed to what degree the patterns were captured in the rule sets. The evaluation shows that reducing the connectivity of the network in such a way significantly assists later rule extraction, and that when the neurons are either minimally or maximally active it suffices to consider one threshold per hidden unit.
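
The abstract names two retraining objectives: sparse weight matrices and hidden activations pushed towards their extremes, so that one threshold per unit later suffices for Boolean rule extraction. The following is a minimal sketch of how such a combined loss could look; it is not the authors' implementation, and the network architecture, penalty weights, and function names are illustrative assumptions.

    # Sketch (assumptions: toy architecture, illustrative penalty weights).
    import torch
    import torch.nn as nn

    class SmallNet(nn.Module):
        """Toy feed-forward network with sigmoid hidden units."""
        def __init__(self, n_in=10, n_hidden=8, n_out=1):
            super().__init__()
            self.hidden = nn.Linear(n_in, n_hidden)
            self.out = nn.Linear(n_hidden, n_out)

        def forward(self, x):
            h = torch.sigmoid(self.hidden(x))   # hidden activations in (0, 1)
            return torch.sigmoid(self.out(h)), h

    def retraining_loss(y_pred, y_true, h, model,
                        l1_weight=1e-3, polar_weight=1e-2):
        """Prediction loss plus sparsity and activation-polarisation penalties."""
        bce = nn.functional.binary_cross_entropy(y_pred, y_true)
        # L1 penalty on the weight matrices drives many connections towards
        # zero, so they can be pruned before rule extraction.
        l1 = sum(w.abs().sum()
                 for name, w in model.named_parameters() if "weight" in name)
        # h * (1 - h) is largest at h = 0.5 and vanishes at h in {0, 1};
        # penalising it pushes hidden units to be either minimally or maximally
        # active, so a single threshold per unit suffices later on.
        polarisation = (h * (1.0 - h)).mean()
        return bce + l1_weight * l1 + polar_weight * polarisation

    # Usage on a toy Boolean concept (target = x0 AND x1):
    net = SmallNet()
    x = torch.randint(0, 2, (32, 10)).float()
    y = (x[:, 0] * x[:, 1]).unsqueeze(1)
    y_pred, h = net(x)
    loss = retraining_loss(y_pred, y, h, net)
    loss.backward()

The polarisation term is just one simple way to realise the "maximally or minimally active" behaviour described in the abstract; the paper itself may use a different penalty or training schedule.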

Keywords

Deep neural networks · Inductive rule learning · Knowledge distillation

Acknowledgements

We would like to thank the anonymous reviewers for their helpful suggestions. Computations for this research were conducted on the Lichtenberg high performance computer of the TU Darmstadt.


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Knowledge Engineering Group, TU Darmstadt, Darmstadt, Germany
