Learning Human-Understandable Description of Dynamical Systems from Feed-Forward Neural Networks

  • Sophie Tourret
  • Enguerrand Gentet
  • Katsumi Inoue
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10261)

Abstract

Learning the dynamics of systems, the task of interest in this paper, is a problem to which artificial neural networks (NNs) are naturally suited. However, for a non-expert, an NN is not a convenient tool, for two reasons. First, building an accurate NN requires fine-tuning its architecture and training parameters. Second, even the most accurate NN prediction gives no insight into the rules governing the system. This paper addresses both issues by presenting a method that automatically fine-tunes an NN to accurately predict the evolution of a dynamical system and extracts human-understandable rules from the trained network. Experimental results on Boolean systems show the relevance of the approach and open the way to extensions naturally supported by NNs, such as handling noisy data, continuous variables, or time-delayed systems.
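To make the general workflow concrete, the sketch below is an illustrative toy example, not the method described in the paper: the three-variable Boolean system, the fixed network size, and the plain gradient-descent training are all assumptions made here for brevity. It trains a small feed-forward network on every state transition of the system and then reads off one (unsimplified) DNF rule per variable by querying the trained network on all states.

```python
# Illustrative sketch (not the paper's algorithm): train a small feed-forward
# network on the transitions of a 3-variable Boolean system, then read off
# human-readable rules by querying the trained network on every state.
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth Boolean system:
#   a(t+1) = b(t),  b(t+1) = a(t) AND c(t),  c(t+1) = NOT a(t)
def step(state):
    a, b, c = state
    return (b, a & c, 1 - a)

states = np.array(list(itertools.product([0, 1], repeat=3)), dtype=float)
targets = np.array([step(tuple(int(v) for v in s)) for s in states], dtype=float)

# One hidden layer, trained with plain gradient descent on squared error.
n_in, n_hid, n_out = 3, 8, 3
W1 = rng.normal(0, 1, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0, 1, (n_hid, n_out)); b2 = np.zeros(n_out)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

for _ in range(20000):
    h = sigmoid(states @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    # Backpropagation for the squared-error loss over the full batch.
    dy = (y - targets) * y * (1 - y)
    dh = (dy @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ dy; b2 -= 0.5 * dy.sum(0)
    W1 -= 0.5 * states.T @ dh; b1 -= 0.5 * dh.sum(0)

# Rule extraction by exhaustive querying: every state whose predicted output
# exceeds 0.5 becomes one conjunction (minterm) of an unsimplified DNF.
names = ["a", "b", "c"]
pred = sigmoid(sigmoid(states @ W1 + b1) @ W2 + b2) > 0.5
for j, var in enumerate(names):
    terms = []
    for s, on in zip(states, pred[:, j]):
        if on:
            lits = [n if v else f"not {n}" for n, v in zip(names, s)]
            terms.append(" and ".join(lits))
    print(f"{var}(t+1) <- " + ("  |  ".join(terms) if terms else "false"))
```

Exhaustive querying and unsimplified DNF output are only tractable for tiny Boolean systems; a practical extraction pass would also minimize the resulting formulas, and the paper's method additionally tunes the network architecture automatically rather than fixing it in advance.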


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Sophie Tourret (1)
  • Enguerrand Gentet (1, 2)
  • Katsumi Inoue (1, 3)
  1. National Institute of Informatics, Tokyo, Japan
  2. Paris-Sud University, Orsay, France
  3. Tokyo Institute of Technology, Tokyo, Japan
