Extracting reduced logic programs from artificial neural networks

Published in: Applied Intelligence

Abstract

Artificial neural networks can be trained to perform excellently in many application areas. Whilst they can learn from raw data to solve sophisticated recognition and analysis problems, the acquired knowledge remains hidden within the network architecture and is not readily accessible for analysis or further use: trained networks are black boxes. Recent research efforts therefore investigate the possibility of extracting symbolic knowledge from trained networks, in order to analyze, validate, and reuse the structural insights gained implicitly during the training process. In this paper, we study how knowledge in the form of propositional logic programs can be obtained in such a way that the programs are as simple as possible, where simple is understood in some clearly defined and meaningful way.
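As a minimal, hypothetical sketch of the general idea, consider a tiny binary-threshold network: enumerate all input vectors, write down one maximally specific rule per input on which the network fires, and then greedily drop literals and subsumed rules as long as the program still agrees with the network. The network, its weights, and all helper names below are illustrative assumptions for this sketch, not the authors' construction.

```python
from itertools import product

VARS = "abc"

def step(x, threshold):
    return 1 if x >= threshold else 0

def tiny_net(a, b, c):
    # Hypothetical "trained" weights; this network happens to
    # compute (a AND b) OR c.
    h = step(0.6 * a + 0.6 * b, 1.0)      # hidden unit ~ a AND b
    return step(0.8 * h + 0.9 * c, 0.7)   # output unit ~ h OR c

def satisfies(bits, body):
    """Is the input vector consistent with the rule body?"""
    assign = dict(zip(VARS, bits))
    return all(assign[l[4:]] == 0 if l.startswith("not ") else assign[l] == 1
               for l in body)

def sound(body):
    """A body is sound if every input it covers makes the network fire."""
    return all(tiny_net(*bits)
               for bits in product([0, 1], repeat=len(VARS))
               if satisfies(bits, body))

# Step 1: exhaustive extraction -- one maximally specific rule per
# input vector on which the network fires.
rules = [frozenset(v if b else f"not {v}" for v, b in zip(VARS, bits))
         for bits in product([0, 1], repeat=len(VARS))
         if tiny_net(*bits)]

# Step 2: reduction -- greedily drop literals that are irrelevant,
# then discard rules subsumed by a strictly more general one.
reduced = []
for body in map(set, rules):
    for lit in sorted(body):
        if sound(body - {lit}):
            body.discard(lit)
    reduced.append(frozenset(body))
reduced = {b for b in reduced if not any(o < b for o in reduced)}

program = sorted(" & ".join(sorted(r)) for r in reduced)
print(program)  # -> ['a & b', 'c']
```

The five extracted rules collapse to the two-clause program "out ← a ∧ b; out ← c", which is equivalent to the network on all inputs; exhaustive enumeration is of course only feasible for very few input units.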



Author information

Corresponding author

Correspondence to Pascal Hitzler.


About this article

Cite this article

Lehmann, J., Bader, S. & Hitzler, P. Extracting reduced logic programs from artificial neural networks. Appl Intell 32, 249–266 (2010). https://doi.org/10.1007/s10489-008-0142-y
