Applied Intelligence, Volume 32, Issue 3, pp 249–266

Extracting reduced logic programs from artificial neural networks



Artificial neural networks can be trained to perform excellently in many application areas. Whilst they can learn from raw data to solve sophisticated recognition and analysis problems, the acquired knowledge remains hidden within the network architecture and is not readily accessible for analysis or further use: trained networks are black boxes. Recent research efforts therefore investigate the possibility of extracting symbolic knowledge from trained networks, in order to analyze, validate, and reuse the structural insights gained implicitly during the training process. In this paper, we study how knowledge in the form of propositional logic programs can be obtained in such a way that the programs are as simple as possible—where simplicity is understood in some clearly defined and meaningful way.


Keywords: Artificial neural network · Rule extraction · Logic program · Neural-symbolic integration
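To make the extraction idea concrete, the following is a minimal sketch—not the algorithm developed in the paper. It assumes a single binary threshold unit with hypothetical, illustrative weights: a propositional rule is read off for every input vector that activates the unit (pedantic extraction), and each rule body is then greedily reduced by dropping literals whose removal provably preserves the rule's correctness against the network.

```python
from itertools import product

# Illustrative weights and bias for one trained threshold unit
# (hypothetical values, not taken from the paper).
WEIGHTS = [1.0, 1.0, -2.0]
BIAS = -0.5

def network(bits):
    """Binary threshold unit: fires iff the weighted sum is non-negative."""
    s = sum(w * b for w, b in zip(WEIGHTS, bits)) + BIAS
    return 1 if s >= 0 else 0

def completions(body, n):
    """Yield all full input vectors consistent with a partial assignment."""
    free = [i for i in range(n) if i not in body]
    for vals in product([0, 1], repeat=len(free)):
        bits = [0] * n
        for i, v in body.items():
            bits[i] = v
        for i, v in zip(free, vals):
            bits[i] = v
        yield bits

def extract_reduced_rules(net, n):
    """One rule per activating input vector, each greedily reduced.

    A literal is dropped from a rule body only if every completion of
    the reduced body still activates the network, so the reduced
    program agrees with the network on all inputs.
    """
    rules = set()
    for bits in product([0, 1], repeat=n):
        if net(bits) != 1:
            continue
        body = {i: b for i, b in enumerate(bits)}
        for i in list(body):
            trial = {j: v for j, v in body.items() if j != i}
            if all(net(c) == 1 for c in completions(trial, n)):
                body = trial
        rules.add(frozenset(body.items()))
    return rules

# The unit above realises out <-> (a1 or a2) and not a3; the reduced
# program has the two clauses  out <- a1, not a3  and  out <- a2, not a3.
rules = extract_reduced_rules(network, 3)
```

Exhaustive enumeration only scales to small input dimensions; it serves here purely to illustrate what "reduced" means for an extracted program.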





Copyright information

© Springer Science+Business Media, LLC 2008

Authors and Affiliations

  1. Department of Computer Science, Universität Leipzig, Leipzig, Germany
  2. International Center for Computational Logic, Technische Universität Dresden, Dresden, Germany
  3. AIFB, Universität Karlsruhe (TH), Karlsruhe, Germany
