Abstract
In recent years, various forms of neural networks have been used to solve problems that classic approaches to artificial intelligence and automation could not handle efficiently, such as handwriting recognition, speech recognition, or machine translation of natural languages. Yet it remains very hard to understand how exactly these different types of neural networks reach their decisions in specific situations. We cannot verify them the way we can verify, e.g., grammars, trees, and classic state machines. Being able to actually prove the reliability of artificial intelligence models becomes more and more important, especially when cyber-physical systems and humans are affected by the AI's decisions. The aim of this paper is to introduce an approach for analyzing the decision processes of a neural network at a specific point in its training. To this end, we identify the characteristics that artificial neural networks share with classic symbolic AI models and the aspects in which they differ. In addition, we describe our first ideas on how to overcome these differences and how to derive from an artificial neural network either an equivalent symbolic model or at least one similar enough to allow such a model's construction. Our long-term goal is to find, if possible, an appropriate bidirectional transformation between both AI approaches.
Keywords
- Artificial neural networks
- Symbolic AI models
- Connectionism
- Symbolism
Acknowledgment
The authors would like to thank Franz Schmalhofer for his many constructive, open-minded discussions of our ideas. We would also like to thank Wolfgang Hommel for polishing our paper and helping us identify future challenges. We highly appreciate their support.