Abstract
Logical representation and reasoning are important aspects of intelligence. Current artificial neural network (ANN) models excel at perceptual intelligence but perform poorly at cognitive intelligence such as logical representation. Researchers have therefore designed novel models, known as knowledge-based neural networks, that represent and store logical relations within neural network structures. However, these models suffer from an ambiguity problem: the same neural network structure can represent multiple logical relations, so the logical relations used to construct a structure cannot be reliably read back out of it. To allow logical relations to be stored in the format of a neural network and read out of it, this paper studies a direct mapping between logical relations and neural network structures and proposes a novel model called the Probabilistic Logical Generative Neural Network (PLGNN), whose neurons and links are redesigned specifically for logical relation representation. Neurons represent only things, while links represent only the logical relations between things, so no extra logic neurons or layers are needed. Moreover, the related construction and adjustment methods are designed so that the neural network structure is dynamically constructed and adjusted according to the logical relations.
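The core idea of the abstract can be illustrated with a minimal sketch. The class and method names below are hypothetical, not the paper's implementation; the point is that when neurons stand only for things and links stand only for relations, each stored structure maps back to exactly one set of rules, avoiding the ambiguity problem.

```python
# Hypothetical sketch: neurons represent things, links represent logical
# relations, so rules can be stored in and read back from the structure.
class LogicNetwork:
    def __init__(self):
        self.links = []  # each link: (premise neurons, conclusion neuron)

    def add_rule(self, premises, conclusion):
        # A rule becomes one link between "thing" neurons;
        # no auxiliary logic neurons or hidden layers are needed.
        self.links.append((frozenset(premises), conclusion))

    def read_rules(self):
        # The stored structure maps back to its logical relations unambiguously.
        return [(sorted(p), c) for p, c in self.links]

net = LogicNetwork()
net.add_rule({"has hair"}, "mammal")
net.add_rule({"mammal", "predator"}, "beast")
print(net.read_rules())
# → [(['has hair'], 'mammal'), (['mammal', 'predator'], 'beast')]
```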
References
Human brain project, framework partnership agreement. https://www.humanbrainproject.eu. Accessed July 2016
Markram H, Meier K et al (2012) The human brain project: a report to the European Commission. Technical report
Bargmann CI, Newsome WT (2014) The brain research through advancing innovative neurotechnologies (BRAIN) initiative and neurology. JAMA Neurol 71(6):675–676
Poo MM, Du JL, Ip N et al (2016) China brain project: basic neuroscience, brain diseases, and brain-inspired computing. Neuron 92(3):591–596
Sun Y, Liang D, Wang X, Tang X (2015) DeepID3: face recognition with very deep neural networks. arXiv:1502.00873
Parkhi OM, Vedaldi A, Zisserman A (2015) Deep face recognition. In: British machine vision conference, pp 41.1–41.12
He X, Wang G, Zhang XP et al (2016) Leaf classification utilizing a convolutional neural network with a structure of single connected layer. In: 12th international conference on intelligent computation, Lanzhou, China, pp 332–340
Mohamed AR, Dahl GE, Hinton GE (2012) Acoustic modeling using deep belief networks. IEEE Trans Audio Speech Lang Process 20(1):14–22
Gehring J, Lee W, Kilgour K et al (2013) Modular combination of deep neural networks for acoustic modeling. In: 14th annual conference of the international speech communication association, Lyon, France, pp 94–98
Chollet F (2017) Deep learning with python. Manning Publications, New York
How to teach artificial intelligence some common sense. https://www.wired.com/story/how-to-teach-artificial-intelligence-common-sense/. Accessed Mar 2019
LeCun Y (2015) What’s wrong with deep learning. CVPR, keynote
Garcez A, Raedt L, Lamb L et al (2015) Neural-symbolic learning and reasoning: contributions and challenges. In: AAAI, CA
Garnelo M, Arulkumaran K, Shanahan M (2016) Towards deep symbolic reinforcement learning. arXiv:1609.05518
Cybenko G (1989) Approximation by superpositions of a sigmoidal function. Math Control Signals Syst 2(4):303–314
Hassoun M (1995) Fundamentals of artificial neural networks. MIT Press, Cambridge
Irving G, Szegedy C et al (2016) DeepMath: deep sequence models for premise selection. In: NIPS, pp 2235–2243
Cai C, Ke D, Xu Y, Su K (2017) Symbolic manipulation based on deep neural networks and its application to axiom discovery. In: IJCNN
Luger GF (2008) Artificial intelligence: structures and strategies, 6th edn. Pearson Education, London
Negnevitsky M (2011) Artificial intelligence: a guide to intelligent systems, 3rd edn. Pearson Education, London
Besold TR, Kuhnberger KU (2015) Towards integrated neural–symbolic systems for human-level AI: two research programs helping to bridge the gaps. Biol Inspired Cogn Archit 14:97–110
Besold TR (2015) Same same, but different? Exploring differences in complexity between logics and neural networks. In: NeSy’15, Neural-Symbolic.org
de Penning L, d’Avila Garcez AS, Lamb LC, Ch Meyer JJ (2011) A neural-symbolic cognitive agent for online learning and reasoning. In: IJCAI, pp 1653–1658
Towell GG, Shavlik JW (1994) Knowledge-based artificial neural networks. Artif Intell 70(1):119–165
Garcez A, Lamb L, Gabbay D (2008) Neural-symbolic cognitive reasoning, perspectives in neural computing. In: Cognitive technologies. Springer
Valiant LG (2006) Knowledge infusion. In: Proceedings of the 21st national conference on artificial intelligence, Boston, USA, pp 1546–1551
Bowman SR, Potts C, Manning C D (2014) Recursive neural networks can learn logical semantics. Technical report, arXiv:1406.1827
Mandziuk J, Macukow B (1993) A neural network performing boolean logic operations. Opt Mem Neural Netw 2(1):17–35
Gallant SI (1993) Neural network learning and expert systems. MIT Press, Boston
Wang G (2017) Automatical knowledge representation of logical relations by dynamical neural network. J Intell Syst 26(4):625–639
Hebb D (1949) The organization of behavior. Wiley, New York
Haykin S (2008) Neural networks and learning machines, 3rd edn. Prentice Hall, Upper Saddle River
UCI machine learning repository: zoo dataset. http://archive.ics.uci.edu/ml/datasets/zoo. Accessed June 2017
Forsyth R (1987) PC/BEAGLE user guide. Technical report, Pathway Research Ltd, Nottingham
Mangasarian OL, Wolberg WH (1990) Cancer diagnosis via linear programming. SIAM News 23(5):1–18
Acknowledgements
This work was funded by the NSFC (National Natural Science Foundation of China) Grant No. 61503273.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest regarding this work.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendices
Appendix 1: Supplements
The following is a simple example rule library containing 14 logical relations. The PLGNN in Fig. 11 memorizes and stores them through the interconnection structure of the neural network.
1. If an animal has hair, then it is a mammal.
2. If an animal produces milk, then it is a mammal.
3. If a mammal is a predator, then it is a beast.
4. If a mammal has hooves, then it is an ungulate.
5. If a mammal is a ruminant, then it is an ungulate.
6. If an animal has feathers and produces eggs, then it is a bird.
7. If an animal is airborne, then it is a bird.
8. If a beast is yellow and has spots, then it is a leopard.
9. If a beast is yellow and has black stripes, then it is a tiger.
10. If an ungulate has a long neck and long legs and is yellow with spots, then it is a giraffe.
11. If an ungulate is white with black stripes, then it is a zebra.
12. If a bird cannot fly, has a long neck and long legs, and is a mixture of black and white, then it is an ostrich.
13. If a bird cannot fly, is aquatic, and is a mixture of black and white, then it is a penguin.
14. If a bird can fly, then it is a swallow.
These relations often appear as examples in AI-related papers and books, such as Neural Networks and Learning Machines by Haykin.
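For readers who want to experiment with this rule library, the sketch below encodes the 14 rules as plain data and applies simple forward chaining. This is only an illustration of the rule content, not the PLGNN itself; the fact strings are my own paraphrases of the rules above.

```python
# The 14 example rules as (premises, conclusion) pairs.
RULES = [
    ({"has hair"}, "mammal"),
    ({"produces milk"}, "mammal"),
    ({"mammal", "predator"}, "beast"),
    ({"mammal", "hooves"}, "ungulate"),
    ({"mammal", "ruminant"}, "ungulate"),
    ({"feathers", "eggs"}, "bird"),
    ({"airborne"}, "bird"),
    ({"beast", "yellow", "spots"}, "leopard"),
    ({"beast", "yellow", "black stripes"}, "tiger"),
    ({"ungulate", "long neck", "long legs", "yellow", "spots"}, "giraffe"),
    ({"ungulate", "white", "black stripes"}, "zebra"),
    ({"bird", "not airborne", "long neck", "long legs", "black and white"}, "ostrich"),
    ({"bird", "not airborne", "aquatic", "black and white"}, "penguin"),
    ({"bird", "airborne"}, "swallow"),
]

def infer(facts):
    """Repeatedly fire any rule whose premises are all satisfied."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print("tiger" in infer({"has hair", "predator", "yellow", "black stripes"}))
# → True (has hair → mammal, + predator → beast, + yellow/stripes → tiger)
```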
Appendix 2
The algorithms for constructing and adjusting the neural network structure are shown in Fig. 12.
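Since Fig. 12 is not reproduced here, the following hypothetical sketch (my own names and data structures, not the paper's algorithms) conveys the kind of dynamic behavior described: adding a rule creates any missing neurons and one link, and removing a rule prunes neurons that are no longer referenced, so the structure tracks the rule library.

```python
# Hypothetical illustration of dynamic construction and adjustment:
# the network grows and shrinks as rules are added and removed.
class DynamicNet:
    def __init__(self):
        self.neurons = set()
        self.links = set()  # each link: (frozenset(premises), conclusion)

    def add_rule(self, premises, conclusion):
        # Construction: create missing "thing" neurons and one link.
        self.neurons |= set(premises) | {conclusion}
        self.links.add((frozenset(premises), conclusion))

    def remove_rule(self, premises, conclusion):
        # Adjustment: drop the link, then prune unreferenced neurons.
        self.links.discard((frozenset(premises), conclusion))
        referenced = set()
        for p, c in self.links:
            referenced |= set(p) | {c}
        self.neurons &= referenced

net = DynamicNet()
net.add_rule({"has hair"}, "mammal")
net.add_rule({"mammal", "predator"}, "beast")
net.remove_rule({"mammal", "predator"}, "beast")
print(sorted(net.neurons))  # → ['has hair', 'mammal']
```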
Cite this article
Wang, G. A neural network structure specified for representing and storing logical relations. Neural Comput & Applic 32, 14975–14993 (2020). https://doi.org/10.1007/s00521-020-04852-4