
A neural network structure specified for representing and storing logical relations

  • Original Article
  • Published in: Neural Computing and Applications

Abstract

Logical representation and reasoning are important aspects of intelligence. Current artificial neural network (ANN) models excel at perceptual intelligence but perform poorly at cognitive intelligence such as logical representation, so researchers have designed novel models, known as knowledge-based neural networks, that represent and store logical relations within neural network structures. However, these models suffer from an ambiguity problem: the same neural network structure can represent multiple logical relations, so the intended relations cannot be read back out of the structures constructed from them. To allow logical relations to be stored in, and read back out of, a neural network, this paper studies a direct mapping between logical relations and neural network structures and proposes a novel model called the Probabilistic Logical Generative Neural Network (PLGNN), designed specifically for logical relation representation by redesigning the neurons and links. Neurons are used solely to represent things, while links are used solely to represent the logical relations between things, so no extra logical neurons or layers are needed. Moreover, the related construction and adjustment methods are designed so that the neural network structure is dynamically built and adjusted according to the logical relations.
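At a high level, the separation the abstract describes, neurons standing only for things and links standing only for the logical relations between them, can be illustrated with a small sketch. This is a hypothetical Python illustration, not the paper's implementation; the class and method names are assumptions, and the probabilistic link weights of the actual PLGNN are omitted:

```python
class Neuron:
    """A neuron stands for a single thing (concept); it carries no logic itself."""
    def __init__(self, concept):
        self.concept = concept


class Link:
    """A link encodes one logical relation: all source neurons jointly imply the target."""
    def __init__(self, sources, target):
        self.sources = sources
        self.target = target

    def fires(self, active):
        # The relation fires only when every premise concept is active.
        return all(s.concept in active for s in self.sources)


class Network:
    def __init__(self):
        self.neurons = {}
        self.links = []

    def neuron(self, concept):
        # Dynamic construction: a neuron is created on first use of a concept.
        if concept not in self.neurons:
            self.neurons[concept] = Neuron(concept)
        return self.neurons[concept]

    def add_relation(self, premises, conclusion):
        # Dynamic adjustment: each new relation adds exactly one link,
        # with no extra logical neurons or layers.
        link = Link([self.neuron(p) for p in premises], self.neuron(conclusion))
        self.links.append(link)

    def infer(self, facts):
        # Relations are read back out by repeatedly firing links (forward chaining).
        active = set(facts)
        changed = True
        while changed:
            changed = False
            for link in self.links:
                if link.fires(active) and link.target.concept not in active:
                    active.add(link.target.concept)
                    changed = True
        return active


net = Network()
net.add_relation(["hair"], "mammal")
net.add_relation(["mammal", "predator"], "beast")
print(sorted(net.infer({"hair", "predator"})))  # ['beast', 'hair', 'mammal', 'predator']
```

Because each relation maps to exactly one link, a given network structure corresponds to one set of relations, which is the unambiguous read-out property the abstract emphasizes.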


References

  1. Human brain project, framework partnership agreement. https://www.humanbrainproject.eu. Accessed July 2016

  2. Markram H, Meier K et al (2012) The human brain project: a report to the European Commission. Technical report

  3. Bargmann CI, Newsome WT (2014) The brain research through advancing innovative neurotechnologies (BRAIN) initiative and neurology. JAMA Neurol 71(6):675–676


  4. Poo MM, Du JL, Ip N et al (2016) China brain project: basic neuroscience, brain diseases, and brain-inspired computing. Neuron 92(3):591–596


  5. Sun Y, Liang D, Wang X, Tang X (2015) DeepID3: face recognition with very deep neural networks. arXiv:1502.00873

  6. Parkhi OM, Vedaldi A, Zisserman A (2015) Deep face recognition. In: British machine vision conference, pp 41.1–41.12

  7. He X, Wang G, Zhang XP et al (2016) Leaf classification utilizing a convolutional neural network with a structure of single connected layer. In: 12th international conference on intelligent computation, Lanzhou, China, pp 332–340

  8. Mohamed AR, Dahl GE, Hinton GE (2012) Acoustic modeling using deep belief networks. IEEE Trans Audio Speech Lang Process 20(1):14–22


  9. Gehring J, Lee W, Kilgour K et al (2013) Modular combination of deep neural networks for acoustic modeling. In: 14th annual conference of the international speech communication association, Lyon, France, pp 94–98

  10. Chollet F (2017) Deep learning with python. Manning Publications, New York


  11. How to teach artificial intelligence some common sense. https://www.wired.com/story/how-to-teach-artificial-intelligence-common-sense/. Accessed Mar 2019

  12. LeCun Y (2015) What’s wrong with deep learning. CVPR, keynote

  13. Garcez A, Raedt L, Lamb L et al (2015) Neural-symbolic learning and reasoning: contributions and challenges. In: AAAI, CA

  14. Garnelo M, Arulkumaran K, Shanahan M (2016) Towards deep symbolic reinforcement learning. arXiv:1609.05518

  15. Cybenko G (1989) Approximations by superpositions of sigmoidal functions. Math Control Signals Syst 2(4):303–314


  16. Hassoun M (1995) Fundamentals of artificial neural networks. MIT Press, Cambridge


  17. Irving G, Szegedy C et al (2016) DeepMath: deep sequence models for premise selection. In: NIPS, pp 2235–2243

  18. Cai C, Ke D, Xu Y, Su K (2017) Symbolic manipulation based on deep neural networks and its application to axiom discovery. In: IJCNN

  19. Luger GF (2008) Artificial intelligence: structures and strategies, 6th edn. Pearson Education, London


  20. Negnevitsky M (2011) Artificial intelligence: a guide to intelligent systems, 3rd edn. Pearson Education, London


  21. Besold TR, Kuhnberger KU (2015) Towards integrated neural–symbolic systems for human-level AI: two research programs helping to bridge the gaps. Biol Inspired Cogn Archit 14:97–110


  22. Besold TR (2015) Same same, but different? Exploring differences in complexity between logics and neural networks. In: NeSy’15, Neural-Symbolic.org

  23. de Penning L, d’Avila Garcez AS, Lamb LC, Ch Meyer JJ (2011) A neural-symbolic cognitive agent for online learning and reasoning. In: IJCAI, pp 1653–1658

  24. Towell GG, Shavlik JW (1994) Knowledge-based artificial neural networks. Artif Intell 70(1):119–165


  25. Garcez A, Lamb L, Gabbay D (2008) Neural-symbolic cognitive reasoning, perspectives in neural computing. In: Cognitive technologies. Springer

  26. Valiant LG (2006) Knowledge infusion. In: Proceedings of the 21st national conference on artificial intelligence, Boston, USA, pp 1546–1551

  27. Bowman SR, Potts C, Manning C D (2014) Recursive neural networks can learn logical semantics. Technical report, arXiv:1406.1827

  28. Mandziuk J, Macukow B (1993) A neural network performing Boolean logic operations. Opt Mem Neural Netw 2(1):17–35


  29. Gallant SI (1993) Neural network learning and expert systems. MIT Press, Boston


  30. Wang G (2017) Automatical knowledge representation of logical relations by dynamical neural network. J Intell Syst 26(4):625–639


  31. Hebb D (1949) The organization of behavior. Wiley, New York


  32. Haykin S (2008) Neural networks and learning machines, 3rd edn. Prentice Hall, Upper Saddle River


  33. UCI-datasets. http://archive.ics.uci.edu/ml/datasets/zoo. Accessed June 2017

  34. Forsyth R (1987) PC/BEAGLE user guide. Technical report, Pathway Research Ltd, Nottingham

  35. Mangasarian OL, Wolberg WH (1990) Cancer diagnosis via linear programming. SIAM News 23(5):1–18



Acknowledgements

This work was funded by the NSFC (National Natural Science Foundation of China) Grant No. 61503273.

Author information


Corresponding author

Correspondence to Gang Wang.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest regarding this work.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix 1: Supplements

The following is a simple example rule library containing 14 logical relations. The PLGNN in Fig. 11 memorizes and stores them through the interconnection structure of the neural network.

  1. If an animal has hair, then it is a mammal.

  2. If an animal produces milk, then it is a mammal.

  3. If a mammal is a predator, then it is a beast.

  4. If a mammal has hooves, then it is an ungulate.

  5. If a mammal is a ruminant, then it is an ungulate.

  6. If an animal has feathers and lays eggs, then it is a bird.

  7. If an animal is airborne, then it is a bird.

  8. If a beast is yellow with spots, then it is a leopard.

  9. If a beast is yellow with black stripes, then it is a tiger.

  10. If an ungulate has a long neck and long legs and is yellow with spots, then it is a giraffe.

  11. If an ungulate is white with black stripes, then it is a zebra.

  12. If a bird is not airborne, has a long neck and long legs, and is black and white, then it is an ostrich.

  13. If a bird is not airborne, is aquatic, and is black and white, then it is a penguin.

  14. If a bird is airborne, then it is a swallow.

These relations often appear as an example in AI-related papers and books, such as Neural Networks and Learning Machines by Haykin [32].
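As an illustration, the rule library above can be encoded as premise–conclusion pairs and the stored relations read out by forward chaining. This is a hypothetical Python sketch: the attribute tokens and the flat rule encoding are assumptions for readability, not the PLGNN's interconnection structure itself.

```python
# Each rule: (set of premise attributes, concluded concept).
RULES = [
    ({"hair"}, "mammal"),
    ({"milk"}, "mammal"),
    ({"mammal", "predator"}, "beast"),
    ({"mammal", "hoof"}, "ungulate"),
    ({"mammal", "ruminant"}, "ungulate"),
    ({"feather", "egg"}, "bird"),
    ({"airborne"}, "bird"),
    ({"beast", "yellow", "spots"}, "leopard"),
    ({"beast", "yellow", "black stripes"}, "tiger"),
    ({"ungulate", "long neck", "long legs", "yellow", "spots"}, "giraffe"),
    ({"ungulate", "white", "black stripes"}, "zebra"),
    ({"bird", "not airborne", "long neck", "long legs", "black and white"}, "ostrich"),
    ({"bird", "not airborne", "aquatic", "black and white"}, "penguin"),
    ({"bird", "airborne"}, "swallow"),
]


def classify(facts):
    """Forward-chain over the rule library until no rule adds a new fact."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts


# An animal with hair and hooves, a long neck, long legs, yellow with spots:
print("giraffe" in classify({"hair", "hoof", "long neck", "long legs",
                             "yellow", "spots"}))  # True
```

Note how intermediate concepts (mammal, ungulate) are derived first and then serve as premises of later rules, which is exactly the chaining behavior the rule library relies on.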

Appendix 2

The algorithms for constructing and adjusting the neural network structure are shown in Fig. 12.

Fig. 12 Algorithms of construction and adjustment of the neural network structure


About this article


Cite this article

Wang, G. A neural network structure specified for representing and storing logical relations. Neural Comput & Applic 32, 14975–14993 (2020). https://doi.org/10.1007/s00521-020-04852-4

