Designing a Neural Network Primitive for Conditional Structural Transformations

  • Conference paper
Artificial Intelligence (RCAI 2020)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 12412)


Abstract

Among the problems of neural network design, the challenge of explicitly representing conditional structural manipulations at the sub-symbolic level plays a critical role. In response to that challenge, the article proposes a computationally adequate method for designing a neural network capable of performing an important group of symbolic operations at the sub-symbolic level without initial learning: extraction of elements of a given structure, conditional branching, and construction of a new structure. The neural network primitive performs inference on distributed representations of symbolic structures and serves as a proof of concept for the viability of implementing symbolic rules in a neural pipeline for tasks such as language analysis or the aggregation of linguistic assessments during decision making. The proposed method was practically implemented and evaluated within the Keras framework. The resulting network was tested on a particular case of active-to-passive sentence transformation, with sentences represented as parsed grammatical structures.
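The operations listed in the abstract rest on tensor product variable binding (Smolensky [27]): a symbolic structure is encoded as a sum of outer products of filler and role vectors, extraction of an element reduces to multiplying the encoding by the corresponding (orthonormal) role vector, and construction of a new structure is again a sum of outer products. A minimal NumPy sketch of this bind/unbind/re-bind cycle follows; the particular filler and role vectors are illustrative assumptions, not the paper's actual bases:

```python
import numpy as np

# Illustrative filler vectors for two symbols, e.g. "agent" and "patient".
fillers = {"agent": np.array([1.0, 0.0]), "patient": np.array([0.0, 1.0])}

# Orthonormal role vectors for two positions of a simple structure.
roles = {"left": np.array([1.0, 0.0, 0.0]), "right": np.array([0.0, 1.0, 0.0])}

# Binding: the structure is the sum of filler (x) role outer products.
T = (np.outer(fillers["agent"], roles["left"])
     + np.outer(fillers["patient"], roles["right"]))

# Extraction (unbinding): for orthonormal roles, multiply by the role vector.
left_filler = T @ roles["left"]
assert np.allclose(left_filler, fillers["agent"])

# Construction of a new structure by re-binding extracted fillers to new
# roles, e.g. swapping positions as in an active-to-passive transformation.
T_swapped = (np.outer(T @ roles["right"], roles["left"])
             + np.outer(T @ roles["left"], roles["right"]))
assert np.allclose(T_swapped @ roles["left"], fillers["patient"])
```

Because binding, unbinding, and re-binding are all linear maps, each step can be realized as a layer with fixed, hand-set weights, which is consistent with the paper's claim that the primitive operates without initial learning.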


Notes

  1. From now on, the network description uses terminology accepted in the Keras [7] and TensorFlow [1] software frameworks.

  2. https://github.com/demid5111/ldss-tensor-structures.

References

  1. Abadi, M., et al.: TensorFlow: large-scale machine learning on heterogeneous systems (2015). http://tensorflow.org/. Software available from tensorflow.org

  2. Besold, T.R., et al.: Neural-symbolic learning and reasoning: a survey and interpretation. arXiv preprint arXiv:1711.03902 (2017)

  3. Besold, T.R., Kühnberger, K.U.: Towards integrated neural-symbolic systems for human-level AI: two research programs helping to bridge the gaps. Biol. Inspired Cogn. Archit. 14, 97–110 (2015)

  4. Browne, A., Sun, R.: Connectionist inference models. Neural Netw. 14(10), 1331–1355 (2001)

  5. Cheng, P., Zhou, B., Chen, Z., Tan, J.: The TOPSIS method for decision making with 2-tuple linguistic intuitionistic fuzzy sets. In: 2017 IEEE 2nd Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), pp. 1603–1607. IEEE (2017)

  6. Cho, P.W., Goldrick, M., Smolensky, P.: Incremental parsing in a continuous dynamical system: sentence processing in gradient symbolic computation. Linguistics Vanguard 3(1), 1–10 (2017)

  7. Chollet, F., et al.: Keras (2015). https://keras.io

  8. Demidovskij, A.: Implementation aspects of tensor product variable binding in connectionist systems. In: Bi, Y., Bhatia, R., Kapoor, S. (eds.) IntelliSys 2019. AISC, vol. 1037, pp. 97–110. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-29516-5_9

  9. Demidovskij, A.: Automatic construction of tensor product variable binding neural networks for neural-symbolic intelligent systems. In: Proceedings of the 2nd International Conference on Electrical, Communication and Computer Engineering. IEEE (2020, in press)

  10. Demidovskij, A., Babkin, E.: Developing a distributed linguistic decision making system. Business Informatics 13(1) (2019)

  11. Demidovskij, A., Babkin, E.: Towards designing linguistic assessments aggregation as a distributed neuroalgorithm. In: 2020 XXII International Conference on Soft Computing and Measurements (SCM). IEEE (2020, in press)

  12. Demidovskij, A.V.: Towards automatic manipulation of arbitrary structures in connectivist paradigm with tensor product variable binding. In: Kryzhanovsky, B., Dunin-Barkowski, W., Redko, V., Tiumentsev, Y. (eds.) NEUROINFORMATICS 2019. SCI, vol. 856, pp. 375–383. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-30425-6_44

  13. Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)

  14. Fodor, J.A., Pylyshyn, Z.W.: Connectionism and cognitive architecture: a critical analysis. Cognition 28(1–2), 3–71 (1988)

  15. Gallant, S.I., Okaywe, T.W.: Representing objects, relations, and sequences. Neural Comput. 25(8), 2038–2078 (2013)

  16. Golmohammadi, D.: Neural network application for fuzzy multi-criteria decision making problems. Int. J. Prod. Econ. 131(2), 490–504 (2011)

  17. Huang, Q., Smolensky, P., He, X., Deng, L., Wu, D.: Tensor product generation networks for deep NLP modeling. In: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, vol. 1 (Long Papers), pp. 1263–1273. Association for Computational Linguistics, New Orleans (2018). https://doi.org/10.18653/v1/N18-1114

  18. Legendre, G., Miyata, Y., Smolensky, P.: Distributed recursive structure processing. In: Advances in Neural Information Processing Systems, pp. 591–597 (1991)

  19. Leitgeb, H.: Interpreted dynamical systems and qualitative laws: from neural networks to evolutionary systems. Synthese 146(1–2), 189–202 (2005)

  20. McCoy, R.T., Linzen, T., Dunbar, E., Smolensky, P.: RNNs implicitly implement tensor product representations. arXiv preprint arXiv:1812.08718 (2018)

  21. Palangi, H., Smolensky, P., He, X., Deng, L.: Question-answering with grammatically-interpretable representations. In: Thirty-Second AAAI Conference on Artificial Intelligence (2018)

  22. de Penning, H.L.H., Garcez, A.S.d., Lamb, L.C., Meyer, J.J.C.: A neural-symbolic cognitive agent for online learning and reasoning. In: Twenty-Second International Joint Conference on Artificial Intelligence (2011)

  23. Pinkas, G.: Reasoning, nonmonotonicity and learning in connectionist networks that capture propositional knowledge. Artif. Intell. 77(2), 203–247 (1995)

  24. Pinkas, G., Lima, P., Cohen, S.: A dynamic binding mechanism for retrieving and unifying complex predicate-logic knowledge. In: Villa, A.E.P., Duch, W., Érdi, P., Masulli, F., Palm, G. (eds.) ICANN 2012. LNCS, vol. 7552, pp. 482–490. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-33269-2_61

  25. Pinkas, G., Lima, P., Cohen, S.: Representing, binding, retrieving and unifying relational knowledge using pools of neural binders. Biol. Inspired Cogn. Archit. 6, 87–95 (2013)

  26. Serafini, L., Garcez, A.d.: Logic tensor networks: deep learning and logical reasoning from data and knowledge. arXiv preprint arXiv:1606.04422 (2016)

  27. Smolensky, P.: Tensor product variable binding and the representation of symbolic structures in connectionist systems. Artif. Intell. 46(1–2), 159–216 (1990)

  28. Smolensky, P., Goldrick, M., Mathis, D.: Optimization and quantization in gradient symbol systems: a framework for integrating the continuous and the discrete in cognition. Cogn. Sci. 38(6), 1102–1138 (2014)

  29. Smolensky, P., Legendre, G.: The Harmonic Mind: From Neural Computation to Optimality-Theoretic Grammar (Cognitive Architecture), vol. 1. MIT Press, Cambridge (2006)

  30. Soulos, P., McCoy, T., Linzen, T., Smolensky, P.: Discovering the compositional structure of vector representations with role learning networks. arXiv preprint arXiv:1910.09113 (2019)

  31. Teso, S., Sebastiani, R., Passerini, A.: Structured learning modulo theories. Artif. Intell. 244, 166–187 (2017)

  32. Wei, C., Liao, H.: A multigranularity linguistic group decision-making method based on hesitant 2-tuple sets. Int. J. Intell. Syst. 31(6), 612–634 (2016)

  33. Yousefpour, A., et al.: Failout: achieving failure-resilient inference in distributed neural networks. arXiv preprint arXiv:2002.07386 (2020)


Acknowledgements

The authors sincerely appreciate all valuable comments and suggestions given by the reviewers. The reported study was funded by RFBR, project number 19-37-90058.

Author information

Correspondence to Eduard Babkin.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Demidovskij, A., Babkin, E. (2020). Designing a Neural Network Primitive for Conditional Structural Transformations. In: Kuznetsov, S.O., Panov, A.I., Yakovlev, K.S. (eds) Artificial Intelligence. RCAI 2020. Lecture Notes in Computer Science(), vol 12412. Springer, Cham. https://doi.org/10.1007/978-3-030-59535-7_9

  • DOI: https://doi.org/10.1007/978-3-030-59535-7_9
  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-59534-0

  • Online ISBN: 978-3-030-59535-7

  • eBook Packages: Computer Science; Computer Science (R0)
