Abstract
Among the problems of neural network design, the challenge of explicitly representing conditional structural manipulations at the sub-symbolic level plays a critical role. In response to that challenge, this article proposes a computationally adequate method for designing a neural network that performs an important group of symbolic operations at the sub-symbolic level without initial learning: extraction of elements from a given structure, conditional branching, and construction of a new structure. The neural network primitive operates on distributed representations of symbolic structures and serves as a proof of concept that symbolic rules can be implemented in a neural pipeline for tasks such as language analysis or aggregation of linguistic assessments during decision making. The proposed method was implemented and evaluated within the Keras framework. The resulting network was tested on the particular case of transforming active sentences into passive ones, represented as parsed grammatical structures.
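The distributed representations the primitive operates on follow the tensor product variable binding scheme (Smolensky, 1990). A minimal sketch of binding and extraction, assuming orthonormal role vectors and illustrative filler values (the role/filler names here are hypothetical, not the paper's actual encoding):

```python
import numpy as np

# Orthonormal role vectors: with these, unbinding reduces to an exact contraction.
roles = np.eye(3)  # r0, r1, r2

# Distributed filler vectors (illustrative values).
fillers = {
    "agent":   np.array([1.0, 0.0]),
    "verb":    np.array([0.0, 1.0]),
    "patient": np.array([1.0, 1.0]),
}

# Binding: the structure is the sum of outer products filler (x) role.
structure = sum(np.outer(f, r) for f, r in zip(fillers.values(), roles))

# Extraction (unbinding): contracting the tensor with a role vector
# recovers the filler bound to that role.
recovered_verb = structure @ roles[1]
print(recovered_verb)  # [0. 1.]
```

Because binding and unbinding are linear maps, both can be realized as fixed (non-learned) weight matrices in a neural layer, which is what allows such symbolic operations to run without initial training.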
Acknowledgements
The authors sincerely appreciate the valuable comments and suggestions given by the reviewers. The reported study was funded by RFBR, project number 19-37-90058.
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Demidovskij, A., Babkin, E. (2020). Designing a Neural Network Primitive for Conditional Structural Transformations. In: Kuznetsov, S.O., Panov, A.I., Yakovlev, K.S. (eds) Artificial Intelligence. RCAI 2020. Lecture Notes in Computer Science(), vol 12412. Springer, Cham. https://doi.org/10.1007/978-3-030-59535-7_9
Print ISBN: 978-3-030-59534-0
Online ISBN: 978-3-030-59535-7