Abstract
Recently reported results with distributed-vector word representations in natural language processing make them appealing for incorporation into a general cognitive architecture like Sigma. This paper describes a new algorithm for learning such word representations from large, shallow information resources, and how this algorithm can be implemented via small modifications to Sigma. The effectiveness and speed of the algorithm are evaluated by comparing an external simulation of it against state-of-the-art algorithms. The results from more limited experiments with Sigma are also promising, but more work is required for it to reach the effectiveness and speed of the simulation.
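The abstract's reference point for "state-of-the-art algorithms" is skip-gram training with negative sampling in the style of Mikolov et al. (2013), one of the baselines such distributed-vector approaches are typically compared against. The sketch below is not the paper's Sigma-based algorithm; it is a minimal, self-contained illustration of that baseline idea, with all names (`train_skipgram`, the toy corpus, the hyperparameters) chosen here for illustration only.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_skipgram(corpus, dim=16, window=2, neg=3, lr=0.05, epochs=50, seed=0):
    """Learn distributed word vectors via skip-gram with negative sampling.

    For each (target, context) pair within `window`, push the target vector
    toward its true context vector and away from `neg` randomly drawn words.
    """
    rng = np.random.default_rng(seed)
    vocab = sorted(set(corpus))
    idx = {w: i for i, w in enumerate(vocab)}
    V = len(vocab)
    W_in = (rng.random((V, dim)) - 0.5) / dim   # target-word ("input") vectors
    W_out = np.zeros((V, dim))                  # context-word ("output") vectors

    for _ in range(epochs):
        for pos, word in enumerate(corpus):
            t = idx[word]
            lo, hi = max(0, pos - window), min(len(corpus), pos + window + 1)
            for cpos in range(lo, hi):
                if cpos == pos:
                    continue
                # one positive context word plus `neg` random negatives
                samples = [(idx[corpus[cpos]], 1.0)]
                samples += [(int(rng.integers(V)), 0.0) for _ in range(neg)]
                grad_in = np.zeros(dim)  # accumulate input-vector gradient
                for s, label in samples:
                    score = sigmoid(W_in[t] @ W_out[s])
                    g = lr * (label - score)
                    grad_in += g * W_out[s]
                    W_out[s] += g * W_in[t]
                W_in[t] += grad_in
    return vocab, idx, W_in

# Toy usage on a tiny corpus (purely illustrative):
corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab, idx, vectors = train_skipgram(corpus)
```

After training, each row of `vectors` is a dense embedding whose geometry reflects co-occurrence: words appearing in similar contexts (here, e.g., "cat" and "dog") tend toward similar directions, which is the property that makes such representations attractive for a cognitive architecture.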
© 2014 Springer International Publishing Switzerland
Ustun, V., Rosenbloom, P.S., Sagae, K., Demski, A. (2014). Distributed Vector Representations of Words in the Sigma Cognitive Architecture. In: Goertzel, B., Orseau, L., Snaider, J. (eds) Artificial General Intelligence. AGI 2014. Lecture Notes in Computer Science(), vol 8598. Springer, Cham. https://doi.org/10.1007/978-3-319-09274-4_19
Print ISBN: 978-3-319-09273-7
Online ISBN: 978-3-319-09274-4