
A Geometric Connectionist Machine for Word-Senses

  • Tiansi Dong
Chapter
Part of the Studies in Computational Intelligence book series (SCI, volume 910)

Abstract

In this chapter, we follow the design principles to precisely spatialize tree-structured hypernym relations among word-senses onto word embeddings. Our goal is to promote each word embedding into a ball in a higher-dimensional space (an \(\mathscr {N}\)-Ball) such that the configuration of these balls precisely captures the tree-structured hypernym relations. Each \(\mathscr {N}\)-Ball represents a word-sense. One \(\mathscr {N}\)-Ball is contained in another \(\mathscr {N}\)-Ball if and only if the word-sense represented by the first is a hyponym of the word-sense represented by the second.
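The containment criterion stated in the abstract can be sketched in a few lines. This is a minimal, hypothetical illustration (the class and function names are ours, not the chapter's): a ball is a center vector plus a radius, and ball A lies inside ball B exactly when the distance between their centers plus A's radius does not exceed B's radius.

```python
import numpy as np

class NBall:
    """A word-sense modeled as a ball: a center vector and a radius.
    Illustrative only; not the chapter's actual data structure."""
    def __init__(self, center, radius):
        self.center = np.asarray(center, dtype=float)
        self.radius = float(radius)

def contains(outer, inner):
    """True iff `inner` lies entirely inside `outer`, i.e. the word-sense
    of `inner` would be a hyponym of the word-sense of `outer`."""
    return np.linalg.norm(outer.center - inner.center) + inner.radius <= outer.radius

# Toy example: a 'dog' ball nested inside an 'animal' ball.
animal = NBall([0.0, 0.0], 5.0)
dog = NBall([1.0, 1.0], 2.0)
print(contains(animal, dog))  # True: sqrt(2) + 2 <= 5
print(contains(dog, animal))  # False: containment is asymmetric
```

Note that the actual construction in the chapter must also place the balls so that sibling word-senses do not overlap; the sketch above only checks a given configuration.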


Copyright information

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021

Authors and Affiliations

  1. ML2R Competence Center for Machine Learning Rhine-Ruhr, MLAI Lab, AI Foundations Group, Bonn-Aachen International Center for Information Technology (b-it), University of Bonn, Bonn, Germany
