Representation of Relations by Planes in Neural Network Language Model

  • Takuma Ebisu
  • Ryutaro Ichise
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9947)


Whole brain architecture (WBA), which uses neural networks to imitate the human brain, is attracting increased attention as a promising way to achieve artificial general intelligence, and distributed vector representations of words are becoming recognized as the best way to connect neural networks and knowledge. Distributed representations of words have played a wide range of roles in natural language processing, and they have become increasingly important because of their ability to capture a large amount of syntactic and lexical meaning and many relationships. Relation vectors are commonly used to represent relations between words, but this approach has some problems: some relations cannot easily be captured by a single vector, for example, sibling relations, parent-child relations, and many-to-one relations. To deal with these problems, we propose a novel way of representing relations: we represent relations by planes instead of by vectors, which improves the accuracy of relation prediction by more than 10%.
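The abstract does not give implementation details, but the core idea can be illustrated under one plausible reading: instead of requiring all tail−head difference vectors of a relation to cluster around a single relation vector, fit a hyperplane to those difference vectors, so that relations with spread-out differences (e.g., sibling or many-to-one relations) are still captured. The following sketch is illustrative only and is not the authors' exact method; the function names and the SVD-based plane fit are assumptions.

```python
import numpy as np

def fit_relation_plane(pairs):
    """Fit a hyperplane to the tail - head difference vectors of word pairs.

    pairs: array of shape (m, 2, d) holding (head, tail) word vectors.
    Returns (normal, offset) defining the plane {x : normal . x = offset}.
    """
    diffs = pairs[:, 1] - pairs[:, 0]        # tail - head differences, shape (m, d)
    mean = diffs.mean(axis=0)
    # The plane's normal is the direction of least variance in the
    # differences: the last right-singular vector of the centered data.
    _, _, vt = np.linalg.svd(diffs - mean)
    normal = vt[-1]
    offset = normal @ mean
    return normal, offset

def plane_distance(normal, offset, head, tail):
    """Distance from a candidate pair's difference vector to the relation plane.

    Small distances suggest the pair belongs to the relation.
    """
    return abs(normal @ (tail - head) - offset)
```

With this scoring, a pair is predicted to hold the relation when its difference vector lies close to the fitted plane, whereas the classic relation-vector approach would instead compare the difference to one fixed vector.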





This paper is based on results obtained from a project commissioned by the New Energy and Industrial Technology Development Organization (NEDO).



Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  1. SOKENDAI (The Graduate University for Advanced Studies), Tokyo, Japan
  2. National Institute of Informatics, Tokyo, Japan
  3. National Institute of Advanced Industrial Science and Technology, Tokyo, Japan
