Improve the translational distance models for knowledge graph embedding

Abstract

Knowledge graph embedding techniques can be roughly divided into two mainstream categories: translational distance models and semantic matching models. Though intuitive, translational distance models fail to capture the circle structures and hierarchical structures in knowledge graphs. In this paper, we propose a general learning framework named TransX-pa, which subsumes various models (TransE, TransR, TransH and TransD). From this unified viewpoint, we identify two learning bottlenecks: (i) the common assumption that the inverse of a relation r is modelled as its opposite −r; and (ii) the failure to capture the rich interactions between entities and relations. Correspondingly, we introduce position-aware embeddings and self-attention blocks, and show that they can be adapted to various translational distance models. Experiments are conducted on datasets extracted from the real-world knowledge graphs Freebase and WordNet, on both triplet classification and link prediction. The results show that our approach yields substantial improvements, performing better than, or comparably to, state-of-the-art methods.
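To make bottleneck (i) concrete, the following is a minimal sketch (not the paper's TransX-pa code; the toy embeddings and the 3-dimensional setup are illustrative) of the TransE scoring function f(h, r, t) = −‖h + r − t‖ that the framework generalises. Whenever h + r ≈ t holds, t + (−r) ≈ h holds as well, so TransE implicitly forces the inverse of a relation r to be its opposite −r:

```python
import math

def transe_score(h, r, t):
    """TransE plausibility score: negative L2 distance ||h + r - t||.

    Higher (closer to zero) means the triple (h, r, t) is more plausible.
    """
    return -math.sqrt(sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)))

def neg(v):
    """Element-wise negation, used to model the inverse relation as -r."""
    return [-x for x in v]

# Toy 3-dimensional embeddings chosen so that h + r = t holds exactly.
h = [1.0, 0.0, 0.0]
r = [0.0, 1.0, 0.0]
t = [hi + ri for hi, ri in zip(h, r)]

print(transe_score(h, r, t))       # distance 0: (h, r, t) is a perfect triple
print(transe_score(t, neg(r), h))  # distance 0 as well: the inverse of r is forced onto -r
```

The second score being identical to the first is exactly the symmetry that position-aware embeddings are designed to break: the representation of r is made to depend on its position relative to the head and tail entities, rather than being a single vector shared (up to sign) with its inverse.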



Notes

  1. https://github.com/quark0/ANALOGY

  2. https://github.com/ttrouill/complex

  3. https://github.com/TimDettmers/ConvE

  4. https://github.com/thunlp/OpenKE

  5. https://www.cs.princeton.edu/~danqic/data/nips13-dataset.tar.bz2


Acknowledgments

The authors gratefully acknowledge financial support from the National Key Research and Development Program of China (2016QY03D0500) and the National Natural Science Foundation of China (U1636220, 61876183, 61976212).

Author information


Corresponding author

Correspondence to Wensheng Zhang.



About this article


Cite this article

Zhang, S., Sun, Z. & Zhang, W. Improve the translational distance models for knowledge graph embedding. J Intell Inf Syst 55, 445–467 (2020). https://doi.org/10.1007/s10844-019-00592-7


Keywords

  • Knowledge graph embedding
  • Translational distance model
  • Positional encoding
  • Self-attention