
A Sentence Similarity Model Based on Word Embeddings and Dependency Syntax-Tree

  • Wenfeng Liu
  • Peiyu Liu
  • Jing Yi
  • Yuzhen Yang
  • Weitong Liu
  • Nana Li
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11303)

Abstract

Effectively measuring the similarity between two sentences is a challenging task in natural language processing. In this paper, we propose a sentence similarity method that combines word embeddings with syntactic structure. First, we generate a dependency syntax tree for each sentence, analyze the two sentences jointly, and divide them into blocks according to their syntactic components. Second, we prune the syntactic tree, removing stop words and lemmatizing the remaining words. Then, further normalization operations are applied, such as passive-voice flipping and negation flipping. Finally, the similarity of a sentence pair is computed as a weighted combination of the block embeddings derived from the syntactic tree. Experiments demonstrate the effectiveness of the method.
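As a concrete illustration of the pipeline described above (a minimal sketch, not the authors' implementation), the following Python code assumes spaCy with its en_core_web_md model, which ships word vectors. It chunks each sentence into blocks around the root's direct dependents, prunes stop words and punctuation, lemmatizes, embeds each block as the mean of its word vectors, and scores a pair by greedily aligning blocks with uniform weights. The chunking rule, the alignment, and the weights are illustrative assumptions, and the passive- and negation-flipping steps are omitted.

```python
# Minimal sketch of the block-embedding similarity pipeline.
# Assumptions (not from the paper): spaCy with en_core_web_md,
# blocks = root + subtrees of the root's children, uniform weights.
import numpy as np
import spacy

nlp = spacy.load("en_core_web_md")

def block_embeddings(sentence):
    """Parse, split into syntactic blocks, prune stop words,
    lemmatize, and embed each block as a mean word vector."""
    doc = nlp(sentence)
    root = next(tok for tok in doc if tok.dep_ == "ROOT")
    blocks = [[root]] + [list(child.subtree) for child in root.children]
    vectors = []
    for block in blocks:
        kept = [tok for tok in block if not tok.is_stop and not tok.is_punct]
        if not kept:
            continue
        # Use the lemma's vector; fall back to the surface form
        # when the lemma is out of vocabulary.
        vecs = [nlp.vocab[tok.lemma_].vector
                if nlp.vocab[tok.lemma_].has_vector else tok.vector
                for tok in kept]
        vectors.append(np.mean(vecs, axis=0))
    return vectors

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

def sentence_similarity(s1, s2):
    """Greedy alignment: match each block of s1 with its most
    similar block of s2 and average the match scores."""
    b1, b2 = block_embeddings(s1), block_embeddings(s2)
    if not b1 or not b2:
        return 0.0
    return float(np.mean([max(cosine(u, v) for v in b2) for u in b1]))

print(sentence_similarity("The cat chased the mouse.",
                          "A mouse was chased by the cat."))
```

In this sketch the blocks are weighted uniformly; the weighting by syntactic component that the abstract describes would replace the plain mean in sentence_similarity.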

Keywords

Word embeddings · Dependency syntax tree · Sentence similarity · Syntactic structure

Acknowledgments

This work was supported by the National Natural Science Foundation of China (61373148, 61502151), the Shandong Social Science Planning Project (17CHLJ18, 17CHLJ33, 17CHLJ30), the Natural Science Foundation of Shandong Province (ZR2014FL010), and the Shandong Province Department of Education (J15LN34).

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Wenfeng Liu (1, 2)
  • Peiyu Liu (1, 3)
  • Jing Yi (1, 4)
  • Yuzhen Yang (2)
  • Weitong Liu (1, 3)
  • Nana Li (1, 3)

  1. School of Information Science and Engineering, Shandong Normal University, Jinan, China
  2. School of Computer, Heze University, Heze, China
  3. Shandong Provincial Key Laboratory for Distributed Computer Software Novel Technology, Jinan, China
  4. School of Computer Science and Technology, Shandong Jianzhu University, Jinan, China
