
Learning to Attentively Represent Distinctive Information for Semantic Text Matching

  • Conference paper
  • In: Natural Language Processing and Chinese Computing (NLPCC 2023)
  • Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14302)


Abstract

Pre-trained language models (PLMs) such as BERT have achieved remarkable results on the task of Semantic Text Matching (STM). Nevertheless, existing models struggle to discern subtle distinctions between texts, even though such distinctions are a vital clue for STM. Concretely, altering a single word can significantly change the semantics of an entire text. To address this problem, we propose a novel method of attentively representing distinctive information for STM. It comprises two components: a Reversed Attention Mechanism (RAM) and Sample-based Adaptive Learning (SAL). RAM reverses the hidden states of texts before computing attention, which helps highlight the mutually different syntactic constituents when comparing texts. In addition, the model may acquire biases during the initial stage of training; for example, because a majority of positive examples exhibit high lexical overlap, it may ignore the distinctions between sentences and simply classify any pair with high lexical overlap as positive. SAL is designed to help the model comprehensively acquire the semantic knowledge hidden in the distinctive constituents. Experiments on six STM datasets demonstrate the effectiveness of our proposed approach. Furthermore, we employ ChatGPT to generate textual descriptions of the distinctions between texts and empirically validate the significance of distinctive information in the semantic text matching task.
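
The abstract describes RAM and SAL only at a high level. As a rough illustration of the reversed-attention idea, the following is a minimal PyTorch sketch, not the authors' implementation: the choice of negation as the "reverse" operation, the class name ReversedCrossAttention, and all tensor shapes are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ReversedCrossAttention(nn.Module):
    """Illustrative sketch of reversed cross-attention between two texts.

    The other text's key states are "reversed" (here: negated, an assumed
    interpretation of RAM) before attention is computed, so that tokens
    which differ between the two texts receive relatively more weight.
    """

    def __init__(self, hidden_size: int):
        super().__init__()
        self.query = nn.Linear(hidden_size, hidden_size)
        self.key = nn.Linear(hidden_size, hidden_size)
        self.value = nn.Linear(hidden_size, hidden_size)

    def forward(self, h_a: torch.Tensor, h_b: torch.Tensor) -> torch.Tensor:
        # h_a, h_b: [batch, seq_len, hidden] contextual states from a shared PLM encoder
        q = self.query(h_a)
        k = self.key(-h_b)                    # "reverse" the other text's states (assumption)
        v = self.value(h_b)
        scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
        attn = F.softmax(scores, dim=-1)      # attention over B's tokens for each token of A
        return attn @ v                       # distinction-aware representation of A w.r.t. B


# Usage sketch with random tensors standing in for BERT hidden states.
h_a, h_b = torch.randn(2, 16, 768), torch.randn(2, 16, 768)
out = ReversedCrossAttention(768)(h_a, h_b)   # -> [2, 16, 768]
```

In the full model, such a module would sit on top of a BERT-style encoder and be combined with the standard matching representation; the paper should be consulted for the exact formulation of RAM and for how SAL adaptively reweights training samples.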

Notes

  1. https://chat.openai.com/.
  2. https://huggingface.co/.

Acknowledgements

The research is supported by the National Key R&D Program of China (2020YFB1313601) and the National Science Foundation of China (62076174, 61836007).

Author information

Corresponding author

Correspondence to Yu Hong.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Wang, J., Peng, R., Hong, Y. (2023). Learning to Attentively Represent Distinctive Information for Semantic Text Matching. In: Liu, F., Duan, N., Xu, Q., Hong, Y. (eds) Natural Language Processing and Chinese Computing. NLPCC 2023. Lecture Notes in Computer Science (LNAI), vol. 14302. Springer, Cham. https://doi.org/10.1007/978-3-031-44693-1_8

  • DOI: https://doi.org/10.1007/978-3-031-44693-1_8

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-44692-4

  • Online ISBN: 978-3-031-44693-1

  • eBook Packages: Computer Science, Computer Science (R0)
