
Knowledge Enhanced Transformers System for Claim Stance Classification

  • Conference paper
Natural Language Processing and Chinese Computing (NLPCC 2021)

Abstract

In this paper, we present our system for NLPCC 2021 Shared Task 1, "Argumentative Text Understanding for AI Debater", where we achieved 3rd place with an accuracy of 0.8925 on Track 1, Task 1. Specifically, we propose a fast, simple, and efficient ensemble method that combines different pre-trained language models, such as BERT, RoBERTa, and ERNIE, with various training strategies, including warm-up, learning-rate scheduling, and k-fold cross-validation. In addition, we propose a knowledge enhancement approach, which makes it possible for our model to achieve first place without introducing external data.
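The abstract names the ingredients of the ensemble but not their implementation, so the following is only a minimal sketch of that recipe, assuming PyTorch, the Hugging Face transformers library cited in footnote 1, and a binary support/oppose label set. The checkpoint names, the 10% warm-up fraction, and all hyperparameters are illustrative assumptions rather than the authors' settings; an ERNIE checkpoint from the model hub could be appended to the model list in the same way.

```python
# Sketch of the recipe named in the abstract: several pre-trained encoders,
# each fine-tuned with warm-up plus a linear learning-rate schedule under
# k-fold cross-validation, then soft-voted. All names/values are illustrative.
import math
import numpy as np
import torch
from sklearn.model_selection import StratifiedKFold
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          get_linear_schedule_with_warmup)

MODELS = ["bert-base-chinese", "hfl/chinese-roberta-wwm-ext"]  # illustrative
N_FOLDS, EPOCHS, BATCH, LR, MAX_LEN = 5, 3, 16, 2e-5, 128
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"


def run_fold(name, topics, claims, labels, test_topics, test_claims):
    """Fine-tune one checkpoint on one fold; return test-set class probabilities."""
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(
        name, num_labels=2).to(DEVICE)  # assumes binary support/oppose labels
    opt = torch.optim.AdamW(model.parameters(), lr=LR)
    total = EPOCHS * math.ceil(len(labels) / BATCH)
    # Warm up over the first 10% of steps, then decay the LR linearly to zero.
    sched = get_linear_schedule_with_warmup(opt, int(0.1 * total), total)
    y = torch.tensor(labels)
    model.train()
    for _ in range(EPOCHS):
        order = torch.randperm(len(labels)).tolist()
        for s in range(0, len(order), BATCH):
            idx = order[s:s + BATCH]
            # Feed the topic and the claim to the encoder as a sentence pair.
            batch = tok([topics[i] for i in idx], [claims[i] for i in idx],
                        truncation=True, padding=True, max_length=MAX_LEN,
                        return_tensors="pt").to(DEVICE)
            loss = model(**batch, labels=y[idx].to(DEVICE)).loss
            opt.zero_grad()
            loss.backward()
            opt.step()
            sched.step()
    model.eval()
    with torch.no_grad():
        batch = tok(test_topics, test_claims, truncation=True, padding=True,
                    max_length=MAX_LEN, return_tensors="pt").to(DEVICE)
        return torch.softmax(model(**batch).logits, dim=-1).cpu().numpy()


def ensemble_predict(topics, claims, labels, test_topics, test_claims):
    """Soft-vote: average probabilities over every (checkpoint, fold) pair."""
    skf = StratifiedKFold(n_splits=N_FOLDS, shuffle=True, random_state=42)
    probs = []
    for name in MODELS:
        for tr, _ in skf.split(claims, labels):
            probs.append(run_fold(name,
                                  [topics[i] for i in tr],
                                  [claims[i] for i in tr],
                                  [labels[i] for i in tr],
                                  test_topics, test_claims))
    return np.mean(probs, axis=0).argmax(axis=-1)
```

Averaging class probabilities rather than majority-voting hard labels lets confident members of the ensemble outweigh uncertain ones, which is the usual reason soft voting is preferred in this kind of setup.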


Notes

  1. https://github.com/huggingface/transformers.

  2. https://api.fanyi.baidu.com/ (see the sketch following these notes).
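Footnote 2 points to a machine-translation API. One common use of such an API in text-classification pipelines is back-translation augmentation, in which training sentences are round-tripped through a pivot language to produce paraphrases; whether that is how this system uses the API is not stated in this excerpt, so the sketch below is purely illustrative, and translate() is a hypothetical stand-in for a real client of the service.

```python
# Illustrative back-translation augmentation (e.g. zh -> en -> zh).
# translate() is a hypothetical placeholder; wire it to a real MT client
# (such as the API in footnote 2) before use.
from typing import Callable, List

def back_translate(texts: List[str],
                   translate: Callable[[str, str, str], str],
                   source: str = "zh", pivot: str = "en") -> List[str]:
    """Round-trip each sentence through a pivot language to obtain paraphrases."""
    augmented = []
    for text in texts:
        pivoted = translate(text, source, pivot)      # e.g. zh -> en
        restored = translate(pivoted, pivot, source)  # e.g. en -> zh
        if restored and restored != text:             # keep only real paraphrases
            augmented.append(restored)
    return augmented
```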



Author information


Corresponding author

Correspondence to Sujian Li.



Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Li, X., Li, Z., Li, S., Li, Z., Yang, S. (2021). Knowledge Enhanced Transformers System for Claim Stance Classification. In: Wang, L., Feng, Y., Hong, Y., He, R. (eds.) Natural Language Processing and Chinese Computing. NLPCC 2021. Lecture Notes in Computer Science, vol. 13029. Springer, Cham. https://doi.org/10.1007/978-3-030-88483-3_50


  • DOI: https://doi.org/10.1007/978-3-030-88483-3_50


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-88482-6

  • Online ISBN: 978-3-030-88483-3

  • eBook Packages: Computer Science (R0)
