
PEKIN: Prompt-Based External Knowledge Integration Network for Rumor Detection on Social Media

  • Conference paper
PRICAI 2022: Trends in Artificial Intelligence (PRICAI 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13630)


Abstract

Pretrained language models (PLMs) and additional features have been used in rumor detection with excellent performance. However, on the one hand, recent studies find that one critical challenge is the significant gap between the objective forms of pretraining and fine-tuning, which prevents taking full advantage of the knowledge in PLMs. On the other hand, text contents are condensed and full of knowledge entities, but existing methods usually focus on textual contents and social contexts while ignoring external knowledge about text entities. To address these limitations, we propose a Prompt-based External Knowledge Integration Network (PEKIN) for rumor detection, which incorporates both prior knowledge of the rumor detection task and external knowledge of text entities. For one thing, unlike the conventional “pretrain, fine-tune” paradigm, we propose a prompt-based method that brings in prior knowledge to help PLMs understand the rumor detection task and better stimulate the rich knowledge distributed in PLMs. For another, we identify entities mentioned in the text and retrieve those entities’ annotations from a knowledge base; these annotation contexts then serve as external knowledge providing complementary information. Experiments on three datasets show that PEKIN outperforms all compared models, significantly beating the previous state of the art on the Weibo dataset.
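The two ideas in the abstract can be sketched in a few lines. This is an illustrative sketch, not the authors' implementation: the template wording, the `[SEP]` joining scheme, and the function names are hypothetical, and knowledge-base retrieval itself (e.g. fetching an entity's encyclopedia summary) is stubbed out as a plain dictionary.

```python
# Hypothetical sketch of PEKIN's two components: (1) a cloze-style prompt whose
# [MASK] token a PLM fills in, with a verbalizer mapping the predicted word to a
# label, and (2) knowledge-base annotations for entities found in the post,
# appended as complementary context before prompting.

def build_prompt(post: str) -> str:
    """Wrap a post in a cloze template; the PLM predicts the [MASK] token,
    e.g. 'fake' -> rumor, 'real' -> non-rumor via a verbalizer."""
    return f"{post} This claim is [MASK]."

def attach_external_knowledge(post: str, annotations: dict) -> str:
    """Append knowledge-base descriptions of entities mentioned in the post.
    `annotations` maps an entity string to a short description; real retrieval
    (entity linking + knowledge-base lookup) is omitted here."""
    facts = " ".join(desc for ent, desc in annotations.items() if ent in post)
    return f"{post} [SEP] {facts}" if facts else post

post = "NASA confirms the moon will turn green tonight."
enriched = attach_external_knowledge(
    post, {"NASA": "NASA is the United States' space agency."}
)
print(build_prompt(enriched))
```

Framing detection as mask-filling keeps the downstream objective in the same form as the pretraining objective, which is the gap the abstract says the prompt-based method closes.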


Notes

  1. https://github.com/nltk/nltk.

  2. https://github.com/baidu/lac.

  3. https://github.com/goldsmith/Wikipedia.


Acknowledgements

This work is partially supported by the Strategic Priority Research Program of the Chinese Academy of Sciences, Grant No. XDC02060400.

Author information

Corresponding author

Correspondence to Xiaodan Zhang.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Hu, Z., Liu, H., Li, K., Wang, Y., Liu, Z., Zhang, X. (2022). PEKIN: Prompt-Based External Knowledge Integration Network for Rumor Detection on Social Media. In: Khanna, S., Cao, J., Bai, Q., Xu, G. (eds) PRICAI 2022: Trends in Artificial Intelligence. PRICAI 2022. Lecture Notes in Computer Science, vol 13630. Springer, Cham. https://doi.org/10.1007/978-3-031-20865-2_14

  • DOI: https://doi.org/10.1007/978-3-031-20865-2_14

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-20864-5

  • Online ISBN: 978-3-031-20865-2

  • eBook Packages: Computer Science, Computer Science (R0)
