
An emotion-driven, transformer-based network for multimodal fake news detection

  • Regular Paper
International Journal of Multimedia Information Retrieval

Abstract

Social media is filled with multimedia news content that heavily influences people's daily lives. However, the rise of fake news is causing distress and has become a major source of concern. Several attempts have been made to detect fake news, but it remains a challenging problem. In this study, we propose an emotion-driven framework that extracts emotions from multimodal data to identify fake news. We use a vision transformer, which removes irrelevant information from the images and improves overall classification accuracy. To the best of our knowledge, this is the first work to incorporate multimodal emotions for detecting fake news in data comprising images and text. We conducted experiments on five datasets: Twitter, Jruvika Fake News Dataset, Pontes Fake News Dataset, Risdal Fake News Dataset, and Fakeddit Multimodal Dataset, and evaluated the network using Precision, Recall, F1 score, Accuracy, and ROC curves. We also conducted an ablation study to verify the effectiveness of the individual components of the proposed architecture. The experimental results show that the proposed architecture outperforms state-of-the-art and baseline methods on all evaluation metrics.
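The abstract does not specify how emotions are extracted from the text modality; a common building block for this step is a word-emotion lexicon lookup (e.g., in the style of NRC EmoLex), producing an emotion-distribution vector that can be fed to a classifier alongside visual features. The sketch below is illustrative only, not the authors' method: it uses a tiny hand-made lexicon and emotion set as stand-ins for a real resource.

```python
from collections import Counter

# Illustrative emotion set and toy lexicon (stand-ins for a real
# word-emotion resource such as NRC EmoLex).
EMOTIONS = ["anger", "fear", "joy", "trust"]
LEXICON = {
    "attack": {"anger", "fear"},
    "hoax":   {"anger"},
    "safe":   {"trust", "joy"},
    "win":    {"joy"},
}

def emotion_features(text: str) -> list:
    """Return a normalized count vector over EMOTIONS for the text."""
    counts = Counter()
    for tok in text.lower().split():
        for emo in LEXICON.get(tok, ()):
            counts[emo] += 1
    total = sum(counts.values()) or 1  # avoid division by zero
    return [counts[e] / total for e in EMOTIONS]

print(emotion_features("attack hoax win"))  # anger-dominant vector
```

In a full multimodal pipeline, a vector like this would be concatenated (or fused via attention) with transformer-derived image and text embeddings before the final classification layer.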


Research data policy and data availability statements

The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.


Funding

No funding was provided for this work.

Author information

Authors and Affiliations

Authors

Contributions

Ashima Yadav: Software, Validation, Investigation, Data Curation, Writing - Original Draft, Writing - Review & Editing, Visualization, Formal Analysis, Resources. Anika Gupta: Writing - Original Draft, Visualization, Resources.

Corresponding author

Correspondence to Ashima Yadav.

Ethics declarations

Conflict of interest

The authors have no competing interests relevant to the content of this article, and no relevant financial or non-financial interests to disclose.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Yadav, A., Gupta, A. An emotion-driven, transformer-based network for multimodal fake news detection. Int J Multimed Info Retr 13, 7 (2024). https://doi.org/10.1007/s13735-023-00315-3

