Abstract
Social media is filled with multimodal news content that heavily influences people's daily lives. However, the rise of fake news causes distress and has become a major source of concern. Although several attempts have been made to detect fake news, it remains a challenging problem. In this study, we propose an emotion-driven framework that extracts emotions from multimodal data to identify fake news. We use a vision transformer, which removes irrelevant information from images and enhances the overall classification accuracy. To the best of our knowledge, this is the first work that incorporates multimodal emotions to detect fake news in multimodal data comprising images and text. We conducted several experiments on five datasets: Twitter, the Jruvika Fake News Dataset, the Pontes Fake News Dataset, the Risdal Fake News Dataset, and the Fakeddit Multimodal Dataset, and evaluated the performance of the network using precision, recall, F1 score, accuracy, and ROC curves. We also conducted an ablation study to verify the effectiveness of the different components of the proposed architecture. The experimental results show that the proposed architecture outperforms state-of-the-art and other baseline methods on all evaluation metrics.
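For reference, the evaluation metrics named in the abstract (precision, recall, F1 score, and accuracy) can be computed from a binary confusion matrix as sketched below. This is an illustrative sketch only, not the authors' implementation; it assumes labels are encoded as 1 for fake news and 0 for real news.

```python
# Illustrative sketch (not the authors' code): the binary classification
# metrics named in the abstract, computed directly from label lists,
# assuming 1 = fake and 0 = real.
def classification_metrics(y_true, y_pred):
    # Tally the four cells of the confusion matrix.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    # Guard against division by zero when a class is never predicted.
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    accuracy = (tp + tn) / len(y_true)
    return {"precision": precision, "recall": recall,
            "f1": f1, "accuracy": accuracy}
```

ROC curves, by contrast, are computed from predicted probabilities rather than hard labels, by sweeping the decision threshold and plotting the true-positive rate against the false-positive rate.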
Research data policy and data availability statements
The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
Funding
No funding was provided for this work.
Author information
Contributions
Ashima Yadav: Software, Validation, Investigation, Data Curation, Writing – Original Draft, Writing – Review & Editing, Visualization, Formal Analysis, Resources. Anika Gupta: Writing – Original Draft, Visualization, Resources.
Ethics declarations
Conflict of interest
The authors have no competing interests to declare that are relevant to the content of this article. The authors have no relevant financial or non-financial interests to disclose.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Yadav, A., Gupta, A. An emotion-driven, transformer-based network for multimodal fake news detection. Int J Multimed Info Retr 13, 7 (2024). https://doi.org/10.1007/s13735-023-00315-3