Artificial Intelligence and the Spread of Mis- and Disinformation

Chapter in: Artificial Intelligence and National Security

Abstract

In a so-called post-truth era, the spread of mis- and disinformation is being widely studied across academic disciplines in order to better understand how information is disseminated not only by humans but also by the technology humans have created (Tandoc, Sociology Compass 13(9), 2019). As technology advances rapidly, it is more important than ever to reflect on the effects that both mis- and disinformation have on individuals and wider society, and on how those effects can be mitigated to create a more secure online environment. This chapter analyses the current literature on artificial intelligence (AI) and the spread of mis- and disinformation, beginning with the meaning of these terms and of truth itself in a post-truth world. In particular, the use of software robots (bots) online is discussed to demonstrate how information is manipulated and how malicious intent often lies beneath the surface of everyday technologies. The chapter then examines why social media platforms are an ideal breeding ground for malicious technologies, the strategies employed by both human users and bots to spread falsehoods within their own networks, and how human users extend the reach of mis- and disinformation. By surveying both the threats posed by AI technology and the solutions that AI and human users together can achieve, the chapter highlights the need for further progress at a time when the spread of falsehoods online remains a source of deep concern. It also calls into question the use of AI to combat issues arising from the use of advanced Machine Learning (ML) methods. Finally, the chapter offers a set of recommendations to help mitigate these risks, situating the role of technology within a wider scenario in which the ethical foundations of communities and democracies are increasingly under threat.
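
The chapter's discussion of bots and of ML-based countermeasures rests on the observation that automated accounts tend to behave measurably differently from human ones and can therefore be flagged by a learned classifier. The short sketch below is purely illustrative of that idea: it trains a scikit-learn random forest on synthetic account-level features (posting rate, follower/following ratio, share of posts containing a URL). The feature choices, the distributions and the data are assumptions made for this example, not the detection method analysed in the chapter.

```python
# Minimal, illustrative sketch of ML-based bot detection on synthetic data.
# The three behavioural features and their distributions are assumptions for
# this example; real systems (e.g. those surveyed in the bot-detection
# literature in the reference list) use far richer behavioural, network and
# content signals.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(seed=42)
n_humans, n_bots = 500, 500

# Hypothetical human accounts: modest posting rate, balanced follower/following
# ratio, relatively few posts containing links.
humans = np.column_stack([
    rng.normal(5, 2, n_humans),       # posts per day
    rng.normal(1.0, 0.3, n_humans),   # follower/following ratio
    rng.uniform(0.0, 0.4, n_humans),  # fraction of posts containing a URL
])

# Hypothetical automated accounts: high posting rate, few followers relative
# to accounts followed, links in most posts.
bots = np.column_stack([
    rng.normal(40, 10, n_bots),
    rng.normal(0.2, 0.1, n_bots),
    rng.uniform(0.6, 1.0, n_bots),
])

X = np.vstack([humans, bots])
y = np.array([0] * n_humans + [1] * n_bots)  # 0 = human, 1 = bot

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y
)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test),
                            target_names=["human", "bot"]))
```

In practice, the bot-detection studies cited in this chapter (e.g. Chu et al. 2012; Ferrara et al. 2016; Chen et al. 2017) draw on many more behavioural, network and linguistic signals, and evaluating such classifiers on real platform data, where bots actively mimic human traits, is considerably harder than this toy setting suggests.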


References

  • Arendt, H. (1967). Truth and politics. The New Yorker.
  • Bakardjieva, M. (2005). Internet society: The internet in everyday life. SAGE Publications.
  • Benedek, W., & Kettemann, M. C. (2020). Freedom of expression and the internet: Updated and revised (2nd ed.). Council of Europe.
  • Berger, J., & Milkman, K. L. (2013). Emotion and virality: What makes online content go viral? Insights, 5, 18–23.
  • Brown, E. (2019). Propaganda, misinformation, and the epistemic value of democracy. A Journal of Politics and Society, 30, 194–218.
  • Buckland, M. K. (1991). Information as thing. Journal of the American Society for Information Science, 42, 351–360.
  • Bufacchi, V. (2021). Truth, lies and tweets: A consensus theory of post-truth. Philosophy & Social Criticism, 47, 347–361.
  • Burkhardt, J. M. (2017). Combating fake news in the digital age. Library technology reports (pp. 5–33). ALA TechSource.
  • Carsten Stahl, B. (2006). On the difference or equality of information, misinformation, and disinformation: A critical research perspective. Informing Science Journal, 9, 83–96.
  • Center for Countering Digital Hate. (2022a, July 3). Home: Center for countering digital hate. Retrieved from Center for Countering Digital Hate Website: https://www.counterhate.com/
  • Center for Countering Digital Hate. (2022b, February 23). Facebook failing to flag harmful climate misinformation, new research finds. Retrieved from Center for Countering Digital Hate Website: https://www.counterhate.com/post/facebook-failing-to-flag-harmful-climate-misinformation-new-research-finds
  • Chen, Z., Tanash, R. S., Stoll, R., & Subramanian, D. (2017). Hunting malicious bots on Twitter: An unsupervised approach. In International conference on social informatics (pp. 501–510). Springer.
  • Choraś, M., Demestichas, K., Giełczyk, A., Herrero, Á., Ksieniewicz, P., Remoundou, K., … Woźniak, M. (2021). Advanced machine learning techniques for fake news (online disinformation) detection: A systematic mapping study. Applied Soft Computing, 101, 1–22.
  • Chu, Z., Gianvecchio, S., Wang, H., & Jajodia, S. (2012). Detecting automation of Twitter accounts: Are you a human, bot, or cyborg? IEEE Transactions on Dependable and Secure Computing, 9, 811–824.
  • Ciampaglia, G. L., Mantzarlis, A., Maus, G., & Menczer, F. (2018). Research challenges of digital misinformation: Toward a trustworthy web. AI Magazine, 39, 65–74.
  • Clarke, A. C. (1973). Profiles of the future. Harper & Row Publishers. Retrieved from NewScientist: https://www.newscientist.com/definition/clarkes-three-laws/
  • de Cock Buning, M. (2018). A multi-dimensional approach to disinformation: Report of the independent high level group on fake news and online disinformation. Publications Office of the European Union.
  • de Freitas Melo, P., Coimbra, V., Garimella, K., Vaz de Melo, P. O., & Benevenuto, F. C. (2019). Can WhatsApp counter misinformation by limiting message forwarding? In International conference on complex networks and their applications (pp. 372–384). Springer.
  • Dean, B. (2022, January 5). Facebook demographic statistics: How many people use Facebook in 2022? Retrieved from Backlinko Website: https://backlinko.com/facebook-users
  • Diehl, T., Barnidge, M., & de Zuniga, G. (2018). Multi-platform news use and political participation across age groups: Toward a valid metric of platform diversity and its effects. Journalism & Mass Communication Quarterly, 96, 428–451.
  • El Naqa, I., & Murphy, M. J. (2015). What is machine learning? In I. El Naqa, M. J. Murphy, & R. Li (Eds.), Machine learning in radiation oncology (pp. 3–11). Springer.
  • European Parliament. (2015, November). Understanding propaganda and disinformation. Retrieved from European Parliament Website: https://www.europarl.europa.eu/RegData/etudes/ATAG/2015/571332/EPRS_ATA(2015)571332_EN.pdf
  • Facebook. (2022, March 7). Promoting safety and expression. Retrieved from About Facebook Website: https://about.facebook.com/actions/promoting-safety-and-expression/
  • Ferrara, E., Varol, O., Davis, C., Menczer, F., & Flammini, A. (2016). The rise of social bots. In Communications of the ACM (pp. 96–104). ACM.
  • Fetzer, J. H. (2004). Disinformation: The use of false information. Minds and Machines, 14, 231–240.
  • Floridi, L. (2004). Outline of a theory of strongly semantic information. Minds and Machines, 14, 197–221.
  • Floridi, L. (2006). The logic of being informed. Logique et Analyse, 49, 433–460.
  • George, S. (2019, June 13). ‘Deepfakes’ called new election threat, with no easy fix. Retrieved from AP News: https://apnews.com/article/nancy-pelosi-elections-artificial-intelligence-politics-technology-4b8ec588bf5047a981bb6f7ac4acb5a7
  • Giorgi, S., Ungar, L., & Schwartz, H. A. (2021). Characterizing social spambots by their human traits. Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, 5148–5158.
  • Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., & Verdoliva, L. (2021). Are GAN generated images easy to detect? A critical analysis of the state-of-the-art. In 2021 IEEE international conference on multimedia and expo (ICME) (pp. 1–6). IEEE.
  • Güera, D., & Delp, E. (2018). Deepfake video detection using recurrent neural networks. In 2018 15th IEEE international conference on advanced video and signal based surveillance (AVSS) (pp. 1–6). IEEE.
  • Guess, A., & Lyons, B. (2020). Misinformation, disinformation and online propaganda. In N. Persily & J. A. Tucker (Eds.), Social media and democracy (pp. 10–33). Cambridge University Press.
  • Habermas, J. (1984). The theory of communicative action: Reason and the rationalization of society. Wiley.
  • Habermas, J. (2003). Truth and justification. Polity Press.
  • Himelein-Wachowiak, M., Giorgi, S., Devoto, A., Rahman, M., Ungar, L., Schwartz, H. A., … Curtis, B. (2021). Bots and misinformation spread on social media: Implications for COVID-19. Journal of Medical Internet Research, 23(5), e26933.
  • Horne, B. D., & Adali, S. (2017). This just in: Fake news packs a lot in title, uses simpler, repetitive content in text body, more similar to satire than real news. arXiv:1703.09398.
  • Kreps, S. (2020). The role of technology in online misinformation. Brookings.
  • Kreps, S., McCain, R. M., & Brundage, M. (2020). All the news that’s fit to fabricate: AI-generated text as a tool of media misinformation. Journal of Experimental Political Science, 9, 104–117.
  • Lazer, D., Baum, M., Benkler, Y., Berinsky, A., Greenhill, K., & Menczer, F. (2018). The science of fake news: Addressing fake news requires a multidisciplinary effort. Science, 359, 1094–1096.
  • Li, Y., Chang, M.-C., & Lyu, S. (2018). In ictu oculi: Exposing AI created fake videos by detecting eye blinking. In 2018 IEEE international workshop on information forensics and security (WIFS). IEEE.
  • Mingers, J. (2001). Embodying information systems: The contribution of phenomenology. Information and Organization, 11, 103–128.
  • Neelam, M. (2022). Aspects of AI. In J. Karthikeyan, T. Su Hie, & N. Yu Jin (Eds.), Learning outcomes of classroom research (pp. 250–256). L’Ordine Nuovo.
  • Nelson, J. L., & Taneja, H. (2018). The small, disloyal fake news audience: The role of audience availability in fake news consumption. New Media & Society, 20, 3720–3737.
  • Norris, A. (1996). Arendt, Kant, and the politics of common sense. Polity, 29, 165–191.
  • Pedro Baptista, J., & Gradim, A. (2021). “Brave new world” of fake news: How it works. Journal of the European Institute for Communication and Culture, 28, 426–442.
  • Pennycook, G., Bear, A., Collins, E., & Gertler Rand, D. (2019). The implied truth effect: Attaching warnings to a subset of fake news headlines increases perceived accuracy of headlines without warnings. Management Science, 66, 4944–4957.
  • Pourghomi, P., Safieddine, F., Masri, W., & Dordevic, M. (2017). How to stop spread of misinformation on social media: Facebook plans vs. right-click authenticate approach. In 2017 international conference on engineering & MIS (ICEMIS) (pp. 1–8). IEEE.
  • Russell, S., & Norvig, P. (1995). Artificial intelligence: A modern approach. Prentice-Hall.
  • Shao, C., Ciampaglia, G. L., Varol, O., Yang, K.-C. F., & Menczer, F. (2018). The spread of low-credibility content by social bots. Nature Communications, 9(1), 4787.
  • Shu, K., Bhattacharjee, A., Alatawi, F., Nazer, T. H., Ding, K., Karami, M., & Liu, H. (2020). Combating disinformation in a social media age. WIREs Data Mining and Knowledge Discovery, 10, 1–23.
  • Simon, H. A. (1983). Why should machines learn? In R. S. Michalski, J. G. Carbonell, & T. M. Mitchell (Eds.), Machine learning: An artificial intelligence approach (pp. 25–37). Elsevier Inc.
  • Singh, V. K., Ghosh, I., & Sonagara, D. (2020). Detecting fake news stories via multimodal analysis. Journal of the Association for Information Science and Technology, 72, 3–17.
  • Tandoc, E. C., Jr. (2019). The facts of fake news: A research review. Sociology Compass, 13(9), 1–10.
  • Tandoc, E. C., Lim, D., & Ling, R. (2019). Diffusion of disinformation: How social media users respond to fake news and why. Journalism, 21, 381–398.
  • The New York Times. (2021, October 25). Facebook wrestles with the features it used to define social networking. Retrieved from The New York Times Website: https://www.nytimes.com/2021/10/25/technology/facebook-like-share-buttons.html
  • Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59, 433–460.
  • Van Duyn, E., & Collier, J. (2019). Priming and fake news: The effects of elite discourse on evaluations of news media. Mass Communication and Society, 22, 29–48.
  • Vasu, N., Ang, B., Teo, T.-A., Jayakumar, S., Faizal, M., & Ahuja, J. (2018). Fake news: National security in the post-truth era. S. Rajaratnam School of International Studies.
  • Watts, D. J., Rothschild, D. M., & Mobius, M. (2021). Measuring the news and its impact on democracy. In Advancing the science and practice of science communication: Misinformation about science in the public sphere (pp. 1–6). PNAS.
  • Webster, F. (2014). Theories of the information society. Routledge.
  • Weedon, J., Nuland, W., & Stamos, A. (2017, April 27). Information operations and Facebook. Retrieved from About Facebook Website: https://about.fb.com/br/wp-content/uploads/sites/3/2017/09/facebook-and-information-operations-v1.pdf
  • Yuezun, L., Chang, M.-C., & Lyu, S. (2018). Exposing AI generated fake face videos by detecting eye blinking. In IEEE international workshop on information forensics and security (WIFS) (pp. 1–7). IEEE.
  • Zhang, X., & Ghorbani, A. A. (2020). An overview of online fake news: Characterization, detection, and discussion. Information Processing & Management, 57, 2–26.
  • Zhang, X., Karaman, S., & Shih-Fu, C. (2019). Detecting and simulating artifacts in GAN fake images. In 2019 IEEE international workshop on information forensics and security (WIFS) (pp. 1–6). IEEE.

Author information

Correspondence to Annie Benzie.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

Benzie, A., Montasari, R. (2022). Artificial Intelligence and the Spread of Mis- and Disinformation. In: Montasari, R. (Ed.) Artificial Intelligence and National Security. Springer, Cham. https://doi.org/10.1007/978-3-031-06709-9_1

  • DOI: https://doi.org/10.1007/978-3-031-06709-9_1

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-06708-2

  • Online ISBN: 978-3-031-06709-9

  • eBook Packages: Computer Science, Computer Science (R0)
