
Misinformation, Paradox, and Nudge: Combating Misinformation Through Nudging

Chapter in Artificial Misinformation

Abstract

The rapid spread of misinformation online can be attributed to biases in human decision-making, amplified by algorithmic processes. Human-computer interaction research has shown that such biases can be addressed through the design of system interventions. For example, nudging refers to subtle modifications of the choice architecture that steer user behavior in desired directions. This chapter discusses the design of nudging interventions in the context of misinformation, including a systematic review of the use of nudging in human-AI interaction that has informed a design framework. Because the underlying algorithms work invisibly, nudges can be delivered to individuals at the point where they encounter misinformation, and their effectiveness can be traced and tuned as the algorithm learns from feedback on user behavior. The chapter explores the potential of nudging to reduce the likelihood of consuming and spreading misinformation. The key questions are how to ensure that algorithmic nudges are deployed effectively and whether nudging can also support a sustainable way of life. The chapter concludes by discussing the principles and dimensions of the nudging effects of AI systems on user behavior in response to misinformation.
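The adaptive loop the abstract describes (deliver a nudge, trace its effect, tune as the algorithm learns from user behavior) can be sketched as a simple multi-armed bandit. This is an illustrative assumption, not the chapter's own design: the nudge names, the epsilon-greedy policy, and the simulated response rates below are all hypothetical.

```python
import random

class NudgeSelector:
    """Minimal epsilon-greedy loop: pick a nudge intervention, observe whether
    the user showed the desired behavior (e.g., declined to share flagged
    content), and update that nudge's estimated success rate."""

    def __init__(self, nudges, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {n: 0 for n in nudges}
        self.values = {n: 0.0 for n in nudges}  # running success-rate estimates

    def choose(self):
        if random.random() < self.epsilon:            # explore a random nudge
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)  # exploit best-known nudge

    def update(self, nudge, reward):
        # Incremental mean of observed rewards (1 = desired behavior, 0 = not)
        self.counts[nudge] += 1
        self.values[nudge] += (reward - self.values[nudge]) / self.counts[nudge]

# Hypothetical nudge types; in practice rewards come from behavioral logs,
# here they are simulated with made-up per-nudge response rates.
selector = NudgeSelector(["accuracy_prompt", "source_label", "friction_delay"])
rates = {"accuracy_prompt": 0.6, "source_label": 0.4, "friction_delay": 0.5}
for _ in range(1000):
    nudge = selector.choose()
    reward = 1 if random.random() < rates[nudge] else 0
    selector.update(nudge, reward)
```

The feedback step is the point: each observed behavior shifts the estimate for the nudge that produced it, so the system gradually concentrates on whichever intervention works, which is also why such loops need the effectiveness tracing and ethical scrutiny the chapter calls for.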



Author information

Corresponding author

Correspondence to Donghee Shin.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter


Cite this chapter

Shin, D. (2024). Misinformation, Paradox, and Nudge: Combating Misinformation Through Nudging. In: Artificial Misinformation. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-031-52569-8_7

