Abstract
The rapid spread of misinformation online can be attributed to biases in human decision-making, amplified by algorithmic processes. Research in human-computer interaction has shown how such biases can be addressed through the design of system interventions. The principle of nudging, for example, refers to subtle modifications of the choice architecture that steer user behavior in desired directions. This chapter examines the design of nudging interventions in the context of misinformation, drawing on a systematic review of nudging in human-AI interaction that informs a design framework. Because the underlying algorithms operate invisibly, nudges can be targeted to individuals encountering misinformation, and their effectiveness can be tracked and tuned as the algorithm learns from users' behavior. The chapter explores the potential of nudging to reduce the likelihood that users consume and spread misinformation. Key questions are how to ensure that algorithmic nudges are used effectively and whether nudging could also help achieve a sustainable way of life. Finally, the chapter discusses the principles and dimensions of the nudging effects that AI systems exert on user behavior in response to misinformation.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this chapter
Cite this chapter
Shin, D. (2024). Misinformation, Paradox, and Nudge: Combating Misinformation Through Nudging. In: Artificial Misinformation. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-031-52569-8_7
DOI: https://doi.org/10.1007/978-3-031-52569-8_7
Publisher Name: Palgrave Macmillan, Cham
Print ISBN: 978-3-031-52568-1
Online ISBN: 978-3-031-52569-8
eBook Packages: Literature, Cultural and Media Studies (R0)