Abstract
Social media platforms enable largely unrestricted many-to-many communication. In times of crisis, they offer a space for collective sense-making and give rise to new social phenomena (e.g., open-source investigations). However, they also serve as a tool for threat actors to conduct Cyber-enabled Social Influence Operations (CeSIOs) aimed at shaping public opinion and interfering in decision-making processes. CeSIOs employ sock puppet accounts to engage authentic users in online communication, exert influence, and subvert online discourse. Large Language Models (LLMs) may further enhance the deceptive properties of sock puppet accounts: recent LLMs can generate targeted and persuasive text that is, for the most part, indistinguishable from human-written content, ideal features for covert influence. This article reviews recent developments at the intersection of LLMs and influence operations, summarizes the properties that make LLMs salient for such operations, and explores the potential impact of LLM-instrumented sock puppet accounts on CeSIOs. Finally, it highlights mitigation measures for the near future.