
Artificial Intelligence and Deepfakes in Strategic Deception Campaigns: The U.S. and Russian Experiences

Chapter in The Palgrave Handbook of Malicious Use of AI and Psychological Security

Abstract

Strategic communication campaigns that employ artificial intelligence (AI) are rapidly gaining notoriety for their use of manipulated videos of real people. Strategic deception campaigns harm society when they circulate fake political statements delivered by imposters or disseminate revenge porn targeting investigative journalists. Deepfake technology is especially potent at persuading mass audiences. In recent years, deepfakes have been weaponized as tools of strategic deception in political power contests, character assassination, suppression of the opposition, and information warfare. This chapter contributes to the growing body of literature investigating the perils of the malicious use of artificial intelligence (MUAI). It provides a conceptual framework for analyzing manipulation and deception strategies in the new era of political standoffs and psychological warfare operations between the United States and Russia. The authors discuss several related concepts, including trolling, pranking, visual manipulation, and computational propaganda, and examine the strategic applications of deepfake technology across various scenarios. The chapter concludes by reflecting on the societal effects of MUAI, new detection approaches, and potential countermeasures against online deception.




Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG


Cite this chapter

Samoilenko, S. A., & Suvorova, I. (2023). Artificial intelligence and deepfakes in strategic deception campaigns: The U.S. and Russian experiences. In E. Pashentsev (Ed.), The Palgrave Handbook of Malicious Use of AI and Psychological Security. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-031-22552-9_19
