Introduction

The rapid development of generative artificial intelligence (AI) and its use for political purposes raise justified concerns that sophisticated forms of manipulation might undermine democratic processes, particularly elections (Muñoz 2023).

Researchers regularly consider future scenarios in which AI plays a major role, expressing the “fear of what could happen” (Wahl-Jorgensen & Carlson 2021). This draws the attention of policymakers and the public to specific threats and helps to imagine the challenges society will face, but it might also lead to excessive demonization of AI and its incarnations (Yadlin-Segal & Oppenheim 2021).

Deep fakes can be considered a good example of this phenomenon. They can be defined as AI-generated or AI-manipulated audio, image or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful (European Commission 2024). They have found numerous applications in various fields, ranging from the extremely useful to the morally questionable, dangerously harmful or outright illegal (Farid & Schindler 2020; Pawelec 2022). This diversity of applications does not allow deep fakes to be classified as a bad technology per se (de Ruiter 2021), but specific threats stemming from their misuse should not be underestimated, as they have already been effectively employed to discredit and ridicule political opponents, fuel social conflicts, spread hate speech, reinforce gender inequalities and disseminate disinformation (Chesney & Citron 2019; Kleemann 2023). The number of malicious applications has led some researchers to develop a dystopian vision of an information or epistemic “apocalypse” that threatens the information environment and society (Schick 2020).

One of the key threats analysed in the context of the multiplication of deep fakes in the information space is the possibility of influencing electoral processes, both by enabling the promotion or discrediting of specific candidates and by undermining trust in elections as such (Chesney & Citron 2019; Langa 2021). Recent years have seen a massive increase in the quality of deep fake technology as well as its “democratization”, that is, its availability to a virtually unlimited audience at little cost. Nowadays almost anyone “can fabricate fake videos that are practically indistinguishable from authentic media” (Westerlund 2019, p. 39) by using easily accessible, downloadable software that allows for far-reaching interference in audiovisual content.

Gains in the field of generative AI naturally strengthen the resources available to state and non-state actors, lowering the cost and time needed to create and propagate digital disinformation (Maham & Küspert 2023). Elections are one of the potential targets of such malicious activities, a fact that researchers have consistently recognized. This study aims to analyse relevant cases of the use of deep fakes in the context of elections reported in 2023. In doing so, the study focuses on three overarching questions:

  1. How were deep fakes used in 2023 in the context of elections?

  2. Did deep fakes significantly influence the outcome of a particular election?

  3. To what extent does the “information apocalypse” narrative reflect the direct impact of deep fakes on election results?

These questions are particularly important for the global super election year 2024, which, due to the number of citizens going to vote, may be symbolically called the “Year of Democracy” (Global Coalition for Tech Justice 2023).

Methodology and limitations of the study

The aim of this study is to verify to what extent dystopian expectations of an “information apocalypse” (Schick 2020) fuelled by deep fakes have already materialized, specifically in the context of elections. We limited the scope of the study to elections because public discourse on deep fakes is dominated precisely by concerns about elections. The study analyses a non-representative sample of case studies of deep fakes reported in eleven countries. These countries (USA, Turkiye, Argentina, Poland, UK, France, India, Bulgaria, Taiwan, Indonesia and Slovakia) were chosen either because they held an election in 2023 or had ongoing election campaigns.

The selection of the 11 countries analysed below is based on the major (general, parliamentary/legislative or presidential) election calendars for 2023 and 2024. Data for election calendars were provided by The Association of World Election Bodies (2023). We identified 85 different relevant elections that met the criteria. In December 2023 we conducted a Google search query for the generic phrases “deepfake AND election” and “deep fake AND election”, as well as the specific phrases “deepfake AND {country}” and “deep fake AND {country}”. Reports related to the corresponding electoral processes were identified manually, and an enhanced search for positive matches was conducted afterwards. The initial search was limited to English and complemented with local media reports where possible. Two cases (France, UK) were identified separately, in a way unrelated to the basic query. We were able to access most of the deep fakes identified, which allowed us to estimate their visibility based on social media entries or media reports. Such data have significant limitations, because they only take into account the main sources of dissemination of AI-generated content.
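For transparency, the query-construction step can be summarized in a minimal sketch, assuming a simple list of the eleven countries and the phrase templates described above; the screening of results and the enhanced follow-up search were performed manually, so the code below only illustrates how the search phrases were composed.

```python
# Illustrative sketch of the query construction described above.
# The country list and phrase templates mirror the description in the text;
# result screening and the enhanced follow-up search were done manually.

COUNTRIES = [
    "USA", "Turkiye", "Argentina", "Poland", "UK", "France",
    "India", "Bulgaria", "Taiwan", "Indonesia", "Slovakia",
]

GENERIC_PHRASES = ["deepfake AND election", "deep fake AND election"]


def build_queries(countries: list[str]) -> list[str]:
    """Return the generic and country-specific search phrases."""
    queries = list(GENERIC_PHRASES)
    for country in countries:
        queries.append(f"deepfake AND {country}")
        queries.append(f"deep fake AND {country}")
    return queries


if __name__ == "__main__":
    for query in build_queries(COUNTRIES):
        print(query)
```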

We excluded all cases of deep fakes that did not appear in the direct context of a particular election. In 2023 alone, deep fakes were used for political purposes in many countries (Estonia, Germany, Israel, Japan, Serbia and Sudan, among others), and they may also subsequently influence voters’ behaviour as well as contribute to the undermining of trust in information and the media described in this study. However, none of these cases indicated a direct link to election processes. Within the selected elections, we consider the deep fake landscape holistically, not ignoring satirical forms or political advertising, which also contribute to the negative phenomena mentioned above.

A limitation of the study might be its reliance on media reports in English, which means that cases reported only locally might have been omitted and gone undetected. Researching media reports is nonetheless justified in view of the “information apocalypse” narrative, which stems from journalistic discourse. Our goal is not to cover all deep fakes, as some of them are shared in limited, often closed circles and do not receive media coverage. However, it should be taken into account that these also shape trust in information and the media and may have a cumulative negative impact on the epistemic value of audio and visual content. The likelihood of deep fakes being used increases as the time remaining until an election decreases, which means that in the case of the 2024 elections, many deep fakes may be yet to come.

A significant limitation of these considerations is the lack of an appropriate methodological framework for measuring the impact of deep fakes on election results. Deep fakes are part of the disinformation toolset, and it seems impossible to precisely determine the limits of their influence. Therefore, we focus on identifying a direct and decisive impact wherever a correlation can be recognized.

Narrative on “information apocalypse”

Since 2018, scientists, philosophers and journalists (Schwartz 2018; Rini 2020; Schick 2020; Toews 2020; Fallis 2021) have been fuelling fears that deep fakes could lead to a so-called “information apocalypse”, linked to a gradual erosion of public trust in information and the media. These processes were expected to completely blur the boundaries between what is true and what is false. Rini (2020) coined the notion of the “epistemic backstop” to capture the role of visual recordings, historically associated with credible media, in anchoring trust in testimony; the epistemic value of such recordings is now in decline, even though they could always be falsified or altered (Galston 2020; Fallis 2021; Geddes 2021).

Detailed consideration of the erosion of trust is necessary to understand the effects of deep fakes on our cognitive processes and the information space, whereas the use of terms such as “apocalypse” and “infocalypse”, or questioning the epistemic value of recordings as such, seems at least problematic. What exactly is an apocalypse? In classical philosophical and literary terms, it refers to destruction, the end of the world or a global catastrophe. The gradual secularization of the term has popularized its use in non-religious contexts (CenSAMM 2021).

Apocalyptic terms have already been used to describe the negative consequences of disinformation and fake news (Stover 2018; Eder 2019), in what might be called a “post-truth narrative” (Habgood-Coote 2023). They have been transposed to the discourse about deep fakes, shaping doomsday scenarios (Broinowski 2022). The term “infodemic” (a “pandemic of information disorders”) also came into common use after the COVID-19 pandemic, which resulted in a wave of disinformation and fake news (Lim 2023). Habgood-Coote (2023) refers to an “epistemic apocalypse” but critically assesses apocalyptic predictions, whereas Horvitz (2022) expects that future generations might find themselves in a “post-epistemic world”. These phenomena might also be associated with “reality apathy” resulting from constant exposure to misinformation (Schwartz 2018), or with questioning the normative claim that seeing means believing (Galston 2020). The latter was intended to express declining trust in information and its carriers and heralded the end of ocularcentrism (Geddes 2021). In contrast, Immerwahr (2023), drawing on Habgood-Coote’s (2023) arguments about social verification, argues that we rarely rely purely on our eyes and that reasoning still plays a major role in distinguishing the real from the fake. This argument may hold for content that is objectively easy to distinguish, but it may not be valid for hyper-realistic audio or visual deep fakes.

A study published in October 2023 (Twomey et al. 2023, p. 17) suggests that deep fake videos indeed undermine epistemic trust, but that the media coverage might “be disproportionate to the threat we are currently facing” and this response might be “creating more distrust and contributing to an epistemic crisis”. Alarmist voices draw attention to certain dangerous phenomena, but they may also foster the false belief that an “information apocalypse” is already occurring (Habgood-Coote 2023), thus contributing to the increasingly negative psychological effects of deep fake technology. In doing so, exaggerated fears about the possible effects of deep fake technology in the context of elections might themselves contribute to an “over-pollution” of public discourse on AI and deep fakes.

Our rejection of the doomsday approach is not merely semantic. Shaping appropriate narratives and improving journalistic discourse are not yet widely recognized as potential countermeasures against deep fakes (Horvitz 2022; Simon et al. 2023). A recent study confirmed this by pointing to the “relatively narrow conceptualization and understanding of deepfakes and their impact on society at large in journalistic discourse” (Weikmann & Lecheler 2023, p. 13). Excessive concerns may be the result of the speculative nature of predictions and the natural fear of new technologies, which in the past found expression in claims such as that lightbulbs might cause blindness (Murphy et al. 2023). Changing the discourse and rejecting the alarmist, apocalyptic, doomsday narrative is one of the potential steps towards slowing down the erosion of trust in information and the media and preventing an information or epistemic apocalypse.

The implications of the apocalyptic narrative are visible in the public sphere. On the one hand, they shape the discourse; on the other hand, they fuel fear of modern technologies, often leading to their premature demonization (Yadlin-Segal & Oppenheim 2021). It has already been shown that regularly repeated warnings contribute to a decrease in trust, excessive scepticism and more frequent labelling of real materials as fake (Vaccari & Chadwick 2020; Twomey et al. 2023). This, in turn, may result in a self-fulfilling prophecy: poorly balanced messages suggesting a complete erosion of the epistemic value of information or its carriers may contribute to erroneous questioning of the veracity of genuine information and recordings.

British Prime Minister Rishi Sunak, who himself fell victim to a discrediting deep fake in 2023, indicated that deep fakes “pollute the public information ecosystem” (Gye 2023). This assessment seems accurate in the face of the noticeable correlation between the increasing number of deep fakes and the low level of public trust in information and the media (Luminate 2023; Home Security Heroes 2023). In our opinion, giving up the apocalyptic narrative does not have to result in ignoring the problem. We are facing the major challenge of the reduced epistemic value of recordings, which can be empirically measured, and in that context the first signs of the described scenarios are visible. We do not deny that in the worst-case scenario an information or epistemic apocalypse may occur; we decided to examine this directly in the context of elections. The trends currently observed and evidenced in this study indicate mostly the occurrence of individual cases (with the potential to grow) rather than a mass phenomenon. Fallis (2021, p. 625) argues that “as deepfakes become more prevalent, it may be epistemically irresponsible to simply believe that what is depicted in a video actually occurred”. A significant weakness of the apocalyptic narrative is the inability to set a clear threshold for an information apocalypse and, consequently, the difficulty of empirically verifying the thesis.

We argue that a more appropriate term for the ongoing process is “pollution of the information environment”. In our opinion, it is currently epistemically irresponsible to simply not believe that what is depicted in a video actually occurred. Only in 11 cases of elections or election campaigns in 2023 were we able to identify deep fakes that received media coverage, and only in two cases do we assume some, though not decisive, impact on the results of electoral processes. We have not recorded a significant erosion of democratic elections caused by the reported deep fakes, although some phenomena should be a source of justified concern.

Deep fakes in the context of elections—query of selected cases

United States of America

Despite concerns about the possibility of manipulation before the 2020 US presidential election, deep fakes did not play a significant role during that campaign (Meneses 2021). At this moment, there are clear concerns about the course of the presidential election in 2024 (Klein 2023; Ulmer & Tong 2023). A whole list of minor incidents was recorded in 2023 alone. However, none of them had the potential to become a game-changer or confirmed an already ongoing “information apocalypse”.

After Joe Biden announced his readiness to run for office, Republicans produced an AI-generated video presenting the disastrous consequences of Biden’s second term (Johnson 2023). Biden is regularly the target of attacks aimed at portraying him as unable to hold office. Modifications of his speeches circulating on social media are mixed with parodic performances, including singing the Baby Shark theme (Klein 2023). Biden was also portrayed dressed as trans celebrity Dylan Mulvaney. The video was mostly satirical in nature, but it gained a significant audience reaching up to several million recipients. Such happenings are not without significance, as they can subsequently influence voters’ behaviour and strengthen cognitive bias (Immerwahr 2023).

US Vice President Kamala Harris was the victim of a remake in which the original audio of her speech was replaced with disparaging material. Her voice was rambling, creating the false impression that she might have been intoxicated (Farid 2023). Another deep fake video portrayed the actor Morgan Freeman allegedly criticizing Biden and calling him “a fool”; the falsified footage was seen by thousands of X users (Reuters Fact Check 2023). One case may seem absurd but has amassed a sizable audience of nearly 90,000 followers on Twitch: a debate between two live-generated deep fakes imitating Biden and Trump, streamed around the clock (Farid 2023; TrumpOrBiden2024 2023).

Donald Trump was portrayed hugging one of his bitter rivals. Voice cloning was used to dub Trump’s controversial social media posts (Isenstadt 2023), and the content generated at least 60,000 views on YouTube. He was also depicted allegedly dancing with a 13-year-old girl (Marcelo 2023). A deep fake video depicting CNN journalist Anderson Cooper was meant to mock CNN’s real reaction to Trump’s town hall and was shared in pro-Trump circles on social media, including by Trump himself (Mastrangelo 2023); his post on Truth Social was shared more than 5,000 times. Trump’s supporters disseminated a deep fake video of Hillary Clinton in which she allegedly suggested that Democrats could control Ron DeSantis (Gorman 2023). This deep fake video was seen by almost 900,000 viewers, but it was labelled as AI-generated by platform X.

The cases described above were mainly aimed at harming rival candidates or were parodic in nature. There is also a second pillar of the use of deep fakes for election campaign purposes. Francis Suarez, the Republican mayor of Miami, used his own deep fake avatar in the pre-campaign for the 2024 US presidential election. The quality was poor, but it was aimed at allowing communication with voters. Suarez eventually withdrew from running for the Republican Party nomination (Economist 2023).

In our opinion, none of the analysed cases, even those that amassed a large audience, had the potential to significantly affect the election result at this point. What should attract attention is the growing number of deep fakes that are deliberately shared by leading political actors and contribute to the pollution of the information space. Here, the sheer number of deep fakes may further erode the epistemic value of the media and slowly edge towards doomsday scenarios.

Turkiye

The 2023 presidential election in Turkiye was a bitter fight between the incumbent president Tayyip Erdogan and the opposition. In May, opposition leader Kemal Kilicdaroglu accused Russia of attempting to manipulate public opinion with AI-generated content (Dallison 2023). Earlier, the third main candidate, Muharrem İnce, decided to withdraw from the race in response to deep porn content depicting him that circulated on social media (Michaelson 2023). Although İnce’s popular support was estimated at 5% and he was not among the top candidates, forcing a candidate to withdraw has a completely different qualitative dimension, especially since the fake content was revealed at the very end of the campaign. However, this was not the result of an accumulation of synthetic content, but a personalized attack on a specific person.

Additionally, Erdogan’s staff used an edited clip in which his main opponent, Kilicdaroglu, appeared to perform alongside a representative of the Kurdistan Workers’ Party, recognized as a terrorist organization (Sparrow & Ünker 2023). Although it was labelled a deep fake by the media, two different clips, one showing the leader of the terrorist organization and one showing Kilicdaroglu, were probably edited and merged. The technical nature of this content is unclear, but it very likely imitated a deep fake disinformation pattern.

Argentina

In October 2023, presidential elections took place in Argentina. The campaign period was marked by the extensive use of deep fake technology to discredit political opponents or for self-promotion (political advertising). These phenomena prompted a “New York Times” commentator to call it “the first AI election” (Nicas 2023).

Losing candidate Sergio Massa’s campaign staff used deep fake technology on a large scale to generate his election posters. Some of them were stylized as classic Soviet propaganda posters. Additionally, Massa’s staff produced dozens of images using pop culture references and memes, including movie posters that incorporated the image of Massa depicted as a strong, fearless leader (Nicas 2023).

AI technology was also used to produce discrediting materials. For example, a deep fake in which Massa’s opponent, Javier Milei, allegedly explained how the business of selling human organs might work was marked as AI-generated, but it was clearly intended to cause harm and to lower the level of trust in Milei (Käss 2023). Massa’s staff generated footage of other important political actors in Argentina, presenting Milei as a zombie or a madman. In response, Milei shared AI-generated images, presenting Massa as a communist leader. The campaigns conducted by both politicians gained enormous popularity. The images generated by Milei supposedly reached up to 30 million viewers (Nicas 2023) and single images were regularly viewed or shared by thousands of recipients.

The actions of election teams have apparently encouraged supporters of both politicians to experiment with deep fakes. Again, satirical contexts and artistic associations were mainly used, but the sheer number of fake images circulating on social media effectively undermined trust in information. Some real recordings were labelled as fakes by the politicians’ supporters (Nicas 2023).

None of the deep fakes used was groundbreaking enough to completely change the outcome of the election, but one should note the multidimensional consequences that the mass use of deep fakes might have for the functioning of society and the information space. Particularly disturbing is the new dynamic of public debate in Argentina: both sides responded with AI-generated content, leading to a kind of arms race, and the acceptance of this type of strategy was reflected in the behaviour of citizens.

Of all the examples we have discussed, Argentina has come closest to the use of deep fakes for electoral purposes on a mass scale. However, it should be noted that the vast majority of AI-generated content did not imitate reality, and the parodic nature allowed for relatively easy recognition of the materials.

Poland

In August 2023 the main opposition party in Poland used voice cloning to dub emails leaked from a government mailbox with the voice of Polish Prime Minister Mateusz Morawiecki. The deep fakes were disseminated on social media. They also featured the controversial CEO of PKN Orlen, the largest Polish state-owned energy company.

The posts collected tens of thousands of interactions of various types, but they did not gain much traction in Poland, mainly due to critical assessment by the media and the non-sensational nature of the deep fakes. They were widely considered a “new stage of political struggle” and an attempt to test an unregulated grey zone (Breczko 2023).

In response to the videos shared by the opposition, a member of the government coalition created a rather amateurish deep fake that met with almost no response. He used the cloned voice of Morawiecki’s main rival, Donald Tusk, who allegedly admitted he was a fraud.

United Kingdom

In October 2023, a recording was published on platform X allegedly presenting the voice of the leader of the opposition Labour Party, Keir Starmer, swearing at and attacking his staff in an obscene manner. The publication was timed to coincide with the party conference in Liverpool, “probably their last before the UK holds a general election” (Meaker 2023a), which might take place in 2024. The shocking recording was quickly debunked by Labour representatives as a “deep fake” but still attracted a significant audience, “approaching nearly 1.5 million hits” (Bristow 2023).

This particular deep fake was clearly aimed at undermining trust in Starmer as a potential candidate for prime minister, but it was quickly debunked, long before the election, which reduced its manipulative potential.

France

An interesting minor case was recorded in September 2023 in France, where one of the candidates in the Senate elections, Juliette de Causans, used AI to beautify her election poster. The AI generated a heavily modified image, which was met with criticism (Styllis 2023). As de Causans was not among the top candidates, her chances of influencing the election results with the retouched photo were relatively low, but this application of deep fake technology created an interesting precedent that may become an alternative to traditional methods of manual retouching aimed at influencing voters.

India

In April 2023 a representative of the ruling BJP party released an audio recording in which one of the opposition leaders, Palanivel Thiagarajan, allegedly accused his own party of illegal financial transactions. Thiagarajan denied the accusations, suggesting the possibility of deep fake manipulation. However, later analysis of the audio track was inconclusive (Christopher 2023b).

The attempt to discredit Thiagarajan is unlikely to have a significant impact on future elections, mainly because of the relatively long time until they are next held. It seems that Thiagarajan actually fell victim to a deep fake, but it is worth noting that the authenticity of recordings can now be questioned simply by invoking deep fakes. This phenomenon has been widely described by researchers as a potential threat to the integrity and credibility of information and the media. Researchers coined the term “liar’s dividend” to describe the leverage gained by people who deny the veracity of authentic materials by calling them fakes (Chesney & Citron 2019).

An original form of political promotion is deep fakes presenting the Prime Minister of India, Narendra Modi, singing popular songs. They regularly gain several million views and attract positive reactions on social media. Although the fake nature of the content is obvious, it helps to warm Modi’s image and may evoke positive associations with the politician. Moreover, recordings of Modi’s speeches are also generated in Indian languages other than Hindi, allowing him to reach communities that are usually excluded from political debate; this should be seen as a specific form of political advertising (Christopher 2023a).

Bulgaria

Bulgaria is known to be a hotspot for Russian influence as well as foreign and domestic disinformation (Nehring & Sittig 2023), but it saw surprisingly few incidents of deep fake disinformation before the parliamentary election in April 2023. As in other countries, some deep fake videos featured well-known news anchors reading entirely fabricated news (Ignatow 2023). The most probable motives behind these deep fakes were shadowy business interests and smear campaigns without direct political implications.

Yet, a couple of weeks before the regional elections in October 2023, a deep fake video of Prime Minister Nikolai Denkov circulated on social media and made its way into all major news outlets (BNT 2023). Denkov allegedly addressed the entire nation, explaining a rather odd investment scheme involving the Russian oil company Lukoil. The video was quickly debunked by all major Bulgarian media, which pointed to a mispronunciation in the audio track that resembled Russian rather than Bulgarian intonation.

The biggest political implications of this deep fake might have been psychological in nature, as it spread fear about Russian election interference and discredited Denkov’s image. Yet, comparing pre-election polls with the election outcome, if this video had any consequences for the elections, the only plausible effect would have been to discourage turnout by reinforcing the general disenchantment with politics and politicians in Bulgarian society.

Taiwan

The 2024 presidential election in Taiwan has already been targeted by deep fakes. In August 2023, an audio deep fake of Ko Wen-je, the nominee of the Taiwan People’s Party and former mayor of Taipei, surfaced on social media and was mailed to several media agencies (Maiberg 2023b). Ko allegedly criticized his opponent’s visit to the USA, called him pompous, and said that his supporters were paid up to $800. Ko and his party quickly debunked the claims, and the Investigation Bureau confirmed the fake nature of the recording.

While the entire outlook, instruments and tactics of this deep fake attack resemble classic “active measures” of influence and interference operations and thus suggest a professional campaign, there was no immediate visible effect on the election process. Since the content of this deep fake was barely sensational enough to decisively influence the outcome of the election, it was most probably not meant to swing the entire election, but should rather be seen as one small piece of a larger mosaic of disinformation.

Indonesia

In Indonesia the presidential election will be held in 2024. The first signs of the use of deep fakes were recorded in 2022. A deep fake video depicting potential candidate Anies Baswedan allegedly supporting a person “accused of embezzling charity money” was aimed at discrediting him (Harish 2023). Yet, so far, the nature of these deep fakes has not directly threatened the outcome of the election.

In April, the voice of President Joko “Jokowi” Widodo was used to create a cover of the popular song “Asmalibrasi”, which went viral on social media, racking up over 5 million views and 10,000 retweets on Twitter, as well as over 188,000 likes on TikTok. In October, a video in which Jokowi spoke Chinese gained a significant number of views estimated at more than 2 million on TikTok alone (Harish 2023). While the cover of the song can be treated as a form of warming Jokowi’s image, the synthesized speech has a deeper political dimension. On the one hand, it can be used to promote the politician’s linguistic skills, and on the other, to create an unclear connection between Jokowi and China, which, in the face of anti-Chinese sentiment in Indonesia, already has a clear political dimension. However, in both cases no immediate political effect threatening the 2024 elections was observed.

Slovakia

Of all the elections in 2023, Slovakia probably saw the most challenging deep fake disinformation attempt. In September, two days before the election and during the traditional 48-hour moratorium on political campaigning and reporting, an audio deep fake appeared on social media. In it, the candidate of the liberal Progressive Slovakia party, Michal Šimečka, and journalist Monika Tódová from the newspaper “Denník N” allegedly discussed a scheme to rig the election by buying votes from the country’s marginalized Roma minority (Meaker 2023b). What made this disinformation attack particularly tricky was that it used only an audio file and was spread so close to the election date, making it harder to debunk effectively.

While the tactics and specifics of this deep fake disinformation attack made it particularly challenging, it might also have been relevant to the outcome of the election. Pre-election polls saw a tight race between Šimečka’s Progressives and the SMER party, and most polls predicted SMER to win the election (SMER in fact won by 5 percentage points). In such a tight race, it was very hard to measure the effect of this one deep fake, yet the content of the audio suggested that it was meant to demobilize Šimečka’s voters and mobilize far-right and populist voters. In this case of a highly contested, polarized and tight election, with little to no time to react, deep fake disinformation had the potential to make a significant difference.

Consequences for the election processes

According to estimates, the number of deep fakes posted online tripled in 2023 compared to 2022, while the number of audio deep fakes was eight times higher (Ulmer & Tong 2023). This points to a certain trend, but it is mainly quantitative in nature and does not speak directly to the power of deep fakes to influence election processes. Even an increase in individual cases does not necessarily translate into a decisive impact on voting behaviour. This impact is extremely difficult, if at all possible, to measure. Deep fakes are part of the disinformation landscape and should be analysed as a contributing factor. The threats and risks of deep fakes for elections should also be considered in the context of undermining trust in the election process as well as in information, truth, facts, authenticity and the media. This is particularly important in view of the year 2024: according to the Integrity Institute (2023), the major elections in 2024 will directly affect 3.65 billion people worldwide (Harbath & Khizanishvili 2023).

The USA and Argentina seem unique due to the extensive use of deep fakes, as evidenced by the number of reported cases, though not necessarily by their quality or persuasive potential. In our opinion, a breakthrough should primarily be seen in the mere fact of using this specific technology in the context of elections, not in its direct consequences. Some countries had their “first deep fake moments” (Bristow 2023) in 2023, whereas others became the stage for tests that may lead to the widespread use of this technology for electoral purposes in the future. However, this by no means allows us to conclude that the information environment has been flooded with deep fakes on a global scale, or that an “information apocalypse” is already taking place. This does not mean, however, that the epistemic value of audio and visual materials remains unchanged.

Although we believe that 2023 has not brought any “breakthrough-deep fakes” so far, it is worth noting cases in which deep fakes could have an impact on the election results. Two cases described above deserve special attention.

In Turkiye, one of the candidates was practically forced to withdraw from the presidential race and faced severe reputational consequences, which could have contributed to a new distribution of votes. The use of deep porn against Muharrem İnce was adapted to local conditions and the candidate’s personal situation, and his withdrawal was a clear confirmation of the effectiveness of a “deep porn strategy”.

The consequences of the campaign carried out in Slovakia are difficult to estimate. Analysis of the poll results shows slight fluctuations in favour of SMER, while the attacked Progresívne Slovensko did not record any significant declines in the last days of the election campaign. Deviations of 1 percentage point might be treated as falling within statistical error (Politico 2023), but they can also be seen as decisive, allowing for the formation of a government coalition in the tight race between the two leading parties.
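To put the scale of such deviations into perspective, a rough back-of-the-envelope calculation can be made, assuming a typical national poll of about $n = 1000$ respondents and a party share of roughly $p = 0.2$ (both figures are illustrative and not taken from the cited polls):

\[
\mathrm{MoE}_{95\%} \approx 1.96\sqrt{\frac{p(1-p)}{n}} = 1.96\sqrt{\frac{0.2 \cdot 0.8}{1000}} \approx 0.025,
\]

that is, roughly 2.5 percentage points, so a 1-point shift in a single poll lies well within ordinary sampling error, which is why the actual impact of the attack cannot be read off the polls alone.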

In both cases, the strategies implemented by the attackers are of key importance. The attacks were executed in the last days of the election campaigns, which shortened the reaction time and partly prevented the necessary debunking. We can therefore point to the exploitation of classic “decisional checkpoints”, defined as the short time preceding an election when “irrevocable decisions are made, and during which the circulation of false information therefore may have irremediable effects” (Chesney & Citron 2019). The campaigns in Turkiye and Slovakia may pave the way for future uses of this strategy.

According to a poll conducted in August 2023, “more than 70% of citizens in the UK and Germany who did understand AI and deepfake technology say they are concerned about the threat such technology poses to elections” (Luminate 2023). Home Security Heroes (2023) conducted a similar survey in the USA in the context of the 2024 presidential election. As many as 77% of respondents had encountered deep fake content related to political candidates and, unsurprisingly, 74.7% expressed concern about the potential impact of deep fakes on the upcoming election.

It is safe to conclude that deep fake technology carries a huge potential for psychological influence. In fact, voters’ perceptions and fears about deep fakes might lead to situations in which malign actors do not need to use a deep fake at all, but simply invoke the possibility that a certain piece of information might be one. There are several examples where such claims have already influenced politics or social processes. In early 2019, part of the military attempted a coup d’état in Gabon after rumours spread that a video message of then-president Ali Bongo was actually a deep fake and the president himself was dead (Delcker 2019).

All the surveys mentioned above clearly support the claim that, notwithstanding the actual impact deep fakes have had on elections so far, voters and politicians alike perceive them as a threat. Over 40% of respondents to a study in the USA indicated a sense of scepticism or of being misled or misinformed, which translates into more frequent questioning of the authenticity of displayed materials and an active search for confirmation of their veracity (Home Security Heroes 2023). A recent study (Ahmed 2023) suggests that exposure to deep fakes is correlated with social media news scepticism. This partly fits the “apocalyptic” narrative, but the scale of the phenomenon is still limited, as we do not see the erosion of democratic processes or a complete questioning of media authenticity. In this sense, ocularcentrism still seems to prevail.

Nevertheless, the sheer number of deep fakes makes it more difficult to separate truth from fakery, which creates new challenges for recipients. CBS alone rejected around 900 videos that allegedly presented events in the Gaza Strip in autumn 2023, which forced the media company to announce an increase in manpower to counteract “the deep fake pandemic” (Lebovic 2023). Again, this fits the dystopian narrative of an “information apocalypse”, but the media countermeasures constituted an effective barrier, even though they required additional expenditure and resources.

This does not mean that the epistemic value of information remains the same. The aforementioned increase in the number of deep fakes gradually heightens uncertainty among recipients, which may be particularly important at critical moments. Numerous images that appeared in the media and presented real recordings of the war in the Gaza Strip were dismissed by online commentators as fakes (Bedingfield 2023). In October 2023, heavily contested online debates with millions of participants revolved around an image released by the Israeli government that depicted innocent children as victims of the terrorist attack. Online users accused the Israeli government of deep faking the image and cited deep fake detection software as evidence. Yet the picture was genuine, and the mere insinuation that the government had employed deep fake technology was used as a propaganda weapon (Maiberg 2023a).

In the cases described above, it is not so much the deep fakes themselves that are at the core of the problem, but rather the fear and uncertainty about whether a piece of information might be a deep fake. This is one of the indications of the information apocalypse narrative that has already been confirmed. The mere existence of high-quality deep fake technology can be used as a weapon of information warfare and propaganda. This demonstrates that the disruptive potential of deep fakes does not necessarily stem from their ability to persuade audiences of their messages. Instead, perceived psychological threats, fear, uncertainty and the inability to distinguish between authentic and inauthentic content, between fact and fake, may be enough to exert influence and manipulate, especially if fears are fuelled by unreliable reports and society does not develop protective mechanisms. However, the intensity of these processes allows us to distinguish them from the extreme vision of an information apocalypse without disregarding the threats, while challenging two contradictory narratives that have emerged.

The first of them promotes the belief that an “information apocalypse” is coming, a belief which in our opinion is mainly rooted in journalistic discourse. The increase in the number of deep fakes reported in 2023, and the case studies described above, should obviously be seen as a warning signal, but none of the analysed campaigns heralds the collapse of electoral integrity. Much more difficult to measure are the consequences of uncertainty about the authenticity of the media and the decline in trust in news.

The second narrative, in contrast to the doomsday scenarios, tends to underestimate the problem (Economist 2023; Habgood-Coote 2023; Immerwahr 2023; Simon et al. 2023). Some experts doubt that the potential of deep fakes is large enough to completely change the outcome of an election, a view that finds an outlet in statements such as: “We still have not one convincing case of a deepfake making any difference whatsoever in politics” (Economist 2023). One can only partially agree with such an opinion, as the level of impact and harmfulness can be graded. Even if there has not been a “breakthrough-deep fake” so far, we should not automatically assume that individual cases of deep fakes have had no influence on elections and the political process. Although fears about the impact of generative AI on misinformation might indeed be partly overblown (Simon et al. 2023), one should not be overly optimistic in assuming that a “breakthrough-deep fake” will not occur in the future. These two narratives regarding deep fakes will intertwine, contributing to additional information chaos.

Another consequence of the growing number of deep fakes might be growing interest in using deep fake technology for political advertising. It is a powerful tool, as it already makes it possible to change many characteristic features of candidates and to minimize or completely eliminate shortcomings in their performances. With the help of deep fake technology, candidates may appear more appealing, younger, better-looking and more energetic, and can speak to many audiences at the same time, in different languages, customizing their messages for each voter personally. The ongoing experiments with speech translation and personalized messages may set new standards in political advertising, the consequences of which cannot yet be clearly assessed in technological, legal or ethical terms. These problems, however, are still understudied and do not receive enough recognition.

Conclusions

We believe that none of the cases in which deep fakes have so far been used in the context of elections has had a decisive impact on the course of the elections, which does not mean they had no effect at all. The increasing number of reported cases may indicate a trend with further potential for growth. Although dystopian predictions of an “information apocalypse” have not (yet?) come true, there are already noticeable signs of undermined trust in information, politics and the media, which strengthens the sense of uncertainty in society and opens up new possibilities for manipulation. An excessively alarmist narrative does not contribute to understanding the impact of deep fakes. In our opinion, it is definitely more cognitively and socially responsible to use terms that better reflect the nature of the phenomenon (i.e. “pollution of the public information ecosystem”), even if an “information apocalypse” may be the ultimate consequence of over-pollution.

Our analysis of a non-representative sample of deep fake election-related content in 2023 has produced a variety of interesting results:

First, the sheer number of deep fakes registered in the context of elections increased in 2023. However, this strategy was not observed in most elections. Nevertheless, it is very likely that hardly any future election will be completely safe from deep fakes. For the upcoming “super-election-year” 2024 this means that the quantity of deep fakes created and disseminated in the context of elections will most probably increase again. Some countries will face “deep fake campaigns” for the first time in 2024 and may draw upon the experience of countries where the use of deep fakes was tested on a larger scale in 2023 (i.e. USA, Argentina) or in a more targeted way (i.e. Turkiye and Slovakia).

Second, this increase in quantity was not matched by an increase in the quality needed to influence elections. The fear of an “information apocalypse”, as well as of massive interference in the outcome of elections, has not (yet?) materialized. Only in two of the cases analysed in this study, i.e. Turkiye and Slovakia, is it safe to assume that deep fakes did have some effect on the election. But even there, deep fakes did not “swing”, “steal”, turn around or decisively influence the outcome of the election, and they should not be seen as real “breakthrough-deep fakes”.

Third, despite the increase in the quality of deep fake technology, there has not been a case in which this quality led to an equally high-quality attack on elections. Instead, so far it seems that it is not the quality of deep fakes that makes them a dangerous weapon against democratic elections, but the technology itself and its perception. At the moment, it is not about the one big, powerful deep fake the night before election day that turns the whole election around; it is about a dozen or so deep fakes of mediocre quality on minor political issues that create general distrust in parties, candidates and the election process itself, demobilize voters and build a cumulative effect of distrust and disenchantment. If the apocalyptic narrative teaches us anything, it is the need to pay particular attention to human interactions with deep fakes and to the consequences of declining trust in information and its carriers. Little by little, deep fakes attack not only individual politicians and political decisions, but the very existence of truth, authenticity and facts. In the case of elections, this means that we should also worry about election integrity and basic trust in democracy.

Fourth, our findings do not suggest that deep fakes pose no direct threat to the outcomes of elections. The identified cases of attacks on decisional checkpoints should be analysed in detail, as they highlight a strategy that might still be applied to swing the outcome of elections directly. The resilience of democratic systems will be of key significance, as an erosion of trust may increase the probability of successful, direct attacks. Therefore, apocalyptic visions also have scientific value, because they draw attention to the core of the problem, even if they do so in an exaggerated and overly alarmist way.

Fifth, next to their quantity and the cumulative effect of their presence in the information space, one of the biggest threats posed by deep fakes is their psychological effect. Voters, politicians and journalists are already confused and uncertain about the authenticity of information, partly as a consequence of fake news and disinformation campaigns. The mere fear of not being able to detect deep fakes and distinguish them from authentic content might alter voters’ behaviour due to insecurity, i.e. for psychological reasons. This also suggests that research, public discourse and AI-media literacy efforts should probably shift away from a focus on deep fake technology towards human interaction with and responses to deep fakes, while journalistic discourse should shape public debate in a responsible way, without dazzling audiences with apocalyptic visions that may produce the effect of a self-fulfilling prophecy.