Introduction

The world is immersed in a moment governed by change and uncertainty. The current geopolitical context is marked by armed conflicts, the rise of populism, and the polarization and fragmentation of societies. Meanwhile, the media system has turned hybrid (Chadwick, 2017), consisting of multiple information channels in which users are placed at the center of the production process (Palau‐Sampio and Carratalá, 2021). In this media landscape, social networks have emerged as the cradle of disinformation, becoming the space where audiences regularly consume content and news.

In this sense, information disorders have become a subject of research and debate within academic communication studies. Del-Fresno-García (2019) defines information disorders as intentional productions aimed at generating doubts and false controversies for economic or ideological gain. These disorders are characterised by their interconnection and rely on post-internet technologies, which have transformed the nature of collective interpersonal communication. Among these products, terms such as disinformation, fake news, hoaxes, propaganda, and post-truth can be found.

The concept of post-truth was introduced globally in 2016 as a hallmark of an era (Estrada-Cuzcano et al. 2020). In this regard, Sánchez-Illán (2021) describes it as the deliberate distortion of reality, in which objective facts have less influence than appeals to emotions and personal beliefs, with the aim of creating and shaping public opinion and influencing social attitudes. Thus, the term post-truth refers to a context marked by information disorders.

Disinformation, on the other hand, is considered “any false information content that has been deliberately created and disseminated” (Wardle, 2018). The concept “hoax” is generally used to refer specifically to disinformative content. Additionally, fake news is a type of disinformation characterised by imitating news. It exploits existing public beliefs to influence and destabilise society, creating confusion and uncertainty (Waisbord, 2018).

In contrast, we find the concept of misinformation, which is also widely used in the scientific literature. It can be described as false, erroneous, or misleading information (Fetzer, 2004) although it is likely not intended to deceive (Wardle and Derakhshan, 2018). Therefore, it refers to inaccuracies that, while potentially harmful, are not intentional.

On the other hand, the term “propaganda” is also frequently used and is described as the art of influencing, manipulating, controlling, promoting, changing, inducing, or achieving the acceptance of opinions, attitudes, actions, or behaviours (Martin, 1958) and is primarily directed at the general population (Taddeo, 2012; Milina, 2012). It is important to note that the general use of the word “propaganda” is confusing, as organisations such as the North Atlantic Treaty Organisation (NATO) or the European Parliament (EP) have used it to refer to disinformation, using both concepts interchangeably (Bayer et al. 2019). However, although these terms are intrinsically linked, they are not the same, since while propaganda employs disinformation as a tool for manipulation and control, disinformation can also serve non-propagandistic purposes.

In 2020, the health crisis had an unprecedented impact on our society, with disinformation taking on multiple forms to reshape genuine information (Brennen et al. 2020; Naeem et al. 2021; Salaverría et al. 2020). Additionally, it profoundly influenced news, journalism, and the media system (Casero-Ripollés, 2020; García-Marín, 2020; Aguado-Guadalupe and Bernaola-Serrano, 2020). Initially, one of the pandemic’s characteristics was that we were unaware of how much we did not know (Rogers, 2020), as the initial uncertainty was compounded by the sense of unreality stemming from a shared event occurring simultaneously worldwide (Alba, 2020). Moreover, the pandemic’s evolution on an international scale, along with restrictive measures and subsequent lockdowns, altered how the population accessed and shared information (Magallón, 2020; Peña et al. 2021), demanding an effort to organize and confront informational disorder (Masip and Palomo, 2020). In this scenario, science could not resolve these issues quickly enough, leaving an informational void exploited by purveyors of falsehoods, such as anti-vaccine campaigners, denialists, and even populist political parties. This made any news event susceptible to becoming a conspiracy theory.

During the COVID-19 confinement, there was a reported increase in the use of digital platforms and applications, which have become significant information intermediaries (Fernández-Roldán-Díaz, 2021). In this context, social networks serve as the public arena where hoaxes go viral (Vázquez and Pulido, 2020; Bak et al. 2020; Rosenberg et al. 2020; Wetzelhütter and Martin, 2021), significantly exacerbating the adverse effects of disinformation (Alonso-López et al. 2021; Martel et al. 2020; Novotná et al. 2023; Haq et al. 2022) by amplifying its reach across multiple formats (Sundar et al. 2021; Amorós-García, 2018; Rodríguez-Pérez, 2021) and channels. Indeed, social networks such as TikTok (Chayka, 2022), Facebook (Allcott et al. 2019), and X (Twitter) (Gutiérrez-Coba et al. 2020) provide fertile ground for alternative facts and emotions to take precedence over indisputable facts (Magallón, 2020; Alonso-González, 2021). According to Lelo and Fígaro (2021), disinformation can be more appealing than truthful information because its creators prioritise rapid content generation.

Just two years later, the world plunged into another internationally scaled event that would profoundly affect Europe, unleashing another wave of disinformation and propaganda: Russia’s invasion of Ukraine (Giovanna-Sessa, 2022; García-Marín and Salvat-Martinrey, 2023). On February 24, 2022, Russian President Vladimir Putin announced on television the start of a “special military operation” in Ukraine. Subsequently, Russian troops crossed the border and invaded the neighboring country amidst bombings in major Ukrainian cities. That same day, the Ukrainian government denounced the invasion on its X (Twitter) account with a meme, accompanied by the hashtags #StopRussianAggression and #RussiaInvadedUkraine (Tuñón-Navarro et al. 2024). This marked a before and after in the conception of contemporary warfare, more hybrid than ever, emerging as “the first digital world war” (Carrión, 2022; Morejón-Llamas et al. 2022; Aso, 2022).

This conflict is a continuation of the military and propaganda operation initiated by Russia in 2014 (Veebel, 2015), which resulted in the invasion and annexation of the Crimean Peninsula. However, the difference between the two conflicts lies in the use of technology (Lucas and Pomerantsev, 2016; Colom-Piella, 2020). Indeed, the current invasion features unprecedented media representation, being covered by journalists and YouTubers as well as anonymous individuals on TikTok. This way of narrating the war originated in the Syrian conflict, where technological revolution and citizen journalism gained unprecedented relevance. Furthermore, the use of social media in the Arab Spring, as well as in the Occupy Wall Street movement or the 15-M, served as a learning experience for Russian propaganda systems (Magallón, 2022).

Moreover, the war has strained the global geopolitical landscape, leading to a geostrategic reconfiguration that has once again resulted in a bipolar world, where disinformation has become an “asymmetric and indirect military method” (Milosevich-Juaristi, 2017). Thus, one of its primary objectives is to polarise and divide audiences (Sobieraj and Berry, 2011; Hwang et al. 2014), focusing on “the other” (Fandiño, 2014), a common enemy (Hopp et al. 2020) who becomes the target of disinformation.

Furthermore, this conflict has firsthand experienced the impact of hybrid warfare, where, in addition to asymmetric equipment and armament technology, informational warfare has played a key role. All of this directly affects the functioning of communities, societies, and democracies (Matakos et al. 2017; Cinelli et al. 2021), making the war in Ukraine the first conflict where lies have been experienced live from minute one (Montes, 2022).

Precisely, both the COVID-19 pandemic and the war between Russia and Ukraine are particularly paradigmatic in Europe, as they are the two major events that have marked the third decade of the 21st century on an international scale (Sánchez-del-Vas and Tuñón-Navarro, 2023). In fact, disinformation is capable of spreading up to six times faster than accurate information, multiplying exponentially in times of crisis (Vosoughi et al. 2018), such as the COVID-19 pandemic or Russia’s invasion of Ukraine. Thus, both events have revealed a new way of using disinformation, with consequences both online and offline, turning it into an effective external weapon for winning over public opinion.

To combat disinformation, fact-checkers worldwide joined forces in the fight against fake news (Tuñón and Sánchez-del-Vas, 2022). In fact, fact-checking can be defined as the process by which data and information about a given content are checked and, where false, refuted (Pamment and Kimber Lindwall, 2021). Particularly, the pandemic propelled collaborative fact-checking, which has been refined amid the viral disinformation spread throughout the invasion of Ukraine (Morejón-Llamas et al. 2022). The speed of the verification process provided by international fact-checker alliances has been crucial in shedding light on disinformation disseminated internationally.

Research methodology

This research has been grounded in a mixed methodology that combines both quantitative and qualitative techniques. This methodological triangulation has allowed for the integration and validation of the findings obtained, thereby strengthening the reliability of the results. Thus, it has provided a broader perspective of the study object.

Research objectives and hypotheses

Main objective: To study the characteristics of disinformation related to the COVID-19 pandemic and the war between Russia and Ukraine in the four countries under analysis, over the specified time period.

Specific objective: To establish the similarities and differences between the hoaxes of both case studies, considering their frequency, format, typology, platform, and purpose.

Hypothesis I: Peaks of dissemination of debunked hoaxes were observed in both the pandemic and the war at their onset, which declined more rapidly in the case of the war due to its lack of direct impact on the audiences of the selected European fact-checkers.

Hypothesis II: During the pandemic, the viral spread of disinformation primarily took the form of textual fabrications, whereas in the war, there was greater use of contextually detached images due to the language barrier separating the conflict from the rest of Europe.

Hypothesis III: In both the pandemic and the war, verified hoaxes utilized social media as their most prolific channels of disinformation, a trend that polarized audiences against a common (other) enemy.

Fact-checks as the unit of analysis

The unit of analysis is defined as “each of the elements to be quantified” (Wimmer and Dominick, 1996, p. 170), “susceptible to be expressed and broken down into categories and subcategories” (Fernández-Chavez, 2016, p. 38). In this regard, an approach to hoaxes has been chosen through verifications, also known as fact-checks, conducted by eight fact-checking media outlets from four European countries. Therefore, the unit of analysis consists of the verifications published by these outlets.

The use of verification media as a tool to study hoaxes has been employed in previous research. Notable among them are studies by Salaverría et al. (2020); Aguado-Guadalupe and Bernaola-Serrano (2020); García-Marín (2020); Naeem et al. (2021); Brennen et al. (2020); Alonso-González (2021); García-Vivero and López-Martínez (2021); Almansa-Martínez et al. (2022); Ruiz Incertis et al. (2024), among others.

Comparative analysis of case studies

Comparing case studies provides the opportunity to make generalizations about similarities between empirical phenomena shared by a set of cases. This aims to discover what is common to each case (Landman and Carvalho, 2003) and to make inferences about possible future conclusions based on the presence of antecedent factors. Thus, the paper is grounded in one of the comparative design systems proposed by Landman and Carvalho (2003), “the most similar systems design” (MSSD). In this line, this research revolves around the following case studies: two international events, four European countries, and eight fact-checking media outlets.

The 2020s have been marked by two significant international events impacting Europe: the COVID-19 pandemic and Russia’s invasion of Ukraine, both accompanied by high levels of disinformation. While these events differ in nature, it is deemed relevant to identify common disinformation patterns and strategies while acknowledging differences. Comparing them offers insights into the impact of hoaxes in Europe. The selection of Spain, Germany, the United Kingdom, and Poland aligns with Hallin and Mancini’s media model (2004). The authors identify three distinct media models: first, the polarised pluralist model, in which the media are closely related to politics; second, the democratic corporatist model, where there is a relationship between the media and both political and economic powers; and third, the liberal model, in which the media maintain ties with economic power. This classification offers a comprehensive framework for analysing the structure and functioning of media systems across various contexts. It facilitates systematic comparison and elucidates the interplay between media institutions and their sociopolitical environments.

While Hallin and Mancini’s model omits Eastern European countries, including Poland enhances the research’s European representativeness and diversity. Eight specialized fact-checking media outlets in four selected countries were analyzed to study disinformation in Europe regarding the COVID-19 pandemic and the Russia-Ukraine war: Newtral and Maldito Bulo (Spain), CORRECTIV Faktencheck and BR24 Faktenfuchs (Germany), FullFact and Reuters Fact Check (United Kingdom), and Demagog and FakenewsPL (Poland).

It should be noted that, to carry out the comparative analysis of case studies, hoaxes were grouped by country and by pairs of years; that is, all content verified by the selected fact-checkers related to the pandemic (2020–2021) and to the war (2022–2023), respectively, in each region. This has allowed the establishment of similarities and differences between both phenomena in the four countries, considering variables that will be explained further below.

Variables and categories of investigation

The following variables, categories, and subcategories have been followed to conduct the coding for the research analysis.

Variable 1 (V1): The frequency of fact-checking refers to the rate at which false claims are verified on a daily basis by the fact-checkers.

Variable 2 (V2): Format of hoaxes. Refers to the communicative code used to spread hoaxes. Categories of V2: Text; Image; Video; Audio; Combined (subcategories: text and image; text and video; text and audio).

Variable 3 (V3): The hoax typology quantifies the different types of problematic contents found in the information ecosystem. This variable is based on the classification made by Wardle (2017). Categories of V3: Fabricated content: Completely false content created with the intention to deceive; Manipulated content: When real information or multimedia content is manipulated to deceive; Imposter content: When authentic sources are impersonated; False context: Real content that goes viral taken out of context; Misleading content: Hoaxes that use information misleadingly to accuse someone or something; False connection: When the content is not supported by headlines, images, or subtitles.

Variable 4 (V4): Hoax platform studies the channels where the studied hoaxes have been verified. Categories of V4: Social networks; Blogs; Media.

Variable 5 (V5): The purpose of the rumour analyses the possible motivation of the disinformers. This variable has been based on the categorisation carried out by Wardle (2017), who asserts that it is grounded in a classification conducted by British journalist Eliot Higgins. Categories of V5: Poor Journalism: Dissemination of erroneous information due to a lack of journalistic rigor; Parody: Disinformation with satirical purposes created to entertain, but which may be mistaken as genuine information; Provocation: Rumours intended to create controversy or agitation, aiming to manipulate public opinion or divide society; Economic Gain: Seeking economic returns by attracting traffic to websites or platforms, increasing audience for advertising, or promoting deceptive products or services; Empowering a common enemy: It entails inadvertently strengthening or validating a shared adversary’s position or influence through actions intended to oppose them; Political Power or Influence: Utilisation of disinformation to influence public opinion, employing information to gain political advantages or maintain power.
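The coding scheme described above (V2–V5) can be sketched as a simple data structure. This is an illustrative Python representation only; the variable names, category labels, and `validate` helper are hypothetical and do not come from the authors' coding manual.

```python
# Hypothetical codebook mirroring variables V2-V5 described in the text.
# Labels are illustrative shorthand, not the authors' official category names.
CODEBOOK = {
    "V2_format": ["text", "image", "video", "audio",
                  "text+image", "text+video", "text+audio"],
    "V3_typology": ["fabricated", "manipulated", "imposter",
                    "false_context", "misleading", "false_connection"],
    "V4_platform": ["social_networks", "blogs", "media"],
    "V5_purpose": ["poor_journalism", "parody", "provocation",
                   "economic_gain", "empowering_common_enemy",
                   "political_power"],
}

def validate(record: dict) -> bool:
    """Check that a coded fact-check only uses categories in the codebook."""
    return all(record[var] in options
               for var, options in CODEBOOK.items() if var in record)

# One coded fact-check (e.g., an out-of-context image shared on social media).
example = {"V2_format": "text+image", "V3_typology": "false_context",
           "V4_platform": "social_networks", "V5_purpose": "provocation"}
print(validate(example))  # True
```

A closed codebook of this kind is what makes the intercoder-agreement check described later in the methodology possible, since both coders draw from the same finite category sets.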

Content analysis

Content analysis is a fundamental research technique in communication studies, utilized in both Spanish and Anglo-Saxon contexts (Wimmer and Dominick, 1996). To analyze disinformation through fact-checkers’ verifications, only publications refuting hoaxes have been considered, a methodological choice seen in previous research such as Brennen et al. (2020).

Regarding the temporal period under study, it has been chosen to analyze the month of March for four consecutive years: 2020, 2021, 2022, and 2023. This month has been selected as it coincides with significant milestones for this research. In March 2020, the COVID-19 pandemic erupted; in March 2021, vaccination was widespread in most European countries; in March 2022, there was a surge of disinformation regarding Russia’s invasion of Ukraine. Likewise, it was deemed appropriate to continue with the temporal pattern and analyze the evolution of hoaxes in 2023.

Since the aim is to study disinformation related to both the COVID-19 pandemic and the war between Russia and Ukraine, it has been chosen to filter only those verifications that revolve around these topics. In order to establish the proposed comparisons, in March 2020 and 2021, only hoaxes related to the pandemic were analyzed, while in March 2022 and 2023, only publications about the war between Russia and Ukraine were studied. Finally, the selection of the sample publications, following the previously established criteria, resulted in 812 verifications that have been the subject of study. Specifically, 515 verifications correspond to the years 2020 and 2021, while 297 are related to the war between Russia and Ukraine.

To ensure the validity and reliability of the research, a systematic and coordinated approach was employed between two coders. The protocol for applying the coding manual to the various variables involved initially establishing the categories into which each variable was divided, based on an initial analysis of 10% of the sample. After this preliminary round, the coders divided the sample equitably and conducted the coding in parallel, consulting each other to reach a consensus on cases that might present more ambiguity or complexity in their categorization.

After this round of manual coding, the level of agreement between the coders reached 92% of the statements. Before processing the data for detailed analysis, a final joint review was conducted for verification purposes, ensuring that the results obtained are the product of a unified working method. This rigorous approach ensured that the categorisation was both reliable and valid, enhancing the overall robustness of the research findings.
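As an illustration, the reported 92% agreement can be read as simple percent agreement: the share of items to which both coders assigned the same category. The sketch below assumes this interpretation (the paper does not specify the formula, and chance-corrected coefficients such as Cohen's kappa are also common); the function name and sample labels are hypothetical.

```python
def percent_agreement(coder_a: list, coder_b: list) -> float:
    """Share of items on which two coders assigned the same category."""
    assert len(coder_a) == len(coder_b), "coders must rate the same items"
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Toy example: two coders classify four hoaxes by typology (V3);
# they disagree on one item, yielding 3/4 = 0.75 agreement.
a = ["fabricated", "false_context", "manipulated", "fabricated"]
b = ["fabricated", "false_context", "misleading", "fabricated"]
print(percent_agreement(a, b))  # 0.75
```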

Semi-structured interviews with specialized agents

As a secondary and qualitative technique, eight in-depth interviews with specialized agents have been conducted. The interview is the qualitative research technique that grants the most freedom to the actors in the interaction, allowing the interviewer to obtain responses through an informal conversational atmosphere, detached from a predetermined standardized form. In this way, eight professionals specialized in the subject of this research were selected, and the interviews were conducted as online personal communication via Google Meet. This modality has been adopted because online interviews offer greater flexibility in scheduling, accommodating the diverse availability of participants and ensuring a higher participation rate. Additionally, this method mitigates geographical constraints, allowing the inclusion of participants from a broader range of locations, thereby enhancing the diversity and representativeness of the sample.

Moreover, the interviewees were chosen on the basis of their professional profiles and specialization areas, which are detailed in Table 1:

Profile 1: Experts in fact-checking, disinformation and media. This category includes various fact-checkers, academics, and professionals in the field. This profile has enabled an understanding of how falsehoods are monitored and refuted from a fact-checking perspective, as well as providing insights into the characteristics of disinformation associated with the phenomena studied.

Profile 2: Experts in disinformation on digital platforms and AI. This profile is associated with researchers in the fields of engineering and computing. The selected experts are leaders in the study of Artificial Intelligence (AI) and the analysis of disinformation on social networks. Their perspectives have allowed for a deeper exploration of how falsehoods go viral on digital platforms and how their algorithms function.

Profile 3: Experts in disinformation in Europe and European regulation of disinformation. Given that the study is framed within Europe, it has been deemed relevant to engage with various professionals specialising in the investigation of disinformation on the continent. Their contributions have been particularly valuable for a profound understanding of the phenomenon, its challenges, and the policies adopted for its containment.

Table 1 Relevant information about the interviews.

These discussions were transcribed and coded. Additionally, they have been completely anonymised, as reflected in Table 1.

Before conducting the interviews, three provisional scripts, each containing between 10 and 12 questions, were prepared in accordance with the hypotheses underpinning this research. Given the semi-structured nature of the interviews, the questions varied according to the interviewees, their specific areas of expertise, and the flow of the conversation. Each interview lasted between 30 and 45 minutes.

The following outlines the topics and segments investigated during the interviews, organized based on the participants’ profiles and specialized domains:

In the interviews with profile 1, the focus was on comparing and analyzing viral disinformation in various contexts, such as the COVID-19 pandemic and the Russia-Ukraine conflict. The discussion highlighted identifying similarities and differences in misleading narratives, key actors like Russian propaganda, and underlying motives. It also explored different types and formats of disinformation, primary channels for dissemination, and how disinformation adapts to current events. The dialogue concluded with predictions about the future impact and implications of disinformation.

In Profile 2, the interviews focused on how digital platforms facilitate the dissemination of disinformation, with a particular emphasis on the role of algorithms. The questions aimed to identify the platforms and algorithms most conducive to disinformation dissemination, quantify disinformation from orchestrated campaigns, and track its trajectory from the pandemic to current conflicts. Furthermore, the discussion addressed how current social media trends intersect with disinformation dissemination and examined potential benefits for digital platforms. The dialogue also assessed the current and future role of artificial intelligence in generating disinformation.

Experts from Profile 3 were asked about the impact of disinformation on the European public sphere and European regulatory frameworks, exploring how disinformation destabilises Europe’s public sphere and why the region is vulnerable to it on social networks. The interviews also examined the role of external actors such as China and Russia in amplifying instability in Europe, particularly during ongoing conflicts. Interviewees were also asked about the European Union’s response to disinformation, as well as predictions about the future of disinformation in Europe.

Findings

Frequency of verified hoaxes

Disinformation is a phenomenon that is intrinsically linked to current events and, consequently, fluctuates in line with them (Interviewee 7, personal communication, April 26, 2023), as audiences do not continuously pay attention to a topic but rather go through peaks of interest (Interviewee 5, personal communication, May 1, 2023). In this regard, as can be seen in Fig. 1, out of the 812 verifications analyzed in this research, the highest verification peaks are observed in March 2020 (41% of the total sample of hoaxes) and March 2022 (32% of the sample of hoaxes).

Fig. 1: Number of verifications published by country and year.
figure 1

Own elaboration.

Additionally, the country reporting the highest percentage of publications throughout the analyzed temporal period is Spain (N = 390; 48% of the total), followed by the United Kingdom (N = 188; 23% of the total), Poland (N = 136; 17% of the total), and Germany (N = 98; 12% of the total). It is worth noting that in Spain, several factors converge, leading to a significantly larger sample than in the rest of the analyzed cases. “On one hand, it serves as the gateway and exit point for certain narratives that circulate rapidly in Latin America, and on the other hand, its fact-checkers are among the best in the world” (Interviewee 4, personal communication, April 12, 2023). In this sense, their highly qualified teams, along with their greater capacity to monitor content, allow them to verify more hoaxes in the same amount of time as other fact-checking media.
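The country shares above follow directly from the raw counts reported in the text. The brief sketch below reproduces that arithmetic (rounding to whole percentages); the dictionary layout is illustrative, not part of the study's instruments.

```python
# Verification counts per country, as reported in the text (N values).
counts = {"Spain": 390, "United Kingdom": 188, "Poland": 136, "Germany": 98}

total = sum(counts.values())  # full sample of 812 verifications
shares = {country: round(100 * n / total) for country, n in counts.items()}

print(total)   # 812
print(shares)  # {'Spain': 48, 'United Kingdom': 23, 'Poland': 17, 'Germany': 12}
```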

In Spain, in March 2020, 239 hoaxes about the pandemic were verified, a figure that decreased to 54 in 2021. In March 2022, 89 verifications related to the war were published, while in 2023, only 8 were reported.

In Germany, during March 2020, 26 hoaxes related to the coronavirus were published, while in 2021 the figure increased to 39. However, in March 2022, 25 verifications about the invasion were recorded, which decreased to 8 in 2023.

In the United Kingdom, in March 2020, 33 hoaxes about the health crisis were verified, while in 2021, 67 were reported. Subsequently, in March 2022, 76 verifications related to the Russian-Ukrainian conflict were published, which experienced a significant decrease in 2023, with 12.

In Poland, in March 2020, 33 hoaxes related to the pandemic were verified, decreasing to 24 in 2021. However, in March 2022, 66 verifications about the war were published, while in 2023, 13 were reported.

Format of hoaxes

In general, in the two years analyzed for the COVID-19 pandemic, the primary format of viralized disinformation has been text. In fact, 45% of the hoaxes studied during this period were presented in written form. Meanwhile, in the case of hoaxes related to the war between Russia and Ukraine, the predominant format through which these contents were disseminated was combined (71%), specifically through images and text. “In the image, you don’t need to translate anything. A montage of Zelenski with a swastika works the same in Burma, Bolivia, and Spain, which makes hoaxes spread even faster” (Interviewee 2, personal communication, May 22, 2023). This trend is also observed specifically in the four countries studied, as illustrated in Fig. 2.

Fig. 2: Format of hoaxes in percentage, by year and country.
figure 2

Own elaboration.

In Spain, in 2020 and 2021, 38% of the analyzed contents went viral in text form, 25% corresponded to images, and 20% to a combined format of image and text. Video (9%) and audio (7%) had much lower representation. Meanwhile, in 2022 and 2023, it can be observed how this trend changes significantly, as 67% of the contents went viral in a combined format of text and image, 18% as single images, 8% as video, and 7% as text.

In Germany, in 2020 and 2021, 48% of the analyzed contents were presented as text, 22% as videos, and 12% had a combined format of text and images. Likewise, the representation of images and audio was 9% in both cases. In 2022 and 2023, 55% of the analyzed hoaxes were in a combined format of text and images, 24% corresponded to single images, 15% to videos, and 6% to audio.

In the United Kingdom, in 2020 and 2021, 52% of the sample hoaxes were in text format, 17% in a combined format of text and images, and 14% each in single images and in video; 2% were audio, while in 1% of the sample the distribution format was not specified. In 2022 and 2023, significant differences are observed, as 84% of the contents were disseminated in a combined format of text and images and 8% in the form of videos, with almost non-existent representation of text alone (6%) and single images (2%).

In the case of Poland, 2020 and 2021 were marked by textual hoaxes, corresponding to 63% of the sample. 16% of the sample was represented as videos, 11% as combinations of text and images, 9% as single images, while 2% were audio. Again, in 2022 and 2023, significant differences are observed. 67% of the contents were viralized in a combined format of text and images, 16% only in text, while 9% in video, and 8% in single images.

Typology of hoaxes

In general terms, the pandemic (2020–2021) was marked by fabricated content. 47% of all hoaxes analyzed during this period corresponded to this typology. Meanwhile, in 2022 and 2023, 44% of the analyzed hoaxes have been identified as false context. This is a trend that is also observed specifically in the four case studies, as evidenced in Fig. 3. “The war in Ukraine has had a much more visual imprint than the pandemic. Therefore, many images taken out of context and manipulated have been observed” (Interviewee 1, personal communication, May 13, 2023).

Fig. 3: Typology of hoaxes in percentage, by year and country.
figure 3

Own elaboration.

In Spain, in 2020 and 2021, 52% of the country’s sample corresponds to fabricated content, 19% to manipulated content, 12% to false context, and 8% to impostor content, while misleading content and false connection each represent 5%. In 2022 and 2023, 54% of the analyzed hoaxes were reported as false context, 24% as fabricated content, and 11% as manipulated content. Less representation is found in impostor content and misleading content, each at 5% of the sample, and finally, false connection (1%).

In Germany, in 2020 and 2021, fabricated content predominates (38%), followed by misleading content (35%). Manipulated content and false context each represent 11% of the sample, while impostor content and false connection account for 3% and 2%, respectively. In 2022 and 2023, 36% of the contents correspond to false context and 30% to fabricated content; 21% is represented as manipulated content, and impostor content and misleading content occupy 6% each.

In the case of the UK, 2020 and 2021 were marked by fabricated content (45%), followed by misleading content (27%). False context represents 13% and manipulated content 12%. Impostor content is only reflected in 3% of the sample. In 2022 and 2023, 55% of the hoaxes correspond to false context, while 19% to fabricated content. 13% are manipulated content, 11% false connection, and 2% impostor content.

For Poland, in 2020 and 2021, 47% of the analyzed hoaxes corresponded to fabricated content, followed by manipulated content (21%) and false context (19%). Likewise, 12% is reported as misleading content. In 2022 and 2023, 59% of the hoaxes are represented as fabricated content, 24% correspond to false context, 8% to misleading content, and 5% to manipulated content. Impostor content and false connection represent only 3% and 1% of the sample, respectively.

Platform of hoaxes

In general, over the four years studied, a near-total hegemony of social networks can be observed as the predominant channel for the dissemination of erroneous content. In the case of the pandemic, in 2020 and 2021, 76% of the hoaxes in the sample were primarily viralized through digital platforms, a figure that rises to 89% among the hoaxes about the war studied in 2022 and 2023. This increase is also reported individually in the four countries studied, as detailed in Fig. 4. “Platforms are a very cheap means of disinformation, as you don’t need to have a website or infrastructure, you just need to viralize the information” (Interviewee 5, personal communication, May 1, 2023).

Fig. 4: Platform of hoaxes in percentage, by year and country.

Own elaboration.

Thus, in Spain, in 2020 and 2021, 77% of the contents were viralized through social networks, specifically WhatsApp (41% of social networks), X (Twitter) (18%), and Facebook (14%). Blogs accounted for 7% of dissemination channels, while media accounted for 5%. Additionally, in 11% of the cases, the origin of the hoaxes was not specified. In 2022 and 2023, social networks represented 79% of the sample, with X (Twitter) (42% of social networks) and Facebook (35%) standing out. Blogs were the dissemination channel in 2% of the cases, and traditional media in only 1%. It is worth noting that the origin of the hoaxes was not referenced in 18% of the cases.

In Germany, in 2020 and 2021, 72% of the analyzed contents were disseminated on social networks, mainly Facebook (50% of social networks) and WhatsApp (23%). Meanwhile, 20% went viral on blogs and 6% in the media, and the platform was not specified in 2% of the cases. As for 2022 and 2023, social networks accounted for 86% of the sample, especially Facebook (45% of social networks) and X (Twitter) (23%), while the rest (14%) were found on blogs.

Regarding the UK, in 2020 and 2021, 83% of the hoaxes were spread through social networks, specifically Facebook (67% of social networks) and X (Twitter) (13%); 9% spread through the media and 4% through blogs, while the origin of 4% of the hoaxes was not indicated. In 2022 and 2023, 100% of the hoaxes were viralized through social networks, with Facebook (44% of social networks) and X (Twitter) (46%) standing out.

In Poland, in 2020 and 2021, social networks emerged as the most prominent channel (65%), particularly Facebook (65% of social networks), followed by blogs (25%) and traditional media (8%); in 3% of the cases, the dissemination channel was not reported. As for 2022 and 2023, 92% of the hoaxes were disseminated on social networks, such as Facebook (61% of social networks) and X (Twitter) (25%), 5% on blogs, and 3% in the media.

Purpose of hoaxes

In the set of countries studied, both in the hoaxes about the pandemic and about the war, similar results regarding their purpose can be observed, as referenced in Fig. 5. During the pandemic, 42% of the analyzed hoaxes aimed to provoke, while, in the case of the war, this intention is reflected in 45% of the cases. Likewise, the promotion of a common enemy is observed in 32% of the analyzed hoaxes about the pandemic and in 44% in those corresponding to Russia’s invasion of Ukraine. “A common enemy is what allows all positions to be unified no matter how different they are. The search for culprits serves any narrative” (Interviewee 7, personal communication, April 26, 2023).

Fig. 5: Purpose of hoaxes in percentage, by year and country.

Own elaboration.

In the case of Spain, in 2020 and 2021, 53% of the analyzed hoaxes aimed to provoke, while 25% sought to promote a common enemy and 16% to obtain economic benefit. Parody (4%), political influence (1%), and poor journalism (1%) were less represented. In 2022 and 2023, 48% of the analyzed contents aimed to provoke and 36% to promote a common enemy. Political influence and parody each account for 5%, while economic benefit and poor journalism account for 3% and 2%, respectively.

In Germany, in 2020 and 2021, 35% of the contents aimed to promote a common enemy, 25% to provoke, and 22% to obtain economic benefit. Political influence accounts for 15%, while parody and poor journalism represent 2% each. In 2022 and 2023, provocation and promotion of a common enemy each represent 45% of the sample, while 6% aim to influence politically and 3% to parody.

In the British case, in 2020 and 2021, 48% of the hoaxes promote a common enemy and 31% seek to provoke. Likewise, 9% aim to obtain economic benefit, followed by poor journalism in 5% of cases, political influence in 4%, and parody in 3%. Similarly, in 2022 and 2023, provocation (53%) and promotion of a common enemy (33%) are the most prominent purposes. Political influence (8%), parody (2%), poor journalism (2%), and economic benefit (1%) follow.

For the Polish case, in 2020 and 2021, the main purposes were to promote a common enemy (32%) and to obtain economic benefit (32%), followed by provocation (30%). Political influence (4%) and poor journalism (4%) were less represented. Meanwhile, in 2022 and 2023, promoting a common enemy is reflected in 65% of the cases and provocation in 32%; political influence (3%) and parody (1%) are barely represented within the sample.

Discussion of results

In light of the results obtained, we now discuss them by analyzing and comparing the various trends identified, addressing the research objectives outlined at the beginning of the study.

Trends in the frequency, format, and typology of disinformation

During periods of high informational intensity, linked to extraordinary and unpredictable events, the public increases its information consumption (Magallón, 2020). Accordingly, as inferred from the sample of this investigation, the volume of verified disinformation regarding the pandemic was substantially higher in March 2020 (64% of the total sample of pandemic-related falsehoods) than in March 2021. The same trend was observed in March 2022 (86% of the sample of false content regarding the war), compared to March 2023.

In this regard, a special report by the European Digital Media Observatory (EDMO, 2023a) suggests that the most relevant examples of disinformation about the health crisis circulated in the early stages of the pandemic in 2020. Concerning the war in Ukraine, the peak of disinformation was reached in March 2022, and its frequency has since steadily decreased, as also evidenced by the results of our study.

In this sense, Interviewee 1 (personal communication, May 13, 2023) confirms that, although there was an explosion of false content about the pandemic in March 2020, falsehoods continued to spread relatively consistently (with their corresponding peaks), given that the health crisis directly affected citizens’ lives. However, in the case of the war in Ukraine, although there was an explosion of falsehoods and verifications in March 2022, public interest gradually waned. “One year after the war, informational fatigue and the visible consequences of the war in our daily lives lead to a disconnection by the citizenry” (Interviewee 4, personal communication, April 12, 2023).

This decrease in attention is due to the fact that the war affects European citizens less directly than the coronavirus pandemic did (with the exception of Poland). According to the Flash Eurobarometer published in December 2022 by the European Commission (2022), while approximately eight out of ten citizens of the member countries followed news related to the war in Ukraine, the frequency of these news consultations decreased considerably, shifting from daily to weekly. The same occurred with discussions about the war and its effects.

This is intrinsically linked to disinformation, since falsehoods target areas where there is a demand for information from users, directly influencing the work of verifiers. From a fact-checking newsroom, Interviewee 2 (personal communication, May 22, 2023) notes that, when selecting content to verify, if the organization receives few user inquiries about the topic of Ukraine and observes that disinformation on the subject has little exposure on social media, it devotes less attention to verifying it.

However, it is interesting to note that, in both the case of the pandemic and the war, although disinformation occupies less space on the verifiers’ agenda, falsehoods have not disappeared. “We are moving away from the pandemic in terms of time, but there are still many falsehoods related to vaccines, their side effects, or criticisms of masks. The same happens with the topic of the war in Ukraine. Even narratives that were considered past return with an original focus” (Interviewee 7, personal communication, April 26, 2023).

Regarding format, the data obtained in this study demonstrate that the predominant formats for the dissemination of falsehoods have been significantly different between disinformation about the pandemic and that related to the war between Russia and Ukraine. On one hand, between 2020 and 2021, most false contents referencing the health crisis were presented in text form (45%). Examples of this were viral chains on Facebook or WhatsApp about the adverse effects of vaccination or supposed home remedies to cure the disease.

The COVID-19 pandemic became an internationally scaled event that affected humanity as a whole and, of course, Europe (Tuñón-Navarro et al. 2023). The world was engulfed in a context of unprecedented chaos, marked by an overwhelming surge in demands for information. Since science could not provide an immediate response to these concerns, disinformers found a clear path to fabricate and disseminate falsehoods with minimal effort. As a consequence, the cheapest method of disinformation, text, was employed.

This is compounded by the fact that during this period, fact-checkers had to exponentially increase their efforts to verify erroneous information as quickly as possible. Therefore, since debunking disinformation based on audiovisual languages requires more working time, verifiers may have prioritized debunking text-based falsehoods. These results are consistent with the research of Salaverría et al. (2020), García-Marín (2020), and Aguado-Guadalupe and Bernaola-Serrano (2020), which indicate the predominance of text in published verifications about COVID-19-related falsehoods.

On the other hand, the sample of falsehoods about the pandemic is marked by the predominance of fabricated content (47% of the analyzed sample during that period), constituting the most detrimental typology due to its high intent to deceive (Wardle, 2017). Other typologies such as manipulated content (17%) and, to a lesser extent, misleading content (16%) and false context (13%) also stand out. This trend remained practically uniform across the four studied countries during the years 2020 and 2021.

These observations are supported by the research of Naeem et al. (2021) and Salaverría et al. (2020), who demonstrate that the most serious disinformation fabrications were the most viral during the pandemic. However, they contrast with the study by Brennen et al. (2020), in which reconfigured content, that is, content involving the re-elaboration of true information, outweighed completely fabricated content during the pandemic.

In the case of the war between Russia and Ukraine, a substantial change can be observed in the way disinformation was disseminated compared to the pandemic, as images, specifically the combined format of image and text, became particularly relevant in the spread of falsehoods (71% of the analyzed sample during that period). Considering the context, as mentioned previously, this international event did not have a direct impact on the daily lives of citizens outside the conflict. Additionally, the language barrier hindered the rapid viralization of falsehoods, so images became crucial: for audiences who were not on the ground and did not know the language, images allow the introduction of elements that distort the context (Interviewee 4, personal communication, April 12, 2023).

Moreover, images are processed more superficially than other formats (Sundar et al. 2021), which is why many disinformative contents are illustrated with impactful photographs (Amorós-García, 2018). “Visual effects are the most attractive to the brain and the human perception of reality,” confirms Interviewee 3 (personal communication, June 14, 2023). Linked to this format, false context constituted the predominant typology of falsehoods in the sample between 2022 and 2023 (44% of the sample), a strategy mainly used when disinformative contents are presented in visual formats (Salaverría et al. 2020; Rodríguez-Pérez, 2021).

Likewise, the results coincide with those of other research, such as Morejón-Llamas et al. (2022), which demonstrates how the analyzed contents about the war between Russia and Ukraine mostly utilized archived images to complement false texts. García-Marín and Salvat-Martinrey (2023) reported that a large part of the falsehoods about the invasion, verified by fact-checkers, employed the decontextualization of photographs, videos, parts of movies, or video games.

Despite growing concerns about the use of AI for creating deepfakes, no substantial evidence of it has been found within the studied sample. “Artificial Intelligence could generate disinformative content much faster, effectively, and scalably. However, today I am not seeing widespread use” (Interviewee 6, personal communication, May 22, 2023). Similarly, Interviewee 2 (personal communication, May 22, 2023) points out that no clear example of people being massively deceived by AI-generated content has been detected.

Trends in the platform and purpose of disinformation

Regarding the dissemination channel, in both the case of the COVID-19 pandemic and that of the war in Ukraine, there is an almost exclusive predominance of social media as the channel for the analyzed disinformation (76% and 89% of the sample in their respective periods). In fact, other studies have demonstrated that disinformation about both events spread more rapidly through digital platforms (Bak et al. 2020; Rosenberg et al. 2020; Wetzelhütter and Martin, 2021).

Digital platforms play an increasingly indispensable role in the lives of Europeans. However, in recent years, technological development has facilitated the professionalization of disinformation and propaganda tools (Colom-Piella, 2020), which have become the most effective means of influencing the minds of communities and even entire nations (Milina, 2012).

In particular, both in the pandemic and in the war, the most used social networks to disseminate this content have been Facebook and X (Twitter). In this regard, Allcott et al. (2019) suggest that Facebook is the main channel through which fake news tends to go viral. In fact, this social network presents favorable characteristics for the spread of fake news, as its users are more interconnected than on other platforms.

On the other hand, Gutiérrez-Coba et al. (2020) have found that X (Twitter) has a great capacity to generate sub-communities and echo chambers, overexposing audiences to content algorithmically tailored to their tastes and needs. As a consequence, this creates an environment conducive to the spread of false information. In fact, as pointed out by Vosoughi et al. (2018), when a user makes a claim about a topic in a tweet and others disseminate it by retweeting, a hoax cascade occurs. Specifically, they note that fake news is 70% more likely to be retweeted than real news.

In this regard, Interviewee 6 (personal communication, May 22, 2023) emphasizes that X (Twitter) “has a great capacity for expansion, being the simplest platform for the message to reach everyone.” All this occurs in an apparently intimate dialogue with the screen, which makes users feel that they are not being manipulated and that the conclusions they have reached are their own, unique, and exclusive (Aso, 2022).

It is also worth mentioning the increased appearance of hoaxes on the TikTok social network, specifically regarding the war. “At the beginning of the pandemic, TikTok was not as powerful as in the war in Ukraine” (Interviewee 1, personal communication, May 13, 2023). As some researchers state, the invasion of Ukraine constitutes the “first war reported on TikTok” (Chayka, 2022; Aso, 2022). In fact, some studies (Alonso-López et al. 2021) establish that it is easy to spread disinformation through this platform thanks to the use of generic hashtags and challenges, which, although not directly related to the content, have high levels of popularity.

Despite the results and the literature review carried out, it should not be assumed that the reported social networks are the channels through which the most disinformation about the pandemic and the war has been disseminated, since the selected sample is subject to the fact-checkers’ bias. Interviewee 6 (personal communication, May 22, 2023) thus asserts that it is risky to claim that more disinformation is spread on X (Twitter) or on public networks, since it is much more difficult to access private networks. In some cases, the main responsibility for distributing disinformation lies with private platforms, where it is very complex to monitor, control, and see what is happening (Interviewee 8, personal communication, April 13, 2023).

Moreover, it is worth noting that the company Meta works with fact-checkers belonging to the IFCN, who are responsible for tracking false information viralized on Facebook, Instagram, and WhatsApp and later debunking it. In fact, in 2020, the IFCN itself reported that 43% of the fact-checking media surveyed obtained a substantial part of their funding through this program (Mantas, 2021), which provides them with income and visibility (Tuñón and Sánchez-del-Vas, 2022). Similarly, TikTok runs a program in which it collaborates with some IFCN fact-checkers, so it is not surprising that the studied verifiers have allocated part of their resources and efforts to refuting content viralized on these social networks, which explains the high number of debunkings on these platforms.

Regarding the purpose of hoaxes, the results of the study report that the majority of the analyzed false content has the intention of provoking emotions, as well as of promoting a common enemy, the so-called “other” (Fandiño, 2014). Both categories are characterized by polarizing individuals. In this sense, between 2020 and 2021, 42% of the hoaxes aimed to provoke, and 32% aimed to promote a common enemy. Meanwhile, between 2022 and 2023, 45% of the hoaxes aimed to provoke, and 44% sought to promote a common enemy. As Aso (2022) argues, professionalized algorithms allow reaching the intimacy of individuals, bombarding them with arguments in search of polarization in favor of one side or the other. That is why the purpose of hoaxes is closely linked to their dissemination channel.

Some studies have demonstrated both correlational and causal relationships between reliance on emotion and increased belief in disinformation (Martel et al. 2020). Within the sample of this research, hoaxes stand out such as videos of people suffering attacks after being inoculated with the COVID-19 vaccine in March 2021, or images of children shared as if they were from the war in Ukraine in March 2022. In them, a strong emotional component can be appreciated, intended to provoke audiences. Interviewee 3 (personal communication, June 24, 2023) even points out that networks specifically designed during the pandemic to target hesitant users were used in the invasion of Ukraine to further increase their uncertainty.

It is also interesting to analyze the tendency of hoax emitters to construct a common enemy. In the case of the pandemic, many hoaxes were detected that went against a “corrupt elite,” while in the case of the Russian-Ukrainian war, the intention was to make Ukraine the enemy. Putin’s own speech on February 21, 2022, already showed this trend. In it, the Kremlin’s president affirmed the intention to “denazify” and “liberate” Ukraine, which he unjustly accused of plotting “genocide” and “ethnic cleansing” in Donbas (Giovanna-Sessa, 2022).

In this regard, some researchers point out that, in certain cases, disinformation can foster the feeling of “us against them” (Hopp et al. 2020), based on the negative perception of “the others” (Iyengar and Simon, 2000), which increases polarization and the division of audiences (Sobieraj and Berry, 2011; Hwang et al. 2014). When a society is divided into two main factions, usually related to an issue, polarization around these issues has a corrosive and detrimental effect on the functioning of communities, societies, and democracies (Matakos et al. 2017; Cinelli et al. 2021).

In this way, Interviewee 7 (personal communication, April 26, 2023) asserts that “the hoaxes [about the pandemic and the war] are based on institutional discredit and the search for culprits. For example, in the case of Ukrainian refugees, the narratives that already existed against other migrants have been adapted to them.” Interviewee 2 (personal communication, May 22, 2023) agrees that, in the case of the invasion, since political parties and the media were clear about their pro-Ukraine stance, alternative thinking materialized in pro-Russian narratives.

On the other hand, the research by Novotná et al. (2023) shows that social networks became even more polarized during the pandemic, a trend that extended to the Russian-Ukrainian war. Their study reported that people who denied the pandemic and vaccines were very likely to be supporters of Russia. Indeed, Interviewee 5 (personal communication, May 1, 2023) detected a dialogue between the anti-vaccine, the pro-Russian, and climate change deniers: “They are the same and they are shifting.”

In this regard, Interviewee 2 (personal communication, May 22, 2023) and Interviewee 1 (personal communication, May 13, 2023) observed how certain groups that had been dedicated to spreading disinformation about the pandemic began to do so about the war. This was also documented in a report published by Maldita in mid-March 2022 (Maldita, 2022), which shows how large denialist communities automatically switched from spreading disinformation about the pandemic to defending Russia’s attack on Ukraine. This shift in themes within the same denialist groups could be due to factors such as the evolution of interest in current events, an economic component, and the desire to maintain a sufficient critical mass to sustain the existing community (Interviewee 4, personal communication, April 12, 2023).

The studied crises demonstrate how social networks can originate a bipolar discourse in audiences (Haq et al. 2022). This polarization became evident through the reinforced uncivil character of communication, criticism of the low quality of discourse, and the negative evaluation of opponents (Novotná et al. 2023).

Conclusions

Throughout the research, the characteristics of disinformation regarding the COVID-19 pandemic and the war between Russia and Ukraine within the European context have been examined. Considering the three hypotheses proposed at the outset and based on the findings, this study confirms the following: Firstly, the frequency of verified hoaxes is intrinsically linked to peaks of current events and the evolution of the phenomenon in question. While it is true that, in the initial weeks of the pandemic as well as the war, there was a considerable increase in verifications of hoaxes in the cases analyzed, this trend gradually decreased. However, the decline was much more pronounced in the hoaxes concerning Russia’s invasion of Ukraine. Experts suggest that the shift in information trends, along with the waning interest of audiences in the topic, may have led to a reduction in the efforts of fact-checkers to debunk hoaxes related to it. Meanwhile, disinformation about the pandemic remained consistently prevalent, as the impact of the health crisis on the public was direct. Thus, the first hypothesis is confirmed.

Regarding the format of hoaxes, it was observed that the majority of verifications published regarding the coronavirus pandemic were disseminated in written form, without any associated multimedia content. Due to the crisis, the public was much more susceptible to receiving false information. Therefore, it was not necessary to employ highly developed disinformation techniques, and text offered a quick and direct way to reach audiences. In contrast, in the war, several factors converged that hindered the widespread dissemination of hoaxes beyond the borders of the conflict, such as the language barrier. Therefore, images emerged as the primary format upon which disinformation narratives were constructed. It is also worth noting that the typology of hoaxes is closely linked to their format, as they follow a similar trend. In the case of the pandemic, the most commonly used type of hoax was fabricated content, i.e., entirely invented. Meanwhile, hoaxes about the invasion utilized genuine images taken out of context, in order to tailor them to the interests of the disseminators. Thus, the second hypothesis is validated.

Furthermore, in both the pandemic and the war, it was observed that the majority of verified hoaxes were spread through social media, surpassing blogs or traditional media channels. Especially prominent were Facebook and X (Twitter). While the nature of social networks, subject to algorithmic control, allows for the rapid dissemination of erroneous content, one must not overlook the bias of fact-checkers. In many cases, they have collaboration agreements with these digital platforms to verify potentially false content circulating through them. Similarly, the purpose of hoaxes is directly related to their dissemination channel. In this regard, social networks have the capacity to polarize audiences, especially in times of crisis. Thus, most of the hoaxes in the sample, both in the case of the pandemic and the war, aimed to directly attack the emotions of the audiences, provoking them and promoting a common enemy. In fact, the objective in both cases was the search for culprits and the discrediting of institutions in the face of alternative thinking. These results confirm the third hypothesis.

One limitation of this research is that the sample was based on hoaxes selected in advance by the fact-checking organizations studied. While these organizations follow deontological principles aligned with the quality standards of the IFCN, their work is subject to bias. Additionally, the time constraints of this research led to the selection of a limited number of case studies; greater geographic coverage and a more extensive timeframe would have provided more precise conclusions about the object of study. Despite this, the present study represents a significant advancement in understanding the nature and scope of disinformation in Europe, laying the groundwork for future, more comprehensive research.