“If everybody always lies to you, the consequence is not that you believe the lies, but rather that nobody believes anything any longer.”

— Hannah Arendt

One of the normative goods on which democracy relies is accountable representation through fair elections (Tenove, 2020). This good is at risk when public perception of the integrity of elections is significantly distorted by false or misleading information (H. Farrell and Schneier, 2018). The two most recent presidential elections in the U.S. were accompanied by a plethora of false or misleading information, which grew from false information about voting procedures in 2016 (Stapleton, 2016) to the “big lie” that the 2020 election was stolen from Donald Trump, which he and his allies have baselessly and ceaselessly repeated (Henricksen and Betz, 2023; Jacobson, 2023). Misleading or false information has always been part and parcel of political debate (Lewandowsky et al., 2017), and the public arguably accepts a certain amount of dishonesty from politicians (e.g., McGraw, 1998; Swire-Thompson et al., 2020). However, Trump’s big lie differs from conventional, often accidentally disseminated, misinformation by being a deliberate attempt to disinform the public.

Scholars tend to think of disinformation as a type of misinformation, and technically that is true: intentional falsehoods are but one subset of falsehoods (Lewandowsky et al., 2013), and intentionality does not affect how people’s cognitive apparatus processes the information (e.g., L. K. Fazio et al., 2015). But given the real-world risks that disinformation poses for democracy (Lewandowsky et al., 2023), we think it is important to be clear at the outset whether we are dealing with a mistake or a lie.

The tobacco industry’s 50-year-long campaign of disinformation about the health risks from smoking is a classic case of deliberate deception and has been recognized as such by the U.S. Federal Courts (Smith et al., 2011; see also Civil Action 99-2496 (GK), United States District Court, District of Columbia, United States v. Philip Morris Inc.). This article focuses primarily on the nature of disinformation and how it can be identified, and places it in its contemporary societal context. Wherever we make a broader point about the prevalence of false information, its identifiability, or its effects, we use the term misinformation to indicate that intentionality is secondary or unknown.

An analysis of mis- and disinformation cannot be complete without also considering the role of the audience, in particular when people share information with others, where the distinction between mis- and disinformation becomes more fluid. In most instances, when people share information, they do so based on the justifiable default expectation that it is true (Grice, 1975). However, occasionally people also share information that they know to be false, a phenomenon known as “participatory propaganda” (e.g., Lewandowsky, 2022; Wanless and Berk, 2019). One factor that may underlie participatory propaganda is the social utility that people can derive from beliefs, even false ones, which may lead them to rationalize belief in falsehoods (Williams, 2022). The converse may also occur, where members of the public accurately report an experience, which is then taken up by others, usually political operatives or elites, and redeployed for a malign purpose. For example, technical problems with some voting machines in Arizona in 2022 were seized on by Trump and his allies as being an attempt to disenfranchise conservative voters (Reid, 2022). Both cases underscore the importance of audience involvement and the reverberating feedback loops between political actors and the public, which can amplify and extend the reach of intentional disinformation (Starbird et al., 2023; Vosoughi et al., 2018) and which often involve non-epistemic but nonetheless rational choices (Williams, 2021, 2022).

The circular and mutually reinforcing relationship between political actors and the public was a particularly pernicious aspect of the rhetoric associated with Trump’s big lie (for a detailed analysis, see Starbird et al., 2023). During the joint session of Congress to certify the election on 6 January 2021, politicians speaking in support of Donald Trump and his unsubstantiated claims about election irregularities appealed not to evidence or facts but to public opinion. For example, Senator Ted Cruz cited a poll result that 39% of the public believed the election had been “rigged”. Similarly, Representative Jim Jordan (R-Ohio), who is now Chairman of the House Judiciary Committee, argued against certification of the election on the grounds that “80 million of our fellow citizens, Republicans and Democrats, have doubts about this election; and 60 million people, 60 million Americans think it was stolen” (Salek, 2023). The appeal to public opinion to buttress false claims is cynical in light of the fact that public opinion was the result of systematic disinformation in the first place. While nearly 75% of Republicans considered the election result legitimate on election day, this share dropped to around 40% within a few days (Arceneaux and Truex, 2022), coinciding with the period during which Trump ramped up his false claims about the election being stolen. By December 2020, 28% of American conservatives did not support a peaceful transfer of power, perhaps the most important bedrock of democracy (Weinschenk et al., 2021). Among liberals, by contrast, this attitude was far more marginal (3%).

Public opinion has shifted remarkably little since the election. In August 2023, nearly 70% of Republican voters continued to question the legitimacy of President Biden’s electoral win in 2020. More than half of those who questioned Biden’s win believed that there was solid evidence proving that the election was not legitimate (Agiesta and Edwards-Levy, 2023). However, the purported evidence marshaled in support of this view has been repeatedly shown to be false (Canon and Sherman, 2021; Eggers et al., 2021; Grofman and Cervas, 2023). It is particularly striking that high levels of false election beliefs are found even under conditions known to reduce “expressive responding”—that is, responses that express support for a position but do not reflect true belief (Graham and Yair, 2023).

The entrenchment of the big lie erodes the core of American democracy and puts pressure on Republican politicians to cater to antidemocratic forces (Arceneaux and Truex, 2022; Jacobson, 2021, 2023). It has demonstrably decreased trust in the electoral system (Berlinski et al., 2021), and a violent constitutional crisis has been identified as a “tail risk” for the United States in 2024 (McLauchlin, 2023). Similar crises, in which right-wing authoritarian movements dismantle democratic institutions and safeguards, have unfolded in many countries around the world, including liberal democracies (Cooley and Nexon, 2022).

In this context, it is worth noting that the situation in other countries, notably in the Global South, may differ from the situation in the U.S. (Badrinathan and Chauchard, 2024). On the one hand, low state capacity and infrastructure constraints may curtail the ability of powerful actors to spread disinformation and propaganda (though see Kellow and Steeves, 1998; Li, 2004, for discussion of the role of government-adjacent radio station RTLM in facilitating the 1994 Rwandan genocide). On the other hand, such spread can be facilitated by the fact that closed, encrypted social-media channels are particularly popular in the Global South, sometimes providing an alternative source of news when broadcast channels and other conventional media have limited reach. In those cases, dissemination strategies will also be less direct, relying more on distributed “cyber-armies” than on direct one-to-millions broadcasts such as Trump’s social-media posts (Badrinathan, 2021; Jalli and Idris, 2019). The harm that can be caused by such distributed systems was vividly illustrated by the false rumors about child kidnappers shared in Indian WhatsApp groups in 2018, which incited at least 16 mob lynchings, causing the deaths of 29 innocent people (Dixit and Mac, 2018). The ensuing standoff between the Indian government, which attempted to hold WhatsApp accountable, and Meta, the platform’s owner, highlights the limited power that governments in the Global South hold over multinational technology corporations (Arun, 2019). As a result, many platforms do not even have moderation tools for problematic content in popular non-Western languages (Shahid and Vashistha, 2023).

The power asymmetry between corporations and the Global South has been noted repeatedly, and recent calls for action include the idea of collective action by countries in the Global South to insist on regulation of platforms (Takhshid, 2021). We have only scratched the surface of a large global issue that urgently needs to be addressed.

Despite these differences between the Global North and South, beliefs in political misinformation can be pervasive regardless of regime type or development level (e.g., for a discussion in the context of the “developing democracy” of Brazil, see Dourado and Salgado, 2021; Pereira et al., 2022).

The political landscape of disinformation

Given that the 2020 election was lost by the Republican candidate, the finding that conservatives are more likely than liberals to believe false election claims is explainable on the basis of motivated cognition and the general finding that conspiracy theories “are for losers” (Uscinski and Parent, 2014); that is, they provide an explanation—even if only a chimerical one—for a political setback to the losing parties. There is no a priori reason to assume that susceptibility to disinformation is skewed across the political spectrum.

However, a large body of recent research on the American public and U.S. political actors has consistently identified a pervasive ideological asymmetry, with conservatives and people from the populist right being far more likely to consume, share, and believe false information than their liberal counterparts (Benkler et al., 2018; Garrett and Bond, 2021; González-Bailón et al., 2023; Grinberg et al., 2019; Guess et al., 2020a; Guess et al., 2020b; Guess et al., 2019; Ognyanova et al., 2020). Research into the asymmetry culminated in a recent analysis of the news diet of 208 million Facebook users in the U.S., which discovered that a substantial segment of the news ecosystem is consumed exclusively by conservatives and that most misinformation exists within this ideological bubble (González-Bailón et al., 2023). Although the reasons for this asymmetry are not fully understood, Lasser et al. (2022) recently showed that it also held for politicians, with Republican members of Congress disseminating far more low-quality information on Twitter/X than their Democratic counterparts. Greene (2024) reported a parallel analysis for Facebook and found the same asymmetry between politicians of the two major parties. Similarly, Benkler et al. (2018) showed how the particular structure of the American media scene, with a dense interconnected cluster of right-wing sources that is separate from the remaining mainstream, fosters political asymmetry in the use and consumption of disinformation.

This asymmetry extends beyond the political domain to health-related information, which might at first glance appear to be of sufficient importance for most people to cast aside their political leanings. A recent systematic review discovered eight studies that identified conservatism as a predictor of susceptibility to health misinformation, seven studies that found no association involving political leanings, and not a single study that showed liberals to be more misinformed on health topics than conservatives (Nan et al., 2022). The observed political asymmetry is also not limited to survey results or other behavioral measures. Wallace et al. (2023) examined vaccination and mortality data from two U.S. states (Ohio and Florida) during the COVID-19 pandemic and found a widening partisan gap in excess mortality. Specifically, whereas mortality rates were equal for registered Republican and Democratic voters pre-pandemic, a wide partisan gap—with excess death rates among Republicans being up to 43% greater than among Democratic voters—was observed after vaccines had become available for everyone. The gap was greatest in counties with the lowest share of vaccinated people, and it almost disappeared in the most vaccinated counties. Similar results have been reported across U.S. states (Leonhardt, 2021). One explanation for these patterns invokes the frequent false statements by Republican politicians and conservative news networks—foremost Fox News—that discredited the COVID-19 vaccines (Hotez, 2023). In support of this explanation, consumption of Fox News has been causally linked to lower vaccination rates (Pinna et al., 2022).

Moreover, a recent analysis identified a specific “Trump effect” such that even conditional on the Republican vote share, support for Trump was additionally and causally associated with a lower vaccination rate (Jung and Lee, 2023).

The political asymmetry surrounding the dissemination and consumption of misinformation must be qualified in two ways. First, although the asymmetry is substantial and pervasive, it is not absolute. For some materials, such as specific conspiracy theories, the asymmetry is found to be attenuated in some studies (A. Enders et al., 2022; M. Enders and Uscinski, 2021). Second, the asymmetry observed among American politicians does not necessarily hold in other countries. Lasser et al. (2022) examined tweets by British and German parliamentarians and showed that, with the exception of the extreme right in Germany (the AfD party), politicians across the mainstream spectrum were equally judicious in what information they shared in their tweets. This finding suggests that it is not conservatism per se that is associated with asymmetric reliance on misinformation, but the specific manifestation of conservatism currently dominant in the American political landscape.

Notwithstanding those caveats, the political asymmetry surrounding the dissemination and consumption of misinformation in the U.S. has been accompanied by at least two major issues: First, there has been a strong political response by Republicans in Congress who have commenced a campaign against misinformation research and researchers, claiming that the research seeks to censor conservative voices. Second, the political backlash has coincided with growing self-reflection and critique among scholars, some of whom began to question the misinformation research effort, culminating in claims that misinformation may not be sufficiently identifiable or widespread to warrant concern or countermeasures. We now take up these two issues in turn.

The politicization of misinformation research

At the time of this writing, Representative Jim Jordan, R-Ohio, has been leading a campaign against misinformation research and misinformation researchers in his role as Chairman of the House Judiciary Committee. The core allegation by Jordan and his allies is that misinformation researchers are part of a purported “Censorship Industrial Complex” that is assisting the Biden administration in its alleged endeavor to pressure platforms into suppressing conservative viewpoints (U.S. House of Representatives Judiciary Committee, 2023). The allegation is, however, problematic for at least four reasons: it rests on false assertions; it has, ironically, chilled the First Amendment rights of researchers; its basic premise, that platforms are biased against conservatives, is false; and it misunderstands the role that platforms play in content moderation.

Concerning the first point, Jordan has subpoenaed several prominent academics engaged in the study of mis- and disinformation on the basis of false assertions. For example, Dr. Kate Starbird, an expert on disinformation at the University of Washington, was called to testify before Jordan’s subcommittee and had to defend herself against accusations that she was colluding with the Biden administration in an effort to chill conservative speech (Nix and Menn, 2023). Core to the specific allegations against Starbird and her colleagues is a claim—initially voiced by online conspiracy theorists—that they colluded with the Department of Homeland Security to censor 22 million tweets during the 2020 election campaign. In fact, the researchers collected 22 million tweets for analysis and flagged about 3000 of them (roughly 0.014% of the total) for potential violations of Twitter’s terms of use (Blitzer, 2023).

Second, Jordan’s purported championing of free speech is difficult to reconcile with the chilling effect the House Committee’s actions have had on the First Amendment rights of researchers. According to Starbird, “The people that benefit from the spread of disinformation have effectively silenced many of the people that would try to call them out” (Rutenberg and Myers, 2024). The deterrent effect on the research community is widespread (Bernstein, 2023; Nix et al., 2023). In parallel, Facebook and YouTube have reversed their restrictions on content claiming that the 2020 election was stolen. Election disinformation has, unsurprisingly, seen an uptick in response (Rutenberg and Myers, 2024).

Third, Jordan’s campaign rests on a false premise, namely that social-media platforms are biased against conservatives. Together with other conservative figures such as Tucker Carlson (formerly with Fox News) and Ben Shapiro, Jordan claimed in 2020 that “Big Tech is out to get conservatives”. Several studies have shown this claim to be wrong. For example, an analysis of Facebook engagements during the 2016 election campaign revealed that conservative outlets (Fox News, Breitbart, and Daily Caller) amassed 839 million interactions, dwarfing more centrist outlets (CNN with 191 million and ABC News with 138 million), and totaling more than the remaining seven mainstream pages in the top 10 (Barrett and Sims, 2021). Another analysis involving millions of Twitter users and 6.2 million news articles shared on the platform also found that conservatives enjoy greater algorithmic amplification than people on the political left (Huszár et al., 2022). Moreover, the Congressional January 6th Committee detailed, in a 122-page memo, the way in which major platforms, including Twitter and Facebook, facilitated the organization of the violent insurrection, although much of that information did not make it into the final committee report (Zakrzewski et al., 2023). Congressional investigators discovered that the platforms failed to heed their own experts’ warnings about violent rhetoric, and selectively failed to enforce existing rules to avoid antagonizing conservatives for fear of reprisals (Zakrzewski et al., 2023).

Finally, and perhaps most important, Jordan’s pursuit fails to differentiate between the roles of government and the platforms, and in particular ignores the crucial role that platforms already play in shaping people’s information diet (Lewandowsky et al., 2023a). In a nutshell, the internet is currently neither unregulated nor a space in which all information circulates equally freely. Instead, nearly all content on social media is curated by algorithms that are designed to maximize dwell time in pursuit of the platforms’ advertising profit (Lewandowsky and Pomerantsev, 2022; Wu, 2017). Algorithms therefore favor captivating information that keeps users engaged. Unfortunately, human attention is known to be biased towards negative information (Soroka et al., 2019), which creates an incentive for platforms to drench users in outrage-evoking content. Like the junk food that supermarkets strategically place at checkout lanes, the information preferentially curated by platforms may satisfy our presumed momentary preferences while reducing our long-term well-being. If platforms were to address their role in those dynamics, for example by redesigning their algorithms, this would hardly constitute censorship. Solving a problem one has caused is good iterative design rather than bias or suppression of opinions. No one would accuse a supermarket of suppressing consumers’ preferences if its checkout lanes offered celery instead of chocolate bars.

In summary, far from being a restorative effort in defense of free speech, Jordan’s attacks are reminiscent of similar campaigns launched against inconvenient scientists by the tobacco and fossil-fuel industries (Lewandowsky et al., 2023b). In all cases, scientists have been subjected to personal abuse, their email correspondence has been hacked or subpoenaed, and allegations have been woven together from snippets of decontextualized actions or events (Blitzer, 2023). Because these attacks are systemic, the response also requires a systemic approach (Desikan et al., 2023). However, any such response seems unlikely to be achievable in the current political landscape. Scientists who work under such challenging conditions must therefore rely on other avenues to protect their integrity. The U.S. National Academy of Sciences has published a list of resources for scientists under attack. Specific recommendations include responding publicly to valid criticism (without, however, engaging in a long drawn-out direct conversation with an attacker), reporting abusive messages to the authorities, and seeking support from colleagues who have been in similar situations (Kinser, 2020).

The attacks have also coincided with moves by the platforms and the courts that align with Jordan’s claims. For example, the major platforms (Meta, Google, Twitter/X, and Amazon) have cut back on the number of staff dedicated to combating hate speech and misinformation (Field and Vanian, 2023). Meta (the parent company of Facebook) has been laying off employees in its “content review” team, which had been involved in countering misinformation and disinformation in the 2022 midterm election, citing confidence in improved electronic tools for detecting inauthentic accounts. It remains to be seen how the platform actions will play out during the 2024 presidential election.

In the legal arena, in July 2023 a Trump-appointed federal judge in Louisiana barred the Biden administration from having any contact with social-media companies and certain research institutions to discuss safeguarding elections. The judgment echoed the claims by Jim Jordan and other Republicans that there was collusion between the White House and the social-media companies to censor conservative voices under the guise of fighting disinformation about COVID-19 during the pandemic and false election claims during the 2022 midterms. Although there are important and potentially problematic implications for free speech that arise whenever a government gets involved in managing what it considers misinformation (Neo, 2022; Vese, 2022), the Louisiana ruling was particularly broad in its prohibitions (West, 2023). The implications of the ruling include denying election officials access to information gathered by independent research bodies (the ruling lists “the Election Integrity Partnership, the Virality Project, the Stanford Internet Observatory, or any like project or group”) that would enable them to debunk false election-related information and provide more accurate information instead. The Supreme Court blocked the Louisiana ruling in October 2023 (Hurley, 2023) but agreed to a full hearing later in its current term. We return to the conflict between free speech and the adverse effects of disinformation later.

The postmodern critiques of misinformation research

At the heart of research on misinformation is the belief that the concepts of truth and falsehood are essential to democracy, to cognition, and to daily life, and that the status of many, though of course not all, claims can be determined with sufficient accuracy to warrant rebuttal of false information. For example, the “big lie” about a stolen election is just that—it is a lie with no sustainable evidentiary support, and it is routinely referred to as such in the scholarly literature (e.g., Arceneaux and Truex, 2022; Canon and Sherman, 2021; Graham and Yair, 2023; Henricksen and Betz, 2023; Jacobson, 2021, 2023; Painter and Fernandes, 2022). The lie has been rejected by 62 American courts, all of which dismissed or ruled against lawsuits brought by Donald Trump or his supporters questioning the legitimacy of the election.

It is curious that the reaction by Trump and some of his most ardent public supporters to such determinative judgments about the falsity of his claims has not been to claim that they are in fact true, but to attack the idea that objective knowledge is even possible. When confronted with a lie, Trump’s adviser Kellyanne Conway once famously quipped that she was presenting “alternative facts.” On another occasion, Trump’s attorney Rudy Giuliani declared that “truth isn’t truth.” Such a strategy seems oddly reminiscent of the postmodernist critique of the possibility of objective knowledge, which first arose as a core aspect of 1930s fascism and was then adapted by left-wing literary criticism from the 1960s onward (Lewandowsky, 2020). At that time, humanities scholars had grown increasingly uncomfortable with the idea that facts were just facts, and that there was no role for considering the personal or political interests of those who were engaged in the pursuit of empirical knowledge. In this, postmodernists raised an important point of self-reflection for scientists and others who blithely claimed that there was an impenetrable wall between facts and values. But then they took things too far. Derrida claimed that there was no such thing as objective knowledge. Foucault went on to suggest that—given this—all knowledge claims were nothing more than an assertion of the political interests of the investigator (McIntyre, 2018, p. 124).

This led to the “science wars” of the 1990s, when scientists and their allies fought back against subjectivism and relativism to defend the importance of objective knowledge, at least as a regulative ideal of empirical inquiry. This particular attack on science eventually dissipated—and some postmodernists, such as Bruno Latour, even apologized for the damage it had done to objective knowledge claims like the reality of global warming (Latour, 2004)—but the damage was already done. Meanwhile, both the corporate sector and the religious and political right wing had once again taken up the strategy in their attacks on science. The advantage of postmodernism for anti-democratic purposes is obvious, and it echoes authoritarian attacks on truth-tellers and their defenders throughout history. Indeed, to someone who embraces the idea that their political ideology should have supremacy over objective reality, the advantages of postmodernism are clear: not only can falsehoods about the economy, crime, and political violence be offered as “alternative narratives” to carefully measured statistics or other forms of evidence, but the credibility of any party as an objective truth-teller can be undermined. And this suits the authoritarian just fine—for where there is no truth, there can be no blame or accountability either.

Hannah Arendt long ago recognized the dangers of this strategy when she wrote: “the ideal subject of totalitarian rule is not the convinced Nazi or the convinced communist, but people for whom the distinction between fact and fiction … true and false … no longer exist.” This easy political slide into postmodernism does violence to the idea that truth matters, that facts can be discovered through empirical analysis, and that it is crucial to attempt to discern the facts before we can make good policy—especially when we hold competing values that will shape policy choices. This is all the more true in an era when the creation and amplification of knowledge claims are so easily subject to digital manipulation and weaponization by anyone with a personal or political interest. Fortunately, researchers have developed conceptual, cognitive, and computational tools that permit the differentiation between legitimate contestation of facts on the one hand, and misinformation and willful disinformation on the other.

The identifiability of contested facts

Notwithstanding our rejection of the postmodernist project, we do not dispute its core idea that many contested assertions cannot be unambiguously adjudicated by referring to “facts”. There are indeed cases in which different actors may legitimately question each other’s “facts”. In our view, these ambiguous cases are precisely those that merit democratic debate and contestation. When conducted in good faith, such debates can be particularly revealing because both sides can marshal evidence in support of their positions.

To illustrate, consider the recent controversy surrounding a machine-learning tool known as COMPAS (Dressel and Farid, 2018), which is intended to assist judges in the U.S. by predicting the likelihood of recidivism of a specific offender. Critics, drawing on a statistical analysis of the evidence, accused COMPAS of being racially biased (Angwin et al., 2016). The case rested on the observation that among defendants who ultimately did not re-offend, the algorithm misclassified African-Americans as being at risk of re-offending more than twice as often as White defendants. This misclassification can have serious consequences for a person because judges are inclined to treat high-risk defendants more harshly.

Proponents of COMPAS rejected this charge and argued that the algorithm was not racially biased because it predicted recidivism equally for Black and White offenders for each of its 10 risk categories. That is, the classification into risk categories based on a large number of indicator variables was racially unbiased—a Black person’s actual probability of re-offending was the same as that of a White person with the same risk score (Dieterich et al., 2016).

It turns out that it is mathematically impossible to simultaneously satisfy both forms of fairness—calibration and classification—when the base rates of re-offending differ between groups (Berk et al., 2021; Lagioia et al., 2023). That is, if a greater share of Black people are classified as high-risk—which the algorithm does in an unbiased manner—then it necessarily follows that a greater share of Black defendants who do not re-offend will also be mistakenly classified as high-risk. In those circumstances, it would be inappropriate to accuse one or the other side of spreading misinformation, as each party has mathematical justification for their position and a resolution can only be attained through a value-laden policy discussion. Indeed, to our knowledge, the main contestants in this debate—Northpointe, the manufacturer of COMPAS (Dieterich et al., 2016) and ProPublica, a public-interest media organization (Angwin et al., 2016)—did not level charges of misinformation against each other despite engaging in robust debate.
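
To see concretely why the two fairness criteria collide, consider the following simplified sketch. It uses predictive parity (a fixed positive predictive value) as a stand-in for the calibration property discussed above, holds sensitivity constant, and uses invented base rates rather than COMPAS data; the numbers and function name are purely illustrative.

```python
# A simplified numeric sketch (invented numbers, not COMPAS data) of why equal
# calibration and equal false-positive rates cannot both hold when base rates
# of re-offending differ. "Calibration" is stood in for by a fixed positive
# predictive value (PPV); sensitivity (TPR) is also held fixed for both groups.

def false_positive_rate(base_rate, n=10_000, ppv=0.6, tpr=0.7):
    """Derive the false-positive rate among people who do not re-offend,
    given the group's re-offending base rate and a classifier with fixed
    PPV and TPR."""
    reoffenders = base_rate * n
    non_reoffenders = n - reoffenders
    true_positives = tpr * reoffenders
    flagged_high_risk = true_positives / ppv      # PPV = TP / flagged
    false_positives = flagged_high_risk - true_positives
    return false_positives / non_reoffenders

# Hypothetical base rates: group A re-offends at 50%, group B at 30%.
print(f"FPR, group A: {false_positive_rate(0.5):.2f}")   # ~0.47
print(f"FPR, group B: {false_positive_rate(0.3):.2f}")   # ~0.20
```

With identical calibration and sensitivity, the group with the higher base rate necessarily accrues more false positives among its non-re-offenders, which is exactly the pattern at the heart of the COMPAS dispute.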

A similar controversy with even greater stakes arose in the context of the COVID-19 vaccine rollout in the U.S. in 2021. Unlike most other countries, which vaccinated their populations according to age alone—with the elderly being given highest priority because of their much higher mortality rate from COVID-19—the U.S. Advisory Committee on Immunization Practices (ACIP) favored a policy that gave higher priority to essential workers (e.g., food and transport workers) than to the elderly. This policy was partially motivated by the fact that racial minorities (Blacks and Hispanics) are underrepresented among adults over 65, whereas they are slightly over-represented among essential workers—thus, under an age-based policy the share of Whites who received the vaccine would initially have been greater than their proportion in the population warranted. Conversely, Blacks would have been underrepresented among the vaccinated early on (Mounk, 2023). This inequity could be avoided by first vaccinating essential workers, among whom racial minorities were over-represented. However, because essential workers are on average much younger, fewer lives were saved among vaccinated essential workers—whose young age rendered their risk of dying from COVID-19 low to begin with—than would have been saved among the elderly had they been vaccinated first (Rumpler et al., 2023). Modeling has confirmed that while the essential-worker policy introduced racial equity in terms of doses administered, more lives would have been saved in all ethnic groups under an age-based policy (Rumpler et al., 2023). Again, the apparent fairness of a policy depended on the outcome measure: doses administered vs. lives saved. Given the unequal distributions of different ethnic and racial groups across ages, it is mathematically impossible to settle on a single “fair” policy. Public opinion appears to have been broadly in line with the policy ultimately adopted by ACIP (Persad et al., 2021).
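
The trade-off can be illustrated with a toy calculation. The fatality risks, attack rate, and minority shares below are invented for illustration (they are not ACIP figures or the estimates of Rumpler et al.); the point is only that the two outcome measures, doses administered to minorities versus lives saved, can favor different policies.

```python
# Toy comparison (hypothetical numbers) of two allocation rules for the same
# 1,000,000 early doses, evaluated on two different outcome measures and
# assuming a fully effective vaccine.

doses = 1_000_000
attack_rate = 0.10    # assumed share who would be infected if unvaccinated

groups = {
    "age-based (65+)":   {"ifr": 0.05,  "minority_share": 0.20},  # assumed values
    "essential workers": {"ifr": 0.002, "minority_share": 0.30},  # assumed values
}

for policy, g in groups.items():
    lives_saved = doses * attack_rate * g["ifr"]
    minority_doses = doses * g["minority_share"]
    print(f"{policy:18s}  lives saved: {lives_saved:>6,.0f}   "
          f"doses to minorities: {minority_doses:,.0f}")
```

Under these assumed numbers, the essential-worker rule delivers more doses to minorities, while the age-based rule saves far more lives; which rule is “fairer” depends on the outcome measure one values.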

The controversies surrounding COMPAS and ACIP’s vaccination policy are just two instances of a much wider problem, which is that when issues become sufficiently complex, even good-faith actors may find it impossible to agree. One reason is that cognitive limitations prevent a full Bayesian representation (the gold standard of rationality) of the problem (Pothos et al., 2021). Instead, people are forced to simplify their representations, for example by partitioning their knowledge (Lewandowsky et al., 2002). Persistent and irresolvable disagreements are thus almost ensured by human cognitive limitations (Pothos et al., 2021). The second reason is that people differ in their values and weigh evidence differently even if all parties can agree on underlying facts (Walasek and Brown, 2023).

Nonetheless, controversies such as those surrounding COMPAS and ACIP’s vaccination policy do not give license to political actors to obscure the debate through falsehoods, misleading claims, or lies. On the contrary, proper debate of those issues is only possible in the absence of falsehoods because their resolution ultimately requires a trade-off of values that is best arrived at by weighing the importance of different competing sets of evidence. We therefore reject recent academic voices that have questioned whether misinformation can be reliably identified at all (Acerbi et al., 2022; Adams et al., 2023; Harris, 2022; van Doorn, 2023; Yee, 2023a, 2023b). We suggest that its identification is essential and, as we show next, empirically well supported.

The identifiability of misinformation

We place our case in the context of the more extreme end of the academic critique because it involves positions that are antithetical to ours, calling into question the entire idea of fact-checking. For example, Uscinski (2015) raised the specter that fact-checking is merely a “veiled continuation of politics by means of journalism” (p. 243). Yee (2023a) argued more broadly that any deference to “epistemic elites”—including not only fact-checkers but also academics, researchers, or journalists—is problematic, and that assessment of the quality of information should include democratic elements “that are participatory, transparent, and fully negotiable by average citizens” (Yee, 2023a, p. 1111). This demand has several problematic implications. First, it does not explain who counts as an “average citizen” and who belongs to the “elite”. At what point should individuals seeking to counter misinformation recuse themselves for fear of accidentally treading on “average” citizens? Is a virologist too “elite” to correct misinformation surrounding the origin of a new virus? And how should citizens with a PhD or a Master’s degree be classified? Second, why exactly would one exclude epistemic elites, such as investigative journalists or forensic IT experts, from identifying bad-faith actors such as foreign “bots” or “trolls”? Are average citizens really better at this task than network scientists? Should we decide by social-media poll whether a new strain of avian flu is contagious to humans (Lewandowsky et al., 2017)? Probably not. There are obviously many domains that benefit from expert assessment of claims.

Nonetheless, much research has revealed the competence of crowds in the context of fact-checking. For example, Pennycook and Rand (2019) showed that crowdsourced trust ratings of media outlets were quite successful in the aggregate when compared to ratings by professionals, notwithstanding substantial partisan differences. This basic finding has been replicated and extended several times (M. R. Allen et al., 2024; Martel et al., 2024), with community-based fact-checking of COVID-19 content being 97% accurate in one study (M. R. Allen et al., 2024). Care must, however, be taken that crowds are politically balanced. When people can choose what content to evaluate, as in Twitter/X’s crowdsourced “Birdwatch” fact-checking program (now known as Community Notes), partisan differences among contributors may limit the value of the crowdsourcing (J. Allen et al., 2022). The crowdsourcing results not only show that average citizens can match the competence of experts in the aggregate but also reaffirm that misinformation is identifiable.
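
The aggregation logic itself is simple enough to sketch: average many individual trust ratings per outlet and compare the crowd mean with a professional rating. The outlets, ratings, and expert scores below are invented placeholders, not data from the cited studies.

```python
# Minimal sketch of crowdsourced source ratings (hypothetical data): individual
# trust ratings (0-10) are averaged per outlet and compared with an expert rating.
import statistics

crowd_ratings = {
    "outlet_a": [8, 7, 9, 8, 6],
    "outlet_b": [2, 3, 1, 2, 4],
}
expert_ratings = {"outlet_a": 8.5, "outlet_b": 2.0}

for outlet, ratings in crowd_ratings.items():
    crowd_mean = statistics.mean(ratings)
    print(f"{outlet}: crowd mean = {crowd_mean:.1f}, "
          f"expert rating = {expert_ratings[outlet]}")
```

The value of the approach lies in the averaging: individual raters can be noisy or partisan, but their aggregate judgment tends to track expert assessments, provided the pool of raters is politically balanced.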

Much recent research has uncovered specific “fingerprints” that can enable people as well as machines to infer the likely quality or accuracy of content. Misinformation has been shown to be suffused with emotions, logical fallacies, and conspiratorial reasoning (Blassnig et al., 2019; Carrasco-Farré, 2022; Fong et al., 2021; Musi et al., 2022; Musi and Reed, 2022). For example, critical thinking methods offer a qualitative approach to deconstructing arguments in order to identify the presence of reasoning fallacies (Cook et al., 2018).

Quantitatively, one study found that compared to reliable information, misinformation is less cognitively complex and 10 times more likely to rely on negative emotional appeals (Carrasco-Farré, 2022). In confirmation, numerous other studies show that misinformation is, on average, more emotional than factual information (for a systematic review, see Peng et al., 2023). Upward of 75% of anti-vaccination websites use negative emotional appeals (Bean, 2011), and linguistic analyses show that conspiracy theorists use significantly more fear-driven language than scientists do (Fong et al., 2021).
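
As an illustration of how such an emotional “fingerprint” can be quantified, the sketch below counts negative-emotion terms per 100 words. The tiny word list is a toy placeholder, not one of the validated lexicons used in the cited studies.

```python
# Illustrative sketch of measuring an emotion "fingerprint": count
# negative-emotion terms per 100 words. The word list is a toy placeholder.
import re

NEGATIVE_TERMS = {"fear", "danger", "dangerous", "deadly", "threat",
                  "poison", "toxic", "fraud", "disaster", "death"}

def negative_emotion_rate(text: str) -> float:
    words = re.findall(r"[a-z']+", text.lower())
    hits = sum(word in NEGATIVE_TERMS for word in words)
    return 100 * hits / max(len(words), 1)

print(negative_emotion_rate("Vaccines are a deadly fraud, a danger to children."))
print(negative_emotion_rate("The trial reported no serious adverse events."))
```

Real analyses use far richer lexicons and many cues at once, but the principle is the same: the density of emotional language is a measurable signal that differs, on average, between misleading and reliable content.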

Emotion also plays a role in the receivers’ behavior. People have been shown to be more susceptible to misinformation when put in an emotional state (Martel et al., 2020), which helps explain the preferential and more rapid diffusion of unreliable versus reliable information online (Pröllochs et al., 2021; Vosoughi et al., 2018).

Critics may argue that the datasets used for determining what constitutes “misinformation” and “reliable” information are limited or biased, or that the mere prevalence of these cues is not evidence of their diagnosticity in real-world contexts. However, computational machine-learning work relying on a large variety of different URL sources and fact-checked datasets has confirmed that the results are robust and generalizable (Ghanem et al., 2020; Kumari et al., 2022; Lebernegg et al., 2024). A recent comprehensive study that combined many of the available cues found that they have high diagnostic and predictive validity and help discriminate between false and true information, with state-of-the-art models reaching over 83% classification accuracy (Lebernegg et al., 2024). Moreover, real-world training in fake-news detection, such as logical-fallacy training, helps people accurately discriminate between misleading and credible news (e.g., Hruschka and Appel, 2023; Lu et al., 2023; Roozenbeek et al., 2022).
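
For a sense of how such computational work proceeds in general terms (this is not the pipeline of Lebernegg et al. or any other cited study), the sketch below trains a simple text classifier on claims labeled true or false by fact-checkers; the file name and column names are hypothetical stand-ins for such a dataset.

```python
# Minimal sketch of a misinformation classifier: bag-of-words features plus
# logistic regression, trained on fact-checked claims. Assumes a hypothetical
# local file "labeled_claims.csv" with columns "text" and "label".
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("labeled_claims.csv")            # hypothetical dataset
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=0)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=2),
                      LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

State-of-the-art systems combine many more cues (emotionality, complexity, source features) and larger models, but the held-out evaluation shown here is the standard way the classification accuracies reported above are established.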

In summary, the available evidence shows quite convincingly that misinformation can be identified by both humans and machines with considerable accuracy. As we show next, we can go beyond mere identification as there are also at least three ways in which one can ascertain the deceptive intent underlying disinformation if present. Identification of deceptive intent is particularly pertinent because it allows information to be safely discounted without requiring a detailed analysis of its factual status.

The identifiability of willful disinformation

For decades, the hallmark of Western news coverage about politicians’ false or misleading claims was an array of circumlocutions that carefully avoided the charge of lying—that is, knowingly telling an untruth with intent to deceive (Lackey, 2013)—and instead used words such as “falsely”, “wrongly”, “bogus”, or “baseless” when describing a politician’s speech. Other choice phrases referred to “unverified claims” or “repeatedly debunked claims”. This changed in late 2016, when the New York Times first used the word “lie” to characterize an utterance by Donald Trump (Borchers, 2016). The paper again referred to Donald Trump’s lies within days of the inauguration in January 2017 (Barry, 2017), and the practice has since become a routine part of its coverage. Many other mainstream news organizations soon followed suit, and it has now become widely accepted practice to refer to Trump’s lies as lies.

Given that lying involves the intentional uttering of false statements, what tools are at our disposal to infer a person’s intention when they utter falsehoods? How can we know a person is lying rather than being confused? How can we infer intentionality?

Anecdotally, defenders of Donald Trump’s lies have raised precisely that objection to the use of the word “lie” in connection with his falsehoods. This objection runs afoul of centuries of legal scholarship and Western jurisprudence. Brown (2022) argues that inferring intentionality from the evidence is “ordinary and ubiquitous and pervades every area of the law” (p. 2). Inferring intentionality is the difference between manslaughter and murder and is at the heart of the concept of perjury—namely, willfully or knowingly making a false material declaration (Douglis, 2018).

There are at least three approaches that can be pursued to infer intentional deception by a communicating agent with varying degrees of confidence. The first approach is statistical and relies on linguistic analysis of material. Unlike people, who are not very good lie detectors despite performing (slightly) above chance (Bond and DePaulo, 2006; Mattes et al., 2023), recent advances in natural language processing (NLP) have given rise to machine-learning models that can classify texts as deceptive or honest based on subtle linguistic clues (e.g., Braun et al., 2015; Davis and Sinnreich, 2020; Van Der Zee et al., 2021). To illustrate, a model that relied on analysis of the distribution of different types of words achieved 67% accuracy (considerably better than the 52% achieved by human judges) on texts generated by speakers who were either instructed to lie or to be honest. Using the same analysis approach, Davis and Sinnreich (2020) trained a model to classify tweets by Donald Trump as true or false by using independent fact-checks as ground truth. The model was able to classify tweets with more than 90% accuracy, suggesting that Trump uses subtly different language (e.g., more negative emotion, more prepositions and discrepancies) when communicating untruths. A similar model of Trump’s tweets was developed by Van Der Zee et al. (2021), who additionally applied 26 extant models from the literature to Trump’s tweets and showed that most of them performed above chance despite being developed on very different materials. In summary, NLP-based approaches have repeatedly shown their value in the classification of speech into honest and deceptive. The fact that those models succeed also when applied to the tweets of Donald Trump implies at the very least that Trump’s falsehoods are not uttered at random or accidentally but are deployed using specific linguistic techniques.

In general, machine-learning approaches to deception detection have shown promise. A recent systematic review identified 81 studies, 19 of which achieved accuracies in excess of 90%, with a further 15 exceeding 80% accuracy (Constâncio et al., 2023). The machine-learning models in that ensemble were trained on a variety of corpora, ranging from reviews on Tripadvisor (either true or generated with the intent to deceive; Barsever et al., 2020) to segments of a radio game show dedicated to bluff detection by the audience (Papantoniou et al., 2021). In all cases, the ground truth (i.e., whether or not deceptive intent was present) was unambiguously known, and the models learned to identify deceptive text based on linguistic analysis with considerable albeit imperfect success.

The second approach to establishing willful deception relies on analysis of internal documents of institutions such as governments or corporations. Comparing this internal knowledge with the public stances of the same entities can identify active deception, especially when it is large-scale. Numerous such cases exist, mainly involving corporations and their associated infrastructure such as think-tanks and other front groups (Ceccarelli, 2011; Oreskes and Conway, 2010). For example, as early as the 1920s, the electricity industry organized a propaganda campaign to falsely insist that private-sector electricity was cheaper and more reliable than electricity generated in the public sector (Oreskes and Conway, 2023). The tobacco industry’s activities to mislead the public about the dangers from smoking are well documented and established beyond reasonable doubt (e.g., Cataldo et al., 2010; Fallin et al., 2013; Francey and Chapman, 2000; Proctor, 2012). The tobacco industry was well aware of the link between smoking and lung cancer in the 1950s and 1960s (Proctor, 2012), and yet continued publicly to dispute that medical fact using a variety of propagandistic means (Landman and Glantz, 2009; Proctor, 2011). Similarly, analysis of internal documents of the fossil-fuel industry has revealed that industry leaders, in particular ExxonMobil, were fully aware of the reality of climate change and its underlying causes (Supran and Oreskes, 2017, 2021) while simultaneously expending large sums to deny its existence in public (J. Farrell, 2016) and to prevent Congress from enacting climate-mitigation legislation (Brulle, 2018). Ironically, ExxonMobil’s scientists projected global temperatures in the 1970s and 1980s with skill comparable to that of independent academics at the time (Supran et al., 2023). As Baker and Oreskes (2017) argued, the best explanation for ExxonMobil’s conduct is that the company knowingly deceived the public by funding a disinformation machine that denied the realities of climate change. This approach admittedly requires considerable resources and skill, and it is comparatively slow, but in exchange the results it yields are particularly diagnostic and demonstrably useful in litigation. In the case of the tobacco industry, this was the basis for a ruling against Philip Morris under federal racketeering (RICO) law. The appeals court in that case explicitly noted that Philip Morris intentionally deceived the public and that First Amendment (free speech) rights did not apply because they do not protect fraud or deliberate misrepresentation (Farber et al., 2018). In the case of the fossil-fuel industry, litigation has not met with notable success at the time of this writing, but the “Exxon knew” campaign, based on research by Supran and colleagues (Supran et al., 2023; Supran and Oreskes, 2017, 2021), has had considerable public impact, with 178 relevant media articles identified by Google News.

The final approach to identifying intentional deception resembles the approach involving institutional documents but focuses specifically on lies promulgated by identifiable individuals. We illustrate this approach with Donald Trump’s big lie about the 2020 presidential election, focusing on statements made in courts of law. Although Trump was making widespread public accusations of fraud, his lawyers—who filed more than 60 lawsuits in connection with the election—did not echo those accusations in court. On the contrary, they frequently disavowed any mention of fraud in court despite their very different public stance. For example, Rudy Giuliani, one of Trump’s lead attorneys, stood outside a landscaping business on the day most networks declared the election for Biden and thundered that “It’s [the election] a fraud, an absolute fraud.” Ten days later, when questioned by a federal judge in Pennsylvania during one of Trump’s lawsuits (dealing with whether local election officials in Pennsylvania should have allowed voters to fix problems with their mail-in ballots after submitting them), he declared, “This is not a fraud case” (Lerer, 2020). This pattern was pervasive: Trump’s lawyers continued to back away from suggestions that the election was stolen and admitted in court that there was no evidence of fraud, all in contradiction to their client’s public statements (Lerer, 2020).

Notwithstanding the careful hedging of their claims in court, the frivolous suits filed on behalf of Trump resulted in sanctions for several of his attorneys. Two lawyers who did claim widespread voter fraud not only had their suit dismissed but were also sanctioned $187,000 by a federal judge in Colorado for their frivolous, meritless case (Polantz, 2021). The decision was upheld on appeal, and the Supreme Court declined to hear a further appeal by the lawyers (Scarcella, 2023). Altogether, 22 Trump lawyers have been identified who face sanctions in litigation, criminal prosecutions, and state bar disciplinary proceedings. In all cases, what appears to be at issue is violation of the Model Code of Conduct, in particular rules stipulating that claims must be meritorious and that lawyers must exhibit candor and truthfulness (Neff and Fredrickson, 2023).

Since the flurry of lawsuits in late 2020, Trump lawyer Sidney Powell has pleaded guilty to charges arising from her involvement in pushing the big lie. Ms Powell pleaded guilty to “conspiracy to commit intentional interference with performance of election duties” and agreed to cooperate with prosecutors in a criminal case against Donald Trump (Fausset and Hakim, 2023). Two further Trump lawyers have pleaded guilty in the same case and agreed to testify truthfully about other defendants (Blake, 2023).

In a civil suit brought against Rudy Giuliani by two election workers in Georgia, whom he had publicly accused of election fraud, Giuliani conceded before trial that those statements were false (Brumback, 2023). The election workers were awarded $148 million in damages, causing Giuliani to file for bankruptcy in late 2023 (Aratani and Oladipo, 2023). In a further twist, Giuliani repeated his false claims during the trial outside the court room even while his lawyers conceded in court that they were wrong (Hsu and Weiner, 2023).

Giuliani was promptly sued again by the election workers, and at the time of this writing the suit was still under way (Hsu and Weiner, 2023).

The big lie was not just curated and pushed by politicians seeking to cling to power and by their attorneys. It is now public knowledge that one major news network, Rupert Murdoch’s Fox News, knowingly amplified claims about the election that network executives knew to be false. The fact that Fox lied became apparent during a defamation suit filed by Dominion Voting Systems against the network over false allegations that the company’s voting machines had been rigged to steal the 2020 election. As the trial was about to begin, Fox News agreed to pay Dominion $787.5 million and acknowledged that the network had broadcast false statements. The discovery process that preceded the trial had uncovered numerous documents and emails revealing that senior network executives and hosts were convinced that the allegations about the election made by Trump and his allies were untrue (e.g., Peltz, 2023; Terkel et al., 2023). The network nonetheless continued to air those allegations, and its CEO instructed staff that fact-checking “had to stop” because it was bad for business (Levine, 2023). One scholar put it succinctly: “Fox News deliberately misleads the audience for profit” (Nyberg, 2023, p. 1). Although Fox has been repeatedly implicated in spreading disinformation with harmful consequences for the American public (Ash et al., 2023; Bolin and Hamilton, 2018; Bursztyn et al., 2020; DellaVigna and Kaplan, 2007; Feldman et al., 2012; Kull et al., 2003; Simonov et al., 2022), the Dominion case provided a unique opportunity to ascertain that, at least in this instance, the network was knowingly lying to its audience.

The preceding examples illustrate the approaches available to establish—with some degree of confidence—the intention to deceive that is the core element of lies. Our examples are not intended to be exhaustive, but they illustrate the options available to researchers, journalists, and the public to uncover when they are being lied to. The examples also put to rest several generous auxiliary assumptions that have been made about lies in politics, such as their presumed inevitability because issues can be so nuanced that complete honesty is impossible. Contrary to that assumption, the fact that a person’s rhetoric can differ strikingly between courts of law—where penalties apply for misrepresentations and perjury—and politics—where accountability is notoriously absent—reveals not only the intention to deceive but also the person’s sensitivity to the consequences of their speech.

We have already noted that the contrast between what companies such as ExxonMobil or Philip Morris said in public about their products and what they discussed in private was sufficient to provoke legal consequences. Similar arguments, that fraudulent political speech should not be protected by the First Amendment, have been advanced in the context of Trump’s big lie (Henricksen and Betz, 2023).

Although our examination was necessarily limited to a small number of cases, they suffice to illustrate a pathway towards pinpointing intentional disinformation by analysing the utterances of the liars themselves, be they corporations, politicians, or media organizations. We believe that the basic approach is of considerable generality, extending to numerous recorded instances:

  • Politicians catching themselves in a lie by changing their story, indicating that they were telling an untruth on at least one of those occasions (O’Toole, 2022, p. 427).

  • Attorneys of conspiracy theorist Alex Jones—who was sued by parents of the victims for his claims that the Sandy Hook massacre never happened—seeking to defend him by calling him a performance artist who should not be taken seriously (Borchers, 2017).

  • Alex Jones himself admitting in court that the Sandy Hook shooting was “100% real” after having misled millions of people for many years (Associated Press, 2022).

  • Fox News requiring its employees to be vaccinated against COVID-19 or submit to daily testing while the network routinely broadcast anti-vaccination content (Darcy, 2021).

  • Tucker Carlson, former Fox News host, openly admitting that he lies on air (Muzaffar, 2021).

Moving forward

Our work explored three fundamental premises: First, that democracy rests on a foundation of common knowledge (H. Farrell and Schneier, 2018) and that it is imperiled if citizens cannot agree on basic facts such as the integrity of elections (H. Farrell and Schneier, 2018; Tenove, 2020). Second, that while democratic debate—including evidence-informed policy-making—often involves contestation of facts (e.g., Kuklinski et al., 1998), this does not license the use of outright lies and propaganda to willfully mislead the public (Lewandowsky, 2020). Third, that it is often possible to identify falsehoods, disinformation, and lies and differentiate them from good-faith political and policy-related argumentation.

At the time of this writing, Donald Trump is the Republican nominee for the 2024 presidential election. His campaign has rolled out an explicitly authoritarian agenda for a second term (Arnsdorf and Stein, 2023). That agenda is likely to result in less free speech, rather than more, which is ironic given that people such as Jim Jordan, who attack the idea of studying disinformation, do so under the banner of defending the First Amendment. Against this background, the question of how to address Donald Trump’s lies in particular and misinformation in general takes on special importance.

At the more pessimistic end, Barkho (2023) posed three questions about the success of fact-checking Trump’s claims: first, have fact-checkers succeeded in persuading Trump to stop disseminating lies? Second, have the long inventories of falsehoods compiled by fact-checkers embarrassed or shamed Trump? Third, has fact-checking changed public perception of what constitutes truth? At first glance, the answer to all three questions might appear to be a resounding “no” (even though the counterfactual is, of course, unknown). However, at the more optimistic end of the spectrum, experimental studies in which election-fraud misinformation was corrected have found positive effects on trust in electoral processes (Bailard et al., 2022; Painter and Fernandes, 2022), including among Republican respondents and supporters of Trump. Those findings should give rise to a sliver of optimism that even partisans are receptive to corrective messages about election integrity, and therefore underscore the value of disinformation research.

Correcting lies about elections is arguably compatible with the spirit of a democracy. But what is the democratic legitimacy of broader countermeasures against misinformation and disinformation? It is straightforward to explore techniques with which to correct misconceptions in an experiment, in particular if the misinformation is introduced in the experiment itself (e.g., Ecker et al., 2011). It is far less straightforward to deploy such techniques in the public sphere. Who determines what is “misinformation”, and what is “correct”? And how narrow is the gap between correcting misinformation and banning it? Several countries whose democratic credentials are at best questionable have recently outlawed “fake news” (e.g., Burkina Faso, Cambodia, Hungary, India, Malaysia, Singapore, and Vietnam). In those cases, fake news can damage democracy not only by disinforming the public but also because countermeasures can be used to curb civil liberties and justify authoritarian crackdowns (Neo, 2022; Vese, 2022). Indeed, given that Donald Trump has routinely labeled any media coverage he did not like as “fake news”, perhaps the worst response to misinformation would be a law against fake news designed by Donald Trump and his allies.

There are, however, numerous ways in which the public can be better protected by the platforms—in particular if prodded into action by suitable regulations—against disinformation. One avenue involves content moderation and removal of unacceptable or problematic content, such as hate speech. The public is broadly supportive of moderation in certain cases (Kozyreva, Herzog, et al., 2023), and the European Union’s recent Digital Services Act (DSA) acknowledges a role for content moderation while highlighting the need for transparency of the underlying rules (for details, see Kozyreva, Smillie, et al., 2023). In addition, there are a number of alternative approaches that aim to inform or educate consumers rather than govern content directly. Those approaches have the advantage that they sidestep concerns about censorship and that they are demonstrably scalable and readily deployable by the platforms.

One such approach involves the provision of “nutrition labels”, that is, indicators of the quality of a source. Reliable indicators of quality exist that are based on basic journalistic principles (Lin et al., 2023), and it is well known that perceived source credibility can influence the persuasiveness of misinformation (Nadarevic et al., 2020; Prike et al., 2024). The effectiveness of source-quality indicators can be enhanced by introducing friction, for example, by requiring users to expend additional clicks to make information visible (L. Fazio, 2020; Pillai and Fazio, 2023). Naturally, such indicators cannot be perfect, and even sources of widely acknowledged high quality can publish dubious content. This makes it important to go beyond credibility and consider alternative approaches, such as those that boost users’ ability to spot deception and enhance their information-discernment skills. These approaches range from teaching “critical ignoring” (Kozyreva, Wineburg, et al., 2023), which enables people to disregard information that is unlikely to warrant the expenditure of their limited attention, to psychological inoculation or “prebunking” (Lewandowsky and van der Linden, 2021; Roozenbeek et al., 2022), which refutes a lie in advance by explaining the rhetorical techniques that disinformers use to mislead consumers (e.g., scapegoating, false dichotomies, ad hominem attacks, and so on). Through short “edutainment” videos that are displayed as ads or public-service messages, this approach has been scaled on social media to empower millions of people to spot manipulation techniques (Goldberg, 2023). Meta-analyses have affirmed the efficacy of the inoculation approach (Banas and Rains, 2010; Lu et al., 2023). However, while standard debunking and prebunking interventions promise to be effective regardless of the cultural context in which they are applied (Blair et al., 2024; Pereira et al., 2023; Porter and Wood, 2021; but see Pereira et al., 2022), the effects of other interventions, such as media-literacy training, may be less robust in the Global South (Badrinathan, 2021). Some interventions developed and successfully applied in the Global North may also be less suitable in less-developed countries, for example if they target dissemination channels that have limited relevance locally (Badrinathan and Chauchard, 2024; de Freitas Melo et al., 2019).

Overall, much is now known about various cognitively-inspired countermeasures to correct misinformation or to protect people against being misled in the first place. For further extensive discussion of these countermeasures, see Ecker et al. (2022) and Kozyreva et al. (2024). Some of the cognitive science of misinformation has been reflected in European regulatory initiatives, such as the strengthened Code of Practice on Disinformation (Kozyreva, Smillie, et al., 2023). In addition, specific evidence-based recommendations for platforms have been developed by Roozenbeek et al. (2023) and Wardle and Derakhshan (2017).

Our work has also identified several important questions for future research. We consider the long-term consequences of misinformation for society to be a particularly pressing issue. We have a reasonably good understanding of the individual-level cognitive processes that are engaged when a person is exposed to a single piece of misinformation (Ecker et al., 2022). By contrast, we know very little about the cognitive and social consequences for an individual who is inundated with information of dubious quality for prolonged periods of time, and we do not know how societies are affected by epistemic uncertainty and chaos in the long run. Numerous indicators suggest that Western societies, in particular the United States, are ailing (e.g., Lewandowsky et al., 2017), but attributing those trends to misinformation or epistemic chaos is difficult. On those occasions where researchers have successfully isolated causal effects, they have implicated specific media outlets (Fox News in particular) in compromising public health (Bursztyn et al., 2020; Simonov et al., 2020), and they have identified a role of social media in causing ethnic hate crimes and xenophobia (Bursztyn et al., 2019; Müller and Schwarz, 2021). However, it is as yet unclear how generalizable those findings are, and much additional work remains to be done (for a review, see Lorenz-Spreen et al., 2022).

Future research should also address some of the limitations of fact-checking, such as the difficulties of verifying statements about the future (Nieminen and Sankari, 2021) or arguments that employ the rhetorical technique of “paltering” — that is, the use of truthful statements to convey a misleading impression (Lewandowsky et al., 2016; Rogers et al., 2017). One approach is to focus on what is pragmatically useful for people to make informed decisions, such as whether a claim is misleading (Birks, 2019), with critical thinking methods offering a means of identifying the presence of logical fallacies (Cook et al., 2018).

Increasing research attention is being paid to the concept of discernment; that is, the extent to which accurate information is believed more than misinformation (Pennycook and Rand, 2021). Focusing on discernment rather than on the acceptance of misinformation alone guards against inadvertently developing interventions that reduce belief in facts and misinformation equally. A general cynicism and disbelief of everything does not solve the misinformation problem; instead, we must boost people’s ability to distinguish between facts and falsehoods.
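To make the notion concrete, one common operationalization (details vary across studies, and the notation here is ours rather than a standard from the literature) computes discernment as the difference between the mean perceived accuracy of true items and the mean perceived accuracy of false items:

\[
D \;=\; \frac{1}{|T|}\sum_{i \in T} a_i \;-\; \frac{1}{|F|}\sum_{j \in F} a_j ,
\]

where \(a_i\) is a participant's accuracy rating of item \(i\), and \(T\) and \(F\) are the sets of true and false items, respectively. On such a measure, an intervention that merely breeds blanket scepticism lowers both terms and leaves discernment largely unchanged, whereas an intervention that genuinely improves people's ability to tell facts from falsehoods increases \(D\).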

Conclusion

We began the paper with a quote from Hannah Arendt, one of the foremost analysts of 20th-century totalitarianism. It is worth revisiting that quotation here in its extended form, which underscores the urgency of finding a solution to the epistemic crisis affecting democracy in the U.S. and beyond:

“If everybody always lies to you, the consequence is not that you believe the lies, but rather that nobody believes anything any longer…. And a people that no longer can believe anything cannot make up its mind. It is deprived not only of its capacity to act but also of its capacity to think and to judge. And with such a people you can then do what you please.” (our emphasis)

— Hannah Arendt