Introduction

‘This is a free country. Land of the free. Go to China if you want communism’ yelled an American protester at a nurse counter-protesting the resumption of commercial activity 5 weeks into the country’s COVID-19 crisis (Armus and Hassan 2020). Like many policy challenges, the COVID-19 crisis is exposing deep-seated political and epistemological divisions, fueled in part by contestation over scientific evidence and ideological tribalism stoked in online communities. The proliferation of social media has democratized access to information with evident benefits, but it also raises concerns about the difficulty users face in distinguishing between truth and falsehood. The perils of ‘fake news’—false information masquerading as verifiable truth, often disseminated online—are acutely apparent during public health crises, when false equivalence is drawn between scientific evidence and uninformed opinion.

In an illustrative episode from April 2020, the scientific community’s largely consensual view about the need for social distancing to limit the spread of COVID-19 was challenged by protesters in the American states of Minnesota, Michigan, and Texas, who demanded in rallies that governors immediately relax social distancing protocols and re-open shuttered businesses. Populist skepticism about COVID-19 response in the USA had arguably been growing since President Donald Trump’s early dismissals of the severity of the virus (US White House 2020) and his call for protesters to ‘liberate’ states undertaking containment measures (Shear and Mervosh 2020). These actions were seen by some as evidence of the presidential administration’s willingness to politicize virus response; indeed, critical language from some politicians and commentators cast experts and political opponents as unnecessarily panicky and politically motivated to overstate the need for lock-downs and business closures. Despite the salience of this recent phenomenon, anti-science populism has an arguably extended history, not only for issues related to public health (e.g., virus response and, prior to COVID-19, vaccinations) but also for climate change (Fischer 2020; Huber 2020; Lejano and Dodge 2017; Lewandowsky et al. 2015). Anti-science skepticism, often lacking a broad audience and attention from mainstream media, is left to peddle scientifically unsubstantiated claims in online communities, where such content remains widely accessible and largely unregulated (Edis 2020; Szabados 2019). As such, the issue of fake news deserves closer scrutiny as the world faces its greatest public health crisis in a century.

There is no consensus definition of fake news (Shu et al. 2017). Based on a survey of articles published between 2003 and 2017, Tandoc et al. (2018) propose a typology for how the concept can be operationalized: satire, parody, fabrication, manipulation, propaganda, and advertising. Waszak et al. (2018) propose a similar typology (with the overlapping categories of fabricated news, manipulated news, and advertising news) but add ‘irrelevant news’ to capture the cooptation of health terms and topics to support unrelated arguments. Shu et al. (2017) cite verifiable lack of authenticity and intent to deceive as general characteristics of fake news. Making a distinction between fake news and brazen falsehoods, which has implications for this study’s focus on the behavior of the individual information consumer, Tandoc et al. (2018) argue ‘while news is constructed by journalists, it seems that fake news is co-constructed by the audience, for its fakeness depends a lot on whether the audience perceives the fake as real. Without this complete process of deception, fake news remains a work of fiction’ (p. 148).

Amidst the COVID-19 crisis, during which trust in government is not merely an idle theoretical topic but has substantial implications for public health, deeper scholarly understanding of the power and allure of fake news is needed. According to Porumbescu (2018), ‘the evolution of online mass media is anything but irrelevant to citizens’ evaluations of government, with discussions of “fake news,” “alternative facts,” “the deep state,” and growing political polarization rampant’ (p. 234). With the increasing level of global digital integration comes the growing difficulty of controlling the dissemination of misinformation. Efforts by social media platforms (understood here as the underlying organizational structures and operations; hereafter SMPs) and governments have targeted putative sources of misinformation, but engagement with fake news by individual users (i.e., sharing and promoting links) is an additional realm in which the problem of fake news can be addressed.

Examining the motivations driving an individual’s engagement with fake news, this study introduces a formal mathematical model that illustrates the cost to an individual of making low- or high-level efforts to resist fake news. The intent is to reveal mechanisms by which SMPs and governments can intervene at individual and broader scales to contain the spread of willful misinformation. This article continues with a literature review focusing on fake news in social media and policy efforts to address it. This is followed by the presentation of the model, with a subsequent section focusing on policy insights and recommendations that connect the findings of the model to practical implications. The conclusion reflects more broadly on the ‘post-truth’ phenomenon as it relates to policymaking and issues a call for continued research around epistemic contestation.

Literature review

A canvassing of literature about fake news could draw from an array of disciplines including communications, sociology, psychology, and economics. We focus on the treatment of fake news by the public policy literature—an angle that engages discussions about cross-cutting issues like misinformation, politicization of fact, and the use of knowledge in policymaking. The review is in two parts. The first focuses on the intersection of fake news, social media, and pandemics (in particular the COVID-19 crisis), and the second on policy efforts to address individual reactions to fake news.

Fake news, social media, and pandemics

Fake news and social media as topics of analysis are closely intertwined, as the latter is considered a principal conduit through which the former spreads; indeed, Shu et al. (2017) call social media ‘a powerful source for fake news dissemination’ (p. 23). The aftermath of the 2016 US presidential election sent scholars scrambling to the topic of fake news, misinformation, and populism; as such, information-filtering through political and cognitive bias is a topic now enjoying a spirited revival in the literature (Fang et al. 2019; Polletta and Callahan 2019; Cohen 2018; Allcott and Gentzkow 2017; DiFranzo and Gloria-Garcia 2017; Flaxman et al. 2016; Zuiderveen Borgesius et al. 2016). A popular heuristic for conceptualizing the phenomenon of social media-enabled fake news is the notion of the ‘echo chamber’ effect (Shu et al. 2017; Barberá et al. 2015; Agustín 2014; Jones et al. 2005), in which information consumers intentionally self-expose only to content and communities that confirm their beliefs and perceptions while avoiding those that challenge them. The effect leads to the development of ideologically homogeneous social networks whose members derive collective satisfaction from frequently repeated narratives (a process McPherson et al. (2001) label ‘homophily’). This phenomenon leads to ‘filter bubbles’ (Spohr 2017) in which ‘algorithmic curation and personalization systems […] decreases [users’] likelihood of encountering ideologically cross-cutting news content’ (p. 150). The filtering mechanism is both self-imposed and externally imposed, based on the algorithmic efforts of SMPs to circulate content that maintains user interest (Tufekci 2015). For example, in a Singapore-based study of the motivations behind social media users’ efforts to publicly confront fake news, Tandoc et al. (2020) find that users are driven most by the relevance of the issue covered, their interpersonal relationships, and their ability to convincingly refute the misinformation; according to the authors, ‘participants were willing to correct when they felt that the fake news post touches on an issue that is important to them or has consequences to their loved ones and close friends’ (p. 393). As the issue of misinformation has gained further salience during the COVID-19 episode, this review continues by exploring scholarship about fake news in the context of pandemics.

Research has shown that fake news and misinformation can have detrimental effects on public health. In the context of pandemics, fake news operates by ‘masking healthy behaviors and promoting erroneous practices that increase the spread of the virus and ultimately result in poor physical and mental health outcomes’ (Tasnim et al. 2020; n.p.), by limiting the dissemination of ‘clear, accurate, and timely transmission of information from trusted sources’ (Wong et al. 2020; p. 1244), and by compromising short-term containment efforts and longer-term recovery efforts (Shaw et al. 2020). First used by World Health Organization Director-General Tedros Adhanom Ghebreyesus in February 2020 to describe the rapid global spread of misinformation about COVID-19 through social media (Zarocostas 2020), the term ‘infodemic’ has recently gained popularity in pandemic studies (Hu et al. 2020; Hua and Shaw 2020; Medford et al. 2020; Pulido et al. 2020). Similar terms are ‘pandemic populism’ (Boberg et al. 2020) and the punchy albeit casual ‘covidiocy’ (Hogan 2020). Predictably, misinformation has proliferated with the rising salience of COVID-19 (Cinelli et al. 2020; Frenkel et al. 2020; Hanafiah and Wan 2020; Pennycook et al. 2020; Rodríguez et al. 2020; Singh et al. 2020). In an April 2020 press conference, US President Donald Trump made ambiguous reference to the possible value of ingesting disinfectants to treat the virus (New York Times 2020), an utterance that elicited both concern and ridicule.

Scholarly efforts to understand misinformation in the COVID-19 pandemic contribute to an existing body of similar research in other contexts, including the spread of fake news during outbreaks of Zika (Sommariva et al. 2018), Ebola (Spinney 2019; Fung et al. 2016), and SARS (Taylor 2003). Research about misinformation and COVID-19 draws also on existing research about online information-sharing behaviors more generally. For example, in a study about the role of fake news in public health, Waszak et al. (2018) find that 40 percent of the most-shared links on common social media contained fallacious content (with vaccination having the highest incidence, at 90 percent). Taking a still broader view, understandings about the politicization of public health information draw from research about science denialism more generally, including the process by which climate denial narratives as ‘alternative facts’ are socio-culturally constructed to protect ideological imaginaries (see Fischer (2019) for a similar discussion related to climate change). On the other hand, there is also evidence that the use of social media as a conduit for information dissemination by public health authorities and governments has been useful, including for communicating the need for social distancing, indicating support for healthcare workers, and providing emotional encouragement during lock-down (Thelwall and Thelwall 2020). As such, it is crucial to distinguish productive uses of social media from unproductive ones, with the operative characteristic being the effect on the safety and wellbeing of information consumers and the broader public.

Policy efforts to address individual reactions to fake news

The second part of this review explores literature about policy efforts to address fake news insofar as it is understood as a policy problem. The phenomenon of fake news can be considered an individual-level issue, and this is the perspective adopted by this review and study. Within a ‘marketplace’ of information exchange, consumers encounter information and must decide whether to engage with it or to discredit and dismiss it. As such, many policy interventions targeting fake news focus on verification, helping equip social media users with the tools to identify and confront fake news (Torres et al. 2018). Nevertheless, the efficacy of such policy tools depends on their calibration to individual cognitive and emotional characteristics. For example, Lazer et al. (2018) outline cognitive biases that determine the allure of fake news, including self-selection (limiting one’s consumption only to affirming content), confirmation (giving greater credibility to affirming content), and desirability (accepting only affirming content). To this list Rini (2017) adds ‘credibility excess’ as a way of ascribing ‘inappropriately high testimonial credibility [to a news item] on the basis of [the source’s] demography’ (p. E-53).

A well-developed literature also indicates that cognitive efforts and characteristics determine an individual’s willingness to engage with fake news. According to a study about individual behaviors in response to the COVID-19 crisis in the USA, Stanley et al. (2020) find that ‘individuals less willing to engage effortful, deliberative, and reflective cognitive processes were more likely to believe the pandemic was a hoax, and less likely to have recently engaged in social-distancing and hand-washing’ (n.p.). The individual cognitive perspective is utilized also by Castellacci and Tveito (2018) in a review of literature about the impact of internet use on individual wellbeing: ‘the effects of Internet on wellbeing are mediated by a set of personal characteristics that are specific to each individual: psychological functioning, capabilities, and framing conditions’ (p. 308). Ideological orientation has likewise been found to associate with perceptions about and reactions to fake news; for example, Guess et al. (2019) find in a study of Facebook activity during the 2016 US presidential election that self-identifying political conservatives (the ‘right’) were more likely than political liberals (the ‘left’) to share fake news and that the user group aged 65 and older (controlling for ideology) shared over six times more fake news articles than did the youngest user group. A similar age-related effect on psychological responses to social media rumors is observed by He et al. (2019) in a study of usage patterns for messaging application WeChat; older users who are new to the application struggle more to manage their own rumor-induced anxiety. Network type also plays a role in determining fake news engagement; circulation of fake news and misinformation was found to be higher among anonymous and informal (individual and group) social media accounts than among official and formal institutional accounts (Kouzy et al. 2020).

The analytical value of connecting individual behavior with public policy interventions has prompted studies about the conduits through which policies influence social media consumers. According to Rini (2017), the problem of fake news ‘will not be solved by focusing on individual epistemic virtue. Rather, we must treat fake news as a tragedy of the epistemic commons, and its solution as a coordination problem’ (p. E-44). This claim makes reference to a scale and topic – the actions of government – that are underexplored in studies about the failure to limit the spread of fake news. Venturing as well into the realm of interpersonal dynamics, Rini continues by arguing that the development of unambiguous norms can enhance individual accountability, particularly around the transmission of fake news through social media sharing as a ‘testimonial endorsement’ (p. E-55). Extending the conversation about external influences on individual behavior, Lazer et al. (2018) classify ‘political interventions’ into two categories: (1) empowerment of individuals to evaluate fake news (e.g., training, fact-checking websites, and verification mechanisms embedded within social media posts to evaluate information source authenticity) and (2) SMP-based controls on dissemination of fake news (e.g., identification of media-active ‘bots’ and ‘cyborgs’ through algorithms). The authors also advocate wider applicability of tort lawsuits related to the harm caused by individuals sharing fake news. From a cognitive perspective, Van Bavel et al. (2020) add ‘prebunking’ as a form of psychological inoculation that exposes users to a modest amount of fake news with the purpose of helping them develop an ability to recognize it; the authors cite support in similar studies by van der Linden et al. (2017) and McGuire (1964). These interventions, in addition to crowd-sourced verification mechanisms whereby users rate the perceived accuracy of social media posts by other users, have the goal of conditioning and nudging users to reflect more deeply on the accuracy of the news they encounter.

Given that the model introduced in the following section concerns the issue of fake news from the perspective of individual behavior and that topics addressed by fake news are often ideologically contentious, it is appropriate to acknowledge the literature related to cognitive bias, beliefs, and ideologies with reference to political behavior. In a study about narrative framing, Mullainathan and Shleifer (2005) explore how the media, whether professional or otherwise, seeks to satisfy the confirmation biases of targeted or segmented viewer groups; when extended to the current environment of online media, narrative targeting increases the likelihood that fake news will be shared due to its attractiveness to particular audiences. In turn, this narrative targeting perpetuates the process by which information consumers construct their own narratives about political issues in a way that comports with their ideologies (Kim and Fording 1998; Minar 1961) and personalities (Duckitt and Sibley 2016; Caprara and Vecchione 2013). This tendency is shown to be strongly influenced not only by (selectively observed) reality but also by individual perceptions that Kinder (1978; p. 867) labels ‘wishful thinking.’ Further, the cognitive tendency to classify reality through sweeping categorizations (e.g., ‘liberals’ vs. ‘conservatives,’ a common polarity in American politics) compels individuals to associate more strongly with one side and distance themselves further from the other (Vegetti and Širinic 2019; Devine 2015), thereby biasing an individual’s cognitive processing of information (Van Bavel and Pereira 2018). This observation is arguably relevant to current online discourses and the content of fake news, which often reflect extreme party-political rivalries and left–right partisanship (Spohr 2017; Gaughan 2016). These issues are relevant as they bear strongly on individuals’ choices about effort levels related to their interactions with and subjective judgments of fake news.

Finally, few attempts have been made to apply formal mathematical modeling to understand the behavior of individuals with respect to the consumption of fake news on social media; notable examples are Shu et al. (2017), who model the ability of algorithms to detect fake news, and Tong et al. (2018), who model the ‘multi-cascade’ diffusion of fake news. Papanastasiou (2020) uses a formal mathematical model to illustrate the behavior of SMPs in response to users’ sharing of fake news. To this limited body of research we contribute a formal mathematical model addressing the motivations of users to engage with or dismiss fake news when encountered.

The model

The model presented in this section examines the behavior of a hypothetical digital citizen (DC) who encounters fake news while using social media. The vulnerability of the DC in this encounter depends on the DC’s level of effort in resisting fake news. To illustrate this dynamic, we adopt the modeling approach used by Lin et al. (2019) and Hartley et al. (2019) that considers the equilibrium choices of a rational decision-maker as determined by individual attributes and external factors. The advantage of this model is its incorporation of factors related to ethical standards in addition to cost–benefit considerations in decision-making. This type of model has been widely used in studies related to taxpayer behavior (Eisenhauer 2006, 2008; Yaniv 1994; Beck and Jung 1989; Srinivasan 1973), in which the tension between ethics and perceived benefits of acting unethically is comparable to that faced by a DC.

The model’s parameters are intended to aid thinking about issues within the ambit of public policy, making clearer the assumptions about individual behaviors and consequences of those behaviors as addressable by government interventions. While the model examines DC behavior, it is not intended to be meaningful for research about psychology and individual or group behavior more generally; mentions of behavior and its motivations and effects are made in service only to arguments about the policy implications of governing behavior in free societies. The remainder of this section specifies the model, which aims to formally and systematically observe dynamics among effort levels, standards, and utility for consuming and sharing fake news.

Choice of effort level

The model assumes that the DC is a rational decision-maker; that is, in reacting to fake news she maximizes her utility as determined by two factors: consumer utility (the benefits accruing to the DC by engaging in a particular way with fake news) and ethical standards. Regarding consumer utility, the DC makes an implicit cost–benefit analysis; for ethical standards, she behaves consistently with her personally held ethical norms. For simplicity, we assume that the DC’s overall utility function U takes the following form:

$$U = \left[ {\alpha \left( e \right) \cdot W\left( e \right) + \left( {1 - \alpha \left( e \right)} \right) \cdot S} \right]/e$$
(1)

where

  • e is the DC’s effort level in reacting to fake news. For simplicity of exposition, the DC is assumed to choose from two levels of effort: low (e = eL) and high (e = eH). If the DC chooses low effort, she increases her consumption of fake news; with high effort, she reduces her consumption. Defining \(q = e_{\text{H}} /e_{\text{L}}\), we assume that q > 1; a higher q represents a higher cost to the DC because it reflects a higher relative level of effort.

  • α is the weight the DC gives to her consumer utility and (1 − α) is the weight she gives to her utility from ethical behavior. We assume that α is a function of effort level as follows: \(\alpha(e_{\text{H}}) = \alpha\), where \(0 < \alpha < 1\), and \(\alpha(e_{\text{L}}) = 1\). That is, by choosing the high-effort level, the DC derives her utility not only from consumption (α > 0) but also from ethical behavior (\(\left( {1 - \alpha } \right) > 0\)). If choosing the low-effort level, the DC derives her utility only from consumption (α = 1).

  • W(e) is the consumer utility that the DC gains from engagement with social media, which is assumed to depend on her effort level as follows: \(W\left( {e_{\text{H}} } \right) = W\) and \(W\left( {e_{\text{L}} } \right) = W/\omega\) where ω > 1. That is, if the DC chooses the low-effort level, she derives less consumer utility due to the negative effects of engaging with fake news (as described in the literature review).

  • S represents the DC’s utility from observing her own standards (beyond cost–benefit considerations) for guiding online behavior. The assumption is that the DC gains utility by aligning her behavior choice (regarding whether to engage with fake news) with ethical norms and her own beliefs.
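For concreteness, the utility comparison in Eq. (1) can be sketched numerically. The following minimal Python sketch is our own illustration; the function name and all parameter values (eL = 1, eH = 2, α = 0.6, W = 10, ω = 1.5, S = 8) are assumptions chosen for exposition, not estimates from data:

```python
# Minimal numerical sketch of Eq. (1). All parameter values are
# illustrative assumptions, not estimates from data.

def utility(effort, e_L=1.0, e_H=2.0, alpha=0.6, W=10.0, omega=1.5, S=8.0):
    """Overall utility U of the DC for a chosen effort level.

    High effort: alpha(e_H) = alpha and W(e_H) = W,
                 so U = [alpha*W + (1 - alpha)*S] / e_H.
    Low effort:  alpha(e_L) = 1 and W(e_L) = W/omega,
                 so U = (W/omega) / e_L.
    """
    if effort == "high":
        return (alpha * W + (1 - alpha) * S) / e_H
    if effort == "low":
        return (W / omega) / e_L
    raise ValueError("effort must be 'high' or 'low'")

print(utility("high"))  # 4.6   -> utility from high effort
print(utility("low"))   # ~6.67 -> utility from low effort (chosen here)
```

With these assumed values, q = eH/eL = 2 and the DC’s utility-maximizing choice is low effort, consistent with the positive-φ scenario examined below.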

Given the utility function in Eq. (1), the DC exerts high effort in reacting to fake news if and only if:

$$U|_{{e = e_{\text{H}} }} > U|_{{e = e_{\text{L}} }}$$
(3)
$$\leftrightarrow \left[ {\alpha \cdot W + \left( {1 - \alpha } \right) \cdot S} \right]/e_{\text{H}} > \left[ {W/\omega } \right]/e_{\text{L}}$$
(4)
$$\leftrightarrow \left[ {\alpha \cdot W + \left( {1 - \alpha } \right) \cdot S} \right] > qW/\omega$$
(5)
$$\leftrightarrow \omega \cdot \left( {1 - \alpha } \right) \cdot S > \left( {q - \omega \alpha } \right)W$$
(6)
$$\leftrightarrow S > \left[ {\frac{q - \omega \alpha }{{\omega \cdot \left( {1 - \alpha } \right)}}} \right] \cdot W$$
(7)
$$\leftrightarrow S > \varphi \cdot W$$
(8)

where

$$\varphi = \varphi \left( {q, \omega , \alpha } \right) = \left[ {\frac{q - \omega \alpha }{{\omega \cdot \left( {1 - \alpha } \right)}}} \right]$$
(9)

That is, the DC chooses high-level effort if \(S > \varphi \cdot W\) and low-level effort if \(S < \varphi \cdot W\).

A negative value for φ reflects a trivial scenario in which condition (8) is satisfied at any value of S because the right-hand side of the inequality is negative. This scenario occurs if and only if \((q - \omega \alpha) < 0\). This condition reflects a situation in which the relative cost of making high effort (q) is low and the consumer utility loss of making low effort (ω) is high such that q < ωα for a given α; this implies that high effort is the DC’s utility-maximizing choice. This scenario suggests an environment in which social media is a mature platform with a well-established regulatory regime and rational and discerning users.

A positive value for φ reflects the inverse scenario. This scenario is the main focus of this modeling exercise because it better reflects the existing dynamics of the current social media landscape. This scenario occurs if and only if \(q - \omega \alpha > 0\), which reflects a situation in which the relative cost of making high effort (q) is high and the consumer utility loss of making low effort (ω) is low such that \(q > \omega \alpha\) for a given α; this implies that low effort is the DC’s utility-maximizing choice. This scenario suggests an environment in which social media is poorly regulated and characterized by irrational use.
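The decision rule in condition (8), together with the sign-based distinction between the two scenarios, can be sketched as follows (a continuation of the illustrative Python sketch above; the function names are ours and all values remain assumptions):

```python
def phi(q, omega, alpha):
    """Boundary coefficient from Eq. (9): (q - omega*alpha) / (omega*(1 - alpha))."""
    return (q - omega * alpha) / (omega * (1 - alpha))

def effort_choice(S, W, q, omega, alpha):
    """Condition (8): the DC chooses high effort iff S > phi * W."""
    return "high" if S > phi(q, omega, alpha) * W else "low"

# Trivial scenario: q < omega*alpha, so phi < 0 and condition (8) holds at any S.
print(phi(q=1.1, omega=2.0, alpha=0.6))   # -0.125 (negative)

# Main scenario: q > omega*alpha, so phi > 0 and the choice depends on S vs. phi*W.
print(phi(q=2.0, omega=1.5, alpha=0.6))   # ~1.83 (positive)
print(effort_choice(S=8.0, W=10.0, q=2.0, omega=1.5, alpha=0.6))  # 'low'
```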

Figure 1 graphically illustrates this scenario; consumer utility W is represented on the horizontal axis and ethical behavior utility S on the vertical axis. The locus S = φ·W establishes a boundary line dividing the plane into two areas. In the area below the boundary line (the shaded area), the condition S < φ·W holds, depicting the DC’s choice of low effort; in the upper (non-shaded) area, the condition S > φ·W holds, depicting the DC’s choice of high effort.

Fig. 1: Effort level by DC in reacting to fake news

Digital citizen’s behavior at equilibrium

Given the latter scenario presented in the “Choice of effort level” section, the DC’s equilibrium decision depends on the demand and supply dynamics governing her behavior. We assume that the DC’s demand curve is a horizontal line S = SDC (Fig. 2), which implies that at any given level of SDC (the utility she gains from ethical behavior), her demand for consumer utility ranges from 0 to +∞. Specifically, this assumes that the standards adopted by a DC are endogenous and shaped by the DC’s belief systems and other individual-specific factors; thus, these standards are fixed at a given level regardless of the value of W (the consumer utility the DC gains from engagement with social media). It is plausible that the DC’s standards may shift relative to the value of W, but there is no guiding heuristic about the direction of this relationship, and for modeling simplicity the DC’s standards are assumed to be absolute rather than relative. Regarding the source of the supply of consumer utility, the model assumes that the DC’s personal social networks determine this variable, as represented by an upward-sloping linear curve of the following form:

$$W = a + \rho \cdot S$$
(10)

where ρ > 0.

Fig. 2: Digital citizen’s equilibrium choice for effort level

The upward slope of this supply curve implies that the DC derives a higher consumer utility W from interacting with her personal social networks as her S (the utility from ethical behavior) increases; that is, the DC is satisfied with online activity when not needing to compromise her ethical standards to access it. The coefficient ρ (ρ > 0) represents the value-set adopted by the DC’s personal social network. A greater ρ indicates that this network ‘publicly’ grants ethical behavior positive feedback, resulting in a larger consumer utility reward for the DC; this is manifest in expressions of validation such as the number of ‘likes’ and positive comments. As shown in Fig. 2, the DC’s equilibrium decision exists at point Q, where the demand curve S = SDC crosses the supply curve \(W = a + \rho S\); the equilibrium consumer utility is thus \(W_{Q} = a + \rho S_{\text{DC}}\). Point Q can lie in the upper (high-effort) area (Panel A) or the lower (low-effort) area (Panel B). The model implies that the DC’s equilibrium effort level depends on multiple factors, including her individual preferences and the characteristics of the setting.
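A minimal sketch of how the equilibrium point Q can be located and classified under these assumptions follows (function names and parameter values are again our own illustrative choices):

```python
def phi(q, omega, alpha):
    # Boundary coefficient from Eq. (9).
    return (q - omega * alpha) / (omega * (1 - alpha))

def equilibrium(S_DC, a, rho, q, omega, alpha):
    """Intersect demand (S = S_DC) with supply (W = a + rho*S, Eq. (10))
    and classify the resulting point Q by condition (8)."""
    W_Q = a + rho * S_DC          # equilibrium consumer utility at Q
    area = "high" if S_DC > phi(q, omega, alpha) * W_Q else "low"
    return W_Q, area

# Illustrative: a DC with standards S_DC = 8 in a network with a = 2, rho = 0.5.
print(equilibrium(S_DC=8.0, a=2.0, rho=0.5, q=2.0, omega=1.5, alpha=0.6))
# -> (6.0, 'low'): Q lies in the low-effort area, as in Panel B of Fig. 2.
```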

Policy insights and recommendations

Policy insights

The model introduced in “The model” section shows that the DC’s choice of reacting productively to fake news falls into the high- or low-effort area and is an equilibrium decision. That is, the choice remains stable as long as the model’s key policy-related parameters, which include q, ω, and ρ, do not change significantly. This section discusses how a change in each of these policy parameters can potentially alter the DC’s equilibrium choice by shifting the boundary line S = φW between the high- and low-effort areas and by shifting the supply curve W = a + ρ·S.

Shifting the boundary line S = φW

Two policy parameters that can shift the boundary line S = φW are q and ω. The coefficient φ depends on q and ω in differing ways. Taking the derivative of φ with respect to q from Eq. (9) yields the following equation:

$$\partial \varphi /\partial q = 1/\left[ {\omega \cdot \left( {1 - \alpha } \right)} \right] > 0$$
(11)

This equation indicates that φ is an increasing function of q. In other words, φ rises if q rises and φ falls if q falls. As such, an increase in q increases φ and thereby rotates the boundary line S = φW counter-clockwise. As shown in Fig. 3, this counter-clockwise rotation expands the lower (low-effort) area and shrinks the upper (high-effort) area. This implies that a rise in the relative cost of making high effort (compared to making low effort) can push the DC’s equilibrium choice from the high-effort into the low-effort area, illustrating the risk that the DC’s effort level falls and remains low when a setting changes in a way that does not encourage high effort. On the other hand, a decrease in q lowers φ and thereby rotates the boundary line S = φW clockwise. As shown in Fig. 4, this clockwise rotation expands the upper (high-effort) area and shrinks the lower (low-effort) area. This implies that a fall in the relative cost of making high effort (compared to making low effort) can push the DC’s equilibrium choice from the low-effort to the high-effort area.
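The monotonicity asserted by Eq. (11) can be verified numerically with a finite-difference approximation (reusing the phi definition from the earlier sketches; values remain illustrative):

```python
def phi(q, omega, alpha):
    # Boundary coefficient from Eq. (9).
    return (q - omega * alpha) / (omega * (1 - alpha))

q, omega, alpha, h = 2.0, 1.5, 0.6, 1e-6
numeric = (phi(q + h, omega, alpha) - phi(q, omega, alpha)) / h
analytic = 1.0 / (omega * (1 - alpha))   # Eq. (11)
print(numeric, analytic)  # both ~1.667 > 0: phi rises with q,
                          # rotating the boundary counter-clockwise
```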

Fig. 3: Effect of increasing q or lowering ω on equilibrium choice of effort. The DC’s equilibrium choice changes from the high-effort to the low-effort area as φ rises to φ1 due to an increase in q or a reduction in ω.

Fig. 4: Effect of decreasing q or increasing ω on the DC’s equilibrium choice of effort. The DC’s equilibrium choice moves from the low-effort to the high-effort area as φ declines to φ2 due to a fall in q or an increase in ω.

Similarly, taking the derivative of φ with respect to ω from Eq. (9) yields the following equation:

$$\partial \varphi /\partial \omega = \left[ { - \alpha \omega - \left( {q - \omega \alpha } \right)} \right]/\left[ {\left( {1 - \alpha } \right)\omega^{2} } \right] = - q/\left[ {\left( {1 - \alpha } \right)\omega^{2} } \right] < 0$$
(12)

This equation indicates that φ is a decreasing function of ω. As such, a change in ω causes the boundary line S = φW to rotate. If ω falls, φ rises; hence, the boundary line rotates counter-clockwise, expanding the low-effort area and shrinking the high-effort area. This shift can push the DC’s equilibrium choice from the high-effort to low-effort area (Fig. 3), suggesting that weak regulation of fake news reduces the consumer utility loss caused by low effort and thereby increases the possibility of the DC falling from high- to low-effort equilibrium. On the other hand, if ω increases, φ decreases; hence, the boundary line rotates clockwise, enlarging the high-effort area and shrinking the low-effort area. This shift can push the DC’s equilibrium choice from the low-effort to high-effort area (Fig. 4). This suggests that increasing the consumer utility loss caused by low effort can induce the DC to change her equilibrium effort choice from low to high level.
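A corresponding numerical check of Eq. (12), under the same illustrative values:

```python
def phi(q, omega, alpha):
    # Boundary coefficient from Eq. (9).
    return (q - omega * alpha) / (omega * (1 - alpha))

q, omega, alpha, h = 2.0, 1.5, 0.6, 1e-6
numeric = (phi(q, omega + h, alpha) - phi(q, omega, alpha)) / h
analytic = -q / ((1 - alpha) * omega ** 2)   # Eq. (12)
print(numeric, analytic)  # both ~-2.22 < 0: phi falls as omega rises,
                          # rotating the boundary clockwise
```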

Shifting the supply curve \(W = a + \rho S\)

A shift of the supply curve can also change the DC’s equilibrium choice of effort level. As shown in Fig. 5, an increase in ρ and/or a can shift the supply curve upward while the external setting represented by the boundary line remains unchanged. This shift can move the equilibrium point from Q, in the low-effort area, to Q’, in the high-effort area. This suggests that an improvement in the ethical standards of the DC’s social media networks can induce her to change her equilibrium effort choice from low to high level. On the other hand, as shown in Fig. 6, a decrease in ρ and/or a can shift the supply curve downward while the external setting remains unchanged. This shift can move the equilibrium point from Q, in the high-effort area, to Q’, in the low-effort area. This suggests that a deterioration in the ethical standards of the DC’s personal social networks can induce her to change her equilibrium effort choice from high to low level.

Fig. 5: Effect of increasing ρ to ρ′. An increase of ρ to ρ′ can push the DC’s equilibrium effort choice from low to high level.

Fig. 6: Effect of decreasing ρ to ρ′. A decrease in ρ to ρ′ can move the DC’s equilibrium effort choice from high to low level.

Policy recommendations

While the model is based on the actions of individuals within a social media environment that implicitly incentivizes certain types of online behavior depending on individual values and preferences, the objective of the model is to derive insights for practice by demonstrating relationships among variables that fall within the ambit of external influence (policies of government or SMPs). In particular, the model is notable for its illustration of equilibrium, which has an important policy implication: behavioral change is achievable through an initial policy push that shifts the DC into a ‘high effort’ mindset, and once the altered conditions are stable and maintained, it can be postulated that DCs will maintain their new level of effort. This differs from policy intervention based on punitive measures, which often do little to change mindset and as such must be repeatedly reinforced and maintained at the cost of monitoring and enforcement.

This subsection outlines actions that can be taken by the DC, SMPs, and governments to facilitate the previously introduced model mechanisms. Table 1 summarizes the policy insights and recommendations inferred from the modeling exercises in “Policy insights” section. Mechanism 1 addresses the need to expand the high-effort area such that the DC’s equilibrium choice of effort defaults to a high level. As shown by the model, there are two sub-mechanisms for achieving this objective: (1) decrease the cost to the DC of making high effort relative to low effort (q), and (2) increase the consumer utility loss of making low effort (ω).

Table 1 Measures for moving DC equilibrium effort choice from low to high level

Regarding the first sub-mechanism, the DC’s critical evaluation of fake news should be made easier. For the DC, action items include acquiring the necessary training [e.g., ‘media literacy’ (Lee 2018) and critical thinking skills (Baron and Crootof 2017)] and analytical tools (e.g., personal technologies and applications) to evaluate the authenticity of websites and news shared on social media. Additional action items include the adoption by SMPs of increasingly sophisticated algorithms for detecting fake news (Shu et al. 2017; Kumar and Geethakumari 2014) and for verifying contributors (Brandtzaeg et al. 2016; Schifferes et al. 2014), and the development of crowdsourcing capabilities for the same purpose (Pennycook and Rand 2019; Kim et al. 2018). For government, action items include the dissemination of official guidelines and protocols for SMPs to address fake news as based on research and feedback from platform administrators and users, and a threshold level of regulatory tolerance calibrated to particular contexts based on social, economic, and political factors. Despite the potential efficacy of these actions, they may raise concerns about the protection of free speech, as the definition of fake news is contestable and as government intervention could be seen to privilege certain points of view (Baron and Crootof 2017).

Regarding the second sub-mechanism, the negative effects of fake news (e.g., those affecting the sharer of content or the ‘third person’; Jang and Kim 2018; Emanuelson 2017) should be studied and communicated meaningfully to the DC. This involves collaboration among SMPs, researchers, and governments to specify ways in which misinformation compromises individual or collective wellbeing (e.g., by advocating counterproductive and unhealthy practices during a pandemic like COVID-19; Cuan-Baltazar et al. 2020; Pennycook et al. 2020). It is assumed that the utility derived from ethical behavior would compel a rational or standardly ‘ethical’ DC, in the face of evidence about the harmful impacts of fake news, to adopt techniques related to the first mechanism (i.e., being more diligent about identifying and resisting fake news). Nevertheless, it is acknowledged that such interventions represent the outer limits of politically feasible policy reach in a ‘free society,’ and on matters of speech there is often little action government can take without establishing a direct link between speech and imminent material danger. The COVID-19 crisis is an opportunity to further analytically underscore the links among fake news, public perceptions and resulting behaviors, and negative public health outcomes; as the crisis continues to unfold in some countries while being relatively well mitigated in others, research will be able to empirically test these connections.

Mechanism 2 addresses the need to reduce the DC’s consumer utility of engaging with fake news. This is achieved by shifting the utility supply curve upwards. Remaining unchanged is the external setting, which determines the relative size of areas of high and low effort. For this model, the external setting is characterized by two factors: (1) the degree of maturity of the SMP with respect to organization-level regulatory mechanisms that address fake news and any willfully harmful exploitation of the platform, and (2) the self-regulating and discerning behaviors of users with respect to fake news. Within the second mechanism, the constancy of utility that an individual derives from ethical behavior, as represented between the default and alternative scenarios (Q and Q’; Fig. 5), eliminates the need to directly appeal on a policy level to ethical sensibilities as a motivation for high effort. With a DC defaulting to the high-effort zone, external circumstances reflect an ethical standard on the part of the DC’s social media community that may compel the DC to resist fake news (for a discussion about the relationship between group influence and individual-level media and technology use, see Kim et al. 2016, Contractor et al. 1996, and Fulk et al. 1990). The negative effects of engaging with fake news—as made clear to the DC through action items for the first mechanism—are regulated by a ‘social sanctioning’ mechanism (e.g., through public shunning or shaming the DC for sharing fake news) that can be motivated by what Batson and Powell (2003) describe as altruism, collectivism, and prosocial behavior. This community-based policing also reflects the kind of collective action observed in the absence of adequate regulatory mechanisms, as discussed by new institutionalism scholars including Ostrom (1990).

Finally, it is necessary to broaden this analytical perspective by acknowledging the influence of power and control over information. SMPs are owned and controlled by what have arguably become some of the twenty-first century’s most dominant and politically connected commercial actors. As in many industries, the SMP market structure is shaped by consolidation pressures, and these are evident in examples like Facebook’s ownership of picture sharing platform Instagram and mobile messaging application WhatsApp, and in Google’s ownership of video sharing platform YouTube and online advertising company DoubleClick. Growing political scrutiny has focused on the political relationships these and other major internet conglomerates have cultivated with American policymakers (Moore and Tambini 2018), on their monopolistic and accumulative behaviors (Ouellet 2019), on their influence on voter preference (Kreiss and McGregor 2019; Wilson 2019), and on their shaping of discourses about issues related to privacy and user identity (Hoffmann et al. 2018). Amidst growing concern about the collection and use of private information and the perceived preference given to content reflecting certain political ideologies, calls have been issued to strengthen regulations on large technology firms (Flew et al. 2019; Smyth 2019; Hemphill 2019). As society grows more technologically connected and the dissemination of information more democratized, the contribution of an informed public to the functionality of representative and democratic systems of government and policymaking grows increasingly dependent on the commercial choices of a relatively small number of powerful technology companies. It is thus appropriate to interpret the findings of this study within the larger context of an information and communications ecosystem whose rules appear to be defined as much by commercial interest as by public policy. In executing the policy recommendations outlined in this section, policymakers should remain aware of power dynamics in commercial spheres and maintain objectivity, fairness, and so-called arm’s-length distance from SMPs and related entities subject to regulation.

Conclusion

This article has examined the dynamics of how social media users react to fake news, with findings that support targeted public policies and SMP strategies and that underscore the role of informal influence and social sanctioning among members of personal online networks. The study’s formal mathematical model conceptualizes the effort level of a DC in resisting fake news when encountered, illustrating how a decrease in the cost of high effort (e.g., a more convenient ability to filter content) and an increase in the cost of low effort (e.g., negative consequences of engaging with fake news) can nudge the DC toward a more diligent posture in the face of misinformation. These findings lend depth to existing studies about how social media users react to fake news, including through self-guided verification, correction, and responses to correction (Tandoc et al. 2018; Lewandowsky et al. 2017; Zubiaga and Ji 2014). They also confirm related findings by Pennycook and Rand (2019) that a DC’s susceptibility to fake news may be a function more of lax effort in analytical reasoning than of partisanship and its manifestation in ‘motivated reasoning.’ These findings provide support for formal interventions by governments (e.g., guidelines, protocols, and regulations on content) and by SMPs (e.g., algorithms and crowdsourcing for content verification).

It is appropriate to acknowledge the model’s limitations—namely that, as with any model of human or collective social behavior, the certitude with which quantification and statistical inferences can be conclusively made is lower than in other disciplines; this potentially leads to the trap of false precision. As such, policymakers should be duly cautious about over-reliance on this model, and indeed on all models based on social science, as some assumptions about the use of rational choice in policy decisions have long been challenged (Hay 2004; Kahneman 1994; Ostrom 1991). As previously stated, the issue of cognitive bias is salient in any model that addresses the behavior of actors in interactive settings like social media and political arenas, where ideologies often influence cognitive processes. This model rests on assumptions about behavior that are based on existing theories and empirical observations, and the transparency with which this article has sought to introduce it is intended to help analysts fully understand, apply, and modify it. We contend that the model provides a useful approximation appropriate to the policy field’s state of knowledge about human behavior and preference—particularly amidst the contestation of truth and socialization of information sharing that define the modern era. At the same time, we see the aforementioned caveat as an opportunity for future research, including refinement of the model with further understandings about how individuals interact with fake news and about the efficacy with which public policies and the actions of SMPs can limit the spread of fake news and mitigate its impact.

The challenge of fake news not only mandates consideration of individual behaviors and their aggregation across a community of hundreds of millions of social media users, but also invites abstract contemplation about the construction of truth in what has been labeled a ‘post-factual’ era (Perl et al. 2018; Berling and Bueger 2017). The democratization of fact-finding and dissemination, made possible by the proliferation of information and communications technology, provides an egalitarian platform for any user of any interest—including the politically opportunistic and malicious. The rostrum of narrative authority is no longer solely occupied by the ‘fourth estate’—newspapers ‘of record’ and network or public broadcast media. With the growing sophistication and savviness of fake news operatives has come a torrent of misinformation; at the same time, many users are likewise savvy in their recognition and dismissal of fake news. In the context of COVID-19, the influence of fake news and misinformation on the behavior of even a modest share of the population could substantially compromise crisis mitigation and recovery efforts. Despite their access to high-level official information, even political leaders have shown vulnerability to the allure of fake news in the COVID-19 pandemic. An example is the promotion by US President Donald Trump of the anti-malarial drug hydroxychloroquine—a drug for which the scientific community lacked, at the time, full evidence of efficacy in treating COVID-19 (Owens 2020). The intuitive policy intervention would appear to be more robust means of fact-checking content on social media and efforts to disabuse the public of false notions concerning the science of the virus. Nevertheless, according to Van Bavel et al. (2020), ‘fact-checking may not keep up with the vast amount of false information produced in times of crisis like a pandemic’ (n.p.). Additionally, Fischer (2020) argues that facts are no tonic for ideological denialism on matters related to science. Such episodes illustrate the existential risks posed by political forces built on the redefinition or rejection of scientific fact.

In closing, it is appropriate to consider practical pathways for policymaking amidst the rise of fake news. While careful not to imply that postmodernism is responsible for fake news, Fischer (2020) draws from an earlier work about climate change policy (2009) in arguing that ‘the public and its politicians […] need to find ways to develop a common, socially meaningful dialogue that moves beyond acrimonious rhetoric to permit an authentic socio-cultural discussion’ (p. 149). The uncomfortable dilemma for research about the politics of contentious issues like climate change and pandemic response is how to reconcile the preeminence of scientific fact in official policy discourses with constructivist and deliberative perspectives that may exist within or outside such discourses. The precarity of the COVID-19 crisis, reminiscent of longer-evolving crises such as climate change, suggests urgent questions for future research. Does the raucous debate about the severity of COVID-19, in a country like the United States, indicate democratic robustness and political process legitimacy? Are all discourses, even as ‘alternative truths,’ equally legitimate as policy inputs in deliberative settings? Is scientific evidence politically assailable as a product of elite agendas and framing efforts? Research is thus needed to map narratives in the virtual realm and ‘contact-trace’ their origins and subsequent influence on discourses and policymaking. At a general level, this study provides a foundation for exploring how post-truth narratives are reproduced in the commons, not only in the COVID-19 crisis but also in unforeseen crises that will inevitably arise.