# Network segregation in a model of misinformation and fact-checking

## Abstract

Misinformation in the form of rumors, hoaxes, and conspiracy theories spreads on social media at alarming rates. One hypothesis is that, since social media are shaped by homophily, belief in misinformation may be more likely to thrive in those social circles that are segregated from the rest of the network. One possible antidote to misinformation is fact-checking, which, however, does not always stop rumors from spreading further, owing to selective exposure and our limited attention. Under what conditions is factual verification effective at containing the spread of misinformation? Here we take into account the combination of selective exposure due to network segregation, forgetting (i.e., finite memory), and fact-checking. We consider a compartmental model of two interacting epidemic processes over a network that is segregated between gullible and skeptic users. Extensive simulations and mean-field analysis show that a more segregated network facilitates the spread of a hoax only at low forgetting rates, but has no effect when agents forget at faster rates. This finding may inform the development of mitigation techniques and raise awareness of the risks of uncontrolled misinformation online.

## Keywords

Misinformation · Fact-checking · Information diffusion · Network segregation · Agent-based modeling

## Introduction

Social media are rife with inaccurate information of all sorts [6, 18, 21]. This is in part due to their egalitarian, bottom-up model of information consumption and production [9], according to which users can broadcast to their peers information vetted by neither experts nor journalists, and thus potentially inaccurate or misleading [28]. Examples of social media misinformation include rumors [21], hoaxes [35], and conspiracy theories [3, 24].

In journalism, corrections, verification, and fact-checking are simple yet powerful antidotes to misinformation [11], and several newsrooms employ these techniques to vet the information they publish. Moreover, in recent years, several independent fact-checking organizations have emerged with the goal of debunking widely circulating claims online. From now on, we refer to all these practices collectively as fact-checking. Among the leading US-based fact-checking organizations we can cite Snopes [42], FactCheck.org [20], and Politifact [46]. Several more are joining their ranks worldwide [45]. In many cases these organizations cannot cope with the sheer volume of misinformation circulating online, and some are exploring alternatives to scale their verification efforts, including automated techniques [16], and collaboration with technology platforms such as Facebook [37] and Google [19].

These trends thus beg a rather fundamental question—is the dissemination of fact-checking information effective at stopping misinformation from spreading on social media? In particular cases, timely corrections are enough to limit a rumor from spreading further [4, 21, 34]. However, administering fact-checking information may also have adverse effects. For example, in some instances it has been observed that correcting an inaccurate or misleading claim can have counterproductive effects, increasing—and not decreasing—belief in it. This is a phenomenon called the backfire effect [35]. Recent work has however failed to replicate this form of backfiring in independent trials, suggesting that it is a rather elusive phenomenon [48].

Fact-checking could also lead to a hypercorrection effect, meaning that providing accurate information to people who have been exposed to misinformation may cause them, on the long term, to forget the former, and remember the latter [12]. Thus, given the growing emphasis put into fact-checking, as well as its unintended side effects, it is clear that, for a better understanding of how to fight social media misinformation, it would be useful to explore the relation between fact-checking and the misinformation it is intended to quell.

Recent work has also revealed that, when it comes to misinformation, online conversations tend to be highly polarized [10, 18]. This suggests the importance of homophily and segregation in the spread of misinformation. Since social networks are shaped by homophily [29], one hypothesis is that misinformation may be more likely to thrive in those social circles that are segregated from the rest of the network. Social media may be particularly susceptible to this aspect due to the fact that exposure to information is mediated in part by algorithms, whose goal is to filter and recommend stories that have a high potential for engagement. This may create filter bubbles and echo chambers, information spaces that favor confirmation bias and repetition [38, 43]. Recent work has started to measure the extent to which editorial decisions performed automatically by algorithms affect selective exposure, and thus segregation of the information space [8, 33]. Therefore, in modeling the interplay between misinformation and fact-checking, our second goal is to shed light on the role of the underlying social network structure in the spreading process, in particular the presence of communities of users with different attitude toward unvetted and unconfirmed information—which could potentially constitute misinformation.

Besides segregation, in the literature there is also disagreement about whether weak ties—the links that connect different communities together—play a role in the diffusion of information. Some studies suggest that weak ties play an important role [7]; others that they do not [36]. In their seminal work on complex social contagion, Centola and Macy argue that the spread of collective action benefits from bridges, i.e., ties that are “wide enough to transmit strong social reinforcement” [13]. It is well known that misinformation can be propagated thanks to repetition [2, 27], which in some ways can be obtained through social reinforcement, and thus, it would be useful to investigate this additional aspect as well.

In terms of modeling, there has hitherto been little work on characterizing the epidemic spreading of different types of information, with most efforts devoted to describing mutually independent processes [23, 32]. Instead, the presence of the rich cognitive effects just described suggests that misinformation and fact-checking interact and compete for the attention of individuals on social media, and this could lead to non-trivial diffusion dynamics. Among the work specifically devoted to competition in the diffusion of information, or memes, the literature has focused on the role of limited attention [25, 47], as well as that of information quality [31, 40].

Several models have been proposed in prior work to describe the propagation of rumors in a complex social networks [1, 14, 17, 30]. Most are based on the epidemic compartmental models like the SIR (susceptible–infected–recovered) or the SIS (susceptible–infected–susceptible) [39]. In these models, the population is divided into compartments that indicate the stage of the disease, and the evolution of the spreading process is ruled by transition rates in differential equations. Usually, in SIS-like models, \(\beta \) represents the ‘infection’ rate, that is, the rate of the transition *S* \(\rightarrow \) *I*, and \(\mu \) the ‘recovery’ rate, that is, the rate of the transition *I* \(\rightarrow \) *S*. In the adaptations of the models to rumors and news, an analogy between the latter and infective pathogens is considered. Fact-checking, on the other hand, is contemplated only as a remedy after the hoax infection. Another class of models uses branching processes on signed networks to take into account user polarization [18]. Neither type, however, takes into account in the same model the three aforementioned mechanisms—competition between hoax and fact-checking, forgetting mechanisms and segregation.

To consider all these features, here we introduce a simple agent-based model in which individuals are endowed with finite memory and a fixed predisposition toward factual verification. In this model, hoaxes and fact-checks compete on a network formed by two groups, the gullible and the skeptic, marked by a different tendency to believe in the hoax. Varying the level of segregation in the network, as well as the relative credibility of the hoax among the two groups, we look at whether the hoax becomes endemic or instead is eradicated from the whole population.

## Model

Here we describe a model of the spread of the belief in a hoax and the related fact-checking within a social network of agents with finite memory. An agent can be in any of the following three states: ‘Susceptible’ (denoted by *S*), if they have not heard about either the hoax or the fact-checking, or if they have forgotten about them; ‘Believer’ (*B*), if they believe in the hoax and choose to spread it; and ‘Fact-checker’ (*F*), if they know the hoax is false—for example after having consulted an accurate news source—and choose to spread the fact-checking.

Let us consider the *i*-th agent at time step *t*, and let us denote with \(n_i^{X}(t)\) the number of its neighbors in state \(X\in \left\{ S,B,F\right\} \). We assume that an agent ‘decides’ to believe in either the hoax or the fact-checking as a result of interaction over interpersonal ties. This could be due to social conformity [5] or because agents accept information from their neighbors [41]. Second, we assume that the hoax displays an intrinsic credibility \(\alpha \in \left[ 0,1\right] \) which, all else being equal, makes it more believable than the fact-checking. We will discuss later how this parameter can also be related to the users; for now, we consider it as a feature of the hoax. Thus, the probabilities of transitioning from *S* to either *B* or *F* are given by the functions \(f_i\) and \(g_i\), respectively:

\[
\begin{aligned}
f_i(t) &= \beta \,\frac{n_i^{B}(t)\left( 1+\alpha \right) }{n_i^{B}(t)\left( 1+\alpha \right) + n_i^{F}(t)\left( 1-\alpha \right) },\\
g_i(t) &= \beta \,\frac{n_i^{F}(t)\left( 1-\alpha \right) }{n_i^{B}(t)\left( 1+\alpha \right) + n_i^{F}(t)\left( 1-\alpha \right) }.
\end{aligned}
\]

Observe that \(f_i(t) + g_i(t)=\beta \), where \(\beta \) is equivalent to the infection rate of the SIS model. Indeed, if one considers the two states *B* and *F* as a single ‘Infected’ state (*I*), then our model reduces to an SIS model, with the only difference that the probability of recovery \(\mu \) is denoted here by \(p_\mathrm{f}\).
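As an illustration, the two spreading functions can be sketched in code. This is a minimal sketch assuming the functional form of our prior model [44], in which believer neighbors are weighted by \(1+\alpha\) and fact-checker neighbors by \(1-\alpha\); the function name is ours.

```python
def spreading_probabilities(n_B, n_F, alpha, beta):
    """Probabilities f_i (S -> B) and g_i (S -> F) for an agent with
    n_B believer neighbors and n_F fact-checker neighbors.

    Believers are weighted by (1 + alpha), fact-checkers by (1 - alpha),
    so f_i + g_i = beta whenever at least one neighbor is 'infected'."""
    denom = n_B * (1 + alpha) + n_F * (1 - alpha)
    if denom == 0:
        return 0.0, 0.0  # no active neighbors: the agent stays susceptible
    f = beta * n_B * (1 + alpha) / denom
    g = beta * n_F * (1 - alpha) / denom
    return f, g
```

Note that the higher the credibility \(\alpha\), the larger the share of the total transmission probability \(\beta\) captured by the hoax.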

Let \(s_i(t)\) denote the state of the *i*-th agent at time *t*, and let us define, for \(X \in \{B,F,S\}\), the state indicator function \(s_i^X(t) = \delta (s_i(t), X)\). The triple \(p_i(t) = \left[ p_i^{B}(t), p_i^{F}(t), p_i^{S}(t)\right] \) describes the probability that node *i* is in any of the three states at time *t*. The dynamics of the system at time \(t + 1\) will then be given by a random realization of \(p_i\) at \(t + 1\). Thus, \(p_i(t + 1)\) can be described as:

\[
\begin{aligned}
p_i^{B}(t+1) &= f_i(t)\, p_i^{S}(t) + \left( 1-p_\mathrm{f}\right) \left( 1-p_\mathrm{v}\right) p_i^{B}(t),\\
p_i^{F}(t+1) &= g_i(t)\, p_i^{S}(t) + \left( 1-p_\mathrm{f}\right) p_\mathrm{v}\, p_i^{B}(t) + \left( 1-p_\mathrm{f}\right) p_i^{F}(t),\\
p_i^{S}(t+1) &= p_\mathrm{f}\left[ p_i^{B}(t) + p_i^{F}(t)\right] + \left[ 1 - f_i(t) - g_i(t)\right] p_i^{S}(t),
\end{aligned}
\]

where \(p_\mathrm{v}\) is the probability that a believer verifies the hoax (transition \(B \rightarrow F\)) and \(p_\mathrm{f}\) is the probability of forgetting one's current belief (transitions \(B \rightarrow S\) and \(F \rightarrow S\)).

For a network of *N* agents, in which a few agents have been initially seeded as believers, we derived the expressions for the density of believers, fact-checkers, and susceptible agents in the infinite-time limit, denoted by \(B_{\infty }\), \(F_{\infty }\), and \(S_{\infty }\), respectively. We found that, independent of the network topology (Barabási-Albert and Erdős-Rényi), the value of \(p_\mathrm{v}\), and of \(\alpha \), \(S_{\infty }\) stabilizes around the same value in all simulations. We confirmed this result using both mean-field equations and simulations.
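A single per-node mean-field update might be sketched as follows. This is a sketch under the assumption, taken from our prior model [44], that verification is applied before forgetting; the function name is ours.

```python
def update_probabilities(p, f, g, p_v, p_f):
    """One mean-field step for a single node.

    p = (pB, pF, pS) at time t; f and g are the S -> B and S -> F
    probabilities; p_v is the verification probability (B -> F) and
    p_f the forgetting probability (B -> S, F -> S).
    Returns (pB, pF, pS) at time t + 1."""
    pB, pF, pS = p
    pB_next = f * pS + (1 - p_f) * (1 - p_v) * pB
    pF_next = g * pS + (1 - p_f) * p_v * pB + (1 - p_f) * pF
    pS_next = p_f * (pB + pF) + (1 - f - g) * pS
    return pB_next, pF_next, pS_next
```

By construction the three probabilities still sum to one after each step, so iterating this map for every node (recomputing \(f\) and \(g\) from the neighbors' probabilities) yields the mean-field trajectory toward \(B_\infty\), \(F_\infty\), and \(S_\infty\).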

Once the system reaches equilibrium, the relative ratio between believers and fact-checkers is determined by \(\alpha \) and \(p_\mathrm{v}\): the higher \(\alpha \), the more believers, and conversely for \(p_\mathrm{v}\). In particular, we showed that there always exists a critical value of \(p_\mathrm{v}\) above which the hoax is completely eradicated from the network (i.e., \(B_\infty = 0\)). This value depends on \(\alpha \) and \(p_\mathrm{f}\), but not on the spreading rate \(\beta \).

As one can see, the model has several parameters: the spreading rate \(\beta \), the credibility of the hoax \(\alpha \), the probability of verification \(p_\mathrm{v}\), and the probability of forgetting \(p_\mathrm{f}\). Since, in the present work, we want to consider the role of communities of people with different attitudes toward believing a hoax, the number of parameters will increase further.

## Results

Two parameters govern the spreading dynamics in our model: the credibility \(\alpha \) and the forgetting probability \(p_\mathrm{f}\). To address our research question about the role of network structure and communities, we consider a simple generative model of a segregated network. Let us consider *N* agents divided into two groups, one comprised of \(t < N\) agents whose beliefs conform more to the hoax than those of the rest of the population, which forms the other group. We call the former the gullible group and the latter the skeptic group. To represent this in our framework, we set a different value of \(\alpha \) for each agent group (either \(\alpha _\mathrm{gu}\) or \(\alpha _\mathrm{sk}\), with \(\alpha _\mathrm{gu}> \alpha _\mathrm{sk}\)). This does not contradict what we said before: the credibility is a parameter describing the hoax, but it is of course also related to users' attitudes and personal worldviews, so it is reasonable to allow different groups to have different values of it.

To generate the network, we connect the agents with *M* edges placed at random. Let \(s \in \left[ \frac{1}{2}, 1\right) \) denote the fraction of intra-group edges, regardless of the group. For each edge, we first decide with probability *s* whether to connect two individuals from the same group (intra-group tie) or, with probability \(1 - s\), from different groups (inter-group tie). In the case of an intra-group tie, we select a group with probability proportional to the ratio of the total number of possible intra-group ties of that group to that of the whole network; we then pick two agents from that group uniformly at random and connect them. In the case of an inter-group tie, two agents are chosen at random, one per group, and connected. Figure 4 shows three examples of networks with different values of *s*.
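A minimal sketch of this generative procedure follows. It assumes our reading that intra-group ties are allocated to a group in proportion to that group's number of possible intra-group pairs; the function name and data layout are ours.

```python
import random

def segregated_network(n_gullible, n_skeptic, M, s, rng=random):
    """Sample M undirected edges of a two-group network with
    intra-group edge fraction s (assumes M is feasible).

    With probability s, draw an intra-group tie: pick a group with
    probability proportional to its number of possible intra-group
    pairs, then connect two random agents within it. Otherwise draw
    an inter-group tie, one endpoint per group."""
    gullible = list(range(n_gullible))
    skeptic = list(range(n_gullible, n_gullible + n_skeptic))
    pairs_gu = n_gullible * (n_gullible - 1) / 2
    pairs_sk = n_skeptic * (n_skeptic - 1) / 2
    edges = set()
    while len(edges) < M:
        if rng.random() < s:  # intra-group tie
            pick_gu = rng.random() < pairs_gu / (pairs_gu + pairs_sk)
            u, v = rng.sample(gullible if pick_gu else skeptic, 2)
        else:  # inter-group tie
            u, v = rng.choice(gullible), rng.choice(skeptic)
        edges.add((min(u, v), max(u, v)))  # store edges undirected
    return edges
```

Increasing *s* toward 1 concentrates edges within the two groups, producing progressively more segregated networks, as in Fig. 4.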

We simulated the model while varying \(\alpha _\mathrm{gu}\), *s*, and *t*. Figure 5 reports the results of the first of these exercises, showing the overall number of believers \(B_\infty \) in the whole population at equilibrium.

Increasing either *s* or \(\alpha _\mathrm{gu}\), we see an increase of \(B_\infty \), all else being equal. However, when we change the forgetting probability \(p_\mathrm{f}\) we observe two different situations: for small \(p_\mathrm{f}\), an increase of *s* results in an increase of \(B_{\infty }\). Conversely—and perhaps a bit surprisingly—at high values of \(p_\mathrm{f}\), increasing *s* does not change \(B_{\infty }\).

To better understand this behavior, we break down the number of believers at equilibrium by group as a function of *s*, i.e., \(B_\infty = B_\infty ^\mathrm{gu} + B_\infty ^\mathrm{sk}\); Fig. 6 reports the relevant phase diagrams. If \(p_\mathrm{f}\) is low (Fig. 6, left column), the overall number of believers depends heavily on \(B_\infty ^\mathrm{gu}\), whereas \(B_\infty ^\mathrm{sk} \approx 0\) regardless of the segregation; see Fig. 6a, c, e.

Instead, with a high rate of forgetting (right column), \(B_{\infty }\) (Fig. 6f) depends on both \(B_\infty ^\mathrm{sk}\) and \(B_\infty ^\mathrm{gu}\). In this case, however, segregation plays a different role: while in the skeptic group \(B_\infty ^\mathrm{sk}\) decreases when *s* increases (Fig. 6d), in the gullible group *s* has less influence (Fig. 6b).

Summarizing, segregation can play very different roles in the final configuration of the hoax spreading, depending on the forgetting rate. Why is the number of links between communities with different behaviors so important? It should be noted that any ‘network effect’ present in our model can only appear in the infection phase, that is, in the transitions \(S\rightarrow B\) and \(S\rightarrow F\). To better understand what happens in both groups, we computed the rate at which these transitions happen, that is, the conditional probability that a susceptible agent becomes either a believer or a fact-checker.
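Estimating these conditional rates from simulation output might look like the following sketch; the function name, state labels, and snapshot layout are our own illustrative assumptions.

```python
def empirical_transition_rates(states_t, states_t1, group):
    """Estimate P(S -> B | S) and P(S -> F | S) within a group from two
    consecutive snapshots (dicts mapping agent id -> 'S', 'B' or 'F')."""
    susceptible = [i for i in group if states_t[i] == 'S']
    if not susceptible:
        return 0.0, 0.0  # no susceptible agents to transition
    to_B = sum(states_t1[i] == 'B' for i in susceptible)
    to_F = sum(states_t1[i] == 'F' for i in susceptible)
    n = len(susceptible)
    return to_B / n, to_F / n
```

Averaging these per-step estimates over many runs, separately for the gullible and the skeptic group, gives the rates discussed below.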

Let us consider a susceptible agent in the gullible group. At low forgetting rates, more intra-group ties (i.e., a higher *s*) increase the chances of becoming a believer and reduce those of becoming a fact-checker; see Fig. 8 (top left). In the skeptic group, the segregation effect is almost negligible (top right). This happens because inter-group ties expose susceptible agents in the gullible group to more members of the skeptic group, who are largely fact-checkers.

In other words, segregation, being related to the abundance of inter-group ties, can play both a positive and a negative role in stopping the spread of misinformation: at low forgetting rates, these links can help the debunking spread in the gullible group, while at high forgetting rates they have the opposite effect, helping the hoax spread in the skeptic group.

## Discussion

Using agent-based simulations, here we have analyzed the role of the underlying structure of the network on which the diffusion of a piece of misinformation takes place. In particular we consider a network formed by two groups—gullible and skeptic—characterized by different values of the credibility parameter \(\alpha \). To study how the social structure shapes information exposure, we introduce a parameter *s* that regulates the abundance of ties between these two groups. We observe that *s* has an important role in the diffusion of misinformation. If the probability of forgetting \(p_\mathrm{f}\) is small then the fraction of the population affected by the hoax will be large or small depending on whether the network is, respectively, segregated or not. However, if the rate of forgetting is large, segregation has a somewhat different effect on the spread of the hoax, and the presence of links among communities can promote the diffusion of the misinformation within the skeptic communities.

The probability of forgetting could be also interpreted as a measure of how much a given topic is discussed. A low value of \(p_\mathrm{f}\) could perhaps fit well with the scenario of ideas whose belief tends to be more persistent over time, for example conspiracy theories. A high value of \(p_\mathrm{f}\) could fit better with situations where beliefs are short lived, either because the claims are easy to debunk or are no more interesting than mere gossip, whose information value is transient. Hoaxes about the alleged death of celebrities, for instance, could fall within this latter category.

On the basis of the findings presented in this paper, further research should be devoted to understanding the role of network segregation in the spread of misinformation on social media. In the case of conspiracy theories, it could be useful to analyze what happens if the communication among different groups increases. Moreover, it could also be interesting to consider more realistic situations in which rumors or hoaxes have different levels of credibility for different agents—for example, based on socio-economic features and other individual-level attributes—or in which the likelihood of verifying (\(B\rightarrow F\)) depends on the state of the network, as opposed to having a constant rate of occurrence \(p_\mathrm{v}\), as we do here.

Our results are also important from a purely theoretical point of view. The model we introduced in prior work, and on which we build here, was an example of an epidemic process that is not affected by the network topology, meaning that the structure does not influence the final configuration of the network—indeed, it can be proved that there are no significant differences in the behavior of the spreading dynamics on random or scale-free networks [44]. In the present work, however, we show that network structure can become very important if we add an extra element of complexity, characterizing groups of nodes with slightly different behaviors (here, different values of the credibility parameter). This points to the need for more research, experiments, and simulations to understand which parameters are sensitive to the segregation level, or to other topological measures, even in models whose dynamics are usually topology-independent.

In conclusion, understanding the production and consumption of misinformation is a critical issue [15]. As several episodes are showing, there are obvious consequences connected to the uncontrolled production and consumption of inaccurate information [26]. A more thorough understanding of rumor propagation and the structural properties of the information exchange networks on which this happens may help mitigate these risks.

## Notes

### Acknowledgements

The authors would like to acknowledge Filippo Menczer and Alessandro Flammini for feedback and insightful conversations. DFMO acknowledges the support from James S. McDonnell Foundation. GLC acknowledges support from the Indiana University Network Science Institute (http://iuni.iu.edu) and from the Swiss National Science Foundation (PBTIP2_142353).

## References

1. Acemoglu, D., Ozdaglar, A., & ParandehGheibi, A. (2010). Spread of (mis)information in social networks. *Games and Economic Behavior*, *70*(2), 194–227.
2. Allport, G. W., & Postman, L. (1947). *The psychology of rumor*. Oxford, England: Henry Holt.
3. Anagnostopoulos, A., Bessi, A., Caldarelli, G., Del Vicario, M., Petroni, F., Scala, A., Zollo, F., & Quattrociocchi, W. (2014). Viral misinformation: The role of homophily and polarization. arXiv:1411.2893.
4. Andrews, C., Fichet, E., Ding, Y., Spiro, E. S., & Starbird, K. (2016). Keeping up with the tweet-dashians: The impact of ’official’ accounts on online rumoring. In *Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing*, CSCW ’16 (pp. 452–465). New York, NY, USA: ACM.
5. Asch, S. E. (1961). Effects of group pressure upon the modification and distortion of judgements. In M. Henle (Ed.), *Documents of gestalt psychology* (pp. 222–236). Oakland, CA, USA: University of California Press.
6. Bakshy, E., Hofman, J. M., Mason, W. A., & Watts, D. J. (2011). Everyone’s an influencer: Quantifying influence on Twitter. In *Proceedings of the Fourth ACM International Conference on Web Search and Data Mining*, WSDM ’11 (pp. 65–74). New York, NY, USA: ACM.
7. Bakshy, E., Rosenn, I., Marlow, C., & Adamic, L. (2012). The role of social networks in information diffusion. In *Proceedings of the 21st International Conference on World Wide Web* (pp. 519–528). ACM.
8. Bakshy, E., Messing, S., & Adamic, L. A. (2015). Exposure to ideologically diverse news and opinion on Facebook. *Science*, *348*(6239), 1130–1132.
9. Benkler, Y. (2006). *The wealth of networks: How social production transforms markets and freedom*. London: Yale University Press.
10. Bessi, A., Coletto, M., Davidescu, G. A., Scala, A., Caldarelli, G., & Quattrociocchi, W. (2015). Science vs conspiracy: Collective narratives in the age of misinformation. *PLoS ONE*, *10*(2), e0118093.
11. Borel, B. (2016). *The Chicago guide to fact-checking*. Chicago, IL, USA: The University of Chicago Press.
12. Butler, A. C., Fazio, L. K., & Marsh, E. J. (2011). The hypercorrection effect persists over a week, but high-confidence errors return. *Psychonomic Bulletin & Review*, *18*(6), 1238–1244.
13. Centola, D., & Macy, M. (2007). Complex contagions and the weakness of long ties. *American Journal of Sociology*, *113*(3), 702–734.
14. Chierichetti, F., Lattanzi, S., & Panconesi, A. (2009). Rumor spreading in social networks. In *Automata, Languages and Programming* (pp. 375–386). Springer.
15. Ciampaglia, G. L., Flammini, A., & Menczer, F. (2015). The production of information in the attention economy. *Scientific Reports*, *5*, 9452.
16. Ciampaglia, G. L., Shiralkar, P., Rocha, L. M., Bollen, J., Menczer, F., & Flammini, A. (2015). Computational fact checking from knowledge networks. *PLoS ONE*, *10*(6), e0128193.
17. Daley, D. J., & Kendall, D. G. (1964). Epidemics and rumours. *Nature*, *204*, 1118.
18. Del Vicario, M., Bessi, A., Zollo, F., Petroni, F., Scala, A., Caldarelli, G., Stanley, H. E., & Quattrociocchi, W. (2016). The spreading of misinformation online. *Proceedings of the National Academy of Sciences*.
19. Dong, X. L., Gabrilovich, E., Murphy, K., Dang, V., Horn, W., Lugaresi, C., et al. (2015). Knowledge-based trust: Estimating the trustworthiness of web sources. *Proceedings of the VLDB Endowment*, *8*(9), 938–949.
20. FactCheck.org. (2017). A project of the Annenberg Public Policy Center. Online. Accessed 28 Oct 2017.
21. Friggeri, A., Adamic, L. A., Eckles, D., & Cheng, J. (2014). Rumor cascades. In *Proceedings of the Eighth International AAAI Conference on Weblogs and Social Media (ICWSM)* (pp. 101–110).
22. Fruchterman, T. M. J., & Reingold, E. M. (1991). Graph drawing by force-directed placement. *Software: Practice & Experience*, *21*(11), 1129–1164.
23. Funk, S., & Jansen, V. A. A. (2010). Interacting epidemics on overlay networks. *Physical Review E*, *81*, 036118.
24. Galam, S. (2003). Modelling rumors: The no plane Pentagon French hoax case. *Physica A: Statistical Mechanics and Its Applications*, *320*, 571–580.
25. Gleeson, J. P., Ward, J. A., O’Sullivan, K. P., & Lee, W. T. (2014). Competition-induced criticality in a model of meme popularity. *Physical Review Letters*, *112*, 048701.
26. Howell, L., et al. (2013). Digital wildfires in a hyperconnected world. In *Global Risks: World Economic Forum*.
27. Knapp, R. H. (1944). A psychology of rumor. *Public Opinion Quarterly*, *8*(1), 22–37.
28. Kwak, H., Lee, C., Park, H., & Moon, S. (2010). What is Twitter, a social network or a news media? In *Proceedings of the 19th International Conference on World Wide Web*, WWW ’10 (pp. 591–600). New York, NY, USA: ACM.
29. McPherson, M., Smith-Lovin, L., & Cook, J. M. (2001). Birds of a feather: Homophily in social networks. *Annual Review of Sociology*, *27*(1), 415–444.
30. Moreno, Y., Nekovee, M., & Pacheco, A. F. (2004). Dynamics of rumor spreading in complex networks. *Physical Review E*, *69*(6), 066130.
31. Nematzadeh, A., Ciampaglia, G. L., Menczer, F., & Flammini, A. (2017). How algorithmic popularity bias hinders or promotes quality. e-print, CoRR.
32. Newman, M. E. J., & Ferrario, C. R. (2013). Interacting epidemics and coinfection on contact networks. *PLoS ONE*, *8*(8), 1–8.
33. Nikolov, D., Oliveira, D. F., Flammini, A., & Menczer, F. (2015). Measuring online social bubbles. *PeerJ Computer Science*, *1*, e38.
34. Nyhan, B., & Reifler, J. (2015). The effect of fact-checking on elites: A field experiment on US state legislators. *American Journal of Political Science*, *59*(3), 628–640.
35. Nyhan, B., Reifler, J., & Ubel, P. A. (2013). The hazards of correcting myths about health care reform. *Medical Care*, *51*(2), 127–132.
36. Onnela, J.-P., Saramäki, J., Hyvönen, J., Szabó, G., Lazer, D., Kaski, K., et al. (2007). Structure and tie strengths in mobile communication networks. *Proceedings of the National Academy of Sciences*, *104*(18), 7332–7336.
37. Owens, E., & Weinsberg, U. (2015). News feed fyi: Showing fewer hoaxes. Online. Accessed Jan 2016.
38. Pariser, E. (2011). *The filter bubble: What the Internet is hiding from you*. London, UK: Penguin.
39. Pastor-Satorras, R., Castellano, C., Van Mieghem, P., & Vespignani, A. (2015). Epidemic processes in complex networks. *Reviews of Modern Physics*, *87*, 925–979.
40. Qiu, X., Oliveira, D. F., Shirazi, A. S., Flammini, A., & Menczer, F. (2017). Limited individual attention and online virality of low-quality information. *Nature Human Behaviour*, *1*(7), s41562–017.
41. Rosnow, R. L., & Fine, G. A. (1976). *Rumor and gossip: The social psychology of hearsay*. New York City: Elsevier.
42. Snopes.com. (2017). The definitive fact-checking site and reference source for urban legends, folklore, myths, rumors, and misinformation. Online. Accessed 28 Oct 2017.
43. Sunstein, C. (2002). *Republic.com*. Princeton: Princeton University Press.
44. Tambuscio, M., Ruffo, G., Flammini, A., & Menczer, F. (2015). Fact-checking effect on viral hoaxes: A model of misinformation spread in social networks. In *Proceedings of the 24th International Conference on World Wide Web Companion* (pp. 977–982). International World Wide Web Conferences Steering Committee.
45. The Duke Reporters’ Lab keeps an updated list of global fact-checking sites. https://reporterslab.org/fact-checking/. Accessed 29 June 2018.
46. Times, T. B. (2017). Fact-checking U.S. politics. Online. Accessed 28 Oct 2017.
47. Weng, L., Flammini, A., Vespignani, A., & Menczer, F. (2012). Competition among memes in a world with limited attention. *Scientific Reports*, *2*, 335.
48. Wood, T., & Porter, E. (2016). The elusive backfire effect: Mass attitudes’ steadfast factual adherence. e-print, SSRN.