Journal of Computational Social Science, Volume 1, Issue 2, pp. 261–275

Network segregation in a model of misinformation and fact-checking

  • Marcella Tambuscio
  • Diego F. M. Oliveira
  • Giovanni Luca Ciampaglia
  • Giancarlo Ruffo
Research Article


Misinformation in the form of rumors, hoaxes, and conspiracy theories spreads on social media at alarming rates. One hypothesis is that, since social media are shaped by homophily, belief in misinformation may be more likely to thrive in those social circles that are segregated from the rest of the network. One possible antidote to misinformation is fact-checking, which, however, does not always stop rumors from spreading further, owing to selective exposure and our limited attention. What are the conditions under which factual verification is effective at containing the spread of misinformation? Here we take into account the combination of selective exposure due to network segregation, forgetting (i.e., finite memory), and fact-checking. We consider a compartmental model of two interacting epidemic processes over a network that is segregated between gullible and skeptic users. Extensive simulation and mean-field analysis show that a more segregated network facilitates the spread of a hoax only at low forgetting rates, but has no effect when agents forget at faster rates. This finding may inform the development of mitigation techniques and raise awareness of the risks of uncontrolled misinformation online.


Keywords: Misinformation · Fact-checking · Information diffusion · Network segregation · Agent-based modeling


Social media are rife with inaccurate information of all sorts [6, 18, 21]. This is in part due to their egalitarian, bottom-up model of information consumption and production [9], according to which users can broadcast to their peers information vetted by neither experts nor journalists, and thus potentially inaccurate or misleading [28]. Examples of social media misinformation include rumors [21], hoaxes [35], and conspiracy theories [3, 24].

In journalism, corrections, verification, and fact-checking are simple yet powerful antidotes to misinformation [11], and several newsrooms employ these techniques to vet the information they publish. Moreover, in recent years, several independent fact-checking organizations have emerged with the goal of debunking widely circulating claims online. From now on, we refer to all these practices collectively as fact-checking. Among the leading US-based fact-checking organizations we can cite Snopes [42], FactCheck.org [20], and Politifact [46]. Several more are joining their ranks worldwide [45]. In many cases these organizations cannot cope with the sheer volume of misinformation circulating online, and some are exploring alternatives to scale their verification efforts, including automated techniques [16], and collaboration with technology platforms such as Facebook [37] and Google [19].

These trends thus beg a rather fundamental question—is the dissemination of fact-checking information effective at stopping misinformation from spreading on social media? In particular cases, timely corrections are enough to stop a rumor from spreading further [4, 21, 34]. However, administering fact-checking information may also have adverse effects. For example, in some instances it has been observed that correcting an inaccurate or misleading claim can have counterproductive effects, increasing—and not decreasing—belief in it. This is a phenomenon called the backfire effect [35]. Recent work has however failed to replicate this form of backfiring in independent trials, suggesting that it is a rather elusive phenomenon [48].

Fact-checking could also lead to a hypercorrection effect, meaning that providing accurate information to people who have been exposed to misinformation may cause them, in the long term, to forget the former and remember the latter [12]. Thus, given the growing emphasis on fact-checking, as well as its unintended side effects, to better understand how to fight social media misinformation it would be useful to explore the relation between fact-checking and the misinformation it is intended to quell.

Recent work has also revealed that, when it comes to misinformation, online conversations tend to be highly polarized [10, 18]. This suggests the importance of homophily and segregation in the spread of misinformation. Since social networks are shaped by homophily [29], one hypothesis is that misinformation may be more likely to thrive in those social circles that are segregated from the rest of the network. Social media may be particularly susceptible to this aspect due to the fact that exposure to information is mediated in part by algorithms, whose goal is to filter and recommend stories that have a high potential for engagement. This may create filter bubbles and echo chambers, information spaces that favor confirmation bias and repetition [38, 43]. Recent work has started to measure the extent to which editorial decisions performed automatically by algorithms affect selective exposure, and thus segregation of the information space [8, 33]. Therefore, in modeling the interplay between misinformation and fact-checking, our second goal is to shed light on the role of the underlying social network structure in the spreading process, in particular the presence of communities of users with different attitude toward unvetted and unconfirmed information—which could potentially constitute misinformation.

Besides segregation, in the literature there is also disagreement about whether weak ties—the links that connect different communities together—play a role in the diffusion of information. Some studies suggest that weak ties play an important role [7]; others that they do not [36]. In their seminal work on complex social contagion, Centola and Macy argue that the spread of collective action benefits from bridges, i.e., ties that are “wide enough to transmit strong social reinforcement” [13]. It is well known that misinformation can be propagated thanks to repetition [2, 27], which in some ways can be obtained through social reinforcement, and thus, it would be useful to investigate this additional aspect as well.

In terms of modeling, there has hitherto been little work on characterizing the epidemic spreading of different types of information, with most efforts devoted to describing mutually independent processes [23, 32]. Instead, the presence of the rich cognitive effects just described suggests that misinformation and fact-checking interact and compete for the attention of individuals on social media, and this could lead to non-trivial diffusion dynamics. Among the work specifically devoted to competition in the diffusion of information, or memes, the literature has focused on the role of limited attention [25, 47], as well as that of information quality [31, 40].

Several models have been proposed in prior work to describe the propagation of rumors in complex social networks [1, 14, 17, 30]. Most are based on epidemic compartmental models like the SIR (susceptible–infected–recovered) or the SIS (susceptible–infected–susceptible) [39]. In these models, the population is divided into compartments that indicate the stage of the disease, and the evolution of the spreading process is governed by transition rates in differential equations. Usually, in SIS-like models, \(\beta \) represents the ‘infection’ rate, that is, the rate of the transition S \(\rightarrow \) I, and \(\mu \) the ‘recovery’ rate, that is, the rate of the transition I \(\rightarrow \) S. In the adaptations of these models to rumors and news, an analogy is drawn between the latter and infective pathogens. Fact-checking, on the other hand, is contemplated only as a remedy after the hoax infection. Another class of models uses branching processes on signed networks to take into account user polarization [18]. Neither type, however, takes into account in the same model the three aforementioned mechanisms—competition between hoax and fact-checking, forgetting mechanisms, and segregation.

To consider all these features, here we introduce a simple agent-based model in which individuals are endowed with finite memory and a fixed predisposition toward factual verification. In this model, hoaxes and fact-checks compete on a network formed by two groups, the gullible and the skeptic, marked by a different tendency to believe in the hoax. Varying the level of segregation in the network, as well as the relative credibility of the hoax among the two groups, we look at whether the hoax becomes endemic or instead is eradicated from the whole population.


Here we describe a model of the spread of the belief in a hoax and the related fact-checking within a social network of agents with finite memory. An agent can be in any of the following three states: ‘Susceptible’ (denoted by S), if they have heard about neither the hoax nor the fact-checking, or if they have forgotten about them; ‘Believer’ (B), if they believe in the hoax and choose to spread it; and ‘Fact-checker’ (F), if they know the hoax is false—for example after having consulted an accurate news source—and choose to spread the fact-checking.

Let us consider the i-th agent at time step t and let us denote with \(n_i^{X}(t)\) the number of its neighbors in state \(X\in \left\{ S,B,F\right\} \). We assume that an agent ‘decides’ to believe in either the hoax or the fact-checking as a result of interaction over interpersonal ties. This could be due to social conformity [5] or because agents accept information from their neighbors [41]. Second, we assume that the hoax displays an intrinsic credibility \(\alpha \in \left[ 0,1\right] \), which, all else being equal, makes it more believable than the fact-checking. We will discuss later how this parameter can also be related to the users; for now, we consider it a feature of the hoax. Thus, the probabilities of transitioning from S to either B or F are given by functions \(f_i\) and \(g_i\), respectively:
$$ f_i(t) = \beta \,\frac{{n_i^B}(1 + \alpha )}{{n_i^B}(1 + \alpha ) + {n_i^F}(1 - \alpha )} $$
$$ g_i(t) = \beta \,\frac{{n_i^F}(1 - \alpha )}{{n_i^B}(1 + \alpha ) + {n_i^F}(1 - \alpha )} $$
where \(\beta \in [0,1]\) is the overall spreading rate. Furthermore, agents who have been infected by the news, either as believers or fact-checkers, can ‘forget’ and become susceptible again with a fixed probability \(p_\mathrm{f}\). This probability can also represent the strength of a belief: indeed, psychologists have observed that people differ in their propensity to remember facts or to change their opinion about a false news story, even after they have been exposed to the fact-checking [34, 35].
Finally, any believer who has not yet forgotten the hoax can decide to check the news and stop believing in the hoax, becoming a fact-checker. This happens with probability \(p_\mathrm{v}\). In any other case, an agent remains in its current state. The full model, with its state transitions, is shown in Fig. 1.
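As an illustration, the spreading probabilities \(f_i\) and \(g_i\) of Eqs. 1–2 can be sketched in a few lines of Python. This is our own minimal rendering (the function name and argument layout are hypothetical, not taken from the authors' implementation):

```python
def spreading_probs(n_B, n_F, alpha, beta):
    """Probabilities f_i (S -> B) and g_i (S -> F) for a susceptible agent
    with n_B believer neighbors and n_F fact-checker neighbors."""
    denom = n_B * (1 + alpha) + n_F * (1 - alpha)
    if denom == 0:
        # No active neighbors: the agent cannot transition.
        return 0.0, 0.0
    f = beta * n_B * (1 + alpha) / denom
    g = beta * n_F * (1 - alpha) / denom
    return f, g
```

Note that, whenever the agent has at least one active neighbor, `f + g` equals `beta`, consistent with the SIS analogy discussed below.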
Fig. 1

The state transitions for the generic ith agent of our hoax epidemic model. To simplify the model, here we set \(p_{{\text {v}}} = 1 - \alpha \)

Observe that \(f_i(t) + g_i(t)=\beta \), which is equivalent to the infection rate of the SIS model. Indeed, if one considers the two states B and F as a single ‘Infected’ state (I), then our model reduces to an SIS model, with the only difference that the probability of recovery \(\mu \) is denoted by \(p_\mathrm{f}\).

Let us denote by \(s_i(t)\) the state of the ith agent at time t, and let us define, for \(X \in \{B,F,S\}\), the state indicator function \(s_i^X(t) = \delta (s_i(t), X)\). The triple \(p_i(t) = \left[ p_i^{B}(t), p_i^{F}(t), p_i^{S}(t)\right] \) describes the probability that a node i is in any of the three states at time t. The dynamics of the system at time \(t + 1\) will be then given by a random realization of \(p_i\) at \(t + 1\). Thus, \(p_i(t + 1)\) can be described as:
$$ {p_i^{B}(t+1)} = f_i(t) s_i^{S}(t) + (1 - p_\mathrm{f})(1 - p_\mathrm{v}) s_i^{B}(t) $$
$$ {p_i^{F}(t+1)} = g_i(t) s_i^{S}(t) + p_\mathrm{v} (1 - p_\mathrm{f}) s_i^{B}(t) + (1 - p_\mathrm{f}) s_i^{F}(t) $$
$$ {p_i^{S}(t+1)} = p_\mathrm{f} \left[ s_i^{B}(t) + s_i^{F}(t)\right] + \left[ 1 - f_i(t) - g_i(t)\right] s_i^{S}(t) $$
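The update rules of Eqs. 3–5 can be sketched as one synchronous step over all agents. This is an illustrative implementation under our own conventions (states stored as a dict of 'S'/'B'/'F' labels, network given as adjacency lists); it is a sketch of the dynamics, not the authors' code:

```python
import random

def step(states, neighbors, alpha, beta, p_f, p_v, rng=random):
    """One synchronous update of agent states ('S', 'B', 'F').

    A susceptible agent becomes a believer with probability f_i and a
    fact-checker with probability g_i; a believer forgets with p_f, and
    otherwise verifies with p_v; a fact-checker forgets with p_f.
    """
    new = {}
    for i, s in states.items():
        if s == 'S':
            n_B = sum(states[j] == 'B' for j in neighbors[i])
            n_F = sum(states[j] == 'F' for j in neighbors[i])
            denom = n_B * (1 + alpha) + n_F * (1 - alpha)
            f = beta * n_B * (1 + alpha) / denom if denom else 0.0
            g = beta * n_F * (1 - alpha) / denom if denom else 0.0
            r = rng.random()
            new[i] = 'B' if r < f else 'F' if r < f + g else 'S'
        elif s == 'B':
            if rng.random() < p_f:        # forget: B -> S
                new[i] = 'S'
            elif rng.random() < p_v:      # verify: B -> F (prob. p_v(1 - p_f))
                new[i] = 'F'
            else:                         # stay B with prob. (1 - p_f)(1 - p_v)
                new[i] = 'B'
        else:  # 'F'
            new[i] = 'S' if rng.random() < p_f else 'F'
    return new
```

Iterating `step` until the state densities stabilize yields the equilibrium quantities \(B_\infty\), \(F_\infty\), and \(S_\infty\) discussed next.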
In previous work [44] we analyzed the behavior of the model at equilibrium. Starting from a well-mixed topology of N agents, in which a few agents have been initially seeded as believers, we derived the expressions for the density of believers, fact-checkers, and susceptible agents in the infinite-time limit, denoted by \(B_{\infty }\), \(F_{\infty }\), and \(S_{\infty }\), respectively. We found that, independent of the network topology (Barabási–Albert or Erdős–Rényi) and of the values of \(p_\mathrm{v}\) and \(\alpha \), \(S_{\infty }\) stabilizes around the same value in all simulations. We confirmed this result using both mean-field equations and simulations.

Once the system reaches equilibrium, the relative ratio between believers and fact-checkers is determined by \(\alpha \) and \(p_\mathrm{v}\): the higher \(\alpha \), the more believers, and conversely for \(p_\mathrm{v}\). In particular, we showed that there always exists a critical value of \(p_\mathrm{v}\) above which the hoax is completely eradicated from the network (i.e., \(B_\infty = 0\)). This value depends on \(\alpha \) and \(p_\mathrm{f}\), but not on the spreading rate \(\beta \).

As one can see, the model has several parameters, namely, the spreading rate \(\beta \), the credibility of the hoax \(\alpha \), the probability of verification \(p_\mathrm{v}\), and the probability of forgetting \(p_\mathrm{f}\). Since, in the present work, we want to consider the role of communities of people with different propensities to believe a hoax, the number of parameters will increase further.

As an attempt to reduce the number of parameters, we set
$$\begin{aligned} p_\mathrm{v} = 1 - \alpha . \end{aligned}$$
This simplification can be motivated by assuming that the more credible a piece of news is, the lower the chances that anybody will try to check its veracity. This means that we restrict the parameter space \(p_\mathrm{v} \times \alpha \) to a line. This constraint can be easily observed in Fig. 2 (left), where the curve represents the analytic threshold on the verifying probability: below it the hoax becomes endemic, above it the hoax is completely removed. We note that even with this additional constraint, this new, simplified model exhibits the same behaviors that our original model can produce (i.e., believers survive or not).
Fig. 2

Simplification of the model setting \(p_\mathrm{v} = 1- \alpha \), here fixing \(p_\mathrm{f}=0.1\) on scale-free networks of 1000 nodes. On the left we can observe the phase diagram of the entire parameter space considered for the model. In the present work we are restricting it to the dashed line, but we preserve all the possible configurations of the model. The right panel shows two random realizations of the number of believers over time, for two different sets of parameters (\(\alpha = 0.4\) and \(\alpha = 0.9\)) whose respective locations in the \(\alpha \times p_{\rm v}\) space are shown in the left panel. Believers can survive (dark line) or not (pale line)

Recomputing the mean-field equations with Eq. 6, we now obtain a critical value for \(p_\mathrm{f}\), a sufficient condition that guarantees the removal of the hoax from the network:
$$\begin{aligned} p_\mathrm{f} \le \frac{(1 - \alpha )^2}{1 + \alpha ^2} \quad \Longrightarrow \quad p^B(\infty ) = 0. \end{aligned}$$
The behavior of \(p_\mathrm{f}\) versus \(\alpha \) is shown in Fig. 3. For any combination of \(p_\mathrm{f}\) and \(\alpha \) below the curve, the hoax is completely removed from the network. For combinations above the curve, the infection is instead endemic. The forgetting probability can be considered a measure of the attention toward a specific topic: if there is a large discussion around the subject, then exposed people tend to have a stable opinion about it; otherwise, the probability of forgetting the belief and the news will be higher. The presence of the threshold in Eq. 7 could suggest that the level of attention plays an important role in the global spread and persistence of fake news.
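The threshold of Eq. 7 is straightforward to evaluate numerically; a minimal helper (the function name is ours) makes the eradication condition easy to check for any credibility value:

```python
def eradication_threshold(alpha):
    """Critical forgetting probability from Eq. 7: if
    p_f <= (1 - alpha)^2 / (1 + alpha^2), the hoax is removed (B_inf = 0)."""
    return (1 - alpha) ** 2 / (1 + alpha ** 2)
```

For example, a hoax with credibility \(\alpha = 0.5\) is eradicated whenever \(p_\mathrm{f} \le 0.2\); a fully credible hoax (\(\alpha = 1\)) is never eradicated, since the threshold drops to zero.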
Fig. 3

Epidemic threshold for the simplified version of the model given by Eq. 6. The grey area indicates the region of the parameter space where the hoax is completely removed from the network. The white part indicates the region of the parameter space where the hoax can become endemic


Two parameters govern the spreading dynamics in our model. These are the credibility \(\alpha \) and the forgetting probability \(p_\mathrm{f}\). To address our research question about the role of network structure and communities, we consider a simple generative model of a segregated network. Let us consider N agents divided into two groups, one comprised of \(\gamma N\) agents (\(0< \gamma < 1\)) whose beliefs conform more to the hoax than those of the other group, which comprises the rest of the population. We call the former the gullible group and the latter the skeptic group. To represent this in our framework, we set different values of \(\alpha \) for each agent group (either \(\alpha _\mathrm{gu}\) or \(\alpha _\mathrm{sk}\), with \(\alpha _\mathrm{gu}> \alpha _\mathrm{sk}\)). This does not contradict what we said before: the credibility is a parameter describing the hoax, but it is of course also related to users' attitudes and personal worldviews, so it is reasonable to think of different groups having different values of it.

To generate the network, we assign M edges at random. Let \(s \in \left[ \frac{1}{2}, 1\right) \) denote the fraction of intra-group edges, regardless of the group. For each edge we first decide whether it should connect two individuals from the same group (an intra-group tie, with probability s) or from different groups (an inter-group tie, with probability \(1-s\)). In the case of an intra-group tie, we select a group with probability proportional to the ratio of the total number of possible intra-group ties of that group to that of the whole network; then, we pick uniformly at random two agents from that group and connect them. In the second case, two agents are chosen at random, one per group, and connected. Figure 4 shows three examples of networks with different values of s.
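The generative procedure just described can be sketched as follows. This is our reading of the text (the function name and signature are hypothetical): with probability s the edge is intra-group, with the group chosen proportionally to its number of possible intra-group pairs; otherwise one endpoint is drawn from each group.

```python
import random

def segregated_network(n_gu, n_sk, M, s, rng=random):
    """Generate M random undirected edges over n_gu gullible agents
    (ids 0..n_gu-1) and n_sk skeptic agents (ids n_gu..n_gu+n_sk-1)."""
    gullible = list(range(n_gu))
    skeptic = list(range(n_gu, n_gu + n_sk))
    # Number of possible intra-group pairs in each group.
    pairs_gu = n_gu * (n_gu - 1) / 2
    pairs_sk = n_sk * (n_sk - 1) / 2
    edges = set()
    while len(edges) < M:
        if rng.random() < s:
            # Intra-group tie: pick a group proportionally to its
            # share of possible intra-group pairs, then two members.
            if rng.random() < pairs_gu / (pairs_gu + pairs_sk):
                u, v = rng.sample(gullible, 2)
            else:
                u, v = rng.sample(skeptic, 2)
        else:
            # Inter-group tie: one endpoint per group.
            u, v = rng.choice(gullible), rng.choice(skeptic)
        edges.add((min(u, v), max(u, v)))  # dedupe parallel edges
    return edges
```

With \(s \to 1\) the two groups become almost disconnected, reproducing the highly segregated regime of Fig. 4c.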
Fig. 4

Network structure under different segregation regimes between two groups (in this case of equal size). In the figure, three different values of s were used to generate an example network of 1000 nodes: a \(s = 0.6\), b \(s = 0.8\), and c \(s = 0.95\). Node layout was computed using a force-directed algorithm [22]

To understand the behavior of the model in this segregated network, we performed extensive numerical simulations with networks of 1000 nodes. We set fixed values for \(\alpha _\mathrm{sk}\) and considered a wide range of values of \(\alpha _\mathrm{gu}\), \(p_\mathrm{f}\), s, and \(\gamma \). Figure 5 reports the results of the first of these exercises, showing the overall number of believers \(B_\infty \) in the whole population at equilibrium.
Fig. 5

Believers at equilibrium in the phase space of \(s\times \alpha _\mathrm{gu}\). We considered two forgetting regimes: a low forgetting, \(p_\mathrm{f} = 0.1\), and b high forgetting, \(p_\mathrm{f} = 0.8\). Other parameters: \(\alpha _\mathrm{sk}= 0.4\), \(N=1000\). Each point was averaged over 50 simulations

Increasing either \(s\) or \(\alpha _\mathrm{gu}\), we see an increase of \(B_\infty \), all else being equal. However, when we change the forgetting probability \(p_{\rm f}\) we observe two different situations: for small \(p_\mathrm{f}\), an increase of s results in an increase of \(B_{\infty }\). Conversely, and perhaps a bit surprisingly, under high values of \(p_\mathrm{f}\) increasing s does not change \(B_{\infty }\).

To better understand the role of \(p_\mathrm{f}\), we further explore the behavior of the model by varying the size of the gullible group \(\gamma \) and its level of segregation s. In Fig. 6 we report the relevant phase diagrams, breaking down the number of believers at equilibrium by group, i.e., \(B_\infty = B_\infty ^\mathrm{gu} + B_\infty ^\mathrm{sk}\). If \(p_\mathrm{f}\) is low (Fig. 6, left column), the overall number of believers depends heavily on \(B_\infty ^\mathrm{gu}\), whereas \(B_\infty ^\mathrm{sk} \approx 0\) and segregation is unimportant; see Fig. 6a, c, e.
Fig. 6

Believers at equilibrium under low (\(p_\mathrm{f} = 0.1\)) and high forgetting (\(p_\mathrm{f} = 0.8\)) rate. The number of believers at equilibrium is broken down as \(B_\infty = B_{\infty }^\mathrm{gu} + B_{\infty }^\mathrm{sk}\). Phase diagrams in the space \(s \times \gamma \) for a \(B_\infty ^\mathrm{gu}\), low forgetting, b \(B_\infty ^\mathrm{gu}\), high forgetting, c \(B_\infty ^\mathrm{sk}\), low forgetting, d \(B_\infty ^\mathrm{sk}\), high forgetting, e \(B_\infty \), low forgetting, and f \(B_\infty \), high forgetting. We fixed \(N=1000\), \(\alpha _\mathrm{gu}= 0.9\) and \(\alpha _\mathrm{sk}=0.05\)

Instead, with a high rate of forgetting (right column), \(B_{\infty }\) (Fig. 6f) depends on both \(B_\infty ^\mathrm{sk}\) and \(B_\infty ^\mathrm{gu}\). But in this case segregation plays a different role: while in the skeptic group \(B_\infty ^\mathrm{sk}\) decreases as s increases (Fig. 6d), in the gullible group s has less influence (Fig. 6b).

To give analytical support to our findings, we derive a mean-field approximation of the model (details in the “Appendix”) and perform both numerical integration of the mean-field equations and agent-based simulations, which give very similar results. Figure 7 shows the phase diagrams obtained by numerical integration of the mean-field equations.
Fig. 7

Mean-field approximation for different values of \(p_\mathrm{f}\): these phase diagrams represent the density of believers at equilibrium varying \(\gamma \) and s, exactly as in Fig. 6

Summarizing, segregation can play very different roles in the final configuration of the hoax spreading, depending on the forgetting rate. Why is the number of links among communities with different behaviors so important? It should be noted that any ‘network effect’ present in our model will only appear in the infection phase, that is, for the transitions \(S\rightarrow B\) and \(S\rightarrow F\). To better understand what happens in both groups, we computed the rate at which these transitions happen, that is, the conditional probability that a susceptible agent becomes either a believer or a fact-checker.
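Measuring these rates in simulation amounts to counting, among the agents that were susceptible at time t, how many became believers or fact-checkers at t + 1. A minimal sketch (the function name and the dict-based representation are ours):

```python
def transition_rates(old, new):
    """Empirical per-susceptible rates of S -> B and S -> F between two
    consecutive configurations (dicts mapping agent id -> state)."""
    sus = [i for i, state in old.items() if state == 'S']
    if not sus:
        return 0.0, 0.0
    to_B = sum(new[i] == 'B' for i in sus)  # S -> B transitions
    to_F = sum(new[i] == 'F' for i in sus)  # S -> F transitions
    return to_B / len(sus), to_F / len(sus)
```

Averaging these two quantities over many steps at the steady state, separately for each group, gives the rates reported in Fig. 8.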

Let us consider a susceptible agent in the gullible group. At low forgetting rates, more intra-group ties in the gullible group (i.e., a higher s) increase the chances of becoming a believer and reduce those of becoming a fact-checker; see Fig. 8 (top left). In the skeptic group, the segregation effect is almost negligible (top right). This happens because inter-group ties expose susceptible agents among the gullible to more members of the skeptic group, who are largely fact-checkers.

At high forgetting rates, instead, we observe the opposite behavior: more inter-group ties translate into more exposure, for susceptible users in the skeptic group, to gullible agents who are, by and large, believers. In the gullible group (bottom left of Fig. 8), segregation is not very important, while in the skeptic group more connections with the gullible mean more believers (bottom right of Fig. 8).
Fig. 8

Rates of the transitions \(S\rightarrow B\) and \(S\rightarrow F\) at equilibrium. We run each simulation until the system reaches the steady state and then compute the average number of transitions per susceptible agent. The plot shows averages over 50 simulations on networks of 1000 nodes

In other words, the role of segregation, being related to the abundance of inter-group ties, can be either positive or negative in stopping the spread of misinformation: at low forgetting rates, these links can help the spread of the debunking in the gullible group, while at high forgetting rates they have the opposite effect, helping the hoax spread in the skeptic group.


Using agent-based simulations, here we have analyzed the role of the underlying structure of the network on which the diffusion of a piece of misinformation takes place. In particular we consider a network formed by two groups—gullible and skeptic—characterized by different values of the credibility parameter \(\alpha \). To study how the social structure shapes information exposure, we introduce a parameter s that regulates the abundance of ties between these two groups. We observe that s has an important role in the diffusion of misinformation. If the probability of forgetting \(p_\mathrm{f}\) is small then the fraction of the population affected by the hoax will be large or small depending on whether the network is, respectively, segregated or not. However, if the rate of forgetting is large, segregation has a somewhat different effect on the spread of the hoax, and the presence of links among communities can promote the diffusion of the misinformation within the skeptic communities.

The probability of forgetting could be also interpreted as a measure of how much a given topic is discussed. A low value of \(p_\mathrm{f}\) could perhaps fit well with the scenario of ideas whose belief tends to be more persistent over time, for example conspiracy theories. A high value of \(p_\mathrm{f}\) could fit better with situations where beliefs are short lived, either because the claims are easy to debunk or are no more interesting than mere gossip, whose information value is transient. Hoaxes about the alleged death of celebrities, for instance, could fall within this latter category.

On the basis of the findings presented in this paper, further research should be devoted to understanding the role of network segregation in the spread of misinformation on social media. In the case of conspiracy theories, it could be useful to analyze what happens if the communication among different groups increases. Moreover, it could also be interesting to consider more realistic situations in which rumors or hoaxes have different levels of credibility for different agents—for example, based on socio-economic features and other individual-level attributes—or where the likelihood of verifying (\(B\rightarrow F\)) depends on the state of the network, as opposed to having a constant rate of occurrence \(p_\mathrm{v}\), as we do here.

Our results are also important from a purely theoretical point of view. Indeed, the model we introduced in prior work, and on which we build here, was an example of an epidemic process that is not affected by the network topology, meaning that the structure does not influence the final configuration of the network—indeed, it can be shown that there are no significant differences in the behavior of the spreading dynamics on random or scale-free networks [44]. In the present work, however, we show that network structure can actually become very important if we add an extra element of complexity, characterizing groups of nodes with slightly different behaviors (here, different values of the credibility parameter). This points to the need for more research, experiments, and simulations to understand which parameters are sensitive to the segregation level, or to some other topological measure, even in models whose dynamics are usually topology-independent.

In conclusion, understanding the production and consumption of misinformation is a critical issue [15]. As several episodes are showing, there are obvious consequences connected to the uncontrolled production and consumption of inaccurate information [26]. A more thorough understanding of rumor propagation and the structural properties of the information exchange networks on which this happens may help mitigate these risks.



The authors would like to acknowledge Filippo Menczer and Alessandro Flammini for feedback and insightful conversations. DFMO acknowledges support from the James S. McDonnell Foundation. GLC acknowledges support from the Indiana University Network Science Institute and from the Swiss National Science Foundation (PBTIP2_142353).


  1. 1.
    Acemoglu, D., Ozdaglar, A., & ParandehGheibi, A. (2010). Spread of (mis) information in social networks. Games and Economic Behavior, 70(2), 194–227.CrossRefGoogle Scholar
  2. 2.
    Allport, G. W., & Postman, L. (1947). The psychology of rumor. Oxford, England: Henry Holt.Google Scholar
  3. 3.
    Anagnostopoulos, A., Bessi, A., Caldarelli, G., Del Vicario, M., Petroni, F., Scala, A., Zollo, F.,& Quattrociocchi, W. (2014). Viral misinformation: the role of homophily and polarization. arXiv:1411.2893.
  4. 4.
    Andrews, C., Fichet, E., Ding, Y., Spiro, E. S.,& Starbird, K. (2016). Keeping up with the tweet-dashians: The impact of ’official’ accounts on online rumoring. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, CSCW ’16 (pp. 452–465). New York, NY, USA: ACM.Google Scholar
  5. 5.
    Asch, S. E. (1961). Effects of group pressure upon the modification and distortion of judgements. In M. Henle (Ed.), Documents of gestalt psychology (pp. 222–236). Oakland, California, USA: University of California Press.Google Scholar
  6. 6.
    Bakshy, E., Hofman, J. M., Mason, W. A., & Watts, D. J. (2011). Everyone’s an influencer: Quantifying influence on Twitter. In Proceedings of the Fourth ACM International Conference on Web Search and Data Mining, WSDM ’11 (pp. 65–74). New York, NY, USA: ACM.Google Scholar
  7. 7.
    Bakshy, E., Rosenn, I., Marlow, C.,& Adamic, L. (2012). The role of social networks in information diffusion. In Proceedings of the 21st international conference on World Wide Web (pp. 519–528). ACM.Google Scholar
  8. 8.
    Bakshy, E., Messing, S., & Adamic, L. A. (2015). Exposure to ideologically diverse news and opinion on Facebook. Science, 348(6239), 1130–1132.CrossRefGoogle Scholar
  9. 9.
    Benkler, Y. (2006). The wealth of networks: How social production transforms markets and freedom. London: Yale University Press.Google Scholar
  10. 10.
    Bessi, A., Coletto, M., Davidescu, G. A., Scala, A., Caldarelli, G., & Quattrociocchi, W. (2015). Science vs conspiracy: Collective narratives in the age of misinformation. PLoS ONE, 10(2), e0118093.CrossRefGoogle Scholar
  11. 11.
    Borel, B. (2016). The Chicago guide to fact-checking. Chicago, IL, USA: The University of Chicago Press.CrossRefGoogle Scholar
  12. 12.
    Butler, A. C., Fazio, L. K., & Marsh, E. J. (2011). The hypercorrection effect persists over a week, but high-confidence errors return. Psychonomic Bulletin & Review, 18(6), 1238–1244.CrossRefGoogle Scholar
  13. 13.
    Centola, D., & Macy, M. (2007). Complex contagions and the weakness of long ties. American Journal of Sociology, 113(3), 702–734.CrossRefGoogle Scholar
  14. 14.
    Chierichetti, F., Lattanzi, S.,& Panconesi, A. (2009). Rumor spreading in social networks. In Automata, Languages and Programming (pp. 375–386). Springer.Google Scholar
  15. 15.
    Ciampaglia, G. L., Flammini, A., & Menczer, F. (2015). The production of information in the attention economy. Scientific Reports, 5, 9452.CrossRefGoogle Scholar
  16. 16.
    Ciampaglia, G. L., Shiralkar, P., Rocha, L. M., Bollen, J., Menczer, F., & Flammini, A. (2015). Computational fact checking from knowledge networks. PLoS One, 10(6), e0128193.CrossRefGoogle Scholar
  17. 17.
    Daley, D. J., & Kendall, D. G. (1964). Epidemics and rumours. Nature, 204, 1118.CrossRefGoogle Scholar
  18. 18.
    Del Vicario, M., Bessi, A., Zollo, F., Petroni, F., Scala, A., Caldarelli, G., Stanley, H. E.,& Quattrociocchi W. (2016). The spreading of misinformation online. In Proceedings of the National Academy of Sciences.Google Scholar
  19.
    Dong, X. L., Gabrilovich, E., Murphy, K., Dang, V., Horn, W., Lugaresi, C., et al. (2015). Knowledge-based trust: Estimating the trustworthiness of web sources. Proceedings of the VLDB Endowment, 8(9), 938–949.
  20. (2017). A project of the Annenberg Public Policy Center. Online. Accessed 28 Oct 2017.
  21.
    Friggeri, A., Adamic, L. A., Eckles, D., & Cheng, J. (2014). Rumor cascades. In Proc. Eighth Intl. AAAI Conf. on Weblogs and Social Media (ICWSM) (pp. 101–110).
  22.
    Fruchterman, T. M. J., & Reingold, E. M. (1991). Graph drawing by force-directed placement. Software: Practice & Experience, 21(11), 1129–1164.
  23.
    Funk, S., & Jansen, V. A. A. (2010). Interacting epidemics on overlay networks. Physical Review E, 81, 036118.
  24.
    Galam, S. (2003). Modelling rumors: The no plane Pentagon French hoax case. Physica A: Statistical Mechanics and Its Applications, 320, 571–580.
  25.
    Gleeson, J. P., Ward, J. A., O’Sullivan, K. P., & Lee, W. T. (2014). Competition-induced criticality in a model of meme popularity. Physical Review Letters, 112, 048701.
  26.
    Howell, L., et al. (2013). Digital wildfires in a hyperconnected world. In Global Risks. World Economic Forum.
  27.
    Knapp, R. H. (1944). A psychology of rumor. Public Opinion Quarterly, 8(1), 22–37.
  28.
    Kwak, H., Lee, C., Park, H., & Moon, S. (2010). What is Twitter, a social network or a news media? In Proceedings of the 19th International Conference on World Wide Web, WWW ’10 (pp. 591–600). New York, NY, USA: ACM.
  29.
    McPherson, M., Smith-Lovin, L., & Cook, J. M. (2001). Birds of a feather: Homophily in social networks. Annual Review of Sociology, 27(1), 415–444.
  30.
    Moreno, Y., Nekovee, M., & Pacheco, A. F. (2004). Dynamics of rumor spreading in complex networks. Physical Review E, 69(6), 066130.
  31.
    Nematzadeh, A., Ciampaglia, G. L., Menczer, F., & Flammini, A. (2017). How algorithmic popularity bias hinders or promotes quality. e-print, CoRR.
  32.
    Newman, M. E. J., & Ferrario, C. R. (2013). Interacting epidemics and coinfection on contact networks. PLoS ONE, 8(8), 1–8.
  33.
    Nikolov, D., Oliveira, D. F., Flammini, A., & Menczer, F. (2015). Measuring online social bubbles. PeerJ Computer Science, 1, e38.
  34.
    Nyhan, B., & Reifler, J. (2015). The effect of fact-checking on elites: A field experiment on US state legislators. American Journal of Political Science, 59(3), 628–640.
  35.
    Nyhan, B., Reifler, J., & Ubel, P. A. (2013). The hazards of correcting myths about health care reform. Medical Care, 51(2), 127–132.
  36.
    Onnela, J.-P., Saramäki, J., Hyvönen, J., Szabó, G., Lazer, D., Kaski, K., et al. (2007). Structure and tie strengths in mobile communication networks. Proceedings of the National Academy of Sciences, 104(18), 7332–7336.
  37.
    Owens, E., & Weinsberg, U. (2015). News Feed FYI: Showing fewer hoaxes. Online. Accessed Jan 2016.
  38.
    Pariser, E. (2011). The filter bubble: What the Internet is hiding from you. London, UK: Penguin.
  39.
    Pastor-Satorras, R., Castellano, C., Van Mieghem, P., & Vespignani, A. (2015). Epidemic processes in complex networks. Reviews of Modern Physics, 87, 925–979.
  40.
    Qiu, X., Oliveira, D. F., Shirazi, A. S., Flammini, A., & Menczer, F. (2017). Limited individual attention and online virality of low-quality information. Nature Human Behaviour, 1(7), 0132.
  41.
    Rosnow, R. L., & Fine, G. A. (1976). Rumor and gossip: The social psychology of hearsay. New York City: Elsevier.
  42. (2017). The definitive fact-checking site and reference source for urban legends, folklore, myths, rumors, and misinformation. Online. Accessed 28 Oct 2017.
  43.
    Sunstein, C. (2002). Princeton, NJ: Princeton University Press.
  44.
    Tambuscio, M., Ruffo, G., Flammini, A., & Menczer, F. (2015). Fact-checking effect on viral hoaxes: A model of misinformation spread in social networks. In Proceedings of the 24th International Conference on World Wide Web Companion (pp. 977–982). International World Wide Web Conferences Steering Committee.
  45.
    The Duke Reporters’ Lab keeps an updated list of global fact-checking sites. Online. Accessed 29 June 2018.
  46.
    Tampa Bay Times. (2017). Fact-checking U.S. politics. Online. Accessed 28 Oct 2017.
  47.
    Weng, L., Flammini, A., Vespignani, A., & Menczer, F. (2012). Competition among memes in a world with limited attention. Scientific Reports, 2, 335.
  48.
    Wood, T., & Porter, E. (2016). The elusive backfire effect: Mass attitudes’ steadfast factual adherence. e-print, SSRN.

Copyright information

© Springer Nature Singapore Pte Ltd. 2018

Authors and Affiliations

  1. Computer Science Department, University of Turin, Turin, Italy
  2. School of Informatics, Computing, and Engineering, Indiana University, Bloomington, USA
  3. US Army Research Laboratory, Adelphi, USA
  4. Network Science and Technology Center, Rensselaer Polytechnic Institute, Troy, USA
  5. Network Science Institute, Indiana University, Bloomington, USA
