Fact-checking strategies to limit urban legends spreading in a segregated society
We propose a framework to study the spreading of urban legends, i.e., false stories that become persistent in a local popular culture, in societies where social groups are naturally segregated by virtue of many (both mutable and immutable) attributes. The goal of this work is to identify and test new strategies to restrain the dissemination of false information, focusing on the role of network polarization. Following the traditional approach in the study of information diffusion, we consider an epidemic network-based model where agents can be ‘infected’ after being exposed to the urban legend or to its debunking, depending on the beliefs of their neighborhood. Simulating the spreading process on several networks exhibiting different kinds of segregation, we perform a what-if analysis to compare strategies and to understand where it is best to locate eternal fact-checkers, nodes that permanently maintain their position as debunkers of the given urban legend. Our results suggest that very few of these strategies have a chance to succeed. This apparently negative outcome is somewhat surprising given that we ran our simulations under a highly pessimistic assumption: ‘believers’, i.e., agents who accepted the urban legend as true after being exposed to it, never change their belief, no matter how many external or internal additional information sources they access. This has implications for policies that must decide which strategy to apply to stop misinformation from spreading in real-world networks.
Keywords: Misinformation spreading · Network segregation · Debunking strategies · Epidemics
Abbreviations: SIS - Susceptible-Infected-Susceptible (epidemic model); eternal Fact Checkers
Our goal is to investigate new strategies to limit the spreading of false news, especially in the presence of existing structural, geographical and/or social barriers. If segregation seems to be an intrinsic feature of modern urban environments, which policies can be implemented to empower fact-checking platforms? There is some research on identifying influential spreaders of information (Kitsak et al. 2010; Ghosh and Lerman 2010) or rumors (Borge-Holthoefer et al. 2013), but to the best of our knowledge few efforts have been devoted to assessing and comparing possible debunking strategies that exploit the network topology. In this paper, we suggest the application of a what-if analysis based on epidemic modeling in order to explore new fact-checking policies to limit the diffusion of urban legends. This methodology is particularly useful in contexts where we do not have data on how a given strategy to restrain misinformation would perform, because such action plans have never been applied in real life (or their results have not been disclosed to scholars yet).
Online information consumption and polarization
Misinformation has recently been widely discussed (Lazer et al. 2018; Vosoughi et al. 2018) because it can have serious consequences for our lives: even if in some cases fake news is intentionally disseminated to manipulate public opinion, there is a large amount of persistent rumors, or urban legends, that look like simple popular stories but are often related to social problems and leverage people's fears, prejudices and emotions (Campion-Vincent 2017; Heath et al. 2001). In this framework, digital technologies such as online social networks can facilitate the spreading of misinformation, especially because they are homophily-driven, built with the intent of connecting like-minded people, and often exhibit echo chambers, highly segregated environments with low content diversity and a high degree of repetition (Adamic and Glance 2005; Conover et al. 2011; Pariser 2011; Bozdag and van den Hoven 2015). Moreover, these platforms involve filtering algorithms (DeVito 2017) and recommendation systems that give disproportionate visibility to popular content within social circles. These mechanisms of algorithmic personalization have been widely debated in the literature, to understand whether they affect the evolution of opinions (Rossi et al. 2018; Bressan et al. 2016) and polarize the network (Perra and Rocha 2019; Dandekar et al. 2013; Geschke et al. 2019), or whether, conversely, they do not have a leading role in the formation of echo chambers (Möller et al. 2018; Bakshy et al. 2015).
Segregation, homophily, and network topologies
Empirical analyses confirmed that online conversations involving misinformation appear to be highly polarized (Del Vicario et al. 2016; Bessi et al. 2015), but research about the role of the underlying network topology in information diffusion suggests that the level of segregation can affect the spreading in different ways (Tambuscio et al. 2018; Bakshy et al. 2012; Onnela et al. 2007; Weng et al. 2013; Nematzadeh et al. 2014). However, many attributes or factors that lead to the formation of segregated communities are somehow ‘mutable’: for example, nodes that join or leave the network can contribute to creating new shortest paths to otherwise distant communities, and interests change over time, shifting attention across topics. On the other hand, segregation has been widely studied (Oka and Wong 2015; Massey and Denton 1993; 1987) and observed (Bajardi et al. 2015; Herdağdelen et al. 2016; Lamanna et al. 2018) in urban environments, involving features of human life such as language, religion, ethnicity, education, employment and so on. Many of these attributes are ‘immutable’, and the topology of the network can be shaped accordingly. The theoretical framework provided by the Schelling model (Schelling 2006) shows that spatial segregation is somehow natural even in tolerant societies: in a simple grid where agents can change their place if the fraction of similar individuals in their spatial proximity is lower than a given threshold, even a small bias towards homophily, while still highly tolerant with respect to diversity, leads to totally segregated configurations. Interestingly, these patterns have been observed in real societies (Gracia-Lázaro et al. 2009; Clark and Fossett 2008).
In our research, to better generalize our findings, we focus on different network topologies that can be caused by social dynamics such as preferential attachment, as well as intrinsic segregation patterns that are dependent on immutable characteristics of the population of a city.
Information spreading modeling
The tradition of information (and consequently, rumor and misinformation) diffusion modeling has involved different approaches: epidemic models (Moreno et al. 2004; Daley and Kendall 1964), influence models (Goldenberg et al. 2001; Granovetter and Soong 1983), and opinion dynamics (Castellano et al. 2009) are the best known. In particular, researchers distinguish between simple contagion (induced by a single exposure, as in epidemic models) and complex contagion (dependent on multiple exposures, as in influence models) (Centola and Macy 2007). Even if complex contagion has been found to describe observed information cascades well and to predict their size (Lerman 2016; Mønsted et al. 2017; Romero et al. 2011), the complexity of the phenomenon seems to involve other factors (Min and San Miguel 2018; Zhuang et al. 2017), and many models based on epidemics have been proposed to study rumor and misinformation (Zhao et al. 2013; Jin et al. 2013; de Arruda et al. 2016). Moreover, in models based on complex contagion, agents have only one chance to activate their neighbors and never deactivate, meaning that such models do not take forgetting mechanisms into account. For misinformation spreading this is an important element to represent, since many psychological studies suggest that forgetting plays a significant role (Lewandowsky et al. 2012; Nyhan and Reifler 2010).
Following the epidemic approach we extended a compartmental model (Tambuscio et al. 2015) where agents can be in one of three states: Susceptible, if they ignore the news, Believer, if they support the urban legend, or Fact Checker, if they decide to foster the debunking. Evolution in time is given by transition rates that allow an agent to change state, and these rates depend on the following parameters: the number of Believer or Fact-Checker neighbors, the spreading rate β (common to both hoax and debunking), the credibility of the hoax α (which gives some priority to misinformation but can also represent different propensities to believe), and the forgetting rate pf (the probability for agents in the Believer and Fact-Checker states to return to the Susceptible state). Since it is known that bias and personal beliefs often prevent people from looking for fact-checking (Nyhan and Reifler 2010; Lewandowsky et al. 2012), we consider here the worst case in the framework given by the model in Tambuscio et al. (2015), in which no one verifies the information (meaning that it is not possible to switch from the Believer to the Fact-Checker state) and the debunking spreads only as an opinion competing with the rumor.
We simulate spreading dynamics on three types of networks: scale-free networks, networks formed by communities characterized by different values of credibility (including a simulation on the well-known ‘polblogs’ real network), and grid configurations obtained by means of the Schelling segregation model. The parameter α in the model represents the credibility of the hoax, i.e., the tendency of each agent to believe it. This is an advantage for the misinformation piece, reflecting the results of several psychological studies (Allport and Postman 1947; DiFonzo and Bordia 2007; Silverman 2015) indicating that credibility (combined with repetition) is a strong enhancer of rumor diffusion; the effect is even amplified if the story matches pre-existing beliefs (confirmation bias) (Nyhan and Reifler 2010). It is therefore reasonable to represent urban legends as having some priority with respect to their fact-check, at least in some communities. Under these conditions, in the absence of a Believer - Fact Checker transition (no verifying activity), the rumor at some point affects the whole population and the debunking dies out, even for very low values of α. Therefore, to limit the propagation of the rumor in such a configuration, we propose here to fix some nodes as eternal Fact Checkers (i.e., nodes that never return to the Susceptible state), and we run several simulations to compare a group of strategies targeting different types of nodes as eternal fact checkers. For instance, if the network is highly segregated, a solution to be tested would be to place fact-checkers on the frontier between the clusters, so that we could exploit natural segregation to confine the urban legend to only some clusters. Nevertheless, if the frontier is not totally covered, the rumor can eventually go beyond it and propagate through the whole network. In this case, if the same number of fact-checkers is placed on the highest-degree nodes (hubs), the rumor diffusion can be partially limited.
We will discuss in detail these strategies through simulations of the model in different network topologies, highlighting the fact that in each case we are able to find a strategy that contains the misinformation: these findings can be useful in proposing new policies to foster debunking and fight fake news spreading.
- assuming that a Susceptible agent can decide to believe in either the hoax or the fact checking as a result of interaction over interpersonal ties (Rosnow and Fine 1976), the rumor/debunking spreading (transitions S→B and S→F) depends on the number of believers/fact-checkers among neighbors and on a parameter α that represents the credibility of the legend;
- Believers and Fact Checkers can return to the Susceptible state with a fixed forgetting probability pf (transitions B→S and F→S);
Please observe that this model is a ‘pessimistic’ variation of a previous model (Tambuscio et al. 2015), which also follows the traditional approach of epidemic spreading (Moreno et al. 2004) to understand misinformation diffusion dynamics; in fact, in the previous model we also introduced the possibility for an agent to switch from Believer to Fact Checker with a given verifying probability pv (Tambuscio et al. 2015), meaning that the debunking can also be spread by external factors (online fact-checking platforms, for instance). Here, instead, we consider the worst possible scenario to test our strategies: users do not (want to) verify the news; they can only be influenced by their Fact-Checker neighbors (if any) while in the Susceptible state. After they take a position, they can only return to the Susceptible state by forgetting what they learnt about the news they were previously exposed to.
where β∈[0,1] is the spreading rate and α∈[0,1) represents the credibility of the legend (meaning that it is more believable when α is close to 1), giving some priority to the piece of misinformation with respect to the debunking. Please observe that when α=0 the hoax is still able to spread, but it has no advantage over the fact-checking.
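The transition probabilities that this sentence refers to are not reproduced in this excerpt. Based on the surrounding description (infection driven by neighbor counts weighted by the credibility α, and no advantage for the hoax when α=0), a plausible reconstruction, consistent with the functional form used in Tambuscio et al. (2015) but not the authors' verbatim equation, is:

```latex
% Assumed reconstruction: probability that a Susceptible agent i
% becomes a Believer (f_i) or a Fact Checker (g_i), given n_B^i
% Believer and n_F^i Fact-Checker neighbors.
f_i = \beta \, \frac{n_B^i (1+\alpha)}{n_B^i (1+\alpha) + n_F^i (1-\alpha)},
\qquad
g_i = \beta \, \frac{n_F^i (1-\alpha)}{n_B^i (1+\alpha) + n_F^i (1-\alpha)}
```

For α=0 both expressions reduce to β n^i/(n_B^i + n_F^i), so the hoax still spreads but with no advantage over the fact-checking, as stated in the text.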
Once the model has reached the equilibrium, we denote with S∞, B∞ and F∞ the asymptotic density of agents in the three states.
In other words, we are representing an urban legend spreading with an opinion dynamics model where the hoax competes with its debunking at the local level of the agents' social interactions. Please notice that this model follows a SIS-like (Susceptible-Infected-Susceptible) dynamics where the Infected state is split into the Believer and Fact Checker compartments.
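A minimal sketch of one synchronous update of these dynamics can clarify how the compartments interact. The exact functional form of the infection probabilities is an assumption (neighbor counts weighted by the credibility α, consistent with the description above), not the authors' verbatim equations; the `eternal` argument anticipates the eternal Fact Checkers introduced later.

```python
import random

S, B, F = "S", "B", "F"  # Susceptible, Believer, Fact Checker

def step(G, state, beta=0.5, alpha=0.3, pf=0.1, eternal=frozenset()):
    """One synchronous update of the hoax/debunking dynamics.

    G is any adjacency mapping (node -> iterable of neighbors).
    Susceptible agents may become Believers or Fact Checkers depending
    on a weighted count of infected neighbors (assumed form: the hoax
    gets a (1+alpha) advantage, the debunking a (1-alpha) penalty);
    Believers and Fact Checkers forget with probability pf.
    Nodes in `eternal` never change state.
    """
    new_state = dict(state)
    for node in G:
        if node in eternal:
            continue
        if state[node] == S:
            n_b = sum(1 for v in G[node] if state[v] == B)
            n_f = sum(1 for v in G[node] if state[v] == F)
            denom = n_b * (1 + alpha) + n_f * (1 - alpha)
            if denom > 0:
                p_b = beta * n_b * (1 + alpha) / denom  # S -> B
                p_f = beta * n_f * (1 - alpha) / denom  # S -> F
                r = random.random()
                if r < p_b:
                    new_state[node] = B
                elif r < p_b + p_f:
                    new_state[node] = F
        elif random.random() < pf:  # B -> S or F -> S (forgetting)
            new_state[node] = S
    return new_state
```

Iterating `step` until the state densities stabilize gives the asymptotic quantities S∞, B∞ and F∞ discussed in the text.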
The goal of this work is to compare different strategies to limit misinformation spreading in a segregated society; in our simulations we will consider different types of networks that exhibit various degrees of segregation. Let us briefly recap what we know about the role of the network topology in the model described in the “The model” section. The original version of the model, where agents can verify a piece of information and switch from Believer to Fact Checker, showed the same behavior on random, scale-free and real networks (Tambuscio et al. 2015). The work described in Tambuscio et al. (2018) focused on the evolution of the model dynamics in networks formed by two communities, one made of more gullible agents, the other set to be more skeptical: these communities exhibit different propensities to believe (different values of α). Extensive simulations showed that the segregation level of the network can either help spread or help stop the misinformation, depending on the forgetting rate. In particular, these networks were artificially generated by rewiring two random networks, obtaining different levels of segregation.
In the following sections we introduce the networks on which we ran our experiments, and the debunking strategies used to perform the what-if analysis with our new model.
Synthetic scale-free networks
Observations: In our previous works (Tambuscio et al. 2015; 2018) we ran our simulations with varying values of N. We found that for larger N (up to 10,000) the general behavior does not change, hence we kept N smaller in order to run many different realizations of the model faster.
It is also important to observe that scale-free artificial networks showing different segregation values could have been generated by means of configuration models. In our comparative what-if analysis we used three different topologies (BA graphs, Schelling-based networks, and POLBLOGS as a network built from real data), and we observed comparable behaviors, which led us to conclude it was not necessary to simulate our strategies on another family of artificially generated graphs. Nevertheless, it is true that configuration models have fewer drawbacks in terms of non-trivial correlations than BA networks, so an additional analysis can be performed as future work.
We considered a real network (POLBLOGS) between weblogs on US politics, collected during the 2004 US elections (Adamic and Glance 2005). We chose this network because it is formed by two labeled communities that somehow reflect an opinion (belief) of the nodes, and we treated them as the gullible and skeptic groups described before, assigning them different values of credibility. Specifically, we used a modified version of the original network: we mapped the directed graph to an undirected one, we selected the largest connected component, lowering the number of vertices from 1490 to 1222, and finally we removed all multi-edges and loops, lowering the number of edges from 16725 to 16714.
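The preprocessing just described can be reproduced with networkx; this is a sketch of the pipeline (the exact order of operations used by the authors is an assumption, but the end result, a simple undirected largest connected component, matches the description).

```python
import networkx as nx

def preprocess_polblogs(G_directed):
    """Map a directed (multi)graph to a simple undirected graph and
    keep only its largest connected component, as described for
    POLBLOGS (1490 -> 1222 vertices after these steps)."""
    # Directed multigraph -> simple undirected graph; the Graph
    # constructor also collapses parallel edges into single edges.
    G = nx.Graph(G_directed.to_undirected())
    # Drop self-loops.
    G.remove_edges_from(list(nx.selfloop_edges(G)))
    # Keep only the largest connected component.
    largest = max(nx.connected_components(G), key=len)
    return G.subgraph(largest).copy()
```

The same helper works for any directed network with parallel edges and self-loops, not only POLBLOGS.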
Synthetic Schelling networks
Finally, we considered the Schelling segregation model as a simple representation of a segregated urban environment (Schelling 2006). In this model two groups of agents are randomly placed on a grid of size S. The number of agents N is obtained from a density D as N=D∗S2. A parameter P denotes the preference, i.e., the desired fraction of neighbors of the same type for all the agents. This preference can also be seen as an inverse measure of tolerance (a lower preference corresponds to accepting a higher number of neighbors of a different type). Clearly, in a random configuration there will be some unsatisfied agents: at each time step they move to an empty cell. Running the simulations, different configurations can be obtained: when an equilibrium is reached (all agents are satisfied), the network turns out to be segregated into small communities of the same group (Fig. 2c-d).
This class of topologies is interesting because the spatial segregation arises from the local effect of homophily based on more ‘immutable’ characteristics of the individual (i.e., ethnicity, religion, language, and so on): urban networks can be shaped very differently with respect to an (online) social network. Indeed, even for a low value of P we can obtain these segregated configurations (see Fig. 2c-d). Here we ran the Schelling model with S=35 so that, varying D in [0.7,0.9], we have N≈1000. We take the final configuration of the model as the starting one for our simulations: the two groups represent the gullible and skeptic agents, and they are assigned different values of α.
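The Schelling runs described above can be sketched as follows. This is a simplified variant under stated assumptions: a toroidal grid (wraparound Moore neighborhood) and one unsatisfied agent moved per iteration, which are implementation choices not specified in the text.

```python
import random

def schelling(S=35, D=0.8, P=0.3, n_iter=10_000, seed=0):
    """Minimal Schelling segregation sketch on an S x S toroidal grid.

    Two groups of agents are placed with density D; an agent is
    unsatisfied when the fraction of like-typed agents among its
    occupied Moore neighbors is below the preference P, in which case
    it moves to a random empty cell. Returns the grid as a dict
    mapping (i, j) -> group (0 or 1) or None (empty cell).
    """
    rng = random.Random(seed)
    cells = [(i, j) for i in range(S) for j in range(S)]
    rng.shuffle(cells)
    n_agents = int(D * S * S)
    grid = {c: None for c in cells}
    for k, c in enumerate(cells[:n_agents]):
        grid[c] = k % 2  # split agents evenly into the two groups

    def neighbor_types(i, j):
        return [grid[(i + di) % S, (j + dj) % S]
                for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if (di, dj) != (0, 0)]

    def unsatisfied(c):
        occupied = [t for t in neighbor_types(*c) if t is not None]
        if not occupied:
            return False
        return sum(t == grid[c] for t in occupied) / len(occupied) < P

    for _ in range(n_iter):
        unhappy = [c for c in cells if grid[c] is not None and unsatisfied(c)]
        if not unhappy:
            break  # equilibrium: every agent is satisfied
        empty = [c for c in cells if grid[c] is None]
        mover = rng.choice(unhappy)
        target = rng.choice(empty)
        grid[target], grid[mover] = grid[mover], None
    return grid
```

The final grid, taken at (or near) equilibrium, is the kind of configuration used as the starting topology for the spreading simulations, with the two groups playing the roles of gullible and skeptic agents.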
In our model agents can become Believers or Fact-Checkers only if they are infected by their neighbors, so we start our simulations with a population of Susceptible agents and some Believer and Fact-Checker seeders (B0=F0=0.1∗N). To understand the behavior of the model in different configurations we performed extensive numerical simulations, fixing for simplicity the spreading rate β=0.5 and the forgetting probability pf=0.1.
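The seeding just described can be sketched as a small helper (the choice of drawing disjoint random seed sets is an assumption; the text only fixes their sizes B0 = F0 = 0.1·N).

```python
import random

def init_states(nodes, frac_seed=0.1, seed=0):
    """Start with an all-Susceptible population, then pick
    B0 = F0 = frac_seed * N disjoint Believer and Fact-Checker
    seeds uniformly at random."""
    rng = random.Random(seed)
    nodes = list(nodes)
    n_seed = int(frac_seed * len(nodes))
    seeds = rng.sample(nodes, 2 * n_seed)  # disjoint B and F seed sets
    state = {v: "S" for v in nodes}
    for v in seeds[:n_seed]:
        state[v] = "B"
    for v in seeds[n_seed:]:
        state[v] = "F"
    return state
```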
Scale-free networks without gullible communities
Scale-free networks with gullible communities
Running the simulation without debunking strategies (see Fig. 5a), the misinformation can take over the whole network: even a highly segregated community (half of the entire population) formed by skeptic people with a low tendency to believe a hoax is not enough to limit the urban legend spreading. We then fix, as in the previous case, some skeptic nodes as eternal Fact Checkers and try three different strategies to choose them: randomly, among the highest-degree nodes, or among the nodes on the frontier. As before, in all three cases we set for simplicity the number of eternal Fact Checkers equal to F0; in the frontier case, if the network is highly segregated and all the possible frontier nodes are saturated, we choose the remaining eternal Fact Checkers at random within the skeptic community.
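The three targeting strategies can be sketched as follows. The definition of the frontier as the set of skeptic nodes with at least one neighbor outside the skeptic community, and the fallback to random skeptic nodes when the frontier is saturated, follow the description above; function and parameter names are our own.

```python
import random

import networkx as nx

def eternal_fact_checkers(G, skeptic, n, strategy="hubs", seed=0):
    """Pick n skeptic nodes to fix as eternal Fact Checkers.

    strategy: 'random' (uniform over skeptic nodes), 'hubs'
    (highest-degree skeptic nodes), or 'frontier' (skeptic nodes with
    at least one neighbor outside the skeptic community; if the
    frontier has fewer than n nodes, the remainder is drawn at random
    from the other skeptic nodes, as described in the text).
    """
    rng = random.Random(seed)
    skeptic = set(skeptic)
    if strategy == "random":
        return set(rng.sample(sorted(skeptic), n))
    if strategy == "hubs":
        return set(sorted(skeptic, key=G.degree, reverse=True)[:n])
    if strategy == "frontier":
        frontier = {v for v in skeptic
                    if any(u not in skeptic for u in G[v])}
        chosen = set(list(frontier)[:n])
        if len(chosen) < n:  # frontier saturated: fill up at random
            rest = sorted(skeptic - chosen)
            chosen |= set(rng.sample(rest, n - len(chosen)))
        return chosen
    raise ValueError(f"unknown strategy: {strategy}")
```

The returned set can then be held fixed in the Fact-Checker state for the whole simulation, which is all that "eternal" means in this framework.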
In the first case, setting the eternal Fact Checkers at random (see Fig. 5b), we can see that the misinformation is partially limited, but it still reaches the skeptic community and stays endemic at the equilibrium. The second strategy, choosing the hubs (see Fig. 5c), has indeed an important effect, limiting the global spreading of the hoax (which is now basically confined to the gullible community), and it guarantees a constantly high number of Fact Checkers in the skeptic community (with a slightly better result for the real network, which may depend on the segregation level, as we will see next). Finally, the third case, locating the eternal debunkers on the frontier between the communities, is very interesting, because two events can occur. If the frontier is totally covered by eternal Fact Checkers, trivially we have the best possible result in this framework: the misinformation is totally confined in the gullible community and the skeptic one is totally protected by its “watchers” (look at the first time iterations of the first-column plot in Fig. 5d). But, even if there is only one possible “door”, at some point the Believer agents can invade the skeptic community, and at the equilibrium we have a number of endemic Fact Checkers very similar to the first case (see the first-column plot in Fig. 5d when t≈150). We would like to remark that here we are considering a toy network in a borderline case of two more or less connected communities: the nodes on the frontier then represent the bridges of our network and, indeed, they exhibit high values of betweenness centrality on average, especially for higher segregation.
Therefore the most powerful strategy would be the third, but only if it is possible to cover the entire frontier with eternal Fact Checkers. However, since we are exploring possible solutions to limit misinformation in the real world, where new links form continuously and keeping a community totally protected is not achievable, we can conclude that the best strategy among the proposed ones is the second: fixing eternal Fact Checkers among the hubs of the network.
Real-world segregated networks
Schelling model networks
In this section we consider another type of segregated network: the grid configuration of the Schelling segregation model after equilibrium is reached (see Sec. 2). This agent-based model showed that segregation can arise even in very tolerant contexts and has been used, for instance, to study residential segregation of ethnic groups: empirical evidence supporting Schelling-like patterns was observed between the Jewish and Arab communities in Israel (Hatna and Benenson 2012). The Schelling grids at equilibrium provide us with a framework to test the hoax spreading model and its debunking strategies in a segregated urban environment where the topology of the network is inherently shaped by social and human attributes that have historically led groups to separate and isolate (ethnicity, religion, gender, language, etc.).
Summarizing, we focused on the worst scenario provided by a misinformation spreading model based on epidemics, in which agents can be infected by an urban legend or by its debunking, then forget their belief about it and become infected again; our pessimistic assumption is that once agents have opted to become Believers, they will not verify the news anymore, keeping their belief (until they forget their position). We defined this as the worst possible scenario because the fact-checking can only be broadcast through neighbor contagion, meaning that debunking platforms and activities could appear useless and inefficient. Indeed, not surprisingly, under these assumptions our simulations show that the hoax easily becomes endemic and the debunking disappears. For the goal of limiting misinformation, this is a quite negative result, reflected in other relevant studies showing that fact-checking can be ineffective and sometimes counterproductive (Butler et al. 2011; Nyhan and Reifler 2010; Lewandowsky et al. 2012), while hoaxes proliferate, creating highly polarized communities in communication networks (Del Vicario et al. 2016; Bessi et al. 2015).
Nevertheless, keeping this pessimistic scenario, we tested some fact-checking strategies that involve the introduction of eternal fact-checkers, agents that support the debunking and never forget their belief: the location of these agents plays a crucial role in shaping the global diffusion of the urban legend and its debunking at equilibrium.
First, our simulation results on scale-free networks show that fixing the highest-degree nodes as the eternal fact-checkers is the most successful strategy in limiting the hoax spreading, while choosing them randomly has a weaker effect (even if the debunking survives at equilibrium, trivially in the immediate neighborhood of the eternal fact-checkers).
In the first case, scale-free networks (synthetic and real) formed by two more or less segregated communities, the winning strategy is (again) to fix the skeptic hubs as eternal debunkers. This is more powerful than fixing them at random or on the frontier; the latter would be the most powerful strategy only when the frontier is totally covered by eternal fact-checkers, but this is clearly not affordable, since real networks are dynamic. Indeed, in this case our simulations highlight that eventually the hoax finds a way to “dig through the wall” and spread in the other community, becoming endemic there even if the agents in this group are more skeptical, i.e., less likely to believe the urban legend. The frontier approach then has the same outcome as the random one, both in synthetic and real segregated networks. Moreover, we find that in this framework of the model (when there is no verifying activity) the segregation of the network can restrain the misinformation spread, because it prevents the hoax from spreading into the skeptic community. For a comparison with the framework in which agents can verify the news (so that some Believers turn into Fact Checkers), see Tambuscio et al. (2015; 2018).
In the second case, the network is obtained from a realization of the Schelling model, i.e., it is a grid where every node has a low degree (k≤8), so we cannot target hubs. Nevertheless, fixing some eternal fact-checkers (at random or on the frontiers between the groups) works just as well in limiting the legend spreading.
To draw a conclusion from our experimental settings, our what-if analysis shows evidence that, even in a very pessimistic scenario where no one verifies the news, some debunking strategies can be applied with partial success in limiting the misinformation spread, especially by exploiting the presence of more skeptical agents in the network. Conversely, a censorship action on the nodes that broadcast hoaxes might not be helpful, since new nodes can easily replace the silenced ones. Therefore, our results can surely be helpful in developing new policies to build fact-checking platforms and to foster their usage.
Misinformation is surely one of the most dangerous risks of our hyper-connected society, and some proposed solutions involve the creation of account blacklists or the development of tools to give less visibility to specific items labeled as fake news. Interesting questions thus arise (John Borthwick 2016): how can we legislate without limiting freedom of speech, and which authority should be entrusted with eventual law-making for the Internet? With this intent many fact-checking platforms have been proposed1.
How can these projects become more effective? In this work we considered a simplified version of an epidemics-based model where misinformation spreading is described only as a competition between an urban legend and its debunking. The fact-checking activity of the debunkers has frequently been labeled as useless and counterproductive because of psychological and social factors. We therefore focused on the worst-case scenario: agents cannot verify the news, and the debunking can only spread at the neighborhood level, influencing agents that have not yet taken a position for or against a given fake news item. We showed that, in different network topologies, the strategy of fixing the belief of a portion of the Fact-Checkers can indeed limit the misinformation spreading, even if the location of these agents has a big influence on the success of the strategy. This could mean that, even if the debunking services provided by mainstream media or online platforms are not much visited, they are still useful to restrain fake news diffusion, especially if their usage is strategically coordinated by a skeptic community.
In the future we would like to collect data to better validate our model, by developing a platform in which users can express their belief towards a news item and some agents can be activated as eternal Fact-Checkers in strategic locations of the network. Moreover, on the theoretical side, we would like to explore what happens on networks made of n>2 communities with different propensities to believe.
We think that our findings, based on a what-if analysis that helped us to study a domain where we do not have enough data, can help to shed light on the complex phenomenon of misinformation spreading. Specifically, they can suggest new debunking policies to empower the existing fact-checking platforms, or new social experiments in real contexts to test the proposed strategies and the role of segregation.
Authors are grateful to Filippo Menczer, Alessandro Flammini, and Giovanni Luca Ciampaglia for their inspiration and useful suggestions, to Emilio Ferrara, Bruno Gonçalves, and Diego F. Olivera, for their insights and references at the early stages of this work, and to Alfonso Semeraro for his practical observations that made our goal clearer. We would like also to thank the anonymous reviewers for their careful reading of our manuscript and their many insightful comments and suggestions.
Main idea and conceptualization: GR, MT; Networks generation, analysis and simulations: MT, GR; Formal analysis: MT, GR; Methodology: GR, MT; Writing: MT, GR. Both authors read and approved the final manuscript.
The authors acknowledge support from Intesa Sanpaolo Innovation Center. The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Additionally, authors have been partially supported by the project ‘Analisi di Reti Complesse e di Sistemi Socio-Tecnologici’, funded by University of Turin. Any opinions, findings and conclusions or recommendations expressed in this manuscript are those of the author(s) and do not necessarily reflect the views of Intesa Sanpaolo Innovation Center and University of Turin.
The authors declare that they have no competing interests.
- Allport, GW, Postman L (1947) The Psychology of Rumor. Henry Holt, Oxford, England.
- Bakshy, E, Rosenn I, Marlow C, Adamic L (2012) The role of social networks in information diffusion In: Proceedings of the 21st International Conference on World Wide Web, 519–528. ACM, New York.
- Bessi, A, Petroni F, Del Vicario M, Zollo F, Anagnostopoulos A, Scala A, Caldarelli G, Quattrociocchi W (2015) Viral misinformation: The role of homophily and polarization In: Proceedings of the 24th International Conference on World Wide Web, 355–356. ACM, New York.
- Bressan, M, Leucci S, Panconesi A, Raghavan P, Terolli E (2016) The limits of popularity-based recommendations, and the role of social ties In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 745–754. ACM.
- Conover, M, Ratkiewicz J, Francisco M, Gonçalves B, Flammini A, Menczer F (2011) Political polarization on Twitter In: Proc. 5th International AAAI Conference on Weblogs and Social Media (ICWSM). AAAI, Barcelona.
- de Arruda, GF, Rodrigues FA, Rodríguez PM, Cozzo E, Moreno Y (2016) Unifying Markov chain approach for disease and rumor spreading in complex networks. arXiv preprint arXiv:1609.00682.
- DeVito, MA (2017) From editors to algorithms: A values-based approach to understanding story selection in the Facebook news feed. Digit J 5(6):753–773.
- Ghosh, R, Lerman K (2010) Predicting influential users in online social networks In: SNA-KDD: Proceedings of KDD Workshop on Social Network Analysis. ACM, New York.
- Jin, F, Dougherty E, Saraf P, Cao Y, Ramakrishnan N (2013) Epidemiological modeling of news and rumors on Twitter In: Proceedings of the 7th Workshop on Social Network Mining and Analysis, 8. ACM, New York.
- John Borthwick, JJ (2016) A Call for Cooperation Against Fake News. https://medium.com/whither-news/a-call-for-cooperation-against-fake-news-d7d94bb6e0d4.
- Massey, DS, Denton NA (1993) American Apartheid: Segregation and the Making of the Underclass. Harvard University Press, Cambridge.
- Oka, M, Wong DW (2015) Spatializing segregation measures: An approach to better depict social relationships. Cityscape 17(1):97–114.
- Pariser, E (2011) The Filter Bubble: What the Internet Is Hiding from You. Penguin, UK.
- Romero, DM, Meeder B, Kleinberg J (2011) Differences in the mechanics of information diffusion across topics: idioms, political hashtags, and complex contagion on Twitter In: Proceedings of the 20th International Conference on World Wide Web, 695–704. ACM.
- Rosnow, RL, Fine GA (1976) Rumor and Gossip: The Social Psychology of Hearsay. Elsevier, New York.
- Rossi, WS, Polderman JW, Frasca P (2018) The closed loop between opinion formation and personalised recommendations. arXiv preprint arXiv:1809.04644.
- Schelling, TC (2006) Micromotives and Macrobehavior. WW Norton & Company, New York.
- Silverman, C (2015) Lies, Damn Lies and Viral Content. Tow Center for Digital Journalism, Columbia University, New York. https://doi.org/10.7916/D8Q81RHH.
- Tambuscio, M, Ruffo G, Flammini A, Menczer F (2015) Fact-checking effect on viral hoaxes: A model of misinformation spread in social networks In: Proceedings of the 24th International Conference on World Wide Web Companion, 977–982. International World Wide Web Conferences Steering Committee, New York.
- Weng, L, Ratkiewicz J, Perra N, Gonçalves B, Castillo C, Bonchi F, Schifanella R, Menczer F, Flammini A (2013) The role of information diffusion in the evolution of social networks In: Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 356–364. ACM, New York.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.