Fighting fake news: a role for computational social science in the fight against digital misinformation
The massive, uncontrolled, and oftentimes systematic spread of inaccurate and misleading information on the Web and social media poses a major risk to society. Digital misinformation thrives on an assortment of cognitive, social, and algorithmic biases, and current countermeasures based on journalistic corrections do not seem to scale up. Computational social scientists are well positioned to play a twofold role in the fight against fake news: first, they could elucidate the fundamental mechanisms that make us vulnerable to misinformation online; second, they could devise effective strategies to counteract it.
Keywords: Digital misinformation · Algorithmic bias · Fact checking
Information is produced and consumed according to novel paradigms on the Web and social media. The ‘laws’ of these modern marketplaces of ideas are starting to emerge [1, 2], thanks in part to the intuition that the data we leave behind as we use technology platforms reveal much about ourselves and our true behavior. The transformative nature of this revolution offers the promise of a better understanding of both individual human behavior and collective social phenomena, but it also poses fundamental challenges to society at large. One such risk is the massive, uncontrolled, and oftentimes systematic spread of inaccurate and misleading information.
Misinformation spreads on social media under many guises. There are rumors, hoaxes, conspiracy theories, propaganda, and, of course, fake news. Regardless of the form, the repercussions of inaccurate or misleading information are stark and worrisome. Confidence in the safety of childhood vaccines has dropped significantly in recent years, fueled by misinformation on the topic, resulting in outbreaks of measles. When it comes to the media and information landscape, things are no better. According to recent surveys, even though an increasing majority of American adults (67% in 2017, up from 62% in 2016) get their news from social media platforms on a regular basis, the majority (63% of respondents) do not trust the news coming from social media. Even more disturbing, 64% of Americans say that fake news has left them with a great deal of confusion about current events, and 23% admit to passing on fake news stories to their social media contacts, either intentionally or unintentionally [6, 7, 8].
How to fight fake news? In this essay, I will sketch a few areas in which computational social scientists could play a role in the struggle against digital misinformation.
Social media balkanization?
When processing information, humans are subject to an assortment of socio-cognitive biases, including homophily, confirmation bias, conformity bias, the misinformation effect, and motivated reasoning. These biases are at work in any situation, not just online, but we are finally coming to the realization that the current structure of the Web and social media plays upon and reinforces them. At the beginning of the Web, some scholars hypothesized that the Internet could in principle ‘balkanize’ into communities of like-minded people espousing completely different sets of facts [14, 15, 16]. Over the years, various web technologies have been tested for signs of cyber-balkanization or bias, for example, the early Google search engine or recommender systems.
Could social media provide fertile ground for cyber-balkanization? With the rise of big data and personalization technologies, one concern is that algorithmic filters may relegate us into information ‘bubbles’ tailored to our own preferences. Filtering and ranking algorithms like the Facebook News Feed do indeed prioritize content based on engagement and popularity signals, and even though popularity may in some cases help quality content bubble up, it may not be enough given our limited attention, which may explain the virality of low-quality content [22, 23] and why group conversations often turn into cacophony.
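The tension between popularity-driven ranking and limited attention can be made concrete with a toy simulation. The sketch below is purely illustrative and is not the model analyzed in [22, 23]: the item pool size, attention window, and quality-proportional sharing rule are assumptions of mine. Items compete for reshares, but users only ever see the few most popular ones.

```python
import random

def simulate_feed(n_items=500, attention=10, rounds=2000, seed=2):
    """Toy model: users reshare from the top of a popularity-ranked feed.

    Each item has an intrinsic quality and a share count. A user sees only
    the `attention` most popular items and reshares one with probability
    proportional to its quality, so an early popularity lead can lock in
    regardless of quality (a rich-get-richer effect).
    """
    random.seed(seed)
    quality = [random.random() for _ in range(n_items)]
    shares = [1] * n_items
    for _ in range(rounds):
        # Rank the feed by current popularity; the user sees only the top.
        feed = sorted(range(n_items), key=lambda i: shares[i], reverse=True)
        visible = feed[:attention]
        # Within the visible slice, sharing is quality-proportional.
        chosen = random.choices(visible, weights=[quality[i] for i in visible])[0]
        shares[chosen] += 1
    return quality, shares
```

Because only the `attention` most popular items are ever visible, the globally highest-quality item is typically never even seen when the attention window is narrow relative to the item pool, so the most-shared item and the best item will often not coincide.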
One concern is that the combined effect of algorithmic bias and the homophilous structure of social networks may lead to an ‘echo chamber’ effect, in which existing beliefs are reinforced, thus lowering the barriers to manipulation by misinformation. There have been some attempts at measuring the degree of selective exposure on social media [25, 26], but as platforms continuously evolve, tune their algorithms, and grow in scope, the question is far from settled.
It is worth noting here that these questions have not been explored only from an empirical point of view. There is in fact a very rich tradition of computational work that seeks to explain the evolution of distinct cultural groups using computer simulation, dating back to the pioneering work of Robert Axelrod [27, 28, 29, 30]. Agent-based models can help us test the plausibility of competing hypotheses in a generative framework, but they often lack empirical validation. The computational social science community could contribute in a unique fashion to the debate on the cyber-balkanization of social media by bridging the divide between social simulation models and empirical observation.
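Axelrod's model of cultural dissemination is simple enough to sketch in a few lines. In the minimal version below (grid size, number of features, and traits per feature are arbitrary illustrative choices), each agent holds a vector of cultural features, interacts with a lattice neighbor with probability equal to their similarity, and copies one differing trait on interaction:

```python
import random

def axelrod_step(grid, size, features):
    """One interaction of Axelrod's model of cultural dissemination."""
    x, y = random.randrange(size), random.randrange(size)
    # Pick a random lattice neighbor (periodic boundaries).
    nx, ny = random.choice([((x + 1) % size, y), ((x - 1) % size, y),
                            (x, (y + 1) % size), (x, (y - 1) % size)])
    agent, neighbor = grid[(x, y)], grid[(nx, ny)]
    shared = sum(a == b for a, b in zip(agent, neighbor))
    # Interaction probability equals cultural similarity; on interaction
    # the agent adopts one feature on which the pair still differs.
    if 0 < shared < features and random.random() < shared / features:
        i = random.choice([i for i in range(features)
                           if agent[i] != neighbor[i]])
        agent[i] = neighbor[i]

def count_cultures(grid):
    """Number of distinct cultural profiles left; > 1 means fragmentation."""
    return len({tuple(v) for v in grid.values()})

def run(size=10, features=3, traits=4, steps=100_000, seed=42):
    random.seed(seed)
    grid = {(x, y): [random.randrange(traits) for _ in range(features)]
            for x in range(size) for y in range(size)}
    for _ in range(steps):
        axelrod_step(grid, size, features)
    return count_cultures(grid)
```

Depending on the parameters, the population either converges toward a global monoculture or freezes into distinct cultural regions; increasing `traits` makes agents more often completely dissimilar and the fragmented, ‘balkanized’ outcome more likely, which is the qualitative phenomenon at stake in this debate.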
A call to action for computational social scientists
What to do? Professional journalism, guided by ethical principles of trustworthiness and integrity, has for decades been the answer to episodes of rampant misinformation, like yellow journalism. At the moment, however, the fourth estate does not seem effective enough in the fight against digital misinformation. For example, analysis of Twitter conversations has shown that fact-checking lags behind misinformation both in overall reach and in sheer response time: there is a delay of approximately 13 hours between the consumption of fake news stories and that of their verifications. While this span may seem relatively short, social media have been shown to spread content orders of magnitude faster.
Perhaps even more troubling, doubts have been cast on the very effectiveness of factual corrections. For example, it has been observed that corrections may sometimes backfire, resulting in even more entrenched support of factually inaccurate claims. More recent evidence paints a more nuanced, and more optimistic, picture of fact-checking [36, 37, 38, 39], showing that the backfire effect is not as common as initially alleged. Still, to date, the debate on how to improve the efficacy of fact-checking is far from resolved [40, 41].
The debate on the efficacy of fact-checking is important also because there is growing interest in automating the various activities that revolve around fact-checking: news gathering, verification, and delivery of corrections [42, 43, 44, 45, 46]. These activities already capitalize on the growing number of tools, data sets, and platforms contributed by computer scientists to detect, define, model, and counteract the spread of misinformation [47, 48, 49, 50, 51, 52, 53, 54, 55, 56]. Without a clear understanding of what the most effective countermeasures are, and of who is best equipped to deliver them, these tools may never be brought to full fruition.
Computational social scientists could play a pivotal role in the collective effort to produce and disseminate accurate information at scale. A challenge for this type of research is how to test the efficacy of different types of corrections on large samples, possibly in the naturalistic settings of social media. Social bots, which so far have been responsible for spreading large amounts of misinformation [57, 58, 59, 60], could be employed to this end, since initial evidence shows that they can also be used to spread positive messages.
The fight against fake news has just started, and many questions are still open. Are there any successful examples that we should emulate? After all, accurate information is produced and disseminated on the Internet every day. Wikipedia comes to mind here, as perhaps one of the most successful communities of knowledge production. To be sure, Wikipedia is not immune to vandalism and inaccurate information, but the cases of intentional disinformation that have survived for long are surprisingly few. Wikipedia is often credited with being the go-to resource of accurate knowledge for millions of people worldwide. It is often said that Wikipedia only works in practice; in theory, it should never work. Computational social scientists have elucidated many of the reasons why the crowdsourcing model of Wikipedia works, and thus could effectively contribute to a better understanding of the problem of digital misinformation.
The author would like to thank Filippo Menczer and Alessandro Flammini for insightful conversations.
- 4.Wooley, M. (2015). Childhood vaccines. Presentation at the workshop on Trust and Confidence at the Intersections of the Life Sciences and Society, Washington D.C. http://nas-sites.org/publicinterfaces/files/2015/05/Woolley_PILS_VaccineSlides-3.pdf.
- 6.Barthel, M., Mitchell, A., & Holcomb, J. (2016). Many Americans believe fake news is sowing confusion. Online, Pew Research. http://www.journalism.org/2016/12/15/many-americans-believe-fake-news-is-sowing-confusion/.
- 7.Barthel, M., & Mitchell, A. (2017). Americans’ attitudes about the news media deeply divided along partisan lines. Online, Pew Research. http://www.journalism.org/2017/05/10/americans-attitudes-about-the-news-media-deeply-divided-along-partisan-lines/.
- 8.Gottfried, J., & Shearer, E. (2017). News use across social media platforms 2017. Online, Pew Research. http://www.journalism.org/2017/09/07/news-use-across-social-media-platforms-2017/.
- 11.Asch, S. E. (1961). Effects of group pressure upon the modification and distortion of judgements. In M. Henle (Ed.) Documents of Gestalt psychology (pp. 222–236). Oakland: University of California Press.
- 20.Pariser, E. (2011). The filter bubble: What the internet is hiding from you. UK: Penguin.
- 21.Nematzadeh, A., Ciampaglia, G. L., Menczer, F., & Flammini, A. (2017). How algorithmic popularity bias hinders or promotes quality. CoRR. arXiv:1707.00574
- 24.Nematzadeh, A., Ciampaglia, G. L., Ahn, Y. Y., & Flammini, A. (2016). Information overload in group communication: From conversation to cacophony in the twitch chat. CoRR. arXiv:1610.06497
- 33.Shao, C., Ciampaglia, G. L., Flammini, A., & Menczer, F. (2016). Hoaxy: A platform for tracking online misinformation. In Proceedings of the 25th International Conference Companion on World Wide Web, WWW ’16 Companion (pp. 745–750). Republic and Canton of Geneva: International World Wide Web Conferences Steering Committee. https://doi.org/10.1145/2872518.2890098
- 34.Sakaki, T., Okazaki, M., & Matsuo, Y. (2010). Earthquake shakes twitter users: Real-time event detection by social sensors. In Proceedings of the 19th International Conference on World Wide Web, WWW ’10 (pp. 851–860). New York: ACM. https://doi.org/10.1145/1772690.1772777
- 37.Wood, T., & Porter, E. (2016). The elusive backfire effect: Mass attitudes’ steadfast factual adherence. SSRN. https://ssrn.com/abstract=2819073. Accessed 26 Nov 2017.
- 38.Nyhan, B., Porter, E., Reifler, J., & Wood, T. (2017). Taking corrections literally but not seriously? the effects of information on factual beliefs and candidate favorability. SSRN. https://ssrn.com/abstract=2995128. Accessed 26 Nov 2017.
- 39.Vraga, E. K., & Bode, L. (2017). I do not believe you: how providing a source corrects health misperceptions across social media platforms. Information, Communication & Society, 1–17. https://doi.org/10.1080/1369118X.2017.1313883.
- 44.Shiralkar, P., Flammini, A., Menczer, F., & Ciampaglia, G. L. (2017). Finding streams in knowledge graphs to support fact checking. In Proceedings of the 2017 IEEE 17th International Conference on Data Mining, Extended Version. Piscataway, NJ: IEEE.
- 46.Hassan, N., Arslan, F., Li, C., & Tremayne, M. (2017). Toward automated fact-checking: Detecting check-worthy factual claims by claimbuster. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’17 (pp. 1803–1812). New York, NY: ACM. https://doi.org/10.1145/3097983.3098131
- 47.Ratkiewicz, J., Conover, M., Meiss, M., Gonçalves, B., Patil, S., Flammini, A., & Menczer, F. (2011). Truthy: Mapping the spread of astroturf in microblog streams. In Proceedings of the 20th International Conference Companion on World Wide Web, WWW ’11 (pp. 249–252). New York, NY: ACM. https://doi.org/10.1145/1963192.1963301
- 48.Ratkiewicz, J., Conover, M., Meiss, M., Gonçalves, B., Flammini, A., & Menczer, F. (2011). Detecting and tracking political abuse in social media. In Proc. International AAAI Conference on Web and Social Media (pp. 297–304). Palo Alto, CA: AAAI. https://www.aaai.org/ocs/index.php/ICWSM/ICWSM11/paper/view/2850.
- 49.Liu, X., Nourbakhsh, A., Li, Q., Fang, R., & Shah, S. (2015). Real-time rumor debunking on twitter. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, CIKM ’15 (pp. 1867–1870). New York, NY: ACM. https://doi.org/10.1145/2806416.2806651.
- 50.Metaxas, P. T., Finn, S., & Mustafaraj, E. (2015). Using twittertrails.com to investigate rumor propagation. In Proceedings of the 18th ACM Conference Companion on Computer Supported Cooperative Work & Social Computing, CSCW’15 Companion (pp. 69–72). New York, NY: ACM. https://doi.org/10.1145/2685553.2702691
- 51.Mitra, T., & Gilbert, E. (2015). Credbank: A large-scale social media corpus with associated credibility annotations. In Proc. International AAAI Conference on Web and Social Media (pp. 258–267). Palo Alto, CA: AAAI. https://www.aaai.org/ocs/index.php/ICWSM/ICWSM15/paper/view/10582.
- 52.Zubiaga, A., Liakata, M., Procter, R., Bontcheva, K., & Tolmie, P. (2015). Crowdsourcing the annotation of rumourous conversations in social media. In Proceedings of the 24th International Conference on World Wide Web, WWW ’15 Companion (pp. 347–353). New York, NY: ACM. https://doi.org/10.1145/2740908.2743052.
- 54.Declerck, T., Osenova, P., Georgiev, G., & Lendvai, P. (2016). Ontological modelling of rumors. In D. Trandabăț, D. Gîfu (Eds.) Linguistic Linked Open Data: 12th EUROLAN 2015 Summer School and RUMOUR 2015 Workshop, Sibiu, Romania, July 13–25, 2015, Revised Selected Papers (pp. 3–17). Berlin: Springer International Publishing. https://doi.org/10.1007/978-3-319-32942-0_1.
- 55.Sampson, J., Morstatter, F., Wu, L., & Liu, H. (2016). Leveraging the implicit structure within social media for emergent rumor detection. In Proceedings of the 25th ACM International on Conference on Information and Knowledge Management, CIKM ’16 (pp. 2377–2382). New York, NY: ACM. https://doi.org/10.1145/2983323.2983697.
- 56.Wu, L., Morstatter, F., Hu, X., & Liu, H. (2016). Mining misinformation in social media. In M. T. Thai, W. Wu, H. Xiong (Eds.) Big Data in Complex and Social Networks, Business & Economics (pp. 125–152). Boca Raton, FL: CRC Press.
- 57.Bessi, A., & Ferrara, E. (2016). Social bots distort the 2016 U.S. presidential election online discussion. First Monday, 21(11). https://doi.org/10.5210/fm.v21i11.7090. http://firstmonday.org/ojs/index.php/fm/article/view/7090.
- 59.Ferrara, E. (2017). Disinformation and social bot operations in the run up to the 2017 French presidential election. First Monday, 22(8). https://doi.org/10.5210/fm.v22i8.8005. http://firstmonday.org/ojs/index.php/fm/article/view/8005.
- 60.Varol, O., Ferrara, E., Davis, C. A., Menczer, F., & Flammini, A. (2017). Online human-bot interactions: Detection, estimation, and characterization. In Proc. International AAAI Conference on Web and Social Media (pp. 280–289). Palo Alto, CA: AAAI. https://aaai.org/ocs/index.php/ICWSM/ICWSM17/paper/view/15587
- 62.Kumar, S., West, R., & Leskovec, J. (2016). Disinformation on the web: Impact, characteristics, and detection of wikipedia hoaxes. In Proceedings of the 25th International Conference on World Wide Web, WWW ’16 (pp. 591–602). Republic and Canton of Geneva: International World Wide Web Conferences Steering Committee. https://doi.org/10.1145/2872427.2883085.
- 63.Singer, P., Lemmerich, F., West, R., Zia, L., Wulczyn, E., Strohmaier, M., & Leskovec, J. (2017). Why we read wikipedia. In Proceedings of the 26th International Conference on World Wide Web, WWW ’17 (pp. 1591–1600). Republic and Canton of Geneva: International World Wide Web Conferences Steering Committee. https://doi.org/10.1145/3038912.3052716.
- 64.Mesgari, M., Okoli, C., Mehdi, M., Nielsen, F. Å., & Lanamäki, A. (2015). The sum of all human knowledge: A systematic review of scholarly research on the content of wikipedia. Journal of the Association for Information Science and Technology, 66(2), 219–245. https://doi.org/10.1002/asi.23172.