The Digital Misinformation Pipeline

Proposal for a Research Agenda
  • Giovanni Luca Ciampaglia
Chapter

Abstract

Digital misinformation poses a major risk to society and thrives on cognitive, social, and algorithmic biases. As social media become engulfed in rumors, hoaxes, and fake news, a “research pipeline” for the detection, monitoring, and checking of digital misinformation is needed. This chapter offers a brief introductory survey of the main research on these topics. The problem of digital misinformation does not lie squarely within a single discipline; rather, it draws on research from several areas. An integrated research agenda devoted to the implementation of these tools should therefore take a wide range of perspectives into account.

Keywords

Digital Misinformation · Echo Chambers · Fact Checking · Social Bots · Algorithmic Bias · Computational Social Science · Knowledge Networks · Social Media

Copyright information

© Springer Fachmedien Wiesbaden GmbH 2018

Authors and Affiliations

  1. Indiana University Bloomington, Bloomington, USA
