Abstract
Digital deception in online social networks, particularly the viral spread of misinformation and disinformation, is a critical contemporary concern. Online social networks are used to spread digital deception within local, national, and global communities, which has led to a renewed focus on means of detection and defense. The audience (i.e., social media users) forms the first line of defense in this process, and it is of utmost importance to understand the who, how, and what of audience engagement. This will shed light on how to effectively use this wisdom-of-the-audience to provide an initial defense. In this chapter, we present the key findings of recent studies in this area, exploring user engagement with trustworthy information, misinformation, and disinformation around three key research questions: (1) Who engages with mis- and dis-information? (2) How quickly does the audience engage with mis- and dis-information? (3) What feedback do users provide? These patterns and insights can be leveraged to develop better strategies to improve media literacy and informed engagement with crowd-sourced information like social news.
Notes
1. News sources collected from EUvsDisinfo.eu were identified as spreaders of disinformation by the European Union's East Strategic Communications Task Force.
2. Example resources used by Volkova et al. [43] to compile deceptive news sources: http://www.fakenewswatch.com/, http://www.propornot.com/p/the-list.html.
3. The area under the ROC curve (AUC) for 10-fold cross-validation experiments was 0.89 for gender, 0.72 for age, 0.72 for income, and 0.76 for education.
References
Bikhchandani, S., Hirshleifer, D., Welch, I.: A theory of fads, fashion, custom, and cultural change as informational cascades. J. Polit. Econ. 100(5), 992–1026 (1992). https://doi.org/10.2307/2138632, http://www.jstor.org/stable/2138632
Davis, C.A., Varol, O., Ferrara, E., Flammini, A., Menczer, F.: BotOrNot: a system to evaluate social bots. In: Proceedings of the 25th International Conference Companion on World Wide Web, pp. 273–274. International World Wide Web Conferences Steering Committee (2016)
Ferrara, E.: Contagion dynamics of extremist propaganda in social networks. Inf. Sci. 418, 1–12 (2017)
Ferrara, E.: Disinformation and social bot operations in the run up to the 2017 French presidential election. First Monday 22(8) (2017). https://doi.org/10.5210/fm.v22i8.8005, http://journals.uic.edu/ojs/index.php/fm/article/view/8005
Gabielkov, M., Ramachandran, A., Chaintreau, A., Legout, A.: Social clicks: what and who gets read on Twitter? ACM SIGMETRICS Perform. Eval. Rev. 44(1), 179–192 (2016)
Glenski, M., Pennycuff, C., Weninger, T.: Consumers and curators: browsing and voting patterns on Reddit. IEEE Trans. Comput. Soc. Syst. 4(4), 196–206 (2017)
Glenski, M., Weninger, T.: Predicting user-interactions on Reddit. In: Proceedings of the IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM). IEEE/ACM (2017)
Glenski, M., Weninger, T.: Rating effects on social news posts and comments. ACM Trans. Intell. Syst. Technol. (TIST) 8(6), 1–9 (2017)
Glenski, M., Weninger, T., Volkova, S.: How humans versus bots react to deceptive and trusted news sources: a case study of active users. In: Proceedings of the IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM). IEEE/ACM (2018)
Glenski, M., Weninger, T., Volkova, S.: Identifying and understanding user reactions to deceptive and trusted social news sources. In: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), vol. 2, pp. 176–181 (2018)
Glenski, M., Weninger, T., Volkova, S.: Propagation from deceptive news sources: who shares, how much, how evenly, and how quickly? IEEE Trans. Comput. Soc. Syst. 5(4), 1071–1082 (2018)
Goertzel, T.: Belief in conspiracy theories. Polit. Psychol. 15(4), 731–742 (1994)
Gottfried, J., Shearer, E.: News use across social media platforms 2016. Pew Research Center (2016). http://www.journalism.org/2016/05/26/news-use-across-social-media-platforms-2016/
Hasson, U., Simmons, J.P., Todorov, A.: Believe it or not: on the possibility of suspending belief. Psychol. Sci. 16(7), 566–571 (2005)
Hirshleifer, D.A.: The blind leading the blind: social influence, fads and informational cascades. In: Ierulli, K., Tommasi, M. (eds.) The New Economics of Human Behaviour, chap 12, pp. 188–215. Cambridge University Press, Cambridge (1995)
Jin, F., Dougherty, E., Saraf, P., Cao, Y., Ramakrishnan, N.: Epidemiological modeling of news and rumors on Twitter. In: Proceedings of the Seventh Workshop on Social Network Mining and Analysis, p. 8. ACM (2013)
Kakwani, N.C., Podder, N.: On the estimation of Lorenz curves from grouped observations. Int. Econ. Rev. 14(2), 278–292 (1973)
Karadzhov, G., Gencheva, P., Nakov, P., Koychev, I.: We built a fake news & click-bait filter: what happened next will blow your mind! In: Proceedings of the International Conference on Recent Advances in Natural Language Processing (2017)
Karduni, A., Cho, I., Wesslen, R., Santhanam, S., Volkova, S., Arendt, D.L., Shaikh, S., Dou, W.: Vulnerable to misinformation? Verifi! In: Proceedings of the 24th International Conference on Intelligent User Interfaces, pp. 312–323. ACM (2019)
Karduni, A., Wesslen, R., Santhanam, S., Cho, I., Volkova, S., Arendt, D., Shaikh, S., Dou, W.: Can you verifi this? studying uncertainty and decision-making about misinformation using visual analytics. In: Twelfth International AAAI Conference on Web and Social Media (2018)
Kumar, S., Cheng, J., Leskovec, J., Subrahmanian, V.: An army of me: sockpuppets in online discussion communities. In: Proceedings of the 26th International Conference on World Wide Web, pp. 857–866. International World Wide Web Conferences Steering Committee (2017)
Kumar, S., Shah, N.: False information on web and social media: a survey. arXiv preprint arXiv:1804.08559 (2018)
Kumar, S., West, R., Leskovec, J.: Disinformation on the web: impact, characteristics, and detection of wikipedia hoaxes. In: Proceedings of the 25th International Conference on World Wide Web, pp. 591–602. International World Wide Web Conferences Steering Committee (2016)
Kwon, S., Cha, M., Jung, K.: Rumor detection over varying time windows. PLoS One 12(1), e0168344 (2017)
Kwon, S., Cha, M., Jung, K., Chen, W., Wang, Y.: Prominent features of rumor propagation in online social media. In: Proceedings of the 13th International Conference on Data Mining (ICDM), pp. 1103–1108. IEEE (2013)
Lewandowsky, S., Oberauer, K., Gignac, G.E.: NASA faked the moon landing – therefore, (climate) science is a hoax: an anatomy of the motivated rejection of science. Psychol. Sci. 24(5), 622–633 (2013)
Lorenz, J., Rauhut, H., Schweitzer, F., Helbing, D.: How social influence can undermine the wisdom of crowd effect. Proc. Natl. Acad. Sci. 108(22), 9020–9025 (2011)
Matsa, K.E., Shearer, E.: News use across social media platforms 2018. Pew Research Center (2018). http://www.journalism.org/2018/09/10/news-use-across-social-media-platforms-2018/
Mitra, T., Wright, G.P., Gilbert, E.: A parsimonious language model of social media credibility across disparate events. In: Proceedings of the ACM Conference on Computer Supported Cooperative Work and Social Computing (CSCW), pp. 126–145. ACM (2017)
Muchnik, L., Aral, S., Taylor, S.J.: Social influence bias: a randomized experiment. Science 341(6146), 647–651 (2013)
Qazvinian, V., Rosengren, E., Radev, D.R., Mei, Q.: Rumor has it: identifying misinformation in microblogs. In: Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1589–1599. Association for Computational Linguistics (2011)
Rashkin, H., Choi, E., Jang, J.Y., Volkova, S., Choi, Y.: Truth of varying shades: analyzing language in fake news and political fact-checking. In: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 2921–2927 (2017). https://aclanthology.info/papers/D17-1317/d17-1317
Rath, B., Gao, W., Ma, J., Srivastava, J.: From retweet to believability: utilizing trust to identify rumor spreaders on Twitter. In: Proceedings of the IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM) (2017)
Rubin, V.L., Conroy, N.J., Chen, Y., Cornwell, S.: Fake news or truth? Using satirical cues to detect potentially misleading news. In: Proceedings of NAACL-HLT, pp. 7–17 (2016)
Schelling, T.C.: Micromotives and macrobehavior. WW Norton & Company, New York (2006)
Shao, C., Ciampaglia, G.L., Varol, O., Flammini, A., Menczer, F.: The spread of fake news by social bots. arXiv preprint arXiv:1707.07592 (2017)
Starbird, K.: Examining the alternative media ecosystem through the production of alternative narratives of mass shooting events on twitter. In: Proceedings of the 11th International AAAI Conference on Web and Social Media (ICWSM). AAAI (2017)
Starbird, K., Maddock, J., Orand, M., Achterman, P., Mason, R.M.: Rumors, false flags, and digital vigilantes: misinformation on Twitter after the 2013 Boston Marathon bombing. iConference 2014 Proceedings (2014)
Street, C.N., Masip, J.: The source of the truth bias: Heuristic processing? Scand. J. Psychol. 56(3), 254–263 (2015)
Takahashi, B., Tandoc, E.C., Carmichael, C.: Communicating on Twitter during a disaster: an analysis of tweets during Typhoon Haiyan in the Philippines. Comput. Human Behav. 50, 392–398 (2015)
Tambuscio, M., Ruffo, G., Flammini, A., Menczer, F.: Fact-checking effect on viral hoaxes: a model of misinformation spread in social networks. In: Proceedings of the 24th International Conference on World Wide Web, pp. 977–982. ACM (2015)
Volkova, S., Bachrach, Y.: Inferring perceived demographics from user emotional tone and user-environment emotional contrast. In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), vol. 1 (2016)
Volkova, S., Shaffer, K., Jang, J.Y., Hodas, N.: Separating facts from fiction: linguistic models to classify suspicious and trusted news posts on Twitter. In: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), vol. 2, pp. 647–653 (2017)
Vosoughi, S., Roy, D., Aral, S.: The spread of true and false news online. Science 359(6380), 1146–1151 (2018). https://doi.org/10.1126/science.aap9559
Wang, W.Y.: “Liar, liar pants on fire”: a new benchmark dataset for fake news detection (2017)
Weninger, T., Johnston, T.J., Glenski, M.: Random voting effects in social-digital spaces: a case study of reddit post submissions. In: Proceedings of the 26th ACM Conference on Hypertext & Social Media, pp. 293–297. HT ’15, ACM, New York (2015). https://doi.org/10.1145/2700171.2791054
Wu, K., Yang, S., Zhu, K.Q.: False rumors detection on Sina Weibo by propagation structures. In: Proceedings of the 31st International Conference on Data Engineering (ICDE), pp. 651–662. IEEE (2015)
Wu, L., Morstatter, F., Hu, X., Liu, H.: Mining misinformation in social media. In: Big Data in Complex and Social Networks, CRC Press, pp. 123–152 (2016)
Zhang, A., Culbertson, B., Paritosh, P.: Characterizing online discussion using coarse discourse sequences. In: Proceedings of the 11th International AAAI Conference on Web and Social Media (ICWSM). AAAI (2017)
Acknowledgements
This work was supported in part by the Laboratory Directed Research and Development Program at Pacific Northwest National Laboratory, a multiprogram national laboratory operated by Battelle for the U.S. Department of Energy. This research was also supported by the Defense Advanced Research Projects Agency (DARPA), contract W911NF-17-C-0094. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA or the U.S. Government. This work has also been supported in part by an Adobe Faculty Research Award, Microsoft, IDEaS, and Georgia Institute of Technology.
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this chapter
Glenski, M., Volkova, S., Kumar, S. (2020). User Engagement with Digital Deception. In: Shu, K., Wang, S., Lee, D., Liu, H. (eds) Disinformation, Misinformation, and Fake News in Social Media. Lecture Notes in Social Networks. Springer, Cham. https://doi.org/10.1007/978-3-030-42699-6_3
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-42698-9
Online ISBN: 978-3-030-42699-6