
Identity and Status: When Counterspeech Increases Hate Speech Reporting and Why

Abstract

Much has been written about how social media platforms enable the rise of networked activism. However, few studies have examined how these platforms’ low-information environments shape the interactions among social movement activists, their opponents, and the platforms themselves. Hate speech reporting is one understudied area where such interactions occur. This article fills this gap by examining to what extent, and how, the gender and popularity of counterspeech in comment sections influence social media users’ willingness to report hate speech against the #MeToo movement. Based on a survey experiment (n = 1250) conducted in South Korea, we find that YouTube users are more willing to report sexist hate speech when the counterspeech is delivered by a female rather than a male user. However, when the female user’s counterspeech received many upvotes, it was perceived as signaling her enhanced status, which decreased the intention to report hate speech, particularly among male users. No parallel patterns were found for other attitudes toward hate speech, counterspeech, YouTube, the #MeToo movement, or gender discrimination and hate speech legislation. These findings indicate that users report hate speech based not only on the potentially harmful content itself but also on their complex social interactions with other users and the platform.


Figures 1, 2, and 3 appear in the full text.

Notes

  1. Each social media platform provides its own guidelines for content moderation:

    • Twitter: https://help.twitter.com/en/rules-and-policies/violent-threats-glorification

    • Facebook: https://www.facebook.com/communitystandards/hate_speech

    • YouTube: https://support.google.com/youtube/answer/2801939?hl=en

  2. In our experiment, we use static image-based rather than video-enabled scenarios. Although using a static image makes the experience more artificial, it also makes using YouTube in the context of this survey similar to using other platforms such as Facebook and Twitter. In addition, YouTube video clips and associated comment sections are highly distracting. Using static images helps respondents continue paying attention to the survey.

  3. To check the manipulation of the replier’s gender, we ask, “What is the gender of the person who wrote the reply?” Among respondents assigned to the female condition, 83.6% report that the author was female, whereas only 24.8% of those assigned to the male condition do. Overall, respondents are less likely to identify the counterspeech author as male (44.4% in the male condition, 1.4% in the female condition) than as female. To check the upvote manipulation, we ask, “Do you agree with the following statement?: This reply received a large number of ‘upvotes’.” (1 = Strongly disagree, 5 = Strongly agree). The mean response is 3.892 in the many-upvote condition and 3.792 in the few-upvote condition. The difference is statistically significant at the 5% level in a one-tailed t-test (t = 1.75, p = 0.04).
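The one-tailed comparison of means described in this note can be sketched as follows. The data here are simulated (the real responses are in the authors’ replication repository), and the group sizes and standard deviations are illustrative assumptions, not figures from the paper:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative 5-point Likert responses; the means mimic the reported
# 3.892 (many upvotes) vs. 3.792 (few upvotes).
many_upvotes = np.clip(rng.normal(3.89, 0.9, 500).round(), 1, 5)
few_upvotes = np.clip(rng.normal(3.79, 0.9, 500).round(), 1, 5)

# One-tailed Welch t-test: H1 is that the many-upvote condition yields
# higher agreement with "this reply received many upvotes".
t_stat, p_one_tailed = stats.ttest_ind(
    many_upvotes, few_upvotes, equal_var=False, alternative="greater"
)
print(f"t = {t_stat:.2f}, one-tailed p = {p_one_tailed:.3f}")
```

With samples of this size, a mean difference of about 0.1 point on a 5-point scale sits near the conventional 5% one-tailed threshold, which matches the borderline t = 1.75, p = 0.04 result reported above.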

  4. We divided respondents into four age groups: (1) 20-29, (2) 30-39, (3) 40-49, and (4) 50-59. The four age groups are evenly distributed across the experimental groups, and each age group within an experimental group contains equal numbers of men and women.
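The balanced assignment described in this note amounts to blocked (stratified) randomization: within each age-by-gender cell, respondents are dealt out evenly across the experimental conditions. A minimal stdlib sketch (the condition labels and function names are ours, for illustration only):

```python
import random
from collections import defaultdict

# Hypothetical labels for the 2x2 (replier gender x upvote count) design.
CONDITIONS = ["female_many", "female_few", "male_many", "male_few"]

def blocked_assign(respondents, seed=42):
    """Assign each respondent to a condition, balancing within each
    age-group x gender stratum by cycling through a shuffled order."""
    rng = random.Random(seed)
    counters = defaultdict(int)   # how many assigned so far, per stratum
    orders = {}                   # shuffled condition order, per stratum
    assignment = {}
    for rid, age_group, gender in respondents:
        stratum = (age_group, gender)
        if stratum not in orders:
            order = CONDITIONS[:]
            rng.shuffle(order)
            orders[stratum] = order
        i = counters[stratum]
        assignment[rid] = orders[stratum][i % len(CONDITIONS)]
        counters[stratum] += 1
    return assignment

# Example: 16 respondents in one stratum -> exactly 4 per condition.
people = [(i, "20-29", "F") for i in range(16)]
counts = defaultdict(int)
for cond in blocked_assign(people).values():
    counts[cond] += 1
```

Cycling through a shuffled order within each stratum guarantees exact balance whenever the stratum size is a multiple of the number of conditions, which is what even distribution across age and gender cells requires.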

  5. We controlled for respondents’ gender, age, education level, household income, political ideology, party identification, and attitude toward the #MeToo movement. These control variables were measured before respondents were exposed to the treatment, except for education level and household income.
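A specification with these controls could be sketched as an ordinary least squares regression of reporting intention on a treatment indicator plus covariates. The sketch below uses simulated data and a reduced, hypothetical set of controls (variable names are ours, not the authors’), with a built-in treatment effect of 0.3 so the recovery can be checked:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1250  # matches the experiment's sample size

# Simulated stand-ins for the variables described in this note.
treat = rng.integers(0, 2, n)    # e.g., female-counterspeech condition
female = rng.integers(0, 2, n)   # respondent gender
age = rng.integers(20, 60, n)
report = 3 + 0.3 * treat + rng.normal(0, 1, n)  # outcome with true effect 0.3

# OLS via least squares: intercept, treatment, and controls.
X = np.column_stack([np.ones(n), treat, female, age])
beta, *_ = np.linalg.lstsq(X, report, rcond=None)
print(f"estimated treatment effect: {beta[1]:.2f}")  # typically near 0.3
```

With n = 1250 the standard error on the treatment coefficient is small (roughly 0.06 here), so the estimate lands close to the true 0.3 effect built into the simulation.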

  6. The question for each variable is as follows:

    1) Attitude toward the platform (i.e., YouTube): “What do you think about YouTube?”

    2) Attitude on user moderation: “YouTube users can keep the comments section safe for everyone.”

    3) Attitude toward the platform’s self-regulation: “YouTube should regulate users’ hate speech by itself.”

    4) Attitude toward the gender discrimination bill: (asked after introducing the Belgian gender discrimination law passed in March 2018) “Do you agree that the above bill is also necessary for South Korea?”

    5) Attitude toward the social media regulation bill: (asked after introducing the 2019 French law that makes online social media platforms responsible for content moderation) “Do you agree that the above bill is also necessary for South Korea?”


Acknowledgements

We are grateful to Jiyong Eom, Youngdeok Hwang, Miyeon Jung, Chihong Jeon, Euro Bae, Jay Winston, and participants at the Bright Internet Global Summit (BIGS) for sharing their ideas and encouragement. We also thank the editor and two anonymous reviewers for their valuable feedback on an early draft.

Author information


Corresponding author

Correspondence to Daegon Cho.

Ethics declarations

Conflict of Interest

This study received no financial funding. The authors declare that they have no conflict of interest.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Replication data and code are available at https://github.com/jaeyk/status_identity_hate_speech_reporting.


About this article


Cite this article

Kim, J.Y., Sim, J. & Cho, D. Identity and Status: When Counterspeech Increases Hate Speech Reporting and Why. Inf Syst Front (2022). https://doi.org/10.1007/s10796-021-10229-2


Keywords

  • Online hate speech
  • Social media
  • Social movements
  • Gender bias