Tweetment Effects on the Tweeted: Experimentally Reducing Racist Harassment

Abstract

I conduct an experiment that examines the impact of group norm promotion and social sanctioning on racist online harassment. Racist online harassment de-mobilizes the minorities it targets, and the open, unopposed expression of racism in a public forum can legitimize racist viewpoints and prime ethnocentrism. I employ an intervention designed to reduce the use of anti-black racist slurs by white men on Twitter. I collect a sample of Twitter users who have harassed other users and use accounts I control (“bots”) to sanction the harassers. By varying the identity of the bots between in-group (white man) and out-group (black man) and by varying the number of Twitter followers each bot has, I find that subjects who were sanctioned by a high-follower white male significantly reduced their use of a racist slur. This paper extends findings from lab experiments to a naturalistic setting using an objective, behavioral outcome measure and a continuous 2-month data collection period. This represents an advance in the study of prejudiced behavior.



Notes

  1.

    All hypotheses were pre-registered at EGAP.org (ID number 20150520AA) prior to any data collection.

  2.

    Whether a picture is actually of the subject was impossible to verify perfectly; I included any picture that clearly showed the face of a person who I did not recognize.

  3.

    As is recorded in my pre-analysis plan (registered at EGAP, ID number 20150520AA), I had originally intended to perform two similar experiments: one on racist harassment and one on misogynist harassment. However, my method was insufficient for generating a large enough sample of misogynist users. For any misogynist slur I tried to use as my search term (bitch, whore, slut), there were far too many people using it as a term of endearment for their friends for me to filter through and find the actual harassment. I plan on finding a way to crowdsource this process of manually discerning genuine harassment, but for now the misogynist harassment experiment is unfeasible. The pre-analysis plan also intended to test two hypotheses about spillover effects on the subjects’ networks, but this has thus far proven technically intractable.

  4.

    Chen et al. (2012), for example, emulate Xu and Zhu (2010) and take a list of terms from the website www.noswearing.com.

  5.

    For a full list of terms, see the Online Appendix.

  6.

    Each Twitter account is assigned a unique numerical user ID based on when it signed up; newer accounts have higher IDs. Not all of these numbers correspond to extant or frequently used accounts, so whenever I randomly picked such a number, I generated a new random number.

  7.

    Still, there are many people who believe that they are “joking” when they call a friend a slur. While this is still objectionable behavior, it differs from the kind of targeted prejudiced harassment that is of interest in this paper, so I excluded from the sample any users who appeared to be friends and who did not find the slur they were using offensive. This process is inherently subjective, but the excluded users usually had a long back-and-forth in which slurs were interspersed with more obviously friendly terms.

  8.

    Throughout the assignment process, I matched subjects in each treatment group on their (0–2) anonymity score. They were otherwise randomly assigned.

  9.

    This process was approved by NYU’s Institutional Review Board. These subjects had not given their informed consent to participate in this experiment, but the intervention I applied falls within the “normal expectations” of their user experience on Twitter. The subjects were not debriefed. The benefits of debriefing would not have outweighed the risks to me, the researcher, of providing my personal information to a group of people with a demonstrated propensity for online harassment.

  10.

    I avoid providing the entire username of the bot to protect my subjects’ anonymity.

  11.

    It is possible that a stronger racial treatment effect might have obtained if I had also changed the facial features of the black bots to be more Afrocentric, an effect that Weaver (2012) finds to be approximately as large as that of changing skin color on voting outcomes.

  12.

    Initially, I assigned 243 subjects to one of the four treatment arms or to the control group. However, one of these subjects tweeted too infrequently for me to calculate a meaningful pre-treatment rate of offensive language use, and I excluded him.

  13.

    I contacted Twitter to see if they could provide me with this information, but they were not forthcoming.

  14.

    Note, though, that the Out-group/High Followers condition saw much lower attrition than the other treatment conditions. I have no explanation for why this is the case, and in fact my ex ante expectation was that, to the extent that attrition was positively correlated with any treatment condition, it would have been higher among the High Followers conditions.

  15.

    A more conservative and less substantively accurate assumption is to treat these observations as having a post-treatment rate of racist language use equal to their pre-treatment rate. Figure 7 in the Appendix presents the results under this alternate assumption. The results are substantively similar, although the point estimates are slightly smaller.

  16.

    I have selected my sample based on their use of this slur. Expanding the dependent variable to include other anti-black language does not substantively change the results, primarily because the use of other anti-black slurs is uncommon among this subject pool.

  17.

    These responses also did not vary in terms of vitriol between the treatment arms. In fact, even the number of subjects that responded to call my bot a “n****r” did not vary significantly between the white and black bots.
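The ID-resampling procedure described in note 6 can be sketched as follows. This is an illustrative reconstruction, not the author’s code; `account_is_usable` is a hypothetical stand-in for whatever check against Twitter determined that an ID belonged to an extant, frequently used account.

```python
import random

def sample_user_ids(n, max_id, account_is_usable, rng=random):
    """Draw n distinct numeric user IDs by rejection sampling:
    whenever a candidate ID fails the usability check, redraw."""
    chosen = set()
    while len(chosen) < n:
        candidate = rng.randint(1, max_id)
        # Skip IDs that do not map to an extant, frequently used account.
        if candidate not in chosen and account_is_usable(candidate):
            chosen.add(candidate)
    return sorted(chosen)

# Toy illustration: pretend only even-numbered IDs are usable.
ids = sample_user_ids(5, 1000, lambda uid: uid % 2 == 0)
```

Rejection sampling keeps the draw uniform over the usable IDs, at the cost of extra draws when many candidate IDs are dead.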
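The matched assignment described in note 8 amounts to block randomization on the anonymity score. A minimal sketch under assumed details — the five arms of the design (four treatments plus control) and toy subject data; the paper does not give the actual assignment code:

```python
import random
from collections import defaultdict

def blocked_assignment(subjects, anonymity_score, arms, seed=0):
    """Balance arms within each anonymity-score block (0, 1, or 2)
    by shuffling each block and assigning arms round-robin."""
    rng = random.Random(seed)
    blocks = defaultdict(list)
    for subj in subjects:
        blocks[anonymity_score[subj]].append(subj)
    assignment = {}
    for score in sorted(blocks):
        members = blocks[score]
        rng.shuffle(members)  # random order within the block
        for i, subj in enumerate(members):
            assignment[subj] = arms[i % len(arms)]
    return assignment

arms = ["Control", "In/Low", "In/High", "Out/Low", "Out/High"]
subjects = [f"user{i}" for i in range(30)]
scores = {s: i % 3 for i, s in enumerate(subjects)}  # toy 0-2 scores
assignment = blocked_assignment(subjects, scores, arms)
```

Within each score block the arms come out as evenly as possible, so the treatment groups are matched on anonymity by construction.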

References

  1. Allport, G. W. (1954). The nature of prejudice. Basic Books.

  2. Aral, S., & Walker, D. (2012). Identifying influential and susceptible members of social networks. Science, 337(6092), 337–341.


  3. Banks, A. J. (2014). The public’s anger: White racial attitudes and opinions toward health care reform. Political Behavior, 36(3), 493–514.


  4. Banks, A. J. (2016). Are group cues necessary? How anger makes ethnocentrism among whites a stronger predictor of racial and immigration policy opinions. Political Behavior, 1–23.

  5. Bertrand, M., & Mullainathan, S. (2003). Are Emily and Greg more employable than Lakisha and Jamal? A field experiment on labor market discrimination. Cambridge: National Bureau of Economic Research.


  6. Binder, J., Zagefka, H., Brown, R., Funke, F., Kessler, T., Mummendey, A., et al. (2009). Does contact reduce prejudice or does prejudice reduce contact? A longitudinal test of the contact hypothesis among majority and minority groups in three European countries. Journal of Personality and Social Psychology, 96(4), 843.


  7. Blanchard, F. A., Crandall, C. S., Brigham, J. C., & Vaughn, L. A. (1994). Condemning and condoning racism: A social context approach to interracial settings. Journal of Applied Psychology, 79(6), 993.


  8. Bordia, P. (1997). Face-to-face versus computer-mediated communication: A synthesis of the experimental literature. Journal of Business Communication, 34(1), 99–118.


  9. Brewer, M. B. (1999). The psychology of prejudice: Ingroup love and outgroup hate? Journal of Social Issues, 55(3), 429–444.


  10. Chen, Y., Zhou, Y., Zhu, S., & Xu, H. (2012). Detecting offensive language in social media to protect adolescent online safety. In 2012 International Conference on Privacy, Security, Risk and Trust (PASSAT) and 2012 International Conference on Social Computing (SocialCom) (pp. 71–80). IEEE.

  11. Chhibber, P., & Sekhon, J. S. (2014). The asymmetric role of religious appeals in India.

  12. Coppock, A., Guess, A., & Ternovski, J. (2015). When treatments are tweets: A network mobilization experiment over twitter. Political Behavior, 1–24.

  13. Crandall, C. S., Eshleman, A., & O’Brien, L. (2002). Social norms and the expression and suppression of prejudice: The struggle for internalization. Journal of Personality and Social Psychology, 82(3), 359.


  14. Dovidio, J. F., & Gaertner, S. L. (1999). Reducing prejudice: Combating intergroup biases. Current Directions in Psychological Science, 8(4), 101–105.


  15. Gulker, J. E., Mark, A. Y., & Monteith, M. J. (2013). Confronting prejudice: The who, what, and why of confrontation effectiveness. Social Influence, 8(4), 280–293.


  16. Harrison, B. F., & Michelson, M. R. (2012). Not that there’s anything wrong with that: The effect of personalized appeals on marriage equality campaigns. Political Behavior, 34(2), 325–344.


  17. Henson, B., Reyns, B. W., & Fisher, B. S. (2013). Fear of crime online? Examining the effect of risk, previous victimization, and exposure on fear of online interpersonal victimization. Journal of Contemporary Criminal Justice.

  18. Hinduja, S., & Patchin, J. W. (2007). Offline consequences of online victimization: School violence and delinquency. Journal of School Violence, 6(3), 89–112.


  19. Hosseinmardi, H., Rafiq, R. I., Li, S., Yang, Z., Han, R., Mishra, S., & Lv, Q. (2014). A comparison of common users across Instagram and Ask.fm to better understand cyberbullying. arXiv preprint arXiv:1408.4882.

  20. Kam, C. D., & Kinder, D. R. (2012). Ethnocentrism as a short-term force in the 2008 American presidential election. American Journal of Political Science, 56(2), 326–340.


  21. Kennedy, M. A., & Taylor, M. A. (2010). Online harassment and victimization of college students. Justice Policy Journal, 7(1), 1–21.


  22. Kiesler, S., Siegel, J., & McGuire, T. W. (1984). Social psychological aspects of computer-mediated communication. American Psychologist, 39(10), 1123.


  23. Lea, M., & Spears, R. (1991). Computer-mediated communication, de-individuation and group decision-making. International Journal of Man-Machine Studies, 34(2), 283–301.


  24. Mantilla, K. (2013). Gendertrolling: Misogyny adapts to new media. Feminist Studies, 39(2), 563–570.


  25. Moor, P. J. (2007). Conforming to the flaming norm in the online commenting situation.

  26. Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political misperceptions. Political Behavior, 32(2), 303–330.


  27. Omernick, E., & Sood, S. O. (2013). The impact of anonymity in online communities. In 2013 International Conference on Social Computing (SocialCom) (pp. 526–535). IEEE.

  28. Paluck, E. L., & Green, D. P. (2009). Prejudice reduction: What works? A review and assessment of research and practice. Annual Review of Psychology, 60, 339–367.


  29. Paluck, E. L., Shepherd, H., & Aronow, P. M. (2016). Changing climates of conflict: A social network experiment in 56 schools. Proceedings of the National Academy of Sciences, 113(3), 566–571.


  30. Pettigrew, T. F., & Tropp, L. R. (2006). A meta-analytic test of intergroup contact theory. Journal of Personality and Social Psychology, 90(5), 751.


  31. Piston, S. (2010). How explicit racial prejudice hurt Obama in the 2008 election. Political Behavior, 32(4), 431–451.


  32. Plant, E. A., & Devine, P. G. (1998). Internal and external motivation to respond without prejudice. Journal of Personality and Social Psychology, 75(3), 811.


  33. Postmes, T., Spears, R., Sakhel, K., & Groot, D. D. (2001). Social influence in computer-mediated communication: The effects of anonymity on group behavior. Personality and Social Psychology Bulletin, 27(10), 1243–1254.


  34. Rasinski, H. M., & Czopp, A. M. (2010). The effect of target status on witnesses’ reactions to confrontations of bias. Basic and Applied Social Psychology, 32(1), 8–16.


  35. Reicher, S. D., Spears, R., & Postmes, T. (1995). A social identity model of deindividuation phenomena. European Review of Social Psychology, 6(1), 161–198.


  36. Shepherd, H., & Paluck, E. L. (2015). Stopping the drama: Gendered influence in a network field experiment. Social Psychology Quarterly, 78(2), 173–193.


  37. Sherif, M., & Sherif, C. W. (1953). Groups in harmony and tension: An integration of studies of intergroup relations.

  38. Sood, S., Antin, J., & Churchill, E. (2012). Profanity use in online communities. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1481–1490). ACM.

  39. Stangor, C., Sechrist, G. B., & Jost, J. T. (2001). Changing racial beliefs by providing consensus information. Personality and Social Psychology Bulletin, 27(4), 486–496.


  40. Stringhini, G., Egele, M., Kruegel, C., & Vigna, G. (2012). Poultry markets: On the underground economy of Twitter followers. In Proceedings of the 2012 ACM Workshop on Online Social Networks (pp. 1–6). ACM.

  41. Tajfel, H., & Turner, J. C. (1979). An integrative theory of intergroup conflict. The Social Psychology of Intergroup Relations, 33(47), 74.


  42. Walther, J. B. (1996). Computer-mediated communication: Impersonal, interpersonal, and hyperpersonal interaction. Communication Research, 23(1), 3–43.


  43. Weaver, V. M. (2012). The electoral consequences of skin color: The hidden side of race in politics. Political Behavior, 34(1), 159–192.


  44. Xu, Z., & Zhu, S. (2010). Filtering offensive language in online communities using grammatical relations. Proceedings of the Seventh Annual Collaboration, Electronic Messaging, Anti-Abuse and Spam Conference.

  45. Yin, D., Xue, Z., Hong, L., Davison, B. D., Kontostathis, A., & Edwards, L. (2009). Detection of harassment on web 2.0. Proceedings of the Content Analysis in the WEB 2.

  46. Zitek, E. M., & Hebl, M. R. (2007). The role of social norm clarity in the influenced expression of prejudice over time. Journal of Experimental Social Psychology, 43(6), 867–876.



Acknowledgments

I would like to thank Chris Dawes, Neal Beck, Eric Dickson, James Hodgdon Bisbee, David Broockman, Livio Di Lonardo, Ryan Enos and Drew Dimmery, along with three anonymous reviewers; participants at the 2015 Summer Methods Meeting, the Harvard Experimental Political Science Graduate Student Conference, Neal's Seminar, the Yale ISPS Experiments Workshop and the NYU Graduate Political Economy Seminar; and members of the NYU Social Media and Political Participation (SMaPP) Lab, for their valuable feedback on earlier versions of this project.

Author information


Corresponding author

Correspondence to Kevin Munger.

Ethics declarations

Conflict of interest

The author declares that he had no conflicts of interest with respect to his authorship or the publication of this article.

Ethical Standards

All procedures performed in studies involving human participants were in accordance with the ethical standards of the New York University Institutional Review Board.

Additional information

Replication materials are available on the author’s website, www.kevinmunger.com.

Electronic supplementary material


Supplementary material 1 (PDF 393 kb)

Appendix


Conservative Assumption for Main Results

For the subjects who produced too few post-treatment tweets to calculate a rate of racist language use, I assumed that their post-treatment rate of racist language use was zero. This assumption makes sense substantively, because these people were no longer tweeting (and thus no longer engaging in racist harassment). However, a more conservative assumption would be that their behavior did not change, which assigns them a post-treatment rate equal to their pre-treatment rate. This does not substantively change the results, although the magnitude of the effect sizes becomes slightly smaller.
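The two imputation rules can be illustrated with a short sketch (hypothetical rates, not the paper’s data; `None` marks a subject with too few post-treatment tweets):

```python
def impute_post_rates(pre, post, rule="zero"):
    """Fill in missing post-treatment rates of racist language use.
    rule="zero":  treat silent subjects as no longer harassing (rate 0).
    rule="carry": conservatively assume no change (carry the pre rate)."""
    filled = []
    for pre_rate, post_rate in zip(pre, post):
        if post_rate is None:
            filled.append(0.0 if rule == "zero" else pre_rate)
        else:
            filled.append(post_rate)
    return filled

pre = [0.4, 0.1, 0.3]
post = [0.2, None, None]
impute_post_rates(pre, post, rule="zero")   # main analysis: [0.2, 0.0, 0.0]
impute_post_rates(pre, post, rule="carry")  # conservative: [0.2, 0.1, 0.3]
```

Because the carry-forward rule credits silent subjects with their old rate, the estimated reductions shrink, consistent with the slightly smaller point estimates in Fig. 7.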

Fig. 7

Full sample (N = 242). Each panel represents the results of a separate OLS regression in which the outcome variable is the absolute number of times a subject tweeted the word “n****r” per day in the specified time period. For example, the coefficient associated with the In-group/High Followers treatment in Panel A shows that these subjects reduced their average daily usage of this slur by 0.25 more than subjects in the control group in the week after treatment. Each regression also controls for the subject’s absolute daily use of this slur in the 2 months prior to the treatment. The vertical tick marks represent 90% confidence intervals and the full lines represent 95% confidence intervals.


About this article


Cite this article

Munger, K. Tweetment Effects on the Tweeted: Experimentally Reducing Racist Harassment. Polit Behav 39, 629–649 (2017). https://doi.org/10.1007/s11109-016-9373-5


Keywords

  • Online harassment
  • Social media
  • Randomized field experiment
  • Social identity