The origins of research on incivility and hate speech can be traced back to the question of what qualities public communication should have in order to establish and maintain a democratic society. Democracy and public sphere theorists have presented different answers to this question and accordingly developed different concepts of civility. Incivility is a controversial concept associated with a wide spectrum of behaviors. Based on the different theoretical concepts, different indicators of incivility have been used. This chapter summarizes previous theoretical approaches and provides an overview of existing content analytic studies of incivility in online user-generated communication.
- Katharina Esau
The origins of research on incivility and hate speech can be traced back to the question of what qualities public communication should have in order to establish and maintain a democratic society (Herbst 2010; Papacharissi 2004). Democracy and public sphere theorists have presented different answers to this question and accordingly developed different concepts of civility (for an overview, see, e.g., Ferree et al. 2002; Freelon 2010). In recent years, incivility and hate speech have increasingly been observed in user-generated content (Coe et al. 2014; Rowe 2015; Stroud et al. 2015; Ziegele et al. 2017). Empirical studies have shown that incivility can generate negative emotions and responses toward others (Hwang et al. 2017; Phillips and Smith 2003, 2004), polarize users’ attitudes (Anderson et al. 2014), and can even have an indirect impact on the willingness to help others (Ziegele et al. 2018b). Despite these negative effects of incivility, scholars have maintained the importance of heated (Papacharissi 2004) and cross-cutting discussion (Popan et al. 2019) and the legitimacy of conflict and disagreement for democracy (Huckfeldt et al. 2004).
Against this backdrop, detecting and explaining hateful and uncivil user comments have become a major focus in communication research. Due to the relevance of the topic and the availability of data, communication scholars are now studying incivility and hate speech across various subfields. Political communication research, for example, focuses on hate speech in the context of political discussions online and offline (Boatright et al. 2019; Coe et al. 2014; Mutz and Reeves 2005; Papacharissi 2004). The relatively young but growing field of online deliberation research focuses on civil, respectful, reciprocal, and reasoned online discussions from the perspective of deliberative democracy theory and examines incivility through the theoretical lens of online deliberation (Esau et al. 2017; Ziegele et al. 2018c). Furthermore, incivility research has links to science communication research because uncivil user comments are not only directed at journalism or politics but can also be directed at scientific content (Su et al. 2021; Yuan and Lu 2020). Overall, digital communication research has produced a large number of empirical studies on incivility and hate speech online (Oz et al. 2018; Poole et al. 2020; Rowe 2015; Ziegele et al. 2017).
Incivility in online discussions is deemed a challenge for democratic societies (Boatright et al. 2019; Herbst 2010). Beyond understanding and explaining the phenomenon of online hate and incivility, research in recent years has been shifting toward a greater focus on application-oriented goals. One growing strand of content analysis research concerns techniques that could help reduce incivility and hate speech in online spaces through a more careful deliberative design of such spaces (Esau et al. 2017; Towne and Herbsleb 2012; Wright and Street 2007). For example, the identification of otherwise anonymous users (Rowe 2015; Santana 2013) and different styles of moderation (Stroud et al. 2015; Ziegele et al. 2018a) are promising design factors that could reduce uncivil and hateful behavior online.
In recent years, the public and scientific debate on incivility and hate speech online has focused on specific topics on the public agenda. As both incivility and hate speech can be observed in situations of disagreement or conflict, topics that provide fodder for controversial and polarized discussion have gained more attention than those deemed less sensitive. In content analyses and experimental studies, researchers have investigated incivility and hate speech in the context of morally charged, polarized, antagonistic discussions in areas such as abortion (Ferree et al. 2002; Stroud et al. 2015), climate change (Howarth and Sharman 2017; Yuan and Lu 2020), immigration (Santana 2013), refugees (Ziegele et al. 2018b), violence (Chen et al. 2020), same-sex marriage (Oz et al. 2018), terrorism (Oz et al. 2018; Poole et al. 2020), gun control (Oz et al. 2018), and politically divisive new technologies such as nanotechnology (Anderson et al. 2014) and fracking (Su et al. 2021). Coe et al. (2014) found that “weightier topics” and those with “clear opposing sides” (e.g., sports) tended to stir incivility. It is noteworthy that research has usually focused on issues that had recently emerged on the news agenda and were therefore especially controversial and publicly prominent.
2 Definitions and Theories: Between Impoliteness and Incivility
While some scholars have argued that “sufficient consensus exists about what type of speech counts as extremely uncivil” (Massaro and Stryker 2012, p. 406), others have pointed out that “civility is also very much in the eye of the beholder” (Herbst 2010, p. 3). Incivility is a controversial concept and is associated with a wide spectrum of behaviors ranging from the mere expression of emotions (Su et al. 2018, 2021) to offensive and derogatory statements (Chen et al. 2020), stereotypes, and serious threats to personal rights or democracy as a whole (Papacharissi 2004; Rowe 2015). Accordingly, the existing literature provides a wide range of definitions (e.g., Anderson et al. 2014; Coe et al. 2014; Oz et al. 2018) and some classification attempts (Muddiman 2017; Papacharissi 2004; Su et al. 2021).
Correspondingly, the research landscape provides a rather rudimentary image of what is considered uncivil communication. Nevertheless, one unifying element in behavior deemed uncivil is that it has to violate an existing norm (Muddiman 2017; Papacharissi 2004; Seely 2017; Su et al. 2018, 2021). However, identifying the violated norms tends to be less clear-cut. This question is either overlooked or controversially discussed among scholars. For example, Papacharissi (2004) focused on violations of democratic norms, while, for Seely (2017), striving for social harmony is a valid social norm that, when violated, constitutes incivility. Accordingly, for Seely (2017), impoliteness and incivility are inseparable, while Papacharissi (2004) distinguished between the two.
The different definitions of incivility can be explained by the different theoretical traditions and approaches that form the backbone of empirical research. On one hand, incivility research can be related to theories on social norms of communication and conversation: conversational maxims (Grice 1975), face-saving concepts (Brown and Levinson 1987; Goffman 1989), or conversational contract theories (Fraser 1990). On the other hand, incivility research has ties to democratic theories that view public communication as part of democratic opinion formation and decision-making (Dryzek 2000; Gutmann and Thompson 1990, 1996; Habermas 1984, 1994). Although researchers who investigate norm violations in communication might be the last to assume that communication leads to mutual understanding, there are numerous points of overlap between research on deliberative democracy theory and research on incivility online (Esau et al. 2017; Halpern and Gibbs 2013; Rowe 2015; Ziegele et al. 2018c; Ziegele et al. 2017).
In contrast to incivility, the term hate speech provokes less controversy among scholars. One common element is that hate speech, as the term suggests, expresses and promotes hatred toward others (Erjavec and Kovačič 2012; Rosenfeld 2012; Ziegele et al. 2018b). A second element in the literature is that hate speech is directed against others on the basis of their ethnic or national origin, religion, gender, disability, sexual orientation, or political conviction (Erjavec and Kovačič 2012; Rosenfeld 2012; Waseem and Hovy 2016). Further, it is associated with the use of terms that are considered to denigrate, degrade, and threaten others (Döring and Mohseni 2020; Gagliardone et al. 2015). However, this lower level of controversy may simply reflect the comparatively little attention hate speech has received relative to incivility. Hate speech and incivility are often used synonymously, as hateful speech is considered part and parcel of incivility (Ziegele et al. 2018b). Moreover, the two concepts have yet to be clearly distinguished from each other theoretically.
3 Research Designs, Methods, and Method Combinations
Experimental designs (Kim and Chen 2020; Popan et al. 2019; Su et al. 2021) and content analysis (Chen et al. 2020; Papacharissi 2004; Rowe 2015), sometimes used in combination (Borah 2013; Muddiman 2017; Oz et al. 2018), are so far the most commonly used methodological approaches to investigating the prevalence and effects of hate speech and incivility in user-generated media content. Of these approaches, online experiments are well suited to creating an environment in which participants experience realistic online discussions while researchers control specific variables (Oz et al. 2018; Su et al. 2021; Ziegele et al. 2018b). Comparative content analysis designs are also able to identify important context and influence variables (Esau et al. 2017; Halpern and Gibbs 2013; Oz et al. 2018; Rowe 2015).
Manual content analysis is most commonly used in communication science (Chen et al. 2020; Papacharissi 2004; Rowe 2015; Stroud et al. 2015; Ziegele et al. 2018a; Ziegele et al. 2018c). However, one increasingly popular trend is automated content analysis, a collection of techniques used to automatically analyze media content. Usually, manually coded text data serve as a “training set” with which supervised machine learning algorithms are trained to automatically detect hate speech and incivility (Stoll et al. 2020; Su et al. 2018). This enables researchers to test and further develop procedures originally developed in computer science (Burnap and Williams 2015; Davidson et al. 2017; Waseem and Hovy 2016). Another multi-method approach combines automated data collection, initial automated preliminary analyses, and refined manual quantitative or qualitative analyses (Poole et al. 2020; Waseem and Hovy 2016). Furthermore, although rarely used, another promising approach is the combination of qualitative or quantitative content analysis and in-depth interviews (Erjavec and Kovačič 2012; Ziegele 2016). Most studies provide useful overviews of the state of the art; however, meta-analyses on hate speech and incivility online remain wanting (Ziegele et al. 2017).
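The supervised workflow sketched above, in which manually coded comments serve as training data for automatic detection, can be illustrated with a deliberately minimal example. The toy comments, labels, and simple Naive Bayes classifier below are invented for illustration only and are not drawn from any of the cited studies; real projects rely on large coded corpora and more sophisticated features and models (cf. Stoll et al. 2020).

```python
import math
from collections import Counter

# Toy "training set": user comments manually coded as uncivil (1) or civil (0).
# These comments and labels are invented placeholders, not real coded data.
TRAIN = [
    ("you are too stupid to understand", 1),
    ("shut your mouth or else", 1),
    ("those greedy pigs ruin everything", 1),
    ("i disagree but i see your point", 0),
    ("thanks for sharing this article", 0),
    ("the evidence suggests otherwise", 0),
]

def tokenize(text):
    return text.lower().split()

def train_naive_bayes(examples):
    """Estimate log priors and Laplace-smoothed log likelihoods per class."""
    class_counts = Counter(label for _, label in examples)
    word_counts = {c: Counter() for c in class_counts}
    for text, label in examples:
        word_counts[label].update(tokenize(text))
    vocab = {w for counts in word_counts.values() for w in counts}
    total = sum(class_counts.values())
    model = {}
    for c, n in class_counts.items():
        denom = sum(word_counts[c].values()) + len(vocab)
        model[c] = (
            math.log(n / total),  # class prior
            {w: math.log((word_counts[c][w] + 1) / denom) for w in vocab},
            math.log(1 / denom),  # fallback for out-of-vocabulary words
        )
    return model

def predict(model, text):
    """Return the class with the highest posterior log probability."""
    scores = {
        c: log_prior + sum(log_like.get(w, fallback) for w in tokenize(text))
        for c, (log_prior, log_like, fallback) in model.items()
    }
    return max(scores, key=scores.get)

model = train_naive_bayes(TRAIN)
print(predict(model, "you greedy pigs are too stupid"))  # prints 1 (coded uncivil)
```

In practice, such a classifier would be validated against held-out manually coded comments, and its errors inspected, before any substantive conclusions were drawn from its output.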
4 Main Constructs, Preconditions, and Effects of Incivility
Content analyses have demonstrated that, on average, between 20 and 50% of user comments contain some form of impoliteness, incivility, or hate speech (Coe et al. 2014; Papacharissi 2004; Santana 2013; Ziegele et al. 2018c). Most content analyses of online hate and incivility are either case studies focusing on a single online platform or comparative studies of several online platforms. Hate speech and incivility have been studied on Usenet newsgroups (Papacharissi 2004), political blogs (Borah 2013; Seely 2017), news websites (Chen et al. 2020; Rowe 2015; Seely 2017), Facebook (Oz et al. 2018; Rowe 2015; Stoll et al. 2020; Su et al. 2018; Ziegele et al. 2018a; Ziegele et al. 2018c), Twitter (Oz et al. 2018; Poole et al. 2020; Waseem and Hovy 2016), YouTube (Agarwal and Sureka 2014), and Wikipedia (Black et al. 2011). Among the comparative studies, Rowe (2015) found higher levels of incivility in user comments on a news website than on a news page on Facebook. In contrast, Esau et al. (2017), using a similar study design, found significantly more disrespectful comments on Facebook than on news websites and a news forum. Oz et al. (2018) compared government pages on social media and found more incivility on Twitter than on Facebook. Other studies found significant differences in the amount of incivility on different news websites depending on their ideological leaning (Chen et al. 2020), geographic scope (Su et al. 2018), and country (Ruiz et al. 2011).
Papacharissi (2004) developed one of the first and most-cited coding schemes for the standardized manual content analysis of incivility. However, the analytic constructs used in the study have been controversially discussed and have resulted in a variety of analytical approaches. Despite the variety, some commonly analyzed constructs can be distilled:
Dimensions or levels of incivility: Most analytical constructs take different dimensions or levels of incivility into account (Seely 2017; Su et al. 2018; Ziegele et al. 2018b) and thereby differ on the question of where incivility begins, where it ends, and what it includes (Muddiman 2017; Papacharissi 2004; Seely 2017; Su et al. 2021). As previously discussed, some researchers distinguish between impoliteness (e.g., name calling, vulgarity) and incivility (e.g., violent threats, stereotypes) (Papacharissi 2004; Rowe 2015), while others understand impoliteness as part of incivility (Seely 2017). Another concept further distinguishes between civility, mere rudeness (e.g., insults), and extreme incivility (e.g., violent threats) (Su et al. 2018). Still others distinguish between mere negativity, as an inevitable characteristic of disagreement, and incivility, which undermines the ideal of deliberative discussion (Ziegele et al. 2018a), or between rudeness and hate speech (Ziegele et al. 2018b). Researchers have argued that even where negativity counts as incivility, it does not have the same negative impact on democracy as hate speech or extreme incivility (Su et al. 2018). However, Ziegele et al. (2018b) argued convincingly that, although negativity alone does not constitute incivility, negativity combined with a disrespectful and hostile tone can be understood as uncivil.
Personal vs. public-level incivility: Furthermore, Muddiman (2017), based on Papacharissi (2004), distinguished between personal-level incivility as a violation of interpersonal politeness norms and public-level incivility as a violation of political process and deliberative norms. The study found that although personal-level incivility was perceived as more uncivil than public-level incivility, both forms of norm violation were seen as uncivil. Personal and public levels of incivility, however, seem to be overlapping concepts as uncivil or hateful comments are often expressed on a personal level, although they also concern the public level when expressed in public online discussions.
Multi-dimensional concepts of incivility: The spectrum of items included in multi-dimensional concepts of incivility ranges from a simple emotional display (“you make me angry”) (Su et al. 2018, 2021) to profanity (F**k; + #?**!) (Chen et al. 2020; Ziegele et al. 2018a), rudeness (“that’s bullshit”) (Seely 2017), sarcasm (“just killed people but you’re right it’s a religion of peace”) (Poole et al. 2020; Seely 2017; Ziegele et al. 2018a), offensive language (“bottom feeder”) (Chen et al. 2020; Seely 2017), insults (“you are too stupid to understand”) (Chen et al. 2020; Seely 2017; Ziegele et al. 2018a), hot-button language (“abortion is killing”) (Ferree et al. 2002), name calling (“greedy pigs”) (Chen et al. 2020; Ziegele et al. 2018a), use of stereotypes (“liberal pothead,” “faggot”) (Chen et al. 2020; Papacharissi 2004; Rowe 2015; Seely 2017; Ziegele et al. 2018a), violent threats to democracy (“our politicians should be shot”) (Papacharissi 2004; Rowe 2015; Ziegele et al. 2018a), and threats to individual rights (“shut your mouth or I’ll shut it for you”) (Papacharissi 2004; Rowe 2015; Ziegele et al. 2018a). Some studies (Seely 2017) have also included further dimensions (e.g., accusations of lying), supporting the impression that there is little conceptual clarity about the dimensions of incivility.
Characteristics of deliberative quality: Studies on incivility or disrespect often also comparatively examine other characteristics of deliberative quality, for example, rationality through argumentation or reciprocity through replying to others (Black et al. 2011; Chen et al. 2020; Esau et al. 2017; Halpern and Gibbs 2013; Ziegele et al. 2018c). Study results have shown that uncivil discussions can contain rational content (e.g., arguments or evidence) (Coe et al. 2014; Popan et al. 2019).
Hate speech: Analyses of hate speech show commonalities with concepts of incivility (Erjavec and Kovačič 2012; Waseem and Hovy 2016; Ziegele et al. 2018b), for example, expressions that include insults, violent threats, hatred, or discrimination. Furthermore, hate speech is directed against people on the basis of, for example, their ethnic or national origin, religion, gender, disability, sexual orientation, or political conviction (Erjavec and Kovačič 2012; Waseem and Hovy 2016).
Topics: As detailed above, incivility has been examined in the context of a variety of mostly highly controversial and conflictual topics. For example, Oz et al. (2018) demonstrated that significantly more impolite and uncivil user comments were published on sensitive, morally charged topics (e.g., same-sex marriage) than on non-sensitive topics (e.g., technology).
Effects of incivility: Although effects are not their main focus, some studies employing standardized content analysis are interested in the effects of incivility on user-generated online discussions. For example, a few studies have demonstrated that uncivil and aggressive comments can increase negative emotions, negative responses, and overall user engagement (Coe et al. 2014; Hwang et al. 2017; Ziegele et al. 2014). These studies demonstrate that content analysis can potentially be used to reveal causal relationships between user-generated contributions over time.
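To make the multi-dimensional constructs described above more concrete, such a codebook can be thought of as a mapping from dimensions to indicators. The sketch below represents it, in a deliberately crude keyword form, as dimension-specific pattern lists; the dimension names loosely follow the chapter, while the patterns are hypothetical placeholders, not a validated instrument. Dictionary matching of this kind would miss context-dependent phenomena such as sarcasm, which is one reason manual coding by trained coders remains the standard.

```python
import re

# Hypothetical, radically simplified operationalization of a multi-dimensional
# incivility codebook. The patterns are invented examples for illustration only.
CODEBOOK = {
    "insult": [r"\btoo stupid\b", r"\bidiot\b"],
    "name_calling": [r"\bgreedy pigs\b"],
    "offensive_language": [r"\bbottom feeder\b"],
    "threat": [r"\bshould be shot\b", r"\bi'll shut it\b"],
}

def code_comment(text, codebook=CODEBOOK):
    """Return the set of incivility dimensions whose patterns match the comment."""
    text = text.lower()
    return {
        dimension
        for dimension, patterns in codebook.items()
        if any(re.search(pattern, text) for pattern in patterns)
    }

print(code_comment("Our politicians should be shot, greedy pigs!"))
# a set containing "threat" and "name_calling"
```

A comment can thus trigger several dimensions at once, which mirrors the observation that multi-dimensional constructs treat incivility as a bundle of distinguishable indicators rather than a single binary code.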
5 Research Desiderata and Future Perspectives
This chapter has shown that, in recent years, research on incivility and hate speech in user-generated online discussions has increased considerably and developed rapidly. However, it can still be considered a budding research area that can take very different paths in the future. The rapid growth of data and the high public interest in the topic have pushed scholars to prioritize testing new designs and methods, which sometimes requires them to set theoretical and definitional work aside. This conceptual work can and should receive more attention in the future. Open and controversial questions include the following: Where does incivility begin and end? Where does extreme incivility start? How are incivility and hate speech theoretically connected? What do expressions of emotion have to do with incivility? How can sarcasm be embedded within the theoretical concept? The widely varying findings, with reported shares of uncivil comments ranging from 5 to 50% of online discussions (Coe et al. 2014; Rowe 2015; Santana 2013), call for more conceptual agreement to enable a stronger comparative perspective. Another major research gap is the motivation behind incivility and hate speech: Although we have insights into the motivational structures of average users (Eberwein 2019; Ziegele 2016), we know little about extreme and radical users, who are presumably less inclined to participate in scientific interviews and surveys; this applies especially to those who use uncivil communication and hate speech strategically in connection with extremist and radical political groups.
Another important path for future research is to gain more knowledge about online discussion structures and dynamics. How do civil and uncivil discussions evolve, and when and why do they change in tone and purpose? For example, tracking incivility and hate speech dynamics within entire threads while taking the time dimension into account might be one interesting future path. Furthermore, research could test more longitudinal approaches to capture the development of topics or of individual hateful users. Thus far, there are no meta-analyses of incivility studies, which could be another step toward a more systematic research field. Finally, the combination of manual and automated content analysis will most likely become the gold standard in future studies. However, it should be noted that the concept of incivility can depend heavily on cultural background, personality, political orientation, and contextual knowledge and can, therefore, cause disagreement even among human coders. It is important that this insight does not lead to capitulation in the face of complexity but, instead, inspires better methods of automated analysis.
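Coder disagreement of the kind noted above is routinely quantified with chance-corrected reliability coefficients before manual codings are used, for instance, as training data for automated analysis. The following sketch computes Cohen's kappa for two hypothetical coders; the codings are invented, and published content analyses often report Krippendorff's alpha instead, which also handles more than two coders and missing values.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders rating the same units."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    # Agreement expected by chance, from each coder's marginal category frequencies.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codings of ten comments (1 = uncivil, 0 = civil) by two coders.
coder_a = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]
coder_b = [1, 0, 0, 0, 1, 0, 1, 1, 0, 0]
print(round(cohens_kappa(coder_a, coder_b), 3))  # prints 0.583
```

Here the coders agree on 80% of the comments, yet the chance-corrected kappa of roughly 0.58 would usually be considered too low for a demanding construct, which illustrates how hard reliable incivility coding can be.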
Relevant Variables in DOCA – Database of Variables for Content Analysis
Hate speech: https://doi.org/10.34778/5a
Agarwal, S., & Sureka, A. (2014). A focused crawler for mining hate and extremism promoting videos on YouTube. In L. Ferres, G. Rossi, V. Almeida, & E. Herder (Chairs), Proceedings of the 25th ACM Conference on Hypertext and Social Media. https://doi.org/10.1145/2631775.
Anderson, A. A., Brossard, D., Scheufele, D. A., Xenos, M. A., & Ladwig, P. (2014). The “nasty effect:” Online incivility and risk perceptions of emerging technologies. Journal of Computer-Mediated Communication, 19(3), 373–387.
Black, L. W., Welser, H. T., Cosley, D., & DeGroot, J. M. (2011). Self-governance through group discussion in Wikipedia: Measuring deliberation in online groups. Small Group Research, 42(5), 595–634.
Boatright, R. G., Shaffer, T. J., Sobieraj, S., & Young, D. G. (2019). A crisis of civility? Political discourse and its discontents. Routledge.
Borah, P. (2013). Does it matter where you read the news story? Interaction of incivility and news frames in the political blogosphere. Communication Research, 41(6), 809–827.
Brown, P., & Levinson, S. C. (1987). Politeness: Some universals in language usage. Cambridge University Press.
Burnap, P., & Williams, M. L. (2015). Cyber hate speech on Twitter: An application of machine classification and statistical modeling for policy and decision making. Policy & Internet, 7(2), 223–242.
Chen, G. M., Fadnis, D., & Whipple, K. (2020). Can we talk about race? Exploring online comments about race-related shootings. Howard Journal of Communications, 31(1), 35–49.
Coe, K., Kenski, K., & Rains, S. A. (2014). Online and uncivil? Patterns and determinants of incivility in newspaper website comments. Journal of Communication, 64(4), 658–679.
Davidson, T., Warmsley, D., Macy, M., & Weber, I. (2017). Automated hate speech detection and the problem of offensive language. Proceedings of the Eleventh International AAAI Conference on Web and Social Media. http://arxiv.org/pdf/1703.04009v1.
Döring, N., & Mohseni, M. R. (2020). Gendered hate speech in YouTube and YouNow comments: Results of two content analyses. SCM Studies in Communication and Media, 9(1), 62–88.
Dryzek, J. S. (2000). Deliberative democracy and beyond: Liberals, critics, contestations. Oxford University Press.
Eberwein, T. (2019). “Trolls” or “warriors of faith”? Journal of Information, Communication and Ethics in Society, 18(1), 131–143.
Erjavec, K., & Kovačič, M. P. (2012). “You don’t understand, this is a new war!” Analysis of hate speech in news web sites’ comments. Mass Communication and Society, 15(6), 899–920.
Esau, K., Friess, D., & Eilders, C. (2017). Design matters! An empirical analysis of online deliberation on different news platforms. Policy & Internet, 9(3), 321–342.
Ferree, M. M., Gamson, W. A., Gerhards, J., & Rucht, D. (2002). Shaping abortion discourse: Democracy and the public sphere in Germany and the United States. Cambridge University Press.
Fraser, B. (1990). Perspectives on politeness. Journal of Pragmatics, 14(2), 219–236.
Freelon, D. G. (2010). Analyzing online political discussion using three models of democratic communication. New Media & Society, 12(7), 1172–1190.
Gagliardone, I., Gal, D., Alves, T., & Martínez, G. (2015). Countering online hate speech. UNESCO Series on Internet Freedom. UNESCO. http://unesdoc.unesco.org/images/0023/002332/233231e.pdf.
Goffman, E. (1989). Interaction ritual: Essays on face-to-face behavior. Pantheon Books.
Grice, P. H. (1975). Logic and conversation. In P. Cole (Ed.), Syntax and semantics: Speech acts (pp. 41–58). Academic Press.
Gutmann, A., & Thompson, D. (1990). Moral conflict and political consensus. Ethics, 101(1), 64–88.
Gutmann, A., & Thompson, D. F. (1996). Democracy and disagreement. Belknap Press of Harvard University Press.
Habermas, J. (1984). The theory of communicative action: Reason and the rationalization of society. Beacon Press.
Habermas, J. (1994). Three normative models of democracy. Constellations, 1(1), 1–10.
Halpern, D., & Gibbs, J. (2013). Social media as a catalyst for online deliberation? Exploring the affordances of Facebook and YouTube for political expression. Computers in Human Behavior, 29(3), 1159–1168.
Herbst, S. (2010). Rude democracy: Civility and incivility in American politics. Temple University Press.
Howarth, C., & Sharman, A. (2017). Influence of labeling and incivility on climate change communication. Oxford research encyclopedia of climate science. https://oxfordre.com/climatescience/view/10.1093/acrefore/9780190228620.001.0001/acrefore-9780190228620-e-382.
Huckfeldt, R. R., Sprague, J. D., & Johnson, P. E. (2004). Political disagreement: The survival of diverse opinions within communication networks. Cambridge University Press.
Hwang, H., Kim, Y., & Kim, Y. (2017). Influence of discussion incivility on deliberation: An examination of the mediating role of moral indignation. Communication Research, 45(2), 213–240.
Kim, J. W., & Chen, G. M. (2020). Exploring the influence of comment tone and content in response to misinformation in social media news. Journalism Practice, 6(2), 1–15.
Massaro, T. M., & Stryker, R. (2012). Freedom of speech, liberal democracy, and emerging evidence on civility and effective democratic engagement. Arizona Legal Studies, 54(2), 375–411.
Muddiman, A. (2017). Personal and public levels of political incivility. International Journal of Communication, 11, 3182–3202.
Mutz, D. C., & Reeves, B. (2005). The new video malaise: Effects of televised incivility on political trust. The American Political Science Review, 99(1), 1–15.
Oz, M., Zheng, P., & Chen, G. M. (2018). Twitter versus Facebook: Comparing incivility, impoliteness, and deliberative attributes. New Media & Society, 20(9), 3400–3419.
Papacharissi, Z. (2004). Democracy online: Civility, politeness, and the democratic potential of online political discussion groups. New Media & Society, 6(2), 259–283.
Phillips, T., & Smith, P. (2003). Everyday incivility: Towards a benchmark. The Sociological Review, 51(1), 85–108.
Phillips, T., & Smith, P. (2004). Emotional and behavioural responses to everyday incivility. Journal of Sociology, 40(4), 378–399.
Poole, E., Giraud, E. H., & Quincey, E. de (2020, online first). Tactical interventions in online hate speech: The case of #stopIslam. New Media & Society, 1–28. https://doi.org/10.1177/1461444820903319.
Popan, J. R., Coursey, L., Acosta, J., & Kenworthy, J. (2019). Testing the effects of incivility during internet political discussion on perceptions of rational argument and evaluations of a political outgroup. Computers in Human Behavior, 96, 123–132.
Rosenfeld, M. (2012). Hate speech in constitutional jurisprudence. In M. Herz & P. Molnar (Eds.), The content and context of hate speech (pp. 242–289). Cambridge University Press.
Rowe, I. (2015). Civility 2.0: A comparative analysis of incivility in online political discussion. Information, Communication & Society, 18(2), 121–138.
Ruiz, C., Domingo, D., Micó, J. L., Díaz-Noci, J., Meso, K., & Masip, P. (2011). Public sphere 2.0? The democratic qualities of citizen debates in online newspapers. The International Journal of Press/Politics, 16(4), 463–487.
Santana, A. D. (2013). Virtuous or vitriolic. Journalism Practice, 8(1), 18–33.
Seely, N. (2017). Virtual vitriol: A comparative analysis of incivility within political news discussion forums. Electronic News, 12(1), 42–61.
Stoll, A., Ziegele, M., & Quiring, O. (2020). Detecting impoliteness and incivility in online discussions: Classification approaches for German user comments. Computational Communication Research, 2(1), 109–134.
Stroud, N. J., Scacco, J. M., Muddiman, A., & Curry, A. L. (2015). Changing deliberative norms on news organizations’ Facebook sites. Journal of Computer-Mediated Communication, 20(2), 188–203.
Su, L. Y.-F., Scheufele, D. A., Brossard, D., & Xenos, M. A. (2021). Political and personality predispositions and topical contexts matter: Effects of uncivil comments on science news engagement intentions. New Media & Society, 67(3), 894–919.
Su, L. Y.-F., Xenos, M. A., Rose, K. M., Wirz, C., Scheufele, D. A., & Brossard, D. (2018). Uncivil and personal? Comparing patterns of incivility in comments on the Facebook pages of news outlets. New Media & Society, 20(10), 3678–3699.
Towne, W. B., & Herbsleb, J. D. (2012). Design considerations for online deliberation systems. Journal of Information Technology & Politics, 9(1), 97–115.
Waseem, Z., & Hovy, D. (2016). Hateful symbols or hateful people? Predictive features for hate speech detection on Twitter. In J. Andreas, E. Choi, & A. Lazaridou (Chairs), Proceedings of the NAACL Student Research Workshop. https://aclanthology.org/volumes/N16-2/.
Wright, S., & Street, J. (2007). Democracy, deliberation and design: The case of online discussion forums. New Media & Society, 9(5), 849–869.
Yuan, S., & Lu, H. (2020). “It’s global warming, stupid”: Aggressive communication styles and political ideology in science blog debates about climate change. Journalism & Mass Communication Quarterly, 97(4), 1003–1025.
Ziegele, M. (2016). Nutzerkommentare als Anschlusskommunikation: Theorie und qualitative Analyse des Diskussionswerts von Online-Nachrichten. Springer VS.
Ziegele, M., Breiner, T., & Quiring, O. (2014). What creates interactivity in online news discussions? An exploratory analysis of discussion factors in user comments on news items. Journal of Communication, 64(6), 1111–1138.
Ziegele, M., Jost, P., Bormann, M., & Heinbach, D. (2018a). Journalistic counter-voices in comment sections: Patterns, determinants, and potential consequences of interactive moderation of uncivil user comments. Studies in Communication and Media, 7(4), 525–554.
Ziegele, M., Koehler, C., & Weber, M. (2018b). Socially destructive? Effects of negative and hateful user comments on readers’ donation behavior toward refugees and homeless persons. Journal of Broadcasting & Electronic Media, 62(4), 636–653.
Ziegele, M., Quiring, O., Esau, K., & Friess, D. (2018c). Linking news value theory with online deliberation: How news factors and illustration factors in news articles affect the deliberative quality of user discussions in SNS’ comment sections. Communication Research, 47(6), 860–890.
Ziegele, M., Springer, N., Jost, P., & Wright, S. (2017). Online user comments across news and other content formats: Multidisciplinary perspectives, new directions. Studies in Communication and Media, 6(4), 315–332.
© 2023 The Author(s)
Esau, K. (2023). Content Analysis in the Research Field of Incivility and Hate Speech in Online Communication. In: Oehmer-Pedrazzi, F., Kessler, S.H., Humprecht, E., Sommer, K., Castro, L. (eds) Standardisierte Inhaltsanalyse in der Kommunikationswissenschaft – Standardized Content Analysis in Communication Research. Springer VS, Wiesbaden. https://doi.org/10.1007/978-3-658-36179-2_38
Print ISBN: 978-3-658-36178-5
Online ISBN: 978-3-658-36179-2