
1 Introduction

Online hate speech has profound consequences for individuals, groups, and community resilience. Research shows that hate speech can erode social cohesion, fracture community bonds, and create divisions within online communities (Citron, 2014). Hate speech can perpetuate discriminatory attitudes, reinforce stereotypes, and marginalize targeted groups, undermining the inclusive fabric of communities (Baumgartner et al., 2019). Studies show that online hate speech is not only a complex, multi-disciplinary problem (Paz et al., 2020), but also on the rise globally (Tontodimama et al., 2021; Laub, 2019), including in Norway (Medietilsynet, 2022; Wigh, 2022; Hatkriminalitet, 2019, 2021, 2022). A recent study on the proliferation of online hate speech, covering 10.5 million Norwegian comments on Facebook over a period of 18 months, shows that 1.7 percent of the comments qualify as attacks and 0.4 percent as hate speech (Nordic Safe Cities, 2023).

Given the steady growth of online hate crimes in Western societies over recent years, combating them has become crucial for governments, organizations, local communities, and stakeholders (Kalsnes & Ihlebæk, 2021; NOU 2022: 9). Legislation has been one way of fighting online hate speech globally (UN, 2023) and in Europe (see the Council of Europe recommendations and frameworks). As in several other European countries, the Norwegian constitution and several statutes prohibit hate-based discrimination and the abuse of individual and group rights. In Norwegian legislation, the so-called hate speech paragraph (§ 185), placed in Chapter 20 of the Penal Code, emphasizes the protection of society, public peace, order, and security. § 185 deals with hate speech targeted at vulnerable groups, based on ethnicity, skin color, gender, religion, disability, sexual orientation, or worldview. The ongoing debate and scholarship on online hate speech in Norway gained momentum in 2020, after a legal precedent was set with the first guilty verdict for racial online hate crime (Nguyen, 2020). In August 2022, a special committee on freedom of expression released a new report that further defines and gives guidance on freedom of expression, including online expressions (NOU 2022: 9). Despite this constitutional and legislative recourse, lingering tensions and grey areas remain, including where freedom of speech collides with other rights and interests (Kierulf, 2021). We revisit some of these debates by exploring how the police force currently encounters this evolving provision (§ 185).

As a key stakeholder in building community resilience against online hate speech, and because of their close work with § 185 in Norwegian contexts, the police have been selected as the main target group for this paper. The police in Norway engage in several activities in their work on online hate speech. Their work includes (but is not limited to) outreach programs for awareness-raising and education (such as producing TikTok content and reaching out to minority social arenas such as mosques), partnerships with other stakeholders, net patrols, a tips portal, and legislative processes. More centrally, their task involves applying and interpreting the legislation (§ 185) to establish hate crime cases. This paper seeks answers to the following research question:

What are the challenges in the work against online hate speech generally, and more specifically, how do ambiguous aspects within legislation (§ 185) affect law enforcement’s ability to regulate online hate speech?

The paper’s focus on § 185 is not intended as a legal analysis per se, but rather to highlight how the interviewees in the police force reflect, among other issues, on the importance of the socio-cultural context of hate speech. The socio-cultural lens of analysis is vital for deepening our understanding of the societal implications of legislative processes and outcomes (a focus we also recommend later in the chapter when discussing the development of AI tools). It is a lens well articulated by Gagliardone, Gal, Alves, and Martinez, who caution that a “…purely legal lens can miss out on how societies evolve through contestation and disagreement” (Gagliardone et al., 2015, p. 15). To highlight the existing tensions around § 185, we bring into view three recent public debates, which we refer to as the Sumaya Facebook case, the Sumaya-Atle case, and the Swastika case.

With regard to AI, it was established early in our exploratory study that, although highly desired, AI tools were not yet employed by the police in their work. Given that our overall computational social science project also aims to develop real-time AI tools to support community resilience work such as that done by the police, we include brief reflections on how our findings are relevant for AI development and what measures ought to be integrated into the development process.

Methodologically, the study adopts a qualitative approach encompassing ‘bottom-up’, dialogical, and interdisciplinary tenets of research. What follows is a brief contextualization of the paper and its conceptual framework, followed by the methodology, presentation of the analysis, and discussion of findings. The concluding remarks include recommendations for police work, within the framework of § 185, on community resilience against hate speech.

2 Paragraph 185 Contextualized Within Three Use Cases

§ 185 on hateful utterances was enacted in 1970 but has been amended several times, most recently in 2020. The paragraph does not cover every hateful utterance; the speech must target a particular vulnerable group, defined by ethnicity, skin color, religion, disability, sexual orientation, or worldview. This right to protection by the law sometimes collides with freedom of speech (Jakubowicz et al., 2017). Conditions for conviction include: the act must be public (Lovdata, § 185, first section, “in the presence of others” [our translation]); there must be intent to cause harm or to demean someone (Lovdata, § 185, first section, “intentionally or grossly negligent” [our translation]); the act must have a certain degree of offensive impact (Lovdata, § 185, second section, “threaten or insult somebody, or promote hatred, persecution or contempt towards somebody” [our translation]); and the target must, as mentioned above, belong to a minority based on skin color, race, sexual orientation, religion, gender, or disability (Lovdata, § 185, second section). The context also needs to be taken into consideration (ECHR, 2022, p. 4).

The following use cases illustrate the tensions of § 185. We refer to them as the Sumaya Facebook case, the Sumaya-Atle case, and the Swastika case. In the Sumaya Facebook case, from 2019, a woman wrote to Sumaya Jirde Ali (henceforth Sumaya) in a public Facebook group: “Bloody black offspring, go back to Somalia and stay there, you corrupt cockroach.” Sumaya is a Norwegian-Somali award-winning writer and debater who has lived with threats and harassment for many years because of her religion and origins. The woman who wrote the above sentence was convicted because the case met all the requirements of the hate speech paragraph. The Sumaya-Atle case, from 2022, arose when the famous comedian Atle Antonsen told Sumaya in a bar, “You are too black to be here”, after repeatedly yelling at her to “shut the f**k up”. The violation was reported, but the case was dismissed. The two court cases on hate speech against her thus resulted in a conviction and a dismissal, respectively. Both cases reveal tensions within the hate crime legislation, and between public expectations of what the law covers and the law itself (Lovdata, § 185). Because Atle is a comedian, his intention was judged not to be harmful, and the context of the utterances supported that interpretation: the two were colleagues, and Atle had earlier expressed support for Sumaya in a similar situation.

The intention criterion is often difficult to prove in hate speech cases (Lovdata, § 185, first section). The context of the hate speech, as illustrated above, needs to be taken into consideration. The Swastika case illustrates this point, as well as the wide interpretive space that exists in these cases. The case received much attention in the media: a white supremacist group raised the swastika flag in the Norwegian town of Kristiansand. In the lower court, the men were convicted and fined. The context, in terms of date, place, and symbolism, made the act even more vile and threatening: the date was 9 April, the anniversary of the German invasion of Norway, and the place was Arkivet, a site where opponents of the Nazis were imprisoned and tortured. However, the conviction was appealed, and the court of appeal acquitted the defendants, reasoning that the utterance was not aimed at one of the protected groups, since it also affected many other groups. Beyond these three use cases, the Supreme Court has stated that expressions must be allowed a wide “margin” for tastelessness and offensiveness (Kierulf, 2021).

3 Conceptual Framework

Online hate speech and cybercrime call for a new discipline, social cyber security, and for an infrastructure that allows the essential character of a society to persist through a cyber-mediated information environment (Carley, 2020). AI can be part of such an infrastructure. Social cyber security focuses on how humans, as well as communities and narratives, can be compromised, and how this can be stopped. Within this area, AI can provide new tools to support good decision-making, but such tools have limitations (Carley, 2020). AI and machine learning are immensely valuable for managing extensive datasets. Nevertheless, these systems depend on data structured around signs and vocabularies commonly recognized as indicative of hate. Consequently, AI may struggle to understand more fluid socio-cultural contexts and nuances in language and sentiment (Carley, 2020). The tensions and uncertainties within the legal framework for hate speech underscore the need to take the socio-cultural context into account when developing AI-based tools to combat hate speech (we return to this later). Firstly, the term ‘hate speech’ lacks a universal definition (Assimakopoulos et al., 2017). Hate speech may be seen as “[…] the expression of hatred towards an individual or group of individuals based on protected characteristics” (Assimakopoulos et al., 2017, p. 12), but these characteristics are themselves open to definition. Hate speech can be conveyed through any form of expression, online or offline; it is discriminatory or pejorative of an individual or group; and it “calls out real or perceived ‘identity factors’ of an individual or a group, including ‘religion, ethnicity, nationality, race, color, descent, gender’, but also characteristics such as language, economic or social origin, disability, health status, or sexual orientation”, among others (ECHR-KS, 2022; UN.org).
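
Carley’s (2020) caution can be made concrete with a minimal sketch, entirely our own and hypothetical, of the lexicon-based detection that many such systems build on: utterances containing known hate vocabulary are flagged, while coded or context-dependent hate passes silently.

```python
# Minimal, hypothetical sketch of lexicon-based hate speech flagging.
# The lexicon below is invented for illustration; real systems use far
# larger vocabularies but share the same structural limitation.
import re

HATE_LEXICON = {"cockroach", "vermin"}  # known "signs" of hate

def flag_utterance(text: str) -> list[str]:
    """Return the lexicon terms found in the text (case-insensitive)."""
    tokens = re.findall(r"[a-zæøå]+", text.lower())
    return [t for t in tokens if t in HATE_LEXICON]

# An utterance using a known sign is caught ...
print(flag_utterance("You corrupt cockroach"))           # ['cockroach']
# ... while coded, context-dependent hate passes unnoticed:
print(flag_utterance("Go back to where you came from"))  # []
```

The second utterance may well be hateful in context, but no lexicon lookup can establish that; this is precisely the socio-cultural gap Carley (2020) points to.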

Secondly, there is no comprehensive definition of the concept of hate speech. Instead, the European Court of Human Rights, for instance, approaches the problem on a case-by-case basis (ECHR, 2022, p. 1), which is consistent with Norwegian case law (Kierulf, 2021). The court acknowledges national courts’ responsibility to interpret and apply domestic law, and it has formulated general principles drawn from its case law (ECHR, 2022, p. 1). One such principle is that freedom of expression covers not only the sharing of ideas or information, but also offensive utterances that may disturb the State and sections of the population. Hate speech is often categorized as either hard or soft (Assimakopoulos et al., 2017): hard hate speech is prohibited by law, while soft hate speech is not illegal but still has serious consequences for intolerance and discrimination. The line between the two is drawn differently from country to country. In addition, there is a grey zone of ambiguity where human values and human rights, freedom of expression, and the right to be protected from discrimination may collide. Freedom of speech is the right to utter one’s opinion without interference from the authorities, and the term “speech” covers more than words: also clothing, and what a person reads, performs, protests, and so on (CSUSM, n.d.; Jakubowicz et al., 2017, p. 26). Hate speech such as racism “can be embedded in structures of societies” (Jakubowicz et al., 2017, p. 26), in the benefits that flow to some and the disadvantages that constrain others. To build community resilience against cyber racism, online communities that offer support against racism should be established. By resilience, Jakubowicz et al. (2017) do not mean that victims need to get tougher, nor that others have a right to be bigots; rather, resilience is about “enabling citizens to discern and name racism, and ultimately resist it”, to unpack and challenge hateful behavior (Bodkin-Andrews et al., 2013, in Jakubowicz et al., 2017, p. 277).

The internet is argued to be a non-governable space for several reasons: it creates a favorable setting for unwanted activities due to the sheer quantity of online activity (Bromell, 2022); there is a lack of social and cultural consensus on what content is acceptable and what should trigger intervention (Kierulf, 2021); legislation is not harmonized across borders; and it is easy to remain anonymous (Citron & Norton, 2011; Jakubowicz et al., 2017). The technological provisions that allow pseudonyms, fake accounts, or anonymous browsing tools hinder law enforcement agencies’ ability to trace hateful content, gather evidence, and attribute the content to specific individuals (Citron & Norton, 2011). The lack of identifiable information (for instance, when someone deletes their initial hateful utterances) makes it challenging to initiate legal proceedings or pursue criminal charges (Kovacs et al., 2021). Legal frameworks often require evidence establishing a direct connection between the hate speech and its author, which anonymity obstructs. The practical limitations imposed by anonymity can thus impede enforcement and hinder the pursuit of justice. The absence of personal accountability, and the disinhibition it produces, may contribute to a culture of impunity and increase the prevalence of hate speech online. At the same time, the fear of retaliation or retribution faced by victims and potential whistleblowers can deter them from reporting instances of hate speech, further enabling anonymous perpetrators to operate freely (Citron & Norton, 2011).

Further, there are tensions between different human rights, such as between freedom of expression and freedom of religion, and between freedom of expression and freedom from discrimination. Rogstad (2014, in Colbjørnsen, 2016) points out that discussions on where these boundaries should be drawn often turn into matters of principle, typically driven by a news event that is covered by the media and then debated further on social media. Rogstad calls such events collective references (Colbjørnsen, 2016).

Ortiz (2019) points out that while societal norms often sanction overt racism in offline spaces, the anonymity of online spaces creates opportunities for racist hate speech. One of the strategies used by the men of color in Ortiz’s study was to desensitize themselves to the racism. Soral et al. (2018) suggest that regular exposure to online hate speech “reduces automatic triggering of negative emotional reactions to images, words, or thoughts of violence” (p. 137). This lowered sensitivity has been measured physiologically, for instance as decreased attention to violence and an increased belief that violence is normal. Jakubowicz et al. (2017) reflect on whether cyber racism should be treated as a civil or a criminal wrong, a discussion relevant for several types of hate speech. Criminalization has the advantage that the victim does not have to enforce the matter; moreover, criminal sanctions signal that the state condemns this kind of behavior. On the negative side, criminalization tends to individualize the problem, with the risk of reproducing it (McNamara, 2002).

4 Methodology

The focus of this paper is police work against online hate speech as it intersects with other efforts towards building community resilience: investigating punishable utterances (penal code § 185); preventing radicalization to violent extremism; preserving democracy and countering polarization; working to prevent a reduced sense of safety among, for instance, religious minorities, sexual minorities, and other vulnerable groups (§ 185); and working to prevent a loss of trust in the police among vulnerable groups.

Empirical insights for this paper were obtained through qualitative methodologies, including document analysis, a workshop with stakeholders, interviews, and focus group discussions with individuals within the Norwegian Police force between October 2022 and February 2023.

Workshop:

In the early phase of the research, a workshop was organized with various public and private stakeholders to gain insight into how actors working against online hate speech in different societal arenas perceive their roles and challenges. We also wanted their input for an AI tool. The stakeholders included the Norwegian Media Authority, the police, a voluntary organization, and a local municipality. The workshop began with a presentation of the research project and proceeded with the stakeholders sharing their work experiences; their presentations were transcribed. The stakeholders invited to the workshop also served as gatekeepers, helping us connect with appropriate participants for individual and group interviews.

Interviews:

In the subsequent phase of the research, three key informants from the relevant police units in Bergen and Oslo were invited to interviews. An interview guide was used, encompassing five primary areas of discussion: 1) projects focused on combating online hate speech, 2) organizational structure and workflow, 3) the specific project’s definition of hate speech, 4) the resulting products, such as statistics, reports, and measures, and 5) the challenges encountered in the project. Each interview lasted 45–60 minutes.

Document Review:

During the fieldwork, we collected publicly available reports and notes that were suggested or provided by the participants. These documents included the police’s annual reports on hate crime and Norwegian court rulings (Hatkriminalitet, 2019, 2021, 2022; Borgarting lagmannsrett, LB-2019-177188). The reports were examined to provide context for the research, while the court rulings were studied and analyzed in the context of the discussions that arose in the interviews and workshop.

This study received ethical approval from the Norwegian Centre for Research Data (SIKT) and followed its rules for data security. All informants are anonymized. However, anonymity has its limits, as only a few departments in the Norwegian police force are involved in combating online hate speech. For anonymity, we identify informants by letters.

Analytical Framework

Inspired by grounded theory (Charmaz, 2014), we applied an inductive qualitative analysis. The process started with coding the data material from the interviews and workshop; at this stage, the research team aimed to become familiar with the data. The work then developed into finding patterns across the interviews, going back and forth between the court rulings, the interviews, and the workshop discussions. In this process, codes were categorized. The categories relevant for this paper were interpretation of the law, definition of hate speech, identifying online hate speech, and challenges in combating online hate speech. Following the steps of grounded theory, the identified categories, or central themes, are presented as findings and discussed in the context of earlier research and literature.

5 Presentation of Key Findings on Challenges of Regulatory Work on Online Hate Speech

It is important to note that during the fieldwork, the police did not utilize AI tools. However, they did employ digital and online technologies, primarily open source, in conjunction with outreach efforts such as talks in cafes and mosques. The interviews frequently highlighted the use of the web-based tips portal, net patrols, and the police’s presence on popular youth-oriented social media platforms such as TikTok. The informants saw the need for an AI tool that could support these existing digital efforts. In this section and the following pages, we delve deeper into the challenges faced by the police in regulating online hate speech, shedding light on the implications of § 185 and the areas of ambiguity it presents.

Police reports (Hatkriminalitet i Oslo PD, 2019, p. 4; 2020, p. 5; 2021, p. 4) highlight a rise in online hate speech. Despite this, fewer online hate speech cases were reported to the police (Informant A). The tips portal (Tipsportalen), a web-based scheme created by the police, gives the public access to report cases of hate speech, including online hate speech. However, Informant C underscores that “there is a massive underreporting of hate crime to the police. Both online and offline.”

Difficulties in Understanding the Parameters of § 185

All our informants reflected on whether and how § 185 is challenging. For instance, when members of the public register their experiences of hate speech on the police website (Tipsportalen), the police hate crime team interprets the paragraph to determine whether to open a criminal case. Informant C argues that the paragraph is not easy for the public to understand or interpret. According to her:

I do not think people walk around with Supreme Court judgements in their heads. We experience through the public debate that hate crime is more on the agenda, but the threshold for reporting to the police is far too high. It is not the public’s task to assess whether it is punishable or not, but only to submit incidents. (Informant C)

Commenting on the challenges relating to the ‘intention’, ‘specificity’, and ‘public’ nature of online hateful utterances, Informant A argues that the line between punishable hate speech and freedom of speech is vague and that the requirements for a conviction under § 185 are strict: “Often, one wonders, is the case public enough? Specific enough? Loosening up the requirements a little could solve a lot.” On the question of specificity, Informant A offers an illustration:

There are cases where people of ethnic minority backgrounds have tried to file a report and are rejected at the front desk. This is because reports must first be registered, and the hate motive has to be registered. Identifying the motive behind hate speech, and who the people performing it online are, can be difficult for victims. A rejection at the front desk can be a demoralizing factor for further reports.

Most online hate speech occurs on social media, which challenges traditional public-private boundaries. In addition, such hate speech may circulate on small, closed platforms as well as on open, “public” ones, which complicates the assessment of online hate speech. Recall that, as the Sumaya Facebook case shows, the public nature of the utterance is one of the criteria of § 185. Informant A highlights that interpreting what counts as public in the context of § 185 is difficult when we operate in a digital world where the boundaries between public and private are unclear. Informant A ponders:

Interpretation of § 185 is difficult. For instance, what is ‘in public’? Would a private group on Facebook be considered public? What about a crowd of 10–15 people?

In addition to the paragraph’s lack of simplicity, the difficulty of interpreting it, the public’s underreporting of online hate speech incidents, and the grey areas associated with the specificity and public nature of utterances, another challenge relates to diversity and interpretation across multicultural contexts. Two of our informants explain below:

The police use translation programs or colleagues with language skills, but these come with challenges, as some nuances and contexts may be lost along the way. (Informant A)

For Informant C, the socio-cultural challenge is more complex, as it involves a good understanding of language, context, culture, and rhetoric. It also has a resource aspect:

For things said in other languages, we need both an immediate understanding of a text and to see it in the context of what has been said before. The meaning also depends on culture. We have not had many of those cases, but we have had some. Then we try to identify (human) resources within the police force who can read and assist with how the statement can be interpreted, and then we contact the interpreting services. We have also been in contact with the mosque to get help in understanding the context and culture. That also means understanding rhetoric and which words we use in everyday speech. (Informant C)

We understand that some of the underreporting is related to difficulties in understanding the parameters of § 185, to a lack of trust in the authorities, or to fear that the case will be rejected. However, Informant A highlights another possible explanation for underreporting:

We are witnessing hostile opinions and utterances becoming more socially accepted than before. The tendency is that hostile debates are increasing in society, and there is more acceptance and less social control in this area. (Informant A)

Informant A argues that in today’s online spaces, hate speech is normalized and people are becoming desensitized to it. The statistics are therefore neither accurate nor representative of the scope of hate speech violations, and accordingly the police’s ability to convict offenders is reduced.

The issue of anonymity, and its impact on identifying the individuals behind online hate speech, poses a major obstacle to linking hate speech to real-world identities, which undermines efforts to hold individuals accountable for their actions. Nor is it possible to assess the intention behind the act, the full context, or whether it occurred in ‘public’, all of which are essential to assessing a case legally.

The police further argue that the affordances of anonymity and privacy on social media platforms make their work challenging:

Anonymous profiles online can be a problem. Some suggest that you should log in with a bank ID, etc., but there are many opportunities to make yourself invisible, which makes it difficult for us. Most people who engage in hate speech are around 50 years old. Also, a lot is deleted, and then we struggle if we do not have a screenshot. If it has happened on Facebook, Facebook probably owns it, with a view to tracking it down afterwards. (Informant C)

We see that the inherent ambiguity of hate speech laws may result from the complex nature of hate speech: legislation cannot effectively guide people without a proper understanding of the context within which an utterance occurs. The task becomes even more complicated when it comes to discerning multimodal content (i.e., images, sound, graphics, and video), as our informant laments:

While the law is sometimes difficult to interpret, online hate speech perpetrators are creative and find ways to get around the law. For instance, they may post an image of monkeys instead of writing insults or a hateful utterance. (Informant A)
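
The evasion tactic the informant describes also exposes a technical gap: a text-only detection pipeline never sees image-borne hate at all. The following sketch, entirely our own and hypothetical, shows one defensible design response, routing non-text content to human review rather than silently passing it:

```python
# Hypothetical sketch: text-only filters cannot judge image-borne hate,
# so posts carrying images are escalated to a human review queue rather
# than being silently ignored. All names here are our own.
from dataclasses import dataclass, field

@dataclass
class Post:
    text: str = ""
    image_urls: list[str] = field(default_factory=list)

def triage(post: Post, text_filter) -> str:
    """Return 'flagged', 'passed', or 'human_review' for a post."""
    if post.image_urls:
        # No reliable automated judgment of memes and symbols: escalate.
        return "human_review"
    return "flagged" if text_filter(post.text) else "passed"

# An image-only coded insult is escalated, not waved through:
print(triage(Post(image_urls=["monkeys.jpg"]), lambda t: False))  # human_review
```

The point is not the code itself but the design choice it encodes: content that automated tools cannot assess should widen, not narrow, the human role.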

In sum, police work towards building community resilience against online hate speech is hindered by several challenges, ranging from the legal, to the socio-cultural, to the ethical (privacy and anonymity issues), as well as by technological affordances.

Discussion

§ 185, which falls under Chapter 20 of the Norwegian Penal Code, is primarily aimed at societal protection: preventing expressions that can foster hatred of, or harm against, minorities in society. While acknowledging the paragraph as a positive measure towards curbing online hate speech, our informants also reflected on its limitations and challenges, which are the focus of this paper. The challenges vary from underreporting, lack of trust, fear of harassment, and difficulties in using Tipsportalen, to privacy on social media platforms and difficulties in establishing the contexts and intentions behind online utterances, which goes to the core of the legislative challenge (§ 185). Some of these challenges concern technological infrastructure, and some concern the victims’ experiences. The legislative challenge covers language, interpretation, judicial precedent, and discretion. The grey areas make it demanding for the informants to evaluate reported online hate speech and open hate crime cases. Their arguments include the following: § 185 has a high threshold, which makes it feel out of reach for the offended and too strict, and the requirements should be loosened (Informant A); the parameters separating hate speech from freedom of expression are blurry or not specific enough (Informant C); and the requirements for what counts as a publicly performed hateful utterance are unclear (Informant A). The illustrative use cases presented earlier (the Sumaya Facebook case, the Sumaya-Atle case, and the Swastika case), read against the paragraph’s requirements of intent (Lovdata, § 185, first section), publicness (Lovdata, § 185, first section), context (ECHR, 2022, p. 4), and the level of offensiveness and tastelessness of hateful utterances (Lovdata, § 185, second section), further illustrate how this lack of clarity complicates the interpretation and operationalization of the paragraph (Kierulf, 2021).

Further, according to Kierulf (2021), human language will always be more open to interpretation than human actions. § 185 stipulates that the meaning of an utterance must never be interpreted beyond the actual phrase, to avoid convictions for something suspects did not utter or intend to utter; this principle needs to be made clearer, or more literal, in the legislation (Kierulf, 2021). When it comes to balancing freedom of expression against hate speech, Norwegian legislation and courts are bound by the European Court of Human Rights, which holds that the two must be weighed in each individual case, just as the utterance must be evaluated in the specific context in which it is made (ECHR-KS, 2022). Concrete boundaries are easier to establish for actions than for utterances, but even with actions there are grey areas. An interpretive space will always be needed, and more so when it comes to language; this will often be perceived as vagueness. Also, when building community resilience through, for instance, an AI tool and machine learning that can identify hate speech terms and expressions, it is worth remembering AI’s limitations in interpreting more volatile socio-cultural contexts and sentiments in human language (Carley, 2020). It is, in other words, difficult to avoid grey areas when interpreting and regulating utterances, and to achieve community resilience, we need humans in the loop.
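
What “humans in the loop” could mean in practice can be sketched concretely. In the hypothetical structure below (our own, not an existing police system), an AI component may pre-fill suggestions for each legal criterion of § 185, but only explicit human assessments count towards the legal evaluation:

```python
# Hypothetical human-in-the-loop case record for § 185 assessments.
# An AI model may suggest values, but each criterion requires human
# confirmation before the case proceeds. All names are our own.
from dataclasses import dataclass

@dataclass
class Criterion:
    ai_suggestion: bool | None = None     # model's guess, advisory only
    human_assessment: bool | None = None  # the authoritative judgment

@dataclass
class HateSpeechCase:
    utterance: str
    made_in_public: Criterion
    intent_to_harm: Criterion
    targets_protected_group: Criterion
    sufficiently_offensive: Criterion

    def ready_for_prosecution_review(self) -> bool:
        # Only confirmed human assessments count towards the evaluation.
        return all(
            c.human_assessment is True
            for c in (self.made_in_public, self.intent_to_harm,
                      self.targets_protected_group, self.sufficiently_offensive)
        )
```

Such a structure keeps the interpretive space, and the grey areas, where the informants argue they belong: with humans.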

The ‘Chaotic’ Terrain of Online Hate Speech

The findings highlight how community resilience efforts by the police must be placed within the chaotic contextual terrain of digitalization and online affordances within which hate speech survives and thrives. The terrain is ‘chaotic’ because it lacks coherence, which creates confusion and makes it difficult to establish clear guidelines or solutions. Informant A points to the concerning trends of the normalization of hate speech through repeated exposure, the possibilities of anonymity, and disinhibition, echoing warnings from scholars such as Kircaburun et al. (2018) and Tynes et al. (2018) that repeated exposure to online hate speech leads to desensitization and negative effects on empathy and attitudes. Duffy and Chan (2019) also point to ethical concerns about privacy: the fear of being incorrectly associated with hate speech may lead individuals to self-censor or refrain from engaging in legitimate expression.

Novelty and Trust

As the informants indicated, the police’s challenges in regulating online hate speech are also due to its novelty compared with the regulation of traditional, offline forms of hate speech and other forms of discrimination (such as under the gender equality law). Our study supports existing research identifying novel legal frameworks and definitions (Trottier & Fuchs, 2014), jurisdictional complexity (Feldman, 2019), technological advancement and evasion tactics (Citron & Norton, 2011), and the dynamic nature of online platforms (Van Dijck, Poell, & De Waal, 2018) as legitimate challenges that are also relevant in Norwegian contexts. The novelty of these practices and laws presents challenges for trust-building, the credibility of enforcement mechanisms, user engagement in reporting, and platform cooperation. Our findings highlight in particular the issue of trust among the public, especially among victim groups and communities, as indicated by Informant C.

Socio-Cultural Sensitivities

The work that the police undertake, particularly the interpretation of the law, is arguably limited in its consideration of victims’ perspectives, experiences, and contexts of online hate speech. The high threshold for conviction under § 185 means that many victims (often minorities, in this case ethnic minorities) cannot obtain legal recourse from the racism clause, even though research shows that hate speech, whether soft or hard, has consequences for victims and victim communities, ranging from harm to health and wellbeing (Burks et al., 2018; Gagliardone et al., 2015) to their right to live in peace and security (UN Declaration of Human Rights, Article 28). That a conviction for online hate speech requires, among other things, that the utterance be highly offensive as understood by the public (Kierulf, 2021) calls for a debate about power and hegemony over marginalized groups. As Informant C highlights, although there may be intersections with the majority (Rios & Cohen, 2023), majority and minority understandings and experiences of online hate may not be similar, given differences in culture, language, and context.

6 Concluding Remarks and Implications for AI Development

This article explores the challenges faced by the police in their efforts to combat online hate speech. The focus and purpose of § 185 is to protect society from hate speech directed at particularly vulnerable groups. However, the qualitative interviews conducted with stakeholders in this study indicate that, while necessary and useful in some cases, the legislative provisions are neither sufficient nor efficient enough to cover the scope, complexity, and nuances of victims’ experiences, desired societal perspectives, and varied contexts. The police, responsible for enforcing the law aimed at protecting society from online hateful expressions, point out that the wide interpretive scope for online expressions, which is much broader than for acts of hate, affects their ability to address online hate speech. For instance, no one should be convicted of more than they intended, and language and messages can be interpreted differently depending on the context. With comprehensive language (including multimodal) training, AI can assist the police in identifying possibly hateful expressions, but the many ever-changing layers of complexity and the nuanced aspects of language, socio-cultural codes, traditions, and contexts imply that an AI tool cannot fully comprehend it all on its own. Humans are required in the loop to provide context and accurate interpretation of language.