Online Hate Speech and the Role of Digital Platforms: What Are the Prospects for Freedom of Expression?

The Rule of Law in Cyberspace

Part of the book series: Law, Governance and Technology Series (LGTS, volume 49)

Abstract

In recent years, the subject of freedom of expression has been extended to two new phenomena: the web and hate speech. The former has to do with the extent (potentially infinite and uncontrolled) of freedom; the latter has to do with its limits and brings into play the fundamental principles of protection of the individual and respect for human dignity, as well as the principle of non-discrimination. The present essay addresses the controversial issue of the repression of hate speech by online platforms and the new role assigned to them, namely regulating users’ fundamental rights.

The two authors collaborated in the design and drafting of this essay. However, paragraphs 2, 3, 4 and 5 are to be attributed to G. Cerrina Feroni and paragraphs 6, 6.1, 6.2, 6.3, 7 and 8 to A. Gatti. Paragraphs 1 and 9 were jointly drafted.

Notes

  1.

    Pollicino and De Gregorio (2019), pp. 421–436, in particular p. 422.

  2.

    Abbondante (2017), pp. 42–43.

  3.

    The definition is quoted literally from Spigno (2018), in particular p. 17, which may be referred to for other literature references. See, among recent works by Italian legal commentators, Pollicino et al. (2017).

  4.

    Committing the States parties to “declare an offence punishable by law all dissemination of ideas based on racial superiority or hatred, incitement to racial discrimination, as well as all acts of violence or incitement to such acts against any race or group of persons of another colour or ethnic origin, and also the provision of any assistance to racist activities, including the financing thereof” (letter a); to “declare illegal and prohibit organizations, and also organized and all other propaganda activities, which promote and incite racial discrimination, and shall recognize participation in such organizations or activities as an offence punishable by law” (letter b); and not to “permit public authorities or public institutions, national or local, to promote or incite racial discrimination” (letter c).

  5.

    This was followed by other documents, such as, for example, again in the framework of the Council of Europe, the Additional Protocol to the Budapest Convention on Cybercrime, signed in Strasbourg on 28 January 2003, concerning the criminalisation of acts of a racist and xenophobic nature committed through computer systems. It obliges the States parties to adopt criminal sanctions to punish the dissemination of racist and xenophobic material through computer systems, racist and xenophobic threats and insults, and the denial, gross minimisation, approval or justification of genocide or crimes against humanity.

  6.

    It is far from simple to maintain a distinction between hate speech and hate crimes. See Aliberti (2019), pp. 171–192, in particular p. 175; referring to the annual reports of the OSCE, the author notes that the distinction between hate crimes and hate speech lies in the fact that the former manifest themselves as material conduct, whilst the latter are only an expression of words. However, some legal commentators argue that hate speech should in any case be punished, as it violates the principle of equality and the dignity of individuals belonging to groups or communities constituting minorities in society, based on race, language, ethnicity, religion, nationality, etc.

  7.

    Coleman (2016) notes a paradox: that “students in leading the universities want to be protected from offence more than they want the freedom of speech”, and to this end they often support speech-limiting regulations (p. 115).

  8.

    These instruments represent a form of pressure, or collateral censorship as it were: fearing the introduction of stricter rules or hard-law measures, online intermediaries adapt their conduct to the guidelines received.

  9.

    Council Framework Decision 2008/913/JHA of 28 November 2008.

  10.

    European Commission (2016) Code of Conduct on Countering Illegal Hate Speech Online, text available at https://ec.europa.eu/info/policies/justice-and-fundamental-rights/combatting-discrimination/racism-and-xenophobia/eu-code-conduct-countering-illegal-hate-speech-online_en#theeucodeofconduct.

  11.

    European Commission, Code of Practice on Online Disinformation, text available at https://ec.europa.eu/digital-single-market/en/news/code-practice-disinformation. Since the main objective of the Code is to combat “fake news”, it addresses the issue of hate speech only indirectly, that is, only to the extent that fake news has discriminatory aims and seeks to spread hatred against a minority. As is specified in Annex II of the Code (“Current best practices”): “This could include (but is not limited to) hate speech”.

  12.

    European Commission, Communication on Tackling Illegal Content Online. Towards an enhanced responsibility of online platforms, COM(2017) 555 final.

  13.

    European Commission, Recommendation of 1 March 2018 on measures to effectively tackle illegal content online (C(2018) 1177 final), whereby hosting platforms are encouraged to publish clear, easily understandable and sufficiently detailed criteria for the removal or disabling of access to content considered to be hate speech (cf. § 16 of the Recommendation).

  14.

    The text of the Code literally states: “IT Companies, taking the lead on countering the spread of illegal hate speech online…”.

  15.

    By way of example, we quote two passages from the Recommendation in question: § 13: “Those principles should be set out and applied in full respect for the fundamental rights protected in the Union’s legal order and notably those guaranteed in the Charter of Fundamental Rights of the European Union (‘the Charter’). Illegal content online should be tackled with proper and robust safeguards to ensure protection of the different fundamental rights at stake of all parties concerned”; § 19: “In order to enhance transparency and the accuracy of notice-and-action mechanisms and to allow for redress where needed, hosting service providers should, where they possess the contact details of notice providers and/or content providers, timely and adequately inform those persons of the steps taken in the context of the said mechanisms, in particular as regards their decisions on the requested removal or disabling of access to the content concerned”.

  16.

    Addressing the subject of ECtHR case-law on hate speech would require a separate essay. One need only refer to the factsheet available at https://www.echr.coe.int/Documents/FS_Hate_speech_ENG.pdf; see also Morelli and Pollicino (2018), pp. 1–24, specifically § 6, and Mir and Bassini (2016), pp. 71–93.

  17.

    The community policies can be found at https://www.facebook.com/communitystandards/hate_speech/.

  18.

    Ibid.

  19.

    Obvious as it may seem, there is no underlying investigation as to whether the targeted individual is actually a member of one of the minorities traditionally protected against hate speech. Given the substantially inquisitorial character of the censorship procedure, his or her membership is assumed, just as the intent to offend and discriminate can be assumed. Though this may be easy in the case of discrimination based on manifest physiognomic attributes (for example, calling someone a “negro”), it is more difficult to identify an intention to discriminate on, for example, grounds of religion or sexual orientation, or offensive speech directed against a character trait (for example, calling radical Islamists “insane”).

  20.

    Cf. https://help.twitter.com/it/rules-and-policies/hateful-conduct-policy.

  21.

    Regarding the “basic congenital ambiguity” of the role played by Tech Companies in relation to the freedom of expression, see, among others, Bazzoni (2019), pp. 635–643, in particular p. 639.

  22.

    It has been underscored that this private architecture is: “the central battleground over free speech in the digital era”, Balkin (2014), pp. 2296–2342.

  23.

    Conti (2018), pp. 200–225.

  24.

    As likewise affirmed by Balkin (2018), p. 1181: “Companies that began as technology companies soon discover not only that they are actually media companies, but that they are also governance structures”. See also Gatti (2019), pp. 711–743, in particular pp. 719 ff.

  25.

    On this point see Zuckerberg (2017) Building Global Community, Facebook, 16 February 2017, available at https://www.facebook.com/notes/mark-zuckerberg/building-global-community/10154544292806634.

  26.

    Klonick (2018), pp. 1598–1670, available online at https://harvardlawreview.org/wp-content/uploads/2018/04/1598-1670_Online.pdf.

  27.

    Helmond (2015), pp. 1–11.

  28.

    Srnicek (2016).

  29.

    Regarding the democratic and transparent design of platforms as a tool intended not to replace but rather to support national legal systems in developing a new ecosystem, see Mueller (2017).

  30.

    The paradox has been highlighted by many authors, including Conti (2018), p. 202, where the author affirms: “the infrastructures which, in the first place, enable the development of democratic discourse, because they allow anyone to speak, can serve as a means of exercising new forms of censorship and control of democratic discourse, as well as control of people who, in a given area, express an opinion in terms that displease those who exercise surveillance”; similarly, De Gregorio (2019), pp. 1–28, p. 3 ff.: “On the one hand, social media commit to protecting free speech, while, on the other hand, they moderate content regulating their communities for business purposes”.

  31.

    When users look up content associated with terrorism, an algorithm shows them content (playlists, videos, documentaries, etc.) aimed at debunking or discrediting the terrorist narrative. Cf. https://redirectmethod.org.

  32.

    This is the focus of the investigation by Van Dijck et al. (2018).

  33.

    European Commission, see https://ec.europa.eu/commission/presscorner/detail/en/IP_19_805.

  34.

    https://www.theguardian.com/technology/2019/nov/24/tim-berners-lee-unveils-global-plan-to-save-the-internet.

  35.

    Lessig (2006), pp. 96 ff.

  36.

    Regarding the vagueness of the legal standards serving as a basis for the assessments, mostly limited to internal guidelines not accessible to the public, and thus in full violation of the principle of the rule of law and democratic values, cf. Suzor (2019). Cf. also Belli and Venturini (2016), archived at https://policyreview.info/node/441/pdf.

  37.

    Fisher, Inside Facebook’s Secret Rulebook for Global Political Speech. New York Times, 27 December 2018.

  38.

    Klonick (2018), pp. 1622 ff.

  39.

    Abbondante (2017), pp. 41–68. The author notes “Given the briefness of the period allowed, there cannot be a procedure for verifying the behaviour of intermediaries – which thus act on the basis of the contractual rules highlighted above – with an even greater risk of removal of lawful speech” (p. 65).

  40.

    European Commission, Recommendation of 1 March 2018 on measures to effectively tackle illegal content online (C(2018) 1177 final), cit.

  41.

    Regarding the need to limit to the maximum possible degree the possible repressive actions of the State, see Kirshner (2014), pp. 40–41.

  42.

    See Court of Rome order no. 59264/19 of 12 December 2019 http://www.ansa.it/english/news/politics/2019/09/09/casapound-facebook-instagram-blocked_fab0cb8c-ccce-4247-b7cb-d958d1951a7c.html.

  43.

    Social networks now commonly use algorithms (i.e., sequences of elementary instructions by which a computer processes data according to defined logical rules in order to solve a problem) not only to conduct economic and behavioural analyses, but also to trace offensive or discriminatory statements (the so-called removal algorithm), along the lines sketched below.
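
    Purely by way of illustration, the following minimal Python sketch shows the general logic of such a flagging routine. It is a simplification under stated assumptions, not any platform's actual system: the blocklist terms, threshold and function name are hypothetical, and production classifiers are machine-learning models rather than keyword filters (cf. note 45).

    ```python
    # Minimal illustrative sketch of a keyword-based "removal algorithm".
    # All terms, names and thresholds are hypothetical placeholders.
    import re

    BLOCKLIST = {"slur1", "slur2"}  # hypothetical stand-ins for slurs
    THRESHOLD = 1                   # hypothetical: one match flags the post

    def flag_for_removal(post: str) -> bool:
        """Return True if the post contains enough blocklisted terms."""
        tokens = re.findall(r"[a-z0-9']+", post.lower())
        hits = sum(token in BLOCKLIST for token in tokens)
        return hits >= THRESHOLD

    # The sketch also exposes the context problem discussed in note 48:
    # a post condemning a slur is flagged exactly like the slur itself.
    print(flag_for_removal("calling someone slur1 is unacceptable"))  # True
    ```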

  44.

    In the words of Zuckerberg himself, the algorithm represents “the single most important improvement in enforcing our policies, because it can quickly and proactively identify harmful content”. Zuckerberg (2018) A Blueprint for Content Governance and Enforcement, Facebook, 15 November 2018, https://www.facebook.com/notes/mark-zuckerberg/a-blueprint-for-content-governance-and-enforcement/10156443129621634/. On the use of algorithms in the administration of justice, see, ex multis, Luciani (2018), pp. 872–893; Holder et al. (2016), pp. 384 ff.

  45.

    On the use of the removal algorithm, or the “classifier” as it is called internally, see https://time.com/5739688/facebook-hate-speech-languages/.

  46.

    Benjamin (2013), pp. 1445–1494, in particular: “What if we assume that Google (or another algorithm-based provider) does not care about “quality”, but instead only about relevance and usefulness for the user? Are Google’s algorithm-based outputs based on its understanding of relevance and usefulness speech under the Supreme Court’s jurisprudence? Yes. Google disclaims any adoption of the expression in the sites it finds, but it is making all sorts of judgments in determining what its customers want”. The author, in turn, cites Goldman (2006), pp. 192 ff. Available at http://digitalcommons.law.scu.edu/facpubs/76.

  47.

    Ex multis, Balkin (2018) in particular pp. 1166 ff.; Buni and Chemaly (2016), archived at https://www.theverge.com/2016/4/13/11387934/internet-moderator-history-youtube-facebook-reddit-censorship-free-speech.

  48.

    The problem with algorithms is tied first of all to their inability to understand the context of the speech. Moreover, since many of the categories that may give rise to “problematic” speech are only vaguely defined in terms of their structure and extent, human intervention proves to be not only appropriate but necessary. Regarding the problem of context, cf. Finck (2019), pp. 8 ff.

  49.

    Nicotra and Varone (2019), pp. 87–106, p. 90.

  50.

    Recommendation CM/Rec(2018)2 of the Committee of Ministers to member States on the roles and responsibilities of internet intermediaries. Cf. the paragraph entitled “Access to an effective remedy” of the Recommendation, where it is stated: “[Platforms] should furthermore ensure that intermediaries provide users or affected parties with access to prompt, transparent and effective reviews for their grievances and alleged terms of service violations, and provide for effective remedies, such as the restoration of content, apology, rectification or compensation for damages. Judicial review should remain available, when internal and alternative dispute settlement mechanisms prove insufficient or when the affected parties opt for judicial redress or appeal”.

  51.

    Among the first comments, Douek (2019), available at https://www.lawfareblog.com/how-much-power-did-facebook-give-its-oversight-board, 25 September 2019; Weinzierl (2019), https://verfassungsblog.de/difficult-times-ahead-for-the-facebook-supreme-court/.

  52.

    Johnson and Post (1996), pp. 1367 ff.

  53.

    Belli and Venturini (2016) and Bassini (2019), pp. 198 ff.

  54.

    Cf. https://www.reuters.com/article/us-twitter-deepfakes/twitter-wants-your-feedback-on-its-deepfake-policy-plans-idUSKBN1XL2C6. However, in this case as well, we cannot generalise: Twitter itself is accused of having an excessively liberal policy, that is, of failing to remove comments that can be qualified as hate speech because they are considered to have political content (of “public interest”) and are thus afforded greater protection than other types of expression. See https://www.aljazeera.com/ajimpact/critics-twitter-treats-hate-speech-public-interest-191016225423140.html.

  55.

    De Gregorio (2019), p. 4.

  56.

    Van Dijck et al. (2018), p. 164.

  57.

    The European Union has already moved in this direction. In its Communication on Online Platforms and the Digital Single Market: Opportunities and Challenges for Europe, COM(2016) 288 final, the European Commission affirmed that: “In respect of access to information and content for many parts of society, platforms are increasingly taking centre stage. This role, necessarily, brings with it a wider responsibility”.

Author information

Correspondence to Ginevra Cerrina Feroni.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

Cerrina Feroni, G., Gatti, A. (2022). Online Hate Speech and the Role of Digital Platforms: What Are the Prospects for Freedom of Expression? In: Blanco de Morais, C., Ferreira Mendes, G., Vesting, T. (eds) The Rule of Law in Cyberspace. Law, Governance and Technology Series, vol 49. Springer, Cham. https://doi.org/10.1007/978-3-031-07377-9_14

  • DOI: https://doi.org/10.1007/978-3-031-07377-9_14

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-07376-2

  • Online ISBN: 978-3-031-07377-9

  • eBook Packages: Law and Criminology (R0)
