
The Regulation and Governance of Online Hate Speech in the Post-truth Era: A European Comparative Perspective

Chapter in The Indian Yearbook of Comparative Law 2019, part of the book series The Indian Yearbook of Comparative Law (IYCL).

Abstract

Europe is experiencing an intense dilemma in regulating hate speech and online harassment. The question on the European continent has shifted from whether there should be limits to freedom of expression to where those limits should be placed. The aim of this chapter is to explore the features of some of the approaches to online hate speech regulation undertaken in Europe within the broader digital ecosystem, through a comparison between a supra-national and a national example, the EU and Germany respectively, in order to reflect on the broad directions of online content moderation as it relates to hate speech. To do so, the analysis first discusses the normative underpinnings of hate speech regulation, touching briefly on the main applicable international legal standards and then on the (often) invoked concepts of autonomy and dignity as they relate to freedom of expression. It then proceeds with an overview of the current features of the governance of online hate speech, addressing two concrete regulatory challenges: the use of Artificial Intelligence (AI) by Information Technology (IT) companies and the limits of the role of states in online content regulation (digital authoritarianism). The second part of the chapter compares the EU framework on online hate speech regulation with the German one, concluding with an argument for more regulatory imagination in combatting hate speech in the digital ecosystem, one that escapes the routine of shifting regulation to non-state actors such as online intermediaries.


Notes

1. Zachary Laub, 'Hate Speech on Social Media: Global Comparisons' (Council on Foreign Relations, 7 June 2019).

2. Jack M. Balkin, 'Old-School/New-School Speech Regulation' (2014) 127 Harvard Law Review 2296.

3. This is the key concept of collateral censorship.

4. According to Balkin, these are the two main features of new-school regulatory goals. ibid 2341.

5. Filtering systems block online speech without any opportunity to contest the filter and without procedural protection or visibility for speakers. See Balkin (n 2) 2318.

6. Daphne Keller, 'Internet Platforms: Observations on Speech, Danger and Money' (2018) Aegis Series Paper No. 1807, 10 <https://cyberlaw.stanford.edu/files/publication/files/381732092-internet-platforms-observations-on-speech-danger-and-money.pdf> accessed 1 May 2020.

7. Article 19, 'The Expression Agenda Report 2016/2017: The State of Freedom of Expression and Information Around the World' (Article 19, 2017) <https://www.article19.org/xpa-17> accessed 1 May 2020.

8. David Kaye, Report of the Special Rapporteur on the Promotion and Protection of the Freedom of Opinion and Expression (A/74/48050, 9 October 2019) para 6 [hereafter Special Rapporteur 2019 Report].

9. Special Rapporteur 2019 Report, paras 10, 21–22. The cases of blasphemy and holocaust denial seem to fall into this category, according to the Report. A more detailed framework for the interpretation of Article 20(2) ICCPR, particularly relevant for the criminalization of hate speech, is to be found in the Rabat Plan of Action. The six factors determining whether a hate speech act is severe enough to warrant criminalization are: the context, the status of the speaker, the intent, the content and form of the speech, the extent/reach of the speech act and the likelihood of harm. See Report of the United Nations High Commissioner for Human Rights on the Expert Workshops on the Prohibition of Incitement to National, Racial or Religious Hatred (A/HRC/22/17/Add.4, 11 January 2013).

10. Compare for example Delfi v Estonia, Appl. No. 64569/09, Judgment of 16 June 2015 with MTE v Hungary, Appl. No. 22947/13, Judgment of 2 February 2016, also discussed further in part IV of the article.

11. Special Rapporteur 2019 Report, para 54.

12. Special Rapporteur 2019 Report, para 55.

13. Catherine O'Regan, 'Hate Speech Online: An (Intractable) Contemporary Challenge?' (2018) 71(1) Current Legal Problems 403–429, 409.

14. ibid 411–412.

15. For a different argument, see Eric Heinze, Hate Speech and Democratic Citizenship (OUP, 2016), where he identifies a causal link between hate speech and public order threats in countries that are not stable and prosperous democracies. On his view, the prohibition of hate speech is necessary only in such countries.

16. Jeremy Waldron, The Harm in Hate Speech (Harvard University Press, 2012).

17. ibid 5.

18. Saskatchewan (Human Rights Commission) v Whatcott [2013] SCC 11.

19. For more see Julian Walker, Hate Speech and Freedom of Expression: Legal Boundaries in Canada (Background Paper, Library of Parliament Publication No. 2018-25-E, 29 June 2018) 2.

20. R v Keegstra [1990] 3 SCR 697.

21. Canada (Human Rights Commission) v Taylor [1990] 3 SCR 892.

22. Pompeu Casanovas and Andre Oboler, 'Behavioural Compliance and Law Enforcement in Online Hate Speech', Proceedings of the 2nd Workshop on Technologies for Regulatory Compliance (2019) 125–134, 125 <https://ceur-ws.org/Vol-2309/> accessed 1 May 2020.

23. ibid 126.

24. Njagi Dennis Gitari et al., 'A Lexicon-Based Approach for Hate Speech Detection' (2015) 10(4) International Journal of Multimedia and Ubiquitous Engineering 215–230.

25. Timothy Garton Ash, Free Speech: Ten Principles for a Connected World (Yale University Press, 2016).

26. Majid Yar, 'A Failure to Regulate? The Demands and Dilemmas of Tackling Illegal Content and Behaviour on Social Media' (2018) 1(1) International Journal of Cybersecurity Intelligence and Cybercrime 5–20, 6.

27. ibid 9.

28. Home Affairs Select Committee Report, Hate Crime: Abuse, Hate and Extremism Online (2017) 10 <https://publications.parliament.uk/pa/cm201617/cmselect/cmhaff/609/609.pdf> accessed 1 May 2020. The Report suggested that in the case of Google self-regulation produced the opposite result, whereby Google 'profited from hatred and has allowed itself to be a platform from which extremists have generated revenue.'

29. Jack M. Balkin, 'Free Speech as a Triangle' (2018) 118 Columbia Law Review 2011–2055, 2014.

30. ibid 2021.

31. ibid 2025. In relation to transparency, Balkin notes that certain users are at times afforded preferential treatment on some platforms with respect to freedom of expression (e.g., the case of Trump and Facebook).

32. ibid 2029. Balkin refers to this process as 'privatized bureaucracy'.

33. Case C-131/12 Google Spain SL v Agencia Española de Protección de Datos (2014) ECR 317, paras 93–94.

34. Balkin (n 29) 2033.

35. ibid 2036.

36. ibid 2041.

37. United Nations Human Rights, Guiding Principles on Business and Human Rights: Implementing the United Nations 'Protect, Respect and Remedy' Framework (HR/PUB/11/04, 2011) Principles 13–21 <https://www.ohchr.org/documents/publications/guidingprinciplesbusinesshr_en.pdf> accessed 1 May 2020.

38. European Court of Human Rights, Otto-Preminger-Institut v Austria, Appl. No. 13470/87, Ser. A, 17 EHRR.

39. See for example, European Court of Human Rights, Kühnen v Federal Republic of Germany, Appl. No. 12194/86, Decision of 12 May 1988.

40. Evelyn Mary Aswad, 'The Future of Freedom of Expression' (2018) 17(1) Duke Law and Technology Review 27–70, 47.

41. ibid 48.

42. ibid 48. Evelyn Mary Aswad refers to initiatives undertaken by Facebook to support dialogue and counter-narrative approaches, as well as to Google's initiatives on media literacy.

43. ibid 49.

44. See for example, Twitter, Our Range of Enforcement Options (2020) <https://help.twitter.com/en/rules-and-policies/enforcement-options>.

45. Aswad (n 40) 66. Evelyn Mary Aswad makes the case for the alignment of corporate regulation of speech with international human rights law as the most efficient way forward.

46. The question is aptly considered in Aswad (n 40) 56.

47. According to Finck, Facebook has reportedly hired 20,000 workers to detect hate speech and YouTube 10,000 to check content for compatibility with its community standards. Michele Finck, Artificial Intelligence and Online Hate Speech (Centre on Regulation in Europe, January 2019) 4.

48. For example, Twitter has been used for both Jihadist hate speech and right-wing hate speech. See Finck (n 47) 4 (her footnotes 3 and 4).

49. 'Facebook Publishes Enforcement Numbers for the First Time' (Facebook, 15 May 2018) <https://newsroom.fb.com/news/2018/05/enforcement-numbers> accessed 1 May 2020.

50. Finck (n 47) 6.

51. The German NetzDG, with its fining system, is illustrative of this type of pressure.

52. European Court of Human Rights, Handyside v UK, Judgment of 7 December 1976, Series A, No. 24, para 49.

53. Facebook has adopted its own definition of hate speech as:

    a direct attack on people based on protected characteristics—race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity and serious disability or disease. We also provide some protections for immigration status. We define an attack as violent or dehumanizing speech, statements of inferiority, or calls for exclusion or segregation.

    Hate Speech (Facebook, 2020) <https://www.facebook.com/communitystandards/hate_speech> accessed 1 May 2020. Twitter has adopted the diverging concept of 'hateful conduct', covering conduct that 'promote[s] violence against or directly attack[s] or threaten[s] other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.' See Hateful Conduct Policy (Twitter, 2020) <https://help.twitter.com/en/rules-and-policies/hateful-conduct-policy> accessed 1 May 2020.

54. Special Rapporteur 2019 Report (n 8) para 42 mentions Facebook's revised statement of values referring to international human rights standards as a limited exception. Cf. Updating the Values that Inform Our Community Standards (Facebook, 12 September 2019).

55. Special Rapporteur 2019 Report (n 8) para 46.

56. Special Rapporteur 2019 Report (n 8) para 47.

57. Special Rapporteur 2019 Report (n 8) para 47.

58. Keller (n 6) 3.

59. Keller (n 6) 22. Keller cites the examples of Muslim communities across Europe.

60. Google's 'Redirect Method', based on counter-messaging at the point of initial interest, is one such example.

61. Special Rapporteur 2019 Report (n 8) para 29.

62. Dawn C. Nunziato, 'The Beginning of the End of Internet Freedom' (2014) 45 Georgetown Journal of International Law 383–410.

63. Alina Polyakova and Chris Meserole, 'Exporting Digital Authoritarianism: The Russian and Chinese Models' (2019) Foreign Policy at Brookings 1–22, 1.

64. ibid 8. Since 2016, the law (the Yarovaya Amendments) requires, among other things, social media platforms and messaging services to store user data for three years and to allow authorities to access it. In fall 2017, new legislation additionally allowed the Russian government to designate media organisations that receive funding from abroad as 'foreign agents' and to block online content deemed 'undesirable' or 'extremist'. ibid 9.

65. Ahmet Yildirim v Turkey, Appl. No. 3111/10, ECtHR, 2012.

66. European Court of Human Rights in Yildirim v Turkey, in particular the Concurring Opinion of Judge Pinto de Albuquerque. The last requirement, concerning the possibility of appeal, is fulfilled when a meaningful opportunity to challenge the decision is afforded. [Cf. Frank La Rue, Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression (UN Doc. A/HRC/17/27, 16 May 2011) 24.]

67. Special Rapporteur 2019 Report (n 8) 47–48.

68. Special Rapporteur 2019 Report (n 8) 36.

69. Steven Feldstein, 'The Road to Digital Unfreedom: How Artificial Intelligence is Reshaping Repression' (2019) 30(1) Journal of Democracy 40–52, 41.

70. ibid 43.

71. ibid 47.

72. Freedom House, The Rise of Digital Authoritarianism: Fake News, Data Collection and the Challenge to Democracy (Freedom House, 31 October 2018) <https://freedomhouse.org/article/rise-digital-authoritarianism-fake-news-data-collection-and-challenge-democracy> accessed 10 May 2020.

73. Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, OJ L 178, Article 14.

74. See Christina Angelopoulos, 'On Online Platforms and the Commission's New Proposal for a Directive on Copyright in the Digital Single Market' (2017) 9 <https://www.repository.cam.ac.uk/bitstream/handle/n1810/275826/17-2-28%20-%20Julia%20Reda%20Study.pdf?sequence=1&isAllowed=y> accessed 2 May 2020. According to Angelopoulos, a notice does not necessarily have to be followed by takedown of the content by the IT intermediaries, as the notification 'may turn out to be insufficiently precise or inadequately substantiated'; Case C-324/09 L'Oréal SA and Others v eBay International AG and Others, 12 July 2011, para 122.

75. Joris van Hoboken et al., Hosting Intermediary Services and Illegal Content: An Analysis of the Scope of Article 14 ECD in Light of Developments in the Online Service Landscape (European Commission Report, 2018) 27. The report states that the threat of regulation may be one of the most significant factors in shaping the incentives of host intermediaries '(…) in order to avoid the costs of complying to additional legally binding regulation at EU level, along with the risks associated with an eventual non-compliance'.

76. Directive EU 2018/1808 amending Directive 2010/13/EU concerning the provision of audio-visual media in view of changing market realities, OJ L 303, 28 November 2018.

77. Google+, Instagram, Snapchat and Dailymotion joined later.

78. Keller (n 6) 8.

79. European Commission, Communication on Tackling Illegal Content Online: Towards an Enhanced Responsibility of Online Platforms (28 September 2017).

80. Council Framework Decision 2008/913/JHA of 28 November 2008 on combatting certain forms and expressions of racism and xenophobia by means of criminal law, OJ L 328.

81. MTE v Hungary, Appl. No. 22947/13, EurCtHR 135 (2016) para 86. The Court ruled that Hungary failed to adequately balance the right to reputation and the right to freedom of expression when it awarded damages to a real-estate website for injuries to its business reputation. The Hungarian courts had imposed objective liability for unlawful comments made by readers on a website, and the ECtHR held that such reasoning unduly required 'excessive and impracticable forethought capable of undermining freedom of the right to impart information on the Internet'.

82. European Court of Human Rights, Delfi v Estonia, Appl. No. 64569/09, Judgment of 16 June 2015, for example at paras 115–116.

83. CJEU, C-18/18 Eva Glawischnig-Piesczek v Facebook Ireland Limited, 3 October 2019 [preliminary ruling]. The Court found that requiring a hosting provider to remove or block content equivalent to content previously declared unlawful does not amount to a general monitoring obligation, provided the obligation covers only essentially unchanged content, so that the hosting provider does not have to carry out an independent assessment and can use automated technologies to identify it.

84. Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC, OJ L 130, 92–125.

85. See for example, Evan Engstrom and Nick Feamster, 'The Limits of Filtering: A Look at the Functionality and Shortcomings of Content Detection Tools' (Engine, 2017) <www.engine.is/the-limits-of-filtering> accessed 10 May 2020.

86. The Digital Services Act had not yet been unveiled at the time of writing.

87. European Commission, Commission Recommendation on measures to effectively tackle illegal content online (C(2018) 1177 final). The Council of Europe's Committee of Ministers in a 2018 Recommendation had similarly stated: 'Due to the current limited ability of automated means to assess context, intermediaries should carefully assess the human rights impact of automated content management and should ensure human review where appropriate'. [Cf. Council of Europe, Recommendation CM/Rec (2018) 2 of the Committee of Ministers to Member States on the roles and responsibilities of internet intermediaries (Committee of Ministers, 2018).]

88. Law for the Improvement of Law Enforcement in Social Networks (Network Enforcement Law, NetzDG), in force since 1 October 2017 and in full operation since 1 January 2018.

89. Network Enforcement Law, s 3(1).

90. Failure to handle complaints may lead to fines of up to 5 million euros. As of late October 2019, around 50 million euros worth of fines had been issued against tech companies. See Mark Scott and Janosch Delcker, 'Germany Lays Down Marker for Online Hate Speech Laws' (Politico, 30 October 2019).

91. Network Enforcement Law, s 1(3), with reference to German Criminal Code provisions ss 130, 131 and 166 on incitement to hatred against a national, racial, religious or ethnic group, glorification or trivialization of violations of human dignity, and defamation of religious and ideological organizations in a manner capable of disturbing the public peace.

92. Official translation of the German Ministry of Justice. One can note the contrast with Facebook's bi-annual Transparency Reports in Britain, where the number of takedowns for hate speech was not provided in the 2017 Report. Similarly, Twitter for the same year did not specify grounds for complaint removal. [Cf. Yar (n 26) 12.]

93. Balkin (n 29) 2030.

94. Wolfgang Schulz, 'Regulating Intermediaries to Protect Privacy Online – The Case of the German NetzDG' (HIIG Discussion Paper Series 2018-01, 2018) 8.

95. For Facebook alone, defamation and hate speech requests number around 100,000 per month in Germany. Cf. Schulz (n 94) 8.

96. Recommendation CM/Rec (2018) 2 of the Committee of Ministers to Member States on the roles and responsibilities of internet intermediaries (April 2018).

97. Kirsten Gollatz, Martin J. Riedl and Jens Pohlmann, 'Removals of Online Hate Speech in Numbers' (Digital Society Blog, 9 August 2018).

98. ibid.

99. O'Regan (n 13) 427.

100. Amelie Heldt, 'Reading Between the Lines and the Numbers: An Analysis of the First NetzDG Reports' (2019) 8(2) Internet Policy Review 2, 5. Users nonetheless retain the right to appeal a platform's decision before the German courts.

101. O'Regan (n 13) 428.

102. While the Bill was passed by both German legislative chambers, the German President refused to sign it due to constitutionality concerns regarding its privacy implications. See <https://www.bundesregierung.de/breg-en/search/bekaempfung-hasskriminalitaet-1738462> accessed 14 December 2020.

103. Scott and Delcker (n 90).

104. The measures are a response to a 20 per cent rise in politically motivated crime. A considerable number of crimes with anti-Semitic and xenophobic motivation were committed by right-wing extremists. See 'Germany's Government Approves Hate Speech Bill' DW (Bonn, 19 February 2020) <https://www.dw.com/en/germanys-government-approves-hate-speech-bill/a-52433689> accessed 1 May 2020.

105. ibid. Punishable hate speech posts include far-right propaganda, graphic portrayals of violence, murder or rape threats, posts indicating the preparation of a terrorist attack, or the distribution of child sex abuse images.

106. See Karsten Müller and Carlo Schwarz, 'Fanning the Flames of Hate: Social Media and Hate Crime' (2017) <https://ssrn.com/abstract=3082972> accessed 1 May 2020.

107. The example of YouTube's auto-play function is cited here as an instrument of potential radicalization. In 2019, YouTube reported changes to its recommendation algorithm that reduced the number of recommended videos with 'borderline content' and, by extension, the spread of misinformation.

108. Amelie Heldt, 'Reading Between the Lines and the Numbers: An Analysis of the First NetzDG Reports' (2019) 8(2) Internet Policy Review 2.

109. ibid 14.

110. ibid 9.

111. Special Rapporteur 2019 Report (n 8) para 32.

112. Schulz (n 94) 9.

113. Scott and Delcker (n 90). Facebook was additionally fined 2 million euros in 2019 for failing to provide transparent procedures for complaints from users.

114. 'Germany Struggles to Define Limits of What Can be Said' Der Spiegel (Hamburg, 8 November 2019).

115. European Commission, Hosting Intermediary Services and Illegal Content Online: An Analysis of the Scope of Article 14 ECD in Light of Developments in the Online Service Landscape, COM (2018) 0033, 11.

116. European Commission Against Racism and Intolerance (ECRI), Annual Report on ECRI's Activities (2019) 8.

117. Finck mentions one example in this direction: the German Landesanstalt für Medien Nordrhein-Westfalen's use of human–machine filters, which involve two-stage processing of content, first by machines and then by humans. Finck (n 47) 11.

118. Term borrowed from Gitari et al. (n 24) 129.

119. Gitari et al. (n 24) 129.

120. Björn Ross et al., 'Measuring the Reliability of Hate Speech Annotations: The Case of the European Refugee Crisis', Proceedings of NLP4CMC III: 3rd Workshop on Natural Language Processing for Computer-Mediated Communication (Bochum), Bochumer Linguistische Arbeitsberichte 17 (2016) 6–9 <https://arxiv.org/abs/1701.08118> accessed 1 May 2020.

121. Keller (n 6) 22 (in particular her footnote 168). In the words of a German government report, 'the internet does not replace the real-world influences but reinforces them.'

Author information

Correspondence to Kyriaki Topidi.


Copyright information

© 2021 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this chapter


Cite this chapter

Topidi, K. (2021). The Regulation and Governance of Online Hate Speech in the Post-truth Era: A European Comparative Perspective. In: John, M., Devaiah, V.H., Baruah, P., Tundawala, M., Kumar, N. (eds) The Indian Yearbook of Comparative Law 2019. The Indian Yearbook of Comparative Law. Springer, Singapore. https://doi.org/10.1007/978-981-16-2175-8_12


  • DOI: https://doi.org/10.1007/978-981-16-2175-8_12


  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-16-2174-1

  • Online ISBN: 978-981-16-2175-8

  • eBook Packages: Law and Criminology; Law and Criminology (R0)
