Online content moderation is affected by multiple human rights concerns. This book has mentioned some of them, from the use of discriminatory and opaque algorithms to the lack of procedural rules. Yet, this work has focused on a theme that is common to all of them. We have analysed a meta-problem: the transversal issue of clarifying which rules should govern online content moderation and thus help prevent the aforementioned human rights concerns. We have done so without taking a normative approach, striving to present an objective account of the discrepancies between legal theory and reality.

Firstly, by exposing the dilemmatic nature of the choice on the law of online content moderation (see Chap. 2): a conundrum composed of three options, none of which is fully satisfying. If social media platforms adopt content policies based on their own values—for example, Facebook’s ‘voice, authenticity, safety, privacy and dignity’—they are unavoidably accused of unilaterally setting the rules of the game for their own virtual spaces; critics question why they depart from international human rights law and suspect that these values are a way to protect their business interests. However, if this form of platform authoritarianism is to be excluded, one cannot simply affirm that the solution is merely to refer to existing laws. Social media are global spaces characterised by a melting pot of users coming from various countries: which national law could or should prevail without being accused of digital imperialism? Moreover, resorting to international law standards is not the panacea that one would expect; it would rather lead to a situation of normative anomie, a status of disorientation explained by the fact that its norms do not directly target private actors and only include very general principles that would, in any case, require further interpretation in order to be applied in the context of online content moderation.

Yet, if this book shows that international human rights law is not the solution to the content governance dilemma, it does not exclude its contribution to the resolution of this issue (see Chap. 3). We propose a multi-level approach in which multiple actors are instrumental in translating human rights norms—what constitutes the DNA of contemporary constitutionalism—into rules that speak to the context of online content governance. Constitutionalism is the ideology that champions the respect of fundamental rights through the limitation of the power of dominant actors. It is embedded in international human rights law, but regrettably these norms speak to a social reality that has since been superseded. Today, the multinational companies owning and managing social media platforms emerge as dominant actors alongside nation states. Achieving a form of digital constitutionalism would consist in rearticulating constitutional principles in the context of the digital society, subjecting social media platforms to the same types of obligations that international law imposes on states. Constitutionalising social media would therefore mean instilling human rights guarantees in an environment that very often lacks them due to its conceptualisation as a private space (Celeste et al. 2022).

In this work we focus on an actor that is often neglected, particularly among legal scholars: civil society organisations. From a legal point of view, however strong the voice of these actors might be, their claims remain outside what is considered legally relevant, as their normative power does not have legal force. Yet, this book shows how civil society, even if it is unable to produce lex—something that social media platforms, as owners of their virtual fiefdoms, can conversely do—can and does contribute to the development of the ius, the legal discourse, on the rules that should govern online content moderation. Adopting a musical metaphor, our work has consisted in putting a series of fragmented voices together in one score, so that they can be read—or played—in unison. The music that emerges from this analysis is a vocal appeal to clarify which limitations to the principle of freedom of expression social media platforms may legitimately apply online, and to establish procedural principles in order to mitigate the potential arbitrariness of platforms’ decisions. Civil society actors thus ‘generalise and recontextualise’ principles of international human rights law and core constitutional values in a way that directly addresses the challenges of the social media environment (Celeste 2022; Teubner 2012). They do so not only in the form of meta-rules but also by directly embedding norms into the socio-technical architecture that runs and governs online platforms: a form of constitutionalism ‘by design’ and ‘by default’ which holds together technical solutions and governance mechanisms (see Chap. 4).

By comparing the demands advanced by civil society actors with platforms’ policies, we have reconstructed an image of this process of social media constitutionalisation ‘in motion’ (see Chap. 5). Over the past few years, there has indeed been a positive trend towards increased proceduralisation and transparency in online content moderation, and a progressive convergence between civil society demands and platforms’ policies. The use of automated systems to filter and take down content is now counterbalanced by a growing number of regulated appeal mechanisms—at times even multiple ones, as in the case of the two-instance structure established by Meta with the creation of the Oversight Board. Most social media companies publish detailed transparency reports that help reduce the opacity characterising the governance of content moderation and increase accountability towards the general public, researchers and public authorities.

Certainly, this is not the outcome of civil society advocacy alone. There are dramatic moments—as always in history—from which social media companies are learning, such as the assault on the US Capitol. There are still tensions between a proprietary vision of social media as private fiefdoms, whose internal rules can be modified with a simple tweet by an almighty CEO, and their role as public forums, contemporary centres for the exercise of fundamental freedoms. There is a growing body of law, at both national and regional level, directly addressing content moderation practices in order to tackle the issue of online harm. Courts and internal oversight bodies are playing a ‘maieutic’ role in interpreting and further developing platforms’ policies in a fundamental rights-compliant way (Celeste 2021; Karavas 2010). There is now a shared belief that social media companies can no longer be left entirely alone in regulating online content. Thanks to the contributions of these various actors, the process of constitutionalisation of social media is shaping clearer rules guaranteeing digital human rights.

The solution to the dilemma of which rules should govern online content moderation can thus be found in the structure of the dilemma itself. Its composite nature is the key to understanding how to legitimately develop norms capable of preserving fundamental rights on social media platforms. It is not a question of choosing which actor should prevail and impose its law. The solution rather lies in recomposing the puzzle of the various voices that are contributing to shaping digital human rights in the context of online platforms.