One of the main issues of global social media governance relates to the definition of the rules governing online content moderation worldwide. One might think it sufficient for online platforms to refer to existing international human rights standards. A more careful analysis, however, shows not only that international law provides exclusively general principles, which do not specifically address the context of online content moderation, but also that a single human rights standard does not exist, as even the same provisions and principles are interpreted differently by courts across the world. This is one of the reasons why, since their inception, major social media platforms have set their own rules, adopting their own distinctive language, values and parameters. Yet this normative autonomy has itself raised serious concerns. Why should private companies establish the rules governing free speech online? Is it legitimate to depart from minimum human rights standards and impose more (or less) stringent rules?

The current situation exposes a dilemma for online content governance, one that seriously affects the operations of social media companies, the exercise of fundamental rights by users and digital policy strategies alike. On the one hand, if social media platforms simply adopted international law standards, they would be compelled to choose which interpretative model to follow: for example, between a US-style approach dominated by freedom of expression and a European-style standard that seeks to balance freedom of expression against other social values. The same would hold if they decided to adopt the law of a single country. Moreover, they would also need to put in place a mechanism capable of translating, or ‘operationalising’, such general state-centred standards into the context of online content moderation. On the other hand, where social media platforms adopt their own values, rules and terminology to regulate content moderation, thus departing from international law standards, they are accused of censorship or laxity, intrusiveness or negligence.

The present work aims to analyse this normative dilemma. Chapter 2, entitled ‘The Content Governance Dilemma’, sets the scene, deconstructing the core elements of the conundrum. We analyse the evolution from community-based and user-led online content moderation to the professionalised, industry-led sets of mechanisms that characterise it today. This transition, combined with the intervention of external actors, such as national legislators and courts, in defining rules for online content, led to the emergence of a macro governance dimension of content moderation. Its conception of the social media environment as a public forum clashes with the private approach of its micro dimension, which is completely dominated by the platforms themselves and managed as an independent private space. This tension lies at the basis of a complex normative dilemma, whose central questions are: Which rules should govern content online? Private norms or democratically enacted laws? If multiple national laws or international standards are simultaneously applicable, which law should be chosen? How does one avoid the risk of a single approach being imperialistically imposed on the others? In this chapter, we explain how this dilemma exposes a tension between the risks of normative authoritarianism, anomie and imperialism. A process of ‘constitutionalisation’ of the social media environment seems to be needed in any case: social media internal rules should better incorporate fundamental rights and guarantees. Input to this process increasingly originates from civil society actors. Over the past few years, a significant number of ‘bills of rights’ have been proposed to articulate constitutional rights for social media, a phenomenon that has been described as the emergence of a movement of ‘digital constitutionalism’. In this book, after analysing the contribution that international human rights law can make to the constitutionalisation of social media content moderation standards, we focus on the role and message of civil society initiatives in this field.

Vis-à-vis the twofold issue of the constitutional authoritarianism of online platforms’ terms of service and the potential normative imperialism of imposing one dominant legal approach, many have advocated the application of international human rights standards as a solution to the problems of platform governance. Chapter 3, entitled ‘The International Law of Content Governance’, examines whether, and to what extent, international law really offers normative guidance to the complex world of platform content governance. It argues that the potential of international human rights law to offer much-needed normative guidance to content governance is circumscribed by three interrelated factors. First, international human rights law is, by design, state-centred and hence does little to attend to human rights concerns arising in the private sector. Second, international human rights law standards are couched in general principles and are hence less suited to the context of platform content moderation, which requires a rather granular and dynamic system of norms. Third, and related to the second, the generic international content governance standards have not been adequately unpacked by the relevant adjudicative bodies to make them fit for the present realities of content moderation. The chapter then maps applicable content governance standards in international law, focusing in particular on the role of soft law instruments addressing private organisations.

Chapter 4, entitled ‘Shaping Standards from Below: Insights from Civil Society’, analyses civil society impulses in the field of online content moderation, a source of normative standards that has so far been neglected by scholarship. Internet bills of rights promoted by civil society are presented as expressing the ‘voice’ of communities that struggle to convey an innovative message through traditional institutional channels: one of the layers of the complex process of constitutionalisation that is pushing towards reconceptualising core constitutional principles, in light of the challenges of the digital society, into a new form of ‘digital constitutionalism’. This chapter illustrates the findings of a content analysis of 40 Internet bills of rights that include principles related to online content governance. We start with an overview of the main features of the textual corpus, taking into account the distribution of these bottom-up sources of constitutional values across time and geographical areas. We illustrate the main principles, rights and standards detected in the corpus and their mutual relationships, reconstructing a civil society framework for content moderation. Our analysis then focuses on the substantive standards promoted by these declarations and on the procedural guarantees articulating the formal rules and procedures through which substantive rights are created, exercised and enforced. Finally, we analyse a number of ad hoc provisions specifically crafted to address social media platforms, most of which are attempts to contextualise and adapt international human rights standards into more granular norms and rules to be implemented in the platform environment.

Chapter 5, entitled ‘Platform Policies Versus Human Rights Standards’, investigates to what extent human rights standards, as enshrined in international law and in civil society initiatives, are reflected in social media platforms’ standards, both on paper and in terms of adopted practices. The chapter takes a comparative approach, studying five major social media platforms: Facebook and Instagram (Meta Inc.), TikTok/Douyin (Bytedance Inc.), Twitter (Twitter Inc.) and YouTube (Alphabet Inc.). It first explores these platforms’ official commitments to international human rights. Building on data provided by the platforms themselves, it then compares the volume of moderation cases across different principles and across platforms for 2021, the last full year for which data were available. While a number of interesting differences and commonalities exist between the platforms in terms of substantive content moderation outcomes, an investigation of procedural practices is just as important to understanding how platforms moderate. Hence, the chapter subsequently compares the procedural principles demanded by civil society groups, and those that can be derived from international human rights law, with platform practices, specifically concerning transparency reporting and automated content moderation. The chapter finds a relatively high degree of convergence among the platforms on a number of practices.

Chapter 6 concludes the book by highlighting the role played by civil society actors in the broader process of constitutionalisation of social media internal rules. We argue that the solution to the content governance dilemma lies in its composite nature. No single actor has the final word; rather, we witness a polyphonic conversation in which multiple societal layers simultaneously and gradually contribute to rearticulating the core principles of contemporary constitutionalism in the context of online content moderation.