2.1 From Content Moderation to Content Governance

In 2009 Facebook (now Meta) CEO Mark Zuckerberg said:

More than 175 million people use Facebook. If it were a country, it would be the sixth most populated country in the world. (Zittrain 2009)

Less than two decades later, the similitude between social media platforms and nations no longer works. Today Facebook has 2.9 billion monthly active users, more than twice the number of inhabitants of the most populous country in the world, China (Statista 2022; World Population Review 2022). However, this analogy is still helpful to understand recent governance and regulatory trends that have characterised the sector in the past few years. One of these is a phenomenon of progressive institutionalisation. As occurred in the development of the nation state, social media platforms too have gradually introduced internal norms, procedures and mechanisms to address increasingly complex issues involving a significant number of users (Sanders 2006).

In 2015, Carr and Hayes proposed a future-proof definition of social media platforms as “Internet-based, disentrained, and persistent channels of masspersonal communication facilitating perceptions of interactions among users, deriving value primarily from user-generated content” (Carr and Hayes 2015, 49). What would distinguish social media such as Facebook, Instagram, Tinder and YouTube from other online services such as emails, news websites, Zoom or Wikipedia are six main factors (Carr and Hayes 2015). Firstly, social media are not necessarily Web-based; users can access them by simply relying on an Internet connection without having to use a World Wide Web browser, as is the case when using apps like Tinder. Secondly, social media are characterised by ‘disentrainment’ (Carr and Hayes 2015, 50): communications over them occur in an asynchronous way, without the need to put in place ‘entrainment’ mechanisms that would push and facilitate synchronous exchange. This is because social media are persistent channels; they do not disappear when users are not online but rather offer the possibility to connect at any time and resume the flow of the conversation. Thirdly, the boundaries between interpersonal and mass communication are blurred on social media: hence Carr and Hayes’ reference to ‘masspersonal communication’ (Carr and Hayes 2015, 52; O’Sullivan and Carr 2018). Users employ social media as an instrument of both interpersonal and mass communication without a neat demarcation between these two dimensions. Moreover, despite their asynchronous nature, users constantly have a ‘perception of’ interaction (Carr and Hayes 2015, 51). In other words, users might not be interacting with each other directly, in an interpersonal way, but the technical environment created by social media might provide a perception of interaction, as is the case when one is able to identify users located in a specific geographical area on Tinder.
However, in the final analysis, the added value of social media would lie not in the content generated by platforms but in that generated by users themselves.

Indeed, social media platforms first appeared in the mid-1990s with the commercialisation of the Internet but proliferated only in the early 2000s (van Dijck 2013). For the first time in the history of the Internet, social media allowed users to be at the same time producers and consumers of the content published online. The blurring of the traditional distinction between Internet content creators and users determined the emergence of ‘prosumers’ on social media platforms and marked the beginning of a second phase of the Web, the so-called Web 2.0 (Fuchs 2011). However, if at the beginning the users themselves were able to moderate the content published on social media platforms, this arrangement soon proved utopian due to the sharp increase in users and content published (Gorwa et al. 2020). Companies managing social media platforms had to step in and introduce general rules and mechanisms to screen, assess and possibly remove the content published online in order to prevent forms of harm and abuse (Flew et al. 2019; Grimmelmann 2015). Online content moderation transitioned from community- to company-led, and unavoidably became part of these organisations’ commercial activities (Gorwa et al. 2020). Moderators were no longer volunteers drawn from the cohort of users. Companies had to hire an increasing number of staff members to deal with the titanic volume of content generated online by users every day. The ‘wisdom’ of the community that until then had informed a bespoke interpretation of social media moderation rules was replaced with general standard guidelines to be implemented in an invariable and uniform way by external professionals. Metaphorically speaking, this radical transformation represented the transition from craftsmanship to industry in online content moderation. It is at this point in the history of online content moderation on social media platforms that one can observe the emergence of proper content governance systems (Gorwa 2019a, 2019b).

2.2 Micro and Macro Governance Tensions

The growth of social media platforms required the adoption of standardised rules, the institutionalisation of content review mechanisms and the professionalisation of the actors involved in moderating content online. The internal norms and structures which were consequently established progressively defined a first layer of content governance that in this book we will call ‘micro’ governance, as opposed to a ‘macro’ governance dimension, which is represented by the mechanisms developed in conjunction with external actors, such as governments and advocacy groups, at a more general level (Gorwa 2019b). Along the same lines, a distinction is commonly drawn between governance (and regulation) by platforms and of platforms (Gillespie 2018b).

Indeed, the increased centrality of online platforms in the daily life of individuals, the role played by social media in allowing individuals to exercise essential freedoms and the associated level of risk of fundamental rights violations on social media transformed online platforms from mere actors of regulation into subjects of regulation. States progressively changed their approach to social media platforms. While originally these organisations were treated as mere intermediaries of information online, thus enjoying a limitation of liability for the content published by their users, over the past few years there has been an increasing tendency to recognise the role that these entities can play in limiting fundamental rights violations online (Frosio 2020, 2022). National and supranational regulators are therefore progressively shifting towards a model of co-regulation where social media platforms are entrusted with the responsibility to monitor the content published by their users and to promptly intervene in order to prevent fundamental rights infringements deriving from a broad array of behaviours sanctioned by the law, from hate speech to incitement to violence (Iglesias Keller 2022).

Micro and macro content governance systems are not mutually exclusive, yet the degree of complementarity between them still needs to be improved. The main tensions between these two governance layers are generated by two factors: the blurred boundaries between the private and public dimensions of the social media ecosystem, and the unavoidable fragmentation of the state regulatory response at global level.

The private-public distinction should theoretically inform the rationale behind the delimitation of the reciprocal actions of the micro and macro governance systems. For instance, it should demarcate where social media, from being mere private spaces of interaction that can be autonomously governed, assume a public relevance, such that state regulation might be needed to enforce fundamental rights and prevent potential violations (Gillespie 2018a; Jørgensen and Zuleta 2020). However, these private online spaces have today acquired a public, not to say ‘constitutional’, relevance (Celeste 2021a; Celeste et al. 2022a). Individuals spend an increasing amount of their life on social media. It is no longer possible to neatly distinguish between the physical and virtual life of a person, as each is complementary to the other. Our physical life would not be the same without our virtual interactions, so much so that the digital world can be regarded as an integral component of the context in which we live (Dowek 2017; Karppi 2018). Today one can no longer imagine exercising some of our core fundamental rights without resorting to social media. Communicating, acquiring information, expressing our political views or religious faith, protesting and conducting our businesses are only some examples of fundamental liberties that we would not be able to enjoy at the same standard if deprived of the use of social media platforms. It is certainly possible to exercise these rights in an ‘analogue’ way, but digital technology, and in particular social media, has definitively raised the standard to which we are accustomed when exercising these rights.

In 2017 the US Supreme Court in the seminal judgement Packingham v. North Carolina recognised social media as “the most powerful mechanisms available to a private citizen to make his or her voice heard” (Packingham v. North Carolina 2017, 8; Celeste 2018, 2021a). Yet, at the same time, these modern public squares are owned and managed by private organisations, which are legally entitled to pursue their business interests and autonomously regulate their platforms. Contemporary German case law speaks of a virtuelles Hausrecht, literally the right of the digital householder, recognising platforms’ ability to ban from their virtual domains users who contravene their internal rules (Celeste 2021a). Along the same lines, a common similitude likens these organisations to feudal systems (Schneier 2013; Jensen 2020; Lehdonvirta 2022). These platforms create and manage autonomous virtual spaces with the power to define their internal rules arbitrarily, as medieval dignitaries used to do in their fiefdoms.

Yet—and here the historical metaphor holds true again—online platforms do not represent virtual entities suspended in a legal vacuum: these companies operate in physical jurisdictions. Their intangible territories host the legal and illegal actions of flesh-and-blood users who live in the real world. Feudalism was characterised by a multi-layered system of governance: the king of England was a vassal of the king of France; the emperor of the Holy Roman Empire claimed power over his constituent kingdoms; the pope claimed authority over all religious affairs regardless of the existence of other personal or geographical connections to a territory that was not subject to his temporal power (Maiolo 2007). The legal maxim rex imperator in regno suo, the king is emperor in his kingdom, meaning that he can exercise full sovereignty and power (plenitudo potestatis) within the boundaries of his territories, was only introduced at the end of the Middle Ages to support the ambitions of emerging nation states, such as the Kingdom of France (Jostkleigrewe 2018). Similarly, the micro governance by social media platforms is subject to the constraints developed by the macro governance mechanisms introduced by external stakeholders, the most impactful of which is state regulation.

However, while micro governance by social media platforms is unitary in nature, in the sense that each governance system at this level represents a coherent and self-sufficient entity, macro governance mechanisms are plural. The monadic unity of platforms’ internal rules has to cope with the multiplicity of legal obligations originating from the various national and supranational systems in which the social media platform is accessible. This asymmetry generates the second element of friction between micro and macro governance. Not only do micro governance systems clash with the public objectives and values that the virtual space of social media represents for society, but what should theoretically guide them in recomposing this tension, that is, the action of the state in the form of legal regulation, is not unitary either, since many states and jurisdictions are simultaneously affected by a single virtual social media space.

2.3 A Normative Dilemma

Micro and macro governance tensions generate a complex normative dilemma for social media companies (Celeste et al. 2022b). The central question is: Which rules should govern content online? Private norms, which would ensure coherence at platform level but are arbitrarily determined by the companies themselves, or democratically voted laws? And if multiple national laws or international standards are simultaneously applicable to one single social media virtual space, which extends across various countries around the globe, which law should be chosen? How can one avoid the risk of having one national or international approach imperialistically imposed on the others without resorting to the third, more neutral option of the private norms of social media companies?

Online content governance is currently facing a problem that is not novel in its essence. Determining which principles govern global spaces is an issue that has characterised all phenomena related to globalisation and has affected the Internet since its origin. In his seminal book ‘Code 2.0’, Lessig schematised this dilemma as the choice between a ‘no law’, a ‘one law’ and a ‘many laws’ world (Lessig 2006). In the social media environment, the decision of private platforms to adopt their own internal rules has been accused of arbitrariness and lack of accountability, being even associated with a ‘no law’ scenario (Suzor 2019). Yet, this choice is justified not only by the legal qualification of social media companies, which are private companies and are therefore legally entitled to define the rules that govern their private spaces, but also by the legal pluralism that characterises national and international law. By adopting their own internal rules, social media companies bypass a twofold issue: firstly, the problem of reconciling multiple overlapping sets of legislation that might be simultaneously applicable to one single social media platform and, secondly, the problem of choosing the law of one country or one group of countries among many. This dilemma exposes a tension among the risks of normative authoritarianism, imperialism and anomie.

2.3.1 Authoritarianism

Pozen defined Facebook’s way of establishing its own content moderation rules as a form of ‘authoritarian constitutionalism’ (Pozen 2018). As recognised by Celeste, online platforms’ terms of service represent private constitutions, as they regulate the exercise of users’ rights in these virtual spaces (Celeste 2019b; Suzor 2016). Social media companies have the power to unilaterally establish and amend their terms of service with no need to ensure transparency or democratic legitimacy, as in an ‘absolutist’ regime (Pozen 2018). According to Pozen, the internal rules of private platforms would still represent an expression of constitutionalism, as they generally try to promote values and principles, such as freedom of expression, that derive from the contemporary constitutionalist doctrine (Tushnet 2019). Yet, the formulation, articulation and implementation of constitutional principles are in the hands of a single decision-maker, the social media company in question. De Gregorio posits that from a constitutional perspective there is no separation of powers in the field of online content governance: the prerogatives to make rules, interpret them and enforce them are in the hands of the same actor (De Gregorio 2019). The same author highlights the ‘paradoxical’ aspect of the internal rules established by social media platforms: they are formally inspired by, and indeed resort to, the terminology and rhetoric of constitutional values, but are de facto guided by the private interests of these commercial entities (De Gregorio 2020).

From a legal perspective, platforms’ terms of service are contracts between private parties: the social media company, on the one hand, and the user, on the other. However, such a description of social media terms of service is extremely formal and reductive. Firstly, online platforms’ internal rules are non-negotiable; they constitute ‘boilerplate’ contracts, which users have no choice but to accept if they want to access the social media virtual space (De Gregorio 2019; Venturini et al. 2016). Secondly, given the role de facto played by these contracts, terms of service are more similar to law, as they are norms of general application that affect millions of individuals worldwide. The scholarship has indeed spoken of a lex Facebook (Bygrave 2015) or lex digitalis (Karavas and Teubner 2005; Teubner 2017; Celeste 2022a) to denote the law imposed by social media platforms. Social media platforms do not only rule in a ‘softer’ (York and Zuckerman 2019) or, we would rather argue, more concealed way, through their technology: the algorithms that determine the content users will be ‘fed’ with, what Lessig called the ‘code’ (Lessig 2006) and Reidenberg the ‘lex informatica’ (Reidenberg 1998). These companies also become ‘legis-lators’, literally ‘promoters of the law’, and this time ‘law’ in its traditional sense, as a set of norms expressed in words. Teubner uses the concept of lex electronica, which would represent an application of the notion of lex mercatoria to the digital field (Teubner 2004). However, the lex electronica is not only comparable with the ordinary law of the various sub-sectors that compose the contemporary digital society, but would represent their constitution. According to Teubner, a series of ‘civil constitutions’ emerge beyond the state dimension, defining the constitutional affordances of the actors of various specific societal sub-sectors (Teubner 2012).
Building on Teubner, Celeste defines social media’s terms of service as constitutional instruments emerging outside the state-centric dimension, not only in light of their ability to affect users’ fundamental rights on online platforms but also due to their potential role as self-restraining norms for social media companies themselves, despite their connatural limited use in this sense, as observed by Suzor (Celeste 2019b, 2022a; Suzor 2018). This form of constitutionalisation occurs in a space at least originally left outside the regulatory spectrum of nation states, relying on the capacity of online platforms to regulate themselves, establishing the rules that govern speech in their virtual spaces (Belli and Venturini 2016). Social media content moderation policies are at the same time “the most important editorial guide sheet the world has ever created”, as Miller put it (qtd in Solon 2017); a contract whose force is even stronger than the law (Belli and Venturini 2016); private statutes that apply transnationally to millions of users (Langvardt 2018); and constitutional instruments regulating the exercise of fundamental rights online (Celeste 2019b; Teubner and Fischer-Lescano 2004).

2.3.2 Imperialism

If the adoption of social media’s own values and principles has been accused of representing a ‘no law’ scenario or a form of non-democratic authoritarianism, the solution of resorting to the law of one specific country does not appear to be a better one either. At first sight, legally speaking, this might seem the most effective and easiest-to-implement mechanism: a private company incorporated in one specific country complies with the law of that state, thereby promoting the adoption of democratically legitimated rules. However, this ‘one law’ scenario conceals the risk of incurring a form of normative imperialism that does not fit the transnational and plural virtual space of social media. Indeed, most of the major social media companies are incorporated in the United States. Adopting US fundamental rights standards for content moderation would imply a forced harmonisation of the plurality of approaches to the issue of balancing freedom of speech against competing rights and interests that characterise jurisdictions around the globe. The US legal tradition, in particular, is significantly protective of individuals’ freedom of expression, enshrined in the First Amendment to the US Constitution (Pollicino 2019; Krotoszynski 2006). This would imply a limitation of the possibilities to moderate content published on social media, given the prevalence of the individual’s freedom of speech over other competing interests.

Users in jurisdictions where freedom of expression is balanced more equally against other rights and values would find themselves acting in a social media ecosystem regulated by rules that they are not accustomed to, and that might be far from their legal tradition, conception of justice and culture more generally (Sangsuvan 2014). Moreover, given that most social media companies are incorporated in the United States, this would also mean that a Western, US-centric prominence, which is already a matter of fact in many fields, would be perpetuated in the social media environment (Baym 2015). At a time when digital sovereignty claims are progressively emerging to counter the de facto economic and legal imperialism of the US and Chinese corporations that dominate the tech sector, the adoption of a US-dominated ‘one law’ solution appears even more problematic (Celeste 2021b). Indeed, particularly in the European Union, a new conception akin to digital autarchy, aiming to protect the European fundamental rights model and to emancipate member states’ shared market from the predominance of US and Chinese tech products and services, is emerging and thriving, at times pushed by a sovereigntist rhetoric (Floridi 2020).

2.3.3 Anomie

In light of the risks of a ‘lawless’ social media environment regulated by rules arbitrarily established by online platforms (Suzor 2019), or one imperialistically dominated by the legal conception of a single country, the solution vocally invoked in the past few years has been to ensure that the content moderation rules included in the terms of service be in line with international human rights law (ARTICLE 19 2018).

At first sight, this option seems to follow the traditional legal approach of resorting to international law when transnational challenges raise global issues. However, this position is de facto weakened by the legal reality, which knows not a single human rights law standard but rather a plurality of legal models, interpreted differently by courts and professionals around the globe (Mégret 2013). Particularly in the field of freedom of expression, international human rights law exposes divergent approaches, often as a heritage of national constitutional traditions. This means that behind the claim that it would suffice to bring private content moderation standards into line with international human rights law, the issue of choosing one legal approach among many persists, similarly to what would happen if social media companies decided to apply the law of one country. The problem of determining which legal standards to apply does not go away, and it carries with it its connatural issues of potential normative imperialism and distance from the cultural pluralism that characterises the social media environment.

Secondly, one particular issue that characterises international human rights law, making it less suitable to govern online content moderation, is that its norms traditionally address states and not private actors. Legally speaking, international law obligations only bind states. There are international instruments advocating for an increased responsibility of private actors in ensuring that their activities do not infringe fundamental rights, including through preliminary risk and impact assessments; yet these documents only have the value of soft law. Moreover, this discrepancy between the traditional addressees of international law and the dominant actors of online content moderation generates complexities in terms of the interpretation of these norms. International human rights standards are not directly applicable to online content moderation cases, as they require a work of legal interpretation and recontextualisation.

This issue is made even more problematic by the lack of granularity of international human rights norms. At the international level, indeed, fundamental rights and liberties are framed as general principles. There are neither provisions tailored to the social media environment nor specific mechanisms created to balance competing rights and interests in the context of online content moderation. This circumstance exposes an issue of potential normative anomie, a sense of disorientation that emerges in the phase of implementation of norms in concrete content moderation cases (Celeste 2022a). International human rights standards, by defining general orienting principles, require a substantial degree of interpretation and, unavoidably, sufficient legal knowledge. These norms, as they stand, could not offer explicit guidance of behaviour to the actors involved in the social media environment and could not be directly applied without a preliminary work of interpretation and recontextualisation (Belli and Venturini 2016). The claim that international human rights law would be the panacea for the online content governance dilemma is thus a myth.

2.4 The Potential of Digital Constitutionalism

A straightforward solution to the question of which standards should govern online content moderation cannot be provided by a mere legal transplant. Adopting the law of one country or referring to international human rights standards are options that conceal a series of significant problems, in particular because these legal frameworks were not intended to govern a transnational and plural environment dominated by private actors like that of online platforms. A twofold work of translation and adaptation is needed to ensure that social media standards comply with fundamental rights. On the one hand, international and national norms enshrining fundamental rights principles have to be recontextualised in light of the specificities of the social media environment; on the other hand, platform standards must be reshaped in order to progressively incorporate these values. A process of ‘constitutionalisation’ of the social media environment seems to be needed (Celeste et al. 2022a; Celeste 2022a, Chap. 5). Instilling the core principles of contemporary constitutionalism into the architecture of social media would mean preserving the legal effectiveness of platform standards while enhancing their capability to promote respect for fundamental rights in the multinational and plural environment they govern.

Interestingly, an input to this process of constitutionalisation is increasingly originating from civil society actors. Over the past few years, a significant number of ‘declarations’ or ‘bills of rights’ have been proposed to articulate constitutional rights and principles in a way that would reflect and address the challenges of the digital age (Redeker et al. 2018; Yilma 2021; Celeste 2022a). This phenomenon has been described in terms of the emergence of a movement of ‘digital constitutionalism’ (Redeker et al. 2018; Padovani and Santaniello 2018; Suzor 2018; Celeste 2019a; Pollicino 2021; De Gregorio 2021; Celeste 2022a). These documents do not represent legally binding texts, yet they often adopt the ‘lingua franca’ of constitutional law (Celeste 2022a). Taken singly, the contribution of these documents from a constitutional law perspective is limited. However, regarded as a comprehensive movement composed of a plurality of initiatives, these civil society efforts have so far nourished a conversation on which values and principles should govern the digital ecosystem (Celeste 2022a, Chap. 8). These declarations promote an update and re-articulation of core principles of contemporary constitutionalism, rather than a complete re-writing of norms. They do not aim to subvert the DNA of contemporary constitutionalism, but rather to ‘generalise and respecify’ its core values in light of the mutated social reality in which we live (Celeste 2022b).

In this context, the social media environment emerges as a laboratory of new ideas. Internet bills of rights often include principles that explicitly address common challenges of online content moderation and could help develop platform standards that are more in line with fundamental rights. These documents represent a voice that often goes unheard: the voice closest to the users, whose opinion is largely neglected in the context of the content governance dilemma, as they are subject to private standards or public laws without having the possibility to express how they think their fundamental freedoms should be articulated and balanced on online platforms.

Internet bills of rights do not claim to become cosmopolitan constitutions for the social media environment but provide an impulse to the conversation on how to instil constitutional values within online platform standards. Despite the evocative image that the concept of ‘constitutionalisation’ brings to mind, there are no founding fathers, or mothers, sitting in the same room for days aiming to define a single constitution for social media. The process of constitutionalisation of this environment reflects the complex, global and plural scenario in which online platforms operate. Online content governance rules are being fertilised by a multistakeholder constitutional input. An aerial view of this phenomenon reveals multiple, simultaneous processes of ‘parallel’ or ‘collateral’ constitutionalisation that are currently ongoing (Celeste et al. 2022b; Celeste 2019a, 2022a, 2022b). Civil society’s Internet bills of rights may be regarded as one of the inputs that contribute to shaping this plural phenomenon. The online content governance dilemma might not be solved by choosing to stick to private rules or to refer to national or international law: the ultimate solution might be a combination of these options. The conversation on digital constitutionalism is polyphonic, nourished by a plurality of voices, including those emerging from below. Internet bills of rights work as a linking element that helps connect, complement and stimulate these various normative dimensions to find answers to the challenges of online platforms (Celeste 2022a, Chap. 8).