
3.1 Unveiling a Myth

Recent years have seen growing attention to the potential of international law to offer normative guidance in addressing human rights concerns in content governance. Partly in response to pressure from civil society groups, including through the launch of Internet bills of rights that advance progressive content governance standards, social media platforms are also increasingly yet vaguely turning their attention to international human rights law (Meta 2021; Twitter 2022). As shall be shown in the next chapter, a number of civil society organisations have pushed for social media companies to ground their content moderation policies in international human rights standards. Several reports of United Nations (UN) special rapporteurs and the scholarly literature have likewise argued for platform content moderation policies and practices to be based on and guided by international human rights law. But the question of whether, and the extent to which, international law offers such guidance to the complex world of platform content governance remains open. This chapter seeks to address this question. It will show that the potential of international human rights law to offer much-needed normative guidance to content governance is circumscribed by two interrelated factors.

The first factor is a set of design constraints. International human rights law is, by design, state-centred and hence does not go far in attending to human rights concerns in the private sector; it relegates the regulation of the private sector to national law. The problem with this state of affairs is that it risks leading to divergent regulatory approaches globally. The other design constraint is that international human rights standards are mostly couched in general principles. This makes the principles less suited to the context of platform content moderation, which requires a rather granular and dynamic system of norms. The second factor concerns a set of structural constraints that further limit the regulatory potential of international human rights law. In the rare instances where soft international law standards appear to have companies in their regulatory sights, they still rely by and large on voluntary compliance and hence envisage no robust accountability mechanisms. In practice, the generic international content governance standards have not been adequately unpacked by relevant adjudicative bodies, such as treaty bodies, to make them fit for the present realities of content moderation. On the whole, content governance jurisprudence at the international level remains thin.

In this chapter, the phrase ‘international law of content governance’ refers to a set of international standards relating to content governance provided in both international hard and soft law. Hard legal instruments considered include the International Covenant on Civil and Political Rights (ICCPR) and the International Convention on the Elimination of All Forms of Racial Discrimination (CERD). Certain Covenant rights, particularly the right to freedom of expression, which is at the centre of content governance, are replicated almost verbatim in other post-ICCPR specialised human rights treaties (Convention on the Rights of the Child 1989: art 13; Convention on the Rights of Persons with Disabilities 2007: art 21). As a result, references in this chapter to ICCPR provisions would, mutatis mutandis, apply to corresponding provisions in those treaties.

Soft legal instruments, by contrast, comprise a broad range of instruments, including the Universal Declaration of Human Rights (UDHR), the United Nations Guiding Principles on Business and Human Rights (UNGPs, alternatively referred to in this chapter as the Ruggie Principles),Footnote 1 relevant Resolutions of the Human Rights Council and Joint Declarations of UN and intergovernmental mandates on freedom of expression. The chapter excludes regional instruments from the purview of the ‘international law’ analysis, mainly because such instruments are transnational/regional in scope while issues of content governance are inherently universal. International law is the most pertinent framework of reference for addressing such a universal issue.

The rest of the chapter develops in four sections. We first map the key normative sources of the international law of content governance (Sect. 3.2), which consist of general and specific standards applicable to the governance of and in digital platforms. Emergent standards developed through intergovernmental mandates on freedom of expression are then considered to highlight recent norm progressions in international law (Sect. 3.3). In Sect. 3.4, we explore the ways in which a host of design and structural constraints undercut the regulatory potential of the international law of content governance. We close the chapter in Sect. 3.5, where the growing gap-filling role of civil society initiatives is flagged, a subject explored in full in Chap. 4.

3.2 Normative Sources

Content governance standards in international law draw from multiple normative sources and take various legal forms. While some are embodied in hard law and hence carry binding legal obligations, others are envisaged in soft law instruments with no enforceable obligations. Certain standards are also general in formulation and scope. One, for instance, finds general norms that define the scope of the human rights obligations of state parties to the relevant treaty. These would include, in turn, the obligations of states in regulating the content moderation practices of digital platforms. But this category also encompasses less explored binding norms that would potentially apply to digital platforms directly. Other content governance standards are specific in the sense that they have the potential to apply to particular cases of content governance. Such norms include international standards that deal with rights and principles engaged directly by content governance. A set of human rights guaranteed in widely accepted human rights treaties and soft legal instruments such as the UNGPs fall within this category. What follows maps this set of content governance norms in international law.

3.2.1 Generic Standards: Platforms as Duty-Bearers?

The ICCPR, a human rights treaty widely ratified by states—173 state parties at the time of writingFootnote 2—provides the general framework for any consideration of content governance in international law. One way it does so is by defining the scope of state obligations vis-à-vis Covenant rights. States generally owe two types of obligations under the Covenant: negative and positive obligations (International Covenant on Civil and Political Rights 1966, art 2(1)). States’ negative obligation imposes a duty to ‘respect’ the enjoyment of rights. As such, it requires States and their agents to refrain from any conduct that would impair or violate the exercise or enjoyment of rights guaranteed in the Covenant. States’ positive obligation, on the other hand, imposes a duty to ‘protect’ the enjoyment of rights. This obligation thus concerns state regulation of third parties, including private actors, to ensure respect for Covenant rights. Article 2 of the Covenant stipulates states’ positive human rights obligations as follows:

[…] Each State Party to the present Covenant undertakes to take the necessary steps, in accordance with its constitutional processes and with the provisions of the present Covenant, to adopt such legislative or other measures as may be necessary to give effect to the rights recognized in the present Covenant. [Emphasis added]

Each State Party to the present Covenant undertakes:

(a) To ensure that any person whose rights or freedoms as herein recognized are violated shall have an effective remedy […];

(b) To ensure that any person claiming such a remedy shall have his right thereto determined by competent judicial, administrative or legislative authorities, or by any other competent authority provided for by the legal system of the State, and to develop the possibilities of judicial remedy;

(c) To ensure that the competent authorities shall enforce such remedies when granted.

States’ positive human rights obligation primarily concerns a duty to put in place the requisite legal and institutional framework to enable the enjoyment of rights, including means of recourse when violations occur (General Comment 31 2004: paras 6–7). Applied to content governance, this duty would mean that—where permitted by their respective domestic constitutional framework—states should enact laws that regulate the conduct of digital platforms, including content moderation policies and practices. Recent regulatory initiatives in several jurisdictions, such as Germany’s Network Enforcement Act (NetzDG), are good cases in point (Germany’s Network Enforcement Act 2017).

International human rights law generally does not impose obligations on non-state actors, including corporations, but there are certain exceptions to this rule. At the highest level, the Preamble of the UDHR states that ‘every organ of society’ shall strive to promote respect for the rights guaranteed in the Declaration (Universal Declaration of Human Rights 1948, Preamble: para 8). Commentators argue that the reference to ‘every organ of society’ should be taken to include corporations, which accordingly also bear a duty to ‘respect’ human rights (Henkin 1999, 25). Although this proviso sits in the non-operative part of the Declaration, and the Declaration is not formally a binding instrument, it can be taken to foreshadow a negative obligation of non-state actors, including technology companies, to ‘respect’ human rights. At its core, the negative human rights obligation to respect is a duty to refrain from any act that would undermine the exercise or enjoyment of human rights.

Perhaps a more concrete version of this tendency to address non-state actors in compulsory terms is provided in the operative provisions of both the UDHR and the ICCPR. Article 30 of the UDHR and Article 5(1) of the ICCPR, respectively, read as follows:

Nothing in this Declaration may be interpreted as implying for any State, group or person any right to engage in any activity or to perform any act aimed at the destruction of any of the rights and freedoms set forth herein.

Nothing in the present Covenant may be interpreted as implying for any State, group or person any right to engage in any activity or perform any act aimed at the destruction of any of the rights and freedoms recognized herein or at their limitation to a greater extent than is provided for in the present Covenant.

A closer look at these provisions suggests that no right is bestowed upon anyone, whether states, ‘groups or persons’, to impair or destroy the exercise and enjoyment of the rights guaranteed in the Declaration and the Covenant. It has been argued that the rationale for the inclusion of this provision in the UDHR, which was later replicated in the ICCPR with minor additions, was that persons who are opposed to the ‘spirit of the Declaration or who are working to undermine the rights of men should not be given the protection of those rights’ (Schabas 2013, 1308). Guided by the slogan “no freedom for the enemies of freedom”, the provision is meant to prevent the abuse of rights (Opsahl and Dimitrijevic 1999, 648–649). The question, then, is whether the prohibition of abuse of rights would equally apply to digital platforms.

Freedom of expression in international law is bestowed on individuals, and as such, companies, including social media platforms, are not right holders. That is not, however, the case in national legal systems such as the United States, where First Amendment protection applies to social media companies (United States Supreme Court, Manhattan Community Access Corp. v. Halleck 2019).Footnote 3 But the prohibition of abuse of rights in both the Declaration and the Covenant arguably also applies to social media companies in the sense that their policies and practices, including those relating to content moderation, must not have the effect of impairing or destroying the enjoyment of human rights. In that sense, there is a negative obligation to ‘respect’ human rights, which requires them to refrain from measures that would impair the enjoyment of rights. The provision is general and was originally meant to limit abuse by, as Nowak writes, “national socialists, fascists, racists and other totalitarian activities” who employ certain rights like freedom of expression to “destroy democratic structures and human rights of others protected by such structures” (Nowak 2005, 112, 115).Footnote 4 But there is no reason why it would not apply to the content policies and practices of digital platforms.

However, the tendency in such provisions to address non-state actors directly, potentially including corporations, appears to be at odds with the generally state-centric nature of human rights law. As alluded to above, international human rights law imposes obligations only on states that are parties to the underlying treaties. The binding human rights norms in question are, of course, embodied in a treaty, namely the ICCPR, to which only states are, or can be, parties. This state of affairs raises the fundamental question of how such norms would apply in the context of digital platforms. But the sheer fact that the provisions appear to impose binding duties, regardless of how they would be enforced, certainly lends weight to recent arguments, considered later in this chapter, that international human rights law does, or should, apply directly to the content moderation practices of digital platforms.

Albeit in a different context, some commentators argue that Article 19 of the ICCPR imposes some duties directly on online intermediaries (Land 2019, 286, 303–304). Such duties include respect for principles of due process and remediation. This expansive and potentially contentious reading of the provision relies on two points. One is the fact that the drafters of the Covenant had contemplated non-state actors as duty-bearers, although this was ultimately not incorporated in the final text. Non-state actors have long been considered potential duty-bearers in human rights standard-setting processes, but a range of factors frustrated any attempt to codify corporate human rights obligations in treaty law. Chief among such factors was a lack of support from many developed countries and multinational corporations.

The more recent attempt to translate the Ruggie Principles into a human rights treaty is already facing similar challenges. At the sixth session of the Working Group that is currently drafting such a treaty, a number of states expressed reservations and outright opposition to the draft text as well as to the whole treaty process. The UK delegation, for instance, acknowledged the noble aims of the draft but expressed scepticism that the text could gather enough political support (Open-ended Intergovernmental Working Group 2021, para 22). The United States, on the other hand, not only maintained its objection to the process but also called on the Working Group to abandon it in favour of alternative approaches (Open-ended Intergovernmental Working Group 2021, para 23). This undercuts the normative value that some commentators, as alluded to above, attach to the drafters’ abandoned “contemplation” of non-state duty-bearers in the ICCPR.

But a more direct reading of the duties of digital platforms in international law draws on the terms “special duties and responsibilities” in Article 19(3) of the ICCPR. It is argued that these terms imply duties of intermediaries, including digital platforms (Land 2019, 303–305). Yet the terms are used in the provision expressly in the context of the exercise of the right to freedom of expression and the attendant grounds of restriction. It reads that “the exercise of the rights provided for in paragraph 2 of this article carries with it special duties and responsibilities”. That simply means the envisioned “special duties and responsibilities” are owed by the individuals on whom the rights to freedom of opinion and expression are bestowed under international law. It is not apparent whether the argument flows from the rule present in some jurisdictions, particularly the United States, where private publishers enjoy free speech rights. If that were the case, it would mean that the exercise of this right by intermediaries would entail “special duties and responsibilities”. In international law, however, non-state actors are not entitled to freedom of expression, and so there can be no corresponding duties on intermediaries.

Overall, the upshot is that while certain standards appear to envision corporations as duty-bearers, the state-centred nature of international human rights law makes them less suited to content governance. Add to that their exceedingly generic formulation, which has yet to be unpacked in practice. What follows considers whether more specific standards in international law may offer better normative guidance.

3.2.2 Specific Standards: Applicable Human Rights Treaties

Specific content governance standards in international law are envisaged in a series of human rights treaties and in international soft law. Human rights treaties guarantee a broad range of rights that set out standards applicable to the governance of and in digital platforms. The UN framework on business and human rights, also referred to as the Ruggie Principles, is the other specific standard potentially applicable to the governance of platforms. What follows considers the degree to which these standards offer effective normative guidance for content governance in digital platforms.

Content governance standards in human rights law take different forms. One finds, for instance, standards that prohibit certain types of speech: war propaganda, advocacy of racial, religious and national hatred, and racist speech (International Covenant on Civil and Political Rights 1966, art 20; International Convention on the Elimination of All Forms of Racial Discrimination 1965, art 4). In outlawing certain types of expression, international human rights law essentially sets forth content governance standards that must be implemented by state parties to the relevant treaties, including in social media platforms. Social media companies are not bound by such international standards. But digital platforms are increasingly being required by domestic legislation to observe rules that essentially reflect international human rights and principles. Germany’s Network Enforcement Act, for instance, has among its legislative objectives tackling hate speech, thereby upholding rights affected by such types of online speech (Germany’s Network Enforcement Act 2017). In a way, the community standards of digital platforms can be taken, at least theoretically, to be translations of domestic and international law standards. What this may further mean is that in many cases such platform policies will apply to users in jurisdictions where domestic legislation is non-existent or unenforced. In the latter, and probably common, case, it would largely be a voluntary commitment on the part of digital platforms.

International human rights law not only guarantees the right to freedom of expression but also provides standards for permissible restrictions. This is the other source of specific content governance standards in human rights law. Restriction of freedom of expression is permissible only when three cumulative requirements are fulfilled: legality, necessity and legitimacy. Article 19(3) of the ICCPR provides the standards of restriction as follows:

[…] It [freedom of expression] may therefore be subject to certain restrictions, but these shall only be such as are provided by law and are necessary: (a) for respect of the rights or reputations of others; (b) for the protection of national security or of public order (ordre public), or of public health or morals.

The three-part test of legality, necessity and legitimacy is designed to address the restriction of rights by states and their agents. But there have been several attempts to adapt the test, or to formulate sui generis standards, for human rights restrictions by corporations (Karavias 2013; Ratner 2001). More recently, attempts to translate and adapt these standards to content governance have emerged. At the forefront of this effort has been the former UN Special Rapporteur on Freedom of Expression, David Kaye. In a series of reports, he set out how digital platforms can, and should, follow international human rights standards in the course of applying their content governance rules. Doubtless, this offers some intellectual guidance on the human rights responsibilities of technology companies, including social media platforms. But Kaye’s reports are particularly notable in two respects.

One is that they seek to elaborate on how the right to freedom of expression guaranteed in Article 19 of the ICCPR, as well as Article 19 of the UDHR, would apply to technology companies. Kaye argued that following international law standards in content moderation, as opposed to discretionary community standards, would allow corporations to make ‘principled arguments’ to protect the rights of users (Kaye 2019, 10). In particular, adapting international law would mean that the requirements for permissible restriction of freedom of expression would need to be applied by social media companies. For example, the requirement of ‘legality’ would oblige platforms to adopt ‘fixed rules’ on content moderation that are publicly accessible and understandable to users (Kaye 2019, 43). His report on online hate speech similarly argues that companies should assess whether their hate speech rules infringe upon freedom of expression based on the requirements of legality, necessity and legitimacy (Special Rapporteur on Freedom of Opinion and Expression 2019, paras 46–52).

Reinventing the Ruggie Principles is the other way in which Kaye sought to adapt international standards to the platform governance context. We return to this point in the next section, where we illustrate the extent to which the UN business and human rights framework may apply to the governance of and in digital platforms. But it is vital to note that such reports of the former Special Rapporteur often build on the submissions of various stakeholders, mainly civil society groups.Footnote 5 Indeed, the ‘procedural safeguards’ highlighted above and alluded to by Kaye are widely advocated in civil society initiatives. The next chapter will examine civil society initiatives relating to content governance. This phenomenon attests to the iterative cross-fertilisation of, and convergence between, civil society standards and international human rights standards.

Multistakeholder bodies have similarly attempted to adapt human rights standards to content governance.Footnote 6 A good case in point is the Global Network Initiative (GNI), which introduced Principles on Freedom of Expression and Privacy (GNI Principles 2008, as updated in 2017). GNI is a multistakeholder body established to serve as a platform for addressing issues relating to digital privacy and free speech through dialogue among stakeholders.Footnote 7 Although focused only on privacy and freedom of expression, the GNI Principles address technology companies broadly defined, including Internet companies such as Meta, telecommunication companies and telecom equipment vendors. Moreover, GNI’s work in the area, including its policy briefs, has adopted a broader perspective. As shall be outlined in the next chapter, transparent rule-making is one of the recurring standards in civil society instruments, which requires the development of content moderation standards to be open and participatory.

A recent GNI policy brief, in this regard, provides that the requirement of ‘legality’ demands that restrictions on free expression be based on a clear and accessible law adopted through democratic legislative processes, particularly when the law-making powers of states are delegated (Global Network Initiative Policy Brief 2020, 12–13). The policy brief also draws from the requirement of ‘legality’ that the delegation of regulatory or adjudicatory roles to private companies must be accompanied by corollary safeguards of independent and impartial oversight. It further relates the transparency of content moderation practices, including the need for human review, to the requirement of legality (Global Network Initiative Policy Brief 2020, 14). We consider in Sect. 3.4 to what extent such interpretive exercises help in reimagining international law for the content governance context. But the fact that the GNI Principles apply only to a dozen technology companies on a voluntary basis may lessen their impact.

In addition to freedom of expression, content governance engages a broad range of human rights guaranteed in international law that are yet to receive due attention in the Internet governance discourse. This is the other variant of content governance norms in human rights law. Common acts of content moderation such as content curation, flagging and takedowns normally restrict the freedom of expression of users. But other human rights and principles, such as the right to equality/non-discrimination, the right to effective remedy, the right to a fair hearing and freedom of religion, would also be impacted by platform content moderation policies and practices. As with freedom of expression, the duties corresponding to these rights fall on states, but they form the basis for recent civil society content governance standards, which, in contrast, are in most cases addressed directly to both states and social media companies. This will be considered further in the next chapter.

As highlighted above, a key part of states’ positive human rights obligation is to ensure an effective remedy when violations of rights occur (General Comment 31 2004, para 8). The right of individuals to an effective remedy is the corollary of that duty, entitling them to seek redress when any of the Covenant rights, including freedom of expression, is violated. This is thus essentially a ‘supporting guarantee’, as opposed to a freestanding right (Joseph and Castan 2013, 869, 882). Content moderation would also engage cross-cutting rights such as the right to non-discrimination (International Covenant on Civil and Political Rights 1966, arts 2(1), 3, 26). Speech that incites discrimination or hatred against particular groups would violate the right to non-discrimination. Content moderation decisions to remove certain content might amount to a violation of free speech rights, while a decision to retain the problematic content might equally violate the right to equality/non-discrimination.

The right to a fair hearing guarantees fair processes to individuals in civil as well as criminal cases (Universal Declaration of Human Rights 1948, art 10; International Covenant on Civil and Political Rights 1966, art 14). The right imposes a duty to ‘respect’ on states, and its aim is to ensure respect for due process guarantees such as the ability to challenge charges through a fair and impartial process. According to the Human Rights Committee (HR Committee), the treaty body that oversees the ICCPR, this right “serves as a procedural means to safeguard the rule of law” (General Comment 32 2007, para 1). Content governance decisions inherently give rise to due process concerns, for instance in the context of notification of decisions to users or in the opportunity to challenge those decisions.

Freedom to manifest religion would also be engaged by content governance. This right is the ‘active’ component of freedom of religion that entitles believers to freely express and practise their faith by any means, including through the use of digital platforms (Universal Declaration of Human Rights 1948, art 18; International Covenant on Civil and Political Rights 1966, art 18; Nowak 2005, 413–418). But the right is not absolute and hence may be restricted in line with the three-part requirements of legality, necessity and legitimacy. While the duty to respect and protect this freedom falls on states, content moderation decisions against content relating to the manifestation of religion or belief would constitute interference with the freedom to manifest religion and should be justified under the three-part test.

Another relevant human right impacted by content moderation decisions is the protection of honour and reputation (International Covenant on Civil and Political Rights 1966, art 17). Honour relates to a person’s opinion of themselves, whereas reputation concerns the opinion others hold of that person (Volio 1981, 198 et seq). The right is among the bundle of personality rights guaranteed in international law alongside the right to privacy (Universal Declaration of Human Rights 1948, art 12; International Covenant on Civil and Political Rights 1966, art 17). Decisions to take down, moderate or retain content on social media platforms that attacks the honour or reputation of individuals would engage this right. As highlighted above, one of the legitimate aims for permissible restriction of freedom of expression is ensuring respect for the rights or ‘reputations of others’, but not honour (International Covenant on Civil and Political Rights 1966, art 19(3)(a)). This is also the approach taken in the community guidelines of several digital platforms. A good case in point is Twitch’s Terms of Service, which prohibit defamatory content on its platform (Twitch 2021).

A lesser-known but potentially relevant international standard concerns the right to a ‘social and international order’ in which the human rights and freedoms set forth in the UDHR can be fully realised. Article 28 of the UDHR reads as follows:

Everyone is entitled to a social and international order in which the rights and freedoms set forth in this Declaration can be fully realized.

This right is essentially aspirational in the sense that it requires “social and international conditions to be restructured” so as to enable the realisation of rights (Eide 1999, 597). According to Eide, this would mean a readjustment of political and economic relations within states (the social order) and among states (the international order). The right does not envisage a clear corresponding duty or duty-bearer. Given that the UDHR is soft law, this is not surprising. There can be no doubt that states would be the prime duty-bearers under the right to a rights-friendly social and international order. Yet as a right whose drafters hoped would help create an order where rights, including those highlighted above, could be realised, several actors, including social media companies as well as states, arguably bear responsibility, if not a duty, under this provision (Schabas 2013, 2753 et seq). Many scholars have alluded to the advent of a new social order with the rise of big technology companies (Zuboff 2018).Footnote 8 Together with states, these companies are responsible for ensuring that their practices in this new order, including on content governance, do not impair the enjoyment of rights.

Among the bundle of cultural rights guaranteed in international law is what has come to be referred to as the ‘right to science’. Initially recognised in the UDHR, it guarantees the right of ‘everyone’ to share in “scientific advancement and its benefits” (Universal Declaration of Human Rights 1948, art 27(1)). It was later codified in the International Covenant on Economic, Social and Cultural Rights (ICESCR) (International Covenant on Economic, Social and Cultural Rights 1966, art 15(1)(b)). Article 15 of the ICESCR provides as follows:

1. The States Parties to the present Covenant recognize the right of everyone:

   (a) […]

   (b) To enjoy the benefits of scientific progress and its applications;

   (c) […]

2. The steps to be taken by the States Parties to the present Covenant to achieve the full realization of this right shall include those necessary for the conservation, the development and the diffusion of science and culture.

The nature of the right is such that it seeks to enable all persons who have not taken part in scientific progress or innovation to share in enjoying the benefits (Adalsteinsson and Thórhallsson 1999, 575–578). In that sense, it has the objective of protecting the rights of both scientists and the general public. This provision has barely been invoked in practice, but it might arguably apply to counter aggressive content moderation practices of social media companies vis-à-vis copyrighted materials. Digital platform policies routinely lay out the circumstances in which copyright-infringing material may be subject to content moderation actions (TikTok 2021). As will be shown in the next chapter, ‘freedom from censorship’ is one of the content moderation-related standards often proposed in civil society initiatives. An aspect of this civil society-advocated freedom is the right not to be subjected to onerous copyright restrictions.

The specific content of this ‘right to science’, and the attendant obligations, nonetheless remain uncertain. As a second-generation right, the realisation of the right to science is progressive, which means that states owe no immediate obligations. One of the progressive state obligations relevant to the question at hand is to take the necessary steps towards the diffusion of scientific outputs. According to an interpretation by the UN Special Rapporteur in the field of Cultural Rights, the right involves two key sub-rights (Special Rapporteur in the field of Cultural Rights 2012, paras 25, 43–44; General Comment 25 2020, para 74). The first sub-right concerns the right of individuals to be protected from the adverse effects of scientific progress. The other dimension relates to the right to public participation in decision-making about science and its uses.

While the ensuing human rights obligations fall on states, these rights appear to resemble civil society content governance standards.Footnote 9 In the context of platform content moderation, the first sub-right would, for instance, require measures to prevent harm and safeguard social groups, including vulnerable groups, on social media platforms. The second sub-right might concern meaningful participation in the development of moderation policies. This dimension of the right to science likewise finds a parallel in recent attempts by various actors, including civil society groups, to define the human rights responsibilities of digital platforms. We will return to this point in the next chapter.

In summary, a number of rights in international law potentially provide high-level normative guidance to content governance. But the fact that these standards are generic in formulation, and have yet to be unpacked by authoritative bodies, means that this potential is unlikely to find meaningful practical application. Further compounding this limitation are the complexities that the state-centred nature of the standards engenders. The UN Guiding Principles on Business and Human Rights not only are somewhat more specific in formulation but also address corporations more directly in human rights language. What follows explores this point further, particularly whether the UN business and human rights framework is fit for the purpose of digital content governance.

The UN framework on business and human rights, also referred to as the Ruggie Principles (named after their drafter John Ruggie, the late Special Representative of the UN Secretary-General for Business and Human Rights), is the second potential source of specific international content governance standards. The Ruggie Principles is currently the only international instrument that seeks to address the conduct of businesses and the attendant impact on human rights (Guiding Principles on Business and Human Rights 2011). But it primarily affirms states as the sole and primary duty-bearers in human rights law, and hence it introduces no new obligations (Guiding Principles on Business and Human Rights 2011, part I). This means, in turn, that businesses, including technology companies, bear no human rights duties in international law. In human rights law, as highlighted at the outset, states’ human rights obligations are of two types: negative and positive. The positive obligation concerns the duty to ensure that human rights are not violated by third parties, including companies. Beyond elaborating this duty of the state, the UNGPs also introduces human rights ‘responsibilities’ for businesses.

Structurally, the UNGPs is organised around three key normative pillars. First, it reaffirms and slightly unpacks states’ human rights duty to protect in the specific context of human rights violations by businesses (Guiding Principles on Business and Human Rights 2011, part I, paras 1–10). It does so by requiring states to put in place the requisite legal and regulatory framework to prevent, investigate, punish and redress abuses committed by businesses. But the duty to ‘protect’ would also apply in instances where states co-own businesses or otherwise deal with businesses whose conduct raises human rights concerns. Second, the Ruggie Principles imposes a ‘responsibility’ to respect on businesses (Guiding Principles on Business and Human Rights 2011, part II, paras 11–24). Three points are worth noting regarding this aspect of the UNGPs.

One is the terminological choice. While states owe obligations or duties, businesses bear mere corporate ‘responsibilities’, which entail no legal consequences but only moral obligations. Not complying with the Principles would thus not amount to a violation of international law but merely to a disregard of global expectations (Oliva 2020, 616). The other is that, unlike for states, the responsibility is only to ‘respect’ (that is, a negative responsibility); businesses are hence not expected to ‘protect’ human rights. As part of the corporate responsibility to respect, businesses are ‘expected’ not to cause or contribute to human rights violations and, where violations occur, to address the ensuing impact. To meet this responsibility, businesses are expected to take the following steps: (a) to make a policy commitment to respect human rights in their operations and dealings, (b) to undertake due diligence to prevent human rights violations and (c) to put in place processes of remediation.

Third, the UNGPs envisages standards for the provision of remedies both by states and by businesses (Guiding Principles on Business and Human Rights 2011, part III, paras 25–31). For states, it stipulates that the duty to ‘protect’ embodies the obligation to put in place avenues of remediation for victims of human rights violations, which could be either state-based or non-state-based. Businesses, on the other hand, are expected to institute ‘operational-level’ mechanisms for handling grievances. The Ruggie Principles also encourages other avenues of remediation through industry and multistakeholder initiatives (Guiding Principles on Business and Human Rights 2011, part III, para 30).

The adoption of the Ruggie Principles is, no doubt, a significant development in terms of addressing corporations in human rights parlance. But neither its development nor its application thus far has focused on technology companies, including digital platforms. We return to this point in Sect. 3.4; what immediately follows discusses how the emergent standards being introduced by intergovernmental mandates on freedom of expression address technology companies more directly. As shall be shown, this signifies further progress in the international law discourse relating to digital content governance.

3.3 Emergent Progressive Standards

Relatively progressive content moderation-related international standards are emerging through the work of UN special mandates, particularly the former UN Special Rapporteur on Freedom of Expression. In a series of reports, the former Special Rapporteur, Kaye, examined, as highlighted above, whether and to what extent international human rights law offers normative guidance in addressing the impact of content moderation practices on freedom of expression. But his role as part of the coalition of intergovernmental mandates on freedom of expression has been more significant in terms of outlining more progressive as well as normatively stronger standards in the area of content moderation. Since 1999, intergovernmental mandates on freedom of expression, including the former UN Special Rapporteur, have issued ‘Joint Declarations’ on various themes relating to freedom of expression and media freedom.Footnote 10

The declarations constitute international soft law that tends to unpack general free speech standards envisaged in the ICCPR—the main international hard law that guarantees the right to freedom of expression—as well as regional human rights treaties such as the European Convention on Human Rights and Fundamental Freedoms (ECHR) and the African Charter on Human and Peoples’ Rights. The 2019 Declaration, for instance, states in its Preamble that the prime aim of the Joint Declarations is one of “interpreting human rights guarantees thereby providing guidance to governments, civil society organisations, legal professionals, journalists, media outlets, academics and the business sector” (emphasis added) (Joint Declaration 2019, preamble, para 3). It further provides that the Joint Declarations have—over the years—“contributed to the establishment of authoritative standards” on various aspects of free speech (emphasis added) (Joint Declaration 2019, preamble, para 4).

Intermediary liability is one of the content governance-related themes addressed at length in the joint declarations. As alluded to above, current international law standards on free speech are couched in general terms and offer little guidance on the complex and granular nature of content moderation practices. Apart from the right to freedom of expression guaranteed in the ICCPR and subsequent specialised human rights treaties, no international law instrument addresses the specific aspects of free speech protection in the digital environment. In particular, the role of intermediaries such as social media platforms in curating and moderating content online is not addressed in international hard law. Intermediaries play a key role in the enjoyment of the right to freedom of expression online, which makes appropriate regulation of their conduct an imperative. If intermediaries were to play an active role in moderating content circulating on their platforms so as to avoid liability, the free speech rights and interests of users would be seriously curtailed. The objective of a fair intermediary liability regime, then, is to define the exceptional circumstances in which intermediaries would be held liable for the problematic content of their users. In offering standards on intermediary liability, the joint declarations fill, to an extent, this normative void in international law.

Subject to narrow exceptions, international human rights law, as alluded to above, imposes no direct duty on non-state actors, including corporations. But the joint declarations send mixed signals. For instance, the 2019 Joint Declaration states in its preamble that intermediaries such as social media companies owe a responsibility, not a duty, to ‘respect’ human rights. In so saying, and in line with the Ruggie Principles, it absolves corporations from any human rights duty. But in its operative provisions, the Declaration appears to address intermediaries more directly. As soft law, it cannot introduce binding obligations, but it tends to use compulsory terms when addressing intermediaries. Paragraph 1(d) of the Joint Declaration adopted in 2017 provides the general principle of intermediary liability as follows:

Intermediaries should never be liable for any third-party content relating to those services unless they specifically intervene in that content or refuse to obey an order adopted in accordance with due process guarantees by an independent, impartial, authoritative oversight body (such as a court) to remove it and they have the technical capacity to do that.

This general principle is further elaborated through specific principles. One such principle requires intermediaries to put in place clear and predetermined content moderation policies developed in consultation with users (Joint Declaration 2017, para 4(a)). Such rules must set out objectively justifiable criteria that are not driven by political or ideological goals. Part of this requirement is that the content moderation policies of intermediaries, including the modalities of their enforcement, should be easily intelligible and accessible to users (Joint Declaration 2017, para 4(b)). This principle appears to reflect what international human rights law calls the requirement of ‘legality’. The other specific principle stipulates that intermediaries should institute minimum due process guarantees, subject only to reasonable legal or practical constraints (Joint Declaration 2017, para 4(c)). This is mainly in two respects. One is that they should provide prompt ‘notification’ to users whose content may be subjected to a ‘content action’ such as a takedown. The second is that intermediaries should put in place avenues by which users may challenge impending content actions. In the latter case, the Declaration mandates the need to ensure the coherence and consistency of content moderation decisions. The Declaration requires intermediaries to apply these standards to any automated (e.g. algorithmic) content moderation processes, but it permits an exemption for “legitimate competitive or operational needs” (Joint Declaration 2017, para 4(d)). This reinforces the exemption, noted in the general principle above, that intermediaries may decline to enforce court takedown orders when the measures are not technically feasible.

Joint declarations adopted in the following years reinforce the above standards in particular contexts. The 2020 Declaration, for instance, addresses the role of online intermediaries in relation to elections (Joint Declaration 2020, arts 1(c)(iv), 2(a)(ii)). The 2021 Joint Declaration addresses freedom of expression and political actors, offering a series of recommendations for social media companies. Among other things, it calls upon social media platforms to ensure that their content moderation rules, systems and practices are clear and consistent with international human rights standards (Joint Declaration 2021, art 4). Particular focus is given in this Declaration to political advertisements. Social media platforms are called upon to adopt rules governing political advertisements which should (a) be clear and non-discriminatory, (b) be labelled as such, (c) indicate the identity of the sponsor, (d) enable users to opt out of targeting and (e) be archived for future reference. The 2022 Joint Declaration deals with issues at the intersection of gender justice and freedom of expression. More relevant to the question at hand is that the Declaration calls upon digital platforms to ensure that content moderation policies and automated systems do not discriminate on the basis of gender or amplify and sustain gender stereotypes (Joint Declaration 2022, arts 1(e), 4(e), 5(c)).

Notably, the norms highlighted above introduce progressive standards on content governance, partly influenced by the work of civil society groups, including Internet bills of rights. Soft law generally offers authoritative interpretation of high-level principles of international hard law, but the approach in the joint declarations raises questions of form and substance in international law. One such question is whether a soft human rights instrument drawing upon a human rights treaty can directly address non-state actors that are not party to the underlying treaty. Related to this is the question of whether reading binding obligations into general binding instruments through progressive interpretation in soft law is tenable. Stated differently, the legal status of soft law of this form would ultimately undercut its normative value unless, of course, intermediaries choose to follow it regardless.

Another avenue by which such elaborative soft law may earn greater legal authority is if, for instance, the HR Committee were to draw upon the declarations. That way, the progressive standards provided in the joint declarations would gain a wider audience and greater jurisprudential value. Yet interpretive bodies such as the Committee are shackled by structural constraints that make it hard for them to develop elaborate content governance jurisprudence. We address this particular point in the next section.

3.4 Regulatory Limits

To what extent does international law offer the much sought-after normative guidance to platform governance? This section seeks to address this question in light of the sketch of the normative sources of international platform governance law in the preceding sections. First, we consider the ways in which the design of the relevant rules undercuts the potential of international law to offer normative guidance on the governance of and in digital platforms. Second, we consider the structural challenges that compound the design constraints, namely the lack of robust oversight and accountability mechanisms.

3.4.1 Design Constraints

The design constraints of the international law of platform governance relate to the inherent normative characteristics of human rights standards more broadly. By design, international human rights law is state-centred. Only states are directly involved in its making, and only states are ultimately obliged to respect and protect human rights. Non-state actors such as digital platforms may in one way or another play a role in shaping the making of rules of international law, but they are not subject to human rights obligations. International law delegates the regulation of the conduct of digital platforms to national law.

International human rights treaties are rarely universally ratified, which means that there will be states with no underlying human rights obligation to put in place the requisite legal and regulatory framework applicable to digital platforms. At the time of writing, the ICCPR has 173 state parties while the ICESCR has been ratified by 171 states.Footnote 11 A much lower level of ratification has so far been recorded for the protocols of both Covenants that envisage an individual communications procedure. This leaves dozens of states with no obligation to legislate on content governance. Of course, treaty ratification is no guarantee of robust domestic regulation. Many states might not be willing or able to follow through on their human rights commitments. Indeed, recent regulatory initiatives in some jurisdictions appear to be prompted more by jurisdiction-specific considerations than by human rights commitments, a good case in point being the recent deluge of ‘fake news’ legislation in many African countries (Garbe et al. 2021).

A related design constraint is that the relevant international standards are formulated in an exceedingly generic manner. Generic formulation of norms is generally desirable because it allows rules to apply across time and across rapidly changing technological environments. But the particularly truncated nature of international law standards relating to content governance undercuts their potential. Regulation of content on digital platforms requires rules that attend to the complexities and dynamism of the digital platform ecosystem; generic international human rights standards are not suited to this reality. We will return in the next section to how structural problems compound this design constraint. In the meantime, it suffices to note that institutional arrangements in the international human rights regime offer only weak oversight mechanisms that are further beset by structural problems. This means that there is little jurisprudence that would shed light on the generically formulated standards.

It has been suggested that generic international law standards are sufficiently unpacked in the jurisprudence of regional and national courts (Kaye 2019, 42; Aswad 2018, 26, 58–59). But as proponents of adapting international law standards to content moderation acknowledge, not all requirements of Article 19(3) can readily be transposed. For instance, corporations, unlike states, cannot invoke ‘public/national security’ or ‘public health’ as a legitimate aim when restricting speech on their platforms (Lwin 2020, 69–70; Aswad 2020, 657–658). Yet in practice they somehow do. One recalls here the suspension of the Facebook and Instagram accounts of former US President Donald Trump, which essentially invoked public security/safety as a legitimate aim for the decision (Zuckerberg 2021). In the wake of the COVID-19 pandemic, social media companies have likewise updated their policies to attend to health-related speech. But the question of whether platforms can invoke such objectives, legitimate aims normally reserved to states, remains open.

A version of the design constraint relates to the applicability of the UN business and human rights framework to digital platforms. The development of the UNGPs did not originally have technology businesses in its sights. This is surprising in light of the time of its adoption, as recently as 2011. Nor has the work of the Working Group that oversees the Ruggie Principles considered technology companies over the past decade. Transnational corporations operating in the mining and petroleum industries, among other business sectors, that pose tangible, brick-and-mortar human rights concerns were, and remain, the prime focus. For instance, a 2016 report of the Working Group focuses on the “human rights impact of agroindustry operations, particularly the production of palm oil and sugarcane, on indigenous peoples and local communities” (Report of the Working Group 2016).

An earlier report of the Working Group, from 2014, even indicated that its priority areas for the future would be promoting the incorporation of the Guiding Principles in the policy frameworks of ‘international institutions’ (Report of the Working Group 2014, para 84; Report of the Working Group 2021). A recent ‘stocktaking’ report of the Working Group explicitly acknowledges the hitherto exclusive focus on brick-and-mortar corporations and signals a shift towards technology companies in the future (Report of the Working Group 2021, paras 66, 74). This might gradually go some way in bringing technology companies onto the radar of the Working Group. In this respect, the 2022 report of the UN Office of the High Commissioner for Human Rights (OHCHR) charts the path, mapping to a degree the ways in which the Ruggie Principles may apply to technology companies (Report of the High Commissioner for Human Rights 2022). By way of a side note, it is vital to flag that the preparation and content of this report were informed by input from experts and the work of stakeholders from different geographic regions. This reinforces the increasing cross-fertilisation of norms between civil society initiatives and international human rights standards.

The inherently generic formulation of the Ruggie Principles further limits its potential application to digital platforms. Indeed, the scope of the Ruggie Principles is defined so as to apply to businesses of all types and sectors (Guiding Principles on Business and Human Rights 2011, part II, para 14). This means that the Principles would, theoretically, apply to technology companies, including social media platforms. But how they would apply to them in practice is uncertain, particularly because of the generic formulation of the principles. Attempts to adapt the Principles to the world of content moderation, alluded to above, may gradually help refashion the content governance policies and practices of digital platforms. Yet this avenue is not only rife with uncertainties but also hinges very much on the goodwill of platforms and their readiness to heed civil society advocacy.

Exacerbating the uncertainty are the resulting discrepancies in approach across platforms. Such uneven platform policies and practices inevitably affect the rights of users. Greater certainty would have come from authoritative guidance by adjudicative bodies such as the HR Committee. But as the next section illustrates, such bodies operate in a framework crippled by structural constraints.

3.4.2 Structural Constraints

Structural constraints of the international law of content governance relate to the lack of effective institutional arrangements that would translate or apply generic and state-centred standards to the unique realities of the digital platform ecosystem. A characteristic feature of the international human rights system is its reliance on human rights bodies that operate within a framework which does not permit the development of an elaborate and dynamic jurisprudence. Ad hoc oversight bodies organised into committees and working groups are the only international mechanisms of human rights accountability. But these committees and working groups are composed of experts who work part-time. For instance, the HR Committee, which is responsible for overseeing the ICCPR, is a body of 18 part-time experts who meet only three times a year, and only for four weeks each time (International Covenant on Civil and Political Rights 1966, art 28). Moreover, these part-time experts are expected to review state periodic reports (and adopt concluding observations), examine individual communications, develop general comments and hear reports of rapporteurs on the follow-up of views and concluding observations. Added to this is the frequent reshuffling of the Committee’s membership, due to term limits and other factors, which hampers the development of a coherent rights jurisprudence (Yilma 2023, Chap. 2).

Perhaps in an attempt to fill this void, special procedures of the Human Rights Council have taken steps to translate generic content governance standards to the platform context. In particular, the former UN Special Rapporteur on Freedom of Expression and Opinion, Kaye, has issued several instructive thematic reports. Kaye often argues that the Guiding Principles provide ‘baseline approaches’, a useful ‘starting point’ that all technology companies should adopt (Report of the Special Rapporteur on Freedom of Expression and Opinion 2018a, para 70; Report of the Special Rapporteur on Freedom of Expression and Opinion 2016, paras 10, 14).Footnote 12 The UNGPs, according to Kaye, provide a ‘global standard of expected conduct’ for social media companies (Report of the Special Rapporteur on Freedom of Expression and Opinion 2018b, para 21). A recent report of the OHCHR likewise claims that the Ruggie Principles are an “authoritative global standard for preventing and addressing human rights harms connected to business activity, including in the technology sector” (Report of the High Commissioner for Human Rights 2022, paras 7–8). Their authority and legitimacy, the OHCHR further claims, flow from the fact that the instrument was endorsed by the Human Rights Council with wide private sector support and participation.

Kaye elaborates on how free speech principles should apply in specific contexts. With respect to hate speech on platforms, for instance, he argues that social media companies should have an ongoing process to determine how hate speech affects human rights, institute mechanisms for drawing on the input of stakeholders, including potentially affected groups, regularly evaluate the effectiveness of their measures, subject their policies to external review for the sake of transparency, and train their content policy teams and moderators on human rights norms (Report of the Special Rapporteur on Freedom of Opinion and Expression 2019, paras 44–45). His report on Artificial Intelligence (AI) similarly reads into the Ruggie Principles specific responsibilities of social media platforms. Among other points, it states that platforms should make high-level policy commitments to respect the human rights of users in all AI applications, avoid causing or contributing to adverse human rights impacts through the use of AI technologies, conduct due diligence on AI systems to identify and address potential human rights concerns, conduct ongoing review of AI-related activities, including through consultations, and provide accessible remedies when human rights harms are caused by the use of AI technologies (Report of the Special Rapporteur on Freedom of Expression and Opinion 2018b, para 21).

Nevertheless, such expansive interpretation of rather crude human rights norms and principles can be problematic. In particular, the problem relates to the normative authority of the interpretive exercises as well as of the underlying legal instruments. The Ruggie Principles are, for instance, soft law carrying no binding obligations. That the UNGPs are inherently non-binding means that adherence by businesses is voluntary. Even if more states were willing and able to regulate businesses in their jurisdictions, the outcome would be an uneven level of protection among states. This is undesirable in Internet regulation, which inherently involves transnational issues, including digital rights protection. Neither the joint declarations nor the reports of UN special rapporteurs carry the type of authority needed to ensure compliance with international content governance standards, including the Ruggie Principles. The reports set out ways in which those standards would apply to the unique and specific context of social media companies. But the non-binding nature of such reports means that acceptance by platforms is entirely voluntary. In international law-making, reports of UN special rapporteurs do not count as standard-setting instruments but as ones that offer intellectual guidance on specific human rights standards. They might, however, carry some legal effect in offering authoritative interpretations of treaty provisions or other soft law instruments in a manner suited to particular contexts, such as that of technology companies.

Such an elaborative role helps provide normative guidance to companies as well as states. But as even advocates of adapting international law standards, including the Ruggie Principles, to the social media context acknowledge, the scope and content of corporate human rights responsibilities are still in the process of development in international law (McGregor et al. 2020, 326). Together with the frequent exhortations of civil society groups on the normative value of the Ruggie Principles, emerging attempts at translating the UNGPs to the digital context could contribute to the crystallisation and further development of international standards on content governance. But this only means that the topic is in a state of flux, which, in turn, makes it less suited to the exigencies of platform governance.

GNI’s independent assessment of technology companies is a form of institutional oversight. Member technology companies undergo periodic assessment of their performance against the GNI Principles. But the mechanism comes with structural limitations of its own. One is that the membership of technology companies is remarkably small: at the time of writing, 13 such companies are members of the initiative, and only a few of them—namely, Facebook and Google—are actively involved in content governance.Footnote 13 Another shortcoming is that the assessment is undertaken only every two or three years, which makes it less responsive to the dynamism of the digital ecosystem. More crucially, as a voluntary oversight scheme, all the GNI Board could issue—based on an independent assessment—is a determination of whether the companies have made “good-faith efforts to implement the GNI Principles with improvement over time”. The latest determination of the Board, for instance, has been in the affirmative (Global Network Initiative 2019). These shortcomings appear to limit the value of the independent assessment mechanism in holding technology companies to account.

In the past few years, two developments have emerged that seek to increase the normative value of the UNGPs, including by extending their applicability to the technology sector. One relates to attempts to crystallise the Ruggie Principles into hard law—that is, a treaty. But the draft treaty being negotiated by states has little to offer when it comes to technology companies and human rights (Third Draft Business and Human Rights Treaty 2021). Not only does the draft treaty fail to pay specific attention to technology companies, which wield enormous power in the digital age, but it also does not address businesses directly. Perhaps a notable innovation of the draft treaty is that it introduces a committee, fashioned after human rights treaty bodies, that would oversee the implementation of the treaty. But what awaits the proposed Committee are the same structural setbacks that international institutional arrangements face. Operating part-time and meeting only a few times a year, the future Committee would probably not be able to make international law fit for purpose for the digital platform ecosystem.

A manifestation of the structural challenges is the general comment of the HR Committee on freedom of expression, which provides no meaningful guidance on topical issues of content governance (General Comment 34 2011). The Comment offers an authoritative interpretation of Article 19 of the ICCPR. Yet, although it was adopted as recently as 2011, it does not address content governance themes. Issues of content governance are unlikely to gain prominence before the HR Committee because of its ‘slow-moving’ jurisprudence as well as its part-time membership, which meets only three times a year. A further reflection of the ‘slow-moving’ nature of UN jurisprudence on content governance is that recent resolutions on freedom of expression pay no apparent attention to issues of content moderation. A good case in point is the series of resolutions adopted by the Human Rights Council under the label “Promotion, Protection and Enjoyment of Human Rights on the Internet”. While these resolutions—adopted intermittently since 2009—focus primarily on the challenges of upholding freedom of expression on the Internet, topical issues of platform content moderation practices receive no mention (HRC Res 47/16 2021). A more recent resolution ‘affirms’ the dictum that the same rights that people have offline must also be protected online. But content moderation by digital platforms is not addressed in any of the resolutions in meaningful detail. This is perhaps understandable, given the truncated nature of resolutions.

3.5 Filling a Void

This chapter explored whether, and the extent to which, the recent turn to international human rights law for content governance standards is a worthwhile exercise. It has shown that, beset by a host of design and structural constraints, international law does not offer meaningful normative guidance to the governance of and in digital platforms. A closer look at the current catalogue of international norms reveals that it remains uncertain just how international law would apply to the fast-moving, complex and voluminous nature of platform content governance. If major social media platforms such as Facebook follow through on their commitment to base their content moderation policies and practices in international human rights law, they would do so along with all of these uncertainties. In the absence of international oversight mechanisms, those companies are left with ample room to determine which international standards apply, and when and how, in carrying out routine content moderation measures. Meta’s inaugural human rights report, for instance, highlights how the company’s policies and practices are ‘informed’ by, ‘drew’ from and ‘build’ on international human rights standards (Meta 2022). Given platforms’ obvious business interests, this state of affairs diminishes the potential of international human rights law standards to provide reliable normative guidance on platform content governance.

A remarkable development, however, is that there are signs of a progressive articulation and development of international standards within civil society instruments on content governance, a theme explored at length in the next chapter. While international law provides the overarching framework, civil society instruments appear to offer progressive standards on content governance at two levels. At one level, civil society standards are addressed directly to private actors, including social media companies, as well as to states. This departs starkly from state-centred international standards. In doing so, however, civil society standards often tend to adapt state-centred international standards to social media companies; in that sense, there is a level of convergence between the two sources of content governance standards. At another level, civil society standards offer relatively detailed normative guidance on content governance, whereas such elaborate standards find only some form of high-level articulation in international law.

With the adoption of progressive standards in joint declarations, the normative cross-influence between international law and civil society standards is increasingly taking a new shape. This phenomenon raises the question of what role international law should, or could, play in platform governance. By advancing progressive content governance standards that revitalise generic and state-centred international law, civil society initiatives tend to fill the void in international law. Nevertheless, this growing normative progression would mean little in practice unless the standards find proper articulation and recognition in international law and are advocated actively by civil society groups so as to influence the policies and practices of digital platforms. The next chapter shows how civil society initiatives are seeking to reimagine the international law of content governance.