1 Introduction

Artificial intelligence (AI) has overtaken the “metaverse”Footnote 1 as the technology topic of the day. However, despite headlines announcing the demise of the metaverse (such as Olinga (2023)), rumors of its death have been much exaggerated. Companies are certainly scaling back investments, as they are in many areas, but development continues. Mark Zuckerberg, CEO of the “metaverse company,” Meta, pushed back against the narrative “that we’re somehow moving away from focusing on the metaverse vision,” reiterating that Meta remains committed to investing in the metaverse as a long-term project (Dean 2023). Apple plans to launch a mixed reality (MR) headset in early 2024 (Apple 2023), and the government of China, which last year released a five-year action plan on virtual reality (VR),Footnote 2 remains committed to following through on industrial applications even as consumer investment pulls back (Jiang 2023).

This cooling of investment may benefit governance efforts. The EU is preparing to regulate “virtual worlds” after several Citizens’ Panels (European Commission 2023). One issue that must be considered, but that will be difficult to regulate domestically, is cross-border content moderation, which has implications for freedom of expression worldwide. In general, online platforms remove content based on the laws of the countries in which they operate and have users, either proactively or in response to takedown requests from governments. This problem is theoretically well-characterized: platforms remove or restrict locally illegal content for the users and/or website version based in that country (Goldman 2021). However, this commentary will argue that, in reality, economic and political factors significantly complicate cross-border content moderation. In the metaverse, both the politics and the fundamental question of for whom and where to remove content will be much trickier: users from different jurisdictions will mingle in the same environment, and the immersive nature of the metaverse means that expression goes beyond speech to avatar representation and behavior, making the question of what is allowed even more political and pressing. The sense of presence created by immersion means that experiences trigger the same physiological and psychological responses as in the physical world (Parsons et al. 2009), and on platforms where users are represented by an avatar, they may identify with their avatar as an extension or representation of their own self (Freeman et al. 2020). Experiences, including negative ones, thus impact users much like experiences in the physical world; hate speech in the metaverse, for example, could have an even more deleterious impact than hate speech on a traditional social media platform.

In this commentary, I will address the difficulties metaverse platforms will face in cross-border content moderation, especially in cases where required removals violate a platform’s stated values, and how these difficulties may impact the rights to freedom of expression and non-discrimination guaranteed by the Universal Declaration of Human RightsFootnote 3 (UDHR). I will also consider the precedents (and lack thereof) set by proto-metaverses such as Minecraft and Roblox, as well as by traditional social media platforms.

2 The Difficulties of the Inevitable Cross-Border Moderation Conflicts

At the most fundamental level, content moderation is a question of what to remove, where, and for whom, as well as how. The metaverse presents difficulties across all four. I will first address the how. Moderating user-created worlds has already proven challenging for Meta’s metaverse platform, Horizon Worlds. In Horizon Worlds, journalists created a world called the “Qniverse” and filled it with misinformation and other content that violated Meta’s Community Guidelines. Meta did not remove it even after multiple users reported it; it was only removed when the journalists inquired directly with Meta’s communications department (Baker-White 2022). Horizon Worlds has only around 200,000 active users (Horwitz et al. 2022), but scale is already proving to be an issue. Automated moderation of immersive content, which encompasses not just posts but entire worlds, avatars, and other user-created content, is far less straightforward than moderating 2D audiovisual content. However, if metaverse platforms become as popular as they themselves predict, human moderation will not be feasible at the scale required. New processes will have to be developed to ensure that content is appropriate (to say nothing of user behavior).
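
To make the scale problem concrete, consider a minimal sketch of automated triage for user-created worlds. Everything in it is hypothetical: the asset model, the keyword set standing in for real policy classifiers, and the review-capacity parameter are invented for illustration and do not describe any platform’s actual systems.

```python
# Hypothetical sketch of automated triage for user-created worlds.
# The asset model, keyword "classifier," and capacity limit are invented.
from dataclasses import dataclass, field

@dataclass
class Asset:
    kind: str      # e.g., "text", "audio_transcript", "object_metadata"
    content: str   # textual representation available to classifiers

@dataclass
class World:
    world_id: str
    assets: list[Asset] = field(default_factory=list)
    reports: int = 0  # user reports received

BANNED_PHRASES = {"example banned phrase"}  # stand-in for real policy models

def flag_world(world: World) -> bool:
    """Cheap automated pass: flag a world if any asset trips the keyword
    check or the world has accumulated enough user reports."""
    tripped = any(
        phrase in asset.content.lower()
        for asset in world.assets
        for phrase in BANNED_PHRASES
    )
    return tripped or world.reports >= 10

def triage(worlds: list[World], human_review_capacity: int) -> list[World]:
    """Everything flagged still needs a human decision; reviews are ordered
    by report count, and whatever exceeds capacity simply waits."""
    flagged = [w for w in worlds if flag_world(w)]
    flagged.sort(key=lambda w: w.reports, reverse=True)
    return flagged[:human_review_capacity]  # the backlog is the scale problem
```

However good the automated pass gets, every flagged world still lands in a finite human review queue; at the scale platforms predict for themselves, the backlog, not detection, is where moderation breaks down.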

This brings us to the question of what is to be removed. Both platform standards and local laws must be considered. Platform standards are not always easy to apply, though. One theory the “Qniverse” journalists put forth was that moderators may have thought the world was parody, which is difficult to judge when the content in question is an immersive world rather than text, images, or videos, which may contain more obvious context clues identifying their parodic nature. If this is difficult for human moderators, it will be even more so for automated moderation tools.

Local laws present a deeper problem. Metaverse and proto-metaverse platforms require their users, and their content moderation practices, to abide by local laws, but local laws may conflict with these platforms’ stated values, which include community diversity (Roblox Community Standards 2023), inclusion (Community Standards for Minecraft 2023), and the ability to express yourself (Customize Your Meta Avatar With New Body Shapes, Hair and Clothing Textures, and More Ways to Express Yourself 2023). Under laws restricting free expression across the world, not just content but also user behavior and expression may be subject to moderation.Footnote 4 Countries have always had different standards regarding what speech is acceptable online, and governments often issue takedown requests to platforms over content that violates local law, but the embodied nature of metaverse platforms creates new avenues of restriction. For example, in 2022, Vladimir Putin signed into law an amendment to Russia’s existing “gay propaganda law,”Footnote 5 banning all LGBTQ+ “propaganda” in media and prohibiting positive depictions of LGBTQ+ relationships (Rajvanshi 2022), which could include the mere existence of avatars presenting an LGBTQ+ identity. The law has already been used to fine TikTok for not taking down LGBTQ+ content (Marrow 2022) and to arrest two gay content creators (Padgett 2023). This is one entry in a long list of laws restricting the expression of LGBTQ+ identity, including in Hungary (whose law was noted as similar to Russia’s in a European Parliament resolution condemning itFootnote 6) and Uganda (Bhandari 2023). Under Russia’s law, displaying LGBTQ+ identity in online spaces is illegal, meaning that metaverse platforms could be forced to prohibit Russian users from accessing spaces where any portrayal whatsoever of LGBTQ+ content could be present (i.e., everywhere), restrict LGBTQ+ content to specific areas where Russian users are prohibited (infringing on the freedom of expression of all non-Russian users), or selectively alter the appearance of avatars expressing LGBTQ+ identities so that they appear “straight” to Russian users (another severe violation of non-Russian users’ autonomy and freedom of expression). Restricting LGBTQ+ expression on metaverse platforms would also be overtly discriminatory, violating Article 7 of the UDHR. These same questions apply to any case where some jurisdictions ban expression that others do not.
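
The three options above can be restated as rendering policies. The sketch below is purely illustrative; the enum, the function, and the notion of a per-viewer “resolve” step are invented here, and no platform is known to implement any of these strategies. What the sketch makes explicit is that each branch shifts the rights violation onto a different party.

```python
# Illustrative sketch of the three compliance strategies discussed above.
# All identifiers are hypothetical; no platform is known to implement this.
from enum import Enum, auto
from typing import Optional

class Strategy(Enum):
    BLOCK_ACCESS = auto()      # bar restricted users from the space entirely
    GEOFENCE_CONTENT = auto()  # confine the expression to zones they cannot enter
    ALTER_RENDERING = auto()   # re-render avatars per the viewer's jurisdiction

def resolve_view(avatar_expression: str, viewer_country: str,
                 banned_in: frozenset, strategy: Strategy) -> Optional[str]:
    """Return what a viewer sees of another user's avatar, or None if the
    encounter never happens. Each branch harms a different party's rights."""
    if viewer_country not in banned_in:
        return avatar_expression   # unrestricted viewers see the avatar as-is
    if strategy is Strategy.BLOCK_ACCESS:
        return None                # restricted viewers excluded from the space
    if strategy is Strategy.GEOFENCE_CONTENT:
        return None                # expression pushed where the viewer cannot go
    # ALTER_RENDERING: the avatar owner's self-presentation is silently
    # rewritten for this viewer -- the autonomy violation noted in the text
    return "altered rendering"
```

No branch leaves every user’s view of the shared world intact, which suggests the conflict is structural rather than an implementation detail.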

The question of what to remove overlaps with that of for whom content is to be banned. Metaverse platforms would have to restrict content for users based in the country where it is illegal. If they remove content for only those users, the immersive nature of metaverse platforms would create situations where two users from different countries could be standing right next to each other and see completely different worlds, undermining any shared sense of reality. If they remove such content for all users, other users’ autonomy and freedom of expression (which includes the right to “seek and impart information” according to Article 19 of the UDHR) will be infringed upon. Many social media platforms ban content like pro-Nazi propaganda wholesale, even though it is illegal in GermanyFootnote 7 but not elsewhere; this is generally accepted as benefitting everyone’s online experience. Returning to the example of LGBTQ+ content, though, it would be discriminatory and severely curtail freedom of expression to ban it altogether. Despite this, platforms may face pressure from governments to ban content broadly. In January 2023, the Indian government invoked emergency laws to block a BBC documentary unfavorable to Prime Minister Narendra Modi, ordering Twitter and YouTube to remove accounts sharing clips (Ellis-Petersen 2023). In an immersive metaverse, equivalent actions would be akin to storming movie theaters and television stations and confiscating all copies of a film, impacting not merely the content available to users but their very realities.
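
The trade-off between these two scopes can be shown in a few lines. The scene, items, and country codes below are invented for illustration (the item blocked in “IN” echoes the documentary example); no real platform API is implied.

```python
# Illustrative sketch of geographically scoped removal: the same shared scene
# renders differently for co-located users. All names and codes are invented.
from typing import NamedTuple

class Item(NamedTuple):
    label: str
    blocked_in: frozenset  # countries whose takedown orders cover this item

SCENE = [
    Item("documentary screening", frozenset({"IN"})),  # cf. the BBC example
    Item("news kiosk", frozenset()),
]

def render(scene: list, viewer_country: str) -> list:
    """Local-only scoping: hide items blocked in the viewer's country."""
    return [item.label for item in scene if viewer_country not in item.blocked_in]

# Two avatars standing side by side in the same room:
print(render(SCENE, "IN"))  # ['news kiosk']
print(render(SCENE, "FR"))  # ['documentary screening', 'news kiosk']
# Global scoping would delete the item for everyone instead, trading the
# shared-reality problem for an Article 19 problem.
```

Local-only scoping preserves Article 19 rights outside the requesting country at the price of a fragmented shared reality; global scoping preserves the shared world at the price of everyone’s right to seek and impart that information.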

The final issue platforms face regarding content moderation is where content is to be removed, i.e., on which version of the site, and whether different standards apply to public and private spaces. It has already been established that the very premise of the metaverse is that there are no separate versions for different countries, but the public/private divide question remains. The journalists who created the “Qniverse” flagged the lack of clarity around Horizon Worlds community standards and content moderation in public versus private spaces, even though Facebook’s Community Standards apply to both public and private groups (Alison 2020). Issues of who can access what content are likely to persist in private spaces as well, but could be more egregious there because users may expect that their private spaces are places where they can express themselves freely. Restricting specific content for users from specific countries in private spaces would violate that expectation and infringe on not only people’s social experiences but also their freedom of expression.

3 Proto-metaverse and Social Media Platform Precedent

One might expect proto-metaverses like Roblox and Minecraft to provide precedent for how platforms can deal with these issues. After all, many of the above content moderation questions also apply to non-immersive platforms where users are depicted via avatars. However, neither metaverse nor proto-metaverse platforms have had to address significant cross-border content moderation issues. For metaverse platforms, part of the reason may be their operating area or scale. Meta’s Horizon Worlds is (as of May 2023) only available in seven countries, all in Europe and North America (Supported Countries for Meta Horizon Worlds 2022), and daily active VRChat users number in the tens of thousands (VRChat API Metrics n.d.). Proto-metaverse platforms are larger, though. Roblox boasts millions of users in Russia and hosts games that would seem to fall afoul of the “gay propaganda law,” such as “LGBTQ+ Hangout” (Film Family 2023), so the government could theoretically take action at any time. However, doing so would likely provoke significant international outrage, and there is precedent for governments ignoring virtual worlds that technically violate their laws. A Minecraft server called the “Uncensored Library” hosts articles that have been banned by various governments and is accessible to Minecraft users in any country. It was widely celebrated in the media, which perhaps contributed to its security: any government forcing its takedown would face widespread condemnation. Still, Nick Feamster of the University of Chicago warned that “Governments will know about this - the articles are going across the internet. It’s not going to be foolproof against a determined adversary” (Gerken 2020). The unknown factor is at what point governments will consider it worthwhile to take on platforms over content they disagree with, and it is here that social media platforms provide some insight, revealing the complex political dynamics of cross-border content moderation.

Best-practices documentation for cross-border content moderation states that restriction of locally illegal content should be “geographically proportionate” and preserve the broadest availability of “legitimate content” (Internet & Jurisdiction Policy Network 2021). When defining “legitimate content” and restrictions, platforms should look to Article 19 of the UDHR and of the International Covenant on Civil and Political Rights (ICCPR)Footnote 8 (Internet & Jurisdiction Policy Network 2021), which protect the rights to freedom of opinion and expression, including the right to “seek, receive, and impart” information. Indeed, Meta states that if takedown requests are “inconsistent with international human rights standards” (Content Restrictions Based on Local Law n.d.), it may take no action. However, moderation decisions appear to be influenced more by political and commercial interests than by the multi-step evaluation process laid out by Meta. Meta’s transparency reports reveal that it is not acting on takedown requests from the Russian government regarding pro-Ukraine content (Case Studies n.d.), which could indicate that it is upholding its commitment to the right to free expression. Contradicting this is a May 2023 case regarding so-called anti-state content, in which Meta blocked access to 110 items alleging crimes and corruption by the Turkish government, content that did not violate Meta’s Community Standards, under threat of having Facebook banned altogether (Case Studies n.d.). Twitter similarly restricted anti-government accounts and Tweets, and owner Elon Musk offered insight into the decision when he responded to a critical Tweet: “The choice is to have Twitter throttled in its entirety or limit access to some tweets. Which do you want?” (Musk 2023).Footnote 9 Though the implication is that this was the trade-off necessary to protect free speech more broadly, commercial interests are no doubt at play, as no platform wants the bottom-line impact of being blocked (Gillespie 2018). Despite the similar nature of these cases, one reason Meta likely did not remove pro-Ukraine content at the behest of the Russian government is that Facebook is already blocked in Russia (Allyn and Selyukh 2022), so the government had limited leverage.Footnote 10 Furthermore, given the overwhelming pro-Ukraine sentiment in the West, complying with such requests would create a public relations nightmare that would damage Meta’s business elsewhere.
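
The evaluation flow that this best-practices documentation and Meta’s stated policy describe can be summarized as a small decision function. This is a hedged reconstruction from the sources cited above, not any platform’s actual process, and the closing comment marks exactly where the Turkey case diverges from it.

```python
# Sketch of the takedown-evaluation flow described in the best-practices
# documentation and Meta's stated policy; a reconstruction for illustration,
# not any platform's actual process.
from dataclasses import dataclass

@dataclass
class TakedownRequest:
    country: str
    violates_platform_standards: bool
    locally_illegal: bool
    consistent_with_human_rights: bool  # e.g., UDHR/ICCPR Article 19 review

def evaluate(req: TakedownRequest) -> str:
    if req.violates_platform_standards:
        return "remove globally"               # platform's own rules govern
    if not req.locally_illegal:
        return "no action"                     # no valid legal basis
    if not req.consistent_with_human_rights:
        return "no action"                     # Meta's stated policy for such requests
    return f"restrict in {req.country} only"   # geographically proportionate

# The unmodeled variable is leverage: in the Turkey case the request failed
# the human-rights test, but the credible threat of a nationwide ban produced
# a restriction anyway, while in Russia (where Facebook was already blocked)
# a similar request produced none.
```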

Thus far, proto-metaverse and metaverse platforms have not faced the content moderation battles that traditional social media companies have, likely due to their comparatively smaller size and reach. However, as they grow, it is almost certain that they will clash with governments over content moderation, and what happens then will depend on a variety of political factors independent of the human rights impact of banning the content. For platforms to truly avoid infringing on freedom of expression, they would have to concretely establish what values they want to uphold and stand by them, but this may come at the cost of their ability to operate in certain countries.

4 Conclusion

Metaverse platforms will be subject to the same political and economic pressures in content moderation as traditional social media platforms and proto-metaverses, but with higher stakes due to the implications for user identity and expression in an immersive context. And yet, they will probably be governed primarily by private companies, especially when it comes to content moderation. Any forthcoming EU metaverse regulation may spread to other jurisdictions via the Brussels Effect, but it will not be able to address cross-border content moderation. Asked in an interview how the metaverse will be governed, Meta CEO Mark Zuckerberg expressed a vision of a regime of industry-set standards (Newton 2021), which in reality would likely have to coexist with some amount of hard law. However, unless a standards body can devise a universally accepted standard for content moderation, platforms will continue to set their own, which risks creating a “Lowest Common Denominator Effect” where, for technical and political ease, metaverse platforms adopt the most restrictive content regulations worldwide, severely limiting user expression. Platforms are poised to have enormous control over metaverse content and thus over the realities of their users. Governments will have power as well, but only to restrict otherwise legitimate content, not to compel certain forms of speech to be permitted outside their borders.

The status quo, in which traditional social media platforms are locked in constant battles over availability, may spell doom for the vision of a universal Metaverse, unless it is one where freedom of expression is significantly restricted. Instead of accepting this dystopic vision, platforms should concede that the Metaverse will be a set of “splinterverses” with different markets and values, and (a) clarify how they will moderate content that is illegal in specific jurisdictions, (b) select and justify which markets they enter based on those markets’ laws and the platform’s professed values, and (c) be prepared to exit markets that become overly restrictive rather than capitulate to restrictions under threat of being banned. So long as different countries have different standards for free expression online, disparities in platform and content availability will exist. In the absence of international regulations establishing global content norms, the metaverse is set to be a critical new battleground in the fight for freedom of expression.