In a Danish context, the first time the public was exposed to these flaws was in 2012. It was the seemingly banal case of documentary writer Peter Øvig Knudsen, who published the two-volume book Hippie about the Danish hippie movement and its so-called Thy Camp of 1970. The photo documentation of these books included photos from the camp, where hippies walked around naked, swam undressed and the like. Nudity was, of course, a central part of the hippie movement. These seemingly innocent documentary photos, however, turned out to violate the standards of several tech giants.

On November 1, 2012, it became apparent that Apple refused to market Øvig’s Hippie 2, on the grounds that the book contained photos of naked people and thus constituted indecent exposure. Øvig stated about Apple’s iBookstore: “If Apple becomes the most dominant book store in Denmark, we have a situation where censors located in the US, across the Atlantic, decide what books can be bought in the biggest book store in Denmark.” He then added: “This makes for a very unsettling future outlook. We have no influence over Apple, and our attempts at dialog with them reveal that they have no interest whatsoever in discussing this with us.”Footnote 1 Subsequently, the only reason the book could be purchased through Apple’s bookstore was that the publisher Gyldendal prepared a censored version, covering the more sensitive parts of the images. As an ironic reference to Apple, tiny red apples were placed across genitals and breasts in the photos. The Danish Minister of Culture, Uffe Elbæk, declined to do anything about the matter, noting that Apple is a private player free to set its own guidelines.

In the spring of 2013, the case spread to Facebook, where several of Øvig’s posts were deleted and he was suspended from the platform, again citing indecent exposure in some of the photos posted on his Facebook page. When he subsequently linked from the Facebook page to a website entirely outside of Facebook, describing and discussing what he perceived as censorship, his Facebook page was blocked again. Facebook would not even allow critical discussion of the case outside the reach of its own service. Television network TV2 asked Facebook’s Danish representative, Thomas Myrup Kristensen, what was wrong with nudity: “In and of itself, there is nothing wrong with nudity. Here in the Nordic countries, we are quite relaxed about nudity. But other cultures see things differently. At Facebook we have to protect a set of guidelines that covers our whole community of 1.1 billion people,” said Thomas Myrup Kristensen, adding: “There are rules as to how we define them. I cannot get into exactly how we do it. It is a matter of nude nipples not being allowed. They are not — that is what this is about.”Footnote 2 Author Peter Øvig’s problems would return in 2016 when he published a book on the Danish squatter BZ-Movement of the early 1980s. Once again, his Facebook account was suspended — this time because it contained a link to Øvig’s own website, which featured a photo from a nude protest organized by the BZ-movement in front of the Copenhagen Town Hall in 1983. This constitutes a detailed regulation that not only affects content posted on social media but also content for sale in bookstores, as well as content on private web pages entirely outside the realm of Facebook, as long as these pages are linked to from the person’s Facebook page. Or in Øvig’s own words: “Not being able to decide what to put on your own website is simply bizarre.”Footnote 3

It may seem a peripheral and harmless matter whether or not tech giants prevent old photos from the Danish 1970s left wing from spreading on the web. And it is almost comical to follow how, for example, Facebook’s struggle against nudity has migrated into the world of art and cultural history, removing, among other things, classical works of art such as Delacroix’s emblematic “La Liberté guidant le Peuple” (1830), where the goddess of freedom, Marianne, leads revolutionary France in battle, equipped with the French tricolore, a bayonetted musket and bare breasts. Another removal was Courbet’s famous painting from 1866, “L’Origine du Monde”, featuring a close-up of a straddling woman, as well as iconic press photos from modern times such as “Terror of War”, Nick Ut’s Pulitzer Prize-winning photo of naked Vietnamese children fleeing from an American napalm attack in 1972. Even Hans Holbein’s drawing of Erasmus of Rotterdam’s naked hand from around 1532 did not make it through.Footnote 4 Over the past ten years, an extensive and still ongoing confrontation has emerged between Facebook’s removal policy and activist groups of breastfeeding mothers who, not entirely without reason, consider Facebook’s removal of their happy selfie photos of babies sucking their breasts as just another element in a broader suppression of breastfeeding mothers in the public sphere. The fact that Facebook categorizes such photos of mother-child idyll with relatively limited nudity as “obscene” or “adult” has probably offended these mothers just as much as, if not more than, those images offended the sensitive users who felt they needed to complain about them to begin with.Footnote 5 In terms of nudity in advertisements, Facebook’s policy is even more restrictive, which has made Belgian museums desperate—they can no longer advertise classic masterpieces such as “Adam and Eve” by Rubens. Therefore, they submitted an official complaint to Mark Zuckerberg in July 2018.Footnote 6

However, the somewhat comical fear of nudity illustrates a much bigger and more fundamental problem: the tech giants’ increasing removal of user content. To a large extent, the tech giants long managed to keep their machinery of removal hidden from the public, but as communications researcher Tarleton Gillespie points out, content moderation is one of the essential services provided by the platforms—it might even be part of what defines them.Footnote 7 Although the companies do not produce content, their reason for being—and what separates them from the remaining unfiltered web around them—lies in their services of moderation, prioritization and content curation. In the eyes of the tech giants, the fact that most users do not discover this machinery and perceive the platform as open and uncurated is a sign of the very success of the removal: when many users perceive the platforms as open to all ideas, it is because they have never run into problems themselves. This is not the case for the many users who have actually had their content removed and their accounts blocked. With billions of users, handling removals can of course not be a marginal task; on the contrary, it is so extensive that the machinery of removal must operate on an industrial scale. Due to a lack of transparency, it is difficult to get reliable numbers on these removals, but in a 2014 TED Talk, Twitter Vice President Del Harvey said: “Given the scale that Twitter is at, a one-in-a-million chance happens five hundred times a day. (...) Say 99.999 percent of tweets pose no risk to anyone whatsoever. There are no threats involved... After you take out that 99.999 percent, the tiny percentage of tweets remaining works out to roughly 150,000 per month.”Footnote 8 Obviously, this number has increased since. In March 2017, Mark Zuckerberg mentioned that Facebook gets “millions” of complaints each week (for more exact 2018 numbers, see ch. 12).Footnote 9 Although the percentage of controversial posts may be small, in absolute numbers they are enormous, and many users are exposed to removals and sanctions which take a lot of resources to decide on and carry out.

As noted by Gillespie, most removal activity by the tech giants takes place on a number of levels, as part of an intricate collaboration scheme.Footnote 10 At the highest level is the company top management. Right below is the department responsible for ongoing updates of the removal policy, which consists of only a few highly placed persons. Below them are the management and recruitment of the crowdworkers who do the actual implementation of the rules by removing content and imposing sanctions on users who have posted non-compliant content. These are usually low-paid and loosely employed people, who can be hired and fired according to the given workload, far away from the lush main offices in Palo Alto, in places like Dublin in Ireland, Hyderabad in India or Manila in the Philippines. In addition, there is an increasing amount of automated artificial intelligence which—unlike the human censors working with already posted material—flags uploaded material before it is posted online so that it can be removed immediately, in some cases even without human involvement. The efficiency of AI in this context is often exaggerated, however. In 2016, it was reported that Facebook’s AI now finds more offensive photos than the hired staff does.Footnote 11 But the computers merely flag the photos, which then still require a subsequent human check by a removal worker.

Although the development of such AI is highly prioritized in the companies’ R&D departments, Gillespie points to its inherently conservative character. AI still suffers from the problem that the machine-learning process by which the software is trained can only teach the artificial intelligence to recognize images and text closely analogous to the applied set of training examples, which are essentially selected by people. Moreover, the AI software is not semantically savvy—it recognizes only proxies for the problematic content, not the content itself. For example, large areas of skin tones are a proxy for nudity and sex. The exact wording of certain sexual slang, pejoratives, swearwords or threats is a proxy for the presence of porn, “hate speech” or threatening behavior. This means that the AI detectors produce large amounts of “false positives” (e.g. sunsets with color nuances of pale skin) as well as “false negatives” (e.g. porn images mostly featuring dressed models). It also implies that they are easy for creative users to fool by writing “fokking” instead of “fucking”, “bo@bs” instead of “boobs”, etc., or by coloring the pornographic models in colors far from that of pale skin. Something similar applies to political and religious terms, where it is possible to just write “mussels” instead of “Muslims”, “migr@nts” instead of “migrants”, etc. Of course, the programs can be revised so as to crack down on such creative spelling as well, but this requires ongoing human input and revision which cannot in itself be automated. Another way is to identify controversial content individually, piece by piece—images can, for instance, be identified by a pixel sequence so that they can also be recognized as parts of edited or manipulated versions. This technology was developed to curb child pornography, but it requires that an extensive “library” of such banned images be kept, against which elements found online can then be automatically checked.
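The proxy problem described above can be made concrete with a deliberately naive sketch. The word list and function below are hypothetical, not any platform’s actual filter; real systems use trained models, but they share the same weakness of matching surface features rather than meaning.

```python
# A deliberately naive sketch of proxy-based text filtering.
# The banned-word list is invented for illustration only.

BANNED = {"fucking", "boobs", "muslims"}

def naive_flag(post: str) -> bool:
    """Flag a post if any token matches the banned list."""
    tokens = post.lower().split()
    return any(token in BANNED for token in tokens)

# A literal match is caught ...
print(naive_flag("some fucking post"))   # True
# ... but trivial misspellings slip through ("false negatives"):
print(naive_flag("some fokking post"))   # False
print(naive_flag("bo@bs"))               # False
```

The sketch shows why such filters need ongoing human revision: each new creative spelling has to be added by hand, and the filter itself never understands what it is blocking.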
Such technology was used in 2016 to fight terrorism, as part of a collaboration between Facebook, Microsoft, Twitter, and YouTube. They set up a shared database of already identified terrorist content, so that online reuse of this content could be removed automatically.Footnote 12 However, this technology can of course only recognize already identified content, and it takes continuous human effort to update the database with new content found online. It is another variant of the overall fact that the software cannot recognize pornographic, terrorist or threatening content, otherwise instantly recognizable to humans, unless similar material has been anticipated in the software’s training syllabus. Despite the techno-optimism inherent to the industry, it does not seem as if AI will be able—any time soon at least—to respond to new types of controversial content without human monitoring. For example, AI will not perceive new terrorist threats with a different political agenda than already known ones.
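The shared-database approach described here can be sketched as follows. All names and data are invented for illustration; production systems use perceptual hashes that survive re-encoding and small edits, whereas the exact cryptographic hash below only demonstrates the lookup principle.

```python
import hashlib

# Hypothetical database of fingerprints of content that human
# moderators have already identified as banned.
banned_hashes = set()

def fingerprint(data: bytes) -> str:
    """Reduce a piece of content to a short, comparable fingerprint."""
    return hashlib.sha256(data).hexdigest()

def register_banned(data: bytes) -> None:
    """Called after a human has identified the content as banned."""
    banned_hashes.add(fingerprint(data))

def is_known_banned(data: bytes) -> bool:
    """Automatic upload-time check: catches re-uploads of already
    identified content, but lets genuinely new content pass."""
    return fingerprint(data) in banned_hashes

register_banned(b"<already identified propaganda video>")
print(is_known_banned(b"<already identified propaganda video>"))  # True
print(is_known_banned(b"<new, never-seen video>"))                # False
```

The second lookup illustrates the limitation noted in the text: the check is purely retrospective, so genuinely new material always requires fresh human identification before it can be caught.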

In addition to these human and mechanical removal procedures, most tech giants also rely on contributions from users, who have the option to complain about content they have witnessed on the platform by flagging it.Footnote 13 The use of this option constitutes, in a sense, user co-creation of the site as a way to patrol its borders, fitting nicely into many of the tech giants’ idyllic self-descriptions as self-organized “communities”. This is not free from problems, however. The users are no neutral, impartial police force. They are better compared to a self-organized militia or “posse” of a particular group or urban neighborhood, which may be capable of carrying out certain protective tasks, but which may also have agendas of its own and is not bound by the rule of law, let alone familiar with it. Of course, the trend is for the most annoyed or quarrelsome users to be the ones filing the most complaints—rather than just leaving the content in question. No one is forced to stick around and watch. There is no guarantee whatsoever that complainants primarily complain to defend the platform policy. In fact, there is no guarantee that complainants even know about the policy. Other motivations for complaints range from diffusely sensing something to be inappropriate, to perceived offense (whether or not the offense involves a violation of any given rule), to offense on behalf of others (one is not personally offended but believes that the content may offend someone else, whom complainants feel more or less in their right to defend), to moralist pushiness, to taking action against views one disagrees with, to private revenge against people belonging to an opposing group. Sometimes groups of users even join forces to carry out coordinated flagging of specific persons, groups, or views, simply to have them removed from the platform. What is more, there is not even any guarantee that the complaint in fact relates to the content in question.
These circumstances are reinforced by the fact that the complainant remains anonymous both to the removal staff and to the accused party, the latter of whom is not informed about who reported them or over what exact aspect of the content—users are completely free to complain in the sense that they cannot be criticized or accused of filing fake complaints. In most cases, communication to the proscribed user seems rudimentary. Often it is not made clear exactly what content was deemed problematic or what rule was allegedly violated, appeal mechanisms are not mentioned, and with most tech giants it is notoriously difficult to get in contact with staff members to object to an unfair decision.

Some additional details affect this flagging procedure and the subsequent brief checks by tech giant staff members (often outsourced to other companies, such as TaskUs, which have specialized in content removal). One is the use of “super users” whose complaint patterns enjoy particular confidence and whose complaints are almost automatically favored. For example, YouTube has a “Trusted Flagger” category, in 2016 expanded and renamed as nothing less than “YouTube Heroes”. The program turns flagging into a kind of computer game with points and prizes. Conversely, tech giants appear to be working with lists of particularly suspect users who could somehow be expected to post questionable content—possibly based on posts previously complained about and removed, or simply based on their suspicious behavior on the site or elsewhere on the Internet. Such problematic users may subsequently be subject to special monitoring aimed at quick and consistent crackdowns.
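How such differential trust in flaggers might work can be sketched as follows. All names, weights and thresholds here are invented for illustration; the platforms do not publish how flagger reliability is actually weighted.

```python
from collections import defaultdict

# Hypothetical reliability scores: trusted flaggers near 1.0,
# users with a history of unfounded complaints near 0.
reporter_trust = {
    "trusted_flagger": 0.95,    # e.g. a "Trusted Flagger"-style account
    "ordinary_user": 0.30,
    "serial_complainer": 0.05,  # history of unfounded complaints
}

REVIEW_THRESHOLD = 0.9  # invented cutoff for prioritized human review

def review_scores(flags):
    """Sum reporter trust per flagged item: one trusted flag can
    outweigh many flags from low-trust accounts."""
    scores = defaultdict(float)
    for reporter, content_id in flags:
        scores[content_id] += reporter_trust.get(reporter, 0.30)
    return dict(scores)

flags = [
    ("trusted_flagger", "video_1"),
    ("serial_complainer", "video_2"),
    ("serial_complainer", "video_2"),
    ("serial_complainer", "video_2"),
]
scores = review_scores(flags)
print(scores["video_1"] >= REVIEW_THRESHOLD)  # True: prioritized
print(scores["video_2"] >= REVIEW_THRESHOLD)  # False: deprioritized
```

The design choice illustrated is the one the text describes: the system ranks complaints by who complains, not only by what is complained about, which is precisely why the anonymity and opacity of the scheme matter.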

Finally, this entire complex internal structure is continuously affected from the outside by leaked cases discussed and criticized on the site, elsewhere on the web or in traditional media—which, in some cases, may put pressure on or jeopardize the reputation of the tech giant in question, prompting it to modify its set of rules or practices, publicly or covertly. Given the large number of complaints and the many levels of the complaint mechanism, nobody can expect consistency in the decisions, and there are many reports of people who have had their accounts deleted for posts that are nevertheless still up on other user profiles.

User flagging has several advantages for the tech giants. In a certain sense, it outsources an enormous and unforeseeable task to users who work for free. It may also seem to support the belief of tech leaders that the removal of content is carried out by a self-regulating “community”—as if the complainants were in mutual agreement and as if they were not complaining about widely differing things with widely differing motives. Sometimes, the tech elite comes off as if, by definition, there are no other problems on their platform than the ones flagged by users—as when Zuckerberg spoke at the congressional hearings. These romantic ideas are of course contradicted by the simple fact that it is the companies who make and enforce the rules, not the users, who act only as reviewers (or, some would say, snitches).

Various deviations from and varieties of this general scenario exist. Filtering can be a supplement or an alternative to removal, so that certain content categories are reserved for certain users. For example, some platforms use filtering for certain categories—such as Tumblr with porn—in a setupFootnote 14 where users themselves rate their posts based on how pornographic they are. That way, only users who have previously accepted it have access to that content category. Certain services have “safe search”, an option in which users beforehand deselect different categories of content from their searches. However, as noted by Gillespie, not enabling “safe search” does not mean that the user gets access to all content when doing neutral searches (porn movies do not appear when searching “movies”, only “porn movies”), so even with “safe search” disabled, moderation takes place. Filtering is of course a milder kind of control than the far more widespread policy of removal, but if it is not done according to very explicit principles, making the user aware of what is going on, it naturally increases the risk of filter bubbles.

Even Tumblr’s relatively liberal stance towards porn has come under attack. In December 2018, the platform announced a full prohibition of all pornographic material. The new rule prohibits “Adult Content. Don’t upload images, videos, or GIFs that show real-life human genitals or female-presenting nipples — this includes content that is so photorealistic that it could be mistaken for featuring real-life humans (nice try, though). Certain types of artistic, educational, newsworthy, or political content featuring nudity are fine. Don’t upload any content, including images, videos, GIFs, or illustrations, that depicts sex acts.”Footnote 15 The argument behind the move seems to be that, in order to accommodate the App Store policy against child pornography, it was deemed safest to remove the much larger category of pornography in general, so as to avoid the issue of borderline cases between child porn and normal porn. This illiberal step gave rise to an outcry from the LGBT community, which had seen Tumblr as a forum for self-expression and experimentation. At the same time, the step seems to corroborate the suspicion that Apple, given the App Store’s central role as a gateway to Apple devices, is increasingly setting the limits for online expression and imposing them on the other tech companies.

All in all, content removal is a complex process involving many levels. Coordination and communication between the layers is a problem in and of itself, aggravated by the existence of different versions of the same set of rules. One may ask why content removal—not only on Facebook but with most tech giants—must be kept so secret. Media researcher Sarah Roberts highlights several reasons: concealing the detailed removal criteria may prevent users from trying to bypass or game the rules, and secrecy also hides the elementary fact that the entire platform is the result of ongoing active selection based on monetary motives.Footnote 16 One might add that the whole ideology of a free, self-organizing community is probably best maintained if the extent of the removal industry does not see the light of day. At the same time, secrecy gives users intolerable conditions when the rules approach the status of laws—since the Renaissance, a core aspect of the idea of the rule of law has been that laws must be public.

For a long time now, websites have existed that document this censorship, as it is often directly dubbed. When looking at how hyper-detailed and sensitive the services are when it comes to targeting content and ads at individual users, it is curious to see how insensitive their community standards have generally been. By default, they apply to the whole world, to “our entire community”, as the Danish Facebook spokesperson said in sugar-coated terms. The standards are not in any way adapted to cultural or geographical, let alone jurisdictional, differences. They largely reflect the worldview of a small elite of wealthy men and software engineers at the top of Silicon Valley. In most Western countries, these community standards are more stringent than local laws on expression, whereas in other cases, especially in the Middle East and Asia, they may be more lenient. But the principle that the standards should be the same throughout the world gives them a natural tendency to be organized around the lowest common denominator. If something is tabooed or feels offensive in one part of the world, then via the community standards that taboo is extended to all users across the world. Øvig’s harmless images of naked hippie breasts did not raise many eyebrows in Denmark, where his book was initially published. But they became a problem because in other, more straitlaced places of the world, there are people who claim to be shocked at the sight of people’s natural bodies. Something similar goes for the critique of religion, which is widely practiced in some places and even regarded as an intellectual endeavor of a certain standard, while elsewhere in the world it is regarded as blasphemy and may even invoke the death penalty. In that arena, the tendency is also for the lowest common denominator to prevail.
When it comes to “hate speech”, there are very different standards—as mentioned, it is not a crime in the US, but in many European and other countries it is, and it usually also appears among expressions regulated by tech giant standards.

Although by default the same standards apply to the whole world, in recent years pressure from dictatorships and authoritarian regimes has made several tech giants tighten the criteria to align them with local laws. For instance, Facebook gave in to Turkish pressure by forbidding specific critique of Islam on the Turkish version of Facebook—including a ban on caricature images of the prophet Muhammad. That happened in January 2015, only two weeks after Zuckerberg, prompted by the Charlie Hebdo massacre in Paris, had proudly stated: “We never let one country or group of people dictate what people can share across the world.”Footnote 17 In 2010, American illustrator Molly Norris announced “Everybody Draw Mohammed Day”, based on the idea that if the Internet drowned in drawings of Muhammad, censorship of them would become practically impossible. Pakistan filed protests to Facebook and threatened to close down the service. Norris’s initiative was not covered by Facebook’s rulebook, so Facebook’s improvised reaction was to remove Norris’s big Facebook group, with hundreds of thousands of users in Pakistan, India and Bangladesh. Even Google makes such local concessions: on the Russian version of Google Maps, Crimea is now part of the Russian Federation. Thus, when the principle of one and the same standard for the whole world is departed from, the tendency is towards further local narrowing of what can be said, not towards liberalization.Footnote 18 There is no general openness or transparency about the range and nature of these local compromises made with non-democratic regimes. Gillespie points out that such modifications—just like personal filter bubbles—are invisible, because not only is content removed, there is also no mention of said removal: the problem is “... the obscured obscuring of contentious material from the public realm for some, not for others.” It creates filter bubbles on the national or religious level, so to speak.
The “global community” may now look completely different in different countries and may not be as global as once proclaimed.

Looking at the standards of the different tech giants, we find something striking. From the Pinterest “Terms of Service”: “You grant Pinterest and our users a non-exclusive, royalty-free, transferable, sublicensable, worldwide license to use, store, display, reproduce, save, modify, create derivative works, perform, and distribute your User Content on Pinterest solely for the purposes of operating, developing, providing, and using Pinterest. Nothing in these Terms restricts other legal rights Pinterest may have to User Content, for example under other licenses. We reserve the right to remove or modify User Content, or change the way it’s used in Pinterest, for any reason”.Footnote 19 As with other tech giants, users give the company the right, to a surprising extent, to use their personally uploaded content in a number of ways, including commercial ones. But the crucial part here is the last three words: “for any reason”. The right to remove or modify user content is reserved for any reason. Similar unspecified formulations are found in many tech giant policies, which may talk about removing content “at any time” or introduce lists of critical content with the phrase “... including (but not limited to).”Footnote 20 This is analogous to censorship laws which fail to specify exactly which types of statements are in fact criminalized. In the absence of such specification, the laws may be applied to an unlimited number of undefined infringements; potentially to all kinds of statements. The legal protection of the user is non-existent if the extent of what can be removed remains unknown. In a Danish context, we are somehow back at the Danish police-issued censorship of 1773, under which a police commissioner had the sovereign right to confiscate whichever books he deemed illegal, without any prior court decision.
It is also characteristic that content removed by the tech giants for any reason is removed offhand: without the criteria being clear, without it being known whether the removal was manual or automatic, without the user being informed why, without anything resembling a court decision in which arguments pro and contra can be put forward on a sound legal basis, and finally without any formal or clear path of appeal. As for Pinterest’s “Terms of Service”, the policy on prohibited content fits on a single line: “Do not post pornography or spam or act like a jerk in front of others on Pinterest.” This may sound funky, idiomatic and straightforward, but it lacks any more precise description of what types of actions make one count as “a jerk”. This gives the company a legal license to do as it pleases.

The standard procedure for content removal on Facebook starts when users view content that they find problematic or offensive. They then decide to flag it, that is, to submit a complaint, which then constitutes the basis for removal. It is then up to Facebook’s safety and security staff—a team consisting of 7,500 people as of early 2018, but which, after Facebook’s crisis in the spring of 2018, was slated for quick expansion to around 30,000, many of them allegedly based in the Philippines—to check the given content and determine which complaints should be favored. This essentially means that the needs of “offended” users are met—but it also means that only problems visible to the users can be addressed.Footnote 21 At the congressional hearings in April 2018, when commenting on various problematic issues raised by US politicians, Zuckerberg repeatedly said that Facebook would obviously take care of these issues if, it should be noted, any users filed relevant complaints. This is to say that if no users were complaining, then there could be no real problem and the politicians need not worry. According to this logic, it is the complainants (and the advertisers) who set the agenda, and they in fact comprise the sole specification or correction of the company guidelines. Many have discovered this and are able to abuse it to organize social media shitstorms against views and people they dislike—who may then be removed from the platform by its speedy representatives, see below. According to the same logic, structural, general, and statistical aspects of the algorithms—invisible from the viewpoint of single users—could never appear as real problems the company ought to address.

As mentioned earlier, manual removal of content in response to complaints is increasingly supported by automated removal based on algorithms that are constantly developed and modified. There has been no public access to either the algorithms themselves or the exact criteria by which removals are carried out. The criteria are revised and modified on a regular basis, and rarely communicated to the public. At the hearings in April 2018, Zuckerberg placed the main responsibility for the recent removal of “terrorist” content from Facebook on the algorithms, claiming that 99% of ISIS-related material was now removed automatically. He repeatedly cited further development and sophistication of AI-based algorithms as the solution to many of the issues highlighted by members of Congress—including the removal of many different types of content, both those already covered by the standards and content found problematic by various politicians. But the public has no insight into, or guarantee of, whether all the aforementioned deleted ISIS posts actually contained serious acts such as threats, incitement to violence, organization of terrorist cells, etc. Needless to say, the writers of this book have not the remotest sympathy with ISIS, but that is exactly what makes these cases suitable to illustrate the problem. Interpretations of, for instance, ISIS theology, its understanding of government, politics, strategy, propaganda, etc. should not be removed, in our opinion, and certainly not automatically and without control. The classic line of argument in defense of the position that such content should also be tolerated goes as follows: if not all views—including abominable ones—are allowed, the result is a public sphere characterized by dishonesty, pretense and hypocrisy, because people are forced to hide what they really mean.
If certain views are banned, they will not simply disappear, but will instead organize underground—for instance on the dark web—where they may gain a sheen of martyrdom, become radicalized and grow even more difficult to control. The result is a fragmentation of the public sphere. It becomes harder for the public, for researchers and for intelligence services alike to understand, for example, what kind of phenomenon ISIS is, if no one has access to its distorted views. And if radical views are rejected, supporters of such views are more likely to conclude that democracy does not include them and that they must therefore take anti-democratic action. One could also add that the presence of grotesque and extreme views in the public sphere serves a diversity of functions. It does not only facilitate the recruitment of more supporters. It also gives the public the opportunity to be aware of, be disgusted by and renounce such positions—like a vaccine that readies society to battle against them, a readiness which may lose force if these ideologies develop in more clandestine fashion.

The censorship procedure not only includes the removal of content, be it algorithmic censorship before the fact or manual censorship after the fact. It also involves categorizing certain types of users as suspicious and then subjecting them to special monitoring of their online behavior. Categorization as suspicious can of course follow from the simple fact that the person has repeatedly had content removed, manually or algorithmically. But it can also follow from a significant change in one’s metadata—the data mapping one’s network connectivity and online behavior. If that behavior suddenly involves new groups, not to mention groups in contradiction to one another, or if it comes off as striking in any other way, then one’s “standing” with the tech giant can be changed from light green to red, which means that the user’s behavior is subjected to special monitoring in order to crack down immediately if the user posts something that violates the community standards. Monitoring users happens, of course, primarily for advertising and commercial purposes—but it also has a political dimension, so to speak, in that it categorizes users before the fact, if they are thought possibly to violate the community standards.Footnote 22 In August 2018, the public was given a small taste of this when Facebook announced its use of a Trust Index to rate users on a scale between 0 and 1. The purpose is to identify malicious users to crack down upon. The index most likely includes repeated violators of the community rules. According to Facebook, this credibility assessment also seems to include whether users have reported something as false which was in fact only a matter of disagreement—apparently the first attempt at possible sanctioning of abuse of the flagging feature. Similarly, the index also includes which publishers are considered credible by users.
As always, there is no transparency about how these comprehensive credibility assessments are arrived at, whether all users are assigned a credibility score—and there is also no indication of whether it is even possible for users to gain insight into their own index and potentially file an appeal in case of error.Footnote 23

The overall picture is that, by using their de facto monopoly, the tech giants are actually in the process of turning their community standards into the new limits of the public sphere. From their origin as editorial principles, these guidelines are slowly transforming into de facto censorship legislation. And these new limits are far narrower than the broad limits of freedom of speech generally set out in modern liberal democracies.