Seen from the offices of the CEOs, the current public turmoil faced by the tech giants must be rather confusing. On the one hand, agitated politicians and intellectuals demand that the companies engage still more in the removal of content—of “hate speech”, “fake news”, extremism, defamation, Russian bots, nipples, pejoratives and a wide range of other things. On the other hand, a number of politicians and intellectuals—among them the authors of this very book—accuse the giants of already removing far too much content, applying community standards that are at once narrow and vaguely worded and enforcing them in murky, unaccountable ways. Unfortunately, the former of the two tendencies seems to have the upper hand at the moment.

Many of the critical questions put to Zuckerberg during the congressional hearings he faced in 2018 revolved around more removal of content, and it seems politicians are able to use the problems with the tech giants as a pretext to demand de facto censorship which would otherwise be impossible under the constitutions of most countries. Oftentimes the point is put forward that tech giants must acknowledge their status as actual media outlets—that they have to assume publisher responsibility, make targeted edits to their content and assume the same legal status as print media. We think that is going down the wrong road. Tech giants are not media: they produce news or other content only marginally (Google, Amazon) or not at all (Facebook, Twitter). We will return to how they should indeed be categorized.

Still, the idea of tech giants as media outlets naturally leads decision makers to imagine how, as subjects of political monitoring, the giants should take over responsibility for the content uploaded on their platforms. In this matter, Europe leads the way. In May 2016, the European Union convinced Facebook, Microsoft, Twitter and YouTube to accept a “code of conduct” on “hate speech”, which required the companies to subject themselves to more detailed control and to respond to complaints within 24 hours.Footnote 1 The most ominous example of this tendency is the German legislation carrying the nebulous name “Netzwerkdurchsetzungsgesetz”—Network Implementation Law—nicknamed “The Facebook Law” or “The Heiko Maas Law” after the German Minister of Justice. It was adopted during 2017 and entered into full force on January 1st, 2018. Its purpose is to comb social networks for smear campaigns and “fake news” (it does not, however, cover email services and edited news sites). The law obliges social network providers of a certain size—such as Twitter and Facebook—to establish and staff a permanent reporting service and to remove from their servers, within 24 hours, all statements that are “clearly illegal”. In more complex cases, a 7-day deadline may be allowed so that the user’s side of the matter can be heard. In less clear-cut cases, the service can initiate “regulated self-regulation”, monitored by the Justice Department through twice-yearly reports; but this regulation, too, is run, staffed and paid for by the service provider itself. Objectionable statements not caught by the ever more advanced algorithms must be reported—on Facebook, for instance—by vigilant users. A person reports a posting he does not like, indicates which violation he thinks the given content is guilty of and adds a description of the context in which it appears.
Subsequently, Facebook—or some subcontractor entrusted with the task of control—decides within 24 hours whether to remove the utterance in question. If the company fails to take measures to remove controversial content, fines of up to 50 million euros can be issued, along with fines of up to 5 million euros levied on individual employees of the social media in question who are deemed responsible. Even during the preparation of this law, heavy criticism was voiced: it would breach the German constitution’s article on freedom of speech, and the short deadline and high fines would pressure the networks to remove all content under even the tiniest suspicion of containing reprehensible material. Another big problem is that the law effectively privatizes parts of the judicial as well as the executive powers, leaving tech giants to decide what falls within the scope of German law and whether to penalize users by removing content and imposing short-term, long-term or indefinite exclusion from the networks. Finally, it is extremely problematic that all of this takes place in a semi-automated fashion, without any public scrutiny or control beyond generic half-yearly reports.

One would think these issues might be serious enough. However, it gets worse. The German newspaper Frankfurter Allgemeine decided to investigate how this new informer service works. If and when a user wishes to complain about another’s posting, he is taken to a website listing 14 different categories of violations and has to classify the offending post. According to Frankfurter Allgemeine, the informer can choose between the following categories of violations: (1) spreading propaganda material from unconstitutional organizations; (2) using symbols of unconstitutional organizations; (3) preparing a serious act of violence against the state; (4) instigating the public to commit criminal acts; (5) disturbing the public peace by threatening to commit criminal acts; (6) forming terrorist associations; (7) inciting hate and violence (“Volksverhetzung”); (8) depicting violent acts; (9) insulting faiths, religious organizations or ideological associations (“Weltanschauungsvereinigungen”); (10) disseminating, acquiring or possessing child pornography (if disseminated via the Internet); (12) rewarding and approving of criminal acts; (13) unspecified “insult”—but also, as the newspaper states, otherwise non-criminalized acts such as (14) “treasonous falsifications”.Footnote 2 However, it is not correct that treasonous falsifications are not punishable; they fall under the rarely used Article 100a of the German Criminal Code.Footnote 3

In general terms, these infractions range from planning and threatening very serious crimes to things normally associated with ordinary debate, such as criticism of religions and ideologies, representations of violence, and unspecified insults (“Beleidigung”). These examples are included because they appear in the German Criminal Code. But in the absence of court cases with experienced and skilled judges, prosecutors and defense attorneys, how are we to trust that the newly hired staff at the German branch of Facebook are capable of distinguishing between “dead letter” articles of the law and still applicable ones? How are we to trust that relevant precedent is considered when particular articles are applied and contradictions between them weighed? And how are we to trust that the people involved understand that the article of the German Constitution on freedom of speech has historically kept the articles on, for instance, criticism of religion and insults within strict limits? Is it not more likely that Facebook will interpret these articles in the light of its own community standards, with their combination of ever tighter procedures and ever vaguer wordings?

The entry into force of the law was not hindered by the fact that Reporters sans Frontières and a growing freedom-of-speech movement in Germany had pointed to the serious encroachments on basic human rights it would entail. At the time of writing, the German branches of Facebook, Twitter and others are busy cleaning out content of the aforementioned kind, a job carried out by a large influx of new employees. Critics have dubbed the law an autocratic mechanism of censorship, with reference to totalitarian states and their control of the public sphere. They point out that the Belarusian dictator Lukashenko has drawn inspiration from this legislation. And as early as July 2018, Russia drafted legislation which essentially copy-pasted the German law made public months before. The German head of Reporters sans Frontières, Christian Mihr, said: “Our worst fears have become reality. The German law on Internet ‘hate speech’ now serves as a model for non-democratic states wishing to restrict debates online.”Footnote 4 Eight out of ten experts summoned to the German Bundestag claimed that enforcement of this law must be the responsibility of the government and not of private contractors. They point to a breach of the principle of proportionality in punishment, given the imbalance between the enormous fines and the nature of the actual offenses. Observers estimate that implementing the required control systems will cost social media in Germany around 500 million euros per year. At the time of writing, the EU Commission has blocked access to documents examining whether the law is even compatible with EU legislation, the European Human Rights Convention, and European laws on information society services.

However, the most unsettling aspect of this rushed legislation is that responsibility for this new form of censorship is left to private actors and their opaque setups of employees and subcontractors. Content is now removed from the public space without any warning, without any open case process, without any right of defense for the person responsible—and with limited recourse to appeal, a recourse decided by the networks themselves. No court, trial or sentence is involved—something which, mirrored in a Danish context, would bomb us back to before 1790, when court sentences became standard in matters of press freedom crimes. We have no reason to believe that tech giants focused solely on profit possess the journalistic, scientific or legal expertise to judge whether a reported statement is criminal or merely controversial. Or true, for that matter: how would a provider of computer services, lacking such skills, be able to decide whether a piece of news is “treasonous falsification”? As pointed out by critics of the German law, with its threat of fines of up to 50 million euros, the most likely scenario is that the tech giants will remove content in all cases of even the slightest doubt. For obvious reasons, both the German right and left have opposed this legislation—surely both sides can envision how their political statements might end up among those deleted by faceless hordes of content moderators.

An example from December 2017 illustrates the problem: a video clip of a pedestrian breaking into foul and racist language while passing a Jewish restaurant in Berlin. It was shared widely on Facebook, only to be quickly removed. After a while, the video returned to the platform. Facebook gave no information about the motivations behind either the removal or the reappearance of the video. But as stated in Frankfurter Allgemeine, one may assume it was removed because the passer-by was thought to be carrying out an act of “incitement to hate and violence”. It was presumably restored once it became clear that people had shared it in order to shame the individual in the video rather than to support his views, turning the act of sharing into a kind of shunning by quotation. From the primitive tools “share” and “like” alone, one cannot tell whether such clicking means that the clicker agrees with what is being liked or shared. Without any public insight into procedures and decisions, is there any way of knowing for sure that such subtle contextual reasoning will be applied in each individual case? We have no reason to believe that this kind of flash justice, performed by the German branch of Facebook, is capable of distinguishing quotation or irony from direct statements.

The law governing these networks appears to rely on the entirely untenable idea that identifying “clearly” criminal or controversial points of view is an easy and straightforward matter, even one that can be turned into an object of automated control. It completely overlooks the fact that in modern societies, such elementary distinctions are only put into use thanks to the ongoing and demanding work of the judicial system, serious media and scientific research—and that there is no shortcut to such justice via automated algorithmsFootnote 5 or anonymous, privately employed moderators pressed for time and of dubious training.

It is important to emphasize that this new German censorship practiced by social media has come about against the wishes of the tech giants, and that it places considerable economic and administrative burdens on them. It is the German government and Bundestag that have resorted to this peculiar, panicky measure, forcing the outsourcing of a crucial part of the legal enforcement of freedom of speech online to private, formally incompetent and reluctant players. The legal rights of large parts of the German public sphere remain entirely undefined and insufficient. These days, much of this public sphere plays out online via social media—including the many “traditional” media engaging with their audiences via pages on Facebook or Twitter. One might fear that the legislation currently being drafted in France, and the tech regulations possibly underway in the United States, will take inspiration from the German government and hand over more power to the tech giants rather than less.

At present, we know only the outline of the proposed French legislation, which is set to include a newly established Internet tribunal with fast-track case processing as well as tightened legislation on and control of political ads, especially around election time. Unlike the German law, case processing here remains within the judicial system. On the other hand, the new tribunal may be given very broad powers. Among other things, the alarming possibility of simply closing down media found guilty of “fake news” is currently being discussed.

In March 2019, a white supremacist assassin from Australia attacked two mosques in Christchurch, New Zealand, livestreaming his crime on Facebook as it happened. The incident provoked a new wave of strong demands for regulation of so-called “harmful content”, an umbrella term covering “fake news”, “hate speech” and graphic violence. Inspired by the German legislation, Australia passed a new law at the beginning of April. It threatens social media with heavy fines, and their top executives with jail sentences, if they fail to rapidly remove “abhorrent violent material” from their platforms. Such material comprises videos showing terrorist attacks, murder, rape or kidnapping; fines can reach 10% of annual turnover, and employees can face up to three years in prison.Footnote 6 Similar legislation is underway in New Zealand. In the UK, an even more comprehensive law has been drafted in a white paper, targeting harmful content including child exploitation, “fake news”, terrorist activity and extreme violence. British officials claim that the UK will be the “safest place in the world to be online”.Footnote 7 The legislation is to be policed by an independent regulatory body with the power to impose fines on tech companies and hold individual executives personally liable. Increasingly, tech giants are categorized along the lines of traditional media with publishing responsibilities. What began as internal regulation within individual companies is quickly developing in the direction of traditional centralized state censorship. The strange thing is that these tendencies are rarely discussed in the context of the fundamental political liberties enshrined in existing constitutions. Why should special legislation be developed for the internet when well-functioning, clearly defined free speech legislation exists in most Western countries, e.g. the First Amendment in the US?