The Role of Civil Society

  • Frederik Stjernfelt
  • Anne Mette Lauritzen

Abstract

We cannot, however, rely on government-imposed regulation to solve all the problems of the Internet. Regulation can and should set a better framework than is currently in place, but by nature all regulation is general and framework-setting only, and nobody can expect it to solve every problem in detail. Malignant forces of many different kinds will continue to bypass, exploit and challenge even the best regulation. Therefore, actors within the public sphere and civil society must contribute to the preservation of freedom of expression on the Internet. The simplistic dichotomy between state and private often seems to assume that society consists only of a number of companies and private individuals on the one hand and a state on the other. From that plain dichotomy, it follows that policy must be determined by an ongoing arm-wrestle between the two. Such a polarized setup might give a rough picture of authoritarian societies. But painting the two in this oppositional way completely ignores the key role of civil society and the public sphere in modern democracies—in between the government sphere and the private sphere, so to speak. In this context, traditional media ought to adopt a leading role. It is well known that such media outlets are under pressure from the tech giants, primarily because crucial advertising income is migrating towards the tech giants and their attention economy. Print newspapers are shrinking, and people go online to find news. In this story of decay, many have been a bit too willing to see a seamless succession, in which new technology and new players replace old and outdated ones, just as cars replaced horse-drawn carriages and the drive-thru replaced roadside inns.
This widely held idea overlooks one fundamental fact: to a very large extent, the news content that people look for and find on the tech giants’ platforms is still produced by the old pre-Internet media outlets: newspapers, TV networks, publishing companies, film production companies, etc. The tech giants have indeed become great marketers of news and content, but they do not themselves produce the content they deliver. This is one of the reasons why the categorization of the giants as media outlets is wrong. Newspapers, media outlets and publishers embody free debate and Enlightenment principles in civil society, supplemented by public-private players such as universities and other research institutions, think tanks and philanthropic foundations. It is crucial that such media and institutions hold on to the elementary principles of free speech and do not resort to introducing community standards of their own. Moreover, they must keep promoting freedom of expression, which also includes points of view considered unpopular, provocative or grotesque by the fashions of the moment and the mainstream. “One man’s hate speech is another man’s truth” may not apply in every single case but should still count as a guiding motto.

In the 2017 Spielberg film The Post, Ben Bradlee, the famous 1970s–80s editor-in-chief of The Washington Post, is quoted as saying: “The only way to assert the right to publish is to publish.” This ethos must be followed by strong actors in civil society. Lengthy tribute speeches in praise of freedom of expression, often heard from the tech giants, can be gripping but are entirely useless if the companies are not willing to act on their principles. One of the inventors of virtual reality, Jaron Lanier, recently published a sharply written pamphlet encouraging people simply to delete their social media accounts. He advises people to read three independent news sites each day instead of their social media news feed, which would make them better informed in less time. The tech giants are harmful to truth and to politics, and they favor people who act like assholes, to paraphrase Lanier’s blunt characterization. However, he rejects the claim that the highly addictive tech giants are the tobacco industry of our day—there are, after all, also good sides to them. He prefers to compare them to lead paint: it was phased out little by little, but that did not make people believe they should stop painting their houses entirely. So his radical proposal is not to shut down social media but to push them to evolve away from their addictive, public-distorting traits. It seems doubtful, however, that Lanier will be able to provoke a mass movement that actually abandons the tech giants. But the threat in itself, and the rising debate in civil society, might result in pressure that gradually guides the companies in a better direction. Lanier suggests an alternative business model in which users pay for social media to be at their service rather than primarily serving the advertisers1—supplemented by the right of users to own the data they create. That would make the companies’ use of these data subject to payment, but payment in the opposite direction, to the users.
Such a solution would no doubt be more bureaucratic, although the relevant supporting algorithms could certainly be written. As to whether people would really be willing to pay for such services, Lanier points out that although they have long since become used to free content, they have still proved willing to pay for HBO and Netflix subscriptions to access quality television. So why should discerning consumers not also be willing to pay for a similar subscription to access high-quality social media? Lanier has sworn a solemn oath: he will not restore his accounts with the tech giants until he is allowed to pay for them.

In general, it is important that the public sphere and civil society continuously develop new tools to face new challenges. In early 2018, a number of civil liberty organizations—some old, some new—got together to discuss a shared concern: online freedom of expression.2 Over two meetings in January and May under the title “Content Moderation and Removal at Scale”, they articulated a possible set of principles for freedom of expression on the major Internet platforms, named the Santa Clara Principles after the venue of the first meeting, Santa Clara University in California. The objective was to achieve “... reasonable transparency and accountability on the Internet platforms.”3 The manifesto lists three basic principles for achieving this objective: 1) publication of the numbers of posts removed; 2) notice to users about removals; 3) the possibility of appealing any content removal.

Regarding removal of content, the general recommendation is: “Publish the numbers of posts removed and accounts permanently or temporarily suspended due to violations of their content guidelines.” The figures must include the number of complaints about posts and the number of posts and accounts deleted, organized by certain criteria: by the rules they are claimed to have violated; by format (text, image, video, etc.); by the type of complainant (government, employee, user, automated system); and by the geographical location of the parties involved (both the complaining and the accused party).
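The reporting dimensions listed above can be sketched as a small data model. The field names and categories below are our own illustrative assumptions; the Santa Clara document prescribes the dimensions, not any concrete format:

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical record of one moderation action; field names are
# illustrative, not taken from the Santa Clara Principles themselves.
@dataclass
class RemovalRecord:
    rule_violated: str      # e.g. "hate speech", "nudity"
    content_format: str     # "text", "image", "video", ...
    flagger_type: str       # "government", "employee", "user", "automated"
    flagger_country: str    # location of the complaining party
    poster_country: str     # location of the accused party

def transparency_report(records: list[RemovalRecord]) -> dict:
    """Aggregate removals along the dimensions the principles call for."""
    return {
        "total_removed": len(records),
        "by_rule": Counter(r.rule_violated for r in records),
        "by_format": Counter(r.content_format for r in records),
        "by_flagger": Counter(r.flagger_type for r in records),
        "by_location": Counter((r.flagger_country, r.poster_country)
                               for r in records),
    }
```

Each `Counter` corresponds to one of the breakdowns the recommendation asks platforms to publish.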

Regarding notification of the affected user about a removal, the general recommendation is: “The companies have to submit a notice to all users whose content has been removed or whose account has been suspended, giving the reason for the removal or suspension.” In general, the companies must provide clear guidelines with examples of both permissible and impermissible content. Any removal notice must contain: the URL, a content quote or another clear reference identifying the posting; the exact rule the content has allegedly violated; how the content was flagged and removed (individual complainants may, however, remain anonymous); and a guideline on how to appeal the decision.
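As a minimal sketch, the required elements of such a notice can be modeled as a record with a completeness check. The field names are our own assumptions, not wording from the principles:

```python
from dataclasses import dataclass, fields

# Hypothetical shape of a removal notice; the fields mirror the items
# the recommendation requires, under our own illustrative naming.
@dataclass
class RemovalNotice:
    content_reference: str    # URL, quote, or other clear identification
    rule_violated: str        # the exact rule allegedly violated
    detection_method: str     # how the content was flagged (flagger stays anonymous)
    appeal_instructions: str  # how to appeal the decision

def is_complete(notice: RemovalNotice) -> bool:
    """A notice satisfies the recommendation only if every item is present."""
    return all(getattr(notice, f.name).strip() for f in fields(notice))
```

A platform following the recommendation would refuse to send any notice for which `is_complete` returns `False`.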

Regarding appeal, the recommendation is: “The companies have to provide reasonable opportunity to lodge an appeal for any removal of content or suspension of accounts.” A proper appeal must include: a review carried out by a person or group of persons not involved in the original decision; an opportunity to present further arguments and have them included in the review; and an announcement of the results of the review in understandable language.

The Santa Clara Principles address the tech giants directly, requiring them to adjust their removal procedures according to the recommendations. There is no doubt that these principles would represent a very big step forward for user freedom of expression on the tech giant platforms. They would introduce clarity, openness and a rule-governed procedure, which is not the case at present.

However, there are a number of important questions that these principles do not address. It appears that the Santa Clara organizations are hoping that the tech giants will voluntarily adhere to their principles—government involvement in the matter is not mentioned. The open problems include, for example, the nature and scope of the platforms’ policies: Should tech giants be able to decide their own censorship policies freely? Should external control of their removal policies become a condition for accepting their monopolies? Are there limits to what may be prohibited? What is the relationship between the policies of the tech giants and free speech legislation in different countries? Should governments have influence over the policies, and why and how (or why not)? Also, the principles do not state (although they may imply it) that all removal of content should be subject to explicit rules (based on the old doctrine nulla poena sine lege—no punishment without law). Nor do they set any conditions for changing the rules—the tech giants are famous for continuously changing their rules, often without clearly announcing the changes or giving any clear date for their enforcement. And finally, the principles do not specify whether the rules are to be enforced or even controlled by a third party, a step that would make it difficult for the tech giants to keep acting as legislature, judiciary, and executive all at once.

In June 2018, we had the opportunity to speak to one of the people behind the Santa Clara document, Nate Cardozo of the Electronic Frontier Foundation in San Francisco. The EFF was founded in 1990 by John Perry Barlow, among others, to protect people and new technological tools on the emerging Internet from various legal threats, to monitor and criticize government interventions in the field, and to organize political action in support of personal freedoms online. Cardozo says that, until now, the EFF has focused on the problem of governments demanding that tech companies hand over data. But two recent developments have put censorship by tech giants on the agenda: the election of Trump and the advance of the Alt-right movement online: “The purpose of the SC manifesto is twofold—to try to make companies comply, but also, once companies accept those principles, we’ll have data about removals—before that we can’t even have an intelligent conversation.”

  • Do you imagine the SC principles being accepted voluntarily by the tech giants, or imposed by political enforcement?

  • “We have no unified view on that yet. The EFF comes from a more libertarian tradition, so we would never embrace political enforcement, but there should be political pressure for transparency. We’re not fans of government regulation, because the risk of error is extraordinarily high—if Congress were to write a law now, with the present power relations there and the present President, chances are we would not like it at all.”

  • But regulatory pressure is mounting in both the US and the EU?

  • “One point where there’s appetite on Capitol Hill is data protection like in the EU—which we at the EFF would be in favor of. One area of the EU GDPR that makes us nervous, however, is the “right to erasure” [the idea that people have a right to have claims made about them online removed, claims they do not like, even old statements made by themselves]. Another problem is that the tech giants have far more means of influencing legislation and adapting to it—Google has thousands of lawyers on staff, a startup has none. The other possibility is that government passes something toothless, which would just make matters worse, because then the political pressure would cease. Another area where we would welcome government intervention would be more initiative from the FTC (Federal Trade Commission) on enforcing the already existing rules for FIPs (Fair Information Practices), which are 20 years old but essentially unenforced as of yet.”

  • The whole complex of removal criteria is not addressed in the Santa Clara Principles?

  • “No, that’s correct. You’re right, they are quite fuzzy—vague, intentionally so, because that gives the companies a large room for manoeuvre. We talked about it but concluded: let’s not let the perfect be the enemy of the good; let us first see how far we can get with transparency and then increase the demands later.”

  • There is a tension between the broad freedom of expression granted by the First Amendment and the far narrower removal criteria of the tech giants?

  • “We are indeed hesitant to accept any criteria narrower than the First Amendment. On the other hand, platforms also have their own free speech rights. If you have a dog photo site, you have the right to delete cat photos. If you have a vegetarian site, you will delete photos of pork, etc.—of course you have a right to such things—the issue is how to balance these liberties. Infrastructure companies are easy to tackle—they should not censor at all! When Cloudflare and Google cut off the white supremacist site The Daily Stormer, which is not illegal under US law, we took a strong line against that.4 With things like Facebook it is different, but we’re moderating our stance a bit—we don’t say they should keep everything up all of the time, but they should keep up a hell of a lot more than they do today.”

  • Where do you stand on the suggestions for control of monopoly?

  • “We have an ongoing project at the EFF to find out what we mean by that. One thing we have decided on is to back the idea of ‘interoperability’—that is, the possibility of moving across the big tech companies online. You should be able to leave, e.g., Facebook and take all your data with you to another operator, and it should be possible to send messages across different companies. Right now, monopolies are enforced by means of NOT being interoperable; even Google turned off the possibility of interoperable chats 4 years ago—they went out of their way to turn off interoperability. Facebook has all of these privacy settings—only one of them has a default setting that protects your privacy, and that is the one concerning your address book. And that is in order to prevent users from taking their address book with them when they leave for another company.”

  • What about the AT&T example? Meeting government demands as the price of having one’s monopoly accepted?

  • “It makes me extremely uncomfortable to imagine what the current government might require as a price for accepting a monopoly—for instance, government access to personal data. I think we need to come up with new solutions rather than look at the AT&T example.”

  • How is the EFF itself funded?

  • “The EFF lives off donations from individuals; less than 6% of our support comes from companies and none from the government. We got a big boost in support after the Edward Snowden affair in 2013 and another when Trump was elected ...”

  • Another type of censorship comes from external pressure on the companies, e.g. flagging storms by protest groups attempting to have opponents suspended from the platforms ...

  • “Companies repeatedly tell us that the number of flaggers complaining about a particular post plays no role—that is obviously bullshit. They have “trusted flaggers” whose complaints are readily followed—that sort of inequality is opaque. And postings by well-known people have a much better chance of being preserved than postings by unknowns. Another issue is which elements of civil society the companies choose to engage with or ignore; that also has a huge influence. After Charlottesville, the tech giants have been much more willing to engage with anti-hate-speech organizations like the Southern Poverty Law Center (which tends to err on the side of fighting white supremacy) and the Anti-Defamation League (a Jewish organization particularly focused on antisemitism) than with free-speech organizations like us.”

  • Globally, a huge issue for Free Speech is the compromises being made between the tech giants and totalitarian states?

  • “If the tech giants are not forced to comply with the legislation of countries where they have no boots on the ground, our stance is that they should NOT comply with the demands of those countries at all. We protested when Twitter opened an office in the UAE [the United Arab Emirates], because now they are no longer able to ignore demands from the UAE—now they have employees there who may be pressured or threatened. Our policy is that when content is illegal in some jurisdictions but not others—e.g. insulting Erdogan in Turkey—and the companies are forced to comply with that, they should continue to make the content available in the rest of the world.”

As an organization, the EFF illustrates very well some of the dilemmas involved in trying to bring transparency and appropriate conditions to the technological Wild West of tech giant censorship. The EFF is a strong voice in civil society, hoping to help create pressure that makes the companies listen. Still, the organization refrains from supporting government intervention. The ongoing crisis after the Cambridge Analytica scandal also makes this an opportune time for the public to put pressure on the tech giants, because the companies might be nudged to see the advantage of taking a proactive stance—before regulation they might dislike is imposed on them.

It is crucial that civil society organizations and NGOs, such as the ones behind the Santa Clara manifesto, continuously articulate and develop demands that can inspire public and political debate on tech regulation, a debate which seems to be growing in both the United States and the EU after the 2018 crises. The basic clarity and obvious fairness of the Santa Clara Principles ought to ensure that they have great impact. But it also seems necessary that such organizations gain public and political influence so that they can match the tech giants’ broad and well-funded lobbying activity among Western politicians.

Another example of civil society’s capacity to intervene is a recent American NGO, the Alliance for Securing Democracy.5 The organization was founded in 2017 in response to the exposure of the Russian “troll factory” Internet Research Agency and its massive influence on the American voting public via Russian bots on the Internet. The organization is privately funded—which is very important in the United States—and bipartisan, i.e., it has no privileged affiliation with either of the two major parties. In fact, the initiative was taken by two experienced former political advisors, one from each party: Marco Rubio’s (R) Jamie Fly and Hillary Clinton’s (D) Laura Rosenberger. Its motto is: “We are not telling you what to think, but we believe you should know when someone is trying to manipulate you”.

One part of the organization’s activity is an ongoing mapping of Russian activities on Twitter, the main platform used for the Russian bot campaign. The project bears the title Hamilton68,6 after one of the American founding fathers, Alexander Hamilton, and his famous essay “The Mode of Electing the President”, published in 1788 as number 68 of The Federalist Papers, on how to fight foreign interference in US democracy. On the project website, people can continuously keep an eye on which Russian tweets posted within the last 24 hours rank the highest, which hashtags are used, and what topics are currently addressed in Russian propaganda. The Alliance for Securing Democracy website has become a key source of information about this activity. For example, it broke the news in January 2018 that bots had been set on a mission to delegitimize Robert Mueller III, the Special Counsel charged with investigating possible Russian collusion with key players inside the Trump campaign. The fact that it is now possible, a bit like a continuous weather report, to map out current misinformation is already a great advance compared to the time around the 2016 Presidential election in the US.

In a manifesto on its methods, the ASD describes how the organization identified key sources of Russian misinformation by three methods. First, by tracking online misinformation campaigns synchronized with open and obvious Russian propaganda outlets such as RT (Russia Today) and Sputnik. Next, by identifying networks of users who overtly tweeted support for Russian policy. Finally, by identifying accounts that use automatic forwarding from other accounts to multiply signals from Russian sources of influence—by isolating accounts that show unusual amounts of interaction with other accounts. Such accounts may be bots, which automatically forward content according to predefined rules, or they may be “cyborgs”, partly automated but supervised by persons. Triangulation of these three data sets then enabled the ASD to identify a relatively small group of accounts, around 600, as responsible for systematic Russian misinformation. The organization is especially inspired by the experiences of Estonia. The tiny Baltic state was probably the first place where active Russian cyber war was felt, during a large attack in 2007 which paralyzed large parts of the state apparatus. The attack was made possible by Estonia’s high level of digitization. Former Estonian President Toomas Ilves is now on the bipartisan advisory board of the ASD, which also features heavy hitters with experience in international politics, security and counter-espionage, such as the neoconservative intellectual and politician Bill Kristol.
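The ASD has not published its detection code, but the triangulation logic described above can be sketched roughly as follows. The function names, data shapes and the interaction threshold are our own illustrative assumptions, not ASD's:

```python
# Rough sketch of the three-way triangulation described above:
# an account is a suspect only if all three methods flag it independently.

def triangulate(synced_with_propaganda: set[str],
                pro_kremlin_network: set[str],
                amplifier_accounts: set[str]) -> set[str]:
    """Accounts identified independently by all three methods."""
    return synced_with_propaganda & pro_kremlin_network & amplifier_accounts

def likely_automated(interactions_per_day: dict[str, float],
                     threshold: float = 500.0) -> set[str]:
    """Flag accounts with unusually high interaction rates (bots or cyborgs).

    The threshold is an arbitrary illustrative value; in practice it would
    be calibrated against normal human posting behavior.
    """
    return {acct for acct, rate in interactions_per_day.items()
            if rate >= threshold}
```

In such a scheme, the triangulated set would form the monitored group, with `likely_automated` helping to separate fully automatic bots from human-supervised “cyborgs”.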

In February 2018, we had the chance to interview media analyst Bret Schaefer of the Alliance for Securing Democracy in Washington D.C.
  • How did you manage to build up your misinformation warning website?7

  • “We were able to quickly identify about 600 accounts, which formed a network of pro-Kremlin misinformation—the most well-known are probably RT and Sputnik. The idea of making a website with automatically updated information about misinformation activities was the brainchild of security researcher Clint Watts (who has repeatedly testified on Russian interference before the Senate), who 3–4 years ago began to take an interest in Russian interference in Syria and in ISIS activity. He was then helped by people such as anti-terrorism specialist J.M. Berger, social media analyst Andrew Weisburd and Jonathan Morgan, a big-data guy who founded “Data for Democracy”. Those four are the Hamilton team behind the dashboard, automatically following different activities on the 600 accounts we have identified so far.”

  • How can fake news online be fought against without violating user freedom of expression?

  • “Yes, that’s the big question. Who should have the authority and responsibility to determine what is credible and what is not? We cannot be the ones doing that, and you cannot leave it to the government. Facebook does not want to or cannot, and not even bipartisan NGOs can do it. It is interesting that the new flagging that Facebook tried before Christmas proved counterproductive. Users would flag sketchy online content with a special flag—and it soon became apparent that this sign actually increased Internet traffic to the sites in question. People like the two of us might visit the page out of curiosity—but so would people who simply do not trust common news sources and therefore actively go looking for alternative, less credible sources. Facebook’s idea was definitely not the right way to go. One solution right now is dealing with the automation problem, mainly on Twitter—the artificial augmentation of content, taking one voice and turning it into 30,000 voices. That is what we aim to dismantle. So our fight does not threaten freedom of expression, because users can easily turn to a conspiracy theory site if they so wish. But we must fight the artificial inflation of this content, so it is not pushed into people’s news feeds”.

  • How do you distinguish systematic misinformation from individual trolls and your average extreme online voices?

  • “Trolls are tricky. Who is a troll? It can just be a guy with lots of time on his hands who tweets frequently. To us, it is not so important whether the source is ultimately a person or an organization. We expose the artificial multiplication of the content in question—the goal is not to disclose individual URLs. It is a computerized system we use, but it also has a human element, checking through the list to avoid including accounts that are not relevant, for instance American Alt-right accounts. They may feature bizarre content, but they are not involved in the artificial dissemination of content.

In January, the 600 accounts, which run some 10,000 bots, managed to disseminate around 80,000 messages. At the top of the list were RT and Sputnik, but many of the accounts are indistinguishable from US far-right accounts. A bit further down the top ten come more extreme pro-Kremlin sites run out of Eastern Ukraine, such as Donbass News and Stalkerzone.com, calling Ukrainians fascists, calling for ethnic cleansing, and so on. Very extreme views. They link to hyperpartisan US websites to give them followers and credibility—a way of getting pro-Kremlin viewpoints in front of Americans and of exaggerating tensions between US groups”.

  • So, the Russian side is trying to use existing Alt-right websites as a kind of gateway to the US public sphere?

  • “Yes, but it does not have to be extreme US websites. It may also be pro-life anti-abortion websites, where maybe 80% of the traffic concerns abortion-related topics such as Catholicism—but with 20% suddenly addressing pro-Kremlin issues. For example, if I put a link on such a website suggesting that the United States is working with IS in Syria, it gains more credibility because it appears to come from within the anti-abortion tribe, so to speak”.

  • What is ASD’s attitude to the current attempts to legislate against “fake news” and “hate speech” online—for example, the German legislation requiring social media to censor different content or the French legislation in the pipeline?

  • “We’re in the process of making policy recommendations as an organization, too. It’s a real challenge approaching an area where governments censor content. On the other hand, doing nothing and leaving it to the tech companies is not helpful either. Neither option is viable. Each month we get more data, and we now know that a couple of hundred million people on Facebook have seen Russian-generated content, along with maybe fifty million Twitter accounts. Our claim is that automation is the low-hanging fruit here, and each further solution comes with its own problems. As I mentioned earlier, no one can or wants to be responsible for deciding what is credible and what is not. If you create rules or safeguards that are too stringent, people will just jump to other platforms—as in Germany, where Alternative für Deutschland and Pegida quickly left Facebook after the new law entered into force on January 1, 2018. They’re now jumping to gab.ai and other uncensored underground networks. So the outcome of censoring Facebook is that extreme right-wing activity is forced underground, where it is harder to control …”

  • That dilemma was already formulated in the eighteenth century: censorship of unwanted positions only strengthens them underground ...

  • “Yes, it’s new technology, but the issue is age-old. And it appears again in new ways. We may be close to having the AI technology to manipulate actual speech and video—we still trust what we see on TV, but in a few years it will fool some people, and in ten years it is going to be very difficult ... That is the next big problem we face”.

  • It never stops, does it? New technology will continuously generate new opportunities for the spread of false claims ...

  • “Yes, there is no definitive solution, but one step that should be taken is for the tech giants to start working with government organizations that are able to foresee and solve the issues before they arise. Facebook and Twitter have been caught off guard, they did not see the problems coming at all”.

In our view, the emergence of an organization like the ASD is a healthy sign. Currently, it only unveils Russian disinformation, and it is far from solving all problems, but it may act as a beacon for further public disclosure of systematic disinformation campaigns. As we have argued in this book, we agree with Schaefer that the right way forward cannot be to prohibit, censor, weed out, de-rank, flag, or otherwise remove or marginalize “fake news” or false statements online. No authority has the divine overview of true and false required for such a procedure. Such an authority would inevitably remove as false statements which later turn out to be true, and it would be impossible to immunize it against political bias. Giving even more power to the tech giants would also mean going down the wrong path—it would be disastrous to hand them the right to define and determine what is true and false. Moreover, it would only make censored voices regroup and reorganize underground, on the Dark Web or in other places beyond the control of the authorities and general public oversight, bestowing on them the heroic status of persecuted martyrs. As a way of dealing with the issue on a higher level, the general public and civil society can contribute a sharpened alertness and sensitivity to systematic disinformation, which is characterized by the mass dissemination of repetitive content using bots, fake profiles, clandestine submitters, etc. All of this must be exposed and made public, on an ongoing basis and as soon as it is identified. This is by no means an easy matter, and it will involve a digital arms race, because disinformers working for foreign governments and other organizations are likely to develop their tools further and are probably, as we write, already figuring out how to bypass and deceive information services such as the ASD and Hamilton68. The ASD, for its part, is also busy developing new tools.
In December 2018, they announced a new “Authoritarian Interference Tracker” covering 42 countries in North America and Europe, revealing further details of trends and tactics of Russian government interference in each of these countries. The new tool investigates the interconnection of Russian government activity in several areas: cyberattacks, political and social subversion and economic and financial coercion.8

It may be a bit of a stretch to call the United Nations an NGO—but recommendations of this organization are of the same non-binding nature as those originating from civil society organizations. The UN has independent “special rapporteurs” working on a wide range of topics, especially human rights, and the annual reports of these experts are not necessarily consistent with one another. In fact, it is the rule rather than the exception that they are not. Still, we would like to point to a report published by David Kaye, Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression. It was presented on April 6, 2018, and addresses the issue of freedom of expression online.9 In concise and direct language, the report presents a number of key problems with the content control practiced by the Internet companies. Section 41: “Private norms, which vary according to each company’s business model and vague assertions of community interests, have created unstable, unpredictable and unsafe environments for users and intensified government scrutiny.” In Section 46 of the report, the vagueness and lack of transparency of the policies are especially criticized: “Company rules routinely lack the clarity and specificity that would enable users to predict with reasonable certainty what content places them on the wrong side of the line. This is particularly evident in the context of “extremism” and “hate speech”, areas of restriction easily susceptible to excessive removals in the absence of rigorous human evaluation of context.” Therefore, the report recommends that the rules be based on the principles of human rights. Section 46: “Terms of service should move away from a discretionary approach rooted in generic and self-serving “community” needs.
Companies should instead adopt high-level policy commitments to maintain platforms for users to develop opinions, express themselves freely and access information of all kinds in a manner consistent with human rights law.”

Towards the end of the report, in Section 64, the general recommendation is: “Opaque forces are shaping the ability of individuals worldwide to exercise their freedom of expression. This moment calls for radical transparency, meaningful accountability and a commitment to remedy in order to protect the ability of individuals to use online platforms as forums for free expression, access to information and engagement in public life.” With regard to government policy, the recommendation, in Section 66, is to avoid clumsy regulation of viewpoints and instead only accept content removal carried out “by an independent and impartial judicial authority, and in accordance with due process and standards of legality, necessity and legitimacy” and also to “refrain from imposing disproportionate sanctions, whether heavy fines or imprisonment, on Internet intermediaries.” States should also not practice or take part in pre-publication censorship, nor delegate legal decisions to political departments or to the companies themselves, and finally, the state should continuously publish transparency reports on all the requirements it imposes on the companies. Regarding the companies, in Section 70 the recommendation is that they should “... recognize that the authoritative global standard for ensuring freedom of expression on their platforms is human rights law, not the varying laws of States or their own private interests, and they should re-evaluate their content standards accordingly.” Likewise, they should operate with greater transparency, cooperate more closely with civil society institutions concerned with digital rights, and avoid involvement in secret state-run content management agreements. Finally, the companies should strive to assume public responsibility, for example by developing common standards across the Internet, monitored by joint “social media councils”—a type of complaints commission for Internet companies.
Of course, there are many details the report does not address, and critical issues such as the monopoly question are not mentioned, but its overall message is completely in line with our conclusions. One can only hope that the report will help guide tech companies in an era where they seem to reel dizzily, caught between the intoxication of their own omnipotence and the growing number of severe political beatings they are taking.

The role of civil society is at risk of being underappreciated in the European debate, as we do not share the American tradition of strong private support for think tanks, NGOs and the like. Nevertheless, the Santa Clara Declaration, the Alliance for Securing Democracy, and the UN report all demonstrate that there is some possibility for different kinds of civil society efforts to put pressure on tech giants and mitigate the consequences of their sins of omission. Serious media, private funds, individual patrons, universities, government subsidizers based on the arm’s-length principle, etc. must find their footing in this new context and contribute to the ongoing development and refinement of such independent efforts in the public sphere.

A promising but less certain possibility is the idea of a new, decentralized Internet. It would have no central servers and would be based on blockchain technology (much like the cryptocurrency Bitcoin). This would eliminate the option of any joint control efforts and ensure user freedom of expression and anonymity beyond the reach of governments or tech giants. Among the uncertainties of this idea, however, is whether such technology will in fact create an Internet with the same scope and potential as the current one. It is also uncertain whether it would attract users other than declared “crypto-anarchists”, who feel they can exist freely only with encrypted protection from states and companies.10 Another problem with this utopia is that it would probably open up new opportunities not only for civil society, but also for many types of crime that would be more difficult to trace.

Footnotes

  1. Lanier (2018) p. 104ff.

  2. The organizations are primarily American: American Civil Liberties Union Foundation of Northern California, the California branch of the ACLU, founded in 1920; Center for Democracy and Technology, founded in 1994 to defend free speech online; Electronic Frontier Foundation, a digital rights organization founded in 1990 by John Perry Barlow, the man behind the Cyberspace manifesto; New America’s Open Technology Institute from 2009 (the technology branch of the think tank New America, headed by Kevin Bankston)—plus a handful of individual researchers: Irina Raicu (Santa Clara University), Nicolas Suzor (Queensland University of Technology), Sarah T. Roberts (UCLA), and Sarah Myers West (USC).

  3. The principles can be found in “The Santa Clara Principles on Transparency and Accountability in Content Moderation”. Last visited 08-04-18: https://cdt.org/files/2018/05/Santa_Clara_Principles.pdf

  4. Cf. Vega, N. “Internet rights group slams Google and GoDaddy’s ‘dangerous’ decision to ban a neo-Nazi site” nordic.businessinsider.com. 08-18-17.

  5. Alliance for Securing Democracy. Last visited 08-04-18: http://securingdemocracy.gmfus.org

  6. Alliance for Securing Democracy—Hamilton68. Last visited 08-04-18: http://dashboard.securingdemocracy.org

  7. The quotes from the interview with Schaefer appeared in Danish in a news article by Stjernfelt, F. “Systematisk afsløring af systematisk misinformation” [“Systematic Disclosure of Systematic Misinformation”] in the Danish weekly newspaper Weekendavisen. 02-23-18.

  8. “Alliance for Securing Democracy Launches New Tool to Analyze Russian Interference Operations” Alliance for Securing Democracy. 12-06-18.

  9. Kaye, D. “Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression” UN Human Rights Council. 04-06-18.

  10. Cf. Bartlett (2018) chap. 6.

Bibliography

Books

  1. Bartlett, J. (2018). The people vs tech: How the internet is killing democracy (and how we save it). Ebury Press.
  2. Lanier, J. (2018). Ten arguments for deleting your social media accounts right now. The Bodley Head.

Copyright information

© The Author(s) 2020

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Authors and Affiliations

  • Frederik Stjernfelt (1)
  • Anne Mette Lauritzen (2)
  1. Humanomics Center, Communication/AAU, Aalborg University Copenhagen, København SV, Denmark
  2. Center for Information and Bubble Studies, University of Copenhagen, København S, Denmark