In 2010, in an attempt to explore how technology promotes peace, Facebook launched a new feature, ‘Peace on Facebook’: “Facebook is proud to play a part in promoting peace by building technology that helps people better understand each other. By enabling people from diverse backgrounds to easily connect and share their ideas, we can decrease world conflict in the short and long term.”Footnote 1 Under the headline “A World of Friends”, Facebook tracks, for instance, how many “friendships” the company has helped create between people representing arch enemies such as Israel/Palestine, Pakistan/India and Ukraine/Russia. All of them great stories. But Facebook speaks less loudly when it comes to the company’s role in the ethnic cleansing of the Rohingya minority in Myanmar. In 2014, Facebook moved into the country, and within three years the number of users grew from two to thirty million;Footnote 2 since then, Buddhist extremists have used the platform to spread misinformation encouraging violent upheaval. In 2017, propaganda, threats and coordination via Facebook became a contributing factor in this extensive ethnic cleansing.Footnote 3 The fact that the conflict seems to have started with an Islamist massacre of Hindu villages in Rakhine state in August 2017Footnote 4 does not exempt Facebook from part of the blame for the escalating violence that followed. In August 2018, after the UN had pointed to Myanmar military leaders as responsible for genocide, Facebook finally chose to remove 20 accounts belonging to individuals and organizations at the top of Myanmar’s political leadership, among them General Min Aung Hlaing and the military television network Myawady. Their Facebook pages were followed by as many as 12 million people out of a total population of 53 million and had been used to encourage genocide. It was the first time Facebook banned political leaders from using the platform.
The UN report criticized Facebook’s role as a useful instrument for the armed forces in inciting the population against the Rohingya people. Facebook admitted that it had reacted too slowly.Footnote 5
No one is claiming that Facebook is an evil company planning to drive the world off a cliff. A more likely explanation of the tragic events can be found in the automated algorithm system and the business model of the giant. Despite the beautiful ideals, Facebook and the other giants are first and foremost ad brokers whose purpose is to make money off user attention. They have created a communication system in which certain emotionally driven utterances are amplified while others are simply drowned out in the noise. It is a successful business strategy, but the side effect is a disturbance of the public sphere. A contributing issue is that the expansion of the business into new regions with new languages does not seem to be matched by corresponding linguistic training of the safety and security staff, skills that could be used to remove threats and coordinated campaigns of violence more effectively, rather than spending time on nipples and “hate speech” of a diffuse character.
Tech giants have created a public sphere to which, at least in principle, everyone has unlimited access as contributors. Many people can communicate information and attitudes so that they become part of the common knowledge of societies. That was the ideal, verbalized in slogans like “a more open and connected world”. But given that the giants have almost reached monopoly power over user information flow, they hold a very powerful tool to control and take advantage of precisely what becomes shared knowledge and what does not. It is a power that should be used with great care. Whoever controls public space and the information in it can do good but also cause major harm. With information, or the lack thereof, the very character of the public sphere can change. It makes possible the amplification and dissemination of informational phenomena such as information cascades and pluralistic ignorance.Footnote 6 The former arises from too much information: users doubt the adequacy of their own information, turn to other (presumably reasonable) users whom they trust, and conclude that what the others do or feel must be the right thing to do or feel. The latter arises from too little information: it becomes legitimate for everyone to remain in the dark as long as one observes that everyone else remains there as well. On this basis, it is possible to turn public opinion, create false agreement and make groups of consumers buy certain products—but also share political, social or religious views. Both individually and collectively, people can reproduce the mistakes of others and simply jump on a train whose destination no one knows.
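The mechanics of an information cascade can be made concrete with a small simulation. The sketch below is not taken from the sources cited here; it is a deliberately simplified version of the classic sequential-choice model of cascades (agents with noisy private signals who also count the visible choices of their predecessors), with the function name, parameters and the naive counting rule chosen purely for illustration:

```python
import random

def simulate_cascade(n_agents=50, signal_accuracy=0.7, seed=1):
    """A deliberately naive sequential-choice model of an information
    cascade (in the spirit of the classic Bikhchandani, Hirshleifer
    and Welch model). The 'correct' option is 1. Each agent receives
    a private signal that is right with probability signal_accuracy,
    counts the public choices made so far, and picks whichever option
    the combined tally favors; the private signal only breaks ties."""
    rng = random.Random(seed)
    choices = []
    for _ in range(n_agents):
        # Private signal: 1 with probability signal_accuracy, else 0.
        signal = 1 if rng.random() < signal_accuracy else 0
        # Tally public choices plus the agent's own signal.
        ups = sum(choices) + signal
        downs = len(choices) + 1 - ups
        # Majority wins; a tie falls back on the private signal.
        if ups > downs:
            choices.append(1)
        elif downs > ups:
            choices.append(0)
        else:
            choices.append(signal)
    return choices
```

Running the simulation repeatedly makes the point of the model visible: once one option happens to get two choices ahead, every later agent’s private signal is outweighed by the public count, and the whole population locks in, sometimes on the wrong option.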
The public sphere of the tech giants has become each individual user’s very own customized public space. First and foremost, that space is characterized by conformism. The algorithms feed users content they like to read and share with others. But what is not to like about being given what we like? The problem is that the algorithms serve the user text and video that confirm existing positions, or even more extreme versions of existing attitudes (see below). At the same time, the algorithmic calculation suppresses the opinions and positions that might challenge users: content that could be instrumental in moving attitudes, providing insight into the views of other people and perhaps understanding them; in short, the classic role of the public domain as a meeting place in democracies, “the marketplace of ideas”. Filter bubbles may develop, which tend to reinforce confirmation bias: believing things that confirm already existing attitudes more readily than things that go against them. After all, it is easy to consume information that corresponds with one’s own ideas and world view, whereas it may be frustrating and difficult to consume information that challenges a person to think in new ways and question their basic assumptions. In spite of Facebook’s ideal of helping to connect people around the globe, the algorithms of the filter bubble are simply not set to present the user with the diversity of ideas, people and new cultures.
Conformism has a tendency to flip over into phenomena like polarization and radicalization. As Cass Sunstein, professor of law at Harvard University, says: “When people find themselves in groups of like-minded types, they are especially likely to move to extremes.”Footnote 7 Like-minded people can easily agree too much: if not exposed to competing views and opinions, their views are more likely to become extreme, be they political, social, religious or cultural. On these terms, a discussion can streamline the attitudes of the entire group and of the individual user into a more radical version of their initial point of view. In this polarization process, the algorithm does the user the dubious favor of making a thick preselection of the voices the individual user will likely listen to, the sources they will bother reading and the people they feel like talking to. This form of information selection can become an actual echo chamber, where only persons already in agreement are let in. An echo chamber is a hospitable environment for various forms of bubble formation. Bubbles of opinion, politics or religion easily arise when a given matter is overheated and when substance or value is absent. Opinion bubbles can grow and trigger Twitter storms or hate campaigns on Facebook. The international media storm which hit the Copenhagen Zoo in 2014 is a good example: a lot of people had very strong opinions about Marius, a giraffe that had been put down, and “the Danish tormentors of animals”. The media storm arose as a momentary surge of emotions, ignoring the point that the zookeepers had to put Marius down in order to maintain the gene pool in the Copenhagen Zoo, since the genes of this particular giraffe were already well represented in European zoos.
Of course, such effects do not violate freedom of expression in the sense of the right to express one’s point of view—but they are highly problematic in light of the broader concept of freedom of information, especially since the bubbles are not noticed from the inside: you may not realize that alternative views are left out, and you do not notice the filtering process itself either.
With the algorithm behind YouTube’s autoplay feature, the video service even has a tendency to speed up radicalization. By feeding users gradually more extreme content, Google-owned YouTube increases the chance that the user remains glued to the screen. Sociologist Zeynep Tufekci experimented with YouTube during the 2016 US presidential election. She found that no matter what content she was looking for, the recommended videos were always a bit more extreme and titillating than the previous ones. Jogging became ultramarathon, vegetarianism became veganism, and even more interestingly: Trump videos were followed by rants and tirades from supporters of white supremacy, Holocaust denial, and other content from the extreme right. Videos of Hillary Clinton and Bernie Sanders, on the other hand, would prompt extreme leftist conspiracy theories, such as accusations that 9/11 was orchestrated by the US Government. Tufekci illustrated this with an analogy of fat and sugar, casting YouTube as a happy junk food restaurant: “In effect, YouTube has created a restaurant that serves us increasingly sugary, fatty foods, loading up our plates as soon as we are finished with the last meal. Over time, our tastes adjust, and we seek even more sugary, fatty foods, which the restaurant dutifully provides. When confronted about this by health department and concerned citizens, the restaurant managers reply that they are merely serving us what we want.”Footnote 8
In addition to reshaping the framework of the public domain, tech giants also influence the way users interact. Users are rewarded for the expressions that engage the most: they get a dopamine hit every time they receive likes, comments, shares, retweets, etc. You could say that the tech giants train users to cultivate confirmation-seeking behavior. When, at the same time, the algorithms favor elementary activity-mobilizing emotions (anger, fear, awe and fascination), the consequence may be that users get an intoxicating shot of energy from attacking the opponent, be it the other group, the other political party or the other religion. The giants exploit the users’ right to express themselves by luring or downright manipulating them into expressing negative feelings in anticipation of more social gain. A tempting hypothesis is that “conflicts and divisive material about everything from political views to religious beliefs to social inequalities have greater social transmission than consensus.”Footnote 9 Conflict may thus—perhaps unintentionally—have become one of the main ingredients of the algorithmic laws of the tech giants, cf. the Girard-Thiel hypothesis above.
The problem with the public sphere of the giants is that there is a dismal backside to their version of it. Conformism, radicalization and polarization are by no means new phenomena, and divisive material has always attracted attention. But this cocktail, combined with the speed and global scope enabled by these technologies, inadvertently risks amplifying disruptive and in some cases downright dangerous tendencies in societies across the globe. Western countries are seeing increased political correctness, which has turned into a digital culture war between an aggressive identity-political discourse on the one hand and extreme right-wing groups on the other; developing countries offer disturbing examples of threats, incitement to violence and coordination between activists which turn into lynchings in the streets. These are polarization processes with one thing in common: they all originate on Facebook.
In Western countries, the disturbance of public space manifests itself in an online culture war, where extreme right-wing movements battle the front-line fighters of aggressive identity politics—and both parties are radicalized and made more simple-minded in the process. One of the first big clashes between the two sides was the controversy known as “Gamergate” in 2014, which drew attention to sexism and misogyny in the American gaming industry.Footnote 10 Feminist video game critic Anita Sarkeesian became the target of an extensive hate campaign when she uploaded videos to YouTube introducing viewers to basic ideas in feminist media critique, taking a critical view of how video games depicted women. We are used to thinking that societies with free speech can discuss whether a classic work of art is sexist without necessarily doubting its aesthetic or other value, and that one can easily disagree about its quality without moving on to threats of rape and death. But Sarkeesian had to endure several years of not just harsh attacks but outright personal threats for this intolerable “crime”. She was met with comments like “I’ll rape you and put your head on a stick”, vandalism of her Wikipedia page with pornographic images, threats, and a campaign to flag her social media accounts for spam, fraud and even terrorism. These were serious attempts to ruin her career and reputation—including criminal acts such as harassment, slander and threats.
Meanwhile, the fairly unknown video game developer Zoe Quinn released the game “Depression Quest”, an ideological project to take video games in a more feminist direction. This would turn out to be the straw that broke the camel’s back within the gamer community. Quinn, too, was subjected to death and rape threats; she was hacked, her personal information was published, and she became the victim of revenge porn. In the wake of this, attacks were directed at other feminist gamers and video game critics who dared enter the war zone. Journalists defended these feminists in articles on how gamer culture had become a toxic community of misogynist men. The reaction from anonymous voices in the online world was even more vandalism and trolling. Everyone blamed each other for lying and having malicious intentions. #Gamergate was born and battle lines were drawn. The controversy will be remembered as a turbulent culture war feeding not only on strong passions but also on harassment and personal threats, acts which are criminalized in the legislation of most countries.
Writer and journalist Angela Nagle has examined this digital culture war, in which two parties fight over everything from feminism to sexuality, gender identity, racism, freedom of speech and political correctness. According to her analysis, the conflict can be traced back to a discussion about the discourse on Facebook. The Alt-right, the umbrella term in the United States for the new extreme right of the 2010s, has developed by using old left-wing strategies like transgression, provocation, satire and abysmal irony. Its behavior is a reaction to what its members see as a politically correct finger constantly wagging on social media. Former Breitbart editor and Alt-right leading figure Milo Yiannopoulos has expressed it this way: “The Alt-right for me is primarily a cultural reaction to the nannying and language policing and authoritarianism of the progressive left—the stranglehold that it has on culture. It is primarily—like Trump is and like I am—a reaction against the progressive left doing today what the religious right was doing in the 1990s—which is trying to police what can be thought and said, how opinions can be expressed.”Footnote 11 In other words, some feel left out of the ranks. No one has to agree with the bizarre political ideas of the Alt-right in order to understand why its adherents may be annoyed with the stifling political correctness of their opponents. The limit of tolerance runs exactly where it glimpses intolerance.
The faction of the left wing concerned with identity politics is increasingly an anti-free-speech, anti-free-thought and anti-intellectual online movement obsessed with policing speech. It is an emotional culture that aims to witness and expose the sufferings of oneself and others. Initially, the movement roamed the niche platform Tumblr, where fluid genders and identities were broadly embraced. Over time, however, the movement has gained hold of the mainstream and now dominates parts of social media discourse. For example, in 2014, users could choose between 50 different genders on their Facebook profile.Footnote 12 It has even spread into “the real world”, with Hillary Clinton adopting catchphrases from identity politics, for instance ‘check your privilege’ and ‘intersectionality’, as part of her presidential campaign. Particularly at American universities, we have witnessed the emergence of a range of now well-established political demands and concepts going against free speech: safe spaces, i.e. zones where women, African-Americans or other identities can meet screened off from the groups and statements they prefer to avoid; trigger warnings, explicit notes of caution which the teacher is responsible for giving if a work of literature contains words or passages with potentially offensive content, for instance violence, rape, discrimination or particularly offensive words; no platform, a slogan used to prevent certain invited speakers from making their voices heard, by way of social media storms, harassment and pressure on university management; and cultural appropriation, which refers to majority persons who borrow or “steal” elements from another culture and who are therefore accused of not respecting that culture. A curious example of the latter was the public shaming that hit Caucasian pop star Justin Bieber when he got dreadlocks.
These activities take place not only online but on campuses as well, in the classrooms and auditoriums—but the suppression of freedom of expression in these physical spaces is closely tied to social media storms organized online aimed at gathering loud masses who can interrupt and harass unwanted speakers or put pressure on weak university management to cave in to demands of censorship.
Rather than embracing diversity or hybrid identities, identity politics has developed an extreme worship of group affiliations. The battle is fought through words, and the goal is silence rather than agreement or disagreement. Nagle puts it this way: “They tried to move the culture in the opposite direction by restricting speech on the right but expanding the Overton window on the left when it came to issues of race and gender, making increasingly anti-male, anti-white, anti-straight, anti-cis rhetoric normal on the cultural left. The liberal online culture typified by Tumblr was equally successful in pushing fringe ideas into the mainstream. It was ultra-sensitive in contrast to the shocking irreverence of chan culture, but equally subcultural and radical.”Footnote 13
Policing the web from the perspective of identity politics has the consequence that Facebook and Twitter hesitate to accommodate right-wing warriors from the Alt-right. Instead, the extreme right has retreated to anonymous niche platforms such as 4chan and 8chan, where they can freely practice their cynicism, nihilism and misogyny and fight for men’s rights, white identity and the right to rebellion. The Alt-right has not lost its power in this process, quite the contrary. It has grown larger in isolation, because its devotees know how easily digestible humor and content, combined with explosive comments and debate threads, can build broad and active communities. With Trump’s election victory, they even felt represented in the White House by people like Steve Bannon. There are several examples reminiscent of Gamergate, where right-wing extremists hit back hard and inhumane rhetoric turns into actual threats and violence. In 2015, a US student wrote on 4chan that users of the site should stay home from school the following day; the next day, that same student committed a school shooting in Oregon.Footnote 14 The clash between the Alt-right and the cultural left has turned into a bitter and overheated culture war with no peace dialog in sight. Each group finds itself in its own extreme opinion bubble, where some matters overheat so much that they trigger media shitstorms, threats, shaming and slander campaigns.
Gamergate also helped make clear that there is a big difference between the imprecise concept of “hate speech”, covering pejoratives directed at certain groups, and actual persecution and harassment of individuals, where groups single out an individual victim (cf. Girard’s scapegoat) using techniques such as shifting accounts, coordinated networks on other platforms, systematic abuse of the flagging option, doxxing (revealing people’s private addresses and other private data as a way of encouraging persecution and violence against them), sending private and compromising information to friends and colleagues, hacking and taking over the person’s Internet accounts, and swatting (making false emergency reports to send police to the victim’s address)Footnote 15—a list that includes criminal acts such as stalking, threats and harassment. Here, the tech giants ought to play a more active role in the criminal prosecution of such acts by helping identify the individuals responsible for the coordinated pursuits.
The consequence of Gamergate was that a number of tech giants made their removal procedures even stricter. During the online war, Twitter in particular had gained a reputation as a place where hateful voices would gather, and at an internal meeting in February 2015, CEO Dick Costolo reached a tough conclusion: “We suck at dealing with abuse and trolls on the platform and we’ve sucked at it for years. It’s no secret and the rest of the world talks about it every day.... We’re going to start kicking these people off right and left and making sure that when they issue their ridiculous attacks, nobody hears them.”Footnote 16 Hardly a cure without side effects for free speech on the platform.
In Gamergate, Facebook did not live up to its ideal of contributing to understanding between people—quite the contrary. The digital culture war is a consequence of the polarization tendencies brought on by the automatic algorithm systems of the technology giants. It reinforces a universal tendency towards tribalism. Groups dividing the world into “us” and “them” occur naturally, also outside the Internet, but they are easily reinforced if Internet users are always floating in a stream of information which only confirms already existing attitudes. It is difficult to say why public opinion on Twitter and Facebook has swung so heavily towards identity politics. It is also difficult to tell whether Facebook has unintentionally picked sides in the conflict by prioritizing the “tone of debate” over freedom of expression in its community standards and in its responses to users flagging questionable content. But preferring peace and quiet over agreement and disagreement in the public sphere has serious consequences. In her many years as president of the American Civil Liberties Union, professor of law Nadine Strossen has argued that free speech is the only cure against hateful utterances, and that prohibitions and restrictions will only make things worse. For years she had to plead against censorship in arguments with censorship-happy right-wingers, but more recently she has also had to argue against her old comrades-in-arms on the left, who are leaning ever more towards censoring opinions they do not like, cf. Strossen’s book Hate: Why We Should Resist It with Free Speech, Not Censorship (2018).
However, such Western problems with extremization pale in comparison with the growing number of urgent cases seen in developing countries. Facebook’s fast expansion in places like Indonesia, India, Mexico, Myanmar and Sri Lanka has resulted in some of the most frightening examples of serious, Internet-based social unrest. In those places, many people’s access to the Internet goes through Facebook; often they may even consider Facebook to be the Internet itself. Emotions run free on the platform because the institutions in some of those countries are weak and credible sources are few and far between. This means that content shared between friends, family and people of trust may easily become “common knowledge”. Combined with a lack of trust in police and courts, panic originating from misinformation and calls for violence on Facebook may lead to violent riots and lynchings, as users take justice into their own hands. That recently happened in Sri Lanka. Facebook’s built-in preference for negative feelings helped escalate the conflict between Buddhists and Muslims, because mutual hatred and threats frequently and freely dominated the flow of news on Facebook, now the Sri Lankans’ primary source of news and information. The Muslims form a 10% minority on the island, speak predominantly Tamil, and many of them are immigrants.
In April 2018, the New York Times ran the story. At the center of the storm was 28-year-old Muslim restaurant employee Farsith, who became the innocent victim of a seriously violent chain of events caused by a rumor gone viral on Facebook. According to the rumor, the police had confiscated 23,000 sterilization pills from a Muslim pharmacist in the town of Ampara. For a while, rumors had circulated of a Muslim plot to sterilize and wipe out the Sinhalese majority in Sri Lanka. One day, when a Sinhalese restaurant guest yelled out that he had found a lump of something white in his food, everything went wrong. The customer furiously gathered a crowd around Farsith and accused him of having put sterilization medicine in the food. Farsith was unaware of the viral rumor and hesitantly commented, in his inadequate Sinhalese, something along the lines of: “Yes, we put?” Farsith thought they were yelling about the small lump of flour he could see in the dish. The crowd took his muttering as a confession of his crime, beat him up, terrorized the restaurant and burned down the local mosque. The story did not end there, because while the assault unfolded, Farsith’s “confession” was recorded on a mobile phone, uploaded and quickly gained viral life. The 18-second video was uploaded to a popular Buddhist Facebook group as “proof” of the Muslim plot. With hasty shares, likes and comments, the video went viral and generated comments like “Kill all Muslims, don’t even spare infants”. Unintentionally, Facebook’s algorithm system transformed Farsith into a nationwide villain. This tragic affair ruined his business, put his family in debt and nearly cost him his life.Footnote 17
How could things turn out as badly as they did in the case of Farsith? Presumably because conflicts have higher social transmission than consensus—the algorithmic law. Because users move around inside an echo chamber which amplifies and radicalizes already existing positions—polarization. Because, as a group, we can easily end up following a norm (in this case extreme hatred of Muslims) which each member of the group might individually dislike, but which they nevertheless end up pursuing because they mistakenly believe that everyone else in the group supports it—pluralistic ignorance. And because, even if we personally believe otherwise, the mere fact that many others seem to believe the opposite (judging by the countless likes, shares and comments a particular video has generated) makes us suppress what we actually used to believe—information cascades. There is a particularly mean irony in the fact that these effects seem enhanced by the new policy Facebook introduced during the winter of 2017–18 in response to the whole debate on “fake news” online. A new calibration of the news feed was introduced: communication between “friends” was now prioritized, while news stories from the media were downranked—allegedly in order to emphasize the ideal of “connecting people” and build local communities rather than serve as a source of news of varying quality. In Sri Lanka, however, the effect was that local rumors of evil Muslims circulated intensively within circles of “friends”, crowding each other’s news feeds in a self-reinforcing loop—while serious news that might have given an external and perhaps more objective perspective on the events was now deprioritized. It shows how Facebook’s sentimental understanding of contacts as “friends” does not necessarily bring global friendliness with it, but can easily gloss over malicious, even conspiratorial groups of people.
Given cases such as these, one might well dream that Facebook’s growing moderation departments would focus their forces on serious and criminal cases, ranging from personal injury, harassment, threats and the persecution of individuals to terrorist plots and coordinated violence—rather than increasingly policing their platform with a large and vague catalog of taboos applied to everything from nudity and sex to diffuse “hate speech” and unspecified violations, to euthanasia and violent images, and to quotes, satire, irony and a wide range of other non-criminal content subjected to wagging index fingers, moralism, removal and sanctions.
Maybe Facebook is slowly taking its first steps in this direction. In July 2018, WhatsApp launched an experiment featuring a limit on the number of “chats” a user could participate in on the messaging service—after discovering how political agents in India, acting in 10–20 interwoven chat groups, had knowingly spread misinformation on the service. At the same time, Facebook announced that it would remove content from the site if it is deemed to be on the brink of turning into violence: “There are certain forms of misinformation that have contributed to physical harm, and we are making a policy change which will enable us to take that type of content down,” a Facebook spokesperson said.Footnote 18 It is certainly a step forward if—contrary to unspecified “hate speech”—the focus is on content that actually calls for imminent violence. We may then slowly approach the American judicial system’s standard interpretation of the limits to freedom of expression as “incitement to imminent lawless action”. This might have been discovered much earlier if lawyers, intellectual historians or sociologists had been asked, instead of believing that the tech environment itself holds the answer to everything. But how to identify such content remains a problem in the new initiative: “To help figure out when misinformation has tipped from ‘just plain wrong’ to ‘wrong and possibly contributive to violence’, Facebook will partner with local civil society groups that might better understand the specific cultural context.”Footnote 19 Facebook’s naive belief in the goodness of local, cultural “communities” may become a danger. There is no information on which groups in India, Sri Lanka and Myanmar Facebook wishes to confide in, but local communities may often be part of the problem rather than the solution.
Many such local groups may hold widely different opinions and pursue agendas of their own; far from all of them are necessarily democratic, and some may even be active parties to the conflict. Doubtless, publishing the names of such collaborating groups could put them in danger, but not doing so will contribute even more to the opaqueness of Facebook's removal policy and will fuel local rumor formation, conspiracy speculation and conflict. Obviously, on-site notifiers need to be locally connected, but it seems important to construct a procedure for selecting such helpers, recruiting them for instance from local media or human rights organizations, to make sure they have some level of skill, neutrality and knowledge of universal rights. Otherwise, basing removal policy on the preferences of local cultural groups may turn out to be yet another mistake in Facebook's "annus horribilis", which seems to be going from singular to plural.
“‘Peace On Facebook’ Tracks How Tech Promotes Peace” Huffington Post. 03-18-10.
Ananny, op. cit.
Taub, A. & Fisher, M. “Where Countries Are Tinderboxes and Facebook Is a Match” New York Times. 04-21-18.
Amnesty International “Myanmar: New evidence reveals Rohingya armed group massacred scores in Rakhine State” Amnesty International. 05-22-18.
Bostrup, J. “Facebook blokerer Myanmars militære topledelse” [Facebook blocks Myanmar’s top military leadership] Politiken. 08-28-18.
Hansen and Hendricks (2011) p. 17.
Sunstein (2009) p. 2.
Tufekci, Z. “YouTube, the Great Radicalizer”. New York Times. 03-10-18.
Hendricks, V. (2016) p. 153.
Nagle (2017) p. 19–24.
Nagle (2017) p. 65.
Associated Press in Menlo Park, California “Facebook expands gender options: transgender activists hail ‘big advance’” The Guardian. 02-14-14.
Nagle (2017) p. 68.
Nagle (2017) p. 26.
Cf. Gillespie (2018), p. 56f.
This statement was leaked to The Verge, here quoted from Gillespie (2018) p. 24.
Taub, A. & Fisher, M. “Where Countries Are Tinderboxes and Facebook Is a Match” New York Times. 04-21-18.
Quoted from E. Dreyfuss “Facebook’s Fight Against Fake News Keeps Raising Questions” Wired. 07-20-18.
Gillespie, T. (2018). Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press.
Hansen, P. G., & Hendricks, V. (2011). Oplysningens blinde vinkler: En åndselitær kritik af informationssamfundet [The blind spots of enlightenment: An elitist critique of the information society]. Samfundslitteratur.
Hendricks, V. (2016). Spræng boblen: Sådan bevarer du fornuften i en ufornuftig verden [Burst the bubble: How to keep your reason in an unreasonable world]. Gyldendal.
Nagle, A. (2017). Kill all normies: Online culture wars from 4chan and Tumblr to Trump and the alt-right. John Hunt Publishing.
Sunstein, C. R. (2009). Going to extremes: How like minds unite and divide. Oxford University Press.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
© 2020 The Author(s)
Stjernfelt, F., Lauritzen, A.M. (2020). Distortion of the Public Sphere. In: Your Post has been Removed. Springer, Cham. https://doi.org/10.1007/978-3-030-25968-6_15
Print ISBN: 978-3-030-25967-9
Online ISBN: 978-3-030-25968-6