
Facebook’s Handbook of Content Removal

  • Frederik Stjernfelt
  • Anne Mette Lauritzen
Open Access
Chapter


Due to the recent crises, Facebook is restructuring to restore the company’s reputation, which is, according to Zuckerberg, a three-year process. On April 24, 2018, Facebook published its updated internal guidelines for enforcement of the company’s community standards.1 It was the first time the public gained direct, “official” insight into the comprehensive, hidden policing going on inside the company. The only glimpses behind the curtain provided before then came from confidential documents leaked to Gawker magazine in 2012, to S/Z in 2016—and in 2017, when The Guardian published “The Facebook Files”. They included comprehensive removal guidelines featuring a mixture of parameters, decision trees and rules of thumb—illustrated by many concrete examples of content to be removed, most likely taken from material actually removed at the time.2 In contrast, the 2018 document is much sparser, more orderly and devoid of examples, and it is tempting to think that it is a cleaned-up version intended for publication. Still, the document gives unique insight into the detailed principles behind the company’s content removal—albeit not the enforcement procedure itself. One can only guess whether this surprising move away from secrecy can be attributed to the increasing media storm throughout 2017, culminating in the Cambridge Analytica revelation of March 2018 and the congressional hearings in April of the same year. The document contains six chapters: (1) “Violence and Criminal Behavior”, (2) “Safety”, (3) “Objectionable Content”, (4) “Integrity and Authenticity”, (5) “Respecting Intellectual Property” and (6) “Content Related Requests”.

The first chapter features reasonable restrictions regarding criminal acts such as threats and incitement to violence. The second, “Safety”, is more problematic. Here, for instance, child pornography and images of naked children are treated as if they were merely varieties of the same thing, i.e., no posting of photos featuring “nude, sexualized, or sexual activity with minors”. This means that images of diaper-changing and pedophilia fall into the same category. The stance towards “self-injury” is also problematic, because Facebook believes itself capable of preventing suicide by banning content which “promotes, encourages, coordinates, or provides instructions for suicide, self-injury or eating disorders.” For one, this excludes serious discussion of the ongoing political issue of voluntary euthanasia—and in the same vein, one can ask whether it would not also exclude many fashionable diets. The sections “Bullying” and “Harassment” and the right to privacy are less problematic. There is, however, an issue with the following wording: “Our bullying policies do not apply to public figures because we want to allow discourse, which often includes critical discussion of people who are featured in the news or who have a large public audience. Discussion of public figures nonetheless must comply with our Community Standards, and we will remove content about public figures that violates other policies, including ‘hate speech’ or credible threats”. This can easily be used as a cop-out to shield public figures from criticism many would find completely legitimate.

The fourth chapter covers “Spam”, “Misrepresentation”, “False News” and “Memorialization”. It is amusing that a basic guideline within the “Spam” category says: “Do not artificially increase distribution for financial gain.” It is hard not to read this as an exact characterization of Facebook’s very own business model, but obviously the company cannot have users invading its own commercial turf. Indeed, spam is by far the largest category of content removed.

“Misrepresentation” refers to Facebook’s policy stating that all users must use their own real name. In democratic countries, the reasoning behind this policy is understandable; the very name “Facebook” is based on the requirement of presenting a somewhat authentic picture of the user’s face. But it may be acutely dangerous for users in non-democratic countries. And even in democratic countries, certain people such as anonymous media sources, whistleblowers or others might have very legitimate reasons not to appear with their own name and photo. In 2017, a major case put Facebook and the LGBT community at loggerheads. Many drag queens who appeared on the platform under their adopted drag names had their accounts blocked (it would later turn out that they had all been flagged by one and the same energetic complainant) with reference to the requirement to appear under their own real name. The problem is not peripheral. In the first months of 2018, Facebook had to close as many as 583 million fake accounts, while still estimating that 3–4% of the remaining billions of users are fake.3 Creating and selling fake user accounts has become a large independent industry which can be used to influence everything from consumer reviews of restaurants, books, travel, etc., to more serious and malicious things such as political propaganda disguised as personal views originating from real users. When you read a good review of a restaurant online, it is potentially written by the owner, with a fake user as intermediary. As tech writer Jaron Lanier pointed out, there are numerous celebrities, businesses, politicians and others whose presence on the Internet is boosted by large numbers of fake users who “follow” or “like” their activities.4 He believes that the large number of fake users represents a fundamental problem for tech giants because so much other false communication—fake ads, “fake news”, political propaganda—is disseminated through these non-existent people.
These are dead souls that can also be traded. As of early 2018, the price of 25,000 fake followers on Twitter was around 225 USD.5 In this light, it is understandable that Facebook wants to tackle fake users, but it is unsettling if this can only be done through a sweeping ban on anonymity, including its earnest and necessary uses. Serious media regularly need to guarantee the anonymity of sources or writers to even get them to participate, which then happens on the condition that the editorial staff know the identity of the person.

Regarding the strongly disputed concept of “fake news”, the following phrase from the document might seem reassuring: “There is also a fine line between false news and satire or opinion.” This could lead one to believe that Facebook does not feel called upon to act as judge of true and false. But the very next sentence reads: “For these reasons, we don’t remove false news from Facebook but instead significantly reduce its distribution by showing it lower in the News Feed.” So false news is not removed, but still the people in the background consider themselves capable of identifying false news, inasmuch as such news stories are downgraded in the news feed and thus marginalized. This reveals a shocking level of conceit: Facebook believes that its some 30,000 moderators—probably untrained—should be able to perform a truth check on news within 24 hours. It is self-evident that news is new, and society’s established institutions—with their highly educated specialists in serious journalism, courts and academia—often spend a very long time determining and documenting what is true and false in the news flow. How would a platform with no experience in the production and research of news whatsoever be a credible clearinghouse for truth? Perhaps the company is realizing this as of late. In December 2016, when the “fake news” debate raged in the wake of the US presidential election,6 Facebook announced a collaboration with various fact-checking organizations. They were tasked with tagging certain news (primarily about American politics) as “disputed”. The idea was, however, abandoned in December 2017, when it was found that this tagging attracted more attention and traffic to those news stories rather than scaring users off.7

Despite the public promotion of Facebook’s new fact-checking cooperation, it is still a very closed procedure with few details given. The collaborators are the fact-checking organizations PolitiFact, FactCheck.org and Snopes, plus the two news outlets ABC News and Associated Press—cf. Mike Ananny’s comprehensive 2018 report The partnership press: Lessons for platform-publisher collaborations as Facebook and news outlets team to fight misinformation.8 Some collaborators work for free, while others receive a symbolic amount from Facebook. The report is based mainly on anonymous interviews with fact checkers, and according to it, the collaboration between Facebook and the five organizations works as follows: “Through a proprietary process that mixes algorithmic and human intervention, Facebook identifies candidate stories; these stories are then served to the five news and fact-checking partners through a partners-only dashboard that ranks stories according to popularity. Partners independently choose stories from the dashboard, do their usual fact-checking work, and append their fact-checks to the stories’ entries in the dashboards. Facebook uses these fact-checks to adjust whether and how it shows potentially false stories to its users.” Thousands of stories are queued up on the website, and each organization has the capacity to check a handful or two per day.

The procedure for selecting critical news stories seems to consist of Facebook users flagging them as fake, in combination with automated warnings, which are based on previous suspicious links. Once again, a lot of responsibility is put on users flagging other users—but the details of the selection remain protected, as mentioned above. Ananny’s report could access neither the central “dashboard” website nor the principles behind it, and many of the fact checkers interviewed in the report are dissatisfied with various aspects of the opaque procedure dictated by Facebook. Among other things, they complain of not being able to flag pictures and videos as fake.9 Among the interviewees, for example, there is suspicion that Facebook avoids sending them false stories if they have high advertising potential. In general, there is skepticism among fact checkers regarding Facebook’s motives and behavior around the design of the dashboard website and the classification and selection of its content: “We don’t see mainstream media appearing [in the dashboard]—is it being filtered out?” And: “We aren’t seeing major conspiracy theories or conservative media—no InfoWars on the list, that’s a surprise.” (InfoWars is a site dedicated to conspiracy theories, which had more than 1.4 million Facebook followers before Facebook finally shut down the site in August 2018—see Chapter 12).10

In the absence of a transparent process, several fact checkers suspect that Facebook avoids sending certain types of news through the fact-check system in order to avoid their labelling. If that is the case, then some false news stories are removed or de-ranked while others are not even sent for checking. The suspicion seems justified: in July 2018, an undercover reporter from Channel 4’s Dispatches revealed how popular activists from the extreme right get special protection from Facebook. The documentary showed how moderators, for example, let the pages of the right-wing movement Britain First slip through, simply because they “generate a lot of revenue”. The process is called “shielded review”. Typically, a page is removed if it has more than five entries violating Facebook rules. But with shielded review, particularly popular pages are elevated to another moderation level, where the final removal decision is made by Facebook’s internal staff.11

In Ananny’s report, fact checkers are also quoted as complaining that they have no knowledge of the actual purpose of Facebook’s checks or what impact they have. Facebook has publicly stated that a negative fact check results in 80% less traffic to the news in question. But as a fact checker says, this claim itself is not open to fact-checking. Others complain that the process has the character of a private agreement between private companies and that there is no openness about its ideals or accountability to the public. With so little transparency about Facebook’s fact-check initiatives, it is difficult to conclude anything unambiguously, but the whole process seems problematic from a free speech standpoint, given the lack of clear criteria regarding which stories are sent for checking and which are not. The efforts do not seem to be working well, either. The number of users visiting Facebook pages with “fake news” was higher in 2017 than in 2016.12 As part of its hectic public relations activity in spring 2018, Facebook announced that it would begin to check photos and videos, this time in collaboration with the French media agency AFP.13 Details about the procedure and results of this initiative remain to be seen. In December 2018, after Facebook’s use of the spin firm Definers to smear opponents became known, Brooke Binkowski, former managing editor of the fact-checking company Snopes, expressed her disappointment with the company’s two-year collaboration with Facebook: “They’ve essentially used us for crisis PR.” She added: “They’re not taking anything seriously. They are more interested in making themselves look good and passing the buck […] They clearly don’t care.”14 In February 2019, Snopes quit the Facebook fact-checking partnership.15

The next clause of the Facebook removal manual concerning intellectual property rights does nothing more than make explicit the company’s responsibility disclaimer—much like Google and other tech giants. It puts all responsibility on users, who are assumed to have made the copyright situation clear for all posts they upload (cf. Ch. 14).

The last section of the clause, “Content-Related Requests”, covers users’ right to delete accounts—as expected, there is no mention of the right to ask Facebook to delete their detailed data profiles including their general online behavior, data purchased, etc. Also, the section does not address the issue of how the tech giant will respond if asked by intelligence agencies and police for access to user data—a touchy subject concerning anything from relatively unproblematic help with criminal investigations to much more debatable help with politically motivated surveillance.

Crucial to freedom of expression, however, is the third item: “Objectionable Content”. It features the subcategories “Hate Speech”, “Graphic Violence”, “Adult Nudity and Sexual Activity” and “Cruel and Insensitive”.16 Each category is described in detail. “We define hate speech as a direct attack on people based on what we call protected characteristics—race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, and serious disability or disease. We also provide some protections for immigration status. We define attack as violent or dehumanizing speech, statements of inferiority, or calls for exclusion or segregation. We separate attacks into three tiers of severity, as described below.”17 Facebook’s list of “hate speech” examples is characteristic in its attempt at a definition based on a random list of groups of people who for some reason should enjoy particular protection beyond other groups in society. Such a break with equality before the law is one of the classic problems of “hate speech” regulation, both because different legislators select different groups for special protection, and in practice: usually, it is humor or other remarks about certain selected skin colors, ethnicities and religions that are considered bad taste. But then there are others of whom it is considered acceptable to make fun. This changes with the spirit of the times and is often a matter of which groups yell the loudest—groups that do not have the zeitgeist in their favor notoriously do not even expect to find protection in “hate speech” paragraphs. Although “race” is a crucial concept on the list, for instance, the Caucasian race is rarely mentioned as worthy of protection from attacks related to skin color, and attacks on Islam are often taken very seriously, which is seldom the case with Christianity.
Also, Facebook’s “hate speech” definition does not include a reference to the concept of truth, as we find in libel—thus, a true statement can be classified as “hate speech” if someone claims to feel offended by it.

It is a well-known fact that Facebook and other tech giants have had a hard time deciding how to deal with statements which merely cite or parody the hateful statements of others. This problem is now openly addressed in the following segment: “Sometimes people share content containing someone else’s hate speech for the purpose of raising awareness or educating others. Similarly, in some cases, words or terms that might otherwise violate our standards are used self-referentially or in an empowering way.” Irony and satire are not mentioned explicitly but are referenced in the part about “fake news”, and one must assume that they are addressed in the “self-referential” use of “hate speech”. Such statements are, of course, difficult to process quickly or automatically because their character cannot be determined based on the simple presence or absence of particular terms but require a more thorough understanding of the whole context. Facebook’s solution goes: “When this is the case, we allow the content, but we expect people to clearly indicate their intent, which helps us better understand why they shared it. Where the intention is unclear, we may remove the content.”18

Quotes or irony are allowed, then, but only if this is made completely clear, with quotation marks and explicit or implicit underlining. An ironic post about Christians and white Danes was exactly what sprung the Facebook trap on Danish journalist Abdel Aziz Mahmoud in January 2018.19 As a public figure with many followers, he had posted a comment aimed at highlighting the double standard among many players in public Danish debate. However, after several users reported the post as offensive, Facebook chose to delete it and throw the journalist off the site. Facebook does not seem to understand that irony works best in a delicate balance, causing its addressee to wonder what exactly the idea may be—and not by overexplaining and spelling out. The reason for this removal was, of course, that no one can expect sophisticated text interpretation from underpaid staff working under pressure on the other side of the globe, just as it has not yet been possible to teach artificial intelligence to understand irony. But apparently Facebook has concluded that some of the most elegant and artistically and politically effective instruments—irony, parody and satire—cannot come to full fruition. In a Danish context, we need to dig deep in the history books and go all the way back to the Danish Freedom of the Press Act of 1799. Its Article 13 established that irony and allegory were penalized the same way as explicit statements. At the time, the idea was to protect the Monarchy. In the case of Facebook, the reasons are financial, as the company cannot afford to deploy the procedures necessary to really differentiate such challenging statements.

Since 1799, crimes of press freedom in Denmark have, at least as a general rule, been decided publicly in the courts, allowing for thorough arguments pro et contra to be presented, and for the intention and meaning of a contested statement to be clarified. One of the key challenges of the new online censorship is that this is not the case. It is performed automatically, without transparency, and thus far removed from any real appeal option, unless the affected person—as in the case of Abdel Aziz Mahmoud—is fortunate enough to be a publicly known figure with the related opportunities of contacting the traditional press to raise public awareness about a problem, pressuring tech giants to respond and apologize for the removal.

Another example from Denmark of a public figure clashing with Facebook’s foggy policies was Jens Philip Yazdani, former chairman of the Union of Danish Upper Secondary School Students. During the 2018 Soccer World Cup, Yazdani, whose background is part Iranian, weighed in on the debate on national identity and what it means to be Danish. In a post, he wrote that he found it easier to support the Iranian national team than the Danish one, because of the harsh tone of the immigration debate in Danish society. The post was shared widely on Facebook, garnering many likes and a heated debate in the comments. Against all reason, Facebook decided to remove the post—including its many shares and comments—after several complaints, because the post had allegedly violated Facebook’s guidelines on “hate speech”. One may agree or disagree with Yazdani, but it is indeed hard to find anything per se offensive in the post whatsoever. With the press of a button, Facebook managed to kill a relevant contribution to the Danish debate in society. Only journalist Mikkel Andersson’s public criticism of Facebook’s decision led to a concession from Facebook, which put Yazdani’s post back online.20

The “hate speech” clause details three levels and therefore requires a larger quotation here:
  • Do not post:
    • Tier 1 attacks, which target a person or group of people who share one of the above-listed characteristics or immigration status (including all subsets except those described as having carried out violent crimes or sexual offenses), where attack is defined as
      • Any violent speech or support in written or visual form
      • Dehumanizing speech such as reference or comparison to:
        • Insects
        • Animals that are culturally perceived as intellectually or physically inferior
        • Filth, bacteria, disease and feces
        • Sexual predator
        • Subhumanity
        • Violent and sexual criminals
        • Other criminals (including but not limited to “thieves”, “bank robbers” or saying “all [protected characteristic or quasi-protected characteristic] are ‘criminals’”)
      • Mocking the concept, events or victims of hate crimes even if no real person is depicted in an image
      • Designated dehumanizing comparisons in both written and visual form
    • Tier 2 attacks, which target a person or group of people who share any of the above-listed characteristics, where attack is defined as
      • Statements of inferiority or an image implying a person’s or a group’s physical, mental, or moral deficiency
        • Physical (including but not limited to “deformed”, “undeveloped”, “hideous”, “ugly”)
        • Mental (including but not limited to “retarded”, “cretin”, “low IQ”, “stupid”, “idiot”)
        • Moral (including but not limited to “slutty”, “fraud”, “cheap”, “free riders”)
      • Expressions of contempt or their visual equivalent, including (but not limited to)
        • “I hate”
        • “I don’t like”
        • “X are the worst”
      • Expressions of disgust or their visual equivalent, including (but not limited to)
        • “Gross”
        • “Vile”
        • “Disgusting”
      • Cursing at a person or group of people who share protected characteristics
    • Tier 3 attacks, which are calls to exclude or segregate a person or group of people based on the above-listed characteristics. We do allow criticism of immigration policies and arguments for restricting those policies.
    • Content that describes or negatively targets people with slurs, where slurs are defined as words commonly used as insulting labels for the above-listed characteristics.

We find these straitlaced∗, American∗ moderators on Facebook despicable∗. We hate∗ their retarded∗ attempts to subdue free speech. We think that such idiots∗ ought to be kicked out∗ from Facebook and from other tech giants∗.

In this short statement, we have violated Facebook’s “hate speech” criteria in Tiers 1, 2 and 3 (marked by ∗). Despite the amplified rhetoric, the sentiment is sincere, and we consider the statement to express legitimate political criticism. It is instructive to compare Facebook’s weak and broad “hate speech” criteria with Twitter’s radically different narrow and precise definitions, beginning with: “You may not promote violence against or directly attack or threaten other people on the basis of race…” (and then a version of the usual well-known group list is added).21 The only strange thing here is that it implies that users are indeed allowed to promote violence against people who happen not to belong to any of those explicitly protected groups. At Twitter, the focus remains on “harm”, “harassment”, “threats” and—unlike Facebook’s list—it does not operate with a diffuse list of fairly harmless linguistic terms, statements and metaphors.

Regarding “Violence and Graphic Content”, Facebook’s policy goes as follows:

  • Do not post:
    • Imagery of violence committed against real people or animals with comments or captions by the poster that contain
      • Enjoyment of suffering
      • Enjoyment of humiliation
      • Erotic response to suffering
      • Remarks that speak positively of the violence; or
      • Remarks indicating the poster is sharing footage for sensational viewing pleasure
    • Videos of dying, wounded, or dead people if they contain
      • Dismemberment unless in a medical setting
      • Visible internal organs
      • Charred or burning people
      • Victims of cannibalism
It is no wonder that the company wants to ban snuff videos where people are actually killed in front of rolling cameras, essentially for profit. But the paragraph seems to completely overlook the value of war journalism and other serious reports on torture, crime or disasters—such as Nick Ut’s already mentioned press photo “Napalm Girl”, featuring a naked child running from a US napalm attack, a photo that at the time contributed to a radical turn in public opinion on the Vietnam War.22 Or what about Robert Capa’s famous photos from the Spanish Civil War? Facebook seems to assume that all images featuring, for example, “charred or burning people” necessarily have a malignant purpose as opposed to an enlightening, medical, journalistic, documentary or critical purpose. In any event, this section of the policy has no counterpart in the legislation of most countries.

The section on nudity and sex contains the following interesting concessions:

“Our nudity policies have become more nuanced over time. We understand that nudity can be shared for a variety of reasons, including as a form of protest, to raise awareness about a cause, or for educational or medical reasons. Where such intent is clear, we make allowances for the content. For example, while we restrict some images of female breasts that include the nipple, we allow other images, including those depicting acts of protest, women actively engaged in breast-feeding, and photos of post-mastectomy scarring. We also allow photographs of paintings, sculptures, and other art that depicts nude figures.” Facebook seems to be realizing that fighting against the Delacroix painting, breast-feeding selfies, and so on is going way too far. Still, as recently as 2018, the company had to apologize for repeatedly deleting photos of one of humanity’s oldest sculptures, the tiny 30,000-year-old stone figurine known as “Venus from Willendorf”, an ample-bodied fertility symbol with highlighted labia.23 And in August 2018, a Holocaust photo featuring naked concentration camp prisoners was removed from the Anne Frank Center’s page.24 The very long list of things that this section disallows is very detailed and would probably still include Peter Øvig’s hippie photos from 1970. In a subclause such as the following, there are two interesting things to make a note of among the list of sexual content which users are not allowed to post:

  • Other sexual activities including (but not limited to)
    • Erections
    • Presence of by-products of sexual activity
    • Stimulating genitals or anus, even if above or under clothing
    • Use of sex toys, even if above or under clothing
    • Stimulation of naked human nipples
    • Squeezing naked female breast except in breastfeeding context

The recurring phrase “but not limited to” (cf. “for any reason”) gives the platform a license to expand the list of prohibited subjects as it sees fit. Thus, despite quite explicit and detailed descriptions worthy of a porn site, users are not given any real clarity about where the boundary actually lies. Another interesting ban is that against “the presence of by-products of sexual activity”… the most widely known and visible by-product of sexual activity being—children. However, photos of children (unless nude) do not seem to be removed from people’s Facebook pages—the sloppy choice of words shows that the platform’s detailed community standards are still a far cry from the clarity one normally expects of real legal texts. This is no minor issue, inasmuch as these standards are in the process of supplementing or even replacing actual legislation.

The last form of forbidden content has been given the enigmatic title “Cruel and Insensitive” (which seems to be missing a noun, by the way). It is only briefly elaborated: “Content that depicts real people and mocks their implied or actual serious physical injuries, disease, or disability, non-consensual sexual touching, or premature death.” Is this to say that making fun of someone’s death is okay, as long as they died on time? Perhaps this rule against mockery of disabilities was also what allowed Facebook to remove a caricature drawing of Donald Trump with a very small penis, believing that it was an offense against the poor man. Again, a more context-sensitive reader or algorithm would know that this was an ironic political reference to the debates during the presidential primaries of 2016, where an opponent accused Trump of having small hands (obviously referring to the popular wisdom that a correlation exists between the size of men’s hands and their genitals).

In the spring and summer of 2018, Facebook seems to have been hit by almost a panic of activity in the wake of the Cambridge Analytica scandal—hardly a week went by without new, ostentatious initiatives from the company, probably in an attempt to appear serious and well-behaved enough to avoid imminent political regulation. However, many of the initiatives come off as improvised and uncoordinated—the principles of the removal manual from April were thus already being revised in August. During the Alex Jones case (see Chapter 12), the application of the “hate speech” policy was further tightened, and a few days after the Jones ban, on August 9th, Facebook came out with another sermon, this time with the title “Hard Questions: Where Do We Draw The Line on Free Expression?”, signed by the company’s Vice President of Policy Richard Allan. The document takes as its point of departure a definition of freedom of speech as guaranteed by the government. It notes the span between American freedom of expression, protected by the First Amendment, at one end, and dictatorial regimes at the other. However, in the message Facebook takes care to remind us that it is not a government, but that the company still wants to draw this line in a way “... that gives freedom of expression the maximum extension possible.”25 It seems that leaders at Facebook have finally begun to look to the political and legal tradition of freedom of expression. Now there are references to Article 19 of the “International Covenant on Civil and Political Rights” (ICCPR) as a source of inspiration. The United Nations adopted this covenant in 1966, but even back then, the agreement was surrounded by much discussion and criticism, partly due to its Article 20 calling for legislation against “hate speech”. It was heavily criticized by many Western countries for its curtailment of free speech.
There is some irony to the fact that this convention, which Facebook now invokes, was promoted by none other than the former Eastern Bloc, led by the Soviet Union.26 One may wonder why Facebook does not prefer to seek inspiration in the US tradition of free speech legislation and case law—the tradition of a country with long and important experience in practicing freedom of speech. In the short term, however, another passage is worth noting: “we do not, for example, allow content that could physically or financially endanger people, that intimidates people through hateful language, or that aims to profit by tricking people using Facebook.” In a mere casual remark, Facebook here introduces a new removal criterion that was not included in the removal handbook: “financial danger”, i.e. content that tries to gain a profit by fooling Facebook users.27 Again, the sloppiness of the approach is striking: a whole new removal criterion is introduced in passing, with no clear definition or examples of what would constitute a violation of the new rule. If we did not know any better, the many ads through which Facebook generates its huge profits could easily be characterized as tools to gain profit by fooling people into buying things they do not need. This is yet another piece of improvised policy-making—one must hope that American and European politicians realize that such measures cause more problems than they solve, and that they call out for regulation rather than make it superfluous.

The bottom line is that Facebook’s belated publication of more detailed content removal guidelines is a small step forward—probably triggered by the congressional hearings of Zuckerberg a few weeks before their publication. It is commendable that a little more public light is shed on the mix of reasonable and strange, common-sense and unconsidered ponderings that lie beneath this key political document. We still do not know much, however, about the safety and security staff, at present counting some 30,000 people: their training, qualifications and working conditions, or what equips them to perform this task so crucial to the public. Many of the tech giants’ content moderation departments operate for low pay (300–500 dollars a month) in third-world countries like the Philippines, and under non-disclosure agreements.28 There is indeed some distance between the luxurious hipster life of table soccer and free organic food and drinks at the Facebook headquarters in California and the work lives of stressed subcontractors crammed side-by-side in shabby surroundings. One might reasonably ask how they could be expected to understand the motivation behind a user posting a picture, especially when that user is in a different country, posting in a different language and a different context. Is the staff being trained, and if so, how? Image, video and text are often intertwined, commenting on each other: does the company have personnel with the appropriate language skills to cover a global circle of users posting in hundreds of different languages? Does Facebook give moderators productivity bonuses—how many cases does an employee need to resolve per hour? How many accounts need to be blocked? And how much content is removed per hour?

An average of five to ten seconds spent on each image is often mentioned; in such a short span, aspects like context, culture, quotation or irony of course cannot be taken into account. But the actual time frame may be even shorter. Dave Willner, who worked for Facebook as a moderator from 2008 to 2013, processed 15,000 images per day; on an eight-hour workday, that makes around two seconds per image.29 Since doubtful cases presumably take a little longer, the average time for most decisions is even shorter. Is there any effective, overall assurance that the many employees actually follow the guidelines, or are they to some extent left to their own rushed decisions and taste-based assessments? In an interview with ProPublica, Willner’s description of how the removal work began in 2008 points to a great deal of judgment being involved: “ ... [Facebook’s] censorship rulebook was still just a single page with a list of material to be removed, such as images of nudity and Hitler. At the bottom of the page it said, ‘Take down anything else that makes you feel uncomfortable’.” This is an extremely broad censorship policy, leaving a considerable amount of judgment on the shoulders of the individual employee—and very little legal protection for the user. Willner continues with a thoughtful remark: “‘There is no path that makes people happy. All the rules are mildly upsetting.’ The millions of decisions every day means that the method, according to Willner, is ‘more utilitarian than we are used to in our justice system. It’s fundamentally not rights-oriented.’”30 The utilitarian attitude weighs damage against utility. So if a number of users’ rights are violated and their content is removed, the act can be legitimized by the fact that a larger number of other users, in turn, experience a benefit—for example, if they feel that a violation has been avenged. Questions of guilt and rights drift into the background, as what matters is the net number of satisfied users.
Obviously, such a balancing system tends to favor the complainant, since he or she is the one heard by the moderators, while the accused party is not heard and has no means of defense. Therefore, it is inherent to this system that the expressing party, the utterer of a statement, has no right—no real freedom of expression.
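The throughput arithmetic cited above is easy to verify with a back-of-the-envelope calculation; the sketch below simply restates the figures reported for Willner’s workload:

```python
# Back-of-the-envelope check of the moderation pace described above:
# 15,000 images reviewed over an eight-hour workday.
IMAGES_PER_DAY = 15_000        # Willner's reported daily volume
WORKDAY_SECONDS = 8 * 3600     # one eight-hour shift, in seconds

seconds_per_image = WORKDAY_SECONDS / IMAGES_PER_DAY
print(f"{seconds_per_image:.2f} seconds per image")  # prints "1.92 seconds per image"
```

At well under two seconds per decision, there is plainly no room for weighing context, culture, quotation or irony.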

The community standards of the tech giants are becoming the policies guiding a new form of censorship. Removal of content by an algorithm before it even becomes visible to users takes us all the way back to the pre-censorship which was abolished in Denmark in 1770 by J.F. Struensee. On large parts of the Internet, this “formal” freedom of speech is not respected. The manual removal of content upon complaints can be likened to post-censorship and is comparable to the police control practiced in Denmark from 1814 until the Constitution of Denmark came into effect in 1849—with it came a number of laws against material freedom of expression, such as the sections on blasphemy, pornography and “hate speech”. Unlike Danish law going as far back as 1790, however, in the legal environment of the tech giants there is no judicial review, no public court case, and appeal options are poor, unsystematic, or non-existent.

Of course, Facebook’s rule-book is not a proper legal document, but still it is bizarre to note that this pseudo-legal text, with its vagueness and many hyper-detailed bans, now comprises the principles governing the limits of expression of millions—if not billions—of people for whom Facebook’s de facto monopoly is the only way they may reach the public sphere and access their news.

In the April 2018 document, Facebook had also promised a new appeal option for users whose content has been blocked or whose accounts have been suspended. In a November 2018 missive to Facebook users, Zuckerberg elaborated on the idea. Here, he promised the long-term establishment of an independent appeal institution in order to “[...] uphold the principle of giving people a voice while also recognizing the reality of keeping people safe.”31 We are still waiting for the details on how that attempt at squaring the circle will work—particularly how the board will be selected and how independence from Facebook’s commercial interests will be guaranteed. Given the amount of flaggings, one can only imagine how many staffers would have to be employed in this private “supreme court”. Even if this idea may be a step in the right direction, such an appeal organ will, of course, still have to function on the basis of the much-disputed details of the Facebook community standards.

In the same pastoral letter, Zuckerberg articulated a new theory on the regulation of free speech. No matter where one draws the line between legal and illegal, he claimed, special user interest will be drawn to legal content that comes close to that borderline. No matter whether you are prudish or permissive in drawing the line, a special fascination will radiate from borderline posts. To mitigate this, Zuckerberg now proposes a new policy: such borderline content, legal but in the vicinity of the border, will be suppressed and have its Facebook circulation reduced—the closer to the line, the greater the reduction: “[...] by reducing sensationalism of all forms, we will create a healthier, less polarized discourse where more people feel safe participating.”32 The idea echoes the de-ranking of “fake news”, only now spread to other types of content. Introduced in the same letter as the appeal institution, this idea raises some new, unsolved questions: Will people posting borderline content be informed about the reduced distribution of their posts? If not, a new zone of suppression without possibility of appeal will be created. Furthermore, as soon as this reduction is noticed in the community, more interest is sure to be generated by posts on the borderline of the borderline—a slippery slope if there ever was one.

One might ask why there should be detailed rules for content removal at all. It was not an issue with the communication technologies Facebook is helping to replace: the telephone and the mail that former generations relied on to “connect” with their “friends”. The postal services of the free world do not refuse to deliver certain letters after examining their content, and the telephone companies do not interrupt calls because people are talking about things the phone companies do not like. These providers of communications infrastructure were even obliged not to censor users; they were seen as companies that help communicate content, not moderate it.33 It is primarily for commercial reasons that companies like Facebook introduce restrictions on what their users may say. But a harmful consequence is that these restrictions have turned out to be conducive to the censorship desires of certain political forces.

Footnotes

  1. Facebook “Community Standards”. Last visited 08-04-18: https://www.facebook.com/communitystandards/; the quotes in this chapter are taken from here. See also Lee, N. “Facebook publishes its community standards playbook” Engadget. 04-24-18.

  2. Cf. Gillespie (2018) p. 111f.

  3. That is, around 100 million fake users; “Facebook shut 583 million fake accounts” Phys Org. 05-15-18. Last visited 06-25-18: https://phys.org/news/2018-05-facebook-million-fake-accounts.html.

  4. Lanier (2018) p. 34.

  5. According to The New York Times, cit. from Lanier, op.cit.

  6. It has since become clear that Facebook was the biggest source of “fake news” during the 2016 presidential election, cf. Guess, A., Nyhan, B. & Reifler, J. “Selective Exposure to Misinformation: Evidence from the consumption of fake news during the 2016 U.S. presidential campaign” Dartmouth. 09-01-18. Last visited 07-30-18: https://www.dartmouth.edu/~nyhan/fake-news-2016.pdf.

  7. BBC “Facebook ditches fake news warning flag” BBC News. 12-21-17.

  8. Ananny, M. “The partnership press: Lessons for platform-publisher collaborations as Facebook and news outlets team to fight misinformation” Tow Center for Digital Journalism. 04-04-18. Last visited 07-30-18: https://www.cjr.org/tow_center_reports/partnership-press-facebook-news-outlets-team-fight-misinformation.php#citations—the following quotes are taken from this.

  9. On iconic material in truth-based assertions, see Stjernfelt (2014).

  10. InfoWars host Alex Jones had his account on Facebook and other sites shut down on August 6, 2018, cf. Vincent, J. “Facebook removes Alex Jones pages, citing repeated hate speech violations” The Verge. 08-06-18. Apple, Spotify and YouTube also closed InfoWars accounts on the same day.

  11. Hern, A. “Facebook protects far-right activists even after rule breaches” The Guardian. 07-17-18.

  12. According to a BuzzFeed survey: Silverman, C., Lytvynenko, J. & Pham, S. “These Are 50 of the Biggest Fake News Hits on Facebook in 2017” BuzzFeed. 12-28-17.

  13. Ingram, D. “Facebook begins ‘fact-checking’ photos and videos” Reuters. 03-29-18.

  14. Levin, S. “‘They don’t care’: Facebook factchecking in disarray as journalists push to cut ties” The Guardian. 12-13-18.

  15. Coldewey, D. “UPDATE: Snopes quits and AP in talks over Facebook’s factchecking partnership” TechCrunch. 02-01-19.

  16. Facebook’s “Community Standards 12. Hate Speech” p. 18. Last visited 07-30-18: https://www.facebook.com/communitystandards/objectionable_content/hate_speech.

  17. Many tech giants have similar formulas that directly cite the range of groups that enjoy special protection in US anti-discrimination legislation. Although the United States has no criminalization of hate speech (and may not have it because of the First Amendment), companies thus, in a certain sense, generalize and extend the existing law to include hate speech. It is worth noting that the characteristics (ethnicity, gender, religion, etc.) used in this legislation do not distinguish between minority and majority groups—unlike what is often assumed, the protection here is not aimed at protecting minorities specifically, and as a matter of principle majority groups supposedly have a right to equal protection according to such laws and regulations.

  18. Facebook’s “Community Standards 12. Hate Speech” p. 18. Last visited 07-30-18: https://www.facebook.com/communitystandards/objectionable_content/hate_speech.

  19. See Abdel Mahmoud’s Facebook post in Pedersen, J. “Kendt DR-vært censureret af Facebook: Se opslaget, der fik ham blokeret” [“Well-known DR host censored by Facebook: See the post that got him blocked”] BT. 01-28-18.

  20. Andersson, M. “Når Facebook dræber samfundsdebatten” [“When Facebook kills public debate”] Berlingske. 07-25-18.

  21. Quot. from Gillespie (2018) p. 58.

  22. Ingram, M. “Here’s Why Facebook Removing That Vietnam War Photo Is So Important” Fortune. 09-09-16. The Norwegian newspaper Aftenposten went to great lengths to attack Facebook’s removal of the photo when its Editor-in-Chief published an open letter to Zuckerberg, which gained international impact. Critics added that the effect of Facebook’s removal of the photo echoed the Nixon administration’s attempts many years ago to label the photo as a fake.

  23. Breitenbach, D. “Facebook apologizes for censoring prehistoric figurine ‘Venus of Willendorf’” dw.com. 01-03-18.

  24. The photo was put back up after a complaint filed by the museum. Brandom, R. “Facebook took down a post by the Anne Frank Center for showing nude Holocaust victims” The Verge. 08-29-18.

  25. Facebook: “Hard Questions: Where Do We Draw The Line on Free Expression?” Facebook Newsroom. 08-09-18.

  26. See also Mchangama & Stjernfelt (2016) p. 781ff.

  27. Constine, J. “Facebook now deletes posts that financially endanger/trick people” TechCrunch. 08-09-18.

  28. Chen, A. “The Laborers Who Keep Dickpics and Beheadings out of Your Facebook Feed” Wired. 10-23-13.

  29. Angwin, J. & Grassegger, H. “Facebook’s secret censorship rules protect white men from hate speech but not black children” Salon (originally appeared on ProPublica). 06-28-17.

  30. Ibid.

  31. Zuckerberg, M. “A Blueprint for Content Governance and Enforcement” Facebook Notes. 11-15-18.

  32. Ibid.

  33. Cf. the distinction in American law between “conduit” and “content”—responsibility for transfer and responsibility for content modification, respectively.

Bibliography

Books

  1. Gillespie, T. (2018). Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press.
  2. Lanier, J. (2018). Ten arguments for deleting your social media accounts right now. The Bodley Head.
  3. Stjernfelt, F. (2014). Natural propositions: The actuality of Peirce’s Doctrine of Dicisigns. Docent Press.

Copyright information

© The Author(s) 2020

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Authors and Affiliations

  • Frederik Stjernfelt, Humanomics Center, Communication/AAU, Aalborg University Copenhagen, København SV, Denmark
  • Anne Mette Lauritzen, Center for Information and Bubble Studies, University of Copenhagen, København S, Denmark
