According to LeCun’s tweets cited at the beginning of this paper, Facebook’s AI-powered filter cleanses the platform of:
1. Hate speech;
2. Calls to violence;
3. Bullying; and
4. Disinformation that endangers public safety or the integrity of the democratic process.
These are his words, so we will work with them, even though the actual definitions of hate speech, calls to violence, and the other terms are potentially controversial and open to debate.
These claims are provably false. While “AI” (along with some very large manual curation operations in developing countries) may effectively filter some of this content, at Facebook’s scale, some is not enough.
Let’s examine the claims a little more closely.
Does Facebook actually filter out hate speech?
An investigation by the UK-based counter-extremism organization ISD (Institute for Strategic Dialogue) found that Facebook’s algorithm “actively promotes” Holocaust denial content [20]. The same organization, in another report, documents how Facebook’s “delays or mistakes in policy enforcement continue to enable hateful and harmful content to spread through paid targeted ads” [17]. They go on to explain that “[e]ven when action is taken on violating ad content, such a response is often reactive and delayed, after hundreds, thousands, or potentially even millions of users have already been served those ads on their feeds.”
Zuckerberg admitted in April 2018 that hate speech in Myanmar was a problem, and pledged to act. Four months later, Reuters found more than “1000 examples of posts, comments, images and videos attacking the Rohingya or other Myanmar Muslims that were on Facebook” [45]. As recently as June 2020 there were reports [7] of troll farms using Facebook to intimidate opponents of Rodrigo Duterte in the Philippines with death threats and hateful comments.
Does Facebook actually filter out calls to violence?
The Sri Lankan government had to block access to Facebook “amid a wave of violence against Muslims … after Facebook ignored years of calls from both the government and civil society groups to control ethnonationalist accounts that spread hate speech and incited violence” [42]. A report from the Center for Policy Alternatives in September 2014 detailed evidence of 20 hate groups in Sri Lanka and informed Facebook. In March 2018, BuzzFeed reported that “16 out of the 20 groups were still on Facebook.”
When former President Trump tweeted, in response to Black Lives Matter protests, that “when the looting starts, the shooting starts,” the message was liked and shared hundreds of thousands of times across Facebook and Instagram, even as other social networks such as Twitter flagged the message for its explicit incitement of violence [48] and prevented it from being retweeted.
Facebook played a pivotal role in the planning of the January 6th insurrection in the US, providing an unchecked platform for proliferation of the Big Lie, radicalization around this lie, and coordinated organization around explicitly stated plans to engage in violent confrontation at the US Capitol on the outgoing president’s behalf. Facebook’s role in the deadly violence was far greater and more widespread than the role of Parler and the other fringe right-wing platforms that attracted so much attention in the aftermath of the attack [11].
Does Facebook actually filter out cyberbullying?
According to Enough Is Enough, a non-partisan, non-profit organization whose mission is “making the Internet safer for children and families,” the answer is a resounding no. Its most recent cyberbullying statistics [10] show that 47% of young people have been bullied online, with the two most prevalent platforms being Instagram at 42% and Facebook at 37%.
In fact, Facebook is failing to protect children on a global scale. According to a UNICEF poll of children in 30 countries, one in every three young people says that they have been victimized by cyberbullying. And one in five says the harassment and threat of actual violence caused them to skip school. According to the survey, conducted in concert with the UN Special Representative of the Secretary-General (SRSG) on Violence against Children, “almost three-quarters of young people also said social networks, including Facebook, Instagram, Snapchat and Twitter, are the most common place for online bullying” [49].
Does Facebook actually filter out “disinformation that endangers public safety or the integrity of the democratic process?”
To list the evidence contradicting this point would be exhausting. Below are just a few examples:
- The Computational Propaganda Research Project found in its 2019 Global Inventory of Organized Social Media Manipulation that 70 countries had disinformation campaigns organized on social media in 2019, with Facebook as the top platform [6].
- A Facebook whistleblower produced a 6600-word memo detailing case after case of Facebook “abdicating responsibility for malign activities on its platform that could affect the political fate of nations outside the United States or Western Europe” [44].
- Facebook is ground zero for anti-vaccination and pandemic misinformation, with the 26-minute conspiracy-theory film “Plandemic” going viral on Facebook in April 2020 and garnering tens of millions of views. Facebook’s attempt to purge itself of anti-vaccination disinformation was easily thwarted when the groups guilty of proliferating this content simply removed the word “vaccine” from their names. In addition to undermining public health interests by spreading provably false content, these anti-vaccination groups have obscured meaningful discourse about the actual health concerns and risks that may or may not be connected to vaccinations. A paper from May 2020 attempts to map out the “multi-sided landscape of unprecedented intricacy that involves nearly 100 million individuals” [25] entangled with anti-vaccination clusters. That report predicts that such anti-vaccination views “will dominate in a decade” given their explosive growth and intertwining with undecided people.
- According to the Knight Foundation and Gallup [26], 75% of Americans believe they “were exposed to misinformation about the election” on Facebook during the 2020 US presidential election. This is one of those rare issues on which Republicans (76%), Democrats (75%), and Independents (75%) agree: Facebook was the primary source of election misinformation.
If those AI filters are in fact working, they are not working very well.
All of this said, Facebook’s reliance on “AI filters” misses a critical point: you cannot have AI ethics without ethics [30]. These problems cannot be solved with AI. These problems cannot be solved with checklists, incremental advances, marginal changes, or even state-of-the-art deep learning networks. These problems are caused by the company’s entire business model and mission. Bosworth’s provocative quotes above, along with Tim Kendall’s direct testimony, demonstrate as much.
These are systemic issues, not technological ones. Yael Eisenstat put it best in her TED talk: “as long as the company continues to merely tinker around the margins of content policy and moderation, as opposed to considering how the entire machine is designed and monetized, they will never truly address how the platform is contributing to hatred, division and radicalization.”