The evolution of Facebook’s approach to suicide prevention was shaped by a series of ethical questions that we identified and addressed. In effect, the technical and product-level decisions made during this process involved balancing competing values and interests. This section elaborates on the ethical questions that arose within our product development process, and on how we reached an appropriate and justifiable balance between the competing values those questions posed.
Privacy Program
At Facebook, we consider diverse perspectives when building and reviewing our products, incorporating guidance from experts in areas like data protection and privacy law, security, content policy, engineering, product management, and public policy. This process, which involves a number of different teams (for example, legal, policy, privacy program, security, communications, and marketing), addresses potential privacy and other legal issues. It is important to note that this process goes beyond the assessment of legal compliance requirements, looking also into the policy, ethical, and societal implications of our products and services, while ensuring that our product applications bring value to people on our platform. The process starts at the product ideation phase and continues through deployment, including—in some instances—post-deployment monitoring. During that process, we seek wider input where necessary; this might include external policy briefings and consultation with industry and academic experts. It is within this cross-functional product review process that both privacy and ethical questions are discussed and settled. Discussions of the ethical implications of our products and services are often a component of Facebook’s privacy program. Our evaluation commonly goes beyond strictly privacy, data protection, and other legal questions, addressing product and research decisions that raise competing values and interests for which there is no simple, straightforward, or pre-delineated solution. These ethical discussions also stress the need to create optimal outcomes in the wider interests of society, looking at the collective dimension of automation and algorithmic processes.Footnote 43 In the following, we describe concrete product decisions, within Facebook’s approach to suicide prevention, that raised ethical questions and prompted the need to balance different values and interests.
Ethical Questions
Ethical Imperative?
The very first question we had to ask ourselves was: why should we be doing this in the first place? In other words, why should we be deploying resources and building suicide prevention tools? Are we overstepping our bounds, going beyond what we are supposed to do as a company?
We addressed this foundational question by considering a number of important factors.
First of all, as noted above, a number of suicide prevention institutions came to us and highlighted our unique position to help tackle this problem. From that moment, our answer, whether negative or positive, would inevitably reflect an ethical position on our part. To answer that question, and based on the research we conducted (as also mentioned above), we first recognized that we are, in effect, well positioned and resourced to build suicide prevention tools. As a platform that people may use to express thoughts of suicide, and that connects people to those who care about them, we have in place the social infrastructure necessary to use those connections when people express thoughts about suicide on Facebook. Secondly, this decision also stems from our mission to keep our community safe,Footnote 44 and is aligned with our overall focus, work, and investment on safety and well-being, which makes suicide prevention work an ethical imperative for Facebook.Footnote 45 This decision provided the ethical grounding and justification to start working in this space. From here, and as explained below, a number of more concrete ethical questions followed regarding particular technical decisions on how to build and implement a tool for suicide prevention.
Privacy vs Efficacy
When building suicide prevention tools, one of the balances we need to strike is between efficacy and privacy. These interests may be at odds with each other: going too far on efficacy, that is, detecting suicidal ideation at all costs with no regard for limits or boundaries, could compromise or undermine privacy, i.e., people’s control over their own information and how it is used. The question we faced was the following: How can we deploy a suicide prevention system that is effective and that protects people’s privacy, i.e., one that is not intrusive and is respectful of people’s privacy expectations?
For the sake of efficacy, a number of decisions could have been made to optimize that goal. One possible technical option was to leverage all types of content available to us to train the Machine Learning model and to use it in production. This would mean, for example, all posts regardless of privacy or group setting. Another technical option was to involve the network of people and friends connected to the person who expressed suicidal ideation more actively, notifying them proactively of such suicidal expressions and encouraging them to interact with the person in distress. Following these options would render the system more effective, but it would also undercut important privacy values and safeguards.
With that in mind, we deliberately decided not to use “only me” posts in training the suicide prevention classifier, as model inputs, or in deploying it afterwards on our platform. “Only me” posts are posts whose privacy setting restricts the audience to the person posting the content: no one besides the person uploading the post can see its content. This means that content needs to have been posted to friends, to a list of friends, or more publicly in order to be used in building the suicide prevention tool and running it in production. We also deliberately decided to exclude secret group posts, because there are higher privacy expectations for this type of content. Content posted within a secret groupFootnote 46 was thus removed from the creation and deployment of the suicide prevention system. In a nutshell, and as a way to strike an ethical balance between privacy and efficacy, we decided not to look at “only me” posts or secret groups, but only at cases where a person is clearly reaching out to their friends.
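To make this exclusion concrete, a minimal sketch of such a filtering step, assuming a hypothetical `Post` record with `audience` and `group_privacy` fields (these names are illustrative and not Facebook’s actual data model), might look like the following:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative post record; field names are assumptions, not Facebook's actual schema.
@dataclass
class Post:
    post_id: str
    text: str
    audience: str                  # e.g., "only_me", "friends", "public"
    group_privacy: Optional[str]   # e.g., "secret", "closed", "public", or None

def eligible_for_suicide_prevention(post: Post) -> bool:
    """Return True only for posts whose audience settings make them eligible
    for classifier training and for production scoring."""
    if post.audience == "only_me":       # excluded: visible to the author only
        return False
    if post.group_privacy == "secret":   # excluded: higher privacy expectations
        return False
    return True                          # the person is clearly reaching out to others

# Illustrative usage: only eligible posts feed the training set or get scored.
all_posts = [
    Post("1", "feeling really low tonight", "friends", None),
    Post("2", "a note just for myself", "only_me", None),
    Post("3", "I can't go on", "friends", "secret"),
]
eligible = [p for p in all_posts if eligible_for_suicide_prevention(p)]  # keeps post "1" only
```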
Another important decision was not to highlight to anyone other than the authors themselves when their post has been flagged as expressing a risk of suicide. An approach favoring efficacy would point towards proactively notifying friends about posts expressing suicidal ideation, encouraging them to engage with the person in distress who authored the post. But privacy considerations led us not to issue any sort of proactive notification and to rely on our standard reporting mechanisms. Posts flagged by our Machine Learning classifier as expressing suicidal ideation will still trigger a recommendation of materials and helplines that the author of the post, and only the author, can leverage; but they will not alert anyone else in any way. Friends or other people in the audience of such posts can always, on their own, report them as referring to suicide or self-injury.
Due to privacy considerations, we also focused primarily on the content of the posts rather than general risk factors tied to the user. In other words, we analyze the content of the posts without associating or collecting information on people uploading those posts. Our AI systems are trained to detect expressions of suicide without storing and connecting that information to specific people. The focus of our Machine Learning work is on the content expressed and not on the person expressing the content.
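As a rough illustration of this content-only design, a hypothetical feature-extraction step would operate on the text of the post alone, with no user identifier or profile attribute among its inputs; the function and feature names below are assumptions, not Facebook’s implementation.

```python
from typing import Dict

def extract_content_features(post_text: str) -> Dict[str, float]:
    """Build classifier features from the text of the post alone.
    No user ID, profile information, or account history is used."""
    tokens = post_text.lower().split()
    return {
        "num_tokens": float(len(tokens)),
        "first_person": float(any(t in {"i", "me", "my"} for t in tokens)),
        # In a real system, n-gram or text-embedding features would go here,
        # still derived only from the content of the post.
    }
```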
Last, but certainly not least, we have been open and public about the deployment of this feature and the technical details involved in building and rolling it out. We provide explanations in our Help CenterFootnote 47 and Safety CenterFootnote 48 about the tools we have built for this purpose,Footnote 49 along with the Suicide and Self-Injury Resources. And we share resources on our Newsroom website with posts including “How Facebook AI Helps Suicide Prevention,”Footnote 50 “Getting Our Community Help in Real Time,”Footnote 51 and “Building a Safer Community with New Suicide Prevention Tools.”Footnote 52
The consistent theme across these decisions was to protect people’s privacy while still trying to be as effective as possible, getting people help when we can. The privacy–efficacy balancing exercise will be an ongoing one, so that we can continue to strike the right balance as technology evolves.
Human in the Loop
Another question raised during the product development process revolved around the scope of human intervention and the extent to which we could rely on automation in the review of human-generated reports. The tradeoff we faced in this particular context was between “full automation” and “human indispensability,” or, as it is more popularly known, the question of the human in the loop. In our particular product building process, the questions boiled down to the following: Should all user reports go through human review, even those with low accuracy? Is there a place for full automation, that is, for delegating to the machine the decision not to act on a report? Or, to put it in other words: should we focus the resources, time, and energy of human review on reports that score high in accuracy, and auto-ignore those that score low?
Our internal analysis indicates that user reports on posts expressing suicidal thoughts tend to score low in accuracy and that most reactive reports are not actionable, that is, they do not warrant specific actions from our Community Operations reviewers. Automatically dismissing low-scoring reports would save resources and reduce human review costs. Excluding such reports from manual review would allow our human reviewers to spend less time, attention, and energy going through non-actionable reports, which only increases their fatigue and affects their productivity and motivation. Instead, by fully automating the dismissal of these reports, human reviewers could focus on the reports that matter more, being more productive and strategic in the work they have been assigned to do.
Despite how appealing and apparently rational the decision to automatically dismiss low-scoring reports might seem, namely in terms of economic savings and human resources optimization, there are important implications of removing the human from the loop that we need to take into consideration in the context of suicide prevention. The most important one is the cost of false negatives. If we automatically dismiss a report erroneously, categorizing it as non-actionable when in fact it is actionable, this could result in ignoring a person at risk. Despite the very low statistical probability, the cost of misclassifying a report, and of doing so automatically with no human confirmation, was deemed too high. In this light, we opted for the path of human indispensability, keeping the human in the loop. We thus chose never to automatically close reactive, human-generated reports, despite the fact that they are often far less actionable than proactive, machine-driven ones. As such, all reports, even low-scoring, human-generated reactive ones, need to go through human review. Guiding this decision was the severity of harm to people on our platformFootnote 53 that could derive from misclassifying a report without human intervention.Footnote 54
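A minimal sketch of this human-in-the-loop rule, using a hypothetical `Report` record (the field names, and the idea of ordering the queue by score, are illustrative assumptions rather than Facebook’s actual system), might look like this:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Report:
    report_id: str
    source: str         # "human_reactive" (reported by a person) or "machine_proactive"
    model_score: float  # classifier's estimate that the post expresses suicidal risk

def build_review_queue(reports: List[Report]) -> List[Report]:
    """Queue every report for human review; none are closed automatically.
    The rejected alternative would have auto-dismissed low-scoring reactive
    reports, but the cost of a false negative was judged too high.
    Ordering by score (an illustrative choice) lets reviewers reach the most
    likely actionable reports first without discarding any report."""
    return sorted(reports, key=lambda r: r.model_score, reverse=True)
```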
Targeted and Effective vs Thorough and Ineffective (How Do We Set the Threshold?)
Once an ML model is created, there is an evaluation phase to assess the performance quality of the model, that is, to assess whether the classifier is good enough to put to use. In this case, the ML model generates a numeric prediction score for each post (an estimate), and then applies a threshold to convert these scores into binary labels of 0 (low likelihood of suicidal risk) and 1 (high likelihood of suicidal risk). By changing the score threshold, we can adjust how the ML model assigns these labels and, ultimately, how it determines whether a post is expressing thoughts of suicide.
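In code terms, the thresholding step described above reduces to something like the following sketch; the threshold value is purely illustrative.

```python
THRESHOLD = 0.8  # illustrative value; in practice it is tuned against review capacity

def label_post(prediction_score: float, threshold: float = THRESHOLD) -> int:
    """Convert the model's numeric prediction score into a binary label:
    1 = high likelihood the post expresses suicidal risk, 0 = low likelihood."""
    return 1 if prediction_score >= threshold else 0
```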
As always with Machine Learning, there are tradeoffs between recall (the proportion of posts actually expressing suicidal thoughts that the model detects) and precision (the proportion of posts flagged by the model that actually express suicidal thoughts). If we wanted to ensure we caught every single post expressing suicidal intent, we would have to review every post put on Facebook, which of course is impossible. ML is probabilistic in nature, so it will never be possible to ensure 100% accuracy in its use. So the question ahead of us was: how can we target the relevant posts and allocate only the strictly necessary resources for that, while being as thorough as we can? The question was about targetedness versus thoroughness.
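For reference, with TP, FP, and FN denoting the true positives, false positives, and false negatives obtained at a given threshold, the standard definitions are:

```latex
\text{precision} = \frac{TP}{TP + FP}, \qquad \text{recall} = \frac{TP}{TP + FN}
```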
Since we have decided that we should find content expressing suicidal thoughts, we have to decide how many posts can reasonably be looked at in a day and how many false positives we are willing to have our human reviewers look at. In ML terms, this is a question of how to set the threshold (a simple way of framing the tradeoff is sketched after the two cases below):
- If we lower the threshold, more posts, many of them less likely to be actionable, will need to be reviewed by more people; this poses the risk of having a disproportionate number of human reviewers looking at non-concerning posts.
- If we raise the threshold, the flagged posts will be more accurate and fewer people will be needed to review the content; but this runs the risk of missing content that should have been flagged and reviewed.
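As a purely illustrative sketch of that framing, one could evaluate candidate thresholds on held-out data and pick the lowest one (highest recall) whose flagged volume fits the available review capacity while staying above a precision floor; all numbers and function names below are assumptions, not Facebook’s actual values.

```python
from typing import List, Tuple

def choose_threshold(
    candidates: List[Tuple[float, float, int]],  # (threshold, precision, daily_flagged_posts)
    min_precision: float,
    daily_review_capacity: int,
) -> float:
    """Pick the lowest threshold (highest recall) that keeps precision above a
    floor and the flagged volume within what human reviewers can handle."""
    feasible = [
        (threshold, precision, volume)
        for (threshold, precision, volume) in candidates
        if precision >= min_precision and volume <= daily_review_capacity
    ]
    if not feasible:
        raise ValueError("No threshold satisfies both constraints; revisit staffing or model.")
    # Lower thresholds flag more posts (higher recall), so prefer the smallest feasible one.
    return min(feasible, key=lambda item: item[0])[0]

# Illustrative numbers only.
candidates = [(0.6, 0.55, 9000), (0.7, 0.70, 5000), (0.8, 0.85, 2000)]
print(choose_threshold(candidates, min_precision=0.6, daily_review_capacity=6000))  # -> 0.7
```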
In response to this challenge, our philosophy has been to maximize the use of the human review available to us without falling beneath a certain threshold of accuracy. We have thus substantially increased our staffing in this area so that we can handle proactively reported posts from more people at risk. But we did this while maintaining high auto-report accuracy. Overall, we want to maintain a baseline accuracy to make sure that posts from people who need help are seen as fast as possible and are not lost among benign posts. Using human review allows us to run our classifiers at lower precision but higher recall and ultimately help the most people. In this process, it’s important for human reviewers that the content they’re looking at is a mix of positives and negatives. Like ML algorithms, humans become less accurate when the content they’re trying to identify is scarce.
Human reviewers always make the final decision on whether we send resources for reported content. Moreover, people can always report any posts, including ones expressing thoughts of suicide.
In addition, when introducing a new classifier, we did not want to A/B test it against our old classifiers. This was of particular concern during experimental design: if the new signal performed significantly worse, we would miss out on helping people. Instead, we give each classifier the opportunity to report content, with the newest classifier given the first opportunity, so that we can establish a baseline for how accurate it is. We can remove a classifier when it is primarily reporting false positives, because newer classifiers are catching the concerning content. This has ensured that, when introducing new classifiers, people we could have helped won’t get missed because we tested a faulty classifier.
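A rough sketch of this staged rollout, assuming each classifier exposes a simple `flags(post)` predicate and that a report is attributed to the first (newest) classifier to flag a post, could look like the following; it is an illustration of the approach, not Facebook’s actual pipeline.

```python
from typing import Callable, List, Optional

class Classifier:
    def __init__(self, name: str, flags: Callable[[str], bool]):
        self.name = name
        self.flags = flags  # returns True if this model would report the post

def reporting_classifier(post: str, classifiers: List[Classifier]) -> Optional[str]:
    """Give each classifier a chance to report, newest first.
    The report is attributed to the first classifier that flags the post,
    which lets us measure each model's production accuracy without
    withholding help from anyone (as an A/B test against old models would)."""
    for clf in classifiers:          # classifiers ordered newest -> oldest
        if clf.flags(post):
            return clf.name
    return None                      # no classifier flagged the post

# An older classifier whose attributed reports are mostly false positives can be
# retired, since the newer models are already catching the concerning content.
```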
Potential for Support vs Risk of Contagion (When Do We Leave Up or Take Down Posts/Videos?)
An important issue for us to think through was weighing the potential to provide support to the person expressing thoughts of suicide against the risk of contagion to those who are viewing the post or video. This question was particularly salient for Facebook Live,Footnote 55 where someone in distress may be sharing a video of themselves attempting suicide in real time. In this situation, the person contemplating or attempting suicide is in a very serious position where harm to themselves may be imminent. When discussing this with experts, they pointed out that it could actually be helpful that the person is online and sharing their situation, rather than alone at home, as it enables other people to see the situation, reach out, and provide help. In this way, Facebook Live can become a lifeline providing support to people in distress at the moment they need it most. This simply wouldn’t be possible had the person been at home alone without sharing it. The concern, however, is that viewing a suicide attempt or completion can be traumatic, and a video left up to circulate poses a risk of contagion.
So how do we decide whether to leave up a post or video of a suicide attempt, or to stop the Live stream or remove the post/video? We worked extensively with experts to develop guidelines around when a person may still be helped versus when it is likely the person can no longer be helped (i.e., the person performs very specific behaviors that indicate a suicide attempt is in progress and that they would no longer be responsive to help). If we determine that the person can no longer be helped and harm is imminent, we stop the Live video stream or remove the post/video. At this point, the risk of trauma to the viewer is high, and we want to remove the post to mitigate the risk of contagion from it being viewed or shared. We do not take down the post/video as long as we think exposure may help. The hope is that someone will see the post or video and comment, reach out to the person directly, or flag it to Facebook to review so that we can provide resources.