Abstract
We investigate the question of whether (and if so why) creating or distributing deepfake pornography of someone without their consent is inherently objectionable. We argue that nonconsensually distributing deepfake pornography of a living person on the internet is inherently pro tanto wrong in virtue of the fact that nonconsensually distributing intentionally non-veridical representations about someone violates their right that their social identity not be tampered with, a right which is grounded in their interest in being able to exercise autonomy over their social relations with others. We go on to suggest that nonconsensual deepfakes are especially worrisome in connection with this right because they have a high degree of phenomenal immediacy, a property which corresponds inversely to the ease with which a representation can be doubted. We then suggest that nonconsensually creating and privately consuming deepfake pornography is worrisome but may not be inherently pro tanto wrong. Finally, we discuss the special issue of whether nonconsensually distributing deepfake pornography of a deceased person is inherently objectionable. We argue that the answer depends on how long it has been since the person died.
Data Availability
Not applicable.
Notes
The term “deepfake” was coined by a Reddit user of the same name in late 2017, the “deep-” referring to the role of deep learning, a form of machine learning, in their creation (Somers, 2020). The earliest and most spectacular deepfakes were videos. Strictly speaking, however, deepfakes need not be videos; they can also take the form of photographs or audio clips, likewise produced using machine learning (Westerlund, 2019).
A set of convincing Tom Cruise deepfakes can be found at https://www.tiktok.com/@deeptomcruise?lang=en, and the Willem Dafoe deepfake can be found at https://twitter.com/todd_spence/status/1434313457881391109?lang=en.
This example is modified from one we first encountered in an unpublished manuscript by David Boonin.
We restrict our discussion in this paper to moral rights and moral wrongdoing, while setting aside the issue of legal rights and legal wrongdoing. We make no claims about what is required to violate a legal right to privacy.
Again, we are not concerned with legal concepts. We are suggesting that, regardless of how the legal system classifies nonconsensual deepfake pornography, it is not always morally objectionable in virtue of being defamatory.
Admittedly, this claim is controversial among philosophers of fiction (see Kroon & Voltolini, 2019 for extensive discussion). It can also be obscured by the fact that claims about fiction, including claims about fictional characters and events, can clearly be false. Moreover, we can evaluate claims made within fiction as true or false relative to the fictional world. These possibilities do not complicate our argument, however.
See Harris (2021) and Gendler (2008) on the concept of “alief.” See also Husserl: in his language, our experiences are liable to become “sedimented” in our minds, and even after being forgotten they can continue to shape our outlook, actions, and behavior towards others. Experiences that become “sedimented” disappear into the “background” of our beliefs but continue to “provide the context and material for further judgements” (Moran & Cohen, 2012, pp. 288–290). Thanks to David Zoller for pointing us to the origins of this claim in phenomenology.
For an example of nonconsensual erotic fanfiction about real people, listen to the podcast Normal Gossip, Season 2, Episode 6 (McKinney, 2022).
We are smoothing over some complexities in this section for ease of exposition. One complexity, relevant here, is that people may have multiple more or less discrete social identities across the multiple spheres of social activity in which they live. Another complexity is that the concept of veridicality is both ambiguous and complex. A representation can be veridical in one sense or respect but not another, and it may be that whether a representation should count as veridical sometimes depends partly on the context in which it is presented, including, perhaps, facts about which of a person’s multiple discrete social identities are active in the relevant context. These complexities are interesting, but there is no space to explore them here. Thanks to an anonymous reviewer for pressing us to consider these points.
We are grateful to an anonymous reviewer and several commentators for inquiring about this possibility.
Relatedly, consider the fact that Franny’s actions may have non-accidental negative effects on other people, too. Suppose Franny creates deepfakes of Daniel superimposed over footage originally depicting someone else—call him Manny—engaging in sexual acts. Does Manny have a claim against Franny, since Manny is also “depicted” in this video? (We thank an anonymous reviewer for raising this question.)
Several points are worth mentioning here. First, if the video depicts Manny, it must be to a much lesser degree than that to which Daniel is depicted—a person’s social identity is typically much more closely tied to their face and voice than the rest of their body, as de Ruiter (2021) and others have observed. Second, the resulting deepfake is ex hypothesi not a non-veridical representation of Manny’s actions because the original footage just was of Manny performing those same actions. To the extent someone could recognize Manny’s body and attribute those actions to him, they would be attributing successfully. Finally, while it might be problematic to “co-opt” Manny’s body to portray Daniel non-veridically, this co-opting is not a form of defamation towards Manny but rather a species of exploitation in the Marxist sense, i.e., roughly, forcefully profiting from another person’s labor without proper compensation (Schmitt 1997; Zwolinski et al., 2022 §1.2). What we say here could also be said of any of the other people involved in the creation of Manny’s original footage, i.e., the director of photography, writer, production designer, script supervisor, first and second assistant director, etc., in case a pornographic film employs people in any of those roles.
A related objection says that non-veridical deepfakes that are silly and fun but not satirical, like many of those on the subreddit SFWdeepfakes (https://www.reddit.com/r/SFWdeepfakes/), are a counterexample to the NVRP, since many of these deepfakes are nonconsensual but clearly innocuous. For example, one popular deepfake on this subreddit depicts Jim Carrey’s face superimposed over Tobey Maguire’s face in a rather uninteresting scene from the 2002 movie Spider-Man (https://www.reddit.com/r/SFWdeepfakes/comments/chwwif/spiderjim/). However, deepfakes like this are innocuous because they are not socially significant. Thus, the NVRP does not apply to them, and they do not constitute a counterexample to it.
The NVRP is of course also compatible with saying that consent from the target of a deepfake is not sufficient to render its dissemination morally acceptable, perhaps because the consent of other people is needed (e.g., the people whose bodies are depicted in the deepfake, as in the case of Manny above) or because all deepfake pornography is bad. Thanks to an anonymous reviewer and Jennifer Kling for pressing us to clarify this point.
One might think that the argument behind the NVRP also implies that both nonconsensual private deepfake pornography and nonconsensual mental fantasy are inherently wrong since both of these at least risk producing a nonconsensual effect on the target’s social identity and relationships. We agree that both private deepfake pornography and mental fantasy can affect a target’s social identity and relationships by changing how the individual who privately indulges in these things interacts with the target and that this fact often grounds a reason against nonconsensually indulging in these things. However, we have only argued that people have a right that others not tamper with their social identity. Tampering implies intent. Nonconsensually disseminating intentionally non-veridical representations inherently involves intentionally altering a person’s social identity, whereas there is nothing about indulging in private deepfakes or mental fantasy that necessitates an intent to alter a person’s social identity. Thus, the reasoning behind the NVRP does not imply that private deepfake pornography and mental fantasy are inherently wrong. Thanks to an anonymous reviewer for pressing us to consider this point.
Notice that the claim that moral norms work differently in this special representational context does not commit us to Sher’s view that moral norms have no application in the mind.
Once formed, a person’s belief is very hard to dislodge. Research on this phenomenon dates back at least to the 1950s under the banner of “belief perseverance,” a person’s tendency to cling to a belief even after it has been empirically discredited (Dawson, 1999). For more on this phenomenon, see Slusher and Anderson (1989), Savion (2009), Walter and Tukachinsky (2020), and Sima (2022). While we could not find empirical evidence testing the hypothesis with regard to memories formed specifically from visual perception, we presume the same would hold.
Theoretically, it would be possible to create and consume deepfakes in a perfectly secure environment, e.g., an air-gapped computer lab in a bunker somewhere on an island. But that seems exceedingly rare. In fact, the applications that enable a person to create deepfakes likely afford the user a “share” button precisely to facilitate their spread. We thank an anonymous reviewer for both of these observations.
Note that there are conceivable types of non-veridical representations with even greater phenomenal immediacy than the most convincing deepfake videos, and which in this respect would be more problematic than deepfakes from the point of view of the NVRP. For example, in episode 21 of the third season of Star Trek: The Next Generation, a character named Lieutenant Barclay creates unauthorized non-veridical simulations of his colleagues on the “Holodeck,” a device that uses artificial intelligence to generate interactive, immersive, and hyper-realistic three-dimensional simulations. Because Holodeck simulations look completely life-like, they have even greater phenomenal immediacy than a deepfake. It would accordingly be a more serious violation of the NVRP to intentionally distribute a non-veridical Holodeck program of someone than to intentionally distribute a deepfake video of someone, all else being equal.
To be fair, any reconstruction of a person dead for 2000 years must involve a healthy dose of speculation. But at least one faithful reconstruction of Caesar Augustus has been attempted with the help of machine learning (Vincent, 2020).
References
Allyn, B. (2022). Deepfake Video of Zelenskyy Could Be ‘Tip of the Iceberg’ in Info War, Experts Warn. NPR, 16 March 2022. https://www.npr.org/2022/03/16/1087062648/deepfake-video-zelenskyy-experts-war-manipulation-ukraine-russia
Brewer, T. (2000). The Bounds of Choice: Unchosen Virtues, Unchosen Commitments. Garland Publishing.
Citron, D., & Chesney, R. (2019). Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security. California Law Review, 107.
Collins, P. H. (2004). Black Sexual Politics: African Americans, Gender, and the New Racism. Routledge.
Cox, D., & Levine, M. (2010). Believing Badly. Philosophical Papers, 33(4), 309–328.
Cox, J. (2019). Most Deepfakes Are Used for Creating Non-Consensual Porn, Not Fake News. Vice. https://www.vice.com/en/article/7x57v9/most-deepfakes-are-porn-harassment-not-fake-news
Dawson, L. L. (1999). When Prophecy Fails and Faith Persists: A Theoretical Overview. Nova Religio, 3(1), 60–82. https://doi.org/10.1525/nr.1999.3.1.60
Director, S. (2022). The Sheriff in Our Minds: The Morality of the Mental. Journal of Ethics and Social Philosophy.
Dovas. (2015). World Leaders Are People Too: Artist Shows Them Doing Their ‘Duty’. Bored Panda. https://www.boredpanda.com/world-leaders-pooping-the-daily-duty-cristina-guggeri/
Eaton, A. W. (2007). A Sensible Antipornography Feminism. Ethics, 117(4), 674–715.
Fallis, D. (2021). The Epistemic Threat of Deepfakes. Philosophy and Technology, 34, 623–643.
Franks, M. A., & Waldman, A. E. (2018). Sex, Lies, and Videotape: Deep Fakes and Free Speech Delusions. Maryland Law Review, 78.
Franks, M. A. (2016). ‘Revenge Porn’ Reform: A View from the Front Lines. Florida Law Review.
Gavin, G. (2023). ‘Fake Putin’ Announces Russia under Attack as Ukraine Goes on Offensive. Politico. https://www.politico.eu/article/fake-vladimir-putin-announces-russia-under-attack-ukraine-war/
Gendler, T. S. (2008). Alief and Belief. Journal of Philosophy, 105(10), 634–663. https://doi.org/10.5840/jphil20081051025
Harris, K. R. (2021). Video on demand: what deepfakes do and how they harm. Synthese, 199, 13373–13391.
Hazlett, A. (2009). How to Defend Response Moralism. The British Journal of Aesthetics, 49(3), 241–255.
Kipnis, L. (1996). Bound and Gagged: Pornography and the Politics of Fantasy in America. Duke University Press.
Kroon, F., & Voltolini, A. (2019). Fiction. Stanford Encyclopedia of Philosophy.
Marmor, A. (2015). What Is the Right to Privacy? Philosophy and Public Affairs, 43(1), 3–26.
McKinney, K. (2022). Podcast Famous with Brian Park. Normal Gossip, Season 2, Episode 6, June 2022.
Moran, D., & Cohen, J. (2012). The Husserl Dictionary. Bloomsbury Publishing.
Öhman, C. (2020). Introducing the pervert’s dilemma: a contribution to the critique of Deepfake Pornography. Ethics and Information Technology, 22, 133–140.
Öhman, C. (2022). The identification game: deepfakes and the epistemic limits of identity. Synthese.
Rachels, J. (1975). Why Privacy is Important. Philosophy and Public Affairs, 4(4), 323–333.
Rini, R. (2020). Deepfakes and the Epistemic Backstop. Philosophers’ Imprint, 20(24), 1–16.
de Ruiter, A. (2021). The Distinct Wrong of Deepfakes. Philosophy and Technology, 34, 1311–1332.
Ryberg, J. (2007). Privacy Rights, Crime Prevention, CCTV, and the Life of Mrs Aremac. Res Publica, 13, 127–143.
Savion, L. (2009). Clinging to Discredited Beliefs: The Larger Cognitive Story. Journal of the Scholarship of Teaching and Learning, 9(1), 81–92. https://eric.ed.gov/?id=EJ854880
Sher, G. (2019). A Wild West of the Mind. Australasian Journal of Philosophy, 97(3), 483–496.
Sima, R. (2022). Why Do Our Brains Believe Lies? Washington Post. https://www.washingtonpost.com/wellness/2022/11/03/misinformation-brain-beliefs/
Slusher, M. P., & Anderson, C. A. (1989). Belief Perseverance and Self-Defeating Behavior. In R. C. Curtis (Ed.), Self-Defeating Behaviors: Experimental Research, Clinical Impressions, and Practical Implications (pp. 11–40). Springer Link. https://doi.org/10.1007/978-1-4613-0783-9_2
Smith, A. (2011). Guilty Thoughts. In C. Bagnoli (Ed.), Morality and Emotions (pp. 235–256). Oxford University Press.
Somers, M. (2020). Deepfakes, Explained. MIT Sloan School of Management. https://mitsloan.mit.edu/ideas-made-to-matter/deepfakes-explained
Stokes, P. (2021). Digital Souls: A Philosophy of Online Death. Bloomsbury Academic.
Thomson, J. J. (1975). The Right to Privacy. Philosophy and Public Affairs, 4(4), 295–314.
Vincent, J. (2020). These Photorealistic Portraits of Ancient Roman Emperors Were Created Using Old Statues and AI. The Verge. https://www.theverge.com/2020/8/21/21395115/roman-emperors-photorealistic-portraits-ai-artbreeder-dan-voshart
Wallace, D. F. (1996). Infinite Jest. Little Brown and Company.
Walter, N., & Tukachinsky, R. (2020). A Meta-Analytic Examination of the Continued Influence of Misinformation in the Face of Correction: How Powerful Is It, Why Does It Happen, and How to Stop It? Communication Research, 47(2), 155–177. https://doi.org/10.1177/0093650219854600
West, C. (2012). Pornography and Censorship. Stanford Encyclopedia of Philosophy https://plato.stanford.edu/entries/pornography-censorship/
Westerlund, M. (2019). The Emergence of Deepfake Technology: A Review. Technology Innovation Management Review. https://timreview.ca/article/1282
Young, G. (2021). Fictional Immorality and Immoral Fiction. Lexington Books.
Zeman, A., et al. (2020). Phantasia–The psychological significance of lifelong visual imagery vividness extremes. Cortex, 130, 426–440.
Zheng, R., & Stear, N.-H. (2023). Imagining in Oppressive Contexts, or What’s Wrong with Blackface? Ethics, 133(3).
Zwolinski, M., Ferguson, B., & Wertheimer, A. (2022). Exploitation. In E. N. Zalta & U. Nodelman (Eds.), The Stanford Encyclopedia of Philosophy (Winter 2022 Edition) https://plato.stanford.edu/archives/win2022/entries/exploitation/
Acknowledgements
We owe thanks to several people for comments or discussion about the ideas in this paper, including Romy Eskens, Catelynn Kenner, Stephen Kershnar, Jennifer Kling, Amy Kurzweil, Carl Öhman, Jacob Sparks, Schuyler Sturm, Robert Hamilton Wallace, Sam Zahn, everyone in our audience at the 2022 Rocky Mountain Ethics (RoME) Congress, and three anonymous reviewers for this journal.
Author information
Contributions
DS was the main author of this work, completing most of the literature review, theoretical argument, and writing. DS and RJ collaboratively developed the series of thought experiments explored in this paper. RJ’s main contribution was the elaboration of phenomenal immediacy.
Ethics declarations
Ethics Approval and Consent to Participate
Not applicable.
Consent for Publication
Not applicable.
Competing Interests
The authors declare no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Story, D., Jenkins, R. Deepfake Pornography and the Ethics of Non-Veridical Representations. Philos. Technol. 36, 56 (2023). https://doi.org/10.1007/s13347-023-00657-0