
Deepfake Pornography and the Ethics of Non-Veridical Representations

  • Research Article
  • Published in Philosophy & Technology


We investigate the question of whether (and if so why) creating or distributing deepfake pornography of someone without their consent is inherently objectionable. We argue that nonconsensually distributing deepfake pornography of a living person on the internet is inherently pro tanto wrong in virtue of the fact that nonconsensually distributing intentionally non-veridical representations about someone violates their right that their social identity not be tampered with, a right which is grounded in their interest in being able to exercise autonomy over their social relations with others. We go on to suggest that nonconsensual deepfakes are especially worrisome in connection with this right because they have a high degree of phenomenal immediacy, a property which corresponds inversely to the ease with which a representation can be doubted. We then suggest that nonconsensually creating and privately consuming deepfake pornography is worrisome but may not be inherently pro tanto wrong. Finally, we discuss the special issue of whether nonconsensually distributing deepfake pornography of a deceased person is inherently objectionable. We argue that the answer depends on how long it has been since the person died.


Data Availability

Not applicable.


  1. The term “deepfake” was coined in late 2017 by a Reddit user of the same name, the “deep-” referring to the role of deep learning, a form of machine learning, in their creation (Somers, 2020). The earliest and most spectacular deepfakes took the form of videos, but strictly speaking deepfakes need not be videos; photographs and audio clips can also be fabricated using the same ML techniques (Westerlund, 2019).

  2. A set of convincing Tom Cruise deepfakes, as well as the Willem Dafoe deepfake, can be found online.

  3. This example is modified from one we first encountered in an unpublished manuscript by David Boonin.

  4. We restrict our discussion in this paper to moral rights and moral wrongdoing, while setting aside the issue of legal rights and legal wrongdoing. We make no claims about what is required to violate a legal right to privacy.

  5. Again, we are not concerned with legal concepts. We are suggesting that, regardless of how the legal system classifies nonconsensual deepfake pornography, it is not always morally objectionable in virtue of being defamatory.

  6. Admittedly, this claim is controversial among philosophers of fiction (see Kroon & Voltolini, 2019 for extensive discussion). It can also be obscured by the fact that claims about fiction, including claims about fictional characters and events, can clearly be false. Moreover, we can evaluate claims made within fiction as true or false relative to the fictional world. These possibilities do not complicate our argument, however.

  7. See Harris (2021) and Gendler (2008) on the concept of “alief.” See also Husserl: In Husserl’s language, our experiences are liable to become “sedimented” in our minds, and even after being forgotten, they can continue to impact our outlook, actions, and behavior towards others. Experiences that become “sedimented” disappear into the “background” of our beliefs, but continue to “provide the context and material for further judgements” (Moran & Cohen, 2012 pp. 288–290). Thanks to David Zoller for pointing us to the origins of this claim in phenomenology.

  8. For an example of nonconsensual erotic fanfiction about real people, listen to the podcast Normal Gossip, Season 2, Episode 6 (McKinney, 2022).

  9. We are smoothing over some complexities in this section for ease of exposition. One complexity, relevant here, is that people may have multiple more or less discrete social identities across the multiple spheres of social activity in which they live. Another complexity is that the concept of veridicality is both ambiguous and complex. A representation can be veridical in one sense or respect but not another, and it may be that whether a representation should count as veridical sometimes depends partly on the context in which it is presented, including, perhaps, facts about which of a person’s multiple discrete social identities are active in the relevant context. These complexities are interesting, but there is no space to explore them here. Thanks to an anonymous reviewer for pressing us to consider these points.

  10. We are grateful to an anonymous reviewer and several commentators for inquiring about this possibility.

  11. Relatedly, consider the fact that Franny’s actions may have non-accidental negative effects on other people, too. Suppose Franny creates deepfakes of Daniel superimposed over footage originally depicting someone else—call him Manny—engaging in sexual acts. Does Manny have a claim against Franny, since Manny is also “depicted” in this video? (We thank an anonymous reviewer for raising this question.)

    Several points are worth mentioning here. First, if the video depicts Manny, it must be to a much lesser degree than that to which Daniel is depicted—a person’s social identity is typically much more closely tied to their face and voice than the rest of their body, as de Ruiter (2021) and others have observed. Second, the resulting deepfake is ex hypothesi not a non-veridical representation of Manny’s actions because the original footage just was of Manny performing those same actions. To the extent someone could recognize Manny’s body and attribute those actions to him, they would be attributing successfully. Finally, while it might be problematic to “co-opt” Manny’s body to portray Daniel non-veridically, this co-opting is not a form of defamation towards Manny but rather a species of exploitation in the Marxist sense, i.e., roughly, forcefully profiting from another person’s labor without proper compensation (Schmitt 1997; Zwolinski et al., 2022 §1.2). What we say here could also be said of any of the other people involved in the creation of Manny’s original footage, i.e., the director of photography, writer, production designer, script supervisor, first and second assistant director, etc., in case a pornographic film employs people in any of those roles.

  12. A related objection says that non-veridical deepfakes that are silly and fun but not satirical, like many of the deepfakes on the subreddit SFWdeepfakes, are a counterexample to the NVRP since many of these deepfakes are nonconsensual but clearly innocuous. For example, one popular deepfake on this website depicts Jim Carrey’s face superimposed over Tobey Maguire’s face in a rather uninteresting scene from the 2002 movie Spider-Man. However, deepfakes like this are innocuous because they are not socially significant. Thus, the NVRP does not apply to them, and they do not constitute a counterexample to it.

  13. The NVRP is of course also compatible with saying that consent from the target of a deepfake is not sufficient to render its dissemination morally acceptable, perhaps because the consent of other people is needed (e.g., the people whose bodies are depicted in the deepfake, as in the case of Manny above) or because all deepfake pornography is bad. Thanks to an anonymous reviewer and Jennifer Kling for pressing us to clarify this point.

  14. One might think that the argument behind the NVRP also implies that both nonconsensual private deepfake pornography and nonconsensual mental fantasy are inherently wrong since both of these at least risk producing a nonconsensual effect on the target’s social identity and relationships. We agree that both private deepfake pornography and mental fantasy can affect a target’s social identity and relationships by changing how the individual who privately indulges in these things interacts with the target and that this fact often grounds a reason against nonconsensually indulging in these things. However, we have only argued that people have a right that others not tamper with their social identity. Tampering implies intent. Nonconsensually disseminating intentionally non-veridical representations inherently involves intentionally altering a person’s social identity, whereas there is nothing about indulging in private deepfakes or mental fantasy that necessitates an intent to alter a person’s social identity. Thus, the reasoning behind the NVRP does not imply that private deepfake pornography and mental fantasy are inherently wrong. Thanks to an anonymous reviewer for pressing us to consider this point.

  15. Notice that the claim that moral norms work differently in this special representational context does not commit us to Sher’s view that moral norms have no application in the mind.

  16. Once formed, a belief is very hard to dislodge. Research on this phenomenon dates back at least to the 1950s under the banner of “belief perseverance,” namely, a person’s tendency to cling to a belief even after it has been empirically discredited (Dawson, 1999). For more on this phenomenon, see Slusher and Anderson (1989), Savion (2009), Walter and Tukachinsky (2020), and Sima (2022). While we could not find empirical evidence testing this hypothesis with regard to memories formed specifically from visual perception, we presume the same would hold.

  17. Theoretically, it would be possible to create and consume deepfakes in a perfectly secure environment, e.g., an air-gapped computer lab in a bunker somewhere on an island. But that seems exceedingly rare. Moreover, the applications that enable a person to create deepfakes likely afford the user a “share” button to facilitate their spread. We thank an anonymous reviewer for both of these observations.

  18. Note that there are conceivable types of non-veridical representations that would have greater phenomenal immediacy than even the most convincing deepfake videos and thus, in this respect, would be more problematic than deepfakes from the point of view of the NVRP. For example, in episode 21 of the third season of Star Trek: The Next Generation, a character named Lieutenant Barclay creates unauthorized non-veridical simulations of his colleagues on the “Holodeck,” a device that uses artificial intelligence to generate interactive, immersive, and hyper-realistic three-dimensional simulations. Because Holodeck simulations look completely life-like, they have even greater phenomenal immediacy than a deepfake. For this reason, it would be a more serious violation of the NVRP to intentionally distribute a non-veridical Holodeck program of someone than to intentionally distribute a deepfake video of someone, all else being equal.

  19. To be fair, any reconstruction of a person dead for 2000 years must involve a healthy dose of speculation. But at least one faithful reconstruction of Caesar Augustus has been attempted with the help of machine learning (Vincent, 2020).


Acknowledgements


We owe thanks to several people for comments or discussion about the ideas in this paper, including Romy Eskens, Catelynn Kenner, Stephen Kershnar, Jennifer Kling, Amy Kurzweil, Carl Öhman, Jacob Sparks, Schuyler Sturm, Robert Hamilton Wallace, Sam Zahn, everyone in our audience at the 2022 Rocky Mountain Ethics (RoME) Congress, and three anonymous reviewers for this journal.

Author information

Authors and Affiliations



DS was the main author of this work, completing most of the literature review, theoretical argument, and writing. DS and RJ collaboratively developed the series of thought experiments explored in this paper. RJ’s main contribution was the elaboration of phenomenal immediacy.

Corresponding author

Correspondence to Ryan Jenkins.

Ethics declarations

Ethics Approval and Consent to Participate

Not applicable.

Consent for Publication

Not applicable.

Competing Interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Story, D., Jenkins, R. Deepfake Pornography and the Ethics of Non-Veridical Representations. Philos. Technol. 36, 56 (2023).
