
Video on demand: what deepfakes do and how they harm


Abstract

This paper defends two main theses related to emerging deepfake technology. First, fears that deepfakes will bring about epistemic catastrophe are overblown. Such concerns underappreciate that the evidential power of video derives not solely from its content, but also from its source. An audience may find even the most realistic video evidence unconvincing when it is delivered by a dubious source. At the same time, an audience may find even weak video evidence compelling so long as it is delivered by a trusted source. The growing prominence of deepfake content is unlikely to change this fundamental dynamic. Thus, through appropriate patterns of trust, whatever epistemic threat deepfakes pose can be substantially mitigated. The second main thesis is that focusing on deepfakes that are intended to deceive, as epistemological discussions of the technology tend to do, threatens to overlook a further effect of this technology. Even where deepfake content is not regarded by its audience as veridical, it may cause its viewers to develop psychological associations based on that content. These associations, even without rising to the level of belief, may be harmful to the individuals depicted and more generally. Moreover, these associations may develop in cases in which the video content is realistic, but the audience is dubious of the content in virtue of skepticism toward its source. Thus, even if—as I suggest—epistemological concerns about deepfakes are overblown, deepfakes may nonetheless be psychologically impactful and may do great harm.


Notes

  1. It would be an exaggeration to say that deepfakes introduce this possibility, as less sophisticated methods for generating fabricated recorded testimony have long existed. However, deepfake technology will dramatically increase the ease of generating realistic but fabricated recorded testimony.

  2. I adopt this term from Benkler et al. (2018, p. 36).

  3. In addition to Rini’s concerns, one might add that deepfakes potentially make the acquisition of knowledge more difficult by effectively submerging even veridical videos in a sea of fabricated ones. Fallis (2020) suggests that this might interfere with epistemic justification by making it epistemically irresponsible to believe on the basis of video evidence. More conservatively, deepfakes might interfere with knowledge even if they do not interfere with epistemic justification. Just as being in fake barn country might prevent one from knowing that one is in the presence of a barn through visual perception of an apparent barn (Goldman, 1976), a proliferation of deepfakes might prevent the attainment of knowledge through even veridical videos. In short, a proliferation of deepfakes might make acquisition of true belief through video footage lucky in a way that is incompatible with knowledge. While this suggestion may be of interest to epistemologists, I set it aside here to focus on more practical concerns likely to be of broader interest.

  4. More conservatively, one might say that the video provides some evidence that the events unfolding therein occurred, but that this evidence is immediately swamped by one’s background knowledge.

  5. Following Fallis (2020), one might analyze this reduction in evidential power in terms of a decrease in the amount of information carried by video footage.

  6. Hopkins makes a similar point, tying it to the distinction between traditional and digital photography (2012, p. 723).

  7. Interestingly, this suggests that, while deepfake technology confers upon individuals the ability to generate video evidence on demand, this very ability is limited by the widespread availability of the technology.

  8. In a paper on the epistemology of photography, Cavedon-Taylor (2013) argues that photographs convey perceptual knowledge, while handmade pictures convey testimonial knowledge. I will not pursue the point at length here, as a discussion of testimonial knowledge and its relation to trust would take us far afield and into controversial territory, but one might borrow from Cavedon-Taylor’s analysis to illuminate the epistemic effects of deepfakes. Arguably, video footage, like photographs, has historically been a source of perceptual knowledge. As deepfakes proliferate, the ability of video footage to provide perceptual knowledge will arguably diminish. However, it may well be that the ability of some video footage to provide testimonial knowledge—or some further form of knowledge that depends on trust in the source—will remain intact.

  9. For recent empirical data and analysis concerning trust in the mainstream media in the US context, see Ardèvol-Abreu and de Zúñiga (2017), Mourão et al. (2018), and Brenan (2020).

  10. Perhaps the most notorious example in recent history is Donald Trump’s repeated dismissal of large swaths of reporting as “fake news”.

  11. Thanks to an anonymous referee for suggesting that I highlight this second role for technological solutions.

  12. Thanks to an anonymous referee for suggesting that I consider this concern.

  13. Fallis (2020) discusses a similar concern.

  14. This is not to say that it would be impossible for malicious actors to share deepfakes via trusted channels. For example, consider a case helpfully suggested to me by an anonymous referee. In July of 2020, hackers promoted a bitcoin scam from the Twitter accounts of Barack Obama, then-presidential candidate Joe Biden, and over 100 other high-profile accounts (Lerman & Denham, 2020). The threat of such hacks helps to underscore the importance of securing influential channels of information against co-option, a measure that was reportedly taken with Donald Trump’s Twitter account prior to its removal from the platform (Peters, 2020). Perhaps no security measures can eliminate the possibility that misleading deepfakes might be shared via trusted channels. However, the relative difficulty of sharing material via such channels, as compared to the difficulty of generating deepfakes, together with the potential to increase this difficulty, underscores the important role of attending to channels of information in mitigating the epistemic threat of deepfakes.

  15. As an anonymous referee has helpfully pointed out, it may prove useful to understand such associations using the framework of alief, as introduced by Gendler (2008a,b). In introducing the concept, Gendler describes alief as “associative, automatic, and arational,” among other characteristics (2008a, p. 641). These characteristics of alief make the state a good candidate for understanding the psychological effects that even unrealistic deepfakes might have on an audience. Moreover, as Gendler (2008b) has argued, alief is a promising tool for understanding implicit bias. Thus, understanding the mental associations at issue here as aliefs has the advantage of assimilating the present issue to existing research, which may ultimately prove especially useful in understanding some of the potential costs of deepfakes, to be discussed below. However, if only to minimize potentially distracting theoretical commitments, I decline to explicitly identify the relevant mental associations as aliefs here.

  16. While I will not argue for the point here, some recent work in epistemology has suggested that beliefs can wrong (Basu, 2019; Basu & Schroeder, 2019). One might likewise argue that non-doxastic mental associations can wrong their objects, even independent of the further consequences of these associations. It is worth noting that, even if one is dubious of the claim that beliefs and non-doxastic mental associations can wrong, one might nonetheless think it is wrong to cause such beliefs and associations to develop.

  17. This is not to say that encouraging this association is the only source of moral wrongness in this case. Plausibly the action is also wrong, perhaps among other reasons, because it involves the nonconsensual use of the subordinate’s image.

  18. In a recent paper, Öhman (2020) introduces the puzzle of why the creation of deepfake pornography with no intention of sharing it is wrong if sexual fantasizing is not. The present discussion of the psychological associations encouraged by deepfake pornography arguably offers a partial answer to this question. While individuals who create deepfake pornography are highly unlikely to form false beliefs based on its content, they may nonetheless form or strengthen psychological associations based on that content. Supposing it is plausible that video footage, even of one’s own making, is likely to be more psychologically impactful than mere fantasizing, we have an available explanation of the moral distinction between the creation of private deepfake pornography and sexual fantasizing.

  19. For example, in the run-up to the 2020 Presidential Election, Donald Trump shared a transparently edited video of Joe Biden. The video was not strictly speaking a deepfake, although it was misidentified by some commentators as such (Frum, 2020), but nonetheless helps to illustrate how even non-deceptive deepfake videos might be damaging to a politician’s image.


Acknowledgements

I would like to thank two anonymous referees for their thoughtful feedback on earlier drafts, which has led to substantial improvements to this paper.

Author information


Corresponding author

Correspondence to Keith Raymond Harris.

Additional information


This article belongs to the topical collection "Designed to Deceive? The Philosophy of Deepfakes", edited by Dan Cavedon-Taylor.


About this article


Cite this article

Harris, K.R. Video on demand: what deepfakes do and how they harm. Synthese 199, 13373–13391 (2021). https://doi.org/10.1007/s11229-021-03379-y

