Abstract
This paper defends two main theses related to emerging deepfake technology. First, fears that deepfakes will bring about epistemic catastrophe are overblown. Such concerns underappreciate that the evidential power of video derives not solely from its content, but also from its source. An audience may find even the most realistic video evidence unconvincing when it is delivered by a dubious source. At the same time, an audience may find even weak video evidence compelling so long as it is delivered by a trusted source. The growing prominence of deepfake content is unlikely to change this fundamental dynamic. Thus, through appropriate patterns of trust, whatever epistemic threat deepfakes pose can be substantially mitigated. The second main thesis is that focusing on deepfakes that are intended to deceive, as epistemological discussions of the technology tend to do, threatens to overlook a further effect of this technology. Even where deepfake content is not regarded by its audience as veridical, it may cause its viewers to develop psychological associations based on that content. These associations, even without rising to the level of belief, may be harmful to the individuals depicted and more generally. Moreover, these associations may develop in cases in which the video content is realistic, but the audience is dubious of the content in virtue of skepticism toward its source. Thus, even if—as I suggest—epistemological concerns about deepfakes are overblown, deepfakes may nonetheless be psychologically impactful and may do great harm.
Notes
It would be an exaggeration to say that deepfakes introduce this possibility, as less sophisticated methods for generating fabricated recorded testimony have long existed. However, deepfake technology will dramatically increase the ease of generating realistic but fabricated recorded testimony.
I adopt this term from Benkler et al. (2018, p. 36).
In addition to Rini’s concerns, one might add that deepfakes potentially make the acquisition of knowledge more difficult by effectively submerging even veridical videos in a sea of fabricated ones. Fallis (2020) suggests that this might interfere with epistemic justification by making it epistemically irresponsible to believe on the basis of video evidence. More conservatively, deepfakes might interfere with knowledge even if they do not interfere with epistemic justification. Just as being in fake barn country might prevent one from knowing that one is in the presence of a barn through visual perception of an apparent barn (Goldman, 1976), a proliferation of deepfakes might prevent the attainment of knowledge through even veridical videos. In short, a proliferation of deepfakes might make acquisition of true belief through video footage lucky in a way that is incompatible with knowledge. While this suggestion may be of interest to epistemologists, I set it aside here to focus on more practical concerns likely to be of broader interest.
More conservatively, one might say that the video provides some evidence that the events unfolding therein occurred, but that this evidence is immediately swamped by one’s background knowledge.
Following Fallis (2020), one might analyze this reduction in evidential power in terms of a decrease in the amount of information carried by video footage.
Hopkins makes a similar point, tying it to the distinction between traditional and digital photography (2012, p. 723).
Interestingly, this suggests that, while deepfake technology confers upon individuals the ability to generate video footage on demand, the evidential value of such footage is undermined by the widespread availability of that very technology.
In a paper on the epistemology of photography, Cavedon-Taylor (2013) argues that photographs convey perceptual knowledge, while handmade pictures convey testimonial knowledge. I will not pursue the point at length here, as a discussion of testimonial knowledge and its relation to trust would take us far afield and into controversial territory, but one might borrow from Cavedon-Taylor’s analysis to illuminate the epistemic effects of deepfakes. Arguably, video footage, like photographs, has historically been a source of perceptual knowledge. As deepfakes proliferate, the ability of video footage to provide perceptual knowledge will arguably diminish. However, it may well be that the ability of some video footage to provide testimonial knowledge—or some further form of knowledge that depends on trust in the source—will remain intact.
Perhaps the most notorious example in recent history is Donald Trump’s repeated dismissal of large swaths of reporting as “fake news”.
Thanks to an anonymous referee for suggesting that I highlight this second role for technological solutions.
Thanks to an anonymous referee for suggesting that I consider this concern.
Fallis (2020) discusses a similar concern.
This is not to say that it would be impossible for malicious actors to share deepfakes via trusted channels. For example, consider a case helpfully suggested to me by an anonymous referee. In July of 2020, hackers promoted a bitcoin scam from the Twitter accounts of Barack Obama, then-presidential candidate Joe Biden, and over 100 other high-profile accounts (Lerman & Denham, 2020). The threat of such hacks helps to underscore the importance of securing influential channels of information against co-option, a measure that was reportedly taken with Donald Trump’s Twitter account prior to its removal from the platform (Peters, 2020). Perhaps no security measures can eliminate the possibility that misleading deepfakes might be shared via trusted channels. However, the relative difficulty of sharing material via such channels, as compared to the difficulty of generating deepfakes, together with the potential to increase this difficulty, underscores the important role of attending to channels of information in mitigating the epistemic threat of deepfakes.
As an anonymous referee has helpfully pointed out, it may prove useful to understand such associations using the framework of alief, as introduced by Gendler (2008a,b). In introducing the concept, Gendler describes alief as “associative, automatic, and arational,” among other characteristics (2008a, p. 641). These characteristics of alief make the state a good candidate for understanding the psychological effects that even unrealistic deepfakes might have on an audience. Moreover, as Gendler (2008b) has argued, alief is a promising tool for understanding implicit bias. Thus, understanding the mental associations at issue here as aliefs has the advantage of assimilating the present issue to existing research, which may ultimately prove especially useful in understanding some of the potential costs of deepfakes, to be discussed below. However, if only to minimize potentially distracting theoretical commitments, I decline to explicitly identify the relevant mental associations as aliefs here.
While I will not argue for the point here, some recent work in epistemology has suggested that beliefs can wrong (Basu, 2019; Basu & Schroeder, 2019). One might likewise argue that non-doxastic mental associations can wrong their objects, even independent of the further consequences of these associations. It is worth noting that, even if one is dubious of the claim that beliefs and non-doxastic mental associations can wrong, one might nonetheless think it is wrong to cause such beliefs and associations to develop.
This is not to say that encouraging this association is the only source of moral wrongness in this case. Plausibly the action is also wrong, perhaps among other reasons, because it involves the nonconsensual use of the subordinate’s image.
In a recent paper, Öhman (2020) introduces the puzzle of why the creation of deepfake pornography with no intention of sharing it is wrong if sexual fantasizing is not. The present discussion of the psychological associations encouraged by deepfake pornography arguably offers a partial answer to this question. While individuals who create deepfake pornography are highly unlikely to form false beliefs based on its content, they may nonetheless form or strengthen psychological associations based on that content. Supposing it is plausible that video footage, even of one’s own making, is likely to be more psychologically impactful than mere fantasizing, we have an available explanation of the moral distinction between the creation of private deepfake pornography and sexual fantasizing.
For example, in the run-up to the 2020 Presidential Election, Donald Trump shared a transparently edited video of Joe Biden. The video was not strictly speaking a deepfake, although it was misidentified by some commentators as such (Frum, 2020), but nonetheless helps to illustrate how even non-deceptive deepfake videos might be damaging to a politician’s image.
References
Ajder, H., Patrini, G., Cavalli, F., & Cullen, L. (2019). The state of deepfakes: Landscape, threats, and impact. Deeptrace.
Ardèvol-Abreu, A., & de Zúñiga, H. G. (2017). Effects of editorial media bias perception and media trust on the use of traditional, citizen, and social media news. Journalism & Mass Communication Quarterly, 94(3), 703–724.
Basu, R. (2019). The wrongs of racist beliefs. Philosophical Studies, 176(9), 2497–2515.
Basu, R., & Schroeder, M. (2019). Doxastic wronging. In B. Kim & M. McGrath (Eds.), Pragmatic encroachment in epistemology (pp. 181–205). Routledge.
Benkler, Y., Faris, R., & Roberts, H. (2018). Network propaganda: Manipulation, disinformation, and radicalization in American politics. Oxford University Press.
Best, J., & Horiuchi, G. T. (1985). The razor blade in the apple: The social construction of urban legends. Social Problems, 32(5), 488–499.
Bitouk, D., Kumar, N., Dhillon, S., Belhumeur, P., & Nayar, S. K. (2008). Face swapping: Automatically replacing faces in photographs. ACM Transactions on Graphics, 27(3), 1–8.
Brenan, M. (2020). Americans remain distrustful of mass media. Gallup. Retrieved September 30, 2020, from https://news.gallup.com/poll/321116/americans-remain-distrustful-mass-media.aspx
Cavedon-Taylor, D. (2013). Photographically based knowledge. Episteme, 10(3), 283–297.
Cole, S. (2019a). This program makes it even easier to create deepfakes. Vice News. Retrieved September 19, 2019, from https://www.vice.com/en/article/kz4amx/fsgan-program-makes-it-even-easier-to-make-deepfakes
Cole, S. (2020). This open-source program deepfakes you during Zoom meetings, in real time. Vice News. Retrieved April 16, 2020, from https://www.vice.com/en/article/g5xagy/this-open-source-program-deepfakes-you-during-zoom-meetings-in-real-time
Cole, S., & Maiberg, E. (2019). Deepfake porn is evolving to give people total control over women’s bodies. Vice News. Retrieved December 6, 2019, from https://www.vice.com/en/article/9keen8/deepfake-porn-is-evolving-to-give-people-total-control-over-womens-bodies
Cox, J. (2019). Most deepfakes are used for creating non-consensual porn, not fake news. Vice News. Retrieved October 7, 2019, from https://www.vice.com/en/article/7x57v9/most-deepfakes-are-porn-harassment-not-fake-news
Fallis, D. (2020). The epistemic threat of deepfakes. Philosophy & Technology. https://doi.org/10.1007/s13347-020-00419-2
Floridi, L. (2018). Artificial intelligence, deepfakes, and the future of ectypes. Philosophy & Technology, 31(3), 317–321.
Foer, F. (2018). The end of reality. The Atlantic. Retrieved 5, 2018, from https://www.theatlantic.com/magazine/archive/2018/05/realitys-end/556877/
Foroni, F., & Mayr, U. (2005). The power of a story: New, automatic associations from a single reading of a short scenario. Psychonomic Bulletin & Review, 12(1), 139–144.
Frum, D. (2020). The very real threat of Trump’s deepfake. The Atlantic. Retrieved April 27, 2020, from https://www.theatlantic.com/ideas/archive/2020/04/trumps-first-deepfake/610750/
Gendler, T. S. (2008a). Alief and belief. Journal of Philosophy, 105(10), 634–663.
Gendler, T. S. (2008b). Alief in action (and reaction). Mind & Language, 23(5), 552–585.
Goldman, A. (1976). Discrimination and perceptual knowledge. Journal of Philosophy, 73(20), 771–791.
Green, L. (2000). Pornographies. The Journal of Political Philosophy, 8(1), 27–52.
Harwell, D. (2018). Scarlett Johansson on fake AI-generated sex videos: ‘Nothing can stop someone from cutting and pasting my image’. Washington Post. Retrieved December 31, 2018, from https://www.washingtonpost.com/technology/2018/12/31/scarlett-johansson-fake-ai-generated-sex-videos-nothing-can-stop-someone-cutting-pasting-my-image/
Hill, J. M. (1987). Pornography and degradation. Hypatia, 2(2), 39–54.
Hopkins, R. (2012). Factive pictorial experience: What’s special about photographs? Noûs, 46(4), 709–731.
Huebner, B. (2016). Implicit bias, reinforcement learning, and scaffolded moral cognition. In M. Brownstein & J. Saul (Eds.), Implicit bias and philosophy, Volume 1: Metaphysics and epistemology (pp. 47–79). Oxford University Press.
Kerner, C., & Risse, M. (2021). Beyond porn and discreditation: Epistemic promises and perils of deepfake technology in digital lifeworlds. Moral Philosophy and Politics, 8(1), 81–108.
Lerman, R., & Denham, H. (2020). 3 charged in massive Twitter hack, including alleged teenage ‘mastermind’. Washington Post. Retrieved July 31, 2020, from https://www.washingtonpost.com/technology/2020/07/30/twitter-hack-phone-attack/
MacKinnon, C. (1987). Feminism unmodified. Harvard University Press.
Mirsky, Y., & Lee, W. (2020). The creation and detection of deepfakes: A survey. ACM Computing Surveys, 54(1), 1–41.
Mourão, R. R., Thorson, E., Chen, W., & Tham, S. M. (2018). Media repertoires and news trust during the early Trump administration. Journalism Studies, 19(13), 1945–1956.
Öhman, C. (2020). Introducing the pervert’s dilemma: A contribution to the critique of deepfake pornography. Ethics and Information Technology, 22(2), 133–140.
Paris, B., & Donovan, J. (2019). Deepfakes and cheap fakes: The manipulation of audio and visual evidence. Data & Society, 1–47.
Peters, J. (2020). Trump’s Twitter account has extra protections, which could be why it didn’t get hacked. The Verge. Retrieved July 16, 2020, from https://www.theverge.com/2020/7/16/21327782/trump-twitter-hacked-massive-attack-bitcoin-scam
Rini, R. (2020). Deepfakes and the epistemic backstop. Philosophers’ Imprint, 20(24), 1–16.
Sedlmeir, J., Buhl, H. U., Fridgen, G., & Keller, R. (2020). The energy consumption of blockchain technology: Beyond myth. Business & Information Systems Engineering, 62(6), 599–608.
Wang, Q., Li, R., Wang, Q., & Chen, S. (2021). Non-fungible token (NFT): Overview, evaluation, opportunities and challenges. arXiv:2105.07447v1
Warzel, C. (2018). He predicted the 2016 fake news crisis. Now he’s worried about an information apocalypse. Buzzfeed News. Retrieved February 11, 2018, from https://www.buzzfeednews.com/article/charliewarzel/the-terrifying-future-of-fake-news
Wittenbrink, B., Judd, C. M., & Park, B. (2001). Spontaneous prejudice in context: Variability in automatically activated attitudes. Journal of Personality and Social Psychology, 81(5), 815–827.
Young, G. (2021). Fictional immorality and immoral fiction. Lexington Books.
Acknowledgements
I would like to thank two anonymous referees for their thoughtful feedback on earlier drafts, which has led to substantial improvements to this paper.
Additional information
This article belongs to the topical collection "Designed to Deceive? The Philosophy of Deepfakes", edited by Dan Cavedon-Taylor.
Cite this article
Harris, K.R. Video on demand: what deepfakes do and how they harm. Synthese 199, 13373–13391 (2021). https://doi.org/10.1007/s11229-021-03379-y