
Deepfakes and depiction: from evidence to communication


Abstract

In this paper, I present an analysis of the depictive properties of deepfakes. These are videos and pictures produced by deep learning algorithms that automatically modify existing videos and photographs or generate new ones. I argue that deepfakes have an intentional standard of correctness. That is, a deepfake depicts its subject only insofar as its creator intends it to. This is due to the way in which these images are produced, which involves a degree of intentional control similar to that involved in the production of other intentional pictures such as drawings and paintings. This aspect distinguishes deepfakes from real videos and photographs, which instead have a non-intentional standard: their correct interpretation corresponds to the scenes that were recorded by the mechanisms that produced them, and not to what their producers intended them to represent. I show that these depictive properties make deepfakes fit for communicating information in the same way as language and other intentional pictures. That is, they do not provide direct access to the communicated information like non-intentional pictures (e.g., videos and photographs) do. Rather, they convey information indirectly, relying only on the viewer's assumptions about the communicative intentions of the creator of the deepfake. Indirect communication is indeed a prominent function of such media in our society, but it is often overlooked in the philosophical literature. This analysis also explains what is epistemically worrying about deepfakes. With the introduction of this technology, viewers interpreting photorealistic videos and pictures can no longer be sure of which standard to select (i.e., intentional or non-intentional), which makes misinterpretation likely (i.e., they can take an image to have a standard of correctness that it does not have).




Notes

  1. https://www.youtube.com/watch?v=uAPUkgeiFVY.

  2. https://www.buzzfeed.com/craigsilverman/obama-jordan-peele-deepfake-video-debunk-buzzfeed.

  3. However, in Sect. 3 I show how the claims contained in this paper can be extended in interesting ways to other types of synthetic media, such as DALL-E-based images.

  4. I do not need to side with any specific account of pictorial experience here. Indeed, both experiential and recognitional accounts generally agree on the need for the notion of a standard of correctness in theories of depiction (Terrone, 2021).

  5. Millière (2022) draws a similar distinction between local and global deep learning-based media, distinguishing algorithms that modify pre-existing, archival media from algorithms that only synthesize new content. The main respect in which my distinction differs from his is that mine takes into account the distribution of standards of correctness.

  6. https://pitchfork.com/reviews/tracks/kendrick-lamar-the-heart-part-5/.

  7. I thank an anonymous reviewer for highlighting this point.

  8. The term ‘transparency’ is used to refer to several different notions in the philosophy of images, besides the one from Walton (1984) briefly mentioned in Sect. 2 (see Kulvicki, 2021, for an analysis of the three main ones). Epistemic transparency is not to be confused with any of these notions.

  9. Note that this applies to the claims developed by Gaut (2010) and Hopkins (2012) about digital photography.

  10. It is beyond the scope of this paper to offer an exhaustive list of such relevant features. However, note that much work in computer science is currently devoted to developing systems that can detect deepfakes. This is one example of a means that can be implemented to preserve videos as evidence, and therefore to create epistemic channels.

References

  • Abell, C. (2010). The epistemic value of photographs. In C. Abell & K. Bantinaki (Eds.), Philosophical perspectives on depiction. Oxford University Press.

  • Fallis, D. (2020). The epistemic threat of deepfakes. Philosophy & Technology, 34(4), 623–643.

  • Gaut, B. (2010). A philosophy of cinematic art. Cambridge University Press.

  • Grice, H. P. (1957). Meaning. The Philosophical Review, 66(3), 377–388.

  • Hopkins, R. (1998). Picture, image and experience: A philosophical inquiry. Cambridge University Press.

  • Hopkins, R. (2012). Factive pictorial experience: What’s special about photographs? Noûs, 46(4), 709–731.

  • Hopkins, R. (2015). The real challenge to photography (as communicative representational art). Journal of the American Philosophical Association, 1(2), 329–348.

  • Karras, T., Laine, S., & Aila, T. (2019). A style-based generator architecture for generative adversarial networks. Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 4401–4410.

  • Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., & Aila, T. (2020). Analyzing and improving the image quality of StyleGAN. Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 8110–8119.

  • Kulvicki, J. (2021). Varieties of transparency. In K. Purgar (Ed.), The Palgrave handbook of image studies (pp. 501–514). Palgrave Macmillan.

  • Lopes, D. (1996). Understanding pictures. Clarendon Press.

  • Meskin, A., & Cohen, J. (2008). Photographs as evidence. In S. Walden (Ed.), Photography and philosophy: Essays on the pencil of nature (pp. 70–90). Blackwell.

  • Millière, R. (2022). Deep learning and synthetic media. Synthese, 200(3), 1–27.

  • Mirsky, Y., & Lee, W. (2021). The creation and detection of deepfakes: A survey. ACM Computing Surveys, 54(1), 1–41.

  • Newall, M. (2011). What is a picture? Depiction, realism, abstraction. Springer.

  • Pignocchi, A. (2019). The continuity between art and everyday communication. In F. Cova & S. Réhault (Eds.), Advances in experimental philosophy of aesthetics (pp. 241–266). Bloomsbury.

  • Ramesh, A., Pavlov, M., Goh, G., Gray, S., Voss, C., Radford, A., & Sutskever, I. (2021). Zero-shot text-to-image generation. International Conference on Machine Learning, 8821–8831.

  • Rini, R. (2020). Deepfakes and the epistemic backstop. Philosophers’ Imprint, 20(24), 1–16.

  • Scruton, R. (1981). Photography and representation. Critical Inquiry, 7(3), 577–603.

  • Skyrms, B. (2010). Signals. Oxford University Press.

  • Sperber, D., & Wilson, D. (1995). Relevance: Communication and cognition. Blackwell.

  • Terrone, E. (2021). The standard of correctness and the ontology of depiction. American Philosophical Quarterly, 58(4), 399–412.

  • Tolosana, R., Vera-Rodriguez, R., Fierrez, J., Morales, A., & Ortega-Garcia, J. (2020). Deepfakes and beyond: A survey of face manipulation and fake detection. Information Fusion, 64, 131–148.

  • Verdoliva, L. (2020). Media forensics and deepfakes: An overview. IEEE Journal of Selected Topics in Signal Processing, 14(5), 910–932.

  • Viebahn, E. (2019). Lying with pictures. The British Journal of Aesthetics, 59(3), 243–257.

  • Walton, K. L. (1984). Transparent pictures: On the nature of photographic realism. Critical Inquiry, 11(2), 246–277.

  • Wollheim, R. (1980). Art and its objects. Cambridge University Press.


Acknowledgements

I am grateful to Roberto Casati for his comments on an early draft. I would also like to thank two anonymous referees for their helpful feedback, which substantially improved this paper.

Author information


Contributions

I (Francesco Pierini) am the only author of this paper.

Corresponding author

Correspondence to Francesco Pierini.

Ethics declarations

Conflict of interest

I declare no competing interests.

Consent for publication

I consent to the publication of this article.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Pierini, F. Deepfakes and depiction: from evidence to communication. Synthese 201, 97 (2023). https://doi.org/10.1007/s11229-023-04093-7

