
Deepfakes and trust in technology

  • Original Research
  • Published in: Synthese

Abstract

Deepfakes are fake recordings generated by machine learning algorithms. Various philosophical explanations have been proposed to account for their epistemic harmfulness. In this paper, I argue that deepfakes are epistemically harmful because they undermine trust in recording technology. As a result, we are no longer entitled to our default doxastic attitude of believing that P on the basis of a recording that supports the truth of P. Distrust engendered by deepfakes changes the epistemic status of recordings to resemble that of handmade images. Their credibility, like that of testimony, depends partly on the credibility of the source. I consider some proposed technical solutions from a philosophical perspective to show the practical relevance of these suggestions.


Notes

  1. ‘Participant stance’ is Holton’s name for Strawson’s (2008) participant attitude that we adopt toward others whose attitudes and actions toward us elicit reactive attitudes like anger or gratitude from us.

  2. Govier (1997, pp. 110, 112) describes similar phenomena under the rubrics of social and scatter trust.

  3. Natural kind terms could be an exception, but the burden of proof is on those who wish to argue that ‘trust’, like ‘gold’ or ‘water’, picks out a natural kind.

  4. If this premise is true, then it casts doubt on the idea, mentioned above, that the notion of trust in technology can be treated as predictive trust or trust-based reliance on occasion, since these would not account for the morally relevant kind of disappointment elicited by technological failure.

  5. I would like to thank an anonymous reviewer for highlighting this issue.

  6. I do not thereby commit myself to his conclusion that only traditional photography can be veridical because I believe that digital photography can be veridical as well when used in appropriate conditions.

  7. Arguing for this assumption here is beyond the scope of this paper.

  8. According to Hopkins (2012, pp. 2, 3), a factive pictorial experience is the experience where “we grasp what a picture depicts” and “if what is seen in the picture is that p, then p.” I have called this experience indirect in order to connect this account with Walton’s (1984) transparency thesis.

  9. According to Cavedon-Taylor (2013, p. 295), this resembles our default doxastic response to testimony. I would add that our default doxastic response to photographs likely extends to other kinds of recordings as well.

  10. See Sect. 4.2 for some suggestions of what the relevant properties may be for digital recordings.

  11. Fricker (2021, pp. 67–68) also considers situations where trust in testimony is based on an optimistic attitude about the teller’s trustworthiness. I ignore such situations here because I am only interested in cases where trust in testimony is based on propositional justification, since this links discussions of testimony with the amended trustworthiness requirement of EA.

  12. Alice could corroborate Bob’s report with someone else, but this would imply that she did not trust Bob as a source of information.

  13. For further arguments in support of the conclusion that handmade images convey testimonial information, see Cavedon-Taylor (2013, pp. 287–290).

  14. I will not attempt to state or defend such a norm here because this would exceed the scope of this paper.

  15. I use ‘fundamental’ in the sense of Fallis (2020) who objects to Rini’s (2020) account on the grounds that it does not explain how deepfakes undermine epistemic norms associated with recordings (see Sect. 5.1). In what follows, when I say that the trust explanation is more fundamental than some other account, I mean that it explains why deepfakes are epistemically harmful while also accounting for the observations that motivated the other account.

  16. I would like to thank an anonymous reviewer for raising this objection.

  17. By ‘trust comes first’ I mean that distrust precedes and motivates the discovery about the law of reinforcement instead of being the consequence of this discovery in the bridge story.

  18. For this reply, Fricker’s (2021) trust-based reliance on an occasion or basal trust could also work.

  19. I would like to thank an anonymous reviewer for raising this objection.

  20. For challenges, see Knobe (2003), Machery et al. (2004), and Weinberg et al. (2001) while replies have been offered by Deutsch (2015), Pust (2000), and Williamson (2007) among others. Machery et al. (2004) find the assumption that the semantic intuitions of Western academic philosophers, a small sample of the global population, are somehow more accurate than those of the majority highly dubious. Similar doubts could be cast on philosophers’ intuitions about trust.

  21. My approach in this section is informed by value sensitive design.

  22. These ideas are inspired by the zero trust paradigm in cybersecurity (see Rose et al., 2020).

  23. I would like to thank an anonymous reviewer for posing this question.

  24. This does not mean that a centralized system could not work. But a single point of failure, and a concentration of power, may mean that such a system must meet higher standards in order to be justifiably deemed trustworthy by end-users.

  25. A closed source approach could also work if it underwent regular independent audits. This would introduce additional parties to the process and raise questions about their trustworthiness, as would relying on experts in the case of an open source system.

  26. What I have in mind is a scenario where someone uploads a high-quality deepfake, like those shared by @deeptomcruise on Twitter, that is not merely a modification of a pre-existing recording.

  27. I would like to thank an anonymous reviewer for mentioning and outlining the following attack.

  28. I would like to thank an anonymous reviewer for suggesting that I address this concern.

  29. This may change in some hypothetical future where false positives become prevalent, but this simply means that Fallis’ account does not apply to the present.

  30. I would like to thank an anonymous reviewer for highlighting this issue.

References


Acknowledgements

A draft of this paper was presented at the Estonian Annual Philosophy Conference in 2021. I would like to thank the participants for their critical questions and helpful suggestions. I am also grateful to the two anonymous reviewers, whose constructive criticisms and insightful comments have greatly improved this paper.

Funding

No funding to disclose.

Author information


Corresponding author

Correspondence to Oliver Laas.

Ethics declarations

Conflict of interest

No conflict of interest to declare.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article


Cite this article

Laas, O. Deepfakes and trust in technology. Synthese 202, 132 (2023). https://doi.org/10.1007/s11229-023-04363-4

