Abstract
Deepfakes are fake recordings generated by machine learning algorithms. Various philosophical explanations have been proposed to account for their epistemic harmfulness. In this paper, I argue that deepfakes are epistemically harmful because they undermine trust in recording technology. As a result, we are no longer entitled to our default doxastic attitude of believing that P on the basis of a recording that supports the truth of P. The distrust engendered by deepfakes changes the epistemic status of recordings to resemble that of handmade images: their credibility, like that of testimony, depends partly on the credibility of the source. Finally, I consider some proposed technical solutions from a philosophical perspective to show the practical relevance of this analysis.
Notes
‘Participant stance’ is Holton’s name for Strawson’s (2008) participant attitude that we adopt toward others whose attitudes and actions toward us elicit reactive attitudes like anger or gratitude from us.
Govier (1997, pp. 110, 112) describes similar phenomena under the rubrics of social and scatter trust.
Natural kind terms could be an exception, but the burden of proof is on those who wish to argue that ‘trust’, like ‘gold’ or ‘water’, picks out a natural kind.
If this premise is true, then it casts doubt on the idea, mentioned above, that the notion of trust in technology can be treated as predictive trust or trust-based reliance on occasion, since these would not account for the morally relevant kind of disappointment elicited by technological failure.
I would like to thank an anonymous reviewer for highlighting this issue.
I do not thereby commit myself to his conclusion that only traditional photography can be veridical because I believe that digital photography can be veridical as well when used in appropriate conditions.
Arguing for this assumption here is beyond the scope of this paper.
According to Cavedon-Taylor (2013, p. 295), this resembles our default doxastic response to testimony. I would add that our default doxastic response to photographs likely extends to other kinds of recordings as well.
See Sect. 4.2 for some suggestions of what the relevant properties may be for digital recordings.
Fricker (2021, pp. 67–68) also considers situations where trust in testimony is based on an optimistic attitude about the teller’s trustworthiness. I ignore such situations here because I am only interested in cases where trust in testimony is based on propositional justification since this links discussions of testimony with the amended trustworthiness requirement of EA.
Alice could corroborate Bob's claim with someone else, but this would imply that she did not trust Bob as a source of information.
For further arguments in support of the conclusion that handmade images convey testimonial information, see Cavedon-Taylor (2013, pp. 287–290).
I will not attempt to state or defend such a norm here because this would exceed the scope of this paper.
I use ‘fundamental’ in the sense of Fallis (2020) who objects to Rini’s (2020) account on the grounds that it does not explain how deepfakes undermine epistemic norms associated with recordings (see Sect. 5.1). In what follows, when I say that the trust explanation is more fundamental than some other account, I mean that it explains why deepfakes are epistemically harmful while also accounting for the observations that motivated the other account.
I would like to thank an anonymous reviewer for raising this objection.
By ‘trust comes first’ I mean that, in the bridge story, distrust precedes and motivates the discovery of the law of reinforcement rather than being a consequence of that discovery.
For this reply, Fricker’s (2021) trust-based reliance on an occasion or basal trust could also work.
I would like to thank an anonymous reviewer for raising this objection.
For challenges, see Knobe (2003), Machery et al. (2004), and Weinberg et al. (2001); replies have been offered by Deutsch (2015), Pust (2000), and Williamson (2007), among others. Machery et al. (2004) find highly dubious the assumption that the semantic intuitions of Western academic philosophers, a small sample of the global population, are somehow more accurate than those of the majority. Similar doubts could be cast on philosophers’ intuitions about trust.
My approach in this section is informed by value sensitive design.
These ideas are inspired by the zero trust paradigm in cybersecurity (see Rose et al., 2020).
I would like to thank an anonymous reviewer for posing this question.
This does not mean that a centralized system could not work. But a single point of failure, and a concentration of power, may mean that such a system must meet higher standards in order to be justifiably deemed trustworthy by end-users.
A closed source approach could also work if it underwent regular independent audits. This would introduce additional parties to the process and raise questions about their trustworthiness, as would relying on experts in the case of an open source system.
What I have in mind is a scenario where someone uploads a high-quality deepfake, like those shared by @deeptomcruise on TikTok, that is not merely a modification of a pre-existing recording.
I would like to thank an anonymous reviewer for mentioning and outlining the following attack.
I would like to thank an anonymous reviewer for suggesting that I address this concern.
This may change in some hypothetical future where false positives become prevalent, but this simply means that Fallis’ account does not apply to the present.
I would like to thank an anonymous reviewer for highlighting this issue.
References
Abdul-Rahman, A., & Hailes, S. (2000). Supporting trust in virtual communities. In HICSS: Proceedings of the 33rd annual Hawaii international conference on system sciences. https://doi.org/10.1109/HICSS.2000.926814
Ajder, H., Patrini, G., Cavalli, F., & Cullen, L. (2019). The state of Deepfakes: Landscape, threats, and impact. Deeptrace. https://regmedia.co.uk/2019/10/08/deepfake_report.pdf
Anantrasirichai, A., & Bull, D. (2022). Artificial intelligence in the creative industries: A review. Artificial Intelligence Review, 55(1), 589–656. https://doi.org/10.1007/s10462-021-10039-7
Agüera y Arcas, B. (2017). Art in the age of machine intelligence. Arts, 6(4), 18. https://doi.org/10.3390/arts6040018
Ayyub, R. (2018, November 21). I was the victim of a Deepfake porn plot intended to silence me. HuffPost. https://www.huffingtonpost.in/rana-ayyub/deepfake-porn_a_23595592/
Baig, R. (2022, March 18). The deepfakes in the disinformation war. Deutsche Welle. https://www.dw.com/en/fact-check-the-deepfakes-in-the-disinformation-war-between-russia-and-ukraine-61166433
Baier, A. (1986). Trust and antitrust. Ethics, 96(2), 231–260. https://doi.org/10.1086/292745
Bhaskar, R. (1998). The possibility of naturalism: A philosophical critique of the contemporary human sciences (3rd ed.). Routledge.
Borge, M., Kokoris-Kogias, E., Jovanovic, P., Gasser, L., Gailly, N., & Ford, B. (2017). Proof-of-personhood: Redemocratizing permissionless cryptocurrencies. In L. O’Conner (Ed.), 2nd IEEE European symposium on security and privacy workshops (EuroS&PW) (pp. 23–26). IEEE Computer Society. https://doi.org/10.1109/eurospw.2017.46
Cahill, V., Gray, E., Seigneur, J.-M., Jensen, C. D., Chen, Y., Shand, B., Dimmock, N., Twigg, A., Bacon, J., English, C., Wagealla, W., Terzis, S., Nixon, P., Serugendo, G. D. M., Bryce, C., Carbone, M., Krukow, K., & Nielsen, M. (2003). Using trust for secure collaboration in uncertain environments. IEEE Pervasive Computing, 2(3), 52–61. https://doi.org/10.1109/MPRV.2003.1229527
Cao, Q., Sirivianos, M., Yang, X., & Pregueiro, T. (2012). Aiding the detection of fake accounts in large scale social online services. In Proceedings of the 9th USENIX symposium on networked systems design and implementation (NSDI 12) (pp. 197–210). https://www.usenix.org/conference/nsdi12/technical-sessions/presentation/cao
Carlson, M. (2021). Skepticism and the digital information environment. SATS, 22(2), 149–167. https://doi.org/10.1515/sats-2021-0008
Cavedon-Taylor, D. (2013). Photographically based knowledge. Episteme, 10(3), 283–297. https://doi.org/10.1017/epi.2013.21
Chalmers, D. J. (2022). Reality+: Virtual worlds and the problems of philosophy. W. W. Norton & Company.
Chesney, R., & Citron, D. K. (2019). Deep fakes: A looming challenge for privacy, democracy, and national security. California Law Review, 107, 1753–1820. https://doi.org/10.15779/Z38RV0D15J
Christopher, N. (2020, February 18). We’ve just seen the first use of Deepfakes in an Indian election campaign. Vice. https://www.vice.com/en_in/article/jgedjb/the-first-use-of-deepfakes-in-indian-election-by-bjp
Chapple, M., & Seidl, D. (2019). CompTIA®: PenTest+ Study Guide. Wiley.
Danaher, J., & Sætra, H. S. (2022). Technology and moral change: The transformation of truth and trust. Ethics and Information Technology. https://doi.org/10.1007/s10676-022-09661-y
Dennett, D. C. (2013). Kinds of things—towards a bestiary of the manifest image. In D. Ross, J. Ladyman, & H. Kincaid (Eds.), Scientific metaphysics (pp. 96–107). Oxford University Press.
Deutsch, M. (2015). The myth of the intuitive: Experimental philosophy and the philosophical method. MIT.
Domenicucci, J., & Holton, R. (2017). Trust as a two-place relation. In P. Faulkner & R. Holton (Eds.), The philosophy of trust (pp. 149–160). Oxford University Press.
du Sautoy, M. (2019). The creativity code: Art and innovation in the age of AI. The Belknap Press of Harvard University Press.
Fallis, D. (2020). The epistemic threat of Deepfakes. Philosophy & Technology, 34(4), 623–643. https://doi.org/10.1007/s13347-020-00419-2
Faulkner, P. (2007). On telling and trusting. Mind, 116(464), 875–902. https://doi.org/10.1093/mind/fzm875
Floridi, L. (2014). The fourth revolution: How the infosphere is reshaping human reality. Oxford University Press.
Ford, B. (2021). Technologizing democracy or democratizing technology? A layered-architecture perspective on potentials and challenges. In L. Bernholz, H. Landemore, & R. Reich (Eds.), Digital technology and democratic theory (pp. 274–308). The University of Chicago Press.
Ford, B., & Strauss, J. (2008). An offline foundation for online accountable pseudonyms. In Proceedings of the 1st workshop on social network systems (pp. 31–36). Association for Computing Machinery.
Fricker, E. (2021). Can trust work epistemic magic? Philosophical Topics, 49(2), 57–82. https://doi.org/10.5840/philtopics202149215
Fricker, M. (2007). Epistemic injustice: Power and the ethics of knowing. Oxford University Press.
Friedman, B. (Ed.). (1998). Human values and the design of computer technology. CSLI Publications.
Friedman, B., Kahn, P. H., Jr., & Borning, Al. (2002). Value sensitive design: Theory and methods (UW CSET Technical Report 02-12-01). University of Washington. https://faculty.washington.edu/pkahn/vsd-theory-methods-tr.pdf
Galindo, G. (2020, April 14). XR Belgium posts deepfake of Belgian premier linking Covid-19 with climate crisis. The Brussels Times. https://www.brusselstimes.com/news/belgium-all-news/politics/106320/xr-belgium-posts-deepfake-of-belgian-premiere-linking-covid-19-with-climate-crisis
Golbeck, J. (Ed.). (2009). Computing with social trust. Springer.
Golbeck, J. A. (2005). Computing and applying trust in web-based social networks. PhD Thesis, University of Maryland.
Govier, T. (1997). Social trust and human communities. McGill-Queen’s University Press.
Greenspan, G. (2016, April 12). Beware the impossible smart contract. Blockchain News. https://www.the-blockchain.com/2016/04/12/beware-of-the-impossible-smart-contract/
Güera, D., & Delp, E. J. (2018). Deepfake video detection using recurrent neural networks. In 15th IEEE international conference on advanced video and signal based surveillance (AVSS) (pp. 127–132). IEEE. https://doi.org/10.1109/AVSS.2018.8639163
Hanson, N. R. (1958). Patterns of discovery. Cambridge University Press.
Hardin, R. (2002). Trust and trustworthiness. Russell Sage Foundation.
Hardin, R. (2006). Trust. Polity Press.
Hao, K. (2018, November 1). Deepfake-busting apps can spot even a single pixel out of place. MIT Technology Review. https://www.technologyreview.com/2018/11/01/139227/deepfake-busting-apps-can-spot-even-a-single-pixel-out-of-place/
Hasan, H. R., & Salah, K. (2019). Combating Deepfake videos using blockchain and smart contracts. IEEE Access, 7, 41596–41606. https://doi.org/10.1109/ACCESS.2019.2905689
Hatherley, J. J. (2020). Limits of trust in medical AI. Journal of Medical Ethics, 46(7), 478–481. https://doi.org/10.1136/medethics-2019-105935
Hawley, K. (2014). Trust, distrust and commitment. Noûs, 48(1), 1–20. https://doi.org/10.1111/nous.12000
Holton, R. (1994). Deciding to trust, coming to believe. Australasian Journal of Philosophy, 72(1), 63–76. https://doi.org/10.1080/00048409412345881
Hopkins, R. (2012). Factive pictorial experience: What’s special about photographs? Noûs, 46(4), 709–731. https://doi.org/10.1111/j.1468-0068.2010.00800.x
Isaac, M., & Frenkel, S. (2021, October 4). Gone in minutes, out for hours: Outage shakes Facebook. The New York Times. https://www.nytimes.com/2021/10/04/technology/facebook-down.html
Jalava, J. (2003). From norms to trust: The Luhmannian connections between trust and system. European Journal of Social Theory, 6(2), 173–190. https://doi.org/10.1177/1368431003006002002
Jones, K. (1996). Trust as an affective attitude. Ethics, 107(1), 4–25. https://doi.org/10.1086/233694
Jones, K. (2004). Trust and terror. In P. DesAutels & M. Urban Walker (Eds.), Moral psychology: Feminist ethics and social theory (pp. 3–18). Rowman & Littlefield.
Knobe, J. (2003). Intentional action and side effects in ordinary language. Analysis, 63(3), 190–194.
Kripke, S. A. (1980). Naming and necessity. Basil Blackwell.
Kroes, P., Franssen, M., van den Poel, I., & Ottens, M. (2006). Treating socio-technical systems as engineering systems: Some conceptual problems. Systems Research and Behavioral Science, 23(6), 803–814. https://doi.org/10.1002/sres.703
Kurve, A., & Kesidis, G. (2011). Sybil detection via distributed sparse cut monitoring. In 2011 IEEE international conference on communications (ICC) (pp. 1–6). IEEE. https://doi.org/10.1109/icc.2011.5963402
Laas, O. (2017). On game definitions. Journal of the Philosophy of Sport, 44(1), 81–94. https://doi.org/10.1080/00948705.2016.1255556
Laas, O. (2022). Computational creativity and its cultural impact. In R. Kelomees, V. Guljajeva, & O. Laas (Eds.), The meaning of creativity in the age of AI (pp. 89–105). Estonian Academy of Arts.
Lammle, T. (2021). CompTIA®: Network+® Study guide (Exam N10-008). Wiley.
Lesniewski-Laas, C., & Kaashoek, M. F. (2011). Whānau: A Sybil-proof distributed hash table. In Proceedings of the 7th USENIX Symposium on networked systems design and implementation (NSDI 10). https://www.usenix.org/legacy/events/nsdi10/full_papers/lesniewski-laas.pdf
Lessig, L. (2006). Code: Version 2.0 (2nd ed.). Basic Books.
Luhmann, N. (1979). Trust and power. Wiley.
Machery, E., Mallon, R., Nichols, S., & Stich, S. P. (2004). Semantics, cross-cultural style. Cognition: International Journal of Cognitive Science, 92(3), B1–B12.
MacMillan, D., & McMillan, R. (2018, October 8). Google exposed user data, feared repercussions of disclosing to public. The Wall Street Journal. https://www.wsj.com/articles/google-exposed-user-data-feared-repercussions-of-disclosing-to-public-1539017194
Maheswaran, J., Jackowitz, D., Zhai, E., Wolinsky, D. I., & Ford, B. (2016). Building privacy-preserving cryptographic credentials from federated online identities. In Proceedings of the 6th ACM conference on data and application security and privacy (pp. 3–13). Association for Computing Machinery. https://doi.org/10.1145/2857705.2857725
Matthews, T. (2022). Deepfakes, intellectual cynics, and the cultivation of digital sensibility. Royal Institute of Philosophy Supplement, 92, 67–85. https://doi.org/10.1017/s1358246122000224
Matthews, T. (2023). Deepfakes, fake barns, and knowledge from videos. Synthese. https://doi.org/10.1007/s11229-022-04033-x
Mazzone, M., & Elgammal, A. (2019). Art, creativity, and the potential of artificial intelligence. Arts, 8(1), 26. https://doi.org/10.3390/arts8010026
McLeod, C. (2021). Trust. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. https://plato.stanford.edu/archives/fall2021/entries/trust/
McLuhan, M. (1962). The Gutenberg galaxy: The making of typographical man. University of Toronto Press.
McLuhan, M. (2001). Understanding media: The extensions of man. Routledge.
Mitchell, T. M. (1997). Machine learning. McGraw-Hill.
Nickel, P. J. (2013). Trust in technological systems. In M. J. de Vries, S. O. Hansson & A. W. M. Meijers (Eds.) Philosophy of engineering and technology, Vol. 9: Norms in technology (pp. 223–237). Springer. https://doi.org/10.1007/978-94-007-5243-6_14.
Nickel, P. J. (2015). Design for the value of trust. In J. van den Hoven, P. E. Vermaas & I. van de Poel (Eds.) Handbook of ethics, values, and technological design: Sources, theory, values and application domains (pp. 551–567). Springer. https://doi.org/10.1007/978-94-007-6970-0_21
Nickel, P. J. (2017). Being pragmatic about trust. In P. Faulkner & R. Holton (Eds.), The philosophy of trust (pp. 195–213). Oxford University Press.
Nissenbaum, H., & Introna, L. (1999). Shaping the web: Why the politics of search engines matters. The Information Society, 16(3), 169–185. https://doi.org/10.1080/01972240050133634
Öhman, C. (2019). Introducing the Pervert’s dilemma: A contribution to the critique of Deepfake Pornography. Ethics and Information Technology, 22(2), 133–140. https://doi.org/10.1007/s10676-019-09522-1
Ong, W. (2002). Orality and literacy: The technologization of the word. Routledge.
Pariser, E. (2011). The filter bubble: What the Internet is hiding from you. Penguin Press.
Perelman, C., & Olbrechts-Tyteca, L. (1969). The new rhetoric: A treatise on argumentation. University of Notre Dame Press.
Pust, J. (2000). Intuition as evidence. Routledge.
Ries, S., Kangasharju, J., & Mühlhäuser, M. (2006). A classification of trust systems. In R. Meersman, Z. Tari, & P. Herrero (Eds.), On the move to meaningful Internet systems 2006: OTM 2006 workshops (pp. 894–903). Springer. https://doi.org/10.1007/11915034_114
Riaz, S. (2020, December 14). Google services outage hits users across the world. yahoo!news. https://news.yahoo.com/google-services-outage-hits-users-across-the-world-121517814.html?guccounter=1
Rini, R. (2020). Deepfakes and the epistemic backstop. Philosophers’ Imprint, 20(24), 1–16. https://www.philosophersimprint.org/020024.
Rini, R., & Cohen, L. (2022). Deepfakes, deep harms. Journal of Ethics and Social Philosophy, 22(2), 143–161. https://doi.org/10.26556/jesp.v22i2.1628
Robinson, R. (1950). Definition. Clarendon Press.
Rose, S., Borchert, O., Mitchell, S., & Connelly, S. (2020). Zero trust architecture (NIST Special Publication 800-207). National Institute of Standards and Technology. https://doi.org/10.6028/NIST.SP.800-207.
Sabater-Mir, J., & Sierra, C. (2005). Review on computational trust and reputation models. Artificial Intelligence Review, 24(1), 33–60. https://doi.org/10.1007/s10462-004-0041-5
Savelyev, A. (2016). Contract Law 2.0: “Smart” contracts as the beginning of the end of classical contract law (Research Paper No. WP BRP 71/LAW/2016). Higher School of Economics. https://dx.doi.org/10.2139/ssrn.2885241.
Schiappa, E. (2003). Defining reality: Definitions and the politics of meaning. Southern Illinois University Press.
Schick, N. (2020). Deepfakes: The coming Infocalypse. Twelve.
Skyrms, B. (2010). Signals. Oxford University Press.
Strawson, P. F. (2008). Freedom and resentment. In P. F. Strawson (Ed.), Freedom and resentment and other essays (pp. 1–28). Routledge. (Reprinted from Proceedings of the British Academy, 48, 187–211.)
Stupp, C. (2019, August 30). Fraudsters used AI to mimic CEO’s voice in unusual cybercrime case. The Wall Street Journal. https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402
Thampi, S. M., Bhargava, B. & Atrey, P. K. (Eds.) (2014). Managing trust in cyberspace. CRC Press.
Tran, N., Min, B., Li, J., & Subramanian, L. (2009). Sybil-resilient online content voting. In Proceedings of the 6th USENIX symposium on networked systems design and implementation (pp. 15–28). USENIX Association.
van de Poel, I. (2020). Embedding values in artificial intelligence (AI) systems. Minds and Machines, 30(3), 385–409. https://doi.org/10.1007/s11023-020-09537-4
van den Berg, B., & Keymolen, E. (2017). Regulating security on the Internet: Control versus trust. International Review of Law, Computers & Technology, 31(2), 188–205. https://doi.org/10.1080/13600869.2017.1298504
Vincent, J. (2020, July 27). This is what a Deepfake voice clone used in failed fraud sounds like. The Verge. https://www.theverge.com/2020/7/27/21339898/deepfake-audio-voice-clone-scam-attempt-nisos
Yu, H., Gibbons, P. B., Kaminsky, M., & Xiao, F. (2008). SybilLimit: A near-optimal social network defense against Sybil attacks. In Proceedings of the 2008 IEEE symposium on security and privacy (pp. 3–17). IEEE Computer Society.
Yu, H., Kaminsky, M., Gibbons, P. B., & Flaxman, A. (2006). SybilGuard: Defending against sybil attacks via social networks. In Proceedings of the 2006 conference on applications, technologies, architectures, and protocols for computer communications (pp. 267–278). Association for Computing Machinery.
Yu, H., Shi, C., Kaminsky, M., Gibbons, P. B., & Xiao, F. (2009). DSybil: Optimal Sybil-resistance for recommendation systems. In Proceedings of the 2009 30th IEEE symposium on security and privacy (pp. 283–298). IEEE Computer Society.
Walton, D. (2001). Persuasive definitions and public policy arguments. Argumentation and Advocacy: The Journal of the American Forensic Association, 37(3), 117–132.
Walton, D. (2006). Fundamentals of critical argumentation. Cambridge University Press.
Walton, K. L. (1984). Transparency of pictures: On the nature of photographic realism. Noûs, 18(1), 67–72. https://doi.org/10.2307/2215023
Weinberg, J. M., Nichols, S., & Stich, S. (2001). Normativity and epistemic intuitions. Philosophical Topics, 29(1–2), 429–460. https://doi.org/10.5840/philtopics2001291/217
Williamson, T. (2007). The philosophy of philosophy. Routledge.
Winick, E. (2018, October 16). How acting as Carrie Fisher’s puppet made a career for Rogue One’s Princess Leia. MIT Technology Review. https://www.technologyreview.com/2018/10/16/how-acting-as-carrie-fishers-puppet-made-a-career-for-rogue-ones-princess-leia/
Acknowledgements
A draft of this paper was presented at the Estonian Annual Philosophy Conference in 2021. I would like to thank the participants for their critical questions and helpful suggestions. I am also grateful to the two anonymous reviewers, whose constructive criticisms and insightful comments have greatly improved this paper.
Funding
No funding to disclose.
Ethics declarations
Conflict of interest
No conflict of interest to disclose.
About this article
Cite this article
Laas, O. Deepfakes and trust in technology. Synthese 202, 132 (2023). https://doi.org/10.1007/s11229-023-04363-4