
Making sense of the conceptual nonsense ‘trustworthy AI’

  • Original Research
  • Published in AI and Ethics

Abstract

Following the publication of numerous ethical principles and guidelines, the concept of ‘Trustworthy AI’ has become widely used. However, several AI ethicists argue against using this concept, often backing their arguments with decades of conceptual analyses by scholars of trust. In this paper, I describe the historical-philosophical roots of their objection and its premise: that trust entails a human quality that technologies lack. I then review existing criticisms of ‘Trustworthy AI’ and the consequences of ignoring them: if the concept remains in use, we risk attributing responsibilities to agents that cannot be held responsible, and consequently eroding the social structures that underpin accountability and liability. Nevertheless, despite suggestions to shift the paradigm from ‘Trustworthy AI’ to ‘Reliable AI’, I argue that, realistically, the concept will remain in use. I end by arguing that AI ethics is, ultimately, also about power, social justice, and scholarly activism. I therefore propose that community-driven and social-justice-oriented ethicists of AI and trust scholars focus further on (a) the democratic aspects of trust formation; and (b) the critical social aspects highlighted by phenomena of distrust. In this way, it will be possible to further reveal shifts in power relations, challenge unfair status quos, and suggest meaningful ways to safeguard the interests of citizens.

Notes

  1. The [36] document mentions 52 members; however, only 51 names are listed [87].

  2. While I could not trace the source of the term ‘Trustworthy AI’, its initial popularization can be attributed to the HLEG’s first draft. See Google Trends worldwide search for “Trustworthy AI”.

  3. Despite the criticism aimed at the HLEG, Metzinger [54] acknowledged that this initiative is “currently the best globally available platform for the next phase of discussion”.

  4. For the relations of the concepts of trust and reliability to the concepts of confidence, risk, and vulnerability, see [14]: §2.3 and references therein.

  5. The literature has not yet solidified around a single term for this idea; the name varies with the context in which the approach is raised. I call it ‘The Anthropocentric View of Trust’, following Humphreys’s [39] criticism, which centers on the idea that some concepts denote only humans.

  6. It is possible to distinguish between strong and weak versions of the anthropocentric view. According to the strong version, technologies cannot be the actual objects of trust, but people and institutions can. According to the weaker version, technologies are ipso facto objects of trust, yet it is only on closer inspection that we recognize humans and institutions as additional objects of trust. Both versions reduce issues of trust in technologies to the people behind them and do not ascribe human agency to technologies.

  7. For the topic of AI and assigning moral status, a degree of moral consideration, or moral agency, see [11, 12, 18, 30, 31, 64, 80, 85].

  8. ForHumanity’s website: https://forhumanity.center.

References

  1. Ağca, M.A., Faye, S., Khadraoui, D.: A survey on trusted distributed artificial intelligence. IEEE Access (2022). https://doi.org/10.1109/access.2022.3176385

  2. AlgorithmWatch. No red lines: industry defuses ethics guidelines for artificial intelligence. https://algorithmwatch.org/en/industry-defuses-ethics-guidelines-for-artificial-intelligence/ (2019)

  3. Article 19. Governance with teeth: How human rights can strengthen FAT and ethics initiatives on artificial intelligence. April 17, 2019. https://www.article19.org/resources/governance-with-teeth-how-human-rights-can-strengthen-fat-and-ethics-initiatives-on-artificial-intelligence/ (2019)

  4. Baier, A.: Trust and antitrust. Ethics 96(2), 231–260 (1986). https://doi.org/10.1086/292745

  5. Braun, M., Bleher, H., Hummel, P.: A leap of faith: is there a formula for “Trustworthy” AI? Hastings Cent. Rep. 51(3), 17–22 (2021). https://doi.org/10.1002/hast.1207

  6. Bryson, J.J.: AI & global governance: no one should trust AI. United Nations University, Centre for Policy Research, November 13, 2018. https://cpr.unu.edu/publications/articles/ai-global-governance-no-one-should-trust-ai.html (2018)

  7. Bryson, J.J.: One day, AI will seem as human as anyone. What then? Wired, June 27, 2022. https://www.wired.com/story/lamda-sentience-psychology-ethics-policy (2022)

  8. Buijsman, S., Veluwenkamp, H.: Spotting when algorithms are wrong. Mind. Mach. (2022). https://doi.org/10.1007/s11023-022-09591-0

  9. CAICT [China Academy of Information and Communications Technology]. White Paper on Trustworthy Artificial Intelligence. www.caict.ac.cn/english/research/whitepapers/202110/t20211014_391097.html. (2021)

  10. Coeckelbergh, M.: Can we trust robots? Ethics Inf. Technol. 14(1), 53–60 (2012). https://doi.org/10.1007/s10676-011-9279-1

  11. Coeckelbergh, M.: Artificial intelligence, responsibility attribution, and a relational justification of explainability. Sci. Eng. Ethics 26(4), 2051–2068 (2020). https://doi.org/10.1007/s11948-019-00146-8

  12. Danaher, J.: Welcoming robots into the moral circle: a defence of ethical behaviourism. Sci. Eng. Ethics 26(4), 2023–2049 (2020). https://doi.org/10.1007/s11948-019-00119-x

  13. Davies, J. Europe publishes stance on AI ethics, but don’t expect much, telecoms.com news 28 June 2019. https://telecoms.com/498190/europe-publishes-stance-on-ai-ethics-but-dont-expect-much. (2019)

  14. De Filippi, P., Mannan, M., Reijers, W.: Blockchain as a confidence machine: the problem of trust & challenges of governance. Technol. Soc. (2020). https://doi.org/10.1016/j.techsoc.2020.101284

  15. Dotan, R. The Proliferation of AI Ethics Principles: What’s Next?, MAIEI. https://montrealethics.ai/the-proliferation-of-ai-ethics-principles-whats-next/. (2021)

  16. Dubber, M.D., Pasquale, F., Das, S.: The Oxford handbook of ethics of AI. In: Oxford handbooks. Oxford University Press, Oxford (2020)

  17. EC [European Council]. Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) And Amending Certain Union Legislative Acts. (Document 52021pc0206). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206 (2021)

  18. Farina, L.: Sven Nyholm, Humans and robots: ethics, agency and anthropomorphism. J. Moral Philos. 19(2), 221–224 (2022). https://doi.org/10.1163/17455243-19020007

  19. Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., Srikumar, M.: Principled artificial intelligence: mapping consensus in ethical and rights-based approaches to principles for AI. Berkman Klein Center Research Publication, Cambridge (2020)

  20. Floridi, L.: Translating principles into practices of digital ethics: five risks of being unethical. Phil. Technol. 32, 185–193 (2019). https://doi.org/10.1007/s13347-019-00354-x

  21. Floridi, L., Cowls, J.: A unified framework of five principles for AI in society. In: Ethics governance and policies in artificial intelligence, pp. 5–17. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-81907-1_2

  22. Floridi, L., Sanders, J.W.: On the morality of artificial agents. Mind. Mach. 14(3), 349–379 (2004). https://doi.org/10.1023/b:mind.0000035461.63578.9d

  23. Freiman, O.: Towards the epistemology of the Internet of Things: techno-epistemology and ethical considerations through the prism of trust. Int. Rev. Inf. Ethics 22, 6–22 (2014). https://doi.org/10.29173/irie115

  24. Freiman, O.: The Role of Knowledge in the Formation of Trust in Technologies. Ph.D. Dissertation, Bar-Ilan University (2021).

  25. Freiman, O., Miller, B.: Can artificial entities assert? In: Goldberg, S. (ed.) The Oxford Handbook of Assertion. Oxford University Press, Oxford (2020). https://academic.oup.com/edited-volume/34275/chapter-abstract/290604123

  26. Freiman, O., Geslevich Packin, N.: Artificial intelligence products cannot be moral agents. Toronto Star, August 7th, 2022. https://www.thestar.com/opinion/contributors/2022/08/07/artificial-intelligence-products-cannot-be-moral-agents-the-tech-industry-must-be-held-responsible-for-what-it-develops.html

  27. Gießler, S., Spielkamp, M., Ferrario, A., Christen, M., Shaw, D., Schneble, C.: ‘Trustworthy AI’ is not an appropriate framework. AlgorithmWatch. https://algorithmwatch.org/en/trustworthy-ai-is-not-an-appropriate-framework/ (2019)

  28. Glikson, E., Woolley, A.W.: Human trust in artificial intelligence: review of empirical research. Acad. Manag. Ann. 14(2), 627–660 (2020). https://doi.org/10.5465/annals.2018.0057

  29. Green, B. The Contestation of Tech Ethics: A Sociotechnical Approach to Technology Ethics in Practice. The Digital Humanist, February 25, 2022. https://thedigitalhumanist.org/the-contestation-of-tech-ethics-a-sociotechnical-approach-to-technology-ethics-in-practice (2022)

  30. Gunkel, D.J.: The other question: can and should robots have rights? Ethics Inf. Technol. 20(2), 87–99 (2018). https://doi.org/10.1007/s10676-017-9442-4

  31. Gunkel, D.J.: Robot rights. MIT Press (2018)

  32. Hagendorff, T.: The ethics of AI ethics: an evaluation of guidelines. Mind. Mach. 30(1), 99–120 (2020). https://doi.org/10.1007/s11023-020-09517-8

  33. Hardin, R.: The street-level epistemology of trust. Polit. Soc. 21(4), 505–529 (1993). https://doi.org/10.1177/0032329293021004006

  34. Hatherley, J.J.: Limits of trust in medical AI. J. Med. Ethics 46(7), 478–481 (2020). https://doi.org/10.1136/medethics-2019-105935

  35. Hawley, K.: Trust, distrust and commitment. Noûs 48(1), 1–20 (2014). https://doi.org/10.1111/nous.12000

  36. HLEG. Draft Ethics Guidelines for Trustworthy AI. https://digital-strategy.ec.europa.eu/en/library/draft-ethics-guidelines-trustworthy-ai. (2018)

  37. HLEG. Ethics guidelines for trustworthy AI. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai. (2019)

  38. Hoff, K.A., Bashir, M.: Trust in automation: integrating empirical evidence on factors that influence trust. Hum. Factors 57(3), 407–434 (2015). https://doi.org/10.1177/0018720814547570

  39. Humphreys, P.: Network epistemology. Episteme 6(2), 221–229 (2009). https://doi.org/10.3366/e1742360009000653

  40. ICO [Information Commissioner's Office].: ‘Immature biometric technologies could be discriminating against people’ says ICO in warning to organisations. News and Blogs, 26 October 2022. https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2022/10/immature-biometric-technologies-could-be-discriminating-against-people-says-ico-in-warning-to-organisations

  41. Isaeva, N., Bachmann, R., Bristow, A., Saunders, M.N.: Why the epistemologies of trust researchers matter. J. Trust Res. 5(2), 153–169 (2015). https://doi.org/10.1080/21515581.2015.1074585

  42. Jobin, A., Ienca, M., Vayena, E.: The global landscape of AI ethics guidelines. Nature Mach. Intell. 1(9), 389–399 (2019). https://doi.org/10.1038/s42256-019-0088-2

  43. Jones, K.: Trust as an affective attitude. Ethics 107(1), 4–25 (1996). https://doi.org/10.1086/233694

  44. Jones, K.: Trustworthiness. Ethics 123(1), 61–85 (2012). https://doi.org/10.1086/667838

  45. Kalluri, P.: Don’t ask if artificial intelligence is good or fair, ask how it shifts power. Nature (2020). https://doi.org/10.1038/d41586-020-02003-2

  46. Kelly, P.: Facial Recognition Technology and the Growing Power of Artificial Intelligence. Report of the Standing Committee on Access to Information, Privacy and Ethics. 44th Parliament, 1st Session. House of Commons, Canada (2021)

  47. Keymolen, E.: Trust on the line: a philosophical exploration of trust in the networked era. In: Dissertation. Erasmus University Rotterdam, Rotterdam (2016)

  48. Kontogiorgos, D., et al.: The effects of anthropomorphism and non-verbal social behaviour in virtual assistants. Proc. ACM Int. Conf. Intell. Virtual Agents (2019). https://doi.org/10.1145/3308532.3329466

  49. Latour, B.: Where are the missing masses? The sociology of a few mundane artifacts. In: Bijker, W.E., Law, J. (eds.) Shaping technology/building society: studies in sociotechnical change, pp. 225–258. MIT Press, Cambridge (1992)

  50. Lee, J.D., See, K.A.: Trust in automation: designing for appropriate reliance. Hum. Factors 46(1), 50–80 (2004). https://doi.org/10.1518/hfes.46.1.50.30392

  51. Mayer, R.C., Davis, J.H., Schoorman, F.D.: An integrative model of organizational trust. Acad. Manag. Rev. 20(3), 709–734 (1995). https://doi.org/10.5465/amr.1995.9508080335

  52. McLeod, C.: Self-trust and reproductive autonomy. MIT Press (2002)

  53. Metz, R.: Amazon will block police indefinitely from using its facial-recognition software. CNN Business, May 18, 2021. https://www.cnn.com/2021/05/18/tech/amazon-police-facial-recognition-ban

  54. Metzinger, T.: Ethics washing made in Europe. Der Tagesspiegel. https://www.tagesspiegel.de/politik/eu-guidelines-ethics-washing-made-in-europe/24195496.html (2019)

  55. Metzinger, T., Coeckelbergh, M.: Europe needs more guts when it comes to AI ethics. Tagesspiegel Background, April 16, 2020. https://background.tagesspiegel.de/digitalisierung/europe-needs-more-guts-when-it-comes-to-ai-ethics (2020)

  56. Miller, B., Freiman, O.: Trust and distributed epistemic labor. In: Simon, J. (ed.) The Routledge Handbook of Trust and Philosophy. Routledge (2020)

  57. Mittelstadt, B.: Principles alone cannot guarantee ethical AI. Nat. Mach. Intell. 1(11), 501–507 (2019). https://doi.org/10.1038/s42256-019-0114-4

  58. NAII [National Artificial Intelligence Initiative]. Advancing Trustworthy AI. https://www.ai.gov/strategic-pillars/advancing-trustworthy-ai/ (2021)

  59. Nguyen, T.C.: Trust as an unquestioning attitude. In: Oxford Studies in Epistemology. Oxford University Press, Oxford (2022)

  60. Nickel, P.J.: Trust in technological systems. In: De Vries, M.J., Hansson, S.O., Meijers, A.W. (eds.) Norms in technology, pp. 223–237. Springer, Dordrecht (2013). https://doi.org/10.1007/978-94-007-5243-6_14

  61. Nickel, P.J.: Being pragmatic about trust. In: Faulkner, P., Simpson, T. (eds.) The Philosophy of trust, pp. 195–213. Oxford University Press, Oxford (2017). https://doi.org/10.1093/acprof:oso/9780198732549.003.0012

  62. Nickel, P.J.: Trust in medical artificial intelligence: a discretionary account. Ethics Inf. Technol. 24(1), 1–10 (2022). https://doi.org/10.1007/s10676-022-09630-5

  63. Nickel, P.J., Franssen, M., Kroes, P.: Can we make sense of the notion of trustworthy technology? Knowl. Technol. Policy 23(3–4), 429–444 (2010). https://doi.org/10.1007/s12130-010-9124-6

  64. Nyholm, S.: Humans and robots: ethics, agency, and anthropomorphism. Rowman & Littlefield Publishers, Lanham (2020)

  65. Opoku, V.: Regulation of artificial intelligence in the EU. In: Master Thesis. University of Hamburg, Hamburg (2019)

  66. Origgi, G.: Qu’est-ce que la confiance? VRIN, Paris (2008)

  67. Peukert, C., Kloker, S.: Trustworthy AI: how ethicswashing undermines consumer trust. WI2020 Zent. Tracks (2020). https://doi.org/10.30844/wi_2020_j11-peukert

  68. Pitt, J.C.: It’s not about technology. Knowl. Technol. Policy 23(3–4), 445–454 (2010). https://doi.org/10.1007/s12130-010-9125-5

  69. Ramasubramanian, S., Sousa, A.N.: Communication scholar-activism: conceptualizing key dimensions and practices based on interviews with scholar-activists. J. Appl. Commun. Res. 49(5), 477–496 (2021). https://doi.org/10.1080/00909882.2021.1964573

  70. Renda, A.: Europe: toward a policy framework for trustworthy AI. In: The Oxford handbook of ethics of AI, pp. 649–666. Oxford University Press, Oxford (2020). https://doi.org/10.1093/oxfordhb/9780190067397.013.41

  71. Rességuier, A., Rodrigues, R.: AI ethics should not remain toothless! A call to bring back the teeth of ethics. Big Data Soc. 7(2) (2020). https://doi.org/10.1177/2053951720942541

  72. Rieder, G., Simon, J., Wong, P.H.: Mapping the stony road toward trustworthy AI: expectations, problems, conundrums. In: Machines we trust: perspectives on dependable AI. MIT Press, Cambridge (2020)

  73. Rousseau, D.M., Sitkin, S.B., Burt, R.S., Camerer, C.: Not so different after all: a cross-discipline view of trust. Acad. Manag. Rev. 23(3), 393–404 (1998). https://doi.org/10.5465/amr.1998.926617

  74. Ryan, M.: In AI we trust: ethics, artificial intelligence, and reliability. Sci. Eng. Ethics 26(5), 2749–2767 (2020). https://doi.org/10.1007/s11948-020-00228-y

  75. Schiff, D., Borenstein, J., Biddle, J., Laas, K.: AI ethics in the public, private, and ngo sectors: a review of a global document collection. IEEE Trans. Technol. Soc. 2(1), 31–42 (2021). https://doi.org/10.1109/TTS.2021.3052127

  76. Simon, J.: The entanglement of trust and knowledge on the web. Ethics Inf. Technol. 12(4), 343–355 (2010). https://doi.org/10.1007/s10676-010-9243-5

  77. Simon, J.: Trust. In: Pritchard, D. (ed.) Oxford bibliographies in philosophy. Oxford University Press, Oxford (2013). https://doi.org/10.1093/obo/9780195396577-0157

  78. Simpson, T.W.: What is Trust? Pac. Philos. Q. 93, 550–569 (2012). https://doi.org/10.1111/j.1468-0114.2012.01438.x

  79. Söllner, M., Hoffmann, A., Leimeister, J.M.: Why different trust relationships matter for information systems users. Eur. J. Inf. Syst. 25(3), 274–287 (2016). https://doi.org/10.1057/ejis.2015.17

  80. Stamboliev, E.: Robot Rights by David J. Gunkel. Leonardo 53(1), 110–111 (2020). https://doi.org/10.1162/leon_r_01849

  81. Sutrop, M.: Should we trust artificial intelligence? Trames 23(4), 499–522 (2019). https://doi.org/10.3176/tr.2019.4.07

  82. Taddeo, M., McCutcheon, T., Floridi, L.: Trusting artificial intelligence in cybersecurity is a double-edged sword. Nat. Mach. Intell. 1(12), 557–560 (2019). https://doi.org/10.1038/s42256-019-0109-1

  83. Tallant, J.: You can trust the ladder, but you shouldn’t. Theoria 85(2), 102–118 (2019). https://doi.org/10.1111/theo.12177

  84. Tamir, P., Zohar, A.: Anthropomorphism and teleology in reasoning about biological phenomena. Sci. Educ. 75(1), 57–67 (1991). https://doi.org/10.1002/sce.3730750106

  85. Tavani, H.T.: Can social robots qualify for moral consideration? Reframing the question about robot rights. Information 9(4), 73 (2018). https://doi.org/10.3390/info9040073

  86. Torrance, S.: Machine ethics and the idea of a more-than-human moral world. In: Anderson, M., Anderson, S. (eds.) Machine ethics, pp. 115–137. Cambridge University Press, Cambridge (2011). https://doi.org/10.1017/cbo9780511978036.011

  87. Veale, M.: A critical take on the policy recommendations of the EU High-Level Expert Group on Artificial Intelligence. Eur. J. Risk Regul. 11(1), e1 (2020). https://doi.org/10.1017/err.2019.65

  88. Vesnic-Alujevic, L., Nascimento, S., Polvora, A.: Societal and ethical impacts of artificial intelligence: critical notes on European policy frameworks. Telecommun. Policy 44(6), 101961 (2020). https://doi.org/10.1016/j.telpol.2020.101961

  89. Wallach, W., Allen, C.: Moral machines: teaching robots right from wrong. Oxford University Press, Oxford (2009)

  90. Wang, W., Qiu, L., Kim, D., Benbasat, I.: Effects of rational and social appeals of online recommendation agents on cognition- and affect-based trust. Decis. Support Syst. 86, 48–60 (2016). https://doi.org/10.1016/j.dss.2016.03.007

  91. Weydner-Volkmann, S., Feiten, L.: Trust in technology: interlocking trust concepts for privacy respecting video surveillance. J. Inf. Commun. Ethics Soc. 19(4), 506–520 (2021). https://doi.org/10.1108/jices-12-2020-0128

  92. Wilholt, T.: Bias and values in scientific research. Stud. Hist. Philos. Sci. 40(1), 92–101 (2009). https://doi.org/10.1016/j.shpsa.2008.12.005

  93. Winner, L.: Do artifacts have politics? In: Mackenzie, D., Wajcman, J. (eds.) The Social shaping of technology. Open University Press, Maidenhead (1985)

Acknowledgements

I thank two anonymous reviewers and participants of the ‘Trust and The Ethics of AI’ workshop, held virtually on June 20, 2022. All errors are my own.

Funding

The author reported that there is no funding associated with the work featured in this article.

Author information

Corresponding author

Correspondence to Ori Freiman.

Ethics declarations

Conflict of interest

No potential conflict of interest was reported by the author(s).

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Freiman, O. Making sense of the conceptual nonsense ‘trustworthy AI’. AI Ethics 3, 1351–1360 (2023). https://doi.org/10.1007/s43681-022-00241-w
