Abstract
Recent advances in artificial intelligence, especially in large language models (LLMs), have increasingly shown glimpses of human-like intelligence. This has led to bold claims that these systems are no longer a mere “it” but now a “who,” a kind of person deserving respect. In this paper, I argue that this view rests on a Cartesian account of personhood, on which someone counts as a person in virtue of their cognitive sophistication and their ability to solve common-sense reasoning problems. I contrast this with a different, social account of personhood, on which an agent is a person if they are autonomous, responsive to norms, and culpable for their actions. On this latter account, I show that LLMs are not person-like, as evidenced by their propensity for dishonesty, inconsistency, and offensiveness. Moreover, I argue that current LLMs, given the way they are designed and trained, cannot be persons, whether social or Cartesian. The upshot is that contemporary LLMs are not, and never will be, persons.
Data availability
Not applicable.
Notes
It is worth noting that both accounts are fundamentally anthropocentric. This may suggest we need a further notion of personhood, one not tied to either cognitive or social capacities.
Acknowledgements
This paper benefited immensely from conversations with Zed Adams, Adam Gies, and Joe Lemelin. It also would not have been possible without conversations with, and support from, Yann LeCun.
Cite this article
Browning, J. “Personhood and AI: Why large language models don’t understand us”. AI & Soc 39, 2499–2506 (2024). https://doi.org/10.1007/s00146-023-01724-y