“Personhood and AI: Why large language models don’t understand us”

  • Open Forum
  • AI & SOCIETY

Abstract

Recent artificial intelligence advances, especially those of large language models (LLMs), have increasingly shown glimpses of human-like intelligence. This has led to bold claims that these systems are no longer a mere “it” but now a “who,” a kind of person deserving respect. In this paper, I argue that this view depends on a Cartesian account of personhood, on which identifying someone as a person is based on their cognitive sophistication and ability to address common-sense reasoning problems. I contrast this with a different account of personhood, one where an agent is a person if they are autonomous, responsive to norms, and culpable for their actions. On this latter account, I show that LLMs are not person-like, as evidenced by their propensity for dishonesty, inconsistency, and offensiveness. Moreover, I argue current LLMs, given the way they are designed and trained, cannot be persons—either social or Cartesian. The upshot is that contemporary LLMs are not, and never will be, persons.

Data availability

Not applicable.

Notes

  1. It is worth noting that both accounts are fundamentally anthropocentric. This may suggest we need a further notion of personhood, one that is not tied up with either cognitive or social capacities.

Acknowledgements

This paper benefited immensely from conversation with Zed Adams, Adam Gies, and Joe Lemelin. It also would not have been possible without conversations with, and support from, Yann LeCun.

Author information

Corresponding author

Correspondence to Jacob Browning.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Browning, J. “Personhood and AI: Why large language models don’t understand us”. AI & Soc 39, 2499–2506 (2024). https://doi.org/10.1007/s00146-023-01724-y
