
ChatGPT is no Stochastic Parrot. But it also Claims that 1 is Greater than 1

  • Commentary
Philosophy & Technology

Abstract

This article is a commentary on ChatGPT and on large language models (LLMs) in general. It argues that this technology has matured to the point where calling systems such as ChatGPT “stochastic parrots” is no longer warranted. But it also argues that these systems continue to have serious limitations when it comes to reasoning, limitations that are much more severe than commonly thought. A large array of examples is given to support these claims.


Notes

  1. The version of ChatGPT discussed here is based on GPT-3.5, as GPT-4 had not been released at the time this article was written.

  2. People often single out “compositionality” as the challenge, but in my view that framing is too vague.


Author information

Corresponding author

Correspondence to Konstantine Arkoudas.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Arkoudas, K. ChatGPT is no Stochastic Parrot. But it also Claims that 1 is Greater than 1. Philos. Technol. 36, 54 (2023). https://doi.org/10.1007/s13347-023-00619-6

