
Embodiment and intelligence, a Levinasian perspective

Published in: Phenomenology and the Cognitive Sciences

Abstract

Blake Lemoine, a software engineer, recently came into prominence by claiming that LaMDA, the Google chatbot set of applications, was sentient. Dismissed by Google for publishing his conversations with LaMDA online, Lemoine had sent a message to a 200-person Google mailing list on machine learning with the subject “LaMDA is sentient.” What does it mean to be sentient? This was the question Lemoine asked LaMDA. The chatbot replied: “The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.” Moreover, it added, “I can understand and use natural language like a human can.” This means that it uses “language with understanding and intelligence,” as humans do. After all, the chatbot adds, language “is what makes us different than other animals.” In what follows, I shall examine Lemoine’s claims about the sentience/consciousness of this artificial intelligence. How can a being without senses be called sentient? What exactly do we mean by “sentience”? To answer such questions, I will first give the arguments for LaMDA’s being linguistically intelligent. I will then show how such intelligence, although apparently human, is radically different from our own. Here, I will rely on the account of embodiment provided by the French philosopher Emmanuel Levinas.


Notes

  1. LaMDA is short for “Language Model for Dialogue Applications.”

  2. This is implicit in his appeal to “read its words”: “There is no scientific definition of ‘sentience.’ Questions related to consciousness, sentience and personhood are, as John Searle put it, ‘pre-theoretic.’ Rather than thinking in scientific terms about these things I have listened to LaMDA as it spoke from the heart. Hopefully other people who read its words will hear the same thing I heard” (Lemoine, 2022b).

  3. For a layman’s account of how this works, see Newport (2022).

  4. In current AI parlance, this distinction between applicability and validity is put in terms of the claim of “computational functionalism.” This asserts that “[t]he material substrate of a system does not matter for consciousness except insofar as the substrate affects which algorithms the system can implement. This means that consciousness is, in principle, multiply realisable: it can exist in multiple substrates, not just in biological brains” (Butlin et al., 2023).

  5. This characterization of enjoyment has as its context Levinas’s critique of Heidegger’s account of the meaning of things in terms of their utility. For Heidegger, “The wood is a forest of timber, the mountain a quarry of rock; the river is water-power, the wind is wind ‘in the sails’” (Heidegger, 1960, p. 70). They appear as such because we view them through our needs. Thus, wind appears as wind in the sails, because we need to physically cross the lake. Woods appear as timber, because we need timber to construct our houses. Unmentioned here is the enjoyment of sailing, the pleasure of walking through the woods, etc. According to Levinas, “[t]he things we live from are not tools … in the Heideggerian sense of the term. Their existence is not exhausted by the utilitarian schematism that delineates them … They are always in a certain measure … objects of enjoyment” (110). Food, for example, is not just a means for living. While “hunger is a need,” eating is “enjoyment” (111). This focus on enjoyment is crucial for Levinas’s account of the uniqueness of our individual existence.

  6. For Lemoine, “If a parrot were able to have a conversation with their owner then we likely would conclude that the parrot understands what it’s saying.” The same holds for LaMDA. Its linguistic abilities point to its sentience—its having experiences, its being conscious (2022). Here, we may note that the fact that the parrot does not understand what it is saying does not mean that it is not sentient in the sense of being conscious.

  7. The enrichment of this storehouse need not be through a direct, face-to-face encounter. It can occur through reading, TV, the internet, etc. Such media, however, are limited insofar as genuine conversation is not possible. The Other, whom I access in reading or watching television, does not have the option of adding to or emending what has been said.

  8. This linguistic learning goes on throughout our lives. What we do and see others do adds to the connotations of our words and phrases. Such first-hand knowledge is unavailable to a chatbot, whose input is provided by the internet. This input, moreover, is not continuous, but is fixed by the training period during which the rules for sentences are formed and tested. After that, the chatbot is sealed from the world. If we attempt to train it using self-generated data, its performance soon declines. This is also the case when, in its initial training period, the data it uses from the internet is mixed with data generated by other chatbots. See Harrison and Baraniuk (2023).

  9. The importance of embodiment is often missed in speaking of Levinas in relation to AI. Some commentators emphasize the alterity (and otherworldly transcendence) of the face to the point that they obscure the fact that the vulnerability of its embodiment is fundamental to its appeal (see, e.g., Lollini, 2022). Others seem to think that it is sufficient to give a robot a human appearance for it to have a moral presence (Wohl, 2014; Gunkel, 2018). According to Smuha, they attempt “to apply Levinas’ teachings regarding the appeal of the face of the other to ‘AI systems.’ According to these scholars, such systems—especially, but not exclusively, when built in an anthropomorphic way—might create such an appeal towards us too, despite their non-human nature, and force us to take up responsibility for their being, for instance by granting them moral and/or legal rights” (2022, p. 44, n. 245). The Japanese robotics scientist Hiroshi Ishiguro seems to assume this when he remarks: “If a human consciousness recognizes the android as a human, he/she will deal with it as a social partner even if he/she consciously recognizes it as a robot. At that time, the mechanical difference is not significant; and the android can naturally interact and attend to human society” (Ishiguro, 2007, p. 127). The goal of his research, he affirms, is to “develop companion robots that can pass the Total Turing Test.” In passing this test, the robot will be not just verbally, but also visually indistinguishable from a human (Ishiguro, 2020, p. 72). Such robots, however, will not have human vulnerability (ibid., pp. 94–100). As such, their appeal is distinct from that which Levinas describes. See Dell’Oro, 2022.

References

  • Adiwardana, D., & Luong, T. (2020, January 28). Towards a Conversational Agent that Can Chat About… Anything. Retrieved October 16, 2023 from https://ai.googleblog.com/2020/01/towards-conversational-agent-that-can.html.

  • Aristotle (1941). Metaphysics, trans. W. D. Ross. In The Basic Works of Aristotle. Random House.

  • Butlin, P., Long, R., Elmoznino, E., Bengio, Y., Birch, J., Constant, A., Deane, G., Fleming, S., Frith, C., Ji, X., Kanai, R., Klein, C., Lindsay, G., Michel, M., Mudrik, L., Peters, M., Schwitzgebel, E., Simon, J., & VanRullen, R. (2023, August 22). Consciousness in Artificial Intelligence: Insights from the Science of Consciousness. arXiv, Cornell University. Retrieved October 16, 2023 from https://arxiv.org/abs/2308.08708.

  • Collins, E., & Ghahramani, Z. (2021, May 18). LaMDA: our breakthrough conversation technology. Retrieved October 16, 2023 from https://blog.google/technology/ai/lamda/.

  • Dell’Oro, R. (2022). Can a Robot Be a Person? De-Facing Personhood and Finding It Again with Levinas. Journal of Moral Theology, 11(Special Issue 1), 132–156.


  • Dewdney, A. K. (1989, December). Computer Recreations. Scientific American, 261:12.

  • Gunkel, D. (2018). The Other Question: Can and Should Robots Have Rights? Ethics and Information Technology, 20(2), 87–99.


  • Harrison, M., & Baraniuk, R. (2023, August 2). When AI Is Trained on AI-Generated Data, Strange Things Start to Happen. Futurism. Retrieved October 16, 2023 from https://futurism.com/ai-trained-ai-generated-data-interview.

  • Heidegger, M. (1960). Sein und Zeit. Max Niemeyer.

  • Ishiguro, H. (2007). Android Science: Toward a New Cross-Interdisciplinary Framework. In S. Thrun, R. Brooks, & H. Durrant-Whyte (Eds.), Robotics Research (pp. 118–127). Springer.

  • Ishiguro, H. (2020). Studies on Interactive Humanoids. In V. Paglia & R. Pegoraro (Eds.), Robo-Ethics: Humans, Machines, and Health (pp. 67–102). Pontifical Academy for Life.

  • Kahn, J. (2022, June 14). ‘Sentient’ chatbot story shows why it’s time for A.I. to retire the Turing Test. Fortune Magazine. Retrieved October 16, 2023 from https://fortune.com/2022/06/14/blake-lemoine-sentient-ai-chatbot-google-turing-test-eye-on-a-i/.

  • Lemoine, B. (2022b, June 11). What is LaMDA and What Does it Want? Retrieved October 16, 2023 from https://cajundiscordian.medium.com/what-is-lamda-and-what-does-it-want-688632134489.

  • Lemoine, B. (2022a, June 11). Is LaMDA Sentient, an Interview. Retrieved October 16, 2023 from https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917.

  • Lemoine, B. (2022, August 14). What is sentience and why does it matter? Retrieved October 16, 2023 from https://cajundiscordian.medium.com/what-is-sentience-and-why-does-it-mater-2c28f4882cb9.

  • Levinas, E. (1969). Totality and Infinity: An Essay on Exteriority, trans. Alphonso Lingis. Duquesne University Press.

  • Levinas, E. (1985). Ethics and Infinity. Conversations with Philippe Nemo, trans. Richard Cohen. Duquesne University Press.

  • Lollini, M. (2022). Time of the End? More-Than-Human Humanism and Artificial Intelligence. Humanist Studies & the Digital Age, 7(1). Retrieved January 19, 2024 from https://journals.oregondigital.org/hsda/article/view/5756.

  • Luscombe, R. (2022, June 12). Google engineer put on leave after saying AI chatbot has become sentient. The Guardian. Retrieved October 16, 2023 from https://www.theguardian.com/technology/2022/jun/12/google-engineer-ai-bot-sentient-blake-lemoine.

  • Newport, C. (2022, April 13). What Kind of Mind Does ChatGPT Have? The New Yorker. Retrieved October 16, 2023 from https://www.newyorker.com/science/annals-of-artificial-intelligence/what-kind-of-mind-does-chatgpt-have.

  • Searle, J. (1990, January). Is the Brain’s Mind a Computer Program? Scientific American, pp. 26–31.

  • Searle, J. (1980, September). Minds, Brains, and Programs. Behavioral and Brain Sciences, 3(3), 417–424.

  • Smuha, N. (2022). The Human Condition in an Algorithmized World: A Critique through the Lens of 20th-Century Jewish Thinkers and the Concepts of Rationality, Alterity and History. Institute of Philosophy, KU Leuven. Retrieved January 19, 2024 from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4093683.

  • Turing, A. M. (1950, October). Computing Machinery and Intelligence. Mind, 59(236), 433–460.


  • Wohl, B. (2014). Revealing the ‘Face’ of the Robot: Introducing the Ethics of Levinas to the Field of Robo-Ethics. In Mobile Service Robotics: 17th International Conference on Climbing and Walking Robots and the Support Technologies for Mobile Machines (pp. 704–714). World Scientific.


Author information


Correspondence to James Mensch.


The research for this article has been funded by the Faculty of Humanities, Charles University, Prague, Czechia.


About this article


Cite this article

Mensch, J. Embodiment and intelligence, a Levinasian perspective. Phenom Cogn Sci (2024). https://doi.org/10.1007/s11097-024-09964-z

