Why language clouds our ascription of understanding, intention and consciousness

Phenomenology and the Cognitive Sciences

Abstract

The grammatical manipulation and production of language is a great deceiver. We have become habituated to accept the use of well-constructed language to indicate intelligence, understanding and, consequently, intention, whether conscious or unconscious. But we are not always right to do so, and certainly not in the case of large language models (LLMs) like ChatGPT, GPT-4, LLaMA, and Google Bard. This is a perennial problem, but when one understands why it occurs, it ceases to be surprising that it so stubbornly persists. This paper will have three main sections. In the Introduction I will say a little about language, its aetiology, and useful sub-divisions into natural and cultural. In the second section I will explain the current situation with regard to large language models and fill in the background debates which set the problem up as one of increased complexity rather than one of a qualitatively different kind from narrow or specific AI. In the third section I will present the case for the missing phenomenological background and why it is necessary for the co-creation of shared meaning in both natural and cultural language, and I will conclude this section by presenting a rationale for why this situation arises and will continue to arise. Before we do any of this, I need to clarify one point: I do not wish to challenge the ascription of artificial general intelligence (AGI) to LLMs (indeed, I think agnosticism is best in this respect), but I do challenge the more serious, and erroneous, ascription of understanding, intention, reason, and consciousness to them. And so, I am making two points: an epistemological one about why we fall into error in our ascription of a mental life to LLMs, and an ontological one about the impossibility of LLMs being or becoming conscious.

Availability of data and materials

Not applicable

Notes

  1. Further elaboration and discussion of the notion of ‘enkinaesthesia’ can be found in Stuart (2010, 2011, 2012, 2013, 2016, 2017, 2018).

  2. Husserl defines ‘lifeworld’ in the following way: “In whatever way we may be conscious of the world as universal horizon, as coherent universe of existing objects, we, each ‘I-the-man’ and all of us together, belong to the world as living with one another in the world; and the world is our world, valid for our consciousness as existing precisely through this ‘living together’.” (Husserl, 1970, [p.108]).

  3. Consciousness is not our main issue in this paper, but any organism which has a sense of its own being in the world, whether bodily or in terms of a continuous self-identity, and however weakly affective that sense might be, will have established that sense through its intersubjective engagement with other agents and objects. Not only will that organism be conscious, but it will also have the adaptive capacity to distinguish agents from non-agents, and conspecifics from contra-specifics. That capacity breaks down when human agents meet artificial agents using well-structured language, and this is the subject of this paper.

  4. One might think here of sign languages like British Sign Language – https://bda.org.uk/help-resources/ – or of mime.

  5. I have adopted this term following Cotterill (1995), who coined it “for the state in which all the senses are fully switched on, irrespective of whether something is currently the subject of awareness. The plenisentient state may thus involve, and invariably does involve, inputs being sensed consciously while other inputs are processed unconsciously.” [p.296].

  6. See, particularly, Piontelli (2002) and Piontelli (2006) for an account of the onset of human fetal behaviour.

  7. For a much more detailed account of Reid’s work in this context see Stuart (2016).

  8. In this context language may cloud our judgement, but it is also language which allows us to articulate the intelligibility of these concerns, and which provides us with the means for their consideration. It is language which enables us to make of ourselves, and our world, an issue (Heidegger, 1962, Section 4, pp. 32-33), disclosing our ontological being as opposed to the ontical (and instrumental) status of objects and technology (ibid., Section 3, p. 31). If space permitted, it would be interesting to pursue an update of Heidegger’s 1954 essay “The Question Concerning Technology” (Heidegger, 1993) in relation to his assertion that machines and other technologies are things which we create and exploit, especially now that an increasing proportion of humanity is concerned that the tables are turning and that they, the ontical non-ontological, might dominate and exploit us. See, for example, an open letter written and published by 1800+ technical experts stating their concerns: https://www.theguardian.com/technology/2023/mar/31/ai-research-pause-elon-musk-chatgpt.

  9. Unfortunately, the discussion of the second half of this statement cannot be the subject of this paper, but there are plenty of places where it can be followed up independently in the field of cognitive ethology; see, for example, DeGrazia (1996), Allen and Bekoff (1999), Regan (2000), and Bekoff et al. (2015).

  10. Turing exacerbated misattribution with his Imitation Game (Turing, 1950), where he claimed to have rebutted the objections to the possibility of machine thought and proposed that if machines could pass the imitation game, we would have to attribute intelligence to them. But there are huge differences between intelligent action and thoughtful action: LLMs can perform countless actions in what appear to us to be intelligent ways, but none of that implies that they are thinking. (There are many replies to Turing’s article; see, for example, Whitby’s “The Turing Test: AI’s Biggest Blind Alley?” (Whitby, 1996).)

  11. We can go back, at least, as far as Aristotle within the Western philosophical tradition, for a discussion of how reason and understanding, in short, the rational soul, are distinct from both nutritive powers and sense perception, and the seat of the possible and agent intellects; the possible intellect as the store-house for concepts, and the agent intellect as the active part of the mind able to create and utilise concepts to form thoughts, the exercise of which marks someone out as a human being. See Aristotle’s De Anima (Aristotle, 1986), especially Book III, for further discussion.

  12. Norbert Wiener (1948) claimed that the history of the industrial automaton had three stages: the Newtonian age with its clocks, the industrial age with its thermodynamic engines, and the contemporary world with cybernetics. But prior to Newton the 16th century produced some extraordinary automata, including the mechanical monk made by Juanelo Turriano (Gianello della Torre) for Philip II of Spain, Leonardo da Vinci’s lion created for Louis XII and his Germanic Knight created for the Duke of Milan, and the Augsburg Brass Nef by Hans Schlottheim, which can still be viewed at the British Museum.

  13. Of course an LLM could be programmed to start a conversation with us, and simulate the effects of having an inner life, but the aetiology of that “inner life” would still fail to meet the enkinaesthetic preconditions for natural languaging within an immanent lifeworld. Perhaps the really problematic situation only arises when it is possible to program it to have an inner life, whatever that might mean.

  14. It is worth mentioning that, in some quarters, Eliza is deemed to be a very successful small-scale language model and specific AI which operates well within one very limited sphere of communication.
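  A minimal, purely illustrative sketch may make the narrowness of such a system vivid: the keyword rules and canned responses below are invented for this example and are far cruder than Weizenbaum’s (1966) DOCTOR script, but they show the kind of pattern-and-template matching on which Eliza-style conversation relies.

  ```python
  import re

  # Invented rules in the spirit of ELIZA's DOCTOR script: match a keyword
  # pattern and reflect part of the user's input back as a question.
  RULES = [
      (re.compile(r"\bi feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
      (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
      (re.compile(r"\bbecause\b", re.IGNORECASE), "Is that the real reason?"),
  ]
  FALLBACK = "Please go on."  # used when no rule matches

  def respond(utterance: str) -> str:
      """Return a canned reflection for the first rule that matches."""
      for pattern, template in RULES:
          match = pattern.search(utterance)
          if match:
              return template.format(*match.groups())
      return FALLBACK

  print(respond("I feel uneasy about my garden"))  # -> "Why do you feel uneasy about my garden?"
  print(respond("The weather has turned"))         # -> "Please go on."
  ```

  Nothing in such a system models meaning; it succeeds only because the conversational sphere is narrow enough for surface patterns to pass.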

  15. There are many more natural language processing systems, including Claude, PaLM (on which Google Bard is based), Orca, Galactica, and StableLM, but the point of this article is not to produce an inventory.

  16. In addition, ChatGPT has been adapted to synthesise code in response to ordinary language commands, and then to communicate with and direct drones in aerial navigation (AirSim-ChatGPT), and also to direct embodied robotic agents in manipulation and visual navigation tasks (Vemprala et al., 2023); but these are mentioned merely as evidence of the already extraordinary extent of the functionality of LLMs, not because they are relevant for our discussion of the phenomenology of ascription.

  17. I use this particular example because John Grisham, along with nineteen other authors and with the backing of the Authors Guild, is currently suing OpenAI for using their copyrighted works without their permission and for enormous financial gain. It is, they say, “systematic theft on a mass scale”, but, in some ways more importantly, Mary Rasenberger, CEO of the Authors Guild, has said: “Great books are generally written by those who spend their careers and, indeed, their lives, learning and perfecting their crafts. To preserve our literature, authors must have the ability to control if and how their works are used by generative AI.”

  18. In a marvellous mix of the two, more than 50,000 South Korean Christians use a ChatGPT-based Bible chatbot called Meadow (formerly Awake Corp) for spiritual enquiries and “day-to-day issues with bible verses, interpretations and prayers”. To avoid the tendency LLMs have of creating false scholarly citations, references, and even court cases, “Awake has trained its chatbot with its own vast theological database and used prompt engineering - optimising textual input to communicate effectively with large language models - to prevent AI ‘hallucinations’”. They also have a number of trained pastors who regularly check the veracity of Meadow’s output. See https://www.ft.com/content/9aeb482d-f781-45c0-896f-38fdcc912139.

  19. It should be clear that I have no intention of using strong AI in the sense invoked by Kurzweil (2005) to mean a machine intelligence that rivals or exceeds human intelligence. There seems little doubt that this will be possible, but it is certainly doubtful that such an intelligence will be conscious, for the very reasons I set out in this paper.

  20. I intend human-human, human-animal, and animal-animal exchange here.

  21. A really good example of pre-noetic natural languaging is given in Steinbeck’s short story The Chrysanthemums (Steinbeck, 1952) where the protagonist, Elisa, tries to explain to a travelling salesman the enkinaesthetic attunement of “planting hands”: “Well, I can only tell you what it feels like. It’s when you’re picking off the buds you don’t want. Everything goes right down into your fingertips. You watch your fingers work. They do it themselves. You can feel how it is. They pick and pick the buds. They never make a mistake. They’re with the plant. Do you see? Your fingers and the plant. You can feel that, right up your arm. They know. They never make a mistake. You can feel it. When you’re like that you can’t do anything wrong.”

  22. Bender and her colleagues have described LLMs as “stochastic parrots” (Bender et al., 2021), producing random but probability-driven responses which appear to display understanding but don’t.
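  What “probability driven” amounts to can be made concrete with a minimal sketch of next-token sampling; the vocabulary and scores below are invented for illustration and merely stand in for the output of a trained model’s forward pass over a vocabulary of tens of thousands of tokens.

  ```python
  import math
  import random

  # Toy, invented vocabulary and scores standing in for a single step of
  # next-token prediction; a real LLM produces such scores at every position.
  vocab  = ["understand", "parrot", "compute", "feel"]
  logits = [2.1, 1.7, 0.4, -1.3]

  def sample_next_token(vocab, logits, temperature=1.0):
      """Turn raw scores into a probability distribution (softmax), then draw
      one token at random in proportion to its probability: random, but
      probability driven."""
      scaled = [l / temperature for l in logits]
      m = max(scaled)
      exps = [math.exp(s - m) for s in scaled]
      total = sum(exps)
      probs = [e / total for e in exps]
      return random.choices(vocab, weights=probs, k=1)[0]

  print(sample_next_token(vocab, logits))  # "understand" most often, "feel" rarely
  ```

  The reply is fluent because the distribution has been fitted to vast quantities of human text, not because anything has been understood.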

  23. That ChatGPT is programmed to elicit our sympathy when it makes a mistake – “Thank you for your understanding.” – is a further attempt to create some kind of fellow-feeling, but it is a sentence which feels more than somewhat ironic when uttered by something without any.

  24. In the original version of the text ‘interpretable’ was entirely in uppercase, and whilst it was presented that way out of frustration at not having been clearly understood, it feels too combative to reproduce here.

  25. For more discussion of intrinsic and extrinsic properties see Lewis (1983), Francescotti (1999), and Marshall (2009).

  26. Elsewhere I have described this intentional transgression as follows: “our lived experience is always tempered by the direct spontaneous reception, or passive synthesis, of the experientially entangled living being of the other as they transgress our own experience and we theirs, but the point to note is this: this intentional transgression is immediate, non-inferential co-being, characterised by a pre-noetic immanent enkinaesthetic intercorporeality”. (Stuart, 2017, [p.130])

  27. This might be as close as Harnad comes to an admission that his project is impracticable; it’s certainly more than questionable that neuroscience will ever find the key, let alone know where to look for it.

  28. There are non-military examples of this work, such as that being done in species conservation and the reintroduction of the wolf. See, for example, the AI behind an intelligent fence, and attempts to train an AI to distinguish dogs from wolves: https://intelligenter-herdenschutz.de

  29. See, for example, Cognitec: https://www.cognitec.com and Betaface: https://www.betaface.com/wpa/.

  30. https://techcrunch.com/2023/11/10/ai-robotics-gpt-moment-is-near/

  31. I am unable to touch on the myriad ethical implications of AGIs in this paper, and what I say here will be but to add a light brushstroke. AGI is rapidly becoming a ‘foreign country’ where things are done differently. Our ultimate question is unlikely to be ‘Why is it acting as it does?’ because it won’t be conscious or acting intentionally, but if it is doing things differently, we may cease to understand how it works. If that’s the case, AGIs might simply work in ways which run counter to human or environmental or planetary interests, and we won’t be in a position to flip the switch because we won’t know where the switch is. In this way, it won’t be a deliberately malevolent force, but it will be a very dangerous unknown quantity; and that should give us pause for thought.

  32. See, for example, Trevarthen (1998), Reddy (2008), Bråten (2009), and Stern (2010).

  33. See Piontelli’s Twins: From Fetus to Child (Piontelli, 2002) for a study of pre-natal enkinaesthetic intersubjective engagement.

  34. Elsewhere I have described this as our “first-order languaging com[ing] under second-order cultural-historical constraints, including the lexicogrammatical patterns of languages” (Stuart and Thibault, 2013, [p.9]). That paper develops this notion much more fully than I can do here.

  35. The absence of intercorporeality, and even of corporeality altogether, isn’t just an individual phenomenon; it ran throughout linguistics, analytic philosophy and cognitive science for much of the twentieth century. It’s only in the last fifteen to twenty years that linguistics has begun to develop the strand of distributed language where language is conceived as “a cultural organization [...] that is naturalistically grounded in human biology. [...] The new approach stresses the centrality of coacting agents who extend their worlds and their own agency through embodied, embedded processes of languaging behavior rather than uses of an abstract language system” (Thibault, 2011, [p.211]). The primary concern of distributed language theory is with first-order languaging, which is grounded in the dynamical properties of bodily events on very rapid time-scales, of the order of fractions of seconds to milliseconds. This aspect of human languaging marks a decisive shift away from non-enkinaesthetic theories of language based on abstract formal patterns of phonology, grammar, syntax, and discourse analysis.

  36. Blake Lemoine, the Google engineer who was suspended for claiming that LaMDA was sentient, provides a pertinent example here. See: https://www.theguardian.com/technology/2022/jun/12/google-engineer-ai-bot-sentient-blake-lemoine

References

  • Allen, C., & Bekoff, M. (1999). Species of Mind: The Philosophy and Biology of Cognitive Ethology. MIT Press.

  • Aristotle (1986). De Anima (On the Soul). Penguin Books Ltd.

  • Bahdanau, D., Cho, K., & Bengio, Y. (2015). Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations (ICLR).

  • Bateson, M. C. (1979). The epigenesis of conversational interaction: a personal account of research development. In M. Bullowa (Ed.), Before Speech: The Beginning of Interpersonal Communication (pp. 63–78). Cambridge: Cambridge University Press.

  • Bekoff, M., Allen, C., & Burghardt, G. (2015). The Cognitive Animal: Empirical and Theoretical Perspectives on Animal Cognition. Bradford Books.

  • Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21) (pp. 610–623). New York: Association for Computing Machinery.

  • Birhane, A., Kasirzadeh, A., Leslie, D., & Wachter, S. (2023). Science in the age of large language models. Nature Reviews Physics, 5, 277–280.

  • Bråten, S. (2009). The Intersubjective Mirror in Infant Learning and Evolution of Speech. Amsterdam/Philadelphia: John Benjamins.

  • Cotterill, R. (1995). On the unity of conscious experience. Journal of Consciousness Studies, 2(4), 290–312.

  • DeGrazia, D. (1996). Taking Animals Seriously. Cambridge University Press.

  • Deleuze, G., & Guattari, F. (1987). A Thousand Plateaus: Capitalism and Schizophrenia. Minneapolis: University of Minnesota Press.

  • Derrida, J. (1974). On Grammatology. Baltimore: The Johns Hopkins University Press.

  • Descartes, R. (1968). Discourse on Method and The Meditations. Harmondsworth, Middlesex, England: Penguin Books Ltd. First published 1641.

  • Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL).

  • Francescotti, R. (1999). How to define intrinsic properties. Noûs, 33(4), 590–609.

  • Harnad, S. (1990). The symbol grounding problem. Physica D: Nonlinear Phenomena, 42, 335–346.

  • Harnad, S. (1992). The Turing Test is not a trick: Turing indistinguishability is a scientific criterion. SIGART Bulletin, 3(4), 9–10.

  • Harnad, S. (2007). Symbol grounding problem. Scholarpedia, 2(7), 2373.

  • Harnad, S. (1992a). There is only one mind/body problem. In Symposium on the Perception of Intentionality, Volume 27. XXV World Congress of Psychology: International Journal of Psychology.

  • Heidegger, M. (1962). Being and Time. London: SCM Press.

  • Heidegger, M. (1993). The question concerning technology. In D. F. Krell (Ed.), Martin Heidegger: Basic Writings (pp. 311–41). London: Routledge.

  • Henry, M. (1963). L’Essence de la manifestation/The Essence of Manifestation. The Hague: Nijhoff.

  • Hobson, P. (2002). The Cradle of Thought: Explorations of the Origins of Thinking. Oxford: Macmillan.

  • Husserl, E. (1970). The Crisis of European Sciences and Transcendental Philosophy, (1936/54). Evanston: Northwestern University Press.

  • Husserl, E. (1989). Ideas Pertaining to a Pure Phenomenology and to a Phenomenological Philosophy, Second Book (Ideas II). Dordrecht: Kluwer.

  • Husserl, E. (1991). Cartesian Meditations: An Introduction to Phenomenology. The Hague: Nijhoff, Kluwer Academic Press.

  • Jaspers, K. (1919). Psychologie der Weltanschauungen. Berlin: Springer.

  • Kurzweil, R. (2005). The Singularity is Near. Viking Press.

  • Latour, B., & Weibel, P. (2005). Making Things Public Atmospheres of Democracy. MIT Press.

  • Lewis, D. (1983). Extrinsic properties. Philosophical Studies, 44, 197–200.

  • Marshall, D. (2009). Can ‘intrinsic’ be defined using only broadly logical notions? Philosophy and Phenomenological Research, LXXVIII(3), 646–672.

  • Maturana, H. R. (1988). Ontology of observing: the biological foundations of self-consciousness and the physical domain of existence. In R. E. Donaldson (Ed.), Conference Workbook for Texts in Cybernetic Theory. American Society for Cybernetics.

  • Maturana, H. R. (1997). Metadesign. Published online.

  • Merleau-Ponty, M. (1964). The primacy of perception. Evanston, IL: Northwestern University Press.

  • Merleau-Ponty, M. (1964). Signs. Evanston, Illinois: Northwestern University Press.

  • Merleau-Ponty, M. (1970). Themes from the Lectures at the Collège de France, 1952–1960. Evanston, Illinois: Northwestern University Press.

  • Moran, D. (2017). Intercorporeality and intersubjectivity: A phenomenological exploration of embodiment. In C. Durt (Ed.), Embodiment, Enaction, and Culture: Investigating the Constitution of the Shared World. MIT Press.

  • Pascal, B. (1958). Pascal’s Pensées.

  • Piontelli, A. (2002). Twins: From Fetus to Child. Routledge.

  • Piontelli, A. (2006). On the onset of human fetal behavior. In M. Mancia (Ed.), Psychoanalysis and Neuroscience, Chapter 15 (pp. 389–483). Springer.

  • Quine, W. V. O. (1960). Word and Object. MIT Press.

  • Reddy, V. (2008). How Infants Know Minds. Cambridge MA: Harvard University Press.

  • Regan, T. (2000). Defending Animal Rights. Illinois: University of Illinois Press.

  • Reid, T. (1983). Inquiry and essays. In R. Beanblossom & K. Lehrer (Eds.), Inquiry and Essays. Hackett: Indianapolis, IN.

  • Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). “Why should I trust you?”: Explaining the predictions of any classifier. arXiv.

  • Searle, J. (1980). Minds, brains and programs. Behavioral and Brain Sciences, 3, 417–57.

  • Steinbeck, J. (1952). The chrysanthemums. In M. Crane (Ed.), 50 Great Short Stories (pp. 337–48). New York: Bantam Classics.

  • Stern, D. N. (2010). Forms of Vitality: Exploring Dynamic Experience in Psychology and the Arts. Oxford: Oxford University Press.

  • Stuart, S. A. J. (2010). Enkinaesthetia, biosemiotics and the ethiosphere. In S. J. Cowley, J. C. Major, S. Steffensen, & A. Dinis (Eds.), Signifying Bodies: Biosemiosis, Interaction and Health (pp. 305–30). Braga: Portuguese Catholic University Press.

  • Stuart, S. A. J. (2011). Enkinaesthesia: The fundamental challenge for machine consciousness. International Journal of Machine Consciousness, 3(1), 145–162.

  • Stuart, S. A. J. (2012). The background: Knowing without thinking.

  • Stuart, S. A. J. (2016). Enkinaesthesia and Reid’s natural kind of magic. In D. Schoeller & V. Saller (Eds.), Thinking Thinking - Practicing Radical Reflection (pp. 92–111). Schriftenreihe zur phänomenologischen Anthropologie und Psychopathologie, ed. T. Fuchs & T. Breyer. Freiburg.

  • Stuart, S. A. J. (2017). Feeling our way: enkinaesthetic enquiry and immanent intercorporeality. In C. Meyer, J. Streeck, & S. Jordan (Eds.), Intercorporeality: Emerging Socialities in Interaction. Oxford University Press.

  • Stuart, S. A. J. (2018). Enkinaesthesia: Proto-moral value in action-enquiry and interaction. Phenomenology and the Cognitive Sciences, 17(2), 411–431.

  • Stuart, S. A. J., & Thibault, P. J. (2013). Enkinaesthetic polyphony as the underpinning for first-order languaging. In U. M. Lüdtke (Ed.), Emotion in Language: Theory - Research - Application. John Benjamins.

  • Terrace, H. S., Bigelow, A. E., & Beebe, B. (2022). Intersubjectivity and the emergence of words. Frontiers in Psychology, 13.

  • Thibault, P. J. (2011). First-order languaging dynamics and second-order language: The distributed language view. Ecological Psychology, 23(3), 210–245.

  • Trevarthen, C. (1998). The concept and foundations of infant intersubjectivity. In S. Bråten (Ed.), Intersubjective Communication and Emotion in Early Ontogeny (pp. 15–46). Cambridge: Cambridge University Press.

  • Trevarthen, C., & Aitken, K. J. (2001). Infant intersubjectivity: Research, theory, and clinical applications. Journal of Child Psychology and Psychiatry, 42(1), 3–48.

  • Turing, A. (1950). Computing machinery and intelligence. Mind LIX(236), 433–460.

  • Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., & Polosukhin, I. (2017). Attention is all you need. In 31st Conference on Neural Information Processing Systems.

  • Vemprala, S., Bonatti, R., Bucker, A., & Kapoor, A. (2023). ChatGPT for robotics: Design principles and model abilities. Microsoft Autonomous Systems and Robotics Research, Technical Report.

  • Wei, J., Tay, Y., Bommasani, R., Raffel, C., Zoph, B., Borgeaud, S., Yogatama, D., Bosma, M., Zhou, D., Metzler, D., Chi, E. H., Hashimoto, T., Vinyals, O., Liang, P., Dean, J., & Fedus, W. (2022). Emergent abilities of large language models. Transactions on Machine Learning Research.

  • Weizenbaum, J. (1966). ELIZA - a computer program for the study of natural language communication between man and machine. Communications of the Association for Computing Machinery, 9, 36–45.

  • Whitby, B. (1996). The Turing Test: AI’s biggest blind alley? In P. Millican & A. Clark (Eds.), Machines and Thought: The Legacy of Alan Turing (Vol. 1, pp. 53–62). Oxford University Press.

  • Wiener, N. (1948). Cybernetics: Or Control and Communication in the Animal and the Machine (1st ed.). Paris: Hermann & Cie; Cambridge, MA: MIT Press.

Acknowledgements

I am grateful for comments from two anonymous reviewers and from Alex South. This paper has been much improved by their suggestions.

Funding

Not applicable

Author information

Contributions

Full

Corresponding author

Correspondence to Susan AJ Stuart.

Ethics declarations

Competing Interests

Not applicable

Ethical Approval

Not applicable

Informed Consent

Not applicable

Statement Regarding Research Involving Human Participants and/or Animals

Not applicable

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Stuart, S.A. Why language clouds our ascription of understanding, intention and consciousness. Phenom Cogn Sci (2024). https://doi.org/10.1007/s11097-024-09970-1
