In this section I consider whether differences in conception or cognitive significance, the two aspects of concepts other than content, would entail that chatbots could not share human concepts in the way required for understanding. In each case I appeal to arguments from the recent literature in philosophy of mind.
On the topic of conceptions, one of the central doctrines of social externalism is that it is possible to possess a concept despite having an incomplete or flawed understanding of the phenomenon concerned, and hence a different conception from that of experts. One of Burge’s classic cases is of a person who lacks the theoretical knowledge that arthritis affects only the joints, and so suspects that it affects their thigh (1979); another is of a person who believes that sofas are religious artefacts (1986). It is a standard view among social externalists that these characters possess the concepts arthritis and sofa. They would also take me to possess the concept Higgs boson, even though I have very little knowledge of particle physics (Soames, 1989). However, some theorists do claim that concept-sharing can fail where thinkers have sufficiently diminished understanding or radically divergent conceptions (Brown, 2000; Goldberg, 2009).
It is therefore possible to imagine an objection claiming that although incomplete understanding is generally compatible with concept possession, incomplete understanding of the specific kind exemplified by chatbots is not. Take the case of the concept headache. Chatbots cannot experience pain, so their understanding of headaches would be incomplete: they could not recognise headaches in themselves, unlike (presumably) all human thinkers who possess this concept. However, human headaches would have significance for medical triage chatbots, since the functions of such chatbots concern them, as I argued in Sect. 2. Such chatbots might accurately represent many details about the possible causes, effects, and treatments of headaches, and might be highly capable of identifying them in others through the Fodor-Millikan route. This is the reverse of the typical human case of incomplete understanding, since it combines excellent textbook knowledge with the complete absence of ‘grounding’ perceptual familiarity with even related phenomena. Empiricists about concepts, such as Prinz (2002), might deny that possession of some concepts is compatible with this distinctive form of incomplete understanding.
If there are any concepts which chatbots could not possess, these would include concepts for bodily sensations and properties of perceptual experiences (or perhaps directly perceptible properties of objects), such as pain or red. If these concepts were inaccessible to them, it might then follow that they could not possess closely-related concepts such as arthritis or rash. But no argument on the lines suggested could plausibly conclude that chatbots could not possess concepts of the latter kind while conceding that they could possess ones of the former kind. So one issue at stake here is whether there are any phenomenal concepts (Loar, 1997; Papineau, 2002). Phenomenal concepts are usually defined as concepts that can only be possessed by thinkers who have had conscious experiences of particular kinds, such as pain or the visual experience of the colour red. If there are phenomenal concepts, then chatbots cannot possess them; if there are none, then the fact that chatbots do not have human-like conscious perceptual experiences does not bar them from possessing the same concepts as humans.
Ball (2009) argues on social externalist grounds that there are no phenomenal concepts. He observes that if the English word ‘red’ expresses the concept red, then this concept can be shared via the mechanisms described in the previous section, provided that social externalism is true. If this is correct, then red is not a phenomenal concept, because it can be possessed by someone who has never seen the colour red (or experienced, e.g., a red afterimage). Mary, the colour scientist in Jackson’s (1982) Knowledge Argument who lives in a black-and-white room, would be able to possess the concept red before leaving the room, thanks to her familiarity with scientific works and other English texts which discuss this colour. Phenomenal concept theorists might seek to reject this position by arguing that there are two concepts expressible by ‘red’, of which one is a phenomenal concept (call it redP) and the other is not (redN), and that the mechanisms of social externalism apply only to redN. This is their only alternative, since there must be some concept that Mary expresses by using ‘red’, and on their view it cannot be a phenomenal one. But this move has a range of implausible consequences. For instance, suppose that before leaving the room Mary has the thought that she would express by saying that seeing red is a phenomenal state. On the phenomenal concept theorist’s view, this thought must involve redN. But someone living in a multicoloured environment could have a thought that they would express in the same way, so the phenomenal concept theorist would have to say one of three things: that these are distinct thoughts; that the second person has two different concepts for which they would use the word ‘red’; or that the second person also lacks redP. None of these options is attractive, particularly in the context of social externalism.
Even if Ball’s argument fails and there are phenomenal concepts, Mary would still possess (and chatbots could still possess) concepts such as redN. In that case there would be reason to believe that such concepts suffice for understanding words like ‘red’. Mary is supposed to be an expert colour scientist, so presumably she would understand the sentence ‘rashes are typically red’, perhaps by employing her knowledge that ‘red’ picks out a colour with certain specific cultural associations and a dominant wavelength in the range 625–740 nm. There are also, of course, many people in real life with severe visual impairments, and it is far from clear that they cannot understand the word ‘red’. Helen Keller was deaf and blind from infancy, but she certainly understood English: she learnt to read Braille and write, earned a degree, and became an author, activist and public speaker (Stich, 1983).
A similar response can be given to an alternative line of thought which might also suggest that chatbots could not understand human language due to differences in conceptions. Theorists of embodied cognition suggest that the range of concepts we are capable of possessing depends on the form of our bodies and our perceptual apparatus (Shapiro, 2019). One version of this claim would be that thinkers with very different bodies could not share concepts with the same content, but social externalism provides us with an argument against this view. An alternative would be that different bodies lead to radically different conceptions, such that thinkers possess different concepts with the same content. This would be analogous to the case of redP and redN, except that the different conceptions would arise from differences in body form rather than in conscious experience. But again, even if this is true, it is doubtful whether it would prevent understanding. For example, evidence for the embodiment of concepts comes from Pulvermüller’s (2005) finding that reading the word ‘kick’ causes activation in areas of motor cortex associated with the legs, so it might be suggested that a link with a mechanism for performing the action of kicking is an essential component of the concept kick. But someone born without legs, such as the athlete Zion Clark, would lack such a mechanism yet could certainly understand the sentence ‘Smith kicked the winning penalty’.
It might further be objected that Mary’s or Keller’s conception of the colour red, and Clark’s conception of the action of kicking, are less different from those of most humans than a chatbot’s would be. This may be so, but we still have good reasons to expect chatbots to be able to understand: they could have concepts with the same contents as ours and significant overlaps in conception, and differences in conception do not in general seem to entail an inability to understand.
Turning to the topic of cognitive significance, the potential objection to the claim that chatbots can possess the concepts necessary to understand human languages would be that chatbots’ concepts differ from ours in this respect. Unlike in the case of conceptions, it is not obvious why one would expect the cognitive significance of chatbots’ concepts to differ from ours. It is not even obvious what this claim amounts to, because cognitive significance is defined intrapersonally in the first instance. However, it does seem that failures of understanding are possible for reasons other than differences in content, and these failures could plausibly be described in terms of cognitive significance. Loar (1976) describes a case in which two people are each aware of a third man in two different ways: they are currently watching him being interviewed on television, and they also see him on the train each morning. They do not know that the man on television is the same as the man on the train. If one says ‘He is a stockbroker’ to the other, intending to refer to the man on television, the other may take this to be a reference to the man on the train. This would be a misunderstanding even though they are thinking of the same man.
Loar’s case does not seem likely to point towards widespread or chatbot-specific problems. Prosser (2018) argues that when the use of shared words facilitates concept-sharing, the concepts thus shared will be alike in cognitive significance as well as content (although he talks of sharing ‘modes of presentation’ rather than cognitive significance). Prosser distinguishes between cases in which communication requires interpretation and cases in which it is transparent. When utterances include indexicals, demonstratives, or perhaps words with common homonyms, hearers must interpret these words, and this generates the possibility of Loar-style cases. But otherwise we typically take it for granted that words have the same meanings in the mouths of our interlocutors as they would in ours. This is simultaneously necessary for, and made possible by, the phenomenon of concept-sharing through language. What Prosser means by calling communication ‘transparent’ in such cases is that we do not need to rely on interpretative premises, implicitly or explicitly, in reasoning from our interlocutors’ utterances. For example, if someone says to me, ‘Beech trees are native to the UK’, I can infer that beeches have been present in the UK for thousands of years without employing a premise such as the speaker is using ‘beech trees’ to refer to beech trees. Prosser’s analysis implies that if the concept-sharing described by social externalism extends to chatbots, then the cognitive significance of chatbots’ concepts will be equivalent to that of human concepts wherever these are associated with a shared word.