Helen Keller Was Never in a Chinese Room

Ford, J.
Department of Philosophy, University of Minnesota Duluth

Minds and Machines (2011), Volume 21, Issue 1, pp. 57–72
DOI: 10.1007/s11023-010-9220-0
Abstract

William Rapaport, in “How Helen Keller used syntactic semantics to escape from a Chinese Room,” (Rapaport 2006), argues that Helen Keller was in a sort of Chinese Room, and that her subsequent development of natural language fluency illustrates the flaws in Searle’s famous Chinese Room Argument and provides a method for developing computers that have genuine semantics (and intentionality). I contend that his argument fails. In setting the problem, Rapaport uses his own preferred definitions of semantics and syntax, but he does not translate Searle’s Chinese Room argument into that idiom before attacking it. Once the Chinese Room is translated into Rapaport’s idiom (in a manner that preserves the distinction between meaningful representations and uninterpreted symbols), I demonstrate how Rapaport’s argument fails to defeat the CRA. This failure brings a crucial element of the Chinese Room Argument to the fore: the person in the Chinese Room is prevented from connecting the Chinese symbols to his/her own meaningful experiences and memories. This issue must be addressed before any victory over the CRA is announced.

Keywords

Chinese room argument, Searle, Helen Keller, Computationalism, Meaning, Experience, Rapaport

Introduction and Preliminary Disambiguations

In “How Helen Keller used syntactic semantics to escape from a Chinese Room,” (Rapaport 2006), Rapaport presents an account of syntax and semantics which, he claims, will allow his favored artificial intelligence architecture (SNePS) to overcome Searle’s famous Chinese Room Argument (CRA hereafter). He also claims that Helen Keller was in a situation relevantly similar to the Chinese Room, and that she used a similar method when she acquired natural language. I will analyze the structure of Rapaport’s argument, at a level of generality above the actual details of the SNePS architecture (I’ll focus on Helen Keller, and take it for granted that if Rapaport is correct about her, his argument about the virtues of SNePS will proceed). The initial formulation of Rapaport’s argument turns on the particular definitions of “syntax” and “semantics” that he prefers. That flaw, however, can be remedied: the CRA can be translated into Rapaport’s preferred idiom. Once that is done, I will demonstrate that the CRA persists, and that the person in the Chinese Room will not acquire understanding in virtue of running a computer program. Further, I will show that Rapaport’s description of Helen Keller’s experiences, and his mistaken characterization of them as Chinese-Room-like, reveals a basic presupposition that begs the question against the CRA. The larger lesson that emerges from this analysis is that one essential feature of the CRA is the isolation between the meaningful experiences (including memories) of the person in the Room and the Chinese symbols (which remain uninterpreted and meaningless to the person in the Room). Thus, this examination of Rapaport’s argument will shed light on a whole category of responses to the CRA.

Before we dive into Rapaport, I will briefly recount the basic features of the CRA and discuss a couple of potential sources of confusion. In the Chinese Room, we have a native English speaker who knows no Chinese (Searle-in-the-room1); a big book of instructions on how to manipulate Chinese symbols and respond to Chinese messages sent into the room (the rules are in English, but contain no English-Chinese translations, so Searle-in-the-room has to identify the symbols by their shapes alone; they also include instructions to change the rule-book, to simulate learning, avoid repetitive responses, etc.); a lot of bins of Chinese symbols from which to assemble responses; an in-slot; and an out-slot. Searle grants, for the sake of argument, that the program (the big book of instructions) allows the person in the room to produce appropriate responses to the inputs, so a competent Chinese speaker would be fully warranted in concluding that the room (or whoever is in the room) has mastered the Chinese language. For all that, Searle-in-the-room will not understand any Chinese (contrast the way that Searle-in-the-room responds to a question in Chinese with the way that he responds to the very same question presented in English). Since Searle-in-the-room could never come to understand Chinese by virtue of hand-working a program (syntactic symbol manipulation), neither could a computer. Since that is all computers ever get (by virtue of being computers), no computer has understanding simply in virtue of being a computer running a program that passes the Turing Test. Searle is not claiming that computers couldn’t have mental states, only that if they do have mental states, it will not be solely in virtue of running a program (engaging in syntactic manipulations of uninterpreted symbols). Hence Searle’s famous slogan: “Syntax is not sufficient for semantics”, to which he sometimes adds, by way of clarification, statements like the following: “…the syntax of the program is not sufficient for the understanding of the semantics of a language, whether conscious or unconscious,” (Searle 1997, p. 128).

The target of the CRA is a particular mental state, understanding Chinese, and this is a legitimate target, since Strong AI claims that instantiating and running the right program would be sufficient to create any particular mental state. Rapaport does take up that challenge, claiming that his system, beginning only with a set of uninterpreted symbols and the syntactic rules for relating them to each other, will produce both semantic content and the understanding of the meanings of the symbols. For instance, he claims, “The base case must be a system that is understood in terms of itself, i.e., syntactically,” (Rapaport 2006, p. 387, my emphasis; also see p. 431; a more complete explanation follows in the next section). Would a purely syntactic procedure really produce understanding? Would it produce semantic content? Are those two questions the same? That is the main source of potential confusion that I would like to address next.

There is a sense in which the Chinese symbols have their semantic contents (their meanings), even if Searle is correct and no understanding would be produced in virtue of running the program. Some philosophers might well insist on this sense of meaning as the central one. Would that affect the CRA? It might change how the problem is phrased, but I think it need not cause undue confusion.

Programs start with syntax, respond to symbols only in virtue of their formal features, and use rules that neither use nor mention the meanings that those symbols might have. In order to refute the CRA, Rapaport (or any Strong AI proponent) would have to show that running the program would either produce semantic content or allow the underlying semantic content to emerge, so that the system would be able to access the semantic content as such, in addition to following the syntactic rules. If semantic content emerged and became accessible to the system as meaningful (the symbols become interpreted), then understanding would become possible (if not guaranteed). Searle claims that hand-working a program would not produce the mental state in question (understanding some particular string of Chinese symbols), even in a system (a human being) which undoubtedly has conscious mental states and representations for most, if not all, of the semantic content of the Chinese symbols that he manipulates.

While Searle and Rapaport have different basic accounts of the source of semantic content, both are internalists (of different sorts, of course). For Searle, only the mind has intrinsic intentionality (and intrinsic semantic content). Our words have meaning because we ascribe those meanings to the words–they have derived intentionality.2 For Rapaport, all of our semantic content must emerge from syntactical relations (I’ll explain further below). Searle, then, can accommodate the sense of meaning in which the Chinese characters already bear semantic content, while Rapaport would have to reject it. If the Chinese characters have any semantic content prior to the syntactic processing, then the semantic content does not depend on syntactic processing, contrary to Rapaport’s main claim (he does put forward a system of syntactic semantics). I hope that these introductory remarks help to set the stage for our investigation of Rapaport’s novel attempt to defeat the CRA.

The Structure of Rapaport’s Argument

I believe we should begin with Rapaport’s preferred definitions of syntax and semantics: “Semantics is the study of relations between two sets, whereas syntax is the study of relations among the members of a single set (Morris 1938).” (Rapaport 2006, p. 386). Elsewhere, Rapaport calls this the “classical” approach to syntax and semantics (p. 393, for example), implicitly recognizing other understandings of syntax and semantics.3 The definition is essential (for he will argue that any things that can be legitimately brought into a single set can have all of their relationships handled syntactically, as I will shortly show), but there is an interpretive question involved in the definitions that Rapaport takes from Morris. My purpose in addressing it here is to illustrate some surprising aspects of Rapaport’s argument, not to answer the question of what Morris might actually endorse. Here is a typical passage from Morris, which includes some fodder for both interpretations:

Logical syntax deliberately neglects what has here been called the semantical and pragmatical dimensions of semiosis to concentrate upon the logico-grammatical structure of the language, i.e., upon the syntactical dimension of semiosis. In this type of consideration, a ‘language’ (i.e. Lsyn) becomes any set of things related in accordance with two classes of rules: formation rules, which determine permissible independent combinations of members of the set (such combinations being called sentences), and transformation rules, which determine the sentences which can be obtained from other sentences. These may be brought together under the term ‘syntactical rule’. Syntactics is, then, the consideration of signs and sign combinations in so far as they are subject to syntactical rules. It is not interested in the individual properties of the sign vehicles or in any of their relations except syntactical ones, i.e., relations determined by syntactical rules. (Morris 1971, p. 29, all italics in the original—this text contains a complete reprint of Morris 1938, along with other works of Morris.)

If we emphasize the claim that a syntactic language becomes “any set of things”, then we get Rapaport’s favored interpretation (where syntax covers intra-set relations, and semantics covers inter-set relations). One might be able to bring more types of objects into that set, thereby enlarging the scope of syntax and the range of allowable syntactic relationships. On the other hand, if we emphasize the claim that syntax deals with “signs,” and the only allowable syntactic relations are those that use “formation and transformation” rules to compose legitimate sentences, then syntax should be limited to handling symbols, and not the objects for which the symbols might stand. Now for Morris’s account of semantics, we have the following, “Semantics deals with the relation of signs to their designata and so to the objects which they may or do denote,” (Morris 1971, p. 35). Again, there is an interpretive tension between the two readings. If we are allowed to bring the objects into the same set with the signs, semantics may be subsumed under syntax, as Rapaport desires. But there are other passages that seem to resist that interpretation: “One may study the relations of signs to the objects to which the signs are applicable. This relation will be called the semantical dimension of semiosis…,” (Morris 1971, p. 21, italics in the original). We may read that as keeping signs and objects distinct, regardless of any other set-related maneuvers. Morris also specifies certain semantic relations, which seem to be excluded from syntax: “It will be convenient to have special terms to designate certain of the relations of signs to signs, to objects, and to interpreters. ‘Implicates’ will be restricted to [syntax], ‘designates’ and ‘denotes’ to [semantics], and ‘expresses’ to [pragmatics],” (Morris 1971, p. 22, italics in the original).

The important feature of Rapaport’s definitions of syntax and semantics is this: for him, any relations among the members of a single set (of signs or representations, as we will soon see) can be thought of as syntactic. His proposal will ultimately meld the semantics into the syntax, placing all the semantic and syntactical units and their relations into a single set.4 Now let us consider Searle’s definitions of “syntax” and “semantics”.

Searle holds that both syntax and semantics depend on the intrinsic intentionality of conscious minds, so his definitions of syntax and semantics are rather different from Rapaport’s. For instance, when introducing the CRA in Minds, Brains and Science, he says, “It is essential to our conception of a digital computer that its operations can be specified purely formally;… the symbols have no meaning; they have no semantic content; they are not about anything. They have to be specified purely in terms of their formal or syntactical structure,” (Searle 1984, pp. 30–31). The syntactical relationships make no use, nor mention, of the meanings that we might ascribe to the symbols. Searle would add that the symbols have no meaning intrinsically, and even their use as symbols depends on our treating them as symbols. For semantics, the difference is greater, “… even if my thoughts occur to me in strings of symbols, there must be more to the thought than the abstract strings, because strings by themselves can’t have any meaning. If my thoughts are to be about anything, then the strings must have a meaning which makes the thoughts about those things. In a word, the mind has more than a syntax, it has a semantics,” (Searle 1984, p. 31, italics in the original). Syntax is restricted to formal features of symbols and processes for operating on them that don’t make use of the meaning of the symbols. Semantics is about the meaning of the symbols, and the operations that depend on those meanings.

Just for the sake of clarity, I will add subscripts to the terms from here on out: syntaxR and semanticsR for Rapaport’s preferred definitions, syntaxS and semanticsS for Searle. These two different ways of using the terms are completely orthogonal to each other. SyntacticR relations (relations among the members of a single set) could be either syntacticS or semanticS (that is, purely formal or in virtue of meaning—and Rapaport provides an example of this in footnote 4, above). SemanticS relations (relations in virtue of meaning) could be either semanticR or syntacticR (that is, relations between two sets or within a single set). Likewise for the other terms involved. I will flesh out the details of Rapaport’s proposal in a moment, but I hope that it is now obvious that Searle’s bumper-sticker slogan, “SyntaxS is not sufficient for semanticsS,” is a very different claim from Rapaport’s similar-sounding, “SyntaxR is sufficient for semanticsR.” Unpacked, Searle’s slogan (no longer concise enough for bumper-sticker-hood) would be, “Formal operations on uninterpreted symbols will never be sufficient to produce (or reveal and make accessible) the meanings of those symbols, nor produce understanding of those symbols.” Rapaport’s similarly unpacked slogan would be, “Formal operations on symbols (independent of their meanings or referents) within a set that includes the symbols and the units of meaning and/or the objects themselves will yield relationships between those symbols and the things they stand for (their meanings and/or objects), such that those relationships match the semantic relations when the symbols are considered as one set and the meanings/objects as a second set.”

Recognizing the differences between the definitions of syntax and semantics might be enough to call Rapaport’s argument against the CRA into question at the outset, but let us extend charity to Rapaport—he could easily accept my terminological point and make the following argument: “If we accept the classical definitions of semanticsR and syntaxR, we can show that the CRA fails. Within that conceptual framework, we can generate or discover semantic content from the manipulation of uninterpreted symbols.” That would be a significant result, if it works. So, I am going to present Rapaport’s argument against the CRA, then translate the CRA into his idiom, to see if the modified CRA stands or falls, granting Rapaport all the conceptual machinery he desires.

Rapaport seeks to establish that syntaxR is, in fact, sufficient for semanticsR. He presents the theoretical framework and premises essential to his position in three theses.

Rapaport’s Thesis 1:

A computer (or a cognitive agent) can take two sets of symbols with relations between them and treat their union as a single syntactic system in which the previous “external” relations are now “internalized”. Initially there are three things: two sets (of things)—which may have a non-empty intersection—and a third set (of relations between them) that is external to both sets. One set of things can be thought of as a cognitive agent’s mental entities (thoughts, concepts, etc.). The other can be thought of as “the world” (in general, the meanings of the thoughts, concepts, etc., in the first set). The relations are intended to be the semantic relations of the mental entities to the objects in “the world”. These semantic relations are neither among the agent’s mental entities nor in “the world” (except in the sense that everything is in the world)… In a CR, one set might be the squiggles, the other set might be their meanings, and the “external” relations might be semantic interpretations of the former in terms of the latter.

When the two sets are unioned, the meanings become “internalized”…. Now the agent’s mental entities include both the original thoughts, concepts, etc., and representatives of the (formerly) external objects, and representatives of the (formerly) external relations. Before the union, the semantic relations obtained between two sets; now their mental analogues obtain within a single set. Hence they are syntactic: Semantics is the study of relations between two sets, whereas syntax is the study of relations among the members of a single set (Morris 1938). (Rapaport 2006, pp. 385–386, all italics in the original.)
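To make the set-theoretic structure of this union move concrete, here is a minimal sketch (my own illustration in Python, not anything drawn from Rapaport or from the SNePS implementation; the set names and the toy "representative" encoding are hypothetical). The only point it is meant to display is that, once representatives of the external objects and of the formerly external relations are members of the same set as the agent's mental entities, every relation in play holds within a single set, which is all that the Morris-style definition of syntaxR requires.

    # A toy illustration (not SNePS) of the union/"internalization" move in Thesis 1.
    # The names 'mental', 'world', and 'semantic_relations' are hypothetical stand-ins.

    mental = {"THOUGHT-OF-WATER", "THOUGHT-OF-TREE"}        # the agent's mental entities
    world = {"water", "tree"}                               # "the world": the meanings
    semantic_relations = {("THOUGHT-OF-WATER", "water"),    # relations BETWEEN the two sets
                          ("THOUGHT-OF-TREE", "tree")}      # (semanticR, on Rapaport's definition)

    # The union move: build internal representatives of the external objects and of the
    # formerly external relations, so that everything lives in one set.
    internal_world = {"REP-OF-" + obj for obj in world}
    internal_relations = {(m, "REP-OF-" + obj) for (m, obj) in semantic_relations}

    unified = mental | internal_world | {"REL(%s, %s)" % pair for pair in internal_relations}

    # After the union, every relation the agent can operate on holds among members of the
    # single set 'unified'; on the definition Rapaport adopts from Morris, such intra-set
    # relations count as syntacticR, even though they mirror the old inter-set (semanticR) ones.
    print(sorted(unified))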

Rapaport’s Thesis 2:

“But I take the internalization of these real-world/external-world meanings seriously…we only have direct access to internal representations of objects in the external world,” (Rapaport 2006, p. 387). This Kantian move allows Rapaport to claim that anything we represent mentally is already part of a single internalized set.

Rapaport’s Thesis 3:

“Briefly, we understand one system (a syntactic domain) in terms of another (a semantic domain) that is antecedently understood. The base case must be a system that is understood in terms of itself, i.e., syntactically,” (Rapaport 2006, p. 387). This claim, that the syntax must come first, that uninterpreted symbols are all we ever get to start with,5 will bear a great deal of weight in what follows.

If we accept these theses together, then it is clear that syntaxR is indeed sufficient for semanticsR. The base level at which understanding occurs is a single set (the set of our mental representations) which is understood in terms of itself. Since all intra-set relations (within a set of representations, at least) are syntacticR relations, this understanding must also be a syntacticR relation. In fact, if we accept Thesis 2, it seems that anything we can imagine, any relationship we can conceive of, is already syntacticR. I am not sure what, if anything, remains to be thought of as semanticR. Rapaport seems to support such an evisceration of the semanticR realm: “Our brain is just such a closed system: all information that we get from the external world, along with all of our thoughts, concepts, etc., is represented in a single system of neuron firings. Any description of the real neural network, by itself, is a purely syntactic one,” (Rapaport 2006, p. 387). If everything that we could be concerned with (when investigating the mind and the brain) is syntacticR, why strive to create a semanticR system? But let that pass.

Translation and Evaluation of the CRA

In order to translate the CRA into Rapaport’s idiom, I will avoid the contested terms, “semantic” and “syntactic”, as much as possible. Instead, I will use “understood” and “uninterpreted”, and their cognates, in order to express Searle’s position. If the description of the CRA in Rapaport’s theoretical framework forces us to conclude that Searle-in-the-room will come to understand the meaning of the initially uninterpreted Chinese characters by virtue of hand-working a program, then Rapaport will have accomplished his task.
1. Let us grant that Searle-in-the-room has a set of internal mental representations. All of his experiences inside the room, all his memories of his life outside the room, and so on, will belong to this set.

2. Within that overarching set, let us distinguish two subsets: set E for his experiences (walking around in the room, following the rules in the book, manipulating Chinese symbols according to the rules in the book, memories of life on the outside, and so on—anything that Searle-in-the-room is consciously aware of, has in memory, can bring to mind, etc.) and set C for his internal representations of the Chinese characters themselves.

3. Sets E and C will overlap,6 but only in this way: Searle-in-the-room will have experiences of carrying around the bits of paper with the Chinese symbols on them and experiences of looking up and following the rules involving them. They may even come to haunt his dreams. But this sort of overlap will not allow Searle-in-the-room to connect (for instance) the Chinese character for “tree” to any of his own experiences with trees, or with any of his other mental representations of trees (things he knows about trees, thoughts involving the English word “tree”, etc.). That is the kind of relationship we would need to see if Searle-in-the-room truly understood the meaning of that particular symbol. If he manipulates the symbol for “tree” without any connection to his tree-experiences (or any of his other tree-representations), he has no understanding of the meaning of that symbol.

4. Even if we grant Rapaport’s definitions of syntaxR and semanticsR and all the theoretical machinery pertaining thereto, Searle-in-the-room will have no understanding of the meaning of the Chinese symbols that he manipulates. That was the original challenge posed by the Chinese Room Argument. If we start with nothing more than uninterpreted symbols, rules for manipulating them, and actual rule-following which produces appropriate behavioral responses, no understanding of the meaning of those symbols will emerge. Since computers (as such) do nothing more than manipulate uninterpreted symbols, no computer (as such) understands anything.
So, having translated the CRA into Rapaport’s terms, we can see that there is no way that adopting his preferred definitions of syntax and semantics would solve the true issue presented in Searle’s CRA. What are we to make, then, of Rapaport’s supporting examples: Helen Keller, the Japanese Room and the Library Room? I contend that they all make the same error: they smuggle meaningful experience correlated with the supposedly uninterpreted symbols into the respective Rooms. Doing so violates one of the main features of the CRA, as I will show.

Helen Keller

Rapaport claims that Helen Keller was in a sort of Chinese Room—”Has anyone ever really been in a CR for a sufficiently long time? Yes—Helen Keller,” (Rapaport 2006, p. 395)—and that her situation was similar enough to Chinese Room-hood that her ability to learn and understand a natural language proves that the CRA is faulty. To put it another way, Rapaport is claiming that if the CRA were sound, Helen Keller would not have been able to acquire language. In Keller’s epiphany at the well-house, she first learned that her experiences of external things in the world could be correlated with finger-spellings—Sullivan finger-spelled “w-a-t-e-r” into one of Keller’s hands while the other was immersed in water, and Keller made the connection. Rapaport claims that the same thing would happen in the CRA: “This would eventually be the experience of Searle-in-the-room, who would then have both semantic methods for doing things and purely syntactic ones. The semantic methods, however, are strictly internal: they are not correspondences among words and things, but syntactic correspondences among internal nodes for words and things,” (Rapaport 2006, p. 396, italics in the original).7 The force of Rapaport’s claim here will turn on how similar Helen Keller’s pre-epiphany situation was to the situation of the person in the CRA.

The answer should be obvious: not very. Here is the key feature of the scenario that Rapaport seems to have neglected: the CRA affords no opportunity for Searle-in-the-room to associate any of the Chinese symbols with any of his experiences or memories of the things that the symbols stand for. Helen Keller had three senses that provided her with meaningful experience, and symbols presented to her via finger-spelling were associated with features of her simultaneous conscious experiences in one or more of those three functioning sense modalities. Two-fifths of a Chinese Room is no Chinese Room at all.

Rapaport addresses this issue by flatly denying that our sense modalities do what Searle contends that they do: provide us with meaningful experience as a biological product, and not via formal operations on uninterpreted symbols:

Searle-the-philosopher argues that the mere syntactic symbol-manipulation undertaken in the Room does not suffice to yield genuine semantic understanding (Searle 1980, 1993). What appears to be missing are meanings to be “attached” to the squiggles (Searle 2002, p. 53; cf. Rapaport 2006). In the “robot reply” to the CRA, these enter the Room via sensors. According to Searle-the-philosopher, these are just more symbols, not meanings, so nothing is gained. But more symbols is all that a cognitive agent gets; hence they must suffice for understanding, so nothing is lost. (Rapaport 2006, p. 385, italics in the original).

That last sentence reveals the core dispute. Searle contends that some “cognitive agents” (us and many of our evolutionary relatives) have biological systems that produce meaningful experiences (they do not provide us with uninterpreted symbols). Rapaport assumes what he needs to prove—that symbols produced by sensors interacting with the environment somehow automatically acquire semantic content or meaning. Neal Jahren makes a similar criticism of Rapaport in his 1990 article, “Can Semantics be Syntactic?” and I believe that this is an important issue, reaching beyond the particulars of Rapaport’s argument—we will return to it in the following section. Searle is challenging the presupposition that “uninterpreted symbols are all people get”, so to repeat that presupposition as a defense against Searle’s anti-robot argument just isn’t helpful. Consideration of the Robot Reply should allow us to go beyond the base conflict of intuitions here.8 I propose to examine three versions of the Robot Reply: first Searle’s, then a Keller-like version, finally a version that adheres to Rapaport’s commitments.
1. Searle’s version. The sensors of the robot send their information to Searle-driving-the-robot in the form of Chinese characters. Searle’s signals to the robot’s effectors are also sent via the symbols that he assembles. Searle-driving-the-robot must be granted superhuman speed in order for the robot to respond to the world in real time, so let’s grant him Flash-like super speed. Searle-driving-the-robot has no way of figuring out that his activity is actually piloting a robot around and responding appropriately to its environment.

2. Keller-like version. Searle-driving-the-robot gets a video screen, speakers, etc., so that he can see and hear what the robot is doing while he manipulates Chinese symbols (presumably these symbols represent the Chinese spoken by the Chinese speakers that the robot encounters, and any written Chinese that the robot happens across). In this scenario, super-speedy-Searle could indeed begin to figure out what some of the Chinese symbols mean, and might well eventually acquire real understanding of Chinese. In fact, we could make this scenario even more Keller-like by giving Searle-driving-the-robot a virtual reality body-suit and helmet (that provides tactile, olfactory, and gustatory sensations, appropriately matching what happens to the robot out in the world), rather than a screen and speakers. The point remains: Searle-driving-the-robot could begin to figure out the correlations between some of the Chinese symbols and his conscious experiences (provided by the suit), and might eventually come to master the language.

3. Rapaport’s version. As in the first version, the robot’s sensors send their information to Searle-driving-the-robot in the form of Chinese characters. Searle’s signals to the robot’s effectors are sent via the Chinese symbols that he assembles. By virtue of having representations for the Chinese symbols, his mental representations of English, and his life-experiences all part of a unified set, Searle-driving-the-robot begins to understand Chinese. He must.
I believe that Rapaport’s commitment to the view that uninterpreted symbols are all we get, and so must suffice for meaning and understanding, has led him to misattribute the third sort of Robot scenario to Helen Keller, and to miss the second. I will try to support this contention by illustrating parallel versions of the Systems Reply.
1. Searle’s version. Searle-outside-the-room memorizes the rulebook and works outdoors. We may have to grant him super-memory in addition to super-speed.9 People come up to him and hand him Chinese text, he follows the rules he’s memorized, and scribbles out his responses (to call it “writing” might not be appropriate). Again, he has no way of figuring out what the Chinese characters mean because he has no way of connecting the Chinese text with his other representations of the things that the symbols denote.

2. Keller-like version. Searle-outside-the-room is given some Chinese symbols, and somebody puts his other hand in some water. This is repeated a few times, and whenever he gets those symbols, he gets the water treatment. Searle-outside-the-room is smart, and is now fully capable of figuring out the likely meanings of that particular array of symbols. At least, he can narrow down the options:

   1. Water
   2. Wet
   3. Wet and cold
   4. Wet hand
   5. I now immerse part of you in water
   6. I now immerse your hand in water
   7. Your hand needs washing
   8. And so on (this list may be indefinite, but it is considerably narrowed down from what it would have been without the experiences of the water treatment).

3. Rapaport’s Version. Here, I have to make some extrapolations, but I think it would go this way. Searle-outside-the-room is given some Chinese symbols, and somebody finger-spells w-a-t-e-r into his other hand. We must presume that, at least at the outset, Searle does not understand finger-spelling either. This is repeated a few times, and whenever he gets those symbols, he gets the same finger-spelling. Searle-outside-the-room now has a correlation between these two kinds of symbols. We imagine that the process is repeated for all the rest of the Chinese characters. Now, not only can Searle-outside-the-room provide appropriate responses to Chinese sentences, he can do the same for finger-spellings, and can also seemingly translate from one to the other. By virtue of having representations for the Chinese symbols, the finger-spelling tactile symbols, and his mental representations of English and his life-experiences all part of a unified set, Searle can now understand Chinese.
Again, it seems that Rapaport’s commitment to the syntax-must-suffice premise would drive him to mistakenly ascribe the third sort of scenario to Helen Keller, when the second would be more appropriate.

In Searle’s counter to the Systems Reply, Searle-outside-the-room clearly does internalize everything (the rules, the work of following the rules, etc.). As long as we preserve the conditions of the CRA, so that Searle-outside-the-room is presented with Chinese text (and not simultaneously presented with experiences that are relevant to the Chinese text), Searle-outside-the-room will never come to understand Chinese (even though Rapaport’s idiom will allow the true but potentially misleading claim that there are syntacticR relations between the subsets E and C).

The same goes for the reply to the Robot Reply. As long as the robot’s sensors send the information about the environment in the form of Chinese text, Searle-driving-the-robot will never know that he is driving a robot, and he certainly won’t come to understand the meaning of any of the symbols he encounters, nor understand any of the actions he unwittingly initiates via the Chinese symbols that he sends through his out-box.

In all the CRA variants, we find the same lesson. If Searle-in-the-room (or Searle-driving-the-robot, or Searle-outside-the-room) is given meaningful experiences (direct or mediated by sensors) and the capacity to associate features of those experiences with the Chinese symbols simultaneously presented, then of course Searle-in-the-room (or his counterparts, as above) will have the potential to learn to understand Chinese. This indicates a core issue that may not have been recognized as such—how do we get meaningful experience? Can there be mechanisms of reliable causal contact with the outside world without that contact producing meaningful experience? That is the issue, perhaps neglected among the other issues involved in the CRA, which we can extract from Searle’s treatment of the Robot Reply. Helen Keller did have meaningful experiences and the opportunity to correlate features of those experiences with the finger-spellings simultaneously presented to her by Sullivan, and she did learn natural language thereby. Her case is not analogous to the CRA in the crucial respects needed in order to justify the rest of Rapaport’s claims about the virtues of SNePS. I will now show that the same is true of Rapaport’s other examples, the Japanese Room and the Library Room.

In the Japanese Room case, Rapaport describes how he once came across a page of Japanese text describing a SNePS network. (The page of Japanese characters, reproduced on p. 391, includes a recognizable SNePS diagram, with nine English words: “subclass”, “superclass”, “lex”, “SNePS”, “equiv”, “object”, “property”, “event”, “member” and “class”; all occurring from one to four times). Rapaport doesn’t read Japanese, and relates his experience as being CRA-like. “When I first saw them [the networks in Japanese], I felt like Searle-in-the-room,” (Rapaport 2006, p. 392). He should not have. That bit of Japanese includes a SNePS diagram that is recognizable as one to people (like Rapaport) who are acquainted with such things. That is enough to tell him that he is dealing with something familiar, presented in a foreign language. Searle-in-the-room doesn’t get any pictures or recognizable diagrams—he just gets symbols. Those symbols, in the Chinese Room, must begin as uninterpreted symbols. If Searle-in-the-room gets a Chinese character and a picture of a rhino, that completely changes the scenario, removing the most essential feature. The purpose of the CRA was to investigate whether uninterpreted symbols could be understood by virtue of running a program that produced appropriate behavioral responses. If we make the symbols interpretable at the outset (by providing helpful diagrams, figures, illustrations, or the like), and apart from running the program, then we have created a Straw Chinese Room, with a Straw Man Searle in it.

In the Library Room, Rapaport again draws on his own experience:

Consider a second CR: my understanding of Library of Congress catalog numbers. I have neither studied nor know the rules of syntax or semantics for them, but I do not need to in order to use them ‘fluently’ to find books… [He then describes the rules of syntax that he has discovered, categories of books, author’s last names, years of publication, and the like.] The more links I make with my knowledge of books and libraries, the more I know of the syntax and semantics; and the more I know of syntax and semantics, the more I understand of what I’m doing. Searle-in-the-Library-of-Congress-room would also come to have such understanding. Why shouldn’t Searle-in-the-CR? (Rapaport 2006, p. 394)

Because Searle-in-the-room is prevented from doing the very thing that allowed Rapaport to learn to understand the LOC codes. Searle-in-the-room cannot associate any of his (book-related) experiences with the (LOC-related) Chinese symbols that he handles. The lesson is the same as we saw in Helen Keller’s case, and the case of the Japanese Room. All three of the scenarios that Rapaport presents as counterexamples to the CRA fail, and all for the same reason—Rapaport has smuggled in some meaningful conscious experience related to the symbols being presented, where the real CRA starts with nothing more than uninterpreted symbols.10 The smuggling is done under the cover of the assumption that perception just is syntactical, that it must be. Remove that contentious assumption, and the CRA remains undefeated.

The Bigger Picture

I believe there is a general lesson here, one that goes beyond Rapaport’s particular arguments. I hope that this exercise has demonstrated that one of the issues behind the CRA, especially pertaining to the robot variants, is that in order to solve the CRA, we must discover how our experiences become meaningful. Once you allow meaningful perception, you are no longer operating on uninterpreted, meaningless symbols (and acquiring other intentional mental states becomes unproblematic). So the slogan that Rapaport begins with, “The shortest answer to the question, ‘How can a computer come to understand natural language?’ is: ‘The same way Helen Keller did’,” (Rapaport 2006, p. 381, italics in the original, and citing Albert Goldfain) might well be true—but what Helen Keller did depended on her ability to correlate already meaningful conscious experiences with finger-spelling patterns (arbitrary symbols). If we could get a computer to have meaningful conscious experiences, the road to natural language acquisition and understanding would be clear (as far as Searle is concerned).

Searle is trying to carve out a middle position between the parsimony of the identity theory (minds = brains, so only things with brains can have minds, a strong claim about any and all organisms that have lived, do live, or will live in the entire history of the universe) and the permissiveness of functionalism (anything that is functionally similar to a mind, at a rather high level of description (the level of perceptions, beliefs and desires), has a mind). We don’t yet know exactly how neurons, organized in brains, produce conscious experience. Anything doing that job would have a mind.11 It is possible, so far as we know, that such a job could be done by things other than neurons in brains, so Searle is not guilty of neural chauvinism. But the causal reasons why our neurons do what they do will rule out simulations of minds that go into their respective states for causal reasons (driven by the physical implementation of syntactic operations) different from those that obtain in our human brains.

I have endeavored to find the bottom level, where the real disagreement lies. This analysis, which began as a defense of Searle’s Chinese Room against a particular family of challenges from Rapaport, has pushed the dispute back to this: How do we get meaningful perceptions? That question originally emerged in the Robot Reply and Searle’s rejoinder, but I contend it is really the central locus of the dispute. For Searle, perception is a biological process, and depends on the underlying causal processes of our neurology (in Searle’s terminology, these would be the “causal powers”). Rapaport, and other theorists who depend on similar premises—that perception must be merely syntactic—cannot handle the CRA. Does that mean that we cannot make any progress? Have we bottomed out in a conflict of intuitions? If my analysis of the respective arguments is correct, Searle wins. Searle, via the Robot Room, shows that if we present a system with syntactic input (in the form of initially uninterpreted symbols), and the ability to operate on those symbols and thereby produce appropriate behavior, the system would not understand the meaning of the symbols. My conclusion echoes Searle—if we want to understand consciousness (specifically, how we get meaningful experiences), we need to study the brain.

Footnotes
1. Rapaport calls the Room’s occupant “Searle-in-the-room”, and I will follow his convention. Rapaport also calls Searle himself “Searle-the-philosopher”. When considering the Systems Reply, I’ll call the person who memorizes the program and the symbols and works outdoors “Searle-outside-the-room”, and for the Robot Reply, “Searle-driving-the-robot”.

2. Searle makes a similar argument for syntax—that nothing has syntactic properties intrinsically. Things only become symbols when we minded creatures treat them as symbols (Searle 1992, Chap. 9). Evaluating Searle’s argument on the mind-dependent nature of syntax is beyond the scope of this paper, but if Searle is correct here, it would be doubly devastating to positions like Rapaport’s.

3. Rapaport recognizes the distinction between the two different understandings of syntax/semantics here: “in NL [natural language], there are grammatical properties and relations among the words in the sentence ‘Ricky loves Lucy’ (subject, verb, object), and there are semantic relations between the words ‘lawyer’ and ‘attorney’ (synonymy), or ‘tall’ and ‘short’ (antonymy). But, in the classical sense of syntax (Morris 1938), both of these sorts of relations are syntactic,” (Rapaport 2006, p. 392, emphasis added).

4. The main reason I’m not concerned with whether this account of syntax (allowing the inclusion of semantic relations within syntax) is true to Morris or not, is that Rapaport’s position is what it is, independent of whether it comes directly from Morris, requires some interpretation of Morris, or is an extension of Morris’s ideas.

5. A contention that Rapaport has made in several works besides the article in question here, e.g., “The linguistic and perceptual ‘input’ to a cognitive agent can be considered a syntactic domain…,” (Rapaport 1995, p. 59, italics in the original).

6. We may be tempted to think that C is a subset of E, but we should allow unconscious, non-experiential representations of the Chinese symbols. Even when Searle-in-the-room is in a well-earned dreamless sleep, we would want to say things like: he believes that some particular Chinese symbol is one that he saw for the first time yesterday, he believes that this symbol is more common than that one, etc.

7. Here is what Rapaport really said if we distinguish the definitions of syntax and semantics, as I suggest: “This would eventually be the experience of Searle-in-the-room, who would then have both semanticS methods for doing things and purely syntacticS ones. The semanticS methods, however, are strictly internal: they are not correspondences among words and things, but syntacticR correspondences among internal nodes for words and things.”

8. Though I believe Searle’s original response to the Robot Reply will survive Rapaport’s challenge, we can construct a variation of the Chinese Room that causes additional difficulty for the standard Robot Reply (e.g., Crane 1996)—I call it the Twisted Chinese Room. To build the Twisted Chinese Room, put all the Chinese characters in one column (in any arbitrary order), and all their meanings in a second column. Then move all the characters down one row, taking the last character from the bottom back up to the top. Change all the symbols in the Chinese Room instruction book accordingly. Then add two steps to the instructions, so that the first thing Searle-in-the-room does when he gets some Chinese symbols in his in-box is to transform the message into Twisted Chinese, and the last thing he does is to transform the output from Twisted back into Regular Chinese. Now, the vast majority of the operations that Searle-in-the-room performs are done on Twisted Chinese characters. Without the final transform-back step, the result would be gibberish (certainly not comprehensible to any native Chinese speaker). So, in the Robot Reply, would the interactions with the external environment confer understanding of Chinese or Twisted Chinese? Both? Neither? Would this divorce meaning from understanding? I don’t see a good answer, so I think the best solution is to avoid the initial claim that Searle-in-the-Room would understand anything on the basis of shuffling uninterpreted symbols around. An anonymous reviewer has remarked that such a Twisting, if done one character or word at a time, might allow a Chinese speaker to learn Twisted Chinese, rather than automatically yielding gibberish. That may be so, but it would presume comprehension of Chinese to start with.
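To lay out the mechanics of the Twisted Chinese Room, here is a minimal sketch (my own illustration in Python; the five-character inventory and the placeholder rule book are stand-ins, not anything from Searle or Rapaport). It shows only that the twist is a cyclic permutation of the symbol inventory, and that wrapping a purely formal rule book between the twist and its inverse leaves the room's outward behavior intact while ensuring that nearly all of the internal manipulation is performed on Twisted, rather than Regular, Chinese.

    # A toy sketch of the Twisted Chinese construction (illustrative only).
    symbols = ["木", "水", "火", "山", "人"]   # a five-character stand-in for the full inventory

    # Move every character "down one row"; the last wraps back to the top.
    twist = {orig: symbols[(i + 1) % len(symbols)] for i, orig in enumerate(symbols)}
    untwist = {twisted: orig for orig, twisted in twist.items()}

    def rule_book(message):
        # Placeholder for the rewritten instruction book: any purely formal,
        # meaning-blind manipulation will do for the illustration.
        return list(reversed(message))

    def searle_in_the_twisted_room(message):
        twisted_in = [twist[ch] for ch in message]          # first added step: into Twisted Chinese
        twisted_out = rule_book(twisted_in)                 # the bulk of the work: Twisted Chinese
        return "".join(untwist[ch] for ch in twisted_out)   # last added step: back to Regular Chinese

    # Because twist and untwist are inverse bijections, the room's input/output behavior matches
    # what the original (untwisted) rule book would produce, even though almost every internal
    # operation handled Twisted rather than Regular Chinese characters.
    print(searle_in_the_twisted_room("木水火"))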

9. Searle might need super-memory in order to memorize such a vast rulebook, including all the rules about keeping track of prior responses that would be needed for a convincing conversational simulation. Rules to handle repetitions of questions, or questions like, “What did you just say?” would require a magnificent memory indeed.

10. Rapaport isn’t the only person making this sort of move; Simon and Eisenstadt do the same when they add windows to the Chinese Room in “A Chinese Room that Understands,” (Simon and Eisenstadt 2002).

11. We could have two systems, identical at the formal, high-end level of beliefs (and “beliefs”) and desires (and “desires”), with very different underlying causal relations. Searle would call this the difference between functional causal relations and causal powers. We can also describe the difference as between what a system does (at the higher level of description) and why it does that (at the lower, micro-level description). Searle claims that the “why” matters to consciousness and intentionality.

This may shed some light on Rey’s claim that Searle is a sort of functionalist (Rey 1986, 2002)—if we suppose we had a causal/functional account of human consciousness in the brain and we duplicated those causal/functional processes in a silicon-based robot, such that the robot’s “brain” went from one state to the next for the very same reasons that the human brain went from one analogous state to the analogous next state, Searle would say that the robot has a mind. Rey recognizes Searle’s demand, but doesn’t see the spirit of it: “Searle’s phrasing here actually suggests that analysis of a belief must include an account of why the belief actually manages to have the causal role it does. But it’s hard to see what exactly such a demand would come to, much less why he or anyone else would want to insist upon it,” (Rey 2002, p. 206, footnote 11).

Perhaps this will help. Since we don’t know what, in the brain, actually causes conscious mental states, we don’t know at what level of description to locate the causal relations we’d need to duplicate in order to produce conscious mentality. Suppose we choose a level of description (say, of beliefs, thoughts and desires—the level where propositional attitudes occur), and we jury-rig a program so that the states succeed each other in exactly the same way that they would in a human being. But the underlying reasons (the causal mechanisms) are very different. The reasons why the machine’s State X is followed by its State Y are entirely different from the reasons why the human’s State X is followed by the human’s State Y. It is possible that the thing we sought, the essence of the mind, depends on the underlying reasons and not the causal series that we’ve arranged via the program. Possible analogy: one system has objects moving because of gravity, and another system has similar objects moving in similar patterns, but via God’s Will. The underlying “reason why” would make a substantial difference.

Acknowledgments

I would like to express my gratitude most especially to David Cole, for his very productive discussions on the Chinese Room, and for reading several drafts of this paper. I would also like to thank Tristram McPherson, James Moor, Mark Newman, Sean Walsh and an anonymous reviewer for Minds and Machines for their very helpful questions and comments. Any errors that remain are entirely my own.

Copyright information

© Springer Science+Business Media B.V. 2010