Ford cites Jahren’s (1990) objections to my view. Let me take this opportunity to reply to Jahren, whose critique of my (1988) theory of syntactic understanding and its application to the CRA shows how easy it is in discussing these issues to talk just slightly past one another.
What, for example, is a natural language, and what does it mean to understand one? For Jahren, a natural language is “a series of signs used by a system”, and “the sine qua non of natural-language understanding ... [is] an ability to take those signs to stand for something else ... in the world” (Jahren 1990: 310, my emphasis). But if a natural language is just “a series of signs”, it follows that to understand it is to understand the series of signs as used by the system—which is a syntactic process. Now, as I urged in “Syntactic Semantics” (Rapaport 1988), to understand, in general, is to map symbols to concepts.
Thus, for me to understand you is for me to map your symbols to my concepts, which is, to use Jahren’s phrase, taking “those signs to stand for something else”—but not “something in the world” (except in the uninteresting sense that my concepts are things in the world). This is also a syntactic process: Insofar as I internalize your symbols and then map my internalized representations (or counterparts) of your symbols to my concepts, I am doing nothing but internal symbol manipulation (syntax), even though I am taking your “signs to stand for something else”, namely, my concepts.
How do I understand my concepts? Do I take my concepts to stand for something else outside me? Yes—I so take them, although I only have indirect access to the “something else” outside me. Pre-theoretically, the only way I could take your symbols “to stand for something in the world” would be either direct or else indirect, via my symbols (concepts). But all of it is indirect, since I can at best take your symbols to stand for the same thing I take mine to stand for, and, in both cases, that’s just more symbols (cf. Rapaport 2000).
Jahren takes me to task for using ‘mentality’ in a “suprapsychological” sense (citing Flanagan 1984) instead of “in a human sense” (Jahren 1990: 314ff). But what sense is that? Is it determined by human behavior (as in, say, the Turing Test)? If so, then Jahren and I are talking about the same thing, since human mental behavior might be produced by different processes. Is it determined by the way the human brain does mental processing? But that is too strong for my computational philosophical tastes: I am concerned with how mentality, thinking, cognition, understanding—call it what you will—is possible, period. I am not concerned with how human mentality, in particular, works; I take that to be the domain of (computational) cognitive psychology.
However, I don’t intend (at least, I don’t think I intend) the very weak claim that as long as a computer can simulate human behavior by any means, that would be mentality. I do want to rule out table look-up or the (superhuman) ability to solve any mathematical problem, without error, in microseconds. The former is too finite (it can’t account for productivity); the latter is too perfect (in fact, if viewed as an infinite, God-like ability to know and do everything instantaneously, it, too, is a kind of table look-up that fails to account for productivity; cf. Rapaport 2005b).
Now, having excluded those two extremes, there is still a lot of variety in the middle. So I’ll agree with Jahren that, the extreme cases excepted, “a computational system is minded to the extent that the information processing it performs is functionally [that is, input-output, or behaviorally] equivalent to the information processing in a mind” (Jahren 1990: 315)—presumably, a human mind. However, Jahren says that two mappings are input-output equivalent “because these mappings themselves can be transformed into one another” (Jahren 1990: 315). This seems to me too restrictive, not to say vague (what does it mean to transform one mapping into another?). Jahren gives as an example “solving a matrix equation [which] is said to be equivalent to solving a system of linear equations” (Jahren 1990: 315). But surely two algorithms with the same input-output behavior would be functionally equivalent even if they were not thus transformable. Consider, for instance, two very different algorithms for computing greatest common divisors. They would be functionally equivalent even if there were no way to map parts of one to parts of the other in any way that preserved functional equivalence of the parts.
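The point about functional equivalence without transformability can be made concrete with a small sketch (my own illustration, not anything from Jahren's text): two structurally unrelated algorithms for the greatest common divisor. They compute exactly the same input-output mapping, yet there is no evident way to map the parts of one onto the parts of the other.

```python
# Two structurally different algorithms for the greatest common divisor.
# They are functionally (input-output) equivalent, yet nothing in one
# corresponds part-by-part to anything in the other.

def gcd_euclid(a: int, b: int) -> int:
    """Euclid's algorithm: repeated remainder-taking."""
    while b != 0:
        a, b = b, a % b
    return a

def gcd_search(a: int, b: int) -> int:
    """Brute-force search: try candidate divisors downward.
    Assumes a and b are positive, so d = 1 always succeeds."""
    for d in range(min(a, b), 0, -1):
        if a % d == 0 and b % d == 0:
            return d

# Same extension (input-output behavior), different intension (procedure):
for a, b in [(48, 18), (100, 75), (7, 13)]:
    assert gcd_euclid(a, b) == gcd_search(a, b)
```

The remainder-taking loop has no counterpart in the downward search, and vice versa; equivalence here is settled entirely at the level of inputs and outputs, not by transforming one procedure into the other.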
Jahren alludes to the symbol-grounding problem: “The semantics R [that is, the semantics in Rapaport’s sense] of a term is given by its position within the entire network” (Jahren 1990: 318). The proper response to this is: ‘Yes and no’. Yes, in the sense that ultimately all is syntactic, hence holistic, as Jahren observes (cf. Rapaport 2002, 2003a). But no in the sense that this misleadingly suggests that nothing in the network represents the external world. For instance, Jahren gives an example of ‘red’ linked as subclass to ‘color’ and as property to ‘apple’, etc. But this omits another, crucial—albeit still internal—link: to a node representing the sensation of redness.
Some parts of the network represent external objects, so an internal analogue of “reference” is possible.
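Jahren's example, with the missing link added, can be sketched as a toy network (my own illustration in simplified notation, not actual SNePS): 'red' is positioned holistically among other nodes, but it is also linked to a node standing for the sensation of redness, the internal analogue of reference.

```python
# A toy semantic network as a set of labeled directed edges.
# The node and relation names are illustrative, not SNePS notation.
network = {
    ("red", "subclass-of", "color"),
    ("apple", "has-property", "red"),
    # The crucial extra link the example in the text says is omitted:
    # an internal node standing for the sensation of redness.
    ("red", "expressed-by-sensation", "redness-sensation"),
}

def neighbors(node):
    """All nodes linked from `node`, with the relation that links them."""
    return {(rel, tgt) for (src, rel, tgt) in network if src == node}

# 'red' is understood both by its position in the network (holistically)
# and by its link to a perceptual node (the internal analogue of reference).
red_links = neighbors("red")
```

Nothing here reaches outside the system; the sensation node is just one more internal node, but it is the node that plays the representational role the purely lexical links cannot.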
Now, to be fair, Jahren is not unsympathetic to this view:
... Rapaport’s conception of natural-language understanding does shed some light on how humans work with natural language. For example, my own criterion states that when I use the term ‘alligator’, I should know that it (qua sign) stands for something else, but let us examine the character of my knowledge. The word ‘alligator’ might be connected in my mind to visual images of alligators, like the ones I saw sunning themselves at the Denver Zoo some years ago. But imagine a case where I have no idea what an alligator is but have been instructed to take a message about an alligator from one friend to another. Now the types of representations to which the word ‘alligator’ is connected are vastly different in both cases. In the first, I understand ‘alligator’ to mean the green, toothy beast that was before me; in the second, I understand it to be only something my friends were talking about. But I would submit that the character of the connection is the same: it is only that in the former case there are richer representations of alligators (qua object) for me to connect to the sign ‘alligator’. ... The question ... is whether the computer takes the information it stores in the ... [internal semantic network] to stand for something else. (Jahren 1990: 318–319; cf. Rapaport 1988, n. 16).
Well, the computer does and it doesn’t “take the information it stores ... to stand for something else”. It doesn’t, in the sense that it can’t directly access that something else (any more—or less—than we can). It does, in the sense that it assumes that there is an external world. But note that if it represents the external world internally, it’s doing so via more nodes! There’s no escaping our internal, first-person experience of the world. As Kant might have put it, there’s no escape from phenomena, no direct access to noumena.
I have been avoiding the issue of consciousness and what it “feels like” to understand or to think (though I have something to say about part of that problem in Rapaport 2005a). But let me make one observation here, in response to Jahren’s description of how we can experience what it is like to be the machine: “in accordance with the Thesis of Functional Equivalence one can be the machine in the only theoretically relevant sense if one performs the same information processing that the machine does” (Jahren 1990: 321). That is, to see if a machine that passes the Turing Test is conscious, we would need to be the machine, and, to do that, all we have to do is behave as it does. But just “being” the machine (or the “other mind”) isn’t sufficient—one would also have to be oneself simultaneously, in order to compare the two experiences. This seems to be at the core of Searle’s Chinese-Room Argument—he tries to be himself and the computer simultaneously (cf. Cole 1991; Rapaport 1990; Copeland 1993). But he can’t use his own experiences (or lack of them) to experience his own-qua-computer experiences (or lack of them). That’s like my sticking a pin into you and, failing to feel pain, claiming that you don’t, either. It is also like my making believe I’m you, sticking a pin into me-qua-you, feeling pain, and concluding that so do you. Either one “is” both cognitive agents at the same time, in which case there is no way to distinguish one from the other—the experiences of the one are the experiences of the other—or else one is somehow able to separate the two, in which case there is no way for either to know what it is like to be the other. Note, finally, that what holds for me (or Searle) imitating a computer holds for a computer as well: Assume that we are conscious, and let a computer simulate us; could the computer determine whether our consciousness matched its? I doubt it.
Let’s return to the syntactic understanding of Searle-in-the-room. Jahren says that Searle-in-the-room does not understand Chinese “because ... [he] cannot distinguish between categories. If everything is in Chinese, how is he to know when something is a proper name, when it is a property, or when it is a class or subclass?” (Jahren 1990: 322). I take it that Jahren is concerned with how Searle-in-the-room can decide of a given input expression whether it is a name, or a noun for a property, or a noun for a class or subclass. In terms of a computational cognitive agent (such as Cassie, discussed in Rapaport 2006), this is the question of how she would “know” that ‘Lucy’ in ‘Lucy is rich’ is a proper name (in SNePS terms, how she would “decide” whether to build an object-propername case frame or some other case frame) or of how she “knows” that ‘rich’ expresses a property rather than a class (how she “decides” whether to build an object-property case frame rather than a member-class case frame; see Rapaport 2006 for details on these SNePS semantic network notions).
In one sense, the answer is straightforward: In Cassie’s case, an augmented-transition-network parsing grammar “tells” her. And how does the augmented transition network “know”? Well, of course, we programmed it to know. But in a more realistic case, Cassie would learn her grammar, with some “innate” help, just as we would. In that case, what the arc labels are is absolutely irrelevant. For us programmers, it’s convenient to label them with terms that we understand. But Cassie has no access to those labels. So, in another sense, she does not know, de dicto, whether a term is a proper name or expresses a property rather than a class. Only if there were a node labeled ‘proper name’ and appropriately linked to other nodes in such a way that a dictionary definition of ‘proper name’ could be inferred would Cassie know de dicto the linguistic category of a term. Would she know that something was a proper name in our sense of ‘proper name’? Only if she had a conversation with us and was able to conclude something like, “Oh—what you call a ‘proper name’, I call a___”, where the blank is filled in with the appropriate node label.
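The two points just made, that the grammar (not the agent) supplies the categorization, and that the arc labels themselves are opaque to the agent, can be sketched together. This is a simplified illustration of my own, with frame names that merely echo SNePS terminology; it is not actual SNePS code.

```python
# Simplified sketch: the lexicon that the grammar consults, not the
# agent herself, "knows" the categories, and that knowledge determines
# which case frame gets built for a sentence like "Lucy is rich".

def build_case_frame(subject, predicate, lexicon):
    """Choose a case frame from the predicate's lexical category."""
    category = lexicon.get(predicate)
    if category == "adjective":       # 'rich' expresses a property
        return {"object": subject, "property": predicate}
    if category == "common-noun":     # 'millionaire' expresses a class
        return {"member": subject, "class": predicate}
    raise ValueError(f"no case frame for {predicate!r}")

lexicon = {"rich": "adjective", "millionaire": "common-noun"}
frame = build_case_frame("Lucy", "rich", lexicon)  # an object-property frame

# The labels are opaque to the agent: rename them consistently and
# nothing the agent can do with the frame changes.
renaming = {"object": "arc-1", "property": "arc-2"}
renamed_frame = {renaming[key]: value for key, value in frame.items()}
assert set(frame.values()) == set(renamed_frame.values())
```

The label strings 'object' and 'property' are conveniences for us programmers; the agent has access only to the structure they induce, which is why she knows the categorization de re, as it were, but not de dicto.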
This is simply the point that native speakers of a language don’t have to explicitly understand its grammar in order to understand the language. I once asked (in French) a native French-speaking clerk in a store in France whether a certain noun was masculine or feminine, so that I would know whether to use ‘le’ or ‘la’; the clerk had no idea what I was talking about, but she did volunteer that one said ‘le portefeuille’, not ‘la portefeuille’.
Jahren “argue[s] that Searle-in-the-room cannot interpret any of the Chinese terms in the way he understands English terms” (Jahren 1990: 323). But insofar as Searle-in-the-room is understanding Chinese, he is not understanding English. Neither does Cassie, strictly speaking, understand SNePS networks; rather, she understands natural language, and she uses SNePS networks to do so. Just as a native speaker of English would explicitly understand English grammar only if she had studied it formally, so would Cassie only explicitly understand SNePS networks if she were a SNePS programmer (or a computational cognitive scientist). And, even if she were, the networks she would understand wouldn’t be her own—they wouldn’t be the ones she was using in order to understand the ones she was programming. Insofar as Searle-in-the-room does understand English while he is processing Chinese, he could map the Chinese terms onto his English ones, and thus he would understand Chinese in a sense that even Searle-the-author would have to accept.