
The Chinese Room Argument Reconsidered:

Essentialism, Indeterminacy, and Strong AI


JEROME C. WAKEFIELD
Rutgers University, New Brunswick, NJ, USA
Abstract. I argue that John Searle's (1980) influential Chinese room argument (CRA) against computationalism and strong AI survives existing objections, including Block's (1998) internalized systems reply, Fodor's (1991b) deviant causal chain reply, and Hauser's (1997) unconscious content
reply. However, a new essentialist reply I construct shows that the CRA as presented by Searle
is an unsound argument that relies on a question-begging appeal to intuition. My diagnosis of the
CRA relies on an interpretation of computationalism as a scientific theory about the essential nature
of intentional content; such theories often yield non-intuitive results in non-standard cases, and so
cannot be judged by such intuitions. However, I further argue that the CRA can be transformed into a
potentially valid argument against computationalism simply by reinterpreting it as an indeterminacy
argument that shows that computationalism cannot explain the ordinary distinction between semantic
content and sheer syntactic manipulation, and thus cannot be an adequate account of content. This
conclusion admittedly rests on the arguable but plausible assumption that thought content is inter-
estingly determinate. I conclude that the viability of computationalism and strong AI depends on
their addressing the indeterminacy objection, but that it is currently unclear how this objection can
be successfully addressed.
Key words: artificial intelligence, cognitive science, computation, essentialism, functionalism, indeterminacy, philosophy of mind, Searle's Chinese room argument, semantics
Correspondence address: 309 W. 104 St. #9C, New York, NY 10025, USA. Tel: +1-212-932-9705; Fax: +1-212-222-9524; E-mail: jcw2@columbia.edu
Minds and Machines 13: 285–319, 2003. © 2003 Kluwer Academic Publishers. Printed in the Netherlands.
1. Once More into the Chinese Room
Can computers literally think, understand, and generally possess intentional contents in the same sense that humans do, as some in the artificial intelligence (AI) field hold?1 The claim that they can has come to be known by John Searle's label, strong AI, in contrast to weak AI, the claim that computers are merely able to simulate thinking rather than literally think.2
The only systematically developed and potentially persuasive argument for strong AI is based on the doctrine of computationalism (or machine functionalism), which holds that the essence of thinking, in the literal sense of thinking that applies to human intentional contents, consists of the running of certain syntactically defined programs.3
Thus, computationalists hold that an entity's having a specific kind of intentional content consists of its running the same (or sufficiently similar) Turing machine program with the same (or sufficiently similar) input–output relations and state transitions that constitutes a person's having that kind of content.4
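To make the computationalist claim concrete, here is a minimal sketch of my own (not drawn from the article): on this view a program is individuated purely by its state-transition table and input–output behavior, so any system that realizes the table, whatever its physical makeup, counts as running the same program.

# Minimal sketch (my illustration, not the article's): a program individuated
# purely by its state-transition table and input-output behavior.
# (current_state, input_symbol) -> (next_state, output_symbol)
TRANSITIONS = {
    ("q0", "a"): ("q1", "x"),
    ("q1", "b"): ("q0", "y"),
}

def run(table, inputs, start_state="q0"):
    """Run a transition table over a sequence of input symbols, collecting outputs."""
    state, outputs = start_state, []
    for symbol in inputs:
        state, output = table[(state, symbol)]
        outputs.append(output)
    return outputs

# On the computationalist view, a brain, a silicon machine, or a person working
# by hand that realizes TRANSITIONS (or a sufficiently similar table) thereby
# has the same content-constituting states.
print(run(TRANSITIONS, ["a", "b", "a"]))  # ['x', 'y', 'x']

Nothing in the sketch mentions what, if anything, the symbols mean; that omission is exactly what Searle's Chinese room argument, introduced below, exploits.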
Strong AI immediately follows from computationalism. If thinking is consti-
tuted by certain kinds of computation, and digital computers are (in principle,
modulo performance limitations) universal Turing machines, and universal Turing
machines can compute any kind of computable function (Turing-Church thesis),
then, in principle, computers can think because, in principle, they can be pro-
grammed with the same program that constitutes human thought.5
John Searle's (1991a) Chinese room argument (CRA) is aimed at refuting the computationalist account of content, thus removing the only grounds for believing strong AI.6
Searle constructs a counterexample via a thought experiment (the
Chinese room experiment [CRE]), on which his argument rests. The CRE is
claimed to show that running a program identical to the program of a person
possessing certain thought contents (in Searle's example, Chinese language under-
standing) does not necessarily confer those contents on the entity so programmed.
The twist is that, whereas computationalism is controversially invoked to justify
attributing contents to computers, in the CRE it is a human being who performs
the steps of the program and yet, according to Searle, cannot be said to have the
relevant mental states. The CRA thus purports to show that human thinking cannot
consist of running a certain program.
With apologies for the familiarity of the exposition that follows, Searle's counterexample to computationalism's claim that thinking consists of implementation of a syntactically defined program goes as follows. Imagine that an English speaker (the operator) who knows no Chinese is enclosed in a room in the head of a large robot, with an elaborate manual in English that instructs her on what to do in the room, and she devotedly and successfully implements the manual's instructions. The operator receives inputs in the form of sequences of shapes, utterly strange to her, that light up on a console. In accordance with the directions in the manual, when certain shapes light up in a certain sequence on the input console, the operator pushes buttons with certain shapes in a specified sequence on another output console. Thus, she produces specific outputs in response to specific sequences of inputs. The program is fully syntactic in that the manual's rules use only the shapes and sequences of past inputs and syntactically defined manipulations of those sequences to determine the shapes and sequences of the output. The operator follows the manual without any understanding of what any of this might mean.
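The purely syntactic character of the manual can be illustrated with a toy sketch of my own (Searle's imagined manual would be vastly more elaborate, but the point is the same): the rules mention only symbol shapes and their order, never meanings.

# Toy sketch (mine, not Searle's): a "manual" mapping sequences of opaque
# symbol shapes to sequences of symbol shapes, with no reference to meaning.
MANUAL = {
    ("S1", "S7", "S3"): ("S9", "S2"),     # if these shapes arrive in this order...
    ("S4", "S4"): ("S5", "S8", "S1"),     # ...push the buttons with these shapes
}

def operate(manual, input_sequence):
    """Follow the manual: look up the incoming shape sequence, emit the listed shapes."""
    return manual.get(tuple(input_sequence), ())  # no matching rule -> no output

# The operator can apply this flawlessly while knowing nothing about Chinese;
# whether "S1 S7 S3" happens to encode a question about the weather is invisible here.
print(operate(MANUAL, ["S1", "S7", "S3"]))  # ('S9', 'S2')

A manual of the kind Searle imagines is, in effect, an enormously larger table of this sort, sensitive to whole histories of past inputs rather than to single sequences.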
Although the operator does not know it, the shapes on the input and output con-
soles are characters of the Chinese language, and the manual is a super-sophisticated
program for responding appropriately in Chinese to Chinese statements. The
input panel feeds in a sequence of Chinese characters corresponding to what the
robot has detected people saying to it in Chinese, and the output console con-
trols the robots speech behavior. For the purpose of reducing computationalism
to absurdity, it is assumed that the program implemented by the operator is the
same as the program which, per computationalist hypothesis, constitutes Chinese
understanding in humans or (if there are variants) in some particular human, and
that the operator is so skilled at following the program that the robot appears to
speak fluent Chinese to Chinese speakers who talk to it. Then, according to computationalism, the operator literally understands Chinese, because she implements the same program as is possessed by those who understand Chinese.7
Searle argues, however, that the operator does not understand a word of Chinese,
indeed does not even know that she (via the robot's utterances) is speaking a lan-
guage. She just follows the rules laid out in the manual. The sequences of inputted
and outputted signs are meaningless to her. Thus, Searle concludes, understanding
must be more than merely implementing the right program, and computationalism
is false.
The CRA has had an enormous impact. Even critics admit it is "perhaps the most influential and widely cited argument against strong AI" (Hauser, 1997, p. 199) and "a touchstone of philosophical inquiries into the foundations of AI"
(Rapaport, 1988, p. 83). Yet, despite the immense amount of published discussion,
I believe that the ways in which the CRA succeeds and fails, and the reasons for
its successes and failures, remain inadequately understood. I attempt to remedy
this situation by reconsidering Searle's argument in this article. Many readers will
consider the CRA already refuted and will doubt that further attention to it is
warranted, so before presenting my own analysis I explain at some length why
even the best available objections fail to defeat the CRA. I also attend throughout,
sometimes in the text but mostly in the notes, to a number of anti-CRA arguments
recently put forward in this journal by Hauser (1997).
If correct, my analysis offers some good news and some bad news for strong
AI. The good news is that, as presented by Searle, the CRA, even in its most
sophisticated and objection-resistant form, is an unsound argument that relies on
a question-begging appeal to intuition. Many critics have contended that the CRA
begs the question or relies on faulty intuitions, but no one, in my opinion, has
offered a convincing diagnosis of why it does so and thus progressed beyond a
clash of intuitions. I offer such a diagnosis here that relies on an interpretation
of computationalism as a scientific theory about the essential nature of content.8
I argue that such theories are impervious to counterexamples based on appeals
to intuitions about non-standard cases (such as the CRE), because such theories
by their nature often conflict with such pre-theoretical intuitions. So, the CRA, as
stated by Searle, fails.
The bad news for strong AI, according to my analysis, is that the CRA can be
transformed into a potentially lethal argument against computationalism simply by
reinterpreting it as an indeterminacy argument, that is, an argument that shows
that thought contents that are in fact determinate become indeterminate under a
computationalist account. The anti-computationalist conclusion of the indetermin-
acy version of the CRA admittedly rests on assumptions, arguable but in the end difficult to reject, about the determinacy of intentional content. Moreover, I argue that, contrary to Hauser's (1997) claim that the CRA is simply warmed-over in-
determinacy, in fact the CRA is a substantive advance in formulating a persuasive
indeterminacy argument against the computationalist account of content. I con-
clude that strong AI remains in peril from the indeterminacy version of the CRA,
and that the future of strong AI rests on somehow resolving the indeterminacy
challenge.
2. Failure of Existing Objections to the CRA
Searle's argument rests on the common intuition that the operator in the Chinese room does not understand Chinese, despite her successful manipulation of the robot's verbal behavior using the manual. This intuition is widely accepted as correct,
even by many strong AI proponents. If one accepts this intuition, then there would
seem to be only three possible kinds of replies that might save computationalism
and strong AI from the CRA. Two of them deny that computationalism implies
that the operator should understand Chinese. First, it might be argued that not
the operator herself but some other entity in the Chinese room situation meets
computationalist criteria for understanding Chinese, and that this other entity does
understand Chinese. Second, it might be argued that the Chinese room situation
does not contain any entity, operator or otherwise, that meets computationalist
criteria for understanding Chinese, and that in fact no entity in that situation under-
stands Chinese. Both of these kinds of replies appear in the literature in multiple
forms. I consider the rst kind of reply in the next two sections, and then turn to
the second, in each case selecting for discussion what I consider the most effect-
ive recent versions of that kind of response. I then consider the third response,
which is to accept the intuition that the operator does not have the usual, conscious
understanding of Chinese but argue that the operator unconsciously understands
Chinese. I argue that none of these objections succeed. Only then do I consider the
alternative reply, which appeals to strong AI proponents but is in my view here-
tofore without adequate theoretical grounding, that the CRE provides insufficient
grounds to believe that the critical intuition it generates is correct, thus fails to
establish that the operator does not consciously understand Chinese in the standard
sense, thus fails to refute computationalism.
2.1. INTERNALIZING THE CHINESE ROOM
The most common response to the CRE is to distinguish the operator from the
broader operator-robot-manual system and to argue that, in focusing on the oper-
ator, Searle has selected the wrong entity for his test. Computationalism implies
that if an entity is programmed in the same way as a native speaker of Chinese,
then the entity understands Chinese. But, one might argue, in the CRE it is not
the operator but rather the entire system, including the operator, the robot, and the manual, that is so programmed and thus should understand Chinese. The operator
is just one part of this system, so the intuition that she herself does not understand
Chinese is entirely consistent with computationalism, according to this objection.
Searle (1991a) ingeniously attempts to block this systems objection by modi-
fying the CRE so as to eradicate the distinction between the operator and the
broader system:
My response to the systems theory is simple: Let the individual internalize all
of these elements of the system. He memorizes the rules in the ledger and the
data banks of Chinese symbols, and he does all the calculations in his head.
The individual then incorporates the entire system. There isn't anything at all
to the system which he does not encompass. We can even get rid of the room
and suppose he works outdoors. All the same, he understands nothing of the
Chinese, and a fortiori neither does the system, because there isn't anything in the system which isn't in him. If he doesn't understand, then there is no way
the system could understand because the system is just a part of him. (p. 512)
In this amended scenario, the manual has been rewritten to apply to input and
output sequences of sounds rather than written symbols, and the operator has mem-
orized the manual's rules and internalized in her own head what were formerly the operations in the room in the robot's head. Rather than getting an input sequence
of shapes on a screen, the operator simply listens directly to a speaker; and rather
than feeding signals to a robot that makes corresponding sounds, the operator
discards the robot and utters the sounds herself directly to her interlocutor. We
may imagine that the operator has gotten so facile at following the program that
she is nearly instantaneous and virtually flawless in her responses, so there is no noticeable difference between her responses and those of someone who fluently
speaks Chinese. Placed in a situation where everyone else understands and speaks
Chinese (though she does not know what language they are speaking or even that
they are speaking a language), she turns in a perfect performance, interacting as if
she actually understood Chinese without anyone knowing that she does not.
Under these conditions, it would seem that there is no distinction to be drawn
between the operator and the system because the operator is the system. The sys-
tems objection would thus seem to become irrelevant. In this scenario, Searle
claims, there is no question that strong AI must imply that the operator herself
understands Chinese because, per hypothesis, the operator instantiates exactly the
same program as a native speaker of Chinese. And yet, Searle further claims, our
intuition remains solid that the operator does not understand Chinese; she under-
stands nothing that either she or others say, and does not know the meaning of
even one word or sentence of Chinese. She responds correctly not because she
understands the meaning of her interlocutor's assertion or her response, but because she perceives that the interlocutor makes certain sounds and, recalling the manual (or perhaps having it so well memorized that it is like a habit that is second nature), she responds in accordance with the manual's rules by making certain specified (meaningless, to her) sounds in return. As Block (1998) notes
of the operator: "When you seem to Chinese speakers to be conducting a learned discourse with them in Chinese, all you are aware of doing is thinking about what noises the program tells you to make next, given the noises you hear and what you've written on your mental scratch pad" (p. 45). She may not even know she
is speaking a language; she may think that the entire effort is an experimental
test of the limits of nonsense learning (in the tradition of psychologists' cherished
nonsense syllables), and that her interlocutors have merely memorized nonsense
sequences as test inputs. Searle concludes that instantiating the right program
cannot be what confers understanding, because in the amended CRA the operator
instantiates such a program but has no understanding.
2.2. BLOCK: RETURN OF THE SYSTEMS OBJECTION
Undaunted by Searle's claim that in the new CRA, the operator is the system, Ned
Block (1998) argues that a more sophisticated version of the systems objection suc-
ceeds against the new CRA. Just as the new CRA internalizes the system within the
operator, so Block attempts to internalize the systems objection by distinguishing
the operator's meanings from the meanings of the program she has internalized.
Block claims that the internalized program understands Chinese even though the
operator does not:
But how can it be, Searle would object, that you implement a system that
understands Chinese even though you don't understand Chinese? The sys-
tems objection rejoinder is that you implement a Chinese-understanding system
without yourself understanding Chinese or necessarily even being aware of
what you are doing under that description. The systems objection sees the
Chinese room (new and old) as an English system implementing a Chinese
system. What you are aware of are the thoughts of the English system, for
example your following instructions and consulting your internal library. But
in virtue of doing this Herculean task, you are also implementing a real in-
telligent Chinese-speaking system, and so your body houses two genuinely
distinct intelligent systems. The Chinese system also thinks, but though you
implement this thought, you are not aware of it.... Thus, you and the Chinese
system cohabit one body. Searle uses the fact that you are not aware of the
Chinese system's thoughts as an argument that it has no thoughts. But this is an
invalid argument. Real cases of multiple personalities are often cases in which
one personality is unaware of the other. (pp. 46–47)
Block argues that, although the Chinese program is implemented by the operator,
Chinese contents occur as states of the program in the operator's brain but not as the operator's contents. Note that Block acknowledges that the operator does not
possess Chinese semantic contents; he does not attempt to argue that the operator
unconsciously understands Chinese (the unconscious understanding argument is
considered in a later section). The operator is unaware of the Chinese meanings of
the steps in the program she implements, but that does not mean she unconsciously
understands them, any more than my unawareness of your contents means I un-
consciously possess your contents. Rather, Block says, it is like a case of multiple
personality disorder in which a brain contains two agents, one of which is unaware
of the other's contents; or, it is like the operator's representing a step of the program under one description and the program's representing it under another.
The major challenge for Block is to show how, within a computationalist frame-
work, the program's meanings can be different from the operator's meanings. After
all, Searle designed the new CRA to eliminate any such distinction. For the oper-
ator to implement the program is for the operator to go through every step of the
program and thus to do everything, syntactically speaking, that the program does.
Indeed, the program was (per hypothesis) selected on the basis of the very fact
that a person's (i.e., a native Chinese speaker's) implementation of the steps of the program constitutes the person's (not the program's) understanding Chinese. So, Block's objection stands or falls with his ability to explain how to create a relevant distinction between the program's and the operator's meanings within a
computationalist account of meaning.
Block thinks he can draw such a distinction partly because he misconstrues
Searle's argument as weaker than it is. Block suggests that Searle's only ground
for denying that the operator understands Chinese is that the operator is unaware of
possessing Chinese meanings. He thus claims that Searle's argument is of the form: A (the operator) is unaware of A's understanding the meanings of Chinese words
and sentences; therefore, A does not possess Chinese semantic contents. Without
assessing this argument regarding the operator's meanings,9 Block observes that it loses whatever force it has when generalized to A's lack of awareness of another entity's contents, as in: A (the operator) is unaware of B's (the program's) un-
derstanding of the meanings of Chinese words and sentences; therefore, B does not
possess Chinese semantic contents. Thus, Block concludes that Searle's argument, whatever its merits when applied to the operator's understanding, does not support
the conclusion that the program itself does not understand Chinese.
However, Searle's argument is more subtle than Block allows. Searle constructs
the internalized version of the CRE in such a way that the program exists as
thoughts in the operator's mind; each step of the program when it is running is, per hypothesis, a step in the operator's thought process. Thus, if computationalism
is correct that the program determines the content, the operator and the program
must possess the same content. That, in conjunction with the fact that the operator
understands the steps of the program only as syntactic manipulations and not as
Chinese meanings (Block concedes this), yields the conclusion that the program
cannot understand Chinese. Block rightly observes that Searle argues only that the
operator, not the program itself, lacks Chinese understanding. But that is because
Searle realizes that, if implemented syntax constitutes semantics, then the fact that
the operator does not understand Chinese implies that the program also does not,
because in the new CRE, the program and the operator necessarily go through the
same syntactically defined steps.
Block tries to justify distinguishing the operator's and program's contents by drawing an analogy between the operator–program relationship and the relation-
ship between personalities in multiple personality disorder. He notes that in such
disorders, one personality may not possess the thoughts of another in the same
brain.
One might be tempted to object that in such disorders, there are multiple selves,
and every content is a content of one of those selves, and that surely the Chinese
program is not by itself an agent or self, leaving no agent to possess the claimed
Chinese semantic contents. But this riposte would be inconclusive. Strong AI pro-
ponents might reject the assumption that semantic contents have to be someone's
semantic contents or, less heroically, might insist that the Chinese-understanding
program is so complex and capable that it is enough of a self or agent to pos-
sess contents. The latter claim is suggested by Block's comment that the Chinese program is a genuinely distinct intelligent system.
However, there is another, more compelling reason why the multiple-personality
analogy is not supportive of Block's analysis: the program and operator are not sufficiently distinct to justify the analogy. Unlike the divergent contents of multiple
personalities based on divergent brain states that implement different programs, the
occurrence of the Chinese-understanding program's states is not distinguishable from the occurrence of the operator's states when she is implementing the program.
Note that it might also be possible in principle for the very same brain event to
simultaneously constitute steps in two different programs (perhaps implemented
by two different selves) and thus to realize two different meanings. But, within a
computationalist framework, such an occurrence of two different meanings would
depend on the brain state's constituting different steps in two different simul-
taneously running programs that form the context of its occurrence. But in the
internalized CRE, there is nothing analogous to such different programs that might
make the meaning of a syntactic step different for the operator and the program.
The operator implements the program by going through the steps of the program,
thus must possess the same computationalist meanings as the program. Block's multiple-selves analogy fails to tear asunder the meanings Searle's new CRA joins
together.
Block also claims that the operator and the program understand a given step's
meaning under two different descriptions; the operator understands the step un-
der an English description of its syntactic shape, while the program understands
the step under a description in terms of its semantic content in Chinese. These
divergent descriptions are claimed to allow for two different meanings despite the
fact that the identical step occurs within the operators implementation and the
programs running.
An identical event can be known to two agents under different descriptions.
However, according to computationalism, the meaning of a description (which is
itself, after all, a semantic content) must be determined by the formal structure of
an implemented program, and the operator's and program's implemented formal
structures are, per hypothesis, identical. Block makes much of the fact that the
operator describes the program's syntactic steps to herself in English, whereas the
program itself is not in English. However, the idea that the language used by the
operator to describe the steps of the program should matter to the meanings of the
program's steps is antithetical to the core computationalist hypothesis (on which
the generalization from the nature of human thought to the ability of computers to
think depends) that meaning is invariant over ways of implementing a program. So,
given that the operator implements the program, a computationalist cannot hold
that the program understands Chinese but the operator does not just because the
operator is aware of the programs syntactic steps under an English description.
In any event, the fact that there are two languages is an inessential element of the
new CRE. One could reframe the CRE so that it describes a syntactic-programming
idiot savant who as a child learned her first language by extrapolating a set of
formal syntactic rules from the speech sounds of those around her, and which
she subsequently learns to habitually follow without needing a meta-language to
describe the rules. Or, the operator could have overlearned the program so that it is
habitualized and no English thought need intervene when going from step to step
of the program as dictated by the manual. In either case, the operator simply thinks
via the programs transformations with no meta-level chatter in another language,
like a mathematician who simply sees directly how to transform mathematical ex-
pressions without needing to think about it in English. Despite there being only one
language involved, such a person would have no semantic understanding despite
her perfect linguistic performance.
In sum, the internalized systems objection that Chinese understanding is pos-
sessed only by the program and not by the operator fails because, given the intern-
alization of the program, the features that, according to computationalism, yield
a semantic content for the program also occur in and yield the same content in
the operator. Within the new CRA as Searle constructs it, there is simply no way
for the program to understand Chinese without the operator possessing the same
understanding, if computationalism is true.
2.3. FODOR: THE DEVIANT CAUSAL CHAIN OBJECTION
Another way to try to evade the CRA is to accept that there is no understanding of
Chinese by any entity in the CRE situation, but to argue that this is not a counter-
example to computationalism because no entity in the CRE situation satisfies the
computationalist criterion for Chinese understanding. Most objections of this sort
hold that, for one reason or another, the micro-functioning of the CRE's program inadequately reflects the functioning of a Chinese speaker, so the right program
to yield Chinese understanding has not been implemented.
The CRE is designed to avoid this sort of objection. The program, per hypo-
thesis, mimics the program of a Chinese speaker in all significant details. If there is
a syntactically definable program for Chinese understanding, as computationalism
implies there must be, then it is precisely matched by the rules in the manual
in the Chinese room and, consequently, by the mental processes of the operator
who internalizes the manual in the new CRE. So, it might seem that there can be
no successful objection based on a claimed lack of correspondence between the
Chinese speaker's program and the program implemented by the operator in the
CRE.
However, there is one feature of the CRE implementation that is not analogous
to the typical Chinese speaker's program, namely, the deliberate conscious imple-
mentation of the steps of the program by an operator. Jerry Fodor (1991a, b) puts
forward a distinctive version of the micro-functioning objection that focuses on the
role of the operator. Fodor accepts that neither the operator nor the overall system
understands Chinese: "I do think that it is obvious that Searle's setup doesn't understand Chinese" (1991b, p. 525). He also accepts that the manual's rules exactly
mimic the program of a Chinese speaker. But he argues that the program alone
would normally understand Chinese by itself, and that it is only the intrusion of the
operator into the process that causes the program and the system not to understand
Chinese. The reason, he says, is that introduction of the operator renders the im-
plemented program non-equivalent to the original program of the Chinese speaker
on whom it was modeled. Fodor thus stands the systems objection on its head;
rather than arguing that the operator's interaction with the program yields Chinese
understanding, he argues that the introduction of the operator undermines under-
standing that would otherwise exist, by rendering otherwise equivalent programs
non-equivalent.
Recall that in constructing the CRA, Searle assumes for the sake of reducing
computationalism to absurdity that the operator-implemented syntactic manipu-
lation is equivalent, as a program, to the syntactic program that (per computa-
tionalist hypothesis) constitutes Chinese understanding. Thus, Searle assumes that
introducing the operator preserves Turing-machine equivalence. Block (see above)
never challenges this assumption, and that makes it impossible for him to dis-
tinguish the operator's and program's contents; if the programs are equivalent,
computationalism implies they must constitute the same contents.
Fodor challenges the assumption of program equivalence by arguing that equi-
valence depends not only on the program's formal steps but also on how the transitions between the program's steps are implemented. If such transitions are not
direct and involve further mediating states (e.g., conscious, deliberate actions),
then, he argues, those mediating states are in effect part of the program, and the
program is not equivalent to programs lacking such mediating steps.
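One way to make Fodor's point concrete is with a toy illustration of my own (not Fodor's formalism): compare a direct realization of a transition table with one in which an operator's deliberate act mediates every transition. The mediated run passes through extra intervening states, and Fodor's claim is that this difference destroys program equivalence.

# Toy sketch (my illustration, not Fodor's own formalism): two realizations of
# the same transition table, one direct and one mediated by an "operator" step.
TABLE = {"S1": "S2", "S2": "S3", "S3": "S1"}

def direct_run(state, steps):
    """Each state token is the immediate, proximal cause of the next."""
    trace = [state]
    for _ in range(steps):
        state = TABLE[state]
        trace.append(state)
    return trace

def mediated_run(state, steps):
    """An operator's act intervenes between each state and its successor."""
    trace = [state]
    for _ in range(steps):
        trace.append(("OPERATOR", f"read {state}, write {TABLE[state]}"))
        state = TABLE[state]
        trace.append(state)
    return trace

print(direct_run("S1", 3))    # ['S1', 'S2', 'S3', 'S1']
print(mediated_run("S1", 3))  # same state sequence, with mediating steps interleaved

Whether the extra mediating steps matter, and so whether the two runs instantiate the same machine, is precisely what Searle and Fodor go on to dispute.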
Fodor's (1991a) argument against program equivalence between operator-im-
plemented and direct-causation programs relies heavily on the example of percep-
tion:
It is, for example, extremely plausible that a perceives b can be true only
where there is the right kind of causal connection between a and b.... For ex-
ample, suppose we interpolated a little man between a and b, whose function
is to report to a on the presence of b. We would then have (inter alia) a sort
of causal link from a to b, but we wouldn't have the sort of causal link that is
required for a to perceive b. It would, of course, be a fallacy to argue from the
fact that this causal linkage fails to reconstruct perception to the conclusion that
no causal linkage would succeed. Searle's argument...is a fallacy of precisely this sort. (pp. 520–521)
That is, imagine that you are looking at a scene and that your experience is just like
the experience you would have if you were perceiving the scene. However, in fact
someone is blocking your sight and relaying signals to your brain that give you the
experience you would have if you were seeing the scene. Fodor observes that the
resultant experiences, even though they are the same as would result from a direct
perception of the scene and indeed are caused by the scene (a necessary condition
for perception under most analyses) would not be considered a genuine case of
perception because they are caused by the scene not directly but only indirectly
via the intervening cause of an operator's actions. The introduction of the deviant
causal chain involving the mediation of an agent yields (for whatever reason) an
intuition that genuine perception is not occurring. The very concept of perception
has built into it the requirement that the causation of the experience by the scene
be direct, not mediated by an operator.
Fodor argues that Searle has created a similar deviant causal chain by adding
the program operator as a relayer of inputs and outputs in the CRE. Fodor claims
that it is this deviant causal pathway (which is not isomorphic to a real Chinese speaker, in whom state transitions occur without such conscious mediation) that
is responsible for the intuition that the operator lacks genuine understanding of
Chinese, analogous to the intuition that relayed signals do not constitute genu-
ine perception. Thus, although the intuition that the operator does not understand
Chinese is correct, it provides no argument against computationalism because it is
not due to the failure of syntax (plus causal relations to the outside world, Fodor adds) to yield semantics. Rather, the intuition is due to the fact that the operator's implementation introduces deviations from the native speaker's program: "All that Searle's example shows is that the kind of causal linkage he imagines (one that is, in effect, mediated by a man sitting in the head of a robot) is, unsurprisingly, not the right kind" (Fodor, 1991a, p. 520).
Fodor (1991b) goes so far as to claim that the intervention of the operator im-
plies that the CRE's setup is not a Turing machine at all because a transition to a
new state of the system is not directly and proximally caused by the prior state:
"When a machine table requires that a token of state-type Y succeeds a token of the state-type X, nothing counts as an instantiation of the table unless its tokening of X is the effective (immediate, proximal) cause of its tokening of Y" (1991b, p. 525). Fodor claims this requirement would "surely rule out systems in which the mechanisms by which S1 tokens bring about S2 tokens involve a little man who applies the rule 'if you see an S1, write down S2'" (1991b, p. 525). He concludes:
"Even though the program the guy in the room follows is the same program that a Chinese speaker's brain follows, Searle's setup does not instantiate the machine that the brain instantiates" (1991b, p. 525).
Searle's (1991b) response to Fodor contains two elements. First, regarding Fodor's suggestion that the determinants of meaning include not only the CRE's
symbol manipulation but also causal relations between symbol occurrences and
features of the outside world, Searle answers that the addition of such causal rela-
tions will not change the intuition that the operator does not understand Chinese: "No matter what caused the token, the agent still doesn't understand Chinese.... If the causal linkages are just matters of fact about the relations between the symbols and the outside world, they will never by themselves give any interpretation to the symbols; they will carry by themselves no intentional content" (1991b, p. 523).
On this point, Searle is surely correct. Including causal relations to the external
world in the operators manual (i.e., determining syntactic transitions not only
by past syntactic inputs but also by what caused the inputs) would not affect the
experiment's outcome because, whatever caused the inputs, the processing of the
symbols could still proceed without any understanding of the semantic content of
the symbols. Just as there is intuitively a step from syntactic structure to semantic
content which is exploited by the CRE, so there is also intuitively a step from the
cause of the occurrence of a syntactic structure to semantic content, and a modified
CRE could exploit that intuitive gap.
Second, regarding Fodor's crucial claim that the operator's intervention leads to
program non-equivalence and even non-Turing machine status, Searle replies that
it is absurd to think that inclusion of an operator who consciously implements a
program in itself alters the program and that such a system is not a Turing machine:
"To suppose that the idea of implementing a computer program, by definition, rules out the possibility of the conscious implementation of the steps of the program is, frankly, preposterous" (1991c, p. 525). Searle offers two examples to show
that such intervention preserves Turing machine equivalence. First, he observes
that either clerks or adding machines can be used to add figures, and both are
surely instantiating the same addition program. Second, he imagines Martian
creatures imported only because they can consciously implement certain computer
programs at speeds much faster than computers, and argues that surely they would
be considered to literally implement the relevant computer programs.
Searle's examples are persuasive counterexamples to Fodor's claim that Turing machine instantiation necessarily excludes conscious implementation. However, the fact that Fodor's universal generalization regarding the non-equivalence of
consciously implemented and direct-causation programs is false does not imply
that he is wrong about the CRE. Even if introduction of conscious implementation
sometimes does not violate Turing-machine equivalence, it is still possible that it
sometimes does so. Nor is Fodor's generalization the only ground for his objection to the CRE. Rather, Fodor's analogy to perception, where it does seem that in-
troduction of conscious implementation violates program equivalence, is the most compelling aspect of Fodor's argument.
Searle's response does not address the perception analogy. Rather than refuting Fodor's universal generalization and then focusing on the case at hand, Searle in-
stead asserts his own opposite universal generalization on the basis of his above
examples. That is, he claims that inserting conscious implementation always pre-
serves program equivalence and is just an instance of Fodor's requirement that one step be directly caused by another: "Even if we accept [Fodor's] requirement that there must be a (immediate proximal) causal connection between the tokenings, it does not violate that condition to suppose that the causal connection is brought about through the (immediate proximal) conscious agency of someone going through the steps" (1991c, p. 525). On the basis of this generalization, Searle concludes that the CRE operator's consciously implemented program is equivalent to the modeled Chinese speaker's program.
However, Searle's examples do not convincingly establish his general conclu-
sion that conscious implementation always preserves program equivalence. The
limitation of his reply lies in the fact that both of his examples involve relations
between artifactual programs and their conscious implementations, whereas Fodor's
example of perception involves conscious implementation of a naturally occurring
biological process that may be considered to have an essential nature in standard
cases that excludes conscious mediation at some points. The distinction between
implementation of artifactually constructed versus naturally occurring programs
could be crucial to intuitions regarding program equivalence because (as Searle
himself points out in other contexts) artifacts like programs are subject to derived
attributions based on what they were designed to do.
How do Searle's examples involve relations to artifacts? The first example relies
on an artifact, the adding machine, that has been designed to carry out a program,
addition, that is consciously implemented by clerks. The example shows that
when a machine is designed to run a program that humans (actually or potentially)
consciously implement, the same program is intuitively instantiated (i.e., the pro-
grams are intuitively Turing-machine equivalent) based on the artifact's designed
function of reproducing the relevant steps of the consciously implemented pro-
gram. Similarly, the conscious-Martian-calculator example involves the Martians'
conscious implementation of human computer programs, which are in turn arti-
facts designed to substitute for conscious human implementation. The Martians
are intuitively understood to be implementing the same program as the computers
because that is the Martians' function in being brought to Earth (in this regard
they are living artifacts, functionally speaking), and because they (literally) and
the computers (by functionally derived attribution) are both understood to have the
function of implementing the same program that a human (whether actually or, due
to human limitations, only potentially) would consciously implement.10
Fodor's prime example, however, involves no artifacts but rather concerns con-
scious implementation that intervenes in the naturally occurring perceptual pro-
cess. Fodor is surely correct that an operator's implementation of perception yields,
in some intuitive sense, non-genuine perception, even if the process is otherwise
identical to normal perception. Searle's adding-machine and Martian-calculator examples refute Fodor's claim that conscious implementation never preserves program equivalence, but Fodor's perception example equally refutes Searle's claim
that conscious implementation always preserves program equivalence. The arti-
fact/natural distinction could be critical here in explaining attributions of equival-
ence. Like perception, processes of human thought, understanding, and so on are
also instances of naturally occurring systems in which the essence of the program
(assuming such processes consist of programs) could at some points include lack
of conscious implementation and could require direct, unmediated causal relations
between steps, as Fodor suggests. Searle fails to address this possibility and thus
fails to refute Fodor's objection. So, it remains an open question whether the con-
scious implementation of the Chinese understanding program is inconsistent with
genuine Chinese understanding.
How can we assess Fodor's claim that the deviant causal chain introduced by the operator's intervention in the CRE, and not the syntactic nature of the operator's
program, is the source of the intuition that there is no Chinese understanding?
The only (admittedly imperfect) answer seems to be to examine each possible
source of such an intuition within the deviant causal chain. (Note that the issue
here concerns conscious implementation of the program, not conscious awareness
of the program's steps. A hyper-aware linguistics processor who is consciously
aware of every step in linguistic processing can still understand Chinese.)
Precisely which features of the deviant causal chain due to the operator's im-
plementation might undermine the intuition that there is Chinese understanding?
A strict analogy to the perception example would suggest the following: The fact
that the operator intervenes between the sending of the verbal input from an in-
terlocutor and the initiation of the processing of the input by the program yields
non-equivalence. However, this sort of intervention has nothing whatever to do
with whether the subject is judged to understand the language. It is the subject's
understanding of the arriving sentences, not the causal relation to the emitter of
the arriving sentences, that is at issue. Unlike the perception case, there is noth-
ing in the concept of language understanding that changes an understander into a
non-understander if, rather than the program directly receiving inputs and directly
emitting outputs, an operator mediates between the arrival of verbal inputs and in-
ternal processing, or between the results of the processing and outputs. So, if taken
literally, the analogy Fodor attempts to forge between language understanding and
perception fails. The concept of perception (for whatever reason) is indeed partly
about the direct nature of the causal relation between the perceiver and the world,
but language understanding is not about the nature of such causal relations. Thus,
the intervention of an operator between the world and internal processing in the
CRE cannot explain the resulting no-understanding intuition.
Nor can the source of the no-understanding intuition be the sheer occurrence of
some conscious implementation in moving from one step of the program to another
in the internal language understanding process. Such conscious implementation is
not in itself inconsistent with linguistic understanding. Although some steps in
linguistic processing, such as immediate understanding of the meanings of expres-
sions uttered in one's native language, are typically involuntary for fluent speakers,
the recipient of linguistic input sometimes has to consciously implement the deci-
phering process (e.g., in learning to understand a second language) and certainly
may have to voluntarily formulate the output. In these respects, linguistic perform-
ance is different from perception, which is inherently involuntary once sense organs
are in a receptive position, and is one-way in that there is no perceptual output.
Nor can the source of the no-understanding intuition be the intervention of an
external agent (as in the original, robotic CRE) who does not herself possess
the program (which is in the manual). Such intervention by an external agent
is salient in Fodor's perception example. However, Searle's new CRE, described
above, internalizes the entire program within the operator, so that the operator is
directly receiving verbal input, internally implementing all program steps involved
in understanding the input, and directly responding herself. Thus, the new CRE
eliminates any causal-chain deviance due to an external implementer.
The only remaining explanation of why the deviant causal chain due to con-
scious implementation would yield the no-understanding intuition lies in precisely
which program steps are implemented. Although linguistic responses can involve
conscious implementation, some steps appear to be inherently automatic and in-
voluntary. In contrast, the CRE involves such implementation of every step in the
program. According to this diagnosis, introducing the operator creates a situation
in which certain steps in the internal understanding process that are normally inher-
ently automatic become voluntarily implemented, thus introducing steps that fail
to be equivalent to the hypothesized essential nature of the natural program.
This possibility can be addressed by amending the new CRE to include not just
internalization but habituation and automation. Imagine that the operator, with the
program (syntactically identical to the native speaker's) internalized, overlearns and thus automatically and unreflectively implements the syntactic steps that are
implemented automatically by native speakers, with one step directly causing the
next without conscious or deliberate intervention. This habituated CRE contains
none of the above elements that might allow the deviant causal chain pointed to
by Fodor to be the source of our intuition that there is no understanding. Yet, the
intuition remains that the operator does not understand a word of Chinese and that
(to use Ned Block's example) when the operator seems to be asking for the salt
in Chinese, she is really thinking in English about what noises and gestures the
program dictates she should produce next.
The intuition that the CRE's operator lacks Chinese understanding is thus in-
dependent of all plausible potential sources in the deviant causal chain due to
operator implementation. The perception example turns out to be misleading be-
cause the concept of perception involves assumptions about a direct causal link
between perceiver and environment that are not present in the concept of language
understanding. Consequently, the proposed deviant causal chain cannot be held
responsible for the intuition that the CRE's operator does not understand Chinese. Searle's alternative account, that the fact that the operator is implementing a sheerly syntactic program yields the CRE's negative intuition, remains undefeated and the
most plausible account.
2.4. DOES THE OPERATOR UNCONSCIOUSLY UNDERSTAND CHINESE?
For those defenders of strong AI who accept that the CRE operator does not under-
stand Chinese in the standard way, a remaining gambit is to suggest that although
she does not consciously understand Chinese (because her conscious contents are
about syntactic strings), she does unconsciously understand Chinese and has un-
conscious Chinese semantic contents. As in cases of people who demonstrate un-
conscious knowledge of languages they are unaware they understand, it is claimed
that although the operator cannot consciously access her Chinese understanding,
she nonetheless possesses such understanding unconsciously. Thus, for example,
Hauser (1997) argues as follows:
Even supposing one could respond passably in Chinese by the envisaged met-
hod without coming to have any shred of consciousness of the meanings of
Chinese symbols, it still does not follow that one fails, thereby, to understand.
Perhaps one understands unconsciously. In the usual case, when someone
doesn't understand a word of Chinese, this is apparent both from the first-
person point of view of the agent and the third-person perspective of the
querents. The envisaged scenario is designedly abnormal in just this regard:
third-person and first-person evidence of understanding drastically diverge. To credit one's introspective sense of not understanding in the face of overwhelming evidence to the contrary tenders overriding epistemic privileges to first-
person reports. This makes the crucial inference from seeming to oneself not to
understand to really not understanding objectionably theory dependent. Func-
tionalism does not so privilege the first-person.... Here the troublesome result for Functionalism...only follows if something like Searle's Cartesian identification of thought with private experiencing...is already (question-beggingly) assumed. Conflicting intuitions about the Chinese room and like scenarios confirm this. Privileging the first person fatally biases the thought experiment. (Hauser, 1997, pp. 214–215)
First, there is no question-begging privileging of the first person perspective in
the CRE. Rather, the thought experiment is an attempt to demonstrate that third
person competence is not sufficient for content attribution and that the first person
perspective is relevant to such attributions. The example is not theory laden, but
rather provides a test of various theories.
Note that the unconscious-content account is not the same as Block's internal-
ized systems objection that the operator and program have different contents, and
thus is not necessarily subject to the same objections. According to the unconscious-
content account, the operator herself, not the program considered independently
of the operator, unconsciously possesses the Chinese meanings in virtue of her
implementation of the program.
To assess the objection that the operator unconsciously understands Chinese,
one has to have some notion of when it can be said that an agent has specific
unconscious contents. Searle's (1992) connection principle is relevant here. (Indeed, a potentially important and non-obvious link between Searle's Chinese room
and connection principle arguments is suggested.) Searle argues (very roughly)
that one cannot be said to possess a genuine unconscious content unless at least
in principle the content could become conscious. According to this account, the
operator cannot be said to unconsciously understand Chinese if there is nothing
about the unconscious contents in the operators brain that would potentially allow
them to come to consciousness as genuine (not merely syntactic surrogates of)
semantic contents.
Whether or not Searle has gotten the account of unconscious mentation right,
he is certainly correct that something more than third-person dispositions to act as
if one has a content is required for attribution of unconscious content. The stand-
ard refutations of logical-behaviorist and Turing-test accounts of content show as
much. Moreover, the operator in the Chinese room does not possess either of the
two features that would typically support such attribution of genuine unconscious
Chinese semantic contents. First, the primary method of verifying unconscious
content, namely, by first-person report when the contents come into conscious-
ness, would yield the conclusion that the contents are syntactic descriptions, not
semantic understandings. Second, there is no need to postulate unconscious un-
derstanding for explanatory purposes; in any instance of a conscious practical
reasoning sequence (e.g., the operator's belief and desire reasons, "The manual says I should make sound S" and "I want to do what the manual says I should do," lead to the action of uttering sound S), the attribution of unconscious semantic contents is explanatorily superfluous; for example, postulating that the operator unconsciously understands that S means "pass the salt" in Chinese is unnecessary to
explain the utterance of S because the utterance is fully explained by the syntactic-
based reasons that led to the action. I conclude that attributing unconscious Chinese
understanding to the operator cannot be coherently defended.
Defense of strong AI based on the unconscious-content reply may seem more
attractive than it is because of a common failure to distinguish between possessing a
content unconsciously and not possessing the content at all. Consider, for example,
Hauser's (1997) illustration:
During the Second World War, Wrens (Women Royal Engineers) blindly
deciphered German naval communications following programs of Turing's devising until machines (called bombes) replaced the Wrens. Like Searle in the
room the Wrens did their appointed tasks without knowing what any of it was
for but rather than conclude (with Searle) that neither Wrens nor bombes were
really deciphering, Turing conjectured both were doing so and, in so doing,
doing something intellectually unawares (Hodges, 1983, p. 211). (Note 22, pp. 222–223)
There is clearly a derived sense in which the Wrens were deciphering German,
namely, Turing used them to implement his program the function of which was to
decipher German. There is also a sense in which they were doing this unawares,
namely, they had no idea what service they performed in following Turing's pro-
gram. But the critical question for strong AI and the CRE is whether the Wrens
literally understood the German they deciphered or the English meanings of the
syntactic strings that emerged from Turing's program. The answer (taking the example's description at face value) is that they had no idea of these meanings, or
even that such meanings existed. The sense in which they were unaware of that
content is not the sense in which one is unaware of content one possesses un-
consciously; it is the sense in which one is just plain ignorant and does not possess
the content at all, consciously or unconsciously (e.g., the sense in which my three-
year-old son Zachy is unaware that when he moves his hand he is gravitationally
inuencing Jupiter). The Wrens, it appears, had no idea, conscious or unconscious,
of the meanings they were manipulating or even that they were manipulating mean-
ings. Intentional descriptions such as "deciphering" are thus applied to the Wrens
only in a non-literal, derived sense, based on the function their actions performed
for Turing, and not because semantic contents were possessed unconsciously.
Finally, note that a basic challenge to the unconscious-content reply is to ex-
plain what grounds there are for attributing one content rather than another to the
unconscious. This portends an issue, indeterminacy of meaning, that is central to
the analysis below.
3. Why the Chinese Room Argument is Unsound
3.1. THE ESSENTIALIST OBJECTION TO THE CHINESE ROOM EXPERIMENT
The CRE yields the intuition that the operator does not understand Chinese, from
which it is concluded that the operator does not in fact understand Chinese. The
CRA uses this result to argue that computationalism cannot be true. This argument
is a powerful one for those who share the critical no-understanding intuition about
the CRE and take it at face value. This response is widely shared even among
Searle's opponents. The objections considered earlier all started from the premise
that the operator in the CRE does not understand Chinese, at least consciously.
However, many others in the AI community either do not share the intuition or
do not take it at face value. To them, it seems that, irrespective of pre-theoretical
intuitions, the operator literally and consciously does understand Chinese in virtue
of her following the syntactic program. The dispute thus comes down to a matter of
conflicting intuitions, or to a difference over how seriously to take such intuitions. Such conflicts of intuition cannot be resolved unless deeper principles can be cited
as to why one intuition or another is or is not good evidence for deciding the issue
at stake.
In this section, I am going to develop a new kind of objection to the CRA, which
I will dub the "essentialist objection." The essentialist objection provides a theor-
etical rationale for concluding that the common pre-theoretical intuition that the
Chinese room operator does not understand Chinese is not an appropriate reason
for concluding that she does not in fact understand Chinese. I do not deny that there
is such a widely shared intuition. Rather, I argue that there are good reasons why
such an intuition cannot be taken at face value and thus that the intuition does not
support Searles broader argument. The essentialist objection is different from the
three objections considered earlier because it attacks the premise, on which those
objections are based, that the CRE-generated intuition shows that the operator does
not consciously understand Chinese. It is also different from the usual rejections of
that intuition in allowing that the intuition is broadly shared but providing a theor-
etical rationale for nonetheless rejecting the intuition as determinative of whether
the operator in fact understands Chinese.
Computationalism was never claimed to entirely conform to our pre-theoretical
intuitions about intentional content in all possible cases. Nor was it claimed to be
a conceptual analysis of what we intuitively mean by meaning or content. Rather,
computationalism is best considered a theoretical claim about the essence of con-
tent, that is, about what as a matter of scientic fact turns out to constitute content.
The claim is that, in standard cases of human thought, to have a certain intentional
content is in fact to be in a certain kind of state produced by the running of a certain
syntactically defined program. As Block (1998) puts it: "The symbol manipulation view of the mind is not a proposal about our everyday conception.... We find the symbol manipulation theory of the mind plausible as an empirical theory" (p. 47). According to this construal, strong AI is an empirical claim about what constitutes the essence of meaning, in exactly the way that "water is H2O" is an empirical claim about what constitutes the essence of water.
A theory about the essence of the things referred to by a concept often re-
veals how to extend the concept to new and surprising instances, with consequent
realignments of intuitions. In such extensions of concepts to novel cases, the pres-
ence of the identified essential property overrides previous intuitions based on superficial properties. This sort of counter-intuitive recategorization is one of the distinctive consequences and scientific strengths of essentialist theorizing. Thus, for example, St. Elmo's fire is not fire, while rust is a slow form of fire; lightning is electricity; the sun is a star; whales are not fish; there are non-green forms of jade; etc. Consequently, proposed counterexamples to essentialist proposals that rely heavily on pre-theoretical intuitions about specific non-standard examples do not carry much weight, because it is not clear beforehand where such examples
will fall after the essentialist criterion is applied.
Imagine, for example, rejecting the claim that ice is the same substance as water
on the grounds that our pre-theoretical intuitions are clear that nothing solid could
be water. Many coherent scientific theories have been rejected on such spurious grounds. For example, the fact that the infant's pleasure in sucking at the breast is pre-theoretically a paradigm case of non-sexual pleasure was cited by many critics as a sufficient refutation of Freud's claim that infantile oral pleasures are sexual. However, Freud's claim was that infantile sucking and standard sexual activities share the same underlying sexual motivational energy source and so, despite pre-theoretical intuitions to the contrary, are essentially the same motivationally and fall under the category "sexual." Whether Freud was right or wrong, his claim could not be refuted simply by consulting powerful pre-theoretical intuitions that the infant's sucking pleasure is non-sexual.
Computationalism holds that the essence of standard cases of human intentional
content is the running of certain formal programs, and if that is so, then anything
that shares that essence also has intentional content. Searle's CRE presents a non-standard human instance that possesses that essence but violates our pre-theoretical intuitions regarding possession of content. Searle takes our intuitions to be determinative of whether the CRE's operator understands Chinese. The essentialist reply is that Searle's reliance on such intuitions in a very non-standard case is not a persuasive way of refuting computationalism's essentialist claim. The acceptability
of that claim depends on whether computationalism can successfully explain stand-
ard, prototypical cases of content, as in typical human thought and understanding.
If the proposal works there (and the burden of proof is on the computationalist
to show that it does), then content can be justiably attributed to non-standard
instances sharing the identied essence (such as the operator in the CRE), whatever
the pre-theoretical intuitions about such non-standard instances.
3.2. UNSOUNDNESS OF THE CHINESE ROOM ARGUMENT
The essentialist reply points to a central problem with the CRA: the argument as
presented is only as deep as the intuition about the CRE example itself. There is
no deeper non-question-begging argument to which the example is used to point
that would justify accepting this particular example as a sufcient arbiter of the
theory of the essence of intentionality. It is just such strongly intuitive stand-alone
counterexamples that are most likely to offer misleading intuitions about essences
(e.g., ice is not water; whales are fish; white stones are not jade).
Searle (1997) asserts to the contrary that the CRE provides the basis for a
simple and decisive argument (p. 11) against computationalism, so it is im-
portant to assess whether the essentialist objection to the CRE survives Searle's
formulation of the CRA, which goes as follows:
1. Programs are entirely syntactical.
2. Minds have a semantics.
3. Syntax is not the same as, nor by itself sufficient for, semantics.
Therefore, programs are not minds. Q.E.D. (pp. 11–12).
Premises 1 and 2 are obviously true and the argument appears valid, so the argument's soundness turns entirely on premise 3 (see Note 11). Clearly, premise 3, being pretty
much a straightforward denial of computationalism, begs the question unless some
justification is provided. Searle states: "In order to refute the argument you would have to show that one of those premises is false, and that is not a likely prospect" (p. 11). That is an overly demanding requirement, given that Searle claims to demonstrate that computationalism is false. To refute Searle's argument – that is, to show that Searle does not succeed in refuting computationalism – one need only show that Searle does not successfully and without begging any questions
establish premise 3.
The only evidence offered for premise 3 is the CRE and the associated intu-
ition that the operator does not understand Chinese, as Searle's own explanation indicates: "Step 3 states the general principle that the Chinese Room thought experiment illustrates: merely manipulating formal symbols is not in and of itself constitutive of having semantic contents, nor is it sufficient by itself to guarantee the presence of semantic contents" (p. 12). The CRE is supposed to show that, in
at least one case, syntax does not constitute semantics, based on the intuition that
the CRE's operator does not understand Chinese. However, the no-semantics-from-
syntax intuition is precisely what strong AI proponents are challenging with their
computationalist theory of content, so supporting premise 3 by relying on the pre-
theoretical intuition that there is no understanding in the non-standard CRE begs
the question.
So, strong AI proponents – even those who are pulled in the direction of Searle's intuitions about the Chinese room operator – can justifiably complain that Searle begs the question of computationalism. He does so by choosing for
his counterexample a non-standard case where, as it happens, computationalism
dictates that traditional intuitions are incorrect, and he does not offer any inde-
pendent non-question-begging reason for supporting the traditional intuition over
the proposed essentialist theory. Think here again of those who vigorously attacked
Freud's theory by focusing on the strong intuition that there is nothing sexual about babies' sucking at the breast, thus begging the question of whether the the-
ory, which aspired to overturn precisely this sort of pre-theoretical intuition, was
correct. Or, imagine someone denying that white jadeite is a form of jade because
it is not green. Strong AI proponents may justifiably object that exploiting such
pre-theoretical intuitions is an unsound way to critique a theory that by its nature
challenges the pre-theoretical understanding of non-standard examples.
3.3. WHAT KIND OF THEORY IS COMPUTATIONALISM?: FAILURE OF THE
ONTOLOGICAL REDUCTIONIST REPLY TO THE ESSENTIALIST OBJECTION
The above critique of the CRA depends on interpreting computationalism as an
essentialist theory about content. A defender of the CRA might object that compu-
tationalism is not this kind of theory at all, but rather a reductionist theory that has
to precisely track intuitions. Such an objection might, for example, go as follows:
You make it out as if strong AI is just another essentialist scientific claim. Well, it isn't really. It's a reductionist claim. It is that mental states are nothing but computational states. The problem is that all forms of reductionism have to track the original, intuitive phenomenon on which the reduction was supposed to be based. But they can't do that in the case of the Chinese room. So I don't agree that strong AI was intended to be like the oxidization theory of fire or the atomic theory of matter. It's reductionist in a way that is more like traditional functionalism or behaviorism, where the point is to show that common notions can be systematically reduced to the proposed notions.
The objection suggests that I have mistaken strong AI for an essentialist theory
when it is really a reductionist theory. It should first be noted that the term "reduction" itself is subject to the same ambiguity. There is an obvious sense in which an essentialist theory is a reductionist theory, namely, a theoretical reduction. For example, one reduces heat to molecular motion in virtue of the theory that the essence of heat is molecular motion, and one reduces elements to atomic structure in the atomic theory of matter. In such theories, to use the language of the objection, one kind of thing is claimed to be "nothing but" another kind of thing. For example, fire is claimed to be "nothing but" oxidation and heat is claimed to be "nothing but" molecular motion in the respective theoretical reductions. Reduction in this
sense is nothing but essentialist theorizing, and surely need not precisely track
pre-theoretical intuitions, as the earlier examples show.
The objection, however, appears to be that there is another form of reduction,
which we might label "ontological reduction." (Admittedly, this phrase has the
same ambiguity, but I could not think of any better label.) This form of reduction
is not essentialist and does not aim to provide a scientic theory of the nature of
the phenomenon in question. Nor is it an analysis of the meaning of our ordinary
concept (which computationalism clearly is not). Rather, it aims to show that we
can reduce our overall ontology by exhaustively translating statements about one
type of thing into statements about another type of thing, and that this elimination
of a basic ontological category can be achieved while retaining our crucial intuitive
beliefs and without substantial loss of expressive power. Such an account asserts
that we can consider things of type A to be nothing but things of type B without
loss for ontological purposes; it does not assert that As are literally nothing but
Bs (i.e., that As are literally constituted by Bs), because that would involve either
a conceptual analytic or theoretical/essentialist claim, neither of which necessarily
apply.
For example, the claim that numbers are nothing but sets is not an essential-
ist theory of what numbers have been scientically discovered to be, nor is it a
conceptual analysis of our ordinary concept of number. Rather, it is a claim that in
principle we don't need to postulate numbers as an irreducible ontological category in addition to sets because the language of set theory is sufficient to capture all
the distinctions of interest about numbers, so in principle the expressive power of
our number ontology can be gotten without any additional ontological assump-
tions beyond those already implicit in set theory. Similarly, one might argue (I
don't agree, but leave that aside) that logical behaviorism and functionalism are
claims not about the essence of mental states or the concept of a mental state
but rather about the reducibility of talk about mental states to talk about behavi-
oral dispositions or certain causal relations, thus are claims that mental states can
be considered nothing but behavioral dispositions or certain causal relations for
ontological purposes.
Suppose for the sake of argument that computationalism is indeed an attempt
at ontological reduction in the above sense. Does that imply that computationalism
must track pre-theoretical intuitions and must not make anti-intuitive claims about
content? I don't believe so.
It is just not true that ontological reductions must exactly track pre-theoretical
intuitions and must not yield odd new assertions. For example, the prototypical
ontological reduction, the reduction of numbers to sets, implies that the number 2
is a set, certainly an anti-intuitive claim that does not track our number statements.
And, depending on which sets one identifies with the natural numbers, there are all sorts of bizarre things one might say that do not track pre-reduction intuitions, such as "2 is a member of 3" or "the null set is a member of 1." Moreover, novel claims about what things fall under the target domain are not excluded; for example, depending on one's account, one might end up saying counter-intuitive things like "the singleton set containing the null set is a number." The existence of some such
counter-intuitive results is not a serious objection to the success of the reduction of
numbers to sets; neither would counter-intuitive results in the CRE be an objection
to computationalism as an ontological reduction.
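To see how far an ontological reduction can stray from pre-theoretical intuitions while still succeeding, consider a small Python illustration using the standard von Neumann encoding of the natural numbers (one of the choices of sets alluded to above); the snippet itself is offered only as an illustration and is not drawn from the text:

    # Under the von Neumann encoding, 0 = {} and n+1 = n U {n}. The reduction
    # then licenses pre-theoretically bizarre claims such as "2 is a member of 3".

    def von_neumann(n):
        """Return the von Neumann set encoding the natural number n."""
        s = frozenset()
        for _ in range(n):
            s = s | frozenset([s])
        return s

    two, three = von_neumann(2), von_neumann(3)
    print(two in three)                                # True: "2 is a member of 3"
    print(von_neumann(1) == frozenset([frozenset()]))  # True: the singleton of the
                                                       # null set "is" the number 1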
There are of course limits to the kinds of novel assertions that are acceptable,
because the point of an ontological reduction is to capture the target domain as ac-
curately as possible. But the same applies to essentialist theories; the essence must
encompass at least the concept's prototypical instances that are the base on which the concept's definition is erected via the postulation of a common essence. So,
essentialist and reductionist theories are similar in this respect. Both must overall
successfully track the target domain, but neither must precisely track the entire set
of intuitions about the target domain.
Thus, an analog of the essentialist reply to the CRA could be constructed even
if computationalism were interpreted as an ontological reduction. The strong-AI
proponent could argue that Searle's objection that intuitions about the CRE are
inconsistent with computationalism is like objecting that the reduction of num-
ber theory to set theory leads to some pre-theoretically absurd results. Once we
successfully reduce the vast majority of important assertions about numbers to as-
sertions about sets within a certain theory, any pre-theoretical absurdity that results
(e.g., that "the null set is a member of 1") is not an objection to the identification
of numbers with sets for ontological reductive purposes, and the analogous point
applies to computationalism. I conclude that there is nothing in the distinction
between essentialist theories and ontological reductions that defeats the essentialist
reply to the CRA.
In any event, computationalism is best considered an essentialist theory rather
than an ontological reduction. Computationalism's signature claim that certain computer states are beliefs in the same literal sense that people's intentional states are
beliefs is exactly the kind of claim that is characteristic of an essentialist theory
but not of an ontological reduction. When such an assertion that does not track
pre-theoretical intuitions is generated by an ontological reduction (as in "the null set is a member of 1"), it is clear that the new assertion is not to be taken as a
literal discovery but rather as a bizarre and unfortunate side effect of the reduction.
This is not the way computationalists view the conclusion that computers literally
possess thoughts. They think that this is a discovery generated by an insight into the
nature of thought, namely, that in prototypical human cases the essence of thought
is syntactically defined programming, which allows them to generalize the category
along essentialist lines to computer cases. It is thus more charitable to interpret
computationalism as an essentialist theory.
4. The Chinese Room Indeterminacy Argument
I believe that the essentialist objection I have offered above is a valid objection to
the CRA as Searle states it. I am now going to try to pull a rabbit (or perhaps I should say "gavagai") out of a hat and show that the CRA can be reinterpreted in such a way as to save it from the essentialist reply. Specifically, I will argue that the CRA continues to pose a potential challenge to computationalism and strong AI if it is construed as an indeterminacy argument – which I'll dub the "Chinese room indeterminacy argument" (CRIA).
The strong AI proponent thinks that the operator's intentional states are determ-
ined simply by the formal program that she follows. How can one argue that this is
not true, without simply begging the question (as in the CRA) and insisting that in-
tuitively there is no genuine intentionality in virtue of formal programming alone?
The only non-question-begging test I know of for whether a property constitutes
genuine intentional content is the indeterminacy test. If the Chinese-understanding
program leaves claimed intentional contents indeterminate in a way that genuine
intentional contents are not indeterminate, then we can say with confidence that the
program does not constitute intentional content.
The essentialist objection shows that non-standard counterexamples such as the
Chinese room experiment, no matter what their intuitive force, are not conclusive
against computationalism. To be effective, such counterexamples must be targeted
at prototypical cases of human thought and must show that in those prototyp-
ical cases computationalism cannot offer an adequate account of the essence of
thought. This means that if the CRA is to be effective, it must be reinterpreted as
an argument about normal speakers of Chinese. The CRIA is exactly this kind of
argument. That is, it is an argument that computationalism is unable to account for
how anyone can ever understand Chinese, even in standard cases of human thought
that intuitively are clear instances of genuine Chinese understanding. The argument
attempts to show that in such standard cases, if computationalism is correct, then
alternative incompatible interpretations are possible that are consistent with all the syntactic evidence, so that content is indeterminate to a degree that precludes making
basic everyday distinctions among meanings.
Such an argument against computationalism obviously must be based on the
assumption that the distinctions we commonly make among meanings reect real
distinctions and that there is in fact some interesting degree of determinacy of
content in human thought processes (e.g., that there is a real distinction between
thinking about rabbits and thinking about rabbit stages or undetached rabbit parts,
however difficult it is to state the grounds for the distinction). This determin-
acy assumption has been accepted not only by Searle (1987) but also by many
of his philosophical opponents more sympathetic to the aspirations of cognitive
science (see Note 12). Admittedly, the CRIA has force only for those who believe that there is some truth about the content of human thoughts with roughly the fineness of dis-
crimination common in ordinary discourse. Consequently, if one steadfastly denies
the determinacy-of-content premise, then one escapes the CRIA, but at the cost of
rendering one's account of content implausible for most observers.
To my knowledge, Searle has never suggested that the CRA is an indetermin-
acy argument. Wilks (1982), in a reference to Wittgenstein, implicitly suggested
an indeterminacy construal of the CRA, but Searle (1982) did not take the bait.
Nonetheless, as Hauser (1997) observes, one might consider the following kind
of statement to hint in this direction: "The point of the story is to remind us of a conceptual truth that we knew all along; namely, that there is a distinction between manipulating the syntactical elements of languages and actually understanding the language at a semantic level" (Searle, 1988, p. 214). As Hauser notes, the only plausible grounding for a conceptual claim that semantics is not just syntactic manipulation is some version of Quine's (1960) indeterminacy argument that semantic and intentional content remains indeterminate (i.e., open to multiple incompatible interpretations consistent with all the possible evidence) if the relevant evidence is limited to syntax alone (see Note 13).
What, then, is the indeterminacy argument that can be derived from the CRA?
To construct such an argument, consider a person who possesses the program that
according to strong AI constitutes the ability to understand and speak Chinese. The program is syntactically defined, so that to think a certain semantic or intentional content is just to be in a certain syntactic state. However, there is an alternative interpretation under which the individual does not understand a word of Chinese. Rather, her thoughts and utterances can be interpreted as referring to the program's syntactic structures and transitions themselves. These two interpretations are mutually incompatible but, the CRIA shows, are consistent with all the facts about programming that computationalism allows to be used to establish content. The indeterminacy consists, then, of the fact that, consistent with all the syntactic and programming evidence that, according to strong AI, exhausts the evidence relevant to fixing content, a person who appears fluent in a language may be meaningfully using the language or may be merely implementing a program in which states are identified syntactically and thus may not be imparting any meanings at all to her utterances. For each brain state with syntactic structure S that would be interpreted by strong AI as a thought with content T, the person could have T or could have the thought "syntactic structure S." For each intention-in-action that would be interpreted as the intention to utter X to express meaning m, the person could just have the intention to utter X to follow the program. These are distinct contents, yet computationalism does not explain how they can be distinct.
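The point can be made vivid with a toy sketch in Python; the shapes, the one-rule program, and the two interpretation tables are invented solely for illustration and are not part of the CRIA itself, which concerns the full Chinese-understanding program:

    # One and the same syntactically specified rule is consistent with two
    # incompatible content assignments; nothing in the program decides between them.

    PROGRAM = {"shape_17": "shape_42"}   # purely syntactic rule: see 17, emit 42

    # Interpretation A: the standard semantic reading of the shapes.
    semantic_reading = {
        "shape_17": "someone asked for the salt",
        "shape_42": "please pass the salt",
    }

    # Interpretation B: the thoughts are about the syntax itself.
    syntactic_reading = {
        "shape_17": "the program says shape_17 was received",
        "shape_42": "the program says to emit shape_42",
    }

    state = "shape_17"
    output = PROGRAM[state]
    # Both readings fit every fact the program itself provides:
    for reading in (semantic_reading, syntactic_reading):
        print(f"{state} -> {output}: interpreted as '{reading[output]}'")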
Recall Block's earlier-cited comments about the Chinese Room operator that when she seems to be asking for the salt in Chinese, what she is really doing is thinking in English about what noises and gestures the program dictates that she should produce next, and that when she seems to be conducting a learned discourse in Chinese, she is thinking about what noises the program tells her to make next given the noises she's heard and written on her mental scratch pad. Block here in effect notes the two possible interpretations revealed by the CRIA. The person's intentional content leading to her utterance could be "I want to express the meaning 'please pass the salt,' and I can do so by uttering the sentence 'please pass the salt,'" or it could be "I want to follow the program and I can do so by uttering the noise 'pass the salt.'" There is no evidence in the program itself that could distinguish which of these two interpretations is correct. The resulting indeterminacy
argument might go as follows:
(i) There are in fact determinate meanings of thoughts and intentions-in-action
(at least at a certain level of fineness of discrimination); and thoughts about syntactic shapes are typically different (at the existing level of fineness of
discrimination) from thoughts that possess the semantic contents typically
expressed by those shapes.
(ii) All the syntactic facts underdetermine, and therefore leave indeterminate, the
contents of thoughts and intentions-in-action; in particular, the syntactic struc-
ture S is ambiguous between a standard meaning M of S and the meaning "the program specifies to be in syntactic structure S." Similarly, an utterance
U may possess its standard meaning and be caused by the intention to com-
municate that meaning or it may mean nothing and be caused by the intention
to utter the syntactic expression U as specified by the program.
(iii) Therefore, the content of thoughts and intentions-in-action cannot be consti-
tuted by syntactic facts.
This indeterminacy argument provides the needed support for Searle's crucial third premise, "Syntax is not the same as, nor by itself sufficient for, semantics," in his argument against computationalism. With the shift to the CRIA, Searle's argument becomes potentially sound, modulo the determinacy-of-content assumption.
Hauser (1997) dismisses the indeterminacy interpretation of the CRA as offering "warmed-over indeterminacy trivially applied." He comments: "Troubles about indeterminacy are ill brought out by the Chinese room example anyhow being all mixed up, therein, with dubious intuitions about consciousness and emotions about computers" (p. 216).
The truth is quite the opposite. The CRIA has nothing to do with intuitions
about consciousness or emotions. Moreover, it presents a more effective indeterm-
inacy challenge than has previously been presented for computationalist and related
doctrines. Earlier arguments all have serious limitations that have made them less
than fully persuasive. Even Quine was dismissive of such theoretical proofs of
indeterminacy as the Löwenheim–Skolem theorem, complaining that they did not yield a constructive procedure for producing actual examples, so that it was hard
to tell just how serious a problem the indeterminacy would be for everyday dis-
tinctions. His own "gavagai"-type example was meant to be more philosophically
forceful and meaningful. But it, too, was never entirely convincing because of the
local nature of the example, involving just one term or small groups of terms.
These sorts of examples left many readers with the lurking doubt that there must
be some way of disambiguating the meaning using the rich resources of standard
languages, and that the examples could not be carried out for whole-language
translation. Consequently, many observers remain less than fully convinced that
indeterminacy is a problem. Indeed, many of those trying to naturalize semantics
dismiss indeterminacy as unproven and unlikely (Wakefield, 2001).
This is where the CRIA makes a dramatic contribution. It offers the clearest
available example of an indeterminacy that can be shown to persist in whole-
language translation. This is because of the systematic way in which every sentence
in Chinese, with its usual semantic content under the standard interpretation, is
translated in the CRIA's alternative interpretation into a sentence about the syntax of the original sentence. Unlike Quine's examples, one has no doubt that there
is no way to use further terms to disambiguate the two interpretations, for it is
clear that any such additional terms would be equally subject to the indeterminacy.
The CRIA offers perhaps the most systematic example of indeterminacy in the
literature.
The CRIA poses the following serious and potentially insurmountable challenge
to computationalism: What makes it the case that people who in fact understand
Chinese do have genuine semantic understanding and that they are not, like the
operator in the CRIA, merely manipulating syntax of which the meanings are un-
known to them? Even if one claims on theoretical grounds, as some do in response to the original CRA, that the operator's manipulation of syntax does constitute
an instance of Chinese understanding, one still has to be able to distinguish that
from ordinary semantic understanding of Chinese or explain why they are not
different; the syntactic and semantic interpretations of the operators utterances and
thoughts at least prima facie appear to involve quite different sets of contents. But,
the CRIA concludes, computationalism cannot explain this distinction. Without
such an explanation, computationalism remains an inadequate account of mean-
ing, unless it takes the heroic route of accepting indeterminacy and renouncing
ordinary semantic distinctions, in which case it is unclear that it is an account of
meaning at all (it should not be forgotten that Quine's indeterminacy argument
led him to eliminativism, the renunciation of the existence of meanings). In my
view, resolving the dilemma posed by indeterminacy is the main challenge facing
computationalism and strong AI in the wake of the CRA.
Hauser (1997) is apparently willing to bite the indeterminacy bullet. He argues
that any indeterminacy that can be theoretically shown to infect computationalism
is just a reection of indeterminacy that equally can be theoretically shown to infect
actual content, as well:
In practice, there is no more doubt about the "cherry" and "tree" entries in the cherry farmer's spreadsheet referring to cherries and trees (rather than natural numbers, cats and mats, undetached tree parts or cherry stages, etc.) than there is about "cherry" and "tree" in the farmer's conversation; or, for that matter, the farmer's cogitation. Conversely, in theory there is no less doubt about the farmer's representations than about the spreadsheet's. Reference, whether computational, conversational, or cogitative, being equally scrutable in practice and vexed in theory, the conceptual truth Searle invokes impugns the aboutness of computation no more or less than the aboutness of cogitation and conversation. (pp. 215–216)
But, it is the fact that we do ordinarily understand and make such distinctions
between meanings and that reference is scrutable in practice that requires explan-
ation. Quine's argument shows that, if his assumption that only behavioral facts are relevant to reference fixing were correct, then such distinctions could not be
made in practice and reference would not be scrutable in the way it is. This sug-
gests that something is wrong with the theoretical conclusion that semantic and
intentional (or conversational and cogitative) contents are indeterminate, and
something more than behavioral (or computational) facts must determine reference.
The fact that we don't yet have an adequate account of those additional reference-
determining features that explain such distinctions among contents, and that Searle
does not present an adequate account of how consciousness can provide such fea-
tures (a point emphasized by Hauser, 1997), does not at all impugn the basic point
that such distinctions clearly exist.
Hauser's suggestion that computation is in the same boat with conversation and cogitation is thus incorrect. Only in the latter two cases are there independent reasons to think that meanings are interestingly determinate (at the relevantly fine-grained level). Computation being not an intuitive domain of content but a theory of content, computationalism's job is to explain what distinguishes the meanings
within conversation and cogitation that we commonly distinguish, including the
two different sets of meanings portrayed in the CRIA. But it has failed to do
so, and it is hard to see how it could ever succeed. If the indeterminacy objec-
tion raised by the CRIA cannot be adequately answered one way or another, then
computationalism, and strong AI with it, must be rejected.
Acknowledgements
This article is based on a section of my doctoral dissertation (Wakefield, 2001). I
am deeply indebted to John Searle, the Chair of my dissertation committee, for his
unstinting support; it will be clear that he would not agree with some of what I say
here and should not be held responsible for the views expressed. I also thank my
committee members, Hubert Dreyfus, Steven Palmer, and William Runyan.
Notes
1. This question should be considered an abbreviation of a disjunctive question that encompasses
various divergent positions within the AI community, the differences among which are not critical
to the present analysis. These variants include: (1) currently existing computers and programs are
such that some computers already actually do think (or have thought in the past, if none of the
relevant programs happen to be running now); (2) currently existing computers or other machines
are in principle capable of thinking, etc., if adequately programmed, but the necessary programs
have not yet been developed; (3) currently existing computers cannot be adequately programmed to
think due to hardware limitations, but, in principle, future more powerful or differently configured computers could be so programmed. Note that "computer" encompasses related machines, such as
adding machines, that some strong AI proponents claim can think.
2. Larry Hauser (1997) objects that the labels "strong AI" and "weak AI" for the two doctrines are misleading, insisting that weak AI, the doctrine that computers merely simulate thought processes, is not artificial intelligence at all because, by its very meaning, the phrase "artificial intelligence" implies genuine intelligence and thus implies the stronger claim: "The assertion that computers can think being the weakest possible assertion of artificial intelligence – anything less than this (as with so-called weak AI) actually being denial thereof – it is really inapt, I think, to call this view 'strong'" (p. 217).
Hauser is incorrect that the phrase "artificial intelligence" automatically implies genuine intelligence. In general, the phrase "artificial X" has exactly the semantic ambiguity addressed by Searle's distinction. For example, artificial flowers, artificial limbs, and the artificial Moon environments used by NASA for training astronauts are not genuine flowers, limbs, or Moon environments, respectively, but rather simulations of certain features of the genuine instances, whereas artificial diamonds, artificial light, and artificial sweeteners are genuine diamonds, light, and sweeteners, respectively, which are created as artifacts in contrast to the naturally occurring instances. "Artificial intelligence" thus
potentially refers both to simulations of crucial features of intelligence and to creation of genuine
instances of intelligence through artifactual means, and the distinction between weak AI and strong
AI is a legitimate disambiguation of the phrase. Moreover, both doctrines are of course historically
real claims within what is generally referred to as the AI community.
3. Hauser (1997, p. 213), while admitting that computationalism provides the argument generally used to support strong AI, suggests that there is an alternative "naïve AI" argument for strong AI that
does not rely on computationalism. He argues that the very fact that people are intuitively inclined
to label computer processes with intentional labels such as "calculating," "recognizing," "detecting," "memory," "thinking," and so on itself provides prima facie evidence that computer processing literally
falls under our mental concepts and that computers literally have mental states:
Computers, even lowly pocket calculators, really have mental properties – calculating that 7+5 is 12, detecting keypresses, recognizing commands, trying to initialize their printers – answering to the mental predications their intelligent seeming deeds inspire us to make of them.... I call this view "naïve AI": it holds the seemingly intelligent doings of computers – inspiring our predications of calculation, recognition, detection, etc. of them – constitute prima facie warrant for attributions of calculation, recognition, etc. to them and hence, given the negligibility of the arguments against, typified by Searle's, actual warrant. (p. 212).
Hauser thus considers the fact that computer doings commonly inspire such [mental] predications to offer "a wealth of empirical evidence" for strong AI (p. 212). But this "naïve AI" argument begs
the question and is, well, naive. Everyone agrees that we have a natural tendency to use mental
language in regard to computers. This is a statement of the phenomenon that is the target of the
dispute, not an argument for one explanation or another. The fact that the terms are used does not
imply that they are being used literally or (even if intended literally) that they literally apply. There
are several possible explanations for such usage, and the debate over strong AI is a debate over which
explanation is correct. One explanation is that these processes do literally fit under mental concepts.
Another is that, although the mental concepts do not literally apply, certain interesting similarities
between computer and mental phenomena, plus the lack of any convenient vocabulary for describing
these common elements, leads to an almost unavoidable tendency to use mental terms to capture
the relevant phenomena, and perhaps even to a tendency to confusedly think that they do literally
apply until one reflects on it a bit. Note that there are many examples of similar attractions to types
of predication that are not literal. Consider the continued use by both evolutionary theoreticians and
ordinary people of design and purpose predications of biological mechanisms (e.g., "the eyes are designed to enable us to see" or "the purpose of the heart is to pump the blood"), even though it is
generally agreed that they do not literally apply because no intentional agent created them, and even
though the use of such vocabulary sometimes misleads people into thinking there must be some plan
of nature built into evolution. The reason for the use of such intentional vocabulary is transparent;
there are obvious and important commonalities between the results of intentional design of artifacts
and the results of natural selection of biological mechanisms, and there is no convenient vocabulary
other than that of purpose and intentional design that suggests these properties in a parsimonious
way, so there is an unavoidable attraction to using that vocabulary, despite its not literally applying.
Hauser also ignores Searle's explanation for these naïve predications, in terms of the distinction
between intrinsic and observer-relative intentional attributions:
Why then do people feel inclined to say that, in some sense at least, thermostats have beliefs?...
It is not because we suppose they have a mental life very much like our own; on the contrary,
we know that they have no mental life at all. Rather, it is because we have designed them (our
intentionality), to serve certain of our purposes (more of our intentionality), to perform the sort
of functions that we perform on the basis of our intentionality.... Our ascriptions of intentionality
to cars, computers, and adding machines is observer relative. (Searle, 1991b, p. 522)
Given these competing explanations, it begs the question to insist that just because we are attracted
to using mental terminology, the terminology must literally apply. Such naïve usage is no argument whatever for strong AI. Only computationalism offers a potential justification for taking such
predications literally.
4. Computers are in principle universal Turing machines, and programs are in principle universal Turing machine programs, defined by input–output relations and state transitions. Programs are syntactically defined in the sense that each step in the running of the program consists of a syntactically definable structure recognizable by the sequencing of certain shapes, and each movement from one step to another that occurs during the running of the program is definable as a syntactic transition or transformation in which the sequences of shapes of the inputs and previous steps determine the sequence of shapes constituting the next step. Hauser (1997) makes much of the fact that the running of a program is not itself a syntactic process. This is true but not in conflict with the common point that, in the sense just elaborated, programs are syntactically definable and the implementation of a program is the implementation of a syntactically defined structure.
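As an illustration of what a syntactically defined program looks like at this level of description, the following Python sketch encodes a tiny Turing-machine-style transition table; the states, symbols, and rules are invented for illustration and are not claimed to correspond to any program discussed in the text:

    # Each rule maps a (state, symbol) pair to (new state, symbol to write, head
    # movement). Nothing in the table refers to meanings; every transition is
    # determined by shapes alone.

    RULES = {
        ("q0", "1"): ("q0", "1", +1),   # scan right over 1s
        ("q0", "_"): ("q1", "1", 0),    # at a blank, write a 1 and change state
    }

    def run(tape, state="q0", head=0, max_steps=100):
        tape = dict(enumerate(tape))            # sparse tape: position -> symbol
        for _ in range(max_steps):
            symbol = tape.get(head, "_")
            if (state, symbol) not in RULES:    # no applicable rule: halt
                break
            state, write, move = RULES[(state, symbol)]
            tape[head] = write
            head += move
        return [tape[i] for i in sorted(tape)]

    print(run(["1", "1", "1"]))   # -> ['1', '1', '1', '1'] (appends a 1)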
Having the same Turing machine program is more demanding than, and thus not to be confused with, having the same input–output relations. The latter is sufficient to pass the Turing test, which captures a logical behaviorist account of thinking. But there are compelling arguments that the Turing test is not a sufficient test for thinking in the sense that a human thinks, just as there are compelling reasons for rejecting logical behaviorism. The basic problem is that widely divergent programs can have the same input–output relations. Thus, a machine's program that perfectly mimics the input–output relations of a human's thinking program may not include internal states that could be essential to the possession of certain thoughts. Computationalism, in demanding the same program as the human's, avoids this problem. It demands not only that the input–output relations be the same as a human's, but also that this similarity be achieved in the right way, with the same state transitions
as in a human.
Note also that for the sake of simplicity, I assume that the Chinese program is exactly the same as
the program of a human who knows Chinese. However, it is often observed that certain kinds of
minor divergences would not undermine understanding. Note also that it remains disputed whether
the relevant programs of individuals who understand Chinese are the same and thus the machine
has the same program as all humans who understand Chinese, or they differ across individuals so
that the machine's program is the program of just one particular human who understands Chinese, or
something in between.
5. This way of putting the argument from computationalism to strong AI is adapted, with some changes, from Hauser (1997, p. 211). Hauser himself claims that this argument is unsound because
premise 1 is false. He argues that it is not just computation in principle but the implementation
of the computation within some reasonable time frame that constitutes thought, and neither ac-
tual computers nor Turing machines in general can necessarily implement any given program for
thinking within a time frame similar to that in which human beings can compute it: "Once time is acknowledged to be of the essence, the [argument] goes by the board...because Turing's thesis is that – considerations of speed apart – Turing machines are universal instruments capable of computing any computable function" (p. 211). However, the CRE is an idealized thought experiment, and time
constraints of these kinds are just the sorts of performance limitations that are idealized away in
such an experiment. In any event, Hauser's premise that rate of processing is crucial to thought
has no warrant in our intuitive classificatory judgments. We can easily and coherently conceive of
instances of thinking so slow that neither we nor our descendants would be alive to see the thinker
reach the end of a short syllogism begun today, yet it is thinking nonetheless. For example, in some
science fiction novels, it is postulated that suns (or even larger entities, such as nebulae or galaxy clusters) are actually sentient, intelligent beings (with their various plasma motions, force fields,
etc., supporting thought), but in which the magnitude of the motions necessary for thought is so
vast that thinking takes place at an astronomically slow rate relative to our thoughts. There is nothing
conceptually incoherent in such imaginings, so, contrary to Hauser's suggestion, time cannot be
of the essence of thought. Similar rejoinders can be framed for other such practical limitations to
computer processing. Thus, if the tendered argument is indeed unsound, it is for reasons other than
problems of the sort Hauser raises with respect to premise 1. Searle, of course, aims at premise 3.
6. In his recent attack on the CRA, one of the main strands of Hauser's (1997) argument is (to quote his abstract) that although the CRA is advertised as effective against AI proper (p. 199), Searle equivocates in using "strong AI" between referring to strong AI itself and referring to computationalism, and thus "musters persuasive force [against strong AI] fallaciously by indirection fostered by equivocal deployment of the phrase 'strong AI'" (p. 199). According to Hauser, Searle's equivocation means that his arguments against computationalism are likely to be incorrectly taken to be arguments against strong AI proper. Consequently, understood as targeting AI proper – claims that computers can think or do think – Searle's argument is "logically and scientifically a dud" (p. 199).
This objection is, in my view, a non sequitur. The CRA is clearly aimed at demonstrating the
falsity of computationalism and thereby undermining the only potentially persuasive argument for
strong AI (see Note 3). If sound, the CRA leaves strong AI without support for its bold claims, thus
non-demonstratively justifying its (provisional) non-acceptance if not rejection. Hauser is of course
correct that the CRA does not strictly demonstrate the falsity of strong AI. This is not news; Hauser
himself quotes Searle as explicitly denying that the CRA is aimed at demonstrating the falsity of
strong AI. Searle says "I have not tried to prove that a computer cannot think" (Searle 1990, p. 27), and acknowledges that "there is no logically compelling reason why [computers] could not also give off [thought or] consciousness" (Searle, 1990, p. 31). Taking Searle at his word, the target
of the CRA as demonstrative argument is computationalism, not strong AI. Searle (1990) observes
that, once computationalism is refuted, there is no further known reason to think that computers can
literally think because there is no other known property of computers that can be persuasively argued
to provide them with the causal power to generate thoughts, in the way that some property of the
brain allows the brain to generate thoughts. But the CRA still does not demonstrate the falsity of
strong AI because, as Searle acknowledges, it remains possible in principle that some heretofore
unsuggested or unknown property of computers other than their programming could have the power
to generate thought. Thus, Hauser's accusation that the CRA is an ignoratio elenchi that misleadingly tempts the reader to pass invalidly from the argued for nonidentity of thought with computation...to the claim that computers can't think (1997, p. 201) is without merit. If the CRA succeeds in proving
the nonidentity of thought with computation, then there is good though non-demonstrative reason
to reject strong AI. Note in this regard that Hausers objection is itself wholly an attempt to show
that Searle fails to prove the falsity of strong AI (with which Searle agrees); he offers no positive
argument that there is some property other than (implemented) programming that plausibly might
constitute thought.
7. At least this is so according to Searle's construal of what constitutes the same program. I later
consider a challenge by Fodor to this construal.
8. Both Searle and his critics (e.g., Hauser, 1997) have sometimes interpreted machine functionalism
as a metaphysical thesis about the essence of thought, but no one, to my knowledge, has explicitly
used this construal as the basis for the kind of argument against the CRA that I mount here. That
is not to say that my diagnosis is not implicit in many critics' comments about the CRA's misuse of intuition, a common objection. For example, Steven Pinker's (1997) objection to the CRA that "the history of science has not been kind to the simple intuitions of common sense" (p. 94) could
be construed as implicitly referring to the kind of point I will be making, but without offering an
adequate diagnosis.
9. Indeed, Block seems modestly sympathetic to this argument. He notes that Searle's argument else-
where that intentional systems require consciousness could be deployed here to argue that the pro-
gram cannot understand Chinese because it lacks conscious awareness.
10. Searle holds that both artifact and natural function attributions are observer relative and value laden, whereas I hold that they are factual explanatory claims (see, e.g., Wakefield, 1992, 1995, 1999, primarily regarding natural functions). However, the point that judgments of program equivalence
are sometimes derived from function attributions is independent of such issues.
11. Hauser agrees that this argument (or what comes to essentially the same argument) is valid, but claims it is unsound on grounds independent of the CRE, due to the falsity of the first premise:
To see the falsity of [premise 1; in Hauser's version, "Being a Program is a formal (syntactic) property" (1997, p. 210)], consider that only running Programs or Program executions are candidate thinkings. E.g., according to Newell's formulation of the functionalist hypothesis (which Searle himself cites) the essence of the mental is the operation [Hauser's emphasis] of a physical
symbol system. No one supposes that inert (nonexecuting) instantiations of Programs (e.g., on
diskettes), by themselves, think or suffice for thought. The Program instantiations in question
in the preceding argument, then, should be understood to be just dynamic ones. Program in the
context of this argument connotes execution: P in the formulation above should be understood
to be the property of being a Program run or Process. Now, although every formal or syntactic
difference is physical, not every physical difference is formal or syntactic.... Every instantiation
of a given program is syntactically identical with every other: this is what makes the spatial
sequences of stored instructions and the temporal sequence of operations tokenings or instanti-
ations of one and the same program. It follows that the difference between inert instantiation and
dynamic instantiation is nonsyntactic: P, the property of being a Process at issue, is not a formal
or syntactic property but, necessarily (essentially), includes a nonsyntactic element of dynamism
besides, contrary to 1. (pp. 210–211)
However, the CRE portrays not the inert possession of a static program in a manual but rather the
dynamic implementation of the program by the operator. Thus, the CRE is equally a counterexample
against a modified computationalism that holds that the process of implementing a certain (syn-
tactic) program constitutes understanding Chinese. Although the act of running a program is not
itself syntactically defined, the running of a program is the running of a series of syntactically defined steps and transformations. Note also that the process by which a program is implemented does not, according to computationalism, influence the meanings constituted by the program. Thus,
the argument to which Hauser objects can be reformulated as an argument about implementation of
syntactically defined objects, as follows:
(1) The implementation of a program is the implementation of something entirely syntactical, that is, something in which each step is entirely syntactically defined.
(2) Minds have semantics.
(3) Implementing something entirely syntactical is not the same as, nor by itself sufficient for,
semantics.
(4) Therefore, implementations of programs are not minds.
The CRE supports premise 3 of this argument just as strongly as it does premise 3 of Searle's version,
so Hauser gains nothing by this elaboration. If, however, Hauser is suggesting that there are non-
syntactic features of how a program is implemented that determine whether there is thought, he
suggests a doctrine that is no longer computationalism as generally understood, and thus is not the
original target of the CRE. It is also a doctrine that does not in any apparent way support strong AI.
For strong AI is based on the argument that computers can share with humans whatever is essential
about thinking (see the discussion of essentialism in the text). Computationalism supports strong AI
because, if a program (or the running of a program) is the essence of human thought, we know that
computers can share this with humans. But if some other non-syntactic property of the running of
programs is the essence of thinking in humans, then there is no reason to believe that the running of
programs in computers shares this property, and thus there is no reason to think that computers can
have the property in virtue of which people can be said to think. Consequently, in analyzing Searle's argument, I ignore Hauser's claim that premise 1 is false.
12. To take just two examples of many, Jerry Fodor acknowledges that a predicate may in fact mean "rabbit" and not "undetached proper part of a rabbit" (1994, p. 59), and tries to show how informational semantics can provide the requisite determinacy of content; I argue that he fails in Wakefield (in press). And Robert Cummins (1989), noting that "Mental representations are, by definition, individuated by their contents" (p. 10), asserts: "The contents of thoughts – typical adult human thoughts, anyway – are quite amazingly unique and determinate. When I think the thought that current U.S.
policy in Central America is ill-advised, the proposition that current U.S. policy in Central America
is ill-advised is the intentional content... It just isn't true that my thoughts about Central America are about Central America only relative to some choice of...interpretation. A data structure s-represents different things under different interpretations, but thoughts don't represent in that sense at all. They are just about what they are about" (pp. 137–138).
13. To my knowledge, Hauser's (1997) discussion is the only explicit published recognition of the
potential for interpreting the Chinese room argument as an indeterminacy argument, other than the
analysis in my own dissertation (Wakefield, 2001).
References
Block, N. (1998), 'The Philosophy of Psychology: Classical Computationalism', in A. C. Grayling, ed., Philosophy 2: Further Through the Subject, New York: Oxford University Press, pp. 5–48.
Carleton, L. (1984), 'Programs, Language Understanding, and Searle', Synthese 59, pp. 219–233.
Cummins, R. (1989), Meaning and Mental Representation, Cambridge, MA: MIT Press.
Fodor, J. A. (1991a; orig. pub. 1980), 'Searle on What Only Brains Can Do', in D. M. Rosenthal, ed., The Nature of Mind, New York: Oxford University Press, pp. 520–521.
Fodor, J. A. (1991b), 'Afterthoughts: Yin and Yang in the Chinese Room', in D. M. Rosenthal, ed., The Nature of Mind, New York: Oxford University Press, pp. 524–525.
Fodor, J. A. (1994), The Elm and the Expert: Mentalese and Its Semantics, Cambridge, MA: MIT Press.
Hauser, L. (1997), 'Searle's Chinese Box: Debunking the Chinese Room Argument', Minds and Machines 7, pp. 199–226.
Hodges, A. (1983), Alan Turing: The Enigma, New York: Simon and Schuster.
Pinker, S. (1997), How the Mind Works, New York: Norton.
Quine, W. O. (1960), Word and Object, Cambridge, MA: MIT Press.
Rapaport, W. J. (1988), 'Semantics: Foundations of Computational Natural Language Understanding', in J. Fetzer, ed., Aspects of Artificial Intelligence, Dordrecht, The Netherlands: Kluwer Academic Publishers, pp. 81–131.
Searle, J. R. (1980), 'Minds, Brains, and Programs', Behavioral and Brain Sciences 3, pp. 417–424.
Searle, J. R. (1982), 'The Chinese Room Revisited', Behavioral and Brain Sciences 5, pp. 345–348.
Searle, J. R. (1988), 'Minds and Brains Without Programs', in C. Blakemore and S. Greenfield, eds., Mindwaves, Oxford: Basil Blackwell, pp. 209–233.
Searle, J. R. (1990), 'Is the Brain's Mind a Computer Program?', Scientific American 262, pp. 26–31.
Searle, J. R. (1991a; orig. pub. 1980), 'Minds, Brains, and Programs', in D. M. Rosenthal, ed., The Nature of Mind, New York: Oxford University Press, pp. 509–519.
Searle, J. R. (1991b; orig. pub. 1980), 'Author's Response', in D. M. Rosenthal, ed., The Nature of Mind, New York: Oxford University Press, pp. 521–523.
Searle, J. R. (1991c), 'Yin and Yang Strike Out', in D. M. Rosenthal, ed., The Nature of Mind, New York: Oxford University Press, pp. 525–526.
Searle, J. R. (1992), The Rediscovery of Mind, Cambridge, MA: MIT Press.
Searle, J. R. (1997), The Mystery of Consciousness, New York: New York Review Books.
Searle, J. R., J. McCarthy, H. Dreyfus, M. Minsky and S. Papert (1984), 'Has Artificial Intelligence Research Illuminated Human Thinking?', Annals of the New York City Academy of Arts and Sciences 426, pp. 138–160.
Wakefield, J. C. (1992), 'The Concept of Mental Disorder: On the Boundary between Biological Facts and Social Values', American Psychologist 47, pp. 373–388.
Wakefield, J. C. (1995), 'Dysfunction as a Value-free Concept: Reply to Sadler and Agich', Philosophy, Psychiatry, and Psychology 2, pp. 233–246.
Wakefield, J. C. (1999), 'Disorder as a Black Box Essentialist Concept', Journal of Abnormal Psychology 108, pp. 465–472.
Wakefield, J. C. (2001), Do Unconscious Mental States Exist?: Freud, Searle, and the Conceptual Foundations of Cognitive Science, Berkeley, CA: University of California at Berkeley (Doctoral Dissertation).
Wakefield, J. C. (in press), 'Fodor on inscrutability', Mind and Language.
Wilks, Y. (1982), 'Searle's Straw Man', Behavioral and Brain Sciences 5, pp. 344–345.