The following remarks regard the text:

Searle, John R. (1980) “Minds, Brains, and Programs”. Behavioral and Brain Sciences 3: 417-457

All quotations in this post were extracted from this paper.

Make sure to listen to The Dawdlers discuss this paper in depth in E1: John Searle Does not Understand – The Chinese Room Argument
———————–

Feeling somewhat conflicted about contributing to what I perceive as a problem, I’m going to press on and provide one more response to John Searle’s ‘Chinese Room’ argument. In my opinion, Searle’s argument has been successfully marginalized in a variety of ways by a cadre of counterarguments, such as those found in Hofstadter & Dennett, Pinker, Churchland & Churchland, Maudlin, et al. [Not to mention that it was more than sufficiently shredded by the initial responses to the target article in BBS! I have a particular taste for Rorty’s…] I am also convinced that a sufficiently sophisticated version of ‘the systems reply’ is a sound refutation.

However, here, rather than presenting a counterargument, I would like to engage with Searle’s original text and attempt to show that its putative ‘argument’ is not even worthy of a response, because it fails to state a meaningful claim at all. The content of the original paper appears to me unworthy of the attention it has garnered, and I am confused as to why so many ‘smart’ people have taken it seriously and, worse yet, still find it powerful after all these years.

First, to provide context while attempting to evince my grasp of Searle’s points, I will allow Searle to speak for himself by quoting from “Minds, Brains, and Programs” [MBP]. Then I will reconstruct the Chinese Room argument in numbered form. Finally, I will tell you why the case Searle presents, as I interpret it, does not convince me of his claim.

Searle declares and reiterates his thesis [emphasis mine]:

“Instantiating a computer program is never by itself a sufficient condition of intentionality.”
“…no program by itself is sufficient for thinking…”
“…whatever purely formal principles you put into the computer, they will not be sufficient for understanding…”
“…in the literal sense the programmed computer understands… exactly nothing.”
“…symbol manipulation by itself couldn’t be sufficient for understanding…”
“…no purely formal model will ever be sufficient by itself for intentionality because the formal properties are not by themselves constitutive of intentionality…”

Note the relative narrowness of Searle’s point here: he is arguing for the necessary inability of ‘programs’ to instantiate a set of ‘intentional’ predicates, especially what he calls ‘understanding’.

Here for convenience, at length, is Searle’s version of the Chinese Room argument as presented in “Minds, Brains, and Programs”:

“Suppose that I’m locked in a room and given a large batch of Chinese writing. Suppose furthermore (as is indeed the case) that I know no Chinese, either written or spoken, and that I’m not even confident that I could recognize Chinese writing as Chinese writing distinct from, say, Japanese writing or meaningless squiggles…

Now suppose further that after this first batch of Chinese writing I am given a second batch of Chinese script together with a set of rules for correlating the second batch with the first batch. The rules are in English, and I understand these rules as well as any other native speaker of English. They enable me to correlate one set of formal symbols with another set of formal symbols, and all that ‘formal’ means here is that I can identify the symbols entirely by their shapes.

Now suppose also that I am given a third batch of Chinese symbols together with some instructions, again in English, that enable me to correlate elements of this third batch with the first two batches, and these rules instruct me how to give back certain Chinese symbols with certain sorts of shapes in response to certain sorts of shapes given me in the third batch. Unknown to me, the people who are giving me all of these symbols call the first batch ‘a script’, they call the second batch a ‘story’, and they call the third batch ‘questions’. Furthermore, they call the symbols I give them back in response to the third batch ‘answers to the questions’, and the set of rules in English that they gave me, they call ‘the program’.

Now just to complicate the story a little, imagine that these people also give me stories in English, which I understand, and they then ask me questions in English about these stories, and I give them back answers in English. Suppose also that after a while I get so good at following the instructions for manipulating the Chinese symbols and the programmers get so good at writing the programs that from the external point of view that is, from the point of view of somebody outside the room in which I am locked — my answers to the questions are absolutely indistinguishable from those of native Chinese speakers. Nobody just looking at my answers can tell that I don’t speak a word of Chinese.
Let us also suppose that my answers to the English questions are, as they no doubt would be, indistinguishable from those of other native English speakers, for the simple reason that I am a native English speaker. From the external point of view — from the point of view of someone reading my ‘answers’ — the answers to the Chinese questions and the English questions are equally good. But in the Chinese case, unlike the English case, I produce the answers by manipulating uninterpreted formal symbols. As far as the Chinese is concerned, I simply behave like a computer; I perform computational operations on formally specified elements. For the purposes of the Chinese, I am simply an instantiation of the computer program.

…it seems to me quite obvious in the example that I do not understand a word of the Chinese stories. I have inputs and outputs that are indistinguishable from those of the native Chinese speaker, and I can have any formal program you like, but I still understand nothing. For the same reasons, [any given computer program] understands nothing of any stories whether in Chinese, English, or whatever — since in the Chinese case the computer is me and in cases where the computer is not me, the computer has nothing more than I have in the case where I understand nothing.”

To minimally and [hopefully!] charitably paraphrase this prose into a numbered argument, we get:

    1. S is placed in an enclosure.
    2. S is given three batches of Chinese writing [the ‘script’, the ‘story’, the ‘questions’], and a set of formal correlation rules and instructions in English writing [the ‘program’].
    3. S does not know Chinese.
    4. S understands English.
    5. S is asked questions in English and responds as a native speaker would.
    6. S is asked questions in Chinese and responds as a native speaker would.
    7. S’s English responses are the result of being a native English speaker.
    8. S’s Chinese responses are the result of performing computational operations on formally specified elements according to a program.
    9. S, while programmatically performing formal computational operations on Chinese writing, does not understand the stories written in Chinese.
    10. S, while performing native speaking of English, does understand the stories written in English.
    11. No computer programmatically performing formal computational operations on any language L is relevantly different from S operating on Chinese.
    Therefore C (from 9 and 11): No computer programmatically performing computational operations on language L understands stories written in L.
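
For what it’s worth, here is one way (my own rendering, not Searle’s) to schematize the step from 9 and 11 to C. Let F(x, L) abbreviate ‘x programmatically performs formal computational operations on language L’ and U(x, L) abbreviate ‘x understands stories written in L’:

    9′. F(S, Chinese) and not U(S, Chinese).
    11′. For any computer c and language L, if F(c, L), then c is not relevantly different from S operating on Chinese.
    Bridge. If c is not relevantly different from S operating on Chinese, then U(c, L) only if U(S, Chinese).
    C. For any computer c and language L, if F(c, L), then not U(c, L).

Given the bridge principle, the inference is valid; everything therefore turns on whether the ‘understanding’ premises themselves can be established.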

I hope this is a clear and fair representation of Searle’s Chinese Room [CR] argument. [If anyone disagrees, let me know where I went wrong!] I also believe this argument to be fatally flawed in multiple respects.

I have no quarrel with premises 1, 2, 5, 6, 8, or 11. And I have no problem with drawing conclusion C from 1-11. However, I do think there are significant problems with the remaining premises, which together explain why I am not convinced by the Chinese Room argument. So let’s take a closer look at 3, 4, 7, 9, and 10.

Problem 1: The definition of “understanding”…

The argument of MBP hinges on the claim found in my Premise 9 (and, since Searle’s prose version slides between ‘knows’ and ‘understands’, a slide I take to be stylistic rather than substantive, on premises 3 and 4 as well) that S ‘does not understand’ the Chinese stories but does understand English.

MBP employs the term ‘understanding’ at least 65 times, and MBP has a ‘Keywords’ section. ‘Understanding’ does not make it into the ‘Keywords’ section, though ‘brain’ and ‘mind’ do, and their combined occurrences total fewer than 65. MBP does not define ‘understanding’ [again, please point me to where he does if this is incorrect], and as a consequence I have no idea what the fulcrum premises of his argument even claim.

Searle should appreciate Problem 1, because Searle asks the right question:

“Well, then, what is it that I have in the case of the English sentences that I do not have in the case of the Chinese sentences?”

For this reader, that very question summarizes the whole Chinese Room ballgame. The CR argument claims to show that there is some [X] that S ‘has’ in relation to English sentences and lacks in relation to Chinese sentences. Lacking individuation criteria for [X], we would remain systematically unable to establish its presence or absence in ourselves or others, and the CR argument would be meaningless.

3. S ~X’s Chinese
4. S X’s English

The Chinese Room argument hinges on these premises, and Searle never tells us what X is supposed to be. He relates to it in a purely intuitionistic ‘I know it when I see it’ manner:

“…I want to block some common misunderstandings about ‘understanding’: in many of these discussions one finds a lot of fancy footwork about the word ‘understanding’. My critics point out that there are many different degrees of understanding; that ‘understanding’ is not a simple two-place predicate; that there are even different kinds and levels of understanding… and so on. To all of these points I want to say: of course, of course. But they have nothing to do with the points at issue. There are clear cases in which ‘understanding’ literally applies and clear cases in which it does not apply; and these two sorts of cases are all I need for this argument.”

First of all (and I must assume that Searle does not mean this), if you take this paragraph literally, Searle says “of course” to the critique that “‘understanding’ is not a simple two-place predicate” [S understands P], though MBP is riddled with dozens of sentences which take exactly this form, e.g. “…the system in me that understands English…” The subtle ways in which Searle loses credibility with the attentive reader are legion.

More importantly, Searle does not appear to appreciate that assertions are not arguments. Read MBP and count the times Searle makes assertions about ‘understanding’, and then see if you can find him making any arguments for the presence or absence of ‘understanding’. Now, of course he reasons from presence/absence claims about ‘understanding’, but the presence/absence premises themselves are all established by fiat.

I find it nonsensical to state “There are clear cases in which [X] literally applies and clear cases in which [X] does not apply…” in the absence of a set of necessary and sufficient conditions for X. Searle not only fails to give us this set; he waves his hands at our requests and merely asserts that “There are clear cases…” I do not buy this assertion, and so remain unpersuaded.


Problem 2: Incorrigible introspective intentionality identification …

So, how do we detect this undefined X? Searle provides no protocol, but why tell when one can show?

“…it seems to me quite obvious… that I do not understand a word of the Chinese stories.”

“The sense in which an automatic door ‘understands instructions’ from its photoelectric cell is not at all the sense in which I understand English… in the literal sense the programmed computer understands… exactly nothing. The computer understanding is not… partial or incomplete; it is zero.”

“The whole point of the Chinese room example was to argue that such symbol manipulation by itself couldn’t be sufficient for understanding Chinese in any literal sense because the man could write ‘squoggle squoggle’ after ‘squiggle squiggle’ without understanding anything in Chinese.”

“The whole point” of the Chinese Room ‘argument’, Searle admits, is grounded in incorrigible introspective intentionality identification: “…it seems to me quite obvious…”. Why should we believe Searle when he declares that, during the course of his symbol manipulations, no ‘understanding’ [no undefined X-ing] occurs? Premises 9 & 10, the sine qua non of the CR argument, have as their subargumentation only this: Searle (and, surely, we as well!) can infallibly (or at the very least reliably) detect the presence or absence of ‘understanding’.

Problem 1 was that X detection is incoherent lacking individuation criteria for X’s, and Searle has not provided these criteria for ‘understanding’. But even if we found or invented some, Problem 2 would remain: the CR argument is at most as persuasive as one’s faith in incorrigible introspective X identification. Owing to a bevy of argumentation from sources such as behavioral economics, cognitive psychology, neuroscience, and philosophy of mind that is beyond the scope of this entry, I have very little confidence in the reliability of introspection. Worse yet, the CR adds another level of potential bias and error by relying not only on introspection but on hypothetical introspection(!): ‘Don’t you think you would think there was no understanding if you were in the Chinese room?’ I have no trust in this methodology, and so remain unpersuaded.

Problem 3: Metaphilosophical methodological differences …

“Intentionality in human beings (and animals) is a product of causal features of the brain. I assume this is an empirical fact about the actual causal relations between mental processes and brains.”

As far as I know, and with the way I use the term, there is no ‘empiricism’ conducted on ‘mental processes’ whatsoever. I would accept that we empirically examine brains, but I would need to hear more about an account of empiricism that claimed we empirically examine ‘minds’. Minds seem to me to be theoretical posits employed to explain empirical behavioral results. To assume that we can empirically access minds at all begs an eliminativist-versus-mental-realist question, and loses this reader.

Furthermore, as someone currently convinced of Humean causal skepticism, I will not be able to join Searle in supposing that we can empirically access “actual causal relations” either. Empirically, we observe one thing follow another, up to the limiting case of constant conjunction under frequent replication. But we do not empirically access ‘causes’; we theoretically posit causes to explain the conjunctions.

“One way to test any theory of the mind is to ask oneself what it would be like if my mind actually worked on the principles that the theory says all minds work on.”

Here Searle clearly states his commitment to the viciously armchair practice of modal counterfactual hypothetical intuitionism. Each of these four components introduces compounding levels of skepticism in this reader. What would be the case if George W. Bush had been born missing his left thumb? What if Searle had had a metal rod blast through his frontal lobe when he was 25 and working on the railroad? This is among those sorts of claims that ‘give philosophers a bad name’ in parts of the intellectual community. That Searle thinks this is not only some sort of non-trivial introspective psychology but an actual test of a theory of mind is positively baffling. All I would be willing to grant is that this sort of ‘test’ might tell us something about the current doxastic dispositions of the agent doing the testing.

“The Chinese room example shows that there could be two ‘systems’, both of which pass the Turing test, but only one of which understands; and it is no argument against this point to say that since they both pass the turing test they must both understand, since this claim fails to meet the argument that the system in me that understands English has a great deal more than the system that merely processes Chinese.”


A great deal more what, sir? Searle thinks that the strong AI reading of the Turing test, on which passing it suffices to justify attributions of understanding, begs the question against him. There may be some people with whom Searle interacts who do use Turing test results in this way, but I would not, and the AI advocates with whom I am familiar would not either. My interpretation of Turing test discourse [and I believe this to be made clear in Turing’s “Computing Machinery and Intelligence”] is: let us design an operational procedure to whose results we could all agree, and use this test to replace Searlesque discourse about undefined X’s [‘(original) intentionality’, ‘understanding’, ‘thinking’…]. My stripe of Turing test advocate would not claim “Passing the Turing test legitimizes ‘Searle-style-understanding’ attributions”; they would claim “Passing the Turing test is intersubjectively determinate, well-defined, and interesting. Thus it ought to supplant ‘Searle-style-understanding’ talk altogether.”
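
To make that contrast concrete, here is a toy sketch of my own (not from Turing’s paper or from MBP; the judge/machine/human objects and their ask/reply/guess methods are hypothetical interfaces) of the Turing test treated as an operational procedure. The only point is that its outcome is a public record any observer can check, with no appeal to an undefined inner X:

    import random

    def run_session(judge, machine, human, rounds=5):
        """One blinded session: the judge questions a hidden respondent,
        then guesses whether that respondent was the machine."""
        respondent, truth = random.choice([(machine, "machine"), (human, "human")])
        transcript = []
        for _ in range(rounds):
            question = judge.ask(transcript)      # judge poses a question, given the exchange so far
            answer = respondent.reply(question)   # hidden respondent answers
            transcript.append((question, answer))
        return judge.guess(transcript) == truth   # True iff the judge identified the respondent

    def identification_rate(judge, machine, human, trials=100):
        """The machine 'passes' to the degree judges do no better than chance.
        The number returned is something all parties can inspect and agree on."""
        correct = sum(run_session(judge, machine, human) for _ in range(trials))
        return correct / trials

Whatever one thinks of the philosophical significance of such a test, its verdicts are intersubjectively checkable in a way that bare ‘Searle-style-understanding’ attributions are not.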

“And the mental-nonmental distinction cannot be just in the eye of the beholder but it must be intrinsic to the systems; otherwise it would be up to any beholder to treat people as nonmental and, for example, hurricanes as mental if he likes… I will simply assert the following without argument. The study of the mind starts with such facts as that humans have beliefs, while thermostats, telephones, and adding machines don’t. If you get a theory that denies this point you have produced a counterexample to the theory and the theory is false.”

I sometimes think that Searle’s work does have a potential value, though not the purpose to which it is typically yoked at this time: it could be a major component of a university class on how not to do philosophy. “Here we find, class, a textbook example of what some of us call the fallacy of ‘begging the question’.” Searle is so dogmatically committed to the ‘fact’ that human beings and machines are fundamentally different as regards their intentional states that if you develop a theory which argues otherwise, he will declare it false for that reason alone!

This paragraph is a shocking admission of question-begging dogmatism, and it immediately removes its author from the community of philosophical discourse that I wish to be a member of. At least four respectable candidates in the philosophy of mind (Putnamesque machine state functionalism, Dennettian intentional stance behaviorism, Churchlandish scientific realist eliminativism, and Frankish-style illusionism) are dismissed out of hand as inherently false. Searle’s likewise undefined and undefended “causal powers of the brain”, the metaphysical fundament of his human-chauvinistic intentionalist exclusivism (his anthropocentric mental realism), are bombastically and dogmatically asserted. As someone who values and respects argumentation, and who thinks that the larger the claim the greater the burden, I do not appreciate the anti-philosophical practice of stating as fact the very issue of the current problematic… and so I remain unpersuaded.

