A REVIEW OF

JOHN R. SEARLE'S
"CHINESE ROOM"

ARTIFICIAL INTELLIGENCE:
A DEBATE--IS THE BRAIN'S MIND A COMPUTER PROGRAM?
SCIENTIFIC AMERICAN, JANUARY 1990



The review (in the text below) of the above article is by
David Lee Winston Miller

This review was originally prepared for Professor Jorge Novillo,
CSC 557, ARTIFICIAL INTELLIGENCE,
STATE UNIVERSITY OF NEW YORK, INSTITUTE OF TECHNOLOGY

Some changes have been made to the original paper to reflect very interesting recent developments.

Notes: This review is also used in Dave Grimshaw's CPS721 Introduction to Artificial Intelligence course at Ryerson Polytechnic University, Toronto, Canada. (An alternate, possibly not up-to-date, copy of the review may be found at "A Student Review of the Scientific American Article" on the course's Web page.) The review below assumes that the reader is familiar with Searle's article; however, those familiar with consciousness issues will probably understand most of its ideas. For background information, see "The Chinese Room Parable" in the Ryerson Polytechnic course mentioned above. Note that the Scientific American article also included a response by Paul and Patricia Churchland, entitled "Could a Machine Think?"


A REVIEW OF

JOHN R. SEARLE'S
"CHINESE ROOM"

by David Lee Winston Miller

SO? --That was the refrain I learned as a child in response to the predictable and constant bombardment of suggestive claims by other children. What child hasn't learned some simple reply (SO WHAT?!!!) to deal with inevitable--but apparently nonsensical--implications like MY DOG IS HAIRIER THAN YOURS!

As I read many of Searle's points, I could not contain my SO WHAT reply. Searle basically says that his brain is more causative than the other guy's computer. SO. . .? Searle says that programs can be run on different hardware--maybe even beer cans. SO? O.K.--I admit being uncomfortable that my dog wasn't as hairy as the other kid's dog, but after thinking about it, I concluded that the dog was a fine dog just the same. And I admit being more than uncomfortable with the thought that beer cans running a program might constitute a mind! But I don't believe that my level of discomfort should make me shy away from the possibility.

How does it follow that, because beer cans might manipulate symbols, SM (symbol manipulating) machines (or their programs) can't think? To be fair, Searle does not solely rely on the reader's discomfort or common sense. And he does concede that it may be possible that such a system would think if it had "the relevant causal capacities equivalent to those of brains." But wait . . . "causal capacities"? It seems that this should be a simple question of physics--all things that do things, do so with energy. Neurons (and the brain as a whole) derive their "causal capacities" from outside sources. So do computers, and so would a wind-powered symbol-manipulating string of beer cans. With respect to causal capacities, there may be no qualitative differences between brains and beer cans.

This is all so very frustrating because at some gut level, one cannot help but agree with Searle. His arguments seem compelling at times. But are they logically compelling? Or are we simply falling for this hairy dog business?

Searle seems impressed with the specificity of the brain and claims that its causation is from the "bottom up." This presents no problem for our beer can machine--we simply add a few car bumpers and black-velvet portraits of Elvis, assigning them specific functions and supplying them with energy at these specific function points. Each function may even play an important role in the machine's survival--say, monitoring resources and maintenance. As output, this machine might have strings (or maybe chains made from beer can tabs) attached to a telex machine with which it could continually send lobbying messages to Congress urging perpetual funding for maintenance of its large structure. (Surely, such a machine would have been the creation of Congress in the first place.)

Searle's causal concept, then, may simply not be relevant. There are, however, other possible problems with his logic. He seems to be telling us that if you can dissect an SM machine into parts, and none of the individual parts constitute intelligence, then the machine is not intelligent. In a brilliant (yet insufficient to prove his point) example, he anticipates the reader's objection to this dissection argument--the rejoinder that the person in the room could simply memorize the rules and do all of the calculations in his head, thereby becoming the whole system.

On first reading, this rejoinder makes a lot of sense. After all, we can imagine doing this (the calculations) ourselves and not understanding Chinese, yet being the whole system--a system that produces conversation in Chinese. The point seems especially valid if we imagine that the process is still as slow as Searle's Chinese room. So, while doing the tasks, our conscious attention could be preoccupied with tonight's date or a baseball game on television. The symbols would be produced by rote. In fact, one might not even realize that it is Chinese that is being produced. We might say, "Hey, I just work here. . . . I don't know what all these rules are about!"
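To make the "by rote" character of the manipulation concrete, here is a minimal sketch in Python (my own toy illustration, not Searle's example; the rules and symbols are made up): the machine maps input symbol strings to output symbol strings purely by lookup, and nothing in it represents what the symbols mean.

    # Toy illustration (not Searle's example): purely syntactic symbol manipulation.
    # The "rule book" maps input symbol strings to output symbol strings.
    # Nothing here encodes what the symbols mean--the mapping is rote lookup.
    RULE_BOOK = {
        "你好吗?": "我很好, 谢谢.",          # made-up rules; the content is irrelevant
        "你叫什么名字?": "我没有名字.",
    }

    def chinese_room(symbols: str) -> str:
        """Return whatever output symbols the rule book prescribes for the input."""
        return RULE_BOOK.get(symbols, "对不起, 我不明白.")   # default: "I don't understand"

    if __name__ == "__main__":
        print(chinese_room("你好吗?"))   # a fluent-looking reply, produced without understanding

The person (or machine) executing this lookup needs no grasp of Chinese; the question the whole-system rejoinder raises is whether the system as a whole nonetheless understands.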

But let's examine the system more closely. There is no relevant difference between storing the rules in our head and storing them in a rule book in the Chinese room. The same observation applies to storing symbols in our head as opposed to storing them in the Chinese room baskets. (Indeed, that is Searle's point in this whole-system example.) In imagining the whole-system example, one realizes that one's mind would not be cognizant of the semantic content of the symbols that are being manipulated by rote. However, this fact doesn't preclude the possibility that an additional mind has been created that is cognizant of the semantic content. (It may be more correct to say that a mind "has occurred" instead of "has been created"; it may be more correct to say that a mind "occurs" instead of a mind "exists.")

Just because one can imagine being the whole system and being unaware of any mind that understands Chinese, it does not follow that no such mind has occurred: It is possible that two minds are now occurring in the same body with no awareness of each other. One of the two minds would be very unconventional--its computational structure being very much like that of the beer can machine. Although the unconventional mind would borrow the conventional mind as a host processor and storage unit, the program of the unconventional mind would be executed at the top level instead of at the "machine" level. Searle may have tricked himself (and possibly many readers) with this example. Nothing about the whole-system example excludes the possibility of semantics occurring. (Because we really don't know what causes semantics to occur!)

An interesting possibility about this is that the additional (unconventional) mind could be considered superior to the conventional mind in the sense that it could readily be duplicated on another type of SM machine. (However, in the future, the same might be said of the conventional mind.) The unconventional mind could even occur in two places simultaneously, say, as a programmed beer can SM machine and as a Chinese room. (Note that, for the sake of reading clarity, I generally treat the room--and other machines--as the 'mind'.) In either case, two such minds would presumably cease to be identical as soon as they received nonidentical inputs! (That is, to the extent that they were not subject to quantum mechanical effects before that. However, it appears that to explain consciousness, we do need QM effects--or at least something beyond classical physics. I claim that consciousness is a reasonably determinable animal property.)

Another interesting possibility is that the conventional mind might be considered a subset of the unconventional mind, so that the unconventional mind would include the conventional one even if each was unaware of the other. Indeed, there is some reason to believe that conventional biological minds are composed of "subset minds." This opens up the possibility of applying set theory to minds: minds might intersect or join, and their differences might be expressed as set differences.
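As a purely illustrative sketch (the "contents" listed are hypothetical placeholders, not a claim about what real minds contain), the set operations in question are just the ordinary ones:

    # Purely illustrative: treating "minds" as sets of hypothetical contents,
    # so that intersection, union, and difference take their ordinary meanings.
    conventional_mind = {"English", "tonight's date", "the baseball game"}
    unconventional_mind = {"Chinese", "the rule book", "the baseball game"}

    shared = conventional_mind & unconventional_mind                 # intersection
    joined = conventional_mind | unconventional_mind                 # union ("joined" minds)
    only_unconventional = unconventional_mind - conventional_mind    # set difference

    print(shared, joined, only_unconventional, sep="\n")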

Of course, this is all very disturbing--what if Searle is wrong? What are we to think of the possibility that we could be simulated or even replicated someday (on a set of beer cans no less)? If it becomes possible, then complex ethical questions come into play:

--For instance, does the replicated mind have rights?

--If you accidentally replicated two minds, would it be all right to destroy one?

--Does the mind reside in the processor or in the program (and its data)?

A very interesting possibility is that an examination of these and other similar questions may be useful in answering questions concerning animal rights and even more complex topics such as abortion, birth and death. (We may do well, though, to consider the advice of Jeremy Bentham, a nineteenth-century philosopher who warned that intellect is not the basis for rights. [See page 95, The Case for Animal Rights by Tom Regan, 1983, University of California Press.]) If life has value beyond its most obvious material aspects (it does), science may (or may not) be able to help us identify, in precise terms, just what it is about life that has such value. It may be possible to answer the question of who we are as well as what we are. It is reasonable to wonder if important answers lie in the science that concerns itself with information. If so, we have some difficult questions on our hands: "If there is good and bad in the universe, what role does information play; can some information be inherently bad while other information is inherently good?" Who knows--it may turn out that information is literally ubiquitous, that information is the basis for everything important, and that physical entropy is its real measure. New note: Again, however, we would at least have to have some sort of non-local interaction between bits of information to explain consciousness. This may point back to my earlier remarks about the occurrence of a program's execution.

Although Searle succeeds wildly in making the reader think, he fails in his attempt to refute the claim of what he calls "strong AI." This theory, according to Searle, "claims that thinking is merely the manipulation of formal symbols, and that is exactly what the computer does: manipulate formal symbols." After closely examining Searle's arguments, I cannot discount the possibility that someday, a program (when executed) may actually think and choose to answer Searle's refutation.

Searle says that "symbols and programs are purely abstract notions: they have no essential physical properties to define them and can be implemented in any physical medium whatsoever." But Searle would have to agree that despite being abstract, symbols and programs are real when implemented--the terms "symbols" and "programs" simply describe real phenomena that occur under the laws of physics where matter and energy interact. In this way, symbols and programs do have physical properties to define them (however broad and abstract the definitions of "symbols" and "programs" are).
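One way to picture this (a toy sketch of my own, with made-up rules, not anything from Searle's article) is that the same abstract rule table can be realized in quite different concrete media--here a hash table versus a linear scan over a list--while the program they implement, and hence its behavior, is identical:

    # Toy sketch: one abstract "program" (a rule table) realized in two
    # different concrete media. Both realizations compute the same function.
    RULES = [("A", "B"), ("B", "C"), ("C", "A")]   # made-up rules, for illustration only

    # Medium 1: a hash table.
    as_dict = dict(RULES)
    def run_on_dict(symbol: str) -> str:
        return as_dict[symbol]

    # Medium 2: a linear scan over a list (think of a row of labeled beer cans).
    def run_on_list(symbol: str) -> str:
        for left, right in RULES:
            if left == symbol:
                return right
        raise KeyError(symbol)

    # The two media differ physically and structurally, but the program they
    # realize behaves identically on every defined input.
    assert all(run_on_dict(s) == run_on_list(s) for s, _ in RULES)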

Searle gives the following as axioms (paraphrased here from his article):

  1. Computer programs are formal (syntactic).
  2. Human minds have mental contents (semantics).
  3. Syntax by itself is neither constitutive of nor sufficient for semantics.

My understanding of Searle's position is that "syntax" is independent of causal powers, and can be contrasted with "semantics," which are closely tied to mental content and the causal powers of the brain. We have already briefly examined the "causal powers" question. However, further consider that:

  1. Semantics seem to occur in the physical world. If phenomena outside the physical world are responsible for semantics, they have not been scientifically qualified or quantified.
  2. Brains and Turing machines seem to occur in the physical world. If phenomena outside the physical world are responsible for brains (or for that matter, Turing machines), they have not been scientifically qualified or quantified.
  3. Searle has not convincingly identified the phenomena (real-world or otherwise) that explain how semantics occur in brains and has not shown that Turing machines lack any such (convincingly identified) phenomena. What he has presented is a property of brains that may or may not be a significant component in whatever causes thought. Furthermore, he has not shown that it is impossible for semantics to occur outside of the context of the particular phenomena responsible for semantics in the brain.

It seems that Searle's belief boils down to a realization that the architecture of the brain cannot be divided (at least not yet) into a separate program and computer. The architecture of the brain is an amazing biological integration of data and machine. On the other hand, an SM machine is easily dissectible. Searle's seemingly mystical pronouncements about causal powers seem rooted in the recognition of this phenomenon of biological integration. (New note: However, Searle may have a valid point if we recognize the role of quantum mechanics in biological integration.)

Searle could be on to something. . . . The causal powers examples he presents seem to be related to survival. We might say that thinking occurs only when the machine that is manipulating symbols has a stake in the outcome of future inputs. If we use that definition, axiom 3 may seem reasonable (although we will have to decide what constitutes "a stake"). (But what if it were to happen that our brain processes were disconnected from our survival-related senses and that survival was no longer a problem--would we not continue to think?!!)

Does it matter if survival-related instrumentation has integral power sources at the points of measurement? Does it matter if "causal" data travel on specialized channels to higher-level processes from the "bottom up"? Isn't this what Searle's "causal powers" property is all about? How is it different, in any meaningful way, if the power source is an AC wall socket? How is it different if the architecture is very general, with most data sharing the same pathways?

Searle has pointed out real differences between the architecture of the brain and that of SM machines. But he has failed to show that these differences matter. It could be that semantics automatically occur, as a matter of fact, when symbols are manipulated (even when we, as outside observers, cannot see or feel the meaning). If so, there is a lot less to thinking than we generally believe, and it is occurring all around us.

Searle could be right. But he hasn't proven it. (To be fair, he bases his views on axioms. However, I didn't find axiom #3 to be a self-evident truth; I found it to be a common-sense notion that is possibly false. New note: Although it does appear that there has to be some sort of interaction--be it static or dynamic--between syntactical elements--be they physical or abstract--for semantics to occur.) More specifically, he hasn't proven that "strong AI" is wrong and he hasn't proven that his Chinese room fails to "think." Let me refute Searle's refutation precisely: I claim that nothing Searle has presented rules out the possibility that the execution of the right program causes a mind--semantics and all--to occur.

My dog turned out to be just as much a dog as the hairy one, but I don't know the answer to the "Chinese room." The fact is, we don't know what thinking is; and, with respect to thinking, we don't know, in scientific terms, what we animals are. Searle's refutation of strong AI might provide some insight into such questions if the refutation can ever be shored up. But for now, we are stuck with behavioral standards like the Turing test. With respect to intelligence, we really don't know what we are; we just know what we do. The future uniqueness of animal intelligence is still in question.

Copyright 1996, 1997, David Lee Winston Miller. This document may be linked or copied in any form provided that it is reproduced in its entirety with copyright notice and provided that the links are noted or included.

