Chinese room

The Chinese room argument is a thought experiment designed by John Searle (1980) to debunk the stronger claims made by proponents of strong AI (and, with it, of functionalism in the philosophy of mind).

A belief of strong AI is that if a machine were to pass a Turing test, then it could be regarded as "thinking" in the same sense as human thought. Put another way, proponents of strong AI hold that the human brain is a computer (of a sort) and the mind nothing more than a program. Adherents of this view believe, furthermore, that systems demonstrating these abilities help us to explain human thought. A third belief, presupposed by the first two, is that the biological material of the brain is not necessary for thought. Searle summarizes this viewpoint, which he opposes, as follows:

The computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states. (Hofstadter and Dennett, 353)



In the Chinese room thought experiment, a person who understands no Chinese sits in a room into which written Chinese characters are passed. In the room there is also a book containing a complex set of rules (established ahead of time) for manipulating these characters and passing other characters out of the room. This is done on a rote basis, e.g. "when you see character X, write character Y". The idea is that a Chinese-speaking interviewer passes questions written in Chinese into the room, and the corresponding answers come out of the room, so that from the outside it appears as if there were a native Chinese speaker inside.
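To make the rote character of the procedure concrete, here is a minimal sketch in Python of the rule book as a pure lookup table. The rules and phrases are invented for illustration; a real rule book capable of passing a Turing test would be unimaginably larger, but the operator's task would be the same: match shapes and copy out the prescribed reply, understanding nothing.

    # A toy "rule book": a lookup from input character strings to
    # prescribed output strings. The entries are hypothetical; what
    # matters is that replies are selected by the shape of the input
    # alone, never by its meaning.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",      # "How are you?" / "Fine, thanks."
        "你会说中文吗？": "当然会。",      # "Do you speak Chinese?" / "Of course."
    }

    def operate_room(question: str) -> str:
        # Match the incoming characters against the book and copy out
        # the listed reply. No step interprets the characters.
        return RULE_BOOK.get(question, "请再说一遍。")  # "Please repeat that."

    print(operate_room("你好吗？"))  # prints 我很好，谢谢。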

It is Searle's belief that such a system could indeed pass a Turing test, yet the person who manipulated the symbols would understand Chinese no better than he did before entering the room. Searle proceeds in the article to refute the claims of strong AI one at a time, positioning himself as the one who manipulates the Chinese symbols.

The Chinese room assails two claims of strong AI. The first is that a system which can pass the Turing test understands its input and output. Searle replies that, as the "computer" in the Chinese room, he gains no understanding of Chinese by simply manipulating the symbols according to the formal program, in this case the complex rules. The operator of the room need not have any understanding of what the interviewer is asking, or of the replies he is producing. He may not even know that there is a question-and-answer session going on outside the room.

The second claim of strong AI which Searle objects to is the claim that such a system explains human understanding. Searle asserts that since the system functions (in this case, passes the Turing test) even though there is no understanding on the part of the operator, the system itself does not understand, and therefore cannot explain human understanding.

The core of Searle's argument is the distinction between syntax and semantics. The room is able to shuffle characters according to the rule book; that is, the room's behaviour can be described as following syntactic rules. But on Searle's account it does not know the meaning of what it has done; it has no semantic content. The characters do not even count as symbols, because at no stage of the process are they interpreted.
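The point can be made sharper in code: because the procedure is defined over the form of the tokens alone, replacing every character with an arbitrary, meaningless identifier leaves the program's behaviour completely unchanged. (This recoding is our illustration, not Searle's.)

    # Recode the rule book with opaque token IDs. The lookup behaves
    # identically, since nothing in the procedure ever depended on
    # what the characters meant.
    ENCODE = {"你好吗？": "t17", "我很好，谢谢。": "t42"}
    RULES_BY_FORM = {"t17": "t42"}

    def respond(token: str) -> str:
        # Purely syntactic: one uninterpreted string in, another out.
        return RULES_BY_FORM.get(token, "t0")

    assert respond(ENCODE["你好吗？"]) == ENCODE["我很好，谢谢。"]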

That syntax is insufficient to account for semantics is perhaps not as controversial as understanding what needs to be added to syntax in order to account for semantics. Searle lists consciousness, intentionality, subjectivity and mental causation as candidates. Any adequate theory of mind must be able to explain intentional states. Searle is at pains to point out that mind is a result of brain function. He rejects dualism, insisting that mental states are biological phenomena.



In 1984 Searle produced a more formal version of the argument of which the Chinese Room forms a part. He listed four premises:

Premise 1: Brains cause minds.

Premise 2: Syntax is not sufficient for semantics.

Premise 3: Computer programs are entirely defined by their formal, syntactic structure.

Premise 4: Minds have semantic content.


The second premise is supposedly supported by the Chinese Room argument, since Searle holds that the room follows only formal syntactic rules, and does not "understand" Chinese. Searle posits that these premises lead directly to three conclusions:

Conclusion 1: No computer program by itself is sufficient to give a system a mind. Programs are not minds.

Conclusion 2: The way that brain functions cause minds cannot be solely in virtue of running a computer program.

Conclusion 3: Anything else that causes minds would have to have causal powers at least equivalent to those of the brain.


Searle describes this version as "excessively crude". There has been considerable debate about whether the argument is indeed valid. These discussions centre on the various ways in which the premises can be parsed. One can read Premise 3 as saying that computer programs have syntactic but not semantic content, so that Premises 2, 3 and 4 validly lead to Conclusion 1. This leads to debate as to the origin of the semantic content of a computer program.
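One way to make one such parsing explicit is to symbolize the premises in first-order logic. The predicate letters below are our own gloss, not Searle's; on this reading the inference is valid, and the dispute shifts to whether "syntax is not sufficient for semantics" really licenses the strong universal second line.

    % A possible symbolization (our gloss, not Searle's); requires amsmath.
    %   Prog(x): x is a running computer program
    %   Syn(x):  x is defined entirely by its syntactic structure
    %   Sem(x):  x has semantic content
    %   Mind(x): x is a mind
    \begin{align*}
    \text{P3:} \quad & \forall x\,(\mathit{Prog}(x) \to \mathit{Syn}(x))\\
    \text{P2:} \quad & \forall x\,(\mathit{Syn}(x) \to \lnot\mathit{Sem}(x))\\
    \text{P4:} \quad & \forall x\,(\mathit{Mind}(x) \to \mathit{Sem}(x))\\
    \text{C1:} \quad & \forall x\,(\mathit{Prog}(x) \to \lnot\mathit{Mind}(x))
    \end{align*}

Here C1 follows by chaining P3 with P2 and then applying the contrapositive of P4; the contested step is whether Premise 2 supports the universal reading given as the second line.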




There are many criticisms of Searle's argument. Most can be categorized as either systems replies or robot replies.

The systems reply

Although the individual in the Chinese room does not understand Chinese, perhaps the person and the room, considered together as a system, do. Searle's reply is that someone might in principle memorise the rule book; they would then be able to interact as if they understood Chinese, but would still just be following a set of rules, with no understanding of the significance of the symbols they are manipulating.

The robot reply

Suppose that instead of a room, the program were placed in a robot that could wander around and interact with its environment. Surely then it would be said to understand what it is doing? Searle's reply is to suppose that, unbeknownst to the individual in the Chinese room, some of the inputs he receives come directly from a camera mounted on a robot, and some of his outputs are used to manipulate the robot's arms and legs. Nevertheless, the person in the room is still just following the rules, and does not know what the symbols mean.

The brain simulator reply

Suppose that the program instantiated in the rule book simulated in fine detail the interaction of the neurons in the brain of a Chinese speaker. Surely the program must then be said to understand Chinese? Searle replies that such a simulation would not reproduce the important features of the brain: its causal and intentional states.
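As a rough illustration of what moving from canned replies to neuron-level rules might look like, here is a toy sigmoid-unit update in Python. The weights and sizes are invented; on Searle's view the operator applying such arithmetic rules is still only shuffling uninterpreted symbols, now numerals instead of characters.

    import math

    # Toy neuron-level "rule book": each step computes new activation
    # values from old ones via weighted sums and a sigmoid. The weights
    # are made up; the point is that the rules are arithmetic, not
    # interpretive.
    def step(activations, weights):
        return [
            1.0 / (1.0 + math.exp(-sum(w * a for w, a in zip(row, activations))))
            for row in weights
        ]

    acts = [0.1, 0.9, 0.3]
    W = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5], [-0.1, 0.4, 0.9]]
    for _ in range(3):          # run a few update steps
        acts = step(acts, W)
    print(acts)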

The combination reply

But what if a brain simulation were connected to the world in such a way that it possessed the causal powers of a real brain, perhaps by being linked to a robot of the type described above? Surely then it would be able to think. Searle agrees that it is in principle possible to create an artificial intelligence, but points out that such a machine would have to have the same causal powers as a brain; it would be more than just a computer program.



References

  • John Searle (1980), "Minds, Brains and Programs". Available at http://members.aol.com/NeoNoetics/MindsBrainsPrograms.html

  • John Searle (1984), Minds, Brains and Science: The 1984 Reith Lectures. British Broadcasting Corporation.

  • Larry Stephen Hauser, dissertation. Available at http://members.aol.com/wutsamada/disserta.html

  • Larry Hauser, "Searle's Chinese Box: Debunking the Chinese Room Argument". Available at http://members.aol.com/lshauser2/chinabox.html

  • "The Chinese Room Argument", Stanford Encyclopedia of Philosophy. http://plato.stanford.edu/entries/chinese-room/

  • Philosophical and analytic considerations in the Chinese Room thought experiment. http://samvak.tripod.com/chinese.html





This article is licensed under the GNU Free Documentation License. It uses material from the Wikipedia article "Chinese room".

