Chinese Room

The Chinese Room Argument is a well-known argument in the philosophy of mind. It originates from a thought experiment by American philosopher John Searle, first published in 1980 and later presented in his book Minds, Brains, and Science[1]. His goal was to argue that digital computers cannot attain consciousness simply by executing an appropriate program.

Searle imagines a person who understands not a word of Chinese, sitting alone in a room. Following a rule book that functions like a computer program, the person produces appropriate answers to Chinese questions passed into the room by manipulating symbols, and sends the resulting strings of Chinese characters back out under the door. A person outside the room might now mistakenly conclude that the person inside the room can speak Chinese.

Searle’s intention with his thought experiment is to stress that a digital computer, which merely follows the instructions of a program, may appear intelligent, i.e. appear to understand Chinese, yet it does not really understand the language and is therefore not intelligent. He stresses that there is a crucial difference between appearing intelligent and being intelligent.

According to Searle, computers merely use syntactic rules to manipulate symbol strings, but have no understanding of their meaning, i.e. their semantics[2]. In contrast to human minds, which result from biological processes, computers can at best simulate, but not duplicate, those biological processes.

  1. The Thought Experiment

Searle envisions a computer program, written by skilled programmers, which enables a computer to simulate an understanding of Chinese. If the computer were given a Chinese question, it would search for the question in a database and produce an appropriate Chinese answer, good enough to imitate the answers a native Chinese speaker might give. The question Searle now poses is whether the computer actually understands Chinese in the same way a native speaker does. To underline the point he wants to make, he introduces the Chinese Room thought experiment as an analogy.
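To make the purely syntactic character of such a program concrete, the following is a minimal illustrative sketch in Python (not from Searle's text; the questions, answers, and names are hypothetical): a responder that maps Chinese questions to canned Chinese answers by string lookup alone, with no representation of what the strings mean.

# Illustrative sketch only: a purely syntactic "Chinese Room" responder.
# The rule book is modelled as a lookup table; the program matches
# character strings and returns stored strings, never their meaning.
RULE_BOOK = {
    "你叫什么名字？": "我叫小明。",      # "What is your name?" -> "My name is Xiao Ming."
    "你今天好吗？": "我很好，谢谢。",    # "How are you today?" -> "I am fine, thank you."
}

def answer(question: str) -> str:
    # Pure symbol manipulation: the question string is matched, not interpreted.
    return RULE_BOOK.get(question, "对不起，我不明白。")  # "Sorry, I do not understand."

print(answer("你今天好吗？"))  # Prints a fluent-looking reply; no understanding is involved.

The sketch returns answers that may look competent to an outside observer, yet the program never attaches any meaning to the symbols it shuffles, which is exactly the gap between syntax and semantics that the thought experiment targets.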

The prerequisites are the following: A person who does not understand a single word of Chinese is locked in a room with several baskets filled with Chinese symbols. Furthermore, the person is given a book, written in English, which contains rules for formally manipulating the Chinese symbols – in terms of their syntax, not their semantics – analogously to a computer program. Now, Searle supposes that further Chinese symbols are passed into the room, together with additional rules for passing Chinese symbols back out of the room. The symbols passed into the room are called “questions” by the people outside, whereas the symbols the person passes back out are called “answers to the questions” (30).[1] Moreover, the programmers are assumed to be so good at writing the rules for manipulating the symbols, i.e. the program, that the answers are indistinguishable from those a native speaker would give.

Searle now argues that the person inside the room, rearranging and exporting Chinese symbols as answers to incoming questions, has no chance of coming to understand Chinese by simply manipulating those formal symbols. However, because the person faithfully follows the rules, an outside observer gets the impression that the person inside the room understands Chinese, although he or she does not understand a single word.

To draw a contrast, Searle asks what would be different if the person were asked questions in English and required to give English answers. In English, the person would understand the questions and answers because the symbols carry a known meaning for him or her, in contrast to the Chinese symbols, which convey no meaning to the person at all.

Starting from this specific situation of a person in a room, Searle generalizes his conclusion: if working through a computer program containing the rules for Chinese, i.e. the syntax, is not enough to give the person the slightest understanding of the language, then it is not sufficient to give any digital computer an understanding of Chinese, i.e. the semantics. If the person is unable to understand Chinese, no digital computer can do so just by running a program, because the computer has nothing that the person did not have. All that the computer and the person have is a formal guideline for manipulating and rearranging Chinese symbols without grasping their meaning; in other words, computers only have a syntax, but do not get the semantics. Understanding a language, and likewise having consciousness, requires more than the ability to follow rules and shuffle symbols: it requires interpreting the meaning of those symbols.

  2. Criticism and Counterarguments

2.1 The Systems Reply

This critique admits that the person in the room does indeed not understand Chinese. However, it suggests that the person might just be the central processing unit in the larger system of the entire computer, which contains the database, the memory, and the instructions – the entire system that is needed to answer the Chinese questions posed. So, the point of the Systems Reply is that the system extends beyond the person in the room: the person running the program might not understand Chinese, but the system as a whole would.[2]

Searle counters this objection by stressing that there is still no way for the system to get from syntax to semantics. The person, as the central processing unit, has no way of figuring out the meaning and implications of the Chinese symbols, and neither does the system as a whole.

2.2 The Robot Reply

Another critique considers putting the program inside a robot. If the robot were mobile and able to interact with the world, one could argue that this would be enough to guarantee that it understood Chinese. Moving freely in the world outside the room, the robot could attach meanings to symbols. However, Searle refers back to the distinction between syntax and semantics. As long as the robot only has a computer as a brain, it is still unable to go from syntax to semantics, even if it might appear to outsiders to understand Chinese. Once again, Searle imagines a person acting as the computer inside the robot, shuffling the symbols. As long as the person only has a formal program, he or she has no way of attaching meanings to the symbols, and the fact that the robot interacts with the outside world does not help the person find the meanings as long as he or she knows nothing about those interactions (33).[1] All in all, if the computer or person is isolated from the outside world, syntax and internal connections are not enough to obtain semantics. Nevertheless, causal connections with the outside world could possibly provide meanings for the formal symbols, for instance in a scenario in which the computer does what children do: learn by seeing and doing.[2]

  3. References

[1] Searle, J. Minds, Brains, and Science. 1984. Harvard University Press, Cambridge, Massachusetts.

[2] Cole, D. The Chinese Room Argument. 2020. Stanford Encyclopedia of Philosophy.

Author: Julia Specht