Summary of Lecture 3 – John Searle, "Can Computers Think?" (1984)

In 1984, John Searle published Minds, Brains, and Science [2], laying out his conception of artificial intelligence. He addresses several key conceptual arguments in this work, including the thought experiment of the Chinese room, which later entered philosophical encyclopedias as Searle's Chinese Room Argument [1].

First of all, Searle defines the term "strong AI" and states his position on it. By his definition, strong AI holds that the human brain is just a digital computer and the mind is its computer program, i.e. the mind is to the brain what a program is to computer hardware (26) [2]. The brain would then merely be an assembly of hardware that runs the programs responsible for human intelligence. Consequently, any system, biological or physical, that ran the right program would also have a mind in the same sense that humans do; there would be nothing essentially biological about the mind. According to this view, it would be possible to design the appropriate hardware and software to obtain artificial brains and minds equivalent to human brains and minds.

However, Searle firmly rejects this concept of strong AI and clarifies that his counterargument is timeless: it does not depend on the current stage of computer technology but follows from the very definition of digital computers, which Searle describes as follows. The operations of a digital computer can be specified purely formally, meaning that the steps in its operation are specified as a sequence of zeroes and ones. A computational rule then evaluates those symbols and, depending on the machine's state, performs a certain operation, but the symbols have no meaning to the computer (29) [2]. Furthermore, properly designed hardware can run an indefinite number of different programs, and analogously, the same program can run on an indefinite range of different types of hardware.
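To make this purely formal mode of operation concrete, consider a minimal sketch (my own illustration, not from Searle's text) of a machine that rewrites binary symbols according to a fixed rule table. The rule table and the example tape are invented for illustration; the point is that the machine only matches and rewrites symbols, with no access to what they might stand for.

```python
# A purely formal symbol manipulator (illustrative sketch).
# rules: (state, symbol) -> (next_state, output_symbol)
RULES = {
    ("start", "0"): ("start", "1"),
    ("start", "1"): ("start", "0"),
}

def run(tape: str) -> str:
    """Rewrite each symbol on the tape according to the rule table."""
    state, output = "start", []
    for symbol in tape:
        state, out = RULES[(state, symbol)]
        output.append(out)
    return "".join(output)

print(run("0110"))  # -> "1001": the machine matches and rewrites
                    # symbols; it never consults their meaning
```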

Searle's key argument against equating computers and programs with human brains and minds is that programs are purely formal, containing only syntax, and thus cannot be equated with mental processes. According to Searle, "[t]here is more to having a mind than having formal or syntactical processes" (29) [2]. Mental states have a certain content or meaning associated with their formal features. This is what Searle means when he says that human minds contain more than syntactical features: they have semantics. Computer programs, in contrast, are only formal sequences of zeroes and ones, purely syntactical, and cannot attach any meaning to those symbols. Thus, since minds are more than syntactical and computer programs are not, a computer program cannot be a mind. This distinction between syntax and semantics is the central concept of Searle's argumentation and is revisited multiple times throughout his work.

To further illustrate his point and argue that digital computers cannot attain consciousness simply by executing an appropriate program, Searle then introduces a thought experiment about a Chinese room, known today as the Chinese Room Argument and one of the most famous arguments in the philosophy of artificial intelligence. However, Searle himself never calls it an argument, but rather a thought experiment or a parable, inviting the reader to imagine the situation and share the experience with him in order to be convinced of his view. He paints a simple picture to illustrate the difference between syntax and semantics and poses rhetorical questions to make the reader think. By drawing an analogy between the hardware of a computer and an isolated person in a room, and between semantics and consciousness, Searle argues against reductionism: one cannot reduce semantics to syntax, or mental states to merely formal processes in our brain; they are not the same.

The setup is the following: a person who does not understand a single word of Chinese sits in a room filled with several baskets of Chinese symbols. The person is given a book, written in English, that contains rules for manipulating and rearranging the Chinese symbols, analogous to a computer program that contains commands for rearranging symbols in a certain fashion. Thus, the Chinese symbols are only manipulated in terms of their syntax, not their semantics. Searle now imagines that further Chinese symbols are passed into the room from outside, together with additional rules for passing Chinese symbols back out of the room. The incoming symbols are called "questions" by the people outside the room, while the symbols the person sends back out are called "answers to the questions" (30) [2]. Crucially, the rules for manipulating the Chinese symbols, implemented by the programmers, are supposed to be so good that the answers are indistinguishable from those a native speaker would give.
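To get a feel for what such a rulebook amounts to in computational terms, here is a deliberately crude sketch (my own, not Searle's; the phrases and responses are made up): a lookup table that maps incoming symbol strings to outgoing ones without ever touching their meaning.

```python
# A toy "rulebook" (illustrative sketch, not from Searle's text):
# incoming symbol strings are matched against stored patterns and
# answered with canned symbol strings. The Chinese phrases below are
# placeholders chosen for illustration.
RULEBOOK = {
    "你好吗": "我很好",          # "How are you?" -> "I am fine"
    "你叫什么名字": "我叫小明",  # "What is your name?" -> "I am Xiaoming"
}

def answer(question: str) -> str:
    """Return the rulebook's canned response, or a fixed fallback."""
    return RULEBOOK.get(question, "对不起")  # fallback: "Sorry"

print(answer("你好吗"))  # -> "我很好", produced by string matching alone
```

However good the table is made, the program only matches character strings; nothing in it corresponds to understanding.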

Searle now argues that if the person inside simply follows the rules, an outside observer will get the impression that he or she understands Chinese quite well, although that is not the case. By merely rearranging Chinese symbols and passing them out as answers to the incoming questions according to the English rulebook, the person has no chance of coming to understand Chinese. Formally manipulating the Chinese symbols provides no learning experience and no way of attaching meaning to these characters.

Starting from this scenario, Searle generalizes his conclusion: if following a computer program, i.e. the rulebook containing only the rules for rearranging the Chinese symbols, is not enough to give the person the slightest understanding of the foreign language, then, analogously, it will not be sufficient to give any digital computer an understanding of Chinese. If the person cannot understand Chinese this way, no digital computer can do so simply by executing a program, since the computer has nothing that the person did not have. Both the person and the computer have only a formal guideline for manipulating and rearranging opaque Chinese symbols, without any access to their meaning. In other words, computer programs contain only syntactical elements and never get at the semantics. Understanding a language, and likewise having consciousness, i.e. grasping the semantics, requires more than the ability to follow formal commands for shuffling symbols; it requires the ability to interpret and acquire the meaning of those symbols.

Another interesting analogy is the formalist program in mathematics associated with the German mathematician David Hilbert [3]. Its guiding idea was that mathematics could be reduced to rule-based manipulation of symbols: even without knowing the meaning of the numbers, i.e. the semantics, one could still do mathematics by reducing it to computation on symbols, i.e. the syntax, with only two possible states, true or false, one or zero. However, this program failed: Gödel's incompleteness theorems showed that any consistent formal system rich enough to express arithmetic contains true statements that cannot be proven within the system, so syntax alone is not enough to capture mathematics.

Following the introduction of the Chinese room, Searle addresses the question "Could a machine think?". According to Searle, all humans are machines, and all humans are able to think. So if a machine is just "a physical system which is capable of performing certain kinds of operations, […] [all humans] are […] machines" (33) [2], then machines can think. In the following, Searle works through several formulations of the question with respect to computers and ultimately concludes that "Can a digital computer, as defined, think?" can be restated as "Is instantiating or implementing the right computer program with the right inputs and outputs, sufficient for, or constitutive of, thinking?" (34) [2]. This question he answers with a "no", based on the argument elaborated earlier: a computer program is defined purely syntactically, whereas thinking requires the attachment of meaning, i.e. semantics. Already at this point, one can see that the distinction between syntax and semantics is the leitmotif of Searle's argumentative pathway.

Searle then draws a second important distinction, between simulation and duplication. He admits that the simulation of human behavior by artificial intelligence may improve further in the future, but, in his view, such simulations are irrelevant to the question of having mental states or a mind. If the object of investigation is indeed a computer, its operations must be defined purely formally and syntactically, yet consciousness and emotions require more than syntax. The latter two cannot be duplicated by the computer, by definition, no matter how elaborate its capacity to simulate, for "no simulation by itself ever constitutes duplication" (35) [2].

Finally, Searle challenges the question itself: why would anyone think that computer simulations of mental processes could actually have mental processes? The idea is anything but trivial. In approaching this question, there may be a hidden reference to Alan Turing's conception of artificial intelligence. Searle states that many people are still tempted by "behaviorism" and assume that a system must understand Chinese if it behaves as though it does. But Searle immediately rebuts this with his Chinese room argument.

It is nevertheless quite interesting to relate Searle's and Turing's accounts of artificial intelligence. In contrast to Searle, Turing was not even interested in defining what an intelligent machine would be and takes a completely different approach to framing the question. Searle's approach is quite direct: he poses the question "Can there be intelligent machines?". Turing, on the other hand, holds that the question of whether a computer could be intelligent does not really make sense and was not treatable in his time, but might only become addressable in the future. As an alternative, he introduces his imitation game (11) [4] to sidestep the direct question. Turing clearly states that if a machine behaved intelligently, an outside observer would have no way of deciding whether it really is intelligent. The only way for a person to reach a sound judgment about whether the machine also has semantics, and not only syntax, would be to be the computer itself.

Thus, there are two options: either we assume that if a computer, a machine, or anything else behaved intelligently, it would have to be intelligent, or, following Turing's proposal, we do not ask whether a machine is intelligent at all. The difference from Searle's approach becomes obvious here, as Searle clearly states that behaving intelligently and being intelligent are definitely not the same, and that one should distinguish between the two. While Turing tries to avoid the question of whether a computing machine could be intelligent, Searle tackles it explicitly and defends his view that computers cannot be intelligent, owing to the strict separation between syntax and semantics. A computer program, according to Searle, cannot perform the same tasks as the human mind; ultimately, every operation carried out by a computer comes down to deciding whether an incoming signal is zero or one.

Summing up, Searle finishes this chapter by drawing several conclusions from four premises. Firstly, he states that the mental processes we consider to constitute a mind are entirely caused by processes going on inside the human brain (37) [2]. One could criticize that he simply asserts this as a fact, although even today very little is known about the relationship between the brain and mental processes. This lack of argument continues in the subsequent premises, which he merely states and assumes to be true without concrete proof of their correctness. Of course, that is why he calls them premises and not facts, but the premises should still be justified in order to rest on a solid foundation and support sensible conclusions. Secondly, he repeats his central claim that one must distinguish between syntax and semantics, i.e. between formalities and meaning or content. Thirdly, he states that computer programs are defined solely by their syntax, and fourthly, he recalls that minds have semantic contents [2].

From those four premises, he concludes that a computer program cannot provide a system with a mind, implying that the idea of creating minds by designing computer programs is not, and never will be, possible. Furthermore, the way brains cause minds cannot be solely by running a computer program (38) [2]; thus, the brain is not just a computer, but there is more to it. Brains have biological functions, and the biology matters. For any system to cause a mind, it must have the same "causal powers" as the brain. Mental states are biological phenomena (38) [2], and likewise consciousness, emotions, and subjectivity are biological features, which a computer can never duplicate by simply running a program.

Finally, I personally agree with the claim that computers cannot be equated with brains and that a computer program cannot achieve what human minds can. Human minds are of enormous complexity and are barely understood. The physical structure of the brain is well known, but the causal relations between brain activity and mind are a field of research still in its infancy. There are many phenomena of human minds that cannot yet be rationally explained. Thus, one can agree with Searle's argument that there is more to the brain: there is biology that makes it unique in terms of its functions and capabilities, which cannot be achieved by computers. However, elaborating further on this topic quickly leads to a very profound discussion of life itself. What is life? How can a collection of elementary particles, i.e. electrons that orbit protons and neutrons in almost empty space, produce animals and "highly intelligent" species? How can nature use such basic building blocks to form creatures? I personally find this thought miraculous, incredible, and frightening at the same time. It shows us the strength and power of nature and clearly demonstrates the limitations of humanity.

Nevertheless, given that human minds do exist, combined with the knowledge that they are composed of a collection of atoms, and atoms of elements that are not even rare on earth, one cannot exclude the possibility that we might figure out a way to build computers able to perform analogous tasks. The building blocks are there; we just have to figure out how to put them together, right? This may be a rather romantic and simplified view of a very complex matter, but the main point I want to make is this: in my view, no one knows what the future holds. I agree with Searle that, at the moment, computers cannot get at the semantics. Computers, as we define and think of them today, are not equal to human brains, but nobody can predict how our world will look in a thousand, a million, or even a billion years. There might be technology that we cannot even dream of, and this technology might be comparable to human minds. When Searle wrote his text, he probably could not imagine smartphones and autonomous cars either, yet the advances of recent decades made them possible. And we are talking only about decades; what might happen if we add a couple of zeroes to the time available for technological improvement? Summing up, I agree with Searle for the present, but I am rather skeptical about his rigid, insular view of the future.

[1] Cole, D. The Chinese Room Argument. 2020. Stanford Encyclopedia of Philosophy.

[2] Searle, J. Minds, Brains, and Science. 1984. Harvard University Press, Cambridge, Massachusetts.

[3] Zach, R. Hilbert's Program. Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/hilbert-program/#3

[4] Turing, A.M. Computing Machinery and Intelligence. 1950. Mind 59(236): 433–460. Oxford University Press.

Author: Julia Specht
