Discuss ‘the Chinese room’ argument.
In 1980, John Searle sparked a widespread dispute with his paper 'Minds, Brains, and Programs' (Searle, 1980). The paper presented a thought experiment arguing against the possibility that computers could ever have genuine artificial intelligence (AI); in essence, a denial that machines will ever be able to think. Searle's argument rested on two key claims: that
“brains cause minds and syntax doesn’t suffice for semantics” (Searle, 1980, p.417).
Syntax in this instance refers to the computer language used to create a programme: a combination of code (illegible to the untrained eye) which provides the basis and commands for a programme running on a computer. Semantics refers to the study of meaning, or the understanding behind the use of language. Searle's claim was that it is the existence of a brain which gives us our minds and our intelligence, and that no combination of programming language is sufficient to give a machine meaning, and thereby to allow the machine to understand. On his view, the apparent understanding of a computer is merely a set of programmed rules, allowing the machine to extract answers from the available information. He did not deny that computers could be programmed to act as if they understand and have meaning; what he denied was the thesis of 'strong AI', which he summarised as the claim that:
“the computer is not merely a tool in the study of the mind, rather the appropriately programmed computer really is a mind in the sense that computers given the right programs can be literally said to understand and have other cognitive states” (Searle, 1980, p. 417).
Searle's argument was that we may be able to create machines with 'weak AI': we can programme a machine to behave as if it were thinking, to simulate thought and produce an apparent understanding. But the claim of 'strong AI' (that a machine can run on syntax alone and yet have cognitive states as humans do, understanding and producing answers on the basis of that cognitive understanding, so that it really has, or is, a mind (Chalmers, 1992)) is simply not tenable. A machine is unable to generate fundamental human states of mind such as intentionality, subjectivity and comprehension (ibid.). Searle's main argument for this notion is his 'Chinese room' thought experiment, which has attracted much deliberation and denunciation from fellow researchers, philosophers and psychologists. This paper aims to analyse the arguments, assess the counter-arguments and propose that Searle was right in his philosophy: machines will never think as humans do, and the issue comes down to the simple fact that a computer is neither human nor biological in nature, nor can it ever be.
In 1950, Alan Turing proposed a method of examining the intelligence of a machine which became known as 'the Turing test' (Turing, 1950). It describes a test which a machine must pass in order to be deemed intelligent. Searle (1980) argued that the test is fallible, in that a machine without intelligence is able to pass it. 'The Chinese room' is Searle's example of such a machine.
'The Chinese room' is what physicists term a 'thought experiment' (Reynolds and Kates, 1995): a hypothetical experiment which is not physically performed, often without any intention of its ever being executed. It was proposed by Searle as a way of illustrating his view that a machine will never be able to possess a mind. Searle (1980) asks us to envisage ourselves as a monolingual (speaking only one language) English speaker, locked inside a room with a large batch of Chinese writing together with a second batch of Chinese script. We are also given a set of rules in English which allow us to connect the first batch of writing with the second batch of script. The set of rules allows you to identify the first and...
References: Chalmers, D. 1992. 'Subsymbolic Computation and the Chinese Room'. In: J. Dinsmore (ed.), The Symbolic and Connectionist Paradigms: Closing the Gap. Hillsdale, NJ: Lawrence Erlbaum.
Harnad, S. 1989. 'Minds, machines and Searle'. Journal of Experimental and Theoretical Artificial Intelligence, 1, pp. 5-25.
Harnad, S. 1993. 'Grounding symbols in the analog world with neural nets'. Think, 2(1), pp. 12-78 (special issue on 'Connectionism versus Symbolism', D.M.W. Powers and P.A. Flach, eds.).
Hofstadter, D. 1980. 'Reductionism and religion'. Behavioral and Brain Sciences, 3(3), pp. 433-434.
Reynolds, G.H. and Kates, D.B. 1995. 'The second amendment and states' rights: a thought experiment'. William and Mary Law Review, 36, pp. 1737-73.
Searle, J. 1980. 'Minds, Brains, and Programs'. Behavioral and Brain Sciences, 3, pp. 417-424.
Searle, J. 1982. 'The Myth of the Computer: An Exchange'. New York Review of Books, 4, pp. 459-67.
Simon, H.A. and Eisenstadt, S.A. 2002. 'A Chinese Room that Understands'. In: J. Preston and M. Bishop (eds.), Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford: Clarendon Press, pp. 95-108.
Turing, A.M. 1950. 'Computing Machinery and Intelligence'. Mind, 59(236), pp. 433-460.