From Robert L. Nadeau's Mind, Machines, and Human Consciousness: Are There Limits to Artificial Intelligence? (Chicago: Contemporary, 1991):
How do you ascertain if there's someone inside? How can you find out?
The best known of [John] Searle's arguments against the prospect of a conscious computer is the Chinese Room thought experiment. Here we are asked to imagine that there is a man inside a room in which the only entrance is a door containing a small mailbox-like slot. Although the man reads and understands English, he neither reads nor understands Chinese. Inside the room are a large number of flashcards upon which are printed Chinese characters, and a book giving instructions in English about how to process the cards through the slot. As a card is passed into the room through the slot, the man looks at the Chinese character on the card, and then refers to his instruction book, where pictures of the characters appear along with a set of instructions. After he matches the character with the English instructions that indicate which cards within the room should be passed back through the slot and in what order, he follows these instructions. Although the cards being transferred into and out of the room contain meaningful information, such as answers to questions about recent political events, the man processing the cards knows nothing of this. Yet to people outside the room it could well appear that he does understand Chinese and is, therefore, making intelligent and conscious responses to all the questions.
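To make the purely rule-governed character of the man's task concrete, here is a minimal sketch (my own, not Nadeau's or Searle's) of such a lookup procedure. The rule book, the Chinese phrases, and the replies are invented placeholders, and a real rule book would be vastly larger; the point is only that nothing in the procedure requires knowing what any symbol means.

```python
# A minimal sketch of the room's purely syntactic procedure, assuming a toy
# "rule book" that maps an incoming Chinese question to the cards to pass back.
# The characters and replies here are illustrative placeholders, not Searle's.

RULE_BOOK = {
    "谁是总统？": ["现任", "总统", "是..."],  # "Who is the president?" -> reply cards
    "天气如何？": ["今天", "天气", "晴朗"],    # "How is the weather?"   -> reply cards
}

def process_card(card: str) -> list[str]:
    """Match the incoming card against the rule book and return the cards
    to pass back out, in order. No step involves knowing what any character
    means; it is pattern matching (syntax) all the way down."""
    return RULE_BOOK.get(card, ["不", "明白"])  # default reply: "do not understand"

if __name__ == "__main__":
    print(process_card("谁是总统？"))
```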
What Searle is trying to illustrate is that information processing in computers, identified in the thought experiment with the man sorting the cards, is accomplished only in terms of a set of rules, or in terms of syntax. There is no true intelligence or consciousness at work here, argues Searle, because semantics, or the forms in language that convey meaning in a complex relation of signs and symbols, cannot be reduced to a set of rules or to syntax alone. These assumptions then provide the basis for the following line of argumentation: (1) brains have minds; (2) minds have mental content, particularly semantic content; (3) syntax is not sufficient to constitute semantics; and (4) computer programs are entirely defined by their formal, or syntactic, structure. The conclusion is, therefore, that the computer can never in principle be conscious as human beings are conscious, because mind is an inescapable precondition for this kind of consciousness.
The most frequent rebuttal by the advocates of the AI evolution-of-consciousness hypothesis is that while the man inside the room may not understand Chinese, the entire system--consisting of the man, the flashcards, the rule book, etc.--does understand Chinese. The logical trick played by Searle is to place the entire system inside the brain of the man, and it is this which allows him to blur the distinction between simulation and duplication. The choice of such a strategy suggests, according to the advocates, that the philosopher is stuck in a meaningless quandary, nicely illustrated by Descartes's famous dictum "Cogito, ergo sum." The underlying assumption that makes for this quandary is that wherever there is thought, there must be an "I," or agent, that informs the thinking process. Another and more recent argument used to refute Searle's position is similar to that used to refute the position taken by Dreyfus. Although semantics might be virtually impossible to emulate on computers with top-down designs or architectures, it is conceivable that semantics could be emulated on bottom-up computer designs or architectures, where the associative aspects of semantics could be an emergent phenomenon (154-156).
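As a loose illustration of the top-down versus bottom-up contrast (again my own sketch, not from the book), compare the hand-written rule book above with a procedure whose associations are not programmed in advance but tallied from data; the corpus and the word associations below are toy examples, but they show how associative structure can emerge from exposure rather than from explicit rules.

```python
from collections import Counter, defaultdict

# Bottom-up sketch: word associations are not written as rules but tallied
# from a tiny, made-up corpus. Whatever associative structure appears is
# emergent from the data, in contrast with the hand-coded rule book above.
corpus = [
    "the cat chased the mouse",
    "the cat drank the milk",
    "the dog chased the cat",
]

cooccur: dict[str, Counter] = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i, word in enumerate(words):
        for other in words[:i] + words[i + 1:]:
            cooccur[word][other] += 1  # count how often words appear together

# Words most associated with "cat" -- an association no one wrote as a rule.
print(cooccur["cat"].most_common(3))
```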