
Minds, brains, and programs

Rafael Gonçalves
13/01/2022
John Searle, artificial intelligence, philosophy of mind, Chinese room

Reading notes on the article Minds, brains, and programs1 by John Searle.

Aim of the article: to draw out the consequences of the assumptions that (1) intentionality is a product of the causal features of the brain and (2) instantiating a program is never by itself a sufficient condition for intentionality

This article can be viewed as an attempt to explore the consequences of two propositions.

  • (1) Intentionality in human beings (and animals) is a product of causal features of the brain. I assume this is an empirical fact about the actual causal relations between mental processes and brains. It says simply that certain brain processes are sufficient for intentionality.
  • (2) Instantiating a computer program is never by itself a sufficient condition of intentionality. The main argument of this paper is directed at establishing this claim. The form of the argument is to show how a human agent could instantiate the program and still not have the relevant intentionality. (p. 417)

Consequences: (3) the intentionality produced by the brain cannot be explained as the instantiation of a program; (4) any mechanism capable of producing intentionality must have causal powers equal to those of the brain; and (5) strong AI depends on duplicating the brain's causal powers, not on designing programs

These two propositions have the following consequences:

  • (3) The explanation of how the brain produces intentionality cannot be that it does it by instantiating a computer program. This is a strict logical consequence of 1 and 2.
  • (4) Any mechanism capable of producing intentionality must have causal powers equal to those of the brain. This is meant to be a trivial consequence of 1.
  • (5) Any attempt literally to create intentionality artificially (strong AI) could not succeed just by designing programs but would have to duplicate the causal powers of the human brain. This follows from 2 and 4. (p. 417)

Strong AI has little to tell us about thinking (since that would depend on the machine, not on the program)

"Could a machine think?" On the argument advanced here only a machine could think, and only very special kinds of machines, namely brains and machines with internal causal powers equivalent to those of brains. And that is why strong AI has little to tell us about thinking, since it is not about machines but about programs, and no program by itself is sufficient for thinking. (p. 417)

Weak AI and strong AI

In answering this question, I find it useful to distinguish what I will call "strong" AI from "weak" or "cautious" AI (Artificial Intelligence). According to weak AI, the principal value of the computer in the study of the mind is that it gives us a very powerful tool. For example, it enables us to formulate and test hypotheses in a more rigorous and precise fashion. But according to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states. In strong AI, because the programmed computer has cognitive states, the programs are not mere tools that enable us to test psychological explanations; rather, the programs are themselves the explanations. (p. 417)

When I hereafter refer to AI, I have in mind the strong version, as expressed by these two claims. (p. 417)

Example: Schank's machine

Schank's machines can similarly answer questions about restaurants in this fashion. To do this, they have a "representation" of the sort of information that human beings have about restaurants, which enables them to answer such questions as those above, given these sorts of stories. When the machine is given the story and then asked the question, the machine will print out answers of the sort that we would expect human beings to give if told similar stories. Partisans of strong AI claim that in this question and answer sequence the machine is not only simulating a human ability but also

  1. that the machine can literally be said to understand the story and provide the answers to questions, and
  2. that what the machine and its program do explains the human ability to understand the story and answer questions about it. (p. 417)
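A minimal sketch (my own, not Schank's actual program) of what a script-based answerer of this kind could look like: a "script" stores the default sequence of restaurant events, and a question is answered by looking the event up in what the story states or, failing that, in the script's defaults. The event names, the answer function, and the hand-coded stated/blocked sets are hypothetical placeholders; a real system would have to derive them from parsed text.

```python
# Hypothetical sketch, not Schank's implementation: a script of default
# restaurant events plus table lookups. Nothing here involves knowing what a
# restaurant or a hamburger is.

RESTAURANT_SCRIPT = ["enter", "order", "serve", "eat", "pay", "leave"]  # default sequence

def answer(stated: set[str], blocked: set[str], event: str) -> str:
    """Did `event` happen? Check the story first, then fall back on the script."""
    if event in stated:
        return "yes"
    if event in blocked:
        return "no"
    return "yes" if event in RESTAURANT_SCRIPT else "unknown"

# Story: a man orders a hamburger, it arrives burned, he storms out without paying.
stated = {"enter", "order"}
blocked = {"eat", "pay"}   # assumed here; a parser would have to infer these
print(answer(stated, blocked, "eat"))    # -> "no"
print(answer(stated, blocked, "pay"))    # -> "no"
print(answer(stated, blocked, "serve"))  # -> "yes" (script default)
```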

The Chinese room experiment

One way to test any theory of the mind is to ask oneself what it would be like if my mind actually worked on the principles that the theory says all minds work on. (p. 417)

Let us apply this test to the Schank program with the following Gedankenexperiment. Suppose that I'm locked in a room and given a large batch of Chinese writing. Suppose furthermore (as is indeed the case) that I know no Chinese, either written or spoken, and that I'm not even confident that I could recognize Chinese writing as Chinese writing distinct from, say, Japanese writing or meaningless squiggles. To me, Chinese writing is just so many meaningless squiggles. (p. 417-8)

Now suppose further that after this first batch of Chinese writing I am given a second batch of Chinese script together with a set of rules for correlating the second batch with the first batch. The rules are in English, and I understand these rules as well as any other native speaker of English. They enable me to correlate one set of formal symbols with another set of formal symbols, and all that "formal" means here is that I can identify the symbols entirely by their shapes. (p. 418)

Now suppose also that I am given a third batch of Chinese symbols together with some instructions, again in English, that enable me to correlate elements of this third batch with the first two batches, and these rules instruct me how to give back certain Chinese symbols with certain sorts of shapes in response to certain sorts of shapes given me in the third batch. (p. 418)

Unknown to me, the people who are giving me all of these symbols call the first batch "a script," they call the second batch a "story," and they call the third batch "questions." Furthermore, they call the symbols I give them back in response to the third batch "answers to the questions," and the set of rules in English that they gave me, they call "the program." (p. 418)

Now just to complicate the story a little, imagine that these people also give me stories in English, which I understand, and they then ask me questions in English about these stories, and I give them back answers in English. (p. 418)

Suppose also that after a while I get so good at following the instructions for manipulating the Chinese symbols and the programmers get so good at writing the programs that from the external point of view - that is, from the point of view of somebody outside the room in which I am locked - my answers to the questions are absolutely indistinguishable from those of native Chinese speakers. Nobody just looking at my answers can tell that I don't speak a word of Chinese. (p. 418)

Let us also suppose that my answers to the English questions are, as they no doubt would be, indistinguishable from those of other native English speakers, for the simple reason that I am a native English speaker. (p. 418)

From the external point of view - from the point of view of someone reading my "answers" - the answers to the Chinese questions and the English questions are equally good. But in the Chinese case, unlike the English case, I produce the answers by manipulating uninterpreted formal symbols. As far as the Chinese is concerned, I simply behave like a computer; I perform computational operations on formally specified elements. For the purposes of the Chinese, I am simply an instantiation of the computer program. (p. 418)
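A minimal sketch of what "computational operations on formally specified elements" amounts to inside the room: a rule table correlating shapes with shapes. The particular symbols and rules are placeholders of my own; the point is only that no step refers to meaning.

```python
# Placeholder rules: "when handed this shape, hand back that shape".
RULES = {
    "壹": "甲",
    "貳": "乙",
    "參": "丙",
}

def room(incoming: str) -> str:
    """Match the incoming symbol purely by its shape and return the rule's output."""
    return RULES.get(incoming, "□")  # a default squiggle when no rule applies

# From outside the output may pass for an answer; inside, only shape matching occurred.
print(room("貳"))  # -> "乙"
```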

Consequence: Schank's machine does not understand the story

  1. As regards the first claim, it seems to me quite obvious in the example that I do not understand a word of the Chinese stories. I have inputs and outputs that are indistinguishable from those of the native Chinese speaker, and I can have any formal program you like, but I still understand nothing. For the same reasons, Schank's computer understands nothing of any stories, whether in Chinese, English, or whatever, since in the Chinese case the computer is me, and in cases where the computer is not me, the computer has nothing more than I have in the case where I understand nothing. (p. 418)

Consequence: the program does not explain human understanding

  2. As regards the second claim, that the program explains human understanding, we can see that the computer and its program do not provide sufficient conditions of understanding since the computer and the program are functioning, and there is no understanding. (p. 418)

Well, I suppose that is an empirical possibility, but not the slightest reason has so far been given to believe that it is true, since what is suggested - though certainly not demonstrated - by the example is that the computer program is simply irrelevant to my understanding of the story. In the Chinese case I have everything that artificial intelligence can put into me by way of a program, and I understand nothing; in the English case I understand everything, and there is so far no reason at all to suppose that my understanding has anything to do with computer programs, that is, with computational operations on purely formally specified elements. (p. 418)

As long as the program is defined in terms of computational operations on purely formally defined elements, what the example suggests is that these by themselves have no interesting connection with understanding. (p. 418)

Notice that the force of the argument is not simply that different machines can have the same input and output while operating on different formal principles - that is not the point at all. Rather, whatever purely formal principles you put into the computer, they will not be sufficient for understanding, since a human will be able to follow the formal principles without understanding anything. No reason whatever has been offered to suppose that such principles are necessary or even contributory, since no reason has been given to suppose that when I understand English I am operating with any formal program at all. (p. 418)

Difference between the two operations: in the first (Chinese) case one does not know what the symbols mean; in the second (English) case one does

Well, then, what is it that I have in the case of the English sentences that I do not have in the case of the Chinese sentences? The obvious answer is that I know what the former mean, while I haven't the faintest idea what the latter mean. (p. 418)

On the objection that "understanding" lacks a clear definition

There are clear cases in which "understanding" literally applies and clear cases in which it does not apply; and these two sorts of cases are all I need for this argument.2 I understand stories in English; to a lesser degree I can understand stories in French; to a still lesser degree, stories in German; and in Chinese, not at all. My car and my adding machine, on the other hand, understand nothing: they are not in that line of business. We often attribute "understanding" and other cognitive predicates by metaphor and analogy to cars, adding machines, and other artifacts, but nothing is proved by such attributions. (p. 419)

  1. Also, "understanding " implies both the possession of mental (intentional) states and the truth (validity, success) of these states. For the purposes of this discussion we are concerned only with the possession of the states. (p. 424)

We say, "The door knows when to open because of its photoelectric cell," "Theadding machine knows how (understands how, is able) to do addition and subtraction but not division," and "The thermostat perceives chances in the temperature." The reason wemake these attributions is quite interesting, and it has to do with the fact that in artifacts we extend our own intention-ality;3 our tools are extensions of our purposes, and so we findit natural to make metaphorical attributions of intentionality to them; but I take it no philosophical ice is cut by such examples. The sense in which an automatic door "understands instructions" from its photoelectric cell is not at all the sense in which I understand English. (p. 419)

But Newell and Simon (1963) write that the kind of cognition they claim for computers is exactly the same as for human beings. I like the straightforwardness of this claim, and it is the sort of claim I will be considering. I will argue that in the literal sense the programmed computer understands what the car and the adding machine understand, namely, exactly nothing. The computer understanding is not just (like my understanding of German) partial or incomplete; it is zero. (p. 419)

The systems reply (Berkeley)

I. The systems reply (Berkeley). "While it is true that the individual person who is locked in the room does not understand the story, the fact is that he is merely part of a whole system, and the system does understand the story. The person has a large ledger in front of him in which are written the rules, he has a lot of scratch paper and pencils for doing calculations, he has 'data banks' of sets of Chinese symbols. Now, understanding is not being ascribed to the mere individual; rather it is being ascribed to this whole system of which he is a part." (p. 419)

Comment: if the person internalizes the elements of the system, they still do not know Chinese

Doesn't Searle's response rest on an a priori refusal to admit that non-human elements can contribute to, or in fact carry out, "understanding"? The argument was not about the human being able to "incorporate" the elements: that would be to assume there is no difference between the system and its part.

On the other hand, Searle's argument compares the human to the execution of a program; if the system is expanded to the point of including a programmer, for example, the argument loses its force.

My response to the systems theory is quite simple: let the individual internalize all of these elements of the system. He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system. There isn't anything at all to the system that he does not encompass. We can even get rid of the room and suppose he works outdoors. All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn't anything in the system that isn't in him. If he doesn't understand, then there is no way the system could understand because the system is just a part of him. (p. 419)

According to one version of this view, while the man in the internalized systems example doesn't understand Chinese in the sense that a native Chinese speaker does (because, for example, he doesn't know that the story refers to restaurants and hamburgers, etc.), still "the man as a formal symbol manipulation system" really does understand Chinese. The subsystem of the man that is the formal symbol manipulation system for Chinese should not be confused with the subsystem for English. (p. 419)

So there are really two subsystems in the man; one understands English, the other Chinese, and "it's just that the two systems have little to do with each other." But, I want to reply, not only do they have little to do with each other, they are not even remotely alike. The subsystem that understands English (assuming we allow ourselves to talk in this jargon of "subsystems" for a moment) knows that the stories are about restaurants and eating hamburgers, he knows that he is being asked questions about restaurants and that he is answering questions as best he can by making various inferences from the content of the story, and so on. But the Chinese system knows none of this. Whereas the English subsystem knows that "hamburgers" refers to hamburgers, the Chinese subsystem knows only that "squiggle squiggle" is followed by "squoggle squoggle." All he knows is that various formal symbols are being introduced at one end and manipulated according to rules written in English, and other symbols are going out at the other end. The whole point of the original example was to argue that such symbol manipulation by itself couldn't be sufficient for understanding Chinese in any literal sense because the man could write "squoggle squoggle" after "squiggle squiggle" without understanding anything in Chinese. And it doesn't meet that argument to postulate subsystems within the man, because the subsystems are no better off than the man was in the first place; they still don't have anything even remotely like what the English-speaking man (or subsystem) has. Indeed, in the case as described, the Chinese subsystem is simply a part of the English subsystem, a part that engages in meaningless symbol manipulation according to rules in English. (p. 419)

On the Turing test being insufficient

The only motivation for saying there must be a subsystem in me that understands Chinese is that I have a program and I can pass the Turing test; I can fool native Chinese speakers. But precisely one of the points at issue is the adequacy of the Turing test. (p. 419)

The example shows that there could be two "systems," both of which pass the Turing test, but only one of which understands; and it is no argument against this point to say that since they both pass the Turing test they must both understand, since this claim fails to meet the argument that the system in me that understands English has a great deal more than the system that merely processes Chinese. (p. 419)

Question about the existence of information in non-cognitive systems (e.g., the stomach)

If we are to conclude that there must be cognition in me on the grounds that I have a certain sort of input and output and a program in between, then it looks like all sorts of noncognitive subsystems are going to turn out to be cognitive. For example, there is a level of description at which my stomach does information processing, and it instantiates any number of computer programs, but I take it we do not want to say that it has any understanding [cf. Pylyshyn: "Computation and Cognition" BBS 3(1) 1980] (p. 419-20)

But if we accept the systems reply, then it is hard to see how we avoid saying that stomach, heart, liver, and so on, are all understanding subsystems, since there is no principled way to distinguish the motivation for saying the Chinese subsystem understands from saying that the stomach understands. It is, by the way, not an answer to this point to say that the Chinese system has information as input and output and the stomach has food and food products as input and output, since from the point of view of the agent, from my point of view, there is no information in either the food or the Chinese - the Chinese is just so many meaningless squiggles. The information in the Chinese case is solely in the eyes of the programmers and the interpreters, and there is nothing to prevent them from treating the input and output of my digestive organs as information if they so desire. (p. 420)

Extension of the notion of mind to things. McCarthy on a thermostat having beliefs

In Latour's terms, we could think that, if not a belief, there are at least agencies in the thermostat (it facilitates rotation in one direction and not the other, it has limits of actuation, it allows continuous control of temperature, etc.)

But quite often in the AI literature the distinction is blurred in ways that would in the long run prove disastrous to the claim that AI is a cognitive inquiry. McCarthy, for example, writes, "Machines as simple as thermostats can be said to have beliefs, and having beliefs seems to be a characteristic of most machines capable of problem solving performance" (McCarthy 1979). (p. 420)

Think hard for one minute about what would be necessary to establish that that hunk of metal on the wall over there had real beliefs, beliefs with direction of fit, propositional content, and conditions of satisfaction; beliefs that had the possibility of being strong beliefs or weak beliefs; nervous, anxious, or secure beliefs; dogmatic, rational, or superstitious beliefs; blind faiths or hesitant cogitations; any kind of beliefs. The thermostat is not a candidate. Neither is stomach, liver, adding machine, or telephone. However, since we are taking the idea seriously, notice that its truth would be fatal to strong AI's claim to be a science of the mind. For now the mind is everywhere. What we wanted to know is what distinguishes the mind from thermostats and livers. And if McCarthy were right, strong AI wouldn't have a hope of telling us that. (p. 420)

The robot reply (Yale)

II. The robot reply (Yale). "Suppose we wrote a different kind of program from Schank's program. Suppose we put a computer inside a robot, and this computer would not just take in formal symbols as input and give out formal symbols as output, but rather would actually operate the robot in such a way that the robot does something very much like perceiving, walking, moving about, hammering nails, eating, drinking - anything you like. The robot would, for example, have a television camera attached to it that enabled it to 'see,' it would have arms and legs that enabled it to 'act,' and all of this would be controlled by its computer 'brain.' Such a robot would, unlike Schank's computer, have genuine understanding and other mental states." (p. 420)

Comment: adding sensors and actuators changes nothing about the problem

The first thing to notice about the robot reply is that it tacitly concedes that cognition is not solely a matter of formal symbol manipulation, since this reply adds a set of causal relations with the outside world [cf. Fodor: "Methodological Solipsism" BBS 3(1) 1980]. But the answer to the robot reply is that the addition of such "perceptual" and "motor" capacities adds nothing by way of understanding, in particular, or intentionality, in general, to Schank's original program. (p. 420)

Suppose that instead of the computer inside the robot, you put me inside the room and, as in the original Chinese case, you give me more Chinese symbols with more instructions in English for matching Chinese symbols to Chinese symbols and feeding back Chinese symbols to the outside. Suppose, unknown to me, some of the Chinese symbols that come to me come from a television camera attached to the robot and other Chinese symbols that I am giving out serve to make the motors inside the robot move the robot's legs or arms. It is important to emphasize that all I am doing is manipulating formal symbols: I know none of these other facts. I am receiving "information" from the robot's "perceptual" apparatus, and I am giving out "instructions" to its motor apparatus without knowing either of these facts. I am the robot's homunculus, but unlike the traditional homunculus, I don't know what's going on. I don't understand anything except the rules for symbol manipulation. Now in this case I want to say that the robot has no intentional states at all; it is simply moving about as a result of its electrical wiring and its program. And furthermore, by instantiating the program I have no intentional states of the relevant type. All I do is follow formal instructions about manipulating formal symbols. (p. 420)
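A sketch of the robot variant under the same assumptions: the rule-following in the middle is unchanged; only the source and destination of the symbols differ, and the man never learns that they encode camera images or motor commands. The names camera_feed and send_to_motors are illustrative stand-ins, not parts of any actual system.

```python
# Same shape-to-shape rules as before; the "perceptual" and "motor" wiring is
# external to the rule-following and invisible to the man inside.
RULES = {
    "㐀": "⿰",
    "㐁": "⿱",
}

def camera_feed():
    """Stands in for symbols arriving from the robot's television camera."""
    yield from ["㐀", "㐁", "㐀"]

def send_to_motors(symbol: str) -> None:
    """Stands in for the robot's motor apparatus consuming the output symbols."""
    print("to motors:", symbol)

for incoming in camera_feed():
    send_to_motors(RULES.get(incoming, "□"))
```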

The brain simulator reply (Berkeley and M.I.T.)

III. The brain simulator reply (Berkeley and M.I.T.). "Suppose we design a program that doesn't represent information that we have about the world, such as the information in Schank's scripts, but simulates the actual sequence of neuron firings at the synapses of the brain of a native Chinese speaker when he understands stories in Chinese and gives answers to them. The machine takes in Chinese stories and questions about them as input, it simulates the formal structure of actual Chinese brains in processing these stories, and it gives out Chinese answers as outputs. We can even imagine that the machine operates, not with a single serial program, but with a whole set of programs operating in parallel, in the manner that actual human brains presumably operate when they process natural language. Now surely in such a case we would have to say that the machine understood the stories; and if we refuse to say that, wouldn't we also have to deny that native Chinese speakers understood the stories? At the level of the synapses, what would or could be different about the program of the computer and the program of the Chinese brain?" (p. 420)

Comment: wouldn't understanding how the brain works go against the idea that mental processes are "software" independent of the "hardware"? That is, something that could then be replicated by a strong AI.

Before countering this reply I want to digress to note that it is an odd reply for any partisan of artificial intelligence (or functionalism, etc.) to make: I thought the whole idea of strong AI is that we don't need to know how the brain works to know how the mind works. The basic hypothesis, or so I had supposed, was that there is a level of mental operations consisting of computational processes over formal elements that constitute the essence of the mental and can be realized in all sorts of different brain processes, in the same way that any computer program can be realized in different computer hardwares: on the assumptions of strong AI, the mind is to the brain as the program is to the hardware, and thus we can understand the mind without doing neurophysiology. If we had to know how the brain worked to do AI, we wouldn't bother with AI. (p. 421)

Comment: replicating synapses would not be sufficient for understanding

However, even getting this close to the operation of the brain is still not sufficient to produce understanding. To see this, imagine that instead of a monolingual man in a room shuffling symbols we have the man operate an elaborate set of water pipes with valves connecting them. When the man receives the Chinese symbols, he looks up in the program, written in English, which valves he has to turn on and off. Each water connection corresponds to a synapse in the Chinese brain, and the whole system is rigged up so that after doing all the right firings, that is after turning on all the right faucets, the Chinese answers pop out at the output end of the series of pipes. (p. 421)

The problem with the brain simulator is that it is simulating the wrong things about the brain. As long as it simulates only the formal structure of the sequence of neuron firings at the synapses, it won't have simulated what matters about the brain, namely its causal properties, its ability to produce intentional states. And that the formal properties are not sufficient for the causal properties is shown by the water pipe example: we can have all the formal properties carved off from the relevant neurobiological causal properties. (p. 421)

The combination reply (Berkeley and Stanford)

IV. The combination reply (Berkeley and Stanford). "While each of the previous three replies might not be completely convincing by itself as a refutation of the Chinese room counterexample, if you take all three together they are collectively much more convincing and even decisive. Imagine a robot with a brain-shaped computer lodged in its cranial cavity, imagine the computer programmed with all the synapses of a human brain, imagine the whole behavior of the robot is indistinguishable from human behavior, and now think of the whole thing as a unified system and not just as a computer with inputs and outputs. Surely in such a case we would have to ascribe intentionality to the system." (p. 421)

Comment: apparent similarity to humans does not imply intentionality

But I really don't see that this is any help to the claims of strong AI; and here's why: According to strong AI, instantiating a formal program with the right input and output is a sufficient condition of, indeed is constitutive of, intentionality. As Newell (1979) puts it, the essence of the mental is the operation of a physical symbol system. But the attributions of intentionality that we make to the robot in this example have nothing to do with formal programs. They are simply based on the assumption that if the robot looks and behaves sufficiently like us, then we would suppose, until proven otherwise, that it must have mental states like ours that cause and are expressed by its behavior and it must have an inner mechanism capable of producing such mental states. If we knew independently how to account for its behavior without such assumptions we would not attribute intentionality to it, especially if we knew it had a formal program. And this is precisely the point of my earlier reply to objection II. (p. 421)

Suppose we knew that the robot's behavior was entirely accounted for by the fact that a man inside it was receiving uninterpreted formal symbols from the robot's sensory receptors and sending out uninterpreted formal symbols to its motor mechanisms, and the man was doing this symbol manipulation in accordance with a bunch of rules. Furthermore, suppose the man knows none of these facts about the robot, all he knows is which operations to perform on which meaningless symbols. In such a case we would regard the robot as an ingenious mechanical dummy. (p. 421)

To see this point, contrast this case with cases in which we find it completely natural to ascribe intentionality to members of certain other primate species such as apes and monkeys and to domestic animals such as dogs. The reasons we find it natural are, roughly, two: we can't make sense of the animal's behavior without the ascription of intentionality, and we can see that the beasts are made of similar stuff to ourselves - that is an eye, that a nose, this is its skin, and so on. Given the coherence of the animal's behavior and the assumption of the same causal stuff underlying it, we assume both that the animal must have mental states underlying its behavior, and that the mental states must be produced by mechanisms made out of the stuff that is like our stuff. We would certainly make similar assumptions about the robot unless we had some reason not to, but as soon as we knew that the behavior was the result of a formal program, and that the actual causal properties of the physical substance were irrelevant we would abandon the assumption of intentionality. [See "Cognition and Consciousness in Nonhuman Species" BBS 1(4) 1978.] (p. 421)

The other minds reply (Yale)

Similar to A. Turing's objection, which considers solipsist whoever does not attribute feelings to the machine but does attribute them to other humans (without being able to be in another human's body and, therefore, to feel) (?)

V. The other minds reply (Yale). "How do you know that other people understand Chinese or anything else? Only by their behavior. Now the computer can pass the behavioral tests as well as they can (in principle), so if you are going to attribute cognition to other people you must in principle also attribute it to computers." (p. 421)

Comment: the cognitive sciences presuppose the existence of mental states (in the human)

Why presuppose a mind in the human but not in the machine? "Common sense"? Although convincing given everyday assumptions, it does not seem to me that the question has been answered, especially considering the possibility of the solipsist hypothesis.

The problem in this discussion is not about how I know that other people have cognitive states, but rather what it is that I am attributing to them when I attribute cognitive states to them. The thrust of the argument is that it couldn't be just computational processes and their output because the computational processes and their output can exist without the cognitive state. It is no answer to this argument to feign anesthesia. In "cognitive sciences" one presupposes the reality and knowability of the mental in the same way that in physical sciences one has to presuppose the reality and knowability of physical objects. (p. 421-2)

The many mansions reply (Berkeley)

VI. The many mansions reply (Berkeley). "Your whole argument presupposes that AI is only about analogue and digital computers. But that just happens to be the present state of technology. Whatever these causal processes are that you say are essential for intentionality (assuming you are right), eventually we will be able to build devices that have these causal processes, and that will be artificial intelligence. So your arguments are in no way directed at the ability of artificial intelligence to produce and explain cognition." (p. 422)

Comment: broadening of the strong AI question

I really have no objection to this reply save to say that it in effect trivializes the project of strong AI by redefining it as whatever artificially produces and explains cognition. The interest of the original claim made on behalf of artificial intelligence is that it was a precise, well defined thesis: mental processes are computational processes over formally defined elements. I have been concerned to challenge that thesis. If the claim is redefined so that it is no longer that thesis, my objections no longer apply because there is no longer a testable hypothesis for them to apply to. (p. 422)

To assume the possibility of materially replicating understanding is not to identify mental processes with computational processes

I see no reason in principle why we couldn't give a machine the capacity to understand English or Chinese, since in an important sense our bodies with our brains are precisely such machines. But I do see very strong arguments for saying that we could not give such a thing to a machine where the operation of the machine is defined solely in terms of computational processes over formally defined elements; that is, where the operation of the machine is defined as an instantiation of a computer program. (p. 422)

It is not because I am the instantiation of a computer program that I am able to understand English and have other forms of intentionality (I am, I suppose, the instantiation of any number of computer programs), but as far as we know it is because I am a certain sort of organism with a certain biological (i.e. chemical and physical) structure, and this structure, under certain conditions, is causally capable of producing perception, action, understanding, learning, and other intentional phenomena. (p. 422)

Insufficiency of a formal model for replicating intentionality

But the main point of the present argument is that no purely formal model will ever be sufficient by itself for intentionality because the formal properties are not by themselves constitutive of intentionality, and they have by themselves no causal powers except the power, when instantiated, to produce the next stage of the formalism when the machine is running. (p. 422)

And any other causal properties that particular realizations of the formal model have, are irrelevant to the formal model because we can always put the same formal model in a different realization where those causal properties are obviously absent. Even if, by some miracle, Chinese speakers exactly realize Schank's program, we can put the same program in English speakers, water pipes, or computers, none of which understand Chinese, the program notwithstanding. (p. 422)

What matters about brain operations is not the formal shadow cast by the sequence of synapses but rather the actual properties of the sequences. All the arguments for the strong version of artificial intelligence that I have seen insist on drawing an outline around the shadows cast by cognition and then claiming that the shadows are the real thing. (p. 422)

Can a machine think?

"Could a machine think?" The answer is, obviously, yes. We are precisely such machines. "Yes, but could an artifact, a man-made machine, think?" Assuming it is possible to produce artificially a machine with a nervous system, neurons with axons and dendrites, and all the rest of it, sufficiently like ours, again the answer to the question seems to be obviously, yes. If you can exactly duplicate the causes, you could duplicate the effects. And indeed it might be possible to produce consciousness, inten- tionality, and all the rest of it using some other sorts of chemical principles than those that human beings use. It is, as I said, an empirical question. (p. 422)

Can a digital computer think?

"OK, but could a digital computer think?" If by "digital computer" we mean anything at all that has a level of description where it can correctly be described as the instantiation of a computer program, then again the answer is, of course, yes, since we are the instantiations of any number of computer programs, and we can think.

"But could something think, understand, and so on solely in virtue of being a computer with the right sort of program? Could instantiating a program, the right program of course, by itself be a sufficient condition of understanding?" This I think is the right question to ask, though it is usually confused with one or more of the earlier questions, and the answer to it is no.

"Why not?" Because the formal symbol manipulations by themselves don't have any intentionality; they are quite meaningless; they aren't even symbol manipulations, since the symbols don't symbolize anything. In the linguistic jargon, they have only a syntax but no semantics. Such intentionality as computers appear to have is solely in the minds of those who program them and those who use them, those who send in the input and those who interpret the output.

The aim of the Chinese room example was to try to show this by showing that as soon as we put something into the system that really does have intentionality (a man), and we program him with the formal program, you can see that the formal program carries no additional intentionality. It adds nothing, for example, to a man's ability to understand Chinese. (p. 422)

Difference between the program and its realization

Precisely that feature of AI that seemed so appealing - the distinction between the program and the realization - proves fatal to the claim that simulation could be duplication. The distinction between the program and its realization in the hardware seems to be parallel to the distinction between the level of mental operations and the level of brain operations. And if we could describe the level of mental operations as a formal program, then it seems we could describe what was essential about the mind without doing either introspective psychology or neurophysiology of the brain. (p. 422-3)

Problems with the analogy mind : brain :: program : hardware

But the equation, "mind is to brain as program is to hardware" breaks down at several points, among them the following three: (p. 423)

A computer could be made out of stones. Indeed, if electrical circuitry could make up an intentional being, then probably any heap of matter could.

First, the distinction between program and realization has the consequence that the same program could have all sorts of crazy realizations that had no form of intentionality. Weizenbaum (1976, Ch. 2), for example, shows in detail how to construct a computer using a roll of toilet paper and a pile of small stones. Similarly, the Chinese story understanding program can be programmed into a sequence of water pipes, a set of wind machines, or a monolingual English speaker, none of which thereby acquires an understanding of Chinese. Stones, toilet paper, wind, and water pipes are the wrong kind of stuff to have intentionality in the first place - only something that has the same causal powers as brains can have intentionality - and though the English speaker has the right kind of stuff for intentionality you can easily see that he doesn't get any extra intentionality by memorizing the program, since memorizing it won't teach him Chinese. (p. 423)

A program is purely formal. Intentional states are defined by their content (assuming materialism, what would the difference between form and content be? isn't the counterargument precisely to say that the form (brain) determines the content (consciousness)?). How could we restate this problem in light of Simondon's concept of information?

Second, the program is purely formal, but the intentional states are not in that way formal. They are defined in terms of their content, not their form. The belief that it is raining, for example, is not defined as a certain formal shape, but as a certain mental content with conditions of satisfaction, a direction of fit (see Searle 1979), and the like. Indeed the belief as such hasn't even got a formal shape in this syntactic sense, since one and the same belief can be given an indefinite number of different syntactic expressions in different linguistic systems. (p. 423)

Assuming materialism, the mind would be a product of the brain. But a computer program is not produced by the hardware. (But isn't programming, at bottom, changing bits that represent electrical connections, i.e., wouldn't we be acting on the hardware that produces the program?)

Third, as I mentioned before, mental states and events are literally a product of the operation of the brain, but the program is not in that way a product of the computer. (p. 423)

Simulation != duplication

A simulation is confined to the simulated reality; "real" intentionality would not be possible, only simulated (apparent?) intentionality.

The idea that computer simulations could be the real thing ought to have seemed suspicious in the first place because the computer isn't confined to simulating mental operations, by any means. No one supposes that computer simulations of a five-alarm fire will burn the neighborhood down or that a computer simulation of a rainstorm will leave us all drenched. Why on earth would anyone suppose that a computer simulation of understanding actually understood anything? It is sometimes said that it would be frightfully hard to get computers to feel pain or fall in love, but love and pain are neither harder nor easier than cognition or anything else. For simulation, all you need is the right input and output and a program in the middle that transforms the former into the latter. That is all the computer has for anything it does. To confuse simulation with duplication is the same mistake, whether it is pain, love, cognition, fires, or rainstorms. (p. 423)

Reduction of the human brain to information processing (and of information processing to symbol manipulation)

First, and perhaps most important, is a confusion about the notion of "information processing": many people in cognitive science believe that the human brain, with its mind, does something called "information processing," and analogously the computer with its program does information processing; but fires and rainstorms, on the other hand, don't do information processing at all. Thus, though the computer can simulate the formal features of any process whatever, it stands in a special relation to the mind and brain because when the computer is properly programmed, ideally with the same program as the brain, the information processing is identical in the two cases, and this information processing is really the essence of the mental. (p. 423)

But the trouble with this argument is that it rests on an ambiguity in the notion of "information." In the sense in which people "process information" when they reflect, say, on problems in arithmetic or when they read and answer questions about stories, the programmed computer does not do "information processing." Rather, what it does is manipulate formal symbols. The fact that the programmer and the interpreter of the computer output use the symbols to stand for objects in the world is totally beyond the scope of the computer. (p. 423)

The introduction of the notion of "information processing" therefore produces a dilemma: either we construe the notion of "information processing" in such a way that it implies intentionality as part of the process or we don't. If the former, then the programmed computer does not do information processing, it only manipulates formal symbols. If the latter, then, though the computer does information processing, it is only doing so in the sense in which adding machines, typewriters, stomachs, thermostats, rainstorms, and hurricanes do information processing; namely, they have a level of description at which we can describe them as taking information in at one end, transforming it, and producing information as output. But in this case it is up to outside observers to interpret the input and output as information in the ordinary sense. And no similarity is established between the computer and the brain in terms of any similarity of information processing. (p. 423)

The computer has syntax, not semantics

The computer, to repeat, has a syntax but no semantics. (p. 423)

Thus, if you type into the computer "2 plus 2 equals?" it will type out "4." But it has no idea that "4" means 4 or that it means anything at all. And the point is not that it lacks some second-order information about the interpretation of its first-order symbols, but rather that its first-order symbols don't have any interpretations as far as the computer is concerned. All the computer has is more symbols. (p. 423)
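A tiny sketch of the "syntax without semantics" point: the program below returns the string "4" for the string "2 plus 2 equals?" by lookup alone, with no arithmetic and nothing that stands for the number four. The lookup table is my own illustration, not any particular system; the interpretation of "4" as four is supplied entirely by whoever reads the output.

```python
RESPONSES = {
    "2 plus 2 equals?": "4",
    "3 plus 3 equals?": "6",
}

def reply(prompt: str) -> str:
    # Pure string matching: the "symbols" are never interpreted by the program.
    return RESPONSES.get(prompt, "?")

print(reply("2 plus 2 equals?"))  # prints "4" without any notion of 4
```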

Behaviorism and operationalism in the field of AI

Second, in much of AI there is a residual behaviorism or operationalism. Since appropriately programmed computers can have input-output patterns similar to those of human beings, we are tempted to postulate mental states in the computer similar to human mental states. But once we see that it is both conceptually and empirically possible for a system to have human capacities in some realm without having any intentionality at all, we should be able to overcome this impulse. My desk adding machine has calculating capacities, but no intentionality, and in this paper I have tried to show that a system could have input and output capabilities that duplicated those of a native Chinese speaker and still not understand Chinese, regardless of how it was programmed. The Turing test is typical of the tradition in being unashamedly behavioristic and operationalistic, and I believe that if AI workers totally repudiated behaviorism and operationalism much of the confusion between simulation and duplication would be eliminated. (p. 423)

Strong AI depends on a mind-brain dualism

Third, this residual operationalism is joined to a residual form of dualism; indeed strong AI only makes sense given the dualistic assumption that, where the mind is concerned, the brain doesn't matter. In strong AI (and in functionalism, as well) what matters are programs, and programs are independent of their realization in machines; indeed, as far as AI is concerned, the same program could be realized by an electronic machine, a Cartesian mental substance, or a Hegelian world spirit. The single most surprising discovery that I have made in discussing these issues is that many AI workers are quite shocked by my idea that actual human mental phenomena might be dependent on actual physical-chemical properties of actual human brains. But if you think about it a minute you can see that I should not have been surprised; for unless you accept some form of dualism, the strong AI project hasn't got a chance. The project is to reproduce and explain the mental by designing programs, but unless the mind is not only conceptually but empirically independent of the brain you couldn't carry out the project, for the program is completely independent of any realization. (p. 423-4)

Unless you believe that the mind is separable from the brain both conceptually and empirically - dualism in a strong form - you cannot hope to reproduce the mental by writing and running programs since programs must be independent of brains or any other particular forms of instantiation. If mental operations consist in computational operations on formal symbols, then it follows that they have no interesting connection with the brain; the only connection would be that the brain just happens to be one of the indefinitely many types of machines capable of instantiating the program. This form of dualism is not the traditional Cartesian variety that claims there are two sorts of substances, but it is Cartesian in the sense that it insists that what is specifically mental about the mind has no intrinsic connection with the actual properties of the brain. This underlying dualism is masked from us by the fact that AI literature contains frequent fulminations against "dualism"; what the authors seem to be unaware of is that their position presupposes a strong version of dualism. (p. 424)

Intentionality as a biological phenomenon

Whatever else intentionality is, it is a biological phenomenon, and it is as likely to be as causally dependent on the specific biochemistry of its origins as lactation, photosynthesis, or any other biological phenomena. No one would suppose that we could produce milk and sugar by running a computer simulation of the formal sequences in lactation and photosynthesis, but where the mind is concerned many people are willing to believe in such a miracle because of a deep and abiding dualism: the mind they suppose is a matter of formal processes and is independent of quite specific material causes in the way that milk and sugar are not. (p. 424)


  1. SEARLE, J. R. Minds, brains, and programs. Behavioral and Brain Sciences, v. 3, n. 3, p. 417–424, 1980.