John R. Searle's "Minds, Brains, and Programs" - My Response


Fritz Freiheit

June 17th, 1996

1. Searle initially makes a misleading claim: "that the appropriately programmed computer literally has cognitive states and that the programs thereby explain human cognition." Not so: if the programs do have cognitive states, they explain only their own cognitive states.

2. The claim that "Searle as computer" (i.e. Searle as part of a system) does not understand the stories given to it in Chinese is again misleading. What do we mean by "understand"? If we ask, "if it exists, where does the understanding reside?" there is a simple answer: it resides in the "Searle as computer" system, i.e. in the combination of Searle and the program.

3. The statement "In the Chinese case I have everything that artificial intelligence can put into me by way of a program, and I understand nothing; in the English case I understand everything, and there is so far no reason at all to suppose that my understanding has anything to do with computer programs, that is, with computational operations on purely formally specified elements." Here again Searle mixes things up: he is claiming no understanding purely for himself, not for the system of which he is a component. This claim is as absurd as a neuron claiming not to understand a conversation that the entire brain is participating in. In addition, Searle's claim to complete understanding of the English case is also misleading: he does not understand the brain states (in a mechanistic sense) that he is in during the English conversation, which is to say, he cannot tell whether or not he is carrying out purely formal symbol manipulation in order to have the conversation in English. (Half an argument is not a substitute for a whole argument!)

4. Searle's argument continues in this vein from here. He persists in attempting to fully equate "Searle as part of a system" with "Searle", but the two are not equivalent. One can easily see that "Searle as part of a system" is differently enabled.

5. As for the "System Reply": by internalizing all the rules, etc. by memorizing them, the individual doing so continues not to understand the stories in Chinese. This is the case because the process of internalizing the system as described by Searle has no effect on the role of the person as computer. That person remains a subcomponent of the system, i.e. the system is still the person as interpreter of the rules plus the rules themselves. ("The system is just part of him." Nonsense; that is just like saying that a program running on a computer is just part of the computer.)
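
To make the structural point concrete, here is a minimal Python sketch of the system's anatomy (the rule table and dialogue are invented placeholders of mine, not a serious model of Chinese processing):

    # A minimal sketch of the System Reply's structure: the interpreter
    # (standing in for the person) and the rule table (the program) are
    # distinct components, and the system's behavior belongs to their
    # combination. The rules below are invented placeholders.
    RULES = {
        "你好": "你好！",          # placeholder stimulus -> response pairs
        "你懂中文吗？": "懂。",
    }

    def interpreter(symbol: str, rules: dict) -> str:
        """Blindly match an input symbol against the rule table.

        The interpreter never inspects meaning; it only pattern-matches,
        just as the person in the room matches shapes to shapes.
        """
        return rules.get(symbol, "？")

    print(interpreter("你好", RULES))  # -> 你好！

Memorizing the rules only changes where the table is stored; the person-as-interpreter remains a subcomponent, and any "understanding" is a property of interpreter plus rules combined.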

6. As to the implausibility of this answer: well, who cares? One can make implausibility arguments about the whole enterprise of writing a program that understands Chinese and is executed by a person. But be that as it may.

7. Continued talk of "sub-systems" muddies the waters.

8. The point is, can Searle *prove* that he understands English any better than "Searle as part of a system" can prove that it understands Chinese? No. So one is left thinking that in one case we "know" (since we too are members of the same class of entities, i.e. humans) that Searle understands English (without really knowing how that comes about), while we really don't know anything about "Searle as part of a system", since we are not members of such a class of entities.

9. The stomach, heart, etc., offered as candidate cognitive systems, seem more like counter-examples that point to a possible defining characteristic of cognitive systems, namely symbol manipulation.

10. Searle's argument about belief attribution is again misleading. He fails to take note of the critical qualifier in McCarthy's statement: "thermostats can be said to have beliefs" is not the same as "thermostats have beliefs". (This is an argument from absurdity, which fails because it does not make a strong link between thermostats or livers and computers and brains, but instead relies on weak surface similarities.)

11. Searle is correct that the "Robot Reply" adds nothing to the discussion. He is incorrect that it "tacitly concedes that cognition is not solely a matter of formal symbol manipulation", since all inputs to the robot can be converted into symbols before processing. Likewise, there is no way to make a definitive statement about whether human cognition depends on the conversion of perceptual inputs into some formal symbol system. Thus no progress is made here.

12. In the digression on the Brain Simulator Reply, Searle states that the whole idea of simulating the human brain at the level of neuron firings is in direct opposition to the primary goal of strong AI, i.e. functionalism. He goes as far as to say "If we had to know how the brain worked to do AI, we wouldn't bother with AI." That may be his viewpoint, but it is certainly not true of every AI researcher, or for that matter of non-AI researchers. (There are practical reasons one might want to create an artificial intelligence that have nothing to do with how the intelligence is derived.)

13. Searle then introduces the idea of replacing the system of paper and lookup tables with a system of water pipes and valves, and asks where the understanding is in that system. Again, where is the understanding in a human? Searle thinks it absurd that the conjunction of the man and the manipulation of the water pipes understands, and talks about internalizing it, in so far as the man could simulate the water pipe system in his imagination. This leads Searle to his real objection to the system: that there are some unstated causal properties of the human mind that have nothing to do with the formal structure of the sequence of neuron firings at the synapses, so that the simulation "won't have simulated what matters about the brain, namely its causal properties, its ability to produce intentional states." How?! This is pure opinion, and has no more foundation in fact than the statement that some non-material spirit (i.e. a soul), rather than the physical structure and processes of the brain, is the real foundation of intelligence, intentionality, etc. I'm not saying that either view is correct, just that you can't prove it. (See "An Unfortunate Dualist" by Raymond Smullyan.) In fact, Searle is caught in a worse trap, in that his argument is circular. This can be seen in his statement "And that the formal properties are not sufficient for the causal properties is shown by the water pipe example: we can have all the formal properties carved off from the relevant neurobiological causal properties." (Can we?)
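
To illustrate what "carving off" the formal properties amounts to, here is a small Python sketch in which one invented rule set is executed by two different mechanisms (the rules and the "pipe" framing are placeholders of mine, not anything from Searle's text):

    # A sketch of substrate independence: the same formal rule set is
    # executed by two different mechanisms, a hash-table "brain" and a
    # linear-scan "water-pipe" system. The rules are invented placeholders.
    RULES = [("A", "B"), ("B", "C"), ("C", "A")]

    def lookup_substrate(state: str) -> str:
        """Substrate 1: dictionary lookup (fast, electronic)."""
        return dict(RULES)[state]

    def pipe_substrate(state: str) -> str:
        """Substrate 2: scan each 'pipe' in turn until a valve matches."""
        for valve, outlet in RULES:   # each tuple plays the role of pipe + valve
            if valve == state:
                return outlet
        raise ValueError(state)

    # Both substrates compute the identical transition function.
    assert all(lookup_substrate(s) == pipe_substrate(s) for s, _ in RULES)

Formally, the two substrates are the same function; whatever distinguishes them is causal and physical, not formal, which is exactly the distinction Searle's claim needs but does not demonstrate.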

14. The "Combination Reply" does not really change anything except to introduce the idea of the "strong Turing test", that is to say that if a robot's behavior is indistinguishable from a humans behavior, then, barring other evidence, it is attributed with intentionality and mental states. But, Searle states, "If we knew independently how to account for its behavior [i.e., that it is a robot with a formal program] without such assumptions we would not attribute intentionality to it, especially if we know it had a formal program." So what is the difference between such a robot and a human? Searle does not answer this, but assumes that we will fill in such assumptions about humans vs. programs that we are inclined to make already having told us we are silly if we do not come to the same conclusion that he does. Again, there is nothing here that proves anything, nor is even particularly convincing.

15. Once again, as part of Searle's refutation of the Combination Reply, we are told that there is no intentionality in the system except that of the man manipulating the (meaningless) symbols.

16. The attribution of intentionality to animals, Searle argues, is made for reasons that feel natural to us: we can't make sense of an animal's behavior without attributing intentionality, and animals are made of stuff similar to ourselves. Both of these "natural reasons" are essentially based on ignorance and have no scientific or logical basis. While it is true that these observations do not contradict what we "know" about ourselves and the world (and can be applied to our observations of other humans), the simple fact that they do not disturb our sense of the way things work does not make them adequate explanations. Would Searle make the same arguments about an "organic robot" constructed of "stuff similar to our own" yet programmed with a formal symbol system? Would it make sense to do so?

17. In response to the "Other Minds Reply" Searle makes the statement "The thrust of the argument is that it couldn't be just computational processes and their output because the computational processes and their output exist without cognitive states.", which is completely unsupported. Simply because we cannot find a mechanism in the human brain corresponding to the computer does not mean that no such mechanism exists, just as Searle argues that finding a formal system of symbol manipulation does not mean that mental states exist within that formal system. Having a formal system of symbol manipulation does not preclude the existence of mental states, nor does the existence of mental states preclude a formal system of symbol manipulation as their foundation.

18. Searle's response to the "Many Mansions Reply" is not arguable, or at least is not relevant to whether formal systems of symbol manipulation are capable of supporting intentionality or mental states. (But I disagree with his claim that such a system is necessarily a requirement for strong AI.)

20. Unfortunately, Searle is hoisted by his own petard in his response to the "Many Mansions Reply", as he states that the change of thesis this reply entails makes the thesis untestable. Very well, I'm inclined to agree with that. But let's turn the question around: how does Searle propose to test his hypothesis that formal systems as he describes them are not capable of intentionality or of mental states? For that matter, how do we test for the existence of intentionality or mental states in human brains?

21. Searle asks "..., still there must be something about me that makes it the case that I understand English and a corresponding something lacking in me that makes it the case that I fail to understand Chinese [in the Chinese Room]." To which there is an easy answer: in the one case Searle has enough information (data, mental states, whatever) to understand English, and in the other he does not; only the system of "Searle as computer" in conjunction with the "program for Chinese" has enough information (to apparently understand Chinese).

22. Searle continues his arguments about the special characteristics of the biological system that is Searle, which "under certain conditions, is causally capable of producing perception, action, understanding, learning, and other intentional phenomena." Unfortunately this remains a hand wave, as Searle makes no attempt to give us any way of testing or measuring these qualities in the Searle/human system.

23. Searle says that only systems that have causal powers could have intentionality, so, one is to suppose that if one could prove that a formal system of symbol manipulation had these powers, it would by definition be capable of understanding. But how do we test this?

24. In the conclusion Searle answers the question "[Could] something think, understand, and so on solely in virtue of being a computer with the right sort of program? Could instantiating a program, the right program of course, by itself be a sufficient condition of understanding?" by saying "No." He then reiterates his argument vis-à-vis the lack of meaning in computers with respect to formal symbol systems (i.e. that the observer/programmer of the system grants all meaning). This is almost certainly the case currently, but it does not have to be the case for every such formal system. One can make a strong argument that, at some level, the human brain is simply a system whose components locally contain no intentionality and no mental states, and that only at the level of the system do these properties emerge.

25. Searle uses a distracting argument about digital computers constructable from various sorts of odd things, including sticks, stones, and toilet paper rolls. Perhaps it is silly to construct a computer out of these things, but that has nothing to do with whether a formal system can have intentionality or mental states. Simply because it is hard for us to imagine such a system having those properties does not make it impossible for such a system to have them.

26. There is no proof, one way or another, that intentional states are not formal.

27. Searle states "Mental states and events are literally a product of the operation of the brain, but the program is not in that way a product of the computer." So what? This is an invalid argument, as it implies that because a program is not to a computer as a mental state is to a brain, a program cannot participate in the production of a mental state. What about: a mental state is to a brain as a mental state is to a running program?

28. A computer simulation of understanding is not understanding. OK. But is calculating that 2 plus 2 equals 4 on a computer different from doing it in one's mind? Which is to ask: "Do computers do anything but simulate?" In any case this says nothing about whether computers can understand, only that a simulation of understanding is not understanding.

29. Searle says "Computers have no semantics, only syntax." Compare this to "Neurons have no semantics, only physical state."

30. "You cannot hope to reproduce the mental by writing and running programs since programs must be independent of brains or any other particular forms of instantiation." This is only significant if you believe that a program, without a processor, is capable of mental states or intentionality. I don't believe this, I don't think you could find many other people that would believe it.

To sum up: The mind is not separable from the brain. An instantiation, that is to say a running program, is not separable from the computer. Do not confuse the written form of a program with its execution.
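
To make the written-form/execution distinction concrete, here is a small Python sketch (the one-line program is an arbitrary example of mine):

    # A sketch of the distinction the summary draws: the same source
    # text is inert as a string, and only an instantiation (a run of
    # that text) carries state. The one-line program is arbitrary.
    source = "counter = counter + 1"   # the written program: just characters

    state = {"counter": 0}             # an instantiation carries state
    exec(source, {}, state)            # running the text updates the state
    exec(source, {}, state)

    print(len(source))        # a property of the text (21 characters)
    print(state["counter"])   # a property of the running instantiation (2)

The text has properties like length; only the running instantiation has properties like a current value of counter, which is the sense in which the mind-as-instantiation is not separable from the machine that runs it.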



Source

The original "Minds, Brains, and Programs" appears in The Behavioral and Brain Sciences, vol. 3, Cambridge University Press, 1980. The essay is reprinted in The Mind's I, edited by Douglas R. Hofstadter and Daniel C. Dennett, Basic Books, 1981. This reprinting contains an excellent critique of the Searle essay.



