
[Assignment] On Searle's Chinese Room Thought Experiment



빨간도란쓰

Issue: Understanding defined as capability seems to justify the systems+robot reply

 

There seem to be two distinct senses of understanding that may be relevant (and lacking for the man) in the Chinese room thought experiment: an intentional aspect that connects the syntactic elements to semantics, and a cognitive-phenomenology aspect of understanding ("what it feels like to understand," if any such thing exists).

 

Searle mostly seems to be talking about the former: that no intentionality or proper semantics is imbued in the Chinese characters. But what does it mean for an entity to have a semantic understanding that goes beyond simple syntactic manipulation? Isn't it nothing more than having the capability to correctly distinguish and identify the referents of the symbols in the syntactic system? Conceptualized as a capability of sorts, it seems that we do not need to posit anything "internal" that is missing; it is just that the man in the Chinese room does not know how to use the symbols properly. I guess the Skinnerian argument really got me here. Knowledge and semantics seem to be nothing more than an "internal surrogate of behavior in its relation to contingencies" (Skinner), and whether an entity has that knowledge can be determined by whether the entity is able to interact with the object of knowledge in an appropriate and successful manner.
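The contrast between pure syntactic manipulation and referent-tracking capability can be made concrete with a toy sketch (my own illustration, not anything from Searle's paper): the room below is nothing but a lookup table over symbol strings, and no part of it links a symbol to its referent.

```python
# Toy "Chinese room": a rulebook mapping input symbol strings to output
# symbol strings. The rules are applied purely by shape-matching;
# nothing here connects any symbol to anything in the world.
RULEBOOK = {
    "你好吗?": "我很好。",        # "How are you?" -> "I am fine."
    "云是什么颜色?": "白色。",    # "What color are clouds?" -> "White."
}

def chinese_room(symbols: str) -> str:
    """Return the rulebook's output for the input, or a stock reply."""
    return RULEBOOK.get(symbols, "请再说一遍。")  # "Please say that again."
```

On the Skinnerian reading, what this system lacks is not some hidden inner ingredient but simply any capability to interact with clouds, colors, or anything else outside its symbol set.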

 

Now, from this view, consider the man in the Chinese room. Clearly, he only has access to the Chinese characters, i.e., the formal symbols. Here, it is correct to say that the man does not have a semantic understanding of the symbols. But suppose now there is another system, the Chinese robot room, which gets to engage causally with the real world, and does so in an appropriate way. Then we would have inputs of images, say of clouds, and somehow the woman in this system would have to be able to connect the image input with the Chinese counterpart for 'cloud.' Searle may want to say: ah, but the woman gets as input a formal representation of the images, not the images themselves! But if we add the systems reply to this, then I think it is hard for us to deny that the robot system as a whole, which can distinguish clouds from non-clouds, really does understand what clouds are. Searle's classic rebuttal to the systems reply, namely that the woman can memorize and internalize all the program specifications, does not work here, because then the woman really would know what "clouds" (in Chinese characters) refers to and looks like!
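Read this way, the robot reply adds exactly the missing piece: a causal channel from the world to the symbols. A minimal sketch of that channel (an illustrative toy in which an "image" is just a dict of crude, hypothetical features, not a real vision system) might look like:

```python
# Toy "Chinese robot room": the system now receives percepts and maps
# them to Chinese symbols. The feature dicts below are hypothetical
# stand-ins for real sensory input.

def looks_like_cloud(image: dict) -> bool:
    """Stand-in for a perceptual classifier."""
    return bool(image.get("white")) and bool(image.get("in_sky"))

def name_percept(image: dict) -> str:
    """Connect a percept to a symbol: the symbol-referent link that
    the original room lacked."""
    return "云" if looks_like_cloud(image) else "非云"  # "cloud" / "not a cloud"
```

If the woman memorizes these mappings along with the rulebook, she can distinguish clouds from non-clouds on sight, which is precisely the discriminative capability the Skinnerian view identifies with knowing what "cloud" refers to.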

 

As such, I think that the combination of the robot reply and the systems reply actually does the job of establishing that formal computer programs, if situated correctly so that they can display successful behavior while engaging causally with the outside world, can understand things. One might then say that it is not that some "causal powers" of brains went unreplicated inside the computer programs, but rather that the causal powers were missing in the external ways in which an abstract piece of software could interact with the world. All in all, it seems to me that as long as "understanding" is conceptualized in this Skinnerian sense, there is no reason a formal symbol-manipulation system cannot display such understanding, contrary to Searle's argument.

 

+. Now, if what the man in the Chinese room lacks is not intentionality/semantics but cognitive phenomenology, then this might be another story. But even on this aspect, I think Searle's claims are in no way more explanatorily powerful than the so-called "strong AI" views. Searle talks about "causal powers," but what really are they? Wasn't functionalism supposed to be an account that cashed out mental states in terms of their causal roles in the first place? I am doubtful as to whether anyone has a satisfactory answer to what properties enable and constitute subjective experiences/qualia, and in my view Searle, in vaguely gesturing at a requirement of "causal powers," does no better.

 

 

 
