[Assignment] On Methodological Behaviorism

빨간도란쓰

For this reading, you might think about the methodological perspective, with its focus on behavior, that is displayed (in different ways) by both Turing and Skinner. What are its virtues and weaknesses?

What, if anything, do such behaviorist commitments leave out?

Or, to ask the question from the other side, why should we care about anything beyond what is capable of generating behavioral consequences? (If it has no behavioral consequences, does it matter?)

Last class, I asked whether we can say that a machine that has learned the word "cloud" only by being trained on a series of abstract natural-language-processing data points really understands or knows what a cloud is, even if the machine is able to use the word in a seemingly intelligible way in a "conversation" with a human. To repeat my intuition: this machine would be completely unable to pick out the image of a cloud and distinguish it from other images, because it has no knowledge of the referent of the word "cloud", nor has it ever had the chance to learn the connection.

(I believe someone countered that a blind person is also unable to distinguish an image of a cloud from a non-cloud, but really, it doesn't have to be an image of a cloud. The same idea applies to any object in the world that a human can meaningfully engage with, in ways other than visual stimulation. Here I am using the word "engage" in a very vague sense, but I still think there is a clear distinction between a computing machine whose representation of a cloud is only a number in a matrix and a human whose concept of a cloud occupies a certain place in that human's understanding of the actual world. In this sense, perhaps what I am asking for is a causal picture/understanding of the world, and for the entity in question to be able to place individual tokens in their correct positions within that model of causal networks.)

Anyhow, I initially thought of this as one aspect of intelligence and understanding that could be missing from a wholly behaviorist picture, one that neither Turing nor Skinner could capture in their frameworks. However, after thinking about it a bit more, I have come to the conclusion that what I required was actually only a different kind of behavior, namely, counterfactual behavior. What matters in the end is that, given the counterfactual situation in which an image of a cloud and a non-cloud are presented, the entity in question produces the "right" behavior of choosing the image of the cloud. While such behavior may not always be observable/present when the interrogator is conversing with a machine about clouds, I do think that the ability to distinguish the referent of the word "cloud" essentially comes down to the possibility of producing the right *behavior* with respect to the word and its referent. As such, setting aside questions of qualia/emotion/subjective experience, I think this particular qualm I had with behaviorism has in a way been resolved: behavioral consequence is all that matters in the end for cognition (thinking), even if it may be counterfactual behavior produced under counterfactual conditions.

Nevertheless, as I mentioned in class, I still think the Turing Test per se is limited because it tests only a certain kind of behavior, i.e., conversation on a screen. To check the counterfactual behaviors and accurately judge whether the machine is thinking, we need to allow the machine to interact with the world in some meaningful manner and assess whether it produces the "right" behavior. My initial doubt, that the NLP-trained machine does not understand the word "cloud" and will fail such trials, still persists.
