Tuesday, February 27, 2007

Carrier on Intentionality

This is a redated post on Carrier on intentionality.

Thursday, November 10, 2005
Carrier on The Argument from Intentionality
This is a version of part of the paper I presented in England. I am bumping it up to today to help clarify some of the intentionality issues we have been discussing.

I. The Argument from Intentionality
The first of the arguments that I presented is the Argument from Intentionality. Physical states have physical characteristics, but how can it be a characteristic of, say, some physical state of my brain, that it is about the dogs Boots and Frisky, or about my late Uncle Stanley, or even about the number 2? Can’t we describe my brain, and its activities, without having any clue as to what my thoughts are about?
To consider this question, let us give a more detailed account of what intentionality is. Angus Menuge offers the following definition:
1) The representations are about something
2) They characterize the thing in a certain way
3) What they are about need not exist
4) Where reference succeeds, the content may be false
5) The content defines an intensional context in which the substitution of equivalents typically fails
So, if I believe that Boots and Frisky are in the back yard: this belief has to be about those dogs; I must have some characterization of those dogs in mind that identifies them for me; my thoughts can be about them even if, unbeknownst to me, they have just died; my reference to those two dogs can succeed even if they have found their way into the house; and someone can believe that Boots and Frisky are in the back yard without believing that “the Repperts’ 13-year-old beagle” and “the Repperts’ 8-year-old mutt” are in the back yard.
It is important to draw a further distinction, a distinction between original intentionality, which is intrinsic to the person possessing the intentional state, and derived or borrowed intentionality, which is found in maps, words, or computers. Maps, for example, have the meaning that they have, not in themselves, but in relation to other things that possess original intentionality, such as human persons. There can be no question that physical systems possess derived intentionality. But if they possess derived intentionality in virtue of other things that may or may not be physical systems, this does not really solve the materialist’s problem.
The problem facing a physicalist account of intentionality is presented very forcefully by John Searle:
Any attempt to reduce intentionality to something nonmental will always fail because it leaves out intentionality. Suppose for example that you had a perfect causal account of the belief that water is wet. This account is given by stating the set of causal relations in which a system stands to water and to wetness and these relations are entirely specified without any mental component. The problem is obvious: a system could have all those relations and still not believe that water is wet. This is just an extension of the Chinese Room argument, but the moral it points to is general: You cannot reduce intentional content (or pains, or "qualia") to something else, because if you did they would be something else, and it is not something else. (Searle, The Rediscovery of the Mind, p. 51)
Admittedly, this is merely an assertion of something that needs to be brought out with further analysis. It seems to me that intentionality, as I understand it, requires consciousness. There are systems that behave in ways such that, in order to predict their behavior, it behooves us to act as if they were intentional systems. If I am playing chess against a computer, and I am trying to figure out what to expect it to play, then I am probably going to look for the moves I think are good and expect the computer to play those. I act as if the computer were conscious, even though I know that it has no more consciousness than a tin can. Similarly, we can look at the bee dances and describe them in intentional terms; the motions the bees engage in enable the other bees to go where the pollen is, but it does not seem plausible to attribute a conscious awareness of what information is being sent in the course of the bee dance. We can look at the bees as if they were consciously giving one another information, but the intentionality is as-if intentionality, not the kind of original intentionality we find in conscious agents. As Colin McGinn writes:

I doubt that the self-same kind of content possessed by a conscious perceptual experience, say, could be possessed independently of consciousness; such content seems essentially conscious, shot through with subjectivity. This is because of the Janus-faced character of conscious content: it involves presence to the subject, and hence a subjective point of view. Remove the inward-looking face and you remove something integral—what the world seems like to the subject.

If we ask what the content of a word is, the content of that word must be the content for some conscious agent; how that conscious agent understands the word. There may be other concepts of content, but those concepts, it seems to me, are parasitical on the concept of content that I use in referring to states of mind found in a conscious agent. Put another way, my paradigm for understanding these concepts is my life as a conscious agent. If we make these words refer to something that occurs without consciousness, it seems that we are using them by way of analogy with their use in connection with our conscious life.

The intentionality that I am immediately familiar with is my own intentional states. That's the only template, the only paradigm I have. I wouldn't say that animals are not conscious, and if I found good evidence that animals could reason it would not undermine my argument, since I've never been a materialist about animals to begin with. Creatures other than myself could have intentional states, and no doubt do have them, if the evidence suggests that what it is like to be in the intentional state they are in is similar to what it is like to be in the intentional state that I am in.

In Carrier’s critique of my book, and in particular in his response to the argument from intentionality, we find terms being used that make sense to me from the point of view of my life as a conscious subject, but that I am not at all sure what to make of when we start thinking of them as elements in the life of something that is not conscious. His main definition of “aboutness” is this:
Cognitive science has established that the brain is a computer that constructs and runs virtual models. All conscious states of mind consist of or connect with one or more virtual models. The relation these virtual models have to the world is that of corresponding or not corresponding to actual systems in the world. Intentionality is an assignment (verbal or attentional) of a relation between the virtual models and the (hypothesized) real systems. Assignment of relation is a decision (conscious or not), and such decisions, as well as virtual models and actual systems, and patterns of correspondence between them, all can and do exist on naturalism, yet these four things are all that are needed for Proposition 1 to be true.
Or consider the following:
Returning to my earlier definition of aboutness, as long as we can know that "element A of model B is hypothesized to correspond to real item C in the universe" we have intentionality, we have a thought that is about a thing.
Or
Because the verbal link that alone completely establishes aboutness--the fact of being "hypothesized"--is something that many purely mechanical computers do.
Or again
Language is a tool--it is a convention invented by humans. Reality does not tell us what a word means. We decide what aspects of reality a word will refer to. Emphasis here: we decide. We create the meaning for words however we want. The universe has nothing to do with it--except in the trivial sense that we (as computational machines) are a part of the universe.
Now simply consider the words “hypothesize” and “decide” that he uses in these passages. I think I know what it means to decide something as a conscious agent. I am aware of choice 1 and choice 2, I deliberate about them, and then consciously choose 1 as opposed to 2, or vice versa. All of this requires that I be a conscious agent who knows what my thoughts are about. That is why I have been rather puzzled by Carrier’s explaining intentionality in terms like these; such terms mean something to me only if we know what our thoughts are about. The same thing goes for hypothesizing. I can form a hypothesis (such as, all the houses in this subdivision were built by the same builder) just in case I know what the terms of the hypothesis mean; in other words, only if I already possess intentionality. That is what these terms mean to me, and unless I’m really confused, this is what those terms mean to most people.
Again, we have to take a look at the idea of a model. What is a model? A model is something that is supposed to resemble something else. But if we explain “X is about Y” at least partially in terms of “X is a model for Y,” I really don’t think we’ve gotten anywhere. How can X be a model for Y if it isn’t about Y in the first place?

Nevertheless, we may be able to work through the critique and find how he proposes to naturalize the concepts.
Material state A is about material state B just in case “this system contains a pattern corresponding to a pattern in that system, in such a way that computations performed on this system are believed to match and predict behavior in that system.”
In correspondence with me Carrier said this:
As I explain in my critique, science already has a good explanation on hand for attentionality (how our brain focuses attention on one object over others). Combine that with a belief (a sensation of motivational confidence) that the object B that we have our attention on will behave as our model A predicts it will, and we have every element of intentionality.
But I am afraid I don’t see that this naturalization works. My objection is that in order for confidence to play the role it needs to play in Carrier’s account of intentionality, that confidence has to be confidence that I have an accurate map; and confidence that P is true is a propositional attitude, which presupposes intentionality. In other words, Carrier is trying to bake an intentional cake with physical yeast and flour. But when the ingredients are examined closely, we find that some intentional ingredients have been smuggled in through the back door.

Here is another illustration:
The fact that one thought is about another thought (or thing) reduces to this (summarizing what I have argued several times above already): (a) there is a physical pattern in our brain of synaptic connections physically binding together every datum about the object of thought (let's say, Madell's "Uncle George"), (b) including a whole array of sensory memories, desires, emotions, other thoughts, and so on, (c) which our brain has calculated (by various computational strategies) are relevant to (they describe or relate to) that object (Uncle George), (d) which of course means a hypothesized object (we will never really know directly that there even is an Uncle George: we only hypothesize his existence based on an analysis, conscious and subconscious, of a large array of data), and (e) when our cerebral cortex detects this physical pattern as obtaining between two pieces of data (like the synaptic region that identifies Uncle George's face and that which generates our evidentially-based hypothesis that the entity with that face lives down the street), we "feel" the connection as an "aboutness" (just as when certain photons hit our eyes and electrical signals are sent to our brain we "feel" the impact as a "greenness").
Now did you notice the word “about” in step (a) of Carrier’s account of intentionality? If there is something in the brain that binds together everything about Uncle George, and that is supposed to explain how my thought can be about Uncle George, then it seems pretty clear to me that we are explaining intentionality in terms of intentionality.

The deepest problem, I think, in assigning intentionality to physical systems is that norms of rationality are applied when we determine what intentional states exist, but normative truths are not entailed by physical facts. In the realm of ethics, add up all the physical, chemical, biological, psychological, and sociological facts about a murder for hire, and nothing in that description will entail that it was a wrongful act. Similarly, scientific information about what is will not tell you what an agent ought to believe, but we need to know what an agent ought to believe in order to figure out what he or she does believe. According to Searle, for example, intentionality cannot be found in natural selection, because “intentional standards are inherently normative,” but “there is nothing normative about Darwinian evolution.” So any attempt to naturalize intentionality will end up bringing intentionality in through the back door, just as Carrier’s account does. When you encounter a new or unfamiliar attempt to account for intentionality naturalistically, look it over very carefully, and you should be able to find out where the bodies are buried.
posted by Victor Reppert @ 3:56 PM



5 Comments:

At 3/01/2007 08:13:00 AM , Blogger Blue Devil Knight said...

You actually took the time to write and present a paper about Carrier? If he were the best the naturalists had, I'd be a dualist.

I haven't actually read your post yet, and will when time permits.

 
At 3/01/2007 01:04:00 PM , Blogger Blue Devil Knight said...

I just read it. Assuming you charitably read/quote Carrier, I'd agree that his view is incomplete at best. Frankly, I can't get past the confident claims about what cognitive science has shown. He should replace that with 'Cognitive scientists have speculated that.'

While I prefer Dretske over the Churchlands' similar 'map' view of representation, I think the latter is far from dead, and it is similar to what Carrier espouses. Basically, brain structure X represents world domain Y when there is a roughly metric-preserving relation between X and Y (this is exactly analogous to a map, whose points don't have content alone, but only when considered in relation to other points on the map, and we see that the map indeed (roughly) preserves the relative distances among things in the world). I find this an interesting hypothesis.
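
For concreteness, here is one way the 'roughly metric-preserving' idea could be cashed out computationally. This is only an illustrative sketch; the pairing of map nodes to world items, the rescaling step, and the distortion measure are assumptions made for the example, not anything Carrier or the Churchlands spell out.

```python
# Purely illustrative sketch (not from the post): one way to make "roughly
# metric-preserving" precise. A brain "map" X is taken to represent a world
# domain Y, under a given pairing of map nodes with world items, to the extent
# that relative distances among the nodes match relative distances among the
# items. All names and numbers below are invented for the example.

import itertools
import math

def pairwise_distances(points):
    # Euclidean distance for every pair of points, in a fixed order.
    return [math.dist(a, b) for a, b in itertools.combinations(points, 2)]

def distortion(map_points, world_points):
    # Rescale the map so its average distance matches the world's (a map may
    # use its own units), then report the worst relative mismatch.
    dm = pairwise_distances(map_points)
    dw = pairwise_distances(world_points)
    scale = sum(dw) / sum(dm)
    return max(abs(scale * m - w) / w for m, w in zip(dm, dw))

# Hypothetical map nodes (internal units), paired in order with world items (metres).
map_nodes   = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
world_items = [(5.0, 5.0), (15.0, 5.5), (15.5, 15.0), (5.5, 15.5)]

print(distortion(map_nodes, world_items))  # small value => the map "fits" this domain
```

On this toy reading, a low distortion score is all that 'roughly preserving relative distances' amounts to; note that the score itself says nothing about which domain the map is about.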

But is the hypothesis true? In brains, we indeed find maps of the world, maps that weren't created by us, but by learning and evolutionary history. If these maps are indeed used to guide behavior directly (with respect to the relevant world domain) or used in the absence of proximal stimuli to simulate the world (counterfactual reasoning), then this hypothesis would need more investigation (which I think it does). Those maps seem to have many interesting features of representational systems (degrees of accuracy, for one; the ability to be flat-out wrong when the relevant map node is activated but the world is not in the corresponding state).

Whether they are conscious, or provide the contents of consciousness, I don't know, but they would at least have some properties of representational systems (I don't think being intentional is an all-or-none affair, and some of the interesting features of cognitive systems can obtain in the absence of consciousness, which we've all believed since Freud).

Also, note you could replace his 'about George' with 'properties of George' to avoid the obvious circularity. That seems the main hermeneutic weakness of your response.

All that said, I think Carrier is on the right track. It seems quite reasonable, and perhaps likely, that internal representational states are some sort of model of the world that is used to control behavior or to simulate the world in the absence of the typical stimuli.

I think his biggest weakness is that he doesn't consider obvious counterarguments. One, what I call the 'three point problem': three arbitrary points will map onto any number of world-domains, so how do you narrow things down to give the map specific content? The theory underdetermines the contents of the map. This is why I think some sort of Dretskian informational semantics needs to be brought in to undergird the claims about specific contents.
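
Here is the same toy machinery applied to the three point problem, again purely illustrative (the candidate 'domains' and numbers are made up): a three-node map fits every world triple with the same triangle shape equally well, so the map's geometry alone leaves its content underdetermined.

```python
# Purely illustrative sketch of the "three point problem": a three-node map
# fits every world triple whose triangle has the same shape, so its geometry
# alone cannot single out which domain it is "about". The candidate domains
# below are invented; the distortion measure is the same as in the sketch above.

import itertools
import math

def pairwise_distances(points):
    return [math.dist(a, b) for a, b in itertools.combinations(points, 2)]

def distortion(map_points, world_points):
    dm = pairwise_distances(map_points)
    dw = pairwise_distances(world_points)
    scale = sum(dw) / sum(dm)
    return max(abs(scale * m - w) / w for m, w in zip(dm, dw))

map_nodes = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]  # a three-node "map"

def similar_copy(points, scale, angle, shift):
    # Rotate, rescale, and translate: a geometrically different "world domain"
    # that nevertheless has the same triangle shape.
    c, s = math.cos(angle), math.sin(angle)
    return [(scale * (c * x - s * y) + shift[0],
             scale * (s * x + c * y) + shift[1]) for x, y in points]

candidate_domains = {
    "back yard":   similar_copy(map_nodes, 7.0,  0.3, (2.0, 1.0)),
    "chess board": similar_copy(map_nodes, 0.4,  2.1, (-5.0, 3.0)),
    "star field":  similar_copy(map_nodes, 1e6, -1.0, (0.0, 0.0)),
}

for name, points in candidate_domains.items():
    # Every candidate fits essentially perfectly, so the map alone
    # underdetermines which domain it represents.
    print(name, round(distortion(map_nodes, points), 6))
```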

 
At 3/07/2007 05:39:00 PM , Blogger Victor Reppert said...

I gave more attention to Carrier back then than I would now. You responded when I first published this on DI and said that while Carrier had clearly not succeeded in baking an intentional cake with physical yeast and flour, Dretske is the guy who can show you how to do just that. So far as I can tell, Carrier, at the time he wrote that discussion of intentionality, just didn't even see that he was using intentionality to explain intentionality. In short, he didn't see the problem that Dretske et al are trying to solve. But giving responses to Carrier was a good way, I thought back in 05, of advancing the discussion of each of my six arguments further. And I still see people who really think that Carrier had a good rebuttal of my book.

 
At 4/05/2007 06:36:00 PM , Blogger Wakefield Tolbert said...

Mr. Reppert. I am fascinated with your book just from the looks of things so far. So much so that I'm sure at this point it might be helpful to actually get a COPY of it.

But I'm afraid.....Most of this is WAY over my head, but it is NOT over the head of a relative who works in neuroscience and has some really pointed things to say (she's a talker, be warned) about CS Lewis, the Argument from Reason, and this notion of Induction. She's long-winded, but she makes some points about materialist science and new discoveries in the brain that I find difficult and feel ill-prepared to even touch.

I'll go ahead and post a blogroll link to your site for easy access to and from.....

Please email me at swtolbert@hotmail.com

.... at your convenience.

Many thanks.


SWT

 
At 10/22/2024 12:38:00 PM , Anonymous Albert S said...

Good reading this post

 
