Thursday, August 30, 2007

Mental Representation and Intentional Content: a failure of entailment?


DI Post from Saturday, December 30, 2006

Ed Feser on the Argument from Intentionality

Originally dated Nov. 21, 2006
The following is from Philosophy of Mind: an Introduction, by Edward Feser. Hat tip: Joe Markus from the Internet Infidels Discussion board.

When you draw your mother, you are creating a kind of representation of her. But notice that it is not the particular physical features of the drawing itself - the form of the lines you make, the chemicals in the ink you use, and so forth - which make it a representation of her. ... Someone looking over your shoulder as you draw might later on produce an exact copy of the drawing you were making. Perhaps the person admires your craftsmanship and wants to see if he or she can do as well. But in doing so the person would not, strictly speaking, be drawing a representation of your mother - he or she may have no idea of, nor any interest in, who it was that you were drawing - but rather a representation of your representation. And, in general, the very same image could count either as a drawing of an X, or as a drawing of a drawing of an X - or indeed (supposing there's someone looking over the shoulder of the second artist and copying what he or she was drawing) as a drawing of a drawing of a drawing of an X, and so on ad infinitum. ...

Even if we count something as a drawing, and therefore as possessing some intentionality or other, exactly what it is a drawing of is still indeterminate from its physical properties alone. The same is true not just of drawings, but also of written and spoken words (for to say or write "cat" could be to represent cats, but it could also be to represent the word "cat") and indeed of any material representation, including purported representations encoded in neural firing patterns in the brain. There seems in general to be nothing about the physical properties of a material representation that make it a material representation of an X as opposed to a material representation of a material representation of an X. ...

Sometimes, however, you are determinately thinking about a particular thing or person, such as your mother. Your thought about your mother is about your mother - it represents your mother, and doesn't represent a representation of your mother (representations, pictures, and the like might be the furthest thing from your mind). But then your thought, whatever it is, cannot be entirely material. Given that there's nothing about a material representation per se that could make it a representation of an X as opposed to a representation of a representation of an X, if your thought were entirely material then there would be no fact of the matter about whether your thought represented your mother as opposed to a representation of your mother. Your thought is determinate; purely material representations are not; so your thought is not purely material.

posted by Victor Reppert @ 3:49 PM
1 Comment:


At 5:47 PM, Jim Lippard said…

"There seems in general to be nothing about the physical properties of a material representation that make it a material representation of an X as opposed to a material representation of a material representation of an X."

This seems patently false. What makes an image of my mother an image of my mother is the fact that it resembles my mother: the images on my retina and in my visual brain maps stimulate the neurons associated with my mother because of that similarity, and those neurons are there as a result of my visual experiences of my mother (and are linked to other neurons as a result of my memories of experiences and thoughts about her).

Likewise even for stipulated/dubbed representations: they are recognized as representations only because of the appropriate neural connections in my brain, which are there because of past experiences and memories.

Without the appropriate connections in somebody's neural systems (or equivalent memory stores causally connected up in the right way to the world), there's no representation.

2 Comments:

  • At 3:54 PM, Blue Devil Knight said...

    While I agree that resemblance isn't necessary or sufficient for something to be a representation, the passage makes too much of this fact. More generally, just because we can't tell whether something is a representation by studying its intrinsic properties (e.g., what it resembles), it doesn't follow that representations aren't natural. The argument seems to attack a straw man.

    Natural representations clearly have to be more complicated than that, and minimally must take into account relations (causal, informational, or predictive) to the thing being represented. This was the point of Twin Earth.

    Neuroscientists studying how the brain represents the world don't just stare at the brain to figure out what it is representing; they study how the neuronal activity relates to what is happening out in the world (e.g., showing different stimuli to the organism while recording the neuronal activity). If the neuronal activity is informative about the world, then it has met one of the conditions for being a representation of that feature of the world. I'd call such information-bearing states proto-representations, as you need to add more details to get the ability to misrepresent and to resolve different but coextensional contents. (A minimal sketch of this informativeness test appears just after this comment.)

    I think Lippard is right to focus on memories. Consider a simple example of a representational system studied by neuroscience: songbird learning. Early on, birds hear the songs in their milieu. Then they do no singing for quite a few months, but eventually start to produce song that is almost identical to what they heard months before. How is this ability to be explained? It seems natural to say that they have stored an internal representation of the original song, and that this representation guides the emergence of the correct song later in life.

    While this example is inadequate to provide a full-fledged theory of intentional contents (the song, arguably, isn't actually about anything), I think that the fuller story about intentional contents (i.e., things with truth values and referents) will have to include something about the laying down of an original core of representational contents (proto-representations) that can later be activated and used to guide behavior even when the proximal stimulus is absent.

    Dretske tackles all this stuff head on in his Knowledge and the Flow of Information. He's brilliant.
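Since the informativeness condition is the operational core of the comment above, here is a minimal sketch of that test in Python. This is an editorial illustration with made-up data, not anything from the original exchange; it uses the standard plug-in estimate of mutual information between stimulus and response.

```python
# Minimal plug-in estimate of the mutual information between stimulus and
# neural response. Illustrative only; the data below are invented.
from collections import Counter
from math import log2

def mutual_information(stimuli, responses):
    """Estimate I(S; R) in bits from paired samples."""
    n = len(stimuli)
    p_s, p_r = Counter(stimuli), Counter(responses)
    p_sr = Counter(zip(stimuli, responses))
    mi = 0.0
    for (s, r), count in p_sr.items():
        p_joint = count / n
        mi += p_joint * log2(p_joint / ((p_s[s] / n) * (p_r[r] / n)))
    return mi

# Hypothetical experiment: two stimuli, spike counts binned to low/high.
stimuli   = ["face", "face", "face", "house", "house", "house"]
responses = ["high", "high", "low",  "low",   "low",   "low"]

print(f"I(stimulus; response) = {mutual_information(stimuli, responses):.2f} bits")
# A value well above zero satisfies BDK's minimal condition (the state is
# informative about the world); zero would mean no proto-representation.
```

On BDK's own terms, passing this test yields only a proto-representation; the ability to misrepresent and to resolve coextensional contents requires further machinery.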

  • At 4:04 PM, Blue Devil Knight said...

    PS. I said the SONG doesn't represent anything, but there are internal states that represent the original song. Similarly, human children, during some learning period (perhaps even over evolutionary time), acquire a rich bed of representations, or information-carrying structures, of the world. These can then, independently of the stimulus, be used to guide behavior with respect to the world, and can even be activated incorrectly (e.g., activating the 'mom' structure when you see someone who looks like your mom).

    i.e., song:bird :: mom:human

    PPS. Paul Churchland's new book will include a fairly extended critique of 'resemblance'-based theories of representation. I have no idea when it is due out.

VR: This is a fascinating exchange, beginning with Feser's initial remarks, Jim Lippard's reply, and Blue Devil Knight's further response. Here's my enduring problem. Suppose I maintain that you can't get an ought from an is. I mention some simple naturalistic fact that, you can see, clearly cannot entail a moral obligation. OK, you reply, it's more complicated than that: if you add in a whole lot of neurophysiology, and specify all the social relations involved--if you could see all of that--you would be able to see what the moral obligations are. But it seems to me that there is always going to be a logical gap.

The same problem arises here. No matter how much you specify the neurophysiology, the causal context, and so on, the physical information just seems insufficient to fix reference, because it is information of the wrong kind. There's no logical entailment from "such and such physical information is the case" to "X is about Y," and to be quite honest, it looks as if there cannot be such an entailment.

This connects to the Vallicella exchange I referred to earlier.


7 Comments:

At 8/30/2007 11:23:00 AM, Blogger Blue Devil Knight said...

There's no logical entailment from "such and such physical information is the case" to "X is about Y," and to be quite honest, it looks as if there cannot be such an entailment.

Reference is simply a relation between two things. What features of the reference relation do you think are not captured by these examples?

I submit that not all biological representations 'refer'. There is a rather interesting variety of biorepresentational systems, and this richness will not be captured from the armchair. It will only be captured by studying the varieties of representational engines used by biological systems. The names we use, the mapping from science to armchair concepts, will probably be quite messy. For instance, my hunch is that the original philosophical categories are vague (some systems clearly refer, some don't, and for some it is just not clear).

To show what I mean I'll contrast two examples:
1. Honeybee navigation. There is an internal state that maps onto the location of nectar and is used later to guide the 'waggle dance' and the behavior of conspecifics. I say the internal state refers to nectar location. (A toy sketch of this appears at the end of this comment.)

2. Birdsong learning. Here is one I think is vague.

First, more details. Birds go through three stages. In the first, they hear a song but don't sing at all. In the second, a 'learning' phase, they produce songs that are incorrect (different from the original) and slowly adjust their song until they hit the third phase, when they reproduce the original song they heard. Phase two happens in the absence of feedback from experimenters or other birds, so it has to be based on an internal representation of the original song (and birds can't correctly learn the song if they are deafened, so there seems to be a comparison of the stored representation with the actual song the bird produces).
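To make the three-phase story concrete, here is a toy sketch in Python. This is an editorial illustration with invented numbers, not anything from the original comment: the stored template plays the role of the internal representation, the comparison step stands in for auditory feedback, and deleting that step corresponds to deafening.

```python
# Toy model (illustrative only) of template-guided song learning.
import random

def mismatch(produced, template):
    """How far the bird's own song is from its stored template."""
    return sum((p - t) ** 2 for p, t in zip(produced, template))

# Phase 1: the juvenile hears the tutor song and stores it as a template.
template = [2.0, 5.0, 3.0, 7.0]           # toy "pitch" values

# Phase 2: months later it babbles, hears its own attempts, compares them
# with the stored template, and keeps changes that reduce the mismatch.
song = [random.uniform(0.0, 8.0) for _ in template]
for _ in range(5000):
    candidate = [s + random.gauss(0.0, 0.1) for s in song]
    if mismatch(candidate, template) < mismatch(song, template):
        song = candidate                   # the "auditory feedback" step

# Phase 3: the adult song now closely matches the song heard months before,
# even though no tutor or external feedback was present during learning.
print("template:", template)
print("learned: ", [round(s, 2) for s in song])

# Deafening removes the comparison step above: without it the song never
# converges, which is what makes the internal representation's role visible.
```

The sketch makes the inference vivid: during phase two, nothing outside the bird carries the original song, so the stored template is the only candidate explanation for convergence.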

Now, I assume everyone would agree that in some sense this internal state that stores the information about the original song is a 'representation' of the original song. I'm not sure it 'refers' to the original song, though.

One interesting way I like to think about it is: what would we have to add to this representational system to make it refer? A few things. Imagine the representation is of the bird's mother, and can be activated when it 'sees' similar birds (sometimes correctly, sometimes incorrectly). It uses this representation of its mother to tell other birds when its mother is nearby (this is similar to monkey alarm calls when a predator is near). In that case I'd say the representation refers to its mother, albeit perhaps weakly.

Now say we add syntax, rules that operate over these basic representational atoms stored in memory. They can be combined to give rise to new (compound) representations such as 'Angry mother.' And these are used to guide behavior appropriately (perhaps bringing mother a worm). Is the bird thinking about its mother? I think so.

Another case. Say it's a parrot, and we teach it to say 'My song' when we play back the original song, and 'Not my song' when it isn't the original song. Does the representation used to control such classificatory behavior refer to the song? I'm not sure. Again, as a biologist, I find it probably not all that productive to worry about how my explanations map onto the philosophers' categories (but it is extremely productive for philosophers to worry about how biobehavioral results bear on their own theories).
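And, as promised, a toy sketch of the honeybee case from example 1 above. This is an editorial illustration; the class, field names, and scaling are all invented. It shows the shape of the claim: an internal state that maps onto nectar location and is later used to generate the waggle dance.

```python
# Toy model (illustrative only) of a bee's internal state for nectar location.
from dataclasses import dataclass

@dataclass
class NectarMemory:
    bearing_deg: float   # direction of the nectar relative to the sun
    distance_m: float    # distance of the nectar from the hive

def waggle_dance(memory: NectarMemory) -> dict:
    """Translate the stored state into dance parameters. Real bees encode
    direction in the dance angle relative to vertical and distance in the
    duration of the waggle run; the scaling here is made up."""
    return {
        "dance_angle_deg": memory.bearing_deg,
        "waggle_duration_s": memory.distance_m / 100.0,
    }

# The state is produced by a foraging trip and systematically controls the
# dance that steers conspecifics toward the nectar. That mapping onto nectar
# location is what the claim that the state "refers" is pointing at.
memory = NectarMemory(bearing_deg=40.0, distance_m=350.0)
print(waggle_dance(memory))
```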

 
At 8/30/2007 11:40:00 AM, Blogger Victor Reppert said...

BDK: Reference is simply a relation between two things. What features of the reference relation do you think are not captured by these examples?

Maybe here's the problem. I think there has to be a term, an object, and a user who understands that object to be X.

 
At 8/30/2007 12:40:00 PM, Blogger Blue Devil Knight said...

I think there has to be a term, an object, and a user who understands that object to be X.

Cavemen didn't understand that they referred to H2O, but they did. Why can't a bee's internal state refer to nectar location even if the bee doesn't understand this?

 
At 8/30/2007 09:54:00 PM, Blogger Doctor Logic said...

Victor,

I think there is indeed a parallel between the is-ought problem and the intentionality problem.

Both morality and aboutness are what I call "system subjective." I'll define the term with an example.

Example: Bob observes that cats cause him to sneeze. A priori, Bob does not know whether cats have a system-objective property called "sneeziness" which he is detecting, or whether Bob simply sneezes in response to other, more fundamental system-objective properties of cats.

If sneeziness is a system-objective property, we might expect to see some predictive relation among cats that is a function of sneeziness. We could then establish that Bob is measuring a system-objective property of cats by showing that sneeziness mediates interactions among cats even when Bob is not present.

Of course, it's possible that cats do have a system-objective property of sneeziness, but that the only way this property manifests itself is when Bob measures it. However, we would have to concede that such a property bizarrely conspires against our ability to determine its system-objective nature. A bizarre property indeed is one with the sole function of allowing us to measure it.

On the other hand, we could establish that there is no system-objective sneeziness property by showing that sneeziness is precisely the interaction of Bob's system-objective chemical properties with a cat's system-objective chemical properties. That is, that Bob is just reacting to the chemistry of cats, and that the chemistry of cats is something that mediates cat-cat (or cat-water) interactions in the absence of Bob, and Bob's chemistry mediates the interactions of Bob with other stuff in the absence of cats.

By this "systems" definition, morality is not known to be objective. Thus far, morality doesn't predict anything beyond the "is" facts. One can even make the case that evidence shows that morality is precisely the effect of "is" physical facts on our evolved brains.

Similarly, meaning and aboutness are not system-objective. A collection of symbols could mean different things in different languages. An accidental series of characters on an unformatted disk may well spell out a meaningful sentence in an alien language, but that doesn't make it meaningful to us (nor to the aliens). Likewise, a series of characters on a disk that spells out "eAtPoRk" may carry no significance when created by accident, as opposed to when typed by a friend.

The meaning of the characters only becomes system-objective if we expand the boundary of the system to include the author of the characters.

I think that in the physicalist view, intentionality and meaning are connected. Perhaps one might think of aboutness as the meaning of representations in our brains.

So there's really not much to be learned about physicalism by looking at isolated representations for traces of intentionality. That's sort of like looking at isolated foods for traces of allergic reactions without ever looking at any people who would react to them.

 
At 8/31/2007 08:33:00 PM, Blogger Blue Devil Knight said...

I disagree with DL. Interpretivism (the view that there are no objective intentional/referential states, only our interpretations of systems as having such states) doesn't solve anything, as it leaves open the question of giving a naturalistic account of the interpreters themselves.

Also, why doesn't the bee's internal state refer to nectar location, even if there are no interpreters around to interpret it as such (other than, perhaps, downstream neurons that "observe" this state and control behavior appropriately)?

 
At 9/01/2007 09:17:00 PM, Blogger Victor Reppert said...

Cavemen didn't understand that they referred to H2O, but they did. Why can't a bee's internal state refer to nectar location even if the bee doesn't understand this?

VR: Reference, to my mind, requires a referrer. This may be something we have to subscript our way out of (reference-1 and reference-2).

 
At 9/02/2007 02:24:00 PM, Blogger Blue Devil Knight said...

VR: good idea. :)

 
