Thursday, October 25, 2007

Three levels of intentionality

I'm redating this, partly because it provides a context for discussing eliminativism.

VI. Three levels of intentionality

Intentional states are at the heart of the argument from reason. In philosophy of mind, the term “intentionality” refers to “aboutness.” Our thoughts are about other things, surely. The first thing that we notice about our mental states is that they are about certain other things. If there is to be rational inference, there has to be something to reason about.

However, intentionality is a rather complex phenomenon. Consider the following passage by Lewis:

The strength of the critic lies in the words "merely" or "nothing but." He sees all the facts but not the meaning. Quite truly, therefore, he claims to have seen all the facts. There is nothing else there, except the meaning. He is therefore, as regards the matter at hand, in the position of an animal. You will have noticed that most dogs cannot understand pointing. You point to a bit of food on the floor; the dog, instead of looking at the floor, sniffs at your finger. A finger is a finger to him, and that is all. His world is all fact and no meaning.

What is interesting about this passage is that although it is clear enough that dogs don’t understand pointing, it is equally true that dogs can be very good at tracking things. There are certainly states of the dog that link up to previous positions of a fox. The dog certainly can “track” a fox, and in one important sense we can say that the dog has states that are “about” the fox. But nevertheless the dog doesn’t understand pointing. It does not recognize the “aboutness” of our mental states. It does not understand the relationship between its own fox-tracking activities and the fox.

So we might distinguish between simple representation on the one hand, and representation that is understood by the agent on the other, which I will call understood representation. Clearly the latter type of intentionality is necessary for the kind of rational inference employed by the natural sciences. We have to know what we mean when we think, if we are to infer one claim from another. Consider the following joke syllogism, invented by a freshman student at the University of Illinois years ago.

1) Going to class is pointless.

2) An unsharpened pencil is pointless.

3) Therefore, going to class is an unsharpened pencil.

Recognizing that this is not a good argument is a matter of seeing that the meaning of the term “pointless” does not remain invariant between the first and second premises. And, as a recent President of the United States once observed, even the meaning of the word “is” does not remain constant from context to context. No rational inference, in or out of a scientific context, could occur if we never know what we mean when we use words.

So there is another characteristic of intentional states that is critical to their use in rational inference: states of mind that are about other things can be formulated together to provide us with a state with propositional content. This is a further development, which results in agents who have beliefs, desires, and other propositional attitudes. If we have propositional attitudes, not only do we understand what our thoughts are about, we are also able to formulate those thoughts in a sentential format. This I am going to call propositional intentionality.

Naturalistic discussions are going to have the easiest time with simple intentionality. But, I maintain, understood intentionality and propositional intentionality are essential for the possibility of science, and these are more difficult for naturalists to deal with.



At 10/10/2007 06:46:00 AM , Blogger properly basic said...

Bear with me:

Assume for a moment that beliefs are produced by non-rational causes, in which case mental states are not at all about other things. If beliefs are indeed produced this way, what about a belief makes it a belief? Put differently, are beliefs really beliefs at all?

Not sure I asked that correctly.

Great post, by the way!

At 10/12/2007 03:48:00 PM , Blogger Rino said...

Nice post. I agree that there is a difference between 'understood intentionality' and regular intentionality. A sign that says 'London 6 km' could be considered intentional, or the rings on a tree could be considered intentional. But they could not be considered cases of 'understood intentionality'. Indeed, I think that some people make a strawman out of the problem; i.e., they solve the easy problems of intentionality, but do not solve the hard problem. It may be helpful to just change words, since the hard problem is intent/purpose, not aboutness. There is a difference between saying 'This sign purposefully points to that city' and 'This sign points to that city'.


At 10/13/2007 11:17:00 AM , Blogger Doctor Logic said...

Suppose you are tracking a friend into a labyrinth by way of breadcrumbs that he has left behind.

If the friend was merely a messy eater, and accidentally left crumbs from his sandwich, then the crumbs are not a sign intended to mean something to you.

However, if the friend deliberately left the crumbs with the expectation that you (or someone else) would see the crumbs and follow them, then the crumbs are a sign with intent.

The distinction here relates to the expectation (or lack of expectation) on the part of your friend leaving crumbs that you (or another intelligent agent) will interpret the crumbs in a certain fashion.

A similar expectation model accounts for why my thoughts are about things.

This whole intentionality business seems to boil down to a matter of expectation. My thoughts about apple pie are about apple pie because they deliver expectations about apple pies. Those expectations can be minimal, as in, "I would recognize an apple pie if I saw one."

However, I could not sensibly think about "apple pie" if I had no means of recognizing an apple pie. Even if the only thing I knew about apple pie was that Victor loves to talk about them, I would still have a means of recognizing an apple pie with Victor's help. For example, I could hold up assorted objects until Victor says, "now that's what I'm talkin' 'bout!" Yet I cannot think about a thing that I have no means of recognizing.

So the question about naturalism comes down to this: can physical systems recognize things and have expectations? Of course, I think the answer is yes, but even if you don't believe such systems exist, I think that's the question you should be asking.

At 10/13/2007 01:23:00 PM , Blogger Rino said...

Hi Doctor Logic,

An interesting post. I like your point about the bread crumbs. However, I would stop at the intent of the bread crumb dropper, rather than the recognition of the bread crumb finder, as the crucial issue. As you say, it is easy for a physical system to recognize. A computer program could have a certain pitch and tone of one specific voice programmed in and then told to recognize that voice, or a certain word from that voice, from a huge database. It could achieve this without problem.
Or, to use Putnam's example, a snail could slide along the sand and unknowingly be drawing Winston Churchill's face. The drawing is about Winston Churchill, but it was not intended to be. It is intentional, and it is not. The simple intentionality (aboutness) can be solved within physicalism, but intent (purposiveness) cannot.

To use your model, a human comes along and recognizes: 'hey, that sand drawing is about Winston Churchill'. Would you say that this is intentionality, since humans recognized it? I would say it is not, since we aren't picking up what was put down. We are picking up what was never put down by the snail. Your solution seems to be like making a strawman: 'this is the problem of intentionality, and here is how we solve it'. But that is not the problem of intentionality, and the real problem is not solvable. If you survey the literature over the past fifty years, you will see that the currently debated problem of intentionality is not the traditional problem at all. It has been simplified in order to make it fit within physicalism.

At 10/13/2007 06:09:00 PM , Blogger Doctor Logic said...

Hi Rino,

Can you elaborate on the problem for me?

I thought I was focused on the breadcrumb-dropper as you suggested. In my example, the thing that induces the intentionality is the breadcrumb-dropper's expectation about your interpretation of the breadcrumbs (not merely your ability to have an interpretation of the breadcrumbs).

At 10/13/2007 08:34:00 PM , Blogger Rino said...

Hi Doctor Logic,

Perhaps I misread your post. I liked what you were saying about the breadcrumb dropper, and how it is his intent that mattered. But then, when you spoke of the expectation, I interpreted you as saying it was the expectation of the receiver of the message, not the giver of the message that was important. If you were focused on the intent of the message giver, then I agree with you.

Perhaps you could elaborate on how a physical system could have intents/purposes?
Computers and signposts have derived purposes; i.e., they are filled with the intentions of an outside mind, and they pass those intentions along. But they do not have native intents/purposes. And, if there were no mind that had programmed them to achieve a purpose, there would be no reason to ascribe intents/purposes to them. A rock that has been windblown over eons to look like an arrow which points to a city does not have the intent to point, even though it does point. Similarly, if all of the structures of the universe are the product of time and chance, then the appearance of purpose is nothing more than the windblown rock pointing. There is no purpose there.

At 10/14/2007 04:02:00 PM , Blogger Doctor Logic said...


Perhaps you could elaborate on how a physical system could have intents/purposes?

I think that intentionality (along with purpose) has to do with expectation and a preference for outcomes.

Suppose I put an H on my calendar for next Friday to represent my hockey game that evening. The thing that makes the symbol about the game is that I anticipate seeing the H in the future, and I anticipate interpreting the H as signifying Hockey.

The purpose in the use of the symbol arises because I prefer that I remember to play hockey than that I forget I have a game. I have anticipated the consequences of making or not making the notation, and what the likely outcomes will be. I prefer the probable outcome that results from making the notation, so I make the notation purposefully.

This means that the intentionality of a written symbol depends on an expectation that the symbol will be cognitively connected with whatever the symbol represents. Purpose is derived from intentionality and a preference for certain what-if outcomes.

Therefore, to possess intentionality and purpose, a system has to be able to predict the outcomes of possible choices, and have a preference for the resulting outcomes.

We can test a few boundary conditions. Would I willingly make a notation on my calendar that I would not be able to cognitively connect with at a later time? I would think that any such notation would be about nothing or be without purpose.

If I made notations on my calendar, and had no preference for the outcomes of making the notations versus not making them (including the effort of making them), I think it would be fair to say that the notations were purposeless.

Suppose I often play a game in which I must answer a green light with a shout of "hazaah!" If, outside the context of the game, you flash a green light in my eyes, and I instinctively yell "hazaah!", then I would say that my yell was not purposeful (and perhaps not intentional).

So far, I have not said anything about physical systems. I think the above applies to minds in general. However, the low-level functions of expectation and preference are well within physical possibility, and it looks like the brain performs these very functions.

Not all brains do this, and not all physical systems possess intentionality. I find it odd when dualists point to physical systems that have no intentionality (and no expectation or preference), and use such examples to claim that humans cannot be fully physical. Most physical systems (including most computer systems) lack expectation regarding what-ifs, and lack preference for an outcome, so we should not expect them to have intentionality and purpose. However, human brains do have these abilities, so there's no conflict.

At 10/25/2007 04:06:00 PM , Blogger Rino said...

Hi Doctor Logic,

I forgot about this discussion. Sorry. To me, it seems like you are focusing on the consequences, rather than on the intrinsic nature of purposiveness.

To use your hockey example: you put an 'H' on the calendar, and it has purposiveness because you anticipate seeing the 'H' in the future, and you anticipate it representing a hockey game. But what if you forget about the game? (I forget about things on my calendar all the time, e.g., responding to your post here.) So, you never see the 'H' on the calendar again, and you never think of the hockey game again. Does this mean that your placing the 'H' on the calendar had no purpose behind it, since no anticipated consequences came about?

Suppose you respond 'it still has purpose because of the anticipation at that moment'. This is getting closer. However, can't you think of examples where you do something on purpose, without anticipating anything further from it? For example, suppose I go to the moon and put a flag in the ground that has my name on it, but have no intention of ever returning, or ever telling anyone that it is there. I did it on purpose, for I wanted it to be there. But I didn't anticipate anything further. Certainly there are more mundane examples, but my point is simple: we can act on purpose without acting with anticipation.

Purposiveness is as irreducible as qualia and understanding. I am just not sold on trying to break it down into smaller, simpler parts. Perhaps you can convince me though.

At 10/26/2007 09:00:00 AM , Blogger Doctor Logic said...

Hi Rino,

For example, suppose I go to the moon and put a flag in the ground that has my name on it, but have no intention of ever returning, or ever telling anyone that it is there. I did it on purpose, for I wanted it to be there.

I would say that you planted the flag because you expected that it would make you feel better than if you didn't plant the flag, or because you expected it would answer questions like "How would I feel if I saw a flag on the Moon with my name on it?"

I think that if you had no expectations at all about the results of your flag-planting, it would not have been a purposeful act. If you did it for no effect (on yourself or anything else), and you weren't even curious about the act, then it would be accidental or involuntary behavior.

At 10/29/2007 10:10:00 PM , Blogger Blue Devil Knight said...

Can a creature have propositional attitudes without 'understood intentionality'? I can imagine it is possible: a creature engages in rational inference about the world, but never develops a theory of mind and doesn't have language. Indeed, I think dogs have propositional attitudes (they desire food, believe it is in the kitchen, so run to the kitchen). (Though to be fair to dogs, they do seem to learn word reference: don't say 'squirrel' around my dog.)

