Sunday, December 31, 2006

Lippard on Original vs. Derived Intentionality

Originally posted Monday, July 25, 2005:
Victor: Suppose human beings build a robot that is capable of responding to verbal commands and building internal representations of its environment through cameras and by moving around and manipulating its environment. You would say that its internal representations of objects in its environment have only derived intentionality, which comes from human intentionality. Now suppose all human beings cease to exist, while the robot continues to function. The robot is then discovered by some other intelligent alien species with original intentionality. That species learns how the robot works, and infers that it has internal representations which correspond to objects in its environment.

What would you say about those internal representations during the time when there are no humans and before the aliens discover it? If the derived intentionality comes only from the human intentionality, would you say that there is no representation going on anymore? Or does the derived intentionality survive the extinction of humans? Likewise, what would you say about the representations after the aliens discover it? Do they cause representation to begin anew?

On my view, it doesn't matter how the causal structures which cause covariance of the internal structures of the robot in correspondence with objects in its environment originate; that is all there is to representation, and the robot has representations which refer regardless of who else exists. How would you describe these situations?

-- Jim Lippard

Jim: My concept of original intentionality requires that it be the intentionality of some conscious thinking subject to whom the objects are represented. We can attribute an as-if intentionality to systems where there is this kind of covariation between inner states and objects in the world, but unless it is recognized by the thinking subject, it is merely as-if intentionality and not original intentionality.
posted by Victor Reppert @ 12:30 PM

1 Comment:

At 9:39 PM, Dogtown said…
Let's say the robot has internal states that were designed (using some nice external sensors) to reliably covary with temperature as well as object identity (e.g., rocks, other robots, plants, animals). It meets another robot after walking a bit, and says, "There is a big warm rock around the corner, behind the plant." The other robot goes around the corner for a bit, and comes back saying, "No, it is a big warm lion, not a rock. I lifted the plant out of the way and saw it was a lion." The first robot says, "Oh, thank you: there is a big lion around the corner."

The robots have internal states that were designed to covary with states of the world (in fact they have the function of picking out or referring to things in the world), these internal models can be wrong (i.e., these processes can malfunction), and the internal states can be revised in light of new evidence. I would say these robots' utterances have semantic properties (i.e., the individual terms have referents) which confer upon their utterances truth values. This would still be the case if all humans were killed, or if all non-robots were eradicated.

If you would not want to say their utterances have truth values, what epistemic or semantic properties would you give the utterances? Are their utterances no different than the babblings of a brook?
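
To make the mechanism concrete, here is a minimal Python sketch of the scenario (the class, the method names, and the dictionary-based world model are illustrative assumptions, not a claim about how such robots would actually be built): internal states are set by sensor readings and so covary with the world, they can misrepresent (the occluded lion registers as a rock), and they are revised in light of another robot's testimony.

class Robot:
    def __init__(self, name):
        self.name = name
        self.world_model = {}   # location -> current internal description

    def perceive(self, location, sensor_reading):
        # The internal state is set by (and so covaries with) the sensors;
        # if the sensors are occluded or fooled, the state misrepresents.
        self.world_model[location] = sensor_reading

    def report(self, location):
        return f"There is a {self.world_model[location]} {location}."

    def revise(self, location, correction):
        # Update the internal state in light of another robot's testimony.
        self.world_model[location] = correction

r1 = Robot("first")
r1.perceive("around the corner", "big warm rock")   # the plant hides the lion
print(r1.report("around the corner"))               # a false representation
r1.revise("around the corner", "big warm lion")     # the second robot's correction
print(r1.report("around the corner"))               # revised, now true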

VR: 12/31/06
Dogtown, they are different, in that these states would have as-if or derived intentionality. The words on this page have meaning, but the meaning exists relative to the people who use them. Still, there has to be a difference between what something means from the standpoint of the subject, and what something means from the standpoint of something other than the subject.

7 Comments:

At 1/01/2007 11:08:00 AM , Blogger Blue Devil Knight said...

I don't see how you can just stipulate that they have "as if" intentionality. It seems to me these robots Dogtown discusses have an interesting cognitive life.

The history of how the robots were designed, or that they are robots, shouldn't really matter, as we could make up a naturalistic story about how these robots could have evolved. They seem like pretty simple little creatures. Dogtown has a good example (which needs only a little more embellishment) of a group of agents with minimal intentional states (by 'minimal' I mean probably not conscious, but with states that refer and can be true or false). It even seems one of them made an inference in his example.

Dogtown's question is still a good one:
If you would not want to say their utterances have truth values, what epistemic or semantic properties would you give the utterances? Are their utterances no different than the babblings of a brook?

 
At 1/01/2007 07:22:00 PM , Blogger Tim said...

BDK,

The noises the robots make covary with the environment in ways more interesting (perhaps) than the noises a brook makes. But so what? You can make the robots out of string, used chewing gum, and old beer cans if you have enough of the raw materials. (Having or lacking a cognitive life can't be simply a matter of size and building materials.) It's just staggeringly implausible to say that the resulting contraptions have interesting cognitive lives. Why think they have cognitive lives at all?

 
At 1/02/2007 08:50:00 AM , Blogger Blue Devil Knight said...

If they have any properties that are the mark of intentional states (e.g., reference, or utterances/internal states that are true or false), I find that very interesting. I don't think they would be conscious, but they would have internal states that were true or false. Note that if you read the original example, it is crucial that the internal states don't always covary with the environment. This allowed for the possibility of error (the robot thought there was a rock, but it was a lion; or, in more anemic terms, the communication-governing internal state that picked out the object as a rock was eventually updated with the information that the object in question was not a rock, but a lion).

This connects indirectly with my earlier comments on birdsong learning and neural representation.

 
At 1/03/2007 12:30:00 PM , Blogger Victor Reppert said...

It seems to me that you can have mental qualities which exist relative to a template of understanding. The words on this page are not mental in any interesting sense, but they have semantic properties relative to the understandings of conscious beings. Similarly, when Fritz beats Kramnik, no one is suggesting that Fritz is conscious, but we do think that the moves Fritz makes have meaning relative to how other people understand chess. The meaning wouldn't exist if the people didn't, but the meaning, relative to the template provided by the people, isn't nothing.

 
At 1/08/2007 01:59:00 PM , Blogger Blue Devil Knight said...

But if the robots in the example were evolved, there would be semantic properties with no designers with conscious 'templates.'

 
At 1/09/2007 05:27:00 PM , Blogger Victor Reppert said...

I still don't see how you can have a fixed intentional "referent" in this kind of case. Why "lion" and not "undetached lion parts"?

 
At 1/19/2007 12:01:00 PM , Blogger Blue Devil Knight said...

I think there are different ways of dealing with the problem of individuating coextensional terms. One, you can just say there are equivalence classes of contents (so 'lion' and 'undetached lion parts' are indistinguishable in the robot systems). In the simple system, such equivalence classes are still all you need to get truth and falsity, perhaps the most fundamental semantic property.

However, if the system also has the concepts 'undetached', 'lion', and 'parts', then we can determine whether the activated content is the simple one (lion) or is fixed by combining the three component concepts. So the contents do not differ in their referent, but are distinct because of compositional differences (as Dretske puts it).

At any rate, it seems to me that they have internal representational states with truth values, even if we can't come up with unequivocal translations of the individual contents in our richer language. If you ask the robot if there is a lion there, it will say yes. If you ask it if there are undetached lion parts, it will say "I don't know" (assuming it doesn't have the requisite component concepts or the requisite grammatical skills to combine them).
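
As a toy sketch of that behavioral difference (hypothetical names throughout; checking the term against a lexicon is just one simple way to model the missing component concepts):

class SimpleRobot:
    def __init__(self):
        # The lexicon holds only simple concepts: no 'undetached' or 'parts',
        # so the composite term cannot be parsed or combined.
        self.concepts = {"lion", "rock", "plant"}
        self.world_model = {"around the corner": "lion"}

    def ask(self, location, term):
        if term not in self.concepts:
            return "I don't know"   # lacks the component concepts for this term
        return "yes" if self.world_model.get(location) == term else "no"

r = SimpleRobot()
print(r.ask("around the corner", "lion"))                   # "yes"
print(r.ask("around the corner", "undetached lion parts"))  # "I don't know"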

Dretske discusses this, starting on p. 215 of Knowledge and the Flow of Information (he focuses on examples like 'square' versus 'four-sided equiangular polygon' and 'water' versus 'H2O'). In all these cases, we are dealing with coextensional contents with different compositional structure.

One thing: how is this a special problem for naturalism rather than theories of content in general?

 
