Wednesday, July 02, 2008

The Richard Carrier Fallacy

NormaJean: I agree that I have been away from a lot of these discussions. I don't know if I could call it "getting off scot-free." Sometimes, if you haven't been in a debate that has gone on for a while, it's a little difficult to pick up the thread deep in the discussion.



But I think the discussion didn't go exactly in the direction I would have taken it. The following is from Doctor Logic's first reply, on which I want to push him a little.



Suppose I consider the proposition "My dog is on the porch." How do I know what this proposition is about? If I see the neighbor's cat on the porch instead of my dog, how do I know that the proposition is false? And how can I assert the proposition in advance of actually making the observation?

There's a simple and elegant (and natural) solution. Intentionality is about my own cognitive abilities, and my cognitive abilities are in a physical brain that does exist as the thought is processed. A proposition has meaning in light of me knowing (approximately) what experiences would increase or decrease my confidence in the truth of that proposition. That is, the proposition isn't a physical reference to actual dogs and porches (which may not exist), but is about my presently-existing faculties for recognizing dogs on porches if those things existed.

Rocks and CD-ROMs lack intentionality because they lack thought and recognition. Deep Blue lacks intentionality because Deep Blue does not formulate propositions about its abilities to recognize states of affairs. It just recognizes them. For example, Deep Blue does not ponder the proposition that it will lose the game (in some abstract way), even though it is capable of recognizing a great many specific ways of losing a match. Deep Blue's intelligence is fish-like or insect-like. It does not have the ability to recognize its own mental states.

So the argument that we cannot see how one lump of matter could be about another just doesn't hold up under scrutiny. If the first lump has recognition and expectation, and the ability to recurse those abilities on its own faculties, then that lump can have intentional thought.



What I am failing to see here is how this is an account of intentionality in non-intentional terms. Intentionality is "about my cognitive abilities," you say. If these cognitive abilities presuppose intentionality, then we are shuffling intentional concepts around and calling it an explanation of intentionality. OK, you mention the brain, but that by itself doesn't make it a physicalist explanation. Meaning no disrespect to you or Richard, I would have to call this the Richard Carrier Fallacy. (OK, I'm asking for someone to come up with a Victor Reppert Fallacy. I know that.)



DL: A proposition has meaning in light of me knowing (approximately) what experiences would increase or decrease my confidence in the truth of that proposition.



This seems to me just backwards. Meaning is determined by what experiences would increase or decrease confidence in the truth of the proposition?? You have to know what the proposition means before you can figure out what experiences would make it more or less likely to be true.

Further, it is naturalists like BDK who insist most firmly that the presence of intentionality doesn't require the entertaining of propositions. Indeed, nothing can entertain propositions unless it possesses intentionality to begin with. Deep Blue doesn't recognize its own mental states and doesn't ponder propositions, but these capacities are exactly what you are trying to explain in non-intentional terms.



Here's the problem I am getting at with the numbered-premise argument. Add up all the physical, non-intentional states you want, don't help yourself to any states that are intentional, and see if it is possible for these non-intentional states to entail some intentional state or propositional attitude. It looks to me as if, whatever physical information you give me, I can deny the existence of any propositional attitude, or affirm the existence of an alternative propositional attitude, without contradicting myself. Hence, if there is a truth about what the propositional attitude is, the explanation of it in terms of a physical, non-intentional substrate is incomplete.



Again, you introduce terms like recognition and expectation. If those terms mean "recognizing that p" or "expecting that p", then you aren't explaining the propositional state, you are slipping the propositional state in through the back door and calling it an explanation. Otherwise, what do you mean by "recognition" and "expectation?"


11 Comments:

At 7/02/2008 11:27:00 AM , Blogger Blue Devil Knight said...

Further, it is naturalists like BDK who insist most firmly that the presence of intentionality doesn't require the entertaining of propositions. Indeed, nothing can entertain propositions unless it possesses intentionality to begin with.

I guess I do believe that when I'm wearing my eliminative materialist's hat.

But note that that is certainly not typical of naturalists. Most naturalists are deeply in love with propositional attitude psychology.

 
At 7/02/2008 12:15:00 PM , Blogger Victor Reppert said...

Whether or not you accept propositional attitude psychology, I think it is certainly possible to distinguish between the presence of intentionality and the presence of a propositional attitude. The concepts seem to me to be logically distinct, and I have no eliminativist proclivities whatsoever. But even if you don't make this distinction, you still can't explain intentionality in terms of the presence of propositional attitudes. That's an intentional concept if there ever was one.

 
At 7/02/2008 12:34:00 PM , Blogger Blue Devil Knight said...

Good point, though the charge becomes one of explaining propositional attitudes (or at least their associated contents) naturalistically.

 
At 7/02/2008 07:45:00 PM , Blogger Doctor Logic said...

If these cognitive abilities presuppose intentionality, then we are shuffling intentional concepts around and calling it an explanation of intentionality.

So what you're looking for is a description of a mechanism operating in a physical world, and correspondences between functions of the mechanism, and what we call propositions, or intentionality.

I'm only too pleased to oblige, but first, a preface to my remarks. I haven't mapped out every correspondence (or every possible one), and the actual correspondence probably won't be exactly as I lay it out here. My goal is merely to show the plausibility of the idea.

Also, I wasn't able to make this "blog-sized", so I apologize for its length.

Suppose there is a basic cognitive element that can learn, recognize and abstract. For example, when the mechanism is trained on particular rabbits, it learns to recognize almost all rabbits by their long ears, fur, and body plan. By doing this it creates a representation of the universal for rabbit.

Suppose that this cognitive element also regurgitates the distinctive parts of patterns when recognizing part of the pattern. So when you see rabbit ears poking out from behind a fence, you trigger the rabbit element, and think of the rest of the rabbit body plan and the furriness of rabbits, etc.

Now a system built from many of these elements in a hierarchy can represent and predict just about anything. If we have a cognitive element that has been trained on melodies, that element regurgitates the remainder of a melody after hearing just a few notes. This is a form of prediction about what will happen in the future.

I leave it to you to imagine more complex networks of these elements. For example, elements can recognize that the sound of the word "rabbit" is connected in experience with the abstraction of rabbits. Some element somewhere will then connect memories of eating rabbit stew with the universal concept for rabbit.
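To make the element less abstract, here is a minimal Python sketch. Everything in it (the Element class, the train and complete methods, the exact-match overlap scoring) is invented purely for illustration; a real element would be statistical and hierarchical rather than a literal lookup:

```python
class Element:
    def __init__(self):
        self.patterns = []                    # whole patterns seen in training

    def train(self, pattern):
        self.patterns.append(tuple(pattern))

    def complete(self, partial):
        """Recognize a partial input and regurgitate the best-matching
        stored pattern, or None if nothing overlaps at all."""
        partial = set(partial)
        if not self.patterns:
            return None
        best = max(self.patterns, key=lambda p: len(partial & set(p)))
        return best if partial & set(best) else None

# Trained on a particular rabbit, the element later fills in the whole
# body plan from just the ears poking over a fence.
rabbit = Element()
rabbit.train(["long ears", "fur", "four legs", "short tail"])
print(rabbit.complete(["long ears"]))
# -> ('long ears', 'fur', 'four legs', 'short tail')
```

The point is only the shape of the behavior: store whole patterns, and let a fragment call up the rest.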

Okay, propositions.

Now suppose you hear "the cat is on the mat." It's easy to see that such a network of elements will recognize "cat" and "mat" and "on" in the sentence. The network will regurgitate memories of cats and mats and on-superpositions.

Now it is obvious to me that "the cat is on the mat" is about cats on mats to this particular network. The network was trained on cats and mats and English, the network has abstract representations of these things, and the network makes a prediction about what sensory inputs would correspond to the proposition.

In this model, it is also possible to have thoughts about things without thinking a corresponding proposition. If I see rabbit ears poking over the fence, I can think/predict that there is a rabbit behind the fence without necessarily generating language representing that fact. Again, this thought is about rabbits and fences because particular rabbits and particular fences were used to train the system's abstractions.

Let's take a more complex example: 1+1=2. In common-sense terms, anyway. Suppose I train the system on one apple. Then two. I add and take away apples by sliding them in and out of view. Then I do the same thing with oranges and rabbits and so on. There will be a set of elements in the network that learn to count small numbers of objects. These nodes don't care what the object is, because the kind of object gets abstracted away in the same way that the color of black rabbits and white rabbits gets abstracted away by the element for "rabbit". So the network learns that 1 object "plus" 1 object leads to 2 objects. To the network, this becomes true of any kind of object we think of (as long as we're not talking about clay or water or mergeable objects).

So the network knows that if the objects are atoms (like apples or rabbits), then 1+1=2 no matter what the atom. What is 1+1=2 about? Well, it's about all particular atoms that have been observed, and anything we care to abstract from those particulars. It's also about the internal constraint that we're dealing with atoms.
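Here is a deliberately trivial sketch of that abstraction step; count_in_view is an invented name, and the whole point is that the kind of object is thrown away before counting:

```python
def count_in_view(scene):
    # The "counting element" never inspects what kind of object it sees,
    # only how many: the kind has already been abstracted away.
    return len(scene)

for kind in ["apple", "orange", "rabbit"]:
    one, another = [kind], [kind]    # slide objects in and out of view
    assert count_in_view(one) + count_in_view(another) == count_in_view(one + another)

print("1 plus 1 leads to 2 for every kind of atom tested")
```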

I leave it to you to imagine the almost limitless possibilities with such a system.

Cognitive elements like the ones I have described have been invented. They have not been built into networks that are intelligent in the way we are, but they could be.

Deep Blue does not use such elements. Deep Blue cannot learn to play chess. Chess is autonomic to Deep Blue. Deep Blue does not abstract, and it cannot learn to predict from abstractions it does not have.

In conclusion, I am saying that intentionality is a causal link between cognitive states and experiences (or predicted future experiences) of the system. A thought about a rabbit is about rabbits because it literally was trained from, and is connected to, perceptions of particular rabbits. It is an inductive inference about rabbits. I am agreeing that propositional attitudes are distinct from intentionality, and that this model accounts elegantly for both.

I hope this helps to make my position more clear.

 
At 7/03/2008 10:43:00 AM , Blogger Shackleman said...

doctor logic begins with this: "So what you're looking for is a description of a mechanism operating in a physical world, and correspondences between functions of the mechanism, and what we call propositions, or intentionality."

and then offers this immediately: "Suppose there is a basic cognitive element that can learn, recognize and abstract."

Well, no. We shouldn't be supposing that, because that is precisely what is at issue! You might as well have said that what Dr. Reppert is looking for is an explanation of a mechanism that is cognitive. Then in answer you ask him to suppose a mechanism that is cognitive!

You would first have to define "cognitive" (without being circular). Then you would have to define this "element". What is this "element"? What is it made of? What about its particular construction or configuration allows it to be "cognitive" (according to your definition)?

doctor logic: "Cognitive elements like I have described have been invented."

Really? Can you supply us readers with some support for this declaration?

 
At 7/03/2008 01:48:00 PM , Blogger Doctor Logic said...

Shackleman,

Don't get distracted by the word "cognitive". You can remove the word cognitive from my post and it still stands. It's not magic sauce. I just mean a mechanistic or computing element.

As for the technology, it's called hierarchical temporal memory. The details can be found at Numenta's web site. Watch the video, and notice that the network automatically learns to recognize the pictograms without being told that there are pictograms.
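For a feel of the idea without the real machinery, here is a toy first-order sequence memory in Python. To be clear, this is not Numenta's API or algorithm, just an illustration of learning sequences and predicting what follows:

```python
from collections import defaultdict, Counter

class ToySequenceMemory:
    """Learns which symbol tends to follow which. Not Numenta's API or
    algorithm -- just a first-order stand-in for the idea."""

    def __init__(self):
        self.successors = defaultdict(Counter)

    def train(self, sequence):
        for a, b in zip(sequence, sequence[1:]):
            self.successors[a][b] += 1

    def predict(self, symbol):
        nexts = self.successors[symbol]
        return nexts.most_common(1)[0][0] if nexts else None

mem = ToySequenceMemory()
mem.train("CDECDECDE")    # a tiny "melody" of note names
print(mem.predict("C"))   # -> 'D': hearing a note recalls what follows it
```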

 
At 7/03/2008 09:37:00 PM , Blogger Unknown said...

Doctor Logic does an excellent job, and, if I may say so, a better one than mine, at explaining the physicality of various states of mind.

Regarding whether or not we can have actual, decisive intention, I do not believe that we can.

That would be the start of my disagreement with Reppert.

Some other concerns are as follows...

To argue that the existence of intention establishes the existence of god is an argument from ignorance, especially given the vast amount we do understand about the brain. All that we know about states of mind has been revealed to us by experiments dealing with the tangible aspects of cognition. Similarly, it seems the argument from consciousness could once have been called the argument from emotion, before science granted us an explanation of the mechanisms that cause us to be emotional.

On your bit with Carrier, I'm by no means a philosophical guru like the two of you. I do know, however, that Carrier is nothing short of brilliant, and the fact that he never makes the same basic mistakes that Reppert does leads me to believe that he is more likely to be correct. I have no problem with intentionality in non-intentional terms, because that seems to be how the brain operates. We feel that we have a choice as our brain cycles through our options, but ultimately we are moist robots whose parts will operate in certain ways under certain conditions, as I explain in my second link above.

For what it's worth,

JT

 
At 7/05/2008 12:46:00 PM , Blogger Victor Reppert said...

DL: Suppose there is a basic cognitive element that can learn, recognize and abstract. For example, when the mechanism is trained on particular rabbits, it learns to recognize almost all rabbits by their long ears, fur, and body plan. By doing this it creates a representation of the universal for rabbit.

VR: Stop right there. What is meant by "recognize?" Does it mean that there is some covarying behavior that is associated with the perception of the rabbit?

 
At 7/05/2008 06:38:00 PM , Blogger Doctor Logic said...

VR: What is meant by "recognize?" Does it mean that there is some covarying behavior that is associated with the perception of the rabbit?

Sure. Maybe the element responds with a 1 when exposed to the sight of a rabbit, 0 otherwise. Or 1 when exposed to the sight of a rabbit, 2 when exposed to the sight of a mouse, etc. etc. and 0 when exposed to the sight of something it cannot identify.

So recognize in this context means "classify".
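A minimal sketch of "recognize as classify", with made-up feature sets standing in for percepts:

```python
# Made-up feature prototypes standing in for trained percepts.
PROTOTYPES = {1: {"long ears", "fur", "hops"},       # rabbit
              2: {"whiskers", "fur", "long tail"}}   # mouse

def classify(percept, threshold=2):
    """Respond 1 for a rabbit, 2 for a mouse, 0 for anything it
    cannot identify (too little overlap with any prototype)."""
    best_label, best_overlap = 0, 0
    for label, proto in PROTOTYPES.items():
        shared = len(proto & set(percept))
        if shared > best_overlap:
            best_label, best_overlap = label, shared
    return best_label if best_overlap >= threshold else 0

print(classify({"long ears", "fur", "hops"}))   # -> 1 (rabbit)
print(classify({"scales", "fins"}))             # -> 0 (cannot identify)
```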

 
At 7/08/2008 06:16:00 AM , Anonymous Anonymous said...

" I think it is certainly possible to distinguish between the presence of intentionality and the presence of a propositional attitude."

Would you care to explain what you mean by this?
How, in practice, would you distinguish between the presence of intentionality and a propositional attitude?

 
At 7/08/2008 06:26:00 AM , Blogger Blue Devil Knight said...

anon: I just think he means it is possible that there exist nonpropositional representational contents.

For instance, nonconceptual content such as perceptual contents which arguably have intentionality. I discussed this some in this comment.

 
