Friday, February 20, 2009

Wikipedia version of Hasker's Argument Against Causal Closure

  a. Human beings are rational to a significant (though highly imperfect) degree.
  b. If human beings are rational, there is an explanation for the fact that human beings are rational.
  c. There is an explanation for the fact that human beings are rational.
  d. If conscious experience is explanatorily irrelevant, there is no explanation for the fact that human beings are rational.
  e. Conscious experience is explanatorily relevant.
  f. If the physical realm is causally closed, conscious experience is explanatorily irrelevant.
  g. The physical realm is not causally closed.


8 Comments:

At 2/22/2009 11:40:00 AM , Blogger normajean said...

Does he mean (in part) that a causal closure theory precludes freewill?

 
At 2/22/2009 12:48:00 PM , Blogger Victor Reppert said...

Causal closure precludes libertarian free will, but so do a number of other doctrines not targeted by this argument.

 
At 2/23/2009 05:29:00 AM , Blogger normajean said...

Thanks, I see

 
At 2/23/2009 01:34:00 PM , Blogger Wakefield Tolbert said...

"Causal closure precludes libertarian free will, but so do a number of other doctrines not targeted by this argument."

Not to seem a bug in the vaseline, but why would this be the case of necessity?

--Wake

 
At 2/23/2009 01:36:00 PM , Blogger Wakefield Tolbert said...

Doctor Logic left me a note once that implied (or perhaps just stated--I can't tell just yet) that free will is all but irrelevant in his scenarios for conscious volitional entities like human beings.

If I can find his text, I'll forward it for review.

As it stands, I can't completely tell what he means.

 
At 2/23/2009 04:16:00 PM , Blogger normajean said...

Wakefield, I'd like to read the note, so if you find it please do hang it here. Dr. Logic is an interesting dude. Peace

 
At 2/24/2009 11:52:00 AM , Blogger Wakefield Tolbert said...

DL's full missive to me, related to a side argument about free will and determinism, is as follows. We started off with something else, but this was the result.
If anyone can help me weed through this and pick out where he's going, that would be helpful.
(I'm not as familiar with this kind of terminology as DL and the other interlocutors who visit this site):

Wakefield,

The evolutionary argument against naturalism is deeply flawed because individual beliefs do not evolve. What evolves are methods of coming to beliefs - belief-generating mechanisms.

Now, not all of these belief-generating mechanisms will be totally reliable, but we can theorize about which ones have survival advantage. Basically, a belief-generating mechanism that generates testable beliefs with material benefit will be selected for. So a belief about whether there will be good weather in the afternoon, or whether a mountain will be traversable, is likely to be reliable in an evolutionary model. On the other hand, belief-generating mechanisms about untestable claims will be far less reliable. The belief that one will go to heaven for fighting relentlessly in battle isn't testable, but it might aid in the survival of a group with such beliefs.

This doesn't mean that certain delusions are inescapable. It means that we can only escape them by leveraging the good reasoning to improve the bad. That is what formal and symbolic logic does. By doing logic on symbols that could stand for anything, and checking that logic against analogous material situations, we can reason reliably about almost anything.

Other arguments against naturalism which cite modern computers or other physical systems as examples of non-reasoning are pretty impotent. We have to define what intentionality is before we can say whether or not a system has it. And intentionality is linked to prediction and recognition. A thought is about another thing when I would recognize the conclusion relative to that thing. A thought about green grass is about green grass because I would recognize green grass, versus blue grass or versus green frogs. So I would say that it is true that Deep Blue isn't reasoning "about" chess, but that's irrelevant to the question of whether machines could reason about stuff. Deep Blue does not recognize stuff. It doesn't see a corner of a chess board (with no pieces on it), and predict there is a whole chess board that might have pieces on it. Deep Blue is a mechanism optimized to generate responses. It cannot learn, nor recognize in a manner required for intentionality.

Finally, there's a knock-down argument against the kind of free will you're advocating. A deterministic thought would be a thought that was determined by past events or thoughts or by constants. Any part of that thought that is not determined by the past or by constants is determined by nothing whatsoever. The set of stuff that is in the past or timeless is exhaustive. (This assumes that the future does not already exist to cause the past.) Therefore, the logical complement of determinism is total, fundamental randomness. There's no third choice. Even if minds were not material, this same situation would occur. I have never seen a response to this argument. If you have one, I would love to hear it.

 
At 2/26/2009 07:19:00 PM , Blogger normajean said...

I'm not sure I know what DL is saying either. Perhaps he's keeping the article vague enough so he can custom tailor responses to any objections made against it. Yikes!

 
