Bruce Russell on Wielenberg
Bruce Russell reviews Erik Wielenberg's God and the Reach of Reason
The book is an interesting and valuable treatment of the arguments of Lewis, Hume, and Russell, mostly Lewis. The review offers a response to one of the lines of argument in the AFR, and simply says that Wielenberg has answered another. I think there are plenty of difficulties in the idea that intentional mental states (or, to be more specific, propositional attitudes) can evolve from non-intentional states, so long as we insist that the physical is mechanistic and closed, and that any mental state would have to supervene necessarily on the physical states. That would mean that there is a set of truths at the non-intentional level that entails some truth at the intentional level. I don't think such entailments are even logically possible. Pile up the non-intentional truths as high as the ceiling, and they won't entail an intentional truth (S believes that p). It will always be possible for the intentional state not to exist, or for multiple possible intentional states to be logically consistent with the state of the physical. (For example, a world physically identical to this one could be populated with zombies.) Given this, if we are in particular intentional states such as S believes that p, then there is something other than the physical that makes it the case that I am in this mental state as opposed to that one, or as opposed to no mental state whatsoever.
Russell offers an analysis of Lewis's argument that goes like this. If S knows that p, then
1) S believes that p, and
2) The complete cause of S's belief that p is the truth of p itself.
Hence if I believe that 2 + 2 = 4, then in order to know that 2 + 2 = 4, the cause of my belief that 2 + 2 = 4 must be the fact that 2 + 2 = 4, and that would be impossible given naturalism. However, says Russell, I know I will be dead on Jan. 1, 2100, but the truth of that belief is a future state, and so can't cause me to believe it; on this theory of knowledge, then, I can't know that I will be dead by that date.
Therefore the theory of knowledge is flawed, and hence the Lewisian AFR on which the argument is based is also flawed. However, in the case of my being dead in 92 years, the knowledge is not direct; it is a conclusion drawn from a principle of past-future resemblance (which Lewis actually thinks is rationally justified on a theistic world-view but not on a naturalistic one), plus the evidence we have concerning human lifespans in our collective experience. Clearly some corrections and/or clarifications need to be made to Lewis's "An act of knowing thus solely determined by what is known," which Russell is surely referencing. Nevertheless, the fact that we live in a world that renders it likely that we will die before the age of 150 seems to be evident to us, and a bridge to the future fact seems possible if we grant the naturalist the resemblance principle. But how do we get a bridge from ourselves as physical beings to the fact that 2 + 2 = 4, or to the fact that arguments of the form modus ponens are valid?
So I don't think the objections to the AFR found in this review work. I am glad to see Wielenberg's book getting some attention.
Labels: Erik Wielenberg, the argument from reason
112 Comments:
Do you agree that the naturalist's resemblance principle is a form of abstraction?
For example, if I see a rabbit, I can recognize other rabbits as being examples of the class of rabbits. In concrete terms, upon learning about a particular instance of a rabbit, the mind creates a filter that recognizes rabbits in general.
I think this is necessary for the naturalist's picture. Because if I see a hammer fall to the ground when it is released, I can only meaningfully predict that hammers will always fall to the floor when released if I can recognize a future instance of the event. I have to abstract and generalize from one hammer fall to conclude that hammers in general will fall.
Once this concession is made, we can construct numbers from abstractions of abstractions.
DL: How can abstractions exist in a world of physical concreta?
Just as a preface to this comment... I'm going to factor out issues of consciousness and qualia here. The issue is whether a mechanism can create abstractions.
If I have a mechanism that can recognize a thing, then the mechanism is an abstraction. (In fact, recognition implies recognition of a class. Every time I look at my cat I see something slightly different. So recognizing my cat implies an ability to recognize the class of views of my cat.)
Imagine a camera and a computer algorithm that can recognize rabbits. This system has created an abstraction of rabbits because it recognizes members of the class of rabbits. It recognizes rabbits it has never seen. We can also show that the mechanism would recognize rabbits that do not actually exist. For my purposes, this is an abstraction.
There exist mechanisms (computer algorithms) that automatically learn to recognize general patterns, thereby creating such abstractions. We can also place such mechanisms in a hierarchy, so that the top level mechanisms learn to recognize patterns in patterns, and create higher-level abstractions.
Now, I have not outlined a design for a fully reflective mind, but the question was whether you can create a concrete physical abstraction (in the sense of a system that can place particulars into general classes). Note that I'm not arguing here that this kind of abstraction is identical with our subjective experience of abstractions. What I am claiming is that there are physical mechanisms that automatically learn and create abstractions (and abstractions of abstractions), and that among these higher abstractions is the concept of a number.
Imagine an abstracting system that learns to recognize edges, corners, colors. On top of this, another abstracting system recognizes apples and oranges out of the patterns found in the previous layer. On top of this, another system recognizes one orange, two oranges, three oranges. Finally, another abstracting layer recognizes one apple or orange, two apples or oranges, three apples or three oranges, and so on. By combining layers in different ways, you can get a system that recognizes a number of objects.
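As a very loose sketch of this layering idea (all feature names and the two-layer design here are illustrative, not a real vision system): a first layer maps low-level feature bundles to object classes, and a second layer discards the class and keeps only how many objects there are, which is the crude beginning of a "number" abstraction.

```python
# Toy sketch of layered abstraction. Layer 1 classifies feature
# bundles as objects; layer 2 abstracts away the object class,
# keeping only the count. All names are hypothetical.

LAYER1 = {"red_round": "apple", "orange_round": "orange"}

def recognize_objects(features):
    """Layer 1: classify each low-level feature bundle as an object."""
    return [LAYER1[f] for f in features if f in LAYER1]

def recognize_count(features):
    """Layer 2: drop the object class, keeping only how many."""
    return len(recognize_objects(features))

scene = ["red_round", "orange_round", "red_round"]
print(recognize_objects(scene))  # ['apple', 'orange', 'apple']
print(recognize_count(scene))    # 3 -- a class-independent "three"
```

The point of the second layer is just that it responds to "three of anything," regardless of which layer-1 classes produced the three.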
I don't think the above directly threatens dualism, but it does threaten the AfR (or many variants of the AfR).
DL
I don't really have time for this, at least not for a long extended exchange. But the basic issue is what you mean by "recognize." Does "recognition" merely label a physical action that occurs under a given physical condition? If so, it falls short of the mental act that we associate with recognition. In other words, recognition in terms of the AfR refers to conscious recognition. Obviously, a thermometer "recognizes" temperature in some reactive physical sense but not in a conscious sense. Does a more complex response, such as that of a computer, necessarily amount to a conscious recognition? That is one of the bones of contention.
Darek,
I think I addressed the issue in my last comment.
I'm basically saying that there are physical systems that can perceive numbers. They can recognize when there are 3 of something, or when 3 objects are in a field of view, and so on. They can learn to recognize from experience that 1 + 1 = 2 (or perhaps 1x+1x=2x).
(As it happens, the brain is made of such mechanisms.)
This means that even if dualism is the case, our minds would be aware of mathematical concepts in the very same way they are aware of physical concepts.
With vision, there are multiple levels of abstraction: lines, corners, colors, shapes, 3D structures, and higher-level objects like letters, numbers, frogs, etc. This is neuroscientific fact. Now I assume you would say that this abstracting ability of the visual cortex does not contradict dualism because, even if the brain is seeing and recognizing a frog, there could be a supernatural thing in the brain that's required for us to think about that seen frog.
Well, here, I am saying that there are analogous physical mechanisms in the brain that are abstractions for numbers. The dualist may similarly claim that this is not a contradiction, and that the supernatural element is what allows us to have intentional thoughts about those "seen" numbers.
However, the OP isn't about whether dualists can integrate physical abstractions into their picture. The question was whether it is possible to find a physical correspondence between a thought about a number and numbers themselves.
Victor seems to concede that material minds can have thoughts about tables and chairs because there exist causal physical relationships between material minds and physical things by way of physical sensory (and abstracting) pathways.
Well, what I am saying is that there are physical abstracting pathways that can see folk arithmetic concepts like numbers. Hence, there is no barrier to the material mind being able to think about mathematics.
There are other questions like our ability to think about counterfactuals, but you know I have an answer for that too. :)
If thermometers are abstractions, they are at best what I'd call zeroth order abstractions. Suppose a thermometer has a calibration point at 80 degrees. What is being generalized by such an abstraction? The only thing being generalized in that case is the class of moments when the temperature is 80 degrees. That is, the thermometer matches on any moment that is 80 degrees. That's not much of an abstraction.
So let's not focus on zeroth order abstractions, but on sophisticated abstractions that require memory and computation.
DL
>>There exist mechanisms (computer algorithms) that automatically learn to recognize general patterns, thereby creating such abstractions.<<
What exactly do you mean by "recognize"? Do you mean that the mechanisms have a subjective conscious experience or do you mean that they generate certain symbol strings or other physical output? Can an informational output be identified with a subjective conscious experience?
Are you making the claim that once the causal claims that can be made are given, reference can be fixed? In other words, once I make all the causal statements relevant to what the word "rabbit" refers to, it then becomes logically transparent that it refers to "rabbit" and not to "undetached rabbit parts."
There seem to be some arguments suggesting that this is not the case.
DL: Victor seems to concede that material minds can have thoughts about tables and chairs because there exist causal physical relationships between material minds and physical things by way of physical sensory (and abstracting) pathways.
VR: No, I think these causal connections leave reference indeterminate.
Darek,
What exactly do you mean by "recognize"? Do you mean that the mechanisms have a subjective conscious experience or do you mean that they generate certain symbol strings or other physical output?
In this context, I mean they have a certain physical output.
You think machines can't have subjective experience, and I think they can, but this is not the question at hand.
This thread is not about whether machines can have subjective experiences. The argument that physical systems don't have qualia isn't the AfR. That's a different argument (e.g., like Chalmers' argument).
The question here is whether the requirement that a physical reasoning system be physically and causally connected with what it is reasoning about would prevent such a system from reasoning about mathematics. The answer here is that there is no barrier because a machine can reason about physical abstractions. Or, put another way, a machine can reason about mathematics (or counterfactuals) by reasoning about machines that represent abstractions.
Victor,
Are you making the claim that once the causal claims that can be made are given, reference can be fixed?
Yes.
You seem to be referring to Quine's indeterminacy of translation (IOT). Well, IOT is only an argument against being certain of a translation, not against the likelihood of having the right translation.
Imagine that there's a circuit inside my brain that recognizes rabbits in the sense that the neuron for rabbit fires when a rabbit is seen, and doesn't fire otherwise. (We don't have to imagine very hard because this is pretty much what happens in the brain.) Now what if I am defining "rabbit" implicitly to mean "that which triggers my rabbit-recognizing neurons." Isn't this a case of fixing reference?
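Such a circuit can be sketched as a simple threshold detector (the feature names and the threshold here are purely hypothetical, meant only to illustrate "reference fixed by whatever triggers the detector"):

```python
# Hypothetical "rabbit neuron": fires iff the stimulus shares
# enough rabbit-typical features with a stored template. A crude
# model of fixing reference as "that which triggers this detector".

RABBIT_FEATURES = {"long_ears", "fur", "twitchy_nose", "hops"}

def rabbit_neuron(stimulus, threshold=3):
    """Fire (return True) when the stimulus shares at least
    `threshold` features with the rabbit template."""
    return len(stimulus & RABBIT_FEATURES) >= threshold

print(rabbit_neuron({"long_ears", "fur", "twitchy_nose"}))  # True
print(rabbit_neuron({"fur", "scales"}))                     # False
```

On this picture, "rabbit" refers to whatever class of stimuli drives the detector above threshold, which is why it fires for rabbits it has never seen.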
DL
>>In this context, I mean they have a certain physical output.<<
>>The question here is whether the requirement that a physical reasoning system be physically and causally connected with what it is reasoning about<<
When you "recognize" an abstract truth as a result of reasoning, is that recognition a subjective experience for you (something occurring in your mind) or is the recognition strictly something you say or some other physical action you take?
Please note that the assertion that you speak as a result of recognizing an abstract truth is not the same as identifying the recognition as the utterance--to say that the fire produced smoke is not the same as saying that fire "is" smoke.
Darek,
When you "recognize" an abstract truth as a result of reasoning, is that recognition a subjective experience for you (something occurring in your mind) or is the recognition strictly something you say or some other physical action you take?
I don't see the relevance of this question to the thread (since the AfR isn't about qualia), but I'll bite.
Generally, conscious recognition of an "abstract truth" is a subjective experience for me. Nonetheless, it seems possible that I might have subconscious recognition of some abstract truths.
Just to illustrate the irrelevant nature of this issue... philosophical zombies (if they were possible) would still be reasoning about things, even if they lacked subjective experiences to go along with their recognition.
DL
>>Generally, conscious recognition of an "abstract truth" is a subjective experience for me. Nonetheless, it seems possible that I might have subconscious recognition of some abstract truths.<<
I don't see how "conscious recognition" can ever be "subconscious recognition." "Recognition" with the qualifier "conscious" must always be a subjective experience, wouldn't you say?
Moreover, usually we mean "conscious recognition" when we talk about recognizing an abstract truth. When we say that Newton "recognized" that gravitational force varied with the inverse square, we generally mean a conscious recognition--not something that occurred before he ever consciously thought about it.
Put it this way, reasoning often--even if not always--entails conscious recognition of abstract truths. If this kind of "recognition" properly falls within "reason" and if it defies naturalistic analysis, then the AfR goes through.
Darek,
Put it this way, reasoning often--even if not always--entails conscious recognition of abstract truths. If this kind of "recognition" properly falls within "reason" and if it defies naturalistic analysis, then the AfR goes through.
You're mixing up arguments.
The argument from subjective experience aims to show that consciousness cannot be explained by reductionist naturalism, and, hence, that the mind is more than natural laws.
The AfR aims to show that intentionality cannot be explained by reductionist naturalism, and, hence, that the mind is more than natural laws.
Here you are saying that if we assume that intentionality requires consciousness, and assume that consciousness cannot be explained by reductionist naturalism, then the mind is more than natural laws.
Well, that's not the AfR. That's the argument from consciousness. The argument you are speaking of has nothing whatsoever to do with intentionality. You could have made the same argument substituting "what it feels like to see red" for "intentionality".
If consciousness were the only important ingredient in the AfR, then there would be no point in my being here describing how material minds can recognize, predict, and have thoughts about mathematics.
DL
>>You're mixing up arguments.
The argument from subjective experience aims to show that consciousness cannot be explained by reductionist naturalism, and, hence, that the mind is more than natural laws.<<
First, the AfR is not simply an argument from intentionality. It has several subarguments or "prongs." The first couple of chapters of C. S. Lewis's book _Miracles_ make this clear. He refers to the non-naturalistic character of "an act of knowing" that in context is a conscious recognition.
Second, arguments from consciousness simpliciter do not invoke the kind of recognition we are discussing. A dog is in some sense conscious but it doesn't recognize that 2+2=4 as an abstract truth.
The fact that consciousness is a necessary ingredient in one of the subarguments grouped under the umbrella of the AfR doesn't take that subargument off the table. For a fact, we do consciously recognize abstract truths and this recognition stands apart from physical behavior (output).
Darek,
The fact that consciousness is a necessary ingredient in one of the subarguments grouped under the umbrella of the AfR doesn't take that subargument off the table.
Well, I don't think the presence of qualia is properly the AfR. But let us suppose that it is. You can't invoke a completely different argument under the umbrella to prop up one of the others. You can always point to some overarching category as your umbrella, but that doesn't shield individual arguments. For example, you can't shield the AfR using the Cosmological Argument, or vice versa, just because they both fall under the umbrella of "arguments for the supernatural".
So, as it stands, I see no response to my argument. Material minds are not prevented from thinking about mathematics or counterfactuals, even if a material mind requires a causal connection to X in order to think about X. This answers the AfR with respect to intentionality.
DL
>>Material minds are not prevented from thinking about mathematics or counterfactuals, even if a material mind requires a causal connection to X in order to think about X.<<
That's fine, assuming you are talking about "thinking" that can occur apart from consciousness. But even if there is something that occurs unconsciously that might be called "thinking," it is not normally what we would consider in the context of reason as a mental activity.
Lewis argued that we become aware of certain abstract truths as a result of consciously contemplating other truths or facts. And this is not an oddball interpretation of "reasoning," but a common sense one.
There is an arm's length connection with qualia arguments. Do we remove our hand from a hot stove because of our awareness of the hot surface under our fingers? Or is it a neuromuscular reaction that is accompanied by experience such that we mistakenly give the experience a causal role?
In the case of AfR, do we arrive at a conclusion because of the experienced awareness that certain facts entail it? If the awareness or experienced component of this process is denied a causal role (as in the case of feeling the heat of the stove), then reasoning as we ordinarily understand it evaporates.
Darek,
In the case of AfR, do we arrive at a conclusion because of the experienced awareness that certain facts entail it?
There's only a problem with naturalism if there's a contradiction. There isn't.
Under naturalism, thinking is a mechanistic process in which a conclusion is causally connected to the corresponding premises that entail it. Now, if you could show that material minds must always contradict logical entailment, then you would have a case. You haven't done that.
The point I was originally making is this. If a machine can unconsciously recognize abstract numbers and relations, then it can be aware of abstract truths like 1+1=2. Or be aware of the transitive property of equality. This places awareness of abstractions on the same level as awareness of physical features like temperature. And no one doubts naturalism on the basis that material bodies might not reliably register temperature the way human bodies do.
This is critical. It means that abstract thinking can feature valid inferences, even if it is not conscious abstract thinking. The unconscious thinker may not be consciously justifying his thoughts, but conscious justification isn't necessary for correctness. Examples: A radar guided missile can be making correct inferences, even when it is not aware of its own inferences. A mouse can reliably run in from the cold even if it isn't consciously aware that it's cold in the same way a human is.
Moreover, there's nothing in our experience that tells us that a conclusion must be wrong if a valid inference precedes consciousness of the validity of that inference.
What we know is that we humans ought to be able to consciously see the validity of conclusions after the fact. This doesn't mean consciousness of entailment is necessary to valid thinking in all species/minds. It only makes conscious recognition of entailment normative for human thinking. Indeed, consciousness may be necessary to reasoning in our species, given our physiology. However, that doesn't make consciousness necessary for all species to make valid inferences.
Of course, nothing I have said should be construed as the statement that naturalism cannot explain consciousness in humans.
DL
>>The point I was originally making is this. If a machine can unconsciously recognize abstract numbers and relations, then it can be aware of abstract truths like 1+1=2.<<
Going back up the thread, I pressed you on whether "recognize" is simply a physical action or behavior or is rather (ordinarily, at least) a conscious subjective experience.
When you say that a machine can "recognize" abstract truths, are you saying that it can generate a character string, switch on a device, etc.?
To analogize concerning qualia, when we feel (i.e., become aware of) severe pain we wince and scream. But wincing and screaming are not what the feeling of pain is. We can separate the physical responses of wincing and screaming from the awareness of pain that causes them. We can imagine wincing and screaming (even very lifelike wincing and screaming) occurring in the absence of any awareness of pain.
The same is true of devices. We can imagine a computer, calculator, etc. generating a character string, blinking a light--whatever physical response it might be--in the absence of what we experience as conscious awareness of an abstract truth.
>>This places awareness of abstractions on the same level as awareness of physical features like temperature. And no one doubts naturalism on the basis that material bodies might not reliably register temperature the way human bodies do.<<
Above, you simply equate "awareness" with registration. Is there an "awareness" that cannot be identified with response, i.e., registration? Of course there is, and in fact conscious awareness is what we usually have in mind when we use the word.
Darek,
Sorry for the delay.
When you say that a machine can "recognize" abstract truths, are you saying that it can generate a character string, switch on a device, etc.?
Essentially, yes.
I am saying that a machine could recognize what it meant for Fermat's Last Theorem to be true, and could recognize what it means for 1+1=2. Note that this is different from simply being able to calculate 1+1=2. I'm talking about abstraction in the sense of "generalization". It can generalize from 1 apple + 1 apple = 2 apples, and 1 brick + 1 brick = 2 bricks, to 1 + 1 = 2 for a limitless class of entities.
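That generalization step might be sketched, very loosely, like this (the data and the `generalize` function are illustrative only, not a real learning system): the machine observes the same numeric pattern across different kinds of things and discards the kind, keeping the abstract rule.

```python
# Toy sketch: "learning" that 1 + 1 = 2 as a generalization over
# observed instances, rather than as a built-in arithmetic rule.
# Each observation is (kind, a, b, a_plus_b).

observations = [
    ("apple", 1, 1, 2),
    ("brick", 1, 1, 2),
    ("orange", 1, 1, 2),
]

def generalize(obs):
    """If the same numeric pattern holds for every observed kind,
    drop the kind and keep the abstract rule; otherwise give up."""
    patterns = {(a, b, c) for (_, a, b, c) in obs}
    if len(patterns) == 1:
        return patterns.pop()  # e.g. (1, 1, 2): 1 + 1 = 2 for anything
    return None

print(generalize(observations))  # (1, 1, 2)
```

The output is kind-free: the rule applies to a limitless class of entities, which is the sense of "abstraction" at issue.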
To analogize concerning qualia, when we feel (i.e., become aware of) severe pain we wince and scream. But wincing and screaming are not what the feeling of pain is. We can separate the physical responses of wincing and screaming from the awareness of pain that causes them. We can imagine wincing and screaming (even very lifelike wincing and screaming) occurring in the absence of any awareness of pain.
First, this isn't relevant to intentionality. To say that intentionality requires consciousness is question begging. Intentionality is what it means to have a thought about something, not how it feels to have a thought about something.
Second, how do you know when another person is feeling pain versus faking it? Are there any tests you would do that would increase or decrease your confidence in that fact? I think there are. So I don't think it's obvious that the two things are different.
Third, I dispute the claim that qualia are epiphenomenal. Qualia are, at the very least, an awareness of an awareness, or an awareness of an ability. It's also perfectly reasonable to suppose that every experience on our network of neurons is going to feel like something. The neurons that recognize cheese are going to be connected to the neurons that recognize pizza which are connected to the neurons that recognize your favorite pizza parlor which are connected to the neurons that recognize the photo of Babe Ruth on the wall of the pizza parlor, etc. As each of these neural circuits fires, you'll vaguely feel the original sensations associated with them all. So when people talk about there being something it feels like to recognize a thing, they're not saying anything particularly special. Any machine that works this way is going to produce qualia.
The same is true of devices. We can imagine a computer, calculator, etc. generating a character string, blinking a light--whatever physical response it might be--in the absence of what we experience as conscious awareness of an abstract truth.
Again, I'm not talking about simple computation. I'm talking about making a generalization. To say that such a machine could not recognize or understand an abstract truth due to lack of human-like consciousness is to confuse understanding with feelings about understanding. It would be the claim that something doesn't understand something else unless it feels good or bad (or whatever) about its understanding.
Above, you simply equate "awareness" with registration. Is there an "awareness" that cannot be identified with response, i.e., registration? Of course there is, and in fact conscious awareness is what we usually have in mind when we use the word.
It's not just a matter of registration. It's prediction, and the triggering of thoughts (and possibly emotions) about the prediction.
Consider a man who looks at a photo of a cat. What is the difference between the man being aware that he was looking at a photo of a cat and the man not being aware of it?
You seem to be arguing that awareness of any particular is beyond analysis. But if that were the case, we would be unable to determine that a man was unable to be aware of pictures of cats.
I just cannot agree with this mentality that says we cannot and should not analyze things. I think that the man is aware of the cat when he sees the image and triggers his cat recognition circuit, thinks of his pet kitty, and feels happy (or something like that). If the man has never seen other animals, then he might not be aware of what he was looking at. Or if he has a defect in his vision where he cannot recognize 2D representations of 3D objects, then he might be unaware. These things are all physical.
Indeed, we can even go to a first-person view. If I show you a picture, how do you know you are aware of what it depicts? I show you the picture of the cat. How do you know you are aware of what the picture shows versus unaware? It must be because the image triggers thoughts and predictions about the image. For example, you might predict that if this is a picture of a cat, you ought to see a leg there, and a tail there. And when you see those things, your theory about the cat is reinforced (even if subconsciously). Again, it all comes back to the physical.
DL
I had ceased to check this thread, then I found your new post. Perhaps you, likewise, will see this eventually.
>>I am saying that a machine could recognize what it meant for Fermat's Last Theorem to be true<<
You force me to keep revisiting the issue of what we ordinarily consider "recognition" of this kind. We normally consider that a student recognizes Fermat's Last Theorem to be (likely) true to the extent that he or she consciously understands it to be so.
An "unconscious recognition" of such a fact or probability by math students is just not something that a math teacher would find relevant.
Moreover, a physical response (correct answer from a student, etc.) is evidence for a subjectively experienced state, not identifiable with that state. Obviously, someone can recognize the truth of an abstract proposition and not give any outward sign of it. Or, conversely, a person could respond in a way appropriate to recognition in the absence of recognition.
To take this into the realm of evidence . . .
What evidence can you imagine that would cause you to doubt that you are to a significant extent conscious at the time you are considering the evidence?
What evidence can you imagine that would lead you to doubt that you are capable of arriving at rational conclusions, again, at the time you are considering that evidence?
Darek,
We normally consider that a student recognizes Fermat's Last Theorem to be (likely) true to the extent that he or she they consciously understands it to be so.
There are lots of things that we do with limited conscious thought. For example, I can drive a car, obeying correct traffic signals, while talking on a cellphone. Typically, when I do this, I have no recollection of my journey. Yet, by your reckoning, I don't "recognize" the intersections, traffic signals, or other traffic while I am on the phone. If you don't want to use the word "recognize", what word would you prefer?
What evidence can you imagine that would cause you to doubt that you are to a significant extent conscious at the time you are considering the evidence?
What evidence can you imagine that would lead you to doubt that you are capable of arriving at rational conclusions, again, at the time you are considering that evidence?
I don't see the relevance of these questions. I have not been arguing that mechanisms cannot be conscious. I have been arguing that mechanisms can have a physical causal link with abstractions.
The questions you ask are loaded with words that are ambiguous. You clearly intended to make answering the question impossible, and thereby establish that all evaluation of evidence requires consciousness. However, there are flaws in this approach.
What is consciousness? Let's suppose it is awareness of one's own thinking processes. In that case, the first question actually reads:
What evidence can you think of that would cause you to think about your thinking process and conclude that your awareness of your own thinking process is non-existent?
or simply
How can you think about your thinking process and conclude that you're not thinking about your thinking process?
It seems impossible to answer, but only because the question was loaded from the start. Just because a process doesn't think about itself reflectively doesn't mean it isn't reasoning correctly. It is question-begging to assume that correct inferences require conscious reflection.
I'll give you an example. Suppose you ask me whether my car has a full tank of gas. I have checked my car's gas gauge, and it reads FULL. Consequently, I answer yes. Was this a correct inference?
We would generally say that it was. But I can question whether it was a correct inference. Do gas gauges actually tell you how much gas is in the tank? Is the needle stuck? Or the sensor? Have I done a lot of driving since I last filled up the tank?
Obviously, critical thinking requires challenging and testing one's beliefs, and a general effort to infirm them. However, the issue at stake isn't whether my original response was a critical assessment of whether my car has a full tank of gas. The issue is whether my original response was a correct inference. I'm sure we would agree that a critical assessment is better than the first-order inference, but is the first-order inference wrong? I don't think that it is. My answer is causally connected to gas tanks being full.
If I had read the gauge and it said EMPTY and I then believed the tank was full, then that would have been a bad inference.
So there are two ways to assess an inference. The first is, did it actually give the correct answer, and is the process for getting that answer causally connected to the answer? The second is, was the answer the result of a critical assessment?
Of course, the problem with critical assessments is that they take time and are often open-ended. When I see a red traffic signal, I don't have much time to do a critical assessment (e.g., an ethical analysis) about whether I should stop.
I think what you are saying is that any decision that is not critically assessed is not rational. Is that right?
DL
No, you have misunderstood my point.
>>Obviously, critical thinking requires challenging and testing one's beliefs,<<
Can the type of thinking you refer to above, "critical thinking," be unconscious? I don't think so. Responding to traffic signals may not require full conscious attention, but there is a species of recognition--the kind that occurs during critical thought--that does require consciousness. And it is to that kind of "recognizing" that the AfR is directed.
That's the first point.
The second is that propositions about the natural world are established through empirical observation, theory formation, testing, reevaluation, etc. These propositions must be open to doubt in order to be testable.
If it is a fact about the natural world that you are consciously aware at a given moment, then it should be open to doubt. You should be able to conceive of disconfirming evidence.
I have trouble imagining evidence that would tend to disconfirm that I am conscious at the time I am considering that evidence. Likewise, what evidence could disconfirm to me that I possess the power of rational analysis at the time I am analyzing that evidence?
"If it is a fact about the natural world that you are consciously aware at a given moment, then it should be open to doubt."
Perhaps you could elaborate on this. Doesn't make much sense.
And what is the difference between 'consciously aware' and 'aware'? Is there unconscious awareness?
Anon
I'm referring to facts in the scientific or quasi-scientific sense. Such facts or propositions must be testable to have such a status.
Your current physical location is something you know with a high degree of probability. But you can imagine evidence being presented to you indicating that you were mistaken, have been misled through an elaborate hoax, etc. Therefore your current location is subject to confirmation.
A proposition for which no disconfirming evidence is conceivable is not subject to confirmation. That you are significantly conscious at the present moment and that you are significantly capable of rational thought fall into a special category because of the way they are known.
I agree that awareness usually means conscious awareness. But DL has been making a case that not all forms of awareness or "recognition" entail consciousness. So I felt the need to qualify. Whether or not there is awareness in the absence of consciousness, surely there is a type of awareness that is inseparable from consciousness.
Darek,
If conscious means reflective, then critical thinking is a form of conscious thinking. I have no problem with that.
But let's be clear on the implications here. The issue of intentionality would break down along similar lines, i.e., a system can think intentionally without reflection in the same way a system can reason and recognize without reflection. And so I assume you mean the AfR again addresses itself to reflective thought in both instances?
If it is a fact about the natural world that you are consciously aware at a given moment, then it should be open to doubt.
This is a misunderstanding about naturalism. Naturalism does not (and cannot) prove its own axioms. And axioms of rational thinking are included in that set.
So, again, you confuse two issues. One issue is whether physical minds can think according to unprovable axioms. The other issue is whether the unprovable axioms themselves are correct.
You chastise naturalism for not being able to prove its own axioms, when no system can do that, least of all your own.
You say correctly that naturalism functions through falsifiable theories, but the axioms of naturalism and rational thinking are not examples of such theories. What are? Well, there are physical theories about how the physical human mind thinks according to those unprovable rational axioms. And we can imagine evidence that would make us doubt that our physical brains can think in that way. The evidence is that physical minds do work, but we can imagine that the data went the other way (e.g., if brains were composed exclusively of fat cells).
DL
>>The issue of intentionality would break down along similar lines, i.e., a system can think intentionally without reflection in the same way a system can reason and recognize without reflection.<<
To say that someone can "think" while entirely unconscious is to take the concept of "thought" so far outside normal usage that I don't believe you can press it in argument. And if "thinking"--again in common usage--requires a significant degree of consciousness, then "thinking about" this or that does likewise.
>>Naturalism does not (and cannot) prove its own axioms.<<
Is it one of the axioms of naturalism that you know yourself to be conscious and rational as you read this sentence? And that you know it incorrigibly?
I suppose the question is, "Does our knowledge of some facts apart from empirical discovery tell us anything besides those bare facts? Does it challenge the completeness of a philosophy that attempts to account for the world entirely in terms of empirically discoverable, confirmable facts?"
BTW, I am going to post a comment on the other site in the discussion of "miracles" and prior probabilities that I hope you'll take a look at.
Darek,
Is it one of the axioms of naturalism that you know yourself to be conscious and rational as you read this sentence? And that you know it incorrigibly?
Short answer: Yes.
I suppose the question is, "Does our knowledge of some facts apart from empirical discovery tell us anything besides those bare facts? Does it challenge the completeness of a philosophy that attempts to account for the world entirely in terms of empirically discoverable, confirmable facts?"
Well, as I've said before, it is never the case that we "know" we're rational because we cannot rationally justify such beliefs without circularity. And that goes for any system. Assumptions have to be made, no matter which philosophy you adopt.
The question is really about bias and testability. Naturalism is clean and efficient, and it incorporates only those few axioms that are necessary for rational thinking. In contrast, dualism and supernaturalism incorporate a collection of extraneous axioms that are transparently designed to defeat testability by granting basicness to ideologically-prized concepts. Consciousness, intentionality, good/evil, explanation, and free will are all examples of things that are held by supernaturalists to be basic, beyond understanding or analysis. For if any of these phenomena are found to be a consequence of natural mechanisms, mainstream Christian theology falls like a house of cards.
BTW, I am going to post a comment on the other site in the discussion of "miracles" and prior probabilities that I hope you'll take a look at.
Yes, I'm looking forward to reading it.
Two points, in order of relevance. The first I'd be very curious to see if Victor has discussed it, or what he thinks.
1. Someone can consciously go through a bad inference:
If A, then B
Not A.
Therefore, not B.
They can perceive this as a good inference and feel its logical pull very strongly. Let's stipulate that consciously their experience is the same as someone consciously going through a correct inference (e.g., modus ponens).
If this is possible (and I think it is), then it seems the question of logical thought in the argument from reason is really not very strong. The feeling of going through an inference doesn't depend on the soundness of the inference. It's just another conscious experience and can be right or wrong of its target. Hence, the argument from reason doesn't seem to have unique content over and above the argument from qualia to non-naturalism.
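As an aside, the invalidity of the schema above (denying the antecedent) can be checked mechanically by enumerating truth values. This short Python sketch is purely my own illustration, assuming the standard material-conditional reading of "if A, then B":

```python
from itertools import product

def implies(p, q):
    # Material conditional: "if p then q" is false only when p is true and q is false.
    return (not p) or q

# Denying the antecedent: from "if A, then B" and "not A", infer "not B".
# The form is valid only if no assignment makes both premises true
# while the conclusion is false.
da_counterexamples = [
    (a, b) for a, b in product([True, False], repeat=2)
    if implies(a, b) and (not a) and not (not b)
]
print(da_counterexamples)  # [(False, True)] -- premises true, conclusion false

# Modus ponens: from "if A, then B" and "A", infer "B". No counterexample exists.
mp_counterexamples = [
    (a, b) for a, b in product([True, False], repeat=2)
    if implies(a, b) and a and not b
]
print(mp_counterexamples)  # []
```

The counterexample (A false, B true) is exactly what the felt "logical pull" fails to track, which is the point: the phenomenology of inferring is the same either way.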
2. The discussion b/w DL and DB on epistemology etc is not very naturalistic (which shouldn't worry DB). It's a bit foundationalist for my tastes, and probably for most naturalists' tastes.
I think we can justify our scientific methods (whether they be methods of measurement, logic, or mathematics) by using the scientific method itself. This turns Descartes on his head, but it also better describes how the growth of knowledge actually works.
It sounds circular, but we can avoid circularity as long as we don't restrict our method of justification for method M to method M itself. Luckily, science doesn't limit itself to a single method for most questions, but tries to incorporate a diverse set of different methods that coevolve.
We can doubt mathematics while keeping our belief in first order predicate logic. We can doubt a belief in a particular system of logic while maintaining our belief in the deliverances of sense perception.
Instead of the picture of a bunch of "derived" knowledge sitting on a bunch of "basic" knowledge, I think in reality things are more messy: complicated scientific claims can be used to justify even basic perceptual beliefs, basic perceptual beliefs can be used to justify the choice of logics, etc.
We come to a problem and use the best methods we have learned up until that point to attack it. It would be strange to try to pretend we don't have these methods to attack the problem, to attack it "anew" or whatever like Descartes thought. It would be like trying to solve differential equations but limiting yourself to facts about world history when attacking the problem.
Yes, we can also attack the problem of the best method, but even this shouldn't be approached outside of our best methods. Perhaps some people think that our best methods, without exception, should be modeled as a mathematical proof, and that this axiom-inference-theorem format needs to be part of any method, and all knowledge should be part of a single uber-framework with a few simple assumptions (axioms) from which everything else can be derived (perhaps with the help of "sense experience"). But this picture of things seems to have created a lot of really useless philosophy, suggesting it is false.
This topic is orthogonal to the question of consciousness, conscious access to internal inferences, and whether internal thought is propositional/nonpropositional, conscious/nonconscious, or various weighted combinations of these possibilities.
DL
>>DB: Is it one of the axioms of naturalism that you know yourself to be conscious and rational as you read this sentence? And that you know it incorrigibly?
DL: Short answer: Yes.<<
Just to get this clear--it is an axiom of naturalism that the conscious and rational states of a physical system can be known apart from any empirical evidence?
Granted, I decide based on empirical evidence that other people are likely rational and conscious. But my understanding of the terms "rational" and "conscious" derives from my experience of states that I can know myself to be in apart from empirical evidence. I observe that when I am in these states I can walk, talk, etc., although I can also distinguish the walking and talking from the internal states that these behaviors (often) accompany.
Forgive me for zeroing in on this apart from your comments about circularity. Circularity doesn't loom large in the above. It seems to me that the critical point is the impossibility of separating 1) the characteristics of a state from 2) how that state can be known to exist.
I don't see how naturalism can accommodate a state with the characteristic that it can be known apart from empirical evidence.
The efficiency of the hypothetico-deductive method is not at issue, just the completeness of the naturalistic worldview.
BDK
>>They can perceive this as a good inference, feel its logical pull very strongly.<<
>>The feeling of going through an inference doesn't depend on the soundness of the inference. It's just another conscious experience and can be right or wrong of its target.<<
I can't resist pointing out that your own line of reasoning about inference must invoke something other than a subjective impression of persuasiveness. We depend on "correctness" both existing and asserting itself with enough time and thought.
DB: it's a reductio, so I don't have to believe any of it. :)
BDK,
Instead of the picture of a bunch of "derived" knowledge sitting on a bunch of "basic" knowledge, I think in reality things are more messy: acomplicated scientific claims can be used to justify even basic perceptual beliefs, basic perceptual beliefs can be used to justify the choice of logics, etc..
I don't favor this picture. It seems like a fuzzy approach. Like throwing one's hands up and saying that "the accounting is too complex, so let's dispense with the accounting."
I prefer the foundational approach because, even if there is ultimately some arbitrariness in the selection of axioms, it is, at least, an accounting.
In this picture, the AfR is an attempt to show that naturalism's books are cooked. However, the AfR is an example of very sloppy accounting, and we can't see how sloppy it is until we get rigorous. I'm convinced that the fundamental flaw in the AfR (or DB's version of it) is that it confuses the axioms of naturalism with empirical observations.
I do agree with your point number 1. The feeling of having gone through an inference is not a validation of its correctness. I spent years debating philosophy with a couple of chaps. It turned out that they didn't even know what rationality was, and believed a conclusion was rational if it was merely a considered opinion.
Darek,
Just to get this clear--it is an axiom of naturalism that the conscious and rational states of a physical system can be known apart from any empirical evidence?
No, of course not. You are talking about some form of physicalism that holds as its primary axiom that everything is physical. I don't think anyone holds to this as axiomatic, do they?
Naturalism, like every other rationalist worldview, starts with axioms of rationality: consistency/logic, induction, and the incorrigibility of experiences (qua experiences). No one can be rational without these things.
Consequently, one cannot use naturalism to prove that our minds can think logically, that induction is true, or that my experiences are really my experiences. Any proof of these things would inevitably be circular in any system (including yours).
So, of course, the naturalist model of a physical mind would not prove any of these axioms. It could show physical minds can
* think logically, and often prefer to do so.
* think about things concrete and abstract.
* possess subjective morality based on desires and tastes with evolutionary advantages.
* gain evolutionary advantage from rational thinking in a naturalistic world.
* etc.
None of these possibilities would show that the axioms of rationality were "true" or normative.
Physicalism (of the a posteriori sort) reigns because experiments show that physical minds CAN do all of the above.
Maybe I should ask you the question... what do you think are the axioms of naturalism?
"So, of course, the naturalist model of a physical mind would not prove any of these axioms. It could show physical minds can
* think logically, and often prefer to do so.
* think about things concrete and abstract.
* possess subjective morality based on desires and tastes with evolutionary advantages.
* gain evolutionary advantage from rational thinking in a naturalistic world.
* etc.
None of these possibilities would show that the axioms of rationality were "true" or normative.
Physicalism (of the a posteriori sort) reigns because experiments show that physical minds CAN do all of the above."
Rather, they show that human beings can do all of the above.
There is no such thing as a physical mind. Just as there is no such thing as a spiritual mind. It is conceptually incoherent to identify the mind with any kind of substance.
Of course, humans are one kind of physical substance that exists in this world. But that is a rather mundane fact.
Anonymous,
There is no such thing as a physical mind. Just as there is no such thing as a spiritual mind. It is conceptually incoherent to identify the mind with any kind of substance.
I think you know what I mean. By "physical mind" I refer to a mind that reduces to the configurations and motions of physical particles and fields.
"I think you know what I mean. By "physical mind" I refer to a mind that reduces to the configurations and motions of physical particles and fields."
Yes, I do know what you mean. That is why I think it is conceptually incoherent.
I don't think you are going to have much success reducing the mind to the "motions of physical particles and fields".
DL
>>DB: Just to get this clear--it is an axiom of naturalism that the conscious and rational states of a physical system can be known apart from any empirical evidence?
DL: No, of course not. You are talking about some form of physicalism that holds as its primary axiom that everything is physical. I don't think anyone holds to this as axiomatic, do they?<<
I'm not sure how I'm supposed to take the "no" answer above. You said that your own incorrigible knowledge of your own states of consciousness and rationality are axiomatic under naturalism. (That's where you said, "Short answer, yes.")
Incorrigible knowledge is not the product of evaluation of evidence.
Therefore you know of your own conscious, rational state apart from evidence.
Therefore, it would seem to be axiomatic, given your earlier comments, that the conscious, rational state of a physical system can be known apart from evidence--in the case of the system itself. You, a physical system (use your new definition of physical if you like) in a significantly conscious and rational state, know yourself to be significantly conscious and rational apart from evidence. If you so know your own state, then that state can be known (by you) apart from evidence.
Are you having trouble following the line of reasoning above?
DL:
We can study perceptual systems and quantify conditions under which they tend to work, when they tend to make errors, that sort of thing. We can scientifically study perception-based belief-forming mechanisms and this can be used to give support to said beliefs. This is in contrast to those who would want to say that basic perceptual beliefs are foundational (i.e., not justifiable).
I understand the lure of foundationalism, as it seems more rational on a first pass (at least, when your view of rationality is based on axiomatic arithmetic systems, itself something that is not axiomatically obvious). To me, it seems to crash against the facts of our actual justification of beliefs. For an individual argument, of course, there are typically premises and conclusions. The move from this to foundationalism is what I don't like, the picture of a pyramid. I think there are a few pyramids (experimental methods, logic/math, perceptual reports, pragmatics), and none is permanently prior to another.
We start out with all these tools, and hone them over time as our experimental reach increases, our logical and mathematical understanding expands, etc. There is never a time when we step back and axiomatize everything. We axiomatize local bits in certain domains, where it is helpful to do so (e.g., math, logic, some branches of physics), but to then take that as the model of all knowledge seems wrong. Empirically wrong.
You could say foundationalist epistemology is a normative enterprise, that you want to improve our knowledge making machinery, improve on the nonfoundationalist methods employed by science and people until now. This was the goal of Carnap, for instance, and of course Descartes, and even Husserl. They all failed, but perhaps someone will pull it off.
I think what has worked for science has lessons for epistemology of science. Maybe it works for science because it is the best way to go about knowing about something.
DB,
You are trying to say that, in the case of minds, we know something about a physical system incorrigibly. You are saying implicitly that this is different from, say, knowing that an oven is hot.
However, the implicit distinction doesn't really exist.
When I read the thermometer or see the heat rising from the oven, those experiences are incorrigible. Whether or not the oven is actually hot, I am not mistaken in having experiences that are consistent with a hot oven. Of course, I shall assign some level of uncertainty in my belief that the 'hot oven model' explains my experiences (even if the uncertainty is very small).
Your suggestion is that the mind isn't analogous because if minds are physical systems, we would know the physical mind's mental state without any uncertainty.
The error in your argument is that you assume I have an axiom that minds are physical systems. Were that true, then I would have no uncertainty in my belief that minds are physical systems AND no uncertainty in what that physical system of the mind was thinking in my case (because both propositions would be true axiomatically). If that were the case, my belief that my mind was a physical system in a particular mental state would not be properly scientific. It would be an article of faith.
However, it is not axiomatic for me that minds are physical systems, and, therefore, not certain. The only thing we ever get out of science is confidence in models. Confidence in models never reaches certainty. In order for me to be certain that my thoughts are the result of a physical mind AND the thoughts of that mind are my thoughts, I must be certain that my model of physical mind is certain. And, of course, I'm not certain that minds are physical. I'm extremely confident, but not certain.
I have never seen an argument that beliefs about our present experience are incorrigible. I have seen it asserted many times, though, starting with Descartes.
Is there anything incoherent about believing that when I think I am having a red sensation, that I am wrong? Is there a difference between incorrigibility and necessity?
I think I am sometimes wrong about my experiences (e.g., I mistake a tickle for an itch, or an itch for a pain).
I like Dretske's (and Sellars') distinction between epistemic and nonepistemic 'seeing.' If that distinction holds, then incorrigibility doesn't enter into experience (other than in the trivial sense in which when I have experience X it is necessarily the case that I am having experience X). The beliefs about that experience seem quite corrigible, i.e., the 'epistemic' seeing, the seeing 'that.'
Sellars once said that experience is no more epistemically evaluable than a stone. Since experience is nonpropositionally structured, it can't enter into arguments, is not the right sort of thing to provide evidence for another thing. It may have a tight causal link with certain beliefs (e.g., my theory of mind and experience), but that is contingent and subject to revision.
I'm not sure I believe Sellars, but I am tempted to.
DL
>>The error in your argument is that you assume I have an axiom that minds are physical systems.<<
No, but I assume that you have a belief that minds are physical systems and that you are defending that belief.
I am simply pointing out that the states of physical systems are known only by evaluation of empirical evidence. But the mind's states of consciousness and rationality can be known apart from evaluation of empirical evidence--i.e., in the case of a mind's knowledge of its own states.
BDK
DL brought in specific sensations as incorrigible, not me. My argument is slightly different.
I'm saying that in order for you to evaluate evidence to the effect that you are not significantly conscious and rational, you would have to be significantly conscious and rational. This deprives these states of corrigibility from the standpoint of a Cartesian thinker--at least, it deprives them of the corrigibility we associate with knowledge of physical states.
DB: yes, that was aimed at DL's claim.
DL, I'd be curious what you think of Maddy's Second Philosophy (link below). I think she is pretty much right.
Link
DB,
I am simply pointing out that the states of physical systems are known only by evaluation of empirical evidence. But the mind's states of consciousness and rationality can be known apart from evaluation of empirical evidence--i.e., in the case of a mind's knowledge of its own states.
In both cases, we're talking about "experienced" evidence. The term empirical is a red herring.
What difference does it make whether I'm finding patterns in my experiences from my 5 senses versus finding patterns in my experiences of computation or mood?
Why should I group experiences of touch with experiences of sight, but not group these experiences with experiences of inference?
That's sort of an aside...
If I'm interpreting your argument correctly (this argument anyway), you are saying that thoughts can't be both incorrigible and provisional at the same time. However, this is never a problem.
To review, my experiences are axiomatic for me (they are assumed true). I cannot dispute my experiences. I can only dispute the models I devise to explain them. Physics and physical minds are examples of such models.
If I devise a physical model of my own mind, it is a provisional model. Having the model does not make the epistemic status of my experiences provisional. My experiences are axiomatic, just as they always were. If my provisional physical model of my mind predicts my thoughts, my experiences don't alter their status and become contingent (i.e., they don't contradict the original axiom).
I can do precisely the same analysis with the hot oven. Suppose my provisional hot oven model infers from the thermometer that the oven will be hot to the touch. We have a contingent model predicting the experience I will have upon touching the oven. This doesn't make my experience of touching the oven less incorrigible when I actually have that experience.
DL
>>In both cases, we're talking about "experienced" evidence. The term empirical is a red herring.<<
Empirical evidence is an irrelevant concept? To you? Or to science?
Here's what you said earlier:
>>By "physical mind" I refer to a mind that reduces to the configurations and motions of physical particles and fields.<<
Does scientific knowledge of physical particles and fields rest on other than empirical evidence?
You seem to be saying that "configurations of physical particles" can know things about themselves apart from the empirical evidence that is the gold standard of science.
In fact, these configurations not only know things about themselves apart from empirical evidence (moods, experiences) but things about the way they themselves--the configurations of particles and fields, I mean--had best move (e.g., in correct inference).
Obviously, the configuration of particles and fields that constitutes your mind is somehow inharmoniously engaged with the configuration that is my mind.
DB,
We're back to a straw man version of naturalism. Naturalism is not the view that everything that can be known must be known through the physical sciences. Sounds like you are describing some bizarre form of scientism, not naturalism or even physicalism.
Does scientific knowledge of physical particles and fields rest on other than empirical evidence?
It certainly does. It relies on axioms of rational thinking, and upon mathematical models, neither of which are empirical. It relies upon our experiences, not all of which are empirical.
It's true that we would know nothing of particles and fields without empirical evidence, but empirical evidence is only necessary for that knowledge, not sufficient.
Feeling the heat of a hot oven doesn't mean I have knowledge of the hot oven. Of course, I would not even know what an oven was without my physical senses, but I need more than those senses to understand what an oven is. I need the ability to think rationally and construct predictive models about experiences (whether those experiences be about the physical world or not).
You seem to be saying that "configurations of physical particles" can know things about themselves apart from the empirical evidence that is the gold standard of science.
See what I mean? This assumes that "the gold standard of science" is the way everything has to be known by a physical system. The assumption is false within naturalism, and I don't know anyone who makes that assumption.
DL
Do you know what the closure thesis is?
DB,
Do you know what the closure thesis is?
Sure, but the thesis isn't an axiom of naturalism. It is an inference from observations, and quite provisional.
Physicalism appears to be the case, a posteriori. Physicalism is not the starting point. For if we found human will could override conservation of energy, momentum, laws of thermodynamics, etc., then the closure thesis would be pretty well blown out of the water.
DL
Provisional or not, closure is widely seen as important to naturalism.
This is from the article on naturalism in the Stanford Encyclopedia:
"By the middle of the twentieth century, belief in sui generis mental or vital forces had become a minority view. This led to the widespread acceptance of the doctrine now known as the ‘causal closure’ or the ‘causal completeness’ of the physical realm, according to which all physical effects can be accounted for by basic physical causes . . ."
Notice that closure is called a "doctrine." Or consider the comment by Keith Augustine in his "A Defense of Naturalism" at infidels.org:
"Most reductive and nonreductive physicalists alike subscribe to the causal closure of the physical--the view that all caused events in the physical world must have physical causes (Van Gulick 1992, p. 160)."
The common disavowal of "mental essences" by naturalists is motivated by adherence to closure.
"Basic physical causes" (the words in the Stanford article) are empirically observable/detectable by virtue of their status as physical causes. Observation/detection is a cause-effect process. Above, you seem to acknowledge the private, privileged, subjective nature of certain acts of knowing--such as your knowledge of your own conscious and rational mental states.
Put simply, closure entails the empirical availability and therefore the objective status of causes and effects. But private subjective experience in principle lacks objective status, lacks empirical detectability, and therefore lacks power to have physical effects under closure.
Take this a step further. Your private, privileged, subjective awareness of logical grounds for your beliefs does not have causal power to produce physical brain states under closure. Voila: the AfR.
What are the axioms of naturalism? I have never heard of such things.
DB,
You're still fighting a straw man. The fact that closure is an inference from experiment has everything to say about its status.
But even if that were not the case, your argument still would not work...
The belief that everything is the result of physical cause and effect does not imply that every belief a (physical) mind acquires must have been acquired by that mind's application of the scientific method. It only implies that every belief a mind acquires is part of a chain of physical causality. It implies that every belief a mind acquires could be (in principle) accounted for in the physical chain of causality.
As a provisional physicalist, I believe that my subjective awareness is caused by physics, is a physical state, and has physical effects.
There is no contradiction in such a physicalism, but, of course, my foundational beliefs (e.g., about logic or induction) cannot be proven true using the physical model. If the beliefs are true, then the physical model is self-consistent. If the beliefs are false, then we would not be here talking. Again, the model cannot prove its foundational (or intuitive, if you like) assumptions about the axioms of logic, induction, etc. If the model could, that would be circular because the model was based on those assumptions in the first place.
Your claim that subjective experience "in principle lacks objective status, lacks empirical detectability, and therefore lacks power to have physical effects under closure" is unjustified. If a physical model predicted my subjective experiences, then my subjective experiences would be explained by a physical model.
At this point, I would expect you to say that subjective experiences are private by definition, but it can't be both "by definition" and referring to my actual subjective experiences at the same time. Because we don't know what subjective experiences are in any detail, you can't say that they're necessarily private. And if you define subjective experiences as necessarily private, then it may turn out that we don't have subjective experiences as you have defined them.
DL
>>If a physical model predicted my subjective experiences, then my subjective experiences would be explained by a physical model.<<
That's just it. A physical model predicts physical effects, not subjective experience. Subjective experience is layered on top of physical effects based on analogy with one's own private experience.
You might check out the Stanford entry on the Other Minds Problem to get an idea of how deep the difficulty is.
The physical structure of an organism in its physical environment provides all the vectors we would need to predict future physical behavior (excluding quantum and chaotic blurriness). Subjective experience adds no additional predictability and therefore has no naturalistic status.
We could even form predictive models based on past observed external behavior to project future behavior without any resort to the subjective experience or awareness of the subject.
If we substitute a behavioral definition of subjective experience or awareness, we stumble over the epistemic issue I outlined: you know that you are "aware" apart from knowing that you are behaving in any particular manner.
"The physical structure of an organism in its physical environment provides all the vectors we would need to predict future physical behavior (excluding quantum and chaotic blurriness). Subjective experience adds no additional predictability and therefore has no naturalistic status."
So my knowledge that you can't stand the taste of vanilla is not going to enable me to predict that when we go to the Ice Cream Shoppe you will not be ordering a vanilla ice cream cone?
Or my knowledge that your favorite type of movie is science fiction is not going to enable me to predict that you will be selecting a science fiction movie from the movie rental store?
Anon
>>So my knowledge that you can't stand the taste of vanilla is not going to enable me to predict that when we go to the Ice Cream Shoppe you will not be ordering a vanilla ice cream cone?<<
All you need to know is that I have avoided eating vanilla ice cream when it was offered to me in the past, or taken a single bite and refused more of it, etc. You need not infer anything about my subjective experience of the taste of vanilla.
All such cases can be addressed in terms of behavioral patterns rather than subjective private experiences.
A physical model predicts physical effects, not subjective experience.
If this were an argument, it would beg the question. If subjective experiences were identical to natural processes (either functionally or causally individuated), for instance, the above claim would be false.
Darek,
I was responding to what you wrote:
Subjective experience adds no additional predictability
Now you seem to be making a different claim:
All you need to know is that I have avoided eating vanilla ice cream when it was offered to me in the past, or taken a single bite and refused more of it, etc. You need not infer anything about my subjective experience of the taste of vanilla.
The question isn’t what you ‘need to know’ in order to make a prediction at all. It is whether or not knowledge of another’s subjective experiences ‘adds no additional predictability’ (as you put it).
It seems obvious to me that if I know that you detest vanilla ice cream rather than that you merely prefer chocolate or strawberry to vanilla then I would have more confidence in predicting that you will not select a vanilla ice cream cone.
And I am not inferring anything about your subjective experience in this example. You simply informed me that you can’t stand the taste of vanilla ice cream and so I know what your subjective experience of tasting vanilla is.
Randy
I think I am sometimes wrong about my experiences (e.g., I mistake a tickle for an itch, or an itch for a pain).
A tickle, an itch and a pain are all sensations. It makes no sense to say that you can mistake one sensation for another. If you have a tickle and then a few moments later the sensation of pain in the same place, then you’ve had two different sensations occur sequentially in the same place. You haven’t mistaken one for the other.
How could you substantiate that the tickle was really a mistaken pain?
Randy
Randy
>>It seems obvious to me that if I know that you detest vanilla ice cream<<
Again, why does "detest" here need to mean anything more than:
1) A pattern of avoidance on my part
2) An exhibition of behavior (such as saying that I don't like vanilla) that is associated with avoidance
?
Notice that a pattern of past avoidance is not a subjective experience. Nor is my saying that I don't like vanilla--that's a behavior, too.
You might scan my brain, but that just moves the issue from the behavior of the skeletal (so-called "voluntary") muscles of the body to the behavior of organs such as the central nervous system.
BDK
>>If subjective experiences were identical to natural processes<<
Natural processes are generally considered to be available, in principle, to objective observation in a way that subjective experiences are not. Excepting the subjective seems hopelessly (and needlessly) ad hoc.
Randy & BDK
Let's give BDK the benefit of the doubt here and concede that he was mistaken about a tickle being an itch. He had an initial impression about the tickle that proved to be wrong--assuming he is right about the nature of that initial impression.
But suppose, BDK, that you were wrong about the nature of the impression you first had. In that case, you may be wrong about your not having been right.
Darek,
>>It seems obvious to me that if I know that you detest vanilla ice cream<<
Again, why does "detest" here need to mean anything more than:
1) A pattern of avoidance on my part
If you wish to stipulate that as the meaning of “detest” then you can. But then we wouldn’t be talking about the same thing, would we? Most people don’t understand “detest” to mean a pattern of avoidance.
2) An exhibition of behavior (such as saying that I don't like vanilla) that is associated with avoidance
?
If I say I detest vanilla ice cream, then I am telling you what my subjective experience of tasting vanilla ice cream is. I don’t quite understand why you would assume that I don’t mean what I am saying.
Notice that a pattern of past avoidance is not a subjective experience. Nor is my saying that I don't like vanilla--that's a behavior, too.
Yes, saying is a behavior. In this case, it is also an assertion of one’s subjective experience. One can lie about one’s subjective experiences.
For example you could tell me that you detest vanilla ice cream. Later that day I happen to pass the Ice Cream Shoppe and see you inside eagerly wolfing down a dish of vanilla ice cream with a look of pleasure on your face. I would then know that your subjective experience was one of delight.
Let's give BDK the benefit of the doubt here and concede that he was mistaken about a tickle being an itch. He had an initial impression about the tickle that proved to be wrong--assuming he is right about the nature of that initial impression.
It is not a question of giving him the benefit of the doubt. It is a question of what it means to have a sensation. Sensations are different from perceptions. It is possible to make mistakes in perception. It simply makes no sense to say I mistook an itch for a pain.
Randy
DB:
Natural processes are generally considered to be available, in principle, to objective observation in a way that subjective experiences are not. Excepting the subjective seems hopelessly (and needlessly) ad hoc.
They could still be two different aspects of the same thing (like water and H2O). Naturalism doesn't imply that nothing is subjective, only that nothing is purely subjective, only available from a subjective point of view (so we could have one part of the brain monitored by another part of the brain).
Randy: I had the same sensation, but I was mistaken about the type of sensation. It felt like a mild pain, but as I attended to it I realized that isn't what it was.
To repeat, I haven't seen a sound argument that we can't be mistaken about our experiences.
blue devil knight,
Randy: I had the same sensation, but I was mistaken about the type of sensation. It felt like a mild pain, but as I attended to it I realized that isn't what it was.
If it felt like a mild pain then that is what the sensation was: a sensation of mild pain. A sensation is not a perception; it is just a feeling. It only exists as a feeling. It is not like a perception, where what one perceives can be independently examined by others (or more closely examined by oneself).
And you’ve already attended to it simply by feeling it to begin with. Of course, there is no reason why a mild pain can’t then become an itch or a severe pain. The fact that one sensation is followed by another does not entail that the first sensation was not real.
It is incoherent to say something felt like a mild pain but it really felt like an itch. A sensation is just a feeling.
To repeat, I haven't seen a sound argument that we can't be mistaken about our experiences.
Oh we can be mistaken about our experiences. I’m in agreement with you there. But you seem to be lumping all experiences together as if they were somehow the same thing.
Sensation is a passive power. It should not be conflated with the perceptual powers of the five senses.
Randy
Randy: The sensation was the same and I was wrong about it. I initially thought I felt a mild pain, but I realized that in fact I was wrong, that it was actually a mild itch. This happens not infrequently with subtle or less intense sensations.
Working with patients, such things happen often when they are trying to describe their pain experiences ("Oh wait, I said it was sharp and here, but I realize now it is more throbbing and located here."). There are even blind patients who deny their blindness (Anton's syndrome). I don't know what rides on this, frankly, but I have met many philosophers steeped in Locke/Descartes/Hume that have strong negative reactions to these kinds of claims.
I'm more of a Kantian I guess. Perhaps you can clarify what you mean by "pure" sensations. When I see a stone on the trail, is that a pure sensation?
Blue devil knight,
Randy: The sensation was the same and I was wrong about it. I initially thought I felt a mild pain, but I realized that in fact I was wrong, that it was actually a mild itch. This happens not infrequently with subtle or less intense sensations.
How can you substantiate that you were wrong? Is it logically or empirically impossible that a mild pain can be followed by a mild itch? Is an itch an object that can be studied by others in order to see if you perceived it correctly?
You perceive objects but you feel pain.
Working with patients, such things happen often when they are trying to describe their pain experiences ("Oh wait, I said it was sharp and here, but I realize now it is more throbbing and located here.").
But that doesn’t mean that originally they didn’t have a sharp pain located in the leg and now have a throbbing sensation that is in the lower back. Sensations can vary over time. I’ve had a toothache where for a time it felt like a dull ache and then became a throbbing sharp pain and then later became more of a dull ache.
I work with patients also. One factor that has to be considered when collecting a patient history is memory. Sometimes people simply have a faulty memory of what their sensations were in the past.
There are plenty of examples of people mistaking their experiences. A faulty memory is one. Mistaking the sound of a backfiring car for a gun is another. It's easy to demonstrate mistakes like that.
I’m having trouble seeing how you could demonstrate that a feeling of a sharp pain was really a feeling of a dull ache.
There are even blind patients who deny their blindness (Anton's syndrome). I don't know what rides on this, frankly, but I have met many philosophers steeped in Locke/Descartes/Hume that have strong negative reactions to these kinds of claims.
But now you are talking about a visual perception. A perception is not a sensation.
I'm more of a Kantian I guess. Perhaps you can clarify what you mean by "pure" sensations. When I see a stone on the trail, is that a pure sensation?
A sensation is a localized bodily feeling such as a pain, tingle, tickle or itch. I don’t recall saying they were “pure sensations.”
When you see a stone you are having a visual perception. When you feel a pain you are having a sensation.
I’m not sure I understand why you want to lump perceptions and sensations together. Do you not think there is a difference between feeling a pain and perceiving an object?
Randy
Randy
>>If you wish to stipulate that as the meaning of “detest” then you can. But then we wouldn’t be talking about the same thing, would we?<<
We would be talking about the same evidence on which to base a prediction--evidence that need have no reference to subjective experience and gains nothing in predictive value from including it.
You could reason: Darek has turned down vanilla ice cream in my presence and he says he doesn't like the taste. Therefore, it is probable that he has the subjective experience of unpleasantness when he eats vanilla ice cream. Therefore he will likely refuse it if I offer vanilla to him again.
Notice in the above informal reasoning we can dispense with the middle inference and diminish in no way the degree of inductive likelihood of the conclusion.
BDK
>>Naturalism doesn't imply that nothing is subjective, only that nothing is purely subjective, only available from a subjective point of view<<
Clearly, the subjective experience itself is available only from a subjective point of view. Call it an aspect. OK, the experience is an aspect that is only available from a subjective point of view.
Naturalism eschews mental essences and in general tends to take seriously only that which can be fitted into an objective, empirically available causal framework. Subjective experience gains no purchase, seems able to stake no predictive claim, within that causal framework (hence the difficulty of the other minds problem, the Chinese room, etc.).
The problem of the status of subjective experience peaks in the case of subjective experience of logical grounds for beliefs. The key quote from C. S. Lewis in this respect is on p. 25 of Miracles: "One thought can cause another, not by being, but by being seen to be [experienced as], a ground for it."
Randy:
The experience was the same, but I misidentified it as a pain when it was an itch. Upon realizing this, the experience didn't change. I don't see why you have so much trouble with this. I can remember the sensation, and there was no transition in it. What changed was my recognition.
I don't make a fundamental distinction between somatosensory and visual experiences. You mentioned that sensations weren't of the five senses, but somatosensory inputs are just another sense. What is it about them that makes somatosensory experiences special, such that we can't be mistaken about them?
Anton's syndrome is a pretty serious problem for anybody that thinks sensation-judgments are incorrigible. I'm not sure why you don't think there are visual sensations, but there are somatosensory sensations. 'Red' is typically taken as a prototypical sensation, for better or worse.
Darek: on your response to Randy, it still doesn't mean the experience talk isn't doing anything. I can dispense with temperature talk and instead talk of mean kinetic energy, but that doesn't mean temperature talk is not useful or predictive.
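[For what it's worth, the identity BDK's analogy trades on is the standard one from kinetic theory, not anything peculiar to the thread: for an ideal monatomic gas,]

```latex
% Mean translational kinetic energy per molecule, ideal monatomic gas:
\langle E_{\mathrm{kin}} \rangle = \tfrac{3}{2}\, k_B T
% so "temperature talk" and "mean kinetic energy talk" are interconvertible:
T = \frac{2}{3 k_B}\, \langle E_{\mathrm{kin}} \rangle
```

[Temperature is strictly recoverable from mean kinetic energy and vice versa, yet temperature talk remains useful and predictive — which is exactly BDK's point about reducibility not implying uselessness.]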
Clearly, the subjective experience itself is available only from a subjective point of view.
But that is an epistemic, not an ontological point, so it doesn't imply naturalism is false. The subjective experience "itself" could be a brain state that we can objectively observe even though when observing it we are not in that brain state. Just by observing something we shouldn't expect to enter that state ourselves.
We could even observe it and not know that it is a subjective experience from the subject's point of view (just as we could observe H2O and not know we are observing water because we don't have the right conceptual equipment to identify the two).
Note I am not dogmatic about this in that I think it is reasonable to be an antinaturalist because of qualia (the 'logical thought' dimension doesn't add anything for reasons I laid out above). I am more just blocking the argument that nonnaturalism follows from subjective experience.
Frankly, 'subjective experience' is not a very well-defined target. It is always defined using synonyms, so it isn't even very clear to me what the explanatory target is (whether it be a dualist or a naturalist flavored explanation of consciousness). To me, this indicates the psychology of subjectivity is in its infancy, sort of like the study of heredity before Mendel.
Blue devil knight,
The experience was the same, but I misidentified it as a pain when it was an itch. Upon realizing this, the experience didn't change. I don't see why you have so much trouble with this. I can remember the sensation, and there was no transition in it. What changed was my recognition.
There are experiences and then there are experiences. “Experience” is a rather broad and vague term. In this instance we are talking about the experience of sensations. And sensations are localized bodily feelings. And the only way we can identify a feeling is by the way it feels.
So let me rephrase what you wrote above:
“The feeling was the same, but I felt it as a pain when it was really a feeling of an itch.”
I’m not sure what that means. I’d probably remark that the feeling of pain is not the same as the feeling of an itch. How could you logically claim that it felt the same but it really felt like an itch rather than a pain? That seems incoherent to me.
Also, I’d have to wonder whether or not your recollection of this sensation was correct. Are you now going to assert that you always remember everything the way it actually happened? That one cannot have a false memory?
I don't make a fundamental distinction between somatosensory and visual experiences.
But there is a difference between perceptions and sensations. I’m not sure if this difference is “fundamental”; they all can be lumped into the category of experience. But experience is a very broad term.
You mentioned that sensations weren't of the five senses, but somatosensory inputs are just another sense. What is it about them that makes somatosensory experiences special, such that we can't be mistaken about them?
Let’s go back to your example of seeing a rock on the path. You think it looks interesting and so you walk over to pick it up. When you approach, it starts to hop away. It wasn’t a rock you were looking at but a toad. You mistook a toad for a rock. In this case it makes sense to say you misidentified a toad as a rock. This could be due to such things as poor light or weak vision. Maybe you left your glasses at home.
The subjects of our perceptions are physical objects. Perceptions are not merely passive but active. If we wish to see something we have to look in the direction of the seen object. We may have to approach closer to the object or shine a light on it in order to clarify what we are looking at.
Those perceived objects exist whether or not we do perceive them. They are located in space.
A sensation is a passive power.
Our sensations are localized feelings. They are only feelings. It is having the feeling of pain or the feeling of an itch that warrants me saying I have a pain or an itch. Can you have an itch without feeling an itch? It makes no sense to say these feelings exist apart from our sensation of them.
The feeling of pain is not the same as the feeling of an itch. Does it make sense to you to say that pain really feels like an itch? In other words, if someone tells you that they are having a pain are you going to deny their assertion and say they are really having an itch because the feelings of a pain and an itch are the same?
Sensations are localized, they can occur in the foot or the back or the teeth. But the cause of the sensation can be in a different location.
Again, perceptions and sensation are all experiences, but there are logical or conceptual differences between sensations and perceptions that I think one should pay heed to.
Anton's syndrome is a pretty serious problem for anybody that thinks sensation-judgments are incorrigible. I'm not sure why you don't think there are visual sensations, but there are somatosensory sensations. 'Red' is typically taken as a prototypical sensation, for better or worse.
Blindness or lack of vision has to do with perception.
Randy
Darek,
We would be talking about the same evidence on which to base a prediction--evidence that need have no reference to subjective experience and gains nothing in predictive value from including it.
You could reason: Darek has turned down vanilla ice cream in my presence and he says he doesn't like the taste. Therefore, it is probable that he has the subjective experience of unpleasantness when he eats vanilla ice cream. Therefore he will likely refuse it if I offer vanilla to him again.
Notice in the above informal reasoning we can dispense with the middle inference and diminish in no way the degree of inductive likelihood of the conclusion.
If Darek tells me that he detests the taste of vanilla ice cream, then I know what his subjective experience is. I would be more confident in predicting that he is not going to order a vanilla ice cream cone than simply knowing that in the past he has not chosen to order a vanilla ice cream cone. I would also have a better understanding of why he doesn’t order vanilla cones.
You are correct to characterize speech as a behavior. But I am puzzled as to why you think we can ignore the semantic content of speech. Language is used to convey information that can help in predicting and understanding human behavior. In the example, Darek can inform me of his subjective experience of tasting ice cream and of the fact that he has only ordered a vanilla ice cream cone one time in the past. Both pieces of information are helpful in predicting and understanding his future behavior.
Randy: we disagree. Your paraphrase is inaccurate. I'll be repeating myself if I respond, so interested parties can just see my previous post as my last say on this subthread.
blue devil knight,
Thanks for the interesting discussion.
Randy
Randy
>>If Darek tells me that he detests the taste of vanilla ice cream, then I know what his subjective experience is. I would be more confident in predicting that he is not going to order a vanilla ice cream cone than simply knowing that in the past he has not chosen to order a vanilla ice cream cone.<<
I don't think this additional margin of predictive confidence is justified in the least.
The associative link between a subjective experience you have had and words such as "detest" and "dislike" when applied to foods is no firmer than the link between those same words and the behaviors of avoidance, making a sour face when presented with the offending food, etc. In fact, it is the strength of the link between the word and certain behaviors that underwrites the link with your own subjective experience--which you then project onto me.
>>I would also have a better understanding of why he doesn’t order vanilla cones.<<
Now we are getting somewhere. The mere fact that the subjective experience seems to provide a level of explanation beyond Humean conjunction is exactly why it does not fit the naturalist model of reality. It also illustrates its importance to the Argument from Reason. The experience of grounds as warranting the conclusion is a crucial "why" issue.
BDK
>>I can dispense with temperature talk and instead talk of mean kinetic energy, but that doesn't mean temperature talk is not useful or predictive.<<
Are you claiming that temperature talk enhances empirical predictability and/or provides an entirely different level of explanation? Or is temperature talk simply a term of convenience approximating mean kinetic energy?
>>Just by observing something we shouldn't expect to enter that state ourselves.<<
The issue is the "enterability" (if I can coin that term from your comment) of a state. If we cannot "enter" it from the objective point of view, then naturalism cannot consistently endorse its "enterability" as a real feature of that state. The epistemic question cannot be severed from ontology here.
If a crucial causative function of a state is dependent upon its enterability and its enterability is invisible from the natural perspective, naturalism has a big problem.
Finally, I made a somewhat tongue-in-cheek comment above concerning sensation that I don't believe you have addressed.
You told us that you had an initial impression or "recognition" that a tickle was really an itch (or vice versa). If we separate the "recognition" from the sensation itself, aren't you implicitly claiming incorrigible status for the "recognition"?
DB: You get more predictive power from models that posit internal states (whether they be experiences, beliefs, or whatever) than those that do not. You are just wrong about this. Naturalists aren't behaviorists. You have painted yourself into a corner with a rather silly claim. Let it go.
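[BDK's claim that positing internal states buys predictive power can be made concrete with a toy sketch. This is my own illustration; the names and data are invented, not anything from the thread. The point: a model that infers a latent attitude ("dislikes vanilla") generalizes to situations it has never observed, while a purely behavioral record does not.]

```python
# Toy comparison (illustration only): a behaviorist predictor that works
# from raw item-level frequencies versus a model that posits a latent
# internal state inferred from behavior.

observations = [
    ("vanilla cone", "refused"),
    ("vanilla cone", "refused"),
    ("chocolate cone", "accepted"),
]

def behavioral_predict(item):
    """Predict from the raw history of this exact item; no internal states."""
    history = [outcome for (i, outcome) in observations if i == item]
    if not history:
        return None  # never observed this item: no basis for prediction
    return max(set(history), key=history.count)

def latent_state_predict(item):
    """Posit a latent attitude toward each flavor, inferred from behavior,
    and project it onto any item containing that flavor."""
    disliked_flavors = {i.split()[0] for (i, o) in observations if o == "refused"}
    flavor = item.split()[0]
    return "refused" if flavor in disliked_flavors else "accepted"

# The behavioral model is silent about a never-before-seen vanilla milkshake;
# the latent-state model predicts refusal because it attributes an attitude
# toward vanilla as such.
print(behavioral_predict("vanilla milkshake"))    # None
print(latent_state_predict("vanilla milkshake"))  # refused
```

[Nothing here settles the metaphysics, of course; it only illustrates why naturalists need not be behaviorists about prediction.]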
BDK
I plead guilty to using the word "behavior" perhaps too freely, as opposed to referring to all those characteristics of a system that are (in principle) observable/measurable.
I guess I should say that naturalism consistently followed leads in the direction of eliminativism--Dennett, the Churchlands, pick your flavor. And even they are constrained a bit, paint themselves into a corner as it were. Take Dennett's "intentional stance": the line between elimination and reduction can be exceedingly fine.
Perhaps the relationship between Newtonian mechanics and general relativity is somewhat analogous here. I don't believe that physicists argue that the predictive value of NM derives from anything other than its useful approximation of the way GR works out for common cases. I do not think NM is something more than or other than that in the minds of most theoretical physicists. But maybe you can correct me.
I also notice that you don't seem to address the points I put directly to you, but you are free to refrain from doing so.
Reduction and elimination are opposite ends of a spectrum, with a graded sequence of intertheoretic relations between the two (e.g., smooth reduction, bumpy reduction, partial elimination, total elimination). Where we will end up with consciousness is still an open question.
(Two poles are thermodynamics-stat mech versus caloric theory-stat mech). Caloric theory is gone. Nobody believes it. Thermodynamics remains, even though it is reducible to stat mech. Temperature is real, even though reducible, just as a brick is real, even though reducible.
darek,
I don't think this additional margin of predictive confidence is justified in the least.
Why not? If all I know is that Darek has not ordered a vanilla ice cream the last ten times he has visited the Ice Cream Shoppe, then I can be fairly certain that he won’t get one when he goes next time. But I don’t really know the reason why he is not ordering that flavor. Perhaps he doesn’t like vanilla or perhaps he strongly favors another flavor. Or maybe it just never occurred to him to order a vanilla cone.
I would be more certain that he is not going to order a vanilla cone on his next visit if he tells me that he detests the flavor of vanilla than if he tells me that he simply hasn’t been in the mood for one on his last ten visits.
Of course he could be lying to me. But then he could be lying about not having gotten a vanilla cone on his last ten visits.
The associative link between a subjective experience you have had and words such as "detest" and "dislike" when applied to foods is no firmer than the link between those same words and the behaviors of avoidance, making a sour face when presented with the offending food, etc. In fact, it is the strength of the link between the word and certain behaviors that underwrites the link with your own subjective experience--which you then project onto me.
You seem to be forgetting the link between subjective experience and behavior. Making a sour face (as you put it) is a behavioral expression of one’s detestation of the flavor of vanilla. The behavior expressing subjective experiences is partly constitutive of what we mean when we talk about those experiences.
These subjective experiences are not private in the way you seem to suppose they are. It is often quite easy to know precisely what another person is feeling. As Wittgenstein put it: “Look into someone else’s face, and see consciousness in it, and a particular shade of consciousness. You see on it, in it, joy, indifference, interest, excitement, torpor, and so on. The light in other people’s faces. Do you look into yourself in order to recognize the fury in his face? It is there as clearly as in your own breast.”
Also, I’m not quite sure of what you mean by the ‘associative link’ between a subjective experience and the word used to refer to it. The meaning of a word is not the object it stands for.
Randy
BDK
I accept that there are subtleties in the theoretical relationships between sets of descriptive/definitional terms that I haven't begun to master. But in spite of having this rich spectrum available, there is no consensus solution to the other minds problem, which I think is importantly related to the AfR, qualms over qualia, concerns raised by the likes of Chalmers, Searle, et al. Minds would seem to be entities capable of consciousness and rationality. In the Stanford article on the OMP, the solutions that seem most promising are those that bite the bullet--behaviorism and its mutant offspring, eliminativism, except that they are so offensively counterintuitive. (Epiphenomenalism would have to be close behind, although it is not dealt with as such in the entry.)
In your brick example, I suppose the question is whether "brickness" as a property contains some causal efficacy over and above whatever is strictly derivative from a more basic level of explanation.
Also, I'm not sure where in your spectrum useful fictions fit in. The vast majority of us reject geocentrism as an astrophysical model of reality, yet we find it convenient to refer to the sun coming up, going down, moving north or south as the seasons change, etc.
DB: I agree that there isn't a good reductive (or eliminative) naturalistic story about consciousness on the table. I think that we need development on two fronts. First, refinement in our notion of consciousness (i.e., what, exactly, is the explanatory target). Second, more knowledge of the reducing base (e.g., the neuro). Both are inadequately developed, so I'm not surprised (or particularly bothered by) the lack of an explanation of one in terms of the other.
The move from geo- to heliocentrism is an interesting case. It could probably be called a bumpy elimination or a bumpy reduction, depending on one's predisposition. Some geocentric ideas remain (not many though), such as the claim that from the perspective of the Earth, the sun travels from East to West across the sky. There are still planets and stars in the ontology. However, the specific claims about spatial locations are replaced (eliminated) by a heliocentric view.
The brick question is a good one. I think, ontologically speaking, the causal powers of bricks are likely all micro. But at the level of explanation and prediction, bricks are much better than the Schrödinger equation for the brick. However, if we needed to understand the brick's behavior at the microlevel, our macrobrick theory would not say anything, so there is an explanatory asymmetry that points in the supervenience direction.
As an aside, bricks are a bad example for the reductionist since they are defined partly by their history and human intentions in building them, so it would get very messy and we probably couldn't reduce bricks to their proximate physical structure. Similarly we likely can't reduce intentional states (which I differentiate from conscious states) to neuronal states because intentional contents are partly individuated by historical factors (I'm thinking of Twin Earth here).
Randy
Let me put this in terms of Dennett's "intentional stance." It might be a useful predictive strategy to assume that a chess-playing computer has a desire to win chess games and contemplates its moves the way we think of ourselves as doing. But most of us assume that the computer has no internally experienced desires and does not consciously plan its moves. Therefore the usefulness of the strategy is not evidence for the existence of internal states as opposed to objectively observable circuitry reacting with the environment. The strategy must be summarizing or simplifying patterns of empirically accessible events.
To take this a step further, perhaps the perception of causally efficacious internally experienced states--as opposed to neural circuitry interacting with the environment--is a kind of misapprehension. The brain is physical. Neurons interact chemically. Given naturalism, we do not have to consider that neural networks have an "inside reality" to see them fulfilling their causal role in behavior.
To recast your Wittgenstein quote, when we see the physical manifestations of anger such as angry voice, angry face, threatening words, we are seeing all there is of anger save the neural machinery (which in principle is observable as well). A thoroughgoing naturalism would suggest that there need be nothing more to anger than that--at least, nothing more to complete the causal picture. And the causal picture is what drives prediction.
Now, do I personally buy all that? No, I don't. I'm just saying that naturalism points in that direction, whether naturalism's proponents care to follow it there or not.
Gang,
Sorry to have been away for the last two dozen posts, but here's my 2 cents.
I agree with DB about there being some aspect of experience which we cannot rationally doubt, but it's experience in the sense that BDK earlier described as "trivial".
Suppose I thought I had an itch, but it was actually a pain. Then it is trivially true that I initially thought I had an itch. Yet I think this observation validates the claim that experiences have to be taken as axiomatic.
BDK, perhaps you might say that all your experiences warrant an explanation, or are fodder for explanation. If you felt an itch, but it was really a pain, that requires some sort of theory wherein there was a pain all along that you misread as an itch. But when you introduce models, that's not what I'm talking about. We can always be wrong about models of experiences, but not the experiences themselves. The "pain mistaken as itch" is a model of something that is effectively beyond question for rational thinkers, IMO.
At the same time, I don't see how this helps DB at all.
DB is arguing that naturalism/physicalism must translate into objective facts, not subjective ones, and that this leaves some sort of explanatory gap. Needless to say, I think this is a deeply confused argument.
In philosophy, we have to take a first person view (e.g., we get to wonder whether other people "exist" or are zombies, etc.). So, for me personally, physics doesn't predict "objective" facts in the sense DB is using them. What does objective mean to me? A subjective fact is labeled by me as objective when it meets certain criteria (because subjective facts are all I ever get). Hence, physics is ultimately a model of subjective facts, and the claim that minds are physical systems is a claim that such models predict subjective thought.
As for reductionism, does anyone here really think that bricks don't exist because they are made of atoms? Even if bricks are made of atoms, that doesn't mean brick-like configurations of atoms don't exist. Of course they exist, and they're called "bricks". Just ask a physicist.
Likewise, minds that reduce to physics still exist. Minds are defined by what they do (think). Minds (and thoughts) are not defined by what they're made of.
blue devil knight,
Similarly we likely can't reduce intentional states (which I differentiate from conscious states) to neuronal states because intentional contents are partly individuated by historical factors (I'm thinking of Twin Earth here)
We also individuate conscious experiences by what they are experiences of. For example, the experience of drinking coffee vs. that of drinking beer.
You make a good point about having to consider historical (external?) factors.
All of our actions take place within varying circumstances. The raising of an arm could be described as hailing a cab or stretching or reaching or pointing or warning, depending on the situation in which it occurs.
darek,
To recast your Wittgenstein quote, when we see the physical manifestations of anger such as angry voice, angry face, threatening words, we are seeing all there is of anger save the neural machinery (which in principle is observable as well). A thoroughgoing naturalism would suggest that there need be nothing more to anger than that--at least, nothing more to complete the causal picture. And the causal picture is what drives prediction.
I don't see that as being workable. Behaviorism has shown itself to be a dead end.
And what use is prediction without understanding?
Now, do I personally buy all that? No, I don't. I'm just saying that naturalism points in that direction, whether naturalism's proponents care to follow it there or not.
Well, we do agree that we don't buy what you wrote above.
I'm not sure it is fair of you to imply that all naturalists believe as you are portraying them here. Though physical reductionism seems to be the craze now among most naturalists.
In any case, I'm not particularly interested in defending one camp (naturalism) versus the other (supernaturalism).
randy
Darek,
Minds would seem to be entities capable of consciousness and rationality.
Yes, they do seem that way, don't they? I don't think minds are entities at all.
I would suggest that the problem of other minds is a problem created by misconceptions of what it means for a person to have a mind.
Randy
>>I'm not sure it is fair of you to imply that all naturalists believe as you are portraying them here.<<
I didn't say that all naturalists believe that. I said that naturalism, consistently followed, leads in that direction.
Darek,
I didn't say that all naturalists believe that. I said that naturalism, consistently followed, leads in that direction.
Well, yes, if it is the sort of naturalism that assumes that the only thing that exists is matter and that everything can be reduced to physics, then I think your critique is justified. But not all naturalists start off with that assumption.
Also, I forgot to sign the anonymous post commenting on your remark that minds are entities. Sorry.
Randy
DL
>>In philosophy, we have to take a first person view (e.g., we get to wonder whether other people "exist" or are zombies, etc.).<<
I think we all take a first person view whether we're being philosophical or not.
I keep encountering this kind of argument from you: We are forced to assume X or acknowledge X. Therefore the assumption or acknowledgement of X is off the table as evidence of anything. I don't see the inevitability of that logic.
Why does the fact that we find subjective experience to have an incorrigible aspect come off the table as grounds for any further conclusions? Such as the conclusion that experience lies outside the bounds of physical events/processes?
>>A subjective fact is labeled by me as objective when it meets certain criteria (because subjective facts are all I ever get).<<
What are your criteria?
DB,
Why does the fact that we find subjective experience to have an incorrigible aspect come off the table as grounds for any further conclusions? Such as the conclusion that experience lies outside the bounds of physical events/processes?
Because it's useless as a tool for distinguishing naturalism from dualism.
Suppose we accept the rules of rational thought, and accept that naturalism is true. Our minds evolved because rational thought conveys strong survival advantages. That is, rationality exists for our minds because it works. Such evolved minds could not prove that their rationality was morally correct, i.e., that they ought to think rationally. They might instinctively be rational, and know that rationality conveys benefits under certain assumptions, but they cannot prove it.
You are arguing that our inability to prove the axioms correct picks out dualism. It doesn't. The same thing would be true in naturalism because physics is a rational endeavor.
A subjective fact is labeled by me as objective when it meets certain criteria (because subjective facts are all I ever get).
What are your criteria?
The term objective has at least 3 meanings, and the following one is the only one that really makes any sense to me, philosophically speaking.
A property of an object is objective when we have evidence that it's not being "added" by our own minds, but is actually in the object itself. (For this definition to work, there need to be criteria for what constitutes the self.)
For example, I am afraid of scorpions. It could be that scariness is an objective property of scorpions transmitted by "fear rays", and that people not afraid of scorpions are blind to these rays. Alternatively, the scariness could simply be subjective: when I see the objective properties of a scorpion, my brain/self (which has its own properties and mechanisms) adds fear to the properties I am observing directly.
There are several ways to get evidence for objective properties, but generally they involve hiding the external object from the self with some sort of screen, and trying to detect the raw property. For example, if scariness were objective, we ought to be able to develop a fear-permeable membrane, through which I can detect scary things on the other side without seeing or smelling them first. We can't do that, so scariness is not objective. (The same goes for good and evil.)
Another way to detect objective properties is to note their effect on entities that don't have subjectivity. A hot iron has an effect on water that a cool iron does not. But water does not have subjectivity (as far as we know), so we have evidence that heat is objective. Scariness has no effect on water. It only has an effect on entities with subjectivity.
We can also prove some things to be subjective by identifying the mechanisms in ourselves that add subjective properties.
DB loose ends:
If a crucial causative function of a state is dependent upon its enterability and its enterability is invisible from the natural perspective, naturalism has a big problem.
I can't be in the same state as a plant in photosynthesis, but photosynthesis isn't invisible from the naturalistic perspective.
Finally, I made a somewhat tongue-in-cheek comment above concerning sensation that I don't believe you have addressed.
You told us that you had an initial impression or "recognition" that a tickle was really an itch (or vice versa). If we separate the "recognition" from the sensation itself, aren't you implicitly claiming incorrigible status for the "recognition"?
No. I take it to be corrigible and fallible just like everything else. I could say 'putative recognition' because 'recognition' alone often implies success.
I earlier said:
I like Dretske's (and Sellars') distinction between epistemic and nonepistemic 'seeing.' If that distinction holds, then incorrigibility doesn't enter into experience (other than in the trivial sense in which when I have experience X it is necessarily the case that I am having experience X). The beliefs about that experience seem quite corrigible, i.e., the 'epistemic' seeing, the seeing 'that.'
DL later said:
I agree with DB about there being some aspect of experience which we cannot rationally doubt, but it's experience in the sense that BDK earlier described as "trivial".
Suppose I thought I had an itch, but it was actually a pain. Then it is trivially true that I initially thought I had an itch. Yet I think this observation validates the claim that experiences have to be taken as axiomatic.
How does this follow? For one, aren't axioms usually propositions? Are experiences propositionally structured?
BDK, perhaps you might say that all your experiences warrant an explanation, or are fodder for explanation. If you felt an itch, but it was really a pain, that requires some sort of theory wherein there was a pain all along that you misread as an itch. But when you introduce models, that's not what I'm talking about. We can always be wrong about models of experiences, but not the experiences themselves. The "pain mistaken as itch" is a model of something that is effectively beyond question for rational thinkers, IMO.
Why is it beyond question?
You should give up this weird epistemology and join the Second Philosophy bandwagon. :)
BDK
>>I can't be in the same state as a plant in photosynthesis, but photosynthesis isn't invisible from the naturalistic perspective.<<
Is this really analogous? We cannot observe/measure the subjective experience of pain, but we can observe/measure the physical correlate. The subjective experience and the correlate can be distinguished. Now, which of these is the cause of behavior? Given naturalism, why would we seek anything beyond the physical properties of the correlate to explain successive physical states? But from the standpoint of the subject, it is the subjective experience of pain that motivates (at least some of) their behavior.
On the other hand, on what grounds could we distinguish the state of photosynthesis from the physical correlate? The correlate is photosynthesis.
>>No. I take it to be corrigible and fallible just like everything else.<<
Alright. You are loyal to the core to second philosophy. But take the proposition: "All is subject to doubt." It must be self-inclusive. So, "All is subject to doubt" is subject to doubt. But doubting that all is subject to doubt means allowing the possibility that something is not subject to doubt. So this reduces to, "Perhaps all is subject to doubt, and perhaps not." I just don't see how the original proposition can bear its own weight.
DL wrote: The "pain mistaken as itch" is a model of something that is effectively beyond question for rational thinkers, IMO.
BDK responded:Why is it beyond question?
You should give up this weird epistemology and join the Second Philosophy bandwagon. :)
Does it make sense to say, "Bill is in pain, but he doubts whether he is."?
Or can someone sensibly say, "Perhaps I am in pain, but I rather doubt it."?
Of course, someone might say that he had an unpleasant sensation but was not sure that it was unpleasant enough to merit the name of "pain". But that wouldn't be the case of him having a pain and not knowing he has it, but of his having a borderline case of pain.
His hesitation would be a criterion for his having a sensation which is a borderline case of pain.
Being in pain is not really an epistemological question. Doubt that one is in pain is logically excluded. There is no such thing as being in pain and doubting that one is - just as there is no such thing as castling in checkers.
Of course this also means it does not really make sense to say one knows one is in pain.
Randy
DB:
Now, which of these [subjective experience or brain state] is the cause of behavior?
If subjective experiences are brain states, one can't cause behavior without the other.
However, your point is a good one--in the case of photosynthesis we have no reason to believe there is "something it is like" to photosynthesize. It is obvious that photosynthesis is identical to some complicated biological process. In the case of subjectivity, it is less obvious, as there is no theory out there yet (my diagnosis of this state of ignorance, and one reason why the problem of consciousness doesn't make me a nonnaturalist, is in a previous comment).
DB: of course 'everything is open to revision' is open to revision, and that doesn't lead to any contradictions (it's not like saying 'Everything is false' or 'No knowledge is possible').
Recognition machinery wouldn't be in my list of things for which that claim would be suspect. It is even less reliable than basic perception.
Randy: I was sure I was in pain, but then realized, wow, I was wrong: it was an itch. Until I see an argument, instead of more assertions that I am wrong, I really don't know what to make of these claims.
DL
>>Because it's a useless as a tool for distinguishing naturalism and dualism.<<
>>Our minds evolved because rational thought conveys strong survival advantages.<<
I disagree. I'll use your scorpion example to show why. Look at these three statements:
1) DL avoided the scorpion because when he saw it he experienced fear.
2) DL avoided the scorpion because he was aware that it could harm him.
3) DL avoided the scorpion because it could harm him.
Only 3 need obtain for natural selection to occur. Simply put, the dualist argument from qualia points out that 1 is naturally superfluous; the AfR says that 2 is equally extravagant.
The naturalist must claim that 1 and/or 2 must necessarily obtain in order for 3 to occur. But that is a tough sell. To show why, I'll use an example suggested by Stevan Harnad, one of the world's leading authorities on artificial intelligence and a dualist (though an atheist):
1a) The thermostat turned on the heater because it experienced the feeling of cold.
2a) The thermostat turned on the heater because it was aware that the room was cold.
3a) The thermostat turned on the heater because the room was cold.
Harnad points out that we do not see either 1a or 2a as necessary for 3a to obtain. The only obvious naturalist response is to say that organisms are more complicated than thermostats. But as Harnad says, though organisms are more complicated, they are not infinitely so. The complexity argument commits us to the theory that the thermostat is a little bit conscious and/or a little bit rational--it is just less conscious and rational than we are in proportion to its lesser complexity compared with human beings. But we intuitively reject any attribution of consciousness or rationality to the thermostat; we credit it with none whatsoever. There's the rub for naturalism.
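Harnad's thermostat is easy to make concrete. Here is a hypothetical sketch (the class, names, and numbers are mine, purely for illustration) showing how 3a can obtain with nothing answering to 1a or 2a--the entire "decision" is one numeric comparison:

```python
# A hypothetical thermostat sketch (illustrative only).
# It turns the heater on "because the room is cold" (3a)
# without any feeling of cold (1a) or awareness of cold (2a):
# the whole mechanism is a single comparison.

class Thermostat:
    def __init__(self, setpoint):
        self.setpoint = setpoint   # target temperature
        self.heater_on = False

    def update(self, room_temp):
        # The entire "decision": compare two numbers.
        self.heater_on = room_temp < self.setpoint
        return self.heater_on

t = Thermostat(setpoint=20.0)
print(t.update(15.0))  # True: heater turns on
print(t.update(22.0))  # False: heater turns off
```

Whether piling up more such comparisons ever amounts to 1a or 2a is, of course, exactly the point in dispute.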
On the other hand, theism says that the material scaffolding is there in order to contain, in a particular way, the mysterious spark of consciousness and rationality that makes for persons who can enjoy relationships, appreciate their environment, and seek God. When you throw in difficulties about making ethics/morality subjective, the case for supernaturalism gets even stronger.
Blue devil knight,
Randy: I was sure I was in pain, but then realized wow I was wrong it was an itch. Until I see an argument, instead of more assertions that I am wrong, I really don't know what to make of these claims.
I’m afraid I’m having a similar reaction to your posts. Am not seeing much beyond the assertion that it is possible to have a sensation and not really have that sensation. You’ve pointed out how we can make perceptual errors and I’ve agreed with that. And I think that we would both agree that we are often mistaken about what goes on in our head.
Perhaps it would be helpful if I gave an example of why I think it makes little or no sense to say that something like: “Perhaps I am in pain, but I really doubt it.”
I imagine you’ve heard of the “cutaneous rabbit” illusion.
Wikipedia has an article
on it.
It is the illusion that one is being touched at equal distances all the way up one’s arm, while in fact one is touched, for example, five times at the wrist, twice near the elbow and three times near the shoulder. So in fact one is not being touched at equal distances all the way up one’s arm. That is a tactile illusion. But the fact that one feels a sequence of sensations equally distributed along one’s arm is not an illusion – one really does have those sensations.
In this instance it makes perfect sense to say that one was wrong about how they were touched. They really were not touched at equal distances all the way up the arm. But it does not follow that they were mistaken about the sensations they had.
Sensations are merely localized feelings. They aren’t entities or objects that one can examine objectively in order to understand them better. If one mistakes a rock for a toad it is quite easy to demonstrate it was a mistake. I don’t understand how you could do that with sensations. I’m not really arguing for some special incorrigible knowledge of the mind. It is more of a grammatical issue. Given what we mean by “sensations” and “feelings” I don’t know how to understand the claim that we can have a sensation like pain and yet not really have it.
Randy
Darek,
3) DL avoided the scorpion because it could harm him.
Only 3 need obtain for natural selection to occur.
But #3 makes no sense without assuming sentience.
Randy
DB,
It drives me crazy when people use this evolutionary argument against naturalism. The people who use it seem to think that either (1) beliefs evolve or (2) there's no advantage to a rational belief generating mechanism.
(1) is patently false. I did not evolve to believe that my antivirus subscription needs renewal because computers didn't even exist for my ancestors.
(2) is wrong because it makes the assumption that the rules in our environment and in our society do not change. If the rules did not change, then, yes, we could evolve strategies, and have no need to reason things through within a single human lifetime. But, of course, environments do change, and they do so within human lifetimes.
Let me put it this way. Genetic evolution only allows a population to adapt to the rules of its environment over many generations. For this reason, larger species can be wiped out by rapid changes in weather or habitat. They simply don't have the time to adapt to new predators or new environments.
Rationality can provide this advantage, but it has to be generalized rationality. You don't get the same advantage if you are simply adapted to knowing your present environment. You have to have belief-generating mechanisms that are generalized. That's what confers the greatest advantage.
So it's not enough for me to evolve a brain that only works in a wet environment if an ice age shows up, or if a 50-year drought changes the landscape, or if my tribe becomes dominated by another tribe. My brain has to evolve the ability to deal with any set of rules that comes along within my lifetime, including social rules that are very complex.
Also, your example about fear of scorpions seems to have things backwards. Less intelligent species, e.g., rabbits, seem to experience fear. It seems that rational thinking isn't required for fear, which is why it works so well for animals that aren't rational. It's only in an evolutionary model that our fear makes sense. Animals cannot survive by coldly reasoning towards pre-programmed goals. They need visceral, adrenaline-inducing reactions to stimuli. If evolution were not true, we would not have inherited these things from mammals that evolved before us. We could be cold, reasoning, goal-seeking machines.
All theism can say is that things are the way they are because God wanted them that way. That's not an explanation. I could just as easily say that things are the way they are because the laws of nature made them that way. You would immediately recognize a reliance on undiscovered natural laws as being non-explanatory, but, for some reason, you don't notice the exact same flaw in theism. The theistic explanation works only if you know the mind of God, which you don't. And you can't say that you know the mind of God simply by pointing at the way things are, because I could say the same about undiscovered natural laws.
The test of explanation is prediction, and you can't predict anything with theism.
anon: I'm not arguing it is possible, but actual, and has happened with me more than once. I wasn't unsure of the sensation, I had a belief that was wrong. Without a good argument that I am deluded about this, we won't get further. I'm not making a claim about the nonmental perceptual object, but about my judgment about the sensation itself. I was simply wrong at first. This has happened, it is sort of strange to see so much animosity! Maybe your mind is different than mine.
DB: the thermostat issue seems a straw man. Thermostats don't have memories (working memory or other short-term or long-term memories), don't have multiple processing stages between input and output, don't have multimodal processing stages, and so on.
I am not saying that simply adding "complexity" to a thermostat will make it conscious. If complexity alone sufficed, a pile of straw or the nation of China would be conscious.
Darek
1a) The thermostat turned on the heater because it experienced the feeling of cold.
2a) The thermostat turned on the heater because it was aware that the room was cold.
3a) The thermostat turned on the heater because the room was cold.
Harnad points out that we do not see either 1a or 2a as necessary for 3a to obtain. The only obvious naturalist response is to say that organisms are more complicated than thermostats.
I think not.
The obvious response here is that thermostats do not meet the criteria for having consciousness. So it is not really just a question of organisms being more complicated than thermostats.
Looks like a red herring to me.
Randy
As I was posting this I saw that BDK has made a similar response which I agree with.
BDK,
Have to run, am late for work, but I just want to apologize to you. I really don't hold any animosity toward you and I'm sorry it came across that way in my other posts.
Randy: no need at all good to discuss these things with you.
"I’m afraid I’m having a similar reaction to your posts. Am not seeing much beyond the assertion that it is possible to have a sensation and not really have that sensation."
Anonymous, it has not escaped some readers that your argument for sensation reports being incorrigible consists in the attempt to corridge his reports.
Anon
>>But #3 makes no sense without assuming sentience.<<
Why?
A chess playing computer can avoid a strategic trap in a chess game. Does that make the computer sentient?
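For what it's worth, the kind of avoidance in question needs nothing more than lookahead and scoring. A hypothetical sketch (the move names and scores are invented for illustration, not any real engine's code):

```python
# Hypothetical sketch: an agent that "avoids a trap" by scoring
# the position each move leads to and picking the best one.
# No sentience is involved; the avoidance falls out of max().

def best_move(moves, score_after):
    # score_after maps each move to the resulting material
    # balance (negative = walking into the trap).
    return max(moves, key=lambda m: score_after[m])

moves = ["take_pawn", "retreat_knight", "castle"]
score_after = {
    "take_pawn": -5,       # the trap: the pawn is poisoned
    "retreat_knight": 0,
    "castle": 1,
}
print(best_move(moves, score_after))  # castle: the trap is avoided
```

Real engines add deeper search and better evaluation, but the avoidance behavior is the same kind of thing: arithmetic over possible futures.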
Hiero5ant,
Anonymous, it has not escaped some readers that your argument for sensation reports being incorrigible consists in the attempt to corridge his reports.
Then you’ve misinterpreted my intent. Perhaps it would be helpful to look closer at what I’ve written.
For instance in my 6:52 am post above I wrote:
“I’m not really arguing for some special incorrigible knowledge of the mind. It is more of a grammatical issue. Given what we mean by “sensations” and “feelings” I don’t know how to understand the claim that we can have a sensation like pain and yet not really have it.”
Or in this earlier post at 5:59 pm:
“Being in pain is not really an epistemological question. Doubt that one is in pain is logically excluded. There is no such thing as being in pain and doubting that one is - just as there is no such thing as castling in checkers.”
If anything, I was pointing out that BDK’s reports of his sensation have appeared to me to be meaningless.
Sensations can at times be fuzzy. Especially at the onset of a sensation or if it is a very mild sensation. So I am not claiming that there aren’t times when it is difficult to decide what to call the sensation. But one of the criteria for having such a hard to identify sensation would be to say something like: “I have a weird feeling in my finger.”
In his 7:19 am post, BDK stated :
“I wasn't unsure of the sensation, I had a belief that was wrong.”
And above that one at 7:33 pm he stated:
“I was sure I was in pain, but then realized, wow, I was wrong: it was an itch.”
I don’t understand what it means for someone to be sure of a sensation and then say they mistook it for another sensation. That’s like saying: “I am sure I am in pain, but I am really mistaken.”
I would understand someone saying that at first they were unsure of what the sensation was but it became a mild discomfort and then a throbbing pain.
I’m not so much trying to ‘corridge’ his reports as to insist that they actually make sense.
Randy
Darek,
Anon,
>>But #3 makes no sense without assuming sentience.<<
Why?
A chess playing computer can avoid a strategic trap in a chess game. Does that make the computer sentient?
But you’ve changed the context. The chess computer is a machine that has been designed to mimic the behavior of a sentient animal.
In your original example you said:
“3) DL avoided the scorpion because it could harm him.”
In the context of talking about animals, it makes no sense to say that in order to escape being harmed one animal avoided another without assuming that the animal is sentient.
Randy
Anon
>>to mimic the behavior of a sentient animal.<<
What is the difference between mimicking the behavior of a sentient animal and simply behaving like a sentient animal? And if something can behave like a sentient animal without being sentient, why does the animal need sentience?
DL, BDK, & all
Please remember where we've been on this thread. There seemed to be a consensus that behavior, no matter how complex or sophisticated, does not a mind make. Patterns of behavior, no matter how complex, do not constitute beliefs or experiences. Behaviorism is passe, naive, a dead end, right?
Now contemplate that under the skin of a computer or an organism what we find is . . . behavior! Behavior of switches or neurons. Behavior, no matter how sophisticated, does not a mind make . . . including the behavior of little cells or circuits.
A thermostat is a switch and has a simple behavior. Want the behavior more complex? More adaptive? Want "learned" behavior? Wire lots of switches together. The behavior will assume more complex patterns--patterns external and patterns internal. But let's remember again that patterns of behavior do not a mind make. Not behavior external, not behavior internal.
The human skull is not a magician's scarf that naturalism can throw over the top hat of behavior so that the rabbits of consciousness and rationality can be pulled out. Behavior is behavior, whether of individuals, crowds of people, or crowds of neurons. If something "magical" or mysterious happens when behavior gets complex enough, naturalism starts to burst at the seams.
The apostle Paul said that there were certain moral realities that he confronted through introspection (Rom 7:22-23). One theistic prediction is that naturalistic explanations of introspected realities such as morality, subjective experience, and rationality will continue to suffer from deep conceptual inadequacies. That has proven to be a safe bet until now.
Darek,
On the other hand, theism says that the material scaffolding is there in order to contain, in a particular way, the mysterious spark of consciousness and rationality that makes for persons who can enjoy relationships, appreciate their environment, and seek God.
The “mystery” of consciousness is mainly due to the conceptual confusions philosophers tangle themselves up in.
It is no more wondrous or mysterious that there are conscious beings than that there are non-sentient substances. They are equally wondrous.
And you can’t coherently separate consciousness from the physical behavior which manifests it. Such behavior partially constitutes what it means for a creature to be conscious.
Randy
Darek,
What is the difference between mimicking the behavior of a sentient animal and simply behaving like a sentient animal?
One is a copy of the other. We can only say some behavior is mimicked if we already know what the original is. We know what it is to act with sentience. And that is why we can design a machine to copy such behavior.
And if something can behave like a sentient animal without being sentient, why does the animal need sentience?
If a machine actually behaves like a sentient creature then it is behaving as if it has sentience.
Isn't it obvious to you why the original needs to have sentience in order for the copy to be accurate?
Also, we can design a machine to behave as though it is not a sentient creature.
DB:
Now contemplate that under the skin of a computer or an organism what we find is . . . behavior! Behavior of switches or neurons. Behavior, no matter how sophisticated, does not a mind make . . . including the behavior of little cells or circuits.
It is obviously coherent to not be a behaviorist but to believe that mental events supervene on neuronal events rather than (or in addition to) behavior. Indeed, even behavior supervenes on neuronal events. That is the big problem with behaviorism.
I envy the confidence you have in your predictions about how you will feel once neuroscience has advanced over the next couple of centuries.
I wonder what will happen as neurology/neuroprosthetics/psychiatry advances from its present infant stage to more fine-grained and predictive models. It's possible you will be vindicated, I'll grant you that.
Darek
Now contemplate that under the skin of a computer or an organism what we find is . . . behavior! Behavior of switches or neurons. Behavior, no matter how sophisticated, does not a mind make . . .
What do you think “makes a mind”?
All
You may have noticed that this thread has gotten long and unwieldy, so I'm going up to the next topic to post.
Steve Lovell's article is equally appropriate to this discussion.