Reply to Doctor Logic
DL: It seems that Feser is making at least two mistakes.
First, were we to study protons, neutrons and electrons, we would find that they are not flammable. Flammability is found in the interactions of vastly complex structures of these particles. So how can protons, neutrons and electrons, which cannot themselves be flammable, explain why gasoline burns?
Well, it is quite obvious that flammability is built upon complex configurations of these building blocks, and flammability only makes sense in that context. You will never be able to assign an intrinsic property of a configuration of components to an individual component. So it is with mental properties. If mental properties are founded on complex configurations of a material substrate, you shouldn't expect to see mental properties in individual particles, or even in individual subsystems like neurons.
Second, Feser is losing track of his definitions. Intentionality is defined by our experience of intent. It is not defined by its non-material nature. If physicalism is correct, then intentionality does not become an illusion. It is only the belief that intentionality is non-material that becomes an illusion. Intentionality remains as it has always been, and as important to us as it has always been.
VR: Configurations, I will grant, give us states at the configurational level that are not mentioned in a description of the proper parts. However, the configurational properties are transparent given the locations of constituent parts. So no brick is six feet high, but if they all are put together in a wall, the wall can be six feet high even though the bricks are not. Given where the bricks are, it follows necessarily how high the wall is. The wall-states are fully determined by the brick states.
In the case of intentionality, intentional states are underdetermined by physical states. Add up all the physical states you like, and the result is logically compatible with there being different mental states, or with there being no mental states at all. If we look at the output of a computer, it is not just the computer that is doing such and such with such and such a meaning, we interpret it as having that meaning.
You say "intentionality is defined by our experience of intent." Gosh I hope not. If we don't know what intentionality is, how in blazes are we going to be able to identify our experience of intent. More importantly, intentionality, in philosophy of mind, always refers to "aboutness," not purposiveness. What is it about one physical state that makes it about another physical state? That's the question being dealt with here.
55 Comments:
VR: "If we look at the output of a computer, it is not just the computer that is doing such and such with such and such a meaning, we interpret it as having that meaning."
That couldn't be more true. As a Systems Engineer, I can tell you that you're absolutely right. I remember another poster on your blog (sorry, don't remember who it was or when) who made the analogy that if computers can store the works of Shakespeare without the need for a mind, then so can the human brain.
But that poster was mistaken. Computers don't store the works of Shakespeare. They store ones and zeros. And if you're being *really* technical, they don't even store *those*. If you open your computer, you don't find Shakespeare or ones or zeros. You would find magnetic domains oriented one way or the other on a thin strip of magnetic film that by all counts would appear to be completely randomly oriented.
It's only the human mind that "contains" the "decoder ring" (if you will) for the order of those magnetic states, which, when decoded, can reproduce what you and I would recognize as Shakespeare.
And there of course is the rub. The computer stores nothing at all. It's an extension of the human mind and nothing---really NOTHING more than that.
Victor,
What is it about one physical state that makes it about another physical state? That's the question being dealt with here.
What is it about a mental state that makes it about a physical state?
Suppose you think of a tree. What makes that thought about a tree?
It seems to me that you know it is about a tree because your thought of a tree is a mental model of past experiences or potential experiences of a tree.
A tree usually has a trunk, roots, branches and leaves and so does my mental model. Of course, I can alter my model of a tree and imagine a tree, say, without roots. But what makes a mental model about an actual tree is my ability to recognize the corresponding tree if I saw it.
In general, it's difficult to model a thing that we would not recognize. The model always has at least some recognizable properties. For example, I can conceive of the kidnapper of the Lindbergh baby. Though I may not initially know what the kidnapper physically looks like, I do know what experiences relating to a person would lead me to recognize him as the kidnapper.
So, if we look at a computer (as anonymous suggests), we can see that the computer's manipulation of Shakespeare fails to be about the plays or the subject of the plays because 1) the computer is not dealing in a model of the subject of the plays, and 2) the computer is (presently) incapable of recognizing what the model of the plays is supposed to represent. The computer has only a stored representation of the play. It has no experiences, nor software for modeling those experiences, so its file containing the play isn't about the subject of the play.
However, none of this precludes us creating a computer that can model its experiences, and can recognize the implications of its models. We might have to give the computer a corporeal existence (or a simulated corporeal existence) before it will understand what the plays are actually about, but it could be done.
dl: "However, none of this precludes us creating a computer that can model its experiences, and can recognize the implications of its models. We might have to give the computer a corporeal existence (or a simulated corporeal existence) before it will understand what the plays are actually about, but it could be done."
(same anon): Forgive me, but how exactly is it that you know "it could be done"?
As it stands right now, today, computers are nothing more than storage devices for encoded zeros and ones. That's it. Nothing more. I don't mean to repeat myself, but I think maybe you missed my original point. If as you suggest we created a computer that "can model it's experiences" or has a "corporeal existence" it would still only *appear to us* to have these experiences. There's no way for us to know that *the computer* actually *knows* anything at all. And in fact, because we do know how they work in every detail, we know they store ones and zeros, and *we* decode them on their behalf. Always. There isn't a computer that self-decodes---not when you reduce their source code all the way down.
In the end, even the most advanced computer is an input/output device.
As a *very* simple demonstration (and forgive me if this is evident to any of you readers---it's meant for those who aren't well versed in how a computer actually works), imagine I have a computer with the following pattern of ones and zeros:
01100001
01100010
What programmers do is "tell" the computer that:
01100001 = the letter a
and that:
01100010 = the letter b
so that when the computer reads it, it displays for you:
ab
But what if I, as the programmer, or "master decoder" if you will, instead tell the computer that
01100001 = the letter x
and that
01100010 = the letter y
so that when the computer reads it, it displays for you
xy
Now, if you open the guts of the computer, you will only see
01100001
and
01100010
You NEVER see ab or xy. The "ab" and "xy" live solely in the mind of the programmer.
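If you want to see the point in runnable form, here is a minimal Python sketch (purely illustrative; the dictionaries stand in for whatever decoding convention a programmer happens to pick). The stored bit patterns never change; only the externally supplied mapping does:

import_none_needed = None  # standard Python only

# The stored bit patterns: these are all the machine ever holds.
stored = [0b01100001, 0b01100010]

# "Decoder ring" #1: the ASCII convention a programmer chose.
ascii_ring = {0b01100001: "a", 0b01100010: "b"}

# "Decoder ring" #2: an arbitrary alternative convention.
other_ring = {0b01100001: "x", 0b01100010: "y"}

print("".join(ascii_ring[b] for b in stored))  # prints: ab
print("".join(other_ring[b] for b in stored))  # prints: xy

Same bits, different output. The "meaning" lives in whichever ring gets applied, not in the bits.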
"Now, if you open the guts of the computer, you will only see
01100001
and
01100010
"
I'd be willing to bet you a considerable sum of money that I will not see 01100001 or any digital number, for that matter, if I were to open up my computer. All I would see is the electronic circuitry.
There are some serious problems with identifying the mind with the brain, but those same problems also are present when the dualist identifies the mind with some sort of spiritual or mental substance.
Anon2, apparently you missed in my original post when I said "If you open your computer, you don't find Shakespeare or ones or zeros. You would find magnetic domains oriented one way or the other on a thin strip of magnetic film"
I didn't repeat the technicality in order to avoid being redundant, not in an effort to be misleading. I assumed the reader would understand the intent.
anonymous,
I am a software developer, so I am familiar with how computers work.
As an engineer, you may be interested in the work that Numenta is doing. You are probably familiar with trainable neural networks. The programmer (trainer) exposes the network to pictures of letters of the alphabet, and then conditions the outputs to produce the binary/ASCII sequence for the relevant letter. Of course, these simple networks are no more than a variation on the theme you presented. The computer is blind to the representation chosen by the trainer for the output (e.g., it wouldn't matter if the trainer used EBCDIC instead of ASCII). After being trained, the network cannot predict future "experiences," but can only react to them by outputting a code.
However, Numenta's HTM can find the letters or pictograms all by itself. It's strange when you first see the training process because, as a programmer, you naturally think "where did the human tell it which letter was which?" But the HTM sees the patterns in its visual field and learns them, even though no programmer has told it which pattern is which. An HTM can even find patterns that a human hasn't spotted.
Once the HTM has trained itself through experience, the HTM recognizes a picture of the letter W (or a pictogram of a cat, or whatever), and lights up a neuron representing that letter. If you train it differently, a different physical neuron may take on the meaning W. This neuron has meaning, and that meaning was never assigned by a trainer or programmer. The meaning is confined to the HTM.
Indeed, to make the HTM useful, you would either have to reverse engineer the meaning of the output neurons, or use another HTM and train it that a W is ASCII 87.
HTMs are based on the human neocortex. HTMs with billions of parallel-processing neurons have just what is needed to automatically recognize patterns, and patterns of patterns, and to represent concepts.
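If you'd like to play with the flavor of this, here is a toy sketch in Python. To be clear, this is plain competitive learning, not Numenta's actual HTM algorithm (which is far more sophisticated), and all the names are my own invention; but it shows the relevant feature: no labels are ever supplied, and which "neuron" ends up meaning which pattern is settled inside the network by its training history.

import random

def train(patterns, n_neurons=2, epochs=50, lr=0.2, seed=0):
    # Each "neuron" is a weight vector. The neuron nearest an input wins
    # and is nudged toward it. No labels are ever provided.
    rng = random.Random(seed)
    neurons = [[rng.random() for _ in patterns[0]] for _ in range(n_neurons)]
    for _ in range(epochs):
        for p in patterns:
            winner = min(neurons,
                         key=lambda w: sum((wi - pi) ** 2 for wi, pi in zip(w, p)))
            for i, pi in enumerate(p):
                winner[i] += lr * (pi - winner[i])  # move the winner toward the input
    return neurons

def recognize(neurons, p):
    # Index of the neuron that "lights up" for pattern p.
    dists = [sum((wi - pi) ** 2 for wi, pi in zip(w, p)) for w in neurons]
    return dists.index(min(dists))

A = [1, 0, 0, 1]  # a crude 4-pixel "pictogram"
B = [0, 1, 1, 0]  # a different pattern
net = train([A, B])
print(recognize(net, A), recognize(net, B))  # e.g. 0 1, or 1 0 with another seed

Which index stands for which pattern was never assigned by a trainer; to make the output useful to us, we would still have to reverse engineer it, just as I said above.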
I thought you might find that an interesting aside.
If as you suggest we created a computer that "can model its experiences" or has a "corporeal existence," it would still only *appear to us* to have these experiences. There's no way for us to know that *the computer* actually *knows* anything at all.
How do you know that you and I are not computers (of a biological variety)? How do you know that I know anything at all? I merely *appear* to know things.
I put it to you that once you know what it means for your thoughts to be "about" things, you will know how to make a computer do the same.
dl: "How do you know that you and I are not computers (of a biological variety)? How do you know that I know anything at all? I merely *appear* to know things."
(orig anon): Well, I couldn't possibly outline the arguments for why I take you seriously, and think that you're not just a meat machine, nearly as effectively as Jason Pratt can. I'll let him debate you and explain why I choose not to treat you like an interesting cabbage but instead treat you like you "know" something. You can see his thoughts on the subject
here
but to sum it up: you won't succeed in convincing me that you're a meat machine while also asking me to take you seriously as "knowing" that you're just a meat machine, if indeed that's all you are.
As for Numenta and their work: I hadn't been familiar with it until your link, and so I very much appreciate that. I watched the online video of the discussion, and if that's the demonstration you're talking about, I would say I have a completely different perspective than you when analyzing what it shows. (That's to be expected, after all---you're a developer and I'm an engineer---our two camps never get along *wink*). I'll have to watch the video again more carefully, but from my initial viewing, I think you're being a bit misleading when you say that there's no programmer telling the HTM what is what. At about the 20 min mark in the demo, he states that there's a rather daunting and extensive body of probabilistic, algorithmic code *programmed* into this system. So, really, the programmer (from my perception of the presentation) builds a world of probabilities for the machine, and the machine fits the images into these spheres of probability.
If that's the case, there's really no difference between my rudimentary example, neural networks, and these HTMs. In fact, even an audience member asked, "Isn't this just a neural network?" To which he replied, well, um, no, I thought I showed how it wasn't.
Now, I'm at an extreme disadvantage here because I'm not a neuroscientist so maybe I've missed something altogether. But it also seems misleading when you say there's "meaning" that only the HTM knows. Again, taking the online demo as my source, there was no "meaning" whatsoever. The system placed a picture of a cat, in a column for cats. In other words:
This: w
looks like: W
so, the computer placed "w" in the same column as "W".
I don't see at all the inference you make that there's "meaning" there that the computer understands.
I'll have to look at the demo again more closely and will definitely research Numenta more.
Thanks for sharing the link. It seems like very promising research. And has a high "cool" factor. I'll definitely keep up with it and learn more---even if you and I see the impact of it completely differently.
anonymous,
Thanks for your reply, but you haven't answered my question.
You refer to a comment by Jason Pratt, but that comment doesn't answer my question either. It does not say how you know that your thoughts are about things, nor how you know that my thoughts are about things.
Then you say:
you won't be successful trying to convince me that you're a meat machine and ask me to take you seriously as "knowing" that you're just a meat machine if indeed you're just a meat machine.
I'm not sure what this means. Maybe this is a meta statement about my not being able to win you over by argument (which isn't an answer to my question). Or maybe you are saying that you don't like the conclusion because it seems absurd to you (which is also not an answer).
So, I'll ask again... How do you know that the tree in your mind's eye is a thought about a tree? How do you know it's not a thought about algebra?
If you don't know what it is that makes your thoughts "about" things, then you're not in a good position to refute theories that claim to identify the mechanism of aboutness.
If you do know what it is that makes thoughts about things, then we might evaluate whether machines can do the same. Correct me if I'm wrong, but you are not suggesting any criteria or mechanism for aboutness. You're just expressing your intuitions.
doctor logic, with all due respect, I feel you're dangerously close to simply playing word games here. If I'm getting the wrong impression, please correct me, as I would be happy to continue the engaging debate.
I'll try to answer your question, but first you'll need to clarify some things.
You asked:
"How do you know that you and I are not computers (of a biological variety)?"
Firstly, I didn't think you were seriously asking me to answer a question posed in the negative (i.e., prove there is *no* God, or prove water *isn't* wet). But since you seem to be asking again now, I'd answer by saying I don't know. I don't know because you'd need to first define what you mean by "know" and then define what you mean by "computer."
I certainly do some "computing". But the root of this thread, as I understand it, is that there's a difference between computers and brains, in that brains (or maybe "mind" instead) have something you and I call "understanding" of "meaning" and "knowledge" of the "aboutness" of what the "computing" is doing. This *is* in fact the contention. These terms I've quoted are the things I'm referencing to show we are *not* just computers---that they're somehow different than raw computation. Please note, I enclosed those things in quotes because I'm not sure either of us are using the same definitions for any of those things. So in order to answer your question more clearly, you'll first have to define for me what you take to mean by those words/concepts I've just quoted.
Would it be fair to ask you how do *you* know that we *are* nothing but computers? Please respond genuinely.
This is why I'm afraid of the dangers of semantic game-playing. I'm trying to be fair here, and my intent is to genuinely ask you to frame your meanings so that I can give you an accurate reply to your questions.
Next, you go on to ask:
"How do you know that I know anything at all? I merely *appear* to know things."
If we can't agree on what "knowledge" is, then we can't pursue the questions and answers any further. The implication I was left with was that your question was casting doubt on what "knowledge" is in the first place. And if that's what you're doing, I would still ignore the question because it really is a different topic altogether. Therefore I instead linked to Jason Pratt's arguments. I had hoped that that link would be sufficient to answer your question. Besides, my layman's understanding is that appealing to an absurdity is an appropriate answer to some questions (I've no formal training in philosophy, so please correct me if I'm wrong). If I ask you "how do you know that the sky doesn't reside in the ocean," I'd imagine that you'd appeal to the absurdity of the question. I think Mr. Pratt's examples clearly show the absurdity of the implication of your question.
Now, to your statement:
"Correct me if I'm wrong, but you are not suggesting any criteria or mechanism for aboutness. You're just expressing your intuitions."
You're right, I'm *not* suggesting any criteria or mechanism for aboutness. I don't even know yet what *you* mean when you say aboutness. I am also *not* expressing intuitions. I'm expressing knowledge of how computers work (HTMs excluded here because you and I are still debating whether or not HTMs do anything differently than the traditional model of computing). My motivation has been and continues to be to reinforce *Victor's* criteria of aboutness by filling in details in his analogies to computers.
Now, in the interest of fairness, you haven't answered my question either so I'll ask again. When you say "We might have to give the computer a corporeal existence (or a simulated corporeal existence) before it will understand what the plays are actually about, but it could be done."
How do you know it could be done?
DL & Anon
I think you have struck upon an important aspect of the issue at hand. The "other minds problem" has been around for a long time, but has received a flurry of attention in the last decade or so. I think we do make a leap when we attribute consciousness to other human beings based on their behavior, because we cannot crawl inside someone else's mind and experience their consciousness.
This has to do with the argument from reason because most of us believe that there is a type of reason that can only be performed by conscious subjects. For example, I don't think that an unconscious person can understand, much less evaluate, the kind of arguments we are making in this blog. For convenience, I will call this type of reason "reflective reason," of which intentionality is a feature.
Computers can already do things that humans can only do when conscious. For example, a human has to be conscious to play chess at the level of a grand master. The same can be done by computers that (I think) all of us doubt are conscious. Therefore, computers can play chess at a sophisticated level without reflective reason. Perhaps computers some day will be able to comment afterward on a game of chess they have just played in the way that humans can do now. But if a computer can play championship chess without being conscious, isn't it conceivable that it could also be programmed to comment on a chess game without being conscious, that is, apart from reflective reason? I'm not sure anyone can say what limits there are to what computers can do absent consciousness and reflective reason.
There are a few ways to look at consciousness and behavior. We can look at behavior as evidence of consciousness in a larger context containing other clues about whether consciousness is likely to be present. In this view, it is at least conceivable that conscious-like behavior could exist as a kind of simulation apart from consciousness. Another view is that conscious behavior is distinguishable from consciousness itself, but the two are linked in some necessary fashion, so that one cannot exist or perhaps even be conceived to exist without the other. Finally, we might identify consciousness with a certain type of behavior and claim that any such thing as "consciousness" as distinct from that behavior is illusory.
I think that behavior is an evidence of consciousness, but is not consciousness itself and, further, that there is no logical or metaphysical necessity linking the two together. This is the view of some specialists, famously John Searle and David Chalmers. It is also the view, for example, of Stevan Harnad, the director of the Cognitive Science Center at the University of Southampton. Harnad gives a concise argument for his opinion in "Turing Indistinguishability and the Blind Watchmaker," the first chapter in a compilation of consciousness essays titled Consciousness Evolving (John Benjamins, 2002). BTW, to the best of my knowledge, Searle, Chalmers and Harnad all happen to be secularists in terms of their worldview.
If consciousness is required for reflective reason and consciousness lacks a behavioral or functional-mechanical definition, then reflective reason also lacks such a definition. Moreover, whatever else can be said of "aboutness," it is indispensable to reflective reason. We think "about" things only when we are conscious, at least in terms of our ordinary usage of the word.
anonymous,
When you say "We might have to give the computer a corporeal existence (or a simulated corporeal existence) before it will understand what the plays are actually about, but it could be done."
How do you know it could be done?
All of the evidence shows that we ourselves are thinking machines. I think that mind reduces to brain just as water reduces to H2O.
Furthermore, I reject your arguments that a machine cannot possibly be conscious as we are. Therefore, I conclude that it is possible.
Finally, if we suppose that "aboutness" derives from constructing models of experience that are verified by experience, then I think this is a relatively straightforward thing to implement in software. IOW, there are some very good models of consciousness out there, and if we had sufficiently powerful machines, we could test them. With current machines, we could only test them at a million times slower than real time, which makes testing rather difficult. In 20 years, desktop PCs will have enough power to run human-class intelligence in real time.
So, in summary, evidence shows that we are machines, there are promising models for how we work, and no good arguments that we are not machines.
darek,
If consciousness is required for reflective reason and consciousness lacks a behavioral or functional-mechanical definition, then reflective reason also lacks such a definition. Moreover, whatever else can be said of "aboutness," it is indispensable to reflective reason. We think "about" things only when we are conscious, at least in terms of our ordinary usage of the word.
Congratulations. I don't think even I can find anything to argue with in there, and that's saying something! :)
Would you like to take a crack at my question?
How do you know your thoughts of deciduous trees are "about" deciduous trees and not about, say, the rules of chess?
dl, I imagine then that we're simply at an impasse.
You say: "All of the evidence shows that we ourselves are thinking machines."
To which I would respond that the key word in your comment that I am taking issue with is "thinking". What is thinking, and what is this "evidence" you suggest that shows machines can do it?
You go on to say: "Furthermore, I reject your arguments that a machine cannot possibly be conscious as we are."
To which I respond, I never said that. I said "If as you suggest we created a computer that 'can model its experiences' or has a 'corporeal existence' it would still only *appear to us* to have these experiences. There’s no way for us to know that *the computer* actually *knows* anything at all."
So, while it might be possible to make a computer "look" like it's "thinking," there's no way for us to know that it really is, and presently computers simply store ones and zeros and output what a human mind has put into them. We don't know "what it is like to be a computer," and certainly not what it is like to be one that thinks. Whereas we do know "what it is like to be a thinking person."
I imagine your argument could be that there's no difference, or you might ask me to prove there's a difference. To that, I would again point to Jason Pratt's illustrations as to why there is a difference. I encourage you, if you haven't already, to read his whole argument at my previously linked post.
Finally, you state:
"So, in summary, evidence shows that we are machines, there are promising models for how we work, and no good arguments that we are not machines."
I suppose this appears to me as begging the question. Are you not assuming to be true, exactly the point that's in contention? I could just as easily state: "evidence shows that we are not machines, and no good arguments that we are machines."
In truth my questions are more rhetorical since again I think we're at an impasse.
In any event, thanks for having replied and for the engaging discussion.
DL
Some propositions must be affirmed because their denial is incoherent. If you didn't know what you were thinking about when you posed your question and I don't know what I'm thinking about as I answer it then we are not having a rational discussion. Your question might really be about the taste of cabbage and my answer might really be about field hockey. It's like asking, "How do you know we are able to think rationally?" If we were unable to do so to any significant extent we could not understand the question.
Some additional observations: we don't need to know everything about what something is to say with considerable confidence what it is not. For example, I do not know a great deal about what a quark is, but I know it's not a hammer or a hairdryer. Likewise, we need not know exactly how a process works to know how it does not work. It was unclear a century ago how microbes caused fever, but it was known that they did not do so by applying little blowtorches to the underside of a patient's skin.
How do I know, say, when I am feeling a toothache? Because I experience it. What is the exact nature of this experience beyond a subjective impression? That's a difficult question for me to answer. But it's clear to me that I do not know that I feel a toothache because someone else tells me that I do or because I happen to read somewhere that I am feeling it. My experience of a toothache lacks certain characteristics of an objective report of a circumstance. Whatever someone relates to me can either be false or be misunderstood by me even if I am in a rational frame of mind, but the fact that I feel my tooth to ache (if I did, which I'm grateful to say I don't) is not one that I can be mistaken about if I am rational. Such is the distinction between the objective and the subjective.
Knowing what I am consciously thinking about is somewhat like the feeling of a toothache. It's just not something that I can conceivably be mistaken about if I'm in a rational frame of mind. But that fact alone has a critical implication. A putative objective fact can always conceivably be deceptive, mistaken or misunderstood. For example, I may say with a high degree of confidence that my leg hurts. But if I'm lying in a hospital bed after a car crash, feeling my leg to hurt is not a guarantee that my leg hurts. A doctor may come into the room and inform me that my leg had to be amputated, and what I am experiencing is phantom limb pain. I was not--and could not be--mistaken about the subjective information (that I indeed experienced the feeling that usually accompanies a hurt leg) but I could be mistaken about the physical circumstance accompanying the feeling.
At a distance I may mistake a telephone pole for a tree. But I cannot be mistaken in that case that I am thinking about what appears to me to be a tree.
In Anton's Syndrome (aka blindness denial), blind patients are not aware of their blindness. This suggests that the apparent certainty we have about our experiences is not so airtight. I agree that statements about experience do have a certain degree of buffering from the normal claims about the external world, but all the same, claims about experiences are not unmediated by theory, concepts, and the parts of the mind that interpret and integrate these experiences into a story about the world. Sometimes these higher-level structures are just wrong, even about experience.
A not-too-convincing argument against the reliability of introspection is also found here. I mention it just for completeness, not because I endorse most of it. There is extended critical discussion of the article here (extended for the blogosphere, not for reality).
Abstract: We are prone to gross error, even in favorable circumstances of extended reflection, about our own ongoing conscious experience, our current phenomenology. Even in this apparently privileged domain, our self-knowledge is faulty and untrustworthy. Examples highlighted in this paper include: emotional experience, peripheral vision, and the phenomenology of thinking. Philosophical foundationalism supposing that we infer an external world from secure knowledge of our own consciousness is almost exactly backward.
BDK
Illusions and delusions take many and surprising forms, and none of us is immune to them.
Let me put it this way: There is a baseline level of the reliability of our evaluation of our own experience that is required for things like studying illusions and delusions, for carrying out cognitive tests on subjects and knowing we are doing so, for writing papers on various aspects of cognition and knowing that the papers express our ideas, memories and thoughtful analysis of the topics of the papers. Skepticism about introspective knowledge can only be embraced to a certain point, beyond which it becomes self-refuting and therefore incoherent.
darek,
I don't think that you and BDK are addressing the issue that I raised.
I agree that we have to take our experiences as true. Not the interpretations of those experiences, but the experiences themselves. I cannot dispute the fact that I feel cold, even if I can dispute the fact that it is cold. Indeed, I think the axiomatic nature of our experiences is necessary for rationality.
However, I'm addressing aboutness or reference. How it is that I know a thought is about one thing and not nothing or another thing.
In the last days before my dissertation deadline, I was working 16-20 hours a day, typing and finalizing my work. One morning, my girlfriend came in to wake me up, and I told her "Don't get it wet before it's dry!"
Naturally, she was puzzled. She asked what I was referring to. I said "Integrals."
Sensing she was still puzzled, I said "I know what I'm talking about!"
Followed shortly thereafter by "I don't know what I'm talking about!"
This is the kind of confusion that I think has bearing on the aboutness problem. I recognized that I was confused because I could not say what experiences or potential experiences correlated with my thoughts.
This only occurred because I was semi-conscious. The conscious mind likes coherence, and tends to suppress thoughts that are about nothing. I think that thoughts that aren't recognizably about things are recognized as confusions, not as thoughts that fail to dereference. IOW, we don't think "that's a thought about nothing," we think "I'm confused."
Once we make this connection, I think it is easy to see that material minds can refer to things in the same way that we can. A material mind can give a name to an abstract pattern, and that pattern will be about the world (or about its internal thought processes) as long as it can recognize experiences or potential experiences that the pattern represents.
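To make the proposal concrete, here is a deliberately toy Python sketch (the class and every name in it are my own illustration, not a theory of mind): a "concept" is a named pattern plus a test for whether an incoming experience fits it, and on this account the concept is "about" whatever that test would pick out.

class Concept:
    def __init__(self, name, recognizer):
        self.name = name
        self.recognizer = recognizer  # a predicate over experiences

    def matches(self, experience):
        # The concept is "about" whatever experiences it would recognize.
        return self.recognizer(experience)

# An "experience" is crudely modeled here as a set of observed features.
tree = Concept("tree", lambda e: {"trunk", "branches"} <= e)

print(tree.matches({"trunk", "branches", "leaves"}))  # True
print(tree.matches({"pawn", "rook", "bishop"}))       # False: not about chess

A pattern that matched nothing recognizable would register as confusion, exactly as in my dissertation story above.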
If we just say that aboutness is magical, then we're just conceding that we don't know how it works, not that machines can't think about things like we do.
The AfR seems to make the mistake of not rigorously identifying what it means to think about a thing. And if you don't make that explicit, not even for human thought processes, it's not surprising that there appears to be a causal gap. It's the same causal gap that exists between the material and any magical process or property.
DB: I wasn't advocating skepticism about our judgments about experiences, but countering the claim that we cannot be wrong about them.
DL: I don't think experiences are true or false, as those are properties of propositions. I don't experience propositions, but nonpropositionally structured fields of experience. Is an experience of a tree true?
BDK,
I don't think experiences are true or false, as those are properties of propositions. I don't experience propositions, but nonpropositionally structured fields of experience. Is an experience of a tree true?
Sorry, I should have been more explicit. I should have said that propositional expressions of our experiences are axiomatically true. Although, I think one might be able to go further than this. The laws of non-contradiction and identity are more general than propositional calculus. Perhaps the logic necessary for induction (in turn necessary for rationality) requires that representations of experiences be true.
Sorry, I should have been more explicit. I should have said that propositional expressions of our experiences are axiomatically true.
This seems wrong. See my post about blindness denial (these people believe they are not blind!) and the other reference I mentioned.
I can provide an easy example. A minute ago I thought I felt a tickle on my back, but then I realized it was an itch. I was wrong when I thought I was experiencing a tickle. People often confuse different experiences. I typically take people more seriously when they are describing the world around them (e.g., there is a computer on the desk) than when they are describing their experiences.
"I can provide an easy example. A minute ago I thought I felt a tickle on my back, but then I realized it was an itch. I was wrong when I thought I was experiencing a tickle."
Really? How can you be sure that you didn't experience a tickle followed by the experience of an itch?
The experience didn't change over time. For a time, I thought, man, that little pain sucks, and then I realized that it wasn't a pain but a mild itch, without a shift in the quality of the sensation: I was simply wrong about the quality of the sensation (though I know you can get into all sorts of questions here: maybe my memory is wrong, or we can't differentiate a 'stalinesque' versus 'orwellian' strategy in recalling experiences).
The point is that our judgments about our experiences are fallible, and not always in the 'I thought I saw a tree but it turned out to be a donkey' way.
People who have accidents and are blind will sometimes say they see things, and blunder around the house bumping into things, so deep does their delusion run. While they are pathological cases, they show the dissociation between experiences and claims about experiences that I am after. (And note the evidence suggests they are confabulating, not hallucinating.) Interestingly, by removing a tumor from one blind-denial patient, his denial went away and he was able to start to get help with his blindness. (Discussed here.)
Like I said, I am not a skeptic about claims about experiences, and I do think they enjoy a certain buffering from falsification using claims about the world (I can't demonstrate that you aren't experiencing a pink unicorn by telling you there is no such unicorn out there). So in that sense there is an asymmetry between experience-claims and external-world claims.
This is sort of a side track, I think, from the main points. Is anyone a Cartesian foundationalist anymore? Is it a coincidence that science starts with the world, that if you pick up any scientific journal they describe pointer positions, things measured out in the world, rather than experiences? While scientists certainly use experiences, it is not judgments about experiences that are the foundation. The fulcra in science are claims about the world, claims made with the help of experience, but not about experience.
Even if I allow for purposes of argument that claims about experiences are infallible, that doesn't imply that such knowledge starts with such claims (as an empirical fact, it does not), or that all knowledge is ultimately justified by such claims (as an empirical fact, again false).
Perhaps some more creative readers can think of other cases of incorrect claims about one's experience, either from the clinical literature or from personal experience.
I can understand how an experience can be mistaken, as in the case of the blind person. We can say he is mistaken in this situation because we have objective evidence to the contrary. But isn't he still having the experience of thinking that he can see?
So in your earlier example, it turned out that the initial experience of a tickle actually was caused by an itch that you were able to scratch. But I fail to understand why that would require me to believe that you didn't actually have the experience of a tickling sensation.
Are we using experience in different senses here?
By the way, I am not a Cartesian dualist or any kind of mental dualist for that matter. If I'm to be placed in any kind of category regarding the philosophy of the mind I presently am very sympathetic towards the view Arthur Collins explicates in his book "The Nature of Mental Things": that all mind-identification theories are doomed to failure.
But isn't he still having the experience of thinking that he can see?
Yes, but his thought that he can see is wrong. If that's true, and it would be nice to have a more detailed case study to discuss, then honest claims about experiences are not infallible.
As for the itch/tickle: I was just wrong about the experience. I thought it was a tickle sensation but it wasn't. I don't know how else to put it: the raw feel, the sensation, the experience on my back was the same, but I misidentified it.
You could say that demonstratives would be immune to such errors 'That experience there.' But then we are left with something pretty empty.
The real ontological problems are with the existence of any experience, and the claim that thoughts about experiences are infallible is a diversion that clouds the hard problem. It is injecting bad epistemology into interesting metaphysics (and I stand corrected, anonymous, as this isn't your angle).
DL
>Once we make this connection, I think it is easy to see that material minds can refer to things in the same way that we can. A material mind can give a name to an abstract pattern, and that pattern will be about the world (or about its internal thought processes) as long as it can recognize experiences or potential experiences that the pattern represents.<
An abstract pattern by its very nature is not one we can identify with a particular physical state; such is simply not the nature of "abstract." If a material mind, say, a Turing machine, consists of a physical structure engaged in a process which can be characterized as no more than a succession of physical states, then I don't see the justification for including abstractions in the causal story of what the machine does--unless we do so in terms of the thought processes that produced the machine in the first place. To us a set of impulses in the machine might represent an abstraction, but the impulses need not represent anything to the machine in order to play their causal role in the succession of physical states of the machine.
In representation, one thing is seen to represent another, different thing. But when it comes to the things that play a role in a physical process, causal potency resides in what those things themselves are, not in other, different things that they might represent.
I can say that when a cold virus enters my body, my immune system gives a name to an abstract idea of the virus (produces an antibody that mirrors the virus as a "type") by which it can recognize (mount a response to) the virus if it invades my body again. But the crucial element of conscious understanding that is present in intentionality is unnecessary to the physical explanation of immune response. My immune system does not, as far as I can tell, consciously understand the structure of its antibodies to represent the structure of the virus, nor does its response to a repeat encounter with the virus amount to an act of conscious recognition.
In other words, automation can mount behavioral responses that mimic our conscious acts of understanding, for example, that certain things represent other things and that thoughts are about the objects of the thoughts. Conscious understanding does produce certain behavior, but does this mean that any mechanical process that can produce the same, or substantially similar, behavior must entail "conscious understanding"? Daniel Dennett along with quite a few others seems to think it does, but I myself don't find that view persuasive.
darek,
It seems to me that you've only shown that not every mechanistic or algorithmic system has "aboutness".
To us a set of impulses in the machine might represent an abstraction, but the impulses need not represent anything to the machine in order to play their causal role in the succession of physical states of the machine.
The more that I think about this, the less I see this as a viable objection.
Suppose that subjective representation (what one intuitively sees as a representation) is just awareness of one's mechanical representation.
In that case, we can consistently say that a music box does not have subjective representations, but does have basic physical ones.
In contrast, a human is (partially) aware of the representations she is using. Indeed, that ability for self-awareness or reflection seems to be the difference between humans and most other animals.
In representation, one thing is seen to represent another, different thing. But when it comes to the things that play a role in a physical process, causal potency resides in what those things themselves are, not in other, different things that they might represent.
I don't see where this argument can go.
In order for thought to play a role in physical processes, thought too must have the causal potency of a physical process. In that case, we could just as well argue that our thoughts aren't proper representations either, but are side-effects of some unknown physics underlying thought.
If you argue that thoughts have physical, causal potency without being physical, then you are just saying that thought is inexplicable, and that's a very strong negative claim.
But the crucial element of conscious understanding that is present in intentionality is unnecessary to the physical explanation of immune response.
You are citing an example of a system where conscious (or reflective) understanding of a mechanism is unnecessary to its physical functioning.
However, this says nothing about the possibility of physical systems that might utilize reflective understanding in order to perform their physical function.
Conscious understanding does produce certain behavior, but does this mean that any mechanical process that can produce the same, or substantially similar, behavior must entail "conscious understanding"? Daniel Dennett along with quite a few others seems to think it does, but I myself don't find that view persuasive.
You are saying that you believe zombies are possible. However, that's not enough for the AfR. You have to show that all physical thinkers are zombies while showing at the same time that we humans are not.
It seems possible to me that one could have a physical machine that uses reflection as a causal mechanism of its functioning. What's wrong with that?
Then all the problems are pushed back onto 'awareness of internal states' and 'reflection', which is another way of saying 'consciousness' which is what the nonnaturalists are asking about. And there is no good naturalistic account of awareness/consciousness on the table. (Good in the sense that it is well worked out conceptually and empirically, and which actually seems to address the hard problem).
DL
You wrote:
>The more that I think about this, the less I see this as a viable objection.
Suppose that subjective representation (what one intuitively sees as a representation) is just awareness of one's mechanical representation.<
The problem is that as a general rule the properties of representations are different than the properties of what they represent. So that if we have a representation of a premise, and the properties of the representation generate the representation of a conclusion implied by the premise, then we do not have a case of the conclusion resulting logically from the premise. This is a result of physicalism that just is not coherent, because the one thing we cannot do is adopt a philosophy that indicts our own reasoning powers.
Even if we are aware of what the representations stand for, if the properties of these representations are in the driver's seat, causally speaking, then we do not have premises generating conclusions.
>I don't see where this argument can go.
In order for thought to play a role in physical processes, thought too must have the causal potency of a physical process. In that case, we could just as well argue that our thoughts aren't proper representations either, but are side-effects of some unknown physics underlying thought.<
Well, the argument shows that rational thought can be conditioned by the physical but not determined by it. Thought, not matter, is the substrate of reality. Some form of dualism or idealism obtains, which happens to conform better to theism than atheism--which, in turn, seems like something worth knowing.
>You are saying that you believe zombies are possible. However, that's not enough for the AfR. You have to show that all physical thinkers are zombies while showing at the same time that we humans are not.
It seems possible to me that one could have a physical machine that uses reflection as a causal mechanism of its functioning. What's wrong with that?<
Well, zombies are only possible to the extent that computation could simulate the behavior of conscious subjects. Obviously, this is a big subject that has become something of a trademark for David Chalmers. But the argument I employed above does attempt to do what you require. Any "physical thinker" will generate representations of logical inferences; but these representations will necessarily have been generated by the properties of other representations rather than the properties of those premises or logical grounds that such precursor representations stand for--and hence the process will fall short of reflective reason. This is just another way to get at the point Searle makes with the Chinese room argument.
I'm not sure that content-epiphenomenalism is that bad. That is, only the representational vehicle (the meat doing the representing) has causal influences in a system, while the representational content (the content of that vehicle) doesn't actually have proximate causal influences.
(To clarify representational vehicle versus representational content (a key distinction): the word 'mom' has certain vehicle-properties: it is three letters long, is black on a white background, and certain content properties: it refers to females that have had offspring).
I'm not sure content does have influence proximally: think of twin earth. My twin and I are thinking different things (water versus t-water) but are behaviorally indistinguishable. Perhaps, as I discussed here in a very detailed examination of this question, proximal-epiphenomenalism is right, but the content gets its explanatory bite from its role in a historical explanation of how the system got to be set up a certain way.
darek,
You are arguing that a physical computation reaches a conclusion according to microphysical causation, and that this causation excludes the possibility that the computer reached its conclusion for non-microphysical reasons.
I don't think this is justified. From a materialist perspective, we evolved the ability to think rationally because it is the only way to deeply understand a logical world, and because that deeper understanding has selective advantages. Material brains may work as a consequence of physics, but that doesn't mean that they don't also work because they are rational as well as deterministic.
Suppose we build a robot that processes sensor information and navigates around a maze using logical rules. Your claim is that the computer brain of the robot is not reaching logical conclusions because it sees the logic, but merely because the electrons in the CPU must do what they do. However, if the CPU stops being logical, there is a real consequence. The robot will get lost (and not recharge, say).
In nature, any being that relies on logical representations will die if it cannot be (mostly) logical. Therefore, in an evolutionary scenario, the physics of brains is selected for its ability to create accurate maps of a logical terrain, and to reason logically about that terrain. Yes, at a given instant, a brain thinks logically because the physics demands it, but that physical system has been conditioned to think logically by its history. In effect, logic is a part of our environment. There is a causal connection between reason and the mechanism that reasons, even if that causal connection operates over a term much longer than any single computation.
Just to clarify this last point. A syllogism as a concept may be timeless, and perhaps non-physical, but that isn't the extent of logic in the world. The world is mostly logical in the sense that it is consistent (non-contradictory). A system that models the world must replicate that structure in a generalized way. Systems that have evolved an ability to generalize about logical systems are also conditioned or selected for their ability to faithfully reproduce syllogisms.
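For concreteness, here is a toy version of that robot in Python (every detail invented for illustration): a right-hand wall follower in a small grid maze. If its rule is intact, it reaches the goal; corrupt the rule, and it wanders, which is the "real consequence" I mean.

MAZE = [
    "#########",
    "#S..#...#",
    "##.##.#.#",
    "#.....#G#",
    "#########",
]

DIRS = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # E, S, W, N as (row, col) deltas

def find(ch):
    # Locate a marker character in the maze.
    for r, row in enumerate(MAZE):
        if ch in row:
            return (r, row.index(ch))

def is_open(pos):
    r, c = pos
    return MAZE[r][c] != "#"

def solve(max_steps=200):
    pos, goal, d = find("S"), find("G"), 0
    for step in range(max_steps):
        if pos == goal:
            return step  # reached the charger
        # The "logic": prefer a right turn, else straight, else left, else back.
        for turn in (1, 0, -1, 2):
            nd = (d + turn) % 4
            nxt = (pos[0] + DIRS[nd][0], pos[1] + DIRS[nd][1])
            if is_open(nxt):
                pos, d = nxt, nd
                break
    return None  # the rule failed: the robot is lost

print(solve())  # number of steps to the goal, or None if lost

Change the turn preference to something incoherent and the robot never recharges. Selection pressure of exactly that kind is what I mean by logic being part of the environment.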
BDK,
Yes, there are lots of things to be discovered about awareness, reflection and consciousness.
However, if the arguments of dualists amount to "you haven't explained that yet," then they have no strong basis for claiming that mind fails to reduce to physical brain. Their arguments only have power as long as they can prove that the search for a material explanation will necessarily (or very probably) be fruitless.
My point is that dualist arguments aren't that strong. At best they reserve a little corner of doubt in the face of overwhelming evidence that minds reduce to brains.
Not only do they fail to cast enough doubt on material minds, but they also fail to explain anything at all. If minds are not material, what would we naively predict?
We would predict that, for example, material brains would lack the mechanisms necessary for computation. For example, maybe brains would be all blubber. That way, the immaterial mind would just talk directly to the nerve endings as needed. Of course, this isn't what we observe. We observe that almost all of the functions of minds reside in physical brains. With each one of these discoveries, dualists need to fine-tune their dualism to be of the variety in which the material brain implements X or Y or Z rather than the immaterial mind doing so. Eventually, dualists propose that, of the many possible ways that minds could have been implemented in humans, they were implemented in the one way compatible with physicalism. That's a stretch. It may be rational to hope that dualism is true, but it's not now rational to believe that it is likely to be true.
DL: I agree. I think more science is needed, probably about 50 years worth, before these discussions about consciousness can be more principled. If the ratio of inference to data is too high, the discussion leaves too much freedom for clever philosophers to make one another believe in BS.
I don't know where that leaves theology.
DL: I think your answer to darek was on the right track. It is basically what I said: Perhaps [...] proximal-epiphenomenalism is right, but the content gets its explanatory bite from its role in a historical explanation of how the system got to be set up a certain way.
DL
>Suppose we build a robot that processes sensor information and navigates around a mazes using logical rules. Your claim is that the computer brain of the robot is not reaching logical conclusions because it sees the logic, but merely because the electrons in the CPU must do what they do. However, if the CPU stops being logical, there is a real consequence. The robot will get lost (and not recharge, say).<
Ants construct nests according to good architectural principles, but not because ants understand architectural principles. Does that mean that there is no such thing as understanding of architectural principles? Or that there is no such thing as building things a certain way because of an understanding of architectural principles?
True, advantageous behavior does not necessarily entail rational insight. But that fact just undermines an evolutionary explanation of rationality. If rational insight is not necessary to advantageous behavior, why do we possess it? We could be automatons and get along just as well. However, even if there are behaviors for which rational insight is necessary, that by itself does not prove that rationality is physical (see below).
>A system that models the world must replicate that structure in a generalized way.<
Modeling the world presupposes the causal efficacy of propositional content as opposed to the physical properties of representations of the world--physical properties, micro or not.
BDK
Even arguments regarding twin earth presuppose that the content of propositions regarding twin earth, rather than just the properties of the artifacts that represent those propositions, exert causal influence in the production of beliefs.
Epiphenomenalism regarding content is problematic at best. For one thing, we know that the content is there and what the content is. The content has sufficient causal efficacy to be the object of our knowledge, but not to be the grounds for an inference? Can I understand that the content logically implies a belief, but not accept the belief because of that understanding? Now, it's a different story with epiphenomenalism of rationality versus behavior, which is more difficult to refute.
I know we will continue to disagree on this, BDK, but historical explanations depend upon content having causal potency. For example, say we have a "meat dog" representation in the computer that is driving computation through its physical properties, not its content. We can argue that content exerts influence because the content in our minds caused us to create the representation and place it in a particular physical context. But this only works if content exerts influence on our own minds in the first place, otherwise we have infinite regress. More broadly, we must assume that the contents of arguments for historical explanations are the potential objects of rational insight and resultant belief. So historical explanations cannot dispense with rational insight as a causal factor.
Suppose someone with malevolent intent slips a psychoactive drug into my coffee unknown to me, and the drug causes me to become paranoid and believe myself to be in danger (which I am). Let's say the paranoia even causes me to believe that someone has spiked my coffee. My belief that someone spiked my coffee is caused by the physical circumstance of someone having spiked my coffee, but not by a rational understanding of the circumstance. To preserve rationality it is not enough that a true belief be caused by physical circumstances. But in naturalism physical circumstances are taken to be sufficient for any causal account. A chemical compulsion is never a good reason to believe, but chemical compulsions are all naturalism has room for--advantageous compulsions, perhaps, but compulsions rather than insights. This shortcoming is not a niggling, peripheral issue for naturalism. It is a fundamental conceptual issue, and as a conceptual issue it does not entail an information gap that advances in neuroscience could possibly close.
It is one thing to say that nature selects for rationality, and that what nature selects for in general is physical. It is another to say that because nature selects for rationality (i.e., because rationality is advantageous), rationality therefore must be physical. This is a non sequitur.
There are other assumptions which if indulged trounce the AfR out of the gate, such as:
1) Only uniformly physical causal descriptions are real causal descriptions.
2) Science can overturn any conclusion established through argument.
You cannot assume 1) and at the same time claim that it is established by argument. And 2) is self-refuting if we claim it to be established by argument; if we don't make that claim, then on what basis do we recommend it?
DL
I realized that I should respond to your points about the physical conditioning of mental processes and about naive predictions. On the first point, consider a radio. When we interfere with the components of the radio in various ways, we generate specific types of distortion of music and voices coming from the radio. But that is not proof that music and voices originate in the components of the radio. This is just an illustration, but in rational thought there is a normative influence that cannot be physical and yet is present only when certain complex physical processes are operating.
As for naive predictions, I have my doubts that there are any. That's why this is a conceptual problem, not a technical one. Neuroscientific predictions based on current findings are unaffected, because neuroscience must necessarily confine itself to the physical conditioning of mental processes.
The question from naturalists tends to be, "If we admit that rational thought cannot be exhausted by a physical account, but we don't get different scientific predictions out of that admission, then what use is it?" Its use is telling us something worth knowing even if that something does not affect scientific predictions.
Darek, it appears that you are begging the very important question of whether or not there is such a thing as mental causation. And if there even is such a thing, what is it?
>Even arguments regarding twin earth presuppose that the content of propositions regarding twin earth, rather than just the properties of the artifacts that represent those propositions, exerts causal influence in the production of beliefs.<
I don't understand. Since Twin Earth is a useful way to see the plausibility of content-epiphenomenalism, I'd like to understand what you are saying here against my TE argument.
darek,
>True, advantageous behavior does not necessarily entail rational insight. But that fact just undermines an evolutionary explanation of rationality. If rational insight is not necessary to advantageous behavior, why do we possess it? We could be automatons and get along just as well.<
We do not believe that ants are reflective, nor are they even intelligent when it comes to architecture. So, they're not a good model for human intelligence.
I suspect that you agree that evolution could build ants. Ant nest architecture has survival benefits, yet ants are not aware of what architecture is. They may only be aware of certain environmental metrics and have a (physically) compulsive urge to act to fix those metrics, and that results in the construction of a complex nest. There are no ant architects that plan the nest under construction because that would involve logical modeling. Rather, some simplistic local rules are employed, and these lead to observed nests.
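To make the "local rules" idea concrete, here is a toy sketch of my own construction (not a model of real ant behavior, and not anything DL specifies): a single fixed rule--deposit material where material already is--produces markedly uneven, structured piles that no individual agent planned.

import random

def deposit(heights):
    # Local rule: drop a grain at a site chosen in proportion to how much
    # material is already there (plus 1, so empty sites can seed new piles).
    weights = [h + 1 for h in heights]
    site = random.choices(range(len(heights)), weights=weights)[0]
    heights[site] += 1

heights = [0] * 20
for _ in range(2000):
    deposit(heights)
print(heights)  # the piles end up markedly uneven, though nothing planned them

No agent in this sketch stores a blueprint or inspects the whole field; the global pattern falls out of blind repetition of the one rule.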
Over millions of generations, the behavior of the ants has been refined by natural selection. The implicit logic of nest-building in ant colonies has been conditioned at a generational level, not at an individual, synaptic level. Consequently, when challenged with a very different environment (e.g., with a dangerous predator, different soil, zero G, or whatever), individual ants cannot intelligently adapt. They have no way to generically figure out the rules of a complex system within the same generation.
In contrast, human individuals have the ability to make logical models of the world (and test them) all within one lifetime. There's a huge survival advantage in this. Thus, rational insight is necessary to at least some advantageous behavior. (Just look at the survival advantage that humans have from their abilities.)
I think we have a physically compulsive urge to model the world more or less accurately/logically. And I don't see any inconsistency in that.
>When we interfere with the components of the radio in various ways, we generate specific types of distortion of music and voices coming from the radio. But that is not proof that music and voices originate in the components of the radio.<
Yes, but when we open the antenna housing and find singing people inside, we don't then suppose that there might be invisible singing spirits elsewhere who merely animate the singers. We already have all the components necessary to perform the observed functions of producing singing sounds.
>That's why this is a conceptual problem, not a technical one.<
...
>Its use is telling us something worth knowing even if that something does not affect scientific predictions.<
I think the problem is one of special pleading.
You could say that water does not reduce to H2O, and that there is some spirit of water that fills in certain gaps in the scientific model. The same goes for many other scientific reductions.
However, you say that even a perfectly predictive scientific model could not convince you that mind does not reduce to brain.
Isn't that special pleading?
BDK
I'll admit that I have only a tentative grasp of the twin earth argument as it relates to reflective reason, but in a previous post you summarized it as follows:
>>The first, and more popular response (taken by Fodor for instance) is that content is indeed epiphenomenal. The 'syntax' (causal whirrings) of the machine determine its behavior, and the content of the internal states (the semantics') do not affect it.
While this may seem unpalatable, a little reflection should show that it is quite reasonable: Derek and Tderek (your twin on twin earth) are expressing different contents when they use the word 'water', but their behavior will be the exact same. This is because the referent was fixed via past interactions with water and twater.
Contents are fixed historically (hence the references to memory above), but it is only present pushes and pulls that determine behavior. This doesn't mean the contents aren't real, or a mere 'heuristic overlay' (Dennett) on the synchronic causal story: it means that looking for contents to be proximate causes of things is to make a category mistake. It is to mistake the proximal causal story with a historical explanation.<<
The AfR strictly speaking has to do with justification of beliefs, not necessarily behavior. If a belief is considered as an internal state, then beliefs may tick along in some kind of parallel fashion with behavior. Rational insights would occur, but behavior seemingly generated by those insights would actually be driven by computation. The question of the connection between thoughts and behavior is not the subject I'm focussing on, at least not directly.
Leaving aside behavior, the AfR starts with the fact that sometimes thoughts occur because they are logically implied by previous thoughts. Logic, if not water, is the same on earth and twin earth. If the sequence in which thoughts occur depends solely on the physical relations between representations of thought contents, then it cannot depend upon logical relations between the contents themselves--yet it must do in reflective reason. Perhaps I just need you to connect the dots for me as to how the history of the representations changes the fact that they have solely physical characteristics for purposes of interaction. I suspect that the idea is that given a certain causal history, a particular representation must have a certain content in a certain individual's mind. But if this is the gist of the argument, there seems to me to be an elephantine assumption--that physical events/states ever need to be seen as having content (being representations) in a physical account.
Given the history of a robin as related to worms, the robin presumably could think only certain things about worms--assuming that robins think about things. Likewise, given the history of the same robin, a certain event sequence might necessarily represent a worm--assuming anything represents something other than what it is to a robin. If a robin could deliberately think about how and where to catch worms, it would do many of the things that robins in fact do to catch worms. The theory that robins think about worms would have some explanatory traction. But we recognize this as an anthropomorphic projection--ultimately unwarranted in spite of its explanatory appeal. I think that attributing representations to unconscious systems is a subtle version of this error.
An artifact in a computer does not imply another artifact, it causes it. The causing of one artifact by another can represent the implying of a conclusion by a premise, but it represents it only to us. The artifact doesn't need to represent anything to the computer, regardless of its history. It is because of the history of the computer that the causal event represents to us something it is not, namely, a logical implication. We see it as representing. Now suppose that the computer is the nervous system of a bee or a robin or somebody from twin earth. It is still we who see the causal event as representing something it is not. Our act of seeing an event or artifact as something it is not is exactly what we cannot see or even imagine seeing in a physical computational system.
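A minimal sketch (mine, purely illustrative) may sharpen the point: a program can be written to "infer" Q from P and "if P then Q" by blind string matching, and nothing in its execution requires the strings to be about anything.

# The table maps token pairs to tokens; it encodes no grasp of implication.
rules = {("P", "if P then Q"): "Q"}

def fire(premise, conditional):
    # A purely causal step: one pattern of characters triggers another.
    # That "Q" follows logically from the inputs is our reading of the event.
    return rules.get((premise, conditional))

print(fire("P", "if P then Q"))  # prints: Q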
DL
>However, you say that even a perfectly predictive scientific model could not convince you that mind does not reduce to brain.<
Well, science can achieve predictability on the physical side. That is, neuroscience can continue to refine, within theoretical limits, predictions that certain physical brain states will be succeeded by certain other physical states. But there is another, mental side of things which escapes physical prediction because it escapes physical description.
Take qualia such as pains. Physical descriptions of pain always turn out to be descriptions of evidence of pain, not of pain itself. So physically-based predictions of when pain will occur turn out to be predictions of when the evidence of pain will occur. Because physical descriptions of pain cannot capture the essence of pain--its painfulness in experience--scientific predictions concerning it must lack the essence of what it is that is being predicted.
Reason is somewhat like that, except that physical descriptions of reason always turn out to be descriptions of representations of reason and not the thing itself. Suppose I have a sketch of Uncle Bob on my desk and a friend of mine sees it and asks me, "Is this a real person?" I answer, "Oh, yes, that's Uncle Bob, a real life flesh-and-blood person. I know him well." Then the friend asks me, "How tall is your Uncle Bob?" So I quickly get out a ruler and measure the sketch and say, "He's 11 inches tall." Then my friend says, "But all you did was measure the sketch of Uncle Bob! Oh, well. Er, tell me how much Uncle Bob weighs, if he is a real person." So I put the sketch on a postage scale and declare, "Uncle Bob weighs a third of an ounce!"
My friend is confused. "You weighed a sketch of your Uncle Bob. Does he just happen to weigh the same as a sketch of him weighs?" Suppose it then dawns on me that I cannot even imagine actually weighing or measuring Uncle Bob himself, only sketches and paintings of him. Assuming my rationality, I would have to either start doubting that Uncle Bob is real or else doubt that he is a physical being.
This is our situation with reason. Whenever we make physical observations of reflective reason, they turn out to be observations of things that can only represent it, the way paintings and sketches can only represent Uncle Bob. Physical descriptions/predictions concerning it turn out to be descriptions/predictions concerning representations of it. But we are hardly in a position to doubt that reason exists, so we are left to conclude that though it exists it is not physical.
>But if this is the gist of the argument, there seems to me to be an elephantine assumption--that physical events/states ever need to be seen as having content (being representations) in a physical account.<
But if a state didn't have the content it had, it wouldn't be set up the way it is in the nervous system. We need the representational story to explain this (this was the second response to the epiphenomenalism question here. This is what I meant above by proximally epiphenomenal but historically potent.)
I think you are just much more conscious-centric than I am. That you think saying X represents Y only if X is part of a conscious system shows this (as does our previous debate at the above linked set of comments).
I think we have unconscious thoughts; indeed, most of our thinking or information processing is not accessible to consciousness. Is it essential that they are part of a larger conscious system? In virtue of what properties are they thoughts? Those are the properties I am after, and if states with such properties can be implemented in an artifact or brain then that's sufficient to say the brain is doing some thinking (whether conscious or not).
darek,
I'll paraphrase you: :)
"This is our situation with water. Whenever we make physical observations of water, they turn out to be observations of things that can only represent it, the way paintings and sketches can only represent Uncle Bob. Physical descriptions/predictions concerning it turn out to be descriptions/predictions concerning representations of it. But we are hardly in a position to doubt that water exists, so we are left to conclude that though it exists it is not physical."
IOW, presently, my experience of water goes beyond the physical description. For example, when I see water, I regurgitate memories of past experiences with water, and this triggers mental and physiological responses that go beyond the technical facts.
However, we don't doubt that water is physical because there's an assumption that magical systems are as likely to be physically opaque as they are to be physically explicable in any given case.
For example, if we believed that water was controlled by some inexplicable magical force, we might expect at least a 50% chance that buoyancy cannot be predicted with any simple physical model. The same goes for wave action, erosion, water temperature, tides, riptides, hydrolysis, chemistry, evaporation, condensation, etc. Each time we create a scientific model, our confidence in physics increases over our confidence in supernatural claims. And every time our models are unified by reduction, our confidence in physics should be amplified further.
And this is what has happened with the mind. We not only have predictive models of mind, but we are building a reductionist theory of mind.
In a previous comment you said:
>As for naive predictions, I have my doubts that there are any.<
I should have addressed this earlier. I think this claim is 100% unreasonable. If you cannot say what experiences are more likely when your claim is true than when it is false, then you don't have a claim at all. Your proposition would be meaningless. To see this, suppose I provide you with this encoded proposition: "237623ffs6638hs73hd77399." You don't know what it means because you cannot say what experiences are more or less probable if it were true versus false.
And again, the naive prediction of a supernatural model is that the mind is as likely to be inexplicable as explicable. And as likely to be reductionistically so as not.
Suppose I have two coins. One is fair, the other is two-headed. I choose a coin in secret and flip it. It's heads. Which coin is more likely to be the one I chose? I flip it again, and it comes up heads a second time. Now what are the odds?
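The arithmetic here can be made explicit (a standard Bayes calculation; the working is mine, not DL's). With equal priors on the two coins,

\[
P(\text{two-headed} \mid HH) \;=\; \frac{1 \cdot \tfrac{1}{2}}{1 \cdot \tfrac{1}{2} + \tfrac{1}{4} \cdot \tfrac{1}{2}} \;=\; \frac{4}{5},
\]

and after \(n\) consecutive heads the posterior rises to \(2^n/(2^n+1)\), which approaches certainty very quickly.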
Well, every scientific regularity we discover is like a 'heads'. Every reductionist flip is like another 'heads'. After hundreds of these, it is not rational to conclude that mind is likely to be non-physical. In order to conclude that, the supernatural model would need to accurately predict some experience that's as improbable as that many heads-up fair coin tosses in a row. Of course, supernatural claims make no predictions that can make this kind of comeback.
BDK
I think you are focussing on the right area. And it's true that we have come around to our previous discussion, but that doesn't mean we are going in circles quite yet.
I believe that the effects of physical causes are all information-laden events. Suppose you walk into a room with a billiard table and a billiard ball is in the process of rolling across the table. The moving state of the billiard ball carries ("represents") the following items of information, among others: that the ball has been acted upon by a force, the magnitude and direction of that force, and the approximate time of the application of the force. Besides the information being preserved in the state of the ball, we can also say that this information is what guides the ball, telling it which direction to go, at what speed, and how far. The sphericity and hardness of the ball are a kind of programming that allows it to rebound and change directions at the cushion with minimal loss of energy.
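For what it's worth, the recoverable information here can be written down exactly (my idealization, not darek's: a frictionless table and a ball struck from rest). The ball's mass and velocity give back the impulse that was applied, and the distance traveled since the strike gives back when it happened:

\[
\vec{J} = m\,\vec{v}, \qquad t_{\text{strike}} = \frac{d}{\lvert \vec{v} \rvert}.
\]

A 0.17 kg ball moving at 2 m/s, for instance, records an impulse of 0.34 kg·m/s in the direction of its motion.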
The only difference between a pool ball in motion and a bee in flight is the number of objects and force vectors each respectively entails. The difference is of degree only. With greater complexity, the system being analyzed seems to acquire purposefulness and even a kind of rationality. But there is no conceptual dividing line between these systems. Drawing such a line is arbitrary until we get to consciousness--with consciousness we can begin to talk about genuine planning, purpose, and what I've been calling reflective reason. So consciousness is indispensable to the discussion. Aside from consciousness there exist causal interactions so intriguingly complex that they can be called "computation" and "reason" in a loose sense, but there is a type of reason for which consciousness is necessary. I have no doubt that a great deal of what happens in the brain is in fact computational in nature, comprising necessary conditions for the occurrence of consciousness and reflective reason, but these conditional processes are not to be confused with reflective reason as such.
I don't think it's arbitrary, unless the difference between life (especially creatures with nervous systems) and non-life is arbitrary. There are certain specialized biological neural structures that organisms use to get about in the world by directly responding to what is happening in the world. This is the biological origin of representational systems. Whether we could create a machine or something that has the same dynamics isn't as interesting to me as doing biology: explaining animal behavior in terms of these representational states (e.g., the honeybee nectar-memory and communication system). So we have come full circle :)
That these biological states, in humans, seem to bear a striking relationship to consciousness (e.g., we can manipulate the brain and ipso facto manipulate consciousness), is quite interesting. It suggests that these representational systems have some strong relationship to consciousness (whether a relationship of identity, mere correlation, 'implementation', we can't really be sure).
DL
You seem to equate all forms of dualism with belief in water sprites and voodoo curses. To the contrary, the progress of science has sharpened the divide between thought and matter. People used to believe that the Sun, for example, was conscious and had desires and beliefs. They knew that the Sun as an object in the sky didn't look much like us or act very much the way we do, but they thought it had desires and beliefs anyway. But when it was discovered that the characteristics and behavior of the Sun can be thoroughly explained by physical cause and effect, it was no longer credible to suppose that it had desires and beliefs.
This discovery had an important implication. If we know that the Sun lacks desires and beliefs because we can explain everything it does as physical cause and effect, then what about us? If it were possible to explain everything we do as physical cause and effect, there would no longer be room to attribute desires and beliefs to ourselves. In other words, it is precisely because physical causes and effects obviate thought and other mentalistic phenomena that we can cease to be superstitious about inanimate objects.
Proponents of physicalism might say that even though the basic forces motivating us and the Sun and planets are the same, we behave differently enough to set us apart. But this is not a good answer. If chemical causes in the brain combined with external physical causes explain everything we do, yet we nevertheless do some things because of desires and beliefs, then desires and beliefs are just another name for the physical causes of behavior. If that's true, then who is to say that the Sun doesn't have desires and beliefs? If the Sun doesn't communicate with us and instead just keeps shining, perhaps it is because that's exactly what the Sun wants to do or believes it should do. We would be unable to rule out that possibility by pointing to physical causes of the Sun's behavior.
No, we have to draw a fundamental line between our thought processes (particularly rational ones) and mental motivations on one hand and relentless physical causation on the other in order to preserve our scientifically-based assessment that sticks, stones, planets, sacred waterfalls, etc. are not conscious and do not behave because of desires and beliefs--such as the belief that science is informative about the world. But mental phenomena do have an interface with physical brain processes, without a doubt.
"Supernatural" in the sense in which you use it describes forces that act without rhyme or reason. I am talking about the mentalistic--not physical--quality of rhyme and reason itself. Rational thought does obey rules, it just obeys different rules than physical objects do. One gear causes another to move by pushing the other gear, not by logically implying that there should be another gear in a certain position. One thought does not push another, but it may logically imply another.
Concerning your water example, we conclude that water is physical because we can observe water through the senses and with scientific instruments. Trying to say that we only observe representations of water is just baffling. When I weigh a cup of water, I don't inevitably find that I have weighed a photograph of water stuck into a cup. When I observe that water can carve a channel through sand, I don't find that it's a hologram of water that has carved the channel. What the water reminds me of is beside the point.
I may point to a picture of the Grand Canyon on my computer screen and say, "That's the Grand Canyon," in the self-conscious knowledge that I am pointing at a representation. But it would be absurd to say that I cannot go to Arizona and point at the Canyon itself rather than a representation. The difference is straightforward. Well, I can point to my computer screen and say, "This is Doctor Logic's argument," knowing that the lines on the screen represent a premise. But I cannot go anywhere, not even to a scan of your brain, and declare, "There is the logical premise that was represented on my screen, not just a representation!" Nor can I point to a series of computer circuits turning off and on and say, "Here we see a conclusion being logically inferred from a premise" the way I can point to a stream of water flowing through a box of sand and say, "Here is a stream of water carving a channel through sand."
The above seems clear to me, but I am ready to admit that I cannot make it clear to you, and since this thread is pretty long I leave it to you to close it out.
>If chemical causes in the brain combined with external physical causes explain everything we do, yet we nevertheless do some things because of desires and beliefs, then desires and beliefs are just another name for the physical causes of behavior. If that's true, then who is to say that the Sun doesn't have desires and beliefs?<
If digestion is just complicated physical processes going on in a digestive system, who is to say that the sun doesn't digest?
BDK
Indeed, we can say that the sun digests hydrogen and interstellar dust, adds the helium to its body, and excretes photons and heavier energetic particles. And perhaps it feels indigestion once in a while--i.e., during solar flares.
What I had hoped would be an obvious reductio has led to some rather bizarre bullet biting. I hope I'm forgiven that I'm not going to bother arguing that the Sun isn't alive.
darek,
Your argument seems to be that the Sun isn't a person because it's a simple physical mechanism. This is true, but not for the reasons you suggest. There's no law that physical systems can't be persons. You are missing the detailed steps from your argument. The actual argument is something like this:
1) Persons have consciousness, desires and beliefs.
2) These attributes are not well defined, but we recognize them in others according to certain criteria. For example, we can set puzzles for the person that they can solve without certain classes of brute force trial and error.
3) The Sun does not pass these tests.
4) Furthermore, the Sun is found to operate according to certain simple principles that preclude the Sun's ability to pass tests of consciousness, desire and belief.
5) Therefore, the Sun is not a person.
You suggest that the mere fact that the Sun is a physical mechanism is what led us to our conclusion. However, that is nowhere to be found in the expanded argument. If your argument were that any physical mechanism isn't a person, then we would have to conclude that humans were not persons either (which contradicts the definition of person).
Your other arguments try to squeeze the problem from the other end by showing that some material interventions in human reason preclude rational belief. For example, you might cite a case in which a physical mechanism is used to cause a person to think irrationally. However, what does this prove? It proves that rationality depends upon the physical system. It tells us that there are physical systems that are not rational. Yet, it does not contradict naturalism. Under naturalism, rationality is the result of a tuned physical system. Mess with that system and rationality fails. In a dualist approach, the mind requires a finely-tuned physical system to act as a sort of radio. The kinds of cases you cite cannot distinguish between naturalism and dualism (though they favor naturalism in my view).
Naively, if the local sensory and motor network is controlled by a non-physical radio, then the nervous system could have many independent radios. There's no need for the nervous system to be integrated or even connected. There's also no need for the brain to have the ability to recognize anything. That could be done by the mind. There's no need for a physical part of the brain that correlates emotion and visual recognition. That function could be handled by the non-physical part of the mind. Over and over, we find that functions that could have been performed by the non-physical parts are performed by physical parts using physical mechanisms. The dualist school has been fine-tuning for centuries to bypass all these experimental results favoring naturalism.
Your argument about representations (of water and the Grand Canyon) doesn't seem to make sense to me. I argued that our experience of water isn't fully captured by present knowledge of physics, and yet we do not doubt that water results from the physics of H2O. From there, I argued the same things for mind reducing to brain. You responded by saying that we perceive attributes of logical arguments that are not physical, whereas we only perceive attributes of water that are physical. I think you are trying to say that logical arguments are not physical, so a purely physical system is not coupled to them.
However, this seems to break down on two levels. First, the physical universe is logical (non-contradictory). That creates a coupling between logic and physical mechanisms. Second, I wonder whether the laws of computation are in fact physical. If a particular symbol manipulation always results in the same end result, isn't that a physical law? That is, aren't the laws of mathematics physical in the sense that they are still laws? It's very similar to statistical mechanics in which the laws of thermodynamics are expressed in terms of possibility. These laws don't care whether you're talking about molecular oxygen gases or black holes, there are laws of probability and possibility that apply to any identifiable unit.
Finally, you have also argued that physically causal mechanisms are inadequate to explain rationality because they do not necessarily reach the correct logical conclusion, but only the necessary physical one. However, this does not preclude physical mechanisms that reach the correct logical conclusion because they follow a necessary physical one. And if they are not guaranteed to reach the correct conclusion every time, well, that matches the observation that humans make mistakes. It also matches the observation that tampering with the brain is capable of altering just about any facet of human reason and consciousness.
If your goal is to keep alive the faint hope that dualism is the case, there's no need for this sort of argument. Science is probabilistic, and we only have a finite number of observations. However, you are not providing a strong argument for dualism. You are not providing an argument that overcomes the orders of magnitude of probability favoring naturalism provided by years of neuroscientific research. That research is a litany of discoveries, any one of which could have gone another way had minds not been brains, but which had to go the way they went if naturalism were true. What's it going to take?
BTW, the very same argument applies to evolutionary biology. There are trillions of ways to design life for a planet, and only one way to do it with an unguided, naturalistic mechanism. And that one way is what we observe. One can hope for an intelligent designer, but one cannot rationally believe it likely.
DL
As I said, I have to give the blog a rest for awhile, so we'll leave things there for the moment.
BDK
Sure, my answer was a bit tongue-in-cheek.
If we assume that desires, for example, are motivating (by no means an uncontroversial assumption) and also physical, then there would not seem to be any choice but to identify a desire with motivating causal machinery. If my desire to drink a glass of water is what causes me to drink, there is nothing the desire can "be" except the chemical sequence of causation. Yet it is entirely imaginable that I may desire to drink and yet desire not to communicate to someone that I have the desire. It is difficult to draw a line between causation in general and desire as a special subclass when physical causation is the fundamental qualification of desire.
No, I do not believe the sun has desires, but then I do not believe that desires are captured by the physical causal account of behavior. Yet the sun's behavior is, I think, captured by the physical causal account.
As I have noted to DL, I have to take a break from the blog and catch up on some other obligations I have let slide. So feel free to close out our discussion with any further comments you have.