That is a very good paper with an interesting psychological theory. He's done a great job integrating a lot of different hypotheses about the psychological correlates of conscious states.
The philosophical stuff in section 11, where he tries to flesh out implications for the 'hard problem', is not convincing, but I ignore that kind of stuff. Luckily we are in dire need of good theories of the neuropsychological correlates of consciousness (I use the term 'correlates' to be ontologically neutral), and the good philosophy will come once such a story is in place. I hope it happens in my lifetime, but I doubt it will.
I take it that all this (ignoring, hoping, evaluating, direly needing, term using, doubting) _isn't_ your first-person perspective, though. {s}
In the uncut version of the paper, I describe several types of first person perspective, but only deny one of them, the type that supposedly gives us an observational perspective on experience itself. See http://www.naturalism.org/kto.htm#Istperson .
regards,
Tom Clark
Center for Naturalism
www.centerfornaturalism.org
That doesn't seem to be an especially contentious claim. I did read the paper, btw, and saw that you were aiming for that. My main problem is not that you distinguish this from the kind of 1st-person perspective I quipped BDK about; but that you end up having to functionally equivocate the processes involved. Indeed, under philosophical naturalism (especially atheistic naturalism), you could hardly avoid doing so, sooner or later. Once the processes are presented as being functionally identical in character, though, it is (by tautology) no longer possible to maintain a functional distinction between them.
While that may seem nifty and desirable from the perspective of defending naturalism, since at the end of the day both processes must be explained in similar terms, it comes with a parallel price-tag: it means that when I quip BDK about all his hoping, evaluating, doubting et al being _not_ his first-person perspective, the defense cannot plausibly be (in effect) 'yeah, but _that_ kind of 1st-person perspective is substantially different!'
A supernaturalist could have it both ways. A naturalist can't, legitimately.
If someone can clarify, please do so. Does the paper claim that representations do not represent to an observer? For example, take the statement, "the temptation might be to conclude that the representations themselves . . . are observed from some sort of vantage point or perspective." Can something represent without representing to anyone in particular--or anyone at all? Is this coherent in terms of the concept of representation?
Derek: I don't know what the author would say, but I think any representation that is interesting is part of some larger system, usually an autonomous agent with goal-states. Such a system would use the representational states to guide behavior, among other things.
I think Tom's claim is very interesting: do we observe experiences? Or do we observe via experiences? With visual experiences, it seems the former. With things like stomach aches, nausea, and other visceral experiences, it is not as clear cut to me. Some would say we don't observe the experiences even in those cases, but observe states of our body in a way analogous to observing distal events through vision.
If we don't observe experiences, how do we know there are such things? Perhaps there are no experiences. To say so cannot clash with observation if experiences are unobservable, right?
But that's beside my original point. It seems to me that a representation always represents to someone. The word "green" represents green to me because I am an English speaker, but it doesn't to someone who speaks no English. A math teacher stating a problem says, "Let 't' represent the time it takes the train to go from New York to Boston." Almost anything can represent anything else if someone chooses to see it as doing so. Is it coherent to speak about a free-floating representation that just represents without representing to an observer or agent?
If we don't observe experiences, how do we know there are such things?
They could be theoretical constructs like propositions. They are required to explain certain data (e.g., hallucinations, dreams). This is the view of Wilfrid Sellars, who is also the author of the language of thought hypothesis (and Paul Churchland's PhD chair).
Do monkeys observe experiences? Does having an experience entail that you are observing the experience? This seems clearly false, so we need either an argument or an extremely compelling example of observing an experience.
It depends what you mean by representing "to" someone. If a robot has an internal state that co-varies with temperature, and this internal state is used to guide algorithms which control behavior wrt temperature, then that robot has an internal representation of temperature (if you want to say it is derived, then imagine the same robot evolved naturally). So yes, as I said the internal temperature-representing state represents temperature in a particular system partly individuated by goal states. But if by "representing to someone" you mean it has to be in the context of a conscious agent, I don't buy it.
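To make that concrete, here is a minimal toy sketch (Python, with invented names and thresholds, purely for illustration): an internal state that co-varies with ambient temperature and is used to guide behavior with respect to temperature. Nothing in it presupposes consciousness.

```python
class ThermoRobot:
    """Toy agent: an internal state that tracks temperature and drives behavior."""

    def __init__(self, goal_temp=22.0):
        self.goal_temp = goal_temp      # an implicit 'goal state'
        self.internal_temp = None       # state that co-varies with the world

    def sense(self, ambient_temp):
        # The internal state is updated by, and so co-varies with, the world.
        self.internal_temp = ambient_temp

    def act(self):
        # Behavior is guided by the internal state, not by the world directly.
        if self.internal_temp is None:
            return "idle"
        if self.internal_temp < self.goal_temp - 1:
            return "move toward heat source"
        if self.internal_temp > self.goal_temp + 1:
            return "move toward shade"
        return "stay put"

robot = ThermoRobot()
robot.sense(17.5)
print(robot.act())   # -> "move toward heat source"
```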
Observation per se would be itself an 'experience'; so this may be reconciled better by distinguishing between experience (as a verb) and experience (as a noun).
Do we experience experiences? Sure. Even a merely reactive rock could be said to experience experiences, in the sense that 'happenings' 'happen' to it. (Note: BDK sent in a reply himself along this line, too, while I was composing the comment.)
Do we sensorily observe our sensory observations? I think the answer is no; and I think that this denial is what Tom is at least partially after. I don't recall offhand from my skim through his paper whether he's trying to avoid infinite regression, but that would be one advantage to the denial.
Do we consciously process and evaluate our experiences? Yes, I think we pretty much have to admit we do, for sake of any argument we ourselves are making--at peril otherwise of being quipped like I did to BDK. {g} Or in more classical terms, why shouldn't I then treat Tom Clark as being a Socratic cabbage or a soulless zombie? (Granted, if I were one kind of hyper-Calvinist, I could suppose Tom was such a thing, but is that really what Tom would have us believe about himself? Clearly not.)
If our conscious evaluation of experiences, though, which can take place sequentially without (I think) risking infinite regression of observance (thus allowing us to subsequently contemplate a contemplative experience we ourselves have instantiated), is supposed to be explained in-and-only-in terms of non-conscious merely reactive behaviors--which is what Tom will end up having to go with sooner or later, on peril of appealing to some kind of theistic metaphysic (whether naturalistic or supernaturalistic would not be immediately relevant)--then I don't see that Tom can protect that crucial observer from being threatened just as thoroughly as the observer Tom claims is _not_ there in the merely automatic reactive process he is concerned to... well... contemplate. {s}
Sauce for that goose is sauce for that gander--especially when the gander is elsewhere identified as being on the same spit as the goose (or even itself also the goose).
This can be illustrated in reverse, by Tom's assertion that a microidentical duplicate popping into existence would _not_ in fact be only a zombie. On his terms, he basically has to mean that _he_ isn't a zombie, and still wouldn't be even if he had no backward history with this system of nature; thus neither would a microidentical duplicate be. That's all well and good, but it obliterates the distinction (I think) he is trying to elsewhere recognize between one first-person experiential process and another. If Tom isn't a zombie in one regard, then why should we take the merely reactive observational process to be (in effect) a zombie in our subconscious? But if _it_ is effectively a zombie, why should the rest of our effectual mental processes, stemming _entirely from qualitatively similar behaviors_, not be under the same zombieness?
Regarding representation, consider that in the context of thought (which surely applies in this case) representation refers to the perception of one thing as standing for another. If we reduce representation to physical covariance--really just causation--it evaporates. For example, the direction a wind vane points is caused by the direction in which the wind blows, which covaries with the location of lower air pressure in relation to the wind vane, but that does not mean that the wind direction represents the location of lower pressure to the wind vane. Even if I rigged the wind vane to a little battery-operated car with the goal of having the car drive more or less consistently in the direction of lower air pressure, the fact that the wind through the wind vane guided the car's direction of movement hardly means that the direction of the vane would represent the location of lower pressure to the car. We would simply have a causal chain apart from what the vane or car's movement might represent to me.
The temperature scale on my rather old-fashioned thermostat represents room temperature to me because of my perception of it doing so. It would not "represent" something to me simply on the basis that it caused some action on my part. Inside the thermostat is a bimetal coil that covaries with room temperature and trips a switch for the heater. But this does not mean that the coil position represents room temperature to the heater. We might as well say that river flow represents direction to a stick floating downstream or that rain hitting a rock represents a cloud to the rock.
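To see the point stripped to its bones, here is a toy rendering (invented numbers, purely illustrative) of the thermostat as nothing but a causal chain. No observer appears anywhere in the description, which is exactly the worry about calling any link in it a representation.

```python
def bimetal_coil_angle(room_temp_c):
    # Coil position co-varies with room temperature (a stand-in for the physics).
    return 0.5 * (room_temp_c - 20.0)

def switch_closed(coil_angle, trip_angle=-0.5):
    # The coil trips the switch when it bends past a threshold.
    return coil_angle < trip_angle

def heater_on(room_temp_c):
    # Temperature -> coil angle -> switch -> heater: cause and effect end to end,
    # describable without mentioning anyone to whom anything is represented.
    return switch_closed(bimetal_coil_angle(room_temp_c))

print(heater_on(18.0))  # True: cold room, heater runs
print(heater_on(23.0))  # False: warm room, heater off
```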
To call something a representation is to say that it represents. But to say that it simply represents without representing to a perceiving subject is like saying that there is a known fact that is not necessarily known by anyone. And that is just not coherent.
Jason
I'll take your word on the points you made, since I was a little slow to follow them completely.
Derek: Thermometers obviously don't have mental representations. I didn't say covariation was sufficient for representation, but I do think it is necessary (with a few caveats).
But I think robots and animals can represent even if they are not conscious. I have discussed this a lot here, most recently in the comments here and here.
The first post, with its discussion of birdsong learning, is especially relevant.
I use the word 'representation' in a way that doesn't imply consciousness (see previous posts), following most contemporary philosophers, neuroscientists, and the like. When I am talking about consciousness or experience, I say it explicitly. Your usage of the term is idiosyncratic, though you are free to define your terms any way you like. Just be aware that you'll end up talking past a lot of people.
You make a fair point. But could it be that ambiguity in the definition of "representation" as used by POM and cogsci types is masking a coherence problem? Clark says that we are tempted to think that representations must be viewed from a perspective, implying, particularly given the general thrust of the article, that this is not true. To that extent the distinction that I am talking about seems to be in play already.
Compare your songbird to a different physical system--a desert hillside and gully. An unusually large storm sends water down the hillside and carves a deep wash channel through the gully. A year later the runoff from another storm follows this same channel through the gully, even creating a similar sound profile. Prosaic, inanimate objects apparently can store information in a memory and even replay it. Following are some more examples of what might be called representation:
The noonday shadow of a hawk on the ground as it flies overhead
The track of a grizzly bear
The fossil of a leaf
The track of an elementary particle on a photographic plate
The numeral "2"
All of the above might qualify in the casual sense as representations. But in the first several cases all we have are effects which in some fairly obvious way distinguish their causes. The shadow represents a hawk insofar as it is fairly obviously caused by a hawk, etc. But this is just part of a sliding scale of obviousness of causal history. Every effect carries at least some information as to its cause simply by virtue of being an effect. Something changes when we get to the last example, however. The numeral "2" is not a representation in virtue of being an obvious effect of the abstract quantity it represents. The sense in which "2" represents what it does, I still believe, requires conscious mediation--and this is of profound significance.
Darek wrote: {{Even if I rigged the wind vane to a little battery-operated car with the goal of having the car drive more or less consistently in the direction of lower air pressure, the fact that the wind through the wind vane guided the car's direction of movement hardly means that the direction of the vane would represent the location of lower pressure to the car. We would simply have a causal chain apart from what the vane or car's movement might represent to me.}}
To which I readily nod and agree--Darek is concerned about avoiding the externalistic fallacy in explaining the car’s behaviors. The representation-ing is happening to us, in regard to the causal chains involved with the car, but that isn’t the same as saying that the representation-ing is happening to the car. (Leaving aside the question of whether the car wouldn’t just stutter and jerk back and forth, since the moment it moves its wind vane would be registering a different relative pressure... {g})
_Unless_ the notion is supposed to be that ultimately representation-ing isn’t in fact representation-ing, which an explanation involving naturalistic atheism (and probably supernaturalistic atheism, too) looks to necessarily arrive at sooner or later. Some theorists might consider that to be an advantage to the explanation, but I wouldn’t. {s}
Relatedly, it looks a lot like we’re trying to get intentive action ability without admitting any intentive actions in the explanation, even when we otherwise admit the intentive actions are there.
BDK: {{But I think robots and animals can represent even if they are not conscious.}}
If you mean robots and animals can react effectively and complexly to stimuli even if they are not conscious, I can agree with that very readily; to which I can add things like stars and rocks and plants, too. {s}
If you mean the behaviors of robots and animals can represent _to us_ (as consciously active agents), even if they themselves are not conscious, then once again I can easily agree with that.
If you mean that you think robots and animals can evaluate representations the same way we do as specifically conscious agents, even if they are not conscious, then I don’t think I would be obscurantist to disagree with that. {g}
{{Thermometers obviously don't have mental representations. [...] I use the word 'representation' in a way that doesn't imply consciousness (see previous posts)}}
I think you’re going to have trouble coming up with a fundamental distinction of non-conscious merely reactive representation-ing that applies to robots and animals but not to plants or thermometers. The only ostensible difference I can find in your previous robot example would be the inclusion of ‘algorithms’ which control behavior wrt temperature. But plants have far more complex ‘algorithms’ than most (all?) current robots; and what exactly _is_ an ‘algorithm’ supposed to be in naturalistic terms? More precisely, what is it supposed to be in atheistic terms, where mentality and intention are excluded from being fundamental behavior qualities? Is it not that an electrical charge goes down a tiny path and bounces one way instead of another depending on whether the way is opened or closed, itself dependent on other electrical charges having bounced or not bounced in the proper directions?
So, a thermometer’s fluid goes along a particular path, too, given the proper conditioning; and the fluid goes one way under some conditions and goes the other way under other conditions. It may be a grossly simple algorithm, and the relevant materials may be different than what is found in the robot (the plant’s materials might be closer! {g}), but it’s still an algorithm.
(Or, water washes down a gully creating effect-set X under Causal Condition set A, but creating effect-set X2 under Causal Condition set B or A2, etc.)
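To make the parity vivid, here is a toy sketch (names and thresholds invented, purely illustrative) in which the robot's 'algorithm' and the thermometer come out as the same branch-on-a-condition structure.

```python
def robot_algorithm(internal_temp_state, set_point=22.0):
    # A charge goes down one path or another depending on a condition.
    if internal_temp_state < set_point:
        return "activate heating circuit"
    return "activate cooling circuit"

def thermometer_fluid(ambient_temp, reference_temp=22.0):
    # The fluid goes one way under some conditions, the other way under others.
    if ambient_temp < reference_temp:
        return "fluid contracts (column falls)"
    return "fluid expands (column rises)"

print(robot_algorithm(18.0))     # -> "activate heating circuit"
print(thermometer_fluid(18.0))   # -> "fluid contracts (column falls)"
```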
I think Darek is hitting a nail on the head. ‘Representation’ has built-in normative conscious elements in its meaning. If cogscists borrow the term and apply it to non-conscious reaction processes, _they’re_ the ones initially introducing the idiosyncrasy; and there’s a pretty big subsequent danger of getting a feeling that something that (shall we say) mentally tastes to _us_ like consciousness is going on in those processes, while we meanwhile explicitly deny that any such thing is going on. The danger is that the taste arrives from us porting over our normal associations from when we’re normally talking about representation. But that associative taste wouldn’t even remotely be evidence that we’re discovering and describing a non-conscious process gradually eliding into conscious processes. On the contrary, it would be a subtle variation of the externalistic fallacy.
Jason: I discuss more about what separates representational systems from other informational systems in the posts I linked to. The robot example will take too much time to unpack well, so I'll advert to those other posts.
Explicitly, I am referring to processes that refer to things other than themselves, can be true or false, and which don't run into the problem of coextensional contents, etc. None of these properties are possessed by thermometers and the like. None of the above implies a conscious observer, or observer of experiences.
My position on this is heavily influenced by Fred Dretske, esp his book Knowledge and the Flow of Information, which saved the informational baby from the 'Thermometers don't represent' objections voiced in the 50s. For some reason, rather than realize that this slogan is merely the claim that information isn't sufficient for semantics, it served to leave informational considerations out of semantics altogether for a few decades.
Interestingly, a 2006 PhD in philosophy focused on honeybees as a model of cognitive systems. It can be found here.
The person is obviously a genius.
Derek (and Jason): the fossil/water in the gulley examples are great. Scars are also a good possibility. As I'll show, thermostats are an even better example.
I'll respond by giving a more general picture of how I view biological systems with semantic properties (see previous comment on what I mean).
In the nervous system of every animal is an internal map by means of which it steers about the world. E.g., the visual system functions to carry information about the world, and this information is used to guide behavior with respect to the relevant features of the world, just as we use a map to guide our behavior with respect to the terrain that it represents.
This is the proto-semantic core of all biocognitive systems: internal states which map onto things in the world and guide behavior with respect to those things in the world. Ultimately all of this machinery evolved to help attain the (implicit) goal states of an organism such as reproduction.
However, these proto-representations can only get us so far toward truly semantic structures. Thermostats also have internal states that continually co-vary with features of the world and adjust their behavior wrt those features (gulleys and fossils do not). They can even be said to have goal states (gulleys and fossils do not).
What must be added to avoid the thermostat objection? In true cognitive systems, the internal states that function to carry information about the proximal world are tied to another system that inherits this informational function, but is free from the bonds of solely co-varying with what is presently happening. They are untethered from the lower-level sensory processes and can be used to think about things that are not there. Such structures inherit the function of carrying information about the world, but they are special in that they can be activated in the absence of the original stimulus (which is why they can be WRONG).
The honeybee is a great model system for this framework. A honeybee gets nectar from a flower and flies back to the hive to do a dance that 'tells' the other bees where the honey was (they then fly out to get more nectar). On its flight back to the hive, the bee has an internal representation of the location of the nectar; this representation gets its content from the original stimulus, functions to indicate where the nectar is, and guides later behavior wrt the nectar. This structure can also get it wrong (e.g., neural misfiring on the flight back might lead its brethren on a wild nectar chase).
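Here is a toy sketch of that story (purely illustrative; the class, headings, and misfire probability are invented, not a model of real bee neuroscience). The stored heading is fixed by a past stimulus, persists in its absence, guides later behavior, and can therefore misrepresent.

```python
import random

class ForagerBee:
    def __init__(self):
        self.stored_nectar_heading = None   # representation: content fixed by a past stimulus

    def find_nectar(self, true_heading_deg):
        # The encounter with the flower fixes the content of the stored state.
        self.stored_nectar_heading = true_heading_deg

    def noisy_flight_home(self, misfire_prob=0.1):
        # A 'neural misfire' can corrupt the stored state once the stimulus is gone.
        if random.random() < misfire_prob:
            self.stored_nectar_heading = random.uniform(0, 360)

    def dance(self):
        # The stored state guides behavior (the dance) in the absence of the nectar
        # itself; if corrupted, it misrepresents the location.
        return self.stored_nectar_heading

bee = ForagerBee()
bee.find_nectar(true_heading_deg=135.0)
bee.noisy_flight_home()
print("dance indicates heading:", bee.dance(), "(true heading was 135.0)")
```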
Of course, the honeybee navigational system is very simple, but I think it provides, in broad strokes, a good picture of what an account of biosemantic theory will look like. It also shows why I focused on memory and birdsong learning in an earlier post. Without memory, there is no semantic structure. Without the internal map sensitive to the exigencies of the world around us at any given moment, we don't have memories.
Note the problem of consciousness doesn't enter in at all so far. We get to see how semantic properties can fit quite comfortably into the natural world without worrying about the C word.
Dretske, in Knowledge and the Flow of Information, spends a few hundred pages discussing a similar perspective better than I could. Obviously, this is a very complicated topic, and there are lots of details and potential objections I am skipping over. (In fact, I discuss a technical objection to Dretske's position in some detail here.) Anyone earnestly interested should consult Dretske. He is just brilliant.
The above adumbrates why I don't take semantic properties to be especially mysterious for naturalism. When I consider how I might explain conscious experience as an atheistic naturalist, on the other hand, I have nothing much to say except we need to wait until neuropsychology converges on a data-rich story before we act all arrogant that we are right ('we' meaning everyone, naturalist and antinaturalist alike).
Thinking about what I wrote, I now think that even the internal map has some basic semantic properties, and can get it wrong. It refers to things in the world, and can be a more or less accurate representation (mapping).
The thermostat is the same way: the bimetallic strip functions to indicate the temperature, but it can malfunction and get it wrong. The difference between biological maps and thermostat maps is that the latter have their function in a derivative way (from human intentions) while animal maps have their function (to indicate what is going on in the world) objectively, i.e., these functions are biological facts (just as the function of the heart is to pump blood).
So what I should have asked is what needs to be added to thermostat-style biorepresentations to make them more interesting and human like. The honeybees provide a useful point of reference for elaborations into more cognitive domains.
I hope you can summarize Dretske's argument effectively enough so that some us (like me) can grasp it without adjourning to read "a few hundred pages." After digesting your post and looking over the material at the link, I can offer the following.
In The Intentional Stance, Daniel Dennett observes that we can project necessarily conscious processes onto objects or states in which they clearly are not occurring. For example, I can consciously have the goal of getting down a rocky hillside by consistently seeking lower terrain and going around boulders rather than over them. But watch a volume of water released at the top of the hill and it seems to do the same thing, "finding its way" down the hill by probing around obstacles till it locates a route to lower ground. I can attribute to water the same goal that I might have and even make some successful predictions on that basis ("the water will go to the left of the boulder because it will detect that the ground is lower there than on the right"). But to employ this attribution in argument will result in a "bait-and-switch" fallacy: water appears to have the goal of getting down the hill just as I do; everything the water does is purely physical; therefore whatever occurs in me when I act with the conscious goal of getting down the hill must also be purely physical. This is not a direct counterexample to yours, but merely calls attention to the pitfall that Jason would probably call the externalist fallacy.
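A toy version of the water case (invented heights, purely illustrative): the only rule in it is 'move to a lower adjacent cell', with no goal anywhere, yet from the outside the water looks as though it is trying to get down the hill.

```python
heights = [9, 8, 6, 5, 3, 2, 1, 0]           # a one-dimensional 'hillside'

def flow(position):
    # The whole 'strategy': move to a lower adjacent cell if there is one.
    neighbours = [p for p in (position - 1, position + 1) if 0 <= p < len(heights)]
    lowest = min(neighbours, key=lambda p: heights[p])
    return lowest if heights[lowest] < heights[position] else position

pos, path = 0, [0]
while flow(pos) != pos:
    pos = flow(pos)
    path.append(pos)

print("the water 'finds its way' along:", path)   # [0, 1, 2, 3, 4, 5, 6, 7]
```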
Everything bees do can be viewed simply as cause-and-effect clockwork. The location of pollen affects the bee's internal state, driving a causal chain involving other bees. None of it requires one thing representing something other than itself except when we project our own conscious processes onto bee behavior. Bees and ants store up food in the summer that they consume in the winter, but does this imply, much less require, that they conceive of time, internally represent the future and then act on this representation? Do they have the goal of storing up for winter? Of course not. I would deny that they even have the goal of survival. Not having conscious goals, bees do not get it wrong. They do one thing and survive or do another and die, and the living and dying may be seen as purely circumstantial events of utter indifference to the blind watchmaker.
Go back to my example of the shadow of a hawk, but let's add a prairie dog colony feeding on grass seed. One of the prairie dogs sees the shadow of the hawk, squeaks, then dives into the nearest burrow hole. The other prairie dogs hear the squeak and dive into holes themselves. This chain of events is similar to the example of the bees locating food. Complex neural reactions are necessary to this account, but not representations. The shadow need not be seen to function as a representation, merely as a cause. Nor must the neural states of the prairie dog constitute representations. Hawk causes shadow, which causes sensory response, which causes motor response. The motor response causes sound, which causes further sensory responses, which cause further motor responses. Can we inject representations onto this account? Yes. Must we? No. There's the rub.
Return to the bee. If we were able to scan exhaustively the physiology of a bee in flight, we could, from the chemical characteristics of its organs in relation to the immediate environment, project its behavior going forward--although we might need a powerful computer to help with calculations. Even if we claimed that some of the bee's internal physical states constituted representations, we would not need to know what these states were representing in order to make our calculations and determine behavior. Causal relevance would be restricted to the physical properties of those states, whether or not they "represented" things.
However, if the bee acted with goals and purposes in mind the way, say, Fred Dretske does, then knowledge of what was being represented by internal states would be indispensable. I doubt that physical properties can be attributed to the goals Dretske had, for example, in writing his book on information theory. At the same time, if these goals were represented by internal physical states we must admit that the states would have different properties than the goals themselves. We cannot see the goals as distal causes of the effects of the physical states representing them, the reason being the goals' lack of physical properties. Therefore, if the physical properties of the representational states were what drove Dretske's behavior (like the bee), then his behavior would not in fact be motivated by the goals those states represented. This results in incoherence. Dretske's goals have irreducible causal relevance, and this relates in part to the way representation works in conscious, reasoning subjects.
First, to clear some brush, I am certainly not a Dennettian, and think he gets a lot wrong on this. He often writes as if being an intentional system is equivalent to being interpreted as an intentional system: this just places the problem of semantic contents one step back (remember I'm talking about states that refer and have truth values, not conscious states).
Your basic point with the bees seems to be this. Assume we have a perfect predictive causal model of a bee: Even if we claimed that some of the bee's internal physical states constituted representations, we would not need to know what these states were representing in order to make our calculations and determine behavior. Causal relevance would be restricted to the physical properties of those states, whether or not they "represented" things.
This is a good point. You are basically saying that in my story, content is epiphenomenal, that it has no causal bite, that even if we ascribed different contents to the internal states, the behavior of the bee would still be the same if we changed nothing physical about the bee.
There are two flavors of responses to this, and I'm frankly not sure which I prefer, or if they are actually all that different.
1. Category mistake response
The first, and more popular, response (taken by Fodor, for instance) is that content is indeed epiphenomenal. The 'syntax' (causal whirrings) of the machine determines its behavior, and the content of the internal states (the 'semantics') does not affect it.
While this may seem unpalatable, a little reflection should show that it is quite reasonable: Derek and Tderek (your twin on Twin Earth) are expressing different contents when they use the word 'water', but their behavior will be exactly the same. This is because the referent was fixed via past interactions with water and twater.
Contents are fixed historically (hence the references to memory above), but it is only present pushes and pulls that determine behavior. This doesn't mean the contents aren't real, or are a mere 'heuristic overlay' (Dennett) on the synchronic causal story: it means that looking for contents to be proximate causes of things is to make a category mistake. It is to mistake the proximal causal story for a historical explanation.
Also, while the behavior of an organism might not be affected by the content of its thoughts, the truth-values of its thoughts, the referents of its thoughts, is affected. And that is ultimately what I (and the other naturalists employing the divide and conquer strategy) are after. If a twinearthian is brought to earth surreptitiously, and says "There is water [meaning, twater] out there", its thought is actually false. An earthian in an identical brain state (and perhaps with identical conscious experiences) thinking "There is water out there" is correct.
2. Proximal versus historical explanations.
A second type of reply is due to Dretske, who wants contents to have explanatory bite. I think it might actually be consistent with the first reply but I don't have time to knit it all together: perhaps a reader will do us the favor. It should be pretty easy, though time consuming.
Consider what I have been calling internal neural maps of the world: i.e., neural structures whose function is to transmit information about stimulus S and guide behavior wrt S.
Say we have a rat and we brush its whiskers rapidly (S1) or slowly (S2). Information about whisker deflection velocity is transmitted to the brain via various neural structures, ultimately the rat cortex. That is, S1 and S2 produce different neuronal responses in the rat's map of whisker velocities. Let's call these responses R1 and R2.
Now assume we train the rat via operant conditioning. When S1 is presented, it must go to one lever L1 and press it for a reward. When S2 is presented, it must go to another lever L2 and press it for a reward. Let's call these two behaviors B1 and B2.
Before the rat has learned the task, there is no association between S and B, but eventually we get a perfect mapping S1-->B1 and S2-->B2.
But remember this behavior is mediated by the sensory systems, so the more full story is that after learning, the following causal chains exist: S1-->R1-->B1 and S2-->R2-->B2. How can we explain why R1 came to cause B1?
The nervous system was adjusted the way it was because R1 carries information about what is going on in the world, because it carries the information that S1 was presented, i.e., because it represents S1. If R1 carried the information that S2 was presented, then the behavior of the rat would have been different. Hence the referent, or content, of the neural state R1 has explanatory bite.
Any attempt to explain why R1 causes B1 that ignores the contents of R1 will fail. It won't fail because it leaves something out of the proximal causal story (R1 causes B1 because the synapses are arranged in such and such a way), but because it cannot explain why those synapses came to be arranged that way in the first place. So while "we would not need to know what these states were representing in order to make our calculations and determine behavior", we would need to know the contents of those states to explain why they came to cause the behaviors they do.
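Here is a toy sketch of the rat story (invented numbers and a generic reinforcement rule, purely illustrative): the learning process wires R1 to B1 precisely because R1 is the state that fires when S1 is presented, i.e., because of what R1 carries information about. Rewire the sensory mapping and different behaviors would be selected.

```python
import random

sensory_map = {"S1": "R1", "S2": "R2"}       # the 'internal map' of whisker velocity
reward_rule = {"S1": "B1", "S2": "B2"}       # the experimenter rewards S1->L1, S2->L2

# Association strengths from internal responses to behaviors, initially flat.
weights = {(r, b): 0.0 for r in ("R1", "R2") for b in ("B1", "B2")}

def choose_behavior(r, explore=0.2):
    if random.random() < explore:
        return random.choice(["B1", "B2"])
    return max(("B1", "B2"), key=lambda b: weights[(r, b)])

for _ in range(500):
    s = random.choice(["S1", "S2"])          # a stimulus is presented...
    r = sensory_map[s]                       # ...producing a neural response...
    b = choose_behavior(r)                   # ...which drives a lever press
    reward = 1.0 if b == reward_rule[s] else 0.0
    weights[(r, b)] += 0.1 * (reward - weights[(r, b)])   # simple reinforcement

# After learning, R1 causes B1 because R1 is the state that carries the
# information that S1 occurred; that historical fact explains the wiring.
print(max(("B1", "B2"), key=lambda b: weights[("R1", b)]))   # -> B1
```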
General conclusion
All of this is part of a program to explain how a natural system can have internal states which refer to things in the world, and internal states that are true or false. Just as the heart can malfunction and fail to pump blood (but still have the function of pumping blood), so a neural structure can have the function of indicating, or referring to, things in the world while failing to so refer (e.g., the bee's neurons misfire).
I am not particularly concerned about providing a set of necessary and sufficient conditions for any thinking thing, but more with the empirical hallmarks of semantic structures in terrestrial organisms. If you want to say that the bee's internal representation of the location of the nectar isn't really an internal representation of the location of nectar, or that the rat's R1 doesn't really represent whisker velocity, then I suspect we have very different aims, as this is a natural and unforced use of the English language.
OK, I'm spending way too much time on this so I will probably not be able to respond any more. I am just repeating what others have said (Dretske/Fodor), and someone resistant to reading a book on this shouldn't have read this far anyway.
I can't find a lot to disagree with above. Indeed, effects carry information about their causes and this goes for the bee's internal state considered as an effect rather than just a cause. I don't dispute that. I guess I have not done a great job of isolating the particular aspect of this issue that interested me. But I'd like to take a last crack at it.
The AfR as Lewis developed it turned on the claim that the ground-consequent sequence differs fundamentally from the physical cause-effect sequence. This implies that logical grounds may not be identified with physical objects/events/states. One way to try and pull off this identification and thereby attack the AfR is through representation. If a logical ground can be identified with a physical entity that represents it, the AfR becomes more vulnerable. This makes it important to analyze representation carefully, hence my original question about the statement in Clark's paper.
If a picture (representation) of the Grand Canyon stirs me to admiration, it might be said that the Grand Canyon causes my admiration by having caused the picture to have certain properties. And these properties can be called informational with respect to the Canyon, as can the properties of any effect with respect to its cause. But is representation in all relevant forms to be found in this kind of causal chain: Canyon causes picture, which causes admiration? Or is there a species of representation that requires a conscious understanding on my part that the picture does indeed represent the Canyon? I need not know that the picture represents anything other than itself to experience admiration when looking at it. It seems to me that representation in this particular sense requires something more than a chain of causal interaction, even if that chain is complex enough, for example, to constitute informational "guidance" of automated systems in robots, insects or whatever. And I don't think my criterion is confused or obscure. We both know what it is like to understand one thing as standing for another and we probably both have our doubts that bees, for example, are capable of such an understanding.
Now for the controversial part. I think that there is something about "seeing" mentally that one thing represents another that is just not captured by third person objective accounts of even the most complex physical processes, and that this fact buttresses the AfR considerably. But whether we go that far or not, and whatever else we define as representation, even you might agree that if something's being understood to represent something else is in a special category, then it merits special explanatory attention.
One more unclear sentence that must be corrected. I said: "Also, while the behavior of an organism might not be affected by the content of its thoughts, the truth-values of its thoughts, the referents of its thoughts, is affected."
I meant: "Also, while the behavior of an organism might not be affected by the content of its thoughts, the truth-values and referents of its thoughts are affected by the contents."
Also, whether a particular belief is right or wrong (e.g., 'There is a God') doesn't matter as far as your behavior is concerned: it is only having the belief that matters for explaining behavior. I think it is easier to see this than the more contentious case with individual contents, but luckily Putnam gave us Twin Earth. Also, since truth values supervene on contents/referents of the individual constituting terms (and syntax), we have an indirect proof that individual contents are not proximate causes.
That is a very good paper with an interesting psychological theory. He's done a great job integrating a lot of different hypotheses about the psychological correlates of conscious states.
The philosophical stuff in section 11, where he tries to flesh out implications for the 'hard problem' is not convincing, but I ignore that kind of stuff. Luckily we are in dire need of good theories of the neuropsychological correlates of consciousness (I use the term 'correlates' to be ontologically neutral), and the good philosophy will come once such a story is in place. I hope it happens in my lifetime, but I doubt it will.
I take it that all this (ignoring, hoping, evaluating, direly needing, term using, doubting) _isn't_ your first-person perspective, though. {s}
In the uncut version of the paper, I describe several types of first person perspective, but only deny one of them, the type that supposedly gives us an observational perspective on experience itself. See http://www.naturalism.org/kto.htm#Istperson .
regards,
Tom Clark
Center for Naturalism
www.centerfornaturalism.org
That doesn't seem to be an especially contentious claim. I did read the paper, btw, and saw that you were aiming for that. My main problem is not that you distinguish this from the kind of 1st-person perspective I quipped BDK about; but that you end up having to functionally equivocate the processes involved. Indeed, under philsophical naturalism (especially atheistic naturalism), you could hardly avoid doing so, sooner or later. Once the processes are presented as being functionally identical in character, though, it is (by tautology) no longer possible to maintain a functional distinction between them.
While that may seem nifty and desireable from the perspective of defending naturalism, since at the end of the day both processes must be explained in similar terms, it comes with a parallel price-tag: it means that when I quip BDK about all his hoping, evaluating, doubting et al being _not_ his first-person perspective, the defense cannot plausibly be (in effect) 'yeah, but _that_ kind of 1st-person perspective is substantially different!'
A supernaturalist could have it both ways. A naturalist can't, legitimately.
JRP
If someone can clarify, please do so. Does the paper claim that representations do not represent to an observer? For example, take the statement, "the temptation might be to conclude that the representations themselves . . . are observed from some sort of vantage point or perspective." Can something represent without representing to anyone in particular--or anyone at all? Is this coherent in terms of the concept of representation?
Derek: I don't know what the author would say, but I think any representation that is interesting is part of some larger system, usually an autonomous agent with goal-states. Such a system would use the representational states to guide behavior, among other things.
I think Tom's claim is very interesting: do we observe experiences? Or do we observe via experiences? With visual experiences, it seems the former. With things like stomach aches, nausea, and other visceral experiences, it is not as clear cut to me. Some would say we don't observe the experiences even in those cases, but observe states of our body in a way analagous to observing distal events through vision.
BDK
If we don't observe experiences, how do we know there are such things? Perhaps there are no experiences. To say so cannot clash with observation if experiences are unobservable, right?
But that's beside my original point. It seems to me that a representation always represents to someone. The word "green" represents green to me because I am an English speaker, but it doesn't to someone who speaks no English. A math teacher stating a problem says, "Let 't' represent the time it takes the train to go from New York to Boston." Almost anything can represent anything else if someone chooses to see it as doing so. Is it coherent to speak about a free-floating representation that just represents without representing to an observer or agent?
If we don't observe experiences, how do we know there are such things?
They could be theoretical constructs like propositions. They are required to explain certain data (e.g., hallucinations, dreams). This is the view of Wilfrid Sellars, who is also the author of the language of thought hypothesis (and Paul Churchland's PhD chair).
Do monkeys observe experiences? Does having an experience entail that you are observing the experience? This seems clearly false, so we need either an argument or extremely compelling example of observing an experience.
It depends what you mean by representing "to" someone. If a robot has an internal state that co-varies with temperature, and this internal state is used to guide algorithms which control behavior wrt temperature, then that robot has an internal representation of temperature (if you want to say it is derived, then imagine the same robot evolved naturally). So yes, as I said the internal temperature-representing state represents temperature in a particular system partly individuated by goal states. But if by "representing to someone" you mean it has to be in the context of a conscious agent, I don't buy it.
BDK (and Darek):
Observation per se would be itself an 'experience'; so this may be reconciled better by distinguishing between experience (as a verb) and experience (as a noun).
Do we experience experiences? Sure. Even a merely reactive rock could be said to experience experiences, in the sense that 'happenings' 'happen' to it. (Note: BDK sent in a reply himself along this line, too, while I was composing the comment.)
Do we sensorily observe our sensory observations? I think the answer is no; and I think that this denial is what Tom is at least partially after. I don't recall offhand from my skim through his paper whether he's trying to avoid infinite regression, but that would be one advantage to the denial.
Do we consciously process and evaluate our experiences? Yes, I think we pretty much have to admit we do, for sake of any argument we ourselves are making--at peril otherwise of being quipped like I did to BDK. {g} Or in more classical terms, why shouldn't I then treat Tom Clark as being a Socratic cabbage or a souless zombie? (Granted, if I was one kind of hyper-Calvinist, I could suppose Tom was such a thing, but is that really what Tom would have us believe about himself? Clearly not.)
If our conscious evaluation of experiences, though, which can take place sequentially without (I think) risking infinite regression of observance (thus allowing us to subsequently contemplate a contemplative experience we ourselves have instantiated), are supposed to be explained in-and-only-in terms of non-conscious merely reactive behaviors--which is what Tom will end up having to go with sooner or later, on peril of appealing to some kind of theistic metaphysic (whether naturalistic or supernaturalistic would not be immediately relevant)--then I don't see that Tom can protect that crucial observer from being threatened just as thoroughly as the observer Tom claims is _not_ there in the merely automatic reactive process he is concerned to... well... contemplate. {s}
Sauce for that goose, is sauce for that gander--especially when the gander is elsewhere identified as being on the same spit as the goose (or even itself also the goose.)
This can be illustrated in reverse, by Tom's assertion that a microidentical duplicate popping into existence would _not_ in fact be only a zombie. On his terms, he basically has to mean that _he_ isn't a zombie, and still wouldn't be even if he had no backward history with this system of nature; thus neither would a microidentical duplicate be. That's all well and good, but it obliterates the distinction (I think) he is trying to elsewhere recognize between one first-person experiential process and another. If Tom isn't a zombie in one regard, then why should we take the merely reactive observational process to be (in effect) a zombie in our subconscious? But if _it_ is effectively a zombie, why should the rest of our effectual mental processes, stemming _entirely from qualitatively similar behaviors_, not be under the same zombieness?
JRP
BDK
Regarding representation, consider that in the context of thought (which surely applies in this case) representation refers to the perception of one thing as standing for another. If we reduce representation to physical covariance--really just causation--it evaporates. For example, the direction a wind vane points is caused by the direction in which the wind blows, which covaries with the location of lower air pressure in relation to the wind vane, but that does not mean that the wind direction represents the location of lower pressure to the wind vane. Even if I rigged the wind vane to a little battery-operated car with the goal of having the car drive more or less consistently in the direction of lower air pressure, the fact that the wind through the wind vane guided the car's direction of movement hardly means that the direction of the vane would represent the location of lower pressure to the car. We would simply have a causal chain apart from what the vane or car's movement might represent to me.
The temperature scale on my rather old-fashioned thermostat represents room temperature to me because of my perception of it doing so. It would not "represent" something to me simply on the basis that it caused some action on my part. Inside the thermostat is a bimetal coil that covaries with room temperature and trips a switch for the heater. But this does not mean that the coil position represents room temperature to the heater. We might as well say that river flow represents direction to a stick floating downstream or that rain hitting a rock represents a cloud to the rock.
To call something a representation is to say that it represents. But to say that it simply represents without representing to a perceiving subject is like saying that there is a known fact that is not necessarily known by anyone. And that is just not coherent.
Jason
I'll take your word on the points you made, since I was a little slow to follow them completely.
Derek: Thermometers obviously don't have mental representations. I didn't say covariation was sufficient for representation, but I do think it is necessary (with a few caveats).
But I think robots and animals can represent even if they are not conscious. I have discussed this a lot here, most recently in the comments here and here.
The first post, with its discussion of birdsong learning, is especially relevant.
I use the word 'representation' in a way that doesn't imply consciousness (see previous posts), following most contemporary philosophers, neuroscientists, and the like. When I am talking about consciousness or experience, I say it explicitly. Your usage of the term is idiosyncratic, though you are free to define your terms any way you like. Just be wary you'll end up talking past a lot of people.
BDK
You make a fair point. But could it be that ambiguity in the definition of "representation" as used by POM and cogsci types is masking a coherence problem? Clark says that we are tempted to think that representations must be viewed from a perspective, implying, particularly given the general thrust of the article, that this is not true. To that extent the distinction that I am talking about seems to be in play already.
Compare your songbird to a different physical system--a desert hillside and gully. An unusually large storm sends water down the hillside and carves a deep wash channel through the gully. A year later the runoff from another storm follows this same channel through the gully, even creating a similar sound profile. Prosaic, inanimate objects apparently can store information in a memory and even replay it. Following are some more examples of what might be called representation:
The noonday shadow of a hawk on the ground as it flies overhead
The track of a grizzly bear
The fossil of a leaf
The track of an elementary particle on a photographic plate
The numeral "2"
All of the above might qualify in the casual sense as representations. But in the first several cases all we have are effects which in some fairly obvious way distinguish their causes. The shadow represents a hawk insofar as it is fairly obviously caused by a hawk, etc. But this is just part of a sliding scale of obviousness of causal history. Every effect carries at least some information as to its cause simply by virtue of being an effect. Something changes when we get to the last example, however. The numeral "2" is not a representation because it is an obvious effect of the abstract quantity it represents. The sense in which "2" represents what it does, I still believe, requires conscious mediation--and this is of profound significance.
Darek wrote: {{Even if I rigged the wind vane to a little battery-operated car with the goal of having the car drive more or less consistently in the direction of lower air pressure, the fact that the wind through the wind vane guided the car's direction of movement hardly means that the direction of the vane would represent the location of lower pressure to the car. We would simply have a causal chain apart from what the vane or car's movement might represent to me.}}
To which I readily nod and agree--Darek is concerned about avoiding the externalistic fallacy in explaining the car’s behaviors. The representation-ing is happening to us, in regard to the causal chains involved with the car, but that isn’t the same as saying that the representation-ing is happening to the car. (Leaving aside the question of whether the car wouldn’t just stutter and jerk back and forth, since the moment it moves its wind vane would be registering a different relative pressure... {g})
_Unless_ the notion is supposed to be that ultimately representation-ing isn’t in fact representation-ing, which an explanation involving naturalistic atheism (and probably supernaturalistic atheism, too) looks to necessarily arrive at sooner or later. Some theorists might consider that to be an advantage to the explanation, but I wouldn’t. {s}
Relatedly, it looks a lot like we’re trying to get intentive action ability without admitting any intentive actions in the explanation, even when we otherwise admit the intentive actions are there.
BDK: {{But I think robots and animals can represent even if they are not conscious.}}
If you mean robots and animals can react effectively and complexly to stimuli even if they are not conscious, I can agree with that very readily; to which I can add things like stars and rocks and plants, too. {s}
If you mean the behaviors of robots and animals can represent _to us_ (as consciously active agents), even if they themselves are not conscious, then once again I can easily agree with that.
If you mean that you think robots and animals can evaluate representations the same way we do as specifically conscious agents, even if they are not conscious, then I don’t think I would be obscurantist to disagree with that. {g}
{{Thermometers obviously don't have mental representations. [...] I use the word 'representation' in a way that doesn't imply consciousness (see previous posts)}}
I think you’re going to have trouble coming up with a fundamental distinction of non-conscious merely reactive representation-ing that applies to robots and animals but not to plants or thermometers. The only ostensible difference I can find in your previous robot example would be the inclusion of ‘algorithms’ which control behavior wrt to temperature. But plants have far more complex ‘algorithms’ than most (all?) current robots; and what exactly _is_ an ‘algorithm’ supposed to be in naturalistic terms? More precisely, what is it supposed to be in atheistic terms, where mentality and intention are excluded from being fundamental behavior qualities? Is it not that an electrical charge goes down a tiny path and bounces one way instead of another depending on whether the way is opened or closed, itself dependent on other electrical charges having bounced or not bounced in the proper directions?
So, a thermometer’s fluid goes along a particular path, too, given the proper conditioning; and the fluid goes one way under some conditions and goes the other way under other conditions. It may be a grossly simple algorithm, and the relevant materials may be different than what is found in the robot (the plant’s materials might be closer! {g}) but it’s still an algorithm.
(Or, water washes down a gully creating effect-set X under Causal Condition set A, but creating effect-set X2 under Causal Condition set B or A2, etc.)
I think Darek is hitting a nail on the head. ‘Representation’ has built-in normative conscious elements in its meaning. If cogscists borrow the term and apply it to non-conscious reaction processes, _they’re_ the ones initially introducing the idiosyncracy; and there’s a pretty big subsequent danger of getting a feeling that something that (shall we say) mentally tastes to _us_ like consciousness is going on in those processes, while we meanwhile explicitly deny that any such thing is going on. The danger is that the taste arrives from us porting over our normal associations from when we’re normally talking about representation. But that associative taste wouldn’t even remotely be evidence that we’re discovering and describing a non-conscious process gradually eliding into conscious processes. On the contrary, it would be a subtle variation of the externalistic fallacy.
JRP
Jason: I discuss more about what separates representational systems from other informational systems in the posts I linked to. The robot example will take too much time to unpack well, so I'll advert to those other posts.
Explicitly, I am referring to processes that refer to things other than themselves, can be true or false, and which don't run into the problem of coextensional contents, etc. None of these properties are possessed by thermometers and the like. None of the above implies a conscious observer, or observer of experiences.
My position on this is heavily influenced Fred Dretske, esp his book Knowledge and the Flow of Information, which saved the informational baby from the 'Thermometers don't represent' objections voiced in the 50s. For some reason, rather than realize that this slogan is merely the claim that information isn't sufficient for semantics, it served to leave informational considerations out of semantics altogether for a few decades.
Interestingly, a 2006 PhD dissertation in philosophy focused on honeybees as a model of cognitive systems. It can be found here.
The person is obviously a genius.
Derek (and Jason): the fossil and water-in-the-gully examples are great. Scars are also a good possibility. As I'll show, thermostats are an even better example.
I'll respond by giving a more general picture of how I view biological systems with semantic properties (see previous comment on what I mean).
In the nervous system of every animal is an internal map by means of which it steers about the world. E.g., the visual system functions to carry information about the world, and this information is used to guide behavior with respect to the relevant features of the world, just as we use a map to guide our behavior with respect to the terrain that it represents.
This is the proto-semantic core of all biocognitive systems: internal states which map onto things in the world and guide behavior with respect to those things. Ultimately, all of this machinery evolved to help attain the (implicit) goal states of an organism, such as reproduction.
However, these proto-representations can only get us so far toward truly semantic structures. Thermostats also have internal states that continually co-vary with features of the world and adjust their behavior wrt those features (gullies and fossils do not). They can even be said to have goal states (gullies and fossils do not).
What must be added to avoid the thermostat objection? In true cognitive systems, the internal states that function to carry information about the proximal world are tied to another system that inherits this informational function, but is free from the bonds of solely co-varying with what is presently happening. They are untethered from the lower-level sensory processes and can be used to think about things that are not there. Such structures inherit the function of carrying information about the world, but they are special in that they can be activated in the absence of the original stimulus (which is why they can be WRONG).
The honeybee is a great model system for this framework. A honeybee gets nectar from a flower and flies back to the hive to do a dance that 'tells' the other bees where the nectar was (they then fly out to get more). On its flight back to the hive, the bee has an internal representation of the location of the nectar; this representation gets its content from the original stimulus, functions to indicate where the nectar is, and guides later behavior wrt the nectar. This structure can also get it wrong (e.g., neural misfiring on the flight back might lead its brethren on a wild nectar chase).
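A toy sketch of this in Python, purely for illustration: the class and the 'misfire' parameter below are invented, not taken from Dretske or from any model of real bee neurobiology. The point is only that the stored state inherits its content from the original stimulus, guides later behavior, and can be activated wrongly in the stimulus's absence:

import random

class ToyForager:
    # Invented toy model: a stored state whose function is to carry
    # information about where the nectar was, and which later guides the dance.
    def __init__(self):
        self.nectar_heading = None  # the 'internal representation'

    def find_nectar(self, true_heading_deg):
        # Content is fixed by the original stimulus (the flower's location).
        self.nectar_heading = true_heading_deg

    def fly_home(self, misfire_prob=0.0):
        # The stored state can be corrupted in the absence of the stimulus,
        # which is exactly what lets it be WRONG about the world.
        if random.random() < misfire_prob:
            self.nectar_heading = random.uniform(0.0, 360.0)

    def dance(self):
        # Later behavior is guided by the stored state, not by the flower itself.
        return "waggle toward %.0f degrees" % self.nectar_heading

bee = ToyForager()
bee.find_nectar(true_heading_deg=120.0)
bee.fly_home(misfire_prob=0.3)   # 'neural misfiring' on the flight back
print(bee.dance())               # may now misrepresent where the nectar is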
Of course, the honeybee navigational system is very simple, but I think it provides, in broad strokes, a good picture of what a biosemantic theory will look like. It also shows why I focused on memory and birdsong learning in an earlier post. Without memory, there is no semantic structure. Without the internal map sensitive to the exigencies of the world around us at any given moment, we don't have memories.
Note that the problem of consciousness hasn't entered in at all so far. We get to see how semantic properties can fit quite comfortably into the natural world without worrying about the C word.
Dretske, in Knowledge and the Flow of Information, spends a few hundred pages discussing a similar perspective better than I could. Obviously, this is a very complicated topic, and there are lots of details and potential objections I am skipping over.
(In fact, I discuss a technical objection to Dretske's position in some detail here.) Anyone earnestly interested should consult Dretske. He is just brilliant.
The above adumbrates why I don't take semantic properties to be especially mysterious for naturalism. When I consider how I might explain conscious experience as an atheistic naturalist, on the other hand, I have nothing much to say except that we need to wait until neuropsychology converges on a data-rich story before we act all arrogant that we are right ('we' meaning everyone, naturalist and antinaturalist alike).
Thinking about what I wrote, I now think that even the internal map has some basic semantic properties, and can get it wrong. It refers to things in the world, and can be a more or less accurate representation (mapping).
The thermostat is the same way: the bimetallic strip functions to indicate the temperature, but it can malfunction and get it wrong. The difference between biological maps and thermostat maps is that the latter have their function in a derivative way (from human intentions) while animal maps have their function (to indicate what is going on in the world) objectively, i.e., these functions are biological facts (just as the function of the heart is to pump blood).
So what I should have asked is what needs to be added to thermostat-style biorepresentations to make them more interesting and human-like. The honeybees provide a useful point of reference for elaborations into more cognitive domains.
BDK
I hope you can summarize Dretske's argument effectively enough so that some of us (like me) can grasp it without adjourning to read "a few hundred pages." After digesting your post and looking over the material at the link, I can offer the following.
In The Intentional Stance, Daniel Dennett observes that we can project necessarily conscious processes onto objects or states in which they clearly are not occurring. For example, I can consciously have the goal of getting down a rocky hillside by consistently seeking lower terrain and going around boulders rather than over them. But watch a volume of water released at the top of the hill and it seems to do the same thing, "finding its way" down the hill by probing around obstacles till it locates a route to lower ground. I can attribute to the water the same goal that I might have and even make some successful predictions on that basis ("the water will go to the left of the boulder because it will detect that the ground is lower there than on the right"). But to employ this attribution in argument will result in a "bait-and-switch" fallacy: water appears to have the goal of getting down the hill just as I do; everything the water does is purely physical; therefore whatever occurs in me when I act with the conscious goal of getting down the hill must also be purely physical. This is not a direct counterexample to yours, but merely calls attention to the pitfall that Jason would probably call the externalist fallacy.
Everything bees do can be viewed simply as cause-and-effect clockwork. The location of pollen affects the bee's internal state, driving a causal chain involving other bees. None of it requires one thing representing something other than itself except when we project our own conscious processes onto bee behavior. Bees and ants store up food in the summer that they consume in the winter, but does this imply, much less require, that they conceive of time, internally represent the future, and then act on this representation? Do they have the goal of storing up for winter? Of course not. I would deny that they even have the goal of survival. Not having conscious goals, bees do not get it wrong. They do one thing and survive or do another and die, and the living and dying may be seen as purely circumstantial events of utter indifference to the blind watchmaker.
Go back to my example of the shadow of a hawk, but let's add a prairie dog colony feeding on grass seed. One of the prairie dogs sees the shadow of the hawk, squeaks, then dives into the nearest burrow hole. The other prairie dogs hear the squeak and dive into holes themselves. This chain of events is similar to the example of the bees locating food. Complex neural reactions are necessary to this account, but not representations. The shadow need not be seen to function as a representation, merely as a cause. Nor must the neural states of the prairie dog constitute representations. Hawk causes shadow, which causes sensory response, which causes motor response. The motor response causes sound, which causes further sensory responses, which cause further motor responses. Can we inject representations into this account? Yes. Must we? No. There's the rub.
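The chain can even be written out as nothing more than function composition. The following is a deliberately bare Python sketch with invented names, not a claim about prairie dog physiology; the point is that the code never has to say that any link 'represents' any other:

def shadow(hawk_overhead):            # hawk causes shadow
    return hawk_overhead

def sensory_response(shadow_present): # shadow causes sensory response
    return shadow_present

def motor_response(sensed):           # sensory response causes squeak-and-dive
    return "squeak and dive" if sensed else "keep feeding"

def colony_response(sound):           # the squeak causes further dives
    return ["dive into burrow"] * 11 if sound == "squeak and dive" else []

first_dog = motor_response(sensory_response(shadow(True)))
print(first_dog, colony_response(first_dog))
# A chain of causes and effects; calling any stage a 'representation' is
# something we add to the description, not something the chain requires.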
Return to the bee. If we were able to scan exhaustively the physiology of a bee in flight, we could, from the chemical characteristics of its organs in relation to the immediate environment, project its behavior going forward, although we might need a powerful computer to help with the calculations. Even if we claimed that some of the bee's internal physical states constituted representations, we would not need to know what these states were representing in order to make our calculations and determine behavior. Causal relevance would be restricted to the physical properties of those states, whether or not they "represented" things.
However, if the bee acted with goals and purposes in mind the way, say, Fred Dretske does, then knowledge of what was being represented by internal states would be indispensable. I doubt that physical properties can be attributed to the goals Dretske had, for example, in writing his book on information theory. At the same time, if these goals were represented by internal physical states, we must admit that the states would have different properties than the goals themselves. We cannot see the goals as distal causes of the effects of the physical states representing them, the reason being the goals' lack of physical properties. Therefore, if the physical properties of the representational states were what drove Dretske's behavior (like the bee), then his behavior would not in fact be motivated by the goals those states represented. This results in incoherence. Dretske's goals have irreducible causal relevance, and this relates in part to the way representation works in conscious, reasoning subjects.
Darek:
First, to clear some brush, I am certainly not a Dennettian, and think he gets a lot wrong on this. He often writes as if being an intentional system is equivalent to being interpreted as an intentional system: this just places the problem of semantic contents one step back (remember I'm talking about states that refer and have truth values, not conscious states).
Your basic point with the bees seems to be this. Assume we have a perfect predictive causal model of a bee:
Even if we claimed that some of the bee's internal physical states constituted representations, we would not need to know what these states were representing in order to make our calculations and determine behavior. Causal relevance would be restricted to the physical properties of those states, whether or not they "represented" things.
This is a good point. You are basically saying that in my story, content is epiphenomenal, that it has no causal bite, that even if we ascribed different contents to the internal states, the behavior of the bee would still be the same if we changed nothing physical about the bee.
There are two flavors of responses to this, and I'm frankly not sure which I prefer, or if they are actually all that different.
1. Category mistake response
The first, and more popular, response (taken by Fodor, for instance) is that content is indeed epiphenomenal. The 'syntax' (the causal whirrings) of the machine determines its behavior, and the content of the internal states (the 'semantics') does not affect it.
While this may seem unpalatable, a little reflection should show that it is quite reasonable: Derek and Tderek (your twin on Twin Earth) are expressing different contents when they use the word 'water', but their behavior will be exactly the same. This is because the referent was fixed via past interactions with water and twater.
Contents are fixed historically (hence the references to memory above), but it is only present pushes and pulls that determine behavior. This doesn't mean the contents aren't real, or are a mere 'heuristic overlay' (Dennett) on the synchronic causal story: it means that looking for contents to be proximate causes of things is to make a category mistake. It is to confuse the proximal causal story with a historical explanation.
Also, while the behavior of an organism might not be affected by the content of its thoughts, the truth-values and referents of its thoughts are affected by the contents. And that is ultimately what I (and the other naturalists employing the divide-and-conquer strategy) are after. If a Twin Earthian is brought to Earth surreptitiously and says "There is water [meaning, twater] out there", its thought is actually false. An Earthian in an identical brain state (and perhaps with identical conscious experiences) thinking "There is water out there" is correct.
2. Proximal versus historical explanations
A second type of reply is due to Dretske, who wants contents to have explanatory bite. I think it might actually be consistent with the first reply, but I don't have time to knit it all together: perhaps a reader will do us the favor. It should be pretty easy, though time-consuming.
Consider what I have been calling internal neural maps of the world: i.e., neural structures whose function is to transmit information about stimulus S and guide behavior wrt S.
Say we have a rat and we brush its whiskers rapidly (S1) or slowly (S2). Information about whisker deflection velocity is transmitted to the brain via various neural structures, ultimately reaching the rat's cortex. That is, S1 and S2 produce different neuronal responses in the rat's map of whisker velocities. Let's call these responses R1 and R2.
Now assume we train the rat via operant conditioning. When S1 is presented, it must go to one lever L1 and press it for a reward. When S2 is presented, it must go to another lever L2 and press it for a reward. Let's call these two behaviors B1 and B2.
Before the rat has learned the task, there is no association between S and B, but eventually we get a perfect mapping: S1-->B1 and S2-->B2.
But remember that this behavior is mediated by the sensory systems, so the fuller story is that after learning, the following causal chains exist: S1-->R1-->B1 and S2-->R2-->B2. How can we explain why R1 came to cause B1?
The nervous system was adjusted because R1 carries information about what is going on in the world, because it carries the information that S1 was presented, i.e., because it represents S1. If R1 had carried the information that S2 was presented, the behavior of the rat would have been different. Hence the referent, or content, of the neural state R1 has explanatory bite.
Any attempt to explain why R1 causes B1 that ignores the contents of R1 will fail. It won't fail because it leaves something out of the proximal causal story (R1 causes B1 because the synapses are arranged in such and such a way), but because it cannot explain why those synapses came to be arranged that way in the first place. So while "we would not need to know what these states were representing in order to make our calculations and determine behavior", we would need to know the contents of those states to explain why they came to cause the behaviors they do.
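Here is a toy simulation of that point in Python; the learning rule, the numbers, and the names are invented for illustration and are not meant to model real rat cortex or Dretske's own formalism. After training, run-time behavior is fixed entirely by the learned weights (the proximal story), while the training history, i.e., which stimulus R1 carried information about, is what explains why the weights ended up that way:

import random

STIMULI  = ["S1", "S2"]                  # fast vs slow whisker deflection
RESPONSE = {"S1": "R1", "S2": "R2"}      # cortical states carrying that information
CORRECT  = {"S1": "B1", "S2": "B2"}      # rewarded lever press for each stimulus

# 'Synaptic' weights from neural responses to behaviors: the proximal machinery.
weights = {(r, b): 0.0 for r in ("R1", "R2") for b in ("B1", "B2")}

def behave(r):
    # Run-time behavior depends only on the current weights (and on R itself).
    return max(("B1", "B2"), key=lambda b: (weights[(r, b)], random.random()))

# Training: reward strengthens whatever R -> B link just fired and paid off.
for _ in range(200):
    s = random.choice(STIMULI)
    r = RESPONSE[s]
    b = behave(r)
    if b == CORRECT[s]:
        weights[(r, b)] += 1.0

print(behave("R1"), behave("R2"))  # now reliably B1 and B2
# Why does R1 cause B1? Not merely 'because the weights say so' (the proximal
# story), but because during training R1 carried the information that S1 was
# presented; a different content would have produced different weights.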
General conclusion
All of this is part of a program to explain how a natural system can have internal states which refer to things in the world, and internal states that are true or false. Just as the heart can malfunction and fail to pump blood (but still have the function of pumping blood), so a neural structure can have the function of indicating, or referring to, things in the world while failing to so refer (e.g., the bee's neurons misfire).
I am not particularly concerned about providing a set of necessary and sufficient conditions for any thinking thing, but more with the empirical hallmarks of semantic structures in terrestrial organisms. If you want to say that the bee's internal representation of the location of the nectar isn't really an internal representation of the location of nectar, or that the rat's R1 doesn't really represent whisker velocity, then I suspect we have very different aims, as this is a natural and unforced use of the English language.
OK, I'm spending way too much time on this so I will probably not be able to respond any more. I am just repeating what others have said (Dretske/Fodor), and someone resistant to reading a book on this shouldn't have read this far anyway.
BDK
I can't find a lot to disagree with above. Indeed, effects carry information about their causes and this goes for the bee's internal state considered as an effect rather than just a cause. I don't dispute that. I guess I have not done a great job of isolating the particular aspect of this issue that interested me. But I'd like to take a last crack at it.
The AfR as Lewis developed it turned on the claim that the ground-consequent sequence differs fundamentally from the physical cause-effect sequence. This implies that logical grounds may not be identified with physical objects/events/states. One way to try to pull off this identification, and thereby attack the AfR, is through representation. If a logical ground can be identified with a physical entity that represents it, the AfR becomes more vulnerable. This makes it important to analyze representation carefully, hence my original question about the statement in Clark's paper.
If a picture (representation) of the Grand Canyon stirs me to admiration, it might be said that the Grand Canyon causes my admiration by having caused the picture to have certain properties. And these properties can be called informational with respect to the Canyon, as can the properties of any effect with respect to its cause. But is representation in all relevant forms to be found in this kind of causal chain: Canyon causes picture, which causes admiration? Or is there a species of representation that requires a conscious understanding on my part that the picture does indeed represent the Canyon? I need not know that the picture represents anything other than itself to experience admiration when looking at it. It seems to me that representation in this particular sense requires something more than a chain of causal interaction, even if that chain is complex enough, for example, to constitute informational "guidance" of automated systems in robots, insects, or whatever. And I don't think my criterion is confused or obscure. We both know what it is like to understand one thing as standing for another, and we probably both have our doubts that bees, for example, are capable of such an understanding.
Now for the controversial part. I think that there is something about "seeing" mentally that one thing represents another that is just not captured by third-person objective accounts of even the most complex physical processes, and that this fact buttresses the AfR considerably. But whether we go that far or not, and whatever else we define as representation, even you might agree that if something's being understood to represent something else is in a special category, then it merits special explanatory attention.
One more note on contents and behavior, following up my earlier reply:
Whether a particular belief is right or wrong (e.g., 'There is a God') doesn't matter as far as your behavior is concerned: it is only having the belief that matters for explaining behavior. I think this is easier to see than the more contentious case of individual contents, but luckily Putnam gave us Twin Earth. Also, since truth values supervene on the contents/referents of the constituent terms (and the syntax), we have an indirect proof that individual contents are not proximate causes.