The Unity of Consciousness (again)
V. The Argument from the Unity of Consciousness
Consider once again the inference “All men are mortal, Socrates is a man, therefore Socrates is mortal.” Now if there is one entity, namely me, that has all these thoughts, then it might be supposed that we have a rational inference here. If Bill has the thought “All men are mortal,” and Dennis has the thought “Socrates is a man,” and I have the thought “Socrates is mortal,” then we have a problem. No one person has actually performed the inference, and so the inference has not been performed at all.
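For concreteness, the syllogism can be written out in standard first-order notation (my rendering, offered only as an illustration):

```latex
% The Socrates syllogism in first-order logic (illustrative rendering):
\forall x\,\bigl(\mathrm{Man}(x)\rightarrow\mathrm{Mortal}(x)\bigr),\quad
\mathrm{Man}(\mathrm{socrates})
\;\vdash\;
\mathrm{Mortal}(\mathrm{socrates})
```

The derivation takes two steps, universal instantiation on the first premise and then modus ponens, and the point of the example is that both steps must be performed by one and the same subject.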
Hasker, who has been one of the chief proponents of both the Argument from the Unity of Consciousness and the Argument from Reason, nevertheless thinks that they are separate arguments, and that the argument from the unity of consciousness should not be counted among the arguments from reason. Carrier thinks the argument is really an argument from consciousness rather than an argument from reason, and that in the last analysis what is plausible in the arguments from reason is simply the argument from consciousness. As Hasker put it, “The issue of unity of consciousness, after all, applies to conscious states that are in no way concerned with reasoning, including the states of sentient beings incapable of reason.”
True enough. But some people, confronted with the problem of the unity of consciousness, attempt to show that this unity is an illusion of some kind. I have in mind Dennett’s “multiple drafts” model from Consciousness Explained, and other theories like it. According to Susan Blackmore,
Each illusory self is a construct of the memetic world in which it successfully competes. Each selfplex gives rise to ordinary human consciousness based on the false idea that there is someone inside who is in charge.
Or Steven Pinker, who writes,
There’s considerable evidence that the unified self is a fiction—that the mind is a congeries of parts acting asynchronously, and that it is only an illusion that there’s a president in the Oval Office of the brain who oversees the activity of everything.
Now if this is really true, if there is really no one individual who thinks the thoughts we think, then it follows straightforwardly that no one performs any rational inferences, including the rational inferences that have been used to reach the conclusion that the unified self is a fiction.
Now a philosophical naturalist can be a fictionalist about all sorts of things, but he cannot be a fictionalist about the sorts of inferences scientists make. So the Argument from Reason comes to the aid of the Argument from the Unity of Consciousness, and blocks the "eliminativist" response with respect to the unity of consciousness.
Kant argued, in the Second Paralogism:
Every composite substance is an aggregate of several substances, and the action of a composite, or whatever inheres in it as thus composite, is an aggregate of several actions or accidents, distributed among the plurality of substances. Now an effect which arises from the concurrence of many acting substances is indeed possible, namely, when this effect is external only (as, for instance, the motion of a body is the combined motion of all its parts). But with thoughts, as internal accidents belonging to a thinking being, it is different. For suppose it be the composite that thinks: then every part of it would be part of the thought, and only all of them taken together would be the whole thought. But this cannot be consistently maintained. For representations (for instance, the single words of a verse), distributed among different beings, never make up a whole thought (a verse), and it is therefore impossible that a thought should inhere in what is essentially composite. It is therefore possible only in a single substance, which, not being an aggregate of many, is absolutely simple.
A formalization of the argument, which is developed in William Hasker’s The Emergent Self, goes as follows:
1. I am aware of my present visual field as a unity; in other words, the various components of the field are experienced by a single subject simultaneously.
2. Only something that functions as a whole rather than as a system of parts could experience a visual field as a unity.
3. Therefore, the subject functions as a whole rather than as a system of parts.
4. The brain and nervous system, and indeed the entire body, are nothing more than a collection of physical parts organized in a certain way. (In other words, holism is false.)
5. Therefore, the brain and nervous system cannot function as a whole; it must function as a system of parts.
6. Therefore, the subject is not the brain and nervous system (or the body, etc).
7. If the subject is not the brain and nervous system then it is (or contains as a proper part) a non-physical mind or “soul”, that is, a mind that is not ontologically reducible to the sorts of entities studied in the physical sciences. Such a mind, even if it is extended in space, could function as a whole rather than as a system of parts and so could be aware of my present visual field as a unity.
8. Therefore the subject is a soul, or contains a soul as part of itself.
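The logical skeleton of the argument, as I reconstruct it (the grouping of steps is my own reading, not Hasker's), runs as follows:

```latex
% Which steps rest on which (my reconstruction, not Hasker's own):
\begin{align*}
(1),(2) &\;\Rightarrow\; (3) && \text{the subject functions as a whole}\\
(4)     &\;\Rightarrow\; (5) && \text{an aggregate functions only as a system of parts}\\
(3),(5) &\;\Rightarrow\; (6) && \text{the subject is not the brain, nervous system, or body}\\
(6),(7) &\;\Rightarrow\; (8) && \text{the subject is, or contains, a soul}
\end{align*}
```

Laid out this way, it is clear that premises (2) and (4) carry all the weight, and the discussion below attacks precisely those two.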
Hasker’s example is the synchronic unity of being aware of my visual field, but in rational inference we find a diachronic unity: the inferring subject holds the premises of the argument in mind and draws the conclusion from them.
Now it will not do simply to point out that the brain is a highly complex, functionally interconnected system with billions of neurons. A genuine physical system is a system whose properties must be “summative” properties of its proper parts. If that is what a brain is, then no matter how complex it is, it is a set of parts.
A braking system of a car, a nutcracker, and even a chess-playing computer are all systems whose operations are the sums of the operations of their proper parts. Sometimes human beings are able to provide a framework of meaning for these objects that, if taken literally, would attribute to the system characteristics that its parts lack individually. But in human consciousness we find a subjective unity.
Carrier responds to this argument by saying:
But the point is the same: just as a collection of cells can organize and cooperate into a body that can walk—even though no one of those cells can walk at all or even has legs, much less the other needed organs, like hearts and lungs—so also can a collection of brain systems organize and cooperate into a mind that can think. And it does this by producing the virtual appearance of a singularity of consciousness, just as it produces the mere appearance that unified patches of color exist—when in fact only streams of various distinct particles exist.
But I am not talking about a unity of function that can exist in a braking system; I am talking about a unity of perspective experienced by the thinking agent itself. When a person infers “Socrates is mortal” from “All men are mortal” and “Socrates is a man,” that person infers the conclusion from his own perspective. There are truths that we know from a first-person perspective that cannot be known from any other perspective. For example, the truth “I am Victor Reppert” has a significance from my own perspective that cannot be discovered from a physical perspective. By taking an outside, third-person point of view, something is invariably lost.
It seems to me that Carrier, like Blackmore and Pinker, has fallen back on the fictionalist view of the unity of consciousness. But this position, I maintain, undermines rational inference.
27 Comments:
A genuine physical system is a system whose properties must be “summative” properties of its proper parts.
Can this be precisely defined?
You say, "But in human consciousness we find a subjective unity."
Fine, let's say that is true for purposes of argument. What does that imply about the vehicle or substrate of that unity?
I find subjective blue as a content of my experience, but that doesn't mean the vehicle doing the experiencing is blue. I experience (visually) things as outside me in space--that doesn't mean the vehicle of the experiential content is outside me in space. How is unity different from color, spatiality, all other properties?
In other words, the content-vehicle distinction is key and seems to have been lost in this argument. To make the inference from "I experience things as being X in my phenomenology" to "The vehicle doing the experiencing must have property X" is a mistake. The mistake is obvious when X is blue, or outside my body. However, for some reason people get sucked into the content-vehicle conflation when it comes to unity and often spatial variables like 'fieldlike'.
If I were a quantum mechanics lover, I'd say that QM has a mechanism for unity. But to get all excited about that would be to buy into the content-vehicle confusion, to buy into the faulty premise of this unity argument. However, if I were confused about content-vehicle distinctions, and so bought your argument, I'd just point to quantum mechanics and say your premise is wrong, that you are working with an anachronistic classical view of physics.
Stepping back even one more step, this is all assuming unity is real and a useful term of the trade, of course.
Finally, I want to stress again my confusion about your claim that all physical systems are "summative". Is that a mathematical claim? A metaphysical claim? What exactly does it mean?
BDK writes: "I want to stress again my confusion about your claim that all physical systems are "summative". Is that a mathematical claim? A metaphysical claim? What exactly does it mean?"
It seems to be sort of a mathematical claim. The function of a mousetrap, for example, depends upon the proper functioning of all of its parts. Subtract one of the parts, and the entire system ceases to function as a mousetrap (unless we reconfigure the parts).
However, it's not clear to me that our unity of consciousness needs to be ontologically distinct. Say the consciousness that "All men are mortal, Socrates is a man, therefore Socrates is mortal" depends upon four physical events in the brain occurring simultaneously: one for each statement, and a fourth physical event that is the consciousness of the first three statements. Subtract one of the physical events, and no proper inference occurs. How is this different from our mousetrap?
Now if this is really true, if there is really no one individual who thinks the thoughts we think, then it follows straightforwardly that no one performs any rational inferences, including the rational inferences that have been used to reach the conclusion that the unified self is a fiction.
Not only is this not obvious, but I think it's false. Surely what makes a rational inference (in a thinking being) is a causal connection between representations of the premises and representations of the conclusion.
Saying that the mind is some sort of chorus does not imply that there's no causal connection between the voices in the chorus. Indeed, the idea behind chorus theories of consciousness is that there is a causal mechanism connecting the voices.
For example, when figuring out a problem, we often experience seeing one possible solution after another, throwing out the bad solutions until we get to a good one. But what if all the solutions are speaking in parallel, and we're only listening to the loudest voices, and only keeping the most harmonious solutions? I don't see why this defeats the notion of rational inference.
Dr. Logic writes: "But what if all the solutions are speaking in parallel, and we're only listening to the loudest voices, and only keeping the most harmonious solutions? I don't see why this defeats the notion of rational inference."
But who is the "we"? Isn't it the human consciousness? Besides the premises of the argument, there is a consciousness that observes the premises and draws the conclusion. Or at least observes the conclusion and picks it as "the most harmonious solution."
Bilbo,
Sorry if my wording was misleading. I think the idea is that the conscious thought is a matter of self-organization.
Edelman's book "A Universe of Consciousness" explains one mechanism for this organization based on selection.
However, we can imagine alternatives, like the attention mechanism we see in other brain regions.
For example, in vision, our attention may be drawn away from a familiar street and to the dog walking across it. When this happens, we may not be consciously changing our attention, and yet attention is redirected to the most visually dynamic part of the scene automatically. Now suppose that the brain is reflective in that it observes itself like the autonomic visual system observes an external scene. Then an attention scheme in the brain might focus on the most mentally interesting solution in the same way the visual cortex focuses on the most interesting part of a visual scene.
Now there is a model of neocortical function (called hierarchical temporal memory) that does precisely this. Attention is a heuristic method that helps to locate solutions consistent with a goal.
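Here is a toy sketch of what I mean (purely illustrative; this is not HTM or Edelman's actual model): attention can be nothing more than an argmax over a salience measure, with no inner chooser anywhere in the loop.

```python
# Toy "attention" heuristic (illustrative only; not HTM or Edelman's model).
# Attention lands on the most dynamic element of a scene automatically,
# the way the eye is drawn to the walking dog rather than the familiar street.

def attend(scene):
    """Return the scene element with the highest salience score."""
    return max(scene, key=lambda item: item["motion"])

scene = [
    {"name": "familiar street", "motion": 0.1},
    {"name": "parked car",      "motion": 0.0},
    {"name": "walking dog",     "motion": 0.9},
]

print(attend(scene)["name"])  # -> walking dog; no central chooser involved
```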
So the idea that there's no possible account of unity of consciousness is not a compelling argument against naturalism or physicalism.
The argument Victor quotes says:
2. Only something that functions as a whole rather than as a system of parts could experience a visual field as a unity.
This premise is totally unsupported. It really says that since I see the world as a whole, it must be the case that my mind is an indivisible whole. I think this kind of argument ignores science in favor of gut. As the eye saccades across a scene, what the eye actually sees is a series of images which the brain then glues together into a subjectively static scene. Well if the brain can unify multiple images into a single scene, why can't it unify multiple mental states into a single mental scene? Of course it can.
Dr. Logic wrote: "Well if the brain can unify multiple images into a single scene, why can't it unify multiple mental states into a single mental scene? Of course it can."
I'm not sure I disagree with this, assuming that we call this "single mental scene" consciousness, and by that mean that this "single mental scene" is aware of the other mental states.
Now how does this affect that aspect of the argument from reason? From what I can tell, not at all.
Bilbo,
Now how does this affect that aspect of the argument from reason? From what I can tell, not at all.
We don't have a first-person intuition that vision is discontinuous, but science shows that our first-person intuition is flat wrong. This isn't controversial.
Yet, if we're going to say that unity of consciousness is a problem, we have to also claim that unity of vision is a problem.
Why would you say that one intuition is open to challenge, and the other intuition is incorrigible? I think it's because the latter is morally offensive.
The excuse given for the specialness of the unity of consciousness argument is reasoning ability. If the intuition about our reasoning process is wrong, then everything else goes out the window (not just vision).
However, the science of vision does not tell us that vision is unreliable. It says that vision is unreliable in some special situations.
Likewise, naturalistic choir models of mind claim that our beliefs about the world and our methods of finding consistent solutions (i.e., inference) are statistically reliable. If the models told us that minds were unreliable to the point of invalidating our inferences, then naturalism would have a problem. But they don't. So there's no contradiction. In fact, the models suggest that we are imperfect, yet somewhat reliable reasoners, as is consistent with experience.
Naturalism says that most of our intuitions about reason are correct, but some are incorrect. And the incorrect intuitions do not threaten the validity of our arguments.
"We don't have a first-person intuition that vision is discontinuous, but science shows that our first-person intuition is flat wrong. This isn't controversial.
Yet, if we're going to say that unity of consciousness is a problem, we have to also claim that unity of vision is a problem."
Not sure this follows. But I'm out of time for today.
DL said:
We don't have a first-person intuition that vision is discontinuous, but science shows that our first-person intuition is flat wrong.
What do you mean that vision is discontinuous? What are the data? Saccades and such?
BDK,
What do you mean that vision is discontinuous? What are the data? Saccades and such?
Yes, saccades and optical illusions.
I was thinking last night that there are probably mental illusions that support the corresponding point for consciousness. Subliminal messages, for example.
DL:
Saccades don't show visual experience is discontinuous. There is a lot of work done on why our visual experiences are so stable despite the jerkiness of our eye movements (it involves the eye movement control system sending two signals out--one to the eyes, the other to other parts of the brain to tell it that the changes you are about to experience are coming from an eye movement, not from movements in the world). Recent good work on this includes this paper.
Lightly poke the side of your eye and you will see that the world looks like it jiggles, as the "corollary discharge" from the saccade system is not present. Our eye movements during normal vision are much more dramatic, but the world seems stable.
At any rate, I think it is clearly not settled that vision is discontinuous (not that I know exactly what that means).
Dr. Logic wrote: "We don't have a first-person intuition that vision is discontinuous, but science shows that our first-person intuition is flat wrong. This isn't controversial.
Yet, if we're going to say that unity of consciousness is a problem, we have to also claim that unity of vision is a problem."
First, I wouldn't be surprised if most of our thinking were carried out discontinuously, with only intermittent fits of consciousness.
Second, no doubt much or our cognitive activity is carried out subconsciously or unconsciously by our brains, and no doubt we often come up with beliefs or conclusions without being aware of how or why we did so.
But third, Reppert's point is that at least some of our thinking must involve a unity of consciousness if things like science or philosophy (what we're attempting to do now) are to be reliable. Thus, we need to be able to recognize and draw logical inferences in order for the sciences to proceed. And that, as far as I can tell, requires the simultaneous awareness of premises and conclusion. So at least temporarily there must be some sort of unity of consciousness. It may not last long. And if the proof or argument is a long one, it may not even last all the way through. But there would at least be the memory that the premises were true and the conclusion followed logically from them.
If we don't grant that there is such a unity of consciousness (no matter how short the duration), then I don't quite see the point of our posting arguments here at this blog. Certainly, Dr. Logic, you hope that you have posted a sound argument that your readers are capable of reading, considering, and hopefully agreeing with. If you really thought there were no unity of consciousness, why bother?
BDK,
Sorry. My original phrasing was poor, and it gave the impression that I was saying visual experience was discontinuous. I'm saying that the experience is continuous, but that the experience is the result of an integral over many distinct, discontinuous visual frames. That is, vision is discontinuous; the experience of vision is not.
Thanks for the link to that paper on corollary discharge. That's quite interesting, and illuminates one of the ways in which the brain is performing the integration.
Hi Bilbo,
If we don't grant that there is such a unity of consciousness (no matter how short the duration), then I don't quite see the point of our posting arguments here at this blog.
I agree, but you (and Victor) seem to falsely assume that unity of consciousness requires a single agency to make rational decisions.
In Victor's OP, he talks about Bill and Dennis and himself all thinking different parts of the Socrates syllogism, and Victor says that the independent thoughts of the three people do not constitute an example of reasoning through the syllogism. This is true, but the thing that makes it true is that Victor's thought about the conclusion is causally disconnected from the thoughts of the others about the premises. If Bill thinks the opposite, there's no reason that Victor will ever know the premise changed.
What is required for a correct inference is that there be a causal connection between the agents.
A good analogy would be economics. An economy will obey the basic rules of economics, even when the agents in the economy don't know those rules. Consider an inflation cycle. If worker paychecks increase, some workers will be willing to spend more on basic needs. That statistically favors vendors with higher prices, and so the average vendor price increases. Due to reduced competitive pressure, many vendors are free to raise prices. This places more pressure on workers to find work with higher salaries, and the cycle continues.
Now all of these agents are causing the economy to correctly fulfill an inflation cycle (raising salaries generally raises inflation), while none of the agents has necessarily even heard of inflation. No vendor agent needs to say "Worker salaries are up, so I'll raise prices." They can have less sophisticated connections (like greed or lack of knowledge) and still contribute to a unity of economy.
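A minimal simulation makes the point vivid (a toy model of my own; the 5% raise and the 0.8 pass-through coefficient are arbitrary): each agent responds only to its local incentive, yet the aggregate exhibits the inflation cycle.

```python
# Toy wage-price spiral (illustrative numbers only). Workers only ask for
# raises; vendors only follow demand. Neither "knows" about inflation,
# yet wages and prices chase each other upward year after year.

wages, prices = 100.0, 100.0
for year in range(1, 6):
    wages *= 1.05                          # workers win ~5% raises
    demand_pressure = wages / prices - 1   # workers' extra spending power
    prices *= 1 + 0.8 * demand_pressure    # vendors drift upward to meet it
    print(f"year {year}: wages = {wages:6.1f}, prices = {prices:6.1f}")
```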
If elements in the mind do the same, we can have lots of loosely-connected elements that result in (mostly) correct inferences, without any individual agent making the explicit inference from big-picture facts.
Dr. Logic wrote: "If elements in the mind do the same, we can have lots of loosely-connected elements that result in (mostly) correct inferences, without any individual agent making the explicit inference from big-picture facts."
Sure, that's what I was admitting when I said that for all I know most of our beliefs could be the result of subconscious activity.
However, in order for science or philosophy to proceed I think there needs to be conscious awareness of the reasoning process. Both disciplines depend upon the ability to publicly scrutinize any inference made "in their name", so to speak. And that means that conscious individuals need to consciously assess the evidence and the inferences. If that doesn't happen, then whatever subconscious inferences are made are, by definition, not open to scrutiny. No scientist says, "My conclusion follows from the evidence and logical inferences, because my subconscious processes are reliable." She says, "Here is the evidence, and here are the premises and conclusion." And she depends upon the rest of the scientific community to individually and consciously evaluate them.
Bilbo,
I don't see how any of this contradicts our ability to know our reasoning. It only contradicts the idea that all of the elements know the same thing at the same time. Factually, we know that the brain is a collection of associative memories. The memory that knows the melody for the national anthem is not going to know what a rabbit looks like, and not going to know why the Pythagorean theorem is true given the axioms of geometry.
So let's break this down another notch. There are two basic functions we're looking for. There's the ability to validate an inference, and the ability to hold an inference (or a chunk of it) in mind at once.
Let's work on the latter function first. Suppose I am reasoning about dropping a metal ball from the Tower of Pisa. Specifically, I ask "What will happen when I drop a metal ball from the Tower of Pisa?"
The brain element connected with the concept for "drop" triggers thousands of other elements. It thinks about dropping the soap in the tub, dropping eggs in the kitchen, dropping my bodyweight, dropping a nuke, dropping a name, and so on. The element representing the concept for tower triggers many memories/elements related to towers: the Sears Tower, the Tower of London, the Tower of Pisa, even Mr. Tower, my old HS wrestling coach.
Triggering these elements triggers corresponding predictions for each element. Dropping a dish causes it to fall. When soap falls, there is a loud noise when it hits the floor of the tub. When an egg falls, it quietly splats on the ground. I drop a name, and my social standing rises, etc.
After a short time, the system subdues all but a few network elements (there are algorithms that can find the most consistent beliefs). The "Tower of Pisa" is not my wrestling coach but the Leaning Tower in Italy, I'm more likely to drop a physical object from a tower than drop a name from it, the metal ball is more like hard soap than a fragile egg, and, hence, the likely outcome is a clunk when the ball falls to the hard ground below.
This is a valid inductive inference, and all the elements corresponding to that inference are alive and firing in my head by the time I have reached my conclusion. The other voices (like those of my wrestling coach) are effectively silent by then. In the mind, this inference is a pattern of element firings, and the mind can memorize, categorize, and index the inference.
Finally, the inference can be explored and validated by finding the broad categories that motivated the original inference. To do this, the network might generalize all the elements of the story. For example, the metal ball belongs to the category of hard objects. Towers are high and serve as platforms for dropping things. Dropping hard objects onto hard surfaces results in loud noises. Each of these generalizations can be expressed as a proposition.
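One crude way to picture the "subduing" step is as a search for the mutually most compatible interpretations. Here is a toy model of my own (not any published algorithm): rival interpretations of each concept are scored by learned pairwise compatibilities, and the network keeps only the most consistent combination.

```python
# Toy "settling" step (illustrative only). Each ambiguous concept has rival
# interpretations; learned pairwise compatibilities score every combination,
# and the network "subdues" all but the best one.

from itertools import product

candidates = {
    "tower": ["Tower of Pisa", "Mr. Tower (coach)"],
    "ball":  ["hard metal ball", "fragile egg"],
    "event": ["clunk on the ground", "rise in social standing"],
}

# Illustrative learned compatibilities; unlisted pairs default to 0.1.
compat = {
    ("Tower of Pisa", "hard metal ball"):             0.9,
    ("Tower of Pisa", "clunk on the ground"):         0.9,
    ("hard metal ball", "clunk on the ground"):       0.95,
    ("Mr. Tower (coach)", "rise in social standing"): 0.4,
}

def score(combo):
    """Sum pairwise compatibility over all pairs in a combination."""
    return sum(compat.get((a, b), compat.get((b, a), 0.1))
               for i, a in enumerate(combo) for b in combo[i + 1:])

best = max(product(*candidates.values()), key=score)
print(best)  # -> ('Tower of Pisa', 'hard metal ball', 'clunk on the ground')
```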
None of this speculation runs afoul of your objection. The thinker is aware of his inference, and aware of how he made the inference.
Dr. Logic wrote: "None of this speculation runs afoul of your objection. The thinker is aware of his inference, and aware of how he made the inference."
We may be at an impasse. It sounds like we're saying the same thing, but apparently we're not. And I'm not sure how to clarify the issue. It may take someone whose mind exercises more clarity.
Well, since no one else is stepping in, I'll take another crack at it.
"The brain element connected with the concept for "drop" triggers thousands of other elements. It thinks about dropping the soap in the tub, dropping eggs in the kitchen, dropping my bodyweight, dropping a nuke, dropping a name, and so on. The element representing the concept for tower triggers of many memories/elements related to towers: the Sears Tower, the Tower of London, the Tower of Pisa, even Mr. Tower, my old HS wrestling coach.
Triggering these elements triggers corresponding predictions for each element. Dropping a dish causes it to fall. When soap falls, there is a loud noise when it hits the floor of the tub. When an egg falls, it quietly splats on the ground. I drop a name, and my social standing rises, etc.
Dr. Logic, I take it you would say that these brain events are happening subconsciously? I'm willing to buy that.
Then: "After a short time, the system subdues all but a few network elements (there are algorithms that can find the most consistent beliefs). The "Tower of Pisa" is not my wrestling coach but the Leaning Tower in Italy, I'm more likely to drop a physical object from a tower than drop a name from it, the metal ball is more like hard soap than a fragile egg, and, hence, the likely outcome is a clunk when the ball falls to the hard ground below.
This is a valid inductive inference, and all the elements corresponding to that inference are alive and firing in my head by the time I have reached my conclusion. The other voices (like those of my wrestling coach) are effectively silent by then. In the mind, this inference is a pattern of element firings, and the mind can memorize, categorize, and index the inference."
Not clear here how much of this you think is happening subconsciously or consciously.
Then: "Finally, the inference can be explored and validated by finding the broad categories that motivated the original inference. To do this, the network might generalize all the elements of the story. For example, the metal ball belongs to the category of hard objects. Towers are high and serve as platforms for dropping things. Dropping hard objects onto hard surfaces results in loud noises. Each of these generalizations can be expressed as a proposition."
Again, it's not clear how much of this you think is happening subconsciously or consciously.
I would say that once we get to the propositional stage, we are in the conscious realm, and that we now have a continuously conscious event. Or if not continuously, then at least in rapid succession, like the frames of a movie film, happening so fast that we experience them as continuous. And it is the "we" that is the interesting part. For not only do we have conscious awareness of the propositions, we have conscious awareness that we have conscious awareness of the propositions. And it is this second level of consciousness that is the source of the unity. Or so it seems to "me." How does it seem to "you?"
Bilbo,
I would say that when the network has stabilized, then we are conscious of what the network has to say.
Imagine that our faculty for consciousness is like the visual cortex, but turned in upon the brain itself. Before the network stabilizes, consciousness is a blur. It's confusion. But when the network stabilizes on a consistent set of firings, the faculty can "focus" on the active nodes.
BTW, there are theories about how this can happen without a specific faculty for consciousness.
To review, the process of inference in such a mind is not deductive, but Bayesian. The Bayesian network explores all the possibilities in parallel, and settles on a solution according to learned correlations. Once the network has settled, the mind can point to the reasons (category rules) that would make the solution valid.
Of course, there is no guarantee that such a Bayesian network will always result in a solution that would be chosen by a purely deductive computer. Nor is there a guarantee that the mind will always be able to offer a rationalization for the solution in terms of propositional rules. However, human minds do not guarantee these things either.
I think one of the reasons that models of physical minds are hard to understand is that there is an intuitive assumption that inference is atomic and deductive. I'm arguing that it is neither atomic nor deductive, but rather statistical and post-rationalized. Statistical, post-rationalized thinking could be reliable (and self-correcting when post-rationalization feeds back into the Bayesian network).
The premise that Victor started with is that a model without a central reasoner cannot make inferences. This is a non sequitur. The reason we need a central reasoner is so that there is a causal connection between premises and conclusions, but physicalist models of mind all have that feature, so the argument doesn't apply. The reasoner doesn't have to be centralized. It only has to be causally connected.
Let me mull this over, Doc, and I'll get back to you.
Still mulling, Doc. But out of time, today.
Still mulling. Though at this point I would say it seems that my subconscious understands Bayesian probability and can calculate with it, even though I can't. And I can understand deductive logic and reason with it, but apparently my subconscious can't. Paradox?
Hi Bilbo,
You used the term "understand" in two different ways in your last comment.
When we normally speak of understanding a thing, we refer to the final product of the thought process. Understanding in this sense requires consciousness. In the model I presented, it refers to the results of consciously reflecting on the strongest subconscious connection.
However, when you say that the subconscious understands Bayesian inference, you mean it in a different sense. Like the sense in which a frog "understands" how to catch a fly, or how a car "understands" what to do when we hit the gas. Yet neither of these "understandings" is of the conscious variety, and "understanding" is a poor word for what is happening in these cases.
Sticking with the first meaning, our subconscious doesn't understand anything because subconscious thoughts aren't final, conscious products. The subconscious may use a Bayesian algorithm, but that doesn't mean the subconscious understands it. My heart pumps blood, but it doesn't know or understand how to pump blood.
Similarly, while the elements of the brain may not use deductive logic, that doesn't prevent them acting in concert as a mind that does understand deductive logic.
If we assume thought reduces to physics, then we're going to end up with a picture in which non-thinking, non-understanding parts add up to something that thinks and understands. Any reductionist theory of thought will have to specify what functions equate to thinking, understanding, etc.
If we assume thought reduces to physics, then we're going to end up with a picture in which non-thinking, non-understanding parts add up to something that thinks and understands. Any reductionist theory of thought will have to specify what functions equate to thinking, understanding, etc.
OK, I take it that you're not a reductionist. I think that's a good thing. However, we still have a problem. Our brain is functioning according to Bayesian probability calculations, while our mind is functioning according to deductive reasoning. How do the two connect?
Bilbo,
I am a reductionist.
Our brain is functioning according to Bayesian probability calculations, while our mind is functioning according to deductive reasoning. How do the two connect?
There's no reason to believe that the mind is functioning according to deductive reasoning at a low level. What we know is that we can point to observed regularities that would justify our conclusion. Deductive rules count among observed regularities.
As an example, if I solve a word problem, I don't generally think in terms of deductions. I intuit the conclusion, and then, if asked, I find deductive rules to justify my intuition.
Also, Bayesian probability gives the same answers as deduction in the limit in which deductive rules have been successfully verified many times.
For example, are you certain that 2+2=4? You might have made a mistake every time the question came up. Yet after trillions of calculations and validations, the probability that you and everyone else is wrong about 2+2=4 is very very tiny. So we might be effectively certain that 2+2=4, but we cannot claim absolute certainty. This shows us that Bayesian thinking leads eventually (after much experience) to deductive-like thinking.
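A back-of-the-envelope Bayesian update shows the convergence (my numbers are purely illustrative): start from a uniform prior over the rule's reliability, and each confirmation pushes the posterior toward 1 without ever reaching it.

```python
# Beta-Bernoulli update on "the rule holds" (illustrative numbers only).
# With a uniform Beta(1, 1) prior and n confirmations with no failures,
# the posterior mean is (1 + n) / (n + 2), which approaches 1 asymptotically.

for n in [10, 1_000, 1_000_000]:
    posterior = (1 + n) / (n + 2)
    print(f"after {n:>9,} confirmations: P(rule holds) ~ {posterior:.8f}")
```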
Hmmm...you are a reductionist? Back to mulling. I'll get back to you.