Lewis and computers
C. S. Lewis lived before computers became the major force that they are now. I think he would not be impressed by the argument that
1) We know that computers are purely material systems.
2) We know that computers reason.
3) Therefore we know that material systems are capable of reasoning.
The reason he would not be impressed is that while computers do the "ratio" part of rational inference very well, the "intellectus" aspect is not to be found in the computer system itself; it is put in by human programmers and builders.
He wrote:
"We are enjoying intellectus when we ‘just see’ a self-evident [basic] truth; we are exercising ratio when we proceed step by step to prove a truth which is not self-evident. A cognitive life in which all truth can be simply ‘seen’ would be the life of an intelligentia, an angel. A life of unmitigated ratio where nothing was simply ‘seen’ and all had to be proved, would presumably be impossible; for nothing can be proved if nothing is self-evident. Man’s mental life is spent labouriously connecting those frequent, but momentary, flashes of intelligentia which constitute intelluctus."
Can a computer "just see" that a truth is self-evident, as we do? Or does it accept those "truths" only because we humans perceive them and program it to act as if it did as well? Isn't the latter obviously true? How can intellectus be possible for computers, even jazzy neural nets?
Labels: C. S. Lewis, computers, The Argument from Reason
9 Comments:
Victor
Would you mind providing the reference for the quote? Thx.
Hi Victor,
Nice post. Do you think that an acknowledgment that a computer can 'ratio' is a vindication of the physicalist position that reason can work in a purely physicalist scheme? I'm not sure it is, since the reason in the computer is derived from an intelligent force (a person). I'm not sure a physicalist would want to acknowledge that humans are rational because an intelligent force structured them. Also, do you think that the 'intellectus' could also be described as the qualitative aspects of mental states? I know this is overlapping conceptual schemes, but do you think it could work, or do you think the ancients had something else in mind when they talked about 'intellectus'?
I like this line of thought, but think it could be pressed a little harder, along lines suggested by rino.
If you agree that computers aren't capable of intellectus, then among the things that they cannot "just see" are the truths of reason and the laws of rational inference themselves. As such, although a computer can be programmed to occupy states in a sequence that adheres to the laws of logic and rational inference, its "seeing" the truth of the law which made that inference rational could never form any part of the explanation for this chain of events.
As such, laws of inference would have to be "given" to the system, hard-wired or hard-coded.
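To make that concrete, here is a minimal sketch of what "given" means here (the names and the encoding of the rule are purely illustrative, not drawn from any actual system): the program steps through modus ponens mechanically, and the validity of the rule is something its programmer saw, not something the program sees.

    # Purely illustrative sketch: the rule (modus ponens) is hard-coded
    # by the programmer; the program never "sees" that it is valid.
    def modus_ponens(facts, conditionals):
        """Repeatedly add the consequent of any 'if A then B' rule
        whose antecedent is already among the known facts."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for antecedent, consequent in conditionals:
                if antecedent in derived and consequent not in derived:
                    derived.add(consequent)  # a caused state change, not an insight
                    changed = True
        return derived

    facts = {"Socrates is a man"}
    rules = [("Socrates is a man", "Socrates is mortal")]
    print(modus_ponens(facts, rules))
    # e.g. {'Socrates is a man', 'Socrates is mortal'}

The program's states reliably track a valid inference, but only because someone who could "just see" the rule's validity put it there.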
Steve
Darek: Here's the Lewis reference.
C. S. Lewis, The Discarded Image, p. 157.
David Berlinski makes a similar point in his article about evolutionary psychology, 'On the Origins of the Mind' (well worth a read). He argues that any computer program requires a rational agent to set the initial conditions and commands in order for it to do any work resembling rationality, thus leaving us no nearer to a computer understanding of rationality, and that
"What gives to the program its air of cool command is the fact that its symbols function in a double sense. They are symbols by virtue of their meaning, and so reflect the intentions of the human mind that has created them; but they are causes by virtue of their structure, and so enter into the rhythms of the real world."
I haven't picked up Plantinga's warrant trilogy for a good while, but if I remember correctly, he made a big deal about how he thought the early moderns were wrong about the phenomenology of rational insight, whether that of demonstration or intuition. He said he didn't experience some sort of "lustre" or "natural light of reason" accompanying his conviction when entertaining, say, that 2+3=5. Since he rejected the traditional picture of a special phenomenology of rational insight, (I think) he concluded that *the mere conviction itself* was the evidence (and the only evidence) for "rational insight" -- I think he calls it "impulsional evidence".
Now I'm not endorsing this picture. But if he's right, then the idea of incorporating rational insight into a "naturalistic" picture of the mind is, I think, a little easier to accept.
I'm not sure what I think about this at the moment -- it just struck me when I read your post. Any thoughts?
Best,
EA
Steve
If you are still checking these comments, I wonder if Lewis's point requires recognition of rules of inference or merely following them by the compulsion of logical necessity.
It seems generally accepted that physical cause-and-effect lacks logical necessity. Nothing about physical effect state B follows by logical necessity from physical cause state A. If this is right, then no physical state of a computer can be the logically necessary result of a previous physical state.
However, is there a case where a given mental state is logically necessary given a preceding state? For example, in the case of 2+3=5, does the mental state constituting a correct understanding of "2+3=," to the extent that it entails the desire to supply the missing term, logically necessitate a succeeding mental state constituting the awareness of 5 (or its equivalent) as the appropriate sum? Or take the example of the Socrates syllogism. The major and minor premises are not the same as the conclusion, even if they contain all that is necessary for it. Is it possible to understand the premises correctly and be motivated to deduce the conclusion and yet fail to do so (other than, say, due to having a stroke or being struck by lightning while contemplating the premises)? Or would failure to deduce the conclusion necessitate that either the premises were not understood or the motivation to determine a conclusion was lacking?
If a mental state can be the logically necessary result of a previous state, then we can on that basis alone distinguish rational mental processes from computer processes.
Darek,
Hmm. I'm not too sure about this, on several levels.
Firstly, I doubt that any thought could be logically sufficient for the occurrence of any other thought. Obviously, you're right that we'd want to ignore exceptions resulting from death, stroke and other cases of complete cognitive failure, but even then we'd also seem to need to rule out irrationality and to require a certain kind of relevance or "attention".
The proposition P has many entailments. Among them are the trivial ones like:
P AND N
P OR Q
if not-P then Q
where N is any necessary truth and Q is any proposition whatsoever. Given my awareness of these general entailments, I'm rather glad not to "automatically" find myself thinking about them whenever I think about P. I don't think my brain would cope.
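For what it's worth, all three really are entailments; here is a quick sketch in Lean 4, with the simplifying assumption (mine) that the necessary truth N is treated simply as an available hypothesis:

    -- Each 'example' checks one of the trivial entailments of P above.
    example (P N : Prop) (hP : P) (hN : N) : P ∧ N := ⟨hP, hN⟩
    example (P Q : Prop) (hP : P) : P ∨ Q := Or.inl hP
    example (P Q : Prop) (hP : P) : ¬P → Q := fun hnP => (hnP hP).elim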
Thankfully, inference takes more than just entertainment of the premises. In sufficiently obvious cases the inference may seem inevitable, but even there I suspect our minds are doing a little extra work.
But even if we thought that certain thoughts were "inevitable" I'm not sure that this would cause the naturalist the problems you suggested. Though it might, I'm not sure. My idea is that the naturalist will simply redescribe the cases, so that the entertaining of the conclusion of the inference is simply a part of the entertaining of the (combined) premises. This would mean that the conclusion was not really "later" in time than the premises, but I'm not sure that would be so bad. Perhaps it's only after the premises have been entertained genuinely "together" that the conclusion becomes "inevitable".
At any rate, I think this line of argument would be very difficult to make stick.
Steve
Steve
I think this line of thought is potentially fruitful. How is it that we conclude that necessary truths are necessary except by their inescapability given a condition of rational attention to them?
We understand that an otherwise unsupported apple will fall when dropped from the hand. We also understand that 2+3=5. But in the second case we find ourselves compelled to the conclusion more forcefully than we do in the first. We understand in the second case that no conclusion other than 5 or its equivalent is possible. 2+3 must equal 5 in any logically possible universe. Doesn't this mean that in any logically possible universe, a thinker paying the kind of rational attention we pay to 2+3 in order to come up with 5 will likewise come up with 5? But that puts the character of the thinker's activity itself into the category of logical necessity, doesn't it?
Why would a computer, in order to perform its function, need to understand (to the extent that a computer can be said to understand) "2+3=5" in a way other than the way it might be programmed to understand that an unsupported apple will fall to the ground when released from the hand? And if the computer could perform its functions using only the type of understanding exemplified in the first case, why couldn't our brains, if indeed our brains are computers?
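To put my question another way, as a toy program (entirely hypothetical, with names of my own choosing): a necessary truth and a contingent regularity can be stored and retrieved in exactly the same way, and nothing in the machine marks the difference.

    # Toy sketch: the machine is indifferent to whether a stored
    # association is logically necessary or merely contingent.
    knowledge = {
        "2+3": "5",                      # necessary in every possible world
        "unsupported apple": "falls",    # a contingent regularity
    }

    def answer(query):
        # One and the same lookup serves both kinds of "understanding";
        # no special compulsion attaches to the necessary case.
        return knowledge.get(query, "unknown")

    print(answer("2+3"))                # -> 5
    print(answer("unsupported apple"))  # -> falls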