Thursday, April 03, 2008

Picking up a point of BDK's

In a previous dialogue BDK wrote: Regardless of my argument that a child with no epistemology training can know X, would you agree with the conclusion that a naive child can have such knowledge (i.e., that observation judgments can be knowledge)?

I'm not asking for an analysis of why they are knowledge, but if you think they do not constitute knowledge, then we definitely will not get any further. If you do there might be interesting questions at that point. So, can a philosophically naive child know X in the way I've described?

BDK: I'm interested in your conception of philosophical naivete. Does this child know how to use the term "I know"? It seems to me that long before one takes philosophy, one knows how to use the word "know" and can distinguish knowledge from "just guessing."

Can one know without knowing how to use the word "know"? My intuition says yes.

This isn't supposed to prove a point with respect to the argument from reason, one way or the other.

16 Comments:

At 4/04/2008 12:12:00 AM , Blogger Blue Devil Knight said...

Can one know without knowing how to use the word "know"? My intuition says yes.

This isn't supposed to prove a point with respect to the argument from reason, one way or the other.


I agree with both points. It is likely neutral wrt the argument from reason (at least without a lot of additional junk added in, it shouldn't do any damage to the argument from reason).

By 'naive' I mean something like what you say--a child (or adult, for that matter) who knows no epistemology, hasn't consciously thought about consciousness, and is just making claims about how the world is (e.g., "The TV is on the table"). It is a fairly vanilla claim, so I was surprised by all the hemming and hawing it met with.

Now, I wanted to make an additional, much more contentious claim: that naturalists should explore the possibility that humans use such observation judgments to bootstrap our way into rules of inference. Such bootstrapping (the 'extra junk' I referred to above) would involve an ability to judge the truth of individual claims, to ascribe truth-values to uttered sentences. E.g., someone says:
1. The TV is on the table.
2. The Simpsons is on the TV.
3. Therefore, Bart Simpson is on the table.

We can see that the 'therefore' here (which we can take to be an implicit assertion that 3 'follows from' 1 and 2, that the truth property flows from 1 and 2 to 3) doesn't work. I think a lot of mileage can be gotten from such relatively simple observations of 'therefores': which ones work, which ones don't. The rules implicit in the different types of such inferences that don't work can be weeded out pretty quickly, and after a few hundred years of writing the survivors down, thinking about them, and comparing their "predictions" to reality, we'd be left with rules that work pretty well.

Logic seems to be an attempt to abstract away from the messy semantic properties of individual sentences and to express which types of 'therefores' are worth keeping, stripping the expression of the rule down to the syntax and truth values of sentences (or sentence constituents).
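To make that concrete, here is a minimal sketch in Python of what checking a 'therefore' amounts to once sentences are stripped down to syntax and truth values (the function and variable names are just mine, made up for illustration): enumerate every assignment of truth values and see whether any of them makes all the premises true while the conclusion comes out false.

from itertools import product

def is_truth_preserving(premises, conclusion, variables):
    # Returns True iff no assignment of truth values makes every premise
    # true while the conclusion comes out false.
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # counterexample found: this 'therefore' leaks truth
    return True

# Modus ponens (P, P -> Q, therefore Q) survives the check:
print(is_truth_preserving(
    [lambda e: e["P"], lambda e: (not e["P"]) or e["Q"]],
    lambda e: e["Q"],
    ["P", "Q"]))  # True

# Affirming the consequent (P -> Q, Q, therefore P) gets weeded out:
print(is_truth_preserving(
    [lambda e: (not e["P"]) or e["Q"], lambda e: e["Q"]],
    lambda e: e["P"],
    ["P", "Q"]))  # False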

So that's a little bit of the extra junk I was thinking of. Now, of course, you could say that I am talking about inferences using public language (which includes logical language), while you are thinking of something internal in the argument from reason. But I also assume you would think there are strong analogies between external linguistic justification and internal justification.

OTOH, I bet we would disagree on how much to push the analogy between linguistic and mental inference. However, most of us would agree that the contents of public expressions depend in some sense on the contents of our thoughts--the question is whether logical thought (of the type required in the argument from reason) is possible without a public language, or whether such logical thought is the internalization of learned patterns of public inference.

Obviously, I tend to be skeptical of language-of-thought type ideas, and of propositional attitude psychology in general (whereby there are internal propositional semantic states, plus additional internal attitudes, such as 'belief', that we take toward those internal states). I think it is reasonable to hold that the brain uses nonpropositional contents--intuitively, that is what people seem to find reasonable when we talk about the contents of our visual field, which don't seem propositional.

Obviously this is a very complicated topic. I have barely scratched the surface here, and I would need to think through the studies suggesting that humans left to their own devices (even the philosophers in those studies) are not all that good at deductive inference. You could say that if we are bad at one formal logic (e.g., Boolean logic), that just suggests we have more to learn about the rules we do in fact use in our reasoning--rules even the caveman used without knowing it.

 
At 4/06/2008 05:07:00 PM , Blogger Darek Barefoot said...

BDK

I'm not trying to be cute with you when I insist that the word "know" has a range of meanings and that it would be easy--and dangerous--to trade on the imprecision involved. "The dog knows that her dish is in the kitchen." "Einstein knew that the principle of addition of velocities was not universally applicable." There's a lot of distance between statements like these.

I'm not sure I can always peg exactly where higher levels of knowing come into play in every situation. I'm interested in the fact that inference does take place, that it entails a type of normativity, and that this normativity cannot be observationally confirmed any more than moral normativity can.

Someone in the same room with me says, "There's a TV on the table." I'm interested in what happens when I ask them, "Why do you think so?" Do they say, "Because I see it there" or "Because all brillig were the slithy toves" or do they give me a blank stare? How they respond probably says something about how active the power of inference is in the individual.

Suppose a woman talking on the phone says to her friend, "The For Rent sign is gone from the house across the street and there's a moving van parked in front, so I think it has finally rented." Then later in the conversation she says, "I didn't sleep well last night. I watched a horror movie, so all night I lay awake thinking that every little creak was an intruder." Can the woman distinguish between:

"The sign was gone and a moving van appeared, therefore the house must have rented."

"I watched a horror movie, therefore every creak in the house was made by an intruder."

I think that she can make that distinction and that she would express it in terms of what "makes sense" and what does not. She is depending on what I call the metanorm of rationality--the idea of correctness in arriving at conclusions--whether she puts it in those terms or not.

As for weeding out rules that don't work, you have to have a rule prior to the rules at issue--a rule for what to weed out and what to keep--in order to accomplish that. Unless you mean that selection weeds out individuals according to the rules they employ.

 
At 4/07/2008 11:31:00 PM , Blogger Blue Devil Knight said...

DB: your questions are good ones. I am going to avoid the perceptual judgment stuff for this message, as I think I've found a more direct way to penetrate to the heart of the matter.

I am advocating something here I am not entirely certain is true, but I do think it is in the right ballpark.

Note I disagree with your claims that there has to be an assumption of good inference rules. Perhaps the belief that our memories don't systematically deceive us is there, though, and perhaps other beliefs about the reliability of our more basic faculties.

I take it that the overriding norm, when it comes to endorsing (deductive) inference rules, is a desire for truth-preserving inferences. We notice that some don't work, and we keep those that do. I don't want to push the natural selection analogy too hard, but it seems pretty apt.

It could have turned out that there were hardly any truth-preserving inferences. It is something of a happy accident that we live in a world stable enough, with cognitive systems reliable enough, such that given the urge for truth-preserving inferences, we actually found quite a few of them that work most of the time.

This is a strange claim, sure to put me at odds with a priori realists about math, logical truths, that sort of thing.

So let me tentatively try to paint a world where there are weird, different, or even perhaps no (deductive) rules of inference that are universally valid.

Consider an axiomatic system S powerful enough to derive certain basic truths about arithmetic. Godel proved (deductively) that if S is consistent, then there are statements in the language of S that are true but not provable using S. To add insult to injury, he also proved that S, if consistent, cannot prove its own consistency. (Aside: this is why people don't like Godelian arguments about math and mind--they all assume we can know that S-type systems are consistent.)
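For the record, the textbook statements I am leaning on--nothing original to me--are roughly these, for a consistent, recursively axiomatized theory S that interprets basic arithmetic:

$$\text{(G1)}\quad S \nvdash G_S \quad\text{and}\quad S \nvdash \neg G_S \ \text{ for some arithmetical sentence } G_S \text{ that is nonetheless true in } \mathbb{N};$$
$$\text{(G2)}\quad S \nvdash \mathrm{Con}(S).$$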

Consider a universe in which S is not consistent. Nobody can prove that it is inconsistent, but it is (say God issues a belief commandment: S is not consistent). We can't prove it either way, but God has told us the correct answer, so it's probably true.

So, assuming God is right, that implies [sic] that for some proposition P, both P and not-P are the case. There goes a cherished logical law--perhaps the most cherished--the law of non-contradiction. Is anyone here sure we aren't living in such a universe? Is there a good argument that we aren't?

So, continuing this line of thought, if we are living in such a universe, why do the rules of inference work so well? Because of the selection process I described above. We got lucky. It could have been different (after all, in such a universe even the law of non-contradiction would not hold universally), but the universe, and our attempts to understand it using truth-preserving inferences, have turned out to work really, really well. But since we can't just assume the rules work, we have to treat them all as predictions or hypotheses about the truth values of the sentences that come after a 'therefore', based on the truth values of the sentences that come before it.

I'm pretty sure this argument has a hole somewhere, but I frankly can't see where. Perhaps one of those cocky logicians who understands this stuff better than I do could give me the beat-down. (Perhaps they would mention that Godel also proved that first-order logic is complete.)

The main reason I think there must be a hole is that it seems like a fairly obvious argument, and I have never seen it before.

A closely related argument that connects to actual work in logic uses the liar's paradox. Is truth-value a bivalent property? That is, is a proposition P either true or false? 'This sentence is false' suggests the answer isn't clear. There are all sorts of alternative logics (e.g., paraconsistent logics, in which the law of non-contradiction does not (always) hold, partly inspired by Godel's results). Now you might say, "Fine, assume the paraconsistent logic is true. Within paraconsistent logics there are still inference rules. How do you know those are true?" I would say that, a priori, I don't know they are true, but such logics provide the best model of how truth-values propagate through sentences; the model was constructed via trial and error, and it could well be false too. It is the best model of truth-propagation that we have so far come up with.
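To give a feel for how such a logic blocks a classical rule, here is a small sketch in Python (my own toy encoding, loosely following Priest's three-valued Logic of Paradox, LP; all the names are made up): a sentence can be True, False, or Both, and an inference counts as LP-valid if it never takes premises with designated values (True or Both) to a plainly False conclusion. On that definition, explosion--from P and not-P, infer any Q--fails.

from itertools import product

T, B, F = "T", "B", "F"      # true, both-true-and-false, false
DESIGNATED = {T, B}          # values that count as "holding"
RANK = {T: 2, B: 1, F: 0}

def neg(v):
    return {T: F, B: B, F: T}[v]

def conj(a, b):
    # Conjunction takes the "falser" of the two values.
    return a if RANK[a] < RANK[b] else b

def lp_valid(premises, conclusion, n_vars):
    # LP-validity: no valuation gives every premise a designated value
    # while the conclusion comes out plain false.
    for vals in product([T, B, F], repeat=n_vars):
        if all(p(vals) in DESIGNATED for p in premises) and conclusion(vals) == F:
            return False
    return True

# Explosion (from P and not-P, infer Q) fails in LP: take P = Both, Q = False.
print(lp_valid([lambda v: conj(v[0], neg(v[0]))],
               lambda v: v[1],
               2))  # False -- a contradiction does not entail everything here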

 
At 4/07/2008 11:35:00 PM , Blogger Blue Devil Knight said...

This is a nice little article on paraconsistent logics.

 
At 4/08/2008 12:19:00 AM , Blogger Blue Devil Knight said...

I posted my last post, cleaned up quite a bit, under my real name here.

 
At 4/08/2008 11:58:00 AM , Blogger Blue Devil Knight said...

I think my Godel argument is weak at best, perhaps just totally wrong (see good comments at the link to the other version of the argument). The paraconsistent logics stuff, though, is more interesting.

 
At 4/08/2008 09:53:00 PM , Blogger Darek Barefoot said...

BDK

>>Note I disagree with your claims that there has to be an assumption of good inference rules. Perhaps the belief that our memories don't systematically deceive us is there, though, and perhaps other beliefs about the reliability of our more basic faculties.<<

Well, I didn't claim that the assumption of valid methods of inference is the only necessary assumption. However, I honestly cannot see it as dispensable. It is an assumption that Godel made. It is one that dialetheists make. It is one that the Copenhagen school of quantum theory makes. And it seems to be one that your posts show you make, since you invoke logical implication at regular intervals.

I don't claim to have more than a tenuous grasp of Godel's arguments, much less those of the dialetheists. I just notice that they all do present arguments. I don't see how anyone presents an argument--even an argument to themselves--without presupposing that there are at least some (or one?) correct ways to arrive at conclusions. The very claim that there possibly are no correct methods of inference, if it were more than a bald assertion, would have to be accompanied by arguments of some kind, which is another way of saying that it would have to present itself as a conclusion correctly arrived at. Maybe I am just insufficiently tuned to the subtleties of philosophy to see that I am wrong about that.

 
At 4/09/2008 12:38:00 AM , Blogger Blue Devil Knight said...

DB: of course we employ such rules all the time. The question is how we discovered the rules. Did we have to have those (same) rules in place in order to discover them, or could a kind of bootstrapping work--a fine-tuning of inference rules to patterns of truth and falsity (and to things like the liar's paradox)?

 
At 4/09/2008 06:41:00 AM , Blogger Darek Barefoot said...

BDK

Maybe it's that I think rationality entails being able to give some kind of reason for a belief, and that in order to do that, even within the confines of one's own thoughts, one has to suppose a potentially right way of thinking about whatever it is. (In deference to Wittgenstein, even if we never actually are in the confines of our own thoughts, it makes no difference to the point.) You have to take some kind of step, mentally, and you will never take that step in the absence of the feeling that there is some ground to step onto, so to speak. Even "I just don't know what to think about this!" is a conclusion of sorts and makes that supposition. The idea that you can take that step without any sense that there is something to step onto--which is what bootstrapping entails--seems to me to fall short of coherence.

 
At 4/10/2008 01:00:00 AM , Blogger Blue Devil Knight said...

But I didn't say there were no standards in place--only that we can get to newer, better standards, even starting with a minimal, crappy set of standards (hence my use of the modifier 'same' in the previous comment).

I bet I could build a computer model of little agents that run around making perceptual judgments that are followed by random claims about other things (or other properties of the same thing) in the environment. Give these agents only two abilities--the ability to recognize whether a given statement is true or not, and the ability to correct conspecifics.

I bet after a while, they'd be doing some good inferences (e.g., 'That table is square...therefore that table has four sides').

There would be no assumptions on the part of the agents about inference rules. No consciousness to even worry about. But even they would develop public rules of inference that worked well. If you buy this, that suggests you are wrong--i.e., that no assumptions like the ones you mention are necessary. Only the ability to track truth-values and to weed out sentence transitions that are baloney.
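A very crude sketch of the kind of simulation I have in mind (the toy world, the rule format, and all the names are just made up for illustration) might go like this: candidate sentence-to-sentence transition rules are proposed, the shared world fixes which claims are true, and any rule caught taking a true premise to a false conclusion gets culled.

import random

# A toy "world": which (object, property) claims are true.
WORLD = {("table", "square"): True, ("table", "four_sides"): True,
         ("table", "round"): False, ("tv", "on_table"): True}

OBJECTS = ["table", "tv"]

# Candidate transition "rules": if (obj, a) is asserted, also assert (obj, b).
CANDIDATES = [("square", "four_sides"), ("square", "round"),
              ("four_sides", "square"), ("on_table", "square")]

def survives(rule, trials=1000):
    # Cull a rule the first time it is caught taking a true premise to a
    # false conclusion; the agents need only track truth values.
    prem, concl = rule
    for _ in range(trials):
        obj = random.choice(OBJECTS)
        if WORLD.get((obj, prem), False) and not WORLD.get((obj, concl), False):
            return False
    return True

kept = [r for r in CANDIDATES if survives(r)]
print(kept)  # 'square -> four_sides' survives; 'square -> round' does not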

Of course we are not such agents. We have a lot more going on than they do. But this reliabilism about inference rules doesn't seem to have an in-principle barrier, so your philosophical objection seems wrong.

It should only help that we have a lot more going on cognitively than these agents do: memories, and the ability to experiment on things to really put the rules we are using to the test.

Perhaps more worrisome for the view you are pushing, how would you account for revision of logical practices? There has to be some kind of bootstrapping from an older system to a newer, different system. Unless you want to say that inference rules are never revised, you have to account for this (and if you do want to say that, call Frege and tell him his system of logic didn't really need revising after Russell discovered the paradox). I am simply giving an account of how I think it happens.

 
At 4/10/2008 07:45:00 PM , Blogger Darek Barefoot said...

BDK

>>Perhaps more worrisome for the view you are pushing, how would you account for revision of logical practices?<<

I have answered this before. The conviction that nature is lawlike does not entail an absolute commitment to particular scientific laws. That leaves scientists free to revise the laws, but always within the context of a conviction that nature operates in a lawlike way.

Just shift this situation over from science proper to logic. The conviction that there are correct ways to arrive at conclusions leaves open the possibility of refining our conception of exactly what these ways consist of. That is why, as I say above, even dialetheists make arguments that cannot but imply that they have correctly reached conclusions. So I call correctness a "metanorm" as opposed to "norms" such as the laws of thought and the rules of inference.

 
At 4/15/2008 11:39:00 AM , Blogger Blue Devil Knight said...

DB: OK, that is reasonable. I don't look at it as an assumption, but as something we see confirmed, so that is our main difference. It could have turned out differently (e.g., if we lived in a semantic world where the liar's paradox was the norm, we'd be in trouble).

 
At 4/15/2008 07:42:00 PM , Blogger Darek Barefoot said...

BDK

We are in agreement then as to the basic difference.

Hume argued that nature's "lawlikeness" (he used the word "uniformity," while others talk about intelligibility) is something that we must assume. I am fond of reminding everyone that Einstein agreed on this point. If Hume was right, we necessarily assume the lawlikeness of nature in the process of discovering the particular laws it entails.

If the normativity of reason parallels the normativity of physical causation, then we assume correctness in the process of investigating the particular norms that this correctness entails.

Not everyone agrees with Hume that nature's intelligibility must be assumed. And you disagree on my analogous point about reasoning. That's clear. But IF it were the case that correctness is necessarily assumed in all investigations of rational thought, then on that basis alone the AfR goes through.

 
At 4/18/2008 09:33:00 AM , Blogger Blue Devil Knight said...

This makes me feel very good about my view of logic as applying to contents of public linguistic strings:
A deduction is speech (logos) in which, certain things having been supposed, something different from those supposed results of necessity because of their being so. (Prior Analytics I.2, 24b18-20)

 
At 4/18/2008 11:05:00 AM , Blogger Blue Devil Knight said...

Hmm, perhaps 'speech' isn't the best translation of 'logos' in that context. I'm not sure, I just don't want to be tendentious in my appeal to authority.

 
At 4/18/2008 09:51:00 PM , Blogger Darek Barefoot said...

BDK

I see the quote as an invitation to think correctly about the nature of deduction, or simply about the function of speech; otherwise it is just a piece of flotsam on a sea of shifting assertions. Even to claim that rational discussion is just that--a tide of statements morphing into other statements--implies an island or rock from which the ebb and flow of assertions can be seen for what it is.

 
