Wednesday, November 14, 2007

An Argument That You Can Never Know How Justified P is, If You Think P is Highly Justified

Here is an odd argument.

I have been asking around, and a significant number of people seem to believe that coming to know, or even merely coming to believe, that p is highly justified for you is itself to increase your justification for p. Call this principle J.

Given that, an odd argument can be made.

Suppose that p is justified for S to degree .99 at t (if you don’t favor a numerical scale for justification, it works just as well to give your degree of justification a proper name; dub it ‘Bob’ or something). Suppose further that at t+1, S believes p and also believes that p’s justification is .99. For the sake of argument, assume that any belief justified to .99 is highly justified, and that S believes this. Then at t+1, S believes p and believes that p has a justification of .99, but the latter belief is false: by principle J, S’s justification for p is now greater than .99.

Thus, if J is true, then whenever any S comes to believe that his belief in some p is justified to any specified level whatever, and he takes that level to be a high level of justification, his belief about that level is false, and so he can never know his level of justification in such a situation. If he infers from that specified level of justification, he may fall prey to Gettier cases, too. The same goes for beliefs that S takes to have a very low justification; there, believing that the level is low may, in fact, completely defeat his justification for the belief.

Oddly enough, so long as S doesn’t believe that his level of justification is high (either because he simply hasn’t thought about it, or because he believes a higher level would be needed to count as high), he can know that his specified level of justification holds.
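The self-defeating structure can be sketched as a toy model. Everything here beyond the argument itself is my own illustrative assumption: the [0, 1] scale, the function name, and the size of the boost that principle J delivers.

```python
# Toy model: principle J says that coming to believe that p is highly
# justified at some level itself raises one's justification for p.
# The numeric scale and the boost size are illustrative assumptions.

HIGH = 0.99  # assume, as in the text, that .99 counts as highly justified

def apply_principle_J(justification, boost=0.005):
    """Believing that p is highly justified raises one's
    justification for p by some positive amount (capped at 1.0)."""
    return min(justification + boost, 1.0)

# At t, p is justified for S at exactly .99.
j_t = 0.99

# At t+1, S comes to believe "p's justification is .99", a level
# S takes to be high, so principle J raises the justification:
j_t1 = apply_principle_J(j_t)

# S's belief "p's justification is .99" is now false:
assert j_t1 > 0.99

# By contrast, the comparative belief "p's justification is > .99"
# is not defeated: it remains true however often J fires.
j_t2 = apply_principle_J(j_t1)
assert j_t2 > 0.99
```

The cap at 1.0 is only a bookkeeping choice for the sketch; the argument itself needs nothing more than that the boost is strictly positive.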

It is also interesting that the argument doesn’t seem to affect temporally indexed beliefs: if S believes that p’s level of justification at t was .99, that belief isn’t defeated, for obvious reasons. Nor does it affect beliefs like “p’s justification is >.99,” for it will remain true that p’s level of justification is >.99 if S’s justification increases.

I’m not sure what to think of all this. Some I’ve talked to think this is an acceptable result; some even think it is welcome. I find it bizarre.

I’ll add one more thing:
Jeremy Fantl uses a very similar argument against reliabilism. Suppose that S believes p and that this belief was produced by a 100% reliable process. It would seem, then, that S’s degree of justification is 100%; that is, it cannot be increased. But suppose that S came to believe that p is justified at 100%. Principle J entails that S’s justification for p must increase in some degree. But then S’s degree of justification for p was not 100%, as was supposed.
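The reductio can be put in the same toy terms (again, the numeric framing is my own sketch, not Fantl’s): if justification is already maximal, the strict increase that principle J demands is impossible.

```python
def apply_principle_J(justification, boost=0.005):
    """Principle J: believing that p is highly justified raises
    one's justification for p, capped at the maximum of 1.0."""
    return min(justification + boost, 1.0)

# Reliabilist supposition: S's belief that p was produced by a
# 100% reliable process, so S's justification is maximal.
j = 1.0

# S then comes to believe "p is justified at 100%". Principle J
# demands a strict increase, but none is possible:
j_after = apply_principle_J(j)
assert not (j_after > j)  # the demanded increase fails

# So the supposition contradicts J: if J holds, S's justification
# for p cannot have been 100% after all.
```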

I thought it was a clever argument, but since I’m not inclined toward maximal justification and even less inclined toward reliabilism, it doesn’t trouble me much. But I do wonder about principle J.

Fantl, Jeremy. "Modest Infinitism." Canadian Journal of Philosophy 33, no. 4 (2003): 537–562.
