Thursday, November 15, 2007

Does the Denial of Maximal Justification Entail Infinitism?

Since I’ve been reading about infinitism lately, here’s an argument for infinitism. Infinitism is the theory of the structure of epistemic justification according to which a belief is justified if and only if there is an available infinite regress of inferentially related justifying reasons. Peter Klein is the most famous proponent of the view, though there have been some new infinitists in the literature of late, including Jeremy Fantl, whose argument I’m concerned with here:
If for any degree of justification there is a higher degree of justification, then there will always be a reason such that, were you to have it, the degree of justification would increase. But this is just infinitism. [1]
Fantl is arguing here that if you deny that there are any completely or maximally justified propositions, that is, propositions that are justified and whose justification cannot be increased, then you are committed to infinitism. I think this argument is subject to a counterexample. Fantl’s argument seems to be that if, for any p justified for any S, there is always a higher degree of justification that p could have, then there is always another reason that could justify p further. Therefore there are an infinite number of reasons, which is infinitism. But here is why I think this argument doesn’t work. Infinitism doesn’t only require that there be an infinite number of reasons. Indeed, infinitism is completely consistent with skepticism: if infinitism is true and there are no propositions for which there are infinite regresses of reasons justifying the antecedent reasons, then there is no knowledge. The key to infinitism is not merely that there be an infinite collection of reasons, but that the reasons must themselves be inferentially justified. The denial of maximal justification is consistent with there being an infinite number of beliefs which are both noninferentially justified and inferentially justified. Thus, infinitism does not follow from the denial of maximal justification.

[1] Fantl, Jeremy. "Modest Infinitism." Canadian Journal of Philosophy 44, no. 4 (2003): 537–562, 538.

Wednesday, November 14, 2007

An Argument That You Can Never Know How Justified P Is, If You Think P Is Highly Justified

Here is an odd argument.

I have been asking around, and a significant number of people seem to believe that coming to know, or even merely to believe, that p is highly justified for you is to increase your justification for p. Call this principle J.

Given that, an odd argument can be made.

Suppose that p is justified for S with a degree of justification of .99 at t (if you don’t favor a numerical scale for justification, it works just as well to give a proper name to your degree of justification; dub it ‘Bob’ or something). Suppose further that at t+1, S believes p and believes that p has a justification of .99. For the sake of argument, assume that if any belief is justified to .99, then it is highly justified, and that S believes this. Thus, at t+1, S believes p and believes that p has a justification of .99, but the latter belief is false, because S’s justification for p is now >.99, because of principle J. Thus, if J is true, then if any S comes to believe that his belief in some p is justified to any specified level whatever, and he takes that level to be a high level of justification, his belief about that level is false, and thus he can never know his level of justification in such a situation. If he infers from this level of justification, he may fall prey to Gettier cases, too. This works just as well for beliefs that S takes to have a very low justification; it may, in fact, completely defeat his justification for the belief. Oddly enough, so long as S doesn’t believe that his level of justification is high (either because he simply hasn’t thought about it or because he believes that it would need to be more highly justified to count as a high level of justification), he can know that his specified level of justification holds.
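Just to make the mechanics vivid, here is a toy numerical sketch of the argument. This is purely my own illustration, not anyone’s official formulation: the 0.9 “high justification” threshold and the size of the boost that principle J confers are stipulated for the example.

```python
# Toy model of principle J: coming to believe that p is highly justified
# for you increases your justification for p. The threshold (0.9) and the
# size of the boost (EPSILON) are stipulations for illustration only.

EPSILON = 0.005  # assumed size of the boost principle J confers

def apply_principle_j(justification, believed_level, high_threshold=0.9):
    """Return p's justification after S comes to believe it sits at
    believed_level; if S takes that level to be high, J boosts it."""
    if believed_level >= high_threshold:
        return min(1.0, justification + EPSILON)
    return justification

j_at_t = 0.99                      # p's justification for S at t
believed = 0.99                    # at t+1, S believes p is justified to exactly .99
j_at_t1 = apply_principle_j(j_at_t, believed)

# S's belief "p is justified to exactly .99" is now false:
print(j_at_t1 > believed)          # True: the justification has risen above .99
```

Note that the self-defeat only kicks off when the believed level clears the “high” threshold, which is why beliefs about a level S does not regard as high come out unscathed, just as in the prose above.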

It is also interesting that it doesn’t seem to affect temporally indexed beliefs: if S believes that p’s level of justification at t was .99, that belief isn’t defeated, for obvious reasons. It also doesn’t affect beliefs like “p’s justification is >.99,” for it will remain true that p’s level of justification is >.99 if S’s justification increases.

I’m not sure what to think of all this. Some I’ve talked to think this is an acceptable result, some even think it is welcome. I find it bizarre.

I’ll add one more thing:
Jeremy Fantl uses a very similar argument against reliabilism. Suppose that S believes p and that this belief was caused by a 100% reliable process. Thus, it would seem that S’s degree of justification is 100%, that is, it cannot be increased. But suppose that S became aware that p is justified at 100%. Principle J entails that S’s justification for p must increase in some degree. But then S’s degree of justification for p was not 100%, as was supposed.
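The contradiction can be put in the same toy terms as before. Again, this is only my illustration, with a stipulated boost size: on a scale capped at 1.0, a degree of 100% simply has no room for the increase that principle J demands.

```python
# Toy check of the reliabilism argument: if p's justification is already
# 1.0 (a 100% reliable process), the boost principle J demands cannot
# yield a strictly higher degree. EPSILON is a stipulation for illustration.

def boosted(justification, epsilon=0.005):
    """The degree J says S's justification must rise to, capped at 1.0."""
    return min(1.0, justification + epsilon)

j = 1.0
print(boosted(j) > j)   # False: no strictly higher degree exists,
                        # so J and "justified at 100%" cannot both hold
```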

I thought it was a clever argument, but since I’m not inclined toward maximal justification and even less inclined toward reliabilism, it doesn’t trouble me much. But I do wonder about principle J.

Fantl, Jeremy. "Modest Infinitism." Canadian Journal of Philosophy 44, no. 4 (2003): 537–562.