Friday, June 20, 2008

Unconscious Inference

I received internalism about justification in my mother's milk, so to speak. Most of my earliest philosophical teachers were internalists in this way, and I have usually thought that this is about right when it comes to justification, despite my flirtations with proper functionalism.

Recently, I have been more latitudinarian in my internalism, favoring what Conee and Feldman have termed "mentalism" in Evidentialism. Mentalism is a kind of bare internalism: it holds that what it is justified for me to believe supervenes on my mental states. Thus, a brain in a vat can have all the same justified beliefs that I have if it has all of the same mental states I do. That sounds about right to me (pace worries about externalism regarding content). The upside of this form of internalism is that my grounds for certain beliefs need not be easily accessible to me in the way that classical internalism envisions.

Philosophers often seem to think of the thinker as an intellectual, perhaps even a caricature of one: a person who comes to all of their beliefs through conscious deliberation. But if I survey my own mind (and others seem to avow the same), most of my beliefs are not formed this way. I have flashes of insight about theoretical matters when I'm occupied with something else (after waking from sleep, in the shower, while working on an unrelated topic); I seem to make inductive inferences without going through any sort of explicit process (I see the groceries on the floor and immediately believe that my wife is home); and I act in ways that show I have some sort of pro-attitude toward a proposition without its ever crossing my mind that I do (especially in skilled activity, like driving, where I evade a road hazard without even thinking about it).

The basing relation, combined with mentalism, points the way toward explaining how we have justified belief (or some other doxastic or quasi-doxastic state) in these cases. Various mental processes, ones I don't have direct control over, dispose me to form the correct attitude given my evidence without the slightest conscious thought about it. These seem to operate when I'm thinking about other things, when I'm asleep, when I'm driving, and so on. Often the only indication of these processes is when their results seem to 'pop' into my conscious mental landscape, but even then, as in the case of driving, I may only become aware of them from the way my mind has directed my behavior.

Anyway, this seems interesting and bears more thought.

Thursday, June 5, 2008

Content Externalism

(So, this is musing at this point... I'd love to be corrected by those who understand this issue better, especially if I'm making any simple mistakes due to my ignorance in this area. But blogs are good for this - exploring and figuring out when one has something seriously in error.)

I'm currently reading through Timothy Williamson's Knowledge and Its Limits[1], and one of the crucial issues in the book is Williamson's adoption of an "externalist" theory of mental content. I wish to muse a bit on such theories, since I am undecided about what the correct attitude toward externalism of this sort ought to be. I think Williamson is right at least insofar as he sees that the adoption of externalism about content will require some serious rethinking of traditional epistemology.

Externalism has typically been argued for via "Twin Earth"-style thought experiments. I'll adapt for my own use one given by Tyler Burge.[2] Al is a fairly ordinary resident of Earth. Al has a variety of beliefs about aluminum: "He thinks that aluminum is a light, flexible metal, that many sailboat masts are made out of aluminum, that someone across the street recently bought an aluminum canoe."[3] Suppose that there is a duplicate of Al living on Twin Earth, a planet in every way like our own with one important exception: what plays the functional role of aluminum on Twin Earth is a different metal. When twin-Al (call him 'Twal') runs his hand over twin-aluminum (call it 'twaluminum'), he has just the same sorts of tactile stimuli as Al has when he runs his hand over aluminum. It weighs the same in each one's hand, and it reflects light in just the same way. Twal also calls twaluminum 'aluminum'. So when Twal thinks of twaluminum, it's hard to see how his internal concept of twaluminum differs from Al's concept of aluminum. This is made more explicit on supervenience theses: if mental states, including intentional propositional attitudes, supervene on brains, then surely the mental states that Twal and Al are in are precisely the same. But if Al asserts, "This is a fine aluminum canoe" and Twal asserts, "This is a fine aluminum canoe", they surely are saying something different: Twal has never encountered aluminum, and Al has never encountered twaluminum. Then again, how could they be saying something different? If their mental acts of asserting some proposition with those words supervene on precisely the same (ex hypothesi) type of brain state, then surely they are making the same kind of assertoric act.

I think that this problem arises intuitively even if we reject a supervenience thesis about the mind. One of the central issues in these externalist thought experiments is that Al's and Twal's experiences underdetermine what their intentional states are of. Suppose I don't have any beliefs about what the microstructure of aluminum is; all I know about it are the typical descriptive characteristics that accompany aluminum. When I am introduced to it, suppose that someone points at it and says, "That's aluminum." My experience underdetermines whether what I am seeing is aluminum or twaluminum, and every subsequent encounter with aluminum will be similar, because nothing will tell the two metals apart without very close examination (let's say requiring a laboratory). So how could my mental content about aluminum differ from that of my counterpart on Twin Earth, who has just the same sorts of experiences? Someone points out twaluminum to him, saying "That's aluminum," and he goes on to have the same sorts of experiences I do, but in every case with twaluminum instead of aluminum. What grounds do I have for saying that his concept of twaluminum differs in the least from my concept of aluminum? On the classical sense-datum conception of mental content, there doesn't seem to be any reason to think that there could be any difference between Al's and Twal's concepts, because all of the sensible properties that Al and Twal are aware of are exactly similar phenomenologically.

The solution that has generally been recommended is externalism about content: what a belief is about is determined not only by the thinker's internal states, but also by the thinker's environment. Putnam seemed to think the determining factor was the local linguistic community; it might just as well be determined by causal contact with the things "out there."

What makes this somewhat disturbing for the traditional epistemologist is something like this: I find attractive the notion that epistemology is a first-person enterprise, and I think that there are some propositions about which I am the expert, probably to the point of infallibility. If I am in intense pain, there is no way I can be wrong in thinking that I am. If I am thinking about giraffes, there is some part of that thought that is simply not in doubt. But if content externalism is true, this first-person privilege seems somewhat fantastical. I cannot tell, just by inspecting my own thoughts, what I am thinking about. I may not even be able to tell what propositional attitude I am in, if any. For instance, belief requires a proposition, at least on a standard interpretation (we do sometimes talk of believing in a person, but let's set those cases aside). Suppose that propositions have their subjects as constituents, so that when I mentally token the proposition expressed by "That canoe is made of aluminum", I express a proposition that has a canoe as a constituent. Suppose that when I take myself to token that proposition and believe it, I do so in a case where I do not actually see a canoe, but am the victim of an illusion. Thus we have a case similar to a Frege puzzle. Unless propositions can have gaps (and they might), there is properly no proposition that I am believing; so though it seems to me that I am believing a proposition, I am NOT. It would seem, then, that I'm not in a believing state at all. But that seems batty in the extreme: how could I be mistaken about that kind of thing?



[1] Williamson, Timothy, Knowledge and Its Limits...
[2] Burge, Tyler, "Two Thought Experiments Reviewed", Notre Dame Journal of Formal Logic, Vol. 23, No. 3, July 1982.
[3] Burge, 284.