Acta Analytica, Volume 27, Issue 2, pp 145–161

Basic Knowledge and Easy Understanding

Kelly Becker
Department of Philosophy, MSC03 2140, 1 University of New Mexico

DOI: 10.1007/s12136-011-0139-8

Cite this article as:
Becker, K. Acta Anal (2012) 27: 145. doi:10.1007/s12136-011-0139-8

Abstract

Reliabilism is a theory that countenances basic knowledge, that is, knowledge from a reliable source, without requiring that the agent know that the source is reliable. Critics (especially Cohen 2002) have argued that such theories generate all-too-easy, intuitively implausible cases of higher-order knowledge based on inference from basic knowledge. For present purposes, the criticism might be recast as claiming that reliabilism implausibly generates cases of understanding from brute, basic knowledge. I argue that the easy knowledge (or easy understanding) criticism rests on an implicit mischaracterization of the notion of a reliable process. Properly understood, reliable processes do not permit the transition from basic knowledge to understanding based on inference.

Keywords

Basic knowledge · Easy knowledge · Epistemic closure · Higher-order knowledge · Reliabilism

Philosophers can be very hard to satisfy. The skeptical claim that we cannot attain any empirical knowledge, either because of the regress of reasons or because of reflection on global skeptical hypotheses, set the agenda for epistemological thinking for millennia. In the middle of the last century, clearly articulated versions of externalism were proposed as answers to this skeptical claim. Resistance to externalism typically came in one of two forms: (1) it is too weak to answer the real skeptical challenge, which is to show not merely that knowledge is possible, but that we actually do sometimes attain knowledge; (2) it is a conception that makes space for knowledge without reasons, that is, without requiring that the agent take some sort of rational responsibility in aiming to attain beliefs that are true.1 On the other hand, providing a principled answer to a skeptical question more than 2000 years old is no mean feat, and it’s often been pointed out that much of our knowledge, paradigmatically perceptual knowledge, is not in any obvious way reasons-based.2 It’s fair to say that externalism has at least withstood the initial wave of resistance.

After the initial wave, more specific, sometimes somewhat technical criticisms of externalism came to light. Because my focus herein is on process reliabilism, I’ll attend only to criticisms thereof. Start with the basic reliabilist thought that knowledge is true belief formed from a process that tends to produce true beliefs. Given just that much, it makes sense to say that human sense perception forms the heart of processes that do, or at least can, generate knowledge.3 For example, Jones knows that the cat is on the mat by seeing the cat. But if Jones knows that the cat is on the mat, and knows that this entails that she’s not a brain-in-a-vat (BIV) in a catless world, then, by the principle that knowledge is closed under known entailment, she ought to be able to know that she’s not a BIV. “Foul!” cry the critics. “How could Jones know that?” Along came sister theories of process reliabilism, prominent among them Nozick’s (1981) sensitivity theory, aiming to provide solid theoretical grounds for the intuitive claims, first, that we can obtain ordinary empirical knowledge but, second, that knowledge that radical (global) skeptical hypotheses are false is beyond our human capacities. “Foul!” cry the critics. “How can you deny that knowledge is closed under known entailment?” Of course, if we both accept closure and deny that we can know that radical skeptical hypotheses are false, hence deny that we can attain everyday knowledge, we’ll hear the anti-skeptics—just about everyone—cry “Foul!” Hard to satisfy indeed.

Let me put my central concern stemming from these considerations in a form relevant to the “Knowledge, Understanding, and Wisdom” Conference theme. Suppose the reliabilist position is, at least in outline, plausible. It then gives a credible account of basic knowledge, that is, knowledge that is reliably formed but not formed on the basis of reasons-providing beliefs, and such that one need not possess reasons for believing that one’s basic belief-producing processes are reliable. Of course, the reliabilist does not deny that there are better kinds of knowledge—better insofar as they exercise the “higher” capacities of creatures like us, for example, our rational ability to see how the truth of a proposition is indicated by evidence or reasons. Now, I will not attempt an analysis, or even a broad characterization, of ‘understanding’ here. But it’s obvious that basic knowledge by itself is utterly consistent with total lack of understanding, whereas having reasons to believe puts one on the path to understanding. Let’s stipulate that, when one acquires reasons to believe, in some cases reasons to believe that one’s basic process is safe from certain kinds of error, one has gained at least some understanding—of oneself, one’s knowledge status, the sources of and coherence between one’s beliefs, and one’s relation to the world.4 My central concern herein is the problem of easy knowledge as it putatively afflicts process reliabilism, which can be recast as the criticism that reliabilism illicitly generates understanding on the meager foundation of basic knowledge.

I shall argue, however, that process reliabilism has the resources, without any ad hoc maneuvers and without redefining ‘reliable’, to block the easy knowledge problem. I will explain how reliabilism does not generate higher-grade knowledge, or understanding, just from basic knowledge, and also sketch the reliabilist account of when understanding is attainable. (For reasons that arise near the end of the paper, I am distinguishing, at this early juncture, higher-grade knowledge from higher- or second-order knowledge, allowing that one could have higher-grade knowledge without knowing that one knows, and perhaps vice versa.) Some or most or, I have to face it, probably all of you will cry “Foul!” as I explain how knowledge is not closed under known entailment. But cut me a little slack. If I were to argue that knowledge is closed under known entailment and that the basic reliabilist picture is right, hence that one can know that one is not a BIV, you’d cry, “Foul! How can anyone know that?”5 And you might even throw the easy knowledge problem in my face. Or I could argue that knowledge is closed, but that the skeptic is right that we know nothing. “Foul! Burn the skeptic!” Will you never be satisfied?

1 Easy Knowledge: Closure

For purposes of this paper, I will look only at the closure version of the easy knowledge problem. I believe that my basic solution also resolves the bootstrapping version of the problem (Vogel 2000), but that’s another paper. (See Becker, MS) Stewart Cohen (2002) formulates the problem succinctly and forcefully, and so I’ll follow his lead. Suppose that S’s vision is functioning properly as she walks into a room with a red table, and S forms the true, reliably formed belief that the table is red, just by looking at it. The reliabilist will not balk at the following two claims:
(1) S knows that the table is red.

(2) S knows that if the table is red, it is not white with hidden red lights deceptively shining on it.

Now, given that competent deductive inference is the paradigm of reliability, it would seem natural that the reliabilist would accept:
(3) Therefore, S knows that the table is not white with hidden red lights deceptively shining on it.

Intuitively, however, S knows no such thing, at least not just by looking at the table and without checking her surroundings for possible sources of deception. (But mightn’t S be warranted in believing that nothing deceptive is occurring simply because she has inductive evidence that such deceptions are unusual? Not really. As Markie (2005, 410) points out, warrant for believing that deceptions are unusual does not suffice for warrant that no deception is occurring in this case. But even if S were so warranted, it wouldn’t help with the particular presentation of the easy knowledge criticism at issue here, where S allegedly gains knowledge that no deception is occurring solely by the look of the table. So just mindfully erase any such background belief about deceptions from S’s system, to make the case neat.) In short, the reliabilist appears committed to the possibility of an agent’s achieving some elevated epistemic status on the basis of mere brute, animal, non-rational knowledge. Cohen says, “The problem is that once we allow for basic knowledge, we can acquire reliability knowledge very easily—in fact, all too easily, from an intuitive perspective” (Cohen 2002, 311). And later in the same paper: “If you allow for basic knowledge, there is nothing to stop us from acquiring, by trivial inferences, all sorts of knowledge about how we are not deceived or misled by our belief sources” (ibid., 315). In order to know that the table is not deceptively lit and other known entailments, the table’s appearing red simply does not suffice. On the other hand, in cases where S does know that nothing deceptive is occurring, perhaps through a thorough check of the room, her warrant for believing that the table is red is strengthened, and S is thus on the path to higher-grade knowledge—a kind of understanding.

In case the connection to skepticism isn’t obvious, here’s another version of the easy knowledge problem:
(1) S knows that she is typing at her computer.

(2) S knows that if she is typing at her computer, then she is not just a BIV.

(3) Therefore, S knows that she is not just a BIV.

Another reason for introducing the more familiar skeptical (or, in the first-person presentation with 3-d props, Moorean) version of the closure-cum-easy-knowledge problem is that my solution will explain how, according to reliabilism, S can know that the table is not deceptively lit, just not solely on the basis of inference from basic knowledge, whereas S cannot know that she is not a BIV. Cry “foul” all you want. If I can do this much, I take myself to have succeeded in my self-appointed task.

2 Two Assumptions

My solution to the easy knowledge problem has two crucial elements, neither of which is controversial, in my view, and yet neither of which is a universally accepted commitment of process reliabilism. The first is that the notion of a reliable process needs to be characterized not only as truth-conducive in the actual world, but also as truth-conducive throughout close possible worlds. Think about Plantinga’s (1993, 199) brain lesion example or Greco’s (2000, 175) helpful demon. In both cases there is a process that produces mostly true beliefs in the actual world but is intuitively unreliable. The brain lesion causes the agent to believe he has a brain lesion, but somehow that seems like a quirk. Indeed, we have to deem it quirky if we think it provides any sort of motivation for Plantinga’s proper function account of knowledge. If such a brain lesion normally caused true beliefs that one has a brain lesion, thus not just in this one odd case, but throughout close worlds, we wouldn’t be so quick to judge that the brain lesion sufferer does not know, whereas in Plantinga’s original case the judgment comes naturally. This is evidence that truth-conduciveness in the actual world does not suffice for genuine reliability. Similarly for Greco’s helpful demon. If the demon just happens to make my gambler’s fallacy beliefs come true as I pick a number on the roulette wheel, it’s clear that I do not know which number will come up. The truth-conduciveness of my gambler’s fallacy reasoning is not at all robust; it’s not stable throughout close worlds. Actual-world truth-conduciveness just doesn’t suffice for genuine reliability, and no reliabilist is committed to saying otherwise.

Let me add a further example to bring the phenomenon closer to home—in my case, New Mexico. Suppose that Johnny Jr.’s father, John Sr., has built a high wall around his yard to keep the coyotes out. As a result, the only animals that get into the yard are roadrunners, which can jump or pseudo-fly to scale the eight-foot wall.6 Now, Johnny Jr. does not know much about animals or their capabilities, but he has heard of roadrunners, and perhaps he’s seen one before. Whenever Junior ventures out into the yard and happens upon animal tracks, he forms the belief that there has been a roadrunner in the yard. His process is not at all sophisticated, and might be informally captured by one of his typical exclamations: “Hey, footprints! Must’ve been a roadrunner by recently.” It just so happens that, because the wall keeps all other animals out, all of Junior’s roadrunner beliefs are true. But Junior doesn’t know roadrunner tracks from any other animal’s prints, and if a coyote (or any other animal) were to get in the yard—say, had John Sr. put up a shorter wall—Junior would notice the tracks and form the false belief that a roadrunner has been in the yard. So Junior does have a repeatable belief-forming process that produces beliefs about the presence of roadrunners, and all those beliefs are true, but clearly he doesn’t know. Even if Junior is right every time, 100 times, it’s fairly clear that he’s just gotten lucky, akin to having an uncanny track record of forming true beliefs through wishful thinking. Johnny Jr.’s process is not reliable, even though it’s actual-world truth-conducive.

The second crucial element of my solution is that the processes at issue in process reliabilism need to be very fine-grained. As a partial response to the generality problem for reliabilism, Goldman once said (1979, 12) that processes should be individuated content-neutrally, and years later (Goldman 1986, 50 f.) said that the relevant process type is the narrowest (most specific) process type that is causally operative in belief production. Content-neutrality is Goldman’s way of capturing some generality, whereas narrowness preserves specificity. Content-neutrality is required because the theory type under discussion is process reliabilism, as opposed, say, to sensitivity or safety. Whether the sensitivity or safety principles are satisfied hinges on features of a particular belief and how it was formed rather than on the truth-conduciveness of a general process. But specificity is also required because different agents have very different abilities. It’s not implausible to think, for example, that some particular person, say John Sr., could have a belief-forming process that reliably determines whether a certain object in the foreground is a roadrunner, without being able to identify any other kind of bird. And maybe John can do so only in very good lighting at very close range when on certain medication; perhaps his vision is not generally reliable. If we characterize this process as forming beliefs based on vision, we lose the fineness of grain requisite to capture John’s reliable roadrunner detecting abilities, and this is true even if we individuate John’s process more narrowly than what is usually encountered in the literature, for example as forming beliefs about birds, based on vision, when properly medicated. After all, by hypothesis John knows a roadrunner when he sees one, but he doesn’t know anything about other kinds of birds (which is not to say that he would mistake another bird for a roadrunner—that’s Johnny Jr.). My suggestion, following Goldman, is that we ought to account for every feature of a belief-forming process that is causally operative in producing belief when we individuate processes. John’s process might be: forming beliefs about the presence of roadrunners, based on vision, when properly medicated (but this is still probably under-described).

A couple of notes to round off discussion of processes. First, often a process is deemed reliable only in “normal” or “good” environmental conditions. The question is whether some such clause should be built into the characterization of the process or added as a proviso. Because it’s not part of the belief-producing mechanism, specification of the external conditions in which a process is reliable should probably be construed as a proviso, albeit one that is notoriously difficult to explicate. This open question won’t affect my solution to the easy knowledge (or easy understanding) problem. All I need is the ‘narrowest content-neutral’ criterion for process individuation. Second, what about content-neutrality? I’ve got reference to roadrunners built right into John’s belief-forming process. That doesn’t sound content-neutral. Sure, but the point is that the process type is specific while remaining neutral with respect to the content of any particular token belief. The concern is to avoid what Feldman calls the ‘single-case problem’:

If relevant types are characterized very narrowly then the relevant type for some or all process tokens will have only one instance (namely, that token itself). If that token leads to a true belief, then its relevant type is completely reliable, and according to [the proposed individuation criterion], the belief it produces is justified… This is plainly unacceptable, and in the extreme case, where every relevant type has only one instance, [this proposal] has the absurd consequence that all true beliefs are justified and all false beliefs are unjustified. (Feldman 1985, 160-61)

Characterizing John’s process the way I have avoids this problem.7 John’s process is repeatable, even though it includes the concept roadrunner. And again, this is critical, because John is a great roadrunner detector, but perhaps knows very little else. (No wonder Johnny Jr. is clueless!)

In summary, in my solution to the easy knowledge problem, I shall assume that reliable processes are truth-conducive at least throughout close possible worlds, and that processes evaluated for their reliability are individuated by both of Goldman’s proposed criteria: the narrowest, specific content-neutral type that is causally operative in producing belief.8
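
To fix ideas, the two assumptions might be put in a single schematic gloss (the notation is mine and purely illustrative, and the threshold is deliberately left vague, per note 7):

\[ \mathrm{Rel}(\pi) \quad \text{iff} \quad \text{for every } w \in C_{\pi}: \; \frac{\lvert \{\, b \in B_{\pi}(w) : b \text{ is true at } w \,\} \rvert}{\lvert B_{\pi}(w) \rvert} \;\geq\; \theta \]

Here \(\pi\) is the narrowest, specific content-neutral process type causally operative in producing the belief, \(C_{\pi}\) is the relevant band of close worlds, \(B_{\pi}(w)\) is the set of beliefs \(\pi\) produces at \(w\), and \(\theta\) is some suitably high threshold. Nothing below hangs on the exact form of this gloss; it merely records that reliability is assessed over a band of worlds, not over the actual world alone.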

3 Reliabilist Solution

As noted, any explanation of how S can know that the table is red, in some cases, without being in position to know that it is not white and deceptively lit, or of how S can know that she’s typing at her computer without ever being in position to know that she’s not a BIV, is eo ipso an explanation of closure failure. To understand why closure is false, we need to look at the relevant belief-forming processes involved. Because God only knows all the factors causally efficacious in belief production, when describing processes, I will give only as much detail as my purposes require.

For starters, then, we might characterize S’s basic process in coming to believe that the table is red as forming beliefs about middle-sized, clearly visible objects (tables?) and their colors based on visual appearances. We stipulate that this process is reliable—truth-conducive throughout close possible worlds—and that S’s belief is true, hence that S knows that the table is red. Now, suppose that S infers, from the appearance-based belief that the table is red, that nothing, or no hidden light anyway, is deceiving her into thinking it is. She’s right, because in the actual world and close worlds, appearances are reliable guides to truth, but does she know that? Is that inference reliable?

I will use the well-known phenomenon of epistemic luck, together with reliabilist means of precluding a particular strain of luck, to argue that S’s inference is not reliable. I mentioned before that an excellent track record does not suffice for reliability, and hinted at a diagnosis in terms of epistemic luck. Johnny Jr. always forms true beliefs, and has already done so 100 times, about there having been a roadrunner in the yard, but he doesn’t know one kind of animal footprint from another. Clearly Johnny Jr. does not know, despite the facts that his process is actually right 100% of the time, and that the process will continue to produce true beliefs if no salient features of his actual environment (particularly, his yard) change. The diagnosis in terms of epistemic luck is pretty straightforward: Were other animals to enter the yard, his process would yield many false beliefs. This fact indicates that there’s simply a fortuitous coincidence between the animals responsible for the visible tracks and Jr.’s beliefs about them.

Let’s return to S’s belief that the table is not white but deceptively lit, based on inference from her belief that it is red. Here, too, it is natural to think that S does not really know that the table is not deceptively lit because the process through which she forms this belief does not strike us as actually reliable. On the other hand, the reliabilist is hard-pressed to say what’s unreliable about S’s process—deduction from known premises sounds foolproof. But the process, for reasons above, should be more narrowly individuated, perhaps along these lines: forming beliefs about whether something deceptive is taking place via inference from appearance-based basic belief. Is that process truth-conducive throughout close worlds? Yes and no. Yes, it’s truth-conducive throughout the close worlds relevant to assessing the reliability of S’s basic process, but it’s not truth-conducive throughout the close worlds relevant to assessing the reliability of this process, an inferential process that issues in understanding.

Why should the “band” of worlds relevant to assessing the reliability of higher-grade processes be wider than those relevant to assessing basic processes? We can approach this question from a couple of angles. First, all parties to this debate accept, at least for sake of argument, the reliabilist story about basic knowledge. The alleged problem is that reliabilism makes higher-grade knowledge too easily attainable because, while the relevant beliefs are somehow not truly reliably formed, there seems to be no clear explanation of that fact available to the reliabilist. The critique rightly implies that higher-grade knowledge is more difficult to achieve. A relevant alternatives approach might put it this way: The relevant alternatives that one must (be able to) rule out in order to have basic knowledge are fewer than those one must rule out to attain higher-grade knowledge. To know that the table is red, one must be able to distinguish red from other distinct colors. But to know that nothing deceptive is taking place, one must also be able to rule out various sources of deception, such as colored lighting. Second, the reliabilist translates this insight into her own terms. Luckily inferring that there are no such deceptions is not reliable. The basic process of forming beliefs about middle-sized, clearly visible objects and their colors based on visual appearance is reliable only if it produces mostly true beliefs throughout close worlds. S’s process does this, by hypothesis. In close worlds where the object S sees is not a table—perhaps it’s a chair or a desk—she forms the relevant true belief, not the false belief that it is a table. In the close worlds where the object S sees is not red, it’s some other color that S can identify. It’s a somewhat more distant world where S’s beliefs about the table’s color are at all frequently caused by deceptive lighting. This is not an unwarranted stipulation, for were this not true—were it the case that lighting is typically deceptive in close worlds—S’s basic process would obviously be unreliable. She might acquire true beliefs, but they would involve knowledge-precluding luck.

The range of worlds through which S’s higher-grade belief-forming process must be truth-conducive to be genuinely reliable expands because the possible worlds which intuitively render S’s higher-grade beliefs merely luckily true are more distant than those that would render her basic beliefs merely luckily true. Again, by hypothesis, S’s belief that the table is not white and deceptively lit to look red is true but not a case of knowledge—it’s true but lucky. The belief is true because all valid inferences from true premises yield true conclusions, but not necessarily known conclusions. And the claim that S’s higher-grade belief is luckily true is well grounded, for it’s a quite plausible diagnosis of the intuitive lack of higher-grade knowledge in the original easy knowledge problem. Well, if it’s lucky, it’s plausibly because there are worlds that are relevant to assessing the reliability of S’s higher-grade process in which S forms many false beliefs. And yet these cannot be the same worlds relevant to assessing S’s basic process, because by hypothesis it is reliable. Whether the process at issue in higher-grade belief production is reliable, then, depends on whether it would produce mostly true beliefs in the closest worlds where something deceptive, for example, deceptive lighting, is responsible for S’s basic belief. Because S bases her higher-grade belief solely on inference from basic belief generated by mere appearance, her beliefs would often be false in those worlds. Hence her higher-grade belief is unreliably formed. The reliabilist, I submit, has well-motivated, cogent grounds for denying that one can achieve understanding, or higher-grade knowledge, just by inference from basic knowledge.

My solution to the easy knowledge problem, which hinges on non-closure and a particular conception of reliability, might enjoy further support if it can be seen to instantiate a more general pattern. If it does, this is at least some evidence that my conception of reliability is on the right track. To that end, we can ask: Are there other skills where successful Φ-ing (analogous to believing truly (successfully) that the table is red) entails successful Ψ-ing (believing truly that the table is not deceptively lit to look red), but where successful, reliable Φ-ing does not entail reliable Ψ-ing? This is, after all, the heart of the matter. Suppose that I can reliably follow a well-worn path back home, given that I am somewhere on the path. Successfully following the path entails successfully finding my way home. Am I reliable at finding my way home? Not if the only way I manage it is to use the path. And because I’m usually nowhere near the path, I’m not reliable at finding my way home. Hence:
(1) I can reliably follow the path home.
(Analogously, S knows that p.)

(2) Reliably, if I successfully follow the path home, I succeed in getting home.
(S knows that, if p then q.)

(3) But I am not reliable in getting home.
(S does not know that q.)

Let me highlight another salient feature of the analogy. The space of worlds relevant to determining my reliability in following the path is narrower than the space of worlds relevant to determining my reliability in getting home. Reliably following the path requires more than mere actual, even frequent success in following the path. Suppose that I have successfully navigated the path many times, but in each case, I’ve been aided by the voice of my mother calling from a distance, and there have been no obstacles. Suppose that, without being able to hear my mother, I wouldn’t find my way home. Or if there were a fallen tree in the path, I’d turn around. It’s plausible to say, in such cases, that I’m not really reliable, even when successful, in following the path. I’ve just been somewhat lucky. On the other hand, if I can find my way home in these non-actual close possible worlds, I am (at least more) reliable in following the path. But that doesn’t mean I am reliable in finding my way home. To determine that, we should consider close worlds where I’m not even on the path, because if the only way one can find one’s way home is to be on one particular path, then even success at getting home is somewhat lucky.9 How shall I put this general point? “Reliability isn’t closed under reliable entailment of success”?
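
Put schematically (again, only a gloss): where \(C_{\Phi}\) and \(C_{\Psi}\) are the bands of worlds relevant to assessing my \(\Phi\)-ing and my \(\Psi\)-ing, respectively, and \(C_{\Phi}\) is properly contained in \(C_{\Psi}\),

\[ \mathrm{Rel}(\Phi) \;\wedge\; \big( \text{success at } \Phi \Rightarrow \text{success at } \Psi \big) \;\not\Rightarrow\; \mathrm{Rel}(\Psi) \]

Reliable \(\Phi\)-ing is settled within the narrower band \(C_{\Phi}\); reliable \(\Psi\)-ing requires success throughout the wider band \(C_{\Psi}\), and that is just what \(\Phi\)-based success does not guarantee.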

4 Radical Skeptical Hypotheses

That completes my reliabilist response to the easy understanding problem. I want to extend the analysis to explain how one can achieve higher-grade knowledge without knowing that radical skeptical hypotheses are false. Suppose, contrary to the easy knowledge problem case, that S is the sort of person who carefully looks around for possible sources of deception and finds none. Here it is plausible to say that she can know that there is no deceptive lighting, enhancing her knowledge that the table is red. In the closest worlds where appearances are misleading, S finds out about it, and so knows, for instance, that the table is not white but deceptively illuminated by red light.

But even this much reflection will not yield for S knowledge that she is not a BIV. We need, then, some way to differentiate the process by which S forms the belief that she is not a BIV from the process by which she forms the belief that the table is not deceptively lit. Here’s a suggestion that abstracts from the details: forming beliefs about whether some local deception is occurring based on investigating such possibilities versus forming beliefs about whether some global deception is occurring based on x, where x can be just about anything you like. If x is inference from basic knowledge, we know that’s not reliable, for reasons already given. If x is a thorough empirical investigation, we sense that’s not reliable either. But why not? It would seem to have something to do with the fact that there’s no such thing as a belief-forming process so reliable that it can rule out global deception, but capturing that insight is a bit tricky. Safety theorists think we can know that we’re not BIVs because in all the close worlds where we believe it, it’s true. If safety is an appropriate characterization of reliability, as Williamson (2000) suggests, then S’s inference from basic knowledge to the conclusion that she’s not a BIV constitutes knowledge. That approach, however, invites the easy knowledge problem or, if not that exactly, it provokes the charge of permitting knowledge on the cheap. If it’s correct to say that no human being knows that he or she is not a BIV, and I believe it is correct, then the process reliabilist diagnosis is that worlds relevant to assessing the reliability of whatever processes produce such beliefs are truly distant. What such knowledge would require is truth-conduciveness throughout almost all possible worlds. This might explain why global skeptical hypotheses have produced such angst in epistemology. In addition, this suggestion conforms to the pattern of reliabilist explanation of lack of higher-grade knowledge in the original easy knowledge problem. S can know that the table is red because her vision is in fact reliable, but does not know that it is not white but deceptively lit because that belief is too lucky—she hasn’t checked for deceptions, and in the closest worlds where they occur, her process produces false beliefs. She can know, if she investigates sources of deception, that the table is not deceptively lit, but this won’t suffice for her to know that no global deception is taking place. In the closest, albeit distant, worlds where there is global deception, her process produces all false beliefs.

5 Two Objections Considered

(1) “But why do you insist on assessing reliability about such beliefs by reference to such far out worlds?” Here’s an example similar to the “path home” and “fastball” (note 9) cases that I hope makes the motivation for these modalized ideas about reliability even clearer. A gas gauge reliably tells how much gas is in the tank. If the gauge’s indication that the tank is half full is correct, this entails that there is no mechanism rigged to the gauge that fools it into giving incorrect readings—correctness entails not-incorrectness. (Allusions to Vogel’s (2000) Roxanne case intended.) Now, is the gauge a reliable indicator of whether some such mechanism is rigged to it? Of course not. It doesn’t work that way and isn’t supposed to, and we don’t work that way either. Still, that doesn’t undermine the actual reliability of the gauge—the fact that it produces true readings across close worlds—just as our unreliability about whether some global deception is the source of all our beliefs does not impugn the reliability of our basic belief-forming mechanisms. If this is right, then capturing the nature of the gauge’s unreliability about whether there is a mechanism rigged up to make it give false readings requires consideration of its performance in worlds more distant than those relevant to assessing the reliability of its actual readings.

(2) “But now it seems you’ve turned process reliabilism into Nozickean sensitivity, which says, ‘S knows that p only if, were p false, S would not believe that p’. You’re saying that a process reliably produces the belief that p only if, in the closest worlds where p is false, it produces mostly true beliefs. Well, everybody already knows that sensitivity implies non-closure, which is why sensitivity has fallen out of favor! If that’s all you’re offering, no thanks.”

The objection notes two things correctly, one unwittingly. First, process reliabilism, properly construed (i.e., by my lights!), is a cousin of sensitivity. They both aim to capture the idea that empirical knowledge requires a capacity to discriminate. It is arguable whether early versions of process reliabilism adequately lived up to this idea, but not whether it was a goal. Early on, Alvin Goldman described reliability thus: “To be reliable, a cognitive mechanism must enable a person to discriminate or differentiate between incompatible states of affairs. It must operate in such a way that incompatible states of the world would generate different cognitive responses” (Goldman 1976, 771). Following Goldman, Colin McGinn wrote:

[T]he underlying [basic] notion [of knowledge] is that of what might be called distinguishing knowledge, i.e. knowing one thing…from another. The result is a unified theory of knowledge in which the notion of discrimination is central and basic. This kind of theory is by no means novel—it is, indeed, a variant of what have come to be called reliability theories of knowledge. (McGinn 1984, 530; emphasis in original)

Second, the objection itself notes the crucial difference between sensitivity and reliability. Sensitivity is a property of particular doxastic attitudes toward particular propositions, whereas reliability is a property of belief-forming processes (though of course a reliable process’s belief outputs have the property of being reliably formed). So long as we keep that straight, there will be no conflation of the two. And while my solution to the easy knowledge problem sometimes involves individuating processes partly by the content of beliefs produced by them (to capture the specificity of particular agents’ cognitive skills), it never individuates processes by the specific content of a particular belief (to capture the nature of a general belief-forming process and to avoid the single-case problem). Finally, to ensure that the notions are distinct, we ought to seek cases where a belief is sensitive but unreliably formed, and reliably formed but insensitive. Fake Barns is a case of reliability without sensitivity. Henry’s true belief that he sees a barn is reliably formed but insensitive.10 If it were false, he would believe he sees a barn anyway. But his process of forming beliefs, even about the presence of barns, is reliable because it produces mostly true beliefs in the actual world, just not at this particular hillside,11 and throughout close worlds.

Are there sensitive beliefs that are unreliably formed? Suppose Jones knows a lot and I believe everything he tells me. My process is something like believing whatever Jones tells me, which, notice, is far narrower than just believing via testimony, and so in the right ballpark. Suppose Jones always lies to me, except one time when, for whatever reason, he wants me to have a true belief, say, that the Twins beat the Yankees yesterday. If the Twins hadn’t beaten the Yankees, he would have told me that instead, and of course I would have believed him. My belief is sensitive but unreliably formed.
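
With both directions of dissociation in hand, the contrast can be displayed schematically (writing \(B_S\) for S’s believing and \(\square\!\!\rightarrow\) for the subjunctive conditional; this is my shorthand, not an official formulation of either view):

\[ \text{Sensitivity (a condition on a particular belief):} \quad K_S\, p \;\Rightarrow\; (\neg p \;\square\!\!\rightarrow\; \neg B_S\, p) \]

Reliability, by contrast, is a condition on a process type: the type must produce mostly true beliefs throughout the relevant band of close worlds, whatever the specific content of any particular belief it outputs. Fake Barns satisfies the process condition but not sensitivity; the Jones case satisfies sensitivity but not the process condition.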

Once various details are in place, the solution to the easy knowledge-cum-understanding problem is straightforward. Yes, it implies the falsity of closure, but I’m taking that as an explanandum in order to answer the easy knowledge criticism, which essentially says that you cannot get higher-grade knowledge just by inference from basic knowledge. Well, I agree.

6 But You Still Won’t Be Satisfied, Will You?

“Wait!” you say. “Now that I see the solution and was almost naïve enough to accept it, I can’t. How can closure be false? I’d rather accept skepticism than that.” I doubt I can convince you, but there are a couple points that might help a little and should not be neglected. First, notice that the closure principle is not formally valid. It’s not the same as modus ponens because the propositions are prefixed with the operator ‘knows that’. I know you know that, but it’s a point that is often overlooked. Closure holds that if S knows that p and knows that p entails q, then S is in a position to know that q.12 Second, because knowledge is factive—it implies truth—valid inference from known premises always yields truth, but not necessarily knowledge. Of course, most cases of valid deductive inference produce knowledge of the conclusions because the combined empirical-warrant-plus-inference process involved in coming to believe the conclusion requires no greater reliability—no wider band of worlds through which the process must be truth-conducive—than the process at issue in establishing the original empirical warrant. It’s quite clear that in the examples at issue here, warrant for the premises is insufficient for warrant for the conclusions. I’ve aimed to give a reliabilist explanation of these facts.
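
The contrast with modus ponens can be made explicit (writing \(K_S\) for ‘S knows that’; a schematic rendering). Modus ponens is formally valid:

\[ p, \; p \rightarrow q \;\vdash\; q \]

Closure, by contrast, is a substantive epistemic principle, not a theorem of logic:

\[ K_S\, p \;\wedge\; K_S (p \rightarrow q) \;\Rightarrow\; S \text{ is in a position to know that } q \]

Factivity guarantees that the inferred conclusion is true; it does not guarantee that the combined process meets the wider reliability demands on knowing that conclusion.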

On the other hand, a natural thought occurs to us that, surely, knowing that the closure premises are true—knowing that one knows that p, and knowing that one knows that p entails q—is sufficient to put one in a position to know that q. This may seem to present a problem for me, because if the process through which S comes to believe that the table is not white and deceptively lit is reliable, then one may be tempted to say that the process produces higher-order knowledge, that is, knowledge that one knows that the table is red, which is precisely knowledge of a key premise in closure. And if that’s right, then because S knows (1) that she knows that there is a red table in front of her, and S knows (2) that she knows that if there is a red table in front of her, then she’s not a BIV deceived into thinking there is, S ought to be able to know, through deductive inference from these known premises, that she’s not a BIV.

While it’s not clear that higher-grade knowledge is equivalent to higher-order knowledge, in the case where S rules out sources of local deception, it is not implausible to say that S also achieves higher-order knowledge. One could say that she has reliably formed a true belief that her first-order belief is reliably formed. However, what the analysis above shows is that the natural thought, namely, that second-order knowledge—knowledge of the closure premises—secures (the possibility of) knowledge of the conclusions, is incorrect. (That’s just the upshot and is not meant to be an argument.) In particular, S can know that she knows that the table is red (or that there is a red table before her), know that she knows this entails that she’s not a BIV, and still not be in position to know that she’s not a BIV. One way to gloss this point would be to say that knowing that one is not a BIV requires third or fourth or, for all one can tell, infinite-order knowledge, because one’s belief-forming process would have to be truth-conducive throughout all possible worlds, which effectively requires that one can rule out every possible, non-actual world inconsistent with what one believes. But rather than take a stand on ever-ascending orders of knowledge and offer some criteria by which one distinguishes myriad orders, which would get very messy, very fast, within a modalized process reliabilist framework, I’ll simply let the analysis stand on its own. The process through which one can form a reliable second-order belief that her first-order process is reliable does not suffice for knowledge of every known entailment. The problems that afflict closure for first-order knowledge also afflict, in some cases, higher-order knowledge.13 So long as the reliabilist can offer a consistent, non-ad hoc explanation for why closure fails, even where knowledge of one’s premises involves second-order knowledge, the result may be somewhat counterintuitive, but it is at least motivated and coherent. It won’t be much more counterintuitive than closure failure for first-order knowledge, and once you give that up, why not go further? But I grant that, if you thought the idea of closure failure was outlandish before reading this paper, and yet were willing to consider it, you might find higher-order closure failure to be the clincher against my view. Maybe you think it’s just too big a pill to swallow.
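
Schematically, the upshot is that even the second-order analogue of closure fails on the view defended here:

\[ K_S K_S\, p \;\wedge\; K_S K_S (p \rightarrow q) \;\not\Rightarrow\; S \text{ is in a position to know that } q \]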

Perhaps something sweet to help the pill go down? Let me paint a broader picture, inspired by Tyler Burge’s (2010) book, Origins of Objectivity. Burge offers an empirically informed account of objective perception, together with a masterful rejection of philosophically and empirically problematic approaches to perception and to the sources of basic empirical warrant. Burge makes the following claims: (1) “[I]ndividuals’ discriminatory abilities operate in a restricted context of environmental alternatives… It is enough that the individual have perceptual capacities that discriminate environmental attributes within ranges that have figured causally in the formation of the states and that are relevant to biological needs and activities… [T]he perceiver need not be able to distinguish bodies from philosophically contrived stand-ins.” (2) “The perceiver’s objectifying discriminatory abilities determine the nature and content of his perceptual abilities only within this larger environmental and ethological framework” (407). Surely what we can know depends on the abilities we have, and the abilities we have are not “designed”, by evolutionary means “fitted” to biological needs, to discriminate the actual world from BIV worlds. Burge claims that skeptical hypotheses such as the BIV possibility are not relevant alternatives to the propositions believed through basic perception, but by the same token, we simply don’t know, in any given situation, that certain sorts of possible deception are not actual. Note that the notion of a relevant alternative here is not ad hoc, and it’s not based on what we take to be relevant. Relevant alternatives are determined by the nature of our environment, and are ones that our cognitive processes are designed to be able to distinguish from whatever we in fact perceive at a given time. Given the nature of the environments in which our belief-forming processes are designed to produce true belief, one can achieve basic knowledge, for example, knowledge that the table is red, and second-order knowledge, for example, knowledge that one knows that the table is red, without knowing that no global deception is taking place.

Of course, Burge is only one among many to distinguish relevant from irrelevant alternatives in some such way. In a more positive vein, Burge offers an empirically informed, plausible account of objective perception, which in turn is the source of empirical warrant or at least entitlement for basic belief. His account looks at actual human cognitive processes, which yield perception of distal objects rather than of mere subjective proximal stimulations, and he argues that such perception is neither a construction from subjective elements of consciousness, as in phenomenalism or traditional foundationalism, nor the result of top-down processing based on the ability to represent constitutive conditions for objective representation (for instance, as on Quine’s and Davidson’s views). Rather, the cognitive processes that yield perceptions on which knowledge of objects is based are subpersonal and more or less automatic. They automatically and non-inferentially yield perceptual constancies regardless of the vagaries of subjective stimulation—they “look past” the merely subjective. This is a crucial basis for Burge’s view that in the first instance, human (and much animal) perception is objective.14

The takeaway, for present purposes, is that impoverished philosophical theorizing about perception of objects has led to bad epistemology. Treating subjective content as epistemically basic threatens knowledge altogether. Treating “basic” object perception and “basic” belief as necessarily involving higher cognitive processes, such that the content of perception is partly determined, for instance, by logico-linguistic criteria, and often such that the resultant belief is somehow justified by its place in one’s web of belief, adds a problematic grade of mediation (beyond non-inferential representation) to the mind-world nexus, leaving no room for bona fide basic knowledge.15 A more plausible story says that there is a basic level of non-inferential, objective perceptual representation that is not grounded in subjective content. That story situates human perception of objects—the basis of all human knowledge—in an environment suited to the purposes of the human organism; human perception is responsive to objects in those environments. This more plausible story also implies that we can be fooled, that if the environment were not normal, for example, if somehow subjects were exposed to the exact same proximal stimulation through some bizarre causes, we might never be able to tell. (Everyone agrees with this, I should think.) Surely our best science ought to at least be taken into account when doing epistemology. Our best science gives us glimpses into the nature of human epistemic capacities. In my view, the science suggests (i) that we can achieve basic knowledge that is objective, for example, that the table is red (or that there is a rectangular red object with such and so features), (ii) that we can enhance that knowledge in various ways by digging deeper and searching farther and eliminating sources of error such that, for example, we can know that the table is not deceptively lit, but (iii) that we cannot know that we are not globally deceived. I am absolutely certain that no science is committed to these claims since, after all, even given all the scientific facts, what to call and how to characterize ‘knowledge’ is and will be a matter for reflection, interpretation, and debate. But when we situate human knowledge where it belongs—in the actual world, especially on earth (because it wouldn’t be surprising, but neither would it be relevant, to find out that our visual processes are not truth-conducive on a planet in a distant galaxy)—the idea that our sources of basic knowledge do not position us to know propositions entailed by instances of basic knowledge seems right, and, in fact, is the heart of and solution to the easy knowledge problem.

Perhaps you may still not want what I’m offering. But in making my case on behalf of reliabilism, I haven’t claimed that S actually knows, just by inference from basic knowledge, that the table is not white and illuminated by red light, as Markie (2005) does. And, while I have distinguished basic, “animal” knowledge from reflective knowledge and understanding, I haven’t attributed the easy knowledge problem to the idea that “animal knowledge does not obey closure,” as Cohen (2002, 327) does. After all, on my resolution of the problem, even non-basic knowledge does not obey closure.16 Instead, I’ve argued on independent grounds that genuinely reliable processes are modally truth-conducive processes, and that processes ought to be finely individuated. The upshot of these reflections is that the easy knowledge problem rests on an implicit mischaracterization of the nature of reliable processes and that, once this mistake is corrected, reliabilism does not generate easy knowledge.

Footnotes
1

For versions of the first complaint, see Stroud (1984) and Fumerton (1995). Bonjour (1978) expresses the latter concern.

 
2

Sosa’s (2007) distinction between animal knowledge and reflective knowledge manages to legitimize and yet somewhat denigrate reasons-less knowledge all at once.

 
3

Tyler Burge’s (2010) book, Origins of Objectivity, makes, in my view, an utterly convincing case for basic perceptual warrant, once and for all quashing the commonly held ideas that objective perception is constituted by some mental, perhaps inferential, construction upon sense data, that objective perception is achieved only through some top-down cognitive processing involving internalized, theory-laden criteria for objective representation, including individuation criteria, and that only propositional reasons can provide empirical warrant. The book will become the standard by which all other philosophical accounts of perception are judged. I say more about it below.

 
4

In his book, The Value of Knowledge and the Pursuit of Understanding, Jonathan Kvanvig (2003) characterizes understanding as grasping how one’s beliefs cohere, which seems consistent with the sort of understanding briefly indicated herein.

 
5

Even if you wouldn’t, I would! Safety theory, contextualism, subject sensitive invariantism, and contrastivism all aim to uphold the closure principle while achieving anti-skeptical results. Contrastivism, in my view, at least rightly notices the problem of ‘cheap knowledge’ (Schaffer 2007, 238), for instance, that one is not a BIV, and aims to avoid that consequence. The other theories wear the possibility of knowledge that radical skeptical hypotheses are false as a badge of honor. Let me be the one to cry “Foul! How could we know that?”

 
6

When I first drafted this paper, I thought up this example just because it is colorful. In the meantime, our department administrator tells me that, since building a high wall to keep her cats, Chuck and Lucy, in the yard and safe from predators, only roadrunners have gotten in!

 
7

One might worry that the first element of my solution to the easy knowledge problem—that processes must be truth-conducive throughout close worlds—raises another version of the generality problem: We cannot assess a process for reliability until we know the range of close worlds through which it must be truth-conducive (Comesaña 2006, 30). A few notes on this. First, I am not attempting a full-blown solution to every formulation of the generality problem. Second, and relatedly, my point above was only that actual-world truth-conduciveness is not sufficient for reliability. I never said what does constitute sufficiency, though I will say more below to distinguish cases where even truth-conduciveness throughout very close worlds is not sufficient from cases where it is. Third, I don’t think there is, or needs to be, a clear answer here. Reliability comes in degrees, and we shouldn’t seek an exact characterization. Hence the more robust the truth-conduciveness—the larger the range of worlds through which it produces mostly true beliefs—the more reliable the process.

 
8

There is, of course, much more to be said about process individuation, but for now I want just to forestall one potential misinterpretation. The actual natures of objects and properties that figure causally in the formation of a token belief ought not be treated as part of the content of the process. (This in no way contradicts the externalist thesis that causal contact with certain kinds of objects or properties is necessary for thinking thought-types with specific contents.) For example, when Henry forms the true belief, in fake barn country, that he sees a barn, we ought not characterize the process as forming beliefs about barns that are caused by barns (even though thinking barn thoughts may require having had some such causal contact with barns). Suppose (contra the original fake barns case) that Henry would believe of almost anything he sees in the countryside that it’s a barn. The process through which he forms beliefs about barns is obviously unreliable, but if a process is to be individuated by reference to the actual nature of the object that causes a token belief, it would imply several different processes for Henry, depending on what he’s looking at: forming barn beliefs caused by barns; forming barn beliefs caused by fake barns; forming barn beliefs caused by large animals; forming barn beliefs caused by minivans, etc. On this kind of individuation, the first of these is reliable (while the rest are pretty worthless), but that is surely the wrong result.

 
9

Another example, one that’s utterly non-epistemic. I reliably hit a thigh-high, inner-half fastball coming at less than 85 mph. Reliably, if I hit a thigh-high, inner-half fastball coming at less than 85 mph, I don’t strike out. Does that mean that I reliably don’t strike out? Not if most pitches are not thigh-high, inner-half mediocre fastballs. The point of these analogies is to make clear that I’m not playing fast-and-loose with the notion of reliability.

 
10

Notice that I didn’t say Henry believes that that’s a barn. Indexicals create trouble for sensitivity, taken in the usual way, as a property of particular beliefs: If it were false that that is a barn, Henry would not believe that that is a barn, because that wouldn’t be a fake if it were not a barn. It wouldn’t be anything. The thought underlying the sensitivity diagnosis is that Henry would mistake a fake for the real thing—if whatever barn-looking thing he sees [not necessarily that] weren’t a barn, he’d believe it [whatever barn-looking thing he sees/would then be seeing] is a barn. I’m not sure how big a problem this is, but this isn’t the place for further discussion.

 
11

Is it right to say that Henry’s process is reliable because, even though in this environment it often produces false beliefs, it produces mostly true beliefs in the actual world (and nearby worlds)? Before I said that Johnny Jr.’s process is unreliable, even though it in fact produces mostly true beliefs. So in the case of Henry, I am distinguishing ‘local environment’ from ‘actual world’—the former contained in the latter—but I elide the distinction when talking about Johnny Jr., insofar as I appear to be saying that Jr.’s process produces actual world true beliefs because it does so in his local environment. This could make big trouble for me, for if being consistent on this distinction required me to say that Johnny Jr.’s process is unreliable because, even though it produces true beliefs in his local environment, it produces many false beliefs in the actual world, it would undermine the motivation proffered for modalizing reliability, that is, for the appeal to non-actual possible worlds to explain Jr.’s unreliability.

Happily, I’ve made no such mistake. The crucial difference is that Johnny Jr., by stipulation, never gets out of his yard. So in actuality his process always produces true beliefs. I stipulate, similarly, that Henry has been around and will continue to get around (even if no such stipulation was intended originally). His presence-of-barns identifying process doesn’t work in this particular place on the countryside, but it actually works. If that doesn’t convince, then one more stipulation. Junior would only use his “Tracks! Must’ve been a roadrunner nearby lately” process in his own yard, perhaps because he thinks only the sand in his yard is suitable for the required “discriminations”. This is surely an unreliable process, even though impeccable in the actual world, which for all intents and purposes extends no further than Jr.’s backyard. In sum, that he’s near this hillside has nothing to do with how Henry forms barn beliefs, whereas being in this yard is a crucial causal antecedent to Jr.’s beliefs.

 
12

And S would know that q if she correctly inferred it from what she knows. See Dretske (1970) for further considerations in favor of the idea that epistemic operators such as ‘knows that’ do not always penetrate to conclusions.

 
13

My thanks to participants at the Bled Conference, 2011, especially Chris Kelp, Jack Lyons, and Adam Morton, for suggesting that I not take the route of simply denying that one can achieve second-order knowledge, even when one has higher-grade knowledge, which was another option. I could then have said S does not know that she knows—does not know the closure premise—while maintaining the intuitive thought that if one knows that one knows that p and that p entails q, one is in position to know q. But I don’t want to reject the possibility of second-order knowledge. After all, in many cases second-order knowledge does not seem all that difficult to achieve, perhaps easier than the higher-grade knowledge, or understanding, exemplified in the cases discussed herein.

 
14

Interestingly, as one reads Burge’s discussions of perceptual constancies (see 343 ff.), in particular lightness constancy (351 ff.) and color constancy (410), one begins to wonder whether Cohen’s original example can even get off the ground. To make a long story short, it may very well be that our automatic “calculations” of distal causes reliably determine, just by looking, whether the redness of the table is due to its color or to ambient lighting. Obviously, I’ve set aside this possibility in order to take the easy knowledge problem head-on.

 
15

Of course, these matters are obviously complicated, and seem to cry out for a criterion of basicality. That’s a project for science—which types of objects and properties do our processes reliably represent mediated (non-epistemically) only through proximal causes (e.g. 2-d retinal stimulation), and which ones are theoretically embedded? Likely candidates for the former are body, shape, size, color, distance, contour, boundary, etc. Provided we have conceptual resources to think and describe these properties, which of course we do, beliefs based thereon can be counted as basic. Perhaps beliefs involving concepts like chair, car, house, etc., are not then properly basic but require explicit background knowledge and are mediated by background belief. This wouldn’t impugn the main line of thought herein. Our processes for producing basic belief pick up on objective, distal features of the environment, not subjective features of consciousness, and they do so directly and automatically, not inferentially. But those processes are designed to discriminate features within our environment; they’re inept in discriminating global environments.

 
16

For example, S can know in a non-basic way that the table is not white but deceptively lit, after a thorough investigation of possible sources of deception, and she can know that this entails that she’s not deceived by an evil demon, without being able to know that she’s not deceived by an evil demon.

 

Copyright information

© Springer Science+Business Media B.V. 2011