Philosophical Studies, Volume 155, Issue 3, pp 371–381

Possible disagreements and defeat

Authors

    • B. Carey, Department of Philosophy, University of Rochester

DOI: 10.1007/s11098-010-9581-5

Cite this article as:
Carey, B. Philos Stud (2011) 155: 371. doi:10.1007/s11098-010-9581-5

Abstract

Conciliatory views about disagreement with one’s epistemic peers lead to a somewhat troubling skeptical conclusion: that often, when we know others disagree, we ought to be (perhaps much) less sure of our beliefs than we typically are. One might attempt to extend this skeptical conclusion by arguing that disagreement with merely possible epistemic agents should be epistemically significant to the same degree as disagreement with actual agents, and that, since for any belief we have, it is possible that someone should disagree in the appropriate way, we ought to be much less sure of all of our beliefs than we typically are. In this paper, I identify what I take to be the main motivation for thinking that actual disagreement is epistemically significant and argue that it does not also motivate the epistemic significance of merely possible disagreement.

Keywords

Disagreement · Defeaters · Higher-order evidence · Epistemic peers · Conciliatory views

Sometimes equally competent epistemic agents, after careful consideration of equally good evidence, reach incompatible conclusions. There are various principles we might adopt prescribing how (if at all) each agent ought to modify her beliefs upon learning of such disagreement. A certain class of such principles (what Elga (2007) has called ‘conciliatory views’) lead to a somewhat troubling skeptical conclusion: that, for the most part, when we know others disagree under the right conditions, we ought to be (perhaps much) less sure of our beliefs than we typically are.

In an effort to discredit conciliatory views, one might attempt to extend this skeptical conclusion by arguing that disagreement with merely possible epistemic agents should be epistemically relevant in the same way as disagreement with actual agents, and that, since for any belief we have, it is possible that someone should disagree in the appropriate way, we ought to be much less sure of all of our beliefs than we typically are. Since this conclusion is unacceptable, conciliatory views must be false.

In this paper, I will consider the epistemic relevance of actual and possible disagreements, concluding that they differ greatly in this respect. However, this may not get conciliatory views entirely off the hook, as I will further argue that certain unencountered disagreements are just as epistemically relevant as typical disagreements.

1 Conciliatory views

Before considering the rational response to encountering a disagreement, we will need to adopt some theoretical tools. First, we need the notion of an epistemic peer:

S and T are epistemic peers if and only if: for any body of evidence E, if S and T disagree about what proposition E supports, then neither is more likely to be right than the other.1

Epistemic peerhood is an essential component of philosophically interesting cases of disagreement, as, if S has good evidence that T is significantly more or less likely to be right than she is, then the intuitively rational strategy for S is to adopt or ignore T’s position, respectively.

Next, let us define a D-case as a case in which:
    (i) S believes P on the basis of E,

    (ii) S has strong evidence that an epistemic peer believes ¬P on the basis of evidence that is as good as E,2 and

    (iii) S has no independent reasons for discounting her peer’s evidence.3

The question, then, is how an agent should rationally respond to finding himself in a D-case.

Consider the following potential answers:
  • (C1) If S is in a D-case, then S should suspend judgment on P.

  • (C2) If S is in a D-case, then S should split the difference with S’s epistemic peer.4

  • (S1) If S is in a D-case, then S should continue believing P (to the same degree).

C1 and C2 are both conciliatory principles,5 in that they mandate a change in S’s belief in the direction of the peer’s belief, while S1 is stubborn in that it requires no change from S’s original belief.6

Given a plausible restriction on how we should think about evidence, something like C1 or C2 seems to follow fairly naturally from the description of a D-case. Suppose my friend and I are splitting the check at dinner, and each of us looks at the bill and performs some quick mental arithmetic to determine our share. I come to the conclusion that we each owe $17, while my friend concludes that we each owe $19.7 Suppose further that I have good evidence that I am typically reliable about this sort of thing, and that I have in fact made no mistake in this case, so that, prior to discussing it with my friend, it is rational for me to believe that we each owe $17. But suppose I also have good reason to believe that my friend is an epistemic peer, such that, when we disagree (at least about arithmetic), she is just as likely to be right as I am. When I discover that she disagrees with me on roughly equivalent evidence, what should I then believe?

According to S1, I should continue believing that we each owe only $17, and (if there are degrees of belief) to the same degree. But this seems implausible. After all, prior to discovering that we disagreed, I regarded her as just as likely as I to be right when we disagree. How, then, can it be rational for me to give her conclusion no epistemic weight once I discover that we actually do disagree?

C1 and C2 offer a more acceptable prescription: I should alter my belief in the direction of my friend’s belief. The basis for this claim is that, since on my evidence my friend is just as likely to be right as I am, I should not give my calculation extra epistemic weight simply because it is mine. This seems like a plausible restriction on rationality: if I have good reason to think that my friend is just as likely to be right as I am, it is irrational for me to grant my judgment greater epistemic weight than hers.8

This point generalizes. In a D-case, S’s evidence supports that some other agent who is just as good at interpreting evidence has evidence which is just as good as her own and has reached an incompatible conclusion. Given that (iii) is also satisfied,9 it seems that S should think her peer is just as likely to be right as she is. In the absence of a reason to doubt either her peer’s ability to read the evidence (since she is a peer) or her peer’s body of evidence (since it is just as good as S’s), S should grant her peer’s conclusion as much epistemic weight as her own. Thus, it is plausible that an appropriate principle about disagreement will hold that if S is in a D-case, then S should modify her belief significantly in the direction of her peer’s.10

2 The threat of skepticism

C1 has a prima facie unattractive consequence: if C1 is true, then whenever I disagree with someone who I have good (undefeated) reason to think is an epistemic peer with equally good evidence, I should suspend judgment on the disputed proposition. So, it seems I should suspend judgment on virtually all philosophical theses, simply because there are people that I have good reason to think are my epistemic peers who have considered all of the same arguments and examples that I have, and have reached different conclusions. And, of course, the problem extends beyond the domain of philosophy. There are those who disagree with me about issues in ethics, politics, aesthetics, and almost any other topic of importance whom I take to satisfy the relevant conditions,11 and so it seems that, if the proponents of conciliatory views have it right, then a broad, though not total, withdrawal from my convictions is in order.

Note, however, that the scope of this skeptical conclusion is at least somewhat restricted, as there are many beliefs I have with respect to which I am not in a D-case. For example, I have no evidence to suggest that I have an epistemic peer who disagrees with me on equally good evidence about the propositions that 2 + 2 = 4, that Earth is the third planet from the sun, or that I am having an experience as of looking at a computer screen right now. These sorts of beliefs, it seems, are safe from the implications of C1: the former two because no peer disagrees with me about them, and the latter because, even if a peer did disagree, I would have reason to think my evidence was better than hers.12 It is only in more controversial domains like philosophy and politics that disagreement among peers is rampant, and in these cases, the proponent of C1 might maintain, we must simply accept the unfortunate conclusion that we ought to suspend judgment on a great many more issues than we typically do.

3 Merely possible disagreement

However, merely possible disagreement threatens to expand the skeptical consequence of C1 beyond controversial issues. Take one of my beliefs that is uncontroversial—that 2 + 2 = 4, for example. Suppose my evidence for this belief is just that I carefully consider the proposition and it seems true to me.13 No one has ever disagreed with me about this proposition, but someone I have good evidence for believing to be a peer with equally good evidence could disagree. That is, in some possible world, I find myself in a D-case with respect to the proposition that 2 + 2 = 4.

Now consider an agent, Ben, who lives on an island. Everyone on the island agrees that 2 + 2 = 4. Ben knows that he has epistemic peers who live off the island who believe that it is not the case that 2 + 2 = 4 on the basis of equally good evidence, but persists in believing as he does because he takes the views of people who do not live on his island to be epistemically irrelevant. It seems right to say that Ben is not taking proper account of his total evidence. But how am I any different? Am I not treating the actual world as my island and simply declaring the attitudes of my peers in other worlds irrelevant by fiat?

Following this intuition, Kelly (2005) considers the case of a student in some possible world who studies Newcomb’s Problem.14 In her world, One-Boxing is unanimously endorsed by everyone who has considered the problem, despite the fact that they have exactly the same arguments available to them as we have in the actual world. If only actual disagreement is epistemically relevant, then, despite having access to the same arguments, the student should be much more confident that One-Boxing is the correct strategy in her world than she would be in ours. But, Kelly asks, “…can’t the student in the unanimous possible world simply look over at our own fragmented world, and realize that here she has epistemic peers who extol Two-Boxing?” (21) That is, doesn’t her evidence support that she is in a D-case with respect to some possible peers?

Kelly further motivates minimizing the epistemic difference between actual and merely possible disagreements by arguing that “whether there is any actual disagreement with respect to some question as opposed to merely possible disagreement might, in a particular case, be an extremely contingent and fragile matter.” As an example, Kelly offers a case in which an evil tyrant executes everyone who disagrees with him on some issue.15 He would have no actual disagreers, but not for epistemically relevant reasons. So, in certain kinds of cases, that disagreement is merely possible rather than actual seems to make no epistemic difference, meaning we have at least some reason to think that actual disagreements should not be epistemically privileged over merely possible ones.

4 The problem of possible disagreements

Unfortunately for the proponents of conciliatory views, allowing that merely possible disagreements are as epistemically relevant as actual disagreements leads to an uncomfortable dilemma. Consider the following conditional:

(PD) If (actually) being in a D-case with respect to P should always lead S to suspend judgment on P, then possibly being in a D-case with respect to P should always lead S to suspend judgment on P.16

Suppose we accept C1, which tells us that the rational response to being in an actual D-case is to suspend judgment. If merely possibly being in a D-case is no less epistemically important, then rationality must also require at least suspension of judgment in response to such merely possible disagreement. So, if we accept that actual disagreement (being in an actual D-case) is no more epistemically relevant than merely possible disagreement (merely possibly being in a D-case), then we must accept something at least as strong as (PD).17 If the proponent of C1 accepts this conditional, however, it seems he must accept one of the following two arguments:

(A1)

  (1) If being in a D-case should always lead S to suspend judgment on P, then possibly being in a D-case should always lead S to suspend judgment on P.

  (2) It is not the case that possibly being in a D-case should always lead S to suspend judgment on P.

  (3) Therefore, it is not the case that being in a D-case should always lead S to suspend judgment on P.

(A2)

  (1′) If being in a D-case should always lead S to suspend judgment on P, then possibly being in a D-case should always lead S to suspend judgment on P.

  (2′) Being in a D-case should always lead S to suspend judgment on P.

  (3′) Therefore, possibly being in a D-case should always lead S to suspend judgment on P.

  (4) If possibly being in a D-case should always lead S to suspend judgment on P, then S should suspend judgment on almost every proposition.

  (5) Therefore, S should suspend judgment on almost every proposition.

As (A1) indicates, denying that possibly being in a D-case should always lead an agent to suspend judgment yields (2), which, together with (1), entails (3), and (3) is incompatible with C1. However, maintaining C1 commits one to (3′), which, given (4), leads to an unattractively broad skeptical conclusion.

Some defense of (4) is in order. The motivation is simply that, given the breadth of metaphysical possibility, there is some possible world in which I am in a D-case with respect to virtually any proposition. I know that there are many merely possible agents who are no less reliable than I am, have equally good evidence, and nevertheless disagree with me about the proposition that the Earth is the third planet from the sun.18 Given that my evidence supports this, is discounting the views of these peers anything other than actual-world chauvinism? If not (that is, if (3′) is true), then we are left with the conclusion that I should suspend judgment on almost every proposition. So, if we accept (PD), it seems that we must either reject C1 or accept an implausibly skeptical conclusion.

5 Responding to the problem

Fortunately for the proponent of C1, it is false that merely possible disagreement affects one’s evidence in the same way as actual disagreement, so (PD) is unsupported and the dilemma can be avoided. Though there is something correct about the intuitions driving Kelly’s points,19 evidence from actual disagreement is importantly distinct from evidence from merely possible disagreement. To see why, we will need a more careful explanation of how actual disagreement leads to an evidential situation that supports suspending judgment.

Consider again the restaurant case. Suppose my evidence supports the following: my friend and I are both fairly reliable when it comes to dividing restaurant bills, such that we each get it right 85% of the time.20 80% of the time we both get it right, while 10% of the time we are both wrong. The remaining 10% of the time, we disagree, and therefore only one of us is right. Since we are epistemic peers, we are equally likely to be right when this happens, so in half of these disagreement cases I am right, and in half of them my friend is right.21 So, supposing that the actual amount we owe, x, is $17, my evidence supports the following:

 

 

        My belief    Friend’s belief    Probability
  (A)   x = 17       x = 17             .8
  (B)   x = 17       x = 19             .05
  (C)   x = 19       x = 17             .05
  (D)   x = 19       x = 19             .1

When I first calculate that x = 17, I have good evidence that I have calculated correctly. After all, I have good evidence that I am right 85% of the time. However, when I discover that my friend thinks that x = 19, I have evidence that this is one of the 10% of cases (either (B) or (C)) in which only one of us is right, and in those cases, I only get it right 50% of the time. Since I therefore have no better reason to think that I am right than that my friend is, I should suspend judgment. Finding out that my friend and I disagree thus defeats my evidence for thinking that my calculation was correct.22,23
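In probabilistic terms (a minimal sketch using the figures from the table above; as footnote 21 notes, the particular numbers are irrelevant), the defeat amounts to a simple conditionalization: my confidence that I am right drops from .85 to .5 once I learn that we disagree.

\[
P(\text{I am right}) = P(A) + P(B) = .8 + .05 = .85, \qquad
P(\text{I am right} \mid \text{we disagree}) = \frac{P(B)}{P(B) + P(C)} = \frac{.05}{.05 + .05} = .5
\]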

This, it seems, is why disagreement with a peer is so epistemically important: it provides an undercutting defeater for my evidence. I think it is precisely this kind of defeat story that makes strong conciliatory views like C1 plausible: since the disagreement is with someone I have good reason to think is a peer, my evidence will always support that my reliability, conditional on our disagreeing, is 50%, which naturally supports suspension of judgment.

However, merely possible disagreement does not defeat my evidence in this way. Suppose we add to the case that my evidence supports that the probability that it is possible that my friend will disagree with me is 1. My evidential situation with respect to the proposition that we each owe only $17 does not change in the same way, because its being possible that we disagree is no evidence at all that this is one of the cases in which we disagree, which is what serves as the defeater.24

To make the point even clearer, let us consider exactly what is doing the defeating in these cases. Imagine a case where I have strong evidence that I am in a D-case with someone, but I’m not sure exactly who. Suppose, for example, that I am at dinner with two epistemic peers and they tell me that one of them looked at the bill and calculated that we owe some amount other than $17, but they do not tell me which one of them it was. In this case, it seems that my evidence is defeated just as well as in the previous case, the relevant defeater being something like: ∃x(I am in a D-case with x with respect to the proposition that we each owe $17).25 So let us suppose that the relevant defeaters are all of the following form:

∃x(I am in a D-case with x with respect to P)

Possible disagreers do not make any proposition of this form true. Rather, they make propositions of one (or both) of the two following forms true:

Possibly: (∃x(I am in a D-case with x with respect to P)), or

∃x(Possibly: (I am in a D-case with x with respect to P))26

And, as in the original restaurant case, even being certain of these kinds of possible cases does not defeat evidence for P in the same way that evidence of actually being in a D-case does.27 So, the student of Newcomb’s Problem cannot look from her unanimous world into our fragmented one and find disagreeing peers here, because she is entertaining a proposition of the wrong form to serve as a defeater of her evidence.
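The contrast can be put schematically (a sketch; D(x, P) is shorthand, introduced here, for ‘I am in a D-case with x with respect to P’). The defeater has the form ∃x(D(x, P)), while possible disagreers secure at most Possibly: (∃x(D(x, P))) or ∃x(Possibly: (D(x, P))), and in the standard modal systems (T, S4, S5) neither of these entails the defeater:

\[
\Diamond\, \exists x\, D(x, P) \;\nvdash\; \exists x\, D(x, P)
\qquad\text{and}\qquad
\exists x\, \Diamond\, D(x, P) \;\nvdash\; \exists x\, D(x, P)
\]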

What, then, should we make of Kelly’s claims about the extreme contingency and fragility of actual disagreement? First, we need to distinguish between (at least) two separate elements of the evidence in disagreement cases. The student in the unanimous world is not only missing a defeater for her own judgment that One-Boxing is correct, she also has additional evidence from the fact that many people agree with her. If she were to find herself in a D-case with respect to One-Boxing, she would gain a defeater for this evidence from agreement, as well as a defeater for the evidence from her own initial judgment.

To see how this works in more detail, consider a modification of Kelly’s tyrant case. Suppose that at w1 the Earth is ruled over by a benevolent dictator, Victor, whose only weakness is his incredible vanity. Due to this vanity, Victor cannot tolerate anyone believing that he is not the greatest ruler the Earth has ever known. As a result, anyone who has this belief is executed. Despite this policy, it is still true that Victor is Earth’s greatest ruler, and there is ample evidence of this. Obviously, there is no disagreement among peers about the proposition that Victor is Earth’s greatest ruler.

Now consider Reed, Sue, and Johnny, three residents of w1 who share all of the evidence about Victor’s various benevolent acts, leadership abilities, etc., but differ in their evidence about the policy of execution:
    (i) Reed has evidence of the policy and has evidence that at least one person has been executed because of it.

    (ii) Sue has evidence of the policy, but has no evidence to suggest that it has ever been enforced.

    (iii) Johnny has no evidence of the policy.

Seemingly, Johnny should believe that Victor is Earth’s greatest ruler, as he has evidence to support this proposition and no defeaters for it. Sue should be less confident of the proposition, but still believe it, as she has an undercutting defeater for some of her evidence (the evidence from agreement), but no defeater for her own assessment that Victor is Earth’s greatest ruler. Reed, on the other hand, has both a defeater of the evidence from agreement and evidence of actual disagreement (from those who were executed), which also serves to defeat his evidence of his own reliability in this case, so he should suspend judgment.28

Now suppose they all consider some other world, w2, in which there is broad disagreement about Victor’s greatness (perhaps because he is less vain there, and allows dissent on the issue of his greatness). Reed’s evidence supports that w1 is a world where people disagree, so considering w2 should have no effect on his situation. Sue’s evidence from agreement has been defeated; she knows that the execution policy might be responsible for the lack of disagreement, rather than genuine consensus. However, she knows that she judged Victor to be Earth’s greatest ruler on the relevant evidence (his benevolence, etc.). So, given that, for all her evidence supports, the execution policy has never been enforced, it seems that she can reasonably maintain her belief. Johnny has no defeaters and does not gain one by considering w2, for the same reason that being certain that my friend could disagree with me does not defeat my evidence in the restaurant case. So, it seems that evidence of merely possible disagreement does not support a change in any of their beliefs, while evidence about actual disagreement would (if Reed told Sue about the executed disagreers, she would also have a defeater). Thus, it seems that the proponent of C1 can retain his view while avoiding the broadened skeptical conclusion by denying (PD).

6 Unencountered disagreement

However, the proponent of C1 must admit the epistemic relevance of unencountered disagreers. Consider again the case of Ben and the other inhabitants of his island. They have never encountered anyone from off the island, so in at least one sense of the word ‘disagreed’ no one has ever disagreed with them about the proposition that 2 + 2 = 4. Nevertheless, Ben has good evidence that ∃x(I am in a D-case with x with respect to 2 + 2 = 4) is true. It is, of course, contingent that Ben has never encountered someone who disagrees with him. After all, it is always a contingent and in some sense fragile matter that an agent has the evidence he has, so what is rational for anyone to believe is always determined by such factors. However, the actual disagreers are epistemically relevant because according to Ben’s evidence there are such disagreers, and so the relevant defeat propositions are supported.

Disagreers can also be unencountered for chronological reasons. I, for example, have never encountered anyone who died before 1950, though many such people were my epistemic peers on various topics, including logic and set theory. So, suppose I had good evidence that many of my (now dead) epistemic peers were not persuaded by Russell that Naïve Set Theory is inconsistent with classical first‐order logic. My evidence that these two systems are inconsistent is just Russell’s Paradox (plus, perhaps, certain intuitive evidence). But the peers under consideration were aware of just the same Paradox (and had, so far as I know, equally good intuitive evidence) and nevertheless adopted different conclusions. In such a situation, absent an independent reason to downgrade the evidence of the deceased peers, my evidence is defeated to the same degree that it would be if I had equally good evidence of contemporary disagreeing peers.29

In both kinds of cases, the unencountered disagreers are epistemically relevant not simply because it is a contingent and in some sense fragile matter that they are unencountered. Rather, they are relevant because according to the agent’s evidence there are such disagreers, and so the relevant defeat propositions are supported. So, though unencountered disagreement may expand the set of propositions on which we must suspend judgment somewhat, it does so only when we have reason to think that someone disagrees with us, as opposed to when it is merely possible that someone disagrees. Whether skepticism of this breadth is an acceptable cost of maintaining a conciliatory view is a question I will not answer here, but it seems that the proponent of C1 must accept it.

Footnotes
1

We might use this as a basis to develop conditions for epistemic peerhood on a particular topic or subject by restricting the range of bodies of evidence or supported propositions (or both).

 
2

I set aside here the complicated issue of what makes one body of evidence as good as another.

 
3

By ‘independent reasons’, I mean reasons other than the fact that the peer’s evidence seems to support a proposition incompatible with P.

 
4

C2 should be taken as a rough generalization of C1 which allows for different degrees of belief. To do it justice, we would need to reformulate the notion of a D-case so that (i) S believes P on the basis of E to degree D, and (ii) S has strong evidence that an epistemic peer believes P on the basis of E to degree F ≠ D. For S to split the difference, then, would be to believe P on the basis of E to degree (D + F)/2. For a defense of this view, see Christensen (2007).

 
5

For discussion of such views, see Feldman (2006), and Elga (2007).

 
6

For examples of this kind of view, see Kelly (2005) and van Inwagen (forthcoming).

 
7

The example is Christensen’s, with the numbers changed. He no doubt runs with a more financially secure crowd than I do.

 
8

A conciliatory view need not be an equal weight view of the sort to which I am appealing here, but a view of the latter kind certainly supports a view of the former kind.

 
9

So, for example, S does not have independent defeaters for her peer’s evidence.

 
10

In what follows, I will drop the discussion of conciliatory views generally and focus on C1 as a representative of the kind. Some points need to be made about this. First, the plausibility of C1 depends on granting very high epistemic weight to the opinions of disagreeing peers. However, a principle that granted less weight and therefore required only minimal modification of belief in the direction of a peer would also rightly be called conciliatory. Most of what I say about the consequences of C1 will apply only minimally to such views, but, with minor modifications, would apply to any view that requires significant adjustment in the direction of a peer. Second, given the description of a D-case, C1 covers only a limited range of disagreements. For example, it says nothing about encountering a peer who suspends judgment on P, or who does not explicitly believe P, but merely some other proposition incompatible with P. Despite both of these deficiencies, I will focus on C1 as a representative of conciliatory views, primarily for the sake of simplicity.

 
11

Actually, in at least some of these domains, I usually fail to regard those who disagree as peers. There is considerable doubt as to whether my evidence supports this judgment.

 
12

The reason being, roughly, that I have better evidence of my mental states by introspection than anyone else could have by any other means. For a slightly different account of such cases, see Sosa (forthcoming).

 
13

So I do not, for example, derive it.

 
14

It is worth noting that Kelly (forthcoming) backs off a bit on his earlier stubborn view of disagreement. He does not, however, revisit this particular objection.

 
15

Actually, Kelly’s example is problematic, since the executed disagreers are not merely possible; they are just in the past. We might instead imagine that the tyrant somehow prevents the conception of anyone who would disagree with him, so that no such person ever actually exists.

 
16

It should be noted that (PD) is considerably stronger than any claim that Kelly makes, though Alston (1991) accepts something very similar regarding epistemic practices. I use the stronger claim here to make things seem as bad for C1 as possible. Regardless, a restricted version of the conditional would still create some problems for conciliatory views, and would still be false for the reasons discussed later.

 
17

I take it as uncontroversial that actual disagreements are no less epistemically relevant than merely possible ones. However, if we thought that possible disagreements were more relevant than actual ones, then we might endorse something stronger (e.g. that if actual disagreement should lead S to suspend judgment, possible disagreement should lead S to disbelieve P). Such principles strike me as obviously untenable.

 
18

Of course, there are possible worlds where the Earth is in fact not the third planet from the sun, but even in some of those in which it is, I have the relevant kind of disagreers.

 
19

We will consider what is right about them in the last section.

 
20

I present this in terms of probabilities based on track records for simplicity only. We might just as easily think of these as confidences or degrees of justification based on any kind of evidence that could support the displayed distribution.

 
21

The actual numbers here are irrelevant. The only constraint is that my evidence supports that the probability that I am right when we disagree is equal to the probability that my friend is right when we disagree, which is entailed by our being epistemic peers.

 
22

Christensen (2007) makes a similar point.

 
23

Of course, if I had some reason (other than that she disagrees with me) to think that my friend’s evidence was faulty, such as that she had forgotten her glasses, this would give me a reason to assign different probabilities to each of us being right conditional on the fact that we disagree. But in D-cases, by stipulation, I have no such independent reason.

 
24

This is not to say, of course, that evidence of possibilities can never serve as a defeater. If I think that it is actually the case that 2 + 2 = 5 only because I believe that it is necessarily the case that 2 + 2 = 5, then evidence that it is possible that 2 + 2 ≠ 5 does defeat my evidence for the proposition that 2 + 2 = 5. The point is that evidence of merely possible disagreement does not fill the same defeat role as evidence of actual disagreement. It may fill some other defeat role in other kinds of cases, but since it does not serve as a defeater in the way that motivates C1, the proponent of C1 can safely deny (PD).

 
25

Note that in the previous case, I was also justified in believing this proposition, or at least I would have been after a simple existential generalization.

 
26

As I use the existential quantifier, it quantifies only over the objects that exist at a world. As such, the second form differs from the first in that it claims of something that actually exists that I could be in a D-case with that thing about p, while the first claims only that there could be a thing that I would be in a D-case with about p.

 
27

Of course, this leaves open that it defeats in some other way, but absent some story about how this occurs, there is no reason to believe (PD).

 
28

This is assuming, of course, that his evidence supports that those executed were epistemic peers with equally good evidence, etc.

 
29

I would need a variety of eternalism in order to say that evidence of past (or future) disagreers supports exactly the same defeat proposition as evidence of present disagreers. I am happy to take on this metaphysical baggage for the sake of simplicity, but I suspect a presentist could tell an appropriate story about why PAST(∃x(I am in a D‐case with x with respect to P)) bears different evidential relations than Possibly: (∃x(I am in a D‐case with x with respect to P)). (The story would likely need to modify D‐cases in some way, such that they explicitly allow cross‐temporal relations.)

 

Acknowledgments

Thanks to participants in the 2009 Graduate Epistemology Conference at the University of Miami and Richard Feldman’s 2008 Disagreement seminar for helpful discussions of these issues. I am especially grateful to Richard Feldman, Jonathan Matheson, and Nick Wiltsher for comments on earlier drafts of this paper.

Copyright information

© Springer Science+Business Media B.V. 2010