
Lotteries, knowledge, and inconsistent belief: why you know your ticket will lose

Abstract

Suppose that I hold a ticket in a fair lottery and that I believe that my ticket will lose [L] on the basis of its extremely high probability of losing. What is the appropriate epistemic appraisal of me and my belief that L? Am I justified in believing that L? Do I know that L? While there is disagreement among epistemologists over whether or not I am justified in believing that L, there is widespread agreement that I do not know that L. I defend the two-pronged view that I am justified in believing that my ticket will lose and that I know that it will lose. Along the way, I discuss four different but related versions of the lottery paradox—The Paradox for Rationality, The Paradox for Knowledge, The Paradox for Fallibilism, and The Paradox for Epistemic Closure—and offer a unified resolution of each of them.

Notes

  1. Throughout this paper, the terms ‘rational’, ‘irrational’, ‘reasonable’, ‘unreasonable’, ‘justified’, and ‘unjustified’ are being used in their epistemic senses (unless otherwise qualified).

  2. Strictly speaking, Nelkin argues that we’re justified in believing that L is very probable, which I take to be roughly equivalent to suspecting L, but argues that it is not rational to believe that L purely on the basis of its high statistical probability (Nelkin 2000). Smith (2010) also thinks that I’m justified in suspecting that L but not justified in believing that L.

  3. This is not to say that externalism is false with respect to all forms of justification. I am only claiming that externalistic justification is irrelevant to the normative, belief-guiding kind of rationality which concerns us here.

  4. Igor Douven (2008) also argues that accounts of justification can be assessed in terms of how well they help us achieve the epistemic goal of maximizing truth and avoiding falsity. And Richard Campbell (1981) appeals to the epistemic goal of maximizing truth and minimizing falsity to defend the rationality of believing preface-style inconsistency-making propositions.

  5. It is important to be clear about what sort of probability I’m appealing to here and throughout the paper because, as Peter Kung (2010) has observed, some probabilities provide no evidence or rational support for their target propositions. For example, as Kung points out, some appropriately assigned subjective probabilities provide no reason or evidence in support of their target propositions at all. Kung provides the following example. You are watching an unfamiliar card game about which you have no information. You know none of the rules of the game. You have no idea about the various skill levels of the current players. And you have no information about the ratio of good hands to bad hands or about the frequency of winning hands, etc. You don’t know if anyone ever wins or if everyone always wins. You are currently peering over the shoulder of one of the players and are looking at the cards she has been dealt. In such a situation, what subjective probability should you assign to the proposition that she will win this hand? Kung’s answer: “On the basis of the indifference principle it seems that you should conclude the probability that she will win the hand is ½” (2010, p. 3). Kung argues that this initial subjective probability assignment gives you no evidence that it is 50% likely that she will win. However, Kung argues that after consulting the rulebook and discovering that the statistical probability of hands identical to hers winning is .5, your newly-informed subjective probability assignment (also .5) does provide evidence that she has a 50% chance of winning.

    Pryor (2013) thinks that high probability is not closely related to having evidence, because probability concerns the balance of evidence whereas justification requires having evidence. Pryor is right where subjective probabilities are concerned. Our prior subjective probability assignments get updated (or at least should get updated) as new evidence is acquired so that our new subjective probability assignments reflect our total evidence. Pryor is also right that justification requires having evidence/reasons. To be justified in believing that p, I must possess an epistemic reason (either doxastic or experiential) to think that p is true.

    While Kung and Pryor are right to note that there is no tight connection between subjective probabilities and evidence, the probabilities that I am appealing to are not subjective probabilities. They are objective statistical probabilities that are easily calculated. In a fair lottery with 13,983,816 tickets, the objective statistical probability of my ticket being randomly picked is 1/13,983,816, which equals .000000072. So, the objective statistical probability that my ticket will lose [1 minus .000000072] is .999999928. Once I do the calculations, I know that the probability of L is .999999928. It is not the probability itself that constitutes my reason for believing L (for I could be unaware of that probability). It is my knowledge of that probability that justifies me in believing L. I know that the probability of L is .999999928, and this knowledge gives me a fallible albeit extremely compelling doxastic reason to think that L is true.
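
    The arithmetic is easy to reproduce. Here is a minimal Python sketch (purely illustrative; 13,983,816 is C(49, 6), the number of possible tickets in a standard choose-6-of-49 lottery):

      from math import comb

      tickets = comb(49, 6)   # 13,983,816 = C(49, 6), a standard 6-of-49 lottery
      p_win = 1 / tickets     # objective statistical probability of being picked
      p_lose = 1 - p_win      # probability of L: my ticket will lose

      print(f"{tickets:,}")   # 13,983,816
      print(f"{p_win:.9f}")   # 0.000000072 (rounded to nine decimal places)
      print(f"{p_lose:.9f}")  # 0.999999928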

  6. Throughout this paper, I use the double arrow ‘⇒’ for logical implication, the single arrow ‘→’ for material implication, and the single arrow with ‘sc’ subscript ‘→sc’ for the subjunctive conditional connective.

  7. It is important not to confuse justificational closure with the principle of epistemic closure [PEC]. See Sect. 2.4 for a precise statement of PEC. Also see footnote 15.

  8. This rendition of the “rationality version” of the lottery paradox closely parallels Nelkin’s. See Nelkin (2000, p. 375).

  9. A set of propositions is inconsistent just in case it is logically impossible for all of the members of the set to be true. The set mentioned in premise (7) is clearly inconsistent since the final member proposition is the disjunction of the negations of all the other members of the set. If that final member proposition is true, then one of the other member propositions must be false. On the other hand, if all of the other member propositions are true, then the disjunction of their negations is false. Either way, not all of the members of the set can be true.
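
    The inconsistency can also be verified mechanically. A purely illustrative Python sketch, using a toy three-ticket version of the set, confirms that no assignment of truth values makes every member true:

      from itertools import product

      def all_members_true(l1, l2, l3):
          # l1, l2, l3: truth values of "ticket i will lose" for i = 1, 2, 3.
          # The final member is the disjunction of the negations of the others.
          some_ticket_wins = (not l1) or (not l2) or (not l3)
          return l1 and l2 and l3 and some_ticket_wins

      # No assignment of truth values satisfies every member of the set.
      print(any(all_members_true(*a) for a in product([True, False], repeat=3)))  # False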

  10. See footnote 9.

  11. The point can be developed more fully as follows: Nelkin can rationally assert (2*) only if she assumes the falsity of (1*). Since she cannot rationally assert (2*) without assuming the falsity of (1*), she cannot assert (2*) without assuming the very thing in question, namely, ~ Kt1. Granted, if Nelkin could give an independent reason for thinking (2*) is true—a reason that did not make reference to the falsity of (1*)—then LPK would not beg the question; but she can’t, because (2*) is false if (1*) is true. Consequently, LPK effectively begs the question, because to be rationally entitled to assert (2*), Nelkin must first be rationally entitled to deny (1*). Perhaps Nelkin can provide some other argument for rejecting (1*), which she can then use to establish ~ (1*) and ipso facto establish (2*). But then, it is this other argument—not LPK—that is doing all the work. Any argument A1 such that one must first establish the intended conclusion of A1 via some second argument A2 before one can rationally assert the premises of A1 is itself worthless in establishing the conclusion of A1. LPK is such an argument. In order for Nelkin to rationally assert (2*) of LPK, she must first prove the falsity of (1*) with a different argument, thereby rendering LPK superfluous. Consequently, LPK gives us no reason to think that I don’t know that L.

  12. Ryan presents a version of the lottery paradox similar to LPF in that it is predicated on the conjunction principle (CPJ), but she doesn’t regard her version as a reductio against fallibilism per se, but rather as a reductio of the assumption that one is justified (to the degree needed for knowledge) in believing that one’s ticket will lose. She contends that there is something special about the structure of the lottery that prevents one from being justified in believing that one’s ticket will lose, not the mere fact that the justification is nonconclusive.

  13. (16′) violates the No Known Contradictions Principle.

  14. For any probabilistic standard for justification less than 1, one can create a lottery with a sufficiently large number of tickets such that the probability of one’s ticket losing in that lottery will exceed the probabilistic standard in question.
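
    The required lottery size follows directly from the inequality 1 − 1/n > t, i.e., n > 1/(1 − t). A minimal Python sketch (purely illustrative):

      import math
      from fractions import Fraction

      def tickets_needed(t):
          """Smallest fair-lottery size n such that 1 - 1/n strictly exceeds
          the probabilistic standard t, given as a decimal string with 0 < t < 1."""
          t = Fraction(t)                     # exact rational arithmetic
          return math.floor(1 / (1 - t)) + 1  # n must exceed 1/(1 - t)

      for t in ("0.99", "0.999999", "0.999999928"):
          print(t, tickets_needed(t))         # e.g. 0.99 -> 101, since 1 - 1/101 > .99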

  15. It should be clear that PEC spells out a restricted version of epistemic closure. The unrestricted epistemic closure principle holds that knowledge is closed under entailment:

    (UEC):

    (p)(q)[(Kp & (p → q)) ⇒ Kq]

    UEC implies that we know all of the logical consequences of the propositions we know, which is an utterly implausible result for finite reasoners. For example, UEC implies that we know every necessary truth (since every proposition implies a necessary truth), including necessary truths so complex that we cannot even understand them. Since knowledge requires belief and since we cannot believe propositions we cannot understand, UEC is a non-starter. What about a version of closure that restricts epistemic closure to the known implications of the propositions we know:

    (KIEC):

    (p)(q)[(Kp & K(p → q)) ⇒ Kq]

    As is widely noted in the literature on closure, KIEC is also open to counterexample, for one can know that p and know that p implies q, without ever drawing the inference that q. No contemporary proponent of epistemic closure that I am aware of defends UEC or KIEC. [Note: While principles like UEC and KIEC are often presupposed in idealized formal epistemic logics, these logics are expressly criticized on the grounds that they are “committed to an excessively idealized picture of human reasoning” that finite human agents do not and cannot instantiate. See the SEP entry on “Epistemic Logic” (Rendsvig and Symons 2019) for details.]

    The most widely defended version of epistemic closure has come to be known as single premise closure (because the agent is inferring q from p). In his definitive defense of restricted closure, John Hawthorne formulates and defends the following version of single premise closure:

    (SPC):

    If one knows P and competently deduces Q from P, thereby coming to believe Q, while retaining one’s knowledge that P, one comes to know that Q. (2005, p. 29)

    Hawthorne does not defend the following multi-premise closure principle:

    (MPC):

    If one knows some premises and competently deduces Q from those premises, thereby coming to believe Q, while retaining one’s knowledge of those premises throughout, one comes to know that Q, (2005, p. 29)

    because, as he puts it: “small risks can add up to big risks” (2005, p. 30). Hawthorne doesn’t spell out what he means by ‘competently deduces’, but presumably in order to competently deduce Q from P, one must know or intuitively grasp that P implies Q. [Note: One needn’t use “P implies Q” as a premise when competently deducing Q from P, as demonstrated by Lewis Carroll’s “What the Tortoise Said to Achilles” (1895), but one must intuitively grasp that implication when competently deducing Q from P.] The principle of epistemic closure as I have formulated it, viz., PEC, is a version of single premise closure that makes explicit that to competently deduce Q from P one must grasp that P implies Q. Throughout this paper, I use the term ‘epistemic closure’ to refer to the restricted version of closure spelled out by PEC.

  16. If we assume that the competent deduction and basing requirements are satisfied, then PEC entails: [KBN & KB(N → ~ W)] ⇒ KB ~ W. So, given PEC, competent deduction, basing, and KB(N → ~ W), it follows that KBN ⇒ KB ~ W, which, by contraposition, entails ~ KB ~ W ⇒ ~ KBN. Thus, LPEC’s premise 3 follows from PEC, competent deduction, basing, and the assumption that Bob knows that (N → ~ W).

  17. A number of other epistemologists have observed that denying that we know “lottery propositions” threatens widespread skepticism. See, e.g., Vogel (1990), Hawthorne (2004) and Timmerman (2013, pp. 91–96).

  18. Of course, if LPF forces us to embrace infallibilism, then skepticism with respect to non-cogito contingent propositions is a foregone conclusion. So, there must be a fallibilistic solution to LPF, if we are to have virtually any knowledge at all.

  19. Here I am turning Ryan’s “What Else Can It Be?” argument on its head. Ryan assumes that I don’t know my ticket will lose, even if (i) it actually loses and (ii) I believe it will lose. She contends that the lottery is not a Gettier case, and so, if I don’t know that my ticket will lose, as most maintain, then it must be because I fail to satisfy the justification condition for knowledge. See Ryan (1996, pp. 136–137). Of course, if I am justified in believing my ticket will lose, then Ryan’s reasoning entails that I do know my ticket will lose, as I will subsequently argue.

  20. A minimally inconsistent set is a set such that all of its members are required to make it inconsistent. It is an inconsistent set such that if you drop any one member, the resulting set becomes consistent. Following Keith Lehrer, we can define a minimally inconsistent set more precisely as follows:

    (MI):

    A set of statements is a minimally inconsistent set iff the set of statements is logically inconsistent and such that every proper subset of the set is logically consistent. (1990, p. 99).
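
    To make (MI) concrete, here is a purely illustrative Python sketch that checks both clauses of the definition for the toy three-ticket lottery set of footnote 9. Checking the proper subsets obtained by dropping a single member suffices, since any subset of a consistent set is itself consistent:

      from itertools import product

      # Members of the toy set, each a function of the truth values of
      # "ticket i will lose" for i = 1, 2, 3.
      members = [
          lambda a: a[0],                                    # ticket 1 will lose
          lambda a: a[1],                                    # ticket 2 will lose
          lambda a: a[2],                                    # ticket 3 will lose
          lambda a: (not a[0]) or (not a[1]) or (not a[2]),  # some ticket will win
      ]

      def consistent(props):
          """True iff some truth-value assignment makes every member of props true."""
          return any(all(p(a) for p in props)
                     for a in product([True, False], repeat=3))

      minimally_inconsistent = (not consistent(members)) and all(
          consistent(members[:i] + members[i + 1:]) for i in range(len(members)))
      print(minimally_inconsistent)  # True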

  21. Those familiar with the preface paradox will recognize that this is just the introspective version of that paradox. For twentieth century discussions of the preface paradox, see Kyburg (1970), Foley (1979), Campbell (1981), Klein (1985), Pollock (1986) and Engel (1991). With the exception of Pollock, all of these authors reject the consistency requirement for rationality [CRR]. For more recent alternative responses to the preface paradox, see Leitgeb (2014), Cevolani (2017), Cevolani and Schurz (2017) and Schurz (2019). All of these latter authors either implicitly or explicitly reject the unrestricted conjunction principle for justification [CPJ]. To see why, let C be the conjunction of all of the individual propositions S is justified in believing. Unrestricted CPJ entails that S is justified in believing that C is true. All four of these latter authors reject that claim. They contend, instead, that S is only justified in believing that C is approximately true or truthlike or verisimilar.

  22. Campbell (1981, p. 251) and Klein (1985, p. 131) offer similar inductive arguments based on TROOE.

  23. The probability of the conjunction of 100,000 independent contingent propositions each with a probability of .999 equals .999^100,000 (which = 3.5385 × 10^−44). Since prob(~ IMP) = 3.5385 × 10^−44, prob(IMP) = 1 − 3.5385 × 10^−44, which would be expressed by a decimal point followed by 43 nines and change, i.e., prob(IMP) = .999999999999999999999999999999999999999999964615.
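
    These figures can be verified with arbitrary-precision arithmetic; a minimal Python sketch using the standard decimal module:

      from decimal import Decimal, getcontext

      getcontext().prec = 60                    # enough digits to see the 43 nines

      p_not_imp = Decimal("0.999") ** 100_000   # prob(~IMP) = .999^100,000
      p_imp = 1 - p_not_imp                     # prob(IMP)

      print(p_not_imp)   # roughly 3.5385E-44
      print(p_imp)       # a decimal point, 43 nines, then change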

  24. Kyburg (1970), Foley (1979), Campbell (1981), Klein (1985) and Engel (1991) have offered similar arguments for the rationality of believing such inconsistency-making propositions. In particular, see Foley (1979, p. 252), Campbell (1981, pp. 250–251), and especially Engel (1991, pp. 119–127), where I dub the inconsistency-making proposition ‘IMP’ and argue that even Lehrer-style coherence theories imply that we are justified in believing IMP.

  25. Klein (1985, p. 131) makes a similar point, as do Hill and Schechter (2007, pp. 105–106). I also raised a similar point in Engel (1991, pp. 120–124).

  26. For independent reasons to think that CRR is false, see Klein (1985).

  27. While it’s beyond the scope of this paper to discuss the preface paradox in any further detail, it should be clear from Sect. 4.1 that I think that a unified solution to both the lottery paradox for rationality and the preface paradox is possible and that a significant part of that unified solution consists in demonstrating that both CRR and CPJ are false, since without these two principles both paradoxes collapse. [Klein (1985) makes a similar claim.].

  28. Repeated applications of the conjunction principle to J* would entail J[(p1 ^ p2 ^ … ^ pn ^ IMP) ^ ~ (p1 ^ p2 ^ … ^ pn ^ IMP)].

  29. I wish to thank an anonymous referee for this journal for encouraging me to directly address this worry.

  30. Klein (1985) also addresses this worry. He argues that the justified acceptance of each member of a “weakly inconsistent set” (i.e., a minimally inconsistent set) does not justify us in accepting every proposition on the grounds that any argument based on inconsistent premises would be subject to defeat by internally blocking warrant paths. Consequently, we would not be justified in accepting anything on the basis of such an argument.

  31. Believing propositions with a probability of .999999928 is an extremely reliable way of forming beliefs.

  32. The fact that there is no introspectable difference between fallible knowledge and merely apparent fallible knowledge is one reason that second-order knowledge is difficult to come by.

  33. Ernest Sosa offers another reason for rejecting the sensitivity requirement, namely, that it has the following absurd result:

    Suppose first we have two propositions as follows: (a) that p, and (b) that I do not believe incorrectly (falsely) that p. Surely no one minimally rational and attentive who believes both of these will normally know either without knowing the other. Yet even in cases where one’s belief of (a) is sensitive, one’s belief of (b) could never be sensitive. (1999, p. 145)

    So, if sensitivity were necessary for knowledge, we could know that p, but could never know that we are not mistaken with respect to p. For Sosa’s trash chute counterexample to the sensitivity requirement, see Sosa (1999, pp. 145–146).

  34. In “Skepticism and Contextualism” Sosa seems to interpret safety in this strong sense. See Sosa (2000, pp. 13–16).

  35. Sosa has proposed a similar weak safety condition: “Not easily would S believe that p without it being the case that p” (1999, p. 142).

  36. In Engel (1992), I introduced the distinction between evidential luck and veritic luck and argued that of these two types of luck, only veritic luck is incompatible with knowledge. For a comprehensive discussion of knowledge-destroying veritic luck and attempts to preclude it, see my Internet Encyclopedia of Philosophy article on epistemic luck (Engel 2011). For a detailed look at safety-based attempts to block veritic luck, see Pritchard (2005).

  37. Hawthorne (2004) calls propositions like the proposition that Mary’s father won’t have a heart attack this weekend “lottery propositions.” According to Hawthorne, a lottery proposition is “a proposition of the sort that, while highly likely, is a proposition that we would be intuitively disinclined to take ourselves to know” (2004, p. 5). Lottery propositions threaten our knowledge of ordinary propositions (i.e., propositions of the sort we ordinarily take ourselves to know), because in each case the ordinary proposition entails the lottery proposition. So, given epistemic closure, if we don’t know the lottery proposition, we don’t know the ordinary proposition either.

  38. A number of epistemologists have noticed that the skeptical concerns raised by the lottery paradox for epistemic closure generalize. See, in particular, Vogel (1990), Hawthorne (2004) and Timmerman (2013).

  39. Examples of known future truths were provided in Sect. 6.4. Propositions P1–P4 all involve future truths, and yet we know that they are true.

  40. Keith DeRose (1996) makes a similar observation.

  41. A number of epistemologists have defended metaepistemological skepticism, the view that no one ever fallibly knows that they fallibly know that p. See, e.g., Chisholm (1986), Engel (2000) and Pritchard (2005). Even those epistemologists who defend the possibility of second-order knowledge concede that second-order knowledge is considerably harder to come by than first-order knowledge. See, e.g., Feldman (1981). My arguments in Sects. 6.6 and 6.7 do not hinge on the truth of metaepistemological skepticism. They only turn on the weaker claim that second-order knowledge is more difficult to obtain than first-order knowledge, which is currently the standard view in the second-order knowledge literature.

  42. Even most epistemologists don’t believe that they know that their ticket will lose. It follows that these epistemologists don’t know that they know that their ticket will lose.

  43. For a detailed discussion of such “meta-Gettierization,” see Engel (2000).

  44. Of course, some purchasers of lottery tickets don’t know that their ticket will lose, because they irrationally believe that their ticket will win. My claim is that anyone who believes that her ticket will lose on the basis of its known extremely high objective probability of losing knows that it will lose, whether she realizes that she knows this or not, provided she does not win.

  45. Prominent defenders of the knowledge view of assertion include Unger (1975), Williamson (1996), DeRose (1996) and Hawthorne (2004).

  46. Hill and Schechter (2007) defend an alternative contextualist approach to lottery propositions. They claim that “there are many contexts in which it is entirely appropriate for agents to assert that particular lottery tickets will lose” (Hill and Schechter 2007, p. 110). They argue that the appropriateness of such assertions is governed by a stakes-informed Gricean maxim of relevance coupled with a justified-belief norm of assertion.

  47. I embrace a restricted version of epistemic closure [PEC], but reject both the conjunction principle for justification [CPJ] and the consistency requirement for rationality [CRR]. An anonymous referee for this journal expressed doubts as to whether this is a consistent position. Indeed, the referee argued that “CPJ is a special case of epistemic closure because a1, a2, … an implies a1 ^ a2 ^ … ^ an.”

    To see that CPJ is not a special case of PEC, recall that CPJ is the principle:

    (p)(q)[(Jp ^ Jq) ⇒ J(p ^ q)]

    Also recall that PEC is the principle:

    If S knows that p, and S knows that (p → q), and S competently deduces q from p, and S believes that q on the basis of this deduction, then S knows that q.

    To see that CPJ is not a special case of PEC, suppose that S is justified in believing p and justified in believing q. Given these suppositions, CPJ entails that S is justified in believing that (p ^ q), but CPJ does not entail that S actually believes that (p ^ q). Since knowledge requires belief and since CPJ does not entail that S believes that (p ^ q), CPJ is not a special case of PEC, for CPJ can be satisfied when PEC is not satisfied.

    Another difference between PEC and CPJ is that PEC is a version of single premise epistemic closure, the basic idea of which is that if I know that p and competently deduce q from p, then I know that q [See footnote 15 for details.]. In contrast, CPJ is a multi-premise principle of justification/warrant transfer. CPJ is false because as more conjuncts are added, the probability of the conjunction rapidly decreases and the risk of error dramatically increases. For example, suppose I’m justified in believing seven independent propositions p1–p7, each of which is .9 probable. The probability of the conjunction (p1 ^ p2 ^ p3 ^ p4 ^ p5 ^ p6 ^ p7) is roughly .48, so the conjunction is probably false (see the sketch at the end of this note). CPJ also entails a number of impossibility results and should be rejected for that reason as well [For formal demonstrations of these impossibility results, see Schurz (2019).].

    Unlike CPJ, PEC is immune to the aggregative risk problem. Since knowledge is factive, if S knows that p, then p is true; and if S competently deduces q from known proposition p, then q must also be true. The factivity of the known premise and the validity of the deduction block the aggregative risk problem that plagues CPJ. In short, CPJ is not an instance of PEC. So, there is no inconsistency in embracing PEC and rejecting CPJ.

    Epistemic closure is also compatible with the denial of CRR. The denial of CRR entails that it is possible to rationally accept each member of a minimally inconsistent set of propositions SI. But one cannot know each member of SI because, given the factivity of knowledge, one cannot know the false proposition in SI.
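
    As a check on the arithmetic in the CPJ discussion above, the following purely illustrative Python snippet shows how the probability of a conjunction of independent .9-probable propositions decays as conjuncts are added, dropping below .5 by the seventh conjunct:

      # Aggregative risk: probability of a conjunction of n independent
      # propositions, each .9 probable.
      for n in range(1, 8):
          print(n, round(0.9 ** n, 3))
      # n = 7 gives ~0.478, so the seven-membered conjunction is probably false.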

  48. An ancient ancestor of this essay was delivered as part of a panel discussion on the lottery paradox at the NEH Institute on Knowledge, Teaching, and Wisdom. I want to thank the Institute directors, Keith Lehrer and Nicholas Smith, and the Institute participants, especially my fellow panelists Mike Roth and Sharon Ryan, for valuable input on that nascent essay. More recent versions of this essay were presented at the Russell Philosophy Conference, the Inland Northwest Philosophy Conference on “Knowledge and Skepticism,” and the Mainz-Frankfurt Colloquium on Analytical Philosophy: “Issues in Contemporary Epistemology.” I’m grateful to those in attendance for their helpful comments and criticisms. I’d also like to thank two anonymous referees for Synthese for their constructive suggestions. I most want to thank Bruce Russell for his invaluable feedback on multiple versions of this essay.

References

  • Campbell, R. (1981). Can inconsistency be reasonable? Canadian Journal of Philosophy, 11(2), 245–270.

  • Carroll, L. (1895). What the tortoise said to Achilles. Retrieved January 18, 2020, from https://wmpeople.wm.edu/asset/index/cvance/Carroll.

  • Cevolani, G. (2017). Fallibilism, verisimilitude, and the preface paradox. Erkenntnis, 82(1), 169–183.

  • Cevolani, G., & Schurz, G. (2017). Probability, approximate truth, and truthlikeness: More ways out of the preface paradox. Australasian Journal of Philosophy, 95(2), 209–225.

  • Chisholm, R. (1986). The place of epistemic justification. Philosophical Topics, 14(1), 85–92.

  • Cohen, S. (1988). How to be a fallibilist. Philosophical Perspectives, 2(Epistemology), 91–123.

  • Cohen, S. (2004). Knowledge, assertion, and practical reasoning. Philosophical Issues, 14(Epistemology), 482–491.

  • Cohen, S. (2005). Knowledge, speaker and subject. The Philosophical Quarterly, 55, 199–212.

  • DeRose, K. (1995). Solving the skeptical problem. The Philosophical Review, 104(1), 1–52.

  • DeRose, K. (1996). Knowledge, assertion and lotteries. Australasian Journal of Philosophy, 74(4), 568–579.

  • Douven, I. (2008). The lottery paradox and our epistemic goal. Pacific Philosophical Quarterly, 89(2), 204–225.

  • Engel, M. (1991). Inconsistency: The coherence theorist’s nemesis? Grazer Philosophische Studien, 40(1), 113–130.

  • Engel, M. (1992). Is epistemic luck incompatible with knowledge? The Southern Journal of Philosophy, 30(2), 59–75.

  • Engel, M. (2000). Internalism, the Gettier problem, and metaepistemological skepticism. Grazer Philosophische Studien, 60(1), 99–117.

  • Engel, M. (2004). What’s wrong with contextualism, and a noncontextualist resolution of the skeptical paradox. Erkenntnis, 61, 202–231.

  • Engel, M. (2011). Epistemic luck. Internet encyclopedia of philosophy. Retrieved January 18, 2020, from https://www.iep.utm.edu/epi-luck/.

  • Feldman, R. (1981). Fallibilism and knowing that one knows. The Philosophical Review, 90(2), 266–282.

  • Foley, R. (1979). Justified inconsistent beliefs. American Philosophical Quarterly, 16(4), 247–257.

  • Harman, G. (1986). Change in view. Cambridge, MA: The MIT Press.

  • Hawthorne, J. (2004). Knowledge and lotteries. Oxford: Oxford University Press.

  • Hawthorne, J. (2005). The case for closure. In M. Steup & E. Sosa (Eds.), Contemporary debates in epistemology (pp. 26–43). Malden, MA: Blackwell.

  • Hill, C. S., & Schechter, J. (2007). Hawthorne’s lottery puzzle and the nature of belief. Philosophical Issues, 17, 102–122.

  • James, W. (1897). The will to believe and other essays in popular philosophy. New York: Longmans Green and Company.

  • Klein, P. (1985). The virtues of inconsistency. The Monist, 68, 105–135.

  • Kung, P. (2010). On having no reason: Dogmatism and Bayesian confirmation. Synthese, 177(1), 1–17.

  • Kyburg, H. (1961). Probability and the logic of rational belief. Middleton, CT: Wesleyan University Press.

  • Kyburg, H. (1970). Conjunctivitis. In M. Swain (Ed.), Induction, acceptance, and rational belief (pp. 55–82). Dordrecht: D. Reidel.

  • Lehrer, K. (1990). Metamind. Oxford: Clarendon Press.

  • Leitgeb, H. (2014). A way out of the preface paradox. Analysis, 74(1), 11–15.

  • Lewis, D. (1996). Elusive knowledge. Australasian Journal of Philosophy, 74(4), 549–567.

  • Nelkin, D. (2000). The lottery paradox, knowledge, and rationality. The Philosophical Review, 109(3), 373–409.

  • Nozick, R. (1981). Philosophical explanations. Cambridge, MA: Harvard University Press.

  • Pollock, J. L. (1986). The paradox of the preface. Philosophy of Science, 53(2), 246–258.

  • Pritchard, D. (2003). Virtue epistemology and epistemic luck. Metaphilosophy, 34, 106–130.

  • Pritchard, D. (2005). Epistemic luck. Oxford: Oxford University Press.

  • Pryor, J. (2013). Problems for credulism. In C. Tucker (Ed.), Seemings and justification: New essays on dogmatism and phenomenal conservatism (pp. 89–132). Oxford: Oxford University Press.

  • Rendsvig, R., & Symons, J. (2019). Epistemic logic. In E. N. Zalta (ed.), The Stanford encyclopedia of philosophy. Retrieved January 18, 2020, from https://plato.stanford.edu/archives/sum2019/entries/logic-epistemic/.

  • Ryan, S. (1996). The epistemic virtues of consistency. Synthese, 109(2), 121–141.

  • Schiffer, S. (1996). Contextualist solutions to scepticism. Proceedings of the Aristotelian Society, 96, 317–333.

  • Schurz, G. (2019). Impossibility results for rational belief. Nous, 53(1), 134–159.

  • Scott, J. (2017). The odds of picking a perfect NCAA bracket, explained by a mathematician. CU Boulder Today. Retrieved January 18, 2020, from https://www.colorado.edu/today/2017/03/10/odds-picking-perfect-ncaa-bracket-explained-mathematician.

  • Smith, M. (2010). What else justification could be. Nous, 44(1), 10–31.

  • Sosa, E. (1999). How to defeat opposition to Moore. Philosophical Perspectives, 13(Epistemology), 141–153.

  • Sosa, E. (2000). Skepticism and contextualism. Philosophical Issues, 10(Skepticism), 1–13.

  • Sosa, E. (2004). Relevant alternatives, contextualism included. Philosophical Studies, 119, 35–65.

  • Timmerman, T. (2013). The persistent problem of the lottery paradox: And its unwelcome consequences for contextualism. Logos & Episteme, 4(1), 85–100.

  • Unger, P. (1975). Ignorance: A case for scepticism. Oxford: Oxford University Press.

  • Vogel, J. (1990). Are there counterexamples to the closure principle? In M. Roth & G. Ross (Eds.), Doubting: Contemporary perspectives on skepticism (pp. 13–28). Dordrecht: Kluwer.

  • Vogel, J. (1999). The new relevant alternatives theory. Philosophical Perspectives, 13(Epistemology), 155–180.

  • Williamson, T. (1996). Knowing and asserting. The Philosophical Review, 105(4), 489–523.

Cite this article

Engel, M. Lotteries, knowledge, and inconsistent belief: why you know your ticket will lose. Synthese 198, 7891–7921 (2021). https://doi.org/10.1007/s11229-020-02555-w
