This article argues that there can be epistemic dilemmas: situations in which one faces conflicting epistemic requirements with the result that whatever one does, one is doomed to do wrong from the epistemic point of view. Accepting this view, I argue, may enable us to solve several epistemological puzzles.
Little, but not none. Ross (2010) argues that there can be certain kinds of epistemic dilemmas. Srinivasan (2015), Christensen (2016), and (tentatively) Hawthorne and Srinivasan (2013) endorse views that come quite close to the idea. Moss (2013) argues that there can be epistemic dilemmas, but not of the sort that I am interested in here. Her view is that there are cases in which nothing speaks in favour of adopting one epistemic attitude over another incompatible one, but not that one is required to take each attitude. Moss’s view can be thought of as a kind of Permissivism.
I have in mind here the kind of rationality associated with reasonableness, rather than means-end coherence. On a different note, some philosophers treat ‘it is rational for you to \(\upvarphi \)’ as synonymous with ‘you ought to \(\upvarphi \)’. I think this is a mistake. The question whether you ought to always be rational cannot be settled by definitional fiat.
Gibbons (2013). I also think that the norm K, according to which one ought (epistemically) to only believe that P if one would thereby come to know that P, is genuine and non-optional (Unger 1975; Williamson 2000; Adler 2002; Sutton 2005, 2007; Littlejohn 2013). I won’t discuss it here because it is superfluous to our concerns. Given the factivity of knowledge, anyone who endorses K is committed to T, and K and T cannot conflict with one another in the way that I’ll be arguing T and R can. The reader is welcome to extend everything I say about T to K if they are also inclined to accept it.
You might think that since it is irrational to believe that which one knows to be false, only R is needed to explain what is wrong with each kind of belief, and so the wrongness of believing them does not motivate T. I disagree. If T were false, or merely optional, then it would be a mystery just why it is irrational to believe known falsehoods in the first place.
A referee has pointed out that in the MUG case the fact that P is false is not part of your evidence. They ask: in what epistemic sense ought you not believe it, then? Perhaps we could say that there are alethic grounds on which you ought not believe it, but epistemic? I suspect that the worry here is partly a terminological one. Some philosophers take phrases like ‘epistemically, you ought to believe that...’ to essentially make reference to what one ought to believe on the assumption that one ought to believe what the evidence supports. They take the phrase ‘epistemically, you ought to believe that...’ to be synonymous with ‘in order to conform to your evidence, you ought to believe that...’. Others use the phrase more broadly—to pick out a particular domain of normativity: roughly, that domain of normativity which is neither moral nor practical, but is instead involved in acquiring an accurate picture of the world. On this usage it is not assumed that only your evidence bears on what you ought to do, epistemically speaking (though that remains a possibility). Here I use the phrase ‘epistemically, you ought to believe that...’ in the latter sense. One may prefer to use it in the former sense, but any dispute about how it should be used would be merely verbal – it would not concern what one ought to believe, but rather what words we should use in articulating a theory of belief formation. Of course, it might be argued that we should reject T on evidentialist grounds, but to assume a form of evidentialism incompatible with T at this point would be to rule out the view that I will be developing prior to giving it a hearing.
Or at least, so say I. I don’t deny that this is controversial.
‘Hypological’ is a rarely used term. It comes from the Greek hypologos: ‘to hold accountable or liable’ (Srinivasan 2015). The deontic realm covers things like requirements, obligations, and permissions. The hypological realm covers things like blameworthiness, criticisability, and praiseworthiness.
If you suspend judgement on P, on the other hand, then you are, intuitively, epistemically blameworthy for doing so, even though you did the right thing by T. At this point you might think that I’m identifying rationality with blamelessness and irrationality with blameworthiness. I’m not. Whilst I think that you are rational and blameless if you believe that P in MUG, and irrational and—excuses notwithstanding—blameworthy if you suspend judgement on P or disbelieve it, I also think that there can be cases of blameless irrationality. My reasons will become apparent later.
That there is a distinction to be drawn between permissible behaviour and merely excusable behaviour is by now widely recognised (see Austin 1957; Gardner 1997, amongst others). In epistemology, it is often invoked by those who accept various knowledge norms (Williamson 2000; Hawthorne and Stanley 2008, for instance). Feldman (2008) also discusses the distinction.
If we endorse K along with T and R, this will have to be revised slightly, since the combination of R and T permits belief in Gettier cases, but K doesn’t.
Some of those with whom I have discussed the view have suggested that, whilst it shouldn’t be discarded out of hand, it should nevertheless be thought of as a last resort—a view to be adopted only if all else fails—on the grounds that it is somehow pessimistic. I find this mentality puzzling. I see no reason to rank potential solutions to the puzzle prior to investigating their pros and cons, and given what I judge to be the pros and cons of dilemmism, I think it should be very far from the last resort.
A referee has asked: do I take the dilemmic view to provide an argument for T and R? I do not. Rather, I take them to be data that must be accommodated by an epistemology of belief. The dilemmic view accommodates them, and that is something that speaks in its favour. The same goes for the fact that T and R are both non-optional. This is data that likewise must be accommodated, and in virtue of interpreting them as both issuing requirements the dilemmic view accommodates it, since whenever one is required to \(\upvarphi \), \(\upvarphi \)-ing is not optional in the way that it would be if it were supererogatory or merely permissible. This is what I mean when I say that dilemmism ‘vindicates’ the non-optionality of T and R. Arguments for T and R abound in the literature. See, for instance, Unger (1975), Williamson (2000), Wedgwood (2002), Shah (2003), Gibbons (2013), and Whiting (2013), amongst others, for arguments for T. See Wedgwood (2002), Gibbons (2013), Cohen & Comesana (forthcoming) and Hughes (forthcoming), amongst others, for arguments for R. The observation that conformity with each of T and R is non-optional is due to Gibbons (2013).
For good discussions of the idea that belief aims at truth see Williams (1973), Velleman (2000), Wedgwood (2002), Boghossian (2003), Shah (2003), Shah and Velleman (2005) and the essays collected in Chan (2013). (Note: I don’t mean to suggest that these authors all reject the idea that T expresses a requirement—one might think that belief aims at truth and that one is required to only believe truths).
The reason they think that T expresses a genuine requirement is because they think that K does, and T follows from K.
Although he doesn’t propose it as a solution to the puzzle I am interested in here, Feldman (2000) is an example of someone who thinks that the question of what you ought to believe all-things-considered is ill-formed and has no answer.
McConnell (1978), Marcus (1980) make this point about moral dilemmas.
The reader is free to substitute in here whatever logic they think is most robust.
My impression is that this is the general consensus in ethical theory.
Here’s how the explosion occurs:

1. OBP (assump.)
2. O\(\lnot \)BP (assump.)
3. O(BP & \(\lnot \)BP) (by AGG)
4. O(BP & \(\lnot \)BP) \(\rightarrow \) O\(\upvarphi \) (by OE and EFQ)
5. O\(\upvarphi \) (from 3 and 4)

(1) and (2) describe the dilemmic view. (3) follows from (1) and (2), given AGG. (4) follows from (3), given OE and EFQ, and (5) follows from (3) and (4).
Here’s how the contradiction arises:

1. OBP (assump.)
2. O\(\lnot \)BP (assump.)
3. OBP \(\rightarrow \) \(\lnot \)O\(\lnot \)BP (by PC)
4. \(\lnot \)O\(\lnot \)BP (from 1 and 3)

(1) and (2) describe the dilemmic view. (3) applies PC. (4) follows from (1) and (3), and (2) and (4) contradict one another.
To avoid confusion I should be clear that A-OIC should not be read as only claiming that if you have two conflicting requirements, they are jointly satisfiable. The number of requirements may be arbitrarily large. So A-OIC says that when you have 10, or 100, or 1000 requirements, and so on, they are jointly satisfiable. Here’s why dilemmism is incompatible with A-OIC:

1. OBP (assump.)
2. O\(\lnot \)BP (assump.)
3. \(\lnot \)C(BP & \(\lnot \)BP) (assump.)
4. (OBP & O\(\lnot \)BP) \(\rightarrow \) C(BP & \(\lnot \)BP) (by A-OIC)
5. C(BP & \(\lnot \)BP) (from 1, 2, and 4)

(1) and (2) describe the dilemmic view. (3) follows from the fact that BP and \(\lnot \)BP are logically incompossible. (4) is an instance of A-OIC, (5) follows from (1), (2), and (4), and (5) contradicts (3).
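For ease of reference, the deontic principles invoked in the derivations above can be stated schematically as follows. This is my own compact formulation, following standard presentations in deontic logic; it is intended only as a summary of the schemas already at work in the text, with O the obligation operator and C the ‘can’ (ability) operator:

```latex
\begin{align*}
\text{(AGG)}   \quad & (O\varphi \land O\psi) \rightarrow O(\varphi \land \psi)
  && \text{agglomeration}\\
\text{(OE)}    \quad & \text{if } \varphi \vdash \psi \text{, then } O\varphi \rightarrow O\psi
  && \text{deontic inheritance}\\
\text{(EFQ)}   \quad & \varphi \land \lnot\varphi \vdash \psi
  && \text{ex falso quodlibet}\\
\text{(PC)}    \quad & O\varphi \rightarrow \lnot O\lnot\varphi
  && \text{obligations are consistent}\\
\text{(A-OIC)} \quad & (O\varphi \land O\psi \land \ldots) \rightarrow C(\varphi \land \psi \land \ldots)
  && \text{agglomerated ought-implies-can}
\end{align*}
```

Rejecting any one of AGG, OE/EFQ, PC, or A-OIC blocks the corresponding derivation; this is the move the dilemmist must make.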
Rinard (forthcoming). The same objection is often made against the possibility of moral dilemmas.
A notable exception is Smith (2012).
A condition is luminous just in case whenever it obtains one is in a position to know that it obtains.
See Williamson (2000), Chap. 4.
Readers who are on the fence about the anti-luminosity argument are welcome to read much of the argument that follows as conditional: if no non-trivial condition is luminous, then (as we will see) many of the objections one might have to dilemmism lack bite.
Thus, access internalism is an untenable position.
Srinivasan (2015) also makes this point.
Smith (2012) also makes this point.
Might it be said that even if you don’t know that you’re required to \(\upvarphi \), but you believe that you are, and so you \(\upvarphi \) anyway, then you have still been guided by the requirement? I don’t think so. What you’ve been guided by is your belief that you are subject to the requirement. And this is still the case even if it turns out to be true that you are required to \(\upvarphi \). Perhaps instead it could be argued that even if it isn’t a desideratum on adequate guidance that whenever one is required to \(\upvarphi \) one is in a position to know about it, it is nevertheless a desideratum that one is in a position to truly believe it? I’m skeptical, but even if it were, dilemmism would satisfy this desideratum, for when you are in a conflict case you are in a position to truly believe that you both ought to believe that P and ought not believe that P. You have no reason to believe that, of course, but that’s a different matter; you can still do it. Perhaps instead could it be argued that even if it is not a desideratum on adequate guidance that you know that you are required to \(\upvarphi \), it is nevertheless a desideratum that it is probable on your evidence (to some degree n) that you are required to \(\upvarphi \)? I’m still skeptical. Williamson (2014) argues—again, convincingly in my view—that one can know that P even though it is arbitrarily improbable short of 0 on one’s evidence that one knows that P. And as Hawthorne and Srinivasan (2013) point out, this argument will extend to all non-trivial conditions. For any non-trivial condition C, it can obtain even though it is arbitrarily improbable short of 0 on one’s evidence that it obtains.
I don’t mean to suggest that only dilemmism instructs you to avoid misleading evidence. Any view on which T expresses a requirement will also do so. All the same, it is to the credit of dilemmism that it does too.
Cohen and Comesana (forthcoming).
I have in mind here dispositional rather than occurrent beliefs.
That said, unlike in the comparison with T-centric theories, the number of cases in which dilemmism fails to give you guidance can be larger than the number of cases in which R-centric theories fail, because dilemmism fails to give guidance both in cases in which you don’t know what the rational attitude to take is and in cases in which you know that R requires you to believe that P but (unbeknownst to you) T requires you not to. However, the question is whether the set of cases in which dilemmism fails to give guidance will be so much greater than the set in which R-centric theories fail to give guidance that we thereby have a reason to reject dilemmism and instead opt for R-centric theories. I find it hard to see how such a claim could be substantiated.
I don’t mean to suggest that anyone would think that ‘believe whatever you feel like believing’ is a good epistemology; I’m not trying to knock down a strawman here. Rather, I only want to use it to illustrate my point.
Why? Because the consequences of failing to meet one’s moral requirements are usually more serious than the consequences of failing to meet one’s epistemic requirements.
A-OIC entails OIC (just replace ‘\(\uppsi \)’ in A-OIC with ‘\(\upvarphi \)’), so we need not worry about the possibility of arguments for A-OIC that are distinct from arguments for OIC. Thanks to Tim Williamson for drawing this to my attention.
Though Graham doesn’t endorse the argument.
Robin McKenna has suggested to me that the argument could be revised as follows: 1. If one is required to \(\upvarphi \) one has a conclusive reason to \(\upvarphi \). 2. One cannot have a conclusive reason to \(\upvarphi \) and a conclusive reason to not \(\upvarphi \). 3. Therefore, one cannot be required to \(\upvarphi \) and at the same time required to not \(\upvarphi \). My worry about this revised argument is whether we have a sufficiently good grip on how conclusive reasons function to think that (2) is any more plausible than (3).
Whatever it is, it is perhaps worth noting that it is clearly not the notion at work in discussions of epistemic injustice such as Fricker (2007).
Even that’s not always right. If someone who should know better reads Breitbart so often that they find themselves psychologically incapable of believing that Donald Trump is prone to lying, it is quite natural to think that—excuses notwithstanding—they are criticisable and blameworthy, morally and epistemically for getting themselves into such a mess. Still, to say that someone is epistemically blameworthy in that case is not to say that they are epistemically blameworthy in conflict cases no matter what they do.
To my knowledge Hintikka (1969) was the first to present this argument.
As I said earlier, some logicians have already made a start on this task.
We need to be careful though. How often will the demands of steadfastness and conciliationism pull in opposite directions? The answer might be very often. If so, then a dilemmic epistemology of disagreement may be vulnerable to the objection that it fails to be guiding all too often.
But again, we need to be careful. How often will this view fail to be guiding?
Thanks to Daniel Greco, Tim Williamson, Jessica Brown, Robin McKenna, Clayton Littlejohn, Torfinn Huvenes, Finnur Dellsen, Michael Hannon, Sebastian Watzl, Peter Fritz, Juhani Yli-Vakkuri, Maria Baghramian, Cameron Boult, Caroline Krager, Olav Gjelsvik, Rowland Stout, Herman Cappelen, Matt McGrath, Adam Carter, Wesley Buckwalter, audiences at University College Dublin, the University of Oslo, the University of St Andrews, and KU Leuven, and two anonymous referees.
Adler, J. (2002). Belief’s own ethics. Cambridge: MIT Press.
Alston, W. (1988). The deontological conception of epistemic justification. Philosophical Perspectives, 2, 257–299.
Andric, V. (2015). Objective consequentialism and the rationales of ‘ought’ implies ‘can’. Ratio, 29, 1–16.
Austin, J. L. (1957). A plea for excuses. Proceedings of the Aristotelian Society, 57, 1–30.
Boghossian, P. (2003). The normativity of content. Philosophical Issues, 13(1), 31–45.
Bonjour, L. (1980). Externalist theories of empirical knowledge. Midwest Studies in Philosophy, 5(1), 53–74.
Booth, A. R. (2012). All things considered duties to believe. Synthese, 187, 509–517.
Brown, J. (2008). Subject-sensitive invariantism and the knowledge norm of practical reasoning. Nous, 42, 167–189.
Chan, T. (Ed.). (2013). The aim of belief. Oxford: Oxford University Press.
Chisholm, R. (1988). The indispensability of internal justification. Synthese, 74(3), 285–296.
Christensen, D. (2016). Conciliationism, uniqueness, and rational toxicity. Nous, 50, 584–603.
Chituc, V., et al. (2016). Blame, not ability, impacts moral “ought” judgements for impossible actions: Towards an empirical refutation of “ought” implies “can”. Cognition, 150, 20–25.
Cohen, S., & Comesana, J. (Forthcoming). Being rational and being right. In J. Dutant & F. Dorsch (Eds.), The new evil demon. Oxford: Oxford University Press.
Copp, D. (2003). Defending the principle of alternate possibilities: Blameworthiness and moral responsibility. Nous, 31, 441–456.
Douven, I. (2006). Assertion, knowledge, and rational credibility. Philosophical Review, 115(4), 449–485.
Driver, J. (1983). Promises, obligations, and abilities. Philosophical Studies, 44, 221–223.
Fantl, J., & McGrath, M. (2009). Knowledge in an uncertain world. Oxford: Oxford University Press.
Feldman, R. (2000). The ethics of belief. Philosophy and Phenomenological Research, 60(3), 667–695.
Feldman, R. (2008). Modest deontologism in epistemology. Synthese, 161, 339–55.
Fricker, M. (2007). Epistemic injustice. Oxford: Oxford University Press.
Gardner, J. (1997). The gist of excuses. Buffalo Criminal Law Review, 2(1), 575.
Gerken, M. (2011). Warrant and action. Synthese, 178(3), 529–547.
Gibbard, A. (1990). Wise choices, apt feelings. Oxford: Oxford University Press.
Gibbons, J. (2013). The norm of belief. Oxford: Oxford University Press.
Goble, L. (2005). A logic for deontic dilemmas. Journal of Applied Logic, 3(3), 461–483.
Graham, P. (2011). ‘Ought’ and ability. Philosophical Review, 120, 337–382.
Griffin, J. (1992). The human good and the ambitions of consequentialism. In E. F. Paul, D. Miller, & J. E. Paul (Eds.), The good life and the human good. Cambridge: Cambridge University Press.
Hansen, J., et al. (2007). Ten philosophical problems in deontic logic. In G. Boella, L. van der Torre, & H. Verhagen (Eds.), Normative multi-agent systems. Dagstuhl Seminar Proceedings, Vol. 07122.
Hare, R. M. (1963). Freedom and reason. Oxford: Oxford University Press.
Hawthorne, J. (2004). Knowledge and lotteries. Oxford: Oxford University Press.
Hawthorne, J., & Stanley, J. (2008). Knowledge and action. Journal of Philosophy, 105(10), 571–590.
Hawthorne, J., & Srinivasan, A. (2013). Disagreement without transparency: Some bleak thoughts. In J. Lackey & D. Christensen (Eds.), The epistemology of disagreement. Oxford: Oxford University Press.
Hintikka, J. (1969). Deontic logic and its philosophical morals. In Models for modalities. Dordrecht: D. Reidel.
Hornsby, J. (2007). Knowledge in action. Action in context (pp. 285–302). New York: De Gruyter.
Horowitz, S. (2014). Epistemic akrasia. Nous, 48(3), 718–744.
Horty, J. (2003). Reasoning with moral conflicts. Nous, 37(4), 557–605.
Hudson, J. (1989). Subjectivization in ethics. American Philosophical Quarterly, 26(3), 221–229.
Hughes, N. (Forthcoming). Uniqueness, rationality, and the norm of belief. Erkenntnis
Hyman, J. (1999). How knowledge works. Philosophical Quarterly, 49(197), 433–451.
Jackson, F. (1991). Decision-theoretic consequentialism and the nearest and dearest objection. Ethics, 101, 461–482.
Kvanvig, J. (2011). Norms of assertion. In J. Brown & H. Cappelen (Eds.), Assertion. Oxford: Oxford University Press.
Lackey, J. (2007). Norms of assertion. Nous, 41(4), 594–626.
Lasonen-Aarnio, M. (2010). Unreasonable knowledge. Philosophical Perspectives, 24, 1–21.
Lasonen-Aarnio, M. (forthcoming). Virtuous failure and victims of deceit. Dutant, J., and Dorsch, F. (eds.) The New Evil Demon (OUP)
Littlejohn, C. (2012). Justification and the truth-connection. Cambridge: Cambridge University Press.
Littlejohn, C. (2013). The Russellian retreat. Proceedings of the Aristotelian Society, 113(3), 293–320.
Marcus, R. B. (1980). Moral dilemmas and consistency. Journal of Philosophy, 77(3), 121–136.
Moss, S. (2013). Epistemology formalized. Philosophical Review, 122(1), 1–43.
Nelson, M. (2010). We have no positive epistemic duties. Mind, 119(473), 83–102.
Neta, R. (2009). Treating something as a reason for action. Nous, 43, 684–699.
Nussbaum, M. (1986). The fragility of goodness. Cambridge: Cambridge University Press.
Rinard, S. (Forthcoming). Reasoning one’s way out of skepticism. Brill Studies in Skepticism.
Ross, W. D. (1930). The right and the good. Oxford: Oxford University Press.
Ross, J. (2010). Sleeping beauty, countable additivity, and rational dilemmas. Philosophical Review, 119(4), 411–447.
Ryan, S. (2003). Doxastic compatibilism and the ethics of belief. Philosophical Studies, 114, 47–79.
Ryan, S. (2015). In defense of moral evidentialism. Logos and Episteme, 6(4), 405–427.
Sayre-McCord, G. (1986). Deontic logic and the priority of moral theory. Nous, 20, 179–197.
Shah, N. (2003). How truth governs belief. Philosophical Review, 112(4), 447–482.
Shah, N., & Velleman, D. (2005). Doxastic deliberation. Philosophical Review, 114(4), 497–534.
Siegel, S. (2013). The epistemic impact of the etiology of experience. Philosophical Studies, 162, 697–722.
Smith, H. (2012). Using moral principles to guide decisions. Philosophical Issues, 22, 369–386.
Smithies, D. (2011). Moore’s paradox and the accessibility of justification. Philosophy and Phenomenological Research, 85(2), 273–300.
Srinivasan, A. (2015). Normativity without cartesian privilege. Philosophical Issues, 25(1), 273–299.
Stapleford, S. (2013). Imperfect epistemic duties and the justificational fecundity of evidence. Synthese, 190, 4065–4075.
Stapleford, S. (2015). Epistemic versus all things considered requirements. Synthese, 192, 1861–1881.
Streumer, B. (2007). Reasons and impossibility. Philosophical Studies, 136, 351–384.
Sutton, J. (2005). Stick to what you know. Nous, 39(3), 359–396.
Sutton, J. (2007). Without justification. Oxford: MIT Press.
Turri, J., & Blouw, P. (2015). Excuse validation: A study in rule-breaking. Philosophical Studies, 172(3), 615–634.
Unger, P. (1975). Ignorance. Oxford: Clarendon Press.
van Fraassen, B. (1973). Values and the heart’s command. Journal of Philosophy, 70(1), 5–19.
Velleman, D. (2000). On the aim of belief. In D. Velleman (Ed.), The possibility of practical reason. Oxford: Oxford University Press.
Wedgwood, R. (2002). The aim of belief. Philosophical Perspectives, 16, 267–297.
Whiting, D. (2013). Nothing but the truth: On the aims and norms of belief. In T. Chan (Ed.), The aim of belief (pp. 184–204). Oxford: Oxford University Press.
Williams, B. (1965). Ethical consistency. In Proceedings of the Aristotelian Society, Supplementary volumes 39 (pp. 103–138).
Williams, B. (1973). Deciding to believe. Problems of the self (pp. 136–151). Cambridge: Cambridge University Press.
Williams, B. (1989). Internal reasons and the obscurity of blame. Making sense of humanity (pp. 35–46). Cambridge: Cambridge University Press.
Williamson, T. (2000). Knowledge and its limits. Oxford: Oxford University Press.
Williamson, T. (2014). Very improbable knowing. Erkenntnis, 79, 971–999.
Hughes, N. Dilemmic Epistemology. Synthese 196, 4059–4090 (2019). https://doi.org/10.1007/s11229-017-1639-x
Keywords: Epistemic dilemma · Truth norm · Knowledge norm · Epistemic rationality · Action guidance · Epistemic ought-implies-can · Deontic logic