Evidentialism and pragmatic constraints on outright belief

Abstract

Evidentialism is the view that facts about whether or not an agent is justified in having a particular belief are entirely determined by facts about the agent’s evidence; the agent’s practical needs and interests are irrelevant. I examine an array of arguments against evidentialism (by Jeremy Fantl, Matthew McGrath, David Owens, and others), and demonstrate how their force is affected when we take into account the relation between degrees of belief and outright belief. Once we are sensitive to one of the factors that secure thresholds for outright believing (namely, outright believing that p in a given circumstance requires, at the minimum, that one’s degree of belief that p is high enough for one to be willing to act as if p in the circumstances), we see how pragmatic considerations can be relevant to facts about whether or not an agent is justified in believing that p—but largely as a consequence of the pragmatic constraints on outright believing.


Notes

  1. Given the agent’s beliefs about her circumstances, her desires, interests and purposes, their relative strength and importance, the range of actions available to her, and the likelihood of various outcomes given certain actions, we can offer a teleological explanation of an action by noting that the action is one the agent regards as the most effective means to the relevant ends.
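    The shape of such an explanation can be rendered in simple decision-theoretic terms. The following sketch is purely illustrative—the scenario, action names, and numbers are all invented—but it shows how degrees of belief (probabilities), weighted desires (desirabilities), and available actions combine to single out the act the agent regards as the most effective means to her ends:

        # Illustrative sketch: an action is teleologically explicable when it
        # maximizes expected desirability relative to the agent's beliefs and
        # weighted desires. All names and numbers here are hypothetical.

        def expected_desirability(action, outcomes, prob, desirability):
            """Desirability of each outcome, weighted by the agent's degree of
            belief that the action brings that outcome about."""
            return sum(prob[action][o] * desirability[o] for o in outcomes)

        outcomes = ["arrive_on_time", "arrive_late"]
        prob = {  # agent's degrees of belief: P(outcome | action)
            "take_express": {"arrive_on_time": 0.9, "arrive_late": 0.1},
            "take_local":   {"arrive_on_time": 0.5, "arrive_late": 0.5},
        }
        desirability = {"arrive_on_time": 10, "arrive_late": -5}  # relative strengths

        best = max(prob, key=lambda a: expected_desirability(a, outcomes, prob, desirability))
        print(best)  # "take_express": the act she regards as the most effective means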

  2. Here I follow Howson and Urbach (1993, p. 75), who make a good case for resisting the temptation to try to define degrees of belief in terms of dispositions for various sorts of behavior (generally involving betting) under certain conditions.

  3. This formulation of evidentialism is offered by Fantl and McGrath (2002), who attribute the view to Conee and Feldman (1985). Conee and Feldman’s evidentialism—understood broadly as the view that facts about whether or not a person’s doxastic attitude is epistemically justified depend entirely on that person’s evidence—is typically contrasted with reliabilist accounts of justification. In this discussion, as in Fantl and McGrath (2002) and Weatherson (2005), the relevant contrast position is the view that pragmatic, non-evidential considerations can have a bearing on whether or not a person’s believing a proposition is epistemically justified.

    We should construe evidence quite broadly here to include a wide variety of epistemic reasons for believing p, as well as any relevant background knowledge (or at least all the evidence supporting the background beliefs which are needed to underwrite the justification). Otherwise, evidentialism would be subject to the quick objection that two subjects faced with the same evidence for p, but possessing different degrees of background knowledge (e.g. a scientist and a child), could differ with respect to being justified in believing that p.

  4. Naturally, the debate over evidentialism is premised on the idea that a legitimate distinction between pragmatic/practical and epistemic/evidential reasons can be drawn. If the distinction were not genuine, as some pragmatists claim, then there would be no real distinction between practical and epistemic justification, and hence no distinctively epistemic sense of justification which pragmatic concerns could or couldn’t bear upon. Practical reasons for belief would always be relevant to justification, call it “epistemic” or what you will, because such reasons would be the only sort we ever have anyway. Intuitively, there seems to be a clear difference between epistemic and practical reasons for belief: Aren’t the former simply those reasons which advance the truth-oriented or cognitive ends we have in believing, whereas the latter are those which help secure our purely practical ends? This natural suggestion falls short because adopting a false, evidentially baseless belief might sometimes be the best way to advance our epistemic ends overall. For example, by falsely believing that he is very smart on the basis of his feeling a tickle, Harry will have enhanced self-confidence and discipline, motivating him to study more and learn more truths in college. Despite the role taking the tickle as evidence of smartness plays in furthering Harry’s cognitive goals overall, we are hardly inclined to count Harry’s reason for thinking he is smart as epistemic. A simple reply seems to do the trick: since an epistemic reason for believing p ought to make p more likely to be true, an epistemic reason for believing p has to advance the epistemic goal of believing p when and only when p is true and not just one’s other epistemic goals. But such a response seems to saddle Harry with having the epistemic goal of believing that he is smart when and only when he is smart. Does Harry really have such a goal? Why should he—particularly when, given his personality structure, having it might make some of his more cherished epistemic goals inaccessible? The matter is a complicated one, and well deserving of greater attention. For purposes of this paper, however, we will suppose, as others do who are engaged in the debate over evidentialism, that the distinction between practical and epistemic reasons for belief, and pragmatic and epistemic justification is perfectly in order.
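    (Put compactly, the first half of the simple reply amounts to: e is an epistemic reason to believe p only if Pr(p | e) > Pr(p); in Harry’s case, plausibly Pr(smart | tickle) = Pr(smart), so the tickle fails the test. This probabilistic rendering is my gloss, not a formula from the debate itself.)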

  5. See, for example, DeRose (1992, pp. 913–916).

  6. Fantl and McGrath’s argument proceeds through a series of stages, beginning with a strengthening of an intuitive closure argument concerning knowledge. We cannot include every detail of their lengthy defense here, though I hope the following rough and somewhat liberal reconstruction will highlight and elucidate the most essential points. We begin at a slightly advanced stage of their discussion, with an enhanced version of their original intuitive closure argument. (Note that rational preference for states of affairs is simply a broader category which encompasses preferences concerning actions on the part of the agent.)

    (1′) S knows that p.

    (2′) S knows that if p, A is better for S than B (given her needs and interests).

    Therefore,

    (3′) S is rational to prefer A to B.

    To conclude that S wouldn’t be rational to prefer state of affairs A, even though she knows A is preferable to B, given p, would be to concede that S doesn’t genuinely know that p. (Note that here, as elsewhere in the argument, the relevant sense of rational isn’t purely practical. What the authors seem to mean by ‘S is rational to prefer A to B’ is that S thinks A will satisfy her needs and interests to a greater extent than B, and has good grounds for so thinking.) A parallel point can be made even when S is simply rational, i.e. has good reason, to think (yet might fail to know) that A is better for her than B, given p.

    (1′′) S knows that p.

    (2′′) S is rational to prefer A to B, given p. [S has good reason to think that if p, A is better for her than B.]

    Therefore,

    (3′′) S is rational to prefer A to B in fact.

    To conclude that S wouldn’t be rational to prefer A, even though she is rational to think that A is preferable to B, given p, would be to concede that S doesn’t genuinely know that p. If you know that p, and are in a position to make reasonable judgments about what’s best given p, there shouldn’t be any problem in preferring as if p.

    Fantl and McGrath operate with what they take to be a standard way of understanding ‘S is justified in believing that p,’ namely, as ‘S has good enough evidence to know that p’ (if S fails to know, it’s not on account of S’s lack of evidence). Presupposing this interpretation, they argue that (1′′)–(3′′) can be modified to apply to a subject who is merely justified in believing that p.

    (1′′′) S is justified in believing that p.

    (2′′′) S is rational to prefer A to B, given p.

    Therefore,

    (3′′′) S is rational to prefer A to B in fact.

    If S fails to know that p, it isn’t because S fails to have enough evidence to know that p, so we can consider a subject S′ who has exactly the same evidence, needs and interests as S, but who knows that p. Since the rationality of a preference is entirely the product of one’s evidence, needs, and interests, and S and S′ have the same evidence, needs, and interests, whatever is a rational preference for S′, who knows that p, is also a rational preference for S, who is merely justified in believing that p. So if both are rational to prefer A to B, given p, both are rational to prefer A to B in fact.

    Switching (2′′′) and (3′′′) produces yet another valid argument:

    (1′′′′) S is justified in believing that p.

    (2′′′′) S is rational to prefer A to B in fact.

    Therefore,

    (3′′′′) S is rational to prefer A to B, given p.

    Given (1′′′′) and (2′′′′), could (3′′′′) turn out to be false? The authors argue against this possibility. Suppose (3′′′′) is false. Then either (I) S is rational to prefer B to A, given p, or (II) S is rational to be indifferent between A and B, given p. Appealing to the previous argument, (1′′′′) and (I) imply that S is rational to prefer B to A in fact, which contradicts our stipulation (2′′′′). Parallel reasoning suggests that (1′′′′) and (II) also lead to a conclusion which contradicts (2′′′′): if S is justified in believing that p, and S is rational to be indifferent between A and B, given p, then S is rational to be indifferent between A and B—a conclusion at odds with our stipulation that S is rational to prefer A to B in fact. (1′′′)–(3′′′) and (1′′′′)–(3′′′′) can be converted into a principle articulating a pragmatic necessary condition on justification:

    • (NC) S is justified in believing that p only if, for any states of affairs A and B, S is rational to prefer A to B, given p, iff S is rational to prefer A to B in fact.

    This principle can be reworded as:

    • (PC) S is justified in believing that p only if S is rational to prefer as if p.

    Preferences for states of affairs encompass preferences for actions, so we see that the principle with which we began, (PCA), is simply a special case of (PC):

    • (PCA) S is justified in believing that p only if S is rational to act as if p.
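    Rendered schematically (the abbreviations J and RatPref are mine, not Fantl and McGrath’s), with J(S, p) for ‘S is justified in believing that p’ and RatPref_S(A, B) for ‘S is rational to prefer A to B’, (NC) says:

        J(S, p) → ∀A ∀B (RatPref_S(A, B | p) ↔ RatPref_S(A, B))

    (PC) and (PCA) then simply abbreviate the consequent as ‘S is rational to prefer (act) as if p.’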

  7. Note that the Bayesian evidentialist won’t be committed to the view that two subjects S and S′ with the same evidence e for p cannot differ with respect to whether or not their degree of belief that p is epistemically justified: S and S′ might assign radically different likelihoods for p in light of e, only one of which is rationally acceptable. But S and S′, if they have the same evidence, cannot differ with respect to whether or not having a particular degree of belief that p is epistemically justified.
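    The point can be made concrete with a toy Bayesian calculation (a minimal sketch: the prior, the likelihoods, and the two-hypothesis setup are invented for illustration):

        # Two agents share evidence e and prior P(p), but assign different
        # likelihoods P(e | p); their posteriors for p then diverge, and at
        # most one of the likelihood assignments is rationally acceptable.

        def posterior(prior, like_if_p, like_if_not_p):
            # Bayes' theorem: P(p | e) = P(e | p)P(p) / P(e)
            return (like_if_p * prior) / (
                like_if_p * prior + like_if_not_p * (1 - prior))

        prior = 0.5
        print(posterior(prior, 0.8, 0.2))  # S:  P(p | e) = 0.8
        print(posterior(prior, 0.3, 0.2))  # S': P(p | e) = 0.6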

  8. An anti-evidentialist might be tempted to argue that the degree of harmfulness of error could have a bearing on what degree of credence boost is warranted: when more is at stake, a more conservative updating policy is rationally required; when less is at stake, a more generous credence boost is rationally acceptable. For this approach to target evidentialism, however, the relevant notions of warrant and rationality would have to be purely epistemic—a weak point of the strategy. Troubling, too, are the effects of moving between situations that present higher or lower levels of risk should p be false: the proposal in question implies that one would be rationally required to raise or lower one’s subjective probability for p despite the lack of new evidence. That the anti-evidentialists Fantl and McGrath would themselves resist this option is supported by the following passage: “But it ought to be common ground between theories of evidence that having a lot at stake in whether p is true does not, by itself, provide evidence for or against p. Evidence for p ought to raise the probability of p’s truth (in some appropriate sense of ‘probability’). But having a lot at stake in whether p is true doesn’t affect its probability, except in rare cases in which one possesses special background information.” (Fantl and McGrath 2002, p. 69)

  9. In his celebrated Knowledge and Its Limits, Timothy Williamson makes a proposal along these lines (though he transcends the contextual/non-contextual dualism by introducing the idea that outright belief itself comes in degrees): “What is the difference between believing p outright and assigning p a high subjective probability? Intuitively, one believes p outright when one is willing to use p as a premise in practical reasoning. Thus one may assign p a high subjective probability without believing p outright, if the corresponding premise in one’s practical reasoning is just that p is highly probable on one’s evidence, not p itself. Outright belief still comes in degrees, for one may be willing to use p as a premise in practical reasoning only when the stakes are sufficiently low.” (Williamson 2000, p. 99)

    Saying that we have degrees of belief (subjective probability) and degrees of outright belief appears to introduce an unnecessary complication. Our degree of willingness to use p as a premise in practical reasoning is a direct product of our degree of belief that p and our fundamental preferences, so why not simply stick with degrees of belief? It seems simpler just to say that, in some circumstances, our degree of confidence is high enough so that we are inclined to use p as a premise in practical reasoning (i.e., to act as if p). In those circumstances, we can count as outright believing that p. For those circumstances where our degree of belief isn’t high enough for us to use p as a premise in practical reasoning, we simply fail to count as outright believing that p in those circumstances (not: we still have some measure of degree of outright belief that p, as distinguished from our degree of belief that p, because of our willingness to act as if p in some other range of circumstances).
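    A toy expected-utility calculation illustrates why one scalar degree of belief suffices (the payoff numbers and the non-negative expected-utility test are my own illustrative assumptions, not a proposal from Williamson):

        # One scalar credence; whether it is "high enough to act as if p"
        # varies with the stakes, so no separate degree of outright belief
        # is needed.

        def willing_to_act_as_if(credence, gain_if_true, loss_if_false):
            # Act as if p iff the expected utility of doing so is non-negative.
            return credence * gain_if_true - (1 - credence) * loss_if_false >= 0

        credence = 0.95
        print(willing_to_act_as_if(credence, 1, 1))    # True: low stakes
        print(willing_to_act_as_if(credence, 1, 100))  # False: high stakes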

  10. Though I admit a pragmatist element to outright believing, I do not favor a purely pragmatist account of outright belief. What is constitutive of outright believing that p in a given circumstance is (i) that one’s degree of belief is high enough for one to be willing to act as if p is true and (ii) that one aims to have one’s degree of belief that p answerable to the extent to which p seems likely to be true. One aims to have one’s degree of belief well-calibrated to (what one takes to be) one’s evidence.

  11. In case some readers are wary of the notion that ascriptions of outright belief are context-sensitive, we might try to salvage a context-insensitive interpretation of C as necessary for outright belief by requiring only that the outright believer possess a willingness to act as if p in most (rather than all) circumstances where the evidence for p is unchanged. Exactly how the “most” should be understood is unclear: it probably shouldn’t be understood as most relative to all logically possible scenarios—that would be too demanding; on the other hand, if we mean most normal circumstances, or most of those circumstances sufficiently like the believer’s current circumstances in the relevant respects, we face the further difficulty of specifying what circumstances count as normal, or sufficiently like the believer’s current circumstances. Some ways of resolving these further questions could make the proposal indistinguishable from the context-sensitive interpretation. Furthermore, it’s unclear how we should understand the train case under the context-insensitive interpretation. In some sense, both subjects possess the same proclivities to act, relative to various circumstances: they have the same fundamental preferences, assign the same degree of belief to p, and differ only with respect to which circumstances are actual. Do both or neither or only one count as outright believing that p?

    Extraordinary circumstances with unusually severe consequences in the case of error might incline an otherwise confident believer to refrain from acting as if p. Does ordinary practice dictate that such a subject still counts as believing that p, on account of her general proclivities under more normal circumstances? Or must we say that, while she counts as a believer in more normal settings, strictly speaking she doesn’t outright believe that p in the extraordinary setting? Since the matter seems under-determined by the data of our everyday experience, we may as well avoid the complications inherent in the context-insensitive interpretation and stick with the context-sensitive one.

  12. A possible exception to condition C could arise for a rather peculiar sort of irrational agent. Say S thinks it would be best for her to take the shortest road to Elyria, and she has a very high degree of confidence q that taking path A is the shortest road to Elyria, but she has a bizarre phobia which makes it psychologically impossible for her to choose to take path A when she has degree of belief q that path A is the shortest road to Elyria (we can imagine only absolute certainty would bypass the problem). Even though her degree of belief that p is not high enough for her to be willing to act as if p, it still seems as though she could count as believing that p. Such situations are so atypical—at such great remove from the usual kinds of circumstances where our ordinary concept of belief is at play—that it should come as no surprise that the concept is put under some strain here.

  13. Two authors who, in addition to Williamson, admit an even tighter relation between believing that p and being disposed to act as if p are Stalnaker (1987) and Weatherson (2005), both of whom accept a pragmatist, functionalist view of belief. Stalnaker writes: “To say that an agent believes that P is to say something like this: the actions that are appropriate for that agent—those that he is disposed to perform—are those that will tend to serve his interests and desires in situations in which P is true.” (Stalnaker 1987, p. 82) And Weatherson notes: “A better move is to start with the functionalist idea that to believe that p is to treat p as true for the purposes of practical reasoning. To believe p is to have preferences that make sense, by your own lights, in a world where p is true.” (Weatherson 2005, p. 421). Weatherson expands upon this basic pragmatist insight in a way which places special emphasis on conditional preferences—“an agent believes that p iff conditionalizing on p doesn’t change any conditional preferences over things that matter” (Weatherson 2005, p. 422)—a move which may introduce some difficulties (see note 14 below).

    Both Stalnaker and Weatherson develop accounts of belief which respect an intuitive, yet somewhat controversial closure principle (given certain interpretations of the lottery and preface paradoxes): if an agent believes p and believes q, then she also believes p ∧ q. While typical threshold accounts of the relation between degrees of credence and outright belief might fail to accommodate this principle, the modest view I put forward here allows for it. The principle could be regarded as an additional necessary constraint on outright belief.
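    A quick worked example makes vivid why simple threshold views run afoul of the principle (the credences, the threshold, and the independence assumption are invented for illustration):

        # A coherent agent whose credences in p and q each clear a fixed
        # threshold r can still fall below r for the conjunction.

        cred_p, cred_q, r = 0.9, 0.9, 0.85
        cred_p_and_q = cred_p * cred_q  # 0.81, assuming p and q independent
        print(cred_p >= r, cred_q >= r, cred_p_and_q >= r)  # True True False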

  14. Since independently developing the views expressed in this paper, I have found a kindred spirit in Weatherson (2005), who also explores the possibility that the pragmatic sensitivity expressed in certain normative epistemic principles may be derivable from pragmatic conditions on belief. Weatherson, too, challenges Fantl and McGrath’s interpretation of the train example, suggesting that while two agents with the same evidence for a proposition and the same degree of credence cannot differ in whether that degree of credence is justified, one could count, and the other fail to count, as believing the given proposition on the basis of practical differences in their situations. On this view, practical interests matter to philosophy of mind (insofar as they are relevant to determining whether a person’s doxastic state counts as belief), but not really to epistemology per se. While there are some points of agreement between Weatherson and myself, I am inclined to reject the theory of belief which he relies on, as well as the further criticisms he raises against counting Fantl and McGrath’s PCA (S is justified in believing that p only if S is rational to act as if p) as a pragmatic necessary condition on justification (see note 16).

    Weatherson’s theory is far too complex to survey in any detail here, though his guiding insight is roughly the idea that the propositions you believe are those which leave all your conditional preferences unchanged (relative to all genuinely possible options and propositions you are disposed to take seriously). He expresses his central thesis as follows, where “A ≥_q B” means the agent regards action A as at least as good as action B, given q. The first two quantifiers below range over all live, salient actions, and the third ranges over all non-far-fetched propositions q compatible with p whose truth makes a practical difference—i.e. propositions q where conditionalizing on q changes the agent’s preferences with respect to some live, salient actions:

        Bel(p) ↔ ∀A ∀B ∀q (A ≥_q B ↔ A ≥_{p∧q} B)

    Weatherson supplements this thesis with some additional constraints in order to deflect counterexamples involving propositions you believe whose truth or falsehood makes no practical difference.

    Weatherson is unhappy with what he designates as “threshold” views about the relation between belief and degree of belief, the view that “S believes that p iff S’s credence in p is greater than some salient number r, where r is made salient either by the context of belief ascription, or the context that S is in” (Weatherson 2005, p. 420). A probabilistically coherent agent who believes p and believes q should also count as believing p ∧ q, yet threshold views fail to accommodate this intuitive closure principle. Weatherson’s alternative theory avoids this pitfall, but it does so at the cost of issuing a counter-intuitive verdict for a certain class of cases where an agent believes p, yet would no longer do so should strongly countervailing (even if not decisive) evidence q against p arise—a possibility which the agent thinks is unlikely.

    Consider, for instance, a fair-minded juror who wants to do the best job she can bringing the guilty to justice while protecting the innocent. The juror believes that a defendant is innocent on the basis of strong evidence: a different person has recently confessed to the crime, the defendant’s footprints fail to match those at the crime scene, he has a strong alibi, etc. DNA test results have been delayed, and may not be forthcoming, but the juror is fairly confident that the defendant’s DNA will not be a match if testing is completed. If the test does turn out positive for a match, the juror would change her mind about the defendant’s innocence and issue a “guilty” verdict (a positive match speaks strongly, even if not decisively, against innocence). The thought of sending an innocent person to jail is pretty horrific for her, though, so she concedes that in the unlikely scenario where the defendant is in fact innocent and the DNA test is positive, she would prefer to recommend the verdict “innocent.” In this case:

        p: the defendant is innocent
        q: the DNA test is completed and is positive for a match
        A: recommend the verdict “guilty”
        B: recommend the verdict “innocent”

        A < B,  A >_q B,  A <_{q∧p} B

    The juror believes that p, even though p does change some of her conditional preferences, contrary to what Weatherson’s theory requires.
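    To make the structure of the counterexample explicit, here is a minimal encoding of the juror’s conditional preferences (the encoding and names are my own reconstruction, restricted to the single problematic proposition q; Weatherson’s actual test quantifies more widely):

        # Weatherson's test: Bel(p) iff, for all live actions A, B and all
        # relevant q, A >=_q B iff A >=_{p & q} B. geq[cond] holds the ordered
        # pairs (X, Y) with X weakly preferred to Y, given cond.

        geq = {
            "q":       {("A", "B")},  # DNA match: "guilty" (A) over "innocent" (B)
            "p_and_q": {("B", "A")},  # match but innocent: "innocent" over "guilty"
        }

        def weatherson_bel_p():
            pairs = [("A", "B"), ("B", "A")]
            return all((pr in geq["q"]) == (pr in geq["p_and_q"]) for pr in pairs)

        print(weatherson_bel_p())  # False: yet intuitively the juror believes p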

  15. While much of my discussion is most naturally interpreted as concerning ex post justification, when ex ante justification is at issue a concern can be raised about my claim that PCA is not an independent source of a pragmatic constraint for practically rational agents. Say S is practically rational, but epistemically irrational: she does not believe p even though she should, because there is ample evidence (believing p is ex ante justified for S). S is rational to act as if p, in Fantl and McGrath’s sense, and PCA is satisfied. But S does not have a high enough degree of belief to be willing to act as if p—not because of practical irrationality, but simply because her degree of belief that p is so low (too low, indeed, to be epistemically rational). PCA and PJ appear to come apart: either because PJ does not apply in cases of ex ante justification, or perhaps because PJ simply fails. I think, however, that PJ can be understood in ways that would make it potentially relevant to accounts of ex post and ex ante justification. We can ask, of an agent who actually outright believes that p, whether or not her available evidence supports her degree of belief, which is high enough for her to be willing to act as if p. We can ask, of an agent who may not actually outright believe that p, whether or not her available evidence supports a degree of belief high enough such that the agent would be willing to act as if p were she to have that degree of belief. When the latter, ex ante reading is adopted, I believe that the above concern can be addressed. Since S is practically rational, we can take it that the available evidence supports a degree of belief high enough such that S would be willing to act as if p were she to have that degree of belief (which, alas, she does not). PJ is satisfied, just like PCA. I thank an anonymous reviewer for pointing out the need to address this issue.

  16. Weatherson presents an objection to Fantl and McGrath’s PCA (S is justified in believing that p only if S is rational to act as if p) by way of a complicated counterexample. I have a difficult time seeing how Weatherson’s example poses a potential challenge to Fantl and McGrath’s principle, unless we take him to be construing PCA in a way which, however natural, is ultimately at odds with the authors’ intentions. He presents an example where two agents are intuitively justified in believing that p (p is well supported by their evidence), and they do act as if p—what they actually prefer to do is what they would prefer to do, given the truth of p—but they are not rational in their choice of action in so far as their conception of which action is best is dependent on their beliefs in some other claims which are countered by their evidence. A different choice would have struck them as the best, as utility maximizing, had they responded to their evidence appropriately. The agents are, then, in some sense not rational to act as they do—a sense which Weatherson spells out in the following quote: “If we take rational decisions to be those that maximize utility given a rational response to the evidence, then the decisions are clearly not rational.” (Weatherson 2005, p. 439) This looks, superficially, like a case which counters PCA (the antecedent is true, and the consequent appears to be false), but not when we take into account that “S is rational to act as if p” is taken by Fantl and McGrath as equivalent to, or shorthand for “for all acts A, S is rational to do A, given p iff S is rational to do A in fact.” (Fantl and McGrath 2002, p. 77) The agents in question count as being rational to prefer as if p (in Fantl and McGrath’s sense) because both flanks of the biconditional are false. It remains true in Weatherson’s example that what is rational for the agents to do (utility maximizing, given a well grounded response to the evidence), is the same as what is rational for the agents to do, given p.

  17. I thank an anonymous reviewer for bringing this kind of case to my attention.

  18. See pp. 103–104.

  19. Thanks to Peter McInerney, Jim Bell, Tim Hall, Kate Thomson-Jones, and Martin Thomson-Jones for helpful discussion of an earlier draft of this paper. Special thanks are due to Todd Ganson for his considerable feedback and support, as well as to an anonymous reviewer for insightful comments and questions.

References

  • Cohen, S. (1999). Contextualism, skepticism, and the structure of reasons. Philosophical Perspectives, 13, 57–89.

  • Conee, E., & Feldman, R. (2004). Evidentialism. Oxford: Clarendon Press.

  • DeRose, K. (1992). Contextualism and knowledge attributions. Philosophy and Phenomenological Research, 52, 913–923.

  • Fantl, J., & McGrath, M. (2002). Evidence, pragmatics, and justification. The Philosophical Review, 111(1), 67–94.

  • Howson, C., & Urbach, P. (1993). Scientific reasoning: The Bayesian approach. Chicago: Open Court.

  • James, W. (1896). The will to believe. New York: Longmans, Green & Co.

  • Owens, D. (2000). Reason without freedom. London: Routledge.

  • Stalnaker, R. (1987). Inquiry. Cambridge, MA: MIT Press.

  • Weatherson, B. (2005). Pragmatic encroachment? Philosophical Perspectives, 19, 417–443.

  • Williamson, T. (2000). Knowledge and its limits. Oxford: Oxford University Press.


Ganson, D. Evidentialism and pragmatic constraints on outright belief. Philos Stud 139, 441–458 (2008). https://doi.org/10.1007/s11098-007-9133-9
