Moderate Skeptical Invariantism

Original Research · Published in Erkenntnis

Abstract

I introduce and defend a view about knowledge that I call Moderate Skeptical Invariantism. According to this view, a subject knows p only if she is practically certain that p, where practical certainty is defined as the degree of confidence a rational subject would need in order to believe that p and act on p no matter the stakes. I do not provide a definitive case for this view, but I argue that it has several explanatory advantages over alternative views, and I show how it can avoid two pressing problems commonly raised against similar approaches.


Notes

  1. It is worth mentioning here that it is debatable whether variable intuitions in these cases are due to stakes or to the mention of an error-possibility in HIGH (namely, that banks sometimes change their hours). A series of empirical studies has been conducted with the aim of ascertaining the respective relevance of stakes and salience factors in knowledge-attributions (Buckwalter, 2010, 2014; DeRose 2011; Pinillos 2012; Pinillos and Simpson 2014; Sripada and Stanley 2012). But, because these studies have yielded conflicting results, the empirical evidence is still inconclusive. Furthermore, it has recently been suggested that mentioning error-possibilities influences intuitive judgments by indicating a difference in the evidential support in HIGH and LOW (e.g., Dinges 2016a). Here I will assume the standard view according to which evidential support is held fixed in the two scenarios. Notice however that this assumption is made for simplicity of exposition, and it is not essential to the view defended in the following sections. The Bank cases are here introduced to illustrate the main views in the literature. Though it would be a virtue of my view if it were able to account for ordinary intuitions in such cases, my positive arguments in Sect. 3 do not depend on this.

  2. There are two ways in which truth-values can vary, either because the semantic contents vary, or because a context (whether of utterance or assessment) is needed for a truth-value. Types of variantism can be distinguished by the context they take as fixing the contextual standards for appropriately ascribing knowledge—that of the ascriber and/or the assessor of the attribution. Relativism holds that the standard is determined by the context in which the knowledge attribution is assessed; for contextualism the standard is fixed by the context in which knowledge is ascribed, or by a combination of the context of the ascriber and the subject to whom knowledge is ascribed. Contextualist accounts are defended by, for example, DeRose (1999) and Cohen (1999). For a relativist account see MacFarlane (2005).

  3. Stanley (2005), Fantl and McGrath (2002, 2009) and Weatherson (2012). Hawthorne (2004) defends a similar view, though he includes salience effects among the factors that affect knowledge.

  4. I just note here that some Interest Relative Invariantists would deny that in HIGH the subject possesses the same evidence as in LOW (e.g., Stanley 2005). If for example evidence is one’s total knowledge, then if the subject knows p in LOW but not in HIGH, evidence differs in the two cases. However here I am using ‘evidence’ in a slightly different sense intended to cover all and only those known facts that are conducive to p being true (excluding p itself).

  5. Noteworthy exceptions are Cappelen (2005), Davis (2007, 414–415), Levin (2008, 381–382) and Dinges (2016b), but none of these authors develops the approach in detail, and only Davis fully endorses it. However, the view has been considered a live option by many philosophers (e.g., Hawthorne 2004; Fantl and McGrath 2009).

  6. (KPC) does not state a sufficient condition for knowing. Other traditional truth-related conditions must obtain (e.g., a safety condition).

  7. This characterization is inspired by Descartes’ notion of moral certainty (Descartes 1984, Vol. 1, pp. 289–290). See also Reed (2008). Other notions of PC in the literature, like those introduced in Wedgwood (2012) and Locke (2015), differ in important ways from the present one.

  8. Or take someone who must pursue a certain course of action no matter what, yet it matters a lot whether p is true (e.g., there is a big difference in expected value depending on whether p or not p). Assuming that in high stakes cases the subject would lose the confidence required to believe that p, still she would be rational in acting on p no matter the stakes, given that acting on p is the only available choice, or one that strongly dominates all other alternatives.

  9. Some have argued that full belief constitutively involves a disposition to act on what is believed. If this is true, there cannot be situations in which a rational subject’s degree of confidence is sufficient to believe that p but insufficient to dispose her to act on p. Considerations about dispositions of the agent to act on p would then be unnecessary for a characterization of Practical Certainty. Accordingly one may delete the phrase “and act on p” in the formulation of PC.

  10. A reviewer observes that there are problems in providing a precise account of stakes. In particular, Anderson and Hawthorne (forthcoming) have recently discussed some difficulties with theorizing about stakes. However, their arguments are primarily concerned with notions of stakes that could play a certain theoretical role in views endorsing standard versions of Pragmatic Encroachment such as IRI. Their arguments are supposed to vindicate the claim that “the ideology of stakes is unlikely to be very useful as a medium in which to theorize about pragmatic encroachment” (Sect. 4.1). Their arguments do not challenge the possibility of theorizing about stakes in general, nor do I think they are problematic for the moderately skeptical view I defend in my paper. As the authors admit, their criticisms are primarily directed against purely decision-based accounts of stakes. They are less forceful against psychological or value-based accounts. Anderson and Hawthorne themselves admit that there is a perfectly good ordinary use of ‘stakes’ as perceived costs of being wrong about a certain matter (Sect. 3). They take this notion to be an inadequate tool for certain versions of Pragmatic Encroachment, but do not provide any objection to the coherence of this notion. Notice also that, if one doesn’t like stakes-talk, PC can be restated solely in terms of the notion of perceived costs of being wrong about a given proposition.

  11. One may wonder whether the adoption of a subjective notion of stakes could cause problems with so-called Ignorant High Stakes cases, in which the subject’s perceived stakes significantly differ from her actual stakes. Intuitively, knowledge assessments track the latter (cf. Stanley 2005; Weatherson 2012). The answer is no: the present account provides a correct diagnosis of Ignorant High Stakes cases, since it focuses on perceived stakes in possible worlds in which such stakes are high (for discussion see Sect. 3).

  12. As with any condition resorting to conditionals, there are problem cases where the counterfactual turns out true, but not in the intended way. These are instances of the so-called “conditional fallacy”. We could put additional clauses in the counterfactual to block such cases. But this would steer us away from the central points of our discussion. So we simply ignore the complicated cases in what follows. Thanks to Julien Dutant for encouraging me to clarify this point.

  13. The present description of the rationality constraint relies on Nagel's (2008, 2010) account of how perceived stakes affect emotions and feelings of rational agents. See also Hookway (2008). According to an alternative view, a rational subject is one who proportions her confidence to her evidence, or her epistemic position (e.g., Fantl and McGrath 2002). This view entails that the confidence level is the same in high and low stakes, though it is supposed to be enough to rationally warrant a low-stakes action, but not a high-stakes action. I am open to reformulating the rationality constraint in a way that would suit the latter view. For example, one could define PC as follows: S is practically certain that p if and only if S would warrantedly believe that p and act on p no matter how much turned on p. While I do not need to fill in the constraint in any particular way, in the paper I will adopt my favored account (introduced in the main text) as a working hypothesis.

  14. Foley (1993, 199–200) defends a similar view about the threshold of evidence necessary to make a belief rational. Thanks to several commentators, including a reviewer for this journal, for pressing me to clarify this point.

  15. This is just a rough approximation of how such a theory would look. Complications arise when we consider the real number scale representing expected values. If there is no limit to which numbers we can use to represent values, this characterization requires that a practically certain proposition p is assigned probability 1. Otherwise, there is a possible assignment of expected values such that the subject would be irrational in acting on p. This entails that, if p is known, not-p is assigned value 0, or alternatively is excluded from the decision table (see Weatherson 2012 for a similar approach). Alternatively, we can assign a probability less than 1 to practically certain propositions by fixing upper and lower boundaries for the numbers we use to represent values. We can take the highest possible stakes for a subject as fixing the upper limit, with a number lower than 1 representing this limit. The limits could be fixed by the maximal degree of anxiety one can feel when perceiving high stakes, or the maximum value one could attribute to what one cares most about. The thought that there is an upper limit on stakes fits nicely with the idea of decreasing marginal utility due to risk aversion. I will further discuss this hypothesis in Sect. 2.1. An alternative formulation of PC in a decision theoretic framework is one according to which (i) the outcome rankings are not held fixed; (ii) one's preference ordering overall is the same as one's preference ordering conditional on p on any re-evaluation of the outcomes. Thanks to Julien Dutant for bringing this alternative framework to my attention. Unfortunately, I do not have the space here to discuss the details and complications of these alternative accounts.
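
The point about unbounded value scales can be illustrated with a minimal two-outcome sketch. The symbols below (a gain g from relying on p when p is true, a loss L when p is false, both measured against a safe baseline act) are my own illustration, not the paper's formalism:

```latex
% Relying on p yields a utility gain g > 0 over the safe alternative if p
% is true, and a loss L > 0 if p is false. With confidence c in p, relying
% on p maximizes expected utility iff c g - (1 - c) L \ge 0, i.e.
\[
  c \;\ge\; \frac{L}{L + g}.
\]
% If the value scale is unbounded, L can be chosen large enough that
% L/(L+g) exceeds any fixed c < 1. Hence only c = 1 (probability 1)
% guarantees rational reliance on p no matter the assignment of values.
```

This is why, on an unlimited scale of expected values, practical certainty collapses into probability 1, whereas bounding the scale leaves room for a threshold below 1.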

  16. For a related distinction between theoretical and practical error possibilities see Levin (2008, 382). I’ll return to this distinction in Sect. 3.

  17. MSI therefore shows that knowledge is non-luminous, albeit for reasons different from those discussed in Williamson (2000). I will return to this point in Sect. 2.2. Notice that I am not claiming here that we cannot know what in general the response of a rational agent would tend to be in high stakes situations. We could have a general understanding of whether rational agents would meet (KPC) in a wide range of situations (as in Bank). Also, I am not claiming that we cannot know, at least in principle, things such as the exact degree of rational confidence necessary for practical certainty. I am merely claiming that, for many of our actual beliefs that we take to be knowledge, we cannot know what the degree of our rational confidence would be if significantly more were at stake in our being right about them—i.e., we cannot know how ‘robust’ our rational confidence in many ordinary beliefs is.

  18. We can imagine a case similar to HIGH in which Bob, after reminding Hannah that a lot is at stake, asks her whether she really knows that tomorrow is Saturday (after all, sometimes one is wrong about what day it is). Assuming normal circumstances (Hannah is the kind of person that keeps track of the day; she received her paycheck today, and she knows she gets paid on Fridays) Hannah could remain as confident as before that tomorrow is Saturday, and it seems perfectly reasonable for her to say that she knows that tomorrow is Saturday.

  19. For statements of the problem see, for example, Hawthorne (2004), Fantl and McGrath (2009), Reed (2010).

  20. For similar considerations see Fantl and McGrath (2009, 188–191).

  21. Let me stress that with bet-like cases I have in mind specific situations in which one is offered a high bet or is engaged in a weird experiment such as the one described by Reed. I do not have in mind the technical decision-theoretic notion of bet, which is of course extensible to any case involving stakes. With ordinary bank-like cases I mean here typical cases in the literature which only involve changes in the stakes and, possibly, in the psychology of the subject who rationally perceives such stakes. The discussion below will make the difference between the two types of cases clearer.

  22. More precisely, the defeater here is P2’s justified belief that there is information speaking against p which P1 possesses. In general, if you have a justified belief that there is information speaking against p that someone possesses, then you have a defeater for your belief that p. The belief is justified because P2 is justified in believing P1 to be rational, and the most obvious way for P1 to be rational would be if P1 had some special information which P2 lacks. Notice here how much more complex this case is than an ordinary bank-like case, which just involves a single subject making a decision. See Bach (2010, Sect. 5) for other ways in which salient factors can undermine knowledge by providing counterevidence.

  23. See Williamson (2005), Dutant “Normative sceptical paradoxes” (Manuscript) for more on prudential considerations and the risk of developing bad habits. See also Hawthorne and Stanley (2008, 588–589) on assessments relative to prudential character traits.

  24. It is important to qualify the types of practical factors relevant here. Some prudential considerations depend on epistemic weakness. There are cases in which we consider it imprudent to act on p because we don’t take our beliefs to be sufficiently epistemically justified to warrant acting in high stakes situations. For example, consider the bank case, in which the subject’s epistemic failings make it imprudent to act on p, given the unacceptable risk. These epistemic failings are precisely what prevents the subject from being practically certain, and thus from knowing. However, here I am concerned with a different type of prudential consideration. These considerations do not depend on the epistemic weakness of the subject with respect to a specific proposition. Rather, they render the performance of a certain type of act inappropriate as such, because it is socially disapproved or could generate bad habits and vicious character traits. Betting large sums or playing stupid and dangerous games are examples of acts that are inappropriate as such, independently of one’s epistemic position with respect to specific propositions.

  25. It is a further question whether the subjects in bet-like cases know. There will be some cases in which the subject could not know the relevant proposition even before being presented with the bet, for example because condition (KPC) isn’t satisfied. There will be other cases in which the subject could not know because of some of the factors just discussed. But there will also be cases in which the subject could know the relevant proposition yet be irrational to rely on it for practical or prudential reasons (e.g. that it is imprudent to rely on that sort of proposition in these sorts of cases). But whether subjects in bet-like cases know does not concern us here. The important point is that, if lack of knowledge in bet-like cases is due to factors specific to those scenarios and independent of stakes, then these cases are irrelevant to whether the subject is practically certain (and thus to whether she knows) in the actual situation. This is sufficient to block the “radical skepticism” objection to MSI.

  26. If you don’t believe moral and prudential considerations are relevant to evaluating betting behaviour, try asking your partner if she would be happy for you to take this sort of bet (no matter how confident you are).

  27. Knowledge-based decision theories assigning maximal probability to known propositions have been discussed by, amongst others, Dutant (forthcoming), Hawthorne and Stanley (2008), Schulz (2017) and Weatherson (2012). A decision theory assigning probability 1 to practical certainty would mirror the functioning of the above theories.

  28. Observe that saying that utility values have limits doesn’t mean that the utility scale is finite. The scale can be infinite while still converging to a finite limit.

  29. The assumption of decreasing marginal utility provides simple and elegant solutions to several traditional problems of decision theory such as the Allais, Ellsberg and St Petersburg paradoxes, and is supported by a wide body of psychological studies in descriptive decision theory. For an overview and relevant literature see Buchak (2013, Sects. 1.2 and 1.4).

  30. In particular, we have upper and lower limits on the utility scale if we assume that the marginally diminishing utility function is convergent. This condition is required by standard solutions to the St Petersburg paradox (e.g., the ones advocated by Bernoulli and Pareto) and confirmed by several empirical studies.

  31. Given maximum limits of possible utility values, we can then calculate the minimum degree of rational confidence necessary for acting on a proposition no matter what the stakes are. It is provable that there is a threshold on confidence below probability 1 such that, no matter how possible outcomes are re-evaluated, it is always rational to rely on a proposition for which we have a rational confidence that meets that threshold. This threshold will be the minimal degree of confidence in p sufficient to rationally act on p even when being right or wrong about whether p is worth, respectively, the maximal utility and the maximal disutility. Unfortunately, for reasons of space I must postpone a comprehensive discussion of this point to another occasion.
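
On a minimal two-outcome sketch (my own illustration, not the paper's formalism), the sub-1 threshold the footnote describes can be exhibited directly. Suppose losses from relying on p when p is false are capped at M, and the gain from relying on p, conditional on p, never falls below some fixed margin g_0 > 0 under any admissible re-evaluation of outcomes:

```latex
% Relying on p is rational iff c g - (1 - c) L \ge 0. If L \le M and
% g \ge g_0 > 0 on every admissible re-evaluation, then the confidence
% threshold
\[
  t \;=\; \frac{M}{M + g_0} \;<\; 1
\]
% suffices: for any c \ge t,
%   c g \ge c g_0 \ge \frac{M g_0}{M + g_0} \ge (1 - c) L,
% so relying on p remains rational however the outcomes are re-valued.
% For instance, with M = 100 and g_0 = 1, t = 100/101 \approx 0.990.
```

The bound on M corresponds to the paper's upper limit on stakes; the margin g_0 is an extra assumption of this sketch, ruling out re-evaluations on which relying on p gains arbitrarily little.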

  32. Moderate Invariantists need an error-theoretic explanation of High Stakes knowledge attributions. IRI needs an explanation of cases in which the knowledge attributor is in a high stakes context and the subject in low stakes (e.g., Cohen's (1999) Airport Case). Attributor contextualism needs an explanation of so-called Low Attributor-High Subject Stakes cases, in which the subject is in a high stakes context but the attributor takes her to be in a low stakes context (Stanley 2005: 24), as well as an explanation for why attributions in LOW and HIGH, expressing distinct consistent propositions, do not contradict each other despite the intuition that they do. Relativism needs an explanation of cases in which the subject is in high stakes but the assessor takes her to be in low stakes. One exception is the view that the concept of knowledge is inconsistent. For a defense of this view see Weiner (2009).

  33. This and other similar examples are discussed in Davis (2007: 406 and ff.). Like Davis, I take this phenomenon to be pragmatic rather than semantic. Some have argued that loose use is a semantic rather than pragmatic phenomenon. While I will not discuss the issue here, for arguments in favor of a pragmatic interpretation see Davis (2007, 415–417).

  34. For similar cases see Fumerton (2010) and Stanley (2005). Davis (2007, 405 and ff.) provides a careful and informed defense of a loose use interpretation of knowledge attributions in low stakes Bank cases. Blome-Tillmann (2013) and Dinges (2016b) have criticized similar pragmatic strategies on behalf of skeptical views. A response to their criticisms is a topic for a further paper. I just note that their arguments only apply to radical forms of skepticism according to which subjects almost never know, not to moderate forms of skepticism like MSI. Furthermore, their objections can be addressed by admitting that a pragmatic account is only part of the story, addressing only some low stakes knowledge attributions. Others would be accounted for by an error theory like the one I introduce below. Bach (2010, Sect. 6) and Fumerton (2010) provide alternative pragmatic accounts compatible with MSI.

  35. For variants of this strategy see Rysiew (2001, 2007) and Brown (2006). For discussion and criticism of these strategies see Fantl and McGrath (2009, 40–41) and Blome-Tillmann (2013).

  36. A further advantage of my approach over the moderate error theory is that, in general, loose use is a far more systematic phenomenon than implicature. It is more plausible to assume systematic loose use in low stakes cases than systematic implicature in high stakes cases. For other advantages see Davis (2007, 411).

  37. That a subject is in a better epistemic position to assess whether she knows a given proposition in high stakes cases than in low seems also to be confirmed by psychological studies. As Nagel observes, “In general, high-stakes subjects think more systematically and less heuristically, relying more on deliberate and controlled cognition and less on first impressions and automatic responses (Kunda 1990; Lerner and Tetlock 1999). Many cognitive biases—a recent survey article on accountability counts sixteen—are known to be attenuated when subjects take themselves to be shifted into a higher-stakes condition (Lerner and Tetlock 1999)” (2008, 282).

  38. In order to better understand the analogy with courage, it is useful to recall the specific sense in which I use the expression ‘rational confidence’ in this paper. In Sect. 1 I stipulated that this expression refers to the sort of confidence that meets the rationality constraint relevant for practical certainty. This constraint was specified as the degree of confidence one should maintain in high stakes conditions if one appropriately reacts to the perceived stakes, i.e., if one feels the right amount of anxiety and pressure when stakes go up—where ‘right’ here means ‘commensurate to the stakes’—and this pressure appropriately affects one’s confidence (see fn. 13 for more details on the specific sense of “appropriately affected”). Therefore, when I talk of rational confidence in this paper, I refer to the confidence that a subject rationally responsive to stakes should have, where this rational responsiveness depends on specific rational emotional responses of the subject (cf. Hookway 2008, 60–63, and the literature quoted therein). According to other, more ordinary senses of rationality (e.g., responsiveness to evidence), we can better tell whether a certain amount of confidence would be rational from the safety of a low stakes situation. Whether our confidence is rational in the present technical sense, by contrast, is not always fully transparent in low stakes cases: it is usually hard to determine what our proper emotional responses would be if stakes were relevantly higher (for example, what amount of anxiety we would feel if our psychological system were rationally responding to the situation), and how these responses would affect our confidence if our emotional and cognitive systems were working and interacting rationally. Similar considerations are valid for courage.
A wide literature in social psychology documents how certain dispositions typically attributed to traits of character such as courage are not always transparent to the subject absent their manifestation conditions (see, for example, Doris 2002). To be courageous, a subject rationally responsive to anxiety and pressure in high stakes contexts will react by acting in a certain way. Similarly, I argue, to know something is in part to rationally react to anxiety and pressure in high stakes contexts by maintaining a certain degree of confidence in that thing. In this respect, we could even characterize Practical Certainty as a type of courage, and knowledge as necessarily involving a specific type of courage (namely, the courage required to maintain a certain degree of confidence in a proposition when stakes go up). This description would fit nicely with a virtue-based account of knowledge. However, this is a topic for another paper.

  39. For a discussion of this phenomenon see, for example, Lewis (1979), Pritchard (2001) and Levin (2008, 381). See Davis (2007, 420–421) for a discussion of the phenomenon in connection with the present issue.

  40. On the practice of concession under challenge in ordinary contexts of knowledge attribution see, for example, Hawthorne (2004, 124): “On the most natural reading, this seems to be an admission that one previously believed something false (namely, that one knew that p)”. See also Davis (2007, 398–399 and 420–421).

  41. For similar remarks see Hawthorne (2004, 129) and Cappelen (2005, 19).

  42. The idea that knowledge is stable comes from Plato. See also Williamson (2000).

  43. For this objection see in particular Williamson (2005) and Davis (2007).

  44. For a discussion of similar cases see, for example, Fantl and McGrath (2009, 210–211). In such cases they notice a “tendency of ordinary people to deny knowledge to themselves when they think about people in high-stakes situations who do not know” (Ibid., 211).

  45. See, for example, Stanley (2005, 102 and ff.) and Fantl and McGrath (2009, 211).

  46. For the relevance of such factors see also Fantl and McGrath (2009, 202 and ff).

  47. For a discussion of such cases see Stanley (2005), Sripada and Stanley (2012) and Weatherson (2012).

  48. For a similar case see Stanley (2005, 5).

  49. It is important to stress that in Ignorant High Stakes the subject is not only unaware of the stakes, but also cannot be held responsible for not knowing the stakes. In such cases the subject has absolutely no clue that she is in a high stakes situation.

  50. I just note that there is no universal agreement about this claim. Some recent studies in experimental philosophy seem to disconfirm these intuitions (e.g., Buckwalter 2010), while others seem to vindicate them (Sripada and Stanley 2012; Pinillos and Simpson 2014). Here I will just assume the correctness of these intuitions.

  51. See Bach (2005, 2008), Nagel (2008, 2010) and Ross and Schroeder (2014).

  52. Hawthorne (2004), Stanley (2005) and Hawthorne and Stanley (2008).

  53. Similar principles have been defended by Hawthorne (2004), Stanley (2005), Hawthorne and Stanley (2008), Fantl and McGrath (2009). Some of these philosophers defend bi-conditional versions of this principle. However it is contentious whether knowledge is both necessary and sufficient for rational action. On this see, for example, Brown (2008), Fantl and McGrath (2009) and Littlejohn (2009).

  54. For a criticism see, for example, Fantl and McGrath (2009).

  55. The appeal to the distinction between rational appropriateness and excusability is not new in the literature on the knowledge norm of action. For example, Hawthorne and Stanley (2008, 573, 582, 586) consider several cases in which one violates the knowledge norm of action in excusable ways.

  56. See Hawthorne and Stanley (2008, 581–585) and Levin (2008, 366).

  57. I am saying that (3) sounds less infelicitous than (1), not that (3) doesn’t sound infelicitous. I want to note two things here. First, I am following other authors (e.g., Dodd 2010; Dougherty and Rysiew 2009) in taking it for granted that CKAs that mention far-fetched possibilities are less infelicitous than CKAs that mention less far-fetched possibilities. If the reader doesn’t share this intuition, then the argument of this sub-section won’t have much force (this argument is, of course, just one among many arguments for MSI). Second, I borrow the distinction between far-fetched, or far out, possibilities and non-far-fetched possibilities from the literature (e.g., Bach 2005; Dodd 2010; Dougherty and Rysiew 2009; Frances 2005; Levin 2008; Rysiew 2001; Vogel 1999; and the authors quoted in the next footnote). A similar distinction can also be found in Peirce (1992, 28–29 and 115); see Hookway (2008, 61) for discussion. Other examples of far-fetched possibilities are that I am dreaming rather than in my office, or that I am deceived by an evil demon. Following these authors, I take the distinction as fairly intuitive.

  58. For similar considerations see for example Pritchard (2001), MacFarlane (2005), Davis (2007, 436), Adler (2012, 264). Referring to radical skeptical hypotheses like the BIV one, Bach writes: “just imagining yourself in such a scenario is not to take seriously the possibility that you’re actually in it. So-called skeptical “hypotheses” are really just fantasies” (Bach 2010, Sect. 5).

  59. Compare Adler (2012, 264–265): "Moreover, even when a counter-possibility is accepted (regarded as relevant), in radical cases the persuasion is often half-hearted. The student who is moved by your dreaming argument to sincerely deny that he knows that he is in the classroom does not cease to believe that he is in the classroom […] [E]ven when a way-out counter-possibility is persuasive so far as concerns a subject’s sincere avowals, as with dreaming, belief remains firm or at least warrant for the belief remains firm”.

  60. See Hookway (2008) for a similar diagnosis of the uncompellingness of skeptical intuitions. MSI’s answer to traditional skepticism is that knowledge doesn’t require theoretical certainty. MSI is a fallibilist view that only requires a type of certainty concerning the level of confidence required to rationally believe and act. The skeptic who casts doubts on whether she has hands is nevertheless rationally disposed to believe and act on the proposition that she has hands no matter the stakes. This is sufficient for her to know that she has hands, even if she questions this in her abstract theorizing. This appeal to a link between knowledge and practical factors in order to solve the skeptical threat is not new in the literature, and it belongs to the (broadly) pragmatist tradition. For a recent proposal in this direction see Hookway (1990).

References

  • Adler, J. E. (2012). Contextualism and fallibility: Pragmatic encroachment, possibility, and strength of epistemic position. Synthese, 188(2), 247–272.

  • Anderson, C., & Hawthorne, J. (forthcoming). Knowledge, practical adequacy, and stakes. In Oxford studies in epistemology (Vol. 6). Oxford: Oxford University Press.

  • Bach, K. (2005). The emperor’s new “knows”. In G. Preyer & G. Peter (Eds.), Contextualism in philosophy: Knowledge, meaning, and truth (pp. 51–89). Oxford: Oxford University Press.

  • Bach, K. (2008). Applying pragmatics to epistemology. Philosophical Issues, 18(1), 68–88.

  • Bach, K. (2010). Knowledge in and out of context. In J. K. Campbell, M. O'Rourke, & H. S. Silverstein (Eds.), Knowledge and skepticism (pp. 105–136). Cambridge, MA: MIT Press.

  • Blome-Tillmann, M. (2013). Knowledge and implicatures. Synthese, 190, 4293–4319.

  • Brown, J. (2006). Contextualism and warranted assertibility manoeuvres. Philosophical Studies, 130, 407–435.

  • Brown, J. (2008). Subject-sensitive invariantism and the knowledge norm for practical reasoning. Noûs, 42(2), 167–189.

  • Buchak, L. (2013). Risk and rationality. Oxford: Oxford University Press.

  • Buckwalter, W. (2010). Knowledge isn’t closed on Saturday: A study in ordinary language. Review of Philosophy and Psychology, 1(3), 395–406.

    Google Scholar 

  • Buckwalter, W. (2014). The mystery of stakes and error in ascriber intuitions. In J. Beebe (Ed.), Advances in experimental epistemology. London: Bloomsbury.

    Google Scholar 

  • Cappelen, H. (2005). Pluralistic skepticism: Advertisement for speech act pluralism. Philosophical Perspectives, 19(1), 15–39.

    Google Scholar 

  • Cohen, S. (1999). Contextualism, skepticism, and the structure of reasons. Philosophical Perspectives, 13, 57–89.

    Google Scholar 

  • Davis, W. A. (2007). Knowledge claims and context: Loose use. Philosophical Studies, 132(3), 395–438.

    Google Scholar 

  • DeRose, K. (1992). Contextualism and knowledge attributions. Philosophy and Phenomenological Research, 52(4), 913–929.

    Google Scholar 

  • DeRose, K. (1999). Contextualism: An explanation and defense. In J. Greco & E. Sosa (Eds.), The Blackwell guide to epistemology (pp. 187–205). Hoboken: Blackwell Publishers.

    Google Scholar 

  • DeRose, K. (2011). Contextualism, contrastivism, and X-Phi surveys. Philosophical Studies, 156(1), 81–110.

    Google Scholar 

  • Descartes, R. (1984). The philosophical writings of Descartes. Cambridge: Cambridge University Press.

    Google Scholar 

  • Dinges, A. (2016a). Epistemic invariantist and contextualist intuitions. Episteme, 13(2), 219–232.

    Google Scholar 

  • Dinges, A. (2016b). Skeptical pragmatic invariantism: Good, but not good enough. Synthese, 193(8), 2577–2593.

    Google Scholar 

  • Dodd, D. (2010). Confusion about concessive knowledge attributions. Synthese, 172(3), 381–396.

    Google Scholar 

  • Doris, J. M. (2002). Lack of character: Personality and moral behavior. Cambridge: Cambridge University Press.

    Google Scholar 

  • Dougherty, T., & Rysiew, P. (2009). Fallibilism, epistemic possibility, and concessive knowledge attributions. Philosophy and Phenomenological Research, 78, 128–132.

    Google Scholar 

  • Dutant, J. (Forthcoming). Knowledge-first evidentialism about rationality. In F. Dorsch & J. Dutant (Eds.), The new evil demon problem. Oxford: Oxford University Press.

  • Fantl, J., & McGrath, M. (2002). Evidence, pragmatics, and justification. Philosophical Review, 111(1), 67–94.

    Google Scholar 

  • Fantl, J., & McGrath, M. (2009). Knowledge in an uncertain world. Oxford: Oxford University Press.

    Google Scholar 

  • Foley, R. (1993). Working without a net: A study of egocentric epistemology. Oxford: Oxford University Press.

    Google Scholar 

  • Frances, B. (2005). When a skeptical hypothesis is live. Noûs, 39(4), 559–595.

    Google Scholar 

  • Fumerton, R. (2010). Fencing out pragmatic encroachment. Philosophical Perspectives, 24(1), 243–253.

    Google Scholar 

  • Hawthorne, J. (2004). Knowledge and lotteries. Oxford: Oxford University Press.

    Google Scholar 

  • Hawthorne, J., & Stanley, J. (2008). Knowledge and action. Journal of Philosophy, 105(10), 571–590.

    Google Scholar 

  • Hookway, C. (1990). Scepticism. London: Routledge.

    Google Scholar 

  • Hookway, C. (2008). Epistemic immediacy, doubt and anxiety: On a role for affective states in epistemic evaluation. In G. Brun, U. Doguoglu & D. Kuenzle (Eds.), Epistemology and emotions (pp. 51–65). Ashgate.

  • Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108, 480–498.

    Google Scholar 

  • Lerner, J., & Tetlock, P. (1999). Accounting for the effects of accountability. Psychological Bulletin, 125, 255–275.

    Google Scholar 

  • Levin, J. (2008). Assertion, practical reason, and pragmatic theories of knowledge. Philosophy and Phenomenological Research, 76(2), 359–384.

    Google Scholar 

  • Lewis, D. (1979). Scorekeeping in a language game. Journal of Philosophical Logic, 8(1), 339–359.

    Google Scholar 

  • Littlejohn, C. (2009). Must we act only on what we know? Journal of Philosophy, 106(8), 463–473.

    Google Scholar 

  • Locke, D. (2015). Practical certainty. Philosophy and Phenomenological Research, 90(1), 72–95.

    Google Scholar 

  • MacFarlane, J. (2005). The assessment sensitivity of knowledge attributions. In T. S. Gendler & J. Hawthorne (Eds.), Oxford studies in epistemology (Vol. 1, pp. 197–234). Oxford: Oxford University Press.

    Google Scholar 

  • Nagel, J. (2008). Knowledge ascriptions and the psychological consequences of changing stakes. Australasian Journal of Philosophy, 86(2), 279–294.

    Google Scholar 

  • Nagel, J. (2010). Epistemic anxiety and adaptive invariantism. Philosophical Perspectives, 24(1), 407–435.

    Google Scholar 

  • Peirce, C. S. (1992). The essential Peirce (Vol. 1). In N. Houser & C. Kloesel (Eds.). Indianapolis: Indiana University Press.

  • Pinillos, N. Á. (2012). Knowledge, experiments and practical interests. In J. Brown & M. Gerken (Eds.), New essays on knowledge ascriptions (pp. 192–219). Oxford: Oxford University Press.

    Google Scholar 

  • Pinillos, N. Á., & Simpson, S. (2014). Experimental evidence supporting anti-intellectualism about knowledge. In J. Beebe (Ed.), Advances in experimental epistemology (pp. 9–43). London: Bloomsbury.

    Google Scholar 

  • Pritchard, D. (2001). Contextualism, scepticism, and the problem of epistemic descent. Dialectica, 55(4), 327–349.

    Google Scholar 

  • Reed, B. (2008). Certainty. In E. Zalta (Ed.), Stanford encyclopedia of philosophy.

  • Reed, B. (2010). A defense of stable invariantism. Noûs, 44(2), 224–244.

    Google Scholar 

  • Ross, J., & Schroeder, M. (2014). Belief, credence, and pragmatic encroachment. Philosophy and Phenomenological Research, 88(2), 259–288.

    Google Scholar 

  • Rysiew, P. (2001). The context-sensitivity of knowledge attributions. Noûs, 35(4), 477–514.

    Google Scholar 

  • Rysiew, P. (2007). Speaking of knowing. Noûs, 41(4), 627–662.

    Google Scholar 

  • Schulz, M. (2017). Decisions and higher-order knowledge. Noûs, 51(3), 463–483.

    Google Scholar 

  • Sripada, C. S., & Stanley, J. (2012). Empirical tests of interest-relative invariantism. Episteme, 9(1), 3–26.

    Google Scholar 

  • Stanley, J. (2005). Knowledge and practical interests. Oxford: Oxford University Press.

    Google Scholar 

  • Stanley, J. (2007). Replies to Gilbert Harman, Ram Neta, and Stephen Schiffer. Philosophy and Phenomenological Research, 75(1), 196–210.

    Google Scholar 

  • Vogel, J. (1999). The new relevant alternatives theory. Philosophical Perspectives, 13, 155–180.

    Google Scholar 

  • Weatherson, B. (2012). Knowledge, bets, and interests. In J. Brown & M. Gerken (Eds.), Knowledge ascriptions (pp. 75–103). Oxford: Oxford University Press.

    Google Scholar 

  • Wedgwood, R. (2012). Outright belief. Dialectica, 66(3), 309–329.

    Google Scholar 

  • Weiner, M. (2009). The (mostly harmless) inconsistency of knowledge ascriptions. Philosophers’ Imprint, 9(1), 1–25.

    Google Scholar 

  • Williamson, T. (2000). Knowledge and its limits. Oxford: Oxford University Press.

    Google Scholar 

  • Williamson, T. (2005). Contextualism, subject-sensitive invariantism and knowledge of knowledge. Philosophical Quarterly, 55(219), 213–235.

    Google Scholar 


Acknowledgements

I would like to thank Julien Dutant, Jie Gao, Robin McKenna, Jacques Vollet and three anonymous reviewers for helpful comments on earlier drafts of this paper. Special thanks go to Robin McKenna for his invaluable comments on several revised versions of the paper. Earlier versions of this paper were presented at the Third Lucerne Philosophy Graduate Conference (2014), the workshop “Knowledge, Scepticism and Modality” at the University of Padua, the European Epistemology Network Meeting 2014 in Madrid, and the VAF Conference 2015 at the University of Rotterdam. Thanks to the audiences for their helpful feedback. Early research on this article was partially funded by the Swiss National Science Foundation research project ‘The Unity of Reasons’ (P300P1_164569).

Author information

Correspondence to Davide Fassio.


Cite this article

Fassio, D. Moderate Skeptical Invariantism. Erkenn 85, 841–870 (2020). https://doi.org/10.1007/s10670-018-0053-1
