Abstract
This paper explores how a certain theory of knowledge—known as anti-luck virtue epistemology—can account for, and in the process shed light on, the notion of an epistemic defeater. To this end, an overview of the motivations for anti-luck virtue epistemology is offered, along with a taxonomy of different kinds of epistemic defeater. It is then shown how anti-luck virtue epistemology can explain: (i) why certain kinds of putative epistemic defeater are not bona fide; (ii) how certain kinds of epistemic defeater are genuine in virtue of exposing the subject to significant levels of epistemic risk; and (iii) how certain kinds of epistemic defeater are genuine in virtue of highlighting how the subject’s safe cognitive success does not stand in the appropriate explanatory relationship to her manifestation of relevant cognitive ability.
Notes
Note that, henceforth, I will refer to epistemic defeaters as defeaters simpliciter.
For more on the notion of epistemic dependence, see Kallestrup and Pritchard (2011, 2012, 2013) and Pritchard (forthcominga). Note that this notion is rooted in an earlier distinction between intervening and environmental epistemic luck, and an associated critique of robust virtue epistemology—see Pritchard (2009a, b, 2012a) and Pritchard et al. (2010, chap. 2–4).
I will not be getting into the issue of how best to unpack the notion of safety in play here. I discuss this notion (and the related notion of veritic luck), and its place within the broader framework of what I call anti-luck epistemology, in a number of places. See, especially, Pritchard (2005, 2007, 2012a, c, 2015a). See also my recent defence of what I refer to as anti-risk epistemology, which involves a subtle adaptation of the anti-luck epistemology framework, in Pritchard (forthcomingb).
For more on the epistemic twin earth argument, see Kallestrup and Pritchard (2011).
Related to this point, one sometimes sees commentators running together the notions of defeat and counterevidence. While they are clearly related, it is important to keep them apart, since counterevidence is a less demanding notion. That is, in principle one could be in possession of counterevidence that nonetheless fails to suffice to defeat the epistemic standing in question (knowledge, say). Note also that while I only contrast knowledge with a general positive epistemic standing here, other contrasts are possible (e.g., one could contrast a positive epistemic standing sufficient for justification with a general positive epistemic standing, and so on).
The point of this formulation is that we don’t want to understand knowledge-defeaters in such a way that one must have had the knowledge prior to its being defeated (even though sometimes this is of course the case), since that would be unduly restrictive.
Note the emphasis on the sustaining basis for the belief. This is important since the process which originally gave rise to the belief might not be the one that currently sustains it. When we are discussing undercutting defeaters it is important that our focus is on the sustaining basis rather than the original basis, since grounds for thinking that the original basis is unreliable may have no defeating power if this basis has subsequently come apart from the sustaining basis.
For a classic discussion of the distinction between undercutting and overriding defeaters, see Pollock (1970). (Note though that at this time he is calling defeaters ‘excluders’ and classifying them as ‘type I’ and ‘type II’. Moreover, in later work—such as Pollock (1974)—although he uses the nomenclature of an undercutting defeater, he refers to overriding defeaters as ‘rebutting defeaters’).
Of course, there are certainly some cases where undercutting defeaters collapse into overriding defeaters. If, for example, I discover that your belief-forming process is so unreliable that it is guaranteed to deliver false beliefs, then it follows that I have an overriding defeater as regards any belief produced by this process. My point is just that merely discovering that a belief-forming process has failed to clear the relevant threshold of reliability is not in itself generally a good reason to doubt the truth of any belief formed via this process. Rather, one will normally require further grounds (such as grounds for thinking that the process in question is massively unreliable). I am grateful to an anonymous referee from Synthese for pressing me on this point.
That said, misleading defeaters tend to be less epistemically deleterious than non-misleading defeaters to the extent that they are easier to defeat in turn. Once one has taken a look behind the barn frontage and seen that there is a genuine barn, for example, then this particular defeater is in turn defeated. Had it been a non-misleading defeater, in contrast, then clearly one would not be able to defeat the defeater in this fashion.
This feature of knowledge has led to a puzzle, sometimes known as the paradox of dogmatism. Roughly, if one takes oneself to know that p, then one should regard all evidence that suggests that one doesn’t know that p (including undercutting and overriding defeaters) as misleading. But doesn’t that mean that one should reject all counterevidence in advance of knowing what it is? And, if so, isn’t that just a licence for dogmatism? For two important discussions of this paradox, see Harman (1973) and Kripke (2011). See also Sorensen (1988, 2011, Sect. 6.2). I offer my own explanation of why the paradox is illusory in Pritchard (2015b, pp. 210–211). My focus there is on how this paradox affects an account of perceptual knowledge known as epistemological disjunctivism—see Pritchard (2012b) for more details about this proposal—a view that seems particularly susceptible to the paradox. Accordingly, insofar as my treatment of the paradox works for epistemological disjunctivism, it ought to be applicable to (plausible) accounts of knowledge more generally.
See Lackey (2010) for a very useful discussion of normative defeaters.
Notice that I am implicitly taking it for granted here that being aware of a defeater involves being aware of it qua defeater. Of course, there may be certain cases where the latter condition isn’t met, but I will be setting this complication aside in what follows as it isn’t germane to our current discussion.
For a helpful overview of recent work on epistemic defeaters, see Sudduth (2015). Note that one issue that I have not engaged with here, but which I have discussed extensively elsewhere, is what it takes to turn the mere presentation of an error-possibility into a defeater (or, for that matter, counterevidence). I think this has an important bearing on a number of debates (such as concerning radical scepticism), since epistemologists are often too quick to treat any presentation of an error-possibility as a defeater. I have also not engaged here with a further issue that I think is related, which is how there can be ways of responding to defeaters that are not particularly onerous from an epistemic point of view (again, I think epistemologists are often too quick to impose unduly austere epistemic demands on those defeating defeaters, particularly in the context of certain epistemological debates). For discussion of both of these points, see Pritchard (2010; 2012b, part two, 2015b, part three). See also Carter and Pritchard (2015).
This general distinction between an epistemically benign form of evidential epistemic risk and an epistemically malignant form of epistemic risk that is veritic can be found in Unger (1968). For a developed account of the distinction and its epistemological ramifications, see Pritchard (2005, chap. 5–6). Note that many putative counterexamples to the necessity of an anti-luck/risk condition like safety for knowledge trade on a failure to take this distinction into account (and, relatedly, a failure to keep the basis for the target belief fixed). Consider, for example, the clock case offered by Bogardus (2014). In this scenario we are asked to imagine the world’s most reliable clock which, as it happens, could have easily been disrupted by something in its vicinity (but wasn’t). We further imagine our agent just happening to be in the vicinity of the clock when it is functioning perfectly and forming a true belief as a result. Is this belief knowledge? Bogardus claims that it is, but that it is also manifestly unsafe. I agree that it’s knowledge, but I dispute that the belief is unsafe. The crux of the matter is that we need to keep the subject’s actual evidential basis fixed, and of course her actual evidential basis for her belief is formed by consulting the reliable and unaffected clock. While it is lucky that the subject has this evidential basis (in that there are close possible worlds where it is absent), it is not lucky that she forms a true belief on this basis. Indeed, in all close possible worlds where she continues to enjoy the same evidential basis she continues to form a true belief. (This would thus be a case of a non-normative modally close defeater). See Pritchard (2015a) for a more detailed discussion of why the main putative counterexamples to safety in the literature are defective. See also Broncano-Berrocal (2014) for further critical discussion of Bogardus’s clock case, to which Bogardus and Marxen (2014) is a response.
And notice that it would be irrelevant to respond to this argument by noting that it only works on misleading modally close non-normative defeaters. After all, as we noted above, insofar as our focus is on knowledge-defeaters, then all defeaters are misleading.
Thanks to two anonymous referees from Synthese for helpful comments on an earlier version of this paper.
References
Bogardus, T. (2014). Knowledge under threat. Philosophy and Phenomenological Research, 88, 289–313.
Bogardus, T., & Marxen, C. (2014). Yes, safety is in danger. Philosophia. doi:10.1007/s11406-013-9508-4.
Broncano-Berrocal, F. (2014). Is safety in danger? Philosophia, 42, 63–81.
Carter, J. A., & Pritchard, D. H. (2015). Perceptual knowledge and relevant alternatives. Philosophical Studies. doi:10.1007/s11098-015-0533-y.
Greco, J. (2009). Achieving knowledge. Cambridge: Cambridge University Press.
Harman, G. (1973). Thought. Princeton, NJ: Princeton University Press.
Kallestrup, J., & Pritchard, D. H. (2011). Virtue epistemology and epistemic twin earth. European Journal of Philosophy. doi:10.1111/j.1468-0378.2011.00495.x.
Kallestrup, J., & Pritchard, D. H. (2012). Robust virtue epistemology and epistemic anti-individualism. Pacific Philosophical Quarterly, 93, 84–103.
Kallestrup, J., & Pritchard, D. H. (2013). Robust virtue epistemology and epistemic dependence (ch. 11). In T. Henning & D. Schweikard (Eds.), Knowledge, virtue and action: Putting epistemic virtues to work. London: Routledge.
Kripke, S. (2011). Two paradoxes of knowledge. In Philosophical troubles: collected papers (Vol. 1, pp. 27–51). Oxford: Oxford University Press.
Lackey, J. (2010). Testimonial knowledge. In S. Bernecker & D. H. Pritchard (Eds.), Routledge companion to epistemology (pp. 316–325). New York: Routledge.
Pollock, J. (1970). The structure of epistemic justification. American Philosophical Quarterly, 4, 62–78.
Pollock, J. (1974). Knowledge and justification. Princeton, NJ: Princeton University Press.
Pritchard, D. H. (2005). Epistemic luck. Oxford: Oxford University Press.
Pritchard, D. H. (2007). Anti-luck epistemology. Synthese, 158, 277–297.
Pritchard, D. H. (2009a). Apt performance and epistemic value. Philosophical Studies, 143, 407–416.
Pritchard, D. H. (2009b). Knowledge, understanding and epistemic value. In A. O’Hear (Ed.), Epistemology. Royal Institute of Philosophy Lectures (pp. 19–43). Cambridge: Cambridge University Press.
Pritchard, D. H. (2010). Relevant alternatives perceptual knowledge, and discrimination. Noûs, 44, 245–268.
Pritchard, D. H. (2012a). Anti-luck virtue epistemology. Journal of Philosophy, 109, 247–279.
Pritchard, D. H. (2012b). Epistemological disjunctivism. Oxford: Oxford University Press.
Pritchard, D. H. (2012c). In defence of modest anti-luck epistemology. In T. Black & K. Becker (Eds.), The sensitivity principle in epistemology (pp. 173–192). Cambridge: Cambridge University Press.
Pritchard, D. H. (2015a). Anti-luck epistemology and the Gettier problem. Philosophical Studies, 172, 93–111.
Pritchard, D. H. (2015b). Epistemic angst: radical skepticism and the groundlessness of our believing. Princeton, NJ: Princeton University Press.
Pritchard, D. H. (Forthcominga). Epistemic dependence. Philosophical Issues.
Pritchard, D. H. (Forthcomingb). Epistemic risk. Journal of Philosophy.
Pritchard, D. H. (Forthcomingc). Knowledge, luck and virtue: Resolving the Gettier problem. In C. Almeida, P. Klein & R. Borges (Eds.), The Gettier problem. Oxford: Oxford University Press.
Pritchard, D. H., Millar, A., & Haddock, A. (2010). The nature and value of knowledge: three investigations. Oxford: Oxford University Press.
Sorensen, R. (1988). Blindspots. Oxford: Clarendon.
Sorensen, R. (2011). Epistemic paradoxes. In E. Zalta (Ed.), Stanford encyclopedia of philosophy. http://plato.stanford.edu/entries/epistemic-paradoxes/.
Sosa, E. (2007). A virtue epistemology: apt belief and reflective knowledge. Oxford: Oxford University Press.
Sosa, E. (2009). Reflective knowledge: apt belief and reflective knowledge. Oxford: Oxford University Press.
Sosa, E. (2015). Judgement and agency. Oxford: Oxford University Press.
Sudduth, M. (2015). Defeaters in epistemology. In J. Fieser & B. Dowden (Eds.), Internet encyclopedia of philosophy. http://www.iep.utm.edu/ep-defea/.
Unger, P. (1968). An analysis of factual knowledge. Journal of Philosophy, 65, 157–170.
Zagzebski, L. (1996). Virtues of the mind: an inquiry into the nature of virtue and the ethical foundations of knowledge. Cambridge: Cambridge University Press.
Pritchard, D. Anti-luck virtue epistemology and epistemic defeat. Synthese 195, 3065–3077 (2018). https://doi.org/10.1007/s11229-016-1074-4