Abstract
In this paper I will offer a comprehensive defense of the safety account of knowledge against counterexamples that have been recently put forward. In Sect. 2, I will discuss different versions of safety, arguing that a specific variant of method-relativized safety is the most plausible. I will then use this specific version of safety to respond to counterexamples in the recent literature. In Sect. 3, I will address alleged examples of safe beliefs that still constitute Gettier cases. In Sect. 4, I will discuss alleged examples of unsafe (and in this sense lucky) knowledge. In Sect. 5, I will address alleged cases of safe belief that do not constitute knowledge for non-Gettier reasons. My overall goal is to show that there are no successful counterexamples to robust anti-luck epistemology and to highlight some major presuppositions of my reply.
Notes
See Kripke (2011: pp. 167–168).
See Pritchard (2005: pp. 145–152) for a more comprehensive analysis of veritic luck.
See Pritchard (2015: pp. 96–97).
See Pritchard (2009a: p. 23) for this distinction.
See Pritchard (2009a: p. 51).
Sosa finds fault with safety because it does not require the belief to result from competences; he also disputes that safety is necessary for knowledge. See Sosa (2009: pp. 206–207).
See Pritchard (2005: pp. 161–173) for a discussion of the different strengths of the safety principle.
The best motivation for this version is that it explains why we do not know that we lose the lottery if we truly believe this on the basis of our general knowledge about the odds.
See Sosa (1999: p. 146).
See Hiller and Neta (2007).
Pritchard (2009b) suggests this version of safety when he says: “[A]ll we need to do is to talk of the doxastic result of the target belief-forming process, whatever that may be, and then not only focus solely on belief in the target proposition.” Williamson (2000) can be interpreted in a similar way. In contrast, Sosa addresses (S2)-safety only.
More specifically, a safety account of knowledge that uses (S4) might run into skeptical problems too easily. Consider a case in which you believe that there is a cup on the table in front of you on the basis of your current vision. If someone might easily have left a fake spoon on the table, you would, according to (S4), not know that there is a cup. Unfortunately, I do not have enough space to deal with this problem here.
This is an abridged version of Lackey (2006: p. 288).
Compare Coffman (2010: p. 246).
The operative method is italicized.
Compare Goldberg (2015: pp. 277–278). I present an abridged version of the original case.
See Goldberg (2015: p. 278): “I submit that the intuitively correct description of S is this: S’s true justified belief is formed through a safe method, yet the belief still fails to be knowledge (owing to the presence of epistemic luck).”
See Horvath and Wiegmann (2016) for a recent experimental study suggesting that although the textbook view is that Henry does not know, the majority of real expert epistemologists do not share this view.
This is an abridged version of the original case in Neta and Rohrbaugh (2004: pp. 399–400).
Ibid.: 401.
This reply was suggested by an anonymous referee. For further discussion see footnote 31.
See also Pritchard (2015: p. 104).
The case is a slightly abridged version of (Kelp 2009: pp. 27–28).
Bogardus (2012) presents a variant of this case with his Atomic Clock. In this case, when Smith relies on the atomic clock, there is the persistent counterfactual risk that a radioactive isotope in the vicinity of the clock will interfere with its proper functioning. Bogardus argues that Kelp’s clock case does not establish a proper example of an unsafe belief since in the actual situation in which Russell has already decided to come down the stairs at 8:22 a.m., there are no longer nearby worlds in which the demon intervenes and Russell’s belief is false. In contrast, Bogardus’ own case is supposed to give an example of the continuing risk of forming a false belief. However, I do not think that the difference between the two cases has any epistemological significance. In particular, if one takes (S4)-safety to be the relevant notion here, we should not keep Russell’s decision to come down at 8:22 a.m. fixed across nearby possible worlds. The only thing that one should keep fixed is the operative method. As we will later see (pp. 17–18), this may include external epistemic conditions such as, e.g., observation conditions. However, we cannot arbitrarily choose any parts of the environment and hold them fixed across counterfactual evaluation. This choice must be well motivated.
This is a slightly revised version of the original Comesaña case. See Comesaña (2005: p. 397).
Kelp articulates the worry that the possible worlds in which Juan acquires a false belief on the basis of Judy’s misleading testimony might not be similar enough to the actual world to undermine safety (Kelp 2009: p. 25). After all, these worlds differ from the actual world in many respects: Juan decides to dress up as Michael; Judy believes that he is Michael; she calls Andy; the party is moved to Adam’s house. However, the distance between the worlds should not simply be measured by the number of differing facts. Otherwise, the consequent of the counterfactual conditional ‘If Nixon had pressed the button, a nuclear inferno would have been the result’ could never be true in the nearest world in which Nixon presses the button. In fact, all the differences in worlds in which Juan dresses up as Michael are causally triggered by his disguise. The otherwise behaviorally relevant dispositions are already present in the actual world. So it is simply not true that, in order to bring about Juan’s false belief, many facts that are independent of each other have to be different. In conclusion, I do not think that Kelp’s worry here is substantial.
If one wants to avoid method talk, one might also use the label ‘the basis of belief’ or ‘the way a belief is formed.’
This revised version was inspired by the discussions at the Saving Safety? workshop that took place from Sept. 30 to Oct. 2, 2013 at the University of Bonn. I am especially grateful to Dominik Balg, Juan Comesaña, Wolfgang Freitag, and Sanford Goldberg for further helpful discussions about this issue.
See Broncano-Berrocal (2014). However, his sufficient account of method individuation runs into severe problems, as Bogardus and Marxen (2014) argue. Sosa (2007: p. 27) mentions, but does not develop, the idea of external methods. Pritchard (2016: fn. 17) also considers this a successful strategy against counterexamples.
Compare Broncano-Berrocal (2014: p. 73).
Thanks to an anonymous referee for articulating this concern.
See p. 6 above.
If this is a correct and comprehensive list of relevant factors, it can explain why certain gerrymandered descriptions of methods do not constitute genuine methods in the relevant sense. Consider a clock that works properly only 1 min a day. The rest of the time, it is wildly inaccurate. But suppose you happen to look at the clock during that 1 min when it is working properly. Is the externalist about method individuation committed to the view that you acquire knowledge by relying on that clock? The answer is negative, for the following reason: In this case neither normal external stimuli nor teleological factors determine the method’s individuation. Hence, the intentions of the epistemic agent play a key role here. Typically, however, people do not intend to use a mostly malfunctioning clock precisely at the rare moment when it happens to work properly. Rather, they rely on a clock they take to be working properly at all times. In the case at hand, the clock is not working properly according to this individuation. Hence, one would not acquire knowledge by relying on it. (Thanks to an anonymous referee for pressing me on this point.)
Thanks to an anonymous referee for raising this issue.
Warfield (2005: pp. 407–408).
For the following see Bradley (2015: p. 205).
Interestingly, the defeater in this case is a defeater for knowledge without being a defeater for justified belief. This is so because the (mis)information that Lottie is participating in a fair lottery suggests that her target belief (that the ticket will lose) might easily be false, but it does not suggest that her belief is unreliable.
With some clarifications from Pritchard (2009a: p. 49).
Pritchard (2009a: p. 49).
Pritchard (2009a: p. 50).
Pritchard (2009a: p. 50).
Pritchard (2009a: p. 49).
This is what Sosa requires for apt beliefs, namely that they are accurate (i.e., true) because they are adroit (i.e., based on competence).
This concern is due to an anonymous referee.
References
Alspector-Kelly, M. (2011). Why safety doesn’t save closure. Synthese, 183, 127–142.
Bernecker, S. (2012). Sensitivity, safety, and closure. Acta Analytica, 27, 367–381.
Bogardus, T. (2012). Knowledge under threat. Philosophy and Phenomenological Research, 88, 289–313.
Bogardus, T., & Marxen, C. (2014). Yes, safety is in danger. Philosophia, 42, 321–334.
Bradley, D. (2015). A critical introduction to formal epistemology. London: Bloomsbury.
Broncano-Berrocal, F. (2014). Is safety in danger? Philosophia, 42, 63–81.
Brown, J. (2000). Reliabilism, knowledge, and mental content. Proceedings of the Aristotelian Society, 100, 115–135.
Coffman, E. J. (2010). Misleading dispositions and the value of knowledge. Journal of Philosophical Research, 35, 241–258.
Comesaña, J. (2005). Unsafe knowledge. Synthese, 146, 395–404.
Dodd, D. (2012). Safety, skepticism, and lotteries. Erkenntnis, 77, 95–120.
Goldberg, S. (2015). Epistemic entitlement and luck. Philosophy and Phenomenological Research, 91, 273–302.
Hetherington, S. (2006). How to know (that knowledge-that is knowledge-how). In S. Hetherington (Ed.), Epistemology futures (pp. 71–94). Oxford: Oxford University Press.
Hiller, A., & Neta, R. (2007). Safety and epistemic luck. Synthese, 158, 303–313.
Horvath, J., & Wiegmann, A. (2016). Intuitive expertise and intuitions about knowledge. Philosophical Studies, 173, 2701–2726.
Kelp, C. (2009). Knowledge and safety. Journal of Philosophical Research, 34, 21–31.
Kripke, S. (2011). Nozick on knowledge. In S. Kripke (Ed.), Philosophical troubles. Collected papers (Vol. 1, pp. 162–224). Oxford: Oxford University Press.
Lackey, J. (2006). Pritchard’s epistemic luck. Philosophical Quarterly, 56, 284–289.
Neta, R., & Rohrbaugh, G. (2004). Luminosity and the safety of knowledge. Pacific Philosophical Quarterly, 85, 396–406.
Pritchard, D. (2005). Epistemic luck. Oxford: Oxford University Press.
Pritchard, D. (2009a). Knowledge. London: Palgrave Macmillan.
Pritchard, D. (2009b). Safety-based epistemology: Whither now? Journal of Philosophical Research, 34, 33–45.
Pritchard, D. (2015). Anti-luck epistemology and the Gettier problem. Philosophical Studies, 172, 93–111.
Pritchard, D. (2016). Anti-luck virtue epistemology and epistemic defeat. Synthese (online first). https://doi.org/10.1007/s11229-016-1074-4.
Sainsbury, M. (1997). Easy possibilities. Philosophy and Phenomenological Research, 57, 907–919.
Sosa, E. (1996). Postscript to proper functionalism and virtue epistemology. In J. Kvanvig (Ed.), Warrant in contemporary epistemology (pp. 271–281). Lanham: Rowman & Littlefield.
Sosa, E. (1999). How to defeat opposition to Moore. Philosophical Perspectives, 13, 141–154.
Sosa, E. (2007). A virtue epistemology. Apt belief and reflective knowledge. Oxford: Oxford University Press.
Sosa, E. (2009). Timothy Williamson’s knowledge and its limits. In P. Greenough & D. Pritchard (Eds.), Williamson on knowledge (pp. 203–216). Oxford: Oxford University Press.
Warfield, T. (2005). Knowledge from falsehood. Philosophical Perspectives, 19, 405–416.
Williamson, T. (2000). Knowledge and its limits. Oxford: Oxford University Press.
Acknowledgements
An earlier draft of this paper was presented at a workshop on Saving Safety?–Problems and Prospects of Safety-Based Accounts of Knowledge at the University of Bonn, Germany, Sept. 30 to Oct. 2, 2013. Substantial comments from and extensive discussions with the following colleagues helped me to work out a significantly revised final version of this paper: Dominik Balg, Juan Comesaña, Jan Constantin, Wolfgang Freitag, Linus Eusterbrock, Sanford Goldberg, Frank Hofmann, Joachim Horvath, Chris Kelp, Jens Kipper, Ernest Sosa. I am extremely grateful to all of them and to two anonymous referees of this journal.
Cite this article
Grundmann, T. Saving safety from counterexamples. Synthese 197, 5161–5185 (2020). https://doi.org/10.1007/s11229-018-1677-z