Saving safety from counterexamples

  • S.I.: The Epistemology of Ernest Sosa
Synthese

Abstract

In this paper I will offer a comprehensive defense of the safety account of knowledge against counterexamples that have been recently put forward. In Sect. 2, I will discuss different versions of safety, arguing that a specific variant of method-relativized safety is the most plausible. I will then use this specific version of safety to respond to counterexamples in the recent literature. In Sect. 3, I will address alleged examples of safe beliefs that still constitute Gettier cases. In Sect. 4, I will discuss alleged examples of unsafe (and in this sense lucky) knowledge. In Sect. 5, I will address alleged cases of safe belief that do not constitute knowledge for non-Gettier reasons. My overall goal is to show that there are no successful counterexamples to robust anti-luck epistemology and to highlight some major presuppositions of my reply.

Notes

  1. See Sosa (1999) for the introduction of the term. The idea is already present in Sosa (1996) and Sainsbury (1997). Williamson (2000) is another early proponent of the safety account, although he does not believe that one can give a non-circular definition of knowledge as safe belief.

  2. See Sosa (1999: p. 146). For a dissenting view, see Dodd (2012).

  3. See Sosa (1999: p. 143) and Pritchard (2005: p. 168) for a defense of safety’s closure. For a dissenting view on safety, see Alspector-Kelly (2011). Bernecker (2012) claims that sensitivity does not violate closure.

  4. See Kripke (2011: pp. 167–68).

  5. For the following, see Pritchard (2005, 2009a).

  6. See Pritchard (2005: pp. 145–152) for a more comprehensive analysis of veritic luck.

  7. See Pritchard (2015: pp. 96–97).

  8. See Pritchard (2009a: p. 23) for this distinction.

  9. See Pritchard (2009a: p. 51).

  10. Sosa finds fault with safety because it does not require the belief to result from competences; he also disputes that safety is necessary for knowledge. See Sosa (2009: pp. 206–207).

  11. See Pritchard (2005: pp. 161–173) for a discussion of the different strengths of the safety principle.

  12. The best motivation for this version is that it explains why we do not know that we lose the lottery if we truly believe this on the basis of our general knowledge about the odds.

  13. See Sosa (1999: p. 146).

  14. See Pritchard (2005: p. 156) and Sosa (2007: p. 26).

  15. See Hiller and Neta (2007).

  16. Pritchard (2009b) suggests this version of safety when he says: “[A]ll we need to do is to talk of the doxastic result of the target belief-forming process, whatever that may be, and then not only focus solely on belief in the target proposition.” Williamson (2000) can be interpreted in a similar way. In contrast, Sosa addresses (S2)-safety only.

  17. More specifically, a safety account of knowledge that uses (S4) might run into skeptical problems too easily. Consider a case in which you believe that there is a cup on the table in front of you on the basis of your current vision. If someone might easily have left a fake spoon on the table, you would, according to (S4), not know that there is a cup. Unfortunately, I do not have enough space to deal with this problem here.

  18. This is an abridged version of Lackey (2006: p. 288).

  19. Compare Coffman (2010: p. 246).

  20. The operative method is italicized.

  21. Compare Goldberg (2015: pp. 277–278). I present an abridged version of the original case.

  22. See Goldberg (2015: p. 278): “I submit that the intuitively correct description of S is this: S’s true justified belief is formed through a safe method, yet the belief still fails to be knowledge (owing to the presence of epistemic luck).”

  23. This distinction is made by Pritchard (2015: p. 105). It resonates with Hetherington’s earlier distinction between helpful and dangerous Gettier cases. Compare Hetherington (2006: pp. 85–89).

  24. See Horvath and Wiegmann (2016) for a recent experimental study that suggests that although the textbook view is that Henry does not know, the majority of real expert epistemologists does not share this view.

  25. This is an abridged version of the original case in Neta and Rohrbaugh (2004: pp. 399–400).

  26. See Neta and Rohrbaugh (2004: p. 400); Sosa (2009: p. 207) shares this view.

  27. Ibid.: 401.

  28. This reply was suggested by an anonymous referee. For further discussion see footnote 31.

  29. See also Pritchard (2015: p. 104).

  30. The case is a slightly abridged version of the one in Kelp (2009: pp. 27–28).

  31. Bogardus (2012) presents a variant of this case with his Atomic Clock. When in this case Smith relies on the atomic clock, there is the persistent counterfactual risk that a radioactive isotope in the vicinity of the clock will interfere with its proper functioning. Bogardus argues that Kelp’s clock case does not establish a proper example of an unsafe belief, since in the actual situation, in which Russell has already decided to come down the stairs at 8:22 a.m., there are no longer nearby worlds in which the demon intervenes and Russell’s belief is false. In contrast, Bogardus’ own case is supposed to give an example of a continuing risk of forming a false belief. However, I do not think that the difference between the two cases has any epistemological significance. In particular, if one takes (S4)-safety to be the relevant notion here, we should not keep Russell’s decision to come down at 8:22 a.m. fixed across nearby possible worlds. The only thing one should keep fixed is the operative method. As we will later see (pp. 17–18), this may include external epistemic conditions such as, e.g., observation conditions. However, we cannot arbitrarily choose parts of the environment and hold them fixed across counterfactual evaluation. This choice must be well motivated.

  32. This is a slightly revised version of the original Comesaña case. See Comesaña (2005: p. 397).

  33. Kelp articulates the worry that the possible worlds in which Juan acquires a false belief on the basis of Judy’s misleading testimony might not be similar enough to the actual world to undermine safety (Kelp 2009: p. 25). After all, these worlds differ from the actual world in many respects: Juan decides to dress up as Michael; Judy believes that he is Michael; she calls Andy; the party is moved to Adam’s house. However, the distance between worlds should not simply be measured by the number of differing facts. Otherwise, the consequent of the counterfactual conditional ‘If Nixon had pressed the button, a nuclear inferno would have been the result’ could never be true in the nearest world in which Nixon presses the button. In fact, all the differences in worlds in which Juan dresses up as Michael are causally triggered by his disguise. The otherwise behaviorally relevant dispositions are already present in the actual world. So it is simply not true that, in order to bring about Juan’s false belief, many facts that are independent of each other have to differ. In conclusion, I do not think that Kelp’s worry here is substantial.

  34. The original case is from Sosa (2007: p. 31). Strictly speaking, Sosa claims that Kyle possesses animal knowledge rather than reflective knowledge. See Sosa (2007: p. 96, n.1).

  35. If one wants to avoid method talk, one might also use the label ‘the basis of belief’ or ‘the way a belief is formed.’

  36. This revised version was inspired by the discussions at the Saving Safety?-workshop that took place from Sept. 30 to Oct. 2, 2013 at the University of Bonn. I am especially grateful to Dominik Balg, Juan Comesaña, Wolfgang Freitag, and Sanford Goldberg for further helpful discussions about this issue.

  37. For more on this worry, see Bogardus and Marxen (2014: p. 329). Sosa seems to think, along these lines, that there is no relevant difference between the Kaleidoscope Case and the Fake Barn Case. See Sosa (2007: p. 96, n. 1).

  38. See Broncano-Berrocal (2014). However, his sufficient account of method individuation runs into severe problems, as Bogardus and Marxen (2014) argue. Sosa (2007: p. 27) mentions, but does not develop, the idea of external methods. Pritchard (2016: fn. 17) also considers this as a successful strategy against counterexamples.

  39. Compare Broncano-Berrocal (2014: p. 73).

  40. Thanks to an anonymous referee for articulating this concern.

  41. See p. 6 above.

  42. If this is a correct and comprehensive list of relevant factors, it can explain why certain gerrymandered descriptions of methods do not constitute genuine methods in the relevant sense. Consider a clock that works properly for only one minute a day; the rest of the time it is wildly inaccurate. Suppose you happen to look at the clock during the one minute when it is working properly. Is the externalist about method individuation committed to the view that you acquire knowledge by relying on that clock? The answer is negative, for the following reason: in this case neither normal external stimuli nor teleological factors determine the method’s individuation. Hence, the intentions of the epistemic agent play a key role here. Typically, however, people do not intend to use a clock that works improperly most of the time precisely at the moment when it happens to work properly. Rather, they rely on a clock they take to be working properly at all times. In the case at hand, the clock is not working properly according to this individuation. Hence, one would not acquire knowledge by relying on it. (Thanks to an anonymous referee for pressing me on this point.)

  43. Thanks to an anonymous referee for raising this issue.

  44. Warfield (2005: pp. 407–408).

  45. For the following see Bradley (2015: p. 205).

  46. Interestingly, the defeater in this case is a defeater for knowledge without being a defeater for justified belief. This is so because the (mis)information that Lottie is participating in a fair lottery suggests that her target belief (that the ticket will lose) might easily be false, but it does not suggest that her belief is unreliable.

  47. With some clarifications from Pritchard (2009a: p. 49).

  48. Pritchard (2009a: p. 49).

  49. Pritchard (2009a: p. 50).

  50. Pritchard (2009a: p. 50).

  51. Pritchard (2009a: p. 49).

  52. This is what Sosa requires for apt beliefs, namely that they are accurate (i.e., true) because they are adroit (i.e., based on competence).

  53. This concern is due to an anonymous referee.

References

  • Alspector-Kelly, M. (2011). Why safety doesn’t save closure. Synthese, 183, 127–142.

  • Bernecker, S. (2012). Sensitivity, safety, and closure. Acta Analytica, 27, 367–381.

  • Bogardus, T. (2012). Knowledge under threat. Philosophy and Phenomenological Research, 88, 289–313.

  • Bogardus, T., & Marxen, C. (2014). Yes, safety is in danger. Philosophia, 42, 321–334.

  • Bradley, D. (2015). A critical introduction to formal epistemology. London: Bloomsbury.

  • Broncano-Berrocal, F. (2014). Is safety in danger? Philosophia, 42, 63–81.

  • Brown, J. (2000). Reliabilism, knowledge, and mental content. Proceedings of the Aristotelian Society, 100, 115–135.

  • Coffman, E. J. (2010). Misleading dispositions and the value of knowledge. Journal of Philosophical Research, 35, 241–258.

  • Comesaña, J. (2005). Unsafe knowledge. Synthese, 146, 395–404.

  • Dodd, D. (2012). Safety, skepticism, and lotteries. Erkenntnis, 77, 95–120.

  • Goldberg, S. (2015). Epistemic entitlement and luck. Philosophy and Phenomenological Research, 91, 273–302.

  • Hetherington, S. (2006). How to know (that knowledge-that is knowledge-how). In S. Hetherington (Ed.), Epistemology futures (pp. 71–94). Oxford: Oxford University Press.

  • Hiller, A., & Neta, R. (2007). Safety and epistemic luck. Synthese, 158, 303–313.

  • Horvath, J., & Wiegmann, A. (2016). Intuitive expertise and intuitions about knowledge. Philosophical Studies, 173, 2701–2726.

  • Kelp, C. (2009). Knowledge and safety. Journal of Philosophical Research, 34, 21–31.

  • Kripke, S. (2011). Nozick on knowledge. In S. Kripke (Ed.), Philosophical troubles. Collected papers (Vol. 1, pp. 162–224). Oxford: Oxford University Press.

  • Lackey, J. (2006). Pritchard’s epistemic luck. Philosophical Quarterly, 56, 284–289.

  • Neta, R., & Rohrbaugh, G. (2004). Luminosity and the safety of knowledge. Pacific Philosophical Quarterly, 85, 396–406.

  • Pritchard, D. (2005). Epistemic luck. Oxford: Oxford University Press.

  • Pritchard, D. (2009a). Knowledge. London: Palgrave Macmillan.

  • Pritchard, D. (2009b). Safety-based epistemology: Whither now? Journal of Philosophical Research, 34, 33–45.

  • Pritchard, D. (2015). Anti-luck epistemology and the Gettier problem. Philosophical Studies, 172, 93–111.

  • Pritchard, D. (2016). Anti-luck virtue epistemology and epistemic defeat. Synthese (online first). https://doi.org/10.1007/s11229-016-1074-4.

  • Sainsbury, M. (1997). Easy possibilities. Philosophy and Phenomenological Research, 57, 907–919.

  • Sosa, E. (1996). Postscript to proper functionalism and virtue epistemology. In J. Kvanvig (Ed.), Warrant in contemporary epistemology (pp. 271–281). Lanham: Rowman & Littlefield.

  • Sosa, E. (1999). How to defeat opposition to Moore. Philosophical Perspectives, 13, 141–154.

  • Sosa, E. (2007). A virtue epistemology: Apt belief and reflective knowledge. Oxford: Oxford University Press.

  • Sosa, E. (2009). Timothy Williamson’s knowledge and its limits. In P. Greenough & D. Pritchard (Eds.), Williamson on knowledge (pp. 203–216). Oxford: Oxford University Press.

  • Warfield, T. (2005). Knowledge from falsehood. Philosophical Perspectives, 19, 405–416.

  • Williamson, T. (2000). Knowledge and its limits. Oxford: Oxford University Press.

Acknowledgements

An earlier draft of this paper was presented at the workshop Saving Safety? Problems and Prospects of Safety-Based Accounts of Knowledge at the University of Bonn, Germany, Sept. 30 to Oct. 2, 2013. Substantial comments from and extensive discussions with the following colleagues helped me to work out a significantly revised final version of this paper: Dominik Balg, Juan Comesaña, Jan Constantin, Wolfgang Freitag, Linus Eusterbrock, Sanford Goldberg, Frank Hofmann, Joachim Horvath, Chris Kelp, Jens Kipper, and Ernest Sosa. I am extremely grateful to all of them and to the two anonymous referees of this journal.

Author information

Correspondence to Thomas Grundmann.

Cite this article

Grundmann, T. Saving safety from counterexamples. Synthese 197, 5161–5185 (2020). https://doi.org/10.1007/s11229-018-1677-z
