
Williamson, closure, and KK


Closure principles say that if you know some proposition which entails a second, and you meet further conditions, then you know the second. In this paper I construct an argument against closure principles which turns on the idea that knowing a proposition requires that one’s belief-forming process be reliable. My argument parallels an influential argument offered by Timothy Williamson against KK principles: principles that say that if you know some proposition, and you meet further conditions, then you know that you know the proposition. After offering my argument, I provisionally assess its damage to closure principles and also look at how responses to my argument against closure principles can be used to generate responses to Williamson’s argument against KK principles.



Notes

  1.

    For discussion of closure principles, see e.g. Baumann (2012), Feldman (1995), Kvanvig (2006), Lawlor (2005), Luper (2011), Luzzi (2010), Schechter (2013) and Vogel (1990).

  2.

    I have introduced closure principles via a schema and will talk as if there are multiple closure principles. However, some talk as if there is only one closure principle. For instance, the Stanford Encyclopedia of Philosophy article devoted to epistemic closure principles is called “The Epistemic Closure Principle” (Luper 2011). I find talk of closure principles in the singular to be somewhat mysterious. Perhaps people who engage in it are assuming there is only one closure principle best suited to a particular role? In any case, I shall continue to refer to closure principles in the plural. Similar things go for KK principles.

  3.

    Stating a more general schema for this type of principle requires allowing for various bells and whistles. These include time indices and modal operators, such as being in a position to know.

  4.

    For discussion of KK principles, see e.g. Castañeda (1970), Conn (2001), Das and Salow (2018), Feldman (1981), Ginet (1970), Goodman and Salow (2018), Greco (2014, 2015) and Hemp (2014).

  5.

    For some more quotes in a similar vein, see Dretske (2005, p. 17).

  6.

    It has been noticed that a modified version can target other sorts of principles; see e.g. Ramachandran (2012, pp. 128–131).

  7.

As I’ll note later on, the principle in question isn’t entirely uncontroversial, but this should be unsurprising; arguments for deeply controversial views in epistemology rarely rest on premises that are entirely uncontroversial.

  8.

    Thanks to an anonymous referee for suggesting I mention this here.

  9.

    Here is a variant story: the problems remain at the same level of difficulty, but Lisa very gradually loses brain function. All my conclusions should go through for this variant story as well. So someone who wishes to challenge what I say here should make sure their challenge works on this variant as well.

  10.

    Let me state this formally. Let (Start\(_{i}\)) be the starting proposition to problem i and (LisaAnswer\(_{i}\)) be the answer that Lisa gives to problem i, that is, the proposition that she believes to be deducible from (Start\(_{i}\)). Then (Set-Up\(_{Lisa}\)) says: for all i from 0 to 666, Lisa knows (Start\(_{i}\)) and for all i from 0 to 665, Lisa believes (LisaAnswer\(_{i}\)) and (Start\(_{i}\)) entails (LisaAnswer\(_{i}\)).

  11.

    Let me state this formally, keeping the same abbreviations as the previous footnote. (Reliability\(_{Lisa}\)) says: for all i from 0 to 665, if Lisa knows (LisaAnswer\(_{i}\)) then (LisaAnswer\(_{i+1}\)) is true.

  12.

Let me state this formally, keeping the same abbreviations as the previous footnote. (Closure\(_{Challenged}\)) says: for all i from 0 to 666 and any proposition q, if Lisa knows (Start\(_{i}\)) and (Start\(_{i}\)) entails q and Lisa believes q then Lisa knows q.
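Displayed together, with \(K\) abbreviating “Lisa knows”, \(B\) “Lisa believes”, and \(\Rightarrow\) entailment (notation introduced here purely for display, not taken from the paper), this principle and the two from the previous footnotes read:

```latex
\begin{align*}
(\text{Set-Up}_{Lisa})\colon\quad
  & \forall i \in \{0,\dots,666\}:\ K(\text{Start}_i)\\
  & \forall i \in \{0,\dots,665\}:\ B(\text{LisaAnswer}_i)
      \wedge (\text{Start}_i \Rightarrow \text{LisaAnswer}_i)\\[4pt]
(\text{Reliability}_{Lisa})\colon\quad
  & \forall i \in \{0,\dots,665\}:\ K(\text{LisaAnswer}_i)
      \rightarrow \text{LisaAnswer}_{i+1}\\[4pt]
(\text{Closure}_{Challenged})\colon\quad
  & \forall i \in \{0,\dots,666\},\ \forall q:\
      \big(K(\text{Start}_i) \wedge (\text{Start}_i \Rightarrow q)
      \wedge B(q)\big) \rightarrow K(q)
\end{align*}
```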

  13.

    I would like to briefly note how this argument differs from another argument challenging closure principles, viz. Maria Lasonen-Aarnio’s argument in “Single Premise Deduction and Risk.” Her argument is explicitly fallibilist, resting on the idea that one can have knowledge if one avoids relevantly similar false belief in most close worlds, even if one doesn’t do so in all of them (Lasonen-Aarnio 2008, p. 167). But I make no such assumption. Thanks to an anonymous referee for suggesting I clarify this.

  14.

    Thanks to an anonymous referee for suggesting I clarify this.

  15.

    Often, we use the word “guess” to mean a belief arrived at on the basis of a feeling or hunch, as opposed to a process of reasoning. For example, a detective who believed that the butler did it on the basis of a complex reasoning process would not normally be said to have “guessed” that the butler did it, even if the reasoning process employed a number of highly questionable inferences. But Williamson cannot mean “guess” in this sense. For, as the logic of Williamson’s argument makes clear, (KReliability\(_{Magoo}\)) has to apply to a case in which Mr Magoo reasons to the conclusion that the tree is not 665 inches tall, through the employment of some questionable inferences. A similar point is emphasized in Dokic and Égré (2009) and Sharon and Spectre (2008).

  16.

    Here is a worry one might have about my interpreting Williamson’s argument against (KK\(_{Challenged}\)) as invoking (GeneralizedReliability): KK principles tend to be defended by internalists, it’s not clear that an internalist should accept (GeneralizedReliability), and thus I am interpreting Williamson as invoking a principle which is dialectically suspect. In response, it’s worth noting that Williamson explicitly invokes this principle while offering his anti-luminosity argument, which is explicitly an argument against internalists, so he is clearly happy to make this allegedly dialectically suspect move. Also, I argue later in this paper that not all those who accept KK principles need be internalists and that furthermore internalists can endorse an internalist version of KK while accepting (GeneralizedReliability). Thanks to an anonymous referee for pressing this worry.

  17.

    Here is another piece of evidence that Williamson intends to motivate (Reliability\(_{Magoo}\)) via (GeneralizedReliability): at one point he writes: “A reliability condition on knowledge was implicit in the argument of section 5.1 [i.e. the argument against KK\(_{Challenged}\)] and explicit in sections 4.3 and 4.4 [i.e. the sections in which (GeneralizedReliability) is explicitly stated]” (Williamson 2000, p. 124). Williamson clarifies that this reliability condition is (GeneralizedReliability) a page later, writing, “For present purposes, we are interested in a notion of reliability on which, in given circumstances, something happens reliably if and only if it is not in danger of not happening. That is, it happens reliably in a case \(\alpha \) if and only if it happens (reliably or not) in every case similar enough to \(\alpha \)” (Williamson 2000, p. 124).

  18.

    One could object to this claim by arguing that in order for one case to be sufficiently similar to another, the proposition believed in each case has to be the same. This idea shows up in the way some people formulate safety conditions, viz:

    S’s belief in p is safe if and only if S could not easily have falsely believed p in similar cases (Sosa 1999, p. 142).

    Here the idea seems to be that the only cases relevant for determining whether a belief in p is safe are ones in which the subject believes p. But if we understand similarity in this way, then the case in which Mr Magoo believed that the tree is 665 inches tall would not count as sufficiently similar to the case in which Mr Magoo believed that the tree is 666 inches tall, so (SimilarCases\(_{Magoo}\)) would be false.

    In any case, we can rephrase Williamson’s argument, and my own, so as not to need different propositions; in the Magoo case, one can consider a similar situation in which Magoo believes the same proposition but is looking at a tree that has a slightly different height, whereas in my case we can have Lisa deduce the same (rather complex) proposition each time (after first having her memory wiped), with the difference that each time her competence is ever so slightly reduced. Or we can imagine a story in which it is not her competence that is reduced, but rather that we keep the same proposition to be deduced—the true conclusion—and alter the level of difficulty of the problem by altering the starting proposition. Thanks to an anonymous referee for pressing me on this.

  19.

    One might think that there is a key difference between the two beliefs in the Lisa case, viz. that her belief in her answer to problem 665 is the result of an accurate inference while her belief in her answer to problem 666 is the result of a logical mistake. But there is a similar difference between two beliefs in the Magoo case as well; a belief that the tree is not 665 inches tall is accurate, but a belief that it is not 666 inches results from a mistake. Of course, this is not to say that his belief that the tree is not 665 inches tall amounts to knowledge; Magoo’s belief-forming process isn’t reliable enough to pick out the difference in heights. But likewise, it’s highly dubious that Lisa’s belief in her answer to problem 665 amounts to knowledge; while her inference in this case is logically sound, this is not enough to yield knowledge. Surely not everyone who produces a logically sound inference knows the conclusion; otherwise every belief I had in a logical tautology would amount to knowledge, no matter how complicated the tautology was.

  20.

    I should note that (GeneralizedReliability) is somewhat controversial; see e.g. Fitelson (2006), and Ramachandran (2005). Thanks to an anonymous referee for suggesting I mention this.

  21.

    One might worry that, even if Williamson’s own motivation for (Reliability\(_{Magoo}\)) was (GeneralizedReliability), there is another motivation for (Reliability\(_{Magoo}\)) that cannot be used to motivate (Reliability\(_{Lisa}\)). In particular, we can motivate (Reliability\(_{Magoo}\)) as follows: whatever you say about Mr Magoo’s epistemic standing with regard to the tree he’s observing, he cannot know to the nearest inch (using whatever method he can use at the moment) that the tree is not at a certain height when its actual height is one inch from that. This argument relies, then (or so the challenge goes), on the quality of Magoo’s evidence, not on reliability considerations.

    In response, it’s worth emphasizing that (Reliability\(_{Magoo}\)) says more than merely that Mr Magoo cannot know to the nearest inch what the height of the tree is. To see this, imagine Mr Magoo had a device that precisely measures the tree’s height but reports the tree’s height in a peculiar way: it either announces that the tree is below 65.5 inches, or it announces that it is between 65.5 and 165.5, or between 165.5 and 265.5, or between 265.5 and 365.5, and so on. Then, if Mr Magoo had this device, he wouldn’t be able to tell the height to the nearest inch. But (Reliability\(_{Magoo}\)) would be false, because if the device reported that the tree was between 665.5 and 765.5, and the tree was in fact 666 inches, Mr Magoo would know that it was not 665 inches.
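    The imagined banding device can be sketched in a few lines (an illustration of this footnote’s example only; the function name and banding arithmetic are my own reconstruction, not the paper’s):

```python
def device_report(height_in_inches: float) -> str:
    """Report a tree's height the way the imagined device does:
    either 'below 65.5' or a 100-inch band starting at 65.5 + 100k."""
    if height_in_inches < 65.5:
        return "below 65.5"
    # Find the start of the 100-inch band containing the height.
    band_start = 65.5 + 100 * int((height_in_inches - 65.5) // 100)
    return f"between {band_start} and {band_start + 100}"

# A 666-inch tree falls in the 665.5-765.5 band, so Magoo can rule out
# 665 inches even though he cannot tell the height to the nearest inch.
print(device_report(666))  # between 665.5 and 765.5
```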

    Of course, this still leaves open whether there is some way to appeal to facts about evidence in a way that supports (Reliability\(_{Magoo}\)) without supporting (Reliability\(_{Lisa}\)). I think the answer is that there is not; in the fifth section of this paper, I give some reason to think Mr. Magoo and Lisa are in a similar evidential position, both with regards to the evidence they have, and their ability to appropriately base their beliefs on that evidence. Thanks to an anonymous referee for pressing this worry.

  22.

    I should also note that it’s possible for the difficulty of this task to increase gradually. For instance, there can be a series that starts with simple pairs of drawings for which it’s easy to tell if there are any differences between them and moves gradually to complex pairs of drawings for which it’s really difficult to tell if there’s any difference between them. Similar things hold for the task of comparing formulas of the form “A”, “A entails B” and “B” to confirm matches. If A and B are simple, it’s easy to compare; if they are somewhat long, it’s more difficult, and if they are extremely long it’s more difficult still. Thanks to an anonymous referee for pressing me on this.

  23.

    One might object: “the task of knowing the implications of complex formulas seems more demanding than the task of matching two complex formulas. So if the subject knows A entails B, it seems plausible that she would be able to match A with the occurrence of A in A entails B.” Two responses. First, even if one task is more demanding than a second, it doesn’t follow that if one is able to competently accomplish the first, one is able to competently accomplish the second. For instance, making waffles from scratch is more demanding than reheating frozen ones. Nonetheless, I may competently accomplish the first and fail to competently accomplish the second. Maybe reheating frozen things causes me great anxiety and leads to my making mistakes. Maybe, for whatever reason, I just never learned how to reheat frozen foods.

    Also, it’s not obvious that knowing A entails B will always be more demanding than matching formulas. For instance, suppose that first one comes to know that A entails B through being told this by one’s math teacher. Next, one has to match A with the occurrence of A in A entails B. Seemingly, in such a case, coming to know A entails B could be an easier task than the matching.

  24.

    Strictly speaking, Williamson offers a multi-premise version, writing “Knowing \(p_1\), ..., \(p_n\), competently deducing q, and thereby coming to believe q is in general a way of coming to know q.” (Williamson 2000, p. 117). I have offered a version restricted to a single premise so as to avoid unnecessary complications.

  25.

    It is worth noting that such a move incorporates a fairly controversial notion of competence, and thus may not appeal to others who are also happy to invoke competence but wish to understand it differently. It holds that in order to competently deduce something, one could not easily have made a mistake in a similar situation. But some think that one can competently come to believe something, even though one could easily have made a mistake in a similar situation. For instance, Ernest Sosa discusses the following case:

    You see a surface that looks red in ostensibly normal conditions. But it is a kaleidoscope surface controlled by a jokester who also controls the ambient light, and might as easily have presented you with a red-light\(+\)white-surface combination as with the actual white-light\(+\)red-surface combination (Sosa 2009, p. 31).

    Sosa thinks in this case you have an apt belief—that is, one that succeeds through the exercise of a competence—even though your belief could easily have been false (Sosa 2009, pp. 35–36).

    Similar things are often said about fake barn cases, in which one looks at a real barn in an area full of barn facades. That is, such cases are often alleged to be cases in which one’s current belief that there is a barn before one is due to a competence, even though in similar cases one would have had a false belief. See e.g. Sosa (2009, p. 96).

  26.

    Very briefly, the reason is that Williamson understands competence in terms of not easily having been mistaken, where this in turn is understood in terms of not being mistaken in any nearby worlds. He then incurs the controversial commitments by endorsing the following three claims, each of which is very difficult to avoid: (1) Not being mistaken in nearby worlds is closed under disjunction; if there are no nearby worlds in which one is mistaken that P and no nearby worlds in which one is mistaken that Q, then there are no nearby worlds in which one is mistaken that P or mistaken that Q. (2) There are a number of events each of which has low physical probability, is independent of the others, and which you know didn’t happen. For instance, one such event is that the interior of my desk just rearranged itself a moment ago in such a way that it’s no longer a desk, thanks to weird quantum phenomena. (3) The disjunction of a sufficiently large number of physically independent events can be quite probable even if their individual physical probabilities are each quite low.
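    Claim (3) is ordinary probability arithmetic: for independent events with individual probability p, the chance that at least one occurs is 1 − (1 − p)^n, which approaches 1 as n grows. A quick check (the particular numbers are illustrative, not from the paper):

```python
def prob_at_least_one(p: float, n: int) -> float:
    """Probability that at least one of n independent events occurs,
    each event having individual probability p."""
    return 1 - (1 - p) ** n

# Even with a one-in-a-million chance per event, ten million independent
# events make the disjunction all but certain.
print(prob_at_least_one(1e-6, 10_000_000))  # ~0.99995
```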

  27.

    Thanks to an anonymous referee for suggesting this response.

  28.

    I will leave “good” in “good evidence” unanalyzed. There are various ways of analyzing “good evidence” on which having good evidence is closed under entailment. One example: evidence for p is good evidence if p is highly probable given this evidence.

  29.

    Usually internalists add in that one must also avoid Gettier cases, but I will leave out this detail, because it is irrelevant for this discussion. See e.g. Feldman and Conee (1985).

  30.

    For such defenses, see e.g. Klein (1995, p. 219), Luper (2011), Moretti and Shogenji (2017, p. 7), Stine (1975, p. 250) and Wang (2014, p. 1130).

  31.

    Again, setting aside Gettier cases.

  32.

    Note: at this point in the paper, we are assuming that my parallel argument is successful and trying to assess the consequences thereof; thus we are assuming for the sake of argument that (GeneralizedReliability) is true and thus that knowledge requires reliability in other similar cases.

  33.

    I should perhaps briefly relate the point I’m making to Williamson’s criticisms of internalism. Internalists tend to claim that there are certain properties, such as phenomenal properties, that are of special significance. In particular, they say one has some kind of special access to facts regarding these properties and thus, if these properties are instantiated, then, so long as one bases one’s beliefs regarding these properties in the appropriate way, one’s beliefs will amount to knowledge. In other words, when these properties obtain, one is in a special sort of intermediate state and this intermediate state is such that if one bases one’s belief in the right way, then one will know.

    But Williamson argues that this appropriate basing can be quite difficult. In particular, he argues that these properties can be instantiated without one’s being in a position to know that they are (Williamson 2000, p. 24). Given this argument, Williamson thinks it is dubious that the intermediate state internalists focus on and the properties related to it, are of much significance. As Williamson says in another context, while criticizing the views of Earl Conee:

    Conee asserts that phenomenal qualities provide a comfortable cognitive home ... because they are always there among our ultimate evidential resources. But what use to you as evidence is a phenomenal quality when you are not even in a position to know that you have it? Conee reassures us that phenomenal qualities are still ‘known to us by acquaintance’, but in Conee’s sense you can know x by acquaintance even if you are not in a position to know that x exists (Williamson 2005, p. 8).

    In short, then, Williamson questions the significance of the sort of state that internalists care about—the sort of state one occupies when one instantiates some of the properties in question. This is so despite its being the case that if one occupies this sort of state then one will have knowledge, so long as one bases one’s beliefs in the appropriate way. And the reason Williamson is dubious about the significance of this sort of state is because he thinks one can occupy such a state and still be very far from knowledge, given how difficult it can be to appropriately base one’s beliefs.

  34.

    Also, I’m not assuming that validation is an inferential or quasi-inferential process; I’m leaving open that one can validate that one knows p directly, without inference.

  35.

    For those who employ a similar strategy but with a different KK principle, see e.g. McHugh (2010).

  36.

    More formally, this is to endorse:

    Competent Validation. If in any case \(\alpha \) one believes that one knows that p on a basis b, which consists of competent validation, then in any case close to \(\alpha \) in which one believes that one knows a proposition p* close to p on a basis b* close to b, one knows that p*.

    Thanks to an anonymous referee for suggesting that I include this.

  37.

    Relatedly: one way of putting Williamson’s point from his argument against KK is that safety doesn’t iterate; that is, one can safely believe that p while failing to safely believe that one safely believes. The internalist response here is that safety iterates so long as one believes in the right way. That is, if one safely believes that p and one bases one’s belief that one safely believes that p in the right way, then one safely believes that one safely believes that p.

  38.

    This is important because, as I shall briefly argue in this footnote, (Introspection\(_{Internalist-friendly}\)) can be used to establish conclusions that Williamson wanted to reject. (To fully establish this conclusion would take a more thorough argument than I will give here, but I hope my remarks will at least provisionally establish a difficulty for Williamson.) In particular, Williamson wanted to reject the idea that we have a cognitive home—that one is “guaranteed epistemic access to one’s current mental states” (Williamson 2000, p. 93). If (Introspection\(_{Internalist-friendly}\)) is right, then we do have a special kind of access to our phenomenal experiences—so long as we base our beliefs in the right sort of way, we can know that we are having them. So in failing to attack (Introspection\(_{Internalist-friendly}\)), Williamson leaves himself vulnerable to those who wish to use it to establish that we do have a cognitive home.

  39.

    This sort of strategy is taken in Smithies (2012).

  40.

    Why do I say “KK and other similar principles”? The reason is that, as Williamson notes, there are multiple ways of formulating the surprise test paradox—all of the versions use principles about iterating knowledge, but there are several closely related options to choose from, only one of which is KK (Williamson 2000, p. 140). Thanks to an anonymous referee for pressing me to clarify this.


References

  1. Baumann, P. (2012). Nozick’s defense of closure. In K. Becker & T. Black (Eds.), The sensitivity principle in epistemology (pp. 11–27). Cambridge: Cambridge University Press.

  2. Castañeda, H. N. (1970). On knowing (or believing) that one knows (or believes). Synthese, 21(2), 187–203.

  3. Conn, C. (2001). Chisholm, internalism, and knowing that one knows. American Philosophical Quarterly, 38(4), 333–347.

  4. Das, N., & Salow, B. (2018). Transparency and the KK principle. Noûs, 52(1), 3–23.

  5. Dokic, J., & Égré, P. (2009). Margin for error and the transparency of knowledge. Synthese, 166(1), 1–20.

  6. Dretske, F. (2005). Is knowledge closed under entailment? In Contemporary debates in epistemology. Blackwell.

  7. Feldman, R. (1981). Fallibilism and knowing that one knows. The Philosophical Review, 90(2), 266–282.

  8. Feldman, R. (1995). In defense of closure. Philosophical Quarterly, 45(181), 487–494.

  9. Feldman, R., & Conee, E. (1985). Evidentialism. Philosophical Studies, 48, 15–34.

  10. Fitelson, B. (2006). Williamson’s argument against KK.

  11. Ginet, C. (1970). What must be added to knowing to obtain knowing that one knows. Synthese, 21(2), 163–186.

  12. Goodman, J., & Salow, B. (2018). Taking a chance on KK. Philosophical Studies, 175(1), 183–196.

  13. Greco, D. (2014). Could KK be OK? The Journal of Philosophy, CXI(4), 169–197.

  14. Greco, D. (2015). Iteration and fragmentation. Philosophy and Phenomenological Research, 91(3), 656–673.

  15. Hemp, D. (2014). The KK (knowing that one knows) principle. The Internet Encyclopedia of Philosophy.

  16. Klein, P. (1995). Skepticism and closure: Why the evil genius argument fails. Philosophical Topics, 23(1), 213–236.

  17. Kvanvig, J. L. (2006). Closure principles. Philosophy Compass, 1(3), 256–267.

  18. Lasonen-Aarnio, M. (2008). Single premise deduction and risk. Philosophical Studies, 141(2), 157–173.

  19. Lawlor, K. (2005). Living without closure. Grazer Philosophische Studien, 69, 25–49.

  20. Luper, S. (2011). The epistemic closure principle. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Winter 2011 edition).

  21. Luzzi, F. (2010). Counter-closure. Australasian Journal of Philosophy, 88(4), 673–683.

  22. McHugh, C. (2010). Self-knowledge and the KK principle. Synthese, 173, 231–257.

  23. Moretti, L., & Shogenji, T. (2017). Skepticism and epistemic closure: Two Bayesian accounts. International Journal for the Study of Skepticism, 7(1), 1–25.

  24. Ramachandran, M. (2005). Williamson’s argument against the KK-principle. The Baltic International Yearbook of Cognition, Logic and Communication, 1.

  25. Ramachandran, M. (2012). The KK-principle, margins for error, and safety. Erkenntnis, 76, 121–136.

  26. Schechter, J. (2013). Rational self-doubt and the failure of closure. Philosophical Studies, 163, 429–452.

  27. Sharon, A., & Spectre, L. (2008). Mr Magoo’s mistake. Philosophical Studies, 139, 289–306.

  28. Smithies, D. (2012). Mentalism and epistemic transparency. Australasian Journal of Philosophy, 90(4), 723–741.

  29. Sosa, E. (1999). How to defeat opposition to Moore. Philosophical Perspectives, 13, 141–153.

  30. Sosa, E. (2009). A virtue epistemology: Apt belief and reflective knowledge, volume I. Oxford: Oxford University Press.

  31. Stine, G. C. (1975). Skepticism, relevant alternatives, and deductive closure. Philosophical Studies, 29, 249–261.

  32. Vogel, J. (1990). Are there counterexamples to the closure principle? In M. D. Roth & G. Ross (Eds.), Doubting: Contemporary perspectives on skepticism (pp. 13–27). Dordrecht: Kluwer.

  33. Wang, J. (2014). Closure and underdetermination again. Philosophia, 42, 1129–1140.

  34. Williamson, T. (2000). Knowledge and its limits. Oxford: Oxford University Press.

  35. Williamson, T. (2005). Reply to Conee. Philosophy and Phenomenological Research, LXX(2), 470–476.

  36. Williamson, T. (2009). Replies to critics. In Williamson on knowledge. Oxford: Oxford University Press.

  37. Williamson, T. (2011). Improbable knowing. In Evidentialism and its discontents. Oxford: Oxford University Press.



Acknowledgements

Thanks for comments to Nevin Climenhaga, Amy Flowerree, Jennifer Jhun, Graham Leach-Krouse, Matthew Lee, Fritz Warfield, audiences at the 2014 Central States Philosophical Association and the 2015 Central APA, and some very helpful anonymous referees.

Author information



Corresponding author

Correspondence to Daniel Immerman.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.


Cite this article

Immerman, D. Williamson, closure, and KK. Synthese 197, 3349–3373 (2020).



Keywords

  • Closure principles
  • Knows–Knows principles
  • Timothy Williamson
  • Internalism
  • Externalism