1 Introduction

Most philosophical work disappears into the void. And most of what remains garners passing citations – acknowledgements, at best. It’s a rare privilege for one’s work to receive the kind of sustained and careful attention that my critics have bestowed on Reasons First, let alone for five distinguished such critics to cram themselves into three available symposium slots. And it’s an especial honor to be taken so seriously for the contributions that I have tried to make to epistemology in the book, especially given that I might be perceived by many as an interloper from ethics. I have been allotted too few words, here, to answer every issue that they raise in the detail that it deserves, but I am reassured about most of the contents of the book by the fact that all of my critics have chosen to focus their critical energies on chapters three through five, which are also the chapters in which I admit in the book that I have the least confidence.

2 What comes first?

I should start with Schellenberg and Comesaña’s invitation to say more about the question of what comes first. As I have introduced it in the book, the question is not that of what concept has explanatory priority in epistemology, nor a question of the proper methodology of epistemology. It is not even a question within epistemology at all, although the focus on epistemology in the book and the salient contrast between the slogan of ‘reasons first’ and ‘knowledge first’ makes the question inevitable. It is, rather, a question from meta-normative theory about which normative property or relation, if any, has analytic and explanatory priority over each of the others.

So insofar as it applies to epistemology, it tells us that reasons have analytic and explanatory priority over the central normative concepts in epistemology, so that it would be a mistake to try to understand these without appealing to reasons. Rationality and justification are, I believe, normative, and so if the thesis of reasons first is true, then that tells us something about them. In the book I also argue that knowledge is normative. Among Schellenberg and Comesaña’s candidates for what to take as fundamental in epistemology, however, are mental states, reliability, and capacities. I suspect that none of these is essentially normative. And so even if reasons come first in my sense, that would still tell us nothing about what work might be done by any of mental states, reliability, or capacities.

It would, however, limit how any of these things could help to tell us about rationality, justification, or knowledge – they would have to do so in a way that is compatible with reasons’ analytic and explanatory role in the proper account of these. But there is a lot of room for mental states, reliability, and capacities to play such a role. My reasons-based explanations of rationality, justification, and knowledge in the book all rely on the concept of subjective reasons, and I argue in the book that in order to have a subjective reason you must be in the right mental state. This should not be surprising, because Schellenberg and Comesaña identify dogmatism as the familiar view in epistemology that begins with mental states, and I think of the view in chapter five of the book as providing a deeper and more theoretically satisfying explanation of why dogmatism is correct.

Similarly, there is almost certainly a role for reliability in the theory of content determination of perceptual states, and so reliability is almost certainly part of the picture of how perceptual experiences justify, and there is almost certainly a role for capacities to play in explaining which kinds of mental state count as possessing a reason. In none of these places do the views in the book obviate the importance of these concepts; they simply temper and direct them to be applied in a way that is more cognizant of the fact that we should be giving explanations of rationality and justification that are continuous with how they apply outside of epistemology.

3 Rationality in the bad case

Almost all of my critics’ fire, however, is directed at Part 2 of the book – the chapters in which I consider what sort of evidence perceptual experience equips us with about the external world. In chapter three, I argue that epistemology of the last century or more has been unduly shaped by the assumption that evidence must be true. In chapter four, I argue that this assumption is not true for the kind of evidence that provides subjective reason to believe, and that one (though not the only one) of the reasons why it has been taken for granted falls out of the orthodox way of answering what I call the problem of unjustified belief. And in chapter five, I offer my own answer to what evidence perceptual experiences equip us with about the external world – one that answers some of my own worries about earlier attempts I have made to solve this problem.

Silva and Bernecker worry about my central objection to one of the key alternative views in chapter three. The basic dialectic of the chapter is that if you assume that the evidence with which perceptual experience equips us about the external world must be true, then you face a dilemma. If this evidence does not entail anything about the external world (is not “world-implicating”), then it leaves a gap – the gap that is occupied by skeptical arguments. The various attempts to cross this gap lead to correspondingly various forms of coherentism, rationalism, brute dogmatism, and, by way of Armstrong’s In, to traditional forms of externalism. On the other hand, if this evidence does entail anything about the external world (is “world-implicating”), then you cannot have it in the bad case. And if so, then your evidential position in the bad case is inferior. But if your subjective reasons are inferior in the bad case, then the worry is that you have less to rationally support your belief. But, I contend, it is equally rational to believe in the good and bad cases, at least when these cases share the same history and environment, and differ only in the single-case veridicality of a single perceptual experience. Moreover, you can be equally rational in your belief formed in that case.

Silva and Bernecker advocate a version of what I call a backup reasons strategy in bad cases like these, which is the answer to this objection that I take most seriously in the book. They say that even in the bad case, in which you don’t see that there is something red because you are wearing rose-colored glasses, you have available to you the evidence that it seems to you that you see something red. In the book I considered this kind of response and offered two answers. The first was dialectical. At this point in the dialectic, I have argued that the other fork of the dilemma in chapter three is unsatisfactory. And if we were convinced that the fact that it seems to you that you see something red were just as strong a reason to believe that there is something red as that there is something red, then I submit that we would not have been dissatisfied by the first prong of this dilemma in the first place, and so there is no particular reason why we should have ended up on the second prong.

Silva and Bernecker’s appeal to Alston’s externalist answer to the question of why the fact that it seems to you that you see something red supports believing that there is something red is in fact clear evidence that they should see the option on the first prong of the dilemma as satisfactory. After all, Alston offers this view not as backup evidence in the bad case that is also supplemented by the equally good primary evidence in the good case that there is something red, but as an account of the evidence that applies to both the good and bad cases. Indeed, Alston’s move is precisely the kind of thing that Armstrong [1973] was referring to when he argued that there is no non-externalist path to explaining why the fact that it seems to you that you see something red is just as good a reason to believe that there is something red as the fact that there is something red is, rather than just as good a reason to believe that the ones and zeros of the matrix are arranged red-wise, as the fact that the ones and zeros of the matrix are arranged red-wise is. It is what I refer to in the book as Armstrong’s In, and argue (following Armstrong) is the first step of a slippery slope to doing epistemology without evidence altogether.

It might be, however, since Silva and Bernecker are attracted to this combination, that they see something that the disjunctivist’s account of the extra evidence available in the good case can add to Alston’s view – something that Alston’s view alone cannot provide. But I am not sure what this would be. You might think, perhaps, that it could give us a better explanation of how we are in a position to rationally believe the ingredients of Alston’s explanation are actually true. I consider a variant of this idea in the book – on which when good case/bad case pairs share an identical history of mostly good case experience, it is equally rational to believe in both cases, but part of the explanation why that is so, is that in the history of good cases you collect track record evidence that is required in order to make your fallback reason in the bad case just as good as your reason in the good case. But this explanation of the importance of the world-entailing evidence in the good case is flatly inconsistent with Alston’s account, on which the fallback reason is better support for the external world conclusion than it is for the matrix conclusion simply because there is an external world, and there isn’t a matrix. So I remain stuck with the dialectical point that I don’t understand why someone would go for this combination of views.

But fortunately I also give a non-dialectical argument against views like this in the book. It is that in order for this backup evidence not only to make it propositionally rational for you to believe that there is something red, but to make you doxastically rational in your belief that there is something red, you must hold this belief for this reason – it must be what epistemologists like to call the basis for your belief. As Silva and Bernecker note, Errol Lord [2018] has developed a creative and permissive view of basing one of whose principal motivations is to answer this objection, and I agree with them, as I say in the book, that everything hangs on whether it can be done successfully.

4 Subjective defeat

The problem of bad case rationality is in a way a problem about how world-implicating factive views can lead to too little subjective reason to believe. But there is also a dual problem that I consider in chapter five, according to which such views can also allow for too much subjective reason to believe. This problem arises in cases of what I call subjective defeat, and I argue in chapter five that it also besets some kinds of non-factive world-implicating view.

Silva and Bernecker have similar worries about how world-implicating factive views might respond to the problem of subjective defeaters to those they have about how they might respond to the problem from bad cases. And Schellenberg and Comesaña suggest that there is an all-purpose solution to the problem that non-factive world-implicating views can take on as well, without needing to go in for my solution in the remainder of chapter five. Let’s take Silva and Bernecker first.

The problem of subjective defeat is simple. If all that it takes to have a world-implicating piece of evidence as a reason to believe is that you see that it is true, then you can have this reason to believe even if you have excellent evidence that you are not seeing that it is true. For example, you might see that there is something red in front of you, but also be told by a reliable source that you are wearing rose-colored glasses. So you rationally believe that you are wearing rose-colored glasses, from which you know that it follows that you can’t be seeing that anything is red. Nevertheless, you are not wearing rose-colored glasses, so you are still having a veridical perceptual experience that there is something red. So you see that there is something red.

This is a problem for world-implicating factive views, because it is not rational to believe that there is something red when your only basis for this is your visual evidence but you rationally believe that you are wearing rose-colored glasses. But the world-implicating factive view says that you do have a reason – indeed, a totally compelling reason that entails the conclusion that there really is something red in front of you. And it is also a problem for simple kinds of non-factive world-entailing views, such as the view that I used to endorse that I call the non-factive content view, according to which you come to acquire the contents of your non-veridical, as well as your veridical perceptual experiences as reasons to believe. On this view, it still appears visually to you that there is something red even though you rationally believe that you are wearing rose-colored glasses, and so you still have excellent – entailing – evidence that there is something red.

Silva and Bernecker’s response to this problem distorts it from the outset by distorting the nature of the defeating evidence. In addition to an analogue (B) of that there is something red, which is the evidence provided by your perceptual experience, they say that the defeating evidence comes from a proposition of the form D and therefore probably not B. They seem to be thinking that the fact that you are wearing rose-colored glasses implies that there is probably not something red, and that is why it defeats your visual evidence that there is something red.

But my case does not turn on you believing that there is probably not something red – it only turns on your believing that you are wearing rose-colored glasses. And I claim – indeed, it is the whole basis for this argument – that if you believe that you are wearing rose-colored glasses, it is not rational to use your visual evidence to form beliefs about what is red. Maybe you can believe that things are red on independent grounds, such as that you expected to be surrounded by mostly red things – maybe you are stopping in for tea with the Queen of Hearts. But it is not rational to treat your visual evidence as adding to this support.

Silva and Bernecker’s answer to my argument, however, trades on their distortion of the case. They say that whatever evidence makes it rational to believe that D and therefore probably not B will also make it rational to believe that probably not B (which it entails), and therefore make it irrational to believe that B. I agree with this entirely, which is precisely why I did not say that your subjective defeater is D and therefore probably not B, but instead that it is D. But unfortunately, D (that you are wearing rose-colored glasses) does not entail that probably not B, and my objection applies, as I have just shown, even in cases in which they are appropriately correlated.

Schellenberg and Comesaña offer a much more promising response to the problem of subjective defeasibility – indeed, what I believe is the only tenable response. They argue that this case can be subsumed under familiar cases of higher-order evidence. In these cases, you use some clear form of evidence – say, simple arithmetic – to form a belief (usually in the examples it is about how to split the tip, or about how much further the fuel in your plane over the ocean will last), but then you acquire evidence that you are not doing arithmetic well – perhaps because you are suffering from hypoxia. It is a familiar thought shared by many about these cases that it is less rational to draw your conclusion about how to split the tip once you learn that you are hypoxic, but that the arithmetic that you did still supports this conclusion equally well – indeed, by entailing it. Schellenberg and Comesaña call such cases exogenous defeaters.

It would be in a way surprising if it turned out that classic cases of undercutting defeat like believing that you are wearing rose-colored glasses actually work like these newfangled cases of exogenous defeaters, given that they were some of the principal examples used by philosophers fifty years ago to introduce the concept of what Schellenberg and Comesaña call endogenous defeaters. But it would not be too surprising, given that the theorists who used these examples were plausibly taking for granted a non-world-entailing conception of the evidence provided by perceptual experience. And indeed, my own account of these cases later in chapter five is also quite different from the traditional model for endogenous defeaters.

This is the point in this response where I say the unsatisfactory thing that I don’t have any great argument against Schellenberg and Comesaña’s suggestion – I considered it myself for a while before latching onto my preferred view, the non-factive attitude view, about eleven years ago. The main reason why I went for my view rather than this one is that I was simply not confident that cases of higher-order evidence are really cases of defeat at all. There is certainly something complicated going on in cases of higher-order evidence – indeed, over the last fifteen years there has come to be an enormous literature about exactly what this is. But I have never been able to figure out what to think about it, and so I developed a view that did not require me to figure it out. If you are satisfied with this response to the problem, however, then you can safely skip chapter five of the book because I will then have accomplished the dialectical aims of Part 2 in chapters three and four.

5 The apparent factive attitude view

My own solution to the problem of subjective defeasibility is what I call the apparent factive attitude view. On this view, when it visually appears to you that there is something red, you acquire the evidence that you see that there is something red. This entails something about the external world, because ‘see that’ is factive. So just as with other world-implicating views of perceptual evidence, having a perceptual experience gives you the best possible evidence that there really is something red – something that entails it. At the same time, it easily and elegantly explains the equal rationality of belief in the good and bad cases, because you acquire this reason whether or not your visual experience is veridical.

But the apparent factive attitude view also avoids the problem of subjective defeat, because that you see that there is something red is also inconsistent with that you are wearing rose-colored glasses. So when it visually appears to you that there is something red and you also rationally believe that you are wearing rose-colored glasses, you have two subjective reasons – two pieces of evidence – that cannot both be true. So you have to reject one of them. Since by stipulation of the case it is rational to believe that you are wearing rose-colored glasses, that isn’t the one to reject. So you should instead reject your visual evidence.

Schellenberg and Comesaña dislike almost everything about this proposal. They dislike that it allows your evidence to be inconsistent. But this seems to me to be a purely terminological matter; if we only dignify as your “evidence” whatever survives a rational process of being made consistent that starts with what I have called your “evidence” in the book, then as far as I can tell nothing is lost, so long as we remember what does the ultimate explaining and how. They dislike the vague association that I drew between motivating this view and an argument due to John Searle, and they dislike the implication that we somehow represent perceptual modes in our perceptual experiences. I’m going to set aside the objection to Searle because I have not endorsed (and indeed reject) any of the commitments about the perception of individuals to which they object. But their arguments against my conjecture that perceptual phenomenology represents perceptual modes are important to address.

Schellenberg and Comesaña begin by asserting that “in perception (even consciously accessible perception) we do not necessarily represent the sensory mode via which we gain information about our environment”.[1] But this is not exactly contrary to my own view, for I did not say that when you have a visual experience as of something red, you represent the mode by which you actually gain information about it. For one thing, my view applies in bad cases in which, perhaps because you are hallucinating, you have not actually gained information about anything. But for another, my view is compatible with, and indeed motivated by, the fact that the way that you did gain information is not the same as the mode that is represented.

This is exactly what happens in cases of mixed mode perception, according to my view. In experimental cases of mixed mode perception, we can verify through controls which way a subject acquires information about their environment. And then they can tell us where it “sounds” like it is coming from, or how it “feels”, or how it “looks”. It turns out that in a wide class of similar experiments, subjects routinely report the phenomenology of (for example) auditory experience of things where we know that the relevant controlling information comes from retinal stimulation.

Schellenberg and Comesaña contend that these cases flatly show that my view is incorrect, but on the contrary, these are precisely the cases that are grist for my view. If the phenomenology of audition does not represent the spatial source of a sound as coming from hearing rather than from sight, then what, exactly, do experiments like these show that is surprising and interesting? It is not that we believe the source of the information comes from our eardrums. On the contrary, the experiments work on subjects who know exactly how the experimental apparatus works. No, instead the information appears to be auditory. That is precisely what its phenomenology as auditory consists in. So these experiments do not reveal that the phenomenology of perceptual experience does not represent a mode; on the contrary, they strongly suggest that it does.

Now, Schellenberg and Comesaña also argue, somewhat more forcefully, that if the apparent factive attitude view’s commitments were true, then animals would need to represent their perceptual modes as well. But this is not exactly right. Some animals – sponges, surely, and I imagine probably insects – receive and process information about their environments without needing to distinguish their sources. Others – dogs, I imagine, and I suppose birds – might. I am only committed to the view that in order to experience a phenomenological difference between audition and vision, you have to somehow represent perceptual modes. But this seems to me not an overly audacious prediction, not only because it will be extraordinarily difficult to falsify experimentally with evidence about insect phenomenology, but because something must go into the phenomenologically experienced difference in such experiences. So although the apparent factive attitude view remains the single most speculative conjecture in the entire book, I am not persuaded that it can be taken down so easily.

6 Option-dependence

In the framing of the book, the reason why we go down the path of considering whether perceptual reasons must be true even to make beliefs rational is that, as I argue, one plausible explanation of why it has gone surprisingly unquestioned in epistemology whether the reasons that rationalize beliefs must be truths is the default answer to what I call the problem of unjustified belief. The problem of unjustified belief is that when a belief is unjustified, it does not seem that it should be able to play any role in making other beliefs rational or justified, or to be the basis for further knowledge. So something must rule out unjustified beliefs providing a source of reasons, and the default answer is that what must rule this out is applying some normative condition – justification, rationality, or knowledge – on the possession of subjective reasons for belief. Because subjective reasons must come from justified sources, they cannot come from unjustified beliefs. But since justification is on this view a prior constraint on subjective reasons, it follows that justification cannot itself be fully grounded in or analyzed in terms of subjective reasons.

But Wedgwood argues forcefully in his contribution that the problem of unjustified belief is just the leading edge of a more encompassing problem. My solution to the problem of unjustified belief does rule out beliefs that it is irrational to have as making any further contribution to what else it is rational to believe. But it leaves open that beliefs that have no further basis but are not actively ruled out as irrational by other beliefs could still provide a rational source for further beliefs. And you might think that is bad. Indeed, Wedgwood argues that it is very bad, because it makes the question of what it is rational for you to believe depend on the question of what you do go on to believe. And that, Wedgwood argues, violates a compelling constraint known as option-independence.

Option-independence, as Wedgwood defines it, says that the answer to what you ought to do cannot depend on what you in fact end up doing. So as Wedgwood defines it, option-independence is a principle about what you ought to do. But I never say anything in the book about what you ought to believe. I only discuss what it is rational to believe. So I am at most committed to denying the option-independence of rationality. But these two principles are not equally compelling. Part of what is compelling about option-independence for ought is that the answer to the question of what you ought to do is unique. If there are n incompatible options, and any of them at all is one that you ought to take, then it is the only one that you ought to take. If ought were option-dependent, then it would frustrate your efforts to use your knowledge of what you ought to do in order to decide what to do. You might figure out that you ought to do something and then do it, but since what you ought to do depends on whether you do it, it could end up being the case because you do it that you ought to have done something else instead.

In contrast, the option-dependence of rationality does not automatically have this consequence. Many of your options could be rational. For example, you could have three options, A, B, and C. If rationality is option-dependent, it could be that if you do A, then all of the options are rational, if you do B, then A and B are rational, and if you do C then only A and C are rational. This poses no obstacle to your figuring out what to do – no matter what you do, it will be rational. In this scenario, option-independence fails, but because rationality is permissive, its failure is consistent with its advice always being followable.

So I am not convinced that option-independence is necessarily a compelling principle when applied directly to rationality. Indeed, I am attracted to the view that it is false. For its falsity is entailed, I think, by a kind of principle of conservatism, according to which it is rational to keep believing what you already believe unless you find some problem with it. I suspect that something like this principle is true, and though I didn’t argue for it in the book, the Horty-inspired model in Sect. 4.4 of the book takes it for granted. The principle of epistemic conservatism says that so long as you start by believing something, it does not matter whether you have other evidence for it, for it to be rational to continue to believe it. If you begin with some such unsupported belief, the principle says that it is rational to continue with it.

But suppose that you begin with such an unsupported belief but then give it up. In that case, I believe, it would not be rational for you to believe it anymore, for you have no evidence that it is true. So whether it is rational for you to believe it does not just depend on what you believe at the earlier time; it depends on what you go on to believe at the later time. This is not a damned-if-you-do/damned-if-you-don’t type dilemma – it is a blessed-if-you-do/blessed-if-you-don’t situation. So nothing about this makes the advice given by rationality unfollowable.

Still, Wedgwood speculates that the particular Horty-inspired model that I give in Sect. 4.4 does more than this, also generating damned-if-you-do/damned-if-you-don’t cases. And I agree with Wedgwood that this is something that we should want to eliminate. Since it wasn’t the issue that I was trying to illustrate with the model, I didn’t worry about articulating a version of the model that would avoid this kind of problem, and the model that I used drew on the simplest model from Horty [2012]. But it turns out that Horty’s book actually develops a more sophisticated definition of defeat that is designed to eliminate problems like this one.[2] And because on Horty’s model it is not really individual beliefs that are rationally supported, but rather packages of beliefs known as extensions, it is also easy to amend the model to rule out packages that violate holistic rational constraints on belief, such as Silva and Bernecker’s cases of self-defeating beliefs: extensions need to be stable, and we can add further conditions on stability if necessary.

So I believe that the question of whether you can allow for the kind of “innocent” option-dependence that I favor without allowing for the kind of problematic option-dependence to which Wedgwood objects is just one to be worked out in the mechanics of how reasons combine, which is not a question that I was trying to answer in the book. The only thing that I was trying to illustrate with that model is that it is easy to define models for how reasons support conclusions that do not obey the principle of reflexivity – allowing us to throw out some of the reasons with which we start, and therefore allowing us to start with inconsistent sets of reasons – the key ingredient both in my answer to the problem of unjustified belief in chapter four and my account of the objective defeasibility of perceptual evidence in chapter five.

Still, although I am sympathetic to the principle of conservatism mentioned above, you could also preserve all of the central ideas in the book by giving up my suggestion that belief is ever a way of possessing a reason in favor of the view that only seemings are a way to have subjective reasons. In that case, everything would go through as before, I believe, but there would be a straightforward, reasons-first, explanation of why there is no option-dependence of any kind – for while beliefs are options that can be rational or irrational, seemings are not. Consequently, I think that Wedgwood’s objection is not to the priority of reasons or to their holistic combination, but just to the view that reasons can ever come from the same thing that is one of the options. And I would be happy to accept that as an amendment.

7 Knowledge from Falsehood

Finally, Silva and Bernecker argue that the Kantian theory of knowledge that I develop in the book faces problems because it incorporates uncritically the “no false lemmas” principle about knowledge, and therefore is counterexampled by so-called cases of “knowledge from falsehood” that have been offered as counterexamples to that principle. In these cases, you quickly count 48 people in the room and conclude that you have enough food because you ordered food for sixty, but in fact there are only 47 people in the room and you counted someone twice because they moved. Intuitively, the proponent of these cases argues, you do know that you have enough food, but the reason on which you based this belief is false. So knowledge can, after all, be based on false lemmas.

It is true that Reasons First takes for granted – inappropriately, it should be added, given the recent prominence of such examples – the “no false lemmas” principle. So there is no solution to this problem in the book. And it is a problem that plagues both the final Kantian theory of knowledge in Part 4 of the book, and also the explanation of the defeasibility of perceptual evidence in chapter five. So it’s an important issue relevant to the final assessment of whether the pieces of the positive views in the book can all work. But Silva and Bernecker go farther, and suggest that it is a further problem that I am also unable to adopt the familiar answers to how we can have knowledge from falsehoods.

But I don’t think that this is right. Each of the off-the-shelf answers to the cases of knowledge from falsehood is at home in a different kind of theory of knowledge. Safety-theoretic characterizations of knowledge will say that we can have such knowledge because it is still safe. Reliabilist theories, on the contrary, will not say this, but they will say that the belief is still formed by a reliable method. So while it is true that the Kantian theory needs its own answer, it should never have been expected that its answer would look like anyone else’s answer. The more interesting question is whether it will be especially difficult for it to give a plausible answer.

I do think that there are things that the Kantian account can say about this, some of which require modifying the account a little bit in light of these cases, and all of which require more space than I have here. But I want to close on a conciliatory note by observing that my response to one of Silva and Bernecker’s own earlier arguments actually makes this harder to do. One of the strategies that you might try on behalf of the Kantian view is to allow that in the counting case, your knowledge can still be based on the “backup reason” that there are approximately 48 people, which is still, after all, true. But I have already claimed both here and in the book that it is difficult to give an account of basing on which this would be true, and that was an important part of my argument against the claim that perceptual reasons must be true. And if there is an account of basing that works here – whether Lord’s, or some other – then it could likely be extended to answer my objection to world-implicating factive views in chapter three, thus dialectically limiting the space of responses to this problem that I can offer without giving up something else.[3]