Let us reconsider Rational Disregard, the principle that a subject who knows that a certain piece of evidence is misleading is rationally entitled to disregard it. It has been said that, even if it may initially sound plausible, this principle can be criticized as ‘bad epistemic policy’:
Acceptance of [Rational Disregard] overlooks the fact that we often take ourselves to know when we do not […]. Cases will arise where we apply the principle to things we merely think we know, and by disregarding what could be corrective evidence, we force our heads deeper and deeper into the sand. [Rational Disregard] is bad epistemic policy. (Veber 2004, 567)
Veber’s complaint in this passage is essentially this: Rational Disregard says that it is rational for one to disregard evidence that one knows to be misleading, but if one were to use this principle as a policy or rule guiding one’s epistemic endeavours, one would often end up disregarding evidence that should not be disregarded – for we often take ourselves to know when we do not. Therefore, Rational Disregard should be rejected.
The complaint is not entirely fair, for – as Lasonen-Aarnio points out – “the truth of [Rational Disregard] does not entail in any straightforward way that a subject ought to employ [a maxim like: ‘If you know that a piece of evidence is misleading, ignore it’] as a policy or rule guiding her belief-revision” (2014, 430). We can see Rational Disregard as offering a third-personal normative evaluation, not a first-personal normative directive – let alone one that subjects may be described as ‘following’ even when they disregard a certain piece of evidence only because they think they know it to be misleading.[15] But now consider the following variant of Rational Disregard, obtained by replacing ‘knows’ with ‘believes’:
Rational Disregard*: As long as S believes that E is misleading evidence that not-h, it is rational for S to disregard E as it bears on whether h.
Here a complaint along the lines of Veber’s seems perfectly apt and can plausibly be made without interpreting the principle as offering a ‘followable’ policy, rule or directive. When one disregards evidence that one believes to be misleading, one may be doing something irrational. Since our beliefs are often false, there are plenty of cases where we regard what could be corrective evidence as misleading. By disregarding such evidence, we would force our heads deeper and deeper into the sand. There is nothing rational about that. So – whatever the merits of Rational Disregard – Rational Disregard* should certainly be rejected.
The question is where the falsity of Rational Disregard* leaves us with respect to our goal, which is to undermine the plausibility of No Early Appreciation. If Rational Disregard* is false, it is not rational for S to disregard evidence that she believes to be misleading. But this does not yet show that it is possible for S not to disregard such evidence. In general, it is not obvious whether and how one can go from ‘It is not rational to φ’ (or, for that matter, ‘It is rational not to φ’) to ‘It is possible for a rational agent not to φ’. And in this particular case, the transition is especially problematic: we can see perfectly well that S should not disregard the evidence, but – in order to have a satisfactory response to the Paradox of Belief Revision – we need a concrete account of how it can make sense, from S’s own perspective, not to disregard it – indeed, we need a concrete account of how S can bring herself to appreciate the significance of the evidence.
It is at this juncture that self-reflection can be seen to play an important (and, perhaps, irreplaceable) role. The suggestion I want to explore in the remainder of this section is that, insofar as a rational agent is aware of her own beliefs as beliefs she has, she will be able to ‘see’ a certain kind of risk involved in disregarding evidence that she takes to be misleading, and that ‘seeing’ that risk will allow her to appreciate why such evidence should in any case bear on the question at hand. Thus, the assumption that rational agents are systematically aware of their own beliefs as beliefs they have may put us in a position to deny No Early Appreciation and, thereby, see a way out of the Paradox of Belief Revision.
To begin to see the connection between the ability to form true second-order beliefs about one’s mental states and the ability to appreciate the significance of one’s evidence, it is useful to reflect on a type of reasoning that (though perhaps not frequent in everyday life) is philosophically familiar and relatively well-understood. Consider René, a rational subject endowed with the ability to form true second-order beliefs about his mental states. One day, a sceptic challenges René to produce evidence that he is not an envatted brain whose experiences are produced by the stimulations of a powerful computer, and not by the kind of ‘external world’ that we ordinarily take ourselves to inhabit. Initially, René is tempted to rule out the truth of the sceptical scenario by resorting to the simple fact that he has hands. But then the following line of thought occurs to him:
“If the scenario described by the sceptic were true, I would not have hands, and yet it would seem to me as if I did. For if I were a brain in a vat, the computer stimulating my brain would produce in me exactly the same kind of experiences to which I owe my conviction that I have hands. Therefore, a certain risk is involved in letting the fact that I have hands bear on the question whether the sceptical scenario is true or not. It would be nice if I could avoid that risk and rest my response to the sceptic on a more solid basis…”
This is not the place to discuss whether it is possible for René (or anyone else in René’s position) to find a ‘more solid basis’ on which to refute external world scepticism. What interests me in this example is the notion of ‘risk’ it involves. While I do not pretend to have a fully worked-out account of this notion, I think that even a partial and intuitive grasp of it encourages two considerations which may prove helpful in connection with our discussion of No Early Appreciation.
First, in whatever sense it is ‘risky’ for René to respond to the sceptic by relying on the fact that he has hands, the relevant risk is one that René would not be able to ‘see’ if he did not have the ability to form true second-order beliefs about his mental states – specifically, beliefs about his own perceptual experiences. Note that it would not suffice for René to have the general notion of an illusory perceptual experience – for mere possession of that notion (or application of that notion to someone else) would not put René in a position to see the fact that he has hands under the dubious light that the sceptical scenario is meant to shed upon it. René needs to be aware that the fact that he has hands is also the content of a perceptual experience he is having if he is to be able to see that content as something that, under the circumstances described by the sceptic, would still seem to him to be a fact without being one.
Second, insofar as he can see a ‘risk’ in his first response to the sceptic, René may be rationally motivated to avoid that risk. And, equally important, he can be so motivated even if he retains his belief (and, perhaps, knowledge of the fact) that he has hands. It may be that, further down the line, René’s epistemic perspective will change. He may end up a sceptic and revise his belief that he has hands. But, initially, the most immediate effect of his seeing the risk will be to prompt him to seek a different (‘more solid’) basis on which to rest his rejection of the sceptical scenario. And that is just to say that, by seeing the risk, René can come to appreciate that certain evidence, despite being perfectly good, should not bear on the question whether the sceptical scenario is true.
My proposal is that, in self-reflective agents, any case of rational belief revision involves a dynamic that is parallel and symmetric to the one we observe in René. As in René’s case, a certain risk is revealed to the agent thanks to his or her ability to form true second-order beliefs about his or her mental states. But, this time, what the agent comes to appreciate by way of perceiving the risk is that certain evidence, despite being (believed by him or her to be) misleading, should bear on the question at hand.
Another example will make the parallel vivid. Consider again Mary and her belief that John is in Paris. Let us assume that, just like René, Mary is a rational subject endowed with the ability to form true second-order beliefs about her mental states. Specifically, let us assume that Mary is endowed with self-reflection and, therefore, can form the true second-order belief that she believes that John is in Paris. When John calls Mary and tells her that he is in Madrid, Mary’s initial reaction is to dismiss John’s testimony as irrelevant, based on what she takes to be a fact, namely that John is in Paris, not in Madrid. But then self-reflection puts Mary in a position to reason as follows:
“If the scenario described by John were true, John would not be in Paris, and yet it would seem to me as if he were. For I believe that John is in Paris and, when we believe something, what we believe seems to us to be a fact whether or not it is one. Therefore, a certain risk is involved in disregarding John’s testimony based solely on a fact that, if his testimony were true, would not be a fact at all. If I want to avoid this risk, I should allow John’s testimony to bear on the question whether he is in Paris or in Madrid…”
The two points I made above, in connection with René’s perception of the ‘risk’ involved in his first response to the sceptic, apply here as well. First, Mary would not be able to see the risk involved in her initial dismissal of John’s testimony if she were not a self-reflective agent. It will not suffice for her to have the general notion of a false belief – for mere possession of that notion (or application of that notion to someone else) will not put her in a position to see what she takes to be a fact (namely, that John is in Paris) as something that, in the scenario depicted by John, would seem to her to be a fact without being one. Second, we can see Mary’s perception of the risk as affecting her attitude towards the evidence she has at her disposal. Maybe, further down the line, Mary will give up her belief that John is in Paris. But even as she clings to this belief and continues to regard contrary evidence as misleading, she can come to appreciate that (absent other, independent reasons to disregard it) such evidence should bear on the question whether John is in Madrid or in Paris.
The general picture of belief revision emerging from this example looks as follows:
(i) Thanks to self-reflection, an agent who believes that p can become aware of the fact that she believes that p.

(ii) In being aware of the fact that she believes that p, the agent is aware that, if her belief that p were wrong, it would not be the case that p but it would still seem to her as if p.

(iii) Therefore, when the agent acquires a piece of evidence E that she takes to be evidence that not-p, she can see a certain risk involved in disregarding E based on (what she takes to be) the fact that p.

(iv) Seeing this risk and being rationally motivated to avoid it, the agent comes to appreciate that (absent other, independent reasons to disregard it) E should bear on the question whether p, even if she continues to take E to be misleading evidence.
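For readers who find symbolic compression useful, the four steps can be given a rough doxastic-logic sketch. The notation is mine and purely illustrative: B abbreviates ‘the agent believes that’, S abbreviates ‘it seems to the agent that’, and ‘Risk’ is an informal placeholder rather than a defined operator:

```latex
\begin{align*}
\text{(i)}\quad   & Bp \;\rightarrow\; BBp
  && \text{(self-reflection)}\\
\text{(ii)}\quad  & BBp \;\rightarrow\; B(\neg p \rightarrow Sp)
  && \text{(if wrong, it would still seem that } p\text{)}\\
\text{(iii)}\quad & B(\neg p \rightarrow Sp) \;\rightarrow\;
  \text{Risk}(\text{disregarding } E \text{ on the basis of } p)\\
\text{(iv)}\quad  & \text{Risk}(\text{disregarding } E \text{ on the basis of } p)
  \;\rightarrow\; E \text{ should bear on whether } p
\end{align*}
```

On the reading defended below, ‘Risk’ does not express an epistemic possibility the agent leaves open; it records only her awareness that, were she wrong, she would not be able to tell.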
I admit that this picture puts a heavy burden on the notion of ‘risk’, and that I have not said enough to put this notion on a firmer footing. One might be tempted to do that by linking risk with epistemic modality – for example, it might be suggested that (for any agent x) there is a risk that p if, and only if, (relative to x) it might be the case that p. But note that, on this view, the agent’s ‘seeing the risk’ would be a matter of her realizing that she ‘might’ be wrong in regarding the relevant piece of evidence as misleading. And it is unclear how the agent could realize that while, at the same time, retaining her belief that the piece of evidence in question is misleading. When one believes that p, one has made up one’s mind in such a way that one no longer regards its being the case that not-p (and therefore, its being the case that one wrongly believes that p) as a live possibility.[16] We must, therefore, understand (iii) and (iv) differently – not in terms of the subject’s being open to the possibility of being wrong, but simply in terms of the subject’s awareness of the fact that, if she were wrong, she would not be able to tell (at least, not based solely on her current evidential resources).
It may be complained that this kind of ‘risk’ (if it is one) cannot rationally motivate the agent to change her appreciation of the evidence – for the only risks that can do that are those representing possibilities that the agent has not ruled out. But this complaint begs the question against the present proposal. The point of René’s example is that one can take sceptical arguments seriously – and be motivated by them to look for a ‘solid basis’ on which to refute them – while remaining convinced of the fact that one has hands.[17] If we are to solve the Paradox of Belief Revision by denying No Early Appreciation, it is exactly this combination of attitudes that we need to make sense of. My tentative suggestion is that we can do so if we help ourselves to the assumption that rational agents are systematically aware of their own beliefs as beliefs they have.