1 Introduction

A central debate in metaethics concerns the nature of moral judgments.Footnote 1 Cognitivists argue that moral judgments are beliefs, whereas non-cognitivists argue that moral judgments are desire-like attitudes. Moral judgments appear to motivate us in a way that is characteristic of desires. Considerations based on the practicality of moral judgments suggest that moral judgments are desire-like attitudes rather than beliefs.Footnote 2 Cognitivists, on the other hand, stress various belief-like features of moral judgments. We talk, for example, as if moral judgments are true or false. This makes sense given that moral judgments, like beliefs (and in contrast to desires), have a mind-to-world direction of fit. But whether moral judgments really fit the belief mold depends on the details. Several philosophers (Bedke 2014a, b; Rowland 2018; Suikkanen 2013) have recently argued that moral judgments do not invariably have belief-like features. The arguments turn on two premises. The first premise concerns the nature of belief: what makes a mental state a belief is that it is responsive to evidence. The second premise claims that some moral judgments are evidence non-responsive. If the two premises are true, this constitutes a challenge to the cognitivist’s identification of moral judgments with beliefs.Footnote 3 The exact details of the premises and conclusions differ somewhat between the authors, but the key premise concerns the apparent fact that some moral judgments are evidence non-responsive. This motivates examining the arguments together. Since it is evidence non-responsiveness that figures as the key premise, we will call it the argument from evidence non-responsiveness.

This paper has two aims. First, it aims to systematically examine different versions of the argument from evidence non-responsiveness. Second, it aims to outline a more nuanced understanding of the sense in which beliefs are evidence responsive that also explains why the extant versions of the argument do not constitute a challenge to cognitivism. The plan of this paper is as follows. In the next section, we present the outlines of the argument from evidence non-responsiveness. In section 3 we consider some challenges to the second premise, i.e., that there are subjects whose moral judgments are evidence non-responsive, but end up arguing that it has intuitive support. In section 4 we give a brief rationale for the first premise. Section 5 examines some important differences regarding how the first premise is understood by the different authors. Section 6 outlines an explanation of how a belief can (systematically) deviate from the normal profile of belief, which shows why the extant versions of the argument from evidence non-responsiveness do not work. Finally, in section 7, a rough test for belief is advanced.

2 The argument from evidence non-responsiveness

What makes a mental state a belief? In metaethics, cognitivists are seldom explicit about how they understand the nature of belief. The focus is predominantly on what beliefs are about, e.g., whether they are about natural or non-natural properties and whether those properties are mind-independent or not. If we are told anything, we are usually merely told that beliefs are mental states with a mind-to-world direction of fit or that they aim at the truth. A notable exception is Michael Smith, who argues that “a belief that p tends to go out of existence in the presence of a perception with the content that not p” (Smith 1994: 115).Footnote 4 This view is reminiscent of a view that is widely endorsed regarding the nature of belief: the distinguishing feature of a belief is that it is a mental state that is evidence responsive.Footnote 5 This is also the kind of view that the argument from evidence non-responsiveness relies on. Consider the following passages:

Normally, we expect beliefs about matters of fact to go out of existence when we receive defeaters. (Bedke 2014a: 123)

[a]ccording to the most popular theories of beliefs, if a subject believes that there seems to be sufficient evidence for the falsehood of her mental state and she can still continue to be in that state […] then that mental state cannot count as a belief. (Suikkanen 2013: 178)

If one gains what one believes to be new evidence that bears on whether p, and one’s judgment regarding p (whether p) is a belief, then either (i) one adjusts one’s judgment regarding p in the light of and in line with this evidence or (ii) one adjusts one’s judgment that what one gained was in fact evidence bearing on whether p. (Rowland 2018: 270)

Although there are important differences in these passages that we will return to below, they are nevertheless united in their endorsement of a thesis according to which evidence responsiveness is a distinguishing feature of belief. To avoid misunderstanding the relevant thesis, two things need to be emphasized.

First, the thesis is descriptive rather than normative, i.e., it is a claim about how a belief actually functions rather than a claim about how a belief ought to function. Mark Eli Kalderon (2005) and Alison Hills (2015) have advanced arguments against cognitivism based on an alleged normative difference between moral judgments and beliefs. Kalderon, for example, argues that “the norms governing moral acceptance differ from norms appropriate to belief” (Kalderon 2005: 42). Considering this kind of argument is outside the scope of this paper.Footnote 6 Our focus is on evidence responsiveness as a descriptive claim.

Second, it is important to be clear about how “evidence responsiveness” should be understood. In order for a mental state to be a belief, it must be responsive to evidence, but exactly how should this be interpreted? Consider the following passage from Bedke.

Beliefs need not be directly sensitive to worldly facts. If one judges that a dictator has weapons of mass destruction, for instance, that judgment does not fail to be a belief merely because the dictator lacks such weapons. More plausibly, if one acquires conclusive evidence that the dictator lacks weapons of mass destruction, we would expect any belief that the dictator has such weapons to go out of existence. Even more plausibly, if one acquired such evidence and accepted it as evidence, or took it to be evidence […], we would expect the corresponding belief to go out of existence. (Bedke 2014b: 190)

On an intuitive view, E is evidence for P if E makes it more likely that P is true. For example, that John’s fingerprints are on the murder weapon is evidence to the effect that John is the murderer. If Jane, the not-so-good detective, is unaware of this, we do not expect her belief in John’s innocence to change. In fact, Jane’s belief in John’s innocence can be completely unresponsive to Jane’s belief that John’s fingerprints are on the murder weapon if Jane does not see John’s fingerprints as something that makes it more likely that John is the murderer. In these cases, it is not surprising that a belief does not change (even if one may think that the belief ought to change). Rather, the most plausible interpretation of the sense in which a belief is responsive to evidence is, as Bedke also emphasizes, when the agent takes something to be evidence. It is, in other words, what an agent takes to be evidence that a belief plausibly is responsive to.Footnote 7

Against this background, we can understand the premises of the argument from evidence non-responsiveness roughly as follows.

  • Beliefs are evidence responsive (ER): Necessarily, a belief that P (normally) changes in response to what is taken as (sufficiently strong) evidence against believing P.

  • Moral judgments are evidence non-responsive (NER): There are subjects whose moral judgments do not change in response to what they take to be (sufficiently strong) evidence against their moral judgments.

Bedke, Suikkanen and Rowland advance different examples of agents in support of NER. Bedke (2014a) argues that the moral judgments of non-naturalist realists are “systematically recalcitrant in the face of the coincidence-as-obliviousness defeater” (Bedke 2014a: 123). The “coincidence-as-obliviousness” is Bedke’s way of talking about evolutionary debunking arguments. If there are mind-independent moral truths, as non-naturalist realists argue, why should we think that “evolutionary pressures affected our evaluative attitudes in such a way that they just happened to land on or near the true normative views among all the conceptually possible ones” (Street 2006: 208-9)? An evolutionary debunking argument purports to undermine the justification of our moral beliefs by revealing something about their causal history – usually by showing that the belief is not the result of a process that is conducive to truth. Evolutionary forces aim at fitness rather than at tracking and detecting mind-independent truths. It therefore seems as if it would be a miraculous coincidence if evolution happened to land on the truth. Bedke thinks that the “coincidence-as-obliviousness” is an undercutting defeater, i.e., it “preempts the prevailing first-order evidence from serving as a justificatory basis” (Grundmann 2019: 140). However, the moral judgments of non-naturalist realists, Bedke claims, are systematically non-responsive to this defeater.

Rowland (2018) argues that examples of moral peer disagreement show that moral judgments do not change in response to what is perceived as evidence. One of his examples involves Anna. Anna judges that torture is always morally wrong. She also acknowledges that many of her epistemic peers disagree with her on this issue, but she is “intransigent in light of her judging that a significant number of her epistemic peers” (Rowland 2018: 267) disagree. Peer disagreement is typically understood as higher-order evidence, i.e., roughly evidence that indicates that something has gone wrong in the belief-forming process regarding the target belief.Footnote 8 Evidence of peer disagreement, like evidence of debunking, is supposed to be an undercutting defeater, i.e., the evidence attacks the relation between the source of justification or the justificatory process and the target proposition. However, Anna’s moral judgment, Rowland claims, is non-responsive to this defeater.Footnote 9

The two examples above are supposed to give us reason to think that some people’s moral judgments are non-responsive to undercutting defeaters. Suikkanen (2013) and Bedke (2014b) argue that some people’s moral judgments are non-responsive to rebutting defeaters. A rebutting defeater, by contrast to an undercutting defeater, provides evidential support for thinking that the target proposition is false. What they purport to show is, in other words, that there are agents who judge, e.g., that stealing is wrong, but who also take there to be sufficiently strong evidence for thinking that this judgment is false. Both Suikkanen (2013) and Bedke (2014b) appeal to error theorists about morality for illustration. Following Bart Streumer, we can think of an error theory as consisting of two parts (Streumer 2017: 132):

  (1) Normative judgements are beliefs that ascribe normative properties.

  (2) Normative properties do not exist.Footnote 10

Of course, most people do not think that moral (or normative) judgments are systematically false, but moral error theorists do. Suikkanen argues that endorsing a moral error theory amounts to believing that there is sufficient evidence for the falsehood of one’s first-order moral beliefs. Bedke, similarly, claims that the moral error theorist “takes himself to have decisive evidence that non-natural wrongness is not instantiated in this case (or any other)” (Bedke 2014b: 191). The error theorist thus seems to be like someone who professes to believe in witches, but at the same time claims to believe that there are no witches. Some error theorists have, as a consequence, abandoned their first-order moral judgments. However, other error theorists, e.g., John Mackie (1977) and Jonas Olson (2011, 2014), argue that we can (and should) conserve our moral judgments despite the belief in the error theory. Some error theorists’ moral judgments thus seem to be non-responsive to what they take to be sufficiently strong evidence for their falsehood.Footnote 11 This is supposed to give us examples of agents whose moral judgments are non-responsive to rebutting defeaters.

Thus far we have provided a background (and, in part, a rationale) for the premises of the argument from evidence non-responsiveness. The premises, again, can roughly be understood as follows.

  • Beliefs are evidence responsive (ER): Necessarily, a belief that P (normally) changes in response to what is taken as (sufficiently strong) evidence against believing P.

  • Moral judgments are evidence non-responsive (NER): There are subjects whose moral judgments do not change in response to what they take to be (sufficiently strong) evidence against their moral judgments.

If the premises are true, they suggest that there is a mismatch between moral judgments and beliefs, but what kind of conclusion should we draw from this? Rowland argues that the intelligibility of peer disagreement supports thinking that “non-derivative moral judgments are not beliefs” (Rowland 2018: 267). Suikkanen claims that the intelligibility of error theorists supports thinking that “our first-order moral thoughts … could not count as beliefs” (Suikkanen 2013: 178). We can thus think of the conclusion of the argument as follows:

Conclusion A: Not all moral judgments are beliefs.Footnote 12

Bedke, by contrast, draws a somewhat different conclusion. Consider the following two passages.

The fact that [the realist] is not the least inclined to abandon her normative commitments in the face of the defeater is some evidence that these normative commitments are not beliefs about non-natural properties after all. (Bedke 2014a: 123)

… because normative judgments systematically fail to exhibit the thetic direction of fit for error theorists, we have good evidence that this judgment type – whether it be found in error theorists or not – is not belief about non-natural properties. (Bedke 2014b: 195)

First, Bedke’s conclusion is not about moral judgments in general, but merely about a class of moral judgments, viz., judgments made by non-naturalist realists and error theorists. Second, Bedke does not conclude that these judgments are not beliefs. Rather, he concludes that moral beliefs are not about non-natural properties. We can thus think of the conclusion as follows:

Conclusion B: Moral judgments are not about non-natural properties.

Despite the different conclusions, we treat the arguments together because of their similarities. In particular, they put pressure on (a type of) cognitivism by virtue of the (purported) evidence non-responsiveness of (a type of) moral judgments.

3 Are moral judgments evidence non-responsive?

One way of challenging the argument is to argue that the examples of agents whose moral judgments do not change in response to what is taken as evidence are unpersuasive. To support NER, it needs to be argued that there are subjects who display certain combinations of mental states, viz., subjects who judge, e.g., that murder is wrong, but who also judge that the evidence does not support the first-order moral judgment. In at least one of the examples considered above, it is actually doubtful that this is the case. Bedke claims that the non-naturalist’s moral beliefs are “systematically recalcitrant in the face of the coincidence-as-obliviousness defeater” (Bedke 2014a: 123). “Coincidence-as-obliviousness” may be defeating evidence, but Bedke does not provide any reason to think that the non-naturalist takes it as sufficiently strong evidence against their view. Rather, non-naturalists usually resist the conclusion that the justification of their moral judgments is defeated by rejecting one of the premises of the debunking argument or by advancing so-called third-factor explanations (see e.g., Enoch 2010). Even if Bedke were to argue that the coincidence-as-obliviousness is evidence against the non-naturalist realists' view, it is not something that the non-naturalist realist takes to be evidence. Hence, we should not expect the corresponding belief to go out of existence.

There is a similar problem with Rowland’s example. Anna believes that her peers disagree with her, but this does not, by itself, entail that she believes that peer disagreement is something that defeats her first-order moral judgment. This is only the case if we assume that she endorses a conciliationist view (which perhaps is what Rowland assumes, since he claims that “it is generally assumed by all parties in the peer disagreement literature that Perceived Peer Disagreement is Perceived Evidence” (Rowland 2018: 272)). However, Rowland never makes it explicit that Anna takes peer disagreement to be sufficiently strong evidence against her view. Moreover, he does not provide any other example of someone who explicitly endorses conciliationism, but who is non-responsive to moral peer disagreement. It is therefore not obvious that Rowland provides an example that gives us reason to think that moral judgments are evidence non-responsive.

Indeed, it may also be argued that the error theorist does not obviously display the pertinent combination of mental states. Olson, for example, “recommends moral belief in morally engaged and everyday contexts and reserves attendance to the belief that moral error theory is true to detached and critical contexts, such as the seminar room” (Olson 2011: 199). This is a kind of compartmentalization. If the moral belief and the belief about the evidence can be kept apart, then the subject will not take the error theory as evidence against the first-order moral judgments. One may wonder, as Olson also does, about the plausibility of this move. Olson nevertheless argues that compartmentalization is a common and familiar phenomenon, and not only in the moral domain.

For instance, someone might say truly the following about a cunning politician: ‘I knew she was lying, but hearing her speech and the audience’s reaction last week, I really believed what she said’ … Hence we are sometimes taken in by what people say (be it cunning politicians, manipulative partners, etc.) in the sense that we believe what is said, even though we are disposed to believe, upon detached and critical reflection, that it is false. […]. Something similar might be going on with moral beliefs. The error theorist might say, ‘I knew all along there is no such thing as moral wrongness, but hearing about the massacre on civilians on the news yesterday I really believed that what the perpetrators did was wrong; I really believed that the UN ought morally to enforce a ceasefire’. (Olson 2011: 200)

There are several reasons for doubting the plausibility of this. First, it may be argued that saying “I knew all along” should not be taken literally, but as a mere façon de parler. Although it may seem that we knew it all along, we often did not. Indeed, there is even a name for this phenomenon: hindsight bias (Roese and Vohs 2012). Second, claiming that we are disposed to believe something upon critical reflection suggests that we do not really believe it; an agent may not yet have formed a particular belief that they would form upon critical reflection. Third, note that “I really believed that” is in the past tense. The use of the past tense suggests that the subject has changed their belief, i.e., once the subject realized that the politician lied, they stopped believing what she said. This is, of course, what we would expect of a belief.Footnote 13

It may nevertheless be argued that compartmentalization is indeed a common and familiar phenomenon. Joseph Bendaña and Eric Mandelbaum (2021), for example, defend a fragmented view of belief storage. According to this view, beliefs are “stored in distinct, independently accessible data structures” (Bendaña and Mandelbaum 2021: 80) or what they call “fragments.” One of the advantages of this view, they argue, is that it makes sense of people having inconsistent beliefs.Footnote 14 Beliefs that are stored in different fragments can be accessed independently of one another. A belief that P can be stored in one fragment. A belief that not-P can be stored in a different fragment. Couldn’t the error theorist argue that the belief in the error theory and the first-order moral judgments simply are stored in different fragments? Bart Streumer, for example, claims that “I can believe each part of the [error] theory individually, but only if I do not believe the other part at the same time” (Streumer 2017: 131). However, one may wonder if the error theorist’s purported beliefs can be kept compartmentalized. Bendaña and Mandelbaum consider what they call inter- and intrafragmental consistency. Intrafragment consistency, they argue, is automatically maintained. This is not the case, however, for interfragment consistency (see Bendaña and Mandelbaum 2021: 99). The most interesting issue in the present context is what happens when different fragments with inconsistent beliefs are activated.

…imagine a case where there is a reminder that serves to activate a previously quiescent fragment that contains ¬P. Here, one would have two activated fragments, one containing P and one containing ¬P. In this case, we expect the fragments to merge: the fragments will combine their information while deleting (or sequestering) the weaker belief. (Bendaña and Mandelbaum 2021: 99)

Part of Streumer’s explanation of why he cannot believe the error theory is that he understands it so well. What explains this is, plausibly, that he cannot keep the different parts of the error theory fragmented. Similarly, it seems rather plausible to think that a belief in a moral error theory and individual moral beliefs can’t be kept fragmented.

If I believed the error theory, I would believe that all normative judgements are false. I would then be inclined to give up my normative judgements. But I am in fact not at all inclined to give up these judgments. That is how I know that I do not believe the error theory. (Streumer 2017: 138)

We think that Mackie and Olson understand the error theory just as well as Streumer does. However, they seem to believe that all moral judgments are false, but they do not seem inclined to give them up. Bedke, moreover, makes the following autobiographical observation.

Having once been an error theorist of sorts, I can report in my own case that there was no discernable difference in my first-order normative judgments before and after accepting nihilism. The meta-normative position did not penetrate my everyday moral life. (Bedke 2014b: 192)

We do not think that the best explanation of this is that Bedke managed to keep the belief in the error theory and the first-order normative judgments compartmentalized in different fragments. Even if the two (purported) beliefs start out fragmented, they will, at least if you make your living as a philosopher, eventually be activated together and, if Bendaña and Mandelbaum are right, merge. However, at least some error theorists do not thereby give up either their belief in the error theory or their first-order moral judgments. This suggests that there are subjects whose moral judgments do not change in response to what they take to be sufficiently strong evidence against their moral judgments. Error theorists like Mackie and Olson thus seem to give us good reasons to think that NER is true.Footnote 15

4 Examining evidence responsiveness

A second way of challenging the argument is to argue against ER. First, it may be argued that evidence responsiveness is not the distinguishing feature of belief. Second, it may be argued that, even if evidence responsiveness is the distinguishing feature of a belief, ER is too strong. It is often argued that the distinguishing feature of beliefs is “that they aim at the truth” (Williams 1973: 148). However, we do not arrive at true beliefs merely by forming beliefs. Rather, “evidence seems to play a mediating role vis-à-vis our efforts to arrive at an accurate picture of the world” (Kelly 2014). For this reason, it seems plausible to think that part of the function of a belief is to respond to what we take as evidence. Consider some views to this effect. Tamar Gendler claims that “beliefs change in response to changes in evidence” (Gendler 2008: 566). Nishi Shah claims that “[i]f we conclude that the evidence confirms the proposition we believe, then we will retain it; if we conclude otherwise, then we will not” (Shah 2013: 217). Alex Worsnip claims that “[p]art of what it is for something to be a belief […] is for it not to be reflectively sustainable in the face of an acknowledged judgment that it is not supported by the evidence” (Worsnip 2018: 194). Based on considerations like this, it seems, as David Velleman claims, that if a mental state “isn’t regulated by mechanisms designed to track the truth, then it isn’t a belief: it’s some other kind of cognition” (Velleman 2000: 253).

ER is also supported by intuitive examples. “If you believe there’s coffee made but then, walking into the kitchen, see that the coffee pot is empty, you will stop believing there’s coffee made and will start believing there’s no coffee made” (Helton 2018: 501). This is how, at least normally, our beliefs function. It is further supported by theoretical considerations such as deliberative transparency and inter-level coherence requirements. If you deliberate about whether to believe that P, it is only considerations relevant to the truth or falsity of P that matter. It also seems as if certain combinations of mental states are puzzling.

There is something incredibly odd about an utterance like “all my evidence suggests that I’m not very attractive to members of the opposite sex. Nevertheless, in fact I am very attractive to members of the opposite sex.” Again, it sounds like a kind of joke. There is a strong pressure to interpret the agent either as not really believing that his total set of evidence suggests that he is not attractive, or as not really believing that he is attractive. One of the cognitive states may be weaker: it may be a fantasy, or a wish, or a hope, or an assumption, or faith, but not a belief. (Worsnip 2018: 194)

The considerations advanced above not only show how widely endorsed a thesis like ER is, but also help explain at least part of its attraction.

5 (Systematic) deviation

One may nevertheless argue that ER is too strong. Although we think it is rather plausible to interpret Rowland as endorsing a thesis much like ER, Bedke and Suikkanen seem to endorse more nuanced theses. Bedke (2014a) claims that beliefs normally go “out of existence when we receive defeaters” (Bedke 2014a: 123). However, if beliefs merely normally respond to what is taken as evidence, it allows for beliefs that are evidence non-responsive. It is therefore not obvious why the non-naturalist cognitivist should worry about a few evidence non-responsive moral judgments. Suikkanen claims that a belief changes when the “subject believes that there seems to be sufficient evidence for the falsehood” (Suikkanen 2013: 178). By contrast to Bedke’s view, this thesis does not seem to allow for any exceptions. If a mental state does not change in response to decisive evidence, it is not a belief. However, the thesis does not rule out believing P and believing that there is undercutting evidence against P. It does, however, rule out interpreting the error theorist’s moral judgment as a belief.

Bedke’s view in (2014b) is more permissive. Bedke argues that beliefs have a thetic direction of fit: “Judgment J with content P is cognitive only if it would tend to go out of existence in cases one takes oneself to have decisive evidence that not-P” (Bedke 2014b: 190), but he also claims that “individual beliefs can lack the characteristic disposition for a variety of local reasons and still count as a belief” (Bedke 2014b: 190). Instead, Bedke defends what he calls weak dispositionalism: “if a type of judgment systematically fails to have the thetic direction of fit, this is very good evidence that the judgment is not belief” (Bedke 2014b: 190).

These nuances suggest that there are different versions of ER worth distinguishing. One issue concerns the scope of the premise. Rowland’s argument is based on peer disagreement, but peer disagreement does not discriminate between moral judgments with different contents. Bedke’s argument, by contrast, is narrower. It includes as a variable the content of the belief, viz., that it is about non-natural properties.Footnote 16 This is why it merely targets non-naturalist realism rather than cognitivism more generally.

Moral beliefs do not change in the minds of those who become nihilists. Because this runs afoul of the dispositions one would expect if cognitive non-naturalism were true, we have very good evidence that normative judgments are not beliefs about non-natural properties after all. (Bedke 2014b: 193)

A different issue concerns whether the premise focuses on individual beliefs or types of beliefs. Rowland and Suikkanen focus on individual beliefs. Bedke, by contrast, focuses on types of beliefs or judgments. Unlike Rowland and Suikkanen, Bedke allows for an individual belief to be evidence non-responsive as long as it does not belong to a type of judgment that systematically fails to be evidence responsive. These different ways of understanding the first premise give us different versions of the argument.

6 Explaining (systematic) deviation

As Worsnip writes, “it is very hard to make sense of […] a persistent, stable state whereby [someone] consciously and transparently violates inter-level coherence” (Worsnip 2018: 194). However, saying that it is very hard does not mean that it is impossible. How hard it is to make sense of transparent violations of inter-level coherence, it seems, depends on the details. Suppose, for example, that John believes that the wall is green because he trusts his color perceptions, but comes to believe that he is color blind, i.e., he comes to believe that he has an undercutting defeater. In this case, it does not seem impossible that John maintains his belief. It is, by contrast, more difficult to imagine that John believes that P and believes that P is false, i.e., that John believes that he has a rebutting defeater for his belief, but that John maintains his belief.Footnote 17 Again, that it is hard does not show that it is impossible. Russ Shafer-Landau, for example, claims that beliefs, “sometimes do persist even after the agent recognizes that the belief is false” (Shafer-Landau 2003: 35) and uses self-deception to illustrate this. “A man who perceives all of the excellent evidence for the straying affections of his wife may nevertheless continue to believe in her fidelity” (Shafer-Landau 2003: 35). For this to show that transparent violations of inter-level coherence are possible, we need to understand self-deception in a particular way. Alfred Mele (2001), for example, does not think of self-deception as believing something that one believes is false. Rather, a self-deceptive belief is a belief that is the result of treating the evidence in a biased way.Footnote 18 If the cuckolded man discounts the evidence, it does not provide us with an example of a transparent violation of inter-level coherence. Rather, we must understand the man as simultaneously believing that his wife is faithful and believing that there is sufficiently strong evidence against believing this. This, however, does not seem to be a combination of mental states that can happily co-exist.

When we discern a gap between a belief and the truth, the belief immediately becomes unsettled and begins to change. If it persists, we form another belief to close the gap, while reclassifying the recalcitrant cognition as an illusion or as a bias. I cannot imagine evidence that would show this reclassification to be a mistake. (Velleman 2000: 278)

This is what normally happens when we have a belief that P and come to believe that there is sufficiently strong evidence against P. However, if one finds Shafer-Landau’s example compelling, then there are exceptions. Indeed, as Bedke claims, “individual beliefs can lack the characteristic disposition [to change in response to what is taken as evidence] for a variety of local reasons” (Bedke 2014b: 190). Bedke does not explain how this is supposed to work. Instead, consider the following example discussed by Helton.Footnote 19

Suppose you have a much-loved friend who is accused of stealing from you. You believe your friend to be innocent, despite the overwhelming evidence against her. In fact, your affect for your friend is so strong that you are psychologically incapable of revising your belief in your friend’s innocence. (Helton 2018: 515-516)

In contrast to the example given by Shafer-Landau, we are here provided with an explanation of why the belief does not change despite the belief that there is decisive evidence against it. This arguably helps to make sense of the transparent violation of inter-level coherence. In other words, in order for persistent and transparent violations of inter-level coherence to make sense, there has to be some kind of explanation of why the target belief does not go out of existence. Of course, it is easy to come up with a similar explanation in the self-deception case. The man does not want it to be true that his wife is unfaithful: it would simply be too painful to believe that she is having an affair. This gives us a counterexample to the kind of thesis that Rowland and Suikkanen rely on: it is not necessarily the case that a subject’s belief changes when the “subject believes that there seems to be sufficient evidence for the falsehood” (Suikkanen 2013: 178).

Bedke, recall, allows for individual beliefs to be evidence non-responsive as long as they do not belong to a type of judgment that systematically fails to be evidence responsive. Bedke also wants to “use weak dispositionalism to raise an explanatory burden: anyone who would maintain that there is a type of belief that systemically deviates from the normal epistemic profile for beliefs has some explaining to do” (Bedke 2014b: 191). We agree: beliefs that deviate from their normal epistemic profile are puzzling. We think, however, that the considerations advanced above constitute a way of starting to shoulder this burden, and that certain types of beliefs do systematically deviate from the normal epistemic profile. Consider the following passage from Schwitzgebel.

Someone tells me that John has replaced Georgia as department chair. Swiftly, a whole fleet of dispositions change: I’ll go to John not Georgia if I have a question about my merit file, I’ll put forms for signature in his box not hers, I’ll go down to the chair’s office to look for him, etc. … Our morally most important beliefs, however, the ones that reflect our values, our commitments, our enduring ways of viewing the world – they’re not like this. They change slowly, painfully, effortfully. (Schwitzgebel 2010: 547)

The difference between Schwitzgebel’s belief about Georgia and his moral beliefs is that he is emotionally invested in the latter. This connects to what Porot and Mandelbaum (2020) call the psychological immune system.Footnote 20 Even if beliefs that are held dispassionately “change in accordance with one’s evidence” (Porot and Mandelbaum 2020: 7), beliefs that are important to us differ in this respect.

For beliefs one self-identifies with, rational updating – for example, apportioning and weighing evidence – is not the norm. People accept and reject information not to maintain epistemic coherence as much as to buttress their sense of self. For beliefs we self-identify with, belief updating is dictated by a psychological immune system, where counterattitudinal information is seen not just as any new evidence, but instead as a deep psychological threat. The psychological immune system functions, first and foremost, to help us keep our most deeply held self-image. (Porot and Mandelbaum 2020: 7)

The psychological immune system helps make sense of why certain of our beliefs do not change in response to what one takes to be strong evidence against them, and it also seems to explain why certain types of beliefs systematically deviate from the normal epistemic profile of beliefs. Moral beliefs (assuming that they are beliefs) are not like other beliefs. Rather, they are beliefs that we are emotionally invested in and that matter to us. Olson, for example, claims that “[i]t appears realistic that in morally engaged and engaging contexts, affective attitudes like anger, admiration, empathy and the like, tend to silence beliefs to the effect that moral error theory is true” (Olson 2011: 201). One hypothesis here is that the psychological immune system explains why Olson’s moral beliefs systematically deviate from the normal epistemic profile of beliefs. Olson’s beliefs, after all, are about (albeit non-existent) normative properties that prescribe actions. This, it seems, makes sense of why a type of belief systematically deviates from the normal epistemic profile for beliefs. We thus also have a counterexample to the kind of thesis that Bedke relies on: a type of belief can systematically deviate from the normal epistemic profile of belief because beliefs of that type are important to us.Footnote 21

7 Testing for belief

A belief that P, on the view outlined above, does not necessarily go away even if the subject believes that there is sufficiently strong evidence against P. One way of making sense of this is by understanding beliefs as dispositional states.Footnote 22 To have a belief, or any other mental state, is “to embody a certain broad-ranging actual and counterfactual pattern of activity and reactivity” (Schwitzgebel 2013: 76). Normally, a belief that P is disposed to go away in response to what one takes to be decisive evidence against P, but, as argued above, sometimes this does not happen. When it does not happen, it needs to be explained. Part of the reason for this is that certain combinations of mental states cannot happily co-exist. Simultaneously believing that P and believing that there is sufficiently strong evidence against P will give rise to cognitive dissonance. As Marianna Ganapini argues, we will respond to this kind of “perceived irrationality by re-establishing coherence” (Ganapini 2020: 3272). Normally, we re-establish coherence by giving up the target belief. At other times, we re-establish coherence by rationalizing away the evidence. The most interesting cases, however, are those in which we fail to re-establish coherence in either of these ways. It is in those cases that some kind of explanation is needed for the persistent and transparent violation of inter-level coherence to make sense. Other combinations of attitudes, by contrast, e.g., fantasizing that P and believing that there is decisive evidence against P, can happily co-exist. Consider, for example, what Andrew Huddleston calls “naughty beliefs.”

[i]f we raise our wine glasses in a toast, and we don’t look into each others’ eyes as the glasses clink, we’ll be doomed to 7 years of bad sex. … Here is something else I now believe: that belief of mine is false. (Huddleston 2012: 209)

Huddleston’s “naughty belief” is a type of superstitious belief, and superstitious beliefs often systematically deviate from the normal epistemic profile of belief. However, superstitious beliefs also seem to differ from the types of beliefs considered above. In particular, Huddleston’s combination of mental states is not very puzzling. One explanation is that superstitious beliefs are not really beliefs: if they were, it would be more difficult to make sense of the case. Relatedly, we do not have to come up with some kind of explanation of why the combination makes sense. This, too, suggests that a superstitious belief is not really a belief. On the basis of similar considerations, Ganapini suggests the following test for belief.

If you see discomfort and coping strategies to avoid conflict, you are bearing witness to a belief … By contrast, an attitude is not a belief if it is impervious to changes even when the subject detects that her cognitive attitude breaches doxastic rationality. (Ganapini 2020: 3274)

If Huddleston’s combination of attitudes were a transparent violation of inter-level coherence, then we should at least see some minimal attempts to re-establish coherence. However, the two mental states seem to happily co-exist.Footnote 23 Insofar as this is typical of superstitious beliefs, it suggests that superstitious beliefs really are not beliefs in the relevant sense.

Compare this to the error theorist who judges that murder is wrong and judges that there is sufficiently strong evidence against the first-order moral judgment. If moral judgments are beliefs, we should, given Ganapini’s test, expect to see discomfort and coping strategies. The efforts to compartmentalize, it seems, are at least some evidence to the effect that there is some conflict. If first-order moral judgments were not beliefs, there would be no pressure to compartmentalize.Footnote 24 However, it also seems doubtful that an error theorist can successfully keep the two putative beliefs apart. Even if they are part of different fragments, they will on some occasion be actualized simultaneously and merge. If moral judgments are beliefs, as error theorists claim, this will give rise to a transparent violation of inter-level coherence. This is where we need an explanation of why the first-order moral belief does not change despite the belief that there is sufficiently strong evidence against it. Again, one plausible explanation is that a moral belief is a type of belief that is important to us or that we are emotionally invested in. Error theorists like Olson and Mackie, moreover, often emphasize that moral beliefs play an important social role.Footnote 25 The psychological immune system thus functions to protect them.

These considerations can also be used to cast doubt on Bedke’s claim that the error theorist’s moral beliefs are not about non-natural properties. If a belief is about non-natural properties, Bedke claims, it should change in the mind of an error theorist. That such beliefs (or judgments) do not change is “very good evidence that normative judgments are not beliefs about non-natural properties” (Bedke 2014b: 193). The explanation outlined above, by contrast, suggests that the reason the beliefs do not change is that the psychological immune system protects them.

Moreover, if there is a type of judgment that systematically fails to change in response to (purported) rebutting defeaters, it is not at all obvious that the best explanation is that we should think of the judgment as a belief with a different content. Superstitious beliefs, for example, are a type of judgment that systematically deviates from the normal epistemic profile for beliefs. Like Huddleston, many people seem to have superstitious beliefs that they believe are false. On Bedke’s line of reasoning, this suggests that superstitious beliefs do not have the content that they seem to have. A better explanation, we think, is that superstitious beliefs have the kind of content they seem to have, but that they are not really beliefs. Rather, (most) superstitious beliefs are better thought of as some kind of pretense or fiction. Part of the reason for thinking this is that the combination of a superstitious belief and the belief that it is false typically does not give rise to any dissonance. This is also why we typically do not have to come up with some kind of explanation of how the combination of attitudes can be maintained. Again, this is different for moral beliefs. We need some kind of explanation to make sense of the error theorist’s combination of attitudes. If the error theorist’s moral beliefs were not about non-natural properties, then the belief that there are no non-natural properties should not lead to any dissonance, but it does. The best explanation of why the error theorist’s belief does not change is therefore not that it is not about non-natural properties, but that the first-order moral beliefs are protected by the psychological immune system.Footnote 26

However, in order to adequately test for belief, we cannot merely examine how a (type of) mental state actually functions. We must also consider how it functions counterfactually. In the case discussed by Helton, the explanation of why you do not give up your belief in your friend’s innocence, despite your belief that there is decisive evidence against it, is your emotional investment in the friendship. The emotional investment functions to psychologically protect the belief from changing. This suggests, as Helton claims, a kind of counterfactual test for belief: “were you to lose that affection, you would revise your belief” (Helton 2018: 516). If you were to lose your affection for your friend, there would be nothing protecting your belief from changing. If the putative belief does not change in the counterfactual scenario, i.e., where there is no explanation of why it does not change, then we have very good reason to think that it is not a belief.

But how do we apply this test to moral judgments? It is obvious in what sense one can lose one’s affection for a friend. It is less obvious how we should think about moral judgments that lose their importance. This depends on how the importance is explained. Since it seems possible to have a belief that has no importance, it does not seem as if it is the attitude itself that explains importance. Rather, it seems as if it is either the content of the belief (e.g., that the judgment is about a particular non-natural property that prescribes a particular action) that presents a particular consideration as important, or something external to the judgment (e.g., that moral judgments play an important social function). Given the latter explanation, it seems that the error theorist should give up their moral beliefs if they were to stop thinking that moral beliefs have an important social function. Indeed, this is how error theorists who reject the usefulness of moral beliefs have reacted. Of course, this does not show that Mackie and Olson would give up their first-order moral beliefs if they stopped thinking that those beliefs play an important role. However, it is plausible to think that the explanation of why some error theorists have moral beliefs that do not display the normal epistemic profile of beliefs is that they think moral beliefs are important. The psychological immune system thus protects them. Again, this does not show that moral judgments are beliefs. It does, however, show why the extant argument from evidence non-responsiveness does not work.

8 Concluding remarks

Bedke, Suikkanen and Rowland have recently argued that moral judgments differ from ordinary prosaic beliefs by virtue of their (systematic) evidence non-responsiveness and that this constitutes a challenge to cognitivism (or, in Bedke’s case, cognitive non-naturalism). We have argued that extant versions of the argument from evidence non-responsiveness fail. We nevertheless think it is an interesting argument. In particular, cognitivists should be more explicit about how they conceive of the nature of belief, about the sense in which, if at all, moral judgments are evidence responsive, and about what they are supposed to be responsive to. The considerations advanced above can also be thought of as providing the outline of an answer to at least the first two challenges. It does indeed seem very plausible that the distinguishing feature of a belief is that it is a mental state that is evidence responsive, but this does not mean that it is impossible to believe that P while believing that there is sufficiently strong evidence against P. Transparent violations of inter-level coherence are possible, but they require explanation to make sense. One plausible, and quite general, explanation of why a belief that P does not change in response to what one believes is decisive evidence against P is that the target belief is, for some reason or other, important. When an important belief is threatened, the psychological immune system kicks in to protect it. Moreover, moral beliefs are a type of belief that is important to us. This plausibly explains why moral beliefs are a type of judgment that systematically fails to display the standard epistemic profile for beliefs, and why this does not threaten their status as beliefs. Of course, this does not show that moral judgments are beliefs. But in order to challenge cognitivism (or, e.g., non-naturalism), other considerations are needed.