Acceptance and the ethics of belief

Published in Philosophical Studies

Abstract

Various philosophers have argued—on the basis of powerful examples—that we can have compelling moral or practical reasons to believe, even when the evidence suggests otherwise. This paper explores an alternative story, which still aims to respect widely shared intuitions about the motivating examples. Specifically, the paper proposes that what is at stake in these cases is not belief, but rather acceptance—an attitude classically characterized as taking a proposition as a premise in practical deliberation and action. I suggest that acceptance’s theoretical usefulness in the ethics of belief has been hidden by its psychological obscurity. I thus aim to develop an empirically adequate and mechanistically specific psychological profile of acceptance. I characterize acceptance as centrally involving a cognitive gating function, in which we prevent a target belief state from having its characteristic downstream effects on reasoning, cognition, and action, and restructure those downstream processes. I then argue that there is substantial empirical support for the existence of the cognitive mechanisms needed to instantiate this view, coming from the science of emotion regulation. I argue that acceptance involves deploying the same mechanisms used in emotional response modulation to belief states: acceptance is doxastic response modulation. I then propose that having a better understanding of the psychological profile of acceptance leaves us better positioned to appreciate its potential usefulness for making progress on various puzzles within the ethics of belief.


Notes

  1. Or perhaps a sick person is more likely to recover if they believe against the evidence about their chances (e.g., Reisner 2008; Rinard 2015).

  2. Stroud (2006) and Keller (2004, 2018).

  3. Basu (2019b) and Begby (2013, 2021).

  4. Including, though certainly not limited to: the debates around moral and pragmatic encroachment (see Jorgensen Bolinger (2020a, 2020b) for a thorough overview), doxastic wronging (Basu, 2018; Basu and Schroeder, 2019), epistemic partiality (e.g., Arpaly and Brinkerhoff, 2018; Kawall, 2013; Keller, 2004; Stroud, 2006), and how we should believe in and about others and ourselves (e.g., Morton and Paul, 2019; Paul and Morton, 2018).

  5. Various authors have argued that we can make sense of epistemic deontology in the face of involuntarism: e.g., see Hieronymi (2008; 2006), Shah (2002), Chrisman (2008), Weatherson (2008), Steup (2012), Flowerree (2017), among others. However, these defenses of epistemic agency generally do not claim that we have the control to believe for non-epistemic reasons. Though see Jackson (2021) and Roeber (2019; 2020) for discussion of the latter.

  6. Some deny that we ought to pursue an ethics of belief at all, rejecting the idea that we ought not have certain kinds of “inappropriate” beliefs identified in the literature; see, for example, Enoch and Spectre (forthcoming) and Sher (2019). Note that even those who deny that our beliefs are sensitive to moral evaluation might still think that there can be cases in which it would be practically beneficial for an agent to believe against her evidence (for instance, that she will succeed even when the odds look slim). Thus, even those who want to reject a morality of belief might still have interest in the broader question of belief against the evidence.

  7. E.g., Brinkerhoff (2021) does this with some prejudiced beliefs.

  8. Something like this “active endorsement” picture of belief seems to be at work in much of Basu’s discussion of doxastic wronging, for instance (e.g., Basu, 2018), though she does not explicitly defend such a view there (see Basu, 2022 for some explicit discussion). See also McKaughan (2007) for an overview of this “active endorsement” account.

  9. Most prominently, this distinction has been developed by Cohen (1989; 1992), Engel (1998), and Bratman (1992) in epistemology (see also Frankish (2007b) and Van Fraassen (1985) for related discussion).

  10. Thanks to Tez Clark and two anonymous reviewers for encouraging me to be more explicit about this division.

  11. By “in response to the truth of p,” I mean something akin to Shah and Velleman’s (2005) discussion: for an agent to believe p, she must conceive of the belief as being regulated for truth, and subject to a normative standard according to which the state is correct if and only if it is true. Beliefs can, of course, be false—but an agent cannot take her belief to be false and still believe it (e.g., see Frankish, 2007a). This truth-responsiveness is what distinguishes belief from other kinds of attitudes that also involve some version of “taking p as true,” such as imagining.

  12. I borrow and elaborate on this characterization from Bratman (1992), integrating it in particular with ideas from Railton (2014).

  13. This is crucial for understanding the role of belief in our cognitive lives. The set of things we believe (at least implicitly, though certainly not occurrently) is indefinitely large, and in practical deliberation and reasoning we rely on many beliefs as background premises without consciously entertaining all of them. We could never navigate the world if we had to consider every belief we relied on; we simply lack the time and cognitive resources this would demand. So while we might be able to make many of our beliefs explicit and occurrent, we needn’t do so in order to rely on them.

  14. This is not to say needing to navigate the world will always result in precisely accurate beliefs, especially regarding matters that have little impact on our practical lives. Nonetheless, our need to navigate the world is closely tied to our ability to represent it accurately, and so the connection between belief’s truth-responsiveness and this navigation is important.

  15. E.g., see Shah (2003); Shah and Velleman (2005).

  16. Of course, we do sometimes step in and deliberate about what to believe in response to a complex body of evidence, but such cases represent only a very small portion of our total belief-formation experiences. Further, this is not to say that rational belief formation never goes wrong; rationalization, perceptual distortion, and delusions might all be examples of cases in which we fail to form beliefs that are spontaneously and accurately responsive to what our evidence actually justifies. All that is needed for present purposes is that belief formation is responsive (at least normally/often) to what we take the evidence to point to. This is compatible with there being substantive questions about cases in which what we take the evidence to justify differs from what it actually justifies.

  17. It is plausible that in often, the domains where people systematically tend to form false beliefs involve epistemic environments that are deficient or distorted in some significant way (perhaps in conjunction with pernicious or disordered motivational factors). When beliefs are false, it often becomes difficult to reason and act on the basis of those beliefs, as the world will continue to push back on the believer. The well-functioning, non-disordered cognitive agent will often find their minds changed by the world, in the end. Of course, this is not true in every case, and there certainly are people who manage to maintain highly unjustified beliefs in the face of significant counterevidence. It may also be easier to cling to such beliefs when they are more abstract and less amenable to everyday evidential support or lack thereof—e.g., the average person’s beliefs about the origins of the universe will receive far less pushback than their beliefs about their singing abilities. Yet we should be wary of focusing too much on these cases at the expense of realizing just how well our belief systems do in general adapt to the evidence they are given.

  18. See Alston (1988) and Williams (1973) for foundational defenses of this claim.

  19. See Traldi (2022) for an argument about why we should be skeptical of claims that these kinds of norms can never conflict.

  20. My discussion will focus specifically on the epistemic conception of acceptance. As Fleisher (2018, p. 2652 fn4) and McKaughan (2007) note, there are a number of other kinds of acceptance discussed in the philosophical literature, including in philosophy of language, philosophy of science, and literature on metacognition. There may be some systematic differences in how acceptance is conceptualized across domains.

  21. It would be both irrational and highly psychologically odd for one to believe (for example) that Mercury is the closest planet to the sun on Tuesdays, but not to believe this on Thursdays. The same problem does not arise for acceptance: a lawyer might accept her client’s innocence in the courtroom, but not at brunch with her friends. Notable dissent to the context-generality of belief comes from philosophers who argue that belief is “fragmented” (see Elga and Rayo, 2021; Egan, 2008; Bendaña and Mandelbaum, 2021 for recent discussions); though I do not take it that these authors think that the context-dependency of belief is volitional. The view of acceptance I develop in this paper is compatible with fragmented belief storage accounts; it will just turn out that there are interesting questions about how precisely the mechanisms I discuss below interact with belief—and this will be different across different accounts of belief architecture.

  22. In a similar vein, Stalnaker (1984) characterizes acceptance as treating a proposition as true—but takes this to be a broad category that includes belief as a sub-kind, along with presupposition, postulation, assumption, and other nearby attitudes. In virtue of this breadth, Stalnaker’s notion of acceptance is sufficiently different from the kind I am concerned with that I will not discuss it further here. Van Fraassen’s notion of acceptance may also be in a rather different class from some of the others listed above, insofar as a scientist who accepts in his sense can (and should) still believe her hypothesis is empirically adequate.

  23. See Begby (2013), Basu (2019a), among others, for discussion of such cases.

  24. The idea that monitoring and intervening on default cognitive processes is effortful appears in many domains in cognitive science. For a few well-known examples: see Evans and Stanovich (2013), Evans (2008), and Evans (2019) for discussion of dual-process theories of cognition (I do not mean to commit to the “intuition vs. reasoning” framing that is sometimes associated with dual-process theories; just the familiar idea that overriding default psychological processes is controlled and effortful); see J.D. Cohen (2017) for an overview of the idea of overriding default responses using cognitive control mechanisms and Botvinick et al. (2001) for discussion of the monitoring function in cognitive control; see Devine (1989) and Payne (2005) for discussions of automatic and controlled components of stereotyping in social cognition; and see Shenhav et al. (2017) for relevant discussion of mental effort. These are just some examples of an idea ubiquitous across cognitive psychology.

  25. There seem to be two distinct roles for skill: at the stage of identifying where the belief influences our reasoning and behavior, and at the stage of intervening at these points once they are recognized. People might be differentially skilled at these two components, and perhaps it will turn out that in the case of belief, the former is particularly difficult (compared to emotion, for example). This idea deserves further exploration; thanks to Matt Stichter for encouraging me to think about it.

  26. In §5.2 I discuss the relationship between this and suspension of judgment.

  27. Later accounts more precisely divide strategies into five categories: situation selection, situation modification, attentional deployment, cognitive restructuring, and response modulation (Gross, 1998b, 2015; McRae, 2016). Further, because emotions are temporally extended mental processes, the line between categories is in practice somewhat blurry. However, since I will be discussing only response modulation in detail, the coarse two-category distinction is sufficient for present purposes.

  28. Although regulation is often discussed in the context of negative emotions, people can regulate positive emotions as well. For instance, someone trying to keep a neutral face and hide excitement upon learning that they were accepted into a prestigious school, or stifling laughter in response to a funny video, are examples of expressive suppression for positive emotions (Gross and Levenson, 1993).

  29. Those who endorse accounts on which belief is a form of confidence or trust, such as Railton (2014), McCormick (2022), and perhaps Schwitzgebel (2002), and insofar as confidence and trust are affective states, might simply say that acceptance is a specific kind of emotional response suppression.

  30. This idea has been established in a variety of domains in psychology. One way it presents itself is that our emotional reactions to stimuli can affect how we reason about them. This focus has been especially central in the study of moral reasoning, where it has been shown that automatic or “intuitive” emotional reactions to stimuli can affect our moral judgments and decision-making (e.g., see Greene, 2015; 2007; Greene et al., 2009, 2001; Haidt, 2001 for classic discussion). Similar effects have been shown in other domains where emotional reactions can affect reasoning processes, such as framing effects and decisions involving perceived risk (classically, Kahneman & Tversky, 1979; see also Keysar et al., 2012; Costa et al., 2014). More generally, it is well-recognized that emotions, when activated, cause emotion-congruent biases across a range of cognitive mechanisms (e.g., Brosch et al., 2013; Dolcos & Denkova, 2014; Phan & Sripada, 2013), including action and goal-selection mechanisms (emotions involve “action tendencies”; e.g., anger biases us towards retaliative goals; see Frijda, 1987; Frijda et al., 1989; Scarantino, 2014), attention mechanisms (e.g., fear makes us more sensitive to threat-related stimuli; Domínguez-Borràs & Vuilleumier, 2013), how we interpret new information, what and how we remember information, and so on. See Sripada (2021, Sect. 4.2) for a helpful overview of this research framed for a philosophical audience.

  31. An example of empirical research on suppression of specifically cognitive effects of emotion is the suppression of emotion-laden or emotion-activated thoughts (e.g., Matos et al., 2013; Muris et al., 1992; Roemer & Borkovec, 1994; see also Mauss, Bunge, and Gross 2007 for some discussion of automatic suppression techniques in various domains).

  32. See Shenhav et al. (2017) for a general discussion of mental effort.

  33. For a recent discussion of various ways of understanding the talk of right and wrong kinds of reasons for belief formation, as well as a paper with a helpful overview of relevant literature, see Maguire and Woods (2020).

  34. This raises a question about how we ought to think about the rational or epistemic assessability of acceptance. Though I lack the space for a full treatment here, for now I propose that we should think of the decision to accept as a decision about the tradeoff between, on the one hand, your evidence, and, on the other, how you want to be and act in the world given your moral and practical motivations. We often have very good reason to be guided by our evidence and our beliefs—but not always. Decisions about whether to accept are thus cross-domain decisions between the epistemic and the moral/practical; as in any cross-domain decision, both sets of norms will have some relevance, and neither will be decisive. So it’s not the case that acceptance is unassessable according to epistemic norms—but it is not assessable only against epistemic norms.

  35. There already exist some attempts to integrate acceptance as a solution to some problems in the ethics of belief; one notable recent treatment comes from Renée Jorgensen (see Bolinger, 2020b), who appeals to acceptance to make sense of what goes (rationally) wrong in (at least some cases of) racial/social group generalizations. The account of acceptance Jorgensen relies on is somewhat different from the one I develop. For one, her discussion is pitched entirely at the level of epistemological theorizing rather than questions of psychological mechanism (though on that front, I suspect that much of what each of us says is compatible with the other’s account). However, Jorgensen specifies that on her account, accepting a proposition involves “taking it for granted” in a sense that is incompatible with thinking p is false (2020b, p. 2417 FN 3). But on my account, an agent who believes some proposition to be false can nonetheless prevent that belief from guiding her reasoning and action (though I take no explicit stand here on the rational status of so doing). Begby (2021, especially Ch. 9) also discusses acceptance in the context of the ethics of belief (and notes that his discussion is inspired by Jorgensen’s, p. 161 FN 10). I agree with much of what Begby has to say, though the psychological profile developed here goes beyond his treatment; I thus think our approaches are complementary.

  36. See Sripada (2018) for discussion of responsibility for effortful regulation in the context of addiction, as an example.

  37. For recent discussions of this kind of view, see Jackson (2021), Roeber (2019), and Ross (2022), among others. For more general theoretical work on suspension, and for work showing very different accounts of the nature of suspension, see e.g., McGrath (2021), and also Friedman (2013; 2017), Masny (2020), Crawford (2022), and Staffel (2019).

  38. In his (1992) discussion, Bratman also explicitly discusses how supposition differs from acceptance.

  39. These differences in high-level characteristics may also reflect differences in the lower-level psychological profiles of acceptance and supposition. A full treatment of the psychological profile of supposition is beyond the scope of this paper; but as a first pass, one could argue for a gating and response modulation account of supposition with a more limited target: the supposer only needs to gate and suppress the reasoning and inferential processes involving the target belief. Alternatively, perhaps supposition has a somewhat different cognitive profile, one centered on processes of counterfactual reasoning, hypothetical simulation, and cognitive decoupling—with these processes more emphasized than the monitoring, gating, and suppression mechanisms that characterize acceptance. Such processes more closely align with the exploratory goals of supposition.

  40. It may be difficult to know precisely where to draw the line between these attitudes in some cases, especially when describing the psychologies of other people–and some may resist making the distinction at all. But, for those who want to distinguish acceptance and supposition, the characteristic aims and psychological profiles seem promisingly different.

  41. One might be tempted to characterize acceptance as having specifically practical (broadly construed) aims, and supposition as having specifically epistemic aims. However, I think acceptance can be undertaken for specifically epistemic aims. That is: sometimes, being the most successful epistemic agent in the broad sense will involve responding not merely to the considerations of the evidence directly in front of us. A classic example is a scientist who, for reasons of theoretical virtue, favors a hypothesis that is less well empirically supported than some alternative: she might accept it to forward her epistemic-scientific goals. (The kinds of cognitive regulation mechanisms I’ve argued for here may actually be stronger than what the scientist needs, though. A more appropriate attitude might be something like Fleisher’s (2018) rational endorsement, which focuses on broader norms of inquiry rather than a specific cognitive profile.) For another example, perhaps an agent who knows she’ll be entering a deeply unreliable evidential situation thinks that her belief-forming mechanisms might be overwhelmed by the deluge of unreliable evidence. She might seek to regulate her resulting beliefs via the kinds of acceptance mechanisms discussed here, for the clearly epistemic purpose of retaining overall better beliefs. This is a topic that merits further exploration elsewhere.

  42. Rapstine (2021) develops the idea of epistemic agent regret, building on Bernard Williams’s conception of agent regret in the moral sphere. I find the heart of Rapstine’s proposal compelling: the idea that we can hold a belief, take that belief to be evidentially justified, and nevertheless regret being a “vehicle” for that belief on moral grounds. Acceptance gives us a resource to do something about our beliefs in such cases, rather than merely resigning ourselves to this regret.

  43. I in fact think there may be less difference between what I call acceptance and what authors like Basu and Schroeder call belief than it initially appears. I suspect that a difficulty in some discussions of the ethics of belief is that people are sometimes trading on importantly different notions of belief, where some are more thick and commitment-like, and others are more thin and merely-evidence-responsive. Untangling this idea is something I am pursuing elsewhere.


Acknowledgements

I am grateful to Peter Railton, Chandra Sripada, Renée Jorgensen, and Maegan Fairchild for detailed discussion and comments on multiple versions of this paper; to Mica Rapstine, Adam Waggoner, Brian Weatherson, Corey Cusimano, Sarah Buss, Alex Madva, Susan Gelman, Ethan Kross, and Matt Stichter for their feedback on drafts; to Aliosha Barranco Lopez, Henry Schiller, Tez Clark, and Caitlin Mace for excellent conference comments; and to two anonymous referees for their exceptionally useful and constructive reviews. Additionally, this paper benefited from conversations with Zach Barnett, Gabrielle Kerbel, Andrew Lichter, Malte Hendrickx, Aaron Glasser, Jonathan Jenkins Ichikawa, Gwen Bradford, Mark Schroeder, Daniel Kelly, and Walter Sinnott-Armstrong; my thanks to all of them (and to others who I’ve failed to name). Versions of this paper were presented at the University of Michigan Graduate Student Working Group, the University of Michigan Candidacy Seminar, the 2021 Princeton-Michigan Metanormativity Workshop, the 2021 Austin Graduate Ethics and Normativity Talks, the 2022 Southern Society for Philosophy and Psychology, the 2022 Pacific American Philosophical Association Meeting, Athena in Action 2022 (extra thanks to all those involved in this workshop who read and discussed this paper), and the 2022 Moral Psychology Research Group at Cornell; thanks to those audiences for their engagement and discussion.

Funding

The author was partially funded by a National Science Foundation Graduate Research Fellowship.

Author information


Correspondence to Laura K. Soter.

Ethics declarations

Conflict of interest

There are no conflicts of interest to declare.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Soter, L.K. Acceptance and the ethics of belief. Philos Stud 180, 2213–2243 (2023). https://doi.org/10.1007/s11098-023-01963-1

