Abstract
Various philosophers have argued—on the basis of powerful examples—that we can have compelling moral or practical reasons to believe, even when the evidence suggests otherwise. This paper explores an alternative story, which still aims to respect widely shared intuitions about the motivating examples. Specifically, the paper proposes that what is at stake in these cases is not belief, but rather acceptance—an attitude classically characterized as taking a proposition as a premise in practical deliberation and action. I suggest that acceptance’s theoretical usefulness in the ethics of belief has been hidden by its psychological obscurity. I thus aim to develop an empirically adequate and mechanistically specific psychological profile of acceptance. I characterize acceptance as centrally involving a cognitive gating function, in which we prevent a target belief state from having its characteristic downstream effects on reasoning, cognition, and action, and restructure those downstream processes. I then argue that there is substantial empirical support for the existence of the cognitive mechanisms needed to instantiate this view, coming from the science of emotion regulation. I argue that acceptance involves deploying to belief states the same mechanisms used in emotional response modulation: acceptance is doxastic response modulation. I then propose that having a better understanding of the psychological profile of acceptance leaves us better positioned to appreciate its potential usefulness for making progress on various puzzles within the ethics of belief.
Notes
Including, though certainly not limited to: the debates around moral and pragmatic encroachment (see Jorgensen Bolinger (2020a, 2020b) for a thorough overview), doxastic wronging (Basu, 2018; Basu and Schroeder, 2019), epistemic partiality (e.g., Arpaly and Brinkerhoff, 2018; Kawall, 2013; Keller, 2004; Stroud, 2006), and how we should believe in and about others and ourselves (e.g., Morton and Paul, 2019; Paul and Morton, 2018).
Various authors have argued that we can make sense of epistemic deontology in the face of involuntarism: e.g., see Hieronymi (2008; 2006), Shah (2002), Chrisman (2008), Weatherson (2008), Steup (2012), Flowerree (2017), among others. However, these defenses of epistemic agency generally do not claim that we have the control to believe for non-epistemic reasons. Though see Jackson (2021) and Roeber (2019; 2020) for discussion of the latter.
Some deny that we ought to pursue an ethics of belief at all, rejecting the idea that we ought not have certain kinds of “inappropriate” beliefs identified by those in the literature; see, for example, Enoch and Spectre (forthcoming) and Sher (2019). Note that even those who reject that we ought to think our beliefs are sensitive to moral evaluation might still think that there can be cases in which it would be practically beneficial for an agent to believe against her evidence (for instance, that she will succeed even when the odds look slim)—thus, even those who want to reject a morality of belief might still have interest in the question of belief against the evidence more broadly.
E.g., Brinkerhoff (2021) does this with some prejudiced beliefs.
Something like this “active endorsement” picture of belief seems to be at work in many of Basu’s discussions of doxastic wronging, for instance (e.g., Basu, 2018), though she does not explicitly defend such a view there (though see Basu, 2022 for some explicit discussion of this). See also McKaughan (2007) for an overview of this “active endorsement” account.
Thanks to Tez Clark and two anonymous reviewers for encouraging me to be more explicit about this division.
By “in response to the truth of p,” I mean something akin to Shah and Velleman’s (2005) discussion: for an agent to believe p, she must conceive of it as being regulated for truth, and subject to a normative standard according to which the state is correct if and only if it is true. Beliefs can, of course, be false—but an agent cannot take her belief to be false and still believe it (e.g., see Frankish, 2007a). This truth-responsiveness is what distinguishes belief from other kinds of attitudes that also involve some version of “taking p as true,” such as imagining.
This is crucial for understanding the role of belief in our cognitive lives. The set of things we believe (at least implicitly, though certainly not occurrently) is indefinitely large, and in practical deliberation and reasoning we rely on many beliefs as background premises without consciously entertaining all of them. We could never navigate the world if we had to consider every belief we relied on; we simply lack the time and cognitive resources this would demand. So while we might be able to make many of our beliefs explicit and occurrent, we needn’t do so in order to rely on them.
This is not to say needing to navigate the world will always result in precisely accurate beliefs, especially regarding matters that have little impact on our practical lives. Nonetheless, our need to navigate the world is closely tied to our ability to represent it accurately, and so the connection between belief’s truth-responsiveness and this navigation is important.
Of course, we do sometimes step in and deliberate about what to believe in response to a complex body of evidence, but such cases actually represent a very small portion of our total belief-formation experiences. Further, this is not to say that rational belief formation never goes wrong; rationalization, perceptual distortion, and delusions might all be examples of cases in which we fail to form beliefs that are spontaneously and accurately responsive to what our evidence actually justifies. All that is needed for present purposes is that belief-formation is responsive (at least normally/often) to what we take the evidence to point to. This is compatible with there being substantive questions about cases in which what we take the evidence to justify is different from what it actually justifies.
It is plausible that the domains where people systematically tend to form false beliefs often involve epistemic environments that are deficient or distorted in some significant way (perhaps in conjunction with pernicious or disordered motivational factors). When beliefs are false, it often becomes difficult to reason and act on the basis of those beliefs, as the world will continue to push back on the believer. The well-functioning, non-disordered cognitive agent will often find her mind changed by the world, in the end. Of course, this is not true in every case, and there certainly are people who manage to maintain highly unjustified beliefs in the face of significant counterevidence. It may also be easier to cling to such beliefs when they are more abstract and less amenable to everyday evidential support or lack thereof—e.g., the average person’s beliefs about the origins of the universe will receive far less pushback than their beliefs about their singing abilities. Yet we should be wary of focusing too much on these cases at the expense of realizing just how well our belief systems do in general adapt to the evidence they are given.
See Traldi (2022) for an argument about why we should be skeptical of claims that these kinds of norms can never conflict.
My discussion will focus specifically on the epistemic conception of acceptance. As Fleisher (2018, p. 2652 fn4) and McKaughan (2007) note, there are a number of other kinds of acceptance discussed in the philosophical literature, including in philosophy of language, philosophy of science, and literature on metacognition. There may be some systematic differences in how acceptance is conceptualized across domains.
It would be both irrational and highly psychologically odd for one to believe (for example) that Mercury is the closest planet to the sun on Tuesdays, but not to believe this on Thursdays. The same problem does not arise for acceptance: a lawyer might accept her client’s innocence in the courtroom, but not at brunch with her friends. Notable dissent to the context-generality of belief comes from philosophers who argue that belief is “fragmented” (see Elga and Rayo, 2021; Egan, 2008; Bendaña and Mandelbaum, 2021 for recent discussions); though I do not take it that these authors think that the context-dependency of belief is volitional. The view of acceptance I develop in this paper is compatible with fragmented belief storage accounts; it will just turn out that there are interesting questions about how precisely the mechanisms I discuss below interact with belief—and this will be different across different accounts of belief architecture.
In a similar vein, Stalnaker (1984) characterizes acceptance as treating a proposition as true—but takes this to be a broad category that includes belief as a sub-kind, along with presupposition, postulation, assumption, and other nearby attitudes. In virtue of this breadth, Stalnaker’s notion of acceptance is sufficiently different from the kind I am concerned with that I will not discuss it further here. Van Fraassen’s notion of acceptance may also be in a rather different class than some of the others listed above, insofar as a scientist who accepts in his sense can (and should) still believe her hypothesis is empirically adequate.
The idea that tasks involving monitoring and intervening on default cognitive processes are effortful appears in many domains in cognitive science. For a few well-known examples: see Evans and Stanovich (2013), Evans (2008), and Evans (2019) for discussion of dual-process theories of cognition (I do not mean to commit to the “intuition vs. reasoning” framing that is sometimes associated with dual-process theories; just the familiar idea that the overriding of default psychological processes is controlled and effortful); see J.D. Cohen (2017) for an overview of the idea of overriding default responses using cognitive control mechanisms and Botvinick et al. (2001) for discussion of the monitoring function in cognitive control; see Devine (1989) and Payne (2005) for discussions of automatic and controlled components of stereotyping in social cognition; see Shenhav et al. (2017) for relevant discussion on mental effort. These are just some examples of an idea ubiquitous in many areas of cognitive psychology.
There seem to be two distinct roles for skill: at the stage of identifying where the belief is involved in influencing our reasoning and behavior, and at the stage of intervening at these points once they are recognized. We might think that people could be differentially skilled at these two components, and perhaps it will turn out that in the case of belief, the former is particularly difficult (compared to emotion, for example). This idea deserves further exploration; thanks to Matt Stichter for encouraging me to think about it.
In §5.2 I discuss the relationship between this and suspension of judgment.
Later accounts more precisely divide strategies into five categories: situation selection, situation modification, attentional deployment, cognitive restructuring, and response modulation (Gross, 1998b, 2015; McRae, 2016). Further, because emotions are temporally extended mental processes, the line between categories is in practice somewhat blurry. However, since I will be discussing only response modulation in detail, the coarse two-category distinction is sufficient for present purposes.
Although regulation is often discussed in the context of negative emotions, people can regulate positive emotions as well. For instance, someone trying to keep a neutral face and hide excitement upon learning that they were accepted into a prestigious school, or stifling laughter in response to a funny video, are examples of expressive suppression for positive emotions (Gross and Levenson, 1993).
This idea has been established in a variety of domains in psychology. One way it presents itself is that our emotional reactions to stimuli can affect how we reason about them. This focus has been especially central in the study of moral reasoning, where it has been shown that automatic or “intuitive” emotional reactions to stimuli can affect our moral judgments and decision-making (e.g., see Greene, 2015; 2007; Greene et al., 2009, 2001; Haidt, 2001 for classic discussion). Similar effects have been shown in other domains where emotional reactions can affect reasoning processes, such as framing effects and decisions involving perceived risk (classically, Kahneman & Tversky, 1979; see also Keysar et al., 2012; Costa et al., 2014). More generally, it is well-recognized that emotions, when activated, cause emotion-congruent biases across a range of cognitive mechanisms (e.g., Brosch et al., 2013; Dolcos & Denkova, 2014; Phan & Sripada, 2013), including action and goal-selection mechanisms (emotions involve “action tendencies,” e.g., anger biases us towards retaliative goals; see Frijda, 1987; Frijda et al., 1989; Scarantino, 2014), attention mechanisms (e.g., fear causes us to be more sensitive to threat-related stimuli) (Domínguez-Borràs & Vuilleumier, 2013), how we interpret new information, what and how we remember information, and so on. See Sripada (2021, Sect. 4.2) for a helpful overview of this research framed for a philosophical audience.
An example of empirical research on suppression of specifically cognitive effects of emotion is the suppression of emotion-laden or emotion-activated thoughts (e.g., Matos et al., 2013; Muris et al., 1992; Roemer & Borkovec, 1994; see also Mauss, Bunge, and Gross 2007 for some discussion of automatic suppression techniques in various domains).
See Shenhav et al. (2017) for a general discussion of mental effort.
For a recent discussion of various ways of understanding the talk of right and wrong kinds of reasons for belief formation, as well as a paper with a helpful overview of relevant literature, see Maguire and Woods (2020).
This raises a question about how we ought to think about the rational or epistemic assessability of acceptance. Though I lack the space for a full treatment here, for now I propose that we should think of the decision to accept as a decision about the tradeoff between, on the one hand, one’s evidence, and, on the other, how one wants to be and act in the world given one’s moral and practical motivations. We often have very good reason to be guided by our evidence and our beliefs—but not always. Decisions about whether to accept are thus cross-domain decisions between the epistemic and the moral/practical; like any cross-domain decision, both sets of norms will have some relevance, and neither will be decisive. So it is not the case that acceptance is unassessable according to epistemic norms—but it is not assessable only against epistemic norms.
There already exist some attempts to integrate acceptance as a solution to some problems in the ethics of belief; one notable recent treatment comes from Renée Jorgensen (see Bolinger, 2020b), who appeals to acceptance to make sense of what goes (rationally) wrong in (at least some cases of) racial/social group generalizations. The account of acceptance Jorgensen relies on is a bit different from the one I develop. For one, her discussion is pitched entirely at the level of epistemological theorizing rather than questions of psychological mechanisms (though on that front, I suspect that much of what each of us say is compatible with the other’s account). However, Jorgensen specifies that on her account, accepting a proposition involves “taking it for granted” in a sense that is incompatible with thinking p is false (2020b, p. 2417 FN 3). But on my account, an agent who believes some proposition to be false can nonetheless prevent that belief from guiding her reasoning and action (though I take no explicit stand here on the rational status of so doing). Begby (2021, especially Ch. 9) also discusses acceptance in the context of the ethics of belief (and notes that his discussion is inspired by Jorgensen’s, p. 161 FN 10). I agree with much of what Begby has to say, though the psychological profile developed here goes beyond his treatment; I thus think our approaches are complementary.
See Sripada (2018) for discussion of responsibility for effortful regulation in the context of addiction, as an example.
For recent discussions of this kind of view, see Jackson (2021), Roeber (2019), and Ross (2022), among others. For more general theoretical work on suspension, and for work showing very different accounts of the nature of suspension, see e.g., McGrath (2021), and also Friedman (2013; 2017), Masny (2020), Crawford (2022), and Staffel (2019).
In his (1992) discussion, Bratman also explicitly discusses how supposition differs from acceptance.
These differences in high-level characteristics may also reflect differences in the lower-level psychological profiles of acceptance and supposition. A full treatment of the psychological profile of supposition is beyond the scope of this paper; but as a first pass, one could argue for a gating and response modulation account of supposition with a more limited target: the supposer only needs to gate and suppress the reasoning and inferential processes involving the target belief. Alternatively, perhaps supposition involves a somewhat different cognitive profile, centrally involving processes of counterfactual reasoning, hypothetical simulation, and cognitive decoupling—and perhaps these processes are more emphasized than the monitoring, gating, and suppression mechanisms that characterize acceptance. Such processes more closely align with the exploratory goals of supposition.
It may be difficult to know precisely where to draw the line between these attitudes in some cases, especially when describing the psychologies of other people—and some may resist making the distinction at all. But, for those who want to distinguish acceptance and supposition, the characteristic aims and psychological profiles seem promisingly different.
One might be tempted to characterize acceptance as having specifically practical (broadly construed) aims, and supposition as having specifically epistemic aims. However, I think acceptance can be undertaken for specifically epistemic aims. That is: sometimes, being the most successful epistemic agent in the broad sense will involve responding not merely to the evidence directly in front of us. A classic example is a scientist who favors a hypothesis for reasons of theoretical virtue that is less well empirically supported than some alternative: she might accept it in order to forward her epistemic-scientific goals. (The kinds of cognitive regulation mechanisms I’ve argued for here may actually be stronger than what the scientist needs, though. A more appropriate attitude might be something like Fleisher’s (2018) rational endorsement, which focuses on broader norms of inquiry rather than a specific cognitive profile.) For another example, perhaps an agent who knows she’ll be entering a deeply unreliable evidential situation thinks that her belief-forming mechanisms might be overwhelmed by the deluge of unreliable evidence. She might seek to regulate her resulting beliefs via the kinds of acceptance mechanisms discussed here, for the clearly epistemic purpose of retaining overall better beliefs. This is a topic that merits further exploration elsewhere.
Rapstine (2021) develops the idea of epistemic agent regret, building on Bernard Williams’s conception of agent regret in the moral sphere. I find the heart of Rapstine’s proposal compelling: the idea that we can hold a belief, take that belief to be evidentially justified, but nevertheless regret being a “vehicle” for that belief on moral grounds. Acceptance gives us a resource to do something about our beliefs in such cases, rather than merely resigning ourselves to this regret.
I in fact think there may be less difference between what I call acceptance and what authors like Basu and Schroeder call belief than it initially appears. I suspect that a difficulty in some discussions of the ethics of belief is that people are sometimes trading on importantly different notions of belief, where some are more thick and commitment-like, and others are more thin and merely-evidence-responsive. Untangling this idea is something I am pursuing elsewhere.
References
Alston, W. P. (1988). The deontological conception of epistemic justification. Philosophical Perspectives, 2, 257–299. https://doi.org/10.2307/2214077
Arpaly, N., & Brinkerhoff, A. (2018). Why epistemic partiality is overrated. Philosophical Topics, 46(1), 37–51. https://doi.org/10.5840/philtopics20184613
Basu, R. (2018). Can beliefs wrong? Philosophical Topics, 46(1), 1–18.
Basu, R. (2019a). The wrongs of racist beliefs. Philosophical Studies, 176(9), 2497–2515. https://doi.org/10.1007/s11098-018-1137-0
Basu, R. (2019b). What we epistemically owe to each other. Philosophical Studies, 176, 915–931.
Basu, R. (2022). Belief. The Philosopher, 110(2), 7–10.
Basu, R., & Schroeder, M. (2019). Doxastic wronging. In B. Kim & M. McGrath (Eds.), Pragmatic encroachment in epistemology (pp. 181–205). Routledge.
Begby, E. (2021). Prejudice: A Study in Non-Ideal Epistemology. Oxford University Press.
Begby, E. (2013). The epistemology of prejudice. Thought: A Journal of Philosophy, 2(2), 90–99. https://doi.org/10.1002/tht3.71
Bendaña, J., & Mandelbaum, E. (2021). The fragmentation of belief. In C. Borgoni, D. Kindermann, & A. Onofri (Eds.), The fragmented mind (pp. 78–107). Oxford University Press. https://doi.org/10.1093/oso/9780198850670.003.0004
Bolinger, R. J. (2020a). Varieties of moral encroachment. Philosophical Perspectives, 34(1), 5–26. https://doi.org/10.1111/phpe.12124
Bolinger, R. J. (2020b). The rational impermissibility of accepting (some) racial generalizations. Synthese, 197(6), 2415–2431. https://doi.org/10.1007/s11229-018-1809-5
Botvinick, M. M., Braver, T. S., Barch, D. M., Carter, C. S., & Cohen, J. D. (2001). Conflict monitoring and cognitive control. Psychological Review, 108, 624–652. https://doi.org/10.1037/0033-295X.108.3.624
Brans, K., Koval, P., Verduyn, P., Lim, Y. L., & Kuppens, P. (2013). The regulation of negative and positive affect in daily life. Emotion, 13(5), 926–939. https://doi.org/10.1037/a0032400
Bratman, M. E. (1992). Practical reasoning and acceptance in a context. Mind, 101(401), 1–15.
Brinkerhoff, A. (2021). Prejudiced beliefs based on the evidence: responding to a challenge for evidentialism. Synthese, 199(5), 14317–14331. https://doi.org/10.1007/s11229-021-03422-y
Brosch, T., Scherer, K., Grandjean, D., & Sander, D. (2013). The impact of emotion on perception, attention, memory, and decision-making. Swiss Medical Weekly, 143(1920), w13786–w13786. https://doi.org/10.4414/smw.2013.13786
Butler, E. A., Egloff, B., Wilhelm, F. H., Smith, N. C., Erickson, E. A., & Gross, J. J. (2003). The social consequences of expressive suppression. Emotion, 3(1), 48–67. https://doi.org/10.1037/1528-3542.3.1.48
Campbell-Sills, L., Barlow, D. H., Brown, T. A., & Hofmann, S. G. (2006). Acceptability and suppression of negative emotion in anxiety and mood disorders. Emotion, 6(4), 587–595. https://doi.org/10.1037/1528-3542.6.4.587
Chrisman, M. (2008). Ought to believe. The Journal of Philosophy, 105(7), 346–370. https://doi.org/10.5840/jphil2008105736
Cohen, J. D. (2017). Cognitive control. In The Wiley handbook of cognitive control (pp. 1–28). John Wiley & Sons. https://doi.org/10.1002/9781118920497.ch1
Cohen, L. J. (1989). Belief and acceptance. Mind, 98(391), 367–389.
Cohen, L. J. (1992). An essay on belief and acceptance. Clarendon Press.
Costa, A., Foucart, A., Arnon, I., Aparici, M., & Apesteguia, J. (2014). ‘Piensa’ twice: on the foreign language effect in decision making. Cognition, 130(2), 236–254. https://doi.org/10.1016/j.cognition.2013.11.010
Crawford, L. (2022). Suspending judgment is something you do. Episteme, 19(4), 561–577. https://doi.org/10.1017/epi.2022.40
Devine, P. G. (1989). Stereotypes and prejudice: Their automatic and controlled components. Journal of Personality and Social Psychology, 56, 5–18. https://doi.org/10.1037/0022-3514.56.1.5
Dolcos, F., & Denkova, E. (2014). Current emotion research in cognitive neuroscience: linking enhancing and impairing effects of emotion on cognition. Emotion Review, 6(4), 362–375. https://doi.org/10.1177/1754073914536449
Domínguez-Borràs, J., & Vuilleumier, P. (2013). Affective biases in attention and perception. In The Cambridge handbook of human affective neuroscience (pp. 331–356). Cambridge University Press. https://doi.org/10.1017/CBO9780511843716.018
Egan, A. (2008). Seeing and believing: perception, belief formation and the divided mind. Philosophical Studies, 140(1), 47–63. https://doi.org/10.1007/s11098-008-9225-1
Elga, A., & Rayo, A. (2021). Fragmentation and information access. In C. Borgoni, D. Kindermann, & A. Onofri (Eds.), The fragmented mind (pp. 37–53). Oxford University Press. https://doi.org/10.1093/oso/9780198850670.003.0002
Engel, P. (1998). Believing, holding true, and accepting. Philosophical Explorations, 1(2), 140–151. https://doi.org/10.1080/10001998058538695
Enoch, D., & Spectre, L. (forthcoming). There is no such thing as doxastic wrongdoing. Philosophical Perspectives. https://philarchive.org/rec/ENOTIN
Evans, J. S. B. T. (2008). Dual-processing accounts of reasoning, judgment, and social cognition. Annual Review of Psychology, 59, 255–278.
Evans, J. S. B. T. (2019). Reflections on reflection: the nature and function of type 2 processes in dual-process theories of reasoning. Thinking & Reasoning, 25(4), 383–415. https://doi.org/10.1080/13546783.2019.1623071
Evans, J. S. B. T., & Stanovich, K. E. (2013). Dual-process theories of higher cognition: Advancing the debate. Perspectives on Psychological Science, 8(3), 223–241. https://doi.org/10.1177/1745691612460685
Fleisher, W. (2018). Rational endorsement. Philosophical Studies, 175(10), 2649–2675. https://doi.org/10.1007/s11098-017-0976-4
Flowerree, A. K. (2017). Agency of belief and intention. Synthese, 194(8), 2763–2784. https://doi.org/10.1007/s11229-016-1138-5
Van Fraassen, B. C. (1985). Images of science: essays on realism and empiricism. University of Chicago Press.
Franchow, E. I., & Suchy, Y. (2015). Naturally-occurring expressive suppression in daily life depletes executive functioning. Emotion, 15(1), 78–89. https://doi.org/10.1037/emo0000013
Franchow, E. I., & Suchy, Y. (2017). Expressive suppression depletes executive functioning in older adulthood. Journal of the International Neuropsychological Society, 23(4), 341–351. https://doi.org/10.1017/S1355617717000054
Frankish, K. (2007a). Deciding to believe again. Mind, 116(463), 523–548. https://doi.org/10.1093/mind/fzm523
Frankish, K. (2007b). Mind and Supermind. Cambridge University Press.
Friedman, J. (2013). Suspended judgment. Philosophical Studies, 162(2), 165–181. https://doi.org/10.1007/s11098-011-9753-y
Friedman, J. (2017). Why suspend judging? Noûs, 51(2), 302–326. https://doi.org/10.1111/nous.12137
Frijda, N. H. (1987). Emotion, cognitive structure, and action tendency. Cognition and Emotion, 1(2), 115–143. https://doi.org/10.1080/02699938708408043
Frijda, N. H., Kuipers, P., & ter Schure, E. (1989). Relations among emotion, appraisal, and emotional action readiness. Journal of Personality and Social Psychology, 57, 212–228. https://doi.org/10.1037/0022-3514.57.2.212
Giuliani, N. R., Drabant, E. M., Bhatnagar, R., & Gross, J. J. (2011). Emotion regulation and brain plasticity: expressive suppression use predicts anterior insula volume. NeuroImage, 58(1), 10–15. https://doi.org/10.1016/j.neuroimage.2011.06.028
Greene, J. D. (2007). Why are VMPFC patients more utilitarian? A dual-process theory of moral judgment explains. Trends in Cognitive Sciences, 11(8), 322–323. https://doi.org/10.1016/j.tics.2007.06.004
Greene, J. D. (2015). Beyond Point-and-shoot morality: Why cognitive (neuro)science matters for ethics. Law & Ethics of Human Rights, 9(2), 141–172. https://doi.org/10.1515/lehr-2015-0011
Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293(5537), 2105–2108. https://doi.org/10.1126/science.1062872
Greene, J. D., Cushman, F. A., Stewart, L. E., Lowenberg, K., Nystrom, L. E., & Cohen, J. D. (2009). Pushing moral buttons: The interaction between personal force and intention in moral judgment. Cognition, 111(3), 364–371. https://doi.org/10.1016/j.cognition.2009.02.001
Gross, J. J. (1998a). Antecedent- and response-focused emotion regulation: divergent consequences for experience, expression, and physiology. Journal of Personality and Social Psychology, 74(1), 224–237. https://doi.org/10.1037/0022-3514.74.1.224
Gross, J. J. (1998b). The emerging field of emotion regulation: an integrative review. Review of General Psychology, 2(3), 271–299. https://doi.org/10.1037/1089-2680.2.3.271
Gross, J. J. (2015). Emotion regulation: Current status and future prospects. Psychological Inquiry, 26(1), 1–26.
Gross, J. J., & John, O. P. (2003). Individual differences in two emotion regulation processes: Implications for affect, relationships, and well-being. Journal of Personality and Social Psychology, 85(2), 348–362. https://doi.org/10.1037/0022-3514.85.2.348
Gross, J. J., & Levenson, R. W. (1993). Emotional suppression: Physiology, self-report, and expressive behavior. Journal of Personality and Social Psychology, 64(6), 970–986. https://doi.org/10.1037/0022-3514.64.6.970
Gyurak, A., Goodkind, M. S., Kramer, J. H., Miller, B. L., & Levenson, R. W. (2012). Executive functions and the down-regulation and up-regulation of emotion. Cognition and Emotion, 26(1), 103–118. https://doi.org/10.1080/02699931.2011.557291
Haidt, J. (2001). The emotional dog and its rational tail: a social intuitionist approach to moral judgment. Psychological Review, 108(4), 814–834.
Hieronymi, P. (2006). Controlling attitudes. Pacific Philosophical Quarterly, 87(1), 45–74. https://doi.org/10.1111/j.1468-0114.2006.00247.x
Hieronymi, P. (2008). Responsibility for believing. Synthese, 161(3), 357–373. https://doi.org/10.1007/s11229-006-9089-x
Jackson, E. G. (2021). A permissivist defense of Pascal’s wager. Erkenntnis. https://doi.org/10.1007/s10670-021-00454-1
Kahneman, D., & Tversky, A. (1979). Prospect theory: an analysis of decision under risk. Econometrica, 47(2), 263–291.
Kashdan, T. B., & Steger, M. F. (2006). Expanding the topography of social anxiety: An experience-sampling assessment of positive emotions, positive events, and emotion suppression. Psychological Science, 17(2), 120–128. https://doi.org/10.1111/j.1467-9280.2006.01674.x
Kawall, J. (2013). Friendship and epistemic norms. Philosophical Studies, 165(2), 349–370. https://doi.org/10.1007/s11098-012-9953-0
Keller, S. (2004). Friendship and belief. Philosophical Papers, 33(3), 329–351. https://doi.org/10.1080/05568640409485146
Keller, S. (2018). Belief for someone else’s sake. Philosophical Topics, 46(1), 19–36.
Keysar, B., Hayakawa, S. L., & An, S. G. (2012). The foreign-language effect: Thinking in a foreign tongue reduces decision biases. Psychological Science, 23(6), 661–668. https://doi.org/10.1177/0956797611432178
Koole, S. L. (2009). The psychology of emotion regulation: An integrative review. Cognition and Emotion, 23(1), 4–41. https://doi.org/10.1080/02699930802619031
Lynch, T. R., Robins, C. J., Morse, J. Q., & Krause, E. D. (2001). A mediational model relating affect intensity, emotion inhibition, and psychological distress. Behavior Therapy, 32(3), 519–536. https://doi.org/10.1016/S0005-7894(01)80034-4
Maguire, B., & Woods, J. (2020). The game of belief. The Philosophical Review, 129(2), 211–249. https://doi.org/10.1215/00318108-8012843
Masny, M. (2020). Friedman on suspended judgment. Synthese, 197(11), 5009–5026. https://doi.org/10.1007/s11229-018-01957-1
Matos, M., Pinto-Gouveia, J., & Costa, V. (2013). Understanding the importance of attachment in shame traumatic memory relation to depression: the impact of emotion regulation processes. Clinical Psychology & Psychotherapy, 20(2), 149–165. https://doi.org/10.1002/cpp.786
McCormick, M. S. (2022). Belief as emotion. Philosophical Issues, 32(1), 104–119. https://doi.org/10.1111/phis.12232
McGrath, M. (2021). Being neutral: Agnosticism, inquiry and the suspension of judgment. Noûs, 55(2), 463–484. https://doi.org/10.1111/nous.12323
McKaughan, D. J. (2007). Toward a richer vocabulary for epistemic attitudes: Mapping the cognitive landscape (Doctoral dissertation, University of Notre Dame). http://search.proquest.com/docview/304819154/abstract/D48CC791876E46A5PQ/1
McRae, K. (2016). Cognitive emotion regulation: a review of theory and scientific findings. Current Opinion in Behavioral Sciences, Neuroscience of Education, 10(August), 119–124. https://doi.org/10.1016/j.cobeha.2016.06.004
Moore, S. A., Zoellner, L. A., & Mollenholt, N. (2008). Are expressive suppression and cognitive reappraisal associated with stress-related symptoms? Behaviour Research and Therapy, 46(9), 993–1000. https://doi.org/10.1016/j.brat.2008.05.001
Morton, J. M., & Paul, S. K. (2019). Grit. Ethics, 129(2), 175–203. https://doi.org/10.1086/700029
Muris, P., Merckelbach, H., van den Hout, M., & de Jong, P. (1992). Suppression of emotional and neutral material. Behaviour Research and Therapy, 30(6), 639–642. https://doi.org/10.1016/0005-7967(92)90009-6
Niermeyer, M. A., Franchow, E. I., & Suchy, Y. (2016). Reported expressive suppression in daily life is associated with slower action planning. Journal of the International Neuropsychological Society, 22(6), 671–681. https://doi.org/10.1017/S1355617716000473
Paul, S. K., & Morton, J. M. (2018). Believing in others. Philosophical Topics. https://doi.org/10.5840/philtopics20184615
Payne, B. K. (2005). Conceptualizing control in social cognition: How executive functioning modulates the expression of automatic stereotyping. Journal of Personality and Social Psychology, 89(4), 488–503. https://doi.org/10.1037/0022-3514.89.4.488
Phan, K. L., & Sripada, C. S. (2013). Emotion regulation. In J. Armony & P. Vuilleumier (Eds.), The Cambridge handbook of human affective neuroscience (pp. 375–400). Cambridge University Press.
Railton, P. (2014). Reliance, trust, and belief. Inquiry, 57(1), 122–150.
Rapstine, M. (2021). Regrettable beliefs. Philosophical Studies, 178(7), 2169–2190. https://doi.org/10.1007/s11098-020-01535-7
Reisner, A. (2008). Weighing pragmatic and evidential reasons for belief. Philosophical Studies, 138(1), 17–27. https://doi.org/10.1007/s11098-006-0007-3
Richards, J. M. (2004). The cognitive consequences of concealing feelings. Current Directions in Psychological Science, 13(4), 131–134. https://doi.org/10.1111/j.0963-7214.2004.00291.x
Richards, J. M., & Gross, J. J. (1999). Composure at any cost? The cognitive consequences of emotion suppression. Personality and Social Psychology Bulletin, 25(8), 1033–1044. https://doi.org/10.1177/01461672992511010
Richards, J. M., & Gross, J. J. (2000). Emotion regulation and memory: The cognitive costs of keeping one’s cool. Journal of Personality and Social Psychology, 79(3), 410–424. https://doi.org/10.1037/0022-3514.79.3.410
Rinard, S. (2015). Against the new evidentialists. Philosophical Issues, 25(1), 208–223. https://doi.org/10.1111/phis.12061
Roeber, B. (2019). Evidence, judgment, and belief at will. Mind, 128(511), 837–859. https://doi.org/10.1093/mind/fzy065
Roeber, B. (2020). Permissive situations and direct doxastic control. Philosophy and Phenomenological Research, 101(2), 415–431. https://doi.org/10.1111/phpr.12594
Roemer, L., & Borkovec, T. D. (1994). Effects of suppressing thoughts about emotional material. Journal of Abnormal Psychology, 103, 467–474. https://doi.org/10.1037/0021-843X.103.3.467
Ross, L. (2022). Profiling, neutrality, and social equality. Australasian Journal of Philosophy, 100(4), 808–824. https://doi.org/10.1080/00048402.2021.1926522
Scarantino, A. (2014). The motivational theory of emotions. In Moral psychology and human agency: Philosophical essays on the science of ethics (pp. 156–185). Oxford University Press. https://doi.org/10.1093/acprof:oso/9780198717812.003.0008
Schwitzgebel, E. (2002). A phenomenal, dispositional account of belief. Noûs, 36(2), 249–275.
Shah, N. (2002). Clearing space for doxastic voluntarism. The Monist. https://doi.org/10.5840/monist200285326
Shah, N. (2003). How truth governs belief. The Philosophical Review, 112(4), 447–482.
Shah, N., & Velleman, J. D. (2005). Doxastic deliberation. The Philosophical Review, 114(4), 497–534.
Shenhav, A., Musslick, S., Lieder, F., Kool, W., Griffiths, T. L., Cohen, J. D., & Botvinick, M. M. (2017). Toward a rational and mechanistic account of mental effort. Annual Review of Neuroscience, 40, 99–124.
Sher, G. (2019). A wild west of the mind. Australasian Journal of Philosophy, 97(3), 483–496. https://doi.org/10.1080/00048402.2018.1490326
Sperberg, E. D., & Stabb, S. D. (1998). Depression in women as related to anger and mutuality in relationships. Psychology of Women Quarterly, 22(2), 223–238. https://doi.org/10.1111/j.1471-6402.1998.tb00152.x
Sripada, C. (2018). Addiction and fallibility. The Journal of Philosophy, 115(11), 569–587. https://doi.org/10.5840/jphil20181151133
Sripada, C. (2021). The atoms of self-control. Noûs, 55(4), 800–824. https://doi.org/10.1111/nous.12332
Staffel, J. (2019). Credences and suspended judgments as transitional attitudes. Philosophical Issues, 29(1), 281–294. https://doi.org/10.1111/phis.12154
Stalnaker, R. (1984). Inquiry. Cambridge, MA: MIT Press.
Steup, M. (2012). Belief control and intentionality. Synthese, 188(2), 145–163. https://doi.org/10.1007/s11229-011-9919-3
Stroud, S. (2006). Epistemic partiality in friendship. Ethics, 116(3), 498–524. https://doi.org/10.1086/500337
Traldi, O. (2022). Uncoordinated norms of belief. Australasian Journal of Philosophy. https://doi.org/10.1080/00048402.2022.2030378
Weatherson, B. (2008). Deontology and Descartes's demon. The Journal of Philosophy, 105(9), 540–569.
Williams, B. (1973). Deciding to believe. In Problems of the self (pp. 136–151). Cambridge University Press.
Acknowledgements
I am grateful to Peter Railton, Chandra Sripada, Renée Jorgensen, and Maegan Fairchild for detailed discussion and comments on multiple versions of this paper; to Mica Rapstine, Adam Waggoner, Brian Weatherson, Corey Cusimano, Sarah Buss, Alex Madva, Susan Gelman, Ethan Kross, and Matt Stichter for their feedback on drafts; to Aliosha Barranco Lopez, Henry Schiller, Tez Clark, and Caitlin Mace for excellent conference comments; and to two anonymous referees for their exceptionally useful and constructive reviews. Additionally, this paper benefited from conversations with Zach Barnett, Gabrielle Kerbel, Andrew Lichter, Malte Hendrickx, Aaron Glasser, Jonathan Jenkins Ichikawa, Gwen Bradford, Mark Schroeder, Daniel Kelly, and Walter Sinnott-Armstrong; my thanks to all of them (and to others whom I've failed to name). Versions of this paper were presented at the University of Michigan Graduate Student Working Group, the University of Michigan Candidacy Seminar, the 2021 Princeton-Michigan Metanormativity Workshop, the 2021 Austin Graduate Ethics and Normativity Talks, the 2022 Southern Society for Philosophy and Psychology, the 2022 Pacific American Philosophical Association Meeting, Athena in Action 2022 (extra thanks to all those involved in this workshop who read and discussed this paper), and the 2022 Moral Psychology Research Group at Cornell; thanks to those audiences for their engagement and discussion.
Funding
The author was partially funded by a National Science Foundation Graduate Research Fellowship.
Ethics declarations
Conflict of interest
There are no conflicts of interest to declare.
Cite this article
Soter, L.K. Acceptance and the ethics of belief. Philos Stud 180, 2213–2243 (2023). https://doi.org/10.1007/s11098-023-01963-1