In this work, I argue for the possibility of epistemic akrasia. An individual S is epistemically akratic if the following conditions hold: (1) S knowingly believes that P though she judges that it is epistemically wrong to do so, and (2) having these mental states displays a failure of rationality that is analogous to classic akrasia. I propose two different types of epistemic akrasia, involving different kinds of evidence on which the subject bases her evaluation of her akratic belief, and I examine three objections to their possibility. I suggest that the key to defending the possibility of epistemic akrasia is to explain condition (2). Finally, I argue that epistemic akrasia is possible and that it represents a failure of mental agency.
The phenomenon has received other labels such as ‘doxastic incontinence’ (Heil 1984), ‘incontinent believing’ (Mele 1986), and ‘theoretical akrasia’. I discuss the phenomenon of ‘critical doxastic resistance’ in (Borgoni ms, Dissonance and Doxastic Resistance), which seems to be related to epistemic akrasia.
The recent debate regarding epistemic akrasia deals with the question of whether epistemic akrasia is rational. However, since I regard the notion of irrationality as intrinsic to the concept of akrasia, to say that epistemic akrasia is a rational phenomenon is to deny its possibility. I examine such a debate in the third section of this paper.
It is a common strategy to identify a case of epistemic akrasia only by reference to condition (1). However, there are other characterizations of the phenomenon that similarly make use of two conditions, for example, the one given by Owens (2002).
Cf. Owens (2002).
The notion of ‘evidence’ is used neutrally throughout the paper. One can have misleading or incorrect evidence in support of a given belief.
I differentiate between the judgment that a given belief is epistemically defective and the judgment that one’s believing is epistemically wrong. The former judgment is about the belief’s epistemic features. The latter judgment involves an ‘ought-judgment’ based on the epistemic evaluation of the belief. When one judges that believing that P is epistemically wrong, one judges that one ought not believe that P for epistemic reasons.
To the best of my knowledge, there is no empirical data supporting the idea that an individual cannot be in the conjunction of mental states described in (1).
According to Adler, beliefs weaken if they lack epistemic justification. He argues ‘when beliefs conflict, they weaken one another, since both cannot be true. When one belief is favored by the evidence, the disfavored belief evaporates, since it has been determined to be false’ (Adler 2002: 6). However, Adler seems to be making at most a normative claim about beliefs, not providing a description of how beliefs actually work. An account of the way beliefs work must accommodate irrational beliefs and beliefs that are resistant to the evidence.
Ribeiro (2011) offers the original version of this example, in which he is the skeptical individual who nevertheless believes he has hands. He argues that epistemic akrasia is actual, and thus possible.
This is not a description of the psychology of patients with phantom limb syndrome. The fact that the syndrome is characterized by the sensation of having a limb that does not exist does not imply that patients have a corresponding belief about having the limbs.
Since Ian lacks rational-access consciousness of the belief, he is unable to reason with his recalcitrant belief (i.e. to employ the belief in reasoning). However, since he is conscious of the belief (he knows he has such a belief), he is able to reason about the recalcitrant belief. Lacking rational-access consciousness of a belief does not imply being unable to reason about the belief. Ian is able to evaluate the belief as false.
Having phenomenal consciousness is not a requirement of the case. Ian could lack both types of consciousness and be epistemically akratic nevertheless. He could know he has a recalcitrant belief without having phenomenal consciousness of the belief.
Davidson (1982) invokes the metaphor of an internal division in the individual’s mind to explain such cases. He is particularly interested in cases of self-deception. However, although self-deception and epistemic akrasia may share some aspects, such as mental division, they are distinct irrational phenomena. There are at least two important differences between them. First, meeting condition (1) is necessary in the case of epistemic akrasia, but not in the case of self-deception. In fact, in paradigmatic cases of self-deception, the individual does not form a higher-order judgment about her deceived belief or about what to believe. Secondly, the epistemically akratic individual deals correctly with the evidence, even though the evidence might be misleading. In contrast, it is normally argued that in self-deception, the individual deals incorrectly with the available evidence for and against the deceived belief: in paradigmatic cases, the individual ignores, rationalizes, or under-evaluates obvious evidence against the deceived belief. Such a ‘mistake’ is what forms or maintains the deceived belief in the subject’s belief system. See, for example, Pears (1984), Mele (2001), and Michel and Newen (2010).
This statement is formulated as a conditional because it is unclear how to investigate whether it is possible to consciously hold P and not-P. There is, however, a general assumption that such a state of mind is psychologically impossible. In the proposed example, Ian does not consciously hold P and not-P. Phenomenal consciousness is not sufficient for assenting to the belief’s content. Even though Ian supposedly has phenomenal consciousness of his belief that not-P, he does not assent to not-P, and thus does not consciously hold a contradiction.
One may wonder whether cases of delusions could constitute counterexamples to the present objection. One worry about treating them in this context is that they do not clearly meet condition (1). Consider patients with Capgras Syndrome who claim that a loved one has been replaced by an impostor. It is not part of the syndrome that those patients access that claim and judge it to be epistemically defective and based on that, form a further judgment that they should stop believing that their loved ones have been replaced. Quite the opposite, those patients do not seem to be in a position to epistemically evaluate their delusional beliefs. Similarly, consider someone with Cotard Syndrome, who claims that she is dead. The syndrome does not normally describe individuals who access the falsity of their beliefs but cannot eliminate the beliefs nevertheless. They do not meet condition (1) either. There are, nevertheless, special cases in which patients acknowledge the implausibility of their delusional beliefs. However, these cases do not seem to represent the standard cases. For more on deluded beliefs, see Matthews (2013) and Bayne and Hattiangadi (2013).
According to this paper’s characterization of akrasia, if a case meets condition (1) but not condition (2), it is not a case of epistemic akrasia. This paper holds that being epistemically akratic is necessarily irrational, but it does not hold that being in (1) is necessarily irrational. The objection constitutes a threat to positions that hold the latter, like Horowitz’s (2013) position.
Horowitz (2013), for example, argues that accepting the mental state described in (1) as rational licenses patently bad reasoning and irrational action. Greco (2014) argues in a rather different way, focusing on the fact that being in (1) requires mental fragmentation. According to Greco, mental fragmentation, or inner conflict, characterizes an irrational mental state. Greco (2014) holds that the mental states that compose (1) regard the same subject matter P and the difference between them is that they are products of distinct, competing belief-producing psychological systems. I disagree with Greco on both points, but agree that epistemic akrasia requires some sort of mental division that amounts to a type of irrationality. It is because the individual is not able to combine the reasons supporting each of the mental states composing (1) that he or she is irrational.
Watson and Alice could be considered parallel cases to what some authors call ‘inverse akrasia’ (see Arpaly and Schroeder 1999). In a case of classic inverse akrasia, ‘the akratic course of action is superior to the course of action recommended by the agent’s best judgment’ (Arpaly and Schroeder 1999: 162). One popular example discussed in the literature is Mark Twain’s story of Huckleberry Finn. As Huckleberry becomes friends with Jim, a runaway slave, he finds himself incapable of doing what he judges to be the right thing to do, which is to return the slave to his lawful owner. His akratic course of action ends up being morally superior and deserving of praise. A parallel case of inverse akrasia in theoretical reasoning would be the case of a person’s akratic belief being epistemically superior to the belief recommended by the person’s best judgment. Those cases are particularly interesting because they motivate the idea that the inverse akratic believer is then rational in maintaining the akratic belief. However, saying that Watson and Alice are more rational in holding the akratic belief seems to suppose the above-mentioned notion of objective rationality.
I will try to shed some light on this issue in the next section, although it is not my aim to construct a detailed system of how conflicting evidence interacts.
I am relying on Davidson’s view that having mutually contradictory beliefs is a possible state of mind, although it is an irrational one. In contrast, since believing in a contradiction is not considered to be a possible state of mind, it should not be even classified as irrational.
Denying that Watson is rational does not rule out the possibility of a case in which being in (1) is rational. According to Horowitz (2013: 19), ‘under certain special conditions, higher-order evidence might work differently from how it does in cases like [WATSON]’, and it might be rational to be in (1). Cf. Horowitz (2013); Elga (2013) proposes such a case. In fact, finding such cases would provide extra support for my characterization of epistemic akrasia in terms of two conditions. However, this is not the primary aim of this paper; the primary aim is rather to explain how condition (2) can obtain. I have offered further reasons for including condition (2) in the definition of epistemic akrasia in sections 1 and 3.
The notion of agency employed here relates to the way we are said to be agents of our beliefs. Being agents of our beliefs involves a type of rational control over our beliefs that is not specified in terms of deliberative control. The idea of being agents of our beliefs, and of rational control, is explored in what follows.
The supposition that agency requires voluntary control is disputable even beyond the discussion about mental agency. See Burge (2010: 334–335) for examples of actions in which the individual acts without having any control over the action. In his discussion of primitive agency, Burge also argues for cases of primitive action in which there is no intention guiding the action due to the primitive psychological apparatus of the agent. Such cases exemplify the idea that agency can occur in the absence of voluntary control or of the formation of intentions to act.
In a previous work, I further develop Burge’s (1996, 1998) idea that in critical reasoning, a particular rational rule emerges: the evaluation resulting from critical reasoning should be transmitted into, and implemented by, the belief or reasoning under evaluation (Borgoni ms, Dissonance and Doxastic Resistance). Otherwise, the individual would not be able to undertake critical reasoning about his or her own beliefs and reasons. This sort of agency is not like the control that we have over ordinary objects (called ‘managerial control’ by Hieronymi 2009), like when we organize our tables. In critical reasoning, our control is necessarily mediated by reasoning. In this paper, I employ some ideas related to the implementation of that rule.
Pace Greco (2014).
See Burge (2010) for a criterion for what counts as action based on the distinction between actions undertaken by the individual and processes occurring only ‘within’ the individual. According to Burge, the relevant notion of action for understanding primitive agency ‘is grounded in functioning, coordinated behavior by the whole organism, issuing from the individual’s central behavioral capacities, not purely from subsystems’ (Burge 2010: 331). It would require a further paper to develop the parallel between Burge’s notion of primitive agency and mental agency. However, as indicated above, we can also find the distinction between operations involving the individual and operations undertaken by his or her subsystems at the mental level. I am also relying on Burge’s methodology that the notion of individual action is driven by examples. In that sense, saying that the individual does something when he or she critically reasons is not only a façon de parler, but somehow grounds the attribution of an action to the individual.
It is important to note that one can make mistakes in critical reasoning just as one can make mistakes in non-critical reasoning. If mistakes are a subspecies of failures, not all failures of critical reasoning are also failures of mental agency.
One might wonder whether Ian is capable of reasoning critically about his belief, since he does not endorse the content of his belief. As mentioned in the second section, I hold that being able to reason about one’s own beliefs is sufficient for undertaking critical reasoning. Ian is able to do that. What is limited is his capacity to reason with the belief.
I am referring to the particular case of IAN. Accepting rebutting evidence need not be a sufficient reason to eliminate a given belief, since the individual’s belief system could be receiving contrasting evidence in support of that belief.
For valuable conversation and helpful comments on previous drafts of this article, I am grateful to Marian David, Guido Melchior, and Daniel Morgan. This research was partly funded by the research project FFI2010-19455, supported by the Spanish Ministry of Science and Education.
Adler, J. 2002. Akratic believing? Philosophical Studies 110: 1–27.
Alston, W. 1988. The Deontological Conception of Epistemic Justification. Philosophical Perspectives 2: 257–299.
Arpaly, N., and T. Schroeder. 1999. Praise, Blame and the Whole Self. Philosophical Studies 93: 161–188.
Bayne, T., and A. Hattiangadi. 2013. Belief and its bedfellows. In New Essays on Belief, 124–144. Basingstoke: Palgrave Macmillan.
Block, N. 1995. On a Confusion About the Function of Consciousness. Behavioral and Brain Sciences 18: 227–247.
Burge, T. 1995. Two Kinds of Consciousness. Reprinted in his Foundations of Mind (2007). Oxford University Press: 383–391.
Burge, T. 1996. Our Entitlement to self-knowledge. Proceedings of the Aristotelian Society 96: 91–116.
Burge, T. 1998. Reason and the first person. In C. Wright, B. Smith and C. MacDonald (eds.), Knowing our own Minds. Oxford University Press: 243–270.
Burge, T. 2010. Origins of Objectivity, Oxford University Press.
Coates, A. 2012. Rational epistemic akrasia. American Philosophical Quarterly 49: 113–124.
Davidson, D. 1982. Paradoxes of Irrationality. Reprinted in his Problems of Rationality (2004). Oxford University Press: 169–188.
Davidson, D. 1985. Incoherence and Irrationality. Reprinted in his Problems of Rationality (2004). Oxford University Press: 189–198.
Elga, A. 2013. The puzzle of the unmarked clock and the new rational reflection principle. Philosophical Studies 164: 127–139.
Greco, D. 2014. A puzzle about epistemic akrasia. Philosophical Studies 167(2): 201–219.
Heil, J. 1984. Doxastic Incontinence. Mind 93: 56–70.
Hieronymi, P. 2006. Controlling attitudes. Pacific Philosophical Quarterly 87: 45–74.
Hieronymi, P. 2009. ‘Two Kinds of Agency’. In L. O’Brien and M. Soteriou (eds.) Mental Actions, Oxford University Press: 138–162.
Hookway, C. 2001. ‘Epistemic Akrasia and Epistemic Virtue’. In A. Fairweather and L. Zagzebski (eds.), Virtue Epistemology: Essays on Epistemic Virtue and Responsibility, Oxford University Press: 178–199.
Horowitz, S. 2013. Epistemic Akrasia. Noûs. doi:10.1111/nous.12026.
Hurley, S. 1989. Natural Reasons, Oxford University Press.
Levy, N. 2004. Epistemic akrasia and the subsumption of evidence: a reconsideration. Croatian Journal of Philosophy 4(10): 149–156.
Matthews, R.J. 2013. Belief and Belief’s Penumbra. In New Essays on Belief, 100–123. Basingstoke: Palgrave Macmillan.
Mele, A. 1986. Incontinent believing. The Philosophical Quarterly 36: 212–222.
Mele, A. 2001. Self-Deception Unmasked. Princeton: Princeton University Press.
Michel, C., and A. Newen. 2010. Self-deception as Pseudo-Rational Regulation of Belief. Consciousness and Cognition 19: 731–744.
Owens, D. 2002. Epistemic akrasia. The Monist 85: 381–397.
Pears, D. 1984. Motivated Irrationality. Oxford University Press.
Pettit, P., and M. Smith. 1996. Freedom in belief and desire. Journal of Philosophy 93: 429–449.
Ribeiro, B. 2011. Epistemic akrasia. International Journal for the Study of Skepticism 1: 18–25.
Borgoni, C. Epistemic Akrasia and Mental Agency. Review of Philosophy and Psychology 6, 827–842 (2015). https://doi.org/10.1007/s13164-014-0205-4
Keywords: Critical Reasoning; Phenomenal Consciousness; Epistemic Evaluation; Mental Agency; Contradictory Belief