Instability occurs when the very fact of choosing one particular possible option rather than another affects the expected values of those possible options. In decision theory: An act is stable iff given that it is actually performed, its expected utility is maximal. When there is no stable choice available, the resulting instability can seem to pose a dilemma of practical rationality. A structurally very similar kind of instability, which occurs in cases of anti-expertise, can likewise seem to create dilemmas of epistemic rationality. One possible line of response to such cases of instability, suggested by both Jeffrey (The logic of decision, University of Chicago Press, Chicago, 1983) and Sorensen (Aust J Philos 65(3):301–315, 1987), is to insist that a rational agent can simply refuse to accept that such instability applies to herself in the first place. According to this line of thought it can be rational for a subject to discount even very strong empirical evidence that the anti-expertise condition obtains. I present a new variety of anti-expertise condition where no particular empirical stage-setting is required, since the subject can deduce a priori that an anti-expertise condition obtains. This kind of anti-expertise case is therefore not amenable to the line of response that Jeffrey and Sorensen recommend.
“Death works from an appointment book which states time and place; a person dies if and only if the book correctly states in what city he will be at the stated time. The book is made up weeks in advance on the basis of highly reliable predictions. An appointment on the next day has been inscribed for him. Suppose, on this basis, the man would take his being in Damascus the next day as strong evidence that his appointment with Death is in Damascus, and would take his being in Aleppo the next day as strong evidence that his appointment is in Aleppo… If… he decides to go to Aleppo, he then has strong grounds for expecting that Aleppo is where Death already expects him to be, and hence it is rational for him to prefer staying in Damascus. Similarly, deciding to stay in Damascus would give him strong grounds for thinking that he ought to go to Aleppo.” (Gibbard and Harper 1978, p. 373).
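The instability in the Death in Damascus case can be sketched numerically. The following is a minimal illustration (not from the paper): utilities of 1 for surviving and 0 for dying are assumed, and the 0.9 probability is an assumed stand-in for the "strong evidence" that Death is wherever the agent has chosen to go. Neither act then comes out stable in the sense defined above: conditional on choosing either city, the other city has higher expected utility.

```python
# Hypothetical model: choosing a city is strong evidence that Death
# is in that city. P_MATCH is an assumed stand-in for "strong evidence".
P_MATCH = 0.9  # P(Death is in city c | agent chose c)

def expected_utility(act, chosen):
    """EU of going to `act`, conditional on `chosen` being the act actually chosen."""
    other = "Aleppo" if chosen == "Damascus" else "Damascus"
    # Death is probably where the chosen act sends you.
    p_death_at = {chosen: P_MATCH, other: 1 - P_MATCH}
    return 1 - p_death_at[act]  # utility 1 if you avoid Death's city, 0 if not

for chosen in ("Damascus", "Aleppo"):
    other = "Aleppo" if chosen == "Damascus" else "Damascus"
    eu_chosen = expected_utility(chosen, chosen)
    eu_other = expected_utility(other, chosen)
    # An act is stable iff, given that it is performed, its EU is maximal.
    print(chosen, round(eu_chosen, 2), round(eu_other, 2), eu_chosen >= eu_other)
```

Running this prints, for each candidate choice, a lower expected utility for the chosen act than for the alternative, so the stability test fails for both acts: whichever city is chosen, the agent then prefers the other.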
One might alternatively define stability in terms of conditional expected value: an option is stably preferred iff its expected utility conditional on its actually being chosen is maximal. I am very grateful to an anonymous referee for helpful comments on defining stability.
Notice that strictly speaking, according to Jeffrey’s own official view, there are no such things as outright or full beliefs—we should eliminate such talk and replace it with subjective probabilities.
A different condition that is also sometimes labeled ‘anti-expertise’ is: (Bp → ¬p) & (B¬p → p). I.e., if you believe it, it is false; if you disbelieve it, it is true. Notice that this condition creates less threat of a dilemma insofar as it says nothing about suspending judgement or simply neither believing nor disbelieving that p. The bi-conditional AE, in contrast, states that: (Bp → ¬p) & (¬Bp → p).
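The contrast between the two conditions is truth-functional and can be checked exhaustively. The following short script is an illustration (not from the paper): the doxastic attitudes Bp and B¬p are modelled as booleans, so that one can verify that under the bi-conditional AE no attitude towards p is accurate, whereas the weaker condition leaves suspension of judgement open.

```python
def implies(a, b):
    """Material conditional a -> b."""
    return (not a) or b

def ae(p, bp):
    """Bi-conditional anti-expertise: (Bp -> ~p) & (~Bp -> p)."""
    return implies(bp, not p) and implies(not bp, p)

def weak(p, bp, bnp):
    """Weaker condition: (Bp -> ~p) & (B~p -> p); silent on suspension."""
    return implies(bp, not p) and implies(bnp, p)

# Under AE there is no accurate doxastic state: AE holds only when Bp != p,
# so no (p, Bp) pair satisfying AE has the belief tracking the truth.
accurate_ae_states = [(p, bp) for p in (True, False) for bp in (True, False)
                      if ae(p, bp) and bp == p]
print(accurate_ae_states)  # []

# Under the weaker condition, suspending judgement (Bp and B~p both false)
# makes both conditionals vacuously true, whatever the truth value of p.
suspended_ok = [p for p in (True, False) if weak(p, False, False)]
print(suspended_ok)  # [True, False]
```

This makes vivid why the weaker condition poses less of a dilemma: suspension is always compatible with it, whereas AE guarantees that whatever attitude the subject takes towards p, that attitude misses the truth.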
Reed Richter (1990) likewise argues that it can be rationally permitted to not believe the deductive consequences of your beliefs, even if you know and grasp that they have these consequences.
Sorensen does allow that in light of new evidence one could increase one’s confidence that one is an anti-expert from a low credence to a less low credence, so long as one does not actually (outright) believe that one is an anti-expert. Likewise he allows that one can permissibly believe that one might be an anti-expert. So Sorensen does allow for some limited sensitivity to evidence in favour of one’s being an anti-expert. Many thanks to an anonymous referee for this journal for helpful comments on this point.
This last sentence is, of course, an instance of what we now call (following Wittgenstein 1953, part II, section x) ‘Moore’s paradox’. Moore’s own example was: ‘I went to the pictures last Tuesday, but I don’t believe that I did’ (Moore 1942, p. 543), which is of the form: p & ¬Bp. Sentences of the form: p & B¬p, or: Bp & ¬p, are also standardly counted as ‘Moorean’.
Compare, for example, Koons (1990), who presents a pair of ‘doxastic paradoxes’ that do not rely on self-reference. But the set-up of these situations does still rely on the subject somehow having extremely strong empirical evidence that a bi-conditional of the form: p ↔ [S cannot justifiably believe p] really does apply to her.
Wittgenstein (1953) very briefly mentions this sort of demonstrative judgement of one’s own height in section §279 of the Investigations, in the course of what is generally considered to be his ‘private language argument’: ‘Imagine someone saying: "But I know how tall I am!" and laying his hand on top of his head to prove it.’ One might naturally read Wittgenstein here as dismissing such a claim and/or demonstration as meaningless. However, in the immediately preceding section §278 he writes: ‘"I know how the colour green looks to me"—surely that makes sense!—Certainly: what use of the proposition are you thinking of?’ Assuming that in the last sentence here Wittgenstein is speaking in propria persona, his point then seems to be not so much that such statements are simply meaningless, but rather just that we should ask what the point or usage of such statements is supposed to be in any given context. But setting aside matters of Wittgensteinian interpretation, one possible objection would be to claim that these kinds of demonstrative judgements are meaningless, hence fail to be true. Let me just state that this strikes me as highly implausible. If someone were to point at the ground beneath their feet and assert “I am located here!” we may wonder what the point of the assertion is, but there seems no basis whatsoever for denying that they have uttered a perfectly well-formed English sentence, expressing a perfectly meaningful claim that is, trivially, true. Many thanks to an anonymous referee for interesting and helpful discussion on this point.
E.g. I can know by reflection that “I exist right now”, whereas I might not be able to know just by reflection that “N exists at time t”. One might dispute whether the former sort of knowledge should really count as a priori since it plausibly relies on the subject’s self-conscious experience of her own mental life, but it seems clear at least that it does not rely on any empirical evidence about the external world and so is something that can be known just ‘by reflection’.
If we understand the demonstrative ‘this’ in the thought ‘I am not in this mental state’ to be referring to one’s actual current total mental state, Mi, then the anti-expertise dilemma evaporates. For when I am in the initial state Mi I don’t believe that: [I am not in Mi], which is just as it should be, since it is false that [I am not in Mi] when I am in Mi. Conversely, if I were to go ahead and believe [I am not in Mi], then that belief would be true, for I would then be in a different total mental state, one which is neither Mi nor M*. For recall, M* is just like Mi except for the addition of the belief [I am not in M*], whereas this new total mental state has added the different new belief [I am not in Mi].
Notice also that this sort of instability would not be evaded, but merely shifted, by simultaneously believing both ‘I am not in this total mental state’ and some other unrelated new true proposition, n. Admittedly, forming a belief in the conjunction ‘n & S is not in M* at t’ would then put S not into M* (which is exactly like Mi except for the addition of the belief ‘S is not in M* at t’), but into some other new total mental state. And so S’s belief that ‘S is not in M* at t’ would here be true. But, again, this just shifts the instability to a different proposition. For consider: if, starting in Mi, one formed (all at once!) the judgement ‘n & I am not in this mental state’, the demonstrative ‘this’ would now be picking out a total mental state that is exactly like Mi except for the addition of this conjunctive belief: ‘n & I am not in this mental state’. Call this latter new total mental state M***. Given that n is also a true proposition, then by remaining in the initial total mental state Mi at time t, the proposition ‘n & S is not in M*** at t’ is true but not believed by S. But if, starting from Mi, S were to go ahead at time t and actually believe that n & S is not in M***, then this proposition would be false. So the anti-expertise instability remains.
I am extremely grateful to an anonymous referee for this journal whose helpful comments substantially improved the non-indexical formulations in this section.
In fact I don’t think that the use of the indexicals ‘you’ or ‘I’ is essential for Crimmins’ case—the same issues would arise for ‘S falsely believes that Gonzalez, master of disguise, is an idiot’.
I am very grateful to Peter Brössel for helpful conversations about the comparison with Gödel’s theorem.
I am very grateful to an anonymous referee for this journal for pressing me to discuss this point.
Notice also, though it is an ad hominem, that Sorensen himself is well-known for championing the epistemicist approach to vagueness on the basis that it allows us to retain classical logic, and for rejecting any rival approaches which would require us to revise or deviate from classical logic.
Earlier versions of this paper were presented at the conference ‘Hard Cases and Rational Choice’ at the University of Bern, at the workshop ‘Rationality: Epistemic and Practical Perspectives’ and at the colloquium on Logic and Epistemology, both at the Ruhr University Bochum. I am very grateful to the audiences on all those occasions for helpful questions and feedback. Many thanks in particular to Peter Brössel, Ruth Chang, Insa Lawler, Jim Pryor, Kevin Reuter, Christian Straßer and Filippo Vindrola for their comments, objections and advice. Finally, I am especially grateful to the anonymous referees for this journal and for another journal, whose reports very substantially improved this paper.
Baumann, P. (2017). Is everything revisable? Ergo: An Open Access Journal of Philosophy, 4, 349–357.
Bommarito, N. (2010). Rationally self-ascribed anti-expertise. Philosophical Studies, 151(3), 413–419.
Bonjour, L. (1998). In defense of pure reason. New York: Cambridge University Press.
Burge, T. (1978). Buridan and epistemic paradox. Philosophical Studies, 34, 21–35.
Caie, M. (2013). Belief and indeterminacy. Philosophical Review, 122(4), 527–575.
Casullo, A. (1977). Kripke on the a priori and the necessary. Analysis, 37, 152–159.
Christensen, D. (2007). Epistemic self-respect. Proceedings of the Aristotelian Society, 107(1pt3), 319–337.
Christensen, D. (2010). Higher-order evidence. Philosophy and Phenomenological Research, 81(1), 185–215.
Conee, E. (1982). Utilitarianism and rationality. Analysis, 42(1), 55–59.
Crimmins, M. (1992). ‘I falsely believe that p.’ Analysis, 52(3), 191.
Donnellan, K. (1977). The contingent a priori and rigid designators. Midwest Studies in Philosophy, 2, 12–27.
Ebbs, G. (2016). Reading Quine’s claim that no statement is immune to revision. In F. Janssen-Lauret & G. Kemp (Eds.), Quine and his place in history. London: Palgrave Press.
Egan, A., & Elga, A. (2005). I can’t believe I’m stupid. Philosophical Perspectives, 19(1), 77–93.
Elstein, D. (2007). A new revisability paradox. Pacific Philosophical Quarterly, 88(3), 308–318.
Gibbard, A., & Harper, W. L. (1978). Counterfactuals and two kinds of expected utility. In A. Hooker, J. J. Leach, & E. F. McClennen (Eds.), Foundations and applications of decision theory. Dordrecht: D. Reidel.
Jeffrey, R. (1983). The logic of decision (2nd ed.). Chicago: University of Chicago Press.
Katz, J. (1988). Realistic rationalism. Cambridge, MA: MIT Press.
Kripke, S. (1975). Outline of a theory of truth. Journal of Philosophy, 72, 690–712.
Kripke, S. (1980). Naming and necessity. Cambridge, MA: Harvard University Press.
Koons, R. (1990). Doxastic paradoxes without self-reference. Australasian Journal of Philosophy, 68(2), 168–177.
Kyburg, H. (1961). Probability and the logic of rational belief. Middletown: Wesleyan University Press.
Makinson, D. (1965). The paradox of the preface. Analysis, 25, 205–207.
Moore, G. E. (1942). A reply to my critics. In P. A. Schilpp (Ed.), The Philosophy of G. E. Moore. Evanston, IL: Northwestern University Press.
Mortensen, C., & Priest, G. (1981). The truth teller paradox. Logique et Analyse, 95–96, 381–388.
Quine, W. V. O. (1961). Two dogmas of empiricism. In From a logical point of view (2nd ed.). Cambridge, MA: Harvard University Press.
Richter, R. (1990). Ideal rationality and hand waving. Australasian Journal of Philosophy, 68(2), 147–156.
Sorensen, R. (1987). Anti-expertise, instability, and rational choice. Australasian Journal of Philosophy, 65(3), 301–315.
Sorensen, R. (1988). Blindspots. Oxford: Oxford University Press.
Titelbaum, M. (2015). Rationality’s fixed point (or: In defense of right reason). Oxford Studies in Epistemology, 5, 253–294.
Turri, J. (2011). Contingent a priori knowledge. Philosophy and Phenomenological Research, 83(2), 327–344.
Tymoczko, T. (1984). An unsolved puzzle about knowledge. The Philosophical Quarterly, 34(137), 437–458.
Wittgenstein, L. (1953). Philosophical investigations (G. E. M. Anscombe, Trans.; G. E. M. Anscombe & R. Rhees, Eds.). Oxford: Blackwell.
Raleigh, T. A new anti-expertise dilemma. Synthese (2021). https://doi.org/10.1007/s11229-021-03035-5
- Decision theory
- A priori knowledge
- Demonstrative judgement
- Rational dilemma
- Epistemic dilemma