All those opinions and notions of things, to which we have been accustomed from our infancy, take such deep root, that ’tis impossible for us, by all the powers of reason and experience, to eradicate them.
Hume, A Treatise of Human Nature.
People sometimes try to call others’ beliefs into question by pointing out the contingent causal origins of those beliefs. The significance of such ‘Etiological Challenges’ is a topic that has started attracting attention in epistemology. Current work on this topic aims to show that Etiological Challenges are, at most, only indirectly epistemically significant, insofar as they bring other generic epistemic considerations to the agent’s attention (e.g. disagreement, consistency with one’s own epistemic standards, evidence of one’s fallibility). Against this approach, we argue that Etiological Challenges are epistemically significant in a more direct and more distinctive way. An Etiological Challenge prompts the agent to assess whether her beliefs result from practices of indoctrination, and whether she should reduce confidence in those beliefs, given the anti-reliability of indoctrination as a method of belief-acquisition. Our analysis also draws attention to some of the ways in which epistemic concerns interact with political issues—e.g. relating to epistemic injustice, identity-based discrimination, and segregation—when we’re thinking about the contingent causal origins of our beliefs.
Interesting unpublished work in this area also includes Adam Elga, “Lucky to be rational”; Joshua Schechter, “Luck, rationality, and explanation: A reply to Elga’s “Lucky to be rational””; Katia Vavova, “Irrelevant influences”; Paul Silva, “Etiological information and diminishing justification (or, the truth behind debunking arguments)”; and Amia Srinivasan, “The Archimedean urge.” Beyond work that’s explicitly focused on the etiology of belief, there is also a large literature on the evolutionary debunking of morality; seminal works include Joyce (2006) and Street (2006). For an overview of this literature see Vavova (2015).
This view is advanced in Vavova, “Irrelevant influences” (see note 1); we’ll discuss it in Sect. 4.
We take this term and formulation from Srinivasan (2011).
We take this term and formulation from Mogensen (forthcoming).
We allow that Indoctrination Anxiety may be an instance of these other types of Anxiety. But an account that characterizes the worry arising from Etiological Challenges in terms of Indoctrination Anxiety—as ours does—differs from and identifies a narrower worry than one that characterizes it merely in terms of either Genealogical or Contingency Anxiety.
Note that, as well as being issued in second-personal terms, Etiological Challenges can also be issued first-personally (i.e. to oneself: “I just believe P because...”), or third-personally (i.e. “he just believes P because...”). Some uses that we’ve identified for second-personal Etiological Challenges may be unintelligible in first- or third-personal terms. For instance, it’s unclear whether Indoctrination Anxiety could be provoked in anyone, in the case where A tells B that some third party, C, just believes P because he was brought up in culture Q. And it seems unlikely that any form of rhetorical ammunition could be at work in a first-personal context. While some authors in the literature show an interest in first-personal Etiological Challenges—most notably, Cohen (2000)—we’ll confine our attention to the implications of second-personal Etiological Challenges in what follows.
Note, however, that Cohen focuses on a case that differs from standard examples of symmetrical peer disagreement.
Mogensen (forthcoming, p. 17) makes the strong claim that even in cases where there’s no actual disagreement the significance of disagreement can defeat the rationality of Target Beliefs, if and when the actual disagreement is ‘arbitrarily absent.’ Mogensen thus argues that some merely possible disagreement can be undermining, since some merely possible disagreement is arbitrarily absent. If this is correct, it supplies Conciliationism with a reply to Kelly’s argument that Conciliationism tends towards global skepticism, since, as Kelly says Conciliationists must claim, all merely possible disagreement has undermining force (see Kelly 2005).
Sher also partly locates the significance of Etiological Challenges in facts about disagreement, but unlike White and Mogensen, he doesn’t regard their force as reducible to the significance of disagreement. For Sher disagreement and Etiological Challenges combine to exert a complementary undermining challenge (see Sher 2001, pp. 69–72).
Provided, that is, that the agent is (as far as she can discern) in a so-called Permissive case: a case in which there is more than one rationally permissible doxastic attitude to hold towards the proposition given the evidence.
Elga, “Lucky to be rational,” p. 2 (see note 1).
Mogensen suggests that Etiological Challenges can help us think in a way that resists this temptation. Perhaps that’s right, but the temptation to be self-serving in such judgments is strong; see, for example, Peter van Inwagen’s (1996) reflections on disagreement.
Social psychologists suggest that humans operate within a framework of “naïve realism,” the sense that one sees the world objectively, and that this leads us to judge our own views as relatively common, and alternative views as relatively uncommon, in the wider population, and to overestimate the prevalence of bias among those who clearly disagree with us. So one may be antecedently likely to behave as we predict. See Pronin et al. (2004).
Perhaps, as in Garber’s approach (2009, p. 46ff.), in processing the doubts occasioned by reflecting on the historical contingency of her religious convictions, B may end up retaining her convictions while coming to regard them as ‘works-in-progress,’ which she shouldn’t rely upon as reference-points for practical decision-making. But still, intellectual humility isn’t the same as belief-revision, and treating Etiological Challenges as Indirect Pointers prematurely limits their capacity to elicit the latter.
Although the characteristic practices of indoctrination frequently are enacted deliberately or intentionally, we maintain that it’s possible for such practices to be enacted in the absence of any conscious intention to indoctrinate.
This is from the text of a definition given at http://www.merriam-webster.com/dictionary/indoctrination.
One means of Credibility-Prejudicing is what we might call Isolation, that is, the structuring of students’ environment so that they don’t encounter people who might present—or constitute—counter-evidence to the ideology into which the students are being indoctrinated. Allen Buchanan (2004, p. 97) discusses a process of Credibility-Prejudicing via Isolation: “A person brought up in a racist society typically not only absorbs an interwoven set of false beliefs about the natural characteristics of blacks (or Jews, and so on), but also learns epistemic vices that make it hard for him to come to see the falsity of these beliefs. For example, when a child, who has been taught that blacks are intellectually inferior, encounters an obviously highly intelligent black person, he may be told that the latter “must have some white blood.” Along with substantive false beliefs, the racist (like the anti-Semite and the sexist) learns strategies for overcoming cognitive dissonance and for retaining those false beliefs in the face of disconfirming evidence.”
Third-personal Etiological Challenges may sometimes be used to effect Credibility-Prejudicing. When inculcating a belief in some proposition, P, an educator may say, of those who deny P, “they just deny P because they were raised in culture Q,” with the aim of tarnishing their opponents’ credibility in the students’ eyes.
This practice pushes one towards being a ‘toady’ rather than a ‘pawn,’ in the sense of those terms (applied to beliefs) as used by Yaffe (2003, p. 338). Indoctrination that uses Affective-Conditioning doesn’t merely work towards getting someone to affirm the Target Beliefs, it pushes the person towards self-consciously identifying with those beliefs.
A type of case central to Cohen’s discussion (2000).
To be clear, then, a negative view of what we’re calling ‘indoctrination’ doesn’t entail skepticism about whether testimony can be a source of justification. Of course in some areas (e.g. morality) it’s controversial whether people should rely on testimony in their deliberations, and when indoctrination involves testimony that broaches such contested terrains, skepticism about testimony will be concomitantly appropriate. But outside these areas, one can accept that testimony generally justifies belief, while remaining suspicious of indoctrination, since indoctrination is a form of belief-inculcation in which the normal channels of routine information-exchange that testimony utilizes are exploited in the service of a manipulative agenda.
Although we can’t defend this assertion here, we think the notion that exposure to alternative worldviews itself constitutes a form of indoctrination (e.g. of liberal ideals about tolerance) is wrongheaded. For discussion of some legal and philosophical aspects of this critique of liberal pluralism, see Stolzenberg (1993).
In describing this as a coercive educational practice, we’re not suggesting that beliefs can be coerced as such. We agree with the view in Locke (1946) that people can only be coerced into actions, not into holding beliefs. We suggest, though, that educational practices can coerce forms of behavior that result in predictable doxastic outcomes.
In their defense of directive moral education—their proxy for indoctrination—Sher and Bennett (1982, p. 676) distinguish between directive methods that impair a child’s later ability to respond to moral reasons and those that don’t, and they argue that although the latter can be acceptable, the former aren’t. In general, everyone should recognize a distinction like this, between objectionable and unobjectionable forms of education vis-à-vis respect for a student’s rationality. Regardless of whether our version of the distinction is the optimal one, most of what we say here would hold given any reasonable way of drawing the relevant distinction.
There are at least two ways, for instance, that indoctrination may undermine knowledge. First, if one’s belief in fact resulted from indoctrination, it probably won’t constitute knowledge, since (i) knowledge (plausibly) has a safety condition (see Williamson 2000), and (ii) indoctrination is an unsafe process of belief-acquisition. (On this point, see also Toby Handfield, “Genealogical explanations of chance and morals,” (October 10, 2013), Available at SSRN: http://ssrn.com/abstract=2343405.) Second, regardless of whether one’s belief in fact resulted from indoctrination, if one has reason to believe (or if one just does believe) it did, this too may undermine knowledge. These two ways that indoctrination undermines knowledge mirror White’s distinction between blocking v. undermining debunking (see 2010, p. 575), and Mark Schroeder’s distinction between objective v. subjective defeaters (2015, pp. 228–230).
That is, we claim that not only does indoctrination fail to lead to a high proportion of true to false beliefs, it also tends to lead to a high proportion of false to true beliefs.
Again, we are not claiming that indoctrination is necessarily anti-reliable, but only that it’s plausible that, in the environments in which indoctrination characteristically operates in our world, it is anti-reliable. If, however, we were mistaken about this, and indoctrination turned out to be merely unreliable rather than anti-reliable, this would not affect our argument very much. For, regarding belief revision, the main difference between learning that a belief was formed via an unreliable process and learning that it was formed via an anti-reliable process concerns the extent of the necessary revision; the latter typically calls for more extensive revision than the former. But insofar as agents must revise their beliefs in response to learning that those beliefs were formed via an unreliable process, and insofar as learning this is not akin to the possibility of global error, the main points of our argument could still be sustained.
This is why Elga is wrong to make the inferences he makes on pp. 7–8 of “Lucky to be rational” (see note 1).
Vavova, “Irrelevant influences,” p. 12 (see note 1); see also Sher (2001, p. 67).
Yaffe says we sometimes “manipulate the way an agent responds to reasons, and sometimes... manipulate what reasons she has,” and he calls these two forms of manipulation “indoctrination” and “coercion” respectively (2003, p. 340). In the epistemic case, what reasons a person has may result from censorship, which may itself be a component of indoctrination. Thus, we include both sorts of manipulation under the category of indoctrination.
Vavova does think some of the central cases of irrelevant influence resemble brainwashing; see “Irrelevant Influences,” p. 10ff. As brainwashing is arguably an extreme form of indoctrination, we are sympathetic to this thought. But her characterization of the phenomenon as one merely involving general higher-order evidence makes the worry presented by Etiological Challenges too generic. By contrast, our proposal implies that there is something special about Etiological Challenges—vis-à-vis the connection to practices of indoctrination—which is not shared by all higher-order evidence.
This is paraphrased from Harman (1973, p. 148).
Kripke’s discussion of this kind of reasoning can be found in Kripke (2011).
Though according to Kelly (2013), one can reject the kind of reasoning Christensen describes without endorsing Independence.
Allen Buchanan describes his own experience with this sort of process. In coming to reject the racist worldview which he was indoctrinated into as a child, Buchanan describes how moving away—geographically—from his deeply racist community of origin initiated the transformation. “I left this toxic social environment at the age of eighteen, and came to understand that the racist worldview that had been inculcated in me was built on a web of false beliefs about natural differences between blacks and whites. My first reaction was a bitter sense of betrayal: Those I had trusted and looked up to—my parents, aunts and uncles, pastor, teachers, and local government officials—had been sources of dangerous error, not truth” (Buchanan 2004, p. 96).
A significant recent example of this approach in free speech theory is Shiffrin (2014).
The use of isolating strategies is relevant here again; see note 18.
See Epistemic Injustice, Chapter 4. Of course, as Fricker makes clear, inculcating these virtues may require more than merely recognizing oneself as having fallen prey to the corresponding vices.
Anderson, E. (2010). The imperative of integration. Princeton: Princeton University Press.
Buchanan, A. (2004). Political liberalism and social epistemology. Philosophy & Public Affairs, 32(2), 95–130.
Christensen, D. (2007). Epistemology of disagreement: The good news. Philosophical Review, 116(2), 187–217.
Christensen, D. (2011). Disagreement, question-begging, and epistemic self-criticism. Philosophers’ Imprint, 11(6), 1–22.
Cohen, G. A. (2000). If you’re an egalitarian, how come you’re so rich? Cambridge, MA: Harvard University Press.
Elga, A. (2007). Reflection and disagreement. Noûs, 41(3), 478–502.
Feldman, R. (2006). Epistemological puzzles about disagreement. In S. Hetherington (Ed.), Epistemology futures. Oxford: Oxford University Press.
Fricker, M. (2007). Epistemic injustice: Power and the ethics of knowing. Oxford: Oxford University Press.
Garber, D. (2009). What happens after Pascal’s Wager: Living faith and rational belief. Milwaukee: Marquette University Press.
Goldman, A. (1992). Epistemic folkways and scientific epistemology. In Liaisons (pp. 155–177). Cambridge, MA: MIT Press.
Harman, G. (1973). Thought. Princeton: Princeton University Press.
Joyce, R. (2006). The evolution of morality. Cambridge, MA: MIT Press.
Kelly, T. (2005). The epistemic significance of disagreement. In T. S. Gendler & J. Hawthorne (Eds.), Oxford studies in epistemology (Vol. 1). Oxford: Oxford University Press.
Kelly, T. (2008). Disagreement, dogmatism, and belief polarization. The Journal of Philosophy, 105(10), 611–633.
Kelly, T. (2013). Disagreement and the burdens of judgment. In D. Christensen & J. Lackey (Eds.), The epistemology of disagreement: New essays. Oxford: Oxford University Press.
Kripke, S. (2011). On two paradoxes of knowledge. In Philosophical troubles: Collected papers (Vol. 1). Oxford: Oxford University Press.
Leiter, B. (2004). The hermeneutics of suspicion: Recovering Marx, Nietzsche, and Freud. In B. Leiter (Ed.), The future for philosophy. Oxford: Clarendon Press.
Locke, J. (1946). A letter concerning toleration. In J. W. Gough (Ed.), The second treatise of civil government and a letter concerning toleration. Oxford: Basil Blackwell. (originally published 1689).
Mogensen, A. L. (forthcoming). Contingency anxiety and the epistemology of disagreement. Pacific Philosophical Quarterly.
Pollock, J. L. (1986). Contemporary theories of knowledge. Savage: Rowman & Littlefield.
Prinz, J. J. (2007). The emotional construction of morals. Oxford: Oxford University Press.
Pronin, E., Gilovich, T., & Ross, L. (2004). Objectivity in the eye of the beholder: Divergent perceptions of bias in self versus others. Psychological Review, 111(3), 781–799.
Ricoeur, P. (1970). Freud and philosophy (D. Savage, Trans.). New Haven: Yale University Press.
Schoenfield, M. (2014). Permission to believe: Why permissivism is true and what it tells us about irrelevant influences on belief. Noûs, 48(2), 193–218.
Schroeder, M. (2015). Knowledge is belief for sufficient (objective and subjective) reason. In T. S. Gendler & J. Hawthorne (Eds.), Oxford studies in epistemology (Vol. 5). Oxford: Oxford University Press.
Sher, G. (2001). But I could be wrong. Social Philosophy and Policy, 18(2), 64–78.
Sher, G., & Bennett, W. J. (1982). Moral education and indoctrination. The Journal of Philosophy, 79(11), 665–677.
Shiffrin, S. V. (2014). Speech matters: Lying, morality, and the law. Princeton: Princeton University Press.
Snook, I. A. (2000). Introduction. In I. A. Snook (Ed.), Concepts of indoctrination: Philosophical essays (2nd ed.). Oxford: Routledge.
Sorensen, R. (1988). Dogmatism, junk knowledge, and conditionals. The Philosophical Quarterly, 38(153), 433–454.
Sosa, E. (1991). Knowledge in perspective. New York: Cambridge University Press.
Srinivasan, A. (2011). Armchair v. laboratory. London Review of Books, 33 (18).
Stolzenberg, N. M. (1993). ‘He drew a circle that shut me out’: Assimilation, indoctrination, and the paradox of a liberal education. Harvard Law Review, 106(3), 581–667.
Street, S. (2006). A Darwinian dilemma for realist theories of value. Philosophical Studies, 127(1), 109–166.
van Inwagen, P. (1996). It is wrong, everywhere, always, for anyone, to believe anything upon insufficient evidence. In J. Jordan & D. Howard-Snyder (Eds.), Faith, freedom and rationality. Savage, MD: Rowman & Littlefield.
Vavova, K. (2015). Evolutionary debunking of morality. Philosophy Compass, 10(2), 104–116.
White, R. (2010). You just believe that because. Philosophical Perspectives, 24(1), 573–615.
Williamson, T. (2000). Knowledge and its limits. Oxford: Oxford University Press.
Yaffe, G. (2003). Indoctrination, coercion, and freedom of will. Philosophy and Phenomenological Research, 67(2), 335–356.
We are grateful to Hilary Kornblith, Brian Leiter, Katia Vavova, and two anonymous referees for comments, criticisms, and suggestions. DiPaolo’s work on this paper was made possible through the support of a grant from the John Templeton Foundation. The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of the John Templeton Foundation. He would also like to thank Saint Louis University, and its Department of Philosophy, for their funding and support.
DiPaolo, J., Simpson, R.M. Indoctrination anxiety and the etiology of belief. Synthese 193, 3079–3098 (2016). https://doi.org/10.1007/s11229-015-0919-6