Abstract
This paper defends the claim that pragmatic encroachment—the idea that knowledge is sensitive to the practical stakes of believing—can explain a distinctive kind of epistemic injustice: the injustice that occurs when prejudice causes someone to know less than they otherwise would. This encroachment injustice, as we call it, occurs when the threat of being met with prejudice raises the stakes for someone to rely on her belief when acting, by raising the level of evidential support required for knowledge. We explain this notion of encroachment injustice, connect it to the empirical literature on implicit bias, and defend it against important objections.
1 Introduction
Prejudice, it has been argued, can generate epistemic wrongs, not just because it can lead to, for example, an undue failure to appreciate what someone knows, but also because it can cause someone to know less than they otherwise would.Footnote 1 This undue erosion of knowledge can be accounted for in (at least) two ways. One is via self-doubt. As Miranda Fricker (2007) has famously argued, prejudice can cause a distinctive type of injustice: an epistemic injustice—that is, the injustice that occurs when a subject is “wronged strictly in their capacity as an epistemic subject” (Fricker, 2013, 1320). On this picture, the expectation of facing such prejudice can, over time, lead an agent to self-doubt. This self-doubt can, in turn, cause the agent to lose knowledge—at least if knowledge is incompatible with such self-doubt. The other way to account for this loss of knowledge is via the practical stakes of justified belief. On this far less explored picture, the expectation of facing prejudice can raise the practical stakes for an agent to rely on her belief when acting. These raised stakes can, in turn, undermine what she knows—at least if knowledge is sensitive to the practical costs of belief: that is, if knowledge is subject to pragmatic encroachment. On this picture, self-doubt can occur, but it is not what causes the loss of knowledge. What causes the loss of knowledge are the prejudice-based practical considerations that an agent faces.
Few have recognized the possibility of this latter kind of undue loss of knowledge: Jason Stanley (2015) recognizes it in passing, and, in a paper that argues that pragmatic encroachment cannot account for central cases of epistemic injustice, Mikkel Gerken (2019) briefly considers, but ultimately dismisses, it. However, a defense and exploration of this idea have yet to be provided. This is the aim of this paper.
To be clear, the kind of case we seek to explain is the following:
Participation Nilam is taking a class in bioethics, where she recently learned that ‘The principle of nonmaleficence holds that one should do no harm’. Today, her professor asks, “Who can explain the principle of nonmaleficence?” Nilam is the only person of color in the room. Given the society she lives in, if she were to be wrong about what the principle of nonmaleficence holds, her professor and many fellow students would think, on the basis of prejudice, that people of color are underprepared. In this sense, the stakes for Nilam to act on her belief that ‘The principle of nonmaleficence holds that one should do no harm’—for example, by raising her hand and answering her professor’s question—are raised if she is wrong. If she is right, however, no one would have second thoughts and the professor would simply move on to the next question. Nilam does not raise her hand.
As the example suggests, the prejudice that Nilam would face if she acted on her belief that ‘The principle of nonmaleficence holds that one should do no harm’—for example, by raising her hand and answering her professor’s question—and turned out to be wrong constitutes a risk for her. The stakes for her to participate in class are thus raised if her belief is false.Footnote 2 According to the pragmatic encroachment thesis, these raised stakes can cause Nilam to cease to know that ‘The principle of nonmaleficence holds that one should do no harm’.Footnote 3 This is because the raised practical stakes of acting on one’s belief can raise the amount of evidence required to have knowledge. If a person’s knowledge base is eroded in this way, we submit, she suffers a pragmatic encroachment-based epistemic injustice, or encroachment injustice for short.
Encroachment injustice thus captures the kind of wrong that occurs when one person causes another to lose knowledge. This moral wrong, which is associated with a variety of harms, is the subject of this paper.
The paper develops as follows. Section 2 explains the notion of pragmatic encroachment in greater detail. Section 3 illustrates how encroachment injustice works and examines the harms and wrongs that this injustice involves. Section 4 then highlights the relevance of this kind of injustice in the context of empirical findings on stereotype mutability. Finally, Sect. 5 defends the account we propose against Gerken’s (2019) charge that on a framework like ours many discriminatory wrongs are mischaracterized as ‘mere’ distributive wrongs.
2 Pragmatic encroachment
Traditionally, philosophers have assumed that what someone knows depends only on truth-conducive factors such as consistency and evidential support. In the last couple of decades, however, an increasing number of philosophers have come to defend the ‘pragmatic encroachment’ thesis, which, broadly speaking, can be put as follows:
Pragmatic Encroachment Whether an agent knows that p can depend, at least in part, on pragmatic factors. (see Ross & Schroeder, 2014)
This rendition of pragmatic encroachment is neutral about the mechanisms by which pragmatic factors affect knowledge; it thus allows both for the view that knowledge can also depend on non-pragmatic factors and for the view that pragmatic factors need not always make a difference. Moreover, it is compatible with encroachment operating either on justification (e.g., Fantl & McGrath, 2002; Hawthorne, 2004; Stanley, 2005) or on belief (Bach, 2005; Schroeder, 2012; Weatherson, 2005). For our purposes, the crucial implication of the thesis is that an agent may fail to know p because acting on the presupposition that p is true poses a sufficiently serious, and narrowly specified, risk. This section explains why this is so and provides a prima facie case in favor of the pragmatic encroachment thesis, understood as the claim that the costs of falsely believing some proposition raise the level of evidential support required for knowing it.Footnote 4
A note on terminology. Although the notion of ‘pragmatic factors’ in the formulation of pragmatic encroachment above can be interpreted to include a wide variety of factors—for example, the availability of further evidence (Schroeder, 2018, 118), considerations of urgency (Shin, 2014), and the availability of alternative courses of action (see Gerken, 2017, 133 for a summary of this issue)Footnote 5—most of the current literature on pragmatic encroachment has understood it in terms of ‘stakes’: that is, as the “practical ramifications of acting on p” (Gerken, 2017, 133). Thus, this is how we will understand it in what follows. Moreover, we will refer to the negative consequences of acting on a belief as the ‘costs’ of believing, reserving the term ‘stakes’ to characterize cases as a whole. Thus, whether a case is high- or low-stakes is determined by the distribution of costs of believing truly or falsely. Finally, we will say that an action is ‘risky’ if its costs resemble a high-stakes case (e.g., ‘Almond Butter—High’ below).
On a view according to which pragmatic factors are understood as stakes-related factors, knowledge is, by its nature, something that one can rely on when acting (e.g., Fantl & McGrath, 2002; Hawthorne, 2004; Stanley, 2005)—or, as Jeremy Fantl and Matthew McGrath (2012) put it, knowledge is “actionable.” According to the pragmatic encroachment thesis, the rationality of such reliance is stakes-sensitive: if the stakes are raised high enough, it may become irrational to rely on p (i.e., by acting as if it were true). As a result, one may fail to know that p. This suggests that knowledge itself is stakes-sensitive.
The idea that knowledge is stakes-sensitive can be forcefully illustrated by considering the following pair of cases from Ross and Schroeder (2014, p. 261), which differ only in the practical stakes involved:
Almond Butter—Low Five minutes ago, Hannah made three sandwiches and placed them in the refrigerator. She told Sarah that she placed the peanut butter sandwich on the left, the tuna sandwich in the middle, and the almond butter sandwich on the right. Hannah then departed just as Sarah’s friend Almira arrived for lunch. Sarah knows that Almira has no allergies. Almira says: “I’d love an almond butter sandwich.” And so Sarah opens the refrigerator door, points to the sandwich on the right, and says: “The sandwich on the right is an almond butter sandwich. You can have it.”
Almond Butter—High This case is just like Low, except here it is Sarah’s nephew Algernon who is visiting for lunch, and he has a severe peanut allergy. He asks Sarah for a sandwich. Sarah knows that the peanut butter sandwich would be fatal to Algernon, but that the almond butter sandwich would be harmless. She also knows that he would slightly prefer the almond butter sandwich to the tuna sandwich. When Sarah goes to the fridge, she can tell by visual inspection which is the tuna sandwich, but she cannot tell by visual inspection which is the peanut butter sandwich, and which is the almond butter sandwich. So she gives him the tuna sandwich.
Many of us would agree that Sarah may rely on her belief that the sandwich on the right is almond butter only in the low-stakes case. Given that in the high-stakes case the costs of being wrong are high, she needs more evidence to justify taking the risk of mistaking the peanut butter sandwich for the almond butter sandwich. According to the reliance conception of knowledge, then, pragmatic encroachment entails that Sarah knows that the sandwich on the right is almond butter when the stakes are low but does not know it when the stakes are high.
In the philosophical literature, this crucial connection between knowing and acting is often crystallized in an influential, albeit controversial,Footnote 6 principle:
Knowledge-Action Principle For any agent S and proposition p, if S is in a choice situation in which S could not rationally act as if p, then S does not know that p (where to act as if p is to act in the manner that would be rationally optimal on the supposition that p is true).Footnote 7
The idea here is this. If an action would be rational for an agent S on the presupposition that p is true, but, given the context, acting in this way would in fact be irrational, then S does not know that p. In the context of ‘Almond Butter—High’, it may seem rational for Sarah to pick the sandwich on the right on the presupposition that this is the almond butter sandwich; but, given the risk of being wrong, it is in fact irrational for her to pick the sandwich on the right. Consequently, she does not know that the sandwich on the right is almond butter.
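To see how raised costs can tip the balance, here is a minimal decision-theoretic sketch of the ‘Almond Butter’ pair; the credence and utilities are illustrative numbers of our own choosing, not part of Ross and Schroeder’s cases. Suppose Sarah’s evidence gives her credence 0.95 that the sandwich on the right is almond butter, that serving it is worth +1 if she is right, and that serving the tuna sandwich is a safe 0. If being wrong costs only −1 (‘Low’) versus −1000 (‘High’), then:

\[
\begin{aligned}
\mathrm{EU}_{\mathrm{Low}}(\text{serve the right-hand sandwich}) &= 0.95(+1) + 0.05(-1) = 0.90 > 0,\\
\mathrm{EU}_{\mathrm{High}}(\text{serve the right-hand sandwich}) &= 0.95(+1) + 0.05(-1000) = -49.05 < 0.
\end{aligned}
\]

On the very same evidence, acting as if the sandwich on the right is almond butter is rational in ‘Low’ but irrational in ‘High’. Given the Knowledge-Action Principle, Sarah therefore fails to know in ‘High’ unless she gathers more evidence (for example, by checking the ingredients).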
On this take on pragmatic encroachment, raising the costs of false belief makes it riskier to act as if p were true, such that, at some point, those risks become too great for an action based on p to be rational. As we demonstrate below, this point is at least prima facie plausible. It is also crucial for understanding the kind of epistemic injustice this paper explores: one that involves the loss of knowledge.
To see the prima facie plausibility of this point, note, first, that the Knowledge-Action Principle applies only to p-dependent actions, so it applies only if the action would in fact be rational given p.Footnote 8 To see this, suppose that you know that the Trump administration won the election with the help of the Russian government—perhaps because you were on the team that closed the deal with the Russians—but you also know that asserting this would make your family act in a hostile manner toward you at Thanksgiving dinner; so you choose not to assert it. In this case, the Knowledge-Action Principle is compatible with you knowing that the Trump administration won the election with the help of the Russian government. After all, the negative outcome of acting on this belief—that is, that your family acts in a hostile manner toward you—does not raise the stakes for it to be justified, because you simply would not act on it either way.Footnote 9
Second, note that one may wonder whether pinning the raised costs to believing falsely is required for pragmatic encroachment to work. One might think, for instance, that the relevant costs could attach to believing per se. But as Sarah Moss (2018) argues, pragmatic encroachment on belief per se would implausibly justify wishful thinking. Contrast the following vignettes that she provides:
Costly Rodents A home inspection company has just sent you photographs of the insides of the air ducts of your house. You are trying to figure out whether there are rodents living there. If you come to believe that there are rodents, you will have to hire a costly exterminator and vacate your house for several days.
Costless Rodents An anonymous blogger has just posted photographs of the insides of the air ducts of another house. You are trying to figure out whether there were rodents living there. But nothing turns on the question. The photographs are old, and the house in question is no longer standing. (Moss, 2018, 196)
As Moss rightly indicates, it would be a mistake to be more reluctant to judge that there are rodents in the house if the belief is costly. Such reluctance would amount to mere wishful thinking: we would be withholding belief simply in order to avoid having to pay the exterminator.Footnote 10 Thus, pragmatic encroachment should not apply to belief per se.Footnote 11
Finally, note that, as we mentioned above, while pragmatic factors can explain what someone knows, these factors should not be construed as reasons for which someone believes. Pragmatic encroachment is compatible with the popular view that one ought only to believe for evidential (i.e., truth-related) reasons. The idea is, rather, that pragmatic factors shape the epistemic requirements for knowledge indirectly, as it were. There are (at least) three general approaches to how this might work. One is that high-stakes situations may require a specific kind of evidence to turn an otherwise justified belief into knowledge: for example, when the evidence is of legal or moral importance.Footnote 12 Another is that high-stakes situations may raise the threshold of evidential support required to turn an otherwise justified belief into knowledge: for example, by holding both that beliefs are warranted only if a reasoner meets some crucial credal threshold and that stakes-related reasons can shift this threshold up or down.Footnote 13
A third way in which pragmatic factors might shape the epistemic requirements for knowledge is if high-stakes situations raise the threshold of evidential support required to justify belief in the first place. The prima facie case for this way of implementing pragmatic encroachment is quite strong. One could, as Mark Schroeder (2012) does, grant that one ought only to believe for truth-related reasons, but submit that reasons for withholding belief can be pragmatic:
[A] natural place to look for reasons to withhold is in the costs of error. When you form a belief, you take a risk of getting things wrong that you do not take by withholding. In contrast, when you withhold, you guarantee that you miss out on getting things right. So plausibly, one important source of reasons to withhold will come from the preponderance of the cost of having a false belief over the cost of missing out on having a true belief. (Schroeder, 2012, 277)
As Schroeder suggests, reasons for withholding belief by design cannot be evidential; after all, any evidential reason bearing on p will favor either believing or disbelieving p; it won’t favor withholding p. But, Schroeder argues, reasons for withholding belief can nonetheless be epistemic. And among these epistemic reasons are pragmatic, “stakes-related reasons.” This is clearly illustrated in the pair of ‘Almond Butter’ cases above. The fact that serious damage ensues if Sarah falsely believes of the peanut butter sandwich that it is an almond butter sandwich is a reason for Sarah to withhold belief. Importantly, as Schroeder further argues, a belief is justified only if the reasons for belief are at least as good as the reasons for withholding belief (Schroeder, 2012, 274). Thus, as the practical stakes strengthen the case for withholding belief, more evidential reasons are required to justify believing the relevant proposition.Footnote 14
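Schroeder’s proposal can be given a simple schematic gloss; the following inequality is an illustration under simplifying assumptions of our own, not Schroeder’s own formulation. Let c_F be the cost of believing p when p is false and c_M the cost of missing out on a true belief by withholding, and suppose (for simplicity) that believing truly and withholding when p is false cost nothing. Then the expected cost of believing is no greater than that of withholding exactly when:

\[
(1-\mathrm{Cr}(p))\,c_F \;\le\; \mathrm{Cr}(p)\,c_M
\quad\Longleftrightarrow\quad
\mathrm{Cr}(p) \;\ge\; \frac{c_F}{c_F + c_M}.
\]

The evidential threshold on the right rises as c_F grows. On this toy picture, raising the cost of believing falsely (as Algernon’s allergy does for Sarah, and as prejudice will do for Nilam in the cases discussed below) raises the level of evidential support required for justified belief, and thereby for knowledge, without ever treating the stakes as reasons for belief.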
To sum up, pragmatic encroachment entails that when the costs of holding a false belief are raised, an agent who would otherwise know that p can fail to know p. Crucially, in order for pragmatic encroachment to gain purchase, the negative stakes of acting on a belief need to be associated only with acting on a false belief—and not with acting on a true belief. Moreover, pragmatic considerations should not be taken to mandate believing for pragmatic reasons. We believe that stakes-based reasons to withhold belief provide the best available explanation of how pragmatic factors determine the justification of beliefs, and thereby their status as knowledge.Footnote 15 This makes the account of pragmatic encroachment we have described in this section prima facie appealing.
3 Encroachment injustice
With this picture of pragmatic encroachment in hand, we can now begin to examine the idea that prejudice can raise the practical stakes for someone to rely on her belief, thereby undermining what she knows. This section explicates the conditions under which an encroachment injustice occurs. It then discusses, first, some of the ways in which encroachment injustice can be harmful to a subject, and, then, the wrongs that agents who engage in actions that risk causing these harms commit.
As we have already indicated, the idea that prejudice can generate an epistemic wrong has been famously explored by Fricker (2007), who argues that there is a distinctive kind of wrong that is “done to someone specifically in their capacity as a knower”: what she calls epistemic injustice (Fricker, 2007, 1). According to Fricker, there are two central kinds of epistemic injustice: ‘testimonial injustice’—an injustice involving a credibility deficit—and ‘hermeneutical injustice’—an injustice involving an intelligibility deficit.Footnote 16 Fricker originally characterized epistemic injustice as a type of discriminatory injustice: that is, as an injustice involving an ethically culpable (i.e., prejudice-based) error (Fricker, 2007, 22–23). But, as Coady (2010, 2017) and Fricker (2013) have argued, epistemic injustice can also be understood as a kind of distributive injustice: that is, an injustice in the distribution of epistemic goods such as information, education, and knowledge (Coady, 2010, 112).Footnote 17 For the purposes of our argument, it’s important to note that while we primarily focus on cases of discriminatory (i.e., prejudice-based) epistemic injustice, as we indicate below, our analysis has implications for distributive forms of epistemic injustice as well.
Now, on Fricker’s view, repeated exposure to testimonial injustice can lead an agent to self-doubt, which, in turn, can cause her to lose knowledge—at least if knowledge is incompatible with such self-doubt. Yet, as we have been suggesting, there is another distinctive way in which prejudice can unduly cause the loss of knowledge: pragmatic considerations might encroach upon knowledge. The following two vignettes (alluded to at the outset) explain this contrast:
Participation—Low Nilam is taking a class in bioethics, where she recently learned that ‘The principle of nonmaleficence holds that one should do no harm’. Today, her professor asks, “Who can explain the principle of nonmaleficence?” If Nilam is right about what the principle of nonmaleficence holds, her professor will move on to the next question. If she is wrong, no one will think much of it and the professor will simply ask another student. Nilam raises her hand and answers the question.
Participation—High Nilam is taking a class in bioethics, where she recently learned that ‘The principle of nonmaleficence holds that one should do no harm’. Today, her professor asks, “Who can explain the principle of nonmaleficence?” If Nilam is right about what the principle of nonmaleficence holds, her professor will move on to the next question. However, Nilam is the only person of color in the room. Given the society she lives in, if she is wrong about what the principle of nonmaleficence holds, her professor and many fellow students will think, on the basis of prejudice, that people of color are underprepared. Nilam does not raise her hand.
Many of us would agree that Nilam may rely on her belief that ‘The principle of nonmaleficence holds that one should do no harm’ only in the low-stakes case. Given the costs of being wrong in the high-stakes case, she needs more evidence to justify taking the risk of being wrong about what the principle of nonmaleficence holds.
According to the view we have been proposing, then, Nilam knows what the principle of nonmaleficence holds when the stakes are low but does not know it when the stakes are high. To clarify, the practical costs of falsely believing that the principle of nonmaleficence holds that one should do no harm encroach upon whether or not Nilam’s belief is knowledge. How? By raising the practical stakes for her to rely on her belief when acting. Thus, prejudice raises the level of evidential support required for her belief to count as knowledge, by giving her reason to withhold belief.Footnote 18 This is why Nilam suffers a pragmatic encroachment-based epistemic injustice—or, ‘encroachment injustice,’ for short.
Now, the type of encroachment injustice we have been describing is discriminatory (i.e., prejudice-based). But, as we noted above, encroachment injustice can also be distributive. After all, it would seem that just as prejudice can generate a high-stakes situation for an agent to rely on her belief, so, too, could an injustice in the distribution of various (epistemic and non-epistemic) goods generate a high-stakes situation that would cause an agent to lose knowledge. As we will indicate in Sect. 5, one such good may be income. For now, note that, for an encroachment injustice to occur on this picture, the goods that are unjustly distributed need not be epistemic (as the goods of education or information mentioned by Fricker and Coady might be). They can also be non-epistemic, so long as they have the effect of preventing an agent from having knowledge that she would otherwise have had.
Based on this discussion, encroachment injustice can be defined as follows:
Encroachment Injustice An injustice is an encroachment injustice iff prejudice or an unfair distribution of goods generates a high-stakes situation for a person, thereby preventing her from having knowledge that she would otherwise have had.
Let us return to ‘Participation—High,’ the primary discriminatory injustice we hope to elucidate here. If our analysis is correct, the questions arise of whether and how someone in Nilam’s social position is harmed or wronged in cases of encroachment injustice, and of whether and how people like Nilam’s professor and classmates, who seem to raise the stakes for Nilam to rely on her belief, might be responsible for bringing about such harms or wrongs. These are the questions to which we now turn.
Thus far, we have been presupposing that the loss of knowledge that results from encroachment injustice is harmful. This judgment stems from the idea that knowledge is at the center of important practical norms. As the Knowledge-Action Principle discussed above suggests, raising the stakes for someone to rely on her belief, thereby eroding her knowledge, amounts to depriving her of a set of rational actions that would have been available to her had the stakes not been raised. This is not to say that ignorance can never be bliss; after all, knowing can have negative (e.g., emotional) effects apart from rationalizing action. But, at least with respect to central, practical aspects of life—making plans for the summer, taking out a mortgage, choosing (not) to undergo a medical treatment, for example—knowing would seem to be better than not knowing. Other things equal, then, encroachment injustice is harmful because it causes the loss of knowledge, which unduly makes inferior choices rational.
But encroachment injustice can also be harmful because, by raising the level of evidential support required for belief, and thereby knowledge, it can make it epistemically rational for an agent to remain in a state of inquiry. Philosophers agree that justified belief and knowledge settle questions and bring inquiry to an end—or at the very least, that these attitudes justify ceasing to inquire further (see, e.g., Williamson, 2000; McDowell, 1995; Friedman, 2019). This needn’t always be the case; scientific theories are a case in point. Knowing that a theory is true need not entail that a matter has been settled. But, at least practically speaking, knowledge (or justified belief) can and often does reasonably settle a matter. If you know that the stove is off, then you can move on to the next activity; after all, it would be irrational to triple-, or quadruple-check that it is, in fact, off (see Friedman, 2019). Inquiry, by contrast, is unsettling, and can prevent one from making everyday decisions, thereby brewing uncertainty and anxiety. Encroachment injustice is thus also harmful because, by raising the level of evidential support required for knowledge, it can keep a rational agent in a state of inquiry when she would otherwise have had knowledge.
How, then, and under which conditions ought these harms be prevented?Footnote 19 Given the subtlety of the harms that result from stakes-raising, as well as the complexity of identifying behavior that might cause them, a requirement to look out for such behavior would be much too stringent. Even minimal social interaction would, on such a requirement, become risky and tiresome. But note that our judgments would likely be different if these harms were cumulative: that is, if they occurred repeatedly over a person’s life, and intensified or changed for the worse over time. In that case, it would seem that the worse the harm, the stronger the reason to prevent it. Consider for a moment what these cumulative harms may be.
Imagine that Nilam repeatedly found herself in high-stakes situations like the one in ‘Participation—High’—situations in which, as a result of encroachment injustices, inferior choices were unduly made rational for her, and she was forced, rationally speaking, to remain in a state of deliberation. Given these considerations, it would not be implausible to suppose that there could be a point at which Nilam would, as a result of being repeatedly subject to these harms, begin to experience negative psychological states like anxiety or regret at much higher rates than the average person.Footnote 20 Similarly, it would not be implausible to suppose that there could be a point at which Nilam would begin to experience self-doubt so pervasive that her capacities to make judgments about the world, as well as to make practical decisions, would be undermined.Footnote 21 Indeed, Nilam may, over time, even be subject to what Kristie Dotson (2011) calls ‘testimonial smothering’: the wrong that occurs when a “speaker perceives [her] immediate audience as unwilling or unable to gain the appropriate uptake of proffered testimony” (Dotson 2011, 244). This is because, as a result of regularly being in high-stakes situations that prevent her from having knowledge, she may, over time, come to restrict her actions (e.g., by truncating her testimony) in case she is in a high-stakes situation.
Our point here is that repeated exposure to such injustices causes the loss of knowledge, which can then cause (sometimes pervasive) self-doubt. In this way, the badness of encroachment injustice aligns with the badness of the traditional understanding of epistemic injustice: repeated instances of these injustices can be worse than a single such instance. But whereas the traditional understanding holds that repeated exposure to epistemic injustice can cause self-doubt, which can then cause the loss of knowledge, our view holds that repeated exposure to epistemic injustices of the encroachment type erodes one’s knowledge, which can in turn make self-doubt rational (justified). Our view thus reveals a distinctive causal relation between repeated exposure to epistemic injustice and self-doubt.
The fact that repeated exposure to encroachment injustice can be harmful in ways that are distinct from, and arguably worse than, the ways in which isolated exposure to it can be harmful provides a strong reason for being more cautious when interacting with others, in case one’s actions risk causing these cumulative harms. Importantly, the strength of this reason does not come from the possibility of causing someone to lose knowledge as an event that singly makes inferior choices rational or singly forces her to continue deliberating. As we said above, this would be much too stringent. Rather, the strength of this reason comes from the risk that an agent takes in raising the stakes of relying on a false belief for someone who is repeatedly subject to this harm: he risks engaging in an action that intensifies the isolated harm or changes its nature for the worse, for example by making self-doubt or testimonial smothering rational.
To sum up this section, encroachment injustice occurs when prejudice-based practical considerations raise the practical stakes for someone to rely on her belief, thereby providing reasons to withhold belief. Encroachment injustice can cause some distinctive harms that can be intensified or worsened by repeated exposure. Moreover, while a general requirement that agents avoid raising the stakes for others to rely on a false belief seems too stringent, there is reason to comply with such a requirement when, in raising those stakes, an agent risks contributing to a cumulative harm.
4 Implicit bias and stereotype mutability
In Sects. 2 and 3, we pointed out that pragmatic encroachment is responsible for the erosion of knowledge just in case the stakes are raised with respect to believing falsely, but not if they are raised with respect to believing per se, or with respect to not believing at all. Since pragmatic encroachment operates only under such very specific conditions, it might be suspected that its application is relegated to a few niche cases. This suspicion would be a mistake, however. As we suggested at the outset, our framework has the potential to explain a broad range of epistemic wrongs grounded in implicit bias, specifically in cases in which the prejudiced reaction is muted under true belief. This is what we now turn to.
To understand the mutability of stereotypes, it will be helpful to take a closer look at the various ways in which stereotypes may be cognitively represented. Stereotypes are “widely held associations between a given social group and one or more attributes” (Fricker, 2017, 30): for example, ‘women are irrational’ or ‘Blacks are intellectually inferior to whites’. Research has shown that stereotypes can be cognitively represented along various dimensions, including a statistical and a salience dimension. On the statistical interpretation, the attribute is taken to be true of sufficiently many members of the target group. On the salience interpretation, the attribute is seen as a salient feature of members of the target group. A token prejudiced attitude—for example, Jason’s thinking, based on his conviction that women are intuitive, that Betty is intuitive—is ambiguous between both interpretations. Jason might think that Betty is intuitive because most women are intuitive (the statistical sense), or because ‘being intuitive’ is, in Jason’s eyes, a salient feature of women (the salience sense).
It is becoming increasingly recognized, however, that sorting stereotypes in these two ways leaves much unsaid. Specifically, it leaves open how important, or how central, a stereotype is to someone’s assessment of another person. A more adequate typology of stereotypes should thus also include some measure of a stereotype’s cognitive centrality—that is, a dimension according to which the attribute is a central feature of members of the target group.
Note that salience-statistical generalizations and centrality-based generalizations can dissociate. ‘Being yellow’, for instance, is a salient feature of taxicabs. It is nevertheless in no way central to our conception of a taxicab that it be yellow. If taxicabs were painted black tomorrow (as they are in London), not much would change in our conception of taxis. Salience, then, does not imply centrality. Furthermore, most chairs have backs and seats. Statistically speaking, there is not much difference in the distribution of both features with respect to chairs. However, of these two features, ‘having a seat’ is more central to our conception of chairs, because seats are for sitting, and not, as it were, for leaning. Statistical validity, then, does not imply centrality, either.Footnote 22
Now, thinking of a feature as more or less central to a person results in various downstream effects. One such effect is the mutability of the stereotype under further subcategorization. Features that are more central are also more difficult to mute or suppress: for example, since seats are more central to a chair than backs, this feature is much more difficult to suppress in thinking about chairs. As a result, it is more effortful to think of a chair without a seat than to think of a chair without a back.Footnote 23 Mere salience-statistical generalizations, on the other hand, are easily muted under further subcategorizations—or so empirical evidence suggests.Footnote 24
Recent studies also suggest that implicit, but not explicit, stereotypes are highly mutable. Consider a study by Govan and Williams (2004), in which participants “displayed significant anti-Black implicit bias,” yet this effect went away when the stimuli were further subcategorized into famous and well-liked Black men: for example, Michael Jordan (Govan & Williams, 2004, 17). Wittenbrink et al. (2001, 18) similarly report that implicit bias against Blacks is muted when “Black faces are presented in the background context of a church interior as opposed to an urban street corner.” On the basis of these studies, Del Pinal and Spaulding hypothesize that “most measures of implicit bias […] detect salient statistical associations” (Del Pinal & Spaulding, 2018, 17).
We review these findings in order to point to the potentially far-reaching applications of the encroachment-based model of epistemic wrongs explicated in this paper. On this model, pragmatic encroachment can explain epistemic wrongs involving prejudice only if the prejudiced reactions are active in the false belief setting but muted otherwise. If, as empirical evidence suggests, it is true that non-centrally encoded stereotypes of implicit bias are systematically mutable in the relevant ways, then our model would seem to be able to describe the epistemic wrongs of implicit bias.
Consider once again ‘Participation’. As we construed the case at the outset of the paper, the prejudice in Nilam’s classmates incites discriminatory thoughts only if Nilam is wrong about what the principle of nonmaleficence holds. When she is right, however, by our stipulation, her classmates do not have second thoughts. The empirical literature discussed above suggests that this is an empirically plausible scenario. Implicit racial stereotypes are often muted in favorable contexts: for example, stereotypes against Blacks are often muted in the context of famous and well-liked Black men. Similarly, in Nilam’s case, stereotypes about people of color being underprepared are likely to be muted in the favorable context in which she is right about what the principle of nonmaleficence holds, but are activated in the unfavorable context in which she is wrong.
All this is highly suggestive, but we do advise caution. To our knowledge, there are no studies specifically investigating whether implicit bias can be muted in situations like ‘Participation’. But although such experiments have yet to be conducted, our general framework is strong: important instances of unjust knowledge erosion can be accounted for by employing pragmatic encroachment as a theoretical tool.
5 Objections
Mikkel Gerken (2019) has argued that pragmatic encroachment cannot account for some central cases of discriminatory (i.e., prejudice-based) epistemic injustice. In this section, we argue that Gerken’s challenge is less convincing than it first may seem. The pragmatic encroachment theorist can indeed account for some central cases of discriminatory epistemic injustice, and some of the cases that concern Gerken can be explained as non-epistemic discriminatory wrongs.
Let us begin with a depiction of Gerken’s (2019) argument, at the center of which is the contrast between the following vignettes (henceforth collectively referred to as ‘Interview’):
Richie’s Interview Richie is extremely wealthy, but to practice his Spanish, he has applied for a job at a US company with a market in South America. It is not particularly important to Richie that he gets the job since he just wants to practice his Spanish and can easily find another job opportunity. At the interview, Richie is asked what the capital of Peru is. Richie has a reliable memory and correctly remembers it is Lima, although he does not remember the source of his belief.
Brooke’s Interview Brooke is extremely poor and has applied for a job at a US company with a market in South America in order to get some much-needed income. It is extremely important to Brooke that she get the job since she is in serious debt and cannot easily find another job opportunity. At the interview, Brooke is asked what the capital of Peru is. Brooke has a reliable memory and correctly remembers that it is Lima, although she does not remember the source of her belief. (Gerken, 2019, 6)Footnote 25
Gerken asks us to suppose that both Richie and Brooke reply to the interviewer’s question by asserting ‘Lima is the capital of Peru’. He then argues that if the interviewer judges Richie to know this, but judges Brooke to not know this, the interviewer commits a discriminatory epistemic injustice against Brooke: the injustice of not giving her the appropriate level of credibility given what she knows. Gerken suggests that this point is clear on a traditional account of knowledge, according to which the practical stakes of acting on a false belief do not affect Richie’s or Brooke’s knowledge. However, supporters of pragmatic encroachment are committed to the view that Brooke will, at least on some versions of the case, lose knowledge, given the practical stakes of acting on her belief. And if this is so, then in such cases the interviewer will not commit an epistemic injustice against Brooke when he regards her as a non-knower, even though Richie is equally reliable and is judged to be a knower—at least if such an injustice presupposes that the interviewer’s judgment is false. Gerken’s challenge to the pragmatic encroachment theorist, then, is to explain the discriminatory epistemic wrong that Brooke suffers, while also holding on to the idea that the raised stakes erode her knowledge.
Two points should be noted here. One is that, at least on some elaborations of ‘Interview’, pragmatic encroachment theorists can agree with Gerken that a discriminatory epistemic injustice of the kind he describes takes place. This is because while pragmatic encroachment can result in the loss of knowledge, it need not do so. In ‘Interview’, pragmatic encroachment entails that the level of evidential support required for Brooke to have knowledge is higher than Richie’s, but it does not entail that Brooke fails to possess the required level of evidential support to know the capital of Peru.
The other is that pragmatic encroachment theorists can grant that an epistemic injustice takes place even in cases in which Brooke does indeed lose knowledge, while holding that such cases involve a distributive injustice—not a discriminatory one. As we noted in Sect. 3, this could be the case if, for example, Brooke’s knowledge were eroded because poverty—and not prejudice—raised the stakes for her to act on a false belief.
Gerken’s concern, however, is with the case in which Brooke does in fact lose knowledge, since the pragmatic encroachment theorist does not seem to have the tools to claim that a discriminatory epistemic injustice nonetheless takes place. We disagree—at least with respect to some versions of the case. To see why, consider the simple version of the challenge that Gerken presents to the pragmatic encroachment theorist.
On this version of the challenge, Gerken’s point is that the pragmatic encroachment theorist cannot account for the discriminatory epistemic injustice that takes place if the interviewer judges that Brooke does not know because she is poor, and he is correct. But this is wrongheaded. The pragmatic encroachment theorist can indeed account for this wrong—at least if the interviewer judges that the reason Brooke does not know is that she is poor. Consider that, within the bounds of the example, it is easy enough to stipulate that Brooke is in a situation in which she has lost knowledge due to the high-stakes situation she is in; but it is much more difficult to conceive of situations in which the interviewer knows this fact. For instance, if the interviewer knows that more is at stake for Brooke, he also seems to have reason to suppose that she prepared much more vigorously for the interview, which, in turn, seems to give him reason to think that Brooke is more prepared than Richie. We may of course stipulate that he knows all the relevant facts, but it is less clear whether our intuitions about the case would be sensitive to such fine-grained stipulations. Our point, then, is that in most intuitively plausible elaborations of the case, the interviewer commits a discriminatory epistemic injustice when he correctly judges that Brooke does not know, because he does not have access to the fact that she has lost knowledge due to pragmatic encroachment. In other words, the intuition that a discriminatory wrong nonetheless takes place can be explained by the wrong of not being sensitive to the right kinds of features.
Now, suppose, for the sake of argument, that the interviewer does know the relevant facts and judges that Brooke does not know because she is poor. In that case, the pragmatic encroachment theorist can also agree that a discriminatory epistemic injustice takes place if the interviewer again judges that the reason Brooke does not know is that she is poor. This is because, instead of reasoning directly from a belief that people who are poor know less than the average person, the right thing to do in such a case would seem to be to judge that Brooke might not know the answer to the question. After all, as we mentioned above, even if the stakes are raised for Brooke, this does not entail that she now knows less than Richie. Indeed, the right kind of sensitivity should lead the interviewer to reason from (1) Brooke’s poverty to (2) the stakes being raised for her (if he knows this), and from (2) to the fact that (3) she may not know. The right kind of sensitivity would thus not overlook the fact that the proximal cause for (3) is (2), and not (1). Our point here, then, is that while it does not seem discriminatory to judge that Brooke may not know because of her raised stakes, and that the stakes are raised for her because she is poor, it does seem discriminatory to judge that Brooke does not know because she is poor. Thus, if the interviewer judges that Brooke does not know because she is poor, he commits a discriminatory epistemic injustice against her.
On another, more in-depth version of his challenge to the pragmatic encroachment theorist, Gerken argues that a discriminatory epistemic injustice seems to take place even if the interviewer is sensitive to the right reasons and judges that the reason Brooke does not know is pragmatic encroachment—even, that is, if he doesn’t skip step (2) above. Gerken asks us to imagine a situation in which the interviewer reasons as follows: “[Brooke and Richie] have the same evidence. But she does not know since the stakes are high for her, and he does know since the stakes are low for him. So, I suggest we hire him” (Gerken, 2019, 14). Gerken’s point is that this line of reasoning is discriminatory, but that the pragmatic encroachment theorist is again committed to the view that it is not.
Here, we believe that, while Gerken may be right that a discriminatory wrong does take place, it is not a discriminatory epistemic wrong. In particular, if a wrong occurs, it does not occur when the interviewer judges that Brooke does not know; rather, it occurs when the interviewer decides not to hire Brooke on the basis of this judgment. The problem with Gerken’s modification of the case is that it is not a case of pragmatic encroachment. After all, when the interviewer decides not to hire Brooke because she doesn’t know, he imposes costs on Brooke because acting on her belief is risky, and not, as pragmatic encroachment would standardly require, because she would be acting on a false belief. In other words, the interviewer raises the stakes for Brooke simply because her belief could be false, and not only when it is false. But the fact that Brooke’s answer is risky and that she thereby does not know the capital of Peru does not imply that her belief is false and that she will give the wrong answer. Even if Brooke is in a high-stakes situation such that she does not know the capital of Peru, she could still give the right answer. Thus, if the interviewer’s decision not to hire Brooke is discriminatory, this cannot be because of pragmatic encroachment; raising the costs of acting on a risky belief does not result in pragmatic encroachment: only raising the costs of acting on a false belief does. Rather, if the interviewer’s decision not to hire Brooke is discriminatory, it is because when he decides not to hire her because he judges that she does not know, he prevents her from even attempting to give the right answer; he frustrates her chance to give the right answer. This is why, were the interviewer to decide not to hire Brooke only once she gave a false answer, his decision would not seem discriminatory—at least by the stipulations of the example.
6 Conclusion
This paper appeals to the pragmatic encroachment thesis to illuminate the epistemic injustice that occurs when the prejudice that a person faces in response to her actions erodes her knowledge. In accounting for this intuitive wrong from the point of view of pragmatic encroachment, we do not mean to imply that traditional epistemologists do not have the tools to account for this wrong. Our goal is simply to show that there is an empirically supported, novel case to be made in favor of pragmatic encroachment that withstands crucial challenges.
In closing, it’s worth noting that philosophers have recently begun to explore how various moral considerations can inform the practical stakes relevant for pragmatic encroachment to occur. The bulk of this strand of research has focused on ways in which the negative moral consequences that an agent might cause in acting on a false belief influence what she knows.Footnote 26 If it would, for instance, be disrespectful towards you to treat you as a waiter when you’re really just a regular restaurant guest, then it takes more evidence to know that you are a waiter.Footnote 27 In contrast, the ways in which the negative moral consequences that an agent may experience when acting on a false belief may influence what she knows have remained underexplored. As we have argued, the threat of being met with prejudice when relying on a false belief, and not just the threat of causing prejudice to occur, can wrongfully raise the stakes for someone to rely on her belief. In a nutshell, the threat of being met with prejudice when relying on a false belief can wrong someone into believing on eggshells.Footnote 28
Notes
See Dotson (2018) for a discussion of the difference between injustice resulting from a failure of due knowledge attribution, and injustice resulting from undermining knowledge possession.
Prejudice can raise the stakes in a number of ways. The prejudiced judgment that Nilam is underprepared might subject Nilam to unjustified treatment by the teacher and her fellow students. In this sense, being wrong about what the principle of nonmaleficence holds would have unreasonable personal costs for her. But it may also consolidate Nilam’s classmates’ prejudice against people of color in general, which could have unreasonable societal costs. In line with this latter thought, it’s worth noting that this is a case in which the relevant features of Nilam that ground the prejudice with which she is met are easily muted when the relevant belief is true—or so empirical evidence seems to suggest. Put differently, the relevant stereotype—that people of color are underprepared—is activated only if Nilam is wrong about the principle of nonmaleficence, thereby seeming to confirm that she is underprepared. The framework we defend here thus has the potential to explain a broad range of epistemic wrongs grounded in implicit bias, specifically in cases in which the prejudiced reaction is muted under true belief. We will return to this in Sect. 4.
Importantly, the point is not that Nilam has a reason against acting on her belief. Rather, the idea is that the risk of being wrong can affect whether she has enough evidence to know the relevant proposition. We return to this point in Sect. 3.
To be clear, we do not mean to suggest that pragmatic encroachment is not a complex and contentious philosophical position; this would be misleading. Our only goal is to motivate the thesis in sufficient detail to defend its plausibility and to illuminate why its consequences and applications deserve to be explored.
Thus, we do not intend to claim that raised stakes are necessary for encroachment to occur. Furthermore, we would like to acknowledge the possibility that the rising stakes might not be sufficient to raise the epistemic requirements for knowledge. Some cases, as Gerken (2017, 134ff) points out, are high-stakes because of the raised costs of not believing. Thus, if it is risky not to believe a proposition, then, presumably, less evidence is required to believe in such a high-stakes situation. Concomitantly, in such cases, it becomes harder to believe if the stakes go down. For more on this point, see Anderson and Hawthorne (2019, 241ff).
The Knowledge-Action Principle has been criticized for a variety of reasons. Some (e.g., Brown, 2008, 171; Littlejohn, 2009, 469; Gerken, 2011, 535–536; Gerken, 2017; Neta, 2009, 687) argue that knowledge is not required for the actionable use of one’s beliefs, if the reason fails to be knowledge for Gettier-type reasons. Moreover, it is sometimes argued that knowledge often sets the bar for the warranted use of one’s beliefs in practical reasoning too high (e.g., Gerken 2017, 141). Some philosophers argue that the allegedly important role of knowledge in justifying actions (e.g., “You shouldn’t have done that, because you didn’t know what would happen”) can equally be explained in terms of certainty (Brown, 2008; Gerken, 2011, 532), evidence, or good reasons (Gerken 2017, 138). Furthermore, Roeber (2018, Sect. 3) argues that pragmatic encroachment is subject to plain and simple counterexamples.
Several subtly different versions of this principle exist. This formulation, gleaned from Ross and Schroeder (2014), is meant to be representative. See, e.g., Fantl and McGrath (2002, 2007) and Hawthorne and Stanley (2008). Hawthorne and Stanley (2008) argue that one ought not to treat p as a reason for acting unless one knows that p. Williamson (2005, 231) holds that one ought not to treat p as a premise in practical reasoning unless one knows that p.
Hawthorne and Stanley (2008) define ‘p-dependence’ as follows: “Let us say that a choice between options x1, …, xn is p dependent iff the most preferable of x1, …, xn conditional on the proposition that p is not the same as the most preferable of x1, …, xn conditional on the proposition that not-p” (580).
One way for an action to fail to be p-dependent is for there to be no opt-out option. This highlights that pragmatic encroachment should further require that the negative stakes of believing falsely not be associated with some opt-out option. Consider again ‘Almond Butter’, in which the opt-out option was the tuna sandwich. Notice that Sarah’s acting on the belief that the sandwich on the right is almond butter is risky precisely because the tuna sandwich is the safe option. Had the tuna sandwich not been a safe option—e.g., had Sarah’s nephew Algernon also had an equally dire tuna allergy—it would not have been risky for Sarah to serve what she thought was the almond butter sandwich, even in the high-stakes case (assuming that serving nothing is not an option).
Similarly, if pragmatic encroachment applies to belief per se, then we could simply be bribed into not believing (See Worsnip 2020, 2).
Owens (2000, 24–31); Fantl and McGrath (2002, 81–83); Schroeder (2012a, 266, 268) defend this kind of view. Contrary to this idea, pragmatists believe that pragmatic considerations are good reasons for belief. More specifically, robust pragmatists believe that all and only pragmatic reasons constitute reasons for belief (e.g., Maguire & Woods, 2020; Rinard, 2019), while pluralists believe that there are two domains—pragmatic and evidential—that can stand in conflict with one another (Reisner, 2008).
Weatherson (2005) pursues a similar strategy, arguing that pragmatic encroachment works by way of raising the epistemic requirements for belief. Summarizing his view, Weatherson states that “pragmatic encroachment […] is best thought of as a pragmatic condition on belief” (Weatherson, 2005, 417; italics in the original).
There are other alternatives to answering the question of how pragmatic factors might make a difference in whether or not someone knows, without conceiving of them as reasons for belief. But we need not go into detail on the specifics of such an account; our point is simply that it is not implausible to conceive of pragmatic considerations as epistemic considerations that are independent of evidential reasons.
More specifically, according to Fricker (2007), a testimonial injustice occurs when prejudice causes an undue failure to appreciate what someone knows, and a hermeneutical injustice occurs when structural prejudice puts someone at a disadvantage in terms of making a social experience intelligible to herself or others. An example of the former might be when, as a result of gender prejudice, a hearer does not believe a woman’s recounting of a soccer play. An example of the latter might be when a woman is sexually abused by her husband before the background social and legal conditions make it clear that marriage does not annul sexual abuse. It’s also worth noting that, contra Fricker, testimonial injustice can occur not only in the form of credibility deficit, but also in the form of credibility excess (see Medina, 2013).
Coady (2010) argues that there are two concepts of epistemic injustice: one is Fricker’s—that is, discriminatory epistemic injustice such as testimonial and hermeneutical injustice; the other is the injustice that occurs when someone is “unjustly denied what they have a right to know” (Coady, 2010, 112). Moreover, in part following Medina (2011), who writes of credibility “disparities” (Medina 2011, 23), Coady (2017) argues, contra Fricker (2007), that testimonial and hermeneutical injustice can also be distributive, since the former can involve an unjust distribution of credibility, and the latter can involve an unjust distribution of hermeneutical power. It’s worth noting here that the loss of knowledge we try to capture in this paper is similar to Coady’s (2010) take on the kind of injustice that occurs when one lacks knowledge as a result of an unjust distribution of knowledge; however, unlike Coady, we do not conceive of this as a loss of knowledge that one has a ‘right’ to have, but as the loss of knowledge one would have had had the stakes of relying on a false belief not been raised by prejudice.
Notice that it is possible to raise the stakes of falsely believing without undermining an agent’s knowledge, so not every case in which the costs of falsely believing are raised will be a case of encroachment injustice. (We return to this point in Sect. 5.) Thus, this case is importantly different from one in which Nilam knows that ‘The principle of nonmaleficence holds that one should do no harm,’ yet chooses not to raise her hand for fear that doing so will lead to certain negative outcomes.
To be sure, it is beyond the scope of this paper to give a general account of the conditions under which raising the stakes for someone to act on a false belief is impermissible. After all, such an account would require taking a stance on separate issues, such as which reactions to mistrusting beliefs are justified, which can be a daunting task. In what follows, then, our goal is not to settle general questions about the wrongness of stakes-raising; we seek only to offer some key insights.
Interestingly, this point may help explain a wrong in some microaggressions. Although an appropriate analysis of this suggestion is well beyond the scope of this paper, consider that microaggression theorists tend to agree that a central harm of microaggression is that it is attributionally ambiguous—in other words, it is difficult to ascertain whether the microaggression perpetrator intended to communicate the hurtful message that she communicated. See, e.g., Fatima (2017), Friedlaender (2018), Sue et al. (2007), Sue and Spanierman (2020), and Rini (2020). Our account of encroachment injustice suggests that this attributional ambiguity may stem not from self-doubt, but from the raised stakes of believing falsely, which may force an agent, rationally speaking, to remain in a state of inquiry when she would otherwise have had knowledge.
There are two lines of research exploring the cognitive implementation of the centrality dimension of stereotyped judgment. First, it has been argued that a stereotype can be central because someone thinks of the attribution as essential to a member of a group (e.g., Haslanger, 2011). Second, it has been argued that a stereotype can be more or less explanatorily central to one’s view of a person (Del Pinal & Spaulding, 2018; Del Pinal et al., 2017). See Westra (2019) for a discussion of the immutability of stereotyped character traits.
As Del Pinal and Spaulding put it, many “empirical studies have corroborated, that the salient-statistical features associated with concepts are easily dropped when those concepts enter certain combinatorial environments (Barsalou, 1987; Fodor, 1998; Fodor & Lepore, 2002; Hampton, 2006; Rey, 1983)” (Del Pinal & Spaulding, 2018, p. 105).
While we are using Gerken’s own cases in this discussion, it’s important to note that they draw on features (e.g., poverty) that, it is generally agreed, ought to be bracketed in legal contexts. Moreover, Interview does not, at least on a natural reading of the case, respect the constraints of pragmatic encroachment as we’ve described it. In a typical interview situation, the raised negative stakes are associated both with asserting falsely and with not answering at all. In other words, in Gerken’s case, Brooke finds herself in a forced-choice scenario: opting out and simply not answering is not a feasible option. Compare this case to the case described at the outset of this paper. Unlike in Gerken’s case, if Nilam remains silent and decides not to participate in class, then she won’t face the same battery of negative consequences. Silence, in her case, is the safe, albeit suboptimal, option.
See Bolinger (2020) for a review of this debate.
See, for example, Basu (2019a).
The phrase “believing on eggshells” is a modified version of Basu’s (2019b) phrase “walking on epistemic eggshells,” which, as she notes, comes from Amy Kind, Adrienne Martin, or Michele Moody-Adams.
References
Anderson, C., & Hawthorne, J. (2019). Knowledge, practical adequacy, and stakes. Oxford Studies in Epistemology, 6, 234.
Bach, K. (2005). The emperor’s new “knows.” In G. Preyer & G. Peter (Eds.), Contextualism in Philosophy: Knowledge, Meaning, and Truth (pp. 51–89). Oxford University Press.
Bach, K. (2008). Applying pragmatics to epistemology. Philosophical Issues, 18, 68–88.
Barsalou, L. W. (1987). The instability of graded structure: Implications for the nature of concepts. In U. Neisser (Ed.), Concepts and conceptual development: Ecological and intellectual factors in categorization. Cambridge University Press.
Basu, R. (2019a). What we epistemically owe to each other. Philosophical Studies, 176, 915–931. https://doi.org/10.1007/s11098-018-1219-z
Basu, R. (2019b). Radical moral encroachment: The moral stakes of racist beliefs. Philosophical Issues, 29, 9–23. https://doi.org/10.1111/phis.12137
Bolinger, R. J. (2020). Varieties of moral encroachment. Philosophical Perspectives. https://doi.org/10.1111/phpe.12124
Brown, J. (2008). Subject-sensitive invariantism and the knowledge norm for practical reasoning. Noûs, 42(2), 167–189.
Coady, D. (2010). Two concepts of epistemic injustice. Episteme, 7(2), 101–113.
Coady, D. (2017). Epistemic injustice as distributive injustice. In I. J. Kidd, J. Medina, & G. Pohlhaus Jr. (Eds.), The Routledge Handbook of Epistemic Injustice (pp. 61–68). United Kingdom: Routledge.
Del Pinal, G., Madva, A., & Reuter, K. (2017). Stereotypes, conceptual centrality and gender bias: An empirical investigation. Ratio, 30(4), 384–410.
Del Pinal, G., & Spaulding, S. (2018). Conceptual centrality and implicit bias. Mind and Language, 33(1), 95–111.
Dotson, K. (2011). Tracking epistemic violence, tracking practices of silencing. Hypatia, 26(2), 236–257. https://doi.org/10.1111/j.1527-2001.2011.01177.x
Dotson, K. (2018). Distinguishing knowledge possession and knowledge attribution: The difference metaphilosophy makes. Philosophy and Phenomenological Research, 96(2), 475–482.
Fantl, J., & McGrath, M. (2002). Evidence, pragmatics, and justification. The Philosophical Review, 111(1), 67–94.
Fantl, J., & McGrath, M. (2007). On pragmatic encroachment in epistemology. Philosophy and Phenomenological Research, 75(3), 558–589.
Fantl, J., & McGrath, M. (2010). Knowledge in an Uncertain World. Oxford University Press.
Fantl, J., & McGrath, M. (2012). Arguing for shifty epistemology. In J. Brown & M. Gerken (Eds.), Knowledge Ascriptions (pp. 55–74). Oxford University Press.
Fatima, S. (2017). On the edge of knowing: Microaggression and epistemic uncertainty as a woman of color. In K. Cole & H. Hassel (Eds.), Surviving Sexism in Academia: Strategies for Feminist Leadership (pp. 147–154). Routledge.
Fodor, J. (1998). Concepts: Where cognitive science went wrong. Oxford University Press.
Fodor, J., & Lepore, E. (2002). The compositionality papers. Oxford University Press.
Freeman, L., & Stewart, H. (2018). Microaggressions in clinical medicine. Kennedy Institute of Ethics Journal, 28, 411–449.
Fricker, M. (2007). Epistemic injustice: Power and the ethics of knowing. Oxford University Press.
Fricker, M. (2013). Epistemic justice as a condition of political freedom? Synthese, 190(7), 1317–1332.
Fricker, M. (2017). Evolving concepts of epistemic injustice. In I. J. Kidd, J. Medina, & G. Pohlhaus Jr. (Eds.), The Routledge Handbook of Epistemic Injustice. Routledge.
Friedlaender, C. (2018). On microaggressions: Cumulative harm and individual responsibility. Hypatia, 33, 5–21.
Friedman, J. (2019). Checking again. Philosophical Issues, 29(1), 84–96.
Ganson, D. (2008). Evidentialism and pragmatic constraints on outright belief. Philosophical Studies, 139(3), 441–458.
Gerken, M. (2011). Warrant and action. Synthese, 178(3), 529–547.
Gerken, M. (2017). On folk epistemology: How we think and talk about knowledge. Oxford University Press.
Gerken, M. (2019). Pragmatic encroachment and the challenge from epistemic injustice. Philosophers’ Imprint, 19(15), 1–19.
Govan, C. L., & Williams, K. D. (2004). Changing the affective valence of the stimulus items influences the IAT by re-defining the category labels. Journal of Experimental Social Psychology, 40(3), 357–365.
Hampton, J. A. (1987). Inheritance of attributes in natural concept conjunctions. Memory and Cognition, 15(1), 55–71.
Haslanger, S. (2011). Ideology, generics, and common ground. In C. Witt (Ed.), Feminist Metaphysics. Springer.
Hawthorne, J. (2004). Knowledge and lotteries. Oxford University Press.
Hawthorne, J., & Stanley, J. (2008). Knowledge and action. Journal of Philosophy, 105(10), 571–590.
Littlejohn, C. (2009). Must we act only on what we know? The Journal of Philosophy, 106(8), 463–473.
Maguire, B., & Woods, J. (2020). The game of belief. Philosophical Review, 129(2), 211–249.
McDowell, J. (1995). Knowledge and the internal. Philosophy and Phenomenological Research, 55, 877–893.
Medina, J. (2011). The relevance of credibility excess in a proportional view of epistemic injustice: Differential epistemic authority and the social imaginary. Social Epistemology, 25(1), 15–35. https://doi.org/10.1080/02691728.2010.534568
Medina, J. (2013). The epistemology of resistance: Gender and racial oppression, epistemic injustice, and the social imagination. Oxford University Press.
Moss, S. (2018). Moral encroachment. Proceedings of the Aristotelian Society, 118(2), 177–205.
Neta, R. (2009). Treating something as a reason for action. Noûs, 43(4), 684–699. https://doi.org/10.1111/j.1468-0068.2009.00724.x
Owens, D. (2000). Reason without freedom: The problem of epistemic normativity. London: Routledge.
Perez Gomez, J. (2020). Verbal microaggressions as hyper-implicatures. The Journal of Political Philosophy. https://doi.org/10.1111/jopp.12243
Reisner, A. (2008). Weighing pragmatic and evidential reasons for belief. Philosophical Studies, 138(1), 17–27.
Rey, G. (1983). Concepts and stereotypes. Cognition, 15(1), 237–262.
Rinard, S. (2019). Equal treatment for belief. Philosophical Studies, 176(7), 1923–1950.
Rini, R. (2020). The ethics of microaggression. Routledge.
Roeber, B. (2018). The pragmatic encroachment debate. Noûs, 52(1), 171–195.
Ross, J., & Schroeder, M. (2014). Belief, credence and pragmatic encroachment. Philosophy and Phenomenological Research, 88(2), 259–288.
Schroeder, M. (2012). Stakes, withholding, and pragmatic encroachment on knowledge. Philosophical Studies, 160(2), 265–285.
Sloman, S. A., Love, B. C., & Ahn, W.-K. (1998). Feature centrality and conceptual coherence. Cognitive Science, 22(2), 189–228.
Stanley, J. (2005). Knowledge and practical interests. Oxford University Press.
Stanley, J. (2015). How propaganda works. Princeton University Press.
Sue, D. W., Capodilupo, C. M., Torino, G. C., Bucceri, J. M., Holder, A., Nadal, K. L., & Esquilin, M. (2007). Racial microaggressions in everyday life: Implications for clinical practice. American Psychologist, 62, 271–286.
Sue, D. W., & Spanierman, L. B. (2020). Microaggressions in everyday life. Wiley.
Weatherson, B. (2005). Can we do without pragmatic encroachment? Philosophical Perspectives, 19(1), 417–443.
Westra, E. (2019). Stereotypes, theory of mind, and the action–prediction hierarchy. Synthese, 196(7), 2821–2846.
Williamson, T. (2000). Knowledge and its limits. Oxford University Press.
Williamson, T. (2005). Contextualism, subject-sensitive invariantism and knowledge of knowledge. The Philosophical Quarterly, 55, 213–235. https://doi.org/10.1111/j.0031-8094.2005.00396.x
Wittenbrink, B., Judd, C. M., & Park, B. (2001). Spontaneous prejudice in context: Variability in automatically activated attitudes. Journal of Personality and Social Psychology, 81(5), 815.
Worsnip, A. (2020). Can pragmatists be moderate? Philosophy and Phenomenological Research. https://doi.org/10.1111/phpr.12673
Acknowledgements
We are immensely grateful to Eric Bayruns García, Zuzanna Gnatek, Thomas Hodgson, Chang Liu, Dan Moller, Francisca Perez, Nicholas Rimell, Arthur Schipper, Aiden Woodcock, and an anonymous referee for this journal for comments on drafts of this paper.