Review of Philosophy and Psychology, Volume 2, Issue 1, pp 121–136

Explaining Away Intuitions About Traits: Why Virtue Ethics Seems Plausible (Even if it Isn’t)


    • CUNY Graduate Center, Program in Philosophy

DOI: 10.1007/s13164-010-0045-9

Cite this article as:
Alfano, M. Rev. Phil. Psych. (2011) 2: 121. doi:10.1007/s13164-010-0045-9


This article addresses the question whether we can know on the basis of folk intuitions that we have character traits. I answer in the negative, arguing that on any of the primary theories of knowledge, our intuitions about traits do not amount to knowledge. For instance, because we would attribute traits to one another regardless of whether we actually possessed such metaphysically robust dispositions, Nozickian sensitivity theory disqualifies our intuitions about traits from being knowledge. Yet we do think we know that we have traits, so I am advancing an error theory, which means that I owe an account of why we fall into error. Why do we feel so comfortable navigating the language of traits if we lack knowledge of them? To answer this question, I refer to a slew of heuristics and biases. Some, like the fundamental attribution error, the false consensus effect, and the power of construal, pertain directly to trait attributions. Others are more general cognitive heuristics and biases whose relevance to trait attributions requires explanation and can be classed under the headings of input heuristics and biases and processing heuristics and biases. Input heuristics and biases include selection bias, availability bias, availability cascade, and anchoring. Processing heuristics and biases include disregard of base rates, disregard of regression to the mean, and confirmation bias.

Isn’t it pretty to think so?

~ The Sun Also Rises, Hemingway

1 Virtue Ethics and the Situationist Challenge

According to most versions of virtue ethics, an agent’s primary ethical goal is to cultivate the virtues. The fully virtuous person possesses all the virtues, and so is disposed to do the appropriate thing in all circumstances. Such a disposition has counterfactual heft: the virtuous person gives (money, time, attention, whatever is needed) when presented with the opportunity, and she would give were she presented with a similar opportunity; she acts confidently when something valuable is threatened, and she would so act were something valuable threatened; she speaks the truth, and she would speak the truth even if it were unpleasant or personally detrimental. This metaphysically robust disposition underwrites the prediction and explanation of behavior. It is therefore a presupposition of trait-based theories of virtue that moral agents have—or at least could have—counterfactual-supporting dispositions.

At first blush this presupposition is uncontentious. How could one deny that people are—or at least could be—just, sincere, compassionate, chaste, considerate, trustworthy, courteous, diligent, faithful, tactful, valorous, and humble? We seem to understand ourselves and one another in terms of such character traits. Williams (1985, p. 10, n. 7) goes so far as to say that objecting to the notion of character amounts to “an objection to ethical thought itself rather than to one way of conducting it.” Yet skeptics such as Doris (1998, 2002) and Harman (1999, 2000, 2001, 2003, 2006) argue that situational influences swamp dispositional ones, rendering them predictively and explanatorily impotent. And in both science and philosophy, it is but a single step from such impotence to the dustbin.

We can precisify the skeptics’ argument in the following way. If someone possesses a character trait like a virtue, she is disposed to behave in trait-relevant ways in both actual and counterfactual circumstances. However, exceedingly few people—even the seemingly virtuous—would behave in virtue-relevant ways in both actual and counterfactual circumstances. Seemingly (and normatively) irrelevant situational features like ambient smells, ambient sounds, and degree of hurry overpower whatever feeble dispositions inhere in people’s moral psychology, making them passive pawns of forces they themselves typically do not recognize or consider.

Are individual dispositions really so frail? A firestorm followed the publication of Doris’s and Harman’s arguments that virtue ethics is empirically inadequate. If they are right, virtue ethics is in dire straits: it cannot reasonably recommend that people acquire the virtues if they are not possible properties of “creatures like us” (Flanagan 1991, p. 18).

In this article, I address a slightly different question: Can we know on the basis of folk intuitions that we have traits? I answer in the negative. If we assess knowledge-claims about virtues in light of the primary theories of knowledge (reliabilism, sensitivity, justified true belief, virtue epistemology, and Lewisian evidentialism), we see that they fail across the board. According to reliabilism (Goldman 1975), an agent a knows that p just in case a acquired the true belief that p via a reliable process. But (or so I shall argue) folk intuitions about virtues are unreliable; they tend to yield false positives. According to sensitivity (Nozick 1981, p. 177), a knows that p only if a would not believe p were p false. But again (or so I shall argue), folk intuitions would often lead us to attribute virtues when they are not there. Since intuitions would lead us to believe in the existence of traits despite the absence of those traits, sensitivity theorists must reject claims to knowledge of virtue based on folk intuitions. According to the justified true belief account, a knows that p if and only if a truly believes that p and a’s belief is justified. Once more (or so I shall argue), folk intuitions do not provide justification, so any beliefs based merely on them do not rise to the level of knowledge. According to virtue epistemology (Sosa 1991, p. 227; Zagzebski 1996), a knows that p just in case a truly believes that p because a has exercised some intellectual virtues (such as perceptiveness, conscientiousness, and open-mindedness) and violated none of them. Again (or so I shall argue), our folk reasoning about virtues is systematically prone to lack of perceptiveness, intellectual unscrupulousness, and narrow-mindedness. If this is right, virtue epistemology should reject claims to know about the moral virtues on the basis of folk intuition. 
Finally, on David Lewis’s (1996) theory of knowledge, a knows that p if and only if p holds in every possibility left uneliminated by a’s evidence. But for the last time (or so I shall argue), folk intuitions alone generally do not eliminate the possibility that someone merely seems virtuous but is not so.

Hence, if my arguments are correct, whatever theory of knowledge one subscribes to, one should reject knowledge-claims about virtues that are backed only by folk intuitions.

Thus, where the previous generation of empirically-informed critics of virtue ethics have taken the strong position that people do not possess traits, I argue for the weaker position that we cannot know on the basis of folk intuitions whether we possess traits. This weaker view leaves it open whether we have traits. It also leaves it open whether we can know about traits on the basis of some other sort of evidence (such as scientific statistical methodologies). However, if my arguments are sound, a whole class of rebuttals to the situationist challenge is swept away simultaneously. We need pay no heed to any attempt to defend virtue ethics that appeals only to intuitions about character traits (for instance, Annas 2003) rather than scientific evidence of the sort favored by psychologists, behavioral economists, and (more recently) experimental philosophers (for instance, Miller 2003).

2 Explaining Away Intuitions About Traits

Situationism is an error theory. It claims that people are systematically mistaken in attributing virtues. Error theories owe us an account of why we fall into error, if only to comfort us in our intellectual iniquity. Why do we have so many trait terms and feel so comfortable navigating the language of traits if actual correlations between traits and individual actions (typically <0.30, as Mischel 1968 persuasively argues)1 are undetectable without the use of sophisticated statistical methodologies (Jennings et al. 1982)?
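To put the cited ceiling in perspective (a standard statistical gloss, not part of the original text): the square of a correlation coefficient gives the proportion of variance in the outcome that the predictor accounts for, so a trait–behavior correlation at the 0.30 ceiling leaves more than nine-tenths of behavioral variance unexplained:

```latex
% Variance in behavior explained at the typical ceiling trait-behavior correlation
r = 0.30 \quad\Longrightarrow\quad r^{2} = 0.09 \quad\text{(less than 10\% of the variance)}
```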

To answer this question, situationists invoke a veritable pantheon of gods of ignorance and error. Some, like the fundamental attribution error, the false consensus effect, and the power of construal, pertain directly to trait attributions. Others are more general cognitive heuristics and biases, whose relevance to trait attributions requires explanation. These more general heuristics and biases can be classed under the headings of input heuristics and biases and processing heuristics and biases. Input heuristics and biases include selection bias, availability bias, availability cascade, and anchoring. Processing heuristics and biases include disregard of base rates, disregard of regression to the mean, and confirmation bias.

Only three of these phenomena have been addressed in the philosophy literature to date: the power of construal, the fundamental attribution error, and confirmation bias. Somewhat oddly, defenders of virtue ethics, such as Sreenivasan (2002, p. 58), have invoked the power of construal, but I aim to show that they are mistaken in thinking that it supports their position. Harman has briefly discussed the fundamental attribution error and confirmation bias (1999, p. 325), but otherwise little attention has been paid to explaining why we might be so prone to attributing character traits despite scant evidence and even in the face of contrary evidence.

This article aims to fill that void by canvassing the relevant phenomena and showing how each helps explain away intuitions about the existence and robustness of traits like virtues. In the balance of this article, I discuss the attribution errors, which are peculiar to our folk intuitions about traits. Next, I turn to the input heuristics and biases, which—though they apply more broadly than just to reasoning about traits—entail further errors in our judgments about trait-possession. After that, I discuss the processing heuristics and biases, which again apply more broadly than the attribution errors but are nevertheless relevant to intuitions about traits. I elaborate all three types of biases in a somewhat dogmatic fashion; it is beyond the scope of this article to argue in each case that the bias really exists. Instead, I explain what the biases are, cite the relevant authorities, and draw inferences from them in order to show their relevance to the dialectic about virtue ethics. At the end of the article, I evaluate knowledge-claims about virtues in light of these attribution biases, input heuristics and biases, and processing heuristics and biases. Every widely accepted theory of knowledge must reject such knowledge-claims when they are based merely on folk intuitions.

2.1 Attribution Errors

2.1.1 The Fundamental Attribution Error

According to Ross (1977, p. 183; see also Ross and Nisbett 1991; Yzerbyt et al. 2001), people are prone to the provocatively named “fundamental attribution error,” a tendency to attribute most or all observed behavior to internal, dispositional factors rather than external, situational ones. When we observe others reading a script, for instance, we tend to assume they believe what they are saying, even when we are told in advance that they did not prepare the script and are merely reading it because they were asked to (Jones and Harris 1967, p. 22). We seemingly cannot help making what Uleman et al. (1996, p. 211) call “spontaneous trait inferences,” which occur “when attending to another person’s behavior produces a trait inference in the absence of our explicit intention to infer traits or form an impression of that person.”

Why exactly people exhibit this spontaneous reaction has not been fully answered. The problem may be partly perceptual, stemming from the Gestalt phenomenon: we focus on the figure rather than the ground. So when we observe people acting in a situation, the people themselves are our focus and hence the only factor we consider in explaining their behavior (Harman 1999, p. 325; Ross and Nisbett 1991, p. 139). In any event, though, it is a robust, well-documented phenomenon with few known counter-instances.

The obverse side of the fundamental attribution error supposedly has to do with people’s attributions regarding their own behavior. According to Jones and Nisbett (1971, p. 93), the unique breakdown of the fundamental attribution error occurs when we explain what we ourselves have done: instead of underemphasizing the influence of environmental factors, we overemphasize them. Especially when the outcome is negative, we attribute our actions to external factors. This bias seems to tell against situationism, since it suggests that we can recognize the power of situations at least in some cases. However, the existence of such an actor-observer bias has recently come in for trenchant criticism from Malle (2006), whose meta-analysis of three decades’ worth of data fails to demonstrate a consistent actor-observer asymmetry.2 Malle’s meta-analysis only strengthens the case for the fundamental attribution error. Whereas Jones & Nisbett had argued that it admitted of certain exceptions at least in first-personal cases, Malle shows their exceptionalism to be ungrounded.

Other theorists have challenged the ubiquity of the fundamental attribution error. Sabini et al. (2001, p. 3), for instance, claim that experiments like the Milgram (1974) studies of obedience demonstrate not so much the unexpected power of external, situational forces but the unexpected power of our motivation to avoid embarrassment and save face. According to Sabini et al., participants in Milgram’s experiments went along with the experimenter’s instructions to shock innocent victims because they were unwilling to accuse the experimenter of making a mistake or doing something immoral. However, this explanation does not save the phenomena. In one variation of the obedience experiment, a second experimenter played the role of the victim and begged to be released from the electrodes. Participants in this version of the study had to disagree with one of the experimenters, so a desire to avoid embarrassment and save face would give them no preference for obedience to one experimenter over the other. Nevertheless, in this condition 65% of the participants were maximally obedient to the experimenter in authority, shocking the other experimenter with what they took to be 450 volts three times in a row while he slumped over unconscious (Milgram 1974, p. 95).

Another challenge to the fundamental attribution error is due to Nisbett (2003, p. 125), who found that both Westerners and East Asians commit the error, but that Asians are less prone to making it when they are put in a position to empathize with the target of the attribution. This surprising fact does not detract, however, from the importance of the fundamental attribution error. After all, Nisbett did not find that East Asians avoid the error altogether, just that they more easily avoid and correct the error when the situational factor is made salient to them through empathy.

Despite the challenges, then, the evidence strongly suggests that people who use folk intuitions as their evidential base are all too ready to attribute traits on the basis of insufficient and even contrary evidence. Since virtues are traits, it follows that people would attribute virtues without sufficient evidence and even in the face of contrary evidence. And, as I attempt to show in section 3, this fact undermines knowledge-claims about virtues based on such evidence—as do the other biases catalogued in this section.

2.1.2 The False Consensus Effect

Ross et al. (1977; see also Fields and Schuman 1976) were the first to discuss the false consensus effect, which occurs when people assume that their own opinion, desire, or other internal state is representative of the opinions, desires, etc. of a group or population to which they belong. In particular, this effect engenders a tendency to think that one’s own choice in a particular dilemma is the norm, which leads to false trait attributions to others. For instance, Ross and colleagues asked passers-by on the street to carry a sign inscribed “Eat at Joe’s!” Those who agreed to help estimated that 62% of others would do the same, while those who declined thought others would decline 67% of the time. Unless 29% of people would both accept and decline the solicitation, something is fishy about these numbers.
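The arithmetic behind the complaint, spelled out (a worked check, not in the original): the two groups’ estimates of how others would respond are jointly inconsistent, since they sum to more than 100%.

```latex
62\% + 67\% = 129\% > 100\%
\quad\Longrightarrow\quad
129\% - 100\% = 29\% \text{ of people would have to both accept and decline.}
```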

The false consensus effect helps explain away intuitions about dispositions in the following way: once we make the fundamental attribution error and attribute a trait, we assume everyone else attributes the trait too, thereby reinforcing our own belief. If we assume that others explain Ignacio’s tax-evasion as expressing his dishonesty, we are more likely to say so to them, thereby triggering an availability cascade (defined below) about Ignacio, who might be an otherwise upstanding citizen. If, however, we think that tax evasion is patriotic, we may praise him as a civic hero, triggering a different cascade.

Like the fundamental attribution error, the false consensus effect has come in for criticism, and once again I believe this criticism is erroneous. Dawes and Mulford (1996, p. 202), for instance, point out that treating oneself as a sample of size n = 1 means that one should in fact use one’s own opinion as a guide to the opinions of others, just as one should use an arbitrary other person’s opinion as a guide. Nevertheless, the problem is that a sample of size n = 1 is a scant evidential base. Dawes & Mulford do not dispute the claim that people do treat their own case as evidence; they merely point out that it is not entirely irrational to do so. But just as one should be wary of making precise predictions based on any single datapoint, so too should one be wary of making such predictions when that datapoint is oneself. The amount of irrationality indicated by the false consensus effect is therefore in question. It would be fantastic to see a follow-up study in which participants were asked not only what percentage of other people would accede to the request but also how confident they were in their prediction; high levels of confidence would tell in favor of the false consensus effect, whereas low levels would suggest that Dawes & Mulford are right.

Assuming the evidence favors the false consensus effect, we may explain its relevance to the dispute about virtues by pointing out that, since people tend to make such rash inferences, they are prone to over-attributing traits. They could reason as follows: “Well, I helped these strange fellows advertise for Joe’s Bar, so almost anyone would do the same. I guess most people are helpful!” Such an inference, however, is at best dubious. Statistical methodologies with sufficiently large and diverse samples should instead be used to assess such general claims, a point to which I return below.

2.1.3 The Power of Construal

Mischel and Shoda (1995, p. 258; see Ross and Nisbett 1991, pp. 59–89) argue that people’s subjective construals of their situations account for a lot of variability in behavior. Ambiguous environmental cues require interpretation. Was John’s laugh light-hearted or sadistic? Is that person running down the street panicked or just late for a meeting? Are those cries coming from the apartment next door a plea for help from a battered wife or just the television blaring? What one person sees as an emergency calling for immediate action, another sees as a nuisance or at least as unclear.

The power of construal is relevant to the dispute about traits in at least two ways. First, if someone attributes one trait (say, helpfulness) to Jack and another (say, thirst for recognition) to Jill, he will interpret the same objective behavior (going up the hill to fetch a pail of water) differently depending on which person does it. Jack is trying to help out, but Jill just wants to be praised. Jill could not care less about our welfare, but Jack wants to make sure we stay hydrated. Once a trait has been attributed, all ambiguous evidence is interpreted as if it flowed from the trait, an instance of the confirmation bias discussed below.

Second, studies of the existence and robustness of traits use trait-eliciting conditions as independent variable and trait-relevant behavior as dependent variable. Really, though, trait-eliciting conditions should be divided into objective and subjective components. Someone may be in a situation where helping is the appropriate response but not see it that way; conversely, someone may be in a situation where helping would be inappropriate but believe she ought to help. If I have decided that Jack is helpful and am confronted with a case where he seems unhelpful, I may appeal to construal to defuse the inconsistency. Jack really is helpful, I tell myself; he just did not realize that his help was wanted. Similarly, if I have decided that Jill has a thirst for recognition and am confronted with a case where she seems modest, I may appeal to construal once again to defuse the inconsistency. Jill really does thirst for recognition, I assure myself; she just did not realize that she could have earned praise by acting differently.

I have been unable to locate studies that demonstrate such an appeal to construal to massage apparent inconsistencies in trait attributions. If this is because there really are no such studies, it suggests that experimental philosophers have a job to do. Nevertheless, insightful novelists have picked up on the phenomenon, as Angel (1957/1984, p. 72) by Elizabeth Taylor (the English writer, not the American actress of the same name) demonstrates. In one particularly striking passage from this novel, Angel, the daughter of the widow Mrs. Deverell, has recently come into money. Angel whisks her mother away from the grocery store she had run for years to live at Alderhurst, a sumptuous mansion. Mrs. Deverell still tries to keep in touch with her working-class friends but they seem changed. I quote at length:

Either they put out their best china and thought twice before they said anything, or they were defiantly informal—“You’ll have to take us as you find us”—and would persist in making remarks like “I don’t suppose you ever have bloaters [non-gourmet fish] up at Alderhurst” or “Pardon the apron, but there’s no servants here to polish the grate.” In each case, they were watching her for signs of grandeur or condescension. She fell into little traps they laid and then they were able to report to the neighbors. “It hasn’t taken her long to start putting on side.” She had to be especially careful to recognize everyone she met, and walked up the street with an expression of anxiety which was misinterpreted as disdain. […] All of their kindnesses were remembered and brooded over; any past kindness Mrs. Deverell had done—and they were many—only served to underline the change which had come over her.

In this passage, Mrs. Deverell’s former friends systematically misconstrue her actions now that she has become rich. In the novel, if any character is virtuous, it is she; none of the others recognizes this fact, however, because of the power of construal. By contrast, several of the other characters gain undeserved reputations for their virtue because of converse misconstrual. Such all-too-human tendencies vitiate our ability to detect virtue and vice, leading to unreliable attribution of traits.

The power of construal raises a further question whether tests of traits have attempted to correlate the wrong variables. Which of the three—objective conditions, subjective construals, behavior—really matter?

Both virtue ethicists and situationists agree on the importance of subjective states, and both also agree on the ultimate importance of behavior. Causally impotent virtues are not worthy of the name. As Adams (2006, p. 119) says, “surely a disposition to honest behavior is at least necessary, if not sufficient, for a virtue of honesty.” As long as virtue ethics maintains that there is a right thing (or a range of right things) to do for a given person in a given context, the precise details of the causal path from objective stimulus to behavior are in a way irrelevant. In the end, if one fails to do the right thing, one is not fully virtuous.3 In her defense of virtue ethics against the situationist challenge, Annas (2003, pp. 26–27) argues that a virtue is a disposition to act and make choices, not just to behave; she stresses that “the agent’s practical reasoning is essential.” Yet she herself in articulating a theory of virtue (1993, p. 43) claims that the “virtuous person is,” among other things, “the person who does in fact do the morally right thing.” Presumably she is right that virtues are not mere behavioral dispositions, but that is because they are more, not less. Introducing the intervening variable of construal between objective stimulus and behavior just gives a fuller account of how people can fail to react virtuously; it does not save virtues from empirical critique. Thus, the ultimate objects of correlation remain objective conditions and behavior, but subjective construals form part of the theory connecting the two, much as genotype is part of the theory connecting the phenotype of parents with the phenotype of offspring. It is striking that many defenders of virtue ethics appeal to the power of construal as though it supported their defense against the situationist critique (Sreenivasan 2002, p. 58). Even the most recent monographs that explicitly address this debate (Russell 2009; Snow 2008) say so. 
As the foregoing discussion shows, however, the power of construal interferes with subjects’ ability to respond consistently in a virtuous way to all virtue-eliciting circumstances.

2.2 Input Heuristics and Biases

Section 2.1 covered biases that attach specifically to attribution of traits. In this section, I discuss several input heuristics and biases that—though they apply more broadly than attribution biases proper—are relevant to folk intuitions about traits. In particular, these biases have to do with the inputs to our cognitive processing. The selection bias, for instance, leads us to make inferences from non-representative data; the availability bias leads us to make inferences based on stereotypes and easily recalled examples rather than all the data; availability cascades redouble the problems caused by the availability bias; and anchoring occurs when first impressions have an overly strong effect on our understanding of and future inferences about the world around us.

2.2.1 Selection Bias

Though they argue that people are not cross-situationally consistent in the way that talk of traits leads us to believe, situationists usually also admit that, when socially embedded in day-to-day life, our attributions of traits lead to correct predictions of behavior. Like an unsound argument with a true conclusion, our reasoning processes begin with false premises about the existence and robustness of traits and derive true predictions about others’ behavior. As Ross and Nisbett (1991, p. 7) put it, “biased processing of evidence plays an important role in perceptions of consistency,” yet “the predictability of everyday life is, for the most part, real.”4 We use “fast and frugal heuristics” (Gigerenzer 2007, p. 158; see also Gigerenzer et al. 2000; Kahneman and Tversky 1973; Sunstein 2005) to make inferences to conclusions that are (usually approximately) true in the environments we typically inhabit. Such heuristically powered inferences are not deductively valid arguments from true premises, and they can go wildly wrong if used in circumstances for which they were not adapted, but they do an adequate job of guiding us in everyday life.

Think of the old yarn about the King of Siam’s refusal to believe that water could freeze: he made an inference from the behavior of water in typical conditions to the behavior of water in atypical (for him) conditions. We can reconstruct his reasoning as relying on the premise that all water is liquid or vapor. While this premise is false, it led to only true inferences until the King contradicted his Dutch guest. Analogously, we make inferences based on traits about how people will behave in counterfactual scenarios (Edgar is honest; if anyone were honest he would not lie; so Edgar will not lie). Even if the trait-invoking premises of such inferences are false, they will not lead us astray if the right social and other environmental influences conspire to make our conclusion true.5

2.2.2 Availability Bias

When people use the availability heuristic, they take the first few examples of a type that come to mind as emblematic of the whole population. This process can lead to surprisingly accurate conclusions (Gigerenzer 2007, p. 28), but it can also lead to preposterously inaccurate guesses (Tversky and Kahneman 1973, p. 241). We remember the one time Maria acted benevolently and forget all the times when she failed to show supererogatory kindness, leading us to infer that she must be a benevolent person. Since extremely virtuous and vicious actions are more memorable than ordinary actions, they will typically be the ones we remember when we consider whether someone possesses a trait, leading to over-attribution of both virtues and vices.

In her defense of virtue ethics, Kupperman (2001, p. 243) mentions word-of-mouth testimony that “the one student who, when the Milgram experiment was performed at Princeton, walked out at the start was also the person who in Viet Nam blew the whistle on the My Lai massacre.” Such tales are comforting: perhaps a few people really are compassionate in all kinds of circumstances, whether the battlefield or the lab. But while anecdotes about character may be soothing, it should be clear that anecdotal evidence is at best skewed and biased, as well as prone to misinterpretation. We should focus on the fact that most experimental subjects are easily swayed by normatively irrelevant factors, not the fact that one person might be virtuous. The preponderance of evidence suggests that in any particular case, someone’s behavior would be better explained by appeal to situational factors and not by appeal to traits.

2.2.3 Availability Cascade

An availability cascade occurs when the availability bias goes viral; Kuran and Sunstein (1999, p. 683) define a cascade as “a self-reinforcing process of collective belief formation by which an expressed perception triggers a chain reaction that gives the perception of increasing plausibility through its rising availability in public discourse.” In the United States, for instance, it is commonly believed (and, importantly, commonly asserted) that many of Christopher Columbus’s contemporaries thought the Earth was flat prior to his famous voyage of 1492. This falsehood has been repeated so many times that people assume it must be true. Similarly, it is commonly believed (and repeated) that Albert Einstein failed at least one of his mathematics classes in grade school. Because this howler is repeated so frequently, it seems plausible to many.

This phenomenon is relevant to our belief in traits because cascades about a person’s personality are easily triggered. In fact, we already have a word for them: gossip. This is the fate suffered by poor Mrs. Deverell in the passage from Angel quoted above. People spread the word that she has become haughty; that lie is repeated and repeated; and eventually it comes to seem plausible, despite the fact that not a single person has witnessed an instance of her acting haughtily.

2.2.4 Anchoring

Anchoring is the bias of relying solely or too heavily on a single, early-acquired piece of information when making a decision. In a study discussed further below, Ariely et al. (2003) found that when people wrote down the last two digits of their social security number and then made bids for goods such as wine and chocolate, those who had written higher numbers submitted bids that were 60% to 120% greater than those submitted by people who had written low numbers. Of course, they themselves did not believe that their social security number influenced their bids, but that just goes to show that the deliverances of introspection should be taken with about three barrels of salt.

Anchoring is relevant to folk intuitions about virtues for the following reason. Often, the first thing we notice about a person serves as our anchor, so if Lenny does something vicious when one first meets him, one will expect him to act viciously in the future, and if Manny does something virtuous when one first meets him, one will expect him to act virtuously in the future. Paired with the fundamental attribution error and the power of construal, the anchoring bias quickly leads to over-attribution of virtues and vices.

2.3 Processing Heuristics and Biases

In addition to the input biases discussed in the previous section, certain processing biases plague our reasoning in general and our reasoning about traits in particular. This section discusses people’s tendency to disregard base rates and regression to the mean, as well as confirmation bias—the tendency to seek, interpret, and infer evidence that supports one’s previously held beliefs.

2.3.1 Disregard of Base Rates

According to Bayes’s Law, the conditional probability of a hypothesis given some evidence (the “posterior”) is equal to the product of the prior probability of the hypothesis and the conditional probability of the evidence given the hypothesis (the “likelihood”), divided by the prior probability of the evidence:
$$ P(h \mid e) = \frac{P(h) \cdot P(e \mid h)}{P(e)} $$
The base rate fallacy occurs when someone neglects the prior probabilities, thereby over- or under-estimating posterior probability. People are notoriously bad at probabilistic reasoning, committing the base rate fallacy even after they have been trained to recognize and avoid it.

This fallacy is relevant to intuitions about dispositions because it leads us to jump from a small number of observed trait-relevant actions to the attribution of the full-fledged trait, as when we use the availability heuristic or fall prey to the false consensus effect.

Let h be the proposition Karla is honest and e be the proposition Karla does not lie at time t. Presumably P(e | h) ≈ 1, so if the priors P(h) and P(e) are ignored (i.e., treated as if they were 1), the posterior P(h | e) also comes out at roughly 1. Conditionalizing on a single example of truthfulness thus leads to an inflated estimate of Karla’s honesty. If instead one took a more realistic view of the priors and incorporated them into one’s conditionalizing, Karla would not come off looking nearly so virtuous. Let us grant that people tell the truth more often than not, setting the prior probability of e at .6. Let us also grant that exceedingly few people are fully honest—a claim that even most virtue ethicists would endorse—setting the prior probability of h at .001. Conditionalizing on just this one piece of evidence would then yield the following estimate of the probability that Karla is honest:
$$ P(h \mid e) = \frac{P(h) \cdot P(e \mid h)}{P(e)} = \frac{.001 \cdot 1}{.6} \approx .0017 $$
As it turns out, a little knowledge really is a dangerous thing. By ignoring base rates, we set ourselves on a course to over-attribute both virtues and vices. A single instance of trait-relevant behavior can be exaggerated out of all proportion when base rates are ignored; where a Bayesian updater would still estimate someone’s virtue at a very low level of certainty, someone who disregarded base rates would see it as highly likely.
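The contrast between the Bayesian updater and the base-rate neglecter can be checked with a short sketch; the function name is my own, and the numbers are simply those of the Karla example above.

```python
def posterior(prior_h, prior_e, likelihood):
    """Bayes's Law: P(h|e) = P(h) * P(e|h) / P(e)."""
    return prior_h * likelihood / prior_e

# Karla example from the text: P(h) = .001, P(e) = .6, P(e|h) = 1
print(round(posterior(0.001, 0.6, 1.0), 4))  # prints 0.0017

# Base-rate neglect: treating both priors as if they were 1
print(posterior(1.0, 1.0, 1.0))              # prints 1.0
```

One instance of truth-telling moves a careful updater from .001 to only .0017, while the base-rate neglecter leaps straight to certainty.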

2.3.2 Disregard of Regression to the Mean

Regression to the mean is a statistical phenomenon: when a variable is observed at several standard deviations from the mean initially, it is quite likely to be observed closer to the mean on subsequent occasions. Disregarding regression to the mean, then, is the fallacy of assuming an extreme variable will remain extreme on later observations.

It has been noticed, for example, that flight instructors praise pilots for successful maneuvers and criticize them for unsuccessful ones. Regression to the mean should lead one to expect that, regardless of the praise or criticism, pilots would perform worse after a successful maneuver and better after an unsuccessful one. Failure to recognize regression to the mean, however, may lead flight instructors to attribute improvement to their criticism and worse flying to their praise, then conclude that praise makes pilots lazy, whereas criticism helps them to fly better.6
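The flight-instructor pattern can be illustrated with a minimal simulation (my own toy example, not drawn from the article or the Navy data): every maneuver score is random noise around the same underlying skill, yet scores that follow an unusually good maneuver are, on average, worse, with no praise or criticism involved.

```python
import random

random.seed(0)
MEAN_SKILL = 50.0  # every pilot has the same underlying skill

# 10,000 pairs of consecutive maneuver scores: skill plus random noise
pairs = [(random.gauss(MEAN_SKILL, 10), random.gauss(MEAN_SKILL, 10))
         for _ in range(10_000)]

# Look only at flights whose first maneuver was unusually good (> 65)
good_starts = [(a, b) for a, b in pairs if a > 65]
avg_first = sum(a for a, _ in good_starts) / len(good_starts)
avg_second = sum(b for _, b in good_starts) / len(good_starts)

# The follow-up scores regress toward the mean of 50
print(avg_first > 65 > avg_second)  # prints True
```

An instructor watching only these selected flights would see decline after every praised maneuver, exactly the pattern the text describes, even though nothing about the praise caused it.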

Similarly, if we see someone act justly, recognizing regression to the mean should lead us to expect her to behave with less than perfect justice on the next occasion, but—because of anchoring and disregard of regression to the mean—we instead expect her to continue exemplifying the ideal of justice. And conversely, if we witness someone acting selfishly, recognizing regression to the mean should lead us to expect him to behave less selfishly in the future, but—because of anchoring and disregard of regression to the mean—we instead expect him to continue expressing the trait of selfishness.

2.3.3 Confirmation Bias

Confirmation bias is the tendency to search for, interpret, and remember information in a way that confirms one’s beliefs. The bias has a long pedigree, having been identified (though not under its current name) by Sir Francis Bacon (1620, p. 79), who said:

The human understanding when it has once adopted an opinion [...] draws all things else to support and agree with it. And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects or despises, or else by some distinction sets aside or rejects.

Darwin (2009, p. 44) in his autobiography remarks that after years of working as a scientist, he adopted the following “golden rule”:

whenever a published fact, a new observation or thought came across me, which was opposed to my general results, to make a memorandum of it without fail and at once; for I had found by experience that such facts and thoughts were far more apt to escape from the memory than favourable ones.7

More recently, contemporary psychology has provided experimental evidence for the confirmation bias and its relation to trait ascription. Mischel and Peake (1982, p. 750) suggest that anecdotal evidence is likely to be biased (or misconstrued) because instances of prototypic trait-relevant behavior are given too much weight in assessments. Once we decide (perhaps because of the fundamental attribution error) that someone is cowardly or temperate or conscientious, all our further observations are guided and colored by that decision. This is precisely the phenomenon dramatized so well in the passage from Taylor’s Angel quoted above.

3 The Inadequacy of Knowledge-claims Backed by Folk Intuitions

With so many biases stacked against us, it should come as no surprise that we think people possess character traits. The attribution biases lead us to attribute traits without sufficient evidence. Input heuristics and biases lead us to ignore relevant evidence against the existence of traits and overemphasize what little information we have that supports the existence of traits. Processing heuristics and biases explain how we persist in making foundationless attributions in the face of contrary evidence.

The existence of these biases does not prove that no one has traits, nor does it demonstrate that no arguments could warrant the conclusion that people have traits. What it instead shows is that regardless of whether people have traits, folk intuitions would lead us to attribute traits to them.

According to the sensitivity theory of knowledge (Nozick 1981, p. 177), we cannot be said to know something we would think regardless of its truth-value. Those who subscribe to a sensitivity analysis of knowledge, according to which one knows that p only if one would cease to believe p if p were false, may therefore conclude that we cannot know (at least not without the aid of scientific psychology) whether we possess character traits. Even those who think sensitivity is a necessary but insufficient condition for knowledge should be dubious about whether we can know we have traits on the basis of folk intuitions.

But situationism spells trouble for knowledge-claims based merely on folk intuitions regardless of which theory of knowledge one endorses. On the reliabilist account of knowledge, one can only know what one has come to believe by a reliable process. Attribution biases and input heuristics and biases, however, undermine the supposition that our folk intuitions are sufficiently reliable in the case of virtue attributions.

Likewise, according to the justified true belief theory of knowledge, one’s beliefs rise to the level of knowledge only when they are justified. There are at least as many theories of justification as there are theorists of justification, but all of them would be hard-pressed to accept folk intuitions about traits as contributing justification to one’s beliefs. Such intuitions are prone to a host of biases and rely on a host of heuristics, as documented above. Thus, if knowledge is justified true belief, we cannot know on the basis of folk intuitions that someone has a virtue.

For the virtue epistemologist, knowledge is an admirable state because it is arrived at through the exercise of the intellectual virtues, such as conscientiousness and open-mindedness. However, given our tendencies to fall into the confirmation bias and ignore evidence by employing heuristics, it would appear that we do not exercise these intellectual virtues when forming beliefs about traits. Being conscientious means seeking out all the relevant evidence; those subject to the confirmation bias do not. Being open-minded means being willing to revise one's views in light of new evidence; those who use heuristics are not. Virtue epistemologists should therefore reject claims to know about the moral virtues on the basis of folk intuitions.

Finally, on David Lewis’s (1996) theory of knowledge, a knows that p if and only if p holds in every possibility left uneliminated by a’s evidence. As the fundamental attribution error shows, however, we do not believe just in those traits that our evidence leaves uneliminated, but in a host of other traits as well.

Evidently, regardless of which theory of knowledge one subscribes to, one should not accept knowledge-claims about virtues that are grounded merely in folk intuitions. If we want a more sensitive index of whether we have traits, we will need to use the scientific methodologies of psychology and behavioral economics.8 Thus, while my arguments in this article are primarily negative, they have a positive programmatic upshot: we need more and better interdisciplinary work by philosophers, psychologists, and economists.


See also Mischel and Peake (1982). Epstein (1983), a personality psychologist, admits that predicting particular behaviors on the basis of trait variables is “usually hopeless.” Fleeson (2001, p. 1013), an interactionist, likewise endorses the 0.30 ceiling.


For more on this controversy, see Malle et al. (2007).


See Blum (1994), Driver (2001), McDowell (1979), and Rosati (1995).

Doris (1998, p. 508) puts it even more strongly:

[S]ituationism is not embarrassed by the considerable behavioral regularity we do observe: because the preponderance of our life circumstances may involve a relatively structured range of situations, behavioral patterns are not, for the most part, haphazard.


See Ross and Nisbett (1991, p. 19) and Merritt (2000).


Kahneman and Tversky (1973, p. 251) were the first to use this example, but they had only anecdotal evidence. More recently, Dorsey-Palmeteer and Smith (unpublished) have corroborated Kahneman and Tversky’s story with hard data from US Navy flight training.


See Popper (2002, p. 45) on the related tendency of scientists to interpret all data in light of their current theory and therefore find confirmation everywhere (and disconfirmation nowhere).


It should be noted here that many psychologists, such as Fleeson (2001), do believe in traits, and not merely on the basis of folk intuitions. It is beyond the scope of this article to assess the success of their arguments and the extent to which those arguments apply to virtues (which are a distinctive subspecies of traits individuated not merely causally but by their characteristic reasons).


Copyright information

© Springer Science+Business Media B.V. 2010