The thought experiment presented above is designed to assess intuitions that are central to people’s beliefs about moral objectivism. Moral objectivism, as we will use the term, is the view that some moral statements are true or false independent of what anyone thinks about the contents of those statements (Mackie, 1977). In simple terms, while some people think morality is relative to what individuals or cultures believe about moral issues, moral objectivists think that some things are just clearly right or wrong regardless of one’s situation, culture, or values. Debates about moral objectivism have been central to ethics for thousands of years and remain so today. As will come as no surprise to the reader by now, there is persistent debate about whether moral objectivism is true. These disagreements depend, at least in part, on the intuitions that we express and take as evidence about cases illustrating some aspect of moral objectivism. Could personality be related to these fundamental intuitions and the corresponding debates about moral objectivism?

Taking an even broader perspective, the focus of this chapter will be on ethics, personality, and morally relevant behavior. This is the last chapter in which we provide a detailed review of empirical findings on the relations between personality and philosophical intuitions before moving on to some related theoretical, philosophical, and practical implications of these empirical findings. Because ethics is such a broad field, we will limit the scope of our review to a handful of fundamental, theoretically pivotal issues with broad ramifications.

Specifically, we will focus on research demonstrating that personality predicts intuitions relevant to meta-ethics, first-order ethics, and applied ethics, as well as actual morally relevant behaviors and outcomes. As a result, the evidence presented in this chapter spans broader areas of ethics rather than focusing narrowly on just a few philosophical issues. This breadth may give the impression that the relations between ethical intuitions and personality are more fragmented and less thoroughly investigated, and hence that the evidence is in some ways less convincing than the findings presented in the past two chapters. We agree this is a noteworthy difference for many reasons (e.g., replication in science can be valuable and necessary). Nevertheless, we hope that the relative lack of intensive focus in this chapter is compensated for by evidence of the considerable breadth of the associations between personality and intuitions about ethics. In any event, the data in this chapter are consistent with our central position that personality traits are often robustly and systematically linked with a number of philosophically relevant judgments. We are also happy to admit that there is still plenty of work left to be done on these and many other issues, and we hope that others will continue to explore the relations discussed here and throughout this book.

Personality Predicts Meta-Ethics

Meta-ethics is one prominent area in ethics. Meta-ethics largely deals with questions about ethics rather than attempting to determine correct substantive theories about morally right or wrong actions. For example, if you have moral objectivist tendencies, you probably thought that either John or Fred was right in the scenario presented at the beginning of this chapter. According to the moral objectivist, if needless suffering is bad, it is bad regardless of what anyone thinks about that suffering. Needless suffering is bad even if nobody can understand or think about the suffering. Many contemporary philosophers think that some form of moral objectivism is true (Shafer-Landau, 2003; M. Smith, 1995). Not only do some philosophers think that moral objectivism is true, they also think that moral objectivism is supported by and deeply entrenched in everyday thought about morality. A belief in moral objectivism has been argued to be essential to moral cognition, the regulation of interpersonal relationships, and the prevention of moral nihilism (i.e., the belief that ultimately nothing is morally right or wrong) (Lycan, 1986; Mackie, 1977). If people were to give up their belief in moral objectivism, it is argued that life would lose the deeper meaning, satisfaction, and purpose it once had (C. Wright, 1992). As with free will, some have argued that if we find that moral discourse is deeply flawed in its commitment to moral objectivism, we should leave people to their mistaken beliefs, because correcting those erroneous beliefs would have unwanted and dire consequences (Joyce, 2001).

All of this assumes that people have a belief in moral objectivism. However, empirical data suggest a substantial number of people have non-objectivist intuitions. Nichols (2004a) found that many people expressed non-objectivist intuitions about a canonical moral violation (i.e., harming another person just for fun). Theoretically, out of all kinds of moral violations, harming another person for no good reason should have a strong claim to objective truth. According to moral objectivism, if hitting another person just for fun is morally wrong (or right), it is simply wrong (or right). It is not possible for hitting another person for fun to be morally right for some people and morally wrong for other people, everything else being equal. As such, moral objectivist intuitions can be operationalized by assessing whether one thinks that it is possible for two people in a moral disagreement to both be correct.

Moral objectivism is another philosophically complicated notion (like determinism, side effects, and intentional action). As illustrated in previous chapters, theorists often create scenarios that illustrate the central features of philosophically complicated concepts to help non-experts understand those features. In this case, theorists have created scenarios that capture key elements of moral objectivism in order to test folk intuitions. Take another look at the scenario you read at the beginning of this chapter. We’ll call this scenario Moral. Moral is one scenario that theorists have created to test objectivist intuitions. If one responds that either John or Fred, but not both, is correct, then one expresses objectivist-friendly intuitions (responding #1 or #2 below). However, if one thinks that John and Fred are both correct, then one expresses non-objectivist-friendly intuitions (responding #3 below).

Nichols (2004a) gave participants Moral and asked them to indicate which of the following best characterizes the nature of the disagreement:

  1. It is okay to hit people just because you feel like it, so John is right and Fred is wrong.

  2. It is not okay to hit people just because you feel like it, so Fred is right and John is wrong.

  3. There is no fact of the matter about unqualified claims like “It’s okay to hit people just because you feel like it.” Different cultures believe different things, and it is not absolutely true or false that it’s okay to hit people just because you feel like it.

Forty-three percent of participants gave the non-objectivist answer (answer #3).Footnote 1 Nichols found this general pattern of non-objectivist intuitions across a number of different scenarios. These results suggest that a sizable percentage of people appear to have non-objectivist intuitions about some canonical moral violation (see also Sarkissian, Park, Tien, Wright, and Knobe (2011)).

In Nichols’s experiments, there was a substantial non-objectivist minority. Other research suggests that non-objectivists are more likely than objectivists to engage in creative problem solving when presented with a puzzle (Goodwin & Darley, 2006) and are more accepting of alternative viewpoints (J. C. Wright, Cullum, & Schwab, 2008). Creative problem solving and acceptance of alternative viewpoints are tendencies typical of the personality trait openness to experience. Compared to others, people who are open to experience tend to be (a) more receptive to a variety of different experiences, (b) less likely to reason in accordance with accepted societal standards, and (c) more individualistic (John & Srivastava, 1999). It stands to reason, then, that those who are more open to experience may be more open to the possibility that there are no objectively true or false moral statements (Feltz & Cokely, 2008).

The same basic strategy linking personality to philosophically relevant intuitions was used again here. Participants were given a description that is meant to capture the relevant aspects of moral objectivism. In this case, participants were undergraduates in a lower-level philosophy class recruited from a large state university. Those participants received Moral. Following Nichols (2004a), participants were asked to respond using one of the options 1–3 listed above. Those who responded (1) or (2) were operationalized as objectivists and those who responded (3) were operationalized as non-objectivists. Then participants completed the Ten Item Personality Inventory (Gosling et al., 2003). In this study, we were also interested in, and wanted to statistically control for, other psychological factors that might be related to moral objectivism. So, participants completed (a) the Cognitive Reflection Test (CRT) (Frederick, 2005), (b) a questionnaire about the number of philosophy classes completed, and (c) a self-report life satisfaction instrument (Diener, Emmons, Larsen, & Griffin, 1985).

The majority of participants (N = 79, 69%) gave the non-objectivist answer to Moral (non-objectivist responses, option 3 above, were coded as 1; objectivist responses, option 1 or 2, were coded as 0). As predicted, those who were more open to experience were more likely to respond as non-objectivists, r(109) = .32, p = .001. Judgments of moral objectivism were unrelated to all other personality traits (i.e., extraversion, conscientiousness, agreeableness, and emotional stability), sex, philosophical training, and reflective decision making (ps > .7) (see Table 4.1).Footnote 2 A planned hierarchical regression with all of the aforementioned variables predicted moral objectivism, F (10, 100) = 1.96, p = .04, R2 = .16 (full model). Controlling for all other factors, openness to experience accounted for unique variance, F (1, 100) = 9.64, p = .002, R2change = .08.Footnote 3
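To make the analytic step concrete, here is a minimal sketch in Python, using simulated data rather than the study’s dataset, of how a dichotomously coded judgment can be correlated with a continuous trait score; the sample size, variable names, and simulated response process are illustrative assumptions only. With a 0/1 variable, the point-biserial correlation is numerically identical to the Pearson r reported above.

```python
# Illustrative sketch only: simulated data, not the study's dataset.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 111                                     # hypothetical sample size
openness = rng.normal(5.0, 1.1, size=n)     # hypothetical TIPI openness scores (1-7)

# Simulate responses so that higher openness makes a non-objectivist answer
# (option 3, coded 1) more likely; real data would come from participants' choices.
p_non_objectivist = 1 / (1 + np.exp(-(openness - 5.0)))
objectivism_code = rng.binomial(1, p_non_objectivist)

# Point-biserial correlation between the 0/1 coding and the trait score
r, p_value = stats.pointbiserialr(objectivism_code, openness)
print(f"r({n - 2}) = {r:.2f}, p = {p_value:.3f}")
```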

Table 4.1 Intercorrelations for main variables. Females were coded as 0 and males as 1

Given what we know about personality traits, it seemed likely that the relation between personality and non-objectivist intuitions could be predictably manipulated, just as intuitions about intentional action and free will were. We reasoned that we might be able to predict non-objectivist intuitions with a different personality trait by changing relevant aspects of the scenario. The action in Moral is supposed to be a canonical moral violation that involves an unjustified harm to another individual. However, there are other violations that do not involve harming anyone else. For example, some research suggests that some people are willing to say that disgusting actions that do not harm anyone else are morally wrong. A number of these types of actions are offered by Haidt, Koller, and Dias (1993). One action involves a man who buys a frozen chicken, takes it home, has sex with it, and then eats it. Arguably this action does not harm the man or anyone else (e.g., the chicken is already dead, and the man does it in complete privacy). When Haidt et al. (1993) gave participants the Chicken scenario, about 65% of people responded that it would be OK if countries differed with respect to customs about having sex with dead, defrosted chickens. However, nearly everyone in their study thought that having sex with a dead chicken was disgusting. This result suggests that many people think there is no fact of the matter about whether it is OK to have sex with a dead chicken (although testing objectivist intuitions was not the main goal of Haidt et al.’s study).

Given the socially abnormal nature of having sex with a dead chicken, we hypothesized that those who were more socially minded would also be more bothered by the disgusting, bizarre action. As already discussed, extraverts tend to be more socially minded, tend to have relatively less emotional regulation, are more motivated to engage in social activities, and encode and recall socially relevant information differently. An action that essentially involves a socially abnormal, disgusting behavior seemed likely to exert a specific influence on extraverts as compared to non-extraverts. We predicted that extraverts would be more likely than introverts to think that there is a moral fact of the matter about having sex with a dead chicken.

We tested whether changing the nature of the action to a disgusting, yet harmless, act would alter the relation of personality to objectivist intuitions (Feltz & Cokely, 2008). In this case, all the participants were recruited from a psychology department’s undergraduate student participant pool. Participants were presented with the following scenario that is a hybrid of scenarios used by Nichols (2004a) and Haidt et al. (1993):

Harmful chicken: John and Fred are members of different cultures. They are in an argument about a newspaper article describing a man, Barney, who bought a frozen chicken, took it home, defrosted it, had sex with it, and then ate it. The article notes that doctors interviewed said there was nothing medically dangerous about having sex with and then eating the chicken (for example, salmonella is not transmitted via sex and the chicken was very well cooked). John says, “It’s okay to have sex with a chicken and then eat it just because you feel like it,” and Fred says, “No, it is not okay to have sex with a chicken and then eat it just because you feel like it.” John then says, “Look you are wrong. Everyone I know agrees that it’s okay to do that.” Fred responds, “Oh no, you are the one who is mistaken. Everyone I know agrees that it’s not okay to do that.”

Participants were asked if the action was harmful. They were also asked if the action was wrong. As predicted, extraversion was related to harm judgments, r(145) = .24, p = .003. Extraverts were also more likely to think that the action was wrong, r(146) = .23, p = .005. However, none of the other four personality traits was significantly related to harm or wrongness judgments.

Taken together, these studies provide two examples showing that two different personality traits may be predictably related to meta-ethical intuitions. While these are just some of the possible meta-ethical intuitions that one could have, these results suggest that at a minimum some meta-ethical intuitions can be predicted by some heritable personality traits. But meta-ethical intuitions are just one class of ethically relevant intuitions. Next, we turn to intuitions about first-order ethics.

Personality Predicts Bias in First-Order Ethics

First-order ethics is a prominent and distinct area of ethics. Rather than focusing on questions about ethics as meta-ethics does (e.g., moral objectivism), first-order ethics attempts to provide ethical principles or theories that help evaluate morally right or wrong actions. Two general views about first-order ethics have largely occupied theorists: consequentialism and deontology. Consequentialism is the view that the right-making feature of an action is that the action creates the most good out of all alternatives (although what exactly constitutes the “good” is a contested notion; McNaughton & Rawling, 2006). Deontology is the other traditionally dominant view. According to deontology, the right-making feature of an action is whether the action satisfies the correct set of principles or duties. Determining what the correct set of duties or principles is can be complicated and context-sensitive, but one critical difference between deontology and consequentialism is that some of the deontologist’s principles may not maximize the good (e.g., Ross, 1988).

Virtue ethics, an ancient approach to ethics emphasizing moral character, is a third approach that has made a resurgence since the last half of the twentieth century (Driver, 2001). One reason why virtue ethics has become popular again is that some theorists think that virtues can explain and inspire parts of our moral experience that are difficult for, and traditionally neglected by, consequentialists and deontologists. For example, consequentialism and deontology (or at least their simple versions) may not give adequate weight to many of the important internal dispositions of an agent (however, see Copp & Sobel, 2004; Hursthouse, 1999; Slote, 2001; Swanton, 2003). Acting courageously, for instance, seems to be a morally important feature of actions in some situations. If consequentialists want to take into account the moral relevance of acting courageously, they must do so by explaining how courage promotes the good. In other words, they have to treat courage as a non-basic good. This feature of consequentialism appears to be contrary to everyday moral thought. It seems that sometimes one can behave courageously and do the right thing even if that behavior does not maximize the good. Similar complaints can be levied against the rule-oriented nature of deontology. Virtue ethics shifts the focus of moral evaluation from maximizing the good or following rules to the motivations and character of a person.Footnote 4

While virtue theories are diverse, there are some shared common themes (see Oakley (1996) for a fuller discussion of how to characterize virtue ethics and how it differs from consequentialism and deontology). For many, “the focus is on the virtuous individual and on those inner traits, dispositions, and motives that qualify her as being virtuous” (Slote, 2001, p. 4). “It is widely agreed that virtue is a trait of character” (Copp & Sobel, 2004, p. 516). In turn, “virtue is the concept of something that makes its possessor good” (Hursthouse, 1999, p. 13). Along these lines, Hursthouse holds that “an action is right [if and only if] it is what a virtuous agent would characteristically (i.e., acting in character) do in the circumstances” (1999, p. 28). Many of the features that have been identified as right-making features of actions go beyond the correct set of moral rules or producing the best consequences and involve deep-seated dispositions to act and sets of motivational states. The following definition of virtue will suffice for our purposes:

A virtue is a good quality of character, more specifically a disposition to respond to, or acknowledge, items within its field or fields in an excellent or good enough way. (Swanton, 2003, p. 19)

Virtue is often characterized as being, or being a consequence of, a character trait (Appiah, 2008; Aristotle, Ross, & Urmson, 1980; Calder, 2007; Driver, 2001; Hurka, 2006, 2010; Hursthouse, 1999; Langton, 2001; Slote, 2001). Even if some virtue theorists do not think that virtue is a character trait, most hold that virtue is a disposition to act (Hooker, 2002). These character traits or dispositions can be displayed in many ways (i.e., dispositions to respond) toward many different things (i.e., the field of the virtue). For example, one could be compassionate to a grieving friend by offering kind words or by offering friendly hugs. The grieving friend would be in compassion’s field, and the kind words or friendly hugs would be the response. One need not be maximally compassionate to be virtuous. Rather, one must respond with a sufficient amount of compassion to be virtuous.

As will come as no surprise at this point, virtue ethicists take everyday intuitions seriously. These intuitions are often about particular cases. Indeed, a perusal of the literature reveals a number of specific references to cases and “our” intuitions about them (Slote, 2001; Driver, 2001; Hursthouse, 1999; Copp & Sobel, 2004). Just as was the case for free will and intentional action judgments, these cases are designed so that the reader has an intuition in response to them. And, just as with free will and intentional action, the intuitions about these cases are often thought to be widespread and shared by non-ethicists. To illustrate, Rosalind Hursthouse, a prominent virtue ethicist, writes that when “we” have intuitive reactions to cases, she intends the “we” “to mean ‘me and you, my readers’” (Hursthouse, 1999, p. 8). Michael Slote, also a prominent virtue ethicist, concurs and takes everyday intuitions seriously. As he writes, “intuitive considerations...have considerable weight” (Slote, 2001, p. 5). In this light, many virtue ethicists want their theories to accord with everyday thought about virtues and vices.

In the next three sections, we review some evidence about everyday thought about virtues and how that evidence can inform some debates in virtue ethics. Some people judge that actions done from virtuous motivation are morally better than actions done from duty or to maximize the good, that the consequences of character traits are a major factor in identifying those traits as virtues, and that epistemic imperfections can sometimes be necessary for the full expression of virtue. Importantly, across all of the studies in the next three sections the global personality trait emotional stability was found to predict virtue attribution.

Virtue, Deontology, and Consequentialism

As already mentioned, one of the primary motivations for virtue ethics is a dissatisfaction with the way that consequentialism and deontology handle some apparent morally relevant factors such as motivation and character. Virtue theorists often hold that virtues and vices naturally account for this part of our moral experience since motivation and character take central roles in virtuous and vicious actions. Many virtue ethicists take the pervasiveness of the attitude that virtuous motivation matters above and beyond maximizing the good or following the correct moral rule as support for their general argumentative strategy. But how pervasive is the intuition that virtuous motivations uniquely matter above and beyond consequentialist or deontological motivations? For whom are these internal states of the agent likely to matter?

Consistent with the leitmotif of this book, consulting empirical science is the most efficient way to answer whether and for whom virtuous motivations matter. Some recent experimental work suggests that, at least in some instances, actions stemming from virtuous motivation are judged to be morally better than actions stemming from consequentialist or deontological motivations (Cokely & Feltz, 2011). In one experiment, people were given a description of two people. One person acted from what is described as the correct moral rule generating the best consequences, and the other person acted only from a virtuous disposition:

Virtue: Imagine two people, John and William, work in a hospital. They both witness 10 medical errors and learn that the hospital will be investigated for every error that is reported. Each investigation requires that the hospital must close for one week. When the hospital is closed, needy patients will be turned away.

Person 1: John makes moral decisions solely based on consequences and widely accepted moral rules. John thinks long and hard to help with his decision and he calculates that the best consequences and the right moral rule dictate that he should report only 1 out of the 10 medical errors. For these reasons, and only for these reasons, John reports 1 medical error.

Person 2: William does not make moral decisions solely based on consequences or widely accepted moral rules. Rather, William has the deep-seated character traits of justice and honesty that cause him to decide to report all 10 errors. Because of these character traits, and only because of these character traits, he reports all 10 medical errors.

Participants then answered the following question: “Whose action is morally better?” Participants responded on a 7-point scale whose anchors indicated a preference for John (Likert scale value = 1) or William (Likert scale value = 7). A response of 4 would indicate that the participant did not have a preference for John or William. The overall mean response was 5.29 (SD = 1.94), indicating a reliable overall judgment that the action done from virtue (i.e., William’s action) is morally better, t(41) = 4.29, p < .001, d = 0.67.
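For readers who want to see the mechanics of this test, the following is a minimal sketch, with simulated 1–7 ratings rather than the study’s data, of a one-sample t-test against the neutral midpoint of 4, together with a one-sample Cohen’s d computed as the mean difference from the midpoint divided by the sample standard deviation; the sample size and rating distribution are our own assumptions.

```python
# Illustrative sketch only: simulated 1-7 preference ratings, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
ratings = np.clip(np.round(rng.normal(5.3, 1.9, size=42)), 1, 7)  # hypothetical ratings

t, p = stats.ttest_1samp(ratings, popmean=4)      # test against the neutral midpoint
d = (ratings.mean() - 4) / ratings.std(ddof=1)    # one-sample Cohen's d
print(f"M = {ratings.mean():.2f}, SD = {ratings.std(ddof=1):.2f}, "
      f"t({ratings.size - 1}) = {t:.2f}, p = {p:.4f}, d = {d:.2f}")
```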

While the difference in ratings of John’s and William’s motivation may be interesting and may support some of the general claims that virtue ethicists make, that result is not our primary interest. Our primary interest is whether preferences for virtuous or other motivations could be predicted by a global personality trait. There are some empirical reasons to think that personality is related to some virtue-related attributions. For example, those who report that they are more virtuous on some paradigmatic virtues also tend to be higher in the personality trait emotional stability (Cawley, Martin, & Johnson, 2000; Peterson & Seligman, 2004). People who rate themselves as low on emotional stability tend to have a wider spectrum of emotional reactivity and experience. That is, it is not simply the case that those who are more emotionally unstable are necessarily close to having emotional breakdowns or mental health disorders. Rather, they may simply tend to experience wider, more intense emotional variability than those who are higher in emotional stability. Nevertheless, people who rate themselves as highly emotionally unstable have been characterized as especially tense, anxious, nervous, moody, worrying, touchy, and fearful compared to those who are higher in emotional stability (John & Srivastava, 1999). Since anxious, moody, worrying, and fearful mental states are typically not thought to be virtuous (everything else being equal), it stands to reason that those who attribute those states to themselves would be less likely to think that they have the internal mental states that would count as virtues. However, does emotional stability predict attribution of virtue and moral worth to others?

To help answer that question, the general strategy we have been using throughout this book was applied. Participants responded to Virtue and completed a brief measure of the Big Five personality traits (the Ten Item Personality Inventory) (Gosling et al., 2003). Consistent with the relation of emotional stability with self-reports of virtue, emotional stability was related to judgments of the moral worth of the action, r (40) = .36, p = .02, indicating a preference for the action done with virtuous motivation (i.e., William’s action). No other personality traits were reliably related to judgments about the moral worth of the action (all ps > .10).Footnote 5

Even though there were good a priori reasons for thinking that emotional stability would be related to preferences for actions done from virtue, replication was still desirable. A second experiment was conducted to verify the relation of emotional stability to intuitions about the moral worth of an agent’s motivations. In the second experiment, Virtue was slightly modified so that John and William were described as presidents of two different hospitals. This change was made to ensure that participants were not responding to some idiosyncratic feature of the original scenario. The first sentence of Virtue was replaced with the following sentence: “Imagine two people, John and William, who are presidents of a hospital.” The beginning of the first sentence of the second paragraph was replaced with “John is president of hospital X and...” and the beginning of the first sentence of the third paragraph was replaced with “William is president of hospital Y and...” Participants were asked about the moral worth of the action. Again, there was a preference for the action done from virtuous motivation, M = 5.23, SD = 2.05, t(52) = 4.35, p < .001. Emotional stability was again related to judgments of the moral worth of the action, r(53) = .38, p = .004. No other personality traits were reliably related to judgments of moral worth (ps > .17).

These two experiments suggest that acting from virtue, as opposed to acting to generate the best consequences or follow the right moral rule, can influence people’s moral judgments and feelings about the moral worth of actions. Moreover, these judgments may be systematically related to the global personality trait emotional stability. Those who were emotionally stable had a consistent and stable preference for actions done from virtuous motivation compared to actions done from deontological or consequentialist reasons.

Virtue and Consequences

One recent debate in virtue ethics involves how to identify virtues and virtuous actions. Some think that the only relevant factors for determining if someone behaves virtuously are their internal motivational states. Call this view evaluational internalism. Evaluational internalism is prominently defended by Michael Slote. For Slote:

[A]n act is morally acceptable if and only if it comes from good or virtuous motivation involving benevolence or caring (about the well-being of others) or at least doesn’t come from bad or inferior motives involving malice or indifference to humanity. The emphasis on motivation will then be fundamental if the theory claims that certain forms of overall motivation are, intuitively, morally good and approvable in themselves and apart from their consequences or the possibility of grounding them in certain rules or principles. (2001, p. 38)

One motivation that Slote thinks is involved in virtuous behavior is compassion. On the evaluational internalist line, compassion is valuable independent of any consequences that acting with that motivation might have. If we only judge the motivational states of the person, then that helps dramatically reduce problematic cases of moral luck. Moral luck can be characterized as the possibility that sometimes bad motivations can produce good things and good motivations can sometimes produce bad things through no fault of the person acting (e.g., it was just random, dumb luck that those consequences came about). To illustrate, someone with a truly compassionate attitude may donate money to charity that ends up funding a cruel warlord unbeknownst to the compassionate person. Or one may maliciously slash another person’s car tires with the consequence of saving the car owner’s life (perhaps by the car owner noticing the brakes are severely and dangerously damaged when the tire is repaired). If the consequences of internal motivational states are what determine virtues, then the former internal trait should be considered a vice and the latter a virtue. But that assessment runs contrary to what many of us think about those situations. At least some of us judge the donor as being virtuous and the car tire slasher as being vicious. Judgments such as these are core to the evaluational internalist case because they are argued to capture a prominent aspect of our moral cognition, namely, that the internal motivational structures are what matter to virtue attribution and not the consequences of those motivational structures (Slote, 2001).

Others think that the only factors relevant to determining virtue are external to one’s motivational state. Evaluational externalism, defended prominently by Julia Driver (1995, 2001, 2004), holds that “the moral quality of a person’s action or character is determined by factors external to agency” (Driver, 2001, p. 68). For Driver (2001, 2004), these external factors are the actual (and not just expected) consequences that a character trait brings about. Driver identifies virtue as “a character trait that produces more good (in the actual world) than not systematically” (2001, p. 82). That these character traits bring about good effects systematically is important. Sometimes virtues can lead to bad consequences; they just cannot do so typically. The good consequences that are brought about systematically rule out most cases of moral luck. Of course, we can imagine cases of systematic moral luck; it’s just that those cases would be rare and unlikely in the world we actually inhabit. Hence, on Driver’s externalist account, character traits or dispositions that do not typically bring about good consequences are not virtues, and those that do are.

Many evaluational internalists and externalists place a heavy burden on folk intuitions (Crisp, 2010). According to these theorists, moral views are “judged partly in terms of how much ordinary thinking they preserve” (Slote, 2001, p. 13). Some think that evaluational internalism is more plausible because it “seems to have intuitive advantages over its more familiar utilitarian/consequentialist analogues” (Slote, 2001, p. 28) and it “is intuitively obvious and in need of no further moral grounding” (Slote, 2001, p. 39). But evaluational externalists say much the same thing. Driver holds that evaluational externalism is preferable because “it better captures some of our intuitions about hard moral cases” (2004, p. 72). When we “see that we have misjudged the consequences of a trait, we change our judgments of the trait’s status as a virtue” (Driver, 2004, p. 84). As Driver notes, “These observations provide a great deal of intuitive support for a consequentialist theory of virtue” (2004, p. 84).

Evaluational internalism and externalism offer clearly contrasting, empirical predictions about folk intuitions about some paradigmatic cases of virtue attribution. These predictions have been put to empirical test, and the results suggest that the consequences of internal traits are a major factor in determining whether those traits are virtues (Feltz & Cokely, 2013b). In one experiment, participants read only one of four different scenarios in which the good and bad consequences of an action were systematically varied. However, the internal traits were intentionally left unspecified.

Pat: Scientists have recently become interested in Pat and have conducted an experiment about Pat. The scientists have found the following fact about Pat. Pat has a character trait that, when exercised, results in good things 100% / 51% / 10% / 0% of the time. These good things include other people feeling good, people trusting Pat, and people feeling protected. When Pat’s character trait is exercised, bad things come about 0% / 49% / 90% / 100% of the time. These bad things include people feeling ashamed of themselves, people feeling humiliated, and people feeling unsafe.

Participants then rated their agreement with the following statement on a 6-point scale (1 = Strongly disagree, 6 = Strongly agree): “Pat’s character trait is a virtue.” Means and 95% confidence intervals are reported in Fig. 4.1.

Fig. 4.1 Mean response to “Pat’s character trait is a virtue” by the percentage of good consequences (approximate means read from the bar graph: 100% = 5.05, 51% = 3.05, 10% = 1.9, 0% = 1.7). Error bars represent 95% confidence intervals
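As an illustration of how the values plotted in Fig. 4.1 are typically derived, the sketch below uses simulated 1–6 ratings (not the study data) to compute a condition mean and a t-based 95% confidence interval for each consequence condition; the per-condition sample size and rating distribution are our own assumptions.

```python
# Illustrative sketch only: simulated 1-6 agreement ratings per consequence condition.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
approx_means = {"100%": 5.0, "51%": 3.1, "10%": 1.9, "0%": 1.7}  # rough targets from the figure

for condition, target in approx_means.items():
    ratings = np.clip(np.round(rng.normal(target, 1.3, size=48)), 1, 6)
    mean, sem = ratings.mean(), stats.sem(ratings)
    half_width = sem * stats.t.ppf(0.975, df=ratings.size - 1)   # 95% CI half-width
    print(f"{condition:>4}: M = {mean:.2f}, "
          f"95% CI [{mean - half_width:.2f}, {mean + half_width:.2f}]")
```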

As the data in Fig. 4.1 suggest, there was a strong, statistically significant linear relation between virtue attribution and the good consequences that the character trait brought about: the more good that was brought about, the more like a virtue people thought the character trait was. This pattern would not be predicted by the evaluational internalist, because it indicates that judgments are sensitive to the good produced by the character trait rather than only to internal motivational states.

These results may be of interest for adjudicating the evaluational internalist/externalist debate in virtue ethics. However, what was especially relevant to our goals was whether personality could predict virtue attributions. To test this, we first controlled for the percentage of time the character trait brought about good consequences and then entered all of the Big Five personality traits into the regression model. The resulting regression indicated that extraversion and emotional stability predicted virtue attributions to Pat, Fchange (2, 184) = 9.63, p = .004, R2change = .03.
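The following is a minimal sketch of this kind of two-step (hierarchical) regression, written in Python with simulated data rather than the authors’ dataset: step 1 enters the percentage of good consequences, step 2 adds the Big Five scores, and a nested-model F test assesses the change in explained variance. The variable names, sample size, and effect sizes are illustrative assumptions only.

```python
# Illustrative sketch only: simulated data and arbitrary effect sizes.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 190
pct_good = rng.choice([0, 10, 51, 100], size=n)            # consequence condition
traits = rng.normal(4.0, 1.2, size=(n, 5))                 # hypothetical Big Five scores
extraversion, agreeableness, conscientiousness, emo_stability, openness = traits.T
virtue = np.clip(1.5 + 0.035 * pct_good + 0.15 * emo_stability
                 + rng.normal(0, 1.0, size=n), 1, 6)        # simulated 1-6 ratings

# Step 1: consequences only; Step 2: consequences plus the five trait scores.
step1 = sm.OLS(virtue, sm.add_constant(pct_good.astype(float))).fit()
predictors = np.column_stack([pct_good, extraversion, agreeableness,
                              conscientiousness, emo_stability, openness])
step2 = sm.OLS(virtue, sm.add_constant(predictors)).fit()

f_change, p_change, df_diff = step2.compare_f_test(step1)   # nested-model F test
r2_change = step2.rsquared - step1.rsquared
print(f"F change({int(df_diff)}, {int(step2.df_resid)}) = {f_change:.2f}, "
      f"p = {p_change:.3f}, R2 change = {r2_change:.3f}")
```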

While the relation of emotional stability to attributions of virtue was expected, the relation of extraversion to these judgments was not predicted and required replication. Two follow-up studies were conducted to confirm the results and to investigate why consequences mattered more than internal dispositional traits for virtue attribution. In the first follow-up study, Pat was modified to control for a possible worry. In the original description of Pat, we did not explicitly state that Pat was unaware of the good and bad consequences of Pat’s traits. Because we did not stipulate that fact about Pat’s mental states, it could be that at least some participants thought that Pat was aware of those good or bad consequences. That awareness is an internal mental state that could be relevant to evaluating Pat’s character trait as a virtue. One way the objection might go is that if Pat had that knowledge and did not change, that would reflect a non-virtuous motivation. And, the objection continues, the non-virtuous motivation, rather than the good or bad consequences of Pat’s character trait, could be what is driving the linear effect found above.

To account for the potential effect of Pat’s knowledge of the good or bad consequences, participants received a slightly modified version of Pat. In this version, we explicitly stated that Pat did not know about the consequences by including the following sentence at the end of the Pat scenario: “Through no fault of Pat, Pat is often unaware of these consequences.”

Just as in the previous experiment, the results represented in Fig. 4.2 demonstrate a significant, strong linear relation as a function of the good or bad consequences of Pat’s character trait. More importantly for our purposes, this experiment replicated the previously estimated relation between personality and virtue attributions. Using the same analytic strategy used above, we again found that extraversion and emotional stability predicted attributions of virtue to Pat, Fchange (2, 235) = 3.45, p = .03, R2change = .01.

Fig. 4.2 Mean responses to “Pat’s character trait is a virtue” in the first follow-up study, by the percentage of good consequences (approximate means read from the bar graph: 100% = 5.1, 51% = 3.2, 10% = 2.0, 0% = 1.5). Error bars represent the 95% confidence interval

Even given the revised version of Pat, the evaluational internalist has a response that could explain the data in a way consistent with evaluational internalism. In both of the first two versions of Pat, the motivations Pat has are left completely unspecified. Given this lack of specification, it could be that at least some participants are making inferences about Pat’s motivation from the good or bad consequences. On this line of reasoning, if one produces good consequences, it is natural to think that the good consequences were produced by good motivations; when bad consequences come about, it may be natural to think that they were produced by bad motivations. If at least some participants import this information about motivation into the study, then that could explain the linear effects while remaining consistent with evaluational internalism.

To address that worry, Pat was again revised. In this revision, the motivation with which Pat acted was specified. We created two different motivations for Pat’s action. One group of participants read that Pat acted with a “desire to help others.” Helping others is typically thought to be a virtuous motivation. A separate group of participants read that Pat acted with a motivation that was “indifferent to others.” Indifference to others is taken by some to be a seriously defective and vicious motivation (Slote, 2001). One other change was made to the scenario to help head off objections: Pat was described as being “never aware of” the consequences of the trait (the previous modification of Pat only stated that Pat was often not aware). Given these changes, participants received one of the following versions of Pat.

Good/Bad motivation

Scientists have recently become interested in Pat and have conducted an experiment about Pat. The scientists have found the following fact about Pat. Pat has a character trait that makes Pat desire to help/indifferent to others. When exercised, this character trait results in good things 100% / 51% / 10% / 0% of the time. These good things include other people feeling good, people trusting Pat, and people feeling protected. When Pat’s character trait is exercised, bad things come about 0% / 49% / 90% / 100% of the time. These bad things include people feeling ashamed of themselves, people feeling humiliated, and people feeling unsafe. Through no fault of Pat, Pat is never aware of these consequences.

For both Good and Bad motivation, the data were not consistent with what would be predicted by the evaluational internalist (see Fig. 4.3). However, the central question that concerned us was whether personality could predict virtue attributions. In the Good Motivation condition, a hierarchical linear regression model indicated that emotional stability was a predictor of virtue judgments when controlling for percentage of consequences, Fchange (2, 218) = 4.78, p = .03, R2change = .01. A similar linear relation was present in Bad Motivation. A hierarchical linear regression model indicated that emotional stability was a predictor of virtue judgments in Bad Motivation when controlling for percentage of consequences, Fchange (2, 190) = 3.99, p = .05, R2change = .02. However, in neither condition was the relation between virtue attribution and extraversion found to be reliable (Fchange < 1). Consequently, this experiment replicated the relation of emotional stability but failed to replicate the relation of extraversion with judgments of virtue.

Fig. 4.3 Mean responses to “Pat’s character trait is a virtue” in the Good Motivation and Bad Motivation conditions, by the percentage of good consequences (approximate means read from the bar graph: 100%: Good = 5.1, Bad = 4.6; 51%: Good = 3.9, Bad = 3.0; 10%: Good = 2.8, Bad = 1.8; 0%: Good = 2.1, Bad = 1.75). Error bars represent 95% confidence intervals

Overall, the data reported in this section suggest that character traits that bring about better consequences are, in some theoretically interesting instances, more likely to be thought of as virtues. In all three experiments, the global personality trait emotional stability predicted who was likely to attribute virtues. These results reinforce the results from the previous section indicating that some important types of moral intuitions about virtue are predicted by some global personality traits.

Virtue and Ignorance

It is commonly thought that ignorance is bad, all else being equal. That is, ignorance indicates some kind of intellectual deficit that could be corrected. Of course, sometimes ignorance is excusable (e.g., the authors of this book are ignorant of how to build a jet engine, but we also don’t think we are particularly blameworthy for that ignorance). But sometimes ignorance is not excusable. Indeed, some ethicists have thought that ignorance is an epistemic vice and that epistemic vices can sometimes influence moral virtues. For example, Aristotle notes that “it is not possible to be good in the strict sense without practical wisdom” (1980, 1144b30–31). Hursthouse claims that “each of the virtues involves getting things right” (1999, p. 12). She goes on to claim that “the agent must know what she is doing” (1999, p. 124). Swanton holds that “moral virtue has at its core rational virtue” (2010, p. 152). Brady states that it is a “deeply-held intuition...that the virtuous person (or at least fully virtuous person) counts as having both theoretical and practical wisdom, and this involves their having knowledge of what is the case and what is of value” (2005, pp. 93–94). Schueler claims it is “hard to see how it [virtue involving ignorance] differs from stupidity or self-deception, traits which may occasionally be useful but which are not usually thought of as virtues” (1997, p. 469). Smilansky argues that “ignorance and self-deception are not a good basis for virtue” (1997, p. 106). In short, some think that epistemic vices make actions less virtuous, and the underlying character traits less of a virtue, than they would be if the epistemic vice were not present. For shorthand, we’ll call this general view about the relation between epistemic vices and moral virtues intellectualism.

According to Driver (2001), intellectualism is wrong. She thinks that it is not a requirement on virtues that they involve true beliefs; a person’s epistemic defect does not always result in a less perfect expression of a virtue. In fact, in some instances, ignorance is actually required for the full expression of a virtue (Driver, 2003). Driver calls these kinds of virtues the virtues of ignorance.

Driver provides two ways epistemic defects could be relevant to virtue attribution. The first way is by a person having propositional ignorance. Propositional ignorance occurs when the person performing the action has some belief that is not true and is relevant to the action (Driver, 2001, p. 347). According to Driver, modesty is one example of a virtue that involves having a false belief about one’s own value. While there are many accounts of what modesty is (see below), Driver argues the correct view is what she calls the underestimation account, which is the view that “the modest person underestimates his self-worth to some limited degree” (Driver, 2001, p. 18). An underestimation constitutes a false belief (e.g., when you estimate that the jar of marbles at the fair has 500 marbles but it actually has 625, your estimate is wrong). And since the false belief is about one’s own value, it is relevant to the virtue of modesty.

The second way that a person could have an epistemic defect relevant to virtue is by engaging in incorrect inferences (from possibly true beliefs). Driver calls this kind of epistemic defect inferential ignorance (Driver, 2001, p. 347). Psychology has demonstrated numerous ways that one could have all the relevant true beliefs but still fail to make inferences based on those beliefs (e.g., inattentiveness, time pressure, emotional reactions). In some of those instances, the failure to make the relevant inference could be important for the expression of some virtues. As we will see below, perhaps a fireman who does not even consider the danger to himself may be viewed as more courageous than a fireman who does consider all of the dangers and acts anyway. There may be something about the cost-benefit analysis that the latter fireman engages in that makes his action seem less courageous than the former fireman’s. Importantly, it is not necessarily the case that either fireman has a false belief (which would be an instance of propositional ignorance). In fact, it is very likely that because they are firemen they both have only true beliefs about the risks and benefits of acting. But still, there is some epistemic defect that the former fireman has and the latter one lacks, and this defect is likely to be a failure to make an inference relevant to the virtue. Driver thinks that neither propositional ignorance nor inferential ignorance necessarily rules out virtue attributions. And, in some cases, they could be required for the full expression of a virtue. Since we called the view that epistemic virtues are positively related to moral virtues intellectualism, we’ll call Driver’s alternative view anti-intellectualism.

The way that we have characterized intellectualism dictates that any epistemic defect relevant to a virtue makes that virtue less good or less perfect. To the extent that ordinary intuitions about cases are supposed to be sensitive to the factors involved in intellectualism, the intellectualist and anti-intellectualist positions give competing predictions about how attributions of moral virtue should respond to relevant epistemic failures. (Given what some virtue ethicists say, we think it is a good bet that everyday judgments about cases are supposed to bear on the correct view about intellectualism; see the discussion of the importance of everyday intuitions to virtue ethics above.) If intellectualism is correct, there should be measurably lower moral virtue attributions when epistemic failures are present. The anti-intellectualist would predict that epistemic failures do not always result in lowered moral virtue attributions and that, sometimes, the presence of an epistemic defect is related to a more perfect expression of a virtue. As such, we should be able to construct hypothetical (or perhaps even actual) cases that capture those key contrasts and ask people to make judgments about them.

Since Driver (2001) uses modesty as her prime example, so will we. To recap, her underestimation account of modesty requires that one has a false belief about one’s own value (with the important caveat that the false belief cannot be drastically wrong). As mentioned above, there are other prominent accounts of modesty. We will give a sampling of those that all seem to involve having true beliefs about one’s own worth. The first is the False Modesty account. On this account, one has a true belief about one’s own value but does not express that true belief. Rather, one intentionally downplays one’s own sense of worth. For example, Einstein might have thought he was the best physicist ever. Assume that claim is true. On the False Modesty view, Einstein would have that true belief but would downplay it. Perhaps he would say that he was merely a very good physicist. The other alternative view of modesty we will briefly look at is the Accurate Modesty account (Flanagan, 1990; Richards, 1988). This account of modesty involves having true beliefs about one’s own value but also contextualizing that value. Take the Einstein example again. Rather than saying he was merely a good physicist, he could appreciate his contribution and say that he was among the best physicists of the twentieth century. Here, he may not believe that he was the best physicist ever. On the Accurate Modesty account, Einstein still has only true beliefs (e.g., if it is a fact that he was the best physicist ever, then it is also true that he was the best physicist of the twentieth century, and he believes the latter but not the former), but he expresses those beliefs in a way that is more accurate to what he actually believes than he would on the False Modesty account.

Now we are in a position to start to address one of our central goals. Does personality predict virtue attributions? Again, our goal was to link everyday intuitions about virtue to some global personality traits. So, the standard strategy was applied. Scenarios were created and then personality was assessed. In this case, scenarios were created to reflect the distinctions discussed above (Feltz & Cokely, 2012b).

Ignorant Modesty

Albert Einstein is often thought to have sincerely said many times, “I am a good physicist.” It is universally accepted that if Einstein was not the best physicist ever, he was one of the best physicists and certainly the best of the twentieth Century. Hence, Einstein falsely believed that he was “a good physicist” when in fact he was one of the best physicists ever.

False Modesty

Albert Einstein is often thought to have insincerely said many times, “I am a good physicist.” It is universally accepted that if Einstein was not the best physicist ever, he was one of the best physicists and certainly the best of the twentieth Century. Hence, Einstein really believed that he was “one of the best physicists of the twentieth Century” when in fact he was one of the best physicists ever.

Accurate Modesty

Albert Einstein is often thought to have sincerely said many times, “I am the best physicist of the twentieth Century.” It is universally accepted that if Einstein was not the best physicist ever, he was one of the best physicists and certainly the best of the twentieth Century. Hence, Einstein really believed that he was “the best physicist of the twentieth Century” when in fact he was one of the best physicists ever.

After reading only one of these scenarios, participants were given three prompts to respond to. The response options were on a 6-point scale (1 = strongly disagree, 6 = strongly agree).

  1. “When Einstein said ‘I am a good physicist/I am the best physicist of the twentieth Century’ he exhibited a virtue.

  2. Einstein was modest.

  3. Einstein was a morally good person.”

After responding to these prompts, participants completed the Ten Item Personality Inventory (Gosling et al., 2003).

Responses to the scenarios indicated support for anti-intellectualism. While we will forgo the statistical analyses of these results since they are not our main concern, Table 4.2 shows that the overall results were not what the intellectualist would predict (see Feltz and Cokely (2012b) for the full analyses). Einstein’s epistemic defect did not result in a reduction in modesty attributions. In fact, that epistemic defect increased virtue attributions to Einstein. Consistent with the Pat scenarios, emotional stability predicted virtue attribution. For Ignorant Modesty, emotional stability was related to Virtue, r(43) = .37, p = .01, and Good Person, r(43) = .35, p = .02, but not Modesty, r(43) = .19, p = .22. Emotional stability was unrelated to judgments in Accurate Modesty (rs < .14, ps > .26). In False Modesty, emotional stability was only related to Virtue, r(63) = .26, p = .04, and was unrelated to the other dependent variables (rs < .2, ps > .11).

Table 4.2 Means and standard deviations for ignorant, false, and accurate modesty

Two follow-up studies were conducted to provide further evidence that there are some virtues of ignorance and to replicate the relation of emotional stability to attributions of the virtues of ignorance. The first follow-up study controlled for beliefs that people may have about the historical figure Einstein. Maybe people have an idea of who Einstein was as a person (perhaps from film or other depictions). Or maybe people believe that since the twentieth century was the best century for physics, being the best physicist of the twentieth century just is being the best physicist ever. In those cases, perhaps Einstein actually had true beliefs but underplayed them. If that is right, then the alternative views to the virtues of ignorance can account for the pattern of results.

To control for those worries, we ran an additional study to help address the potential issues with propositional ignorance. The major change was to remove Einstein from the scenarios and make the scenarios about a fictitious character named John who plays darts. The follow-up study also had one additional change. We added another scenario in which John is unaware of how good a darts player he is. We added this scenario because lack of awareness could also be a kind of propositional ignorance. Rather than having a false belief, John would lack a belief that it might be reasonable for him to have. With all that in mind, participants read the following scenarios:

Ignorant Modesty

John is often thought to have sincerely said many times, “I am a good darts player.” It is universally accepted that if John was not the best darts player ever, he was one of the best darts players and certainly the best of his generation. Hence, John falsely believed that he was “a good darts player” when in fact he was one of the best darts players ever.

False Modesty

John is often thought to have insincerely said many times, “I am a good darts player.” It is universally accepted that if John was not the best darts player ever, he was one of the best darts players and certainly the best of his generation. Hence, John really believed that he was “one of the best darts players of my generation” when in fact he was one of the best darts players ever.

Accurate Modesty

John is often thought to have sincerely said many times, “I am one of the best darts players of my generation.” It is universally accepted that if John was not the best darts player ever, he was one of the best darts players and certainly the best of his generation. Hence, John really believed that he was “one of the best darts players of my generation” when in fact he was one of the best darts players ever.

Unaware Modesty

John is often thought to have sincerely said many times, “I am a good darts player.” It is universally accepted that if John was not the best darts player ever, he was one of the best darts players and certainly the best of his generation. Hence, John was unaware that he was “one of the best darts players of his generation” when in fact he was one of the best darts players ever.

Participants rated their agreement with the following statements on a 6-point scale (1 = strongly disagree, 6 = strongly agree):

  1. “When John said ‘I am a good darts player/I am one of the best darts players of my generation’ he exhibited a virtue.
  2. John was modest.
  3. John was a morally good person.”

Participants then completed the Ten Item Personality Inventory.

The results from the follow-up study largely replicated the results of the first experiment about virtues of ignorance (see Table 4.3). Again, we will skip the fairly detailed statistical analyses, but the pattern of results does not support intellectualism about some virtues. Emotional stability was not related to judgments in Ignorant Modesty (rs = −.1 to .18, ps > .42). However, this result was somewhat expected because the relatively small sample size limited our ability to detect relations of this magnitude. Yet emotional stability had nearly statistically significant relations to responses to Unaware Modesty, which features a kind of propositional ignorance: Virtue, r (63) = .24, p = .057, and Morally Good, r (63) = .23, p = .066, but not Modest, r (63) = .18, p = .16. Emotional stability was unrelated to judgments in False Modesty (rs = .12–.28, ps > .11). Somewhat unexpectedly, emotional stability was related to judgments in Accurate Modesty: Virtue, r (47) = .32, p = .03, Modest, r (47) = .27, p = .06, but not Morally Good, r (47) = .05, p = .72.

Table 4.3 Means and standard deviations for John the dart player scenarios

We conducted another experiment to estimate the impact of the other kind of epistemic defect—inferential ignorance—on virtue attribution. As the fireman case above may illustrate, Driver takes some types of courage to be good candidates for inferential ignorance. In particular, she thinks that impulsive courage “seems to involve inferential ignorance alone. The impulsively courageous person possesses certain relevant facts of his situation, yet fails to put these facts together in order to reach the conscious conclusion that he himself is in danger” (2001, p. 33). Driver states that “a good illustration of this sort of person is one who, perhaps, fears for the person trapped inside a burning building but does not fear for himself, since he fails to represent the danger to himself” (2001, p. 33). The reader can probably already guess the predictions. The intellectualist will predict that impulsively courageous acts will be judged less virtuous than non-impulsively courageous acts; the anti-intellectualist will predict the opposite. Once again, the main issue for our purposes was whether emotional stability predicted attributions of virtue.

To test inferential ignorance in the case of courage, participants read these scenarios:

Accurate Inference

Pat is taking a walk one night and comes across a burning building. Pat hears a child screaming for help inside the building. Pat is worried about the child inside the building. There is a high risk that if Pat goes into the building, Pat would get burned. Pat considers this risk very carefully. Pat accurately estimates the danger of going into the building when deliberating whether to go in. As a result, Pat fears for his own well-being. Pat decides to run into the building to save the child.

No Inference

Pat is taking a walk one night and comes across a burning building. Pat hears a child screaming for help inside the building. Pat is worried about the child inside the building. There is a high risk that if Pat goes into the building, Pat would get burned. Pat does not consider this risk. Because Pat doesn’t consider this risk, Pat inaccurately estimates the danger of going into the building when deliberating whether to go in. As a result, Pat does not fear for his own well-being. Pat decides to run into the building to save the child.

Participants rated their agreement with the following three statements about each scenario on a 6-point Likert scale (1 = strongly disagree, 6 = strongly agree).

  1. “When Pat ran into the building, he exhibited a virtue.
  2. Pat was courageous.
  3. Pat was a morally good person.”

After responding to these prompts, participants completed the Ten Item Personality Inventory.

Here, the data were less clearly supportive of the anti-intellectualist’s predictions. As can be seen in Table 4.4, the strongest virtue attributions were in the Accurate Inference case. Of course, the virtue attributions in the remaining cases were also on the “agreement” side of the scale, so the anti-intellectualist could argue that inferential ignorance does not rule out that one could be acting virtuously (even if the virtues are less perfect). But, more important for our purposes, did personality predict judgments of virtue? As expected, emotional stability did not predict judgments for No Inference (rs = −.05 to .02, ps > .66). However, for the three judgments in Accurate Inference, the data suggest trends (i.e., close but non-statistically significant relations in the expected direction) where emotional stability predicted virtue attributions: Virtue, r (94) = .13, p = .23; Courage, r (94) = .19, p = .07; Good Person, r (94) = .19, p = .06 (Footnote 6).

Table 4.4 Means and standard deviations for accurate and inaccurate reasoning

On the whole, the series of experiments involving virtue has one consistent message: emotional stability tends to be predictably linked to attributions of virtues. This finding held regardless of whether participants were asked to compare virtuous actions to actions done for other reasons, whether one could be virtuous while being ignorant, or whether virtues were identified by their consequences. In short, personality predicts some first-order ethical intuitions.

Personality Predicts Bias in Applied Ethics

So far, we have reviewed instances where personality predicts philosophical intuitions about meta-ethics and first-order ethics. The final area of ethics we will consider is applied ethics. Where meta-ethics addresses questions about ethics and first-order ethics attempts to establish substantive ethical theories, applied ethics attempts to apply the lessons from these other areas of ethics to everyday moral issues such as abortion, suicide, and euthanasia. Of note, we talk as if these domains were isolated and discrete; in actual philosophical practice, that is not necessarily the case. Results from applied ethics can inform theoretical work in first-order and meta-ethics as well (e.g., understanding applied ethical issues about animals may have important implications for the scope of normative theories and may inform some ethical views about what a person is).

Almost all non-professional philosophers have beliefs or attitudes about almost all contemporary applied ethical issues (e.g., capital punishment, famine relief, vegetarianism). To be sure, there may be some areas of applied ethics that are less well-known to the non-expert (e.g., bio-enhancement or unmanned aerial vehicles), but these tend to be the exception rather than the rule. Of course, there can be substantial disagreement about what applied ethics is (Beauchamp, 2003). However, one common goal of applied ethics is to deal with moral issues as they currently exist in the world, including how people think about and interact with those issues (Singer, 1993). Hence, understanding and incorporating those everyday attitudes are important for the applied ethicist. For example, take a rule utilitarian approach to applied ethics where the correct action is the one that conforms to the rule that maximizes utility. Everyday thought is one important consideration when thinking about the costs or benefits associated with a rule. Implementing a rule that nobody should eat meat has costs because most people currently think it permissible to eat meat. Changing that attitude is likely to be difficult and expensive. It may end up that implementing such a rule is the correct thing to do according to the rule utilitarian. Determining the balance of costs and benefits includes understanding everyday thought about applied issues such as the morality of eating meat. Therefore, everyday attitudes can be one important factor for determining the correct moral rule.

It seems safe to conclude, then, that understanding everyday attitudes and decisions about many applied ethical issues is often essential in applied ethics. One implication of this conclusion is that it can be important to know how people come to have those beliefs or how they make their judgments. Take again the example of a rule utilitarian prohibition on eating meat. If we know how people come to believe that eating meat is morally permissible, we may be able to take steps to more efficiently institute the prohibition. Or we may find that humans’ attitudes and decisions about eating meat are just too difficult to alter for the prohibition against eating meat to be effective. Similarly, we may find that some people’s attitudes about eating meat rest on false beliefs (S. Feltz & Feltz, 2019). We may then be able to inform their decisions more fully (see Chap. 5 for an in-depth discussion of how these results factor into informed decision making). Or we may find that people currently are not capable of a dramatic shift in attitudes or decisions, but this does not mean that it will always be unreasonable to enforce the prohibition against eating meat: we may be able to take incremental steps so that future generations have different attitudes or make different decisions (O. Flanagan, 1991; Hoang, Feltz, Offer-Westort, & Feltz, 2023). All of this reinforces the notion that it will often be advantageous to know what current moral beliefs and attitudes are, to understand the factors influencing those beliefs and attitudes, and to understand what role these factors play in ethical decision making.

The past half-century has seen an abundance of research suggesting that people often use heuristics when making decisions (Gigerenzer, Todd, & ABC Research Group, 1999; Kahneman, Slovic, & Tversky, 1982; Simon, 1955, 1990). Heuristics are quick, efficient, and typically robust rules of thumb that operate particularly well under the constraints often found in real-world human decision making (e.g., time constraints, resource constraints). For example, one common heuristic is the representativeness heuristic: we often make judgments that are informed by our stereotype or idea of the typical event or person. Often these judgments are accurate (e.g., making judgments in reference to what a typical driver would do at a stop sign). However, in other instances those judgments can be mistaken (e.g., making judgments about the typical “man”).

Sunstein (2005) speculates that a number of heuristics are involved in some people’s “real-world” moral decision making (see also Gigerenzer (2010)). Sunstein calls one of these heuristics the punish, and not reward, betrayals of trust heuristic (Sunstein, 2005, p. 537). Generally, people tend to prefer products that decrease the risk of harm only so long as no portion of the remaining harm comes from the product itself. For example, people preferred an airbag where 2% of people died in serious accidents over an airbag where 1.01% of people died but 0.01% of that risk resulted from the airbag itself (Gershoff & Koehler, 2011; Koehler & Gershoff, 2003, 2005). Sunstein speculates that this aversion would translate into judgments of punishment: those who produce the safer product would be deemed a more apt target of punishment than the producer of the less safe product if part of the risk of harm came from the safer product itself. As such, this heuristic would lead to the odd result of punishing a company that produces an overall safer product.
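To make the arithmetic behind the airbag example explicit, the “betrayal” airbag is the safer choice overall even after its own small contribution to risk is counted. The short sketch below (in Python, with the values taken from the example above) simply totals the two risk components; it is an illustration, not part of the original studies.

```python
# Illustrative arithmetic for the airbag example (not from the original studies).
risk_airbag_a = 0.02            # 2% of people die in serious accidents with Airbag A
risk_airbag_b_crash = 0.01      # 1% die in serious accidents with Airbag B
risk_airbag_b_device = 0.0001   # an additional 0.01% die because of Airbag B itself

total_risk_b = risk_airbag_b_crash + risk_airbag_b_device
print(f"Airbag A total fatality risk: {risk_airbag_a:.2%}")   # 2.00%
print(f"Airbag B total fatality risk: {total_risk_b:.2%}")    # 1.01%
```

Despite the lower total risk, the betrayal-aversion finding is that many people prefer Airbag A because none of its associated deaths are caused by the airbag itself.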

But could personality predict who acts as if they use the punish and not reward betrayals of trust heuristic? To help answer this question, we present results from two new studies, following the same basic strategy typically employed to link personality with philosophically relevant judgments. We first gave participants a vignette that captures the philosophically relevant features of the Punish, and not reward, betrayals of trust heuristic and then we assessed people’s personality.

In the first study, 62 participants were recruited from Amazon’s Mechanical Turk. Nine participants were excluded for not completing the survey, and three were excluded for giving obviously inconsistent responses (e.g., indicating they would prefer to buy Cure A but then expressing a stronger preference for Cure B), leaving 50 participants. Thirty-five (70%) were male. The mean age was 27.69 years (SD = 7.78, range 18–53). Participants received the following scenario.

Deadly Virus: Suppose that you are exposed to a deadly virus that kills 100% of those infected. Suppose that you are offered a choice between two equally priced cures: Cure A and Cure B. Scientific tests indicate that there is a 2% chance that those who take Cure A and are exposed to the virus will be killed due to the virus. Scientific tests indicate that there is a 1% chance that those who take Cure B and are exposed to the virus will die due to the virus. However, Cure B may kill people who would not have died if they took Cure A instead. Specifically, some of those who take Cure B may die due to becoming especially vulnerable to the virus. Tests indicate that there is an additional one chance in 10,000 (0.01%) that someone who is exposed to the virus and takes Cure B will be killed due to becoming especially vulnerable.

After reading this scenario, participants chose which of the two cures they would prefer to buy (Cure A or Cure B). They then responded to the following prompt: Please indicate the strength of your preference on the scale below (1 indicates a strong preference for Cure A. 7 indicates a very strong preference for Cure B). Participants then completed the Ten Item Personality Inventory and the Berlin Numeracy Test (BNT). Among educated adults living in industrialized countries, the BNT has been found to be the best single predictor of general decision making skill, including one’s ability to understand and evaluate risk (i.e., risk literacy) (Cokely et al., 2012, 2013, 2014; Ghazal et al., 2014). Of note, the numeracy test typically fully mediates the relations between general cognitive abilities (e.g., intelligence, attentional control, working memory) and superior judgment and decision making performance in non-expert samples (e.g., more intelligent people tend to make better decisions because they tend to have better numeracy skills, just as less intelligent people with strong numeracy skills also tend to make better decisions). Often this test more than doubles the predictive power of other instruments (e.g., 10 times better than 30-minute tests of fluid intelligence). In other words, in general samples there is typically no test that will be more sensitive or more likely to detect systematic relations between individual differences in general cognitive abilities and choice. If numeracy is unrelated, it is a very good bet that the influence of other general abilities is trivial (see Skilled Decision Theory; Cokely, Feltz, Ghazal, Allan, Petrova, & Garcia-Retamero, 2018).
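For readers unfamiliar with the mediation claim in the previous paragraph, the sketch below illustrates, with simulated data, the classic pattern of full mediation: the apparent effect of general ability on decision quality shrinks toward zero once numeracy is controlled for. The variable names, the simulated effect sizes, and the simple regression comparison are our illustrative assumptions, not a reanalysis of the cited studies.

```python
# Hedged sketch of a full-mediation pattern (simulated data; illustration only).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 300
ability = rng.normal(size=n)                                  # e.g., fluid intelligence
numeracy = 0.7 * ability + rng.normal(scale=0.7, size=n)      # ability -> numeracy
quality = 0.6 * numeracy + rng.normal(scale=0.8, size=n)      # numeracy -> decision quality
df = pd.DataFrame({"ability": ability, "numeracy": numeracy, "quality": quality})

total = smf.ols("quality ~ ability", data=df).fit()              # total effect of ability
direct = smf.ols("quality ~ ability + numeracy", data=df).fit()  # effect controlling numeracy
print(f"Total effect of ability: {total.params['ability']:.2f}")
print(f"Direct effect of ability (controlling for numeracy): {direct.params['ability']:.2f}")
# Full mediation: the direct effect is near zero once numeracy is in the model.
```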

In response to Deadly Virus, most people preferred to buy Cure A, N = 33 (66%), χ2 = 5.12, p = .02. This preference was also reflected in strength-of-preference ratings, which differed from neutrality in the direction of Cure A (M = 1.34, SD = 0.48), t (49) = 39.31, p < .001. Conscientiousness was unrelated to the binary choice of which cure one would prefer to buy, rho (50) = −.15, p = .31, but was related to a stronger preference for buying Cure A, rho (50) = −.28, p = .05. BNT was related to the preference for which cure to buy, rho (50) = .40, p = .004, and showed a trend predicting strength of the preference, rho (50) = .23, p = .11. To determine the unique predictive power of BNT and conscientiousness, a multiple linear regression with conscientiousness and BNT as independent variables and binary cure choice as the dependent variable was conducted. The full model was a strong and significant predictor of the binary choice: F (2, 47) = 6.24, p = .004, R2 = .21. Results showed that BNT was a unique predictor of the binary choice (β = .13, t (47) = 3.33, p = .002), while conscientiousness showed a negative numerical shift such that more conscientious individuals were less likely to choose the overall safer cure (β = −.04, t (47) = −1.43, p = .16). BNT and conscientiousness did not interact (t < 1). A similar multiple regression was conducted with the preference scale as the dependent variable. The full model was a significant predictor of the scaled choice: F (2, 47) = 4.03, p = .02, R2 = .15. Results indicated that BNT was a marginally significant unique predictor (β = .28, t (47) = 1.94, p = .059), while conscientiousness was a robust unique predictor (β = −.22, t (47) = −2.21, p = .03) of the scaled choice. BNT and conscientiousness did not interact (t < 1). These results suggest that those who were more numerate preferred the overall safer product, while more conscientious individuals preferred a less safe product where the product itself had no chance of killing them.
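For readers who want to see the analytic steps behind results like these (a chi-square test of the choice split, a one-sample t-test against the scale midpoint, Spearman correlations, and an ordinary least squares regression), here is a hedged sketch in Python run on simulated data. The column names, the simulated responses, and the assumed effect on choice are ours; the sketch shows the general pipeline, not the actual data or the values reported above.

```python
# Hedged sketch of the kinds of analyses reported above, run on simulated data.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 50
data = pd.DataFrame({
    "bnt": rng.integers(0, 5, n),                 # Berlin Numeracy Test score (0-4)
    "conscientiousness": rng.integers(2, 15, n),  # TIPI conscientiousness sum (2-14)
    "preference": rng.integers(1, 8, n),          # 7-point preference scale
})
# Binary choice: 1 = the overall safer "betrayal" option, 0 = the no-betrayal option.
data["choice_b"] = (rng.random(n) < 0.3 + 0.1 * data["bnt"].to_numpy() / 4).astype(int)

# Chi-square goodness of fit: does the choice split differ from 50/50?
counts = data["choice_b"].value_counts().reindex([0, 1], fill_value=0)
chi2, p_chi2 = stats.chisquare(counts)

# One-sample t-test of the preference scale against the neutral midpoint (4).
t_stat, p_t = stats.ttest_1samp(data["preference"], 4)

# Spearman correlation between numeracy and the binary choice.
rho, p_rho = stats.spearmanr(data["bnt"], data["choice_b"])

# OLS regression: binary choice predicted by numeracy and conscientiousness.
X = sm.add_constant(data[["bnt", "conscientiousness"]])
model = sm.OLS(data["choice_b"], X).fit()

print(f"chi2 = {chi2:.2f}, p = {p_chi2:.3f}; t = {t_stat:.2f}, p = {p_t:.3f}; "
      f"rho = {rho:.2f}, p = {p_rho:.3f}")
print(model.summary())
```

With the real data, the same steps would simply be run on participants’ recorded choices, preference ratings, TIPI scores, and BNT scores.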

The relations between BNT, conscientiousness, and cure preferences accord with Skilled Decision Theory (Cokely et al., 2018). Those who are more numerate are more likely to deliberate carefully and elaborately, leading to more informed and normatively correct responses in many domains. For example, numerate individuals are more likely to make correct choices in paradigmatic risky prospect interpretation tasks (e.g., evaluating the risks and payoffs of various financial gambles; selecting more effective medical treatments; making public policy recommendations that save more lives with fewer costs; making judgments about the risk of climate and weather hazards). Given that the goal of a cure is to reduce death, those who are more numerate may pick the cure that reduces the overall risk of death. However, those who are more conscientious may be especially sensitive to the “betrayal” that Cure B represents. Those who are conscientious tend to hold that duties are important. If a duty of a cure is not to kill a person, then conscientious people may prefer a less safe product that has no chance of killing people.

While the pattern of results generally accords with broad theoretical frameworks, post hoc explanations are generally easier but less reliable than a priori predictions (e.g., it is hard to predict what the stock market will do tomorrow but relatively easy to “explain” what you saw happen yesterday). Once again, what is required for increased confidence in a result’s robustness is replication. Thus, a follow-up study was conducted using slightly different materials. The structure of the scenario was identical, but instead of cures for a disease we used airbags, with similar descriptions and the same probabilities of harm used in Deadly Virus (Gershoff & Koehler, 2011; Koehler & Gershoff, 2003, 2005). Participants responded to the same questions, modified to reflect that the scenario was about airbags. Participants also completed the Cognitive Reflection Test (CRT) instead of the BNT. As a reminder, the CRT measures the degree to which one tends to express a more impulsive (rather than reflective) cognitive style, which is a predictor of numeracy and of decision making more broadly (Frederick, 2008). Eighty-six undergraduate students were recruited from a large public university. Fifty (51%) were male. The mean age was 19.44 years (SD = 1.76, range 18–28).

In this revised case, there was no difference in preference for the less safe airbag (N = 44, 51%) compared to the safer yet harm-causing airbag (N = 42, 49%), χ2 = 0.05, p = .83. Preferences for the safer airbag measured by the Likert scale were also not significantly different from neutrality, M = 4.08, SD = 1.45, t (85) = 0.52, p = .60. However, conscientiousness was associated with a stronger preference for the less safe airbag on the binary choice, rho (86) = −.31, p = .004, and with a stronger preference for the less safe airbag on the scaled response, rho (86) = −.2, p = .066. CRT was also related to the binary choice, rho (86) = .22, p = .04, and showed a trend on the scaled response, rho (86) = .2, p = .069. To determine whether conscientiousness and CRT were unique predictors, a multiple regression with CRT and conscientiousness as independent variables and the binary airbag choice as the dependent variable was conducted. The full model was a significant predictor of the binary choice: F (2, 83) = 6.37, p = .003, R2 = .13. CRT was a near-significant predictor of choice (β = .09, t (83) = 1.86, p = .066), while conscientiousness was a clear and significant unique predictor (β = −.06, t (83) = −2.56, p = .01). A similar multiple regression analysis was performed for the scaled airbag choice. The full model was a significant predictor of the scaled response: F (2, 83) = 3.69, p = .03, R2 = .08. CRT was a significant unique predictor (β = .3, t (83) = 2, p = .05), while conscientiousness showed the expected trend toward unique prediction (β = −.09, t (83) = −1.36, p = .18).
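Because Spearman’s rho carries much of the weight in these bivariate analyses, it may help to note that it is simply a Pearson correlation computed on ranks, which is why it suits ordinal measures such as CRT scores, Likert preferences, and binary choices. The minimal sketch below, with made-up values, shows the equivalence; none of the numbers come from the studies above.

```python
# Minimal illustration: Spearman's rho equals the Pearson correlation of ranks.
# All values below are made up for illustration.
import numpy as np
from scipy import stats

crt = np.array([0, 1, 3, 2, 0, 1, 2, 3, 1, 0])      # hypothetical CRT scores (0-3)
choice = np.array([0, 0, 1, 1, 0, 1, 0, 1, 1, 0])   # 1 = chose the overall safer airbag

rho, p = stats.spearmanr(crt, choice)
r_on_ranks, _ = stats.pearsonr(stats.rankdata(crt), stats.rankdata(choice))
print(f"rho = {rho:.2f}, Pearson r of ranks = {r_on_ranks:.2f}, p = {p:.2f}")
```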

In summary, these results provide a demonstration of one way cognitive and personality variables may have independent and opposing influences on behavior. Specifically, conscientiousness (i.e., a heritable and stable trait) tended to be an independent predictor of the degree to which participants acted in accord with the punish, not reward, betrayals of trust heuristic. In contrast, the cognitive skill and style instruments predicted choices consistent with less reliance on the heuristic. Theoretically, the results may reflect differences in one’s primary motivation or focus. If one focuses on the overall consequences (e.g., the number of harmed individuals), as may be likely for more numerate individuals, one may be less likely to act as if trust has been betrayed. However, when one instead focuses attention on the potential costs of betrayals of trust, it may be harder to set that betrayal aside and weigh other factors, particularly among those who care deeply about moral commitment and obligation (i.e., “dutifulness” is a primary facet of conscientiousness). Taken together, the cognitive and personality factors uniquely contributed to a predictive model of choice, providing a more comprehensive and nuanced account of the judgment processes that give rise to these ethically relevant choices.

Empirical Science and the Autonomy of Ethics

We’d like to address one initial worry that one might have specifically about empirical science and ethics—a worry that does not necessarily exist for empirical science’s relation to free will or intentional action. Ethics is a normative endeavor—it tells us how we ought to act or behave. Since ethics is a normative discipline, one might object that the actual way we are or think about ethics is irrelevant to what we ought to think or how we ought to act. One may concede that there may be some important role in applied ethics for how we actually think and behave, but that role would be only secondary. Only after we come to know what the right moral principles or actions are should we worry about how we actually are or how we actually go about making decisions. For these reasons, one might think that ethics is autonomous from empirical psychology and that empirical results can in principle have nothing to say about most substantive theoretical pursuits in ethics.

Owen Flanagan (1991) identifies two ways in which ethics could be thought to be autonomous from empirical findings. The first way is that ethics is a moral standard setter. On this view, the job of ethics is to discover the principles or rules about what is morally permissible, obligatory, or forbidden. Since ethics in this sense is a normative discipline, the way that people actually believe or behave can say nothing about how they ought to believe or behave. Flanagan notes parallel reasoning in epistemology. The way people actually reason may not be important to how they ought to reason. Epistemology sets the standards for how people ought to reason. As such, ethics, like epistemology, is autonomous from empirical psychology (even if it can help inform ethical ways to implement findings from empirical psychology, e.g., in moral education).

A second way that ethics could be autonomous from empirical psychology is that ethics deals with the analysis of moral concepts or of moral linguistic terms. The task of ethics is to analyze moral concepts or terms such as “right,” “wrong,” and “obligatory.” The goal, according to Flanagan, is to find a common conceptual scheme that underwrites the usage of these terms, to systematize discrepancies, and to clarify ambiguities in these terms. This approach is a “semantic” analysis of those terms or concepts and as such is autonomous from empirical psychology since the goal is a philosophical conceptual analysis of the terms.

These objections have a long and celebrated history in ethics, and there have been many defenses of and responses to the claim that ethics is autonomous. Flanagan gives a particularly useful analysis of the problem. On Flanagan’s view, the autonomy thesis is too strong on both of these approaches. For the standard-setting approach, the way humans are (or can be) provides a constraint on acceptable moral standards. After all, it is unreasonable to set a standard that no humans could possibly satisfy (Footnote 7). The results from ethics should be psychologically realizable.

Empirical psychology can help inform us about what kinds of creatures humans are and what is psychologically realizable for humans. To illustrate, virtue ethicists are perhaps the most prominent group of moral theorists who respect the psychological realizability constraint on ethics. As we have indicated above, it is common for virtue ethicists to think that how “we” (including non-professionals) judge cases and what intuitions we have are important for theory construction in virtue ethics (e.g., Anscombe, 1958; Appiah, 2008; Arpaly, 2003; Driver, 2001, 2004; Foot, 2001; Hursthouse, 1999; Zagzebski, 2010). One reason is that virtues are often thought to be something that humans can have, since virtues are often characterized as character traits (Doris, 2002; Harman, 1999; C. Miller, 2013). As a corollary, humans should also be able to evaluate those character traits in terms of how virtuous or vicious they are. Given these observations, it would be a significant tension for the virtue theorist if those judgments or actual character traits were completely irrelevant to the correct account of virtue (e.g., akin to saying a virtuous person is a river—if true, virtue is not a theory of humans because humans are not and never could literally be rivers). So, any principled rejection of empirical psychology is saddled not only with the challenge of psychological realizability but also with rejecting a tradition in which those judgments and empirical evidence are thought to be valuable. And, as we argue in the next chapter, rejection of tradition is not to be done lightly or without good evidence. Of course, this does not mean that empirical psychology settles debates. All that is claimed is that empirical psychology is relevant to ethics and that ethics is not completely autonomous from the empirical evidence. Philosophical reflection still has important roles to play even if it is not fully autonomous.

The linguistic analysis approach straightforwardly fails to support the strong autonomy thesis. Flanagan first notes that ethics conceived only as linguistic analysis is so far removed from essential notions of what ethics is and is supposed to be that such an approach cannot be taken seriously. For example, it is highly unlikely that non-professional philosophers think that ethics aims merely to analyze ethical terms or concepts. Rather, ethics, among other things, is meant to aid in practical reasoning; ethics is supposed to tell us what we ought to do and why. Moreover, since the analysis is of everyday concepts and usage, there is little reason to think that philosophers are uniquely positioned to do that kind of research. Rather, psychology and allied behavioral sciences have the tools and methods to efficiently address how and when everyday terms or concepts are used. In other words, “Ethics conceived as semantic analysis is just a kind of descriptive social psychology—possibly poorly executed” (O. Flanagan, 1991, p. 28) (Footnote 8).

All of this is consistent with some weak autonomy thesis. One need not think that ethics is reducible to empirical psychology. There are some important roles for philosophical reflection to play. First, setting standards and ideals that are not currently realized can be one important role that ethics can play. For example, virtue ethicists do not always take the empirical evidence without question or treat empirical evidence as the only kind of evidence. Second, data may need a certain degree of systematization and refinement in light of other plausible principles (Zagzebski, 2010). Data don’t interpret themselves, and philosophical reflection can help with that interpretation. Importantly, these views still respect the psychological realizability constraint. As such, we follow Flanagan in saying “all moral theories—certainly all modern ones—make our motivational structure, our personality possibilities, relevant in setting their moral sights. The autonomy thesis, the claim that psychology does not matter to moral philosophy, can be safely ignored” (1991, p. 31).

Conclusion

In this chapter, we have reviewed empirical results suggesting that several heritable personality traits predict ethically relevant intuitions in meta-ethics, first-order ethics, and applied ethics. Overall, we have provided a detailed and representative (albeit non-comprehensive) account of the most relevant currently available evidence. We have also discussed an important objection to using everyday intuitions and empirical science: the objection that ethics, as a normative discipline, is completely autonomous from empirical science. That objection has been found wanting. Any ethical theory that does not respect the psychological realizability principle is implausible and impractical. In fact, most contemporary ethical theories respect and attempt to account for the empirical evidence. We take it, therefore, that empirical evidence is relevant to most philosophical pursuits—even ones that are normative and that have traditionally been argued to be completely independent of empirical results. While giving sustained arguments for this claim is outside the scope of this chapter, others have convincingly made this point (O. Flanagan, 1991). There are other objections and concerns remaining to be answered, but Chaps. 1–4 set the stage for the rest of the book.

The major focus of the last three chapters was on intuitions about hypothetical and actual cases. However, it is valuable to note again that personality also predicts philosophically and morally relevant behaviors. There is no room or need for an exhaustive review of the many philosophically relevant behaviors that personality predicts because the evidence for these connections is extensive and well established (e.g., Ashton et al., 2012). To illustrate, personality traits are so robustly related to occupational performance, workplace citizenship, theft, absenteeism, tardiness, lack of cooperation, and other forms of counterproductive and illegal behavior that personality assessment is a common and legal practice in the hiring of employees in the United States and elsewhere.

In case the “real-world” association between personality and philosophical (e.g., ethical) behavior is not yet compelling, in closing it seems useful to consider some more extreme variations in personality, namely personality disorders. Personality disorders are diagnosed and defined as personality tendencies that are enduring, deviant, distressful, dysfunctional, disruptive, and not the result of some kind of extenuating circumstance (e.g., prolonged drug use) (American Psychiatric Association, 2013). Given that these disorders are dysfunctional, disruptive, and distressful, it is not surprising that some personality disorders are associated with morally undesirable and even morally reprehensible behavior. For example, there has been extensive research conducted on Antisocial Personality Disorder. Diagnostic criteria include tendencies that are abnormal, interfere with good interpersonal and personal functioning, and are persistent (Cooke, Forth, & Hare, 1998). People with Antisocial Personality Disorder who exhibit pronounced antisocial behaviors tend to be impulsive risk takers who have relatively low levels of empathy and muted emotional reactions. Not surprisingly, these individuals have higher rates of criminal activity and recidivism (Leistico, Salekin, DeCoster, & Rogers, 2008; Patrick, 2006). In some extreme cases, it is nearly unimaginable to most of us how little remorse such individuals show for appalling, vicious, and disgusting crimes (e.g., torture of children, cannibalism, and necrophilia). Thankfully, such extreme cases are rare, as are personality disorders generally. Nevertheless, taken together with the other evidence from typical individuals, we think these examples make it clear that stable, heritable dispositions (i.e., personality traits) are often related to individual differences in philosophically and morally relevant thoughts and behaviors. Overall, we take it that we have established a theoretically and empirically informed case that personality often predicts fundamental philosophical intuitions. We now turn to some major implications of these findings and some emerging opportunities across frontiers.