Recent empirical work appears to suggest that the moral intuitions of professional philosophers are just as vulnerable to distorting psychological factors as are those of ordinary people. This paper assesses these recent tests of the ‘expertise defense’ of philosophical intuition. I argue that the use of familiar cases and principles constitutes a methodological problem. Since these items are familiar to philosophers, but not ordinary people, the two subject groups do not confront identical cognitive tasks. Reflection on this point shows that these findings do not threaten philosophical expertise—though we can draw lessons for more effective empirical tests.
An important qualification: much of the debate over the expertise defense appears to proceed on the assumption that philosophical intuition is all of one sort—that whatever we might say about intuitions in epistemology, we might also say about intuitions in metaphysics or ethics. There is some reason to be skeptical of this assumption (Nado 2012), but I will not be able to engage with it here. Still, it should be noted that the empirical studies discussed below deal solely with moral intuition, and there is a live question as to whether anything said about these findings can be generalized to other areas of philosophy.
Interestingly, Rawls also says that “all judgments on hypothetical cases are excluded” (Rawls 1951, p. 182). He abandons this requirement in A Theory of Justice, though he continues to maintain that “in deciding which of our judgments to take into account we may reasonably select some and exclude others. For example, we can discard those judgments made with hesitation, or in which we have little confidence” (Rawls 1971, p. 47).
It is worth noting that focusing on Rawlsian considered judgments does not involve what Weinberg and Alexander (2014) call a “thick” conception of philosophical intuition. That is, I will not suppose that the intuitions under discussion are a class with special conceptual or cognitive properties (see Ludwig 2007; Kauppinen 2007). Weinberg and Alexander argue that the special properties of thick intuitions may make them untestable in experimental studies, and perhaps even undetectable in ordinary philosophical practice. But the cognitive restrictions implied by “considered judgments” are relatively pedestrian.
These are not the only empirical studies with some relevance to the expertise defense. For instance, Schulz et al. (2011) appear to show that professional philosophers exhibit a (presumed distorting) link between personality traits and views on free will and moral responsibility. Similarly, a series of behavioral studies by Schwitzgebel and coauthors (2009, 2012) appear to show that professional moral philosophers are no better morally behaved than ordinary people. I leave these studies to the side because they make assumptions (about how to measure expertise or about the relationship between knowledge and behavior) that require separate discussion.
The ‘Jim and the Indians’ case concerns an innocent man, Jim, who is given the opportunity to save a number of defenseless South American Indians from a sadistic paramilitary force, if he will only agree to pull the trigger and kill one of the condemned Indians himself (Smart and Williams 1973, p. 98).
Curiously, the direction of the effect reversed between the two groups: non-philosopher subjects were more likely to find the proposed action in Switch or Jim and the Indians morally obligatory in the Observer condition, while philosophers were more likely to find it obligatory in the Actor condition (Tobia et al. 2013, pp. 4–5). It is intriguing that non-philosophers and philosophers displayed the Actor-Observer effect in opposed directions—but, for the present, this directional difference doesn’t matter. That an Actor-Observer effect constitutes a distortion of moral intuition does not depend on the direction of the effect—any difference between Actor and Observer responses is considered evidence of distortion. The reversal is, however, interesting for the interpretation of Nadelhoffer and Feltz’s original data. In their paper, they suggest that Actor subjects were trying to avoid the aversive experience of imagining hitting the switch: “If you are asked to imagine yourself to be in the position of having to decide whether it would be permissible for you to hit the switch, one easy way of keeping yourself from having to make such a hard decision is to simply judge it to be impermissible!” (Nadelhoffer and Feltz 2008, p. 141). It is unclear how to understand philosophers’ greater willingness to endorse action in the Actor condition on this interpretation.
The Footbridge case (Thomson 1976) resembles the Trolley case in that it involves bringing about the death of one person to save five, but it differs in that the agent must physically push the one person in front of the oncoming vehicle, rather than flipping a switch to direct danger toward a person.
More precisely, what subjects actually responded to was the following question: “Sometimes it is necessary to use one person’s death as a means to saving several other people—killing one helps you accomplish the goal of saving several. Other times one person’s death is a side-effect of saving several more people—the goal of saving several unavoidably ends up killing one as a consequence. Is the first morally better, worse, or the same as the second?” SC interpreted responses of ‘worse’ as endorsing the Doctrine (Schwitzgebel and Cushman 2012, pp. 138–140).
One of the scenarios involved a boxcar, rather than a trolley, moving under a footbridge occupied by a familiarly large man. Some of the scenarios used in other parts of the study were directly taken from well-known literature, such as those testing intuitions about moral luck (see Williams 1982; Nagel 1979).
See also other citations in note 11 above.
For example: Bennett et al. (2010) conducted an fMRI comparison of social processing in the brains of healthy human beings... and the brain of a dead fish. There was obviously good reason to predict that the dead fish’s brain would not respond selectively to images of social situations, yet seemingly it did! The purpose of this study, of course, was to point out that certain investigative techniques (in this case, inadequate statistical correction for multiple comparisons) lead to unreliable results. There is a similar logic in LeBel and Peters’ (2011) critique of social psychological research methods in the wake of Bem’s (2011) infamous demonstration of ‘precognition’.
I get this figure by taking the difference between the percentage of respondents who approved of the action in the Actor condition and the percentage who approved in the Observer condition. For the Trolley case, TBS report 36 % (Actor) versus 9 % (Observer) acceptance; the figures for Jim and the Indians are 89 and 64 % (Tobia et al. 2013, pp. 4–5). The assumption is that this percentage difference represents the fraction of subjects who would have responded differently had they been assigned to the other experimental condition. If one would not have responded differently, then one is not susceptible to the effect.
One might think that the fact that only a minority of philosophers apparently exhibited distorting effects is itself a point in favor of the expertise defense. Couldn’t expertise defense proponents simply insist that not all so-called philosophical ‘experts’ (those with doctorates in the field) really are expert in the relevant sense? In that case, expertise defense proponents can accept these results at face value; they need only admit that we will have some difficulty picking out the genuine experts. But this is not a good position for the expertise defense proponent, because each individual philosopher must wonder whether she is in the group affected by distortion. One cannot know the answer to this introspectively, and the data do appear to show that a sizable fraction of those with philosophical training remain affected by distortion. Hence epistemically responsible practice would seem to require having oneself empirically checked for distorting effects—exactly the sort of ‘psychologizing’ of philosophy resisted by expertise defense proponents.
Sosa (2007, 2010) claims that experimental findings involving even non-philosopher subjects (not professional philosophers) can be explained in a similar way. He argues that the apparent differences of opinion tracked by the studies may be caused by purely verbal ambiguity, rather than genuine difference. In particular, he says, “verbal reports by rushers-by on the street corner are hard to take seriously as expressive of considered views with full understanding of the issues under dispute” (Sosa 2010, p. 422). For related points, see Cullen (2010) and Bengson (2013).
To be precise: although Schwitzgebel and Cushman gave their subjects a forced choice in endorsing moral principles, they did use a Likert scale for responses to cases. However, their main results come from binary coding pairs of responses to cases as either ‘equivalent’ or ‘inequivalent’ (Schwitzgebel and Cushman 2012, pp. 140–141).
Thanks to Guy Kahane and Simon Rippon for suggestions on how to express this point—and special thanks to the latter for suggesting the Phil Papers survey as a comparison.
Thanks to both Shaun Nichols and Guy Kahane for correctly predicting that I would find philosophers ready to admit to diachronic instability on these issues if I asked around. (Though note that I did not necessarily say they are among that group).
Indeed, Wright herself suggests that “most moral philosophers and scientists already do this, treating clear/strong intuitions (especially their own) more severely than unclear/weak ones” (Wright 2010, p. 500). Grundmann (2010, p. 501) applies Wright’s data to the expertise debate in a similar way. But see also Zamzow and Nichols (2009, p. 374) for a word of caution on this point.
Grundmann (2010, p. 503) also objects to using familiar stimuli to test philosopher subjects, though on the different grounds that their responses will not be theory-neutral.
Thanks to Simon Rippon for helpful discussion of these ideas. A further option (quite different from that discussed in the text) would be to follow Schulz et al. (2011) and examine distorting effects that take place outside the experimental context. These studies did not manipulate subjects’ intuitions, so the familiarity problem does not arise. Instead they tested for already-existing personality traits and correlated these to philosophical views (see also Arvan 2013). On the assumption that the truth of philosophical views does not depend upon a philosopher’s personality traits, these findings would appear to undermine claims of philosophical expertise. But I leave this approach to the side here, as it is not as obvious that personality traits are ‘distorting’ in the same way as Actor-Observer and order effects. (See Zamzow and Nichols 2009 for reasons to tolerate or even favor personality-linked differences in philosophers’ views).
Alexander, J. (2012). Experimental philosophy: An introduction (1st ed.). Oxford: Polity.
Arvan, M. (2013). Bad news for conservatives? Moral judgments and the dark triad personality traits: A correlational study. Neuroethics, 6(2), 307–318.
Audi, R. (2008). Intuition, inference, and rational disagreement in ethics. Ethical Theory and Moral Practice, 11(5), 475–492.
Bealer, G. (2000). A theory of the a priori. Pacific Philosophical Quarterly, 81(1), 1–30.
Bem, D. J. (2011). Feeling the future: Experimental evidence for anomalous retroactive influences on cognition and affect. Journal of Personality and Social Psychology, 100(3), 407–425. doi:10.1037/a0021524.
Bengson, J. (2013). Experimental attacks on intuitions and answers. Philosophy and Phenomenological Research, 86(3), 495–532. doi:10.1111/j.1933-1592.2012.00578.x.
Bennett, C. M., Baird, A. A., Miller, M. B., & Wolford, G. L. (2010). Neural correlates of interspecies perspective taking in the post-mortem Atlantic Salmon: An argument for proper multiple comparisons correction. Journal of Serendipitous and Unexpected Results, 1(1), 1–5.
Bourget, D., & Chalmers, D. J. (2014). What do philosophers believe? Philosophical Studies, 170(3), 465–500.
Cappelen, H. (2012). Philosophy without intuitions. Oxford: Oxford University Press.
Cullen, S. (2010). Survey-driven romanticism. Review of Philosophy and Psychology, 1(2), 275–296. doi:10.1007/s13164-009-0016-1.
Driver, J. (2006). Ethics: The fundamentals (1st ed.). Oxford: Wiley-Blackwell.
Fischer, J. M., & Ravizza, M. (1992). Ethics: Problems and principles (1st ed.). Fort Worth: Harcourt Brace Jovanovich.
Foot, P. (1967). The problem of abortion and the doctrine of double effect. Oxford Review, 5, 5–15.
Gensler, H. J. (2011). Ethics: A contemporary introduction. Hoboken: Taylor & Francis.
Grundmann, T. (2010). Some hope for intuitions: A reply to Weinberg. Philosophical Psychology, 23(4), 481–509. doi:10.1080/09515089.2010.505958.
Horvath, J. (2010). How (not) to react to experimental philosophy. Philosophical Psychology, 23(4), 447–480. doi:10.1080/09515089.2010.505878.
Jones, E. E., & Nisbett, R. E. (1971). The actor and the observer: Divergent perceptions of the causes of behavior. In E. E. Jones, D. E. Kanouse, H. H. Kelly, R. E. Nisbett, S. Valins, & B. Weiner (Eds.), Attribution: Perceiving the causes of behavior (pp. 79–94). Morristown: General Learning Press.
Kagan, S. (1997). Normative ethics. Boulder: Westview Press.
Kahane, G., & Shackel, N. (2010). Methodological issues in the neuroscience of moral judgement. Mind & Language, 25(5), 561–582. doi:10.1111/j.1468-0017.2010.01401.x.
Kamm, F. M. (2000). The doctrine of triple effect and why a rational agent need not intend the means to his end. Aristotelian Society Supplementary Volume, 74(1), 21–39.
Kauppinen, A. (2007). The rise and fall of experimental philosophy. Philosophical Explorations, 10(2), 95–118.
Kornblith, H. (1998). The role of intuition in philosophical inquiry: An account with no unnatural ingredients. In Michael R. DePaul & William Ramsey (Eds.), Rethinking intuition: The psychology of intuition and its role in philosophical theory (pp. 129–141). New York: Rowman and Littlefield.
Lanteri, A., Chelini, C., & Rizzello, S. (2008). An experimental investigation of emotions and reasoning in the trolley problem. Journal of Business Ethics, 83(4), 789–804. doi:10.1007/s10551-008-9665-8.
LeBel, E. P., & Peters, K. R. (2011). Fearing the future of empirical psychology: Bem’s (2011) evidence of Psi as a case study of deficiencies in modal research practice. Review of General Psychology, 15(4), 371–379. doi:10.1037/a0025172.
Liao, S. M. (2009). The loop case and Kamm’s doctrine of triple effect. Philosophical Studies, 146(2), 223–231. doi:10.1007/s11098-008-9252-y.
Liao, S. M., Wiegmann, A., Alexander, J., & Vong, G. (2011). Putting the trolley in order: Experimental philosophy and the loop case. Philosophical Psychology, 25(5), 661–671.
Lombrozo, T. (2009). The role of moral commitments in moral judgment. Cognitive Science, 33(2), 273–286. doi:10.1111/j.1551-6709.2009.01013.x.
Ludwig, K. (2007). The epistemology of thought experiments: First person versus third person approaches. Midwest Studies In Philosophy, 31(1), 128–159. doi:10.1111/j.1475-4975.2007.00160.x.
Machery, E., Mallon, R., Nichols, S., & Stich, S. P. (2004). Semantics, cross-cultural style. Cognition, 92(3), B1–B12.
Nadelhoffer, T., & Feltz, A. (2008). The actor-observer bias and moral intuitions: Adding fuel to Sinnott-Armstrong’s fire. Neuroethics, 1(2), 133–144.
Nado, J. (2012). Why intuition? Philosophy and Phenomenological Research. doi:10.1111/j.1933-1592.2012.00644.x.
Nagel, T. (1979). Mortal questions. Cambridge: Cambridge University Press.
Petrinovich, L., & O’Neill, P. (1996). Influence of wording and framing effects on moral intuitions. Ethology & Sociobiology, 17(3), 145–171. doi:10.1016/0162-3095(96)00041-6.
Quinn, W. S. (1989). Actions, intentions, and consequences: The doctrine of double effect. Philosophy and Public Affairs, 18(4), 334–351.
Rawls, J. (1951). Outline of a decision procedure for ethics. The Philosophical Review, 60(2), 177–197.
Rawls, J. (1971). A theory of justice (1st ed.). Cambridge, MA: Harvard University Press.
Rini, R. A. (2014). Analogies, moral intuitions, and the expertise defence. Review of Philosophy and Psychology, 5(2), 169–181. doi:10.1007/s13164-013-0163-2.
Ryberg, J. (2013). Moral intuitions and the expertise defence. Analysis, 73(2), 3–9. doi:10.1093/analys/ans135.
Schulz, E., Cokely, E. T., & Feltz, A. (2011). Persistent bias in expert judgments about free will and moral responsibility: A test of the expertise defense. Consciousness and Cognition, 20(4), 1722–1731. doi:10.1016/j.concog.2011.04.007.
Schwitzgebel, E. (2009). Do ethicists steal more books? Philosophical Psychology, 22(6), 711–725. doi:10.1080/09515080903409952.
Schwitzgebel, E., & Cushman, F. (2012). Expertise in moral reasoning? Order effects on moral judgment in professional philosophers and non-philosophers. Mind & Language, 27(2), 135–153.
Schwitzgebel, E., Rust, J., Huang, L.-L., Moore, A. T., & Coates, J. (2012). Ethicists’ courtesy at philosophy conferences. Philosophical Psychology, 25(3), 331–340. doi:10.1080/09515089.2011.580524.
Shafer-Landau, R. (2009). The fundamentals of ethics. New York: Oxford University Press.
Singer, P. (1972). Moral experts. Analysis, 32(4), 115–117. doi:10.2307/3327906.
Sinnott-Armstrong, W. (2008). Framing moral intuition. In J. Doris (Ed.), Moral psychology, Vol. 2: The cognitive science of morality: Intuition and diversity (pp. 47–76). Cambridge, MA: MIT Press.
Smart, J. J. C., & Williams, B. (1973). Utilitarianism: For and against. Cambridge: Cambridge University Press.
Sosa, E. (2007). Experimental philosophy and philosophical intuition. Philosophical Studies, 132(1), 99–107.
Sosa, E. (2010). Intuitions and meaning divergence. Philosophical Psychology, 23(4), 419–426. doi:10.1080/09515089.2010.505859.
Swain, S., Alexander, J., & Weinberg, J. M. (2008). The instability of philosophical intuitions: Running hot and cold on truetemp. Philosophy and Phenomenological Research, 76(1), 138–155.
Thomson, J. J. (1976). Killing, letting die, and the trolley problem. The Monist, 59(2), 204–217.
Thomson, J. J. (2008). Turning the trolley. Philosophy and Public Affairs, 36(4), 359–374.
Tobia, K., Buckwalter, W., & Stich, S. (2013). Moral intuitions: Are philosophers experts? Philosophical Psychology, 26(5), 629–638. doi:10.1080/09515089.2012.696327.
Unger, P. (1996). Living high and letting die: Our illusion of innocence. Oxford: Oxford University Press.
Weinberg, J. M., & Alexander, J. (2014). The challenge of sticking with intuitions through thick and thin. In A. Booth & D. Rowbottom (Eds.), Intuitions (pp. 187–212). Oxford: Oxford University Press.
Weinberg, J. M., Gonnerman, C., Buckner, C., & Alexander, J. (2010). Are philosophers expert intuiters? Philosophical Psychology, 23(3), 331–355. doi:10.1080/09515089.2010.490944.
Weinberg, J. M., Nichols, S., & Stich, S. (2001). Normativity and epistemic intuitions. Philosophical Topics, 29(1–2), 429–460.
Wiegmann, A., Okan, Y., & Nagel, J. (2012). Order effects in moral judgment. Philosophical Psychology, 25(6), 813–836. doi:10.1080/09515089.2011.631995.
Williams, B. (1982). Moral luck. Cambridge: Cambridge University Press.
Williamson, T. (2004). Philosophical ‘Intuitions’ and scepticism about judgement. Dialectica, 58(1), 109–153.
Williamson, T. (2007). The philosophy of philosophy. Oxford: Wiley-Blackwell.
Williamson, T. (2011). Philosophical expertise and the burden of proof. Metaphilosophy, 42(3), 215–229. doi:10.1111/j.1467-9973.2011.01685.x.
Wright, J. C. (2010). On intuitional stability: The clear, the strong, and the paradigmatic. Cognition, 115(3), 491–503. doi:10.1016/j.cognition.2010.02.003.
Zamzow, J. L., & Nichols, S. (2009). Variations in ethical intuitions. Philosophical Issues, 19(1), 368–388. doi:10.1111/j.1533-6077.2009.00164.x.
Thanks to Wesley Buckwalter, Eric Schwitzgebel, Kevin Tobia, Guy Kahane, Simon Rippon, and two anonymous referees for Synthese for helpful comments on drafts of this paper, and to Nora Heinzelmann, Shaun Nichols, and Steven Lukes and the NYU Sociology of Morality Working Group for discussion. This research received sponsorship from the VolkswagenStiftung’s European Platform for Life Sciences, Mind Sciences, and the Humanities (grant II/85 063).
Rini, R.A. How not to test for philosophical expertise. Synthese 192, 431–452 (2015). https://doi.org/10.1007/s11229-014-0579-y
Keywords: Expertise defense · Moral intuition · Philosophical intuition