Bright mind, moral mind? Intelligence is unrelated to consequentialist moral judgment in sacrificial moral dilemmas

Abstract

The dual-process model of moral cognition suggests that outcome-focused, consequentialist moral judgment in sacrificial moral dilemmas is driven by a deliberative, reasoned, cognitive process. Although many studies have demonstrated a positive association of consequentialist judgment with measures of cognitive engagement, no work has investigated whether cognitive ability itself is also related to consequentialist judgment. Therefore, we conducted three studies to investigate whether participants’ preference for consequentialist moral judgment is related to their intelligence. A meta-analytic integration of these three studies (with a total N = 675) uncovered no association between the two measures (r = –.02). Furthermore, a Bayesian reanalysis of the same data provided substantial evidence in favor of a null effect (BF_H0 = 7.2). As such, the present studies show that if consequentialist judgments depend on deliberative reasoning, this association is not driven by cognitive ability, but by cognitive motivation.

When is it morally appropriate to disregard the rights of the individual for the wellbeing of the larger group? A burgeoning literature on people’s responses to ethical dilemmas has helped to provide an empirical backdrop on how we approach such issues. Central to this field is the study of trolley-style moral dilemmas in which participants are asked whether they consider it appropriate to actively sacrifice the life of a single individual to ensure that the lives of multiple others are saved. These dilemmas contrast an outcome-focused, consequentialist moral logic (i.e., sacrifice one to save many; Rosen, 2005) with a deontological moral logic that focuses on rights, duties, and a disavowal of active harm (Alexander & Moore, 2008). The main theory within the field, a dual-process model (Cushman, 2013; Greene, 2007), suggests that each of these two perspectives is related to a different psychological process. When confronted with a moral dilemma, two processes compete to determine our judgment: a fast, intuitive, automatic process linked with a preference for deontological moral judgment, and a cognitive, deliberative, reasoning-based process that steers our preference toward a consequentialist logic, which weighs the harms of each course of action against its potential benefits.

This dual-process model was advanced in seminal work by Greene, Sommerville, Nystrom, Darley, and Cohen (2001). Using neuroimaging techniques, they uncovered that consequentialist moral judgment was associated with increased activation in “cognitive” areas of the brain such as the dorsolateral prefrontal cortex, whereas deontological moral judgment was associated with increased activation in “emotional” areas of the brain such as the medial prefrontal cortex. The association of deontological judgment with emotional reactivity has been widely corroborated. For instance, individual differences in empathic concern are consistently associated with deontological judgment (r = .17, p = .02, N = 194—Kahane, Everett, Earp, Farias, & Savulescu, 2015; r = .28, p < .001, N = 112—Conway & Gawronski, 2013; r = .30, p < .001, N = 296—Reynolds & Conway, 2018; ds = 0.64 and 0.52, p < .001 and p < .001, Ns = 718 and 366—Gleichgerrcht & Young, 2013). Interestingly, these associations do not arise from concern for the sacrificial victim, but rather because people high in empathic concern find the sacrificial action itself more aversive (Miller, Hannikainen, & Cushman, 2014).

In contrast, the association of deliberate cognition with consequentialist judgment appears to be more tenuous. Greene, Morelli, Lowenberg, Nystrom, and Cohen (2008) attempted to experimentally decrease participants’ inclination to deliberate and found that a concurrent cognitive load decreased the speed of consequentialist but not of deontological judgments (p = .002, N = 82; see Note 1). Some studies have failed to replicate this effect (d = 0.10, p = .110, N = 311—Tinghög et al., 2016; p = .273, N = 85—Cova et al., 2018), whereas others have uncovered a load effect not on response time but on participants’ overall inclination toward consequentialist judgment (d = 0.73, p = .009, N = 57—Conway & Gawronski, 2013; partial η² = .033, p < .015, N = 191—Białek & De Neys, 2017; see also Trémolière & Bonnefon, 2014).

Relatedly, some studies have attempted to increase participants’ inclination to deliberate—for instance, by administering the Cognitive Reflection Test (CRT; Frederick, 2005). The CRT is a reasoning test that asks participants to solve mathematical riddles. Although the correct answers to these riddles require only elementary calculations, they necessitate the suppression of an intuitively appealing wrong answer. Paxton, Ungar, and Greene (2012) found that administering the CRT increased the likelihood of consequentialist judgment (d = 0.43, p = .05, N = 91), but another study failed to replicate this effect (d = – 0.13, p = .24, N = 297—Cova et al., 2018).

Individual difference studies are similarly mixed. Paxton, Ungar, and Greene (2012) reported a positive association of participants’ CRT scores with consequentialist judgment (r = .39, p = .001, N = 41). Aktaş, Yılmaz, and Bahçekapılı (2017) replicated this finding in a first study (r = .15, p < .01, N = 269), but not in a second one (r = .00, p > .05, N = 246), and neither did Cova et al. (2018; r = .08, p = .11, N = 316; see also Baron, Scott, Fincher, & Metz, 2015; Royzman, Landy, & Leeman, 2015).

One reason why the literature might be mixed is that deliberative reasoning has two components: a motivational component and an ability component. For a deliberative process to suppress intuitive processing, both the motivation to expend the necessary cognitive resources and the availability of these resources (i.e., cognitive ability) are relevant. The existing literature on the dual-process model for moral cognition has not differentiated between the motivational and ability components of deliberate reasoning. This is peculiar, as Evans and Stanovich (2013) have suggested that the ability component is in fact the “defining” aspect of deliberative reasoning. For most of the measures that have been used to study the association of “deliberate reasoning” with consequentialist judgment, the motivational and ability components are heavily entwined. For instance, the CRT is typically perceived as a measure of participants’ cognitive style (intuitive vs. reasoned), but it also correlates well with general intelligence (approximately r = .42, p < .001, N = 376; Saribay & Yilmaz, 2017). Any association that the CRT might (or might not) have with consequentialist judgment could be caused by either the motivational or the ability component of deliberate reasoning. Similarly, most experimental manipulations impact both motivation and ability simultaneously. A concurrent cognitive load not only hinders participants’ ability to deliberate, but also impacts their motivation to complete a second, demanding task (Roets & Van Hiel, 2011).

Despite the large literature on the association of consequentialist moral judgments with deliberative reasoning, we are not aware of any study that has directly investigated whether cognitive ability itself plays a role in this connection. Perhaps most similar is a series of studies by Moore, Clark, and Kane (2008), which investigated whether working memory capacity is related to consequentialist judgment and did not find a consistent effect. Investigating the association between intelligence and consequentialist moral reasoning would help clarify the nature of the inconsistent associations between consequentialist judgment and deliberative reasoning in the literature.

The present manuscript investigates this issue through an internal meta-analysis of three studies. The data for Study 1 were gathered as part of two unrelated projects. We decided to combine the cognitive ability and moral judgment data of both projects and investigate their possible association through an unplanned, exploratory test. The result of this test served as the impetus for gathering additional data. The data for Studies 2 and 3, although not preregistered, were gathered with the explicit intent of testing this association (see Note 2). No other hypotheses were explored for the latter two studies. We report how we determined our sample size, all data exclusions (none), and all measures in these studies.

Method

Participants and sample size

We conducted a total of three studies. Table 1 describes demographic statistics. For Studies 1 (n = 210) and 2 (n = 211), undergraduate students at a Belgian university completed the relevant measures for course credit. For Study 3 (n = 254), North American participants were recruited from Amazon Mechanical Turk and paid US$1.15. The participants in Studies 1 and 2 were able to choose from multiple time slots but were not informed about the nature of the studies that would be conducted during each time slot. For Studies 1 and 2, we aimed for samples with n > 200 (80% power for r ≥ .20). For Study 3, we aimed for a more powerful study (90% power when assuming a population effect size of r ≥ .20). No specific instructions were given to participants during any of the studies.
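These targets correspond to a standard power calculation for a two-sided correlation test. The sketch below, which assumes the pwr package in R (not mentioned in the original text), reproduces the approximate required sample sizes.

```r
# Approximate sample sizes required to detect a population correlation of r = .20
# at alpha = .05 (two-sided), for the power levels targeted in the studies.
library(pwr)

pwr.r.test(r = 0.20, power = 0.80, sig.level = 0.05)  # n of roughly 194 (Studies 1 and 2)
pwr.r.test(r = 0.20, power = 0.90, sig.level = 0.05)  # n of roughly 259 (Study 3)
```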

Table 1 Sample size and summary demographic statistics

Measures

Cognitive ability

In Studies 1 and 2, cognitive ability was measured with a shortened version of the Wilde Intelligence Test (λ2s = .70 and .75; see Kersting, Althoff, & Jäger, 2008). In this test, participants are presented with 45 logic problems tapping fluid intelligence and are instructed to solve as many problems as possible in 12 min. The number of correct responses constitutes the participant’s ability score.

In Study 3, cognitive ability was measured as the number of correct responses on the ten-item WordSum test (α = .77), a vocabulary subtest from the Wechsler Adult Intelligence Scale (Zhu & Weiss, 2005) that is used as a measure of general intelligence in the General Social Survey. In this test, participants are presented with ten target words and, for each target word, have to select the word that comes closest to its meaning from a set of five options.

Preference for consequentialist (and for deontological) judgment

We used two different measures for these constructs. In Studies 1 and 3, participants were presented with a battery of ten trolley-style dilemmas (Bostyn, Sevenhant, & Roets, 2019) and were asked, for each of the two possible options within each dilemma, to what extent they considered that option to be morally appropriate, on a scale from 1 (completely inappropriate) to 5 (completely appropriate). This battery includes a mix of personal and impersonal dilemmas (see Note 3). Participants’ preference for consequentialist judgment was calculated by averaging their appropriateness ratings of the consequentialist options (αs = .87 and .88). A preference for deontological judgment was calculated similarly (αs = .85 and .89). Deontological and consequentialist reasoning are envisioned as being driven by dissociable and independent mental processes (Conway & Gawronski, 2013; Greene et al., 2001). As such, we did not expect to find an association between cognitive ability and deontological reasoning.
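As a concrete illustration of this scoring, the sketch below (in R, with hypothetical column names and randomly generated placeholder ratings) averages the appropriateness ratings per participant and computes the subscale reliabilities.

```r
# Hypothetical column names; each rating ranges from 1 (completely inappropriate)
# to 5 (completely appropriate). The data generated here are random placeholders.
library(psych)

conseq_items <- paste0("conseq_", 1:10)
deont_items  <- paste0("deont_",  1:10)

dilemmas <- data.frame(matrix(sample(1:5, 50 * 20, replace = TRUE), nrow = 50,
                              dimnames = list(NULL, c(conseq_items, deont_items))))

# Preference scores: mean appropriateness rating across the ten dilemmas
conseq_pref <- rowMeans(dilemmas[, conseq_items])
deont_pref  <- rowMeans(dilemmas[, deont_items])

# Internal consistency (Cronbach's alpha) of each subscale
alpha(dilemmas[, conseq_items])$total$raw_alpha
alpha(dilemmas[, deont_items])$total$raw_alpha
```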

In Study 2, moral preferences were measured through a process dissociation approach developed by Conway and Gawronski (2013). This procedure contrasts participants’ responses on congruent dilemmas with their responses on incongruent dilemmas. Both types of dilemmas have the same structure as traditional trolley-style moral dilemmas. On incongruent dilemmas (like traditional trolley-style dilemmas), each moral preference is associated with a different response (e.g., “Torture someone to stop a bomb from exploding”), whereas on congruent dilemmas, preferences for consequentialist and deontological judgment suggest the same response, because the benefit that would be gained does not outweigh the sacrificial harm (e.g., “Torture someone to stop them from vandalizing a bus stop”). Participants were confronted with 20 dilemmas, ten of each kind, and were asked to report, in a binary fashion (yes/no), whether the suggested sacrificial harm was morally appropriate. Each moral preference was then calculated through a set of equations (Conway & Gawronski, 2013). Some of the original dilemmas from Conway and Gawronski were interchanged with alternatives that were more culturally appropriate for our sample (see Bostyn, Roets, & Van Hiel, 2016). All dilemmas used in all studies were framed from a first-person perspective and are available at https://osf.io/txvjb/.
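The equations in question can be written compactly: on congruent dilemmas the harm can be judged unacceptable through either inclination, whereas on incongruent dilemmas only the deontological inclination rejects it. The sketch below (in R, with hypothetical variable names) is a rewritten form of the Conway and Gawronski (2013) processing-tree equations and recovers both parameters from a participant’s proportions of “unacceptable” responses.

```r
# p_unacc_congruent:   proportion of "harm is unacceptable" responses on congruent dilemmas
# p_unacc_incongruent: proportion of "harm is unacceptable" responses on incongruent dilemmas
pd_parameters <- function(p_unacc_congruent, p_unacc_incongruent) {
  # Processing tree implies:
  #   P(unacceptable | congruent)   = C + (1 - C) * D
  #   P(unacceptable | incongruent) =     (1 - C) * D
  C <- p_unacc_congruent - p_unacc_incongruent   # consequentialist (utilitarian) parameter
  D <- p_unacc_incongruent / (1 - C)             # deontological parameter
  c(consequentialist = C, deontological = D)
}

# Example: a participant who rejects the harm on 9 of 10 congruent
# and 5 of 10 incongruent dilemmas
pd_parameters(p_unacc_congruent = 0.9, p_unacc_incongruent = 0.5)
```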

Results

The data and statistical code are available at https://osf.io/z7uxe/. In each study, we correlated participants’ preferences for consequentialist or deontological moral judgment with their cognitive ability. We then conducted a random-effects meta-analysis with a Paule–Mandel estimator using the metafor package in R (Viechtbauer, 2010). Figure 1 displays the results of each study. Interestingly, we uncovered no association between cognitive ability and the participants’ propensity for consequentialist judgment, r_meta = –.02, p_meta = .415. A large amount of heterogeneity was present, I² = 71%, 95% CI [0%, 99%], τ² = .10. However, given the small number of studies included in this meta-analysis, we caution against interpretation of these heterogeneity estimates.
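For readers who wish to reproduce this step from the posted data, the sketch below shows the general form of the analysis, assuming the metafor package; the correlations entered here are illustrative placeholders rather than the observed study-level values (which are shown in Fig. 1 and available at the OSF repository).

```r
library(metafor)

ni <- c(210, 211, 254)          # sample sizes of Studies 1-3
ri <- c(-0.10, 0.05, -0.02)     # placeholder correlations; substitute the observed values

# Fisher r-to-z transform, then a random-effects model with the Paule-Mandel estimator
dat <- escalc(measure = "ZCOR", ri = ri, ni = ni)
res <- rma(yi, vi, data = dat, method = "PM")

summary(res)                          # pooled estimate (in z units), tau^2, I^2
predict(res, transf = transf.ztor)    # pooled estimate back-transformed to r
```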

Fig. 1 Correlations of intelligence with preference for consequentialist moral judgment.

To quantify the strength of the evidence in favor of a null effect as compared to the expected positive association, we calculated a directional meta-analytic Bayes factor with the metaBMA package in R (Heck, Gronau, & Wagenmakers, 2017), using a model-averaging approach that weights the results of fixed- and random-effects meta-analyses. We used a half-Normal prior (μ = 0, σ = 0.3) for the effect size and a half-Cauchy prior (scale = 0.5) for the between-study variance (the default options in the package). This analysis suggested that, on the basis of the present work, a null association between preference for consequentialist moral judgment and intelligence is 7.2 times more credible than the expected positive association. A prior sensitivity analysis using 36 different prior combinations (reported in the online supplementary materials, available at https://osf.io/wfasb/) found that BF_H0 ranged from 2.73 to 199.5. The smallest Bayes factors were obtained with priors that concentrated the expected effect close to zero, and the largest with priors that assumed a large positive effect.
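A corresponding sketch of the Bayesian step is given below, again with placeholder correlations. It assumes the meta_bma() interface of the metaBMA package; the prior-specification syntax may differ slightly across package versions.

```r
library(metafor)
library(metaBMA)

ni  <- c(210, 211, 254)
ri  <- c(-0.10, 0.05, -0.02)    # placeholder correlations; substitute the observed values
dat <- escalc(measure = "ZCOR", ri = ri, ni = ni)

fit <- meta_bma(
  y      = dat$yi,              # Fisher-z correlations
  SE     = sqrt(dat$vi),        # their standard errors
  labels = c("Study 1", "Study 2", "Study 3"),
  # Directional (positive) half-Normal prior on the pooled effect and half-Cauchy
  # prior on the between-study SD, as described in the text; exact prior syntax
  # may vary with the metaBMA version installed.
  d   = prior("norm",   c(mean = 0, sd = 0.3), lower = 0),
  tau = prior("cauchy", c(location = 0, scale = 0.5), lower = 0)
)

fit   # evidence for the null relative to a positive effect is 1 / BF10 for the averaged effect
```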

Finally, though it was not the focus of the present studies, we also uncovered no evidence for an association between preference for deontological judgment and cognitive ability, r_meta = .04, p_meta = .133, τ² = 0.02, I² = 8.78%, 95% CI [0%, 98%].

Discussion

The dominant theoretical framework for moral cognition, the dual-process model, states that consequentialist judgment is driven by a deliberative cognitive process rather than by automatic processing. Research on this issue has uncovered a mixed set of findings, with some studies reporting positive effects and others reporting null effects (see above). Importantly, previous research did not distinguish between the motivational and ability components of deliberate cognition. Investigating whether cognitive ability is related to consequentialist reasoning can inform which specific aspects of deliberative processing (if any) are driving the overall association. Based on previous work, one could have expected a positive association; however, across a set of three studies, we uncovered no evidence for an association (r_meta = –.02).

The present results clarify some aspects of the dual-process model for moral cognition. To the extent that previous research has uncovered associations of measures of “deliberate cognition” with increased consequentialist responding, our studies suggest that these associations are likely driven by participants’ cognitive motivation and not by their cognitive ability. Accordingly, these results qualify earlier work on the effect of cognitive load manipulations on moral reasoning (such as Greene et al., 2008) and suggest that these load manipulations exert their effect by inhibiting cognitive motivation rather than by reducing ability.

One could argue that the lack of an association of consequentialist reasoning with intelligence is not surprising, given the limited mathematical complexity of the 1-versus-5 comparison. However, reducing consequentialist choice to a game of “pick the higher number” ignores the maze of conflicting moral norms one has to navigate to make this choice. The complexity of this type of moral cognition does not lie in the math of the cost–benefit analysis; it lies in judging whether the consequentialist benefit outweighs the violation of several moral norms. Trolley dilemmas are hard not because the underlying math is hard, but because weighing norms is hard. Additionally, consequentialist decisions require participants to assume responsibility for the dilemma situation. This puts them in social jeopardy, as research has shown that consequentialist decision makers are seen as cold, unempathic, and less trustworthy (Bostyn & Roets, 2017; Everett, Faber, Savulescu, & Crockett, 2018; Everett, Pizarro, & Crockett, 2016; Uhlmann, Zhu, & Tannenbaum, 2013). Given the social and moral complexities involved, cognitive ability could very well have impacted participants’ decision making.

In any case, this null effect raises the question of how measures of cognitive motivation can be associated with consequentialist decision making in the absence of an effect of ability. How can the motivation to deliberate have an impact when the ability to deliberate does not? One potential answer could be that individuals with a high motivation for deliberative thinking simply take more time to respond to dilemmas. Previous research has suggested that deontological judgment is driven by a strong emotionally aversive reaction to the sacrificial harm suggested in a trolley-style moral dilemma (Greene, 2007). If so, then taking longer to respond might lessen the impact of this emotional reaction. Perhaps the association of motivational measures with consequentialist decision making is not due to increased deliberation per se, but rather due to attenuation of the initial emotional response.

The present studies have some limitations. A first limitation is that our studies investigated moral decision making using hypothetical dilemmas. In all dilemmas, participants were confronted with a limited set of potential actions, and the outcome of each action was predetermined. Although such dilemmas are common in psychological research, they might be too simplistic to capture any role of cognitive ability in moral decision making. Real-life moral decisions are fraught with uncertainty, and in contrast to hypothetical judgments, the decisions made are actually consequential. We cannot preclude the possibility that real-life moral decision making is more cognitively demanding than hypothetical decision making. Similarly, given that the present studies investigated moral decision making only in the context of sacrificial moral dilemmas, we should be careful not to generalize our conclusions beyond such dilemmas. It is possible that other types of consequentialist moral reasoning (cf. impartial beneficence; Kahane et al., 2018) are associated with cognitive ability. Finally, we restricted our investigation to the effects of cognitive ability in isolation from any measures of cognitive motivation. One could assume that any effect of cognitive ability would be most pronounced for participants who are also highly motivated to engage in deliberative reasoning. Although there is merit to a study including such variables, our samples contained participants who were both high and low in motivation. Even if cognitive ability and motivation interact, and even if the effect of cognitive ability emerges only for those who score high on motivation, we should still have uncovered an attenuated main effect of cognitive ability (see the illustrative sketch below). Since our meta-analytic estimate was negative, we think it unlikely that this possibility could explain our findings.
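The logic of this last argument can be made concrete with a small, purely illustrative simulation (not part of the reported analyses): when ability predicts consequentialist preference only among motivated participants, the marginal ability-preference correlation is attenuated but still clearly positive.

```r
# Illustrative simulation: ability affects preference only for the motivated half
# of the sample, yet the marginal correlation remains positive (roughly half the
# size of the correlation within the motivated subgroup).
set.seed(1)
n          <- 100000
ability    <- rnorm(n)
motivated  <- rbinom(n, 1, 0.5)
preference <- 0.3 * ability * motivated + rnorm(n)

cor(ability, preference)                                   # about .15 overall
cor(ability[motivated == 1], preference[motivated == 1])   # about .29 among the motivated
```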

In any case, Greene (2014) has argued that societal progress relies on assuming the meta-ethical perspective offered by consequentialist morality. At least from that vantage point, it seems encouraging that people’s ability to take a consequentialist perspective is not hindered by limitations of their cognitive ability.

Open practice statement

The data, statistical code, and materials for all studies are available at https://osf.io/z7uxe/.

Notes

  1.

    Given that the Greene et al. (2008) study involves a within-subjects design, we were unable to straightforwardly compute an effect size estimate from the data provided in the manuscript.

  2.

    The cognitive ability measure from Study 2 was also used in an unrelated study (De keersmaecker, J., Dunning, D., Pennycook, G., Rand, D. G., Sanchez, C., Unkelbach, C., & Roets, A. (2019). Investigating the robustness of the illusory truth effect across individual differences in cognitive ability, need for cognitive closure, and cognitive style. Personality and Social Psychology Bulletin. https://doi.org/10.1177/0146167219853844). Only the cognitive ability measure was shared among datasets.

  3.

    Analyzing our data along this dimension did not impact any of the reported results.

References

  1. Aktaş, B., Yılmaz, O., & Bahçekapılı, H. G. (2017). Moral pluralism on the trolley tracks: Different normative principles are used for different reasons in justifying moral judgments. Judgment and Decision Making, 12, 297–307.

  2. Alexander, L., & Moore, M. (2008). Deontological ethics. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. Retrieved from http://plato.stanford.edu/archives/fall2008/entries/ethics-deontological/

  3. Baron, J., Scott, S., Fincher, K., & Metz, S. E. (2015). Why does the Cognitive Reflection Test (sometimes) predict utilitarian moral judgment (and other things)? Journal of Applied Research in Memory and Cognition, 4, 265–284.

  4. Białek, M., & De Neys, W. (2017). Dual processes and moral conflict: Evidence for deontological reasoners’ intuitive utilitarian sensitivity. Judgment and Decision Making, 12, 148–167.

  5. Bostyn, D. H., & Roets, A. (2017). Trust, trolleys and social dilemmas: A replication study. Journal of Experimental Psychology: General, 146(5), e1–e7. https://doi.org/10.1037/xge0000295

  6. Bostyn, D. H., Roets, A., & Van Hiel, A. (2016). Right-wing attitudes and moral cognition: Are right-wing authoritarianism and social dominance orientation related to utilitarian judgment? Personality and Individual Differences, 96, 164–171.

  7. Bostyn, D. H., Sevenhant, S., & Roets, A. (2019). Beyond physical harm: How preference for consequentialism and primary psychopathy relate to decisions on a monetary trolley dilemma. Thinking & Reasoning, 25, 192–206. https://doi.org/10.1080/13546783.2018.1497536

  8. Conway, P., & Gawronski, B. (2013). Deontological and utilitarian inclinations in moral decision making: A process dissociation approach. Journal of Personality and Social Psychology, 104, 216–235. https://doi.org/10.1037/a0031021

  9. Cova, F., Strickland, B., Abatista, A., Allard, A., Andow, J., Attie, M., … Cushman, F. (2018). Estimating the reproducibility of experimental philosophy. Review of Philosophy and Psychology, 1–36. https://doi.org/10.1007/s13164-018-0400-9

  10. Cushman, F. (2013). Action, outcome, and value: A dual-system framework for morality. Personality and Social Psychology Review, 17, 273–292.

  11. Evans, J. S. B., & Stanovich, K. E. (2013). Dual-process theories of higher cognition: Advancing the debate. Perspectives on Psychological Science, 8, 223–241.

  12. Everett, J. A., Faber, N. S., Savulescu, J., & Crockett, M. J. (2018). The costs of being consequentialist: Social inference from instrumental harm and impartial beneficence. Journal of Experimental Social Psychology, 79, 200–216.

  13. Everett, J. A., Pizarro, D. A., & Crockett, M. J. (2016). Inference of trustworthiness from intuitive moral judgments. Journal of Experimental Psychology: General, 145, 772–787. https://doi.org/10.1037/xge0000165

  14. Frederick, S. (2005). Cognitive reflection and decision making. Journal of Economic Perspectives, 19, 25–42.

  15. Gleichgerrcht, E., & Young, L. (2013). Low levels of empathic concern predict utilitarian moral judgment. PLoS ONE, 8, e60418. https://doi.org/10.1371/journal.pone.0060418

  16. Greene, J. D. (2007). The secret joke of Kant’s soul. In W. Sinnott-Armstrong (Ed.), Moral psychology: Vol. 3. The neuroscience of morality: Emotion, disease, and development (pp. 35–80). Cambridge, MA: MIT Press.

  17. Greene, J. D. (2014). Moral tribes: Emotion, reason, and the gap between us and them. New York, NY: Penguin.

  18. Greene, J. D., Morelli, S. A., Lowenberg, K., Nystrom, L. E., & Cohen, J. D. (2008). Cognitive load selectively interferes with utilitarian moral judgment. Cognition, 107, 1144–1154.

  19. Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293, 2105–2108. https://doi.org/10.1126/science.1062872

  20. Heck, D. W., Gronau, Q. F., & Wagenmakers, E.-J. (2017). metaBMA: Bayesian model averaging for random and fixed effects meta-analysis (R package). Retrieved from https://cran.r-project.org/package=metaBMA.

  21. Kahane, G., Everett, J. A. C., Earp, B. D., Farias, M., & Savulescu, J. (2015). “Utilitarian” judgments in sacrificial moral dilemmas do not reflect impartial concern for the greater good. Cognition, 134, 193–209.

  22. Kahane, G., Everett, J. A., Earp, B. D., Caviola, L., Faber, N. S., Crockett, M. J., & Savulescu, J. (2018). Beyond sacrificial harm: A two-dimensional model of utilitarian psychology. Psychological Review, 125, 131–164. https://doi.org/10.1037/rev0000093

  23. Kersting, M., Althoff, K., & Jäger, A. O. (2008). Wilde-Intelligenz-Test 2 (WIT-2) [Wilde Intelligence Test 2 (WIT-2)]. Göttingen, Germany: Hogrefe.

  24. Miller, R. M., Hannikainen, I. A., & Cushman, F. A. (2014). Bad actions or bad outcomes? Differentiating affective contributions to the moral condemnation of harm. Emotion, 14, 573–587.

  25. Moore, A. B., Clark, B. A., & Kane, M. J. (2008). Who shalt not kill? Individual differences in working memory capacity, executive control, and moral judgment. Psychological Science, 19, 549–557.

  26. Paxton, J. M., Ungar, L., & Greene, J. D. (2012). Reflection and reasoning in moral judgment. Cognitive Science, 36, 163–177.

  27. Reynolds, C. J., & Conway, P. (2018). Not just bad actions: Affective concern for bad outcomes contributes to moral condemnation of harm in moral dilemmas. Emotion, 18, 1009–1023.

  28. Roets, A., & Van Hiel, A. (2011). Impaired performance as a source of reduced energy investment in judgment under stressors. Journal of Cognitive Psychology, 23, 625–632.

  29. Rosen, F. (2005). Classical utilitarianism from Hume to Mill. New York, NY: Routledge.

  30. Royzman, E. B., Landy, J. F., & Leeman, R. F. (2015). Are thoughtful people more utilitarian? CRT as a unique predictor of moral minimalism in the dilemmatic context. Cognitive Science, 39, 325–352.

  31. Saribay, S. A., & Yilmaz, O. (2017). Analytic cognitive style and cognitive ability differentially predict religiosity and social conservatism. Personality and Individual Differences, 114, 24–29.

  32. Trémolière, B., & Bonnefon, J. F. (2014). Efficient kill–save ratios ease up the cognitive demands on counterintuitive moral utilitarianism. Personality and Social Psychology Bulletin, 40, 923–930.

  33. Tinghög, G., Andersson, D., Bonn, C., Johannesson, M., Kirchler, M., Koppel, L., & Västfjäll, D. (2016). Intuition and moral decision-making—The effect of time pressure and cognitive load on moral judgment and altruistic behavior. PLoS ONE, 11, e0164012. https://doi.org/10.1371/journal.pone.0164012

  34. Uhlmann, E. L., Zhu, L. L., & Tannenbaum, D. (2013). When it takes a bad person to do the right thing. Cognition, 126, 326–334.

  35. Viechtbauer, W. (2010). Conducting meta-analyses in R with the metafor package. Journal of Statistical Software, 36, 1–48.

  36. Zhu, J., & Weiss, L. (2005). The Wechsler scales. In D. P. Flanagan & P. L. Harrison (Eds.), Contemporary intellectual assessment: Theories, tests, and issues (2nd ed., pp. 297–324). New York, NY: Guilford Press.

Author information

Corresponding author

Correspondence to D. H. Bostyn.

Cite this article

Bostyn, D.H., De Keersmaecker, J., Van Assche, J. et al. Bright mind, moral mind? Intelligence is unrelated to consequentialist moral judgment in sacrificial moral dilemmas. Psychon Bull Rev 27, 392–397 (2020). https://doi.org/10.3758/s13423-019-01676-9

Keywords

  • Cognitive ability
  • Intelligence
  • Moral judgment
  • Consequentialism
  • Trolley dilemmas