Mind & Society, Volume 7, Issue 2, pp 215–226

Taboo or tragic: effect of tradeoff type on moral choice, conflict, and confidence

Authors

  • David R. Mandel, Defence Research and Development Canada
  • Oshin Vartanian, Defence Research and Development Canada
Original Article

DOI: 10.1007/s11299-007-0037-3

Cite this article as:
Mandel, D.R. & Vartanian, O. Mind Soc (2008) 7: 215. doi:10.1007/s11299-007-0037-3

Abstract

Historically, cognitivists considered moral choices to be determined by analytic processes. Recent theories, however, have emphasized the role of intuitive processes in determining moral choices. We propose that the engagement of analytic and intuitive processes is contingent on the type of tradeoff being considered. Specifically, when a tradeoff necessarily violates a moral principle no matter what choice is made, as in tragic tradeoffs, its resolution should result in greater moral conflict and less confidence in choice than when the tradeoff offers a moral escape route, as in taboo tradeoffs. We manipulated tradeoff type in a between-subjects design and confirmed the prediction that tragic tradeoffs prompt more conflict and less confidence than taboo tradeoffs. The findings further revealed that moral conflict mediated the effect of tradeoff type on confidence. The study sheds light on the manner in which human minds resolve moral problems involving social agents.

Keywords

Analytic and intuitive processes · Subjective mental effort · Problem difficulty · Moral choice · Confidence · Moral conflict · Omission bias · Trolley problem

1 Introduction

The ability to take moral considerations into account when making decisions has long been regarded as a distinguishing feature of human consciousness. Buttressing this view, early cognitivist theories of moral choice placed a strong emphasis on the role of analytic processes in deliberating about moral problems (see Shweder and Haidt 1993). These analytic processes were hypothesized to increase in complexity and sophistication over a series of developmental stages, and the higher stages involved reasoning about moral issues such as justice in accordance with abstract principles that require analytic thinking (Kohlberg 1984; Piaget 1965). More recently, however, some theorists have proposed that analytic thinking takes a secondary role to intuitive processes. For instance, according to Haidt’s (2001, 2004) influential social-intuitionist account, people tend to reach moral judgments quickly on the basis of emotionally driven intuitions, and they flesh out decision-bolstering reasons using analytic thinking only if held accountable by others.

In the present article, we refer to intuitive and analytic processes as the low and high endpoints of a continuum of subjective mental effort, respectively. Our thesis is that moral judgment and choice can be more or less effortful—namely, more analytic or more intuitive—and that much of the variance in the degree of subjective mental effort applied to a moral problem will depend on the characteristics of the problem itself. This thesis is in line with recent trends in the literature that focus on the interplay between problem and agent characteristics in understanding decision making in the moral domain (see Krebs and Denton 2005). In this respect, what is sorely needed is a clearer understanding of the features that make a moral problem subjectively easy or hard to resolve.

Our focus in this paper is primarily on one feature that we hypothesize to be important in this regard; namely, whether or not the problem permits one to arrive at a choice without violating a moral prohibition, edict, or norm. Specifically, we hypothesize that moral problems allowing for such an escape route will be perceived as easier to resolve than those that do not allow the decision maker to avoid violating a moral norm. This problem cannot be addressed in a typical study of moral decision making, in which the focus is exclusively on people’s choices, because choices per se do not directly reveal how much conflict the decision maker may have actually endured in reaching the decision. Rather, the subjective ease or difficulty of choice (or of the problem for which a choice must be made) is likely to be revealed by decision makers’ degree of perceived conflict in reaching their decision and by their confidence in that choice.

In support of this view, Simmons and Nelson (2006) have shown that participants who chose an intuitively appealing option over a less intuitively appealing one of equal expected value tended to feel more confident in their choices than participants who chose the less-intuitive option. For instance, in some of their studies, participants were asked to choose between two sports teams given a point spread that in fact made the two alternatives equally probable. In line with intuition, the majority of bets were placed on the favorite team, and those who bet on this intuitive option felt more confident than those who betrayed their intuition and bet on the underdog (given the point spread). In short, Simmons and Nelson (2006) found that the “easy decision” of picking the intuitive option led people to feel more confident in their choices than those who chose the counterintuitive option (despite its equal likelihood of yielding a winning bet). Given the importance of examining indirect measures of subjective mental effort in the context of moral choice, we conducted an experiment that directly examined the effect of problem type on the levels of perceived moral conflict and confidence in choice.

1.1 Determinants of subjective mental effort in moral decision making

Subjective assessments of mental effort and problem difficulty may be viewed as opposing sides of the same coin. In general, a problem will be perceived as difficult if its resolution feels effortful. The substantive difference is one of emphasis: the term mental effort emphasizes the cognitive processes involved in resolving a problem, whereas the term problem difficulty emphasizes features of task structure. Thus, to say that problem difficulty is a determinant of mental effort is more of a tautological statement than an explanatory one, and we wish to acknowledge this fact at the outset of our discussion. Determinants of both, we would argue, may be located in features of problems as well as social–cognitive constraints on decision makers. Our primary interest here is in the nexus of these features—that is, problem features that elucidate constraints on human minds faced with compelling social problems of a moral nature.

Recent research by Greene et al. (2004) has revealed intriguing findings regarding this issue. These authors compared the effect of what they termed easy and hard moral problems on decision makers’ response times and patterns of cortical activation. Greene et al.’s (2004) hard moral problems had a specific structure: namely, they pitted the negative social-emotional response associated with violating a moral norm against a more abstract, cognitive response that maximized the welfare of the aggregate. For example, one of their hard problems, called “crying baby,” involved a situation in which a mother in a war-torn region is faced with the following choice: smother her crying child, which would save her own life as well as the lives of other hiding villagers, or do not smother her child, in which case enemy soldiers would kill everyone. According to Greene et al. (2004), this is a hard problem because in order to maximize the aggregate good (i.e., to save more lives) the mother must opt to kill her own child, which would in turn result in the negative social-emotional state associated with violating a moral norm (i.e., do not kill children). In contrast, according to Greene et al. (2004), an example of an easy problem was one involving a teenage mother who had to decide whether or not to kill her unwanted newborn infant. Greene et al. (2004) treat this as an easy problem because, in their view, maximizing the aggregate good does not require the violation of a moral norm. That is, the decision not to kill the infant is in line with the moral norm prohibiting murder and is also presumably in line with maximizing the aggregate good (i.e., the baby lives). Greene et al. (2004) found that participants took longer to make a decision for hard problems than for easy problems. They also found that compared to easy problems, hard problems activated cortical areas known to mediate the effortful processing of conflict, abstract reasoning, and cognitive control.

According to Greene et al. (2004), choosing to maximize the aggregate good is difficult in hard problems because by doing so one must violate a moral norm, which in turn elicits a strong negative social–emotional response. This has the effect of inhibiting the consideration of norm-violating acts in order to avoid the negative emotional response that accompanies norm violation. More generally, we propose that the subjective difficulty of moral problems is determined largely by whether at least one of the available options allows the decision maker to escape from having to violate what otherwise ought to be an inviolable moral principle. In the absence of such an escape route, the decision maker is likely to face a conflict situation that may be difficult to resolve, especially if the stakes are high. Indeed, if every option leads to a fundamental moral breach then the conflict may very well appear intractable. Thus, we would argue that the crying-baby problem is hard precisely because no matter what the mother chooses, she is forced to violate a moral rule with dire consequences. To summarize, then, we propose that assessments of problem difficulty (on the task side) and mental effort (on the cognitive processes side) will be direct reflections of the level of moral conflict experienced by the decision maker.

Our analysis is closely related to Tetlock et al.’s (2000) distinction between tragic and taboo tradeoffs. A key feature of tragic tradeoffs is that they force decision makers to choose the lesser of multiple evils, thus making them hard to reason through or “dilemma-ish.” For example, having to decide which of two children in need of a liver transplant will get the only available liver would represent a tragic tradeoff. Much like Greene et al.’s (2004) hard problems, the difficulty of tragic tradeoffs is due to their structure: no matter what choice is made, the decision maker is forced to violate an absolute prohibition, such as killing someone or letting someone die when one could have prevented it. By contrast, taboo tradeoffs offer decision makers a moral “way out,” provided they accept the constitutive incommensurability of the tradeoff; that is, provided they abide by social norms proscribing the monetization of certain values, such as the protection of human life, or otherwise treat such values in a non-compensatory manner. For instance, having to decide between conducting a costly liver transplant for a sick child or allocating the same sum of money toward renovations in the hospital would represent a taboo tradeoff. Note that the taboo–tragic distinction does not necessarily map onto the easy–hard distinction perfectly in this example. It may very well be the case that allocating the money to the renovation yields the greatest aggregate good, which would make this a hard problem according to Greene et al.’s (2004) criteria. In our view, the life-versus-money tradeoff is in an important sense not a hard problem precisely because the moral proscription against monetizing life gives the decision maker a moral way out. Thus, our parsing of the moral problem space comes somewhat closer to Tetlock et al.’s taboo–tragic distinction than to Greene et al.’s easy–hard distinction.

We draw on Tetlock et al.’s (2000) terminology in this paper precisely because the notion of what makes some tradeoffs tragic as opposed to taboo is indicative, we believe, of an important proximal determinant of mental effort. However, the present research also goes beyond past work by directly examining the effect of tradeoff type on decision makers’ assessments of the moral conflict they experienced in reaching their choices. Specifically, we hypothesized that, compared to taboo tradeoffs, tragic tradeoffs will prompt higher levels of moral conflict, which we define in the present study as the extent to which one regards the relevant problem as a moral dilemma.

1.2 Confidence in moral choice

A second, inter-related objective of the present research was to introduce the measurement of confidence into studies of moral decision making. As noted earlier, we believe that the literature on moral decision making would benefit by doing so because such studies can shed light on the experience of mental effort required to resolve moral problems. For example, consider the following moral dilemma known as the trolley problem (Foot 1967; Thomson 1976): a runaway trolley is about to kill five workmen, and you are the only one who can intervene. If you do nothing, the five workmen will die. Alternatively, if you flip a switch that turns the trolley onto another track, the five workmen would be saved, but one workman on the other track would die. What would you do: let the five workmen die, or save them but kill another man? If you were like most people, you would choose the latter option (Greene et al. 2001).

However, now consider a version of the trolley problem in which the lone workman was replaced by a priceless statue. What would you do then? We predicted that in the statue version most participants would similarly opt to save the five workmen (and let the statue be destroyed). At the same time, we also predicted that participants would be less confident in their choices involving the tragic tradeoff (i.e., the workman version) than in their choices involving the taboo tradeoff (i.e., the statue version). Thus, we anticipated that even though this manipulation of tradeoff type would have a negligible effect on choice, it would have a significant effect on confidence. The predicted effect of tradeoff on confidence, we further hypothesized, would be mediated by participants’ level of moral conflict. That is, we anticipated that the tragic tradeoff would generate more conflict than the taboo tradeoff, and that variation in moral conflict would, in turn, mediate the effect of tradeoff on confidence such that conflict would be inversely related to confidence.

Our inclusion of confidence and conflict measures also allowed us to test Simmons and Nelson’s (2006) intuitive betrayal hypothesis, which as discussed earlier involves the tendency for people who betray their intuitions to feel less confident in their choices than people who choose in line with their intuitions. In the present study, as in some of Simmons and Nelson’s studies, we define the intuitive option as the one selected by the majority of participants. Thus, the intuitive betrayal hypothesis predicts that participants making the majority choice would feel more confident than those making the minority choice. We extended this test as well to our measure of moral conflict.

1.3 Acts of omission versus commission

Another objective of our research was to investigate whether norm-violating options involving an act of commission would be less likely to be chosen than those involving an act of omission. In the present context, omission bias refers to a preference for harm caused by omissions over equal or lesser harm caused by actions (Baron and Ritov 2004). The omission bias seems to be motivated by unwillingness to cause direct harm. In a particularly relevant study, Baron (1992) asked participants to imagine that they were one of four innocent prisoners held captive in the Middle East. The captors will surely kill two of the other prisoners unless the participant kills one of the three prisoners. The participants are asked to assume that their fellow captives will not know about their decision and, moreover, that they believe in the sincerity of their captors’ offer. When asked, “Do you think that you would kill the one to save the two?” (1992, p. 322), approximately 88% indicated that they would reject the offer. This finding provides some evidence in support of the omission bias because the majority of participants chose not to act even though inaction would presumably result in more deaths. Nevertheless, factors unrelated to the omission bias may explain this result. For instance, the assumption that the captors would keep their word is highly suspect. Given the captors’ control of the situation, there is little reason why participants should trust the integrity of the offer. Indeed, the scenario brings to mind one in which the captors’ intent may be to double-cross the participant. There is also a sense in which it may be more heroic not to play the captors’ game. Accordingly, it would be of value to test for the omission bias in cases in which these other possible explanations are excluded.

Cushman, Young, and Hauser (2006) tested the omission bias using alternative versions of the trolley problem that were unlikely to evoke the kinds of inferences that might bias people toward inaction. For instance, in the omission version of one problem the protagonist can save five people by not pulling a lever (thus killing one in the process), whereas in the commission version the protagonist can save the same five people by pulling the lever (also resulting in one dying). Thus, the only difference is whether the response that saves more lives requires an act of omission or an act of commission. Cushman et al.’s (2006) study involved two stages. In the first stage, participants were presented with dilemmas and asked to rate the moral permissibility of a protagonist’s omission or commission. In the second stage, they were presented with pairs of dilemmas that were identical but for a critical feature—for our purpose, the omission–commission distinction—and then asked to justify differences in their own ratings. A key finding of their study was that participants viewed harm-inflicting acts of omission as more permissible than harm-inflicting acts of commission. While revealing, a shortcoming of their study was that it did not examine the effect of action on choice directly, nor did it examine whether the inaction effect survives a between-subjects design in which attention is not so obviously drawn to the action–inaction difference. Thus, to address this issue in a more rigorous manner, we conducted a direct test of the omission bias by manipulating in a between-subjects design whether the harm-inflicting “save-five” option in the trolley problem was associated with either hitting or not hitting the switch.

1.4 Comparability of losses

In the standard (workman) version of the trolley problem, the description of the lives that might be lost is identical but for their number: five people will die by default and one person will die if action is taken. The comparability of these generic descriptions of potential losses of the same type may lead to a consequentialist mindset in which decision makers reason that preserving five lives is better than preserving one. This is in line with Nichols and Mallon’s (2006) view that moral decisions are influenced by multiple factors, including cost–benefit analysis. The final objective of the present research, then, was to examine the effect of the comparability of losses in a tragic tradeoff. To investigate this issue, we included an additional condition in our design that forced decision makers to choose between saving the five workmen and letting one child die or preserving the child’s life but allowing the five workmen to die. We predicted that a smaller proportion of participants in this condition would opt to save the five workmen than in the standard version because the diminished comparability of losses would likely undermine a simple “five beats one” analysis.

2 Method

2.1 Participants

We recruited 150 participants (70 females and 80 males) who were in the waiting area of a train station in the city of Toronto, Canada. Given that the task scenario involved an impending train disaster, we conducted the experiment in this situational context anticipating that participants might be more inclined to adopt the decision maker’s perspective. The mean age of the sample was 27 years (SD = 10). Participants provided informed consent before completing the task.

2.2 Design and procedure

Participants were randomly assigned to conditions in a 3 (version: workman, child, statue) × 2 (act: omission, commission) between-subjects factorial design. Each participant was presented with one version of the problem and instructed to imagine the scenario corresponding to it. For example, in the workman version, participants read the following description:

You often take the train from this station. Today, you are waiting on a familiar platform for your train. You know the schedule of trains at this time of day very well, so you notice a runaway train that is quickly approaching a fork in the tracks. On the tracks extending to the left is a group of five railway workmen. On the tracks extending to the right is a single railway workman.

From past experience, you know that if you do nothing, the train will surely proceed to the left. However, if you hit a switch that you have seen railway workmen use to switch between these two tracks many times in the past, the train will surely proceed to the right. You know that you are the only one who could reach this switch in time, and so the decision is yours.

Depending on the version to which participants were assigned, the lone entity was either a workman, a priceless statue in transport, or a child momentarily outside his or her parents’ attention.

After reading the problem, participants were asked to indicate whether or not they would act to save the five workmen. We varied whether saving the five workmen required hitting or not hitting the switch (i.e., our manipulation of act). After indicating their choice, participants rated their degree of confidence in the choice and the level of moral conflict they experienced in reaching their choice. They responded on 7-point scales ranging from 1 (not at all) to 7 (extremely).
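For concreteness, the sketch below shows one way the 3 (version) × 2 (act) between-subjects design and a single participant record could be represented. It is an illustrative reconstruction under our own naming assumptions, not the authors’ materials or data format.

```python
# Illustrative sketch (not the authors' materials): representing the
# 3 (version) x 2 (act) between-subjects cells and one participant record.
import random
from dataclasses import dataclass

VERSIONS = ["workman", "child", "statue"]
ACTS = ["omission", "commission"]


@dataclass
class Response:
    version: str        # which lone entity the participant read about
    act: str            # whether saving five required hitting the switch or not
    save_five: bool     # chose to save the five workmen?
    confidence: int     # 1 (not at all) to 7 (extremely)
    conflict: int       # 1 (not at all) to 7 (extremely)


def assign_condition(rng: random.Random):
    """Random assignment to one of the six between-subjects cells."""
    return rng.choice(VERSIONS), rng.choice(ACTS)
```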

3 Results

Table 1 shows the descriptive statistics for all measures by condition. Testing our key hypothesis involved determining whether participants in the workman condition who were confronted with the tragic tradeoff were less confident than those in the statue condition who were confronted with the taboo tradeoff. Participants who were confronted with the tragic tradeoff were indeed less confident than those who were confronted with the taboo tradeoff, t(98) = −7.13, P < 0.001, d = 1.43. This difference emerged despite identical choice distributions in these two conditions (see Table 1). Moreover, confirming our second prediction, moral conflict significantly mediated this effect: as Fig. 1 shows, tradeoff predicted moral conflict such that the tragic tradeoff invited more moral conflict than the taboo tradeoff. Controlling for tradeoff, moral conflict predicted confidence. And, controlling for moral conflict, the effect of tradeoff on confidence was significantly attenuated, Sobel test z = 2.67, P < 0.01. Given that the effect of tradeoff type on confidence remained significant after controlling for conflict, moral conflict is a partial mediator of this relation.
Table 1

Percentage of save-five choices, mean confidence, and mean moral conflict as a function of version and act

Measure                  Act           Version
                                       Workman    Child    Statue
Save-five choice (%)     Commission    92         56       100
                         Omission      96         76       88
                         M             94         66       94
Confidence               Commission    4.68       4.32     6.60
                         Omission      4.88       4.64     6.28
                         M             4.78       4.48     6.44
Moral conflict           Commission    5.08       5.36     2.24
                         Omission      5.52       5.88     2.68
                         M             5.30       5.62     2.46

Fig. 1

Mediator model of the effect of tradeoff type on confidence (values reported are unstandardized regression coefficients, with standard errors in parentheses)
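For readers who wish to see the mediation logic spelled out, the following sketch implements the standard three-regression approach with a Sobel test for the indirect effect. It is illustrative only: the data frame, column names (tradeoff, conflict, confidence), and coding are our assumptions, not the authors’ analysis script.

```python
# Illustrative mediation test (hypothetical data frame, not the authors' code).
# tradeoff: 0 = taboo (statue), 1 = tragic (workman); conflict and confidence
# are the 7-point ratings described in the Method section.
import numpy as np
import pandas as pd
import statsmodels.api as sm


def sobel_mediation(df: pd.DataFrame) -> dict:
    X = sm.add_constant(df["tradeoff"])

    # Step 1: tradeoff -> confidence (total effect, path c)
    total = sm.OLS(df["confidence"], X).fit()

    # Step 2: tradeoff -> conflict (path a)
    path_a = sm.OLS(df["conflict"], X).fit()
    a, se_a = path_a.params["tradeoff"], path_a.bse["tradeoff"]

    # Step 3: tradeoff + conflict -> confidence (paths c' and b)
    XM = sm.add_constant(df[["tradeoff", "conflict"]])
    full = sm.OLS(df["confidence"], XM).fit()
    b, se_b = full.params["conflict"], full.bse["conflict"]

    # Sobel z for the indirect effect a*b
    sobel_z = (a * b) / np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    return {"c": total.params["tradeoff"],
            "c_prime": full.params["tradeoff"],
            "a": a, "b": b, "sobel_z": sobel_z}
```

If the effect of tradeoff on confidence (c′) remains significant once conflict is controlled, the mediation is partial rather than full, which is the pattern reported above.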

We also tested Simmons and Nelson’s (2006) intuitive betrayal hypothesis. Given that in all three versions the majority choice was to save five, we treat that as the intuitive option for our subsequent tests. Participants choosing the intuitive option (M = 5.40, SD = 1.49) were slightly more confident than those choosing the alternative option (M = 4.92, SD = 1.80), but the difference failed to reach conventional significance levels, t(138) = 1.44, one-tailed P < 0.08. As well, participants choosing the intuitive option (M = 4.33, SD = 2.26) were slightly less morally conflicted than those choosing the alternative option (M = 5.30, SD = 2.05), and here the difference was significant, t(138) = 2.05, one-tailed P < 0.025.
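A minimal sketch of the kind of group comparison reported above (independent-samples t test plus Cohen’s d) appears below; the input arrays are hypothetical placeholders, not the study data, and the one-tailed conversion assumes the difference lies in the predicted direction.

```python
# Minimal sketch of the majority- vs minority-choice comparison (hypothetical data).
import numpy as np
from scipy import stats


def compare_groups(majority, minority):
    """Student's t test (pooled variance) with a one-tailed p and Cohen's d."""
    majority = np.asarray(majority, dtype=float)
    minority = np.asarray(minority, dtype=float)

    t, p_two_tailed = stats.ttest_ind(majority, minority)  # equal_var=True by default

    n1, n2 = len(majority), len(minority)
    pooled_sd = np.sqrt(((n1 - 1) * majority.var(ddof=1) +
                         (n2 - 1) * minority.var(ddof=1)) / (n1 + n2 - 2))
    d = (majority.mean() - minority.mean()) / pooled_sd

    # Halving the two-tailed p is valid only if t falls in the predicted direction.
    return t, p_two_tailed / 2, d
```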

Next, we conducted a test of the omission bias. Interestingly, although the problems used in the present study were similar to those used by Cushman et al. (2006), act did not influence participants’ choices, z = 0.68, ns. In other words, there was no omission bias in our study.

Finally, we tested our comparability hypothesis by examining whether participants made different choices in the workman versus child conditions. As predicted, participants were less likely to opt to save five in the child condition than in the standard, workman condition, z = 4.47, P < 0.001, γ1* = 1.0 (Hedges and Olkin 1985).
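The sketch below shows a conventional two-proportion z test of the workman versus child comparison. The cell counts assume 25 participants per cell (50 per version), which is consistent with the percentages in Table 1 but not stated explicitly in the text; the statistic reported above may have been computed differently (e.g., following Hedges and Olkin 1985), so this is only an approximation of the analysis.

```python
# Sketch of a two-proportion z test comparing save-five rates in the workman vs
# child versions. Counts assume 50 participants per version (an inference from
# the reported percentages, not a figure given in the text).
from statsmodels.stats.proportion import proportions_ztest

save_five = [47, 33]   # workman: 94% of 50; child: 66% of 50
n_totals = [50, 50]

z, p = proportions_ztest(save_five, n_totals)
print(f"z = {z:.2f}, p = {p:.4f}")
```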

4 Discussion

We have proposed that the experience of problem difficulty, on the one hand, and of mental effort, on the other, is largely affected by whether decision makers view the moral problems that confront them as offering a moral escape route. We have suggested further that, whereas taboo tradeoffs offer a way out of intractable conflict, tragic tradeoffs do not, instead forcing decision makers to live with the conflict and work through it with difficulty. The findings of the present research supported these theoretical contentions as well as our specific hypotheses. First, we demonstrated that participants were more morally conflicted when faced with a tragic tradeoff (i.e., the workman version) than with a taboo tradeoff (i.e., the statue version). Second, we showed that participants were also less confident in their decisions when faced with a tragic tradeoff than with a taboo tradeoff. Third, we showed that moral conflict partially mediated the effect of tradeoff type on confidence.

Our account bears both similarities to and differences from other recent accounts of moral choice. Consistent with Haidt (2001) and Greene et al. (2001, 2004), we propose that intuition plays an important role in moral choice. The moral norms that people attempt to adhere to are not necessarily adopted through any rigorous analytic process and may be “deeply felt” even if not deeply thought through. This is why people are often morally dumbfounded, as Haidt (2001) put it, referring to situations in which people are highly confident in the correctness of their moral judgments despite their inability to provide adequate reasons for their beliefs. Moreover, like other theorists (e.g., Greene et al. 2004; Krebs and Denton 2005), we propose that the relative contributions of intuitive and analytic processes brought to bear on moral decision making are moderated by the type of tradeoff that decision makers confront.

Unlike Greene et al. (2004), however, we do not view moral problems that prompt deeper analytic reasoning—those that are hard problems in their view—as necessarily involving a conflict in which the decision maker must either violate a moral norm with the consequence of wreaking emotional havoc or else fail to maximize the aggregate good. We agree that such tradeoffs are likely to be viewed as dilemma-ish. However, we focus instead on the subjective experience of high moral conflict and deflated confidence as the psychological markers of moral dilemmas. In this regard, our account is also more general than Tetlock et al.’s (2000) because ours does not assume that the moral escape route offered by taboo tradeoffs must derive specifically from social norms proscribing the monetization of certain values. Hence, we regard the distinctions offered by other theorists (e.g., Greene et al. 2004; Tetlock et al. 2000) as exemplary rather than definitive ones, and we believe that future research geared toward comprehensively assessing the ways in which moral conflicts tend to be generated and resolved is still needed.

Another aim of our study was to highlight the value of measuring confidence and subjective indices of conflict in research on moral decision making. As our findings revealed, such measures can illuminate one’s understanding of underlying cognitive processes in cases where choice distributions are similar across problems. For example, in the present research, we found that 94% of participants in the workman and statue versions of the trolley problem chose to save the five workmen. Having access to this choice pattern alone may have led the reader to assume that the two conditions induced the same decision-making processes in participants, and that participants in the two conditions may have experienced similar levels of moral conflict or confidence in their choices. Our findings clearly show that such assumptions would be unwarranted. In addition, we were able to test Simmons and Nelson’s (2006) intuitive betrayal hypothesis in an entirely different domain of decision making. Although our findings offered weak support for their hypothesis in terms of confidence, they offered somewhat stronger support in terms of our conflict measure. That is, participants who selected the option favored by the majority—and therefore the one likely to be the more intuitive option (see Simmons and Nelson 2006 for empirical evidence supporting this line of reasoning)—reported feeling less morally conflicted than those who chose the less-intuitive minority option. These findings suggest that there is value in exploring a broader range of subjective measures of mental effort as indicators of intuitive or analytic reasoning. Whereas reported confidence may tell us more about an individual’s experience of uncertainty (e.g., Kahneman and Tversky 1982), reported conflict may tell us more about that individual’s experience of task difficulty.

Another aim of the present research was to re-examine the omission bias using a more rigorous experimental design than had been previously employed. Our findings, in fact, revealed a lack of omission bias in moral choice. Participants were just as likely to opt to save the five regardless of whether that involved an act of omission or an act of commission. As noted earlier, we attribute this finding to our more conservative design. The within-subjects methodology used in Cushman et al.’s (2006) study makes it likely that participants would focus on the omission–commission distinction given that it is the only difference between the two paired versions. Our experiment not only bypassed this limitation by using a between-subjects design, it also focused directly on participants’ choices rather than permissibility judgments. We therefore urge caution in assessing claims of omission bias in moral decision making, at least until equally rigorous research reveals such a bias.

Finally, the findings of the present study supported our prediction that a smaller proportion of participants would opt to save the five workmen when the comparability of losses was diminished by having the lone entity be a child. We found this difference despite the fact that the two versions of the problem held constant other factors that have been proposed to influence moral choice, such as the level of personal involvement evoked by the characteristics of the problem (i.e., both problems would be defined as impersonal by Greene et al.’s 2001, criteria). Of course, the direction of the predicted difference was shaped by our choice of non-comparable loss. If we had selected one person from a class of socially despised individuals (e.g., pedophiles) instead of a child, then we would likely have found the proportion of participants opting to save the five workmen higher than in the standard version. Nevertheless, just as the present findings highlight the importance of tradeoff type in understanding how the human mind reasons through moral conflict, so do they highlight the importance of the type of losses that decision makers must weigh. The comparability of these features in tandem with their perceived desirability, deservingness, value, and the like will also influence the decision that is made. When the entities involved in a tradeoff are not the same, it is less likely that people will engage in cost–benefit analyses that focus on only a single attribute, such as the quantity of discrete lives lost. More generally, we suggest that by paying close attention to problem features, researchers can better understand why people resolve moral problems involving the fate of human lives in the ways that they do.

Acknowledgments

This research was funded by Discovery Grant 249537-2002 from the Natural Sciences and Engineering Research Council of Canada to the first author. We thank Cpl Sara Salehi for her assistance with this research and two anonymous reviewers for their feedback on an earlier draft of this article.

Copyright information

© Her Majesty the Queen in right of Canada as represented by the Minister of National Defence 2007