Risk research has become a major interdisciplinary field in recent decades, engaging scholars from diverse disciplines such as economics, legal theory, decision theory, and psychology. However, few philosophers have focused their research on risk. This has slowly started to change, and we are currently witnessing a growing interest in the ethics of risk among moral theorists and an increased awareness of ethical aspects among risk theorists. In broad strokes, the cross-fertilization between risk research and moral theory has run in two directions. In the first, studies of risk have raised concerns for moral theory: risk is about indeterminate outcomes and lack of knowledge, whereas moral theory has largely been conducted in a space of determinate outcomes and under conditions of sufficient knowledge. This is easily illustrated by the examples commonly used in moral theory, such as the Jim and the Indians thought experiment (Smart and Williams 1973) and the Trolley Problem (Foot 1967), cases where the morally relevant properties of the action to be assessed are presumed to be both well-determined and knowable. In most situations in life, however, our actions impose risk on others in that they may cause undesirable events to occur, and the consequences of such actions are typically highly uncertain. In the second direction, moral theory has intervened in risk assessment and risk management by offering a normative lens that exposes problems which cannot be satisfactorily dealt with within a non-normative framework alone (a framework often presumed by the decision-theoretical paradigms that long dominated risk research).

The papers in this collection contribute to both exchanges, further advancing our understanding of how risk and moral theory are interconnected. This interconnection is still undertheorized in philosophy: work on risk is mainly conducted from the perspective of decision theory, and work in moral theory dealing with risk is still rare (Hansson 2009; Hayenhjelm and Wolff 2011). The aim of this collection is to take some steps towards filling this void.

A common, albeit often implicit, assumption in much traditional work in moral theory, as well as in risk research, takes the form of a division-of-labour premise: the role of moral theory is to focus on determinate outcomes in well-specified and transparent situations, while the role of disciplines such as decision theory and other areas of risk research is to analyze the problems that indeterminacy raises in real-life situations. Since decision theory works solely with criteria of rationality, the argument goes, no additional criteria are needed to assess rational action.

A shared starting point of the papers in the present collection is a rejection of this tacit assumption. As stressed by Sven Ove Hansson – one of the most prominent scholars on the ethics of risk and one of the contributors – the assumption is problematic since it neglects several morally relevant aspects that permeate both domains. For example, the difference between voluntary risk-taking and risk imposed on a person who does not accept it, and the difference between intentional and unintentional risk exposure, are morally salient aspects that are not captured by approaching risk solely within the traditional probabilistic framework. Likewise, given that most decision outcomes in real situations are indeterminate and underspecified, it is a fundamental problem for moral theory if it assumes away these facts (Hansson 2014, 2009, 2003). The papers in this collection point, in different ways, to the interconnectedness of risk and moral theory.

1 Contributions to the Debate

Let us begin with the contributions to the first form of cross-fertilization, viz., how studies of risk have raised concerns for moral theory. A central question here, from a philosophical viewpoint, is in what sense risk of harm, in contrast to actual harm, has moral significance. In broad strokes, two main kinds of approaches have been adopted in moral theory to deal with risk: consequentialism, often in the form of utilitarian theory, and non-consequentialist approaches, often in the form of contractualist, rights-based, or deontological theories.

According to consequentialism, the moral evaluation of a particular action is exclusively determined by the outcome that the action would have. Comparing the actions available at a given time, the action with the best consequences is the right one. Consequentialist reasoning about risk often takes the form of cost-benefit analysis (Hayenhjelm and Wolff 2011: e33). Just as in consequentialism generally, actions are here evaluated on the basis of the values of their consequences: the positive and negative effects of an action are compared and a unitary measure of the outcomes is reached. In hedonism, the classical version of consequentialism, the ultimate measure of consequences is wellbeing (or happiness): the larger the sum of positive wellbeing over negative (harm), the better the action. In cost-benefit analysis, the unitary measure is monetary. In theory, every consequence of an action is assigned a monetary value, and the sum of the values of all consequences is the total value of that action.
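Schematically – and purely as our own illustration, since none of the contributions states the calculation formally – the aggregation step of a cost-benefit analysis under risk can be rendered as an expected-value calculation:

```latex
% Illustrative sketch (our notation): expected monetary value of an action a
% with possible consequences c_1, ..., c_n, where
%   p_i = probability that a leads to consequence c_i
%   m_i = monetary value assigned to c_i (positive for benefits, negative for costs)
V(a) = \sum_{i=1}^{n} p_i \, m_i
```

On this rendering, the recommended action is simply the one with the highest $V(a)$; as the discussion of Christiansen's paper indicates, much of the controversy concerns how the values $m_i$ – notably the value of a statistical life – are fixed.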

Historically, many theorists have seen cost-benefit analysis as the rational approach to managing risk. Although cost-benefit analysis of risk has been accused of putting a monetary value on human lives, proponents of the approach have insisted that since the resources of society are not endless, cost-benefit considerations are the only reasonable way of making decisions about risk.

Cass Sunstein is one of the most influential proponents of the cost-benefit approach in the current debate. In several books and journal articles since the beginning of the millennium, he has argued for using cost-benefit analysis as our main approach to risk management. One of his most important arguments for this conclusion is that cost-benefit analysis corrects for the many cognitive biases that distort people’s understanding of risk. In his paper “On the Cognitive Argument for Cost-Benefit Analysis”, Andreas Christiansen critically evaluates this ‘Cognitive Argument’ and finds it wanting.

On closer scrutiny, Christiansen argues, what Sunstein refers to as the Cognitive Argument is really two distinct arguments. According to the first, cost-benefit analysis is justified because it corrects for false beliefs about the severity of harmful outcomes and for the neglect of some costs. Christiansen acknowledges the soundness of this argument as such, but argues that it is much more limited in scope than proponents of cost-benefit analysis assume: it fails to justify other aspects of cost-benefit analysis and is thus not successful in justifying the approach overall. The second argument purports to show that adopting a unitary monetary value corrects for the widely diverging values that other methods assign to a statistical life. Christiansen argues that this argument, on closer inspection, fails to show that it is a cognitive error to assign widely diverging values to a statistical life. In his view, there are good reasons to do so, and the common arguments for a unitary measure fail: neither considerations of instrumental rationality nor the requirement to treat like cases alike justifies the use of a unitary value for a statistical life.

With regard to non-consequentialist approaches to risk, contractualism has gained increased attention as an alternative to consequentialism. On this view, risk impositions are morally permissible if and only if everyone concerned could – at least in principle – agree to them. In contrast to the asymmetrical implications of the aggregation made in consequentialist accounts – where the risk to one person may be offset by the benefits to the many, arguably resulting in an unfair distribution of risk – there is a plausible symmetry underlying the contractualist idea that a risk for one person must be acceptable to that very person and cannot be trumped by the greater good (Hayenhjelm and Wolff 2011: e39–e40; Scanlon 2000; Nozick 1974).

Two of the contributions engage with the contractualist debate on risk impositions. Sune Holm’s paper, “The Luckless and the Doomed: Contractualism on Justified Risk-Imposition”, investigates a central problem in applying contractualist reasoning to risk. The move from determinate outcomes, given the action or policy chosen, to risk impositions, where the outcome is uncertain, has been thought to be especially problematic for moral theories focusing on individuals’ rights not to be subjected to harm. Theorists have argued that since collective goods cannot be compared with individual harms, contractualist theories face a dilemma when it comes to justifying activities which generate socially valuable goods but which also come with potential harms. In most cases, some individuals will suffer harmful effects which, for them, do not justify the social goods the activity brings about. To solve (or rather dissolve) this dilemma, many contractualist theorists have argued that risky activities should be assessed by adopting an ex ante approach, where the likelihood of a person facing a negative effect is weighed against the positive effects of the activity, rather than an ex post approach, where the actual effects of the activity constitute the basis for justification.

Holm’s main aim is to test whether ex post contractualism, even in view of the recent arguments launched against it, may still be a viable alternative for the contractualist. His paper focuses on Johann Frick’s influential argument against ex post approaches. Through an analysis of its fundamental premises, Holm finds that the argument includes an unsupported assumption which proponents of the ex post approach may deny. He then argues that ex post contractualism can respond to the challenge that it seems to prohibit a range of intuitively permissible and socially valuable activities, concluding that it is indeed a viable alternative for the contractualist.

The second contractualist contribution is Rahul Kumar’s paper “Risking Future Generations”. Kumar opens by quoting Derek Parfit’s famous example of the Risky Policy in Reasons and Persons, in which a community must choose between two energy policies, both safe for at least three centuries, but one of which carries some risk in the further future. If the community chooses this Risky Policy, the standard of living will be somewhat higher over the next century. The community chooses this policy, and as a result a catastrophe several centuries later releases radiation that kills thousands of people (Parfit 1984). Kumar not only wants to make sense of the intuition that the community has wronged those who live in the future; he also wants to make sense of the intuition that the community would have wronged them even if the catastrophe had not occurred, because it could have, just as we wrong our contemporaries when we put their interests at risk to gain small benefits for ourselves, whatever the outcome.

Utilizing Scanlon’s contractualist account of what it is for one person to wrong another, together with certain assumptions about the nature of risk, Kumar defends the force of the intuition that the moral objection to the adoption of the Risky Policy is that it wrongs those who live in the further future. The attraction of the contractualist view, on which our obligations to future generations are understood as interpersonal obligations, is, Kumar argues, that it allows us to see living persons as standing in a kind of personal relationship with those who will live in the further future – one not different in kind from the relationships that bind currently living persons to one another. It also allows us to reject the idea that obligations to protect and promote the interests of those who will live in the further future are of secondary importance to the obligations that currently living persons have towards each other.

Moving to the second form of cross-fertilization, in which moral-theoretical concerns are utilized to develop and refine risk research, two papers contribute to this literature. In “Scopes, Options, and Horizons: Key Issues in Decision Structuring”, Sven Ove Hansson investigates the question of what is involved in determining a decision problem. In decision theory, the decision problem is typically assumed to be well-defined, and focus is instead directed at the decision process, that is, the process of making a decision. However, real-world decision-making rarely begins with a clear decision problem to be decided upon. Yet this process preceding decision-making – what Hansson calls ‘decision structuring’ – has largely been ignored in both decision theory and applied decision analysis.

Hansson identifies and discusses ten main components of decision structuring: the determination of scope (the totality of issues to be covered by the decision); subdivision (whether, and if so how, the decision will be divided into smaller parts); agency (who will make the decision); timing (the time-scale of the decision-making); options (the identification of the options available to be chosen); control (the degree of control that agents are assumed to have over their own future actions); framing (how the options or the background conditions are described); horizon (the consequences and other aspects of outcomes that will be taken into account); criteria (the evaluation criteria – usually decision principles connected with how we conceive the nature and structure of moral values – used to evaluate the aspects of the horizon); and, finally, restructuring (the degree to which the decision process facilitates or impedes a restructuring of the decision in the course of the decision-making).

An important conclusion from Hansson’s analysis is that the structuring of decisions is inevitably value-based and thus that an ethically neutral structuring of decisions is impossible. This is an important lesson for decision theory, and goes against the common assumption that decision theory needs no additional input from ethics apart from the evaluation of decision outcomes.

The second contribution to this literature is made by Veronica Alfano, Caroline Nevejan, and Sabine Roeser. In the paper “The Role of Art in Emotional-Moral Reflection on Risky and Controversial Technologies”, they explore the role that art may play in ethical reflection on risky technologies. As discussed above, traditional approaches to risk are often quantitative, calculating potential positive and negative outcomes and comparing them at an aggregate level. One of the main advances of bringing moral theory into risk research has been to point out the problems of neglecting or oversimplifying the ethical aspects of risk and uncertainty. Among critics of these quantitative approaches, there is broad consensus not only that decision-making about risk should be informed by values generally, but also that the values and concerns of stakeholders should be taken into consideration. According to the authors, this also means that emotions should be explicitly addressed in studies of risky technologies, as they can draw our attention to important moral values. Emotions such as indignation or compassion, for example, may point to ethical considerations about justice or autonomy. The problem with relying on emotions, however, is that they might be biased, as they are shaped by cultural norms and personal beliefs.

The aim of Alfano et al.’s contribution is to explore how art can broaden people’s emotional and moral outlooks and offer alternative viewpoints, so that such biases may be overcome. Few philosophers have studied art as a catalyst for new perspectives in public deliberation about potentially risky technologies. This is problematic, especially in light of the sharp rise in new technologies in recent decades. The authors study the role of art from this perspective in the context of so-called BNCI (Brain/Neural Computer Interface) technologies. They discuss a number of salient artworks to illustrate how art can serve as a trigger for emotional-moral reflection, and how it can generate a more engaging public debate about risky technologies than abstract ethical and scientific theorizing might.

2 Moving Forward

In investigating particular aspects of the interrelation between moral theory and risk analysis, the papers in this collection clearly exemplify the general thesis that the two areas can and should cross-fertilize each other. This in turn demonstrates that the division-of-labour premise does not hold. Risk is theoretically relevant for moral theory, and moral theory is relevant for risk, so dealing with only one of these areas will render our theories incomplete. However, while there is convincing evidence for the general interdependence of risk and morality, the extent to which the division-of-labour criticism has force against particular theories of risk or morality is less clear. A theory in which the expected value of harmful outcomes is the sole measure of risk may be problematic in many ways, and thus insufficient as a general theory of risk. But if applied to situations in which there is, for example, no difference between voluntary and involuntary risk-taking, or between intentional and unintentional risk exposure, it may still be a valuable theory (see, e.g., Möller et al. 2006, for an account of limited safety comparisons). And while, as we see in this collection, risk of harm constitutes a challenge for contractualist moral theories, it is not clear that the dimension of uncertain outcomes is theoretically relevant for all moral theories. In classical hedonism, for example, what matters according to its criterion of moral rightness is what will happen – what the consequences of the actions turn out to be – regardless of the likelihood of the various alternatives, or even our uncertainty about which those alternatives may be. While uncertainty may still play an important role (for example, as a psychologically efficacious factor), this role is mainly empirical, and thus on a par with other empirical considerations rather than having any particular theoretical significance.

We suggest that a next step forward in the debate is to gain a deeper and more fine-grained picture of the interrelation between risk and moral theory. It seems to us that whether – and if so, in what way – a moral theory should account for risk in theorizing appropriate principles or standards, and vice versa, depends on what the theory is supposed to achieve; for example, what it is that the suggested principles are intended to regulate. An objection to the abovementioned thought that hedonistic utilitarianism need not take risk into consideration – except as an empirical circumstance among others – is to argue that unless the theory assigns uncertainty theoretical significance, it fails to connect properly to the main function of a moral theory, namely to provide action-guidance. If moral principles fail to be action-guiding, we cannot follow them. And in order for a moral norm to be justified, it is often argued, we need to be able to act in accordance with it: a meta-principle often summarized in the slogan ‘ought implies can’.

But in order for this line of objection to succeed, it has to be both true and relevant. It is not clear that utilitarianism thus understood cannot be action-guiding; and since the correctness of the ‘ought implies can’ principle is highly controversial, it is not clear that a moral theory needs to be action-guiding at all (e.g. Saka 2000; Stern 2004; Wedgwood 2013). Again, whether and to what extent a theory must be action-guiding seems to depend on what it is supposed to achieve. To elaborate this functional aspect in the present context, and to develop even more fine-grained tools for handling risk in moral theory (and vice versa), we believe that some of the methodological discussions in the recent debate on justice could fruitfully be imported into the ethics of risk literature. Some of the methodological tools developed there could be used to investigate in more detail the many – and perhaps even compatible – ways in which moral theory could handle risk.

In recent years, an intensified discussion about the role of normative ideals has re-emerged, primarily in the post-Rawlsian literature. What often goes under the heading of ‘ideal theory’ has become a label for theories involving ‘idealistic’ assumptions, that is, assumptions that are often unlikely, or even impossible, to realize. For example, ideal theory may refer to ‘full-compliance theory’, where all agents comply with the just principles of society (and the corresponding non-ideal theory is understood as ‘partial compliance theory’). Here, the role of the idealistic assumptions is to justify the principles. It may also refer to ‘utopian theory’, where idealistic aspects are part of the very theory rather than of its justification (and the corresponding non-ideal theory is understood as more or less ‘realistic theory’). Moreover, so-called ‘end-state theory’ is a kind of ideal theory that sets out a long-term goal for institutional reform (and the corresponding non-ideal theory is understood as ‘transitional theory’, specifying gradual steps toward a more just society) (Valentini 2012: 655–62; Rawls 1999: 89–90, 1971; Sen 2006; Simmons 2010; Estlund 2014).

Critics of ideal theory, often called ‘non-ideal theorists’, maintain that ideal theory (under any of the above construals) is too far removed from the concerns of real-world situations to be action-guiding, or even to be of practical use at all. Instead, they argue that normative theorizing must be much more deeply integrated with empirical reality and take much more seriously the non-ideal circumstances under which normative ideals and principles are supposed to be applied (Mills 2005; Farrelly 2007).

At the heart of the debate between ideal and non-ideal theory in the justice literature lies a methodological concern about the proper nature of political philosophy, which has emerged as a result of philosophers starting to cross-examine the methodology they adopt in developing their normative prescriptions. One strategy has been to defend one kind of theory (ideal or non-ideal) while arguing that the other kind is flawed (Mills 2005; Farrelly 2007). Another strategy has been to argue that one of them must be prioritized. An advocate of the latter priority view is John Simmons, who argues that we must make use of normative ideals: “to dive into nonideal theory without an ideal theory in hand is simply to dive blind, to allow irrational free rein to the mere conviction of injustice and to eagerness for change of any sort” (Simmons 2010, p. 34; see also Rawls 1971).

There are many ways in which these methodological considerations may be relevant for assessing the interdependence of risk and moral theory. Above all, such considerations could press the theorist to specify in more detail the nature of the argument pursued: for example, in what sense it is ideal or non-ideal, and in what sense (if at all) it is supposed to be action-guiding or otherwise practically useful. As mentioned earlier, one of the main criticisms of mainstream moral theory from risk ethicists is that the morally relevant properties of human actions are (problematically) assumed to be both knowable and well-determined. Indeed, it is often considered a “major defect” of a moral theory that it cannot deal with decision-making under risky and uncertain conditions (Hansson 2009: 12). However, whether this is a defect of a theory depends on what the theory aims to do, for example, whether it intends to prescribe what to do here and now or to tell us what an ideal state looks like, towards which we should strive. For the latter aim, the level of action-guidance needed is an open question.

Moreover, even if we agree that “action-guidance is largely what we need ethics for” (Hansson 2003: 294), the debate on ideal and non-ideal theory has revealed that it is a complex and relatively open question what is required of a principle or standard to count as action-guiding in some relevant sense. Consider, for example, the motivational aspects of action-guidance, and compare two principles: a principle of non-smoking and a principle of no accidents in traffic (Vision Zero). The first principle is action-guiding in the sense that we have complete knowledge of each step to take in order to reach the goal: there are a number of clear and uncontested instructions for how to quit smoking, all well within the limits of what is humanly possible. Yet many smokers are not sufficiently motivated to follow this advice. Hence, the mere fact that a principle offers direct and concrete step-by-step guidance does not say very much about whether it will be realized (Erman and Möller 2013: 33). The second principle is ideal in the sense that it prescribes an unrealistic state of affairs, and it offers little (if anything) in terms of concrete guidance for action here and now. Nevertheless, the Vision Zero goal in traffic safety has motivated safety professionals and other planners to strive to come as close to it as possible. Starting in Sweden in the 1990s, the vision of having no casualties in traffic, despite the fact that traffic is the most common cause of death for young adults, has resulted in a revolutionary rethinking. For example, the object of safety has shifted from avoiding accidents to avoiding casualties, so that the severity of an accident, rather than accident prevention per se, has come to the forefront. This has resulted in several safety improvements focusing on speed reduction, such as roundabouts, where the number of accidents has increased but their severity has decreased significantly (Belin et al. 2012). Consequently, even ideals which are unattainable may guide action and motivate agents (in this case traffic safety professionals) to act so as to come closer to the ideal.

The justificatory aspects of action-guidance are also important. It may be that we prefer ideal principle X over non-ideal principle Y even if X is realizable only to a very small degree in the foreseeable future, whilst Y is likely to be fully realized in the near future. Realists in political philosophy, for example, often set such a low bar for political legitimacy that even societies such as Mugabe’s Zimbabwe turn out to be legitimate. Compared to such ‘hyper-realistic’ principles, even theories in which all current societies turn out to be less than ideally legitimate, and in which the path to becoming so is unclear, may be preferable. Similarly, moral theories which give no clear answer as to how uncertainties should be handled might be preferable to those that do, if the evaluation of the fully determined alternatives in the former gives the best answers to our moral questions. Hence, to what extent and how a principle should be action-guiding is a substantive matter, not something to be decided pre-theoretically by presuming it to be a constraint on normative principles and standards (Erman and Möller 2018).

Our point here is merely to demonstrate that considerations about what a theory aims to achieve, and how it is supposed to be action-guiding, open up space for a range of different (albeit potentially compatible) kinds of moral accounts dealing with risk. As part of a non-ideal theory, risk could be stressed as an aspect of the human condition, as a necessary feature of our everyday lives that a normative principle or standard must take into consideration and properly respond to. This, in fact, is one of the core assumptions in the ethics of risk literature. But there are other candidate factual assumptions considered to be essential for non-ideal theorizing. In the debate on ideal and non-ideal theory, one such candidate is ‘partial compliance’ (which in Rawls’ theory of justice is contrasted with the idealization of full compliance in the justification of his ideal principles). Hence, a non-ideal moral theory dealing with risk would have to account for the relative importance of risk in relation to other non-ideal aspects, answering questions such as whether risk should take precedence over, replace, or be placed on an equal footing with other particular assumptions about our current non-ideal world.

Indeed, at first glance it seems as if moral theories dealing with risk are best seen as species of non-ideal theory. But this need not be the case. Risk is arguably not merely an ‘unfortunate aspect’ of contemporary social life, but a fundamental empirical circumstance any society, present and future, needs to take into consideration. An ideal theory dealing with risk could thus attempt to offer a justification for why risk should be part of the ‘circumstances of morality’, similar to how ‘limited altruism’ or ‘moderate scarcity’ constitute parts of the ‘circumstances of justice’ in Rawls’ original position, of which the parties under the veil of ignorance have general knowledge. For Rawls, these circumstances are the conditions under which “human cooperation is both possible and necessary” (Rawls 1971: 126).

An example of such an account is Aaron James’ theory of fairness in trade. James builds on his earlier work on practice-dependence and applies this approach to the global economy. Adopting a contractualist framework, he defends three principles of fairness for trade, which he labels ‘principles of equity’ (James 2012: 29). However, he firmly insists that his account is not merely a non-ideal theory responding to our current conditions, but an ideal theory in its own right, and as such a genuine alternative to, say, cosmopolitan theories of justice. In James’ view, if we want political philosophy to be normative for us, recommended principles must address the available human means for resolving problems of moral assurance; that is, the principles of justice can only require arrangements that we can know, with reasonable confidence, that we can jointly establish and maintain (James 2012: 116). On James’ analysis, availability is not to be understood merely in terms of physical or logical possibility, but as ‘epistemic availability’ in a sense that sets limits on the basic form of human cooperation that justice can require (2012: 114–17). The basic thought is that since it is a general feature of the human condition that we lack direct knowledge of, or control over, the minds of others, and we therefore interact in uncertainty about what others will do, there is always risk involved in cooperation. This basic predicament – that agents face uncertainty about what others will do – is a fundamental feature of nearly all human social interaction, even under the best of circumstances for cooperation (James 2012: 58). Therefore, the demands of epistemic availability put limitations on abstraction and idealization within ideal theory “even as a matter of basic principle” (James 2012: 113), such that any viable principle “must address this basic epistemic predicament” (2012: 115).

In large part, James’ thoughts about epistemic uncertainty, problems of moral assurance, and epistemic availability bear a close resemblance to the kinds of considerations made about knowledge deficits and indeterminacy with regard to risk. For risk to be incorporated into an ideal moral theory, the theorist would have to specify what kind of evidentiary standard is justifiable for epistemic availability when it comes to handling situations involving risk, such as knowledge of the probability that an undesired event will occur, a measure of the magnitude of the undesired event, or other morally relevant aspects of the outcome. In James’ case, the evidentiary standard is set at “reasonable confidence”, which is supposed to reflect “reasonable uncertainty-aversion” (2012: 119). But while reasonable uncertainty-aversion is one candidate, the appropriate evidentiary standard in cases of risk is of course an open question, and something that the theorist must argue for (Erman and Möller 2017).

Another important task for the moral theorist integrating risk into an ideal account is to demonstrate why the suggested evidentiary standard of epistemic availability, even when it is reasonable, should trump other important values that the theory attempts to account for – or, if not trump, what the proper relation between the salient aspects should be. Indeed, we could accept epistemic availability as one central value among many to consider when theorizing what morality requires in different contexts. It might turn out that, on balance, we prefer not to let assurance considerations trump other central considerations, such as equality. All things considered, we may prioritize one moral principle over another, more ‘risk-apt’ principle even if the former is less ‘well-assured’.

Indeed, the questions briefly reviewed here are just a fraction of the many methodological considerations that the moral theorist is confronted with. Nevertheless, we contend that they may both open the door to more fine-grained moral theories for handling risk and point to new ways in which the ethics of risk, as a research field, may advance the debate about methodology in moral theory in general.