Encyclopedia of Global Bioethics

Living Edition
| Editors: Henk ten Have

Risk

  • Sabine Roeser
  • Jessica Nihlén Fahlquist
  • Rafaela Hillerbrand
Living reference work entry
DOI: https://doi.org/10.1007/978-3-319-05544-2_388-1

Abstract

In this contribution, the most important developments in risk theory and risk ethics are presented and connected with key issues in the debate on global bioethics. First, a short historical sketch of the development of the fairly young field of risk ethics is provided. This is followed by an exploration of concepts and notions of risk and related notions of uncertainty and ignorance. This discussion is based on work in decision theory, epistemology, and philosophy of science. The main body of this contribution focuses on the link between risk and global bioethics. It starts with a discussion of the role of risk and uncertainty in global bioethics in general. It then discusses values and moral emotions in global bioethics, building on insights from risk perception research. The last section explores the complex interrelationships between risk and responsibility in the context of global bioethics.

Keywords

Risk · Uncertainty · Ignorance · Expected utility · Bayesianism · Precautionary principle · Technology · Epistemology · Ethics · Responsibility · Emotions

Introduction

Biotechnology and health interventions are developed to improve human well-being, but they can also give rise to unforeseen consequences or risks for human beings or the environment, on a local as well as on a global scale. This requires an ethical assessment of which risks are acceptable and under which circumstances. However, conventional approaches to risk, on the one hand, mainly rely on quantitative methods and do not take into account a broad range of morally relevant considerations. Normative approaches to bioethics, on the other hand, often do not explicitly acknowledge the specific conceptual and practical problems that risk and uncertainty give rise to. In this contribution, the most important developments in risk theory and risk ethics will be introduced and connected with key issues in the debate on global bioethics.

The contribution is structured as follows: first a short historical sketch of the development of the fairly young field of risk ethics is provided. This is followed by a conceptual analysis of the notion of risk and related notions of uncertainty and ignorance, drawing on work in decision theory, epistemology, and philosophy of science. The main body of this contribution will focus on the link between risk and global bioethics. It will first discuss the role of risk and uncertainty for global bioethics in general. This is followed by a discussion of the role of risk perception, values, and moral emotions in global bioethics. After that the complex interrelationships between risk and responsibility in the context of global bioethics will be discussed. The contribution ends with a conclusion that summarizes the most important insights.

History of Risk Ethics

Risk and uncertainty are investigated by a variety of academic disciplines (cf. Roeser et al. 2012). Until the 1960s, decision-making under risk and uncertainty was mainly studied by formal disciplines, for example, through mathematical approaches such as Expected Utility Theory (EUT). In the 1970s, psychologists started to study decision-making in an empirical way. Daniel Kahneman and Amos Tversky showed how people's decisions in cases of uncertainty deviate from these formal approaches (cf. Kahneman 2011 for a popularized overview). This work resulted in the study of heuristics and biases that is extremely influential in psychology and for which Kahneman won the Nobel Prize in economics in 2002. Parallel to and partially overlapping with the work by Tversky and Kahneman, Paul Slovic founded the discipline of risk perception research, eventually showing that laypeople have a broader conception of risk than experts, one that also comprises values (cf. Slovic 2000 for a collection of his most important articles). Sociologists have argued in recent decades that risk is largely a social construction, as the definition and measurement of risk depend on people's status and perspective (Krimsky and Golding 1992). Since the 1980s, philosophers have started to study the ethical implications of risk, with the pioneering work of Kristin Shrader-Frechette (1991) and Sven Ove Hansson (1996), who proposed to connect the study of risk with ethical reflection and vice versa. Since the early 2000s, a more substantial community of philosophers has been working in the field of risk ethics, on a theoretical level as well as in applied and interdisciplinary ways (cf. contributions by risk ethicists in Roeser et al. 2012).

Despite these diverging backgrounds, a consensus has emerged from the work by psychologists, social scientists, and philosophers. These scholars have argued that risk is not merely a quantitative, scientific notion. Risk is more than the expected harm, i.e., an unwanted effect times its occurrence probability, that one could assess with formal models and (risk-)cost-benefit analysis, which are dominant in quantitative approaches to risk analysis and risk management (cf. also Hillerbrand 2010a). Risk concerns the well-being of human beings and involves ethical considerations such as fairness, equity, and autonomy.
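The quantitative notion criticized here can be written out explicitly. The following is a standard reconstruction of the formal apparatus, not the authors' own formalism:

```latex
% Expected harm, as used in quantitative risk analysis:
% p_i is the occurrence probability of outcome x_i and
% h(x_i) the magnitude of the associated harm.
\mathrm{Risk}(a) = \sum_i p_i \, h(x_i)

% Expected Utility Theory generalizes this to arbitrary
% valuations u(x_i) of the outcomes, positive or negative:
\mathrm{EU}(a) = \sum_i p_i \, u(x_i)
```

Risk-cost-benefit analysis compares options by such expected values; the consensus described above holds that fairness, equity, and autonomy cannot be captured in these sums alone.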

These considerations also figure in standard approaches to bioethics. Ethical theory for a long time, however, has focused on decisions under certainty. Global bioethics needs to address the risks and uncertain effects of various biotechnological interventions, ranging from the risks volunteers in a clinical trial face to the risks of genetically modified organisms (GMOs). The main part of this entry will discuss important themes from risk scholarship for global bioethics. But first some conceptual clarifications of risk and closely related notions will be provided.

The Concept of Risk

Risk and uncertainty play a central role in decision theory and epistemology, respectively. These disciplines can hence be very informative for global bioethics. In addition, philosophy of science is relevant as it addresses issues of uncertainty in the context of the special sciences such as life sciences or ecology.

Due to the plethora of circumstances, empirical facts, and epistemic shortcomings that give rise to uncertainties in decision-making, various classifications of uncertainty have been suggested in the literature. Two classifications are particularly important for global bioethics: the first concerns the uncertainty of the consequences of a decision and the second the uncertainty of a decision's demarcation (Hansson 1996). Starting with the former, it is important to note that technology assessment distinguishes between risk (in a narrow sense), uncertainty, and ignorance. Risk in this narrow understanding refers to decisions for which all possible consequences are known and can be assigned a meaningful probability; probabilities here are to be understood as objective probabilities, such as relative frequencies. For a decision under uncertainty, the full range of possible outcomes (in mathematical parlance, the full probability space) is known, but probabilities are not known for all possible (side) effects. For decisions under ignorance, not even the full range of possible decision outcomes can be determined. Such decisions under ignorance were recently popularized under the slogans "unknown unknowns" (D. Rumsfeld) and "black swans" (N. Taleb). When dealing with the long-term aftermath of a (global) market release of a genetically modified (GM) crop, one presumably faces decisions under ignorance, while in clinical trials one often faces decisions under uncertainty. In neither case does one have well-grounded knowledge of probabilities. Nonetheless, scientific as well as ethical discussions often assume that all outcomes are known with corresponding probabilities and that one hence faces a decision under risk in the narrow sense. Such a reductive approach is criticized in the contemporary decision-theoretic literature for not being attentive to the complexity of real-life (ethical) problems.

Concerning the uncertainty of demarcation, one should note that before one can reason about the consequences of a decision, one must start with some demarcation of the decision itself. The uncertainty of demarcation denotes the fact that it is not established in every decision situation how to determine the decision horizon: the scope of the decision, or even which problem the decision is supposed to solve, might be unclear. Different interest groups often set the decision horizon differently. A typical example is provided by discussions on genetically modified crops. Products like Golden Rice or drought-resistant tobacco plants promise to help solve world hunger. But while the proponents of genetically modified crops often focus on possible benefits for poor countries on a short or medium timescale, opponents zoom in on possible negative side effects due to cross-fertilization or other unintended long-term effects. The uncertainty of demarcation also entails the problem that it can be unclear whether all the available decision options have been identified. The broader the decision horizon (spatially, temporally, as well as concerning the number of decision options), the more likely one faces a decision under ignorance.

Risk and Global Bioethics

Risk and Uncertainties in Global Bioethics from an Epistemic Point of View

The uncertainty global bioethics has to deal with has numerous sources. In deciding on a specific setup of a clinical trial, for example, the question of which information is essential to give to a volunteer needs to be addressed. The answer depends, among other things, on the value system or the ethical theory one embraces. It is not certain whether the potential volunteer shares the same values as the person designing the trial. Such ethical uncertainties may be due to cultural differences, which can be especially poignant in a global context or when bioethics touches on intergenerational issues: the values and preferences of future generations are in part opaque to us. Various epistemic uncertainties also hamper bioethical decision-making. Firstly, one may not be sure whether all decision options have been considered. It may well be that a clinical trial set up to test a new medication turns out to be unnecessary because future research shows that certain trivial lifestyle changes eliminate the disease without any medical intervention. Secondly, the impacts of a decision can be uncertain for various reasons. When bioethics is concerned with the market release of genetically modified plants, for example, their impact depends on their interaction with various parts of the ecological system, a highly complex system that is not sufficiently understood to predict all possible consequences. Even if a genetically modified plant lacks a certain nutrient that is vital for some species, that species will only be adversely affected if it cannot find a substitute for the plant or the nutrient. But the GM plant may also affect the ecological system in other, unforeseen ways (cf. Hillerbrand 2010b). In most fields of biotechnology, the impacts of a particular decision also depend on the decisions of other people, now or in the future.

If all consequences can be assigned some reliable probability of occurrence, one faces a decision under risk in the narrow sense. The probabilities can either be derived inductively from a series of experiments or deductively with the help of a theoretical, often computer-implemented model. Frequently, the utilitarian paradigm of maximizing overall benefit is applied to decisions under risk. This means that one strives to maximize the expected benefit, that is, the benefit times its occurrence probability. This approach is known in the literature as Expected Utility Theory, or EUT for short, and it is by far the most widely used and best-known probabilistic decision model. Quantitative risk analysis uses a form of EUT that focuses on negative consequences rather than benefits, and risk-cost-benefit analysis uses EUT to compare the risks and benefits of different options. In bioethics, risk is often not defined in a one-dimensional way; rather, multiple categories of risk are used. This can partially be accommodated in a multidimensional EUT or risk analysis. However, these approaches judge a decision solely in terms of the harms and benefits that can be expected on average. There are two central problems with this. The first arises when different people bear the costs or the benefits. Consider the case of a clinical trial. On EUT, severe harm to the research subject may be acceptable when the mean expected benefit is high. However, considerations of fairness may well require putting more weight on the research subject's utility than on the utility a person would bear "on average," the average person being a mathematical construct. The second problem EUT faces is even more general. Consider the following two decision situations under risk. In both cases, the costs and benefits are borne by the same person, and in both cases, one knows the full probability distribution of the possible harms. However, the probability functions differ. In the first case, the probability function is Gaussian, i.e., a normal distribution. In the second case, it is a fat-tailed distribution with the same mean as the Gaussian distribution. EUT focuses only on the mean, which means that it would calculate the same expected utility in both cases. However, under the fat-tailed distribution, rare events with possibly high harm are much more likely than under the Gaussian distribution. Hence, it is not at all clear why one should use the mean as the only criterion on which to base one's decision. EUT may provide a reasonable decision approach for decisions under risk where the entire information on the probability distribution is contained in the mean, i.e., when the probability distribution is Gaussian. However, there is no a priori reason why this should be the case for many of the side effects that are at stake in bioethics, ranging from the use of green, white, or yellow GMOs to public health and clinical trials. This holds particularly when side effects touch on complex systems like the biosphere or economic systems. A decision theory incorporating not only the mean but also higher moments of the distribution, which provide information on its shape, may be more adequate.
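The same-mean, different-tails point can be made concrete with a toy calculation. All probabilities and harm values below are invented purely for illustration:

```python
# Two hypothetical harm distributions with the same expected harm.
# dist_a: moderate harms, narrowly spread (a stand-in for the
# Gaussian case). dist_b: almost always harmless, but with a small
# chance of catastrophic harm (a "fat tail" stand-in).
dist_a = [(0.5, 8.0), (0.5, 12.0)]        # (probability, harm) pairs
dist_b = [(0.999, 9.0), (0.001, 1009.0)]  # rare but severe outcome

def expected_harm(dist):
    """Mean harm: sum of probability times harm over all outcomes."""
    return sum(p * h for p, h in dist)

def worst_case(dist):
    """Largest harm that occurs with nonzero probability."""
    return max(h for p, h in dist if p > 0)

print(round(expected_harm(dist_a), 6))  # 10.0
print(round(expected_harm(dist_b), 6))  # 10.0 -- identical mean
print(worst_case(dist_a))               # 12.0
print(worst_case(dist_b))               # 1009.0 -- invisible to EUT
```

A decision rule that looks only at `expected_harm` ranks the two distributions as equivalent, even though one of them admits a catastrophe two orders of magnitude worse than anything the other allows.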

Uncertain situations are often treated as situations under risk by reducing uncertainty to risk via the assignment of subjective probabilities. This is known as the Bayesian approach. However, caution is required, as this presupposes sufficient updating of the prior probabilities with incoming information, which may be impossible in practice. Hence, the prior probability distributions may lead to under- or overestimating the actual negative consequences. Decisions under ignorance give rise to additional problems, as in those cases relevant consequences may remain unconsidered because they are not (yet) known.
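How a subjective prior can dominate sparse evidence can be sketched with a standard Beta-binomial update (a textbook conjugate model; the trial numbers are hypothetical):

```python
# A standard Beta-binomial update (conjugate prior), used here only
# to illustrate how a strong prior can swamp sparse trial data.
def posterior_mean(alpha, beta, events, trials):
    """Posterior mean of an adverse-event rate under a Beta(alpha, beta)
    prior, after observing `events` adverse events in `trials` trials."""
    return (alpha + events) / (alpha + beta + trials)

# Optimistic prior: roughly "one adverse event per 1000 trials expected".
optimistic = posterior_mean(1, 999, events=5, trials=20)
# Neutral (uniform) prior over the event rate.
neutral = posterior_mean(1, 1, events=5, trials=20)

# The observed frequency is 5/20 = 0.25, yet under the optimistic
# prior the estimate barely moves: the harm rate is underestimated.
print(round(optimistic, 4))  # 0.0059
print(round(neutral, 4))     # 0.2727
```

With only 20 observations, the choice of prior, not the data, determines the estimate, which is exactly the worry about insufficient updating raised above.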

In cases where not all probability estimates are known, or in cases where possible consequences may be so severe (e.g., when the extinction of humankind is at stake) that their actual occurrence probability seems irrelevant for ethical reasoning, some authors argue for focusing on the worst-case scenario only and avoiding it at all costs. This decision guideline is referred to as a particular version of the precautionary principle. Note, however, that the term "precautionary principle" is fraught with ambiguity. In a juridical context, particularly in European Union legislation, the notion is fairly vague and mainly used to stress that scientific prognoses are uncertain. In decision theory, decision models like the precautionary principle are sometimes referred to as "minimax" strategies: minimize the maximal harm that can be imagined. However, if the worst-case scenario does not occur, this decision option might be far from optimal. Consider the following example of a GM product designed for curing or preventing AIDS. It may not be feasible to conduct extensive testing in order to completely rule out the highly unlikely chance of a change in DNA that might in the long run constitute a major threat to the existence of humanity. The precautionary principle or minimax rule would then entail that, no matter how high the possible benefits of such a treatment, it is forbidden to launch this GM technology.
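The divergence between EUT and the minimax rule described above can be sketched as follows; the two options and their payoffs are invented for illustration:

```python
# Each option maps to a list of (probability, harm) outcomes.
# "launch": low expected harm, but a tiny chance of catastrophe.
# "forgo": a certain, moderate harm (e.g., benefits forgone).
options = {
    "launch": [(0.99999, 0.0), (0.00001, 1_000_000.0)],
    "forgo": [(1.0, 50.0)],
}

def expected_harm(outcomes):
    return sum(p * h for p, h in outcomes)

def max_harm(outcomes):
    return max(h for p, h in outcomes if p > 0)

# EUT: choose the option with the lowest expected harm.
eut_choice = min(options, key=lambda o: expected_harm(options[o]))
# Minimax (precautionary): choose the option whose worst case is least bad.
minimax_choice = min(options, key=lambda o: max_harm(options[o]))

print(eut_choice)      # launch (expected harm ~10 vs. 50)
print(minimax_choice)  # forgo (worst case 50 vs. 1,000,000)
```

The two rules recommend opposite options: EUT accepts the tiny chance of catastrophe for a lower mean harm, while minimax forgoes the technology regardless of how small that chance is, which is precisely the objection raised against the minimax rule in the text above.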

Depending on the various types of uncertainties, different decision approaches are adequate. There is no silver bullet that can deal with all types of practical problems bioethics faces, nor can formal decision theory satisfactorily accommodate uncertainties and ignorance. Formal approaches cannot do justice to the ethical complexities involved in decision-making under risk and uncertainty. This has also been argued for by psychologists, social scientists, and philosophers alike, as will be discussed in more detail in the next subsection.

Risk Perception, Moral Values, and Emotions in Global Bioethics

New biotechnologies and health interventions often give rise to intense public debates. Frequently, society is alarmed and worried about the risks and uncertain negative consequences of such new technologies or interventions. Experts, by contrast, assure the public that the risks are negligible, based on scientific studies and formal methods.

However, as discussed in the previous sections, risk scholars agree that ethical considerations should supplement formal methods in risk assessment. Risk ethicists have in recent years proposed various approaches that do justice to contextual aspects of risk and that complement formal decision approaches. For example, Rawlsian approaches to risk are intended to include the perspectives of different stakeholders in risk assessment. Contextualist ethical theories draw attention to the complex trade-offs that are involved in decision-making under risk and uncertainty. Virtue epistemology and virtue ethics approaches have been proposed specifically to deal with issues of responsibility that are especially complex and important in the context of risk and uncertainty (cf. section "Risk and Responsibility in Global Bioethics"). These various approaches emphasize the notions of autonomy, justice, fairness, and care as important ethical aspects of risks that are not included in formal approaches to risk and uncertainty (cf. Roeser et al. 2012 for discussions of a variety of approaches in risk ethics).

Biotechnologies and medical technologies can affect the well-being of people for better or worse in many ways that are impossible to capture a priori in formal approaches to risk. For example, public health policies are usually modeled on Expected Utility Theory (EUT), as described in the previous subsection. This leads to approaches that try to optimize the average outcome for the largest number of people. However, such an approach does not do justice to people who are, in statistical terms, outliers, and it can lead to unfair distributions of risks and benefits and to violations of autonomy. Biotechnologies are often developed by wealthy, industrialized countries but can have an impact on people around the world. For example, GM crops developed by Western multinationals can have huge benefits, also from a global perspective, by leading to more efficient yields, but they can also cause harm by diminishing local markets and the independence of farmers. The effects of new biotechnological developments are often hard to foresee, requiring precautions and further research.

From the work by Paul Slovic (2000), one can see that similar considerations such as those highlighted by ethicists play a role in the risk perceptions of laypeople. For example, laypeople find the following considerations important: whether people have sufficient knowledge to oversee possibly negative consequences and whether risky activities are freely chosen and distributed fairly. Slovic has argued that laypeople have a different rationality than experts, but one that is equally legitimate. This idea also underlies approaches to participatory technology assessment that have been developed by social scientists and that aim at involving people in decision-making about risky technologies. These approaches are aimed to contribute to democratic legitimacy, by respecting people’s autonomy (cf. Krimsky and Golding 1992). This resonates with the centrality of autonomy and informed consent in bioethics. Furthermore, some approaches also aim at including laypeople’s perspectives as an important source of reflection on qualitative aspects of risks.

However, Slovic and colleagues have also found that laypeople’s risk perceptions are largely influenced by their emotions. Slovic and colleagues call this the “affect heuristic” or “risk as feeling,” which should be corrected by “risk as analysis,” in other words, scientific and formal methods (Slovic 2000). These claims suggest that formal, technocratic approaches to risk are more reliable than public risk perceptions, due to their emotional nature.

It is very common to see emotions as opposed to reason and rationality, both in public debates and in the academic literature on decision-making. In the empirical literature, this approach is called Dual Process Theory (cf. Kahneman 2011). According to Dual Process Theory, the mind works through two distinct systems. System 1 is intuitive, affective, and largely unconscious, leading to fast but irrational gut reactions. System 2 is rational, analytical, and deliberative, but also slow and lazy, as it requires substantial effort. Emotions are taken to fall under system 1 and reason under system 2. Kahneman, Slovic, and others argue that system 2, given its higher reliability, is to be preferred in decision-making.

However, emotion research can cast doubt on the model of Dual Process Theory. Philosophers and psychologists who study emotions challenge the commonly accepted dichotomy between reason and emotion. Cognitive theories of emotions hold that emotions are not only affective states but also have a cognitive aspect, making them a possible source of knowledge. Several philosophers have argued that emotions are (important for) judgments of value (cf. Roeser and Todd 2014).

Based on such an alternative theory of emotions, it can be argued that emotions are not a threat to decision-making about acceptable risk; rather, they are an important prerequisite in order to grasp ethical aspects of risk that do not figure in quantitative approaches to risk. For example, fear can remind people of the ethical problems resulting from the uncertainty accompanying new biotechnological developments and the unforeseen threats they might pose to people’s well-being. Emotions such as sympathy and compassion can alert people to violations of global justice in the case of unfair distributions of risks and benefits of biotechnologies. Indignation and resentment can indicate violations of autonomy when risks are imposed on people against their will, for example, in the case of a medical trial or in the case of the implementation of a new biotechnology. Disgust can highlight the ambiguous moral status of, for example, clones and human-animal hybrids. Positive emotions such as enthusiasm and hope can also play an important role in highlighting benefits for well-being resulting from a biotechnology or medical treatment. Hence, emotions are not necessarily a threat to decision-making about risk; rather, by being closely related to people’s values, they highlight important ethical aspects of risk that do not play a role in quantitative approaches to risk. Addressing emotions in deliberation and communication about risks can also contribute to needed changes in behavior, as emotions are intrinsically motivating. The role of emotions is not confined to the public. Experts often feel responsible for the technologies they develop, which can help them to be aware of ethical implications and to strive for morally acceptable solutions (for all this, cf. Roeser 2012). This also relates to the next subsection which discusses risk and responsibility.

Risk and Responsibility in Global Bioethics

Like risk, responsibility is a central notion in today's society, and like risk, it is a complex concept. Furthermore, the two concepts are intertwined. The concept of responsibility explicitly or implicitly underlies risk debates. Whereas human beings have always been faced with "dangers" and "hazards," the transition to using the concept of "risk" is partly related to how people conceive of control and responsibility for risk. Dangers are often seen as uncontrollable and as part of nature, and natural dangers are rarely seen as events for which someone is responsible. By talking about risk, a certain degree of control and hence responsibility is presupposed (e.g., Luhmann 1990). Even if the difference between danger and risk is not undisputed, the way people use the concepts tells us something about their expectations when it comes to managing risks.

Health-related risks are clearly connected to notions of moral responsibility. In previous times, diseases were viewed as part of one’s fate, and whether one would be cured was dependent on God’s will. Today, people trust that medical and health-care professionals have some degree of control and responsibility to provide diagnosis and treatment. However, people also view diseases increasingly through the lens of individual responsibility, i.e., as more or less connected to lifestyle and personal choices. Smokers and obese people are often considered to lack discipline and responsibility.

The links between risk, health, and a personal responsibility to reduce risks and prolong one's life and the lives of one's children are pervasive in contemporary society. There are daily reports in the media, and information provided by government authorities, telling people whether different food products and lifestyle choices increase or decrease the risk of getting cardiovascular diseases or cancer. The underlying idea is that individuals have a responsibility to manage risks to their own and their children's health. Because health is also connected to social status and often seen as the most important value in life, this adds to the pressure on individuals to do their utmost to stay healthy for longer. Underlying the idea that someone is responsible is the notion that they have control over their health. Recent findings in epigenetics suggest that people might be able to influence the expression of their genes. This may add even more weight to the argument that individuals are responsible for their own health and the health of their offspring.

The core principle of Western bioethics is autonomy (Beauchamp and Childress 2009). Against this background, the notion of personal responsibility for health is not surprising. In contrast to earlier, more paternalistic ideas about the doctor-patient relationship, the patient is now seen as autonomous and competent to decide for herself about treatments and the withdrawal of treatments. In a time when information is everywhere, the idea of the autonomous patient who makes an informed choice and takes responsibility for her own health is becoming even stronger.

The concept of moral responsibility is closely related to autonomy. Autonomy refers to someone's right or competence to make decisions regarding their health. The right of patients to make decisions concerning treatment underlies health-care systems, at least in industrialized countries. A common idea is that people should always be provided with all relevant information in order to decide for themselves, thereby exercising their right to autonomy. However, this raises the question of whether individuals should also be held responsible for the outcome. If an agent could have chosen a specific vaccine but refused it and then gets ill, is she to be held responsible for being ill? Similarly, "personalized medicine," based on the new possibilities provided by research in genetics that facilitate "tailor-made" treatments and advice, raises questions concerning individual and collective responsibility for health.

However, the idea of the autonomous and responsible individual is a product of the Western liberal and relatively secular part of the world. Religious and cultural differences are bound to affect notions of risk and responsibility. In addition, socioeconomic and political differences affect what is reasonable in terms of taking responsibility for health. To view a poor mother of ten children living in rural India as responsible for her own and her children’s health is not unproblematic. Consequently, although the notion of personal responsibility is not uncontroversial in wealthy nations, it is even more complicated and questionable in the context of low-income nations. Health inequalities are present and need urgent attention (Anand et al. 2006).

These considerations show the complexity of the concept of responsibility. What does it mean to say that someone is responsible for her own health and for the risks in her life? In the philosophical literature, moral responsibility is usually associated or equated with blaming (and praising) agents for actions they have caused. However, whereas notions of blameworthiness are ancient, the modern concept of responsibility is more complex.

Ideas concerning moral responsibility often presuppose causal responsibility, i.e., an agent can only be held responsible for that which she caused. The converse does not necessarily hold, i.e., an agent is not necessarily held morally responsible for everything she causes. Moral responsibility is usually considered to require voluntariness as a further condition: in order to be responsible for something, an agent should have a certain amount of control, i.e., she must be free to do what needs to be done.

Hence, causation and control are central notions in assigning responsibility. However, in the case of risk and uncertainty, this is often complicated, as the outcomes of one's actions are uncertain or even unknown. The causal and voluntary background of health risks and health problems is highly complex. The extent to which a disease, for example, is caused by genetic makeup and environment, behavior, or "lifestyle" is as yet uncertain.

Furthermore, responsibility can be either backward looking or forward looking (Nihlén Fahlquist 2006, 2009). Backward-looking responsibility concerns effects someone has caused with actions in the past. Forward-looking responsibility concerns actions yet to come. Whereas one can change one’s behavior in the case of forward-looking responsibility, one cannot do so in the case of backward-looking responsibility.

It appears reasonable to expect individuals to take some responsibility for their health, especially in a welfare state where health costs are shared collectively. The very idea of the welfare state involves a notion of collective responsibility for health. Arguably, forward-looking responsibility for health should be promoted. Intuitively, health is an important value and there are good reasons to think that people should take care of themselves. However, if individuals are blamed when getting ill, i.e., held responsible in a backward-looking sense, this is more problematic. This is particularly worrisome against the background of socioeconomic differences, nationally and globally.

So, while there is an intuitive link between risk and responsibility, it can be hard to establish in concrete cases whether an individual is responsible for a risk. This entails that one should be cautious with ascriptions of blame. On the other hand, there are many contexts in which people are responsible and should take responsibility and act accordingly. This can be the case concerning their own lives as well as concerning actions that have potential impact for other people, for example, when it comes to local or global environmental and climatic change. In such contexts, moral emotions such as feelings of responsibility, guilt, blameworthiness, and shame, as well as positive emotions such as enthusiasm and hope, can be important sources of moral awareness and motivation to act accordingly.

Furthermore, experts have a unique responsibility to develop morally responsible medical treatments and biotechnologies in which risks are treated adequately but which also respect autonomy and considerations of justice and fairness. Policy makers have a responsibility to develop policies that regulate risks in an ethically sound way. For example, in the context of public health, policy makers should supplement expected utility theory (EUT) with considerations of justice, fairness, and autonomy, and with sensitivity to the difficulties that people may face when trying to optimize their health. Hence, biotechnology experts, medical experts, and policy makers should broaden their view of risk from purely quantitative approaches to ones that also take ethical considerations into account, in order to live up to the responsibilities that come with their expertise and societal role.
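The contrast drawn here between EUT and broader ethical considerations can be made concrete. In its standard textbook form (a generic formulation, not specific to this entry), expected utility theory ranks options purely by probability-weighted aggregate utility:

```latex
\mathrm{EU}(a) \;=\; \sum_{o \in O} P(o \mid a)\, u(o),
\qquad
a^{*} \;=\; \arg\max_{a \in A} \mathrm{EU}(a)
```

where \(A\) is the set of available actions, \(O\) the set of possible outcomes, \(P(o \mid a)\) the probability of outcome \(o\) given action \(a\), and \(u\) a utility function. Because the sum aggregates over outcomes regardless of how benefits and burdens are distributed across persons, considerations such as justice, fairness, and autonomy are invisible to this criterion; this is why such considerations must be added as explicit supplements rather than expected to emerge from the calculation itself.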

Conclusion

This contribution has discussed the relationship between risk and global bioethics. It has pointed out that scholarship on risk is a fairly young academic field that can shed important light on global bioethics, given the uncertain outcomes of many developments in biotechnology and health. Risk and uncertainty give rise to intricate moral issues that cannot be sufficiently covered by conventional, deterministic ethical theories. On the other hand, formal approaches to risk have to be supplemented with explicit ethical considerations such as justice, fairness, and autonomy. These considerations play a role in laypeople’s more intuitive and emotional perceptions of risk, which means that public risk perceptions and emotions can make an important contribution to ethical deliberation about risk. Furthermore, the particularly complex interrelationship between risk and responsibility has been discussed, given frequently uncertain causes and interaction effects. The study of risk can hence make important contributions to the field of global bioethics.

References

  1. Anand, S., Peter, F., & Sen, A. (2006). Public health, ethics and equity. Oxford: Oxford University Press.
  2. Beauchamp, T. L., & Childress, J. F. (2009). Principles of biomedical ethics. Oxford: Oxford University Press.
  3. Hansson, S. O. (1996). Decision-making under great uncertainty. Philosophy of the Social Sciences, 26, 369–386.
  4. Hillerbrand, R. (2010a). On non-propositional aspects in modeling complex systems. Analyse & Kritik, 32, 107–120.
  5. Hillerbrand, R. (2010b). Unintended consequences and risky technologies: A virtue-ethical approach to the moral problems caused by genetic engineering. In D. Pavlich (Ed.), Environmental justice and global citizenship (pp. 167–183). Amsterdam/New York: Rodopi.
  6. Kahneman, D. (2011). Thinking, fast and slow. New York: Farrar, Straus and Giroux.
  7. Krimsky, S., & Golding, D. (Eds.). (1992). Social theories of risk. Westport: Praeger.
  8. Luhmann, N. (1990). Technology, environment, and social risk: A systems perspective. Organization and Environment, 4, 223–231.
  9. Nihlén Fahlquist, J. (2006). Responsibility ascriptions and public health problems: Who is responsible for obesity and lung cancer? Journal of Public Health, 14(1), 15–19.
  10. Nihlén Fahlquist, J. (2009). Moral responsibility for environmental problems – individual or institutional? Journal of Agricultural and Environmental Ethics, 22(2), 109–124.
  11. Roeser, S. (2012). Moral emotions as guide to acceptable risk. In S. Roeser, R. Hillerbrand, M. Peterson, & P. Sandin (Eds.), Handbook of risk theory (pp. 819–832). Dordrecht: Springer.
  12. Roeser, S., & Todd, C. (Eds.). (2014). Emotion and value. Oxford: Oxford University Press.
  13. Roeser, S., Hillerbrand, R., Peterson, M., & Sandin, P. (Eds.). (2012). Handbook of risk theory. Dordrecht: Springer.
  14. Shrader-Frechette, K. (1991). Risk and rationality: Philosophical foundations for populist reforms. Berkeley: University of California Press.
  15. Slovic, P. (2000). The perception of risk. London: Earthscan.

Further Reading

  1. Asveld, L., & Roeser, S. (Eds.). (2009). The ethics of technological risk. London: Earthscan/Routledge.
  2. Fischhoff, B., & Kadvany, J. (2011). Risk: A very short introduction. Oxford: Oxford University Press.
  3. Lewens, T. (Ed.). (2007). Risk: Philosophical perspectives. London: Routledge.
  4. Roeser, S. (Ed.). (2010). Emotions and risky technologies. Dordrecht: Springer.

Copyright information

© Springer Science+Business Media Dordrecht 2015

Authors and Affiliations

  • Sabine Roeser (1), email author
  • Jessica Nihlén Fahlquist (1, 2)
  • Rafaela Hillerbrand (3)

  1. Philosophy Department, TU Delft, Delft, Netherlands
  2. Centre for Research Ethics & Bioethics, Uppsala University, Uppsala, Sweden
  3. Institut für Technikfolgenabschätzung und Systemanalyse (ITAS), Karlsruher Institut für Technologie (KIT), Karlsruhe, Germany