Is reflective equilibrium (RE) a method that can be used in an insightful and fruitful way to justify principles or theories? In order to answer this question, and to gain further insights into the applicability of RE, I will conduct a case study in which I test whether RE can be used to formulate and justify a precautionary principle (PP). The present chapter describes the setup of this case study, whereas the application itself takes place in Chaps. 6–8.

5.1 Objectives and Overview

In Chap. 3, I proposed to spell out the method of reflective equilibrium as starting from an initial position and then proceeding in two alternating steps of adjusting commitments and system. In order to apply RE, one has to identify the elements of the initial position, and to specify the criteria of reflective equilibrium with respect to the particular justificatory project.

The first step is thus to clarify my pragmatic-epistemic objective and the subject matter, before specifying the method and describing the input. As this is a case study for reflective equilibrium as a method, it has two goals: one that is pursued within the application of RE, and one that is pursued with it as a case study for RE.

Pragmatic-Epistemic Objective in the Case Study:

Justifying an action-guiding moral principle that is applicable to the subject matter of precaution and precautionary decision-making (see Chap. 4).

Objective of the Case Study:

Testing whether, and how, RE can be implemented as a method; and what we can learn about the theoretical foundations of RE by putting it into practice.

Having these two goals has certain consequences for the case study, as the method and its applicability (goal 2) are the main concern. To make the application of RE feasible and comprehensible, I will work with plausible simplifications and stipulations with respect to the content of the case study, e.g., only taking into account a very limited amount of (empirical) background information and only examining a restricted (but hopefully exemplary) set of commitments. Because of these restrictions, we cannot expect that a justified precautionary principle will result. But the general structure and process should be exemplary and help to identify needs for modification, i.e., how to continue working towards a justified position.

The structure of this chapter is as follows: I start by specifying the method of RE in Sect. 5.2, i.e., by concretizing the two RE steps through defining measures for the RE criteria. I then describe the selection of initial input commitments in Sect. 5.3, elements of the background in Sect. 5.4, my preliminary selection of theoretical virtues in Sect. 5.5, and candidates for the system in Sect. 5.6. Section 5.7 recapitulates the main points of the setup, and sketches the way ahead.

The elements of an RE process can quickly become hard to keep in mind. To keep the description manageable and comprehensible, I only describe exemplary or relevant aspects of the setup. The complete list of all the elements of the setup and of the RE process can be found in the appendix starting on p. 245.

5.2 Specifying the Criteria and Steps of Reflective Equilibrium

In Chap. 3, I developed a methodology for obtaining a method of RE based on the theoretical conception described in Chap. 2. I suggested that the RE process of adjustments can be structured in the form of two alternating kinds of steps:

Adjusting Commitments:

Keeping the system constant, find the set of commitments that maximizes the combination of (i) agreement with the current system, (ii) independent credibility, (iii) respect for input commitments, and (iv) support from background theories.

Adjusting the System:

Keeping the current set of commitments constant, find a system that maximizes the combination of (i) agreement with the current commitments, (ii) theoretical virtues, and (iii) support from background theories.

To specify the method for particular pragmatic-epistemic projects, we thus have to specify the various RE criteria, i.e., to define how they should be measured and how potential trade-offs should be handled. The following three main tasks were identified in Chap. 3:

  • Define the RE criteria as exactly as possible while keeping them informative enough for the project at hand (i.e., the subject matter and pragmatic-epistemic objective in question, as well as the available resources);

  • Give a preliminary weighting of the different criteria, noting if any of them are more important with respect to the pragmatic-epistemic objective;

  • Concretize the two alternating steps of the process by inserting the so-defined criteria.

When specifying the criteria, my focus is on obtaining implementable criteria which are assessable in the practical application of the case study without making the process too technical. The goal is to work with plausible approximations of the RE criteria which can, based on the results of the case study, also serve as the basis for further elaboration and more refined specifications.

I will bracket the assessment of support from background theories at each step. Instead, I will use background theories as potential tie-breakers in case of trade-offs that are difficult to resolve, and to assess relatively well-advanced positions with respect to whether or not they are in a state of reflective equilibrium.

The other criteria I specify as follows:

Independent Credibility of Commitments:

Each commitment is assigned a weight that gives a rough indication of its (independent) credibility: commitments either have a low, medium, or high weight. This ranking is only ordinal, i.e., it expresses neither that two commitments with a high weight necessarily have the exact same degree of independent credibility, nor that the difference between low and medium is the same as the difference between medium and high.

Agreement between System and Commitments:

I specify the relation of agreement as Account, which is measured between a candidate system Sn and the set of current (explicit) commitments Cn. To measure account, a value is assigned for each commitment c ∈ Cn, depending on the kind of relation between Sn and c:

  • conflict (Sn implies ¬c): −2

  • consistent non-account (neither c nor ¬c is implied by Sn): −0.5

  • partial account (Sn implies part of c): +1

  • full account (Sn implies c): +2

A weighted sum is then formed by first multiplying each such value with a factor that depends on the weight assigned to c:

  • low weight: ×1

  • medium weight: ×2

  • high weight: ×3

A commitment is fully accounted for by the system if it can be inferred from the system via valid deductive or non-deductive arguments. These arguments can include background information or be supported by background theories. A commitment can be partially accounted for if a part of it can be inferred from the system. This is possible, for example, in the case of general commitments: in order to fully account for a general commitment, the system would have to allow us to infer everything that can also be inferred from the general commitment. If, for instance, you have the commitment “You should never lie”, then the principle “One should not lie if it will harm another person” will partially account for the commitment. However, to count as partial account, it is important that the principle stays silent on the part of the commitment that it does not account for: if the principle were “One should not lie if and only if it will harm another person”, it would conflict with the commitment that you should never lie.

I assigned the numerical values based on the following considerations: we are aiming for full agreement between system and commitments, meaning that full account should be valued highest and conflict should receive the highest penalty. While consistency is a necessary condition for agreement, I decided to assign a small penalty for consistent non-account, as we want there to be some positive connection between system and commitments, i.e., something more than mere consistency. (Consistent) partial account is valuable—it can be an indicator that the current system gets something right, but of course it is less valuable than full account.

While the weights of the commitments are ordinal, as stated above, I still decided to assign numerical values to them in order to be able to take them into consideration when measuring account. Thus, they are effectively measured on an interval scale, but one has to take this measurement with a grain of salt—this account function only works as a rough indicator of how well competing candidates for the system are able to account for a given set of current commitments.
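To make this concrete, here is a minimal Python sketch of the account measure; all names are my own illustrative choices, and it presupposes that the relation and weight of each commitment have already been judged:

```python
# Minimal sketch of the account measure; all names are illustrative.
RELATION_VALUES = {
    "conflict": -2.0,         # Sn implies not-c
    "non_account": -0.5,      # Sn implies neither c nor not-c
    "partial_account": 1.0,   # Sn implies part of c
    "full_account": 2.0,      # Sn implies c
}
WEIGHT_FACTORS = {"low": 1, "medium": 2, "high": 3}

def account_score(judged_commitments):
    """Weighted sum over (relation, weight) pairs for one candidate system."""
    return sum(RELATION_VALUES[relation] * WEIGHT_FACTORS[weight]
               for relation, weight in judged_commitments)

# Comparing two candidate systems against the same three commitments:
s1 = [("full_account", "high"), ("conflict", "medium"), ("non_account", "low")]
s2 = [("partial_account", "high"), ("full_account", "medium"), ("non_account", "low")]
print(account_score(s1))  # 2*3 + (-2)*2 + (-0.5)*1 = 1.5
print(account_score(s2))  # 1*3 + 2*2 + (-0.5)*1 = 6.5
```

As emphasized above, such scores are only rough indicators for comparing how well candidate systems account for a given set of commitments, not genuine interval-scaled measurements.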

Respecting Input Commitments:

When adjusting a set of current commitments Cn with respect to a current system Sn, each adjustment towards increasing account is lexically constrained by the criterion that current commitments have to respect input commitments: an input commitment can only be adjusted if it can be plausibly argued that its independent credibility is outweighed or negated by other considerations.Footnote 1

An input commitment ic is respected in Cn iff either:

  • ic ∈ Cn, or

  • ic ∉ Cn, but there is a plausible argument for why a current commitment c ≠ ic should replace ic,Footnote 2 or

  • ic ∉ Cn, and there is a plausible argument for why ic does not belong to the subject matter (i.e., for why it is not relevant whether or not the target system can account for it).Footnote 3

Whether or not the independent credibility of an input commitment is respected depends partly on how plausible the reasons for its adjustment are when assessed against the whole position (i.e., how well the system does justice to theoretical virtues, and how well commitments and system agree overall). This means that whether or not the independent credibility of a commitment ic is respected by, e.g., a specific commitment c ≠ ic (or the current set of commitments Cn as a whole), might change during the progress of the RE process and always has to be assessed anew.

It might seem too strict to give lexical priority to the criterion of Respecting Input Commitments when adjusting commitments—perhaps it will sometimes be necessary to explore various routes of adjustments before being able to vindicate the adjustment of a specific input commitment. But it is important to note that commitment refers to a specific epistemic state, i.e., not simply the content of a sentence, but being committed to what this sentence expresses. That an input commitment must not be adjusted until there is a plausible argument for this adjustment does not preclude the option of tentatively exploring what the consequences of adjusting this commitment would be for the position, and whether or not, looking back, we can defend adjusting the input commitment from the position that we ultimately reached. But until we can provide such an argument, the position will be “in the air”: we are only tentatively trying out what would happen if we were to adjust the input commitment, without being able to defend said adjustment.
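As a rough illustration, the respecting-condition can be encoded as follows (a Python sketch of my own; the “plausible argument” clauses are modeled as simple flags, since their assessment is substantive and cannot be computed):

```python
def respected(ic, current_commitments, replacement_argued, exclusion_argued):
    """Sketch of the respecting-condition for an input commitment ic.

    replacement_argued / exclusion_argued record whether a plausible
    argument for replacing ic, or for excluding it from the subject
    matter, is currently available -- judged outside the code.
    """
    return (ic in current_commitments             # ic is retained, or
            or replacement_argued.get(ic, False)  # replaced with an argument, or
            or exclusion_argued.get(ic, False))   # excluded from the subject matter

# Example: IC1 retained, IC2 replaced with an argument, IC3 neither.
current = {"IC1", "IC2*"}
print(respected("IC1", current, {}, {}))             # True
print(respected("IC2", current, {"IC2": True}, {}))  # True
print(respected("IC3", current, {}, {}))             # False
```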

Doing Justice to Theoretical Virtues:

The target system should have theoretical virtues which can be measured at least on ordinal scales. The specific virtues, and how they are measured and weighed, are described in Sect. 5.5.

Having spelled out the criteria, let us now concretize the two steps of the process of adjustments:

Step An+1: Adjusting the System:

Adjust (a part of) the current system, Sn, with respect to (a subset of) the current commitments Cn. For this step, at least the following considerations are relevant:

(i):

Assess and rank candidate systems with respect to how well they can account for current commitments Cn.

(ii):

Assess and rank candidate systems with respect to their theoretical virtues.

(iii):

Assess and rank candidate systems with respect to how well they can account for current commitments and do justice to theoretical virtues—ideally, a complete ordering will result; if not, describe the partial orderings that can be made and the trade-offs involved.

(iv):

Based on the results of (i)–(iii), adopt a system Sn+1 in order to continue the process, and provide reasons for why this candidate was chosen: if a candidate is Pareto-optimal with respect to account and the theoretical virtues, it has to be chosen. If no such candidate is available and trade-offs have to be made, they have to be defensible with respect to (a) their effects on the position as a whole, and (b) the pragmatic-epistemic objective.
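For the Pareto condition in (iv), here is a minimal illustrative check (the scoring vectors, which combine an account score with virtue ratings, are hypothetical):

```python
def dominates(x, y):
    """x is at least as good as y on every criterion and strictly better on one."""
    return all(a >= b for a, b in zip(x, y)) and any(a > b for a, b in zip(x, y))

def pareto_optimal(candidates):
    """Filter (name, scores) pairs; scores = (account, virtue_1, virtue_2, ...)."""
    return [(n, s) for n, s in candidates
            if not any(dominates(t, s) for _, t in candidates)]

# S1 dominates S2; S1 and S3 trade account against a virtue, so both remain.
candidates = [("S1", (6.5, 2, 3)), ("S2", (1.5, 2, 2)), ("S3", (5.0, 3, 3))]
print(pareto_optimal(candidates))  # [('S1', ...), ('S3', ...)]
```

When several candidates remain Pareto-optimal, the choice between them is exactly the kind of trade-off that, as stated in (iv), has to be defended with respect to the position as a whole and the pragmatic-epistemic objective.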

Step Bn+1: Adjusting Commitments:

Adjust (a subset of) the current commitments Cn with respect to (a part of) the current system Sn+1 by using the following strategy:

(i):

For each commitment that is not fully accounted for: is there a way to adjust it that increases account while fulfilling the respecting-condition? If yes, adjust it; otherwise, keep the commitment.Footnote 4

(ii):

Check whether all previous adjustments of input commitments still meet the respecting-condition. If one does not, replace it by a commitment that does respect the independent credibility of the input commitment (this can also be the original input commitment).

(iii):

Systematically explore: Are there further relevant commitments that, e.g., might conflict with the current system? If yes, add them to Cn+1 (but do not yet adjust them).Footnote 5

(iv):

As a result of (i)–(iii), adopt a new set of current commitments Cn+1 to continue the process, i.e., select a set of commitments that maximizes agreement with the current system Sn+1 while respecting the independent credibility of input commitments.
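Viewed schematically, the two steps alternate as in the following skeleton. This is only a sketch: the functions passed in stand for the substantive assessments described above, which in the case study are carried out argumentatively, not computed.

```python
def re_process(input_commitments, candidate_systems,
               adjust_system, adjust_commitments, max_rounds=100):
    """Alternate Step A and Step B until neither changes the position.

    adjust_system and adjust_commitments are supplied judgment procedures,
    standing in for the assessments and arguments described in the text.
    """
    commitments, system = list(input_commitments), None
    for _ in range(max_rounds):
        new_system = adjust_system(commitments, candidate_systems)       # Step A
        new_commitments = adjust_commitments(commitments, new_system,
                                             input_commitments)          # Step B
        if new_system == system and new_commitments == commitments:
            break  # end point reached; now assess the full set of RE criteria
        system, commitments = new_system, new_commitments
    return system, commitments
```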

As noted above, I decided not to explicitly include the assessment of support from background theories at each step. It can of course serve as tie-breaker in cases of trade-offs, and arguments referring to the background might often play a role when deciding between different possible adjustments. However, when the process of adjusting commitments and systems alternately comes to an end point—that is, when neither of the two steps leads to any further improvement of the position—we need to assess the resulting position with respect to all of the RE criteria, which includes the degree of support from background theories. As explained in Chap. 3, we then have to ask the following questions and assess to what degree the criteria are met:

  • Are the resulting commitments and the system in agreement?

  • Can the position be supported by background theories?

  • Does the system do justice to theoretical virtues?

  • When comparing input commitments and resulting commitments, is it plausible that we did not abandon the subject?

  • Do the resulting commitments have independent credibility?

  • Is the resulting position at least as plausible as relevant alternatives?

Having thus specified the method for its application, let us now turn to the description of the starting position, i.e., the input that we will work with.

5.3 Initial Input Commitments

In line with what was said in Chap. 3, I made a selection of initial input commitments that I deem representative, or at least representative enough to start the process of adjustments with them. The initial commitments are the subset of the input commitments that enters the RE process as explicit input in the first step. The input commitments constrain the subject matter since they are what the target system has to respect.Footnote 6 Consequently, it makes sense that they do not consist only of case-specific, particular judgments that we are committed to. On the contrary, as the debate about precautionary principles and precaution (see Chap. 4) shows, we often seem to be quite confident about general statements like “uncertainty should not be a reason for inaction in the face of severe harm”, or “the environment should be protected from serious irreversible harm, even if it is not certain that this harm would occur”—but we are less confident when it comes to deciding what the actual consequences are for individual decisions: if an action against a specific uncertain harm is known for certain to be very costly, should we still take it? If a genetically modified crop could be used to avert an impending hunger catastrophe, but there is a chance that its use will also have irreversible negative effects, e.g., on biodiversity, should we avoid using it?

As there are lots of different cases for which precaution is relevant, I decided to use the climate engineering strategy of solar radiation management (SRM) through stratospheric aerosol injections (SAI) as an example, so as to gain some focus for my selection of commitments. To make the selected commitments comprehensible, we need some background on this climate engineering strategy, which I describe in the following, before listing some examples of the selected commitments. What follows is thus technically part of the background, which I address only in Sect. 5.4, but we need this information already in order to interpret some of the commitments correctly.

5.3.1 An Illustrative Example: Precautionary Principles and Solar Radiation Management

Climate change is often cited as one of the paradigm cases where a precautionary principle should apply (see, e.g., Gardiner 2006). Especially alarming is the possibility of so-called “runaway climate change”, or “climate emergencies” (Blackstock et al. 2009). This refers to the possibility of passing certain thresholds that might accelerate climate change dramatically. Because of the possibility of climate emergencies, even radical mitigation and adaptation measures might not be enough to avert catastrophic climate change impacts.

As a reaction, a range of technological approaches to alleviate the causes and/or effects of climate change have been suggested under the label of “geoengineering” or “climate engineering”. Typically, a distinction is made between so-called “carbon dioxide removal (CDR)” and “solar radiation management (SRM)” strategies. CDR strategies aim at removing greenhouse gases from the atmosphere, e.g., via technological means or via more “traditional” means like reforestation. SRM refers to technologies and measures with the goal of reducing global warming by enhancing the reflectivity of the earth. Examples range from painting roofs white to putting reflective aerosols into the stratosphere or even deploying space mirrors. Since the proposed measures differ widely with respect to, e.g., their scale, costliness, speed of bringing about sizeable effects, associated uncertainties, and possible side-effects, relating a PP to climate engineering in general is difficult (Elliott 2010).

Thus, I will specifically focus on large-scale SRM measures, i.e., stratospheric aerosol injections (SAI). While it is expected that this kind of SRM could cool the earth rapidly and cancel increases in global average temperature caused by high concentrations of greenhouse gases (GHGs), it would not compensate for other impacts of high levels of atmospheric GHG concentrations, e.g., ocean acidification. Also, impacts at a regional level and on other climate parameters are uncertain, e.g., how it will affect (regional) precipitation, and atmospheric and oceanic circulation. And the potential for so-called “unknown unknowns”, i.e., completely unanticipated outcomes, is high. Moreover, the research needed to potentially reduce uncertainties is itself beset with uncertainties and introduces new risks of its own (this description of SRM-SAI is based on Blackstock et al. 2009). Additionally, the so-called “termination problem” means that, just as quickly as SAI is expected to cancel out the warming from increased GHG concentrations, temperature would rise again if we stopped engineering the climate abruptly. Especially if SAI had been implemented without additional strict mitigation and adaptation measures, this increase could be devastatingly steep.

SRM-SAI and Precaution

Expecting guidance from a precautionary principle with respect to the question of whether or not the solar radiation management (SRM) technology of stratospheric aerosol injections (SAI) should be researched and eventually deployed seems reasonable. Uncertainties are huge: we can identify outcomes that are seen as possible, but no probability information (or none that is reliable) is available, while it is plausible that there are even more possible outcomes that we have not yet been able to identify; and impacts (both of climate change without SAI, and of SAI itself) could be catastrophic from a human perspective.

Yet we can find arguments that invoke precaution on both sides of the debate: On the one hand, the “Lesser-evil argument” states that, because SAI could at a future point in time be the lesser evil compared to climate change catastrophes, we should research it now so that it is available then. It has been argued that research into SAI is in itself valuable, because it gives us an additional option, independently of whether or not we will make use of it (Reynolds and Fleurke 2013, 103). On the other hand, the “It-might-get-worse argument” states that SAI could, in the worst case, even worsen climate change catastrophes, and should, as a precaution, never be deployed and consequently shouldn’t be researched either (for a reconstruction of the debate, see Betz and Cacean 2012). Additionally, there are concerns that SAI research could create some sort of “lock-in” effect, leading via a slippery slope to the deployment of SAI even if no real catastrophe is impending, or that it could sideline the discussion and development of alternative approaches to deal with the threat of dangerous climate change (e.g., Fragnière and Gardiner 2016).

In part, these contradictory invocations of precaution rest on different empirical assumptions, e.g., about the possible side-effects or the psychological implications of SRM-SAI research. In part, they rest on different value bases, i.e., on a disagreement about what exactly the harms are that we should take precautions against, and what exactly makes them harmful. But to a large part, they also rest on a disagreement about what precaution means in this context, and on a lack of agreement on which precautionary principle to adopt (Elliott 2010).

For the case study, this poses the challenges of (i) how to assess the empirical background information and the scientific knowledge about effects and side-effects of SAI, (ii) how to evaluate different possible options and outcomes, and (iii) what the relevant factors are that a PP should take into account, and how it can guide us with respect to the results from (i) and (ii). Since I am interested in formulating an action-guiding PP that is part of a position in reflective equilibrium, my main goal is to address (iii). This presupposes answers to (i) and (ii), which would require a lot of further work before starting to tackle the problem of justifying a PP. I will therefore work as far as possible with plausible stipulations and assumptions with respect to (i) and (ii). The goal is to formulate a PP that can be coherently applied in a consistent framing of a decision-problem, not to identify the correct framing of a problem such as whether or not to research SAI. Thus, throughout the case study, I will mostly work with what I call “toy examples”, i.e., simplified case-descriptions of specific decision-problems.

5.3.2 Examples of Selected Commitments

For the purpose of this case study, I am only working with my own commitments (or rather with the commitments of a hypothetical person that is rather similar to me), even though that does not exclude adopting commitments based on arguments of others, e.g., convincing arguments from the literature. Based on, e.g., arguments that can be made in their favor independently of the current position, intuitions that are in line with a commitment, or also knowledge about how widely shared a commitment is, rough weights of low, medium, or high are assigned to the commitments. When thinking about possible adjustments, these weights do not replace the need to consider the reasons for and against a commitment in detail. They only serve as a rough indication of the independent credibility of a commitment. As explained in Sect. 5.2, this ranking is only ordinal, i.e., it expresses neither that two commitments with a high weight necessarily have the exact same degree of independent credibility, nor that there is a specific interval between the three weights.

I selected my initial commitments from three groups: (1) general commitments about precaution and precautionary principles, (2) commitments to judgments in simplified “toy” examples, and (3) commitments concerning an actual and complex problem, namely, research and development of solar radiation management (SRM) (although I will also use some toy examples for SRM in order to hopefully single out important aspects). As it is not possible to consider each and every one of my commitments concerning precaution and precautionary decision-making explicitly, I aimed for a representative selection of initial commitments. Still, throughout the process of adjustments, it will be important to search for further emerging input commitments which might be relevant.

A full list of the selected initial input commitments can be found in the Appendix A at the end of the book. In the following, I name some examples for each category, together with the rough weights of high, medium, and low assigned to them.

General Commitments About Precaution and Precautionary Principles

Some of the general commitments are statements about, e.g., general features of situations that warrant precautions. Others put more direct demands, or constraints, on the target system, as they express commitments concerning what the target system should achieve, or what it could look like. The latter function as a sort of “working hypothesis”, but they are also commitments to features or functions/roles I expect the target system to have or fulfill. They can be adjusted or rejected just like every other commitment, e.g., by giving an argument for why this specific demand is not reasonable or cannot be consistently implemented—or maybe by showing that a system that does not meet these expectations does a better job of fulfilling the pragmatic-epistemic objective. Here are some examples:

IC 1:

Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation. (Principle 15 of the Rio Declaration) [low]

IC 2:

When an activity raises threats of harm to human health or the environment, precautionary measures should be taken even if some cause-and-effect relationships are not fully established scientifically. (Wingspread Formulation of the Precautionary Principle) [low]

IC 3:

Pro tanto, it is better to take precautionary measures now than to deal with serious harms to the environment or human health later on. [high]

IC 6:

If we are not sure whether a substance or technology is safe, but have a viable alternative that can be shown to be safe (at least with higher certainty than the option in question), we should use the alternative, even if it might be more costly in economic terms. [high]

IC 8:

The structure of a PP includes two “trigger conditions”, threat and knowledge, and a precautionary response. [low]

A low weight was assigned to IC 1 and IC 2 for the following reasons: they should be included and are an important part of the subject matter—indeed, they are often cited as paradigm examples. However, they are also often cited by critics who attack PPs for being vacuous or paralyzing, and there are good reasons to think that the target PP should differ from those two paradigm examples. IC 8 is widely endorsed in the literature, but as it is primarily concerned with the structure of a PP, it should only serve as a working hypothesis that can easily be given up. Consequently, I assign a low weight to it. IC 3 and IC 6, however, express what I take to be important and substantial claims about a PP. I thus assign a high weight to them.

Commitments About Toy Examples

Commitments about toy examples are typically my own intuitive judgments, and the weight indicates how secure I feel in the judgment. Some of the commitments about toy examples directly include the description of the case in question, like IC 14:

IC 14:

You find a firearm, and from examining it, you come to the conclusion that it is not loaded. But you are aware that you don’t know much about weapons—this is, in fact, the first firearm you have ever held in your hands. You must not point it at someone else and pull the trigger. Neither should you do the same with yourself. [high]

In other cases, the toy example is a bit more complex and the case description is separately listed as a part of the background. For example, take case 5, Asbestos 1:

Case 5: Asbestos 1

Large-scale mining and manufacturing of asbestos started about 15 years ago. Asbestos is seen as a desirable material because of its properties like sound absorption, tensile strength, and its resistance to fire and heat. Production costs are low, so it is also affordable. However, there are observations and reports that associate lung diseases with inhaling asbestos, although no systematic scientific research has been done on it so far; thus, a clear connection cannot be proved, and the diseases might have other causes.

We have to choose between the following four options:

  (i) BAU: Continuing business-as-usual,

  (ii) Research: Starting systematic scientific research on the harmfulness of asbestos dust, including long-term studies and mortality statistics of asbestos workers,

  (iii) Research&Regulation: Starting systematic scientific research while already strictly regulating asbestos production, including, e.g., limiting exposure of workers to asbestos dust, and making compensation arrangements, based on agreed liabilities, or

  (iv) Ban: Banning asbestos.

Concerning this toy example, which is a simplified case based on real past events (cf. Harremoës et al. 2001), I have the following input commitment:

IC 15:

In case 5, Asbestos 1, we should choose option (iii), Research&Regulation. [medium]

Some of the toy examples are also examples that I took from the literature, like case 12, Chemical Waste:

Case 12: Chemical Waste

“A company applies for an emission permit to discharge its chemical waste into an adjacent, previously unpolluted lake. The waste in question has no known ecotoxic effects. A local environmental group opposes the application, claiming that the substance may have unknown deleterious effects on organisms in the lake.

[…] We know from experience that chemicals can harm life in a lake, but we have no correspondingly credible reasons to believe that a chemical can improve the ecological situation in a lake. (To the extent that this “can” happen, it does so in a much weaker sense of “can” than that of the original argument […]).” (Hansson 2016, 96)

Concerning this case, I have the following input commitment:

IC 22:

In case 12, Chemical Waste, the company should not be allowed to discharge the chemical waste into the lake (example from Hansson 2016, 96). [high]

Commitments About R&D of Solar Radiation Management (SRM)

These commitments refer to the case of solar radiation management, which was chosen as an illustrative example for the case study. The weights of these commitments mainly express how secure I feel in my judgment.

IC 23:

Independently of whether or not SRM should be considered as part of precautionary measures in case the globally implemented mitigation and adaptation strategies turn out to be insufficient to prevent dangerous climate change, it should not be used as the only precautionary measure. (“SRM” here is short for “research and development on solar radiation management in order to have it ready to use should dangerous climate change be imminent”.) [high]

IC 25:

Non-invasive research into SRM should be done, as long as this does not negatively interfere with the search for and discussion of other approaches. [medium]

IC 26:

A necessary condition for any application of SRM against harmful impacts of climate change is that it has to be accompanied by a strict mitigation and adaptation program that would allow us to stop doing SRM again as soon as possible. [medium]

IC 29:

In case 3, R&D into SRM, two kinds of research, we should choose option (ii), doing non-invasive research into SRM, especially the aspects that contribute to our general understanding of climate science. [medium]

The last commitment in this list also refers to a toy example, which is described in case 2:

Case 2: R&D into SRM

A strict mitigation and adaptation policy is implemented, but dangerous climate change is still possible because of feedback effects and tipping points. There are no signs that a catastrophe is imminent in the next 5 years. The basic mechanisms of solar radiation management are known, but there are still huge uncertainties, e.g., about its effects on a local level and possible (so far unforeseen, possibly catastrophic) side-effects. Should we do research and development on SRM with the goal of developing it ready to use?

We know that:Footnote 7 R&D has no influence, positive or negative, on our mitigation and adaptation efforts, and R&D itself does not pose any additional threats to the climate system.

We are given two choices: (i) implementing a research and development (R&D) program for SRM with the objective of developing SRM ready to use, or (ii) not implementing an R&D program for SRM.

5.4 The Background

In this section, I set out some parts of the background that will likely be relevant for the RE process, but this is not an exhaustive description. Also, I make some stipulations in the background in order to facilitate the case study.

Background Theories

that might be relevant to argue for or against parts of the position are, e.g., rational choice theory, cost-benefit analysis, and maximizing expected utility theory.

Background Information

is necessary in order to understand commitments, and to relate candidate systems to them. Concerning the case of solar radiation management, some background information is described above in Sect. 5.3.

For toy examples, I typically summarize the relevant background information in case descriptions like the one of case 5, Asbestos 1, which is also quoted above. Working with such toy examples allows me to have a simplified description of all the background information that is potentially relevant when it comes to assessing whether or not a given system is in agreement with a commitment. The full list of these case descriptions can be found in Appendix A at the end of the book.

Additionally, relevant background information includes knowledge about historical cases that are relevant for precautions, like the ones described in the case studies of “Late Lessons from Early Warnings” (Harremoës et al. 2001). For background information on current risk regulation practices, I use the description and discussion in Randall (2011). As background information for solar radiation management, I use a narrow selection of relevant papers, especially Blackstock et al. (2009), Irvine et al. (2016), and Lenferna et al. (2017).

In order not to lose sight of the main line of the case study, I also work with a number of background assumptions and stipulations. For example, I assume that decisions concerning climate change policies take place under conditions of uncertainty, i.e., that no reliable probabilities about possible outcomes are available (Aldred 2013, 133). Additionally, I sometimes work with assumptions like stipulating numerical utilities, or stipulating that outcomes are in the relevant sense “reasonable” or “realistic”—i.e., I stipulate background information that is necessary to relate a candidate system to commitments. The results of the RE process are then of course contingent on whether or not the kind of information that I stipulate is actually obtainable in the real world.Footnote 8

5.5 Theoretical Virtues

I am looking for a moral system—a moral precautionary principle, to be more precise—and consequently, the theoretical virtues that are relevant in this RE process should be relevant for moral theories. In the literature, we can find the following examples of virtues, or desiderata, for moral theories:

Determinacy:

“A moral theory should feature principles which, together with relevant factual information, yield determinate moral verdicts about the morality of actions, persons, and other objects of evaluation in a wide range of cases” (Timmons 2012, 13).

Applicability:

“The principles of a moral theory should be applicable in the sense that they specify relevant information about actions and other items of evaluation that human beings can typically obtain and use to arrive at moral verdicts on the basis of those principles” (Timmons 2012, 13).

Explanatory Power:

“A moral theory should feature principles that explain our more specific considered moral beliefs, thus helping us understand why actions, persons, and other objects of moral evaluation are right or wrong, good or bad, have or lack moral worth” (Timmons 2012, 15); “A theory has explanatory power when it provides enough insight to help us understand the moral life: its purpose, its objective or subjective status, how rights are related to obligations, and the like” (Beauchamp and Childress 2013, 340); “Moral theories should identify a fundamental principle that both (a) explains why our more specific considered moral convictions are correct and (b) justifies them from an impartial point of view” (Hooker 2000, 4).

Clarity:

“A theory should be as clear as possible, as a whole and in its parts” (Beauchamp and Childress 2013, 339).

Completeness and Comprehensiveness:

“A theory should be as complete and comprehensive as possible. A theory would be fully comprehensive if it could account for all moral values and judgments. Any theory that includes fewer moral values will fall somewhere on a continuum, from partially complete to empty of important values” (Beauchamp and Childress 2013, 339).

Simplicity:

“A theory should have no more [basic] norms than are necessary, and no more than people can use without confusion” (Beauchamp and Childress 2013, 339).

Output Power:

“A theory has output power when it produces judgments that were not in the original data base of particular and general considered judgments on which the theory was constructed. If a normative theory did no more than repeat the list of judgments thought to be sound prior to the construction of the theory, it would have accomplished nothing. For example, if the parts of a theory pertaining to obligations of beneficence do not yield new judgments about role obligations of care in medicine beyond those assumed in constructing the theory, the theory will amount to no more than a classificatory scheme. A theory, then, must generate more than a list of the axioms already present in pretheoretic belief” (Beauchamp and Childress 2013, 340); “Moral theories should help us deal with moral questions about which we are not confident, or do not agree” (Hooker 2000, 4).

Practicability:

“A proposed moral theory is unacceptable if its requirements are so demanding that they probably cannot be satisfied or could be satisfied by only a few extraordinary persons or communities. A moral theory that presents utopian ideals or unfeasible recommendations fails the criterion of practicability” (Beauchamp and Childress 2013, 340).

Based on this list, I selected Practicability, Determinacy, Broad Scope, and Simplicity as theoretical virtues for my RE project, although my understanding of them often differs from the one mentioned above. (And Scope is missing from the above list altogether.) In the following, I detail my understanding of these virtues, and give some reasons for why I selected them.

I decided to leave out “explanatory power” since it is notoriously unclear what this exactly means—and many attempts to explicate it ultimately refer to other theoretical virtues (cf. Keas 2017; Ylikoski and Kuorikoski 2010).

“Output Power”, the demand that the system should not merely “repeat the list of judgments thought to be sound prior to the construction of the theory”, and that it should “help us deal with moral questions about which we are not confident”, is certainly desirable, but it also seems rather to have to do with the relation of the system to commitments, and with its scope. Thus, I take it that this desideratum is already covered by other aspects of RE.

I subsume “Applicability” under Practicability, which I understand as having to do with whether or not we can apply the system and follow its instructions—e.g., how accessible the necessary information typically is for us. As Roser (2017, 1397) argues: in order to be action-guiding, a principle “must process inputs that are available to us”. Consequently, it should also produce outputs that are implementable or realizable for us, or at least identifiable by us. E.g., a target system that tells us to select the course of action that will bring about the least actual harm is not very applicable if we want to know what to do in a situation that is characterized precisely by the fact that we do not know which course of action will lead to how much harm.

On the other hand, such a system can still have Determinacy: “the course of action that will bring about the least actual harm” is not a vague expression, and if the necessary information were accessible to us, we would have no problem in applying it and identifying the correct course of action. I understand “Determinacy” as the opposite of vagueness and imprecision, as requiring that the conditions of application of a system are precise and clear enough to yield, together with relevant background information, definite verdicts. E.g., it should—given relevant background information—allow us to unequivocally identify whether a specific course of action is permissible, required, or prohibited. For example, a system that tells us to choose the most sustainable course of action is not very determinate if it does not also specify what makes a course of action “the most sustainable one”.

The virtue of determinacy is, to some extent, dependent on background information: if we have, e.g., a suitable explication of “sustainability” already in our background, then a system like my last example will be determinate without having to specify sustainability. Similarly, it could be the case that we only have a very vague or imprecise concept of “harm”, making the first example less determinate.

One can ask whether a system could also fail to be determinate because its conditions of application and/or its verdicts are too general. But generality is not a problem in itself, as long as every case that falls under the general conditions is truly a case where the principle should apply. When the generality of the antecedent is a problem, this is rather because it is not clear whether a given case falls under it (but this is a problem of vagueness or ambiguity), or because there is a case that falls under it but we feel that it should not (and this is a problem with account, not determinacy). If these problems do not occur, then generality is even desirable on grounds of the theoretical virtue of broad scope: a more general antecedent will typically be applicable to a broader range of cases than a more specific one.

“Comprehensiveness and Completeness” is already partly covered by the RE criterion of Account. But there is something virtuous about the range of applicability of a system that is not completely covered by its ability to account for commitments: we want a system that is applicable to a broad range of cases, i.e., we want a system to have a Broad Scope. While scope is connected to account, it is not the same: Account is about the relation between commitments and the target system, i.e., demanding that the target system can account for the commitments. This means that account can also be increased by rejecting commitments the target system cannot account for, or by excluding them from the subject matter, e.g., by continuing to be committed to something but realizing that it is not relevant for precaution. While these strategies would increase account, they would at the same time reduce the scope of the target system.

I understand the virtue of scope in terms of the range of applicability of a system. My use of “range of applicability” is based on Scharp (2013, 40), although it has to be adapted from the applicability of concepts (in Scharp’s case) to the applicability of systems (in the RE context). I take it that the range of applicability of a system consists of those classes of facts in which the system yields a verdict, which can mean that it prescribes or prohibits an action, but also includes those cases in which it tells us that a specific action is permissible, or not required. I.e., for a precautionary principle, its range of applicability consists of all those situations in which it tells us whether or not a (specific) precautionary measure should be taken. Continuing the adaptation of Scharp’s terminology, the application set of a PP would then consist of all those cases in which it prescribes precaution, and its disapplication set all those cases in which it tells us that no precaution is required.Footnote 9 But this latter distinction is less relevant, at least for the virtue of scope: while we want a PP with a broad range of applicability, i.e., a PP that tells us in as many cases as possible whether or not we should take precautions, this does not mean that we are looking for one that prescribes precautionary measures in as many cases as possible.Footnote 10

This means, e.g., that a target system that includes necessary and sufficient conditions for precaution has a broader scope than if the same conditions were only sufficient. Take two principles, P1 and P2: P1 has the form “If conditions a and b, then take measure c”. This means that it is applicable to all cases that have the properties a and b. If one of the two is missing, P1 is not applicable. Since a and b are sufficient but not necessary conditions for taking measure c, we do not know what to do if one of them is missing: both that c is permissible and that it is prohibited would be consistent with P1 in such a situation. For P1, its range of applicability coincides with its application set.

On the other hand, P2 has the form “If and only if conditions a and b, take measure c”. Since here, a and b are not only sufficient but also necessary conditions for c, P2 is thus applicable to the classes of cases and situations that have the properties (a, b), (a, non-b), (non-a, b), and (non-a, non-b): in all these combinations, it tells us whether or not we should take measure c. Its range of applicability is thus broader than its application set, i.e., the situations in which its conditions are fulfilled and it prescribes c—and also broader than the one of P1.
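The difference between P1 and P2 can be made explicit with a small sketch (my own encoding, purely illustrative; None marks cases where a principle yields no verdict):

```python
from itertools import product

def p1_verdict(a, b):
    """P1: a and b are merely sufficient -- silent whenever one is missing."""
    return "take measure c" if (a and b) else None

def p2_verdict(a, b):
    """P2: a and b are necessary and sufficient -- a verdict in every case."""
    return "take measure c" if (a and b) else "do not take measure c"

for a, b in product([True, False], repeat=2):
    print(f"a={a}, b={b}: P1 -> {p1_verdict(a, b)}, P2 -> {p2_verdict(a, b)}")
# P1 yields a verdict in 1 of 4 combinations (its application set);
# P2 yields one in all 4, so its range of applicability is broader.
```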

Notably, the virtue of Broad Scope has to do with the range of classes of situations to which a system is applicable. This does not say anything about, e.g., how prevalent these classes are, or whether or not the most relevant commitments belong to them. Whether the range of applicability of a system actually covers the relevant cases is, on my understanding, not a question of broad scope, but of assessing account for (weighted) commitments.

Simplicity is one of the “classic” theoretical virtues, but this does not mean that there is one straightforward interpretation of it.Footnote 11 In the context of this project, I settled on an interpretation of the theoretical virtue of simplicity as demanding that the conceptual apparatus of the target system should be economical in the sense that the concepts it includes that cannot be reduced to each other are kept to a minimum. There might not be a direct argument for why simplicity, understood in this sense, is an epistemic virtue. But striving for simplicity will contribute to systematicity, since it forces us to, e.g., identify relevant features that commitments share in order to reduce the number of concepts needed to account for them.Footnote 12

Operationalization and Weighing of Virtues

Here is a list summarizing the theoretical virtues of systems that I selected for the case study:

Practicability:

The target system should be applicable in the sense that it specifies relevant information about actions and other items of evaluation that human beings can typically obtain and use to arrive at moral verdicts. E.g., it should process inputs that are typically available to us, and yield verdicts that are realizable by us.

Determinacy:

The target system should, together with relevant factual information, yield determinate verdicts, i.e., both its conditions of application and its verdicts should be precise and clear enough.

Broad Scope:

The target system should have a broad range of applicability, i.e., it should tell us in as many cases as possible whether or not (and specifically which) precautionary measures are required.

Simplicity:

The conceptual apparatus of the target system should be economical in the sense that the concepts it includes that cannot be reduced to each other are kept to a minimum.

I selected this specific list of virtues of the system because I take them to be relevant to reach my pragmatic-epistemic objective of formulating a defensible, action-guiding moral principle that is applicable to the subject matter of precaution and precautionary decision-making. As I argued in Chap. 3, there is no unequivocal ranking of virtues in case of trade-offs: rather, such trade-offs have to be decided on a case-by-case basis, and need to be defensible with respect to (a) the pragmatic-epistemic objective, and (b) their effects on the position as a whole.

Thus, while I expect that practicability and determinacy will typically be more important than scope in trade-offs, whereas simplicity might not be much more than a “tie-breaker” when candidate systems are otherwise equally virtuous, this will always have to be defended on a case-by-case basis.

5.6 Candidates for the System

As a result of the literature survey on precautionary principles in Chap. 4, the Rawlsian Core Precautionary Principle (RCPP) of Gardiner (2006), the integrated risk regulation framework of Randall (2011), the tripartite proposal of Steel (2015), and the Catastrophic Harm Precautionary Principle of Hartzell-Nichols (2017) were identified as promising candidates for a PP. I do not explicitly consider the proposals of Randall and Hartzell-Nichols, for the following reasons: the Catastrophic Harm PP of Hartzell-Nichols comes with a framework that includes substantial procedural aspects which are difficult to simulate in an RE process that is conducted by a single epistemic agent like myself. The proposal of Randall has a strong focus on risk assessment and risk management, which provides relevant background information, but does not seem very promising as a candidate for a moral precautionary principle. This leaves us with the RCPP of Gardiner (2006) and the tripartite approach of Steel (2015). I will also consider Bognar’s counterproposal to the RCPP: Bognar (2011) argues that a “utilitarian principle”, which consists of a combination of the principles of indifference and of maximizing expected utility, can equally well account for the cases where the RCPP applies, while having a broader scope.

In addition to critically assessing and comparing these candidates in the process of adjustments, I will also explore how one can develop new candidates for the system as part of the process of adjustments.

5.7 Recapitulation: Design of the Case Study

This chapter has first specified the method of RE for its application to the project of justifying a moral precautionary principle. It then identified the various elements that enter the process of adjustments—initial commitments, elements of the background, theoretical virtues, and candidates for the system. I named some examples of the selected initial commitments; the full list can be found in the appendix at the end of the book. The appendix plays an important role for the case study since, when applying RE throughout Chaps. 6–8, I will continue to discuss only representative or relevant aspects.

RE will be applied with the pragmatic-epistemic objective of justifying an action-guiding moral principle that is applicable to the subject matter of precaution and precautionary decision-making. At the same time, I have the goal of putting the method itself to a test: the case study will demonstrate how the method of RE can be applied, and test the way the method was conceptualized in Chaps. 2 and 3. To be able to focus in detail on specific aspects of applying RE, the case study is divided into three phases: Phase 1 (Chap. 6) explores how theory construction, i.e., the development of a candidate system, works in RE. Phase 2 focuses in detail on the two steps of the process of adjustments (Chap. 7). And the third phase works towards a preliminary end point of the RE process, and evaluates this resulting position (Chap. 8).