In the following three chapters, Chaps. 6–8, reflective equilibrium, as described in Chaps. 2 and 3, is applied in a case study with the input and design described in Chaps. 4 and 5. I start with an introduction to the case study as a whole before giving an overview of the current chapter.

6.1 Overview: Three Phases of the Case Study

For this application of reflective equilibrium (RE), two objectives have to be distinguished: Within the application of RE, I pursue the pragmatic-epistemic objective of justifying a moral precautionary principle (PP). With this application as a whole, I pursue the goal of testing how RE can be implemented and what we can learn about the method of RE by putting it into practice.

In the first place, this is a case study for RE, meaning that the second objective takes precedence. However, the first objective is still very important, not least because one aspect of evaluating the applicability of RE (goal 2) will depend on how well the application of RE contributes to the goal of justifying a moral PP (goal 1). This means that even though not every detail of it will be spelled out, and I will sometimes work with plausible assumptions and stipulations, the precaution-related content must not be oversimplified, lest this run counter to the goal of testing RE against an actual and complex problem. Still, the focus of the case study is on aspects of the application that are interesting from the perspective of RE, that is, on exploring different ways to use the method, on how specific problems for its application can be solved, etc. To achieve this, the RE process is roughly divided into three parts:

Phase 1 is the content of the present Chap. 6. In it, I explore how theory construction works in RE, i.e., how first candidate systems can be developed starting from the initial commitments. I develop a first preliminary candidate and give an outlook on how it might be improved. But in order to also test other aspects of an RE implementation, I then move on to phase 2 instead of fully developing this candidate.

In phase 2, Chap. 7, the focus is on the steps of alternating between adjusting commitments and adjusting the system. While these steps are applied in all three chapters, Chap. 7 describes them in most detail and uses them to assess and compare a range of candidate systems. In particular, it shows how the RE criteria—as specified in Chap. 5—can be used to assess possible adjustments to the position, and how trade-offs can be resolved. To have real variety in the compared systems, I adopt candidates for PPs from the literature (see Chap. 4) and compare them within the RE process, with respect to my input commitments and my pragmatic-epistemic objective.

In phase 3, Chap. 8, I focus on a specific pathway of the RE process and develop a rights-based PP in answer to the pragmatic-epistemic objective of my RE process, i.e., to justify a moral precautionary principle. Compared with phase 2, I move away from assessing many different candidate systems, and focus on making one candidate as strong as possible. At the end of this final phase, I show how the RE criteria can be used to assess whether or not a justified position in reflective equilibrium was reached.

These phases are not an inherent feature of RE. It is by no means necessary that a process will proceed in these three phases, or that we will always find them. Nonetheless, each of them focuses on relevant aspects an RE process can, and typically will, have: (a) the construction of a system, often involving sub-processes, e.g., explications, (b) the comparison of, and choice between, different possible adjustments, given a broad range of candidate systems, and (c) spelling out, assessing, and defending a particular position in detail. These aspects can also appear together, or in a different order. Dividing the case study into three phases, each focusing on one of these aspects, allows me to go into more detail with respect to each of them.

Gray Boxes

This level of detail is necessary to really test the RE method and not to gloss over important challenges, but it can be difficult to keep track of the big picture. To help readers follow the process, gray boxes summarize the main points of each step. Readers not interested in every detail should be able to get a general idea of the case study by reading the gray boxes and then the recapitulation and discussion of results at the end of each chapter.

Role of the Appendix

Even though the description is often very detailed, it is impossible to describe everything. Thus, only relevant or exemplary aspects of what I did when applying RE are described—for example, the assessment of account is described for only a small selection of commitments, while the results for the set of commitments as a whole are merely summarized. Describing how account was assessed for each individual candidate system with respect to each individual commitment would take too much space and become excessively repetitive. For reference, all of the commitments, candidates for (parts of) the system, and background information can be found in Appendix A at the end of the book.

6.2 Overview: Phase 1

In the first phase, the focus is on how we can develop candidate systems within the RE framework. If no candidate is already available, one natural starting point is to survey one’s commitments for suitable candidates. Thus, I start in Sect. 6.3 with an A-step, i.e., with adjusting the system, in the very specific sense of constructing a first system. I adopt two general commitments—the Rio and the Wingspread PP—as candidate systems, and assess them comparatively with respect to commitments and theoretical virtues. On this basis, I identify guiding questions for further system development. These guiding questions are used in Sect. 6.4 in a B-step to systematically broaden the set of commitments, which leads to the formulation of working hypotheses. These working hypotheses are weak emerging commitments to certain specifics of the target system, e.g., functions it should fulfill or elements we expect it to have. Thus, they are at the same time tentative attempts towards a systematization of the subject matter.

One problem with both candidate principles is that it is unclear what counts as a "precautionary measure". The system is thus further developed through a "sub-RE-process", in this case, an explication: the goal is to explicate a part of the system (the concept of "precautionary measures"). Consequently, only this part, and the relevant subset of the commitments, are adjusted with respect to each other in steps A2 and B2, Sects. 6.5 and 6.6. This demonstrates how explications can, as "sub-RE-processes", be part of system development in RE. The resulting explication and some of the working hypotheses are then put together to formulate a first candidate system in Sect. 6.7. This formulation is part of the next A-step, A3, which will be continued in Chap. 7, where this candidate is assessed in comparison with other candidates.

Section 6.8 recapitulates the main results from the first phase and discusses some intermediate results both with respect to RE as well as with respect to PPs. It also gives a schematic summary of the RE steps from phase 1 in Fig. 6.1.

Fig. 6.1 Schematic overview of the steps of Phase 1

The description of this first phase is structured along the lines of the two RE steps. However, they are not fully completed: in step A1, for example, neither of the two candidates is selected, and in step B1, commitments are not adjusted with respect to a system, but rather with respect to guiding questions that result from the partial implementation of step A1. Even if the steps are not completely implemented in every respect, I argue that what is done can reasonably be seen as partial instances of them.

6.3 Step A1: Assessing Rio and Wingspread as First Candidate Systems

In this step, two input commitments, the Rio and the Wingspread formulations of a precautionary principle (PP), are assessed as candidate systems. They are rejected as inadequate candidates: they attain a very low account value (Sect. 6.3.1) and exhibit low theoretical virtuousness (Sect. 6.3.2). Neither of them can defensibly be adopted, so no system is chosen at the end of this step. However, their assessment allows for the formulation of guiding questions toward the construction of a new candidate system (Sect. 6.3.3).

While we already identified some possible candidates for the system in the design of the case study (Chap. 5), the goal of this first phase is to explore how RE can be used to develop new candidates. So let us for the moment assume that we do not already have plausible candidates for the system. How could we proceed? As explained, one natural starting point would be to survey one's commitments for suitable candidates. Looking at the set of input commitments, almost any of the general commitments about Precaution and Precautionary Principles (see A.1.1.1) could be tested as a candidate system.Footnote 1 As an example, I decided to focus on Principle 15 of the Rio Declaration and the Wingspread formulation of a precautionary principle, which are typical starting points for discussions about PPs.

IC 1:

Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation. (Principle 15 of the Rio Declaration) [low]

IC 2:

When an activity raises threats of harm to human health or the environment, precautionary measures should be taken even if some cause-and-effect relationships are not fully established scientifically. (Wingspread Formulation of the Precautionary Principle) [low]

I am committed to these because they are often cited as paradigm examples of precautionary principles (Ahteensuu 2008, 79), and I think that because of this, they determine important aspects of the subject matter. However, I only assign a low weight to them, since they also face a lot of criticism and are typically the starting point, not the endpoint, of attempts to formulate and defend a PP (compare the survey on PPs, Chap. 4). But what exactly are the problems with Rio and Wingspread? Assessing them as candidate systems according to the RE criteria allows us to identify their weaknesses systematically, i.e., it helps us work towards developing stronger candidates. Thus, I adopt them as the following two candidate principles:

Principle 1 (P 1, The Rio PP) :

Where there are threats of serious or irreversible damage, lack of full scientific certainty [about those threats, T.R.] shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation.

Principle 2 (P 2, The Wingspread PP) :

When an activity raises threats of harm to human health or the environment, precautionary measures should be taken even if some cause-and-effect relationships are not fully established scientifically.

Even when I adopt them as candidate systems, they remain in my set of current commitments and need to be accounted for. Also note that this does not mean that I am committed to them as candidate systems.

After assessing how well P 1 and P 2 can account for commitments (Sect. 6.3.1) and their theoretical virtues (Sect. 6.3.2), I formulate guiding questions for the further development of a new candidate system (Sect. 6.3.3).

6.3.1 Rio and Wingspread: Account for Commitments

Principle 1 and Principle 2 are virtually never able to account for commitments. There are some borderline cases where one could argue that a commitment is accounted for if we were to presuppose additional information, but there is no clear-cut case of account aside from the trivial fact that, since Rio and Wingspread are themselves commitments, they account for themselves.

Assessing Account: Some Examples

In the following, I use two commitments to exemplify how account was assessed, and to demonstrate how P 1 and P 2 relate to commitments, but fail to account for them.

First, here is a commitment concerning precaution and the climate engineering technology of solar radiation management (SRM):

IC 23:

Independently of whether or not SRM should be considered as part of precautionary measures in case the globally implemented mitigation and adaptation strategies turn out to be insufficient to prevent dangerous climate change, it should not be used as the only precautionary measure. (“SRM” here is short for “research and development on solar radiation management in order to have it ready to use should dangerous climate change be imminent”.) [high]

Principle 1, the Rio PP: Its conditions for application are met, but it cannot account for the commitment. There are “threats of serious or irreversible damage”: (a) threats from dangerous climate change, e.g., it could be that global average temperature rises rapidly because of positive feedback loops, as well as (b) threats from solar radiation management, e.g., disruptions of local precipitation patterns that could have further disruptive effects on the global climate. We don’t have “full scientific certainty” about either of these threats. Thus, P 1 tells us that this uncertainty cannot be used as a reason against taking measures to prevent environmental degradation that could be caused by those threats. But even if we assume that there are no other reasons against taking such measures, P 1 cannot account for the commitment: It would follow that uncertainty must not be a reason for postponing measures to prevent (a) and (b), but it does not follow that (b), i.e., SRM, cannot be the only measure against (a), as long as we also take measures against the threats of (b). P 1 is, however, at least consistent with IC 23, since neither does it follow that SRM should be the only precautionary measure against the threat of failed mitigation and adaptation strategies.

Principle 2, the Wingspread PP, also applies in the sense that its conditions for application are met, but it can’t account for the commitment either. The reasoning is similar: Applying P 2 tells us that we should take precautionary measures against (a) the threats from dangerous climate change as well as against (b) the threats from researching, developing, and deploying SRM. But it does not tell us what kind of precautionary measures we should take, and consequently cannot tell us whether or not the combination of “SRM + precautionary measures against the threats of SRM” is on its own an adequate precautionary measure against threat (a).

As the second example, here is a commitment to a decision in a toy example:

IC 22:

In case 12, Chemical Waste, the company should not be allowed to discharge the chemical waste into the lake (example from Hansson 2016, 96). [high]

Principle 1, the Rio PP, is at least consistent with this commitment. There is a threat of serious damage (deleterious effects on organisms in the lake). Arguably, not allowing the discharge of the waste is a cost-effective measure to prevent environmental degradation (e.g., more cost-effective than cleaning up the waste again should there be indications that it actually causes harm). Thus, the lack of full scientific certainty about the threat shall not be used as a reason not to forbid the discharge of the waste. However, this does not amount to a full account of the commitment that the waste should not be discharged. There is additional information that allows us to construct an argument which accounts for the commitment and which includes P 1: according to the background information, there are no other (relevant, important) reasons against prohibiting the discharge of the chemical waste, and there is already a demand to prohibit it. But P 1 does not on its own demand that the waste should not be discharged. It can at best partly account for the commitment.

Principle 2, the Wingspread PP, cannot account for the commitment. We could try to argue that, similar to the reasoning when applying P 1, we can assume that not allowing the discharge of the chemical waste is an adequate precautionary measure and therefore demanded by P 2. But it seems to me that, just as well, “precautionary measures” could mean that we have to take measures to monitor the lake in order to react quickly when there are indications of harm, etc. Thus, P 2 is only consistent with the commitment.

6.3.2 Rio and Wingspread: Theoretical Virtues

Although Principle 1 and Principle 2 have a low account value, their conditions of application are often approximated or even met. The reason for this slightly surprising result—i.e., that the conditions of application are often approximated, yet the candidates fail to account for commitments—also has to do with their theoretical virtues.

The examples of IC 23 and IC 22 are characteristic of the failure of P 1 (the Rio PP) and P 2 (The Wingspread PP) to account for commitments. If we roughly assess the theoretical virtues of these two candidates, we can see some of the reasons for this failure: most strikingly, both candidates do exhibit a very low degree of Determinacy. One reason for this is that a number of concepts that are used in them could be interpreted in different ways, e.g., “lack of full scientific certainty”, “cost-effective”, “serious or irreversible damage”, “not fully established scientifically”, or “precautionary measures”. In fact, without clarifying these concepts, it is not even really possible to assess the applicability of the two principles: maybe we could argue that, in general, we have some implicit, pre-theoretic understanding of what, e.g., “serious” damage is, or what the relevant sense of “irreversible” damage is, or what does or does not count as a “precautionary measure”, and that there are cases where the relevant information is accessible to and processable by us. But given only these pre-theoretic, imprecise concepts, there will be many boundary cases where we will just not be able to understand what the principle even requires of us.

In short, even though there is currently no other candidate available, it seems that P 1 and P 2 should be dismissed. However, by analyzing their shortcomings, we are now in a position to identify questions that need to be addressed in order to formulate a more promising candidate system.

6.3.3 Formulating Guiding Questions

Based on the identified shortcomings of the Rio and the Wingspread PP, guiding questions for the further development of the system are formulated. These express desiderata for the system, i.e., the target system should be able to provide answers to them.

While both candidates were often applicable in a way that did not contradict the commitments, the results were typically too uninformative to account for the commitments.

So one important objective for further candidate principles is that they should yield more informative verdicts, e.g., with respect to the characterization of precautionary measures: while the Wingspread PP (P 2) does not characterize the required “precautionary measures” any further, the Rio PP (P 1) at least asks for “cost-effective measures to prevent environmental degradation”. But this is still lacking in clarity: does “prevent” mean that the measures have to guarantee (to some sufficient degree) that harm can be avoided? Or would it be enough if at least part of the possible harm were prevented? In any case, it seems to exclude measures such as doing further research to get a better understanding of the threat from counting as precautionary measures, and while one could argue that such an exclusion is indeed reasonable, it does not seem to fit with what I am committed to.

Then there is the question of what exactly “cost-effective” means: taking the cheapest measures available? The ones that promise the greatest net benefit? The ones that promise to achieve a given goal in the least costly way?

Additionally, the Rio PP does not directly demand of us that we take precautionary action, but states that "lack of full scientific certainty shall not be used as a reason for postponing [italics T.R.]" measures. This means that in order for there to be a demand for action, (a) there has to be some other demand or imperative for action, and (b) we always have to consider whether there are valid other reasons not to take the measures. Clause (b) in itself is certainly reasonable, but the Rio PP gives us no guidance in determining what these other reasons could be. Because of (a), the Rio PP does not itself directly demand action, but is rather an argumentative principle that says something about what kinds of arguments are admissible (Sandin et al. 2002).

Also, neither of the candidate principles takes into account that actions that pose a threat often also bring chances of benefits, and consequently they do not say anything about trade-offs, and could not help when choosing between different proposed measures that also introduce their own threats.

A further problem is that the scope of application of both principles is somewhat unclear: the Rio PP refers to "serious or irreversible" threats, but then measures should only be taken to protect the environment. This leads to the somewhat puzzling consequence that there might be threats to which the principle applies because they are "serious or irreversible", but for which it would not demand anything, because they do not threaten to lead to environmental damage.

And while the Rio PP talks about threats in general, the Wingspread PP only refers to activities that raise threats, thereby arguably excluding non-anthropogenic threats like, e.g., asteroids. The latter does not have to be a problem, but one has to decide whether this restriction can be defended, i.e., whether it fits with our commitments.

Lastly, the Wingspread PP is unclear with respect to the knowledge condition: it states that we should take measures “even if” there is uncertainty, but this could mean that it also applies when we are certain—although taking measures then would arguably no longer count as precautionary.

To sum up, pressing questions to be answered are:

  • What kinds of threats demand precautions?

  • What counts as a precautionary measure?

  • How should we deal with trade-offs, e.g., if a threat also provides chances of great benefits, or if a precautionary measure introduces new threats?

  • When exactly should a PP apply and prescribe measures?

In the next step, B1, I systematically search for answers to these questions by broadening my set of commitments.

6.4 Step B1: Adjusting Commitments by Broadening the Current Set

Since there is no current system, commitments cannot be adjusted with respect to account. Instead, the guiding questions that were formulated at the end of step A1 are used to systematically search for further relevant commitments. This is structured according to the elements of “threat” (Sect. 6.4.1), “knowledge” (Sect. 6.4.2) and “precautionary measures” (Sect. 6.4.3).

While the literature on PPs disagrees on many points, it is commonly accepted that the general structure of a PP includes the three elements of threat, knowledge, and precautionary response (Gardiner 2006; Manson 2002; Randall 2011; Sandin 1999; Steel and Yu 2019). The tripartite PP-structure is part of my initial commitments:

IC 8:

The structure of a PP includes two “trigger conditions”, threat and knowledge, and a precautionary response. [low]

The two elements of threat and knowledge are also often called "trigger conditions": if those two conditions are fulfilled, precautionary measures are "triggered". Consequently, a lot depends on how those elements are specified. This is closely connected to the last guiding question, which asks when exactly a PP should apply and prescribe measures.
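To make this tripartite structure more tangible, here is a minimal, purely illustrative sketch in Python. The class and field names are my own and not part of the commitments or the target system; the sketch merely restates IC 8 and shows where the elements discussed in the following subsections fit in.

from dataclasses import dataclass

# Purely illustrative: the names below are mine, not part of the case study.
@dataclass
class PPStructure:
    threat_condition: str        # trigger condition 1: which threats are covered?
    knowledge_condition: str     # trigger condition 2: what must be known about them?
    precautionary_response: str  # what is prescribed once both triggers are met?

# The Rio PP (P 1), slotted into this structure for illustration:
rio_pp = PPStructure(
    threat_condition="threats of serious or irreversible damage",
    knowledge_condition="lack of full scientific certainty about those threats",
    precautionary_response=(
        "lack of certainty shall not be used as a reason for postponing "
        "cost-effective measures to prevent environmental degradation"
    ),
)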

The present step, B1, is structured with respect to the three elements of “threat”, “knowledge”, and “precautionary response”, in order to find some tentative answers (in the form of emerging commitments) to the guiding questions identified in the last step, A1.

6.4.1 The Element of “Threat”

The notion of “threat” is defined as a “possibility of harm that is uncertain”. By examining the threats mentioned in the initial input commitments, further input commitments emerge: the target PP should not be restricted to threats to specific entities, but all serious threats pro tanto warrant precaution.

In order to explore which threats are identified as relevant in the commitments, it is first necessary to clarify what a “threat” is. Based on the literature, I adopt Randall’s (2011, 31) “chance of harm” concept as a candidate for the conception of threat, where harm has the meaning of “damage, impairment” whereas chance “concerns possibilities that are indeterminate, unpredictable, and (in some renditions) unintended”. I take that to mean that threat encompasses all possibilities of harm that are not certain. In this definition, the uncertainty of the harm neither restricts “threats” to cases where probabilities are available, nor does it exclude them. I choose this conception also because it seems to fit with how threat is used in the commitments, but this will have to be assessed when relating the candidate systems to the commitments. Notably, this definition of “threat” is not a commitment, but an attempt at systematizing the subject matter—that is, a part of the candidate system.

Definition 1: Threat :

A threat is a chance of harm in the sense that there is an indication of possible harm, or a signal correlated with contingent future harm. (Randall 2011, 31–36)

One important open question is what kinds of threats warrant precautions. In my initial commitments (see Appendix A), I refer to threats such as:

  • dangerous climate change

  • unintended side-effects, both foreseen and unforeseen, of solar radiation management (SRM)

  • distributive and intergenerational injustices from SRM implementation

  • being shot

  • dying in a plane accident

  • increased likelihood of getting cancer

  • lung cancer

  • a small negative impact on the brain development of children

  • deleterious effects on organisms in a lake

  • unforeseen consequences of a new technology, such as the spread of highly toxic algae

  • serious or irreversible damage

  • harm to human health or the environment.

Most of these threats actually concern the environment and/or human health, so the question is whether it would be a useful systematization to restrict the target PP to threats to those entities. I argue that no, it is not, at least not for the first attempts: firstly, "distributive and intergenerational injustices" only indirectly concern aspects of human health; the main point here is rather the injustice. Secondly, it also makes sense to take precautions against, e.g., financial loss. Restricting the target PP to environmental harm or harm to human health could thus unnecessarily restrict its scope, and a broad scope is one of the desiderata for the system. However, we could wonder whether harm to the environment and/or human health should take lexical priority over other kinds of harms, e.g., whether, if we have to decide between an action that carries a threat to the environment and an action that carries a threat of economic loss, we should always decide in favor of the environment. But this also seems unduly restrictive, since it could lead to disproportionately large economic losses for the sake of preventing a negligibly small harm to the environment. Thus, here is an emerging commitment that at the same time might serve to systematize other commitments:

EC 1:

The target PP is neither restricted to threats to specific entities (e.g., the environment and/or human health), nor is there a category of threat that takes lexical priority for the application of a PP insofar as it is a threat to specific entities. [low] [emerged at Step B1]

But what might threats then have in common that makes them warrant precautions? It is important to note that not all of the threats mentioned in the list above are taken to warrant precautions: I am committed to the claims that you should take the job in Chicago even though it means that there is a small probability of dying in a plane accident, and that radiation therapy for cancer patients is permissible even though it increases the likelihood that they will get cancer again at a later point. This suggests that whether a threat warrants precaution in the sense that the target PP should demand that measures be taken cannot solely depend on the severity of the harm, but will also depend on the available evidence as well as on the trade-offs involved.

Still, it should be possible to give some characterization of what kinds of threats pro tanto warrant precautions. A pro tanto ought is a nonfinal ought that only results in a final ought if there either are no other relevant pro tanto oughts or no other relevant considerations, or if it outweighs these other oughts and considerations (Reisner 2013).

Thus, I take it that there are threats that pro tanto warrant taking special precautions; but then, based on other information and considerations, this could result in a demand for very minimal measures, or could even be overruled. On that interpretation, threats like dying in a plane accident or getting cancer are of course threats we should take precautions against—it is just that in these specific cases, other considerations overrule the need to avoid them.

For now, I am looking for a minimal, qualitative characterization of the kinds of threats that pro tanto warrant precaution. A qualitative characterization in terms of so-called "thick" or "value-laden" concepts makes sense, because such concepts highlight that which threats warrant precaution depends partly on our values, but also on some descriptive characteristics. Although they still require interpretation and deliberation, they facilitate discussion and provide focus (Gardiner 2006, 57–58). Candidates for such a characterization of threats are, e.g., serious harm, irreversible harm, unacceptable outcomes, or catastrophic outcomes.

As a commitment, I accept that serious threats pro tanto warrant precautions. This qualitative characterization seems useful to me, since a threat is—according to Definition 1—a possibility of harm that is uncertain. Since this entails that we are not sure whether or not harm will occur, it makes sense to focus on threats that are in some way serious, i.e., cases in which the costs of being wrong are significant.

EC 2:

Serious threats pro tanto warrant precaution. [low] [emerged at Step B1]

While threats of unacceptable or catastrophic outcomes are arguably also serious threats, irreversible harm is not always serious and is sometimes even completely negligible. Although irreversibility of harm can make a threat more serious, it is not plausible that it should in itself warrant precautions.Footnote 2

For the understanding of “serious”, I commit to the proposal of Resnik (2003): seriousness is assessed according to (i) the potential for harm of the threat, and (ii) whether or not the potential damage is seen as reversible. This also allows us to compare threats.

EC 3:

The seriousness of a threat is assessed according to (i) the potential for harm of the threat, and (ii) whether or not the possible harm is seen as reversible. [low] [emerged at Step B1]

Perhaps more characteristics can be named that make a threat serious, e.g., how its potential for harm can be assessed, but this seems like a good first formulation.

It will then be a task for the target precautionary principle to identify against which threats that pro tanto warrant precautions we actually should take precautions, and to what extent.

The next important step towards this is to address the level of knowledge—or, correspondingly, of uncertainty—at which the target PP should demand measures.

6.4.2 The Element of “Knowledge”

As a minimal knowledge level, i.e., what we have to know about a threat in order for that threat to warrant precaution, plausibility is selected. For a threat to be assessed as plausible, we need at least some credible scientific evidence in its favor, even though it might not be enough to assign probabilities.

One of the core ideas of PPs is that we do not have to—and often should not—wait for full scientific certainty before taking measures to prevent harm. Consequently, the level of knowledge required to “trigger” precautionary measures should be something less than full (scientific) certainty. On the other hand, it should be more than mere logical possibility: if it is only required that a threat is logically possible, then virtually every action or inaction can lead to catastrophic harm—my writing of this book might by some ludicrous, but logically possible, chain of events lead to a nuclear holocaust.

One popular approach to settling the knowledge condition for a precautionary principle is to argue that it applies under conditions of decision-theoretic uncertainty, meaning decision situations in which we have knowledge of the available courses of action along with their complete set of possible outcomes, but cannot assign probabilities to those outcomes. This is often combined with suggesting a "division of labor" with quantitative approaches like cost-benefit analysis, which can be applied in situations of decision-theoretic risk, where we also have knowledge of the probabilities of the possible outcomes. However, neither the risk/uncertainty distinction nor the use of decision-theoretic uncertainty as the knowledge condition for PPs is without critics (Randall 2011; Roser 2017; Steel 2015); and it has been argued that there are situations where a PP should apply even though probabilities are available (e.g., Randall 2011; Steel 2015; Thalos 2012). For now, I do not want to take a stance on this by committing to whether or not the risk/uncertainty distinction is relevant for the application set of the target PP. This is rather a question that should be addressed during the course of this process.

However, it makes sense to set a minimal knowledge level at which serious threats pro tanto warrant precautions. This lower boundary should in some meaningful sense be more than mere logical possibility, without yet presupposing that relative likelihoods or probabilities are available. For this purpose, the notion of plausibility, or credibility, of a threat seems suitable. For a threat to be assessed as plausible, we need at least some credible scientific evidence in its favor, even though it might not be enough to assign probabilities. Consequently, the plausibility of a threat is not to be confused with its likelihood (Resnik 2003, 340–41). While a plausible serious threat pro tanto warrants precaution, how extensive the measures we then take are might well depend, inter alia, on how likely we judge the threat to be. But I argue that the target PP should pro tanto apply to all relevant plausible threats, and adopt the following commitment:

EC 4:

All plausible serious threats pro tanto warrant precaution. [low] [emerged at Step B1]

Of course, what "plausible" means needs to be spelled out further—it could refer, e.g., to epistemic and pragmatic criteria for assessing the plausibility of a hypothesis (Resnik 2003), to the requirement that we know a mechanism by which the threat would be realized (Hartzell-Nichols 2017), or to the requirement that we cannot show the threat to be inconsistent with our scientific background knowledge (Betz 2010). However, explicating the notion of "plausibility" is outside the scope of my current epistemic project, which focuses on formulating a moral precautionary principle. In what follows, I thus stipulate that there is a meaningful notion of plausibility in the background, or at least that the explication of such a notion has no direct implications for the formulation of the target PP in the current RE process.

6.4.3 The Element of “Precautionary Measures”

The set of input commitments is extended by a range of emerging input commitments concerning what does, or does not, count as a precautionary measure.

I am committed to a range of measures, but almost none of them are explicitly characterized as being precautionary. When trying to find clear-cut cases of precaution, it can be difficult to distinguish precautionary measures—of the kind the target PP should demand—from everyday caution, as well as from taking preventive measures where not taking them would simply be careless, reckless, or outright suicidal.

In order to elicit more commitments that explicitly concern whether or not a measure is precautionary, I considered cases in which people might tell you to be cautious, or might rebuke you for not having been more cautious if something happens to you because you did not take specific measures or actions—even though these measures might nonetheless not count as precautionary measures. The full list of emerging commitments can be found in Appendix A, but here are some examples:

EC 6:

Looking left and right before crossing the street is not a precautionary measure. [low] [emerged at Step B1]

EC 7:

To bring a parachute when planning to jump out of an airplane is not a precautionary measure. (Example from Sandin 2004) [medium] [emerged at Step B1]

EC 8:

To have a parachute on board when planning to fly somewhere is a precautionary measure. (Example from Sandin 2004) [medium] [emerged at Step B1]

EC 11:

Chewing your food is not a precautionary measure against choking. [medium] [emerged at Step B1]

EC 12:

For a factory worker who is well informed about the dangers of being exposed to the hazardous chemical X at work, performing a ritualistic dance to protect themselves from the chemical is not a precautionary measure. (Example from Sandin 2004) [high] [emerged at Step B1]

I also identified a more general commitment about precautionary measures:

EC 13:

Precautionary measures should be effective in preventing or substantially ameliorating either a threat or the harm of a threat. [high] [emerged at Step B1]

6.4.4 The Broadened Set of Current Commitments, C1

At the end of step B1, the current set of commitments, C1, consists of the initial commitments C0 (none of which have so far been adjusted) and 13 emerging commitments, EC 1–EC 13. Among these, EC 5–EC 13 specifically concern what does or does not count as a precautionary measure. However, we are still lacking clear criteria for what makes a measure a precautionary measure.

In the next step, A2, an explication of "being a precautionary measure against an undesirable x" is proposed as a partial systematization of the subject matter that is constrained by the commitments EC 5–EC 13 and further emerging commitments. Steps A2 and B2 explicate the necessary and jointly sufficient conditions for an action to count as a precautionary measure. However, it does not follow that every precautionary measure is warranted. Identifying which precautionary measures are justified, or indeed required, from a moral standpoint is then the purpose of the target system as a whole.

6.5 Step A2: Explicating “Precautionary Measures”

Steps A2 and B2 are an explication of the concept of "being a precautionary measure against an undesirable x". They are thus a "sub-process", i.e., they concern only a part of the position. Consequently, in this step, A2, a candidate for a part of the system, and not a candidate for the whole system, is suggested. It has a smaller scope and is only supposed to systematize commitments concerning what does or does not count as a "precautionary measure". I use the explication proposed by Sandin (2004); after assessing it with respect to account (Sect. 6.5.1) and its theoretical virtues (Sect. 6.5.2), I adopt it as part of the system.

Some characteristics that the commitments concerning precautionary measures have in common seem to be: the action has to be performed intentionally (if you bring a fire-extinguisher as part of your costume, this was not a precaution against a sudden fire outbreak at the party, EC 9), there has to be sufficient uncertainty about whether or not the threat would materialize if the measures were not taken (EC 7: Bringing a parachute when you plan to jump out of an airplane is not a precautionary measure), and the measures should in fact eliminate or at least diminish the threat (EC 13: Precautionary measures should be effective in preventing or substantially ameliorating a threat, and EC 12: A ritualistic dance is not a precautionary measure against threats from chemicals). These aspects have been identified as necessary and jointly sufficient conditions for some action to be a precautionary measure against an undesirable x by Sandin (2004):

ExplicPrec :

Explication of "being a precautionary measure against an undesirable x":

An action a is precautionary with respect to something undesirable x if a fulfills the following necessary and jointly sufficient criteria:

  1. Intentionality: a is performed with the intention of preventing x.

  2. Uncertainty: the agent does not believe it to be certain or highly probable that x will occur if a is not performed.

  3. Reasonableness: the agent has externally good reasons (a) for believing that x might occur, (b) for believing that a will in fact at least contribute to the prevention of x, and (c) for not believing it to be certain or highly probable that x will occur if a is not performed.

I adopt ExplicPrec as a candidate for the explication of precautionary measures, and in the following assess it with respect to account for commitments and its theoretical virtues.
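To illustrate what "necessary and jointly sufficient" amounts to here, the following is a minimal Python sketch. The function and field names are my own, and the truth-value encodings of the two parachute cases (EC 7 and EC 8) reflect my reading of the assessments given in Sect. 6.5.1; none of this is part of Sandin's explication itself.

from dataclasses import dataclass

# Purely illustrative encoding of an action a assessed against ExplicPrec.
@dataclass
class Action:
    intended_to_prevent_x: bool                   # Intentionality
    believes_x_near_certain_without_a: bool       # negation of the Uncertainty criterion
    good_reasons_x_might_occur: bool              # Reasonableness (a)
    good_reasons_a_helps_prevent_x: bool          # Reasonableness (b)
    good_reasons_against_near_certainty_of_x: bool  # Reasonableness (c)

def is_precautionary(a: Action) -> bool:
    """Returns True iff all three criteria of ExplicPrec are fulfilled."""
    intentionality = a.intended_to_prevent_x
    uncertainty = not a.believes_x_near_certain_without_a
    reasonableness = (a.good_reasons_x_might_occur
                      and a.good_reasons_a_helps_prevent_x
                      and a.good_reasons_against_near_certainty_of_x)
    return intentionality and uncertainty and reasonableness

# EC 7: bringing a parachute when planning to jump out of an airplane
jump_case = Action(True, True, True, True, False)
# EC 8: having a parachute on board when planning to fly somewhere
fly_case = Action(True, False, True, True, True)

print(is_precautionary(jump_case))  # False: the Uncertainty criterion fails
print(is_precautionary(fly_case))   # True: all three criteria are fulfilled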

6.5.1 Account for Commitments About Precautionary Measures

ExplicPrec accounts for all of the emerging commitments on precautionary measures, i.e., EC 5–EC 13. Take the examples from before:

EC 6, crossing the street: we know that if one does not look left and right before crossing the street, it is very likely that at some point one will be hit by other road users. Not looking left and right before crossing the street is reckless. Hence, the fact that it does not count as a precautionary measure fits with the explication: the uncertainty criterion is not fulfilled.

EC 7, bringing a parachute when planning to jump out of a plane: we know that people die if they jump out of an airplane at 4000 meters without a parachute. Thus, taking a parachute is not a precaution against uncertain harm: not taking one would be outright suicidal. This fits with the explication, since the uncertainty criterion is not fulfilled: we believe it to be certain that people die when they jump out of airplanes without a parachute.

EC 8, bringing a parachute when flying somewhere: we know that it is possible that planes have accidents that make it necessary to jump off with a parachute to save oneself, but we do not expect that it will happen this time. Still, we bring the parachute as a precautionary measure against dying in a plane crash. This fits with the explication: all three criteria are fulfilled.

EC 11, chewing your food carefully: we know that people can choke if they don’t chew their food properly, but wolf it down. So it again fits with the explication, because the uncertainty criterion is not fulfilled.

EC 12, performing a ritualistic dance: at least in our society, there are no good reasons for the factory worker to believe that this will protect them from harm. Thus, the commitment that it does not count as a precautionary measure against the hazardous chemical is accounted for by the explication, since the reasonableness criterion is not fulfilled.

Through the Intentionality and the Reasonableness criteria, ExplicPrec also accounts for commitment EC 13, i.e., that precautionary measures should be effective in preventing or substantially ameliorating either a threat or the harm of a threat.

In the initial set of commitments, there were a few commitments that included some relevant aspects of precautionary measures and that need to be considered here as well. There weren’t any real conflicts, even if the explication could not always straightforwardly account for them. But the latter is rather caused by the fact that we do not yet have a full system that could account for all aspects of the commitments. The relevant commitments are IC 23 and IC 24:

IC 23:

Independently of whether or not SRM should be considered as part of precautionary measures in case the globally implemented mitigation and adaptation strategies turn out to be insufficient to prevent dangerous climate change, it should not be used as the only precautionary measure. (“SRM” here is short for “research and development on solar radiation management in order to have it ready to use should dangerous climate change be imminent”.) [high]

This commitment expresses that SRM on its own is not enough as a precautionary measure, but it does not tell us whether or not SRM is a precautionary measure at all. To determine this, we can now use the explication. The implications of the current system for the current set of commitments, however, will be assessed in step B2.

IC 24:

SRM should not be considered as the only precautionary measure against the threat that the globally implemented mitigation and adaptation strategies turn out to be insufficient to prevent dangerous climate change, because it is inadequate as a precautionary measure. It is inadequate because: it introduces threats of its own, it is uncertain whether it would work in the intended way without unforeseen (negative) side-effects, and it imposes costs and responsibilities (e.g., for maintenance) on future generations. [medium]

This commitment claims again that SRM on its own is not a precautionary measure against dangerous climate change, and it basically states that aspect (b) of the reasonableness criterion is not fulfilled: it is uncertain whether SRM would work in the intended way, i.e., the reasons to believe that it will in fact prevent dangerous climate change are not good enough. However, the explication cannot account for the claim that SRM is inadequate because it imposes costs and responsibilities on future generations. This might be a sign that there is something that makes SRM inadequate as a measure not from a precautionary perspective, but on some other grounds.

6.5.2 Theoretical Virtues of the Precautionary-Measures Explication

The theoretical virtues of ExplicPrec have to be assessed with respect to it being part of the overall target system. Since there is no alternative candidate for the concept of “precautionary measures”, and not yet a current system that the explication can be assessed as a part of, I assess the virtues of the explication on its own, in order to decide whether there is something that speaks strongly against it.

Determinacy

There can be boundary cases about what does count as “certain or highly probable” (the Uncertainty condition), in part because this might be context-dependent. The same holds for “externally good reasons” (the Reasonableness condition). But aside from this, the criteria seem clear-cut and precise—at least clear-cut and precise enough to contribute to the pragmatic-epistemic objective.

Practicability

That the two criteria of Intentionality and Uncertainty refer to the intentions and beliefs of the agent could cause a problem for practicability, in the sense that we do not have direct access to the intentions and beliefs of others. But as long as we understand Intentionality as saying that nothing can be a precautionary measure against x as long as it has not at least been declared to be intended to prevent x, this is not a real problem. And the Uncertainty criterion also has to be checked by the Reasonableness criterion. That is, even for other agents, we can assess whether or not they can reasonably see a measure they intend to take as a precautionary measure.

Broad Scope

The explication has a broad range of applicability, since its criteria 1.–3. are necessary and jointly sufficient: for every combination of fulfilled or unfulfilled criteria, it will tell us whether or not a measure is precautionary against a specific undesirable x.

Simplicity

I take it that the explication includes the following concepts that are not reducible to each other: intention; preventing (an event); belief; certain or highly probable; externally good reasons (for believing something). While not being extremely minimal, this is not a level of complexity that would be high enough to diminish any of the other virtues, as their assessment has shown.

Evaluating ExplicPrec

Since there was no other candidate, and since the explication of precautionary measures against an undesirable x did very well with respect to accounting for relevant commitments and reasonably well with respect to theoretical virtues, I argue that it should be adopted as (a part of) the system, i.e., as the current partial system S1.

6.6 Step B2: Adjusting Commitments About Precautionary Measures

The subset of the current commitments that concerns precautionary measures is adjusted with respect to the newly chosen explication of "being a precautionary measure against an undesirable x". By making a further emerging commitment explicit, the account value can be increased (Sect. 6.6.1). Additionally, the explication has more implications than are part of the current commitments. Those implications that concern current commitments are added as newly inferred commitments (NCs) (Sect. 6.6.2). Lastly, the set of emerging input commitments is further broadened by exploring my commitments concerning which precautionary measures are warranted (Sect. 6.6.3).

In this step, the current set of commitments C1 is adjusted with respect to the current partial system S1, the explication of “precautionary measures”, ExplicPrec.

First, are there ways to adjust the current set of commitments in order to increase agreement with the current partial system?

6.6.1 Trying to Increase Account

Above, I stated that the explication cannot directly account for IC 23:

IC 23:

Independently of whether or not SRM should be considered as part of precautionary measures in case the globally implemented mitigation and adaptation strategies turn out to be insufficient to prevent dangerous climate change, it should not be used as the only precautionary measure. (“SRM” here is short for “research and development on solar radiation management in order to have it ready to use should dangerous climate change be imminent”.) [high]

This commitment only directly entails that SRM on its own is not enough as a precautionary measure, but not whether or not it should be considered as a precautionary measure (just maybe an inadequate one). But when thinking about this, it becomes clear to me that I am also committed to the following:

EC 14:

Research and development (R&D) into solar radiation management (SRM) in order to have it ready to use should dangerous climate change be imminent, is not, on its own, a precautionary measure against the threat of dangerous climate change. [medium] [emerged at Step B2]

This means that there is a further emerging commitment that needs to be added to the set of current commitments (and the set of input commitments). Can the explication account for EC 14? The Intentionality and the Uncertainty criteria are fulfilled, but what about the Reasonableness criterion? Do I have (a) externally good reasons for believing that dangerous climate change might occur?—According to my background information, yes. Do I have (b) good reasons for believing that R&D into SRM will in fact at least contribute to the prevention of dangerous climate change?—This depends on what "good reasons for believing" means, because there are reasons for believing that SRM can substantially counteract global warming caused by elevated GHG levels. But there are also reasons to be concerned that it might have unforeseen, unintended negative side-effects that would prevent it from working in the intended way, maybe even such that it adds to the harmful impacts of dangerous climate change. Since there are (c) externally good reasons for not believing it to be certain or highly probable that dangerous climate change will happen if R&D into SRM is not performed, whether or not SRM counts as a precautionary measure thus depends on how we evaluate aspect (b) of the Reasonableness criterion. I stipulate that according to my background knowledge, the uncertainty about the effectiveness and the potential side-effects of SRM and its R&D is just too high to fulfill this criterion. Consequently, SRM on its own is not a precautionary measure against the threat of dangerous climate change—the explication can account for this commitment.

6.6.2 Newly Inferred Commitments that Classify Measures as (Not) Precautionary

For most of my commitments, I was not yet committed on whether or not the endorsed or rejected measures counted as precautionary. Adopting the explication of "being a precautionary measure against an undesirable x" thus leads to a range of newly inferred statements that I would have to accept as newly inferred commitments if I were to adopt the explication as a part of the system. This actually generates a lot of new commitments, since every measure that I endorse or reject in my current commitments is classified as precautionary or not precautionary by the explication. Here are some examples (for a full list, see Appendix A):

NC 2:

Requiring that any application of SRM against harmful impacts of climate change has to be accompanied by a strict mitigation and adaptation program (IC 6) is not a precautionary measure against other effects of increased GHG levels and negative effects from prolonged SRM-implementation. (Uncertainty about those negative effects is too low.)

NC 9:

In case 3, R&D into SRM, two kinds of research, choosing option (iii), not implementing any research and/or development program into SRM, does count as a precautionary measure against the potential dangers of a full-blown R&D program into SRM.

NC 10:

In case 3, R&D into SRM, two kinds of research, choosing option (iii), not implementing any research and/or development program into SRM, does count as a precautionary measure against the possibility that a non-invasive research program into SRM turns out to be a waste of money and effort that does not help us to prevent dangerous climate change.

NC 13:

In case 1, Genetically Engineered Algae, banning the technology is a precautionary measure against possible harmful effects from it.

NC 14:

In case 9, Job Offers, choosing the job in Chicago is not a precautionary measure against having a tedious and badly paid job in New York. (By design of the case, it is certain that you end up with the bad job if you don’t go to Chicago.)

NC 15:

In case 9, Job Offers, choosing the job in New York is a precautionary measure against being killed in a plane accident.

These “newly inferred commitments” are commitments that I accept because they follow from my current system. But they are not input commitments—independently of the current system, I would not have come up with them.

6.6.3 Searching for Further Relevant Commitments

ExplicPrec tells us what a precautionary measure is, but not whether or not a precautionary measure is warranted. This is something that the target system as a whole will have to address. For this purpose, we can already search for further relevant commitments.

First, I am committed to the claim that precautionary measures should not introduce serious threats of their own. I do, however, assign only a low weight to this commitment: I think it is possible that there are further constraints that could make it defensible for a precautionary measure to introduce serious threats of its own—depending on the trade-offs involved.

EC 15:

Precautionary measures should not introduce serious threats of their own. [low] [emerged at Step B2]

Also, I am committed to the claim that in order to be defensible, costs and responsibilities for precautionary measures should be distributed in a morally sound way, e.g., it should not be the case that the general public has to pay for precautionary measures against a threat caused by an action that will only benefit a very small minority.

EC 16:

The costs and responsibilities for precautionary measures should be distributed in a morally sound way. [high] [emerged at Step B2]

When talking about additional threats and the costs of precautionary measures, it becomes apparent that there is a price we have to pay for a precautionary measure, and that part of the objective of a PP is to tell us when this price is adequate (cf. Munthe 2011).

EC 17:

The price of a precautionary measure, compared with the course of action entailing the threat it is supposed to address, consists of foregone benefits, foregone opportunities, and additional threats. [medium] [emerged at Step B2]

And this price should be proportional to what is at stake:

EC 19:

The price of precaution should be proportional to the seriousness and the plausibility of the threat, given the available alternatives. [low] [emerged at Step B2]

Based on this discussion, I am now also making explicit the following general commitment concerning the target system:

EC 20:

The target PP applies to plausible and serious threats and prescribes measures that are proportional to the severity and plausibility of the threat. [medium] [emerged at Step B2]
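Purely as an illustration of how EC 17–EC 20 hang together, the following sketch schematizes the price of precaution and the proportionality requirement. It is not part of the case study's apparatus: the additive price, the multiplicative proportionality test, the numeric scales, and all function names are assumptions made only for this sketch.

```python
# Schematic reading of EC 17-EC 20; every modeling choice here (additive price,
# multiplicative proportionality, 0..1 scales) is an assumption for illustration.

def price_of_precaution(foregone_benefits: float,
                        foregone_opportunities: float,
                        additional_threats: float) -> float:
    """EC 17: the price of a precautionary measure, relative to the course of
    action entailing the threat it is supposed to address."""
    return foregone_benefits + foregone_opportunities + additional_threats

def price_is_proportional(price: float, seriousness: float,
                          plausibility: float, tolerance: float = 1.0) -> bool:
    """EC 19/EC 20: the price should be proportional to the seriousness and
    plausibility of the threat; a product is just one hypothetical way to
    spell out 'proportional', given the available alternatives."""
    return price <= tolerance * seriousness * plausibility

# Example: a moderate price is acceptable for a serious, fairly plausible threat.
print(price_is_proportional(price_of_precaution(0.4, 0.2, 0.1),
                            seriousness=0.9, plausibility=0.8))  # True (0.7 <= 0.72)
```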

6.6.4 The Adjusted Set of Current Commitments, C2

At the end of step B2, the current set of commitments, C2, consists of all the commitments that were in C1, plus thirty newly inferred commitments on what does and does not count as a precautionary measure against an undesirable x, and six further emerging commitments concerning justified precaution and demands on the target PP (EC 15–EC 20).

Thus, the current commitments C2 consist of the following subsets:

  1. all the initial commitments of C0, i.e., IC 1–31, plus

  2. ten emerging commitments specifying constraints and desiderata for the target system (EC 1–EC 4; EC 15–EC 20),

  3. ten emerging commitments on what does and does not count as a precautionary measure against some x (EC 5–EC 14), and

  4. thirty newly inferred commitments on what does and does not count as a precautionary measure against some x (NC 1–NC 30).

So far, we have only been adjusting commitments with respect to one part of the system, namely the explication of “being a precautionary measure against an undesirable x”. The relevant subsets of the current commitments are subsets 3 and 4, and both are accounted for by the explication.
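As a minimal bookkeeping sketch of this composition, assuming only the four subsets and sizes just listed (the labels are introduced here purely for illustration), the tally comes out as follows:

```python
# Hypothetical bookkeeping of C2; the grouping mirrors the four subsets above.
subsets = {
    "initial commitments (IC 1-31)": 31,
    "emerging commitments: constraints and desiderata (EC 1-4, EC 15-20)": 10,
    "emerging commitments: precautionary-measure classifications (EC 5-14)": 10,
    "newly inferred commitments (NC 1-30)": 30,
}

print(sum(subsets.values()))  # 81 explicit commitments in C2

# So far, only subsets 3 and 4 are accounted for by a part of the system,
# namely the explication ExplicPrec.
accounted_for = (subsets["emerging commitments: precautionary-measure classifications (EC 5-14)"]
                 + subsets["newly inferred commitments (NC 1-30)"])
print(accounted_for)  # 40 commitments accounted for by ExplicPrec
```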

6.7 Step A3.1: Formulating the Principle 3 Candidate System

A new candidate system, the Principle 3-System, is formulated by combining the explication of “precautionary measures” with results from answering the guiding questions.

Step A3 is “split” between phases 1 and 2: on the one hand, it includes the formulation of a candidate system based on steps A1–B2, which still belongs to phase 1; I call this part step A3.1. On the other hand, this candidate system will then be compared with further candidates taken from the literature. While still being part of step A3, this comparison marks the beginning of phase 2 and is therefore labeled step A3.2.

At the end of step A1, the following open issues for the formulation of a candidate system were identified:

  • What kinds of threats demand precautions?

  • What counts as a precautionary measure?

  • How should we deal with trade-offs, e.g., if a threat also provides chances of great benefits, or if a precautionary measure introduces new threats?

  • When exactly should a PP apply and prescribe measures?

Based on the results from steps B1–B2, I suggest the following candidate principle:

Principle 3 (P 3):

Where there are plausible threats of serious harm, precautionary measures that are proportional to the severity and plausibility of the threat should be taken.

Arguably, P 3 has more determinacy than P 1 and P 2 because it at least somewhat specifies the precautionary measures that should be taken. By demanding that measures be proportional, it also partially addresses the problem of threat trade-offs, i.e., of taking precautionary measures that themselves threaten to cause more harm than the original threat.

Of course, I also have to say a bit more about the concepts used in P 3, or it won’t be much more determinate than P 1 or P 2. In order to do that, I add some further parts to the system.

Here I can draw on the results from B1, when the set of commitments was systematically broadened with respect to the guiding questions. As a candidate for the conception of threat, I adopt my commitment to Randall’s “chance of harm” concept, where harm has the meaning of “damage, impairment”, whereas chance “concerns possibilities that are indeterminate, unpredictable, and (in some renditions) unintended” (Randall 2011, 31):

P 3.1: Definition: Threat:

A threat is a possibility of harm that is uncertain.

For the understanding of “serious”, I follow Resnik (2003): seriousness is assessed according to (i) the potential for harm of the threat, and (ii) whether or not the potential damage is seen as reversible. This also allows the comparison of threats.

P 3.2: Seriousness of Threats:

The seriousness of a threat is assessed according to (i) the potential for harm of the threat, and (ii) whether or not the possible harm is seen as reversible. [same content as IC 11]

For a threat to be assessed as plausible, we need at least some credible scientific evidence in its favor, even though this might not be enough to assign probabilities. Consequently, the plausibility of a threat is not to be confused with its likelihood (Resnik 2003, 340–41). While a plausible serious threat pro tanto warrants precaution, how extensive the measures are that we then take may well depend, inter alia, on how likely we judge the threat to be. But I argue that the target PP should pro tanto apply to all relevant plausible threats. Together with ExplicPrec, we then have a new candidate system, which I call the Principle 3-System.

P 3.3: ExplicPrec:

Explication of “Being a precautionary measure against an undesirable x”: An action a is precautionary with respect to something undesirable x if a fulfills the following necessary and jointly sufficient criteria:

  1. Intentionality: a is performed with the intention of preventing x.

  2. Uncertainty: the agent does not believe it to be certain or highly probable that x will occur if a is not performed.

  3. Reasonableness: the agent has externally good reasons (a) for believing that x might occur, (b) for believing that a will in fact at least contribute to the prevention of x, and (c) for not believing it to be certain or highly probable that x will occur if a is not performed.

The Principle 3-System thus consists of the four parts P 3, P 3.1, P 3.2, and P 3.3.
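To make the structure of this candidate system easier to survey, the following sketch renders the four parts in schematic form. It is an illustration only: the class and function names are hypothetical, the numeric scales and thresholds are invented, and the substantive judgments (intentions, reasons, plausibility, seriousness) are taken as given inputs rather than analyzed.

```python
# Schematic rendering of the Principle 3-System (P 3, P 3.1, P 3.2, P 3.3).
# All names, scales, and thresholds are assumptions made for illustration.

from dataclasses import dataclass

@dataclass
class Threat:                        # P 3.1: an uncertain possibility of harm
    description: str
    plausible: bool                  # backed by at least some credible evidence
    potential_for_harm: float        # P 3.2 (i), on an invented 0..1 scale
    reversible: bool                 # P 3.2 (ii)

@dataclass
class Action:
    description: str
    intended_to_prevent_x: bool      # ExplicPrec 1: Intentionality
    believes_x_near_certain: bool    # relates to ExplicPrec 2: Uncertainty
    has_good_reasons: bool           # ExplicPrec 3 (a)-(c), lumped together here

def is_precautionary(a: Action) -> bool:
    """P 3.3 (ExplicPrec): a is precautionary with respect to an undesirable x
    if it meets Intentionality, Uncertainty, and Reasonableness."""
    return a.intended_to_prevent_x and not a.believes_x_near_certain and a.has_good_reasons

def seriousness(t: Threat) -> float:
    """P 3.2 as a toy scoring rule; the weighting of irreversibility is invented."""
    return t.potential_for_harm + (0.5 if not t.reversible else 0.0)

def p3_calls_for_precaution(t: Threat, threshold: float = 0.5) -> bool:
    """P 3: plausible threats of serious harm call for proportional measures."""
    return t.plausible and seriousness(t) >= threshold

threat = Threat("harmful effects of the new technology", True, 0.7, False)
measure = Action("banning the technology", True, False, True)

if p3_calls_for_precaution(threat) and is_precautionary(measure):
    print("P 3 applies: take precautionary measures proportional to the threat")
```

The point of the sketch is only that P 3, the two definitions, and ExplicPrec play distinct, complementary roles; nothing in the case study depends on this particular formalization.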

6.8 Recapitulation Phase 1

Formulating the Principle 3-System concludes the first phase of my RE application, in which I explore how candidate systems can be constructed in the RE framework. The results of the steps from phase 1 are summarized in the schematic overview of Fig. 6.1, starting from the initial set of commitments C0 to the formulation of the P 3-System at the beginning of step A3. In the following, I discuss what can be learned from this both for RE and PPs, before moving to phase 2 in Chap. 7.

I first discuss the intermediate results for RE and its application (Sect. 6.8.1), which concern my goal with the application, i.e., the case study, before discussing some results for (moral) precaution and precautionary principles (Sect. 6.8.2), which concern my goal in the RE application.

6.8.1 Phase 1: Discussion of Intermediate Results for RE

The following intermediate results for RE are discussed:

  • Background information and background theories can play important roles for, e.g., how candidate systems are interpreted and assessed;

  • emerging input commitments played a central role: systematically broadening the set of commitments as part of the RE process;

  • the difference between commitments and (parts of) the system: a difference of function, not form or content;

  • the construction and development of systems can be made part of an RE process.

Firstly, assessing the Rio and the Wingspread PP formulations as Principle 1 and Principle 2 demonstrates the relevance of background information and background theories: if we could presuppose much more, Rio or Wingspread would be more determinate and would reach a higher account value. Indeed, as regulatory principles, they and other similar formulations stand in a specific context and (legal) practice, which might often make them more determinate than assessing them as stand-alone (moral) principles suggests (cf. Fisher 2002). However, the pragmatic-epistemic objective of this RE implementation is not to justify a principle for regulatory policy, but an action-guiding moral principle.

Secondly, some results with respect to commitments: Emerging commitments played an important role—a factor that is typically neglected in RE conceptions (see Chaps. 1 and 2). Ideally, an RE process would be conducted with respect to all commitments that the epistemic agent holds. But since this is impossible for several reasons (e.g., problems with individuating commitments, comprehensibility and manageability of the process, cognitive limitations, implicit commitments, etc.), it can only be done with respect to the commitments that are explicitly considered. Even if those are carefully selected to be, e.g., as representative as possible, it is still likely that relevant commitments are missing. Thus, in the first two B-steps of the RE implementation, B1 and B2, adjustments to the set of current commitments consisted in broadening the set by making further input commitments explicit and adopting commitments as inferences from the system. One could object that this is not really a part of RE, but rather a part of selecting the initial commitments. In this view, steps A1–B2 are not yet part of an RE process, but rather part of identifying relevant initial commitments to start the process. EC 1–20 would then not count as emerging commitments, but should simply be part of the initial commitments.

However, I argue that since we cannot make everything explicit from the beginning, we have to start somewhere, and making further commitments explicit is necessarily part of applying RE to actual, complex problems. That further commitments concerning what does or does not count as a precautionary measure would be relevant only became apparent when assessing the Rio and the Wingspread PP as first candidate systems in the form of Principle 1 and Principle 2. This also shows that starting with other candidate principles might have led to other emerging commitments, putting the process on another pathway.

A further insight concerns the difference between commitments and (partial) systematizations. The commitments that emerge with respect to the guiding questions from step A1 support the view that the difference between commitment and system is one of function, not of form or content: most of them are candidates for systematizing other commitments, while I am at the same time committed to them. Take, for example, EC 4:

EC 4:

All plausible serious threats pro tanto warrant precaution. [low] [emerged at Step B1]

While expressing my (low) commitment to the claim that all plausible serious threats pro tanto warrant precaution, EC 4 can also serve as a candidate for a partial systematization of other commitments, e.g., to distinguish between threats that pro tanto warrant precaution and those that do not. Depending on which function it is supposed to fulfill, different constraints apply to it: as a commitment, I have to respect its independent credibility when trying to bring it into agreement with a system. As a candidate for a part of the system, it has to prove successful in systematizing commitments by accounting for the relevant subsets of them, while also having theoretical virtues (that is, contributing to the theoretical virtuousness of the system as a whole). In its role as a candidate for a part of the system, it does not, however, have independent credibility and can always be given up.

Thirdly, the results from phase 1 show how the construction and development of candidate systems can be part of an RE process: normally, RE conceptions presuppose that we compare and adjust principles, but they do not describe how we obtain them. As a creative element, the proposal of candidate systems is typically not seen as part of the RE process. It might indeed be true that formulating candidate systems is a creative process that cannot be rule-governed and thus cannot be described in explicit terms as part of the RE steps. However, the steps of phase 1 show how the RE criteria can provide heuristics and guidelines for the development of candidate systems: they might, for example, show us how commitments can be systematically explored with respect to questions and problems that available candidate systems leave open, and how systems can be constructed step by step through sub-processes like explications. The formulation of guiding questions is not usually described as part of RE, but it proved helpful for the formulation of candidate principles. In a sense, these questions are themselves (very preliminary) candidate systematizations, since they suggest which relevant factors one should explore further.

The first phase also demonstrates how the holistic process of justification via RE often has to proceed piecemeal, since we often have to clarify one specific aspect before being able to move on with the bigger picture. It thereby also shows how seemingly piecemeal and isolated work can be part of a more holistic process.

6.8.2 Phase 1: Discussion of Intermediate Results for Precaution

The main intermediate result for the discussion of precautionary principles is that measures are not automatically justified qua being precautionary.

As the main result for precaution from phase 1, we can note that being “precautionary” does not necessarily amount to being justified: what is striking when assessing the agreement between the explication and the current commitments is that the explication does not classify every measure that I already endorse as precautionary, and actually does classify some rejected measures as precautionary. However, I do not see this as a problem.

First of all, I see it as an advantage that when accepting the explication of “Being a precautionary measure against an undesirable x”, measures are not automatically justified just qua being precautionary. There are cases of unwarranted precaution, and it will be part of the task for the target precautionary principle to distinguish them from the warranted ones.

Secondly, there are two reasons why measures expressed in my commitments are classified as not being precautionary: (1) they do not fulfill the Reasonableness criterion, e.g., because there are no good reasons for believing that those measures would in fact contribute to the prevention of the threat they are aimed at; or (2) the Uncertainty criterion is not fulfilled, because it is actually highly probable or even certain that the threat would materialize if the measures were not taken.

With respect to (1), this is in line with my commitments: the measures that are classified as not being precautionary on the grounds that they are not reasonable in some respect are also rejected in the commitments. With respect to (2), these are measures that seem to be demanded on other, perhaps stronger, grounds than precaution: while the first group is not precautionary because it is questionable whether these measures would avert the threat, the second group is not precautionary because it is clear that without them the threat would materialize. That is why those measures are not precautionary, even though not taking them would be completely negligent, or would mean knowingly accepting the negative consequences.

The question now is how to handle the commitments that endorse this second kind of non-precautionary measure. Do they still belong to the subject matter in the sense that the target system has to account for them, i.e., that they should be inferable from the system?

It seems clear that the target PP in any case should not recommend the opposite of those measures. In that sense they are like control stations—if the target PP recommends against them, then it is plausible that something is very, very wrong. Nonetheless, it does not necessarily have to recommend them: the most important part is that it covers the cases in which precautionary measures are warranted, and is able to distinguish warranted from unwarranted precaution. However, since a broad scope is a desideratum for the system, if it can cover all the relevant cases of precaution and further cases, then all the better.

All in all, I argue that accepting the explication of “being a precautionary measure against an undesirable x”, ExplicPrec, provides us with a first important systematization of part of the subject matter, picking out a class of relevant cases from related, similar ones. However, it would of course be desirable to find a system that can pick out justified cases of precautionary measures without having to refer to such an additional explication: for reasons of simplicity, and perhaps also practicability, having fewer additional parts in the system is preferable.