The three key goals of this section are to further characterize the notion of deep uncertainty, outline the procedures and decision support tools used to facilitate decision-making under deep uncertainty in the context of Robust Decision-Making (RDM), and highlight and interpret key claims about robustness and satisficing as decision criteria made by researchers associated with its development. Along with the methodological remarks in Sect. 3, this will set the stage for the subsequent critical discussion of robust satisficing in Sects. 4 and 5.
Deep uncertainty
Many decision theorists have felt that it is important to rank decision problems according to depth of uncertainty. The best-known taxonomy is due to Luce and Raiffa (1957: p. 13), according to whom decision-making under certainty is said to occur when “each action is known to lead invariably to a specific outcome,” whereas decision-making under risk occurs when “each action leads to one of a set of possible specific outcomes, each outcome occurring with a known probability,” and decision-making under uncertainty refers to the case where outcome probabilities are “completely unknown, or not even meaningful.”Footnote 3,Footnote 4
Theories of deep uncertainty aim to unpack the many gradations of uncertainty beneath Luce and Raiffa’s category of decision-making under risk.Footnote 5 Although there is no agreed-upon taxonomy for deep uncertainty in the DMDU literature, one influential proposal is due to W.E. Walker and colleagues (Walker, Harremöes, Rotmans, van der Sluijs, van Asselt, Janssen, & von Krauss, 2003), and parallels Luce and Raiffa in its account of the different levels of uncertainty. Walker et al. suggest that deep uncertainty should be seen as a matter of degrees of deviation, along one or more dimensions, from the limit case of certainty.
One dimension of uncertainty captures its location within our model of a decision situation. Some uncertainty centres on the model context: which parts of the world should be included in or excluded from the model? Another location is the structure and technical implementation of the model: which equations are used and how are they computed? This constitutes model uncertainty. Parameter uncertainty corresponds to the case where the location of uncertainty pertains instead to the parameters of the model. Uncertainty may also target the non-constant inputs, or data, used to inform the model. An important insight is that uncertainty at these various locations need not be correlated. For example, we may be quite certain that the right model context has been found and the right inputs and parameters have been entered, but retain considerable model uncertainty about how the system is to be modelled.
For our purposes, the more important dimension of uncertainty is what Walker et al. call the level of uncertainty. Beneath full certainty lies what they call statistical uncertainty, where our uncertainty “can be described adequately in statistical terms.” (8) This plausibly corresponds to what Luce and Raiffa call ‘risk’. Below this, we have what Walker et al. call scenario uncertainty, in which “there is a range of possible outcomes, but the mechanisms leading to these outcomes are not well understood and it is, therefore, not possible to formulate the probability of any one particular outcome occurring.” (Ibid.) This corresponds to what Luce and Raiffa call ‘uncertainty.’ Walker et al. recognize yet deeper levels of uncertainty. Beneath scenario uncertainty they locate recognized ignorance, in which we know “neither the functional relationships nor the statistical properties and the scientific basis for developing scenarios is weak.” (Ibid.) Still further down is the case of total uncertainty, which “implies a deep level of uncertainty, to the extent that we do not even know that we do not know.” (9).
As we see it, there remain a number of non-trivial questions about how to relate Walker et al.’s hierarchy to relevant concepts used in contemporary decision theory to describe the doxastic states of highly uncertain agents.Footnote 6 For example, it is not immediately clear where in Walker et al.’s hierarchy we might include cases of agents facing situations of ambiguity (Ellsberg, 1961) in which the decision-maker’s doxastic state cannot be represented by a unique probability distribution warranted by her evidence. One proposal would be to interpret ambiguity as a matter of the level of uncertainty, picking out the disjunction of all levels deeper than statistical uncertainty. Many philosophers will want to allow for the possibility that some types of ambiguity can be well captured by imprecise probabilities (Bradley, 2014). To accommodate this thought, we might enrich Walker et al.’s taxonomy with a new level of uncertainty, imprecise statistical uncertainty, located between statistical and scenario uncertainty, and occurring when imprecise but not precise probabilities can be constructed. On this reading, ambiguity refers to the disjunction of imprecise statistical uncertainty together with all deeper levels of uncertainty.
We may also wonder how the taxonomy captures what philosophers and economists call unawareness: that is, cases in which the agent fails to consider or conceive of some possible state, act, or outcome relevant to the decision problem (Bradley, 2017: pp. 252–61; Grant & Quiggin, 2013; Karni & Vierø, 2013; Steele & Stefansson, 2021). Unawareness may play several different roles in contemporary approaches to decision-making under deep uncertainty. It clearly informs the general procedures used for decision support, such as the use of computer-based exploratory modelling to capture a wide range of possibilities and the technique of encouraging participants to formulate new strategies on which to iterate the analysis, which we describe below. These processes help to diminish our unawareness. Lempert, Popper, & Bankes (2003: p. 66) emphasize that “no one can determine a priori all the factors that may ultimately prove relevant to the decision. Thus, users will frequently gather new information that helps define new futures relevant to the choice among strategies … or that represents new, potentially promising strategies.” Another possibility is that unawareness is taken to motivate the application of novel decision rules, like robust satisficing, to decision problems, once suitably framed, given that some degree of unawareness will inevitably remain.Footnote 7 However, on our reading of the DMDU literature, we do not find clear evidence of this.Footnote 8
Some might think of unawareness as the marker of truly deep uncertainty, and it seems plausible that unawareness characterizes the deep levels of uncertainty below scenario uncertainty in the Walker et al. taxonomy, especially the basement level of total uncertainty.Footnote 9 At the same time, this framework urges a holistic view on which many factors beyond unawareness may modulate an agent’s level of uncertainty.
RDM
Having discussed the notion of deep uncertainty, our next goal is to introduce the method of Robust Decision-Making (RDM) under deep uncertainty.
The procedures and decision support tools used in RDM build on the techniques of scenario-based planning (Schwartz, 1996) and assumption-based planning (Dewar, Builder, Hix, & Levin, 1993). Scenario-based planning encourages decision-makers to construct detailed plans in response to concrete, narrative projections of possible futures. Assumption-based planning emphasizes the identification of load-bearing assumptions, whose falsification would cause the organization’s plan to fail.
Developed at RAND around the turn of the millennium, RDM uses computer-based exploratory modelling (Bankes, 1993) to augment these techniques. Computational simulations are run many times, in order to build up a large database of possible futures and evaluate candidate strategies across the ensemble. This contrasts with the small handful of user-generated futures considered in traditional scenario planning, with Schwartz (1996: p. 29) recommending use of just two or three scenarios for the sake of tractability. Once an ensemble of futures has been generated, RDM uses scenario discovery algorithms to identify the key factors that determine whether a given strategy is able to satisfactorily meet the organization’s goals. Data visualization techniques are used to represent the performance of alternative strategies across the ensemble of possible futures and identify key trade-offs.
By way of illustration, Robalino and Lempert (2000) apply RDM in exploring the conditions under which government subsidies for energy-efficient technologies can be an effective complement to carbon taxes as a tool for greenhouse gas abatement.
The first step of their analysis is to construct a system model. To that end, they develop an agent-based model in which energy consumption decisions made by a heterogeneous population of agents generate macroeconomic and policy impacts, which in turn affect the behaviour of agents at later times.
Their second step is to settle on a set of policy alternatives to compare. Robalino and Lempert compare three strategies: a tax-only approach, a subsidy-only approach, and a combined policy using both taxes and technology subsidies.
Because model parameters are uncertain, the next step is to use data and theory to constrain model parameters within a plausible range. Sometimes this can be done with a point estimate: for example, the pre-industrial atmospheric concentration of carbon is estimated at 280 parts per million. Other parameters can be estimated only more imprecisely. For example, the elasticity of economic growth with respect to the cost of energy is constrained to the range [0, 0.1] (Dean & Hoeller, 1992). The resulting uncertainty space of allowable combinations of parameter values is intractably large. As a result, a genetic search algorithm was used to identify a landscape of plausible futures: a set of 1611 combinations of parameter values intended to serve as a good proxy for the larger uncertainty space.
To improve the tractability of the analysis and the interpretability of results, a dimensionality-reduction approach was used to identify the five most decision-relevant model parameters. Allowing these parameters to vary across the landscape of plausible futures and fixing the rest at their mean values generates an ensemble of scenarios in which the performance of policy alternatives can be compared. In each scenario, alternatives were assessed by their regret: the difference in performance between the given alternative and the best-performing alternative.Footnote 10
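To make the regret measure concrete, the computation can be sketched in a few lines of Python. The performance scores and policy labels below are invented for illustration; they are not Robalino and Lempert’s data.

```python
# Illustrative regret calculation across an ensemble of scenarios.
# Keys are policy alternatives; values are performance scores
# (higher is better) in each scenario. All numbers are made up.
performance = {
    "tax_only":     [10.0, 8.0, 6.0],
    "subsidy_only": [7.0, 9.0, 5.0],
    "combined":     [9.5, 8.5, 7.0],
}

n_scenarios = 3

def regret(perf):
    """Regret of each alternative in each scenario: its shortfall
    relative to the best-performing alternative in that scenario."""
    best = [max(perf[a][s] for a in perf) for s in range(n_scenarios)]
    return {a: [best[s] - perf[a][s] for s in range(n_scenarios)]
            for a in perf}

r = regret(performance)
# e.g. r["combined"] == [0.5, 0.5, 0.0]
```

On these invented numbers, the combined policy never falls far behind the scenario-optimal alternative, which is the kind of pattern a robustness analysis looks for.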
The purpose of RDM is to identify robust alternatives, which in this case means seeking alternatives with low regret across the ensemble of scenarios. This analysis leads to policy-relevant conclusions. For example, Robalino and Lempert find that a combined policy can be more effective than a tax-only policy if agents have heterogeneous preferences and there are either significant opportunities for energy technologies to deliver increasing returns to scale or for agents to learn about candidate technologies by adopting them.
Although RDM models are not intended to deliver precise probabilistic forecasts, they can help us get a grip on the ways in which assignments of probabilities may inform policy choices. Thus, Robalino and Lempert also map the conditions under which different strategies are favoured given different probability assignments to the possibility of high damages from climate change and to the possibility that the economy violates classical assumptions about learning and returns to scale. They find that the combined strategy is preferable when both probabilities are moderate or large (Fig. 1).
Because RDM analysis uses a subset of the full uncertainty space, the last step is to search through the full space for possible counterexamples to these strategy recommendations. If necessary, these counterexamples can be used to develop new alternatives or to constrain the landscape of plausible futures in a way more likely to discriminate between candidate alternatives, and then the analysis can be repeated.
Robust satisficing
The previous section gives a sense of the procedures and tools used in decision framing that characterize the RDM approach. Our aim in this sub-section is to highlight and offer some preliminary interpretation of the recommendation to engage in robust satisficing when solving decision problems of the kind to which RDM is applied.
Researchers associated with the development of RDM typically emphasize the extreme fallibility of prediction in the face of deep uncertainty and recommend identifying strategies that perform reasonably well across the entire ensemble of possible futures, as opposed to trying to fix a unique probability distribution over scenarios relative to which the expected utility of options can be assessed. It is granted that expected utility maximization is appropriate when uncertainties are shallow. Lempert, Groves, Popper, & Bankes (2006: p. 514) emphasize that “[w]hen risks are well defined, quantitative analysis should clearly aim to identify optimal strategies … based on Bayesian decision analysis … When uncertainties are deep, however, robustness may be preferable to optimality as a criterion for evaluating decisions.” They define a robust strategy as one that “performs relatively well—compared to alternatives—across a wide range of plausible futures.” (Ibid.)
In line with the example discussed in Sect. 2.2, it is common for RDM theorists to use a regret measure to assess the performance of different strategies within a scenario. Regret may be measured in either absolute terms or relative terms: i.e., as the difference between the performance of the optimal policy and the assessed policy, or as this difference considered as a fraction of the maximum performance achievable. Using either measure, a robust strategy is defined by Lempert, Popper, & Bankes (2003: p. 56) as “one with small regret over a wide range of plausible futures, F, using different value measures, M.”Footnote 11 No details are given about how to determine what counts as ‘small regret,’ so far as we are aware. Decision-makers who rely on RDM tools are presumably asked to define their own preferred satisficing threshold. In Sect. 5.3, we will provide reasons against interpreting satisficing in terms of a regret measure.
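A robust-satisficing screen built on relative regret might then count, for each strategy, the scenarios in which its regret stays within the decision-maker’s chosen threshold. The following sketch is ours; the threshold, strategy labels, and performance figures are invented assumptions, not drawn from the RDM literature.

```python
# Illustrative robust-satisficing screen using relative regret.
# Each entry of `ensemble` maps strategies to their performance in
# one scenario (higher is better); all values are invented.
ensemble = [
    {"A": 10.0, "B": 9.0},
    {"A": 4.0,  "B": 8.0},
    {"A": 9.0,  "B": 9.0},
]

def relative_regret(perf, strategy):
    """Shortfall behind the scenario optimum, as a fraction of it."""
    best = max(perf.values())
    return (best - perf[strategy]) / best

def satisficing_count(ensemble, strategy, threshold):
    """Number of scenarios in which the strategy's relative regret
    stays within the chosen satisficing threshold."""
    return sum(relative_regret(p, strategy) <= threshold for p in ensemble)

counts = {s: satisficing_count(ensemble, s, threshold=0.2)
          for s in ("A", "B")}
```

The threshold of 0.2 is an arbitrary stand-in for the “small regret” cut-off that, as noted above, the decision-maker is left to supply.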
The standard approach when using RDM is to consider how well a given policy performs across the range of possible futures: i.e., its ability to produce satisfactory outcomes given different possible states of the world. So understood, robust satisficing could be considered as a competitor to standard non-probabilistic norms for decision-making under uncertainty like Maximin, Minimax Regret, or Hurwicz (Luce & Raiffa, 1957: pp. 278–86).
The assessment of a candidate policy against the norm of robust satisficing need not entirely forego probabilities, however. As we saw, Robalino and Lempert are willing to use probabilistic analysis to shed light on the ways in which economic non-classicality and damages from climate change bear on the optimality of taxes and subsidies. Lempert, Popper, & Bankes (2003: 48) note that the notion of robust satisficing “can be generalized by considering the ensemble of plausible probability distributions that can be explored using the same techniques described here for ensembles of scenarios.” In other words, we may choose to assess the robustness of a given policy by considering the extent to which its expected performance is acceptable across the range of probability distributions taken to be consistent with our evidence. So understood, robust satisficing may be considered as a competitor to norms for decision-making with imprecise probabilities, like Liberal or MaxMinEU (discussed in Sect. 4.3).
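The probabilistic generalization just quoted can be sketched in the same style: a policy robustly satisfices if its expected performance clears a threshold under every probability distribution in the set deemed consistent with the evidence. The distributions, utilities, and threshold below are invented for illustration.

```python
# Illustrative check of robust satisficing over an ensemble of
# probability distributions (all numbers invented). Each distribution
# assigns probabilities to three scenarios; `utilities` gives the
# assessed policy's payoff in each scenario.
distributions = [
    [0.6, 0.3, 0.1],
    [0.2, 0.5, 0.3],
    [0.4, 0.4, 0.2],
]
utilities = [10.0, 6.0, 2.0]

def robustly_satisfices(dists, utils, threshold):
    """True iff expected performance meets the threshold under every
    distribution taken to be consistent with the evidence."""
    return all(sum(p * u for p, u in zip(d, utils)) >= threshold
               for d in dists)

ok = robustly_satisfices(distributions, utilities, threshold=5.0)
```

Raising the threshold tightens the screen: with these numbers the policy passes at a threshold of 5.0 but fails at 6.0, since one admissible distribution yields an expected performance of 5.6.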
Notably, the norm of robust satisficing is presented by Lempert and Collins (2007) and Lempert (2019) as inspired by Starr’s domain criterion for decision-making under uncertainty, which canonically makes use of sets of probability assignments (Schneller & Sphicas, 1983; Starr, 1966). Roughly speaking, the domain criterion asks agents who are completely ignorant about which world-state is likely to be actual to consider the set of all possible probability assignments to the states, giving equal weight to each possible probability assignment and choosing the act which is optimal on the largest number of probability assignments.Footnote 12
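The domain criterion admits a simple Monte Carlo sketch: sample probability assignments uniformly from the simplex and record how often each act maximizes expected utility. This is our illustration, not Starr’s own computation, and the payoff numbers are invented.

```python
import random

# Monte Carlo sketch of Starr's domain criterion. `payoffs` gives the
# utility of each act in each of three world-states (numbers invented).
payoffs = {
    "act1": [10.0, 0.0, 2.0],
    "act2": [4.0, 5.0, 4.0],
}

def random_simplex_point(n, rng):
    """Draw a probability vector uniformly from the n-simplex
    (via sorted uniform spacings)."""
    cuts = sorted(rng.random() for _ in range(n - 1))
    points = [0.0] + cuts + [1.0]
    return [points[i + 1] - points[i] for i in range(n)]

def domain_scores(payoffs, n_states, trials=10_000, seed=0):
    """Estimate, for each act, the fraction of probability assignments
    on which it maximizes expected utility."""
    rng = random.Random(seed)
    wins = {a: 0 for a in payoffs}
    for _ in range(trials):
        p = random_simplex_point(n_states, rng)
        eu = {a: sum(pi * u for pi, u in zip(p, payoffs[a]))
              for a in payoffs}
        wins[max(eu, key=eu.get)] += 1
    return {a: wins[a] / trials for a in payoffs}

scores = domain_scores(payoffs, n_states=3)
```

The uniform sampling over probability vectors implements the equal weighting of probability assignments that Starr builds into the criterion; the act with the higher score is the one chosen.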
Apart from the more explicit focus on deciding by reference to sets of probability values, we find the inspiration drawn from the domain criterion to be striking, in that it serves to highlight two concerns about robust satisficing that we will explore in greater depth in this paper.
The first concerns the relationship between robustness and satisficing. The domain criterion is recommended by Starr in part on the basis of considerations of robustness. Starr (1966: p. 75) argues that his criterion is superior to the Laplace criterionFootnote 13 because, he claims, a decision rule that relies on a unique probability assignment to the possible states is too sensitive to the particular probability vector that we choose. Nonetheless, the domain criterion compares strategies in terms of the number of probability assignments on which they maximize expected utility and is therefore naturally thought of as a norm of robust optimizing.Footnote 14 In expositions of RDM, robustness and optimizing are frequently contrasted, and a desire for robustness is linked to satisficing choice. Starr’s domain criterion serves to highlight that there is no strictly logical connection here. What, then, explains the emphasis on satisficing choice that informs the design and application of RDM? We take up this issue in Sect. 5.
Secondly, we re-iterate that in Starr’s exposition, the domain criterion explicitly relies on higher-order probabilities. We are “to measure the probability that a randomly drawn [first-order probability assignment] will yield a maximum expected value for each particular strategy.” (Starr, 1966: p. 73) This is done using a uniform second-order probability measure over first-order probability functions, a strategy Starr takes to represent a more plausible interpretation of the Principle of Insufficient ReasonFootnote 15 than the Laplace criterion. Notably, in what may be considered the locus classicus for discussions of robustness as a decision criterion in the field of operations research, Rosenhead, Elton, & Gupta (1972) also emphasize the close kinship of their favoured principle to the Laplace criterion. In planning problems in which an initial decision \(d_{i}\) must be chosen from a set \(D\) and serves to restrict the set \(S\) of alternative plans capable of being realized in the long run to a subset \(S_{i}\), they define the robustness of \(d_{i}\) in terms of \(|\tilde{S}_{i}|/|\tilde{S}|\), where \(|X|\) denotes the cardinality of set \(X\), \(\tilde{S} \subseteq S\) is the set of long-run outcomes considered “acceptable according to some combination of satisficing criteria” (419), and \(\tilde{S}_{i} = S_{i} \cap \tilde{S}\) is the set of acceptable outcomes kept attainable by \(d_{i}\). They explicitly note that using their robustness criterion is equivalent to using the Laplace criterion if the agent’s utility function is defined to be 1 for any \(s \in \tilde{S}\) and 0 otherwise.
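Rosenhead et al.’s robustness score is simple enough to compute directly. In the sketch below, the plan sets, decision labels, and reachability relation are all invented for illustration.

```python
# Illustrative computation of Rosenhead et al.'s robustness score.
# S_tilde is the acceptable subset of long-run plans; reachable[d]
# is the set of plans still attainable after initial decision d.
# All sets and names are invented.
S_tilde = {"p1", "p2", "p3", "p4"}
reachable = {
    "d1": {"p1", "p2", "p5"},
    "d2": {"p2", "p3", "p4"},
}

def robustness(d):
    """|S~_i| / |S~|: the fraction of acceptable plans kept open by d."""
    return len(reachable[d] & S_tilde) / len(S_tilde)

scores = {d: robustness(d) for d in reachable}
```

On these invented sets, d2 keeps three of the four acceptable plans open and d1 only two, so d2 scores higher; replacing the 0/1 acceptability test with a utility function recovers the Laplace-criterion equivalence noted above.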
Given its pedigree, it would seem natural to expect that the robustness criterion appealed to in RDM must also involve reliance on a uniform first-order or second-order probability measure, if only implicitly. However, this seems at odds with the skepticism toward unique probability assignments that is otherwise characteristic of the DMDU community (see, e.g., Lempert, Popper, & Bankes, 2003: pp. 47–8). We consider this issue in greater depth in Sects. 4.3–4.4.
As one final point, we note that many formulations of the robust satisficing norm emphasize robustness not only with respect to different possible states of the world or different probability assignments, but also with respect to alternative valuations of outcomes. Thus, Lempert, Popper, & Bankes (2003: p. 56) state that a robust strategy must perform reasonably well as assessed “using different value measures”. For the sake of simplicity, our discussion throughout this paper will ignore the issue of robustness with respect to different value systems and focus exclusively on robustness with respect to empirical uncertainty.