K-adaptability in two-stage mixed-integer robust optimization
Abstract
We study two-stage robust optimization problems with mixed discrete-continuous decisions in both stages. Despite their broad range of applications, these problems pose two fundamental challenges: (i) they constitute infinite-dimensional problems that require a finite-dimensional approximation, and (ii) the presence of discrete recourse decisions typically prohibits duality-based solution schemes. We address the first challenge by studying a K-adaptability formulation that selects K candidate recourse policies before observing the realization of the uncertain parameters and that implements the best of these policies after the realization is known. We address the second challenge through a branch-and-bound scheme that enjoys asymptotic convergence in general and finite convergence under specific conditions. We illustrate the performance of our algorithm in numerical experiments involving benchmark data from several application domains.
Keywords
Robust optimization · Two-stage problems · K-adaptability · Branch-and-bound
Mathematics Subject Classification
90C11 · 90C15 · 90C34 · 90C47
1 Introduction
Dynamic decision-making under uncertainty, where actions need to be taken both in anticipation of and in response to the realization of a priori uncertain problem parameters, arguably forms one of the most challenging domains of operations research and optimization theory. Despite intensive research efforts over the past six decades, many uncertainty-affected optimization problems resist solution, and even our understanding of the complexity of these problems remains incomplete.
In the last two decades, robust optimization has emerged as a promising methodology to counter some of the intricacies associated with decision-making under uncertainty. The rich theory on static robust optimization problems, in which all decisions have to be taken before the uncertainty is resolved, is summarized in [2, 4, 16]. However, dynamic robust optimization problems, in which some of the decisions can adapt to the observed uncertainties, are still poorly understood.
Remark 1
(Uncertain First-Stage Objective Coefficients) The assumption that \(\varvec{c}\) is deterministic does not restrict generality. Indeed, problem (1) accounts for uncertain first-stage objective coefficients \(\varvec{c}' : \varXi \mapsto \mathbb {R}^{N_1}\) if we augment the second-stage decisions \(\varvec{y}\) to \((\varvec{y}, \varvec{y}')\), replace the second-stage objective coefficients \(\varvec{d}\) with \((\varvec{d}, \varvec{c}')\) and impose the constraint that \(\varvec{y}' = \varvec{x}\).
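As a sketch (with the constraint system of problem (1) as in the surrounding text), the lifted problem of Remark 1 reads:

```latex
\min_{\varvec{x} \in \mathcal{X}} \; \max_{\varvec{\xi} \in \varXi} \;
\min_{(\varvec{y}, \varvec{y}') \in \mathcal{Y} \times \mathbb{R}^{N_1}}
\Big\{ \varvec{c}^\top \varvec{x} + \varvec{d}(\varvec{\xi})^\top \varvec{y}
       + \varvec{c}'(\varvec{\xi})^\top \varvec{y}' \; : \;
       \varvec{T}(\varvec{\xi})\,\varvec{x} + \varvec{W}(\varvec{\xi})\,\varvec{y}
       \le \varvec{h}(\varvec{\xi}), \;\; \varvec{y}' = \varvec{x} \Big\}
```

Since \(\varvec{y}' = \varvec{x}\) is enforced for every realization, the term \(\varvec{c}'(\varvec{\xi})^\top \varvec{y}'\) reproduces the uncertain first-stage cost \(\varvec{c}'(\varvec{\xi})^\top \varvec{x}\).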
Even in the special case where \({\mathcal {X}}\), \({\mathcal {Y}}\) and \(\varXi \) are linear programming (LP) representable, problem (1) has been shown to be NP-hard [23]. Nevertheless, problem (1) simplifies considerably if the sets \({\mathcal {Y}}\) and \(\varXi \) are LP representable. For this setting, several approximate solution schemes have been proposed that replace the second-stage decisions with decision rules, i.e., parametric classes of linear or nonlinear functions of \(\varvec{\xi }\) [3, 15, 17, 18, 28]. If we further assume that \(\varvec{d}\), \(\varvec{T}\) and \(\varvec{W}\) are deterministic and \(\varXi \) is of simple form (e.g., a budget uncertainty set), a number of exact solution schemes based on Benders’ decomposition [10, 26, 34, 40] and semi-infinite programming [1, 39] have been developed.
Problem (1) becomes significantly more challenging if the set \({\mathcal {Y}}\) is not LP representable. For this setting, conservative MILP approximations have been developed in [20, 36] by partitioning the uncertainty set \(\varXi \) into hyperrectangles and restricting the continuous and integer recourse decisions to affine and constant functions of \(\varvec{\xi }\) over each hyperrectangle, respectively. These a priori partitioning schemes have been extended to iterative partitioning approaches in [6, 31]. Iterative solution approaches based on decision rules have been proposed in [7, 8]. However, to the best of our knowledge, none of these approaches has been shown to converge to an optimal solution of problem (1). For the special case where problem (1) has relatively complete recourse, \(\varvec{d}\), \(\varvec{T}\) and \(\varvec{W}\) are deterministic and the optimal value of the second-stage problem is quasiconvex over \(\varXi \), the solution scheme of [39] has been extended in [41] to a nested semi-infinite approach that can solve instances of problem (1) with MILP representable sets \({\mathcal {Y}}\) and \(\varXi \) to optimality in finite time.
Our interest in problem (2) is motivated by two observations. Firstly, problem (2) has been shown to be a remarkably good approximation of problem (1), both in theory and in numerical experiments [5, 24]. Secondly, and perhaps more importantly, the K-adaptability problem conforms well with human decision-making, which tends to address uncertainty by developing a small number of contingency plans, rather than devising the optimal response for every possible future state of the world. For instance, practitioners may prefer a limited number of contingency plans to full flexibility in the second stage for operational (e.g., in production planning or logistics) or organizational (e.g., in emergency response planning) reasons.
The K-adaptability problem was first studied in [5], where the authors reformulate the 2-adaptability problem as a finite-dimensional bilinear program and solve it heuristically. The authors also show that the 2-adaptability problem is NP-hard even if \(\varvec{d}\), \(\varvec{T}\) and \(\varvec{W}\) are deterministic, and they develop necessary conditions for the K-adaptability problem (2) to outperform the static robust problem (where all decisions are taken here-and-now). The relationship between the K-adaptability problem (2) and static robust optimization is further explored in [9] for the special case where \(\varvec{T}\) and \(\varvec{W}\) are deterministic. The authors show that the gaps between both problems and the two-stage robust optimization problem (1) are intimately related to geometric properties of the uncertainty set \(\varXi \). Finite-dimensional MILP reformulations for problem (2) are developed in [24] under the additional assumption that both the here-and-now decisions \(\varvec{x}\) and the wait-and-see decisions \(\varvec{y}\) are binary. The authors show that both the size of the reformulations and their gaps to the two-stage robust optimization problem (1) depend on whether the uncertainty only affects the objective coefficients \(\varvec{d}\), or whether the constraint coefficients \(\varvec{T}\), \(\varvec{W}\) and \(\varvec{h}\) are uncertain as well. Finally, it is shown in [12, 13] that for polynomial-time solvable deterministic combinatorial optimization problems, the associated instances of problem (2) without first-stage decisions \(\varvec{x}\) can also be solved in polynomial time if all of the following conditions hold: (i) \(\varXi \) is convex, (ii) only the objective coefficients \(\varvec{d}\) are uncertain, and (iii) \(K > N_2\) policies are sought. This result has been extended to discrete uncertainty sets in [14], in which case pseudo-polynomial solution algorithms can be developed.
In this paper, we expand the literature on the K-adaptability problem in two ways. From an analytical viewpoint, we compare the two-stage robust optimization problem (1) with the K-adaptability problem (2) in terms of their continuity, convexity and tractability. We also investigate when the approximation offered by the K-adaptability problem is tight, and under which conditions the two-stage robust optimization and K-adaptability problems reduce to single-stage problems. From an algorithmic viewpoint, we develop a branch-and-bound scheme for the K-adaptability problem that combines ideas from semi-infinite and disjunctive programming. We establish conditions for its asymptotic and finite-time convergence; we show how it can be refined and integrated into state-of-the-art MILP solvers; and we present a heuristic variant that can address large-scale instances. In contrast to existing approaches, our algorithm can handle mixed continuous and discrete decisions in both stages as well as discrete uncertainty, and it allows for modeling continuous second-stage decisions via a novel class of highly flexible piecewise affine decision rules. Extensive numerical experiments on benchmark data from various application domains indicate that our algorithm is highly competitive with state-of-the-art solution schemes for problems (1) and (2).
We highlight that our C++ code is freely available for download on GitHub [33].
The remainder of this paper develops as follows. Section 2 analyzes the geometry and tractability of problems (1) and (2), and it proposes a novel class of piecewise affine decision rules that can be modeled as an instance of problem (2) with continuous second-stage decisions. Section 3 develops a branch-and-bound algorithm for the K-adaptability problem and analyzes its convergence. We present numerical results in Sect. 4, and we offer concluding remarks in Sect. 5.
Notation Vectors and matrices are printed in bold lowercase and uppercase letters, respectively, while scalars are printed in regular font. We use \({\mathbf {e}}_k\) to denote the kth unit basis vector and \({\mathbf {e}}\) to denote the vector whose components are all ones; their dimensions will be clear from the context. The ith row vector of a matrix \(\varvec{A}\) is denoted by \(\varvec{a}_i^\top \). For a logical expression \({\mathcal {E}}\), we define \(\mathbb {I}[{\mathcal {E}}]\) as the indicator function, which takes the value 1 if \({\mathcal {E}}\) is true and 0 otherwise.
2 Problem analysis
Table 1 Summary of theoretical results. Here “OBJ” refers to instances where only the objective coefficients d are uncertain, while T, W and h are constant. Similarly, “CON” refers to instances where only the constraint coefficients T, W and right-hand sides h are uncertain, while d is constant
Table 1 summarizes our theoretical results. For the sake of brevity, we relegate the formal statements of these results as well as their proofs to the electronic version of the paper at arxiv.org/abs/1706.07097. In the table, the first-stage problem refers to the overall problems (1) and (2), the evaluation problem refers to the maximization over \(\varvec{\xi } \in \varXi \) for a fixed first-stage decision, and the second-stage problem refers to the inner minimization over \(\varvec{y} \in {\mathcal {Y}}\) or \(k \in {\mathcal {K}}\) for a fixed first-stage decision and a fixed realization of the uncertain problem parameters. The table reveals that despite significant differences in their formulations, the problems (1) and (2) behave very similarly. The most significant difference is caused by the replacement of the optimization over the second-stage decisions \(\varvec{y} \in {\mathcal {Y}}\) in problem (1) with the selection of a candidate policy \(k \in {\mathcal {K}}\) in problem (2). This ensures that the second-stage problem in (2) is always continuous, convex and tractable, whereas the first-stage problem in (2) fails to be convex even if \({\mathcal {X}}\) and \({\mathcal {Y}}\) are convex. Moreover, in contrast to problem (1), the evaluation problem in (2) remains continuous as long as either the objective function or the constraints are unaffected by uncertainty. For general problem instances, however, neither of the two evaluation problems is continuous. As we will see in Sect. 3.2, this directly impacts the convergence of our branch-and-bound algorithm, which only takes place asymptotically in general. Note that the convexity of the problems (1) and (2) does not depend on the shape of the uncertainty set \(\varXi \).
2.1 Incorporating decision rules in the K-adaptability problem
Although the K-adaptability problem (2) selects the best candidate policy \(\varvec{y}_k\) in response to the observed parameter realization \(\varvec{\xi } \in \varXi \), the policies \(\varvec{y}_1, \ldots , \varvec{y}_K\)—once selected in the first stage—no longer depend on \(\varvec{\xi }\). This lack of dependence on the uncertain problem parameters can lead to overly conservative approximations of the two-stage robust optimization problem (1) when the second-stage decisions are continuous. In this section, we show how the K-adaptability problem (2) can be used to generalize affine decision rules, which are commonly used to approximate continuous instances of the two-stage robust optimization problem (1). We note that existing schemes, such as [13, 24], cannot be used for this purpose as they require the wait-and-see decisions \(\varvec{y}\) to be binary.
Throughout this section, we assume that problem (1) has purely continuous second-stage decisions (that is, \({\mathcal {Y}}\) is LP representable), a deterministic objective function (that is, \(\varvec{d} (\varvec{\xi }) = \varvec{d}\) for all \(\varvec{\xi } \in \varXi \)) and fixed recourse (that is, \(\varvec{W} (\varvec{\xi }) = \varvec{W}\) for all \(\varvec{\xi } \in \varXi \)). The assumption of continuous second-stage decisions allows us to assume, without loss of generality, that \({\mathcal {Y}} = \mathbb {R}^{N_2}\), as any potential restrictions can be absorbed in the second-stage constraints.
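Under these assumptions, equipping each of the K candidate policies with an affine dependence on \(\varvec{\xi}\) yields the piecewise affine scheme studied next. The following display is a sketch consistent with the notation of the proof of Observation 1 (with \(\varvec{\xi} \in \mathbb{R}^{N_p}\)); each candidate policy is an affine function of \(\varvec{\xi}\), and the best of the K policies is implemented:

```latex
\min_{\substack{\varvec{x} \in \mathcal{X},\\ (\varvec{y}_k^0, \varvec{Y}_k) \in \mathbb{R}^{N_2} \times \mathbb{R}^{N_2 \times N_p},\; k = 1, \ldots, K}} \;
\max_{\varvec{\xi} \in \varXi} \; \min_{k \in \mathcal{K}}
\Big\{ \varvec{c}^\top \varvec{x} + \varvec{d}^\top \big(\varvec{y}_k^0 + \varvec{Y}_k \varvec{\xi}\big) \; : \;
\varvec{T}(\varvec{\xi})\, \varvec{x} + \varvec{W} \big(\varvec{y}_k^0 + \varvec{Y}_k \varvec{\xi}\big) \le \varvec{h}(\varvec{\xi}) \Big\}
```

For \(K = 1\) this recovers classical affine decision rules, while \(K > 1\) selects the best of K affine policies per realization, yielding a piecewise affine (and potentially discontinuous) recourse.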
Observation 1
Proof
Problem (4) is infeasible if and only if for every \(\varvec{x} \in {\mathcal {X}}\) and \((\varvec{y}^0, \varvec{Y}, \varvec{z}) \in \hat{{\mathcal {Y}}}^K\) there is a \(\varvec{\xi } \in \varXi \) such that \(\varvec{T} (\varvec{\xi }) \varvec{x} + \varvec{W} \varvec{y}_k^0 + \varvec{\hat{W}} (\varvec{\xi }) \varvec{z}_{k2} \not \le \varvec{h} (\varvec{\xi })\) for all \(k = 1,\ldots , K\), which in turn is the case if and only if for every \(\varvec{x} \in {\mathcal {X}}\) and \((\varvec{y}^0_k, \varvec{Y}_k) \in \mathbb {R}^{N_2} \times \mathbb {R}^{N_2 \times N_p}\), \(k = 1,\ldots , K\) there is a \(\varvec{\xi } \in \varXi \) such that \(\varvec{T} (\varvec{\xi }) \varvec{x} + \varvec{W} \varvec{y}_k^0 + \varvec{W} \varvec{Y}_k \varvec{\xi } \not \le \varvec{h} (\varvec{\xi })\) for all \(k = 1,\ldots , K\); that is, if and only if problem (3) is infeasible. We thus assume that both (3) and (4) are feasible. In this case, we verify that every feasible solution \((\varvec{x}, \varvec{y}^0, \varvec{Y}, \varvec{z})\) to problem (4) gives rise to a feasible solution \((\varvec{x}, \varvec{y})\), where \(\varvec{y} (\varvec{\xi }) = \varvec{y}_{k(\varvec{\xi })}^0 + \varvec{Y}_{k(\varvec{\xi })} \varvec{\xi }\) and \(k(\varvec{\xi })\) is any element of \(\mathop {\arg \min }\nolimits _{k \in {\mathcal {K}}} \{ \varvec{c}^\top \varvec{x} + \varvec{d}^\top \varvec{y}_k^0 + \varvec{\hat{d}} (\varvec{\xi })^\top \varvec{z}_{k1} \, : \, \varvec{T} (\varvec{\xi }) \varvec{x} + \varvec{W} \varvec{y}_k^0 + \varvec{\hat{W}} (\varvec{\xi }) \varvec{z}_{k2} \le \varvec{h} (\varvec{\xi })\}\), in problem (3) that attains the same worst-case objective value.
Similarly, every optimal solution \((\varvec{x}, \varvec{y})\) to problem (3) gives rise to an optimal solution \((\varvec{x}, \varvec{y}^0, \varvec{Y}, \varvec{z})\), where \(\varvec{z}_k = (\varvec{z}_{k1}, \varvec{z}_{k2})\) with \(\varvec{z}_{k1} = \varvec{Y}_k^\top \varvec{d}\) and \(\varvec{z}_{k2} = [\varvec{w}_1^\top \varvec{Y}_k \; \ldots \; \varvec{w}_L^\top \varvec{Y}_k ]^\top \), \(k = 1, \ldots , K\), in problem (4). Hence, (3) and (4) share the same optimal value and the same sets of optimal solutions.\(\square \)
We close with an example that illustrates the benefits of piecewise affine decision rules.
Example 1
The piecewise affine decision rules presented here can be readily combined with discrete second-stage decisions. For the sake of brevity, we omit the details of this straightforward extension.
3 Solution scheme
Our solution scheme for the K-adaptability problem (2) is based on a reformulation as a semi-infinite disjunctive program, which we present next.
Observation 2
In the following, we stipulate that the optimal value of (5) is \(+ \infty \) whenever it is infeasible.
Proof of Observation 2
Problem (2) is infeasible if and only if (iff) for every \(\varvec{x} \in {\mathcal {X}}\) and \(\varvec{y} \in {\mathcal {Y}}^K\) there is a \(\varvec{\xi } \in \varXi \) such that \(\varvec{T} (\varvec{\xi }) \varvec{x} + \varvec{W} (\varvec{\xi }) \varvec{y}_k \not \le \varvec{h} (\varvec{\xi })\) for all \(k \in {\mathcal {K}}\), which in turn is the case iff for every \(\varvec{x} \in {\mathcal {X}}\) and \(\varvec{y} \in {\mathcal {Y}}^K\), the disjunction in (5) is violated for at least one \(\varvec{\xi } \in \varXi \); that is, iff problem (5) is infeasible. We thus assume that both (2) and (5) are feasible. In this case, one readily verifies that every feasible solution \((\varvec{x}, \varvec{y})\) to problem (2) gives rise to a feasible solution \((\theta , \varvec{x}, \varvec{y})\), where \(\theta = \sup \nolimits _{\varvec{\xi } \in \varXi }\; \inf \limits _{k \in {\mathcal {K}}} \{ \varvec{c}^\top \varvec{x} + \varvec{d} (\varvec{\xi })^\top \varvec{y}_k : \varvec{T} (\varvec{\xi }) \varvec{x} + \varvec{W} (\varvec{\xi }) \varvec{y}_k \le \varvec{h} (\varvec{\xi })\}\), in problem (5) with the same objective value. Likewise, any optimal solution \((\theta , \varvec{x}, \varvec{y})\) to problem (5) corresponds to an optimal solution \((\varvec{x}, \varvec{y})\) in problem (2). Hence, (2) and (5) share the same optimal value and the same sets of optimal solutions.
Problem (5) cannot be solved directly as it contains infinitely many disjunctive constraints. Instead, our solution scheme iteratively solves a sequence of (increasingly tighter) relaxations of this problem that are obtained by enforcing the disjunctive constraints over finite subsets of \(\varXi \). Whenever the solution of such a relaxation violates the disjunction for some realization \(\varvec{\xi } \in \varXi \), we create K subproblems that enforce the disjunction associated with \(\varvec{\xi }\) to be satisfied by the kth disjunct, \(k = 1, \ldots , K\). Our solution scheme is reminiscent of discretization methods employed in semi-infinite programming, which iteratively replace an infinite set of constraints with finite subsets and solve the resulting discretized problems. Indeed, our scheme can be interpreted as a generalization of Kelley’s cutting-plane method [11, 27] applied to semi-infinite disjunctive programs. In the special case where \(K=1\), our method reduces to the cutting-plane method for (static) robust optimization problems proposed in [30].
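For intuition, the \(K = 1\) special case can be sketched in a few lines: a master problem over the scenarios generated so far alternates with a separation step. The instance data below (a hypothetical objective `f` together with finite candidate and uncertainty sets, so that both subproblems can be solved by enumeration) are illustrative assumptions only; in practice both subproblems would be LPs or MILPs.

```python
# Kelley-style cutting-plane loop for a static (K = 1) robust problem:
#   min over x of max over xi of f(x, xi).
# Both the master problem and the separation problem are solved by
# brute-force enumeration over hypothetical finite sets, standing in
# for the LP/MILP oracles used in an actual implementation.

def f(x, xi):
    # illustrative uncertain objective, linear in xi
    return xi * (2.0 * x - 1.0)

X_GRID = [i / 4.0 for i in range(5)]     # candidate decisions 0, 0.25, ..., 1
XI_SET = [-1.0, 1.0]                     # finite uncertainty set

def robust_cutting_plane(tol=1e-9):
    scenarios = [XI_SET[0]]              # start with a single scenario
    while True:
        # master: best decision against the scenarios generated so far
        x, theta = min(((x, max(f(x, xi) for xi in scenarios))
                        for x in X_GRID), key=lambda p: p[1])
        # separation: worst-case realization for the current decision
        xi_star = max(XI_SET, key=lambda xi: f(x, xi))
        if f(x, xi_star) <= theta + tol:
            return x, theta              # no violated scenario: optimal
        scenarios.append(xi_star)        # tighten the master and repeat
```

On this toy instance the loop adds the second scenario after one iteration and then stops at the robust optimum \(x = 0.5\) with worst-case value 0.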
In the remainder of this section, we describe our basic branch-and-bound scheme (Sect. 3.1), we study its convergence (Sect. 3.2), we discuss algorithmic variants of the basic scheme that can enhance its numerical performance (Sect. 3.3), and we present a heuristic variant that can address problems of larger scale (Sect. 3.4).
3.1 Branch-and-bound algorithm
Our solution scheme iteratively solves a sequence of scenario-based K-adaptability problems and separation problems. We define both problems first, and then we describe the overall algorithm.
If \(\varXi _k = \emptyset \) for all \(k \in {\mathcal {K}}\), then problem (6) is unbounded, and we stipulate that its optimal value is \(-\infty \) and that it is attained by any solution \((-\infty , \varvec{x}, \varvec{y})\) with \((\varvec{x}, \varvec{y}) \in {\mathcal {X}} \times {\mathcal {Y}}^K\). Otherwise, if problem (6) is infeasible for \(\varXi _1,\ldots ,\varXi _K\), then we define its optimal value to be \(+ \infty \). In all other cases, the optimal value of problem (6) is finite and attained by an optimal solution \((\theta , \varvec{x}, \varvec{y})\) since \({\mathcal {X}}\) and \({\mathcal {Y}}\) are compact.
Remark 2
(Decomposability) For K-adaptability problems without first-stage decisions \(\varvec{x}\), problem (6) decomposes into K scenario-based static robust optimization problems that are only coupled through the constraints referencing the epigraph variable \(\theta \). In this case, we can recover an optimal solution to problem (6) by solving each of the K static problems individually and identifying the optimal \(\theta \) as the maximum of their optimal values.
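The decomposition of Remark 2 can be sketched as follows. The policy set, cost function and scenario sets below are hypothetical stand-ins, and both routines solve the (x-free) scenario-based problem by enumeration; the point is only that the per-policy route returns the same optimal value as the joint enumeration.

```python
from itertools import product

# Hypothetical finite policy set and uncertain objective; illustrative only.
Y_SET = [(0, 1), (1, 0), (1, 1)]

def cost(y, xi):
    # hypothetical uncertain second-stage cost d(xi)' y
    d = (xi, 3.0 - xi)
    return d[0] * y[0] + d[1] * y[1]

def master_joint(scenario_sets):
    """Scenario-based problem (6) without first-stage decisions,
    solved by joint enumeration over Y^K."""
    K = len(scenario_sets)
    best = None
    for ys in product(Y_SET, repeat=K):
        theta = max((cost(ys[k], xi)
                     for k in range(K) for xi in scenario_sets[k]),
                    default=float("-inf"))    # -inf if all sets are empty
        if best is None or theta < best[0]:
            best = (theta, ys)
    return best

def master_decomposed(scenario_sets):
    """Same problem via Remark 2: solve one static problem per policy k
    and take the maximum of the K optimal values."""
    thetas, ys = [], []
    for Xi_k in scenario_sets:
        y_k = min(Y_SET, key=lambda y: max((cost(y, xi) for xi in Xi_k),
                                           default=float("-inf")))
        thetas.append(max((cost(y_k, xi) for xi in Xi_k),
                          default=float("-inf")))
        ys.append(y_k)
    return max(thetas), tuple(ys)
```

The decomposed route enumerates \(K \cdot |{\mathcal{Y}}|\) candidates instead of \(|{\mathcal{Y}}|^K\), which is exactly the saving Remark 2 points to.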
Observation 3
Proof
Fix any feasible solution \((\theta , \varvec{x}, \varvec{y})\) to the scenario-based K-adaptability problem (6). For every \(\varvec{\xi } \in \varXi \), we can construct a feasible solution \((\zeta , \varvec{\xi }, \varvec{z})\) to problem (8) with \(\zeta = S (\theta , \varvec{x}, \varvec{y}, \varvec{\xi })\) by setting \(z_{k0} = 1\) if \(\varvec{c}^\top \varvec{x} + \varvec{d} (\varvec{\xi })^\top \varvec{y}_k - \theta \ge \varvec{t}_l (\varvec{\xi })^\top \varvec{x} + \varvec{w}_l (\varvec{\xi })^\top \varvec{y}_k - h_l (\varvec{\xi })\) for all \(l = 1, \ldots , L\) and \(z_{kl} = 1\) for \(l \in \mathop {\arg \max }\nolimits _{l \in \{1, \ldots , L\}} \left\{ \varvec{t}_l (\varvec{\xi })^\top \varvec{x} + \varvec{w}_l (\varvec{\xi })^\top \varvec{y}_k - h_l (\varvec{\xi }) \right\} \) otherwise (where ties can be broken arbitrarily). We thus conclude that \({\mathcal {S}} (\theta , \varvec{x}, \varvec{y})\) is less than or equal to the optimal value of problem (8). Likewise, every feasible solution \((\zeta , \varvec{\xi }, \varvec{z})\) to problem (8) satisfies \(\zeta \le \max \big \{ \varvec{c}^\top \varvec{x} + \varvec{d} (\varvec{\xi })^\top \varvec{y}_k - \theta , \; \max \nolimits _{l \in \{ 1, \ldots , L \}} \; \{ \varvec{t}_l (\varvec{\xi })^\top \varvec{x} + \varvec{w}_l (\varvec{\xi })^\top \varvec{y}_k - h_l (\varvec{\xi }) \} \big \}\) for all \(k \in {\mathcal {K}}\); that is, \(\zeta \le S (\theta , \varvec{x}, \varvec{y}, \varvec{\xi })\). Thus, the optimal value of problem (8) is less than or equal to \({\mathcal {S}} (\theta , \varvec{x}, \varvec{y})\) as well.
If the number of policies K is fixed and the uncertainty set \(\varXi \) is convex, then problem (8) can be solved by enumerating all \((L + 1)^K\) possible choices for \(\varvec{z}\), solving the resulting linear programs in \(\zeta \) and \(\varvec{\xi }\) and reporting the solution with the maximum value of \(\zeta \).\(\square \)
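For a finite uncertainty set, the enumeration over the \((L+1)^K\) selections \(\varvec{z}\) can be sketched directly. The data below (two policies, \(L = 2\) constraint rows, and a hypothetical three-point set standing in for \(\varXi\)) are illustrative assumptions; each fixed \(\varvec{z}\) would yield an LP over a convex \(\varXi\) in the setting of Observation 3, which we replace here by enumeration over the finite set. The sketch also checks the enumeration against the direct min-max evaluation of \(S\).

```python
from itertools import product

# Hypothetical instance fragment: K = 2 candidate policies, L = 2
# constraint rows per policy, and a finite three-point uncertainty set.
XI_SET = [1.0, 2.0, 3.0]
K_POL, L = 2, 2

def disjunct_values(xi, k, theta):
    """Index 0: objective gap c'x + d(xi)'y_k - theta of policy k;
    indices 1..L: violations of the L constraint rows under xi.
    All coefficients below are made up for illustration."""
    obj_gap = (xi - theta) if k == 0 else (3.0 - xi - theta)
    viol = ((xi - 2.0, -xi), (1.0 - xi, xi - 2.5))[k]
    return (obj_gap,) + viol

def S_direct(theta):
    """Direct evaluation: max over xi of min over k of the largest
    disjunct value (the function S of the surrounding text)."""
    return max(min(max(disjunct_values(xi, k, theta)) for k in range(K_POL))
               for xi in XI_SET)

def S_enumerated(theta):
    """Observation 3: enumerate all (L+1)^K selections z, maximize the
    selected linear pieces over Xi, and keep the best value of zeta."""
    best = float("-inf")
    for z in product(range(L + 1), repeat=K_POL):   # one choice per policy
        zeta = max(min(disjunct_values(xi, k, theta)[z[k]]
                       for k in range(K_POL))
                   for xi in XI_SET)
        best = max(best, zeta)
    return best
```

The two routines coincide because the selections \(z_k\) are independent across policies, so the maximum over \(\varvec{z}\) of the inner minimum equals the minimum over \(k\) of the per-policy maxima.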
1. Initialize. Set \({\mathcal {N}} \leftarrow \{ \tau ^0 \}\) (node set), where \(\tau ^0 = (\varXi _1^0, \ldots , \varXi _K^0)\) with \(\varXi _k^0 = \emptyset \) for all \(k \in {\mathcal {K}}\) (root node). Set \((\theta ^\text {i}, \varvec{x}^\text {i}, \varvec{y}^\text {i}) \leftarrow (+\infty , \emptyset , \emptyset )\) (incumbent solution).
2. Check convergence. If \({\mathcal {N}} = \emptyset \), then stop: declare problem (2) infeasible if \(\theta ^\text {i} = +\infty \), and otherwise report \((\varvec{x}^\text {i}, \varvec{y}^\text {i})\) as an optimal solution to problem (2).
3. Select node. Select a node \(\tau = (\varXi _1, \ldots , \varXi _K)\) from \({\mathcal {N}}\) and set \({\mathcal {N}} \leftarrow {\mathcal {N}} {\setminus } \{ \tau \}\).
4. Process node. Let \((\theta , \varvec{x}, \varvec{y})\) be an optimal solution to the scenario-based K-adaptability problem (6). If \(\theta \ge \theta ^\text {i}\), then go to Step 2.
5. Check feasibility. Let \((\zeta , \varvec{\xi }, \varvec{z})\) be an optimal solution to the separation problem (8). If \(\zeta \le 0\), then set \((\theta ^\text {i}, \varvec{x}^\text {i}, \varvec{y}^\text {i}) \leftarrow (\theta , \varvec{x}, \varvec{y})\) and go to Step 2; otherwise, if \(\zeta > 0\), go to Step 6.
6. Branch. Instantiate K new nodes \(\tau _1, \ldots , \tau _K\) with \(\tau _k = (\varXi _1, \ldots , \varXi _k \cup \{ \varvec{\xi } \}, \ldots , \varXi _K)\) for each \(k \in {\mathcal {K}}\). Set \({\mathcal {N}} \leftarrow {\mathcal {N}} \cup \{\tau _1, \ldots , \tau _K\}\) and go to Step 3.
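The six steps above can be sketched end-to-end for the special case of finite decision and uncertainty sets, where both the scenario-based problem (6) and the separation problem (8) are solvable by enumeration. All instance data below are hypothetical toy values; a real implementation would call an MILP solver for both subproblems.

```python
from itertools import product

# Hypothetical toy instance: binary first- and second-stage decisions,
# a finite uncertainty set, and K = 2 policies (illustrative only).
X_SET = [(0,), (1,)]
Y_SET = [(0, 0), (0, 1), (1, 0), (1, 1)]
XI_SET = [(1.0,), (2.0,), (3.0,)]
K = 2
C = (1.0,)

def d(xi):                       # uncertain objective coefficients
    return (xi[0], 3.0 - xi[0])

def rows(xi):                    # constraint rows (t, w, h): t'x + w'y <= h
    return [((0.0,), (-1.0, -1.0), -1.0)]   # here: y1 + y2 >= 1

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

def value(x, yk, xi):
    """Objective of policy yk under xi, or +inf if infeasible."""
    if any(dot(t, x) + dot(w, yk) > h + 1e-9 for t, w, h in rows(xi)):
        return float("inf")
    return dot(C, x) + dot(d(xi), yk)

def solve_master(node):
    """Step 4: scenario-based K-adaptability problem (6), by enumeration."""
    best = None
    for x, ys in product(X_SET, product(Y_SET, repeat=K)):
        vals = [value(x, ys[k], xi) for k in range(K) for xi in node[k]]
        if float("inf") in vals:
            continue                          # an enforced scenario is violated
        theta = max(vals, default=float("-inf"))
        if best is None or theta < best[0]:
            best = (theta, x, ys)
    return best                               # None if the node is infeasible

def separate(theta, x, ys):
    """Step 5: separation problem (8), worst realization and value zeta."""
    def S(xi):
        return min(max(dot(C, x) + dot(d(xi), yk) - theta,
                       max(dot(t, x) + dot(w, yk) - h
                           for t, w, h in rows(xi)))
                   for yk in ys)
    xi_star = max(XI_SET, key=S)
    return xi_star, S(xi_star)

def branch_and_bound():
    nodes = [tuple(frozenset() for _ in range(K))]       # Step 1: root node
    incumbent = (float("inf"), None, None)
    while nodes:                                         # Step 2
        node = nodes.pop()                               # Step 3
        res = solve_master(node)                         # Step 4
        if res is None or res[0] >= incumbent[0]:
            continue                                     # fathom node
        theta, x, ys = res
        xi_star, zeta = separate(theta, x, ys)           # Step 5
        if zeta <= 1e-9:
            incumbent = (theta, x, ys)                   # update incumbent
        else:                                            # Step 6: K children
            nodes.extend(node[:k] + (node[k] | {xi_star},) + node[k + 1:]
                         for k in range(K))
    return incumbent
```

On this instance, two policies strictly help: the static optimum has worst-case cost 2, while the pair of policies \((0,1)\) and \((1,0)\) achieves worst-case cost 1, which the scheme recovers.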
3.2 Convergence analysis
We now establish the correctness of our branch-and-bound scheme, as well as conditions for its asymptotic and finite convergence.
Theorem 1
(Correctness) If the branchandbound scheme terminates, then it either returns an optimal solution to problem (2) or correctly identifies the latter as infeasible.
Proof
We first show that if the branch-and-bound scheme terminates on an infeasible problem instance, then the final incumbent solution is \((\theta ^\text {i}, \varvec{x}^\text {i}, \varvec{y}^\text {i}) = (+\infty , \emptyset , \emptyset )\). Indeed, the algorithm can only update the incumbent in Step 5 if the objective value of the separation problem is nonpositive. By construction, this is only possible if the algorithm has found a feasible solution.
We now show that if the branch-and-bound scheme terminates on a feasible problem instance, then the algorithm returns an optimal solution \((\varvec{x}^\text {i}, \varvec{y}^\text {i})\) of problem (2). To this end, assume that \((\varvec{x}^\star , \varvec{y}^\star )\) is an optimal solution of problem (2) with objective value \(\theta ^\star \). Let \({\mathcal {T}}\) be the set of all nodes of the branch-and-bound tree for which \((\theta ^\star , \varvec{x}^\star , \varvec{y}^\star )\) is feasible in the corresponding scenario-based K-adaptability problem (6). Note that \({\mathcal {T}} \ne \emptyset \) since \((\theta ^\star , \varvec{x}^\star , \varvec{y}^\star )\) is feasible in the root node. Let \({\mathcal {T}}' \subseteq {\mathcal {T}}\) be the set of those nodes which have children in \({\mathcal {T}}\), that is, nodes that have been branched in Step 6 of our algorithm. Consider the set \({\mathcal {T}}'' = {\mathcal {T}} {\setminus } {\mathcal {T}}'\). Since the branch-and-bound scheme has terminated, we must have \({\mathcal {T}}'' \ne \emptyset \). Consider an arbitrary node \(\tau \in {\mathcal {T}}''\). Since \(\tau \) has been selected in Step 3 but not branched in Step 6, either (i) \(\tau \) has been fathomed in Step 4 or (ii) \(\tau \) has been fathomed in Step 5. In the former case, the solution \((\theta ^\star , \varvec{x}^\star , \varvec{y}^\star )\) must have been weakly dominated by the incumbent solution \((\theta ^\text {i}, \varvec{x}^\text {i}, \varvec{y}^\text {i})\), which therefore must be optimal as well. In the latter case, the incumbent solution must have been updated to \((\theta ^\star , \varvec{x}^\star , \varvec{y}^\star )\).\(\square \)
We now show that our branch-and-bound scheme converges asymptotically to an optimal solution of the K-adaptability problem (2). Our result has two implications: (i) for infeasible problem instances, the algorithm always terminates after finitely many iterations, i.e., infeasibility is detected in finite time; (ii) for feasible problem instances, the algorithm eventually only inspects solutions in the neighborhood of optimal solutions.
Theorem 2
(Asymptotic Convergence) Every accumulation point \(({\hat{\theta }}, \varvec{\hat{x}}, \varvec{\hat{y}})\) of the solutions to the scenario-based K-adaptability problem (6) in an infinite branch of the branch-and-bound tree gives rise to an optimal solution \((\varvec{\hat{x}}, \varvec{\hat{y}})\) of the K-adaptability problem (2) with objective value \({\hat{\theta }}\).
Proof
We denote by \((\theta ^\ell , \varvec{x}^\ell , \varvec{y}^\ell )\) and \((\zeta ^\ell , \varvec{\xi }^\ell , \varvec{z}^\ell )\) the sequences of optimal solutions to the scenario-based K-adaptability problem in Step 4 and the separation problem in Step 5 of the algorithm, respectively, that correspond to the node sequence \(\tau ^\ell \), \(\ell = 0, 1, \ldots \), of some infinite branch of the branch-and-bound tree. Since \({\mathcal {X}}\), \({\mathcal {Y}}\) and \(\varXi \) are compact, the Bolzano–Weierstrass theorem implies that \((\theta ^\ell , \varvec{x}^\ell , \varvec{y}^\ell )\) and \((\zeta ^\ell , \varvec{\xi }^\ell , \varvec{z}^\ell )\) each have at least one accumulation point.
We first show that every accumulation point \(({\hat{\theta }}, \varvec{\hat{x}}, \varvec{\hat{y}})\) of the sequence \((\theta ^\ell , \varvec{x}^\ell , \varvec{y}^\ell )\) corresponds to a feasible solution \((\varvec{\hat{x}}, \varvec{\hat{y}})\) of the K-adaptability problem (2) with objective value \({\hat{\theta }}\). By possibly going over to subsequences, we can without loss of generality assume that the two sequences \((\theta ^\ell , \varvec{x}^\ell , \varvec{y}^\ell )\) and \((\zeta ^\ell , \varvec{\xi }^\ell , \varvec{z}^\ell )\) converge themselves to \(({\hat{\theta }}, \varvec{\hat{x}}, \varvec{\hat{y}})\) and \(({\hat{\zeta }}, \varvec{{\hat{\xi }}}, \varvec{\hat{z}})\), respectively. Assume now that \((\varvec{\hat{x}}, \varvec{\hat{y}})\) does not correspond to a feasible solution of the K-adaptability problem (2) with objective value \({\hat{\theta }}\). Then there is a \(\varvec{\xi }^\star \in \varXi \) such that for every \(k \in {\mathcal {K}}\), either \(\varvec{T} (\varvec{\xi }^\star ) \varvec{\hat{x}} + \varvec{W} (\varvec{\xi }^\star ) \varvec{\hat{y}}_k \not \le \varvec{h} (\varvec{\xi }^\star )\) or \(\varvec{c}^\top \varvec{\hat{x}} + \varvec{d} (\varvec{\xi }^\star )^\top \varvec{\hat{y}}_k > {\hat{\theta }}\).
We now show that every accumulation point \(({\hat{\theta }}, \varvec{\hat{x}}, \varvec{\hat{y}})\) of the sequence \((\theta ^\ell , \varvec{x}^\ell , \varvec{y}^\ell )\) corresponds to an optimal solution \((\varvec{\hat{x}}, \varvec{\hat{y}})\) of the K-adaptability problem (2) with objective value \({\hat{\theta }}\). Assume to the contrary that \(({\hat{\theta }}, \varvec{\hat{x}}, \varvec{\hat{y}})\) is feasible but suboptimal, that is, there is a feasible solution \((\varvec{x}', \varvec{y}')\) to the K-adaptability problem (2) with objective value \(\theta ' < {\hat{\theta }}\). Let \({\mathcal {T}}\) be the set of all nodes of the branch-and-bound tree for which \((\theta ', \varvec{x}', \varvec{y}')\) is feasible in the corresponding scenario-based K-adaptability problem (6). Note that \({\mathcal {T}} \ne \emptyset \) since \((\theta ', \varvec{x}', \varvec{y}')\) is feasible in the root node. In the following, we distinguish the two cases (i) \(|{\mathcal {T}}| < \infty \) and (ii) \(|{\mathcal {T}}| = \infty \).
If \(|{\mathcal {T}}| < \infty \), then \({\mathcal {T}}\) must contain nodes that were fathomed in Steps 4 or 5 of the algorithm. In that case, however, the incumbent solution must have been updated to \(({\tilde{\theta }}, \tilde{\varvec{x}}, \tilde{\varvec{y}})\) with \({\tilde{\theta }} \le \theta '\) after finitely many iterations. Since the objective values \(\theta ^\ell \) of the scenario-based K-adaptability problems will be arbitrarily close to \({\hat{\theta }}\) for \(\ell \) sufficiently large, the corresponding nodes \(\tau ^\ell \) will also be fathomed in Step 4. This contradicts the assumption that the node sequence \(\tau ^\ell \) corresponds to an infinite branch of the branch-and-bound tree.
If \(|{\mathcal {T}}| = \infty \), on the other hand, the compactness of \({\mathcal {X}}\), \({\mathcal {Y}}\) and \(\varXi \) implies that \((\theta ', \varvec{x}', \varvec{y}')\) constitutes the accumulation point of another infinite sequence \((\theta '^{\,\ell }, \varvec{x}'^{\,\ell }, \varvec{y}'^{\,\ell })\). In that case, too, the objective values \(\theta ^\ell \) and \(\theta '^{\,\ell }\) of the scenario-based K-adaptability problems will be arbitrarily close to \({\hat{\theta }}\) and \(\theta '\), respectively, for \(\ell \) sufficiently large. Since \(\theta ' < {\hat{\theta }}\), the algorithm will fathom the tree nodes corresponding to the sequence \((\theta ^\ell , \varvec{x}^\ell , \varvec{y}^\ell )\) in Step 4. This again contradicts the assumption that the node sequence \(\tau ^\ell \) corresponds to an infinite branch of the branch-and-bound tree.
Since both cases \(|{\mathcal {T}}| < \infty \) and \(|{\mathcal {T}}| = \infty \) lead to contradictions, we conclude that \(({\hat{\theta }}, \varvec{\hat{x}}, \varvec{\hat{y}})\) cannot be suboptimal. The statement of the theorem thus follows.\(\square \)
Theorem 2 guarantees that after sufficiently many iterations of the algorithm, our scheme generates feasible solutions that are close to an optimal solution of the K-adaptability problem (2). In general, our algorithm may not converge after finitely many iterations. In the following, we discuss a class of problem instances for which finite convergence is guaranteed.
Theorem 3
(Finite Convergence) The branch-and-bound scheme terminates after finitely many iterations if \({\mathcal {Y}}\) has finite cardinality and only the objective function in problem (2) is uncertain.
Proof
We note that the assumption of deterministic constraints is critical in the previous statement.
Example 2
We note that every practical implementation of our branch-and-bound scheme will fathom nodes in Step 5 whenever the objective value of the separation problem (7) is sufficiently close to zero (within some \(\epsilon \)-tolerance). This ensures that the algorithm terminates in finite time in practice. Indeed, in Example 2 the objective value of the separation problem is less than \(\epsilon \) for all nodes \(\tau ^\ell \) with \(\ell \ge \log _2 (\xi ^0 \epsilon ^{-1})\), and our branch-and-bound algorithm will fathom the corresponding path of the tree after \({\mathcal {O}} (\log \epsilon ^{-1})\) iterations if we seek \(\epsilon \)-precision solutions.
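To make the \(\epsilon \)-tolerance bound concrete, the following small sketch computes the fathoming depth under the assumption (as in the discussion above) that the separation objective decays as \(\xi ^0 \, 2^{-\ell }\) along the branch; the function name is ours:

```python
def fathoming_depth(xi0: float, eps: float) -> int:
    """Smallest level l at which xi0 * 2**(-l) < eps,
    i.e. the first l with l >= log2(xi0 / eps)."""
    level = 0
    while xi0 * 2.0 ** (-level) >= eps:
        level += 1
    return level
```

With \(\xi ^0 = 1\) and \(\epsilon = 10^{-4}\), the branch is fathomed after \(\lceil \log _2 10^4 \rceil = 14\) levels, in line with the \({\mathcal {O}} (\log \epsilon ^{-1})\) bound.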
3.3 Improvements to the basic algorithm
The algorithm of Sect. 3.1 serves as a blueprint that can be extended in multiple ways. In the following, we discuss three enhancements that improve the numerical performance of our algorithm.

6\('\). Branch. Let \(K' = 1\) if \(\varXi _1 = \ldots = \varXi _K = \emptyset \) and let \(K' = \min \Big \{K, \, 1 + \max \limits _{k \in {\mathcal {K}}} \big \{k : \varXi _k \ne \emptyset \big \} \Big \}\) otherwise. Instantiate \(K'\) new nodes \(\tau _k = (\varXi _1, \ldots , \varXi _k \cup \{ \varvec{\xi } \}, \ldots , \varXi _K)\), \(k = 1, \ldots , K'\). Set \({\mathcal {N}} \leftarrow {\mathcal {N}} \cup \{\tau _1, \ldots , \tau _{K'}\}\) and go to Step 3.
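The symmetry-breaking logic of Step 6\('\) can be sketched in code as follows, with nodes represented as tuples of scenario sets; all names are ours:

```python
def branch(node, xi, K):
    """Step 6' (sketch): given the current node (Xi_1, ..., Xi_K) and a
    violating scenario xi, create one child per policy index k = 1..K',
    where K' stops one slot past the last non-empty set -- assigning xi
    to any later empty slot would only create symmetric copies."""
    if all(len(s) == 0 for s in node):
        K_prime = 1
    else:
        last = max(k for k, s in enumerate(node) if s)  # 0-based index
        K_prime = min(K, last + 2)
    return [tuple(set(s) | {xi} if j == k else set(s)
                  for j, s in enumerate(node))
            for k in range(K_prime)]
```

At the root all sets are empty, so a single child is created; a node whose first set is non-empty spawns two children, and so on, up to \(K\) children per branching.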
Integration into MILP Solvers. Step 4 of our algorithm solves the scenario-based problem (6) from scratch in every node, despite the fact that two successive problems along any branch of the branch-and-bound tree differ only by the addition of a few constraints. We can leverage this commonality if we integrate our branch-and-bound algorithm into the solution scheme of the MILP solver used for problem (6). In doing so, we can also exploit the advanced facilities commonly present in state-of-the-art solvers, such as warm starts and cutting planes, among others.
In order to integrate our branch-and-bound algorithm into the solution scheme of the MILP solver, we initialize the solver with the scenario-based problem (6) corresponding to the root node \(\tau ^0\) of our algorithm, see Step 1. The solver then proceeds to solve this problem using its own branch-and-bound procedure. Whenever the solver encounters an integral solution \((\theta , \varvec{x}, \varvec{y}) \in \mathbb {R} \times {\mathcal {X}} \times {\mathcal {Y}}^K\), we solve the associated separation problem (8). If \({\mathcal {S}}(\theta , \varvec{x}, \varvec{y}) > 0\), then we execute Step 6 of our algorithm through a branch callback: we report the K new branches to the solver, which will discard the current solution. Otherwise, if \({\mathcal {S}}(\theta , \varvec{x}, \varvec{y}) \le 0\), then we do not create any new branches, and the solver will accept \((\theta , \varvec{x}, \varvec{y})\) as the new incumbent solution. This ensures that only those solutions which are feasible in problem (5) are accepted as incumbent solutions, and furthermore, that all nodes of our branch-and-bound scheme (not just the root node) are fully explored within the same search tree generated by the MILP solver for solving problem (6) corresponding to the root node \(\tau ^0\) of our algorithm.
Whenever the solver encounters a fractional solution, it will by default branch on an integer variable that is fractional in the current solution. However, if the fractional solution also happens to satisfy \({\mathcal {S}} (\theta , \varvec{x}, \varvec{y}) > 0\), then it is possible to override this default strategy and instead execute Step 6 of our algorithm. In such cases, a heuristic rule can be used to decide whether to branch on integer variables or to branch as in Step 6. In our computational experience, a simple rule that alternates between the default branching rule of the solver and the one defined by Step 6 appears to perform well in practice.
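The alternating rule described above can be sketched as a tiny stateful policy; the interface is hypothetical, and a real implementation would live inside the solver's branch callback:

```python
import itertools

class AlternatingBrancher:
    """Decide, at each node, whether to branch on a fractional integer
    variable (the solver default) or on scenarios as in Step 6'.
    The rule simply alternates whenever both options are available."""

    def __init__(self):
        self._toggle = itertools.cycle(["default", "scenario"])

    def choose(self, is_fractional: bool, separation_value: float) -> str:
        if separation_value <= 0:
            return "default"   # robust feasible: nothing to separate
        if not is_fractional:
            return "scenario"  # integral but violated: Step 6' is the only option
        return next(self._toggle)
```

Only the genuinely ambiguous case (fractional and violated) consumes the alternating state; the two unambiguous cases are decided outright.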
3.4 Modification as a heuristic algorithm
Whenever the number of policies K is large, the solution of the scenario-based K-adaptability problem (6) can be time-consuming. In such cases, only a limited number of nodes will be explored by the algorithm in a given amount of computation time, and the quality of the final incumbent solution may be poor. As a remedy, we can reduce the size and complexity of the scenario-based K-adaptability problem (6) by fixing some of its second-stage policies and limiting the total computation time. In doing so, we obtain a heuristic variant of our algorithm that can scale to large values of K.
In our computational experience, the following simple heuristic performs well in practice: sequentially solve the 1-, 2-, ..., K-adaptability problems, fixing in each K-adaptability problem all but one of the second-stage policies, \(\varvec{y}_1, \ldots , \varvec{y}_{K-1}\), to their corresponding values in the \((K-1)\)-adaptability problem. This heuristic is motivated by two observations. First, the resulting scenario-based K-adaptability problems (6) have the same size and complexity as the corresponding scenario-based 1-adaptability problems. Second, in our experiments on instances with uncertain objective coefficients \(\varvec{d}\), we often found that some optimal second-stage policies of the \((K-1)\)-adaptability problem also appear in the optimal solution of the K-adaptability problem. In fact, it can be shown that this heuristic can obtain K-adaptable solutions that improve upon 1-adaptable solutions only if the objective coefficients \(\varvec{d}\) are affected by uncertainty.
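In code, the sequential heuristic reads roughly as follows, where `solve_with_fixed_policies` stands in for the scenario-based problem (6) with all listed policies fixed; this oracle is an assumption of the sketch, returning an objective value and the single free policy:

```python
def sequential_k_adaptability(solve_with_fixed_policies, K):
    """Heuristic sketch: solve the 1-, 2-, ..., K-adaptability problems in
    sequence, fixing in the k-th problem the k-1 policies found so far, so
    that each subproblem has the size of a 1-adaptability problem."""
    fixed, best = [], float("inf")
    for _ in range(K):
        objective, new_policy = solve_with_fixed_policies(fixed)
        best = min(best, objective)
        fixed.append(new_policy)
    return best, fixed
```

Each of the K calls has roughly the cost of a 1-adaptability solve, which is what allows this variant to scale to large K.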
4 Numerical results
We now analyze the computational performance of our branch-and-bound scheme on a variety of problem instances from the literature. We consider a shortest path problem with uncertain arc weights (Sect. 4.1), a capital budgeting problem with uncertain cash flows (Sect. 4.2), a variant of the capital budgeting problem with the additional option to take loans (Sect. 4.3), a project management problem with uncertain task durations (Sect. 4.4), and a vehicle routing problem with uncertain travel times (Sect. 4.5). Of these, the first two problems involve only binary decisions, and they can therefore also be solved with the approach described in [24]. In these cases, we show that our solution scheme is highly competitive, and it frequently outperforms the approach of [24]. In contrast, the third and fourth problems also involve continuous decisions, and there is no existing solution approach for their associated K-adaptability problems. However, the project management problem from Sect. 4.4 involves only continuous second-stage decisions, and therefore the corresponding two-stage robust optimization problem (1) can also be approximated using affine decision rules [3], which represent the most popular approach for such problems. In this case, we elucidate the benefits of K-adaptable constant and affine decisions over standard affine decision rules. Finally, the first and last problems involve only binary second-stage decisions and deterministic constraints, and they can therefore also be addressed with the heuristic approach described in [13]. In these cases, we show that the heuristic variant of our algorithm often outperforms the latter approach in terms of solution quality.
For each problem category, we investigate the trade-off between computational effort and the improvement in objective value of the K-adaptability problem for increasing values of K. We demonstrate that (i) the K-adaptability problem can provide significant improvements over static robust optimization (which corresponds to the case \(K=1\)), and that (ii) our solution scheme can quickly determine feasible solutions of high quality.
We implemented our branch-and-bound algorithm in C++ using the C callable library of CPLEX 12.6 [25]. We note that CPLEX does not directly support the creation of more than two child nodes during branching. However, as the branching Step 6\('\) may create up to K child nodes, we use a binary representation of the underlying K-ary tree (see, e.g., [32]), and implement the branching step using a combination of incumbent and branch callbacks. By using callback functions in this manner, we can integrate our algorithm within CPLEX in a modular fashion while still leveraging advanced MILP technology such as cutting planes, backtracking rules and other heuristics. We caution, however, that the creation of up to K child nodes can result in very large search trees and potentially cause memory issues. Therefore, to limit the size of our search trees, we use a CPLEX-imposed memory limit of 16 GB RAM. We used a constraint feasibility tolerance of \(\epsilon = 10^{-4}\) to accept any incumbent solutions, whereas all other solver options were kept at their default values. Unless mentioned otherwise, we employ a time limit of 7200 s for our branch-and-bound scheme and a time limit of 60 s for the heuristic variant of our algorithm. All experiments were conducted on a single core of an Intel Xeon 2.8 GHz computer.
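The binary encoding of a K-way branch can be illustrated with a minimal sketch of the device (our own illustration; CPLEX itself is not involved here):

```python
def as_binary_chain(children):
    """Encode a K-way branch as a right-leaning chain of binary branches:
    each binary node realizes one child on the left and defers the rest to
    the right, so K children cost K - 1 binary branching decisions."""
    assert children, "a branch must create at least one child"
    if len(children) == 1:
        return children[0]
    return (children[0], as_binary_chain(children[1:]))
```

The chain has exactly \(K - 1\) internal binary nodes, which is why a solver restricted to two child nodes per branch can still realize the K-way branching of Step 6\('\).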
4.1 Shortest paths
Results for the shortest path problem
       K = 2                      K = 3                      K = 4
N      Opt(#)  Time(s)  Gap(%)   Opt(#)  Time(s)  Gap(%)   Opt(#)  Time(s)  Gap(%)
20     99      6        1.23     97      408      2.51     70      539      1.74
25     91      222      4.14     64      847      2.91     33      885      2.89
30     64      744      4.40     31      1237     4.10     16      827      4.27
35     37      1083     5.36     14      1020     5.01     10      896      5.23
40     10      808      6.28     6       1670     6.43     2       39       6.10
45     9       1152     7.70     1       16       7.06     1       15       6.61
50     2       3307     8.55     1       2308     7.90     0       –        7.10
For each graph size \(N \in \{20, 25, \ldots , 50\}\), we randomly generate 100 problem instances as follows. We assign the coordinates \((u_i, v_i) \in \mathbb {R}^2\) to each node \(i \in V\) uniformly at random from the square \([0, 10]^2\). The nominal weight of the arc \((i,j) \in A\) is defined to be the Euclidean distance between the nodes i and j; that is, \(d_{ij}^0 = \sqrt{(u_i - u_j)^2 + (v_i - v_j)^2}\). The source node s and the terminal node t are defined to be the nodes with the maximum Euclidean distance between them. The arc set A is obtained by removing from the set of all pairwise links the \(\lfloor 0.7 (N^2 - N) \rfloor \) connections with the largest nominal weights. We set the uncertainty budget to \(\varGamma = 3\). Further details on the parameter settings can be found in [24].
4.2 Capital budgeting
Results for the capital budgeting problem
       K = 2                      K = 3                      K = 4
N      Opt(#)  Time(s)  Gap(%)   Opt(#)  Time(s)  Gap(%)   Opt(#)  Time(s)  Gap(%)
5      100     1        –        100     1        –        100     3        –
10     100     1        –        100     16       –        100     149      –
15     100     10       –        99      566      0.33     69      2245     1.42
20     100     419      –        34      2787     1.65     5       3710     4.02
25     29      2238     1.12     4       2281     2.63     0       –        6.22
30     1       188      3.01     1       6687     3.35     0       –        8.27
Table 3 summarizes the numerical performance of our branch-and-bound scheme for \(K \in \{2, 3, 4\}\). Table 3 demonstrates that our branch-and-bound scheme performs very well for this problem class since the majority of instances are solved to optimality for \(K \in \{2, 3\}\). Moreover, the optimality gaps for the unsolved instances are less than 4% for \(K \in \{2, 3\}\) and less than 9% for \(K = 4\) on average. Additionally, Fig. 6 shows that even for the largest instances, high-quality incumbent solutions which significantly improve (\(\approx \) 100%) upon the static robust solutions are obtained within 1 minute of computation time. In obtaining the results of Fig. 6c, d using our heuristic, we observed that limiting the computation time to 1 minute is often insufficient for the successful termination of the algorithm. Therefore, the kth second-stage policy determined by the algorithm is not always deterministic, and it can affect the solution of the \((k+1)\)-adaptability problem as well as the resulting objective value improvements. To eliminate this random effect, we ran our heuristic 10 times for each instance, using different random seeds for the underlying MILP solver, and report the averaged results in Fig. 6c, d. Our results compare favorably with those of [24] as well as those of the partition-and-bound approach for the corresponding two-stage robust optimization problem presented in [6].
4.3 Capital budgeting with loans
Results for the capital budgeting problem with loans
       K = 2                      K = 3                      K = 4
N      Opt(#)  Time(s)  Gap(%)   Opt(#)  Time(s)  Gap(%)   Opt(#)  Time(s)  Gap(%)
5      100     1        –        100     9        –        98      80       3.14
10     100     3        –        100     78       –        98      938      1.92
15     100     62       –        96      1265     0.91     23      3989     2.23
20     85      1680     0.80     20      3941     1.71     0       –        4.94
25     12      3363     2.29     1       2693     3.34     0       –        6.88
30     1       424      3.78     0       –        4.73     0       –        8.17
4.4 Project management
We define a project as a directed acyclic graph \(G = (V, A)\) whose nodes \(V = \{ 1, \ldots , N \}\) represent the tasks (e.g., ‘build foundations’ or ‘develop prototype’) and whose arcs \(A \subseteq V \times V\) denote the temporal precedences, i.e., \((i, j) \in A\) implies that task j cannot be started before task i has been completed. We assume that each task \(i \in V\) has an uncertain duration \(d_i (\varvec{\xi })\) that depends on the realization of an uncertain parameter vector \(\varvec{\xi } \in \varXi \). Without loss of generality, we stipulate that the project graph G has the unique sink \(N \in V\), and that the last task N has a duration of zero. This can always be achieved by introducing dummy nodes and/or arcs.
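For a fixed duration vector, evaluating such a project amounts to a longest-path computation in the precedence DAG. The following sketch (a standard critical-path evaluation, not code from the paper) makes this concrete:

```python
from collections import defaultdict

def makespan(N, arcs, durations):
    """Completion time of the project for fixed task durations d_i:
    the longest path to the unique sink N in the DAG G = (V, A),
    evaluated in a topological order (Kahn's algorithm)."""
    succs, indeg = defaultdict(list), {v: 0 for v in range(1, N + 1)}
    for (i, j) in arcs:
        succs[i].append(j)
        indeg[j] += 1
    order, queue = [], [v for v in indeg if indeg[v] == 0]
    while queue:
        v = queue.pop()
        order.append(v)
        for w in succs[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    start = {v: 0.0 for v in range(1, N + 1)}
    for v in order:
        for w in succs[v]:
            start[w] = max(start[w], start[v] + durations[v])
    return start[N]  # task N has zero duration by convention
```

In the robust setting, the durations \(d_i(\varvec{\xi })\) would be evaluated for a given realization \(\varvec{\xi }\) before this computation is invoked.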
We consider project networks of size \(N \in \{ 3 m + 1 \, : \, m = 3, 4, \ldots , 8 \}\). One can show that for each network size N, the optimal value of the corresponding static robust optimization problem as well as the affine decision rule problem is m, while the optimal value of the unapproximated two-stage robust optimization problem is \((m+1)/2\), see [37, Example 2.2]. For \(K \in \{2, 3, 4\}\), Fig. 9 summarizes the computational performance of the branch-and-bound scheme and the improvement in objective value of the resulting piecewise constant and piecewise affine decision rules with K pieces over the corresponding 1-adaptable solutions. Figure 9a, b show that using only two pieces, piecewise constant decision rules can improve upon the affine approximation by more than 12%, while piecewise affine decision rules can improve by more than 15%. Figure 9c, d show that piecewise constant decision rules require smaller computation times than piecewise affine decision rules. This is not surprising since piecewise constant decision rules are parameterized by \({\mathcal {O}} (KN)\) variables, whereas piecewise affine decision rules are parameterized by \({\mathcal {O}} (KN^2)\) variables.
4.5 Vehicle routing
We note that the set \({\mathcal {Y}}\) represents the so-called two-index vehicle flow formulation of the vehicle routing problem, in which the first equation ensures that M vehicles are used; the second set of equations ensures that each customer is visited by exactly one vehicle; while the third set of inequalities ensures that there are no subtours disconnected from the depot and that all vehicle capacities are respected. This formulation is known to be extremely challenging to solve because it comprises an exponential number of inequalities. For \(K > 1\), the corresponding K-adaptability problem is naturally even more challenging, and it is practically intractable to solve it using the approach described in [24]. In contrast, the heuristic variant of our algorithm described in Sect. 3.4 as well as the heuristic approach of [13] only require the solution of vehicle routing subproblems that are of similar complexity as the associated 1-adaptability problems. Therefore, in the following, we only present results using these algorithms. In both cases, we solved all vehicle routing subproblems using the branch-and-cut algorithm described in [29].
For our numerical experiments, we consider all 49 instances from [29] with \(N \le 50\), which are commonly used to benchmark vehicle routing algorithms. We set an overall time limit of 2 h. For the heuristic variant of our algorithm, we further set a time limit of 10 min per vehicle routing subproblem. We note that the heuristic of [13] requires the successful termination of an expensive preprocessing step to determine good K-adaptable solutions. Therefore, to prevent bias in favor of our algorithm, Fig. 10 compares the two algorithms only across the 39 instances for which this step terminated successfully. The figure shows that when the number of policies is small, the K-adaptable solutions obtained using our algorithm are about 1% better than those obtained using the heuristic algorithm of [13]. Moreover, the differences in objective values grow with instance size.
5 Conclusions
In contrast to single-stage robust optimization problems, which are typically solved via monolithic reformulations, there is growing evidence that two-stage and multistage robust optimization problems are best solved algorithmically [6, 7, 8, 31, 39]. Our findings in this paper appear to confirm this observation, as our proposed branch-and-bound algorithm compares favorably with the reformulations proposed in [24]. In terms of modeling flexibility, our algorithm can accommodate mixed continuous and discrete decisions in both stages, can incorporate discrete uncertainty, and allows us to model flexible piecewise affine decision rules. At the same time, our numerical results indicate that the algorithmic approach is highly competitive in terms of computational performance as well. From a practical viewpoint, a notable feature of our algorithm is that it admits a lightweight implementation by integrating it into the branch-and-bound schemes of commercial solvers via branch callbacks, while allowing easy modification as a heuristic for large-scale instances.
Our results open up multiple fruitful avenues for future research. The scope of the presented algorithm can be broadened further by generalizing it to two-stage distributionally robust optimization problems, where the uncertain problem parameters are modeled as random variables following a probability distribution that is only partially known. More ambitiously, it would be instructive to explore how the concept of K-adaptability, as well as our proposed branch-and-bound algorithm, extend to dynamic robust optimization problems with more than two stages.
Acknowledgements
Anirudh Subramanyam gratefully acknowledges support from the John and Claire Bertucci Graduate Fellowship program at Carnegie Mellon University.
References
1. Ayoub, J., Poss, M.: Decomposition for adjustable robust linear optimization subject to uncertainty polytope. Comput. Manag. Sci. 13(2), 219–239 (2016)
2. Ben-Tal, A., El Ghaoui, L., Nemirovski, A.: Robust Optimization. Princeton University Press, Princeton (2009)
3. Ben-Tal, A., Goryashko, A., Guslitzer, E., Nemirovski, A.: Adjustable robust solutions of uncertain linear programs. Math. Program. 99(2), 351–376 (2004)
4. Bertsimas, D., Brown, D.B., Caramanis, C.: Theory and applications of robust optimization. SIAM Rev. 53(3), 464–501 (2011)
5. Bertsimas, D., Caramanis, C.: Finite adaptability in multistage linear optimization. IEEE Trans. Autom. Control 55(12), 2751–2766 (2010)
6. Bertsimas, D., Dunning, I.: Multistage robust mixed integer optimization with adaptive partitions. Oper. Res. 64(4), 980–998 (2016)
7. Bertsimas, D., Georghiou, A.: Design of near optimal decision rules in multistage adaptive mixed-integer optimization. Oper. Res. 63(3), 610–627 (2015)
8. Bertsimas, D., Georghiou, A.: Binary decision rules for multistage adaptive mixed-integer optimization. Math. Program. 167(2), 395–433 (2018)
9. Bertsimas, D., Goyal, V., Sun, X.A.: A geometric characterization of the power of finite adaptability in multistage stochastic and adaptive optimization. Math. Oper. Res. 36(1), 24–54 (2011)
10. Bertsimas, D., Litvinov, E., Sun, X.A., Zhao, J., Zheng, T.: Adaptive robust optimization for the security constrained unit commitment problem. IEEE Trans. Power Syst. 28(1), 52–63 (2013)
11. Blankenship, J.W., Falk, J.E.: Infinitely constrained optimization problems. J. Optim. Theory Appl. 19(2), 261–281 (1976)
12. Buchheim, C., Kurtz, J.: Min–max–min robustness: a new approach to combinatorial optimization under uncertainty based on multiple solutions. Electron. Notes Discrete Math. 52, 45–52 (2016)
13. Buchheim, C., Kurtz, J.: Min–max–min robust combinatorial optimization. Math. Program. 163(1), 1–23 (2017)
14. Buchheim, C., Kurtz, J.: Complexity of min–max–min robustness for combinatorial optimization under discrete uncertainty. Discrete Optim. 28(1), 1–15 (2018)
15. Chen, X., Zhang, Y.: Uncertain linear programs: extended affinely adjustable robust counterparts. Oper. Res. 57(6), 1469–1482 (2009)
16. Gabrel, V., Murat, C., Thiele, A.: Recent advances in robust optimization: an overview. Eur. J. Oper. Res. 235(3), 471–483 (2014)
17. Georghiou, A., Wiesemann, W., Kuhn, D.: Generalized decision rule approximations for stochastic programming via liftings. Math. Program. 152(1), 301–338 (2015)
18. Goh, J., Sim, M.: Distributionally robust optimization and its tractable approximations. Oper. Res. 58(4), 902–917 (2010)
19. Gorissen, B.L., den Hertog, D.: Robust counterparts of inequalities containing sums of maxima of linear functions. Eur. J. Oper. Res. 227(1), 30–43 (2013)
20. Gorissen, B.L., Yanıkoğlu, I., den Hertog, D.: A practical guide to robust optimization. Omega 53, 124–137 (2015)
21. Gounaris, C., Repoussis, P., Tarantilis, C., Wiesemann, W., Floudas, C.: An adaptive memory programming framework for the robust capacitated vehicle routing problem. Transp. Sci. 50(4), 1239–1260 (2016)
22. Gounaris, C., Wiesemann, W., Floudas, C.: The robust capacitated vehicle routing problem under demand uncertainty. Oper. Res. 61(3), 677–693 (2013)
23. Guslitser, E.: Uncertainty-immunized solutions in linear programming. Master's Thesis, Technion, Israel Institute of Technology (2002)
24. Hanasusanto, G.A., Kuhn, D., Wiesemann, W.: \(K\)-adaptability in two-stage robust binary programming. Oper. Res. 63(4), 877–891 (2015)
25. IBM: ILOG CPLEX Optimizer (2018). https://www01.ibm.com/software/commerce/optimization/cplexoptimizer/. Accessed 27 July 2018
26. Jiang, R., Zhang, M., Li, G., Guan, Y.: Two-stage robust power grid optimization problem. Available on Optimization Online (2010)
27. Kelley, J.E.: The cutting-plane method for solving convex programs. J. Soc. Ind. Appl. Math. 8(4), 703–712 (1960)
28. Kuhn, D., Wiesemann, W., Georghiou, A.: Primal and dual linear decision rules in stochastic and robust optimization. Math. Program. 130(1), 177–209 (2011)
29. Lysgaard, J., Letchford, A., Eglese, R.: A new branch-and-cut algorithm for the capacitated vehicle routing problem. Math. Program. 100(2), 423–445 (2004)
30. Mutapcic, A., Boyd, S.: Cutting-set methods for robust convex optimization with pessimizing oracles. Optim. Methods Softw. 24(3), 381–406 (2009)
31. Postek, K., den Hertog, D.: Multistage adjustable robust mixed-integer optimization via iterative splitting of the uncertainty set. INFORMS J. Comput. 28(3), 553–574 (2016)
32. Rubin, P.: When the OctoMom solves MILPs (2010). https://orinanobworld.blogspot.com/2010/06/whenoctomomsolvesmilps.html. Accessed 29 Jan 2019
33. Subramanyam, A., Gounaris, C., Wiesemann, W.: KAdaptabilitySolver (GitHub repository) (2019). https://doi.org/10.5281/zenodo.3490004
34. Thiele, A., Terry, T., Epelman, M.: Robust linear optimization with recourse. Technical Report, Lehigh University and University of Michigan (2010)
35. Toth, P., Vigo, D.: Vehicle Routing: Problems, Methods, and Applications, 2nd edn. Society for Industrial and Applied Mathematics (2014)
36. Vayanos, P., Kuhn, D., Rustem, B.: Decision rules for information discovery in multistage stochastic programming. In: Proceedings of the 50th IEEE Conference on Decision and Control and European Control Conference, pp. 7368–7373 (2011)
37. Wiesemann, W., Kuhn, D., Rustem, B.: Robust resource allocations in temporal networks. Math. Program. 135(1), 437–471 (2012)
38. Yanıkoğlu, I., Gorissen, B.L., den Hertog, D.: A survey of adjustable robust optimization. Eur. J. Oper. Res. 277(3), 799–813 (2019)
39. Zeng, B., Zhao, L.: Solving two-stage robust optimization problems using a column-and-constraint generation method. Oper. Res. Lett. 41(5), 457–461 (2013)
40. Zhao, C., Wang, J., Watson, J.P., Guan, Y.: Multistage robust unit commitment considering wind and demand response uncertainties. IEEE Trans. Power Syst. 28(3), 2708–2717 (2013)
41. Zhao, L., Zeng, B.: An exact algorithm for two-stage robust optimization with mixed integer recourse problems. Technical Report, University of South Florida (2012)
Copyright information
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.