Abstract
We propose a new class of convex approximations for two-stage mixed-integer recourse models, the so-called generalized \(\alpha \)-approximations. The advantage of these convex approximations over existing ones is that they are more suitable for efficient computations. Indeed, we construct a loose Benders decomposition algorithm that solves large problem instances in reasonable time. To guarantee the performance of the resulting solution, we derive corresponding error bounds that depend on the total variations of the probability density functions of the random variables in the model. The error bounds converge to zero if these total variations converge to zero. We empirically assess our solution method on several test instances, including the SIZES and SSLP instances from SIPLIB. We show that our method finds near-optimal solutions if the variability of the random parameters in the model is large. Moreover, our method outperforms existing methods in terms of computation time, especially for large problem instances.
Introduction
Consider the two-stage mixed-integer recourse model with random right-hand side
$$\begin{aligned} \eta ^* := \min _x \{cx + Q(x): Ax = b,\ x \in X \}, \end{aligned}$$(1)
where the recourse function Q is defined as
$$\begin{aligned} Q(x) := \mathbb {E}_\omega [v(\omega ,x)], \quad x \in \mathbb {R}^{n_1}. \end{aligned}$$(2)
This model represents a two-stage decision problem under uncertainty. In the first stage, a decision x has to be made here-and-now, subject to deterministic constraints \(Ax = b\) and random goal constraints \(Tx = \omega \). Here, \(\omega \) is a continuous or discrete random vector whose probability distribution is known. In the second stage, the realization of \(\omega \) becomes known and any infeasibilities with respect to \(Tx = \omega \) have to be repaired. This is modelled by the second-stage problem
$$\begin{aligned} v(\omega ,x) := \min _y \{qy: Wy = \omega - Tx,\ y \in Y\}. \end{aligned}$$(3)
The objective in this two-stage recourse model is to minimize the sum of immediate costs cx and expected second-stage costs \(Q(x) = \mathbb {E}_\omega [v(\omega ,x)]\), \({x \in X}\).
Frequently, integrality restrictions are imposed on the first- and second-stage decisions. That is, X and Y are of the form \(X = {\mathbb {Z}}^{p_1}_+ \times \mathbb {R}^{n_1 - p_1}_+\) and \(Y = {\mathbb {Z}}^{p_2}_+\times \mathbb {R}^{n_2 - p_2}_+\). Such restrictions arise naturally when modelling real-life problems, for example to model on/off decisions or batch size restrictions. The resulting model is called a mixed-integer recourse (MIR) model. Such models have many practical applications in for example energy, telecommunication, production planning, and environmental control, see e.g. [10, 37].
While MIR models are highly relevant in practice, they are notoriously difficult to solve. The reason is that Q is in general nonconvex if integrality restrictions are imposed on the second-stage decision variables y, see [20]. Therefore, standard techniques for convex optimization cannot be used to solve these models. In contrast, if \(Y = {\mathbb {R}}^{n_2}_+\), then Q is convex and efficient solution methods are available, most notably the L-shaped method in [32] and variants thereof.
Because of the nonconvexity of Q, traditional solution methods for MIR models typically combine ideas from deterministic mixed-integer programming and stochastic continuous programming, see e.g. [4, 8, 9, 15, 17, 19, 29, 31, 38], and the survey papers [24, 28, 30]. We, however, use a fundamentally different approach to deal with the nonconvex recourse function Q. Instead of solving the original MIR model in (1), we solve an approximating problem in which Q is replaced by a convex approximation \(\hat{Q}\) of Q. This convex approximation \({\hat{Q}}\) does not have to be a lower bound of Q. The resulting approximating optimization problem is given by
$$\begin{aligned} \min _x \{cx + {\hat{Q}}(x): Ax = b,\ x \in X \}. \end{aligned}$$(4)
Because \(\hat{Q}\) is convex, we can use efficient techniques from convex optimization to solve the optimization problem in (4). Indeed, this optimization problem is convex if all first-stage variables \(x \in X\) are continuous, and it is a MIP with a convex objective if some of these first-stage variables are integer. Thus, compared to traditional solution methods, we are able to solve similar-sized problems much faster, and solve larger problem instances. In fact, we demonstrate this in our numerical experiments on problem instances available from [29] and SIPLIB [2], and on randomly generated problem instances.
Obviously, the optimal solution \({\hat{x}}\) of the approximating problem in (4) is not necessarily optimal for the original MIR model in (1). That is why we guarantee the quality of the approximating solution \({\hat{x}}\) by deriving an error bound on
$$\begin{aligned} \Vert Q - {\hat{Q}}\Vert _\infty := \sup _x \left| Q(x) - {\hat{Q}}(x)\right| . \end{aligned}$$
This error bound directly gives us an upper bound on the optimality gap of \({\hat{x}}\):
$$\begin{aligned} c{\hat{x}} + Q({\hat{x}}) - \eta ^* \le 2\Vert Q - {\hat{Q}}\Vert _\infty , \end{aligned}$$
see [25].
Convex approximations and corresponding error bounds have been derived for many different classes of models. The idea to use convex approximations \({\hat{Q}}\) for the nonconvex mixed-integer recourse function Q dates back to [33], in which van der Vlerk proposes to use \(\alpha \)-approximations for the special case of simple integer recourse (SIR) models. These \(\alpha \)-approximations are obtained by perturbing the probability distribution of the random vector \(\omega \). Klein Haneveld et al. [14] derive an error bound for the \(\alpha \)-approximations that depends on the total variations of the marginal density functions of the random variables in the SIR model.
More convex approximations have been described for more general classes of problems. For example, in [34, 35] van der Vlerk extends the \(\alpha \)-approximations to a class of MIR models with a single recourse constraint, and to integer recourse models with a totally unimodular (TU) recourse matrix, respectively. Furthermore, Romeijnders et al. [26] derive an error bound for the latter approximation, and they derive a tighter error bound for the shifted LP-relaxation approximation for the same class of problems. The quality of the convex approximations for TU integer recourse models is assessed empirically in [22], and it turns out that they perform well if the variability of the random parameters in the model is large enough.
Romeijnders et al. [23] generalize the shifted LP-relaxation to general two-stage MIR models, and they derive a corresponding asymptotic error bound. This error bound converges to zero if the variability of the random parameters in the model increases. Romeijnders and van der Laan [21] derive similar error bounds for convex approximations that fit into a specific framework. In this framework, convex approximations are defined using pseudo-valid cutting planes for the second-stage feasible regions. Their idea is to only use cutting planes which are affine in the first-stage decision variables, so that the corresponding expected value function is convex. In general, the approximations are not exact, since pseudo-valid cutting planes cut away feasible solutions and are allowed to be overly conservative. Nevertheless, a similar error bound as for the shifted LP-relaxation has been derived if the pseudo-valid cutting planes are tight. That is, if they are exact on an entire grid of first-stage solutions.
Both the shifted LP-relaxation of [23] and the cutting plane framework of [21], however, cannot be applied directly to efficiently solve MIR models in general. This is because the shifted LP-relaxation is very difficult to compute in general, as discussed in [23], and furthermore, the asymptotic error bound in the cutting plane framework of [21] only applies for tight pseudo-valid cutting planes, which are only available in special cases, e.g. for SIR models. That is why we propose an alternative class of convex approximations for general two-stage MIR models, the so-called generalized \(\alpha \)-approximations. They are derived by exploiting properties of Gomory relaxations [11] of the second-stage mixed-integer programming problems.
Contrary to the shifted LP-relaxation, the generalized \(\alpha \)-approximations, denoted by \({\hat{Q}_\alpha }\), can be solved efficiently. In fact, we develop a so-called loose Benders decomposition algorithm to solve the approximating model in (4) with \({\hat{Q}} = {\hat{Q}_\alpha }\). Our Benders decomposition is called loose, because we derive optimality cuts for \({\hat{Q}_\alpha }\) which are in general not tight at the current solution. While these loose optimality cuts are in general not sufficient to find the optimal solution \({\hat{x}}\) of the approximating model in (4), we prove that they are tight enough in the sense that a similar performance guarantee applies to the solution obtained by the loose Benders decomposition as to \({\hat{x}}\).
Summarizing, our main contributions are as follows.

We propose a new class of convex approximations for general two-stage MIR models, which are based on Gomory relaxations of the second-stage problems. These generalized \(\alpha \)-approximations can be solved efficiently, and a similar error bound as for the shifted LP-relaxation applies to the generalized \(\alpha \)-approximations.

We develop a loose Benders decomposition algorithm to (approximately) solve the approximating model with the generalized \(\alpha \)-approximations. This is the first efficient algorithm for solving nontrivial convex approximations of general two-stage MIR models.

We prove that the solution obtained by the loose Benders decomposition algorithm has a similar performance guarantee as the exact solution to the generalized \(\alpha \)-approximations.

We carry out extensive numerical experiments on 41 test instances from the literature and on 240 randomly generated instances, and show that using our loose Benders decomposition algorithm we obtain good solutions within reasonable time, also for large problem instances, in particular when the variability of the random parameters in the model is large.
The remainder of this paper is organized as follows. In Sect. 2, we discuss the shifted LP-relaxation of [23] and its corresponding error bound. In Sect. 3, we present the generalized \(\alpha \)-approximations and an efficient algorithm for solving the corresponding approximating problem. Section 4 contains the proof of the performance guarantee for the loose Benders decomposition algorithm. In Sect. 5, we report on numerical experiments to evaluate the performance of our algorithm. Finally, we conclude in Sect. 6.
Throughout, we make the following assumptions. Assumptions (A2)–(A4) guarantee that Q(x) is finite for all \(x \in X\) such that \(Ax = b\).

(A1)
The first-stage feasible region \({\mathcal {X}} := \{x\in X: Ax = b\}\) is bounded.

(A2)
The recourse is relatively complete: for all \(\omega \in \mathbb {R}^m\) and \(x \in {\mathcal {X}}\), there exists a \(y \in Y\) such that \(Wy = \omega - Tx\), so that \(v(\omega ,x) < \infty \).

(A3)
The recourse is sufficiently expensive: \(v(\omega ,x) > -\infty \) for all \(\omega \in \mathbb {R}^m\) and \(x \in {\mathcal {X}}\).

(A4)
\(\mathbb {E}[\omega _i]\) is finite for all \(i=1,\dots ,m\).

(A5)
The recourse matrix W is integer.
Existing convex approximations of mixed-integer recourse functions
In this section, we review the shifted LP-relaxation approximation of [23] and its corresponding error bound. First, however, we review results on asymptotic periodicity in mixed-integer linear programming in Sect. 2.1. We do so since these results are not only used to derive the shifted LP-relaxation in Sect. 2.2, but also to derive the generalized \(\alpha \)-approximations in Sect. 3.1.
Asymptotic periodicity in mixed-integer programming
In order to derive convex approximations of the MIR function Q, we analyze the value function v of the second-stage problem, defined as
$$\begin{aligned} v(\omega ,x) := \min _y \{qy: Wy = \omega - Tx,\ y \in Y\}. \end{aligned}$$(5)
In particular, LP duality implies that the LP-relaxation \(v_\text {LP}(\omega ,x)\) of \(v(\omega ,x)\) is polyhedral in the right-hand side vector \(\omega - Tx\):
$$\begin{aligned} v_\text {LP}(\omega ,x) = \max _{k = 1,\dots ,K} \lambda ^k (\omega - Tx), \end{aligned}$$
where \(\lambda ^k\), \(k = 1,\dots ,K\), are the extreme points of the dual feasible region \(\{\lambda : \lambda W \le q\}\). Romeijnders et al. [23] derive a similar characterization of \(v(\omega , x)\) in terms of linear and periodic functions by exploiting so-called Gomory relaxations. We briefly discuss these Gomory relaxations, before stating the characterization of \(v(\omega ,x)\) in Lemma 1.
The Gomory relaxation of \(v(\omega ,x)\) is defined for any dual feasible basis matrix of \(v_\text {LP}(\omega ,x)\). Let B denote such a matrix and let N be such that \(W \equiv \begin{pmatrix} B&N \end{pmatrix}\), meaning equality up to a permutation of the columns. Let \(y_B\) and \(y_N\) denote the second-stage variables corresponding to the columns in B and N, respectively, and \(q_B\) and \(q_N\) their corresponding cost parameters. The Gomory relaxation \(v_B\) is obtained by relaxing the nonnegativity constraints of the basic variables \(y_B\). Romeijnders et al. [23] derive the following expression for \(v_B\):
$$\begin{aligned} v_B(\omega ,x) = q_B B^{-1}(\omega - Tx) + \psi _B(\omega - Tx), \end{aligned}$$
where
and \({\bar{q}}_N = q_N - q_B B^{-1} N\). Moreover, they show that
$$\begin{aligned} v(\omega ,x) = v_B(\omega ,x) \end{aligned}$$
if \(\omega - Tx \in \varLambda := \{t: B^{-1}t \ge 0\}\), and if the distance of \(\omega - Tx\) to the boundary of \(\varLambda \) is sufficiently large, see Definition 1. We say that \(v(\omega , x)\) is asymptotically periodic, since \(\psi _B\) is a B-periodic function, i.e. \(\psi _B(s + Bl) = \psi _B(s)\) for every \(s \in {\mathbb {R}}^m\) and \(l \in {\mathbb {Z}}^m\), see [23].
Definition 1
Let \(\varLambda \subset {\mathbb {R}}^m\) be a closed convex set and let \(d \in {\mathbb {R}}\) with \(d > 0\) be given. Then, we define \(\varLambda (d)\) as
$$\begin{aligned} \varLambda (d) := \{s \in \varLambda : {\mathcal {B}}(s, d) \subseteq \varLambda \}, \end{aligned}$$
where \({\mathcal {B}}(s, d) := \{t \in {\mathbb {R}}^m : \Vert t - s\Vert _2 \le d\}\) is the closed ball centered at s with radius d. We can interpret \(\varLambda (d)\) as the set of points in \(\varLambda \) with Euclidean distance at least d to the boundary of \(\varLambda \).
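Definition 1 specializes nicely to the simplicial cones \(\varLambda = \{t : B^{-1}t \ge 0\}\) used below: the ball \({\mathcal {B}}(s,d)\) lies in \(\varLambda \) precisely when every inequality \((B^{-1}s)_i \ge 0\) holds with slack at least \(d \Vert (B^{-1})_i\Vert _2\), where \((B^{-1})_i\) denotes the ith row of \(B^{-1}\). A minimal Python sketch of this membership test (the function name and list-of-rows matrix encoding are ours):

```python
import math

def in_lambda_d(Binv_rows, t, d):
    """Membership test for Lambda(d), where Lambda = {t : B^{-1} t >= 0}
    and Binv_rows lists the rows of B^{-1}. The ball B(t, d) lies in
    Lambda iff each inequality holds with slack at least d * ||row||_2."""
    for row in Binv_rows:
        slack = sum(r * tj for r, tj in zip(row, t))
        if slack < d * math.sqrt(sum(r * r for r in row)):
            return False
    return True

# Example: Lambda = nonnegative orthant in R^2 (take B = I), so that
# Lambda(1) keeps the points whose coordinates are both at least 1.
identity = [[1.0, 0.0], [0.0, 1.0]]
```

For non-identity bases the same test applies verbatim, with `Binv_rows` the rows of \((B^k)^{-1}\).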
Lemma 1
[23, Theorem 2.9] Consider the mixed-integer programming problem
$$\begin{aligned} v(\omega ,x) = \min _y \{qy: Wy = \omega - Tx,\ y \in Y\}, \end{aligned}$$
where W is an integer matrix, and \(v(\omega ,x)\) is finite for all \(\omega \in {\mathbb {R}}^m\) and \(x \in {\mathbb {R}}^n\). Then, there exist dual feasible basis matrices \(B^k\) of \(v_{\text {LP}}\), \(k = 1,\dots ,K\), closed convex polyhedral cones \(\varLambda ^k := \{t\in {\mathbb {R}}^m: (B^k)^{-1}t \ge 0 \}\), distances \(d_k\), \(B^k\)-periodic functions \(\psi ^k\), and constants \(w_k\) such that we have the following:

(i)
\(\displaystyle \bigcup _{k = 1}^K \varLambda ^k = \mathbb {R}^m\).

(ii)
\((int \ \varLambda ^k) \cap (int \ \varLambda ^l) = \emptyset \) for every \(k,l \in \{1,\dots ,K\}\) with \(k \ne l\).

(iii)
If \(\omega - Tx \in \varLambda ^k(d_k)\), then
$$\begin{aligned} v(\omega ,x) = v_\text {LP}(\omega ,x)+ \psi ^k(\omega - Tx) = q_{B^k} (B^k)^{-1}(\omega - Tx) + \psi ^k(\omega - Tx) ,\end{aligned}$$where \(\psi ^l \equiv \psi ^k\) if \(q_{B^k} (B^k)^{-1} = q_{B^l} (B^l)^{-1}\).

(iv)
\(0\le \psi ^{{k}}(\omega - Tx) \le w_k\) for all \(\omega \in {\mathbb {R}}^m\) and \(x \in {\mathbb {R}}^{n_1}, \quad k = 1,\dots ,K\).
Lemma 1 (iii) shows that if \(\omega - Tx \in \varLambda ^k(d_k)\) for some \(k = 1,\dots ,K\), then \(v(\omega ,x)\) is equal to the sum of the LP-relaxation \(v_\text {LP}(\omega ,x)\) and \(\psi ^k(\omega - Tx)\). Hence, \(\psi ^k(\omega - Tx)\) can be interpreted as the additional costs resulting from the integrality restrictions on the decision variables y. For a discussion on how to obtain \(d_k\) or how to represent \(\varLambda ^k(d_k)\) using a system of linear inequalities we refer to [23].
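The asymptotic periodicity of Lemma 1 can be observed numerically on a toy one-dimensional instance of our own (not one from the paper): \(v(s) = \min \{y_1 + y_2 : y_1 - y_2 = s,\ y_1 \in {\mathbb {Z}}_+,\ y_2 \in {\mathbb {R}}_+\}\), with LP-relaxation \(v_\text {LP}(s) = |s|\). Here \(\psi (s) = v(s) - v_\text {LP}(s)\) vanishes for \(s \le 0\) and equals \(2(\lceil s \rceil - s)\) for \(s > 0\), a 1-periodic function:

```python
import math

def v(s):
    # v(s) = min{ y1 + y2 : y1 - y2 = s, y1 in Z_+, y2 in R_+ },
    # computed by enumerating the integer variable y1
    best = math.inf
    for y1 in range(0, 100):
        y2 = y1 - s
        if y2 >= 0:
            best = min(best, y1 + y2)
    return best

def v_lp(s):
    # LP-relaxation: y1 continuous as well, with optimal value |s|
    return abs(s)

def psi(s):
    # integrality gap; 1-periodic on s > 0 (the basis B = [1])
    return v(s) - v_lp(s)
```

On the cone \(s \le 0\) the basic variable is continuous and \(\psi \equiv 0\), mirroring the fact that some \(\psi ^k\) in Lemma 1 may vanish identically.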
The shifted LP-relaxation approximation
Lemma 1 shows why the second-stage value function is not convex in x. On regions of its domain it is the sum of a linear function \(q_{B^k}(B^k)^{-1}(\omega - Tx)\) and a periodic function \(\psi ^k(\omega - Tx)\). Clearly the periodic part is causing v to be nonconvex. That is why the shifted LP-relaxation is obtained by replacing this periodic part \(\psi ^k(\omega - Tx)\) by a constant \(\varGamma _k\) for every \(k = 1,\dots ,K\), with \(\varGamma _k\) defined as
$$\begin{aligned} \varGamma _k := \frac{1}{p_k} \int _{B^k[0,1)^m} \psi ^k(s)\, ds, \end{aligned}$$(7)
where \(p_k = |\det B^k|\) is the volume of the periodicity cell \(B^k[0,1)^m\). The K constants \(\varGamma _k\) can be interpreted as the averages of the periodic functions \(\psi ^k\). The shifted LP-relaxation approximation is obtained by taking the pointwise maximum over all dual feasible basis matrices \(B^k\), \(k = 1,\dots ,K\).
Definition 2
Define the shifted LP-relaxation approximation \({\tilde{Q}}\) of the MIR function Q as \({\tilde{Q}}(x) = {\mathbb {E}}_{\omega }[{\tilde{v}}(\omega ,x)]\), where
$$\begin{aligned} {\tilde{v}}(\omega ,x) = \max _{k = 1,\dots ,K} \left\{ q_{B^k}(B^k)^{-1}(\omega - Tx) + \varGamma _k \right\} . \end{aligned}$$
Here, \(B^k\), \(k =1,\dots ,K\), are the dual feasible basis matrices of Lemma 1, and \(\varGamma _k\) is defined in (7).
Romeijnders et al. [23] derive a total variation error bound on the approximation error \(\Vert Q - {\tilde{Q}}\Vert _\infty \) of the shifted LP-relaxation \({\tilde{Q}}\). This error bound is expressed in terms of the total variations of the one-dimensional conditional probability density functions (pdf) of the random vector \(\omega \).
Definition 3
Let \(f: \mathbb {R}\rightarrow \mathbb {R}\) be a real-valued function, and let \(I \subset \mathbb {R}\) be an interval. Let \(\varPi (I)\) denote the set of all finite ordered sets \(P = \{x_1,\dots ,x_{N+1}\}\) with \(x_{1}< \dots < x_{N + 1}\) in I. Then, the total variation of f on I, denoted by \(\varDelta f(I)\), is defined as
$$\begin{aligned} \varDelta f(I) := \sup _{P \in \varPi (I)} V_f(P), \end{aligned}$$
where
$$\begin{aligned} V_f(P) := \sum _{j = 1}^{N} \left| f(x_{j+1}) - f(x_j)\right| . \end{aligned}$$
We write \(\varDelta f := \varDelta f(\mathbb {R})\).
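The supremum in Definition 3 can be approximated numerically by evaluating \(V_f(P)\) on a fine uniform partition; for a unimodal density the total variation equals twice the value at the mode. A small Python sketch (the grid choices are ours):

```python
import math

def total_variation(f, lo, hi, n=20000):
    """Approximate the total variation of f on [lo, hi] by the sum of
    absolute increments over a fine uniform partition (a lower bound
    on the supremum in Definition 3 that converges as n grows)."""
    xs = [lo + i * (hi - lo) / n for i in range(n + 1)]
    return sum(abs(f(xs[i + 1]) - f(xs[i])) for i in range(n))

def normal_pdf(x, mu=0.0, sigma=1.0):
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

# For a unimodal pdf the total variation is 2 * f(mode); here the
# standard normal gives 2 / sqrt(2 * pi).
tv = total_variation(normal_pdf, -10.0, 10.0)
```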
Definition 4
For every \(i = 1,\dots ,m\) and \(x_{-i} \in {\mathbb {R}}^{m - 1}\), define the ith conditional density function \(f_i(\cdot \mid x_{-i})\) of the m-dimensional joint pdf f as
$$\begin{aligned} f_i(t \mid x_{-i}) := \frac{f(x_1,\dots ,x_{i-1},t,x_{i+1},\dots ,x_m)}{f_{-i}(x_{-i})}, \quad t \in \mathbb {R}, \end{aligned}$$
where \(f_{-i}\) is the joint pdf of the \((m - 1)\)-dimensional random vector \(\omega _{-i}\), which is equal to \(\omega \) without its ith component. Define \({\mathcal {H}}^m\) as the set of all m-dimensional joint pdfs f such that \(f_i(\cdot \mid x_{-i})\) is of bounded variation for all \(i =1, \dots ,m\) and \(x_{-i} \in {\mathbb {R}}^{m-1}\).
Theorem 1
[23, Theorem 5.1] There exists a constant \(C > 0\) such that for every continuous random vector \(\omega \) with joint pdf \(f \in {\mathcal {H}}^m\),
$$\begin{aligned} \Vert Q - {\tilde{Q}}\Vert _\infty \le C \sum _{i = 1}^m \mathbb {E}_{\omega _{-i}}\left[ \varDelta f_i(\cdot \mid \omega _{-i})\right] . \end{aligned}$$(8)
In general, the error bound in Theorem 1 may be large, in particular since the constant \(C>0\) may be large. Nevertheless, the theorem shows that if the total variations of the one-dimensional conditional pdfs are small, then \({\tilde{Q}}\) is a good approximation of Q. For example, if the components \(\omega _i\) of \(\omega \) follow independent normal distributions, with mean \(\mu _i\) and variance \(\sigma _i^2\), for \(i = 1,\dots ,m\), then the error bound in (8) simplifies to \(C' \sum _{ i =1}^m \sigma _i^{-1}\) for some \(C'>0\); see [21, Example 2] for details. Observe that the error bound goes to zero if \({\sigma _i \rightarrow \infty }\) for all \(i = 1,\dots ,m\). Thus, the error of using the shifted LP-relaxation approximation decreases if the variability of the random parameters in the model increases.
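For completeness, the simplification for the independent normal case follows from a direct total variation computation: the pdf of \(N(\mu _i, \sigma _i^2)\) is unimodal with mode \(\mu _i\), which identifies \(C' = C\sqrt{2/\pi }\):

```latex
% omega_i ~ N(mu_i, sigma_i^2) independent, so f_i(. | omega_{-i}) is the
% N(mu_i, sigma_i^2) density: unimodal with mode mu_i, hence
\Delta f_i = 2 f_i(\mu_i)
           = \frac{2}{\sigma_i \sqrt{2\pi}}
           = \sqrt{\frac{2}{\pi}}\, \sigma_i^{-1},
\qquad \text{so} \qquad
C \sum_{i=1}^m \mathbb{E}_{\omega_{-i}}\!\left[\Delta f_i(\cdot \mid \omega_{-i})\right]
  = C \sqrt{\frac{2}{\pi}} \sum_{i=1}^m \sigma_i^{-1}
  =: C' \sum_{i=1}^m \sigma_i^{-1}.
```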
Loose Benders decomposition algorithm for two-stage mixed-integer recourse models
Generalized \(\alpha \)-approximations
To derive the generalized \(\alpha \)-approximations \({\hat{Q}_\alpha }\), we first derive a convex approximation \({\hat{v}_\alpha }\) of the second-stage value function v defined in (5), and we define \({\hat{Q}_\alpha }(x) := {\mathbb {E}}_\omega [{\hat{v}_\alpha }(\omega ,x)]\). Similarly to the shifted LP-relaxation \({\tilde{v}}\), we use Lemma 1, i.e. for \(\omega - Tx \in \varLambda ^k(d_k)\),
$$\begin{aligned} v(\omega ,x) = q_{B^k} (B^k)^{-1}(\omega - Tx) + \psi ^k(\omega - Tx). \end{aligned}$$
However, instead of replacing \(\psi ^k(\omega - Tx)\) by its average \(\varGamma _k\), as is done for the shifted LP-relaxation, we replace \(\psi ^k(\omega - Tx)\) by \(\psi ^k(\omega - \alpha )\) for some \(\alpha \in {\mathbb {R}}^m\) to obtain the generalized \(\alpha \)-approximation \({\hat{v}_\alpha ^k}\) of \(v_{B^k}\) defined as
$$\begin{aligned} {\hat{v}}_\alpha ^k(\omega ,x) := q_{B^k} (B^k)^{-1}(\omega - Tx) + \psi ^k(\omega - \alpha ), \end{aligned}$$
and we define the generalized \(\alpha \)-approximation \({\hat{v}_\alpha }\) of v as
$$\begin{aligned} {\hat{v}}_\alpha (\omega ,x) := \max _{k = 1,\dots ,K} {\hat{v}}_\alpha ^k(\omega ,x). \end{aligned}$$
The difference compared to the shifted LP-relaxation seems small: the constants \(\varGamma _k\) are replaced by \(\psi ^k(\omega - \alpha )\). From a computational point of view, however, this difference is significant. This is because the constants \(\varGamma _k\) are the averages over \(\psi ^k\), and in general need to be obtained by computing a multidimensional integral of a mixed-integer value function. For a fixed \(\omega \) and \(\alpha \), however, the value of \(\psi ^k(\omega - \alpha )\) is obtained by solving a single mixed-integer programming problem of the same size as the second-stage problem. In fact, we need to solve the Gomory relaxation discussed in Sect. 2.1, which can be done in polynomial time if all second-stage variables are integer [11].
Definition 5
For \(\alpha \in {\mathbb {R}}^m\), we define the generalized \(\alpha \)-approximation \({\hat{Q}_\alpha }\) of Q as
$$\begin{aligned} {\hat{Q}}_\alpha (x) := \mathbb {E}_\omega \left[ \max _{k = 1,\dots ,K} \left\{ \lambda ^k (\omega - Tx) + \psi ^k(\omega - \alpha ) \right\} \right] , \quad x \in \mathbb {R}^{n_1}, \end{aligned}$$
with \(\lambda ^k := q_{B^k}(B^k)^{-1}\) and \(\psi ^k := \psi _{B^k}\), where \(B^k\), \(k = 1,\dots ,K\), are the dual feasible basis matrices of Lemma 1.
Remark 1
The generalized \(\alpha \)-approximations are a generalization of the \(\alpha \)-approximations defined for TU integer recourse models [26], which arise if \(y \in {\mathbb {Z}}^p_+\) and if the second-stage constraints are of the form \(Wy \ge \omega - Tx\), where W is a TU matrix. Indeed, for these models, we have \(\psi ^k(s) = \lambda ^k(\lceil s \rceil - s)\), see [23], and thus
$$\begin{aligned} {\hat{v}}_\alpha (\omega ,x) = \max _{k = 1,\dots ,K} \lambda ^k \left( \lceil \omega - \alpha \rceil + \alpha - Tx \right) , \end{aligned}$$
which is the expression for the \(\alpha \)-approximations for TU integer recourse models in [26].
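The algebra behind this simplification is the cancellation of \(\omega \) in the kth term: \(\lambda ^k(\omega - Tx) + \lambda ^k\big (\lceil \omega - \alpha \rceil - (\omega - \alpha )\big ) = \lambda ^k\big (\lceil \omega - \alpha \rceil + \alpha - Tx\big )\). A quick one-dimensional numerical check (function names are ours):

```python
import math

def term_via_psi(lam, omega, x, alpha):
    # lam * (omega - x) + psi(omega - alpha), with psi(s) = lam * (ceil(s) - s)
    s = omega - alpha
    return lam * (omega - x) + lam * (math.ceil(s) - s)

def term_collapsed(lam, omega, x, alpha):
    # the same quantity after cancelling omega:
    # lam * (ceil(omega - alpha) + alpha - x)
    return lam * (math.ceil(omega - alpha) + alpha - x)
```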
It turns out that we can derive a similar error bound for the generalized \(\alpha \)-approximations as for the shifted LP-relaxation.
Theorem 2
There exists a constant \(C > 0\), such that for every \(\alpha \in {\mathbb {R}}^m\) and for every continuous random vector \(\omega \) with joint pdf \(f \in {\mathcal {H}}^m\),
$$\begin{aligned} \Vert Q - {\hat{Q}}_\alpha \Vert _\infty \le C \sum _{i = 1}^m \mathbb {E}_{\omega _{-i}}\left[ \varDelta f_i(\cdot \mid \omega _{-i})\right] . \end{aligned}$$
Proof
See “Appendix”. \(\square \)
Theorem 2 states that the approximation error of \({\hat{Q}_\alpha }\) goes to zero as the total variations \(\mathbb {E}_{\omega _{-i}}\left[ \varDelta f_i(\cdot \mid \omega _{-i})\right] \), \(i = 1,\dots ,m\), all go to zero. This error bound is independent of \(\alpha \), i.e. \({\hat{Q}_\alpha }\) is a good approximation of the true MIR function Q for any \(\alpha \in {\mathbb {R}}^m\).
An interesting difference between the generalized \(\alpha \)-approximations and the shifted LP-relaxation of Sect. 2.2 is that the approximating value function \({\hat{v}_\alpha }\) is not convex in \(\omega \) for fixed \(x\in {\mathbb {R}}^{n_1}\). Indeed, \({\hat{v}_\alpha }\) is only convex in x for every fixed \(\omega \), but this is sufficient to guarantee that the generalized \(\alpha \)-approximation \({\hat{Q}_\alpha }\) is convex. In contrast, the value function \({\tilde{v}}\) of the shifted LP-relaxation is convex in both \(\omega \) and x. We illustrate these properties in Example 1. In this example, we also illustrate the difference between the generalized \(\alpha \)-approximations and the pseudo-valid cutting plane approximation of [21].
Example 1
Consider the second-stage mixed-integer value function v defined as
We use this example, since it also appears in [23] and [21], allowing us to compare the generalized \(\alpha \)-approximations to the shifted LP-relaxation and a pseudo-valid cutting plane approximation.
The LP-relaxation of v has two dual feasible basis matrices \(B^1 = [1]\) and \(B^2 = [-1]\). Thus, \(K = 2\), and straightforward computations yield \(\lambda ^1 = 2\), \(\lambda ^2 = -1\), \(\psi ^1 \equiv 0\), and for every \(s \in {\mathbb {R}}\),
It follows that \(v_\text {LP}(\omega ,x) = \max \{2(\omega - x), -(\omega - x)\}\). Moreover, for every \(\alpha \in {\mathbb {R}}\), the approximating value function is defined as
$$\begin{aligned} {\hat{v}}_\alpha (\omega ,x) = \max \{2(\omega - x),\ -(\omega - x) + \psi ^2(\omega - \alpha )\}. \end{aligned}$$
In contrast, the value function \({\tilde{v}}\) of the shifted LP-relaxation equals
$$\begin{aligned} {\tilde{v}}(\omega ,x) = \max \{2(\omega - x),\ -(\omega - x) + 3/8\}, \end{aligned}$$
since \(\varGamma ^2 := \int _0^1 \psi ^2(s)ds = 3/8\). Finally, the tight pseudo-valid cutting plane approximation \({\hat{v}}_c\) is given by
Figure 1a, b show \(v(\omega ,x)\), \({\hat{v}_\alpha }(\omega ,x)\), \({\tilde{v}}(\omega ,x)\), and \({\hat{v}}_c(\omega ,x)\) as a function of x and \(\omega \), respectively. They illustrate that \({\hat{v}_\alpha }\) is convex in x for fixed \(\omega \), but not convex in \(\omega \) for fixed x. Moreover, Fig. 1b illustrates a key difference between \({\tilde{v}}(\omega , x)\) and \({\hat{v}_\alpha }(\omega ,x)\). Whereas \({\tilde{v}}(\omega ,x)\) is obtained by replacing the periodic part \(\psi ^2(\omega - x)\) of \(v(\omega ,x)\) by its average \(\varGamma ^2\), we obtain \({\hat{v}_\alpha }(\omega ,x)\) by shifting \(\psi ^2(\omega - x)\) towards \(\psi ^2(\omega - \alpha )\).
Finally, Fig. 1a shows that \({\hat{v}_\alpha }(\omega ,x) = {\hat{v}}_c(\omega ,x)\) for a range of x values. However, the difference \({\hat{v}}_c(\omega , x) - {\hat{v}_\alpha }(\omega , x)\) is relatively large if \(x \in (0.3,0.7)\). This is true in general for tight pseudo-valid cutting plane approximations: there always exist \({\hat{d}}_k\) such that \({\hat{v}_\alpha }(\omega ,x) = {\hat{v}}_c(\omega ,x)\) if \(\omega - Tx \in \varLambda ^k({\hat{d}}_k)\). However, the two approximations may differ if \(\displaystyle \omega - Tx \notin \bigcup _{k} \varLambda ^k({\hat{d}}_k).\) \(\lozenge \)
Benders decomposition for the generalized \(\alpha \)-approximations
We solve the approximating problem
$$\begin{aligned} \min _x \{cx + {\hat{Q}}_\alpha (x): Ax = b,\ x \in X \} \end{aligned}$$
using an SAA of \({\hat{Q}_\alpha }\). Given a sample \(\{\omega ^1,\dots ,\omega ^S\}\) of size S from the distribution of \(\omega \), the SAA \(\hat{Q}_\alpha ^S\) of \({\hat{Q}_\alpha }\) is defined as
$$\begin{aligned} \hat{Q}_\alpha ^S(x) := \frac{1}{S} \sum _{s = 1}^S {\hat{v}}_\alpha (\omega ^s,x), \quad x \in \mathbb {R}^{n_1}, \end{aligned}$$
and the corresponding SAA problem as
$$\begin{aligned} {\hat{\eta }}_\alpha ^S := \min _x \{cx + \hat{Q}_\alpha ^S(x): Ax = b,\ x \in X \}. \end{aligned}$$(11)
Since \(\hat{Q}_\alpha ^S\) is convex, we can solve the SAA problem using Benders decomposition [6], i.e. using an L-shaped algorithm [32]. In iteration \(\tau \) of this algorithm, we solve (11) with \(\hat{Q}_\alpha ^S\) replaced by a convex polyhedral outer approximation \(\hat{Q}_{\text {out}}^{\tau }\le \hat{Q}_\alpha ^S\). This problem is called the master problem
$$\begin{aligned} \min _x \{cx + \hat{Q}_{\text {out}}^{\tau }(x): Ax = b,\ x \in X \}, \end{aligned}$$
and its optimal solution \(x^{\tau }\) is referred to as the current solution at iteration \(\tau \). If \(\hat{Q}_{\text {out}}^{\tau }(x^\tau ) = \hat{Q}_\alpha ^S(x^{\tau })\), then the current solution \(x^{\tau }\) is optimal for the original SAA problem in (11). If not, then we strengthen the outer approximation using an optimality cut for \(\hat{Q}_\alpha ^S\) of the form
$$\begin{aligned} \hat{Q}_\alpha ^S(x) \ge \beta _{\tau + 1} x + \delta _{\tau + 1}, \quad x \in \mathbb {R}^{n_1}, \end{aligned}$$
which is tight at \(x^{\tau }\), i.e. \(\hat{Q}_\alpha ^S(x^\tau ) = \beta _{\tau + 1} x^\tau + \delta _{\tau +1}\). The outer approximation \(\hat{Q}_{\text {out}}^{\tau }\) in iteration \(\tau \) is the pointwise maximum over all optimality cuts derived in previous iterations:
$$\begin{aligned} \hat{Q}_{\text {out}}^{\tau }(x) = \max _{t = 1,\dots ,\tau } \left\{ \beta _t x + \delta _t \right\} , \quad x \in \mathbb {R}^{n_1}. \end{aligned}$$
The challenge for the generalized \(\alpha \)-approximations, however, is to compute tight optimality cuts of \(\hat{Q}_\alpha ^S\). Since
$$\begin{aligned} \hat{Q}_\alpha ^S(x) = \frac{1}{S} \sum _{s = 1}^S \max _{k = 1,\dots ,K} \left\{ \lambda ^k(\omega ^s - Tx) + \psi ^k(\omega ^s - \alpha ) \right\} , \end{aligned}$$
such tight optimality cuts can be obtained by identifying for each scenario s the maximizing index \(k_s^{\tau }\) at \(x^\tau \), defined as
$$\begin{aligned} k_s^{\tau } \in \mathop {\hbox {arg max}}\limits _{k = 1,\dots ,K} \left\{ \lambda ^k(\omega ^s - Tx^\tau ) + \psi ^k(\omega ^s - \alpha ) \right\} . \end{aligned}$$(13)
However, this is computationally too expensive, since we need to compute \(\psi ^k(\omega ^s - \alpha )\) for all \(k= 1,\dots ,K\), and K grows exponentially in the size of the second-stage problem. Indeed, K is at least as large as the number of vertices of the dual feasible region of the LP-relaxation \(v_\text {LP}\) of v, of which there are exponentially many.
Loose Benders decomposition for the generalized \(\alpha \)-approximations
To overcome this computational challenge we propose to use approximate indices \(\hat{k}_s^{\tau }\) defined as
$$\begin{aligned} \hat{k}_s^{\tau } \in \mathop {\hbox {arg max}}\limits _{k = 1,\dots ,K} \lambda ^k(\omega ^s - Tx^\tau ). \end{aligned}$$(14)
These indices are computationally tractable since they correspond to the index of an optimal basis matrix of the LP-relaxation \(v_\text {LP}(\omega ^s, x^\tau )\), defined in (5). Hence, they can be obtained by solving a single LP. Moreover, if the values of \(\psi ^k(\omega ^s - \alpha )\) are equal for all \(k = 1,\dots ,K\), then the indices \(\hat{k}_s^{\tau }\) are optimal in (13), i.e. \(\hat{k}_s^{\tau }= k_s^{\tau }\). In general, however, the indices \(\hat{k}_s^{\tau }\) are suboptimal in (13), leading to the following definition.
Definition 6
Let \(x^\tau \) be given and let \(\hat{k}_s^{\tau }\), \(s = 1,\dots ,S\), be as in (14). We define the loose optimality cut for \(\hat{Q}_\alpha ^S\) at \(x^\tau \) as
$$\begin{aligned} \hat{Q}_\alpha ^S(x) \ge \frac{1}{S} \sum _{s = 1}^S \left\{ \lambda ^{\hat{k}_s^{\tau }}(\omega ^s - Tx) + \psi ^{\hat{k}_s^{\tau }}(\omega ^s - \alpha ) \right\} , \quad x \in \mathbb {R}^{n_1}. \end{aligned}$$
We use these loose optimality cuts in our loose Benders decomposition algorithm, LBDA(\(\alpha \)). In this algorithm the outer approximation \(\hat{Q}_{\text {out}}^{\tau }\) of \(\hat{Q}_\alpha ^S\) is defined using our loose optimality cuts. The algorithm terminates with tolerance level \(\varepsilon \ge 0\) if \(\hat{Q}_{\text {out}}^{\tau }(x^\tau ) \ge \hat{Q}_{\text {out}}^{\tau + 1}(x^\tau ) - \varepsilon \); then, LBDA(\(\alpha \)) reports \({\hat{x}_\alpha }= x^\tau \) as solution.
A full description of LBDA(\(\alpha \)) is given below. Note that the algorithm requires a lower bound L on \(\hat{Q}_\alpha ^S\) such that \(\hat{Q}_\alpha ^S(x) \ge L\) for all \(x \in {\mathcal {X}}\). Such a lower bound L exists because of Assumptions (A1)–(A4).
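The structure of the main loop can be sketched on a hypothetical one-dimensional instance (the slopes, the sawtooth \(\psi \), the grid-search master, and the crude lower bound L are illustrative assumptions, not the paper's setup). Two points carry over from the text: each loose cut selects a single index per scenario inside the maximum and is therefore a valid affine lower bound on \(\hat{Q}_\alpha ^S\), and the loop stops once a new cut improves the outer approximation at \(x^\tau \) by at most \(\varepsilon \):

```python
import math

LAM = (2.0, -1.0)   # hypothetical LP slopes lambda^k, k = 1, 2
ALPHA = 0.0

def psi(k, s):
    # psi^1 == 0; psi^2 a 1-periodic sawtooth (illustrative assumption)
    return 0.0 if k == 0 else 1.5 * (math.ceil(s) - s)

def v_hat(omega, x):
    # generalized alpha-approximation of the second-stage value function
    return max(LAM[k] * (omega - x) + psi(k, omega - ALPHA) for k in (0, 1))

def loose_cut(omega, x):
    # approximate index: maximizer of the LP part only (cf. (14));
    # any single k yields a valid affine-in-x lower bound on v_hat
    k = max((0, 1), key=lambda j: LAM[j] * (omega - x))
    return -LAM[k], LAM[k] * omega + psi(k, omega - ALPHA)  # slope, intercept

def lbda(c, scenarios, x_lo, x_hi, L=-100.0, eps=1e-6, max_iter=50):
    cuts = []  # aggregated loose optimality cuts (slope, intercept)
    x = x_lo
    for _ in range(max_iter):
        def outer(z):  # polyhedral outer approximation of Q_hat_alpha^S
            return max([L] + [a * z + b for a, b in cuts])
        # master problem: 1-D, so a fine grid stands in for an LP solver
        grid = [x_lo + i * (x_hi - x_lo) / 2000 for i in range(2001)]
        x = min(grid, key=lambda z: c * z + outer(z))
        S = len(scenarios)
        a = sum(loose_cut(w, x)[0] for w in scenarios) / S
        b = sum(loose_cut(w, x)[1] for w in scenarios) / S
        if a * x + b <= outer(x) + eps:  # new cut does not improve at x
            return x
        cuts.append((a, b))
    return x

x_hat = lbda(c=0.5, scenarios=[0.3, 1.7, 2.5], x_lo=0.0, x_hi=5.0)
```

In the real algorithm the approximate index comes from solving the LP-relaxation of the second-stage problem and \(\psi \) from a Gomory relaxation; the toy closed forms above stand in for both.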
The solution \({\hat{x}_\alpha }\) of LBDA(\(\alpha \)) is not necessarily \(\varepsilon \)-optimal for the SAA problem in (11), since LBDA(\(\alpha \)) uses the loose optimality cuts of Definition 6. If the optimality cuts were tight, then \(\hat{Q}_{\text {out}}^{\tau + 1}(x^\tau ) = \hat{Q}_\alpha ^S(x^\tau )\) for every iteration \(\tau \), and thus the algorithm would terminate if \(\hat{Q}_{\text {out}}^{\tau }(x^\tau ) \ge \hat{Q}_\alpha ^S(x^\tau ) - \varepsilon \). We, however, show that our loose optimality cuts are asymptotically tight at \(x^\tau \): the difference \(\hat{Q}_{\text {out}}^{\tau + 1}(x^\tau ) - \hat{Q}_\alpha ^S(x^\tau )\) converges to zero if the total variations of the one-dimensional conditional pdfs of the random vector \(\omega \) go to zero and as \(S \rightarrow \infty \), see Proposition 1. In this case, we are able to prove that the LBDA(\(\alpha \)) solution \({\hat{x}}_\alpha \) is near-optimal; see Theorem 3 below for a bound on the optimality gap \(c{\hat{x}}_\alpha + Q({\hat{x}}_\alpha ) - \eta ^*\). This performance guarantee is independent of \(\alpha \), and thus it applies if we take e.g. \(\alpha = 0\). We further explore selection of \(\alpha \) in Sect. 5. The proof of the performance guarantee is postponed to Sect. 4.2.
Theorem 3
Consider the two-stage mixed-integer recourse model
$$\begin{aligned} \eta ^* = \min _x \{cx + Q(x): Ax = b,\ x \in X \}. \end{aligned}$$
Let \({\hat{x}_\alpha }\) denote the solution by LBDA(\(\alpha \)) with tolerance \(\varepsilon \) and sample size S. Then, there exists a constant \(C > 0\) such that for every continuous random vector \(\omega \) with joint pdf \(f \in {\mathcal {H}}^m\)
$$\begin{aligned} c{\hat{x}}_\alpha + Q({\hat{x}}_\alpha ) - \eta ^* \le \varepsilon + C \sum _{i = 1}^m \mathbb {E}_{\omega _{-i}}\left[ \varDelta f_i(\cdot \mid \omega _{-i})\right] \end{aligned}$$
w.p. \(1\) as \(S \rightarrow \infty \).
Theorem 3 implies that the optimality gap of \({\hat{x}_\alpha }\) converges to the prespecified tolerance \(\varepsilon \) as the total variations of the underlying one-dimensional conditional pdfs go to zero.
Implementation details of LBDA(\(\alpha \))
LBDA(\(\alpha \)) can be implemented efficiently if the input size of the second-stage problem \(v(\omega ,x)\) is moderate. During each iteration \(\tau \), we have to solve the LP-relaxation \(v_\text {LP}(\omega ^s,x^\tau )\) and the Gomory relaxation \(v_{B^{\hat{k}_s^{\tau }}}(\omega ^s - \alpha , 0)\) for each scenario \(s = 1,\dots , S\), in order to generate a loose optimality cut. If the input size of \(v(\omega ,x)\) is not too large, then \(v_\text {LP}(\omega ^s, x^\tau )\) and \(v_{B^{\hat{k}_s^{\tau }}}(\omega ^s - \alpha , 0)\) can be solved in reasonable time using standard LP and MIP solvers, respectively. Moreover, the master problem can be solved efficiently using a standard LP solver, or MIP solver if some of the first-stage decision variables are integer. Improved implementations of LBDA(\(\alpha \)) using a multi-cut approach [7] and regularization techniques [27] are possible. Furthermore, the subproblems \(v_{B^{\hat{k}_s^{\tau }}}(\omega ^s - \alpha , 0)\), \(s = 1,\dots ,S\), can be solved in parallel, and it may be beneficial to solve these subproblems inexactly in the first phase of the algorithm.
In Sect. 5, we exploit that LBDA(\(\alpha \)) can be run multiple times, using different values of \(\alpha \). The performance guarantee of LBDA(\(\alpha \)) is independent of \(\alpha \) and thus applies to every candidate solution that we obtain in this way. Moreover, we use in-sample and out-of-sample evaluation to select the best candidate solution. In our numerical experiments, we investigate several schemes for selecting the values of \(\alpha \). Running LBDA(\(\alpha \)) multiple times can be done very efficiently by using parallelization and a common warm start. That is, we first apply the L-shaped algorithm of [32] to the LP-relaxation of the original problem (1), in which the integer restrictions on the second-stage variables y are relaxed. Next, we run LBDA(\(\alpha \)) for each value of \(\alpha \), and we keep the optimality cuts generated for the LP-relaxation \(Q_\text {LP}\) of Q, which are also valid for \({\hat{Q}_\alpha }\), for any value of \(\alpha \).
Performance guarantee of LBDA(\(\alpha \))
Convergence of sampling and loose optimality cuts
The performance guarantee of LBDA(\(\alpha \)) does not follow directly from our error bound for the generalized \(\alpha \)-approximations. The reason is that LBDA(\(\alpha \)) uses sampling and the loose optimality cuts of Definition 6 to solve the corresponding approximating problem. We consider these aspects in Sects. 4.1.1 and 4.1.2, respectively. In particular, we prove consistency of the SAA and asymptotic tightness of our loose optimality cuts.
Consistency of the sample average approximation
Intuitively, the SAA becomes better as the sample size S increases. Indeed, we show in Lemma 2 that the SAA \(\hat{Q}_\alpha ^S\) of \({\hat{Q}_\alpha }\) converges uniformly to \({\hat{Q}_\alpha }\) on \({\mathcal {X}}\) w.p. \(1\) as \(S \rightarrow \infty \).
Lemma 2
Consider the generalized \(\alpha \)-approximation \({\hat{Q}_\alpha }\) and its sample average approximation \(\hat{Q}_\alpha ^S\). Then,
\[ \sup _{x \in {\mathcal {X}}} \left| \hat{Q}_\alpha ^S(x) - {\hat{Q}_\alpha }(x) \right| \rightarrow 0 \]
w.p. \(1\) as \(S \rightarrow \infty \), where \({\mathcal {X}} = \{x \in {\mathbb {R}}^{n_1}_+ :Ax \le b \}\).
Proof
See “Appendix”. \(\square \)
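The uniform-convergence behaviour in Lemma 2 can be illustrated numerically. The sketch below uses an arbitrary piecewise-linear toy function in place of the model's actual scenario value function, a finite grid in place of \({\mathcal {X}}\), and a large reference sample as a proxy for the true expectation; all of these are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def v_toy(omega, x):
    # Arbitrary piecewise-linear stand-in for the scenario value function.
    return np.maximum(omega - x, 0.0) + 0.5 * np.maximum(x - omega, 0.0)

grid = np.linspace(5.0, 15.0, 50)          # finite grid standing in for X
ref = rng.normal(10.0, 2.0, size=200_000)  # large sample as proxy for E[.]
q_ref = np.array([v_toy(ref, x).mean() for x in grid])

gaps = []
for S in (10, 100, 1_000, 10_000):
    sample = rng.normal(10.0, 2.0, size=S)
    q_saa = np.array([v_toy(sample, x).mean() for x in grid])
    gaps.append(np.max(np.abs(q_saa - q_ref)))  # sup-gap over the grid
```

The sup-gap over the grid shrinks roughly at the Monte Carlo rate \(O(1/\sqrt{S})\), consistent with the uniform convergence w.p. 1 stated in the lemma.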
Corollary 1
\(\displaystyle {\sup _{x \in {\mathcal {X}}} \left| Q(x) - \hat{Q}_\alpha ^S(x) \right| \le C \sum _{i = 1}^m \mathbb {E}_{\omega _{-i}}\left[ \varDelta f_i(\cdot \mid \omega _{-i})\right] }\) w.p. 1 as \({S \rightarrow \infty }\).
Proof
Since
\[ Q(x) - \hat{Q}_\alpha ^S(x) = \left( Q(x) - {\hat{Q}_\alpha }(x) \right) + \left( {\hat{Q}_\alpha }(x) - \hat{Q}_\alpha ^S(x) \right) , \]
the result follows directly by combining Theorem 2 and Lemma 2. \(\square \)
Corollary 1 implies a similar error bound on the difference between the optimal values \(\eta ^*\) and \({\hat{\eta }}^S_\alpha \) of the original MIR problem (1) and the SAA problem (11), respectively, see Corollary 2.
Corollary 2
Consider the optimal values \(\eta ^*\) and \({\hat{\eta }}_\alpha ^S\) of the MIR problem (1) and the SAA problem (11). Then,
w.p. \(1\) as \(S \rightarrow \infty \).
Proof
Since the optimal solution \(x^*\) of (1) is feasible but not necessarily optimal in (11), we have
The result follows directly from Corollary 1. \(\square \)
Asymptotic tightness of loose optimality cuts
If we derive a loose optimality cut \(\hat{Q}_\alpha ^S(x) \ge \beta _{\tau + 1} x+ \delta _{\tau + 1}\) for \(\hat{Q}_\alpha ^S\) at \(x^\tau \) in LBDA(\(\alpha \)), then the gap \(\hat{Q}_\alpha ^S(x^\tau ) - (\beta _{\tau + 1} x^\tau + \delta _{\tau + 1})\) may be positive, since the cut is not necessarily tight at \(x^\tau \). However, we derive a bound on this gap by considering the index function
Note that \({\hat{k}}(\omega ^s, x^\tau ) = \hat{k}_s^{\tau }\), i.e. the index function \({\hat{k}}(\omega ,x)\) is used to derive our loose optimality cuts. Based on this index function, we define the lower bounding function \({L_{\alpha }^S}\) of \(\hat{Q}_\alpha ^S\), given by
and we note that \({L_{\alpha }^S}(x^\tau )\) is the value of the loose optimality cut at \(x^\tau \), i.e. \({L_{\alpha }^S}(x^\tau ) = \beta _{\tau + 1} x^\tau + \delta _{\tau + 1}\). The function \({L_{\alpha }^S}\) is a lower bound of \(\hat{Q}_\alpha ^S\) since its corresponding value function, defined as
is a lower bound of \({\hat{v}_\alpha }(\omega ,x)\) for all \(\omega \in \varOmega \) and \(x \in X\).
Proposition 1 contains a uniform total variation bound on the difference between \({L_{\alpha }^S}\) and \(\hat{Q}_\alpha ^S\). In order to derive this result, we first analyze the difference between \({\nu _\alpha }\) and \({\hat{v}_\alpha }\). It turns out that \({\nu _\alpha }\) equals \({\hat{v}_\alpha }\) on large parts of the domain, see Lemma 3.
Lemma 3
Consider the value functions \({\nu _\alpha }\) and \({\hat{v}_\alpha }\), with \({\nu _\alpha }\) defined in (16) and \({\hat{v}_\alpha }\) defined as
Let \(\varLambda ^k\), \(k = 1,\dots ,K\), denote the closed convex cones from Lemma 1. Then, there exist vectors \(\sigma _k \in \varLambda ^k\) and a constant \(R > 0\) such that

(i) \(0 \le {\hat{v}_\alpha }(\omega ,x) - {\nu _\alpha }(\omega ,x) \le R\) for all \(\omega \in {\mathbb {R}}^m\) and \(x \in {\mathbb {R}}^{n_1}\), and

(ii) \(\omega - Tx \in \sigma _k + \varLambda ^k \implies {{\nu _\alpha }(\omega ,x) =\;} {\hat{v}_\alpha }(\omega ,x) = \lambda ^k(\omega - Tx) + \psi ^k(\omega - \alpha )\).
Proof
See “Appendix”. \(\square \)
Proposition 1
Consider the SAA \(\hat{Q}_\alpha ^S\) of the generalized \(\alpha \)-approximation and its lower bound \({L_{\alpha }^S}\), defined in (15). There exists a constant \(C > 0\) such that for every continuous random vector \(\omega \) with joint pdf \(f \in {\mathcal {H}}^m\),
w.p. \(1\) as \(S \rightarrow \infty \).
Proof
Define \(\varDelta (\omega ,x) := {\hat{v}_\alpha }(\omega ,x) - {\nu _\alpha }(\omega ,x)\), so that
We derive an upper bound on \(\varDelta (\omega ,x)\), independent of x. We then apply the strong law of large numbers (SLLN) to obtain the desired result.
If we define \({\mathcal {M}} = \cup _{k = 1}^K (\sigma _k + \varLambda ^k)\), where \(\sigma _k\) and \(\varLambda ^k\), \(k = 1,\dots ,K\), are the vectors and closed convex cones from Lemma 3, then
where R is the upper bound on \({\hat{v}_\alpha }(\omega ,x) - {\nu _\alpha }(\omega ,x)\) from Lemma 3. Moreover, we can derive a bound on \({\mathbb {P}}[\omega - Tx \in {\mathcal {M}}]\). Unfortunately, the random variable \(\xi (\omega ,x)\) depends on x. Therefore, we cannot apply the SLLN to
if \({\mathcal {X}}\) is infinite. To resolve this, we use that \({\mathcal {X}}\) is bounded. Let D denote the diameter of \(T{{\mathcal {X}}}\), i.e., \(\Vert Tx - Tx' \Vert \le D\) for all \(x, x' \in {{\mathcal {X}}}\). Define \({\mathcal {M}}' \subset {\mathcal {M}}\) as \({\mathcal {M}}':= \bigcup _{k =1}^K (\sigma _k + \varLambda ^k)(D)\). Fix an arbitrary \({{\bar{x}}} \in {{\mathcal {X}}}\). Note that for all \(x \in {{\mathcal {X}}}\),
We obtain
Note that \(\bar{\xi }(\omega )\) depends only on the fixed \({{\bar{x}}} \in {{\mathcal {X}}}\) and is independent of x. By the SLLN,
w.p. \(1\) as \(S \rightarrow \infty \).
By [23, Lemma 3.9], \({\mathbb {R}}^m\setminus \mathcal M'\) can be covered by finitely many hyperslices, that is,
where the hyperslices \(H_j\) are defined as
for some vectors \(a_j\) and scalars \(\delta _j\). By [23, Theorem 4.6], there exists a constant \(\beta > 0\) such that
The result now follows from
w.p. \(1\) as \(S \rightarrow \infty \), where \(C = R\beta \).\(\square \)
Proposition 1 shows that our loose optimality cuts are asymptotically tight, since for every iteration \(\tau \) in LBDA(\(\alpha \)) the loose optimality cut for \(\hat{Q}_\alpha ^S\) at \(x^\tau \) is tight for \({L_{\alpha }^S}\) at \(x^\tau \). Hence, if the underlying total variations are small enough and if S is large enough, then the loose optimality cut is nearly tight for \(\hat{Q}_\alpha ^S\) at \(x^\tau \).
Error bound on the optimality gap of LBDA(\(\alpha \))
We are now ready to prove the performance guarantee of LBDA(\(\alpha \)) in Theorem 3. Intuitively, we are able to derive this bound since (i) our loose optimality cuts are asymptotically tight, and thus \({\hat{x}_\alpha }\) is near-optimal for the SAA problem with \(\hat{Q}_\alpha ^S\), (ii) the SAA \(\hat{Q}_\alpha ^S\) converges uniformly to \({\hat{Q}_\alpha }\), and (iii) Theorem 2 provides a uniform error bound on the difference between \({\hat{Q}_\alpha }\) and Q.
Proof of Theorem 3
Let \({\hat{x}_\alpha }\) denote the solution returned by LBDA(\(\alpha \)). Since \({\hat{x}_\alpha }:= x^\tau \) is the current solution in the final iteration \(\tau \) of the algorithm, it follows that \({\hat{x}_\alpha }\) is a minimizer of
Moreover, since \(\hat{Q}_{\text {out}}^{\tau }\le \hat{Q}_\alpha ^S\), it follows that \({c{\hat{x}_\alpha }+ \hat{Q}_{\text {out}}^{\tau }({\hat{x}_\alpha }) \le {\hat{\eta }}_\alpha ^S}\), and thus, by rearranging terms and adding \(\hat{Q}_\alpha ^S\) on both sides,
The righthand side of (18) represents an upper bound on the optimality gap of \({\hat{x}_\alpha }\) in the SAA problem (11). The termination criterion of LBDA(\(\alpha \)) guarantees that this upper bound is not too large. Indeed, at termination it holds that
and thus the upper bound in (18) reduces to
In the end, however, we are not interested in the optimality gap of \({\hat{x}_\alpha }\) in the SAA problem (11) but in the optimality gap
of \({\hat{x}_\alpha }\) in the original MIR problem (1). Adding and subtracting both \(\hat{Q}_\alpha ^S({\hat{x}_\alpha })\) and \({\hat{\eta }}_\alpha ^S\), and using (19), yields
Applying Corollaries 1 and 2, and Proposition 1, we conclude that there exists a constant \(C > 0\) such that
w.p. \(1\) as \(S \rightarrow \infty \). \(\square \)
The performance guarantee for LBDA(\(\alpha \)) in Theorem 3 is a worstcase bound. For many problem instances, the actual performance may be much better. In Sect. 5, we assess the performance of LBDA(\(\alpha \)) empirically on a wide range of test instances.
Numerical experiments
We test the performance of LBDA(\(\alpha \)) on problem instances from the literature and on randomly generated instances, see Sects. 5.2 and 5.3, respectively. In particular, we consider (variations of) an investment planning problem in [29], and two classes of problem instances available from SIPLIB [2], namely the SIZES problem [13] and the stochastic server location problem (SSLP) [18]. First, however, we describe the setup of our numerical experiments in Sect. 5.1.
Setup of numerical experiments
In our numerical experiments, we compare LBDA(\(\alpha \)) to several benchmark methods in terms of costs, relative optimality gaps, and computation times. Since the performance of LBDA(\(\alpha \)) depends on \(\alpha \), we investigate four different approaches to select \(\alpha \).
First, we take \(\alpha \) equal to the zero vector. Second, we take \(\alpha = \alpha ^* := Tx^*\), where \(x^*\) is the optimal solution of the original problem. Obviously, for large problem instances \(x^*\) is unknown; however, we expect \(\alpha ^*\) to be a good choice of \(\alpha \), since the generalized \(\alpha \)-approximations are obtained by replacing Tx by \(\alpha \) in the Gomory relaxations, so that \({\hat{Q}}_{\alpha ^*}\) is a good approximation of Q near the true optimal solution \(x^*\). We test this for the smaller problem instances for which \(x^*\) is known.
However, since \(\alpha ^*\) typically cannot be computed for larger problem instances, we also propose an iterative scheme, in which we first obtain \({\hat{x}}_{\alpha _0}\) by running LBDA\((\alpha _0)\), where \(\alpha _0\) is the zero vector. Next, we run LBDA\((\alpha _1)\), where \(\alpha _1 := T{\hat{x}}_{\alpha _0}\). We extend this scheme to 100 iterations by recursively defining \(\alpha _{k + 1} := T{\hat{x}}_{\alpha _k}\), \(k = 1,\dots , 100\). We then select the best value of \(\alpha \) in terms of expected costs, denoted by \(\alpha ^\#\). Finally, we apply LBDA(\(\alpha \)) multiple times using 100 different values of \(\alpha \), drawn from a multivariate uniform distribution on \([0,100]^m\), and we denote the best value of \(\alpha \) in terms of expected costs by \(\alpha ^+\).
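The two selection schemes can be sketched as follows. Here `run_lbda` is a toy stand-in for a full LBDA(\(\alpha \)) run, returning a candidate first-stage solution and its in-sample expected cost; the matrix T and the cost response are illustrative only and not part of the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)
T = np.array([[1.0, 0.5], [0.5, 1.0]])

def run_lbda(alpha):
    """Hypothetical stand-in for one LBDA(alpha) run: returns a candidate
    first-stage solution and its (in-sample) expected cost."""
    x_hat = np.clip(10.0 - 0.1 * alpha, 0.0, None)          # toy response to alpha
    cost = float(np.sum((T @ x_hat - 10.0) ** 2)) + x_hat.sum()
    return x_hat, cost

# Iterative scheme: alpha_{k+1} := T x_hat_{alpha_k}, starting from alpha_0 = 0.
alpha = np.zeros(2)
best_alpha, best_cost = alpha, np.inf
for _ in range(100):
    x_hat, cost = run_lbda(alpha)
    if cost < best_cost:
        best_alpha, best_cost = alpha.copy(), cost
    alpha = T @ x_hat

# Random scheme: 100 draws from a uniform distribution on [0, 100]^2.
for _ in range(100):
    alpha = rng.uniform(0.0, 100.0, size=2)
    _, cost = run_lbda(alpha)
    if cost < best_cost:
        best_alpha, best_cost = alpha.copy(), cost
```

In practice the individual runs are independent given the common warm start, so both schemes' runs can be executed in parallel.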
In order to compare these approaches, note that the expected costs \(cx + Q(x)\) of a candidate solution x can be computed exactly if the random vector \(\omega \) has a finite number of realizations, as is the case for the SIZES, SSLP, and investment planning problems that we consider. Therefore, if the optimal value \(\eta ^*\) is known, then the relative optimality gap \(\rho (x)\), defined as
\[ \rho (x) := \frac{cx + Q(x) - \eta ^*}{|\eta ^*|}, \]
can be computed exactly, and otherwise bounds on \(\rho (x)\) can be computed.
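For finitely many scenarios, the exact expected costs and the relative gap reduce to a weighted sum. The sketch below assumes the standard relative-gap definition \(\rho (x) = (cx + Q(x) - \eta ^*)/|\eta ^*|\); this formula is our assumption, not necessarily the paper's verbatim one.

```python
import numpy as np

def expected_cost(c, x, scenario_values, probs):
    """Exact expected cost c @ x + sum_s p_s v(omega^s, x) for finitely many
    scenarios; scenario_values[s] holds the second-stage value v(omega^s, x)."""
    return float(c @ x + np.dot(probs, scenario_values))

def relative_gap(c, x, scenario_values, probs, eta_star):
    """Relative optimality gap, assuming the standard definition
    rho(x) = (c @ x + Q(x) - eta*) / |eta*|."""
    return (expected_cost(c, x, scenario_values, probs) - eta_star) / abs(eta_star)
```

When \(\eta ^*\) is unknown, the same function can be evaluated with a lower bound on \(\eta ^*\) to obtain an upper bound on the gap.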
In contrast, for our randomly generated instances, we assume that \(\omega \) is continuously distributed. For these instances, we use the multiple replications procedure (MRP) [16] with Latin hypercube sampling [5] to obtain 95% confidence upper bounds on \(\rho (x)\). Moreover, we compare the performance of LBDA(\(\alpha \)) to a range of benchmark solutions, using out-of-sample estimation of \(cx + Q(x)\) with a sample size of \(10^5\), which guarantees that the standard errors of our results are sufficiently small. The benchmark and LBDA(\(\alpha \)) solutions are computed using a sample of size \(S = 1000\).
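A minimal sketch of Latin hypercube sampling on \([0,1]^m\); mapping the points to the model's marginal distributions via inverse CDFs is then a separate step.

```python
import numpy as np

def latin_hypercube(S, m, rng):
    """Latin hypercube sample of size S in [0, 1]^m: every dimension places
    exactly one point in each of the S equal-width strata of [0, 1]."""
    strata = np.tile(np.arange(S), (m, 1))   # shape (m, S)
    strata = rng.permuted(strata, axis=1).T  # independent shuffle per dimension
    return (strata + rng.random((S, m))) / S

rng = np.random.default_rng(7)
u = latin_hypercube(1000, 2, rng)
```

Compared to plain Monte Carlo, the stratification reduces the variance of sample-average estimates, which is why it is commonly paired with the MRP.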
The first benchmark solution \({\bar{x}}_S\) is obtained by solving the deterministic equivalent formulation (DEF) of the corresponding SAA of the original problem (1). The DEF is a large-scale MIP which, typically, cannot be solved in reasonable time by standard MIP solvers. Hence we also solve the DEF using a smaller sample size \(S' = 100\), resulting in the second benchmark solution \({\bar{x}}_{S'}\).
We obtain three additional benchmark solutions by solving the generalized \(\alpha \)-approximations exactly for \(\alpha = 0\), \(\alpha = \alpha ^*\), and \(\alpha = \alpha ^+\); that is, we find the optimal solution \(x_\alpha ^*\) of the approximating problem (4) with \({\hat{Q}} = \hat{Q}_\alpha ^S\). We do so by solving the approximating second-stage problems
by enumeration over \(k = 1,\dots ,K\). For this reason, \(x_\alpha ^*\) can only be computed in reasonable time for small problem instances.
Finally, we consider two trivial benchmark solutions, which we expect LBDA(\(\alpha \)) to outperform significantly. First, we relax the integer restrictions on the second-stage variables in the SAA of the original problem (1), resulting in the benchmark solution \(x_\text {LP}\). Second, we solve the Jensen approximation, which replaces the distribution of \(\omega \) by a degenerate distribution at \(\mu = {\mathbb {E}}_\omega [\omega ]\), and denote the optimal solution by \(x_\mu \).
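The Jensen benchmark exploits that, for a value function that is convex in \(\omega \), replacing \(\omega \) by its mean can only underestimate the expectation. A toy check, with an arbitrary convex stand-in for the value function (not the model's actual one):

```python
import numpy as np

rng = np.random.default_rng(3)

def v_convex(omega, x):
    # Toy stand-in, convex in omega (a sum of maxima of affine functions).
    return np.maximum(omega - x, 0.0).sum(axis=-1)

omega = rng.normal(10.0, 2.0, size=(100_000, 2))
x = np.array([9.0, 11.0])
q_true = v_convex(omega, x).mean()          # sample estimate of E[v(omega, x)]
q_jensen = v_convex(omega.mean(axis=0), x)  # v(E[omega], x): Jensen bound
```

By Jensen's inequality, `q_jensen` never exceeds `q_true` for a convex value function; for the non-convex mixed-integer value function this ordering can fail, which is why \(x_\mu \) is only a crude benchmark.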
We run our experiments on a single Intel Xeon E5 2680v3 core @2.5GHz with Gurobi 7.0.2. To ensure a fair comparison between solutions, we use common random numbers where possible and we limit the computation time of each algorithm to two hours.
Test instances from the literature
The SIZES problem
We first consider all instances of the SIZES test problem suite [13] from SIPLIB. These instances have mixed-binary variables in both stages, and differ in the number of scenarios, namely 3, 5, and 10. The DEF of the largest instance has 341 constraints and 825 variables, of which 110 are binary. We refer to [2] for further details. In Table 1, we report the outcomes of LBDA(\(\alpha \)) and of solving the DEF.
We observe from Table 1 that LBDA(\(\alpha \)) performs very well for all choices of \(\alpha \). Indeed, on every instance, the optimality gaps of all LBDA\((\alpha )\) solutions are below 0.5%, and below 0.2% for \(\alpha = \alpha ^+\). Moreover, LBDA\((\alpha )\) runs very fast for all instances: for a single value of \(\alpha \), the computation time of LBDA(\(\alpha \)) is always below one second.
Another observation is that LBDA\((\alpha ^\#)\) and LBDA\((\alpha ^+)\) consistently outperform LBDA(0), at the expense of additional computation time. Nevertheless, the computation times of LBDA(\(\alpha \)) for \(\alpha ^\#\) and \(\alpha ^+\) are still moderate, and they scale much better to larger instances than solving the DEF. In particular, the time taken to solve the DEF grows exponentially as the sample size S increases, whereas the computation times of LBDA(\(\alpha \)) are approximately linear in S. Finally, LBDA(\(\alpha \)) performs very well if \(\alpha = \alpha ^*\). However, since \(\alpha ^*\) is not known in practice, it is useful to observe that LBDA\((\alpha ^+)\) achieves similar performance.
The stochastic server location problem
The SSLP instances are more challenging in terms of input size than the SIZES instances. Indeed, the DEF of the largest instance has over 1,000,000 binary decision variables and 120,000 constraints. The first- and second-stage problems are pure binary and mixed-binary, respectively, and \(\omega \) follows a discrete distribution. A full problem description can be found in [18], in which the instances are solved using the \(D^2\) algorithm of [31]. See also [1] and [12] for more recent computational experiments on these test instances using other exact approaches. In Table 2 we report the best known running time for each SSLP instance over all these exact approaches, along with the outcomes of LBDA(\(\alpha \)).
Strikingly, LBDA(\(\alpha \)) was able to solve all instances to optimality for \({\alpha = \alpha ^+}\) and \({\alpha = \alpha ^*}\). Moreover, LBDA(0) and LBDA\((\alpha ^\#)\) solved all instances except SSLP_15_45_5, on which both achieved an optimality gap of 0.45%. In terms of computation time, LBDA(0) is clearly preferred to LBDA\((\alpha ^\#)\) and LBDA\((\alpha ^+)\), while achieving similar results. Finally, although directly comparing LBDA(\(\alpha \)) to the other approaches is not completely fair since the algorithms were run on different machines, it is clear that LBDA(\(\alpha \)) is generally faster than exact approaches. Indeed, LBDA(0) solved all instances in under one minute, and eight out of ten instances were solved in less than ten seconds, whereas the fastest exact approach required at least one minute for six out of ten instances, and over one hour for the largest instance.
An investment planning problem
We consider the following problem by Schultz et al. [29],
where
and where the second-stage decision variables are binary, i.e. \(Y = \{0,1\}^4\), and the random vector \(\omega = (\omega _1, \omega _2)\) follows a discrete distribution which assigns equal probabilities to \(S = 441\) equidistant lattice points of \([5,15]^2\). Schultz et al. consider a second variant of this problem by choosing the technology matrix T as
whereas in the original formulation, T is the identity matrix \(I_2\). For both variants, we consider \(S \in \{ 4, 9, 36, 121, 441, 1681, 10201 \}\) and \(Y = {\mathbb {Z}}^4_+\), in addition to \(Y = \{0,1\}^4\), as is done in [19]. For each of the resulting 28 instances, Table 3 shows the results of LBDA(\(\alpha \)) and solving the DEF. Note that if Gurobi could not solve an instance within two hours, then we report the gaps relative to \({\hat{x}}^*\), the best solution that Gurobi was able to find.
Overall, LBDA(\(\alpha \)) performs well on the instances in Table 3. In particular, for \(\alpha = \alpha ^\#\) and \(\alpha = \alpha ^+\), LBDA(\(\alpha \)) achieves gaps that are below 2% on 22 and 23 out of 28 instances, respectively. In general, LBDA(\(\alpha \)) achieves better results if S is larger. For example, if \(Y = \{0,1\}^4\) and \(T = H\), then the gaps are strictly decreasing in S. This is in line with the performance guarantee of LBDA(\(\alpha \)) in Theorem 3: if S is larger, then the distributions of \(\omega _1\) and \(\omega _2\) more closely resemble a continuous uniform distribution on [5, 15], which has small total variation, i.e. the error bound in Theorem 3 is small.
On some instances, LBDA(\(\alpha \)) does not consistently perform well for all values of \(\alpha \). For example, if \(Y = {\mathbb {Z}}^4_+\), \(T = H\), and \(S = 9\), then LBDA\((\alpha ^+)\) and LBDA(0) achieve gaps of 0.1% and 12.9%, respectively. Furthermore, the instances with \(Y = \{0,1\}^4\), \(T = H\), and \(S \in \{4, 9\}\) turn out to be difficult instances for LBDA(\(\alpha \)): for all choices of \(\alpha \), the resulting gaps are \(8.8\%\) and \(5\%\), respectively.
In fact, for every choice of \(\alpha \), LBDA(\(\alpha \)) achieves identical gaps if \({Y = \{0,1\}^4}\) and \(T = H\), but the gaps are much smaller if \(S \ge 36\); e.g., if \(S = 10201\), then the gaps are 0.1%. In contrast, there are large differences between the different choices of \(\alpha \) if \(Y = \{0,1\}^4\) and \(T = I\). For these instances, \(\alpha = \alpha ^\#\) and \(\alpha = \alpha ^+\) outperform the other choices of \(\alpha \). However, as for the SIZES instances in Sect. 5.2.1, there is a trade-off between performance and computation times, since LBDA\((\alpha ^+)\) and LBDA\((\alpha ^\#)\) require 100 LBDA(\(\alpha \)) runs, which is computationally more demanding than LBDA(0). Nevertheless, on every instance, the computation times of LBDA(\(\alpha \)) for \(\alpha \in \{\alpha ^\#,\alpha ^+\}\) are below 20 minutes, and below 3 minutes if \(S \le 1681\).
Randomly generated test instances
We generate random MIR problems of the form (1), with \(X = {\mathbb {R}}^{n_1}_+\) and
In addition, we assume that the components of the random vector \(\omega \in {\mathbb {R}}^m\) follow independent normal distributions with mean 10 and standard deviation \(\sigma \in \{0.1,0.5,1,2,4,10\}\). The parameters c, q, T, and W are fixed, and their elements are drawn from discrete uniform distributions with supports contained in [1, 5], [5, 10], [1, 6], and [1, 6], respectively.
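The instance generation described above can be sketched as follows; the exact shapes of T and W (m rows, matching the dimension of the random right-hand side) are our assumption.

```python
import numpy as np

def generate_instance(n1, n2, m, sigma, S, rng):
    """Random MIR instance in the style described above."""
    c = rng.integers(1, 6, size=n1).astype(float)       # entries in [1, 5]
    q = rng.integers(5, 11, size=n2).astype(float)      # entries in [5, 10]
    T = rng.integers(1, 7, size=(m, n1)).astype(float)  # entries in [1, 6]
    W = rng.integers(1, 7, size=(m, n2)).astype(float)  # entries in [1, 6]
    omega = rng.normal(10.0, sigma, size=(S, m))        # independent normal rhs
    return c, q, T, W, omega
```

Fixing c, q, T, and W while redrawing only \(\omega \) across values of \(\sigma \) isolates the effect of the variability of the random parameters.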
The reason that we consider multiple values of the standard deviation \(\sigma \) is that Theorem 3 implies that LBDA(\(\alpha \)) performs better as \(\sigma \) increases. This is because the total variations of the one-dimensional conditional pdfs are small if \(\sigma \) is large, and thus the error bound on the optimality gap achieved by LBDA(\(\alpha \)) is also small.
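This can be made concrete: the total variation of a unimodal density equals twice its peak value, so for the \(N(10, \sigma ^2)\) densities used here it is \(2/(\sigma \sqrt{2\pi })\), which vanishes as \(\sigma \) grows.

```python
import math

def tv_normal(sigma):
    """Total variation of the N(mu, sigma^2) density: the density rises to its
    peak 1/(sigma*sqrt(2*pi)) and falls back, so the total variation is twice
    the peak."""
    return 2.0 / (sigma * math.sqrt(2.0 * math.pi))

tvs = [tv_normal(s) for s in (0.1, 0.5, 1.0, 2.0, 4.0, 10.0)]
```

For \(\sigma = 0.1\) the total variation is roughly 8, while for \(\sigma = 10\) it drops to roughly 0.08, in line with the error bound shrinking by two orders of magnitude across the tested range.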
To reduce noise in the outcomes of our experiments, we compute the average optimality gaps, costs, and computation times over 20 randomly generated test instances for each value of \(\sigma \). We consider test instances of two different sizes, namely \(n_1 = 10\), \(p = 5\), \(m = 5\) (small), and \(n_1 = 100\), \(p = 40\), \(m = 20\) (large). Tables 4 and 5 display the results for the small and large versions, respectively.
From these results, we observe that LBDA(\(\alpha \)) clearly outperforms the sampling solutions in terms of computation time and scalability to larger problem instances. In particular, we observe that the computation time of LBDA(\(\alpha \)) is of the same order of magnitude as that of \(x_\text {LP}\), while it performs significantly better in terms of optimality gaps and out-of-sample estimated expected costs. Overall, our results indicate that LBDA(\(\alpha \)) can be implemented very efficiently and that it can handle large MIR problem instances.
In line with the performance guarantee in Theorem 3, LBDA(\(\alpha \)) performs better for larger values of \(\sigma \). For example, on the small instances, the optimality gaps achieved by LBDA(0) are strictly decreasing in \(\sigma \), and for \(\sigma = 10.0\), LBDA\((\alpha ^+)\) outperforms the sampling solution \({\bar{x}}_S\). A similar observation holds for the optimal solution of the generalized \(\alpha \)-approximations \({\hat{Q}_\alpha }\), as we would expect based on the error bound for \({\hat{Q}_\alpha }\) in Theorem 2. Observe, however, that even for large values of \(\sigma \), the optimality gaps reported in Table 4 for LBDA(\(\alpha \)) are relatively large, i.e. around 3–4%, and that the optimality gaps achieved by e.g. LBDA\((\alpha ^+)\) are not strictly decreasing in \(\sigma \). The reason is that the actual optimality gaps are likely much smaller than the upper bounds reported in Table 4, which are computed using the MRP and thus rely on solving multiple SAAs of the original problem. Since Gurobi has difficulties solving the DEFs of these SAAs in reasonable time, especially for large values of \(\sigma \), the bounds obtained using the MRP are typically not sharp.
Interestingly, \(x_\text {LP}\) also performs better as \(\sigma \) increases. An explanation is that our error bound for the generalized \(\alpha \)-approximations implies that the MIR function Q becomes closer to a convex function as \(\sigma \) increases. Thus, since the LP-relaxation of Q is a convex lower bound of Q, its approximation error is expected to become smaller as \(\sigma \) increases. Note, however, that unlike that of the generalized \(\alpha \)-approximations, the approximation error of the LP-relaxation does not go to zero. Indeed, based on our results, LBDA(\(\alpha \)) is clearly preferred to \(x_\text {LP}\), since LBDA(\(\alpha \)) consistently outperforms \(x_\text {LP}\) at the expense of very little additional computation time.
Furthermore, the results in Tables 4 and 5 indicate that \(\alpha = T{\bar{x}}_S\) is a good choice for LBDA(\(\alpha \)). Indeed, if \(\alpha = T{\bar{x}}_S\), then LBDA(\(\alpha \)) and \(x^*_\alpha \) perform similarly to \({\bar{x}}_S\). For example, on the small instances, they achieve optimality gaps that are on average within 0.2% and 1.1% of \({\bar{x}}_S\), respectively. However, since \({\bar{x}}_S\) is difficult to compute in practice, the use of LBDA\((T{\bar{x}}_S)\) is limited, whereas LBDA\((\alpha ^+)\) and LBDA\((\alpha ^\#)\) can be applied directly. As for the instances in Sects. 5.2.1 and 5.2.3, they achieve significantly better results than LBDA(0), at the expense of higher computation times. In particular, on the large instances, LBDA\((\alpha ^\#)\) achieves optimality gaps that are two to five times smaller than those of LBDA(0), and on the small instances, LBDA\((\alpha ^+)\) achieves optimality gaps that are within 0.6% of \({\bar{x}}_S\) for \(\sigma \ge 0.5\). While the computation times of LBDA(\(\alpha \)) increase for \(\alpha = \alpha ^+\) and \(\alpha = \alpha ^\#\) compared to \(\alpha = 0\), they remain manageable: even the large instances are solved within 26 minutes.
Finally, observe from Table 4 that LBDA(\(\alpha \)) generally achieves performance that is similar to or better than that of \(x_\alpha ^*\). In other words, the fact that LBDA(\(\alpha \)) uses loose optimality cuts to solve the generalized \(\alpha \)-approximations has no negative effect on the solution quality.
Conclusion
We consider two-stage mixed-integer recourse models with random right-hand side. Due to the non-convexity of the recourse function, such models are extremely difficult to solve. We develop a tractable approximating model by using convex approximations of the recourse function. In particular, we propose a new class of convex approximations, the so-called generalized \(\alpha \)-approximations, and we derive a corresponding error bound on the difference between these approximations and the true recourse function. In addition, we show that this error bound is small if the variability of the random parameters in the model is large. More precisely, the error bound for the generalized \(\alpha \)-approximations goes to zero as the total variations of the one-dimensional conditional probability density functions of the random right-hand side vector in the model go to zero.
The advantage of the generalized \(\alpha \)-approximations over existing convex approximations is that they can be solved efficiently. In fact, we describe a loose Benders decomposition algorithm, LBDA(\(\alpha \)), which efficiently solves the corresponding approximating model. The quality of the candidate solution \({{\hat{x}_\alpha }}\) generated by LBDA(\(\alpha \)) in the original model is guaranteed by Theorem 3, which states an upper bound on the optimality gap of \({{\hat{x}_\alpha }}\). This performance guarantee is similar to the error bound we prove for the generalized \(\alpha \)-approximations. Indeed, we show that the optimality gap of \({{\hat{x}_\alpha }}\) is small if the variability of the random parameters in the model is large.
In addition to this theoretical guarantee on the solution quality, we assess LBDA(\(\alpha \)) empirically on a range of test instances. In particular, we consider the SIZES and SSLP instances from SIPLIB, an investment planning problem by [29], and randomly generated instances. We find that LBDA(\(\alpha \)) performs well in terms of computation times, scalability to larger problem instances, and solution quality. In particular, LBDA(\(\alpha \)) is able to solve larger instances than traditional sampling techniques and its computation times scale more favourably in the input size of the instances. In terms of solution quality, LBDA(\(\alpha \)) solves the SIZES and SSLP instances to near optimality and generally performs very well on the investment planning instances. Moreover, on the randomly generated instances, LBDA(\(\alpha \)) performs similarly to traditional sampling techniques and achieves small optimality gaps if the variability of the random parameters in the model is medium to large.
One avenue for future research is to derive sharper theoretical error bounds for the generalized \(\alpha \)-approximations. While Theorem 3 provides conditions under which our solution method performs well, the quantitative error bound cannot be computed, as it depends on an unknown and potentially large constant C. A sharp, tractable error bound would be an improvement over our current results. Another avenue is the extension of our solution method to more general mixed-integer recourse models, for example by allowing for randomness in the second-stage cost coefficients q, technology matrix T, or recourse matrix W.
References
1. Ahmed, S.: A scenario decomposition algorithm for 0–1 stochastic programs. Oper. Res. Lett. 41(6), 565–569 (2013)
2. Ahmed, S., Garcia, R., Kong, N., Ntaimo, L., Parija, G., Qiu, F., Sen, S.: SIPLIB: a stochastic integer programming test problem library. https://www2.isye.gatech.edu/~sahmed/siplib (2015)
3. Ahmed, S., Shapiro, A.: The sample average approximation method for stochastic programs with integer recourse. Preprint available from https://www.optimization-online.org (2002)
4. Bansal, M., Huang, K.L., Mehrotra, S.: Tight second stage formulations in two-stage stochastic mixed integer programs. SIAM J. Optim. 28(1), 788–819 (2018)
5. Bayraksan, G., Morton, D.P.: Assessing solution quality in stochastic programs. Math. Program. 108(2–3), 495–514 (2006)
6. Benders, J.F.: Partitioning procedures for solving mixed-variables programming problems. Numer. Math. 4(1), 238–252 (1962)
7. Birge, J.R., Louveaux, F.V.: A multicut algorithm for two-stage stochastic linear programs. Eur. J. Oper. Res. 34(3), 384–392 (1988)
8. Carøe, C.C., Schultz, R.: Dual decomposition in stochastic integer programming. Oper. Res. Lett. 24(1–2), 37–45 (1999)
9. Gade, D., Küçükyavuz, S., Sen, S.: Decomposition algorithms with parametric Gomory cuts for two-stage stochastic integer programs. Math. Program. 144(1–2), 39–64 (2014)
10. Gassmann, H.I., Ziemba, W.T. (eds.): Stochastic Programming: Applications in Finance, Energy, Planning and Logistics. World Scientific Series in Finance, vol. 4. World Scientific (2013)
11. Gomory, R.: Some polyhedra related to combinatorial problems. Linear Algebra Appl. 2(4), 451–558 (1969)
12. Guo, G., Hackebeil, G., Ryan, S.M., Watson, J.P., Woodruff, D.L.: Integration of progressive hedging and dual decomposition in stochastic integer programs. Oper. Res. Lett. 43(3), 311–316 (2015)
13. Jorjani, S., Scott, C.H., Woodruff, D.L.: Selection of an optimal subset of sizes. Int. J. Prod. Res. 37(16), 3697–3710 (1999)
14. Klein Haneveld, W.K., Stougie, L., van der Vlerk, M.H.: Simple integer recourse models: convexity and convex approximations. Math. Program. 108(2–3), 435–473 (2006)
15. Laporte, G., Louveaux, F.V.: The integer L-shaped method for stochastic integer programs with complete recourse. Oper. Res. Lett. 13(3), 133–142 (1993)
16. Mak, W.K., Morton, D.P., Wood, R.K.: Monte Carlo bounding techniques for determining solution quality in stochastic programs. Oper. Res. Lett. 24(1–2), 47–56 (1999)
17. Ntaimo, L.: Fenchel decomposition for stochastic mixed-integer programming. J. Glob. Optim. 55(1), 141–163 (2013)
18. Ntaimo, L., Sen, S.: The million-variable march for stochastic combinatorial optimization. J. Glob. Optim. 32(3), 385–400 (2005)
19. Qi, Y., Sen, S.: The ancestral Benders' cutting plane algorithm with multi-term disjunctions for mixed-integer recourse decisions in stochastic programming. Math. Program. 161(1–2), 193–235 (2017)
20. Rinnooy Kan, A.H.G., Stougie, L.: Stochastic integer programming. In: Ermoliev, Y., Wets, R.J.B. (eds.) Numerical Techniques for Stochastic Optimization, Springer Series in Computational Mathematics, vol. 10, pp. 201–213. Springer (1988)
21. Romeijnders, W., van der Laan, N.: Pseudo-valid cutting planes for two-stage mixed-integer stochastic programs with right-hand-side uncertainty. Oper. Res. (2020)
22. Romeijnders, W., Morton, D.P., van der Vlerk, M.H.: Assessing the quality of convex approximations for two-stage totally unimodular integer recourse models. INFORMS J. Comput. 29(2), 211–231 (2017)
23. Romeijnders, W., Schultz, R., van der Vlerk, M.H., Klein Haneveld, W.K.: A convex approximation for two-stage mixed-integer recourse models with a uniform error bound. SIAM J. Optim. 26(1), 426–447 (2016)
24. Romeijnders, W., Stougie, L., van der Vlerk, M.H.: Approximation in two-stage stochastic integer programming. Surv. Oper. Res. Manag. Sci. 19(1), 17–33 (2014)
25. Romeijnders, W., van der Vlerk, M.H., Klein Haneveld, W.K.: Convex approximations of totally unimodular integer recourse models: a uniform error bound. SIAM J. Optim. 25(1), 130–158 (2015)
26. Romeijnders, W., van der Vlerk, M.H., Klein Haneveld, W.K.: Total variation bounds on the expectation of periodic functions with applications to recourse approximations. Math. Program. 157(1), 3–46 (2016)
27. Ruszczyński, A.: A regularized decomposition method for minimizing a sum of polyhedral functions. Math. Program. 35(3), 309–333 (1986)
28. Schultz, R.: Stochastic programming with integer variables. Math. Program. 97(1–2), 285–309 (2003)
29. Schultz, R., Stougie, L., van der Vlerk, M.H.: Solving stochastic programs with integer recourse by enumeration: a framework using Gröbner basis reductions. Math. Program. 83(1–3), 229–252 (1998)
30. Sen, S.: Algorithms for stochastic mixed-integer programming models. Handb. Oper. Res. Manag. Sci. 12, 515–558 (2005)
31. Sen, S., Higle, J.L.: The \(C^3\) theorem and a \(D^2\) algorithm for large scale stochastic mixed-integer programming: set convexification. Math. Program. 104(1), 1–20 (2005)
32. Van Slyke, R.M., Wets, R.: L-shaped linear programs with applications to optimal control and stochastic programming. SIAM J. Appl. Math. 17(4), 638–663 (1969)
33. van der Vlerk, M.H.: Stochastic programming with integer recourse. Ph.D. thesis, University of Groningen, The Netherlands (1995)
 34.
van der Vlerk, M.H.: Convex approximations for complete integer recourse models. Math. Program. 99(2), 297–310 (2004)
 35.
Van der Vlerk, M.H.: Convex approximations for a class of mixedinteger recourse models. Ann. Oper. Res. 177(1), 139–150 (2010)
 36.
Walkup, D.W., Wets, R.J.B.: Lifting projections of convex polyhedra. Pac. J. Math. 28(2), 465–475 (1969)
 37.
Wallace, S.W., Ziemba, W.T. (eds.): Applications of Stochastic Programming. MPSSIAM Series on Optimization (2005)
 38.
Zhang, M., Küçükyavuz, S.: Finitely convergent decomposition algorithms for twostage stochastic pure integer programs. SIAM J. Optim. 24(4), 1933–1951 (2014)
Acknowledgements
The research of Ward Romeijnders has been supported by Grant 45117034 4043 from The Netherlands Organisation for Scientific Research (NWO).
Appendix
Proof of Theorem 2
Our proof is similar to e.g. [23, Theorem 5.1] and [21, Theorem 2]. Here, we point out the differences. In particular, we show that there exist vectors \(\sigma_k\), \(k = 1,\dots,K\), and a constant \(R > 0\), such that

(i) if \(\omega - Tx \in \sigma_k + \varLambda^k\), then \(v(\omega,x) - \hat{v}_\alpha(\omega,x)\) is zero-mean \(B^k\)-periodic, and

(ii) \(|v(\omega,x) - \hat{v}_\alpha(\omega,x)| \le R\),

where \(\varLambda^k\), \(k = 1,\dots,K\), are the closed convex cones from Lemma 1. Property (i) follows from Lemmas 1 (iii) and 3 (ii), and the fact that \(\psi^k(\omega - Tx) - \psi^k(\omega - \alpha)\) is zero-mean \(B^k\)-periodic.
In order to prove (ii), we use a similar argument as in [23, Lemma 3.6] and [21, Proposition 2]. It suffices to show that there exists a constant \(R'\) such that \(|\hat{v}_\alpha(\omega,x) - v_\text{LP}(\omega,x)| \le R'\). We can take \(R' = \max_k w_k\), where \(w_k\) are the upper bounds on \(\psi^k\) from Lemma 1, \(k = 1,\dots,K\). \(\square\)
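As a one-dimensional illustration of property (i) (a hedged sketch, not part of the original argument), suppose \(B^k = 1\) and \(\psi^k(t) = \lceil t \rceil - t\), which is \(1\)-periodic with \(0 \le \psi^k(t) \le 1\). Then, viewed as a function of \(\omega\),
\[
\psi^k(\omega - Tx) - \psi^k(\omega - \alpha)
= \bigl(\lceil \omega - Tx \rceil - (\omega - Tx)\bigr)
- \bigl(\lceil \omega - \alpha \rceil - (\omega - \alpha)\bigr)
\]
is again \(1\)-periodic, and its mean over any interval of length one vanishes, since each term integrates to \(\int_0^1 (\lceil t \rceil - t)\,dt = \tfrac{1}{2}\) over a period. This is precisely the zero-mean periodicity exploited above.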
Proof of Lemma 2
Our line of proof is based on [3]. For any \(\nu > 0\), consider a finite set \(\mathcal{X}_\nu\) such that for all \(x \in \mathcal{X}\), there exists an \(x' \in \mathcal{X}_\nu\) such that \(\|x - x'\| \le \nu\). Such a set \(\mathcal{X}_\nu\) exists due to Assumption 1. Let \(x \in \mathcal{X}\) be given and let \(x' \in \mathcal{X}_\nu\) be such that \(\|x - x'\| \le \nu\). Note that
\[
\left|\hat{Q}_\alpha(x) - \hat{Q}_\alpha^S(x)\right|
\le \left|\hat{Q}_\alpha(x) - \hat{Q}_\alpha(x')\right|
+ \left|\hat{Q}_\alpha(x') - \hat{Q}_\alpha^S(x')\right|
+ \left|\hat{Q}_\alpha^S(x') - \hat{Q}_\alpha^S(x)\right|. \tag{20}
\]
The first and third term on the right-hand side of (20) can be bounded by noting that both \(\hat{Q}_\alpha\) and \(\hat{Q}_\alpha^S\) are Lipschitz continuous. Denote Lipschitz constants of \(\hat{Q}_\alpha\) and \(\hat{Q}_\alpha^S\) by \(L_1\) and \(L_2\), respectively. We obtain
\[
\left|\hat{Q}_\alpha(x) - \hat{Q}_\alpha(x')\right| \le L_1 \|x - x'\| \le L_1 \nu
\quad \text{and} \quad
\left|\hat{Q}_\alpha^S(x') - \hat{Q}_\alpha^S(x)\right| \le L_2 \|x - x'\| \le L_2 \nu,
\]
which gives
\[
\left|\hat{Q}_\alpha(x) - \hat{Q}_\alpha^S(x)\right|
\le (L_1 + L_2)\nu + \sup_{x' \in \mathcal{X}_\nu} \left|\hat{Q}_\alpha(x') - \hat{Q}_\alpha^S(x')\right|.
\]
The first term \((L_1 + L_2)\nu\) can be made arbitrarily small by letting \(\nu \rightarrow 0\). The result follows, because for fixed \(\nu\), the second term \(\sup_{x' \in \mathcal{X}_\nu}\left|\hat{Q}_\alpha(x') - \hat{Q}_\alpha^S(x')\right|\) goes to zero w.p. \(1\) as \(S \rightarrow \infty\). To see this, fix any \(x' \in \mathcal{X}_\nu\), and consider the random variable \(\xi := \hat{v}_\alpha(\omega, x')\), whose expectation equals \(\hat{Q}_\alpha(x')\) and whose sample average over \(S\) i.i.d. observations equals \(\hat{Q}_\alpha^S(x')\).
Thus, by the SLLN, \(\hat{Q}_\alpha^S(x') \rightarrow \hat{Q}_\alpha(x')\) w.p. \(1\) as \(S \rightarrow \infty\). We can apply the SLLN since \(\mathbb{E}[\xi]\) exists and is finite by Assumptions (A2)–(A4). The result follows because \(\mathcal{X}_\nu\) is finite. \(\square\)
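The covering-net argument above can be illustrated numerically. The sketch below is a hypothetical one-dimensional stand-in, not the actual \(\hat{Q}_\alpha\) from the model: we take \(\hat{Q}(x) = \mathbb{E}[\max(0, \omega - x)]\) with \(\omega \sim U(0,1)\), whose closed form on \([0,1]\) is \((1-x)^2/2\), and observe that the supremum of the sample-average error over a finite net \(\mathcal{X}_\nu\) shrinks as the sample size \(S\) grows, in line with the SLLN step of the proof.

```python
import random

def true_Q(x):
    # closed form of E[max(0, w - x)] for w ~ Uniform(0, 1) and 0 <= x <= 1
    return (1.0 - x) ** 2 / 2.0

def saa_Q(x, samples):
    # sample-average approximation of E[max(0, w - x)]
    return sum(max(0.0, w - x) for w in samples) / len(samples)

random.seed(0)

# finite covering net X_nu of [0, 1] with mesh nu = 0.1
net = [i / 10 for i in range(11)]

def sup_error(S):
    # sup over the net of |Q(x') - Q^S(x')| for one sample of size S
    samples = [random.random() for _ in range(S)]
    return max(abs(saa_Q(x, samples) - true_Q(x)) for x in net)

errors = [sup_error(S) for S in (100, 10_000, 1_000_000)]
print(errors)  # the sup-error over the net shrinks as S grows (w.p. 1)
```

Because the net is finite, the pointwise SLLN immediately yields uniform convergence over the net; the Lipschitz terms then control the remaining gap of size \(\nu\).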
Proof of Lemma 3
It follows from the definitions of \(\hat{v}_\alpha\) and \(\nu_\alpha\) that \(\hat{v}_\alpha \ge \nu_\alpha\). Moreover, we can take \(R = \max_k w_k\), where \(w_k\) are the upper bounds on \(\psi^k\), \(k = 1,\dots,K\), from Lemma 1.
To prove (ii), note that by the Basis Decomposition Theorem in [36], there exist basis matrices \(B^k\), \(k = 1,\dots,K\), and closed convex cones \(\varLambda^k = \{t: (B^k)^{-1}t \ge 0\}\) such that \(\omega - Tx \in \varLambda^k\) implies \(k(\omega,x) = k\), and thus
It remains to show that there exists \(\sigma_k \in \varLambda^k\) such that \(\hat{v}_\alpha(\omega,x) = \lambda^k(\omega - Tx) + \psi^k(\omega - \alpha)\) if \(\omega - Tx \in \sigma_k + \varLambda^k\). Fix arbitrary \(l \in \{1,\dots,K\}\). It suffices to show that there exists \(\sigma_{kl} \in \varLambda^k\) such that \(\omega - Tx \in \sigma_{kl} + \varLambda^k\) implies that
This is because
for some \(\sigma_k \in \varLambda^k\). Hence, if \(\omega - Tx \in \sigma_k + \varLambda^k\), then
To prove (22), note that if \(\lambda^k = \lambda^l\), then by Lemma 1 (iii), \(\psi^k(\omega - \alpha) = \psi^l(\omega - \alpha)\), so that (22) holds with equality. If \(\lambda^k \ne \lambda^l\), then \(\lambda^k s > \lambda^l s\) for any \(s \in \text{int}(\varLambda^k)\). For sufficiently large \(\gamma > 0\), we thus have
If we take \(\sigma_{kl} = \gamma s\), then (22) holds by observing that \(\psi^k(\omega - \alpha) \ge 0\) and \(\psi^l(\omega - \alpha) \le w_l\). \(\square\)
Cite this article
van der Laan, N., Romeijnders, W.: A loose Benders decomposition algorithm for approximating two-stage mixed-integer recourse models. Math. Program. 190, 761–794 (2021). https://doi.org/10.1007/s10107-020-01559-1
Keywords
 Stochastic programming
 Mixed-integer recourse
 Convex approximations
 Error bounds
Mathematics Subject Classification
 90C15
 90C11