
On sample average approximation for two-stage stochastic programs without relatively complete recourse

  • Full Length Paper
  • Series B
  • Published in Mathematical Programming

Abstract

We investigate sample average approximation (SAA) for two-stage stochastic programs without relatively complete recourse, i.e., for problems in which there are first-stage feasible solutions that are not guaranteed to have a feasible recourse action. As a feasibility measure of the SAA solution, we consider the “recourse likelihood”, which is the probability that the solution has a feasible recourse action. For \(\epsilon \in (0,1)\), we demonstrate that the probability that a SAA solution has recourse likelihood below \(1-\epsilon \) converges to zero exponentially fast with the sample size. Next, we analyze the rate of convergence of optimal solutions of the SAA to optimal solutions of the true problem for problems with a finite feasible region, such as bounded integer programming problems. For problems with non-finite feasible region, we propose modified “padded” SAA problems and demonstrate in two cases that such problems can yield, with high confidence, solutions that are certain to have a feasible recourse decision. Finally, we conduct a numerical study on a two-stage resource planning problem that illustrates the results, and also suggests there may be room for improvement in some of the theoretical analysis.
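As an illustration of the "recourse likelihood" notion in the abstract, the quantity can be estimated by Monte Carlo sampling — the same mechanism SAA exploits. The second-stage structure and distribution below are hypothetical toys, not the paper's model:

```python
import random

# Toy second stage (hypothetical, for illustration only): a recourse action y
# must satisfy 0 <= y <= cap and y >= xi - x, which is feasible iff xi <= x + cap.
def has_recourse(x, xi, cap=1.0):
    return xi <= x + cap

def recourse_likelihood(x, n_samples=100_000, seed=0):
    # Monte Carlo estimate of P(xi admits a feasible recourse action given x),
    # with xi ~ Uniform[0, 3].
    rng = random.Random(seed)
    hits = sum(has_recourse(x, rng.uniform(0.0, 3.0)) for _ in range(n_samples))
    return hits / n_samples

est = recourse_likelihood(1.0)  # true value here is P(xi <= 2) = 2/3
```

For a first-stage decision without relatively complete recourse, this estimate is strictly below one, and the paper's results bound how often an SAA solution's recourse likelihood falls below \(1-\epsilon \).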



Acknowledgements

The authors thank two anonymous referees for comments that helped improve this paper.

Author information


Correspondence to Rui Chen.


The authors dedicate this paper to Shabbir Ahmed. This work was supported by NSF Award CMMI-1634597.

Appendix


1.1 Proof of Corollary 5

For any \(\nu \in (0,1)\),

$$\begin{aligned} N&\ge \frac{1}{1-\nu }\bigg [\frac{1}{\epsilon }\log \frac{1}{\beta } +\frac{n_1}{\epsilon }\Bigl (\log \frac{1}{\nu \epsilon }+\log \Bigl (\frac{m_1}{n_1}+2\Bigr )+1\Bigr )\\&\quad +\frac{2n_1(n_2+1)}{\epsilon }\Bigl (\log \Bigl (\frac{m_2}{n_2+1}\Bigr )+1\Bigr )+n_1\bigg ]\\ \Rightarrow (1-\nu )N&\ge \frac{1}{\epsilon }\log \frac{1}{\beta } +\frac{n_1}{\epsilon }\Bigl (\log \frac{1}{\nu \epsilon } +\log \Bigl (\frac{2n_1+m_1}{n_1}\Bigr )+1\Bigr )\\&\quad +\frac{2n_1(n_2+1)}{\epsilon } \Bigl (\log \Bigl (\frac{m_2}{n_2+1}\Bigr )+1\Bigr )+n_1\\ \Rightarrow N&\ge \frac{1}{\epsilon }\log \frac{1}{\beta } +\frac{n_1}{\epsilon }\Bigg (\log \frac{1}{\nu \epsilon }+\log n_1-1 +\frac{\nu N\epsilon }{n_1}\Bigg )\\&\quad +\frac{n_1}{\epsilon } \Bigl (\log \Bigl (\frac{2n_1+m_1}{n_1^2}\Bigr )+2\Bigr )\\&\quad +\frac{2n_1(n_2+1)}{\epsilon }\Bigl (\log \Bigl (\frac{m_2}{n_2+1}\Bigr )+1\Bigr )+n_1\\ \Rightarrow N&\ge \frac{1}{\epsilon }\log \frac{1}{\beta } +\frac{n_1}{\epsilon }\log N+\frac{n_1}{\epsilon } \Bigl (\log \Bigl (\frac{2n_1+m_1}{n_1^2}\Bigr )+2\Bigr )\\&\quad +\frac{2n_1(n_2+1)}{\epsilon }\Bigl (\log \Bigl (\frac{m_2}{n_2+1}\Bigr )+1\Bigr )+n_1\\ \Rightarrow \log \beta&\ge n_1\log N+n_1\Bigl (\log \Bigl (\frac{2n_1 +m_1}{n_1^2}\Bigr )+2\Bigr )+2n_1(n_2+1)\Bigl (\log \Bigl (\frac{m_2}{n_2+1}\Bigr )+1\Bigr )\\&\quad +\epsilon n_1-\epsilon N\\ \Rightarrow \log \beta&\ge n_1\log N+n_1\log (2n_1+m_1)+2n_1(n_2+1)\log m_2\\&\quad +\epsilon n_1-\epsilon N-2\log (n_1!)-2n_1\log \Bigl ((n_2+1)!\Bigr )\\ \Rightarrow \beta&\ge \frac{N^{n_1}(2n_1+m_1)^{n_1}m_2^{2n_1(n_2+1)} e^{-\epsilon (N-n_1)}}{(n_1!)^2\Bigl ((n_2+1)!\Bigr )^{2n_1}}\\&\ge \left( {\begin{array}{c}N\\ n_1\end{array}}\right) \frac{1}{n_1!}(2n_1+m_1)^{n_1} \biggl (\frac{m_2^{n_2+1}}{(n_2+1)!}\biggr )^{2n_1}(1-\epsilon )^{N-n_1}. \end{aligned}$$

The third inequality follows from the elementary bound \(t\ge 1+\log t\) for all \(t>0\), applied with \(t=\frac{\nu N\epsilon }{n_1}\). The fifth inequality follows from \(\log (k!)\ge k(\log k -1)\) for \(k\ge 1\). The result follows by setting \(\nu =1/2\).
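The two elementary inequalities invoked in this justification can be checked numerically; a minimal sketch using only the standard library:

```python
import math

# Two elementary facts used in the derivation above:
#   (i)  t >= 1 + log t for all t > 0   (third inequality, t = nu*N*eps/n1)
#   (ii) log(k!) >= k*(log k - 1) for k >= 1   (fifth inequality)
def gap_i(t):
    return t - (1 + math.log(t))          # >= 0, with equality at t = 1

def gap_ii(k):
    return math.lgamma(k + 1) - k * (math.log(k) - 1)   # log(k!) via lgamma

assert all(gap_i(t) >= 0 for t in (0.01, 0.5, 1.0, 3.0, 1e6))
assert all(gap_ii(k) >= 0 for k in range(1, 100))
```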

1.2 Proof of Proposition 11

We only need to prove that \(H(x,\xi )\) is Lipschitz continuous in \((W(\xi ),T(\xi ),h(\xi ))\) for all \(x\in X\) under the infinity (matrix) norm. The result then follows from the assumption that \((W(\xi ),T(\xi ),h(\xi ))\) is Lipschitz continuous in \(\xi \).

For fixed \(x\in X\), let \((y^*,\eta ^*)\) and \((y',\eta ')\) be optimal solutions of (11) for \(M:=({\overline{W}},{\overline{T}},{\overline{h}}):=({\overline{W}}(\xi ),{\overline{T}}(\xi ),{\overline{h}}(\xi ))\) and \(M':=({\overline{W}}',{\overline{T}}',{\overline{h}}'):=({\overline{W}}(\xi '),{\overline{T}}(\xi '),{\overline{h}}(\xi '))\), respectively. We consider three cases.

  1. If the problem has only right-hand side randomness, then

    $$\begin{aligned} \begin{aligned} \eta ^*&=\min _y\Big \{\max _{i \in I} \big \{{\overline{h}}_i - {\overline{W}}_iy-{\overline{T}}_ix\big \} : Dy \ge d - Cx \Big \}\\&\le \max _{i \in I} \big \{{\overline{h}}_i -{\overline{W}}_iy'-{\overline{T}}_ix\big \}\\&\le \max _{i \in I} \big \{{\overline{h}}'_i-{\overline{W}}_iy'-{\overline{T}}_ix\big \}+\Vert M-M'\Vert _\infty \Vert \left( \begin{array}{c} 0\\ 0\\ 1 \end{array} \right) \Vert _\infty \\&\le \eta '+\Vert M-M'\Vert _\infty . \end{aligned} \end{aligned}$$
  2. If the problem has fixed recourse and X is bounded, then

    $$\begin{aligned} \begin{aligned} \eta ^*&=\min _y\Big \{\max _{i \in I} \big \{{\overline{h}}_i - {\overline{W}}_iy-{\overline{T}}_ix\big \}:Dy\ge d-Cx \Big \}\\&\le \max _{i \in I} \big \{{\overline{h}}_i -{\overline{W}}_iy'-{\overline{T}}_ix\big \}\\&\le \max _{i \in I} \big \{{\overline{h}}'_i-{\overline{W}}_iy'-{\overline{T}}'_ix\big \}+\Vert M-M'\Vert _\infty \Vert \left( \begin{array}{c} 0\\ x\\ 1 \end{array} \right) \Vert _\infty \\&\le \eta '+R\Vert M-M'\Vert _\infty \end{aligned} \end{aligned}$$

    for some \(R>0\) since X is bounded.

  3. If \(\{(x,y):x\in X,Dy\ge d-Cx \}\) is bounded, then similarly

    $$\begin{aligned} \begin{aligned} \eta ^*&=\min _y\Big \{\max _{i \in I} \big \{{\overline{h}}_i - {\overline{W}}_iy-{\overline{T}}_ix\big \}:Dy\ge d-Cx \Big \}\\&\le \max _{i \in I} \big \{{\overline{h}}_i -{\overline{W}}_iy'-{\overline{T}}_ix\big \}\\&\le \max _{i \in I} \big \{{\overline{h}}'_i-{\overline{W}}'_iy'-{\overline{T}}'_ix\big \}+\Vert M-M'\Vert _\infty \Vert \left( \begin{array}{c} y'\\ x\\ 1 \end{array} \right) \Vert _\infty \\&\le \eta '+R\Vert M-M'\Vert _\infty \end{aligned} \end{aligned}$$

    for some \(R>0\) since \(\{(x,y):x\in X,Dy\ge d-Cx \}\) is bounded.

So for each case, there exists \(R>0\) such that

$$\begin{aligned} H(x,\xi )-H(x,\xi ')=\eta ^*-\eta '\le R\Vert M-M'\Vert _\infty . \end{aligned}$$

Similarly, \(H(x,\xi ')-H(x,\xi )\le R\Vert M-M'\Vert _\infty \). Therefore,

$$\begin{aligned} |H(x,\xi )-H(x,\xi ')|\le R\Vert M-M'\Vert _\infty , \end{aligned}$$

which implies that \(H(x,\xi )\) is a Lipschitz continuous function in \((W(\xi ),T(\xi ),h(\xi ) )\) for all \(x\in X\) under infinity norm. Therefore, Assumption 4 holds.
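The elementary step underlying each case above — perturbing the data of a maximum of affine rows by \(\Delta M\) changes the value by at most \(\Vert \Delta M\Vert _\infty \Vert (y,x,1)^T\Vert _\infty \) — can be spot-checked numerically. A sketch with toy dimensions and random data (hypothetical, not the paper's instances):

```python
import random

# Check: for a max of affine rows, perturbing (W, T, h) changes the value of
# max_i {h_i - W_i y - T_i x} by at most ||Delta M||_inf * ||(y, x, 1)||_inf,
# where ||.||_inf on matrices is the maximum absolute row sum.
rng = random.Random(42)
m, n1, n2 = 5, 3, 2
y = [rng.uniform(-2, 2) for _ in range(n2)]
x = [rng.uniform(-2, 2) for _ in range(n1)]

def value(W, T, h):
    return max(h[i] - sum(W[i][k] * y[k] for k in range(n2))
                    - sum(T[i][k] * x[k] for k in range(n1)) for i in range(m))

max_violation = 0.0
for _ in range(200):
    W = [[rng.uniform(-1, 1) for _ in range(n2)] for _ in range(m)]
    T = [[rng.uniform(-1, 1) for _ in range(n1)] for _ in range(m)]
    h = [rng.uniform(-1, 1) for _ in range(m)]
    dW = [[rng.uniform(-0.1, 0.1) for _ in range(n2)] for _ in range(m)]
    dT = [[rng.uniform(-0.1, 0.1) for _ in range(n1)] for _ in range(m)]
    dh = [rng.uniform(-0.1, 0.1) for _ in range(m)]
    W2 = [[W[i][k] + dW[i][k] for k in range(n2)] for i in range(m)]
    T2 = [[T[i][k] + dT[i][k] for k in range(n1)] for i in range(m)]
    h2 = [h[i] + dh[i] for i in range(m)]
    norm_dM = max(sum(map(abs, dW[i])) + sum(map(abs, dT[i])) + abs(dh[i])
                  for i in range(m))
    bound = norm_dM * max([abs(v) for v in y] + [abs(v) for v in x] + [1.0])
    max_violation = max(max_violation,
                        abs(value(W2, T2, h2) - value(W, T, h)) - bound)

assert max_violation <= 1e-12
```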

1.3 Proof of Proposition 12

First observe that the linear program (11) is always feasible, so by linear programming duality,

$$\begin{aligned} \begin{aligned} H({\hat{x}},\xi )&=\max _{\alpha ,\beta } \quad \alpha ^T({\overline{h}}(\xi )-{\overline{T}}(\xi ){\hat{x}})+\beta ^T(d - C{\hat{x}}),\\&\qquad {\text {s.t.}}\quad {\overline{W}}(\xi )^T\alpha + D^T\beta =0,\\&\qquad \qquad e^T\alpha =1,~\alpha \ge 0,~\beta \ge 0, \end{aligned} \end{aligned}$$

where we adopt the convention that if the dual linear program is infeasible then the optimal value is defined to be \(-\infty \). Thus,

$$\begin{aligned} \max \{ H({\hat{x}},\xi ^J): J\in [N]^d \}&= \max _{\alpha ,\beta , J}\quad \alpha ^T\big ({\overline{h}}(\xi ^J)-{\overline{T}}(\xi ^J){\hat{x}}\big )+\beta ^T(d-C{\hat{x}}),\\&\qquad {\text {s.t.}}\quad \alpha ^T{\overline{W}}(\xi ^J) + \beta ^T D=0,\\&\qquad \qquad e^T\alpha =1,~\alpha \ge 0,~\beta \ge 0,~J\in [N]^d. \end{aligned}$$

Next, introduce binary variables \(\delta _{qj}\) for \(q \in [d]\) and \(j \in [N]\), where \(\delta _{qj}=1\) implies that \(J_q=j\). This leads to the mixed-integer nonlinear program:

$$\begin{aligned} \begin{aligned} \max _{\alpha ,\beta ,\xi ,\delta }&\quad \alpha ^T({\overline{h}}(\xi )-{\overline{T}}(\xi ){\hat{x}}) + \beta ^T(d - C{\hat{x}})\\ {\text {s.t.}}&\quad \xi _q=\sum _{j \in [N]}\xi ^j_q\delta _{qj},\quad q \in [d],\\&\quad \sum _{j \in [N]}\delta _{qj}=1,\quad q \in [d],\\&\quad \alpha ^T {\overline{W}}(\xi ) + \beta ^TD =0,\\&\quad e^T\alpha =1,\\&\quad \alpha \ge 0,~\beta \ge 0, ~\delta \in \{0,1\}^{d\times N}. \end{aligned} \end{aligned}$$
(36)

Using the assumptions that \({\overline{W}}(\xi )\), \({\overline{T}}(\xi )\) and \({\overline{h}}(\xi )\) are linear in \(\xi \), problem (36) can be written as the following mixed-integer bilinear program:

$$\begin{aligned} \max _{\alpha ,\beta ,\xi ,\delta }&\quad \alpha ^T\Biggl ({\overline{H}}\xi -\sum _{k \in [n_1]} {\overline{T}}^k\xi {\hat{x}}_k \Biggr ) + \beta ^T(d - C{\hat{x}}) \end{aligned}$$
(37)
$$\begin{aligned} {\text {s.t.}}&\quad \xi _q=\sum _{j \in [N]}\xi ^j_q\delta _{qj},\quad q \in [d], \end{aligned}$$
(38)
$$\begin{aligned}&\quad \sum _{j \in [N]}\delta _{qj}=1,\quad q \in [d], \end{aligned}$$
(39)
$$\begin{aligned}&\quad \alpha ^T{\overline{W}}^k\xi +\beta ^T D^k =0,\quad k \in [n_2], \end{aligned}$$
(40)
$$\begin{aligned}&\quad e^T\alpha =1, \end{aligned}$$
(41)
$$\begin{aligned}&\quad \alpha \ge 0,~\beta \ge 0,~\delta \in \{0,1\}^{d\times N}. \end{aligned}$$
(42)

We next use (38) to substitute out the variables \(\xi _q\) in this formulation. Observe that once this is done, the only nonlinear terms are of the form \(\alpha _p\delta _{qj}\) for \(p \in I, q \in [d], j \in [N]\). Thus, introduce new variables \(z_{pqj}\) to represent this product for each \(p \in I\), \(q \in [d]\), and \(j \in [N]\). Using constraint (41) we derive the linear constraints:

$$\begin{aligned} \sum _{p \in I} z_{pqj} \biggl (= \sum _{p \in I} \alpha _p\delta _{qj} \biggr ) = \delta _{qj}, \quad q \in [d],~j \in [N]. \end{aligned}$$
(43)

Using constraints (39) we derive the linear constraints:

$$\begin{aligned} \sum _{j \in [N]} z_{pqj} \biggl (= \sum _{j \in [N]} \alpha _p\delta _{qj} \biggr ) = \alpha _p, \quad p \in I, \ q \in [d]. \end{aligned}$$
(44)

Observe that constraints (43) and (44), together with (39), (42), and the nonnegativity \(z\ge 0\), are sufficient to imply \(z_{pqj} = \alpha _p \delta _{qj}\). Indeed, for any fixed q, suppose \(j^*_q\) is the index such that \(\delta _{qj^*_q}=1\). Then (43) and \(z\ge 0\) imply that \(z_{pqj}=0=\alpha _p\delta _{qj}\) for any \(j \ne j^*_q\) and all p, and (44) then implies that \(\sum _{j \in [N]} z_{pqj} = z_{pqj^*_q} = \alpha _p = \alpha _p \delta _{qj^*_q}\) for all p. Also note that constraints (41), (43) and (44) imply

$$\begin{aligned} \sum _{j\in [N]}\delta _{qj}=\sum _{j\in [N]}\sum _{p\in I}z_{pqj}=\sum _{p\in I}\Big (\sum _{j\in [N]}z_{pqj}\Big )=e^T\alpha =1 \end{aligned}$$

for \(q\in [d]\). Therefore, constraints (39) are redundant when constraints (41), (43) and (44) are present. Thus the mixed-integer bilinear program (37)–(42) can be reformulated as the MILP given in Proposition 12 by using (38) to substitute out the variables \(\xi \), using \(z_{pqj}\) to replace the bilinear terms \(\alpha _p\delta _{qj}\), and adding the constraints (43)–(44).
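The linearization identities used in this proof can be sanity-checked on a small instance; in this sketch the dimensions and the point \(\alpha \) are hypothetical:

```python
# Sanity check on a toy instance: with delta one-hot in j for each q, the
# products z_pqj = alpha_p * delta_qj satisfy (43) and (44), and those
# constraints recover sum_j delta_qj = e^T alpha = 1, making (39) redundant.
I, d, N = 3, 2, 4                  # |I| dual rows, d components, N scenarios
alpha = [0.2, 0.5, 0.3]            # point on the simplex: e^T alpha = 1
jstar = [1, 3]                     # selected scenario index for each q
delta = [[1.0 if j == jstar[q] else 0.0 for j in range(N)] for q in range(d)]
z = [[[alpha[p] * delta[q][j] for j in range(N)] for q in range(d)]
     for p in range(I)]

for q in range(d):
    for j in range(N):             # constraint (43): sum_p z_pqj = delta_qj
        assert abs(sum(z[p][q][j] for p in range(I)) - delta[q][j]) < 1e-12
for p in range(I):
    for q in range(d):             # constraint (44): sum_j z_pqj = alpha_p
        assert abs(sum(z[p][q][j] for j in range(N)) - alpha[p]) < 1e-12
for q in range(d):                 # redundancy of (39)
    total = sum(z[p][q][j] for p in range(I) for j in range(N))
    assert abs(total - 1.0) < 1e-12
```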

1.4 Proof of Proposition 13

Note that \(H({\hat{x}},\cdot )\) is convex under the assumptions, and the maximum of a convex function over a box is attained at one of its vertices. Therefore, we can reformulate (27) as

$$\begin{aligned} \max \Big \{H({\hat{x}},\xi ):\xi \in \prod _{q\in [d]} \big \{\xi ^{\min }_{q},\xi ^{\max }_{q}\big \} \Big \}. \end{aligned}$$
(45)
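The reduction to vertex enumeration can be illustrated with a toy convex function (a pointwise maximum of affine functions, standing in for \(H({\hat{x}},\cdot )\)); the box bounds below are hypothetical:

```python
import itertools
import random

# Toy convex objective: a pointwise max of affine functions.
def f(xi):
    return max(xi[0] + 2 * xi[1] - 1, -3 * xi[0] + xi[1], xi[0] - xi[1] + 0.5)

lo, hi = [-1.0, 0.0], [2.0, 3.0]   # xi_q in [lo_q, hi_q], d = 2

# Enumerate the 2^d vertices of the box, as in (45).
vertex_max = max(f(v) for v in itertools.product(*zip(lo, hi)))

# No sampled interior point does better than the best vertex.
rng = random.Random(1)
interior_max = max(f([rng.uniform(lo[q], hi[q]) for q in range(2)])
                   for _ in range(10_000))
assert interior_max <= vertex_max
```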

Introduce binary variables \(\delta _{q1}\) and \(\delta _{q2}\) for \(q\in [d]\), where \(\delta _{q1}=1\) implies that \(\xi _q=\xi ^{\min }_{q}\) and \(\delta _{q2}=1\) implies that \(\xi _q=\xi ^{\max }_{q}\). We can then rewrite (45) as a mixed-integer bilinear program:

$$\begin{aligned} \begin{aligned} \max _{\alpha ,\beta ,\xi ,\delta }&\quad \alpha ^T\biggl ({\overline{H}}\xi -\sum _{k \in [n_1]}{\overline{T}}^k\xi {\hat{x}}_k \biggr ) + \beta ^T(d - C{\hat{x}})\\ {\text {s.t.}}&\quad \xi _q=\xi ^{\min }_q\delta _{q1}+\xi ^{\max }_q\delta _{q2},\quad q \in [d],\\&\quad \delta _{q1}+\delta _{q2}=1,\quad q \in [d],\\&\quad \alpha ^T {\overline{W}}+ \beta ^TD =0,\\&\quad e^T\alpha =1,\\&\quad \alpha \ge 0,~\beta \ge 0, ~\delta \in \{0,1\}^{d\times 2}. \end{aligned} \end{aligned}$$
(46)

As in the proof of Proposition 12, we introduce variables \(z_{pqj}\) to represent \(\alpha _p\delta _{qj}\) for \(p\in I,q\in [d],j\in \{1,2\}\). Then problem (45) can be written as the following mixed-integer linear program:

$$\begin{aligned} \max _{\alpha ,\beta ,\delta ,z}&\quad \sum _{p\in I}\sum _{q\in [d]} \Bigg [\Bigg ({\overline{H}}_{pq}-\sum _{k\in [n_1]}{\hat{x}}_k {\overline{T}}^k_{pq}\Bigg )(\xi _q^{\min }z_{pq1} +\xi _q^{\max }z_{pq2}) \Bigg ]+ \beta ^T(d - C{\hat{x}}) \end{aligned}$$
(47)
$$\begin{aligned} {\text {s.t.}}&\quad \sum _{p\in I}z_{pqj}=\delta _{qj},\quad q\in [d],~j\in \{1,2\},\end{aligned}$$
(48)
$$\begin{aligned}&\quad z_{pq1}+z_{pq2}=\alpha _p,\quad p\in I,~q\in [d],\end{aligned}$$
(49)
$$\begin{aligned}&\quad \alpha ^T {\overline{W}}+ \beta ^TD =0,\end{aligned}$$
(50)
$$\begin{aligned}&\quad e^T\alpha =1,\end{aligned}$$
(51)
$$\begin{aligned}&\quad \alpha \ge 0,~\beta \ge 0,~z\ge 0, ~\delta \in \{0,1\}^{d\times 2}. \end{aligned}$$
(52)

Finally, we apply the reformulation-linearization technique to strengthen the MILP formulation (47)–(52). We introduce new variables \(w_{pqj}\ge 0\) to represent the product \(\beta _p\delta _{qj}\) for \(p\in [m_2]{\setminus } I,q\in [d],j\in \{1,2\}\). Note that \(w_{pqj}=\beta _p\delta _{qj}\) and \(\alpha ^T{\overline{W}}+\beta ^TD=0\) imply

$$\begin{aligned}&\sum _{p\in I}{\overline{W}}_{pk}z_{pqj}+\sum _{p\in [m_2]{\setminus } I}D_{pk} w_{pqj}\Bigg (=\delta _{qj}\Bigg (\sum _{p\in I}{\overline{W}}_{pk}\alpha _p+\sum _{p\in [m_2]{\setminus } I}D_{pk}\beta _p \Bigg )\Bigg )=0,\nonumber \\&\quad k\in [n_1],q\in [d],j\in \{1,2\}, \end{aligned}$$
(53)

and constraints \(\delta _{q1}+\delta _{q2}=1\) for \(q\in [d]\) imply

$$\begin{aligned} w_{pq1}+w_{pq2}\Big (=\beta _p(\delta _{q1}+\delta _{q2})\Big )=\beta _p,\quad p\in [m_2]{\setminus } I,~q\in [d]. \end{aligned}$$
(54)

Note that constraints (49), (53) and (54) imply (50). Therefore, adding the new variables \(w\ge 0\) together with constraints (53) and (54) to the original MILP formulation yields a strengthened formulation in the lifted space, namely the MILP given in Proposition 13.
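The RLT identities (53) and (54) can be spot-checked on a tiny instance; all numbers below are hypothetical:

```python
# Toy check: with z_pqj = alpha_p * delta_qj and w_pqj = beta_p * delta_qj,
# dual feasibility alpha^T W + beta^T D = 0 gives (53), and
# delta_q1 + delta_q2 = 1 gives (54): w_pq1 + w_pq2 = beta_p.
alpha = [0.5, 0.5]                 # e^T alpha = 1
W = [[2.0], [-1.0]]                # single column k: alpha^T W_k = 0.5
beta = [1.0]
D = [[-0.5]]                       # beta^T D_k = -0.5, so the sum is 0
delta = [[1.0, 0.0], [0.0, 1.0]]   # one vertex choice per component q (d = 2)

z = [[[alpha[p] * delta[q][j] for j in range(2)] for q in range(2)]
     for p in range(2)]
w = [[[beta[p] * delta[q][j] for j in range(2)] for q in range(2)]
     for p in range(1)]

viol_53 = max(abs(sum(W[p][0] * z[p][q][j] for p in range(2))
                  + sum(D[p][0] * w[p][q][j] for p in range(1)))
              for q in range(2) for j in range(2))
viol_54 = max(abs(w[p][q][0] + w[p][q][1] - beta[p])
              for p in range(1) for q in range(2))
assert viol_53 < 1e-12 and viol_54 < 1e-12
```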

Cite this article

Chen, R., Luedtke, J. On sample average approximation for two-stage stochastic programs without relatively complete recourse. Math. Program. 196, 719–754 (2022). https://doi.org/10.1007/s10107-021-01753-9

