
A tractable approach for designing piecewise affine policies in two-stage adjustable robust optimization

  • Full Length Paper
  • Series A
  • Mathematical Programming

Abstract

We consider the problem of designing piecewise affine policies for two-stage adjustable robust linear optimization problems under right-hand side uncertainty. It is well known that a piecewise affine policy is optimal, although the number of pieces can be exponentially large. A significant challenge in designing a practical piecewise affine policy is constructing good pieces of the uncertainty set. Here we address this challenge by introducing a new framework in which the uncertainty set is “approximated” by a “dominating” simplex. The corresponding policy is then based on a mapping from the uncertainty set to the simplex. Although our piecewise affine policy has exponentially many pieces, it can be computed efficiently by solving a compact linear program given the dominating simplex. Furthermore, the dominating simplex can be found in closed form if the uncertainty set satisfies certain symmetries, and can be computed using a MIP in general. We remark that our policy is an approximate piecewise affine policy and is not necessarily a generalization of the class of affine policies. Nevertheless, the performance of our policy is significantly better than the affine policy for many important uncertainty sets, such as ellipsoids and norm balls, both theoretically and numerically. For instance, for the hypersphere uncertainty set, our piecewise affine policy can be computed by an LP and gives an \(O(m^{1/4})\)-approximation, whereas the affine policy requires solving a second-order cone program and has a worst-case performance bound of \(O(\sqrt{m})\).

Notes

  1. Remark: We note that there is a typo in Tables 1 and 2 of [7] in the performance bound for affine policies for p-norm balls. According to Theorem 3 in [7], the bound should be

    $$\begin{aligned} \frac{m^{\frac{p-1}{p}}+m}{m^{\frac{p-1}{p}}+m^{\frac{1}{p}}}= O \left( m^{\frac{1}{p}} \right) , \end{aligned}$$

    instead of \(\frac{m^{\frac{p-1}{p}}+m}{m^{\frac{1}{p}}+m}\) as stated in Table 2 of [7].
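
    A quick numerical sanity check of the corrected expression (our own sketch, not part of [7]): the ratio of the bound to \(m^{1/p}\) should stay bounded as \(m\) grows.

    ```python
    # Evaluate the corrected bound (m^((p-1)/p) + m) / (m^((p-1)/p) + m^(1/p))
    # and compare it to m^(1/p); the ratio should remain bounded (Theta constant).

    def affine_bound(m: float, p: float) -> float:
        return (m ** ((p - 1) / p) + m) / (m ** ((p - 1) / p) + m ** (1 / p))

    for p in (2.0, 3.0):
        for m in (10, 100, 1_000, 10_000):
            b = affine_bound(m, p)
            print(f"p={p}, m={m}: bound={b:.2f}, bound/m^(1/p)={b / m ** (1 / p):.3f}")
    ```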

References

  1. Ayoub, J., Poss, M.: Decomposition for adjustable robust linear optimization subject to uncertainty polytope. Comput. Manag. Sci. 13(2), 219–239 (2016)

  2. Ben-Tal, A., El Ghaoui, L., Nemirovski, A.: Robust Optimization. Princeton University Press, Princeton (2009)

  3. Ben-Tal, A., Goryashko, A., Guslitzer, E., Nemirovski, A.: Adjustable robust solutions of uncertain linear programs. Math. Program. 99(2), 351–376 (2004)

  4. Ben-Tal, A., Nemirovski, A.: Robust convex optimization. Math. Oper. Res. 23(4), 769–805 (1998)

  5. Ben-Tal, A., Nemirovski, A.: Robust solutions of uncertain linear programs. Oper. Res. Lett. 25(1), 1–14 (1999)

  6. Ben-Tal, A., Nemirovski, A.: Robust optimization—methodology and applications. Math. Program. 92(3), 453–480 (2002)

  7. Bertsimas, D., Bidkhori, H.: On the performance of affine policies for two-stage adaptive optimization: a geometric perspective. Math. Program. 153(2), 577–594 (2015)

  8. Bertsimas, D., Brown, D., Caramanis, C.: Theory and applications of robust optimization. SIAM Rev. 53(3), 464–501 (2011)

  9. Bertsimas, D., Caramanis, C.: Finite adaptability in multistage linear optimization. IEEE Trans. Autom. Control 55(12), 2751–2766 (2010)

  10. Bertsimas, D., Dunning, I.: Multistage robust mixed-integer optimization with adaptive partitions. Oper. Res. 64(4), 980–998 (2016)

  11. Bertsimas, D., Georghiou, A.: Design of near optimal decision rules in multistage adaptive mixed-integer optimization. Oper. Res. 63(3), 610–627 (2015)

  12. Bertsimas, D., Goyal, V.: On the power and limitations of affine policies in two-stage adaptive optimization. Math. Program. 134(2), 491–531 (2012)

  13. Bertsimas, D., Goyal, V., Sun, X.: A geometric characterization of the power of finite adaptability in multistage stochastic and adaptive optimization. Math. Oper. Res. 36(1), 24–54 (2011)

  14. Bertsimas, D., Iancu, D., Parrilo, P.: Optimality of affine policies in multi-stage robust optimization. Math. Oper. Res. 35, 363–394 (2010)

  15. Bertsimas, D., Sim, M.: Robust discrete optimization and network flows. Math. Program. Ser. B 98, 49–71 (2003)

  16. Bertsimas, D., Sim, M.: The price of robustness. Oper. Res. 52(2), 35–53 (2004)

  17. Chen, X., Sim, M., Sun, P., Zhang, J.: A linear decision-based approximation approach to stochastic programming. Oper. Res. 56(2), 344–357 (2008)

  18. Dantzig, G.: Linear programming under uncertainty. Manag. Sci. 1, 197–206 (1955)

  19. El Ghaoui, L., Lebret, H.: Robust solutions to least-squares problems with uncertain data. SIAM J. Matrix Anal. Appl. 18, 1035–1064 (1997)

  20. El Housni, O., Goyal, V.: Beyond worst-case: a probabilistic analysis of affine policies in dynamic optimization. In: Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R. (eds.) Advances in Neural Information Processing Systems, vol. 30, pp. 4759–4767. Curran Associates Inc, New York (2017)

  21. El Housni, O., Goyal, V.: Piecewise static policies for two-stage adjustable robust linear optimization. Math. Program. 169(2), 649–665 (2018)

  22. Feige, U., Jain, K., Mahdian, M., Mirrokni, V.: Robust combinatorial optimization with exponential scenarios. Lect. Notes Comput. Sci. 4513, 439–453 (2007)

  23. Goldfarb, D., Iyengar, G.: Robust portfolio selection problems. Math. Oper. Res. 28(1), 1–38 (2003)

  24. Iancu, D., Sharma, M., Sviridenko, M.: Supermodularity and affine policies in dynamic robust optimization. Oper. Res. 61(4), 941–956 (2013)

  25. Kall, P., Wallace, S.: Stochastic Programming. Wiley, New York (1994)

  26. Postek, K., den Hertog, D.: Multistage adjustable robust mixed-integer optimization via iterative splitting of the uncertainty set. INFORMS J. Comput. 28(3), 553–574 (2016)

  27. Prékopa, A.: Stochastic Programming. Kluwer Academic Publishers, Dordrecht (1995)

  28. Shapiro, A.: Stochastic programming approach to optimization under uncertainty. Math. Program. Ser. B 112(1), 183–220 (2008)

  29. Shapiro, A., Dentcheva, D., Ruszczynski, A.: Lectures on Stochastic Programming: Modeling and Theory. SIAM, Philadelphia (2009)

  30. Soyster, A.: Convex programming with set-inclusive constraints and applications to inexact linear programming. Oper. Res. 21(5), 1154–1157 (1973)

  31. Zeng, B.: Solving Two-Stage Robust Optimization Problems by a Constraint-and-Column Generation Method. University of South Florida, Tampa (2011)


Acknowledgements

O. El Housni and V. Goyal are supported by NSF Grants CMMI 1201116 and CMMI 1351838.

Author information

Correspondence to Vineet Goyal.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Proof of Theorem 1

Proof

Let \((\hat{\varvec{x}},\varvec{\hat{y}}(\hat{\varvec{h}}), \hat{\varvec{h}}\in \hat{\mathcal{U}})\) be an optimal solution for \(z_{\textsf {AR}}(\hat{\mathcal{U}})\). For each \(\varvec{h}\in \mathcal{U}\), let \(\tilde{\varvec{y}}(\varvec{h})= \hat{\varvec{y}}(\hat{\varvec{h}})\) where \(\hat{\varvec{h}}\in \hat{\mathcal{U}}\) dominates \(\varvec{h}\). Therefore, for any \(\varvec{h}\in \mathcal{U}\),

$$\begin{aligned} \varvec{A}\hat{\varvec{x}}+\varvec{B}\tilde{\varvec{y}}(\varvec{h}) = \varvec{A}\hat{\varvec{x}}+\varvec{B}\hat{\varvec{y}}(\hat{\varvec{h}})\ge \hat{\varvec{h}}\ge \varvec{h}, \end{aligned}$$

i.e., \((\hat{\varvec{x}},\tilde{\varvec{y}}(\varvec{h}),\varvec{h}\in \mathcal{U})\) is a feasible solution for \(z_{\textsf {AR}}(\mathcal{U})\). Therefore,

$$\begin{aligned} z_{\textsf {AR}}(\mathcal{U})\le \varvec{c}^T\hat{\varvec{x}}+\max _{\varvec{h}\in \mathcal{U}}\varvec{d}^T\tilde{\varvec{y}}(\varvec{h})\le \varvec{c}^T\hat{\varvec{x}}+\max _{\hat{\varvec{h}}\in \hat{\mathcal{U}}}\varvec{d}^T\hat{\varvec{y}}(\hat{\varvec{h}})=z_{\textsf {AR}}(\hat{\mathcal{U}}). \end{aligned}$$

Conversely, let \((\varvec{x}^*, \varvec{y}^*(\varvec{h}), \varvec{h}\in \mathcal{U})\) be an optimal solution of \(z_{\textsf {AR}}(\mathcal{U})\). Then, for any \(\hat{\varvec{h}}\in \hat{\mathcal{U}}\), since \( \frac{\hat{\varvec{h}}}{\beta } \in \mathcal{U}\), we have,

$$\begin{aligned} \varvec{A}\varvec{x}^*+\varvec{B}\varvec{y}^*\left( \frac{\hat{\varvec{h}}}{\beta }\right) \ge \frac{\hat{\varvec{h}}}{\beta }. \end{aligned}$$

Hence, \((\beta \varvec{x}^*, \beta \varvec{y}^*\left( \frac{\hat{\varvec{h}}}{\beta }\right) , \hat{\varvec{h}} \in \hat{\mathcal{U}})\) is feasible for \(\varPi _{\textsf {AR}}(\hat{\mathcal{U}})\). Therefore,

$$\begin{aligned} z_{\textsf {AR}}(\hat{\mathcal{U}})\le \varvec{c}^T \beta \varvec{x}^* + \underset{ \varvec{\hat{h}} \in \hat{\mathcal{U}}}{\max }\; \varvec{d}^T \beta \varvec{y^*}\left( \frac{\varvec{\hat{h}}}{\beta }\right) \le \beta \cdot \left( \varvec{c}^T \varvec{x}^* + \underset{ \varvec{h} \in \mathcal{U}}{\max }\; \varvec{d}^T \varvec{y^*(h)} \right) = \beta \cdot z_{\textsf {AR}}(\mathcal U). \end{aligned}$$

\(\square \)
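
To make the sandwich \(z_{\textsf {AR}}(\mathcal{U})\le z_{\textsf {AR}}(\hat{\mathcal{U}})\le \beta \cdot z_{\textsf {AR}}(\mathcal{U})\) concrete, here is a minimal numerical sketch (our own toy instance; the function `z_ar` and the data `A`, `B`, `c`, `d` are illustrative choices, not from the paper). For a polytope uncertainty set, the fully adjustable problem reduces to a linear program with one recourse vector per vertex, since the worst case is attained at a vertex; we use the budget set with \(k=1.2\), \(m=2\), and the dominating simplex \(\beta \cdot {\textsf {conv}}(\varvec{e}_1,\varvec{e}_2,(k/m)\varvec{e})\) with \(\beta =k\), as in Proposition 5.

```python
# A toy check of Theorem 1 (our instance, not from the paper). For a polytope
# U = conv(vertices), the fully adjustable problem is the LP
#   min c'x + z  s.t.  A x + B y_v >= h_v,  d'y_v <= z,  x, y_v >= 0,
# with one recourse vector y_v per vertex h_v (the worst case is a vertex).
import numpy as np
from scipy.optimize import linprog

def z_ar(vertices, A, B, c, d):
    m, n = B.shape
    V = len(vertices)
    nvar = n + V * n + 1                       # layout: [x, y_1, ..., y_V, z]
    cost = np.zeros(nvar); cost[:n] = c; cost[-1] = 1.0
    A_ub, b_ub = [], []
    for v, h in enumerate(vertices):
        for i in range(m):                     # -(A x)_i - (B y_v)_i <= -h_i
            row = np.zeros(nvar)
            row[:n] = -A[i]
            row[n + v * n:n + (v + 1) * n] = -B[i]
            A_ub.append(row); b_ub.append(-h[i])
        row = np.zeros(nvar)                   # d'y_v - z <= 0
        row[n + v * n:n + (v + 1) * n] = d; row[-1] = -1.0
        A_ub.append(row); b_ub.append(0.0)
    return linprog(cost, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                   bounds=[(0, None)] * nvar).fun

m = 2
A = B = np.eye(m)
c = d = np.ones(m)
k = beta = 1.2                                 # budget set, beta = k
U = [np.zeros(m), np.array([1.0, 0.0]), np.array([0.0, 1.0]),
     np.array([1.0, 0.2]), np.array([0.2, 1.0])]
U_hat = [beta * np.array([1.0, 0.0]), beta * np.array([0.0, 1.0]),
         beta * np.full(m, k / m)]
z_u, z_hat = z_ar(U, A, B, c, d), z_ar(U_hat, A, B, c, d)
print(f"z_AR(U)={z_u:.3f} <= z_AR(U_hat)={z_hat:.3f} <= beta*z_AR(U)={beta * z_u:.3f}")
```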

Proof of Lemma 1

Proof

(a) Suppose there exist \( \beta \) and \( \varvec{v} \in \mathcal{U}\) such that \( \hat{\mathcal{U}} = \beta \cdot {\textsf {conv}} \left( \varvec{e}_1,\ldots ,\varvec{e}_m, \varvec{v} \right) \) dominates \(\mathcal U \). Consider \(\varvec{h} \in \mathcal{U}\). Since \(\hat{\mathcal{U}}\) dominates \(\mathcal{U}\), there exist \( \alpha _1, \alpha _2,\ldots ,\alpha _{m+1} \ge 0\) with \(\alpha _1 + \cdots + \alpha _{m+1} =1\) such that

$$\begin{aligned} h_i \le \beta \left( \alpha _i + \alpha _{m+1} v_i \right) , \; \forall i =1,\ldots ,m. \end{aligned}$$
(B.1)

Let

$$\begin{aligned} I (\varvec{h} )= \left\{ i \in [m] \; \bigg \vert \; h_i - \beta v_i \ge 0 \right\} . \end{aligned}$$

Then,

$$\begin{aligned} \sum _{i=1}^m \left( h_i -\beta v_i \right) ^{+}&= \underset{i \in I (\varvec{h} ) }{\sum }h_i - \beta \underset{i \in I (\varvec{h} ) }{\sum }v_i\\&\le \sum _{i\in I (\varvec{h} )} \beta \left( \alpha _i + \alpha _{m+1} v_i \right) - \beta \underset{i \in I (\varvec{h} )}{\sum }v_i\\&= \beta \sum _{i\in I (\varvec{h} )} \alpha _i + \left( \alpha _{m+1} -1\right) \beta \underset{i \in I (\varvec{h} ) }{\sum }v_i\\&\le \beta , \end{aligned}$$

where the first inequality follows from (B.1) and the last inequality holds because \( \alpha _{m+1} -1 \le 0\), \( v_i \ge 0\), \(\beta \ge 0\) and \( \sum _{i\in I (\varvec{h} )} \alpha _i \le 1\). We conclude that

$$\begin{aligned} \frac{1}{\beta } \sum _{i=1}^m \left( h_i -\beta v_i \right) ^{+} \le 1. \end{aligned}$$

(b) Now, suppose that \( \beta \ge 0 \) and \( \varvec{v} \in \mathcal{U}\) satisfy \( \frac{1}{\beta } \sum _{i=1}^m \left( h_i -\beta v_i \right) ^{+} \le 1\) for all \( \varvec{h} \in \mathcal{U}\). For any \(\varvec{h} \in \mathcal{U}\), let

$$\begin{aligned} \varvec{{\hat{h}}} = \sum _{i=1}^m \left( h_i -\beta v_i \right) ^{+} \varvec{e}_i + \beta \varvec{v}. \end{aligned}$$

Then for all \(i=1,\ldots ,m\),

$$\begin{aligned} \hat{h}_i&= \left( h_i -\beta v_i \right) ^{+} + \beta v_i \\&\ge \left( h_i -\beta v_i \right) + \beta v_i \ge h_i . \end{aligned}$$

Therefore, \(\varvec{{\hat{h}}}\) dominates \(\varvec{h}\). Moreover,

$$\begin{aligned} \varvec{{\hat{h}}} = 2\beta \left( \sum _{i=1}^m \frac{\left( h_i -\beta v_i \right) ^{+}}{2 \beta } \varvec{e}_i + \frac{1}{2}\varvec{v} \right) \in 2\beta \cdot {\textsf {conv}} \left( \varvec{0}, \varvec{e}_1,\ldots ,\varvec{e}_m, \varvec{v} \right) , \end{aligned}$$

because

$$\begin{aligned} \frac{1}{\beta } \sum _{i=1}^m \left( h_i - \beta v_i \right) ^{+} \le 1. \end{aligned}$$

Therefore, \(2\beta \cdot {\textsf {conv}} \left( \varvec{0}, \varvec{e}_1,\ldots ,\varvec{e}_m, \varvec{v} \right) \) dominates \(\mathcal{U} \) and consequently \(2\beta \cdot {\textsf {conv}} \left( \varvec{e}_1,\ldots ,\varvec{e}_m, \varvec{v} \right) \) dominates \(\mathcal{U} \) as well. \(\square \)
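
As an illustration of the criterion (our own elementary computation, using the hypersphere \(\mathcal{U}=\{\varvec{h}\ge \varvec{0} : \Vert \varvec{h}\Vert _2\le 1\}\) and the assumed choice \(\varvec{v}=\varvec{e}/\sqrt{m}\)): restricted to a support of size \(k\), the maximizer of \(\sum _i (h_i-\beta v_i)^+\) over the unit ball puts mass \(1/\sqrt{k}\) on \(k\) coordinates, so the criterion value is \(\max _k (\sqrt{k}-k\beta /\sqrt{m})/\beta = \sqrt{m}/(4\beta ^2)\), which equals 1 exactly at \(\beta = m^{1/4}/2\), consistent with the \(O(m^{1/4})\) hypersphere bound in the abstract.

```python
# Closed-form evaluation (ours) of the criterion for the hypersphere, v = e/sqrt(m).
# Restricted to a support of size k, the maximizer over the unit ball puts 1/sqrt(k)
# on k coordinates, so the criterion value is max_k (sqrt(k) - k*beta/sqrt(m)) / beta.
import numpy as np

m = 256
beta = 0.5 * m ** 0.25                         # threshold suggested by the computation above
c = beta / np.sqrt(m)                          # beta * v_i for v = e / sqrt(m)
vals = [(np.sqrt(k) - k * c) / beta for k in range(1, m + 1)]
print(f"max_k (1/beta) * sum_i (h_i - beta*v_i)^+ = {max(vals):.4f}")  # = 1.0 at k = m/(4*beta^2)
```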

Proof of Lemma 3

Proof

Fix \( k \in [m]\) and consider

$$\begin{aligned} \varvec{h} \in \underset{\varvec{h} \in \mathcal{U}}{\textsf {argmax}} \sum _{i=1}^k h_i . \end{aligned}$$

Without loss of generality, we can suppose that \(h_i =0\) for \( i =k+1,\ldots , m\). Denote by \(\mathcal{S}_k\) the set of permutations of \(\{ 1,2,\ldots ,k\}\). We define \(\varvec{h} ^{\sigma } \in \mathbb {R}_+^m\) such that \(h^{\sigma }_i = h_{\sigma (i) }\) for \(i=1, \ldots ,k\) and \(h^{\sigma }_i =0\) otherwise. Since \(\mathcal{U}\) is a permutation invariant set, we have \(\varvec{h} ^{\sigma } \in \mathcal{U}\) for any \(\sigma \in \mathcal{S}_k\). The convexity of \(\mathcal{U}\) implies that

$$\begin{aligned} \frac{1}{k!} \sum _{ \sigma \in \mathcal{S}_k} \varvec{h}^{\sigma } \in \mathcal{U}. \end{aligned}$$

We have,

$$\begin{aligned} \sum _{ \sigma \in \mathcal{S}_k} h^{\sigma }_i = \left\{ \begin{array}{ll} (k-1)! \cdot \sum _{j=1}^k h_j &{}\quad \hbox {if } i=1,\ldots ,k \\ 0 &{}\quad \text{ otherwise, } \end{array} \right. \end{aligned}$$

and \(\sum _{j=1}^k h_j = k \cdot \gamma (k)\) by definition. Therefore,

$$\begin{aligned} \frac{1}{k!} \sum _{ \sigma \in \mathcal{S}_k} \varvec{h}^{\sigma } = \gamma (k) \cdot \sum _{i=1}^k \varvec{e}_i \in \mathcal{U}. \end{aligned}$$

\(\square \)
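
A quick check of the lemma (ours) on the permutation-invariant set \(\mathcal{U}=\{\varvec{h}\ge \varvec{0} : \Vert \varvec{h}\Vert _p\le 1\}\), for which \(\gamma (k)=k^{-1/p}\) by Hölder's inequality; the symmetrized point \(\gamma (k)\sum _{i=1}^k \varvec{e}_i\) must lie in \(\mathcal{U}\) for every \(k\).

```python
# A check (ours) of Lemma 3 on the p-norm ball U = {h >= 0 : ||h||_p <= 1},
# where gamma(k) = k^(-1/p) by Holder: gamma(k) * (e_1 + ... + e_k) must lie in U.
import numpy as np

m, p = 50, 3.0
for k in range(1, m + 1):
    h = np.zeros(m)
    h[:k] = k ** (-1.0 / p)                    # the symmetrized point of the lemma
    assert np.linalg.norm(h, ord=p) <= 1.0 + 1e-12
print("gamma(k) * sum_{i<=k} e_i lies in the p-norm ball for every k")
```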

Proof of Lemma 4

Proof

Consider an optimal solution \( \tilde{\varvec{h}} \in \mathcal{U} \) of the maximization problem in (3.3) for fixed \(\beta \). We construct another optimal solution \( \varvec{h}^* \in \mathcal{U}\) of (3.3) that satisfies the properties in the lemma. First, denote \( I = \{ i \; \vert \; {\tilde{h}}_i > \beta \gamma \} \) and let \(\vert I\vert =k\). Since \(\mathcal{U}\) is permutation invariant, we can assume without loss of generality that \(I =\{1,2,\ldots ,k \}\). We define

$$\begin{aligned} h^*_i = \left\{ \begin{array}{ll} \gamma (k) &{}\quad \hbox {if } i=1,\ldots ,k \\ 0 &{}\quad \hbox {otherwise.} \end{array} \right. \end{aligned}$$

From Lemma 3, we have \(\varvec{h}^* \in \mathcal{U}\). Moreover,

$$\begin{aligned} \sum _{i=1}^m ( \tilde{h}_i - \beta \gamma )^+&= \sum _{i=1}^k \tilde{h}_i - \beta \gamma k \le k \cdot \gamma (k) - \beta \gamma k \\&= \sum _{i=1}^k ( \gamma (k) - \beta \gamma ) = \sum _{i=1}^k ( h^*_i- \beta \gamma ) \\&\le \sum _{i=1}^k ( h^*_i- \beta \gamma )^+ = \sum _{i=1}^m ( h^*_i- \beta \gamma )^+\\ \end{aligned}$$

where the first inequality follows from the definition of the coefficients \(\gamma (\cdot )\). Therefore, \(\varvec{h}^*\) and \( \tilde{\varvec{h}}\) have the same objective value in (3.3), and consequently \(\varvec{h}^*\) is also optimal for the maximization problem (3.3). Moreover, since \({\tilde{h}}_i > \beta \gamma \) for all \(i \in I\), the first inequality gives \(k \beta \gamma < k \cdot \gamma (k)\), i.e., \( \gamma (k) - \beta \gamma >0 \), so \(\big \vert \{ i \; \vert \; h_i^* > \beta \gamma \} \big \vert =k.\) Therefore, \(\varvec{h}^*\) satisfies the properties of the lemma. \(\square \)
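
A numerical spot check of the lemma (ours), again on the \(p\)-norm ball with \(\gamma (k)=k^{-1/p}\) and \(\gamma =\gamma (m)\): the objective of (3.3) over randomly sampled \(\varvec{h}\in \mathcal{U}\) should never exceed the best point of the form \(\varvec{h}^*=\gamma (k)\sum _{i\le k}\varvec{e}_i\).

```python
# Spot check (ours): sampled objectives of (3.3) on the p-norm ball never beat
# the best symmetric point h* = gamma(k) * (e_1 + ... + e_k).
import numpy as np

rng = np.random.default_rng(4)
m, p, beta = 40, 2.0, 1.5
gamma = m ** (-1.0 / p)                        # gamma(m) for the unit p-norm ball
best_star = max(k * max(k ** (-1.0 / p) - beta * gamma, 0.0) for k in range(1, m + 1))
best_sampled = 0.0
for _ in range(100_000):
    h = np.abs(rng.standard_normal(m))
    h /= np.linalg.norm(h, ord=p)              # a point on the boundary of the p-ball
    best_sampled = max(best_sampled, np.maximum(h - beta * gamma, 0.0).sum())
print(f"sampled max = {best_sampled:.4f} <= max over h* = {best_star:.4f}")
```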

Proof of Proposition 4

Proof

To prove that \(\hat{\mathcal{U}}\) dominates \(\mathcal{U}\), it is sufficient to take \(\varvec{h}\) on the boundary of \( \mathcal{U}\), i.e.,

$$\begin{aligned} a \sum _{i=1}^m h_i \sum _{j=1}^m h_j + (1-a) \sum _{i=1}^m h_i^2 =1 , \end{aligned}$$
(E.1)

and find \( \alpha _1, \alpha _2,\ldots ,\alpha _{m+1}\) nonnegative reals with \(\sum _{i=1}^{m+1} \alpha _i =1\) such that for all \( i \in [m],\)

$$\begin{aligned} \; h_i \le \beta \left( \alpha _i + \gamma \alpha _{m+1}\right) . \end{aligned}$$

By taking all \(h_i\) equal in (E.1), we get

$$\begin{aligned} \gamma = \frac{1}{ \sqrt{\left( am^2+(1-a)m \right) } } . \end{aligned}$$

We choose for \( i \in [m]\),

$$\begin{aligned} \alpha _i = \frac{1}{2} \left( (1-a)h_i^2+ a h_i \sum _{j=1}^m h_j \right) \end{aligned}$$

and \(\alpha _{m+1}= \frac{1}{2}\). First, we have \(\sum _{i=1}^{m+1} \alpha _i =1\): indeed, by (E.1), \(\sum _{i=1}^{m} \alpha _i = \frac{1}{2}\). Moreover, for all \( i \in [m]\),

$$\begin{aligned} \beta \left( \alpha _i + \gamma \alpha _{m+1}\right)&= \frac{\beta }{2} \left( (1-a)h_i^2+ a h_i \sum _{j=1}^m h_j + \frac{1}{ \sqrt{am^2+(1-a)m} } \right) \\&\ge \frac{\beta }{2} \left( (1-a)h_i^2 + \frac{1}{ \sqrt{am^2+(1-a)m} } + a h_i \right) \\&\ge \frac{\beta }{2} \left( 2 \left( \frac{(1-a)}{\sqrt{am^2+(1-a)m}} \right) ^{\frac{1}{2}}h_i+ a h_i \right) =h_i \end{aligned}$$

where the first inequality holds because \(\sum _{j=1}^m h_j \ge 1\), which is a direct consequence of (E.1) and \( a \le 1\): since \(\sum _{i=1}^m h_i^2 \le \big ( \sum _{i=1}^m h_i \big )^2\), (E.1) implies \(\big ( \sum _{j=1}^m h_j \big )^2 \ge 1\). The second inequality follows from the inequality of arithmetic and geometric means (AM-GM inequality). Finally, we can verify by a case analysis on the values of \(a\) that

$$\begin{aligned} \left( \frac{a}{2}+ \frac{(1-a)^{\frac{1}{2}}}{\left( am^2+(1-a)m\right) ^{\frac{1}{4}}}\right) ^{-1}= O \left( m^{\frac{2}{5}} \right) . \end{aligned}$$

In fact, denote \(H(m)= \left( \frac{a}{2}+ \frac{(1-a)^{\frac{1}{2}}}{\left( am^2+(1-a)m\right) ^{\frac{1}{4}}}\right) ^{-1}\); then \(H(m) = O \left( \left( a+ \frac{1}{\left( am^2+m\right) ^{\frac{1}{4}}}\right) ^{-1} \right) \) and we distinguish three cases.

Case 1: \(a=O(\frac{1}{m})\). We have \(\left( am^2+m \right) ^{\frac{1}{4}} = O(m^{\frac{1}{4}})\). Then \(H(m)=O(m^{\frac{1}{4}})=O(m^{\frac{2}{5}})\).

Case 2: \(a=\varOmega (m^{-\frac{2}{5}})\). We have \(H(m)=O(a^{-1})=O(m^{\frac{2}{5}})\).

Case 3: \(a=O(m^{-\frac{2}{5}} )\) and \(a=\varOmega (\frac{1}{m})\). We have \(\left( am^2+m \right) ^{\frac{1}{4}} = O(m^{\frac{2}{5}})\). Then,

$$\begin{aligned} a+ \frac{1}{\left( am^2+m\right) ^{\frac{1}{4}}}= \varOmega \left( \frac{1}{m}\right) +\varOmega \left( m^{\frac{-2}{5}}\right) =\varOmega \left( m^{\frac{-2}{5}}\right) . \end{aligned}$$

Therefore, \(H(m)=O(m^{\frac{2}{5}})\). \(\square \)
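
A numerical spot check of the case analysis (ours): evaluating \(H(m)\) on a grid of values of \(a\in [0,1]\) confirms that \(H(m)/m^{2/5}\) stays bounded.

```python
# A numerical spot check (ours) that H(m)/m^(2/5) stays bounded uniformly in a.
import numpy as np

for m in (1e2, 1e4, 1e6):
    a = np.concatenate([[0.0, 1.0], np.logspace(-8, 0, 200)])
    H = 1.0 / (a / 2 + np.sqrt(1 - a) / (a * m**2 + (1 - a) * m) ** 0.25)
    print(f"m={m:.0e}: max_a H(m)/m^(2/5) = {np.max(H) / m**0.4:.3f}")
```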

Proof of Proposition 5

Proof

To prove that \(\hat{ \mathcal U}\) dominates \({ \mathcal U}\), it is sufficient to take \(\varvec{h}\) on the boundary of \( \mathcal{U}\), i.e., \(\sum _{i=1}^m h_i =k\), and find \( \alpha _1, \alpha _2,\ldots ,\alpha _{m+1}\) non-negative reals with \(\sum _{i=1}^{m+1} \alpha _i =1\) such that for all \( i \in [m],\)

$$\begin{aligned} \; h_i \le \beta \left( \alpha _i + \frac{k}{m}\alpha _{m+1}\right) . \end{aligned}$$

First case: If \(\beta =k\), we choose \(\alpha _i = \frac{h_i}{k}\) for \( i \in [m]\) and \(\alpha _{m+1}= 0.\) We have \(\sum _{i=1}^{m+1} \alpha _i =1\) and for all \( i \in [m]\),

$$\begin{aligned} \beta \left( \alpha _i + \frac{k}{m}\alpha _{m+1}\right) = k \frac{h_i}{k} \ge h_i . \end{aligned}$$

Second case: If \(\beta =\frac{m}{k}\), we choose \(\alpha _i = 0 \) for \( i \in [m]\) and \(\alpha _{m+1}= 1.\) We have \(\sum _{i=1}^{m+1} \alpha _i =1\) and for all \( i \in [m]\),

$$\begin{aligned} \beta \left( \alpha _i + \frac{k}{m}\alpha _{m+1}\right) = 1 \ge h_i . \end{aligned}$$

\(\square \)

Proof of Lemma 6

Proof

Consider the following simplex

$$\begin{aligned} \hat{\mathcal{U}} = {\textsf {conv}} \left( \varvec{e}_1 , \ldots ,\varvec{e}_m, \frac{1}{\sqrt{m}} \varvec{e} \right) . \end{aligned}$$

It is clear that \( \hat{\mathcal{U}}\) dominates \(\mathcal{U}\) since \(\frac{1}{\sqrt{m}} \varvec{e}\) dominates all the extreme points \( \varvec{\nu }_j\) for \(j \in [N]\). Moreover, by the convexity of \( \mathcal U\), we have \( \frac{1}{N} \sum _{j=1}^N \varvec{\nu }_j = \frac{\left( {\begin{array}{c}m-1\\ r-1\end{array}}\right) }{\sqrt{m}\left( {\begin{array}{c}m\\ r\end{array}}\right) } \varvec{e} = \frac{r}{m\sqrt{m}} \varvec{e} \in \mathcal{U}\). Denote \(\beta = \frac{m}{r } \). Hence, for all \(i \in [m]\)

$$\begin{aligned} \varvec{e}_i = \beta \underbrace{\left( \frac{1}{\beta } \cdot \varvec{e}_i + \left( 1- \frac{1}{\beta }\right) \cdot \varvec{0}\right) }_{\in \mathcal{U}} \qquad \text {and} \qquad \frac{1}{\sqrt{m}} \varvec{e} = \beta \cdot \underbrace{\frac{r}{m\sqrt{m}} \varvec{e}}_{\in \mathcal U}. \end{aligned}$$

Therefore, \( \hat{\mathcal{U}} \subseteq \beta \cdot \mathcal{U}\) and from Theorem 1, we conclude that our policy gives a \(\beta \)-approximation to the adjustable problem  (1.1), where \(\beta = \frac{m}{\lceil m- \sqrt{m}\rceil }= 1 + O \left( \frac{1}{\sqrt{m}}\right) \). \(\square \)
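
A one-line numerical confirmation (ours) that \(\beta = m/\lceil m-\sqrt{m}\rceil = 1 + O(1/\sqrt{m})\): the normalized gap \((\beta -1)\sqrt{m}\) stays bounded.

```python
# A quick confirmation (ours) that beta = m / ceil(m - sqrt(m)) = 1 + O(1/sqrt(m)).
import math

for m in (16, 100, 10_000, 1_000_000):
    beta = m / math.ceil(m - math.sqrt(m))
    print(f"m={m}: beta={beta:.6f}, (beta-1)*sqrt(m)={(beta - 1) * math.sqrt(m):.3f}")
```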

Proof of Lemma 7

Proof

First, let us prove that \(z_{\textsf {AR}}(\mathcal{U}) \le 1\). It is sufficient to define an adjustable solution only for the extreme points of \(\mathcal{U}\) because the constraints are linear. We define the following solution for all \(i=1,\ldots ,m\) and for all \(j=1,\ldots ,N\)

$$\begin{aligned} \varvec{x} = \varvec{0} , \quad \varvec{y} ( \varvec{0}) = \varvec{0}, \quad \varvec{y} ( \varvec{e}_i) = \varvec{e}_i, \quad \varvec{y} ( \varvec{\nu }_j) = \frac{1}{m} \varvec{e}. \end{aligned}$$

We have \(\varvec{B} \varvec{y} ( \varvec{0}) = \varvec{0}\). For \(i \in [m]\)

$$\begin{aligned} \varvec{B} \varvec{y} ( \varvec{e}_i) = \varvec{e}_i + \frac{1}{\sqrt{m}} ( \varvec{e} - \varvec{e}_i) \ge \varvec{e}_i \end{aligned}$$

and for \(j \in [N]\)

$$\begin{aligned} \varvec{B} \varvec{y} (\varvec{\nu }_j) = \frac{1}{m} \varvec{B} \varvec{e} = \left( \frac{1}{m}+ \frac{m-1}{m \sqrt{m}} \right) \varvec{e} \ge \frac{1}{\sqrt{m}} \varvec{e} \ge \varvec{\nu }_j. \end{aligned}$$

Therefore, the solution defined above is feasible. Moreover, the cost of our feasible solution is 1 because for all \(i \in [m]\) and \(j \in [N]\), we have

$$\begin{aligned} \varvec{d}^T \varvec{y} ( \varvec{e}_i)= \varvec{d}^T \varvec{y} ( \varvec{\nu }_j)= 1. \end{aligned}$$

Hence, \(z_{\textsf {AR}}(\mathcal{U}) \le 1.\) Now, it is sufficient to prove that \(z_{\textsf {Aff}}(\mathcal{U})= \varOmega ( \sqrt{m})\). First, \(\tilde{\varvec{x}}= \frac{1}{\sqrt{m}} \varvec{e} \) and \(\varvec{y}( \varvec{h})= \varvec{0}\) for any \(\varvec{h} \in \mathcal{U}\) is a feasible static solution (which is a special case of an affine solution). In fact,

$$\begin{aligned} \varvec{A} \tilde{\varvec{x}}= \frac{1}{\sqrt{m}} \varvec{A} \varvec{e} = \left( \frac{1}{\sqrt{m}}+ \frac{m-1}{m} \right) \varvec{e} \ge \varvec{e} \ge \varvec{h} \quad \forall \varvec{h} \in \mathcal{U} \end{aligned}$$

where the last inequality holds because \( \mathcal{U} \subseteq [0,1]^m\). Moreover, the cost of this static solution is

$$\begin{aligned} \varvec{c}^T \tilde{\varvec{x}} = \frac{\sqrt{m}}{15}. \end{aligned}$$

Hence,

$$\begin{aligned} z_{\textsf {Aff}}(\mathcal{U}) \le \frac{\sqrt{m}}{15}. \end{aligned}$$
(H.1)

Our instance is “a permuted instance”, i.e., \(\mathcal{U}\) is permutation invariant, \(\varvec{A}\) and \(\varvec{B}\) are symmetric, and \(\varvec{c}\) and \(\varvec{d}\) are proportional to \(\varvec{e}\). Hence, from Lemmas 7 and 8 in Bertsimas and Goyal [12], for any optimal solution \(\varvec{x}^*_{\textsf {Aff}}, \varvec{y}^*_{\textsf {Aff}}( \varvec{h})\) of the affine problem, we can construct another optimal affine solution that is “symmetric” and has the same stage costs. In particular, there exists an optimal solution for the affine problem of the following form: \( \varvec{x}= \alpha \varvec{e}\), \( \varvec{y}( \varvec{h}) = \varvec{P} \varvec{h} + \varvec{q}\) for \(\varvec{h} \in \mathcal{U}\), where

$$\begin{aligned} \varvec{P}= \left( \begin{matrix} \theta &{}\quad \mu &{}\quad \ldots &{}\quad \mu \\ \mu &{}\quad \theta &{}\quad \ldots &{}\quad \mu \\ \vdots &{}\quad \vdots &{}\quad \ddots &{}\quad \vdots \\ \mu &{}\quad \mu &{}\quad \ldots &{}\quad \theta \end{matrix} \right) \end{aligned}$$
(H.2)

\( \varvec{q} = \lambda \varvec{e}\), \(\varvec{c}^T \varvec{x}= \varvec{c}^T \varvec{x}^*_{\textsf {Aff}}\) and \(\max _{ \varvec{h} \in \mathcal{U}} \varvec{d}^T \varvec{y}(\varvec{h})= \max _{ \varvec{h} \in \mathcal{U}} \varvec{d}^T \varvec{y}^*_{\textsf {Aff}}(\varvec{h})\). We have \( \varvec{x} \ge \varvec{0}\) and \( \varvec{y}(\varvec{0}) = \lambda \varvec{e} \ge \varvec{0}\) hence

$$\begin{aligned} \lambda \ge 0 \qquad \text {and} \qquad \alpha \ge 0. \end{aligned}$$
(H.3)

Claim

\(\alpha \ge \frac{1}{24 \sqrt{m}}\). For the sake of contradiction, suppose that \( \alpha < \frac{1}{24 \sqrt{m}}\). We know that

$$\begin{aligned} z_{\textsf {Aff}}(\mathcal{U}) \ge \varvec{c}^T \varvec{x} + \varvec{d}^T \varvec{y}( \varvec{0}) = \frac{\alpha }{15} m + \lambda m. \end{aligned}$$
(H.4)

Case 1: If \(\lambda \ge \frac{1}{12 \sqrt{m}}\), then from (H.4) and \(\alpha \ge 0\), we have \(z_{\textsf {Aff}}(\mathcal{U}) \ge \frac{\sqrt{m}}{12}\), a contradiction with (H.1).

Case 2: Suppose \( \lambda \le \frac{1}{12 \sqrt{m}}\). We have

$$\begin{aligned} \varvec{y}( \varvec{e}_1) = ( \theta + \lambda ) \varvec{e}_1 + ( \mu + \lambda ) ( \varvec{e} - \varvec{e}_1). \end{aligned}$$

By feasibility of the solution, we have \(\varvec{A} \varvec{x}+ \varvec{B} \varvec{y} ( \varvec{e}_1) \ge \varvec{e}_1\), hence

$$\begin{aligned} \theta + \lambda + \alpha \left( \frac{m-1}{\sqrt{m}} +1 \right) +\frac{1}{\sqrt{m}} (m-1)(\mu + \lambda ) \ge 1. \end{aligned}$$

Therefore, either \(\theta + \lambda + \alpha \left( \frac{m-1}{\sqrt{m}} +1 \right) \ge \frac{1}{2}\) or \(\frac{1}{\sqrt{m}} (m-1)(\mu + \lambda ) \ge \frac{1}{2}\).

Case 2.1: Suppose \(\frac{1}{\sqrt{m}} (m-1)(\mu + \lambda ) \ge \frac{1}{2}\). Therefore,

$$\begin{aligned} z_{\textsf {Aff}}(\mathcal{U}) \ge \varvec{d}^T \varvec{y} (\varvec{e}_1) = \theta + \lambda + (m-1)(\mu + \lambda ) \ge \frac{\sqrt{m}}{2}. \; \; [\text {Contradiction with } (\hbox {H.1})] \end{aligned}$$

where the last inequality holds because \(\theta + \lambda \ge 0 \) as \(\varvec{y}( \varvec{e}_1) \ge \varvec{0}\).

Case 2.2: Now suppose the other inequality holds, i.e., \(\theta + \lambda + \alpha \left( \frac{m-1}{\sqrt{m}} +1 \right) \ge \frac{1}{2}\). Recall that \( \lambda \le \frac{1}{12\sqrt{m}}\) and \( \alpha < \frac{1}{24 \sqrt{m}}\). Therefore,

$$\begin{aligned} \theta \ge \frac{1}{2}- \frac{1}{12\sqrt{m}} - \frac{1}{24\sqrt{m}}\left( \frac{m-1}{\sqrt{m}} +1 \right) = \frac{11}{24} - \frac{3}{24 \sqrt{m}} + \frac{1}{24m} \ge \frac{11}{24} - \frac{3}{24 }= \frac{1}{3}. \end{aligned}$$

We have,

$$\begin{aligned} \varvec{y} ( \varvec{\nu }_1 ) = \frac{1}{\sqrt{m}} \left( ( \theta + (r-1) \mu ) ( \varvec{e}_1+ \cdots + \varvec{e}_r) + r \mu ( \varvec{e} -( \varvec{e}_1+ \cdots + \varvec{e}_r)) \right) + \lambda \varvec{e}. \end{aligned}$$

In particular, we have

$$\begin{aligned} z_{\textsf {Aff}}(\mathcal{U}) \ge \varvec{d}^T \varvec{y} (\varvec{\nu }_1)&= \frac{r}{\sqrt{m}} ( \theta +(m-1) \mu ) + \lambda m \nonumber \\&\ge \frac{r}{\sqrt{m}} \left( \frac{1}{3} + (m-1) \mu \right) . \end{aligned}$$
(H.5)

where the last inequality follows from \(\lambda \ge 0\) and \( \theta \ge \frac{1}{3}.\)

Case 2.2.1: If \( \mu \ge 0\), then from (H.5),

$$\begin{aligned} z_{\textsf {Aff}}(\mathcal{U}) \ge \frac{r}{3\sqrt{m}} \ge \frac{m-\sqrt{m}}{3\sqrt{m}} \ge \frac{\sqrt{m}}{6} \; \; \text {for } m \ge 4 \; \; [\text {Contradiction with } (\hbox {H.1})] \end{aligned}$$

Case 2.2.2: Now suppose that \( \mu < 0\). By non-negativity of \( \varvec{y} ( \varvec{\nu }_1) \), we have

$$\begin{aligned} \frac{r}{\sqrt{m}} \mu + \lambda \ge 0 \end{aligned}$$

i.e.,

$$\begin{aligned} \mu \ge \frac{-\lambda \sqrt{m}}{r} \end{aligned}$$

and from (H.5)

$$\begin{aligned} z_{\textsf {Aff}}(\mathcal{U})&\ge \frac{r}{\sqrt{m}} \left( \frac{1}{3} + (m-1) \mu \right) \\&\ge \frac{r}{\sqrt{m}}\left( \frac{1}{3} - \lambda \sqrt{m}\frac{m-1}{r} \right) \\&\ge \frac{r}{\sqrt{m}}\left( \frac{1}{3} - \frac{1}{12} \cdot \frac{m-1}{r} \right) \ge \frac{r}{\sqrt{m}} \left( \frac{1}{3} - \frac{1}{6} \right) \quad \text {for } m \ge 4 \\&\ge \frac{\sqrt{m}}{12} \; \; [\text {Contradiction with } (\hbox {H.1})] \end{aligned}$$

We conclude that \( \alpha \ge \frac{1}{24 \sqrt{m}}\) and consequently

$$\begin{aligned} z_{\textsf {Aff}}(\mathcal{U}) \ge \varvec{c}^T \varvec{x} = \frac{\alpha m}{15} \ge \frac{\sqrt{m}}{360} = \varOmega ( \sqrt{m}). \end{aligned}$$

Hence,

$$\begin{aligned} z_{\textsf {Aff}}(\mathcal{U})= \varOmega ( \sqrt{m}) \cdot z_{\textsf {AR}}(\mathcal{U}). \end{aligned}$$

Moreover, for any optimal affine solution, the cost of the first-stage affine solution \(\varvec{x}^*_{\textsf {Aff}}\) is \( \varOmega ({\sqrt{m}})\) away from the optimal adjustable cost, i.e., \( \varvec{c}^T \varvec{x}^*_{\textsf {Aff}} =\varvec{c}^T \varvec{x} =\varOmega ( \sqrt{m})\cdot z_{\textsf {AR}}(\mathcal{U})\). \(\square \)
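
The feasibility and cost computations in this proof can be checked mechanically. The sketch below (ours) assumes, consistently with the displayed identities, that \(\varvec{A}=\varvec{B}=\varvec{I}+\frac{1}{\sqrt{m}}(\varvec{e}\varvec{e}^T-\varvec{I})\), \(\varvec{c}=\varvec{e}/15\), \(\varvec{d}=\varvec{e}\), and \(\varvec{\nu }_j = \frac{1}{\sqrt{m}}\varvec{1}_S\) for \(r\)-subsets \(S\) with \(r=\lceil m-\sqrt{m}\rceil \); these are our reconstructions of the instance, not a verbatim statement of it.

```python
# A mechanical check of this instance (our reconstruction: A = B =
# I + (e e^T - I)/sqrt(m), c = e/15, d = e, nu_j = 1_S / sqrt(m) for an
# r-subset S with r = ceil(m - sqrt(m)); inferred from the displayed identities).
import math
import numpy as np

m = 49
s = math.sqrt(m)
r = math.ceil(m - s)
B = np.eye(m) + (np.ones((m, m)) - np.eye(m)) / s   # A = B for this instance
c, d = np.ones(m) / 15, np.ones(m)

# adjustable solution from the proof: x = 0, y(e_i) = e_i, y(nu_j) = e/m
for i in range(m):
    e_i = np.eye(m)[i]
    assert np.all(B @ e_i >= e_i - 1e-9) and abs(d @ e_i - 1.0) < 1e-9
nu = np.zeros(m); nu[:r] = 1.0 / s                  # one representative nu_j
y_nu = np.ones(m) / m
assert np.all(B @ y_nu >= nu - 1e-9) and abs(d @ y_nu - 1.0) < 1e-9

# static solution x = e/sqrt(m): feasible since B x >= e >= h for U in [0,1]^m
x = np.ones(m) / s
assert np.all(B @ x >= 1.0 - 1e-9)
print(f"z_AR <= 1 and z_Aff <= c'x = {c @ x:.4f} = sqrt(m)/15 = {s / 15:.4f}")
```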

Proof of Theorem 7

Proof

Let us find the order of the left-hand side ratio in inequality (5.3). We have

$$\begin{aligned} \frac{\left( {\begin{array}{c}\sqrt{m}\\ m^{\epsilon }\end{array}}\right) \cdot \left( {\begin{array}{c}m-m^{\epsilon }\\ \sqrt{m}-m^{\epsilon }\end{array}}\right) }{\left( {\begin{array}{c}m\\ \sqrt{m}\end{array}}\right) }&= \frac{ (\sqrt{m})! \times (m-{m^{\epsilon }})! \times (m-\sqrt{m})! \times (\sqrt{m})! }{ ( \sqrt{m} - m^{\epsilon } )!\times ( m^{\epsilon } )! \times m! \times ( \sqrt{m} - m^{\epsilon } )! \times (m-\sqrt{m})! } \\&= \left( \frac{ (\sqrt{m})! }{ ( \sqrt{m} - m^{\epsilon } )! } \right) ^2 \cdot \frac{ (m-{m^{\epsilon }})! \ }{ ( m^{\epsilon } )! \times m! } .\\ \end{aligned}$$

By Stirling’s approximation, we have

$$\begin{aligned} \left( \sqrt{m}\right) !&= \varTheta \left( m^{\frac{1}{4}} \left( \frac{\sqrt{m}}{e} \right) ^{\sqrt{m}} \right) . \\ \left( \sqrt{m}-m^{\epsilon }\right) !&= \varTheta \left( ( \sqrt{m}-m^{\epsilon })^{\frac{1}{2}} \left( \frac{\sqrt{m}-m^{\epsilon }}{e} \right) ^{\sqrt{m}-m^{\epsilon }} \right) . \\ \left( m-m^{\epsilon }\right) !&= \varTheta \left( ( m-m^{\epsilon })^{\frac{1}{2}} \left( \frac{m-m^{\epsilon }}{e} \right) ^{m-m^{\epsilon }} \right) . \\ \left( m\right) !&= \varTheta \left( m^{\frac{1}{2}} \left( \frac{m}{e} \right) ^{m} \right) .\\ \left( m^{\epsilon }\right) !&= \varTheta \left( m^{\frac{1}{2} \epsilon }\left( \frac{m^{\epsilon }}{e} \right) ^{m^{\epsilon }} \right) . \end{aligned}$$

All together,

$$\begin{aligned} \frac{\left( {\begin{array}{c}\sqrt{m}\\ m^{\epsilon }\end{array}}\right) \cdot \left( {\begin{array}{c}m-m^{\epsilon }\\ \sqrt{m}-m^{\epsilon }\end{array}}\right) }{\left( {\begin{array}{c}m\\ \sqrt{m}\end{array}}\right) }= \varTheta \left( \frac{ \left( \sqrt{m} \right) ^{2 \sqrt{m}} \cdot \left( m-m^{\epsilon } \right) ^{ \left( m-m^{\epsilon }\right) } }{ m^{\frac{1}{2} \epsilon } \cdot \left( \sqrt{m}-m^{\epsilon } \right) ^{2 \left( \sqrt{m}-m^{\epsilon }\right) } \cdot m^m \cdot m^{ \epsilon m^{\epsilon }} } \right) . \end{aligned}$$

We have

$$\begin{aligned} \left( m-m^{\epsilon } \right) ^{ \left( m-m^{\epsilon }\right) } = \varTheta \left( m ^{ \left( m-m^{\epsilon }\right) } \cdot e^{-m^{\epsilon }+ \frac{m^{2\epsilon }}{m}} \right) , \end{aligned}$$

and

$$\begin{aligned} \left( \sqrt{m}-m^{\epsilon } \right) ^{ 2 \left( \sqrt{m}-m^{\epsilon }\right) } = \varTheta \left( \left( \sqrt{m} \right) ^{2 \left( \sqrt{m}-m^{\epsilon }\right) } \cdot e^{- 2m^{\epsilon }+ 2 \frac{m^{2\epsilon }}{\sqrt{m}}}\right) . \end{aligned}$$

Without loss of generality, we can assume \( \epsilon < \frac{1}{4}\); therefore

$$\begin{aligned} \frac{\left( {\begin{array}{c}\sqrt{m}\\ m^{\epsilon }\end{array}}\right) \cdot \left( {\begin{array}{c}m-m^{\epsilon }\\ \sqrt{m}-m^{\epsilon }\end{array}}\right) }{\left( {\begin{array}{c}m\\ \sqrt{m}\end{array}}\right) }&= \varTheta \left( \frac{ e^{ m^{\epsilon } - 2 \frac{m^{2\epsilon }}{\sqrt{m}} + \frac{m^{2\epsilon }}{m} } }{ m^{ \epsilon m^{\epsilon } +\frac{1}{2}\epsilon } } \right) \\&=\varTheta \left( \frac{ e^{ m^{\epsilon } } }{ m^{ \epsilon m^{\epsilon } +\frac{1}{2}\epsilon } } \right) . \\ \end{aligned}$$

Hence, inequality (5.3) implies

$$\begin{aligned} \varTheta \left( \frac{ Q(m)e^{ m^{\epsilon } } }{ m^{ \epsilon m^{\epsilon } +\frac{1}{2}\epsilon }} \right) \ge 1, \end{aligned}$$

but the latter inequality contradicts

$$\begin{aligned} \lim _{m\rightarrow \infty } \frac{ Q(m)e^{ m^{\epsilon } } }{ m^{ \epsilon m^{\epsilon } +\frac{1}{2}\epsilon }} = 0. \end{aligned}$$

\(\square \)
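
The Stirling estimate can be spot-checked numerically (our sketch; binomial coefficients with the non-integer arguments \(\sqrt{m}\), \(m^{\epsilon }\) are interpreted via the gamma function). The log of the ratio and the log of \(e^{m^{\epsilon }}/m^{\epsilon m^{\epsilon }+\epsilon /2}\) should agree up to a bounded additive term, the hidden \(\varTheta \) factors.

```python
# A spot check (ours) of the Stirling estimate; binomials with non-integer
# arguments sqrt(m), m^eps are interpreted via the gamma function.
import numpy as np
from scipy.special import gammaln

def log_comb(n, k):
    return gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)

eps = 0.2
for m in (1e4, 1e6, 1e8):
    s, t = np.sqrt(m), m ** eps
    log_ratio = log_comb(s, t) + log_comb(m - t, s - t) - log_comb(m, s)
    log_estimate = t - (eps * t + eps / 2) * np.log(m)
    print(f"m={m:.0e}: log ratio={log_ratio:.2f}, log estimate={log_estimate:.2f}")
```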

Domination for non-permutation invariant sets

Proposition 6

Suppose Algorithm 1 returns \(\beta \) and \(\varvec{v}\) for some uncertainty set \(\mathcal{U}\). Then the set (6.3) is a dominating set for \(\mathcal U\).

Proof

Suppose Algorithm 1 returns \(\beta \) and \(\varvec{v}\); then inequality (2.4) holds, namely,

$$\begin{aligned} \frac{1}{\beta } \sum _{i=1}^m \left( h_i -\beta v_i \right) ^{+} \le 1, \quad \forall \varvec{h} \in \mathcal{U}. \end{aligned}$$

Recall the dominating point (2.3)

$$\begin{aligned} \varvec{ \hat{h}}(\varvec{h}) = \beta \varvec{v} + ( \varvec{h} - \beta \varvec{v} )^{+}. \end{aligned}$$

We have

$$\begin{aligned} \varvec{ \hat{h}}(\varvec{h}) = \beta \left( \sum _{i=1}^m \frac{( h_i -\beta v_i )^{+} }{\beta } ( \varvec{e}_i + \varvec{v} ) + \underbrace{\left( 1- \sum _{i=1}^m \frac{( h_i -\beta v_i )^{+} }{\beta } \right) }_{ \ge 0} \varvec{v} \right) \in {\hat{\mathcal{U}}} \end{aligned}$$

where

$$\begin{aligned} {\hat{\mathcal{U}}} = \beta \cdot {\textsf {conv}} \left( \varvec{v}, \varvec{e}_1 + \varvec{v}, \ldots , \varvec{e}_m + \varvec{v} \right) . \end{aligned}$$

Hence \({\hat{\mathcal{U}}}\) is a dominating set. \(\square \)
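
A small simulation (ours) of the mapping used in this proof; here \(\beta \) and \(\varvec{v}\) are hypothetical stand-ins for the output of Algorithm 1, and sampled points violating (2.4) are simply skipped.

```python
# A simulation (ours) of the dominating map; beta and v below are hypothetical
# stand-ins for the output of Algorithm 1, and samples violating (2.4) are skipped.
import numpy as np

rng = np.random.default_rng(1)
m, beta = 10, 4.0
v = np.full(m, 1.0 / m)
for _ in range(1000):
    h = rng.random(m)
    plus = np.maximum(h - beta * v, 0.0)
    if plus.sum() / beta > 1.0:                # keep only h satisfying (2.4)
        continue
    h_hat = beta * v + plus                    # the dominating point (2.3)
    w = plus / beta                            # weights on the points e_i + v
    assert np.all(h_hat >= h - 1e-12)          # domination
    assert w.sum() <= 1.0 + 1e-12              # valid convex combination
    np.testing.assert_allclose(
        h_hat, beta * (w @ (np.eye(m) + v) + (1 - w.sum()) * v))
print("h_hat(h) dominates h and lies in beta * conv(v, e_1 + v, ..., e_m + v)")
```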

Domination for the generalized budget set

Proposition 7

Consider the simplex

$$\begin{aligned} {\hat{\mathcal{U}}}= {\textsf {conv}} \left( \varvec{e}_1, \ldots , \varvec{e}_m, \frac{1}{m-1-2\theta } \varvec{e} \right) \end{aligned}$$
(K.1)

The set (K.1) dominates the uncertainty set (6.2).

Proof

Consider the uncertainty set (6.2) given by

$$\begin{aligned} \mathcal{U}= \left\{ \varvec{h} \in [0,1]^m \; \Bigg \vert \; \sum _{i=1}^m h_i \le 1 + \theta ( h_i +h_j) \quad \forall i \ne j \right\} \end{aligned}$$

and

$$\begin{aligned} {\hat{\mathcal{U}}}= {\textsf {conv}} \left( \varvec{e}_1, \ldots , \varvec{e}_m, \frac{1}{m-1-2\theta } \varvec{e} \right) . \end{aligned}$$

Note that in our setting we have \( \theta < \frac{m-1}{2}\), so that \(m-1-2\theta > 0\). Take any \(\varvec{h} \in \mathcal{U}\). Suppose WLOG that

$$\begin{aligned} h_1 \le h_2 \le \cdots \le h_m \end{aligned}$$

Hence, by definition of \(\mathcal{U}\)

$$\begin{aligned} \varvec{e}^T \varvec{h} \le 1+ \theta ( h_1+h_2) \end{aligned}$$

To prove that \(\hat{ \mathcal U}\) dominates \({ \mathcal U}\), it is sufficient to find \( \alpha _1, \alpha _2,\ldots ,\alpha _{m+1}\) non-negative reals with \(\sum _{i=1}^{m+1} \alpha _i \le 1\) such that for all \( i \in [m],\)

$$\begin{aligned} \; h_i \le \alpha _i + \frac{1}{m-1-2\theta }\alpha _{m+1}. \end{aligned}$$

We choose \(\alpha _{m+1} = (m-1-2\theta ) \cdot \frac{h_1+h_2}{2} \), \(\alpha _1=h_1\) and for \(i \ge 2\), \(\alpha _i = h_i - \frac{h_1+h_2}{2}\). We can verify that

$$\begin{aligned} \alpha _1+ \frac{1}{m-1-2\theta } \alpha _{m+1} \ge \alpha _1 =h_1 \end{aligned}$$

and for \( i \ge 2\),

$$\begin{aligned} \alpha _i + \frac{1}{m-1-2\theta } \alpha _{m+1} =h_i \end{aligned}$$

Moreover, \(\alpha _{m+1} \ge 0\), \(\alpha _1 \ge 0\) and, for \(i \ge 2\), \(\alpha _i \ge 0\) since \(h_1+h_2= \min _{i\ne j} (h_i+h_j)\). Finally,

$$\begin{aligned} \sum _{i=1}^{m+1} \alpha _i&= \sum _{i=1}^{m} h_i - (m-1)\cdot \frac{h_1+h_2}{2}+ (m-1-2\theta ) \cdot \frac{h_1+h_2}{2}\\&\le 1+ \theta ( h_1+h_2) - (m-1)\cdot \frac{h_1+h_2}{2}+ (m-1-2\theta ) \cdot \frac{h_1+h_2}{2} =1. \end{aligned}$$

Note that the construction of this dominating set is slightly different from the general approach in Sect. 3 since we do not scale the unit vectors \(\varvec{e}_i\) in \({\hat{\mathcal{U}}}\). \(\square \)
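
A direct numerical verification of this construction (ours; the values of \(m\) and \(\theta \) are arbitrary choices with \(\theta < (m-1)/2\)): sample points of the set (6.2), build the \(\alpha \)'s from the proof, and check non-negativity, \(\sum _i \alpha _i \le 1\), and domination.

```python
# A numerical verification (ours) of Proposition 7; m and theta are arbitrary
# choices with theta < (m-1)/2, and samples outside the set (6.2) are skipped.
import numpy as np

rng = np.random.default_rng(2)
m, theta = 8, 2.0
for _ in range(2000):
    h = np.sort(rng.random(m) * 0.35)          # h_1 <= ... <= h_m, WLOG as in the proof
    if h.sum() > 1 + theta * (h[0] + h[1]):    # keep only points of U
        continue
    t = (h[0] + h[1]) / 2
    alpha = np.concatenate([[h[0]], h[1:] - t, [(m - 1 - 2 * theta) * t]])
    assert np.all(alpha >= -1e-12)             # non-negative weights
    assert alpha.sum() <= 1.0 + 1e-9           # sub-convex combination suffices
    assert np.all(h <= alpha[:m] + alpha[m] / (m - 1 - 2 * theta) + 1e-12)
print("every sampled h in (6.2) is dominated by a point of the simplex (K.1)")
```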

About this article

Cite this article

Ben-Tal, A., El Housni, O. & Goyal, V. A tractable approach for designing piecewise affine policies in two-stage adjustable robust optimization. Math. Program. 182, 57–102 (2020). https://doi.org/10.1007/s10107-019-01385-0
