A tight characterization of the performance of static solutions in two-stage adjustable robust linear optimization

  • Full Length Paper
  • Series A
  • Mathematical Programming

Abstract

In this paper, we study the performance of static solutions for two-stage adjustable robust linear optimization problems with uncertain constraint and objective coefficients and give a tight characterization of the adaptivity gap. Computing an optimal solution to an adjustable robust optimization problem is often intractable, since it requires computing a solution for all possible realizations of the uncertain parameters (Feige et al. in Lect Notes Comput Sci 4513:439–453, 2007). On the other hand, a static solution is a single (here and now) solution that is feasible for all possible realizations of the uncertain parameters and can be computed efficiently for most dynamic optimization problems. We show that for a fairly general class of uncertainty sets, a static solution is optimal for two-stage adjustable robust linear packing problems. This is highly surprising in view of the usual perception that static solutions are conservative. Furthermore, when a static solution is not optimal for the adjustable robust problem, we give a tight approximation bound on its performance that is related to a measure of non-convexity of a transformation of the uncertainty set. We also show that our bound is at least as good as (and in many cases significantly better than) the bound given by the symmetry of the uncertainty set (Bertsimas and Goyal in Math Methods Oper Res 77(3):323–343, 2013; Bertsimas et al. in Math Oper Res 36(1):24–54, 2011).


References

  1. Beale, E.M.L.: On minimizing a convex function subject to linear inequalities. J. R. Stat. Soc. Ser. B (Methodological) 17(2), 173–184 (1955)

  2. Ben-Tal, A., El Ghaoui, L., Nemirovski, A.: Robust Optimization. Princeton University Press, Princeton (2009)

  3. Ben-Tal, A., Nemirovski, A.: Robust convex optimization. Math. Oper. Res. 23(4), 769–805 (1998)

  4. Ben-Tal, A., Nemirovski, A.: Robust solutions of uncertain linear programs. Oper. Res. Lett. 25(1), 1–14 (1999)

  5. Ben-Tal, A., Nemirovski, A.: Robust optimization – methodology and applications. Math. Program. 92(3), 453–480 (2002)

  6. Bertsimas, D., Brown, D.B., Caramanis, C.: Theory and applications of robust optimization. SIAM Rev. 53(3), 464–501 (2011)

  7. Bertsimas, D., Goyal, V.: On the power of robust solutions in two-stage stochastic and adaptive optimization problems. Math. Oper. Res. 35, 284–305 (2010)

  8. Bertsimas, D., Goyal, V.: On the approximability of adjustable robust convex optimization under uncertainty. Math. Methods Oper. Res. 77(3), 323–343 (2013)

  9. Bertsimas, D., Goyal, V., Sun, X.A.: A geometric characterization of the power of finite adaptability in multistage stochastic and adaptive optimization. Math. Oper. Res. 36(1), 24–54 (2011)

  10. Bertsimas, D., Sim, M.: Robust discrete optimization and network flows. Math. Program. Ser. B 98, 49–71 (2003)

  11. Bertsimas, D., Sim, M.: The price of robustness. Oper. Res. 52(1), 35–53 (2004)

  12. Borwein, J., Lewis, A.: Convex Analysis and Nonlinear Optimization: Theory and Examples, vol. 3. Springer, Berlin (2006)

  13. Dantzig, G.B.: Linear programming under uncertainty. Manag. Sci. 1, 197–206 (1955)

  14. Dyer, M., Stougie, L.: Computational complexity of stochastic programming problems. Math. Program. 106(3), 423–432 (2006)

  15. El Ghaoui, L., Lebret, H.: Robust solutions to least-squares problems with uncertain data. SIAM J. Matrix Anal. Appl. 18, 1035–1064 (1997)

  16. Feige, U., Jain, K., Mahdian, M., Mirrokni, V.: Robust combinatorial optimization with exponential scenarios. Lect. Notes Comput. Sci. 4513, 439–453 (2007)

  17. Goldfarb, D., Iyengar, G.: Robust portfolio selection problems. Math. Oper. Res. 28(1), 1–38 (2003)

  18. Grötschel, M., Lovász, L., Schrijver, A.: The ellipsoid method and its consequences in combinatorial optimization. Combinatorica 1(2), 169–197 (1981)

  19. Infanger, G.: Planning Under Uncertainty: Solving Large-Scale Stochastic Linear Programs. Boyd & Fraser, San Francisco (1994)

  20. Kall, P., Wallace, S.W.: Stochastic Programming. Wiley, New York (1994)

  21. Minkowski, H.: Allgemeine Lehrsätze über konvexe Polyeder. Ges. Abh. 2, 103–121 (1911)

  22. von Neumann, J.: Zur Theorie der Gesellschaftsspiele. Math. Ann. 100(1), 295–320 (1928)

  23. Prékopa, A.: Stochastic Programming. Kluwer Academic Publishers, Dordrecht, Boston (1995)

  24. Shapiro, A.: Stochastic programming approach to optimization under uncertainty. Math. Program. Ser. B 112(1), 183–220 (2008)

  25. Shapiro, A., Dentcheva, D., Ruszczyński, A.: Lectures on Stochastic Programming: Modeling and Theory. SIAM, Philadelphia (2009)

  26. Shapiro, A., Nemirovski, A.: On complexity of stochastic programming problems. In: Jeyakumar, V., Rubinov, A.M. (eds.) Continuous Optimization: Current Trends and Applications, pp. 111–144 (2005)

Author information

Correspondence to Vineet Goyal.

Additional information

V. Goyal is supported by NSF Grant CMMI-1201116, DOE Grant DE-AR0000235 and a Google Research Award.

B. Lu is supported by NSF Grant CMMI-1201116.

Appendices

Appendix 1: Down-monotone uncertainty sets

In this section, we show that in \(\varPi _\mathsf{AR}^I(\mathcal{U},{\varvec{h}})\) defined in (2.3) and \(\varPi _\mathsf{Rob}^I(\mathcal{U},{\varvec{h}})\) defined in (2.4), we can assume without loss of generality that \(\mathcal U\) is down-monotone, where down-monotonicity is defined as follows.

Definition 3

A set \(\mathcal{S}\subseteq {\mathbb R}^n_+\) is down-monotone if \({\varvec{s}}\in \mathcal{S}, {\varvec{t}}\in {\mathbb R}^n_+\) and \({\varvec{t}}\le {\varvec{s}}\) implies \({\varvec{t}}\in \mathcal{S}\).

Given \(\mathcal{S}\subseteq {\mathbb R}^n_+\), we can construct the down-hull of \(\mathcal{S}\), denoted by \(\mathcal{S}^{\downarrow }\), as follows:

$$\begin{aligned} \mathcal{S}^{\downarrow }=\{{\varvec{t}}\in {\mathbb R}^n_+\;|\;\exists {\varvec{s}}\in \mathcal{S}: {\varvec{t}}\le {\varvec{s}}\}. \end{aligned}$$
(6.1)
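
To make the construction concrete, the following minimal sketch (not from the paper; it assumes NumPy and SciPy and uses illustrative data) tests membership in the down-hull of a polytope \(\mathcal{S}=\{{\varvec{s}}\ge {\varvec{0}}\;|\;{\varvec{C}}{\varvec{s}}\le {\varvec{d}}\}\) via a feasibility LP: \({\varvec{t}}\in \mathcal{S}^{\downarrow }\) if and only if some \({\varvec{s}}\in \mathcal{S}\) dominates \({\varvec{t}}\).

```python
# Sketch: membership in the down-hull of S = {s >= 0 : C s <= d} reduces to
# the feasibility LP "exists s with C s <= d, s >= t, s >= 0".
# Illustrative data only; requires NumPy and SciPy.
import numpy as np
from scipy.optimize import linprog

def in_down_hull(t, C, d):
    """True iff t is dominated by some s in S = {s >= 0 : C s <= d}."""
    n = len(t)
    A_ub = np.vstack([C, -np.eye(n)])                  # C s <= d  and  -s <= -t
    b_ub = np.concatenate([d, -np.asarray(t, float)])
    res = linprog(np.zeros(n), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * n, method="highs")
    return res.status == 0                             # status 0 = feasible

C = np.array([[1.0, 2.0]])    # S = {s >= 0 : s_1 + 2 s_2 <= 2}
d = np.array([2.0])
print(in_down_hull([0.5, 0.5], C, d))   # True: dominated by (0.5, 0.75)
print(in_down_hull([2.5, 0.0], C, d))   # False: would need s_1 >= 2.5
```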

We emphasize that the down-hull of a non-negative uncertainty set is still contained in the non-negative orthant. Given an uncertainty set \(\mathcal{U}\subseteq {\mathbb R}^{m\times n}_+\) and \({\varvec{h}}>{\varvec{0}}\), if \(\mathcal{U}\) is down-monotone, then \(\mathcal{U}^{\downarrow }=\mathcal{U}\). Therefore, \(\varPi _\mathsf{AR}^I(\mathcal{U}^{\downarrow },{\varvec{h}})\) is essentially the same problem as \(\varPi _\mathsf{AR}^I(\mathcal{U},{\varvec{h}})\) and we have \(z_\mathsf{AR}^{I}(\mathcal{U}^{\downarrow },{\varvec{h}})=z_\mathsf{AR}^{I}(\mathcal{U},{\varvec{h}})\). A similar argument applies to \(\varPi _\mathsf{Rob}^I(\mathcal{U},{\varvec{h}})\), so \(z_\mathsf{Rob}^{I}(\mathcal{U}^{\downarrow },{\varvec{h}})=z_\mathsf{Rob}^{I}(\mathcal{U},{\varvec{h}})\). On the other hand, if \(\mathcal{U}\) is not down-monotone, then \(\mathcal{U}\subsetneq \mathcal{U}^{\downarrow }\). In this case, we prove the following lemma.

Lemma 9

Given an uncertainty set \(\mathcal{U}\subseteq {\mathbb R}^{m\times n}_+\) and \({\varvec{h}}>{\varvec{0}}\), let \(z_\mathsf{AR}^{I}(\mathcal{U},{\varvec{h}})\) be the optimal value of \(\varPi _\mathsf{AR}^I(\mathcal{U},{\varvec{h}})\) defined in (2.3) and \(z_\mathsf{Rob}^{I}(\mathcal{U},{\varvec{h}})\) be the optimal value of \(\varPi _\mathsf{Rob}^I(\mathcal{U},{\varvec{h}})\) defined in (2.4). Suppose \(\mathcal{U}\) is not down-monotone, and let \(\mathcal{U}^{\downarrow }\) be defined as in (6.1). Then,

$$\begin{aligned} z_\mathsf{AR}^{I}(\mathcal{U}^{\downarrow },{\varvec{h}})=z_\mathsf{AR}^{I}(\mathcal{U},{\varvec{h}}),\;z_\mathsf{Rob}^{I}(\mathcal{U}^{\downarrow },{\varvec{h}})=z_\mathsf{Rob}^{I}(\mathcal{U},{\varvec{h}}). \end{aligned}$$

Proof

Consider an arbitrary \({\varvec{X}}\in \mathcal{U}^{\downarrow }\) with \({\varvec{X}}\not \in \mathcal{U}\), i.e., \({\varvec{X}}\in \mathcal{U}^{\downarrow }\backslash \mathcal{U}\). From (6.1), there exists \({\varvec{B}}\in \mathcal{U}\) such that \({\varvec{X}}\le {\varvec{B}}\). Since \({\varvec{B}},{\varvec{X}}\) and \({\varvec{y}}\) are all non-negative, any \({\varvec{y}}\in {\mathbb R}^{n}_+\) such that \({\varvec{B}}{\varvec{y}} \le {\varvec{h}}\) also satisfies \({\varvec{X}}{\varvec{y}}\le {\varvec{B}}{\varvec{y}} \le {\varvec{h}}\). Therefore,

$$\begin{aligned} \max \{{\varvec{d}}^T {\varvec{y}}\;|\;{\varvec{B}} {\varvec{y}}\le {\varvec{h}},{\varvec{y}}\in \mathbb {R}^{n}_+ \}\le \max \{{\varvec{d}}^T {\varvec{y}}\;|\;{\varvec{X}} {\varvec{y}}\le {\varvec{h}},{\varvec{y}}\in \mathbb {R}^{n}_+\}. \end{aligned}$$

Taking the minimum over all \({\varvec{B}}\in \mathcal{U}\) on the left-hand side, we have

$$\begin{aligned} \min _{{\varvec{B}}\in \mathcal{U}}\max _{{\varvec{y}}}\{{\varvec{d}}^T {\varvec{y}}\;|\;{\varvec{B}} {\varvec{y}}\le {\varvec{h}},{\varvec{y}}\in \mathbb {R}^{n}_+ \}\le \max _{{\varvec{y}}}\{{\varvec{d}}^T {\varvec{y}}\;|\;{\varvec{X}} {\varvec{y}}\le {\varvec{h}},{\varvec{y}}\in \mathbb {R}^{n}_+\}. \end{aligned}$$

Since \({\varvec{X}}\) was chosen arbitrarily in \(\mathcal{U}^{\downarrow }\backslash \mathcal{U}\), we can take the minimum over all \({\varvec{X}}\in \mathcal{U}^{\downarrow }\backslash \mathcal{U}\) on the right-hand side:

$$\begin{aligned} \min _{{\varvec{B}}\in \mathcal{U}}\max _{{\varvec{y}}}\{{\varvec{d}}^T {\varvec{y}}\;|\;{\varvec{B}} {\varvec{y}}\le {\varvec{h}},{\varvec{y}}\in \mathbb {R}^{n}_+ \}\le \min _{{\varvec{X}}\in \mathcal{U}^{\downarrow }\backslash \mathcal{U}}\max _{{\varvec{y}}}\{{\varvec{d}}^T {\varvec{y}}\;|\;{\varvec{X}} {\varvec{y}}\le {\varvec{h}},{\varvec{y}}\in \mathbb {R}^{n}_+\}. \end{aligned}$$

Therefore, the outer minimum of \(\varPi _\mathsf{AR}^I(\mathcal{U}^{\downarrow },{\varvec{h}})\) is attained in \(\mathcal U\), which implies

$$\begin{aligned} \min _{{\varvec{B}}\in \mathcal{U}}\max _{{\varvec{y}}}\{{\varvec{d}}^T {\varvec{y}}\;|\;{\varvec{B}} {\varvec{y}}\le {\varvec{h}},{\varvec{y}}\in \mathbb {R}^{n}_+ \}=\min _{{\varvec{X}}\in \mathcal{U}^{\downarrow }}\max _{{\varvec{y}}}\{{\varvec{d}}^T {\varvec{y}}\;|\;{\varvec{X}} {\varvec{y}}\le {\varvec{h}},{\varvec{y}}\in \mathbb {R}^{n}_+\}. \end{aligned}$$

As a result, we have \(z_\mathsf{AR}^{I}(\mathcal{U}^{\downarrow },{\varvec{h}})=z_\mathsf{AR}^{I}(\mathcal{U},{\varvec{h}})\).

Similarly, any \({\varvec{y}}\in {\mathbb R}^{n}_+\) satisfying \({\varvec{B}}{\varvec{y}} \le {\varvec{h}}\) for all \({\varvec{B}}\in \mathcal{U}\) also satisfies \({\varvec{X}}{\varvec{y}} \le {\varvec{h}}\) for all \({\varvec{X}}\in \mathcal{U}^{\downarrow }\backslash \mathcal{U}\). Therefore, we conclude that \(z_\mathsf{Rob}^{I}(\mathcal{U}^{\downarrow },{\varvec{h}})=z_\mathsf{Rob}^{I}(\mathcal{U},{\varvec{h}})\). \(\square \)
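
As a numerical sanity check of Lemma 9 (a sketch with illustrative random data, assuming SciPy; these are not the paper's experiments), one can verify for a finite scenario set that augmenting \(\mathcal U\) with entrywise-dominated matrices, which lie in the down-hull, leaves the static optimum unchanged.

```python
# Sketch: z_Rob^I(U, h) = max { d^T y : B y <= h for all B in U, y >= 0 }
# for a finite scenario set U.  Adding matrices dominated by members of U
# (hence lying in the down-hull) must not change the optimum.
import numpy as np
from scipy.optimize import linprog

def z_rob(scenarios, h, d):
    A_ub = np.vstack(scenarios)                   # stack B y <= h per scenario
    b_ub = np.concatenate([h] * len(scenarios))
    res = linprog(-d, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * len(d), method="highs")
    return -res.fun                               # linprog minimizes, so negate

rng = np.random.default_rng(0)
h = np.array([1.0, 2.0])
d = np.ones(3)
U = [rng.uniform(0.1, 1.0, size=(2, 3)) for _ in range(3)]
U_aug = U + [0.5 * B for B in U] + [0.1 * B for B in U]   # dominated copies
print(np.isclose(z_rob(U, h, d), z_rob(U_aug, h, d)))     # True
```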

Therefore, we can assume without loss of generality that \(\mathcal{U}\) is down-monotone in (2.3) and (2.4). Now, we generalize the result to the two-stage problems \(\varPi _\mathsf{AR}\) in (1.1) and \(\varPi _\mathsf{Rob}\) in (1.2). Consider the following adjustable robust problem \(\varPi _\mathsf{AR}^{\downarrow }\):

$$\begin{aligned} z_\mathsf{AR}^{\downarrow } = \max \;&{\varvec{c}}^T {\varvec{x}} + \min _{{\varvec{B}}\in \mathcal{U}^{\downarrow }} \max _{{\varvec{y}}({\varvec{B}})} {\varvec{d}}^T {\varvec{y}}({\varvec{B}})\nonumber \\&{\varvec{A}}{\varvec{x}} + {\varvec{B}}{\varvec{y}}({\varvec{B}}) \; \le \; {\varvec{h}}\\&{\varvec{x}} \; \in \; {\mathbb R}^{n_1}\nonumber \\&{\varvec{y}}({\varvec{B}}) \; \in \; {\mathbb R}^{n_2}_+,\nonumber \end{aligned}$$
(6.2)

and the corresponding two-stage static robust problem \(\varPi _\mathsf{Rob}^{\downarrow }\)

$$\begin{aligned} z_\mathsf{Rob}^{\downarrow } = \max \;&{\varvec{c}}^T {\varvec{x}} + {\varvec{d}}^T {\varvec{y}}\nonumber \\&{\varvec{A}}{\varvec{x}} + {\varvec{B}}{\varvec{y}} \; \le \; {\varvec{h}}, \; \forall {\varvec{B}} \in \mathcal{U}^{\downarrow }\\&{\varvec{x}} \; \in \; {\mathbb R}^{n_1}\nonumber \\&{\varvec{y}} \; \in \; {\mathbb R}^{n_2}_+.\nonumber \end{aligned}$$
(6.3)

Again, given an uncertainty set \(\mathcal{U}\subseteq {\mathbb R}^{m\times n_2}_+\), if \(\mathcal{U}\) is down-monotone, then \(\mathcal{U}^{\downarrow }=\mathcal{U}\). Therefore, \(\varPi _\mathsf{AR}^{\downarrow }\) is essentially the same problem as \(\varPi _\mathsf{AR}\) and we have \(z_\mathsf{AR}^{\downarrow }=z_\mathsf{AR}\). Similarly, \(z_\mathsf{Rob}^{\downarrow }=z_\mathsf{Rob}\). For the case where \(\mathcal U\) is not down-monotone, we prove the following lemma:

Lemma 10

Given an uncertainty set \(\mathcal{U}\subseteq {\mathbb R}^{m\times n_2}_+\) and \({\varvec{h}}\in {\mathbb R}^m\), let \(z_\mathsf{AR}\) and \(z_\mathsf{Rob}\) be the optimal values of \(\varPi _\mathsf{AR}\) defined in (1.1) and \(\varPi _\mathsf{Rob}\) defined in (1.2), respectively. Suppose \(\mathcal{U}\) is not down-monotone, and let \(\mathcal{U}^{\downarrow }\) be defined as in (6.1). Let \(z_\mathsf{AR}^{\downarrow }\) and \(z_\mathsf{Rob}^{\downarrow }\) be the optimal values of \(\varPi _\mathsf{AR}^{\downarrow }\) defined in (6.2) and \(\varPi _\mathsf{Rob}^{\downarrow }\) defined in (6.3), respectively. Then,

$$\begin{aligned} z_\mathsf{AR}^{\downarrow }=z_\mathsf{AR}, z_\mathsf{Rob}^{\downarrow }=z_\mathsf{Rob}. \end{aligned}$$

Proof

Suppose \(({\varvec{x}}^*, {\varvec{y}}^*({\varvec{B}}), {\varvec{B}}\in \mathcal{U}^{\downarrow })\) is an optimal solution of \(\varPi _\mathsf{AR}^{\downarrow }\). Based on the discussion in Theorem 1, we can assume without loss of generality that \({\varvec{h}}-{\varvec{A}}{\varvec{x}}^*>{\varvec{0}}\). Then,

$$\begin{aligned} z_\mathsf{AR}^{\downarrow }&= {\varvec{c}}^T{\varvec{x}}^*+\min _{{\varvec{B}}\in \mathcal{U}^{\downarrow }}\max _{{\varvec{y}}\in {\mathbb R}^{n_2}_+}\left\{ {\varvec{d}}^T{\varvec{y}}\;\left| \;{\varvec{B}}{\varvec{y}}\le {\varvec{h}}-{\varvec{A}}{\varvec{x}}^*\right. \right\} \\&= {\varvec{c}}^T{\varvec{x}}^*+z_\mathsf{AR}^{I}(\mathcal{U}^{\downarrow },{\varvec{h}}-{\varvec{A}}{\varvec{x}}^*) \\&= {\varvec{c}}^T{\varvec{x}}^*+z_\mathsf{AR}^{I}(\mathcal{U},{\varvec{h}}-{\varvec{A}}{\varvec{x}}^*)\\&\le z_\mathsf{AR}. \end{aligned}$$

The third equality follows from Lemma 9, and the last inequality holds because \({\varvec{x}}={\varvec{x}}^*\) is a feasible first-stage solution for \(\varPi _\mathsf{AR}\). Therefore, \(z_\mathsf{AR}^{\downarrow }\le z_\mathsf{AR}\).

Conversely, suppose \((\tilde{{\varvec{x}}}, \tilde{{\varvec{y}}}({\varvec{B}}), {\varvec{B}}\in \mathcal{U})\) is an optimal solution of \(\varPi _\mathsf{AR}\). Again, we can assume without loss of generality that \({\varvec{h}}-{\varvec{A}}\tilde{{\varvec{x}}}>{\varvec{0}}\). Using similar arguments, we have

$$\begin{aligned} z_\mathsf{AR}&= {\varvec{c}}^T\tilde{{\varvec{x}}}+\min _{{\varvec{B}}\in \mathcal{U}}\max _{{\varvec{y}}\in {\mathbb R}^{n_2}_+}\left\{ \left. {\varvec{d}}^T{\varvec{y}}\;\right| \;{\varvec{B}}{\varvec{y}}\le {\varvec{h}}-{\varvec{A}}\tilde{{\varvec{x}}}\right\} \\&= {\varvec{c}}^T\tilde{{\varvec{x}}}+z_\mathsf{AR}^{I}(\mathcal{U},{\varvec{h}}-{\varvec{A}}\tilde{{\varvec{x}}}) \\&= {\varvec{c}}^T\tilde{{\varvec{x}}}+z_\mathsf{AR}^{I}(\mathcal{U}^{\downarrow },{\varvec{h}}-{\varvec{A}}\tilde{{\varvec{x}}}) \\&\le z_\mathsf{AR}^{\downarrow }. \end{aligned}$$

The third equality again follows from Lemma 9, and the last inequality holds because \({\varvec{x}}=\tilde{{\varvec{x}}}\) is a feasible first-stage solution for \(\varPi _\mathsf{AR}^{\downarrow }\). Therefore, \(z_\mathsf{AR}\le z_\mathsf{AR}^{\downarrow }\). Together with the previous inequality, we have \(z_\mathsf{AR}^{\downarrow }=z_\mathsf{AR}\). The proof of \(z_\mathsf{Rob}^{\downarrow }=z_\mathsf{Rob}\) follows in the same way and is omitted. \(\square \)

Lemma 11

Given a down-monotone set \(\mathcal{U}\subseteq {\mathbb R}^{m\times n}_+\), let \(T(\mathcal{U},{\varvec{h}})\) be defined as in (2.6), then \(T(\mathcal{U},{\varvec{h}})\) is down-monotone for all \({\varvec{h}}>{\varvec{0}}\).

Proof

Consider an arbitrary \({\varvec{h}}>{\varvec{0}}\) and \({\varvec{y}}\in T(\mathcal{U},{\varvec{h}})\subseteq {\mathbb R}^n_+\) such that

$$\begin{aligned} {\varvec{y}}={\varvec{B}}^T{\varvec{\lambda }}, {\varvec{h}}^T{\varvec{\lambda }}=1, {\varvec{\lambda }}\ge {\varvec{0}}, {\varvec{B}}\in \mathcal{U}. \end{aligned}$$

Then, for any \({\varvec{z}}\in {\mathbb R}^n_+\) such that \({\varvec{z}}\le {\varvec{y}}\), set

$$\begin{aligned} \hat{B}_{ij}=\frac{z_j}{y_j}B_{ij}, \quad i=1,\ldots ,m,\;j=1,\ldots ,n, \end{aligned}$$

with the convention \(\hat{B}_{ij}=0\) whenever \(y_j=0\) (in which case \(z_j=0\) as well). Clearly, \(\hat{{\varvec{B}}}\le {\varvec{B}}\) since \({\varvec{z}}\le {\varvec{y}}\). Therefore, \(\hat{{\varvec{B}}}\in \mathcal{U}\) from the assumption that \(\mathcal{U}\) is down-monotone. Then,

$$\begin{aligned} {\varvec{z}}=\hat{{\varvec{B}}}^T{\varvec{\lambda }}, {\varvec{h}}^T{\varvec{\lambda }}=1, {\varvec{\lambda }}\ge {\varvec{0}}, \hat{{\varvec{B}}}\in \mathcal{U}, \end{aligned}$$

which implies \({\varvec{z}}\in T(\mathcal{U},{\varvec{h}})\). \(\square \)

Appendix 2: Proofs of Lemmas 1 and 2

Proof of Lemma 1

Consider any \({\varvec{v}}_1, {\varvec{v}}_2 \in T(\mathcal{U},{\varvec{h}})\). Then, for \(j=1,2\),

$$\begin{aligned} {\varvec{v}}_j = {\varvec{B}}_j^T{\varvec{\lambda }}^j, {\varvec{h}}^T{\varvec{\lambda }}^j=1, {\varvec{\lambda }}^j\ge {\varvec{0}}, {\varvec{B}}_j\in \mathcal{U}. \end{aligned}$$

For arbitrary \( \alpha \in [0,1]\), let \(\mu _i = \alpha \lambda ^1_i + (1-\alpha ) \lambda ^2_i\) and \({\varvec{b}}_i^j={\varvec{B}}_j^T{\varvec{e}}_i\) for \(i=1,\ldots ,m\). Then,

$$\begin{aligned} \alpha {\varvec{v}}_1 + (1-\alpha ) {\varvec{v}}_2&= \sum _{i=1}^m \big ( \alpha \lambda ^1_i {\varvec{b}}^1_i + (1-\alpha ) \lambda ^2_i {\varvec{b}}^2_i \big )\\&= \sum _{i=1}^m \mu _i \left( \frac{\alpha \lambda ^1_i}{\mu _i} {\varvec{b}}^1_i + \frac{(1-\alpha )\lambda ^2_i}{\mu _i} {\varvec{b}}^2_i \right) \\&= \sum _{i=1}^m \mu _i \cdot \hat{{\varvec{b}}}_i\\&= \hat{{\varvec{B}}}^T{\varvec{\mu }}, \end{aligned}$$

where \(\hat{{\varvec{b}}}_i \in \mathcal{U}_i\) since \(\hat{{\varvec{b}}}_i\) is a convex combination of \({\varvec{b}}^1_i\) and \({\varvec{b}}^2_i\) for all \(i=1,\ldots ,m\) and \(\mathcal{U}_i\) is convex (terms with \(\mu _i=0\) vanish and can be dropped). Since \(\hat{{\varvec{B}}}\in \mathcal{U}\) (as \(\mathcal U\) is constraint-wise) and \({\varvec{h}}^T{\varvec{\mu }}=\alpha {\varvec{h}}^T{\varvec{\lambda }}^1 + (1-\alpha ) {\varvec{h}}^T{\varvec{\lambda }}^2 = 1\), we have

$$\begin{aligned} \alpha {\varvec{v}}_1 + (1-\alpha ) {\varvec{v}}_2 \in T(\mathcal{U},{\varvec{h}}). \end{aligned}$$

Therefore, \(T(\mathcal{U},{\varvec{h}})\) is convex. \(\square \)
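
Lemma 1 can also be checked numerically. The sketch below makes the illustrative assumption that each projection \(\mathcal{U}_i\) is a box \([{\varvec{0}},{\varvec{u}}_i]\), so that membership in \(T(\mathcal{U},{\varvec{h}})\) becomes an LP via the substitution \({\varvec{w}}_i=\lambda _i{\varvec{b}}_i\); it then confirms that the midpoint of two points of \(T(\mathcal{U},{\varvec{h}})\) lies in \(T(\mathcal{U},{\varvec{h}})\) again.

```python
# Sketch: for constraint-wise U with box projections U_i = [0, u_i],
#   v in T(U, h)  iff  exists lambda >= 0 and w_i with
#   h^T lambda = 1,  0 <= w_ij <= lambda_i u_ij,  sum_i w_i = v
# (substituting w_i = lambda_i b_i).  Requires NumPy and SciPy.
import numpy as np
from scipy.optimize import linprog

def in_T(v, u, h):
    m, n = u.shape
    nv = m + m * n                                  # variables: lambda, then w
    A_eq = [np.concatenate([h, np.zeros(m * n)])]   # h^T lambda = 1
    b_eq = [1.0]
    for j in range(n):                              # sum_i w_ij = v_j
        row = np.zeros(nv)
        row[m + j::n] = 1.0
        A_eq.append(row); b_eq.append(v[j])
    A_ub, b_ub = [], []
    for i in range(m):                              # w_ij - lambda_i u_ij <= 0
        for j in range(n):
            row = np.zeros(nv)
            row[m + i * n + j] = 1.0
            row[i] = -u[i, j]
            A_ub.append(row); b_ub.append(0.0)
    res = linprog(np.zeros(nv), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, None)] * nv, method="highs")
    return res.status == 0

rng = np.random.default_rng(1)
m, n = 3, 2
u = rng.uniform(0.5, 1.5, size=(m, n))
h = np.array([1.0, 2.0, 3.0])
pts = []
for _ in range(2):                                  # two random points of T(U, h)
    B = rng.uniform(0.0, 1.0, size=(m, n)) * u      # rows b_i in [0, u_i]
    lam = rng.random(m); lam /= h @ lam             # h^T lambda = 1
    pts.append(B.T @ lam)
mid = 0.5 * (pts[0] + pts[1])
print(in_T(pts[0], u, h), in_T(pts[1], u, h), in_T(mid, u, h))  # True True True
```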

Proof of Lemma 2

Since \(\mathcal{U}\) satisfies the scaled projections property, \(\mathcal{U}_j = \alpha _j S\) for some \(\alpha _j >0\) for all \(j=1,\ldots ,m\), where \(S\) is a convex set. Without loss of generality, suppose

$$\begin{aligned} \frac{\alpha _1}{h_1} \le \frac{\alpha _2}{h_2} \le \cdots \le \frac{\alpha _m}{h_m}. \end{aligned}$$

Then

$$\begin{aligned} \frac{\alpha _1}{h_1} S \subseteq \frac{\alpha _2}{h_2} S \subseteq \cdots \subseteq \frac{\alpha _m}{h_m} S, \end{aligned}$$

which implies

$$\begin{aligned} \frac{1}{h_1} \mathcal{U}_1 \subseteq \frac{1}{h_2} \mathcal{U}_2 \subseteq \cdots \subseteq \frac{1}{h_m} \mathcal{U}_m \subseteq T(\mathcal{U},{\varvec{h}}). \end{aligned}$$

The last inclusion holds because we can take \({\varvec{\mu }}=\frac{{\varvec{e}}_m}{h_m}\) in (2.6).

Now, consider an arbitrary \({\varvec{v}} \in T(\mathcal{U},{\varvec{h}})\) such that

$$\begin{aligned} {\varvec{v}} = {\varvec{B}}^T{\varvec{\lambda }}, {\varvec{h}}^T{\varvec{\lambda }}=1, {\varvec{\lambda }}\ge {\varvec{0}},{\varvec{B}}\in \mathcal{U}. \end{aligned}$$

Letting \({\varvec{b}}_j={\varvec{B}}^T{\varvec{e}}_j\), we have

$$\begin{aligned} {\varvec{v}}&= \sum _{j=1}^m \lambda _j{\varvec{b}}_j\\&= \sum _{j=1}^m \lambda _jh_j\cdot \frac{1}{h_j}{\varvec{b}}_j\\&= \frac{1}{h_m}\hat{{\varvec{b}}}, \end{aligned}$$

where \(\hat{{\varvec{b}}}\in \mathcal{U}_m\). The last equality holds because \({\varvec{h}}^T{\varvec{\lambda }}=1\) and \(\frac{1}{h_j} \mathcal{U}_j\subseteq \frac{1}{h_m} \mathcal{U}_m\) for all \(j=1,\ldots ,m-1\). Therefore,

$$\begin{aligned} T(\mathcal{U},{\varvec{h}})=\frac{1}{h_m} \mathcal{U}_m, \end{aligned}$$

which is convex. \(\square \)
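
A quick Monte Carlo check of Lemma 2 (a sketch; here \(S\) is taken, purely for illustration, to be the intersection of the Euclidean unit ball with the non-negative orthant): every sampled point of \(T(\mathcal{U},{\varvec{h}})\) should lie in \(\frac{1}{h_m}\mathcal{U}_m\), i.e., have norm at most \(\alpha _m/h_m\).

```python
# Sketch: with U_j = alpha_j * S and alpha_1/h_1 <= ... <= alpha_m/h_m,
# every point B^T lambda with h^T lambda = 1, lambda >= 0 has Euclidean
# norm at most alpha_m/h_m, i.e., it lies in (1/h_m) U_m.  Requires NumPy.
import numpy as np

rng = np.random.default_rng(2)
m, n = 4, 3
h = np.array([1.0, 2.0, 3.0, 4.0])
ratios = np.sort(rng.uniform(0.2, 2.0, size=m))    # alpha_j/h_j nondecreasing
alpha = ratios * h

def sample_S(n):
    """A random point of S: the unit ball intersected with R^n_+."""
    s = np.abs(rng.normal(size=n))
    return s / np.linalg.norm(s) * rng.random() ** (1.0 / n)

ok = True
for _ in range(10_000):
    lam = rng.random(m)
    lam /= h @ lam                                 # h^T lambda = 1
    v = sum(lam[j] * alpha[j] * sample_S(n) for j in range(m))
    ok &= np.linalg.norm(v) <= alpha[-1] / h[-1] + 1e-12
print(bool(ok))   # True
```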

Appendix 3: Tight example for measure of non-convexity bound

Theorem 7

Consider the following uncertainty set \(\mathcal U^{\theta }\), where \(\theta >1\):

$$\begin{aligned} \mathcal{U}^{\theta } = \left\{ {\varvec{B}} \in [0,1]^{n\times n} \; \left| \; B_{ij}=0, \; \forall i\ne j, \; \sum _{j=1}^n B_{jj}^{\theta } \le 1\right. \right\} . \end{aligned}$$

Then,

  1. \(T(\mathcal{U}^{\theta },{\varvec{h}})\) can be written as:

    $$\begin{aligned} T(\mathcal{U}^{\theta },{\varvec{h}})= \left\{ {\varvec{b}}\in {\mathbb R}^n_+ \left| \; \sum _{j=1}^n \left( \frac{b_j}{h_j}\right) ^{\frac{\theta }{\theta +1}}\le 1 \right. \right\} \end{aligned}$$
    (8.1)
  2. The convex hull of \(T(\mathcal{U}^{\theta },{\varvec{h}})\) can be written as:

    $$\begin{aligned} \mathsf{conv}(T(\mathcal{U}^{\theta },{\varvec{h}}))=\left\{ {\varvec{b}}\in {\mathbb R}^n_+ \left| \; \sum _{j=1}^n \frac{b_j}{h_j}\le 1\right. \right\} . \end{aligned}$$
    (8.2)
  3. \(T(\mathcal{U}^{\theta },{\varvec{h}})\) is non-convex for all \({\varvec{h}}>{\varvec{0}}\).

  4. \(\kappa (T(\mathcal{U}^{\theta },{\varvec{h}}))=n^{\frac{1}{\theta }}\) for all \({\varvec{h}}>{\varvec{0}}\).

Proof

  1. For given \({\varvec{h}}>{\varvec{0}}\) and \({\varvec{b}}\in T(\mathcal{U}^{\theta },{\varvec{h}})\), we have

    $$\begin{aligned} {\varvec{b}}={\varvec{B}}^T{\varvec{\mu }}, {\varvec{h}}^T{\varvec{\mu }}=1, {\varvec{\mu }}\ge {\varvec{0}}, {\varvec{B}}\in \mathcal{U}^{\theta }. \end{aligned}$$

    Let \(\lambda _i=h_i\mu _i\) for \(i=1,\ldots ,n\). Therefore, \({\varvec{e}}^T {\varvec{\lambda }} =1\) and

    $$\begin{aligned} {\varvec{b}}= {\varvec{B}}^T (\mathsf{diag}({\varvec{h}}))^{-1}{\varvec{\lambda }} = (\mathsf{diag}({\varvec{h}}))^{-1} {\varvec{B}}^T {\varvec{\lambda }}, \end{aligned}$$

    where \(\mathsf{diag}({\varvec{h}})\in {\mathbb R}^{n\times n}\) denotes the matrix with diagonal entries \(h_i, i\in [n]\), and off-diagonal entries zero. The second equality above follows as \({\varvec{B}}\) is diagonal. Therefore, \(\mathsf{diag}({\varvec{h}})\, {\varvec{b}} \in T(\mathcal{U}^{\theta },{\varvec{e}})\). Using a similar argument, we can show that \({\varvec{b}}\in T(\mathcal{U}^{\theta },{\varvec{e}})\) implies that \((\mathsf{diag}({\varvec{h}}))^{-1}{\varvec{b}}\in T(\mathcal{U}^{\theta },{\varvec{h}})\). Therefore, \(T(\mathcal{U}^{\theta },{\varvec{h}}) = (\mathsf{diag}({\varvec{h}}))^{-1} T(\mathcal{U}^{\theta },{\varvec{e}})\) and it is sufficient to show:

    $$\begin{aligned} T(\mathcal{U}^{\theta },{\varvec{e}})=\mathcal{A}:= \left\{ {\varvec{b}}\in {\mathbb R}^n_+ \left| \; \sum _{j=1}^n b_j^{\frac{\theta }{\theta +1}}\le 1 \right. \right\} . \end{aligned}$$

    Consider any \({\varvec{b}}\in \partial \mathcal{A}\), i.e., \({\varvec{b}}\in {\mathbb R}^n_+\) such that

    $$\begin{aligned} \sum _{j=1}^n b_j^{\frac{\theta }{\theta +1}}= 1. \end{aligned}$$

    Set

    $$\begin{aligned} \lambda _j=b_j^{\frac{\theta }{\theta +1}}, x_j=b_j^{\frac{1}{\theta +1}}. \end{aligned}$$

    Then,

    $$\begin{aligned} \lambda _j x_j=b_j, {\varvec{e}}^T{\varvec{\lambda }}=1, \sum _{j=1}^n {x_j}^{\theta }=1, \end{aligned}$$

    which implies \({\varvec{b}}\in T(\mathcal{U}^{\theta },{\varvec{e}})\). Since \(\partial \mathcal{A}\subseteq T(\mathcal{U}^{\theta },{\varvec{e}})\) and both \(\mathcal{A}\) and \(T(\mathcal{U}^{\theta },{\varvec{e}})\) are down-monotone, \(\mathcal{A}\subseteq T(\mathcal{U}^{\theta },{\varvec{e}})\). Conversely, consider the following problem:

    $$\begin{aligned} \max _{{\varvec{\lambda }}, {\varvec{x}}\ge {\varvec{0}}}\left\{ \sum _{j=1}^n (\lambda _jx_j)^{\frac{\theta }{\theta +1}}\;\left| \;{\varvec{e}}^T{\varvec{\lambda }}=1, \sum _{j=1}^n x_j^{\theta }\le 1\right. \right\} . \end{aligned}$$

    From Hölder's inequality, we have

    $$\begin{aligned} \sum _{j=1}^n (\lambda _jx_j)^{\frac{\theta }{\theta +1}}\le ({\varvec{e}}^T{\varvec{\lambda }})^{\frac{\theta }{\theta +1}}\cdot \left( \sum _{j=1}^n x_j^{\theta }\right) ^{\frac{1}{\theta +1}}\le 1. \end{aligned}$$

    Therefore, for any \({\varvec{b}}\in T(\mathcal{U}^{\theta },{\varvec{e}})\), we have

    $$\begin{aligned} \sum _{j=1}^n b_j^{\frac{\theta }{\theta +1}}\le 1, \end{aligned}$$

    which implies \({\varvec{b}}\in \mathcal{A}\). Therefore, \(T(\mathcal{U}^{\theta },{\varvec{e}})\subseteq \mathcal{A}\).

  2. Similarly, it is sufficient to show

    $$\begin{aligned} \mathsf{conv}(T(\mathcal{U}^{\theta },{\varvec{e}}))=\mathcal{B}:=\left\{ {\varvec{b}}\in {\mathbb R}^n_+ \;\left| \; \sum _{j=1}^n b_j\le 1\right. \right\} . \end{aligned}$$

    From (8.1), we see that \({\varvec{e}}_j\in T(\mathcal{U}^{\theta },{\varvec{e}})\). For any \({\varvec{b}}\in \partial \mathcal{B}\), by taking \({\varvec{\lambda }}={\varvec{b}}\) as the convex multiplier, we have

    $$\begin{aligned} {\varvec{b}}=\sum _{j=1}^n b_j{\varvec{e}}_j. \end{aligned}$$

    Therefore, \(\partial \mathcal{B}\subseteq \mathsf{conv}(T(\mathcal{U}^{\theta },{\varvec{e}}))\). Since both \(\mathcal{B}\) and \(\mathsf{conv}(T(\mathcal{U}^{\theta },{\varvec{e}}))\) are down-monotone, we have \(\mathcal{B}\subseteq \mathsf{conv}(T(\mathcal{U}^{\theta },{\varvec{e}}))\). Conversely, consider the following problem:

    $$\begin{aligned} \max _{{\varvec{b}}\ge {\varvec{0}}}\left\{ {\varvec{e}}^T{\varvec{b}}\;\left| \; \sum _{j=1}^n b_j^{\frac{\theta }{1+\theta }}\le 1\right. \right\} =\max _{{\varvec{a}}\ge {\varvec{0}}}\left\{ \sum _{j=1}^n a_j^{\frac{1+\theta }{\theta }}\;\left| \;{\varvec{e}}^T{\varvec{a}} \le 1\right. \right\} , \end{aligned}$$

    where the equality follows from the substitution \(a_j=b_j^{\frac{\theta }{1+\theta }}\).

    Note that

    $$\begin{aligned} f({\varvec{x}})=\sum _{j=1}^n x_j^{\frac{1+\theta }{\theta }} \end{aligned}$$

    is a convex function. Therefore,

    $$\begin{aligned} \sum _{j=1}^n a_j^{\frac{1+\theta }{\theta }}\le ({\varvec{e}}^T{\varvec{a}})^{\frac{1+\theta }{\theta }}\le 1. \end{aligned}$$

    Therefore, the optimal value of the problem above is at most 1, so for any \({\varvec{b}}\in T(\mathcal{U}^{\theta },{\varvec{e}})\) we have \({\varvec{b}}\in \mathcal{B}\). Since \(\mathcal{B}\) is convex, \(\mathsf{conv}(T(\mathcal{U}^{\theta },{\varvec{e}}))\subseteq \mathcal{B}\).

  3. From (8.1) and (8.2), we see that \(\frac{1}{n}{\varvec{h}}\in \mathsf{conv}(T(\mathcal{U}^{\theta },{\varvec{h}}))\), but \(\frac{1}{n}{\varvec{h}}\not \in T(\mathcal{U}^{\theta },{\varvec{h}})\). Therefore, \(T(\mathcal{U}^{\theta },{\varvec{h}})\) is non-convex for all \({\varvec{h}}>{\varvec{0}}\).

  4. Now, we compute \(\kappa (T(\mathcal{U}^{\theta },{\varvec{h}}))\). Recall that

    $$\begin{aligned} \kappa (T(\mathcal{U}^{\theta },{\varvec{h}}))&= \min \{ \alpha \; | \; \mathsf{conv}(T(\mathcal{U}^{\theta },{\varvec{h}}))\subseteq \alpha T(\mathcal{U}^{\theta },{\varvec{h}})\}\nonumber \\&= \min \left\{ \alpha \; \left| \; \frac{1}{\alpha }\,{\mathsf{conv}(T(\mathcal{U}^{\theta },{\varvec{h}}))}\subseteq T(\mathcal{U}^{\theta },{\varvec{h}})\right. \right\} . \end{aligned}$$

    From (8.2) and scaling, we can observe that it is equivalent to find the largest \(\alpha \) such that the hyperplane

    $$\begin{aligned} \left\{ {\varvec{b}}\in \mathbb {R}^n_+ \;\left| \; \sum _{j=1}^n \frac{b_j}{h_j}=\frac{1}{\alpha }\right. \right\} \end{aligned}$$

    intersects with the positive boundary of \(T(\mathcal{U}^{\theta },{\varvec{h}})\). Therefore, we formulate the following problem:

    $$\begin{aligned} (\kappa (T(\mathcal{U}^{\theta },{\varvec{h}})))^{-1}&= \min _{{\varvec{b}}\ge {\varvec{0}}}\left\{ \sum _{j=1}^n \frac{b_j}{h_j} \;\left| \; \sum _{j=1}^n \left( \frac{b_j}{h_j}\right) ^{\frac{\theta }{1+\theta }}=1\right. \right\} \\&= \min _{{\varvec{a}}\ge {\varvec{0}}}\left\{ \sum _{j=1}^n a_j^{\frac{1+\theta }{\theta }} \; \left| \; \sum _{j=1}^n a_j=1\right. \right\} . \end{aligned}$$

    Solving the KKT conditions of the convex problem above yields the optimal solution \({\varvec{a}}=\frac{1}{n}\cdot {\varvec{e}}\). Therefore, we have

    $$\begin{aligned} \kappa (T(\mathcal{U}^{\theta },{\varvec{h}}))=\left( n\cdot n^{-\frac{1+\theta }{\theta }}\right) ^{-1}=n^{\frac{1}{\theta }}. \end{aligned}$$

    \(\square \)
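
The value \(\kappa (T(\mathcal{U}^{\theta },{\varvec{h}}))=n^{1/\theta }\) is easy to confirm numerically. The sketch below (illustrative choices of \(n\) and \(\theta \), assuming SciPy) solves the simplex-constrained problem from part 4 and also checks the non-convexity witness \(\frac{1}{n}{\varvec{h}}\) from part 3.

```python
# Sketch: kappa^{-1} = min { sum_j a_j^{(1+theta)/theta} : e^T a = 1, a >= 0 },
# with optimum n^{-1/theta} attained at a = e/n.  Requires NumPy and SciPy.
import numpy as np
from scipy.optimize import minimize

n, theta = 5, 2.0
p = (1.0 + theta) / theta

obj = lambda a: np.sum(np.clip(a, 0.0, None) ** p)   # clip guards roundoff
res = minimize(obj, x0=np.full(n, 1.0 / n),
               constraints=[{"type": "eq", "fun": lambda a: a.sum() - 1.0}],
               bounds=[(0.0, None)] * n, method="SLSQP")
print(np.isclose(res.fun, n ** (-1.0 / theta)))      # True: kappa = n^{1/theta}

# Part 3's witness: h/n satisfies (8.2) with equality but violates (8.1),
# since sum_j (1/n)^{theta/(theta+1)} = n^{1/(theta+1)} > 1.
print(n * (1.0 / n) ** (theta / (theta + 1.0)) > 1.0)  # True
```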

Appendix 4: Proofs of Lemmas 7 and 8

Proof of Lemma 7

We can write the dual of the inner problem of (4.12):

$$\begin{aligned} z_\mathsf{AR}^{(B,h,d)}&= \min _{({\varvec{B}},{\varvec{h}},{\varvec{d}})\in \mathcal{U}^{B,h,d},{\varvec{\alpha }}\in {\mathbb R}^{m}_+} \left\{ \left. {\varvec{h}}^T {\varvec{\alpha }} \;\right| \;{\varvec{B}}^T {\varvec{\alpha }} \ge {\varvec{d}}\right\} \\&= \min _{({\varvec{B}},{\varvec{h}})\in \mathcal{U}^{B,h},{\varvec{d}}\in \mathcal{U}^d, {\varvec{\alpha }}\in {\mathbb R}^{m}_+,\lambda } \left\{ \left. \lambda {\varvec{h}}^T \left( \frac{{\varvec{\alpha }}}{\lambda }\right) \; \right| \;\lambda {\varvec{B}}^T\left( \frac{{\varvec{\alpha }}}{\lambda }\right) \ge {\varvec{d}}, {\varvec{h}}^T{\varvec{\alpha }}=\lambda \right\} \\&= \min _{({\varvec{b}},t)\in T(\mathcal{U}^{B,h},{\varvec{e}}),{\varvec{d}}\in \mathcal{U}^d,\lambda } \left\{ \left. \lambda t\;\right| \;\lambda {\varvec{b}}\ge {\varvec{d}}\right\} , \end{aligned}$$

where the second equality holds because \(\mathcal{U}^{B,h,d}=\mathcal{U}^{B,h} \times \mathcal{U}^d\). \(\square \)

Proof of Lemma 8

Suppose

$$\begin{aligned} \mathcal{U}^{B,h,d} = \mathsf{conv}\left( ({\varvec{B}}_1,{\varvec{h}}_1,{\varvec{d}}_1), \ldots , ({\varvec{B}}_K,{\varvec{h}}_K,{\varvec{d}}_K)\right) , \end{aligned}$$

where \(({\varvec{B}}_j,{\varvec{h}}_j,{\varvec{d}}_j), j=1,\ldots ,K\) are the extreme points of \(\mathcal{U}^{B,h,d}\). We can rewrite (4.13) as follows.

$$\begin{aligned} z_\mathsf{Rob}^{(B,h,d)}=\max \{ \;z\;|\;{\varvec{B}}_j{\varvec{y}}\le {\varvec{h}}_j, z-{\varvec{d}}_j^T {\varvec{y}}\le 0, \;\forall j=1,\ldots ,K, {\varvec{y}}\in {\mathbb R}^{n}_+\}. \end{aligned}$$

By writing the dual problem, we have:

$$\begin{aligned} z_\mathsf{Rob}^{(B,h,d)}= \min _{{\varvec{\alpha }}_j\in {\mathbb R}^{m}_+, {\varvec{\beta }}\in {\mathbb R}^{K}_+} \left\{ \; \sum _{j=1}^K {\varvec{h}}_j^T {\varvec{\alpha }}_j \;\left| \;\sum _{j=1}^K {\varvec{B}}_j^T {\varvec{\alpha }}_j\ge \sum _{j=1}^K \beta _j{\varvec{d}}_j,{\varvec{e}}^T{\varvec{\beta }}=1\right. \right\} . \end{aligned}$$

Since \(\mathcal{U}^{B,h,d} = \mathcal{U}^{B,h} \times \mathcal{U}^d\), \({\varvec{d}}\) can be chosen independently of \({\varvec{B}}\) and \({\varvec{h}}\). Denote \(\theta _j={\varvec{h}}_j^T{\varvec{\alpha }}_j\) and \(\lambda ={\varvec{e}}^T{\varvec{\theta }}\). Note that if \({\varvec{\alpha }}_j={\varvec{0}}\) for some \(j\in [K]\), we can remove the terms \({\varvec{h}}_j^T{\varvec{\alpha }}_j\) and \({\varvec{B}}_j^T{\varvec{\alpha }}_j\) from the problem. Therefore, we can assume without loss of generality that \({\varvec{\theta }}>{\varvec{0}}\) and \(\lambda >0\). Then,

$$\begin{aligned} z_\mathsf{Rob}^{(B,h,d)}&= \min _{{\varvec{d}}\in \mathcal{U}^d, {\varvec{\alpha _j}} \ge {\varvec{0}},\lambda } \left\{ \; \lambda \sum _{j=1}^K \frac{\theta _j}{\lambda }{\varvec{h}}_j^T\left( \frac{{\varvec{\alpha }}_j}{\theta _j}\right) \; \left| \; \lambda \sum _{j=1}^K \frac{\theta _j}{\lambda }{\varvec{B}}_j^T \left( \frac{{\varvec{\alpha }}_j}{\theta _j}\right) \ge {\varvec{d}}\right. \right\} \\&= \min _{(\hat{{\varvec{b}}}_j,\hat{t}_j)\in T(\mathcal{U}^{B,h},{\varvec{e}}), {\varvec{d}}\in \mathcal{U}^d,\lambda } \left\{ \;\lambda \sum _{j=1}^K \frac{\theta _j}{\lambda }\hat{t}_j\;\left| \;\lambda \sum _{j=1}^K \frac{\theta _j}{\lambda } \hat{{\varvec{b}}}_j\ge {\varvec{d}} \right. \right\} \\&= \min _{({\varvec{b}},t)\in T(\mathcal{U}^{B,h},{\varvec{e}}),{\varvec{d}}\in \mathcal{U}^d,\lambda } \left\{ \left. \lambda t\;\right| \;\lambda {\varvec{b}}\ge {\varvec{d}}\right\} , \end{aligned}$$

where the second equality holds because \({\varvec{e}}^T\left( \frac{{\varvec{\alpha }}_j}{\theta _j}\right) =1, j=1,\ldots ,K\).
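
For concreteness, the following sketch (illustrative random extreme points, assuming SciPy; not data from the paper) implements the epigraph LP above and confirms that, at the optimum, \(z\) equals the worst-case objective \(\min _{j} {\varvec{d}}_j^T{\varvec{y}}\).

```python
# Sketch of the epigraph reformulation used in the proof of Lemma 8:
#   z_Rob = max { z : B_j y <= h_j,  z - d_j^T y <= 0, j = 1..K,  y >= 0 }.
# Requires NumPy and SciPy.
import numpy as np
from scipy.optimize import linprog

def z_rob_epigraph(Bs, hs, ds):
    K = len(Bs); m, n = Bs[0].shape
    c = np.zeros(n + 1); c[-1] = -1.0              # variables (y, z); max z
    A_ub, b_ub = [], []
    for B, h, d in zip(Bs, hs, ds):
        for i in range(m):                         # B_j y <= h_j, row by row
            A_ub.append(np.append(B[i], 0.0)); b_ub.append(h[i])
        A_ub.append(np.append(-d, 1.0)); b_ub.append(0.0)   # z <= d_j^T y
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * n + [(None, None)], method="highs")
    return res.x[:n], res.x[-1]

rng = np.random.default_rng(4)
K, m, n = 3, 2, 3
Bs = [rng.uniform(0.1, 1.0, size=(m, n)) for _ in range(K)]
hs = [rng.uniform(1.0, 2.0, size=m) for _ in range(K)]
ds = [rng.uniform(0.5, 1.5, size=n) for _ in range(K)]
y, z = z_rob_epigraph(Bs, hs, ds)
print(np.isclose(z, min(float(d @ y) for d in ds)))   # True: z is the worst case
```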

Cite this article

Bertsimas, D., Goyal, V. & Lu, B.Y. A tight characterization of the performance of static solutions in two-stage adjustable robust linear optimization. Math. Program. 150, 281–319 (2015). https://doi.org/10.1007/s10107-014-0768-y
