Abstract
In this paper, we study the performance of static solutions for two-stage adjustable robust linear optimization problems with uncertain constraint and objective coefficients, and give a tight characterization of the adaptivity gap. Computing an optimal solution to an adjustable robust optimization problem is often intractable, since it requires computing a solution for every possible realization of the uncertain parameters (Feige et al. in Lect Notes Comput Sci 4513:439–453, 2007). On the other hand, a static solution is a single (here and now) solution that is feasible for all possible realizations of the uncertain parameters, and it can be computed efficiently for most dynamic optimization problems. We show that for a fairly general class of uncertainty sets, a static solution is optimal for two-stage adjustable robust linear packing problems. This is highly surprising in view of the usual perception that static solutions are conservative. Furthermore, when a static solution is not optimal for the adjustable robust problem, we give a tight approximation bound on its performance that is related to a measure of non-convexity of a transformation of the uncertainty set. We also show that our bound is at least as good as (and in many cases significantly better than) the bound given by the symmetry of the uncertainty set (Bertsimas and Goyal in Math Methods Oper Res 77(3):323–343, 2013; Bertsimas et al. in Math Oper Res 36(1):24–54, 2011).
References
Beale, E.M.L.: On minimizing a convex function subject to linear inequalities. J. R. Stat. Soc. Ser. B (Methodological) 17(2), 173–184 (1955)
Ben-Tal, A., El Ghaoui, L., Nemirovski, A.: Robust Optimization. Princeton University Press, Princeton (2009)
Ben-Tal, A., Nemirovski, A.: Robust convex optimization. Math. Oper. Res. 23(4), 769–805 (1998)
Ben-Tal, A., Nemirovski, A.: Robust solutions of uncertain linear programs. Oper. Res. Lett. 25(1), 1–14 (1999)
Ben-Tal, A., Nemirovski, A.: Robust optimization-methodology and applications. Math. Program. 92(3), 453–480 (2002)
Bertsimas, D., Brown, D.B., Caramanis, C.: Theory and applications of robust optimization. SIAM Rev. 53(3), 464–501 (2011)
Bertsimas, D., Goyal, V.: On the power of robust solutions in two-stage stochastic and adaptive optimization problems. Math. Oper. Res. 35, 284–305 (2010)
Bertsimas, D., Goyal, V.: On the approximability of adjustable robust convex optimization under uncertainty. Math. Methods Oper. Res. 77(3), 323–343 (2013)
Bertsimas, D., Goyal, V., Sun, X.A.: A geometric characterization of the power of finite adaptability in multistage stochastic and adaptive optimization. Math. Oper. Res. 36(1), 24–54 (2011)
Bertsimas, D., Sim, M.: Robust discrete optimization and network flows. Math. Program. Ser. B 98, 49–71 (2003)
Bertsimas, D., Sim, M.: The price of robustness. Oper. Res. 52(2), 35–53 (2004)
Borwein, J., Lewis, A.: Convex Analysis and Nonlinear Optimization: Theory and Examples, vol. 3. Springer, Berlin (2006)
Dantzig, G.B.: Linear programming under uncertainty. Manag. Sci. 1, 197–206 (1955)
Dyer, M., Stougie, L.: Computational complexity of stochastic programming problems. Math. Program. 106(3), 423–432 (2006)
El Ghaoui, L., Lebret, H.: Robust solutions to least-squares problems with uncertain data. SIAM J. Matrix Anal. Appl. 18, 1035–1064 (1997)
Feige, U., Jain, K., Mahdian, M., Mirrokni, V.: Robust combinatorial optimization with exponential scenarios. Lect. Notes Comput. Sci. 4513, 439–453 (2007)
Goldfarb, D., Iyengar, G.: Robust portfolio selection problems. Math. Oper. Res. 28(1), 1–38 (2003)
Grötschel, M., Lovász, L., Schrijver, A.: The ellipsoid method and its consequences in combinatorial optimization. Combinatorica 1(2), 169–197 (1981)
Infanger, G.: Planning Under Uncertainty: Solving Large-scale Stochastic Linear Programs. Boyd & Fraser Pub Co, San Francisco (1994)
Kall, P., Wallace, S.W.: Stochastic Programming. Wiley, New York (1994)
Minkowski, H.: Allgemeine Lehrsätze über konvexe Polyeder. Ges. Abh. 2, 103–121 (1911)
von Neumann, J.: Zur Theorie der Gesellschaftsspiele. Math. Ann. 100(1), 295–320 (1928)
Prékopa, A.: Stochastic Programming. Kluwer Academic Publishers, Dordrecht, Boston (1995)
Shapiro, A.: Stochastic programming approach to optimization under uncertainty. Math. Program. Ser. B 112(1), 183–220 (2008)
Shapiro, A., Dentcheva, D., Ruszczyński, A.: Lectures on Stochastic Programming: Modeling and Theory. Society for Industrial and Applied Mathematics, Philadelphia (2009)
Shapiro, A., Nemirovski, A.: On complexity of stochastic programming problems. In: Jeyakumar, V., Rubinov, A.M. (eds.) Continuous Optimization: Current Trends and Applications, pp. 111–144 (2005)
Additional information
V. Goyal is supported by NSF Grant CMMI-1201116, DOE Grant DE-AR0000235 and Google Research Award.
B. Lu is supported by NSF Grant CMMI-1201116.
Appendices
Appendix 1: Down-monotone uncertainty sets
In this section, we show that in \(\varPi _\mathsf{AR}^I(\mathcal{U},{\varvec{h}})\) defined in (2.3) and \(\varPi _\mathsf{Rob}^I(\mathcal{U},{\varvec{h}})\) defined in (2.4), we can assume \(\mathcal U\) to be down-monotone without loss of generality, where down-monotone is defined as follows.
Definition 3
A set \(\mathcal{S}\subseteq {\mathbb R}^n_+\) is down-monotone if \({\varvec{s}}\in \mathcal{S}, {\varvec{t}}\in {\mathbb R}^n_+\) and \({\varvec{t}}\le {\varvec{s}}\) implies \({\varvec{t}}\in \mathcal{S}\).
Given \(\mathcal{S}\subseteq {\mathbb R}^n_+\), we can construct the down-hull of \(\mathcal{S}\), denoted by \(\mathcal{S}^{\downarrow }\), as follows.
We emphasize that the down-hull of a non-negative uncertainty set is still contained in the non-negative orthant. Given an uncertainty set \(\mathcal{U}\subseteq {\mathbb R}^{m\times n}_+\) and \({\varvec{h}}>{\varvec{0}}\), if \(\mathcal{U}\) is down-monotone, then \(\mathcal{U}^{\downarrow }=\mathcal{U}\). Therefore, \(\varPi _\mathsf{AR}^I(\mathcal{U}^{\downarrow },{\varvec{h}})\) is essentially the same problem as \(\varPi _\mathsf{AR}^I(\mathcal{U},{\varvec{h}})\), and we have \(z_\mathsf{AR}^{I}(\mathcal{U}^{\downarrow },{\varvec{h}})=z_\mathsf{AR}^{I}(\mathcal{U},{\varvec{h}})\). A similar argument applies to \(\varPi _\mathsf{Rob}^I(\mathcal{U},{\varvec{h}})\), so \(z_\mathsf{Rob}^{I}(\mathcal{U}^{\downarrow },{\varvec{h}})=z_\mathsf{Rob}^{I}(\mathcal{U},{\varvec{h}})\). On the other hand, if \(\mathcal{U}\) is not down-monotone, then \(\mathcal{U}\subsetneq \mathcal{U}^{\downarrow }\). For this case, we prove the following lemma.
Lemma 9
Given an uncertainty set \(\mathcal{U}\subseteq {\mathbb R}^{m\times n}_+\) and \({\varvec{h}}>{\varvec{0}}\), let \(z_\mathsf{AR}^{I}(\mathcal{U},{\varvec{h}})\) be the optimal value of \(\varPi _\mathsf{AR}^I(\mathcal{U},{\varvec{h}})\) defined in (2.3), and let \(z_\mathsf{Rob}^{I}(\mathcal{U},{\varvec{h}})\) be the optimal value of \(\varPi _\mathsf{Rob}^I(\mathcal{U},{\varvec{h}})\) defined in (2.4). Suppose \(\mathcal{U}\) is not down-monotone, and let \(\mathcal{U}^{\downarrow }\) be defined as in (6.1). Then,
Proof
Consider an arbitrary \({\varvec{X}}\in \mathcal{U}^{\downarrow }\) with \({\varvec{X}}\not \in \mathcal{U}\), i.e., \({\varvec{X}}\in \mathcal{U}^{\downarrow }\backslash \mathcal{U}\). From (6.1), there exists \({\varvec{B}}\in \mathcal{U}\) such that \({\varvec{X}}\le {\varvec{B}}\). Since \({\varvec{B}}\), \({\varvec{X}}\) and \({\varvec{y}}\) are all non-negative, any \({\varvec{y}}\in {\mathbb R}^{n}_+\) such that \({\varvec{B}}{\varvec{y}} \le {\varvec{h}}\) also satisfies \({\varvec{X}}{\varvec{y}} \le {\varvec{h}}\). Therefore,
Taking the minimum over all \({\varvec{B}}\in \mathcal{U}\) on the left side, we have
Since \({\varvec{X}}\) was chosen arbitrarily in \(\mathcal{U}^{\downarrow }\backslash \mathcal{U}\), we can take the minimum over all \({\varvec{X}}\in \mathcal{U}^{\downarrow }\backslash \mathcal{U}\) on the right side:
Therefore, the minimizer of the outer problem of \(\varPi _\mathsf{AR}^I(\mathcal{U}^{\downarrow },{\varvec{h}})\) is in \(\mathcal U\), which implies
As a result, we have \(z_\mathsf{AR}^{I}(\mathcal{U}^{\downarrow },{\varvec{h}})=z_\mathsf{AR}^{I}(\mathcal{U},{\varvec{h}})\).
Similarly, any \({\varvec{y}}\in {\mathbb R}^{n}_+\) that satisfies \({\varvec{B}}{\varvec{y}} \le {\varvec{h}}\) for all \({\varvec{B}}\in \mathcal{U}\) is guaranteed to satisfy \({\varvec{X}}{\varvec{y}} \le {\varvec{h}}\) for all \({\varvec{X}}\in \mathcal{U}^{\downarrow }\backslash \mathcal{U}\). Therefore, we conclude that \(z_\mathsf{Rob}^{I}(\mathcal{U}^{\downarrow },{\varvec{h}})=z_\mathsf{Rob}^{I}(\mathcal{U},{\varvec{h}})\). \(\square \)
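The componentwise-monotonicity fact driving the proof of Lemma 9 can be checked numerically. The following sketch uses randomly generated data (not from the paper): it samples nonnegative \({\varvec{X}}\le {\varvec{B}}\) and verifies that any nonnegative \({\varvec{y}}\) feasible for \({\varvec{B}}\) remains feasible for \({\varvec{X}}\).

```python
import numpy as np

# Toy illustration: if X <= B entrywise and X, B, y are nonnegative,
# then every y with By <= h also satisfies Xy <= h.
rng = np.random.default_rng(0)
m, n = 4, 3
h = rng.uniform(1.0, 2.0, size=m)               # h > 0
for _ in range(1000):
    B = rng.uniform(0.0, 1.0, size=(m, n))      # B in the nonnegative orthant
    X = B * rng.uniform(0.0, 1.0, size=(m, n))  # X <= B, so X is in the down-hull of {B}
    y = rng.uniform(0.0, 1.0, size=n)
    if np.all(B @ y <= h):                      # y feasible for B ...
        assert np.all(X @ y <= h)               # ... hence feasible for X
print("monotonicity check passed")
```

This is exactly why passing from \(\mathcal{U}\) to \(\mathcal{U}^{\downarrow }\) adds no binding constraints.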
Therefore, we can assume without loss of generality that \(\mathcal{U}\) is down-monotone in (2.3) and (2.4). We now generalize the result to the two-stage problems \(\varPi _\mathsf{AR}\) in (1.1) and \(\varPi _\mathsf{Rob}\) in (1.2). Consider the following adjustable robust problem \(\varPi _\mathsf{AR}^{\downarrow }\)
and the corresponding two-stage static robust problem \(\varPi _\mathsf{Rob}^{\downarrow }\)
Again, given an uncertainty set \(\mathcal{U}\subseteq {\mathbb R}^{m\times n_2}_+\), if \(\mathcal{U}\) is down-monotone, then \(\mathcal{U}^{\downarrow }=\mathcal{U}\). Therefore, \(\varPi _\mathsf{AR}^{\downarrow }\) is essentially the same problem as \(\varPi _\mathsf{AR}\), and we have \(z_\mathsf{AR}^{\downarrow }=z_\mathsf{AR}\). Similarly, \(z_\mathsf{Rob}^{\downarrow }=z_\mathsf{Rob}\). For the case where \(\mathcal U\) is not down-monotone, we prove the following lemma:
Lemma 10
Given an uncertainty set \(\mathcal{U}\subseteq {\mathbb R}^{m\times n_2}_+\) and \({\varvec{h}}\in {\mathbb R}^m\), let \(z_\mathsf{AR}\) and \(z_\mathsf{Rob}\) be the optimal values of \(\varPi _\mathsf{AR}\) defined in (1.1) and \(\varPi _\mathsf{Rob}\) defined in (1.2), respectively. Suppose \(\mathcal{U}\) is not down-monotone, and let \(\mathcal{U}^{\downarrow }\) be defined as in (6.1). Let \(z_\mathsf{AR}^{\downarrow }\) and \(z_\mathsf{Rob}^{\downarrow }\) be the optimal values of \(\varPi _\mathsf{AR}^{\downarrow }\) defined in (6.2) and \(\varPi _\mathsf{Rob}^{\downarrow }\) defined in (6.3), respectively. Then,
Proof
Suppose \(({\varvec{x}}^*, {\varvec{y}}^*({\varvec{B}}), {\varvec{B}}\in \mathcal{U}^{\downarrow })\) is an optimal solution of \(\varPi _\mathsf{AR}^{\downarrow }\). Based on the discussion in Theorem 1, we can assume without loss of generality that \({\varvec{h}}-{\varvec{A}}{\varvec{x}}^*>{\varvec{0}}\). Then,
The second equality holds from Lemma 9, and the last inequality holds because \({\varvec{x}}={\varvec{x}}^*\) is a feasible first-stage solution for \(\varPi _\mathsf{AR}\). Therefore, \(z_\mathsf{AR}^{\downarrow }\le z_\mathsf{AR}\).
Conversely, suppose \((\tilde{{\varvec{x}}}, \tilde{{\varvec{y}}}({\varvec{B}}), {\varvec{B}}\in \mathcal{U})\) is an optimal solution of \(\varPi _\mathsf{AR}\). Again, we can assume without loss of generality that \({\varvec{h}}-{\varvec{A}}\tilde{{\varvec{x}}}>{\varvec{0}}\). Using similar arguments, we have
The last inequality holds because \({\varvec{x}}=\tilde{{\varvec{x}}}\) is a feasible first-stage solution for \(\varPi _\mathsf{AR}^{\downarrow }\). Therefore, \(z_\mathsf{AR}\le z_\mathsf{AR}^{\downarrow }\). Together with the previous result, we have \(z_\mathsf{AR}^{\downarrow }=z_\mathsf{AR}\). The equality \(z_\mathsf{Rob}^{\downarrow }=z_\mathsf{Rob}\) can be shown in the same way; we omit the details here. \(\square \)
Lemma 11
Given a down-monotone set \(\mathcal{U}\subseteq {\mathbb R}^{m\times n}_+\), let \(T(\mathcal{U},{\varvec{h}})\) be defined as in (2.6). Then \(T(\mathcal{U},{\varvec{h}})\) is down-monotone for all \({\varvec{h}}>{\varvec{0}}\).
Proof
Consider an arbitrary \({\varvec{h}}>{\varvec{0}}\) and \({\varvec{y}}\in T(\mathcal{U},{\varvec{h}})\subseteq {\mathbb R}^n_+\) such that
Then, for any \({\varvec{z}}\in {\mathbb R}^n_+\) such that \({\varvec{z}}\le {\varvec{y}}\), set
Clearly, \(\hat{{\varvec{B}}}\le {\varvec{B}}\) since \({\varvec{z}}\le {\varvec{y}}\). Therefore, \(\hat{{\varvec{B}}}\in \mathcal{U}\) from the assumption that \(\mathcal{U}\) is down-monotone. Then,
which implies \({\varvec{z}}\in T(\mathcal{U},{\varvec{h}})\). \(\square \)
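The construction of \(\hat{{\varvec{B}}}\) in the proof of Lemma 11 can be illustrated numerically. The sketch below assumes \(T(\mathcal{U},{\varvec{h}})=\{{\varvec{B}}^T{\varvec{\mu }} : {\varvec{h}}^T{\varvec{\mu }}=1, {\varvec{\mu }}\ge {\varvec{0}}, {\varvec{B}}\in \mathcal{U}\}\) (the form used later in the proof of Theorem 7) and checks that scaling each column of \({\varvec{B}}\) by \(z_j/y_j\) produces \(\hat{{\varvec{B}}}\le {\varvec{B}}\) with \(\hat{{\varvec{B}}}^T{\varvec{\mu }}={\varvec{z}}\).

```python
import numpy as np

# Sketch of the Lemma 11 construction: given y = B^T mu in T(U,h) and
# 0 <= z <= y, scaling column j of B by z_j / y_j yields Bhat <= B
# (so Bhat is in U whenever U is down-monotone) with Bhat^T mu = z.
rng = np.random.default_rng(1)
m, n = 3, 4
h = rng.uniform(1.0, 2.0, size=m)
mu = rng.uniform(0.1, 1.0, size=m)
mu /= h @ mu                               # enforce h^T mu = 1
B = rng.uniform(0.5, 1.0, size=(m, n))
y = B.T @ mu                               # a point of T(U,h)
z = y * rng.uniform(0.0, 1.0, size=n)      # any 0 <= z <= y
Bhat = B * (z / y)                         # scale column j by z_j / y_j
assert np.all(Bhat <= B + 1e-12)           # Bhat <= B entrywise
assert np.allclose(Bhat.T @ mu, z)         # the same certificate mu produces z
print("down-monotonicity construction verified")
```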
Appendix 2: Proofs of Lemmas 1 and 2
Proof of Lemma 1
Consider any \({\varvec{v}}_1, {\varvec{v}}_2 \in T(\mathcal{U},{\varvec{h}})\). Therefore, for \(j=1,2\),
For any \( \alpha \in [0,1]\), let \(\mu _i = \alpha \lambda ^1_i + (1-\alpha ) \lambda ^2_i\) and \({\varvec{b}}_i^j={\varvec{B}}_j^T{\varvec{e}}_i\) for \(i=1,\ldots ,m\) and \(j=1,2\). Then,
where \(\hat{{\varvec{b}}}_i \in \mathcal{U}_i\), since \(\hat{{\varvec{b}}}_i\) is a convex combination of \({\varvec{b}}^1_i\) and \({\varvec{b}}^2_i\) for all \(i=1,\ldots ,m\) and \(\mathcal{U}_i\) is convex. Also, since \(\hat{{\varvec{B}}}\in \mathcal{U}\) (because \(\mathcal U\) is constraint-wise) and \({\varvec{h}}^T{\varvec{\mu }}=\alpha {\varvec{h}}^T{\varvec{\lambda }}^1 + (1-\alpha ) {\varvec{h}}^T{\varvec{\lambda }}^2 = 1\), we have
Therefore, \(T(\mathcal{U},{\varvec{h}})\) is convex. \(\square \)
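The reweighting step in the proof of Lemma 1 can be verified numerically. The following sketch assumes \(T(\mathcal{U},{\varvec{h}})=\{{\varvec{B}}^T{\varvec{\mu }} : {\varvec{h}}^T{\varvec{\mu }}=1, {\varvec{\mu }}\ge {\varvec{0}}, {\varvec{B}}\in \mathcal{U}\}\) and uses random data: for a constraint-wise set (each row of \({\varvec{B}}\) chosen independently), the matrix whose row \(i\) mixes the rows of \({\varvec{B}}_1\) and \({\varvec{B}}_2\) with weights \(\alpha \lambda ^1_i/\mu _i\) and \((1-\alpha )\lambda ^2_i/\mu _i\) realizes the convex combination of the two points.

```python
import numpy as np

# Sketch of the Lemma 1 argument: convex combinations of points of T(U,h)
# are realized by reweighting the rows of B1 and B2.
rng = np.random.default_rng(4)
m, n, alpha = 3, 4, 0.3
h = rng.uniform(1.0, 2.0, size=m)
B1 = rng.uniform(0.5, 1.0, size=(m, n))
B2 = rng.uniform(0.5, 1.0, size=(m, n))
lam1 = rng.uniform(0.1, 1.0, size=m); lam1 /= h @ lam1   # h^T lam1 = 1
lam2 = rng.uniform(0.1, 1.0, size=m); lam2 /= h @ lam2   # h^T lam2 = 1
v1, v2 = B1.T @ lam1, B2.T @ lam2                        # two points of T(U,h)
mu = alpha * lam1 + (1 - alpha) * lam2                   # h^T mu = 1
# row i of Bhat mixes row i of B1 and B2 with weights in [0, 1],
# so Bhat stays in U when U is constraint-wise with convex U_i
w1 = (alpha * lam1 / mu)[:, None]
Bhat = w1 * B1 + (1 - w1) * B2
assert np.isclose(h @ mu, 1.0)
assert np.allclose(Bhat.T @ mu, alpha * v1 + (1 - alpha) * v2)
print("Lemma 1 convex-combination check passed")
```

Note that the construction needs \(\mathcal U\) to be constraint-wise: each row of \(\hat{{\varvec{B}}}\) is perturbed with a different mixing weight.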
Proof of Lemma 2
Since \(\mathcal{U}\) satisfies the scaled projections property, \(\mathcal{U}_j = \alpha _j S\) for some \(\alpha _j >0\) for all \(j=1,\ldots ,m\) where \(S\) is a convex set. Suppose
Then
which implies
The last set inclusion holds because we can take \({\varvec{\mu }}=\frac{{\varvec{e}}_m}{h_m}\) in (2.6).
Now, consider an arbitrary \({\varvec{v}} \in T(\mathcal{U},{\varvec{h}})\) such that
Letting \({\varvec{b}}_j={\varvec{B}}^T{\varvec{e}}_j\), we have
where \(\hat{{\varvec{b}}}\in \mathcal{U}_m\). The last equation holds because \({\varvec{h}}^T{\varvec{\lambda }}=1\) and \(\frac{1}{h_j} \mathcal{U}_j\subseteq \frac{1}{h_m} \mathcal{U}_m\) for all \(j=1,\ldots ,m-1\). Therefore,
which is convex. \(\square \)
Appendix 3: Tight example for measure of non-convexity bound
Theorem 7
Consider the following uncertainty set, \(\mathcal U^{\theta }\),
with \(\theta >1\). Then,
1.
\(T(\mathcal{U}^{\theta },{\varvec{h}})\) can be written as:
$$\begin{aligned} T(\mathcal{U}^{\theta },{\varvec{h}})= \left\{ {\varvec{b}}\in {\mathbb R}^n_+ \left| \; \sum _{j=1}^n \left( \frac{b_j}{h_j}\right) ^{\frac{\theta }{\theta +1}}\le 1 \right. \right\} \end{aligned}$$(8.1)
2.
The convex hull of \(T(\mathcal{U}^{\theta },{\varvec{h}})\) can be written as:
$$\begin{aligned} \mathsf{conv}(T(\mathcal{U}^{\theta },{\varvec{h}}))=\left\{ {\varvec{b}}\in {\mathbb R}^n_+ \left| \; \sum _{j=1}^n \frac{b_j}{h_j}\le 1\right. \right\} . \end{aligned}$$(8.2)
3.
\(T(\mathcal{U}^{\theta },{\varvec{h}})\) is non-convex for all \({\varvec{h}}>{\varvec{0}}\).
4.
\(\kappa (T(\mathcal{U}^{\theta },{\varvec{h}}))=n^{\frac{1}{\theta }}\) for all \({\varvec{h}}>{\varvec{0}}\).
Proof
1.
For given \({\varvec{h}}>{\varvec{0}}\) and \({\varvec{b}}\in T(\mathcal{U}^{\theta },{\varvec{h}})\), we have
$$\begin{aligned} {\varvec{b}}={\varvec{B}}^T{\varvec{\mu }}, {\varvec{h}}^T{\varvec{\mu }}=1, {\varvec{\mu }}\ge {\varvec{0}}, {\varvec{B}}\in \mathcal{U}^{\theta }. \end{aligned}$$Let \(\lambda _i=h_i\mu _i\) for \(i=1,\ldots ,n\). Therefore, \({\varvec{e}}^T {\varvec{\lambda }} =1\) and
$$\begin{aligned} {\varvec{b}}= {\varvec{B}}^T (\mathsf{diag}({\varvec{h}}))^{-1}{\varvec{\lambda }} = (\mathsf{diag}({\varvec{h}}))^{-1} {\varvec{B}}^T {\varvec{\lambda }}, \end{aligned}$$where \(\mathsf{diag}({\varvec{h}})\in {\mathbb R}^{n\times n}\) denotes the matrix with diagonal entries \(h_i, i\in [n]\), and off-diagonal entries zero. The second equality above follows as \({\varvec{B}}\) is diagonal. Therefore, \(\mathsf{diag}({\varvec{h}})\, {\varvec{b}} \in T(\mathcal{U}^{\theta },{\varvec{e}})\). Using a similar argument, we can show that \({\varvec{b}}\in T(\mathcal{U}^{\theta },{\varvec{e}})\) implies that \((\mathsf{diag}({\varvec{h}}))^{-1}{\varvec{b}}\in T(\mathcal{U}^{\theta },{\varvec{h}})\). Therefore, \(T(\mathcal{U}^{\theta },{\varvec{h}}) = (\mathsf{diag}({\varvec{h}}))^{-1} T(\mathcal{U}^{\theta },{\varvec{e}})\) and it is sufficient to show:
$$\begin{aligned} T(\mathcal{U}^{\theta },{\varvec{e}})=\mathcal{A}:= \left\{ {\varvec{b}}\in {\mathbb R}^n_+ \left| \; \sum _{j=1}^n b_j^{\frac{\theta }{\theta +1}}\le 1 \right. \right\} . \end{aligned}$$Consider any \({\varvec{b}}\in \partial \mathcal{A}\), i.e., \({\varvec{b}}\in {\mathbb R}^n_+\) such that
$$\begin{aligned} \sum _{j=1}^n b_j^{\frac{\theta }{\theta +1}}= 1. \end{aligned}$$Set
$$\begin{aligned} \lambda _j=b_j^{\frac{\theta }{\theta +1}}, x_j=b_j^{\frac{1}{\theta +1}}. \end{aligned}$$Then,
$$\begin{aligned} \lambda _j x_j=b_j, {\varvec{e}}^T{\varvec{\lambda }}=1, \sum _{j=1}^n {x_j}^{\theta }=1, \end{aligned}$$which implies \({\varvec{b}}\in T(\mathcal{U}^{\theta },{\varvec{e}})\). Since both \(\mathcal{A}\) and \(T(\mathcal{U}^{\theta },{\varvec{e}})\) are down-monotone, \(\mathcal{A}\subseteq T(\mathcal{U}^{\theta },{\varvec{e}})\). Conversely, consider the following problem:
$$\begin{aligned} \max _{{\varvec{\lambda }}, {\varvec{x}}\ge {\varvec{0}}}\left\{ \sum _{j=1}^n (\lambda _jx_j)^{\frac{\theta }{\theta +1}}\;\left| \;{\varvec{e}}^T{\varvec{\lambda }}=1, \sum _{j=1}^n x_j^{\theta }\le 1\right. \right\} \end{aligned}$$From Hölder's inequality, we have
$$\begin{aligned} \sum _{j=1}^n (\lambda _jx_j)^{\frac{\theta }{\theta +1}}\le ({\varvec{e}}^T{\varvec{\lambda }})^{\frac{\theta }{\theta +1}}\cdot \left( \sum _{j=1}^n x_j^{\theta }\right) ^{\frac{1}{\theta +1}}\le 1. \end{aligned}$$Therefore, for any \({\varvec{b}}\in T(\mathcal{U}^{\theta },{\varvec{e}})\), we have
$$\begin{aligned} \sum _{j=1}^n b_j^{\frac{\theta }{\theta +1}}\le 1, \end{aligned}$$which implies \({\varvec{b}}\in \mathcal{A}\). Therefore, \(T(\mathcal{U}^{\theta },{\varvec{e}})\subseteq \mathcal{A}\).
2.
Similarly, it is sufficient to show
$$\begin{aligned} \mathsf{conv}(T(\mathcal{U}^{\theta },{\varvec{e}}))=\mathcal{B}:=\left\{ {\varvec{b}}\in {\mathbb R}^n_+ \;\left| \; \sum _{j=1}^n b_j\le 1\right. \right\} . \end{aligned}$$From (8.1), we see that \({\varvec{e}}_j\in T(\mathcal{U}^{\theta },{\varvec{e}})\). For any \({\varvec{b}}\in \partial \mathcal{B}\), by taking \({\varvec{\lambda }}={\varvec{b}}\) as the convex multiplier, we have
$$\begin{aligned} {\varvec{b}}=\sum _{j=1}^n b_j{\varvec{e}}_j. \end{aligned}$$Therefore, \(\partial \mathcal{B}\subseteq \mathsf{conv}(T(\mathcal{U}^{\theta },{\varvec{e}}))\). Since both \(\mathcal{B}\) and \(\mathsf{conv}(T(\mathcal{U}^{\theta },{\varvec{e}}))\) are down-monotone, we have \(\mathcal{B}\subseteq \mathsf{conv}(T(\mathcal{U}^{\theta },{\varvec{e}}))\). Conversely, consider the following problem:
$$\begin{aligned} \max _{{\varvec{b}}\ge {\varvec{0}}}\left\{ {\varvec{e}}^T{\varvec{b}}\;\left| \; \sum _{j=1}^n b_j^{\frac{\theta }{1+\theta }}\le 1\right. \right\} =\max _{{\varvec{a}}\ge {\varvec{0}}}\left\{ \sum _{j=1}^n a_j^{\frac{1+\theta }{\theta }}\;\left| \;{\varvec{e}}^T{\varvec{a}} \le 1\right. \right\} , \end{aligned}$$where we substitute \(a_j = b_j^{\frac{\theta }{1+\theta }}\). Note that
$$\begin{aligned} f({\varvec{x}})=\sum _{j=1}^n x_j^{\frac{1+\theta }{\theta }} \end{aligned}$$is a convex function. Therefore,
$$\begin{aligned} \sum _{j=1}^n a_j^{\frac{1+\theta }{\theta }}\le ({\varvec{e}}^T{\varvec{a}})^{\frac{1+\theta }{\theta }}\le 1. \end{aligned}$$Therefore, \({\varvec{e}}^T{\varvec{b}}\le 1\) for any \({\varvec{b}}\in T(\mathcal{U}^{\theta },{\varvec{e}})\), i.e., \({\varvec{b}}\in \mathcal{B}\). Since \(\mathcal{B}\) is convex, \(\mathsf{conv}(T(\mathcal{U}^{\theta },{\varvec{e}}))\subseteq \mathcal{B}\).
3.
From (8.1) and (8.2), we see that \(\frac{1}{n}{\varvec{h}}\in \mathsf{conv}(T(\mathcal{U}^{\theta },{\varvec{h}}))\), but \(\frac{1}{n}{\varvec{h}}\not \in T(\mathcal{U}^{\theta },{\varvec{h}})\). Therefore, \(T(\mathcal{U}^{\theta },{\varvec{h}})\) is non-convex for all \({\varvec{h}}>{\varvec{0}}\).
4.
Now, we compute \(\kappa (T(\mathcal{U}^{\theta },{\varvec{h}}))\). Recall that
$$\begin{aligned} \kappa (T(\mathcal{U}^{\theta },{\varvec{h}}))&= \min \{ \alpha \; | \; \mathsf{conv}(T(\mathcal{U}^{\theta },{\varvec{h}}))\subseteq \alpha T(\mathcal{U}^{\theta },{\varvec{h}})\}\nonumber \\&= \min \left\{ \alpha \; \left| \; \frac{1}{\alpha }{\mathsf{conv}(T(\mathcal{U}^{\theta },{\varvec{h}}))}\subseteq T(\mathcal{U}^{\theta },{\varvec{h}})\right. \right\} . \end{aligned}$$From (8.2) and scaling, we observe that this is equivalent to finding the largest \(\alpha \) such that the hyperplane
$$\begin{aligned} \left\{ {\varvec{b}}\in \mathbb {R}^n_+ \;\left| \; \sum _{j=1}^n \frac{b_j}{h_j}=\frac{1}{\alpha }\right. \right\} \end{aligned}$$intersects the positive boundary of \(T(\mathcal{U}^{\theta },{\varvec{h}})\). Therefore, we formulate the following problem:
$$\begin{aligned} (\kappa (T(\mathcal{U}^{\theta },{\varvec{h}})))^{-1}&= \min _{{\varvec{b}}\ge {\varvec{0}}}\left\{ \sum _{j=1}^n \frac{b_j}{h_j} \;\left| \; \sum _{j=1}^n \left( \frac{b_j}{h_j}\right) ^{\frac{\theta }{1+\theta }}=1\right. \right\} \\&= \min _{{\varvec{a}}\ge {\varvec{0}}}\left\{ \sum _{j=1}^n a_j^{\frac{1+\theta }{\theta }} \; \left| \; \sum _{j=1}^n a_j=1\right. \right\} . \end{aligned}$$Solving the KKT conditions for the convex problem above, the optimal solution is \({\varvec{a}}=\frac{1}{n}\cdot {\varvec{e}}\). Therefore, we have
$$\begin{aligned} \kappa (T(\mathcal{U}^{\theta },{\varvec{h}}))=\left( n\cdot n^{-\frac{1+\theta }{\theta }}\right) ^{-1}=n^{\frac{1}{\theta }}. \end{aligned}$$\(\square \)
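The claims of Theorem 7 can be sanity-checked numerically. The sketch below fixes the illustrative values \(\theta =2\), \(n=4\), \({\varvec{h}}={\varvec{e}}\) (our choice, not from the paper) and takes \(T(\mathcal{U}^{\theta },{\varvec{e}})\) to be the set in (8.1): it samples the Hölder bound from part 1, checks that \({\varvec{h}}/n\) separates \(T\) from its convex hull (part 3), and confirms \(\kappa =n^{1/\theta }\) (part 4) by random search over the simplex.

```python
import numpy as np

# Numerical sanity checks for Theorem 7 with theta = 2, n = 4, h = e.
rng = np.random.default_rng(3)
theta, n = 2.0, 4
p = theta / (theta + 1.0)

# Part 1: every b_j = lambda_j * x_j with e^T lambda = 1 and
# sum_j x_j^theta = 1 satisfies sum_j b_j^p <= 1 (Holder's inequality).
for _ in range(1000):
    lam = rng.dirichlet(np.ones(n))                   # e^T lam = 1, lam >= 0
    x = rng.uniform(0.0, 1.0, size=n)
    x /= np.sum(x ** theta) ** (1.0 / theta)          # sum x_j^theta = 1
    assert np.sum((lam * x) ** p) <= 1.0 + 1e-9

# Part 3: h/n is in conv(T) by (8.2) but violates (8.1), so T is non-convex.
mid = np.full(n, 1.0 / n)
assert np.sum(mid) <= 1.0 + 1e-12                     # in conv(T(U,h))
assert np.sum(mid ** p) > 1.0                         # not in T(U,h)

# Part 4: random search over the simplex never beats a = e/n, and
# kappa = 1 / (min value) = n^(1/theta).
best = min(np.sum(rng.dirichlet(np.ones(n)) ** ((1 + theta) / theta))
           for _ in range(20000))
opt = n ** (-1.0 / theta)                             # value at a = e/n
assert best >= opt - 1e-12
kappa = 1.0 / opt
assert np.isclose(kappa, n ** (1.0 / theta))
print("Theorem 7 checks passed")
```

For these values, \(\kappa =4^{1/2}=2\), matching the closed form \(n^{1/\theta }\).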
Appendix 4: Proofs of Lemmas 7 and 8
Proof of Lemma 7
We can write the dual of the inner problem of (4.12) as follows:
where the second equality holds because \(\mathcal{U}^{B,h,d}=\mathcal{U}^{B,h} \times \mathcal{U}^d\). \(\square \)
Proof of Lemma 8
Suppose
where \(({\varvec{B}}_j,{\varvec{h}}_j,{\varvec{d}}_j), j=1,\ldots ,K\) are the extreme points of \(\mathcal{U}^{B,h,d}\). We can rewrite (4.13) as follows.
By writing the dual problem, we have:
Since \(\mathcal{U}^{B,h,d} = \mathcal{U}^{B,h} \times \mathcal{U}^d\), \({\varvec{d}}\) can be chosen independently of \({\varvec{B}}\) and \({\varvec{h}}\). Denote \(\theta _j={\varvec{h}}_j^T{\varvec{\alpha }}_j\) and \(\lambda ={\varvec{e}}^T{\varvec{\theta }}\). Note that if \({\varvec{\alpha }}_j={\varvec{0}}\) for some \(j\in [K]\), we can remove the terms \({\varvec{h}}_j^T{\varvec{\alpha }}_j\) and \({\varvec{B}}_j^T{\varvec{\alpha }}_j\) from the problem. Therefore, we can assume without loss of generality that \({\varvec{\theta }}>{\varvec{0}}\) and \(\lambda >0\). Then,
where the second equality holds because \({\varvec{e}}^T\left( \frac{{\varvec{\alpha }}_j}{\theta _j}\right) =1, j=1,\ldots ,K\).
Bertsimas, D., Goyal, V. & Lu, B.Y. A tight characterization of the performance of static solutions in two-stage adjustable robust linear optimization. Math. Program. 150, 281–319 (2015). https://doi.org/10.1007/s10107-014-0768-y