Abstract
In this paper, we study the performance of static solutions in two-stage adjustable robust packing linear optimization problems with uncertain constraint coefficients. Such problems arise in many important applications, such as revenue management and resource allocation, where demand requests have uncertain resource requirements. The goal is to find a two-stage solution that maximizes the worst-case objective value over all possible realizations of the second-stage constraints from a given uncertainty set. We consider the case where the uncertainty set is column-wise and constraint-wise (any constraint describing the set involves entries of only a single column or a single row). This is a fairly general class of uncertainty sets for modeling constraint coefficient uncertainty. We show that the two-stage adjustable robust problem is \(\varOmega (\log n)\)-hard to approximate. On the positive side, we show that a static solution is an \(O\big (\log n \cdot \min (\log \varGamma , \log (m+n))\big )\)-approximation for the two-stage adjustable robust problem, where m and n denote the numbers of rows and columns of the constraint matrix and \(\varGamma \) is the maximum possible ratio of upper bounds of the uncertain constraint coefficients. Therefore, for constant \(\varGamma \), the performance bound for static solutions, and hence the adaptivity gap, surprisingly matches the hardness of approximation for the adjustable problem. Furthermore, in general, the static solution provides nearly the best efficient approximation for the two-stage adjustable robust problem.
References
Arora, S., Babai, L., Stern, J., Sweedyk, Z.: The hardness of approximate optima in lattices, codes, and systems of linear equations. In: 34th Annual Symposium on Foundations of Computer Science, 1993. Proceedings, pp. 724–733. IEEE (1993)
Ben-Tal, A., El Ghaoui, L., Nemirovski, A.: Robust Optimization. Princeton University Press, Princeton (2009)
Ben-Tal, A., Nemirovski, A.: Robust convex optimization. Math. Oper. Res. 23(4), 769–805 (1998)
Ben-Tal, A., Nemirovski, A.: Robust solutions of uncertain linear programs. Oper. Res. Lett. 25(1), 1–14 (1999)
Ben-Tal, A., Nemirovski, A.: Robust optimization-methodology and applications. Math. Program. 92(3), 453–480 (2002)
Bertsimas, D., Brown, D.B., Caramanis, C.: Theory and applications of robust optimization. SIAM Rev. 53(3), 464–501 (2011)
Bertsimas, D., de Ruiter, F.J.C.T.: Duality in two-stage adaptive linear optimization: faster computation and stronger bounds. INFORMS J. Comput. 28(3), 500–511 (2016)
Bertsimas, D., Goyal, V.: On the power of robust solutions in two-stage stochastic and adaptive optimization problems. Math. Oper. Res. 35, 284–305 (2010)
Bertsimas, D., Goyal, V.: On the power and limitations of affine policies in two-stage adaptive optimization. Math. Program. 134(2), 491–531 (2012)
Bertsimas, D., Goyal, V.: On the approximability of adjustable robust convex optimization under uncertainty. Math. Methods Oper. Res. 77(3), 323–343 (2013)
Bertsimas, D., Goyal, V., Lu, B.Y.: A tight characterization of the performance of static solutions in two-stage adjustable robust linear optimization. Math. Program. 150(2), 281–319 (2015)
Bertsimas, D., Goyal, V., Sun, X.A.: A geometric characterization of the power of finite adaptability in multistage stochastic and adaptive optimization. Math. Oper. Res. 36(1), 24–54 (2011)
Bertsimas, D., Natarajan, K., Teo, C.-P.: Applications of semidefinite optimization in stochastic project scheduling. Technical report, High Performance Computation for Engineered Systems, Singapore–MIT Alliance (2002)
Bertsimas, D., Sim, M.: Robust discrete optimization and network flows. Math. Program. Ser. B 98, 49–71 (2003)
Bertsimas, D., Sim, M.: The price of robustness. Oper. Res. 52(1), 35–53 (2004)
Dean, B.C., Goemans, M.X., Vondrák, J.: Adaptivity and approximation for stochastic packing problems. In: Proceedings of the Sixteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 395–404. Society for Industrial and Applied Mathematics (2005)
El Ghaoui, L., Lebret, H.: Robust solutions to least-squares problems with uncertain data. SIAM J. Matrix Anal. Appl. 18, 1035–1064 (1997)
Feige, U.: A threshold of ln n for approximating set cover. J. ACM (JACM) 45(4), 634–652 (1998)
Feige, U., Jain, K., Mahdian, M., Mirrokni, V.: Robust combinatorial optimization with exponential scenarios. Lect. Notes Comput. Sci. 4513, 439–453 (2007)
Goel, A., Indyk, P.: Stochastic load balancing and related problems. In: 40th Annual Symposium on Foundations of Computer Science, 1999, pp. 579–586. IEEE (1999)
Goldfarb, D., Iyengar, G.: Robust portfolio selection problems. Math. Oper. Res. 28(1), 1–38 (2003)
Goyal, V., Ravi, R.: A ptas for the chance-constrained knapsack problem with random item sizes. Oper. Res. Lett. 38(3), 161–164 (2010)
Hadjiyiannis, M.J., Goulart, P.J., Kuhn, D.: A scenario approach for estimating the suboptimality of linear decision rules in two-stage robust optimization. In: 2011 50th IEEE Conference on Decision and Control and European Control Conference (CDC-ECC), pp. 7386–7391. IEEE (2011)
Kall, P., Wallace, S.W.: Stochastic Programming. Wiley, New York (1994)
Prékopa, A.: Stochastic Programming. Kluwer Academic Publishers, Dordrecht (1995)
Shapiro, A.: Stochastic programming approach to optimization under uncertainty. Math. Program. Ser. B 112(1), 183–220 (2008)
Shapiro, A., Dentcheva, D., Ruszczyński, A.: Lectures on Stochastic Programming: Modeling and Theory. Society for Industrial and Applied Mathematics, Philadelphia (2009)
Shapiro, A., Nemirovski, A.: On complexity of stochastic programming problems. In: Jeyakumar, V., Rubinov, A.M. (eds.) Continuous Optimization: Current Trends and Applications, pp. 111–144. Springer, Berlin (2005)
Soyster, A.: Convex programming with set-inclusive constraints and applications to inexact linear programming. Oper. Res. 21(5), 1154–1157 (1973)
Vazirani, V.: Approximation Algorithms. Springer, Berlin (2013)
Wiesemann, W., Kuhn, D., Rustem, B.: Robust resource allocations in temporal networks. Math. Program. 135(1–2), 437–471 (2012)
Vineet Goyal has been supported by NSF Grant CMMI-1201116, CMMI-1351838 (CAREER), Google Faculty Research Award and IBM Faculty Award. Brian Y. Lu has been supported by NSF Grant CMMI-1201116.
Appendices
Appendix A: Proof of Theorem 2
In this section, we show that the general two-stage adjustable robust problem \(\varPi _\mathsf{AR}^\mathsf{Gen}\) (2.1) is \(\varOmega (2^{\log ^{1-\epsilon }m})\)-hard to approximate for any constant \(0<\epsilon <1\). We prove this by an approximation-preserving reduction from the Label-Cover-Problem. The reduction is similar in spirit to the reduction from the Set-Cover-Problem to the two-stage adjustable robust problem.
Label-cover-problem We are given a finite set V (\(|V|=m\)), a family of subsets \(\{{\mathcal V}_1,\ldots ,{\mathcal V}_K\}\) of V, and a graph \(G=(V, E)\). Let H be a supergraph with vertices \(\{{\mathcal V}_1,\ldots ,{\mathcal V}_K\}\) and edge set F, where \(({\mathcal V}_i, {\mathcal V}_j)\in F\) if there exists \((k,l)\in E\) such that \(k\in {\mathcal V}_i\) and \(l\in {\mathcal V}_j\). The goal is to find a smallest-cardinality set \(C\subseteq V\) such that F is covered, i.e., for each \(({\mathcal V}_i, {\mathcal V}_j)\in F\), there exist \(k\in {\mathcal V}_i\cap C\) and \(l\in {\mathcal V}_j\cap C\) such that \((k,l)\in E\).
The label cover problem is \(\varOmega (2^{\log ^{1-\epsilon }m})\)-hard to approximate for any constant \(0<\epsilon <1\), i.e., there is no polynomial time approximation algorithm that gives an \(O(2^{\log ^{1-\epsilon }m})\)-approximation for any constant \(0<\epsilon <1\) unless \(\mathbf {NP}\subseteq \mathbf {DTIME}(m^{\text {polylog}(m)})\) [1].
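To make the covering condition concrete, the following sketch builds a tiny label-cover instance and checks covers by brute force. The instance (sets, edges) is hypothetical and chosen only to illustrate the definition above; edges are treated as ordered pairs for simplicity.

```python
from itertools import combinations

# Toy instance: ground set V, edges E of G, subsets V_i, supergraph edges F.
V = set(range(6))
E = {(0, 2), (1, 3), (4, 5)}                       # edges as ordered pairs
subsets = {0: {0, 1}, 1: {2, 3}, 2: {4}, 3: {5}}   # the family V_1, ..., V_K
# (i, j) is in F if some edge (k, l) in E has k in V_i and l in V_j.
F = {(i, j) for i in subsets for j in subsets if i != j
     and any((k, l) in E for k in subsets[i] for l in subsets[j])}

def covers(C):
    """C covers F if every (V_i, V_j) in F has an E-edge inside
    (V_i ∩ C) x (V_j ∩ C)."""
    return all(
        any((k, l) in E for k in subsets[i] & C for l in subsets[j] & C)
        for (i, j) in F
    )

# Smallest cover by exhaustive search (fine at this size).
best = min((set(c) for r in range(len(V) + 1)
            for c in combinations(sorted(V), r) if covers(set(c))),
           key=len)
```

Here \(F=\{(0,1),(2,3)\}\): the pair \(({\mathcal V}_2,{\mathcal V}_3)\) forces \(\{4,5\}\subseteq C\), and \(({\mathcal V}_0,{\mathcal V}_1)\) additionally requires both endpoints of \((0,2)\) or of \((1,3)\), so the minimum cover has size 4.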
Proof of Theorem 2
Consider an instance \(\mathcal I\) of Label-Cover-Problem with ground elements V (\(|V|=m\)), graph \(G=(V, E)\), a family of subsets of V: \(({\mathcal V}_1,\ldots ,{\mathcal V}_K)\) and a supergraph \(H=(\{{\mathcal V}_1,\ldots ,{\mathcal V}_K\}, F)\) where \(|F|=n\). We construct the following instance \(\mathcal{I}'\) of the general adjustable robust problem \(\varPi _\mathsf{AR}^\mathsf{Gen}\) (2.1):
where \(d_1=d_2=\cdots =d_n=1\), \(\varvec{I}_m\) is the m-dimensional identity matrix and each column set of \(\mathcal{U}_F\subseteq {\mathbb R}^{m\times n}_+\) corresponds to an edge \(({\mathcal V}_i, {\mathcal V}_j)\in F\) with
Therefore, \(\mathcal U\) is column-wise with column sets \(\mathcal{U}_{({\mathcal V}_i, {\mathcal V}_j)}, \forall ({\mathcal V}_i, {\mathcal V}_j)\in F\) and \(\mathcal{U}_j,j\in [m]\) where \(\mathcal{U}_j=\{-\varvec{e}_j\}\), i.e., there is no uncertainty in \(\mathcal{U}_j\). The instance \(\mathcal{I}'\) of \(\varPi _\mathsf{AR}^\mathsf{Gen}\) can be formulated as
Suppose \((\hat{\varvec{y}},\hat{\varvec{z}}, {\hat{\varvec{b}}}_{({\mathcal V}_i, {\mathcal V}_j)}, ({\mathcal V}_i, {\mathcal V}_j)\in F)\) is a feasible solution for instance \(\mathcal{I}'\). Then, we can compute a label cover of instance \(\mathcal I\) with cardinality at most \(\varvec{e}^T\hat{\varvec{y}}-\varvec{e}^T\hat{\varvec{z}}\). From strong duality, there exists an optimal solution \({\hat{\varvec{\mu }}}\) for
and \(\varvec{e}^T{\hat{\varvec{\mu }}}=\varvec{e}^T\hat{\varvec{y}}-\varvec{e}^T\hat{\varvec{z}}\). For each \(({\mathcal V}_i, {\mathcal V}_j)\in F\), consider a basic optimal solution \(({\tilde{\varvec{b}}}_{({\mathcal V}_i, {\mathcal V}_j)},({\mathcal V}_i, {\mathcal V}_j)\in F)\) where
Therefore, \({\tilde{\varvec{b}}}_{({\mathcal V}_i, {\mathcal V}_j)}\) is a vertex of \(\mathcal{U}_{({\mathcal V}_i, {\mathcal V}_j)}\) for each \(({\mathcal V}_i, {\mathcal V}_j)\in F\), which implies that \({\tilde{\varvec{b}}}_{({\mathcal V}_i, {\mathcal V}_j)}=\frac{1}{2}(\varvec{e}_{k_i}+\varvec{e}_{l_j})\) for some \((k_i,l_j) \in E\) and \(k_i\in {\mathcal V}_i, l_j\in {\mathcal V}_j\). Also, \({\tilde{\varvec{b}}}_{({\mathcal V}_i, {\mathcal V}_j)}^T{\hat{\varvec{\mu }}}\ge 1, \forall ({\mathcal V}_i, {\mathcal V}_j)\in F\). Now, let \(\tilde{\varvec{\mu }}\) be the optimal solution of the following LP:
Clearly, \(\varvec{e}^T\tilde{\varvec{\mu }}\le \varvec{e}^T{\hat{\varvec{\mu }}}\). Also, since \({\tilde{\varvec{b}}}_{({\mathcal V}_i, {\mathcal V}_j)}=\frac{1}{2}(\varvec{e}_{k_i}+\varvec{e}_{l_j})\) and \({\tilde{\varvec{b}}}_{({\mathcal V}_i, {\mathcal V}_j)}^T\tilde{\varvec{\mu }}\ge 1\), \(\tilde{\mu }_{k_i}=\tilde{\mu }_{l_j}=1\). Therefore, \(\tilde{\varvec{\mu }}\in \{0,1\}^{m}\). Let
Clearly, C is a valid label cover for F and \(|C|=\varvec{e}^T\tilde{\varvec{\mu }}\le \varvec{e}^T{\hat{\varvec{\mu }}}=\varvec{e}^T\hat{\varvec{y}}-\varvec{e}^T\hat{\varvec{z}}\).
Conversely, given a label cover C of instance \(\mathcal I\), for any \(j\in [m]\), let \(\bar{\mu }_j=1\) if \(j\in C\) and zero otherwise. This implies that \(\varvec{e}^T\bar{\varvec{\mu }}=|C|\). For any \({({\mathcal V}_i, {\mathcal V}_j)}\in F\), let \(\bar{\varvec{b}}_{({\mathcal V}_i, {\mathcal V}_j)}=\frac{1}{2}(\varvec{e}_{k_i}+\varvec{e}_{l_j})\) where \(k_i\in {\mathcal V}_i\cap C, l_j\in {\mathcal V}_j\cap C\) such that \((k_i,l_j) \in E\). Then, let \(\varvec{\mu }'\) be an optimal solution for the following LP
Then, \(\varvec{e}^T\varvec{\mu }'\le \varvec{e}^T\bar{\varvec{\mu }}\) as \(\bar{\varvec{\mu }}\) is feasible for the above LP. From strong duality, there exists \(\bar{\varvec{y}}\in {\mathbb R}^n_+\) and \(\bar{\varvec{z}}\in {\mathbb R}^m_+\) such that \((\bar{\varvec{y}},\bar{\varvec{z}}, \bar{\varvec{b}}_{({\mathcal V}_i, {\mathcal V}_j)},{({\mathcal V}_i, {\mathcal V}_j)}\in F)\) is a feasible solution for instance \(\mathcal{I}'\) of \(\varPi _\mathsf{AR}^\mathsf{Gen}\) with cost \(\varvec{e}^T\bar{\varvec{y}}-\varvec{e}^T\bar{\varvec{z}}=\varvec{e}^T\varvec{\mu }'\le \varvec{e}^T\bar{\varvec{\mu }}=|C|\). \(\square \)
Appendix B: Approximate separation to optimization
For any \(\varvec{x} \in {\mathbb R}^n_+\), let
We show that if we can approximate the separation problem, we can also approximate \(\varPi _\mathsf{AR}\). Let \(\mathcal{A}\) be a \(\gamma \)-approximate algorithm for the separation problem (3.1), i.e., \(\mathcal{A}\) computes a \(\gamma \)-approximation for the min-max problem in (3.1). For any \(\varvec{x} \in {\mathbb R}^n_+\), let \(\varvec{B}^\mathcal{A}(\varvec{x})\) denote the matrix returned by \(\mathcal{A}\) and let
Therefore, the approximate separation based on Algorithm \(\mathcal{A}\) is as follows: for any \((\varvec{x}, z)\), return feasible if \(Q^\mathcal{A}(\varvec{x}) \ge z\). Otherwise, return a violating hyperplane corresponding to \(\varvec{B}^\mathcal{A}(\varvec{x})\). Now, we prove the following theorem.
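The separation routine above can be sketched in code. The data \((\varvec{A}, \varvec{c}, \varvec{d}, \varvec{h})\) and the two-scenario uncertainty set below are hypothetical, chosen only to illustrate the interface; with a finite uncertainty set the oracle is exact (\(\gamma = 1\)), whereas the paper's setting allows any \(\gamma \)-approximate oracle.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical packing data for
#   max c^T x + min_{B in U} max_y { d^T y : A x + B y <= h, x, y >= 0 }.
A = np.array([[1.0, 0.0], [0.0, 1.0]])
c = np.array([1.0, 1.0])
d = np.array([2.0, 1.0])
h = np.array([4.0, 3.0])
# Finite uncertainty set: the oracle below is exact, i.e., gamma = 1.
U = [np.array([[1.0, 0.5], [0.5, 1.0]]),
     np.array([[2.0, 0.2], [0.1, 1.5]])]

def Q(x):
    """min_{B in U} max_y { d^T y : B y <= h - A x, y >= 0 } and the
    minimizing matrix B (the candidate violating scenario)."""
    best_val, best_B = np.inf, None
    slack = h - A @ x
    for B in U:
        # maximize d^T y  <=>  minimize -d^T y
        res = linprog(-d, A_ub=B, b_ub=slack, bounds=[(0, None)] * len(d))
        val = -res.fun
        if val < best_val:
            best_val, best_B = val, B
    return best_val, best_B

def separate(x, z):
    """Return feasible if Q(x) >= z; else return the violating scenario."""
    q, B = Q(x)
    return (True, None) if q >= z - 1e-9 else (False, B)

x = np.array([1.0, 1.0])
q, _ = Q(x)
feasible, cut = separate(x, q)   # (x, Q(x)) is always accepted
```

In a cutting-plane scheme, a rejected pair \((\varvec{x}, z)\) would contribute the constraint corresponding to the returned scenario before resolving the master problem.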
Theorem 12
Suppose we have an algorithm \(\mathcal{A}\) that is a \(\gamma \)-approximation for the separation problem (3.1). Then we can compute a \(\gamma \)-approximation for the two-stage adjustable robust problem \(\varPi _\mathsf{AR}\) (1.1).
Proof
Since \(\mathcal{A}\) is a \(\gamma \)-approximation to the min-max problem in (3.1), for any \(\varvec{x} \in {\mathbb R}^n_+\),
Let \((\varvec{x}^*, z^*)\) be an optimal solution for \(\varPi _\mathsf{AR}\) and let
Consider the optimization algorithm based on the approximate separation algorithm \(\mathcal{A}\) and suppose it returns the solution \((\hat{\varvec{x}}, \hat{z})\). Note that \((\varvec{x}^*, z^*)\) is feasible according to the approximate separation algorithm \(\mathcal{A}\) as \(Q^\mathcal{A}(\varvec{x}^*)\ge Q^*(\varvec{x}^*)=z^*\). Therefore,
Note that \(\hat{z}\) is an approximation for the worst case second-stage objective value when the first stage solution is \(\hat{\varvec{x}}\). The true objective value for the first stage solution \(\hat{\varvec{x}}\) is given by
where the first inequality follows as \(\mathcal{A}\) is a \(\gamma \)-approximation and \(Q^\mathcal{A}(\hat{\varvec{x}}) \le \gamma \cdot Q^*(\hat{\varvec{x}})\). Inequality (B.2) follows as \((\hat{\varvec{x}}, \hat{z})\) is feasible according to \(\mathcal{A}\) and therefore, \(\hat{z} \le Q^\mathcal{A}(\hat{\varvec{x}})\) and the last inequality follows from (B.1). Therefore, the optimization problem based on algorithm \(\mathcal{A}\) computes a \(\gamma \)-approximation for \(\varPi _\mathsf{AR}\). \(\square \)
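The chain of inequalities described above can be sketched as follows. This is a reconstruction, assuming the epigraph formulation maximizes \(\varvec{c}^T\varvec{x}+z\) so that \(z_\mathsf{AR}=\varvec{c}^T\varvec{x}^*+z^*\):

\[
\varvec{c}^T\hat{\varvec{x}} + Q^*(\hat{\varvec{x}})
\;\ge\; \varvec{c}^T\hat{\varvec{x}} + \frac{1}{\gamma }\, Q^\mathcal{A}(\hat{\varvec{x}})
\;\ge\; \frac{1}{\gamma }\left( \varvec{c}^T\hat{\varvec{x}} + Q^\mathcal{A}(\hat{\varvec{x}})\right)
\;\ge\; \frac{1}{\gamma }\left( \varvec{c}^T\hat{\varvec{x}} + \hat{z}\right)
\;\ge\; \frac{1}{\gamma }\left( \varvec{c}^T\varvec{x}^* + z^*\right)
\;=\; \frac{z_\mathsf{AR}}{\gamma },
\]

where the second inequality uses \(\gamma \ge 1\) and \(\varvec{c}^T\hat{\varvec{x}}\ge 0\), the third uses \(\hat{z}\le Q^\mathcal{A}(\hat{\varvec{x}})\), and the fourth uses that \((\varvec{x}^*, z^*)\) is accepted by the approximate separation oracle while \((\hat{\varvec{x}}, \hat{z})\) is the solution it returns.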
Appendix C: Transformation of the adjustable robust problem
Let \(\varvec{x}^*\) be the optimal first-stage solution for \(\varPi _\mathsf{AR}\), i.e.,
Note that \((\varvec{x}^*, \varvec{0})\) is a feasible solution for \(\varPi _\mathsf{Rob}\). We have
Since \(\varvec{c}\) and \(\varvec{x}^*\) are both non-negative, to prove Theorem 7, it suffices to show
In this section, we show that we can assume without loss of generality that \((\varvec{h}-\varvec{A}\varvec{x}^*)>\varvec{0}\); otherwise, the static solution is optimal for the two-stage adjustable robust problem \(\varPi _\mathsf{AR}\) (1.1), i.e., \(z_\mathsf{AR}=z_\mathsf{Rob}\). Note that \((\varvec{h}-\varvec{A}\varvec{x}^*)\ge \varvec{0}\), since otherwise the inner problem becomes infeasible. Now, suppose that \((\varvec{h}-\varvec{A}\varvec{x}^*)_i=0\) for some \(i\in [m]\). Since \(\mathcal U\) is a full-dimensional convex set, there exists \(\varvec{B}^*\in \mathcal{U}\) such that \(B_{ij}^*>0\) for all \(j\in [n]\). Therefore,
which implies that \(z_\mathsf{AR}=\varvec{c}^T\varvec{x}^*\) since \(\varvec{d},\varvec{y}\) are non-negative. On the other hand, \((\varvec{x}^*, \varvec{0})\) is a feasible solution for \(\varPi _\mathsf{Rob}\). Therefore,
Conversely, suppose \((\bar{\varvec{x}}, \bar{\varvec{y}})\) is an optimal solution for \(\varPi _\mathsf{Rob}\). Then \(\varvec{x}=\bar{\varvec{x}}\), \(\varvec{y}(\varvec{B})=\bar{\varvec{y}}\) for all \(\varvec{B}\in \mathcal{U}\) is feasible for \(\varPi _\mathsf{AR}\). Therefore, \(z_\mathsf{AR}\ge z_\mathsf{Rob}\).
Appendix D: Proof of Theorem 3
Let \(\varvec{y}^*\) be such that \({\hat{\varvec{B}}}\varvec{y}^*\le \varvec{h}\). For any \(\varvec{B}\in \mathcal{U}\), we have \(\varvec{B}\le {\hat{\varvec{B}}}\) component-wise by construction. Since \(\varvec{y}^*\ge \varvec{0}\), this implies \(\varvec{B}\varvec{y}^*\le {\hat{\varvec{B}}}\varvec{y}^*\le \varvec{h}\) for all \(\varvec{B}\in \mathcal{U}\).
Conversely, suppose \(\tilde{\varvec{y}}\) satisfies \(\varvec{B}\tilde{\varvec{y}}\le \varvec{h}\) for all \(\varvec{B}\in \mathcal{U}\). For each \(i\in [m]\), note that \(\mathsf{diag}(\varvec{e}_i){\hat{\varvec{B}}}\in \mathcal{U}\) by construction. Therefore, \(\varvec{e}_i^T{\hat{\varvec{B}}}\tilde{\varvec{y}}\le h_i\) for all \(i\in [m]\), which implies that \({\hat{\varvec{B}}}\tilde{\varvec{y}}\le \varvec{h}\).
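As a numerical sanity check, the forward direction of this argument (feasibility for the component-wise upper-bound matrix implies feasibility for every dominated scenario) can be verified on a small hypothetical instance; the matrices and vectors below are illustrative, not from the paper.

```python
import numpy as np

# Hypothetical 2x2 illustration: B_hat holds component-wise upper bounds.
B_hat = np.array([[1.0, 2.0],
                  [3.0, 1.0]])
h = np.array([5.0, 6.0])
y = np.array([1.0, 1.5])          # B_hat @ y = [4.0, 4.5] <= h

feasible_for_B_hat = bool(np.all(B_hat @ y <= h))

rng = np.random.default_rng(0)
# Sample matrices 0 <= B <= B_hat (component-wise); since y >= 0,
# B @ y <= B_hat @ y <= h must hold for every such B.
feasible_for_all_dominated = all(
    np.all((B_hat * rng.random(B_hat.shape)) @ y <= h + 1e-12)
    for _ in range(100)
)
```

The converse direction in the proof is structural rather than numerical: it relies on each row-restricted matrix \(\mathsf{diag}(\varvec{e}_i){\hat{\varvec{B}}}\) itself belonging to \(\mathcal{U}\), so it is not exercised by this sketch.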
Appendix E: Proof of Lemma 1
Let
From Theorem 3, \(\varPi _\mathsf{Rob}\) is equivalent to
The dual problem is
Let
It is easy to observe that \(\frac{1}{s}\varvec{e}\) is a feasible solution for both the primal and the dual formulations of \(z_\mathsf{Rob}\). Moreover, they have the same objective value. Therefore,
On the other hand, for each \(j\in [n]\), denote
By writing the dual of the inner maximization problem of \(\varPi _\mathsf{AR}\), we have
Therefore, we just need to solve
Suppose \(({\hat{\theta }},{\hat{\varvec{\mu }}}, {\hat{\varvec{b}}}_j, j\in [n])\) is an optimal solution for (E.1). For each \(j\in [n]\), consider a basic optimal solution \({\tilde{\varvec{b}}}_j\) of the following LP:
Therefore, \({\tilde{\varvec{b}}}_j\) is a vertex of \(\mathcal{U}_j\), which implies that \({\tilde{\varvec{b}}}_j={\hat{B}}_{i_j j}\varvec{e}_{i_j}\) for some \(i_j\in [n]\) and \({\tilde{\varvec{b}}}_j^T{\hat{\varvec{\mu }}}\ge {\hat{\theta }}\). For each \(i\in [n]\), let \({\mathcal S}_i=\{j\;|\;i_j=i\}\). We have \(\sum _{i=1}^n |{\mathcal S}_i|=n\). For each \(i\in [n]\) such that \({\mathcal S}_i\ne \emptyset \), \({\hat{B}}_{ij}\) can only take values in \(\{1, 1/2,\ldots , 1/n\}\) for \(j\in {\mathcal S}_i\). Moreover, \({\hat{B}}_{ij}\ne {\hat{B}}_{ik}\) for \(j\ne k\). Therefore, there exists \(l_i\in {\mathcal S}_i\) such that
We have
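A sketch of this computation, assuming (E.1) includes the normalization \(\varvec{e}^T\varvec{\mu }\le 1\): since \({\hat{\theta }} \le {\tilde{\varvec{b}}}_{l_i}^T{\hat{\varvec{\mu }}} = {\hat{B}}_{i l_i}\,{\hat{\mu }}_i\) and \({\hat{B}}_{i l_i}\le 1/|{\mathcal S}_i|\),

\[
n\,{\hat{\theta }} \;=\; \sum _{i:\,{\mathcal S}_i\ne \emptyset } |{\mathcal S}_i|\,{\hat{\theta }}
\;\le\; \sum _{i:\,{\mathcal S}_i\ne \emptyset } |{\mathcal S}_i|\,{\hat{B}}_{i l_i}\,{\hat{\mu }}_i
\;\le\; \sum _{i=1}^{n} {\hat{\mu }}_i
\;=\; \varvec{e}^T{\hat{\varvec{\mu }}}
\;\le\; 1.
\]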
Therefore, \({\hat{\theta }}\le \frac{1}{n}\), which implies that \(z_\mathsf{AR}\ge n\).
On the other hand, it is easy to observe that \(z_\mathsf{AR} \le n\): \(\varvec{b}_j=\varvec{e}_j\), \(\varvec{\mu } = 1/n \cdot \varvec{e}\) and \(\theta = 1/n\) is a feasible solution for (E.1). Therefore,
which completes the proof.
Awasthi, P., Goyal, V. & Lu, B.Y. On the adaptivity gap in two-stage robust linear optimization under uncertain packing constraints. Math. Program. 173, 313–352 (2019). https://doi.org/10.1007/s10107-017-1222-8