Abstract
We consider the problem of designing piecewise affine policies for two-stage adjustable robust linear optimization problems under right-hand side uncertainty. It is well known that a piecewise affine policy is optimal, although the number of pieces can be exponentially large. A significant challenge in designing a practical piecewise affine policy is constructing good pieces of the uncertainty set. Here we address this challenge by introducing a new framework in which the uncertainty set is “approximated” by a “dominating” simplex. The corresponding policy is then based on a mapping from the uncertainty set to the simplex. Although our piecewise affine policy has exponentially many pieces, it can be computed efficiently by solving a compact linear program given the dominating simplex. Furthermore, the dominating simplex can be found in closed form if the uncertainty set satisfies certain symmetries, and can be computed using a MIP in general. We would like to remark that our policy is an approximate piecewise affine policy and is not necessarily a generalization of the class of affine policies. Nevertheless, the performance of our policy is significantly better than that of the affine policy for many important uncertainty sets, such as ellipsoids and norm balls, both theoretically and numerically. For instance, for the hypersphere uncertainty set, our piecewise affine policy can be computed by an LP and gives an \(O(m^{1/4})\)-approximation, whereas the affine policy requires solving a second-order cone program and has a worst-case performance bound of \(O(\sqrt{m})\).
Notes
Remark We note that in [7], in Tables 1 and 2, there is a typo in the performance bound for affine policies for p-norm balls. According to Theorem 3 in [7], the bound should be
$$\begin{aligned} \frac{m^{\frac{p-1}{p}}+m}{m^{\frac{p-1}{p}}+m^{\frac{1}{p}}}= O \left( m^{\frac{1}{p}} \right) , \end{aligned}$$ instead of \(\frac{m^{\frac{p-1}{p}}+m}{m^{\frac{1}{p}}+m}\) as mentioned in Table 2 in [7].
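A quick numeric sanity check of the corrected bound (the function name and the grid of values are illustrative choices, not from [7]):

```python
def affine_bound(m: float, p: float) -> float:
    # Corrected bound from Theorem 3 in [7]:
    # (m^((p-1)/p) + m) / (m^((p-1)/p) + m^(1/p))
    return (m ** ((p - 1) / p) + m) / (m ** ((p - 1) / p) + m ** (1 / p))

# The ratio bound / m^(1/p) should remain bounded as m grows,
# consistent with the O(m^(1/p)) statement.
for p in (1.5, 2.0, 2.5):
    for m in (10, 100, 1000, 10000):
        assert affine_bound(m, p) / m ** (1 / p) <= 1.0
```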
References
Ayoub, J., Poss, M.: Decomposition for adjustable robust linear optimization subject to uncertainty polytope. Comput. Manag. Sci. 13(2), 219–239 (2016)
Ben-Tal, A., El Ghaoui, L., Nemirovski, A.: Robust Optimization. Princeton University Press, Princeton (2009)
Ben-Tal, A., Goryashko, A., Guslitzer, E., Nemirovski, A.: Adjustable robust solutions of uncertain linear programs. Math. Program. 99(2), 351–376 (2004)
Ben-Tal, A., Nemirovski, A.: Robust convex optimization. Math. Oper. Res. 23(4), 769–805 (1998)
Ben-Tal, A., Nemirovski, A.: Robust solutions of uncertain linear programs. Oper. Res. Lett. 25(1), 1–14 (1999)
Ben-Tal, A., Nemirovski, A.: Robust optimization-methodology and applications. Math. Program. 92(3), 453–480 (2002)
Bertsimas, D., Bidkhori, H.: On the performance of affine policies for two-stage adaptive optimization: a geometric perspective. Math. Program. 153(2), 577–594 (2015)
Bertsimas, D., Brown, D., Caramanis, C.: Theory and applications of robust optimization. SIAM Rev. 53(3), 464–501 (2011)
Bertsimas, D., Caramanis, C.: Finite adaptability in multistage linear optimization. IEEE Trans. Autom. Control 55(12), 2751–2766 (2010)
Bertsimas, D., Dunning, I.: Multistage robust mixed-integer optimization with adaptive partitions. Oper. Res. 64(4), 980–998 (2016)
Bertsimas, D., Georghiou, A.: Design of near optimal decision rules in multistage adaptive mixed-integer optimization. Oper. Res. 63(3), 610–627 (2015)
Bertsimas, D., Goyal, V.: On the power and limitations of affine policies in two-stage adaptive optimization. Math. Program. 134(2), 491–531 (2012)
Bertsimas, D., Goyal, V., Sun, X.: A geometric characterization of the power of finite adaptability in multistage stochastic and adaptive optimization. Math. Oper. Res. 36(1), 24–54 (2011)
Bertsimas, D., Iancu, D., Parrilo, P.: Optimality of affine policies in multi-stage robust optimization. Math. Oper. Res. 35, 363–394 (2010)
Bertsimas, D., Sim, M.: Robust discrete optimization and network flows. Math. Program. Ser. B 98, 49–71 (2003)
Bertsimas, D., Sim, M.: The price of robustness. Oper. Res. 52(1), 35–53 (2004)
Chen, X., Sim, M., Sun, P., Zhang, J.: A linear decision-based approximation approach to stochastic programming. Oper. Res. 56(2), 344–357 (2008)
Dantzig, G.: Linear programming under uncertainty. Manag. Sci. 1, 197–206 (1955)
El Ghaoui, L., Lebret, H.: Robust solutions to least-squares problems with uncertain data. SIAM J. Matrix Anal. Appl. 18, 1035–1064 (1997)
El Housni, O., Goyal, V.: Beyond worst-case: a probabilistic analysis of affine policies in dynamic optimization. In: Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R. (eds.) Advances in Neural Information Processing Systems, vol. 30, pp. 4759–4767. Curran Associates Inc, New York (2017)
El Housni, O., Goyal, V.: Piecewise static policies for two-stage adjustable robust linear optimization. Math. Program. 169(2), 649–665 (2018)
Feige, U., Jain, K., Mahdian, M., Mirrokni, V.: Robust combinatorial optimization with exponential scenarios. Lect. Notes Comput. Sci. 4513, 439–453 (2007)
Goldfarb, D., Iyengar, G.: Robust portfolio selection problems. Math. Oper. Res. 28(1), 1–38 (2003)
Iancu, D., Sharma, M., Sviridenko, M.: Supermodularity and affine policies in dynamic robust optimization. Oper. Res. 61(4), 941–956 (2013)
Kall, P., Wallace, S.: Stochastic Programming. Wiley, New York (1994)
Postek, K., den Hertog, D.: Multistage adjustable robust mixed-integer optimization via iterative splitting of the uncertainty set. INFORMS J. Comput. 28(3), 553–574 (2016)
Prékopa, A.: Stochastic Programming. Kluwer Academic Publishers, Dordrecht (1995)
Shapiro, A.: Stochastic programming approach to optimization under uncertainty. Math. Program. Ser. B 112(1), 183–220 (2008)
Shapiro, A., Dentcheva, D., Ruszczynski, A.: Lectures on Stochastic Programming: Modeling and Theory. SIAM, Philadelphia (2009)
Soyster, A.: Convex programming with set-inclusive constraints and applications to inexact linear programming. Oper. Res. 21(5), 1154–1157 (1973)
Zeng, B.: Solving Two-Stage Robust Optimization Problems by a Constraint-and-Column Generation Method. University of South Florida, Tampa (2011)
Acknowledgements
O. El Housni and V. Goyal are supported by NSF Grants CMMI 1201116 and CMMI 1351838.
Appendices
Proof of Theorem 1
Proof
Let \((\hat{\varvec{x}},\hat{\varvec{y}}(\hat{\varvec{h}}), \hat{\varvec{h}}\in \hat{\mathcal{U}})\) be an optimal solution for \(z_{\textsf {AR}}(\hat{\mathcal{U}})\). For each \(\varvec{h}\in \mathcal{U}\), let \(\tilde{\varvec{y}}(\varvec{h})= \hat{\varvec{y}}(\hat{\varvec{h}})\) where \(\hat{\varvec{h}}\in \hat{\mathcal{U}}\) dominates \(\varvec{h}\). Therefore, for any \(\varvec{h}\in \mathcal{U}\),
i.e., \((\hat{\varvec{x}},\tilde{\varvec{y}}(\varvec{h}),\varvec{h}\in \mathcal{U})\) is a feasible solution for \(z_{\textsf {AR}}(\mathcal{U})\). Therefore,
Conversely, let \((\varvec{x}^*, \varvec{y}^*(\varvec{h}), \varvec{h}\in \mathcal{U})\) be an optimal solution of \(z_{\textsf {AR}}(\mathcal{U})\). Then, for any \(\hat{\varvec{h}}\in \hat{\mathcal{U}}\), since \( \frac{\hat{\varvec{h}}}{\beta } \in \mathcal{U}\), we have,
Therefore, \((\beta \varvec{x}^*, \beta \varvec{y}^*\left( \frac{\hat{\varvec{h}}}{\beta }\right) , \hat{\varvec{h}} \in \hat{\mathcal{U}})\) is feasible for \(\varPi _{\textsf {AR}}(\hat{\mathcal{U}})\). Hence,
\(\square \)
Proof of Lemma 1
Proof
(a) Suppose there exist \(\beta \) and \(\varvec{v} \in \mathcal{U}\) such that \( \hat{\mathcal{U}} = \beta \cdot {\textsf {conv}} \left( \varvec{e}_1,\ldots ,\varvec{e}_m, \varvec{v} \right) \) dominates \(\mathcal{U}\). Consider \(\varvec{h} \in \mathcal{U}\). Since \(\hat{\mathcal{U}}\) dominates \(\mathcal{U}\), there exist \( \alpha _1, \alpha _2,\ldots ,\alpha _{m+1} \ge 0\) with \(\alpha _1 + \cdots + \alpha _{m+1} =1\) such that
Let
Then,
where the first inequality follows from (B.1) and the last inequality holds because \( \alpha _{m+1} -1 \le 0\), \( v_i \ge 0\), \(\beta \ge 0\) and \( \sum _{i\in I (\varvec{h} )} \alpha _i \le 1\). We conclude that
(b) Now, suppose there exist \(\beta \) and \(\varvec{v} \in \mathcal{U}\) such that \( \hat{\mathcal{U}} = \beta \cdot {\textsf {conv}} \left( \varvec{e}_1,\ldots ,\varvec{e}_m, \varvec{v} \right) \) dominates \(\mathcal{U}\). For any \(\varvec{h} \in \mathcal{U}\), let
Then for all \(i=1,\ldots ,m\),
Therefore, \(\varvec{{\hat{h}}}\) dominates \(\varvec{h}\). Moreover,
because
Therefore, \(2\beta \cdot {\textsf {conv}} \left( \varvec{0}, \varvec{e}_1,\ldots ,\varvec{e}_m, \varvec{v} \right) \) dominates \(\mathcal{U} \) and consequently \(2\beta \cdot {\textsf {conv}} \left( \varvec{e}_1,\ldots ,\varvec{e}_m, \varvec{v} \right) \) dominates \(\mathcal{U} \) as well. \(\square \)
Proof of Lemma 3
Proof
Suppose \( k \in [m]\). Let us consider
Without loss of generality, we can suppose that \(h_i =0\) for \( i =k+1,\ldots , m\). Denote by \(\mathcal{S}_k\) the set of permutations of \(\{ 1,2,\ldots ,k\}\). We define \(\varvec{h} ^{\sigma } \in \mathbb {R}_+^m\) such that \(h^{\sigma }_i = h_{\sigma (i) }\) for \(i=1, \ldots ,k\) and \(h^{\sigma }_i =0\) otherwise. Since \(\mathcal{U}\) is permutation invariant, we have \(\varvec{h} ^{\sigma } \in \mathcal{U}\) for any \(\sigma \in \mathcal{S}_k\). The convexity of \(\mathcal{U}\) implies that
We have,
and \(\sum _{j=1}^k h_j = k \cdot \gamma (k)\) by definition. Therefore,
\(\square \)
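Lemma 3's key step averages the permuted copies \(\varvec{h}^{\sigma }\) over all \(\sigma \in \mathcal{S}_k\). A small numeric sketch of this step (the point \(\varvec{h}\), with \(m=5\) and \(k=3\), is hypothetical):

```python
from itertools import permutations

h = [0.7, 0.2, 0.6, 0.0, 0.0]  # hypothetical point with k = 3 nonzero entries
k = 3

# Average h^sigma over all permutations sigma of {1,...,k}. Each h^sigma lies
# in the permutation-invariant set U, so by convexity the average does too.
perms = list(permutations(range(k)))
avg = [0.0] * len(h)
for sigma in perms:
    for i in range(k):
        avg[i] += h[sigma[i]] / len(perms)

# The average is constant on the first k coordinates: each entry equals
# (1/k) * sum of the first k entries of h.
mean = sum(h[:k]) / k
assert all(abs(avg[i] - mean) < 1e-12 for i in range(k))
assert all(avg[i] == 0.0 for i in range(k, len(h)))
```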
Proof of Lemma 4
Proof
Consider an optimal solution \( \tilde{\varvec{h}} \in \mathcal{U} \) for the maximization problem in (3.3) for fixed \(\beta \). We will construct another optimal solution \( \varvec{h}^* \in \mathcal{U}\) of (3.3) that satisfies the properties in the lemma. First, denote \( I = \{ i \; \vert \; {\tilde{h}}_i > \beta \gamma \} \) and \(\vert I\vert =k\). Since \(\mathcal{U}\) is permutation invariant, we can suppose without loss of generality that \(I =\{1,2,\ldots ,k \}\). We define
From Lemma 3, we have \(\varvec{h}^* \in \mathcal{U}\). Moreover,
where the first inequality follows from the definition of the coefficients \(\gamma (\cdot )\). Therefore, \(\varvec{h}^*\) and \( \tilde{\varvec{h}}\) have the same objective value in (3.3), and consequently \(\varvec{h}^*\) is also optimal for the maximization problem (3.3). Moreover, from the first inequality, we have \( \gamma (k) - \beta \gamma >0 \), i.e., \(\big \vert \{ i \; \vert \; h_i^* > \beta \gamma \} \big \vert =k.\) Therefore, \(\varvec{h}^*\) satisfies the properties of the lemma. \(\square \)
Proof of Proposition 4
Proof
To prove that \(\hat{\mathcal{U}}\) dominates \(\mathcal{U}\), it is sufficient to consider \(\varvec{h}\) on the boundary of \( \mathcal{U}\), i.e.,
and to find nonnegative reals \( \alpha _1, \alpha _2,\ldots ,\alpha _{m+1}\) with \(\sum _{i=1}^{m+1} \alpha _i =1\) such that for all \( i \in [m],\)
By taking all \(h_i\) equal in (E.1), we get
We choose for \( i \in [m]\),
and \(\alpha _{m+1}= \frac{1}{2}.\) First, we have \(\sum _{i=1}^{m+1} \alpha _i =1\) and for all \( i \in [m]\),
where the first inequality holds because \(\sum _{j=1}^m h_j \ge 1\) which is a direct consequence of \( \varvec{h}^T \varSigma \varvec{h} =1\) and \( a \le 1\). The second one follows from the inequality of arithmetic and geometric means (AM-GM inequality). Finally, we can verify by case analysis on the values of a that
In fact, denote \(H(m)= \left( \frac{a}{2}+ \frac{(1-a)^{\frac{1}{2}}}{\left( am^2+(1-a)m\right) ^{\frac{1}{4}}}\right) ^{-1} = O \left( a+ \frac{1}{\left( am^2+m\right) ^{\frac{1}{4}}}\right) ^{-1}\)
Case 1: \(a=O(\frac{1}{m})\). We have \(\left( am^2+m \right) ^{\frac{1}{4}} = O(m^{\frac{1}{4}})\). Then \(H(m)=O(m^{\frac{1}{4}})=O(m^{\frac{2}{5}})\).
Case 2: \(a=\varOmega (m^{-\frac{2}{5}})\). We have \(H(m)=O(a^{-1})=O(m^{\frac{2}{5}})\).
Case 3: \(a=O(m^{-\frac{2}{5}})\) and \(a=\varOmega (\frac{1}{m})\). We have \(\left( am^2+m \right) ^{\frac{1}{4}} = O(m^{\frac{2}{5}})\). Then,
Therefore, \(H(m)=O(m^{\frac{2}{5}})\). \(\square \)
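The case analysis above can be checked numerically: for \(H(m)\) as defined, \(H(m)/m^{2/5}\) stays below a modest constant across a grid of values of \(a \in (0,1]\) (the grid and the constant 3 are arbitrary illustrative choices):

```python
def H(m: float, a: float) -> float:
    # H(m) as defined in the proof of Proposition 4, for 0 < a <= 1.
    return 1.0 / (a / 2 + (1 - a) ** 0.5 / (a * m * m + (1 - a) * m) ** 0.25)

for m in (10, 100, 1000, 10_000, 100_000):
    worst = max(H(m, i / 1000) / m ** 0.4 for i in range(1, 1001))
    assert worst <= 3.0  # consistent with H(m) = O(m^(2/5))
```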
Proof of Proposition 5
Proof
To prove that \(\hat{\mathcal{U}}\) dominates \(\mathcal{U}\), it is sufficient to consider \(\varvec{h}\) on the boundary of \(\mathcal{U}\), i.e., \(\sum _{i=1}^m h_i =k\), and to find nonnegative reals \( \alpha _1, \alpha _2,\ldots ,\alpha _{m+1}\) with \(\sum _{i=1}^{m+1} \alpha _i =1\) such that for all \( i \in [m],\)
First case: If \(\beta =k\), we choose \(\alpha _i = \frac{h_i}{k}\) for \( i \in [m]\) and \(\alpha _{m+1}= 0\). We have \(\sum _{i=1}^{m+1} \alpha _i =1\) and for all \( i \in [m]\),
Second case: If \(\beta =\frac{m}{k}\), we choose \(\alpha _i = 0 \) for \( i \in [m]\) and \(\alpha _{m+1}= 1\). We have \(\sum _{i=1}^{m+1} \alpha _i =1\) and for all \( i \in [m]\),
\(\square \)
Proof of Lemma 6
Proof
Consider the following simplex
It is clear that \( \hat{\mathcal{U}}\) dominates \(\mathcal{U}\) since \(\frac{1}{\sqrt{m}} \varvec{e}\) dominates all the extreme points \( \varvec{\nu }_j\) for \(j \in [N]\). Moreover, by the convexity of \( \mathcal{U}\), we have \( \frac{1}{N} \sum _{j=1}^N \varvec{\nu }_j = \frac{\binom{m-1}{r-1}}{\sqrt{m}\,\binom{m}{r}} \varvec{e} = \frac{r}{m\sqrt{m}} \varvec{e} \in \mathcal{U}\). Denote \(\beta = \frac{m}{r}\). Hence, for all \(i \in [m]\)
Therefore, \( \hat{\mathcal{U}} \subseteq \beta \cdot \mathcal{U}\) and from Theorem 1, we conclude that our policy gives a \(\beta \)-approximation to the adjustable problem (1.1) where \(\beta = \frac{m}{\lceil m- \sqrt{m}\,\rceil }=O \left( 1+ \frac{1}{\sqrt{m}}\right) \). \(\square \)
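The approximation ratio in Lemma 6 can also be checked numerically to behave like \(1 + O(1/\sqrt{m})\) (a small illustrative script; the constant 2 in the bound is an arbitrary choice):

```python
import math

def beta(m: int) -> float:
    # Approximation ratio from Lemma 6: beta = m / ceil(m - sqrt(m)).
    return m / math.ceil(m - math.sqrt(m))

# (beta(m) - 1) decays like 1/sqrt(m), i.e. beta = O(1 + 1/sqrt(m)).
for m in (16, 100, 10_000, 1_000_000):
    assert 1.0 < beta(m) <= 1.0 + 2.0 / math.sqrt(m)
```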
Proof of Lemma 7
Proof
First, let us prove that \(z_{\textsf {AR}}(\mathcal{U}) \le 1\). It is sufficient to define an adjustable solution only for the extreme points of \(\mathcal{U}\) because the constraints are linear. We define the following solution for all \(i=1,\ldots ,m\) and for all \(j=1,\ldots ,N\)
We have \(\varvec{B} \varvec{y} ( \varvec{0}) = \varvec{0}\). For \(i \in [m]\)
and for \(j \in [N]\)
Therefore, the solution defined above is feasible. Moreover, the cost of our feasible solution is 1 because for all \(i \in [m]\) and \(j \in [N]\), we have
Hence, \(z_{\textsf {AR}}(\mathcal{U}) \le 1.\) Now, it is sufficient to prove that \(z_{\textsf {Aff}}(\mathcal{U})= \varOmega ( \sqrt{m})\). First, \(\tilde{\varvec{x}}= \frac{1}{\sqrt{m}} \varvec{e} \) and \(\varvec{y}( \varvec{h})= \varvec{0}\) for any \(\varvec{h} \in \mathcal{U}\) is a feasible static solution (which is a special case of an affine solution). In fact,
where the last inequality holds because \( \mathcal{U} \subseteq [0,1]^m\). Moreover, the cost of this static solution is
Hence,
Our instance is “a permuted instance,” i.e., \(\mathcal{U}\) is permutation invariant, \(\varvec{A}\) and \(\varvec{B}\) are symmetric, and \(\varvec{c}\) and \(\varvec{d}\) are proportional to \(\varvec{e}\). Hence, from Lemma 8 and Lemma 7 in Bertsimas and Goyal [12], for any optimal solution \(\varvec{x}^*_{\textsf {Aff}}, \varvec{y}^*_{\textsf {Aff}}( \varvec{h})\) of the affine problem, we can construct another optimal affine solution that is “symmetric” and has the same cost in each stage. In particular, there exists an optimal solution for the affine problem of the following form: \( \varvec{x}= \alpha \varvec{e}\), \( \varvec{y}( \varvec{h}) = \varvec{P} \varvec{h} + \varvec{q}\) for \(\varvec{h} \in \mathcal{U}\) where
\( \varvec{q} = \lambda \varvec{e}\), \(\varvec{c}^T \varvec{x}= \varvec{c}^T \varvec{x}^*_{\textsf {Aff}}\) and \(\max _{ \varvec{h} \in \mathcal{U}} \varvec{d}^T \varvec{y}(\varvec{h})= \max _{ \varvec{h} \in \mathcal{U}} \varvec{d}^T \varvec{y}^*_{\textsf {Aff}}(\varvec{h})\). We have \( \varvec{x} \ge \varvec{0}\) and \( \varvec{y}(\varvec{0}) = \lambda \varvec{e} \ge \varvec{0}\) hence
Claim
\(\alpha \ge \frac{1}{24 \sqrt{m}}\). For the sake of contradiction, suppose that \( \alpha < \frac{1}{24 \sqrt{m}}\). We know that
Case 1: If \(\lambda \ge \frac{1}{12 \sqrt{m}}\), then from (H.4) and \(\alpha \ge 0\), we have \(z_{\textsf {Aff}}(\mathcal{U}) \ge \frac{\sqrt{m}}{12}\), contradicting (H.1).
Case 2: If \( \lambda \le \frac{1}{12 \sqrt{m}}\), we have
By feasibility of the solution, we have \(\varvec{A} \varvec{x}+ \varvec{B} \varvec{y} ( \varvec{e}_1) \ge \varvec{e}_1\), hence
Therefore \(\theta + \lambda + \alpha \left( \frac{m-1}{\sqrt{m}} +1 \right) \ge \frac{1}{2}\) or \(\frac{1}{\sqrt{m}} (m-1)(\mu + \lambda ) \ge \frac{1}{2}\).
Case 2.1 Suppose \(\frac{1}{\sqrt{m}} (m-1)(\mu + \lambda ) \ge \frac{1}{2}\). Therefore,
where the last inequality holds because \(\theta + \lambda \ge 0 \) as \(\varvec{y}( \varvec{e}_1) \ge \varvec{0}\).
Case 2.2: Now suppose we have the other inequality, i.e., \(\theta + \lambda + \alpha \left( \frac{m-1}{\sqrt{m}} +1 \right) \ge \frac{1}{2}\). Recall that \( \lambda \le \frac{1}{12\sqrt{m}}\) and \( \alpha < \frac{1}{24 \sqrt{m}}\). Therefore,
We have,
In particular, we have
where the last inequality follows from \(\lambda \ge 0\) and \( \theta \ge \frac{1}{3}.\)
Case 2.2.1 If \( \mu \ge 0\) then from (H.5)
Case 2.2.2: Now suppose that \( \mu < 0\); by non-negativity of \( \varvec{y} ( \varvec{\nu }_1) \), we have
i.e.
and from (H.5)
We conclude that \( \alpha \ge \frac{1}{24 \sqrt{m}}\) and consequently
Hence,
Since \(\varvec{c}^T \varvec{x}= \varvec{c}^T \varvec{x}^*_{\textsf {Aff}}\), for any optimal affine solution, the cost of the first-stage affine solution \(\varvec{x}^*_{\textsf {Aff}}\) is \( \varOmega ({\sqrt{m}})\) away from the optimal adjustable problem (1.1), i.e., \( \varvec{c}^T \varvec{x}^*_{\textsf {Aff}} =\varvec{c}^T \varvec{x} =\varOmega ( \sqrt{m})\cdot z_{\textsf {AR}}(\mathcal{U})\). \(\square \)
Proof of Theorem 7
Proof
Let us determine the order of the left-hand-side ratio in inequality (5.3). We have,
By Stirling’s approximation, we have
All together,
We have
and
WLOG, we can suppose that \( \epsilon < \frac{1}{4}\), therefore
We have,
but the latter inequality contradicts
\(\square \)
Domination for non-permutation invariant sets
Proposition 6
Suppose Algorithm 1 returns \(\beta \) and \(\varvec{v}\) for some uncertainty set \(\mathcal{U}\). Then the set (6.3) is a dominating set for \(\mathcal U\).
Proof
Suppose Algorithm 1 returns \(\beta \) and \(\varvec{v}\); then inequality (2.4) is satisfied, namely,
Recall the dominating point (2.3)
We have
where
Hence \({\hat{\mathcal{U}}}\) is a dominating set. \(\square \)
Domination for the generalized budget set
Proposition 7
Consider
The set (K.1) dominates the uncertainty set (6.2).
Proof
Consider the uncertainty set (6.2) given by
and
Note that in our setting we choose \( \theta > \frac{m-1}{2}\). Take any \(\varvec{h} \in \mathcal{U}\). Suppose WLOG that \(h_1+h_2 = \min _{i\ne j} (h_i+h_j)\).
Hence, by definition of \(\mathcal{U}\)
To prove that \(\hat{ \mathcal U}\) dominates \({ \mathcal U}\), it is sufficient to find \( \alpha _1, \alpha _2,\ldots ,\alpha _{m+1}\) non-negative reals with \(\sum _{i=1}^{m+1} \alpha _i \le 1\) such that for all \( i \in [m],\)
We choose \(\alpha _{m+1} = (m-1-2\theta ) \cdot \frac{h_1+h_2}{2} \), \(\alpha _1=h_1\) and for \(i \ge 2\), \(\alpha _i = h_i - \frac{h_1+h_2}{2}\). We can verify that
and for \( i \ge 2\),
Moreover, \(\alpha _{m+1} \ge 0\), \(\alpha _1 \ge 0\) and, for \(i \ge 2\), \(\alpha _i \ge 0\) since \(h_1+h_2= \min _{i\ne j} (h_i+h_j)\). Finally,
Note that the construction of this dominating set is slightly different from the general approach in Sect. 3 since we do not scale the unit vectors \(\varvec{e}_i\) in \({\hat{\mathcal{U}}}\). \(\square \)
Ben-Tal, A., El Housni, O. & Goyal, V. A tractable approach for designing piecewise affine policies in two-stage adjustable robust optimization. Math. Program. 182, 57–102 (2020). https://doi.org/10.1007/s10107-019-01385-0