Abstract
This paper explores a class of nonlinear Adjustable Robust Optimization (ARO) problems, containing here-and-now and wait-and-see variables, with uncertainty in the objective function and constraints. By applying Fenchel’s duality to the wait-and-see variables, we obtain an equivalent dual reformulation, which is a nonlinear static robust optimization problem. Using the dual formulation, we provide conditions under which the ARO problem is convex in the here-and-now decision. Furthermore, since the dual formulation contains a non-concave maximization over the uncertain parameter, we use a perspective relaxation and an alternating method to handle the non-concavity. Employing the perspective relaxation, we obtain an upper bound, which we show coincides with the static relaxation of the considered problem. Invoking the alternating method, we design a new dual-based cutting plane algorithm that finds a reasonable lower bound for the optimal objective value of the considered nonlinear ARO model. In addition to establishing the theoretical properties of the algorithms, including a convergence analysis, we demonstrate through numerical experiments the ability of our cutting plane algorithm to produce locally robust solutions with an acceptable optimality gap.
References
Arslan, A.N., Detienne, B.: Decomposition-based approaches for a class of two-stage robust binary optimization problems. INFORMS J. Comput. 34(2), 857–871 (2022)
Auslender, A., Teboulle, M.: Asymptotic Cones and Functions in Optimization and Variational Inequalities. Springer Science & Business Media, New York (2006)
Beck, A.: First-Order Methods in Optimization. SIAM, Philadelphia (2017)
Beck, A.: Introduction to Nonlinear Optimization: Theory, Algorithms, and Applications with Python and MATLAB, 2nd edn. SIAM, Philadelphia (2023)
Ben-Tal, A., El Ghaoui, L., Nemirovski, A.: Robust Optimization. Princeton University Press, Princeton (2009)
Ben-Tal, A., Goryashko, A., Guslitzer, E., Nemirovski, A.: Adjustable robust solutions of uncertain linear programs. Math. Program. 99(2), 351–376 (2004)
Ben-Tal, A., Nemirovski, A.: Robust convex optimization. Math. Oper. Res. 23(4), 769–805 (1998)
Ben-Tal, A., Nemirovski, A.: Robust solutions of uncertain linear programs. Oper. Res. Lett. 25(1), 1–13 (1999)
Bertsimas, D., Dunning, I.: Multistage robust mixed-integer optimization with adaptive partitions. Oper. Res. 64(4), 980–998 (2016)
Bertsimas, D., Goyal, V.: On the power and limitations of affine policies in two-stage adaptive optimization. Math. Program. 134(2), 491–531 (2012)
Bertsimas, D., Goyal, V., Lu, B.Y.: A tight characterization of the performance of static solutions in two-stage adjustable robust linear optimization. Math. Program. 150(2), 281–319 (2015)
Bertsimas, D., den Hertog, D.: Robust and Adaptive Optimization. Dynamic Ideas LLC, Belmont, MA (2022)
Bertsimas, D., Iancu, D.A., Parrilo, P.A.: A hierarchy of near-optimal policies for multistage adaptive optimization. IEEE Trans. Autom. Control 56(12), 2809–2824 (2011)
Boni, O., Ben-Tal, A.: Adjustable robust counterpart of conic quadratic problems. Math. Methods Oper. Res. 68(2), 211–233 (2008)
Breuer, D.J., Lahrichi, N., Clark, D.E., Benneyan, J.C.: Robust combined operating room planning and personnel scheduling under uncertainty. Oper. Res. Health Care 27, 100276 (2020)
Combettes, P.L.: Perspective functions: Properties, constructions, and examples. Set-Val. Var. Anal. 26(2), 247–264 (2018)
Du, B., Zhou, H., Leus, R.: A two-stage robust model for a reliable p-center facility location problem. Appl. Math. Model. 77, 99–114 (2020)
El Ghaoui, L., Oustry, F., Lebret, H.: Robust solutions to uncertain semidefinite programs. SIAM J. Optim. 9(1), 33–52 (1998)
Grippo, L., Sciandrone, M.: On the convergence of the block nonlinear Gauss-Seidel method under convex constraints. Oper. Res. Lett. 26(3), 127–136 (2000)
Hadjiyiannis, M.J., Goulart, P.J., Kuhn, D.: A scenario approach for estimating the suboptimality of linear decision rules in two-stage robust optimization. In: 2011 50th IEEE Conference on Decision and Control and European Control Conference, pp. 7386–7391. IEEE (2011)
Hanasusanto, G.A., Kuhn, D., Wiesemann, W.: K-adaptability in two-stage robust binary programming. Oper. Res. 63(4), 877–891 (2015)
Hashemi Doulabi, H., Jaillet, P., Pesant, G., Rousseau, L.M.: Exploiting the structure of two-stage robust optimization models with exponential scenarios. INFORMS J. Comput. 33(1), 143–162 (2021)
Hiriart-Urruty, J.B., Lemaréchal, C.: Fundamentals of Convex Analysis. Springer Science & Business Media, Berlin (2004)
Kammammettu, S., Li, Z.: Two-stage robust optimization of water treatment network design and operations under uncertainty. Ind. Eng. Chem. Res. 59(3), 1218–1233 (2019)
Ke, G.Y.: Managing reliable emergency logistics for hazardous materials: A two-stage robust optimization approach. Comput. Oper. Res. 138, 105557 (2022)
Koushki, J., Miettinen, K., Soleimani-damaneh, M.: LR-NIMBUS: an interactive algorithm for uncertain multiobjective optimization with lightly robust efficient solutions. J. Global Optim. 83, 843–863 (2022)
Lee, J., Skipper, D., Speakman, E.: Gaining or losing perspective. J. Global Optim. 82(4), 835–862 (2022)
Liang, E., Yuan, Z.: Adjustable robust optimal control for industrial 2-mercaptobenzothiazole production processes under uncertainty. Optim. Eng. pp. 1–38 (2022)
Löfberg, J.: YALMIP: A toolbox for modeling and optimization in MATLAB. In: Proceedings of the CACSD Conference. Taipei, Taiwan (2004)
Lu, M., Shen, Z.J.M.: A review of robust operations management under model uncertainty. Prod. Oper. Manag. 30(6), 1927–1943 (2021)
Marandi, A., Den Hertog, D.: When are static and adjustable robust optimization problems with constraint-wise uncertainty equivalent? Math. Program. 170(2), 555–568 (2018)
Marandi, A., van Houtum, G.J.: Robust location-transportation problems with integer-valued demand. Optimization (2020)
MOSEK ApS: The MOSEK optimization toolbox for MATLAB manual. version 9.3.21 (2022)
Postek, K., Hertog, D.D.: Multistage adjustable robust mixed-integer optimization via iterative splitting of the uncertainty set. INFORMS J. Comput. 28(3), 553–574 (2016)
Rockafellar, R.T.: Convex Analysis, vol. 36. Princeton University Press, Princeton (1970)
Romeijnders, W., Postek, K.: Piecewise constant decision rules via branch-and-bound based scenario detection for integer adjustable robust optimization. INFORMS J. Comput. 33(1), 390–400 (2021)
Roos, K., Balvert, M., Gorissen, B.L., den Hertog, D.: A universal and structured way to derive dual optimization problem formulations. Informs J. Optim. 2(4), 229–255 (2020)
Roy, A., Dabadghao, S., Marandi, A.: Value of intermediate imaging in adaptive robust radiotherapy planning to manage radioresistance. Ann. Oper. Res. pp. 1–22 (2022)
de Ruiter, F.J., Zhen, J., den Hertog, D.: Dual approach for two-stage robust nonlinear optimization. Oper. Res. (2022)
Shapiro, A., Dentcheva, D., Ruszczynski, A.: Lectures on Stochastic Programming: Modeling and Theory. SIAM, Philadelphia (2021)
Shapiro, A., Nemirovski, A.: On complexity of stochastic programming problems. In: Continuous optimization, pp. 111–146. Springer, New York (2005)
Soyster, A.L.: Convex programming with set-inclusive constraints and applications to inexact linear programming. Oper. Res. 21(5), 1154–1157 (1973)
Subramanyam, A., Gounaris, C.E., Wiesemann, W.: K-adaptability in two-stage mixed-integer robust optimization. Math. Program. Comput. 12(2), 193–224 (2020)
Takeda, A., Taguchi, S., Tütüncü, R.: Adjustable robust optimization models for a nonlinear two-period system. J. Optim. Theory Appl. 136(2), 275–295 (2008)
Wei, L., Gómez, A., Küçükyavuz, S.: Ideal formulations for constrained convex optimization problems with indicator variables. Math. Program. 192(1), 57–88 (2022)
Woolnough, D., Jeyakumar, V., Li, G.: Exact conic programming reformulations of two-stage adjustable robust linear programs with new quadratic decision rules. Optim. Lett. 15(1), 25–44 (2021)
Xidonas, P., Steuer, R., Hassapis, C.: Robust portfolio optimization: A categorized bibliographic review. Ann. Oper. Res. 292(1), 533–552 (2020)
Xu, G., Burer, S.: A copositive approach for two-stage adjustable robust optimization with uncertain right-hand sides. Comput. Optim. Appl. 70(1), 33–59 (2018)
Yanıkoğlu, İ, Gorissen, B.L., den Hertog, D.: A survey of adjustable robust optimization. Eur. J. Oper. Res. 277(3), 799–813 (2019)
Zadeh, N.: Note: A note on the cyclic coordinate ascent method. Manage. Sci. 16(9), 642–644 (1970)
Zeng, B., Zhao, L.: Solving two-stage robust optimization problems using a column-and-constraint generation method. Oper. Res. Lett. 41(5), 457–461 (2013)
Zhang, N., Fang, C.: Saddle point approximation approaches for two-stage robust optimization problems. J. Global Optim. 78(4), 651–670 (2020)
Zhang, X., Liu, X.: A two-stage robust model for express service network design with surging demand. Eur. J. Oper. Res. 299(1), 154–167 (2022)
Zhen, J., Kuhn, D., Wiesemann, W.: A unified theory of robust and distributionally robust optimization via the primal-worst-equals-dual-best principle. Oper. Res. (2023)
Acknowledgements
The first author conducted some parts of this work while he was a visiting researcher at the Eindhoven University of Technology. He wants to express his gratitude for the hospitality of the Department of Industrial Engineering and Innovation Sciences at this institution. The research of the first author was partially supported by INSF (No. 4000183).
Appendices
The appendix of this work is divided into two sections. The first section contains proofs of several points mentioned in the main text; the second section provides a proof of Theorem 5.
Appendix 1 Additional results
In this appendix, we first provide more details on a point about the perspective function mentioned after Remark 1.
Proposition 1
If g is a proper, closed, and convex function, then
$$\begin{aligned} \sup _{t>0,\,x\in \mathbb {R}^{n_x}}g^{per} (x, t)= \sup _{t\ge 0,\,x\in \mathbb {R}^{n_x}}g^{per} (x, t) \end{aligned}$$
and
$$\begin{aligned} \inf _{t>0,\,x\in \mathbb {R}^{n_x}}g^{per} (x, t)= \inf _{t\ge 0,\,x\in \mathbb {R}^{n_x}}g^{per} (x, t). \end{aligned}$$
Proof
Let \(x^0\in \mathbb {R}^{n_x}\). We have
$$\begin{aligned} g^{per} (x^0,0)=\liminf _{\begin{array}{c} (x,t)\rightarrow (x^0,0)\\ t>0 \end{array}}g^{per} (x, t)\le \sup _{t>0,\,x\in \mathbb {R}^{n_x}}g^{per} (x, t). \end{aligned}$$
So, \( \sup _{t>0,x\in \mathbb {R}^{n_x}}g^{per} (x, t)= \sup _{t\ge 0,x\in \mathbb {R}^{n_x}}g^{per} (x, t)\).
As \( \inf _{t>0,x}g^{per} (x, t)\ge \inf _{t\ge 0,x}g^{per} (x, t)\), let \(\ell \in \{g^{per} (x, t)| t\ge 0,x\in \mathbb {R}^{n_x}\}\). We want to show \(\ell \ge \inf _{t>0,x}g^{per} (x, t)\).
1. If \(\ell =g^{per} (x^0, t_0)\) for some \(x^0\in \mathbb {R}^{n_x}\) and \(t_0>0\), then \(\ell \ge \inf _{t>0,x}g^{per} (x, t)\).
2. If \(\ell =g^{per} (x^0,0)\) for some \(x^0\in \mathbb {R}^{n_x}\), then
$$\begin{aligned} \ell =g^{per} (x^0,0)&=\displaystyle \liminf _{\begin{array}{c} (x^i,t_i)\rightarrow (x^0,0)\\ t_i>0 \end{array}}g^{per} (x^i, t_i)\\&\ge \displaystyle \inf _{t>0,\,x\in \mathbb {R}^{n_x}}g^{per} (x, t). \end{aligned}$$
The proof is complete.
\(\square \)
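As a quick numerical illustration of Proposition 1 (not part of the paper), the sketch below uses the assumed instance \(g(x)=x^2+1\), whose perspective is \(g^{per}(x,t)=x^2/t+t\) for \(t>0\), and approximates the infimum over \(t>0\) on a grid.

```python
import numpy as np

# Assumed illustrative instance: g(x) = x**2 + 1 (proper, closed, convex).
def g(x):
    return x**2 + 1.0

def g_per(x, t):
    # perspective of g for t > 0: g_per(x, t) = t * g(x / t) = x**2 / t + t
    return t * g(x / t)

# approximate the infimum over t > 0 on a fine grid
ts = np.linspace(1e-6, 1.0, 2001)
xs = np.linspace(-2.0, 2.0, 401)
X, T = np.meshgrid(xs, ts)
inf_pos = g_per(X, T).min()

# For this g, the liminf definition gives g_per(x, 0) = 0 if x = 0 and
# +inf otherwise, so allowing t = 0 changes neither the infimum (0) nor
# the supremum (+inf), in line with Proposition 1.
print(inf_pos)  # a small positive number approaching the infimum 0
```

Here the grid minimum is attained at \(x=0\) with the smallest positive \(t\), confirming that restricting to \(t>0\) does not change the infimum.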
As a consequence of the above proposition, we have
The next proposition proves the convexity of the set \({\mathcal {V}}\) and the concavity of the function G introduced in the beginning of Sect. 5.
Proposition 2
The set \({\mathcal {V}}\) is convex, and G is a concave function on \({\mathcal {V}}\).
Proof
We consider two points \(\bar{v}=\begin{pmatrix} {\bar{\lambda }} \\ \{{\bar{w}}^j\}_{j=0}^m \end{pmatrix}, {\tilde{v}}=\begin{pmatrix} {\tilde{\lambda }} \\ \{{{\tilde{w}}}^j\}_{j=0}^m \end{pmatrix}\in {\mathcal {V}} \) and \(\ell \in [0,1]\). Since \(\frac{{\bar{w}}^j}{{\bar{\lambda }}_j},\frac{{\tilde{w}}^j}{{\tilde{\lambda }}_j}\in \mathrm{dom}(g_j^*)\), and \(\lambda _{j} {g_j}^*( \tfrac{w^j}{\lambda _j})\) is jointly convex in \((w^j,\lambda _{j})\) for each j, we have the following possible cases:
Case 1. \(\ell {\bar{\lambda }}_j+(1-\ell ){\tilde{\lambda }}_j>0:\) In this case,
Case 2. \(\ell {\bar{\lambda }}_j+(1-\ell ){\tilde{\lambda }}_j=0:\) In this case, if \(0<\ell <1\), then \({\bar{\lambda }}_j=0={\tilde{\lambda }}_j\), and so
If \(\ell =0\), then \({\tilde{\lambda }}_j=0\), and hence
If \(\ell =1\), then \({\bar{\lambda }}_j=0\), and thus
So, in all above three cases, we get
Convexity of all other constraints defining \({\mathcal {V}}\) obviously holds. So, \(\ell \bar{v}+(1-\ell ){\tilde{v}}\in {\mathcal {V}}\), which shows that \({\mathcal {V}}\) is a convex set. The function G is concave on the convex set \({\mathcal {V}}\) due to the concavity of each \(-\lambda _{j} {g_j}^*( \tfrac{w^j}{\lambda _j})\). \(\square \)
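The joint convexity of \(\lambda_j g_j^*(w^j/\lambda_j)\) used above is the standard convexity of the perspective of a convex function. Below is a small numerical sanity check (illustrative only) using the assumed instance \(g(x)=x^2\), whose conjugate is \(g^*(y)=y^2/4\); neither is an object from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed illustrative instance: g(x) = x**2 has conjugate g*(y) = y**2 / 4.
def g_conj(y):
    return y**2 / 4.0

def h(w, lam):
    # perspective of the conjugate: lam * g*(w / lam) = w**2 / (4 * lam), lam > 0
    return lam * g_conj(w / lam)

# midpoint convexity check at random points with lam > 0
for _ in range(1000):
    w1, w2 = rng.normal(size=2)
    l1, l2 = rng.uniform(0.1, 2.0, size=2)
    mid = h((w1 + w2) / 2.0, (l1 + l2) / 2.0)
    avg = (h(w1, l1) + h(w2, l2)) / 2.0
    assert mid <= avg + 1e-12  # joint convexity in (w, lam)
print("joint convexity verified on random samples")
```

Since \(h(w,\lambda)=w^2/(4\lambda)\) here, the midpoint inequality can also be verified by hand, which is exactly the convexity used for each term of G.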
Appendix 2 Proof of Theorem 5
We first recall the optimality condition for a constrained differentiable problem (for more details, see, e.g., [4]). Consider a (non-convex) problem of the form
$$\begin{aligned} \max _{y\in {\mathcal {S}}}~g(y), \end{aligned}$$
where g is a real-valued continuously differentiable function, and \({\mathcal {S}}\) is a nonempty closed convex set. A vector \(y^*\in {\mathcal {S}}\) is called a stationary point of problem (21) if
$$\begin{aligned} \nabla g(y^*)^\top (y-y^*)\le 0,\qquad \forall y\in {\mathcal {S}}, \end{aligned}$$
where \(\nabla g(y^*)\) is the gradient of g at \(y^*\).
Lemma 1
Let g be a real-valued continuously differentiable function defined on the Cartesian product of two closed convex sets \(C_1\subseteq \mathbb {R}^{n_1}\), \( C_2\subseteq \mathbb {R}^{n_2}\). Suppose that \(\bar{y}=(\bar{y}^1,\bar{y}^2)\in C_1\times C_2\). Then
$$\begin{aligned} \nabla g(\bar{y})^\top (y-\bar{y})\le 0,\qquad \forall y\in C_1\times C_2, \end{aligned}$$
if and only if the following properties hold:
(i) \(~\nabla _1 g(\bar{y})^\top (y^1-\bar{y}^1)\le 0,~~~\forall y^1\in C_1,\)
(ii) \(~\nabla _2 g(\bar{y})^\top (y^2-\bar{y}^2)\le 0,~~~\forall y^2\in C_2,\)
where the vector y is partitioned into two component vectors \(y^1\in \mathbb {R}^{n_1}\), \(y^2\in \mathbb {R}^{n_2}\), as \(y\equiv (y^1,y^2)\), and \(\nabla _1 g(\bar{y})=\left( \tfrac{\partial g}{\partial y^1}(\bar{y})\right) \), and \(\nabla _2 g(\bar{y})=\left( \tfrac{\partial g}{\partial y^2}(\bar{y})\right) \) denote the corresponding gradient vectors.
Proof
\((\Rightarrow )\) Let \(y=(y^1,y^2)\in C_1\times C_2\). By setting \( y:=(y^1,\bar{y}^2)\) and \( y:=(\bar{y}^1,y^2)\) in inequality (22), inequalities (i) and (ii) are derived.
\((\Leftarrow )\) Clearly, (i) and (ii) together imply (22). \(\square \)
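Lemma 1 states that stationarity over a Cartesian product is equivalent to blockwise stationarity. The sketch below checks both sides numerically for an assumed concave quadratic over \([0,1]\times[0,1]\); the function and the point \(\bar y\) are illustrative choices, not objects from the paper.

```python
import numpy as np

# Assumed illustrative objective, maximized over C1 x C2 = [0,1] x [0,1]:
# g(y) = -(y1 - 0.5)**2 - (y2 - 2)**2, with maximizer y_bar = (0.5, 1).
def grad(y):
    return np.array([-2.0 * (y[0] - 0.5), -2.0 * (y[1] - 2.0)])

y_bar = np.array([0.5, 1.0])
g1, g2 = grad(y_bar)

ys = np.linspace(0.0, 1.0, 101)
# blockwise conditions (i) and (ii)
cond_i  = all(g1 * (y1 - y_bar[0]) <= 1e-12 for y1 in ys)
cond_ii = all(g2 * (y2 - y_bar[1]) <= 1e-12 for y2 in ys)
# joint stationarity condition over the product set
cond_joint = all(g1 * (y1 - y_bar[0]) + g2 * (y2 - y_bar[1]) <= 1e-12
                 for y1 in ys for y2 in ys)
assert cond_i and cond_ii and cond_joint
print("blockwise and joint stationarity conditions agree at y_bar")
```

Note that \(g_2>0\) at \(\bar y\) (the unconstrained maximizer of the second block lies outside \(C_2\)), so the example exercises the boundary case rather than only \(\nabla g(\bar y)=0\).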
Now we are ready to prove Theorem 5. The main line of reasoning can be found in [19], but it is given here for completeness.
Proof of Theorem 5
Suppose that \(z^*=(z^{1*},z^{2*})\) is a limit point of the sequence \(\{z^k\}_{k\ge 0}\). Without loss of generality, we assume that \(z^k=(u^k,v^k)\rightarrow (z^{1*},z^{2*})\). Our goal is to show that for any \(\zeta =(\zeta ^1,\zeta ^2)\in {\mathcal {U}}\times {\mathcal {V}}\), we have
$$\begin{aligned} \nabla {\mathcal {L}}_{{\bar{x}}}(z^{*})^\top (\zeta -z^{*})\le 0. \end{aligned}$$
According to Lemma 1, the above inequality is equivalent to
$$\begin{aligned} \nabla _1 {\mathcal {L}}_{{\bar{x}}}(z^{*})^\top (\zeta ^1-z^{1*})\le 0,\quad \forall \zeta ^1\in {\mathcal {U}}, \qquad (23) \end{aligned}$$
$$\begin{aligned} \nabla _2 {\mathcal {L}}_{{\bar{x}}}(z^{*})^\top (\zeta ^2-z^{2*})\le 0,\quad \forall \zeta ^2\in {\mathcal {V}}, \qquad (24) \end{aligned}$$
where \(\nabla {\mathcal {L}}_{{\bar{x}}}(z^*)=\left( \nabla _1 {\mathcal {L}}_{{\bar{x}}}(z^{*})^\top ,\nabla _2 {\mathcal {L}}_{\bar{x}}(z^{*})^\top \right) ^\top \) is the gradient of \({\mathcal {L}}_{\bar{x}}\) at \(z^*\). By contradiction, suppose that there exists a vector \(\tilde{\zeta }^2\in {\mathcal {V}}\) such that
$$\begin{aligned} \nabla _2 {\mathcal {L}}_{{\bar{x}}}(z^{*})^\top (\tilde{\zeta }^2-z^{2*})> 0. \qquad (25) \end{aligned}$$
Set \(r^k:=\tilde{\zeta }^2-v^k\). As the sequence \(\{v^k\}_{k\ge 0}\) converges to \(z^{2*}\), the sequence \(\{r^k\}_{k\ge 0}\) converges to \(\tilde{\zeta }^2-z^{2*}\). Thus, due to the continuity of the gradient, there exists \(N>0\) such that for all \(k>N\) we have
$$\begin{aligned} \nabla _2 {\mathcal {L}}_{{\bar{x}}}(z^{k})^\top r^k> 0. \qquad (26) \end{aligned}$$
So, \(d^k:=({\textbf {0}}^\top ,(r^k)^\top )^\top \) is an ascent direction of \({\mathcal {L}}_{{\bar{x}}}\) at \(z^k\). By backtracking line search [4, Lemma 4.3], for a given parameter \(\alpha \in (0,1)\), there exists a step size \(t_k\in (0,1)\) such that
$$\begin{aligned} {\mathcal {L}}_{{\bar{x}}}(z^k+t_k d^k)\ge {\mathcal {L}}_{{\bar{x}}}(z^k)+\alpha t_k \nabla {\mathcal {L}}_{{\bar{x}}}(z^k)^\top d^k. \end{aligned}$$
Therefore
Since \({\mathcal {V}}\) is convex, we have
Hence,
So, the sequence of function values \(\left\{ {\mathcal {L}}_{\bar{x}}(u^k,v^k)\right\} \) is non-decreasing and bounded above; therefore, it is convergent. The last inequality and the convergence of \(\left\{ {\mathcal {L}}_{{\bar{x}}}(u^k,v^k)\right\} \) imply
The above equation and (26) give
which contradicts (25). This proves (24). The inequality (23) can be proved similarly. \(\square \)
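Theorem 5 asserts that the alternating (block coordinate ascent) scheme produces a non-decreasing, convergent sequence of objective values whose limit points are stationary. The toy sketch below mimics this behavior on an assumed biconcave quadratic over a box; the objective, the box \([0,3]^2\), and the closed-form block updates are illustrative assumptions, not the paper's \({\mathcal {L}}_{\bar x}\).

```python
# Assumed illustrative objective, maximized over the box [0, 3]^2:
# L(u, v) = -(u - 1)**2 - (v - 2)**2 + u * v (jointly concave, hence biconcave).
def L(u, v):
    return -(u - 1.0)**2 - (v - 2.0)**2 + u * v

def clip(y):
    # projection onto [0, 3]
    return min(3.0, max(0.0, y))

u, v = 0.0, 0.0
history = [L(u, v)]
for _ in range(50):
    u = clip((2.0 + v) / 2.0)   # exact maximizer of L(., v) over [0, 3]
    v = clip((4.0 + u) / 2.0)   # exact maximizer of L(u, .) over [0, 3]
    history.append(L(u, v))

# The objective values are non-decreasing, as in the convergence proof,
# and the iterates settle at a stationary point of the box-constrained problem
# (v = 3 is active: the unconstrained block maximizer lies outside the box).
assert all(b >= a - 1e-12 for a, b in zip(history, history[1:]))
print(u, v)  # prints 2.5 3.0
```

The limit point \((2.5, 3.0)\) satisfies the blockwise stationarity conditions of Lemma 1, with the second block stopping at the boundary of its feasible set.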
About this article
Cite this article
Khademi, A., Marandi, A. & Soleimani-damaneh, M. A new dual-based cutting plane algorithm for nonlinear adjustable robust optimization. J Glob Optim (2024). https://doi.org/10.1007/s10898-023-01360-2
Keywords
- Adjustable robust optimization
- Fenchel duality
- Biconvex programming
- Perspective function
- Alternating method
- Cutting plane methods