
A new dual-based cutting plane algorithm for nonlinear adjustable robust optimization

Journal of Global Optimization

Abstract

This paper explores a class of nonlinear Adjustable Robust Optimization (ARO) problems, containing here-and-now and wait-and-see variables, with uncertainty in the objective function and constraints. By applying Fenchel’s duality on the wait-and-see variables, we obtain an equivalent dual reformulation, which is a nonlinear static robust optimization problem. Using the dual formulation, we provide conditions under which the ARO problem is convex on the here-and-now decision. Furthermore, since the dual formulation contains a non-concave maximization on the uncertain parameter, we use perspective relaxation and an alternating method to handle the non-concavity. By employing the perspective relaxation, we obtain an upper bound, which we show is the same as the static relaxation of the considered problem. Moreover, invoking the alternating method, we design a new dual-based cutting plane algorithm that is able to find a reasonable lower bound for the optimal objective value of the considered nonlinear ARO model. In addition to sketching and establishing the theoretical features of the algorithms, including convergence analysis, by numerical experiments we reveal the abilities of our cutting plane algorithm in producing locally robust solutions with an acceptable optimality gap.


References

1. Arslan, A.N., Detienne, B.: Decomposition-based approaches for a class of two-stage robust binary optimization problems. INFORMS J. Comput. 34(2), 857–871 (2022)

2. Auslender, A., Teboulle, M.: Asymptotic Cones and Functions in Optimization and Variational Inequalities. Springer Science & Business Media, New York (2006)

3. Beck, A.: First-Order Methods in Optimization. SIAM, Philadelphia (2017)

4. Beck, A.: Introduction to Nonlinear Optimization: Theory, Algorithms, and Applications with Python and MATLAB, 2nd edn. SIAM, Philadelphia (2023)

5. Ben-Tal, A., El Ghaoui, L., Nemirovski, A.: Robust Optimization. Princeton University Press, Princeton (2009)

6. Ben-Tal, A., Goryashko, A., Guslitzer, E., Nemirovski, A.: Adjustable robust solutions of uncertain linear programs. Math. Program. 99(2), 351–376 (2004)

7. Ben-Tal, A., Nemirovski, A.: Robust convex optimization. Math. Oper. Res. 23(4), 769–805 (1998)

8. Ben-Tal, A., Nemirovski, A.: Robust solutions of uncertain linear programs. Oper. Res. Lett. 25(1), 1–13 (1999)

9. Bertsimas, D., Dunning, I.: Multistage robust mixed-integer optimization with adaptive partitions. Oper. Res. 64(4), 980–998 (2016)

10. Bertsimas, D., Goyal, V.: On the power and limitations of affine policies in two-stage adaptive optimization. Math. Program. 134(2), 491–531 (2012)

11. Bertsimas, D., Goyal, V., Lu, B.Y.: A tight characterization of the performance of static solutions in two-stage adjustable robust linear optimization. Math. Program. 150(2), 281–319 (2015)

12. Bertsimas, D., den Hertog, D.: Robust and Adaptive Optimization. Dynamic Ideas LLC, Belmont, MA (2022)

13. Bertsimas, D., Iancu, D.A., Parrilo, P.A.: A hierarchy of near-optimal policies for multistage adaptive optimization. IEEE Trans. Autom. Control 56(12), 2809–2824 (2011)

14. Boni, O., Ben-Tal, A.: Adjustable robust counterpart of conic quadratic problems. Math. Methods Oper. Res. 68(2), 211–233 (2008)

15. Breuer, D.J., Lahrichi, N., Clark, D.E., Benneyan, J.C.: Robust combined operating room planning and personnel scheduling under uncertainty. Oper. Res. Health Care 27, 100276 (2020)

16. Combettes, P.L.: Perspective functions: properties, constructions, and examples. Set-Valued Var. Anal. 26(2), 247–264 (2018)

17. Du, B., Zhou, H., Leus, R.: A two-stage robust model for a reliable p-center facility location problem. Appl. Math. Model. 77, 99–114 (2020)

18. El Ghaoui, L., Oustry, F., Lebret, H.: Robust solutions to uncertain semidefinite programs. SIAM J. Optim. 9(1), 33–52 (1998)

19. Grippo, L., Sciandrone, M.: On the convergence of the block nonlinear Gauss–Seidel method under convex constraints. Oper. Res. Lett. 26(3), 127–136 (2000)

20. Hadjiyiannis, M.J., Goulart, P.J., Kuhn, D.: A scenario approach for estimating the suboptimality of linear decision rules in two-stage robust optimization. In: 2011 50th IEEE Conference on Decision and Control and European Control Conference, pp. 7386–7391. IEEE (2011)

21. Hanasusanto, G.A., Kuhn, D., Wiesemann, W.: K-adaptability in two-stage robust binary programming. Oper. Res. 63(4), 877–891 (2015)

22. Hashemi Doulabi, H., Jaillet, P., Pesant, G., Rousseau, L.M.: Exploiting the structure of two-stage robust optimization models with exponential scenarios. INFORMS J. Comput. 33(1), 143–162 (2021)

23. Hiriart-Urruty, J.B., Lemaréchal, C.: Fundamentals of Convex Analysis. Springer Science & Business Media, Berlin (2004)

24. Kammammettu, S., Li, Z.: Two-stage robust optimization of water treatment network design and operations under uncertainty. Ind. Eng. Chem. Res. 59(3), 1218–1233 (2019)

25. Ke, G.Y.: Managing reliable emergency logistics for hazardous materials: a two-stage robust optimization approach. Comput. Oper. Res. 138, 105557 (2022)

26. Koushki, J., Miettinen, K., Soleimani-damaneh, M.: LR-NIMBUS: an interactive algorithm for uncertain multiobjective optimization with lightly robust efficient solutions. J. Global Optim. 83, 843–863 (2022)

27. Lee, J., Skipper, D., Speakman, E.: Gaining or losing perspective. J. Global Optim. 82(4), 835–862 (2022)

28. Liang, E., Yuan, Z.: Adjustable robust optimal control for industrial 2-mercaptobenzothiazole production processes under uncertainty. Optim. Eng., pp. 1–38 (2022)

29. Löfberg, J.: YALMIP: a toolbox for modeling and optimization in MATLAB. In: Proceedings of the CACSD Conference, Taipei, Taiwan (2004)

30. Lu, M., Shen, Z.J.M.: A review of robust operations management under model uncertainty. Prod. Oper. Manag. 30(6), 1927–1943 (2021)

31. Marandi, A., den Hertog, D.: When are static and adjustable robust optimization problems with constraint-wise uncertainty equivalent? Math. Program. 170(2), 555–568 (2018)

32. Marandi, A., van Houtum, G.J.: Robust location-transportation problems with integer-valued demand. Optimization (2020)

33. MOSEK ApS: The MOSEK optimization toolbox for MATLAB manual, version 9.3.21 (2022)

34. Postek, K., den Hertog, D.: Multistage adjustable robust mixed-integer optimization via iterative splitting of the uncertainty set. INFORMS J. Comput. 28(3), 553–574 (2016)

35. Rockafellar, R.T.: Convex Analysis, vol. 36. Princeton University Press, Princeton (1970)

36. Romeijnders, W., Postek, K.: Piecewise constant decision rules via branch-and-bound based scenario detection for integer adjustable robust optimization. INFORMS J. Comput. 33(1), 390–400 (2021)

37. Roos, K., Balvert, M., Gorissen, B.L., den Hertog, D.: A universal and structured way to derive dual optimization problem formulations. INFORMS J. Optim. 2(4), 229–255 (2020)

38. Roy, A., Dabadghao, S., Marandi, A.: Value of intermediate imaging in adaptive robust radiotherapy planning to manage radioresistance. Ann. Oper. Res., pp. 1–22 (2022)

39. de Ruiter, F.J., Zhen, J., den Hertog, D.: Dual approach for two-stage robust nonlinear optimization. Oper. Res. (2022)

40. Shapiro, A., Dentcheva, D., Ruszczynski, A.: Lectures on Stochastic Programming: Modeling and Theory. SIAM, Philadelphia (2021)

41. Shapiro, A., Nemirovski, A.: On complexity of stochastic programming problems. In: Continuous Optimization, pp. 111–146. Springer, New York (2005)

42. Soyster, A.L.: Convex programming with set-inclusive constraints and applications to inexact linear programming. Oper. Res. 21(5), 1154–1157 (1973)

43. Subramanyam, A., Gounaris, C.E., Wiesemann, W.: K-adaptability in two-stage mixed-integer robust optimization. Math. Program. Comput. 12(2), 193–224 (2020)

44. Takeda, A., Taguchi, S., Tütüncü, R.: Adjustable robust optimization models for a nonlinear two-period system. J. Optim. Theory Appl. 136(2), 275–295 (2008)

45. Wei, L., Gómez, A., Küçükyavuz, S.: Ideal formulations for constrained convex optimization problems with indicator variables. Math. Program. 192(1), 57–88 (2022)

46. Woolnough, D., Jeyakumar, V., Li, G.: Exact conic programming reformulations of two-stage adjustable robust linear programs with new quadratic decision rules. Optim. Lett. 15(1), 25–44 (2021)

47. Xidonas, P., Steuer, R., Hassapis, C.: Robust portfolio optimization: a categorized bibliographic review. Ann. Oper. Res. 292(1), 533–552 (2020)

48. Xu, G., Burer, S.: A copositive approach for two-stage adjustable robust optimization with uncertain right-hand sides. Comput. Optim. Appl. 70(1), 33–59 (2018)

49. Yanıkoğlu, İ., Gorissen, B.L., den Hertog, D.: A survey of adjustable robust optimization. Eur. J. Oper. Res. 277(3), 799–813 (2019)

50. Zadeh, N.: Note: a note on the cyclic coordinate ascent method. Manage. Sci. 16(9), 642–644 (1970)

51. Zeng, B., Zhao, L.: Solving two-stage robust optimization problems using a column-and-constraint generation method. Oper. Res. Lett. 41(5), 457–461 (2013)

52. Zhang, N., Fang, C.: Saddle point approximation approaches for two-stage robust optimization problems. J. Global Optim. 78(4), 651–670 (2020)

53. Zhang, X., Liu, X.: A two-stage robust model for express service network design with surging demand. Eur. J. Oper. Res. 299(1), 154–167 (2022)

54. Zhen, J., Kuhn, D., Wiesemann, W.: A unified theory of robust and distributionally robust optimization via the primal-worst-equals-dual-best principle. Oper. Res. (2023)


Acknowledgements

The first author conducted some parts of this work while he was a visiting researcher at the Eindhoven University of Technology. He wants to express his gratitude for the hospitality of the Department of Industrial Engineering and Innovation Sciences at this institution. The research of the first author was partially supported by INSF (No. 4000183).

Author information


Corresponding author

Correspondence to Majid Soleimani-damaneh.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

The appendix is divided into two sections. The first contains proofs of several claims made in the main text; the second provides a proof of Theorem 5.

Appendix 1 Additional results

In this appendix, we first provide more details on a property of the perspective function mentioned after Remark 1.

Proposition 1

If g is a proper, closed, and convex function, then

$$\begin{aligned} \sup _{t>0,x\in \mathbb {R}^{n_x}}g^{per} (x, t)= \sup _{t\ge 0,x\in \mathbb {R}^{n_x}}g^{per} (x, t), \end{aligned}$$

and

$$\begin{aligned} \inf _{t>0,x\in \mathbb {R}^{n_x}}g^{per} (x, t)= \inf _{t\ge 0,x\in \mathbb {R}^{n_x}}g^{per} (x, t). \end{aligned}$$

Proof

Let \(x^0\in \mathbb {R}^{n_x}\). We have

$$\begin{aligned} g^{per} (x^0, t_0=0)&=\displaystyle \liminf _{(x^i,t_i)\rightarrow (x^0,0)}g^{per} (x^i, t_i>0)\\ {}&\le \displaystyle \sup _{(x^i,t_i)\rightarrow (x^0,0)}g^{per} (x^i, t_i>0)\\ {}&\le \displaystyle \sup _{t>0,x\in \mathbb {R}^{n_x}}g^{per} (x, t). \end{aligned}$$

Since \(x^0\in \mathbb {R}^{n_x}\) was arbitrary, this yields \(\sup _{t\ge 0,x\in \mathbb {R}^{n_x}}g^{per} (x, t)\le \sup _{t>0,x\in \mathbb {R}^{n_x}}g^{per} (x, t)\); as the reverse inequality is immediate, \( \sup _{t>0,x\in \mathbb {R}^{n_x}}g^{per} (x, t)= \sup _{t\ge 0,x\in \mathbb {R}^{n_x}}g^{per} (x, t)\).

For the infima, \( \inf _{t>0,x}g^{per} (x, t)\ge \inf _{t\ge 0,x}g^{per} (x, t)\) holds trivially, so it suffices to prove the reverse inequality. Let \(\ell \in \{g^{per} (x, t)\mid t\ge 0,x\in \mathbb {R}^{n_x}\}\); we show that \(\ell \ge \inf _{t>0,x}g^{per} (x, t)\). Two cases arise:

1.

    If \(\ell =g^{per} (x^0, t_0)\) for some \(x^0\in \mathbb {R}^{n_x}\) and \(t_0>0\), then \(\ell \ge \inf _{t>0,x}g^{per} (x, t)\).

2.

    If \(\ell =g^{per} (x^0,0)\) for some \(x^0\in \mathbb {R}^{n_x}\), then

    $$\begin{aligned} \ell =g^{per} (x^0,0)&=\displaystyle \liminf _{(x^i,t_i)\rightarrow (x^0,0)}g^{per} (x^i, t_i>0)\\&\ge \displaystyle \inf _{(x^i,t_i)\rightarrow (x^0,0)}g^{per} (x^i, t_i)\\&\ge \displaystyle \inf _{t>0,x\in \mathbb {R}^{n_x}}g^{per} (x, t). \end{aligned}$$

    The proof is complete.

\(\square \)

As a consequence of the above proposition, we have

$$\begin{aligned} \sup _{t>0,x}-g^{per} (x, t)=\sup _{t\ge 0,x}-g^{per} (x, t). \end{aligned}$$
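As a quick numerical sanity check of Proposition 1 (an illustration added here, not part of the original analysis), the following Python sketch uses the choice \(g(x)=x^2\), whose closed perspective is \(g^{per}(x,t)=x^2/t\) for \(t>0\), with \(g^{per}(x,0)=0\) if \(x=0\) and \(+\infty \) otherwise; the infimum over \(t>0\) indeed coincides with the infimum over \(t\ge 0\).

```python
import numpy as np

# Illustration of Proposition 1 for g(x) = x^2 (proper, closed, convex).
# Its closed perspective is g_per(x, t) = x^2 / t for t > 0; at t = 0 it
# equals the liminf: 0 if x = 0 and +inf otherwise.

def g_per(x, t):
    if t > 0:
        return x**2 / t
    return 0.0 if x == 0 else np.inf

xs = np.linspace(-2.0, 2.0, 81)
ts_pos = np.linspace(1e-6, 2.0, 200)                 # grid with t > 0 only
vals_pos = [g_per(x, t) for x in xs for t in ts_pos]
vals_all = vals_pos + [g_per(x, 0.0) for x in xs]    # add the t = 0 slice

# Both infima agree (here both equal 0, attained at x = 0), as the
# proposition asserts; the analogous statement holds for the suprema.
print(min(vals_pos), min(vals_all))
```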

The next proposition proves the convexity of the set \({\mathcal {V}}\) and the concavity of the function G introduced at the beginning of Sect. 5.

Proposition 2

The set \({\mathcal {V}}\) is convex, and G is a concave function on \({\mathcal {V}}\).

Proof

We consider two points \(\bar{v}=\begin{pmatrix} {\bar{\lambda }} \\ \{{\bar{w}}^j\}_{j=0}^m \end{pmatrix}, {\tilde{v}}=\begin{pmatrix} {\tilde{\lambda }} \\ \{{{\tilde{w}}}^j\}_{j=0}^m \end{pmatrix}\in {\mathcal {V}} \) and \(\ell \in [0,1]\). Since \(\frac{{\bar{w}}^j}{{\bar{\lambda }}_j},\frac{\tilde{w}^j}{{\tilde{\lambda }}_j}\in dom (g_j^*)\) and, for each j, \(\lambda _{j} {g_j}^*( \tfrac{w^j}{\lambda _j})\) is jointly convex in \((w^j,\lambda _{j})\), we distinguish the following cases:

Case 1. \(\ell {\bar{\lambda }}_j+(1-\ell ){\tilde{\lambda }}_j>0:\) In this case,

$$\begin{aligned}&(\ell {\bar{\lambda }}_j+(1-\ell ){\tilde{\lambda }}_j) g_j^*\left( \frac{\ell {\bar{w}}^j+(1-\ell )\tilde{w}^j}{\ell {\bar{\lambda }}_j+(1-\ell ){\tilde{\lambda }}_j}\right) \le \ell {\bar{\lambda }}_j g_j^*( \frac{\bar{w}^j}{{\bar{\lambda }}_j})+(1-\ell ){\tilde{\lambda }}_j g_j^*(\frac{\tilde{w}^j}{{\tilde{\lambda }}_j}) < \infty \\ {}&\Rightarrow \frac{\ell \bar{w}^j+(1-\ell )\tilde{w}^j}{\ell {\bar{\lambda }}_j+(1-\ell ){\tilde{\lambda }}_j}\in dom (g_j^*) \end{aligned}$$

Case 2. \(\ell {\bar{\lambda }}_j+(1-\ell ){\tilde{\lambda }}_j=0:\) In this case, if \(0<\ell <1\), then \({\bar{\lambda }}_j=0={\tilde{\lambda }}_j\), and so

$$\begin{aligned} (\ell {\bar{\lambda }}_j+(1-\ell ){\tilde{\lambda }}_j) g_j^*\left( \frac{\ell {\bar{w}}^j+(1-\ell )\tilde{w}^j}{\ell {\bar{\lambda }}_j+(1-\ell ){\tilde{\lambda }}_j}\right)&=\delta ^*_{dom(g_j)}(\ell {\bar{w}}^j+(1-\ell ){{\tilde{w}}}^j)\\&\le \delta ^*_{dom(g_j)}(\ell \bar{w}^j)+\delta ^*_{dom(g_j)}((1-\ell ){{\tilde{w}}}^j)\\&< \infty . \end{aligned}$$

If \(\ell =0\), then \({\tilde{\lambda }}_j=0\), and hence

$$\begin{aligned} \hspace{-1.12cm} (\ell {\bar{\lambda }}_j+(1-\ell ){\tilde{\lambda }}_j) g_j^*\left( \frac{\ell {\bar{w}}^j+(1-\ell )\tilde{w}^j}{\ell {\bar{\lambda }}_j+(1-\ell ){\tilde{\lambda }}_j}\right)&=\delta ^*_{dom(g_j)}(\ell {\bar{w}}^j+(1-\ell ){{\tilde{w}}}^j)\\&=\delta ^*_{dom(g_j)}({{\tilde{w}}}^j)< \infty . \end{aligned}$$

If \(\ell =1\), then \({\bar{\lambda }}_j=0\), and thus

$$\begin{aligned} (\ell {\bar{\lambda }}_j+(1-\ell ){\tilde{\lambda }}_j) g_j^*\left( \frac{\ell {\bar{w}}^j+(1-\ell )\tilde{w}^j}{\ell {\bar{\lambda }}_j+(1-\ell ){\tilde{\lambda }}_j}\right)&=\delta ^*_{dom(g_j)}(\ell {\bar{w}}^j+(1-\ell ){{\tilde{w}}}^j)\\&=\delta ^*_{dom(g_j)}({\bar{w}}^j)< \infty \end{aligned}$$

So, in all three subcases of Case 2, we get

$$\begin{aligned}&(\ell {\bar{\lambda }}_j+(1-\ell ){\tilde{\lambda }}_j) g_j^*\left( \frac{\ell {\bar{w}}^j+(1-\ell )\tilde{w}^j}{\ell {\bar{\lambda }}_j+(1-\ell ){\tilde{\lambda }}_j}\right) =\delta ^*_{dom(g_j)}(\ell {\bar{w}}^j+(1-\ell ){{\tilde{w}}}^j) < \infty \\&\Rightarrow \frac{\ell {\bar{w}}^j+(1-\ell ){{\tilde{w}}}^j}{\ell {\bar{\lambda }}_j+(1-\ell ){\tilde{\lambda }}_j}\in dom (g_j^*). \end{aligned}$$

Convexity of the remaining constraints defining \({\mathcal {V}}\) is immediate. So \(\ell \bar{v}+(1-\ell ){\tilde{v}}\in {\mathcal {V}}\), which shows that \({\mathcal {V}}\) is a convex set. Finally, G is concave on the convex set \({\mathcal {V}}\) due to the concavity of each \(-\lambda _{j} {g_j}^*( \tfrac{w^j}{\lambda _j})\). \(\square \)
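The joint convexity of \((w^j,\lambda _j)\mapsto \lambda _j g_j^*(\tfrac{w^j}{\lambda _j})\), which drives the proof above, can be spot-checked numerically. The Python sketch below is a hedged illustration for the particular choice \(g(y)=y^2\), whose conjugate is \(g^*(w)=w^2/4\), so the perspective is \(w^2/(4\lambda )\) for \(\lambda >0\):

```python
import numpy as np

# Randomized midpoint check of the joint convexity of
# (w, lam) -> lam * g*(w / lam) for the example g(y) = y^2,
# whose conjugate is g*(w) = w^2 / 4, so the perspective is w^2 / (4 lam).

rng = np.random.default_rng(0)

def persp(w, lam):
    return w**2 / (4.0 * lam)    # defined for lam > 0

for _ in range(10_000):
    w1, w2 = rng.normal(size=2)
    l1, l2 = rng.uniform(0.1, 2.0, size=2)
    ell = rng.uniform()
    lhs = persp(ell * w1 + (1 - ell) * w2, ell * l1 + (1 - ell) * l2)
    rhs = ell * persp(w1, l1) + (1 - ell) * persp(w2, l2)
    assert lhs <= rhs + 1e-9     # the convexity inequality holds
print("joint convexity verified on 10,000 random pairs")
```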

Appendix 2 Proof of Theorem 5

We first recall the optimality (stationarity) condition for a constrained differentiable problem (for more details, see, e.g., [4]). Consider a (possibly non-convex) problem of the form

$$\begin{aligned} \displaystyle \sup _{y} \left\{ g(y)|~y\in {\mathcal {S}}\right\} , \end{aligned}$$
(21)

where g is a real-valued continuously differentiable function, and \({\mathcal {S}}\) is a nonempty closed convex set. A vector \(y^*\in {\mathcal {S}}\) is called a stationary point of problem (21) if

$$\begin{aligned} \nabla g(y^*)^\top (y-y^*)\le 0,~~~\forall y\in {\mathcal {S}}, \end{aligned}$$

where \(\nabla g(y^*)\) is the gradient of g at \(y^*\).
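When \({\mathcal {S}}\) is polyhedral, checking this condition amounts to solving a linear program, since \(y^*\) is stationary if and only if \(\max _{y\in {\mathcal {S}}}\nabla g(y^*)^\top (y-y^*)\le 0\). The following Python sketch makes this concrete on a toy instance of our own choosing (a box \({\mathcal {S}}=[-1,1]^2\) and \(g(y)=-\Vert y\Vert ^2\), not an example from the paper):

```python
import numpy as np
from scipy.optimize import linprog

# Stationarity test over the box S = [lo, hi]^n: y* is stationary iff the LP
#   max_{y in S} grad^T (y - y*)
# has a non-positive optimal value. Toy instance: g(y) = -||y||^2.

def is_stationary(y_star, grad, lo=-1.0, hi=1.0, tol=1e-9):
    n = len(y_star)
    # linprog minimizes, so minimize -grad^T y to obtain max_{y in S} grad^T y
    res = linprog(c=-grad, bounds=[(lo, hi)] * n, method="highs")
    return (-res.fun - grad @ y_star) <= tol

y_star = np.zeros(2)                           # unconstrained maximizer of g
print(is_stationary(y_star, -2.0 * y_star))    # True: the gradient vanishes

y_bad = np.array([0.5, 0.0])
print(is_stationary(y_bad, -2.0 * y_bad))      # False: an ascent direction exists
```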

Lemma 1

Let g be a real-valued continuously differentiable function defined on the Cartesian product of two closed convex sets \(C_1\subseteq \mathbb {R}^{n_1}\), \( C_2\subseteq \mathbb {R}^{n_2}\). Suppose that \(\bar{y}=(\bar{y}^1,\bar{y}^2)\in C_1\times C_2\). Then

$$\begin{aligned} \nabla g(\bar{y})^\top (y-\bar{y})\le 0,~~~\forall y\in C_1\times C_2, \end{aligned}$$
(22)

if and only if the following properties hold:

(i)

    \(~\nabla _1 g(\bar{y})^\top (y^1-\bar{y}^1)\le 0,~~~\forall y^1\in C_1,\)

(ii)

    \(~\nabla _2 g(\bar{y})^\top (y^2-\bar{y}^2)\le 0,~~~\forall y^2\in C_2,\)

where the vector y is partitioned into two component vectors \(y^1\in \mathbb {R}^{n_1}\) and \(y^2\in \mathbb {R}^{n_2}\), as \(y\equiv (y^1,y^2)\), and \(\nabla _1 g(\bar{y})=\left( \tfrac{\partial g}{\partial y^1}(\bar{y})\right) \) and \(\nabla _2 g(\bar{y})=\left( \tfrac{\partial g}{\partial y^2}(\bar{y})\right) \) denote the corresponding partial gradients.

Proof

\((\Rightarrow )\) Let \(y=(y^1,y^2)\in C_1\times C_2\). By setting \( y:=(y^1,\bar{y}^2)\) and \( y:=(\bar{y}^1,y^2)\) in inequality (22), inequalities (i) and (ii) are derived.

\((\Leftarrow )\) Clearly, (i) and (ii) together imply (22). \(\square \)

Now we are ready to prove Theorem 5. The main line of reasoning can be found in [19], but we include it here for completeness.

Proof of Theorem 5

Suppose that \(z^*=(z^{1*},z^{2*})\) is a limit point of the sequence \(\{z^k\}_{k\ge 0}\). Without loss of generality, we assume that \(z^k=(u^k,v^k)\rightarrow (z^{1*},z^{2*})\). Our goal is to show that for any \(\zeta =(\zeta ^1,\zeta ^2)\in {\mathcal {U}}\times {\mathcal {V}}\), we have

$$\begin{aligned} \nabla {\mathcal {L}}_{{\bar{x}}}(z^*)^\top (\zeta -z^*)\le 0. \end{aligned}$$

According to Lemma 1, the above inequality is equivalent to

$$\begin{aligned}&\nabla _1 {\mathcal {L}}_{{\bar{x}}}(z^{*})^\top (\zeta ^1-z^{1*})\le 0,~~~\forall \zeta ^1\in {\mathcal {U}}, \end{aligned}$$
(23)
$$\begin{aligned}&\nabla _2 {\mathcal {L}}_{{\bar{x}}}(z^{*})^\top (\zeta ^2-z^{2*})\le 0,~~~\forall \zeta ^2\in {\mathcal {V}}, \end{aligned}$$
(24)

where \(\nabla {\mathcal {L}}_{{\bar{x}}}(z^*)=\left( \nabla _1 {\mathcal {L}}_{{\bar{x}}}(z^{*})^\top ,\nabla _2 {\mathcal {L}}_{\bar{x}}(z^{*})^\top \right) ^\top \) is the gradient of \({\mathcal {L}}_{\bar{x}}\) at \(z^*\). By contradiction, suppose that there exists a vector \(\tilde{\zeta }^2\in {\mathcal {V}}\), such that

$$\begin{aligned} \nabla _2 {\mathcal {L}}_{{\bar{x}}}(z^{*})^\top (\tilde{\zeta }^2-z^{2*})>0. \end{aligned}$$
(25)

Set \(r^k:=\tilde{\zeta }^2-v^k\). As the sequence \(\{v^k\}_{k\ge 0}\) converges to \(z^{2*}\), the sequence \(\{r^k\}_{k\ge 0}\) converges to \(\tilde{\zeta }^2-z^{2*}\). Thus, due to the continuity of the gradient, there exists \(N>0\) such that for all \(k>N\) we have

$$\begin{aligned} \nabla _2 {\mathcal {L}}_{{\bar{x}}}(z^k)^\top r^k>0. \end{aligned}$$

So, \(d^k:=({\textbf {0}}^\top ,(r^k)^\top )^\top \) is an ascent direction of \({\mathcal {L}}_{{\bar{x}}}\) at \(z^k\). By backtracking line search [4, Lemma 4.3], for a given parameter \(\alpha \in (0,1)\), there exists a step size \(t_k\in (0,1)\) such that

$$\begin{aligned} {\mathcal {L}}_{{\bar{x}}}(z^k+t_k d^k)- {\mathcal {L}}_{{\bar{x}}}(z^k)\ge \alpha t_k \nabla {\mathcal {L}}_{{\bar{x}}}(z^k)^\top d^k, ~~~\forall k>N. \end{aligned}$$

Therefore

$$\begin{aligned} {\mathcal {L}}_{{\bar{x}}}(u^k,v^k+t_k r^k)- {\mathcal {L}}_{\bar{x}}(u^k,v^k)\ge \alpha t_k \nabla _2 {\mathcal {L}}_{{\bar{x}}}(z^k)^\top r^k>0, ~~~\forall k>N. \end{aligned}$$
(26)

Since \({\mathcal {V}}\) is convex, we have

$$\begin{aligned} v^k+t_k r^k=(1-t_k) v^k+t_k{\tilde{\zeta }}^2\in {\mathcal {V}}, ~~~\forall k>N. \end{aligned}$$

Hence,

$$\begin{aligned} {\mathcal {L}}_{{\bar{x}}}(u^{k+1},v^{k+1})\ge {\mathcal {L}}_{\bar{x}}(u^{k},v^{k+1})\ge {\mathcal {L}}_{{\bar{x}}}(u^k,v^k+t_kr^k)> {\mathcal {L}}_{{\bar{x}}}(u^k,v^k), ~~~\forall k>N. \end{aligned}$$

So, the sequence of function values \(\left\{ {\mathcal {L}}_{\bar{x}}(u^k,v^k)\right\} \) is non-decreasing and bounded above; therefore, it is convergent. The last inequality and the convergence of \(\left\{ {\mathcal {L}}_{{\bar{x}}}(u^k,v^k)\right\} \) imply

$$\begin{aligned} \displaystyle \lim _{k\rightarrow \infty } {\mathcal {L}}_{\bar{x}}(u^k,v^k+t_k r^k)-{\mathcal {L}}_{{\bar{x}}}(u^{k},v^{k})=0. \end{aligned}$$

The above equation and (26) give

$$\begin{aligned} \nabla _2 {\mathcal {L}}_{{\bar{x}}}(z^*)^\top (\tilde{\zeta }^2-z^{2*})=0, \end{aligned}$$

which contradicts (25). This proves (24). Inequality (23) can be proved similarly. \(\square \)
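To make the structure of the argument tangible, the following minimal Python sketch runs a two-block alternating-maximization scheme of the kind analyzed above on a toy concave objective (our illustrative choice, not the paper's model). The per-block maximizations are exact, so the value sequence is non-decreasing and bounded above, hence convergent, which is precisely the property exploited in the proof.

```python
# Two-block alternating maximization of the toy concave function
#   L(u, v) = -(u - a)^2 - (v - u)^2   over the box [0, 1]^2,
# updating one block at a time while holding the other fixed.

a = 0.8  # illustrative parameter

def clip(t):
    return min(max(t, 0.0), 1.0)

def L(u, v):
    return -(u - a) ** 2 - (v - u) ** 2

u, v = 0.0, 1.0
values = [L(u, v)]
for _ in range(50):
    u = clip((a + v) / 2.0)  # exact maximizer of L(., v) over [0, 1]
    v = clip(u)              # exact maximizer of L(u, .) over [0, 1]
    values.append(L(u, v))

# The value sequence is monotone non-decreasing, the key property above:
assert all(x <= y + 1e-12 for x, y in zip(values, values[1:]))
print(u, v, values[-1])      # the iterates approach the stationary point (a, a)
```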

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Khademi, A., Marandi, A. & Soleimani-damaneh, M. A new dual-based cutting plane algorithm for nonlinear adjustable robust optimization. J Glob Optim (2024). https://doi.org/10.1007/s10898-023-01360-2
