Deriving robust counterparts of nonlinear uncertain inequalities

  • Full Length Paper
  • Series A
  • Published in: Mathematical Programming

Abstract

In this paper we provide a systematic way to construct the robust counterpart of a nonlinear uncertain inequality that is concave in the uncertain parameters. We use convex analysis (support functions, conjugate functions, Fenchel duality) and conic duality in order to convert the robust counterpart into an explicit and computationally tractable set of constraints. It turns out that to do so one has to calculate the support function of the uncertainty set and the concave conjugate of the nonlinear constraint function. Conveniently, these two computations are completely independent. This approach has several advantages. First, it provides an easy structured way to construct the robust counterpart both for linear and nonlinear inequalities. Second, it shows that for new classes of uncertainty regions and for new classes of nonlinear optimization problems tractable counterparts can be derived. We also study some cases where the inequality is nonconcave in the uncertain parameters.

Notes

  1. Here the conjugate is with respect to both arguments.

References

  1. AIMMS: Paragon Decision Technology. http://www.aimms.com/operations-research/mathematical-programming/robust-optimization (2012)

  2. Anderson, T.W., Darling, D.A.: Asymptotic theory of certain goodness-of-fit criteria based on stochastic processes. Ann. Math. Stat. 23(2), 193–212 (1952)

  3. Averbakh, I., Zhao, Y.-B.: Explicit reformulations for robust optimization problems with general uncertainty sets. SIAM J. Optim. 18(4), 1436–1466 (2007)

  4. Ben-Tal, A., Ben-Israel, A., Teboulle, M.: Certainty equivalents and information measures: duality and extremal principles. J. Math. Anal. Appl. 157, 211–236 (1991)

  5. Ben-Tal, A., El-Ghaoui, L., Nemirovski, A.: Robust Optimization. Princeton Series in Applied Mathematics. Princeton University Press, Princeton (2009)

  6. Ben-Tal, A., Nemirovski, A.: Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications. MPS-SIAM Series on Optimization. SIAM, Philadelphia (2001)

  7. Ben-Tal, A., den Hertog, D.: Immunizing conic quadratic optimization problems against implementation errors. CentER Discussion Paper 2011–060, Tilburg University, 2011 (to appear in Mathematical Programming)

  8. Ben-Tal, A., den Hertog, D., De Waegenaere, A.M.B., Melenberg, B., Rennen, G.: Robust solutions of optimization problems affected by uncertain probabilities. Manag. Sci. 59(2), 341–357 (2013)

  9. Ben-Tal, A., den Hertog, D., Laurent, M.: Hidden Convexity in Partially Separable Optimization. CentER Discussion Paper 2011-070, Tilburg University (2011)

  10. Bertsimas, D., Brown, D., Caramanis, C.: Theory and applications of robust optimization. SIAM Rev. 53(3), 464–501 (2011)

  11. Bertsimas, D., Pachamanova, D., Sim, M.: Robust linear optimization under general norms. Oper. Res. Lett. 32, 510–516 (2004)

  12. den Hertog, D.: Interior Point Approach to Linear, Quadratic and Convex Programming. Kluwer, Dordrecht (1994)

  13. Goh, J., Sim, M.: Robust optimization made easy with ROME. Oper. Res. 59(4), 973–985 (2011)

  14. Golub, G.H., van Loan, C.F.: Matrix Computations, 2nd edn. Johns Hopkins University Press, London (1989)

  15. Gushchin, A.A.: On an extension of the notion of \(f\)-divergence. Theory Probab. Its Appl. 52(3), 439–455 (2008)

  16. Löfberg, J.: Automatic robust convex programming. Optim. Methods Softw. 27(1), 115–129 (2012)

  17. Nemirovski, A.: Lectures on Robust Convex Optimization. Lecture 2 of Lecture Notes. http://www2.isye.gatech.edu/nemirovs/Lect2 (2009)

  18. Nesterov, Y.E., Nemirovski, A.: Interior Point Polynomial Algorithms in Convex Programming. SIAM Studies in Applied Mathematics, Philadelphia (1993)

  19. Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1970)

Acknowledgments

We would like to thank Bram Gorissen (Tilburg University) for his critical reading of the paper.

Corresponding author

Correspondence to Dick den Hertog.

Additional information

Part of this paper was written when Aharon Ben-Tal was visiting Centrum Wiskunde & Informatica in Amsterdam, The Netherlands, as a CWI Distinguished Scientist.

Research partly supported by BSF Grant 2008302.

Appendices

Appendix A: Conjugate functions, support functions and Fenchel duality

In this section we give some basic results on conjugate functions, support functions and Fenchel duality. For a detailed treatment we refer to [19].

We start with some well-known results on conjugate functions. First, note that \(f^*\) is closed convex and \(g_*\) is closed concave; moreover, \(f^{**}=f\) and \(g_{**}=g\) whenever \(f\) is closed, proper and convex, and \(g\) is closed, proper and concave. It is well-known that for \(a>0\)

$$\begin{aligned} (a f)^*(y) = a f^* \left( \frac{y}{a}\right) \quad \hbox {and} \quad (a g)_*(y) = a g_* \left( \frac{y}{a}\right) \!, \end{aligned}$$
(49)

and for \(\tilde{f}(x)=f(ax)\) and \(\tilde{g}(x) = g(ax)\), \(a>0\), we have

$$\begin{aligned} \tilde{f}^*(y) = f^*\left( \frac{y}{a}\right) \quad \hbox {and} \quad \tilde{g}_*(y) = g_*\left( \frac{y}{a}\right) , \end{aligned}$$

and for \(\tilde{f}(x)=f(x-a)\) and \(\tilde{g}(x) = g(x-a)\) we have

$$\begin{aligned} \tilde{f}^*(y) = f^*(y) + ay \quad \hbox {and} \quad \tilde{g}_*(y) = g_*(y) + ay. \end{aligned}$$
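These scaling and shifting rules are easy to verify numerically. The sketch below (an editor's illustration, not part of the paper) approximates \(f^*(y)=\sup_x\{yx-f(x)\}\) by brute force on a grid, using \(f(x)=x^2/2\), whose conjugate \(f^*(y)=y^2/2\) is known in closed form:

```python
import numpy as np

def conjugate(f, y, xs):
    # Brute-force convex conjugate f*(y) = sup_x { y*x - f(x) } on a grid.
    return np.max(y * xs - f(xs))

xs = np.linspace(-50.0, 50.0, 200_001)  # wide grid so the sup is attained inside
f = lambda x: 0.5 * x**2                # f*(y) = y**2 / 2

a, y = 3.0, 2.0
# (a f)*(y) = a f*(y/a)
lhs = conjugate(lambda x: a * f(x), y, xs)
rhs = a * conjugate(f, y / a, xs)
assert abs(lhs - rhs) < 1e-3

# f~(x) = f(a x)  =>  f~*(y) = f*(y/a)
lhs = conjugate(lambda x: f(a * x), y, xs)
rhs = conjugate(f, y / a, xs)
assert abs(lhs - rhs) < 1e-3

# f~(x) = f(x - a)  =>  f~*(y) = f*(y) + a*y
lhs = conjugate(lambda x: f(x - a), y, xs)
rhs = conjugate(f, y, xs) + a * y
assert abs(lhs - rhs) < 1e-3
```

The same grid check works for any one-dimensional convex \(f\) whose growth keeps the supremum inside the grid.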

In this paper we also frequently use the following sum rules for conjugate functions.

Lemma 6.1

Assume that \(f_i,\; i=1,\ldots ,m\), are convex, and the intersection of the relative interiors of the domains of \(f_i,\; i=1,\ldots ,m\), is nonempty, i.e., \(\cap _{i=1}^m \hbox {ri}(\hbox {dom}\, f_i) \ne \emptyset \). Then

$$\begin{aligned} \left( \sum _{i=1}^m f_i \right) ^*(s) = \inf _{\{v^i\}_{i=1}^m} \left\{ \sum _{i=1}^m f_i^*(v^i) \ | \ \sum _{i=1}^m v^i = s\right\} , \end{aligned}$$

and the \(\inf \) is attained for some \(v^i,\; i=1,\ldots ,m\).\(\square \)
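For two quadratics the infimum in Lemma 6.1 can be evaluated explicitly: with \(f_1(x)=x^2/2\) and \(f_2(x)=x^2\) one gets \((f_1+f_2)^*(s)=s^2/6\), attained at \(v=s/3\). A brute-force numeric check (an editor's illustration, not part of the paper):

```python
import numpy as np

def conjugate(f, y, xs):
    # Brute-force f*(y) = sup_x { y*x - f(x) } on a grid.
    return np.max(y * xs - f(xs))

xs = np.linspace(-50.0, 50.0, 200_001)
f1 = lambda x: 0.5 * x**2    # f1*(v) = v**2 / 2
f2 = lambda x: x**2          # f2*(v) = v**2 / 4

s = 1.5
lhs = conjugate(lambda x: f1(x) + f2(x), s, xs)   # conjugate of the sum
vs = np.linspace(-10.0, 10.0, 40_001)
rhs = np.min(vs**2 / 2 + (s - vs)**2 / 4)         # inf over v1 + v2 = s
assert abs(lhs - rhs) < 1e-3
assert abs(lhs - s**2 / 6) < 1e-3                 # analytic: ((3/2)x^2)*(s) = s^2/6
```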

Corollary 6.1

Assume that \(f_i,\; i=1,\ldots ,m\), are convex and separable, i.e., \(f_i(x)\) depends only on the \(i\)-th coordinate: \(f_i(x) = f_i(x_i)\). Then

$$\begin{aligned} \left( \sum _{i=1}^m f_i \right) ^*(s) = \sum _{i=1}^m f_i^*(s_i) . \end{aligned}$$

\(\square \)

Lemma 6.2

Assume that \(g_i,\; i=1,\ldots ,m\), are concave, and the intersection of the relative interiors of the domains of \(g_i,\; i=1,\ldots ,m\), is nonempty, i.e., \(\cap _{i=1}^m \hbox {ri}(\hbox {dom}\, g_i) \ne \emptyset \). Then

$$\begin{aligned} \left( \sum _{i=1}^m g_i \right) _*(s) = \sup _{\{v^i\}_{i=1}^m} \left\{ \sum _{i=1}^m (g_i)_*(v^i) \ | \ \sum _{i=1}^m v^i = s\right\} , \end{aligned}$$

and the \(\sup \) is attained for some \(v^i,\; i=1,\ldots ,m\).\(\square \)

The following useful lemma states that the support function of the Minkowski sum of sets is the sum of the corresponding support functions.

Lemma 6.3

$$\begin{aligned} \delta ^*(y | S_1 + S_2 + \cdots + S_k) = \sum _{i=1}^k \delta ^*(y | S_i). \end{aligned}$$

\(\square \)

Proof

The proof easily follows by using the definition of the support function:

$$\begin{aligned} \delta ^*(y | S_1 + S_2 + \cdots + S_k)&= \sup _{x^1 \in S_1,\ldots , x^k \in S_k} y^T(x^1+ \cdots +x^k) \\&= \sup _{x^1 \in S_1} y^Tx^1 + \cdots + \sup _{x^k \in S_k} y^Tx^k \\&= \sum _{i=1}^k \delta ^*(y | S_i). \end{aligned}$$

\(\square \)
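As a concrete instance of Lemma 6.3: the Minkowski sum of two Euclidean balls \(B(c_1,r_1)+B(c_2,r_2)\) is \(B(c_1+c_2,r_1+r_2)\), and the support function of a ball is \(\delta^*(y\,|\,B(c,r)) = c^Ty + r\Vert y\Vert_2\). A small numeric check (an editor's illustration, not part of the paper):

```python
import numpy as np

def support_ball(y, c, r):
    # Support function of the Euclidean ball B(c, r) = {x : ||x - c||_2 <= r}:
    # delta*(y | B(c, r)) = c^T y + r * ||y||_2.
    return c @ y + r * np.linalg.norm(y)

c1, r1 = np.array([1.0, 0.0]), 2.0
c2, r2 = np.array([-3.0, 4.0]), 0.5
for y in (np.array([0.6, -0.8]), np.array([1.0, 2.0]), np.array([0.0, -1.0])):
    # The Minkowski sum of the two balls is the ball B(c1 + c2, r1 + r2).
    lhs = support_ball(y, c1 + c2, r1 + r2)
    rhs = support_ball(y, c1, r1) + support_ball(y, c2, r2)
    assert abs(lhs - rhs) < 1e-12
```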

The following lemma is a result on the support function for the intersection of several sets.

Lemma 6.4

Let \(S_1, \ldots , S_k\) be closed convex sets, such that \(\bigcap _i \hbox {ri}(S_i) \ne \emptyset \), and let \(S = \cap _{i=1}^k S_i\). Then

$$\begin{aligned} \delta ^*(y| S) = \min \left\{ \sum _{i=1}^k \delta ^*(v^i|S_i) \ | \ \sum _{i=1}^k v^i = y \right\} . \end{aligned}$$

\(\square \)
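For intervals on the real line, Lemma 6.4 can be checked by brute force: \(\delta^*(y\,|\,[a,b]) = by\) for \(y \ge 0\) and \(ay\) otherwise, and with \(S_1=[0,2],\; S_2=[1,3]\) we have \(S=[1,2]\). A numeric sketch (an editor's illustration, not part of the paper):

```python
import numpy as np

def support_interval(y, a, b):
    # Support function of [a, b]: delta*(y | [a, b]) = b*y if y >= 0 else a*y.
    return b * y if y >= 0 else a * y

a1, b1 = 0.0, 2.0      # S1 = [0, 2]
a2, b2 = 1.0, 3.0      # S2 = [1, 3], so S = S1 ∩ S2 = [1, 2]
v1 = np.linspace(-20.0, 20.0, 80_001)
for y in (-1.0, 0.5, 2.0):
    direct = support_interval(y, 1.0, 2.0)
    v2 = y - v1        # enforce the splitting v1 + v2 = y
    vals = (np.where(v1 >= 0, b1 * v1, a1 * v1)
            + np.where(v2 >= 0, b2 * v2, a2 * v2))
    assert abs(direct - vals.min()) < 1e-3
```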

Corollary 6.2

Let \(y^{[1]}, y^{[2]}, \ldots , y^{[k]}\) be a partition of the variables \((y^1, y^2, \ldots , y^n)\) into \(k\) mutually exclusive subvectors. Let \(S_1, \ldots , S_k\) be closed convex sets, and let \(S = S_1 \times \cdots \times S_k\). Then

$$\begin{aligned} \delta ^*(y| S) = \sum _{i=1}^k \delta ^*(y^{[i]}|S_i). \end{aligned}$$

\(\square \)

We now state three results which are used in this paper to derive tractable robust counterparts. The first relates the conjugate of the adjoint function [see (4)] to the conjugate of the original function. Note that \(f^{\lozenge }(x)\) is convex if \(f(x)\) is convex. The next lemma can be used in cases where \(f ^{*}\) is not available in closed form, but \((f^{\lozenge })^{*}\) is.

Lemma 6.5

[15] For the conjugate of a function \(f:\mathbb {R}_+ \longrightarrow \mathbb {R}\) and the conjugate of its adjoint \(f^{\lozenge }\), we have

$$\begin{aligned} f^*(s) = \inf \{ y \in \mathbb {R}: (f^{\lozenge })^*(-y) \le -s\}. \end{aligned}$$

\(\square \)

The next lemma can be used in cases where \(f^{-1}\) is not available in closed form, but \((f^{-1})^{*}\) is.

Lemma 6.6

[4] Let \(f:\mathbb {R}\longrightarrow \mathbb {R}\) be strictly increasing and concave. Then, for all \(y >0\)

$$\begin{aligned} \left( f^{-1} \right) ^*(y) = -y f_*\left( \frac{1}{y}\right) = -(f_*)^\lozenge (y). \end{aligned}$$

\(\square \)
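Lemma 6.6 can be checked with \(f(x)=\ln x\), which is strictly increasing and concave: then \(f^{-1}(t)=e^t\), \((f^{-1})^*(y) = y\ln y - y\), and \(f_*(y) = 1+\ln y\), so \(-yf_*(1/y) = y\ln y - y\) as claimed. A brute-force numeric confirmation (an editor's illustration, not part of the paper):

```python
import numpy as np

# f(x) = ln(x): strictly increasing and concave on (0, inf); f^{-1}(t) = exp(t).
xs = np.linspace(1e-6, 60.0, 2_000_001)
log_xs = np.log(xs)
ts = np.linspace(-20.0, 10.0, 600_001)
exp_ts = np.exp(ts)

def conj_exp(y):
    # (f^{-1})*(y) = sup_t { y*t - exp(t) }; closed form y*ln(y) - y for y > 0.
    return np.max(y * ts - exp_ts)

def concave_conj_log(y):
    # f_*(y) = inf_x { y*x - ln(x) }; closed form 1 + ln(y) for y > 0.
    return np.min(y * xs - log_xs)

for y in (0.5, 1.0, 3.0):
    lhs = conj_exp(y)
    rhs = -y * concave_conj_log(1.0 / y)          # right-hand side of Lemma 6.6
    assert abs(lhs - rhs) < 1e-3
    assert abs(lhs - (y * np.log(y) - y)) < 1e-3  # closed-form cross-check
```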

The next lemma gives a useful result on the conjugate of a function composed with a linear transformation.

Lemma 6.7

Let \(A\) be a linear transformation from \(\mathbb {R}^n\) to \(\mathbb {R}^m\). Assume there exists an \(x\) such that \(Ax \in \hbox {ri}(\hbox {dom}\ g)\). Then, for each convex function \(g\) on \(\mathbb {R}^m\), one has

$$\begin{aligned} (gA)^*(z) = \inf _y \{g^*(y)|A^Ty=z\}, \end{aligned}$$

where for each \(z\) the infimum is attained, and where the function \(gA\) is defined by

$$\begin{aligned} (gA)(x) = g(Ax). \end{aligned}$$

\(\square \)

We define the primal problem:

$$\begin{aligned} (P) \quad \quad \inf \{ f(x) - g(x) \ | \ x \in \small {\hbox { dom}}(f)\cap \small {\hbox {dom}}(g) \}. \end{aligned}$$

The Fenchel dual of \((P)\) is given by:

$$\begin{aligned} (D) \quad \quad \sup \{ g_*(y) - f^*(y) \ | \ y \in \small {\hbox { dom}}(g_*)\cap \small {\hbox {dom}}(f^*) \}. \end{aligned}$$

Now we can give the well-known Fenchel duality theorem.

Theorem 6.1

If \( \hbox {ri} (\hbox {dom}(f))\cap \hbox { ri}(\hbox {dom}(g)) \ne \emptyset \) then the optimal values of \((P)\) and \((D)\) are equal and the maximal value of \((D)\) is attained.

If \( \hbox {ri}(\hbox {dom}(g_*))\cap \hbox {ri}(\hbox {dom}(f^*)) \ne \emptyset \) then the optimal values of \((P)\) and \((D)\) are equal and the minimal value of \((P)\) is attained. \(\square \)

Note that since \(f^{**}=f\) and \(g_{**}=g\), we have that the dual of \((D)\) is \((P)\).
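As a concrete check of Theorem 6.1, take \(f(x)=x^2/2\) and \(g(x)=-(x-1)^2/2\); then \(f^*(y)=y^2/2\), \(g_*(y)=y-y^2/2\), and both \((P)\) and \((D)\) have optimal value \(1/4\), attained at \(x=y=1/2\). A numeric sketch (an editor's illustration, not part of the paper):

```python
import numpy as np

xs = np.linspace(-20.0, 20.0, 400_001)
ys = np.linspace(-20.0, 20.0, 400_001)

# (P): inf f(x) - g(x) with f(x) = x^2/2 and g(x) = -(x - 1)^2/2.
primal = np.min(0.5 * xs**2 + 0.5 * (xs - 1)**2)

# (D): sup g_*(y) - f*(y) with f*(y) = y^2/2 and g_*(y) = y - y^2/2.
dual = np.max((ys - 0.5 * ys**2) - 0.5 * ys**2)

assert abs(primal - 0.25) < 1e-6   # optimal value of (P), attained at x = 1/2
assert abs(dual - 0.25) < 1e-6     # optimal value of (D), attained at y = 1/2
assert abs(primal - dual) < 1e-6   # strong duality holds on this instance
```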

Appendix B: Conic quadratic optimization

B.1 Conic quadratic duality

Consider the following primal conic quadratic optimization problem:

$$\begin{aligned} \min _x \{ c^Tx \ | \ Rx=r, \Vert D_ix - d_i\Vert _2 \le p_i^Tx -q_i, i=1,\ldots ,K\}. \end{aligned}$$
(P)

This can be rewritten as

$$\begin{aligned} \min _x \{ c^Tx \ | \ Rx=r, B_i x - b_i \in L^{m_i}, i=1,\ldots ,K \}, \end{aligned}$$
(P1)

where

$$\begin{aligned} B_i = \left[ \begin{array}{c} D_i \\ p_i^T \end{array} \right] , \quad b_i = \left[ \begin{array}{c} d_i \\ q_i \end{array} \right] , \end{aligned}$$

and \(L^{m_i}\) is the Lorentz cone of order \(m_i\).

The dual problem of \((P)\) is given by

$$\begin{aligned} {\left\{ \begin{array}{ll} \max _{v,y,z} \left\{ r^Tv + \sum _{i=1}^K \left( d_i^Tz_i + q_i y_i \right) \right\} \\ R^Tv + \sum _{i=1}^K (D_i^Tz_i + y_ip_i) = c \\ \Vert z_i\Vert _2 \le y_i, \quad i=1,\ldots , K. \end{array}\right. } \end{aligned}$$
(D)

The dual problem of \((P1)\) is given by

$$\begin{aligned} {\left\{ \begin{array}{ll} \max _{v,\eta } \left\{ r^Tv + \sum _{i=1}^K b_i^T \eta _i \right\} \\ R^Tv + \sum _{i=1}^K B_i^T \eta _i = c \\ \eta _i \in L^{m_i}, \quad i=1,\ldots , K. \end{array}\right. } \end{aligned}$$
(D1)

The following theorem states the well-known duality for Conic Quadratic Programming, but first we need the following definition.

Definition 7.1

\((P)\) is regular if \(\exists \hat{x} : R \hat{x} = r, \Vert D_i \hat{x} - d_i\Vert _2 < p_i^T\hat{x} - q_i, \forall i=1,\ldots ,K\).\(\square \)

Theorem 7.1

(Strong duality) If one of the problems \((P)\) or \((D)\) is regular and bounded, then the other problem is solvable and the optimal values of \((P)\) and \((D)\) are equal. If both \((P)\) and \((D)\) are regular then both problems are solvable and the optimal values of \((P)\) and \((D)\) are equal.\(\square \)
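Weak duality (the inequality underlying Theorem 7.1) can be illustrated numerically by constructing a dual-feasible point first and then sampling primal-feasible points; every instance datum below is invented for illustration (an editor's sketch, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random instance of (P): min c^T x  s.t.  R x = r,  ||D x - d||_2 <= p^T x - q
# (a single cone, K = 1, for brevity).
n, m, k = 4, 2, 3
R = rng.standard_normal((m, n))
D = rng.standard_normal((k, n))
d = rng.standard_normal(k)
p = rng.standard_normal(n)
q = -50.0                              # very negative q keeps sampled points feasible
x0 = rng.standard_normal(n)
r = R @ x0                             # x0 satisfies the equality constraints

# Build a dual-feasible (v, y, z) first, then *define* c through the dual
# equality R^T v + D^T z + y p = c, so dual feasibility holds by construction.
v = rng.standard_normal(m)
z = rng.standard_normal(k)
y = np.linalg.norm(z) + 1.0            # ensures ||z||_2 <= y
c = R.T @ v + D.T @ z + y * p
dual_obj = r @ v + d @ z + q * y

# Weak duality: every primal-feasible x satisfies c^T x >= dual objective.
null_R = np.linalg.svd(R)[2][m:].T     # columns span the null space of R
checked = 0
for _ in range(2000):
    x = x0 + null_R @ rng.standard_normal(n - m)   # keeps R x = r
    if np.linalg.norm(D @ x - d) <= p @ x - q:     # cone constraint
        assert c @ x >= dual_obj - 1e-9
        checked += 1
assert checked > 0
```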

B.2 Conic quadratic representation

We start with the definition of Conic Quadratic representable.

Definition 7.2

A set \(X \subset \mathbb {R}^n\) is Conic Quadratic representable (CQr) if there exist:

  • a vector \(u \in \mathbb {R}^l\) of additional variables

  • an affine mapping:

    $$\begin{aligned} H(x,u) = \left[ \begin{array}{c} H_1(x,u) \\ H_2(x,u) \\ \vdots \\ H_K(x,u) \end{array} \right] \ : \ \mathbb {R}^n \times \mathbb {R}^l \rightarrow \mathbb {R}^{m_1}\times \cdots \times \mathbb {R}^{m_K}, \end{aligned}$$

    such that

    $$\begin{aligned} X = \{ x \in \mathbb {R}^n \ | \ \exists u \in \mathbb {R}^l \ : \ H_j(x,u) \in K^{m_j}, \quad j=1,\ldots ,K\}, \end{aligned}$$

    where \(K^{m_j}\) is the second-order cone of order \(m_j\).

The collection \((l,K,H(\cdot , \cdot ),m_1,\ldots ,m_K)\) is called a Conic Quadratic Representation (CQR) of \(X\). \(\square \)

Given a CQR \((l,K,H(\cdot , \cdot ),m_1,\ldots ,m_K)\) of \(X\), the problem

$$\begin{aligned} \min \{c^Tx \ | \ x \in X \} \end{aligned}$$

can be posed as a Conic Quadratic Problem (CQP):

$$\begin{aligned} \min _{x,u} \{ c^Tx \ | \ H_j(x,u) \in K^{m_j}, \ j=1,\ldots ,K \}. \end{aligned}$$

The following definition extends CQr to functions.

Definition 7.3

A function \(f : \mathbb {R}^n \rightarrow \mathbb {R}\) is CQr if its epigraph

$$\begin{aligned} \hbox {epi}(f) = \{ (t,x) \in \mathbb {R}\times \mathbb {R}^n \ | \ f(x) \le t \} \end{aligned}$$

is a CQr set.\(\square \)

B.3 Operations preserving CQr

We now state some operations that preserve CQr of sets (for proofs and a full list we refer to [6]):

  • If \(X_i\) is CQr \(\forall i=1,\ldots ,N\), then \(\cap X_i\) and \(X_1 \times \cdots \times X_N\) and \(X_1 + X_2 + \cdots + X_N\) are CQr;

  • If \(X\) is CQr, then the set \(\{Bx+ b \ | \ x \in X \}\) is CQr;

  • If \(X\) is CQr, then the set \(\{ y \ | \ By+ b \in X \}\) is CQr.

We now state some operations that preserve CQr of functions (for proofs and a full list we refer to [6]):

  • If \(f_i(x)\) is CQr \(\forall i=1,\ldots ,N\), then \(\max _i f_i(x)\) and \(\sum _i \alpha _i f_i(x),\; \alpha _i \ge 0\), are CQr.

  • If \(f(x)\) is CQr, then \(f(Bx+ b)\) is CQr.

  • If \(f(x)\) is CQr, then the conjugate function \(f^*(s)\) is CQr.

  • If \(f(x)\) is CQr, then the perspective function \(f^{per}(x,v) = v f(x/v) ,\; v>0\), is CQr.

Since the last result is new, we give a proof. Note that

$$\begin{aligned} \hbox {epi}(f^{per}(x,v)) = \{ (t,x,v) \ | \ vf(x/v) \le t \} = \{ (t,x,v) \ | \ (x/v,t/v) \in \hbox {epi}(f) \}. \end{aligned}$$

Let \((l,K,H(x,u),m_1,\ldots ,m_K)\) be the CQR of \(\hbox {epi}(f)\). Since \(H(x,u)\) is an affine mapping, \(H^{per}(x,v,u) := vH(x/v,u/v)\) is also an affine mapping. Moreover, since \(H_j(x,u)\in K^{m_j}\) and the cone \(K^{m_j}\) is closed under multiplication by \(v>0\), we have \(H^{per}_j(x,v,u) \in K^{m_j}\). Hence \(\hbox {epi}(f^{per})\) is CQr, and \((l,K,H^{per}(x,v,u),m_1,\ldots ,m_K)\) is its CQR.

Note that once we have a CQR of \(f\), we immediately have the CQR of \(f^*\) and \(f^{per}\).

It can easily be verified that the following functions/sets are CQr:

  • \(f(x) = x^T Q^TQx + q^Tx + r\);

  • \(X_m = \{ (t,x_1,\ldots ,x_M) \ | \ t^M \le x_1\cdots x_M\} = \hbox {epi}\,(x_1x_2\cdots x_M)^{1/M}\), where \(m>0\) is an integer, and \(M=2^m\);

  • \(f(x) = \max (x,0)^{\pi }\), where \(\pi \ge 1\) is rational;

  • \(f(x) = |x|^{\pi }\), where \(\pi \ge 1\) is rational;

  • \(f(x) = x_1^{-\pi _1}x_2^{-\pi _2}\ldots x_m^{-\pi _m},\; x_i>0\), where \(\pi _i > 0\);

  • \(f(x) = \Vert x\Vert _p\), where \( 1 \le p \le \infty \) is rational.
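The building block behind the CQR of \(X_m\) is the fact that \(t^2 \le x_1x_2,\; x_1,x_2\ge 0\) is second-order-cone representable as \(\Vert (2t, x_1-x_2)\Vert_2 \le x_1+x_2\). A minimal numeric check of this equivalence (an editor's illustration, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def in_rotated_cone(t, x1, x2):
    # t^2 <= x1 * x2 with x1 >= 0, x2 >= 0.
    return x1 >= 0 and x2 >= 0 and t**2 <= x1 * x2

def in_soc_form(t, x1, x2):
    # Equivalent second-order-cone constraint: ||(2t, x1 - x2)||_2 <= x1 + x2.
    return np.hypot(2 * t, x1 - x2) <= x1 + x2

# Squaring the SOC inequality gives 4 t^2 + (x1 - x2)^2 <= (x1 + x2)^2, i.e.
# t^2 <= x1 x2; together with x1 + x2 >= 0 this forces x1 >= 0 and x2 >= 0.
for _ in range(2000):
    t, x1, x2 = rng.uniform(-2, 2, 3)
    assert bool(in_rotated_cone(t, x1, x2)) == bool(in_soc_form(t, x1, x2))
```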

B.4 Example

Consider the set

$$\begin{aligned} Z = \{ y \in \mathbb {R}^n \ | \ By + b \in \cap _{i=1}^KZ_i \}, \end{aligned}$$

where \(B \in \mathbb {R}^{m\times n},\; b\in \mathbb {R}^m\), and

$$\begin{aligned} Z_i = \{ z \ | \ (d_i^Tz + \beta _i)^2 + 2(d_i^Tz + \beta _i)^8 \le 1\}, \end{aligned}$$

where \(d_i \in \mathbb {R}^m,\; \beta _i \in \mathbb {R}\). Note that

$$\begin{aligned} Z = \cap _{i=1}^K \{ y \ | \ By + b \in Z_i \}, \end{aligned}$$

with

$$\begin{aligned} Z_i = \{ z \ | \ d_i^Tz + \beta _i \in X\}, \end{aligned}$$

and

$$\begin{aligned} X = \{ x \in \mathbb {R}\ | \ x^2 + 2x^8 \le 1 \}. \end{aligned}$$

To compute the support function of \(Z\) it is enough to compute the CQR of \(X\). Observe that

$$\begin{aligned} X = \left\{ x \in \mathbb {R}\ | \ \exists t_1,t_2,t_3 \ : \ \left( \begin{array}{c} 2x \\ t_1-1 \\ t_1+1 \end{array} \right) \in L^3, \ \left( \begin{array}{c} 2t_1 \\ t_2-1 \\ t_2+1 \end{array} \right) \in L^3, \ \left( \begin{array}{c} 2t_2 \\ t_3-1 \\ t_3+1 \end{array} \right) \in L^3, \ t_1 + 2 t_3 \le 1 \right\} . \end{aligned}$$

We can now use the rules for intersections and linear transformations of CQr sets to write down the CQR of \(Z\) explicitly. Let \(\delta ^*(v|Z) = \sup _{y\in Z} v^Ty\). Using the CQR of \(Z\), this optimization problem is a conic quadratic problem, and its dual can be written down explicitly using conic quadratic duality.
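The representation of \(X\) can be verified numerically: for \(x\in X\) the natural certificate \(t_1=x^2,\; t_2=x^4,\; t_3=x^8\) satisfies all three cones, while for \(x\notin X\) no certificate exists, since the cones force \(t_1 \ge x^2,\; t_2 \ge t_1^2,\; t_3 \ge t_2^2\), hence \(t_1+2t_3 \ge x^2+2x^8 > 1\). A small Python sketch (an editor's illustration; `tol` is a numerical tolerance introduced here):

```python
import numpy as np

def in_X(x):
    # Direct membership test: x^2 + 2 x^8 <= 1.
    return x**2 + 2 * x**8 <= 1

def in_cone_rep(x, tol=1e-9):
    # Feasibility of the conic system with the natural certificate
    # t1 = x^2, t2 = x^4, t3 = x^8; each cone (2u, t - 1, t + 1) in L^3
    # encodes u^2 <= t, since 4u^2 + (t-1)^2 <= (t+1)^2  <=>  u^2 <= t.
    t1, t2, t3 = x**2, x**4, x**8
    return (np.hypot(2 * x, t1 - 1) <= t1 + 1 + tol
            and np.hypot(2 * t1, t2 - 1) <= t2 + 1 + tol
            and np.hypot(2 * t2, t3 - 1) <= t3 + 1 + tol
            and t1 + 2 * t3 <= 1 + tol)

# Membership in X and feasibility of the conic system coincide.
for x in np.linspace(-1.2, 1.2, 4801):
    assert bool(in_X(x)) == bool(in_cone_rep(x))
```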

Ben-Tal, A., den Hertog, D. & Vial, JP. Deriving robust counterparts of nonlinear uncertain inequalities. Math. Program. 149, 265–299 (2015). https://doi.org/10.1007/s10107-014-0750-8
