Sums of Squares Polynomial Program Reformulations for Adjustable Robust Linear Optimization Problems with Separable Polynomial Decision Rules

We show that adjustable robust linear programs with affinely adjustable box data uncertainties under separable polynomial decision rules admit exact sums of squares (SOS) polynomial reformulations: the original problem and its reformulation attain the same optimal value, and their optimal solutions are in one-to-one correspondence. A sum of squares representation of non-negativity of a separable non-convex polynomial over a box plays a key role in the reformulation. This reformulation allows us to find adjustable robust solutions of uncertain linear programs under box data uncertainty by numerically solving the associated equivalent SOS polynomial optimization problem via semi-definite linear programming. We illustrate how the quality of the adjustable robust solution of a robust optimization problem with polynomial decision rules improves as the degree of the polynomial increases, and demonstrate that the adjustable robust solutions approach the true optimal solution as the degree increases from one to fifteen.


Introduction
Adjustable Robust Optimization (ARO) is an area of Robust Optimization (RO) [2,5,11,14] that not only optimizes under uncertain data, but also allows for multi-stage decision making in applications. In ARO, some of the decision variables may be adjusted after some portion of the uncertain data reveals itself. ARO has numerous real-world applications [3,5,10,19,22,23] and often leads to less conservative solutions than RO. However, even a linear ARO problem with two-stage decision making is challenging, because we optimize a linear function over mappings rather than vectors. Without restricting the optimization to some special classes of mappings [8], it is generally hard to obtain numerically tractable reformulations. These restrictions are called "decision rules" in Robust Optimization [2,3,9].

This paper is dedicated to Professor Miguel Angel Goberna for his outstanding contributions to many areas of optimization and operations research, including semi-infinite programming and robust and parametric optimization.
Computationally tractable reformulations of adjustable two-stage robust linear programs with linear (or affine) decision rules have been successfully obtained by studying the transformed single-stage problems [22]. Following the introduction of quadratic decision rules by Ben-Tal and Nemirovski [2,4], various numerically tractable equivalent reformulations or exact relaxation problems have been given in the literature for ARO with quadratic decision rules. In particular, a parameterized separable quadratic rule has been shown in [24] to admit exact second-order cone programming reformulations for adjustable robust linear optimization problems under single ellipsoidal uncertainty. More general nonlinear decision rules have also been examined for ARO problems in [2].
More recently, it has been established (see [6]) that an affinely adjustable two-stage robust linear program under box uncertainty and a separable quadratic decision rule enjoys an equivalent semi-definite program reformulation. This was done by employing a characterization of non-negativity of a separable non-convex quadratic function over box constraints in terms of sums of squares polynomials. On the other hand, two-stage ARO with box uncertainty sets under a general quadratic decision rule is numerically intractable, because the resulting transformed single-stage problem is a non-convex quadratic optimization problem over box constraints, which is known to be NP-hard [13].
In this paper, we show that two-stage adjustable robust linear programs with affinely adjustable box data uncertainties under separable polynomial decision rules admit exact sums of squares (SOS) polynomial reformulations. Exactness here means that the problems have the same optimal values and exhibit a one-to-one correspondence between their optimal solutions. We do this by establishing a sums of squares (SOS) characterization of non-negativity of a separable non-convex polynomial over a box. This allows us to find adjustable robust solutions of uncertain linear optimization problems under box data uncertainty simply by numerically solving their associated equivalent SOS polynomial optimization problems.
We also examine the behaviour of polynomial decision rules in terms of the optimality of the decision rule under realized uncertainty and illustrate how the quality of the solution of a simple robust optimization problem with polynomial decision rules improves as the degree of the polynomial increases. Our numerical example demonstrates that the adjustable robust solutions approach the true optimal solution as the degree of the polynomial increases from one to fifteen.
The paper is organized as follows. Section 2 presents a sums of squares representation of non-negativity for separable polynomials over a box. Section 3 presents an exact sums of squares polynomial optimization reformulation for an affinely adjustable robust linear optimization problem under a separable polynomial decision rule. Section 4 provides the results of a numerical experiment, explaining the features of the adjustable robust solution of a robust optimization problem with polynomial decision rules as the degree of the polynomial increases.

SOS Certificate of Nonnegativity over a Box for Separable Polynomials
We begin this section with some preliminary results. Note first that ℝ^n denotes the n-dimensional Euclidean space. The non-negative orthant of ℝ^n is denoted by ℝ^n_+. Let S^n be the set of n × n symmetric matrices. For a matrix X ∈ S^n, X ≽ 0 means that X is positive semidefinite, that is, z^T X z ≥ 0 for all z ∈ ℝ^n [1,16].
The vector space of all real polynomials on ℝ^n is denoted by ℝ[x]. A real polynomial f is a sum of squares polynomial whenever there exist real polynomials q_l, l = 1,…,r, such that f = ∑_{l=1}^r q_l^2 [7,15,17]. The set of all sums of squares of real polynomials of degree at most d is denoted by Σ²_d. Next, we present a sum of squares (SOS) representation for non-negativity of a separable polynomial over a box. This property will play a key role later, in Section 3, in our exact SOS reformulation of ARO problems under box uncertainty and separable polynomial decision rules.
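As a concrete aside (not part of the original text), certifying that a polynomial is a sum of squares amounts to finding a positive semidefinite Gram matrix G with f(z) = m(z)^T G m(z), where m(z) is a vector of monomials; this is exactly what the semi-definite programs later in the paper search for. A minimal numeric sketch for the illustrative polynomial f(z) = z^4 − 2z^2 + 1 = (z^2 − 1)^2:

```python
import numpy as np

# Monomial basis m(z) = [1, z, z^2], so f(z) = m(z)^T G m(z).
# For f(z) = z^4 - 2z^2 + 1 = (z^2 - 1)^2, one Gram matrix is v v^T with v = (1, 0, -1),
# since (v . m(z))^2 = (1 - z^2)^2 = f(z).
v = np.array([1.0, 0.0, -1.0])
G = np.outer(v, v)

# G is positive semidefinite, which certifies that f is a sum of squares.
assert np.min(np.linalg.eigvalsh(G)) >= -1e-12

# Sanity check: m(z)^T G m(z) reproduces f at random points.
rng = np.random.default_rng(0)
for z in rng.uniform(-2.0, 2.0, size=100):
    m = np.array([1.0, z, z**2])
    assert abs(m @ G @ m - (z**4 - 2*z**2 + 1)) < 1e-9

print("SOS certificate verified")
```

In general, finding such a G for a given polynomial is a semi-definite feasibility problem, which is why SOS reformulations are solvable by semi-definite programming.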

Proof Firstly, observe that (a) holds if and only if

This means that (a) is equivalent to the condition that there exist σ_{0j} ∈ ℝ, j = 1,…,q, such that, for each j = 1,…,q,

Then, (a) can be equivalently described by the condition that there exist σ_{0j} ∈ ℝ, j = 1,…,q, such that ∑_{j=1}^q σ_{0j} ≤ 0 and f_j(z_j) ≥ 0 for all z_j ∈ [β_j, γ_j], j = 1,…,q. Now, for each j = 1,…,q, let

We note that each g_j is a real polynomial on ℝ of degree at most 2ℓ. Since β_j ≤ γ_j, we have two cases:

Case 2. Let β_j = γ_j. Then we have

So, we see that

It is worth noting that a single-variable polynomial is non-negative if and only if it is a sum of squares polynomial (see [17, Theorem 2.5]). Then g_j ∈ Σ²_2[z_j], and so we arrive at the conclusion that □
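The proof above exploits separability: the minimum of ∑_j f_j(z_j) over a box is the sum of the one-dimensional minima of the f_j over their intervals, so non-negativity over the box decouples into q univariate conditions. A quick numeric sanity check of this decoupling, using two made-up univariate polynomials (not data from the paper):

```python
import numpy as np

# Two univariate pieces of a separable polynomial f(z) = f1(z1) + f2(z2).
f1 = lambda t: t**4 - t      # considered on [-1, 1]
f2 = lambda t: t**2 + 2*t    # considered on [-1, 1]

grid = np.linspace(-1.0, 1.0, 2001)

# Minimum over the full two-dimensional box via a dense grid...
Z1, Z2 = np.meshgrid(grid, grid)
box_min = np.min(f1(Z1) + f2(Z2))

# ...equals the sum of the coordinate-wise minima over the intervals.
sep_min = np.min(f1(grid)) + np.min(f2(grid))
assert abs(box_min - sep_min) < 1e-9

print("separable minimum decouples:", round(sep_min, 4))
```

This decoupling is what lets the box non-negativity condition be certified by q univariate SOS conditions rather than one multivariate condition.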

Exact SOS Optimization for ARO with Box Uncertainties
In this section, we consider an affinely adjustable robust linear optimization problem with fixed recourse and box uncertainty, under a separable polynomial decision rule (PDR). We will show that these adjustable robust linear optimization problems and their sums of squares (SOS) polynomial reformulations have the same optimal values and enjoy a one-to-one correspondence between their optimal solutions.
Recall that the box uncertainty set U_box is given by U_box = ∏_{j=1}^q [β_j, γ_j] with β_j, γ_j ∈ ℝ and β_j ≤ γ_j, j = 1,…,q. We consider the following affinely adjustable uncertain linear optimization problem, with the same notation as in [6]:

where c ∈ ℝ^n and f ∶= (f_1, …, f_s) ∈ ℝ^s are fixed, x ∈ ℝ^n is the first-stage decision variable, and y(⋅) is a second-stage adjustable decision variable that depends on the uncertain z ∈ U_box. Here A ∶ ℝ^q → ℝ^{p×n} and d ∶ ℝ^q → ℝ^p are affine maps defined below:

for given matrices A_j ∈ ℝ^{p×n} and vectors d_j ∈ ℝ^p, j = 0,1,…,q. The matrix B ∈ ℝ^{p×s} is a fixed recourse matrix, meaning that B is independent of the uncertain variable z.
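One reason box uncertainty stays manageable is that, for anything affine in z, the worst case over the box is attained coordinate-wise at an endpoint: max over z ∈ ∏_j [β_j, γ_j] of a_0 + ∑_j a_j z_j equals a_0 + ∑_j max(a_j β_j, a_j γ_j). A small sketch, with invented illustrative data, checking this closed form against brute-force vertex enumeration:

```python
import itertools
import numpy as np

def worst_case_affine(a0, a, beta, gamma):
    """Closed-form max of a0 + a.z over the box prod_j [beta_j, gamma_j]."""
    return a0 + np.sum(np.maximum(a * beta, a * gamma))

# Illustrative data (hypothetical, q = 3).
a0, a = 1.5, np.array([2.0, -1.0, 0.5])
beta = np.array([-1.0, 0.0, -2.0])
gamma = np.array([1.0, 3.0, 2.0])

# Brute force: a linear function on a box attains its maximum at a vertex,
# so enumerating the 2^q vertices gives the exact worst case.
vertices = itertools.product(*zip(beta, gamma))
brute = max(a0 + a @ np.array(v) for v in vertices)

assert abs(worst_case_affine(a0, a, beta, gamma) - brute) < 1e-12
print("worst case over box:", brute)
```

For polynomial dependence on z, no such elementary closed form exists in general, which is where the SOS certificates of Section 2 take over.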
We assume now that the adjustable variable y(⋅) satisfies, more generally, a separable polynomial decision rule of the form:

where y_{r0}, y_{rkj} ∈ ℝ, r = 1,…,s, j = 1,…,q, k = 1,…,ℓ, are (non-adjustable) variables. Note here that z_j^ℓ is the ℓ-th power of z_j, i.e., z_j^ℓ = (z_j)^ℓ, but, for simplicity, we use the notation z_j^ℓ instead of (z_j)^ℓ. Let us consider the robust counterpart of (UP) as

Similarly, let us denote d_j ∶= (d_{j1}, …, d_{jp})^T ∈ ℝ^p, j = 0,1,…,q. Then d(z) is given by

Now we associate with (RP) the following sums of squares (SOS) programming problem:

Proof It is easy to see from the models that (RP) is equivalent to the following adjustable robust linear optimization problem, whose objective function is free of uncertainty:

One can verify that the first inequality can be rewritten as

for each i = 1,…,p, and that the second inequality is equivalent to

Now, applying Theorem 2.1 to (1), we obtain the following SOS reformulation: for each i = 1,…,p, there exist σ_{ij} ∈ ℝ, j = 1,…,q, such that

and

Similarly as above, (2) is equivalent to the following SOS formulation: there exist σ_j ∈ ℝ, j = 1,…,q, such that

and

It is worth observing that, when f = 0 ∈ ℝ^s in (RP), Theorem 3.1 yields that the following adjustable robust linear program admits an SOS reformulation of the form:

Note that if ℓ = 1 in (RP), then the problem (SOS) can be reformulated as the following semi-definite program:

Remark 3.1 Note that, in the case where ℓ = 1 in (RP), it follows from the preceding theorem that inf(RP) = inf(LSDP). Moreover, (x̄, ȳ_{10}, …, ȳ_{s0}, ȳ_{111}, …, ȳ_{s1q}) is an optimal solution of (RP) if and only if there exist λ̄ ∈ ℝ, σ̄_{ij} ∈ ℝ, σ̄_j ∈ ℝ, X̄_{i,j} ∈ S^2, Ȳ_j ∈ S^2, i = 1,…,p, j = 1,…,q, such that

is an optimal solution of (LSDP).
In [6], the authors considered the separable quadratic decision rule under box data uncertainty, i.e., the case ℓ = 2 in problem (RP). In this case the resulting semidefinite program becomes:

Note that problem (QSDP) is equivalent to the corresponding reformulation given in [6].
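In the quadratic case ℓ = 2, each per-coordinate worst case is the maximum of a univariate quadratic a t² + b t over an interval; a quadratic attains its interval maximum at an endpoint or, when a < 0, possibly at the stationary point t = −b/(2a). A hedged sketch with invented coefficients, checked against a dense grid:

```python
import numpy as np

def quad_max_on_interval(a, b, lo, hi):
    """Exact max of a*t^2 + b*t over [lo, hi]."""
    candidates = [lo, hi]
    if a < 0:  # concave: the interior stationary point may be the maximizer
        t = -b / (2 * a)
        if lo <= t <= hi:
            candidates.append(t)
    return max(a * t * t + b * t for t in candidates)

# Check against a dense grid for a few invented coefficient pairs on [-2, 3].
grid = np.linspace(-2.0, 3.0, 200001)
for a, b in [(1.0, -4.0), (-2.0, 1.0), (0.0, 5.0)]:
    exact = quad_max_on_interval(a, b, -2.0, 3.0)
    approx = np.max(a * grid**2 + b * grid)
    assert abs(exact - approx) < 1e-6

print("endpoint/stationary-point rule verified")
```

For degree ℓ ≥ 3 no such elementary candidate list suffices in general, which is why the separable polynomial case relies on the univariate SOS certificates rather than on closed-form maximization.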

Proof
The conclusion follows from [17, Proposition 2.1]. Note that, in this case, the reformulation of problem (RP) with ℓ = 3 collapses to (CSDP). □

In the following, we present an example of an adjustable robust linear program with affinely adjustable box data uncertainties under separable polynomial decision rules, where the degree ℓ of the polynomial is allowed to be 1, 2 or 3. We compute the objective value and an optimal solution of the adjustable robust counterpart of this uncertain problem by solving the associated semi-definite program using Corollary 3.1 and the widely used Matlab toolbox CVX [12].

Numerical Illustration of Uncertainty-Realized Optimality under PDRs
In this section we aim to demonstrate how the quality of the adjustable robust solution of a robust optimization problem with polynomial decision rules improves as the degree of the polynomial increases. To do so, we construct an adjustable robust problem with a known non-smooth solution that cannot be represented by a polynomial over the entire uncertainty set, and evaluate the attained solution when applying Theorem 3.1 with increasing degree ℓ. We show that the adjustable robust solutions approach the true optimal solution as the degree ℓ of the polynomial increases from one to fifteen. Consider the following optimization problem:

The solution of this problem is given by

In practice, optimizing over an unrestricted set of functions is not well posed, and so to solve (P) we choose a decision rule for y(z).
In the case where we let y(z) be a constant y (that is, we solve the static robust problem), we arrive at the solution x*_1 = y* = 1, x*_2 = 0. However, whilst this solution preserves worst-case optimality, we lose uncertainty-realization optimality, that is, optimality in comparison with the true solution y*(z) at different realizations of the uncertain parameter z.
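The gap between worst-case optimality and uncertainty-realization optimality shrinks as the PDR degree grows, because a higher-degree polynomial tracks a non-smooth optimal rule more closely. As a standalone illustration (not the paper's experiment), least-squares polynomial fits of the non-smooth function |z| on [−1, 1] show the worst-case approximation error falling as the degree increases from 1 to 15:

```python
import numpy as np

z = np.linspace(-1.0, 1.0, 4001)
target = np.abs(z)  # a non-smooth stand-in for the true decision rule

errors = {}
for deg in (1, 3, 7, 15):
    coeffs = np.polyfit(z, target, deg)          # least-squares polynomial fit
    fit = np.polyval(coeffs, z)
    errors[deg] = np.max(np.abs(fit - target))   # worst-case approximation error

# The error decreases monotonically as the degree increases.
assert errors[15] < errors[7] < errors[3] < errors[1]
for deg in (1, 3, 7, 15):
    print("degree", deg, "max error", round(errors[deg], 4))
```

This mirrors the qualitative behaviour reported for the adjustable robust solutions: a degree-1 (affine) rule is the crudest approximation, and successive degrees close in on the non-smooth optimum.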

Conclusion and Further Work
In this paper we have shown that two-stage adjustable robust linear programs with affinely adjustable box data uncertainties under separable polynomial decision rules admit exact sums of squares polynomial reformulations, in the sense that the robust problem and its SOS reformulation have the same optimal value and there is a one-to-one correspondence between their optimal solutions. The behaviour of polynomial decision rules in terms of optimality under realized uncertainty was also examined, and an example was given to illustrate how the quality of the solution of a simple robust optimization problem with polynomial decision rules improves as the degree of the polynomial increases. It would be of great interest to study whether the solutions of more general adjustable robust linear programs with polynomial decision rules approach the true optimal solution as the degree of the polynomial increases; this will be investigated in a forthcoming study.