The Saddle Point Problem of Polynomials

This paper studies the saddle point problem of polynomials. We give an algorithm for computing saddle points, based on solving Lasserre's hierarchy of semidefinite relaxations. Under some genericity assumptions on the defining polynomials, we show that: i) if a saddle point exists, our algorithm computes one by solving a finite number of Lasserre type semidefinite relaxations; ii) if there is no saddle point, our algorithm detects its nonexistence.


Introduction
Let X ⊆ R^n, Y ⊆ R^m be two sets (for dimensions n, m > 0), and let F(x, y) be a continuous function in (x, y) ∈ X × Y. A pair (x*, y*) ∈ X × Y is said to be a saddle point of F(x, y) over X × Y if

(1.1)  F(x*, y) ≤ F(x*, y*) ≤ F(x, y*)  for all x ∈ X, y ∈ Y.

The above implies that

min_{x∈X} max_{y∈Y} F(x, y) = F(x*, y*) = max_{y∈Y} min_{x∈X} F(x, y).

All saddle points share the same objective value, although there may exist different saddle points. The definition of saddle points in (1.1) requires the inequalities to hold for all points in the feasible sets X, Y. That is, when y is fixed to y*, x* is a global minimizer of F(x, y*) over X; when x is fixed to x*, y* is a global maximizer of F(x*, y) over Y. Certainly, x* must also be a local minimizer of F(x, y*) and y* must be a local maximizer of F(x*, y), so the local optimality conditions can be applied at (x*, y*). However, they are not sufficient to guarantee that (x*, y*) is a saddle point, since (1.1) needs to be satisfied at all feasible points.

The saddle point problem of polynomials (SPPP) is the case where F(x, y) is a polynomial function in (x, y) and X, Y are semialgebraic sets, i.e., they are described by polynomial equalities and/or inequalities. The SPPP concerns the existence of saddle points and their computation if they exist. When F is convex-concave in (x, y) and X, Y are nonempty compact convex sets, a saddle point exists. We refer to [5, §2.6] for the classical theory of convex-concave saddle point problems.

SPPPs have broad applications. They are of fundamental importance in duality theory for constrained optimization, min-max optimization and game theory [5,30,57]. The following are some applications.
• Zero sum games can be formulated as saddle point problems [1,41,54]. In a zero sum game with two players, suppose the first player has the strategy vector x := (x_1, ..., x_n) and the second player has the strategy vector y := (y_1, ..., y_m). The strategies x, y usually represent probability measures over finite sets, for instance, x ∈ ∆_n, y ∈ ∆_m. (The notation ∆_n denotes the standard simplex in R^n.) A typical profit function of the first player is f_1(x, y) := x^T A_1 x + y^T A_2 y + x^T B y, for matrices A_1, A_2, B. For the zero sum game, the profit function f_2(x, y) of the second player is −f_1(x, y). Each player wants to maximize the profit, for the given strategy of the other player. A Nash equilibrium is a point (x*, y*) such that the function f_1(x, y*) in x achieves its maximum value at x*, while the function f_2(x*, y) in y achieves its maximum value at y*. Since f_1(x, y) + f_2(x, y) = 0, the Nash equilibrium (x*, y*) is a saddle point of the function F := −f_1 over the feasible sets X = ∆_n, Y = ∆_m.

• Image restoration [23] can be formulated as a saddle point problem with the function F(x, y) := x^T A y + (1/2)‖Bx − z‖² and some feasible sets X, Y, for two given matrices A, B and a given vector z. Here, the notation ‖·‖ denotes the Euclidean norm. We refer to [12,19,23] for related work on this topic.

• The saddle point problem plays an important role in robust optimization [6,21,33,62]. For instance, consider a statistical portfolio optimization problem whose objective involves a covariance matrix Q and an estimate µ of some parameters, with X the feasible set for the decision variable x. In applications, there often exists a perturbation (δµ, δQ) for (µ, Q). This gives rise to two types of robust optimization problems: the min-max problem min_{x∈X} max_{(δµ,δQ)∈Y} of the perturbed objective, and the corresponding max-min problem, where Y is the feasible set for the perturbation (δµ, δQ).
People are interested in an x* and a (δµ*, δQ*) that solve the above two robust optimization problems simultaneously. In view of (1.2), this is equivalent to solving a saddle point problem.
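For the simplest bilinear case of the zero sum game above (f_1 = x^T B y, i.e., A_1 = A_2 = 0), a saddle point over the simplices can be computed by linear programming. The helper `game_value` below is our own illustrative sketch, not the paper's algorithm; it uses scipy and the classical LP reformulation of min_x max_y x^T A y.

```python
import numpy as np
from scipy.optimize import linprog

# Zero-sum matrix game: player 1 picks x in the simplex Delta_n to minimize
# x^T A y, player 2 picks y in Delta_m to maximize it.  The saddle value
# v = min_x max_y x^T A y is the optimum of the LP
#   min v   s.t.   A^T x <= v * 1,   sum(x) = 1,   x >= 0.
def game_value(A):
    n, m = A.shape
    c = np.zeros(n + 1); c[-1] = 1.0                    # minimize v
    A_ub = np.hstack([A.T, -np.ones((m, 1))])           # A^T x - v <= 0
    b_ub = np.zeros(m)
    A_eq = np.zeros((1, n + 1)); A_eq[0, :n] = 1.0      # sum(x) = 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * n + [(None, None)]           # v is free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n], res.x[-1]

# Matching pennies: the unique saddle point is x = y = (1/2, 1/2), value 0.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
x, v = game_value(A)
print(x, v)
```

The quadratic profit functions f_1 in the bullet above go beyond this LP case; they lead to the polynomial saddle point problems studied in this paper.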
For convex-concave type saddle point problems, most existing methods are based on gradients, subgradients, variational inequalities, or other related techniques. For these classical methods, we refer to the work by Chen, Lan and Ouyang [13], Cox, Juditsky and Nemirovski [14], He and Yuan [23], He and Monteiro [24], Korpelevich [29], Maistroskii [39], Monteiro and Svaiter [40], Nemirovski [43], Nedić and Ozdaglar [42], and Zabotin [61].

For more general, non convex-concave saddle point problems (i.e., F is not convex-concave, and/or one of the sets X, Y is nonconvex), the computational task of solving SPPPs is much harder. A saddle point may, or may not, exist. There is very little work on solving non convex-concave saddle point problems [16,52]. Obviously, SPPPs can be formulated as a first order formula over the real field R. By the Tarski-Seidenberg theorem [2,11], the SPPP is equivalent to a quantifier free formula. Such a quantifier free formula can be computed symbolically, e.g., by cylindrical algebraic decompositions [2,11,28]. Theoretically, the quantifier elimination (QE) method can solve SPPPs exactly, but it typically has high computational complexity [10,17,55]. Another straightforward approach for solving (1.1) is to compute all its critical points first and then select saddle points among them. The complexity of computing critical points is given in [56]. This approach typically has high computational cost, because the number of critical points can be very large [44] and we need to check the global optimality relation in (1.1) to identify saddle points. In subsection 6.3, we compare the performance of these methods with the new one given in this paper.
The basic questions for saddle point problems are: If a saddle point exists, how can we find it? If it does not, how can we detect its nonexistence? This paper discusses how to solve saddle point problems that are given by polynomials and that are non convex-concave. We give a numerical algorithm to solve SPPPs.
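As a toy numerical illustration of definition (1.1) and of the verification task just described, one can check the two saddle inequalities on finite grids that stand in for the feasible sets. The helper `is_saddle` and the grids below are our own illustrative choices, not part of the paper's method.

```python
import numpy as np

# Classic example: F(x, y) = x^2 - y^2 on X = Y = [-1, 1].
# (x*, y*) = (0, 0) is a saddle point in the sense of (1.1):
#   F(x*, y) <= F(x*, y*) <= F(x, y*)   for all x in X, y in Y.
F = lambda x, y: x**2 - y**2

def is_saddle(F, xs, ys, x_star, y_star, tol=1e-12):
    """Check the two saddle inequalities on finite grids xs (for X), ys (for Y)."""
    v = F(x_star, y_star)
    min_over_X = min(F(x, y_star) for x in xs)   # x* should attain this minimum
    max_over_Y = max(F(x_star, y) for y in ys)   # y* should attain this maximum
    return v <= min_over_X + tol and v >= max_over_Y - tol

grid = np.linspace(-1.0, 1.0, 201)
print(is_saddle(F, grid, grid, 0.0, 0.0))   # (0, 0) passes both inequalities
print(is_saddle(F, grid, grid, 0.5, 0.0))   # x = 0 gives a smaller value, so no
```

The grid check is only a sanity test; the paper's algorithm certifies global optimality through semidefinite relaxations instead.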
1.1. Optimality conditions. Throughout the paper, a property is said to hold generically in a space if it is true everywhere except on a subset of zero Lebesgue measure. We refer to [22] for the notion of genericity in algebraic geometry. Assume X, Y are basic closed semialgebraic sets that are given as

(1.3)  X = {x ∈ R^n : g_i(x) = 0 (i ∈ E_X,1), g_i(x) ≥ 0 (i ∈ E_X,2)},
(1.4)  Y = {y ∈ R^m : h_j(y) = 0 (j ∈ E_Y,1), h_j(y) ≥ 0 (j ∈ E_Y,2)}.

Here, each g_i is a polynomial in x := (x_1, ..., x_n) and each h_j is a polynomial in y := (y_1, ..., y_m). The E_X,1, E_X,2, E_Y,1, E_Y,2 are disjoint labeling sets of finite cardinalities. To distinguish equality and inequality constraints, denote the tuples

g_eq := (g_i)_{i∈E_X,1},  g_in := (g_i)_{i∈E_X,2},  h_eq := (h_j)_{j∈E_Y,1},  h_in := (h_j)_{j∈E_Y,2}.

When E_X,1 = ∅ (resp., E_X,2 = ∅), there is no equality (resp., inequality) constraint for X. The same holds for Y.

If (x*, y*) is a saddle point, then x* is a minimizer of

(1.6)  min_{x∈X} F(x, y*)

and y* is a maximizer of

(1.7)  max_{y∈Y} F(x*, y).

Under the linear independence constraint qualification (LICQ), or other kinds of constraint qualifications (see [4, §3.3]), there exist Lagrange multipliers λ_i, µ_j such that

(1.8)  ∇_x F(x*, y*) = Σ_{i} λ_i ∇g_i(x*),  λ_i ⊥ g_i(x*) (i ∈ E_X,2),
(1.9)  ∇_y F(x*, y*) = Σ_{j} µ_j ∇h_j(y*),  µ_j ⊥ h_j(y*) (j ∈ E_Y,2).

In the above, a ⊥ b means the product a · b = 0, and ∇_x F (resp., ∇_y F) denotes the gradient of F(x, y) with respect to x (resp., y). When g, h are nonsingular (see below for the definition), we can get explicit expressions for λ_i, µ_j in terms of x*, y* (see [50]). For convenience, write the labeling sets as

E_X,1 ∪ E_X,2 = {1, ..., ℓ_1},  E_Y,1 ∪ E_Y,2 = {1, ..., ℓ_2}.

Then, the constraining polynomial tuples can be written as g = (g_1, ..., g_ℓ1) and h = (h_1, ..., h_ℓ2). The Lagrange multipliers can be written as vectors λ = (λ_1, ..., λ_ℓ1), µ = (µ_1, ..., µ_ℓ2).
Denote by G(x) the (n + ℓ_1) × ℓ_1 matrix whose top n × ℓ_1 block has the columns ∇g_1(x), ..., ∇g_ℓ1(x) and whose bottom ℓ_1 × ℓ_1 block is diag(g_1(x), ..., g_ℓ1(x)); the matrix H(y) is defined in the same way from h. The tuple g is said to be nonsingular if rank G(x) = ℓ_1 for all x ∈ C^n. Similarly, h is nonsingular if rank H(y) = ℓ_2 for all y ∈ C^m. Note that if g is nonsingular, then the LICQ must hold at x*. Similarly, the LICQ holds at y* if h is nonsingular. When g, h have generic coefficients (i.e., g, h are generic), the tuples g, h are nonsingular; that is, nonsingularity is a property that holds generically. We refer to [50] for more details.

1.2. Contributions.

This paper discusses how to solve saddle point problems of polynomials. We assume that the sets X, Y are given as in (1.3)-(1.4) and the defining polynomial tuples g, h are nonsingular, i.e., the matrices G(x), H(y) have full column rank everywhere. Then, as shown in [50], there exist matrix polynomials G_1(x), H_1(y) such that (I_ℓ denotes the ℓ × ℓ identity matrix)

G_1(x) G(x) = I_ℓ1,  H_1(y) H(y) = I_ℓ2.

When g, h have generic coefficients, they are nonsingular. Clearly, the above and (1.8)-(1.9) imply that

λ_i = (G_1(x*))_{i,1:n} ∇_x F(x*, y*),  µ_j = (H_1(y*))_{j,1:m} ∇_y F(x*, y*).

(For a matrix A, the notation A_{i,1:n} denotes its ith row with column indices from 1 to n.) Denote the Lagrange polynomial tuples

(1.13)  λ(x, y) := G_1(x)_{:,1:n} ∇_x F(x, y),
(1.14)  µ(x, y) := H_1(y)_{:,1:m} ∇_y F(x, y).
(The notation A_{:,1:n} denotes the submatrix of A consisting of its first n columns.) At each saddle point (x*, y*), the Lagrange multiplier vectors λ, µ in (1.8)-(1.9) can be expressed as λ = λ(x*, y*), µ = µ(x*, y*). Therefore, (x*, y*) is a solution to the polynomial system

(1.15)  ∇_x F(x, y) = Σ_{i=1}^{ℓ1} λ_i(x, y) ∇g_i(x),  ∇_y F(x, y) = Σ_{j=1}^{ℓ2} µ_j(x, y) ∇h_j(y),  x ∈ X,  y ∈ Y.

However, not every solution (x*, y*) to (1.15) is a saddle point. This is because x* might not be a minimizer of (1.6), and/or y* might not be a maximizer of (1.7).

How can we use (1.15) to get a saddle point? What further conditions do saddle points satisfy? When a saddle point does not exist, what is an appropriate certificate for the nonexistence? This paper addresses these questions. We give an algorithm for computing saddle points. First, we compute a candidate saddle point (x*, y*). If it is verified to be a saddle point, then we are done. If it is not, then either x* is not a minimizer of (1.6) or y* is not a maximizer of (1.7). In either case, we add a new valid constraint to exclude such (x*, y*), while no true saddle point is excluded. Then, we solve a new optimization problem together with the newly added constraints. Repeating this process, we get an algorithm (i.e., Algorithm 3.1) for solving SPPPs. When the SPPP is given by generic polynomials, we prove that Algorithm 3.1 computes a saddle point if one exists, and detects nonexistence if there is none.

The candidate saddle points are optimizers of certain polynomial optimization problems. We also show that these polynomial optimization problems can be solved exactly by Lasserre's hierarchy of semidefinite relaxations, under some genericity conditions on the defining polynomials. Since semidefinite programs are usually solved numerically (e.g., by SeDuMi) in practice, the computed solutions are correct up to numerical errors.
The paper is organized as follows. Section 2 reviews some basics for polynomial optimization. Section 3 gives an algorithm for solving SPPPs. We prove its finite convergence when the polynomials are generic. Section 4 discusses how to solve the optimization problems that arise in Section 3. Under some genericity conditions, we prove that Lasserre type semidefinite relaxations can solve those optimization problems exactly. Proofs of some core theorems are given in Section 5. Numerical examples are given in Section 6. Conclusions and some discussions are given in Section 7.

Preliminaries
This section reviews some basics in polynomial optimization. We refer to [9,34,35,37,38,58] for the books and surveys in this field.
The notation [x]_d denotes the column vector of all monomials in x of degree at most d; in particular, we often use the notation [x]_d, [y]_d or [(x, y)]_d. For an integer d ≥ 0 and a variable dimension l, denote N^l_d := {α ∈ N^l : |α| ≤ d}, where |α| := α_1 + · · · + α_l. The superscript T denotes the transpose of a matrix/vector. The notation e_i denotes the ith standard unit vector, while e denotes the vector of all ones. The notation I_k denotes the k-by-k identity matrix. By writing X ⪰ 0 (resp., X ≻ 0), we mean that X is a symmetric positive semidefinite (resp., positive definite) matrix. For matrices X_1, ..., X_r, diag(X_1, ..., X_r) denotes the block diagonal matrix whose diagonal blocks are X_1, ..., X_r. For a vector z, ‖z‖ denotes its standard Euclidean norm.
For a function f in x, in y, or in (x, y), ∇ x f (resp., ∇ y f ) denotes its gradient vector in x (resp., in y). In particular, F xi denotes the partial derivative of F (x, y) with respect to x i .
Let R[x, y] denote the ring of polynomials in (x, y) with real coefficients, and R[x, y]_d its subset of polynomials of degree at most d. For a tuple p = (p_1, ..., p_k) of polynomials in (x, y), Ideal(p) denotes the ideal generated by p_1, ..., p_k, that is, Ideal(p) = p_1 · R[x, y] + · · · + p_k · R[x, y]. In computation, we often need to work with the truncation

Ideal(p)_{2k} := p_1 · R[x, y]_{2k−deg(p_1)} + · · · + p_k · R[x, y]_{2k−deg(p_k)}.

For an ideal I ⊆ R[x, y], its complex and real varieties are defined respectively as

V_C(I) := {(x, y) ∈ C^n × C^m : q(x, y) = 0 for all q ∈ I},  V_R(I) := V_C(I) ∩ (R^n × R^m).

A polynomial σ is said to be a sum of squares (SOS) if σ = s_1² + · · · + s_k² for some real polynomials s_1, ..., s_k. Whether or not a polynomial is SOS can be checked by solving a semidefinite program (SDP) [32,51]. Clearly, if a polynomial is SOS, then it is nonnegative everywhere. However, the reverse may not be true. Indeed, there are significantly more nonnegative polynomials than SOS ones [8,9]. The set of all SOS polynomials in (x, y) is denoted as Σ[x, y], and its dth truncation is Σ[x, y]_d := Σ[x, y] ∩ R[x, y]_d. For a tuple q = (q_1, ..., q_t) of polynomials in (x, y), its quadratic module is

Qmod(q) := {σ_0 + σ_1 q_1 + · · · + σ_t q_t : each σ_i ∈ Σ[x, y]}.

We often need to work with the truncation

Qmod(q)_{2k} := {σ_0 + σ_1 q_1 + · · · + σ_t q_t : each σ_i ∈ Σ[x, y], deg(σ_0) ≤ 2k, deg(σ_i q_i) ≤ 2k}.

For two tuples p = (p_1, ..., p_k) and q = (q_1, ..., q_t) of polynomials in (x, y), for convenience, we denote

IQ(p, q) := Ideal(p) + Qmod(q),  IQ(p, q)_{2k} := Ideal(p)_{2k} + Qmod(q)_{2k}.

The set IQ(p, q) (resp., IQ(p, q)_{2k}) is a convex cone contained in R[x, y] (resp., R[x, y]_{2k}).
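To illustrate how the SOS membership test reduces to semidefinite feasibility, consider σ(x) = x⁴ − 2x² + 1 with the monomial basis m(x) = (1, x, x²). Matching coefficients in σ = m(x)^T G m(x) leaves a one-parameter family of candidate Gram matrices G(t); a genuine SOS test would solve an SDP over this family, but for this tiny example a scan over t suffices. The code below is our own numpy-only sketch, not part of the paper.

```python
import numpy as np

# sigma(x) = x^4 - 2x^2 + 1 is SOS iff sigma = m(x)^T G m(x) for some PSD
# Gram matrix G, where m(x) = (1, x, x^2).  Coefficient matching gives:
#   G[0,0] = 1,  G[0,1] = 0,  G[1,2] = 0,  G[2,2] = 1,
#   G[1,1] + 2*G[0,2] = -2   (coefficient of x^2),
# leaving one free parameter t = G[1,1].
def gram(t):
    return np.array([[1.0,            0.0, (-2.0 - t) / 2],
                     [0.0,              t,            0.0],
                     [(-2.0 - t) / 2, 0.0,            1.0]])

def is_psd(G, tol=1e-9):
    return np.min(np.linalg.eigvalsh(G)) >= -tol

# Scan the free parameter; a real SDP solver would search this set directly.
feasible = [t for t in np.linspace(-1.0, 4.0, 501) if is_psd(gram(t))]
print(feasible)          # only t = 0 gives a PSD Gram matrix
G = gram(0.0)            # certifies sigma = (x^2 - 1)^2
```

Here the unique PSD Gram matrix G(0) recovers the decomposition σ = (x² − 1)², showing why SOS certificates are computable by convex optimization.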
The set IQ(p, q) is said to be archimedean if there exists σ ∈ IQ(p, q) such that σ(x, y) ≥ 0 defines a compact set in R^n × R^m. If IQ(p, q) is archimedean, then the set K := {p(x, y) = 0, q(x, y) ≥ 0} must be compact. The reverse is not always true. However, if K is compact, say, K ⊆ B(0, R) (the ball centered at 0 with radius R), then IQ(p, q̃) is always archimedean, with q̃ = (q, R² − ‖x‖² − ‖y‖²), while {p(x, y) = 0, q̃(x, y) ≥ 0} defines the same set K. Under the assumption that IQ(p, q) is archimedean, every polynomial in (x, y) that is strictly positive on K must belong to IQ(p, q). This is the so-called Putinar Positivstellensatz [53]. Interestingly, under some optimality conditions, if a polynomial is nonnegative (but not strictly positive) over K, then it also belongs to IQ(p, q). This is shown in [47].
The above is for polynomials in (x, y). For polynomials in only x or only y, the ideals, sums of squares, quadratic modules, and their truncations are defined in the same way, with the corresponding notation Σ[x], Σ[y], Ideal(·), Qmod(·) and IQ(·, ·).

2.3. Localizing and moment matrices.

Let ξ := (ξ_1, ..., ξ_l) be a subvector of (x, y) := (x_1, ..., x_n, y_1, ..., y_m). Throughout the paper, the vector ξ is either x, or y, or (x, y). Denote by R^{N^l_d} the space of real sequences indexed by α ∈ N^l_d; a vector in R^{N^l_d} is labeled as w = (w_α)_{α∈N^l_d}. For a polynomial q = Σ_γ q_γ ξ^γ in R[ξ] and a degree k with 2k ≥ deg(q), the kth localizing matrix of q, generated by w ∈ R^{N^l_{2k}}, is the symmetric matrix L_q^{(k)}(w) with entries

L_q^{(k)}(w)_{α,β} := Σ_γ q_γ w_{α+β+γ}.

For instance, when n = 2, k = 2 and q = 1 − x_1² − x_2², we have

L_q^{(2)}(w) =
[ w_00 − w_20 − w_02   w_10 − w_30 − w_12   w_01 − w_21 − w_03
  w_10 − w_30 − w_12   w_20 − w_40 − w_22   w_11 − w_31 − w_13
  w_01 − w_21 − w_03   w_11 − w_31 − w_13   w_02 − w_22 − w_04 ].

When q = 1 (the constant one polynomial), L_q^{(k)}(w) is called the moment matrix and we denote M_k(w) := L_1^{(k)}(w). The columns and rows of L_q^{(k)}(w), as well as M_k(w), are labeled by α ∈ N^l with 2|α| ≤ 2k − deg(q). When q = (q_1, ..., q_t) is a tuple of polynomials, we define L_q^{(k)}(w) := diag(L_{q_1}^{(k)}(w), ..., L_{q_t}^{(k)}(w)), which is a block diagonal matrix. Moment and localizing matrices are important tools for constructing semidefinite programming relaxations for solving moment and polynomial optimization problems [20,25,32,48]. Moreover, moment matrices are also useful for computing tensor decompositions [49]. We refer to [60] for a survey on semidefinite programming and its applications.
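For intuition, the truncated moment sequence of a Dirac measure δ_u has a rank-one moment matrix M_k(w) = [u]_k [u]_k^T, and its localizing matrix for q is PSD exactly when q(u) ≥ 0. The numpy sketch below (our own illustration; the helper names are not from the paper) builds M_2(w) and L_q^{(2)}(w) for l = 2 and q = 1 − x_1² − x_2².

```python
import numpy as np
from itertools import combinations_with_replacement
from math import prod

# Truncated moment sequence of the Dirac measure at u in R^2:
#   w_alpha = u^alpha for |alpha| <= 2k.
def monomials(l, d):
    """All exponent tuples alpha in N^l with |alpha| <= d, graded order."""
    out = []
    for t in range(d + 1):
        for c in combinations_with_replacement(range(l), t):
            a = [0] * l
            for i in c:
                a[i] += 1
            out.append(tuple(a))
    return out

def point_vector(u, d):
    """[u]_d: all monomials of u up to degree d, evaluated at u."""
    return np.array([prod(ui**ai for ui, ai in zip(u, a)) for a in monomials(len(u), d)])

def moment_matrix(u, k):
    col = point_vector(u, k)
    return np.outer(col, col)            # entries w_{alpha+beta} = u^(alpha+beta)

def localizing_matrix_q(u, k):
    # q = 1 - x1^2 - x2^2, deg q = 2; rows/cols labeled by 2|alpha| <= 2k - 2.
    # For a Dirac measure, L_q^(k)(w) = q(u) * [u]_{k-1} [u]_{k-1}^T.
    col = point_vector(u, k - 1)
    return (1 - u[0]**2 - u[1]**2) * np.outer(col, col)

u = (0.3, 0.4)                           # a point inside the unit disk
M = moment_matrix(u, 2)                  # 6 x 6, rank one
L = localizing_matrix_q(u, 2)            # 3 x 3, PSD since q(u) = 0.75 > 0
print(np.linalg.matrix_rank(M), np.min(np.linalg.eigvalsh(L)) >= -1e-12)
```

Moment vectors of general (atomic) measures are sums of such rank-one contributions, which is why the ranks of M_k(w) reveal the support of a representing measure.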

An algorithm for solving SPPPs
Let F, g, h be the polynomial and polynomial tuples for the saddle point problem (1.1). Assume g, h are nonsingular, so the Lagrange multiplier vectors λ(x, y), µ(x, y) can be expressed as in (1.13)-(1.14). We have seen that each saddle point (x*, y*) must satisfy (1.15). This leads us to consider the optimization problem

(3.1)  min F(x, y)  subject to (1.15),

where λ_i(x, y) and µ_j(x, y) are the Lagrange polynomials given as in (1.13)-(1.14). The saddle point problem (1.1) is not equivalent to (3.1). However, the optimization problem (3.1) can be used to get a candidate saddle point. Suppose (x*, y*) is a minimizer of (3.1). If x* is a minimizer of F(x, y*) over X and y* is a maximizer of F(x*, y) over Y, then (x*, y*) is a saddle point; otherwise, such (x*, y*) is not a saddle point, i.e., there exists u ∈ X and/or there exists v ∈ Y such that F(u, y*) < F(x*, y*) and/or F(x*, v) > F(x*, y*). The points u, v can be used to give the new constraints

(3.2)  F(u, y) − F(x, y) ≥ 0,  F(x, y) − F(x, v) ≥ 0.

Every saddle point (x, y) must satisfy (3.2), so (3.2) can be added to the optimization problem (3.1) without excluding any true saddle point. For generic polynomials F, g, h, the problem (3.1) has only finitely many feasible points (see Theorem 3.3). Therefore, by repeatedly adding new inequalities like (3.2), we eventually get a saddle point or detect nonexistence of saddle points. This results in the following algorithm.

Algorithm 3.1.

Step 0: Let K_1 = K_2 = S_a := ∅ be empty sets.
Step 1: If the problem (3.1) is infeasible, then (1.1) does not have a saddle point and stop; otherwise, solve (3.1) for a set K 0 of minimizers. Let k := 0.
Step 2: For each (x*, y*) ∈ K_k, do the following:

(a): (Lower level minimization) Solve the problem

(3.3)  ϑ_1(y*) := min_{x∈X} F(x, y*)

and get a set S_1(y*) of minimizers. If F(x*, y*) > ϑ_1(y*), update K_1 := K_1 ∪ S_1(y*).

(b): (Lower level maximization) Solve the problem

(3.4)  ϑ_2(x*) := max_{y∈Y} F(x*, y)

and get a set S_2(x*) of maximizers. If F(x*, y*) < ϑ_2(x*), update K_2 := K_2 ∪ S_2(x*).

(c): If F(x*, y*) = ϑ_1(y*) and F(x*, y*) = ϑ_2(x*), update S_a := S_a ∪ {(x*, y*)}.

Step 3: If S_a ≠ ∅, then each point in S_a is a saddle point and stop; otherwise go to Step 4.

Step 4: (Upper level minimization) Solve the optimization problem

(3.5)  min F(x, y)  subject to (1.15) and F(u, y) − F(x, y) ≥ 0 (u ∈ K_1),  F(x, y) − F(x, v) ≥ 0 (v ∈ K_2).

If (3.5) is infeasible, then (1.1) has no saddle points and stop; otherwise, compute a set K_{k+1} of optimizers for (3.5). Let k := k + 1 and go to Step 2.

Output: If S_a is nonempty, every point in S_a is a saddle point; otherwise, output that there is no saddle point.
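The control flow of Algorithm 3.1 can be mimicked on a toy instance, with plain grid search standing in for the Lasserre-type relaxations of the subproblems. The sketch below (our own illustration; `solve_upper` and the tolerances are assumptions, not the paper's solver) shows the candidate / lower level verification / cut structure with cuts as in (3.2).

```python
import numpy as np

# Toy instance: F(x, y) = x^2 - y^2 on X = Y = [-1, 1];
# (0, 0) is the unique saddle point.
F = lambda x, y: x**2 - y**2
X = Y = np.linspace(-1.0, 1.0, 201)

def solve_upper(K1, K2):
    """Return a grid point satisfying the first-order conditions and all
    collected cuts, minimizing F; None if no such point exists."""
    best = None
    for x in X:
        for y in Y:
            if abs(2 * x) > 1e-6 or abs(-2 * y) > 1e-6:      # grad_x F, grad_y F
                continue
            if any(F(u, y) < F(x, y) - 1e-12 for u in K1):   # cut F(u,y) >= F(x,y)
                continue
            if any(F(x, y) < F(x, v) - 1e-12 for v in K2):   # cut F(x,y) >= F(x,v)
                continue
            if best is None or F(x, y) < F(*best):
                best = (x, y)
    return best

K1, K2, saddle = [], [], None
for _ in range(10):                        # Steps 1-4, at most 10 rounds
    cand = solve_upper(K1, K2)
    if cand is None:
        break                              # certificate of nonexistence
    x, y = cand
    xs = X[np.argmin(F(X, y))]             # lower level minimization
    ys = Y[np.argmax(F(x, Y))]             # lower level maximization
    ok_min = F(xs, y) >= F(x, y) - 1e-9
    ok_max = F(x, ys) <= F(x, y) + 1e-9
    if ok_min and ok_max:
        saddle = (float(x), float(y))      # verified saddle point
        break
    if not ok_min:
        K1.append(float(xs))
    if not ok_max:
        K2.append(float(ys))

print("saddle point:", saddle)
```

On this instance the first candidate already passes both lower level checks, so no cuts are needed; on non convex-concave problems the K_1, K_2 lists grow across iterations exactly as in Steps 2 and 4.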
For generic polynomials, the feasible set K 0 of (3.1), as well as each K k in Algorithm 3.1, is finite. The convergence of Algorithm 3.1 is shown as follows.
Theorem 3.2. Let K_0 be the feasible set of (3.1) and let S_a be as in Algorithm 3.1. If the complement in K_0 of the set of saddle points of (1.1) is finite, then Algorithm 3.1 must terminate after finitely many iterations. Moreover, if it terminates with S_a ≠ ∅, then each (x*, y*) ∈ S_a is a saddle point; if it terminates with S_a = ∅, then there is no saddle point.
Proof. At an iteration, if S_a ≠ ∅, then Algorithm 3.1 terminates. Consider an iteration with S_a = ∅. By the constraints added in Step 2, each point (x*, y*) ∈ K_k is not feasible for (3.5). Hence, when the kth iteration goes to the (k + 1)th one, the nonempty sets K_0, K_1, ..., K_{k+1} produced by the algorithm are pairwise disjoint. All the points in these sets are feasible for (3.1) but are not saddle points, so they all lie in the complement of the set of saddle points in K_0. Therefore, when that complement set is finite, Algorithm 3.1 must terminate after finitely many iterations.
When S_a ≠ ∅ at termination, each point (x*, y*) ∈ S_a is verified as a saddle point in Step 2. When S_a = ∅ at termination, Algorithm 3.1 stops in Step 1 or Step 4 at some iteration, with (3.1) or (3.5) being infeasible. Since every saddle point is feasible for both (3.1) and (3.5), there does not exist a saddle point if S_a = ∅.
The number of iterations required by Algorithm 3.1 to terminate is bounded above by the cardinality of the complement set K_0 \ S_a, which is always less than or equal to the cardinality |K_0| of the feasible set of (3.1). Generally, it is hard to count |K_0 \ S_a| or |K_0| accurately. When the polynomials F, g, h are generic, we can prove that the solution set of the equality constraints in (3.1) is finite.

Theorem 3.3. If F and all the g_i, h_j are generic polynomials, then the polynomial system (3.6), given by the equality constraints in (3.1), has only finitely many complex solutions.

The proof of Theorem 3.3 will be given in Section 5. One would like to know the number of complex solutions to the polynomial system (3.6) for generic polynomials F, g, h. That number is an upper bound for |K_0| and hence also for the number of iterations required by Algorithm 3.1 to terminate. The following theorem gives such an upper bound. Let a_i := deg(g_i) and b_j := deg(h_j).

Theorem 3.4. Let

(3.7)  M := Σ a_{i1} · · · a_{ir1} b_{j1} · · · b_{jr2} · s,

where the sum is over all index sets {i_1, ..., i_{r1}} ⊆ [ℓ_1] and {j_1, ..., j_{r2}} ⊆ [ℓ_2] of possibly active constraints, and the number s depends only on the degrees of F, g, h and on r_1, r_2. If F(x, y) and the g_i, h_j are generic, then (3.6) has at most M complex solutions, and hence Algorithm 3.1 must terminate within M iterations.
The proof for Theorem 3.4 will be given in Section 5. We remark that the upper bound M given in (3.7) is not sharp. In our computational practice, Algorithm 3.1 typically terminates after a few iterations. It is an interesting question to obtain accurate upper bounds for the number of iterations required by Algorithm 3.1 to terminate.

Solving optimization problems
We discuss how to solve the optimization problems that appear in Algorithm 3.1. Under some genericity assumptions on F, g, h, we show that their optimizers can be computed by solving Lasserre type semidefinite relaxations. Let X, Y be feasible sets given as in (1.3)-(1.4). Assume g, h are nonsingular, so λ(x, y), µ(x, y) can be expressed as in (1.13)-(1.14).

4.1. The upper level optimization.

The optimization problem (3.1) is the special case of (3.5) with K_1 = K_2 = ∅. So it suffices to discuss how to solve (3.5) with finite sets K_1, K_2. For convenience, we rewrite (3.5) explicitly as

(4.1)  min F(x, y)
       subject to  ∇_x F(x, y) − Σ_{i=1}^{ℓ1} λ_i(x, y) ∇g_i(x) = 0,
                   ∇_y F(x, y) − Σ_{j=1}^{ℓ2} µ_j(x, y) ∇h_j(y) = 0,
                   g_i(x) = 0 (i ∈ E_X,1),  g_i(x) ≥ 0 (i ∈ E_X,2),
                   h_j(y) = 0 (j ∈ E_Y,1),  h_j(y) ≥ 0 (j ∈ E_Y,2),
                   F(u, y) − F(x, y) ≥ 0 (u ∈ K_1),  F(x, y) − F(x, v) ≥ 0 (v ∈ K_2).

Recall that λ_i(x, y), µ_j(x, y) are the Lagrange polynomials as in (1.13)-(1.14). Denote by φ the tuple of equality constraining polynomials in (4.1), and by ψ the tuple of inequality constraining ones.
They are polynomials in (x, y). Let

d_0 := max{⌈deg(F)/2⌉, ⌈deg(φ)/2⌉, ⌈deg(ψ)/2⌉}.

Then, the optimization problem (4.1) can be simply written as

(4.5)  min F(x, y)  subject to  φ(x, y) = 0,  ψ(x, y) ≥ 0.

We apply Lasserre's hierarchy of semidefinite relaxations to solve (4.5). For integers k = d_0, d_0 + 1, ..., the kth order semidefinite relaxation is

(4.6)  min ⟨F, w⟩  subject to  w_0 = 1,  L_φ^{(k)}(w) = 0,  M_k(w) ⪰ 0,  L_ψ^{(k)}(w) ⪰ 0,  w ∈ R^{N^{n+m}_{2k}},

where ⟨F, w⟩ := Σ_α F_α w_α for F = Σ_α F_α (x, y)^α. The number k is called a relaxation order. We refer to (2.4) for the localizing and moment matrices used in (4.6).

Algorithm 4.1.

Step 0: Let k := d_0.
Step 2: If the relaxation (4.6) is infeasible, then (1.1) has no saddle points and stop; otherwise, solve it for a minimizer w * . Let t := d 0 .
Step 3: Check whether or not w* satisfies the rank condition

(4.7)  rank M_t(w*) = rank M_{t−d_0}(w*).

Step 4: If (4.7) holds, extract r := rank M_t(w*) minimizers for (4.1) and stop.

Step 5: If t < k, let t := t + 1 and go to Step 3; otherwise, let k := k + 1 and go to Step 1.

Output: Minimizers of the optimization problem (4.1), or a certificate of the infeasibility of (4.1).
The conclusions in Steps 2 and 3 are justified by the following Proposition 4.2. The rank condition (4.7) is called flat extension or flat truncation [15,45]. It is a sufficient and almost necessary criterion for checking convergence of Lasserre type relaxations [45]. When it is satisfied, the method in [27] can be applied to extract minimizers in Step 4; it is implemented in the software GloptiPoly 3 [26].

Proof. Since g, h are nonsingular, every saddle point must be a critical point, and the Lagrange multipliers can be expressed as in (1.13)-(1.14). i) For each (u, v) that is feasible for (4.1), the vector [(u, v)]_{2k} satisfies all the constraints of (4.6), for all k. Therefore, if (4.6) is infeasible for some k, then (4.1) is infeasible.
In the above, item (1) is almost the same as requiring that X, Y are compact sets; item (2) is the same as requiring that (3.6) has only finitely many real solutions. We would like to remark that when F, g, h are generic, every minimizer of (4.1) is an isolated real solution of (3.6); this is because (3.6) has only finitely many complex solutions for generic F, g, h. Therefore, Algorithm 4.1 has finite convergence in the generic case. We would also like to remark that Proposition 4.2 and Theorem 4.4 assume that the semidefinite relaxation (4.6) is solved exactly. However, semidefinite programs are usually solved numerically (e.g., by SeDuMi), for better computational performance. Therefore, in computational practice, the optimizers obtained by Algorithm 4.1 are correct up to numerical errors. This is a common feature of all numerical methods.
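For intuition on the rank condition (4.7), recall that the nested moment matrices M_t(w) of a finitely atomic measure have ranks that stabilize at the number of atoms; flat truncation detects this stabilization. The following numpy sketch in one variable is our own illustration, not the paper's implementation.

```python
import numpy as np

# Moment sequence of a measure with two atoms at 1/2 and -1/3, equal weights:
#   w_d = sum_i  weight_i * atom_i^d,   d <= 8.
atoms = np.array([0.5, -1.0 / 3])
weights = np.array([0.5, 0.5])
w = np.array([np.sum(weights * atoms**d) for d in range(9)])

def M(t):
    """Moment matrix M_t(w): Hankel matrix with entries w_{i+j}, 0 <= i, j <= t."""
    return np.array([[w[i + j] for j in range(t + 1)] for i in range(t + 1)])

ranks = [np.linalg.matrix_rank(M(t), tol=1e-9) for t in range(5)]
print(ranks)   # [1, 2, 2, 2, 2] -- the rank stabilizes at r = 2 atoms
```

Once rank M_t(w) = rank M_{t−1}(w) holds, the support of a representing measure (here the two atoms) can be extracted, which is what the method of [27] does in Step 4.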

4.2. Lower level minimization.

For a given pair (x*, y*) that is feasible for (3.1) or (3.5), we need to check whether or not x* is a minimizer of F(x, y*) over X. This requires solving the minimization problem

(4.8)  ϑ_1(y*) := min_x F(x, y*)  subject to  g_i(x) = 0 (i ∈ E_X,1),  g_i(x) ≥ 0 (i ∈ E_X,2).

When g is nonsingular, if (4.8) has a minimizer, it is equivalent to the problem obtained by adding the necessary optimality conditions:

(4.9)  min_x F(x, y*)  subject to  g_i(x) = 0 (i ∈ E_X,1),  g_i(x) ≥ 0 (i ∈ E_X,2),  ∇_x F(x, y*) − Σ_{i=1}^{ℓ1} λ_i(x, y*) ∇g_i(x) = 0.

Denote the tuple of equality constraining polynomials

(4.10)  φ_{y*} := (g_eq, ∇_x F(x, y*) − Σ_{i=1}^{ℓ1} λ_i(x, y*) ∇g_i(x))

and denote the tuple of inequality ones

(4.11)  ψ_{y*} := g_in.

They are polynomials in x but not in y, depending on the value of y*. Let

(4.12)  d_1 := max{⌈deg(F(x, y*))/2⌉, ⌈deg(φ_{y*})/2⌉, ⌈deg(ψ_{y*})/2⌉}.

We can rewrite (4.9) equivalently as

(4.13)  min_x F(x, y*)  subject to  φ_{y*}(x) = 0,  ψ_{y*}(x) ≥ 0.

Lasserre's hierarchy of semidefinite relaxations for solving (4.13) is

(4.14)  min ⟨F(x, y*), w⟩  subject to  w_0 = 1,  L_{φ_{y*}}^{(k)}(w) = 0,  M_k(w) ⪰ 0,  L_{ψ_{y*}}^{(k)}(w) ⪰ 0,  w ∈ R^{N^n_{2k}},

for relaxation orders k = d_1, d_1 + 1, .... Since (x*, y*) is a feasible pair for (3.1) or (3.5), the problems (4.8) and (4.13) are also feasible, hence (4.14) is feasible as well. A standard algorithm for solving (4.13) is as follows.
Step 4: If t < k, let t := t + 1 and go to Step 3; otherwise, let k := k + 1 and go to Step 1. Output: Minimizers of the optimization problem (4.13).
Similar conclusions as in Proposition 4.2 hold for Algorithm 4.5; for brevity, we do not state them again. The method in [27] can be applied to extract minimizers in Step 3. Moreover, Algorithm 4.5 also terminates within finitely many iterations, under some genericity conditions.

Condition 4.6. The polynomial tuple g is nonsingular and the point y* satisfies one (not necessarily all) of the following:
(1) IQ(g_eq, g_in) is archimedean;
(2) the equation φ_{y*}(x) = 0 has finitely many real solutions;
(3) IQ(φ_{y*}, ψ_{y*}) is archimedean.
Since (x * , y * ) is feasible for (3.1) or (3.5), Condition 4.3 implies Condition 4.6, which also holds generically. The finite convergence of Algorithm 4.5 is summarized as follows.
Theorem 4.7. Assume the optimization problem (4.8) has a minimizer and Condition 4.6 holds. If each minimizer of (4.8) is an isolated critical point, then, for all k big enough, (4.14) has a minimizer and each of them must satisfy (4.15).
The proof of Theorem 4.7 will be given in Section 5. We would like to remark that every minimizer of (4.13) is an isolated critical point of (4.8), when F, g, h are generic. This is implied by Theorem 3.3.

4.3. Lower level maximization.

For a given pair (x*, y*) that is feasible for (3.1) or (3.5), we need to check whether or not y* is a maximizer of F(x*, y) over Y. This requires solving the maximization problem

(4.16)  ϑ_2(x*) := max_y F(x*, y)  subject to  h_j(y) = 0 (j ∈ E_Y,1),  h_j(y) ≥ 0 (j ∈ E_Y,2).

When h is nonsingular, if (4.16) has a maximizer, it is equivalent to the problem obtained by adding the necessary optimality conditions:

(4.17)  max_y F(x*, y)  subject to  h_j(y) = 0 (j ∈ E_Y,1),  h_j(y) ≥ 0 (j ∈ E_Y,2),  ∇_y F(x*, y) − Σ_{j=1}^{ℓ2} µ_j(x*, y) ∇h_j(y) = 0.

Denote the tuple of equality constraining polynomials

(4.18)  φ_{x*} := (h_eq, ∇_y F(x*, y) − Σ_{j=1}^{ℓ2} µ_j(x*, y) ∇h_j(y))

and denote the tuple of inequality ones

(4.19)  ψ_{x*} := h_in.

They are polynomials in y but not in x, depending on the value of x*. Let

(4.20)  d_2 := max{⌈deg(F(x*, y))/2⌉, ⌈deg(φ_{x*})/2⌉, ⌈deg(ψ_{x*})/2⌉}.

Hence, (4.17) can be simply expressed as

(4.21)  max_y F(x*, y)  subject to  φ_{x*}(y) = 0,  ψ_{x*}(y) ≥ 0.

Lasserre's hierarchy of semidefinite relaxations for solving (4.21) is

(4.22)  max ⟨F(x*, y), w⟩  subject to  w_0 = 1,  L_{φ_{x*}}^{(k)}(w) = 0,  M_k(w) ⪰ 0,  L_{ψ_{x*}}^{(k)}(w) ⪰ 0,  w ∈ R^{N^m_{2k}},

for relaxation orders k = d_2, d_2 + 1, .... Since (x*, y*) is feasible for (3.1) or (3.5), the problems (4.16) and (4.21) must also be feasible; hence the relaxation (4.22) is always feasible. Similarly, an algorithm for solving (4.21) is as follows.

Algorithm 4.8.

Step 0: Let k := d_2.
Step 4: If t < k, let t := t + 1 and go to Step 3; otherwise, let k := k + 1 and go to Step 1. Output: Maximizers of the optimization problem (4.21).
The same kind of conclusions as in Proposition 4.2 hold for Algorithm 4.8. The method in [27] can be applied to extract maximizers in Step 3. We can show that Algorithm 4.8 must also terminate within finitely many iterations, under some genericity conditions.

Condition 4.9. The polynomial tuple h is nonsingular and the point x* satisfies one (not necessarily all) of the following:
(1) IQ(h_eq, h_in) is archimedean;
(2) the equation φ_{x*}(y) = 0 has finitely many real solutions;
(3) IQ(φ_{x*}, ψ_{x*}) is archimedean.

By the same argument as for Condition 4.6, Condition 4.9 also holds generically.

Theorem 4.10. Assume that (4.16) has a maximizer and Condition 4.9 holds. If each maximizer of (4.16) is an isolated critical point, then, for all k big enough, (4.22) has a maximizer and each of them must satisfy (4.23).
The proof of Theorem 4.10 will be given in Section 5. Similarly, when F, g, h are generic, each maximizer of (4.21) is an isolated critical point of (4.16).

Some proofs
This section gives the proofs for some theorems in the previous sections.

Proof of Theorem 3.3.
Under the genericity assumption, the polynomial tuples g, h are nonsingular, so the Lagrange multipliers in (1.8)-(1.9) can be expressed as in (1.13)-(1.14). Hence, (3.6) is equivalent to the polynomial system in (x, y, λ, µ):

(5.1)  ∇_x F(x, y) = Σ_{i=1}^{ℓ1} λ_i ∇g_i(x),  ∇_y F(x, y) = Σ_{j=1}^{ℓ2} µ_j ∇h_j(y),  g_i(x) = 0 (i ∈ E_X,1),  λ_i g_i(x) = 0 (i ∈ E_X,2),  h_j(y) = 0 (j ∈ E_Y,1),  µ_j h_j(y) = 0 (j ∈ E_Y,2).

Due to the complementarity conditions, g_i(x) = 0 or λ_i = 0 for each i ∈ E_X,2, and h_j(y) = 0 or µ_j = 0 for each j ∈ E_Y,2. Note that if g_i(x) ≠ 0 then λ_i = 0, and if h_j(y) ≠ 0 then µ_j = 0. Since E_X,2, E_Y,2 are finite labeling sets, there are only finitely many cases of g_i(x) = 0 or g_i(x) ≠ 0, and h_j(y) = 0 or h_j(y) ≠ 0. We prove the conclusion for every such case. Moreover, if g_i(x) = 0 for i ∈ E_X,2, then the inequality g_i(x) ≥ 0 can be counted as an equality constraint. The same holds for h_j(y) = 0 with j ∈ E_Y,2. Therefore, we only need to prove the conclusion for the case that has only equality constraints. Without loss of generality, assume E_X,2 = E_Y,2 = ∅ and write the labeling sets as E_X,1 = {1, ..., ℓ_1}, E_Y,1 = {1, ..., ℓ_2}.

When all g_i are generic polynomials, the equations g_i(x) = 0 (i ∈ E_X,1) have no solutions if ℓ_1 > n. Similarly, the equations h_j(y) = 0 (j ∈ E_Y,1) have no solutions if ℓ_2 > m and all h_j are generic. Therefore, we only consider the case ℓ_1 ≤ n and ℓ_2 ≤ m.

When F, g, h are generic, we show that (5.1) cannot have infinitely many solutions. The system (5.1), with (λ, µ) eliminated, is the same as

(5.2)  g_i(x) = 0 (i ∈ E_X,1),  h_j(y) = 0 (j ∈ E_Y,1),  ∇_x F(x, y) ∈ span{∇g_1(x), ..., ∇g_ℓ1(x)},  ∇_y F(x, y) ∈ span{∇h_1(y), ..., ∇h_ℓ2(y)}.

Denote x̃ = (x_0, x_1, ..., x_n) and ỹ = (y_0, y_1, ..., y_m), and denote by g̃_i(x̃) (resp., h̃_j(ỹ)) the homogenization of g_i(x) (resp., h_j(y)). Let P^n denote the n-dimensional complex projective space. Consider the projective variety

V := {(x̃, ỹ) ∈ P^n × P^m : g̃_i(x̃) = 0 (i ∈ E_X,1), h̃_j(ỹ) = 0 (j ∈ E_Y,1)}.

It is smooth, by Bertini's theorem [22], under the genericity assumption on g_i, h_j. Let a_0 (resp., b_0) denote the degree of F in x (resp., in y), and denote the bi-homogenization of F(x, y):

F̃(x̃, ỹ) := x_0^{a_0} y_0^{b_0} F(x/x_0, y/y_0).

When F(x, y) is generic, the projective variety V ∩ {F̃(x̃, ỹ) = 0} is also smooth. One can directly verify that, for a homogeneous polynomial p of degree d in x̃, the Euler identity x̃^T ∇p(x̃) = d · p(x̃) holds; the same applies to g̃_i, h̃_j and to F̃ in x̃ and in ỹ.
Consider the determinantal variety defined by the span (rank) conditions in (5.2), and denote by W its homogenization, so that the projectivization of (5.2) is the intersection W ∩ U, where U denotes the projectivization of the variety defined by the equations g̃_i(x̃) = 0, h̃_j(ỹ) = 0. If (3.6) has infinitely many complex solutions, so does (5.2). Then W ∩ U has positive dimension and must intersect the hypersurface {F̃(x̃, ỹ) = 0}. This means that there exists (x̄, ȳ) ∈ V such that

∇_{x̃} F̃(x̄, ȳ) = Σ_i λ_i ∇_{x̃} g̃_i(x̄),  ∇_{ỹ} F̃(x̄, ȳ) = Σ_j µ_j ∇_{ỹ} h̃_j(ȳ)

for some λ_i, µ_j. Also note g̃_i(x̄) = h̃_j(ȳ) = F̃(x̄, ȳ) = 0. Write x̄ = (x̄_0, x̄_1, ..., x̄_n), ȳ = (ȳ_0, ȳ_1, ..., ȳ_m).
• If x̄_0 ≠ 0 and ȳ_0 ≠ 0, by the Euler identities, we can further get that (x̄, ȳ) is a singular point. This implies that V is singular, which is a contradiction.

• If x̄_0 = 0 but ȳ_0 ≠ 0, by the Euler identities, we can also get that the linear section V ∩ {x_0 = 0} is singular, which is again a contradiction, by the genericity assumption on F, g, h.

• If x̄_0 ≠ 0 but ȳ_0 = 0, then we similarly get that the linear section V ∩ {y_0 = 0} is singular, which is again a contradiction.

• If x̄_0 = ȳ_0 = 0, then V ∩ {x_0 = 0, y_0 = 0} is singular. It is also a contradiction, under the genericity assumption on F, g, h.

In every case we obtain a contradiction. Therefore, the polynomial system (3.6) must have only finitely many complex solutions when F, g, h are generic.
Proof of Theorem 3.4. Each solution of (3.6) is a critical point of F(x, y) over the set X × Y. We count the critical points by enumerating all possibilities of active constraints. For an active labeling set {i_1, ..., i_{r1}} ⊆ [ℓ_1] (for X) and an active labeling set {j_1, ..., j_{r2}} ⊆ [ℓ_2] (for Y), an upper bound for the number of critical points is a_{i1} · · · a_{ir1} b_{j1} · · · b_{jr2} · s, which is given by Theorem 2.2 of [44]. Summing this upper bound over all possible active labeling sets, we get the bound M. Since K_0 is contained in the solution set of (3.6), Algorithm 3.1 must terminate within M iterations, for generic polynomials.
ii) When (4.1) is feasible, every feasible point is a critical point. By Lemma 3.3 of [18], F(x, y) achieves only finitely many values on the set φ(x, y) = 0, say c_1 < c_2 < · · · < c_N. Recall that f* is the minimum value of (4.5), so f* is one of the c_i, say c_ℓ = f*. Since (4.1) has only finitely many minimizers, we can list them as (u_1, v_1), ..., (u_B, v_B). If (x, y) is a feasible point of (4.1), then either F(x, y) = c_k with k > ℓ, or (x, y) is one of (u_1, v_1), ..., (u_B, v_B). We then partition the set {φ(x, y) = 0} into four disjoint subsets according to these values; one of them, say U_3, is the set of minimizers of (4.5).
Proof of Theorem 4.7. The proof is the same as the one for Theorem 4.4, because the Lasserre relaxations (4.14) are constructed by using the optimality conditions of (4.8), just as for Theorem 4.4. In other words, Theorem 4.7 can be viewed as a special version of Theorem 4.4 with K_1 = K_2 = ∅ and without the variable y. The assumptions are the same, so the same proof applies.
Proof of Theorem 4.10. The proof is the same as the one for Theorem 4.7.

Numerical Experiments
This section presents numerical examples of applying Algorithm 3.1 to solve saddle point problems. The computation is implemented in MATLAB R2012a, on a Lenovo laptop with a CPU at 2.90 GHz and 16.0 GB of RAM. The Lasserre type moment semidefinite relaxations are solved by the software GloptiPoly 3 [26], which calls the semidefinite program solver SeDuMi [59]. For neatness, only four decimal digits are displayed in the computational results.
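A computed candidate (x^*, y^*) can be sanity-checked against definition (1.1) by re-optimizing F(·, y^*) and F(x^*, ·) from several starting points. The sketch below does this for a hypothetical unconstrained instance, not one of the examples in this section; local re-optimization can only refute a candidate, not certify global optimality.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical instance: F(x, y) = x^2 - y^2 + x*y on X = Y = R,
# whose saddle point is (0, 0).
def F(x, y):
    return x**2 - y**2 + x*y

x_s, y_s = 0.0, 0.0  # candidate saddle point to be checked

# Definition (1.1): x* should minimize F(., y*) and y* should maximize
# F(x*, .). Re-optimize from random starts as a (local) sanity check.
rng = np.random.default_rng(0)
ok = True
for _ in range(20):
    t0 = [rng.normal()]
    res_min = minimize(lambda x: F(x[0], y_s), t0)
    res_max = minimize(lambda y: -F(x_s, y[0]), t0)
    ok = ok and (res_min.fun >= F(x_s, y_s) - 1e-6)
    ok = ok and (-res_max.fun <= F(x_s, y_s) + 1e-6)
print(ok)
```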
There are very few examples of non convex-concave type SPPPs in the existing literature, so we construct various examples with different types of functions and constraints. When g, h are nonsingular tuples, the Lagrange multipliers λ(x, y), µ(x, y) can be expressed by polynomials as in (1.13)-(1.14). Here we give some expressions for λ(x, y) that will be used frequently in the examples; the expressions for µ(x, y) are similar. Let F(x, y) be the objective.
It took about 7.5 seconds. (iii) Let n = m = 4 and This function is neither convex in x nor concave in y. After 2 iterations by Algorithm 3.1, we got 4 saddle points: x^* = (0.2500, 0.2500, 0.2500, 0.2500), y^* = e_i, with i = 1, 2, 3, 4. It took about 99 seconds. (iv) Let n = m = 3 and This function is neither convex in x nor concave in y. After 4 iterations by Algorithm 3.1, we got that there is no saddle point. It took about 32 seconds.
This function is neither convex in x nor concave in y. After 3 iterations by Algorithm 3.1, we got that there is no saddle point. It took about 12.8 seconds.
It took about 75 seconds.
It took about 6 seconds.
Example 6.4. Consider the sphere constraints X = S^2 and Y = S^2. They are not convex. The Lagrange multipliers can be expressed as in (6.4).
(ii) Let F(x, y) be the function The Lagrange multipliers can be expressed as in (6.4). The function F is not convex in x but is concave in y. After 1 iteration by Algorithm 3.1, we got the saddle point: Example 6.6. Consider the function F(x, y) := x_1^2 y_2 y_3 + y_1^2 x_2 x_3 + x_2^2 y_1 y_3 + y_2^2 x_1 x_3 + x_3^2 y_1 y_2 + y_3^2 x_1 x_2 and the sets They are nonnegative portions of spheres, so the feasible sets X, Y are nonconvex. The Lagrange multipliers are expressed as After 3 iterations by Algorithm 3.1, we got that there is no saddle point. It took about 37.3 seconds.
Example 6.7. Let X = Y = R^4_+ be the nonnegative orthant and let F(x, y) be The Lagrange multipliers can be expressed as in (6.5). The function F is neither convex in x nor concave in y. After 1 iteration by Algorithm 3.1, we got the saddle point It took about 4.8 seconds.
Example 6.8. Let X = Y = R^3 be the entire space, i.e., there are no constraints, so no Lagrange multiplier expressions are needed. Consider the function It is neither convex in x nor concave in y. After 1 iteration by Algorithm 3.1, we got the saddle point It took about 113 seconds.
Example 6.9. Consider the sets and the function The function F(x, y) is not convex in x but is concave in y. The Lagrange multipliers can be expressed as The expressions for µ_j(x, y) are similar. After 9 iterations by Algorithm 3.1, we got the saddle point: x^* = (1.2599, 1.2181, 1.3032), y^* = (1.0000, 1.1067, 0.9036).
It took about 64 seconds.

6.2. Some application problems.

Example 6.10. We consider the saddle point problem arising from zero sum games with two players. Suppose x ∈ R^n is the strategy of the first player and y ∈ R^m is the strategy of the second one. The usual constraints on strategies are given by simplices, which represent probability measures on finite sets. So we consider the feasible sets X = ∆_n, Y = ∆_m. Suppose the profit function of the first player is for matrices A_1 ∈ R^{n×n}, A_2 ∈ R^{m×m}, B ∈ R^{n×m}. For the zero sum game, the profit function of the second player is f_2(x, y) := −f_1(x, y). Each player wants to maximize the profit, given the strategy of the other player. A Nash equilibrium is a point (x^*, y^*) such that the maximum of f_1(x, y^*) over ∆_n is achieved at x^*, while the maximum of f_2(x^*, y) over ∆_m is achieved at y^*. This is equivalent to (x^*, y^*) being a saddle point of the function F := −f_1(x, y) over X, Y. For instance, we consider the matrices The resulting saddle point problem is of the non convex-concave type. After 2 iterations by Algorithm 3.1, we got two Nash equilibria: x^* = (0, 1, 0, 0, 0), y^* = (1, 0, 0, 0, 0); x^* = (0, 1, 0, 0, 0), y^* = (0, 1, 0, 0, 0). It took about 7 seconds.

Example 6.11. Consider the portfolio optimization problem [21, 62] where Q is a covariance matrix and µ is an estimate of the expected returns. There often exists a perturbation (δµ, δQ) of (µ, Q). This results in two types of robust optimization problems min_{x∈X} max_{(δµ,δQ)∈Y} We look for x^* and (δµ^*, δQ^*) that solve the above two robust optimization problems simultaneously. This is equivalent to the saddle point problem with F = −(µ + δµ)^T x + x^T (Q + δQ) x. For instance, consider the case that with the feasible sets In the above, SR^{3×3} denotes the space of real symmetric 3-by-3 matrices. The Lagrange multipliers can be similarly expressed as in (6.3).
After 1 iteration by Algorithm 3.1, we got the saddle point It took about 32 seconds. This saddle point solves the above min-max and max-min optimization problems simultaneously.
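For the purely bilinear case of Example 6.10 (A_1 = A_2 = 0, so f_1 = x^T B y), the simplex game reduces to the classical matrix game, whose saddle point can be computed by linear programming via von Neumann's minimax theorem. A minimal sketch, where the payoff matrix B is illustrative and not taken from the paper:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 3x3 payoff matrix for the row player (maximizer of x^T B y).
B = np.array([[1.0, -1.0,  0.0],
              [-1.0, 1.0, -1.0],
              [0.0, -1.0,  1.0]])
m, n = B.shape

# Shift so all entries are positive; this shifts the game value by the
# same constant and does not change the optimal strategies.
shift = 1.0 - B.min()
A = B + shift

# Classical LP: min 1^T u  s.t.  A^T u >= 1, u >= 0.
# Then the value is v = 1 / sum(u) and x* = v * u.
res = linprog(c=np.ones(m), A_ub=-A.T, b_ub=-np.ones(n),
              bounds=[(0, None)] * m)
u = res.x
x_star = u / u.sum()
value = 1.0 / u.sum() - shift
print(np.round(x_star, 4), round(value, 4))
```

The dual LP yields the column player's strategy y^* in the same way; together (x^*, y^*) is a saddle point of x^T B y over ∆_m × ∆_n.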
6.3. Some comparisons with other methods. At the request of the referees, we give some comparisons between Algorithm 3.1 and other methods. Saddle point problems can also be solved by the straightforward approach of enumerating all KKT points. When all KKT points are to be computed, numerical homotopy software such as Bertini [3] can be used. Saddle points can also be computed by quantifier elimination (QE) methods; note that the definition (1.1) automatically gives a quantified formula for saddle points. In the following, we compare the performance of these methods with that of Algorithm 3.1.
For computing all KKT points, the Maple function Solve is used. After the KKT points are obtained, we select the saddle points from them by checking the definition. For the QE approach, the Maple function QuantifierElimination is used to solve the quantifier elimination formulae. For the numerical homotopy approach, the software Bertini is used to solve the KKT system to get all KKT points first, and then we select the saddle points from them. When there are infinitely many KKT points, the function Solve and the software Bertini have difficulty getting saddle points by enumerating KKT points; this happens for Examples 6.1(i), 6.2(i), and 6.4(ii). The computational time (in seconds or hours) for these methods is reported in Table 1.
We remark that the Maple function QuantifierElimination cannot solve any of our examples (it fails to terminate within 6 hours on each one). We therefore also tried the Mathematica function Resolve to implement the QE method. It solves Example 6.1(iii) in about 1 second, but cannot solve any other example (it does not terminate within 6 hours). The software Bertini can solve Examples 6.1(ii), 6.2(ii), 6.3(i), 6.4(i), 6.5, 6.7, 6.8, and 6.9; for the other examples, it does not finish within 6 hours. On these examples, Algorithm 3.1 took much less computational time, except for Example 6.1(iii). We also remark that symbolic methods like QE can obtain saddle points exactly via symbolic operations, while numerical methods can only obtain saddle points correctly up to round-off errors.

Conclusions and discussions
This paper discusses how to solve the saddle point problem of polynomials. We propose an algorithm (Algorithm 3.1) for computing saddle points, in which Lasserre type semidefinite relaxations are used to solve the polynomial optimization problems. Under some genericity assumptions, the proposed algorithm computes a saddle point if one exists; if no saddle point exists, the algorithm detects the nonexistence. We remark, however, that Algorithm 3.1 can always be applied, whether or not the defining polynomials are generic. The algorithm needs to solve semidefinite programs for the Lasserre type relaxations. Since semidefinite programs are usually solved numerically (e.g., by SeDuMi), the solutions computed by Algorithm 3.1 are correct up to numerical errors. If the computed solutions are not accurate enough, classical nonlinear optimization methods can be applied to improve the accuracy. The method given in this paper can be used to solve saddle point problems from broad applications, such as zero sum games, min-max optimization and robust optimization.
If the polynomials are such that the set K_0 is infinite, then the convergence of Algorithm 3.1 is not theoretically guaranteed. For future work, the following questions are important and interesting. The complexity of solving the optimization problems such as (4.5) is another concern. For efficient computational performance, Algorithms 4.1, 4.5, 4.8 are applied to solve them. However, their complexities are mostly open, to the best of the authors' knowledge. We remark that Algorithms 4.1, 4.5, 4.8 are based on the tight relaxation method in [50], instead of the classical Lasserre relaxation method in [32]. For the method in [32], there is no complexity result for the worst case; that is, there exist instances of polynomial optimization for which the method in [32] does not terminate within finitely many steps. The method in [50] always terminates within finitely many steps under nonsingularity assumptions on the constraints, while its complexity is currently unknown.
The finite convergence of Algorithm 3.1 is guaranteed if the set K_0 \ S_a is finite. If it is infinite, the algorithm may or may not converge finitely. If it does not, how can we get a saddle point? The following question remains mostly open to the authors.
Question 7.2. For polynomials F, g, h such that the set K_0 \ S_a is not finite, how can we compute a saddle point if one exists? Or how can we detect the nonexistence if it does not exist? When X, Y are nonempty compact convex sets and the function F is convex-concave, there always exists a saddle point [5, §2.6]. However, if one of X, Y is nonconvex or if F is not convex-concave, a saddle point may or may not exist. This is the case even if F is a polynomial and X, Y are semialgebraic sets. The existence and nonexistence of saddle points for SPPPs are shown in various examples in Section 6. However, SPPPs have an interesting new property. We can write the objective polynomial F(x, y) as A convex moment relaxation of the SPPP is to find (u^*, v^*) ∈ conv(X) × conv(Y) such that (u^*)^T G v ≤ (u^*)^T G v^* ≤ u^T G v^* for all u ∈ conv(X), v ∈ conv(Y). (The notation conv(T) denotes the convex hull of T.) When X, Y are nonempty compact sets, the above (u^*, v^*) always exists, because u^T G v is bilinear in (u, v) and conv(X), conv(Y) are compact convex sets. In particular, if such a u^* is an extreme point of conv(X) and v^* is an extreme point of conv(Y), say, u^* = [a]_{d_1} and v^* = [b]_{d_2} for a ∈ X, b ∈ Y, then (a, b) must be a saddle point of the original SPPP. If there is no saddle point (u^*, v^*) such that u^*, v^* are both extreme, the original SPPP does not have saddle points. We refer to [30] for related work on this kind of problems.