Convex Generalized Nash Equilibrium Problems and Polynomial Optimization

This paper studies convex Generalized Nash Equilibrium Problems (GNEPs) given by polynomials. We use rational and parametric expressions for Lagrange multipliers to formulate efficient polynomial optimization problems for computing Generalized Nash Equilibria (GNEs). The Moment-SOS hierarchy of semidefinite relaxations is used to solve these polynomial optimization problems. Under some general assumptions, we prove the method can find a GNE if one exists, or detect nonexistence of GNEs. Numerical experiments are presented to show the efficiency of the method.


Introduction
The Generalized Nash Equilibrium Problem (GNEP) is a game in which each player seeks a strategy optimizing his own objective function, given the other players' strategies. Suppose there are N players and the ith player's strategy is a vector x i ∈ R ni (the n i -dimensional real Euclidean space). We write x i := (x i,1 , . . . , x i,ni ), x := (x 1 , . . . , x N ).
The total dimension of all strategies is n := n 1 + . . . + n N . The main task of the GNEP is to find a tuple u = (u 1 , . . . , u N ) of strategies such that each u i is a minimizer of the ith player's optimization F i (u −i ): minimize f i (x i , u −i ) over x i ∈ R ni , s.t. g i,j (u 1 , . . . , u i−1 , x i , u i+1 , . . . , u N ) = 0 (j ∈ E i ), g i,j (u 1 , . . . , u i−1 , x i , u i+1 , . . . , u N ) ≥ 0 (j ∈ I i ), where u −i := (u 1 , . . . , u i−1 , u i+1 , . . . , u N ), the f i and g i,j are continuously differentiable functions in x i , and the E i , I i are disjoint finite (possibly empty) labeling sets. A point u satisfying the above is called a Generalized Nash Equilibrium (GNE). For notational convenience, when the ith player's strategy is considered, we use x −i to denote the subvector of all players' strategies except the ith one, i.e., x −i := (x 1 , . . . , x i−1 , x i+1 , . . . , x N ), and write x = (x i , x −i ) accordingly. This paper focuses on the Generalized Nash Equilibrium Problem of Polynomials (GNEPP), i.e., all the functions f i and g i,j are polynomials in x. For each i = 1, . . . , N , let X i be the point-to-set map such that X i (x −i ) := {x i ∈ R ni : g i,j (x i , x −i ) = 0 (j ∈ E i ), g i,j (x i , x −i ) ≥ 0 (j ∈ I i )}. The tuple x is said to be a feasible point of the GNEP if x i ∈ X i (x −i ) for all i. Denote the set X := {x ∈ R n : x i ∈ X i (x −i ) for all i ∈ [N ]}. Then x is a feasible point for the GNEP if and only if x ∈ X.
Definition 1.1. The GNEP given by (1.1) is called convex if, for all i = 1, . . . , N and for all given x −i ∈ dom(X i ), the objective f i (x i , x −i ) is convex in x i and the set X i (x −i ) is convex. For instance, consider the 2-player GNEPP in (1.4). In the above, the · denotes the Euclidean norm. For each i, the Hessian of f i with respect to x i is positive semidefinite for all x −i ∈ dom(X i ). All players have convex optimization problems, so this is a convex GNEP. One can directly check that it has a unique GNE u = (u 1 , u 2 ). GNEPs originated in economics [4,9]. Recently, they have been widely used in many areas, such as economics, transportation, telecommunications and pollution control. Convex GNEPs often appear in applications. We refer to [1,3,8,54] for recent work on applications of GNEPs. Some application examples are shown in Section 6.
Contributions. This paper focuses on convex GNEPPs. Under some constraint qualifications, a feasible point is a GNE if and only if it satisfies the KKT conditions. We introduce rational and parametric expressions for Lagrange multipliers and formulate polynomial optimization problems for computing GNEs. Our major results are:
• For GNEPPs, we introduce rational expressions for Lagrange multipliers and study their properties. We prove the existence of rational expressions and give a necessary and sufficient condition for positivity of their denominators. Moreover, we give parametric expressions for Lagrange multipliers in several cases. For all GNEPs, parametric expressions always exist.
• Using rational and parametric expressions, we formulate polynomial optimization problems and propose an algorithm for computing GNEs. Under some general assumptions, we prove that the algorithm can compute a GNE if one exists, or detect nonexistence of GNEs. To the best of the authors' knowledge, this is the first numerical method that has these properties.
• The Moment-SOS hierarchy of semidefinite relaxations is used to solve the polynomial optimization problems for finding and verifying GNEs. Numerical experiments are presented to show the efficiency of the method.
The paper is organized as follows. Some preliminaries about polynomial optimization are given in Section 2. We introduce rational expressions for Lagrange multipliers in Section 3. Parametric expressions for Lagrange multipliers are given in Section 4. We formulate polynomial optimization problems for computing GNEs and show how to solve them with the Moment-SOS hierarchy in Section 5. Numerical experiments and applications are given in Section 6. Conclusions and some discussions are given in Section 7.

Preliminaries
Notation. The symbol N (resp., R, C) stands for the set of nonnegative integers (resp., real numbers, complex numbers). For a positive integer k, denote the set [k] := {1, . . . , k}. For a real number t, ⌈t⌉ (resp., ⌊t⌋) denotes the smallest integer not smaller than t (resp., the largest integer not greater than t). We use e i to denote the vector whose ith entry is 1 and all others are zeros. By writing A ⪰ 0 (resp., A ≻ 0), we mean that the matrix A is symmetric positive semidefinite (resp., positive definite). For the ith player's strategy vector x i ∈ R ni , the x i,j denotes the jth entry of x i , for j = 1, . . . , n i . When we write (y, x −i ), it means that the ith player's strategy is y ∈ R ni , while the vector of all other players' strategies is fixed to be x −i . Let R[x] denote the ring of polynomials with real coefficients in x, and R[x] d denote its subset of polynomials whose degrees are not greater than d. For the ith player's strategy vector x i , the notation R[x i ] and R[x i ] d is defined in the same way. For the ith player's objective f i (x), the notation ∇ xi f i , ∇ 2 xi f i respectively denotes its gradient and Hessian with respect to x i .
In the following, we use the letter z to represent either x, x i or (x, ω) for some new variables ω, for convenience of discussion. Suppose z := (z 1 , . . . , z l ). For a polynomial p(z) ∈ R[z], the p = 0 means p(z) is identically zero on R l . We say the polynomial p is nonzero if p ≠ 0. Let α := (α 1 , . . . , α l ) ∈ N l , and we denote z α := z α1 1 · · · z α l l , |α| := α 1 + . . . + α l . For an integer d > 0, denote the monomial power set N l d := {α ∈ N l : |α| ≤ d}. We use [z] d to denote the vector of all monomials in z whose degree is at most d, ordered in the graded alphabetical ordering. For instance, if z = (z 1 , z 2 ), then [z] 2 = (1, z 1 , z 2 , z 1 ², z 1 z 2 , z 2 ²). Throughout the paper, a property is said to hold generically if it holds for all points in the space of input data except a set of Lebesgue measure zero. A subset I ⊆ R[z] is an ideal if p · I ⊆ I for all p ∈ R[z] and I + I ⊆ I. For a tuple of polynomials q = (q 1 , . . . , q m ), the set Ideal[q] := q 1 · R[z] + . . . + q m · R[z] is the ideal generated by q, which is the smallest ideal containing each q i . We review basic concepts in polynomial optimization. A polynomial σ ∈ R[z] is said to be a sum of squares (SOS) if σ = p 1 ² + . . . + p k ² for some polynomials p 1 , . . . , p k ∈ R[z]. The set of all SOS polynomials in z is denoted as Σ[z]. For a degree d, we denote the truncation Σ[z] d := Σ[z] ∩ R[z] d . For a tuple g = (g 1 , . . . , g t ) of polynomials in z, its quadratic module is the set Qmod[g] := Σ[z] + g 1 · Σ[z] + . . . + g t · Σ[z]. Similarly, we denote the truncation Qmod[g] 2d := Σ[z] 2d + g 1 · Σ[z] 2d−deg(g 1 ) + . . . + g t · Σ[z] 2d−deg(g t ) . The tuple g determines the basic closed semi-algebraic set S(g) := {z ∈ R l : g 1 (z) ≥ 0, . . . , g t (z) ≥ 0}. For a tuple h = (h 1 , . . . , h s ) of polynomials in R[z], its real zero set is Z(h) := {z ∈ R l : h 1 (z) = . . . = h s (z) = 0}. If Ideal[h] + Qmod[g] is archimedean and f > 0 on Z(h) ∩ S(g), then f ∈ Ideal[h] + Qmod[g] [55]. Interestingly, if f ≥ 0 on Z(h) ∩ S(g), we also have f ∈ Ideal[h] + Qmod[g], under some standard optimality conditions [42].
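As an illustration of this notation, the monomial power set N l d and the vector [z] d can be generated programmatically. The following Python sketch is only an illustration (the ordering is assumed to be graded lexicographic, which we take as matching the "graded alphabetical ordering" above); it lists the exponents α with |α| ≤ d and evaluates [z] d at a numeric point:

```python
from itertools import product

def monomial_powers(l, d):
    """All exponent tuples a in N^l with |a| <= d, sorted first by the
    total degree |a|, then graded-lexicographically (z_1 before z_2)."""
    powers = [a for a in product(range(d + 1), repeat=l) if sum(a) <= d]
    return sorted(powers, key=lambda a: (sum(a), tuple(-x for x in a)))

def monomial_vector(z, d):
    """Evaluate the monomial vector [z]_d at a numeric point z."""
    vec = []
    for a in monomial_powers(len(z), d):
        m = 1.0
        for zi, ai in zip(z, a):
            m *= zi ** ai
        vec.append(m)
    return vec
```

For z = (z 1 , z 2 ) and d = 2, this produces the six monomials 1, z 1 , z 2 , z 1 ², z 1 z 2 , z 2 ², consistent with the example above.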

2.2. Localizing and moment matrices. Let R N l 2d denote the space of all real vectors that are labeled by α ∈ N l 2d . A vector y ∈ R N l 2d is labeled as y := (y α ) α∈N l 2d . For a polynomial f = Σ α f α z α ∈ R[z] 2d , the operation ⟨f, y⟩ := Σ α f α y α is a bilinear function in (f, y). For a polynomial q ∈ R[z], with deg(q) ≤ 2d, and the integer t = d − ⌈deg(q)/2⌉, the outer product q · [z] t ([z] t ) T is a symmetric matrix polynomial in z, with length equal to the binomial coefficient C(l + t, t). We write the expansion as q · [z] t ([z] t ) T = Σ α Q α z α for some symmetric matrices Q α . Then we define the matrix function L q (d) [y] := Σ α Q α y α . It is called the dth localizing matrix of q generated by y; for q = 1, it is the dth moment matrix, denoted M d (y). For given q, the matrix L q (d) [y] is linear in y. Localizing and moment matrices are important for getting semidefinite relaxations of polynomial optimization [31,40,41]. They are also useful for solving truncated moment problems [20,45] and tensor decompositions [46,47]. We refer to [33,34,36,37,39,44] for more references about polynomial optimization and moment problems.
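To make the construction concrete, here is a small Python sketch (an illustration under our own indexing conventions, not code from the paper) that builds the dth localizing matrix from a coefficient table of q and a table of moments y; entrywise it evaluates Σ α Q α y α by shifting moment indices:

```python
import math
from itertools import product

def monomial_powers(l, d):
    """Exponent tuples a in N^l with |a| <= d, in graded order."""
    pows = [a for a in product(range(d + 1), repeat=l) if sum(a) <= d]
    return sorted(pows, key=lambda a: (sum(a), tuple(-x for x in a)))

def localizing_matrix(q, y, l, d):
    """The dth localizing matrix of q generated by y.
    q: dict mapping exponent tuples to coefficients of q(z);
    y: dict mapping exponent tuples (|a| <= 2d) to the moments y_a.
    For q = {(0,...,0): 1.0} this gives the dth moment matrix M_d(y)."""
    deg_q = max(sum(a) for a in q)
    t = d - math.ceil(deg_q / 2)
    basis = monomial_powers(l, t)
    return [[sum(c * y[tuple(ai + bi + gi for ai, bi, gi in zip(a, b, g))]
                 for g, c in q.items())
             for b in basis]
            for a in basis]
```

For instance, with l = 1, d = 1 and moments of the point mass at z = 2 (so y a = 2^|a|), the moment matrix is [[1, 2], [2, 4]].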

2.3. Lagrange multiplier expressions. We study optimality conditions for Generalized Nash Equilibrium Problems. Consider the ith player's optimization. For convenience, suppose E i ∪ I i = [m i ] and g i := (g i,1 , . . . , g i,mi ). For a given x −i , under some suitable constraint qualifications (e.g., the linear independence constraint qualification (LICQ), the Mangasarian-Fromovitz constraint qualification (MFCQ), or Slater's condition; see [7]), if x i is a minimizer of F i (x −i ), then there exists a Lagrange multiplier vector λ i := (λ i,1 , . . . , λ i,mi ) such that ∇ xi f i (x) = λ i,1 ∇ xi g i,1 (x) + . . . + λ i,mi ∇ xi g i,mi (x), λ i,j · g i,j (x) = 0 and λ i,j ≥ 0 for j ∈ I i . This is called the first order Karush-Kuhn-Tucker (KKT) system for F i (x −i ). A point x satisfying (2.5) is called a KKT point for the GNEP. For convex GNEPs, each KKT point is a GNE [16, Theorem 4.6].
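For a numeric illustration of the KKT system, consider a toy convex instance of our own choosing (not an example from the paper): minimizing f(x) = ||x − c||² over the ball g(x) = 1 − ||x||² ≥ 0. At a candidate minimizer, the multiplier can be recovered from the stationarity condition by least squares, and complementarity and dual feasibility can then be verified:

```python
import numpy as np

# Toy convex instance (our own choice): f(x) = ||x - c||^2,
# constraint g(x) = 1 - ||x||^2 >= 0, with c outside the unit ball.
c = np.array([2.0, 0.0])
x = c / np.linalg.norm(c)       # candidate minimizer on the boundary

grad_f = 2.0 * (x - c)          # gradient of the objective at x
grad_g = -2.0 * x               # gradient of the constraint at x

# Recover the Lagrange multiplier from grad_f = lam * grad_g
# by least squares (a single column, so lam is a scalar).
lam = float(np.linalg.lstsq(grad_g.reshape(-1, 1), grad_f, rcond=None)[0][0])

g_val = 1.0 - x @ x
assert lam >= 0.0                          # dual feasibility: lam >= 0
assert abs(lam * g_val) < 1e-9             # complementarity: lam * g(x) = 0
assert np.allclose(grad_f, lam * grad_g)   # stationarity holds
```

Here the active constraint gives g(x) = 0, so complementarity holds with a strictly positive multiplier.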
If there exists a matrix polynomial L i (x) such that L i (x)G i (x) = I mi , then the Lagrange multipliers λ i can be expressed as λ i = λ i (x) := L i (x) · (∇ xi f i (x), 0, . . . , 0). The vector of polynomials λ i (x) := (λ i,1 (x), . . . , λ i,mi (x)) is called a polynomial expression for Lagrange multipliers [48], where λ i,j (x) is the jth component of λ i (x). For the second player, the matrix polynomial G 2 (x) is not nonsingular, and polynomial expressions do not exist. In Section 6, we give a rational expression for the second player's Lagrange multipliers.

Rational expressions for Lagrange multipliers
In Section 2.3, a polynomial expression for the ith player's Lagrange multipliers exists if and only if the matrix G i (x) is nonsingular. For classical NEPs of polynomials, the nonsingularity holds generically [48,50]. However, this is often not the case for GNEPs. Let g i = (g i,1 , . . . , g i,mi ) be the tuple of constraining polynomials in F i (x −i ) and G i (x) be the matrix polynomial as in (2.7). Suppose there exist a matrix polynomial L̃ i (x) and a nonzero scalar polynomial q i (x) such that L̃ i (x)G i (x) = q i (x) · I mi , and let λ̃ i (x) := L̃ i (x) · (∇ xi f i (x), 0, . . . , 0). Denote by λ̃ i,j (x) the jth entry of λ̃ i (x). Definition 3.1. For the ith player's optimization F i (x −i ), if there exist polynomials λ̃ i,1 , . . . , λ̃ i,mi and a nonzero polynomial q i such that q i (x) ≥ 0 for all x ∈ X, and λ̃ i,j (x) = q i (x)λ i,j holds for all critical pairs (x i , λ i ), then we call the tuple λ̃ i /q i := (λ̃ i,1 (x)/q i (x), . . . , λ̃ i,mi (x)/q i (x)) a rational expression for Lagrange multipliers.
The following is an example of a rational expression. Example 3.2. For x 1 = (0, 0) and x 2 = 2, the G 1 (x) is the zero vector. For x 1 = ( √ 3, 0) and x 2 = 1, rank(G 2 (x)) = 1. Neither G 1 (x) nor G 2 (x) is nonsingular, so there are no polynomial expressions for Lagrange multipliers. However, (3.1) holds, and the Lagrange multiplier expressions are given in (3.5). In Section 3.2, we show that if none of the g i,j is identically zero, then a rational expression for λ i always exists.
3.1. Optimality conditions and rational expressions. Suppose for each i, there exists a rational expression λ̃ i /q i for the ith player's Lagrange multiplier vector. Since q i (x)λ i,j = λ̃ i,j (x) and q i (x) ≥ 0 for all x ∈ X, the conditions (3.6) hold for all KKT points. Under some constraint qualifications, if x is a GNE, then it satisfies (3.6). For convex GNEPs, if x satisfies (3.6) and q i (x) > 0, then x must be a GNE, since it satisfies (2.5) with λ i,j given by λ i,j = λ̃ i,j (x)/q i (x). This leads us to consider the optimization problem (3.7). In the above, Θ is a generically chosen positive definite matrix. The following proposition is straightforward. Proposition 3.3. (i) If (3.7) is infeasible, then the GNEP has no KKT points. Therefore, if every GNE is a KKT point, then the infeasibility of (3.7) implies the nonexistence of GNEs. (ii) Assume the GNEP is convex. If u is a feasible point of (3.7) and q i (u) > 0 for all i ∈ [N ], then u must be a GNE.
In Proposition 3.3 (ii), if q i (u) = 0, then u may not be a GNE. The following is such an example.
We would like to remark that for some special GNEPs, the equality q i (u) = 0 may imply that u i is a minimizer of F i (u −i ). See Example 3.8 for such a case.

3.2. Existence of rational expressions. We study the existence of rational expressions with nonnegative q i (x). The following is a useful lemma.
Lemma 3.5. If none of the constraining polynomials g i,1 , . . . , g i,mi is identically zero, then a rational expression exists for λ i .
The rational expression in (3.8) may not be very practical, because the determinantal polynomials often have high degrees. In practice, one can often find rational expressions with low degrees. If each q i (x) > 0 for all x ∈ X, then every solution of (3.7) is a GNE. One wonders when a rational expression exists with q i (x) > 0 on X. The matrix polynomial G i is said to be nonsingular on X if G i (x) has full column rank for all x ∈ X. For the GNEP given in Example 3.2, both G 1 (x) and G 2 (x) are nonsingular on X. The following proposition is useful.
Remark. If G i (x) is nonsingular on X, then the LICQ must hold for the ith player's optimization. Furthermore, if this holds for all i ∈ [N ], then all GNEs are KKT points.
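For intuition on why such q i exist, consider the simplest case m i = 1, where G i (x) is a single column: taking L̃ i (x) = G i (x) T and q i (x) = G i (x) T G i (x) gives L̃ i (x)G i (x) = q i (x) · I, with q i a sum of squares, so q i ≥ 0 everywhere and q i (x) > 0 exactly where G i (x) ≠ 0, i.e., where G i is nonsingular. A quick randomized Python check of this identity, with a hypothetical column G(x) of our own choosing:

```python
import random

# Hypothetical single column G(x) (the case m_i = 1); the entries are a
# toy choice, e.g. as could come from a constraint g(x) = 1 - x1^2 - x2^2.
def G(x):
    return [-2.0 * x[0], -2.0 * x[1], 1.0 - x[0] ** 2 - x[1] ** 2]

def L_tilde(x):
    """L~(x) := G(x)^T, a row vector."""
    return G(x)

def q(x):
    """q(x) := G(x)^T G(x), a sum of squares, hence q(x) >= 0 everywhere."""
    return sum(v * v for v in G(x))

# Randomized check of the identity  L~(x) G(x) = q(x) * I_1.
random.seed(0)
for _ in range(100):
    x = [random.uniform(-2.0, 2.0), random.uniform(-2.0, 2.0)]
    lhs = sum(a * b for a, b in zip(L_tilde(x), G(x)))
    assert abs(lhs - q(x)) < 1e-9 and q(x) >= 0.0
```

For m i > 1 the analogous adjoint-type constructions exist but have higher degrees, which is why the numerical method below searches for low-degree expressions.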

3.3. A numerical method for finding rational expressions. We give a numerical method for finding rational expressions for Lagrange multipliers. It was introduced in [51] for solving bilevel optimization problems. Let G i (x) be the matrix polynomial defined in (2.6). For an a priori chosen degree d, consider the following linear convex optimization (3.9). In the above, the first equality is the same as (3.1). The second equality ensures that q i is not identically zero, where v is an a priori chosen point in X. The third constraint ensures that q i (x) − γ ≥ 0 for all x ∈ X. Therefore, if the maximum γ is positive, then q i (x) > 0 on X. By Lemma 3.5, one can always find a feasible γ ≥ 0 satisfying (3.9), for some d ≤ deg(H(x)), if none of the g i,j (x) is identically zero. By Proposition 3.6, if each G i (x) is nonsingular on X and the archimedeanness holds for X, then there must exist γ > 0 satisfying (3.9) for some d.
If (L̃ i (x), q i (x), γ i ) is a feasible point of (3.9), then one can get a rational expression for Lagrange multipliers as in Definition 3.1. Example 3.7. Consider the GNEP in Example 3.2. Let v := (0, 0, 1) for both players, and γ 1 = 1, γ 2 = 1/2. Then (L̃ i (x), q i (x), γ i ) is a feasible point of (3.11), for each i = 1, 2. The rational expressions for Lagrange multipliers are given by (3.5).
This implies x i = (0, 0, 0) is the only feasible point of the ith player's optimization and hence it is the minimizer. Therefore, each feasible point of (3.7) is a GNE.
One can solve (3.9) numerically to get rational expressions. This is done in Example 6.6.

Parametric expressions for Lagrange multipliers
For some GNEPs, it may be difficult to find convenient rational expressions for Lagrange multipliers. Sometimes, the denominators may have high degrees. This is the case especially when m i > n i . If some q i has a high degree, the polynomial optimization (3.6) also has a high degree, which makes the resulting moment SDP relaxations (see Subsections 5.1 and 5.2) difficult to solve. To fix such issues, we introduce parametric expressions for Lagrange multipliers.
The following is an example of parametric expressions.
The Lagrange multipliers can be expressed in terms of a parameter ω i . Parametric expressions are quite useful for solving GNEPs. The following are some useful cases.
(i) Suppose the ith player's optimization F i (x −i ) contains the nonnegative constraints. (iv) Suppose the ith player's optimization F i (x −i ) contains linear constraints, where each b j is a polynomial in x −i . Let A := [a 1 · · · a r ] T and assume rank A = r. If we let s i := m i − r, then a parametric expression exists. (v) Suppose there exists a labeling subset T i . We would like to remark that parametric expressions for Lagrange multipliers always exist. For instance, one can get a parametric expression by letting ω i,j = λ i,j for all j. Such an expression is called a trivial parametric expression. However, it is preferable to have a small s i , to save computational costs. The optimality conditions (2.5) can be equivalently expressed as (4.5). For convex GNEPs, a point x is a GNE if and only if there exists ω := (ω 1 , . . . , ω N ) such that x satisfies (4.5). Therefore, we consider the optimization (4.6). In the above, the Θ is a generically chosen positive definite matrix. The following proposition is straightforward. Proposition 4.3. (i) If (4.6) is infeasible, then the GNEP has no KKT points. If every GNE is a KKT point, then the infeasibility of (4.6) implies nonexistence of GNEs. (ii) Assume the GNEP is convex. If (u, w) is a feasible point of (4.6), then u is a GNE.

The polynomial optimization reformulation
In this section, we give an algorithm for solving convex GNEPs. We assume each λ i has either a rational or a parametric expression, as in Definition 3.1 or 4.1. If λ i has a polynomial or parametric expression, we let q i (x) := 1. If λ i has a polynomial or rational expression, we let s i := 0. Recall the notation. Choose a generic positive definite matrix Θ. Then solve the following polynomial optimization (5.1). If (5.1) is infeasible, then there are no KKT points. Since Θ is positive definite, if (5.1) is feasible, then it must have a minimizer, say, (u, w) ∈ X × R s . For convex GNEPs, if q i (u) > 0 for all i, then u must be a GNE. If q i (u) ≤ 0 for some i, then u may or may not be a GNE. To check this, we solve the following optimization problem (5.2) for those i with q i (u) ≤ 0. This is a polynomial optimization in x i . Since u ∈ X, the point u i is feasible for (5.2), so δ i ≤ 0. If δ i ≥ 0 for all such i, then u must be a GNE. The following is an algorithm for solving the GNEP.
Algorithm 5.1. For the convex GNEP given by (1.1), do the following: Step 0: Choose a generic positive definite matrix Θ of length n + s + 1.
Step 1: Solve the polynomial optimization (5.1). If it is infeasible, then there are no KKT points and stop; otherwise, solve it for a minimizer (u, w).
Step 2: If all q i (u) > 0, then u is a GNE. Otherwise, for those i with q i (u) ≤ 0, solve the optimization (5.2) for the minimum value δ i . If δ i ≥ 0 for all such i, then u is a GNE; otherwise, it is not. In Step 0, we can choose Θ = R T R for a randomly generated square matrix R of length n + s + 1. When Θ is a generic positive definite matrix, the optimization (5.1) has a unique minimizer if its feasible set is nonempty. This is shown in Theorem 5.4(ii). Since the objective f i (x i , u −i ) is assumed to be convex in x i , if it is bounded from below on X i (u −i ), then (5.2) must have a minimizer (see [6, Theorem 3]). In applications, we are mostly interested in the case that (5.2) has a minimizer, for the existence of a GNE. In Subsections 5.1 and 5.2, we will discuss how to solve the polynomial optimization problems in Algorithm 5.1 by the Moment-SOS hierarchy of semidefinite relaxations. The convergence of Algorithm 5.1 is shown as follows. Theorem 5.2. (i) If (u, w) is a feasible point of (5.1) such that q i (u) > 0 for all i, then u is a GNE. (ii) Assume every GNE is a KKT point. If (5.1) is infeasible, then the GNEP has no GNEs. If Θ is positive definite and q i (x) > 0 for every feasible x of (5.1) and every i, then Algorithm 5.1 will find a GNE if one exists.
Proof. (i) This is directly implied by Propositions 3.3 and 4.3.
(ii) If (5.1) is infeasible, then there is no GNE, because every GNE is assumed to be a KKT point and it must be feasible for (5.1). Next, assume (5.1) is feasible. Since Θ is positive definite, the optimization (5.1) has a minimizer, say, (u, w). By the given assumption, we have q i (u) > 0 for all i. So u is a GNE, by (i).
Remark. For convex GNEPs, one can choose not to use nontrivial expressions for Lagrange multipliers, i.e., one can solve the polynomial optimization (5.1) with s i = m i and λ i,j = ω i,j for all i and j. By doing this, we get an algorithm like Algorithm 5.1 for computing GNEs. However, this approach is usually much less efficient computationally, because it introduces more variables into the polynomial optimization (5.1). Note that when Lagrange multiplier expressions (LMEs) are not used, each Lagrange multiplier is treated as a new variable. Moreover, solving (5.1) without LMEs may require higher order Moment-SOS relaxations. In Example 6.1(i-ii), we compare the performance of Algorithm 5.1 with and without LMEs. The computational results show the advantage of using them.
In Theorem 5.2(ii), if q i (x) > 0 for all x ∈ X, then we must have q i (x) > 0 for all feasible x of (5.1). Suppose (u, w) is a computed minimizer of (5.1). If u is not a GNE, i.e., δ i < 0 for some i, let 𝒩 ⊆ [N ] be the set of labels i with δ i < 0. By Theorem 5.2, we know q i (u) = 0 for all i ∈ 𝒩. For an a priori chosen small ε > 0, we can add the inequalities q i (x) ≥ ε (i ∈ 𝒩) to the optimization (5.1), to exclude u from the feasible set. Then we solve the new optimization (5.3). If ε is not small enough, the constraint q i (x) ≥ ε may also exclude some GNEs. If the new optimization (5.3) is infeasible, one can heuristically get a candidate GNE by choosing a different generic positive definite Θ in (5.1). In computational practice, when a GNE exists, it is very likely that we can get one by doing this. However, detecting nonexistence of GNEs when (5.1) is feasible is theoretically difficult; to the best of the authors' knowledge, this problem is mostly open.
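The decision logic of Algorithm 5.1 can be sketched as follows; the solver callbacks here are hypothetical stand-ins for the Moment-SOS relaxations of (5.1) and (5.2), not an actual implementation:

```python
def run_algorithm_5_1(solve_master, q, player_min, N, tol=1e-6):
    """Sketch of Algorithm 5.1's decision logic (hypothetical callbacks).
    solve_master(): a minimizer (u, w) of (5.1), or None if infeasible;
    q(i, u): the value q_i(u);
    player_min(i, u): the optimal value delta_i of (5.2)."""
    res = solve_master()                    # Step 1: solve (5.1)
    if res is None:
        return ("no KKT points", None)      # infeasible => no KKT points
    u, w = res
    suspects = [i for i in range(N) if q(i, u) <= 0]
    if not suspects:                        # Step 2: all q_i(u) > 0
        return ("GNE", u)
    if all(player_min(i, u) >= -tol for i in suspects):
        return ("GNE", u)                   # delta_i >= 0 (up to round-off)
    return ("not a GNE", u)

# Toy stubs: the master problem returns a point where q_1 > 0.
status, u = run_algorithm_5_1(lambda: ((0.5,), ()), lambda i, u: 1.0,
                              lambda i, u: 0.0, N=1)
assert status == "GNE"
```

The tolerance mirrors the numerical criterion δ i ≥ −10⁻⁶ used for accepting computed GNEs in Section 6.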

5.1. The optimization for all players. We discuss how to solve the polynomial optimization problems in Algorithm 5.1 by using the Moment-SOS hierarchy of semidefinite relaxations [31,33,34,36,37]. We refer to the notation in Subsections 2.1 and 2.2.
First, we discuss how to solve the optimization (5.1). Denote the polynomial tuples For notational convenience, for a vector p = (p 1 , . . . , p s ), the set {p} stands for {p 1 , . . . , p s }, in the above. Denote the unions They are both finite sets of polynomials. Then, the optimization (5.1) can be equivalently written as Denote the degree d 0 := max{⌈deg(p)/2⌉ : p ∈ Φ ∪ Ψ}.
For a degree k ≥ d 0 , consider the kth order moment relaxation for solving (5.6) Its dual optimization problem is the kth order SOS relaxation For relaxation orders k = d 0 , d 0 + 1, . . ., we get the Moment-SOS hierarchy of semidefinite relaxations (5.7)-(5.8). This produces the following algorithm for solving the polynomial optimization problem (5.6).
Algorithm 5.3. Step 1: Solve the semidefinite relaxation (5.7). If it is infeasible, then (5.6) has no feasible points and stop; otherwise, solve it for a minimizer y * .
In Step 2, e i denotes the labeling vector such that its ith entry is 1 while all other entries are 0. For instance, when n = s = 2, y e3 = y 0010 . The optimization (5.7) is a relaxation of (5.6). This is because if x is a feasible point of (5.6), then y = [x] 2k must be feasible for (5.7). Hence, if (5.7) is infeasible, then (5.6) must be infeasible, which also implies the nonexistence of KKT points. Moreover, the optimal value ϑ k of (5.7) is a lower bound for the minimum value of (5.6), i.e., ϑ k ≤ θ(x) for all x that is feasible for (5.6). In Step 2, if u is feasible for (5.6) and ϑ k = θ(u), then u must be a minimizer of (5.6). Algorithm 5.3 can be implemented in GloptiPoly [26]. The convergence of Algorithm 5.3 is shown as follows. (i) If (5.6) is infeasible and Ideal[Φ] + Qmod[Ψ] is archimedean, then −1 ∈ Ideal[Φ] 2k + Qmod[Ψ] 2k for all k big enough [55]. For such a big k, the SOS relaxation (5.8) is unbounded from above, hence the moment relaxation (5.7) must be infeasible.
(ii) When the optimization (5.6) is feasible, it must have a unique minimizer, say, x * . To see this, let θ be defined as in (5.6), let K be the feasible set of (5.6), and let R 2 (K) be the set of tms's in R N n 2 admitting K-representing measures. Consider the linear conic optimization problem (5.9) min ⟨θ, y⟩ s.t. y 0 = 1, y ∈ R 2 (K).
If Θ is generic in the cone of positive definite matrices, the objective ⟨θ, y⟩ is a generic linear function in y. By [43, Proposition 5.2], the optimization (5.9) has a unique minimizer. The minimum value of (5.9) is equal to ϑ min . Therefore, (5.6) has a unique minimizer when Θ is generic. The convergence of u (k) to x * is shown in [57] or [40, Theorem 3.3]. For the special case that Φ(x) = 0 has finitely many real solutions, the point u (k) must equal x * when k is large enough. This is shown in [35] (also see [41]).
The archimedeanness of the set Ideal[Φ] + Qmod[Ψ] essentially requires that the feasible set of (5.6) is compact. The archimedeanness is sufficient but not necessary for the convergence of Algorithm 5.3. Even if the archimedeanness fails to hold, Algorithm 5.3 is still applicable for solving (5.1). If the point u (k) is feasible and ϑ k = θ(u (k) ), then u (k) must be a minimizer of (5.1), regardless of whether the archimedeanness holds or not. Moreover, without the archimedeanness, the infeasibility of (5.7) still implies that (5.1) is infeasible. In our computational practice, Algorithm 5.3 almost always has finite convergence.
The polynomial optimization (5.3) can be solved in the same way by the Moment-SOS hierarchy of semidefinite relaxations. The convergence property is the same. For neatness of the paper, we omit the details.

5.2. Checking Generalized Nash Equilibria. Suppose (u, w) ∈ R n × R s is a minimizer of (5.1). For convex GNEPPs, if all q i (u) > 0, then u is a GNE, by Theorem 5.2(i). If q i (u) ≤ 0 for some i, we need to solve the optimization (5.2) to check whether u = (u i , u −i ) is a GNE or not. Note that (5.2) is a convex polynomial optimization problem in x i . For given u −i , if it is bounded from below, then (5.2) achieves its optimal value at a minimizer.
Consider the ith player's optimization with q i (u) ≤ 0. Under some suitable constraint qualifications (e.g., Slater's condition), when (5.2) has a minimizer, it is equivalent to (5.12). Denote by d i the degree, in the variables x i , of its constraining polynomials, as in (5.13). For a degree k ≥ d i , the kth order moment relaxation for (5.12) is (5.14). The dual optimization problem of (5.14) is the kth order SOS relaxation (5.15). By solving the above relaxations for k = d i , d i + 1, . . ., we get the Moment-SOS hierarchy of relaxations (5.14)-(5.15). This gives the following algorithm.
Algorithm 5.5. Step 1: Solve the moment relaxation (5.14) for the minimum value η i (k) and a minimizer y * . If η i (k) ≥ 0, then η i = 0 and stop; otherwise, go to the next step.
Step 2: Let t := d i as in (5.13). If y * satisfies the rank condition then extract a set U i of r := rank M t (y * ) minimizers for (5.12) and stop.
Step 3: If (5.16) fails to hold and t < k, let t := t + 1 and then go to Step 2; otherwise, let k := k + 1 and go to Step 1.
We would like to remark that the optimization (5.12) is always feasible, because u i is a feasible point, since u is a minimizer of (5.1). The moment relaxation (5.14) is also feasible. Because η i (k) is a lower bound for η i and η i ≤ 0, if η i (k) ≥ 0, then η i must be 0. In Step 2, the rank condition (5.16) is called flat truncation [40]. It is a sufficient (and almost necessary) condition for checking convergence of moment relaxations. When (5.16) holds, the method in [25] can be used to extract r minimizers for (5.12). Algorithm 5.5 can also be implemented in GloptiPoly [26]. It is interesting to remark that if Ideal[Φ] + Qmod[Ψ] is archimedean, then the set I 1 + I 2 must also be archimedean. Furthermore, we have the following convergence theorem for Algorithm 5.5.

Remark. To check the flat truncation (5.16), we need to evaluate the ranks of M t [y * ] and M t−di [y * ]. Evaluating matrix ranks is a classical problem in numerical linear algebra. When a matrix is nearly singular, it may be difficult to determine its rank accurately, due to round-off errors. In computational practice, we often determine the rank of a matrix as the number of its singular values larger than a tolerance (say, 10 −6 ). We refer to [10] for determining matrix ranks numerically. Moreover, when (5.12) has a unique minimizer, the ranks of M t [y * ] and M t−di [y * ] are one, so the flat truncation (5.16) is relatively easy to check by looking at the largest singular value.
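Following the remark above, a numerical rank routine can be as simple as counting singular values above a tolerance. The sketch below uses numpy with a tolerance relative to the largest singular value (one common convention; an absolute threshold like 10⁻⁶ works similarly):

```python
import numpy as np

def numerical_rank(A, tol=1e-6):
    """Rank of A computed as the number of singular values exceeding a
    tolerance relative to the largest singular value."""
    s = np.linalg.svd(np.asarray(A, dtype=float), compute_uv=False)
    if s.size == 0 or s[0] == 0.0:
        return 0
    return int(np.sum(s > tol * s[0]))

# A rank-1 moment-matrix-like example: v v^T plus tiny round-off noise.
v = np.array([1.0, 0.5, 0.25])
M = np.outer(v, v) + 1e-10 * np.eye(3)
assert numerical_rank(M) == 1   # exact rank of the noisy M is 3,
                                # but its numerical rank is 1
```

This is exactly the situation when (5.12) has a unique minimizer: the moment matrices are numerically rank-one despite round-off perturbations.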
Theorem 5.6. For the convex polynomial optimization (5.2), assume its optimal value is achieved at a KKT point. If either one of the following conditions holds: (i) the set I 1 + I 2 is archimedean, and the Hessian ∇ 2 xi ζ i (x * i , u −i ) ≻ 0 for a minimizer x * i of (5.12); or (ii) the real zero set of the polynomials in H i (u) is finite, then Algorithm 5.5 must terminate within finitely many loops.
Proof. Since its optimal value is achieved at a KKT point, the optimization problem (5.2) is equivalent to (5.12).
(i) If I 1 + I 2 is archimedean and x * i is a minimizer of (5.12), then ζ i (x i ) − η i ∈ I 1 + I 2 , by [30, Corollary 3.3]. Hence the SOS relaxation (5.15) achieves the optimal value η i for all k big enough. Therefore, Algorithm 5.5 must terminate within finitely many loops, by the duality theory. Remark. If the objective polynomial in (5.2) is SOS-convex and its constraining ones are SOS-concave (see [24] for the definition of SOS-convex polynomials), then Algorithm 5.5 must terminate in the first loop (see [32]). If the optimal value of (5.2) is not achieved at a KKT point, the classical Moment-SOS hierarchy of semidefinite relaxations can be used to solve it. We refer to [30-34, 36, 37] for work on solving general polynomial optimization.

Numerical experiments
In this section, we apply Algorithm 5.1 to solve convex GNEPs. To use it, we need Lagrange multiplier expressions. This can be done as follows.
• When polynomial expressions exist, we always use them. In particular, we use polynomial expressions for the first player of the GNEP given by (1.4), the first player in Example 6.1(ii), the third player in Examples 3.4 and 6.7(i-ii), the production unit and market players in Example 6.9. • We use rational expressions for all players in Examples 6.3, 6.4 and 6.6.
Moreover, rational expressions are used for the second player of the GNEP given by (1.4), the first two players in Examples 3.4 and 6.7(i-ii), and the consumer players in Example 6.9. For Example 6.6, the rational expression is obtained by solving (3.9) numerically. • When it is difficult to find convenient polynomial or rational expressions, we use parametric expressions for Lagrange multipliers. For all players in Examples 6.5, 6.8, we use parametric expressions.
We apply the software GloptiPoly 3 [26] and SeDuMi [58] to solve the Moment-SOS relaxations for the polynomial optimization (5.6) and (5.12). We use the software YALMIP for solving (3.9). The computation is implemented in an Alienware Aurora R8 desktop, with an Intel ® Core(TM) i7-9700 CPU at 3.00GHz×8 and 16GB of RAM, in a Windows 10 operating system. For neatness of the paper, only four decimal digits are shown for the computational results.
In Step 2 of Algorithm 5.1, if the optimal value δ i ≥ 0 for each i such that q i (u) ≤ 0, then the computed minimizer of (5.1) is a GNE. In numerical computations, we may not have δ i ≥ 0 exactly, due to round-off errors. Typically, when δ i is near zero, say, δ i ≥ −10 −6 , we regard the computed solution as an accurate GNE. In the following, all the GNEPs are convex. Example 6.1. (i) For the GNEP given by (1.4), the first player has a polynomial expression for Lagrange multipliers given by (2.8), and the second player has a rational expression given as follows. For each i, the q i (x) > 0 for all x ∈ X. We ran Algorithm 5.1 and obtained the GNE u = (u 1 , u 2 ). It took around 2.83 seconds.
(ii) When the first player's objective is changed, the GNEP has no GNE, and this is detected by Algorithm 5.1. It took around 70.31 seconds to detect the nonexistence. The matrix polynomials G_1(x) and G_2(x) are nonsingular on X, so all GNEs must be KKT points if they exist.
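The acceptance rule in Step 2, which treats values like δ_i ≥ −10^{−6} as nonnegative up to round-off, is a simple thresholded check. A minimal sketch (the helper name and the tolerance constant are our own, matching the rule stated above):

```python
# Sketch of the Step 2 acceptance test with a round-off tolerance.
# delta[i] plays the role of the optimal value delta_i for player i.
TOL = 1e-6

def accept_as_gne(delta, tol=TOL):
    """Accept the computed minimizer as a GNE when every delta_i
    is nonnegative up to round-off, i.e., delta_i >= -tol."""
    return all(d >= -tol for d in delta)

print(accept_as_gne([3.2e-9, -4.7e-8, 0.0]))  # True: only round-off noise
print(accept_as_gne([-1e-3, 0.2]))            # False: delta_1 is clearly negative
```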
In the following, we compare the performance of Algorithm 5.1 with that of solving the optimization (5.1) without Lagrange multiplier expressions, i.e., with each Lagrange multiplier treated as a new polynomial variable. The comparison for Example 6.1(i) is given in Table 1. The results for the method using Lagrange multiplier expressions (i.e., for Algorithm 5.1) are given in the column labeled "Algorithm 5.1", and the results for the method without Lagrange multiplier expressions are given in the column labeled "Without LME". In the rows, k is the relaxation order for solving (5.1). The subcolumn "time" lists the time consumed (in seconds) for solving the moment relaxation of order k, and the subcolumn "GNE" shows whether or not a GNE is obtained. For Algorithm 5.1 with k = 2, the relaxation degree is less than the degree of some appearing polynomials, so we mark this case "not applicable (n.a.)". For Example 6.1(ii), the comparison is given in Table 2. This GNEP does not have a GNE. When no LMEs are used, the 4th order moment relaxation cannot be solved because it runs out of memory; with nontrivial LMEs, it can be solved.
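The "n.a." entries reflect the standard requirement on the Moment-SOS hierarchy that the relaxation order k satisfy 2k ≥ deg(p) for every polynomial p appearing in the problem. A small helper illustrating the smallest admissible order (our own sketch, not part of Algorithm 5.1):

```python
import math

def min_relaxation_order(degrees):
    """Smallest relaxation order k with 2k >= deg(p) for all appearing
    polynomials, i.e., k = max over p of ceil(deg(p) / 2)."""
    return max(math.ceil(d / 2) for d in degrees)

# If some appearing polynomial has degree 6, any order k <= 2 is not applicable:
print(min_relaxation_order([2, 4, 6]))  # 3
```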
The rational expressions for both players are given by (3.5). For each i, we have q_i(x) > 0 for all x ∈ X. We ran Algorithm 5.1 and got the GNE u = (u_1, u_2) with u_1 ≈ (0.4897, 1.0259), u_2 ≈ 0.7077. It took around 0.20 seconds.
It took around 0.41 seconds to solve (3.9) for both players, and 6.40 seconds to find the GNE. For neatness of the paper, we do not display the Lagrange multiplier expressions obtained by solving (3.9).
It took around 3.23 seconds.
Example 6.9. Consider the GNEP based on the Arrow–Debreu model of a competitive economy [4,17]. The first N_1 players are consumers, the next N_2 players are production units, and the last player is the market, so N = N_1 + N_2 + 1.
In this GNEP, each player has the same number of variables, n_1 = · · · = n_N. Let Q_i ∈ R^{n_i×n_i}, b_i ∈ R^{n_i}, ξ_i ∈ R^{n_i}_+ and a_{i,k} ∈ R_+ be parameters. The players' optimization problems are stated for the consumers (the ith player, i ∈ [N_1]), the production units (the ith player, i = N_1 + 1, . . . , N_1 + N_2), and the market (the Nth player). For each i ∈ [N_1], the Lagrange multipliers have rational expressions that are valid for all x ∈ X. For each i = N_1 + 1, . . . , N_1 + N_2, the ith player (a production unit) has polynomial expressions with λ_{i,j} = ∂f_i/∂x_{i,j} + 2x_{i,j} · λ_{i,1} (j = 1, . . . , n_i).
For the last player (the market), we substitute x_{N,n_i} by 1 − Σ_{j=1}^{n_i−1} x_{N,j}; then the constraints become 1 − Σ_{j=1}^{n_i−1} x_{N,j} ≥ 0, x_{N,1} ≥ 0, . . . , x_{N,n_i−1} ≥ 0. For each i = 1, . . . , N_1, the parameters are given for the cases n_i = 2 and n_i = 3. The numerical results are presented in Table 3. There, N is the total number of players, N_1 and N_2 are the numbers of consumers and production units respectively, n (resp., n_i) is the dimension of x (resp., x_i), u is the GNE obtained by Algorithm 5.1, q(u) gives the value of the denominator vector q(u) := (q_1(u), . . . , q_{N_1}(u)), and "time" shows the consumed time (in seconds).
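The substitution for the market player above eliminates one variable through the (implicit) normalization Σ_j x_{N,j} = 1, our reading of the model. A small sketch of this variable elimination and its inverse:

```python
# Variable elimination for the market player (sketch; assumes the market
# strategy lies on the simplex: x_{N,j} >= 0 and sum_j x_{N,j} = 1).

def lift(x_reduced):
    """Recover the full point via x_{N,n_i} = 1 - sum of the rest."""
    return list(x_reduced) + [1 - sum(x_reduced)]

def reduced_feasible(x_reduced):
    """The reduced constraints: 1 - sum >= 0 and each coordinate >= 0."""
    return 1 - sum(x_reduced) >= 0 and all(t >= 0 for t in x_reduced)

x = [0.25, 0.25]
print(lift(x))              # [0.25, 0.25, 0.5]
print(reduced_feasible(x))  # True
```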
6.1. Comparison with other methods. We compare our method (i.e., Algorithm 5.1) with some classical methods for solving convex GNEPPs: the two-step method in [22] based on the quasi-variational inequality (QVI) formulation, the penalty method in [17], the exact version of the interior point method based on the KKT system in [12], and the Augmented-Lagrangian method in [29]. All examples in Section 6 are tested for comparison. For Example 6.9, we test the case N_1 = N_2 = 1, n_i = 3. For these methods, we use the following stopping criterion: each time we get a new iterate u, if its feasibility violation ξ < 10^{−6}, then we compute the accuracy parameter δ; if δ < 10^{−6}, then we stop the iteration. The parameters of these classical methods are the same as given in [12,17,22,27,29]. When implementing the QVI method, we use Moment-SOS relaxations to find projections onto given sets (the maximum number of iterations for line search is set to 100). For the penalty method, the MATLAB function fsolve is used to implement the Levenberg-Marquardt algorithm for solving all equations involved (the maximum number of iterations is set to 100). The full penalization is used when we implement the Augmented-Lagrangian method, and a Levenberg-Marquardt type method (see [29, Algorithm 24]) is exploited to solve the penalized subproblems. We set the maximum number of iterations to 1000 for the QVI method, the maximum number of outer iterations to 1000 for the penalty method and the Augmented-Lagrangian method, and the maximum number of iterations to 10,000 for the interior point method. For initial points, we use (1, 0, 0, 1, 0, 0) for Example 6.1(i-ii), (0, 0, 0, 0, 0, 0, 0, 0, 1) for Example 6.9, and the zero vector for the other GNEPs. If the maximum number of iterations is reached but the stopping criterion is not met, we still solve (5.2) to check whether the latest iterate is a GNE.
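The two-stage stopping criterion above can be sketched as follows; `xi` and `delta_fn` stand in for the feasibility violation and accuracy computations, which depend on the particular method (the helper itself is hypothetical):

```python
TOL = 1e-6  # threshold used for both feasibility and accuracy

def should_stop(xi, delta_fn, tol=TOL):
    """Stop when the iterate is (nearly) feasible and accurate.

    The accuracy parameter delta is computed only when the feasibility
    violation xi is already below the tolerance, mirroring the text."""
    if xi >= tol:
        return False
    return delta_fn() < tol

print(should_stop(1e-8, lambda: 5e-7))  # True: feasible and accurate iterate
print(should_stop(1e-3, lambda: 0.0))   # False: infeasible, delta not checked
```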
The numerical results are presented in Table 4, and the comparison is summarized in the following.
(1) The QVI method failed to find a GNE for Example 6.1(i), because the projection set in Step 2 is empty, so the line search could not finish (see [22, Algorithm 4.1]). This is also the case for Examples 6.1(ii) and 6.7(ii), for which GNEs do not exist. For Examples 6.2 and 6.4, the sequence generated by the QVI method alternates between several points, none of which is a GNE. For Example 6.8, the sequence does not converge.
(2) The penalty method failed to find a GNE for Examples 6.1(i) and 6.6, because the equation F_{ε_k}(x) = 0 cannot be solved for some k (see [17, Algorithm 3.3]). This is also the case for Examples 6.1(ii) and 6.7(ii), for which GNEs do not exist.
(3) The interior point method failed to find a GNE for Examples 6.1(i), 6.1(ii) and 6.7(ii), because the step length becomes too small to efficiently decrease the violation of the KKT conditions. Note that for Examples 6.1(ii) and 6.7(ii), GNEs do not exist, so the Newton type directions usually do not satisfy the sufficient descent conditions.
(4) The Augmented-Lagrangian method failed to find a GNE for Example 6.1(i), because the maximum penalty parameter (10^{12}) is reached before a GNE is obtained. This is also the case for Example 6.1(ii), for which GNEs do not exist. For Examples 6.2, 6.4, 6.6, 6.7(ii) and 6.9, the Augmented-Lagrangian method failed to find a GNE, because the penalized subproblems cannot be efficiently solved.

Conclusions and Discussions
This paper studies convex GNEPs given by polynomials. Rational and parametric expressions for Lagrange multipliers are used. Based on these expressions, Algorithm 5.1 is proposed for computing a GNE. The Moment-SOS hierarchy of semidefinite relaxations is used to solve the appearing polynomial optimization problems. Under some general assumptions, we show that Algorithm 5.1 can find a GNE if one exists, or detect nonexistence of GNEs if there is none.
For future work, it is interesting to solve nonconvex GNEPPs. Under some constraint qualifications, the KKT system (2.5) is necessary but not sufficient for GNEs, so a solution u of (2.5) may not be a GNE for nonconvex GNEPPs. If u is not a GNE, one needs an efficient method to obtain a different candidate. Such a method is proposed for solving NEPs in [50], but it is not clear how to generalize it to GNEPs. When the point u is not a GNE, how can we exclude it and find a better candidate? When (5.1) is feasible, how do we detect nonexistence of GNEs? These questions are mostly open, to the best of the authors' knowledge.