Semidefinite games

We introduce and study the class of semidefinite games, which generalizes bimatrix games and finite $N$-person games, by replacing the simplex of the mixed strategies for each player by a slice of the positive semidefinite cone in the space of real symmetric matrices. For semidefinite two-player zero-sum games, we show that the optimal strategies can be computed by semidefinite programming. Furthermore, we show that two-player semidefinite zero-sum games are almost equivalent to semidefinite programming, generalizing Dantzig's result on the almost equivalence of bimatrix games and linear programming. For general two-player semidefinite games, we prove a spectrahedral characterization of the Nash equilibria. Moreover, we give constructions of semidefinite games with many Nash equilibria. In particular, we give a construction of semidefinite games whose number of connected components of Nash equilibria exceeds the long-standing best known construction for many Nash equilibria in bimatrix games, which was presented by von Stengel in 1999.


Introduction
In the fundamental model of a bimatrix game in game theory, the spaces of the mixed strategies are given by the (two) simplices $\Delta_1 = \{x \in \mathbb{R}^m : x \ge 0, \ \sum_{i} x_i = 1\}$ and $\Delta_2 = \{y \in \mathbb{R}^n : y \ge 0, \ \sum_{j} y_j = 1\}$. The payoffs of the two players, $p_A$ and $p_B$, are given by two matrices $A, B \in \mathbb{R}^{m \times n}$, that is, $p_A(x,y) = \sum_{i,j} x_i A_{ij} y_j$ and $p_B(x,y) = \sum_{i,j} x_i B_{ij} y_j$.
In the zero-sum case $B = -A$, optimal strategies do exist and can be characterized by linear programming. Moreover, by Dantzig's classical result [11], zero-sum matrix games and linear programming are almost equivalent; see Adler [1] and von Stengel [51] for a detailed treatment of the situation when Dantzig's reduction is not applicable. Bimatrix games can be seen as a special case of broader classes of games (such as convex games, see [18], and separable games, see [45]), which, with increasing generality, are less accessible from the combinatorial and computational viewpoint. We introduce and study a natural semidefinite generalization of bimatrix games (and of finite $N$-person games), in which the strategy spaces are not simplices but slices of the positive semidefinite cone; that is,
$$\mathcal{X} = \{X \in \mathcal{S}^m : X \succeq 0 \text{ and } \operatorname{tr}(X) = 1\} \quad\text{and}\quad \mathcal{Y} = \{Y \in \mathcal{S}^n : Y \succeq 0 \text{ and } \operatorname{tr}(Y) = 1\},$$
where $\mathcal{S}^m$ denotes the set of real symmetric $m \times m$-matrices, "$\succeq 0$" denotes positive semidefiniteness of a matrix and $\operatorname{tr}$ abbreviates the trace. The payoff functions are
$$p_A(X,Y) = \sum_{i,j,k,l} X_{ij} A_{ijkl} Y_{kl} \quad\text{and}\quad p_B(X,Y) = \sum_{i,j,k,l} X_{ij} B_{ijkl} Y_{kl},$$
where $A$ and $B$ are tensors in the bisymmetric space $\mathcal{S}^m \times \mathcal{S}^n$. That is, $A$ satisfies the symmetry relations $A_{ijkl} = A_{jikl}$ and $A_{ijkl} = A_{ijlk}$; analogous symmetry relations hold for $B$. If $X$ and $Y$ are restricted to be diagonal matrices, the semidefinite games specialize to bimatrix games. Similarly, if $A_{ijkl} = B_{ijkl} = 0$ whenever $i \neq j$ or $k \neq l$, then the off-diagonal entries of $X$ and $Y$ do not have an influence on the payoffs and the game is a special case of a bimatrix game.
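As a small numerical sketch of this specialization (the helper names and the tensor layout as a numpy array of shape $(m,m,n,n)$ are our assumptions, not notation from the paper), one can evaluate $p_A(X,Y) = \sum_{i,j,k,l} X_{ij} A_{ijkl} Y_{kl}$ and check that a tensor supported on $i = j$, $k = l$ together with diagonal strategies reproduces the bimatrix payoff $x^T \bar A y$:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 2, 3

# Embed a bimatrix payoff A_bar into a tensor supported on i = j, k = l.
A_bar = rng.standard_normal((m, n))
A = np.zeros((m, m, n, n))
for i in range(m):
    for k in range(n):
        A[i, i, k, k] = A_bar[i, k]

def payoff(A, X, Y):
    # p_A(X, Y) = sum_{i,j,k,l} X_ij A_ijkl Y_kl
    return np.einsum("ij,ijkl,kl->", X, A, Y)

# Diagonal strategies correspond to mixed strategies of the bimatrix game.
x = np.array([0.3, 0.7])
y = np.array([0.2, 0.5, 0.3])
X, Y = np.diag(x), np.diag(y)
assert np.isclose(payoff(A, X, Y), x @ A_bar @ y)
```

The off-diagonal entries of $X$ and $Y$ hit only zero entries of the tensor, so they indeed do not affect the payoff.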
The motivation for the model of semidefinite games comes from several origins. (1) The Nash equilibria of bimatrix games are intrinsically connected to the combinatorics of polyhedra. Prominently, von Stengel [49] used this connection and cyclic polytopes to construct a family of $n \times n$-bimatrix games whose number of equilibria grows as $0.949 \cdot (1+\sqrt{2})^n / \sqrt{n}$ for $n \to \infty$. In particular, this number grows faster than $2^n - 1$, which was an earlier conjecture of Quint and Shubik [40] for an upper bound. As of today, it is still an open problem whether there are bimatrix games with even more Nash equilibria than in von Stengel's construction. Recently, for tropical bimatrix games, Allamigeon, Gaubert and Meunier [4] showed that the upper bound of $2^n - 1$ equilibria, i.e., the Quint-Shubik bound, holds in the tropical setting.
In many subareas around optimization and geometry, the transition from linear-polyhedral settings to semidefinite settings has turned out to be fruitful and beneficial. In this transition, polyhedra (the feasible sets of linear programs) are carried over into the more general spectrahedra (the feasible sets of semidefinite programs); see, e.g., [7]. One of our main goals is to relate the Nash equilibria of semidefinite games to the geometry and combinatorics of spectrahedra.
(2) Various approaches to quantum games have been investigated, which combine game-theoretic models with features of quantum computation and quantum information theory (see [5,19,26,27,30,44]). Quantum states are given by positive semidefinite Hermitian matrices with unit trace (see, e.g., [33] or [38] for an optimization viewpoint). An essential characteristic of our model is the use of positive semidefinite real symmetric matrices with unit trace (this set is also known as the spectraplex) as mixed strategies. From this perspective, we can consider semidefinite games as a real-quantum generalization of bimatrix games and of finite $N$-player games.
Our class of games can also be seen as a subclass of the interactive quantum games studied in [20], see also [9]. These games involve two players and a referee and possibly many interactions between them. The overall actions of each player (the so-called Choi representation) consist of a single Hermitian positive semidefinite matrix along with a finite number of linear constraints, and for the zero-sum case the authors derive a minimax theorem over the complex numbers. We refer the reader to [9] for further details and complexity results and to [23] for an algorithm to compute the equilibrium in the one-round zero-sum case.
(3) In recent times, the connection between games and the use of polynomials in optimization has received wide interest. Prominently, Stein, Ozdaglar and Parrilo [36,45,46] have developed sum-of-squares-based optimization solvers for game theory. Laraki and Lasserre have developed hierarchical moment relaxations [28]; see also Ahmadi and Zhang [3] for semidefinite relaxations and the Lasserre hierarchy to approximate Nash equilibria in bimatrix games. Recently, Nie and Tang [31,32] have studied games with polynomial descriptions and convex generalized Nash equilibrium problems through polynomial optimization and moment-SOS relaxations. The semidefinite conditions in our model correspond to a particularly nice polynomial structure with underlying convexity.
In a different direction, the geometry of Nash equilibria in our class of games establishes novel connections and questions between game theory and semialgebraic geometry. Here, recall that already the set of Nash equilibria of finite $N$-person games can be as complicated as arbitrary semialgebraic sets [12]. See [37] for recent work on the geometry of dependency equilibria.
Our contributions. 1. We develop a framework for approaching semidefinite games through the duality theory of semidefinite programming. As a consequence, the optimal strategies in semidefinite zero-sum games can be computed by a semidefinite program. Moreover, the sets of optimal strategies are spectrahedra (rather than only projections of spectrahedra). See Theorem 4.1.
2. We generalize Dantzig's result on the almost equivalence of zero-sum bimatrix games and linear programs to the almost equivalence of semidefinite zero-sum games and semidefinite programs. See Theorem 5.3. For the special case of semidefinite programs with diagonal matrices, our result recovers Dantzig's result.

3. For general (i.e., not necessarily zero-sum) semidefinite games, we prove a spectrahedral characterization of Nash equilibria. This characterization generalizes the polyhedral characterizations of Nash equilibria in bimatrix games. See Theorem 6.2.
4. We give constructions of families of semidefinite games with many Nash equilibria. In particular, these constructions of games on the strategy space $\mathcal{S}^n \times \mathcal{S}^n$ have more connected components of Nash equilibria than the best known constructions of Nash equilibria in bimatrix games (due to von Stengel [49]). See Example 7.5.
The paper is structured as follows. After collecting some notation in Section 2, we introduce semidefinite games in Section 3 and view them within the more general class of separable games. Section 4 deals with computing the optimal strategies in semidefinite zero-sum games by semidefinite programming. Section 5 then proves the almost equivalence of semidefinite zero-sum games and semidefinite programs. For general semidefinite games, Section 6 gives a spectrahedral characterization of the Nash equilibria. In Section 7, we present constructions with many Nash equilibria. Section 8 concludes the paper.

Notation
We denote by $\mathcal{S}^n$ the set of real symmetric $n \times n$-matrices and by $\mathcal{S}^+_n$ the subset of matrices in $\mathcal{S}^n$ which are positive semidefinite. Further, denote by $\langle \cdot, \cdot \rangle$ the Frobenius scalar product, $\langle A, B \rangle := \sum_{i,j} a_{ij} b_{ij}$. $I_n$ denotes the identity matrix.
An optimization problem of the form
$$\inf \, \{\langle C, X \rangle \, : \, \langle A_i, X \rangle = b_i \ (1 \le i \le m), \ X \succeq 0\} \tag{2.1}$$
is called an SDP in primal normal form, and an optimization problem of the form
$$\sup \, \Big\{b^T y \, : \, \sum_{i=1}^m y_i A_i + S = C, \ S \succeq 0\Big\} \tag{2.2}$$
is called an SDP in dual normal form. We will make frequent use of the following duality result of semidefinite programming, see, e.g., [47]: if both (2.1) and (2.2) are strictly feasible with finite optimal values, then the optimal values coincide and they are attained in both problems.
A spectrahedron in $\mathbb{R}^k$ can also be described as the intersection of the cone $\mathcal{S}^+_n$ with an affine subspace $U = A_0 + L$, where $A_0 \in \mathcal{S}^n$ and $L$ is a linear subspace of $\mathcal{S}^n$ of dimension $k$, say, given as $L = \operatorname{span}\{A_1, \ldots, A_k\}$ (see, e.g., [41], [7, Chapter 5]). The sets of the form
$$C = \Big\{x \in \mathbb{R}^k \, : \, \exists \, y \in \mathbb{R}^p \text{ with } A_0 + \sum_{i=1}^k x_i A_i + \sum_{j=1}^p y_j B_j \succeq 0\Big\} \tag{2.4}$$
with symmetric matrices $A_i, B_j$ are called spectrahedral shadows (see [42]). Any representation of the form (2.4) is called a semidefinite representation of $C$.
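A membership test for a spectrahedron reduces to an eigenvalue computation. The following sketch (helper name and the concrete example are ours) checks whether $A_0 + \sum_i x_i A_i \succeq 0$ and uses the unit disk, a standard example of a spectrahedron:

```python
import numpy as np

def in_spectrahedron(x, A0, As, tol=1e-9):
    """Check whether A0 + sum_i x_i A_i is positive semidefinite."""
    M = A0 + sum(xi * Ai for xi, Ai in zip(x, As))
    return np.linalg.eigvalsh(M).min() >= -tol

# The unit disk as a spectrahedron:
# [[1+x, y], [y, 1-x]] >= 0  iff  x^2 + y^2 <= 1.
A0 = np.eye(2)
As = [np.diag([1.0, -1.0]), np.array([[0.0, 1.0], [1.0, 0.0]])]
assert in_spectrahedron([0.6, 0.6], A0, As)       # inside the disk
assert not in_spectrahedron([0.9, 0.9], A0, As)   # outside the disk
```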

Semidefinite games
3.1. Two-player and $N$-player semidefinite games. Most of our work is concerned with two-player semidefinite games. For simplicity, we work over the real numbers, while many considerations can also be carried over to the complex numbers. Let $m, n \ge 1$. The strategy spaces $\mathcal{X}$ and $\mathcal{Y}$ are
$$\mathcal{X} = \{X \in \mathcal{S}^m : X \succeq 0 \text{ and } \operatorname{tr}(X) = 1\} \quad\text{and}\quad \mathcal{Y} = \{Y \in \mathcal{S}^n : Y \succeq 0 \text{ and } \operatorname{tr}(Y) = 1\}.$$
To formulate the payoffs, it is convenient to denote by $A_{\bullet\bullet kl}$ the symmetric $m \times m$-matrix which results from a fourth-order tensor $A$ by fixing the third index to $k$ and the fourth index to $l$. Such two-dimensional sections of a tensor are also called slices. The payoff functions are
$$p_A(X,Y) = \sum_{i,j,k,l} X_{ij} A_{ijkl} Y_{kl} \quad\text{and}\quad p_B(X,Y) = \sum_{i,j,k,l} X_{ij} B_{ijkl} Y_{kl},$$
where $A$ and $B$ are tensors in the bisymmetric space $\mathcal{S}^m \times \mathcal{S}^n$. That is, $A$ satisfies the symmetry relations $A_{ijkl} = A_{jikl}$ and $A_{ijkl} = A_{ijlk}$, and analogous symmetry relations hold for $B$. If $A = -B$, then the game is called a semidefinite zero-sum game.
For the $N$-player version, with strategy spaces $\mathcal{X}^{(k)} = \{X \in \mathcal{S}^{m_k} : X \succeq 0 \text{ and } \operatorname{tr}(X) = 1\}$ for the players $k = 1, \ldots, N$ and a strategy profile $(X^{(1)}, \ldots, X^{(N)})$, the payoff function for the $k$-th player is
$$p_k(X^{(1)}, \ldots, X^{(N)}) = \sum_{(i_1,j_1),\ldots,(i_N,j_N)} A^{(k)}_{(i_1,j_1),\ldots,(i_N,j_N)} \, X^{(1)}_{i_1 j_1} \cdots X^{(N)}_{i_N j_N}$$
with payoff tensors $A^{(k)}$.

3.2. Separable games. Stein, Ozdaglar and Parrilo [45] have introduced the class of separable games. An $N$-player separable game consists of pure strategy sets $C_1, \ldots, C_N$, which are non-empty compact metric spaces, and payoff functions of the form
$$p_k(s_1, \ldots, s_N) = \sum_{r_1=1}^{t_1} \cdots \sum_{r_N=1}^{t_N} a^{(k)}_{r_1 \cdots r_N} \, f^1_{r_1}(s_1) \cdots f^N_{r_N}(s_N)$$
with real coefficients $a^{(k)}_{r_1 \cdots r_N}$ and continuous functions $f^j_{r_j} : C_j \to \mathbb{R}$. Taking as $C_k$ the strategy space $\mathcal{X}^{(k)}$, as the functions the coordinate functions $X \mapsto X_{ij}$ and as coefficients the entries of the payoff tensors, the payoff functions become the multilinear functions $p_k(X^{(1)}, \ldots, X^{(N)})$ from above. This yields the setup of semidefinite games as introduced before.
The set of mixed strategies $\Delta_k$ of the $k$-th player is defined as the space of Borel probability measures $\sigma_k$ over $C_k$. A mixed strategy profile $\sigma$ is a Nash equilibrium if it satisfies
$$p_k(\sigma) \ge p_k(\tau_k, \sigma_{-k}) \quad \text{for all } \tau_k \in \Delta_k \text{ and all } k,$$
where $\sigma_{-k}$ denotes the mixed strategies of all players except player $k$.
In this setting, the relation of our model to the mixed strategies of separable games does not yield any new insight, since taking Borel probability measures over the convex set $C_k$ does not give new strategies.
There is a second viewpoint, which better captures the role of the pure strategies. Since every point in the positive semidefinite cone is a convex combination of positive semidefinite rank-1 matrices, we can also define the set of pure strategies $C_k$ as the set of matrices in $\mathcal{S}^+_{m_k}$ which have trace 1 and rank 1. Then, by a Carathéodory-type argument in [45, Corollary 2.10], every separable game has a Nash equilibrium in which player $k$ mixes among at most $\dim \mathcal{S}^{m_k} + 1 = \binom{m_k+1}{2} + 1$ pure strategies. In contrast to finite $N$-player games, the decomposition of a mixed strategy (such as one in a Nash equilibrium) in terms of the pure strategies is not unique. Example 7.3 will illustrate this.
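The non-uniqueness of the decomposition can be seen already for the matrix $\frac{1}{2} I_2$: every orthonormal basis of $\mathbb{R}^2$ yields a different rank-1 decomposition. A minimal numerical sketch (the concrete vectors are our choice):

```python
import numpy as np

# A trace-1 PSD matrix generally has many rank-1 decompositions.
X = np.eye(2) / 2

e1, e2 = np.eye(2)
u = np.array([1.0, 1.0]) / np.sqrt(2)
v = np.array([1.0, -1.0]) / np.sqrt(2)

# Two distinct decompositions into trace-1 rank-1 matrices with weights 1/2, 1/2:
dec1 = 0.5 * np.outer(e1, e1) + 0.5 * np.outer(e2, e2)
dec2 = 0.5 * np.outer(u, u) + 0.5 * np.outer(v, v)
assert np.allclose(dec1, X) and np.allclose(dec2, X)
# ... yet the pure strategies involved differ:
assert not np.allclose(np.outer(e1, e1), np.outer(u, u))
```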

Semidefinite zero-sum games
In this section, we consider semidefinite zero-sum games. The payoff tensors are given by $A$ and $B := -A$. Hence, the second player wants to minimize the payoff of the first player, $p_A$. By the classical minimax theorem for bilinear functions over compact convex sets [15,48] (see also [14]), optimal strategies exist in the zero-sum case. We show that the sets of optimal strategies are spectrahedra and reveal the semialgebraic geometry of semidefinite zero-sum games.

Theorem 4.1. Let $G = (A, B)$ be a semidefinite zero-sum game. Then, the set of optimal strategies of each player is the set of optimal solutions of a semidefinite program. Moreover, each set of optimal strategies is a spectrahedron.
The value $V$ of the game is defined through the minimax relation
$$V = \max_{X \in \mathcal{X}} \min_{Y \in \mathcal{Y}} p_A(X, Y) = \min_{Y \in \mathcal{Y}} \max_{X \in \mathcal{X}} p_A(X, Y).$$
The following lemma records that zero-sum matrix games can be embedded into semidefinite zero-sum games.

Lemma 4.2. For a given zero-sum matrix game $G$ with payoff matrix $A = (a_{ij}) \in \mathbb{R}^{m \times n}$, let $G'$ be the semidefinite zero-sum game on $\mathcal{S}^m \times \mathcal{S}^n$-matrices with payoff tensor
$$A'_{ijkl} = \begin{cases} a_{ik} & \text{if } i = j \text{ and } k = l, \\ 0 & \text{otherwise.} \end{cases}$$
Then a pair $(x, y) \in \Delta_1 \times \Delta_2$ is a pair of optimal strategies for $G$ if and only if there exists a pair of optimal strategies $(X, Y)$ for $G'$ with
$$x = \operatorname{diag}(X) \quad\text{and}\quad y = \operatorname{diag}(Y). \tag{4.1}$$

Proof. For any strategy pair $(X, Y) \in \mathcal{X} \times \mathcal{Y}$ with (4.1), the payoff in $G'$ is $\sum_{i,k} x_i a_{ik} y_k$, which coincides with the payoff in $G$ for the strategy pair $(x, y)$.
As a consequence of Lemma 4.2, any oracle to solve semidefinite zero-sum games can be used to solve zero-sum matrix games. Namely, construct the semidefinite zero-sum game $G'$ described in Lemma 4.2 and let $X^* \in \mathcal{S}^+_m$ and $Y^* \in \mathcal{S}^+_n$ be the optimal strategies provided by the oracle. Let $x^*$ and $y^*$ be the vectors of diagonal elements of $X^*$ and $Y^*$. Since $X^*$ and $Y^*$ are positive semidefinite, the vectors $x^*$ and $y^*$ are nonnegative, and due to $\operatorname{tr}(X^*) = \operatorname{tr}(Y^*) = 1$ we have $\sum_{i=1}^m x^*_i = \sum_{j=1}^n y^*_j = 1$. Since $A'_{ijkl} = 0$ whenever $i \neq j$ or $k \neq l$, the off-diagonal elements in any strategy of the semidefinite game $G'$ do not matter for the payoffs. Hence, $x^*$ and $y^*$ are optimal strategies for the zero-sum matrix game.

Lemma 4.3. Let $G$ be a semidefinite zero-sum game on $\mathcal{S}^n \times \mathcal{S}^n$. If the payoff tensor satisfies $A_{ijkl} = -A_{klij}$ for all $i, j, k, l$, then $G$ has value 0.
In the proof, we employ a simple symmetry consideration.
Proof. Let $V$ denote the value of $G$. Then, there exists an $X \in \mathcal{X}$ such that for all $Y \in \mathcal{Y}$, we have $\sum_{i,j,k,l} X_{ij} A_{ijkl} Y_{kl} \ge V$. In particular, choosing $Y = X$ gives
$$\sum_{i,j,k,l} X_{ij} A_{ijkl} X_{kl} \ge V. \tag{4.2}$$
If we rearrange the order of the summation and use the precondition $A_{ijkl} = -A_{klij}$, then
$$-\sum_{i,j,k,l} X_{ij} A_{ijkl} X_{kl} \ge V. \tag{4.3}$$
Adding (4.2) and (4.3) yields $V \le 0$. Analogously, there exists some $Y \in \mathcal{Y}$ such that for all $X \in \mathcal{X}$, we have $\sum_{i,j,k,l} X_{ij} A_{ijkl} Y_{kl} \le V$. Arguing similarly as before, we can deduce $V \ge 0$. Altogether, this gives $V = 0$.
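The symmetry underlying this proof is the identity $p_A(X, Y) = -p_A(Y, X)$, which follows from rearranging the summation under the precondition $A_{ijkl} = -A_{klij}$. A numerical sketch (the construction of a random tensor with the required symmetries is our own):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2

# Build a tensor with the bisymmetry in (i,j) and (k,l) and with A_ijkl = -A_klij.
B = rng.standard_normal((n, n, n, n))
B = B + B.transpose(1, 0, 2, 3)   # symmetrize in (i, j)
B = B + B.transpose(0, 1, 3, 2)   # symmetrize in (k, l)
A = B - B.transpose(2, 3, 0, 1)   # enforce A_ijkl = -A_klij

def payoff(A, X, Y):
    return np.einsum("ij,ijkl,kl->", X, A, Y)

def rand_strategy(n):
    G = rng.standard_normal((n, n))
    P = G @ G.T                    # random PSD matrix
    return P / np.trace(P)         # normalize to trace 1

X, Y = rand_strategy(n), rand_strategy(n)
# The payoff function is antisymmetric under swapping the players' strategies:
assert np.isclose(payoff(A, X, Y), -payoff(A, Y, X))
# In particular, p_A(X, X) = 0 for every strategy X.
assert np.isclose(payoff(A, X, X), 0.0)
```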
We characterize which a-priori strategy player 1 will play if her strategy will be revealed to player 2 (max-min strategy). In the following lemma, the symmetric $n \times n$-matrix $T$ plays the role of a slack matrix.

Lemma 4.4. Let $(A, B)$ be a semidefinite zero-sum game. As an a-priori strategy, player 1 plays an optimal solution $X$ of the SDP
$$\max \, \{v_1 \, : \, (\langle X, A_{\bullet\bullet kl}\rangle)_{1 \le k,l \le n} - v_1 I_n = T, \ \operatorname{tr}(X) = 1, \ X \succeq 0, \ T \succeq 0\}. \tag{4.4}$$
The optimal value of this optimization problem is attained.
Proof. For an a priori known strategy $X$ of player 1, player 2 will play a best response, i.e., an optimal solution of the problem
$$\min \Big\{ \sum_{i,j,k,l} X_{ij} A_{ijkl} Y_{kl} \, : \, \operatorname{tr}(Y) = 1, \ Y \succeq 0 \Big\}.$$
In what follows, we see that the optimal value of the minimization problem is attained and that strong duality holds. As a-priori strategy, player 1 uses an optimal solution of $\max_{X \in \mathcal{X}} \min_{Y \in \mathcal{Y}} \sum_{i,j,k,l} X_{ij} A_{ijkl} Y_{kl}$. We write the inner minimization problem of the max-min problem in terms of the dual SDP. This gives
$$\min \, \{\langle (\langle X, A_{\bullet\bullet kl}\rangle)_{1 \le k,l \le n}, \, Y\rangle \, : \, \operatorname{tr}(Y) = 1, \ Y \succeq 0\} \tag{4.6}$$
$$= \ \max \, \{v_1 \, : \, (\langle X, A_{\bullet\bullet kl}\rangle)_{1 \le k,l \le n} - v_1 I_n \succeq 0\}. \tag{4.7}$$
Note that the scaled unit matrix $\frac{1}{n} I_n$ is a strictly feasible point for the minimization problem (4.6). If we choose a negative $v_1$ with sufficiently large absolute value, then the maximization problem (4.7) has a strictly feasible point as well. Hence, the duality theory for semidefinite programming implies that both the minimization and the maximization problems attain the optimal value. In connection with the outer maximization, this gives the semidefinite program (4.4).
We remark that the expressions (4.6) and (4.7) can be interpreted as the smallest eigenvalue of the matrix $(\langle X, A_{\bullet\bullet kl}\rangle)_{1 \le k,l \le n}$ (see, e.g., [47]). Further note that the SDP (4.4) is not quite in one of the normal forms; see Lemma 4.7 below.
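This eigenvalue interpretation can be verified numerically: player 2's best-response value against a fixed $X$ is $\lambda_{\min}$ of the slice matrix $M = (\langle X, A_{\bullet\bullet kl}\rangle)_{k,l}$, attained at $Y = qq^T$ for a corresponding unit eigenvector $q$. A sketch (the random tensor construction is ours):

```python
import numpy as np

rng = np.random.default_rng(2)
m = n = 2
A = rng.standard_normal((m, m, n, n))
A = A + A.transpose(1, 0, 2, 3)   # bisymmetry in (i, j)
A = A + A.transpose(0, 1, 3, 2)   # bisymmetry in (k, l)

X = np.eye(m) / m                 # any fixed strategy of player 1

# Slice matrix M_kl = <X, A_..kl>.
M = np.einsum("ij,ijkl->kl", X, A)

# min_{Y >= 0, tr Y = 1} <M, Y> = lambda_min(M), attained at Y = q q^T.
w, Q = np.linalg.eigh(M)
q = Q[:, 0]
Y_star = np.outer(q, q)
assert np.isclose(np.einsum("kl,kl->", M, Y_star), w[0])

# No trace-1 PSD strategy Y does better than lambda_min(M):
for _ in range(100):
    G = rng.standard_normal((n, n))
    P = G @ G.T
    Y = P / np.trace(P)
    assert np.einsum("kl,kl->", M, Y) >= w[0] - 1e-9
```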
Next, we characterize the a-priori strategies of player 2 in terms of a minimization problem to facilitate the duality reasoning. Similar to Lemma 4.4, the symmetric $m \times m$-matrix $S$ serves as a slack matrix. Here, $A_{ij\bullet\bullet}$ denotes the $n \times n$-slice of $A$ obtained by fixing the first index to $i$ and the second index to $j$.

Lemma 4.5. Let $(A, B)$ be a semidefinite zero-sum game. As an a-priori strategy, player 2 plays an optimal solution $Y$ of the SDP
$$\min \, \{u_1 \, : \, u_1 I_m - (\langle A_{ij\bullet\bullet}, Y\rangle)_{1 \le i,j \le m} = S, \ \operatorname{tr}(Y) = 1, \ Y \succeq 0, \ S \succeq 0\}. \tag{4.8}$$
The optimal value of this optimization problem is attained.
Proof. The proof is similar to that of Lemma 4.4. As an a-priori strategy, player 2 uses an optimal solution of $\min_{Y \in \mathcal{Y}} \max_{X \in \mathcal{X}} \sum_{i,j,k,l} X_{ij} A_{ijkl} Y_{kl}$. Since both the inner optimization problem and its dual are strictly feasible, strong duality holds. The statement then follows from the duality relation between the inner maximization and its dual minimization.

Remark 4.6. Similar to the case of zero-sum matrix games, if the coefficients of the payoff tensor are chosen sufficiently generically, then the SDPs (4.4) and (4.8) have a unique optimal solution and, as a consequence, each player has a unique optimal strategy. In contrast to the set of optimal strategies of zero-sum matrix games, it is possible that the set of optimal strategies of a semidefinite game is non-polyhedral. For example, already in the trivial semidefinite game with the zero tensor $A$, the value is 0 and the sets of optimal strategies of player 1 and player 2 are the full sets $\mathcal{X}$ and $\mathcal{Y}$, respectively.

Now we show that the sets of optimal strategies of the two players can be regarded as the optimal solutions of a pair of dual SDPs.

Lemma 4.7. The SDPs (4.4) and (4.8) constitute a primal-dual pair of semidefinite programs.

Proof. We show that the dual of (4.8) coincides with (4.4). Setting $Z := \operatorname{diag}(Y, S, u_1^+, u_1^-)$, i.e., the block diagonal matrix with blocks $Y$, $S$, $u_1^+$ and $u_1^-$ (of sizes $n$, $m$, 1, 1), the problem (4.8) can be written as an SDP (4.9) in primal normal form. We claim that the dual of (4.9) coincides with (4.4). Denote by $E_{ij}$ the matrix with 1 in row $i$ and column $j$ whenever $i = j$, and with $1/2$ in row $i$ and column $j$ as well as in row $j$ and column $i$ otherwise. In the resulting dual SDP (4.10), observe that $\sum_{i,j} w_{ij} A_{ijkl} = \langle W, A_{\bullet\bullet kl}\rangle$. By identifying $W$ with $-X$, we recognize (4.10) as the SDP in (4.4). Now we provide the proof of Theorem 4.1.
Proof. Player 1 can achieve at least the gain provided by the a-priori strategy (4.4) from Lemma 4.4, and player 2 can bound her loss by the a-priori strategy (4.8) from Lemma 4.5. By Lemma 4.7, both SDPs are dual to each other, so that their optimal values coincide with the value of the game. In the coordinates $(X, T, v_1)$ and $(Y, S, u_1)$, the feasible regions of those optimization problems are spectrahedra, and the sets of optimal strategies are the sets of optimal solutions of the SDPs. Intersecting the feasibility spectrahedron, say, for player 1, with the hyperplane corresponding to the optimal value of the objective function shows the first part of the theorem. Precisely, we obtain the set
$$\{(X, T) \, : \, X \succeq 0, \ T \succeq 0, \ \operatorname{tr}(X) = 1, \ (\langle X, A_{\bullet\bullet kl}\rangle)_{1 \le k,l \le n} - V I_n = T\},$$
where $V$ is the value of the game. By projecting the spectrahedron for the first player onto the $X$-variables, we can deduce that in the space $\mathcal{X}$ the set of optimal strategies of player 1 is the projection of a spectrahedron. Similarly, this is true for player 2.
Indeed, the set of optimal strategies of each player is not only the projection of a spectrahedron, but also a spectrahedron itself. Namely, taking the optimal value of $v_1$ (i.e., the value $V$ of the game) corresponds geometrically to passing over to the intersection with a separating hyperplane and, because the other additional variables ("$T$") just refer to a slack matrix, we see that the set of optimal strategies for the first player is a spectrahedron. In particular, the equation $(\langle X, A_{\bullet\bullet kl}\rangle)_{1 \le k,l \le n} - V I_n = T$ allows to eliminate the slack matrix $T$. Similar arguments apply for the second player. Precisely, the spectrahedron for the first player lives in the space $\mathcal{S}^m$, whose variables we denote by the symmetric matrix variable $X$. The inequalities and equations for $X$ are
$$X \succeq 0, \quad \operatorname{tr}(X) - 1 = 0, \quad (\langle X, A_{\bullet\bullet kl}\rangle)_{1 \le k,l \le n} - V I_n \succeq 0,$$
where we can write the equation as two inequalities and where we can combine all the scalar inequalities and matrix inequalities into one block matrix inequality. Hence, the sets of optimal strategies of the two players are
$$O_1 = \{X \in \mathcal{X} \, : \, (\langle X, A_{\bullet\bullet kl}\rangle)_{1 \le k,l \le n} \succeq V I_n\} \quad\text{and}\quad O_2 = \{Y \in \mathcal{Y} \, : \, (\langle A_{ij\bullet\bullet}, Y\rangle)_{1 \le i,j \le m} \preceq V I_m\},$$
where $V$ is the value of the game. Note that $O_1$ and $O_2$ are spectrahedra in the space of symmetric matrices.
Example 4.9. We consider a semidefinite generalization of a $2 \times 2$-zero-sum matrix game known as "Plus one" (see, for example, [25]). The payoff matrix of that bimatrix game is
$$A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$$
and for each player, the second strategy is dominant. Let the semidefinite zero-sum game be defined by a payoff tensor $A_{ijkl}$, $i, j, k, l \in \{1, 2\}$, extending this matrix game. If both players play only diagonal strategies (i.e., $X$ and $Y$ are diagonal matrices), then the payoffs correspond to the payoffs of the underlying matrix game. By Lemma 4.3, the value of the semidefinite game is 0. To determine an optimal strategy for player 1, we consider those points in the feasible set of the SDP (4.4) in Lemma 4.4 which have $v_1 = 0$. Hence, we are looking for matrices $T, X \succeq 0$ such that $(\langle X, A_{\bullet\bullet kl}\rangle)_{1 \le k,l \le 2} = T$. Since $X \succeq 0$ implies $x_{11} \ge 0$ and $T \succeq 0$ implies $-x_{11} \ge 0$, we obtain $x_{11} = 0$. Hence, $x_{22} = 1 - x_{11} = 1$. Further, $X \succeq 0$ yields $x_{12} = x_{21} = 0$. Therefore, the optimal strategy of player 1 is
$$\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix},$$
and, similarly, the optimal strategy of player 2 is the same one.
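The dominance structure of the underlying "Plus one" matrix game can be checked directly (a small numeric sketch; the variable names are ours):

```python
import numpy as np

A = np.array([[0.0, -1.0], [1.0, 0.0]])   # "Plus one" payoff matrix

# The second pure strategy dominates the first for both players:
assert np.all(A[1, :] >= A[0, :])          # row player maximizes x^T A y
assert np.all(A[:, 1] <= A[:, 0])          # column player minimizes x^T A y

# Hence (e2, e2) is a pair of optimal strategies and the value is A[1, 1] = 0.
assert A[1, 1] == 0.0
```

This matches the diagonal optimal strategies $\operatorname{diag}(0, 1)$ found in the semidefinite game above.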
Example 4.10. Consider a slightly different version of Example 4.9, in which the optimal strategies are not diagonal strategies. For $i, j, k, l \in \{1, 2\}$, let the payoff tensor be given by a modification of the tensor in Example 4.9. By Lemma 4.3, the value of the game is 0. If both players play only diagonal strategies, then the payoffs coincide (up to a factor of 2 in the payoffs, which is irrelevant for the optimal strategies) with the payoffs of the underlying zero-sum matrix game. Interestingly, we show that the optimal strategies are not diagonal strategies here. As in Example 4.9, we determine an optimal strategy for player 1 by considering the feasible points of the SDP (4.4) with $v_1 = 0$. Hence, we search for $T, X \succeq 0$ such that $(\langle X, A_{\bullet\bullet kl}\rangle)_{1 \le k,l \le 2} = T$. In connection with the arithmetic-geometric mean inequality, $X \succeq 0$ then forces the diagonal entries $x_{11} = x_{22} = \frac{1}{2}$. Thus, the optimal strategy of player 1 is a non-diagonal matrix with diagonal entries $\frac{1}{2}$, and, similarly, the optimal strategy of player 2 is the same one.
Note that $X = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$ is not an optimal strategy for player 1, since, for a given strategy $Y$ of player 2, the payoff is $2y_{11} + 2y_{12}$. Specifically, the choice $y_{11} = \varepsilon$, $y_{12} = y_{21} = -\sqrt{\varepsilon(1-\varepsilon)}$, $y_{22} = 1 - \varepsilon$ with small $\varepsilon > 0$ defines a strategy of player 2 which gives a negative payoff, so that player 1 cannot guarantee the value 0 with this $X$.

The almost-equivalence of semidefinite zero-sum games and semidefinite programs
We give a semidefinite generalization of Dantzig's almost equivalence of zero-sum matrix games and linear programming [11], see also [1]. Given an LP in the form
$$\min \, \{c^T x \, : \, Ax \ge b, \ x \ge 0\}$$
with $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$ and $c \in \mathbb{R}^n$, Dantzig constructed a zero-sum matrix game with a skew-symmetric payoff matrix built from $A$, $b$ and $c$. For the semidefinite generalization, the following variant of a duality statement is convenient, whose proof is given for the sake of completeness.
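For orientation, one common form of Dantzig's skew-symmetric payoff matrix (sign conventions vary by source; the layout below is one standard choice, not necessarily the exact one used here) combines $A$, $b$ and $c$ into a single block matrix. Since the matrix is skew-symmetric, the resulting matrix game is symmetric and has value 0:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 3, 2
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
c = rng.standard_normal(n)

# One common form of Dantzig's payoff matrix for min{c^T x : Ax >= b, x >= 0}:
M = np.block([
    [np.zeros((n, n)), A.T,              -c[:, None]],
    [-A,               np.zeros((m, m)),  b[:, None]],
    [c[None, :],      -b[None, :],        np.zeros((1, 1))],
])

# The game with payoff matrix M is symmetric (skew-symmetric matrix),
# hence its value is 0.
assert np.allclose(M, -M.T)
assert M.shape == (n + m + 1, n + m + 1)
```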
Lemma 5.1. For $A_1, \ldots, A_m, C \in \mathcal{S}^n$ and $b \in \mathbb{R}^m$, the following SDPs in the slightly modified normal forms constitute a primal-dual pair:
$$\inf_X \, \{\langle C, X\rangle \, : \, \langle A_i, X\rangle \ge b_i \ (1 \le i \le m), \ X \succeq 0\} \tag{5.1}$$
and
$$\sup_{y, S} \, \Big\{b^T y \, : \, \sum_{i=1}^m y_i A_i + S = C, \ y \ge 0, \ S \succeq 0\Big\}. \tag{5.2}$$
We call them SDPs in modified primal and dual forms.
Proof. We derive these forms from the usual normal forms, in which the primal contains a relation "$=$" rather than a relation "$\ge$" and in which the dual uses an unconstrained variable vector $y$ rather than a nonnegative vector. Starting from the standard pair, we extend each matrix $A_i$ to a symmetric $2n \times 2n$-matrix via a single additional non-zero element, namely $-1$, in entry $(n+i, n+i)$ of the modified $A_i$, $1 \le i \le m$. Moreover, we formally embed $C$ into a symmetric $2n \times 2n$-matrix. The dual (in the original sense) of the extended problem still uses a vector $y$ of length $m$, but the modifications in the primal problem give the additional conditions $y_i \ge 0$, $1 \le i \le m$. This shows the desired modified forms of the primal-dual pair.
A semidefinite zero-sum game with $\mathcal{X} = \mathcal{Y}$ is called symmetric if the payoff tensor $A$ satisfies the skew-symmetry relation $A_{ijkl} = -A_{klij}$. By Lemma 4.3, the value of a symmetric game with payoff tensor $A$ on the strategy space $\mathcal{S}^n \times \mathcal{S}^n$ is zero. Therefore, there exists a strategy $X$ of the first player such that
$$p_A(X, Y) \ge 0 \quad \text{for all } Y \in \mathcal{Y}. \tag{5.3}$$
Generalizing the notion in [1], we call such a strategy $X$ a solution of the symmetric game. The condition (5.3) states that the matrix $(\langle X, A_{\bullet\bullet kl}\rangle)_{1 \le k,l \le n}$ is contained in the dual cone of $\mathcal{S}^+_n$. Since the cone $\mathcal{S}^+_n$ of positive semidefinite matrices is self-dual, (5.3) translates to
$$(\langle X, A_{\bullet\bullet kl}\rangle)_{1 \le k,l \le n} \succeq 0. \tag{5.4}$$
It is useful to record the following specific version of a symmetric minimax theorem, which is a special case of the minimax theorem for convex games [14] and of the minimax theorem for quantum games [23].
Lemma 5.2 (Minimax theorem for symmetric semidefinite zero-sum games). Let $G$ be a symmetric semidefinite zero-sum game with payoff tensor $A$. Then there exists a solution strategy $X$, i.e., a matrix $X \in \mathcal{X}$ satisfying (5.4).
Proof. There exists a strategy $X$ satisfying (5.3). By the considerations before the lemma, we obtain (5.4).

Now we generalize the Dantzig construction. Given an SDP in the modified normal form of Lemma 5.1, we define the following semidefinite Dantzig game on $\mathcal{S}^{n+m+1} \times \mathcal{S}^{n+m+1}$. The strategies of both players can be viewed as positive semidefinite block matrices $\operatorname{diag}(X, y, t)$ with $X \in \mathcal{S}^+_n$, $y \in \mathbb{R}^m_+$, $t \in \mathbb{R}_+$ and trace 1. The payoff tensor $Q$ is defined as follows. For $1 \le k, l \le n$ and $1 \le j \le m$, let
$$Q_{kl,(n+j)(n+j)} = (A_j)_{kl} = -Q_{(n+j)(n+j),kl}, \qquad Q_{kl,(n+m+1)(n+m+1)} = -C_{kl} = -Q_{(n+m+1)(n+m+1),kl},$$
$$Q_{(n+j)(n+j),(n+m+1)(n+m+1)} = b_j = -Q_{(n+m+1)(n+m+1),(n+j)(n+j)}.$$
All other entries in the payoff tensor $Q$ are zero. Note that $Q$ has a block structure: for every non-zero entry $Q_{ijkl}$, we have either $i, j \in \{1, \ldots, n\}$ or $i, j \in \{n+1, \ldots, n+m\}$ or $i = j = n+m+1$. An analogous property holds for $k, l$.
Let $\operatorname{diag}(\bar X, \bar y, \bar t)$ be a solution to this symmetric game. By the definition of a solution, we have
$$\big(\langle \operatorname{diag}(\bar X, \bar y, \bar t), \, Q_{\bullet\bullet kl}\rangle\big)_{1 \le k,l \le n+m+1} \succeq 0. \tag{5.5}$$
Due to the block structure of the payoff tensor $Q$, the matrix on the left-hand side of (5.5) has a block structure as well. The upper left $n \times n$-matrix in (5.5) gives the condition $-\sum_{j=1}^m \bar y_j A_j + \bar t C \succeq 0$. The square submatrix in (5.5) indexed by $n+1, \ldots, n+m$ is a diagonal matrix and gives the conditions $\langle A_j, \bar X\rangle - \bar t b_j \ge 0$, $1 \le j \le m$. The right lower entry in (5.5) gives $b^T \bar y - \langle C, \bar X\rangle \ge 0$. Hence, (5.5) is equivalent to the system
$$\langle A_j, \bar X\rangle \ge \bar t \, b_j, \quad 1 \le j \le m, \tag{5.6}$$
$$\sum_{j=1}^m \bar y_j A_j \preceq \bar t \, C, \tag{5.7}$$
$$b^T \bar y - \langle C, \bar X\rangle \ge 0, \tag{5.8}$$
and in addition, we have the conditions defining a strategy, $\bar y \ge 0$, $\bar X \succeq 0$, $\bar t \ge 0$ and $\mathbf{1}^T \bar y + \operatorname{tr}(\bar X) + \bar t = 1$. This allows us to state the following result on the almost equivalence of semidefinite zero-sum games and semidefinite programs. Recall that reducing the equilibrium problem in a semidefinite zero-sum game to semidefinite programming follows from Section 4.

Theorem 5.3. Let $\operatorname{diag}(\bar X, \bar y, \bar t)$ be a solution of the semidefinite Dantzig game.
(1) $\bar t \, (b^T \bar y - \langle C, \bar X\rangle) = 0$.
(2) If $\bar t > 0$, then $\bar X \bar t^{-1}$, $\bar y \bar t^{-1}$ and some corresponding slack matrix $\bar S$ are an optimal solution to the primal-dual SDP pair given in (5.1) and (5.2).
(3) If $b^T \bar y - \langle C, \bar X\rangle > 0$, then the primal problem or the dual problem is infeasible.
The theorem ignores the case $\bar t = 0$. In the special case of bimatrix games, that exception was already observed in Dantzig's treatment [11] and overcome by Adler [1] and von Stengel [51].
While the precondition in (3) looks like a statement of missing strong duality, note that $(\bar X, \bar y)$ does not satisfy the constraints in the initially stated primal-dual SDP pair. If the initial SDP pair does not have an optimal primal-dual pair, then clearly case (2) can never hold. This is a difference to the LP case, where, say, in the case of finite optimal values, there always exists an optimal primal-dual pair and so case (2) is never ruled out a priori in the same way. This qualitative difference was expected, because for a semidefinite zero-sum game, the corresponding primal and dual feasible regions have relative interior points and thus there exists an optimal primal-dual pair.
So, the qualitative situation reflects that for semidefinite zero-sum games, the SDPs characterizing the optimal strategies are always well behaved.
By adding the precondition that the original pair of SDPs has primal-dual interior points, we come into the same situation that case (2) is not ruled out a priori.
Proof. Since $\bar X$ and $\bar y$ are feasible solutions of the SDPs described by (5.6) and (5.7), the weak duality theorem for semidefinite programming implies $\bar t \, (b^T \bar y - \langle C, \bar X\rangle) \le 0$.
Since $\bar t \ge 0$, we obtain $\bar t = 0$ or $b^T \bar y - \langle C, \bar X\rangle \le 0$. In the latter case, (5.8) implies $b^T \bar y - \langle C, \bar X\rangle = 0$. Altogether, this gives $\bar t \, (b^T \bar y - \langle C, \bar X\rangle) = 0$. For the second statement, let $\bar t > 0$. Then $\bar X \bar t^{-1}$, $\bar y \bar t^{-1}$ and the corresponding slack matrix $\bar S$ give a feasible point of the primal-dual SDP pair stated initially. Since $b^T \bar y - \langle C, \bar X\rangle = 0$ and thus $b^T (\bar y \bar t^{-1}) - \langle C, \bar X \bar t^{-1}\rangle = 0$, this feasible point is an optimal solution.
For the third statement, let $b^T \bar y - \langle C, \bar X\rangle > 0$. Then statement (1) implies $\bar t = 0$. Thus, $\langle A_i, \bar X\rangle \ge 0$ for $1 \le i \le m$ and $\sum_{j=1}^m \bar y_j A_j \preceq 0$; moreover, $b^T \bar y > 0$ or $\langle C, \bar X\rangle < 0$. In the case $\langle C, \bar X\rangle < 0$, assume that the originally stated primal (5.1) has a feasible solution $X^\diamond$. Then, for any $\lambda \ge 0$, the point $X^\diamond + \lambda \bar X$ is a feasible solution as well. By considering $\lambda \to \infty$, we see that (5.1) has optimal value $-\infty$. By the weak duality theorem, the dual problem (5.2) cannot be feasible. In the case $b^T \bar y > 0$, similar arguments show that the primal problem is infeasible.
Note that in Theorem 5.3 it is not necessary to assume that the constraints of the SDP are linearly independent, since we have not expressed the situation only in terms of the slack variable. One can easily verify that the optimal solutions of the SDP in primal normal form are the matrices $X' \in \mathcal{S}^+_2$ with $\operatorname{tr}(X') = 1$ and that the only optimal solution of the dual problem is the pair $(y', S')$ with $y' = 2$ and $S' = 0 \in \mathcal{S}^+_2$. The payoff tensor $Q$ in the corresponding Dantzig game in flattened form is a $5 \times 5$-matrix whose rows and columns are indexed by $X_{11}, X_{22}, 2X_{12}, y_1, t$. To extract an optimal strategy $\operatorname{diag}(\bar X, \bar y, \bar t)$ for this game, we observe that (5.6) gives $\bar x_{11} + \bar x_{22} \ge \bar t$ and (5.7) yields $2\bar t \ge \bar y$. Then (5.8) implies $\bar y \ge 2(\bar x_{11} + \bar x_{22})$, and we obtain $\bar x_{11} + \bar x_{22} = \bar t = \frac{\bar y}{2}$. By the trace condition, an optimal strategy is of the form $\operatorname{diag}(\bar X, \frac{1}{2}, \frac{1}{4})$, where $\bar X \in \mathcal{S}^+_2$ satisfies $\operatorname{tr}(\bar X) = \frac{1}{4}$. Since $\bar t > 0$, Theorem 5.3 implies that $4\bar X$ and $4\bar y$ are optimal solutions to the primal-dual SDP pair.

General semidefinite games
Now we study two-player semidefinite games without the zero-sum condition. By Glicksberg's result [17] (see also Debreu [13] and Fan [16]), there always exists a Nash equilibrium for these games. This is because they are a special case of $N$-player continuous games with continuous payoff functions defined on convex compact Hausdorff spaces [17]. The goal of this section is to provide a characterization of the Nash equilibria in terms of spectrahedra, see Theorem 6.2.
Recall the following representation of Nash equilibria for bimatrix games in terms of polyhedra, as introduced by Mangasarian [29] (see also [50]):

Definition 6.1. For an $m \times n$-bimatrix game $(A, B)$, let the polyhedra $P$ and $Q$ be defined by
$$P = \{(x, v) \in \mathbb{R}^m \times \mathbb{R} \, : \, x \ge 0, \ \mathbf{1}^T x = 1, \ x^T B \le v \mathbf{1}^T\} \tag{6.1}$$
and
$$Q = \{(y, u) \in \mathbb{R}^n \times \mathbb{R} \, : \, Ay \le u \mathbf{1}, \ y \ge 0, \ \mathbf{1}^T y = 1\}, \tag{6.2}$$
where $\mathbf{1}$ denotes the all-ones vector.
In $P$, the inequalities $x \ge 0$ are numbered by $1, \ldots, m$ and the inequalities $x^T B \le v \mathbf{1}^T$ are numbered by $m+1, \ldots, m+n$. In $Q$, the inequalities $Ay \le u \mathbf{1}$ are numbered by $1, \ldots, m$ and the inequalities $y \ge 0$ are numbered by $m+1, \ldots, m+n$. In this setting, a pair of mixed strategies $(x, y) \in \Delta_1 \times \Delta_2$ is a Nash equilibrium if and only if there exist $u, v \in \mathbb{R}$ such that $(x, v) \in P$, $(y, u) \in Q$ and, for all $i \in \{1, \ldots, m+n\}$, the $i$-th inequality of $P$ or of $Q$ is binding (i.e., it holds with equality). Here, $u$ and $v$ represent the payoffs of player 1 and player 2, respectively. This representation allows us to study Nash equilibria in terms of pairs of points in $P \times Q$.
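This labeling condition can be checked mechanically for small bimatrix games. A sketch (the helper `is_nash` is our own name; matching pennies serves as the test case):

```python
import numpy as np

def is_nash(A, B, x, y, tol=1e-9):
    """Check the Mangasarian labeling condition for a bimatrix game (A, B)."""
    m, n = A.shape
    u, v = x @ A @ y, x @ B @ y            # candidate equilibrium payoffs
    # (x, v) in P and (y, u) in Q:
    feasible = (x.min() >= -tol and y.min() >= -tol
                and (B.T @ x).max() <= v + tol and (A @ y).max() <= u + tol)
    if not feasible:
        return False
    # Label i in {1,...,m}: x_i = 0 (in P) or (A y)_i = u (in Q).
    for i in range(m):
        if not (abs(x[i]) <= tol or abs((A @ y)[i] - u) <= tol):
            return False
    # Label m+j: (B^T x)_j = v (in P) or y_j = 0 (in Q).
    for j in range(n):
        if not (abs((B.T @ x)[j] - v) <= tol or abs(y[j]) <= tol):
            return False
    return True

A = np.array([[1.0, -1.0], [-1.0, 1.0]])   # matching pennies
B = -A
assert is_nash(A, B, np.array([0.5, 0.5]), np.array([0.5, 0.5]))
assert not is_nash(A, B, np.array([1.0, 0.0]), np.array([1.0, 0.0]))
```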
We aim at a suitable generalization of this combinatorial characterization to the case of semidefinite games. Note that in the case of bimatrix games, the characterization is strongly based on the finiteness of the pure strategies; this does not hold anymore for semidefinite games. Therefore, we start from equivalent versions of the bimatrix game polyhedra (6.1) and (6.2), which do not use finitely many pure strategies in their formulation. Instead, the best responses are expressed more explicitly as a maximization problem. While the generalizations to semidefinite games are no longer polyhedral, it is convenient to keep the symbols $P$ and $Q$ for the notation. Consider the sets
$$P = \{(X, v) \in \mathcal{X} \times \mathbb{R} \, : \, \max\{p_B(X, Y) : Y \in \mathcal{Y}\} \le v\}, \quad Q = \{(Y, u) \in \mathcal{Y} \times \mathbb{R} \, : \, \max\{p_A(X, Y) : X \in \mathcal{X}\} \le u\}.$$
We show that $P$ and $Q$ are spectrahedra in the spaces $\mathcal{S}^m \times \mathbb{R}$ and $\mathcal{S}^n \times \mathbb{R}$. Similar to the considerations in the zero-sum case, for a fixed $X$, the expression $\max\{p_B(X, Y) : Y \in \mathcal{Y}\}$ can be dualized:
$$\max\{p_B(X, Y) \, : \, Y \succeq 0, \ \operatorname{tr}(Y) = 1\} = \min\{v_1 \, : \, v_1 I_n - (\langle X, B_{\bullet\bullet kl}\rangle)_{1 \le k,l \le n} = T, \ T \succeq 0\}.$$
Here, the maximum is attained, since the feasible set $\{Y : Y \succeq 0, \ \operatorname{tr}(Y) = 1\}$ is compact. The minimum is attained due to the strong duality theorem for semidefinite programming and using that this feasible set has a strictly interior point and thus satisfies Slater's condition. If the min-problem inside $P$ has some feasible solution $(v_1, T)$, then for any $v'_1 \ge v_1$ there exists a feasible solution $(v'_1, T')$ as well; namely, set $T' = T + (v'_1 - v_1) I_n$. We claim that $P$ and $Q$ are spectrahedra in the spaces $\mathcal{S}^m \times \mathbb{R}$ and $\mathcal{S}^n \times \mathbb{R}$. For $P$, the inequalities and equations are given by
$$X \succeq 0, \quad \operatorname{tr}(X) - 1 = 0, \quad v I_n - (\langle X, B_{\bullet\bullet kl}\rangle)_{1 \le k,l \le n} \succeq 0,$$
where the equation can be written as two inequalities and where we can combine all the scalar inequalities and matrix inequalities into one block matrix inequality. The spectrahedra $P$ and $Q$ can be used to provide the following characterization of Nash equilibria. We build on the terminology from the bimatrix situation after Definition 6.1 and describe Nash equilibria together with their payoffs.
Theorem 6.2. A quadruple (X, Y, u, v) represents a Nash equilibrium of the semidefinite game if and only if (X, v) ∈ P, (Y, u) ∈ Q and, in every finite rank-1 decomposition

X = Σ_s λ_s p^(s) (p^(s))^T,  Y = Σ_t μ_t q^(t) (q^(t))^T  with λ_s, μ_t > 0,

we have

p_B(X, q^(t) (q^(t))^T) = v for all t,   (6.3)
p_A(p^(s) (p^(s))^T, Y) = u for all s.   (6.4)

The positivity condition on λ_s, μ_t reflects the binding property of x_i ≥ 0 or y_j ≥ 0 from the bimatrix situation. Moreover, in the bimatrix situation the inequalities are induced by the pure strategies, which are the extreme points of ∆_1 and ∆_2. In the semidefinite situation, we can associate with each extreme point of the strategy space an inequality, namely the inequality (say, for P and an extreme point Y of the strategy space of the second player) p_B(X, Y) ≤ v.
Proof. Let (X, Y, u, v) represent a Nash equilibrium. Then X is a best response to Y and Y is a best response to X, so that by the definition of P and Q, we have (X, v) ∈ P and (Y, u) ∈ Q. Let X = Σ_s λ_s p^(s) (p^(s))^T be a finite rank-1 decomposition of X with λ_s > 0, tr(p^(s) (p^(s))^T) = 1 and Σ_s λ_s = 1. Then p_A(X, Y) = Σ_s λ_s p_A(p^(s) (p^(s))^T, Y). Since the first player's payoff is u, the best response property gives p_A(p^(s) (p^(s))^T, Y) ≤ u for all s. If we had p_A(p^(s) (p^(s))^T, Y) < u for some s, then, since X is a convex combination, we would have p_A(X, Y) < u, a contradiction. The statement on Y follows similarly. Conversely, let (X, v) ∈ P, (Y, u) ∈ Q and assume that in every finite rank-1 decomposition X = Σ_s λ_s p^(s) (p^(s))^T, Y = Σ_t μ_t q^(t) (q^(t))^T with λ_s, μ_t > 0, tr(p^(s) (p^(s))^T) = 1, tr(q^(t) (q^(t))^T) = 1 and Σ_s λ_s = Σ_t μ_t = 1, we have (6.3) and (6.4). Since (Y, u) ∈ Q, we have u* := max{p_A(X, Y) : X ∈ S^m_+, tr(X) = 1} ≤ u. Due to (6.4), this gives p_A(p^(s) (p^(s))^T, Y) = u for all s and thus p_A(p^(s) (p^(s))^T, Y) = u* = u for all s. Hence, p_A(X, Y) = u* = u and X is a best response to Y. Similarly, Y is a best response to X with payoff v, so that altogether (X, Y, u, v) represents a Nash equilibrium. Remark 6.3. Theorem 6.2 also holds if "in every finite rank-1 decomposition" is replaced by "in at least one finite rank-1 decomposition". The proof also works in that setting. We will illustrate this further below, in Example 7.3.
Compared to bimatrix games, the more general situation of semidefinite games is qualitatively different in the following sense. In a bimatrix game, every mixed strategy is a unique convex combination of the pure strategies. In a semidefinite game, the analogs of pure strategies are the rank-1 matrices in the spectraplex, and the decompositions of the mixed strategies (i.e., of the strategies of rank at least 2) as convex combinations of rank-1 matrices are no longer unique. However, the conditions for a Nash equilibrium to have several decompositions are quite restrictive. Remark 6.4. To obtain a decomposition of a positive semidefinite matrix with trace 1 into a sum of positive semidefinite rank-1 matrices with trace 1, one can proceed as follows. Consider a spectral decomposition. Then replace the eigenvalues (which sum to 1) by the eigenvalue 1, so that each rank-1 term has trace 1, and interpret the original eigenvalues as the coefficients of a convex combination.
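The procedure of Remark 6.4 can be sketched for 2 × 2 matrices with a closed-form spectral decomposition; the function name and the tolerance handling below are our own illustrative choices.

```python
import math

def rank1_trace1_decomposition(a, b, d, tol=1e-12):
    """Decompose the PSD symmetric matrix [[a, b], [b, d]] with a + d = 1 as
    sum_s lambda_s * p_s p_s^T, where each p_s is a unit vector (so each
    rank-1 term has trace 1) and the coefficients lambda_s are the
    eigenvalues, which sum to 1 as in Remark 6.4."""
    if abs(b) < tol:  # already diagonal: use the standard basis vectors
        return [(lam, p) for lam, p in ((a, (1.0, 0.0)), (d, (0.0, 1.0))) if lam > tol]
    disc = math.sqrt((a - d) ** 2 + 4 * b * b)
    out = []
    for lam in ((a + d + disc) / 2, (a + d - disc) / 2):
        if lam <= tol:  # drop zero eigenvalues (rank-deficient case)
            continue
        v = (b, lam - a)  # satisfies (a - lam) v1 + b v2 = 0
        norm = math.hypot(*v)
        out.append((lam, (v[0] / norm, v[1] / norm)))
    return out

decomp = rank1_trace1_decomposition(0.5, 0.3, 0.5)
print([round(lam, 6) for lam, _ in decomp])  # [0.8, 0.2]
```

The coefficients are exactly the eigenvalues, so they form a convex combination whenever the input has trace 1.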
Example 6.5. We consider the example of a hybrid game, where the first player plays a strategy in the simplex ∆_2 and the second player plays a strategy in S^2_+ with trace 1; note that the hybrid game can be encoded into a semidefinite game on S^2 × S^2 by setting A_ijkl = B_ijkl = 0 whenever i ≠ j. We can describe the situation in terms of an index i for the first player and the indices (j, k) for the second player. For i = 1, let and for i = 2, let To determine the Nash equilibria, we consider three cases. Case 1: The first player plays the first pure strategy. Then the second player has payoff y_11 + 2ε y_12. A small computation shows that for small ε, the second player's best response is and indeed, this gives a Nash equilibrium.
Case 2: The first player plays the second pure strategy. Analogously, the second player's best response is and indeed, this gives a Nash equilibrium.
Case 3: The first player plays a totally mixed strategy. Let x = (x_1, x_2) = (x_1, 1 − x_1) ∈ ∆_2 with x_1, x_2 > 0. The second player's best response has the payoff max_{Y ∈ 𝒴} {x_1 y_11 + (1 − x_1) y_22 + 2ε √(y_11 y_22)}. The first pure strategy would give for the first player y_11 + 2ε y_12. Hence, for every non-negative ε, the best response of the second player is the matrix given in (6.5). As apparent from the above considerations, in that case both pure strategies of the first player are best responses to the strategy of the second player. To determine the strategy x of the first player, we use the condition that the maximum max_{Y ∈ 𝒴} {x_1 y_11 + (1 − x_1) y_22 + 2ε √(y_11 y_22)} has to be attained at the matrix (6.5). Substituting y_22 = 1 − y_11, the resulting univariate problem in y_11 gives x = (1/2, 1/2). The payoff is 1/2 + ε for both players.
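The univariate optimization in Case 3 can be checked numerically. The sketch below assumes the best-response payoff x_1 y_11 + (1 − x_1) y_22 + 2ε √(y_11 y_22) derived above (with the maximal feasible off-diagonal entry) and uses a plain grid search; the function names are illustrative.

```python
import math

def second_player_payoff(x1, eps, y11):
    """Best-response payoff from Case 3: the second player plays the PSD
    trace-1 matrix with diagonal (y11, 1 - y11) and maximal off-diagonal
    entry y12 = sqrt(y11 * (1 - y11))."""
    return x1 * y11 + (1 - x1) * (1 - y11) + 2 * eps * math.sqrt(y11 * (1 - y11))

def best_response_y11(x1, eps, steps=10**4):
    """Grid search for the univariate maximization in y11 over [0, 1]."""
    return max((k / steps for k in range(steps + 1)),
               key=lambda y11: second_player_payoff(x1, eps, y11))

eps = 0.1
y11 = best_response_y11(0.5, eps)
print(y11)                                  # 0.5
print(second_player_payoff(0.5, eps, y11))  # 0.6 = 1/2 + eps
```

For x = (1/2, 1/2) the payoff reduces to 1/2 + 2ε √(y_11 (1 − y_11)), which is maximized at y_11 = 1/2 with value 1/2 + ε, matching the text.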
We close the section by mentioning that some classical results for bimatrix games remain true for semidefinite games. Since semidefinite games are convex compact games, the generalized Kohlberg–Mertens structure theorem on the Nash equilibria shown by Predtetchinski [39] holds for semidefinite games (for related recent structural results in the context of polytopal games, see [35]). Moreover, generically, the number of Nash equilibria in semidefinite games is finite and odd, as a consequence of the results of Bich and Fixary [6].

Semidefinite games with many Nash equilibria
We construct a family of semidefinite games on the strategy space S^n × S^n such that the set of Nash equilibria has many connected components. In particular, the number of Nash equilibria is larger than the number of Nash equilibria that an n × n-bimatrix game can have.
The following criterion allows us to construct semidefinite games from bimatrix games that contain the Nash equilibria of the bimatrix games and possibly additional ones.
Lemma 7.1. Let G = (A, B) be an m × n bimatrix game. Let Ḡ = (Ā, B̄) be a semidefinite game on the strategy space S^m × S^n with ā_iikk = a_ik and b̄_iikk = b_ik for 1 ≤ i ≤ m, 1 ≤ k ≤ n. If ā_ijkk = 0 for all i ≠ j and all k, as well as b̄_iikl = 0 for all i and all k ≠ l, then, for every Nash equilibrium (x, y) of G, the pair (X, Y) = (Diag(x), Diag(y)) is a Nash equilibrium of Ḡ.
Proof. Assume (X, Y) is not a Nash equilibrium of Ḡ. W.l.o.g. we can assume that there exists a strategy Y' for the second player with p_B(X, Y) < p_B(X, Y'). This yields

p_B(x, y) = p_B(X, Y) < p_B(X, Y') = Σ_{i,k,l} b̄_iikl x_i y'_kl = Σ_{i,k} b_ik x_i y'_kk,

where the last equation uses that b̄_iikl = 0 for all l ≠ k. Hence, there exists a feasible strategy y' for the second player in G with p_B(x, y) < p_B(x, y'). This contradicts the precondition that (x, y) is a Nash equilibrium in G.
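The payoff identity used in the proof can be verified computationally. The sketch below (with illustrative helper names) embeds a bimatrix payoff matrix into a tensor as in Lemma 7.1 and checks that diagonal strategies reproduce the bimatrix payoff.

```python
def tensor_payoff(T, X, Y):
    """p(X, Y) = sum_{i,j,k,l} T[i][j][k][l] * X[i][j] * Y[k][l]."""
    return sum(T[i][j][k][l] * X[i][j] * Y[k][l]
               for i in range(len(X)) for j in range(len(X))
               for k in range(len(Y)) for l in range(len(Y)))

def embed_bimatrix(M, m, n):
    """Tensor with t_iikk = M[i][k] and all other entries zero, as in Lemma 7.1."""
    T = [[[[0.0] * n for _ in range(n)] for _ in range(m)] for _ in range(m)]
    for i in range(m):
        for k in range(n):
            T[i][i][k][k] = M[i][k]
    return T

def diag(x):
    return [[x[i] if i == j else 0.0 for j in range(len(x))] for i in range(len(x))]

# For diagonal strategies, the tensor payoff equals the bimatrix payoff.
B = [[3.0, 1.0], [0.0, 2.0]]
x, y = [0.25, 0.75], [0.6, 0.4]
bimatrix = sum(x[i] * B[i][k] * y[k] for i in range(2) for k in range(2))
print(abs(tensor_payoff(embed_bimatrix(B, 2, 2), diag(x), diag(y)) - bimatrix) < 1e-12)  # True
```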
In Lemma 7.1, besides the Nash equilibria inherited from the bimatrix game, there can be additional Nash equilibria in the semidefinite game. A 2 × 2-bimatrix game can have at most three isolated Nash equilibria (see, e.g., [8] or [40]). The following example provides an instance of a semidefinite game on the strategy space S^2 × S^2 with five isolated Nash equilibria. In particular, it has more isolated Nash equilibria than a 2 × 2-bimatrix game can have.
Example 7.2. For a given c ∈ R, let A be the tensor with a_1111 = a_2222 = 1 and a_1212 = a_1221 = a_2112 = a_2121 = c, all other entries being zero, and let B = A. We claim that for c > 1/2, there are exactly five isolated Nash equilibria.
First consider the case that the diagonal of X is (1, 0). Since X ⪰ 0, this implies x_12 = 0. The best response of player 2 then has diagonal (1, 0) as well. From this, we see that X = Y = diag(1, 0) is a Nash equilibrium with payoff 1 for both players, and similarly X = Y = diag(0, 1) as well. These Nash equilibria are isolated, which follows as a direct consequence of the current case in connection with the subsequent considerations of the cases in which the diagonal of X is not equal to (1, 0). Now consider the situation that both diagonal entries of X are positive; due to the situation discussed before, we can also assume that both diagonal entries of Y are positive. Note that the payoff of each player is p(X, Y) = x_11 y_11 + x_22 y_22 + 4c x_12 y_12.
In a Nash equilibrium, as soon as one player plays the off-diagonal entry with non-zero weight, both players will play the off-diagonal element with maximal possible absolute value and appropriate sign; say, for player 1, x_12 = ±√(x_11 x_22).
Case 1: x_12 ≠ 0. We can assume positive signs for the off-diagonal elements of both players. Writing x_22 = 1 − x_11, y_22 = 1 − y_11 and x_12 = √(x_11 x_22), y_12 = √(y_11 y_22), the partial derivatives of p(X, Y) with respect to x_11 and y_11 necessarily must vanish. We remark that, since the payoff function is bilinear, non-infinitesimal deviations are not relevant here.
For the case c > 1/2, we obtain x_11 = y_11 = 1/2. For c = 1/2, any choice of x_11 ∈ (0, 1) and setting y_11 = x_11 gives a critical point, see below. For x_11 = 1/2 and y_11 = 1/2, we obtain the two Nash equilibria with x_12 = y_12 = 1/2 and with x_12 = y_12 = −1/2, both with payoff 1/2 + c for both players. Case 2: x_12 = 0. This implies y_12 = 0, and we obtain the isolated Nash equilibrium X = Y = (1/2) I_2 with payoff 1/2 for both players. In the special case c = 1/2, any choice of x_11 ∈ (0, 1) together with y_11 = x_11 gives a critical point. Hence, in case c = 1/2, all the points with y_11 = x_11 ∈ (0, 1) and choosing the maximal possible off-diagonal entries (with appropriate sign with respect to the other player) give a family of Nash equilibria with payoff 1 for each player.
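The five equilibria can be double-checked numerically. The sketch below parameterizes the extreme (rank-1) strategies by the diagonal entry and the sign of the off-diagonal entry, and tests the best-response property on a grid; the function names are illustrative and the check is approximate.

```python
import math

def payoff(c, x11, x12, y11, y12):
    """p(X, Y) = x11*y11 + x22*y22 + 4c*x12*y12 with x22 = 1 - x11, y22 = 1 - y11."""
    return x11 * y11 + (1 - x11) * (1 - y11) + 4 * c * x12 * y12

def strategies(steps=200):
    """Grid over trace-1 PSD strategies, parameterized by the diagonal entry t
    and an off-diagonal entry s with |s| <= sqrt(t * (1 - t)); for fixed t the
    payoff is linear in s, so the extremes suffice for best responses."""
    for i in range(steps + 1):
        t = i / steps
        r = math.sqrt(t * (1 - t))
        for s in (-r, 0.0, r):
            yield (t, s)

def is_equilibrium(c, X, Y, tol=1e-6):
    current = payoff(c, *X, *Y)
    best1 = max(payoff(c, t, s, *Y) for t, s in strategies())  # player 1 deviations
    best2 = max(payoff(c, *X, t, s) for t, s in strategies())  # player 2 (B = A)
    return current >= best1 - tol and current >= best2 - tol

c = 0.75  # any c > 1/2
candidates = [((1.0, 0.0), (1.0, 0.0)),    # diag(1, 0) for both players
              ((0.0, 0.0), (0.0, 0.0)),    # diag(0, 1) for both players
              ((0.5, 0.5), (0.5, 0.5)),    # maximal positive off-diagonal
              ((0.5, -0.5), (0.5, -0.5)),  # maximal negative off-diagonal
              ((0.5, 0.0), (0.5, 0.0))]    # (1/2) I_2 for both players
print(all(is_equilibrium(c, X, Y) for X, Y in candidates))  # True
```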
Example 7.3. We can use Example 7.2 also to illustrate a situation where there exists more than one decomposition of a strategy into rank-1 matrices. Consider again the main situation c > 1/2. We have seen that the pair ((1/2) I_2, (1/2) I_2) of scaled identity matrices constitutes a Nash equilibrium. If player 1 plays X = (1/2) I_2, the payoff of player 2 is (1/2)(y_11 + y_22), which is independent of c. Due to tr(Y) = 1, the payoff is 1/2 for any strategy Y of the second player. Note that the identity matrix has several decompositions into rank-1 matrices. Besides the canonical decomposition I_2 = e^(1) (e^(1))^T + e^(2) (e^(2))^T, we can also consider, say, even for a general identity matrix I_n, the decomposition I_n = Σ_{i=1}^n u^(i) (u^(i))^T for any orthonormal basis u^(1), …, u^(n) of R^n. Consider both the canonical decomposition and the decomposition with, say, u^(1) = (cos α, sin α)^T, u^(2) = (−sin α, cos α)^T for α := π/6, i.e., u^(1) = (√3/2, 1/2)^T and u^(2) = (−1/2, √3/2)^T. In particular, all the rank-1 strategies occurring in the various decompositions of (1/2) I_2 are best responses of player 2 to the strategy (1/2) I_2 of the first player.

We now show how to construct from Example 7.2 an explicit family of semidefinite games with many Nash equilibria.

Block construction. Let A^(1) and A^(2) be tensors of size m_1 × m_1 × n_1 × n_1 and m_2 × m_2 × n_2 × n_2. The block tensor A with blocks A^(1) and A^(2) is formally defined as the tensor of size (m_1 + m_2) × (m_1 + m_2) × (n_1 + n_2) × (n_1 + n_2) which has entries

a_ijkl = a^(1)_ijkl if i, j ≤ m_1 and k, l ≤ n_1,
a_ijkl = a^(2)_{i−m_1, j−m_1, k−n_1, l−n_1} if i, j > m_1 and k, l > n_1,
a_ijkl = 0 otherwise.

Naturally, this construction can be extended to more than two blocks.
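The rotated rank-1 decompositions of (1/2) I_2 in Example 7.3, and the fact that every strategy of player 2 earns payoff 1/2 against X = (1/2) I_2, can be checked with the following sketch (helper names are our own):

```python
import math

def rotated_basis(alpha):
    """Orthonormal basis u1 = (cos a, sin a), u2 = (-sin a, cos a) of R^2."""
    return ((math.cos(alpha), math.sin(alpha)), (-math.sin(alpha), math.cos(alpha)))

def outer(u):
    """Rank-1 matrix u u^T (a trace-1 strategy when u is a unit vector)."""
    return [[u[0] * u[0], u[0] * u[1]], [u[1] * u[0], u[1] * u[1]]]

def mat_sum(P, Q):
    return [[P[i][j] + Q[i][j] for j in range(2)] for i in range(2)]

# For any angle alpha, (1/2)(u1 u1^T + u2 u2^T) = (1/2) I_2 ...
alpha = math.pi / 6
u1, u2 = rotated_basis(alpha)
S = mat_sum(outer(u1), outer(u2))
print([[round(0.5 * S[i][j], 12) for j in range(2)] for i in range(2)])
# [[0.5, 0.0], [0.0, 0.5]]

# ... and against X = (1/2) I_2, every rank-1 strategy of player 2 in the game
# of Example 7.2 earns (y11 + y22)/2 = 1/2, independently of c.
for u in (u1, u2):
    Y = outer(u)
    print(round(0.5 * Y[0][0] + 0.5 * Y[1][1], 12))  # 0.5 (twice)
```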
Note that if one of the coefficients α_1, α_2, β_1 or β_2 in the Block Lemma is zero, then one of the blocks in X* or Y* consists solely of zeros.
We can use the block lemma to construct a family of semidefinite games with many Nash equilibria. For even n, let G_n be the semidefinite game on S^n × S^n whose tensor A consists of n/2 diagonal blocks of the tensor from Example 7.2 (for a fixed c > 1/2); let all other entries of A be zero. Also, let B = A.
The discussion in Example 7.2, for the specific situation of the game on the strategy sets S^2 × S^2, implies that there are five Nash equilibria. In all these equilibria, the strategies of both players coincide, and this property is preserved throughout the generalized construction we present. Proof. Using the Block Lemma 7.4, we obtain 6^{n/2} − 1 = (√6)^n − 1 Nash equilibria, because we can also use the zero matrix as a 2 × 2 block within a strategy, as long as not all the blocks are the zero matrix. Outside of the 2 × 2 diagonal blocks, the entries of these Nash equilibria are zero. Those entries can be chosen arbitrarily without losing the equilibrium property, as long as the positive semidefiniteness constraint on the strategy is satisfied. As a consequence, the Nash equilibria are not isolated. It remains to show that the (√6)^n − 1 Nash equilibria obtained from the Block Lemma belong to distinct connected components.
For each Nash equilibrium (X, Y), consider the diagonal 2 × 2 blocks of the strategies. In each block, we have one of the five types from Example 7.2, or the block is the zero matrix. We associate with each Nash equilibrium a type p(X, Y) ∈ {0, …, 5}^n which records the type in each of the n/2 blocks of X and in each of the n/2 blocks of Y.
Any two of the (√6)^n − 1 Nash equilibria coming from the Block Lemma have distinct types. Since the type is constant on each connected component of the set of Nash equilibria, restricting to the diagonal blocks shows that the 6^{n/2} − 1 Nash equilibria belong to distinct connected components.
Asymptotically, we obtain more Nash equilibria than in the Quint and Shubik construction of bimatrix games [40] and also more Nash equilibria than in von Stengel's construction of bimatrix games [49], because there the number is 0.949 · 2.414^n/√n asymptotically.
Specifically, von Stengel's construction gives a 6 × 6-bimatrix game with 75 isolated Nash equilibria, and so far no 6 × 6-bimatrix game with more than 75 isolated Nash equilibria is known. Von Stengel also showed an upper bound of 111 Nash equilibria for 6 × 6-bimatrix games. In our construction of a semidefinite game on S^6 × S^6, we obtain from Theorem 7.6 the higher number of 6^{6/2} − 1 = 215 connected components of Nash equilibria in the semidefinite game.
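The counts above can be reproduced in a few lines; the function name is our own.

```python
import math

# Number of connected components of Nash equilibria of G_n from Theorem 7.6:
# each of the n/2 diagonal 2x2 blocks carries one of the five equilibrium
# types of Example 7.2 or the zero block, excluding the all-zero choice.
def components(n):
    assert n % 2 == 0, "the block construction assumes even n"
    return 6 ** (n // 2) - 1

print(components(6))           # 215, versus 75 isolated equilibria in
                               # von Stengel's 6x6 bimatrix construction
print(round(math.sqrt(6), 3))  # growth base 2.449, versus 2.414
```

Note that components(2) = 5 recovers the five equilibria of Example 7.2.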

Outlook and open questions
Since the transition from bimatrix games to semidefinite games leads, in the geometric description of Nash equilibria, from polyhedra to spectrahedra, the questions on the maximal number of Nash equilibria appear to become even more challenging than in the bimatrix situation. Both from the viewpoint of the combinatorics of Nash equilibria and from the viewpoint of computation, rank restrictions have been fruitfully exploited in the contexts of bimatrix games [2, 24] and separable games [45]. It would be interesting to study the exploitation of low-rank structures of semidefinite games in the case of payoff tensors with suitable conditions on a low tensor rank.
Concerning the reduction from semidefinite programs to semidefinite games, it is a natural question whether the treatments of the exceptional cases by Adler [1] and von Stengel [51] can be generalized to the semidefinite case.
We also briefly mention questions on semidefinite generalizations of more general classes of (bimatrix) games. A polymatrix game (or network game) [10] is defined by a graph. The nodes are the players, and each edge corresponds to a two-player zero-sum matrix game. Each player chooses one set of strategies and uses it in all the games that she is involved in. The game has an equilibrium that can be computed efficiently using linear programming. Shapley's stochastic games are two-player zero-sum games of potentially infinite duration. Roughly speaking, the game takes place on a complete graph, the nodes of which correspond to zero-sum matrix games. Two players, starting from an arbitrary node (position), play a zero-sum matrix game at each stage and receive payoffs. Then, with a non-zero probability, the game either stops or the players move to another node and play again. Because the stopping probabilities are non-zero at each position, the game terminates. Shapley proved [43] that this game has an equilibrium; there is also an algorithm to compute it [21], see also [34]. It remains a future task to study the generalizations of polymatrix and stochastic games in which the underlying bimatrix games are replaced by semidefinite games. For semidefinite polymatrix games, this has recently been initiated in [22].

Theorem 7.6. The set of Nash equilibria of G_n consists of (√6)^n − 1 ≈ 2.449^n − 1 connected components.