1 Introduction

In the fundamental model of a bimatrix game in game theory, the spaces of the mixed strategies are given by (two) simplices

$$\begin{aligned} \Delta _1 \ = \left\{ x \in \mathbb {R}^m \, : \, x \ge 0 \text { and } \sum _{i=1}^m x_i = 1 \right\} , \quad \Delta _2 \ = \left\{ y \in \mathbb {R}^n \, : \, y \ge 0 \text { and } \sum _{j=1}^n y_j = 1 \right\} . \end{aligned}$$

The payoffs of the two players, \(p_A\) and \(p_B\), are given by two matrices \(A, B \in \mathbb {R}^{m \times n}\), that is,

$$\begin{aligned} p_A(x,y) \ = \ \sum _{i,j} x_i A_{ij} y_j \; \text { and } \; p_B(x,y) \ = \ \sum _{i,j} x_i B_{ij} y_j. \end{aligned}$$
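For concreteness, these bilinear payoffs can be evaluated directly; the payoff matrices and mixed strategies in the following sketch are arbitrary illustrations, not taken from the text.

```python
# Bilinear payoffs of a bimatrix game: p(x, y) = sum_{i,j} x_i M_ij y_j.
def payoff(M, x, y):
    m, n = len(M), len(M[0])
    return sum(x[i] * M[i][j] * y[j] for i in range(m) for j in range(n))

# Hypothetical payoff matrices (for illustration only).
A = [[1.0, 2.0], [3.0, 4.0]]
B = [[0.0, 1.0], [1.0, 0.0]]
x = [0.5, 0.5]   # mixed strategy of player 1, a point of Delta_1
y = [0.5, 0.5]   # mixed strategy of player 2, a point of Delta_2

p_A = payoff(A, x, y)   # 2.5
p_B = payoff(B, x, y)   # 0.5
print(p_A, p_B)
```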

In the zero-sum case \(B=-A\), optimal strategies exist and can be characterized by linear programming. Moreover, by Dantzig’s classical result (Dantzig 1951), zero-sum matrix games and linear programming are almost equivalent; see Adler (2013) and von Stengel (2022) for a detailed treatment of the situation when Dantzig’s reduction is not applicable. Bimatrix games can be seen as a special case of broader classes of games [such as convex games, see González-Díaz et al. (2010), or separable games, see Stein et al. (2008)], which, with increasing generality, are less accessible from the combinatorial and computational viewpoint.

We introduce and study a natural semidefinite generalization of bimatrix games (and of finite N-person games), in which the strategy spaces are not simplices but slices of the positive semidefinite cone; that is

$$\begin{aligned} \mathcal {X} \ = \ \{ X \in \mathcal {S}_m \, : \, X \succeq 0 \text { and } {{\,\textrm{tr}\,}}(X) = 1\} \quad \text {and} \quad \mathcal {Y} \ = \ \{ Y \in \mathcal {S}_n \, : \, Y \succeq 0 \text { and } {{\,\textrm{tr}\,}}(Y) = 1\} , \end{aligned}$$

where \(\mathcal {S}_m\) denotes the set of real symmetric \(m \times m\)-matrices, “\(\succeq 0\)” denotes positive semidefiniteness of a matrix and \({{\,\textrm{tr}\,}}\) abbreviates the trace. The payoff functions are

$$\begin{aligned} p_A(X, Y) \ = \ \sum _{i,j,k,l} X_{ij} A_{ijkl} Y_{kl} \; \text { and } \; p_B(X, Y) \ = \ \sum _{i,j,k,l} X_{ij} B_{ijkl} Y_{kl}, \end{aligned}$$

where A and B are tensors in the bisymmetric space \(\mathcal {S}_m \times \mathcal {S}_n\). That is, A satisfies the symmetry relations \(A_{ijkl} = A_{jikl}\) and \(A_{ijkl} = A_{ijlk}\); analogous symmetry relations hold for B. If X and Y are restricted to be diagonal matrices, the semidefinite games specialize to bimatrix games. Similarly, if \(A_{ijkl} = B_{ijkl} = 0\) whenever \(i \ne j\) or \(k \ne l\), then the off-diagonal entries of X and Y have no influence on the payoffs and the game reduces to a bimatrix game.
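A minimal sketch of evaluating such payoffs follows; the tensor below is a hypothetical example chosen only to satisfy the bisymmetry relations, not one from the text. It also verifies that the payoff can be computed slice-wise as \(\langle (\langle X, A_{\cdot \cdot kl} \rangle )_{k,l}, Y \rangle\).

```python
# Payoff of a semidefinite game: p_A(X, Y) = sum_{i,j,k,l} X_ij A_ijkl Y_kl.
# Hypothetical tensor A_ijkl = (i+j+1)(k+l+1) with 0-based indices; it
# satisfies the bisymmetry relations A_ijkl = A_jikl = A_ijlk.
m = n = 2
A = [[[[(i + j + 1) * (k + l + 1) for l in range(n)] for k in range(n)]
      for j in range(m)] for i in range(m)]

assert all(A[i][j][k][l] == A[j][i][k][l] == A[i][j][l][k]
           for i in range(m) for j in range(m)
           for k in range(n) for l in range(n))

def payoff(A, X, Y):
    return sum(X[i][j] * A[i][j][k][l] * Y[k][l]
               for i in range(m) for j in range(m)
               for k in range(n) for l in range(n))

X = [[0.5, 0.25], [0.25, 0.5]]   # trace-one PSD strategies, chosen by hand
Y = [[1.0, 0.0], [0.0, 0.0]]
print(payoff(A, X, Y))   # 3.0

# Slice-wise evaluation: M_kl = <X, A_..kl>, then p_A(X, Y) = <M, Y>.
M = [[sum(X[i][j] * A[i][j][k][l] for i in range(m) for j in range(m))
      for l in range(n)] for k in range(n)]
assert abs(sum(M[k][l] * Y[k][l] for k in range(n) for l in range(n))
           - payoff(A, X, Y)) < 1e-12
```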

The motivation for the model of semidefinite games comes from several origins.

  1.

    The Nash equilibria of bimatrix games are intrinsically connected to the combinatorics of polyhedra. Prominently, von Stengel (1999) used this connection and cyclic polytopes to construct a family of \(n \times n\)-bimatrix games whose number of equilibria grows as \(0.949 \cdot (1+\sqrt{2})^n/\sqrt{n}\) for \(n \rightarrow \infty\). In particular, this number grows faster than \(2^n-1\), which Quint and Shubik (1997) had conjectured to be an upper bound. As of today, it is still an open problem whether there are bimatrix games with more Nash equilibria than in von Stengel’s construction. Recently, for tropical bimatrix games, Allamigeon et al. (2023) showed that the upper bound of \(2^n-1\) equilibria, i.e., the Quint-Shubik bound, holds in the tropical setting. In many areas of optimization and geometry, the transition from linear-polyhedral settings to semidefinite settings has turned out to be fruitful and beneficial. In this transition, polyhedra (the feasible sets of linear programs) are carried over into the more general spectrahedra (the feasible sets of semidefinite programs), see, e.g., Blekherman et al. (2013). One of our main goals is to relate the Nash equilibria of semidefinite games to the geometry and combinatorics of spectrahedra.

  2.

    Various approaches to quantum games have been investigated, which combine game theoretic models with features of quantum computation and quantum information theory (see Berta et al. 2016; Guo et al. 2008; Khan et al. 2018; Landsburg 2011; Meyer 1999; Sikora and Varvitsiotis 2017). Quantum states are given by positive semidefinite Hermitian matrices with unit trace (see, e.g., Nielsen and Chuang 2002 or Prakash et al. 2018 for an optimization viewpoint). An essential characteristic of our model is the use of positive semidefinite real-symmetric matrices with unit trace (also known as the spectraplex) as mixed strategies. From this perspective, we can consider the semidefinite games as a real-quantum generalization of bimatrix games and of finite N-player games. Our class of games can also be seen as a subclass of the interactive quantum games studied in Gutoski and Watrous (2007), see also Bostanci and Watrous (2021). These games involve two players and a referee, and possibly many interactions between them. The overall action of each player, its so-called Choi representation, consists of a single Hermitian positive semidefinite matrix together with a finite number of linear constraints; for the zero-sum case, they derive a minimax theorem over the complex numbers. We refer the reader to Bostanci and Watrous (2021) for further details and complexity results and to Jain and Watrous (2009) for an algorithm to compute the equilibrium in the one-round zero-sum case.

  3.

    In recent times, the connection between games and polynomial techniques in optimization has received wide interest. Prominently, Parrilo (2006) and Stein et al. (2008, 2011) have developed sum-of-squares-based optimization solvers for game theory. Laraki and Lasserre (2012) have developed hierarchical moment relaxations; see also Ahmadi and Zhang (2021) for semidefinite relaxations and the Lasserre hierarchy to approximate Nash equilibria in bimatrix games. Recently, Nie and Tang (2020, 2023) have studied games with polynomial descriptions and convex generalized Nash equilibrium problems through polynomial optimization and moment-SOS relaxations. The semidefinite conditions correspond to a nice polynomial structure with underlying convexity. In a different direction, the geometry of Nash equilibria in our class of games establishes novel connections and questions between game theory and semialgebraic geometry. Here, recall that already the set of Nash equilibria of finite N-person games can be as complicated as arbitrary semialgebraic sets (Datta 2003). See Portakal and Sturmfels (2022) for recent work on the geometry of dependency equilibria.

Our contributions

  1.

    We develop a framework for approaching semidefinite games through the duality theory of semidefinite programming. As a consequence, the optimal strategies in semidefinite zero-sum games can be computed by a semidefinite program. Moreover, the sets of optimal strategies are spectrahedra (rather than merely projections of spectrahedra). See Theorem 4.1.

  2.

    We generalize Dantzig’s result on the almost equivalence of zero-sum bimatrix games and linear programs to the almost equivalence of semidefinite zero-sum games and semidefinite programs. See Theorem 5.3. For the special case of semidefinite programs with diagonal matrices, our result recovers Dantzig’s result.

  3.

    For general (i.e., not necessarily zero-sum) semidefinite games, we prove a spectrahedral characterization of Nash equilibria. This characterization generalizes the polyhedral characterizations of Nash equilibria in bimatrix games. See Theorem 6.2.

  4.

    We give constructions of families of semidefinite games with many Nash equilibria. In particular, these constructions of games on the strategy space \(\mathcal {S}_n \times \mathcal {S}_n\) have more connected components of Nash equilibria than the best known constructions of Nash equilibria in bimatrix games (due to von Stengel 1999). See Example 7.5.

The paper is structured as follows. After collecting some notation in Sect. 2, we introduce semidefinite games in Sect. 3 and view them within the more general class of separable games. Section 4 deals with computing the optimal strategies in semidefinite zero-sum games by semidefinite programming. Section 5 then proves the almost equivalence of zero-sum games and semidefinite programs. For general semidefinite games, Sect. 6 gives a spectrahedral characterization of the Nash equilibria. In Sect. 7, we present constructions with many Nash equilibria. Section 8 concludes the paper.

2 Notation

We denote by \(\mathcal {S}_n\) the set of real symmetric \(n \times n\)-matrices and by \(\mathcal {S}_n^+\) the subset of matrices in \(\mathcal {S}_n\) which are positive semidefinite. Further, denote by \(\langle \cdot , \cdot \rangle\) the Frobenius scalar product, \(\langle A, B \rangle := \sum _{i,j} a_{ij} b_{ij}\). \(I_n\) denotes the identity matrix.

An optimization problem of the form

$$\begin{aligned} \inf _{X \in \mathcal {S}_n} \left\{ \langle C, X \rangle \, : \ \langle A_i, X \rangle = b_i, \, 1 \le i \le m, \, X \succeq 0 \right\} \end{aligned}$$
(2.1)

with \(A_1, \ldots , A_m \in \mathcal {S}_n\), \(C \in \mathcal {S}_n\) and \(b \in \mathbb {R}^m\) is called a semidefinite program (SDP) in primal normal form, and a problem of the form

$$\begin{aligned} \sup _{Z \in \mathcal {S}_n, \, y \in \mathbb {R}^m} \left\{ b^T y\, : \, \sum _{i=1}^m y_i A_i + Z = C, \, Z \succeq 0 \right\} \end{aligned}$$
(2.2)

is called an SDP in dual normal form. We will make frequent use of the following duality results of semidefinite programming, see, e.g., Vandenberghe and Boyd (1996).

Theorem 2.1

  (a)

    (Weak duality.) Let X and (Z, y) be feasible points for (2.1) and (2.2), respectively. Then \(\langle C,X \rangle - b^T y \ \ge 0 \, .\)

  (b)

    (Strong duality.) If both (2.1) and (2.2) are strictly feasible with finite optimal values, then the optimal values coincide and they are attained in both problems.

A convex set \(C \subset \mathbb {R}^k\) is called a spectrahedron if it can be written in the form

$$\begin{aligned} C = \left\{ x \in \mathbb {R}^k \ : \ A_0 + \sum _{i=1}^k x_i A_i \succeq 0 \right\} , \end{aligned}$$
(2.3)

with \(A_0, \ldots ,A_k \in \mathcal {S}_n\) for some \(n \in \mathbb {N}\). Any representation of C of the form (2.3) is called an LMI (Linear Matrix Inequality) representation of C.

A spectrahedron in \(\mathbb {R}^k\) can also be described as the intersection of the cone \(\mathcal {S}_n^+\) with an affine subspace \(U = A_0 + L ,\) where \(A_0 \in \mathcal {S}_n\) and L is a linear subspace of \(\mathcal {S}_n\) of dimension k, say, given as \(L = {{\,\textrm{span}\,}}\{A_1, \ldots , A_k\}\) (see, e.g., Rostalski and Sturmfels 2010; Blekherman et al. 2013, Chapter 5). The sets of the form

$$\begin{aligned} C = \left\{ x \in \mathbb {R}^k \ : \ \exists y \in \mathbb {R}^l \; \, A_0 + \sum _{i=1}^k x_i A_i + \sum _{j=1}^l y_j B_j \succeq 0 \right\} , \end{aligned}$$
(2.4)

with symmetric matrices \(A_i, B_j\), are called spectrahedral shadows (see Scheiderer 2018). Any representation of the form (2.4) is called a semidefinite representation of C.
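A point belongs to a spectrahedron exactly when the matrix pencil \(A_0 + \sum _{i} x_i A_i\) is positive semidefinite at that point. The following sketch tests this for the standard LMI representation of the closed unit disk (a textbook example, not taken from this paper); for a symmetric \(2 \times 2\) matrix, positive semidefiniteness is equivalent to a nonnegative trace and determinant.

```python
# A symmetric 2x2 matrix is PSD iff its trace and determinant are nonnegative.
def psd2(M):
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return tr >= 0 and det >= 0

# LMI representation of the closed unit disk:
# [[1 + x1, x2], [x2, 1 - x1]] >= 0  iff  x1^2 + x2^2 <= 1.
def in_disk(x1, x2):
    M = [[1 + x1, x2], [x2, 1 - x1]]
    return psd2(M)

print(in_disk(0.6, 0.6))   # True  (0.36 + 0.36 <= 1)
print(in_disk(1.0, 1.0))   # False (1 + 1 > 1)
```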

3 Semidefinite games

3.1 Two-player and N-player semidefinite games

Most of our work is concerned with two-player semidefinite games. For simplicity, we work over the real numbers, while many considerations can also be carried over to the complex numbers. Let \(m, n \ge 1\), and let the strategy spaces \(\mathcal {X}\) and \(\mathcal {Y}\) be

$$\begin{aligned} \mathcal {X} \ = \ \{ X \in \mathcal {S}_m \, : \, X \succeq 0 \text { and } {{\,\textrm{tr}\,}}(X) = 1\} \quad \text {and} \quad \mathcal {Y} \ = \ \{ Y \in \mathcal {S}_n \, : \, Y \succeq 0 \text { and } {{\,\textrm{tr}\,}}(Y) = 1\} . \end{aligned}$$

To formulate the payoffs, it is convenient to denote by \((A_{\cdot \cdot kl})_{1 \le k,l \le n}\) the symmetric \(n \times n\)-matrix which results from a fourth-order tensor A by fixing the third index to k and the fourth index to l. Such two-dimensional sections of a tensor are also called slices. The payoff functions are

$$\begin{aligned} p_A(X, Y) \ = \ \sum _{i,j,k,l} X_{ij} A_{ijkl} Y_{kl} \ = \ \langle ( \langle X, A_{\cdot \cdot kl} \rangle )_{1 \le k,l \le n}, Y \rangle \\ \text { and } \; p_B(X, Y) \ = \ \sum _{i,j,k,l} X_{ij} B_{ijkl} Y_{kl} \ = \ \langle ( \langle X, B_{\cdot \cdot kl} \rangle )_{1 \le k,l \le n}, Y \rangle , \end{aligned}$$

where A and B are tensors in the bisymmetric space \(\mathcal {S}_m \times \mathcal {S}_n\). That is, A satisfies the symmetry relations \(A_{ijkl} = A_{jikl}\) and \(A_{ijkl} = A_{ijlk}\) and analogous symmetry relations hold for B. If \(A = -B\), then the game is called a semidefinite zero-sum game.

For the N-player version, with strategy spaces

$$\begin{aligned} \mathcal {X}^{(i)} = \{X \in \mathcal {S}_{m_i} \, : \, X \succeq 0 \text { and } {{\,\textrm{tr}\,}}(X) = 1\}, \quad \text {for } 1 \le i \le N, \end{aligned}$$

let \(A^{(1)}, \ldots , A^{(N)} \in \mathcal {S}_{m_1} \times \cdots \times \mathcal {S}_{m_N}\). If \(X=(X^{(1)}, \ldots , X^{(N)})\), then the payoff function for the k-th player is

$$\begin{aligned} p_k(X^{(1)}, \ldots , X^{(N)}) \ = \ \sum _{i_1,j_1=1}^{m_1} \cdots \sum _{i_N,j_N=1}^{m_N} A^{(k)}_{(i_1,j_1), \ldots , (i_N,j_N)} X^{(1)}_{(i_1,j_1)} \cdots X^{(N)}_{(i_N,j_N)} . \end{aligned}$$

3.2 Separable games

Stein et al. (2008) have introduced the class of separable games. An N-player separable game consists of pure strategy sets \(C_1, \ldots , C_N\), which are non-empty compact metric spaces, and the payoff functions \(p_k: C \rightarrow \mathbb {R}\). The latter are of the form

$$\begin{aligned} p_k(s) \ = \ \sum _{j_1=1}^{m_1} \cdots \sum _{j_N=1}^{m_N} a_k^{j_1 \cdots j_N} f_1^{j_1}(s_1) \cdots f_N^{j_N}(s_N), \end{aligned}$$

where \(C:= \prod _{k=1}^N C_k\), \(a_k^{j_1 \cdots j_N} \in \mathbb {R}\), and the functions \(f_k^{j_k}: C_k \rightarrow \mathbb {R}\) are continuous, for \(k \in \{1, \ldots , N\}\) and \(1 \le j_k \le m_k\).

Semidefinite games are special cases of separable games. We can see this relation from two viewpoints. From a first viewpoint, let \(C_k\) be the matrices in \(\mathcal {S}_{m_k}^+\) with trace 1 and set

$$\begin{aligned} f_t^{(r,s)}(X^{(t)}) \ = \ X^{(t)}_{rs}, \qquad a_k^{(i_1,j_1),\ldots ,(i_N,j_N)} \ = \ (A^{(k)})_{(i_1,j_1),\ldots ,(i_N,j_N)}. \end{aligned}$$

Then, the payoff functions become

$$\begin{aligned} p_k(X^{(1)}, \ldots , X^{(N)}) &= \sum _{i_1,j_1=1}^{m_1} \cdots \sum _{i_N,j_N=1}^{m_N} a_k^{(i_1,j_1), \ldots , (i_N,j_N)} f_1^{(i_1,j_1)}(X^{(1)}) \cdots f_N^{(i_N,j_N)}(X^{(N)}) \\ &= \sum _{i_1,j_1=1}^{m_1} \cdots \sum _{i_N,j_N=1}^{m_N} A^{(k)}_{(i_1,j_1), \ldots , (i_N,j_N)} X^{(1)}_{(i_1,j_1)} \cdots X^{(N)}_{(i_N,j_N)}. \end{aligned}$$

This yields the setup of semidefinite games as introduced before.

The set of mixed strategies \(\Delta _k\) of the k-th player is defined as the space of Borel probability measures \(\sigma _k\) over \(C_k\). A mixed strategy profile \(\sigma\) is a Nash equilibrium if it satisfies

$$\begin{aligned} p_k(\tau _k, \sigma _{-k}) \le p_k(\sigma ), \quad \text {for all } \tau _k \in \Delta _k \text { and } k \in \{1, \ldots , N\}, \end{aligned}$$

where \(\sigma _{-k}\) denotes the mixed strategies of all players except player k.

In this setting, the relation of our model to the mixed strategies of separable games does not yield any new insight, since taking the Borel measures over the convex set \(C_k\) does not give new strategies.

There is a second viewpoint, which better captures the role of the pure strategies. Since every point in the positive semidefinite cone is a convex combination of positive semidefinite rank-1 matrices, we can also define the set of pure strategies \(C_k\) as the set of matrices in \(\mathcal {S}_{m_k}^+\) which have trace 1 and rank 1. Then, by a Carathéodory-type argument in Stein et al. (2008, Corollary 2.10), every separable game has a Nash equilibrium in which player k mixes among at most \(\dim \mathcal {S}_{m_k}+1 = \left( {\begin{array}{c}m_k+1\\ 2\end{array}}\right) +1\) pure strategies. In contrast to finite N-player games, the decomposition of a mixed strategy (such as the one in a Nash equilibrium) in terms of the pure strategies is not unique. Example 7.3 will illustrate this.
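As an illustration of this non-uniqueness (with matrices chosen by us, not from the text): already the maximally mixed \(2 \times 2\) strategy \(\tfrac{1}{2} I_2\) decomposes into rank-one, trace-one pure strategies in more than one way.

```python
import math

def rank1(v):
    # v v^T for a unit vector v: a rank-one, trace-one pure strategy
    return [[v[i] * v[j] for j in range(len(v))] for i in range(len(v))]

def combine(coeffs, mats):
    # convex combination of matrices
    n = len(mats[0])
    return [[sum(c * M[i][j] for c, M in zip(coeffs, mats)) for j in range(n)]
            for i in range(n)]

# Two different decompositions of X = I/2 into pure strategies:
e1, e2 = [1.0, 0.0], [0.0, 1.0]
u = [1 / math.sqrt(2), 1 / math.sqrt(2)]
v = [1 / math.sqrt(2), -1 / math.sqrt(2)]

X1 = combine([0.5, 0.5], [rank1(e1), rank1(e2)])
X2 = combine([0.5, 0.5], [rank1(u), rank1(v)])
print(X1)   # [[0.5, 0.0], [0.0, 0.5]]
# X2 equals X1 up to floating-point error, with different pure strategies.
```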

4 Semidefinite zero-sum games

In this section, we consider semidefinite zero-sum games. The payoff tensors are given by A and \(B:=-A\). Hence, the second player wants to minimize the payoff of the first player, \(p_A\). By the classical minimax theorem for bilinear functions over compact convex sets (Dresher et al. 1950; von Neumann 1945) (see also Dresher and Karlin 1953), optimal strategies exist in the zero-sum case. We show that the sets of optimal strategies are spectrahedra and reveal the semialgebraic geometry of semidefinite zero-sum games.

Theorem 4.1

Let \(G=(A,B)\) be a semidefinite zero-sum game. Then, the set of optimal strategies of each player is the set of optimal solutions of a semidefinite program. Moreover, each set of optimal strategies is a spectrahedron.

The value V of the game is defined through the minimax relation

$$\begin{aligned} \max _{X \in \mathcal {X}} \min _{Y \in \mathcal {Y}} \sum _{i,j,k,l} X_{ij} A_{ijkl} Y_{kl} \ = \ V \ = \ \min _{Y \in \mathcal {Y}} \max _{X \in \mathcal {X}} \sum _{i,j,k,l} X_{ij} A_{ijkl} Y_{kl}. \end{aligned}$$

The following lemma records that zero-sum matrix games can be embedded into semidefinite zero-sum games.

Lemma 4.2

For a given zero-sum matrix game G with payoff matrix \(A = (a_{ij}) \in \mathbb {R}^{m \times n}\), let \(G'\) be the semidefinite zero-sum game on \(\mathcal {S}_m \times \mathcal {S}_n\)-matrices and payoff tensor

$$\begin{aligned} \begin{array}{rcll} A_{iikk} & = & a_{ik} & \text {for } 1 \le i \le m, \; 1 \le k \le n, \\ A_{ijkl} & = & 0 & \text {for } i \ne j \text { or } k \ne l. \end{array} \end{aligned}$$

Then a pair \((x,y) \in \Delta _1 \times \Delta _2\) is a pair of optimal strategies for G if and only if there exists a pair of optimal strategies \((X, Y)\) for \(G'\) with

$$\begin{aligned} X_{ii} = x_{i}, \; 1 \le i \le m, \; \text { and } \; Y_{kk} = y_{k}, \; 1 \le k \le n. \end{aligned}$$
(4.1)

Proof

For any strategy pair \((X,Y) \in \mathcal {X}\times \mathcal {Y}\) with (4.1), the payoff in \(G'\) is

$$\begin{aligned} \sum _{i,j,k,l} X_{ij} A_{ijkl} Y_{kl} \ = \ \sum _{i,k} X_{ii} A_{iikk} Y_{kk} \ = \ \sum _{i,k} x_i a_{ik} y_k, \end{aligned}$$

which coincides with the payoff in G for the strategy pair \((x, y)\). \(\square\)

As a consequence of Lemma 4.2, any oracle to solve semidefinite zero-sum games can be used to solve zero-sum matrix games. Namely, construct the semidefinite zero-sum game \(G'\) described in Lemma 4.2 and let \(X^* \in \mathcal {S}_m^+\) and \(Y^* \in \mathcal {S}_n^+\) be the optimal strategies provided by the oracle. Let \(x^*\) and \(y^*\) be the vectors of diagonal elements of \(X^*\) and \(Y^*\). Since \(X^*\) and \(Y^*\) are positive semidefinite, the vectors \(x^*\) and \(y^*\) are nonnegative, and due to \({{\,\textrm{tr}\,}}(X^*) = {{\,\textrm{tr}\,}}(Y^*) = 1\) we have \(\sum _{i=1}^m x_i^* = \sum _{j=1}^n y_j^* = 1\). Since \(A_{ijkl} = 0\) for \(i \ne j\) or \(k \ne l\), the off-diagonal elements in any strategy of the semidefinite game \(G'\) do not affect the payoffs. Hence, \(x^*\) and \(y^*\) are optimal strategies for the zero-sum matrix game.
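The reduction just described can be sketched as follows; the strategies X and Y below are arbitrary trace-one positive semidefinite matrices (chosen by us for illustration), and the check confirms that their off-diagonal entries do not affect the payoff.

```python
# Embed a zero-sum matrix game into a semidefinite game (Lemma 4.2):
# A'_{iikk} = a_{ik}, all other tensor entries zero.
def embed(a):
    m, n = len(a), len(a[0])
    A = [[[[0.0] * n for _ in range(n)] for _ in range(m)] for _ in range(m)]
    for i in range(m):
        for k in range(n):
            A[i][i][k][k] = a[i][k]
    return A

def payoff(A, X, Y):
    m, n = len(X), len(Y)
    return sum(X[i][j] * A[i][j][k][l] * Y[k][l]
               for i in range(m) for j in range(m)
               for k in range(n) for l in range(n))

a = [[0.0, -1.0], [1.0, 0.0]]      # the "Plus one" payoff matrix
A = embed(a)
X = [[0.25, 0.1], [0.1, 0.75]]     # trace-one PSD strategies with
Y = [[0.5, -0.2], [-0.2, 0.5]]     # arbitrary off-diagonal entries
x, y = [X[0][0], X[1][1]], [Y[0][0], Y[1][1]]
bimatrix = sum(x[i] * a[i][k] * y[k] for i in range(2) for k in range(2))
print(payoff(A, X, Y), bimatrix)   # 0.25 0.25 -- off-diagonals do not matter
```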

Lemma 4.3

Let G be a semidefinite zero-sum game on \(\mathcal {S}_n \times \mathcal {S}_n\). If the payoff tensor satisfies

$$\begin{aligned} A_{ijkl} = - A_{klij} \quad \text {for all } i,j,k,l, \end{aligned}$$

then G has value 0.

In the proof, we employ a simple symmetry consideration.

Proof

Let V denote the value of G. Then, there exists an \(X \in \mathcal {X}\) such that for all \(Y \in \mathcal {Y}\), we have \(\sum _{i,j,k,l} X_{ij} A_{ijkl} Y_{kl} \ \ge \ V.\) In particular, this implies

$$\begin{aligned} \sum _{i,j,k,l} X_{ij} A_{ijkl} X_{kl} \ \ge \ V. \end{aligned}$$
(4.2)

If we rearrange the order of the summation and use the precondition, then

$$\begin{aligned} \sum _{i,j,k,l} X_{ij} A_{klij} X_{kl} \ = \ \sum _{i,j,k,l} - X_{ij} A_{ijkl} X_{kl} \ \ge \ V. \end{aligned}$$
(4.3)

Adding (4.2) and (4.3) yields \(V \le 0\). Analogously, there exists some \(Y \in \mathcal {Y}\) such that for all \(X \in \mathcal {X}\), we have \(\sum _{i,j,k,l} X_{ij} A_{ijkl} Y_{kl} \ \le \ V.\) Arguing similarly as before, we can deduce \(V \ge 0\). Altogether, this gives \(V = 0\). \(\square\)
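The cancellation in this proof is easy to verify numerically. The sketch below uses the hypothetical tensor \(A_{ijkl} = (i+j) - (k+l)\), which satisfies the precondition of Lemma 4.3, and checks that the quadratic form vanishes for a sample symmetric X.

```python
# Numerical check of the symmetry argument in Lemma 4.3 for the tensor
# A_ijkl = (i + j) - (k + l), indices in {1, 2}: it satisfies A_ijkl = -A_klij.
idx = [1, 2]
A = {(i, j, k, l): (i + j) - (k + l)
     for i in idx for j in idx for k in idx for l in idx}

assert all(A[i, j, k, l] == -A[k, l, i, j]
           for i in idx for j in idx for k in idx for l in idx)

def quad(X):
    # sum_{i,j,k,l} X_ij A_ijkl X_kl, X given as a dict on 1-based indices
    return sum(X[i, j] * A[i, j, k, l] * X[k, l]
               for i in idx for j in idx for k in idx for l in idx)

# For any symmetric X the quadratic form vanishes, as in the proof
# (dyadic entries are used so the cancellation is exact in floats):
X = {(1, 1): 0.25, (1, 2): 0.25, (2, 1): 0.25, (2, 2): 0.75}
print(quad(X))   # 0.0
```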

We characterize which a-priori-strategy player 1 will play if her strategy is revealed to player 2 (max-min strategy). In the following lemma, the symmetric \(n \times n\)-matrix T plays the role of a slack matrix.

Lemma 4.4

Let (AB) be a semidefinite zero-sum game. As an a-priori-strategy, player 1 plays an optimal solution X of the SDP

$$\begin{aligned} \max _{X,T \succeq 0, \, v_1 \in \mathbb {R}} \left\{ v_1 \, : \, v_1 I_n + T = (\langle X, A_{\cdot \cdot kl} \rangle )_{1 \le k,l \le n} , \, {{\,\textrm{tr}\,}}(X) = 1 \right\} . \end{aligned}$$
(4.4)

The optimal value of this optimization problem is attained.

Proof

For an a priori known strategy of player 1, player 2 will play a best response, i.e., an optimal solution of the problem

$$\begin{aligned} & \min _Y \{ p_A(X,Y) \, : \, {{\,\textrm{tr}\,}}(Y) = 1, \, Y \succeq 0\} \\ & \quad = \ \min _Y \{ \langle (\langle X, A_{\cdot \cdot kl} \rangle )_{1 \le k,l \le n}, Y \rangle \, : \, {{\,\textrm{tr}\,}}(Y) = 1, \, Y \succeq 0\} . \end{aligned}$$
(4.5)

In what follows, we see that the optimal value of the minimization problem is attained and that strong duality holds. As an a-priori-strategy, player 1 uses an optimal solution of

$$\begin{aligned} \max _{X \succeq 0, \, {{\,\textrm{tr}\,}}(X) = 1} \min _Y \{ \langle ( \langle X, A_{\cdot \cdot kl} \rangle )_{1 \le k,l \le n}, Y \rangle \ : \ {{\,\textrm{tr}\,}}(Y) = 1, \, Y \succeq 0\} \, . \end{aligned}$$

We write the inner minimization problem of the minmax problem in terms of the dual SDP. This gives

$$\begin{aligned} \min _Y \{ \langle ( \langle X,A_{\cdot \cdot kl} \rangle )_{1 \le k,l \le n}, Y \rangle \, : \, {{\,\textrm{tr}\,}}(Y) = 1, \, Y \succeq 0\}, \end{aligned}$$
(4.6)
$$\begin{aligned} \quad = \ \max _{T, \, v_1} \{ 1 \cdot v_1 \, : \, v_1 I_n + T = (\langle X,A_{\cdot \cdot kl} \rangle )_{1 \le k,l \le n}, \, T \succeq 0, \, v_1 \in \mathbb {R}\} . \end{aligned}$$
(4.7)

Note that the scaled unit matrix \(\frac{1}{n} I_n\) is a strictly feasible point for the minimization problem (4.6). If we choose a negative \(v_1\) with sufficiently large absolute value, then the maximization problem (4.7) has a strictly feasible point as well. Hence, the duality theory for semidefinite programming implies that both the minimization and the maximization problems attain the optimal value. In connection with the outer maximization this gives the semidefinite program (4.4). \(\square\)

We remark that the expressions (4.6) and (4.7) can be interpreted as the smallest eigenvalue of the matrix \((\langle X,A_{\cdot \cdot kl} \rangle )_{1 \le k,l \le n}\) (see, e.g., Vandenberghe and Boyd 1996). Further note that the SDP (4.4) is not quite in one of the normal forms, see Lemma 4.7 below.
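This eigenvalue interpretation can be sketched numerically; the slice matrix M below is a hypothetical \(M(X) = (\langle X, A_{\cdot \cdot kl} \rangle )_{k,l}\) (not derived from a particular game in the text), and the closed-form eigenvalues of a symmetric \(2 \times 2\) matrix replace a general eigenvalue routine.

```python
import math

# For fixed X, player 1's guaranteed payoff min_Y <M(X), Y> over trace-one
# PSD matrices Y equals the smallest eigenvalue of M(X).
def lambda_min(M):
    # closed-form smallest eigenvalue of a symmetric 2x2 matrix
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return (tr - math.sqrt(tr * tr - 4 * det)) / 2

M = [[1.0, 0.0], [0.0, -1.0]]   # hypothetical slice matrix M(X)
val = lambda_min(M)             # -1.0

# Any feasible Y yields at least this payoff; e.g. Y = y y^T with unit y:
y = [3 / 5, 4 / 5]
p = sum(M[k][l] * y[k] * y[l] for k in range(2) for l in range(2))
print(val, p)                   # -1.0 and approximately -0.28
```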

Next, we characterize the a-priori-strategies of player 2 in terms of a minimization problem to facilitate the duality reasoning. Similar to Lemma 4.4, the symmetric \(n \times n\)-matrix S serves as a slack matrix.

Lemma 4.5

Let (AB) be a semidefinite zero-sum game. As an a-priori-strategy, player 2 plays an optimal solution Y of the SDP

$$\begin{aligned} \min _{Y, S \succeq 0, \, u_1 \in \mathbb {R}} \{ - u_1 \, : \, u_1 I_n + S = (\langle -A_{ij\cdot \cdot }, Y \rangle )_{1 \le i,j \le n}, \, {{\,\textrm{tr}\,}}(Y) = 1\} \, . \end{aligned}$$
(4.8)

The optimal value of this optimization problem is attained.

Proof

The proof is similar to Lemma 4.4. As an a-priori-strategy, player 2 uses an optimal solution of

$$\begin{aligned} \min _{Y \succeq 0, \, {{\,\textrm{tr}\,}}(Y) = 1} \max _X \{ \langle (\langle A_{ij\cdot \cdot }, Y\rangle )_{1 \le i,j \le n}, X \rangle \ : \ {{\,\textrm{tr}\,}}(X) = 1, \, X \succeq 0\} \, . \end{aligned}$$

Since both the inner optimization problem and its dual are strictly feasible, strong duality holds. The statement then follows from the duality relation

$$\begin{aligned} \max _{X \succeq 0} \{ \langle (\langle A_{ij\cdot \cdot }, Y\rangle )_{i,j}, X \rangle \ : \ \langle I_n, X \rangle = 1 \} \ = \ \min _{S \succeq 0, \, u_1 \in \mathbb {R}} \{ - u_1 \, : \, u_1 I_n + S = (\langle -A_{ij\cdot \cdot }, Y \rangle )_{i,j} \}. \end{aligned}$$

\(\square\)

Remark 4.6

Similar to the case of zero-sum matrix games, if the coefficients of the payoff tensor are chosen sufficiently generically, then the SDPs (4.4) and (4.8) have a unique optimal solution and, as a consequence, each player has a unique optimal strategy. In contrast to the set of optimal strategies of zero-sum matrix games, it is possible that the set of optimal strategies of a semidefinite game is non-polyhedral. For example, already in the trivial semidefinite game with the zero matrix A, the value is 0 and the set of optimal strategies of player 1 and player 2 are the full sets \(\mathcal {X}\) and \(\mathcal {Y}\), respectively.

Now we show that the sets of optimal strategies of the two players can be regarded as the optimal solutions of a pair of dual SDPs.

Lemma 4.7

The SDPs (4.4) and (4.8) are dual to each other.

Proof

We show that the dual of (4.8) coincides with (4.4). Setting

$$\begin{aligned} Y' \ = \ {{\,\textrm{diag}\,}}(Y,S,u_1^+, u_1^-), \end{aligned}$$

i.e., the block diagonal matrix with blocks Y, S, \(u_1^+\) and \(u_1^-\) (of sizes \(n \times n\), \(n \times n\), \(1 \times 1\), \(1 \times 1\)), the problem (4.8) can be written as

$$\begin{aligned} \begin{array}{rcl} {\min \ \big \langle {{\,\textrm{diag}\,}}(0,0,-1,1), Y' \big \rangle } \\ \delta _{ij} (u_1^+ - u_1^-) + s_{ij} + \langle A_{ij\cdot \cdot }, Y \rangle & = & 0 \quad \text {for all } 1 \le i \le j \le n \,, \\ y_{11} + \cdots + y_{nn} & = & 1 \,, \\ Y' & \succeq & 0 \,. \end{array} \end{aligned}$$
(4.9)

We claim that the dual of (4.9) coincides with (4.4). Denote by \(E_{ij}\) the symmetric matrix whose entry in row i and column j is 1 if \(i=j\), and whose entries in positions (i, j) and (j, i) are 1/2 if \(i \ne j\), with all other entries zero. The dual is

$$\begin{aligned} \begin{array}{l} \max _{S',W, w'} \ w' \\ \quad \sum w_{ij} {{\,\textrm{diag}\,}}( (A_{ij\cdot \cdot }), E_{ij}, \delta _{ij}, -\delta _{ij} ) + w' {{\,\textrm{diag}\,}}(I_n,0,0,0) + S' = {{\,\textrm{diag}\,}}(0,0,-1,1), \\ \quad S' \succeq 0, \; W \in \mathcal {S}_n, \, w' \in \mathbb {R}\, . \end{array} \end{aligned}$$
(4.10)

Observe that \(\sum _{i,j} w_{ij} A_{ijkl} = \langle W, A_{\cdot \cdot kl}\rangle\). The second block in the constraint matrices gives that W is minus the second block of \(S'\), which describes a positive semidefiniteness condition on \(-W\). Then the equations involving \(\delta _{ij}\) and \(-\delta _{ij}\) ensure that \(\sum _{i=1}^n w_{ii} = -1\); namely, in (4.10), each of the two corresponding equations contains a non-negative slack variable. The combination of these equations shows that both of these slack variables must be zero. Altogether, this gives

$$\begin{aligned} \max _{S^*, -W \succeq 0, w' \in \mathbb {R}} \left\{ w' \, : \, w' I_n + S^* = (\langle -W, A_{\cdot \cdot kl} \rangle )_{1 \le k,l \le n} , \, {{\,\textrm{tr}\,}}(W) = -1 \right\} . \end{aligned}$$
(4.11)

By identifying W with \(-X\), we recognize this as the SDP in (4.4). \(\square\)

Now we provide the proof of Theorem 4.1.

Proof

Player 1 can achieve at least the gain provided by the a-priori-strategy (4.4) from Lemma 4.4, and player 2 can bound her loss by the a-priori-strategy (4.8) from Lemma 4.5. By Lemma 4.7, both strategies are dual to each other, so that their optimal values coincide with the value of the game. In the coordinates \((X,T,v_1)\) and \((Y,S,u_1)\), the feasible regions of those optimization problems are spectrahedra, and the sets of optimal strategies are the sets of optimal solutions of the SDPs. Intersecting the feasibility spectrahedron, say, for player 1, with the hyperplane corresponding to the optimal value of the objective function shows the first part of the theorem. Precisely, we obtain the sets

$$\begin{aligned} & \{ X, T \succeq 0 \, : \, V I_n + T = (\langle X, A_{\cdot \cdot kl} \rangle )_{1 \le k,l \le n}, \, {{\,\textrm{tr}\,}}(X) = 1 \} \\ & \text {and} \quad \{ Y, S \succeq 0 \, : \, - V I_n + S = (\langle -A_{ij\cdot \cdot }, Y \rangle )_{1 \le i,j \le n}, \, {{\,\textrm{tr}\,}}(Y) = 1\} , \end{aligned}$$

where V is the value of the game. By projecting the spectrahedra for the first player on the X-variables, we can deduce that in the space \(\mathcal {X}\) the set of optimal strategies of player 1 is the projection of a spectrahedron. Similarly, this is true for player 2.

Indeed, the set of optimal strategies of each player is not only the projection of a spectrahedron, but also a spectrahedron itself. Namely, taking the optimal value of \(v_1\) (i.e., the value V of the game) corresponds geometrically to passing to the intersection with a separating hyperplane and, because the other additional variables (“T”) just form a slack matrix, we see that the set of optimal strategies for the first player is a spectrahedron. In particular, the equation \(V I_n + T = (\langle X, A_{\cdot \cdot kl} \rangle )_{1 \le k,l \le n}\) gives

$$\begin{aligned} -V I_n + (\langle X, A_{\cdot \cdot kl} \rangle )_{1 \le k,l \le n} = T \succeq 0. \end{aligned}$$

Similar arguments apply for the second player. Precisely, the spectrahedron for the first player lives in the space \(\mathcal {S}_n\), whose variables we denote by the symmetric matrix variable X. The inequalities and equations for X are

$$\begin{aligned} \begin{array}{rcl} {{\,\textrm{tr}\,}}(X) - 1 & = & 0, \\ -V I_n + (\langle X, A_{\cdot \cdot kl} \rangle )_{1 \le k,l \le n} & \succeq & 0, \end{array} \end{aligned}$$

where we can write the equation as two inequalities and where we can combine all the scalar inequalities and matrix inequalities into one block matrix inequality. \(\square\)

Corollary 4.8

Explicit LMI descriptions of the sets \({{\,\mathrm{\mathcal {O}}\,}}_1\) and \({{\,\mathrm{\mathcal {O}}\,}}_2\) of optimal strategies of the two players are

$$\begin{aligned} {{\,\mathrm{\mathcal {O}}\,}}_1 &:= \{X \in \mathcal {S}_n \, : \, {{\,\textrm{diag}\,}}(X, \, -V I_n + (\langle X, A_{\cdot \cdot kl} \rangle )_{1 \le k,l \le n}, \, {{\,\textrm{tr}\,}}(X) - 1, \, 1 - {{\,\textrm{tr}\,}}(X)) \succeq 0 \}, \\ {{\,\mathrm{\mathcal {O}}\,}}_2 &:= \{Y \in \mathcal {S}_n \, : \, {{\,\textrm{diag}\,}}(Y, \, V I_n + (\langle -A_{ij\cdot \cdot }, Y \rangle )_{1 \le i,j \le n}, \, {{\,\textrm{tr}\,}}(Y)-1, \, 1 - {{\,\textrm{tr}\,}}(Y)) \succeq 0 \} , \end{aligned}$$

where V is the value of the game.

Note that \(\mathcal {O}_1\) and \(\mathcal {O}_2\) are spectrahedra in the space of symmetric matrices.

Example 4.9

We consider a semidefinite generalization of a \(2 \times 2\)-zero-sum matrix game known as “Plus one” (see, for example, Karlin and Peres 2017). The payoff matrix of that matrix game is \(A = \left( \begin{array}{cc} 0 & -1 \\ 1 & 0 \end{array} \right)\) and for each player, the second strategy is dominant. Let the semidefinite zero-sum game be defined by

$$\begin{aligned} A_{ijkl} \ = \ {\left\{ \begin{array}{ll} 1 & \text {if } \max \{i,j\} > \max \{k,l\}, \\ -1 & \text {if } \max \{i,j\} < \max \{k,l\}, \\ 0 & \text {else} \end{array}\right. } \end{aligned}$$

for \(i,j,k,l \in \{1, 2\}\). If both players play only diagonal strategies (i.e., X and Y are diagonal matrices), then the payoffs correspond to the payoffs of the underlying matrix game. By Lemma 4.3, the value of the semidefinite game is 0. To determine an optimal strategy for player 1, we consider those points in the feasible set of the SDP (4.4) in Lemma 4.4 that have \(v_1 = 0\). Hence, we are looking for matrices \(T, X \succeq 0\) such that

$$\begin{aligned} T \ = \ \left( \begin{array}{cc} \langle X,A_{\cdot \cdot 11} \rangle &{}\quad \langle X,A_{\cdot \cdot 12} \rangle \\ \langle X,A_{\cdot \cdot 21} \rangle &{}\quad \langle X,A_{\cdot \cdot 22} \rangle \end{array} \right) \ = \ \left( \begin{array}{cc} x_{12} + x_{21} + x_{22} &{}\quad - x_{11} \\ - x_{11} &{}\quad - x_{11} \end{array} \right) . \end{aligned}$$

Since \(X \succeq 0\) implies \(x_{11} \ge 0\) and \(T \succeq 0\) implies \(-x_{11} \ge 0\), we obtain \(x_{11} = 0\). Hence, \(x_{22} = 1 - x_{11} = 1\). Further, \(X \succeq 0\) yields \(x_{12} = x_{21} = 0\). Therefore, the optimal strategy of player 1 is \(\left( \begin{array}{cc} 0 &{}\quad 0 \\ 0 &{}\quad 1 \end{array} \right)\), and, similarly, the optimal strategy of player 2 is the same one.
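The computation in Example 4.9 can be replicated numerically. The following Python sketch (function names and the random spot-check are ours) rebuilds the payoff tensor from the max-rule above, verifies that the matrix T is positive semidefinite for the claimed optimal strategy, and spot-checks the guaranteed payoff.

```python
# Numerical spot-check of Example 4.9; tensor entries follow the max-rule above,
# function names and the random sampling are our own additions.
import numpy as np

def A(i, j, k, l):
    """Payoff tensor of the semidefinite "Plus one" game (1-based indices)."""
    if max(i, j) > max(k, l):
        return 1
    if max(i, j) < max(k, l):
        return -1
    return 0

def payoff(X, Y):
    """p_A(X, Y) = sum_{i,j,k,l} A_{ijkl} X_{ij} Y_{kl}."""
    return sum(A(i + 1, j + 1, k + 1, l + 1) * X[i, j] * Y[k, l]
               for i in range(2) for j in range(2)
               for k in range(2) for l in range(2))

X_opt = np.array([[0.0, 0.0], [0.0, 1.0]])          # claimed optimal strategy
T = np.array([[sum(A(i + 1, j + 1, k + 1, l + 1) * X_opt[i, j]
                   for i in range(2) for j in range(2))
               for l in range(2)] for k in range(2)])
assert np.all(np.linalg.eigvalsh(T) >= -1e-9)       # T is PSD

# against any strategy Y, player 1 then receives <T, Y> >= 0; spot-check
rng = np.random.default_rng(0)
for _ in range(100):
    M = rng.standard_normal((2, 2))
    M = M @ M.T
    Y = M / np.trace(M)                             # random PSD matrix, trace 1
    assert payoff(X_opt, Y) >= -1e-9
```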

Example 4.10

Consider a slightly different version of Example 4.9, in which the optimal strategies are not diagonal strategies. For \(i,j,k,l \in \{1, 2\}\), let

$$\begin{aligned} A_{ijkl} \ = \ (i+j) - (k+l). \end{aligned}$$

By Lemma 4.3, the value of the game is 0. If both players play only diagonal strategies, then the payoffs coincide (up to a factor of 2, which is irrelevant for the optimal strategies) with the payoffs of the underlying zero-sum matrix game. Interestingly, the optimal strategies here are not diagonal strategies. As in Example 4.9, we determine an optimal strategy for player 1 by considering the feasible points of the SDP (4.4) with \(v_1 = 0\). Hence, we search for \(T, X \succeq 0\) such that

$$\begin{aligned} T \ = \ \left( \begin{array}{cc} x_{12} + x_{21} + 2 x_{22} &{}\quad - x_{11} + x_{22} \\ - x_{11} + x_{22} &{}\quad - 2 x_{11} - x_{12} - x_{21} \end{array} \right) . \end{aligned}$$

Since, using the symmetry \(x_{12} = x_{21}\), we have \(\det T \ = \ - (x_{11} + 2 x_{12} + x_{22})^2,\) we see that \(T \succeq 0\) implies \(x_{12} = - \frac{1}{2}(x_{11} + x_{22})\). Hence,

$$\begin{aligned} X = \left( \begin{array}{cc} x_{11} &{}\quad - \frac{1}{2}(x_{11} + x_{22}) \\ - \frac{1}{2}(x_{11} + x_{22}) &{}\quad x_{22} \end{array}\right) , \end{aligned}$$

so that \(X \succeq 0\) requires \(x_{11} x_{22} \ge x_{12}^2 = \frac{1}{4}(x_{11} + x_{22})^2\), which by the arithmetic–geometric mean inequality forces \(x_{11} = x_{22} = \frac{1}{2}\). Thus, the optimal strategy of player 1 is \(\frac{1}{2} \left( \begin{array}{cc} 1 &{}\quad - 1 \\ - 1 &{}\quad 1 \end{array} \right)\), and, similarly, the optimal strategy of player 2 is the same one.

Note that \(X = \left( \begin{array}{cc} 0 &{}\quad 0 \\ 0 &{}\quad 1 \end{array} \right)\) is not an optimal strategy for player 1, since, for example, for a given strategy Y of player 2, the payoff is \(2 y_{11} + 2 y_{12}.\) Specifically, the choice \(Y = \frac{1}{4} \left( \begin{array}{cc} 1 &{}\quad - \sqrt{3} \\ - \sqrt{3} &{}\quad 3 \end{array} \right)\) yields a payoff of \(\frac{1}{2}(1-\sqrt{3}) < 0\) for player 1.
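The claims of Example 4.10 can be confirmed by direct computation; in the following Python sketch, the variable names are ours and the matrices are taken from the example.

```python
# Direct check of Example 4.10: X_opt guarantees the value 0, while the
# diagonal strategy from Example 4.9 does not (names are ours).
import math

def payoff(X, Y):
    """p_A(X, Y) with A_{ijkl} = (i + j) - (k + l), 1-based indices."""
    return sum(((i + j) - (k + l)) * X[i - 1][j - 1] * Y[k - 1][l - 1]
               for i in (1, 2) for j in (1, 2) for k in (1, 2) for l in (1, 2))

X_opt = [[0.5, -0.5], [-0.5, 0.5]]                # optimal strategy of player 1
X_diag = [[0.0, 0.0], [0.0, 1.0]]                 # diagonal candidate
s = math.sqrt(3)
Y_bad = [[0.25, -0.25 * s], [-0.25 * s, 0.75]]    # the punishing strategy above

assert abs(payoff(X_opt, Y_bad)) < 1e-9           # X_opt achieves the value 0
assert abs(payoff(X_diag, Y_bad) - 0.5 * (1 - s)) < 1e-9   # payoff (1 - sqrt(3))/2 < 0
```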

5 The almost-equivalence of semidefinite zero-sum games and semidefinite programs

We give a semidefinite generalization of Dantzig’s almost equivalence of zero-sum matrix games and linear programming (Dantzig 1951), see also Adler (2013). Given an LP in the form

$$\begin{aligned} \min _x \{ c^Tx \, : \ Ax \ge b, \, x \ge 0 \} \end{aligned}$$

with \(A\in \mathbb {R}^{m\times n}\) and \(b\in \mathbb {R}^m\) and \(c \in \mathbb {R}^n\), Dantzig constructed a zero-sum matrix game with the payoff matrix

$$\begin{aligned} \left( \begin{array}{ccc} 0 &{}\quad A &{}\quad -b \\ -A^T &{}\quad 0 &{}\quad c \\ b^T &{}\quad -c^T &{}\quad 0 \end{array} \right) . \end{aligned}$$
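Dantzig's construction is purely mechanical. The following Python sketch (function name and example data ours) assembles the payoff matrix from LP data \((A, b, c)\) and checks that the resulting game is symmetric.

```python
# Dantzig's construction assembled in code: a sketch building the skew-symmetric
# payoff matrix from LP data (A, b, c); the example data is ours.
import numpy as np

def dantzig_matrix(A, b, c):
    """Return the (m + n + 1)-square payoff matrix of Dantzig's game."""
    m, n = A.shape
    M = np.zeros((m + n + 1, m + n + 1))
    M[:m, m:m + n] = A                  # block A
    M[m:m + n, :m] = -A.T               # block -A^T
    M[:m, -1] = -b                      # block -b
    M[-1, :m] = b                       # block b^T
    M[m:m + n, -1] = c                  # block c
    M[-1, m:m + n] = -c                 # block -c^T
    return M

A = np.array([[1.0, 2.0], [0.0, 1.0]])
b = np.array([1.0, 1.0])
c = np.array([3.0, 1.0])
M = dantzig_matrix(A, b, c)
assert np.allclose(M, -M.T)             # skew-symmetric, hence a symmetric game
```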

For the semidefinite generalization, the following variant of a duality statement is convenient, whose proof is given for the sake of completeness.

Lemma 5.1

For \(A_1, \ldots , A_m, C \in \mathcal {S}_n\) and \(b \in \mathbb {R}^m\), the following SDPs in the slightly modified normal forms constitute a primal-dual pair.

$$\begin{aligned} \inf _X \{ \langle C, X \rangle : \ \langle A_i, X \rangle \ge b_i, \, 1 \le i \le m, \, X \succeq 0 \} \end{aligned}$$
(5.1)
$$\begin{aligned} \text { and } \quad \sup _{y,S} \{ b^T y: \, \sum _{i=1}^m y_i A_i + S = C, \, y \in \mathbb {R}_+^m, \, S \succeq 0\}. \end{aligned}$$
(5.2)

We call them SDPs in modified primal and dual forms.

Proof

We derive these forms from the usual forms, in which the primal contains the relation “\(=\)” rather than “\(\ge\)” and in which the dual uses an unconstrained variable vector y rather than a non-negative one. Starting from the standard pair, we extend each matrix \(A_i\) to a symmetric \(2n \times 2n\)-matrix via a single additional non-zero element, namely \(-1\), in entry \((n+i,n+i)\) of the modified \(A_i\), \(1 \le i \le m\). Moreover, we formally embed C into a symmetric \(2n \times 2n\)-matrix. The dual (in the original sense) of the extended problem still uses a vector y of length m, but the modifications in the primal problem give the additional conditions \(y_i \ge 0\), \(1 \le i \le m\). This shows the desired modified forms of the primal-dual pair. \(\square\)

A semidefinite zero-sum game with \(\mathcal {X}=\mathcal {Y}\) is called symmetric if the payoff tensor A satisfies the skew-symmetry relation \(A_{ijkl} = - A_{klij}\). By Lemma 4.3, the value of a symmetric game with payoff tensor A on the strategy space \(\mathcal {S}_n \times \mathcal {S}_n\) is zero. Therefore, there exists a strategy \(\bar{X}\) of the first player such that

$$\begin{aligned} \langle ( \langle \bar{X}, A_{\cdot \cdot kl} \rangle )_{1 \le k,l \le n}, Y \rangle \ge 0 \quad \text {for all } Y \in \mathcal {S}_n \text { with } {{\,\textrm{tr}\,}}(Y) = 1, \ Y \succeq 0. \end{aligned}$$
(5.3)

Generalizing the notion in Adler (2013), we call such a strategy \(\bar{X}\) a solution of the symmetric game. The condition (5.3) states that the matrix \(( \langle \bar{X}, A_{\cdot \cdot kl} \rangle )_{1 \le k,l \le n}\) is contained in the dual cone of \(\mathcal {S}_n^+\). Since the cone \(\mathcal {S}_n^+\) of positive semidefinite matrices is self-dual, (5.3) translates to

$$\begin{aligned} (\langle \bar{X}, A_{\cdot \cdot kl} \rangle )_{1 \le k,l \le n} \succeq 0. \end{aligned}$$
(5.4)

It is useful to record the following specific version of a symmetric minimax theorem, which is a special case of the minimax theorem for convex games (Dresher and Karlin 1953) and of the minimax theorem for quantum games (Jain and Watrous 2009).

Lemma 5.2

(Minimax theorem for symmetric semidefinite zero-sum games) Let G be a symmetric semidefinite zero-sum game with payoff tensor A. Then there exists a solution strategy \(\bar{X}\), i.e., a matrix \(\bar{X} \in \mathcal {S}_n\) satisfying (5.4).

Proof

There exists a strategy \(\bar{X}\) satisfying (5.3). By the considerations before the theorem, we obtain (5.4). \(\square\)

Now we generalize the Dantzig construction. Given an SDP in the modified normal form of Lemma 5.1, we define the following semidefinite Dantzig game on \(\mathcal {S}_{n+m+1} \times \mathcal {S}_{n+m+1}\). The strategies of both players can be viewed as positive semidefinite block matrices \({{\,\textrm{diag}\,}}(X,y,t)\) with \(X \in \mathcal {S}_n^+\), \(y \in \mathbb {R}_+^m\), \(t \in \mathbb {R}_+\) and trace 1.

The payoff tensor Q is defined as follows. For \(1 \le k,l \le n\) and \(1 \le j \le m\), let

$$\begin{aligned} Q_{k,l,n+j,n+j} \ = \ - Q_{n+j,n+j,k,l} \ = \ (A_j)_{kl} \, . \end{aligned}$$

For \(1 \le k,l \le n\), let

$$\begin{aligned} Q _{n+m+1,n+m+1,k,l} \ = \ - Q_{k,l,n+m+1,n+m+1} \ = \ C_{kl} \, . \end{aligned}$$

For \(1 \le i \le m\), let

$$\begin{aligned} Q_{n+i,n+i,n+m+1,n+m+1} \ = \ -Q _{n+m+1,n+m+1,n+i,n+i} \ = \ b_i \, . \end{aligned}$$

All other entries in the payoff tensor Q are zero. Note that Q has a block structure: For every non-zero entry \(Q_{ijkl}\), we have either \(i,j \in \{1, \ldots , n\}\) or \(i,j \in \{n+1, \ldots , n+m\}\) or \(i=j=n+m+1\). An analogous property holds for kl.
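The tensor Q can be assembled mechanically from the SDP data. The following Python sketch (function name and test data ours; the data matches Example 5.4 below) builds Q and verifies the skew symmetry \(Q_{ijkl} = -Q_{klij}\), i.e., that the Dantzig game is symmetric.

```python
# The payoff tensor Q assembled from SDP data (A_1, ..., A_m, b, C); a sketch,
# with function name and test data ours.
import numpy as np

def dantzig_tensor(As, b, C):
    m, n = len(As), C.shape[0]
    N = n + m + 1
    Q = np.zeros((N, N, N, N))
    for j in range(m):                   # couple the X-block with y_j
        Q[:n, :n, n + j, n + j] = As[j]
        Q[n + j, n + j, :n, :n] = -As[j]
    Q[N - 1, N - 1, :n, :n] = C          # couple t with the X-block
    Q[:n, :n, N - 1, N - 1] = -C
    for i in range(m):                   # couple y_i with t
        Q[n + i, n + i, N - 1, N - 1] = b[i]
        Q[N - 1, N - 1, n + i, n + i] = -b[i]
    return Q

Q = dantzig_tensor([np.eye(2)], np.array([1.0]), 2.0 * np.eye(2))
# skew symmetry Q_{ijkl} = -Q_{klij}: the Dantzig game is symmetric
assert np.allclose(Q, -Q.transpose(2, 3, 0, 1))
```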

Let \({{\,\textrm{diag}\,}}(\bar{X}, \bar{y}, \bar{t})\) be a solution to this symmetric game. By the definition of a solution, we have

$$\begin{aligned} ( \langle {{\,\textrm{diag}\,}}(\bar{X}, \bar{y}, \bar{t}), Q_{\cdot \cdot kl} \rangle )_{1 \le k,l \le n + m+1} \ \succeq \ 0. \end{aligned}$$
(5.5)

Due to the block structure of the payoff tensor Q, the matrix on the left-hand side of (5.5) has a block structure as well. The upper left \(n \times n\)-matrix in (5.5) gives the condition \(- \sum _{j=1}^m \bar{y}_j A_j + \bar{t}C \succeq 0\). The square submatrix in (5.5) indexed by \(n+1, \ldots , n+m\) is a diagonal matrix and gives the conditions \(\langle A_i, \bar{X} \rangle - b_i \bar{t} \ge 0\), \(1 \le i \le m\). The lower right entry in (5.5) gives \(b^T \bar{y} - \langle C, \bar{X} \rangle \ge 0\). Hence, (5.5) is equivalent to the system

$$\begin{aligned} \langle A_i, \bar{X} \rangle - b_i \bar{t} & \ge 0, \quad 1 \le i \le m, \end{aligned}$$
(5.6)
$$\begin{aligned} - \sum _{j=1}^m \bar{y}_j A_j + \bar{t} C & \succeq 0, \end{aligned}$$
(5.7)
$$\begin{aligned} b^T \bar{y} - \langle C, \bar{X} \rangle & \ge 0, \end{aligned}$$
(5.8)

and in addition, we have the conditions defining a strategy,

$$\begin{aligned} \bar{y} \ge 0, \; \bar{X} \succeq 0, \; \bar{t} \ge 0 \text { and } \textbf{1}^T \bar{y} + {{\,\textrm{tr}\,}}(\bar{X}) + \bar{t} \ = \ 1. \end{aligned}$$

This allows us to state the following result on the almost equivalence of semidefinite zero-sum games and semidefinite programs. Recall that reducing the equilibrium problem in a semidefinite zero-sum game to semidefinite programming follows from Sect. 4.

Theorem 5.3

The following holds for the semidefinite Dantzig game:

  1. \(\bar{t} (b^T \bar{y} - \langle C, \bar{X} \rangle ) = 0\).

  2. If \(\bar{t} > 0\), then \(\bar{X} \bar{t}^{-1}\), \(\bar{y} \bar{t}^{-1}\) and some corresponding slack matrix \(\bar{S}\) are an optimal solution to the primal-dual SDP pair given in (5.1) and (5.2).

  3. If \(b^T \bar{y} - \langle C, \bar{X} \rangle > 0\), then the primal problem or the dual problem is infeasible.

The theorem ignores the case \(\bar{t} = 0\). In the special case of bimatrix games, that exception was already observed in Dantzig’s treatment (Dantzig 1951) and overcome by Adler (2013) and von Stengel (2022).

While the precondition in (3) looks like a statement of missing strong duality, note that \((\bar{X}, \bar{y})\) does not satisfy the constraints of the initially stated primal-dual SDP pair. If the initial SDP pair does not have an optimal primal-dual pair, then clearly case (2) can never hold. This differs from the LP case, where, say, in the case of finite optimal values, there always exists an optimal primal-dual pair, so that case (2) is never ruled out a priori in the same way. This qualitative difference is to be expected, because for a semidefinite zero-sum game, the corresponding primal and dual feasible regions have relative interior points and thus there exists an optimal primal-dual pair. So, the qualitative situation reflects that for semidefinite zero-sum games, the SDPs characterizing the optimal strategies are always well behaved.

By adding the precondition that the original pair of SDPs has primal-dual interior points, we come into the same situation that case (2) is not ruled out a priori.

Proof

Since \(\bar{X}\) and \(\bar{y}\) are feasible solutions of the SDPs in (5.6) and (5.7), the weak duality theorem for semidefinite programming implies

$$\begin{aligned} \bar{t} ( b^T \bar{y} - \langle C, \bar{X} \rangle ) \ \le \ 0. \end{aligned}$$

Since \(\bar{t} \ge 0\), we obtain \(\bar{t} = 0\) or \(b^T \bar{y} -\langle C, \bar{X} \rangle \le 0\). In the latter case, (5.8) implies \(b^T \bar{y} - \langle C, \bar{X} \rangle = 0\). Altogether, this gives \(\bar{t} (b^T \bar{y} - \langle C, \bar{X} \rangle ) = 0\).

For the second statement, let \(\bar{t} > 0\). Then \(\bar{X} \bar{t}^{-1}\), \(\bar{y} \bar{t}^{-1}\) and the corresponding slack matrix \(\bar{S}\) give a feasible point of the primal-dual SDP pair stated initially. Since \(b^T \bar{y} - \langle C, \bar{X} \rangle = 0\) and thus \(b^T (\bar{y}\bar{t}^{-1}) - \langle C, \bar{X} \bar{t}^{-1} \rangle = 0\), this feasible point is an optimal solution.

For the third statement, let \(b^T \bar{y} - \langle C, \bar{X} \rangle > 0\). Then statement (1) implies \(\bar{t} = 0\). Thus, \(\langle A_i, \bar{X} \rangle \ge 0\) for \(1 \le i \le m\) and \(\sum _{j=1}^m \bar{y}_j A_j \preceq 0\). Since \(b^T \bar{y} - \langle C, \bar{X} \rangle > 0\), we obtain \(b^T \bar{y} >0\) or \(\langle C, \bar{X} \rangle < 0\).

In the case \(\langle C, \bar{X} \rangle < 0\), assume that the originally stated primal (5.1) has a feasible solution \(X^{\Diamond }\). Then, for any \(\lambda \ge 0\), the point \(X^{\Diamond } + \lambda \bar{X}\) is a feasible solution as well. By considering \(\lambda \rightarrow \infty\), we see that (5.1) has optimal value \(-\infty\). By the weak duality theorem, the dual problem (5.2) cannot be feasible. In the case \(b^T \bar{y} >0\), similar arguments show that the primal problem is infeasible. \(\square\)

Note that in Theorem 5.3 it is not necessary to assume that the constraints of the SDP are linearly independent, since we have not expressed the situation only in terms of the slack variable.

Example 5.4

Consider the SDP given in the primal normal form

$$\begin{aligned} \min _X \left\{ \left\langle \begin{pmatrix} 2 &{} 0\\ 0 &{} 2 \end{pmatrix}, X \right\rangle : \left\langle \begin{pmatrix} 1 &{} 0\\ 0 &{} 1 \end{pmatrix}, X \right\rangle \ge 1, X\succeq 0 \right\} \end{aligned}$$

and its dual

$$\begin{aligned} \max _{y,S} \left\{ y:~ y \begin{pmatrix} 1 &{} 0\\ 0 &{} 1 \end{pmatrix} +S = \begin{pmatrix} 2 &{} 0\\ 0 &{} 2 \end{pmatrix}, \, y\ge 0, \, S\succeq 0 \right\} . \end{aligned}$$

One can easily verify that optimal solutions of the SDP in primal normal form are matrices \(X'\in \mathcal {S}^{+}_2\) with \({{\,\textrm{tr}\,}}(X')=1\) and the only optimal solution of the dual problem is the pair \((y',S')\), where \(y'=2\) and \(S'=0\in \mathcal {S}^{+}_2.\) The payoff tensor Q in the corresponding Dantzig game in flattened form is

$$\begin{aligned} \left( \begin{array}{ccccc} 0 &{}\quad 0 &{}\quad 0 &{}\quad 1 &{}\quad -2 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 1 &{}\quad -2 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ -1 &{}\quad -1 &{}\quad 0 &{}\quad 0 &{}\quad 1 \\ 2 &{}\quad 2 &{}\quad 0 &{}\quad -1 &{}\quad 0 \end{array} \right) , \end{aligned}$$

where the rows and columns are indexed by \(X_{11}\), \(X_{22}\), \(2X_{12}\), \(y_1\), t. To extract an optimal strategy \({{\,\textrm{diag}\,}}(\bar{X},\bar{y},\bar{t})\) for this game, we observe that (5.6) gives \(\bar{x}_{11} + \bar{x}_{22} \ge \bar{t}\) and (5.7) yields \(2\bar{t} \ge \bar{y}\). Then (5.8) implies \(\bar{y} \ge 2(\bar{x}_{11} + \bar{x}_{22})\) and we obtain \(\bar{x}_{11} + \bar{x}_{22} = \bar{t} = \frac{\bar{y}}{2}\). By the trace condition, an optimal strategy is of the form \({{\,\textrm{diag}\,}}(\bar{X}, \frac{1}{2},\frac{1}{4})\), where \(\bar{X}\in \mathcal {S}^{+}_2\) satisfies \({{\,\textrm{tr}\,}}(\bar{X})=\frac{1}{4}\).

Since \(\bar{t} > 0\), Theorem 5.3 implies that \(4\bar{X}\) and \(4\bar{y}\) are optimal solutions to the primal-dual SDP pair.
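As a sanity check, the following Python sketch (with \(\bar{X} = I_2/8\) as one admissible choice of \(\bar{X}\)) verifies that the strategy \({{\,\textrm{diag}\,}}(\bar{X}, \frac{1}{2}, \frac{1}{4})\) satisfies the solution condition (5.5) for the explicit data \(n = 2\), \(m = 1\), \(A_1 = I\), \(b = 1\), \(C = 2I\).

```python
# Numerical verification of Example 5.4 (a sketch; we pick X_bar = I/8, one of
# the admissible optimal X_bar with trace 1/4).
import numpy as np

n, m = 2, 1
N = n + m + 1
Q = np.zeros((N, N, N, N))
Q[:n, :n, n, n] = np.eye(n)                      # A_1 = I couples X with y_1
Q[n, n, :n, :n] = -np.eye(n)
Q[N - 1, N - 1, :n, :n] = 2 * np.eye(n)          # C = 2I couples t with X
Q[:n, :n, N - 1, N - 1] = -2 * np.eye(n)
Q[n, n, N - 1, N - 1] = 1.0                      # b_1 = 1 couples y_1 with t
Q[N - 1, N - 1, n, n] = -1.0

Z = np.zeros((N, N))                 # the strategy diag(X_bar, y_bar, t_bar)
Z[:n, :n] = np.eye(n) / 8            # X_bar, trace 1/4
Z[n, n] = 0.5                        # y_bar
Z[N - 1, N - 1] = 0.25               # t_bar
assert abs(np.trace(Z) - 1) < 1e-12

# solution condition (5.5): (<Z, Q_{..kl}>)_{k,l} must be PSD
M = np.einsum('ijkl,ij->kl', Q, Z)
assert np.all(np.linalg.eigvalsh(M) >= -1e-9)
```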

6 General semidefinite games

Now we study two-player semidefinite games without the zero-sum condition. By Glicksberg’s result (Glicksberg 1952) [see also Debreu (1952) and Fan (1952)], there always exists a Nash equilibrium for these games, since they are a special case of N-player continuous games with continuous payoff functions defined on convex compact Hausdorff spaces (Glicksberg 1952). The goal of this section is to provide a characterization of the Nash equilibria in terms of spectrahedra, see Theorem 6.2.

Recall the following representation of Nash equilibria for bimatrix games in terms of polyhedra, as introduced by Mangasarian (1964) (see also von Stengel 2002):

Definition 6.1

For an \(m \times n\)-bimatrix game (AB), let the polyhedra P and Q be defined by

$$\begin{aligned} P & = \{(x, v) \in \mathbb {R}^m \times \mathbb {R}: x \ge 0, \; x^T B \le \textbf{1}^T v, \; \textbf{1}^T x = 1 \} \, , \end{aligned}$$
(6.1)
$$\begin{aligned} Q & = \{(y, u) \in \mathbb {R}^n \times \mathbb {R}: A y \le \textbf{1} u, \; y \ge 0, \; \textbf{1}^T y = 1 \} , \end{aligned}$$
(6.2)

where \(\textbf{1}\) denotes the all-ones vector.

In P, the inequalities \(x \ge 0\) are numbered by \(1, \ldots , m\) and the inequalities \(x^T B \le \textbf{1}^T v\) are numbered by \(m+1, \ldots , m+n\). In Q, the inequalities \(A y \le \textbf{1} u\) are numbered by \(1, \ldots , m\) and the inequalities \(y \ge 0\) are numbered by \(m+1, \ldots , m+n\). In this setting, a pair of mixed strategies \((x,y) \in \Delta _1 \times \Delta _2\) is a Nash equilibrium if and only if there exist \(u, v \in \mathbb {R}\) such that \((x,v) \in P\), \((y,u) \in Q\) and for all \(i \in \{1, \ldots , m+n\}\), the i-th inequality of P or Q is binding (i.e., it holds with equality). Here, u and v represent the payoffs of player 1 and player 2, respectively. This representation allows us to study Nash equilibria in terms of pairs of points in \(P \times Q\).
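For a concrete instance, this complementarity characterization can be checked numerically. The following Python sketch (data and tolerances ours) does so for matching pennies with the mixed equilibrium \(x = y = (\frac{1}{2}, \frac{1}{2})\) and payoffs \(u = v = 0\).

```python
# The binding condition after Definition 6.1, checked for matching pennies;
# a sketch with equilibrium data and tolerances of our choosing.
import numpy as np

A = np.array([[1.0, -1.0], [-1.0, 1.0]])
B = -A
x = np.array([0.5, 0.5]); v = 0.0                # mixed equilibrium of player 1
y = np.array([0.5, 0.5]); u = 0.0                # mixed equilibrium of player 2
tol = 1e-9
m, n = A.shape

# (x, v) in P and (y, u) in Q
assert np.all(x >= -tol) and abs(x.sum() - 1) < tol and np.all(x @ B <= v + tol)
assert np.all(y >= -tol) and abs(y.sum() - 1) < tol and np.all(A @ y <= u + tol)

# each of the m + n inequalities is binding in P or in Q
for i in range(m):          # indices 1..m: x_i >= 0 in P or (Ay)_i <= u in Q
    assert abs(x[i]) < tol or abs((A @ y)[i] - u) < tol
for j in range(n):          # indices m+1..m+n: (x^T B)_j <= v in P or y_j >= 0 in Q
    assert abs((x @ B)[j] - v) < tol or abs(y[j]) < tol
```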

We aim at a suitable generalization of this combinatorial characterization to the case of semidefinite games. Note that in the case of bimatrix games, the characterization relies strongly on the finiteness of the set of pure strategies; this finiteness no longer holds for semidefinite games. Therefore, we start from equivalent versions of the bimatrix game polyhedra (6.1) and (6.2), which do not use finitely many pure strategies in their formulation. Instead, the best responses are expressed more explicitly as a maximization problem,

$$\begin{aligned} P & = \{(x, v) \in \mathbb {R}^m \times \mathbb {R}: x \ge 0, \; \max _y \{ x^T B y : \, {y} \in \Delta _2 \} \le v, \; \textbf{1}^T x = 1 \} \, , \\ Q & = \{(y, u) \in \mathbb {R}^n \times \mathbb {R}: y \ge 0, \; \max _x \{x^T A y : \, x \in \Delta _1\} \le u, \; \textbf{1}^T y = 1 \} . \end{aligned}$$

While the generalizations to semidefinite games are no longer polyhedral, it is convenient to keep the symbols P and Q for the notation. Consider the sets

$$\begin{aligned} P & = \{(X, v) \in \mathcal {S}_m \times \mathbb {R}: X \succeq 0, \; \max _Y \{ p_B(X,Y) : \, Y \in \mathcal {S}_n^+, \, {{\,\textrm{tr}\,}}(Y) = 1\} \le v, \; {{\,\textrm{tr}\,}}(X) = 1 \} , \\ Q & = \{(Y, u) \in \mathcal {S}_n \times \mathbb {R}: Y \succeq 0, \; \max _X \{p_A(X,Y) : \, X \in \mathcal {S}_m^+, \, {{\,\textrm{tr}\,}}(X) = 1\} \le u, \; {{\,\textrm{tr}\,}}(Y) = 1 \}. \end{aligned}$$

We show that P and Q are spectrahedra in the spaces \(\mathcal {S}_m \times \mathbb {R}\) and \(\mathcal {S}_n \times \mathbb {R}\). Similar to the considerations in the zero-sum case, for a fixed X, the expression \(\max \{ p_B(X,Y) : \, Y \in \mathcal {S}_n^+, \, {{\,\textrm{tr}\,}}(Y) = 1\}\) can be rewritten as

$$\begin{aligned} & - \min \{- p_B(X,Y) \, : \, Y \in \mathcal {S}_n^+, \, {{\,\textrm{tr}\,}}(Y) = 1 \} \\ & \quad = - \min \{ \langle ( \langle X, - B_{\cdot \cdot kl} \rangle )_{1 \le k,l \le n}, Y \rangle \, : \, {{\,\textrm{tr}\,}}(Y) = 1, \, Y \succeq 0\} \\ & \quad = - \max \{ 1 \cdot v_1 \, : \, v_1 I_n + T = (\langle X, -B_{\cdot \cdot kl} \rangle )_{1 \le k,l \le n}, \ T \succeq 0, \ v_1 \in \mathbb {R}\} \\ & \quad = \min \{ - v_1 \, : \, v_1 I_n + T = (\langle X, -B_{\cdot \cdot kl} \rangle )_{1 \le k,l \le n}, \ T \succeq 0, \ v_1 \in \mathbb {R}\} \\ & \quad = \min \{ v_1 \, : \, - v_1 I_n + T = (\langle X, -B_{\cdot \cdot kl} \rangle )_{1 \le k,l \le n}, \ T \succeq 0, \ v_1 \in \mathbb {R}\}. \end{aligned}$$

Here, the minimum is attained, since the feasible set in the first line of the equations is compact. The maximum in the third line is attained due to the strong duality theorem for semidefinite programming, using that the feasible set \(\{Y \, : \, Y \succeq 0, \ {{\,\textrm{tr}\,}}(Y) = 1\}\) of the first two lines has a strictly feasible point (for example, \(\frac{1}{n} I_n\)) and thus satisfies Slater’s condition. Hence,

$$\begin{aligned} P & = \{(X, v) \in \mathcal {S}_m \times \mathbb {R}: X \succeq 0, \\ & \qquad \min _{T, \, v_1} \{ v_1 : \, - v_1 I_n + T = (\langle X,-B_{\cdot \cdot kl} \rangle )_{1 \le k,l \le n}, \, T \succeq 0, \, v_1 \in \mathbb {R}\} \ \le \ v, \; {{\,\textrm{tr}\,}}(X) = 1 \} \, , \\ Q & = \{(Y, u) \in \mathcal {S}_n \times \mathbb {R}: Y \succeq 0, \\ & \qquad \min _{S, \, u_1} \{ u_1 : \, - u_1 I_n + S = (\langle -A_{ij\cdot \cdot }, Y \rangle )_{1 \le i,j \le n}, \, S \succeq 0, \, u_1 \in \mathbb {R}\} \ \le \ u, \; {{\,\textrm{tr}\,}}(Y) = 1 \} . \end{aligned}$$

If the \(\min\)-problem inside P has some feasible solution \((v_1,T)\), then for any \(v_1' \ge v_1\), there exists a feasible solution \((v_1',T')\) as well; namely, set \(T':=T+(v_1'-v_1) I_n \succeq 0\). Thus we have

$$\begin{aligned} P & = \{(X, v) : \, X \succeq 0, \, -v I_n + T = (\langle X,-B_{\cdot \cdot kl} \rangle )_{1 \le k,l \le n}, \, T \succeq 0, \, {{\,\textrm{tr}\,}}(X) = 1 \} \, , \\ Q & = \{(Y, u) : \, Y \succeq 0, \, -u I_n + S = (\langle -A_{ij\cdot \cdot }, Y \rangle )_{1 \le i,j \le n}, \, S \succeq 0, \, {{\,\textrm{tr}\,}}(Y) = 1 \} . \end{aligned}$$

We claim that P and Q are spectrahedra in the spaces \(\mathcal {S}_m \times \mathbb {R}\) and \(\mathcal {S}_n \times \mathbb {R}\). For P, the inequalities and equations are given by

$$\begin{aligned} (\langle X,-B_{\cdot \cdot kl} \rangle )_{1 \le k,l \le n} + v I_n \succeq 0, \; \, X \succeq 0, \; \, {{\,\textrm{tr}\,}}(X) = 1, \end{aligned}$$

where the equation can be written as two inequalities and where we can combine all the scalar inequalities and matrix inequalities into one block matrix inequality. The spectrahedra P and Q can be used to provide the following characterization of Nash equilibria in terms of a pair of projections of spectrahedra. We build on the terminology from the bimatrix situation after Definition 6.1 and describe Nash equilibria together with their payoffs.

Theorem 6.2

A quadruple \((X, Y, u, v)\) represents a Nash equilibrium of the semidefinite game if and only if

  1. \((X,v) \in P\),

  2. \((Y,u) \in Q\),

  3. and in every finite rank-1 decomposition of X and Y,

    $$\begin{aligned} X \ = \ \sum _s \lambda _s p^{(s)} (p^{(s)})^T, \quad Y \ = \ \sum _t \mu _t q^{(t)} (q^{(t)})^T \end{aligned}$$

    with \(\lambda _s, \mu _t > 0\), \({{\,\textrm{tr}\,}}(p^{(s)} (p^{(s)})^T) = 1\), \({{\,\textrm{tr}\,}}(q^{(t)} (q^{(t)})^T) = 1\) and \(\sum _s \lambda _s = \sum _t \mu _t = 1\), we have

    $$\begin{aligned} \langle ( \langle X, B_{\cdot \cdot kl} \rangle )_{1 \le k,l \le n}, q^{(t)} (q^{(t)})^T \rangle & = v \; \text { for all } t \end{aligned}$$
    (6.3)
    $$\begin{aligned} \text {and} \qquad \langle p^{(s)} (p^{(s)})^T, ( \langle A_{ij \cdot \cdot }, Y \rangle )_{1 \le i,j \le m} \rangle & = u \; \text { for all } s. \end{aligned}$$
    (6.4)

The positivity condition on the coefficients \(\lambda _s, \mu _t\) reflects the binding property of \(x_i \ge 0\) or \(y_j \ge 0\) from the bimatrix situation. Moreover, in the bimatrix situation the inequalities are induced by the pure strategies, which are the extreme points of \(\Delta _1\) and \(\Delta _2\). In the semidefinite situation, we can associate with each extreme point of the strategy space an inequality, namely the inequality (say, for P and an extreme point Y of the strategy space of the second player)

$$\begin{aligned} p_B(X,Y) \ \le \ v. \end{aligned}$$

Proof

Let \((X, Y, u, v)\) represent a Nash equilibrium. Then X is a best response to Y and Y is a best response to X, so that by definition of P and Q, we have \((X,v) \in P\) and \((Y,u) \in Q\). Let \(\sum _s \lambda _s p^{(s)} (p^{(s)})^T\) be a finite rank-1 decomposition of X with \(\lambda _s > 0\), \({{\,\textrm{tr}\,}}(p^{(s)} (p^{(s)})^T) = 1\) and \(\sum _s \lambda _s = 1\). Then

$$\begin{aligned} p_A(X,Y) & = \langle X, (\langle A_{ij \cdot \cdot }, Y \rangle )_{1 \le i,j \le m} \rangle \\ & = \sum _{s} \lambda _s \langle p^{(s)} (p^{(s)})^T, (\langle A_{ij \cdot \cdot }, Y \rangle )_{1 \le i,j \le m} \rangle . \end{aligned}$$

Since the first player’s payoff is u, the best response property gives \(p_A(p^{(s)} (p^{(s)})^T, Y) \le u\) for all s. If we had \(p_A(p^{(s)} (p^{(s)})^T, Y) < u\) for some s, then, since X is a convex combination of the matrices \(p^{(s)} (p^{(s)})^T\), we would have \(p_A(X,Y) < u\), a contradiction. The statement on Y follows similarly.

Conversely, let \((X,v) \in P\), \((Y,u) \in Q\) and assume that in every finite rank-1 decomposition \(X \ = \ \sum _s \lambda _s p^{(s)} (p^{(s)})^T\), \(Y \ = \ \sum _t \mu _t q^{(t)} (q^{(t)})^T\) with \(\lambda _s, \mu _t > 0\), \({{\,\textrm{tr}\,}}(p^{(s)} (p^{(s)})^T) = 1\), \({{\,\textrm{tr}\,}}(q^{(t)} (q^{(t)})^T) = 1\) and \(\sum _s \lambda _s = \sum _t \mu _t = 1\), we have (6.3) and (6.4). Since \((Y,u) \in Q\), we have \(u^*:=\max \{ p_A(X,Y) \, : \, X \in \mathcal {S}_m^+, \ {{\,\textrm{tr}\,}}(X) = 1\} \le u\). Due to (6.4), we have \(p_A(p^{(s)} (p^{(s)})^T,Y) = u\) for all s, and since \(p_A(p^{(s)} (p^{(s)})^T,Y) \le u^* \le u\), this gives \(u^* = u\). Hence, \(p_A(X,Y) = u^* = u\) and X is a best response to Y. Similarly, Y is a best response to X with payoff v, so that altogether \((X, Y, u, v)\) represents a Nash equilibrium. \(\square\)

Remark 6.3

Theorem 6.2 also holds if “in every finite rank-1 decomposition” is replaced by “in at least one finite rank-1 decomposition”. The proof also works in that setting. We will illustrate this further below, in Example 7.3.

Compared to bimatrix games, the more general situation of semidefinite games is qualitatively different in the following sense. In a bimatrix game, every mixed strategy is a unique convex combination of the pure strategies. In a semidefinite game, the analogs of pure strategies are the rank-1 matrices in the spectraplex, and the decompositions of the mixed strategies (i.e., those of rank at least 2) as convex combinations of rank-1 matrices are no longer unique. However, the conditions under which a Nash equilibrium admits several decompositions are quite restrictive.

Remark 6.4

To decompose a positive semidefinite matrix with trace 1 into a convex combination of positive semidefinite rank-1 matrices with trace 1, one can proceed as follows. Consider a spectral decomposition with unit-length eigenvectors. Each term \(u_i u_i^T\) is then a rank-1 matrix with trace 1, and the eigenvalues (which sum to 1) serve as the coefficients of a convex combination.
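The procedure of Remark 6.4 can be sketched in a few lines of Python (function name and example matrix ours):

```python
# Remark 6.4 in code: decompose a trace-1 PSD matrix into a convex combination
# of trace-1 rank-1 matrices via its spectral decomposition (a sketch, names ours).
import numpy as np

def rank_one_decomposition(X, tol=1e-12):
    """Return pairs (lambda_s, p_s) with X = sum_s lambda_s p_s p_s^T."""
    eigenvalues, U = np.linalg.eigh(X)
    return [(lam, U[:, s]) for s, lam in enumerate(eigenvalues) if lam > tol]

X = np.array([[0.5, 0.25], [0.25, 0.5]])         # PSD with trace 1
terms = rank_one_decomposition(X)
recon = sum(lam * np.outer(p, p) for lam, p in terms)

assert abs(sum(lam for lam, _ in terms) - 1) < 1e-9   # coefficients sum to tr(X) = 1
assert np.allclose(recon, X)                          # the combination recovers X
assert all(abs(p @ p - 1) < 1e-9 for _, p in terms)   # each p p^T has trace 1
```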

Example 6.5

We consider the example of a hybrid game, where the first player plays a strategy in the simplex \(\Delta _2\) and the second player plays a strategy in \(\mathcal {S}_2^+\) with trace 1; note that the hybrid game can be encoded into a semidefinite game on \(\mathcal {S}_2 \times \mathcal {S}_2\) by setting \(A_{ijkl} = B_{ijkl} = 0\) whenever \(i \ne j\). We can describe the situation in terms of an index i for the first player and the indices (jk) for the second player. For \(i=1\), let

$$\begin{aligned} (A_{1jk})_{1 \le j,k \le 2} \ = \ (B_{1jk})_{1 \le j,k \le 2} = \left( \begin{array}{cc} 1 &{}\quad \varepsilon \\ \varepsilon &{}\quad 0 \end{array} \right) , \end{aligned}$$

and for \(i=2\), let

$$\begin{aligned} (A_{2jk})_{1 \le j,k \le 2} \ = \ (B_{2jk})_{1 \le j,k \le 2} = \left( \begin{array}{cc} 0 &{}\quad \varepsilon \\ \varepsilon &{}\quad 1 \end{array} \right) . \end{aligned}$$

To determine the Nash equilibria, we consider three cases:

Case 1: The first player plays the first pure strategy. Then the second player has payoff \(y_{11} + 2 \varepsilon y_{12}\). A small computation shows that for small \(\varepsilon\), the second player’s best response is

$$\begin{aligned} \frac{1}{2\sqrt{4 \varepsilon ^2+1}} \left( \begin{array}{cc} \sqrt{4 \varepsilon ^2+1}+1 &{}\quad 2 \varepsilon \\ 2 \varepsilon &{}\quad \sqrt{4 \varepsilon ^2+1}-1 \end{array} \right) , \end{aligned}$$

and indeed, this gives a Nash equilibrium.

Case 2: The first player plays the second pure strategy. Analogously, the second player’s best response is

$$\begin{aligned} \frac{1}{2\sqrt{4 \varepsilon ^2+1}} \left( \begin{array}{cc} \sqrt{4 \varepsilon ^2+1}-1 &{}\quad 2 \varepsilon \\ 2 \varepsilon &{}\quad \sqrt{4 \varepsilon ^2+1}+1 \end{array} \right) , \end{aligned}$$

and indeed, this gives a Nash equilibrium.

Case 3: The first player plays a totally mixed strategy. Let \(x=(x_1,x_2) = (x_1,1-x_1) \in \Delta _2\) with \(x_1,x_2 > 0\). The second player’s best response has the payoff

$$\begin{aligned} & \max \{x_1 y_{11} + 2 \varepsilon x_1 y_{12} + (1-x_1) y_{22} + 2 \varepsilon (1-x_1) y_{12} \, : \, Y \succeq 0, \, {{\,\textrm{tr}\,}}(Y) = 1 \} \\ & \quad = \max \{x_1 y_{11} + (1-x_1) y_{22} + 2 \varepsilon y_{12} \, : \, Y \succeq 0, \, {{\,\textrm{tr}\,}}(Y) = 1 \}. \end{aligned}$$

For the first player, the first pure strategy would give the payoff \(y_{11} + 2 \varepsilon y_{12}\) and the second pure strategy would give the payoff \(y_{22} + 2 \varepsilon y_{12}\). If (x, Y) is a Nash equilibrium such that x is a totally mixed strategy, we must have equality, that is, \(y_{11} + 2 \varepsilon y_{12} \ = \ y_{22} + 2 \varepsilon y_{12}\). Hence, \(y_{11} = y_{22}\) and the payoff of the second player is

$$\begin{aligned} \max \{ x_1 y_{11} + (1-x_1) y_{11} + 2 \varepsilon y_{12} \} \ = \ \max \{ y_{11} + 2 \varepsilon y_{12} \}, \end{aligned}$$

which has become independent of \(x_1\). The payoff of the second player is maximized for the value \(y_{12} = \sqrt{y_{11} y_{22}} = y_{11}\). Thus, the payoff for the second player is

$$\begin{aligned} \max \{y_{11} + 2 \varepsilon y_{11}\} \ = \ \max \{ (1+2 \varepsilon ) y_{11} \}. \end{aligned}$$

Hence, for every non-negative \(\varepsilon\), the best response of the second player is

$$\begin{aligned} \left( \begin{array}{cc} 1/2 &{}\quad 1/2 \\ 1/2 &{}\quad 1/2 \end{array} \right) . \end{aligned}$$
(6.5)

As apparent from the above considerations, in that case both pure strategies of the first player are best responses to the second player’s strategy. To determine the strategy x of the first player, we use the condition that the maximum \(\max _{Y \in \mathcal {Y}} \{x_1 y_{11} + (1-x_1) y_{22} + 2 \varepsilon \sqrt{y_{11} y_{22}}\}\) has to be attained at the matrix (6.5). Substituting \(y_{22} = 1-y_{11}\), the resulting univariate problem in \(y_{11}\) gives \(x = (1/2,1/2)\). The payoff is \(\frac{1}{2} + \varepsilon\) for both players.
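The computations in Case 3 can be confirmed numerically. In the following Python sketch, the value \(\varepsilon = 0.1\) is our own choice of a small positive parameter.

```python
# Numerical check of Case 3 in Example 6.5 (a sketch; the value eps = 0.1 is ours).
import numpy as np

eps = 0.1
A1 = np.array([[1.0, eps], [eps, 0.0]])     # payoff block for pure strategy i = 1
A2 = np.array([[0.0, eps], [eps, 1.0]])     # payoff block for pure strategy i = 2
x = np.array([0.5, 0.5])                    # equilibrium strategy of player 1
M = x[0] * A1 + x[1] * A2                   # player 2 maximizes <M, Y>

# over the spectraplex, max <M, Y> is the largest eigenvalue of M, attained
# at the projector onto the corresponding eigenvector
w, U = np.linalg.eigh(M)
Y_best = np.outer(U[:, -1], U[:, -1])
assert np.allclose(Y_best, np.full((2, 2), 0.5))      # the matrix (6.5)
assert abs(w[-1] - (0.5 + eps)) < 1e-9                # payoff 1/2 + eps

# against (6.5), both pure strategies of player 1 give the same payoff
p1, p2 = np.sum(A1 * Y_best), np.sum(A2 * Y_best)
assert abs(p1 - p2) < 1e-9 and abs(p1 - (0.5 + eps)) < 1e-9
```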

We close the section by mentioning that some classical results for bimatrix games remain true for semidefinite games. Since semidefinite games are convex compact games, the generalized Kohlberg-Mertens structure theorem on the Nash equilibria shown by Predtetchinski (2009) holds for semidefinite games [for related recent structural results in the context of polytopal games see Pahl (2023)]. Moreover, generically, the number of Nash equilibria in semidefinite games is finite and odd, as a consequence of the results of Bich and Fixary (2021).

7 Semidefinite games with many Nash equilibria

We construct a family of semidefinite games on the strategy space \(\mathcal {S}_n \times \mathcal {S}_n\) such that the set of Nash equilibria has many connected components. In particular, the number of Nash equilibria is larger than the number of Nash equilibria that an \(n \times n\) bimatrix game can have.

The following criterion allows us to construct semidefinite games from bimatrix games that contain the Nash equilibria of the bimatrix games and possibly additional ones.

Lemma 7.1

Let \(G=(A,B)\) be an \(m \times n\) bimatrix game. Let \(\bar{G}=(\bar{A},\bar{B})\) be a semidefinite game on the strategy space \(\mathcal {S}_m \times \mathcal {S}_n\) with \(\bar{a}_{iikk} = a_{ik}\) and \(\bar{b}_{iikk} = b_{ik}\) for \(1 \le i \le m\), \(1 \le k \le n\). If \(\bar{a}_{ijkk} =0\) for all \(i \ne j\) and all k, as well as \(\bar{b}_{iikl} =0\) for all i and all \(k \ne l\), then, for every Nash equilibrium (x, y) of G, the pair (X, Y) defined by

$$\begin{aligned} X_{ij} = {\left\{ \begin{array}{ll} x_i, &{} i = j, \\ 0, &{} i \ne j, \end{array}\right. } \qquad Y_{ij} = {\left\{ \begin{array}{ll} y_i, &{} i = j, \\ 0, &{} i \ne j \end{array}\right. } \end{aligned}$$

is a Nash equilibrium of \(\bar{G}\).

Proof

Assume \((X,Y)\) is not a Nash equilibrium of \(\bar{G}\). W.l.o.g. we can assume that a strategy \(Y'\) exists for the second player with \(p_{\bar{B}}(X,Y)<p_{\bar{B}}(X,Y')\). This yields

$$\begin{aligned} p_{\bar{B}}(X,Y)=\sum _{i,j,k,l} \bar{b}_{ijkl} X_{ij}Y_{kl} = \sum _{i,k} \bar{b}_{iikk} X_{ii}Y_{kk} < \sum _{i,k} \bar{b}_{iikk} X_{ii}Y'_{kk} = p_{\bar{B}}(X,Y'), \end{aligned}$$

where the last equation uses that \(\bar{b}_{iikl} = 0\) for all \(l \ne k\). Hence, the diagonal of \(Y'\) yields a feasible strategy \(y'\) for the second player in G with \(p_B(x,y) < p_B(x,y')\). This contradicts the precondition that \((x,y)\) is a Nash equilibrium in G. \(\square\)
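The key identity in the proof, namely that for diagonal X the payoff \(p_{\bar{B}}(X,\cdot )\) depends only on the diagonal of the second player's strategy, can be checked numerically. The following NumPy sketch (our own illustrative code, not from the paper) fills a tensor \(\bar{B}\) randomly subject to the conditions of Lemma 7.1 on the entries \(\bar{b}_{iikl}\) and compares both sides:

```python
import numpy as np

rng = np.random.default_rng(0)
m = n = 2
b = rng.standard_normal((m, n))  # bimatrix payoffs of player 2

# tensor bar B: entries b_iikk = b_ik, b_iikl = 0 for k != l,
# all other entries random (the lemma allows them to be arbitrary)
bt = rng.standard_normal((m, m, n, n))
for i in range(m):
    for k in range(n):
        for l in range(n):
            bt[i, i, k, l] = b[i, k] if k == l else 0.0

def payoff(T, X, Y):
    # p(X, Y) = sum_{ijkl} T_{ijkl} X_{ij} Y_{kl}
    return np.einsum('ijkl,ij,kl->', T, X, Y)

# diagonal embedding of a mixed strategy x of the bimatrix game
x = np.array([0.3, 0.7])
X = np.diag(x)

# an arbitrary feasible strategy Y' of player 2 (psd, trace 1)
M = rng.standard_normal((n, n))
P = M @ M.T
Yp = P / np.trace(P)

# the proof's identity: p(X, Y') depends only on diag(Y')
lhs = payoff(bt, X, Yp)
rhs = x @ b @ np.diag(Yp)
assert abs(lhs - rhs) < 1e-12
```

Since the identity holds for every feasible \(Y'\), a profitable deviation in \(\bar{G}\) would induce a profitable deviation in G, exactly as in the proof.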

In Lemma 7.1, besides the Nash equilibria inherited from the bimatrix game, there can be additional Nash equilibria in the semidefinite game. In a \(2 \times 2\) bimatrix game, there can be at most three isolated Nash equilibria (see, e.g., Borm et al. 1989 or Quint and Shubik 1997). The following example provides an instance of a semidefinite game on the strategy space \(\mathcal {S}_2 \times \mathcal {S}_2\) with five isolated Nash equilibria; in particular, it has more isolated Nash equilibria than any \(2 \times 2\) bimatrix game can have.

Example 7.2

For a given \(c \in \mathbb {R}\), let

$$\begin{aligned} \left( \begin{array}{cc} A_{\cdot \cdot 11} &{}\quad A_{\cdot \cdot 12} \\ A_{\cdot \cdot 21} &{}\quad A_{\cdot \cdot 22} \end{array} \right) = \left( \begin{array}{cc} \begin{pmatrix} 1 &{}\quad 0 \\ 0 &{}\quad 0 \end{pmatrix} &{} \begin{pmatrix} 0 &{}\quad c \\ c &{}\quad 0 \end{pmatrix} \\ \begin{pmatrix} 0 &{}\quad c \\ c &{}\quad 0 \end{pmatrix} &{} \begin{pmatrix} 0 &{}\quad 0 \\ 0 &{}\quad 1 \end{pmatrix} \end{array} \right) \end{aligned}$$

and \(B=A\). We claim that for \(c > 1/2\), there are exactly five isolated Nash equilibria.

First consider the case that the diagonal of X is (1, 0). Since \(X \succeq 0\), this implies \(x_{12} = 0\). The best response of player 2 then also has diagonal (1, 0) and, again by positive semidefiniteness, \(y_{12} = 0\). From that, we see that

$$\begin{aligned} X = Y = \begin{pmatrix} 1 &{}\quad 0 \\ 0 &{}\quad 0 \end{pmatrix} \end{aligned}$$

is a Nash equilibrium with payoff 1 for both players, and similarly,

$$\begin{aligned} X = Y = \begin{pmatrix} 0 &{}\quad 0 \\ 0 &{}\quad 1 \end{pmatrix} \end{aligned}$$

as well. That these Nash equilibria are isolated follows from the present case in combination with the subsequent analysis of the cases in which the diagonal of X is not equal to (1, 0).

Now consider the situation that both diagonal entries of X are positive, and due to the situation discussed before, we can also assume that both diagonal entries of Y are positive. Note that the payoff of each player is

$$\begin{aligned} p(X,Y) = x_{11} y_{11} + x_{22} y_{22} + 4c x_{12} y_{12}. \end{aligned}$$

In a Nash equilibrium, as soon as one player puts non-zero weight on the off-diagonal entry, both players will choose the off-diagonal element with maximal possible absolute value and matching sign, say, for player 1, \(x_{12} = \pm \sqrt{x_{11} x_{22}}\).

Case 1: \(x_{12} \ne 0\). We can assume positive signs for the non-diagonal elements of both players. The payoffs are

$$\begin{aligned} p(X,Y) = x_{11} y_{11} + x_{22} y_{22} + 4c \sqrt{x_{11} x_{22}} \sqrt{y_{11} y_{22}}. \end{aligned}$$

Expressing \(x_{22} = 1-x_{11}\) and \(y_{22} = 1 - y_{11}\), we obtain

$$\begin{aligned} p(X,Y) = x_{11} y_{11}+ (1-x_{11}) (1-y_{11}) + 4c \sqrt{x_{11} (1-x_{11})} \sqrt{y_{11} (1-y_{11})}. \end{aligned}$$

In a Nash equilibrium, the partial derivatives

$$\begin{aligned} p_{x_{11}} =&\ 2 y_{11} - 1 + \frac{2 c \sqrt{y_{11} (1-y_{11})} (1-2 x_{11})}{\sqrt{x_{11} (1-x_{11})}}, \\ p_{y_{11}} = \ {}&2 x_{11} - 1 + \frac{2 c \sqrt{x_{11} (1-x_{11})} (1-2 y_{11})}{\sqrt{y_{11} (1-y_{11})}} \end{aligned}$$

of p(X,Y) necessarily must vanish. We remark that, since the payoff function is bilinear in the pair (X,Y), non-infinitesimal deviations are not relevant here.

For the case \(c > 1/2\), we obtain \(x_{11} = y_{11} = 1/2\). For \(c = 1/2\), any choice of \(x_{11} \in (0,1)\) and setting \(y_{11} = x_{11}\) gives a critical point, see below.

For \(x_{11} = \frac{1}{2}\) and \(y_{11} = \frac{1}{2}\), we obtain the Nash equilibria

$$\begin{aligned} X=Y= \left( \begin{array}{cc} 1/2 &{}\quad 1/2 \\ 1/2 &{}\quad 1/2 \end{array} \right) \; \text { as well as } \; X=Y= \left( \begin{array}{cc} 1/2 &{}\quad -1/2 \\ -1/2 &{}\quad 1/2 \end{array} \right) \end{aligned}$$

with payoff \(\frac{1}{2} + 4 \cdot \frac{1}{4} \cdot c = \frac{1}{2} + c\) for both players.
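The critical-point computation of Case 1 can be verified numerically. The following sketch (our own code, using \(c = 1\) as an arbitrary value above 1/2) checks via central differences that both partial derivatives vanish at \(x_{11} = y_{11} = 1/2\) and that the payoff there equals \(\frac{1}{2} + c\):

```python
import numpy as np

c = 1.0  # any c > 1/2

def p(x11, y11):
    # payoff after substituting x22 = 1 - x11, y22 = 1 - y11
    return (x11 * y11 + (1 - x11) * (1 - y11)
            + 4 * c * np.sqrt(x11 * (1 - x11)) * np.sqrt(y11 * (1 - y11)))

# central differences at the claimed critical point x11 = y11 = 1/2
h = 1e-6
px = (p(0.5 + h, 0.5) - p(0.5 - h, 0.5)) / (2 * h)
py = (p(0.5, 0.5 + h) - p(0.5, 0.5 - h)) / (2 * h)
assert abs(px) < 1e-6 and abs(py) < 1e-6

# the equilibrium payoff is 1/2 + c for both players
assert abs(p(0.5, 0.5) - (0.5 + c)) < 1e-12
```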

Case 2: \(x_{12} = 0\). This implies \(y_{12} = 0\), and we obtain the isolated Nash equilibrium

$$\begin{aligned} X = Y = \left( \begin{array}{cc} 1/2 &{}\quad 0 \\ 0 &{}\quad 1/2 \end{array} \right) \end{aligned}$$

with payoff \(\frac{1}{2}\) for both players.
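All five equilibria can be confirmed at once: for fixed X, the payoff of player 2 is the linear functional \(Y \mapsto {{\,\textrm{tr}\,}}(M(X)Y)\) with \(M(X) = \begin{pmatrix} x_{11} & 2c\,x_{12} \\ 2c\,x_{12} & x_{22} \end{pmatrix}\), and its maximum over trace-one positive semidefinite matrices is the largest eigenvalue of M(X); by symmetry (\(B = A\)), the same criterion applies to player 1. A NumPy sketch (helper names are our own), with \(c = 1\):

```python
import numpy as np

c = 1.0  # any c > 1/2

def payoff_matrix(X):
    # p(X, Y) = x11 y11 + x22 y22 + 4c x12 y12 = tr(M(X) Y)
    return np.array([[X[0, 0], 2 * c * X[0, 1]],
                     [2 * c * X[0, 1], X[1, 1]]])

def is_best_response(X, Y, tol=1e-9):
    # over {Y psd, tr Y = 1}, the max of tr(M(X) Y) is lambda_max(M(X))
    M = payoff_matrix(X)
    return np.trace(M @ Y) >= np.linalg.eigvalsh(M)[-1] - tol

E1 = np.diag([1.0, 0.0])
E2 = np.diag([0.0, 1.0])
P = np.array([[0.5, 0.5], [0.5, 0.5]])
Q = np.array([[0.5, -0.5], [-0.5, 0.5]])
D = np.diag([0.5, 0.5])

equilibria = [(E1, E1), (E2, E2), (P, P), (Q, Q), (D, D)]
for X, Y in equilibria:
    # the game is symmetric, so both checks use the same payoff matrix
    assert is_best_response(X, Y) and is_best_response(Y, X)
```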

In the special case \(c=1/2\), any choice for \(x_{11} \in (0,1)\) and setting \(y_{11} = x_{11}\) gives a critical point. Further inspecting the second derivatives

$$\begin{aligned} p_{x_{11} x_{11}} \big |_{y_{11} = x_{11}} \ = \ - \frac{1}{2 x_{11}(1-x_{11})} \, \text { and } \, p_{y_{11} y_{11}} \big |_{y_{11} = x_{11}} \ = \ - \frac{1}{2 x_{11}(1-x_{11})} \, , \end{aligned}$$

the negative values show that the points are all local maxima w.r.t. deviating in \(x_{11}\) (and, analogously, in \(y_{11}\)). Hence, in the case \(c=1/2\), all the points with \(y_{11} = x_{11} \in (0,1)\) and maximal possible off-diagonal entries (with matching signs for the two players) give a family of Nash equilibria with payoff 1 for each player.

Example 7.3

Example 7.2 can also be used to illustrate a situation in which a strategy has more than one decomposition into rank-1 matrices. Consider again the main situation \(c > \frac{1}{2}\). We have seen that the pair \((\frac{1}{2} I_2, \frac{1}{2} I_2)\) of scaled identity matrices constitutes a Nash equilibrium. If player 1 plays \(X=\frac{1}{2} I_2\), the payoff of player 2 is

$$\begin{aligned} p_B(X,Y) = \frac{1}{2} y_{11} + \frac{1}{2} y_{22} + 0 \cdot c \, , \end{aligned}$$

which is independent of c. Due to \({{\,\textrm{tr}\,}}(Y) = 1\), the payoff is \(\frac{1}{2}\) for any strategy Y of the second player. Note that the unit matrix has several decompositions into rank-1 matrices. Besides the canonical decomposition

$$\begin{aligned} I_2 \ = \ e^{(1)} (e^{(1)})^T + e^{(2)} (e^{(2)})^T, \end{aligned}$$

we can also consider, say, for the general unit matrix \(I_n\), the decomposition

$$\begin{aligned} I_n \ = \ \sum _{k=1}^n (u^{(k)}) (u^{(k)})^T \end{aligned}$$

for any orthonormal basis \(u^{(1)}, \ldots , u^{(n)}\) of \(\mathbb {R}^n\). Both for the canonical decomposition and for the decomposition, say, with \(u^{(1)} = (\cos \alpha , \sin \alpha )^T\), \(u^{(2)} = (-\sin \alpha , \cos \alpha )^T\) for \(\alpha :=\frac{\pi }{6}\), i.e., \(u^{(1)} = \frac{1}{2}(\sqrt{3},1)^T\), \(u^{(2)} = \frac{1}{2}(-1,\sqrt{3})^T\), we obtain \(p_B(X,Y)=\frac{1}{2}\). In particular, all the rank-1 strategies occurring in the various decompositions of \(\frac{1}{2}I_2\) are best responses of player 2 to the strategy \(\frac{1}{2} I_2\) of the first player.
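The rotated decomposition and the resulting payoffs can be illustrated numerically; a short sketch of our own, with \(\alpha = \pi /6\) and \(c = 1\):

```python
import numpy as np

c = 1.0
X = 0.5 * np.eye(2)  # player 1 plays the scaled identity

def payoff_B(X, Y):
    # p_B(X, Y) = x11 y11 + x22 y22 + 4c x12 y12
    return X[0, 0] * Y[0, 0] + X[1, 1] * Y[1, 1] + 4 * c * X[0, 1] * Y[0, 1]

# rank-1 strategies from a rotated orthonormal basis
alpha = np.pi / 6
u1 = np.array([np.cos(alpha), np.sin(alpha)])
u2 = np.array([-np.sin(alpha), np.cos(alpha)])

# the rotated basis also decomposes I_2
assert np.allclose(np.outer(u1, u1) + np.outer(u2, u2), np.eye(2))

# every rank-1 strategy from either decomposition yields payoff 1/2
for u in (np.array([1.0, 0.0]), np.array([0.0, 1.0]), u1, u2):
    assert abs(payoff_B(X, np.outer(u, u)) - 0.5) < 1e-12
```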

We now show how to construct from Example 7.2 an explicit family of semidefinite games with many Nash equilibria.

Block construction Let \(A^{(1)}\) and \(A^{(2)}\) be tensors of size \(m_1 \times m_1 \times n_1 \times n_1\) and \(m_2 \times m_2 \times n_2 \times n_2\). The block tensor with blocks \(A^{(1)}\) and \(A^{(2)}\) is formally defined as the tensor of size

$$\begin{aligned} (m_1 + m_2) \times (m_1 + m_2) \times (n_1 + n_2) \times (n_1 + n_2), \end{aligned}$$

which has entries

$$\begin{aligned} \begin{array}{rclll} a_{ijkl} &{} = &{} a^{(1)}_{ijkl} &{}\quad \text { for all } &{}\quad 1 \le i,j \le m_1, \, 1 \le k,l \le n_1, \\ a_{i+m_1, j+m_1,k+n_1,l+n_1} &{} = &{} a^{(2)}_{ijkl} &{}\quad \text { for all } &{}\quad 1 \le i,j \le m_2, \, 1 \le k,l \le n_2. \end{array} \end{aligned}$$

Naturally, this construction can be extended to more than two blocks.

For \(1 \le k \le 2\), let \((A^{(k)}, B^{(k)})\) be a semidefinite game \(G^{(k)}\) with strategy space \(\mathcal {S}_{m_k} \times \mathcal {S}_{n_k}\). Then the block game \(G=(A,B)\), where A is the block tensor with blocks \(A^{(1)}\) and \(A^{(2)}\) and B is the block tensor with blocks \(B^{(1)}\) and \(B^{(2)}\), defines a semidefinite game with strategy space \(\mathcal {S}_{m_1 + m_2} \times \mathcal {S}_{n_1 + n_2}\).
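The block construction translates directly into code. The following sketch (the function name block_tensor is our own) builds the block tensor from two payoff tensors by placing shifted copies on the diagonal:

```python
import numpy as np

def block_tensor(T1, T2):
    """Block tensor with diagonal blocks T1 (m1 x m1 x n1 x n1)
    and T2 (m2 x m2 x n2 x n2); all other entries are zero."""
    m1, _, n1, _ = T1.shape
    m2, _, n2, _ = T2.shape
    T = np.zeros((m1 + m2, m1 + m2, n1 + n2, n1 + n2))
    T[:m1, :m1, :n1, :n1] = T1              # a_{ijkl} = a^(1)_{ijkl}
    T[m1:, m1:, n1:, n1:] = T2              # shifted copy of a^(2)
    return T

rng = np.random.default_rng(1)
T1 = rng.standard_normal((2, 2, 3, 3))
T2 = rng.standard_normal((4, 4, 2, 2))
T = block_tensor(T1, T2)
assert T.shape == (6, 6, 5, 5)
assert T[1, 0, 2, 1] == T1[1, 0, 2, 1]
assert T[2 + 3, 2 + 0, 3 + 1, 3 + 0] == T2[3, 0, 1, 0]
```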

Lemma 7.4

(Block lemma) For \(1 \le k \le 2\), let \(G^{(k)}=(A^{(k)}, B^{(k)})\) be a semidefinite game with strategy space \(\mathcal {S}_{m_k} \times \mathcal {S}_{n_k}\) and \((X^{(k)}, Y^{(k)})\) be a Nash equilibrium of \(G^{(k)}\) and denote the payoffs of the two players by \(p_{A^{(k)}}\) and \(p_{B^{(k)}}\). If \(\alpha _1,\alpha _2,\beta _1,\beta _2 \ge 0\) satisfy \(\alpha _1 + \alpha _2 = 1, \beta _1 + \beta _2 = 1\) as well as

$$\begin{aligned} \alpha _1 p_{B^{(1)}}(X^{(1)},Y^{(1)}) = \alpha _2 p_{B^{(2)}}(X^{(2)},Y^{(2)}) \text { and } \beta _1 p_{A^{(1)}}(X^{(1)},Y^{(1)}) = \beta _2 p_{A^{(2)}}(X^{(2)},Y^{(2)}), \end{aligned}$$

then

$$\begin{aligned} X^* := \begin{pmatrix} \alpha _1 X^{(1)} &{} 0 \\ 0 &{} \alpha _2 X^{(2)} \end{pmatrix}, \quad Y^* := \begin{pmatrix} \beta _1 Y^{(1)} &{} 0 \\ 0 &{} \beta _2 Y^{(2)} \end{pmatrix} \end{aligned}$$

is a Nash equilibrium of the block game of \((A^{(1)}, B^{(1)})\) and \((A^{(2)}, B^{(2)})\).

Note that if one of the coefficients \(\alpha _1,\alpha _2,\beta _1\) or \(\beta _2\) is zero in the lemma, then one of the blocks in \(X^*\) or \(Y^*\) consists solely of zeroes.

Proof

We denote the payoff tensors of the block game by A and B. Let the first player play \(X^*\). Since \(\alpha _1 + \alpha _2 = 1\), \(X^*\) is indeed an admissible strategy of the first player. Now let the second player play a strategy \(\bar{Y}\). Since the entries of B outside the two diagonal blocks are zero, the off-diagonal blocks of \(\bar{Y}\) do not influence the payoffs; moreover, the diagonal blocks of the positive semidefinite matrix \(\bar{Y}\) are positive semidefinite with traces summing to 1. Hence, we can assume that \(\bar{Y}\) is of the form

$$\begin{aligned} \bar{Y} = \begin{pmatrix} \gamma _1 \bar{Y}^{(1)} &{} 0 \\ 0 &{} \gamma _2 \bar{Y}^{(2)} \end{pmatrix} \end{aligned}$$

with some \(\gamma _1, \gamma _2 \ge 0\), \(\gamma _1 + \gamma _2 = 1\) and strategies \(\bar{Y}^{(1)}\), \(\bar{Y}^{(2)}\) of \(G^{(1)}\) and \(G^{(2)}\). Since \((X^{(k)}, Y^{(k)})\) is a Nash equilibrium of \(G^{(k)}\) for \(k \in \{1,2\}\), we obtain

$$\begin{aligned} p_B(X^*,\bar{Y})= & {} \alpha _1 \gamma _1 p_{B^{(1)}}(X^{(1)},\bar{Y}^{(1)}) + \alpha _2 \gamma _2 p_{B^{(2)}}(X^{(2)}, \bar{Y}^{(2)}) \\\le & {} \alpha _1 \gamma _1 p_{B^{(1)}}(X^{(1)},Y^{(1)}) + \alpha _2 \gamma _2 p_{B^{(2)}}(X^{(2)}, Y^{(2)}) \\= & {} (\gamma _1 + \gamma _2) \alpha _1 p_{B^{(1)}}(X^{(1)}, Y^{(1)}) \\= & {} (\beta _1 +\beta _2) \alpha _1 p_{B^{(1)}}(X^{(1)}, Y^{(1)}) \\= & {} \alpha _1 \beta _1 p_{B^{(1)}}(X^{(1)}, Y^{(1)}) + \alpha _2 \beta _2 p_{B^{(2)}}(X^{(2)}, Y^{(2)}) \\= & {} p_B(X^*,Y^*) . \end{aligned}$$

An analogous argument holds for the best response of the first player to the strategy \(Y^*\) of the second player. \(\square\)
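As a concrete numerical illustration of the block lemma (our own sketch): taking two copies of the Example 7.2 game with \(c = 1\) and, in each block, the equilibrium with all entries 1/2 and payoff \(\frac{1}{2} + c = \frac{3}{2}\), the payoffs of the two blocks agree, so \(\alpha _1 = \alpha _2 = \beta _1 = \beta _2 = \frac{1}{2}\) satisfies the hypotheses. The best-response check uses that player 2's payoff is maximized at the largest eigenvalue of the induced payoff matrix:

```python
import numpy as np

c = 1.0
# payoff tensor of Example 7.2 (B = A): p = x11 y11 + x22 y22 + 4c x12 y12
A2 = np.zeros((2, 2, 2, 2))
A2[0, 0, 0, 0] = A2[1, 1, 1, 1] = 1.0
for i, j, k, l in [(0, 1, 0, 1), (0, 1, 1, 0), (1, 0, 0, 1), (1, 0, 1, 0)]:
    A2[i, j, k, l] = c

# block game with two copies of the Example 7.2 game
A4 = np.zeros((4, 4, 4, 4))
A4[:2, :2, :2, :2] = A2
A4[2:, 2:, 2:, 2:] = A2

def payoff_mat(T, X):
    # player 2 sees the linear functional Y -> tr(M Y)
    return np.einsum('ijkl,ij->kl', T, X)

P = np.array([[0.5, 0.5], [0.5, 0.5]])  # block equilibrium of Example 7.2
Xstar = np.zeros((4, 4))                # alpha_1 = alpha_2 = 1/2
Xstar[:2, :2] = 0.5 * P
Xstar[2:, 2:] = 0.5 * P

M = payoff_mat(A4, Xstar)
val = np.trace(M @ Xstar)               # payoff at (X*, X*); the game is symmetric
lam = np.linalg.eigvalsh(M)[-1]         # best achievable response value
assert val >= lam - 1e-9                # X* is a best response to itself
assert abs(val - 0.75) < 1e-12          # alpha beta (1/2 + c) summed over blocks
```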

We can use the block lemma to construct a family of semidefinite games with many Nash equilibria.

Example 7.5

Let \(m=n\), i.e., we consider a game \(G_n\) on \(\mathcal {S}_n \times \mathcal {S}_n\). Assume that n is even. We generalize Example 7.2. For a given \(c \in \mathbb {R}\), let

$$\begin{aligned} \left( \begin{array}{cc} A_{\cdot \cdot 11} &{}\quad A_{\cdot \cdot 12} \\ A_{\cdot \cdot 21} &{}\quad A_{\cdot \cdot 22} \end{array} \right) = \left( \begin{array}{cc} \begin{pmatrix} 1 &{}\quad 0 \\ 0 &{}\quad 0 \end{pmatrix} &{} \begin{pmatrix} 0 &{}\quad c \\ c &{}\quad 0 \end{pmatrix} \\ \begin{pmatrix} 0 &{}\quad c \\ c &{}\quad 0 \end{pmatrix} &{} \begin{pmatrix} 0 &{}\quad 0 \\ 0 &{}\quad 1 \end{pmatrix} \end{array} \right) \end{aligned}$$

as in Example 7.2. For \(1 < s \le n/2\) and \(i,j,k,l \in \{2s-1, 2s\}\), let

$$\begin{aligned} A_{ijkl} \ = \ A_{i-2(s-1),j-2(s-1),k-2(s-1),l-2(s-1)} \end{aligned}$$

and let all other entries of A be zero. Also, let \(B=A\).

By the discussion in Example 7.2, the game on the strategy sets \(\mathcal {S}_2 \times \mathcal {S}_2\) has five Nash equilibria. In all of these equilibria, the strategies of both players coincide, and this property is preserved throughout the generalized construction we present.

Theorem 7.6

The set of Nash equilibria of \(G_n\) consists of

$$\begin{aligned} \sqrt{6}^n - 1 \ \approx \ 2.449^n - 1 \end{aligned}$$

connected components.

Proof

Using the Block Lemma 7.4, we obtain \(6^{n/2} - 1 = \sqrt{6}^n -1\) Nash equilibria, because we can also use the zero matrix as \(2 \times 2\) block within a strategy as long as not all the blocks are the zero matrix. Outside of the \(2 \times 2\)-diagonal blocks, the entries of these Nash equilibria are zero. Those entries can be chosen arbitrarily as long as the positive semidefiniteness constraint on the strategy is satisfied, without losing the equilibrium property. As a consequence, the Nash equilibria are not isolated. It remains to show that the \(\sqrt{6}^n -1\) Nash equilibria obtained from the Block Lemma belong to distinct connected components.

For each Nash equilibrium \((X,Y)\), consider the diagonal \(2 \times 2\)-blocks of the strategies. Each block is of one of the five types from Example 7.2 or is the zero matrix. We associate a type \(p(X,Y) \in \{0, \ldots , 5\}^{n}\) to each Nash equilibrium, recording the type in each of the n/2 blocks of X and in each of the n/2 blocks of Y.

Any two of the \(\sqrt{6}^n-1\) Nash equilibria coming from the Block Lemma have distinct types. By restricting to the diagonal blocks, this implies that the \(6^{n/2}-1\) Nash equilibria belong to distinct connected components. \(\square\)

Asymptotically, we obtain more Nash equilibria than in the Quint and Shubik construction of bimatrix games (Quint and Shubik 1997) and also more Nash equilibria than in von Stengel’s construction of bimatrix games (von Stengel 1999), because there the number is \(0.949 \cdot 2.414^n/\sqrt{n}\) asymptotically.

Specifically, von Stengel’s construction gives a \(6 \times 6\)-bimatrix game with 75 isolated Nash equilibria, and so far no \(6 \times 6\)-bimatrix game with more than 75 isolated Nash equilibria is known. Von Stengel also showed an upper bound of 111 Nash equilibria for \(6 \times 6\)-bimatrix games. In our construction of a semidefinite game on \(\mathcal {S}_6 \times \mathcal {S}_6\), we obtain from Theorem 7.6 the higher number of \(6^{6/2} - 1 = 215\) connected components of Nash equilibria in the semidefinite game.
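The count in Theorem 7.6 is straightforward to reproduce (our own sketch): each of the n/2 diagonal \(2 \times 2\)-blocks carries one of six states (the five equilibrium types of Example 7.2 or the zero block), excluding the all-zero combination:

```python
def components(n):
    # number of connected components of Nash equilibria in G_n (n even):
    # six states per 2x2 block, minus the all-zero combination
    assert n % 2 == 0
    return 6 ** (n // 2) - 1

assert components(2) == 5     # the five equilibria of Example 7.2
assert components(6) == 215   # exceeds the bound of 111 for 6x6 bimatrix games
# growth rate approaches sqrt(6) ~ 2.449 per dimension
assert abs(components(10) ** (1 / 10) - 6 ** 0.5) < 0.01
```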

8 Outlook and open questions

Since the transition from bimatrix games to semidefinite games leads, in the geometric description of Nash equilibria, from polyhedra to spectrahedra, the questions on the maximal number of Nash equilibria appear to become even more challenging than in the bimatrix situation. Both from the viewpoint of the combinatorics of Nash equilibria and from the viewpoint of computation, rank restrictions have been fruitfully exploited in the contexts of bimatrix games (Adsul et al. 2021; Kannan and Theobald 2010) and separable games (Stein et al. 2008). It would be interesting to study the exploitation of low-rank structures in semidefinite games as well, for instance for payoff tensors satisfying suitable conditions of low tensor rank.

Concerning the reduction from semidefinite programs to semidefinite games, it is a natural question whether the handling of the exceptional cases by Adler (2013) and von Stengel (2022) can be generalized to the semidefinite case.

We also briefly mention questions on semidefinite generalizations of more general classes of (bimatrix) games. A polymatrix game (or network game) (Cai et al. 2016) is defined by a graph. The nodes are the players, and each edge corresponds to a two-player zero-sum matrix game. Every player chooses one strategy and uses it in all the games she is involved in. The game has an equilibrium that can be computed efficiently using linear programming. Shapley’s stochastic games are two-player zero-sum games of potentially infinite duration. Roughly speaking, the game takes place on a complete graph, the nodes of which correspond to zero-sum matrix games. Two players, starting from an arbitrary node (position), at each stage of the game play a zero-sum matrix game and receive payoffs. Then, with a non-zero probability, the game either stops or the players move to another node and play again. Because the stopping probabilities are non-zero at each position, the game terminates with probability one. Shapley proved (Shapley 1953) that this game has an equilibrium; there is also an algorithm to compute it (Hansen et al. 2011), see also Oliu-Barton (2021). It remains a future task to study the generalizations of polymatrix and stochastic games when the underlying bimatrix games are replaced by semidefinite games. For semidefinite polymatrix games, this has recently been initiated in Ickstadt et al. (2023).