1 Introduction

We begin by considering a complete, but not necessarily transitive, preference relation \(\succeq\) defined over lotteries on a finite set of alternatives X. We aim to construct a two-player symmetric game \(G=(\Delta X,\pi )\), where X represents the set of pure strategies, \(\Delta X\) the set of mixed strategies, and \(\pi\) an expected payoff function. Here, for p and q in \(\Delta X\), the relation \(p\succeq q\) holds if and only if \(\pi (p,q)\ge \pi (q,p)\); we note the dual interpretation of the function \(\pi\) as (i) a payoff function in the game and (ii) a two-variable utility function representing \(\succeq\). Our main result establishes that if the preference relation \(\succeq\) satisfies the von Neumann-Morgenstern (vNM) expected utility axioms—that is, if there exists an expected utility function EU such that \(EU(p)\ge EU(q)\) if and only if \(p\succeq q\)—then G is a potential game. Conversely, if G is a potential game, then the relation defined as above satisfies the expected utility axioms.

To our knowledge, the literature has not yet drawn such an explicit connection between expected utility and potential games. Initially introduced by Monderer and Shapley (1996), potential games are significant for two main reasons: they always have a pure Nash equilibrium, and the potential function is useful for equilibrium selection. Because of these key features, the class of potential games and its extensions (Voorneveld 2000; Dubey et al. 2006) have been applied in various fields, including computer science (Heliou et al. 2017; Yamamoto 2015), networks (Babichenko and Tamuz 2016), social environments (Demuynck et al. 2023), and theoretical biology (Sandholm 2010).

It is essential to differentiate the well-known properties of potential games from the main result presented here. In potential games, a pure Nash equilibrium coincides with a pure Nash equilibrium in a modified game, wherein the potential function of the original game serves as each player’s payoff function. This observation differs from our main result in two main ways. First, the potential function in two-player symmetric games is constructed as a function of two pure strategies. In contrast, our result indicates that the very existence of such a potential function is equivalent to the existence of a single-variable expected utility function. Second, it is crucial to highlight that the expected payoff function and the potential function are not intrinsically related. For instance, even if \(x\in X\) maximizes the expected utility function that represents \(\succeq\), it does not necessarily follow that the pure strategy profile \((x,x)\) will be a Nash equilibrium. Conversely, if \((x,x)\) is a Nash equilibrium, then x is not necessarily a maximizer of the expected utility function that represents \(\succeq\).

Despite these disparities, we find a notable relationship between the maximal elements of the preference relation \(\succeq\) and Nash’s (1953) optimal threat strategies. We observe that a mixed strategy \(p\in \Delta X\) is an optimal threat strategy in a two-player symmetric game if and only if p is a maximal element with respect to \(\succeq\). Additionally, if such a maximal element is a pure strategy \(x\in X\), then x is a finite population evolutionary stable strategy, as introduced by Schaffer (1988, 1989). It is well known that such an evolutionary strategy does not always coincide with a Nash equilibrium strategy. Relatedly, Duersch et al. (2012b) introduced a simple but effective “imitate-if-better” rule, according to which an imitator playing a pure strategy \(y\in X\) will adopt the pure strategy x played by the opponent in the preceding round if and only if the opponent received a strictly higher payoff, represented as \(x \succ y\) in our setting. They show that if the game has no finite population evolutionary stable strategy, then their imitate-if-better rule can be exploited. However, their results indicate that, within a broad class of economically relevant games, including generalized potential games, this decision rule cannot be exploited indefinitely, even by highly sophisticated players who can perfectly predict the imitator’s choices.

Fig. 1

Illustration of the triangular property: for any three actions x, y and z in X, \(\pi (x,y)+\pi (y,z)+\pi (z,x)=\pi (x,z)+\pi (z,y)+\pi (y,x)\); that is, the sum of the payoffs at the solid dots (connected via the solid triangle) equals the sum of the payoffs at the open dots (connected via the dashed triangle)

Fig. 2

Dependencies of our main result, illustrating how decision-theoretic results (in the upper part of the figure) link to the triangular property, and how game-theoretical results (in the lower part of the figure) are combined to obtain the main theorem

The proof of our main result depends on a set of lemmata and several results from the literature, as visualized in Fig. 2. Specifically, we make use of Fishburn’s (1982) SSB utility axiomatization, the ‘diagonal property’ introduced by Potters et al. (2009), and a key theorem by Duersch et al. (2012a). Throughout this note, we use “theorem” to refer to what has been shown by others, “lemma” for what we prove and use in the proof of our “main theorem,” and “proposition” for our additional results.

Figure 1 illustrates our novel ‘triangular property,’ which establishes a bridge between the game-theoretic and decision-theoretic results via the Independence axiom. A two-player symmetric game satisfies this triangular property if, for any three actions x, y and z in X, the equation \(\pi (x,y)+\pi (y,z)+\pi (z,x)=\pi (x,z)+\pi (z,y)+\pi (y,x)\) is satisfied. We then show that this property is equivalent to a triangular property in mixed strategies (Lemma 3), which is, in turn, equivalent to the preference relation \(\succeq\), defined as above, satisfying the Independence axiom (Lemmata 2 and 4).

We also discuss the diagonal property, originally introduced by Potters et al. (2009), which states that for any four strategies w, x, y and z, \(\pi (x,y)+\pi (z,w)=\pi (x,w)+\pi (z,y)\); that is, the sum of the two ‘diagonal’ payoffs equals the sum of the two ‘anti-diagonal’ payoffs in the \(2\times 2\) submatrix induced by rows x, z and columns y, w. They prove that a zero-sum game is a potential game (Monderer and Shapley 1996) if and only if it satisfies the diagonal property (Theorem 3). In addition, Duersch et al. (2012a) prove that a symmetric two-player game is a potential game if and only if its ‘relative payoff’ game—which is a zero-sum game obtained by subtracting the payoffs of Player 2 from those of Player 1—is also a potential game (Theorem 4).

Using Duersch et al.’s and Potters et al.’s results and two lemmata, we show that the triangular property is both necessary and sufficient for the game to be a potential game. Additionally, we demonstrate that this triangular property is equivalent to the Independence axiom (Lemma 4). Finally, by using SSB utility axiomatization (Theorem 1 and Lemma 1) and applying a result from Fishburn (1982) regarding a connection between the Independence and SSB utility axioms (Theorem 2), we confirm that the preference relation \(\succeq\), defined as \(p\succeq q\) if and only if \(\pi (p,q)\ge \pi (q,p)\), satisfies the vNM expected utility axioms.

In addition to the contributions outlined above, our paper is conceptually related to work connecting vNM expected utility, as first axiomatized by von Neumann and Morgenstern (1953), with game-theoretic concepts. For instance, Roth (1977) shows that a player’s Shapley value in a game corresponds to the vNM utility function that represents the player’s preferences over player positions in the game. In this setting, consider a decision problem where the set of alternatives consists of the different player positions in a game. If the decision-maker’s preferences \(\succeq\) over these positions satisfy Roth’s (1977) axioms, then the two-player symmetric game \((\Delta X,\pi )\), where \(\pi\) represents \(\succeq\), is a potential game if and only if the Shapley value in the initial game is the vNM utility function that represents the decision-maker’s preferences over the player positions.

2 The setup

Let X be a finite set of alternatives and \(\Delta X\) the set of all lotteries (i.e., probability distributions) over X. The preference relation \(\succeq \,\subseteq \Delta X\times \Delta X\) represents the preferences of a decision-maker over lotteries on X. We assume that \(\succeq\) is complete: for every p and q in \(\Delta X\), \(p\succeq q\) or \(q\succeq p\). We use the notation \(p\sim q\) to indicate that \(p\succeq q\) and \(q\succeq p\), and \(p\succ q\) if \(p\succeq q\) but not \(p\sim q\). Since the relation \(\succeq\) is not assumed to be transitive, it need not be representable by a one-variable, order-preserving utility function; hence the subsequent definition.

Definition 1

A function \(u:\Delta X\times \Delta X\rightarrow \mathbb {R}\) is said to represent the relation \(\succeq\) if, for every p and q in \(\Delta X\), the following conditions hold:

  1. \(p\sim q\) if and only if \(u(p,q)=u(q,p)\),

  2. \(p\succ q\) if and only if \(u(p,q)>u(q,p)\).

The function \(u(p,q)\) may be interpreted as a measure of the intensity of preference for lottery p over lottery q. Specifically, if the value of \(u(p,q)\) is greater than \(u(q,p)\), then lottery p is preferred to lottery q.
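To make Definition 1 concrete, the following minimal sketch (our own illustration, not taken from the paper) represents a complete but intransitive preference over three pure alternatives by a two-variable function u; the alternative names and payoff values are ours. Because the strict preference cycles, no one-variable order-preserving utility exists, while the two-variable function u works.

```python
# A rock-paper-scissors-style matrix as a two-variable function u that
# represents, in the sense of Definition 1, a complete but intransitive
# preference (restricted to pure alternatives for brevity).
u = {("rock", "paper"): -1, ("paper", "rock"): 1,
     ("paper", "scissors"): -1, ("scissors", "paper"): 1,
     ("scissors", "rock"): -1, ("rock", "scissors"): 1,
     ("rock", "rock"): 0, ("paper", "paper"): 0, ("scissors", "scissors"): 0}

def strictly_preferred(p, q):
    # p is strictly preferred to q iff u(p, q) > u(q, p)  (Definition 1, part 2)
    return u[(p, q)] > u[(q, p)]

# The strict preference cycles, so the relation is not transitive:
assert strictly_preferred("paper", "rock")
assert strictly_preferred("scissors", "paper")
assert strictly_preferred("rock", "scissors")
```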

We next present the SSB (skew-symmetric and bilinear) utility axioms introduced by Fishburn (1982). For all p, q and r in \(\Delta X\):

Axiom C (Continuity). If \(p\succ q\succ r\), then \(q\sim \beta p+(1-\beta )r\) for some \(\beta \in (0,1)\).

Axiom D (Dominance). For all \(\alpha \in (0,1)\): \(p\succ q\) and \(p\succeq r\) imply \(p\succ \alpha q +(1-\alpha )r\); \(q\succ p\) and \(r\succeq p\) imply \(\alpha q +(1-\alpha )r\succ p\); and \(p\sim q\) and \(p\sim r\) imply \(p\sim \alpha q+(1-\alpha )r\).

Axiom S (Symmetry). For all \(\alpha \in (0,1)\): \(p\succ q\succ r\), \(p\succ r\) and \(q\sim \frac{1}{2}p+\frac{1}{2}r\) imply \(\alpha p +(1-\alpha )r\sim \frac{1}{2}p+\frac{1}{2}q \,\Longleftrightarrow \,\alpha r+(1-\alpha )p\sim \frac{1}{2}r+\frac{1}{2}q\).

As the following theorem states, axioms C, D and S are necessary and sufficient for representing preferences by an SSB utility function.

Theorem 1

(Theorem 1 from Fishburn 1982) The relation \(\succeq\) on \(\Delta X\) satisfies axioms C, D and S if and only if there exists an SSB function \(\phi\) such that for all p and q in \(\Delta X\), \(p\succ q \,\Longleftrightarrow \, \phi (p,q)>0\). Moreover, \(\phi\) is unique up to multiplication by a positive constant.

We additionally introduce the following well-known axiom, which is used in the axiomatization of von Neumann-Morgenstern (vNM) utility by Jensen (1967).

Axiom I (Independence). For all p, q and r in \(\Delta X\) and all \(\alpha \in (0,1)\), \(p\succeq q \,\Longleftrightarrow \, \alpha p+(1-\alpha )r\succeq \alpha q+(1-\alpha )r\).

Theorem 2

(Proposition 1 from Fishburn 1982) The relation \(\succeq\) on \(\Delta X\) satisfies axioms C, D and I if and only if there exists a one-variable vNM utility function that represents \(\succeq\).

In other words, replacing axiom S in the SSB utility theorem with the more powerful axiom I leads to a vNM representation, which reduces the order-preserving function from two variables to one. Note that this representation implies the transitivity of the preference relation \(\succeq\), although it is obtained without explicitly assuming transitivity. For this reason, our main theorem (presented in the next section) would trivially hold if we were to explicitly assume the transitivity of \(\succeq\) in addition to the vNM expected utility representation.

Let \(G=(\Delta X,\pi )\) denote a two-player symmetric game, where each player has the same finite set of pure actions X, and \(\Delta X\) represents the set of mixed strategies of each player. When Player 1 plays \(p\in \Delta X\) and Player 2 plays \(q\in \Delta X\), the resulting (expected) payoffs are \(\pi (p,q)\) for Player 1 and \(\pi (q,p)\) for Player 2. Here, \(\pi :\Delta X\times \Delta X\rightarrow \mathbb {R}\) is a von Neumann-Morgenstern expected payoff function, which is bilinear. Since \(\pi\) can be interpreted as a utility function representing a preference relation \(\succeq\) on \(\Delta X\), we refer to it as a payoff function when it is interpreted within the context of a game. G is called a two-player symmetric zero-sum game when for all p and q in \(\Delta X\), \(\pi (p,q)+ \pi (q,p)=0\). Finally, a two-player symmetric game \((\Delta X,\pi )\) is called a potential game if there exists a function \(P:X\times X\rightarrow \mathbb {R}\) such that for all y and all x and z in X, we have \(\pi (x,y)-\pi (z,y)=P(x,y)-P(z,y)=P(y,x)-P(y,z)\).
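As an illustration of these definitions, the following sketch (our own; the example matrix and the function names are not from the paper) computes the bilinear expected payoff and checks the potential condition just stated on a simple common-interest specification.

```python
import itertools
import numpy as np

def expected_payoff(pi, p, q):
    # Bilinear extension: pi(p, q) = sum_x sum_y p(x) q(y) pi(x, y)
    return p @ pi @ q

def is_exact_potential(pi, P, tol=1e-9):
    # Checks pi(x,y) - pi(z,y) = P(x,y) - P(z,y) = P(y,x) - P(y,z) for all x, y, z
    n = pi.shape[0]
    return all(abs(pi[x, y] - pi[z, y] - (P[x, y] - P[z, y])) < tol
               and abs(pi[x, y] - pi[z, y] - (P[y, x] - P[y, z])) < tol
               for x, y, z in itertools.product(range(n), repeat=3))

v = np.array([1.0, 2.0, 5.0])
pi = v[:, None] + v[None, :]          # a common-interest symmetric game
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.2, 0.6])
print(expected_payoff(pi, p, q))      # Player 1's expected payoff at (p, q)
print(is_exact_potential(pi, pi))     # True: here pi is its own potential
```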

3 The main theorem

Throughout the section, \(\succeq\) represents a complete preference relation and \(\pi\) represents the payoff function of game \(G=(\Delta X,\pi )\). We now present our main result:

Main Theorem. Let \(\succeq\) be a preference relation on \(\Delta X\) represented by a two-variable utility function \(\pi :\Delta X\times \Delta X\rightarrow \mathbb {R}\). If \(\succeq\) satisfies the von Neumann-Morgenstern utility axioms, then \(G=(\Delta X,\pi )\) is a potential game. Conversely, let \(G=(\Delta X,\pi )\) be a two-player symmetric game and assume that \(\pi\) represents \(\succeq\). If G is a potential game, then the preference relation \(\succeq\) satisfies the von Neumann-Morgenstern utility axioms.

Before proving the main theorem, we introduce a series of lemmata. In the first lemma, we establish a relationship between two-player symmetric games and SSB utility axioms.

Lemma 1

(G and SSB utility axioms) Let \((\Delta X,\pi )\) be a two-player symmetric game where \(\pi\) represents preference relation \(\succeq\). Then, \(\succeq\) satisfies the SSB utility axioms C, D and S. Conversely, if a preference relation \(\succeq\) on \(\Delta X\) satisfies the SSB utility axioms, then one can construct a two-player symmetric game \((\Delta X,\pi )\) such that \(\pi\) represents \(\succeq\).

Proof

If \(\pi\) represents the preference relation \(\succeq\), then for all p and q in \(\Delta X\), \(p\succeq q\) if and only if \(\pi (p,q)\ge \pi (q,p)\). Define the function \(\phi :\Delta X\times \Delta X\rightarrow \mathbb {R}\) such that for all p and q in \(\Delta X\), \(\phi (p,q)=\pi (p,q)-\pi (q,p)\).

First, we show that \(\phi\) is skew-symmetric. Since \(\phi (q,p)=\pi (q,p)-\pi (p,q)\), we have \(\phi (p,q)=-\phi (q,p)\), as desired.

Next, we show that \(\phi\) is bilinear, i.e., \(\phi (p,q)=\sum _{x\in X}\sum _{y\in X}p(x)q(y)\phi (x,y)\). By the definition of \(\phi\) and the bilinearity of \(\pi\), we have

$$\begin{aligned} \phi (p,q)&\>=\> \pi (p,q) - \pi (q,p) \>=\> \sum _{x\in X}\sum _{y\in X}p(x)q(y)\pi (x,y) - \sum _{x\in X}\sum _{y\in X}p(x)q(y)\pi (y,x) \nonumber \\&\>=\> \sum _{x\in X}\sum _{y\in X}p(x)q(y)\big (\pi (x,y)-\pi (y,x)\big ) \>=\> \sum _{x\in X}\sum _{y\in X}p(x)q(y)\phi (x,y), \end{aligned}$$
(1)

as desired.

By Theorem 1, \(\succeq\) satisfies axioms C, D and S if and only if there exists an SSB function that represents \(\succeq\). Since \(\phi\) is an SSB function and represents \(\succeq\), it follows that \(\succeq\) satisfies axioms C, D and S.

Conversely, if \(\succeq\) satisfies the SSB utility axioms, then, by Theorem 1, there exists an SSB function \(\phi\) that represents \(\succeq\). We can then construct a bilinear function \(\pi :\Delta X\times \Delta X\rightarrow \mathbb {R}\) that represents \(\succeq\) as follows. First, define a function \(\pi ':X\times X\rightarrow \mathbb {R}\) such that \(\pi '(x,y)-\pi '(y,x)=\phi (x,y)\) for all x and y in X (for instance, \(\pi '(x,y)=\phi (x,y)/2\)). Then, define \(\pi\) as the bilinear extension of \(\pi '\) to the domain \(\Delta X\times \Delta X\). As in Equation (1), it follows that for all p and q in \(\Delta X\), \(\pi (p,q)-\pi (q,p)=\phi (p,q)\). Note that the main difference between this construction and the previous one is its direction. Starting from the game’s payoff function, one can construct a unique SSB utility function \(\phi\); starting from \(\phi\), however, there may be more than one payoff function \(\pi\) satisfying \(\pi (p,q)-\pi (q,p)=\phi (p,q)\) for all p and q in \(\Delta X\). Finally, we construct \((\Delta X,\pi )\) as a two-player symmetric game, with \(\pi (p,q)\) representing the expected payoff of Player 1, where \(p\in \Delta X\) and \(q\in \Delta X\) are the mixed strategies of Player 1 and Player 2, respectively. \(\square\)
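The non-uniqueness noted at the end of the proof is easy to see numerically; in the sketch below (our own, with arbitrary random data), two payoff matrices that differ by a symmetric matrix induce the same skew-symmetric function \(\phi\) and therefore the same preference relation.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
phi = A - A.T                 # a skew-symmetric matrix, playing the role of phi on X

pi_one = phi / 2              # one choice with pi(x, y) - pi(y, x) = phi(x, y)
S = rng.normal(size=(4, 4))
pi_two = phi / 2 + (S + S.T)  # another choice: it differs by a symmetric matrix

for pi in (pi_one, pi_two):
    assert np.allclose(pi - pi.T, phi)   # both induce the same phi, hence the same relation
```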

We now present the Independence axiom for the preference relation \(\succeq\) in the form of a function that represents \(\succeq\).

Lemma 2

(Independence in functional form) Let \((\Delta X,\pi )\) be a two-player symmetric game and assume that \(\pi\) represents \(\succeq\). Then, \(\succeq\) satisfies the Independence axiom if and only if for every p, q and r in \(\Delta X\) and for all \(\alpha \in (0,1)\),

$$\begin{aligned} & \pi(p,q) \geq \pi(q,p) \,\Longleftrightarrow\, \\ & \alpha\big[\pi(p,q)-\pi(q,p)\big]+(1-\alpha)\big[\pi(p,r)-\pi(r,p)+\pi(r,q)-\pi(q,r)\big] \geq 0. \end{aligned}$$
(2)

Proof

By definition of the Independence axiom, for all p, q and r in \(\Delta X\) and all \(\alpha \in (0,1)\)

$$\begin{aligned} p\succeq q \,\Longleftrightarrow \, \alpha p+(1-\alpha )r\succeq \alpha q+(1-\alpha )r. \end{aligned}$$

This implies that for every p, q and r in \(\Delta X\) and for all \(\alpha \in (0,1)\), we have

$$\begin{aligned}&\pi(p,q)\geq\pi(q,p) \,\Longleftrightarrow\, \\ &\pi\big(\alpha p+(1-\alpha)r,\alpha q+(1-\alpha)r\big)\geq\pi\big(\alpha q+(1-\alpha)r,\alpha p+(1-\alpha)r\big).\end{aligned}$$

Expanding both payoffs in the latter inequality by the bilinearity of \(\pi\), the \((1-\alpha )^{2}\pi (r,r)\) terms coincide, and the difference of the two sides equals

$$\begin{aligned} \alpha \Big (\alpha \big [\pi (p,q)-\pi (q,p)\big ]+(1-\alpha )\big [\pi (p,r)-\pi (r,p)+\pi (r,q)-\pi (q,r)\big ]\Big ). \end{aligned}$$

Since \(\alpha >0\), this difference is nonnegative if and only if the expression inside the outer parentheses is nonnegative, which yields

$$\begin{aligned} & \pi(p,q)\geq\pi(q,p) \,\Longleftrightarrow\,\\ &\alpha\big[\pi(p,q)-\pi(q,p)\big]+(1-\alpha)\big[\pi(p,r)-\pi(r,p)+\pi(r,q)-\pi(q,r)\big]\geq 0.\end{aligned}$$

\(\square\)

The subsequent lemma establishes a relationship between the triangular property in pure strategies and mixed strategies.

Lemma 3

(Triangular property in pure and mixed strategies) Let \((\Delta X,\pi )\) be a two-player symmetric game. For all x, y and z in X,

$$\begin{aligned} \pi (x,y)+\pi (y,z)+\pi (z,x)=\pi (x,z)+\pi (z,y)+\pi (y,x) \end{aligned}$$

if and only if for all p, q and r in \(\Delta X\),

$$\begin{aligned} \pi (p,q)+\pi (q,r)+\pi (r,p) = \pi (p,r)+\pi (r,q)+\pi (q,p). \end{aligned}$$
(3)

Proof

(\(\Leftarrow\)): This direction is immediate, because every pure strategy is a degenerate mixed strategy.

(\(\Rightarrow\)): Bilinearity of \(\pi\) implies that

$$\begin{aligned} \pi (p,q) = \sum _{x\in X}\sum _{y\in X}p(x)q(y)\pi (x,y). \end{aligned}$$

We start by substituting \(\pi (x,y)\) with \(\pi (x,z)+\pi (z,y)+\pi (y,x)-\pi (y,z)-\pi (z,x)\). Then, for all \(z\in X\), we have

$$\begin{aligned} \pi (p,q)&= \sum _{x\in X}\sum _{y\in X}p(x)q(y)\big (\pi (x,z)+\pi (z,y)+\pi (y,x)-\pi (y,z)-\pi (z,x)\big ). \end{aligned}$$

Using the distributive property of multiplication, we obtain

$$\begin{aligned} \pi (p,q)&= \sum _{x\in X}\sum _{y\in X}p(x)q(y)\pi (x,z) + \sum _{x\in X}\sum _{y\in X}p(x)q(y)\pi (z,y) + \sum _{x\in X}\sum _{y\in X}p(x)q(y)\pi (y,x) \\&\qquad - \sum _{x\in X}\sum _{y\in X}p(x)q(y)\pi (y,z) - \sum _{x\in X}\sum _{y\in X}p(x)q(y)\pi (z,x). \end{aligned}$$

Rearranging the sums implies that

$$\begin{aligned} \pi (p, q)&= \Big (\sum _{x\in X}p(x)\pi (x,z)\Big )\Big (\sum _{y\in X}q(y)\Big ) + \Big (\sum _{y\in X}q(y)\pi (z,y)\Big )\Big (\sum _{x\in X}p(x)\Big ) + \pi (q,p) \\&\qquad - \Big (\sum _{y\in X}q(y)\pi (y,z)\Big )\Big (\sum _{x\in X}p(x)\Big ) - \Big (\sum _{x\in X}p(x)\pi (z,x)\Big )\Big (\sum _{y\in X}q(y)\Big ) \end{aligned}$$

Since \(\sum _{x\in X}p(x)=\sum _{y\in X}q(y)=1\), we get

$$\begin{aligned} \pi (p,q)&= \sum _{x\in X}p(x)\pi (x,z) + \sum _{y\in X}q(y)\pi (z,y) + \pi (q,p) - \sum _{y\in X}q(y)\pi (y,z) \\&\quad - \sum _{x\in X}p(x)\pi (z,x). \end{aligned}$$

Finally, by rearranging the index, we obtain

$$\begin{aligned} \pi (p,q)&= \sum _{x\in X}\big (p(x)-q(x)\big )\pi (x,z) + \sum _{x\in X}\big (q(x)-p(x)\big )\pi (z,x) + \pi (q,p). \end{aligned}$$
(4)

Following the same steps as above yields the equations

$$\begin{aligned} \pi (q,r) = \sum _{x\in X}\big (q(x)-r(x)\big )\pi (x,z) + \sum _{x\in X}\big (r(x)-q(x)\big )\pi (z,x) + \pi (r,q) \end{aligned}$$
(5)

and

$$\begin{aligned} \pi (r,p) = \sum _{x\in X}\big (r(x)-p(x)\big )\pi (x,z) + \sum _{x\in X}\big (p(x)-r(x)\big )\pi (z,x) + \pi (p,r). \end{aligned}$$
(6)

By summing Eqs (4), (5), and (6), we find that

$$\begin{aligned} \pi (p,q) + \pi (q,r) + \pi (r,p) = \pi (p,r) + \pi (r,q) + \pi (q,p), \end{aligned}$$

as desired. \(\square\)
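The direction just proved can also be checked numerically; the sketch below (our own, built around a matrix that satisfies the pure-strategy triangular property by construction) verifies Equation (3) on randomly drawn lotteries.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 4
u = rng.normal(size=n)
S = rng.normal(size=(n, n))
pi = u[:, None] - u[None, :] + (S + S.T)   # triangular property holds by construction

def triangular_pure(pi, tol=1e-9):
    return all(abs(pi[x, y] + pi[y, z] + pi[z, x]
                   - pi[x, z] - pi[z, y] - pi[y, x]) < tol
               for x, y, z in itertools.product(range(len(pi)), repeat=3))

def random_lottery(rng, n):
    w = rng.random(n)
    return w / w.sum()

assert triangular_pure(pi)
for _ in range(100):
    p, q, r = (random_lottery(rng, n) for _ in range(3))
    lhs = p @ pi @ q + q @ pi @ r + r @ pi @ p
    rhs = p @ pi @ r + r @ pi @ q + q @ pi @ p
    assert abs(lhs - rhs) < 1e-8           # Equation (3) holds in mixed strategies
```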

The following lemma establishes a relationship between the Independence axiom and the triangular property.

Lemma 4

(Independence and triangular property) Let \((\Delta X,\pi )\) be a two-player symmetric game and assume that \(\pi\) represents \(\succeq\). Then, \(\succeq\) satisfies the Independence axiom if and only if for every x, y and z in X,

$$\begin{aligned} \pi (x,y) + \pi (y,z) + \pi (z,x) = \pi (x,z) + \pi (z,y) + \pi (y,x). \end{aligned}$$
(7)

Proof

(\(\Rightarrow\)): By Lemma 1, if \(\pi\) represents \(\succeq\), then \(\succeq\) satisfies axioms C, D and S. If, in addition, it satisfies the Independence axiom, then, by Theorem 2, there exists a one-variable vNM utility function \(\bar{\pi }\) that represents \(\succeq\). Since both \(\pi (p,q)-\pi (q,p)\) and \(\bar{\pi }(p)-\bar{\pi }(q)\) are SSB functions that represent \(\succeq\), the uniqueness part of Theorem 1 yields a constant \(c>0\) such that for all p and q in \(\Delta X\), \(\pi (p,q)-\pi (q,p)=c\big (\bar{\pi }(p)-\bar{\pi }(q)\big )\). It then follows that for all x, y and z in X,

$$\begin{aligned} & \pi (x,y)-\pi (y,x)+\pi (y,z)-\pi (z,y)+\pi (z,x)-\pi (x,z)\\ & \quad = c\big (\bar{\pi }(x)-\bar{\pi }(y)+\bar{\pi }(y)-\bar{\pi }(z)+\bar{\pi }(z)-\bar{\pi }(x)\big )=0, \end{aligned}$$

which rearranges to Equation (7).

(\(\Leftarrow\)): By Lemma 3, for all x, y and z in X, \(\pi (x,y)+\pi (y,z)+\pi (z,x)=\pi (x,z)+\pi (z,y)+\pi (y,x)\) if and only if Equation (3) holds. Then, substituting \(\pi (p,r)-\pi (r,p)+\pi (r,q)-\pi (q,r)\) with \(\pi (p,q)-\pi (q,p)\) reduces Equation (2) to \(\pi (p,q)\ge \pi (q,p) \,\Leftrightarrow \, \pi (p,q)\ge \pi (q,p)\). Thus, by Lemma 2, the Independence axiom is satisfied. \(\square\)

Next, we state Potters et al.’s lemma in the context of two-player symmetric zero-sum games.

Theorem 3

(Lemma 2.1 from Potters et al. 2009) Let G be a two-player symmetric zero-sum game. Then, G is a potential game if and only if the diagonal property holds: for all x, y, z and w in X, \(\pi (x,y)+\pi (z,w)=\pi (x,w)+\pi (z,y)\).

Potters et al.’s diagonal property is originally stated for zero-sum games, and it is a straightforward corollary to note that this property reduces to the triangular property in symmetric zero-sum games:

Lemma 5

Let \(G=(\Delta X,\pi )\) be a two-player symmetric zero-sum game. Then, for all x, y, z and w in X, the following equations are equivalent:

$$\begin{aligned} \pi (x,y)+\pi (z,w)=\pi (x,w)+\pi (z,y) \end{aligned}$$
(8)

and

$$\begin{aligned} \pi (x,y)=\pi (x,z)+\pi (z,y). \end{aligned}$$
(9)

Proof

(\(\Rightarrow\)): If \(w=z\), then Equation (8) reduces to Equation (9) because G being a symmetric zero-sum game implies that \(\pi (z,z)=0\).

(\(\Leftarrow\)): By Equation (9), \(\pi (x,y)=\pi (x,z)+\pi (z,y)\) and \(\pi (z,w)=\pi (z,x)+\pi (x,w)\). Summing these two equations yields Equation (8), because G being a symmetric zero-sum game implies that \(\pi (z,x)=-\pi (x,z)\). \(\square\)

Finally, we state Duersch et al.’s theorem in the context of two-player symmetric games as it is used in the proof of our main theorem.

Theorem 4

(Theorem 20 from Duersch et al. 2012a) Let \((\Delta X,\pi )\) be a two-player symmetric game and \((\Delta X,\pi ')\) be its relative payoff game where for all x and y in X we have \(\pi '(x,y)=\pi (x,y)-\pi (y,x)\). Then, \((\Delta X,\pi )\) is a potential game if and only if \((\Delta X,\pi ')\) is a potential game.
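The two results can be combined numerically. In the sketch below (our own; the utility values and the symmetric part are arbitrary), the relative payoff matrix of a game built from a vNM utility satisfies the diagonal property of Theorem 3, so it is a potential game, and by Theorem 4 so is the original game.

```python
import itertools
import numpy as np

def relative_payoff(pi):
    # pi'(x, y) = pi(x, y) - pi(y, x), as in Theorem 4
    return pi - pi.T

def diagonal_property(v, tol=1e-9):
    # Theorem 3: v(x, y) + v(z, w) = v(x, w) + v(z, y) for all x, y, z, w
    n = v.shape[0]
    return all(abs(v[x, y] + v[z, w] - v[x, w] - v[z, y]) < tol
               for x, y, z, w in itertools.product(range(n), repeat=4))

u = np.array([0.0, 1.0, 3.0])               # a vNM utility on X
S = np.array([[2.0, 5.0, 1.0],
              [5.0, 0.0, 4.0],
              [1.0, 4.0, 2.0]])              # an arbitrary symmetric part
pi = u[:, None] - u[None, :] + S             # here pi(x, y) - pi(y, x) = 2(u(x) - u(y))

print(diagonal_property(relative_payoff(pi)))   # True
```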

Now, we are ready to prove our main theorem.

Proof of Main Theorem

Let \(\succeq\) be a preference relation on \(\Delta X\) represented by a function \(\pi :\Delta X\times \Delta X\rightarrow \mathbb {R}\). If \(\succeq\) satisfies von Neumann-Morgenstern utility axioms, then by Lemma 4 for every x, y, and z in X, the triangular equality (Equation (7)) holds, i.e., \(\pi (x,y)+\pi (y,z)+\pi (z,x)=\pi (x,z)+\pi (z,y)+\pi (y,x)\).

Define a two-player symmetric zero-sum game \((\Delta X,v)\) such that for all p and q in \(\Delta X\), \(v(p,q)=\pi (p,q)-\pi (q,p)\). Rearranging Equation (7) yields

$$\begin{aligned} \pi (x,y) - \pi (y,x) = \pi (x,z) - \pi (z,x) + \pi (z,y) - \pi (y,z) \end{aligned}$$

if and only if

$$\begin{aligned} v(x,y) = v(x,z) + v(z,y), \end{aligned}$$

which, by Lemma 5, is equivalent to

$$\begin{aligned} v(x,y) + v(z,w) = v(x,w) + v(z,y). \end{aligned}$$

Now, by Theorem 3, the latter equation holds if and only if \((\Delta X,v)\) is a potential game. Furthermore, by Theorem 4, \((\Delta X,v)\) is a potential game if and only if \((\Delta X,\pi )\) is a potential game, as desired.

Now, let \((\Delta X,\pi )\) be a two-player symmetric game where \(\pi\) represents \(\succeq\). We have just shown that \((\Delta X,\pi )\) is a potential game if and only if Equation (7) holds. By Lemma 4, this implies that \(\succeq\) satisfies the Independence axiom. By Lemma 1, if \(\pi\) represents \(\succeq\), then \(\succeq\) satisfies axioms C, D, and S. Theorem 2 then implies that \(\succeq\) satisfies vNM utility axioms, as desired. \(\square\)
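For completeness, the argument also suggests an explicit construction of a potential once the triangular property holds. The sketch below (our own illustration; neither the formula nor the names are taken from the paper) fixes a reference action \(z_0\), sets \(f(x)=\pi (x,z_0)-\pi (z_0,x)\), so that \(\pi (x,y)-\pi (y,x)=f(x)-f(y)\), and uses \(P(x,y)=\tfrac{1}{2}\big (\pi (x,y)+\pi (y,x)\big )+\tfrac{1}{2}\big (f(x)+f(y)\big )\) as a candidate potential, checking the condition of Section 2 numerically.

```python
import itertools
import numpy as np

def potential_from_triangular(pi, z0=0):
    # Under the triangular property, f(x) = pi(x, z0) - pi(z0, x) satisfies
    # pi(x, y) - pi(y, x) = f(x) - f(y); adding f to the symmetric part of pi
    # gives a candidate exact potential.
    f = pi[:, z0] - pi[z0, :]
    return (pi + pi.T) / 2 + (f[:, None] + f[None, :]) / 2

u = np.array([0.0, 1.0, 3.0, -2.0])          # a vNM utility on X
S = np.arange(16.0).reshape(4, 4)
pi = u[:, None] - u[None, :] + (S + S.T)     # the triangular property holds here

P = potential_from_triangular(pi)
n = pi.shape[0]
ok = all(abs(pi[x, y] - pi[z, y] - (P[x, y] - P[z, y])) < 1e-9
         and abs(pi[x, y] - pi[z, y] - (P[y, x] - P[y, z])) < 1e-9
         for x, y, z in itertools.product(range(n), repeat=3))
print(ok)   # True: P is an exact potential for pi
```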

4 Additional remarks

Here, we first introduce the concepts of maximal elements, optimal threat strategies, and finite population evolutionary stable strategies, and subsequently illustrate a connection among them.

A lottery \(p\in \Delta X\) is said to be a maximal element with respect to a preference relation \(\succeq\) if there exists no element \(q\in \Delta X\) such that \(q\succ p\). Given that \(\succeq\) is a complete relation, this definition simplifies to: for all q in \(\Delta X\), \(p\succeq q\).

Nash (1953) proposed a non-cooperative negotiation model as an extension of his earlier bargaining problem (Nash 1950). In this model, players choose optimal threat strategies prior to choosing their demands. If the demands are compatible, then each player obtains their respective demand. If the demands are not compatible, then the threat strategies are executed, essentially serving as a disagreement point. In the case of two-player transferable utility games, optimal threat strategies are given by the optimal (maximin) strategies in the relative payoff game derived from the original game; that is, the zero-sum game obtained by subtracting the payoffs of Player 2 from those of Player 1 (for more details, see, e.g., Owen 1968).

In a similar vein, Schaffer (1988, 1989) showed that a finite population evolutionary stable strategy in symmetric two-player games corresponds to an optimal pure strategy in the relative payoff game. This result implies that an optimal pure threat strategy coincides with a finite population evolutionary stable strategy in symmetric two-player games. For additional applications of finite population evolutionary stable strategies, see Ania (2008) and Hehenkamp et al. (2010).
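Following the characterization just described (and restricting attention to pure strategies), a pure strategy x is optimal in the relative payoff game of a symmetric game exactly when \(\pi (x,y)\ge \pi (y,x)\) for every pure y, since the value of that zero-sum game is zero. The sketch below (our own; the Hawk-Dove numbers and the function name are ours) searches for such strategies.

```python
import numpy as np

def pure_fess(pi):
    # x is optimal in the relative payoff game iff pi(x, y) >= pi(y, x) for all y
    v = pi - pi.T
    return [x for x in range(pi.shape[0]) if (v[x, :] >= 0).all()]

hawk_dove = np.array([[0.0, 4.0],     # rows/columns ordered as Hawk, Dove
                      [1.0, 2.0]])
print(pure_fess(hawk_dove))           # [0]: Hawk is never relatively beaten
```

In this example \((\text{Hawk},\text{Hawk})\) is not a Nash equilibrium, which illustrates the earlier remark that a finite population evolutionary stable strategy need not be a Nash equilibrium strategy.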

We are now able to formulate the following proposition:

Proposition 1

Let \((\Delta X,\pi )\) be a two-player symmetric game, where \(\pi\) represents \(\succeq\). A strategy \(p\in \Delta X\) is an optimal threat strategy if and only if p is a maximal element with respect to \(\succeq\).

Proof

(\(\Rightarrow\)): If \(p\in \Delta X\) is an optimal threat strategy then for all \(q\in \Delta X\), \(\pi '(p,q)\ge 0\), where \((\Delta X,\pi ')\) is the relative payoff game derived from \((\Delta X,\pi )\). Since \((\Delta X,\pi )\) is a symmetric game, \((\Delta X,\pi ')\) is a symmetric zero-sum game and hence its value is 0. It follows that for all \(q\in \Delta X\), \(\pi (p,q)\ge \pi (q,p)\). Thus, p is a maximal element with respect to \(\succeq\).

(\(\Leftarrow\)): If \(p\in \Delta X\) is a maximal element with respect to \(\succeq\), then for all \(q\in \Delta X\), \(\pi (p,q)\ge \pi (q,p)\), i.e., \(\pi '(p,q)\ge 0\). Since the value of the relative payoff game is 0, p is an optimal strategy in that game and hence an optimal threat strategy in \((\Delta X, \pi )\), as desired. \(\square\)

We now turn our attention to the Independence axiom. The following proposition offers a formula to determine the number of linearly independent equations needed for the triangular property to hold.

Proposition 2

Given a matrix \([m_{ij}]_{n\times n}\) with \(n\ge 3\), let the triangular property be such that for all i, j and k in \(\{1,2,\ldots ,n\}\) we have \(m_{ij}+m_{jk}+m_{ki}=m_{ik}+m_{kj}+m_{ji}\). Then, the number of linearly independent equations needed for the triangular property to be satisfied is given by \(\frac{(n-1)(n-2)}{2}\).

Proof

Let \(E_{ijk}\) denote the equation \(m_{ij}+m_{jk}+m_{ki}=m_{ik}+m_{kj}+m_{ji}\), and E denote the set of all such equations. We can restrict attention to the case \(i< j < k\), because the equations \(E_{ijk}, E_{ikj}, E_{jik}, E_{jki}, E_{kji}\), and \(E_{kij}\) represent the same equation. Keeping the triangle in Fig. 1 in mind, the sums do not change regardless of which node we start summing from and whether we proceed clockwise or counterclockwise.

We show that \(E'=\{E_{1\ell m} \mid 1<\ell <m\}\) is a basis for E. To see this, first notice that for any quadruple \(i< j< k < m\), the equations \(E_{ijk}\), \(E_{ijm}\) and \(E_{ikm}\) imply the equation \(E_{jkm}\). It follows that any equation \(E_{ijk}\) with \(i \ne 1\) can be obtained from the equations \(E_{1ij}\), \(E_{1ik}\) and \(E_{1jk}\). What remains to be shown is that no equation in \(E'\) can be obtained as a linear combination of the other equations in \(E'\). Toward a contradiction, suppose \(E_{1jk}\) is a linear combination of equations in \(E'\backslash \{E_{1jk}\}\). Then, there must exist at least one equation in this set that contains the term \(m_{jk}\) (otherwise, the term \(m_{jk}\) could never appear). However, the only equation in \(E'\) containing \(m_{jk}\) is \(E_{1jk}\) itself, which contradicts our supposition. Finally, the number of equations in \(E'\) is the number of pairs \(1<\ell <m\le n\), which is \(\frac{(n-1)(n-2)}{2}\), precisely the number given in the proposition. \(\square\)
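The count in Proposition 2 can also be verified numerically. The sketch below (our own) writes each triangular equation as a vector of coefficients on the \(n^{2}\) entries \(m_{ij}\) and compares the rank of the resulting system with \(\frac{(n-1)(n-2)}{2}\).

```python
import itertools
import numpy as np

def triangular_rank(n):
    rows = []
    for i, j, k in itertools.combinations(range(n), 3):
        row = np.zeros(n * n)
        for a, b in ((i, j), (j, k), (k, i)):
            row[a * n + b] += 1                 # left-hand-side entries
        for a, b in ((i, k), (k, j), (j, i)):
            row[a * n + b] -= 1                 # right-hand-side entries
        rows.append(row)
    return np.linalg.matrix_rank(np.array(rows))

for n in range(3, 8):
    assert triangular_rank(n) == (n - 1) * (n - 2) // 2
print("rank equals (n-1)(n-2)/2 for n = 3, ..., 7")
```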

Among the expected utility axioms, the Independence axiom often attracts scrutiny due to its strong behavioral implications. To explore the strength of this axiom, Proposition 2 presents a formula, \(\frac{(n-1)(n-2)}{2}\), where n represents the number of pure strategies in a two-player symmetric game. This formula calculates the number of linearly independent equations required for the triangular property to be satisfied, which, as established by Lemma 4, is equivalent to the Independence axiom. Consequently, this formula illustrates a quadratic increase in the number of required equations for the Independence axiom as the number of available alternatives increases, offering an alternative perspective on the strength of the Independence axiom in our setting. Intuitively speaking, as the number of alternatives grows, it could become increasingly challenging for a decision-maker’s preferences to be consistent with the Independence axiom.