## Abstract

We analyze the strategic allocation of resources across two contests as in the canonical Colonel Blotto game. In the games we study, two players simultaneously allocate their forces across two fields of battle. The larger force on each battlefield wins that battle, and the payoff to a player is the sum of the values of battlefields won. We completely characterize the set of Nash equilibria of all two-battlefield Blotto games and provide the unique equilibrium payoffs. We also show how to extend our characterization to cover previously unstudied games with nonlinear resource constraints.


## Notes

Szentes and Rosenthal (2003) discuss this relationship. In general, the classic Blotto game is equivalent to a multi-unit, budget-constrained, all-pay auction with no payoff for leftover resources.

They also normalize the payoffs differently to make the game zero-sum. We find this constant-sum version more intuitive.

Often such games have simple Nash equilibria. For instance, if the budget constraints cross exactly once (or any odd number of times), each player can guarantee victory on exactly one battlefield. A simple Nash equilibrium is for each player to “hunker down” and always send an unbeatable force to the battlefield on which they can guarantee victory.

See Gross and Wagner (1950) for further details.

We believe we could relax this assumption somewhat for Blotto. However, it seems realistic and makes the proof cleaner.

We need not make such restrictive assumptions on the form of \(f\) and \(g\) outside of quadrant I. However, we chose this simple functional form since the assumption only affects the constraints where they cannot possibly bind.

Note that one could provide a similar definition and explanation in terms of \(f^{-1}\left( p^{i}\left( E_{2}\right) \right) \).

See “Appendix 2” for a method to construct such a continuum of equilibria.

For example, consider the set of univariate CDFs of the form:

$$\begin{aligned} \left( \frac{x-h^{n-1}\left( E_{1}\right) }{f^{-1}\left( E_{2}\right) -h^{n-1}\left( E_{1}\right) }\right) ^{a}, \quad \forall a\in [1,2]. \end{aligned}$$

Any CDF in this set has range \(\left[ 0,1\right] \) over the domain \(\left[ h^{n-1}\left( E_{1}\right) ,f^{-1}\left( E_{2}\right) \right] \). There are uncountably many such CDFs. So, using the method described in “Appendix 2,” we could generate uncountably many equilibrium bivariate Blotto strategies from this set of univariate CDFs.
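As a quick illustration, the sketch below evaluates this family numerically. The endpoints `lo` and `hi` are hypothetical numeric stand-ins for \(h^{n-1}\left( E_{1}\right) \) and \(f^{-1}\left( E_{2}\right) \); any ordered pair would do.

```python
# Illustrative sketch of the CDF family F_a(x) = ((x - lo)/(hi - lo))**a.
# lo and hi are hypothetical stand-ins for h^{n-1}(E_1) and f^{-1}(E_2).

def make_cdf(a, lo=0.2, hi=0.8):
    """Return the CDF x -> ((x - lo)/(hi - lo))**a on [lo, hi]."""
    def F(x):
        return ((x - lo) / (hi - lo)) ** a
    return F

# Every a in [1, 2] yields a valid CDF: F(lo) = 0, F(hi) = 1, nondecreasing.
for a in [1.0, 1.25, 1.5, 2.0]:
    F = make_cdf(a)
    assert abs(F(0.2)) < 1e-12 and abs(F(0.8) - 1.0) < 1e-12
    xs = [0.2 + 0.006 * k for k in range(101)]
    assert all(F(x1) <= F(x2) + 1e-12 for x1, x2 in zip(xs, xs[1:]))
```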

Technically, Gross and Wagner (1950) set \(f(b_{1})=B-b_{1}\). Here, we normalize \(B=1\).

Recall that Enemy can never win on Battlefield 2 when Blotto is also attacking Battlefield 2 heavily.

We are not discussing any of the larger triangles in the graph which contain smaller shapes (e.g., the axes and the resource constraints form triangles, but these are not what we are interested in). The triangles we are discussing are empty in Fig. 7c.

For instance, if \(S=\{(1,3),(2,5)\}\), then \(\Psi _{1}(S)=\{1,2\}\) and \(\Psi _{2}(S)=\{3,5\}\).

Clearly \(e_{2}^{d}=g(e_{1}^{d})\).

By “do better” on Battlefield 1 we mean \(e_{1}^{d}\) would be strictly greater than Blotto’s Battlefield 1 allocation, whereas \(e_{1}\) would be weakly less.

We are not the first to use these two results when analyzing Blotto Games (e.g., see Roberson (2006)).

Feasibility implies: \(\forall (e_{1},e_{2})\in S\; e_{1}\ge 0,\; e_{2}\ge 0,\; e_{2}\le g\left( e_{1}\right) \).

\(\mu _{B}^{d}\left( \emptyset \right) =0\) for \(i=1\).

Note, \(g\left( h^{n-i}\left( E_{1}-\left( n+1-i\right) \frac{\epsilon }{k^{*}}\right) \right) =g\left( h\left( h^{n-i-1}\left( E_{1}-\left( n+1-i\right) \frac{\epsilon }{k^{*}}\right) \right) \right) =f\left( h^{n-i-1}\left( E_{1}-\left( n+1-i\right) \frac{\epsilon }{k^{*}}\right) \right) \).

For \(i=n-1,\) \(\Psi _{1}\left( T_{i+1}^{e}\right) \) technically equals \(\left( f^{-1}\left( p^{i-1}\left( E_{2}\right) \right) ,h^{n-i-1}\left( E_{1}\right) \right] .\)

## References

Adamo, T., Matros, A.: A Blotto game with incomplete information. Econ. Lett. **105**(1), 100–102 (2009)

Arad, A., Rubinstein, A.: Multi-dimensional iterative reasoning in action: the case of the Colonel Blotto game. J. Econ. Behav. Organ. **84**(2), 571–585 (2012)

Blackett, D.W.: Some Blotto games. Nav. Res. Logist. Q. **1**(1), 55–60 (1954)

Blackett, D.W.: Pure strategy solutions of Blotto games. Nav. Res. Logist. Q. **5**(2), 107–109 (1958)

Borel, E.: La théorie du jeu et les équations intégrales à noyau symétrique. Comptes Rendus de l’Académie des Sciences **173**, 1304–1308 (1921); English translation by Savage, L.: The theory of play and integral equations with skew symmetric kernels. Econometrica **21**(1), 97–100 (1953)

Chowdhury, S.M., Kovenock, D., Sheremeta, R.M.: An experimental investigation of Colonel Blotto games. Econ. Theory **52**(3), 1–29 (2013)

Colantoni, C.S., Levesque, T.J., Ordeshook, P.C.: Campaign resource allocations under the electoral college. Am. Polit. Sci. Rev. **69**(1), 141–154 (1975)

Coughlin, P.J.: Pure strategy equilibria in a class of systems defense games. Int. J. Game Theory **20**(3), 195–210 (1992)

Dziubiński, M.: Non-symmetric discrete general Lotto games. Int. J. Game Theory **42**(4), 801–833 (2012)

Golman, R., Page, S.E.: General Blotto: games of allocative strategic mismatch. Public Choice **138**(3–4), 279–299 (2009)

Gross, O.: The symmetric Blotto game. RAND Corporation RM-424 (1950)

Gross, O., Wagner, R.: A continuous Colonel Blotto game. RAND Corporation RM-408 (1950)

Hart, S.: Discrete Colonel Blotto and General Lotto games. Int. J. Game Theory **36**(3), 441–460 (2008)

Hortala-Vallve, R., Llorente-Saguer, A.: A simple mechanism for resolving conflict. Games Econ. Behav. **70**(2), 375–391 (2010)

Kovenock, D., Roberson, B.: Coalitional Colonel Blotto games with application to the economics of alliances. J. Public Econ. Theory **14**(4), 653–676 (2012a)

Kovenock, D., Roberson, B.: Conflicts with multiple battlefields. In: The Oxford Handbook of the Economics of Peace and Conflict. Oxford University Press (2012b)

Kovenock, D., Mauboussin, M.J., Roberson, B.: Asymmetric conflicts with endogenous dimensionality. Korean Econ. Rev. **26**(2), 287–305 (2010)

Laslier, J.: How two-party competition treats minorities. Rev. Econ. Des. **7**(3), 297–307 (2002)

Laslier, J., Picard, N.: Distributive politics and electoral competition. J. Econ. Theory **103**(1), 106–130 (2002)

Le Breton, M., Zaporozhets, V.: Sequential legislative lobbying under political certainty. Econ. J. **120**(543), 281–312 (2010)

Merolla, J., Munger, M., Tofias, M.: In play: a commentary on strategies in the 2004 U.S. presidential election. Public Choice **123**(1–2), 19–37 (2005)

Powell, R.: Sequential, nonzero-sum “Blotto”: allocating defensive resources prior to attack. Games Econ. Behav. **67**(2), 611–615 (2009)

Powers, M., Shen, Z.: Colonel Blotto in the war on terror: implications for event frequency. J. Homel. Secur. Emerg. Manag. **6**(1) (2009)

Roberson, B.: The Colonel Blotto game. Econ. Theory **29**(1), 1–24 (2006)

Roberson, B., Kvasov, D.: The non-constant-sum Colonel Blotto game. Econ. Theory **51**(2), 397–433 (2012)

Sahuguet, N., Persico, N.: Campaign spending regulation in a model of redistributive politics. Econ. Theory **28**(1), 95–124 (2006)

Szentes, B., Rosenthal, R.W.: Three-object two-bidder simultaneous auctions: chopsticks and tetrahedra. Games Econ. Behav. **44**(1), 114–133 (2003)

Thomas, C.: N-dimensional Colonel Blotto game with asymmetric battlefield values. Working paper, The University of Texas at Austin (2012)

Vorob’ev, N.N.: Game Theory: Lectures for Economists and Systems Scientists. Springer, New York (1977)

Wu, Y., Wang, B., Liu, K.: Optimal power allocation strategy against jamming attacks using the Colonel Blotto game. In: IEEE Global Telecommunications Conference (GLOBECOM 2009), pp. 1–5 (2009)

Young, H.P.: The allocation of funds in lobbying and campaigning. Behav. Sci. **23**(1), 21–31 (1978)


## Additional information

We would like to thank Ken Hendricks, Kyle Kretschman, Marcin Pȩski, Tarun Sabarwal, Brian Roberson, and Tom Wiseman, two anonymous referees, editors Dan Kovenock and Nicholas Yannelis, and several panel participants from the 2009 Southern Economic Association Conference and multiple University of Texas Writing Workshops for their helpful comments and insight. The ideas and views expressed herein do not necessarily reflect those of the United States Air Force, CNA, or any other organization.

## Appendices

### Appendix 1: Proofs of Theorems 1–3

First, we prove that pairs of strategies from \(\Omega ^{B}\) and \(\Omega ^{E}\) provide the expected payoffs from Theorem 1. Then we prove that all pairs of strategies from \(\Omega ^{B}\) and \(\Omega ^{E}\) constitute a Nash equilibrium. Finally, we show that no other strategies are a part of any Nash equilibrium.

First, define two projection operators. \(\Psi _{1}(S)\) is the set of all scalars that appear as the first coordinate of some two-dimensional point in the set \(S\); that is, the set of Battlefield 1 allocations given a set \(S\) of two-dimensional battlefield allocations.

\(\Psi _{2}(S)\) is defined similarly:^{Footnote 20}
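These projections are straightforward to sketch in code; the minimal illustration below reuses the example \(S=\{(1,3),(2,5)\}\) from the footnotes. The function names are ours, purely illustrative.

```python
# Minimal sketch of the projection operators Psi_1 and Psi_2:
# the sets of first and second coordinates of points in S.

def psi1(S):
    """Set of Battlefield 1 allocations appearing in S."""
    return {point[0] for point in S}

def psi2(S):
    """Set of Battlefield 2 allocations appearing in S."""
    return {point[1] for point in S}

S = {(1, 3), (2, 5)}
assert psi1(S) == {1, 2}
assert psi2(S) == {3, 5}
```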

We also define the set of all points in some \(T_{i}^{b}\) (\(T_{i}^{e}\)):

Recall the definitions of \(h\) and \(p\) from Eqs. 12 and 13. Lemma 5 gives us the intervals of battlefield allocations within the various \(T_{i}\)’s.

###
**Lemma 5**

\(\forall i=1,2,\ldots ,n\!:\)

\(\forall i=2,3,\ldots ,n-1\!:\)

For \(i=1\) we have the following:

and for \(i=n:\)

The last four equations are implied by Eqs. 28 and 29, but vary due to the closed bounds at the extremes. We prove Lemma 5 simultaneously with our proof of Lemma 6.

First, informally we could rewrite the interval \(\left[ 0,f^{-1}\left( p^{n-1}\left( E_{2}\right) \right) \right] \) as

or we could rewrite the interval \(\left[ 0,f\left( h^{n-1}\left( E_{1}\right) \right) \right] \) as

More formally,

###
**Lemma 6**

\(\Psi _{1}(T^{e})\cup \Psi _{1}(T^{b})=\left[ 0,f^{-1}\left( p^{n-1}\left( E_{2}\right) \right) \right] \) and \(\Psi _{2}(T^{e})\cup \Psi _{2}(T^{b})=\left[ 0,f\left( h^{n-1}\left( E_{1}\right) \right) \right] \) while \(\Psi _{1}(T^{e})\cap \Psi _{1}(T^{b})=\{E_{1}\}\) and \(\Psi _{2}(T^{e})\cap \Psi _{2}(T^{b})=\{E_{2}\}\).

###
*Proof*

Refer to Eqs. 16–19. Consider the bounds for any \(e_{1}\in \Psi _{1}(T_{i}^{e})\). Its open (closed when \(i=1\)) infimum is \(f^{-1}\left( p^{i-2}\left( E_{2}\right) \right) \). Changing the two other constraints on \(T_{i}^{e}\) to equalities and solving we find that for \(e_{1}\in \Psi _{1}(T_{i}^{e})\) the open (closed when \(i=n\)) supremum is \(h^{n-i}(E_{1})\), which is the closed infimum of \(b_{1}\in \Psi _{1}(T_{i}^{b})\). Similar algebra for the other relevant bounds in Lemma 5 and iteration from \(i=1\) confirms Lemmas 5 and 6. \(\square \)

Based on this we know that \(\forall i\in \{1,\ldots ,n\}\) any \(e_{1}\in \Psi _{1}(T_{i}^{e})\) is strictly less than any \(b_{1}\in \Psi _{1}(T_{i}^{b})\), which is strictly less than any \(e_{1}\in \Psi _{1}(T_{i+1}^{e})\). The former inequality is weak when \(i=n\). (Obviously, we ignore the latter when \(i=n\).) Also, \(\forall i\in \{1,\ldots ,n\}\) any \(e_{2}\in \Psi _{2}(T_{i-1}^{e})\) is strictly greater than any \(b_{2}\in \Psi _{2}(T_{i}^{b})\), which is strictly greater than any \(e_{2}\in \Psi _{2}(T_{i}^{e})\). The latter inequality is weak when \(i=1\). (Obviously, we ignore the former when \(i=1\).) More formally we have:

###
**Lemma 7**

###
*Proof*

Lemma 7 follows directly by examining the bounds in Lemma 5. \(\square \)

###
*Proof of Theorem 1*

Recall **Theorem 1**: When Blotto plays a strategy \(\mu _{B}\in \Omega ^{B}\) and Enemy plays a strategy \(\mu _{E}\in \Omega ^{E}\), Blotto’s expected payoff is \(\frac{\sum _{j=0}^{n}w^{j}}{\sum _{j=0}^{n-1}w^{j}}\) and Enemy’s expected payoff is \(\frac{\sum _{j=1}^{n-1}w^{j}}{\sum _{j=0}^{n-1}w^{j}}.\)

###
*Proof*

Given Lemma 7, we know that against any Blotto strategy in \(\Omega ^{B}\), when Enemy plays in \(T_{i}^{e}\) his probability of winning on Battlefield 1 is:

His probability of winning Battlefield 2 is:

The total expected payoff is then:

for any allocation in (any) \(T_{i}^{e}\).

Similarly, against any Enemy strategy from above, when Blotto plays in \(T_{i}^{b}\) his probability of winning Battlefield 1 is:

His probability of winning Battlefield 2 is:

His total expected payoff is then

for any allocation in (any) \(T_{i}^{b}\). \(\square \)
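As a numerical sanity check, the two payoff expressions from Theorem 1 always exhaust the game’s constant sum. The sketch below assumes the two battlefields are worth \(1\) and \(w\), so the constant sum is \(1+w\); the function names are ours, purely illustrative.

```python
# Numerical check of the Theorem 1 payoffs (illustrative sketch, assuming
# battlefield values 1 and w so the game's constant sum is 1 + w).

def blotto_payoff(w, n):
    S = sum(w**j for j in range(n))              # sum_{j=0}^{n-1} w^j
    return sum(w**j for j in range(n + 1)) / S   # sum_{j=0}^{n} w^j / S

def enemy_payoff(w, n):
    S = sum(w**j for j in range(n))
    return sum(w**j for j in range(1, n)) / S    # sum_{j=1}^{n-1} w^j / S

for w in (0.5, 1.0, 2.0):
    for n in (2, 3, 5):
        total = blotto_payoff(w, n) + enemy_payoff(w, n)
        assert abs(total - (1 + w)) < 1e-12  # payoffs exhaust the constant sum
```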

###
*Proof of Theorem 2*

In this section, we prove that any pair of strategies \(\left\{ \mu _{B},\mu _{E}\right\} \), such that \(\mu _{B}\in \Omega ^{B}\) and \(\mu _{E}\in \Omega ^{E},\) in fact forms a Nash equilibrium. Before proceeding with the formal proof, we provide the intuition. Properties 1b and 1e specify that in any equilibrium Blotto and Enemy each randomize over \(n\) distinct areas (\(T_{1}^{b},\ldots ,T_{n}^{b}\) and \(T_{1}^{e},\ldots ,T_{n}^{e}\)). Blotto and Enemy’s potential equilibrium allocations on either battlefield only overlap at one point in the following sense:

Though the figures may suggest that all boundaries should be included in these sets, careful inspection of the boundary conditions shows that the boundaries are open for Enemy and closed for Blotto. Enemy’s boundary allocations are therefore not in fact included in these sets.

Given that ties always go to Blotto, we calculated players’ expected payoffs in “Proof of Theorem 1 of Appendix 1.” When they both play strategies satisfying Properties 1b and 1e, Blotto achieves an expected payoff of \(\frac{\sum _{j=0}^{n}w^{j}}{\sum _{j=0}^{n-1}w^{j}}\) while Enemy earns \(\frac{\sum _{j=1}^{n-1}w^{j}}{\sum _{j=0}^{n-1}w^{j}}\). We will show that given these payoffs, Property 2b(e) ensures that Enemy (Blotto) has no full expenditure allocation that provides a payoff strictly greater than \(\frac{\sum _{j=1}^{n-1}w^{j}}{\sum _{j=0}^{n-1}w^{j}} \left( \frac{\sum _{j=0}^{n}w^{j}}{\sum _{j=0}^{n-1}w^{j}}\right) \). Since all allocations in the players’ supports provide the same payoff, and there exist no allocations providing higher payoffs, pairs of strategies from these distributions constitute a Nash equilibrium. We now move to the formal proof.

Recall **Theorem** 2: *Any pair of strategies*
\(\left\{ \mu _{B},\mu _{E}\right\} \)
*such that*
\(\mu _{B}\in \Omega ^{B}\) and \(\mu _{E}\in \Omega ^{E}\)
*constitute a Nash equilibrium.*

###
*Proof*

We show that there are no allocations for Enemy or Blotto that provide a strictly higher expected payoff than their payoffs from Theorem 1. Note that if either player were to have a payoff-improving deviation from the strategies we defined, they must have a payoff-improving full expenditure deviation, as the expected payoffs must be weakly increasing in strength on either battlefield. Therefore, we only need to show that there are no payoff-improving full expenditure deviations. So, we check full expenditure deviations outside of any \(T_{i}^{e}\) or \(T_{i}^{b}\).

Consider a full expenditure Enemy deviation \(e^{d}=(e_{1}^{d},e_{2}^{d}),\; e^{d}\notin T^{e}\).^{Footnote 21} Given the bounds on the \(T_{i}^{e}\)’s and Lemma 5, \(e^{d}\) must lie “between” some \(T_{i}^{e}\) and \(T_{i+1}^{e}\) in the following sense: \(\forall e_{1}^{i}\in \Psi _{1}(T_{i}^{e}),e_{1}^{i+1}\in \Psi _{1}(T_{i+1}^{e})\quad e_{1}^{i}<e_{1}^{d}<e_{1}^{i+1}\), and similarly for \(e_{2}^{d}\). Let \((e_{1},e_{2})\) be an allocation in \(T_{i}^{e}\). Examine Property 2b with \(x=e_{1}^{d}\). Given Lemma 7, the realized payoff to Enemy of playing \(e^{d}\) against any of our Blotto strategies will be the same as if he had played \((e_{1},e_{2})\) unless Blotto plays in \(T_{i}^{b}\) or \(T_{i+1}^{b}\). If Blotto plays in \(T_{i}^{b},\) the deviant allocation *may* do better ^{Footnote 22} on Battlefield 1 (without changing the outcome on Battlefield 2). The cost is that if Blotto plays in \(T_{i+1}^{b}\) the deviant strategy may do worse on Battlefield 2 (without changing the outcome on Battlefield 1). Using the notation of Property 2b, any \(b_{1}\) in \(j_{b}^{e_{1}^{d},i}\) will lose to \(e_{1}^{d}\) (while it would have beaten \(e_{1}\)) and any \(b_{2}\) in \(k_{b}^{e_{1}^{d},i}\) will beat \(e_{2}^{d}\) (while it would have lost to \(e_{2}\)). Property 2b then says that by moving from any \((e_{1},e_{2})\) in \(T_{i}^{e}\) to \((e_{1}^{d},e_{2}^{d})\), the additional probability of winning on Battlefield 1 is weakly less than the additional probability of losing on Battlefield 2 times the weight placed on that battlefield. Therefore, no full expenditure deviation \((e_{1}^{d},e_{2}^{d})\) is payoff-improving, and therefore no deviation is payoff-improving.

The same line of reasoning applies directly to Property 2e and full expenditure deviations by Blotto which lie “between” some \(T_{i}^{b}\) and \(T_{i+1}^{b}\). Specifically, Property 2e ensures that a full expenditure deviating allocation by Blotto, \((b_{1}^{d},b_{2}^{d})\), cannot be payoff-improving. Simply set \(b_{1}^{d}=x\) in the property and the same line of reasoning follows. Additionally, there are full expenditure deviations which do not lie “between” some \(T_{i}^{b}\) and some \(T_{i+1}^{b}\) (e.g., \((B,0)\) and \((0,B)\)). Specifically, there are two more deviating types of full expenditure allocations: a \((b_{1}^{d},b_{2}^{d})\) where \(\forall (b_{1},b_{2})\in T_{1}^{b}\quad b_{1}^{d}<b_{1}\ \text{ and }\ f\left( b_{1}^{d}\right) =b_{2}^{d}>b_{2}\) or a \((b_{1}^{\#},b_{2}^{\#})\) where \(\forall (b_{1},b_{2})\in T_{n}^{b}\quad b_{1}^{\#}>b_{1}\ \text{ and }\ f\left( b_{1}^{\#}\right) =b_{2}^{\#}<b_{2}\). In the former, Blotto increases allocations to Battlefield 2 at the expense of Battlefield 1, relative to \(T_{1}^{b}\). However, in \(T_{1}^{b}\), Blotto is guaranteeing victory on Battlefield 2, so this cannot be payoff-improving. Similar logic applies to the latter type of allocations. Given Lemma 5, and Lemma 7, there are no other types of full expenditure deviations.

Thus, if Blotto plays \(\mu _{B}\in \Omega ^{B}\) and Enemy plays \(\mu _{E}\in \Omega ^{E}\), they would both be playing best responses to the other’s strategy. Therefore any such pair \(\left\{ \mu _{B},\mu _{E}\right\} \) constitutes a Nash equilibrium. \(\square \)

###
*Proof of Theorem 3*

We now prove that there are no uncharacterized strategies which could be part of a Nash equilibrium. Before proceeding, we will need two lemmas that hold for any two-player constant-sum game.

Consider a two-player constant-sum game where Player 1 chooses a strategy \(x\in X\) and Player 2 chooses a strategy \(y\in Y\). Let \(f^{i}(x,y)\) denote the expected payoffs to Player \(i\) when Player 1 plays \(x\) and Player 2 plays \(y\). Also, let \(\Xi ^{i}\) denote the set of player \(i\) strategies that are a part of some Nash equilibrium.

###
**Definition 8**

A game is said to feature *constant payoffs* if, for each player, the expected payoff is the same in each Nash equilibrium.

###
**Lemma 9**

If a Nash equilibrium exists for a two-player constant-sum game, that game features constant payoffs. In other words, \(\exists \left\{ c_{i}\right\} _{i=1}^{2}\) such that in every Nash equilibrium \(\{x_{j},y_{j}\}\), \(f^{i}(x_{j},y_{j})=c_{i}\) for all \(i=1,2\).

For formal proofs of the preceding and following lemmas, see Vorob’ev (1977, pp. 1–10).

###
**Lemma 10**

**Equilibrium Interchangeability** If a Nash equilibrium exists to a two-player constant-sum game, then every strategy that a player uses in any Nash equilibrium forms a Nash equilibrium with any opponent strategy from any (other) Nash equilibrium. In other words: for all \(x^{*}\in \Xi ^{1}\) and all \(y^{*}\in \Xi ^{2}\), \(\left\{ x^{*},y^{*}\right\} \) constitutes a Nash equilibrium.

Equilibrium Interchangeability and constant payoffs are useful when proving the completeness of our characterization.^{Footnote 23} Equilibrium Interchangeability allows us to consider equilibrium strategies for Blotto and Enemy separately. Unlike in most games with multiple equilibria, there is no need to worry about pairing with a particular opponent equilibrium strategy. If we discover just one Nash equilibrium (pair of strategies), all the remaining equilibria are simply the cross of all the Blotto strategies that form an equilibrium with the one known Enemy strategy, and all the Enemy strategies that form an equilibrium with the one known Blotto strategy.

Our proof of the completeness of our characterization proceeds as follows. We first prove that all Enemy strategies that are a part of some Nash equilibrium are in \(\Omega ^{E}.\) Then we prove that all Blotto strategies that are a part of some equilibrium are in \(\Omega ^{B}.\) Therefore, the set of all Nash equilibria is the set of pairs of strategies from \(\Omega ^{E}\) and \(\Omega ^{B}\) by Equilibrium Interchangeability.

In the proof, we make use of the strategy \(\mu _{B}^{*}\) where in each \(T_{i}^{b}\) Blotto plays the allocations \(\left( h^{n-i}(E_{1}),f\left( h^{n-i}(E_{1})\right) \right) \) and \(\left( f^{-1}\left( p^{i-1}\left( E_{2}\right) \right) ,p^{i-1}\left( E_{2}\right) \right) \) with probability \(\frac{w^{n-i}}{2\cdot {{{\sum \nolimits _{j=0}^{n-1}}}w^{j}}}\) each. These points correspond to \(z_{1}-z_{6}\) in Fig. 11 for a game in region 3. Blotto never plays any other allocations. Recall that if \(E_{2}=f\left( h^{n-i}(E_{1})\right) \) each \(T_{i}^{b}\) contains only one allocation, \(\left( h^{n-i}(E_{1}),f\left( h^{n-i}(E_{1})\right) \right) =\left( f^{-1}\left( p^{i-1}\left( E_{2}\right) \right) ,p^{i-1}\left( E_{2}\right) \right) .\) In this case this single allocation is played with probability \(\frac{w^{n-i}}{{{\sum \nolimits _{j=0}^{n-1}}}w^{j}}.\)

In each \(T_{i}^{b},\) \(\left( h^{n-i}(E_{1}),f\left( h^{n-i}(E_{1})\right) \right) \) and \(\left( f^{-1}\left( p^{i-1}\left( E_{2}\right) \right) ,p^{i-1}\left( E_{2}\right) \right) \) are the intersections of Blotto’s resource constraint with the two other bounds on \(T_{i}^{b}\). So, Property 1b holds as \(2\cdot \frac{w^{n-i}}{2\cdot {{{\sum \nolimits _{j=0}^{n-1}}}w^{j}}}=\frac{w^{n-i}}{{{\sum \nolimits _{j=0}^{n-1}}}w^{j}}=\mu _{B}^{*}\left( T_{i}^{b}\right) \). Now consider Property 2b. As Blotto is playing \(\left( f^{-1}\left( p^{i-1}\left( E_{2}\right) \right) ,p^{i-1}\left( E_{2}\right) \right) \) in \(T_{i}^{b}\) with probability \(\frac{w^{n-i}}{2\cdot {{\sum \nolimits _{j=0}^{n-1}}}w^{j}}\), \(\mu _{B}^{*}(j_{b}^{x,i})\le \frac{w^{n-i}}{{{\sum \nolimits _{j=0}^{n-1}}}w^{j}}-\frac{w^{n-i}}{2\cdot {{\sum \nolimits _{j=0}^{n-1}}}w^{j}}=\frac{w^{n-i}}{2\cdot {{\sum \nolimits _{j=0}^{n-1}}}w^{j}},\) \(\forall i=1,2,\ldots ,n-1,\quad \forall x\in [h^{n-i}(E_{1}),f^{-1}\left( p^{i-1}\left( E_{2}\right) \right) ].\) As \(g\) is strictly decreasing, the minimum of \(\mu _{B}^{*}(k_{b}^{x,i})\) for \(x\in [h^{n-i}(E_{1}),f^{-1}\left( p^{i-1}\left( E_{2}\right) \right) ]\) occurs when \(x=h^{n-i}(E_{1}).\) Then \(\forall x\in [h^{n-i}(E_{1}),f^{-1}\left( p^{i-1}\left( E_{2}\right) \right) ],\) \(\mu _{B}^{*}(k_{b}^{x,i})\ge \frac{w^{n-(i+1)}}{2\cdot {{\sum \nolimits _{j=0}^{n-1}}}w^{j}}\) as \(g\left( h^{n-i}(E_{1})\right) =f\left( h^{n-i-1}(E_{1})\right) \) and \(\left( h^{n-(i+1)}(E_{1}),f\left( h^{n-(i+1)}(E_{1})\right) \right) \) is played with probability \(\frac{w^{n-(i+1)}}{2\cdot {{\sum \nolimits _{j=0}^{n-1}}}w^{j}}\) in \(T_{i+1}^{b}.\) Therefore \(\mu _{B}^{*}(k_{b}^{x,i})\cdot w\ge \frac{w^{n-i}}{2\cdot {{\sum \nolimits _{j=0}^{n-1}}}w^{j}}\ge \mu _{B}^{*}(j_{b}^{x,i})\) and Property 2b holds, \(\mu _{B}^{*}\in \Omega ^{B}.\)
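The probability assignments of \(\mu _{B}^{*}\) can be checked numerically. The sketch below is illustrative only: it lists the two atom masses per \(T_{i}^{b}\) and verifies that they total one and give each \(T_{i}^{b}\) the mass Property 1b requires.

```python
# Sanity check that the atoms of mu_B^* form a probability distribution:
# each T_i^b holds two atoms of mass w^{n-i} / (2 S), S = sum_{j=0}^{n-1} w^j.
# (Illustrative sketch; function name is ours.)

def atom_masses(w, n):
    S = sum(w**j for j in range(n))
    # two atoms per T_i^b, i = 1, ..., n
    return [w**(n - i) / (2 * S) for i in range(1, n + 1) for _ in range(2)]

for w in (0.5, 1.0, 3.0):
    for n in (2, 4, 6):
        masses = atom_masses(w, n)
        assert abs(sum(masses) - 1.0) < 1e-12   # total probability is 1
        # each T_i^b receives mass w^{n-i} / S, as Property 1b requires
        S = sum(w**j for j in range(n))
        for i in range(1, n + 1):
            pair = masses[2 * (i - 1):2 * i]
            assert abs(sum(pair) - w**(n - i) / S) < 1e-12
```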

###
**Lemma 11**

Any Enemy strategy which is a part of some Nash equilibrium is in \(\Omega ^{E}\).

###
*Proof*

We prove Lemma 11 by contradiction. Suppose there exists a Nash equilibrium Enemy strategy that is not in \(\Omega ^{E}\). Such a strategy must then either violate Property 1e or satisfy Property 1e and violate Property 2e. In proving that all our strategies were indeed part of a Nash equilibrium, we have already shown how a violation of Property 2e alone would provide Blotto with a payoff-improving deviation, so we rule out that possibility. The only other way Lemma 11 could be false is if there were a Nash equilibrium Enemy strategy which violated Property 1e. We divide deviations from Property 1e into three possible cases. Figure 12 provides a graphical reference (in Region 3) to aid the reader.

Intuitively, **Deviation 1** represents Enemy placing some mass on allocations which, relative to some \(T_{i}^{e}\), always send less to one battlefield without increasing the allocation to the other. **Deviation 2** represents Enemy placing mass on allocations which, relative to any \(T_{i}^{e}\), always send less to one battlefield but increase the allocation to the other. **Deviation 3** has him playing an “incorrect” mass on some \(T_{i}^{e}\) (i.e., \(\mu _{E}(T_{i}^{e})\ne \frac{w^{i-1}}{{{\sum \nolimits _{j=0}^{n-1}}}w^{j}}\)).

**Deviation 1:** Enemy could play over an area that sends less to both battlefields than some \((e_{1},e_{2})\) in some \(T_{i}^{e}\). Formally, this would have Enemy play a \(\mu _{E}^{d_{1}}\) such that the following three statements hold for some feasible set of Enemy allocations \(S\):^{Footnote 24}

Condition 40 implies that at least one of the two inequalities in 42 always holds strictly.

Suppose **Deviation 1** holds for some \(S\) and consider an allocation \((e_{1}^{d},e_{2}^{d})\in S\). Without loss of generality, suppose \(\exists \left( e_{1},e_{2}\right) \in T_{i}^{e}\) such that \(e_{1}^{d}\le e_{1}\) and \(e_{2}^{d}<e_{2}.\) Either \(e_{2}^{d}<e_{2}^{\#}\) for all \(e_{2}^{\#}\in \Psi _{2}\left( T_{i}^{e}\right) \) or not. If so, \((e_{1}^{d},e_{2}^{d})\) cannot be a best response to \(\mu _{B}^{*}\) which has Blotto playing Battlefield 2 allocations equal to the open lower bound of \(e_{2}^{\#}\in \Psi _{2}\left( T_{i}^{e}\right) \) with positive probability. Enemy could increase his expected payoff by playing \((e_{1},e_{2}).\) Alternatively, there could exist an \(e_{2}^{\#}\in \Psi _{2}\left( T_{i}^{e}\right) \) for which \(e_{2}^{d}\ge e_{2}^{\#}.\) Since we know \(e_{2}^{d}<e_{2}\in \Psi _{2}\left( T_{i}^{e}\right) \) we must have that \(e_{2}^{d}\in \Psi _{2}\left( T_{i}^{e}\right) .\) Given the bounds of \(T_{i}^{e}\) and Eq. 40, it must be the case that \(e_{1}^{d}\le f^{-1}\left( p^{i-2}\left( E_{2}\right) \right) \), the open lower bound of \(\Psi _{1}\left( T_{i}^{e}\right) ,\) a Battlefield 1 allocation Blotto plays with positive probability in \(\mu _{B}^{*}\). Therefore \((e_{1}^{d},e_{2}^{d})\) cannot be a best response to \(\mu _{B}^{*}\) which has Blotto playing the open lower bounds of \(e_{1}^{\#}\in \Psi _{1}\left( T_{i}^{e}\right) \) with positive probability. Enemy could increase his payoff by playing \(\left( e_{1},e_{2}\right) .\) Given Equilibrium Interchangeability and the fact that no allocation \((e_{1}^{d},e_{2}^{d})\in S\) could be a best response to \(\mu _{B}^{*},\) **Deviation 1** cannot happen in any Nash equilibrium.

This only leaves two possible types of deviations by Enemy: he could play with mass other than \(\frac{w^{i-1}}{{{\sum \nolimits _{j=0}^{n-1}}}w^{j}}\) over some \(T_{i}^{e}\) **(Deviation 3)** and/or he could play with mass over a region \(S\) where \(\forall (e_{1}^{d},e_{2}^{d})\in S,\quad \forall (e_{1},e_{2})\in T^{e}\) :

**(Deviation 2)**. Given the bounds of the \(T_{i}^{e}\)’s it is easy to show that any such region \(S\) must be within the set of points \(D_{i}^{e}\), indexed by \(i=1,2,\ldots ,n-1\), where

Given Lemma 5 there are no other allocations for which **Deviation 2** holds.

We simultaneously prove that neither of the latter two deviations is possible. Consider a \(T_{i}^{e}\) and \(D_{i}^{e}\) and a deviating Enemy strategy \(\mu _{E}^{d}\) that forms a Nash equilibrium with any \(\mu _{B}\in \Omega ^{B}\). Consider some \(i\in \left\{ 1,2,3,\ldots ,n-1\right\} \). Assume that

In other words, there has not “yet” been a **Deviation 2** or **Deviation 3**.

Suppose \(\mu _{E}^{d}(T_{i}^{e})<\frac{w^{i-1}}{{{\sum \nolimits _{j=0}^{n-1}}}w^{j}}\). Given Lemma 7, Eq. 43 and the fact that we have ruled out **Deviation 1**, when Blotto plays \(\left( h^{n-i}(E_{1}),f\left( h^{n-i}(E_{1})\right) \right) \) (which he does with probability \(\frac{w^{n-i}}{2\cdot {{\sum \nolimits _{j=0}^{n-1}}}w^{j}}\) in strategy \(\mu _{B}^{*}\)) he wins Battlefield 1 with probability \(\mu _{E}^{d}\left( T_{1}^{e}\cup \ldots \cup T_{i}^{e}\right) <\frac{\sum _{j=0}^{i-1}w^{j}}{\sum _{j=0}^{n-1}w^{j}}\) but still wins Battlefield 2 with probability \(1-\mu _{E}^{d}\left( T_{1}^{e}\cup \ldots \cup T_{i-1}^{e}\right) =\frac{\sum _{j=i-1}^{n-1}w^{j}}{\sum _{j=0}^{n-1}w^{j}}\) for a total expected payoff strictly less than \(\frac{\sum _{j=0}^{n}w^{j}}{\sum _{j=0}^{n-1}w^{j}}\), which is Blotto’s constant expected payoff in all equilibria, a contradiction. Therefore, \(\mu _{E}^{d}(T_{i}^{e})\ge \frac{w^{i-1}}{{{\sum \nolimits _{j=0}^{n-1}}}w^{j}}.\)

Similarly, if \(\mu _{E}^{d}\left( T_{i}^{e}\right) >\frac{w^{i-1}}{{{\sum \nolimits _{j=0}^{n-1}}}w^{j}},\) then when Blotto plays \(\left( h^{n-i}(E_{1}),f\left( h^{n-i}(E_{1})\right) \right) \), he wins Battlefield 1 with probability \(\mu _{E}^{d}\left( T_{1}^{e}\cup \ldots \cup T_{i}^{e}\right) >\frac{\sum _{j=0}^{i-1}w^{j}}{\sum _{j=0}^{n-1}w^{j}}\) but still wins Battlefield 2 with probability \(1-\mu _{E}^{d}\left( T_{1}^{e}\cup \ldots \cup T_{i-1}^{e}\right) =\frac{\sum _{j=i-1}^{n-1}w^{j}}{\sum _{j=0}^{n-1}w^{j}}\) for a total expected payoff strictly greater than his constant equilibrium payoff, \(\frac{\sum _{j=0}^{n}w^{j}}{\sum _{j=0}^{n-1}w^{j}}\), a contradiction. Therefore, \(\mu _{E}^{d}(T_{i}^{e})=\frac{w^{i-1}}{{{\sum \nolimits _{j=0}^{n-1}}}w^{j}}\).

Now suppose \(\mu _{E}^{d}(D_{i}^{e})>0\). Then, when Blotto plays \(\left( f^{-1}\left( p^{i-1}\left( E_{2}\right) \right) ,p^{i-1}\left( E_{2}\right) \right) \) (which he does with probability \(\frac{w^{n-i}}{2\cdot {{\sum \nolimits _{j=0}^{n-1}}}w^{j}}\) in strategy \(\mu _{B}^{*}\)) he expects to win on Battlefield 1 with probability \(\mu _{E}^{d}\left( T_{1}^{e}\cup \ldots \cup T_{i}^{e}\cup D_{i}^{e}\right) >\frac{\sum _{j=0}^{i-1}w^{j}}{\sum _{j=0}^{n-1}w^{j}}\) and expects to win on Battlefield 2 with probability \(1-\mu _{E}^{d}\left( T_{1}^{e}\cup \ldots \cup T_{i-1}^{e}\right) =\frac{\sum _{j=i-1}^{n-1}w^{j}}{\sum _{j=0}^{n-1}w^{j}}\). Therefore his total expected payoff is strictly greater than \(\frac{\sum _{j=0}^{n}w^{j}}{\sum _{j=0}^{n-1}w^{j}}\), his constant equilibrium payoff, another contradiction. Therefore, \(\mu _{E}^{d}(D_{i}^{e})\) must equal zero.

As the above analysis holds for all \(i=1,2,\ldots ,n-1\), the mass over all such \(T_{i}^{e}\) and \(D_{i}^{e}\) must equal \(\frac{w^{i-1}}{{{\sum \nolimits _{j=0}^{n-1}}}w^{j}}\) and \(0\), respectively. The remaining mass of \(\frac{w^{n-1}}{{{\sum \nolimits _{j=0}^{n-1}}}w^{j}}\) must then be distributed over the only region left, \(T_{n}^{e}\). Therefore, \(\mu _{E}^{d}\) satisfies Property 1e. We’ve already discussed why it must also satisfy Property 2e. Therefore, \(\mu _{E}^{d}\in \Omega ^{E},\) and there can be no Enemy strategies which are a part of some Nash equilibrium which do not satisfy our characterization. \(\square \)
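The payoff arithmetic behind the contradictions above reduces to a single identity: when the mass on each \(T_{j}^{e}\) is exactly \(w^{j-1}/\sum _{j=0}^{n-1}w^{j}\), Blotto’s expected payoff from the atom played in \(T_{i}^{b}\) equals \(\sum _{j=0}^{n}w^{j}/\sum _{j=0}^{n-1}w^{j}\) for every \(i\). A small illustrative check, again assuming battlefield values \(1\) and \(w\) (function names are ours):

```python
# Check: with equilibrium masses w^{j-1}/S on T_1^e, ..., T_n^e,
# Blotto's expected payoff from his atom in T_i^b is sum_{j=0}^{n} w^j / S
# for every i, where S = sum_{j=0}^{n-1} w^j.  (Illustrative sketch.)

def blotto_payoff_at_i(w, n, i):
    S = sum(w**j for j in range(n))
    win1 = sum(w**j for j in range(i)) / S           # P(win Battlefield 1)
    win2 = sum(w**j for j in range(i - 1, n)) / S    # P(win Battlefield 2)
    return 1 * win1 + w * win2                       # battlefield values 1, w

for w in (0.5, 1.0, 2.0):
    for n in (2, 3, 5):
        S = sum(w**j for j in range(n))
        target = sum(w**j for j in range(n + 1)) / S
        for i in range(1, n + 1):
            assert abs(blotto_payoff_at_i(w, n, i) - target) < 1e-12
```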

We have ruled out any potential Enemy strategies that deviate from our characterization of possible Nash equilibrium Enemy strategies. The proof that we have characterized the complete set of Blotto Nash equilibrium strategies proceeds much the same way as the proof of completeness for Enemy’s strategies. However, due to the open boundaries of the \(T_{i}^{e}\)’s, the proof requires additional consideration.

###
**Lemma 12**

Any Blotto strategy which is a part of some Nash equilibrium is in \(\Omega ^{B}.\)

###
*Proof*

Suppose not. Then there exists at least one Blotto strategy that is part of some Nash equilibrium but does not satisfy Properties 1b and 2b. Call such a strategy \(\mu _{B}^{d}\). We’ve already shown how a strategy that satisfies Property 1b but violates Property 2b would give Enemy an allocation offering a payoff higher than his constant equilibrium payoff. So, any uncharacterized Blotto strategy which is part of some Nash equilibrium must violate Property 1b. Property 1b specifies that Blotto must only play in his \(T_{i}^{b}\)’s and provides the probability of play in each. There are two ways Blotto could violate this property: he could sometimes play outside of \(T^{b}\), or his \(\mu _{B}\) could assign an incorrect probability to some \(T_{i}^{b}\). We break the former down into two separate deviations. The first type of deviation we consider (**Deviation 1**) covers strategies in which Blotto mixes over allocations that send weakly less to both battlefields than some non-deviating allocation. Formally, a strategy \(\mu _{B}^{d_{1}}\) exhibits **Deviation 1** if the following conditions hold for some \(S\):

The second condition implies that one of the two inequalities in the third condition always holds strictly.

The next type of deviation we consider (**Deviation 2**) covers the remaining feasible allocations outside of \(T^{b}\): those that, relative to any allocation in \(T^{b},\) send strictly more to one battlefield. Formally, a strategy \(\mu _{B}^{d_{2}}\) exhibits **Deviation 2** if the following conditions hold for some \(S\):

Consider the following sets of allocations:

Note that Eqs. 50 and 52 are implied by Eq. 51, except for the weak inequalities at the zero bounds. A strategy \(\mu _{B}^{d_{2}}\) satisfying **Deviation 2** must allocate some mass over at least one of the \(D_{i}^{b}\)’s since, given Lemma 5, these are the only regions where conditions 47–49 hold. See Fig. 13 for a graph of the \(D_{i}^{b}\)’s.

The last type of deviation we consider (**Deviation 3**) is simply where Blotto places incorrect mass on one of his \(T_{i}^{b}\)’s. Formally, a strategy \(\mu _{B}^{d_{3}}\) exhibits **Deviation 3** if the following condition holds for at least one of Blotto’s \(T_{i}^{b}\)’s:

Because of Lemma 10 (Equilibrium Interchangeability), any Blotto strategy from any Nash equilibrium must form a Nash equilibrium with any Enemy strategy from \(\Omega ^{E}\). Specifically, we consider the following sequence of Enemy strategies: For any \(k=1,2,\ldots \) let \(\mu _{E}^{k}\) be the strategy where in each \(T_{i}^{e}\) Enemy plays points

and

with probability \(\frac{w^{i-1}}{2\cdot {{\sum \nolimits _{j=0}^{n-1}}}w^{j}}\). This implies that no other allocations are in the support of \(\mu _{E}^{k}\), since \(2\cdot \sum _{i=1}^{n}\frac{w^{i-1}}{2\cdot {{\sum \nolimits _{j=0}^{n-1}}}w^{j}}=1.\) For \(\epsilon \) sufficiently small, \(\mu _{E}^{1}\) (and any other \(\mu _{E}^{k}\)) satisfies Properties 1e and 2e.

Intuitively, this is a sequence of strategies that has Enemy randomizing over allocations on his resource constraint, arbitrarily close to the corners in his \(T_{i}^{e}\)’s. In other words, for any \(T_{i}^{e}\) we will be able to find some \(\mu _{E}^{k}\) where Enemy plays arbitrarily close to each intersection of his resource constraint and either of the other two bounds for \(T_{i}^{e}\) with strictly positive probability.

To see that any \(\mu _{E}^{k}\) is in fact in \(\Omega ^{E},\) consider the following. So long as \(\epsilon \) is sufficiently small, the points from \(\mu _{E}^{1}\), \(\left( g^{-1}\left( p^{i-1}\left( E_{2}-i\epsilon \right) \right) ,p^{i-1}\left( E_{2}-i\epsilon \right) \right) \) and \(\left( h^{n-i}\left( E_{1}-\left( n+1-i\right) \epsilon \right) ,g\left( h^{n-i}\left( E_{1}-\left( n+1-i\right) \epsilon \right) \right) \right) \), will be in \(T_{i}^{e}.\) These points lie a small distance (note the \(i\epsilon \) and \(\left( n+1-i\right) \epsilon \) terms) from the intersection with their respective boundaries of \(T_{i}^{e}.\) As they sit a small distance toward the interior, Property 1e holds (for sufficiently small \(\epsilon \)). Since these points only get closer to the boundary as \(k\) increases, yet never touch it, Property 1e holds for any \(\mu _{E}^{k}.\)

The \(\frac{\epsilon }{k}\) terms are multiplied by \(i\) and \(\left( n+1-i\right) \) to ensure that Property 2e holds. Note that a full-expenditure deviation by Blotto to \(\left( f^{-1}\left( p^{i-1}\left( E_{2}-i\frac{\epsilon }{k}\right) \right) , p^{i-1}\left( E_{2}-i\frac{\epsilon }{k}\right) \right) \) exactly matches the \(p^{i-1}\left( E_{2}-i\frac{\epsilon }{k}\right) \) Enemy sometimes plays on Battlefield 2, but does not increase his payoff on Battlefield 1, as \(f^{-1}\left( p^{i-1}\left( E_{2}-i\frac{\epsilon }{k}\right) \right) \) is strictly less than \(f^{-1}\left( p^{i}\left( E_{2}-\left( i+1\right) \frac{\epsilon }{k}\right) \right) \) (from \(T_{i+1}^{b}\)). So Blotto’s expected payoff does not increase relative to play in \(T_{i}^{b}.\) Similar analysis of the other potential full-expenditure Blotto deviations ensures Property 2e holds.

Now we are ready to consider Blotto’s potential deviations. **Deviation 1** has Blotto mix over allocations which send weakly less to both battlefields than some allocation in some \(T_{i}^{b}.\) Since these deviating allocations are not themselves in \(T_{i}^{b}\), they must send strictly less to at least one battlefield. Suppose these conditions hold for some set of allocations \(S\) and let \(b^{d}=\left( b_{1}^{d},b_{2}^{d}\right) \in S\) satisfy **Deviation 1** relative to a \(\left( b_{1}^{i},b_{2}^{i}\right) \in T_{i}^{b}.\) Suppose \(b^{d}\) sends strictly less to Battlefield 1, or, by Lemma 5, \(b_{1}^{d}<h^{n-i}(E_{1})\) (and \(b_{2}^{d}\le f\left( h^{n-i}\left( E_{1}\right) \right) \)). There exists some \(k^{*}\) where \(b_{1}^{d}<h^{n-i}\left( E_{1}-\left( n+1-i\right) \frac{\epsilon }{k^{*}}\right) <h^{n-i}\left( E_{1}\right) \). Then \(\left( b_{1}^{d},b_{2}^{d}\right) \) cannot be a best response to \(\mu _{E}^{k^{*}}\): relative to playing \(\left( h^{n-i}(E_{1}),f\left( h^{n-i}(E_{1})\right) \right) \), playing \(\left( b_{1}^{d},b_{2}^{d}\right) \) strictly lowers Blotto’s probability of winning on Battlefield 1 when Enemy plays \(\mu _{E}^{k^{*}}\), and it does so without increasing his probability of winning on Battlefield 2. Therefore \(\left( b_{1}^{d},b_{2}^{d}\right) \) provides a strictly lower payoff and cannot be a best response. Similar logic applies to any \(\left( b_{1}^{d},b_{2}^{d}\right) \) that sends strictly less to Battlefield 2 (i.e., where \(b_{1}^{d}\le f^{-1}\left( p^{i-1}\left( E_{2}\right) \right) \) and \(b_{2}^{d}<p^{i-1}(E_{2})\)). Therefore, every allocation that could be randomized over in **Deviation 1** fails to be a best response to some \(\mu _{E}^{k},\) and no equilibrium Blotto strategy exhibits **Deviation 1**.

Consider a deviating Blotto strategy \(\mu _{B}^{d}\) that forms a Nash equilibrium with any \(\mu _{E}\in \Omega ^{E}\). No allocation in \(D_{0}^{b}\) or \(D_{n}^{b}\) is a best response to certain Nash equilibrium Enemy strategies. For instance, take an allocation \(\left( b_{1}^{d},b_{2}^{d}\right) \in D_{0}^{b}.\) Relative to \(T_{1}^{b},\) Blotto is increasing his Battlefield 2 allocation while reducing his Battlefield 1 allocation. However, in \(T_{1}^{b}\) Blotto was already guaranteeing victory on Battlefield 2, so this cannot be payoff improving. Clearly \(b_{1}^{d}<h^{n-1}(E_{1}).\) We can find some \(k^{*}\in \mathbb {N}\) such that \(b_{1}^{d}<h^{n-1}\left( E_{1}-\left( n+1-1\right) \frac{\epsilon }{k^{*}}\right) <h^{n-1}(E_{1}).\) As Enemy plays \(h^{n-1}\left( E_{1}-\left( n+1-1\right) \frac{\epsilon }{k^{*}}\right) \) on Battlefield 1 with positive probability in \(\mu _{E}^{k^{*}},\) \(\left( b_{1}^{d},b_{2}^{d}\right) \) must provide Blotto with a strictly lower payoff than \(\left( h^{n-1}(E_{1}),f\left( h^{n-1}(E_{1})\right) \right) ,\) which also guarantees victory on Battlefield 2. Therefore, \(\mu _{B}^{d}\left( D_{0}^{b}\right) =0\). Similar logic implies that \(\mu _{B}^{d}\left( D_{n}^{b}\right) =0.\)

We now simultaneously prove that neither **Deviation 2** nor **Deviation 3** is possible in a Nash equilibrium. Consider some \(i\in \left\{ 1,2,3,\ldots ,n-1\right\} .\) Assume that

In other words, there has not “yet” been a **Deviation 2** or **Deviation 3**.

Consider possible versions of **Deviation 3** for \(T_{i}^{b}\). First, suppose \(\mu _{B}^{d}\left( T_{i}^{b}\right) >\frac{w^{n-i}}{{\sum \nolimits _{j=0}^{{n-1}}}w^{j}}\). Given Eq. 53 and the fact that we’ve already ruled out **Deviation 1**, when Enemy plays in \(T_{i}^{e}\) his probability of winning on Battlefield 1 is \(\mu _{B}^{d}\left( T_{1}^{b}\cup \ldots \cup T_{i-1}^{b}\right) =\frac{\sum _{j=n-(i-1)}^{n-1}w^{j}}{\sum _{j=0}^{n-1}w^{j}}\).^{Footnote 25} However, his probability of winning on Battlefield 2 is \(1-\mu _{B}^{d}(T_{1}^{b}\cup \ldots \cup T_{i}^{b})<\frac{\sum _{j=0}^{n-i-1}w^{j}}{\sum _{j=0}^{n-1}w^{j}}.\) Therefore, Enemy’s total expected payoff is strictly less than \(\frac{\sum _{j=1}^{n-1}w^{j}}{\sum _{j=0}^{n-1}w^{j}}\), his constant equilibrium payoff, a contradiction. Similar logic applies to the case where \(\mu _{B}^{d}\left( T_{i}^{b}\right) <\frac{w^{n-i}}{{{\sum \nolimits _{j=0}^{n-1}}}w^{j}}\) and implies that Enemy’s expected payoff would be greater than \(\frac{\sum _{j=1}^{n-1}w^{j}}{\sum _{j=0}^{n-1}w^{j}},\) his constant equilibrium payoff, a contradiction. Therefore, \(\mu _{B}^{d}\left( T_{i}^{b}\right) =\frac{w^{n-i}}{{{\sum \nolimits _{j=0}^{n-1}}}w^{j}}.\)
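The algebra behind this indifference can be verified numerically (our own sketch, assuming Battlefield 1 has value \(1\) and Battlefield 2 has value \(w\), a normalization consistent with the formulas): at the masses \(\mu _{B}\left( T_{k}^{b}\right) =\frac{w^{n-k}}{\sum _{j=0}^{n-1}w^{j}}\), Enemy’s expected payoff from play in any \(T_{i}^{e}\) equals his constant payoff \(\frac{\sum _{j=1}^{n-1}w^{j}}{\sum _{j=0}^{n-1}w^{j}}\).

```python
# Numeric check (our illustration): with Blotto's equilibrium masses
# mu_B(T_k^b) = w^(n-k)/sum_j w^j, Enemy's expected payoff from any T_i^e
# is constant. Assumption: Battlefield 1 is worth 1 and Battlefield 2 is
# worth w (this is how the payoff formulas balance).
def enemy_payoff(n, w, i):
    denom = sum(w**j for j in range(n))
    win1 = sum(w**j for j in range(n - i + 1, n)) / denom  # beats T_1..T_{i-1} on BF1
    win2 = sum(w**j for j in range(0, n - i)) / denom      # beats T_{i+1}..T_n on BF2
    return win1 * 1.0 + win2 * w

for n in (3, 6):
    for w in (0.4, 0.8):
        denom = sum(w**j for j in range(n))
        const = sum(w**j for j in range(1, n)) / denom     # Enemy's constant payoff
        for i in range(1, n):
            assert abs(enemy_payoff(n, w, i) - const) < 1e-12
```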

Now consider a possible **Deviation 2**; specifically, \(\mu _{B}^{d}\left( D_{i}^{b}\right) >0.\) Note that all Battlefield 2 allocations in \(D_{i}^{b}\) are strictly greater than \(f\left( h^{n-i-1}\left( E_{1}\right) \right) .\) Define \(S_{i}\left( \delta \right) =\left\{ \left( b_{1},b_{2}\right) :\left( b_{1},b_{2}\right) \in D_{i}^{b}\text { and }b_{2}\ge f\left( h^{n-i-1}\left( E_{1}\right) \right) +\delta \right\} .\) We are then assured that \(\exists \delta >0\) sufficiently small that \(\mu _{B}^{d}\left( S_{i}\left( \delta \right) \right) >0\). We are also assured that \(\exists k^{*}\in \mathbb {N}\) such that \(f\left( h^{n-i-1}\left( E_{1}\right) \right) +\delta >g\left( h^{n-i}\left( E_{1}-\left( n+1-i\right) \frac{\epsilon }{k^{*}}\right) \right) \).^{Footnote 26} Note that Enemy plays \(\left( h^{n-i}\left( E_{1}-\left( n+1-i\right) \frac{\epsilon }{k^{*}}\right) ,g\left( h^{n-i}\left( E_{1}-\left( n+1-i\right) \frac{\epsilon }{k^{*}}\right) \right) \right) \) with positive probability in \(\mu _{E}^{k^{*}}.\) When he does so, the probability that he wins on Battlefield 1 is \(\mu _{B}^{d}(T_{1}^{b}\cup \ldots \cup T_{i-1}^{b})=\frac{\sum _{j=n-(i-1)}^{n-1}w^{j}}{\sum _{j=0}^{n-1}w^{j}}\), but the probability that he wins on Battlefield 2 is weakly less than \(1-\mu _{B}^{d}(T_{1}^{b}\cup \ldots \cup T_{i}^{b}\cup S_{i}(\delta ))<\frac{\sum _{j=0}^{n-i-1}w^{j}}{\sum _{j=0}^{n-1}w^{j}}.\) Therefore his expected payoff is strictly less than \(\frac{\sum _{j=1}^{n-1}w^{j}}{\sum _{j=0}^{n-1}w^{j}},\) his constant equilibrium payoff, a contradiction.

This analysis holds for all \(i=1,\ldots ,n-1\). Therefore, given Lemma 5 and the definitions of the \(T_{i}^{b}\)’s, \(D_{i}^{b}\)’s, and **Deviation 1**, we have now determined the mass over sets containing all feasible Blotto allocations other than those in \(T_{n}^{b}\) (see Fig. 13 for a graphical aid). Thus, \(\mu _{B}^{d}\left( T_{n}^{b}\right) =\frac{1}{{{\sum \nolimits _{j=0}^{n-1}}}w^{j}},\) the remaining mass. Therefore, \(\mu _{B}^{d}\) does not violate Property 1b. Since we’ve already shown it cannot violate Property 2b, \(\mu _{B}^{d}\in \Omega ^{B}.\)
\(\square \)

Recall **Theorem 3**: The complete set of Nash Equilibria of any two-battlefield Colonel Blotto game is the set of pairs \(\left\{ \mu _{B},\mu _{E}\right\} \) such that \(\mu _{B}\in \Omega ^{B}\) and \(\mu _{E}\in \Omega ^{E}.\)

###
*Proof*

The theorem follows directly from Lemmas 10, 11, and 12. \(\square \)

### Appendix 2: Method of equilibrium strategy construction

###
*Blotto construction*

Here we will demonstrate how to construct an equilibrium Blotto strategy from any continuous CDF whose range is \(\left[ 0,1\right] \) over the domain of \(b_{1}\) within \(T_{1}^{b}\).

Let \(z_{1}\left( b_{1}\right) \) be any continuous, strictly increasing function over the domain \(\left[ h^{n-1}\left( E_{1}\right) ,f^{-1}\left( E_{2}\right) \right] \) such that:

So, \(z_{1}\) is a continuous, strictly increasing function with a range of \(\left[ 0,1\right] \) over the domain of \(b_{1}\) within \(T_{1}^{b}.\) We let this represent a CDF of \(b_{1}\) values that Blotto might play within \(T_{1}^{b}.\) Now \(\forall i=2,\ldots ,n\) we iteratively define:

Each \(z_{i}\) represents a CDF over \(b_{1}\) values that Blotto might play within \(T_{i}^{b}.\) Note that by Eq. 26 the lower and upper bounds of \(b_{1}\) values in \(\Psi _{1}\left( T_{i}^{b}\right) \) are \(h^{n-i}(E_{1})\) and \(f^{-1}\left( p^{i-1}(E_{2})\right) ,\) respectively. Then, by Eqs. 54 and 56:

Note that by definition we have that \(f^{-1}\left( p^{j}\left( x\right) \right) =h^{-j}\left( f^{-1}\left( x\right) \right) .\) Therefore, by Eqs. 55 and 56:

So, given this construction method, \(z_{i}\) equals zero at the lower bound of \(b_{1}\) values in \(\Psi _{1}\left( T_{i}^{b}\right) \) and equals one at the upper bound of \(b_{1}\) values in \(\Psi _{1}\left( T_{i}^{b}\right) .\) Also note that all \(z_{i}\) are strictly increasing functions over \(\Psi _{1}\left( T_{i}^{b}\right) \) as \(h^{-\left( i-1\right) }\) and \(z_{1}\) are strictly increasing functions. Therefore, each \(z_{i}\) is a CDF over \(b_{1}\) values that Blotto might play within \(\Psi _{1}\left( T_{i}^{b}\right) .\)
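To make the construction concrete, here is a sketch in the classic linear special case. Everything below is our own illustration, not the paper’s: we assume Blotto’s budget is \(10\) and Enemy’s is \(9\), so \(f(b_{1})=10-b_{1}\), \(g(e_{1})=9-e_{1}\), and \(h=g^{-1}\circ f\) is the unit shift \(h(x)=x-1\); taking \(E_{1}=E_{2}=9\) and \(n=10\) then gives \(\Psi _{1}\left( T_{i}^{b}\right) =[i-1,i]\), and the iterative identity \(z_{i+1}\left( f^{-1}\left( g\left( x\right) \right) \right) =z_{i}\left( x\right) \) collapses to \(z_{i}=z_{1}\circ h^{i-1}\):

```python
# Sketch (our illustrative special case, not the paper's general setup):
# linear frontiers f(b1) = 10 - b1, g(e1) = 9 - e1, so h(x) = x - 1, with
# assumed n = 10 and Psi_1(T_i^b) = [i-1, i].

def make_z(z1, h, i):
    """CDF of b1 on Psi_1(T_i^b): z_i(b1) = z1(h^(i-1)(b1))."""
    def z_i(b1):
        x = b1
        for _ in range(i - 1):
            x = h(x)          # apply h a total of i-1 times
        return z1(x)
    return z_i

h = lambda x: x - 1.0
z1 = lambda x: min(max(x, 0.0), 1.0)   # uniform seed CDF on [0, 1] = Psi_1(T_1^b)

z4 = make_z(z1, h, 4)                  # CDF of b1 on [3, 4] = Psi_1(T_4^b)
assert z4(3.0) == 0.0 and z4(4.0) == 1.0   # zero/one at the region's bounds
assert abs(z4(3.25) - 0.25) < 1e-12
```

Any other continuous, strictly increasing seed \(z_{1}\) on \([0,1]\) would serve equally well, which is the sense in which the construction yields a continuum of equilibria.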

###
**Proposition 13**

For any \(z_{1},\) the following mixed strategy, \(\mu _{B}^{*},\) is in \(\Omega ^{B}:\) Blotto chooses \(i\in \{1,2,\ldots ,n\}\) with the probability he chooses any particular \(i\) being given by

He then randomly chooses a \(b_{1}\) within \(\Psi _{1}\left( T_{i}^{b}\right) \) according to the CDF \(z_{i}.\) He then allocates \(\left( b_{1},f\left( b_{1}\right) \right) \).

###
*Proof*

This strategy trivially satisfies Property 1b. We now show that it satisfies Property 2b. For some \(i\in \left\{ 1,2,\ldots ,n-1\right\} \) consider an \(x\in [h^{n-i}(E_{1}),f^{-1}\left( p^{i-1}\left( E_{2}\right) \right) ]=\Psi _{1}\left( T_{i}^{b}\right) .\) By construction and Eq. 20, \(\mu _{B}^{*}\left( j_{b}^{x,i}\right) =z_{i}\left( x\right) \cdot \frac{w^{n-i}}{{{\sum \nolimits _{j=0}^{n-1}}}w^{j}}.\) Note that since under this strategy Blotto always fully expends his resources, the set of allocations such that \(b_{2}\ge g\left( x\right) \) is equivalent to the set of allocations such that \(b_{1}\le f^{-1}\left( g\left( x\right) \right) .\) Therefore, given Eq. 21, \(\mu _{B}^{*}\left( k_{b}^{x,i}\right) =z_{i+1}\left( f^{-1}\left( g\left( x\right) \right) \right) \cdot \frac{w^{n-i-1}}{{{\sum \nolimits _{j=0}^{n-1}}}w^{j}}.\) Note that \(h^{-1}\left( x\right) =f^{-1}\left( g\left( x\right) \right) .\) Therefore, by construction, \(z_{i+1}\left( f^{-1}\left( g\left( x\right) \right) \right) =z_{i}\left( x\right) \), and \(\mu _{B}^{*}\left( j_{b}^{x,i}\right) =\mu _{B}^{*}\left( k_{b}^{x,i}\right) \cdot w.\) Thus Property 2b is satisfied. \(\square \)
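Continuing the same illustrative linear special case (our assumptions: budgets \(10\) and \(9\), equal battlefield values so \(w=1\), \(n=10\), and \(z_{i}\) uniform on \([i-1,i]\)), \(\mu _{B}^{*}\) can be sampled by an inverse-CDF draw; under these particular choices the induced marginal on \(b_{1}\) is uniform on \([0,10]\):

```python
import random

# Sampler for mu_B^* (Proposition 13) in our illustrative linear case.
# Assumptions (ours, not the paper's equations): n = 10 regions, w = 1
# (equal battlefield values), f(b1) = 10 - b1, and z_i uniform on [i-1, i].
def sample_blotto(n=10, w=1.0):
    denom = sum(w**j for j in range(n))
    weights = [w**(n - i) / denom for i in range(1, n + 1)]  # Property 1b masses
    i = random.choices(range(1, n + 1), weights=weights)[0]  # pick region T_i^b
    b1 = (i - 1) + random.random()      # inverse-CDF draw from uniform z_i
    return (b1, 10.0 - b1)              # full expenditure: b2 = f(b1)

random.seed(0)
draws = [sample_blotto() for _ in range(10000)]
assert all(0.0 <= b1 <= 10.0 and abs(b1 + b2 - 10.0) < 1e-9 for b1, b2 in draws)
```

With \(w\ne 1\) the region weights become geometric rather than uniform, but the two-stage structure (pick a region, then draw \(b_{1}\) from \(z_{i}\)) is unchanged.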

###
*Enemy construction*

Here we will demonstrate how to construct an equilibrium Enemy strategy from any continuous CDF whose range is \(\left[ 0,1\right] \) over \(\Psi _{1}\left( T_{1}^{e}\right) .\)

Let \(v_{1}\left( e_{1}\right) \) be any continuous, strictly increasing function such that:

So, \(v_{1}\) is a continuous, strictly increasing function with a range of \(\left[ 0,1\right] \) over the domain of \(e_{1}\) within \(T_{1}^{e}.\) We let this represent a CDF of \(e_{1}\) values that Enemy might play within \(T_{1}^{e}.\) Now \(\forall i=2,\ldots ,n\) we iteratively define:

Note that by Eqs. 28, 30, and 32 the lower and upper bounds of \(e_{1}\) values within \(T_{i}^{e}\) are \(f^{-1}\left( p^{i-2}(E_{2})\right) \) and \(h^{n-i}(E_{1}),\) respectively. Then, by Eqs. 57 and 59:

Note that by definition we have that \(f^{-1}\left( p^{j}\left( x\right) \right) =h^{-j}\left( f^{-1}\left( x\right) \right) .\) Therefore, by Eqs. 58 and 59:

So, given this construction method, \(v_{i}\) equals zero at the lower bound of \(e_{1}\) values in \(\Psi _{1}\left( T_{i}^{e}\right) \) and equals one at the upper bound of \(e_{1}\) values in \(\Psi _{1}\left( T_{i}^{e}\right) .\) Also note that all \(v_{i}\) are strictly increasing over \(\Psi _{1}\left( T_{i}^{e}\right) \) as \(h^{-\left( i-1\right) }\) and \(v_{1}\) are strictly increasing functions. Therefore, each \(v_{i}\) is a CDF over \(e_{1}\) values that Enemy might play within \(\Psi _{1}\left( T_{i}^{e}\right) .\)

###
**Proposition 14**

For any \(v_{1},\) the following mixed strategy, \(\mu _{E}^{*},\) is in \(\Omega ^{E}:\) Enemy chooses \(i\in \{1,2,\ldots ,n\}\) with the probability he chooses any particular \(i\) being given by

He then randomly chooses an \(e_{1}\) within \(\Psi _{1}\left( T_{i}^{e}\right) \) according to the CDF \(v_{i}.\) He then allocates \(\left( e_{1},g\left( e_{1}\right) \right) \).

###
*Proof*

This strategy trivially satisfies Property 1e. We now show that it satisfies Property 2e. For some \(i\in \left\{ 1,2,\ldots ,n-1\right\} \) consider an \(x\in \left( f^{-1}\left( p^{i-1}\left( E_{2}\right) \right) ,h^{n-i-1}\left( E_{1}\right) \right) =\Psi _{1}\left( T_{i+1}^{e}\right) \).^{Footnote 27} By construction and Eq. 22, \(\mu _{E}^{*}\left( j_{e}^{x,i}\right) =v_{i+1}\left( x\right) \cdot \frac{w^{i}}{{{\sum \nolimits _{j=0}^{n-1}}}w^{j}}.\) Note that since under this strategy Enemy always fully expends his resources, the restriction that \(e_{2}\ge f\left( x\right) \) is equivalent to the restriction that \(e_{1}\le g^{-1}\left( f\left( x\right) \right) .\) Therefore, given Eq. 23, \(\mu _{E}^{*}\left( k_{e}^{x,i}\right) =v_{i}\left( g^{-1}\left( f\left( x\right) \right) \right) \cdot \frac{w^{i-1}}{{{\sum \nolimits _{j=0}^{n-1}}}w^{j}}.\) Note that \(h\left( x\right) =g^{-1}\left( f\left( x\right) \right) .\) Therefore, by construction, \(v_{i+1}\left( x\right) =v_{i}\left( g^{-1}\left( f\left( x\right) \right) \right) ,\) and \(\mu _{E}^{*}\left( j_{e}^{x,i}\right) =\mu _{E}^{*}\left( k_{e}^{x,i}\right) \cdot w.\) Thus Property 2e is satisfied and \(\mu _{E}^{*}\in \Omega ^{E}\). \(\square \)

## About this article

### Cite this article

Macdonell, S.T., Mastronardi, N. Waging simple wars: a complete characterization of two-battlefield Blotto equilibria.
*Econ Theory* **58**, 183–216 (2015). https://doi.org/10.1007/s00199-014-0807-1


### Keywords

- Colonel Blotto game
- Zero-sum game
- Warfare
- All-pay auction
- Multi-unit auction

### JEL Classification

- C72
- H56
- D7