Abstract
We consider the class of two-person zero-sum allocation games known as Captain Lotto games (Hart in Int J Game Theory 45:37–61, 2016). These are Colonel Blotto type games in which the players face capacity constraints. We study the game both with non-strict and with strict constraints. We show that in most cases, when optimal strategies exist, they are necessarily unique; when they do not exist, we characterize the pointwise limit of the cumulative distribution functions of \(\varepsilon \)-optimal strategies.
Notes
In terms of the pointwise limit of their cumulative distribution function (CDF).
\({\mathbf {1}}_{d}\) denotes the distribution that puts mass 1 on the point d.
Throughout this paper we identify a random variable with its distribution. We denote the uniform distribution on the interval [c, d] by \(U_{[c,d]}.\) Also note: \(Y=\alpha Z+(1-\alpha )W\) denotes the mixture distribution of Z and W, and not their sum.
\(\mathbb {R}\) denotes the set of real numbers, and \(\mathbb {N}\) denotes the set of natural numbers.
Here and in the next case, \(X^{*}\) is an \(\varepsilon \)-optimal strategy, where \(c^{+}\) stands for \(c+\delta \) for a small enough \(\delta >0\).
Clearly, the specific distribution of W does not matter, only its expectation does, and so one may as well take \(W = {\mathbf {1}}_{E(W)} = {\mathbf {1}}_{c+\delta }\) (indeed, \(H(W,Y) = H({\mathbf {1}}_{E(W)},Y)\) because \(Y\le c\) and \(W>c\)).
We denote the cumulative distribution function (CDF) of a random variable X by \(F_{X}(t)\).
Throughout, we do not keep track of the constants multiplying our \(\varepsilon \)'s, since these terms vanish when we take the limit.
We write “\(\in [\alpha ,\beta ]\)” to mean that the value of \(F_{z_{0}}\) there must belong to the interval \([\alpha ,\beta ]\); in Lemma 6 we will show that in fact every \(\gamma \in [\alpha ,\beta ]\) and every \(z_{0}\in [0,c/2]\) are attained for some subsequence.
\(conv\{\varOmega \}\) is the convex hull of the set \(\varOmega .\)
In case (v) \(Y^{*}\) is an \(\varepsilon \)-optimal strategy, where \(c^{-}\) stands for \(c-\delta \) for a small enough \(\delta >0\).
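The conventions above (the payoff \(H(X,Y)=P(X>Y)-P(Y>X)\), point masses \({\mathbf {1}}_{d}\), and mixtures of distributions rather than sums) can be made concrete with a small sketch for finite-support distributions; the helper names below are illustrative, not from the paper.

```python
# Minimal sketch (assumed representation): a finite-support distribution is a
# dict {point: probability}. H(X, Y) = P(X > Y) - P(Y > X) for independent
# X, Y; a mixture alpha*Z + (1-alpha)*W mixes the *distributions*, not the
# random variables (so it is NOT the distribution of the sum Z + W).

def H(X, Y):
    """H(X, Y) = P(X > Y) - P(Y > X) for independent finite-support X, Y."""
    return sum(p * q * ((x > y) - (y > x))
               for x, p in X.items() for y, q in Y.items())

def mixture(alpha, Z, W):
    """The mixture distribution alpha*Z + (1 - alpha)*W."""
    out = {}
    for x, p in Z.items():
        out[x] = out.get(x, 0.0) + alpha * p
    for x, p in W.items():
        out[x] = out.get(x, 0.0) + (1 - alpha) * p
    return out

def mean(X):
    return sum(x * p for x, p in X.items())

def point(d):
    """The distribution 1_d, putting mass 1 on the point d."""
    return {d: 1.0}

Y = mixture(0.5, point(1.0), point(3.0))
print(mean(Y))            # 2.0: the mixture has expectation 0.5*1 + 0.5*3
print(H(Y, point(2.0)))   # P(Y > 2) - P(2 > Y) = 0.5 - 0.5 = 0.0
```

Note that the mixture of \({\mathbf {1}}_{1}\) and \({\mathbf {1}}_{3}\) keeps its mass at the points 1 and 3; the sum would put all its mass at 4.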
References
Bell RM, Cover TM (1980) Competitive optimality of logarithmic investment. Math Oper Res 5:161–166
Billingsley P (1986) Probability and measure, 2nd edn. Wiley, New York
Borel E (1921) La Théorie du Jeu et les Équations Intégrales à Noyau Symétrique. Comptes Rendus de l'Académie des Sciences 173:1304–1308 (Translated by Savage LJ (1953) The theory of play and integral equations with skew symmetric kernels. Econometrica 21:97–100)
Hart S (2008) Discrete Colonel Blotto and General Lotto games. Int J Game Theory 36:441–460
Hart S (2016) Allocation games with caps: from Captain Lotto to all-pay auctions. Int J Game Theory 45:37–61
Lizzeri A (1999) Budget deficit and redistributive politics. Rev Econ Stud 66:909–928
Myerson RB (1993) Incentives to cultivate minorities under alternative electoral systems. Am Polit Sci Rev 87:856–869
Roberson B (2006) The Colonel Blotto game. Econ Theory 29:1–24
Sahuguet N, Persico N (2006) Campaign spending regulations in a model of redistributive politics. Econ Theory 28:95–124
Additional information
This research has received funding from the European Research Council under the European Community’s Seventh Framework Program (FP7/2007-2013)/ERC Grant Agreement No. [249159]. The author would like to express his deep gratitude to his advisor, Sergiu Hart. The author would also like to thank Nir Gadish and the anonymous referees for useful comments and discussions.
Appendices
Appendix A
In this appendix we prove useful lemmas and cite some known results.
Every nonnegative random variable X satisfies
$$\begin{aligned} E(X)=\int _{0}^{\infty }P(X>x)\,dx \end{aligned}$$
(24)
(see, e.g., Billingsley 1986 (21.9)). Since \(H(X,Y)=P(X>Y)-P(Y>X)\), we easily obtain
Lemma 8
For every nonnegative random variable X and every \(t>0,\)
1.
$$\begin{aligned} H(U_{[0,2t]},X)\ge 1-\frac{E(X)}{t}. \end{aligned}$$
2.
\(X\le 2t\) if and only if
$$\begin{aligned} H(U_{[0,2t]},X)=1-\frac{E(X)}{t}. \end{aligned}$$
Remark 6
Part (1) of this lemma is proved in Hart (2008) (see (10), (11)). Since the proof is short and we use this lemma repeatedly, we include its proof here.
Proof
which proves the first part of our lemma. For the second part we first assume that \(X\le 2t\). Thus,
For the other direction, assume \(H(U_{[0,2t]},X)=1-E(X)/t.\) Using this assumption and (24) gives us
This implies that
thus \(P(X\ge x)=0\) for almost every \(x\in [2t,\infty ).\) However, the left-hand side is a nonincreasing function and so we obtain that \(P(X>x)=0\) for every \(x\in [2t,\infty ),\) i.e., \(X\le 2t.\) \(\square \)
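Since the displayed computations of the proof are omitted here, a small exact check (illustrative helper names; finite-support X against the continuous \(U_{[0,2t]}\)) confirms both parts of Lemma 8: the bound always holds, with equality exactly when the support of X lies in [0, 2t].

```python
# Sketch, not the paper's proof: for finite-support X,
# H(U_[0,2t], X) = sum_x P(X = x) * (P(U > x) - P(U < x)),
# where P(U > x) = max(0, 2t - x)/(2t) and P(U < x) = min(x, 2t)/(2t).

def H_uniform_vs(t, X):
    """Exact H(U_[0,2t], X) for a finite-support nonnegative X."""
    return sum(p * (max(0.0, 2*t - x) / (2*t) - min(x, 2*t) / (2*t))
               for x, p in X.items())

def mean(X):
    return sum(x * p for x, p in X.items())

t = 1.0
X1 = {0.5: 0.5, 1.5: 0.5}   # supported inside [0, 2t]
X2 = {0.5: 0.5, 3.0: 0.5}   # has mass above 2t

for X in (X1, X2):
    # part (1): H(U_[0,2t], X) >= 1 - E(X)/t always holds
    assert H_uniform_vs(t, X) >= 1 - mean(X) / t - 1e-12
# part (2): equality iff X <= 2t
assert abs(H_uniform_vs(t, X1) - (1 - mean(X1) / t)) < 1e-12
assert H_uniform_vs(t, X2) > 1 - mean(X2) / t
print("Lemma 8 checks pass")
```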
When both players have the same cap, i.e., \(c_{A}=c_{B}=c,\) and, without loss of generality, we assume \(a\ge b,\) the value of the \(\varLambda _{c,c}(a,b)\) game is \(1-b/a.\) This is part of Theorem 1, which is proved in Hart (2016). Using this yields the following:
Lemma 9
Let \(0<a\le c\). Then X is an optimal strategy of player A in \(\varLambda _{c,c}(a,b)\) for some \(0<b\le a\) if and only if X is an optimal strategy of player A in \(\varLambda _{c,c}(a,t)\) for all \(t\in [b,a].\)
Proof
Assume X is an optimal strategy of player A in \(\varLambda _{c,c}(a,b)\) for some \(0<b\le a\). Let \(t\in [b,a]\) and let Y be some strategy of player B in \(\varLambda _{c,c}(a,t).\) Define \(Y^{b}=(1-b/t){\mathbf {1}}_{0}+(b/t)Y\) with \(E(Y^{b})=b\). Since X is optimal we obtain
Thus X is optimal in \(\varLambda _{c,c}(a,t).\)
The other direction is trivial. \(\square \)
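The construction \(Y^{b}=(1-b/t){\mathbf {1}}_{0}+(b/t)Y\) in the proof rests on two exact facts: the shrunk strategy has budget b, and H is affine in mixtures of the second argument. Both can be checked directly for finite-support distributions (helper names are illustrative):

```python
# Sketch of the Lemma 9 construction: Y^b = (1 - b/t)*1_0 + (b/t)*Y shrinks
# a budget-t strategy Y of player B down to budget b, and
# H(X, Y^b) = (1 - b/t)*H(X, 1_0) + (b/t)*H(X, Y) by bilinearity of H.

def H(X, Y):
    return sum(p * q * ((x > y) - (y > x))
               for x, p in X.items() for y, q in Y.items())

def mixture(alpha, Z, W):
    out = {}
    for x, p in Z.items():
        out[x] = out.get(x, 0.0) + alpha * p
    for x, p in W.items():
        out[x] = out.get(x, 0.0) + (1 - alpha) * p
    return out

def mean(X):
    return sum(x * p for x, p in X.items())

X = {0.5: 0.25, 1.0: 0.5, 2.0: 0.25}
Y = {1.0: 0.5, 3.0: 0.5}             # E(Y) = t = 2
b, t = 1.0, mean(Y)
Yb = mixture(1 - b/t, {0.0: 1.0}, Y)

assert abs(mean(Yb) - b) < 1e-12     # the shrunk strategy has budget b
# affinity of H in the second argument over the mixture
assert abs(H(X, Yb)
           - ((1 - b/t) * H(X, {0.0: 1.0}) + (b/t) * H(X, Y))) < 1e-12
print("Lemma 9 mixture identity checks pass")
```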
Remark 7
Note that in Lemma 9, c may be greater than 2a (and may even be infinite).
Corollary 1
If X is optimal in \(\varLambda _{c,c}(a,b)\) for \(a\ge b\) then X is optimal in \(\varLambda _{c,c}(a,a).\)
Lemma 10
Let X be an optimal strategy of player A in \(\varLambda _{c,c}(a,b),\) and let \(Y=\alpha {\mathbf {1}}_{t_{1}}+(1-\alpha ){\mathbf {1}}_{t_{2}}\) be some strategy of player B with \(0\le t_{1}<b<t_{2}\le c\) and \(\alpha =(t_{2}-b)/(t_{2}-t_{1}).\) Then
where \(v=\hbox {val}\varLambda _{c,c}(a,b).\)
Proof
X is optimal; thus
Multiplying by \((t_{2}-t_{1})/[(t_{2}-b)(b-t_{1})]\) yields
and so
\(\square \)
Finally, we cite a version of Helly’s selection theorem, and prove a useful lemma.
Theorem 21
(Helly’s selection theorem) For every sequence \(F_{n}\) of distribution functions there exists a subsequence \(F_{n_{k}}\) and a nondecreasing, right-continuous function F such that \(\lim _{k\rightarrow \infty }F_{n_{k}}(x)=F(x)\) at continuity points x of F.
(see, e.g., Billingsley 1986 (Theorem 25.9)). Note that since F is nondecreasing, the set of points where F is not continuous is at most countable. An application of the diagonal method gives a subsequence of \(F_{n}\) that converges for every \(t\in \mathbb {R}\). We also note that it is sufficient to assume that \(F_{n}\) is a sequence of nondecreasing functions that are uniformly bounded, i.e., there exists some \(0<M\in \mathbb {R}\) with \(|F_{n}(x)|<M\) for every \(x\in \mathbb {R}\) and every \(n\in \mathbb {N}\).
Lemma 11
Let \(\{X_{n}\}_{n\in \mathbb {N}}\) be a sequence of nonnegative random variables, and assume \(\lim _{n\rightarrow \infty }E(X_{n})=0\). Then for every \(t>0, \lim _{n\rightarrow \infty }F_{X_{n}}(t)=1\).
Proof
Assume the opposite. Then there exist \(t_{0}>0\), \(0< \varepsilon \le 1\), and a subsequence \(\{X_{n_{k}}\}_{k\in \mathbb {N}}\) of \(\{X_{n}\}_{n \in \mathbb {N}}\) with \(\lim _{k\rightarrow \infty }F_{X_{n_{k}}}(t_{0}) = 1 - \varepsilon \). Thus \(\lim _{k\rightarrow \infty }P(X_{n_{k}}>t_{0}) = \varepsilon > 0 \).
By (24) and Fatou’s lemma we have,
However, \(\lim _{n\rightarrow \infty }E(X_{n})=0\), a contradiction. \(\square \)
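One concrete way to see Lemma 11 in action uses Markov's inequality, \(P(X>t)\le E(X)/t\) for nonnegative X (a different route than the Fatou argument above); the example sequence below is illustrative, not from the paper.

```python
# Illustration of Lemma 11: if E(X_n) -> 0, then F_{X_n}(t) -> 1 for each
# fixed t > 0, since 1 - F_{X_n}(t) = P(X_n > t) <= E(X_n)/t (Markov).

def cdf(X, t):
    """F_X(t) = P(X <= t) for a finite-support X."""
    return sum(p for x, p in X.items() if x <= t)

def X_n(n):
    """Mass 1/n^2 at the large point n, rest at 0, so E(X_n) = 1/n -> 0."""
    return {0.0: 1.0 - 1.0 / n**2, float(n): 1.0 / n**2}

t = 2.0
for n in (10, 100, 1000):
    X = X_n(n)
    EX = sum(x * p for x, p in X.items())
    # Markov bound: 1 - F_X(t) = P(X > t) <= E(X)/t
    assert 1 - cdf(X, t) <= EX / t + 1e-12

print(cdf(X_n(1000), t))   # 0.999999: F_{X_n}(t) -> 1 as E(X_n) -> 0
```

The example also shows why the CDFs converge to 1 only pointwise for \(t>0\): each \(X_{n}\) still carries an escaping atom at the ever-larger point n.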
Corollary 2
Let \(c>0\) and let \(\{X_{n}\}_{n\in \mathbb {N}}\) be a sequence of random variables with \(X_{n} \ge c\) for every \(n \in \mathbb {N}\). Assume that \(\lim _{n\rightarrow \infty }E(X_{n})=c\). Then for every \(t>c, \lim _{n\rightarrow \infty }F_{X_{n}}(t)=1\).
Appendix B
In this appendix we prove Theorems 15–20. We begin with a lemma.
Lemma 12
If X is an optimal strategy for player A in \(\varLambda _{c,c^{-}}(a,b)\) with \(P(X=c)=0,\) and
then X is an optimal strategy for player A in \(\varLambda _{c,c}(a,b)\).
Proof
Let \(X^{0}\) be an optimal strategy of player A in \(\varLambda _{c,c^{-}}(a,b)\) with \(P(X^{0}=c)=0\), and let Y be some strategy of player B in \(\varLambda _{c,c}(a,b)\). Express \(Y=\gamma W+(1-\gamma ){\mathbf {1}}_{c}\), with \(E(W)=w\) and \(\gamma =(c-b)/(c-w)\).
If \(\gamma =1\), then optimality in \(\varLambda _{c,c^{-}}(a,b)\) gives us,
Assume \(\gamma <1\). For every \(\delta >0\) small enough, define \(Y_{\delta }=\gamma _{\delta }W+(1-\gamma _{\delta }){\mathbf {1}}_{c-\delta }\) with \(\gamma _{\delta }=(c-b-\delta )/(c-w-\delta )\). By \(X^{0}\)’s optimality,
Taking the \(\delta \rightarrow 0^{+}\) limit, and remembering that \(P(X^{0}\ge t)\) is left continuous yields,
(the last equality is given by Y’s definition and by \(P(X^{0}=c)=0\)). Since
\(X^{0}\) is an optimal strategy of player A in \(\varLambda _{c,c}(a,b)\). \(\square \)
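The decomposition \(Y=\gamma W+(1-\gamma ){\mathbf {1}}_{c}\) used in the proof forces the weight \(\gamma =(c-b)/(c-w)\): taking expectations, \(b=\gamma w+(1-\gamma )c\). A small exact check (illustrative helper names; W taken as Y conditioned on \(\{Y<c\}\)):

```python
# Sketch of the Lemma 12 decomposition: a budget-b strategy Y on [0, c] splits
# as Y = gamma*W + (1 - gamma)*1_c with W = (Y | Y < c) and gamma = P(Y < c);
# the weight is then forced to be gamma = (c - b)/(c - w), where w = E(W).

def mean(X):
    return sum(x * p for x, p in X.items())

def split_at_cap(Y, c):
    """Return (gamma, W): gamma = P(Y < c), W is Y conditioned on {Y < c}."""
    gamma = sum(p for y, p in Y.items() if y < c)
    W = {y: p / gamma for y, p in Y.items() if y < c}
    return gamma, W

c = 4.0
Y = {0.0: 0.25, 2.0: 0.25, 4.0: 0.5}   # E(Y) = b = 2.5, atom at the cap c
b = mean(Y)
gamma, W = split_at_cap(Y, c)
w = mean(W)

assert abs(gamma - (c - b) / (c - w)) < 1e-12   # the forced mixture weight
print(gamma, w)   # 0.5 1.0
```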
B.1 Proof of Theorem 15
For both players, optimality follows from Theorem 14, and uniqueness follows from the same proof as in Hart (2008, Appendix).
B.2 Proof of Theorem 16
For both players optimality follows from Theorem 14. \(Y^{*}\)’s uniqueness derives from its uniqueness in \(\varLambda _{2a,2a}(a,b)\). We are left with \(X^{*}\)’s uniqueness.
Let \(X^{0}\) be an optimal strategy of player A. We express \(X^{0}=\alpha Z+(1-\alpha ){\mathbf {1}}_{c}\), where \(0\le Z<c\) with \(E(Z)=z\), and \(\alpha =(c-a)/(c-z)\). We wish to show that \(\alpha =1\). Assume \(\alpha <1\), and notice \(\alpha<1\Leftrightarrow z<a\).
If \(z<b\), then by Theorem 15 \(Y=U_{[0,2b]}\) is an optimal strategy of player B in \(\varLambda _{c,c^{-}}(z,b)\). Thus,
By substituting \(\alpha =(c-a)/(c-z)\) and \(c=2a\), and multiplying by \(ab(c-z)\), we obtain \((2b-z)a^{2}\le b^{2}(2a-z)\).
Since \(z<b<a\), we have \(2ab\le z(a+b)<2ab\), a contradiction. We conclude that \(z\ge b\).
Let \(Y=\beta Z+(1-\beta ){\mathbf {1}}_{0}\) with \(\beta =b/z\) (we use the same Z as in player A’s strategy \(X^{0}\)). Y is indeed allowed for player B since \(0\le Z<c\) and \(z\ge b\). Now,
Rearranging and substituting \(\alpha =(c-a)/(c-z), \beta = b/z\) yields,
and so,
However, \(z<a=\frac{c}{2}\), which leads to a contradiction. Thus, \(\alpha =1\) and \(X^{0}\) satisfies \(0\le X^{0}<c\), i.e., \(P(X^{0}\ge c)=0\).
Notice that
thus we can obtain by Lemma 12 that \(X^{0}\) is optimal for player A in \(\varLambda _{c,c}(a,b)\). Since \(X^{*}\) is the unique optimal strategy for player A in \(\varLambda _{c,c}(a,b)\) (Theorem 2), we have \(X^{0}=X^{*}\).
B.3 Proof of Theorem 17
Denote \(X(a)=X^{*}=(1-2a/c){\mathbf {1}}_{0}+(2a/c)U_{[0,c]}\in M\). By Theorem 14 \(X^{*}\) and \(Y^{*}\) are optimal for players A and B, respectively. As in Theorem 16, \(Y^{*}\)’s uniqueness derives from its uniqueness in \(\varLambda _{c,c}(a,c/2)\), and so we turn to prove that M is the set of all optimal strategies of player A.
Let \(X(z)\in M\), and let Y be some strategy of player B.
Since \(Y<c\) we have \(H({\mathbf {1}}_{c},Y)=1\). Moreover, the strategy \((1-2z/c){\mathbf {1}}_{0}+(2z/c)U_{[0,c]}\) is optimal for player A in \(\varLambda _{c,c^{-}}(z,c/2)\), where \(z\le c/2\). Applying these to (26) yields,
Thus, every strategy \(X(z)\in M\) is optimal for player A.
We now wish to show that every optimal strategy of player A equals \(X(z)\in M\) for some \(0\le z\le a.\) Note that for \(a\le b=c/2\),
Let \(X^{0}\) be some optimal strategy of player A. If \(P(X^{0}=c)=0\), by Lemma 12 and Theorem 2 we obtain, \(X^{0}=(1-2a/c){\mathbf {1}}_{0}+(2a/c)U_{[0,c]}=X(a)\in M\). If \(P(X^{0}=c)>0\), express \(X^{0}=\alpha Z+(1-\alpha ){\mathbf {1}}_{c}\) with \(Z<c, E(Z)=z\) and \(\alpha =(c-a)/(c-z)\). Since \(P(X^{0}=c)>0\), we have \(z<a\). Let Y be some strategy of player B.
Rearranging and substituting \(\alpha =(c-a)/(c-z)\) yields,
thus Z is an optimal strategy of player A in \(\varLambda _{c,c^{-}}(z,c/2)\) with \(z<a\le c/2\), and \(P(Z=c)=0\). By Lemma 12 and Theorem 2 we obtain \(Z=(1-2z/c){\mathbf {1}}_{0}+(2z/c)U_{[0,c]}\), and so
and we are done.
B.4 Proof of Theorem 18
Denote \(X^{*}=X(c/2)=\frac{2(c-a)}{c}U_{[0,c]}+\frac{2a-c}{c}{\mathbf {1}}_{c}\). The optimality of \(X^{*}\) and \(Y^{*}=U_{[0,c]}\) for players A and B, respectively, derives from Theorem 14.
We turn to prove that \(Y^{*}\) is unique, and begin with a lemma.
Lemma 13
Let \(Y^{0}\) be an optimal strategy of player B in \(\varLambda _{c,c^{-}}(a,b),\) where \(b\le c/2<a\). Then for every strategy X of player A in \(\varLambda _{c,c^{-}}(c/2,b)\) we have
Proof
Let \(Y^{0}\) be an optimal strategy of player B in \(\varLambda _{c,c^{-}}(a,b)\), where \(b\le c/2<a\), and let X be some strategy of player A in \(\varLambda _{c,c^{-}}(c/2,b)\). Define \(X^{a}=\alpha X+(1-\alpha ){\mathbf {1}}_{c}\) with \(\alpha =2(c-a)/c\), and so \(E(X^{a})=a\). Since \(Y^{0}\) is optimal we have
Rearranging and substituting \(\alpha =2(c-a)/c\) yields
\(\square \)
Let \(Y^{0}\) be an optimal strategy of player B. Using the above lemma gives us that \(Y^{0}\) is an optimal strategy of player B in \(\varLambda _{c,c^{-}}(c/2,c/2)\). According to Theorem 17, \(Y^{0}=U_{[0,c]}=Y^{*}\), and so \(Y^{*}\) is unique.
Turning to player A, let \(X(z)\in M\). As we saw for \(z=c/2, X(c/2)=X^{*}\) is an optimal strategy of player A. Assume \(z<c/2\). According to Theorem 17 the strategy \((1-2z/c){\mathbf {1}}_{0}+(2z/c)U_{[0,c]}\) is optimal for player A in \(\varLambda _{c,c^{-}}(z,c/2)\); thus,
And so, every strategy \(X(z)\in M\) is optimal.
Now, let \(X^{0}\) be some optimal strategy of player A. We wish to show that \(X^{0}=X(z)\) for some \(0\le z\le c/2\).
Lemma 14
An optimal strategy \(X^{0}\) of player A in \(\varLambda _{c,c^{-}}(a,c/2),\) where \(c/2<a<c,\) must have an atom at c, i.e., \(P(X^{0}=c)>0.\)
Proof
Assume \(P(X^{0}=c)=0\). Note that
By Lemma 12 we obtain that \(X^{0}\) is also optimal for player A in \(\varLambda _{c,c}(a,c/2)\), where \(c/2<a<c\). By Theorem 2, \(X^{0}=\frac{c-a}{a}U_{[0,2(c-a)]}+\frac{2a-c}{a}{\mathbf {1}}_{c}\), which means that \(P(X^{0}=c)>0\), a contradiction. \(\square \)
Express \(X^{0}=\alpha Z+(1-\alpha ){\mathbf {1}}_{c}\) with \(Z<c, E(Z)=z\) and \(\alpha =(c-a)/(c-z)\). From the above lemma \(\alpha <1\), which is equivalent to \(z<a\).
Let Y be some strategy of player B. Substituting \(\alpha =(c-a)/(c-z)\) in
and rearranging yields,
thus Z is optimal for player A in \(\varLambda _{c,c^{-}}(z,c/2)\).
Note that \(Z<c\). If \(c/2<z\), Lemma 14 guarantees that \(P(Z=c)>0\), a contradiction. Thus \(z\le c/2\). According to Theorem 17, since Z is optimal for player A, it holds that
for some \(t\in [0,z]\). However, Z has no atom at c, and so \(t=z\), and
thus
with \(0\le z\le c/2\). \(X^{0}=X(z)\in M\) and we are done.
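Since the display defining the family M is not reproduced in this excerpt, the sketch below assumes the form suggested by the proof: \(X(z)=\alpha \left[ (1-2z/c){\mathbf {1}}_{0}+(2z/c)U_{[0,c]}\right] +(1-\alpha ){\mathbf {1}}_{c}\) with \(\alpha =(c-a)/(c-z)\), which recovers \(X^{*}=X(c/2)\) as defined above. Under that assumption, an exact check that every member spends the full budget a:

```python
# Hedged sketch (the exact form of X(z) is inferred, not quoted): with
# alpha = (c - a)/(c - z), the mixture
#   X(z) = alpha*[(1 - 2z/c)*1_0 + (2z/c)*U_[0,c]] + (1 - alpha)*1_c
# has E(X(z)) = alpha*z + (1 - alpha)*c = a for every z in [0, c/2].

def mean_X(z, a, c):
    alpha = (c - a) / (c - z)
    inner = (1 - 2*z/c) * 0.0 + (2*z/c) * (c / 2)   # expectation of the z-part
    return alpha * inner + (1 - alpha) * c

a, c = 3.0, 4.0                  # a case with c/2 < a < c, as in Theorem 18
for z in (0.0, 1.0, 2.0):        # z ranges over [0, c/2]
    assert abs(mean_X(z, a, c) - a) < 1e-12
# at z = c/2 the weight alpha = (c - a)/(c/2) = 2(c - a)/c, matching X*
assert abs((c - a) / (c - c/2) - 2 * (c - a) / c) < 1e-12
print("X(z) budget checks pass")
```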
B.5 Proof of Theorem 19
Optimality of \(X^{*}\) and \(Y^{*}\) for players A and B, respectively, derives from Theorem 14.
We turn to prove that \(Y^{*}\) is unique. Let \(Y^{0}\) be an optimal strategy of player B in \(\varLambda _{c,c^{-}}(a,b)\), where \(b<c/2<a\). By Lemma 13, \(Y^{0}\) is an optimal strategy of player B in \(\varLambda _{c,c^{-}}(c/2,b)\). Since \(b<c/2\), we can use Theorem 16; thus \(Y^{0}=(1-2b/c){\mathbf {1}}_{0}+(2b/c)U_{[0,c]}=Y^{*}\), and \(Y^{*}\) is unique.
Turning to \(X^{*}\)’s uniqueness, let \(X^{0}\) be an optimal strategy of player A in \(\varLambda _{c,c^{-}}(a,b)\), where \(b<c/2<a<c\), and let Y be some strategy of player B in \(\varLambda _{c,c^{-}}(a,c/2)\). Define \(Y^{b}=(1-2b/c){\mathbf {1}}_{0}+(2b/c)Y\).
and so,
thus \(X^{0}\) is optimal for player A in \(\varLambda _{c,c^{-}}(a,c/2)\), and according to Theorem 18:
for some \(z\in [0,\frac{c}{2}]\).
Claim
\(P(X^{0}=0)=0\).
Proof
Express \(X^{0}=(1-\alpha ){\mathbf {1}}_{0}+\alpha W\), with \(W>0, E(W)=w\) and \(\alpha =a/w\). We wish to show that \(\alpha =1\). Assume the opposite, i.e., \(\alpha <1\).
Rearranging yields,
However, \(H(W,U_{[0,c]})\le 2w/c-1\) (Lemma 8, Appendix A); thus,
which yields,
Since \(\alpha <1\), we obtain \(c/2 \le b\), a contradiction. \(\square \)
Now, using the fact that \(P(X^{0}=0)=0\) together with (27) gives us \(z=c/2\). Thus \(X^{0}=X(c/2)=X^{*}\), and we are done.
B.6 Proof of Theorem 20
The optimality of \(X^{*}\) and the \(\varepsilon \)-optimality of \(Y^{*}=Y(b)\) derive from Theorem 14.
We begin with \(X^{*}\)’s uniqueness. Let \(X^{0}\) be some optimal strategy of player A, and express \(X^{0}=\alpha Z+(1-\alpha ){\mathbf {1}}_{c},\) with \(Z<c, E(Z)=z\) and \(\alpha =(c-a)/(c-z)\). It is sufficient to show that \(z=0\). Assume the opposite, i.e., \(z>0\).
Let Y be some strategy of player B. Then,
Rearranging and substituting \(\alpha =(c-a)/(c-z)\) yields,
thus Z is an optimal strategy of player A in \(\varLambda _{c,c^{-}}(z,b)\), where \(c/2<b\) and \(z>0\).
Note that in this case we have the following strong inequality,
Since \(Z<c\), repeating the steps of Lemma 12 gives us that for every strategy Y of player B in \(\varLambda _{c,c}(z,b)\),
This means that Z, a feasible strategy for player A in \(\varLambda _{c,c}(z,b)\), guarantees more than the value of the game, a contradiction. We conclude that \(z=0\), and so \(X^{0}=X^{*}\).
We turn to player B. Since a convex combination of \(\varepsilon \)-optimal strategies is \(\varepsilon \)-optimal, it is sufficient to show that every strategy in \(\varOmega _{\delta }\) is \(\varepsilon \)-optimal for a small enough \(\delta >0\).
We know that for \(c/2<b\), the strategy
is \(\varepsilon \)-optimal for player B in \(\varLambda _{c,c^{-}}(a,b)\) for a sufficiently small \(\delta >0\), and \(\hbox {val}\,\varLambda _{c,c^{-}}(a,b)=2a/c-1\). We conclude that for every \(w\in (c/2,b]\), for a sufficiently small \(\delta >0\) the strategy
is \(\varepsilon \)-optimal for player B in \(\varLambda _{c,c^{-}}(a,w)\), and \(\hbox {val}\,\varLambda _{c,c^{-}}(a,w)=\frac{2a}{c}-1\). Using Theorems 17 and 18, we may add \(w=c/2\) to this conclusion (in fact, when \(w=c/2\), W is optimal). Moreover, for a small enough \(\delta >0\), we can use Markov's inequality and obtain
Let \(Y(w)\in \varOmega _{\delta }\). Then,
with \(w\in [c/2,b]\) and \(\beta =(c-\delta -b)/(c-\delta -w)\). Let X be some strategy of player A in \(\varLambda _{c,c^{-}}(a,b)\). Then,
thus Y(w) is \(\varepsilon \)-optimal for player B when \(\delta >0\) is small enough and we are done.
Amir, N. Uniqueness of optimal strategies in Captain Lotto games. Int J Game Theory 47, 55–101 (2018). https://doi.org/10.1007/s00182-017-0578-6