
Uniqueness of optimal strategies in Captain Lotto games

Original Paper · International Journal of Game Theory

Abstract

We consider the class of two-person zero-sum allocation games known as Captain Lotto games (Hart in Int J Game Theory 45:37–61, 2016). These are Colonel Blotto type games in which the players have capacity constraints. We study the game both with non-strict and with strict capacity constraints. We show that in most cases, when optimal strategies exist, they are necessarily unique; when they do not exist, we characterize the pointwise limit of the cumulative distribution functions of \(\varepsilon \)-optimal strategies.


Notes

  1. In terms of the pointwise limit of their cumulative distribution function (CDF).

  2. \({\mathbf {1}}_{d}\) denotes the distribution that puts mass 1 on the point d.

  3. Throughout this paper we identify a random variable with its distribution. We denote the uniform distribution on the interval [c, d] by \(U_{[c,d]}.\) Also note: \(Y=\alpha Z+(1-\alpha )W\) denotes the mixture distribution of Z and W, and not their sum.

  4. \(\mathbb {R}\) denotes the set of real numbers, and \(\mathbb {N}\) denotes the set of natural numbers.

  5. Here and in the next case, \(X^{*}\) is an \(\varepsilon \)-optimal strategy, where \(c^{+}\) stands for \(c+\delta \) for a small enough \(\delta >0\).

  6. Clearly, the specific distribution of W does not matter, only its expectation does, and so one may well take \(W = {\mathbf {1}}_{E(W)} = {\mathbf {1}}_{c+\delta }\) (indeed \(H(W,Y) = H({\mathbf {1}}_{E(W)},Y)\), because \(Y\le c\) and \(W>c\)).

  7. We denote the cumulative distribution function (CDF) of a random variable X by \(F_{X}(t)\).

  8. Throughout, we do not keep track of the constants multiplying our \(\varepsilon \)’s, since they vanish when we take the limit.

  9. We write “\(\in [\alpha ,\beta ]\)” to mean that the value of \(F_{z_{0}}\) there must belong to the interval \([\alpha ,\beta ]\); in Lemma 6 we will show that in fact every \(\gamma \in [\alpha ,\beta ]\) and every \(z_{0}\in [0,c/2]\) are attained for some subsequence.

  10. \(conv\{\varOmega \}\) is the convex hull of the set \(\varOmega .\)

  11. As in Lemma 6, we will see in Remark 4 that every \(y\in [0,1-a/c]\) is attained for some subsequence.

  12. In case (v) \(Y^{*}\) is an \(\varepsilon \)-optimal strategy, where \(c^{-}\) stands for \(c-\delta \) for a small enough \(\delta >0\).

References

  • Bell RM, Cover TM (1980) Competitive optimality of logarithmic investment. Math Oper Res 5:161–166

  • Billingsley P (1986) Probability and measure, 2nd edn. Wiley, New York

  • Borel E (1921) La théorie du jeu et les équations intégrales à noyau symétrique. Comptes Rendus de l’Académie des Sciences 173:1304–1308 (Translated by Savage LJ (1953) The theory of play and integral equations with skew symmetric kernels. Econometrica 21:97–100)

  • Hart S (2008) Discrete Colonel Blotto and General Lotto games. Int J Game Theory 36:441–460

  • Hart S (2016) Allocation games with caps: from Captain Lotto to all-pay auctions. Int J Game Theory 45:37–61

  • Lizzeri A (1999) Budget deficit and redistributive politics. Rev Econ Stud 66:909–928

  • Myerson RB (1993) Incentives to cultivate minorities under alternative electoral systems. Am Polit Sci Rev 87:856–869

  • Roberson B (2006) The Colonel Blotto game. Econ Theory 29:1–24

  • Sahuguet N, Persico N (2006) Campaign spending regulations in a model of redistributive politics. Econ Theory 28:95–124


Author information


Correspondence to Nadav Amir.

Additional information

This research has received funding from the European Research Council under the European Community’s Seventh Framework Program (FP7/2007-2013)/ERC Grant Agreement No. [249159]. The author would like to express his deep gratitude to his advisor, Sergiu Hart. The author would also like to thank Nir Gadish and the anonymous referees for useful comments and discussions.

Appendices

Appendix A

In this appendix we prove useful lemmas and cite some known results.

Every nonnegative random variable X satisfies

$$\begin{aligned} E(X)=\int _{0}^{\infty }P(X\ge x)dx=\int _{0}^{\infty }P(X>x)dx \end{aligned}$$
(24)

(see, e.g., Billingsley 1986 (21.9)). Since \(H(X,Y)=P(X>Y)-P(Y>X)\), we easily obtain

$$\begin{aligned} 1-2P(Y\ge X)\le H(X,Y)\le 2P(X\ge Y)-1. \end{aligned}$$
(25)
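As a sanity check (an aside, not part of the original argument), inequality (25) can be verified exactly for finitely supported distributions, where all the probabilities are finite sums. The following Python sketch does so for random integer-valued samples; the helper names are ours:

```python
import random

def H(xs, ys):
    """H(X, Y) = P(X > Y) - P(Y > X) for X, Y uniform on the samples xs, ys."""
    n = len(xs) * len(ys)
    return sum((x > y) - (y > x) for x in xs for y in ys) / n

def P_ge(xs, ys):
    """P(X >= Y) for X, Y uniform on the samples xs, ys."""
    return sum(x >= y for x in xs for y in ys) / (len(xs) * len(ys))

random.seed(0)
for _ in range(100):
    xs = [random.randint(0, 5) for _ in range(8)]
    ys = [random.randint(0, 5) for _ in range(8)]
    h = H(xs, ys)
    # inequality (25): 1 - 2P(Y >= X) <= H(X, Y) <= 2P(X >= Y) - 1
    assert 1 - 2 * P_ge(ys, xs) <= h + 1e-12
    assert h <= 2 * P_ge(xs, ys) - 1 + 1e-12
```

The gap in either bound is exactly \(P(X=Y)\), so both bounds are attained precisely when ties have probability zero.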

Lemma 8

For every nonnegative random variable X and every \(t>0,\)

  1. $$\begin{aligned} H(U_{[0,2t]},X)\ge 1-\frac{E(X)}{t}. \end{aligned}$$

  2. \(X\le 2t\) if and only if

     $$\begin{aligned} H(U_{[0,2t]},X)=1-\frac{E(X)}{t}. \end{aligned}$$

Remark 6

Part (1) of this lemma is proved in Hart (2008) (see (10), (11)). Since the proof is short and we use this lemma repeatedly, we include its proof here.

Proof

$$\begin{aligned} P(X\ge U_{[0,2t]})= & {} \frac{1}{2t}\int _{0}^{2t}P(X\ge x)dx\le \frac{1}{2t}\int _{0}^{\infty }P(X\ge x)dx=\frac{E(X)}{2t}\\ H(U_{[0,2t]},X)\ge & {} 1-2P(X\ge U_{[0,2t]})\ge 1-2\frac{E(X)}{2t}=1-\frac{E(X)}{t}, \end{aligned}$$

which proves the first part of our lemma. For the second part we first assume that \(X\le 2t\). Thus,

$$\begin{aligned} H(U_{[0,2t]},X)= & {} P(U_{[0,2t]}>X)-P(U_{[0,2t]}<X)\\= & {} 1-P(X\ge U_{[0,2t]})-P(X>U_{[0,2t]})\\= & {} 1-\frac{1}{2t}\int _{0}^{2t}\left[ P(X\ge x)+P(X>x)\right] dx\\= & {} 1-\frac{1}{2t}\int _{0}^{\infty }\left[ P(X\ge x)+P(X>x)\right] dx\\= & {} 1-\frac{1}{2t}2E(X)=1-\frac{E(X)}{t}. \end{aligned}$$

For the other direction, assume \(H(U_{[0,2t]},X)=1-E(X)/t.\) Using this assumption and (24) gives us

$$\begin{aligned} 1-\frac{E(X)}{t}= & {} H(U_{[0,2t]},X)=1-\frac{1}{2t}\int _{0}^{2t}\left[ P(X\ge x)+P(X>x)\right] dx\\\ge & {} 1-\frac{1}{2t}\int _{0}^{2t}2P(X\ge x)dx\\= & {} 1-\frac{1}{t}\int _{0}^{2t}P(X\ge x)dx\\\ge & {} 1-\frac{1}{t}\int _{0}^{\infty }P(X\ge x)dx=1-\frac{E(X)}{t}. \end{aligned}$$

This implies that

$$\begin{aligned} \int _{2t}^{\infty }P(X\ge x)dx=0; \end{aligned}$$

thus \(P(X\ge x)=0\) for almost every \(x\in [2t,\infty ).\) Since \(P(X\ge x)\) is nonincreasing in x, it follows that \(P(X>x)=0\) for every \(x\in [2t,\infty ),\) i.e., \(X\le 2t.\) \(\square \)
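For a finitely supported X, \(H(U_{[0,2t]},X)\) has a closed form: an atom at \(x\le 2t\) contributes \(1-x/t\), and an atom at \(x>2t\) contributes \(-1\). This makes both parts of Lemma 8 easy to check numerically; the Python sketch below (ours, purely illustrative) confirms equality when \(X\le 2t\) and strict inequality otherwise:

```python
def H_uniform_vs_atoms(t, atoms):
    """Exact H(U_[0,2t], X) for an atomic X given as {value: probability}.

    For an atom x <= 2t: P(U > x) - P(U < x) = (2t - x)/(2t) - x/(2t) = 1 - x/t.
    For an atom x > 2t: the uniform loses surely, contributing -1.
    """
    return sum(p * ((1 - x / t) if x <= 2 * t else -1.0)
               for x, p in atoms.items())

t = 1.0
X1 = {0.5: 0.5, 1.5: 0.5}               # supported inside [0, 2t]: equality
E1 = sum(x * p for x, p in X1.items())
assert abs(H_uniform_vs_atoms(t, X1) - (1 - E1 / t)) < 1e-12

X2 = {0.0: 0.5, 3.0: 0.5}               # mass above 2t: strict inequality
E2 = sum(x * p for x, p in X2.items())
assert H_uniform_vs_atoms(t, X2) > 1 - E2 / t
```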

When both players have the same cap, i.e., \(c_{A}=c_{B}=c,\) and, without loss of generality, we assume \(a\ge b,\) the value of the \(\varLambda _{c,c}(a,b)\) game is \(1-b/a.\) This is part of Theorem 1, which is proved in Hart (2016). Using this yields the following:

Lemma 9

Let \(0<a\le c\). Then X is an optimal strategy of player A in \(\varLambda _{c,c}(a,b)\) for some \(0<b\le a\) if and only if X is an optimal strategy of player A in \(\varLambda _{c,c}(a,t)\) for all \(t\in [b,a].\)

Proof

Assume X is an optimal strategy of player A in \(\varLambda _{c,c}(a,b)\) for some \(0<b\le a\). Let \(t\in [b,a]\) and let Y be some strategy of player B in \(\varLambda _{c,c}(a,t).\) Define \(Y^{b}=(1-b/t){\mathbf {1}}_{0}+(b/t)Y\) with \(E(Y^{b})=b\). Since X is optimal we obtain

$$\begin{aligned}&\displaystyle 1-\frac{b}{a}\le H(X,Y^{b})=\left( 1-\frac{b}{t}\right) H(X,{\mathbf {1}}_{0})+\frac{b}{t}H(X,Y)\le 1-\frac{b}{t}+\frac{b}{t}H(X,Y)\\&\displaystyle 1-\frac{t}{a}=\frac{t}{b}\left( \frac{b}{t}-\frac{b}{a}\right) \le H(X,Y). \end{aligned}$$

Thus X is optimal in \(\varLambda _{c,c}(a,t).\)

The other direction is trivial. \(\square \)

Remark 7

Note that in Lemma 9 c may be greater than 2a (and even infinite).

Corollary 1

If X is optimal in \(\varLambda _{c,c}(a,b)\) for \(a\ge b\) then X is optimal in \(\varLambda _{c,c}(a,a).\)

Lemma 10

Let X be an optimal strategy of player A in \(\varLambda _{c,c}(a,b),\) and let \(Y=\alpha {\mathbf {1}}_{t_{1}}+(1-\alpha ){\mathbf {1}}_{t_{2}}\) be some strategy of player B with \(0\le t_{1}<b<t_{2}\le c\) and \(\alpha =(t_{2}-b)/(t_{2}-t_{1}).\) Then

$$\begin{aligned} \frac{H(X,{\mathbf {1}}_{t_{1}})-v}{t_{1}-b}\le \frac{H(X,{\mathbf {1}}_{t_{2}})-v}{t_{2}-b} \end{aligned}$$

where \(v=\hbox {val}\varLambda _{c,c}(a,b).\)

Proof

X is optimal; thus

$$\begin{aligned} v\le H(X,Y)=\frac{t_{2}-b}{t_{2}-t_{1}}H(X,{\mathbf {1}}_{t_{1}})+\frac{b-t_{1}}{t_{2}-t_{1}}H(X,{\mathbf {1}}_{t_{2}}). \end{aligned}$$

Multiplying by \((t_{2}-t_{1})/[(t_{2}-b)(b-t_{1})]\) yields

$$\begin{aligned} \frac{t_{2}-t_{1}}{(t_{2}-b)(b-t_{1})}v\le \frac{H(X,{\mathbf {1}}_{t_{1}})}{b-t_{1}}+\frac{H(X,{\mathbf {1}}_{t_{2}})}{t_{2}-b} \end{aligned}$$

and so

$$\begin{aligned} \frac{H(X,{\mathbf {1}}_{t_{1}})-v}{t_{1}-b}\le \frac{H(X,{\mathbf {1}}_{t_{2}})-v}{t_{2}-b}. \end{aligned}$$

\(\square \)
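As an illustration of Lemma 10 (not part of the proof), take \(X=U_{[0,2a]}\), which is optimal for the stronger player when the caps do not bind (cf. Hart 2008); then \(H(X,{\mathbf {1}}_{t})=1-t/a\) for \(t\in [0,2a]\), and both difference quotients equal \(-1/a\), so the inequality holds with equality. A Python sketch with arbitrary parameter values (ours):

```python
def H_unif_vs_atom(a, t):
    """Exact H(U_[0,2a], 1_t) = (2a - t)/(2a) - t/(2a) = 1 - t/a for t in [0, 2a]."""
    assert 0 <= t <= 2 * a
    return 1 - t / a

a, b = 1.0, 0.6
v = 1 - b / a                       # val of the game when X = U_[0,2a] is optimal
for t1, t2 in [(0.0, 1.5), (0.2, 2.0), (0.5, 0.8)]:
    # t1 < b < t2, as in Lemma 10
    lhs = (H_unif_vs_atom(a, t1) - v) / (t1 - b)
    rhs = (H_unif_vs_atom(a, t2) - v) / (t2 - b)
    # both slopes equal -1/a, so the lemma holds with equality here
    assert abs(lhs - rhs) < 1e-12 and abs(lhs - (-1 / a)) < 1e-12
```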

Finally, we cite a version of Helly’s selection theorem, and prove a useful lemma.

Theorem 21

(Helly’s selection theorem) For every sequence \(F_{n}\) of distribution functions there exists a subsequence \(F_{n_{k}}\) and a nondecreasing,  right-continuous function F such that \(\lim _{k\rightarrow \infty }F_{n_{k}}(x)=F(x)\) at continuity points x of F.

(see, e.g., Billingsley 1986 (Theorem 25.9)). Note that since F is nondecreasing, the set of points where F is not continuous is at most countable. An application of the diagonal method gives a subsequence of \(F_{n}\) that converges for every \(t\in \mathbb {R}\). We also note that it is sufficient to assume that \(F_{n}\) is a sequence of nondecreasing functions that are uniformly bounded, i.e., there exists some \(0<M\in \mathbb {R}\) with \(|F_{n}(x)|<M\) for every \(x\in \mathbb {R}\) and every \(n\in \mathbb {N}\).

Lemma 11

Let \(\{X_{n}\}_{n\in \mathbb {N}}\) be a sequence of nonnegative random variables,  and assume \(\lim _{n\rightarrow \infty }E(X_{n})=0\). Then for every \(t>0, \lim _{n\rightarrow \infty }F_{X_{n}}(t)=1\).

Proof

Assume the opposite. Then there exists \(t_{0}>0, 0< \varepsilon \le 1\) and a subsequence \(\{X_{n_{k}}\}_{k\in \mathbb {N}}\) of \(\{X_{n}\}_{n \in \mathbb {N}}\), with \(\lim _{k\rightarrow \infty }F_{X_{n_{k}}}(t_{0}) = 1 - \varepsilon \). Thus \(\lim _{k\rightarrow \infty }P(X_{n_{k}}>t_{0}) = \varepsilon > 0 \).

By (24) and Fatou’s lemma we have,

$$\begin{aligned} \liminf _{k\rightarrow \infty } E(X_{n_{k}})= & {} \liminf _{k\rightarrow \infty } \int _{0}^{\infty } P(X_{n_{k}}>t) dt \ge \int _{0}^{\infty } \liminf _{k\rightarrow \infty } P(X_{n_{k}}>t) dt \\= & {} \int _{0}^{t_{0}} \liminf _{k\rightarrow \infty } P(X_{n_{k}}>t) dt + \int _{t_{0}}^{\infty } \liminf _{k\rightarrow \infty } P(X_{n_{k}}>t) dt\\\ge & {} \int _{0}^{t_{0}} \liminf _{k\rightarrow \infty } P(X_{n_{k}}>t_{0}) dt = t_{0} \varepsilon > 0. \end{aligned}$$

However, \(\lim _{n\rightarrow \infty }E(X_{n})=0\), a contradiction. \(\square \)
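A concrete instance of Lemma 11 (our illustration, not from the paper): take \(P(X_{n}=n)=1/n^{2}\) and \(P(X_{n}=0)=1-1/n^{2}\), so \(E(X_{n})=1/n\rightarrow 0\) while the supports are unbounded; at any fixed \(t>0\) the CDF still tends to 1. In Python:

```python
def cdf(n, t):
    """CDF of X_n at t, where P(X_n = n) = 1/n**2 and P(X_n = 0) = 1 - 1/n**2,
    so that E(X_n) = 1/n -> 0 while the support of the sequence is unbounded."""
    if t < 0:
        return 0.0
    return 1.0 if t >= n else 1 - 1 / n**2

t0 = 0.5
vals = [cdf(n, t0) for n in (10, 100, 1000)]
assert vals == sorted(vals)       # F_{X_n}(t0) increases...
assert 1 - vals[-1] < 1e-5        # ...toward 1, as Lemma 11 asserts
```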

Corollary 2

Let \(c>0\) and let \(\{X_{n}\}_{n\in \mathbb {N}}\) be a sequence of random variables with \(X_{n} \ge c\) for every \(n \in \mathbb {N}\). Assume that \(\lim _{n\rightarrow \infty }E(X_{n})=c\). Then for every \(t>c, \lim _{n\rightarrow \infty }F_{X_{n}}(t)=1\). (Apply Lemma 11 to \(X_{n}-c\).)

Appendix B

In this appendix we prove Theorems 15–20. We begin with a lemma.

Lemma 12

If X is an optimal strategy for player A in \(\varLambda _{c,c^{-}}(a,b)\) with \(P(X=c)=0,\) and

$$\begin{aligned} \hbox {val}\varLambda _{c,c}(a,b) \le \hbox {val}\varLambda _{c,c^{-}}(a,b), \end{aligned}$$

then X is an optimal strategy for player A in \(\varLambda _{c,c}(a,b)\).

Proof

Let \(X^{0}\) be an optimal strategy of player A in \(\varLambda _{c,c^{-}}(a,b)\) with \(P(X^{0}=c)=0\), and let Y be some strategy of player B in \(\varLambda _{c,c}(a,b)\). Express \(Y=\gamma W+(1-\gamma ){\mathbf {1}}_{c}\), with \(E(W)=w\) and \(\gamma =(c-b)/(c-w)\).

If \(\gamma =1\), then optimality in \(\varLambda _{c,c^{-}}(a,b)\) gives us,

$$\begin{aligned} \hbox {val}\varLambda _{c,c}(a,b) \le \hbox {val}\varLambda _{c,c^{-}}(a,b) \le H(X^{0},Y). \end{aligned}$$

Assume \(\gamma <1\). For every \(\delta >0\) small enough, define \(Y_{\delta }=\gamma _{\delta }W+(1-\gamma _{\delta }){\mathbf {1}}_{c-\delta }\) with \(\gamma _{\delta }=(c-b-\delta )/(c-w-\delta )\). By \(X^{0}\)’s optimality,

$$\begin{aligned} \hbox {val}\varLambda _{c,c^{-}}(a,b)\le & {} H(X^{0},Y_{\delta })= \gamma _{\delta }H(X^{0},W)+(1-\gamma _{\delta })H(X^{0},{\mathbf {1}}_{c-\delta })\\\le & {} \gamma _{\delta }H(X^{0},W)+(1-\gamma _{\delta })[2P(X^{0}\ge c-\delta )-1]. \end{aligned}$$

Taking the \(\delta \rightarrow 0^{+}\) limit, and remembering that \(P(X^{0}\ge t)\) is left continuous yields,

$$\begin{aligned} \hbox {val}\varLambda _{c,c^{-}}(a,b) \le \gamma H(X^{0},W)+(1-\gamma )[2P(X^{0}\ge c)-1]=H(X^{0},Y) \end{aligned}$$

(the last equality is given by Y’s definition and by \(P(X^{0}=c)=0\)). Since

$$\begin{aligned} \hbox {val}\varLambda _{c,c}(a,b) \le \hbox {val}\varLambda _{c,c^{-}}(a,b), \end{aligned}$$

\(X^{0}\) is an optimal strategy of player A in \(\varLambda _{c,c}(a,b)\). \(\square \)

B.1 Proof of Theorem 15

For both players, optimality follows from Theorem 14, and uniqueness follows from the same proof as in Hart (2008, Appendix).

B.2 Proof of Theorem 16

For both players, optimality follows from Theorem 14. The uniqueness of \(Y^{*}\) derives from its uniqueness in \(\varLambda _{2a,2a}(a,b)\). We are left with the uniqueness of \(X^{*}\).

Let \(X^{0}\) be an optimal strategy of player A. We express \(X^{0}=\alpha Z+(1-\alpha ){\mathbf {1}}_{c}\), where \(0\le Z<c\) with \(E(Z)=z\), and \(\alpha =(c-a)/(c-z)\). We wish to show that \(\alpha =1\). Assume \(\alpha <1\), and notice \(\alpha<1\Leftrightarrow z<a\).

If \(z<b\), then by Theorem 15 \(Y=U_{[0,2b]}\) is an optimal strategy of player B in \(\varLambda _{c,c^{-}}(z,b)\). Thus,

$$\begin{aligned}&\displaystyle 1-\frac{b}{a}\le H(X^{0},U_{[0,2b]})=1-\alpha +\alpha H(Z,U_{[0,2b]})\le 1-\alpha +\alpha \left( \frac{z}{b}-1\right) \\&\displaystyle \left( 2-\frac{z}{b}\right) \alpha \le \frac{b}{a}. \end{aligned}$$

By substituting \(\alpha =(c-a)/(c-z), c=2a\), and multiplying by \(ab(c-z)\) we obtain: \((2b-z)a^{2}\le b^{2}(2a-z)\).

$$\begin{aligned}&(2b-z)a^{2}\le b^{2}(2a-z)=b^{2}(2a-2b)+b^{2}(2b-z)\\&(2b-z)(a^{2}-b^{2})\le 2b^{2}(a-b)\\&(2b-z)(a+b)\le 2b^{2}\\&(2b-z)a+2b^{2}-zb\le 2b^{2}\\&2ab\le z(a+b). \end{aligned}$$

Since \(z<b<a\), we have \(2ab\le z(a+b)<2ab\), a contradiction. We conclude that \(z\ge b\).
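The contradiction above can be spot-checked numerically: by the chain of equivalences, \((2b-z)a^{2}\le b^{2}(2a-z)\) is equivalent to \(2ab\le z(a+b)\), which fails whenever \(0\le z<b<a\). A quick Python check over random parameters (illustrative only; the sampling ranges are ours):

```python
import random

random.seed(1)
for _ in range(1000):
    a = random.uniform(0.5, 1.0)
    b = a * random.uniform(0.1, 0.9)    # 0 < b < a
    z = b * random.uniform(0.0, 0.95)   # 0 <= z < b
    # equivalent (by the derivation above) to 2ab > z(a+b), which holds
    # since z < b < a implies z(a+b) < b(a+b) < 2ab:
    assert (2 * b - z) * a**2 > b**2 * (2 * a - z)
```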

Let \(Y=\beta Z+(1-\beta ){\mathbf {1}}_{0}\) with \(\beta =b/z\) (we use the same Z as in player A’s strategy \(X^{0}\)). Y is indeed allowed for player B since \(0\le Z<c\) and \(z\ge b\). Now,

$$\begin{aligned} 1-\frac{b}{a}\le H(X^{0},Y)=\alpha (1-\beta )H(Z,{\mathbf {1}}_{0})+1-\alpha \le \alpha (1-\beta )+1-\alpha . \end{aligned}$$

Rearranging and substituting \(\alpha =(c-a)/(c-z), \beta = b/z\) yields,

$$\begin{aligned} \frac{c-a}{c-z}\frac{b}{z}=\alpha \beta \le \frac{b}{a}, \end{aligned}$$

and so,

$$\begin{aligned} \left( \frac{c}{2}\right) ^{2}=(c-a)a\le (c-z)z. \end{aligned}$$

However, \(z<a=\frac{c}{2}\), which leads to a contradiction. Thus, \(\alpha =1\) and \(X^{0}\) satisfies \(0\le X^{0}<c\), i.e., \(P(X^{0}\ge c)=0\).

Notice that

$$\begin{aligned} \hbox {val}\varLambda _{c,c^{-}}(a,b) = \hbox {val}\varLambda _{c,c}(a,b) = 1-\frac{b}{a}; \end{aligned}$$

thus we can obtain by Lemma 12 that \(X^{0}\) is optimal for player A in \(\varLambda _{c,c}(a,b)\). Since \(X^{*}\) is the unique optimal strategy for player A in \(\varLambda _{c,c}(a,b)\) (Theorem 2), we have \(X^{0}=X^{*}\).

B.3 Proof of Theorem 17

Denote \(X(a)=X^{*}=(1-2a/c){\mathbf {1}}_{0}+(2a/c)U_{[0,c]}\in M\). By Theorem 14 \(X^{*}\) and \(Y^{*}\) are optimal for players A and B, respectively. As in Theorem 16, \(Y^{*}\)’s uniqueness derives from its uniqueness in \(\varLambda _{c,c}(a,c/2)\), and so we turn to prove that M is the set of all optimal strategies of player A.

Let \(X(z)\in M\), and let Y be some strategy of player B.

$$\begin{aligned} H(X(z),Y)=\frac{c-a}{c-z}H\left( \left( 1-\frac{2z}{c}\right) {\mathbf {1}}_{0}+ \frac{2z}{c}U_{[0,c]},Y\right) +\frac{a-z}{c-z}H({\mathbf {1}}_{c},Y). \end{aligned}$$
(26)

Since \(Y<c\) we have \(H({\mathbf {1}}_{c},Y)=1\). Moreover, the strategy \((1-2z/c){\mathbf {1}}_{0}+(2z/c)U_{[0,c]}\) is optimal for player A in \(\varLambda _{c,c^{-}}(z,c/2)\), where \(z\le c/2\). Applying these to (26) yields,

$$\begin{aligned} H(X(z),Y)\ge \frac{c-a}{c-z}\left( \frac{2z}{c}-1\right) +\frac{a-z}{c-z}=\frac{2a}{c}-1. \end{aligned}$$

Thus, every strategy \(X(z)\in M\) is optimal for player A.

We now wish to show that every optimal strategy of player A equals \(X(z)\in M\) for some \(0\le z\le a.\) Note that for \(a\le b=c/2\),

$$\begin{aligned} \hbox {val}\varLambda _{c,c^{-}}\left( a,\frac{c}{2}\right) = \hbox {val}\varLambda _{c,c}\left( a,\frac{c}{2}\right) = \frac{2a}{c}-1. \end{aligned}$$

Let \(X^{0}\) be some optimal strategy of player A. If \(P(X^{0}=c)=0\), by Lemma 12 and Theorem 2 we obtain, \(X^{0}=(1-2a/c){\mathbf {1}}_{0}+(2a/c)U_{[0,c]}=X(a)\in M\). If \(P(X^{0}=c)>0\), express \(X^{0}=\alpha Z+(1-\alpha ){\mathbf {1}}_{c}\) with \(Z<c, E(Z)=z\) and \(\alpha =(c-a)/(c-z)\). Since \(P(X^{0}=c)>0\), we have \(z<a\). Let Y be some strategy of player B.

$$\begin{aligned} 1-2\frac{c-a}{c}\le H(X^{0},Y) = \alpha H(Z,Y)+1-\alpha . \end{aligned}$$

Rearranging and substituting \(\alpha =(c-a)/(c-z)\) yields,

$$\begin{aligned} 1-2\frac{c-z}{c}\le H(Z,Y); \end{aligned}$$

thus Z is an optimal strategy of player A in \(\varLambda _{c,c^{-}}(z,c/2)\) with \(z<a\le c/2\), and \(P(Z=c)=0\). By Lemma 12 and Theorem 2 we obtain \(Z=(1-2z/c){\mathbf {1}}_{0}+(2z/c)U_{[0,c]}\), and so

$$\begin{aligned} X^{0}=\frac{c-a}{c-z}\left[ \left( 1-\frac{2z}{c}\right) {\mathbf {1}}_{0}+\frac{2z}{c}U_{[0,c]}\right] +\frac{a-z}{c-z}{\mathbf {1}}_{c}=X(z)\in M, \end{aligned}$$

and we are done.
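The guarantee \(H(X(z),Y)\ge 2a/c-1\) can be verified exactly against atomic strategies Y, since \(H(X(z),{\mathbf {1}}_{y})\) has a closed form. A Python sketch (ours; the parameter values are arbitrary, with \(a\le c/2\) and \(E(Y)=c/2\) as Theorem 17 requires):

```python
def H_Xz_vs_atom(c, a, z, y):
    """Exact H(X(z), 1_y) for 0 <= y < c, where
    X(z) = alpha*[(1 - 2z/c) 1_0 + (2z/c) U_[0,c]] + (1 - alpha) 1_c
    and alpha = (c - a)/(c - z)."""
    alpha = (c - a) / (c - z)
    h = 1 - alpha                                   # the atom at c beats y < c
    h += alpha * (2 * z / c) * (1 - 2 * y / c)      # uniform part: P(U > y) - P(U < y)
    if y > 0:                                       # the atom at 0 loses unless y = 0 (a tie)
        h -= alpha * (1 - 2 * z / c)
    return h

c, a = 2.0, 0.7                                     # a <= c/2, as in Theorem 17
for z in (0.0, 0.3, 0.7):                           # X(z) in M, 0 <= z <= a
    Y = {0.5: 0.5, 1.5: 0.5}                        # atomic Y < c with E(Y) = c/2
    hXY = sum(q * H_Xz_vs_atom(c, a, z, y) for y, q in Y.items())
    assert hXY >= 2 * a / c - 1 - 1e-12             # the value 2a/c - 1 is guaranteed
```

For Y with no atom at 0 the bound is attained with equality, consistent with \(\hbox {val}\varLambda _{c,c^{-}}(a,c/2)=2a/c-1\).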

B.4 Proof of Theorem 18

Denote \(X^{*}=X(c/2)=\frac{2(c-a)}{c}U_{[0,c]}+\frac{2a-c}{c}{\mathbf {1}}_{c}\). The optimality of \(X^{*}\) and \(Y^{*}=U_{[0,c]}\) for players A and B, respectively, derives from Theorem 14.

We turn to prove that \(Y^{*}\) is unique, and begin with a lemma.

Lemma 13

Let \(Y^{0}\) be an optimal strategy of player B in \(\varLambda _{c,c^{-}}(a,b),\) where \(b\le c/2<a\). Then for every X of player A in \(\varLambda _{c,c^{-}}(c/2,b)\) we have

$$\begin{aligned} H(X,Y^{0})\le 1-\frac{2b}{c}. \end{aligned}$$

Proof

Let \(Y^{0}\) be an optimal strategy of player B in \(\varLambda _{c,c^{-}}(a,b)\), where \(b\le c/2<a\), and let X be some strategy of player A in \(\varLambda _{c,c^{-}}(c/2,b)\). Define \(X^{a}=\alpha X+(1-\alpha ){\mathbf {1}}_{c}\) with \(\alpha =2(c-a)/c\), and so \(E(X^{a})=a\). Since \(Y^{0}\) is optimal we have

$$\begin{aligned} 1-2\frac{c-a}{c}\frac{2b}{c}\ge H(X^{a},Y^{0})=\alpha H(X,Y^{0})+1-\alpha . \end{aligned}$$

Rearranging and substituting \(\alpha =2(c-a)/c\) yields

$$\begin{aligned} H(X,Y^{0})\le 1-\frac{2b}{c}. \end{aligned}$$

\(\square \)

Let \(Y^{0}\) be an optimal strategy of player B. Using the above lemma gives us that \(Y^{0}\) is an optimal strategy of player B in \(\varLambda _{c,c^{-}}(c/2,c/2)\). According to Theorem 17, \(Y^{0}=U_{[0,c]}=Y^{*}\), and so \(Y^{*}\) is unique.

Turning to player A, let \(X(z)\in M\). As we saw for \(z=c/2, X(c/2)=X^{*}\) is an optimal strategy of player A. Assume \(z<c/2\). According to Theorem 17 the strategy \((1-2z/c){\mathbf {1}}_{0}+(2z/c)U_{[0,c]}\) is optimal for player A in \(\varLambda _{c,c^{-}}(z,c/2)\); thus,

$$\begin{aligned} H(X(z),Y)= & {} \frac{c-a}{c-z}H\left( \left( 1-\frac{2z}{c}\right) {\mathbf {1}}_{0}+ \frac{2z}{c}U_{[0,c]},Y\right) +\frac{a-z}{c-z}H({\mathbf {1}}_{c},Y)\\\ge & {} \frac{c-a}{c-z}\left( \frac{2z}{c}-1\right) +\frac{a-z}{c-z}=\frac{2a}{c}-1. \end{aligned}$$

And so, every strategy \(X(z)\in M\) is optimal.

Now, let \(X^{0}\) be some optimal strategy of player A. We wish to show that \(X^{0}=X(z)\) for some \(0\le z\le c/2\).

Lemma 14

An optimal strategy of player A in \(\varLambda _{c,c^{-}}(a,c/2),\) where \(c/2<a<c,\) must have an atom at c,  i.e.,  \(P(X^{0}=c)>0.\)

Proof

Assume \(P(X^{0}=c)=0\). Note that

$$\begin{aligned} \hbox {val}\varLambda _{c,c}\left( a,\frac{c}{2}\right) = \hbox {val}\varLambda _{c,c^{-}}\left( a,\frac{c}{2}\right) = 1-2\frac{c-a}{c}. \end{aligned}$$

By Lemma 12 we obtain that \(X^{0}\) is also optimal for player A in \(\varLambda _{c,c}(a,c/2)\), where \(c/2<a<c\). By Theorem 2, \(X^{0}=\frac{c-a}{a}U_{[0,2(c-a)]}+\frac{2a-c}{a}{\mathbf {1}}_{c}\), which means that \(P(X^{0}=c)>0\), a contradiction. \(\square \)
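As a quick consistency check (ours, illustrative only) of the Theorem 2 strategy appearing in the proof: for any \(c/2<a<c\), the weights form a proper mixture, the expectation is a, and the atom at c is strictly positive, which is exactly the contradiction used above. In Python, with arbitrary values:

```python
c, a = 2.0, 1.4                 # c/2 < a < c
w_u = (c - a) / a               # weight of U_[0, 2(c-a)], whose mean is c - a
w_c = (2 * a - c) / a           # weight of the atom at c
assert abs(w_u + w_c - 1) < 1e-12   # a proper mixture
mean = w_u * (c - a) + w_c * c
assert abs(mean - a) < 1e-12        # E(X^0) = a
assert w_c > 0                      # the atom at c is strictly positive
```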

Express \(X^{0}=\alpha Z+(1-\alpha ){\mathbf {1}}_{c}\) with \(Z<c, E(Z)=z\) and \(\alpha =(c-a)/(c-z)\). From the above lemma \(\alpha <1\), which is equivalent to \(z<a\).

Let Y be some strategy of player B. Substituting \(\alpha =(c-a)/(c-z)\) in

$$\begin{aligned} 1-2\frac{c-a}{c} \le H(X^{0},Y)= \alpha H(Z,Y)+1-\alpha , \end{aligned}$$

and rearranging yields,

$$\begin{aligned} 1-2\frac{c-z}{c} \le H(Z,Y); \end{aligned}$$

thus Z is optimal for player A in \(\varLambda _{c,c^{-}}(z,c/2)\).

Note that \(Z<c\). If \(c/2<z\), Lemma 14 guarantees that \(P(Z=c)>0\), a contradiction. Thus \(z\le c/2\). According to Theorem 17, since Z is optimal for player A, it holds that

$$\begin{aligned} Z=\frac{c-z}{c-t}\left[ \left( 1-\frac{2t}{c}\right) {\mathbf {1}}_{0}+\frac{2t}{c}U_{[0,c]}\right] +\frac{z-t}{c-t}{\mathbf {1}}_{c}, \end{aligned}$$

for some \(t\in [0,z]\). However, Z has no atom at c, and so \(t=z\), and

$$\begin{aligned} Z=\left( 1-\frac{2z}{c}\right) {\mathbf {1}}_{0}+\frac{2z}{c}U_{[0,c]}; \end{aligned}$$

thus

$$\begin{aligned} X^{0}=\frac{c-a}{c-z}\left[ \left( 1-\frac{2z}{c}\right) {\mathbf {1}}_{0}+\frac{2z}{c}U_{[0,c]}\right] +\frac{a-z}{c-z}{\mathbf {1}}_{c}=X(z) \end{aligned}$$

with \(0\le z\le c/2\). \(X^{0}=X(z)\in M\) and we are done.

B.5 Proof of Theorem 19

Optimality of \(X^{*}\) and \(Y^{*}\) for players A and B, respectively, derives from Theorem 14.

We turn to prove that \(Y^{*}\) is unique. Let \(Y^{0}\) be an optimal strategy of player B in \(\varLambda _{c,c^{-}}(a,b)\), where \(b<c/2<a\). By Lemma 13, \(Y^{0}\) is an optimal strategy of player B in \(\varLambda _{c,c^{-}}(c/2,b)\). Since \(b<c/2\), we can use Theorem 16; thus \(Y^{0}=(1-2b/c){\mathbf {1}}_{0}+(2b/c)U_{[0,c]}=Y^{*}\), and \(Y^{*}\) is unique.

Turning to \(X^{*}\)’s uniqueness, let \(X^{0}\) be an optimal strategy of player A in \(\varLambda _{c,c^{-}}(a,b)\), where \(b<c/2<a<c\), and let Y be some strategy of player B in \(\varLambda _{c,c^{-}}(a,c/2)\). Define \(Y^{b}=(1-2b/c){\mathbf {1}}_{0}+(2b/c)Y\).

$$\begin{aligned} 1-2\frac{c-a}{c}\frac{2b}{c}\le H(X^{0},Y^{b})= & {} \left( 1-\frac{2b}{c}\right) H(X^{0},{\mathbf {1}}_{0})+\frac{2b}{c}H(X^{0},Y)\\\le & {} 1-\frac{2b}{c}+\frac{2b}{c}H(X^{0},Y), \end{aligned}$$

and so,

$$\begin{aligned} 1-2\frac{c-a}{c}\le H(X^{0},Y); \end{aligned}$$

thus \(X^{0}\) is optimal for player A in \(\varLambda _{c,c^{-}}(a,c/2)\), and according to Theorem 18:

$$\begin{aligned} X^{0}=X(z)=\frac{c-a}{c-z}\left[ \left( 1-\frac{2z}{c}\right) {\mathbf {1}}_{0}+\frac{2z}{c}U_{[0,c]}\right] +\frac{a-z}{c-z}{\mathbf {1}}_{c} \end{aligned}$$
(27)

for some \(z\in [0,\frac{c}{2}]\).

Claim

\(P(X^{0}=0)=0\).

Proof

Express \(X^{0}=(1-\alpha ){\mathbf {1}}_{0}+\alpha W\), with \(W>0, E(W)=w\) and \(\alpha =a/w\). We wish to show that \(\alpha =1\). Assume the opposite, i.e., \(\alpha <1\).

$$\begin{aligned} 1-2\frac{c-a}{c}\frac{2b}{c}\le H(X^{0},Y^{*}) = (1-\alpha )\frac{2b}{c}(-1)+\left( 1-\frac{2b}{c}\right) \alpha +\alpha \frac{2b}{c}H(W,U_{[0,c]}). \end{aligned}$$

Rearranging yields,

$$\begin{aligned} \frac{c}{2b}(1-\alpha )+\left( \frac{2a}{c}-1\right) \le \alpha H(W,U_{[0,c]}). \end{aligned}$$

However, \(H(W,U_{[0,c]})\le 2w/c-1\) (Lemma 8, Appendix A); thus,

$$\begin{aligned} \frac{c}{2b}(1-\alpha )+\left( \frac{2a}{c}-1\right) \le \alpha \left( \frac{2w}{c}-1\right) =\frac{2a}{c}-\alpha , \end{aligned}$$

which yields,

$$\begin{aligned} \frac{c}{2b}(1-\alpha )\le 1-\alpha . \end{aligned}$$

Since \(\alpha <1\), we obtain \(c/2 \le b\), a contradiction. \(\square \)

Now, using the fact that \(P(X^{0}=0)=0\) together with (27) gives us \(z=c/2\). Thus \(X^{0}=X(c/2)=X^{*}\), and we are done.

B.6 Proof of Theorem 20

The optimality of \(X^{*}\) and the \(\varepsilon \)-optimality of \(Y^{*}=Y(b)\) derive from Theorem 14.

We begin with \(X^{*}\)’s uniqueness. Let \(X^{0}\) be some optimal strategy of player A, and express \(X^{0}=\alpha Z+(1-\alpha ){\mathbf {1}}_{c},\) with \(Z<c, E(Z)=z\) and \(\alpha =(c-a)/(c-z)\). It is sufficient to show that \(z=0\). Assume the opposite, i.e., \(z>0\).

Let Y be some strategy of player B. Then,

$$\begin{aligned} 1-2\frac{c-a}{c}\le H(X^{0},Y) = \alpha H(Z,Y)+ 1 - \alpha . \end{aligned}$$

Rearranging and substituting \(\alpha =(c-a)/(c-z)\) yields,

$$\begin{aligned} 1-2\frac{c-z}{c} \le H(Z,Y); \end{aligned}$$

thus Z is an optimal strategy of player A in \(\varLambda _{c,c^{-}}(z,b)\), where \(c/2<b\) and \(z>0\).

Note that in this case we have the following strong inequality,

$$\begin{aligned} \hbox {val}\varLambda _{c,c}(z,b) = \frac{z-b}{\max \{z,b\}} < 1-2\frac{c-z}{c} = \hbox {val}\varLambda _{c,c^{-}}(z,b). \end{aligned}$$

Since \(Z<c\), repeating the steps of Lemma 12 gives us that for every strategy Y of player B in \(\varLambda _{c,c}(z,b)\),

$$\begin{aligned} \hbox {val}\varLambda _{c,c}(z,b) = \frac{z-b}{\max \{z,b\}} < H(Z,Y). \end{aligned}$$

That is, Z, which is a feasible strategy for player A in \(\varLambda _{c,c}(z,b),\) guarantees strictly more than the value of the game, a contradiction. We conclude that \(z=0\), and \(X^{0}=X^{*}\).
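The strict inequality between the two values reduces to \((z-b)/\max \{z,b\}<2z/c-1\) for \(b>c/2\) and \(0<z<c\), and is easy to confirm numerically. A Python spot-check (illustrative; the sampling ranges are ours):

```python
import random

random.seed(2)
c = 1.0
for _ in range(1000):
    b = random.uniform(0.51 * c, c)     # b > c/2
    z = random.uniform(0.01 * c, c)     # 0 < z < c (z < a <= c in the proof)
    lhs = (z - b) / max(z, b)           # val of Lambda_{c,c}(z, b)
    rhs = 1 - 2 * (c - z) / c           # val of Lambda_{c,c^-}(z, b)
    assert lhs < rhs                    # strict, as the proof requires
```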

We turn to player B. Since a convex combination of \(\varepsilon \)-optimal strategies is \(\varepsilon \)-optimal, it is sufficient to show that every strategy in \(\varOmega _{\delta }\) is \(\varepsilon \)-optimal for a small enough \(\delta >0\).

We know that for \(c/2<b\), the strategy

$$\begin{aligned} Y^{*}=Y(b)=\frac{c-\delta -b}{b-\delta }U_{[0,2(c-b)]}+\frac{2b-c}{b-\delta }{\mathbf {1}}_{c-\delta } \end{aligned}$$

is \(\varepsilon \)-optimal for player B in \(\varLambda _{c,c^{-}}(a,b)\) for a sufficiently small \(\delta >0\), and \(\hbox {val}\varLambda _{c,c^{-}}(a,b)=2a/c-1\). We conclude that for every \(w\in (c/2,b]\), for a sufficiently small \(\delta >0\) the strategy

$$\begin{aligned} W=\frac{c-\delta -w}{w-\delta }U_{[0,2(c-w)]}+\frac{2w-c}{w-\delta }{\mathbf {1}}_{c-\delta } \end{aligned}$$

is \(\varepsilon \)-optimal for player B in \(\varLambda _{c,c^{-}}(a,w)\), and \(\hbox {val}\varLambda _{c,c^{-}}(a,w)=2a/c-1\). Using Theorems 17 and 18, we may add \(w=c/2\) to this conclusion (in fact, when \(w=c/2\), W is optimal). Moreover, for a small enough \(\delta >0\), we can use Markov’s inequality and obtain

$$\begin{aligned} P(X\ge c-\delta )\le \frac{a}{c-\delta } \le \frac{a}{c}+\frac{\varepsilon }{2}. \end{aligned}$$

Let \(Y(w)\in \varOmega _{\delta }\). Then,

$$\begin{aligned} Y(w)=\beta W+(1-\beta ){\mathbf {1}}_{c-\delta }, \end{aligned}$$

with \(w\in [c/2,b]\) and \(\beta =(c-\delta -b)/(c-\delta -w)\). Let X be some strategy of player A in \(\varLambda _{c,c^{-}}(a,b)\). Then,

$$\begin{aligned} H(X,Y(w))= & {} \beta H(X,W)+(1-\beta )H(X,{\mathbf {1}}_{c-\delta })\\\le & {} \beta \left( \frac{2a}{c}-1 +\varepsilon \right) +(1-\beta )\left[ 2P(X\ge c-\delta )-1\right] \\\le & {} \beta \left( \frac{2a}{c}-1 +\varepsilon \right) +(1-\beta )\left[ 2\left( \frac{a}{c}+\frac{\varepsilon }{2}\right) -1\right] \\= & {} \frac{2a}{c}-1 +\varepsilon ; \end{aligned}$$

thus Y(w) is \(\varepsilon \)-optimal for player B when \(\delta >0\) is small enough and we are done.

Cite this article

Amir, N. Uniqueness of optimal strategies in Captain Lotto games. Int J Game Theory 47, 55–101 (2018). https://doi.org/10.1007/s00182-017-0578-6