1 Introduction

In the summer of 2020, one of the authors (DH) spent a wonderful, albeit short, family holiday on one of the Frisian North Sea islands. At last one could enjoy nature almost without worries, during a brief period of the year when incidence numbers, exponential growth and R‑factors had faded into the background. My wife had immersed herself in one of her books on the beach, while I relaxed and let my eyes wander over the expanse of the sea. Finally, my wife, who teaches mathematics, turned to me and said, “You might be interested in this.” While reading Marc Elsberg’s thriller “GREED”, she had come across an interesting connection, which we initially discussed animatedly without pencil and paper. The following investigation finally arose from this first conversation.

Elsberg had his breakthrough as an author in 2012 with “BLACKOUT”. The book describes the scenario of a widespread collapse of power supply and its consequences. With “ZERO” and “HELIX” he confirmed his reputation as a master of the science thriller. In his eighth book with the title “GREED” [10], Elsberg deals with economic concepts, findings and theories with a focus on the question whether comprehensive cooperation between economic partners and branches of industry could lead to greater prosperity for everybody. He relies on scientific work related to ergodicity economics of a group led by Ole Peters at the London Mathematical Laboratory, which was supported by the Nobel Prize winners Murray Gell-Mann and Ken Arrow [30].

In his review of “GREED”, Edgar Fell [13, translated from German] comments on Elsberg’s book:

In the course of this exciting story, more and more connections of an economic nature emerge. It is fascinating to see how the author succeeds in bringing complex economic and social issues closer to the reader. Even the mathematical foundations of game theory are built into the plot in a “playful” way. An illegal game of chance in a bar, for example, offers the opportunity to take the first steps in this direction. It works like magic. Elsberg’s captivating art of storytelling allows even readers who are completely untrained in mathematics to grasp his number games. The knowledge that is conveyed stimulates thought – about how modern forms of society actually work.

The aim of the following considerations is to present an elementary analysis of a game of chance (a bet) from Chap. 14 of Elsberg’s thriller “GREED”, mentioned in the review by Edgar Fell, and of variants of this bet. On the one hand, this offers the opportunity to apply some of the basic concepts of (elementary) stochastics. In this respect it is fair to say that games of chance provided much of the inspiration behind the birth of probability theory; see the historical review in Ethier’s book on the Doctrine of Chances [12]. On the other hand, the investigation naturally leads to a threshold phenomenon (phase transition). Phenomena of this kind were originally observed in statistical physics, but also play an important role in the analysis of random graphs and random polytopes. For an elementary introduction to the topic of phase transitions in classical random graphs (Erdős–Rényi) we refer to [9], to the classical work [11], as well as to the monographs and textbooks [4, 19, 23]. Threshold phenomena with random polytopes (e.g. in high dimensions), random cones and connections to optimization, data analysis and signal processing have been investigated in [1,2,3, 6, 7, 14, 15, 21, 22, 31], for example.

In Sect. 2, we provide a summary of those aspects from “GREED” which are relevant for the present discussion. We essentially focus on Chap. 14, in which Elsberg stages the dynamics associated with the gambling scenario (the bet). In the following sections, we will analyse this particular gamble step by step. Here we start from the specific situation provided in the thriller. Motivated by our observations on the initial scenario described by Elsberg and a first quantitative analysis in Sect. 3, we generalise the underlying parameters of this game of chance (see Sect. 4). First, we examine the asymptotic behaviour of the expected net profit as the number \(n\) of rounds tends to infinity (Elsberg’s choice is \(n=100\)). Then we introduce general parameters \(u\) and \(d\) used to update the score after each round and find pairs \((d,u)\) (at least numerically if \(n\) is finite) that define a fair game (Elsberg’s choice corresponds to \(d=0.6\), \(u=1.5\), which results in an unfair game). The asymptotic analysis as \(n\) tends to infinity leads to a surprising threshold phenomenon which marks the asymptotically sharp transition from the profit zone to the loss area. While the original version of the bet is based on successively tossing a fair coin, in Sect. 4.5 we also explore the effect a biased coin has on the outcome of the bet, which allows us to establish a similar, but more general asymptotic threshold property. In addition, it can be seen from our investigation that a surprisingly small bias may already turn an unfavourable game into a fair one while keeping the other parameters fixed (see Fig. 6). By a thorough analysis of the asymptotic behaviour of the variance of the net profit as the number \(n\) of rounds tends to infinity, we can even deduce the limit distribution of the net profit.
It turns out that the limit distribution is deterministic, except for the case in which \((d,u)\) lies on the boundary between profit zone and loss area when we obtain a two-point distribution in the limit. Finally, in Sect. 5 we illustrate some numerical simulations that motivate a small excursion to a generalised birthday problem. Throughout the paper, our arguments are motivated and illustrated by numerical calculations with Python (relevant source code is available from https://github.com/tgoell/On-a-game-of-chance-in-Marc-Elsberg-s-Greed-.git).

2 Elsberg’s game of chance [10] and some initial insights

In Chap. 14 of “GREED” [10], Elsberg describes the following scenario.

A group of people gathers in a bar in “Berlin Mitte”. A man (Fitzroy Peel, the croupier) offers the following bet to the rest of the group: A player starts with an initial score of 100 points (units). Afterwards, a coin is tossed one hundred times. In each round, the current score is increased by fifty percent, if the coin shows “heads”. Otherwise, the current score is reduced by forty percent. After one hundred rounds, there are two possibilities. If the final score exceeds 100 points (units), the player wins and receives double his stake. The mentioned stake can be chosen by the player, but is not allowed to exceed one hundred euros. If the final score does not exceed one hundred points, the player loses. The payout in the case of a loss is not explicitly specified in [10] (although it seems natural to assume that Elsberg intended the player to lose his or her entire stake).

While one member of the group called “T-Shirt” wants to participate right away, another one (Jan) is sceptical at first. T‑Shirt tries to convince Jan with the following explanation. In his opinion, one simply needs to take the mean of the possible outcomes. He therefore adds the possible percentages after one round (150 percent and 60 percent) and divides the sum by the number of possible outcomes (two). This yields a mean of 105 percent in each round. Jan and two other members of the group seem to be convinced by T‑Shirt’s explanation and agree to join the game.

However, another member of the group speaks up and expresses his concerns about the neglect of probabilities in the previous explanation. In his opinion, one needs to consider that the coin shows “heads” or “tails”, each with probability \(\frac{1}{2}\). Therefore, he multiplies the possible outcomes by the associated probabilities to arrive at an average gain of 105 percent in each round – the same result as before.

Finally, one more member of the group explains his point of view on the suggested bet. He explains that the average increase of five percent in each round yields an average final score (expected outcome) of 131.5 times the initial score. T‑Shirt seems confident of his victory and the group starts the offered gamble.

In the following chapters, Elsberg describes the reactions of the members of the group as the gambling evolves. Initially, a euphoric mood spreads in the room, as the majority of the players are successful in the beginning. However, after a couple of rounds more and more people end up with low scores and finally only one player has a score exceeding 100 (units). Now the mood in the room shifts completely. There are violent accusations of cheating, leading even to a physical confrontation.

Subsequently (in Chap. 22), a first popular explanation of the preceding events is given. On the one hand, it is argued that the mean values calculated by some of the participants are not appropriate for analysing the game, as they do not describe the course of the game over time, and that they ignore the fact that the initial situation can be different in each round. Here, the phenomenon of (lack of) ergodicity (the coincidence of temporal mean values and probabilistic averages) is used as an explanation. However, other points are perhaps more decisive for the analysis of the game.

More helpful is the remark that with an initial score of 100 (units) and a loss of forty percent in the first round, the score is reduced to sixty. Then, in order to reach again 100 (units) by winning in the second round, 66.67 percent instead of just 50 percent of the current score would be required. But even if a player wins in the first round and loses in the second round, the remaining score is only ninety. Ultimately, the basic error of the players is to calculate an expected value for the starting round and to conclude that \(0.5\cdot 1.5+0.5\cdot 0.6=1.05\) is the factor by which the profit per round should increase. Although the expected score at the end of the game varies exactly in this way (see below), the rules of the game do not state that the payout is double the stake if the mean value is greater than 100 (units), but if the actual outcome of the game results in a score that is greater than 100 (units). In addition, the prize in the event of a win is independent of the final score. It is only relevant whether the final score exceeds the initial score of 100 (units).

3 A quantitative analysis

We start by summarising and formalising the rules of the previously described game. Here we suggest a payout rule in the case of a loss which is more advantageous for the gamblers than a complete loss of the stake.

  1.

    The stake \(a\leq 100\) (euros) is placed. The initial score is 100 (units).

  2.

    The game consists of 100 rounds, each starting with the toss of a fair coin.

  3.

    If the coin shows “heads”, the current score is increased by \(50\%\). Otherwise, the current score is reduced by \(40\%\).

  4.

    This procedure will be continued until 100 rounds are completed.

  5.

    In the end, the player wins if the final score exceeds 100 and receives a payout of twice the individual stake. In this case the net profit equals the stake. Otherwise, the payout of the player is

    $$\text{stake}\cdot\frac{\text{final score}}{100},$$

    hence the net profit equals

    $$\text{stake}\cdot\frac{\text{final score}}{100}-\text{stake},$$

    which is negative (a loss), but not a complete loss of the stake. In other words, the higher the final score, the higher the percentage of the stake that the player keeps.

Remark: As said before, the payout in the event of a loss is not explicitly specified by Elsberg. If, in contrast to the situation described above, we agree that the player loses his or her entire stake in the event of a loss, then the outcome would be even more disadvantageous for the player. The analysis below then simplifies significantly as the consideration of the quantity \(A(\ldots)\) in Sect. 4.3 can be omitted.

Simulations: Using Python (or a similar programming language), the game can be simulated very easily. Various realisations are displayed in Sect. 5.
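
For instance, a single run can be sketched as follows (the function name and structure are our choices, not necessarily those of the cited repository):

```python
import random

def simulate_game(n=100, u=1.5, d=0.6, initial=100.0, seed=None):
    """Simulate one run: n fair coin tosses, multiplying the score
    by u on "heads" and by d on "tails"."""
    rng = random.Random(seed)
    score = initial
    heads = 0
    for _ in range(n):
        if rng.random() < 0.5:   # "heads"
            score *= u
            heads += 1
        else:                    # "tails"
            score *= d
    return score, heads

score, heads = simulate_game(seed=1)
# The final score depends only on the number of heads, cf. (1) below.
assert abs(score - 100 * 1.5**heads * 0.6**(100 - heads)) < 1e-6 * max(score, 1.0)
```

A player wins the run if and only if the returned score exceeds the initial 100 points.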

A first analysis: Let \(a\) denote the stake. We assume that the coin tosses are done independently with a fair coin. If the coin shows “heads” \(k\) times and “tails” \((100-k)\) times during the \(n=100\) rounds, for some \(k\in\{0,\ldots,100\}\), then the final score is given by

$$b(k)=100\cdot 1.5^{k}\cdot 0.6^{100-k}.$$
(1)

Note that the final score depends only on the numbers of “heads” and “tails” and not on the particular order in which they appear.

Using (1), we can easily find the smallest integer \(k\) necessary to win the game. A player wins the game, yielding a net profit of \(a\) euros, if and only if the condition

$$b(k)=100\cdot 1.5^{k}\cdot 0.6^{100-k}> 100$$
(2)

is satisfied. Condition (2) can be rewritten as

$$\left(\frac{1.5}{0.6}\right)^{k}> \left(\frac{1}{0.6}\right)^{100}$$
(3)

or

$$k> 100\cdot\frac{\ln\left(\frac{5}{3}\right)}{\ln\left(\frac{5}{2}\right)},$$
(4)

where \(\ln\) denotes the natural logarithm. Clearly, (4) depends neither on the initial score 100 nor on the stake \(a\), and (2) is equivalent to

$$k> 100\cdot\frac{\ln 5-\ln 3}{\ln 5-\ln 2}\approx 55.749.$$
(5)

Consequently, a player achieves a net profit of \(a\) euros (the stake is returned together with an equal amount) in the case where \(k\geq 56\), whereas the net profit is \(a\cdot 1.5^{k}\cdot 0.6^{100-k}-a<0\) euros (a loss that depends on the particular number \(k\)) if \(k\leq 55\).
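
The threshold computation in (5) can be double-checked with a few lines of Python:

```python
from math import ceil, log

# Smallest number of heads needed to win, cf. (4) and (5).
threshold = 100 * (log(5) - log(3)) / (log(5) - log(2))
k_min = ceil(threshold)   # smallest integer k with k > threshold
assert k_min == 56
# Cross-check against the final score b(k) from (1):
assert 100 * 1.5**56 * 0.6**44 > 100
assert 100 * 1.5**55 * 0.6**45 <= 100
```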

In the underlying scenario, we can easily compute the probability of a loss, which is given by

$$\sum_{k=0}^{55}\binom{100}{k}\left(\frac{1}{2}\right)^{100}\approx 0.864,$$

whereas the probability of a win is approximately given by 0.136.
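
This probability can be evaluated exactly with integer arithmetic:

```python
from math import comb

# P(loss) = P(Bin(100, 1/2) <= 55), cf. the display above.
p_loss = sum(comb(100, k) for k in range(56)) / 2**100
p_win = 1 - p_loss
assert abs(p_loss - 0.864) < 1e-3
assert abs(p_win - 0.136) < 1e-3
```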

The previously determined probability of winning can also be seen in our numerical simulations (see Sect. 5). Fig. 12 illustrates the outcomes of 100 simulations of Elsberg’s game of chance. Only 14 out of the 100 simulations were beneficial for the gamblers.

Further, we can determine the expected net profit of a player, i.e., the expected payout minus the stake. It is given by

$$\begin{aligned}\displaystyle&\sum_{k=0}^{55}\binom{100}{k}\left(\frac{1}{2}\right)^{100}\cdot\left(a\cdot 1.5^{k}\cdot 0.6^{100-k}-a\right)+\sum_{k=56}^{100}\binom{100}{k}\left(\frac{1}{2}\right)^{100}\cdot a\\ &=a\cdot\left(\frac{1}{2}\right)^{100}\left[\sum_{k=0}^{55}\binom{100}{k}\left(1.5^{k}\cdot 0.6^{100-k}-1\right)+\sum_{k=56}^{100}\binom{100}{k}\right]\\ &=a\cdot\left[-1+\sum_{k=0}^{55}\binom{100}{k}0.75^{k}\cdot 0.3^{100-k}+\left(\frac{1}{2}\right)^{99}\sum_{k=56}^{100}\binom{100}{k}\right]\\ &\approx-0.68\cdot a.\end{aligned}$$

This demonstrates that a player should ultimately expect a significant loss.
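
The computation above can be reproduced directly; the following sketch (with a function name of our choosing) evaluates the expected net profit for Elsberg’s parameters:

```python
from math import comb

def expected_net_profit(a, n=100):
    """Expected net profit for Elsberg's parameters, cf. the display above:
    net profit a if k >= 56, and a * (1.5**k * 0.6**(n-k) - 1) otherwise."""
    total = 0.0
    for k in range(n + 1):
        p = comb(n, k) / 2**n
        total += p * a if k >= 56 else p * a * (1.5**k * 0.6**(n - k) - 1)
    return total

# Approximately -0.68 * a, as computed above.
assert abs(expected_net_profit(100.0) + 68.0) < 1.0
```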

4 Variations

The previous analysis shows that the participants of Elsberg’s bet will experience a significant loss on average. Thus the question arises which rule or parameter underlying the gambling leads to this unfair situation and how the framework can be adjusted in order to make the gambling more (or even less) advantageous for the participants. In the following, we will study the influence of different parameters of Elsberg’s game of chance (which we also call a “bet”, “gamble” or simply a “game”), or rather of the version of it employing our specific payout rule, starting with the number \(n\) of rounds.

4.1 Number of rounds

We will now analyse the influence of the number \(n\) of rounds on the expected net profit.

For \(n=1\), the behaviour of the gamble coincides with the players’ perception. The expected net profit is then given by \(G(a,1)=0.3\cdot a\). For \(n=2\), the expected net profit is still positive, given by \(G(a,2)=0.04\cdot a\). However, the final score is only larger than 100 if \(k=2\). This event occurs with probability \(\frac{1}{4}\), implying that the probability of a loss is given by \(\frac{3}{4}\). After \(n=6\) rounds, the expected net profit is negative for the first time. Although \(G(a,7)\) is positive again, the expected net profit is strictly negative for \(n\geq 8\). Fig. 1 illustrates the expected net profit after \(n\) rounds and shows an interesting behaviour. While the value \(G(a,n)\) is not monotonic due to some jumps, we can clearly see a decreasing trend. This leads to the conjecture that the expected net profit converges to \(-a\) as \(n\to\infty\) (see Fig. 2). A formal proof of the asymptotic behaviour can be found below in Proposition 1.
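
These values, and the first sign change, can be verified with a short computation that implements the expectation directly (a sketch; the function name mirrors the notation \(G(a,n)\)):

```python
from math import comb, floor, log

def G(a, n, u=1.5, d=0.6):
    """Expected net profit after n rounds; the player loses iff k <= k0."""
    k0 = floor(n * log(1 / d) / log(u / d))
    total = 0.0
    for k in range(n + 1):
        p = comb(n, k) / 2**n
        total += p * a if k > k0 else p * a * (u**k * d**(n - k) - 1)
    return total

assert abs(G(1.0, 1) - 0.3) < 1e-9
assert abs(G(1.0, 2) - 0.04) < 1e-9
# The expected net profit is negative for the first time at n = 6:
assert min(n for n in range(1, 10) if G(1.0, n) < 0) == 6
```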

Fig. 1
figure 1

Illustration of the expected net profit in Elsberg’s bet after \(n\) rounds (\(n\in\{1,\ldots,200\}\)) for an initial stake of \(a=100\) euros

Fig. 2
figure 2

Illustration of the expected net profit after \(n\) rounds using an initial stake of \(a=100\) (euros) in Elsberg’s gamble

In general, the expected net profit after \(n\) rounds with the initial stake \(a\) is given by

$$G(a,n):=a\cdot\left[-1+\sum_{k=0}^{k_{0}(n)}\binom{n}{k}0.75^{k}\cdot 0.3^{n-k}+\left(\frac{1}{2}\right)^{n-1}\sum_{k=k_{0}(n)+1}^{n}\binom{n}{k}\right],$$
(6)

where

$$k_{0}(n):=\left\lfloor n\cdot\frac{\ln 5-\ln 3}{\ln 5-\ln 2}\right\rfloor$$

represents the boundary between winning and losing events. In the definition of \(k_{0}\), \(\lfloor\cdot\rfloor\) denotes the floor function defined as \(\lfloor x\rfloor=\max\{n\in\mathbb{Z}:\,n\leq x\}\), \(x\in\mathbb{R}\). The logarithmic expression in the previous definition of \(k_{0}\) is approximately given by

$$\frac{\ln 5-\ln 3}{\ln 5-\ln 2}\approx 0.5574929501.$$

We will now prove the previously mentioned conjecture regarding the asymptotic behaviour of the expected net profit using concentration inequalities. As usual, we denote a random variable \(X\) with a binomial distribution with parameters \(n\in\mathbb{N}\) and \(p\in[0,1]\) by \(X\sim\mathrm{Bin}(n,p)\).

Proposition 1

For any \(a> 0\), \(G(a,n)\rightarrow-a\) as \(n\rightarrow\infty\) with an exponential rate of convergence.

Proof

In the following, we will prove that both sums in (6) converge to zero as \(n\) tends to infinity.

Let \(X_{n}\sim\text{Bin}\left(n,\frac{5}{7}\right)\). Then we get

$$\begin{aligned}\sum_{k=0}^{k_{0}(n)}\binom{n}{k}0.75^{k}\cdot 0.3^{n-k}&=1.05^{n}\sum_{k=0}^{k_{0}(n)}\binom{n}{k}\left(\frac{5}{7}\right)^{k}\cdot\left(\frac{2}{7}\right)^{n-k}=1.05^{n}\cdot\mathbb{P}(X_{n}\leq k_{0}(n))\\ &\leq 1.05^{n}\cdot\mathbb{P}\left(X_{n}-\frac{5}{7}\,n\leq-\left(\frac{5}{7}-0.5575\right)n\right)\\ &\leq 1.05^{n}\cdot\exp\left(-2\cdot\left(\frac{5}{7}-0.5575\right)^{2}n\right)\leq 1.05^{n}\cdot 1.050392097^{-n}\to 0\quad\text{ as }n\to\infty,\end{aligned}$$
(7)

where we used Hoeffding’s inequality [5, Theorem 2.8], [8] in the second to last step. (Alternatively, Chernoff’s inequality can be used, at the cost of an additional factor 2.)

Now let \(Y_{n}\sim\text{Bin}\left(n,\frac{1}{2}\right)\). Using \(k_{0}(n)\geq\lfloor 0.55n\rfloor\geq 0.55n-1\), we obtain the following upper bound on the second sum

$$\begin{aligned}\left(\frac{1}{2}\right)^{n-1}\cdot\sum_{k=k_{0}(n)+1}^{n}\binom{n}{k}&=2\cdot\mathbb{P}(Y_{n}\geq k_{0}(n)+1)\leq 2\cdot\mathbb{P}(Y_{n}\geq 0.55n)\\ &\leq 2\cdot\mathbb{P}(Y_{n}-0.5n\geq 0.05n)\leq 2\exp\left(-2\cdot 0.05^{2}\cdot n\right)\to 0\quad\text{ as }n\to\infty,\end{aligned}$$
(8)

where we used Okamoto’s inequality [29], [5, Ex. 2.12] in the last step.

An application of the upper bounds (7) and (8) in (6) completes the proof.\(\square\)
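
The numerical constant appearing in (7) can be confirmed quickly:

```python
from math import exp

# The constant in (7): exp(2 * (5/7 - 0.5575)**2) = 1.050392097...
c = exp(2 * (5 / 7 - 0.5575) ** 2)
assert abs(c - 1.050392097) < 1e-6
# Since 1.05 < c, the product 1.05**n * c**(-n) indeed tends to zero:
assert 1.05 < c
assert (1.05 / c) ** 100000 < 1e-10
```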

Remark:

If we are not interested in the rate of convergence in Proposition 1, the second part of the proof can be simplified by using the fact that \(Y_{n}/n\) converges in probability to \(\frac{1}{2}\). However, it seems that for the first part of the argument some finer tools are required.

An important feature of Elsberg’s game of chance is the bounded payout in the event of a win. If the payout rule for the case of a loss also applied in the winning scenarios, the expected net profit would be given by

$$\left(\frac{1}{2}\right)^{n}\sum_{k=0}^{n}\binom{n}{k}\left(1.5^{k}\cdot 0.6^{n-k}-1\right)\cdot a=a\cdot\left(1.05^{n}-1\right).$$
(9)

This is exactly the value the gamblers expected intuitively.
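
Identity (9) is an instance of the binomial theorem and is easily confirmed numerically:

```python
from math import comb

# (1/2)^n * sum_k C(n,k) * (1.5^k * 0.6^(n-k) - 1) = 1.05^n - 1, cf. (9).
for n in (1, 5, 10, 50, 100):
    lhs = sum(comb(n, k) * (1.5**k * 0.6**(n - k) - 1)
              for k in range(n + 1)) / 2**n
    assert abs(lhs - (1.05**n - 1)) < 1e-9 * 1.05**n
```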

4.2 Up and down

In the following subsection, we will study the influence of the percentages used to modify the current score after each round. In Elsberg’s game of chance, the score is increased by \(50\%\) or decreased by \(40\%\) if the coin shows “heads” or “tails”, respectively. We will replace these percentages with general percentages \(a_{u}\) and \(a_{d}\). The indices \(u\) and \(d\) stand for “up” and “down”.

Hence, the updated modification (increase or decrease) of the current score in each round is given as follows.

  3’.

    If the coin shows “heads”, the current score is increased by \(a_{u}\%\). Otherwise, the current score is reduced by \(a_{d}\%\).

In the game introduced by Elsberg, the values \(a_{u}\) and \(a_{d}\) are apparently given by

$$a_{u}=50\quad\text{and}\quad a_{d}=40.$$

Since it will be more convenient to use fractions instead of percentages, we introduce the following factors

$$u:=1+\frac{a_{u}}{100}\quad\text{and}\quad d:=1-\frac{a_{d}}{100}.$$

Using the previously defined factors \(u\) and \(d\), we obtain more generally (cf. (1)) for the final score after \(n\) rounds

$$\widetilde{b}(k)=100\cdot u^{k}\cdot d^{n-k}.$$
(10)

Here again, \(k\in\{0,1,\ldots,n\}\) denotes the number of coin tosses showing “heads” among the \(n\) independent repetitions. Then, the expected net profit under the updated modification rule 3’ is given by

$$\widetilde{G}(a,n,u,d)=a\cdot\left(\frac{1}{2}\right)^{n}\left[\sum_{k=0}^{\widetilde{k}_{0}(n,u,d)}\binom{n}{k}\left({u}^{k}d^{n-k}-1\right)+\sum_{k=\widetilde{k}_{0}(n,u,d)+1}^{n}\binom{n}{k}\right].$$
(11)

As before, the quantity

$$\widetilde{k}_{0}(n,u,d):=\left\lfloor-n\cdot\frac{\ln(d)}{\ln(u)-\ln(d)}\right\rfloor$$
(12)

represents the boundary between winning and losing events. More precisely, a player wins if and only if \(k> \widetilde{k}_{0}(n,u,d)\).

General assumption: In the following, we always assume that \(0<d\leq 1\leq u\) and \(u\neq d\). This is not a loss of generality since other choices of \(u\) and \(d\) are not reasonable in the given situation.

Intuitively, one would expect that an increase of \(u\) or \(d\) results in an advantage for the participants of the game of chance. Figs. 3 and 4 support this conjecture. Both figures were generated using an initial stake of \(a=100\) (euros) in a game of \(n=100\) rounds. In Fig. 3, we fixed the factor \(u\) and illustrated the expected net profit in terms of \(d\). In Fig. 4, we treated the opposite situation where \(d\) is fixed and the expected net profit is computed in terms of \(u\). In both figures, the expected net profit in the situation described by Elsberg is marked by a red dot.

Both figures show that the expected net profit increases with the variable parameter (\(d\) and \(u\), respectively). Moreover, they illustrate that the expected net profit approaches \(-a\) for small and \(a\) for large values of the variable parameter.

Based on Figs. 3 and 4, one expects the existence of choices for \(u\) and \(d\) that result in a fair game. In this context, we say that a game is “fair”, if the expected net profit equals zero. Numerically, it is possible to determine pairs \((d,u)\) which define a fair game. The blue line in Fig. 5 contains tuples \((d,u)\) resulting in a fair game. Tuples below the blue line are advantageous for the organiser of the game (the croupier) while tuples above the blue line result in a game which is advantageous for the participants.

For comparison, we also illustrated the function \(u=d^{-1}\) (orange) suggesting the conjecture that asymptotically as \(n\to\infty\), the fair tuples \((d,u)\) are determined by the relation \(u=d^{-1}\). We will prove this conjecture in Theorem 2 below.

Fig. 5 also contains a green dot marking the tuple \((0.6,1.5)\) used in the game of chance introduced by Elsberg. This illustration shows once again that Elsberg’s game of chance is unfair to the participants in the game.
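
Such fair pairs can be located, for instance, by bisection in \(u\) for a fixed \(d\) (a sketch with function names of our choosing; we assume a sign change of the expectation on the chosen interval):

```python
from math import comb, floor, log

def G_tilde(a, n, u, d):
    """Expected net profit (11) with the threshold (12)."""
    k0 = floor(-n * log(d) / (log(u) - log(d)))
    s = sum(comb(n, k) * ((u**k * d**(n - k) - 1) if k <= k0 else 1.0)
            for k in range(n + 1))
    return a * s / 2**n

def fair_u(d, n=100, lo=1.01, hi=3.0, iters=60):
    """Bisect for an 'up' factor u with G_tilde(1, n, u, d) close to 0,
    assuming the expectation changes sign between lo and hi."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if G_tilde(1.0, n, mid, d) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Elsberg's tuple (d, u) = (0.6, 1.5) is unfair to the players ...
assert G_tilde(100.0, 100, 1.5, 0.6) < 0
# ... while for d = 0.6 and n = 100 a fair 'up' factor lies between 1.5 and 1.7:
u_star = fair_u(0.6, lo=1.5, hi=1.7)
assert 1.5 < u_star < 1.7
```

Since \(\widetilde{k}_{0}(n,u,d)\) jumps at certain values of \(u\), the expectation is only piecewise continuous in \(u\); the bisection therefore locates a sign change rather than an exact root.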

Fig. 3
figure 3

Expected net profit after \(n=100\) rounds with an initial stake of \(a=100\) in terms of the “down factor” \(d\) for various choices of the “up factor” \(u\)

Fig. 4
figure 4

Expected net profit after \(n=100\) rounds with an initial stake of \(a=100\) in terms of the “up factor” \(u\) for various choices of the “down factor” \(d\)

Fig. 5
figure 5

Pairs \((d,u)\) resulting in a fair game for \(a=100\) and \(n=100\) (blue curve)

4.3 Gambling on the edge

As already mentioned in the previous subsection, the conjecture which is supported by the illustration of our numerical results in Fig. 5 can be proven exactly (asymptotically as \(n\to\infty\)). This leads to an interesting threshold phenomenon, where \(\{(u,d)\in[1,\infty)\times(0,1]:ud=1,u\neq d\}\) describes the boundary between tuples leading to a game of chance that is advantageous or disadvantageous for the participants in the game. All tuples on the boundary lead to a fair game.

Since, according to the property \(\widetilde{G}(a,n,u,d)=a\cdot\widetilde{G}(1,n,u,d)\), the expected net profit \(\widetilde{G}(a,n,u,d)\) is proportional to the initial stake \(a\), we define \(G(n,u,d):=\widetilde{G}(1,n,u,d)\) for notational simplicity. Then, it follows that \(G(n,u,d)\) can be written as

$$G(n,u,d)=-1+A(n,u,d)+B(n,u,d),$$

where

$$A(n,u,d):=\left(\frac{1}{2}\right)^{n}\cdot\sum_{k=0}^{\widetilde{k}_{0}(n,u,d)}\binom{n}{k}{u}^{k}d^{n-k},\qquad B(n,u,d):=\left(\frac{1}{2}\right)^{n-1}\cdot\sum_{k=\widetilde{k}_{0}(n,u,d)+1}^{n}\binom{n}{k}$$

and

$$\widetilde{k}_{0}(n,u,d)=\left\lfloor n\cdot\frac{\ln\left(\frac{1}{d}\right)}{\ln\left(\frac{u}{d}\right)}\right\rfloor,$$

implying that \(\widetilde{k}_{0}(n,u,d)\in\{0,\ldots,n\}\).

Theorem 2

Let \(0<d\leq 1\leq u\) and \(u\neq d\). Then

$$\lim_{n\rightarrow\infty}G(n,u,d)=\begin{cases}1,&\text{if }ud> 1,\\ 0,&\text{if }ud=1,\\ -1,&\text{if }ud<1.\end{cases}$$

The proof of Theorem 2 will be split into two auxiliary results concerning the asymptotic behaviour of \(A(n,u,d)\) and \(B(n,u,d)\) as \(n\to\infty\).

Lemma 1

Let \(0<d\leq 1\leq u\) and \(u\neq d\). Then

$$\lim_{n\rightarrow\infty}A(n,u,d)=0.$$

Proof

Case 1: \(u+d<2\). In this case,

$$A(n,u,d)=\left(\frac{u+d}{2}\right)^{n}\cdot\sum_{k=0}^{\widetilde{k}_{0}(n,u,d)}\binom{n}{k}\left(\frac{u}{u+d}\right)^{k}\left(\frac{d}{u+d}\right)^{n-k}\leq\left(\frac{u+d}{2}\right)^{n}\to 0$$

as \(n\to\infty\).

Case 2: \(u+d=2\). Since \(u\neq d\) is satisfied by assumption, it follows that \(d<1\) must hold as well. We can therefore conclude that \(2\cdot\left(1-\frac{d}{2}\right)^{1-\frac{d}{2}}\left(\frac{d}{2}\right)^{\frac{d}{2}}> 1\) (see Lemma 3 for \(x=\frac{d}{2}\in(0,\frac{1}{2})\)), and thus \((2-d)^{2-d}d^{d}> 1\) or equivalently \(u^{u}d^{2-u}> 1\). This inequality is in turn equivalent to

$$\frac{u}{2}> \frac{\ln\left(\frac{1}{d}\right)}{\ln\left(\frac{u}{d}\right)},$$

which implies that

$$\frac{1}{n}\widetilde{k}_{0}(n,u,d)\to\frac{\ln\left(\frac{1}{d}\right)}{\ln\left(\frac{u}{d}\right)}<\frac{u}{2}\quad\text{for }n\to\infty.$$
(13)

Now let \(Y_{n}\sim\text{Bin}(n,\frac{u}{2})\). The law of large numbers yields

$$\begin{aligned}\displaystyle A(n,u,d)&=\sum_{k=0}^{\widetilde{k}_{0}(n,u,d)}\binom{n}{k}\left(\frac{u}{2}\right)^{k}\left(\frac{d}{2}\right)^{n-k}=\mathbb{P}\left(Y_{n}\leq\widetilde{k}_{0}(n,u,d)\right)\\ &=\mathbb{P}\left(\frac{1}{n}Y_{n}\leq\frac{\widetilde{k}_{0}(n,u,d)}{n}\right)\to 0\quad\text{as }n\to\infty,\end{aligned}$$

due to (13) and \(\frac{1}{n}\mathbb{E}Y_{n}=\frac{u}{2}\).

If we use Hoeffding’s inequality [5, Theorem 2.8] (or alternatively Chernoff’s inequality including an additional factor 2) and the fact that \(\frac{\widetilde{k}_{0}(n,u,d)}{n}-\frac{u}{2}<0\) for sufficiently large \(n\) due to (13), it is even possible to verify an exponential rate of convergence. Using the previously mentioned tools, we get

$$\mathbb{P}\left(Y_{n}-\frac{nu}{2}\leq\left(\frac{\widetilde{k}_{0}(n,u,d)}{n}-\frac{u}{2}\right)n\right)\leq\exp\left(-2\left(\frac{\widetilde{k}_{0}(n,u,d)}{n}-\frac{u}{2}\right)^{2}n\right),$$

where

$$\frac{\widetilde{k}_{0}(n,u,d)}{n}-\frac{u}{2}\to\frac{\ln\left(\frac{1}{d}\right)}{\ln\left(\frac{u}{d}\right)}-\frac{u}{2}=:\alpha(u,d)<0\quad\text{for }n\to\infty.$$

Note that \(\alpha(u,d)\to 0\) as \(d,u\to 1\).

Case 3: \(u+d> 2\) and \(ud\neq 1\). It follows that \(u> 2-d\geq 1\), hence \(u^{u}d^{d}> (2-d)^{2-d}d^{d}\geq 1\), and therefore we obtain

$$\frac{\ln\left(\frac{1}{d}\right)}{\ln\left(\frac{u}{d}\right)}<\frac{u}{u+d}<1.$$
(14)

For sufficiently large \(n\) it also follows that

$$1-\frac{\widetilde{k}_{0}(n,u,d)}{n}> \frac{d}{u+d}\quad\text{or equivalently }\quad\frac{\widetilde{k}_{0}(n,u,d)}{n}<\frac{u}{u+d}.$$
(15)

Let \(\xi_{n}\sim\text{Bin}(n,\frac{u}{u+d})\) and \(\zeta_{n}:=n-\xi_{n}\sim\text{Bin}(n,\frac{d}{u+d})\). We will now use an inequality (see [5, Ex. 2.11] or [8]) that usually arises during the derivation of Chernoff’s inequality. It states that if \(S_{n}\sim\mathrm{Bin}(n,p)\) and \(y\in(p,1)\), then

$$\mathbb{P}(S_{n}\geq ny)\leq\left(\frac{(1-p)^{1-y}p^{y}}{(1-y)^{1-y}y^{y}}\right)^{n}.$$
(16)

(Remark: In the Wikipedia article [35] this step is referred to as the “Chernoff-Hoeffding theorem”.)
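
For sample parameters, inequality (16) can be illustrated numerically (a sketch; the parameter values are our choices):

```python
from math import ceil, comb

def binom_tail(n, p, y):
    """P(S_n >= n*y) for S_n ~ Bin(n, p), evaluated by direct summation."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(ceil(n * y), n + 1))

def chernoff_bound(n, p, y):
    """Right-hand side of (16)."""
    return (((1 - p)**(1 - y) * p**y) / ((1 - y)**(1 - y) * y**y))**n

n, p = 50, 0.3
for y in (0.415, 0.535, 0.715):   # y must lie in (p, 1)
    assert binom_tail(n, p, y) <= chernoff_bound(n, p, y)
```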

Using (15) and (16) for \(p=\frac{d}{u+d}\) and \(y=1-n^{-1}\widetilde{k}_{0}(n,u,d)\), it follows that

$$\begin{aligned}\displaystyle A(n,u,d)&=\left(\frac{u+d}{2}\right)^{n}\mathbb{P}\left(\xi_{n}\leq\widetilde{k}_{0}(n,u,d)\right)=\left(\frac{u+d}{2}\right)^{n}\mathbb{P}\left(\zeta_{n}\geq n-\widetilde{k}_{0}(n,u,d)\right)\\ &=\left(\frac{u+d}{2}\right)^{n}\mathbb{P}\left(\zeta_{n}\geq\left(1-\frac{\widetilde{k}_{0}(n,u,d)}{n}\right)n\right)\\ &\leq\left(\frac{u+d}{2}\right)^{n}\left[\frac{\left(\frac{u}{u+d}\right)^{\frac{\widetilde{k}_{0}(n,u,d)}{n}}\left(\frac{d}{u+d}\right)^{1-\frac{\widetilde{k}_{0}(n,u,d)}{n}}}{\left(\frac{\widetilde{k}_{0}(n,u,d)}{n}\right)^{\frac{\widetilde{k}_{0}(n,u,d)}{n}}\left(1-\frac{\widetilde{k}_{0}(n,u,d)}{n}\right)^{1-\frac{\widetilde{k}_{0}(n,u,d)}{n}}}\right]^{n}=\omega(n,u,d)^{n},\end{aligned}$$

where

$$\omega(n,u,d):=\frac{u^{\frac{\widetilde{k}_{0}(n,u,d)}{n}}d^{1-\frac{\widetilde{k}_{0}(n,u,d)}{n}}}{2\left(\frac{\widetilde{k}_{0}(n,u,d)}{n}\right)^{\frac{\widetilde{k}_{0}(n,u,d)}{n}}\left(1-\frac{\widetilde{k}_{0}(n,u,d)}{n}\right)^{1-\frac{\widetilde{k}_{0}(n,u,d)}{n}}}.$$

Since we assumed that \(ud\neq 1\), we get

$$\frac{1}{n}\widetilde{k}_{0}(n,u,d)\to\frac{\ln\left(\frac{1}{d}\right)}{\ln\left(\frac{u}{d}\right)}\in\left(0,1\right)\setminus\left\{\tfrac{1}{2}\right\},$$

and therefore

$$2\left(\frac{\widetilde{k}_{0}(n,u,d)}{n}\right)^{\frac{\widetilde{k}_{0}(n,u,d)}{n}}\left(1-\frac{\widetilde{k}_{0}(n,u,d)}{n}\right)^{1-\frac{\widetilde{k}_{0}(n,u,d)}{n}}\to\rho(u,d)\in(1,2)\quad\text{as }n\to\infty.$$

Furthermore,

$$u^{\frac{\widetilde{k}_{0}(n,u,d)}{n}}d^{1-\frac{\widetilde{k}_{0}(n,u,d)}{n}}\to u^{\frac{\ln\left(\frac{1}{d}\right)}{\ln\left(\frac{u}{d}\right)}}d^{\frac{\ln\left(u\right)}{\ln\left(\frac{u}{d}\right)}}=1\quad\text{for }n\to\infty.$$
(17)

Finally, this implies that \(\omega(n,u,d)\to{\rho(u,d)}^{-1}\in(0,1)\) as \(n\to\infty\), which yields \(\omega(n,u,d)^{n}\to 0\), as desired.

Case 4: \(u+d> 2\) and \(ud=1\). If \(n\) is even, there exists some \(m\in\mathbb{N}\) such that \(n=2m\). Hence, \(\widetilde{k}_{0}(n,u,d)=\left\lfloor\frac{n}{2}\right\rfloor=m\) and therefore \(A(n,u,d)\) can be written as

$$A(n,u,d)=\sum_{k=0}^{m}\left(\frac{1}{2}\right)^{2m}\binom{2m}{k}d^{2m-2k}.$$

Since \(k\leq m\), it follows that \(\binom{2m}{k}\leq\binom{2m}{m}\). Now we can use Stirling’s approximation to see that

$$\binom{2m}{m}\sim\frac{2^{2m}}{\sqrt{\pi m}},$$
(18)

where \(\sim\) means that the two expressions are asymptotically equivalent. Hence, for \(k\leq m\) and sufficiently large \(m\in\mathbb{N}\),

$$\left(\frac{1}{2}\right)^{2m}\binom{2m}{k}\leq\frac{1}{\sqrt{m}}.$$

We conclude that

$$A(n,u,d)\leq\frac{1}{\sqrt{m}}\sum_{j=0}^{m}(d^{2})^{j}\leq(1-d^{2})^{-1}\frac{1}{\sqrt{m}}\to 0\quad\text{as }n\to\infty.$$

If \(n\) is odd, there exists some \(m\in\mathbb{N}\) such that \(n=2m+1\). Then it follows that \(\widetilde{k}_{0}(n,u,d)=m\) and \(A(n,u,d)\) can be bounded by

$$\begin{aligned}\displaystyle A(n,u,d)=\sum_{k=0}^{m}\left(\frac{1}{2}\right)^{2m+1}\binom{2m+1}{k}d^{2m+1-2k}=\frac{d}{2}\sum_{k=0}^{m}\left(\frac{1}{2}\right)^{2m}\frac{2m+1}{2m+1-k}\binom{2m}{k}d^{2m-2k}\leq\frac{d}{2}\frac{2m+1}{m+1}A(2m,u,d)\leq A(2m,u,d)\to 0\quad\text{as }n\to\infty,\end{aligned}$$

which completes the argument.\(\square\)

Remark:

\(\bullet\) As soon as (13) is available, the second case in the previous proof can be included in the third case. However, the application of Hoeffding’s inequality is easier in the second case. Moreover, we could alternatively argue with the law of large numbers in the second case (though without obtaining the exponential rate of convergence then). For these reasons we decided to treat the cases separately.

\(\bullet\) At the critical boundary, characterised by the equation \(ud=1\), the expression \(A(n,u,d)\) still converges to zero. However, the rate of convergence is no longer exponential, but of order \(1/\sqrt{n}\). We only presented an upper bound on the order of convergence, but one could consider the summand for \(k=m\) to deduce a lower bound as well.
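The statement of Lemma 1, including the slower rate at the critical boundary \(ud=1\), can be observed numerically by evaluating the defining sum of \(A(n,u,d)\). The following is a minimal sketch; it assumes the closed form \(\widetilde{k}_{0}(n,u,d)=\lfloor n\ln(1/d)/\ln(u/d)\rfloor\) of the threshold, which is consistent with the limits used above:

```python
from math import comb, log, floor

def k0(n, u, d):
    # Threshold below which the final score u^k d^(n-k) does not exceed 1;
    # assumes the closed form floor(n*ln(1/d)/ln(u/d)) of the paper's threshold.
    return floor(n * log(1 / d) / log(u / d))

def A(n, u, d):
    # A(n,u,d) = sum_{k=0}^{k0} C(n,k) (u/2)^k (d/2)^(n-k)
    return sum(comb(n, k) * (u / 2) ** k * (d / 2) ** (n - k)
               for k in range(k0(n, u, d) + 1))

# Case u+d > 2, ud != 1: exponentially fast convergence to zero.
a_vals = [A(n, 1.5, 0.6) for n in (50, 200, 800)]
# Critical boundary ud = 1: convergence of order 1/sqrt(n) only.
c_vals = [A(n, 2.0, 0.5) for n in (50, 200, 800)]
```

At \(n=800\) the critical value is visibly larger than the off-critical one, reflecting the slower \(1/\sqrt{n}\) rate.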

In order to establish the asymptotic behaviour of \(G(n,u,d)\), which is stated in Theorem 2, it remains to analyse the asymptotic behaviour of \(B(n,u,d)\). The required result is provided by the following lemma.

Lemma 2

If \(0<d\leq 1\leq u\) and \(u\neq d\), then

$$\lim_{n\rightarrow\infty}B(n,u,d)=\begin{cases}0,&\text{if }ud<1,\\ 1,&\text{if }ud=1,\\ 2,&\text{if }ud> 1.\end{cases}$$

Proof

Case 1: \(ud<1\). Then

$$\frac{1}{n}\widetilde{k}_{0}(n,u,d)\to\frac{\ln\left(\frac{1}{d}\right)}{\ln\left(\frac{u}{d}\right)}> \frac{1}{2}\quad\text{as }n\to\infty,$$
(19)

where the lower bound on the limit is equivalent to \(ud<1\). If we introduce binomially distributed random variables \(X_{n}\sim\text{Bin}(n,\frac{1}{2})\), it follows that

$$B(n,u,d)=2\,\mathbb{P}\left(X_{n}\geq\widetilde{k}_{0}(n,u,d)+1\right)\leq 2\,\mathbb{P}\left(\frac{1}{n}X_{n}\geq\frac{1}{n}\widetilde{k}_{0}(n,u,d)\right)\to 0$$

as \(n\to\infty\), by the law of large numbers. An application of Okamoto’s inequality (see [29]), in combination with the fact that \(\widetilde{k}_{0}(n,u,d)-\frac{n}{2}> 0\) for sufficiently large \(n\in\mathbb{N}\) and (19), yields the even stronger statement

$$B(n,u,d)\leq 2\,\mathbb{P}\left(X_{n}-\frac{n}{2}\geq\widetilde{k}_{0}(n,u,d)-\frac{n}{2}\right)\leq 2\exp\left(-2\left(\frac{\widetilde{k}_{0}(n,u,d)}{n}-\frac{1}{2}\right)^{2}n\right)\to 0$$

as \(n\to\infty\).

Case 2: \(ud> 1\). Then

$$\frac{1}{n}\widetilde{k}_{0}(n,u,d)\to\frac{\ln\left(\frac{1}{d}\right)}{\ln\left(\frac{u}{d}\right)}<\frac{1}{2}\text{ as }n\to\infty.$$

Analogously to the first case, an application of the law of large numbers yields

$$B(n,u,d)=2\,\mathbb{P}(X_{n}\geq\widetilde{k}_{0}(n,u,d)+1)=2\,\mathbb{P}\left(\frac{1}{n}X_{n}\geq\frac{1}{n}(\widetilde{k}_{0}(n,u,d)+1)\right)\to 2\cdot 1=2$$

for \(n\to\infty\).

Again, an application of Okamoto’s inequality [29] instead of the law of large numbers leads to a stronger, exponential estimate given by

$$\begin{aligned}\displaystyle B(n,u,d)=2\,\mathbb{P}\left(X_{n}\geq\widetilde{k}_{0}(n,u,d)+1\right)=2\left(1-\mathbb{P}\left(X_{n}\leq\widetilde{k}_{0}(n,u,d)\right)\right)\geq 2\left(1-\exp\left(-2n\left(\frac{\widetilde{k}_{0}(n,u,d)}{n}-\frac{1}{2}\right)^{2}\right)\right)\rightarrow 2\quad\text{as }n\to\infty.\end{aligned}$$

In combination with the upper bound \(B(n,u,d)\leq 2\), it now follows that \(\lim_{n\rightarrow\infty}B(n,u,d)=2\), and the convergence is of exponential order.

Case 3: \(ud=1\). Then we have \(\widetilde{k}_{0}(n,u,d)=\left\lfloor\frac{n}{2}\right\rfloor\).

If \(n\) is odd, there exists some \(m\in\mathbb{N}\) such that \(n=2m+1\). From the identity

$$\sum_{k=m+1}^{2m+1}\binom{2m+1}{k}=2^{2m}$$

we deduce that

$$B(n,u,d)=2\,\mathbb{P}\left(X_{n}\geq\left\lfloor\frac{n}{2}\right\rfloor+1\right)=2\cdot\frac{1}{2}=1.$$

If \(n\) is even, hence \(n=2m\) for some \(m\in\mathbb{N}\), we get

$$\sum_{k=m+1}^{2m}\binom{2m}{k}=2^{2m-1}-\frac{1}{2}\binom{2m}{m},$$

and therefore, by Stirling’s approximation (18), it follows that

$$B(n,u,d)=2\,\mathbb{P}\left(X_{n}\geq\left\lfloor\frac{n}{2}\right\rfloor+1\right)\to 2\cdot\frac{1}{2}=1\quad\text{as }n\to\infty.$$

In this case, the convergence is of the order \(1/\sqrt{n}\).\(\square\)
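The three limits of Lemma 2 are easy to observe numerically. A sketch, assuming the closed form \(\widetilde{k}_{0}(n,u,d)=\lfloor n\ln(1/d)/\ln(u/d)\rfloor\) of the threshold:

```python
from math import comb, log, floor

def B(n, u, d):
    # B(n,u,d) = 2 * P(X_n >= k0 + 1) for X_n ~ Bin(n, 1/2);
    # assumes the closed form k0 = floor(n*ln(1/d)/ln(u/d)) of the threshold.
    k0 = floor(n * log(1 / d) / log(u / d))
    return 2 * sum(comb(n, k) for k in range(k0 + 1, n + 1)) * 0.5 ** n

n = 501                   # odd, so the critical case is exact
b_sub = B(n, 1.5, 0.6)    # ud = 0.9 < 1: limit 0
b_crit = B(n, 2.0, 0.5)   # ud = 1: equals 1 exactly for odd n
b_super = B(n, 1.5, 0.8)  # ud = 1.2 > 1: limit 2
```

For odd \(n\) the critical case reproduces the exact value 1 derived in the proof.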

4.4 Unbounded prize

As we already mentioned in Sect. 3, the a priori bound on the prize in the event of a win contributes to the disadvantages of the participants in Elsberg’s game of chance. We will now analyse a modified gambling rule, which depends on the final score and does not involve an a priori bound.

We will now assume that the payout at the end of the game is always given by \(a\) times the final score. In analogy to the representation of the expected net profit in (9), we can derive the corresponding representation under this updated payout rule, which leads to the much simpler expression

$$a\cdot\left(\frac{1}{2}\right)^{n}\sum_{k=0}^{n}\binom{n}{k}\left(u^{k}\cdot d^{n-k}-1\right)=a\cdot\left(\left(\frac{u+d}{2}\right)^{n}-1\right).$$
(20)

Clearly, the expected net profit is zero if and only if \(u+d=2\). Thus, under the current payout rule in the event of a win, the tuples \((d,u)\) which lead to a fair game are characterised exactly by the condition \(u+d=2\).

In the subsection below, we will analyse a modified version of Elsberg’s game of chance based on a coin which is not necessarily fair. Before pursuing this topic, we describe the influence of a biased coin in the current scenario of a payout which does not involve an a priori bound on the prize in the event of a win. If \(p\in[0,1]\) describes the probability of the event “heads” and \(q=1-p\) the probability that the coin shows “tails”, the expected net profit in the underlying situation is then given by \(a((pu+qd)^{n}-1)\). Hence, under these assumptions the game is fair if and only if \(pu+qd=1\).
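The identity behind (20), as well as its biased-coin analogue \(\sum_{k}\binom{n}{k}p^{k}q^{n-k}(u^{k}d^{n-k}-1)=(pu+qd)^{n}-1\), follows from the binomial theorem and can be confirmed numerically; a minimal sketch:

```python
from math import comb

def lhs(n, u, d, p):
    # Expected net profit (stake a = 1) with unbounded payout:
    # sum_k C(n,k) p^k q^(n-k) (u^k d^(n-k) - 1)
    q = 1 - p
    return sum(comb(n, k) * p ** k * q ** (n - k) * (u ** k * d ** (n - k) - 1)
               for k in range(n + 1))

def rhs(n, u, d, p):
    # Closed form (pu + qd)^n - 1; for p = 1/2 this is ((u+d)/2)^n - 1 as in (20).
    return (p * u + (1 - p) * d) ** n - 1

check = [abs(lhs(n, 1.5, 0.6, p) - rhs(n, 1.5, 0.6, p))
         for n in (10, 50) for p in (0.5, 0.7)]

# Fairness condition pu + qd = 1, solved for p as p = (1-d)/(u-d).
p_fair = (1 - 0.6) / (1.5 - 0.6)
fair_value = rhs(100, 1.5, 0.6, p_fair)
```

With \(u=1.5\), \(d=0.6\) the fair probability is \(p=(1-d)/(u-d)=4/9\), at which the expected net profit vanishes for every \(n\).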

4.5 Fake coins

In this subsection, we analyse the influence of the probability \(p\) that the coin shows “heads”. In “GREED”, the gambling is executed using a fair coin (\(p=\frac{1}{2}\)). Instead of a fair coin, we could use a bent (biased) coin or some completely different Bernoulli experiment (for example, by tossing a drawing pin or a dice and modifying the rule in step 3’ appropriately). For the sake of simplicity, we will continue to use the toss of a coin.

Intuitively, increasing the probability \(p\) that the coin shows “heads” should increase the participants’ chances of winning and hence their expected net profit. Fig. 6 supports this conjecture. The figure illustrates the expected net profit after \(n=100\) rounds using an initial stake of \(a=100\) in terms of the probability \(p\). The factors \(u\) and \(d\) are chosen according to the gamble described in “GREED”.

As in the analysis of the influence of the up and down factors \(u\) and \(d\), we want to choose the probability \(p\) so that a fair game of chance is obtained. Fig. 6 suggests that it is possible (at least numerically) to determine such a probability \(p\). The net profit for \(p=\frac{1}{2}\) and the choice of \(p\) which results in a fair game are both marked in Fig. 6.

Fig. 6
figure 6

Illustration of the expected net profit after \(n=100\) coin tosses in terms of the probability \(p\) that the coin shows “heads”, using the game parameters \(a=100\), \(u=1.5\) and \(d=0.6\)

In order to formalise the previously described modified version of Elsberg’s game of chance, we consider a sequence of independent Bernoulli distributed random variables \(X^{\prime}_{i}\sim\text{Bin}(1,p)\), where \(\{X^{\prime}_{i}=1\}\) and \(\{X^{\prime}_{i}=0\}\) represent the events that the \(i\)-th coin toss shows “heads” and “tails”, respectively. In order to simplify our calculations, we define the complementary probability \(q=1-p\), which denotes the probability of the event “tails”. Then, the random variable \(X_{n}^{(p)}:=X^{\prime}_{1}+\cdots+X^{\prime}_{n}\sim\text{Bin}(n,p)\) counts how often the event “heads” occurs among the \(n\) coin tosses.

Since we are interested in the net profit at the end of the game, we introduce the random variable \(T(n,u,d,p)\) given by

$$\begin{aligned}\displaystyle T(n,u,d,p):=\sum_{k=0}^{\widetilde{k}_{0}(n,u,d)}\mathbf{1}\{X_{n}^{(p)}=k\}(u^{k}d^{n-k}-1)+\mathbf{1}\{X_{n}^{(p)}\geq\widetilde{k}_{0}(n,u,d)+1\}=-1+\sum_{k=0}^{\widetilde{k}_{0}(n,u,d)}\mathbf{1}\{X_{n}^{(p)}=k\}u^{k}d^{n-k}+2\cdot\mathbf{1}\{X_{n}^{(p)}\geq\widetilde{k}_{0}(n,u,d)+1\},\end{aligned}$$

which represents the net profit after \(n\) rounds using the initial stake \(a=1\). At this point, we recall the explanation at the beginning of Sect. 4.3 according to which the expected net profit is proportional to the stake \(a\). Hence, it suffices to consider the case \(a=1\).

By \(\mathbf{1}\{X_{n}^{(p)}=k\}\) we denote the indicator function with respect to the event \(\{X_{n}^{(p)}=k\}\). More precisely, the expression is given by

$$\mathbf{1}\{X_{n}^{(p)}=k\}=\begin{cases}1,&\text{if }X_{n}^{(p)}=k,\\ 0,&\text{if }X_{n}^{(p)}\neq k.\end{cases}$$

For the sake of notational simplicity, we will use the shorthand notation \(T_{n}=T(n,u,d,p)\).

Since we defined \(T_{n}\) as the net profit at the end of the game, the expected net profit is given by the expectation of the random variable \(T_{n}\). Hence, we get the following representation of the expected net profit \(G(n,u,d,p):=\mathbb{E}[T(n,u,d,p)]\), that is,

$$\begin{aligned}\displaystyle G(n,u,d,p)=-1+\sum_{k=0}^{\widetilde{k}_{0}(n,u,d)}\binom{n}{k}(pu)^{k}(qd)^{n-k}+2\cdot\sum_{k=\widetilde{k}_{0}(n,u,d)+1}^{n}\binom{n}{k}p^{k}q^{n-k}=:-1+A(n,u,d,p)+B(n,u,d,p).\end{aligned}$$

Here it should be noted that \(\widetilde{k}_{0}(n,u,d)\) is independent of \(p\).
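The decomposition \(G(n,u,d,p)=-1+A(n,u,d,p)+B(n,u,d,p)\) can be evaluated directly, and the fair probability marked in Fig. 6 can then be located numerically, for instance by bisection. A sketch, assuming the closed form \(\widetilde{k}_{0}(n,u,d)=\lfloor n\ln(1/d)/\ln(u/d)\rfloor\) and that \(p\mapsto G(100,1.5,0.6,p)\) changes sign on the bracketing interval:

```python
from math import comb, log, floor

def G(n, u, d, p):
    # G(n,u,d,p) = -1 + A(n,u,d,p) + B(n,u,d,p), with the threshold
    # assumed in the closed form k0 = floor(n*ln(1/d)/ln(u/d)).
    q = 1 - p
    k0 = floor(n * log(1 / d) / log(u / d))
    A = sum(comb(n, k) * (p * u) ** k * (q * d) ** (n - k)
            for k in range(k0 + 1))
    B = 2 * sum(comb(n, k) * p ** k * q ** (n - k)
                for k in range(k0 + 1, n + 1))
    return -1 + A + B

# Bisection for the fair probability: G(100, 1.5, 0.6, p) = 0 (cf. Fig. 6).
lo, hi = 0.3, 0.8          # G < 0 at lo, G > 0 at hi for these parameters
for _ in range(60):
    mid = (lo + hi) / 2
    if G(100, 1.5, 0.6, mid) < 0:
        lo = mid
    else:
        hi = mid
p_fair = (lo + hi) / 2
```

The resulting value lies close to the asymptotic critical probability \(\ln(1/d)/\ln(u/d)\approx 0.558\) from Theorem 3.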

Similarly to the results in Sect. 4.3, we determine the asymptotic behaviour of the expected net profit as \(n\rightarrow\infty\). Again, for the proof we consider two auxiliary results concerning the asymptotic behaviour of the quantities \(A(n,u,d,p)\) and \(B(n,u,d,p)\) as \(n\to\infty\).

We start by providing two analytic inequalities which will be useful in the proof of Lemma 5.

Lemma 3

If \(d\in(0,1]\) and \(x\in[0,d]\), then

$$(1-x)^{1-x}(d-x)^{x}\geq 1-\frac{x}{d}.$$

The inequality is strict if \(d\in(0,1)\) and \(x\in(0,d)\).

Proof

If \(d=1\), the assertion of the lemma clearly holds, since the expressions on the left- and right-hand side are equal. Now let \(d\in(0,1)\) be arbitrary, but fixed. We introduce the auxiliary function

$$f(x):=(1-x)\ln(1-x)+x\ln(d-x)-\ln\left(1-\frac{x}{d}\right),\quad x\in[0,d).$$

Then, \(f(0)=0\) and

$$f^{\prime}(x)=\frac{1-x}{d-x}-1-\ln\left(\frac{1-x}{d-x}\right)> 0\quad\text{for }x\in[0,d),$$

since \(h-1-\ln(h)> 0\) for \(h> 1\) and \(\frac{1-x}{d-x}> 1\) due to \(d<1\). Using the strict monotonicity of \(f\), it follows that \(f(x)> 0\) for \(x\in(0,d)\). Now the assertions of the lemma can be easily deduced.

\(\square\)

Lemma 4

If \(x,p\in(0,1)\), then

$$\left(\frac{x}{p}\right)^{x}\left(\frac{1-x}{1-p}\right)^{1-x}\geq 1.$$

Equality holds if and only if \(x=p\).

Proof

Let \(p\in(0,1)\) be arbitrary, but fixed. Again, we introduce an auxiliary function, given by

$$g(x):=x\ln\left(\frac{x}{p}\right)+(1-x)\ln\left(\frac{1-x}{1-p}\right),\quad x\in(0,1).$$

Then \(g\) satisfies \(g(0+)=-\ln(1-p)> 0\), \(g(1-)=-\ln(p)> 0\) and

$$g^{\prime}(x)=\ln\left(\frac{x}{p}\frac{1-p}{1-x}\right).$$

Moreover, \(g^{\prime}(0+)=-\infty\), \(g^{\prime}(1-)=+\infty\) and \(g^{\prime}(x)=0\) are satisfied if and only if \(x=p\). Finally, we have \(g(p)=0\). Now we can easily deduce the assertions of the lemma.\(\square\)
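Both analytic inequalities can be sanity-checked on a grid of parameter values; a minimal numerical sketch:

```python
# Grid check of Lemma 3: (1-x)^(1-x) (d-x)^x >= 1 - x/d  for d in (0,1], x in [0,d],
# and of Lemma 4: (x/p)^x ((1-x)/(1-p))^(1-x) >= 1       for x, p in (0,1).
steps = [i / 100 for i in range(1, 100)]

lemma3_ok = all(
    (1 - x) ** (1 - x) * (d - x) ** x >= 1 - x / d - 1e-12
    for d in (0.2, 0.5, 0.9, 1.0)
    for x in [t * d for t in steps]
)
lemma4_ok = all(
    (x / p) ** x * ((1 - x) / (1 - p)) ** (1 - x) >= 1 - 1e-12
    for p in (0.1, 0.5, 0.9)
    for x in steps
)
```

The small tolerance only absorbs floating-point rounding; it plays no mathematical role.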

The preceding auxiliary results can be used to prove the following generalisation of Lemma 1.

Lemma 5

If \(0<d\leq 1\leq u\) and \(u\neq d\), then

$$\lim_{n\rightarrow\infty}A(n,u,d,p)=0.$$

Proof

The overall structure of the proof is similar to the proof of Lemma 1.

Case 1: \(pu+qd<1\). Then it follows that

$$A(n,u,d,p)=\left(pu+qd\right)^{n}\cdot\sum_{k=0}^{\widetilde{k}_{0}(n,u,d)}\binom{n}{k}\left(\frac{pu}{pu+qd}\right)^{k}\left(\frac{qd}{pu+qd}\right)^{n-k}\leq\left(pu+qd\right)^{n}\to 0$$

as \(n\to\infty\).

Case 2: \(pu+qd=1\). We have \(d<1\): if \(d=1\), then \(pu+qd=1\) would force \(u=1=d\), contradicting \(u\neq d\). An application of Lemma 3 with \(x=qd\in(0,d)\) shows that

$$(1-qd)^{1-qd}(pd)^{qd}> p$$

and therefore

$$u^{pu}d^{qd}> 1.$$

The previous inequality can be rewritten as

$$\frac{\ln\left(\frac{1}{d}\right)}{\ln\left(\frac{u}{d}\right)}<pu.$$

Hence, the asymptotic behaviour of \(\widetilde{k}_{0}\) is given by

$$\frac{1}{n}\widetilde{k}_{0}(n,u,d)\to\frac{\ln\left(\frac{1}{d}\right)}{\ln\left(\frac{u}{d}\right)}<pu\quad\text{for }n\to\infty.$$
(21)

Now let \(Y_{n}^{(p)}\sim\text{Bin}(n,pu)\). The law of large numbers implies that

$$\begin{aligned}\displaystyle A(n,u,d,p)=\sum_{k=0}^{\widetilde{k}_{0}(n,u,d)}\binom{n}{k}\left(pu\right)^{k}\left(qd\right)^{n-k}=\mathbb{P}\left(Y_{n}^{(p)}\leq\widetilde{k}_{0}(n,u,d)\right)=\mathbb{P}\left(\frac{1}{n}Y_{n}^{(p)}\leq\frac{\widetilde{k}_{0}(n,u,d)}{n}\right)\to 0\quad\text{as }n\to\infty,\end{aligned}$$

where we used (21) and \(\frac{1}{n}\mathbb{E}Y_{n}^{(p)}=pu\).

Analogously to the proof of Lemma 1, an alternative argument that is based on Hoeffding’s or Chernoff’s inequality yields an exponential rate of convergence.

Case 3: \(pu+qd> 1\) and \(u^{p}d^{q}\neq 1\). An application of Lemma 3 with \(x=qd\in(0,d)\) (and hence \(1-qd> 0\)) yields the inequality

$$(pu)^{1-qd}(pd)^{qd}> (1-qd)^{1-qd}(pd)^{qd}\geq p,$$

and therefore \(u^{1-qd}d^{qd}> 1\). Since \(u> 1\) (which follows from \(pu+qd> 1\)) and \(pu> 1-qd\), this implies

$$u^{pu}d^{qd}> 1.$$

The previous inequality is equivalent to the left inequality in

$$\frac{\ln\left(\frac{1}{d}\right)}{\ln\left(\frac{u}{d}\right)}<\frac{pu}{pu+qd}<1.$$

Finally, for sufficiently large \(n\) we get

$$1-\frac{\widetilde{k}_{0}(n,u,d)}{n}> \frac{qd}{pu+qd}\quad\text{and}\quad\frac{\widetilde{k}_{0}(n,u,d)}{n}<\frac{pu}{pu+qd}.$$
(22)

Now let \(\xi_{n}^{(p)}\sim\text{Bin}(n,\frac{pu}{pu+qd})\) and \(\zeta_{n}^{(p)}:=n-\xi_{n}^{(p)}\sim\text{Bin}(n,\frac{qd}{pu+qd})\). Similarly to the proof of Theorem 2, we can use (22) in the derivation of an upper bound on \(A(n,u,d,p)\), that is,

$$\begin{aligned}\displaystyle A(n,u,d,p)=\left({pu+qd}\right)^{n}\mathbb{P}\left(\xi_{n}^{(p)}\leq\widetilde{k}_{0}(n,u,d)\right)=\left({pu+qd}\right)^{n}\mathbb{P}\left(\zeta_{n}^{(p)}\geq n-\widetilde{k}_{0}(n,u,d)\right)=\left({pu+qd}\right)^{n}\mathbb{P}\left(\zeta_{n}^{(p)}\geq\left(1-\frac{\widetilde{k}_{0}(n,u,d)}{n}\right)n\right)\leq\left({pu+qd}\right)^{n}\left[\frac{\left(\frac{pu}{pu+qd}\right)^{\frac{\widetilde{k}_{0}(n,u,d)}{n}}\left(\frac{qd}{pu+qd}\right)^{1-\frac{\widetilde{k}_{0}(n,u,d)}{n}}}{\left(\frac{\widetilde{k}_{0}(n,u,d)}{n}\right)^{\frac{\widetilde{k}_{0}(n,u,d)}{n}}\left(1-\frac{\widetilde{k}_{0}(n,u,d)}{n}\right)^{1-\frac{\widetilde{k}_{0}(n,u,d)}{n}}}\right]^{n}=\omega(n,u,d,p)^{n},\end{aligned}$$

where

$$\omega(n,u,d,p):=\frac{u^{\frac{\widetilde{k}_{0}(n,u,d)}{n}}d^{1-\frac{\widetilde{k}_{0}(n,u,d)}{n}}}{\left(\frac{1}{p}\right)^{\frac{\widetilde{k}_{0}(n,u,d)}{n}}\left(\frac{1}{q}\right)^{1-\frac{\widetilde{k}_{0}(n,u,d)}{n}}\left(\frac{\widetilde{k}_{0}(n,u,d)}{n}\right)^{\frac{\widetilde{k}_{0}(n,u,d)}{n}}\left(1-\frac{\widetilde{k}_{0}(n,u,d)}{n}\right)^{1-\frac{\widetilde{k}_{0}(n,u,d)}{n}}}.$$

Observe that

$$\frac{1}{n}\widetilde{k}_{0}(n,u,d)\to\frac{\ln\left(\frac{1}{d}\right)}{\ln\left(\frac{u}{d}\right)}=:c\in(0,1).$$

The limit \(c\) satisfies \(c\neq p\) due to the assumption \(u^{p}d^{1-p}\neq 1\). From Lemma 4 we deduce that the denominator of \(\omega(n,u,d,p)\) converges to \(\rho(u,d,p)> 1\) as \(n\to\infty\).

Moreover, (17) remains true (see the proof of Lemma 1), since \(\widetilde{k}_{0}\) does not depend on \(p\).

We conclude that \(\omega(n,u,d,p)\to{\rho(u,d,p)}^{-1}\in(0,1)\) and therefore \(\omega(n,u,d,p)^{n}\to 0\) as \(n\to\infty\).

Case 4: \(u^{p}d^{q}=1\). Since \(d\neq u\), the weighted AM–GM inequality yields \(pu+qd> u^{p}d^{q}=1\). Moreover, the assumption \(u^{p}d^{q}=1\) implies \(\widetilde{k}_{0}(n,u,d)=\lfloor np\rfloor\), and therefore \(A(n,u,d,p)\) simplifies to

$$A(n,u,d,p)=\sum_{k=0}^{\lfloor np\rfloor}\binom{n}{k}(pu)^{k}(qd)^{n-k}.$$

For \(k\leq\lfloor np-q\rfloor\) we can easily deduce that \(a_{k}:=\binom{n}{k}p^{k}q^{n-k}\leq a_{k+1}\). In addition, the assumption \(u^{p}d^{q}=1\) implies that \(u^{k}d^{n-k}=d^{n-\frac{k}{p}}\). Hence,

$$\begin{aligned}A(n,u,d,p)\leq\binom{n}{\lfloor np\rfloor}p^{\lfloor np\rfloor}q^{n-\lfloor np\rfloor}\sum_{k=0}^{\lfloor np\rfloor}d^{n-\frac{k}{p}}\leq\binom{n}{\lfloor np\rfloor}p^{\lfloor np\rfloor}q^{n-\lfloor np\rfloor}\frac{1}{1-d}.\end{aligned}$$
(23)

Further, the right-hand side of (23) converges to zero since

$$\begin{aligned}\binom{n}{\lfloor np\rfloor}p^{\lfloor np\rfloor}q^{n-\lfloor np\rfloor}=\mathbb{P}\left(\lfloor np\rfloor-1<X_{n}^{(p)}\leq\lfloor np\rfloor\right)=\mathbb{P}\left(\frac{\lfloor np\rfloor-np-1}{\sqrt{npq}}<\frac{X_{n}^{(p)}-np}{\sqrt{npq}}\leq\frac{\lfloor np\rfloor-np}{\sqrt{npq}}\right)\to\Phi(0)-\Phi(0)=0\quad\text{for }n\to\infty,\end{aligned}$$
(24)

where \(\Phi\) is the cumulative distribution function of the standard normal distribution.

Combining (23) and (24), finally we obtain that \(A(n,u,d,p)\) converges to zero as \(n\to\infty\). The Berry–Esseen theorem further shows that the convergence is of the order \(1/\sqrt{n}\).\(\square\)

As explained in the remark following the proof of Lemma 1, it is possible to include the second case of the previous proof in the third case. However, we decided to treat these cases separately for the same reasons as mentioned before.

Instead of using the central limit theorem and the Berry–Esseen theorem at the end of the fourth case, we could use that

$$\binom{n}{\lfloor np\rfloor}p^{\lfloor np\rfloor}q^{n-\lfloor np\rfloor}\sim\frac{1}{\sqrt{2\pi pq}}\cdot\frac{1}{\sqrt{n}}$$

as \(n\to\infty\), to prove (24). The former asymptotic equivalence arises as a local central limit theorem in the proof of the De Moivre–Laplace theorem (see, e.g., [33, p. 55]). Alternatively, one could use Stirling’s approximation (while using the binary entropy function) to provide a more direct argument.
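This local limit relation is also easy to verify numerically; evaluating the binomial probability in logarithms (via lgamma) avoids overflow for large \(n\). A sketch:

```python
from math import lgamma, log, exp, sqrt, pi, floor

def central_pmf_ratio(n, p):
    # Ratio of C(n, floor(np)) p^floor(np) q^(n - floor(np)) to 1/sqrt(2*pi*p*q*n);
    # by the local central limit theorem this ratio tends to 1 as n grows.
    q = 1 - p
    k = floor(n * p)
    log_pmf = (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
               + k * log(p) + (n - k) * log(q))
    return exp(log_pmf) * sqrt(2 * pi * p * q * n)

r_small = central_pmf_ratio(100, 0.4)
r_large = central_pmf_ratio(100000, 0.4)
```

The deviation of the ratio from 1 shrinks roughly like \(1/n\), in line with the quality of Stirling’s approximation.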

In order to determine the asymptotic behaviour of the expected net profit, we need to identify the limit of the second summand, denoted by \(B(n,u,d,p)\), as \(n\to\infty\).

Lemma 6

If \(0<d\leq 1\leq u\) and \(u\neq d\), then

$$\lim_{n\rightarrow\infty}B(n,u,d,p)=\begin{cases}0,&\text{if }u^{p}d^{q}<1,\\ 1,&\text{if }u^{p}d^{q}=1,\\ 2,&\text{if }u^{p}d^{q}> 1.\end{cases}$$

Proof

Case 1: \(u^{p}d^{q}<1\). Then

$$\frac{1}{n}\widetilde{k}_{0}(n,u,d)\to\frac{\ln\left(\frac{1}{d}\right)}{\ln\left(\frac{u}{d}\right)}> p\quad\text{as }n\to\infty,$$
(25)

where the inequality giving a lower bound on the limit of \(n^{-1}\widetilde{k}_{0}\) can be rewritten as \(u^{p}d^{q}<1\). Since \(X_{n}^{(p)}\sim\text{Bin}(n,p)\), the law of large numbers yields

$$B(n,u,d,p)=2\,\mathbb{P}\left(X_{n}^{(p)}\geq\widetilde{k}_{0}(n,u,d)+1\right)\leq 2\,\mathbb{P}\left(\frac{1}{n}X_{n}^{(p)}\geq\frac{1}{n}\widetilde{k}_{0}(n,u,d)\right)\to 0\quad\text{as }n\to\infty.$$

As in the previous proofs, we obtain an exponential rate of convergence using Chernoff’s inequality in combination with \(\widetilde{k}_{0}(n,u,d)-np> 0\) for sufficiently large \(n\) and (25), that is,

$$B(n,u,d,p)\leq 2\,\mathbb{P}\left(X_{n}^{(p)}-np\geq\widetilde{k}_{0}(n,u,d)-np\right)\leq 4\,\exp\left(-2\left(\frac{\widetilde{k}_{0}(n,u,d)}{n}-p\right)^{2}n\right)\to 0$$

as \(n\to\infty\).

Case 2: \(u^{p}d^{q}> 1\). Then it follows that

$$\frac{1}{n}\widetilde{k}_{0}(n,u,d)\to\frac{\ln\left(\frac{1}{d}\right)}{\ln\left(\frac{u}{d}\right)}<p.$$

Similarly to the estimate in Case 1, the law of large numbers yields

$$B(n,u,d,p)=2\,\mathbb{P}(X_{n}^{(p)}\geq\widetilde{k}_{0}(n,u,d)+1)=2\,\mathbb{P}\left(\frac{1}{n}X_{n}^{(p)}\geq\frac{1}{n}(\widetilde{k}_{0}(n,u,d)+1)\right)\to 2\cdot 1=2$$

as \(n\to\infty\).

Furthermore, by an application of Chernoff’s inequality we obtain an exponential lower bound on \(B(n,u,d,p)\) given by

$$\begin{aligned}\displaystyle B(n,u,d,p)=2\,\mathbb{P}\left(X_{n}^{(p)}\geq\widetilde{k}_{0}(n,u,d)+1\right)=2\left(1-\mathbb{P}\left(X_{n}^{(p)}\leq\widetilde{k}_{0}(n,u,d)\right)\right)\geq 2\left(1-2\cdot\exp\left(-2n\left(\frac{\widetilde{k}_{0}(n,u,d)}{n}-p\right)^{2}\right)\right)\rightarrow 2\quad\text{as }n\to\infty.\end{aligned}$$

In combination with the upper bound, \(B(n,u,d,p)\leq 2\), we obtain that \(B(n,u,d,p)\) converges to 2 as \(n\to\infty\) at an exponential rate.

Case 3: \(u^{p}d^{q}=1\). In this critical case, we will argue more effectively by using the central limit theorem. The assumption \(u^{p}d^{1-p}=1\) implies that \(\widetilde{k}_{0}(n,u,d)=\lfloor np\rfloor\) and therefore

$$\begin{aligned}\displaystyle B(n,u,d,p)=2\,\sum_{k=\lfloor np\rfloor+1}^{n}\binom{n}{k}p^{k}q^{n-k}=2\,\mathbb{P}\left(X_{n}^{(p)}\geq\lfloor np\rfloor+1\right)=2\left(1-\mathbb{P}\left(X_{n}^{(p)}\leq\lfloor np\rfloor\right)\right)=2\left(1-\mathbb{P}\left(\frac{X_{n}^{(p)}-np}{\sqrt{npq}}\leq\frac{\lfloor np\rfloor-np}{\sqrt{npq}}\right)\right)\to 2(1-\Phi(0))=1\end{aligned}$$

as \(n\to\infty\). Again, we can deduce the rate of convergence, which is given by \(1/\sqrt{n}\), using the Berry–Esseen theorem.\(\square\)

Finally, the following generalisation of Theorem 2 is implied by Lemmas 5 and 6.

Theorem 3

Let \(p\in(0,1)\), \(0<d\leq 1\leq u\) and \(u\neq d\). Then the expected net profit \(G(n,u,d,p)=\mathbb{E}[T(n,u,d,p)]\) after \(n\) rounds satisfies

$$\lim_{n\rightarrow\infty}G(n,u,d,p)=\begin{cases}1,&\text{if }u^{p}d^{q}> 1,\\ 0,&\text{if }u^{p}d^{q}=1,\\ -1,&\text{if }u^{p}d^{q}<1.\end{cases}$$

4.6 Analysis of the variance

In the final part of our analysis, we study the variance of the random variable \(T_{n}\). Since \(T_{n}\) was defined as a sum of random variables (and a constant, which does not affect the variance), the variance of \(T_{n}\) is composed of the variances of the individual random variables and the covariances of any two distinct random variables. If we further use that \(\{X_{n}^{(p)}=k\}\cap\{X_{n}^{(p)}=\ell\}=\emptyset\) for \(k\neq\ell\), then we obtain

$$\begin{aligned}\displaystyle\mathbb{V}(T_{n})&\displaystyle=\sum_{k=0}^{\widetilde{k}_{0}(n,u,d)}\binom{n}{k}(pu^{2})^{k}(qd^{2})^{n-k}+4\,\mathbb{P}(X_{n}^{(p)}\geq\widetilde{k}_{0}(n,u,d)+1)\mathbb{P}(X_{n}^{(p)}\leq\widetilde{k}_{0}(n,u,d))\qquad-\sum_{k=0}^{\widetilde{k}_{0}(n,u,d)}\binom{n}{k}^{2}(pu)^{2k}(qd)^{2(n-k)}\qquad-4\sum_{k=0}^{\widetilde{k}_{0}(n,u,d)}\binom{n}{k}(pu)^{k}(qd)^{n-k}\cdot\mathbb{P}(X_{n}^{(p)}\geq\widetilde{k}_{0}(n,u,d)+1)\qquad-2\sum_{0\leq k<\ell\leq\widetilde{k}_{0}(n,u,d)}\binom{n}{k}\binom{n}{\ell}(pu)^{k+\ell}(qd)^{2n-k-\ell},\end{aligned}$$

where

$$\mathbb{P}(X_{n}^{(p)}\geq\widetilde{k}_{0}(n,u,d)+1)=\sum_{k=\widetilde{k}_{0}(n,u,d)+1}^{n}\binom{n}{k}p^{k}q^{n-k}.$$

Using the previous representation of \(\mathbb{V}(T_{n})\), we explored how the parameters \(n,u,d,p\) of the game affect the variance of \(T_{n}\). The results can be found in Figs. 7–10. Note that we examined the entire variance as well as the behaviour of the five summands, which we denoted by \(v_{i}(n,u,d,p)\), \(i=1,\ldots,5\).

The results of our numerical analysis motivated the following theorem concerning the asymptotic behaviour of the variance of \(T_{n}\) as well as of the random variable \(T_{n}\) itself (with respect to convergence in distribution).

Theorem 4

Let \(p\in(0,1)\), \(0<d\leq 1\leq u\) and \(u\neq d\). Then the asymptotic behaviour of the variance of the net profit \(T_{n}=T(n,u,d,p)\) after \(n\) rounds is given by

$$\lim_{n\rightarrow\infty}\mathbb{V}(T_{n})=\begin{cases}0,&\text{if }u^{p}d^{q}\neq 1,\\ 1,&\text{if }u^{p}d^{q}=1.\end{cases}$$

Moreover, the limiting distribution of the random variable \(T_{n}\) is characterised by

$$T_{n}\to\begin{cases}-1,&\text{if }u^{p}d^{q}<1,\\ Z,&\text{if }u^{p}d^{q}=1,\\ 1,&\text{if }u^{p}d^{q}> 1,\end{cases}$$

where the limit is to be understood in the sense of convergence in distribution. The distribution of the random variable \(Z\) is the two-point distribution given by \(\mathbb{P}(Z=1)=\frac{1}{2}=\mathbb{P}(Z=-1)\).

Proof

We define

$$C_{n}:=C(n,u,d,p):=\sum_{k=0}^{\widetilde{k}_{0}(n,u,d)}\mathbf{1}\{X_{n}^{(p)}=k\}u^{k}d^{n-k}$$

and

$$D_{n}:=D(n,u,d,p):=2\cdot\mathbf{1}\{X_{n}^{(p)}\geq\widetilde{k}_{0}(n,u,d)+1\}.$$

Then we can write \(T_{n}\) as

$$T_{n}=-1+C_{n}+D_{n},$$

and therefore

$$\mathbb{V}(T_{n})=\mathbb{V}(C_{n})+\mathbb{V}(D_{n})+2\cdot\text{Cov}(C_{n},D_{n}).$$

Using the Cauchy–Schwarz inequality, we can estimate the covariance of \(C_{n}\) and \(D_{n}\) using the associated variances and obtain

$$\text{Cov}(C_{n},D_{n})^{2}\leq\mathbb{V}(C_{n})\cdot\mathbb{V}(D_{n}).$$

To determine the asymptotic behaviour of \(\mathbb{V}(T_{n})\), we can focus our attention on the limits of \(\mathbb{V}(C_{n})\) and \(\mathbb{V}(D_{n})\) as \(n\to\infty\) (as we will see).

The variance of \(C_{n}\) is explicitly given by

$$\begin{aligned}\displaystyle\mathbb{V}(C_{n})=\sum_{k=0}^{\widetilde{k}_{0}(n,u,d)}\binom{n}{k}p^{k}q^{n-k}u^{2k}d^{2(n-k)}-\sum_{k=0}^{\widetilde{k}_{0}(n,u,d)}\binom{n}{k}^{2}(pu)^{2k}(qd)^{2(n-k)}\qquad-2\sum_{0\leq k<\ell\leq\widetilde{k}_{0}(n,u,d)}\binom{n}{k}\binom{n}{\ell}(pu)^{k+\ell}(qd)^{2n-k-\ell}\geq 0.\end{aligned}$$

In order to show that \(\mathbb{V}(C_{n})\to 0\) as \(n\to\infty\), it suffices to prove that the first sum in the previous representation of \(\mathbb{V}(C_{n})\) converges to zero as \(n\to\infty\). Here we can apply the result from Lemma 5, since clearly \(\widetilde{k}_{0}(n,u^{2},d^{2})=\widetilde{k}_{0}(n,u,d)\) and therefore

$$\sum_{k=0}^{\widetilde{k}_{0}(n,u,d)}\binom{n}{k}p^{k}q^{n-k}u^{2k}d^{2(n-k)}=A(n,u^{2},d^{2},p)\to 0\quad\text{for }n\to\infty.$$

The application of Lemma 5 is permissible since \(0<d^{2}\leq 1\leq u^{2}\) and \(d^{2}\neq u^{2}\) are satisfied under the assumptions of the theorem.

Furthermore, the variance of \(D_{n}\) is given by

$$\mathbb{V}(D_{n})=4\,\mathbb{P}\left(X_{n}^{(p)}\geq\widetilde{k}_{0}(n,u,d)+1\right)\mathbb{P}\left(X_{n}^{(p)}\leq\widetilde{k}_{0}(n,u,d)\right).$$

To examine the asymptotic behaviour of the variance of \(D_{n}\), we distinguish three cases.

(a) If \(u^{p}d^{q}<1\), it follows that \(\mathbb{P}\left(X_{n}^{(p)}\geq\widetilde{k}_{0}(n,u,d)+1\right)\to 0\) as \(n\to\infty\), by the same arguments as in the first case of the proof of Lemma 6.

(b) If \(u^{p}d^{q}> 1\), we get \(\mathbb{P}\left(X_{n}^{(p)}\geq\widetilde{k}_{0}(n,u,d)+1\right)\to 1\) as \(n\to\infty\). This assertion can be shown analogously to the second case in the proof of Lemma 6. Therefore, it follows that \(\mathbb{P}\left(X_{n}^{(p)}\leq\widetilde{k}_{0}(n,u,d)\right)\to 0\) as \(n\to\infty\).

The results in (a) and (b) imply that if \(u^{p}d^{q}\neq 1\), then \(\mathbb{V}(D_{n})\to 0\) as \(n\to\infty\) with an exponential rate of convergence.

(c) Finally, we need to treat the case where \(u^{p}d^{q}=1\). Then it follows that \(\mathbb{P}\left(X_{n}^{(p)}\geq\widetilde{k}_{0}(n,u,d)+1\right)\to\frac{1}{2}\) and \(\mathbb{P}\left(X_{n}^{(p)}\leq\widetilde{k}_{0}(n,u,d)\right)\to\frac{1}{2}\) as \(n\to\infty\). These assertions can be shown similarly to the third case in the proof of Lemma 6. Hence, it follows that \(\mathbb{V}(D_{n})\to 1\) as \(n\to\infty\). The convergence is of order \(1/\sqrt{n}\).

Now we want to find the limit in distribution of the sequence of random variables \((T_{n})_{n\in\mathbb{N}}\).

If \(u^{p}d^{q}\neq 1\), we can immediately deduce the limit in distribution using Theorem 3 and \(\mathbb{V}(T_{n})\to 0\) as \(n\to\infty\). In this case, we even obtain the formally stronger result of convergence in probability.

Now we are left with the case \(u^{p}d^{q}=1\).

First, we recall that \(\mathbb{E}[C_{n}]=A(n,u,d,p)\rightarrow 0\) as \(n\rightarrow\infty\) (according to Lemma 5) and \(\mathbb{V}(C_{n})\rightarrow 0\) as \(n\rightarrow\infty\). This directly implies that \(C_{n}\) converges in probability to zero as \(n\to\infty\). Due to Slutsky’s theorem [18, p. 209] (or [16, Chap. 5, Thm. 11.4], [25, Thm. 13.18]) it remains to show that \(D_{n}-1\) converges in distribution to \(Z\). For this purpose we use the characteristic functions \(\varphi_{D_{n}-1}\) of \(D_{n}-1\) and \(\varphi_{Z}\) of \(Z\). For \(t\in\mathbb{R}\) it follows that

$$\begin{aligned}\displaystyle\varphi_{D_{n}-1}(t)=\mathbb{E}\left[\mathrm{e}^{it(D_{n}-1)}\right]=\mathbb{P}\left(X_{n}^{(p)}\geq\widetilde{k}_{0}(n,u,d)+1\right)\mathrm{e}^{it}+\mathbb{P}\left(X_{n}^{(p)}\leq\widetilde{k}_{0}(n,u,d)\right)\mathrm{e}^{-it}\longrightarrow\frac{1}{2}\mathrm{e}^{it}+\frac{1}{2}\mathrm{e}^{-it}=\varphi_{Z}(t)\end{aligned}$$

as \(n\rightarrow\infty\), where again we used the asymptotic behaviour of the probabilities \(\mathbb{P}\left(X_{n}^{(p)}\geq\widetilde{k}_{0}(n,u,d)+1\right)\) and \(\mathbb{P}\left(X_{n}^{(p)}\leq\widetilde{k}_{0}(n,u,d)\right)\) as \(n\rightarrow\infty\), derived in Part (c). Finally, the Lévy–Cramér continuity theorem [18, p. 21] (or [16, Chap. 5, Thm. 9.1], [25, Thm. 15.24]) implies the assertion.\(\square\)
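The limits in Theorems 3 and 4 can be checked numerically by computing \(\mathbb{E}[T_{n}]\) and \(\mathbb{V}(T_{n})\) directly from the distribution of \(X_{n}^{(p)}\): on \(\{X_{n}^{(p)}=k\}\) the net profit equals \(u^{k}d^{n-k}-1\) for \(k\leq\widetilde{k}_{0}(n,u,d)\) and 1 otherwise. A minimal sketch, assuming the closed form \(\widetilde{k}_{0}(n,u,d)=\lfloor n\ln(1/d)/\ln(u/d)\rfloor\) of the threshold:

```python
from math import comb, log, floor

def mean_var_T(n, u, d, p):
    # E[T_n] and V(T_n) by direct enumeration over the binomial distribution;
    # assumes the threshold k0 = floor(n*ln(1/d)/ln(u/d)).
    q = 1 - p
    k0 = floor(n * log(1 / d) / log(u / d))
    m1 = m2 = 0.0
    for k in range(n + 1):
        w = comb(n, k) * p ** k * q ** (n - k)
        t = u ** k * d ** (n - k) - 1 if k <= k0 else 1.0
        m1 += w * t
        m2 += w * t * t
    return m1, m2 - m1 * m1

e_sub, v_sub = mean_var_T(1000, 1.5, 0.6, 0.5)      # u^p d^q < 1: T_n -> -1, V -> 0
e_crit, v_crit = mean_var_T(1000, 2.0, 0.5, 0.5)    # u^p d^q = 1: E -> 0, V -> 1
e_super, v_super = mean_var_T(1000, 1.5, 0.6, 0.7)  # u^p d^q > 1: T_n -> 1, V -> 0
```

At \(n=1000\) the critical case already shows a variance close to 1, while both off-critical cases have essentially degenerated.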

Illustrations:

Figs. 7–10 display the behaviour of \(\mathbb{V}(T_{n})\) in terms of the different underlying parameters \(n,u,d\) and \(p\) of the game.

Fig. 7 shows the asymptotic behaviour of the variance of \(T_{n}\) as \(n\) tends to infinity in the initial situation described in “GREED” (\(u=1.5,\,d=0.6,\,p=0.5\)). Fig. 7a illustrates the entire variance while Fig. 7b shows the behaviour of the five summands introduced at the beginning of this subsection. As we would expect according to Theorem 4, the convergence of the variance towards zero is clearly visible.

The remaining Figs. 8–10 display the variance of \(T_{n}\) for \(n=200\) in terms of the parameters \(p\), \(u\) and \(d\). Again, Figs. 8a–10a show the entire variance while Figs. 8b–10b illustrate the five summands \(v_{1},\ldots,v_{5}\) separately.

Figs. 8b–10b suggest that the quantity \(v_{2}\) has the strongest influence on the variance. In contrast, the terms \(v_{1}\), \(v_{3}\) and \(v_{5}\) only take values close to zero. These observations are consistent with the proof of Theorem 4, where we showed that \(v_{1},\,v_{3},\,v_{4}\) and \(v_{5}\) converge to zero for any admissible choice of parameters while \(v_{2}\) converges to 1 in the special case \(u^{p}d^{q}=1\).

Moreover, we want to point out that the threshold phenomenon described in Theorem 4 is already clearly visible after \(n=200\) rounds.

Fig. 7
figure 7

Asymptotic behaviour of the variance of \(T_{n}\) for \(u=1.5\), \(d=0.6\), \(p=0.5\) (as in [10]) as \(n\rightarrow\infty\) (for clarity, only every fifth value was shown). a is the variance itself, b the individual summands \(v_{1},\ldots,v_{5}\) that contribute to the variance

Fig. 8
figure 8

Illustration of the variance of \(T_{n}\) as a function in \(p\), where \(u=1.5,\,d=0.6,\,n=200\). a shows the entire variance while b displays the summands \(v_{1},\ldots,v_{5}\) separately

Fig. 9
figure 9

Illustration of the variance of \(T_{n}\) as a function in \(u\), where \(d=0.6,\,n=200,\,p=0.5\). a shows the entire variance while b displays the summands \(v_{1},\ldots,v_{5}\) separately

Fig. 10
figure 10

Illustration of the variance of \(T_{n}\) as a function in \(d\), where \(u=1.5,\,n=200,\,p=0.5\). a shows the entire variance while b displays the summands \(v_{1},\ldots,v_{5}\) separately

5 Some simulations and a generalised birthday problem

The following section includes some simulations of the development of the score over time as well as the simulation of the final net profit in 100 repetitions of Elsberg’s gamble. As we are working in the initial situation described by Elsberg, the underlying scenario is characterised by the game parameters \(a=100\), \(n=100\), \(u=1.5\), \(d=0.6\) and \(p=0.5\).

The simulations were generated using Python. Binomially distributed pseudo-random numbers were produced with the numpy.random.binomial(n, p) function from the NumPy library.
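A minimal sketch of how such a trajectory can be produced is given below. The parameter names \(a,n,u,d,p\) match the scenario above; the fixed random seed and the reconstruction of the final score from the number of heads are our own illustrative choices, not the authors' original script.

```python
import numpy as np

# Elsberg's initial scenario: stake a, n rounds, factors u (heads) and d (tails)
a, n, u, d, p = 100, 100, 1.5, 0.6, 0.5

rng = np.random.default_rng(42)            # fixed seed, chosen for reproducibility

heads = rng.binomial(1, p, size=n)         # n coin flips (1 = "heads")
scores = a * np.cumprod(np.where(heads == 1, u, d))   # score after each round

# The final score depends only on the total number k of heads:
k = int(heads.sum())
print(np.isclose(scores[-1], a * u**k * d**(n - k)))  # True
```

The closing check illustrates the observation used below: whatever order the heads and tails arrive in, the final score is \(a\,u^{k}d^{n-k}\).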

Fig. 11 shows eight simulations of the score over time, that is, as a function of successive rounds. It should not come as a surprise that at least two of the eight simulations exhibit the same final score: the final score depends only on the number \(k\) of the \(n\) rounds in which the coin shows “heads”. For a fair coin, the probability that “heads” occurs exactly \(k\) times is \(p_{k}=\binom{n}{k}2^{-n}\) for \(k\in\{0,1,\ldots,n\}\). If the eight simulations are performed independently, the probability \(P_{8}\) that all eight final scores are different is \(P_{8}=8!\sum_{|I|=8}\prod_{i\in I}p_{i}\), where the summation extends over all \(8\)-element subsets \(I\) of \(\{0,1,\ldots,n\}\). Maclaurin’s inequality [32, (5)] implies that

$$\left[\frac{1}{\binom{n+1}{8}}\sum_{|I|=8}\prod_{i\in I}p_{i}\right]^{\frac{1}{8}}\leq\frac{1}{n+1}\sum_{i=0}^{n}p_{i}=\frac{1}{n+1},$$

hence \(P_{8}\leq 8!\binom{n+1}{8}\left(\frac{1}{n+1}\right)^{8}\) (see [20, 24] for alternative arguments), where equality holds for positive probabilities \(p_{i}\) if and only if \(p_{0}=\cdots=p_{n}={1}/{(n+1)}\). For \(n=100\),

$$1-P_{8}\geq 1-\frac{100\cdots 94}{101^{7}}\geq 0.24$$

thus is a lower bound for the probability that at least two of the eight independent simulations attain the same final score. In other words, we have estimated the probability of at least two equal final scores for non-uniform random variables in terms of the uniform case. This question represents a generalised birthday problem (see [17, 26, 34]). A recursive numerical calculation with the help of [27, Proposition 3.1] yields \(1-P_{8}\approx 0.83\), which is therefore considerably larger than in the uniform case. The naive direct calculation of this probability by summation over all \(\binom{101}{8}\approx 2\times 10^{11}\) \(8\)-element subsets of \(\{0,1,\ldots,100\}\) turns out to be infeasible, so recursive methods and approximations [27, 28, 34] become relevant.
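The value of \(P_{8}\) can also be computed without enumerating subsets: \(\sum_{|I|=8}\prod_{i\in I}p_{i}\) is the 8th elementary symmetric polynomial of \(p_{0},\ldots,p_{100}\), which can be read off from the product \(\prod_{i}(1+p_{i}x)\) truncated at degree 8. The following sketch is our own elementary alternative to the recursion of [27, Proposition 3.1], not a reproduction of it:

```python
import math

n, m = 100, 8                                  # n+1 possible head counts, m = 8 simulations
pmf = [math.comb(n, k) / 2**n for k in range(n + 1)]   # Binomial(100, 1/2) probabilities

# e[k] becomes the k-th elementary symmetric polynomial of p_0, ..., p_n,
# obtained as the degree-k coefficient of prod_i (1 + p_i * x), truncated at degree m.
e = [1.0] + [0.0] * m
for q in pmf:
    for k in range(m, 0, -1):                  # update in place, highest degree first
        e[k] += q * e[k - 1]

P8 = math.factorial(m) * e[m]                  # probability that all 8 final scores differ
print(round(1 - P8, 2))                        # ≈ 0.83, as reported above
```

The inner loop runs over at most \(8\cdot 101\) steps, in contrast to the roughly \(2\times 10^{11}\) subsets of the naive summation.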

Subsequently, as part of a simulation study over \(100\) rounds, the scores at the end of each round were calculated.

Fig. 12 shows the net profits achieved in \(100\) repetitions of Elsberg’s game of chance. In exactly \(14\) of the \(100\) simulations of the gamble, the maximal net profit of \(100\) euros is realised.

Fig. 11

Eight simulations of the score over time as a function of successive rounds for \(u=1.5\), \(d=0.6\), \(p=0.5\), \(a=100\) and \(n=100\) (as in [10])

Fig. 12

Net profit achieved in 100 simulations of Elsberg’s game [10] with \(a=100\) and \(n=100\)