On a game of chance in Marc Elsberg's thriller "GREED"

A (possibly illegal) game of chance, which is described in Chapter 14 of Marc Elsberg's thriller "GREED", seems to offer an excellent chance of winning. However, as the gambling starts and evolves over several rounds, the actual experience of the vast majority of the gamblers in a pub is strikingly different. We provide an analysis of this specific game and several of its variants using elementary tools of probability. Along the way, we encounter an interesting threshold phenomenon, which is related to the transition from a profit zone to a loss area. Our arguments are motivated and illustrated by numerical calculations with Python.


Introduction
In the summer of 2020, one of the authors (DH) spent a wonderful, albeit short, family holiday on one of the Frisian North Sea islands. At last one could enjoy nature with almost no worries and a brief period of the year when incidence numbers, exponential growth and R-factors had faded into the background. My wife had immersed herself in one of her books on the beach, while I relaxed and let my eyes wander over the expanse of the sea. Finally, my wife, who teaches mathematics, turned to me and said, 'You might be interested in this.' While reading Marc Elsberg's thriller 'GREED', she had come across an interesting connection, which we initially discussed animatedly without pencil and paper. The following investigation finally arose from this first conversation.
Elsberg had his breakthrough as an author in 2012 with 'BLACKOUT'. The book describes the scenario of a widespread collapse of power supply and its consequences. With 'ZERO' and 'HELIX' he confirmed his reputation as a master of the science thriller. In his eighth book with the title 'GREED' [10], Elsberg deals with economic concepts, findings and theories with a focus on the question whether comprehensive cooperation between economic partners and branches of industry could lead to greater prosperity for everybody. He relies on scientific work related to ergodicity economics of a group led by Ole Peters at the London Mathematical Laboratory, which was supported by the Nobel Prize winners Murray Gell-Mann and Ken Arrow [30].
In his review of 'GREED', Edgar Fell [13, translated from German] comments on Elsberg's book: 'In the course of this exciting story, more and more connections of an economic nature emerge. It is fascinating to see how the author succeeds in bringing complex economic and social issues closer to the reader. Even the mathematical foundations of game theory are built into the plot in a "playful" way. An illegal game of chance in a bar, for example, offers the opportunity to take the first steps in this direction. It works like magic. Elsberg's captivating art of storytelling allows even readers who are completely untrained in mathematics to grasp his number games. The knowledge that is conveyed stimulates thought about how modern forms of society actually work.' The aim of the following considerations is to present an elementary analysis of a game of chance (a bet) from Chapter 14 of Elsberg's thriller 'GREED', mentioned in the review by Edgar Fell, and of variants of this bet. On the one hand, this offers the opportunity to apply some of the basic concepts of (elementary) stochastics. In this respect it is fair to say that games of chance provided much of the inspiration behind the birth of probability theory; see the historical review in Ethier's book on the Doctrine of Chances [12]. On the other hand, the investigation naturally leads to a threshold phenomenon (phase transition). Phenomena of this kind were originally observed in statistical physics, but also play an important role in the analysis of random graphs and random polytopes. For an elementary introduction to the topic of phase transitions in classical random graphs (Erdős-Rényi) we refer to [9], the classical work [11] as well as the monographs and textbooks [4,19,23]. Threshold phenomena with random polytopes (e.g. in high dimensions), random cones and connections to optimization, data analysis and signal processing have been investigated in [1,2,3,6,7,14,15,21,22,31], for example.
In Section 2, we provide a summary of those aspects from 'GREED' which are relevant for the present discussion. We essentially focus on Chapter 14, in which Elsberg stages the dynamics associated with the gambling scenario (the bet). In the following sections, we will analyse this particular gamble step by step. Here we start from the specific situation provided in the thriller. Motivated by our observations on the initial scenario described by Elsberg and a first quantitative analysis in Section 3, we generalise the underlying parameters of this game of chance (see Section 4). At first we examine the asymptotic behaviour of the expected net profit as the number n of rounds tends to infinity (Elsberg's choice is n = 100). Then we introduce general parameters u and d used to update the score after each round and find pairs (d, u) (at least numerically if n is finite) that define a fair game (Elsberg's choice corresponds to d = 0.6, u = 1.5 which results in an unfair game). The asymptotic analysis as n tends to infinity leads to a surprising threshold phenomenon which marks the asymptotically sharp transition from the profit zone to the loss area. While the original version of the bet is based on successively tossing a fair coin, in Section 4.5 we also explore the effect a biased coin has on the outcome of the bet, which allows us to establish a similar, but more general asymptotic threshold property. In addition, it can be seen from our investigation that a surprisingly small bias may already turn an unfavourable game into a fair one while keeping the other parameters fixed (see Figure 4.6). By a thorough analysis of the asymptotic behaviour of the variance of the net profit as the number n of rounds tends to infinity, we can even deduce the limit distribution of the net profit. 
It turns out that the limit distribution is deterministic, except for the case in which (d, u) lies on the boundary between profit zone and loss area, where we obtain a two-point distribution in the limit. Finally, in Section 5 we illustrate some numerical simulations that motivate a small excursion to a generalised birthday problem. Throughout the paper, our arguments are motivated and illustrated by numerical calculations with Python (relevant source code can be selected from the arXiv version of the paper).


Elsberg's game of chance and some initial insights

In Chapter 14 of 'GREED' [10], Elsberg describes the following scenario.
A group of people gathers in a bar in 'Berlin Mitte'. A man (Fitzroy Peel, the croupier) offers the following bet to the rest of the group: A player starts with an initial score of 100 points (units). Afterwards, a coin is tossed one hundred times. In each round, the current score is increased by fifty percent, if the coin shows 'heads'. Otherwise, the current score is reduced by forty percent. After one hundred rounds, there are two possibilities. If the final score exceeds 100 points (units), the player wins and receives double his stake. The mentioned stake can be chosen by the player, but is not allowed to exceed one hundred euros. If the final score does not exceed one hundred points, the player loses. The payout in the case of a loss is not explicitly specified in [10] (although it seems natural to assume that Elsberg intended the player to lose his or her entire stake).
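The scenario is easy to explore empirically before any formal analysis. The following minimal sketch (the function and parameter names are our own, not from the book or the paper) simulates the bet with Python's standard library:

```python
import random

def play_game(n_rounds=100, start=100.0, up=1.5, down=0.6, rng=random):
    """Simulate one run of the bet: a fair coin is tossed n_rounds times;
    'heads' multiplies the current score by 1.5, 'tails' by 0.6."""
    score = start
    for _ in range(n_rounds):
        score *= up if rng.random() < 0.5 else down
    return score

random.seed(1)
n_sim = 10_000
wins = sum(play_game() > 100.0 for _ in range(n_sim))
print(f"estimated winning probability: {wins / n_sim:.3f}")
```

In line with the scene in the novel, only a small minority of the simulated runs ends above the initial score of 100.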
While one member of the group called 'T-Shirt' wants to participate right away, another one (Jan) is sceptical at first. T-Shirt tries to convince Jan with the following explanation. In his opinion, one simply needs to take the mean of the possible outcomes. He therefore adds the possible percentages after one round (150 percent and 60 percent) and divides the sum by the number of possible outcomes (two). This yields a mean of 105 percent in each round. Jan and two other members of the group seem to be convinced by T-Shirt's explanation and agree to join the game.
However, another member of the group speaks up and expresses his concerns about the neglect of probabilities in the previous explanation. In his opinion, one needs to consider that the coin shows 'heads' or 'tails', each with probability 1/2. Therefore, he multiplies the possible outcomes by the associated probabilities to arrive at an average gain of 105 percent in each round, the same result as before.
Finally, one more member of the group explains his point of view on the suggested bet. He explains that the average increase of five percent in each round yields an average final score (expected outcome) of 131.5 times the initial score. T-Shirt seems confident of his victory and the group starts the offered gamble.
In the following chapters, Elsberg describes the reactions of the members of the group as the gambling evolves. Initially, a euphoric mood spreads in the room, as the majority of the players are successful in the beginning. However, after a couple of rounds more and more people end up with low scores and finally only one player has a score exceeding 100 (units). Now the mood in the room shifts completely. There are violent accusations of cheating, leading even to a physical confrontation.
Subsequently (in Chapter 22), a first popular explanation of the preceding events is given. On the one hand, it is argued that the mean values calculated by some of the participants are not appropriate for analysing the game, as they do not describe the course of the game over time, and that they ignore the fact that the initial situation can be different in each round. Here, the phenomenon of (lack of) ergodicity (the coincidence of temporal mean values and probabilistic averages) is used as an explanation. However, other points are perhaps more decisive for the analysis of the game.
More helpful is the remark that with an initial score of 100 (units) and a loss of forty percent in the first round, the score is reduced to sixty. Then, in order to reach 100 (units) again by winning in the second round, a gain of 66.67 percent instead of just 50 percent of the current score would be required. But even if a player wins in the first round and loses in the second round, the remaining score is only ninety. Ultimately, the basic error of the players is to calculate an expected value for a single round and to conclude that 0.5 · 1.5 + 0.5 · 0.6 = 1.05 is the factor by which the score should grow per round. Although the expected score at the end of the game does grow exactly in this way (see below), the rules of the game do not state that the payout is double the stake if the mean value is greater than 100 (units), but if the actual outcome of the game results in a score that is greater than 100 (units). In addition, the prize in the event of a win is independent of the final score; it is only relevant whether the final score exceeds the initial score of 100 (units).

For the analysis below, we adopt the following payout rule: the player wins if the final score exceeds 100 and receives a payout of twice the individual stake, so that in this case the net profit equals the stake. Otherwise, the payout of the player is stake · (final score)/100, hence the net profit equals stake · (final score)/100 − stake, which is negative (a loss), but not a complete loss of the stake. In other words, the higher the final score, the higher the percentage of the stake that the player keeps.
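The asymmetry described above can be made explicit with two lines of arithmetic (our own illustration):

```python
up, down = 1.5, 0.6

# One win and one loss combine to a net factor of 0.9, not 1.05:
pair = up * down  # approximately 0.9

# The 'typical' outcome of 50 heads and 50 tails in 100 rounds:
typical = 100 * pair**50
print(f"typical final score: {typical:.3f}")  # far below the initial 100
```

With equally many heads and tails, the score is multiplied by 0.9 fifty times and collapses to about half a unit, even though the expected factor per round is 1.05.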
Remark: As said before, the payout in the event of a loss is not explicitly specified by Elsberg. If, in contrast to the situation described above, we agree that the player loses his or her entire stake in the event of a loss, then the outcome would be even more disadvantageous for the player. The analysis below then simplifies significantly, as the consideration of the quantity A(n, u, d) in Section 4.3 can be omitted.
Simulations: Using Python (or a similar programming language), the game can be simulated very easily. Various realisations are displayed in Section 5.
A first analysis: Let a denote the stake. We assume that the coin tosses are done independently with a fair coin. If the coin shows 'heads' k times and 'tails' (100 − k) times during the n = 100 rounds, for some k ∈ {0, . . . , 100}, then the final score is given by

100 · 1.5^k · 0.6^(100−k). (3.1)

Note that the final score depends only on the numbers of 'heads' and 'tails' and not on the particular order in which they appear.
The player wins if and only if the final score exceeds the initial score, that is, if and only if

1.5^k · 0.6^(100−k) > 1. (3.2)

Using 1.5 = 3/2 and 0.6 = 3/5 and taking logarithms, (3.2) turns into

k · (ln 5 − ln 2) > 100 · (ln 5 − ln 3), (3.4)

where ln denotes the natural logarithm. Clearly, (3.4) depends neither on the initial score 100 nor on the stake a, and (3.2) is equivalent to

k > 100 · (ln 5 − ln 3)/(ln 5 − ln 2) ≈ 55.749.
Consequently, a player receives a net profit of a euros (keeping the initial stake and doubling it) in the case where k ≥ 56, whereas the net profit is a · 1.5^k · 0.6^(100−k) − a < 0 euros (which is a loss that depends on the particular number k) if k ≤ 55.
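The threshold 55.749 and the resulting split at k = 56 can be checked directly (a small sketch; the helper function is our own):

```python
import math

# Winning threshold from the inequality 1.5^k * 0.6^(100-k) > 1:
threshold = 100 * (math.log(5) - math.log(3)) / (math.log(5) - math.log(2))
print(f"threshold: {threshold:.3f}")  # approximately 55.749

def final_score(k, n=100, up=1.5, down=0.6):
    """Final score when exactly k of the n tosses show 'heads'."""
    return 100 * up**k * down**(n - k)

print(final_score(56))  # above 100: a win
print(final_score(55))  # below 100: a loss
```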
In the underlying scenario, we can easily compute the probability of a loss, which is given by

P(final score ≤ 100) = Σ_{k=0}^{55} C(100, k) · (1/2)^100 ≈ 0.8644,

so that the probability of winning is approximately 0.1356. The previously determined probability of winning can also be seen in our numerical simulations (see Section 5). Figure 5.2 illustrates the outcomes of 100 simulations of Elsberg's game of chance. Only 14 out of the 100 simulations were beneficial for the gamblers.
Further, we can determine the expected net profit, i.e., the payout minus the stake, of a player. Hence, the expected net profit is given by

a · Σ_{k=0}^{55} C(100, k) · (1/2)^100 · (1.5^k · 0.6^(100−k) − 1) + a · Σ_{k=56}^{100} C(100, k) · (1/2)^100 ≈ −0.7 · a.

This demonstrates that a player should ultimately expect a significant loss.
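Both quantities can be evaluated exactly with a short script (our own sketch; exact binomial sums instead of simulation):

```python
from math import comb

n = 100
p_win = sum(comb(n, k) for k in range(56, n + 1)) / 2**n
print(f"P(win) = {p_win:.4f}")  # about 0.136, so P(loss) is about 0.864

# Expected net profit per unit stake: +1 on a win,
# (final score)/100 - 1 on a loss.
g = p_win + sum(
    comb(n, k) * (1.5**k * 0.6**(n - k) - 1) for k in range(0, 56)
) / 2**n
print(f"expected net profit per unit stake: {g:.4f}")
```

The winning probability of roughly 0.136 matches the 14 winning runs out of 100 simulations mentioned above.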

Variations
The previous analysis shows that the participants of Elsberg's bet will experience a significant loss on average. Thus the question arises which rule or parameter underlying the gambling leads to this unfair situation and how the framework can be adjusted in order to make the gambling more (or even less) advantageous for the participants. In the following, we will study the influence of different parameters of Elsberg's game of chance (to which we also refer as a 'bet', 'gamble' or simply a 'game'), or rather of the version of it employing our specific payout rule, starting with the number n of rounds.

Number of rounds
We will now analyse the influence of the number n of rounds on the expected net profit.
For n = 1, the behaviour of the gamble coincides with the players' perception. The expected net profit is then given by G(a, 1) = 0.3 · a. For n = 2, the expected net profit is still positive, given by G(a, 2) = 0.04 · a. However, the final score is only larger than 100 if k = 2. This event occurs with probability 1/4, implying that the probability of a loss is given by 3/4. After n = 6 rounds, the expected net profit is negative for the first time. Although G(a, 7) is positive again, the expected net profit is strictly negative for n ≥ 8. Figure 4.1 illustrates the expected net profit after n rounds and shows an interesting behaviour. While the value G(a, n) is not monotonic due to some jumps, we can clearly see a decreasing trend. This leads to the conjecture that the expected net profit converges to −a as n → ∞ (see Figure 4.2). A formal proof of the asymptotic behaviour can be found below in Proposition 4.1.
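The sign pattern described above can be reproduced with a short function (our own sketch; k0 is the largest losing number of heads, as in Section 3):

```python
from math import comb, floor, log

def expected_net_profit(n, up=1.5, down=0.6):
    """Expected net profit per unit stake after n rounds."""
    # Largest number of heads that still loses:
    k0 = floor(n * log(1 / down) / log(up / down))
    loss = sum(comb(n, k) * (up**k * down**(n - k) - 1) for k in range(k0 + 1))
    win = sum(comb(n, k) for k in range(k0 + 1, n + 1))
    return (loss + win) / 2**n

for n in (1, 2, 6, 7, 8, 100):
    print(n, round(expected_net_profit(n), 4))
```

The output reproduces G(a, 1) = 0.3 a, the first negative value at n = 6, the brief return to a positive value at n = 7, and a pronounced expected loss at n = 100.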
In general, the expected net profit after n rounds with the initial stake a is given by

G(a, n) = a · Σ_{k=0}^{k0(n)} C(n, k) · (1/2)^n · (1.5^k · 0.6^(n−k) − 1) + a · Σ_{k=k0(n)+1}^{n} C(n, k) · (1/2)^n, (4.1)

where

k0(n) = ⌊n · (ln 5 − ln 3)/(ln 5 − ln 2)⌋

represents the boundary between winning and losing events. In the definition of k0, ⌊·⌋ denotes the floor function defined as ⌊x⌋ = max{m ∈ Z : m ≤ x}, x ∈ R. The logarithmic expression in the previous definition of k0 is approximately given by 0.5575. We will now prove the previously mentioned conjecture regarding the asymptotic behaviour of the expected net profit using concentration inequalities. As usual, we denote a random variable X with a binomial distribution with parameters n ∈ N and p ∈ [0, 1] by X ∼ Bin(n, p).
Proposition 4.1. For any a > 0, G(a, n) → −a as n → ∞ with an exponential rate of convergence.
Proof. In the following, we will prove that both sums in (4.1) converge to zero as n tends to infinity. Let X_n ∼ Bin(n, 5/7). Since 1.05 · 5/7 = 1.5/2 and 1.05 · 2/7 = 0.6/2, we get

Σ_{k=0}^{k0(n)} C(n, k) · (1/2)^n · 1.5^k · 0.6^(n−k) = 1.05^n · P(X_n ≤ k0(n)) ≤ 1.05^n · exp(−2n · (5/7 − k0(n)/n)^2) ≤ exp(n · (ln 1.05 − 2 · (5/7 − 0.5575)^2)) ≤ exp(−0.0003 · n), (4.2)

where we used Hoeffding's inequality [5, Theorem 2.8], [8] in the second to last step. (Alternatively, Chernoff's inequality can be used, at the cost of an additional factor 2.) Now let Y_n ∼ Bin(n, 1/2). Using k0(n) ≥ ⌊0.55n⌋ ≥ 0.55n − 1, we obtain the following upper bound on the second sum:

Σ_{k=k0(n)+1}^{n} C(n, k) · (1/2)^n = P(Y_n ≥ k0(n) + 1) ≤ P(Y_n/n − 1/2 ≥ 0.05 − 1/n) ≤ exp(−2n · (0.05 − 1/n)^2), (4.3)

where we used Okamoto's inequality [29], [5, Ex. 2.12] in the last step.
An application of the upper bounds (4.2) and (4.3) in (4.1) completes the proof.
Remark: If we are not interested in the rate of convergence in Proposition 4.1, the second part of the proof can be simplified by using the fact that Y_n/n converges in probability to 1/2. However, it seems that for the first part of the argument some finer tools are required.
An important feature of Elsberg's game of chance is the bounded payout in the event of a win. If the payout rule for the case of a loss also applied in the winning scenarios, the expected net profit would be given by

a · Σ_{k=0}^{n} C(n, k) · (1/2)^n · (1.5^k · 0.6^(n−k) − 1) = a · (1.05^n − 1).

This is exactly the value the gamblers expected intuitively.

Up and down
In the following subsection, we will study the influence of the percentages used to modify the current score after each round. In Elsberg's game of chance, the score is increased by 50% or decreased by 40% if the coin shows 'heads' or 'tails', respectively. We will substitute these percentages by general percentages a_u and a_d. The indices u and d stand for 'up' and 'down'.
Hence, the updated modification (increase or decrease) of the current score in each round is given as follows.
3'. If the coin shows 'heads', the current score is increased by a_u %. Otherwise, the current score is reduced by a_d %.
In the game introduced by Elsberg, the values a_u and a_d are given by a_u = 50 and a_d = 40.
Since it will be more convenient to use fractions instead of percentages, we introduce the factors

u = 1 + a_u/100 and d = 1 − a_d/100.

Using the previously defined factors u and d, we obtain more generally (cf. (3.1)) for the final score after n rounds

100 · u^k · d^(n−k).

Here again, k ∈ {0, 1, . . . , n} denotes the number of coin tosses showing 'heads' among the n independent repetitions. Then, the expected net profit under the updated modification rule 3' is given by

G(a, n, u, d) = a · Σ_{k=0}^{k0(n,u,d)} C(n, k) · (1/2)^n · (u^k · d^(n−k) − 1) + a · Σ_{k=k0(n,u,d)+1}^{n} C(n, k) · (1/2)^n. (4.4)

As before, the quantity

k0(n, u, d) = ⌊n · ln(1/d)/(ln u − ln d)⌋

represents the boundary between winning and losing events. More precisely, a player wins if and only if k > k0(n, u, d).
General assumption: In the following, we will always assume that 0 < d ≤ 1 ≤ u and u ≠ d. This is no loss of generality, since other choices of u and d are not reasonable in the given situation.
Intuitively, one would expect that an increase of u or d results in an advantage for the participants of the game of chance. Figure 4.5 displays numerically determined tuples (d, u) for which the game is fair after n rounds. For comparison, we also illustrated the function u = d^(−1) (orange), suggesting the conjecture that asymptotically, as n → ∞, the fair tuples (d, u) are determined by the relation u = d^(−1). We will prove this conjecture in Theorem 4.2 below.

Gambling on the edge
As already mentioned in the previous subsection, the conjecture which is supported by the illustration of our numerical results in Figure 4.5 can be proven exactly (asymptotically as n → ∞). This leads to an interesting threshold phenomenon, where {(u, d) ∈ [1, ∞) × (0, 1] : ud = 1, u ≠ d} describes the boundary between tuples leading to a game of chance that is advantageous or disadvantageous for the participants in the game. All tuples on the boundary lead to a fair game. Since, according to the property G(a, n, u, d) = a · G(1, n, u, d), the expected net profit G(a, n, u, d) is proportional to the initial stake a, we define G(n, u, d) := G(1, n, u, d) for notational simplicity. Then, it follows that G(n, u, d) can be written as

G(n, u, d) = A(n, u, d) + B(n, u, d) − 1,

where

A(n, u, d) = Σ_{k=0}^{k0(n,u,d)} C(n, k) · (1/2)^n · u^k · d^(n−k) and B(n, u, d) = 2 · Σ_{k=k0(n,u,d)+1}^{n} C(n, k) · (1/2)^n.

Note that the general assumption 0 < d ≤ 1 ≤ u implies 0 ≤ ln(1/d)/(ln u − ln d) ≤ 1, implying that k0(n, u, d) ∈ {0, . . . , n}.

Theorem 4.2. Let 0 < d ≤ 1 ≤ u with u ≠ d. Then, as n → ∞, G(n, u, d) → 1 if ud > 1, G(n, u, d) → −1 if ud < 1, and G(n, u, d) → 0 if ud = 1.
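The threshold behaviour along the boundary ud = 1 can already be observed numerically for moderate n. The following sketch (our own; it evaluates G(n, u, d) in log space via lgamma so that large n causes no overflow) shows G close to −1, +1 and 0 in the three regimes:

```python
from math import exp, floor, lgamma, log

def G(n, up, down):
    """Expected net profit per unit stake after n rounds of the general game."""
    k0 = floor(n * log(1 / down) / log(up / down))
    total = 0.0
    for k in range(n + 1):
        # log of the binomial probability C(n, k) * 2^(-n)
        log_pk = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1) - n * log(2)
        if k <= k0:
            # losing outcome: net profit u^k * d^(n-k) - 1 per unit stake
            total += exp(log_pk + k * log(up) + (n - k) * log(down)) - exp(log_pk)
        else:
            # winning outcome: net profit +1
            total += exp(log_pk)
    return total

n = 500
print(G(n, 1.5, 0.6))    # ud = 0.90 < 1: close to -1
print(G(n, 1.5, 0.7))    # ud = 1.05 > 1: close to +1
print(G(n, 1.5, 2 / 3))  # ud = 1: close to 0
```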
The proof of Theorem 4.2 will be split into two auxiliary results concerning the asymptotic behaviour of A(n, u, d) and B(n, u, d) as n → ∞.
Since u ≠ d is satisfied by assumption, it follows that d < 1 must hold as well.
We can therefore conclude that, in the present case u + d = 2, we have u/2 + d/2 = 1 and C(n, k) · (1/2)^n · u^k · d^(n−k) = C(n, k) · (u/2)^k · (d/2)^(n−k). Now let Y_n ∼ Bin(n, u/2). The law of large numbers yields A(n, u, d) = P(Y_n ≤ k0(n, u, d)) → 0 as n → ∞, due to (4.8) and (1/n) · E Y_n = u/2. If we use Hoeffding's inequality [5, Theorem 2.8] (or alternatively Chernoff's inequality, including an additional factor 2) and the fact that k0(n, u, d)/n − u/2 < 0 for sufficiently large n due to (4.8), it is even possible to verify an exponential rate of convergence. Using the previously mentioned tools, we get A(n, u, d) ≤ exp(−n · α(u, d)) for sufficiently large n with some α(u, d) > 0. Note that α(u, d) → 0 as d, u → 1. Now consider the case u + d > 2 with ud ≠ 1. For sufficiently large n it follows that

1 − k0(n, u, d)/n > d/(u + d) and k0(n, u, d)/n < u/(u + d).

Let ξ_n ∼ Bin(n, u/(u + d)) and ζ_n := n − ξ_n ∼ Bin(n, d/(u + d)). We will now use an inequality (see [5, Ex. 2.11] or [8]) that usually arises during the derivation of Chernoff's inequality. It states that if S_n ∼ Bin(n, p) and y ∈ (p, 1), then

P(S_n ≥ y · n) ≤ exp(−n · (y · ln(y/p) + (1 − y) · ln((1 − y)/(1 − p)))). (4.11)

(Remark: In the Wikipedia article [35] this step is referred to as the 'Chernoff-Hoeffding theorem'.) Using (4.10) and (4.11) for p = d/(u + d) and y = 1 − k0(n, u, d)/n, it follows that

A(n, u, d) = ((u + d)/2)^n · P(ξ_n ≤ k0(n, u, d)) = ((u + d)/2)^n · P(ζ_n ≥ n − k0(n, u, d)) ≤ ω(n, u, d)^n

for a suitable quantity ω(n, u, d) that involves the factor u^(k0(n,u,d)/n). Since we assumed that ud ≠ 1, a direct computation shows that ω(n, u, d) → ρ(u, d)^(−1) ∈ (0, 1) as n → ∞, which yields ω(n, u, d)^n → 0 for n → ∞, as requested.
Case 4: u + d > 2 and ud = 1. If n is even, there exists some m ∈ N such that n = 2m. Hence, k0(n, u, d) = n/2 = m and therefore A(n, u, d) can be written as

A(2m, u, d) = Σ_{k=0}^{m} C(2m, k) · (1/2)^(2m) · u^k · d^(2m−k).

Since k ≤ m, it follows that C(2m, k) ≤ C(2m, m). Now we can use Stirling's approximation to see that

C(2m, m) ∼ 4^m/√(πm),

where ∼ means that the two expressions are asymptotically equivalent. Hence, for k ≤ m and sufficiently large m ∈ N,

C(2m, k) · (1/2)^(2m) ≤ C(2m, m) · (1/2)^(2m) ≤ 2/√(πm).

Using ud = 1, i.e. d = u^(−1), we have u^k · d^(2m−k) = u^(2(k−m)) for 0 ≤ k ≤ m. We conclude that

A(2m, u, d) ≤ (2/√(πm)) · Σ_{k=0}^{m} u^(2(k−m)) ≤ (2/√(πm)) · u^2/(u^2 − 1) → 0

as m → ∞. If n is odd, there exists some m ∈ N such that n = 2m + 1. Then it follows that k0(n, u, d) = m and A(n, u, d) can be bounded analogously, which completes the argument.
Remarks. • As soon as (4.8) is available, the second case in the previous proof can be included into the third case. However, the application of Hoeffding's inequality is easier in the second case. Moreover, we could alternatively argue with the law of large numbers in the second case (though without obtaining the exponential rate of convergence then). Therefore we decided to treat these cases separately.
• At the critical boundary, characterised by the equation ud = 1, the expression A(n, u, d) still converges to zero. However, the rate of convergence is no longer exponential, but of order 1/ √ n. We only presented an upper bound on the order of convergence, but one could consider the summand for k = m to deduce a lower bound as well.
In order to establish the asymptotic behaviour of G(n, u, d), which is stated in Theorem 4.2, it remains to analyse the asymptotic behaviour of B(n, u, d). The required result is provided by the following lemma.

Lemma 4.4. As n → ∞, B(n, u, d) converges to 0 if ud < 1, to 2 if ud > 1, and to 1 if ud = 1.

Proof. Case 1: ud < 1. Then

lim_{n→∞} k0(n, u, d)/n = ln(1/d)/(ln u − ln d) > 1/2,

where the lower bound on the limit is equivalent to ud < 1. If we introduce binomially distributed random variables X_n ∼ Bin(n, 1/2), it follows that B(n, u, d) = 2 · P(X_n ≥ k0(n, u, d) + 1) → 0 as n → ∞ by the law of large numbers. An application of Okamoto's inequality (see [29]), in combination with the fact that k0(n, u, d) − n/2 > 0 for sufficiently large n ∈ N and (4.14), yields the even stronger statement

B(n, u, d) ≤ 2 · exp(−2 · (k0(n, u, d) − n/2)^2/n) → 0

as n → ∞.
Case 2: ud > 1. Then lim_{n→∞} k0(n, u, d)/n = ln(1/d)/(ln u − ln d) < 1/2. Analogously to the first case, an application of the law of large numbers yields B(n, u, d) = 2 · P(X_n ≥ k0(n, u, d) + 1) → 2 as n → ∞. Again, an application of Okamoto's inequality [29] instead of the law of large numbers leads to a stronger, exponential estimate given by

B(n, u, d) = 2 · P(X_n ≥ k0(n, u, d) + 1) = 2 · (1 − P(X_n ≤ k0(n, u, d))) ≥ 2 · (1 − exp(−2 · (n/2 − k0(n, u, d))^2/n)).

In combination with the upper bound B(n, u, d) ≤ 2, it now follows that lim_{n→∞} B(n, u, d) = 2, and the convergence is of exponential order.
Case 3: ud = 1. Then k0(n, u, d) = ⌊n/2⌋, and B(n, u, d) = 2 · P(X_n ≥ ⌊n/2⌋ + 1) → 1 as n → ∞ by the central limit theorem. In this case, the convergence is of the order 1/√n.

Unbounded prize
As we already mentioned in Section 3, the a priori bound on the prize in the event of a win contributes to the disadvantages of the participants in Elsberg's game of chance. We will now analyse a modified gambling rule, which depends on the final score and does not involve an a priori bound.
We will now assume that the payout at the end of the game is always given by a times the final score divided by 100, also in the event of a win. In analogy to the representation of the expected net profit in (4.4), we can derive a general representation of the expected net profit using the updated payout rule in the case of a win. We therefore arrive at the following (much simpler) expression for the expected net profit, that is,

G(a, n, u, d) = a · Σ_{k=0}^{n} C(n, k) · (1/2)^n · u^k · d^(n−k) − a = a · (((u + d)/2)^n − 1).

Clearly, the expected net profit is zero if and only if u + d = 2. Thus, using the current payout rule in the event of a win, we can exactly characterise the tuples (d, u) which lead to a fair game, by the condition u + d = 2.
In the subsection below, we will analyse a modified version of Elsberg's game of chance based on a coin which is not necessarily fair. Before pursuing this topic, we describe the influence of a biased coin in the current scenario of a payout which does not involve an a priori bound on the prize in the event of a win. If p ∈ [0, 1] describes the probability of the event 'heads' and q = 1 − p the probability that the coin shows 'tails', the expected net profit in the underlying situation is then given by a · ((pu + qd)^n − 1). Hence, under these assumptions the game is fair if and only if pu + qd = 1.
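For this unbounded payout rule, the fairness condition pu + qd = 1 can be solved for p in closed form, namely p = (1 − d)/(u − d). A quick check for Elsberg's factors (our own snippet):

```python
u, d = 1.5, 0.6
p = (1 - d) / (u - d)  # solves p*u + (1 - p)*d = 1
q = 1 - p
print(f"fair bias: p = {p:.4f}")  # approximately 0.4444, i.e. less than 1/2
print(f"expected factor per round: {p * u + q * d}")
```

Since u + d = 2.1 > 2 here, the unbounded-payout game with a fair coin is favourable for the player, so fairness actually requires a bias against 'heads'.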

Fake coins
In this subsection, we analyse the influence of the probability p that the coin shows 'heads'. In 'GREED', the gambling is executed using a fair coin (p = 1/2). Instead of a fair coin, we could use a bent (biased) coin or some completely different Bernoulli experiment (for example, by tossing a drawing pin or rolling a die and modifying the rule in step 3' appropriately). For the sake of simplicity, we will continue to use the toss of a coin.
Intuitively, increasing the probability p that the coin shows 'heads' should increase the chances of winning for the participants of the gambling and hence increase their expected net profit. Figure 4.6 supports this conjecture. The figure illustrates the expected net profit after n = 100 rounds using an initial stake of a = 100 in terms of the probability p. The factors u and d are chosen according to the gamble described in 'GREED'.
As in the analysis of the influence of the up and down factors u and d, we want to choose the probability p so that a fair game of chance is obtained. Figure 4.6 suggests that it is possible (at least numerically) to determine such a probability p. The net profit for p = 1/2 and the choice of p which results in a fair game are both marked in Figure 4.6. In order to formalise the previously described modified version of Elsberg's game of chance, we consider a sequence of independent Bernoulli distributed random variables X′_i ∼ Bin(1, p), where {X′_i = 1} and {X′_i = 0} represent the events that the i-th coin toss shows 'heads' and 'tails', respectively. In order to simplify our calculations, we define the complementary probability q = 1 − p, which denotes the probability of the event 'tails'. Then, the random variable X^(p)_n := X′_1 + · · · + X′_n ∼ Bin(n, p) counts how often the event 'heads' occurs among the n coin tosses.
Since we are interested in the net profit at the end of the game, we introduce the random variable

T(n, u, d, p) := Σ_{k=0}^{k0(n,u,d)} 1{X^(p)_n = k} · u^k · d^(n−k) + 2 · 1{X^(p)_n ≥ k0(n, u, d) + 1} − 1,

which represents the net profit after n rounds using the initial stake a = 1. At this point, we recall the explanation at the beginning of Subsection 4.3 according to which the expected net profit is proportional to the stake a. Hence, it suffices to consider the case a = 1.

By 1{X^(p)_n = k} we denote the indicator function with respect to the event {X^(p)_n = k}; it equals 1 if the event occurs and 0 otherwise. For the sake of notational simplicity, we will use the shorthand notation T_n = T(n, u, d, p).
Since we defined T_n as the net profit at the end of the game, the expected net profit is given by the expectation of the random variable T_n. Hence, we get the following representation of the expected net profit G(n, u, d, p) := E[T(n, u, d, p)], that is,

G(n, u, d, p) = Σ_{k=0}^{k0(n,u,d)} C(n, k) · p^k · q^(n−k) · u^k · d^(n−k) + 2 · P(X^(p)_n ≥ k0(n, u, d) + 1) − 1 =: A(n, u, d, p) + B(n, u, d, p) − 1.

Here it should be noted that k0(n, u, d) is independent of p.
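A fair bias for the original bounded game can be found numerically, e.g. by bisection on p. The following sketch is our own; it assumes that G(n, u, d, p) is increasing in p on the bracketing interval, which Figure 4.6 suggests:

```python
from math import comb

def G(n, up, down, p, k0):
    """Expected net profit per unit stake when 'heads' has probability p."""
    q = 1 - p
    g = 0.0
    for k in range(n + 1):
        pk = comb(n, k) * p**k * q**(n - k)
        # loss: payout is the final score per unit stake; win: net +1
        g += pk * (up**k * down**(n - k) - 1) if k <= k0 else pk
    return g

n, up, down, k0 = 100, 1.5, 0.6, 55  # k0 = 55 as computed in Section 3
lo, hi = 0.5, 0.7                    # G < 0 at p = 0.5, G > 0 at p = 0.7
for _ in range(50):
    mid = (lo + hi) / 2
    if G(n, up, down, mid, k0) < 0:
        lo = mid
    else:
        hi = mid
print(f"fair bias: p = {hi:.4f}")
```

The fair bias comes out only a few percentage points above 1/2, illustrating that a surprisingly small bias already turns the heavily unfavourable game into a fair one.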
Similarly to the results in Section 4.3, we determine the asymptotic behaviour of the expected net profit as n → ∞. Again, for the proof we consider two auxiliary results concerning the asymptotic behaviour of the quantities A(n, u, d, p) and B(n, u, d, p) as n → ∞.
We start by providing two analytic inequalities which will be useful in the proof of Lemma 4.7.
Proof. If d = 1, the assertion of the lemma is apparently true, since the expressions on the left- and right-hand sides are equal. Now let d ∈ (0, 1) be arbitrary, but fixed, and introduce an auxiliary function f defined on [0, d].
Using the strict monotonicity of f, it follows that f(x) > 0 for x ∈ (0, d). Now the assertions of the lemma can be easily deduced.
Equality holds if and only if x = p.
The previous inequality can be rewritten accordingly. Hence, the asymptotic behaviour of k0 is given by

k0(n, u, d)/n → ln(1/d)/(ln u − ln d) as n → ∞.

If pu + qd = 1, then C(n, k) · p^k · q^(n−k) · u^k · d^(n−k) = C(n, k) · (pu)^k · (qd)^(n−k), so that A(n, u, d, p) = P(Y^(p)_n ≤ k0(n, u, d)), where Y^(p)_n ∼ Bin(n, pu). The law of large numbers implies that A(n, u, d, p) → 0 as n → ∞, where we used (4.16) and (1/n) · E Y^(p)_n = pu. Analogously to the proof of Lemma 4.3, an alternative argument based on Hoeffding's or Chernoff's inequality yields an exponential rate of convergence.
The previous inequality is equivalent to the left inequality in the preceding display. Finally, for sufficiently large n we get

1 − k0(n, u, d)/n > qd/(pu + qd) and k0(n, u, d)/n < pu/(pu + qd).

Let ξ^(p)_n ∼ Bin(n, pu/(pu + qd)) and ζ^(p)_n := n − ξ^(p)_n ∼ Bin(n, qd/(pu + qd)). Then

A(n, u, d, p) = (pu + qd)^n · P(ξ^(p)_n ≤ k0(n, u, d)) = (pu + qd)^n · P(ζ^(p)_n ≥ n − k0(n, u, d)) ≤ ω(n, u, d, p)^n,

where ω(n, u, d, p) is defined in analogy to ω(n, u, d) and involves the factor u^(k0(n,u,d)/n). The limit c of k0(n, u, d)/n satisfies c ≠ p due to the assumption u^p · d^(1−p) ≠ 1. From Lemma 4.6 we deduce that the denominator of ω(n, u, d, p) converges to ρ(u, d, p) > 1 as n → ∞.
Moreover, (4.12) remains true (see the proof of Lemma 4.3), since k 0 does not depend on p.
Case 4: u^p · d^q = 1. Since d ≠ u, it follows that pu + qd > 1. Since u^p · d^q = 1, we have k0(n, u, d) = ⌊np⌋ and therefore A(n, u, d, p) simplifies to

A(n, u, d, p) = Σ_{k=0}^{⌊np⌋} C(n, k) · p^k · q^(n−k) · u^k · d^(n−k).

For k ≤ ⌊np − q⌋ we can easily deduce that a_k := C(n, k) · p^k · q^(n−k) ≤ a_{k+1}. In addition, the assumption u^p · d^q = 1 implies that u^k · d^(n−k) = d^(n−k/p). Hence,

A(n, u, d, p) ≤ (1 − d^(1/p))^(−1) · C(n, ⌊np⌋) · p^(⌊np⌋) · q^(n−⌊np⌋). (4.18)

Further, the right-hand side of (4.18) converges to zero since

C(n, ⌊np⌋) · p^(⌊np⌋) · q^(n−⌊np⌋) = P(⌊np⌋ − 1 < X^(p)_n ≤ ⌊np⌋) → 0 as n → ∞, (4.19)

which can be deduced from the central limit theorem, where the probability is bounded by a difference of values of Φ, the cumulative distribution function of the standard normal distribution.
Combining (4.18) and (4.19), we obtain that A(n, u, d, p) converges to zero as n → ∞. The Berry-Esseen theorem further shows that the convergence is of the order 1/ √ n.
As explained in the remark following the proof of Lemma 4.3, it is possible to include the second case of the previous proof into the third case. However, we decided to treat these cases separately due to the same reasons as mentioned before.
Instead of using the central limit theorem and the Berry-Esseen theorem at the end of the fourth case, we could instead use that

C(n, ⌊np⌋) · p^(⌊np⌋) · q^(n−⌊np⌋) ∼ 1/√(2πpqn) as n → ∞

to prove (4.19). The former asymptotic equivalence arises as a local central limit theorem in the proof of the De Moivre-Laplace theorem (see [33, p. 55]). Alternatively, one could use Stirling's approximation (together with the binary entropy function) to provide a more direct argument.
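The local limit relation can be checked numerically (our own snippet; n = 1000 keeps the direct evaluation within floating-point range):

```python
from math import comb, pi, sqrt

n, p = 1000, 0.5
q = 1 - p
k = int(n * p)  # = floor(n*p)

exact = comb(n, k) * p**k * q**(n - k)  # P(X = floor(n*p))
approx = 1 / sqrt(2 * pi * p * q * n)   # local CLT approximation
print(exact, approx, exact / approx)
```

Already at n = 1000 the ratio of the exact probability to the approximation differs from 1 by less than one percent.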
In order to determine the asymptotic behaviour of the expected net profit, we need to identify the limit of the second summand, denoted by B(n, u, d, p), as n → ∞.
Proof. Case 1: u^p · d^q < 1. Then

lim_{n→∞} k0(n, u, d)/n = ln(1/d)/(ln u − ln d) > p, (4.20)

where the stated lower bound on the limit of n^(−1) · k0 can be rewritten as u^p · d^q < 1. Since X^(p)_n ∼ Bin(n, p), the law of large numbers yields

B(n, u, d, p) = 2 · P(X^(p)_n ≥ k0(n, u, d) + 1) ≤ 2 · P((1/n) · X^(p)_n ≥ (1/n) · k0(n, u, d)) → 0

as n → ∞.
As in the previous proofs, we obtain an exponential rate of convergence by using Chernoff's inequality in combination with the fact that k0(n, u, d) − np > 0 for sufficiently large n and (4.20); that is, B(n, u, d, p) → 0 at an exponential rate as n → ∞.
Case 2: u^p · d^q > 1. Then it follows that

lim_{n→∞} k0(n, u, d)/n = ln(1/d)/(ln u − ln d) < p.

Similarly to the estimate in Case 1, the law of large numbers yields B(n, u, d, p) = 2 · P(X^(p)_n ≥ k0(n, u, d) + 1) → 2 as n → ∞. Furthermore, by an application of Chernoff's inequality we obtain an exponential lower bound on B(n, u, d, p) given by

B(n, u, d, p) = 2 · P(X^(p)_n ≥ k0(n, u, d) + 1) = 2 · (1 − P(X^(p)_n ≤ k0(n, u, d))) ≥ 2 · (1 − exp(−c · n))

for some constant c > 0 and sufficiently large n. In combination with the upper bound B(n, u, d, p) ≤ 2, we obtain that B(n, u, d, p) converges to 2 as n → ∞ at an exponential rate.
Case 3: u^p · d^q = 1. In this critical case, we will argue more effectively by using the central limit theorem.
The assumption $u^p d^{1-p} = 1$ implies that $k_0(n, u, d) = \lfloor np \rfloor$ and therefore
\[
B(n, u, d, p) = 2 \sum_{k = \lfloor np \rfloor + 1}^{n} \binom{n}{k} p^k q^{n-k} = 2\, P\big(X_n^{(p)} \ge \lfloor np \rfloor + 1\big) \to 2 \cdot \tfrac{1}{2} = 1 \quad \text{as } n \to \infty,
\]
by the central limit theorem. Again, we can deduce the rate of convergence, which is given by $1/\sqrt{n}$, using the Berry-Esseen theorem.
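The three regimes of Lemma 4.8 can be observed numerically. In the sketch below (our own illustration), $k_0(n, u, d)$ is computed as the largest $k$ with $u^k d^{n-k} \le 1$, and $B(n, u, d, p) = 2\, P(X_n^{(p)} \ge k_0(n, u, d) + 1)$ is evaluated by exact summation of binomial probabilities in log-space:

```python
from math import lgamma, log, exp, floor, fsum

def log_pmf(n, k, p):
    """Logarithm of the Bin(n, p) probability mass at k."""
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
            + k * log(p) + (n - k) * log(1 - p))

def k0(n, u, d):
    """Largest k with u**k * d**(n - k) <= 1, the loss/profit threshold."""
    return floor(n * log(1 / d) / log(u / d))

def B(n, u, d, p):
    """B(n, u, d, p) = 2 * P(X >= k0 + 1) for X ~ Bin(n, p)."""
    return 2 * fsum(exp(log_pmf(n, k, p))
                    for k in range(k0(n, u, d) + 1, n + 1))

n = 2000
print(B(n, 1.5, 0.6, 0.5))    # u^p d^q < 1: close to 0
print(B(n, 1.5, 0.8, 0.5))    # u^p d^q > 1: close to 2
print(B(n, 1.5, 2 / 3, 0.5))  # u^p d^q = 1: close to 1
```

For $n = 2000$ the three values are already close to their limits 0, 2 and 1, in line with the exponential rates in the first two cases and the slower $1/\sqrt{n}$ rate in the critical case.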
Finally, the following generalisation of Theorem 4.2 is implied by Lemmas 4.7 and 4.8.

Analysis of the variance
In the final part of our analysis, we study the variance of the random variable $T_n$. Since $T_n$ was defined as the sum of random variables (and a constant, which does not affect the variance), the variance of $T_n$ is composed of the variances of the single random variables and the covariances of any two distinct random variables. If we further use that the events $\{X_n^{(p)} \le k_0(n, u, d)\}$ and $\{X_n^{(p)} \ge k_0(n, u, d) + 1\}$ are complementary, we arrive at a representation of $V(T_n)$ as a sum of five terms, among them
\[
\sum_{k=0}^{k_0(n,u,d)} \binom{n}{k} (p u^2)^k (q d^2)^{n-k} \quad \text{and} \quad 4\, P\big(X_n^{(p)} \ge k_0(n, u, d) + 1\big)\, P\big(X_n^{(p)} \le k_0(n, u, d)\big),
\]
where $P(X_n^{(p)} \ge k_0(n, u, d) + 1) = \sum_{k = k_0(n,u,d)+1}^{n} \binom{n}{k} p^k q^{n-k}$.
Using the previous representation of $V(T_n)$, we explored how the parameters $n$, $u$, $d$ and $p$ of the game affect the variance of $T_n$. The results can be found in Figures 4.7–4.10 below. Note that we examined the entire variance as well as the behaviour of the five summands, which we denote by $v_i(n, u, d, p)$, $i = 1, \ldots, 5$.
The results of our numerical analysis motivated the following theorem concerning the asymptotic behaviour of the variance of T n as well as of the random variable T n itself (with respect to convergence in distribution).
Theorem 4.10. Let $p \in (0, 1)$, $0 < d \le 1 \le u$ and $u \ne d$. Then the asymptotic behaviour of the variance of the net profit $T_n = T(n, u, d, p)$ after $n$ rounds is given by
\[
\lim_{n \to \infty} V(T_n) = \begin{cases} 0, & \text{if } u^p d^q \ne 1, \\ 1, & \text{if } u^p d^q = 1. \end{cases}
\]
Moreover, the limiting distribution of the random variable $T_n$ is characterised by
\[
T_n \longrightarrow \begin{cases} -1, & \text{if } u^p d^q < 1, \\ \phantom{-}1, & \text{if } u^p d^q > 1, \\ \phantom{-}Z, & \text{if } u^p d^q = 1, \end{cases} \qquad \text{as } n \to \infty,
\]
where the limit is to be understood in the sense of convergence in distribution. The distribution of the random variable $Z$ is the two-point distribution given by $P(Z = 1) = \tfrac{1}{2} = P(Z = -1)$.

Proof. We define
\[
C_n := u^{X_n^{(p)}} d^{\,n - X_n^{(p)}}\, \mathbf{1}\big\{X_n^{(p)} \le k_0(n, u, d)\big\} \quad \text{and} \quad D_n := 2 \cdot \mathbf{1}\big\{X_n^{(p)} \ge k_0(n, u, d) + 1\big\},
\]
so that $T_n = C_n + D_n - 1$. Using the Cauchy-Schwarz inequality, we can estimate the covariance of $C_n$ and $D_n$ by the associated variances and obtain $\mathrm{Cov}(C_n, D_n)^2 \le V(C_n) \cdot V(D_n)$.
To determine the asymptotic behaviour of V(T n ), we can focus our attention on the limits of V(C n ) and V(D n ) as n → ∞ (as we will see).
The variance of $C_n$ is explicitly given by
\[
V(C_n) = \sum_{k=0}^{k_0(n,u,d)} \binom{n}{k} p^k q^{n-k} u^{2k} d^{2(n-k)} - A(n, u, d, p)^2.
\]
In order to show that $V(C_n) \to 0$ as $n \to \infty$, it suffices to prove that the first sum in the previous representation of $V(C_n)$ converges to zero as $n \to \infty$, since $A(n, u, d, p) \to 0$ by Lemma 4.3. Here we can apply again the result from Lemma 4.7, since apparently $k_0(n, u^2, d^2) = k_0(n, u, d)$ and therefore
\[
\sum_{k=0}^{k_0(n,u,d)} \binom{n}{k} p^k q^{n-k} u^{2k} d^{2(n-k)} = A(n, u^2, d^2, p) \to 0 \quad \text{as } n \to \infty.
\]
The application of Lemma 4.7 is permissible since $0 < d^2 \le 1 \le u^2$ and $d^2 \ne u^2$ are satisfied under the assumptions of the theorem.
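The convergence of this first sum to zero can be illustrated numerically. The sketch below is our own; it evaluates $A(n, u, d, p)$ by direct summation in log-space and shows that, for the 'GREED' parameters, both $A(n, u, d, p)$ and $A(n, u^2, d^2, p)$ become small as $n$ grows:

```python
from math import lgamma, log, exp, floor, fsum

def k0(n, u, d):
    """Largest k with u**k * d**(n - k) <= 1."""
    return floor(n * log(1 / d) / log(u / d))

def A(n, u, d, p):
    """A(n, u, d, p) = sum_{k=0}^{k0} C(n, k) p^k q^(n-k) u^k d^(n-k)."""
    q = 1 - p
    def term(k):
        return exp(lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
                   + k * log(p * u) + (n - k) * log(q * d))
    return fsum(term(k) for k in range(k0(n, u, d) + 1))

u, d, p = 1.5, 0.6, 0.5
for n in (100, 500, 2000):
    print(n, A(n, u, d, p), A(n, u * u, d * d, p))  # both columns decrease to 0
```

Note that $p^k u^k = (pu)^k$ and $q^{n-k} d^{n-k} = (qd)^{n-k}$, which is why the summand can be assembled from $\log(pu)$ and $\log(qd)$.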
Furthermore, the variance of $D_n$ is given by
\[
V(D_n) = 4\, P\big(X_n^{(p)} \ge k_0(n, u, d) + 1\big)\, P\big(X_n^{(p)} \le k_0(n, u, d)\big).
\]
To examine the asymptotic behaviour of the variance of D n , we distinguish three cases.
(a) If $u^p d^q < 1$, it follows that $P(X_n^{(p)} \ge k_0(n, u, d) + 1) \to 0$ as $n \to \infty$, by the same arguments as in the first case of the proof of Lemma 4.8.
(b) If $u^p d^q > 1$, it follows that $P(X_n^{(p)} \ge k_0(n, u, d) + 1) \to 1$ as $n \to \infty$. This assertion can be shown analogously to the second case in the proof of Lemma 4.8. The results in (a) and (b) imply that if $u^p d^q \ne 1$, then $V(D_n) \to 0$ as $n \to \infty$ with an exponential rate of convergence.
(c) Finally, we need to treat the case where $u^p d^q = 1$. Then it follows that $P(X_n^{(p)} \ge k_0(n, u, d) + 1) \to \tfrac{1}{2}$ and $P(X_n^{(p)} \le k_0(n, u, d)) \to \tfrac{1}{2}$ as $n \to \infty$. These assertions can be shown similarly to the third case in the proof of Lemma 4.8. Hence, it follows that $V(D_n) \to 4 \cdot \tfrac{1}{2} \cdot \tfrac{1}{2} = 1$ as $n \to \infty$. The convergence is of order $1/\sqrt{n}$.
Now we want to find the limit in distribution of the sequence of random variables (T n ) n∈N .
If $u^p d^q \ne 1$, we can immediately deduce the limit in distribution using Theorem 4.9 and $V(T_n) \to 0$ as $n \to \infty$. In this case, we conclude the formally stronger result of convergence in probability.
Now we are left with the case $u^p d^q = 1$.
First, we recall that $E[C_n] = A(n, u, d, p) \to 0$ as $n \to \infty$ (according to Lemma 4.3) and $V(C_n) \to 0$ as $n \to \infty$. This directly implies that $C_n$ converges in probability to zero as $n \to \infty$. Moreover, the asymptotic behaviour of $P(X_n^{(p)} \ge k_0(n, u, d) + 1)$, established in the proof of Lemma 4.8, shows that $D_n - 1$ takes the values $1$ and $-1$ with probabilities converging to $\tfrac{1}{2}$ each, so that $D_n - 1$ converges in distribution to $Z$. An application of Slutsky's theorem then yields the convergence in distribution of $T_n$ to $Z$.

Figure 4.7 shows the asymptotic behaviour of the variance of $T_n$ as $n$ tends to infinity in the initial situation described in 'GREED' ($u = 1.5$, $d = 0.6$, $p = 0.5$). The left-hand side illustrates the entire variance while the right-hand side shows the behaviour of the five summands introduced at the beginning of this subsection. As we would expect according to Theorem 4.10, the convergence of the variance towards zero is clearly visible.
The remaining Figures 4.8–4.10 display the variance of $T_n$ for $n = 200$ in terms of the parameters $p$, $u$ and $d$. Again, the left-hand side shows the entire variance while the right-hand side illustrates the five summands $v_1, \ldots, v_5$ separately.
The right-hand sides of Figures 4.8–4.10 suggest that the quantity $v_2$ has the strongest influence on the variance. In contrast, the terms $v_1$, $v_3$ and $v_5$ only take values close to zero. These observations are consistent with the proof of Theorem 4.10, where we proved that $v_1$, $v_3$, $v_4$ and $v_5$ converge towards zero for any admissible choice of parameters, while $v_2$ converges towards 1 in the special case $u^p d^q = 1$.
Moreover, we want to point out that the threshold phenomenon described in Theorem 4.10 is already clearly visible after n = 200 rounds.
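This threshold phenomenon can be reproduced directly: the summand $4\, P(X_n^{(p)} \ge k_0 + 1)\, P(X_n^{(p)} \le k_0)$ appearing in the representation of $V(T_n)$ is negligible away from the critical success probability $p^{\ast} = \ln(1/d)/\ln(u/d)$ and close to 1 at $p^{\ast}$. The following sketch (our own illustration) evaluates it for $n = 200$ and the 'GREED' values $u = 1.5$, $d = 0.6$:

```python
from math import lgamma, log, exp, floor, fsum

def v2(n, u, d, p):
    """The summand 4 * P(X >= k0 + 1) * P(X <= k0) for X ~ Bin(n, p)."""
    k0 = floor(n * log(1 / d) / log(u / d))  # largest k with u^k d^(n-k) <= 1
    def pmf(k):
        return exp(lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
                   + k * log(p) + (n - k) * log(1 - p))
    tail = fsum(pmf(k) for k in range(k0 + 1, n + 1))  # P(X >= k0 + 1)
    return 4 * tail * (1 - tail)

n, u, d = 200, 1.5, 0.6
p_crit = log(1 / d) / log(u / d)  # u^p d^(1-p) = 1 exactly at p = p_crit
for p in (0.40, 0.50, p_crit, 0.65, 0.75):
    print(round(p, 3), v2(n, u, d, p))
```

The printed values exhibit a pronounced spike near $p^{\ast} \approx 0.557$, matching the transition described in Theorem 4.10.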

Some simulations and a generalised birthday problem
The following section includes some simulations of the development of the score over time as well as the simulation of the final net profit in 100 repetitions of Elsberg's gamble. As we are working in the initial situation described by Elsberg, the underlying scenario is characterised by the game parameters a = 100, n = 100, u = 1.5, d = 0.6 and p = 0.5.
The simulations were generated using Python.
The random.binomial(n, p) function integrated in the NumPy library was used to generate binomially distributed pseudo-random numbers. Subsequently, the scores at the end of each round were calculated as part of a simulation study over 100 rounds.

Figure 5.1 shows eight simulations of the score over time, that is, as a function of successive rounds. It should not come as a surprise that at least two of the eight simulations exhibit the same final score. The final score only depends on the number $k$ of the $n$ rounds in which the coin shows 'heads'. The probability of the event of 'heads' occurring $k$ times for a fair coin is just $p_k = \binom{n}{k} 2^{-n}$ for $k \in \{0, 1, \ldots, n\}$. If the eight simulations are done independently, the probability $P_8$ that all eight final scores are different is
\[
P_8 = 8! \sum_{|I| = 8}\, \prod_{i \in I} p_i,
\]
where the summation extends over all 8-element subsets $I$ of $\{0, 1, \ldots, n\}$. Maclaurin's inequality [32, (5)] (see [20, 24] for alternative arguments) yields
\[
P_8 \le 8!\, \binom{n+1}{8} \Big(\frac{1}{n+1}\Big)^{8},
\]
where equality holds for positive probabilities $p_i$ if and only if $p_0 = \cdots = p_n = 1/(n+1)$. For $n = 100$,
\[
1 - P_8 \ge 1 - \frac{100 \cdot 99 \cdots 94}{101^{7}} \ge 0.24
\]
thus is a lower bound for the probability that at least two of eight independent simulations attain the same final score. In other words, we have estimated the probability of at least two equal final scores for non-uniform random variables in terms of the uniform case. The present question represents a generalised birthday problem (see [17, 26, 34]).
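The numerical value of this bound is quickly verified; the following standalone check (our own, using only the Python standard library) evaluates the uniform case with 101 equally likely final scores:

```python
from math import prod

# Uniform case: P_8 = (101 * 100 * ... * 94) / 101**8 = (100 * ... * 94) / 101**7.
p8_uniform = prod(range(94, 101)) / 101 ** 7
collision_bound = 1 - p8_uniform  # lower bound on P(at least two equal scores)
print(round(collision_bound, 4))  # approximately 0.2475, hence >= 0.24
```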
In Figure 5.2 the net profits achieved in 100 repetitions of Elsberg's game of chance are shown. In exactly 14 of the 100 simulations of the gamble, the maximal net profit of 100 euros is realised.
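The observed relative frequency 14/100 is consistent with the exact probability of ending in the profit zone. Assuming, as in the analysis above, that the maximal net profit is attained precisely when the number of heads exceeds the threshold $k_0(n, u, d)$, this probability can be computed exactly; the sketch below is our own check:

```python
from math import comb, floor, log

n, u, d = 100, 1.5, 0.6
k0 = floor(n * log(1 / d) / log(u / d))  # largest k with u^k d^(n-k) <= 1
assert u ** k0 * d ** (n - k0) <= 1 < u ** (k0 + 1) * d ** (n - k0 - 1)

# P(X >= k0 + 1) for X ~ Bin(100, 0.5): probability of ending in the profit zone
p_win = sum(comb(n, k) for k in range(k0 + 1, n + 1)) / 2 ** n
print(k0, p_win)  # 55 and roughly 0.136
```

So at least 56 of the 100 tosses must show 'heads', an event of probability roughly 0.136, which matches the 14 successes observed in 100 repetitions.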
Acknowledgement. The authors would like to thank Annette Hug for the idea for this project and numerous inspiring discussions. The authors are also grateful to Richard Gardner, Norbert Henze and Rolf Schneider for their friendly feedback and support.

Conflicts of interest.
The authors declare that they have no conflicts of interest and no competing interests.