
Gambling Problems

Understanding Markov Chains

Part of the book series: Springer Undergraduate Mathematics Series ((SUMS))


Abstract

This chapter consists of a detailed study of a fundamental example of random walk that can only evolve by going up or down by one unit within the finite state space \(\{ 0 , 1, \ldots , S\}\). In particular, this allows us to have a first look at the technique of first step analysis, which will be used repeatedly in the general framework of Markov chains, particularly in Chap. 5.


Notes

  1. Due to the relation \((f+g)(k)=f(k)+g(k)\) we can check that if f and g are two solutions of (2.2.6) then \(f+g\) is also a solution of (2.2.6), hence the equation is linear.

  2. Where did we get this idea? From intuition, experience, or empirically, by trial and error.

  3. From the Latin “id est”, meaning “that is”.

  4. Exercise: check by hand that the equality to 1 holds as stated.

  5. In this game, the payout is \(\$2\) and the payout percentage is 2p.

  6. The notation “\(\inf \)” stands for “infimum”, meaning the smallest \(n\ge 0\) such that \(X_n = 0\) or \(X_n=S\), if such an n exists.

  7. “Almost surely” means “with probability 1”.

  8. Recall that an infinite set of finite data values may have an infinite average.

  9. This point is left as an exercise.

  10. Also called a “lazy random walk”.

Correspondence to Nicolas Privault.

Exercises

Exercise 2.1

We consider a gambling problem with the possibility of a draw (Note 10), i.e. at time n the gain \(X_n\) of Player A can increase by one unit with probability \(r \in (0,1/2]\), decrease by one unit with probability r, or remain unchanged with probability \(1-2r\). We let

$$ f(k) : = \mathbb {P}( R_A \mid X_0 = k ) $$

denote the probability of ruin of Player A, and let

$$ h(k) : = \mathrm{I\!E}[ T_{0,S} \mid X_0 = k ] $$

denote the expectation of the game duration \(T_{0,S}\) starting from \(X_0=k\), \(k=0,1,\ldots , S\).

  1. (a)

Using first step analysis, write down the difference equation satisfied by f(k) and its boundary conditions, \(k=0,1,\ldots , S\). We refer to this equation as the homogeneous equation.

  2. (b)

    Solve the homogeneous equation of Question (a) by your preferred method. Is this solution compatible with your intuition of the problem? Why?

  3. (c)

    Using first step analysis, write down the difference equation satisfied by h(k) and its boundary conditions, \(k=0,1,\ldots , S\).

  4. (d)

    Find a particular solution of the equation of Question (c).

  5. (e)

    Solve the equation of Question (c).

    Hint: recall that the general solution of the equation is the sum of a particular solution and a solution of the homogeneous equation.

  6. (f)

    How does the mean duration h(k) behave as r goes to zero? Is this solution compatible with your intuition of the problem? Why?
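The two systems in Exercise 2.1 are small enough to check numerically. The sketch below is an aid rather than the intended pen-and-paper derivation: it solves the first step analysis equations of Questions (a) and (c) as linear systems and compares them with the closed forms they suggest, \(f(k)=(S-k)/S\) and \(h(k)=k(S-k)/(2r)\). The helper name `solve_fsa` and the sample values of S and r are illustrative choices.

```python
import numpy as np

# Numerical sketch for Exercise 2.1: solve the first step analysis
# equations as linear systems and compare with candidate closed forms.
S, r = 10, 0.3  # illustrative values; any S >= 1 and r in (0, 1/2] work

def solve_fsa(rhs, bnd0, bndS):
    """Solve 2r x(k) - r x(k+1) - r x(k-1) = rhs for k = 1..S-1."""
    A = np.zeros((S + 1, S + 1))
    b = np.zeros(S + 1)
    A[0, 0], b[0] = 1.0, bnd0
    A[S, S], b[S] = 1.0, bndS
    for k in range(1, S):
        A[k, k - 1] = A[k, k + 1] = -r
        A[k, k] = 2 * r
        b[k] = rhs
    return np.linalg.solve(A, b)

# f(k) = r f(k+1) + r f(k-1) + (1-2r) f(k): the draw term cancels,
# leaving the symmetric equation, so the ruin probability ignores r.
f = solve_fsa(0.0, 1.0, 0.0)   # boundary: f(0) = 1, f(S) = 0
# h(k) = 1 + r h(k+1) + r h(k-1) + (1-2r) h(k): duration scales as 1/r.
h = solve_fsa(1.0, 0.0, 0.0)   # boundary: h(0) = h(S) = 0

k = np.arange(S + 1)
assert np.allclose(f, (S - k) / S)
assert np.allclose(h, k * (S - k) / (2 * r))
```

Note that f does not depend on r: the draw probability only slows the game down, which is consistent with h scaling like 1/r.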

Exercise 2.2

Recall that for any standard gambling process \((Z_k)_{k\in {\mathord {\mathbb N}}}\) on a state space \(\{a, a+1,\ldots , b-1,b\}\) with absorption at states \(a\) and \(b\) and probabilities \(p\not = q\) of moving by \(\pm 1\), the probability of hitting state \(a\) before hitting state \(b\) after starting from state \(Z_0=k \in \{ a, a+1,\ldots , b-1,b\}\) is given by

$$\begin{aligned} \frac{1-(p/q)^{b-k}}{1-(p/q)^{b-a}}. \end{aligned}$$
(2.3.18)

In questions (a), (b), (c) below we consider a gambling process \((X_k)_{k\in {\mathord {\mathbb N}}}\) on the state space \(\{0,1,\ldots , S\}\) with absorption at \(0\) and \(S\) and probabilities \(p\not = q\) of moving by \(\pm 1\).

  1. (a)

    Using Relation (2.3.18), give the probability of coming back in finite time to a given state \(m\in \{1,2,\ldots , S-1\}\) after starting from \(X_0 = k \in \{ m+1,\ldots , S \}\).

  2. (b)

Using Relation (2.3.18), give the probability of coming back in finite time to the given state \(m\in \{1,2,\ldots , S-1\}\) after starting from \(X_0=k \in \{ 0,1,\ldots , m-1\}\).

  3. (c)

Using first step analysis, give the probability of coming back to state \(m\) in finite time after starting from \(X_0=m\).

  4. (d)

Using first step analysis, compute the mean time needed to either come back to \(m\) or to reach one of the two boundaries \(\{0,S\}\), whichever comes first.

  5. (e)

Repeat the above questions (c), (d) with equal probabilities \(p= q=1/2\), in which case the probability of hitting state \(a\) before hitting state \(b\) after starting from state \(Z_0=k\) is given by

    $$\begin{aligned} \frac{b-k}{b-a}, \qquad k=a, a+1,\ldots , b-1,b. \end{aligned}$$
    (2.3.19)
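Relation (2.3.18) and its symmetric counterpart (2.3.19) can be checked by simulation. A minimal Monte Carlo sketch, with illustrative values of a, b, k and p:

```python
import random

# Monte Carlo check of (2.3.18)/(2.3.19): the probability that a +/-1
# walk with up-probability p, started at k, hits a before b.
def hit_a_before_b(a, b, k, p, trials=100_000, rng=random.Random(0)):
    hits = 0
    for _ in range(trials):
        x = k
        while a < x < b:
            x += 1 if rng.random() < p else -1
        hits += (x == a)
    return hits / trials

a, b, k, p = 0, 8, 3, 0.4
q = 1 - p
exact = (1 - (p / q) ** (b - k)) / (1 - (p / q) ** (b - a))  # (2.3.18)
assert abs(hit_a_before_b(a, b, k, p) - exact) < 0.01
# symmetric case p = q = 1/2, Relation (2.3.19):
assert abs(hit_a_before_b(a, b, k, 0.5) - (b - k) / (b - a)) < 0.01
```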

Exercise 2.3

Consider a gambling process \((X_n)_{n\in {\mathord {\mathbb N}}}\) on the state space \(\mathbb {S}= \{ 0, 1, \ldots , S \}\), with probability p, resp. q, of moving up, resp. down, at each time step. For \(k = 0,1,\ldots , S\), let \(\tau _k\) denote the first hitting time

$$ \tau _k : = \inf \{ n \ge 0 \ : X_n = k \}. $$

of state \(k\) by the process \((X_n)_{n\in {\mathord {\mathbb N}}}\), and let

$$ p_k: = \mathbb {P}( \tau _{k+1} < \tau _0 \mid X_0 = k), \qquad k = 0,1,\ldots , S-1, $$

denote the probability of hitting state \(k+1\) before hitting state \(0\).

  1. (a)

    Show that \(p_k = \mathbb {P}( \tau _{k+1} < \tau _0 \mid X_0 = k)\) satisfies the recurrence equation

    $$\begin{aligned} p_k = p + q p_{k-1}p_k, \qquad k = 1,2,\ldots , S-1, \end{aligned}$$
    (2.3.20)

    i.e.

    $$ p_k = \frac{p}{1 - q p_{k-1}}, \qquad k = 1,2,\ldots , S-1. $$
  2. (b)

    Check by substitution that the solution of (2.3.20) is given by

    $$\begin{aligned} p_k = \frac{1-(q/p)^k}{1-(q/p)^{k+1}}, \qquad k=0,1,\ldots , S-1. \end{aligned}$$
    (2.3.21)
  3. (c)

Compute \(\mathbb {P}( \tau _S < \tau _0 \mid X_0 = k )\) by a product formula and recover (2.2.11) and (2.2.27) based on the solution (2.3.21) of Question (b).

  4. (d)

    Show that (2.2.12) and (2.2.28) can be recovered in a similar way in the symmetric case \(p=q=1/2\) by trying the solution \(p_k = k/(k+1)\), \(k=0,1,\ldots , S-1\).
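The recurrence of Question (a) and the closed form (2.3.21) lend themselves to a quick numerical check; the telescoping product of Question (c) then recovers the ruin-type probability \((1-(q/p)^k)/(1-(q/p)^S)\) of hitting S before 0. A sketch with illustrative values:

```python
from math import prod

# Sketch for Exercise 2.3: iterate p_k = p/(1 - q p_{k-1}) from p_0 = 0,
# compare with the closed form (2.3.21), and check that the product
# p_k p_{k+1} ... p_{S-1} telescopes to (1 - (q/p)^k)/(1 - (q/p)^S).
S, p = 12, 0.55
q = 1 - p

pk = [0.0]                                 # p_0 = 0: starting at 0, tau_0 = 0
for k in range(1, S):
    pk.append(p / (1 - q * pk[k - 1]))     # recurrence (2.3.20)

r = q / p
for k in range(S):                         # closed form (2.3.21)
    assert abs(pk[k] - (1 - r**k) / (1 - r**(k + 1))) < 1e-9

for k in range(1, S):
    # hit k+1 before 0, then k+2 before 0, etc.: the product telescopes
    assert abs(prod(pk[k:S]) - (1 - r**k) / (1 - r**S)) < 1e-9
```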

Exercise 2.4

Consider a gambling process \((X_n)_{n\in {\mathord {\mathbb N}}}\) on the state space \(\{0,1,2\}\), where \(0< p < 1\) and \(q=1-p\). In this game, Player A is allowed to “rebound” from state 0 to state 1 with probability p, and state 2 is absorbing.

In order to be ruined, Player A has to visit state 0 twice. Let

$$ f(k) : = \mathbb {P}( R_A \mid X_0 = k ), \qquad k = 0,1,2, $$

denote the probability of ruin of Player A starting from \(k=0,1,2\). Starting from 0 counts as one visit to 0.

  1. (a)

    Compute the boundary condition f(0) using pathwise analysis.

  2. (b)

    Give the value of the boundary condition f(2), and compute f(1) by first step analysis.
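The full transition matrix for this exercise did not survive extraction, so the Monte Carlo sketch below rests on an assumed reading of the dynamics: from state 0 the player rebounds to 1 with probability p and registers a second visit to 0 (ruin) with probability q, from state 1 the walk moves up to 2 with probability p and down to 0 with probability q, and state 2 is absorbing. Under these assumptions, pathwise analysis gives \(f(0) = q + pq = 1 - p^2\), since ruin from 0 is avoided only along the path \(0 \rightarrow 1 \rightarrow 2\).

```python
import random

# Hedged Monte Carlo sketch for Exercise 2.4 under ASSUMED transitions:
#   P(0->1) = p, P(0->0) = q, P(1->2) = p, P(1->0) = q, P(2->2) = 1.
def ruin_prob(start, p, trials=100_000, rng=random.Random(1)):
    ruined = 0
    for _ in range(trials):
        x = start
        visits = 1 if start == 0 else 0    # starting from 0 counts as a visit
        while x != 2 and visits < 2:
            if x == 0:
                x = 1 if rng.random() < p else 0
            else:  # x == 1
                x = 2 if rng.random() < p else 0
            visits += (x == 0)
        ruined += (visits == 2)
    return ruined / trials

p = 0.6
assert abs(ruin_prob(0, p) - (1 - p**2)) < 0.01          # pathwise: q + p q
assert abs(ruin_prob(1, p) - (1 - p) * (1 - p**2)) < 0.01  # reach 0 first, then ruin
```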

Exercise 2.5

  1. (a)

Recover (2.3.17) from (2.3.11) by letting p go to 1/2, i.e. when \(r=q/p\) goes to 1.

  2. (b)

Recover (2.2.21) from (2.2.11) by letting p go to 1/2, i.e. when \(r=q/p\) goes to 1.
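The limiting procedure can be illustrated numerically. Equations (2.2.11), (2.2.21), (2.3.11) and (2.3.17) are not restated in this excerpt, but the ruin formulas of this chapter are ratios of the type \((1-r^k)/(1-r^S)\) with \(r=q/p\) (see Exercise 2.3), and letting \(r\to 1\) produces the symmetric counterpart \(k/S\):

```python
# Numeric sketch of the r -> 1 limit behind Exercise 2.5: the ratio
# (1 - r^k)/(1 - r^S) tends to k/S, by l'Hopital: k r^(k-1) / (S r^(S-1)).
def ratio(r, k, S):
    return (1 - r**k) / (1 - r**S)

for eps in (1e-3, 1e-5):
    for r in (1 - eps, 1 + eps):
        assert abs(ratio(r, 3, 7) - 3 / 7) < 1e-2
```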

Exercise 2.6

Extend the setting of Exercise 2.1 to a non-symmetric gambling process with draw and respective probabilities \(\alpha >0\), \(\beta >0\), and \(1-\alpha -\beta >0\) of increase, decrease, and draw. Compute the ruin probability f(k) and the mean game duration h(k) in this extended framework. Check that when \(\alpha = \beta \in (0,1/2)\) we recover the result of Exercise 2.1.
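As in Exercise 2.1, the extended equations can be verified numerically. The sketch below assumes the first step analysis equations \((\alpha+\beta) f(k) = \alpha f(k+1) + \beta f(k-1)\) and \((\alpha+\beta) h(k) = 1 + \alpha h(k+1) + \beta h(k-1)\), which follow from the stated transition probabilities; the closed form tested for f is the classical ruin probability with ratio \(\beta/\alpha\), and \(\alpha = \beta\) recovers the answer of Exercise 2.1.

```python
import numpy as np

# Sketch for Exercise 2.6: solve the first-step equations numerically.
def solve(S, alpha, beta, rhs, bnd0, bndS):
    A = np.zeros((S + 1, S + 1)); b = np.zeros(S + 1)
    A[0, 0], b[0] = 1.0, bnd0
    A[S, S], b[S] = 1.0, bndS
    for k in range(1, S):
        A[k, k] = alpha + beta
        A[k, k + 1], A[k, k - 1] = -alpha, -beta
        b[k] = rhs
    return np.linalg.solve(A, b)

S, alpha, beta = 10, 0.2, 0.3
k = np.arange(S + 1)

f = solve(S, alpha, beta, 0.0, 1.0, 0.0)   # ruin: f(0) = 1, f(S) = 0
rho = beta / alpha                          # effective down/up ratio
assert np.allclose(f, (rho**k - rho**S) / (1 - rho**S))

h_sym = solve(S, 0.2, 0.2, 1.0, 0.0, 0.0)  # alpha = beta: Exercise 2.1
assert np.allclose(h_sym, k * (S - k) / (2 * 0.2))
```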

Problem 2.7

We consider a discrete-time process \((X_n)_{n\ge 0}\) that models the wealth of a gambler within \(\{ 0,1,\ldots , S\}\), with the transition probabilities

$$ \left\{ \begin{array}{l} \mathbb {P}( X_{n+1} = k + 1 \mid X_n = k ) = p, \quad k=0,1,\ldots , S-1, \\ \\ \mathbb {P}( X_{n+1} = k - 1 \mid X_n = k ) = q, \quad k=1,2,\ldots , S, \end{array} \right. $$

and

$$ \mathbb {P}( X_{n+1} = 0 \mid X_n = 0 ) = q, $$

for all \(n\in {\mathord {\mathbb N}}= \{0,1,2,\ldots \}\), where \(q=1-p\) and \(p\in (0,1]\). In this model the gambler is given a second chance, and may be allowed to “rebound” after reaching 0. Let

$$ W = \bigcup _{n\in {\mathord {\mathbb N}}} \{ X_n = S \} $$

denote the event that the player eventually wins the game.

  1. (a)

    Let

    $$ g(k) := \mathbb {P}( W \mid X_0 = k) $$

    denote the probability that the player eventually wins after starting from state \(k\in \{0,1,\ldots , S\}\). Using first step analysis, write down the difference equations satisfied by g(k), \(k=0,1,\ldots , S-1\), and their boundary condition(s), which may not be given in explicit form. This question is standard, however one has to pay attention to the special behavior of the process at state 0.

  2. (b)

    Obtain \(\mathbb {P}( W \mid X_0 = k)\) for all \(k=0,1,\ldots , S\) as the unique solution to the system of equations stated in Question (a). The answer to this question is very simple and can be obtained through intuition. However, a (mathematical) proof is required.

  3. (c)

    Let

    $$ T_S = \inf \{ n \ge 0 \ : \ X_n = S \} $$

    denote the first hitting time of S by the process \((X_n)_{n\ge 0}\). Let

    $$ h(k) := \mathrm{I\!E}[ T_S \mid X_0 = k] $$

    denote the expected time until the gambler wins after starting from state \(k\in \{0,1,\ldots , S\}\). Using first step analysis, write down the difference equations satisfied by h(k) for \(k=0,1,\ldots , S-1\), and state the corresponding boundary condition(s). Again, one has to pay attention to the special behavior of the process at state 0, as the equation obtained by first step analysis for h(0) will take a particular form and can be viewed as a second boundary condition.

  4. (d)

    Compute \(\mathrm{I\!E}[ T_S \mid X_0 = k ]\) for all \(k=0,1,\ldots , S\) by solving the equations of Question (c).

    This question is more difficult than Question (b), and it could be skipped at first reading since its result is not used in the sequel. One can solve the homogeneous equation for \(k=1,2,\ldots , S-1\) using the results of Sect. 2.3, and a particular solution can be found by observing that here we consider the time until Player A (not B) wins. As usual, the cases \(p\not = q\) and \(p=q=1/2\) have to be considered separately at some point. The formula obtained for \(p=1\) should be quite intuitive and may help you check your result.

  5. (e)

    Let now

    $$ T_0 = \inf \{ n \ge 0 \ : \ X_n = 0 \} $$

    denote the first hitting time of 0 by the process \((X_n)_{n\ge 0}\). Using the results of Sect. 2.2 for the ruin of Player B, write down the value of

    $$ p_k : = \mathbb {P}( T_S < T_0 \mid X_0 = k ) $$

    as a function of p, S, and \(k=0,1,\ldots , S\).

    Note that according to the notation of this chapter, \(\{ T_S < T_0 \}\) denotes the event “Player A wins the game”.

  6. (f)

    Explain why the equality

    $$\begin{aligned} \mathbb {P}( T_S< T_0 \mid X_1 = k +1 \text { and } X_0 = k ) & = \mathbb {P}( T_S< T_0 \mid X_1 = k +1 ) \nonumber \\ & = \mathbb {P}( T_S < T_0 \mid X_0 = k+1 ) \end{aligned}$$
    (2.3.22)

    holds for \(k\in \{0,1,\ldots , S -1 \}\) (an explanation in words will be sufficient here).

  7. (g)

    Using Relation (2.3.22), show that the probability

    $$ \mathbb {P}( X_1 = k+1 \mid X_0 = k \text{ and } T_S < T_0 ) $$

    of an upward step given that state S is reached first, is equal to

    $$\begin{aligned} \mathbb {P}( X_1 = k+1 \mid X_0 = k \text{ and } T_S< T_0 ) = p \frac{\mathbb {P}(T_S< T_0 \mid X_0 = k+1 )}{ \mathbb {P}(T_S < T_0 \mid X_0 = k )} = p \frac{p_{k+1}}{p_k} , \end{aligned}$$
    (2.3.23)

    \(k = 1,2,\ldots , S-1\), to be computed explicitly from the result of Question (e). How does this probability compare to the value of p?

    No particular difficulty here, the proof should be a straightforward application of the definition of conditional probabilities.

  8. (h)

    Compute the probability

    $$ \mathbb {P}( X_1 = k-1 \mid X_0 = k \text{ and } T_0 < T_S ), \qquad k = 1,2,\ldots , S, $$

    of a downward step given that state 0 is reached first, using similar arguments to Question (g).

  9. (i)

    Let

    $$ h(k) = \mathrm{I\!E}[ T_S \mid X_0 = k, \ T_S < T_0 ], \qquad k = 1,2,\ldots , S, $$

denote the expected time until the player wins, given that state 0 is never reached. Using the transition probabilities (2.3.23), state the finite difference equations satisfied by h(k), \(k=1,2,\ldots , S-1\), and their boundary condition(s). The derivation of the equation is standard, but you have to make careful use of the conditional transition probabilities given \(\{T_S < T_0 \}\). There is an issue as to whether and how h(0) should appear in the system of equations, but this point can be resolved.

  10. (j)

    Solve the equation of Question (i) when \(p=1/2\) and compute h(k) for \(k = 1,2,\ldots , S\). What can be said of h(0)?

There is actually a way to transform this equation using a homogeneous equation already solved in Sect. 2.3.
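For Questions (a), (b) and (d) of Problem 2.7, the win probability and mean hitting time can be cross-checked numerically. Since state 0 is not absorbing, for any \(p>0\) the chain reaches S with probability 1, so \(g\equiv 1\); the sketch below solves the mean-time system, including the modified equation at 0, and checks the intuitive case \(p=1\):

```python
import numpy as np

# Sketch for Problem 2.7: h(S) = 0, h(k) = 1 + p h(k+1) + q h(k-1) for
# 1 <= k <= S-1, and at 0 the rebound equation h(0) = 1 + p h(1) + q h(0),
# i.e. p h(0) = 1 + p h(1), acting as a second boundary condition.
def mean_time_to_win(S, p):
    q = 1 - p
    A = np.zeros((S + 1, S + 1)); b = np.zeros(S + 1)
    A[0, 0], A[0, 1], b[0] = p, -p, 1.0   # rebound equation at state 0
    A[S, S] = 1.0                          # h(S) = 0
    for k in range(1, S):
        A[k, k] = 1.0
        A[k, k + 1], A[k, k - 1] = -p, -q
        b[k] = 1.0
    return np.linalg.solve(A, b)

S = 8
k = np.arange(S + 1)
assert np.allclose(mean_time_to_win(S, 1.0), S - k)  # p = 1: straight climb
# For any p > 0 the player wins eventually (g = 1), so h is finite:
assert np.all(np.isfinite(mean_time_to_win(S, 0.3)))
```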

Problem 2.8

Let \(S \ge 1\). We consider a discrete-time process \((X_n)_{n\ge 0}\) that models the wealth of a gambler within \(\{ 0 , 1,\ldots , S \}\), with the transition probabilities

$$ \mathbb {P}( X_{n+1} = k + 2 \mid X_n = k ) = p, \quad \mathbb {P}( X_{n+1} = k - 1 \mid X_n = k ) = 2p, $$

and

$$ \mathbb {P}( X_{n+1} = k \mid X_n = k ) = r, \quad k \in {\mathord {\mathbb Z}}, $$

for all \(n \in {\mathord {\mathbb N}}= \{0,1,2,\ldots \}\), where \(p>0\), \(r \ge 0\), and \(3p+r=1\). We let

$$ \tau : = \inf \{ n \ge 0 \ : \ X_n \le 0 \text{ or } X_n \ge S \}. $$
  1. (a)

    Consider the probability

    $$ g(k) : = \mathbb {P}( X_\tau \ge S \mid X_0 = k ) $$

    that the game ends with Player A winning the game, starting from \(X_0 = k\). Give the values of g(0), g(S) and \(g(S+1)\).

  2. (b)

    Using first step analysis, write down the difference equation satisfied by g(k), \(k=1,2,\ldots , S-1\), and its boundary conditions, by taking overshoot into account. We refer to this equation as the homogeneous equation.

  3. (c)

    Solve the equation of Question (b) from its characteristic equation as in (2.2.15).

  4. (d)

    Does the answer to Question (c) depend on p? Why?

  5. (e)

    Consider the expected time

    $$ h(k) : = \mathrm{I\!E}[ \tau \mid X_0 = k ], \qquad k=0,1,\ldots , S+1, $$

    spent until the end of the game. Give the values of h(0), h(S) and \(h(S+1)\).

  6. (f)

    Using first step analysis, write down the difference equation satisfied by h(k), \(k=1,2,\ldots , S-1\), and its boundary conditions.

  7. (g)

Find a particular solution of the equation of Question (f).

  8. (h)

Solve the equation of Question (f).

    Hint: the general solution of the equation is the sum of a particular solution and a solution of the homogeneous equation.

  9. (i)

    How does the mean duration h(k) behave as p goes to zero? Is this compatible with your intuition of the problem? Why?

  10. (j)

    How do the values of g(k) and h(k) behave for fixed \(k \in \{1,2,\ldots , S-1 \}\) as S tends to infinity?
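The p-independence claimed in Question (d) of Problem 2.8 can be confirmed numerically: with \(3p+r=1\), dividing the first step equation \((1-r) g(k) = p\, g(k+2) + 2p\, g(k-1)\) by p leaves \(3 g(k) = g(k+2) + 2 g(k-1)\), whose characteristic equation \(x^3 - 3x + 2 = (x-1)^2(x+2) = 0\) has roots 1, 1 and \(-2\). A sketch solving the boundary-value system for two values of p:

```python
import numpy as np

# Sketch for Problem 2.8 (b)-(d): solve g(0) = 0, g(S) = g(S+1) = 1
# (overshoot state S+1 included) with the interior first-step equation.
def win_prob(S, p):
    r = 1 - 3 * p
    A = np.zeros((S + 2, S + 2)); b = np.zeros(S + 2)
    A[0, 0] = 1.0                            # g(0) = 0
    A[S, S], b[S] = 1.0, 1.0                 # g(S) = 1
    A[S + 1, S + 1], b[S + 1] = 1.0, 1.0     # g(S+1) = 1
    for k in range(1, S):
        A[k, k] = 1 - r                      # (1-r) g(k) = p g(k+2) + 2p g(k-1)
        A[k, k + 2], A[k, k - 1] = -p, -2 * p
    return np.linalg.solve(A, b)

S = 9
g1, g2 = win_prob(S, 0.1), win_prob(S, 0.3)
assert np.allclose(g1, g2)                   # the answer does not depend on p
for k in range(1, S):                        # reduced equation 3g(k) = g(k+2) + 2g(k-1)
    assert abs(3 * g1[k] - g1[k + 2] - 2 * g1[k - 1]) < 1e-9
```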

Problem 2.9

Consider a gambling process \((X_n)_{n\ge 0}\) on the state space \(\mathbb {S}= \{0,1,\ldots , S\}\), with transition probabilities

$$ \mathbb {P}( X_{n+1} = k+1 \mid X_n = k ) = p, \qquad \mathbb {P}( X_{n+1} = k-1 \mid X_n = k ) = q, $$

\(k=1,2,\ldots , S-1\), with \(p+q=1\). Let

$$ \tau := \inf \{ n \ge 0 \ : \ X_n = 0 \text{ or } X_n = S \} $$

denote the time until the process hits either state 0 or state S, and consider the second moment

$$ h(k) : = \mathrm{I\!E}\big [\tau ^2 \mid X_0 = k \big ] , $$

of \(\tau \) after starting from \(k = 0,1,2,\ldots , S\).

  1. (a)

    Give the values of h(0) and h(S).

  2. (b)

    Using first step analysis, find an equation satisfied by h(k) and involving \( \mathrm{I\!E}[ \tau \mid X_0 = k +1 ]\) and \( \mathrm{I\!E}[ \tau \mid X_0 = k - 1]\), \(k=1,2,\ldots , S-1\).

  3. (c)

    From now on we take \(\underline{p=q=1/2}\). Recall that in this case we have

    $$ \mathrm{I\!E}[ \tau \mid X_0 = k ] = (S-k) k, \qquad k = 0,1,\ldots , S. $$

    Show that the function h(k) satisfies the finite difference equation

    $$\begin{aligned} h(k) = - 1 + 2 (S-k) k + \frac{1}{2} h(k+1) + \frac{1}{2} h(k-1) , \qquad k=1,2,\ldots , S-1. \end{aligned}$$
    (2.3.24)
  4. (d)

    Knowing that

    $$ k \longmapsto \frac{2}{3} k^2 - \frac{2S}{3} k^3 + \frac{k^4}{3} $$

    is a particular solution of the equation (2.3.24) of Question (c), and that the solution of the homogeneous equation

    $$\begin{aligned} f(k) = \frac{1}{2} f(k+1) + \frac{1}{2} f(k-1) , \qquad k=1,2,\ldots , S-1, \end{aligned}$$

    takes the form

    $$ f(k) = C_1 + C_2 k, $$

    compute the value of the expectation h(k) solution of (2.3.24) for all \(k=0,1,\ldots , S\).

  5. (e)

    Compute the variance

    $$ v(k) = \mathrm{I\!E}\big [ \tau ^2 \mid X_0 = k \big ] - \big ( \mathrm{I\!E}[ \tau \mid X_0 = k ] \big )^2 $$

    of the game duration starting from \(k = 0,1,\ldots , S\).

  6. (f)

    Compute v(1) when \(\underline{S=2}\) and explain why the result makes pathwise sense.
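A numerical cross-check of Problem 2.9 in the symmetric case: solving (2.3.24) as a linear system reproduces the quartic solution assembled in Question (d), and for \(S=2\) one finds \(h(1)=1\) and hence \(v(1)=0\), which makes pathwise sense since from \(k=1\) both neighbours are absorbing and \(\tau = 1\) along every path.

```python
import numpy as np

# Sketch for Problem 2.9 with p = q = 1/2: solve (2.3.24),
# h(k) = -1 + 2(S-k)k + (h(k+1) + h(k-1))/2, with h(0) = h(S) = 0.
def second_moment(S):
    A = np.zeros((S + 1, S + 1)); b = np.zeros(S + 1)
    A[0, 0] = 1.0; A[S, S] = 1.0            # boundaries h(0) = h(S) = 0
    for k in range(1, S):
        A[k, k] = 1.0
        A[k, k + 1] = A[k, k - 1] = -0.5
        b[k] = -1 + 2 * (S - k) * k
    return np.linalg.solve(A, b)

S = 6
h = second_moment(S)
k = np.arange(S + 1)
# given particular solution plus the affine term fixed by the boundaries
part = (2 / 3) * k**2 - (2 * S / 3) * k**3 + k**4 / 3
c2 = -part[S] / S                            # C1 = 0, C2 from h(S) = 0
assert np.allclose(h, part + c2 * k)

h2 = second_moment(2)                        # S = 2: tau = 1 surely from k = 1
mean = (2 - 1) * 1                           # E[tau | X_0 = 1] = (S - k) k = 1
assert abs(h2[1] - 1.0) < 1e-9
assert abs(h2[1] - mean**2) < 1e-9           # variance v(1) = 0
```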

Copyright information

© 2018 Springer Nature Singapore Pte Ltd.

About this chapter


Cite this chapter

Privault, N. (2018). Gambling Problems. In: Understanding Markov Chains. Springer Undergraduate Mathematics Series. Springer, Singapore. https://doi.org/10.1007/978-981-13-0659-4_2
