Generalized Gambler’s Ruin Problem: Explicit Formulas via Siegmund Duality

We give explicit formulas for the ruin probabilities in a multidimensional Generalized Gambler's ruin problem. The generalization is best interpreted as a game of one player against d other players, allowing arbitrary winning and losing probabilities (including ties) that may depend on the current fortune held with a particular player. It includes many previously studied generalizations as special cases. Instead of the usually employed first-step analysis, we use dualities between Markov chains. We give a general procedure for solving ruin-like problems via Siegmund duality for Markov chains on partially ordered state spaces, studied recently in the context of Möbius monotonicity.


Introduction
The Gambler's ruin problem has long played an important role in applied mathematics. There are applications in casino games, e.g., craps (Isaac 1995) and blackjack (Snell 2009), in physics (El-Shehawey 2000; Yamamoto 2013), hydrology (Tsai et al. 2014), biology and epidemic models (Harik et al. 1999), and finance (Scott 1981; Rolski et al. 2009; Asmussen and Albrecher 2010), to mention just a few. There are many variations of the problem; newer ones are formulated once older ones are solved. For example, the following variations have been proposed: an infinite amount of money, three or more players (Kmet and Petkovšek 2002; Rocha and Stern 2004), the attrition variation (which applies, e.g., to the World Series or the Stanley Cup finals, Kaigh 1979), and some cases of winning probabilities depending on the current fortune (El-Shehawey 2009; Lefebvre 2008).
The problem is as relevant today as it was in the 17th century. According to Edwards (1983), Pascal was the first to pose the problem, in a 1656 letter to Fermat. The most common form comes from Huygens, who restated the problem as follows (rephrased): "Let two men play with three dice, the first player scoring a point whenever 11 is thrown, and the second whenever 14 is thrown. Each player starts with 12 points. A successful roll adds one point to the player and subtracts one from the other player. The loser of the game is the first to reach zero points. What is the probability of victory for each player?" Huygens gave a solution of the above problem. Bernoulli (1713) generalized it and replaced Huygens' numerical results by formulas, i.e., he considered a general initial capital i, a general total amount of money N and general winning probabilities p ∈ (0, 1). Since then the problem has become very popular and different proofs have been obtained. Pascal gave his solution to Carcavi without mentioning the method. According to Edwards (1983), Fermat probably used a combinatorial argument (as in the "Problem of points"); most methods which appeared later used some kind of first-step analysis, which led to solving certain recursions.
It is known that the absorption probabilities of a given chain can be related to the stationary distribution of some other, ergodic chain. The relation is given via so-called Siegmund duality, a notion introduced in (Siegmund 1976). It was studied in a financial context, where the probability that a dual risk process starting at level h is ruined equals the probability that the stationary queue length exceeds level h, see (Asmussen and Albrecher 2010; Asmussen and Sigman 2009). Already in (Lindley 1952) such a duality between some random walks on the integers was shown. For this duality the reader is also referred to (Theodore Cox and Rösler 1984; Diaconis and Fill 1990; Dette et al. 1997) or (Huillet 2010), to mention just a few. All the above papers have one thing in common: they study Siegmund duality defined for a linear ordering of the state space (and most of them for birth and death chains only). In this case (Siegmund 1976) states that the process has such a dual if and only if it is stochastically monotone (w.r.t. the total ordering). It is a little surprising that this was not exploited in the context of the one-dimensional Gambler's ruin problem. The solutions of the classical problem and its various one-dimensional generalizations are special cases of Theorem 1 and can be calculated relatively easily using ordinary stochastic monotonicity. On the other hand, the multidimensional case is quite different. For a partial, nonlinear ordering, stochastic monotonicity does not imply the existence of a Siegmund dual, see (Liggett 2004). Finding such duals has been successful for some specific chains and/or orderings. For example, in a financial context, the authors of (Błaszczyszyn and Sigman 1999) considered R^d-valued Markov processes (their Siegmund dual was set-valued). Recently, (Huillet 2014) considered dualities for Markov chains on partitions and sets.
In (Lorek 2016) we show that a Siegmund dual exists if and only if the chain is Möbius monotone; the connections with Strong Stationary Duality (consult Diaconis and Fill 1990) are also given therein. Let us mention at this point that for a non-linear ordering Möbius and stochastic monotonicities are, in general, different. In particular, we can have a chain which is not stochastically monotone but which is Möbius monotone, and thus we are able to construct its Siegmund dual.
In this paper, based on the results from (Lorek 2016), we give the solution to the multidimensional Generalized Gambler's ruin problem. The paper is organized as follows. In Section 2 we describe our Generalized Gambler's ruin problem, state its solution (Theorem 1) and point out other results as special cases. In Section 3 we recall the notions of Siegmund duality and antiduality for chains on partially ordered state spaces and give a general recipe for calculating ruin-like probabilities (summarized in Theorem 2). Section 4 contains a toy example (Cat Eats Mouse Eats Cheese), in which the case of a negative antidual matrix is presented. Finally, Section 5 contains the proof of Theorem 1.

Generalized Gambler's Ruin Problem and Main Result
In the one-dimensional Gambler's ruin problem two players start a game with a total amount of, say, N dollars, with initial fortunes k and N − k. At each step they flip a coin (not necessarily unbiased) to decide who wins a dollar. The game is over when one of them goes bankrupt.
We will consider the following generalization. There is one player (referred to as "we") playing with d ≥ 1 other players. Our initial assets are (i_1, ..., i_d) with 0 ≤ i_j ≤ N_j, j = 1, ..., d (N_j ≥ 1 is the total amount of assets in the game with player j), and the assets of the consecutive players are (N_1 − i_1, ..., N_d − i_d). Then, with probability p_j(i_j) we win one dollar with player j and with probability q_j(i_j) we lose one. With the remaining probability $1-\sum_{k=1}^{d}\left(p_k(i_k)+q_k(i_k)\right)$ nothing happens (i.e., ties are also possible). Once we win completely with player j (i.e., i_j = N_j) we do not play with him/her anymore. We lose the whole game if we lose with at least one player, i.e., when i_j = 0 for some j = 1, ..., d. Note that if initially i_j = 0 for some j, then we have already lost (the winning probability is 0), thus we can restrict the initial assets to 1 ≤ i_j ≤ N_j, j = 1, ..., d.
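The dynamics just described can be sketched in a few lines of code. The sketch below is ours, not the paper's: the function name `play_game` and the array-based representation of the probabilities p_j(·), q_j(·) are assumptions made for illustration.

```python
import random

def play_game(i, N, p, q, rng):
    """Simulate one round of the generalized gambler's ruin game.

    i, N : lists of current assets and totals, one entry per opponent j.
    p[j][k], q[j][k] : probability of winning/losing a dollar against
    opponent j when our current fortune with j equals k (1 <= k < N[j]).
    Returns True if we reach (N_1, ..., N_d), False if some i_j hits 0.
    """
    i = list(i)
    d = len(N)
    while True:
        if all(i[j] == N[j] for j in range(d)):
            return True            # absorbed in (N_1, ..., N_d): we win
        if any(i[j] == 0 for j in range(d)):
            return False           # absorbed in -infinity: we lose
        u = rng.random()
        acc = 0.0
        for j in range(d):
            if i[j] == N[j]:
                continue           # game with player j already won
            acc += p[j][i[j]]
            if u < acc:
                i[j] += 1          # win a dollar with player j
                break
            acc += q[j][i[j]]
            if u < acc:
                i[j] -= 1          # lose a dollar with player j
                break
        # otherwise: a tie, nothing changes
```

For instance, with d = 1, a fair coin and initial fortune N/2 the empirical winning frequency should be close to 1/2; with d = 2, N = (2, 2), all probabilities equal to 0.2 and start (1, 1), a short first-step computation gives winning probability 1/4.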
For d = 1 and p_1(j) = p, q_1(j) = q, N_1 = N we have the classical Gambler's ruin problem on the state space {0, ..., N}. Many one-dimensional generalizations of this game have been considered; e.g., (Lefebvre 2008) studied the case of some specific sequences p_1(j), q_1(j), which was later extended to arbitrary p_1(j), q_1(j) in (El-Shehawey 2009). Variations of the classical problem with ties allowed, i.e., p_1(j) = p_1, q_1(j) = q_1, p_1 + q_1 < 1, were considered in, e.g., (Lengyel 2009a; 2009b) (the latter considers a so-called conditional version of the problem). Some of these articles studied both the ruin probability and the duration of the game, whereas most papers studied only the duration. Some generalizations to a higher dimension d > 1 were studied in (Rocha and Stern 2004; Kmet and Petkovšek 2002).
We will describe the game more formally as a Markov chain Z with two absorbing states.
With some abuse of notation, we will write (i_1, ..., i_d) = −∞ whenever i_j = 0 for some j, i.e., all losing states are identified with a single absorbing state −∞. The transitions of the described chain are as follows: for each j with 1 ≤ i_j < N_j, coordinate j increases by one with probability p_j(i_j) and decreases by one with probability q_j(i_j) (a decrease from i_j = 1 means landing in −∞); with the remaining probability the chain stays at its current state. The chain, as required, has two absorbing states: (N_1, ..., N_d) (we win) and −∞ (we lose). We will give formulas for the probability of winning starting at an arbitrary state, i.e., for

$$\rho((i_1,\ldots,i_d)) := P\left(\tau_{(N_1,\ldots,N_d)} < \tau_{-\infty} \mid Z_0 = (i_1,\ldots,i_d)\right),$$

where $\tau_e := \inf\{n \ge 0 : Z_n = e\}$. Our main result is the following.
Theorem 1 Consider the Generalized Gambler's ruin problem described above. Then, the probability of winning starting at (i_1, ..., i_d) is given by

$$\rho((i_1,\ldots,i_d)) = \prod_{j=1}^{d} \frac{\displaystyle\sum_{n=1}^{i_j} \prod_{r=1}^{n-1} \frac{q_j(r)}{p_j(r)}}{\displaystyle\sum_{n=1}^{N_j} \prod_{r=1}^{n-1} \frac{q_j(r)}{p_j(r)}}.$$

Theorem 1 generalizes several previous one-dimensional cases. For example:

- (i) Assume we have won with all the players except player j. Then, the probability of winning is

$$\frac{\displaystyle\sum_{n=1}^{i_j} \prod_{r=1}^{n-1} \frac{q_j(r)}{p_j(r)}}{\displaystyle\sum_{n=1}^{N_j} \prod_{r=1}^{n-1} \frac{q_j(r)}{p_j(r)}}.$$
This way formula (4.1) from (El-Shehawey 2009) is recovered (with a slight modification in notation, since the author considered various versions of reflecting barriers).

- (ii) In addition to (i), let p_j(r) = p, q_j(r) = q, r = 1, ..., N_j. Then we recover the winning probability in the classical Gambler's ruin problem (with possible ties):

$$\frac{\displaystyle\sum_{n=1}^{i_j} (q/p)^{n-1}}{\displaystyle\sum_{n=1}^{N_j} (q/p)^{n-1}} = \begin{cases} \dfrac{1-(q/p)^{i_j}}{1-(q/p)^{N_j}} & \text{if } p \neq q, \\[2mm] \dfrac{i_j}{N_j} & \text{if } p = q. \end{cases}$$

- (iii) (Homogeneous case). Assume that for all j = 1, ..., d we have p_j(r) = p_j, q_j(r) = q_j, r = 1, ..., N_j. Define $\rho_j := q_j/p_j$. Then we have

$$\rho((i_1,\ldots,i_d)) = \prod_{j=1}^{d} \frac{\displaystyle\sum_{n=1}^{i_j} \rho_j^{n-1}}{\displaystyle\sum_{n=1}^{N_j} \rho_j^{n-1}},$$

which is a multidimensional generalization of the classical Gambler's ruin problem. Of course, we obtain the same probabilities if only the ratios q_j(i_j)/p_j(i_j) are constant in i_j, e.g., in a spatially nonhomogeneous case with varying p_j(i_j), q_j(i_j) but constant ratio q_j(i_j)/p_j(i_j) = ρ_j; this is thus a multidimensional generalization of the cases considered in (El-Shehawey 2009) and in (Lefebvre 2008). In the latter article only the symmetric case corresponding to p_j = q_j = 1/2 was considered.
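The product formula can be cross-checked against plain first-step analysis on a small instance. The sketch below (all function names are ours) evaluates the formula of Theorem 1 in exact rational arithmetic and compares it with the absorption probabilities obtained by solving the first-step recursion directly.

```python
from fractions import Fraction
from itertools import product

def theorem1_prob(i, N, p, q):
    """Product formula of Theorem 1: probability of winning from state i.
    p[j], q[j] are functions r -> winning/losing probability against
    player j when the fortune with player j equals r."""
    result = Fraction(1)
    for j in range(len(N)):
        def term(n):  # prod_{r=1}^{n-1} q_j(r)/p_j(r)
            t = Fraction(1)
            for r in range(1, n):
                t *= Fraction(q[j](r)) / Fraction(p[j](r))
            return t
        num = sum(term(n) for n in range(1, i[j] + 1))
        den = sum(term(n) for n in range(1, N[j] + 1))
        result *= num / den
    return result

def first_step_prob(i, N, p, q):
    """Absorption probability via first-step analysis: solve the linear
    system rho(s) = sum_s' P(s, s') rho(s') by Gauss-Jordan elimination
    over the rationals (small instances only)."""
    states = list(product(*(range(Nj + 1) for Nj in N)))
    idx = {s: k for k, s in enumerate(states)}
    n = len(states)
    A = [[Fraction(0)] * n for _ in range(n)]
    b = [Fraction(0)] * n
    win = tuple(N)
    for s in states:
        k = idx[s]
        A[k][k] = Fraction(1)
        if s == win:
            b[k] = Fraction(1)           # we have won: rho = 1
            continue
        if any(sj == 0 for sj in s):
            continue                     # ruined (-infinity): rho = 0
        stay = Fraction(1)
        for j in range(len(N)):
            if s[j] == N[j]:
                continue                 # game with player j already won
            pj, qj = Fraction(p[j](s[j])), Fraction(q[j](s[j]))
            up = s[:j] + (s[j] + 1,) + s[j + 1:]
            down = s[:j] + (s[j] - 1,) + s[j + 1:]
            A[k][idx[up]] -= pj
            A[k][idx[down]] -= qj
            stay -= pj + qj
        A[k][k] -= stay                  # ties keep the state unchanged
    for col in range(n):                 # Gauss-Jordan elimination
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv], b[col], b[piv] = A[piv], A[col], b[piv], b[col]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col] / A[col][col]
                b[r] -= f * b[col]
                A[r] = [a - f * c for a, c in zip(A[r], A[col])]
    k = idx[tuple(i)]
    return b[k] / A[k][k]
```

On any small instance (here d = 2, state-dependent probabilities) the two computations agree exactly, term by term over all starting states.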

Tools: Siegmund Duality and Antiduality
We briefly recall the notion of Siegmund duality, its application to studying absorption probabilities, and the result on the existence of a Siegmund dual from (Lorek 2016). Let X be a discrete-time Markov chain with transition matrix P_X and finite state space E = {e_1, ..., e_M}, partially ordered by $\preceq$, with unique minimal element e_1 and unique maximal element e_M. Assume it is ergodic with stationary distribution π. For A ⊆ E define $P_X(e, A) := \sum_{e' \in A} P_X(e, e')$ and similarly $\pi(A) := \sum_{e \in A} \pi(e)$. Define also $\{e\}^{\uparrow} := \{e' \in E : e \preceq e'\}$, $\{e\}^{\downarrow} := \{e' \in E : e' \preceq e\}$ and $\delta(e, e') = \mathbf{1}(e = e')$. We say that a Markov chain Z with transition matrix P_Z is the Siegmund dual of X if

$$\forall n \ge 0 \ \forall e, e' \in E: \quad P_X^n(e, \{e'\}^{\downarrow}) = P_Z^n(e', \{e\}^{\uparrow}). \qquad (3)$$

Note that a matrix fulfilling (3) may be substochastic, since for some e_j we may have $\sum_{e_i} P_Z(e_j, e_i) < 1$. In the same way as Siegmund (1976) did (he considered linear orderings only), we then add one extra absorbing state, say −∞ (called a coffin state). Denote the resulting matrix by $\bar{P}_Z$ and define $\bar{P}_Z(e_j, -\infty) = 1 - \sum_{e_i} P_Z(e_j, e_i)$, $\bar{P}_Z(-\infty, e_j) = \delta(-\infty, e_j)$ and $\bar{P}_Z(e_j, e_i) = P_Z(e_j, e_i)$ otherwise. Note that Eq. 3 implies that e_M is an absorbing state, thus Z has two absorbing states. Taking limits as n → ∞ on both sides of Eq. 3 we obtain

$$P(\tau_{e_M} < \tau_{-\infty} \mid Z_0 = e) = \pi(\{e\}^{\downarrow}), \qquad (4)$$

where $\tau_e = \inf\{n : Z_n = e\}$. This way the stationary distribution of an ergodic chain is related to the absorption probabilities of its Siegmund dual.

For a partial ordering $\preceq$ define the matrix $C(e, e') = \mathbf{1}(e \preceq e')$. Such a matrix is always invertible, and its inverse $C^{-1}$ is often denoted by μ (which we use throughout the paper) and called the Möbius function of the ordering $\preceq$. Note that Eq. 3 for n = 1 can be written as

$$P_X C = C P_Z^T. \qquad (5)$$

The main result of (Lorek 2016) is that for a given partial ordering the Siegmund dual chain exists if and only if X is Möbius monotone (see also Lorek and Szekli 2012 for more details on this monotonicity).
In such a case, the Siegmund dual on $\bar{E} = E \cup \{-\infty\}$ has transitions outside the coffin state given by

$$P_Z(e, e') = \sum_{e'' \in E} \mu(e', e'') \, P_X(e'', \{e\}^{\downarrow}), \qquad (6)$$

the nonnegativity of which is the definition of Möbius monotonicity of X. The natural application is in studying the stationary distribution of a chain X (e.g., its asymptotics): calculate the Siegmund dual and then its probability of being eventually absorbed in e_M. However, we can reverse the process, starting with a chain Z with two absorbing states (we win or we lose). The procedure is then the following:

1) remove the state −∞, obtaining a substochastic matrix P_Z;
2) introduce some partial ordering $\preceq$, expressed by a matrix C, such that e_M is the unique maximal element;
3) calculate the transitions of the Siegmund antidual chain X from Eq. 5, i.e.,

$$P_X = C P_Z^T C^{-1}; \qquad (7)$$

4) if the resulting matrix P_X has a stationary measure π such that ∀(e ∈ E) lim_{n→∞} P_X^n(e, ·) = π(·), then we can calculate the absorption probabilities of Z from relation (4) (if P_X is a stochastic matrix, then π is the stationary distribution of the chain related to this matrix).
The details are in the following theorem.

Theorem 2 Let Z be a Markov chain on E ∪ {−∞} with two absorbing states, e_M and −∞, and let $\preceq$ be a partial ordering of E with unique maximal element e_M, represented by $C(e, e') = \mathbf{1}(e \preceq e')$. Then the matrix $P_X = C P_Z^T C^{-1}$ satisfies $\sum_{e' \in E} P_X(e, e') = 1$ for all e ∈ E. Moreover, if there exists a probability measure π on E such that $\lim_{n\to\infty} P_X^n(e, \cdot) = \pi(\cdot)$ for all e ∈ E, then $P(\tau_{e_M} < \tau_{-\infty} \mid Z_0 = e) = \pi(\{e\}^{\downarrow})$.
Remark 1 If the resulting P_X in Eq. 7 is a stochastic matrix of an ergodic chain, say X, then π is its stationary distribution. Moreover, X is then Möbius monotone with respect to $\preceq$.
Remark 2 If the resulting P_X has negative entries, it does not have a direct probabilistic interpretation. However, e.g., in the area of quantum mechanics, such "distributions", called negative quasi-probabilities, are quite common and natural. This notion was introduced already in (Wigner 1932), where the author writes: "[...] cannot be really interpreted as the simultaneous probability for coordinates and momenta, as is clear from the fact, that it may take negative values. But of course this must not hinder the use of it in calculations as an auxiliary function which obeys many relations we would expect from such a probability." For some recent connections of negative quasi-probabilities and quantum computation see (Veitch et al. 2012).
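The four-step recipe can be illustrated on the one-dimensional classical chain, where the ordering is linear and C is the upper-triangular matrix of ones. The following is a minimal sketch in exact rational arithmetic; the function name and code organization are ours.

```python
from fractions import Fraction

def siegmund_antidual_ruin(N, p, q):
    """Antidual recipe for the classical gambler's ruin chain Z on
    {0,...,N}: remove the losing state 0, form P_X = C P_Z^T C^{-1},
    find the stationary vector pi of P_X, and return the winning
    probabilities rho(i) = pi({i}^down) for i = 1, ..., N."""
    p, q = Fraction(p), Fraction(q)
    n = N  # states E = {1, ..., N}; the maximal element e_M is N
    # substochastic P_Z on E (the q-transition from state 1 leaks to -infinity)
    PZ = [[Fraction(0)] * n for _ in range(n)]
    for k in range(n):
        i = k + 1
        if i == N:
            PZ[k][k] = Fraction(1)       # e_M is absorbing
            continue
        PZ[k][k] = 1 - p - q             # tie
        PZ[k][k + 1] = p                 # win a dollar
        if i > 1:
            PZ[k][k - 1] = q             # lose a dollar
    # C(i, j) = 1(i <= j); its inverse (the Moebius function) is bidiagonal
    C = [[Fraction(1 if i <= j else 0) for j in range(n)] for i in range(n)]
    Cinv = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        Cinv[i][i] = Fraction(1)
        if i + 1 < n:
            Cinv[i][i + 1] = Fraction(-1)
    mul = lambda A, B: [[sum(A[i][k] * B[k][j] for k in range(n))
                         for j in range(n)] for i in range(n)]
    PZt = [[PZ[j][i] for j in range(n)] for i in range(n)]
    PX = mul(mul(C, PZt), Cinv)          # Eq. (7)
    # stationary vector: solve pi (PX - I) = 0 together with sum(pi) = 1
    A = [[PX[i][j] - (1 if i == j else 0) for i in range(n)] for j in range(n)]
    A[n - 1] = [Fraction(1)] * n         # replace one equation by normalization
    b = [Fraction(0)] * (n - 1) + [Fraction(1)]
    for col in range(n):                 # Gauss-Jordan over the rationals
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv], b[col], b[piv] = A[piv], A[col], b[piv], b[col]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col] / A[col][col]
                b[r] -= f * b[col]
                A[r] = [a - f * c for a, c in zip(A[r], A[col])]
    pi = [b[i] / A[i][i] for i in range(n)]
    # relation (4): rho(i) = pi({i}^down) = pi(1) + ... + pi(i)
    return [sum(pi[:i + 1]) for i in range(n)]
```

Here P_X comes out stochastic (a birth-and-death chain with up-probability q and down-probability p), its stationary vector is proportional to (q/p)^{j−1}, and relation (4) reproduces the classical ruin formula (1 − (q/p)^i)/(1 − (q/p)^N).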

Proof of Theorem 2
The main sketch of the proof was essentially given before the theorem. The only thing which may not be clear is that for all e we have $\sum_{e'} P_X(e, e') = 1$. Let us compute

$$\sum_{e'} P_X(e, e') = \sum_{e''} (C P_Z^T)(e, e'') \sum_{e'} C^{-1}(e'', e') \stackrel{(*)}{=} (C P_Z^T)(e, e_M) = P_Z(e_M, \{e\}^{\uparrow}) = 1.$$

In (*) we used the fact that for any partial order with unique maximal element e_M, the Möbius function fulfills $\forall(e \in E) \ \sum_{e_j} C^{-1}(e, e_j) = \mathbf{1}(e = e_M)$. To see this, consider the column of $C^{-1}$ corresponding to the state e_M after applying the first elementary column operation of Gauss-Jordan elimination.
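The fact used in (*) can also be verified numerically for a genuinely partial order, e.g., the coordinatewise order on a 3 × 3 grid. The code below (ours) builds C, inverts it over the rationals, and checks that the row sums of C^{-1} form the indicator of the maximal element.

```python
from fractions import Fraction
from itertools import product

def moebius_row_sums(elements, leq):
    """Build C(e, e') = 1(e <= e'), invert it over the rationals, and
    return the row sums of C^{-1} (the Moebius function of the order)."""
    n = len(elements)
    C = [[Fraction(1 if leq(elements[i], elements[j]) else 0)
          for j in range(n)] for i in range(n)]
    # invert C by Gauss-Jordan elimination on the augmented matrix [C | I]
    M = [C[i] + [Fraction(1 if i == j else 0) for j in range(n)]
         for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [x / M[col][col] for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    Cinv = [row[n:] for row in M]
    return [sum(row) for row in Cinv]

# coordinatewise (product) order on {1,2,3} x {1,2,3}; the maximum is (3,3)
grid = list(product(range(1, 4), range(1, 4)))
leq = lambda a, b: a[0] <= b[0] and a[1] <= b[1]
sums = moebius_row_sums(grid, leq)
```

For this order the row sum equals 1 exactly at (3, 3) and 0 elsewhere, as claimed.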

Toy example: Cat Eats Mouse Eats Cheese
Before proceeding to the proof of the main result on the Generalized Gambler's ruin problem (i.e., Theorem 1) we give a 5-state example. The reason for this is that we wanted to present an example in which the resulting matrix P_X has negative entries. The example is taken from (Brémaud 1999) (Example 3.2, Cat Eats Mouse Eats Cheese, where the answer is easily calculated using first-step analysis): "A merry mouse moves in a maze. If it is at time n in a room with k adjacent rooms, it will be at time n + 1 in one of the k adjacent rooms, choosing one at random, each with probability 1/k. A fat lazy cat remains all the time in a given room, and a piece of cheese waits for the mouse in another room (see Fig. 1). The cat is not completely lazy: If the mouse enters the room inhabited by the cat, the cat will eat it. What is the probability that the mouse ever gets to eat the cheese when starting from room 1, the cat and the cheese being in rooms 3 and 5, respectively?" The required matrices are $\bar{P}_Z$, the original Cat Eats Mouse Eats Cheese transition matrix; P_Z, the matrix with state 3 removed (E = {1, 2, 4, 5}); and C, representing the ordering with Hasse diagram presented on the right side (with 5 being a maximal state). We are to calculate ρ(j) = P(τ_5 < τ_3 | Z_0 = j), j = 1, 2, 4, 5.
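Since Fig. 1 is not reproduced here, the maze used below is only an assumed stand-in with the cat in room 3 and the cheese in room 5; the adjacency lists are ours, not Brémaud's. The function implements the first-step analysis mentioned in the quote and works for any maze.

```python
from fractions import Fraction

def mouse_win_probs(adj, cat, cheese):
    """First-step analysis for the Cat-Eats-Mouse-Eats-Cheese chain:
    rho(j) = P(tau_cheese < tau_cat | Z_0 = j).  The mouse moves to a
    uniformly chosen adjacent room; the cat and cheese rooms absorb."""
    rooms = sorted(adj)
    n = len(rooms)
    idx = {r: k for k, r in enumerate(rooms)}
    A = [[Fraction(0)] * n for _ in range(n)]
    b = [Fraction(0)] * n
    for room in rooms:
        k = idx[room]
        A[k][k] = Fraction(1)
        if room == cheese:
            b[k] = Fraction(1)           # cheese reached: rho = 1
        elif room != cat:                # cat room: rho = 0 (b stays 0)
            for nb in adj[room]:
                A[k][idx[nb]] -= Fraction(1, len(adj[room]))
    for col in range(n):                 # Gauss-Jordan over the rationals
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv], b[col], b[piv] = A[piv], A[col], b[piv], b[col]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col] / A[col][col]
                b[r] -= f * b[col]
                A[r] = [a - c * f for a, c in zip(A[r], A[col])]
    return {room: b[idx[room]] / A[idx[room]][idx[room]] for room in rooms}

# hypothetical 5-room maze (NOT the maze of Fig. 1, which is not shown here)
adj = {1: [2, 4], 2: [1, 3, 5], 3: [2, 4], 4: [1, 3, 5], 5: [2, 4]}
rho = mouse_win_probs(adj, cat=3, cheese=5)
```

For this particular hypothetical maze the answer from room 1 happens to be 1/2; to treat Brémaud's actual example, the adjacency lists simply have to be replaced by those of Fig. 1.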