A Zero-Sum Poisson Stopping Game with Asymmetric Signal Rates

The objective of this paper is to study a class of zero-sum optimal stopping games of diffusions under a so-called Poisson constraint: the players are allowed to stop only at the arrival times of their respective Poissonian signal processes. These processes can have different intensities, which makes the game setting asymmetric. We give a weak and easily verifiable set of sufficient conditions under which we derive a semi-explicit solution to the game in terms of the minimal r-excessive functions of the diffusion. We also study limiting properties of the solutions with respect to the signal intensities and illustrate our main findings with explicit examples.


Introduction
Optimal stopping games were introduced in the seminal paper [7]; for other classical references, see [17,29,33]; see also [18] for a review article. In their typical form, these are two-player games in which the sup-player's (inf-player's) objective is to maximize (minimize) the expected present value of the exercise payoff. Important applications of stopping games in mathematical finance are cancellable (or callable) options [3,11,19] and convertible bonds [16,20,32]. Here, the issuer (i.e., the inf-player) has the right to cancel (or convert) the contract by paying a fee to the holder (i.e., the sup-player).
The stopping game considered in our study stems from the so-called Poisson stopping problem, a term coined in [22]. Poisson stopping problems are built on continuous-time dynamics, but stopping is allowed only at the arrival times of an exogenous signal process, usually a Poisson process. This type of stopping problem first appeared in [5], where optimal stopping of geometric Brownian motion at the arrival times of an independent Poisson process (later, Poisson stopping) was studied. Papers in the same vein include the following. The paper [12] addresses Poisson stopping at the maximum of geometric Brownian motion. In [26], Poisson stopping of general one-dimensional diffusion processes is considered. Poisson stopping is generalized to optimal switching problems in [23], and to a multi-dimensional setting in [22]. Extension to more general, time-inhomogeneous signal processes is addressed in [28]. Time-inhomogeneous Poissonian signal processes are considered in [13,14]. In [13], the stopping problem is set up so that the decision maker can control the intensity of the Poissonian signal process, whereas [14] addresses the shape properties of the value function in a time-inhomogeneous Poisson stopping problem.
We extend the Poisson stopping framework to zero-sum stopping games in the following way. Similarly to [26], we study a perpetual problem and assume that the underlying dynamics follow a general one-dimensional diffusion. Moreover, we assume that there are two independent Poisson signal processes, one for each player, and that the players can stop only at the arrival times of their respective Poisson processes. These processes can have different intensities, which makes the game setting asymmetric. Our problem setting is closely related to [24,25], see also [15]. In [24], a similar game is studied where there is only one signal process and both players are allowed to stop at its arrival times. This contrasts with our case even when the intensities of the signal processes coincide: although the arrival rates are then the same, the signals will almost surely never appear simultaneously. This eliminates the need for the usual ordering assumption (appearing, for instance, in [2,8-10,21,24,27]) that the payoff of the inf-player has to dominate that of the sup-player, since an immediate comparison of the payoffs is never needed; this observation is made also in [25], where the heterogeneous case is studied. We point out that some comparison of the payoffs is still needed; these conditions are spelled out in Assumption 2.4. The payoff processes in [24,25] are assumed to be progressively measurable with respect to the minimal completion of the filtration generated by a (potentially multidimensional) Wiener process. This is in the same spirit as our model, since the paths dictating the payoffs are continuous in both cases. We refer here to [30], where a similar constrained game is considered for Lévy dynamics. The time horizon in [24,25] is allowed to be a stopping time, either bounded or unbounded. For an unbounded time horizon, the analysis of [24] covers the case where the payoffs are bounded. This is in contrast to our study, where we also allow for unbounded payoffs.
In [24,25], the authors provide a characterization of the value in terms of a penalized backward stochastic differential equation. We take a different route by solving our problem via a free boundary problem. As a result, we produce explicit (up to a representation of the minimal r-excessive functions of the diffusion process) solutions for the optimal value function. We also characterize the optimal threshold rules in terms of the minimal r-excessive functions and provide sufficient conditions both for the existence and the uniqueness of the solution; to the best of our knowledge, these are new results. These results are useful for a few reasons. Firstly, diffusion models are important in many applications, and our results shed new light on the structure of the solution for this class of problems. Secondly, the semi-explicit nature of the solution allows a deeper study of the asymptotics and other properties of the asymmetry. Lastly, the solution is fairly easy to produce, at least numerically, as it boils down to solving a linear second-order ordinary differential equation.
The remainder of the study is organized as follows. In Sect. 2 we formulate the optimal stopping game. A candidate solution for the game is derived in Sect. 3, whereas in Sect. 4 we show that the candidate solution is indeed the solution of the game. Asymptotic results are proved in Sect. 5, and the study is concluded by explicit examples in Sect. 6.

The Game
We assume that the state process X is a regular diffusion evolving on R + with the initial state x. Furthermore, we assume that the boundaries of the state space R + are natural. Now, the evolution of X is completely determined by its scale function S and speed measure m inside R + , see [4, pp. 13-14]. Furthermore, we assume that the function S and the measure m are both absolutely continuous with respect to the Lebesgue measure, have smooth derivatives, and that S is twice continuously differentiable. Under these assumptions, the infinitesimal generator A reads as A = (1/2)σ²(x) d²/dx² + μ(x) d/dx, where the functions σ and μ are related to S and m via the formulae S′(x) = exp(−∫ 2μ(y)/σ²(y) dy) and m′(x) = 2/(σ²(x)S′(x)), see [4, p. 17]. From these definitions we find that σ²(x) > 0 for all x ∈ R + . In what follows, we assume that the functions μ and σ² are continuous. The assumption that the state space is R + is made for convenience. In fact, we could assume that the state space is any interval I in R, and the subsequent analysis would hold with obvious modifications. Furthermore, we denote by ψ r and ϕ r , respectively, the increasing and the decreasing solution of the second-order linear ordinary differential equation Au = ru, where r > 0, defined on the domain of the characteristic operator of X. The functions ψ r and ϕ r can be identified as the minimal r-excessive functions of X, see [4, pp. 18-20]. In addition, we assume that the filtration F carries two Poisson processes Y i and Y s with intensities λ i and λ s , respectively. We call Y i and Y s the signal processes, and assume that they are mutually independent and also independent of X. We denote the arrival times of Y i and Y s , respectively, as T n i and T n s . Finally, we make the convention that T 0 i = T 0 s = 0.
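As a concrete illustration (not part of the general argument), for a geometric Brownian motion dX t = μX t dt + σX t dW t the minimal r-excessive functions are the power functions ψ r (x) = x^a+ and ϕ r (x) = x^a−, where a− < 0 < a+ are the roots of the quadratic (1/2)σ²a(a − 1) + μa − r = 0. A short sketch (the function name is ours):

```python
import math

def gbm_excessive_exponents(mu, sigma, r):
    """Roots a- < 0 < a+ of (sigma^2/2) a (a - 1) + mu a - r = 0, so that
    psi_r(x) = x**a_plus (increasing) and phi_r(x) = x**a_minus (decreasing)
    are the minimal r-excessive functions of geometric Brownian motion."""
    half_s2 = 0.5 * sigma ** 2
    b = mu - half_s2                       # linear coefficient of the quadratic
    disc = math.sqrt(b ** 2 + 4 * half_s2 * r)
    a_plus = (-b + disc) / (2 * half_s2)
    a_minus = (-b - disc) / (2 * half_s2)
    return a_minus, a_plus
```

When μ < r, one also has a+ > 1, which is what makes linear payoffs tractable later on.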
Denote now by L r 1 the class of measurable mappings f satisfying the integrability condition E x [∫ 0 ∞ e −rt | f (X t )| dt] < ∞. We know from the literature, see [4, p. 19], that for a given f ∈ L r 1 the resolvent R r f can be expressed as (R r f )(x) = B r −1 [ϕ r (x) ∫ 0 x ψ r (y) f (y)m′(y) dy + ψ r (x) ∫ x ∞ ϕ r (y) f (y)m′(y) dy], where B r = (ψ r ′(x)ϕ r (x) − ψ r (x)ϕ r ′(x))/S′(x) denotes the (constant) Wronskian. Next, we define the stopping game. The players, sup and inf, have their respective exercise payoff functions g s and g i , and are allowed to stop the process X only at the arrivals of their respective signal processes Y s and Y i . The sup-player attempts to maximize the expected present value of the exercise payoff, whereas the inf-player's objective is to minimize the same quantity. We define the lower and upper values of the game in the usual way, with the sup-player optimizing over the arrival times of Y s and the inf-player over the arrival times of Y i . When the two coincide, the zero-sum game is said to have a value V . The maximizing strategies in V and the minimizing strategies in V are called optimal, and any pair of optimal strategies is a Nash equilibrium. We point out that in the game studied here, it is not necessary to include the possibility of simultaneous stopping, as independent Poisson arrivals almost surely do not occur simultaneously. It is also worth pointing out that in the definition of the upper and lower values, the players are not allowed to stop immediately. One can think of the value function as the value of future stopping potentiality without the immediate stopping optionality.
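To make the Green-kernel representation of the resolvent concrete, the following sketch evaluates it numerically for geometric Brownian motion, where ψ r , ϕ r and m′ are known in closed form; for f (x) = x and μ < r the result can be checked against the closed form (R r f )(x) = x/(r − μ). The function name and the deliberately crude quadrature are ours:

```python
import math

def gbm_resolvent(f, x, mu, sigma, r, n=20000, y_max=1e6):
    """Evaluate (R_r f)(x) for GBM via the Green-kernel formula
    B^{-1} [phi(x) int_0^x psi f m' dy + psi(x) int_x^inf phi f m' dy]."""
    s2h = 0.5 * sigma ** 2
    b = mu - s2h
    d = math.sqrt(b * b + 4 * s2h * r)
    ap, am = (-b + d) / (2 * s2h), (-b - d) / (2 * s2h)  # exponents of psi, phi
    c = 2 * mu / sigma ** 2
    psi = lambda y: y ** ap
    phi = lambda y: y ** am
    mp = lambda y: (2.0 / sigma ** 2) * y ** (c - 2)     # speed density m'(y)
    B = ap - am                                          # Wronskian (constant for GBM)
    # midpoint rule on (0, x] (avoids the boundary point y = 0)
    h = x / n
    lower = sum(psi((k + 0.5) * h) * f((k + 0.5) * h) * mp((k + 0.5) * h)
                for k in range(n)) * h
    # trapezoid rule in the log variable on [x, y_max]
    du = (math.log(y_max) - math.log(x)) / n
    g = [phi(math.exp(u)) * f(math.exp(u)) * mp(math.exp(u)) * math.exp(u)
         for u in (math.log(x) + k * du for k in range(n + 1))]
    upper = (0.5 * g[0] + sum(g[1:-1]) + 0.5 * g[-1]) * du
    return (phi(x) * lower + psi(x) * upper) / B
```

The truncation at y_max is harmless here because the upper integrand decays like a power with exponent below −1.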
To solve the problem (2.2), we introduce two auxiliary problems. Auxiliary problem I is defined via the lower and upper values where Similarly, the auxiliary problem S is defined via the lower and upper values Finally, the values V i 0 and V s 0 are said to exist, if conditions similar to (2.2) hold. We point out that in auxiliary problem I the inf-player is allowed to stop immediately, whereas the sup-player has to wait until the next Y s -arrival to make a choice. The roles are reversed in auxiliary problem S, where the sup-player can stop immediately. In Sect. 3, we propose a Bellman principle that binds the candidate values for the main problem and the auxiliary problems together.
We consider payoff functions similar to those in the existing literature on optimal stopping that considers explicitly solvable cases, see [2,27]. Assumption 2.1 Let g i and g s be real functions defined on the positive reals and satisfying the following conditions: (1) g i and g s are non-decreasing and continuously differentiable, (2) g i and g s are stochastically C 2 : they are twice continuously differentiable outside a countable set {x j } which has no accumulation points, and the limits |g i ″(x j ±)| and |g s ″(x j ±)| are all finite, (3) there exist states z i and z s such that (A − r )g k (x) > 0 for x < z k and (A − r )g k (x) < 0 for x > z k , k = i, s. Some remarks regarding these assumptions are in order. The monotonicity in point (1) is satisfied in many potential applications, and point (2) essentially guarantees that we can work with the expressions (A − r )g i and (A − r )g s . Point (3) suggests that we are setting up problems where the continuation region is connected, that is, the equilibrium stopping rule is two-sided. This structure is important and appears in many applications.
The class of problems given by our assumptions is large and contains important cases such as linear payoffs. Indeed, when the payoffs are linear, g k (x) = x − c k , k = i, s, where c s > c i , and X is a geometric Brownian motion with drift μ < r , then (A − r )g k (x) = (μ − r )x + rc k . More generally, if the drift coefficient of X is a polynomial for which the leading term has a negative coefficient (this is typical in mean-reverting models), then Assumption 2.1 holds for linear payoffs. For example, this is the case when X is a Verhulst-Pearl diffusion (see Sect. 6). We address non-smooth payoffs in Sect. 3.4 by studying the payoff structure of a callable option [3,11,19,21] and observe that its analysis can, fairly directly, be reduced to our core case.
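For the linear-payoff GBM example just given, the states z k of point (3) in Assumption 2.1 are available in closed form: (A − r )g k (x) = (μ − r )x + rc k changes sign at z k = rc k /(r − μ). A small sketch (function name ours):

```python
def sign_change_state(mu, r, c):
    """Zero of (A - r)g(x) = (mu - r)x + r*c for GBM with linear payoff
    g(x) = x - c; the expression is positive below it and negative above."""
    assert mu < r, "requires mu < r"
    return r * c / (r - mu)
```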
We make some preliminary analysis. For f ∈ L r 1 , we define the functionals Λ i and Λ s as and, with a slight abuse of notation, Since the functions ψ q and ϕ q are solutions of the differential equation (A − q)u = 0, we find after differentiation that Therefore, an application of the fundamental theorem of calculus combined with the assumed boundary classification of the diffusion yields the results.

Remark 2.3
We note that point (3) in Assumption 2.1 implies that there exist unique states x̂ i , x̂ s ∈ (0, ∞) such that By the mean value theorem, we have where ξ ∈ (x, k). Because the lower boundary is natural, and hence Assumption 2.1 suffices to show the uniqueness of our solution in Sect. 3 and to prove the verification theorem in Sect. 4, but we need to pose additional assumptions for the existence of the optimal solution. These are collected below.

Assumption 2.4
Let g i , g s and x j be defined as in Assumption 2.1 and the states x̂ i , x̂ s as in Remark 2.3. We assume that the limits satisfy (g i /ψ r )(0+) < 0 and (g s /ϕ r )(∞) > 0.
In point (3) of Assumption 2.4, we also allow that (g i /ψ r )(0+) = −∞ and (g s /ϕ r )(∞) = ∞. This is the case in many examples.
For f , g ∈ L r 1 , we define the functionals H i and H s as (2.4) Lemma 2.5 Let g ∈ L r 1 satisfy points (1) and (2) of Assumption 2.1. Furthermore, let ξ r be r-harmonic. Then Proof We prove the first claim; the second can be proved similarly. Elementary differentiation and a reorganization of the terms yield We apply the second part of Lemma 2.2 to ξ r and find that By substituting this into Eq. (2.5) and then first applying the second part of Lemma 2.2 to g and then the expression (2.6) again, the claim follows.

The Candidate Solution
We start the analysis of the problem (2.2) by deriving a candidate solution. To this end, we recall the main problem (2.2) and the auxiliary problems I and S from Sect. 2. Denote the candidate value for the problem (2.2) as G and the candidate functions for the auxiliary problems I and S as G i 0 and G s 0 , respectively. We make the following working assumptions: (1) We assume that the candidate value functions satisfy the following dynamic programming principle: here, the random variables U s ∼ Exp(λ s ) and U i ∼ Exp(λ i ) are independent. In auxiliary game I , the inf-player chooses between stopping immediately and waiting, whereas the sup-player can do nothing but wait; this situation is reflected by Eq. (3.1); Eq. (3.2) has a similar interpretation in terms of auxiliary problem S. The condition (3.3) is the expected present value of the next stopping opportunity, which will belong to either the inf- or the sup-player and present itself as a choice reflected by the conditions (3.1) and (3.2). We point out that, by the independence of U i and U s , the condition (3.3) can be written as (2) By the time homogeneity of the stopping game, we assume that the continuation region is the interval (y i , y s ), for some constants y i and y s . Thus we can rewrite the functions G i 0 and G s 0 as (3) Furthermore, we assume that the function G is continuous. Then These assumptions are used to devise the candidate solution for the problem; this is the task of this section. The candidate solution is then verified to be the actual solution in Sect. 4. Since we find that We develop this representation further in the following lemma.
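The reduction used in (3.3) rests on two elementary facts about the exponential race between the players' signals: U i ∧ U s ∼ Exp(λ i + λ s ), and the first arrival belongs to the inf-player with probability λ i /(λ i + λ s ). A quick Monte Carlo check (names ours):

```python
import random

def first_signal_stats(lam_i, lam_s, n=200_000, seed=1):
    """Monte Carlo check that the first of two independent exponential
    signals arrives at rate lam_i + lam_s, and belongs to the inf-player
    with probability lam_i / (lam_i + lam_s)."""
    rng = random.Random(seed)
    total, inf_first = 0.0, 0
    for _ in range(n):
        u_i = rng.expovariate(lam_i)
        u_s = rng.expovariate(lam_s)
        total += min(u_i, u_s)
        inf_first += u_i < u_s
    return total / n, inf_first / n
```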

Lemma 3.1
The following representations hold: Proof Let x < y i . Then, by the conditions (3.3), (3.1) and (3.2), we find that By the strong Markov property, we obtain and we find, by another application of the strong Markov property, that The case x > y s is proved similarly.
The next lemma provides necessary conditions for the optimality of the thresholds y i and y s .
which can be rewritten as Proof Let x ∈ (y i , y s ). Using Lemma 2.1 of [26], we find that
This can be rewritten as Consider first the expected value (3.6). On the event {U i ∧ U s > τ (y i ,y s ) = τ y s }, we have, by the strong Markov property and Lemma 2.1 of [26], the following: where η y s is the first return time to y s . Since h r (X τ y s ) = h r (y s ) = G(y s ), we find that For the equality (3.3) to hold, the equation should hold; here, the last equation is obtained by breaking down the expected values (3.9) and (3.8) similarly to (3.4) and (3.5). This holds if The necessary condition is obtained by analyzing the expected value (3.7) similarly.
To conclude the claimed integral representation, we find by applying the representation (2.1) to the function x → h r (x)1 {x y s } , that By treating the other expectations similarly, we obtain the integral representations.
We write the necessary conditions given by Lemma 3.2 in a more convenient form. First, write the harmonic function h r as h r (x) = Cψ r (x) + Dϕ r (x). Now, since h r (y i ) = g i (y i ) and h r (y s ) = g s (y s ), we find, by solving a pair of linear equations, that (3.10) By substituting (3.10) into the conditions of Lemma 3.2 and reorganizing the terms, we obtain (3.11) We can simplify the denominators of the coefficient terms. Indeed, since for r-harmonic ξ , we find using Lemma 2.2 that Thus, By treating the term ψ r (y s )Λ s (ϕ r ; y s ) − ϕ r (y s )Λ s (ψ r ; y s ) similarly, we can rewrite the necessary conditions (3.11) as

On Uniqueness of the Solution
The next proposition is our main result on the uniqueness of the solution to the pair of necessary conditions given in Lemma 3.2. To ease the presentation in what follows, we introduce the shorthand notation

Proposition 3.3 Let Assumption 2.1 hold and assume that a solution (y i , y s ) to the pair of necessary conditions of Lemma 3.2 exists. Then the solution is unique.
Proof Define a function K : (0, x̂ i ] → (0, x̂ i ] by and hence K is increasing on its domain (0, x̂ i ]. Now, using the fixed point property, we have This means that whenever K intersects the diagonal of R + , the intersection is from above. Hence, the uniqueness follows from continuity.
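The crossing-from-above property is also what makes simple successive substitution work when one wants to locate the fixed point of such a map numerically. The toy sketch below is not the K of the proof (whose formula (3.13) involves the functionals of the paper); it only illustrates the mechanism for a generic increasing map on (0, 1] that crosses the diagonal from above:

```python
def fixed_point(K, y0, tol=1e-12, max_iter=10_000):
    """Iterate y -> K(y). For an increasing continuous map that crosses
    the diagonal from above, the iterates converge monotonically to the
    unique fixed point (toy illustration of Proposition 3.3)."""
    y = y0
    for _ in range(max_iter):
        y_next = K(y)
        if abs(y_next - y) < tol:
            return y_next
        y = y_next
    return y
```

For example, K(y) = √y on (0, 1] is increasing, lies above the diagonal below 1, and has the unique fixed point 1.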

On Existence of the Solution
We proceed by analysing the solvability of the pair (3.12). By item (4) of Assumption 2.1 and Lemma 2.5, we find that (3.14) We find similarly that Next, we study the limiting properties of the functions appearing in (3.12). Regarding the function H i (ϕ r , g i ; ·), by adding and subtracting the term Λ i ((A − r )g i ) and using Lemma 2.2, we obtain By a similar computation, we find also that By substituting these expressions into H i (ϕ r , g i ; ·), simplifying, and using Lemma 2.2 again, we observe that Assume that x < z i . Then the intermediate value theorem yields where ξ x ∈ (0, x). By continuity, we find by passing to the limit x → 0+ that By a similar analysis, we find the limit Consider next the function H i (g i , ψ r ; x). Since we find, using Eq. (3.16) and Lemma 2.2, that Assume that x < z i . Then the intermediate value theorem yields Thus, by continuity, H i (g i , ψ r ; 0+) = 0. A similar analysis yields the limit H s (g s , ϕ r ; ∞) = 0. Finally, by using Remark 2.3 and the facts that ϕ r is r-harmonic and z i < x̂ i , we find using Lemma 2.2 that where q i = r + λ i . By a similar analysis, we find that H s (ψ r , g s ; x̂ s ) > 0. We summarize these findings (Proposition 3.4). Proof Define the function K : (0, x̂ i ] → (0, x̂ i ] as in (3.13). We first observe that the proven limiting properties (3.17) and monotonicity properties (3.14), together with the conditions (3.18), guarantee that the function K is well-defined. Using the representation (3.16), we see that After handling the other inequality similarly, we see that the condition (3.18) is equivalent to The assumptions of 2.1 guarantee that x̂ s < z s , x̂ i > z i and z i < z s . Thus, (1) implies that z i < x̂ i < x̂ s < z s and, consequently, by our assumptions The other inequality in (3.19) is proved similarly.
It follows from the above calculations that the function K is well-defined, and from the proof of Proposition 3.3 that it is increasing. Furthermore, K is a mapping from the interval (0, x̂ i ] to an open subset of it. Thus, K must have a fixed point, which we denote by y i . Then the pair (y i , y s ), where y s = H s,ψ −1 (H i,ψ (y i )), is a solution to the equations given in Lemma 3.2. The uniqueness follows from Proposition 3.3.
Assumptions (1) and (3) in Proposition 3.4 are satisfied in most situations and are easily verified. However, assumption (2) requires more analysis in most cases. Fortunately, it is easy to check, at least numerically, in applications, because the states x̂ i , x̂ s are known to be the unique zeroes of the functions

On Non-differentiable Payoffs
Although our analysis does not cover non-differentiable payoff functions, its conclusions can be extended fairly easily to some important cases. As an example, assume that the payoff functions are g i (x) = (x − c i ) + and g s (x) = (x − c s ) + , where c i < c s , and let X be a diffusion satisfying the basic assumptions of Sect. 2. This payoff structure can be viewed as a callable option, see, e.g., [19]. Recall the optimality conditions (3.12): H i (g i , ψ r ; y i ) = H s (ψ r , g s ; y s ), H i (ϕ r , g i ; y i ) = H s (g s , ϕ r ; y s ).
We observe that the left hand side of both of these equations is zero on (0, c i ).
Assume first that the functions on the right-hand side of the conditions (3.12) have a common zero y 0 . Then the following must hold First, we observe that if g s (y 0 ) = 0, then which clearly cannot hold. Assume now that g s (y 0 ) > 0. Then, by dividing the conditions (3.20) sidewise, some further manipulations yield here, we have used the fact that the function ϕ r /ψ r is decreasing. Since all functions in these expressions are positive, we conclude that the ratio ϕ r /ψ r must in fact be constant over the interval (y 0 , ∞), which is clearly not true. Thus, we can safely consider the case where the functions on the right-hand side of the conditions (3.12) do not have a common zero. Then neither y i nor y s can be in the interval (0, c i ). Thus we can restrict the analysis of the optimality conditions to the interval (c i , ∞). It is straightforward to see that in this case the functions H i behave as in our main result. The functions H s also behave similarly, but the turning point is at c s . Thus, our main result can be applied to solve the problem after locating the points x̂ i and c s .

Theorem 4.1 Let Assumption 2.1 hold and assume that the thresholds y i and y s are the unique solution to
Then the value function (2.2) reads as Moreover, the game has a Nash equilibrium constituted by the stopping rules τ * = inf{T n s > 0 : X T n s ≥ y s } and σ * = inf{T n i > 0 : X T n i ≤ y i }.
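The equilibrium of Theorem 4.1 is easy to evaluate by simulation once the thresholds are given: sample the pooled Poisson arrivals, advance the diffusion exactly between them (here for GBM), and let the owner of each arrival apply his or her threshold rule. The sketch below is ours and is only a numerical companion to the theorem; the thresholds are inputs, not computed:

```python
import math, random

def play_threshold_game(x0, y_i, y_s, mu, sigma, r, lam_i, lam_s,
                        g_i, g_s, n_paths=20000, seed=7, t_max=200.0):
    """Monte Carlo value of the threshold strategies: the sup-player stops
    at her first Poisson arrival with X >= y_s, the inf-player at his first
    arrival with X <= y_i; the payoff is discounted at rate r."""
    rng = random.Random(seed)
    lam = lam_i + lam_s
    total = 0.0
    for _ in range(n_paths):
        t, x = 0.0, x0
        while t < t_max:
            dt = rng.expovariate(lam)          # time until the next signal
            t += dt
            x *= math.exp((mu - 0.5 * sigma ** 2) * dt
                          + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0))
            if rng.random() < lam_i / lam:     # the signal is the inf-player's
                if x <= y_i:
                    total += math.exp(-r * t) * g_i(x)
                    break
            elif x >= y_s:                     # ... or the sup-player's
                total += math.exp(-r * t) * g_s(x)
                break
    return total / n_paths
```

As a sanity check, disabling the sup-player (λ s ≈ 0, y s huge) and letting the inf-player stop at his first arrival reproduces E x [e −rU i X U i ] = xλ i /(λ i + r − μ) for g i (x) = x.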
To prove this result, we first introduce some notation. Define the filtrations (G n i ) n i ≥0 and (G n s ) n s ≥0 as G n i = F T n i and G n s = F T n s , respectively. Moreover, define the sets of admissible stopping times with respect to the G-filtrations: Then the function V defined in (2.2) can be written as We point out that the G-filtrations were defined only for the case where immediate stopping is not allowed. This is because we carry out the verification only for the main problem and not for the auxiliary problems. However, similar techniques could be employed to do the verification also for the functions G i 0 and G s 0 defined via (3.1) and (3.2), where the function G is given by the expression for V in the claim of the main theorem. We omit the details.
The proof of the main theorem requires uniform integrability. This is provided by the following lemma.

Lemma 4.2 For any fixed stopping rule T N i , the process
is a uniformly integrable supermartingale with respect to (G n s ) n s ≥0 . For any fixed stopping rule T N s , the process is a uniformly integrable submartingale with respect to (G n i ) n i ≥0 .
Proof We prove the claim for S s ; the process S i is treated similarly. Since G s 0 ≥ G, the strong Markov property yields To prove uniform integrability, we show that for all stopping rules T N i ; these conditions are necessary and sufficient for uniform integrability. Fix T N i and n s . Define the measure The property (4.2) follows from (4.4) by setting A = . We observe that P x (A) → 0 whenever P * x (A) → 0. Thus the property (4.3) follows from (4.4).

Proof of Theorem 4.1
The task is to show that V = G; the claimed Nash equilibrium then follows from the construction of G. To this end, recall the definition of the value function V from (2.2). Obviously, for all x; we prove the first of these inequalities, the second being proved similarly. Since g s ≤ G s 0 , we find, using Lemma 4.2 and optional sampling, that for arbitrary stopping rules T N i and T N s ; here, U s is an independent Exp(λ s )-distributed (G n s )-stopping time and the last equality follows from the independence of U s and the stopping times T N i . The right-hand side is independent of T N s , and thus we obtain and, consequently, For the inf-player, consider the stopping rule Ñ i = "Stop at the next of the inf-player's Poisson arrivals if the state of X at that time is below the threshold y i ; otherwise, wait". Then (4.5) Since the strong Markov property yields Finally, since, conditionally on {Ñ i = k}, on the event {T n−1 < U s < T n }, n = 1, . . . , k − 1, and on the event {U s > T k−1 }, the expression (4.5) can be written as V (x) ≤ G(x). Thus V (x) = G(x) and the proof is complete.

Asymptotics of λ i and λ s
A similar stopping game, where both of the players are allowed to stop without any restrictions, is studied in [2]. In that case, we know that under Assumption 2.1, complemented by the assumption g i ≥ g s , the optimal stopping thresholds (x i , x s ) are the unique solution to the pair of equations It seems reasonable that this solution should coincide with ours when both of the information rates λ i and λ s tend to infinity, since in that case stopping opportunities appear more frequently for both players. We show, after some auxiliary calculations, that this is indeed the case. The pair of equations can be represented as Furthermore, for all s > 0, we have where τ z = inf{t ≥ 0 | X t = z}. Therefore, we find for z < x < y that and, consequently, by monotone convergence, that We can now prove the following convergence result.
Then the unique fixed point y i of K λ converges to the unique fixed point x i of k as λ tends to infinity.
Proof From the representation of the pair of Eqs. (5.2), (5.4) and monotonicity we see that The claim follows now by noticing that k(x) − x attains both negative and positive values (this follows essentially from point (1) in Assumption 2.4, see [2]).
In the case λ i → 0, i.e. in the absence of competition, the threshold y s * converges to the threshold of a stopping problem presented in [26]. Further, if we then let λ s → ∞, the threshold coincides with the threshold in [1], see [26, Proposition 2.6]. These asymptotic results are collected in Table 1.

Consequences of the Asymmetry
An interesting feature of the asymmetry is that when one of the rates, for example λ s , stays fixed and we increase λ i , both thresholds decrease. To see this, note that y i ∈ (0, x̂ i ) and also that, by (5.3), for z < x < y we have Hence, assuming λ 1 < λ 2 , we have that H 2 (g i , ψ r ; y i ) H 1 (g i , ψ r ; y i ) and H 2 (ϕ r , g i ; y i ) H 1 (ϕ r , g i ; y i ).
We recall the definition of K and write the dependence on λ i explicitly: Taking the derivative with respect to λ i yields Hence, K is decreasing in λ i . Next, suppose that y 1 is a fixed point of K when λ i = λ 1 , and assume that λ 1 < λ 2 . Then and consequently y 1 > y 2 , as ∂K/∂y (y 1 , λ 1 ) < 1 by the proof of Proposition 3.3. Similarly, we can show that y s decreases as a function of λ i . This observation has an intuitive explanation. If the information rate λ i of the inf-player increases, he should wait longer (in the sense that y i decreases), as he gets more frequent opportunities to stop and, hence, is not affected as much by the uncertainty of the underlying. On the other hand, as the rate of the inf-player increases, the sup-player wants to stop sooner (in the sense that y s decreases). This is because the inf-player is now less likely to miss good opportunities to stop, and the sup-player has to react accordingly.

Geometric Brownian Motion with Smooth Payoff
In this illustration, we compare our general findings with the usual stopping game in which the players are allowed to stop without restrictions. Thus, we follow [2] and consider a stopping game given by where c s > c i > 0 are constants measuring the sunk costs, f (x) = x θ is a profit flow with 0 < θ < 1, and the underlying diffusion X is a geometric Brownian motion.
We remark, as in [2], that in this case the buyer always gets the expected cumulative present value (R r f )(x), and hence the only factor that depends on the timing of the decision is the cost which the buyer pays (and the seller receives) at exercise. Thus, the game can be seen as the valuation of an investment which guarantees the buyer a permanent flow of revenues from the exercise date up to an arbitrarily distant future at a cost which is endogenously determined from the game. Remark 6.1 We point out that, by a similar analysis, we could study the linear payoff structure g i (x) = x − c i and g s (x) = x − c s , where 0 < c i < c s . However, we want to compare our results to those of [2] and, therefore, present the case of cumulative payoffs.
Fig. 1 The optimal stopping thresholds in the symmetric case as functions of λ. The dashed lines are the thresholds in the unconstrained case [2]
In the non-symmetric case, we find that, if the information rate λ s is fixed for the sup-player, both thresholds decrease as functions of λ i , see Fig. 2. Interestingly, at least in our numerical examples, increasing volatility does not necessarily expand the continuation region (by increasing y s and decreasing y i ). This is in contrast to the findings in [2] for the standard stopping game.
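For the record, the cumulative present value appearing in the payoffs is available in closed form here: since A x^θ = (μθ + ½σ²θ(θ − 1))x^θ for GBM, we have (R r f )(x) = x^θ /(r − μθ − ½σ²θ(θ − 1)) whenever the denominator is positive. A sketch with a finite-difference check of the resolvent equation (A − r )(R r f ) = −f (function name ours):

```python
def cumulative_present_value(x, theta, mu, sigma, r):
    """(R_r f)(x) for f(x) = x**theta under GBM: x**theta divided by
    r - mu*theta - 0.5*sigma**2*theta*(theta - 1), assumed positive."""
    denom = r - mu * theta - 0.5 * sigma ** 2 * theta * (theta - 1)
    assert denom > 0, "requires r > mu*theta + 0.5*sigma^2*theta*(theta-1)"
    return x ** theta / denom
```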
Finally, the value function of the game is shown in Fig. 3.

Mean Reverting Dynamics
To further expand on the first example, we consider different diffusion dynamics under a similar payoff structure. We assume that the diffusion X has the infinitesimal generator A = (1/2)σ²x² d²/dx² + μx(1 − βx) d/dx, where μ > 0 is a constant, β > 0 is the degree of mean-reversion and σ > 0 is the volatility coefficient. This process is often called the Verhulst-Pearl diffusion. Because the payoffs are chosen, similarly to the first example, to be g j (x) = (R r f )(x) − c j , where j = i, s, c i < c s , and f (x) = x, Assumption 2.1 is again satisfied. The scale density and the density of the speed measure read as S′(x) = x −2μ/σ² e (2μβ/σ²)x and m′(x) = (2/σ²) x 2μ/σ²−2 e −(2μβ/σ²)x , and the minimal r-excessive functions are (see [6], p. 202) where U is a confluent hypergeometric function, L is the generalized Laguerre polynomial L(a, b, z) = L b a (z) and Due to the complicated nature of the minimal r-excessive functions in this example, the functionals Λ i (h; ·) and Λ s (h; ·), where h is ψ r , ϕ r or (A − r )g j ( j = i, s), cannot be calculated explicitly. Consequently, the pair of equations (4.1) for the optimal thresholds cannot be simplified from their original integral forms in any helpful way, and are thus left unstated.
Regarding Assumption 2.4, the first condition is again satisfied. Unfortunately, again due to the complicated forms of ψ r and ϕ r , the second condition has to be verified numerically in each case. The results are illustrated numerically in Table 2 with the parameters σ = 0.3, r = 0.08, γ = 0.05, c i = 10.0, c s = 20.0 and λ = λ i = λ s . These values suggest that the optimal stopping thresholds converge to the thresholds of the unconstrained case as the intensity λ increases. This is in line with our general result.