1 Introduction

Optimal stopping games were introduced in the seminal paper [7]; for other classical references, see [17, 29, 33], and see [18] for a review article. In the typical form, these are two-player games where the sup-player's (inf-player's) objective is to maximize (minimize) the expected present value of the exercise payoff. Important applications of stopping games in mathematical finance are cancellable (or callable) options [3, 11, 19] and convertible bonds [16, 20, 32]. Here, the issuer (i.e., the inf-player) has the right to cancel (or convert) the contract by paying a fee to the holder (i.e., the sup-player).

The stopping game considered in our study stems from the so-called Poisson stopping problem, a term coined in [22]. Poisson stopping problems are built on continuous-time dynamics, but stopping is only allowed at the arrival times of an exogenous signal process, usually a Poisson process. This type of stopping problem first appeared in [5], where optimal stopping of geometric Brownian motion at the arrival times of an independent Poisson process (later, Poisson stopping) was studied. Papers in the same vein include the following. The paper [12] addresses Poisson stopping at the maximum of geometric Brownian motion. In [26], Poisson stopping of general one-dimensional diffusion processes is considered. Poisson stopping is generalized to optimal switching problems in [23], and to a multi-dimensional setting in [22]. Extension to more general, time-inhomogeneous signal processes is addressed in [28]. Time-inhomogeneous Poissonian signal processes are considered in [13, 14]. In [13], the stopping problem is set up so that the decision maker can control the intensity of the Poissonian signal process, whereas [14] addresses the shape properties of the value function in a time-inhomogeneous Poisson stopping problem.

We extend the Poisson stopping framework to zero-sum stopping games in the following way. Similarly to [26], we study a perpetual problem and assume that the underlying dynamics follow a general one-dimensional diffusion. Moreover, we assume that there are two independent Poisson signal processes, one for each player, and that the players can stop only at the arrival times of their respective Poisson processes. These processes can have different intensities, which makes the game setting asymmetric. Our problem setting is closely related to [24, 25], see also [15]. In [24], a similar game is studied where there is only one signal process and both players are allowed to stop at its arrival times. This remains in contrast to our case even when the intensities of the signal processes coincide. Namely, even though the arrival rates are the same, the signals will almost surely never appear simultaneously. This eliminates the need to assume the usual ordering (appearing, for instance, in [2, 8,9,10, 21, 24, 27]) that the payoff of the inf-player has to dominate that of the sup-player, since an immediate comparison of the payoffs is never needed; this observation is made also in [25], where the heterogeneous case is studied. We point out that some comparison of the payoffs is still needed; the precise conditions are spelled out in Assumption 2.4. The payoff processes in [24, 25] are assumed to be progressively measurable with respect to the minimal completion of the filtration generated by a (potentially multidimensional) Wiener process. This is in the same spirit as our model, as the paths dictating the payoffs are continuous in both cases. We refer here to [30], where a similar constrained game is considered for Lévy dynamics. The time horizon in [24, 25] is allowed to be a stopping time, either bounded or unbounded. For an unbounded time horizon, the analysis of [24] covers the case where the payoffs are bounded. This is in contrast to our study, where we also allow for unbounded payoffs. In [24, 25], the authors provide a characterization of the value in terms of a penalized backward stochastic differential equation. We take a different route by solving our problem via a free boundary problem. As a result, we produce explicit (up to a representation of the minimal r-excessive functions of the diffusion process) solutions for the optimal value function. We also characterize the optimal threshold rules in terms of the minimal r-excessive functions and provide sufficient conditions both for existence and uniqueness of the solution; to the best of our knowledge, these are new results. These results are useful for a few reasons. Firstly, diffusion models are important in many applications and our results shed new light on the structure of the solution for this class of problems. Secondly, the semi-explicit nature of the solution allows a deeper study of the asymptotics and other properties of the asymmetry. Lastly, the solution is fairly easy to produce, at least numerically, as it boils down to solving a linear second-order ordinary differential equation.

The remainder of the study is organized as follows. In Sect. 2 we formulate the optimal stopping game. A candidate solution for the game is derived in Sect. 3, whereas in Sect. 4 we show that the candidate solution is indeed the solution of the game. Asymptotic results are proved in Sect. 5, and the study is concluded by explicit examples in Sect. 6.

2 The Game

We assume that the state process X is a regular diffusion evolving on \(\mathbb {R}_+\) with the initial state x. Furthermore, we assume that the boundaries of the state space \(\mathbb {R}_+\) are natural. The evolution of X is then completely determined by its scale function S and speed measure m inside \(\mathbb {R}_+\), see [4, pp. 13–14]. Furthermore, we assume that the function S and the measure m are absolutely continuous with respect to the Lebesgue measure, have smooth derivatives, and that S is twice continuously differentiable. Under these assumptions, the infinitesimal generator \(\mathcal {A}:\mathcal {D}(\mathcal {A})\rightarrow C_b(\mathbb {R}_+)\) of X can be expressed as \(\mathcal {A}=\frac{1}{2}\sigma ^2(x)\frac{d^2}{dx^2}+\mu (x)\frac{d}{dx}\), where the functions \(\sigma \) and \(\mu \) are related to S and m via the formulæ  \(m'(x)=\frac{2}{\sigma ^2(x)}e^{B(x)}\) and \(S'(x)= e^{-B(x)}\) for all \(x \in \mathbb {R}_+\), where \(B(x):=\int ^x \frac{2\mu (y)}{\sigma ^2(y)}dy\), see [4, p. 17]. From these definitions we find that \(\sigma ^2(x)=\frac{2}{S'(x)m'(x)}\) and \(\mu (x)=-\frac{S''(x)}{{S'}^2(x)m'(x)}\) for all \(x \in \mathbb {R}_+\). In what follows, we assume that the functions \(\mu \) and \(\sigma ^2\) are continuous. The assumption that the state space is \(\mathbb {R}_+\) is made for convenience; in fact, we could take the state space to be any interval \(\mathcal {I}\) in \(\mathbb {R}\) and the subsequent analysis would hold with obvious modifications. Furthermore, we denote by \(\psi _r\) and \(\varphi _r\), respectively, the increasing and the decreasing solution of the second-order linear ordinary differential equation \(\mathcal {A}u=ru\), where \(r>0\), defined on the domain of the characteristic operator of X. The functions \(\psi _r\) and \(\varphi _r\) can be identified as the minimal r-excessive functions of X, see [4, pp. 18–20]. In addition, we assume that the filtration \(\mathbb {F}\) carries two Poisson processes \(Y^i=(Y^i_t,\mathcal {F}_t)\) and \(Y^s=(Y^s_t,\mathcal {F}_t)\) with intensities \(\lambda _i\) and \(\lambda _s\), respectively. We call the processes \(Y^i\) and \(Y^s\) signal processes, and assume that they are mutually independent and also independent of X. We denote the arrival times of \(Y^i\) and \(Y^s\), respectively, as \(T_{n^i}\) and \(T_{n^s}\). Finally, we make the convention that \(T_{0^i} = T_{0^s} =0\).

Denote now as \(L_1^r\) the class of measurable mappings f satisfying the integrability condition

$$\begin{aligned} \mathbb {E}_x \bigg [ \int _0^{\infty } e^{-rt}|f(X_t)| dt\bigg ] < \infty . \end{aligned}$$

We know from the literature, see [4, p. 19], that for a given \(f\in L_1^r\) the resolvent \(R_rf\) can be expressed as

$$\begin{aligned} (R_rf)(x) =B_r^{-1}\varphi _r(x)&\int _0^x \psi _r(y)f(y)m'(y)dy +B_r^{-1}\psi _r(x)\int _x^\infty \varphi _r(y)f(y)m'(y)dy, \end{aligned}$$
(2.1)

for all \(x \in \mathbb {R}_+\), where \(B_r=\frac{\psi _r'(x)}{S'(x)}\varphi _r(x)-\frac{\varphi _r'(x)}{S'(x)}\psi _r(x)\) denotes the Wronskian determinant.

Next, we define the stopping game. The players, sup and inf, have their respective exercise payoff functions \(g_s\) and \(g_i\), and are allowed to stop the process X only at the arrivals of their respective signal processes \(Y^s\) and \(Y^i\). The sup-player attempts to maximize the expected present value of the exercise payoff, whereas the inf-player's objective is to minimize the same quantity. We define the lower and upper values of the game as

$$\begin{aligned} \underline{V}(x) = \sup _{\tau \in \textbf{T}^s} \inf _{\sigma \in \textbf{T}^i} \mathbb {E}_x\left[ e^{-r(\tau \wedge \sigma )}R(\tau ,\sigma ) \right] , \quad \overline{V}(x) = \inf _{\sigma \in \textbf{T}^i} \sup _{\tau \in \textbf{T}^s} \mathbb {E}_x\left[ e^{-r(\tau \wedge \sigma )}R(\tau ,\sigma ) \right] , \end{aligned}$$

where

$$\begin{aligned}&\textbf{T}^s = \{\ \tau \text { is an }\mathbb {F}-\text {stopping time } \ | \text { for all } \omega : \tau (\omega )=T_{n^s}(\omega ), \text {for some } n^s=1,2,\dots \} \\&\textbf{T}^i = \{\ \tau \text { is an }\mathbb {F}-\text {stopping time } \ | \text { for all } \omega : \tau (\omega )=T_{n^i}(\omega ), \text {for some } n^i=1,2,\dots \} \\&R(\tau ,\sigma ) = g_s(X_{\tau })\mathbbm {1}_{\{\tau < \sigma \}} + g_i(X_{\sigma })\mathbbm {1}_{\{\tau > \sigma \}}. \end{aligned}$$

When the equality

$$\begin{aligned} \begin{aligned} V(x) = \underline{V}(x) = \overline{V}(x) \end{aligned} \end{aligned}$$
(2.2)

holds, the zero-sum game is said to have a value V. The maximizing strategies in \(\underline{V}\) and the minimizing strategies in \(\overline{V}\) are called optimal, and any pair of optimal strategies is a Nash equilibrium. We point out that in the game studied here, it is not necessary to include the possibility of simultaneous stopping, since independent Poisson arrivals almost surely never occur simultaneously. It is also worth pointing out that in the definition of the upper and lower values, the players are not allowed to stop immediately. One could think of the value function as the value of the future stopping opportunities without the immediate stopping optionality.

To solve the problem (2.2), we introduce two auxiliary problems. Auxiliary problem I is defined via the lower and upper values

$$\begin{aligned} \underline{V}_0^i(x) = \sup _{\tau \in \textbf{T}^s} \inf _{\sigma \in \textbf{T}_0^i} \mathbb {E}_x\left[ e^{-r(\tau \wedge \sigma )}R(\tau ,\sigma ) \right] , \quad \overline{V}_0^i(x) = \inf _{\sigma \in \textbf{T}_0^i} \sup _{\tau \in \textbf{T}^s} \mathbb {E}_x\left[ e^{-r(\tau \wedge \sigma )}R(\tau ,\sigma ) \right] , \end{aligned}$$

where

$$\begin{aligned} \textbf{T}_0^i = \{\ \tau \text { is an }\mathbb {F}-\text {stopping time } \ | \text { for all } \omega : \tau (\omega )=T_{n^i}(\omega ), \text {for some } n^i=0,1,\dots \}. \end{aligned}$$

Similarly, the auxiliary problem S is defined via the lower and upper values

$$\begin{aligned} \underline{V}_0^s(x) = \sup _{\tau \in \textbf{T}_0^s} \inf _{\sigma \in \textbf{T}^i} \mathbb {E}_x\left[ e^{-r(\tau \wedge \sigma )}R(\tau ,\sigma ) \right] , \quad \overline{V}_0^s(x) = \inf _{\sigma \in \textbf{T}^i} \sup _{\tau \in \textbf{T}_0^s} \mathbb {E}_x\left[ e^{-r(\tau \wedge \sigma )}R(\tau ,\sigma ) \right] , \end{aligned}$$

where

$$\begin{aligned} \textbf{T}_0^s = \{\ \tau \text { is an }\mathbb {F}-\text {stopping time } \ | \text { for all } \omega : \tau (\omega )=T_{n^s}(\omega ), \text {for some } n^s=0,1,\dots \}. \end{aligned}$$

Finally, the values \(V_0^i\) and \(V_0^s\) are said to exist if conditions similar to (2.2) hold. We point out that in auxiliary problem I the inf-player is allowed to stop immediately, whereas the sup-player has to wait until the next \(Y^s\)-arrival to make a choice. The roles are reversed in auxiliary problem S, where the sup-player can stop immediately. In Sect. 3, we propose a Bellman principle that binds the candidate values for the main problem and the auxiliary problems together.

We consider payoff functions similar to those in the existing literature on optimal stopping that considers explicitly solvable cases, see [2, 27].

Assumption 2.1

Let \(g_i\) and \(g_s\) be real functions defined on the positive reals and satisfying the following conditions:

  1. (1)

    \(g_i\) and \(g_s\) are non-decreasing and continuously differentiable,

  2. (2)

    \(g_i\) and \(g_s\) are stochastically \(C^2\): they are twice continuously differentiable outside of a countable set \(\{x_j\}\) which has no accumulation points and the limits \(|g_i^{''}(x_j\pm )|\) and \(|g_s^{''}(x_j\pm )|\) are all finite,

  3. (3)

    There exist states \(z_i\) and \(z_s\) such that

    $$\begin{aligned} {\left\{ \begin{array}{ll} (\mathcal {A}-r)g_i(x) \\ (\mathcal {A}-r)g_s(x) \end{array}\right. } \gtreqqless 0, \ {\left\{ \begin{array}{ll} x \lesseqqgtr z_i, \\ x \lesseqqgtr z_s. \end{array}\right. } \end{aligned}$$

Some remarks regarding these assumptions are in order. The monotonicity in point (1) is satisfied in many potential applications, and point (2) essentially guarantees that we can work with the expressions \((\mathcal {A}-r)g_i\) and \((\mathcal {A}-r)g_s\). Point (3) suggests that we are setting up problems where the continuation region is connected, that is, where the equilibrium stopping rule is two-sided. This structure is important and appears in many applications.

The class of problems given by our assumptions is large and contains important cases such as linear payoffs. Indeed, when the payoffs are linear \(g_k(x) = x-c_k\), \(k=i,s\), where \(c_s > c_i\), and X is a geometric Brownian motion with drift \(\mu < r\), then \((\mathcal {A}-r)g_k(x) = (\mu - r)x + r c_k\). More generally, if the drift coefficient of X is a polynomial whose leading term has a negative coefficient (this is typical in mean-reverting models), then Assumption 2.1 holds for linear payoffs. For example, if X is a Verhulst-Pearl diffusion (\( \mathcal {A} = \frac{1}{2}\sigma ^2x^2\frac{d^2}{dx^2} + \mu x(1-\beta x) \frac{d}{dx}\)), then \((\mathcal {A}-r)g_k(x) = - \mu \beta x^2 + (\mu - r)x + r c_k\).
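
For instance, in the geometric Brownian motion case the threshold states of point (3) of Assumption 2.1 can be written out explicitly:

$$\begin{aligned} (\mathcal {A}-r)g_k(x) = (\mu -r)x + r c_k \gtreqqless 0, \quad x \lesseqqgtr z_k := \frac{r c_k}{r-\mu }, \qquad k=i,s, \end{aligned}$$

so that (assuming \(c_i>0\)) the states \(z_i < z_s\) are finite and positive, and Assumption 2.1 indeed holds.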

We address non-smooth payoffs in Sect. 3.4 by studying the payoff structure of a callable option [3, 11, 19, 21] and observe that its analysis can, fairly directly, be reduced to our core case.

We conclude this section with some preliminary analysis. For \(f \in L_1^r\), we define the functionals \(\Psi _i\) and \(\Phi _s\) as

$$\begin{aligned} \begin{aligned} (\Psi _i f)(x)&= \int _0^x \psi _{r+\lambda _i}(z)f(z)m'(z)dz, \\ (\Phi _s f)(x)&= \int _x^\infty \varphi _{r+\lambda _s}(z)f(z)m'(z)dz, \\ \end{aligned} \end{aligned}$$
(2.3)

and, with a slight abuse of notation,

$$\begin{aligned} \begin{aligned} (\Psi f)(x)&= \int _0^x \psi _{r}(z)f(z)m'(z)dz, \\ (\Phi f)(x)&= \int _x^\infty \varphi _{r}(z)f(z)m'(z)dz. \\ \end{aligned} \end{aligned}$$

Lemma 2.2

Let \(q>0\) and \(g \in L_1^{q}\) satisfy the points (1) and (2) of Assumption 2.1. Then

$$\begin{aligned} \frac{\varphi _{q}'(x)}{S'(x)} g(x) - \frac{g'(x)}{S'(x)} \varphi _{q}(x)&= \int _x^\infty \varphi _{q}(z)(\mathcal {A}-q)g(z)m'(z)dz, \\ \frac{g'(x)}{S'(x)} \psi _{q}(x) - \frac{\psi _{q}'(x)}{S'(x)} g(x)&= \int _0^x \psi _{q}(z)(\mathcal {A}-q)g(z)m'(z)dz, \end{aligned}$$

Proof

Denote

$$\begin{aligned} J(x) = \frac{\varphi _{q}'(x)}{S'(x)} g(x) - \frac{g'(x)}{S'(x)} \varphi _{q}(x), \, \quad I(x) = \frac{g'(x)}{S'(x)} \psi _{q}(x) - \frac{\psi _{q}'(x)}{S'(x)} g(x). \end{aligned}$$

Since the functions \(\psi _q\) and \(\varphi _q\) are solutions of the differential equation \((\mathcal {A}-q)u=0\), we find after differentiation that

$$\begin{aligned} J'(x)= -\varphi _q(x)((\mathcal {A}-q)g)(x)m'(x) = \frac{\varphi _q(x)}{\psi _q(x)}I'(x) \end{aligned}$$

Therefore, an application of the fundamental theorem of calculus combined with the assumed boundary classification of the diffusion yields the results. \(\square \)

Remark 2.3

We note that the point (3) in Assumption 2.1 implies that there exist unique states \(\tilde{x}_i, \tilde{x}_s \in (0,\infty )\) such that

$$\begin{aligned} (\Psi _i (\mathcal {A}-r)g_i)(x)&\gtreqqless 0, \ x \lesseqqgtr \tilde{x}_i, \\ (\Phi _s (\mathcal {A}-r)g_s)(x)&\gtreqqless 0, \ x \lesseqqgtr \tilde{x}_s. \end{aligned}$$

Indeed, first notice that \((\Phi _s (\mathcal {A}-r)g_s)(x)<0\) when \(x > z_s\). Then taking \(x< k < z_s\) we get

$$\begin{aligned} (\Phi _s (\mathcal {A}-r)g_s)(x) - (\Phi _s (\mathcal {A}-r)g_s)(k) = \int _{x}^k \varphi _{r+\lambda _s}(z) (\mathcal {A}-r)g_s(z) m'(z)dz. \end{aligned}$$

By the mean value theorem we have

$$\begin{aligned} (\Phi _s (\mathcal {A}-r)g_s)(x) = (\Phi _s (\mathcal {A}-r)g_s)(k) + \frac{(\mathcal {A}-r)g_s(\xi )}{r+\lambda _s} \bigg ( \frac{\varphi _{r+\lambda _s}'(k)}{S'(k)} -\frac{\varphi _{r+\lambda _s}'(x)}{S'(x)} \bigg ), \end{aligned}$$

where \(\xi \in (x,k)\). Because the lower boundary is natural, and hence \(\frac{\varphi _{r+\lambda _s}'(x)}{S'(x)} \rightarrow -\infty \) when \(x \rightarrow 0\), we see that taking the limit \(x \rightarrow 0\) yields \((\Phi _s (\mathcal {A}-r)g_s)(x) \rightarrow \infty \). Thus, by monotonicity, the functional \((\Phi _s (\mathcal {A}-r)g_s)(x)\) must have a unique finite root \(\tilde{x}_s>0\). Similar calculations show that \(\tilde{x}_i\) is finite.

Assumption 2.1 suffices to show uniqueness of our solution in Sect. 3 and to prove the verification theorem in Sect. 4, but we need to impose additional assumptions for the existence of the optimal solution. These are collected below.

Assumption 2.4

Let \(g_i\), \(g_s\) and \(x_j\) be defined as in Assumption 2.1 and the states \(\tilde{x}_i, \tilde{x}_s\) as in Remark 2.3. We assume that

  1. (1)

    \((\mathcal {A}-r)g_s(x) > (\mathcal {A}-r)g_i(x) \text { for all } x \in \mathbb {R}_+ \setminus \{x_j\}\),

  2. (2)

    the states \(\tilde{x}_i, \tilde{x}_s\) have the order \(\tilde{x}_i < \tilde{x}_s\),

  3. (3)

    the limits satisfy \(\frac{g_i}{\psi _r}(0+) < 0\) and \(\frac{g_s}{\varphi _r}(\infty ) > 0\).

In point (3) of Assumption 2.4, we also allow that \(\frac{g_i}{\psi _r}(0+) = - \infty \) and \(\frac{g_s}{\varphi _r}(\infty ) = \infty \). This is the case in many examples.

For \(f,g \in L_1^r\), we define the functionals \(H_i\) and \(H_s\) as

$$\begin{aligned} \begin{aligned} H_i(g,f; x)&= \lambda _i\frac{g(x)(\Psi _i f)(x) - f(x)(\Psi _i g)(x)}{\psi _{r+\lambda _i}(x)}, \\ H_s(g,f; x)&= \lambda _s\frac{g(x)(\Phi _s f)(x) - f(x)(\Phi _s g)(x)}{ \varphi _{r+\lambda _s}(x)}. \end{aligned} \end{aligned}$$
(2.4)

Lemma 2.5

Let \(g \in L_1^{r}\) satisfy the points (1) and (2) of Assumption 2.1. Furthermore, let \(\xi _r\) be r-harmonic. Then

$$\begin{aligned} \frac{d}{dx} H_i(g , \xi _r; x)&= \frac{\lambda _i S'(x)}{ \psi _{r+\lambda _i}^2(x)} (\Psi _i \xi _r)(x) (\Psi _i(\mathcal {A}-r)g)(x), \\ \frac{d}{dx} H_s(\xi _r, g; x)&= \frac{\lambda _s S'(x)}{ \varphi _{r+\lambda _s}^2(x)} (\Phi _s \xi _r)(x) (\Phi _s(\mathcal {A}-r)g)(x). \end{aligned}$$

Proof

We prove the first claim; the second can be proved similarly. Elementary differentiation and a reorganization of the terms yield

$$\begin{aligned}&\lambda _i^{-1} \psi _{r+\lambda _i}^2(x) \frac{d}{dx} H_i(g,\xi _r; x) \nonumber \\&\quad = (g'(x)\psi _{r+\lambda _i}(x)-g(x)\psi _{r+\lambda _i}'(x)) (\Psi _i \xi _r)(x) \nonumber \\&\qquad - (\xi _r'(x)\psi _{r+\lambda _i}(x)-\xi _r(x)\psi _{r+\lambda _i}'(x)) (\Psi _i g)(x). \end{aligned}$$
(2.5)

We apply the second part of Lemma 2.2 to \(\xi _r\) and find that

$$\begin{aligned} \lambda _i(\Psi _i \xi _r)(x) = \frac{\psi _{r+\lambda _i}'(x)}{S'(x)} \xi _r(x) - \frac{\xi _r'(x)}{S'(x)} \psi _{r+\lambda _i}(x). \end{aligned}$$
(2.6)

By substituting this into Eq. (2.5), and then applying first the second part of Lemma 2.2 to g and then the expression (2.6) again, the claim follows. \(\square \)

3 The Solution: Necessary Conditions

3.1 The Candidate Solution

We start the analysis of the problem (2.2) by deriving a candidate solution. To this end, we recall the main problem (2.2) and the auxiliary problems I and S from Sect. 2. Denote the candidate value for the problem (2.2) as G and the candidate functions for the auxiliary problems I and S as \(G_0^i\) and \(G_0^s\), respectively. We make the following working assumptions:

(1) We assume that the candidate value functions satisfy the following dynamic programming principle:

$$\begin{aligned} G_0^i(x)&= \min (g_i(x), G(x)), \end{aligned}$$
(3.1)
$$\begin{aligned} G_0^s(x)&= \max (g_s(x), G(x)), \end{aligned}$$
(3.2)
$$\begin{aligned} G(x)&= \mathbb {E}_x\left[ e^{-r(U^i \wedge U^s)}\left( G_0^i(X_{U^i}) \mathbbm {1}_{\{ U^i< U^s \}} + G_0^s(X_{U^s}) \mathbbm {1}_{\{ U^s < U^i \}} \right) \right] \end{aligned}$$
(3.3)

here, the random variables \(U^s \sim {\text {Exp}}(\lambda _s)\) and \(U^i \sim {\text {Exp}}(\lambda _i)\) are independent. In auxiliary problem I, the inf-player chooses between stopping immediately and waiting, whereas the sup-player can do nothing but wait; this situation is reflected by Eq. (3.1). Eq. (3.2) has a similar interpretation in terms of auxiliary problem S. The condition (3.3) is the expected present value of the next stopping opportunity, which will arrive either for the inf- or the sup-player and present itself as a choice reflected by the conditions (3.1) and (3.2). We point out that, by the independence of \(U^i\) and \(U^s\), the condition (3.3) can be written as

$$\begin{aligned} G(x) = \frac{\lambda _i}{\lambda _i + \lambda _s} (\lambda _i+\lambda _s)(R_{r+\lambda _i+\lambda _s}G_0^i)(x) + \frac{\lambda _s}{\lambda _i + \lambda _s} (\lambda _i+\lambda _s)(R_{r+\lambda _i+\lambda _s}G_0^s)(x). \end{aligned}$$
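
Indeed, for any \(f \in L_1^r\), the independence of \(U^i\), \(U^s\) and X gives

$$\begin{aligned} \mathbb {E}_x\left[ e^{-rU^i}f(X_{U^i})\mathbbm {1}_{\{U^i< U^s\}} \right] = \int _0^\infty \lambda _i e^{-(\lambda _i+\lambda _s)t}\, \mathbb {E}_x\left[ e^{-rt}f(X_t)\right] dt = \lambda _i(R_{r+\lambda _i+\lambda _s}f)(x), \end{aligned}$$

and similarly for the \(U^s\)-term; applying this to \(f=G_0^i\) and \(f=G_0^s\) yields the displayed identity.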

(2) By the time homogeneity of the stopping game, we assume that the continuation region is the interval \((y_i,y_s)\), for some constants \(y_i\) and \(y_s\). Thus we can rewrite the functions \(G_0^i\) and \(G_0^s\) as

$$\begin{aligned} G_0^i(x) = {\left\{ \begin{array}{ll} G(x), &{} x > y_i, \\ g_i(x), &{} x \leqslant y_i, \end{array}\right. } \quad G_0^s(x) = {\left\{ \begin{array}{ll} g_s(x), &{} x \geqslant y_s, \\ G(x), &{} x < y_s. \end{array}\right. } \end{aligned}$$

(3) Furthermore, we assume that the function G is continuous. Then

$$\begin{aligned} G(y_i)&= G_0^i(y_i) = g_i(y_i), \\ G(y_s)&= G_0^s(y_s) = g_s(y_s). \end{aligned}$$

These assumptions are used to derive the candidate solution for the problem; this is the task of this section. The candidate solution is then verified to be the actual solution in Sect. 4.

Since \(G(x) = G_0^i(x) = G_0^s(x)\) on \((y_i,y_s)\) and

$$\begin{aligned} G(x) = \mathbb {E}_x \left[ e^{-r(U^i\wedge U^s)}\left( G_0^i(X_{U^i}) \mathbbm {1}_{\{ U^i< U^s \}} + G_0^s(X_{U^s}) \mathbbm {1}_{\{ U^s < U^i \}} \right) \right] , \end{aligned}$$

we find that

$$\begin{aligned}&\frac{\lambda _i}{\lambda _i + \lambda _s}G_0^i(x) + \frac{\lambda _s}{\lambda _i + \lambda _s}G_0^s(x) \\&\quad = G(x) \\&\quad = \frac{\lambda _i}{\lambda _i + \lambda _s} (\lambda _i+\lambda _s)(R_{r+\lambda _i+\lambda _s}G_0^i)(x) + \frac{\lambda _s}{\lambda _i + \lambda _s} (\lambda _i+\lambda _s)(R_{r+\lambda _i+\lambda _s}G_0^s)(x) \\&\quad = \mathbb {E}_x \left[ e^{-r(U^i\wedge U^s)} \left( \frac{\lambda _i}{\lambda _i + \lambda _s}G_0^i(X_{U^i \wedge U^s}) + \frac{\lambda _s}{\lambda _i + \lambda _s}G_0^s(X_{U^i \wedge U^s}) \right) \right] . \end{aligned}$$

By [26, Lemma 2.1], the function \(x \mapsto \frac{\lambda _i}{\lambda _i + \lambda _s}G_0^i(x) + \frac{\lambda _s}{\lambda _i + \lambda _s}G_0^s(x)\) is r-harmonic on \((y_i,y_s)\). Consequently, we have that \(G(x) = G_0^i(x) = G_0^s(x) = h_r(x)\), where \(h_r\) is r-harmonic, on \((y_i,y_s)\). Summarizing,

$$\begin{aligned} G_0^i(x) = {\left\{ \begin{array}{ll} G(x), &{} x \geqslant y_s, \\ h_r(x), &{} x \in (y_i,y_s), \\ g_i(x), &{} x \leqslant y_i, \end{array}\right. } \quad G_0^s(x) = {\left\{ \begin{array}{ll} g_s(x), &{} x \geqslant y_s, \\ h_r(x), &{} x \in (y_i,y_s), \\ G(x), &{} x \leqslant y_i. \end{array}\right. } \end{aligned}$$

We develop this representation further in the following lemma.

Lemma 3.1

The following representations hold:

$$\begin{aligned} G(x) = {\left\{ \begin{array}{ll} \lambda _i(R_{r+\lambda _i}g_i)(x) + \frac{g_i(y_i) - \lambda _i(R_{r+\lambda _i}g_i)(y_i)}{\psi _{r+\lambda _i}(y_i)}\psi _{r+\lambda _i}(x), &{} x < y_i, \\ \lambda _s(R_{r+\lambda _s}g_s)(x) + \frac{g_s(y_s) - \lambda _s(R_{r+\lambda _s}g_s)(y_s)}{\varphi _{r+\lambda _s}(y_s)}\varphi _{r+\lambda _s}(x), &{} x > y_s. \end{array}\right. } \end{aligned}$$

Proof

Let \(x < y_i\). Then by the conditions (3.3), (3.1) and (3.2), we find that

$$\begin{aligned} G(x)&= \mathbb {E}_x\left[ e^{-rU^i}g_i(X_{U^i})\mathbbm {1}_{\{U^i< U^s\}}\mathbbm {1}_{\{ U^i \wedge U^s< \tau _{y_i} \}} \right] \\&\quad + \mathbb {E}_x\left[ e^{-rU^i}G_0^i(X_{U^i})\mathbbm {1}_{\{U^i< U^s\}}\mathbbm {1}_{\{ U^i \wedge U^s> \tau _{y_i} \}} \right] \\&+ \mathbb {E}_x\left[ e^{-rU^s}G(X_{U^s})\mathbbm {1}_{\{U^s< U^i\}}\mathbbm {1}_{\{ U^i \wedge U^s< \tau _{y_i} \}} \right] \\&\quad + \mathbb {E}_x\left[ e^{-rU^s}G_0^s(X_{U^s})\mathbbm {1}_{\{U^s < U^i\}}\mathbbm {1}_{\{ U^i \wedge U^s > \tau _{y_i} \}} \right] . \end{aligned}$$

By the strong Markov property, we obtain

$$\begin{aligned}&\mathbb {E}_x\left[ e^{-rU^i}G_0^i(X_{U^i})\mathbbm {1}_{\{U^i< U^s\}}\mathbbm {1}_ {\{ U^i \wedge U^s> \tau _{y_i} \}} \right] \\&\qquad + \mathbb {E}_x\left[ e^{-rU^s}G_0^s(X_{U^s}) \mathbbm {1}_{\{U^s< U^i\}}\mathbbm {1}_{\{ U^i \wedge U^s> \tau _{y_i} \}} \right] \\&\quad =\mathbb {E}_x\left[ e^{-r\tau _{y_i}}\mathbb {E}_{X_{\tau _{y_i}}}\left[ e^{-r(U^i \wedge U^s)} \left( G_0^i(X_{U^i}) \mathbbm {1}_{\{ U^i< U^s \}} + G_0^s(X_{U^s}) \mathbbm {1}_{\{ U^s < U^i \}} \right) \right] \mathbbm {1}_{\{ U^i \wedge U^s > \tau _{y_i} \}} \right] \end{aligned}$$

Thus,

$$\begin{aligned}&G(x) =\mathbb {E}_x\left[ e^{-rU^i}g_i(X_{U^i})\mathbbm {1}_{\{U^i< U^s\}}\mathbbm {1}_ {\{ U^i \wedge U^s< \tau _{y_i} \}} \right] \\&\quad + \mathbb {E}_x\left[ e^{-rU^s}G(X_{U^s}) \mathbbm {1}_{\{U^s< U^i< \tau _{y_i} \}}\mathbbm {1}_{\{ U^i \wedge U^s< \tau _{y_i} \}} \right] \\&\quad + \mathbb {E}_x\left[ e^{-rU^s}G(X_{U^s})\mathbbm {1}_{\{U^s< \tau _{y_i}< U^i \}}\mathbbm {1}_{\{ U^i \wedge U^s < \tau _{y_i} \}} \right] \\&\quad + \mathbb {E}_x\left[ e^{-r\tau _{y_i}}G(X_{\tau _{y_i}}) \mathbbm {1}_{\{ U^i \wedge U^s > \tau _{y_i} \}} \right] . \end{aligned}$$

Since

$$\begin{aligned} G(X_{U^s})&= \mathbb {E}_{X_{U^s}}\left[ e^{-rU^i} g_i(X_{U^i}) \right] \quad \text { on the event } \{ U^s< U^i< \tau _{y_i}\}, \\ G(X_{U^s})&= \mathbb {E}_{X_{U^s}}\left[ e^{-r\tau _{y_i}}G(X_{\tau _{y_i}}) \right] \quad \text { on the event } \{ U^s< \tau _{y_i} < U^i\}, \end{aligned}$$

we find, by another application of the strong Markov property, that

$$\begin{aligned} G(x)&= \mathbb {E}_x\left[ e^{-rU^i}g_i(X_{U^i}) \mathbbm {1}_{\{U^i < \tau _{y_i}\}} \right] + \mathbb {E}_x\left[ e^{-r \tau _{y_i}} G(X_{\tau _{y_i}}) \mathbbm {1}_{\{ U^i > \tau _{y_i} \}} \right] . \end{aligned}$$

Since \(G(X_{\tau _{y_i}}) = G(y_i) = g_i(y_i)\), we finally obtain

$$\begin{aligned} G(x)&= \mathbb {E}_x\left[ e^{-rU^i} g_i(X_{U^i}) \right] - \mathbb {E}_x\left[ e^{-rU^i} g_i(X_{U^i}) \mathbbm {1}_{\{ U^i> \tau _{y_i} \}} \right] \\&\quad + \mathbb {E}_x\left[ e^{-r\tau _{y_i}} \mathbbm {1}_{\{ U^i > \tau _{y_i} \}} \right] g_i(y_i) \\&= \lambda _i(R_{r+\lambda _i}g_i)(x) + \frac{g_i(y_i) - \lambda _i(R_{r+\lambda _i}g_i)(y_i)}{\psi _{r+\lambda _i}(y_i)}\psi _{r+\lambda _i}(x). \end{aligned}$$
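
Here, the last equality uses the lack of memory of \(U^i\), the strong Markov property at \(\tau _{y_i}\), and the familiar Laplace transform of the first hitting time (recall that \(x < y_i\)):

$$\begin{aligned} \mathbb {E}_x\left[ e^{-rU^i} g_i(X_{U^i}) \mathbbm {1}_{\{ U^i> \tau _{y_i} \}} \right]&= \mathbb {E}_x\left[ e^{-(r+\lambda _i)\tau _{y_i}} \right] \lambda _i(R_{r+\lambda _i}g_i)(y_i), \\ \mathbb {E}_x\left[ e^{-r\tau _{y_i}} \mathbbm {1}_{\{ U^i > \tau _{y_i} \}} \right]&= \mathbb {E}_x\left[ e^{-(r+\lambda _i)\tau _{y_i}} \right] = \frac{\psi _{r+\lambda _i}(x)}{\psi _{r+\lambda _i}(y_i)}. \end{aligned}$$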

The case \(x > y_s\) is proved similarly. \(\square \)

The next lemma provides necessary conditions for the optimality of the thresholds \(y_i\) and \(y_s\).

Lemma 3.2

Assume that the condition (3.3) holds for all \(x \in \mathbb {R}_+\). Then

$$\begin{aligned} \mathbb {E}_{y_i}\left[ e^{-rU^i}h_r(X_{U^i}) \mathbbm {1}_{\{ X_{U^i}< y_i \}} \right]&= \mathbb {E}_{y_i}\left[ e^{-rU^i}g_i(X_{U^i}) \mathbbm {1}_{\{ X_{U^i} < y_i \}} \right] , \\ \mathbb {E}_{y_s}\left[ e^{-rU^s}h_r(X_{U^s}) \mathbbm {1}_{\{ X_{U^s}> y_s \}} \right]&= \mathbb {E}_{y_s}\left[ e^{-rU^s}g_s(X_{U^s}) \mathbbm {1}_{\{ X_{U^s} > y_s \}} \right] , \end{aligned}$$

which can be rewritten as

$$\begin{aligned} \int _0^{y_i} \psi _{r+\lambda _i}(z) g_i(z) m'(z)dz&= \int _0^{y_i} \psi _{r+\lambda _i}(z) h_r(z) m'(z)dz, \\ \int _{y_s}^\infty \varphi _{r+\lambda _s}(z) g_s(z) m'(z)dz&= \int _{y_s}^\infty \varphi _{r+\lambda _s}(z) h_r(z) m'(z)dz. \end{aligned}$$

Proof

Let \(x \in (y_i,y_s)\). Using Lemma 2.1 of [26], we find that

$$\begin{aligned} G(x)&= h_r(x) = \frac{\lambda _i}{\lambda _i + \lambda _s} h_r(x) + \frac{\lambda _s}{\lambda _i + \lambda _s} h_r(x) \\&= \frac{\lambda _i}{\lambda _i + \lambda _s} (\lambda _i+\lambda _s)(R_{r+\lambda _i+\lambda _s}h_r)(x) + \frac{\lambda _s}{\lambda _i + \lambda _s} (\lambda _i+\lambda _s)(R_{r+\lambda _i+\lambda _s}h_r)(x) \\&= \mathbb {E}_x\Big [ \underbrace{e^{-r(U^i \wedge U^s)}\left( h_r(X_{U^i}) \mathbbm {1}_{\{ U^i< U^s \}} + h_r(X_{U^s}) \mathbbm {1}_{\{ U^s < U^i \}} \right) }_{:=F_r(X_{U^i},X_{U^s})} \Big ]. \end{aligned}$$

This can be rewritten as

$$\begin{aligned} G(x)&= \mathbb {E}_x\left[ F_r(X_{U^i},X_{U^s}) \mathbbm {1}_{\{U^i\wedge U^s < \tau _{(y_i,y_s)}\}} \right] \nonumber \\&+ \mathbb {E}_x\left[ F_r(X_{U^i},X_{U^s}) \mathbbm {1}_{\{U^i\wedge U^s > \tau _{(y_i,y_s)}\}} \mathbbm {1}_ {\{ \tau _{(y_i,y_s)} = \tau _{y_s}\}}\right] \end{aligned}$$
(3.4)
$$\begin{aligned}&+ \mathbb {E}_x\left[ F_r(X_{U^i},X_{U^s}) \mathbbm {1}_{\{U^i\wedge U^s > \tau _{(y_i,y_s)}\}} \mathbbm {1}_{\{ \tau _{(y_i,y_s)} = \tau _{y_i}\}}\right] . \end{aligned}$$
(3.5)

The strong Markov property and [26, Lemma 2.1] yield

$$\begin{aligned}&\mathbb {E}_x\left[ F_r(X_{U^i},X_{U^s}) \mathbbm {1}_{\{U^i\wedge U^s> \tau _{(y_i,y_s)}\}} \mathbbm {1}_ {\{ \tau _{(y_i,y_s)} = \tau _{y_s}\}}\right] \nonumber \\&\qquad + \mathbb {E}_x\left[ F_r(X_{U^i},X_{U^s}) \mathbbm {1}_{\{U^i\wedge U^s> \tau _{(y_i,y_s)}\}} \mathbbm {1}_{\{ \tau _{(y_i,y_s)} = \tau _{y_i}\}}\right] \nonumber \\&\quad =\mathbb {E}_x\left[ e^{-r\tau _{y_s}} \mathbb {E}_{X_{\tau _{y_s}}} \left[ F_r(X_{U^i},X_{U^s}) \right] \mathbbm {1}_{\{U^i\wedge U^s> \tau _{(y_i,y_s)}\}} \mathbbm {1}_{\{ \tau _{(y_i,y_s)} = \tau _{y_s}\}}\right] \nonumber \\&\qquad +\mathbb {E}_x\left[ e^{-r\tau _{y_i}} \mathbb {E}_{X_{\tau _{y_i}}} \left[ F_r(X_{U^i},X_{U^s}) \right] \mathbbm {1}_{\{U^i\wedge U^s> \tau _{(y_i,y_s)}\}} \mathbbm {1}_{\{ \tau _{(y_i,y_s)} = \tau _{y_i}\}}\right] \nonumber \\&\quad =\mathbb {E}_x\left[ e^{-r\tau _{y_s}} h_r(X_{\tau _{y_s}}) \mathbbm {1}_{\{U^i\wedge U^s > \tau _{(y_i,y_s)}\}} \mathbbm {1}_{\{ \tau _{(y_i,y_s)} = \tau _{y_s}\}}\right] \end{aligned}$$
(3.6)
$$\begin{aligned}&\qquad +\mathbb {E}_x\left[ e^{-r\tau _{y_i}} h_r(X_{\tau _{y_i}}) \mathbbm {1}_{\{U^i\wedge U^s > \tau _{(y_i,y_s)}\}} \mathbbm {1}_{\{ \tau _{(y_i,y_s)} = \tau _{y_i}\}}\right] . \end{aligned}$$
(3.7)

Consider first the expected value (3.6). On the event \(\{ U^i\wedge U^s > \tau _{(y_i,y_s)} = \tau _{y_s}\} \), we have, by the strong Markov property and Lemma 2.1 of [26], the following:

$$\begin{aligned} e^{-r\tau _{y_s}} h_r(X_{\tau _{y_s}})&= e^{-r\tau _{y_s}} h_r(X_{\tau _{y_s}}) \left( \mathbbm {1}_ {\{ U^s< \eta _{y_s} \}} + \mathbbm {1}_{\{ \eta _{y_s}< U^s \}} \right) \\&=e^{-r\tau _{y_s}}\left( \mathbb {E}_{X_{\tau _{y_s}}}\left[ e^{-rU^s} h_r(X_{U^s}) \right] \mathbbm {1}_{\{ U^s< \eta _{y_s}\}} + h_r(X_{\tau _{y_s}}) \mathbbm {1}_{\{ \eta _{y_s} < U^s \}} \right) , \end{aligned}$$

where \(\eta _{y_s}\) is the first return time to \(y_s\). Since \(h_r(X_{\tau _{y_s}}) = h_r(y_s) = G(y_s)\), we find that

$$\begin{aligned}&\mathbb {E}_x\left[ e^{-r\tau _{y_s}} h_r(X_{\tau _{y_s}}) \mathbbm {1}_{\{U^i\wedge U^s> \tau _ {(y_i,y_s)}\}} \mathbbm {1}_{\{ \tau _{(y_i,y_s)} = \tau _{y_s}\}}\right] \\&\quad = \mathbb {E}_x\left[ e^{-r\tau _{y_s}} \mathbb {E}_{X_{\tau _{y_s}}}\left[ e^{-rU^s} h_r(X_{U^s}) \right] \mathbbm {1}_{\{ U^s< \eta _{y_s}\}} \mathbbm {1}_{\{U^i\wedge U^s> \tau _{(y_i,y_s)}\}} \mathbbm {1}_{\{ \tau _{(y_i,y_s)} = \tau _{y_s}\}}\right] \\&\qquad +\mathbb {E}_x\left[ e^{-r\tau _{y_s}} G(y_s) \mathbbm {1}_{\{ \eta _{y_s} < U^s \}} \mathbbm {1}_{\{U^i\wedge U^s > \tau _{(y_i,y_s)}\}} \mathbbm {1}_{\{ \tau _{(y_i,y_s)} = \tau _{y_s}\}}\right] . \end{aligned}$$

For the equality (3.3) to hold, the equation

$$\begin{aligned}&\mathbb {E}_x\left[ e^{-r\tau _{y_s}} \mathbb {E}_{X_{\tau _{y_s}}}\left[ e^{-rU^s} h_r(X_{U^s}) \right] \mathbbm {1}_{\{ U^s< \eta _{y_s}\}} \mathbbm {1}_{\{U^i\wedge U^s> \tau _{(y_i,y_s)}\}} \mathbbm {1}_{\{ \tau _{(y_i,y_s)} = \tau _{y_s}\}}\right] \nonumber \\&\qquad +\mathbb {E}_x\left[ e^{-r\tau _{y_s}} G(y_s) \mathbbm {1}_{\{ \eta _{y_s}< U^s \}} \mathbbm {1}_{\{U^i\wedge U^s> \tau _{(y_i,y_s)}\}} \mathbbm {1}_{\{ \tau _{(y_i,y_s)} = \tau _{y_s}\}}\right] \nonumber \\&\quad =\mathbb {E}_x\left[ e^{-rU^i} G_0^i(X_{U^i})\mathbbm {1}_{\{U^i < U^s\}} \mathbbm {1}_{\{U^i\wedge U^s > \tau _{(y_i,y_s)}\}} \mathbbm {1}_{\{ \tau _{(y_i,y_s)} = \tau _{y_s} \}} \right] \end{aligned}$$
(3.8)
$$\begin{aligned}&\qquad + \mathbb {E}_x\left[ e^{-rU^s} G_0^s(X_{U^s})\mathbbm {1}_{\{U^s< U^i\}} \mathbbm {1}_{\{U^i\wedge U^s> \tau _{(y_i,y_s)}\}} \mathbbm {1}_{\{ \tau _{(y_i,y_s)} = \tau _{y_s}\}} \right] \nonumber \\&\quad =\mathbb {E}_x\left[ e^{-r\tau _{y_s}} \mathbb {E}_{X_{\tau _{y_s}}}\left[ e^{-rU^s} g_s(X_{U^s}) \right] \mathbbm {1}_{\{ U^s< \eta _{y_s}\}} \mathbbm {1}_{\{U^i\wedge U^s> \tau _{(y_i,y_s)}\}} \mathbbm {1}_ {\{ \tau _{(y_i,y_s)} = \tau _{y_s}\}}\right] \nonumber \\&\qquad +\mathbb {E}_x\left[ e^{-r\tau _{y_s}} G(y_s) \mathbbm {1}_{\{ \eta _{y_s} < U^s \}} \mathbbm {1}_{\{U^i\wedge U^s > \tau _{(y_i,y_s)}\}} \mathbbm {1}_{\{ \tau _{(y_i,y_s)} = \tau _{y_s}\}}\right] \end{aligned}$$
(3.9)

should hold; here, the last equation is obtained by breaking down the expected values (3.9) and (3.8) similarly to (3.4) and (3.5). This holds if

$$\begin{aligned} \mathbb {E}_{X_{\tau _{y_s}}}\left[ e^{-rU^s}h_r(X_{U^s}) \mathbbm {1}_{\{ X_{U^s}> y_s \}} \right] = \mathbb {E}_{X_{\tau _{y_s}}}\left[ e^{-rU^s}g_s(X_{U^s}) \mathbbm {1}_{\{ X_{U^s} > y_s \}} \right] . \end{aligned}$$

The necessary condition

$$\begin{aligned} \mathbb {E}_{X_{\tau _{y_i}}}\left[ e^{-rU^i}h_r(X_{U^i}) \mathbbm {1}_{\{ X_{U^i}< y_i \}} \right] = \mathbb {E}_{X_{\tau _{y_i}}}\left[ e^{-rU^i}g_i(X_{U^i}) \mathbbm {1}_{\{ X_{U^i} < y_i \}} \right] . \end{aligned}$$

is obtained by analyzing the expected value (3.7) similarly.

To conclude the claimed integral representation, we find by applying the representation (2.1) to the function \(x \mapsto h_r(x) \mathbbm {1}_{\{ x \geqslant y_s\}}\), that

$$\begin{aligned} \mathbb {E}_{X_{\tau _{y_s}}}\left[ e^{-rU^s}h_r(X_{U^s}) \mathbbm {1}_{\{ X_{U^s} > y_s \}} \right] = B_{r+\lambda _s}^{-1} \psi _{r+\lambda _s}(y_s) \int _{y_s}^\infty \varphi _{r+\lambda _s}(y)h_r(y)m'(y)dy. \end{aligned}$$

By treating the other expectations similarly, we obtain the integral representations. \(\square \)

We write the necessary conditions given by Lemma 3.2 in a more convenient form. First, write the harmonic function \(h_r\) as \(h_r(x) = C \psi _r(x) + D \varphi _r(x)\). Now, since \(h_r(y_i)=g_i(y_i)\) and \(h_r(y_s)=g_s(y_s)\), we find by solving a pair of linear equations, that

$$\begin{aligned} h_r(x) = \frac{\varphi _r(y_i)g_s(y_s) - \varphi _r(y_s)g_i(y_i)}{\varphi _r(y_i)\psi _r(y_s) - \varphi _r(y_s)\psi _r(y_i)} \psi _r(x) + \frac{g_i(y_i)\psi _r(y_s) - g_s(y_s)\psi _r(y_i)}{\varphi _r(y_i)\psi _r(y_s) - \varphi _r(y_s)\psi _r(y_i)} \varphi _r(x).\nonumber \\ \end{aligned}$$
(3.10)

By substituting (3.10) into the conditions of Lemma 3.2 and reorganizing the terms, we obtain

$$\begin{aligned} \begin{aligned} g_s(y_s)&= \frac{H_i(\varphi _r,g_i;y_i)}{H_i(\varphi _r,\psi _r;y_i)}\psi _r(y_s) + \frac{H_i(g_i,\psi _r;y_i)}{H_i(\varphi _r,\psi _r;y_i)}\varphi _r(y_s), \\ g_i(y_i)&= \frac{H_s(g_s,\varphi _r;y_s)}{H_s(\psi _r,\varphi _r,y_s)} \psi _r(y_i) + \frac{H_s(\psi _r,g_s;y_s)}{H_s(\psi _r,\varphi _r;y_s)} \varphi _r(y_i). \end{aligned} \end{aligned}$$
(3.11)

We can simplify the denominators of the coefficient terms. Indeed, since \((\mathcal {A}-(r+\lambda _i))\xi _{r}(x) = -\lambda _i \xi _{r}(x)\) for an r-harmonic function \(\xi _r\), we find, using Lemma 2.2, that

$$\begin{aligned} \frac{\xi _r'(x) \psi _{r+\lambda _i}(x)}{S'(x)} - \frac{\xi _r(x)\psi _{r+\lambda _i}'(x)}{S'(x)} = -\lambda _i(\Psi _i\xi _r)(x). \end{aligned}$$

Thus,

$$\begin{aligned} H_i(\varphi _r,\psi _r;x) = \bigg ( \frac{\varphi _r'(x) \psi _{r}(x)}{S'(x)} - \frac{\psi _r'(x) \varphi _{r}(x)}{S'(x)} \bigg ) = -B_r. \end{aligned}$$

By treating the term \(\psi _r(y_s)(\Phi _s\varphi _r)(y_s) - \varphi _r(y_s)(\Phi _s\psi _r)(y_s)\) similarly, we can rewrite the necessary conditions (3.11) as

$$\begin{aligned} \begin{aligned} H_i(g_i,\psi _r;y_i)&= H_s(\psi _r,g_s;y_s), \\ H_i(\varphi _r,g_i;y_i)&= H_s(g_s,\varphi _r;y_s). \end{aligned} \end{aligned}$$
(3.12)
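
To illustrate how the pair (3.12) can be solved in practice, the following is a minimal numerical sketch for geometric Brownian motion with linear payoffs \(g_i(x)=x-c_i\) and \(g_s(x)=x-c_s\). The parameter values are purely illustrative (hypothetical, not taken from this paper), and the root-finding brackets were chosen by inspection for these values (in general, one can bracket using the states \(\tilde{x}_i\) and \(\tilde{x}_s\) of Remark 2.3).

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Hypothetical illustration: geometric Brownian motion with generator
# A = (1/2)*sigma^2*x^2*d^2/dx^2 + mu*x*d/dx and linear payoffs g_i(x) = x - c_i,
# g_s(x) = x - c_s. All parameter values below are illustrative only.
mu, sigma, r = 0.02, 0.3, 0.05
lam_i, lam_s = 1.0, 2.0
c_i, c_s = 1.0, 2.0

def beta(q, sign):
    # Roots of (1/2)*sigma^2*b*(b - 1) + mu*b - q = 0, so that psi_q(x) = x**beta(q, +1)
    # and varphi_q(x) = x**beta(q, -1) are the minimal q-excessive functions of GBM.
    a, b0 = 0.5 * sigma**2, mu - 0.5 * sigma**2
    return (-b0 + sign * np.sqrt(b0**2 + 4.0 * a * q)) / (2.0 * a)

def m_prime(x):
    # Speed density of GBM: m'(x) = 2/(sigma^2 x^2) * x**(2*mu/sigma^2).
    return 2.0 / (sigma**2 * x**2) * x**(2.0 * mu / sigma**2)

psi_r  = lambda x: x**beta(r, +1.0)
phi_r  = lambda x: x**beta(r, -1.0)
psi_qi = lambda x: x**beta(r + lam_i, +1.0)   # psi_{r+lambda_i}
phi_qs = lambda x: x**beta(r + lam_s, -1.0)   # varphi_{r+lambda_s}
g_i    = lambda x: x - c_i
g_s    = lambda x: x - c_s

def Psi_i(f, x):
    # (Psi_i f)(x) = int_0^x psi_{r+lambda_i}(z) f(z) m'(z) dz, cf. (2.3).
    return quad(lambda z: psi_qi(z) * f(z) * m_prime(z), 0.0, x)[0]

def Phi_s(f, x):
    # (Phi_s f)(x) = int_x^infty varphi_{r+lambda_s}(z) f(z) m'(z) dz, cf. (2.3).
    return quad(lambda z: phi_qs(z) * f(z) * m_prime(z), x, np.inf)[0]

def H_i(g, f, x):
    # H_i(g, f; x) of (2.4).
    return lam_i * (g(x) * Psi_i(f, x) - f(x) * Psi_i(g, x)) / psi_qi(x)

def H_s(g, f, x):
    # H_s(g, f; x) of (2.4).
    return lam_s * (g(x) * Phi_s(f, x) - f(x) * Phi_s(g, x)) / phi_qs(x)

def y_s_of(y_i):
    # First equation of (3.12): solve H_s(psi_r, g_s; y_s) = H_i(g_i, psi_r; y_i).
    # The bracket starts just above tilde{x}_s for these parameter values.
    target = H_i(g_i, psi_r, y_i)
    return brentq(lambda y: H_s(psi_r, g_s, y) - target, 2.9, 30.0)

def residual(y_i):
    # Second equation of (3.12), with y_s eliminated through the first one.
    return H_i(phi_r, g_i, y_i) - H_s(g_s, phi_r, y_s_of(y_i))

# The bracket lies inside (0, tilde{x}_i] for these parameter values.
y_i_star = brentq(residual, 0.1, 2.0)
y_s_star = y_s_of(y_i_star)
print(f"candidate thresholds: y_i = {y_i_star:.4f}, y_s = {y_s_star:.4f}")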

3.2 On Uniqueness of the Solution

The next proposition is our main result on the uniqueness of the solution to the pair of necessary conditions given in Lemma 3.2. To ease the presentation in what follows, we introduce the shorter notation

$$\begin{aligned}&H_{i, \varphi }(x) = H_i(\varphi _r,g_i; x), \quad H_{i, \psi }(x) = H_i(g_i,\psi _r; x), \\&H_{s,\psi }(x) = H_s(\psi _r,g_s;x), \quad H_{s,\varphi }(x) = H_s(g_s,\varphi _r;x). \end{aligned}$$

Proposition 3.3

Let Assumption 2.1 hold and assume that a solution \((y_i, y_s)\) to the pair of equations of Lemma 3.2 exists. Then the solution is unique.

Proof

Define the function \(K: (0, \tilde{x}_i] \rightarrow (0, \tilde{x}_i]\) by

$$\begin{aligned} K(x)=\check{H}_{i, \varphi }^{-1}(\hat{H}_{s, \varphi }(\hat{H}_{s, \psi }^{-1}(\check{H}_{i, \psi }(x)))), \end{aligned}$$
(3.13)

where \(\hat{\cdot }\) and \(\check{\cdot }\) denote restrictions to the domains \([\tilde{x}_s, \infty )\) and \((0, \tilde{x}_i]\), respectively. We notice that if a solution \((y_i,y_s)\) to the pair exists, then \(y_i\) must be a fixed point of K. Because the functions \(H_i\) and \(H_s\) are monotonic on their respective domains, we get

$$\begin{aligned} K'(x) =&\check{H}_{i, \varphi }^{-1'}(\hat{H}_{s, \varphi }(\hat{H}_{s, \psi }^{-1}(\check{H}_{i, \psi }(x)))) \cdot \hat{H}_{s, \varphi }'(\hat{H}_{s, \psi }^{-1}(\check{H}_{i, \psi }(x))) \\ \cdot&\hat{H}_{s, \psi }^{-1'}(\check{H}_{i, \psi }(x)) \cdot \check{H}_{i, \psi }'(x) > 0, \end{aligned}$$

and hence K is increasing in its domain \((0,\tilde{x}_i]\). Now using the fixed point property we have

$$\begin{aligned} K'(y_i)&= \check{H}_{i, \varphi }^{-1'}(\check{H}_{i, \varphi }(y_i))\cdot \hat{H}_{s, \varphi }'(y_s) \cdot \hat{H}_{s, \psi }^{-1'}(\hat{H}_{s, \psi }(y_s)) \cdot \check{H}_{i, \psi }'(y_i) \\&= \frac{\hat{H}_{s, \varphi }'(y_s)}{\hat{H}_{s, \psi }'(y_s)} \frac{\check{H}_{i, \psi }'(y_i)}{\check{H}_{i, \varphi }'(y_i)} = \frac{(\Phi _s \varphi _r)(y_s)}{(\Phi _s \psi _r)(y_s)} \frac{(\Psi _i \psi _r)(y_i)}{(\Psi _i \varphi _r)(y_i)}< \frac{\varphi _r(y_s)}{\varphi _r(y_i)} \frac{\psi _r(y_i)}{\psi _r(y_s)} < 1. \end{aligned}$$

Here, the first inequality follows since the ratio \(\frac{\varphi _r}{\psi _r}\) is strictly decreasing, and the second since \(\varphi _r\) is decreasing, \(\psi _r\) is increasing, and \(y_i < y_s\). This means that whenever K intersects the diagonal of \(\mathbb {R}_+\), the intersection is from above. Hence, the uniqueness follows from continuity. \(\square \)

3.3 On Existence of the Solution

We proceed by analysing the solvability of the pair (3.12). By point (3) of Assumption 2.1 (see Remark 2.3) and Lemma 2.5, we find that

$$\begin{aligned} \begin{aligned} H_i'(\varphi _r,g_i;x)&= \frac{-\lambda _i S'(x)}{\psi _{r+\lambda _i}^2(x)}(\Psi _i\varphi _r)(x)(\Psi _i(\mathcal {A}-r)g_i)(x)< 0, \ x< \tilde{x}_i, \\ H_i'(g_i,\psi _r;x)&= \frac{\lambda _i S'(x)}{\psi _{r+\lambda _i}^2(x)}(\Psi _i\psi _r)(x)(\Psi _i(\mathcal {A}-r)g_i)(x) > 0, \ x < \tilde{x}_i. \end{aligned} \end{aligned}$$
(3.14)

We find similarly that

$$\begin{aligned} \begin{aligned} H_s'(g_s,\varphi _r;x)&> 0, \ x> \tilde{x}_s, \\ H_s'(\psi _r,g_s;x)&< 0, \ x > \tilde{x}_s. \end{aligned} \end{aligned}$$
(3.15)

Next, we study the limiting properties of the functions appearing in (3.12). Regarding the function \(H_i(\varphi _r,g_i;\cdot )\), by adding and subtracting the term \((\Psi _i(\mathcal {A}-r)g_i)\) and using Lemma 2.2, we obtain

$$\begin{aligned} \lambda _i(\Psi _i g_i)(x) = (\Psi _i(\mathcal {A}-r)g_i)(x)-\left( \frac{g_i'(x)}{S'(x)}\psi _{r+\lambda _i}(x)-\frac{\psi _{r+\lambda _i}'(x)}{S'(x)}g_i(x) \right) . \end{aligned}$$

By a similar computation, we find also that

$$\begin{aligned} \lambda _i(\Psi _i\varphi _r)(x) = \frac{\psi _{r+\lambda _i}'(x)}{S'(x)}\varphi _r(x)-\frac{\varphi _r'(x)}{S'(x)}\psi _{r+\lambda _i}(x). \end{aligned}$$

By substituting these expressions into \(H_i(\varphi _r,g_i;\cdot )\), simplifying, and using Lemma 2.2 again, we observe that

$$\begin{aligned} H_i(\varphi _r,g_i;x)&= \frac{\varphi _r(x)}{\psi _{r+\lambda _i}(x)}(\Psi _i(\mathcal {A}-r)g_i)(x) \nonumber \\&+ \left( \frac{\varphi _r'(x)}{S'(x)}g_i(x)-\frac{g_i'(x)}{S'(x)}\varphi _r(x) \right) \nonumber \\&= \frac{\varphi _r(x)}{\psi _{r+\lambda _i}(x)}(\Psi _i(\mathcal {A}-r)g_i)(x) + (\Phi (\mathcal {A}-r)g_i)(x). \end{aligned}$$
(3.16)

Assume that \(x < z_i\). Then the mean value theorem yields

$$\begin{aligned}&H_i(\varphi _r,g_i;x) \\&\quad = \frac{\varphi _r(x)}{\psi _{r+\lambda _i}(x)}\frac{\psi _{r+\lambda _i} (\xi _x)}{\psi _{r}(\xi _x)}(\Psi (\mathcal {A}-r)g_i)(x) + (\Phi (\mathcal {A}-r)g_i)(x) \\&\quad = \frac{1}{\psi _{r}(\xi _x)}\bigg (\psi _{r}(\xi _x)(\Phi (\mathcal {A}-r)g_i)(x) + \frac{\psi _{r+\lambda _i}(\xi _x)}{\psi _{r+\lambda _i}(x)}\varphi _r(x)(\Psi (\mathcal {A}-r)g_i)(x) \bigg ), \end{aligned}$$

where \(\xi _x \in (0,x)\). By continuity, we find by passing to the limit \(x \rightarrow 0+\) that

$$\begin{aligned} H_i(\varphi _r,g_i;0+) = B_r \frac{(R_r(\mathcal {A}-r)g_i)}{\psi _r}(0+) = -B_r \frac{g_i}{\psi _r}(0+). \end{aligned}$$

By a similar analysis, we find that the limit

$$\begin{aligned} H_s(\psi _r,g_s;\infty ) = - B_r \frac{g_s}{\varphi _r}(\infty ). \end{aligned}$$

Consider next the function \(H_i(g_i,\psi _r;x)\). Since

$$\begin{aligned} \lambda _i(\Psi _i\psi _r)(x) = \frac{\psi _{r+\lambda _i}'(x)}{S'(x)}\psi _r(x)-\frac{\psi _r'(x)}{S'(x)}\psi _{r+\lambda _i}(x), \end{aligned}$$

we find, using Eq. (3.16) and Lemma 2.2, that

$$\begin{aligned} H_i(g_i,\psi _r;x) = (\Psi (\mathcal {A}-r)g_i)(x) - \frac{\psi _r(x)}{\psi _{r+\lambda _i}(x)}(\Psi _i(\mathcal {A}-r)g_i)(x). \end{aligned}$$

Assume that \(x < z_i\). Then the mean value theorem yields

$$\begin{aligned} \frac{\psi _r(x)}{\psi _{r+\lambda _i}(x)}(\Psi _i(\mathcal {A}-r)g_i)(x) = \underbrace{\frac{\frac{\psi _{r+\lambda _i}(\xi _x)}{\psi _{r+\lambda _i}(x)}}{\frac{\psi _r(\xi _x)}{\psi _r(x)}}}_{<1}(\Psi (\mathcal {A}-r)g_i)(x). \end{aligned}$$

Thus by continuity \(H_i(g_i,\psi _r;0+) = 0\). A similar analysis yields the limit \(H_s(g_s,\varphi _r;\infty )=0\).

Finally, by using Remark 2.3 and the facts that \(\varphi _r\) is r-harmonic and \(z_i<\tilde{x}_i\), we find, using Lemma 2.2, that

$$\begin{aligned} H_i(\varphi _r,g_i;\tilde{x}_i)&=\frac{g_i(\tilde{x}_i)(\Psi _i(\mathcal {A}-q_i)\varphi _r)(\tilde{x}_i)-\varphi _r(\tilde{x}_i)(\Psi _i(\mathcal {A}-q_i)g_i)(\tilde{x}_i)}{\psi _{r+\lambda _i}(\tilde{x}_i)} \\&= \frac{\varphi _r'(\tilde{x}_i)}{S'(\tilde{x}_i)}g_i(\tilde{x}_i) - \frac{g_i'(\tilde{x}_i)}{S'(\tilde{x}_i)}\varphi _r(\tilde{x}_i) \\&= \int _{\tilde{x}_i}^\infty \varphi _r(y)(\mathcal {A}-r)g_i(y)m'(y)dy < 0, \end{aligned}$$

where \(q_i = r+ \lambda _i\). By a similar analysis, we find that \(H_s(\psi _r,g_s;\tilde{x}_s)>0\). We summarize these findings:

$$\begin{aligned} {\left\{ \begin{array}{ll} H_i(\varphi _r,g_i;0+) = -B_r \frac{g_i}{\psi _r}(0+), \\ H_i(\varphi _r,g_i;\tilde{x}_i)<0, \\ H_i(g_i,\psi _r;0+) = 0, \end{array}\right. } {\left\{ \begin{array}{ll} H_s(\psi _r,g_s;\infty ) = - B_r \frac{g_s}{\varphi _r}(\infty ), \\ H_s(\psi _r,g_s;\tilde{x}_s)>0, \\ H_s(g_s,\varphi _r;\infty )=0. \end{array}\right. } \end{aligned}$$
(3.17)

Unfortunately, Assumption 2.1 is not enough to guarantee the existence of a solution to the pair of equations in Lemma 3.2, and more analysis is needed. The next proposition is our main result on the solvability of the necessary conditions.

Proposition 3.4

Under Assumptions 2.1 and 2.4, the pair of necessary conditions given in Lemma 3.2 has a unique solution.

Proof

Define the function \(K: (0, \tilde{x}_i] \rightarrow (0, \tilde{x}_i]\) as in (3.13). We first observe that the limiting properties (3.17) and the monotonicity properties (3.14) and (3.15), together with the conditions

$$\begin{aligned} \begin{aligned} H_i(g_i, \psi _r ; \tilde{x}_i)&< H_s(\psi _r, g_s ; \tilde{x}_s), \\ H_i(\varphi _r, g_i ; \tilde{x}_i)&< H_s(g_s, \varphi _r ; \tilde{x}_s) \end{aligned} \end{aligned}$$
(3.18)

guarantee that the function K is well-defined. Using the representation (3.16), we see that

$$\begin{aligned} H_i(g_i, \psi _r ; \tilde{x}_i)&= (\Psi (\mathcal {A}-r)g_i)(\tilde{x}_i), \\ H_s(\psi _r, g_s ; \tilde{x}_s)&= (\Psi (\mathcal {A}-r)g_s)(\tilde{x}_s). \end{aligned}$$

After handling the other inequality similarly, we see that the condition (3.18) is equivalent to

$$\begin{aligned} \begin{aligned}&(\Psi (\mathcal {A}-r)g_i)(\tilde{x}_i)< (\Psi (\mathcal {A}-r)g_s)(\tilde{x}_s), \\&(\Phi (\mathcal {A}-r)g_i)(\tilde{x}_i) < (\Phi (\mathcal {A}-r)g_s)(\tilde{x}_s). \end{aligned} \end{aligned}$$
(3.19)

Since \((\Psi _i (\mathcal {A}-r)g_i)(x)>0\) for \(x \leqslant z_i\) and \((\Phi _s (\mathcal {A}-r)g_s)(x)<0\) for \(x \geqslant z_s\) (see Remark 2.3), we have \(\tilde{x}_i > z_i\) and \(\tilde{x}_s < z_s\). Combined with point (2) of Assumption 2.4, this yields \(z_i< \tilde{x}_{i}< \tilde{x}_s < z_s\), and consequently, by point (1) of Assumption 2.4,

$$\begin{aligned} H_i(g_i, \psi _r ; \tilde{x}_i) - H_s(\psi _r, g_s ; \tilde{x}_s)&= \int _0^{\tilde{x}_i} \big [ (\mathcal {A}-r)g_i(z) - (\mathcal {A}-r)g_s(z) \big ]\psi _r(z)m'(z)dz \\&- \int _{\tilde{x}_i}^{\tilde{x}_s} (\mathcal {A}-r)g_s(z)\psi _r(z)m'(z)dz < 0. \end{aligned}$$

The other inequality in (3.19) is proved similarly.

It follows from the above calculations that the function K is well-defined, and from the proof of Proposition 3.3 that it is increasing. Furthermore, K maps the interval \((0,\tilde{x}_i]\) into an open subset of itself. Thus, K must have a fixed point, which we denote by \(y_i\). Then the pair \((y_i, y_s)\), where \(y_s = H_{s, \psi }^{-1}(H_{i,\psi }(y_i))\), is a solution to the equations given in Lemma 3.2. The uniqueness follows from Proposition 3.3. \(\square \)

Points (1) and (3) of Assumption 2.4 are satisfied in most situations and are easily verified. However, point (2) requires more analysis in most cases. Fortunately, it is easy to check, at least numerically, in applications, because the states \(\tilde{x}_i, \tilde{x}_s\) are known to be the unique zeroes of the functions \((\Psi _i (\mathcal {A}-r)g_i)(x)\) and \((\Phi _s (\mathcal {A}-r)g_s)(x)\), respectively.
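
As an illustration, the following minimal sketch locates \(\tilde{x}_i\) and \(\tilde{x}_s\) for geometric Brownian motion with linear payoffs \(g_k(x)=x-c_k\) and checks point (2) of Assumption 2.4 numerically; the parameter values are hypothetical and serve only as an example.

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Hypothetical GBM parameters and linear payoffs g_i(x) = x - c_i, g_s(x) = x - c_s.
mu, sigma, r = 0.02, 0.3, 0.05
lam_i, lam_s = 1.0, 2.0
c_i, c_s = 1.0, 2.0

def beta(q, sign):
    # Roots of (1/2)*sigma^2*b*(b - 1) + mu*b - q = 0 (so psi_q(x) = x**beta(q, +1)).
    a, b0 = 0.5 * sigma**2, mu - 0.5 * sigma**2
    return (-b0 + sign * np.sqrt(b0**2 + 4.0 * a * q)) / (2.0 * a)

def m_prime(x):
    # Speed density of GBM: m'(x) = 2/(sigma^2 x^2) * x**(2*mu/sigma^2).
    return 2.0 / (sigma**2 * x**2) * x**(2.0 * mu / sigma**2)

def Ag_minus_r(x, c):
    # (A - r)g(x) for the linear payoff g(x) = x - c: (mu - r)x + r*c.
    return (mu - r) * x + r * c

def Psi_i_Ag(x):
    # (Psi_i (A - r)g_i)(x) = int_0^x psi_{r+lambda_i}(z) (A - r)g_i(z) m'(z) dz.
    b = beta(r + lam_i, +1.0)
    return quad(lambda z: z**b * Ag_minus_r(z, c_i) * m_prime(z), 0.0, x)[0]

def Phi_s_Ag(x):
    # (Phi_s (A - r)g_s)(x) = int_x^infty varphi_{r+lambda_s}(z) (A - r)g_s(z) m'(z) dz.
    b = beta(r + lam_s, -1.0)
    return quad(lambda z: z**b * Ag_minus_r(z, c_s) * m_prime(z), x, np.inf)[0]

# Sign-change points of (A - r)g_k and the roots tilde{x}_i, tilde{x}_s of Remark 2.3.
z_i = r * c_i / (r - mu)
z_s = r * c_s / (r - mu)
x_tilde_i = brentq(Psi_i_Ag, z_i, 50.0 * z_i)    # the root lies above z_i
x_tilde_s = brentq(Phi_s_Ag, 0.1 * z_s, z_s)     # the root lies below z_s

print(f"z_i = {z_i:.4f}, z_s = {z_s:.4f}")
print(f"tilde x_i = {x_tilde_i:.4f}, tilde x_s = {x_tilde_s:.4f}")
print("Assumption 2.4(2) (tilde x_i < tilde x_s):", x_tilde_i < x_tilde_s)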

3.4 On Non-differentiable Payoffs

Although our analysis does not cover non-differentiable payoff functions, its conclusions can be extended fairly easily to some important cases. As an example, assume that the payoff functions are \(g_i(x) = (x-c_i)^+\) and \(g_s(x) = (x-c_s)^+\), where \(c_i < c_s\) and let X be a diffusion satisfying the basic assumptions of Sect. 2. This payoff structure can be viewed as a callable option, see, e.g., [19]. Recall the optimality conditions (3.12):

$$\begin{aligned} \begin{aligned} H_i(g_i,\psi _r;y_i)&= H_s(\psi _r,g_s;y_s), \\ H_i(\varphi _r,g_i;y_i)&= H_s(g_s,\varphi _r;y_s). \end{aligned} \end{aligned}$$

We observe that the left hand side of both of these equations is zero on \((0,c_i)\).

Assume first that the functions on the right hand side of the conditions (3.12) have a common zero \(y_0\). Then the following must hold

$$\begin{aligned} \begin{aligned} g_s(y_0)\int _{y_0}^\infty \varphi _{r+\lambda _s}(z)\psi _r(z)m'(z)dz&= \psi _r(y_0)\int _{y_0}^\infty \varphi _{r+\lambda _s}(z)g_s(z)m'(z)dz \\ g_s(y_0)\int _{y_0}^\infty \varphi _{r+\lambda _s}(z)\varphi _r(z)m'(z)dz&= \varphi _r(y_0)\int _{y_0}^\infty \varphi _{r+\lambda _s}(z)g_s(z)m'(z)dz \end{aligned} \end{aligned}$$
(3.20)

First, we observe that if \(g_s(y_0)=0\), then

$$\begin{aligned} \psi _r(y_0)\int _{y_0}^\infty \varphi _{r+\lambda _s}(z)g_s(z)m'(z)dz = \varphi _r(y_0)\int _{y_0}^\infty \varphi _{r+\lambda _s}(z)g_s(z)m'(z)dz = 0, \end{aligned}$$

which clearly cannot hold. Assume now that \(g_s(y_0)>0\). Then, dividing the conditions (3.20) side by side and performing some further manipulations, we obtain

$$\begin{aligned} \int _{y_0}^\infty \varphi _{r+\lambda _s}(z)\varphi _r(z)m'(z)dz&= \frac{\varphi _r(y_0)}{\psi _r(y_0)} \int _{y_0}^\infty \varphi _{r+\lambda _s}(z)\psi _r(z)m'(z)dz \\&\geqslant \int _{y_0}^\infty \varphi _{r+\lambda _s}(z)\frac{\varphi _r(z)}{\psi _r(z)}\psi _r(z)m'(z)dz \\&= \int _{y_0}^\infty \varphi _{r+\lambda _s}(z)\varphi _r(z)m'(z)dz; \end{aligned}$$

here, we have used the fact that the function \(\frac{\varphi _r}{\psi _r}\) is decreasing. Since all functions in these expressions are positive, we conclude that the ratio \(\frac{\varphi _r}{\psi _r}\) must in fact be constant over the interval \((y_0,\infty )\), which is clearly not true. Thus, the functions on the right-hand side of the conditions (3.12) cannot have a common zero. Consequently, \(y_i\) cannot lie in the interval \((0,c_i)\); otherwise the left-hand sides of the conditions (3.12) would both vanish and \(y_s\) would be a common zero of the right-hand sides. Since \(y_s > y_i\), neither threshold lies in \((0,c_i)\), and we can restrict the analysis of the optimality conditions to the interval \((c_i,\infty )\). It is straightforward to see that in this case the functions \(H_i\) behave as in our main result. The functions \(H_s\) also behave similarly, but the turning point is at \(c_s\). Thus, our main result can be applied to solve the problem after locating the points \(\tilde{x}_i\) and \(c_s\).

4 The Solution: Sufficient Conditions

The purpose of this section is to prove the following theorem. This is our main result on the solution of the considered stopping game.

Theorem 4.1

Let Assumption 2.1 hold and assume that the thresholds \(y_i\) and \(y_s\) are the unique solution to

$$\begin{aligned} \begin{aligned} \int _0^{y_i} \psi _{r+\lambda _i}(z) g_i(z) m'(z)dz&= \int _0^{y_i} \psi _{r+\lambda _i}(z) h_r(z) m'(z)dz, \\ \int _{y_s}^\infty \varphi _{r+\lambda _s}(z) g_s(z) m'(z)dz&= \int _{y_s}^\infty \varphi _{r+\lambda _s}(z) h_r(z) m'(z)dz, \end{aligned} \end{aligned}$$
(4.1)

where

$$\begin{aligned} h_r(x) = \frac{\varphi _r(y_i)g_s(y_s) - \varphi _r(y_s)g_i(y_i)}{\varphi _r(y_i)\psi _r(y_s) - \varphi _r(y_s)\psi _r(y_i)} \psi _r(x) + \frac{g_i(y_i)\psi _r(y_s) - g_s(y_s)\psi _r(y_i)}{\varphi _r(y_i)\psi _r(y_s) - \varphi _r(y_s)\psi _r(y_i)} \varphi _r(x). \end{aligned}$$

Then the value function (2.2) reads as

$$\begin{aligned} V(x) = {\left\{ \begin{array}{ll} \lambda _i(R_{r+\lambda _i}g_i)(x) + \frac{g_i(y_i) - \lambda _i(R_{r+\lambda _i}g_i)(y_i)}{\psi _{r+\lambda _i}(y_i)}\psi _{r+\lambda _i}(x), &{} x < y_i, \\ h_r(x), &{} x \in (y_i,y_s), \\ \lambda _s(R_{r+\lambda _s}g_s)(x) + \frac{g_s(y_s) - \lambda _s(R_{r+\lambda _s}g_s)(y_s)}{\varphi _{r+\lambda _s}(y_s)}\varphi _{r+\lambda _s}(x), &{} x > y_s. \end{array}\right. } \end{aligned}$$

Moreover, the game has a Nash equilibrium constituted by the stopping rules

$$\begin{aligned} \tau ^*&= \inf \{ T_{n^s}> 0 \ : \ X_{T_{n^s}} \geqslant y_s\} \\ \sigma ^*&= \inf \{ T_{n^i} > 0 \ : \ X_{T_{n^i}} \leqslant y_i\}. \end{aligned}$$

To prove this result, we first introduce some notation. Define the filtrations \(\left( \mathcal {G}_{n^i} \right) _{n^i \geqslant 0}\) and \(\left( \mathcal {G}_{n^s} \right) _{n^s \geqslant 0}\) as \(\mathcal {G}_{n^i} = \mathcal {F}_{T_{n^i}}\) and \(\mathcal {G}_{n^s} = \mathcal {F}_{T_{n^s}}\), respectively. Moreover, define the sets of admissible stopping times with respect to the \(\mathcal {G}\)-filtrations:

$$\begin{aligned} \mathcal {N}^s&= \left\{ N^s \geqslant 1 \ : \ N^s \text { is a }(\mathcal {G}_{n^s})-\text {stopping time} \right\} \\ \mathcal {N}^i&= \left\{ N^i \geqslant 1 \ : \ N^i \text { is a }(\mathcal {G}_{n^i})-\text {stopping time} \right\} . \end{aligned}$$

Then the function V defined in (2.2) can be written as

$$\begin{aligned} V(x) = \sup _{N^s \in \mathcal {N}^s} \inf _{N^i \in \mathcal {N}^i} \mathbb {E}_x\left[ e^{-r(T_{N^i}\wedge T_{N^s})} R(T_{N^s},T_{N^i}) \right] . \end{aligned}$$

We point out that the \(\mathcal {G}\)-filtrations were defined only for the case where immediate stopping is not allowed. This is because we do the verification only for the main problem and not for the auxiliary problems. However, similar techniques could be employed to do the verification also for functions \(G_0^i\) and \(G_0^s\) defined via (3.1) and (3.2), where the function G is given by expression for V in the claim of the main theorem. We omit the details.

The proof of the main theorem requires uniform integrability. This is provided by the following lemma.

Lemma 4.2

For any fixed stopping rule \(T_{N^i}\), the process

$$\begin{aligned} S^s(N^i,\cdot ): n^s \mapsto e^{-r(T_{N^i}\wedge T_{n^s})}\left( g_i(X_{T_{N^i}})\mathbbm {1}_{\{ T_{N^i}< T_{n^s} \}} + G_0^s(X_{T_{n^s}})\mathbbm {1}_{\{ T_{n^s} < T_{N^i} \}}\right) , \ n^s = 1,2,\dots , \end{aligned}$$

is a uniformly integrable supermartingale with respect to \((\mathcal {G}_{n^s})_{n^s\geqslant 0}\).

For any fixed stopping rule \(T_{N^s}\), the process

$$\begin{aligned} S^i(\cdot ,N^s): n^i \mapsto e^{-r(T_{n^i}\wedge T_{N^s})}\left( G_0^i(X_{T_{n^i}})\mathbbm {1}_{\{ T_{n^i}< T_{N^s} \}} + g_s(X_{T_{N^s}})\mathbbm {1}_{\{ T_{N^s} < T_{n^i} \}}\right) , \ n^i = 1,2,\dots , \end{aligned}$$

is a uniformly integrable submartingale with respect to \(\left( \mathcal {G}_{n^i} \right) _{n^i \geqslant 0}\).

Proof

We prove the claim for \(S^s\); the process \(S^i\) is treated similarly. Since \(G_0^s \geqslant G\), the strong Markov property yields

$$\begin{aligned}&e^{-r(T_{N^i}\wedge T_{n^s})}\left( g_i(X_{T_{N^i}})\mathbbm {1}_{\{ T_{N^i}< T_{n^s} \}} + G_0^s(X_{T_{n^s}})\mathbbm {1}_{\{ T_{n^s}< T_{N^i} \}}\right) \\&\quad \geqslant e^{-r(T_{N^i}\wedge T_{n^s})}\left( g_i(X_{T_{N^i}})\mathbbm {1}_{\{ T_{N^i}< T_{n^s} \}} + G(X_{T_{n^s}})\mathbbm {1}_{\{ T_{n^s}< T_{N^i} \}}\right) \\&\quad = e^{-r(T_{N^i}\wedge T_{n^s})} \mathbb {E}_{X_{T_{n^s}}}\left[ e^{-r U^s} G_0^s(X_{U^s}) \mathbbm {1}_{\{ T_{n^s} + U^s< T_{N^i} \}} \right] \mathbbm {1}_{\{ T_{n^s}< T_{N^i} \}} \\&\qquad +e^{-r(T_{N^i}\wedge T_{n^s})} \mathbb {E}_{X_{T_{n^s}}}\left[ e^{-r(T_{N^i}-T_{n^s})} g_i(X_{T_{N^i}}) \mathbbm {1}_{\{ T_{n^s}< T_{N^i}< T_{n^s} + U^s \}} \right] \mathbbm {1}_{\{ T_{n^s}< T_{N^i} \}} \\&\qquad + e^{-r(T_{N^i}\wedge T_{n^s})} g_i(X_{T_{N^i}})\mathbbm {1}_{\{T_{n^s} > T_{N^i}\}} \\&\quad = \mathbb {E}_x\left[ e^{-r(T_{N^i}\wedge T_{{n+1}^s})}\left( g_i(X_{T_{N^i}})\mathbbm {1}_{\{ T_{N^i}< T_{{n+1}^s} \}} + G_0^s(X_{T_{{n+1}^s}})\mathbbm {1}_{\{ T_{{n+1}^s} < T_{N^i} \}}\right) \,\middle |\, \mathcal {G}_{n^s}\right] . \end{aligned}$$

To prove uniform integrability, we show that

$$\begin{aligned}&\sup _{n^s}\mathbb {E}_x[S^s(N^i,n^s)]<\infty , \text { and} \end{aligned}$$
(4.2)
$$\begin{aligned}&\sup _{n^s}\mathbb {E}_x[S^s(N^i,n^s)\mathbbm {1}_{A}]\rightarrow 0, \text { when } \mathbb {P}_x(A)\rightarrow 0, \end{aligned}$$
(4.3)

for all stopping rules \(T_{N^i}\); these conditions are necessary and sufficient for uniform integrability. Fix \(T_{N^i}\) and \(n^s\). Define the measure

$$\begin{aligned} \mathbb {P}^*(A)&= \mathbb {E}_x\left[ L^s(N^i,n^s) \mathbbm {1}_A \right] , \ A \in \mathcal {F}, \text{ where }\\ L^s(N^i,n^s)&= e^{-r(T_{N^i}\wedge T_{n^s})} \frac{\psi _r(X_{T_{N^i}\wedge T_{n^s}})}{\psi _r(x)}. \end{aligned}$$

Let \(A\in \mathcal {F}\). Since \(\frac{G_0^s(x)}{\psi _r(x)} \leqslant \frac{g_s(\hat{x}^s)}{\psi _r(\hat{x}^s)}\) for all x, we find that

$$\begin{aligned} \begin{aligned} \frac{\mathbb {E}_x[S^s(N^i,n^s)\mathbbm {1}_{A}]}{\psi _r(x)}&= \mathbb {E}_x\left[ \frac{g_i(X_{T_{N^i}})}{\psi _{r}(X_{T_{N^i}})} \mathbbm {1}_{\{ T_{N^i}< T_{n^s} \}}\mathbbm {1}_{A} L^s(N^i,n^s) \right] \\&\quad + \mathbb {E}_x\left[ \frac{G_0^s(X_{T_{n^s}})}{\psi _{r}(X_{T_{n^s}})} \mathbbm {1}_{\{ T_{n^s}< T_{N^i} \}}\mathbbm {1}_{A} L^s(N^i,n^s) \right] \\&\leqslant \max \left( \frac{g_i(\hat{x}^i)}{\psi _r(\hat{x}^i)}, \frac{g_s(\hat{x}^s)}{\psi _r(\hat{x}^s)} \right) \mathbb {P}^*(A) < \infty . \end{aligned} \end{aligned}$$
(4.4)

The property (4.2) follows from (4.4) by setting \(A=\Omega \). We observe that \(\mathbb {P}^*(A) \rightarrow 0\) whenever \(\mathbb {P}_x(A) \rightarrow 0\). Thus the property (4.3) follows from (4.4). \(\square \)

Proof of Theorem 4.1

The task is to show that \(V=G\); the claimed Nash equilibrium then follows from the construction of G. To this end, recall the definition of the value function V from (2.2). Obviously, \(\underline{V}(x) \leqslant \overline{V}(x)\) for all x. To prove that \(V=G\), it is sufficient to show that \(\overline{V}(x) \leqslant G(x) \leqslant \underline{V}(x)\) for all x; we prove the first of these inequalities, and the second is proved similarly. Since \(g_s \leqslant G_0^s\), we find, using lemma 4.2 and optional sampling, that

$$\begin{aligned}&\mathbb {E}_x\left[ e^{-r(T_{N^s}\wedge T_{N^i})}\left( g_i(X_{T_{N^i}})\mathbbm {1}_{\{ T_{N^i}< T_{N^s} \}} + g_s(X_{T_{N^s}})\mathbbm {1}_{\{ T_{N^s}< T_{N^i} \}}\right) \right] \\&\quad \leqslant \mathbb {E}_x\left[ e^{-r(T_{N^s}\wedge T_{N^i})}\left( g_i(X_{T_{N^i}})\mathbbm {1}_{\{ T_{N^i}< T_{N^s} \}} + G_0^s(X_{T_{N^s}})\mathbbm {1}_{\{ T_{N^s}< T_{N^i} \}}\right) \right] \\&\quad \leqslant \mathbb {E}_x\left[ e^{-r(U^s\wedge T_{N^i})}\left( g_i(X_{T_{N^i}})\mathbbm {1}_{\{ T_{N^i}< U^s \}} + G_0^s(X_{U^s})\mathbbm {1}_{\{ U^s< T_{N^i} \}}\right) \right] \\&\quad = \lambda _s(R_{r+\lambda _s}G_0^s)(x)\mathbb {P}_x(U^s< T_{N^i}) + \mathbb {E}_x\left[ e^{-rT_{N^i}}g_i(X_{T_{N^i}}) \mathbbm {1}_{\{ T_{N^i} < U^s \}} \right] , \end{aligned}$$

for arbitrary stopping rules \(T_{N^i}\) and \(T_{N^s}\); here, \(U^s\) is an independent \({\text {Exp}}(\lambda _s)\)-distributed \((\mathcal {G}_{n^s})\)-stopping time, and the last equality follows from the independence of \(U^s\) and the stopping time \(T_{N^i}\). Since the right-hand side does not depend on \(T_{N^s}\), we obtain

$$\begin{aligned}&\sup _{T_{N^s}}\mathbb {E}_x\left[ e^{-r(T_{N^s}\wedge T_{N^i})}\left( g_i(X_{T_{N^i}})\mathbbm {1}_{\{ T_{N^i}< T_{N^s} \}} + g_s(X_{T_{N^s}})\mathbbm {1}_{\{ T_{N^s}< T_{N^i} \}}\right) \right] \\&\quad \leqslant \lambda _s(R_{r+\lambda _s}G_0^s)(x)\mathbb {P}_x(U^s< T_{N^i}) + \mathbb {E}_x\left[ e^{-rT_{N^i}}g_i(X_{T_{N^i}}) \mathbbm {1}_{\{ T_{N^i} < U^s \}} \right] , \end{aligned}$$

and, consequently,

$$\begin{aligned} \overline{V}(x) \leqslant \inf _{T_{N^i}} \left( \lambda _s(R_{r+\lambda _s}G_0^s)(x)\mathbb {P}_x(U^s< T_{N^i}) + \mathbb {E}_x\left[ e^{-rT_{N^i}}g_i(X_{T_{N^i}}) \mathbbm {1}_{\{ T_{N^i} < U^s \}} \right] \right) . \end{aligned}$$

For the inf-player, consider the stopping rule \(\tilde{N}^i=\) "stop at the next of the inf-player's Poisson arrivals if the state of X at that time is below the threshold \(y_i\); otherwise, wait". Then

$$\begin{aligned} \begin{aligned} \overline{V}(x)&\leqslant \mathbb {E}_x\left[ e^{-rU^s} G^s_0(X_{U^s}) \mathbbm {1}_{\{ U^s< T_{\tilde{N}^i} \}} + e^{-rT_{\tilde{N}^i}} g_i(X_{T_{\tilde{N}^i}}) \mathbbm {1}_{\{ U^s> T_{\tilde{N}^i} \}} \right] \\&= \sum _{k=1}^\infty \mathbb {E}_x\left[ e^{-rU^s} G^s_0(X_{U^s}) \mathbbm {1}_{\{ U^s < T_{k} \}} + e^{-rT_{k}} g_i(X_{T_{k}}) \mathbbm {1}_{\{ U^s > T_{k} \}} \right. \left| \tilde{N}^i = k\right] \mathbb {P}_x(\tilde{N}^i = k). \end{aligned}\nonumber \\ \end{aligned}$$
(4.5)

Since \(g_i(X_{T_k}) = G_0^i(X_{T_k})\) conditional on \(\{ \tilde{N}^i = k \}\), we find that

$$\begin{aligned}&\mathbb {E}_x\left[ e^{-rU^s} G_0^s(X_{U^s}) \mathbbm {1}_{\{ U^s< T_{k} \}} + e^{-rT_{k}} g_i(X_{T_{k}}) \mathbbm {1}_{\{ U^s> T_{k} \}} \right. \left| \tilde{N}^i = k\right] \\&\quad =\mathbb {E}_x\left[ \sum _{n=1}^{k-1} e^{-rU^s} G_0^s(X_{U^s}) \mathbbm {1}_{\{ T_{n-1}<U^s< T_{n} \}} \right. \left| \tilde{N}^i = k\right] \\&\qquad + \mathbb {E}_x\left[ e^{-rT_{k-1}} \mathbb {E}_{X_{T_{k-1}}}\left[ e^{-r(U^i\wedge U^s)}\left( G_0^i(X_{U^i})\mathbbm {1}_{\{ U^i< U^s \}}+G_0^s(X_{U^s})\mathbbm {1}_{\{ U^s< U^i \}} \right) \right] \mathbbm {1}_{\{ U^s> T_{k-1} \}} \right. \left| \right. \\&\qquad \left. \tilde{N}^i = k\right] \\&\quad =\mathbb {E}_x\left[ \sum _{n=1}^{k-1} e^{-rU^s} G_0^s(X_{U^s}) \mathbbm {1}_{\{ T_{n-1}<U^s < T_{n} \}} \right. \left| \tilde{N}^i = k\right] + \mathbb {E}_x\left[ e^{-rT_{k-1}} G(X_{T_{k-1}}) \mathbbm {1}_{\{ U^s > T_{k-1} \}} \right. \left| \right. \\&\qquad \left. \tilde{N}^i = k\right] . \end{aligned}$$

The strong Markov property yields

$$\begin{aligned}&\mathbb {E}_x\left[ \sum _{n=1}^{k-1} e^{-rU^s} G_0^s(X_{U^s}) \mathbbm {1}_{\{ T_{n-1}<U^s< T_{n} \}} \right. \left| \tilde{N}^i = k\right] \\&\quad =\mathbb {E}_x\left[ \sum _{n=1}^{k-1} e^{-rT_{n-1}} \mathbb {E}_{X_{T_{n-1}}} \left[ e^{-rU^s} G_0^s(X_{U^s}) \right] \mathbbm {1}_{\{ T_{n-1}<U^s < T_{n} \}} \right. \left| \tilde{N}^i = k\right] . \end{aligned}$$

Finally, since, conditional on \(\{ \tilde{N}^i = k \}\),

$$\begin{aligned} e^{-rT_{n-1}} \mathbb {E}_{X_{T_{n-1}}} \left[ e^{-rU^s} G_0^s(X_{U^s}) \right] = e^{-rT_{n-1}} G(X_{T_{n-1}}) = G(x) \end{aligned}$$

on the event \(\{ T_{n-1}<U^s < T_{n} \}\), \(n = 1, \dots , k-1\), and

$$\begin{aligned} e^{-rT_{k-1}} G(X_{T_{k-1}}) = G(x) \end{aligned}$$

on the event \(\{ U^s > T_{k-1} \}\), the estimate (4.5) reduces to \(\overline{V}(x) \leqslant G(x)\). Thus \(V(x)=G(x)\) and the proof is complete. \(\square \)

5 Properties of the Solution

5.1 Asymptotics of \(\lambda _i\) and \(\lambda _s\)

A similar stopping game, where both players are allowed to stop without any restrictions, is studied in [2]. In that case, we know that under assumption 2.1, complemented by the assumption \(g_i\geqslant g_s\), the optimal stopping thresholds \((x_i, x_s)\) are the unique solution to the pair of equations

$$\begin{aligned} \begin{aligned}&(\Psi (\mathcal {A}-r)g_i)(x_i) = (\Psi (\mathcal {A}-r)g_s)(x_s),\\&(\Phi (\mathcal {A}-r)g_i)(x_i) = (\Phi (\mathcal {A}-r)g_s)(x_s). \end{aligned} \end{aligned}$$
(5.1)

It seems reasonable that this solution should coincide with ours when both information rates \(\lambda _i\) and \(\lambda _s\) tend to infinity, since in that case stopping opportunities arrive more and more frequently for both players. We show after some auxiliary calculations that this is indeed the case.

The pair of equations (4.1) characterizing the optimal thresholds in our constrained game can be represented as

$$\begin{aligned} \begin{aligned}&-\frac{\psi _r(y_i)}{\psi _{r+\lambda _i}(y_i)}(\Psi _i(\mathcal {A}-r)g_i)(y_i) + (\Psi (\mathcal {A}-r)g_i)(y_i) \\&\quad = \frac{\psi _r(y_s)}{\varphi _{r+\lambda _s}(y_s)}(\Phi _s(\mathcal {A}-r)g_s)(y_s) + (\Psi (\mathcal {A}-r)g_s)(y_s),\\&\quad \frac{\varphi _r(y_i)}{\psi _{r+\lambda _i}(y_i)}(\Psi _i(\mathcal {A}-r)g_i)(y_i) + (\Phi (\mathcal {A}-r)g_i)(y_i) \\&\quad = - \frac{\varphi _r(y_s)}{\varphi _{r+\lambda _s}(y_s)}(\Phi _s(\mathcal {A}-r)g_s)(y_s) + (\Phi (\mathcal {A}-r)g_s)(y_s). \end{aligned} \end{aligned}$$
(5.2)

Furthermore, for all \(s > 0\), we have

$$\begin{aligned} \mathbb {E}_x\left[ e^{-s\tau _z} \mathbbm {1}_{\{\tau _z < \infty \}}\right] = {\left\{ \begin{array}{ll} \dfrac{\psi _{s}(x)}{\psi _{s}(z)}, &{} x \leqslant z \\ \dfrac{\varphi _{s}(x)}{\varphi _{s}(z)}, &{} x > z, \end{array}\right. } \end{aligned}$$
(5.3)

where \(\tau _z = \inf \{ t \geqslant 0 \mid X_t = z \}\). Therefore, we find that for \(z< x < y\)

$$\begin{aligned} \lim _{s \rightarrow \infty } \frac{\psi _{s}(z)}{\psi _{s}(x)} = 0, \quad \, \, \lim _{s \rightarrow \infty } \frac{\varphi _{s}(y)}{\varphi _{s}(x)} = 0, \end{aligned}$$

and, consequently by monotone convergence, that

$$\begin{aligned} \begin{aligned}&\frac{(\Psi _i(\mathcal {A}-r)g_i)(y_i)}{\psi _{r+\lambda _i}(y_i)} = \int _0^{y_i} \frac{\psi _{r+\lambda _i} (y)}{\psi _{r+\lambda _i}(y_i)} (\mathcal {A}-r)g_i(y)m'(y)dy \xrightarrow {\lambda _i \rightarrow \infty } 0, \\&\frac{(\Phi _s(\mathcal {A}-r)g_s)(y_s)}{\varphi _{r+\lambda _s}(y_s)} = \int _{y_s}^{\infty } \frac{\varphi _{r+\lambda _s}(y)}{\varphi _{r+\lambda _s}(y_s)} (\mathcal {A}-r)g_s(y)m'(y)dy \xrightarrow {\lambda _s \rightarrow \infty } 0. \end{aligned} \end{aligned}$$
(5.4)
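For instance, in the geometric Brownian motion example considered in Section 6.1, the increasing minimal excessive function is \(\psi _s(x) = x^{b(s)}\) with

$$\begin{aligned} b(s) = \frac{1}{2}-\frac{\mu }{\sigma ^2}+\sqrt{\bigg (\frac{1}{2}-\frac{\mu }{\sigma ^2}\bigg )^2+\frac{2s}{\sigma ^2}} \xrightarrow {s \rightarrow \infty } \infty , \quad \text {so that} \quad \frac{\psi _s(z)}{\psi _s(x)} = \Big (\frac{z}{x}\Big )^{b(s)} \xrightarrow {s \rightarrow \infty } 0 \end{aligned}$$

for \(z < x\); the ratio of the decreasing functions \(\varphi _s\) behaves analogously.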

We can now prove the following convergence result.

Proposition 5.1

Let \(K_{\lambda }(x)\) be as in (3.13) and define a function \(k: (0, z_i] \rightarrow (0, z_i]\) as [see (5.1)]

$$\begin{aligned} k(x) = \Phi _{(\mathcal {A}-r)g_i}^{-1}( \Phi _{(\mathcal {A}-r)g_s}( \Psi _{(\mathcal {A}-r)g_s}^{-1}( \Psi _{(\mathcal {A}-r)g_i}( x)))). \end{aligned}$$

Then the unique fixed point \(y_i\) of \(K_{\lambda }\) converges to the unique fixed point \(x_i\) of k as \(\lambda \) tends to infinity.

Proof

From the representation (5.2) of the pair of equations, the limits (5.4), and monotonicity, we see that

$$\begin{aligned} K_{\lambda }(x) \xrightarrow {\lambda \rightarrow \infty } k(x). \end{aligned}$$

The claim follows now by noticing that \(k(x)-x\) attains both negative and positive values (this follows essentially from point (1) in Assumption 2.4, see [2]). \(\square \)

In the case that \(\lambda _i \rightarrow 0\), i.e. in the absence of competition, the threshold \(y_s^*\) converges to the threshold of a stopping problem presented in [26]. Further, if we then let \(\lambda _s \rightarrow \infty \), the threshold coincides with the threshold in [1]; see [26], proposition 2.6. These asymptotic results are collected in Table 1.

Table 1 The asymptotics of the optimal thresholds

5.2 Consequences of the Asymmetry

An interesting feature of the asymmetry is that when one of the rates, for example \(\lambda _s\), stays fixed and we increase \(\lambda _i\), both thresholds decrease. To see this, note that \(y_i \in (0, \tilde{x}_i)\) and also that, by (5.3), for \(z< x < y\) we have

$$\begin{aligned} \frac{\psi _{r+\lambda }(z)}{\psi _{r+\lambda }(x)} \leqslant \frac{\psi _{r}(z)}{\psi _{r}(x)}, \\ \frac{\varphi _{r+\lambda }(y)}{\varphi _{r+\lambda }(x)} \leqslant \frac{\varphi _{r}(y)}{\varphi _{r}(x)}. \end{aligned}$$
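Indeed, by (5.3) both sides of the first inequality are Laplace transforms of the same first hitting time, evaluated at the rates \(r+\lambda \) and r, respectively:

$$\begin{aligned} \frac{\psi _{r+\lambda }(z)}{\psi _{r+\lambda }(x)} = \mathbb {E}_z\left[ e^{-(r+\lambda )\tau _x}\mathbbm {1}_{\{\tau _x<\infty \}}\right] \leqslant \mathbb {E}_z\left[ e^{-r\tau _x}\mathbbm {1}_{\{\tau _x<\infty \}}\right] = \frac{\psi _{r}(z)}{\psi _{r}(x)}, \end{aligned}$$

and the second inequality follows analogously by considering \(\tau _x\) started from y.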

Hence, assuming \(\lambda _1 < \lambda _2\) we have that \(H_2(g_i, \psi _r ;y_i) \leqslant H_1(g_i, \psi _r ;y_i)\) and \(H_2(\varphi _r, g_i ;y_i) \leqslant H_1(\varphi _r, g_i ;y_i)\).

We recall the definition of K and write the dependence on \(\lambda _i\) explicitly

$$\begin{aligned} K(x, \lambda _i)=\check{H}_{i, \varphi }^{-1}(\hat{H}_{s, \varphi }(\hat{H}_{s, \psi }^{-1}(\check{H}_{i, \psi }(x, \lambda _i))), \lambda _i). \end{aligned}$$
(5.5)

Taking the derivative with respect to \(\lambda _i\) yields

$$\begin{aligned} \frac{\partial K}{\partial \lambda _i} = \frac{\partial \check{H}_{i, \varphi }^{-1}}{\partial \lambda _i} + \frac{\partial \hat{H}_{s, \varphi }}{\partial \hat{H}_{s, \psi }^{-1}} \frac{\partial \hat{H}_{s, \psi }^{-1}}{\partial \check{H}_{i, \psi }} \frac{\partial \check{H}_{i, \psi }}{ \partial \lambda _i} < 0. \end{aligned}$$

Hence, K is decreasing in \(\lambda _i\). Next, suppose that \(y_1\) is a fixed point of K when \(\lambda _i = \lambda _1\) and assume that \(\lambda _1 < \lambda _2\). Then

$$\begin{aligned} y_1 = K(y_1, \lambda _1) > K (y_1, \lambda _2), \end{aligned}$$

and consequently \(y_1 > y_2\), where \(y_2\) denotes the fixed point of \(K(\cdot ,\lambda _2)\), since \(K'(y_1,\lambda _1) < 1\) by the proof of proposition 3.3. Similarly, we can show that \(y_s\) decreases as a function of \(\lambda _i\).

This observation has an intuitive explanation. If the information rate \(\lambda _i\) of the inf-player increases, he should wait longer (in the sense that \(y_i\) decreases): he gets more frequent opportunities to stop and is hence less affected by the uncertainty of the underlying. On the other hand, as the rate of the inf-player increases, the sup-player wants to stop sooner (in the sense that \(y_s\) decreases). This is because the inf-player is now less likely to miss good opportunities to stop, and the sup-player has to react accordingly.

6 Illustrations

6.1 Geometric Brownian Motion with Smooth Payoff

In this illustration, we compare our general findings with the usual stopping game in which the players are allowed to stop without restrictions. Thus, we follow [2] and consider a stopping game given by

$$\begin{aligned} R(\tau ,\sigma ) = ((R_r f)(X_{\tau })-c_s)\mathbbm {1}_{\{\tau < \sigma \}} +((R_r f)(X_{\sigma })-c_i)\mathbbm {1}_{\{\tau > \sigma \}}, \end{aligned}$$

where \(c_s> c_i > 0\) are constants measuring the sunk costs, \(f(x) = x^\theta \) is a profit flow with \(0<\theta <1\), and the underlying diffusion \(X_t\) is a geometric Brownian motion.

We remark as in [2] that in this case the buyer always gets the expected cumulative present value \((R_r f )(x)\), and hence the only factor that depends on the timing of the decision is the cost which the buyer pays (and the seller receives) at exercise. Thus, the game can be seen as the valuation of an investment which guarantees the buyer a permanent flow of revenues from the exercise date up to an arbitrarily distant future at a cost which is endogenously determined from the game.

Remark 6.1

We point out that by a similar analysis, we could study the linear payoff structure \(g_i(x) = x-c_i\) and \(g_s(x) = x-c_s \), where \(0< c_i < c_s\). However, we want to compare our results to those of [2] and, therefore, present the case of cumulative payoffs.

In this framework, the infinitesimal generator of the diffusion \(X_t\) reads as

$$\begin{aligned} \mathcal {A} = \frac{1}{2}\sigma ^2 x^2\frac{d^2}{dx^2} + \mu x \frac{d}{dx}, \end{aligned}$$

where \(\mu \in \mathbb {R}_+\) and \(\sigma \in \mathbb {R}_+\) are given constants. We readily verify that assumption 2.1 is satisfied. Furthermore, the scale density and the density of the speed measure read as

$$\begin{aligned} S'(x) = x^{-\frac{2 \mu }{\sigma ^2}}, \quad m'(x) = \frac{2}{\sigma ^2} x^{\frac{2 \mu }{\sigma ^2}-2}. \end{aligned}$$

Denote

$$\begin{aligned}&\beta _{\lambda } = \frac{1}{2}-\frac{\mu }{\sigma ^2}+\sqrt{\bigg (\frac{1}{2}-\frac{\mu }{\sigma ^2} \bigg )^2+\frac{2(r+\lambda )}{\sigma ^2}}>1, \\&\alpha _{\lambda } = \frac{1}{2}-\frac{\mu }{\sigma ^2} - \sqrt{\bigg (\frac{1}{2}-\frac{\mu }{\sigma ^2} \bigg )^2+\frac{2(r+\lambda )}{\sigma ^2}} < 0. \end{aligned}$$
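Equivalently, \(\beta _{\lambda }\) and \(\alpha _{\lambda }\) are the positive and negative roots of the quadratic

$$\begin{aligned} \frac{1}{2}\sigma ^2 b(b-1) + \mu b - (r+\lambda ) = 0, \end{aligned}$$

so that the functions \(x \mapsto x^{\beta _{\lambda }}\) and \(x \mapsto x^{\alpha _{\lambda }}\) solve \(((\mathcal {A}-(r+\lambda ))u)(x) = 0\).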

Then the minimal r-excessive functions for X read as

$$\begin{aligned} \psi _{r}(x) = x^{\beta _0}, \varphi _{r}(x) = x^{\alpha _0}, \psi _{r+\lambda }(x) = x^{\beta _{\lambda }}, \varphi _{r+\lambda }(x) = x^{\alpha _{\lambda }}. \end{aligned}$$

It is worth emphasising that \(\beta _{\lambda }> 1> \theta> 0 > \alpha _{\lambda }\), so that the conclusion in remark 2.3 holds.

The resolvent can be shown to be

$$\begin{aligned} (R_r f)(x) = \frac{x^{\theta }}{r-\theta \mu - \frac{1}{2}\sigma ^2\theta (\theta -1)}. \end{aligned}$$
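This can be verified directly: since \(\mathcal {A}x^{\theta } = \left( \mu \theta + \frac{1}{2}\sigma ^2\theta (\theta -1)\right) x^{\theta }\), the expression on the right-hand side satisfies the resolvent equation \((r-\mathcal {A})u = f\),

$$\begin{aligned} (r-\mathcal {A})\,\frac{x^{\theta }}{r-\theta \mu - \frac{1}{2}\sigma ^2\theta (\theta -1)} = \frac{\left( r-\theta \mu - \frac{1}{2}\sigma ^2\theta (\theta -1)\right) x^{\theta }}{r-\theta \mu - \frac{1}{2}\sigma ^2\theta (\theta -1)} = x^{\theta } = f(x); \end{aligned}$$

note that the denominator is positive, since \(\alpha _0< 0< \theta < \beta _0\).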

Noting that \(\alpha _0\) and \(\beta _0\) are the roots of the quadratic \(\frac{1}{2}\sigma ^2 b(b-1)+\mu b - r = 0\), so that \((\beta _0-\theta )(\theta -\alpha _0) = 2(r-\theta \mu - \frac{1}{2}\sigma ^2 \theta (\theta -1))/\sigma ^2\), we find the alternative representation

$$\begin{aligned} (R_r f)(x) = \frac{2}{\sigma ^2}\frac{x^{\theta }}{(\beta _0-\theta )(\theta -\alpha _0)}. \end{aligned}$$

For notational convenience, we do the calculations in the symmetric case \(\lambda =\lambda _i=\lambda _s\). To write down the pair of equations (4.1), we first calculate the auxiliary functionals

$$\begin{aligned} (\Psi _i \psi _r)(x)&= \frac{2}{\sigma ^2} \frac{1}{\beta _{\lambda }-\alpha _0}x^{\beta _{\lambda }-\alpha _0},&(\Psi _i \varphi _r)(x)&= \frac{2}{\sigma ^2} \frac{1}{\beta _{\lambda }-\beta _0}x^{\beta _{\lambda }-\beta _0}, \\ (\Phi _s \psi _r)(x)&= - \frac{2}{\sigma ^2} \frac{1}{\alpha _{\lambda }-\alpha _0}x^{\alpha _{\lambda }-\alpha _0},&(\Phi _s \varphi _r)(x)&= - \frac{2}{\sigma ^2} \frac{1}{\alpha _{\lambda }-\beta _0}x^{\alpha _{\lambda }-\beta _0}, \end{aligned}$$
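For instance, the first of these follows by direct integration against the speed measure (cf. the integral representation in (5.4)), using the identity \(\alpha _0+\beta _0 = \alpha _{\lambda }+\beta _{\lambda } = 1-\frac{2\mu }{\sigma ^2}\):

$$\begin{aligned} (\Psi _i \psi _r)(x) = \int _0^{x} \psi _{r+\lambda }(y)\psi _r(y)m'(y)dy = \frac{2}{\sigma ^2}\int _0^{x} y^{\beta _{\lambda }-\alpha _0-1}dy = \frac{2}{\sigma ^2}\frac{1}{\beta _{\lambda }-\alpha _0}x^{\beta _{\lambda }-\alpha _0}. \end{aligned}$$

The remaining functionals are computed in the same way.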

Moreover,

$$\begin{aligned} (\Psi _i g_i)(x)&= \bigg ( \frac{2}{\sigma ^2}\bigg )^2 \frac{1}{(\beta _0-\theta )(\theta -\alpha _0)(\theta -\alpha _{\lambda })}x^{\theta -\alpha _{\lambda }}+\frac{2}{\sigma ^2}\frac{c_i}{\alpha _{\lambda }}x^{-\alpha _{\lambda }}, \\ (\Phi _s g_s)(x)&= -\bigg ( \frac{2}{\sigma ^2}\bigg )^2 \frac{1}{(\beta _0-\theta )(\theta -\alpha _0)(\theta -\beta _{\lambda })}x^{\theta -\beta _{\lambda }}-\frac{2}{\sigma ^2}\frac{c_s}{\beta _{\lambda }}x^{-\beta _{\lambda }}. \end{aligned}$$

Noting that \(\alpha _{\lambda } -\beta _0 = \alpha _0-\beta _{\lambda }\) and using the above expressions, the pair of equations (4.1) read as

$$\begin{aligned}&\frac{2}{\sigma ^2}\frac{x^{\theta -\alpha _0}}{(\beta _0-\theta )(\theta -\alpha _0)} \bigg [ \frac{1}{\beta _{\lambda }-\alpha _0} - \frac{1}{\theta -\alpha _{\lambda }} \bigg ] - x^{-\alpha _0} c_i \bigg [ \frac{1}{\alpha _{\lambda }} + \frac{1}{\beta _{\lambda }-\alpha _0} \bigg ] \\&\quad = \frac{2}{\sigma ^2}\frac{y^{\theta -\alpha _0}}{(\beta _0-\theta )(\theta -\alpha _0)} \bigg [ \frac{1}{\alpha _{\lambda }-\alpha _0} - \frac{1}{\theta -\beta _{\lambda }} \bigg ] - y^{-\alpha _0} c_s \bigg [ \frac{1}{\beta _{\lambda }} + \frac{1}{\alpha _{\lambda }-\alpha _0} \bigg ], \\&\frac{2}{\sigma ^2}\frac{x^{\theta -\beta _0}}{(\beta _0-\theta )(\theta -\alpha _0)} \bigg [ \frac{1}{\beta _{\lambda }-\beta _0} - \frac{1}{\theta -\alpha _{\lambda }} \bigg ] - x^{-\beta _0} c_i \bigg [ \frac{1}{\alpha _{\lambda }} + \frac{ 1}{\beta _{\lambda }-\beta _0} \bigg ] \\&\quad = \frac{2}{\sigma ^2}\frac{y^{\theta -\beta _0}}{(\beta _0-\theta )(\theta -\alpha _0)} \bigg [ \frac{1}{\alpha _{\lambda }-\beta _0} - \frac{1}{\theta -\beta _{\lambda }} \bigg ] - y^{-\beta _0} c_s \bigg [ \frac{1}{\beta _{\lambda }} + \frac{1}{\alpha _{\lambda }-\beta _0} \bigg ]. \end{aligned}$$

Unfortunately, it seems impossible to solve the pair explicitly, and thus we illustrate the results numerically.

Next, we analyse the assumptions in 2.4. Item (1) clearly holds, and regarding item (2), we find that

$$\begin{aligned}&(\Psi _i (\mathcal {A}-r)g_i)(x) = \frac{2}{\sigma ^2} \int _0^x z^{\beta _{\lambda }} (r c_i-z^{\theta }) z^{\frac{2 \mu }{\sigma ^2} -2} dz = \frac{2}{\sigma ^2} \bigg ( -\frac{r c_i}{\alpha _{\lambda }}x^ {-\alpha _{\lambda }}-\frac{1}{\theta -\alpha _{\lambda }}x^{\theta -\alpha _{\lambda }} \bigg ), \\&(\Phi _s (\mathcal {A}-r)g_s)(x) = \frac{2}{\sigma ^2} \int _x^{\infty } z^{\alpha _{\lambda }} (r c_s-z^{\theta }) z^{\frac{2 \mu }{\sigma ^2} -2} dz = \frac{2}{\sigma ^2} \bigg ( \frac{r c_s}{\beta _{\lambda }}x^{-\beta _{\lambda }}+\frac{1}{\theta -\beta _{\lambda }}x^{\theta -\beta _{\lambda }} \bigg ). \end{aligned}$$

Hence,

$$\begin{aligned} \tilde{x}_i^{\theta } = rc_i \left( 1 - \frac{\theta }{\alpha _{\lambda }} \right) , \quad \tilde{x}_s^{\theta } = rc_s \left( 1 - \frac{\theta }{\beta _{\lambda }} \right) , \end{aligned}$$

and consequently,

$$\begin{aligned} \tilde{x}_i - \tilde{x}_s< 0 \iff (c_i-c_s)+\theta \left( \frac{c_s}{\beta _{\lambda }}-\frac{c_i}{\alpha _{\lambda }}\right) < 0. \end{aligned}$$

This demonstrates the interplay between the payoffs and the information rates and also highlights that our assumptions on the payoff functions are not enough to guarantee the ordering \(\tilde{x}_i < \tilde{x}_s\) automatically.
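As a quick check, consider the two limiting regimes of the common intensity \(\lambda \). Since \(\beta _{\lambda } \rightarrow \infty \) and \(\alpha _{\lambda } \rightarrow -\infty \) as \(\lambda \rightarrow \infty \), while \(\beta _{\lambda } \rightarrow \beta _0\) and \(\alpha _{\lambda } \rightarrow \alpha _0\) as \(\lambda \rightarrow 0\), we have

$$\begin{aligned} (c_i-c_s)+\theta \left( \frac{c_s}{\beta _{\lambda }}-\frac{c_i}{\alpha _{\lambda }}\right) \xrightarrow {\lambda \rightarrow \infty } c_i-c_s< 0, \qquad (c_i-c_s)+\theta \left( \frac{c_s}{\beta _{\lambda }}-\frac{c_i}{\alpha _{\lambda }}\right) \xrightarrow {\lambda \rightarrow 0} (c_i-c_s)+\theta \left( \frac{c_s}{\beta _{0}}-\frac{c_i}{\alpha _{0}}\right) . \end{aligned}$$

Hence the ordering \(\tilde{x}_i < \tilde{x}_s\) always holds for large intensities, whereas for small intensities it holds only if \(\theta (c_s/\beta _0 - c_i/\alpha _0) < c_s-c_i\).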

To illustrate the results, we choose the parameters \(\mu = 1/2\), \(\sigma = 1\), \(r = 9/2\), \(c_i = 1\), \(c_s = 4/3\). Then, in the symmetric case, we find, as expected, that the optimal thresholds converge to the ones in the unconstrained case [2] as \(\lambda \rightarrow \infty \), see Fig. 1.

Fig. 1 The optimal stopping thresholds in the symmetric case as functions of \(\lambda \). The dashed lines are the thresholds in the unconstrained case [2]
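For reproducibility, the following Python sketch (not part of the original computations) solves the displayed pair of threshold equations numerically in the symmetric case; the exponent \(\theta \) and the initial guess, built heuristically from \(\tilde{x}_i\) and \(\tilde{x}_s\), are illustrative assumptions.

```python
# A minimal numerical sketch (not the paper's code): solve the displayed pair of
# threshold equations for the symmetric case lambda_i = lambda_s = lam under
# geometric Brownian motion. The exponent theta and the initial guess are
# illustrative assumptions and may need tuning.
import numpy as np
from scipy.optimize import fsolve

mu, sigma, r = 0.5, 1.0, 4.5        # parameters chosen in the text
c_i, c_s = 1.0, 4.0 / 3.0
theta = 0.5                          # assumed profit-flow exponent, 0 < theta < 1

def roots(s):
    """Positive and negative roots of (sigma^2/2) b(b-1) + mu b - s = 0."""
    h = 0.5 - mu / sigma**2
    d = np.sqrt(h**2 + 2.0 * s / sigma**2)
    return h + d, h - d              # (beta, alpha)

def thresholds(lam):
    b0, a0 = roots(r)                # beta_0, alpha_0
    bl, al = roots(r + lam)          # beta_lambda, alpha_lambda
    C = (2.0 / sigma**2) / ((b0 - theta) * (theta - a0))

    def eqs(v):
        x, y = v                     # x = y_i (inf-player), y = y_s (sup-player)
        e1 = (C * x**(theta - a0) * (1/(bl - a0) - 1/(theta - al))
              - x**(-a0) * c_i * (1/al + 1/(bl - a0))
              - C * y**(theta - a0) * (1/(al - a0) - 1/(theta - bl))
              + y**(-a0) * c_s * (1/bl + 1/(al - a0)))
        e2 = (C * x**(theta - b0) * (1/(bl - b0) - 1/(theta - al))
              - x**(-b0) * c_i * (1/al + 1/(bl - b0))
              - C * y**(theta - b0) * (1/(al - b0) - 1/(theta - bl))
              + y**(-b0) * c_s * (1/bl + 1/(al - b0)))
        return [e1, e2]

    # Heuristic initial guess built from the zeros x-tilde of the functionals.
    xt_i = (r * c_i * (1.0 - theta / al))**(1.0 / theta)
    xt_s = (r * c_s * (1.0 - theta / bl))**(1.0 / theta)
    return fsolve(eqs, (0.5 * xt_i, 1.5 * xt_s))

for lam in (1.0, 10.0, 100.0):
    y_i, y_s = thresholds(lam)
    print(f"lambda = {lam:6.1f}:  y_i = {y_i:.4f},  y_s = {y_s:.4f}")
```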

In the non-symmetric case, we find that, if the information rate \(\lambda _s\) of the sup-player is fixed, both thresholds decrease as functions of \(\lambda _i\), see Fig. 2. Interestingly, at least in our numerical examples, increasing volatility does not necessarily expand the continuation region (by increasing \(y_s\) and decreasing \(y_i\)). This is contrary to the findings in [2] for the standard stopping game.

Fig. 2 Non-symmetric case with fixed \(\lambda _s=100\) and \(\lambda _i\) varying. The dashed lines are the thresholds in the unconstrained case [2]

Finally, the value function of the game is shown in Fig. 3.

Fig. 3 Value function for the constrained game

6.2 Mean Reverting Dynamics

To further expand on the first example, we consider different diffusion dynamics under a similar payoff structure. We assume that the diffusion X has the infinitesimal generator

$$\begin{aligned} \mathcal {A} = \frac{1}{2}\sigma ^2x^2\frac{d^2}{dx^2} + \mu x(1-\gamma x) \frac{d}{dx}, \end{aligned}$$

where \(\mu > 0\) is a constant, \(\gamma > 0\) is the degree of mean-reversion and \(\sigma > 0\) is the volatility coefficient. This process is often called the Verhulst-Pearl diffusion. Because the payoffs are chosen similarly to the first example, namely \(g_j(x)=(R_r f)(x)-c_j\), where \(j=i,s\), \(c_i < c_s\), and \(f(x)=x\), assumption 2.1 is again satisfied. The scale density and the density of the speed measure read as

$$\begin{aligned} S'(x)= x^{-\frac{2\mu }{\sigma ^2}} e^{\frac{2\mu \gamma }{\sigma ^2}x} , \quad m'(x)=\frac{2}{\sigma ^2} x^{\frac{2\mu }{\sigma ^2}-2} e^{-\frac{2\mu \gamma }{\sigma ^2}x}, \end{aligned}$$

and the minimal r-excessive functions are (see [6], p. 202)

$$\begin{aligned} \varphi _{r}(x) = x^\alpha U\left( \alpha , 1+\alpha -\beta , \frac{2\mu \gamma }{\sigma ^2}x\right) , \quad \, \psi _{r}(x) = x^\alpha L\left( -\alpha , \alpha -\beta , \frac{2\mu \gamma }{\sigma ^2}x\right) , \end{aligned}$$

where U is a confluent hypergeometric function, L is the generalized Laguerre polynomial \(L(a,b,z)=L_a^b(z)\) and

$$\begin{aligned}&\beta = \frac{1}{2}-\frac{\mu }{\sigma ^2}-\sqrt{\bigg (\frac{1}{2} -\frac{\mu }{\sigma ^2} \bigg )^2+\frac{2r}{\sigma ^2}}, \\&\alpha = \frac{1}{2}-\frac{\mu }{\sigma ^2} + \sqrt{\bigg (\frac{1}{2}-\frac{\mu }{\sigma ^2} \bigg )^2+\frac{2r}{\sigma ^2}}. \end{aligned}$$

Due to the complicated nature of the minimal r-excessive functions in this example, the functionals \(\Psi h\) and \(\Phi h\), where h is \(\psi _r, \varphi _r\) or \((\mathcal {A}-r)g_j\) (\(j=i,s\)), cannot be calculated explicitly. Consequently, the pair of equations (4.1) for the optimal thresholds cannot be simplified from their original integral forms in any helpful way, and are thus left unstated.

Regarding the assumptions in 2.4, the first one is again satisfied. Unfortunately, again due to the complicated forms of \(\psi _r\) and \(\varphi _r\), the second assumption has to be verified numerically in each case. The results are illustrated numerically in Table 2 with the parameters \(\sigma = 0.3\), \(r=0.08\), \(\gamma = 0.05\), \(c_i = 10.0\), \(c_s = 20.0\) and \(\lambda = \lambda _i = \lambda _s\). These values suggest that the optimal stopping thresholds converge to the thresholds of the unconstrained case as the intensity \(\lambda \) increases. This is in line with our general result.

Table 2 The optimal stopping thresholds in the constrained case for different values of the intensity parameter. The optimal thresholds in the unconstrained case [2] are in this case \(x_i=0.36\) and \(x_s=2.80\)
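As in the first example, the numbers above require evaluating the minimal r-excessive functions numerically. The following sketch (an illustration only: the drift \(\mu \) is not restated in this section, so the value below is an assumption, and the Laguerre function is evaluated via its Kummer-function representation, up to a multiplicative constant independent of x) shows one way to do this with scipy.

```python
# A minimal sketch (illustration only): evaluating the minimal r-excessive
# functions of the Verhulst-Pearl diffusion with scipy.special. The drift mu is
# an assumed value, and psi_r is computed only up to a multiplicative constant
# via the Kummer-function representation of the generalized Laguerre function.
import numpy as np
from scipy.special import hyperu, hyp1f1

sigma, r, gamma = 0.3, 0.08, 0.05    # parameters chosen in the text
mu = 0.1                              # assumed drift (not restated in this section)

h = 0.5 - mu / sigma**2
alpha = h + np.sqrt(h**2 + 2.0 * r / sigma**2)
beta = h - np.sqrt(h**2 + 2.0 * r / sigma**2)

def z(x):
    return 2.0 * mu * gamma / sigma**2 * x

def phi_r(x):
    """Decreasing solution, as displayed above: x^alpha * U(alpha, 1+alpha-beta, z)."""
    return x**alpha * hyperu(alpha, 1.0 + alpha - beta, z(x))

def psi_r(x):
    """Increasing solution, up to a constant: L(-alpha, alpha-beta, z) is
    proportional to 1F1(alpha; alpha-beta+1; z)."""
    return x**alpha * hyp1f1(alpha, alpha - beta + 1.0, z(x))

x = np.linspace(0.1, 5.0, 50)
print(phi_r(x)[0], psi_r(x)[0])
```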