Abstract
The objective of this paper is to study a class of zero-sum optimal stopping games of diffusions under a so-called Poisson constraint: the players are allowed to stop only at the arrival times of their respective Poissonian signal processes. These processes can have different intensities, which makes the game setting asymmetric. We give a weak and easily verifiable set of sufficient conditions under which we derive a semi-explicit solution to the game in terms of the minimal r-excessive functions of the diffusion. We also study limiting properties of the solutions with respect to the signal intensities and illustrate our main findings with explicit examples.
1 Introduction
Optimal stopping games were introduced in the seminal paper [7]; for other classical references, see [17, 29, 33]; see also [18] for a review article. In the typical form, these are two-player games where the sup-player's (inf-player's) objective is to maximize (minimize) the expected present value of the exercise payoff. Important applications of stopping games in mathematical finance are cancellable (or callable) options [3, 11, 19] and convertible bonds [16, 20, 32]. Here, the issuer (i.e., inf-player) has the right to cancel (or convert) the contract by paying a fee to the holder (i.e., sup-player).
The stopping game considered in our study stems from the so-called Poisson stopping problem, a term coined in [22]. Poisson stopping problems are built on continuous-time dynamics, but stopping is only allowed at the arrival times of an exogenous signal process, usually a Poisson process. This type of stopping problem first appeared in [5], where optimal stopping of geometric Brownian motion at the arrival times of an independent Poisson process (later, Poisson stopping) was studied. Papers in the same vein include the following. The paper [12] addresses Poisson stopping at the maximum of geometric Brownian motion. In [26], Poisson stopping of general one-dimensional diffusion processes is considered. Poisson stopping is generalized to optimal switching problems in [23], and to a multi-dimensional setting in [22]. Extension to more general, time-inhomogeneous signal processes is addressed in [28]. Time-inhomogeneous Poissonian signal processes are considered in [13, 14]. In [13], the stopping problem is set up so that the decision maker can control the intensity of the Poissonian signal process, whereas [14] addresses the shape properties of the value function in a time-inhomogeneous Poisson stopping problem.
We extend the Poisson stopping framework to zero-sum stopping games in the following way. Similarly to [26], we study a perpetual problem and assume that the underlying dynamics follow a general one-dimensional diffusion. Moreover, we assume that there are two independent Poisson signal processes, one for each player, and that the players can stop only at the arrival times of their respective Poisson processes. These processes can have different intensities, which makes the game setting asymmetric. Our problem setting is closely related to [24, 25], see also [15]. In [24], a similar game is studied where there is only one signal process and both players are allowed to stop at its arrival times. This differs from our setting even when the intensities of the signal processes coincide: although the arrival rates are then the same, the signals will almost surely never arrive simultaneously. This eliminates the need to assume the usual ordering (appearing, for instance, in [2, 8,9,10, 21, 24, 27]) that the payoff of the inf-player has to dominate that of the sup-player, since an immediate comparison of the payoffs is never needed; this observation is made also in [25], where the heterogeneous case is studied. We point out that some comparison of the payoffs is still needed; the precise conditions are spelled out in Assumption 2.4. The payoff processes in [24, 25] are assumed to be progressively measurable with respect to the minimal completion of the filtration generated by a (potentially multidimensional) Wiener process. This is in the same spirit as our model, since the paths dictating the payoffs are continuous in both cases. We refer here to [30], where a similar constrained game is considered for Lévy dynamics. The time horizon in [24, 25] is allowed to be a stopping time, either bounded or unbounded. For an unbounded time horizon, the analysis of [24] covers the case where the payoffs are bounded.
This is in contrast to our study, where we also allow for unbounded payoffs. In [24, 25], the authors provide a characterization of the value in terms of a penalized backward stochastic differential equation. We take a different route by solving our problem via a free boundary problem. As a result, we produce explicit (up to a representation of the minimal r-excessive functions of the diffusion process) solutions for the optimal value function. We also characterize the optimal threshold rules in terms of the minimal r-excessive functions and provide sufficient conditions both for existence and uniqueness of the solution; to the best of our knowledge, these are new results. These results are useful for a few reasons. Firstly, diffusion models are important in many applications and our results shed new light on the structure of the solution for this class of problems. Secondly, the semi-explicit nature of the solution allows a deeper study of the asymptotics and other properties of the asymmetry. Lastly, the solution is fairly easy to produce, at least numerically, as it boils down to solving a linear second order ordinary differential equation.
The remainder of the study is organized as follows. In Sect. 2 we formulate the optimal stopping game. A candidate solution for the game is derived in Sect. 3, whereas in Sect. 4 we show that the candidate solution is indeed the solution of the game. Asymptotic results are proved in Sect. 5, and the study is concluded by explicit examples in Sect. 6.
2 The Game
We assume that the state process X is a regular diffusion evolving on \(\mathbb {R}_+\) with the initial state x. Furthermore, we assume that the boundaries of the state space \(\mathbb {R}_+\) are natural. Now, the evolution of X is completely determined by its scale function S and speed measure m inside \(\mathbb {R}_+\), see [4, pp. 13–14]. Furthermore, we assume that the function S and the measure m are both absolutely continuous with respect to the Lebesgue measure, have smooth derivatives, and that S is twice continuously differentiable. Under these assumptions, we know that the infinitesimal generator \(\mathcal {A}:\mathcal {D}(\mathcal {A})\rightarrow C_b(\mathbb {R}_+)\) of X can be expressed as \(\mathcal {A}=\frac{1}{2}\sigma ^2(x)\frac{d^2}{dx^2}+\mu (x)\frac{d}{dx}\), where the functions \(\sigma \) and \(\mu \) are related to S and m via the formulæ \(m'(x)=\frac{2}{\sigma ^2(x)}e^{B(x)}\) and \(S'(x)= e^{-B(x)}\) for all \(x \in \mathbb {R}_+\), where \(B(x):=\int ^x \frac{2\mu (y)}{\sigma ^2(y)}dy\), see [4, p. 17]. From these definitions we find that \(\sigma ^2(x)=\frac{2}{S'(x)m'(x)}\) and \(\mu (x)=-\frac{S''(x)}{{S'}^2(x)m'(x)}\) for all \(x \in \mathbb {R}_+\). In what follows, we assume that the functions \(\mu \) and \(\sigma ^2\) are continuous. The assumption that the state space is \(\mathbb {R}_+\) is made for convenience. In fact, we could assume that the state space is any interval \(\mathcal {I}\) in \(\mathbb {R}\) and the subsequent analysis would hold with obvious modifications. Furthermore, we denote by \(\psi _r\) and \(\varphi _r\), respectively, the increasing and the decreasing solution of the second order linear ordinary differential equation \(\mathcal {A}u=ru\), where \(r>0\), defined on the domain of the characteristic operator of X. The functions \(\psi _r\) and \(\varphi _r\) can be identified as the minimal r-excessive functions of X, see [4, pp. 18–20].
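To make these relations concrete, the following minimal numerical sketch (our own illustration, not part of the paper's argument, with hypothetical parameter values) takes X to be a geometric Brownian motion with \(\mu (x)=\mu x\) and \(\sigma (x)=\sigma x\), computes the scale and speed densities, and checks the inverse relation \(\sigma ^2(x)=\frac{2}{S'(x)m'(x)}\):

```python
import numpy as np

# Our illustration (not from the paper): geometric Brownian motion with
# drift mu(x) = mu*x and volatility sigma(x) = sigma*x. We compute the scale
# density S'(x) = exp(-B(x)) and the speed density
# m'(x) = 2/sigma^2(x) * exp(B(x)), with B(x) = int^x 2*mu(y)/sigma^2(y) dy,
# and check the inverse relation sigma^2(x) = 2/(S'(x) m'(x)).

mu, sigma = 0.03, 0.25   # hypothetical parameter values

def B(x):
    # for GBM, 2*mu*y/(sigma^2*y^2) integrates to (2*mu/sigma^2)*log(x);
    # the constant of integration is irrelevant and dropped
    return (2.0 * mu / sigma**2) * np.log(x)

def S_prime(x):
    return np.exp(-B(x))

def m_prime(x):
    return 2.0 / (sigma**2 * x**2) * np.exp(B(x))

x = 1.7
assert np.isclose(2.0 / (S_prime(x) * m_prime(x)), sigma**2 * x**2)
```

The same check works for any diffusion whose function B can be integrated in closed form.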
In addition, we assume that the filtration \(\mathbb {F}\) carries two Poisson processes \(Y^i=(Y^i_t,\mathcal {F}_t)\) and \(Y^s=(Y^s_t,\mathcal {F}_t)\) with intensities \(\lambda _i\) and \(\lambda _s\), respectively. We call the processes \(Y^i\) and \(Y^s\) signal processes, and assume that they are mutually independent and also independent of X. We denote the arrival times of \(Y^i\) and \(Y^s\), respectively, as \(T_{n^i}\) and \(T_{n^s}\). Finally, we make the convention that \(T_{0^i} = T_{0^s} =0\).
Denote now as \(L_1^r\) the class of measurable mappings f satisfying the integrability condition
$$\begin{aligned} \mathbb {E}_x\left[ \int _0^\infty e^{-rt}|f(X_t)|dt\right] < \infty . \end{aligned}$$
We know from the literature, see [4, p. 19], that for a given \(f\in L_1^r\) the resolvent \(R_rf\) can be expressed as
$$\begin{aligned} (R_rf)(x) = B_r^{-1}\varphi _r(x)\int _0^x \psi _r(y)f(y)m'(y)dy + B_r^{-1}\psi _r(x)\int _x^\infty \varphi _r(y)f(y)m'(y)dy \end{aligned}$$
for all \(x \in \mathbb {R}_+\), where \(B_r=\frac{\psi _r'(x)}{S'(x)}\varphi _r(x)-\frac{\varphi _r'(x)}{S'(x)}\psi _r(x)\) denotes the Wronskian determinant.
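Although the expression for \(B_r\) is written at a point x, the Wronskian is constant in x. A quick numerical check (our own illustration, assuming X is a geometric Brownian motion, for which \(\psi _r\) and \(\varphi _r\) are power functions, with hypothetical parameter values):

```python
import numpy as np

# Our illustration: for geometric Brownian motion, psi_r(x) = x^{beta_+} and
# phi_r(x) = x^{beta_-}, where beta_{+/-} are the roots of
# (sigma^2/2)*b*(b-1) + mu*b - r = 0. We check numerically that
# B_r = psi_r'(x)/S'(x) * phi_r(x) - phi_r'(x)/S'(x) * psi_r(x)
# does not depend on x.

mu, sigma, r = 0.03, 0.25, 0.05   # hypothetical parameter values
a = 0.5 * sigma**2
beta_plus, beta_minus = sorted(np.roots([a, mu - a, -r]), reverse=True)

def psi(x): return x**beta_plus          # increasing fundamental solution
def phi(x): return x**beta_minus         # decreasing fundamental solution
def dpsi(x): return beta_plus * x**(beta_plus - 1.0)
def dphi(x): return beta_minus * x**(beta_minus - 1.0)
def S_prime(x): return x**(-2.0 * mu / sigma**2)

def wronskian(x):
    return (dpsi(x) * phi(x) - dphi(x) * psi(x)) / S_prime(x)

vals = wronskian(np.array([0.5, 1.0, 2.0, 5.0]))
assert np.allclose(vals, vals[0])        # constant in x, as it should be
```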
Next, we define the stopping game. The players, sup and inf, have their respective exercise payoff functions \(g_s\) and \(g_i\), and are allowed to stop the process X only at the arrivals of their respective signal processes \(Y^s\) and \(Y^i\). The sup-player attempts to maximize the expected present value of the exercise payoff, whereas the inf-player's objective is to minimize the same quantity. We define the lower and upper values of the game as
where
When the equality
holds, the zero-sum game is said to have a value V. The maximizing strategies in \(\underline{V}\) and the minimizing strategies in \(\overline{V}\) are called optimal, and any pair of optimal strategies is a Nash equilibrium. We point out that in the game studied here, it is not necessary to include the possibility of simultaneous stopping, as independent Poisson arrivals do not, almost surely, occur simultaneously. It is also worth pointing out that in the definition of the upper and lower values, the players are not allowed to stop immediately. One can think of the value function as the value of future stopping opportunities without the option of immediate stopping.
To solve the problem (2.2), we introduce two auxiliary problems. Auxiliary problem I is defined via the lower and upper values
where
Similarly, the auxiliary problem S is defined via the lower and upper values
where
Finally, the values \(V_0^i\) and \(V_0^s\) are said to exist, if conditions similar to (2.2) hold. We point out that in auxiliary problem I the inf-player is allowed to stop immediately, whereas the sup-player has to wait until the next \(Y^s\)-arrival to make a choice. The roles are reversed in auxiliary problem S, where the sup-player can stop immediately. In Sect. 3, we propose a Bellman principle that binds the candidate values for the main problem and the auxiliary problems together.
We consider payoff functions similar to those in the optimal stopping literature on explicitly solvable cases, see [2, 27].
Assumption 2.1
Let \(g_i\) and \(g_s\) be real functions defined on the positive reals and satisfying the following conditions:
-
(1)
\(g_i\) and \(g_s\) are non-decreasing and continuously differentiable,
-
(2)
\(g_i\) and \(g_s\) are stochastically \(C^2\): they are twice continuously differentiable outside of a countable set \(\{x_j\}\) which has no accumulation points and the limits \(|g_i^{''}(x_j\pm )|\) and \(|g_s^{''}(x_j\pm )|\) are all finite,
-
(3)
There exist states \(z_i\) and \(z_s\) such that
$$\begin{aligned} {\left\{ \begin{array}{ll} (\mathcal {A}-r)g_i(x) \\ (\mathcal {A}-r)g_s(x) \end{array}\right. } \gtreqqless 0, \ {\left\{ \begin{array}{ll} x \lesseqqgtr z_i, \\ x \lesseqqgtr z_s. \end{array}\right. } \end{aligned}$$
Some remarks regarding these assumptions are in order. The monotonicity in point (1) is satisfied in many potential applications, and point (2) essentially guarantees that we can work with the expressions \((\mathcal {A}-r)g_i\) and \((\mathcal {A}-r)g_s\). Point (3) suggests that we are setting up problems where the continuation region is connected, that is, the equilibrium stopping rule is two-sided. This structure is important and appears in many applications.
The class of problems given by our assumptions is large and contains important cases such as linear payoffs. Indeed, when the payoffs are linear, \(g_k(x) = x-c_k\), \(k=i,s\), where \(c_s > c_i\), and X is a geometric Brownian motion with drift \(\mu < r\), then \((\mathcal {A}-r)g_k(x) = (\mu - r)x + rc_k\). More generally, if the drift coefficient of X is a polynomial whose leading term has a negative coefficient (this is typical in mean-reverting models), then Assumption 2.1 holds for linear payoffs. For example, if X is a Verhulst-Pearl diffusion (\( \mathcal {A} = \frac{1}{2}\sigma ^2x^2\frac{d^2}{dx^2} + \mu x(1-\beta x) \frac{d}{dx}\)), then \((\mathcal {A}-r)g_k(x) = - \mu \beta x^2 + (\mu - r)x + rc_k\).
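For the linear-payoff example, the sign structure required by point (3) of Assumption 2.1 can be verified directly; the following sketch (our own illustration, with hypothetical parameter values) checks that \((\mathcal {A}-r)g_k\) changes sign exactly once, at \(z_k = \frac{rc_k}{r-\mu }\):

```python
# Our illustration: linear payoffs g_k(x) = x - c_k and X a geometric
# Brownian motion with drift mu < r, so (A - r)g_k(x) = (mu - r)x + r*c_k.
# The sign change required by point (3) of Assumption 2.1 occurs at
# z_k = r*c_k/(r - mu); all parameter values are hypothetical.

mu, r = 0.02, 0.05
c_i, c_s = 1.0, 2.0

def Ag(x, c):
    return (mu - r) * x + r * c

z_i = r * c_i / (r - mu)
z_s = r * c_s / (r - mu)

assert Ag(0.5 * z_i, c_i) > 0 and Ag(2.0 * z_i, c_i) < 0  # one sign change
assert z_i < z_s                                          # since c_i < c_s
```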
We address non-smooth payoffs in Sect. 3.4 by studying the payoff structure of a callable option [3, 11, 19, 21] and observe that its analysis can, fairly directly, be reduced to our core case.
We begin with some preliminary analysis. For \(f \in L_1^r\), we define the functionals \(\Psi _i\) and \(\Phi _s\) as
and, with a slight abuse of notation,
Lemma 2.2
Let \(q>0\) and \(g \in L_1^{q}\) satisfy the points (1) and (2) of Assumption 2.1. Then
Proof
Denote
Since the functions \(\psi _q\) and \(\varphi _q\) are solutions of the differential equation \((\mathcal {A}-q)u=0\), we find after differentiation that
Therefore, an application of the fundamental theorem of calculus combined with the assumed boundary classification of the diffusion yields the results. \(\square \)
Remark 2.3
We note that point (3) in Assumption 2.1 implies that there exist unique states \(\tilde{x}_i, \tilde{x}_s \in (0,\infty )\) such that
Indeed, first notice that \((\Phi _s (\mathcal {A}-r)g_s)(x)<0\) when \(x > z_s\). Then taking \(x< k < z_s\) we get
By the mean value theorem we have
where \(\xi \in (x,k)\). Because the lower boundary is natural, and hence \(\frac{\varphi _{r+\lambda _s}'(x)}{S'(x)} \rightarrow -\infty \) when \(x \rightarrow 0\), we see that taking the limit \(x \rightarrow 0\) yields \((\Phi _s (\mathcal {A}-r)g_s)(x) \rightarrow \infty \). Thus, by monotonicity, the functional \((\Phi _s (\mathcal {A}-r)g_s)(x)\) must have a unique finite root \(\tilde{x}_s>0\). Similar calculations show that \(\tilde{x}_i\) is finite.
Assumption 2.1 suffices to show uniqueness of our solution in Sect. 3 and to prove the verification theorem in Sect. 4, but we need to impose additional assumptions for the existence of the optimal solution. These are collected below.
Assumption 2.4
Let \(g_i\), \(g_s\) and \(x_j\) be defined as in Assumption 2.1 and the states \(\tilde{x}_i, \tilde{x}_s\) as in Remark 2.3. We assume that
-
(1)
\((\mathcal {A}-r)g_s(x) > (\mathcal {A}-r)g_i(x) \text { for all } x \in \mathbb {R}_+ \setminus \{x_j\}\),
-
(2)
the states \(\tilde{x}_i, \tilde{x}_s\) have the order \(\tilde{x}_i < \tilde{x}_s\),
-
(3)
the limits satisfy \(\frac{g_i}{\psi _r}(0+) < 0\) and \(\frac{g_s}{\varphi _r}(\infty ) > 0\).
In point (3) of Assumption 2.4, we also allow that \(\frac{g_i}{\psi _r}(0+) = - \infty \) and \(\frac{g_s}{\varphi _r}(\infty ) = \infty \). This is the case in many examples.
For \(f,g \in L_1^r\), we define the functionals \(H_i\) and \(H_s\) as
Lemma 2.5
Let \(g \in L_1^{r}\) satisfy the points (1) and (2) of Assumption 2.1. Furthermore, let \(\xi _r\) be r-harmonic. Then
Proof
We prove the first claim, the second can be proved similarly. Elementary differentiation and a reorganization of the terms yield
We apply the second part of Lemma 2.2 to \(\xi _r\) and find that
By substituting this into Eq. (2.5) and then first applying the second part of Lemma 2.2 to g and then the expression (2.6) again, the claim follows. \(\square \)
3 The Solution: Necessary Conditions
3.1 The Candidate Solution
We start the analysis of the problem (2.2) by deriving a candidate solution. To this end, we recall the main problem (2.2) and the auxiliary problems I and S from Sect. 2. Denote the candidate value for the problem (2.2) as G and the candidate functions for the auxiliary problems I and S as \(G_0^i\) and \(G_0^s\), respectively. We make the following working assumptions:
(1) We assume that the candidate value functions satisfy the following dynamic programming principle:
here the random variables \(U^s \sim {\text {Exp}}(\lambda _s)\) and \(U^i \sim {\text {Exp}}(\lambda _i)\) are independent. In auxiliary game I, the inf-player chooses between stopping immediately and waiting, whereas the sup-player can do nothing but wait; this situation is reflected by Eq. (3.1); Eq. (3.2) has a similar interpretation in terms of auxiliary problem S. The condition (3.3) is the expected present value of the next stopping opportunity, which will be for either the inf- or the sup-player and presents itself as a choice reflected by the conditions (3.1) and (3.2). We point out that by the independence of \(U^i\) and \(U^s\), the condition (3.3) can be written as
(2) By the time homogeneity of the stopping game, we assume that the continuation region is the interval \((y_i,y_s)\), for some constants \(y_i\) and \(y_s\). Thus we can rewrite the functions \(G_0^i\) and \(G_0^s\) as
(3) Furthermore, we assume that the function G is continuous. Then
These assumptions are used to devise the candidate solution for the problem; this is the task of this section. The candidate solution is then verified to be the actual solution in Sect. 4.
Since \(G(x) = G_0^i(x) = G_0^s(x)\) on \((y_i,y_s)\) and
we find that
By [26, Lemma 2.1], the function \(x \mapsto \frac{\lambda _i}{\lambda _i + \lambda _s}G_0^i(x) + \frac{\lambda _s}{\lambda _i + \lambda _s}G_0^s(x)\) is r-harmonic on \((y_i,y_s)\). Consequently, we have that \(G(x) = G_0^i(x) = G_0^s(x) = h_r(x)\) on \((y_i,y_s)\), where \(h_r\) is r-harmonic. Summarizing,
We develop this representation further in the following lemma.
Lemma 3.1
The following representations hold:
Proof
Let \(x < y_i\). Then by the conditions (3.3), (3.1) and (3.2), we find that
By the strong Markov property, we obtain
Thus,
Since
we find, by another application of the strong Markov property, that
Since \(G(X_{\tau _{y_i}}) = G(y_i) = g_i(y_i)\), we finally obtain
The case \(x > y_s\) is proved similarly. \(\square \)
The next lemma provides necessary conditions for the optimality of the thresholds \(y_i\) and \(y_s\).
Lemma 3.2
Assume that condition (3.3) holds for all \(x \in \mathbb {R}_+\). Then
which can be rewritten as
Proof
Let \(x \in (y_i,y_s)\). Using Lemma 2.1 of [26], we find that
This can be rewritten as
The strong Markov property and [26, Lemma 2.1] yield
Consider first the expected value (3.6). On the event \(\{ U^i\wedge U^s > \tau _{(y_i,y_s)} = \tau _{y_s}\} \), we have, by the strong Markov property and Lemma 2.1 of [26], the following:
where \(\eta _{y_s}\) is the first return time to \(y_s\). Since \(h_r(X_{\tau _{y_s}}) = h_r(y_s) = G(y_s)\), we find that
For the equality (3.3) to hold, the equation
should hold; here, the last equation is obtained by breaking down the expected values (3.9) and (3.8) similarly to (3.4) and (3.5). This holds, if
The necessary condition
is obtained by analyzing the expected value (3.7) similarly.
To conclude the claimed integral representation, we find by applying the representation (2.1) to the function \(x \mapsto h_r(x) \mathbbm {1}_{\{ x \geqslant y_s\}}\), that
By treating the other expectations similarly, we obtain the integral representations. \(\square \)
We write the necessary conditions given by Lemma 3.2 in a more convenient form. First, write the harmonic function \(h_r\) as \(h_r(x) = C \psi _r(x) + D \varphi _r(x)\). Now, since \(h_r(y_i)=g_i(y_i)\) and \(h_r(y_s)=g_s(y_s)\), we find by solving a pair of linear equations, that
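The pair of boundary conditions determining C and D is a 2-by-2 linear system; a small sketch of its solution (our own illustration: X is taken to be a geometric Brownian motion with linear payoffs, and the thresholds \(y_i, y_s\) and all parameter values are hypothetical placeholders, not the optimal ones):

```python
import numpy as np

# Our illustration: the coefficients C and D of h_r = C*psi_r + D*phi_r solve
# the 2x2 linear system h_r(y_i) = g_i(y_i), h_r(y_s) = g_s(y_s). Here X is a
# geometric Brownian motion with linear payoffs, and the thresholds y_i, y_s
# are hypothetical placeholders (not the optimal ones).

mu, sigma, r = 0.02, 0.3, 0.05
c_i, c_s = 1.0, 2.0
y_i, y_s = 0.8, 4.0

a = 0.5 * sigma**2
beta_plus, beta_minus = sorted(np.roots([a, mu - a, -r]), reverse=True)
psi = lambda x: x**beta_plus
phi = lambda x: x**beta_minus
g_i = lambda x: x - c_i
g_s = lambda x: x - c_s

A = np.array([[psi(y_i), phi(y_i)],
              [psi(y_s), phi(y_s)]])
b = np.array([g_i(y_i), g_s(y_s)])
C, D = np.linalg.solve(A, b)

h = lambda x: C * psi(x) + D * phi(x)
assert np.isclose(h(y_i), g_i(y_i)) and np.isclose(h(y_s), g_s(y_s))
```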
By substituting (3.10) into the conditions of Lemma 3.2 and reorganizing the terms, we obtain
We can simplify the denominators of the coefficient terms. Indeed, since \((\mathcal {A}-(r+\lambda _i))\xi _{r}(x) = -\lambda _i \xi _{r}(x)\) for r-harmonic \(\xi _r\), we find using Lemma 2.2 that
Thus,
By treating the term \(\psi _r(y_s)\Phi _s(\varphi _r;y_s) - \varphi _r(y_s)\Psi _s(\psi _r;y_s)\) similarly, we can rewrite the necessary conditions (3.11) as
3.2 On Uniqueness of the Solution
The next proposition is our main result on the uniqueness of the solution to the pair of necessary conditions given in Lemma 3.2. To ease the presentation in what follows, we introduce the shorter notation
Proposition 3.3
Let Assumption 2.1 hold and assume that a solution \((y_i, y_s)\) to the pair of equations of Lemma 3.2 exists. Then the solution is unique.
Proof
Define a function \(K: (0, \tilde{x}_i] \rightarrow (0, \tilde{x}_i]\)
where \(\hat{\cdot }\) and \(\check{\cdot }\) denote restrictions to the domains \([\tilde{x}_s, \infty )\) and \((0, \tilde{x}_i]\), respectively. We notice that if a solution \((y_i,y_s)\) to the pair exists, then \(y_i\) must be a fixed point of K. Because the functions \(H_i\) and \(H_s\) are monotonic in their domains, we get
and hence K is increasing in its domain \((0,\tilde{x}_i]\). Now using the fixed point property we have
This means that whenever K intersects the diagonal of \(\mathbb {R}_+\), the intersection is from above. Hence, the uniqueness follows from continuity. \(\square \)
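The argument can be illustrated numerically with a toy map (our own illustration; the map K below is not the paper's K): an increasing self-map of an interval that crosses the diagonal from above has a unique fixed point, which simple iteration locates.

```python
import numpy as np

# Our illustration with a toy map (not the paper's K): an increasing self-map
# of an interval that crosses the diagonal from above has a unique fixed
# point, and simple iteration x_{n+1} = K(x_n) converges to it.

def K(x):
    # increasing on (0, 1]; K(x) > x near 0 and K(x) < x near 1
    return 0.8 * np.sqrt(x) + 0.05

x = 0.1
for _ in range(200):
    x = K(x)

assert abs(K(x) - x) < 1e-10                 # numerically a fixed point
assert K(0.9 * x) > 0.9 * x                  # K above the diagonal below it
assert K(1.1 * x) < 1.1 * x                  # K below the diagonal above it
```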
3.3 On Existence of the Solution
We proceed by analysing the solvability of the pair (3.12). By point (3) of Assumption 2.1 and Lemma 2.5, we find that
We find similarly that
Next, we study the limiting properties of the functions appearing in (3.12). Regarding the function \(H_i(\varphi _r,g_i;\cdot )\), by adding and subtracting the term \((\Psi _i(\mathcal {A}-r)g_i)\) and using Lemma 2.2, we obtain
By a similar computation, we find also that
By substituting these expressions into \(H_i(\varphi _r,g_i;\cdot )\), simplifying, and using Lemma 2.2 again, we observe that
Assume that \(x < z_i\). Then the intermediate value theorem yields
where \(\xi _x \in (0,x)\). By continuity, we find by passing to the limit \(x \rightarrow 0+\) that
By a similar analysis, we find that the limit
Consider next the function \(H_i(g_i,\psi _r;x)\). Since
we find using Eq. (3.16) and Lemma 2.2 that
Assume that \(x < z_i\). Then the intermediate value theorem yields
Thus by continuity \(H_i(g_i,\psi _r;0+) = 0\). A similar analysis yields the limit \(H_s(g_s,\varphi _r;\infty )=0\).
Finally, by using Remark 2.3 and the facts that \(\varphi _r\) is r-harmonic and \(z_i<\tilde{x}_i\), we find using Lemma 2.2 that
where \(q_i = r+ \lambda _i\). By a similar analysis, we find that \(H_s(\psi _r,g_s;\tilde{x}_s)>0\). We summarize these findings:
Unfortunately, Assumption 2.1 is not enough to guarantee the existence of a solution to the pair of equations in Lemma 3.2, and more analysis is needed. The next proposition is our main result on the solvability of the necessary conditions.
Proposition 3.4
Under Assumptions 2.1 and 2.4, the pair of necessary conditions given in Lemma 3.2 has a unique solution.
Proof
Define the function \(K: (0, \tilde{x}_i] \rightarrow (0, \tilde{x}_i]\) as in (3.13). We first observe that the proven limiting properties (3.17) and monotonicity properties (3.14) together with the conditions
guarantee that the function K is well-defined. Using the representation (3.16), we see that
After handling the other inequality similarly, we see that the condition (3.18) is equivalent to
Assumption 2.1 guarantees that \(\tilde{x}_s < z_s\), \(\tilde{x}_i > z_i\) and \(z_i < z_s\). Thus, (1) implies that \(z_i< \tilde{x}_{i}< \tilde{x}_s < z_s\) and, consequently, by our assumptions
The other inequality in (3.19) is proved similarly.
It follows from the above calculations that the function K is well-defined, and from the proof of Proposition 3.3 that it is increasing. Furthermore, K maps the interval \((0,\tilde{x}_i]\) into an open subset of itself. Thus, K must have a fixed point, which we denote by \(y_i\). Then the pair \((y_i, y_s)\), where \(y_s = H_{s, \psi }^{-1}(H_{i,\psi }(y_i))\), is a solution to the equations given in Lemma 3.2. The uniqueness follows from Proposition 3.3. \(\square \)
Assumptions (1) and (3) in Proposition 3.4 are satisfied in most situations and are easily verified. However, assumption (2) requires more analysis in most cases. Fortunately, it is easy to check, at least numerically, in applications, because the states \(\tilde{x}_i, \tilde{x}_s\) are known to be the unique zeroes of the functions \((\Psi _i (\mathcal {A}-r)g_i)(x)\) and \((\Phi _s (\mathcal {A}-r)g_s)(x)\).
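As an illustration of such a numerical check (our own sketch with hypothetical parameter values: X a geometric Brownian motion, payoff \(g_s(x)=x-c_s\), and \((\Phi _s f)(x)\) taken, as an assumed convention and up to an irrelevant positive constant, to be \(\int _x^\infty \varphi _{r+\lambda _s}(y)f(y)m'(y)dy\), which does not affect the location of the zero):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Our illustration with hypothetical parameters: X is a geometric Brownian
# motion and g_s(x) = x - c_s, so (A - r)g_s(y) = (mu - r)y + r*c_s. We take
# (Phi_s f)(x) = int_x^inf phi_{r+lambda_s}(y) f(y) m'(y) dy (an assumed
# convention; positive constants do not move the zero) and locate its unique
# root x_tilde_s, which lies below z_s as in Remark 2.3.

mu, sigma, r, lam_s, c_s = 0.02, 0.3, 0.05, 1.0, 2.0
q = r + lam_s
a = 0.5 * sigma**2
beta_plus, beta_minus = sorted(np.roots([a, mu - a, -q]), reverse=True)

def integrand(y):
    f = (mu - r) * y + r * c_s                        # (A - r)g_s(y)
    m_dens = 2.0 / (sigma**2 * y**2) * y**(2.0 * mu / sigma**2)
    return y**beta_minus * f * m_dens                 # phi_{r+lam_s}(y)f(y)m'(y)

def Phi_s(x):
    return quad(integrand, x, np.inf)[0]

z_s = r * c_s / (r - mu)                              # zero of (A - r)g_s
x_tilde_s = brentq(Phi_s, 0.5, z_s)                   # Phi_s > 0 at 0.5, < 0 at z_s

assert 0 < x_tilde_s < z_s
# this special case even has a closed-form root:
assert np.isclose(x_tilde_s,
                  r * c_s * (beta_plus - 1.0) / ((r - mu) * beta_plus),
                  rtol=1e-3)
```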
3.4 On Non-differentiable Payoffs
Although our analysis does not cover non-differentiable payoff functions, its conclusions can be extended fairly easily to some important cases. As an example, assume that the payoff functions are \(g_i(x) = (x-c_i)^+\) and \(g_s(x) = (x-c_s)^+\), where \(c_i < c_s\) and let X be a diffusion satisfying the basic assumptions of Sect. 2. This payoff structure can be viewed as a callable option, see, e.g., [19]. Recall the optimality conditions (3.12):
We observe that the left hand side of both of these equations is zero on \((0,c_i)\).
Assume first that the functions on the right hand side of the conditions (3.12) have a common zero \(y_0\). Then the following must hold
First, we observe that if \(g_s(y_0)=0\), then
which clearly cannot hold. Assume now that \(g_s(y_0)>0\). Then, by dividing the conditions (3.20) by one another, some further manipulation yields
here, we have used the fact that the function \(\frac{\varphi _r}{\psi _r}\) is decreasing. Since all functions in these expressions are positive, we conclude that the ratio \(\frac{\varphi _r}{\psi _r}\) must in fact be constant over the interval \((y_0,\infty )\), something which is clearly not true. Thus, we can safely consider the case that the functions on the right hand side of the conditions (3.12) do not have a common zero. Then neither \(y_i\) nor \(y_s\) can be in the interval \((0,c_i)\). Thus we can restrict the analysis of the optimality conditions to the interval \((c_i,\infty )\). It is straightforward to see that in this case the functions \(H_i\) behave as in our main result. The functions \(H_s\) also behave similarly but the turning point is at \(c_s\). Thus, our main result can be applied to solve the problem after locating the points \(\tilde{x}_i\) and \(c_s\).
4 The Solution: Sufficient Conditions
The purpose of this section is to prove the following theorem, which is our main result on the solution of the considered stopping game.
Theorem 4.1
Let Assumption 2.1 hold and assume that the thresholds \(y_i\) and \(y_s\) are the unique solution to
where
Then the value function (2.2) reads as
Moreover, the game has a Nash equilibrium constituted by the stopping rules
To prove this result, we first introduce some notation. Define the filtrations \(\left( \mathcal {G}_{n^i} \right) _{n^i \geqslant 0}\) and \(\left( \mathcal {G}_{n^s} \right) _{n^s \geqslant 0}\) as \(\mathcal {G}_{n^i} = \mathcal {F}_{T_{n^i}}\) and \(\mathcal {G}_{n^s} = \mathcal {F}_{T_{n^s}}\), respectively. Moreover, define the sets of admissible stopping times with respect to the \(\mathcal {G}\)-filtrations:
Then the function V defined in (2.2) can be written as
We point out that the \(\mathcal {G}\)-filtrations were defined only for the case where immediate stopping is not allowed. This is because we carry out the verification only for the main problem and not for the auxiliary problems. However, similar techniques could be employed to verify also the functions \(G_0^i\) and \(G_0^s\) defined via (3.1) and (3.2), where the function G is given by the expression for V in the claim of the main theorem. We omit the details.
The proof of the main theorem requires uniform integrability. This is provided by the following lemma.
Lemma 4.2
For any fixed stopping rule \(T_{N^i}\), the process
is a uniformly integrable supermartingale with respect to \((\mathcal {G}_{n^s})_{n^s\geqslant 0}\).
For any fixed stopping rule \(T_{N^s}\), the process
is a uniformly integrable submartingale with respect to \(\left( \mathcal {G}_{n^i} \right) _{n^i \geqslant 0}\).
Proof
We prove the claim for \(S^s\); the process \(S^i\) is treated similarly. Since \(G_0^s \geqslant G\), the strong Markov property yields
To prove uniform integrability, we show that
for all stopping rules \(T_{N^i}\); these conditions are necessary and sufficient for uniform integrability. Fix \(T_{N^i}\) and \(n^s\). Define the measure
Let \(A\in \mathcal {F}\). Since \(\frac{G_0^s(x)}{\psi _r(x)} \leqslant \frac{g_s(\hat{x}^s)}{\psi _r(\hat{x}^s)}\) for all x, we find that
The property (4.2) follows from (4.4) by setting \(A=\Omega \). We observe that \(\mathbb {P}_x(A) \rightarrow 0\) whenever \(\mathbb {P}^*_x(A) \rightarrow 0\). Thus the property (4.3) follows from (4.4). \(\square \)
Proof of Theorem 4.1
The task is to show that \(V=G\); the claimed Nash equilibrium then follows from the construction of G. To this end, recall the definition of the value function V from (2.2). Obviously, \(\underline{V}(x) \leqslant \overline{V}(x)\) for all x. To prove that \(V=G\), it is sufficient to show that \(\overline{V}(x) \leqslant G(x) \leqslant \underline{V}(x)\) for all x; we prove the first of these inequalities, the second is proved similarly. Since \(g_s \leqslant G_0^s\), we find using Lemma 4.2 and optional sampling that
for arbitrary stopping rules \(T_{N^i}\) and \(T_{N^s}\); here, \(U^s\) is an independent \({\text {Exp}}(\lambda _s)\)-distributed \((\mathcal {G}_{n^s})\)-stopping time and the last equality follows from the independence of \(U^s\) and the stopping times \(T_{N^i}\). The right-hand side is independent of \(T_{N^s}\), thus we obtain
and, consequently,
For the inf-player, consider the stopping rule \(\tilde{N}^i=\) "Stop at the next inf-player's Poisson arrival if the state of X at that time is below the threshold \(y_i\); otherwise, wait". Then
Since \(g_i(X_{T_k}) = G_0^i(X_{T_k})\) conditional to \(\{ \tilde{N}^i = k \}\), we find that
The strong Markov property yields
Finally, since, conditional to \(\{ \tilde{N}^i = k \}\),
on the event \(\{ T_{n-1}<U^s < T_{n} \}\), \(n = 1, \dots , k-1\), and
on the event \(\{ U^s > T_{k-1} \}\), the expression (4.5) can be written as \(\overline{V}(x) \leqslant G(x)\). Thus \(V(x)=G(x)\) and the proof is complete. \(\square \)
5 Properties of the Solution
5.1 Asymptotics of \(\lambda _i\) and \(\lambda _s\)
A similar stopping game, where both players are allowed to stop without any restrictions, is studied in [2]. In that case, we know that under Assumption 2.1, complemented by the assumption \(g_i\geqslant g_s\), the optimal stopping thresholds \((x_i, x_s)\) are the unique solution to the pair of equations
It seems reasonable that this solution should coincide with ours when both of the information rates \(\lambda _i\) and \(\lambda _s\) tend to infinity, as in that case stopping opportunities appear more frequently for both players. We show, after some auxiliary calculations, that this is indeed the case.
The pair of equations can be represented as
Furthermore, for all \(s > 0\), we have
where \(\tau _z = \inf \{ t \geqslant 0 \mid X_t = z \}\). Therefore, we find that for \(z< x < y\)
and, consequently by monotone convergence, that
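A standard hitting-time identity underlying these calculations is \(\mathbb {E}_x[e^{-s\tau _z}] = \frac{\varphi _s(x)}{\varphi _s(z)}\) for \(z < x\). It can be checked by simulation (our own illustration with hypothetical parameter values: for a geometric Brownian motion with \(\mu - \sigma ^2/2 < 0\), the first passage time of \(\log X\) down to \(\log z\) is inverse-Gaussian distributed and can be sampled directly):

```python
import numpy as np

# Our illustration: for geometric Brownian motion, log X is a Brownian motion
# with drift nu = mu - sigma^2/2; choosing nu < 0, the first passage time of
# X from x down to z < x is inverse-Gaussian distributed and can be sampled
# directly, so the identity E_x[exp(-s*tau_z)] = phi_s(x)/phi_s(z) can be
# checked by Monte Carlo. All parameter values are hypothetical.

mu, sigma, s = 0.01, 0.3, 0.05
x, z = 2.0, 1.0
nu = mu - 0.5 * sigma**2            # drift of log X; negative, i.e. towards z
b = np.log(x / z)                   # distance from log x down to log z

# phi_s(x)/phi_s(z) = (x/z)^{beta_-}, beta_- the negative root of
# (sigma^2/2)*u*(u-1) + mu*u - s = 0
a = 0.5 * sigma**2
beta_minus = min(np.roots([a, mu - a, -s]))
exact = (x / z)**beta_minus

# tau_z ~ InverseGaussian(mean = b/|nu|, shape = b^2/sigma^2)
rng = np.random.default_rng(0)
tau = rng.wald(b / abs(nu), b**2 / sigma**2, size=400_000)
mc = np.exp(-s * tau).mean()
assert abs(mc - exact) < 0.01
```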
We can now prove the following convergence result.
Proposition 5.1
Let \(K_{\lambda }(x)\) be as in (3.13) and define a function \(k: (0, z_i] \rightarrow (0, z_i]\) as [see (5.1)]
Then the unique fixed point \(y_i\) of \(K_{\lambda }\) converges to the unique fixed point \(x_i\) of k as \(\lambda \) tends to infinity.
Proof
From the representation of the pair of Eqs. (5.2), (5.4) and monotonicity we see that
The claim follows now by noticing that \(k(x)-x\) attains both negative and positive values (this follows essentially from point (1) in Assumption 2.4, see [2]). \(\square \)
In the case that \(\lambda _i \rightarrow 0\), i.e. in the absence of competition, the threshold \(y_s^*\) converges to the threshold of a stopping problem presented in [26]. Further, if we then let \(\lambda _s \rightarrow \infty \), the threshold coincides with the threshold in [1]; see [26, Proposition 2.6]. These asymptotic results are collected in Table .
5.2 Consequences of the Asymmetry
An interesting feature of the asymmetry is that when one of the rates, for example \(\lambda _s\), stays fixed and we increase \(\lambda _i\), both of the thresholds decrease. To see this, note that \(y_i \in (0, \tilde{x}_i)\) and also that, by (5.3), for \(z< x < y\) we have
Hence, assuming \(\lambda _1 < \lambda _2\) we have that \(H_2(g_i, \psi _r ;y_i) \leqslant H_1(g_i, \psi _r ;y_i)\) and \(H_2(\varphi _r, g_i ;y_i) \leqslant H_1(\varphi _r, g_i ;y_i)\).
We recall the definition of K and write its dependence on \(\lambda _i\) explicitly:
Taking the derivative with respect to \(\lambda _i\) yields
Hence, K is decreasing in \(\lambda _i\). Next, suppose that \(y_1\) is a fixed point of K when \(\lambda _i = \lambda _1\) and assume that \(\lambda _1 < \lambda _2\). Then
and consequently \(y_1 > y_2\), as \(K'(y_1,\lambda _1) < 1\) by the proof of Proposition 3.3. Similarly, we can show that \(y_s\) decreases as a function of \(\lambda _i\).
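The comparison argument above can be illustrated with a small numerical sketch. The map below is a hypothetical toy contraction, not the operator \(K\) of (3.13); it merely shares the relevant qualitative features, namely that it is a contraction that decreases pointwise in the intensity parameter, so its fixed point decreases in that parameter as well.

```python
# Toy stand-in for K(., lam): a contraction in y that decreases
# pointwise in lam.  NOT the paper's operator from (3.13).
def K(y, lam):
    return 0.5 * y + 1.0 / (1.0 + lam)

def fixed_point(lam, y0=0.0, tol=1e-12, max_iter=200):
    """Locate the fixed point of K(., lam) by simple iteration,
    which converges here since K is a contraction with modulus 1/2."""
    y = y0
    for _ in range(max_iter):
        y_next = K(y, lam)
        if abs(y_next - y) < tol:
            return y_next
        y = y_next
    return y

y1 = fixed_point(lam=1.0)  # exact fixed point: 2/(1+1) = 1.0
y2 = fixed_point(lam=3.0)  # exact fixed point: 2/(1+3) = 0.5
assert y1 > y2             # larger intensity => smaller fixed point
```

Since \(K(\cdot, \lambda_2) \leqslant K(\cdot, \lambda_1)\) pointwise for \(\lambda_1 < \lambda_2\), the fixed point can only move downward, mirroring the monotonicity of the thresholds in the game.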
This observation has an intuitive explanation. If the information rate \(\lambda _i\) of the inf-player increases, he should wait longer (in the sense that \(y_i\) decreases): he gets more frequent opportunities to stop and is therefore not affected as much by the uncertainty of the underlying. On the other hand, as the rate of the inf-player increases, the sup-player wants to stop sooner (in the sense that \(y_s\) decreases). This is because the inf-player is now less likely to miss good opportunities to stop, and the sup-player has to react accordingly.
6 Illustrations
6.1 Geometric Brownian Motion with Smooth Payoff
In this illustration, we compare our general findings with those of the usual stopping game where the players are allowed to stop without restrictions. Thus, we follow [2] and consider a stopping game given by
where \(c_s> c_i > 0\) are constants measuring the sunk costs, \(f(x) = x^\theta \) is a profit flow with \(0<\theta <1\), and the underlying diffusion \(X_t\) is a geometric Brownian motion.
We remark as in [2] that in this case the buyer always gets the expected cumulative present value \((R_r f )(x)\), and hence the only factor that depends on the timing of the decision is the cost which the buyer pays (and the seller receives) at exercise. Thus, the game can be seen as the valuation of an investment which guarantees the buyer a permanent flow of revenues from the exercise date up to an arbitrarily distant future at a cost which is endogenously determined from the game.
Remark 6.1
We point out that by a similar analysis, we could study the linear payoff structure \(g_i(x) = x-c_i\) and \(g_s(x) = x-c_s \), where \(0< c_i < c_s\). However, we want to compare our results to those of [2] and, therefore, present the case of cumulative payoffs.
In this framework, the infinitesimal generator of the diffusion \(X_t\) reads as
where \(\mu \in \mathbb {R}_+\) and \(\sigma \in \mathbb {R}_+\) are given constants. We readily verify that Assumption 2.1 is satisfied. Furthermore, the scale density and the density of the speed measure read as
Denote
Then the minimal r-excessive functions for X read as
It is worth emphasising that \(\beta _{\lambda }> 1> \theta> 0 > \alpha _{\lambda }\), so that the conclusion of Remark 2.3 holds.
The resolvent can be shown to be
Noting that \((\beta _0-\theta )(\theta -\alpha _0) = 2(r-\theta \mu - \frac{1}{2}\sigma ^2 \theta (\theta -1))/\sigma ^2\), we find the alternative representation
For notational convenience, we do the calculations in the symmetric case \(\lambda =\lambda _i=\lambda _s\). To write down the pair of equations (4.1), we first calculate the auxiliary functionals
Moreover,
Noting that \(\alpha _{\lambda } -\beta _0 = \alpha _0-\beta _{\lambda }\) and using the above expressions, the pair of equations (4.1) reads as
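The identity \(\alpha_{\lambda} - \beta_0 = \alpha_0 - \beta_{\lambda}\) holds because the sum of the roots of the characteristic quadratic, \(\alpha_{\lambda} + \beta_{\lambda} = 1 - 2\mu/\sigma^2\), does not depend on \(\lambda\). A quick numerical check (the value of \(\lambda\) below is arbitrary) is:

```python
import math

# Roots of (sigma^2/2)*b*(b-1) + mu*b - (r+lam) = 0; their sum is
# 1 - 2*mu/sigma^2, independent of lam, which gives the identity
#   alpha_lam - beta_0 = alpha_0 - beta_lam.
def exponents(mu, sigma, r, lam=0.0):
    a = 0.5 * sigma**2
    b = mu - 0.5 * sigma**2
    disc = math.sqrt(b * b + 4 * a * (r + lam))
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)  # (beta, alpha)

mu, sigma, r, lam = 0.5, 1.0, 4.5, 1.7  # lam chosen arbitrarily
beta0, alpha0 = exponents(mu, sigma, r)
betaL, alphaL = exponents(mu, sigma, r, lam)

assert abs((alphaL - beta0) - (alpha0 - betaL)) < 1e-12
assert abs((alphaL + betaL) - (1 - 2 * mu / sigma**2)) < 1e-12
```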
Unfortunately, it seems to be impossible to solve the pair explicitly and thus we illustrate the results numerically.
Next, we analyse the conditions of Assumption 2.4. Item (1) clearly holds, and regarding item (2) we find that
Hence,
and consequently,
This demonstrates the interplay between the payoffs and the information rates and also highlights that our assumptions on the payoff functions are not enough to guarantee the ordering \(\tilde{x}_i < \tilde{x}_s\) automatically.
To illustrate the results, we choose the parameters \(\mu = 1/2\), \(\sigma = 1\), \(r = 9/2\), \(c_i = 1\), \(c_s = 4/3\). Then, in the symmetric case, we find, as expected, that the optimal thresholds converge to those of the unconstrained case [2] as \(\lambda \rightarrow \infty \); see Fig. .
In the non-symmetric case, we find that if the information rate \(\lambda _s\) of the sup-player is fixed, both thresholds decrease as functions of \(\lambda _i\); see Fig. . Interestingly, at least in our numerical examples, increasing volatility does not necessarily expand the continuation region (by increasing \(y_s\) and decreasing \(y_i\)). This is contrary to the findings in [2] for the standard stopping game.
Finally, the value function of the game is shown in Fig. .
6.2 Mean Reverting Dynamics
To expand further on the first example, we consider different diffusion dynamics under a similar payoff structure. We assume that the diffusion X has the infinitesimal generator
where \(\mu > 0\) is a constant, \(\beta > 0\) is the degree of mean-reversion and \(\sigma > 0\) is the volatility coefficient. This process is often called the Verhulst-Pearl diffusion. Because the payoffs are chosen, as in the first example, to be \(g_j(x)=(R_r f)(x)-c_j\), where \(j=i,s\), \(c_i < c_s\), and \(f(x)=x\), Assumption 2.1 is again satisfied. The scale density and the density of the speed measure read as
and the minimal r-excessive functions are (see [6], p. 202)
where U is a confluent hypergeometric function, L is the generalized Laguerre polynomial \(L(a,b,z)=L_a^b(z)\) and
Due to the complicated nature of the minimal r-excessive functions in this example, the functionals \(\Psi h\) and \(\Phi h\), where h is \(\psi _r, \varphi _r\) or \((\mathcal {A}-r)g_j\) (\(j=i,s\)), cannot be calculated explicitly. Consequently, the pair of equations (4.1) for the optimal thresholds cannot be simplified from their original integral forms in any helpful way, and are thus left unstated.
Regarding the conditions of Assumption 2.4, the first one is again satisfied. Unfortunately, again due to the complicated forms of \(\psi _r\) and \(\varphi _r\), the second one has to be verified numerically in each case. The results are illustrated numerically in Table with the parameters \(\sigma = 0.3\), \(r=0.08\), \(\gamma = 0.05\), \(c_i = 10.0\), \(c_s = 20.0\) and \(\lambda = \lambda _i = \lambda _s\). These values suggest that the optimal stopping thresholds converge to the thresholds of the unconstrained case as the intensity \(\lambda \) increases. This is in line with our general result.
References
Alvarez, L.H.R.: Reward functionals, salvage values and optimal stopping. Math. Methods Oper. Res. 54(2), 315–337 (2001)
Alvarez, L.H.R.: A class of solvable stopping games. Appl. Math. Optim. 58, 291–314 (2008)
Alvarez, L.H.R.: Minimum guaranteed payments and costly cancellation rights: a stopping game perspective. Math. Finance 20(4), 733–751 (2010)
Borodin, A., Salminen, P.: Handbook of Brownian Motion—Facts and Formulae. Birkhäuser, Basel (2015)
Dupuis, P., Wang, H.: Optimal stopping with random intervention times. Adv. Appl. Probab. 34, 141–157 (2002)
Dayanik, S., Karatzas, I.: On the optimal stopping problem for one-dimensional diffusions. Stoch. Process. Appl. 107, 173–212 (2003)
Dynkin, E.: Game variant of a problem of optimal stopping. Soviet Math. Dokl. 10, 270–274 (1969)
Ekström, E.: Properties of game options. Math. Methods Oper. Res. 63, 221–238 (2006)
Ekström, E., Villeneuve, S.: On the value of optimal stopping games. Ann. Appl. Probab. 16, 1576–1596 (2006)
Ekström, E., Peskir, G.: Optimal stopping games for Markov processes. SIAM J. Control Optim. 47, 684–702 (2008)
Emmerling, T.J.: Perpetual cancellable American call options. Math. Finance 22(4), 645–666 (2012)
Guo, X., Liu, J.: Stopping at the maximum of geometric Brownian motion when signals are received. J. Appl. Probab. 42, 826–838 (2005)
Hobson, D.G., Zeng, M.: Constrained optimal stopping, liquidity and effort. Stoch. Process. Their Appl. (to appear) (2019). https://doi.org/10.1016/j.spa.2019.10.010
Hobson, D.G.: The shape of the value function under Poisson optimal stopping. Stoch. Process. Their Appl. 133, 229–246 (2021)
Hobson, D.G., Liang, G., Sun, H.: Callable convertible bonds under liquidity constraints. (2021). https://doi.org/10.48550/arXiv.2111.02554
Kallsen, J., Kühn, C.: Convertible bonds: financial derivatives of game type. In: Exotic Option Pricing and Advanced Lévy Models, pp. 277–291. Wiley, Hoboken (2005)
Kifer, Y.: Optimal stopping in games with continuous time. Theory Probab. Appl. 16, 545–550 (1971)
Kifer, Y.: Dynkin’s games and Israeli options. International Scholarly Research Notices. (2013)
Kühn, C., Kyprianou, A.: Callable puts as composite exotic options. Math. Finance 17(4), 487–502 (2007)
Kühn, C., van Schaik, K.: Perpetual convertible bonds with credit risk. Stochastics 80(6), 585–610 (2008)
Kyprianou, A.: Some calculations for Israeli options. Finance Stoch. 8, 73–86 (2004)
Lange, R.J., Ralph, D., Støre, K.: Real-option valuation in multiple dimensions using Poisson optional stopping times. J. Financ. Quant. Anal. 55(2), 653–677 (2020)
Liang, G., Wei, W.: Optimal switching at Poisson random intervention times. Discrete Contin. Dyn. Syst. Ser. B 21(5), 1483–1505 (2016)
Liang, G., Sun, H.: Dynkin games with Poisson random intervention times. SIAM J. Control Optim. 57(4), 2962–2991 (2019)
Liang, G., Sun, H.: Risk-sensitive Dynkin games with heterogeneous Poisson random intervention times. (2020). arXiv:2008.01787
Lempa, J.: Optimal stopping with information constraint. Appl. Math. Optim. 66, 147–173 (2012)
Lempa, J., Matomäki, P.: A Dynkin game with asymmetric information. Stochastics 85, 763–788 (2013)
Menaldi, J.L., Robin, M.: On some optimal stopping problems with constraint. SIAM J. Control Optim. 54(5), 2650–2671 (2016)
Neveu, J.: Discrete-parameter martingales. North-Holland, Amsterdam (1975)
Pérez, J.L., Rodosthenous, N., Yamazaki, K.: Non-zero-sum optimal stopping game with continuous versus periodic observations. (2021). https://doi.org/10.48550/arXiv.2107.08243
Rogers, L.C.G., Zane, O.: A simple model of liquidity effects. In: Advances in finance and stochastics: essays in honour of dieter Sondermann. Springer, Berlin (1998)
Sîrbu, M., Shreve, S.E.: A two-person game for pricing convertible bonds. SIAM J. Control Optim. 45(4), 1508–1539 (2006)
Yasuda, M.: On a randomized strategy in Neveu’s stopping problem. Stoch. Process. Their Appl. 21, 159–166 (1985)
Acknowledgements
Anonymous referees are acknowledged for helpful comments on an earlier version of the paper. Emmy.network is acknowledged for continued support.
Funding
Open Access funding provided by University of Turku (UTU) including Turku University Central Hospital. The authors have not disclosed any funding.
Ethics declarations
Conflict of interest
The authors have not disclosed any competing interests.
Cite this article
Lempa, J., Saarinen, H. A Zero-Sum Poisson Stopping Game with Asymmetric Signal Rates. Appl Math Optim 87, 35 (2023). https://doi.org/10.1007/s00245-022-09945-1