1 Introduction

This paper is concerned with asymptotic relations between certain discounted and ergodic control problems for one-dimensional diffusions. More precisely, the following control problems are considered:

  (A) Classical singular stochastic control problems with both discounted and ergodic criteria;

  (B) Constrained bounded variation control problems, where control is allowed only at the arrival times of an independent Poisson process, with both discounted and ergodic criteria.

These control problems are expected to be linked to each other via certain limiting properties. For instance, in item (A) the values of the problems with discounted criterion are expected to be connected to the ergodic problems in an Abelian sense as the discounting factor vanishes. This relationship, often called the vanishing discount method and sometimes used in a heuristic manner, can be used to solve the ergodic problems [10, 19, 20].

Regarding item (B), problems of this form have attracted attention in recent years [14, 15, 17, 18, 23]. For related studies in optimal stopping, see [9, 11, 13]. In these problems, it is reasonable to expect that the value functions of the constrained problems converge to the values of their singular counterparts as the Poisson arrival rate of the control opportunities tends to infinity.

The main contribution of this paper is that we prove these expectations to be correct for time-homogeneous control problems with one-dimensional diffusion dynamics; our findings are summarized in Fig. 1. The motivation for this framework is twofold. These diffusion models are important in many applications, and the time-homogeneous structure allows explicit calculations, by which we can first solve the HJB equations of both the discounted and ergodic problems separately and then establish that the solutions satisfy the desired limiting properties. This is in contrast to the vanishing discount method, where the HJB equation of the ergodic problem is solved using the solution of the discounted problem [20].

The remainder of the paper is organized as follows. In Sect. 2, we set up the diffusion dynamics. In Sect. 3, we introduce the control problems and study the functionals appearing in their analysis. The asymptotic relations are proved in Sect. 4. The paper is concluded with an explicit example in Sect. 5.

2 Underlying Dynamics

Let \((\Omega , \mathcal{F}, \{\mathcal{F}_{t}\}_{t \geq 0}, \mathbb{P})\) be a filtered probability space which satisfies the usual conditions. We consider an uncontrolled real-valued process \(X\) defined on \((\Omega , \mathcal{F}, \{\mathcal{F}_{t}\}_{t \geq 0}, \mathbb{P})\) which is modelled as a strong solution to the Itô stochastic differential equation

$$ dX_{t}=\mu (X_{t})dt+ \sigma (X_{t})dW_{t}, \qquad X_{0} = x, $$
(1)

where \(W_{t}\) is a Wiener process and the functions \(\mu \) and \(\sigma \) are well-behaved (see Chap. 5 of [12]). For notational convenience, we consider the case where the process evolves in \(\mathbb{R}_{+}\), though all the results remain unchanged if the state space is replaced with any interval in ℝ.
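For readers who wish to experiment with such dynamics, the SDE (1) can be discretized with a simple Euler–Maruyama scheme. The sketch below uses illustrative geometric-Brownian-motion coefficients \(\mu(x) = 0.5x\) and \(\sigma(x) = 0.2x\); the function names and parameters are our own and are not taken from the paper.

```python
import math
import random

# Minimal Euler-Maruyama sketch of the dynamics (1). The coefficients
# mu(x) = 0.5*x and sigma(x) = 0.2*x are illustrative choices (a geometric
# Brownian motion), not tied to any example in the paper.
def euler_maruyama(mu, sigma, x0, T=1.0, n=1000, seed=0):
    rng = random.Random(seed)
    dt = T / n
    x = [x0]
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(dt))  # Brownian increment over [t, t+dt]
        x.append(x[-1] + mu(x[-1]) * dt + sigma(x[-1]) * dw)
    return x

path = euler_maruyama(lambda x: 0.5 * x, lambda x: 0.2 * x, x0=1.0)
```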

We define the second-order linear differential operator \(\mathcal{A}\), which represents the infinitesimal generator of the diffusion \(X\), as

$$ \mathcal{A} = \mu (x) \frac{d}{dx} + \frac{1}{2} \sigma ^{2}(x) \frac{d^{2}}{dx^{2}} $$
(2)

and for a given \(r > 0\), we respectively denote the increasing and decreasing solutions to the differential equation \((\mathcal{A}-r)f=0\) by \(\psi _{r} > 0\) and \(\varphi _{r} > 0\). These solutions are often called the fundamental solutions and can be identified as the minimal \(r\)-excessive functions of the diffusion \(X\) (see p.19 of [6]).
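As a concrete illustration (not part of the general setup), for a standard Brownian motion (\(\mu = 0\), \(\sigma = 1\)) the fundamental solutions are \(\psi_r(x) = e^{x\sqrt{2r}}\) and \(\varphi_r(x) = e^{-x\sqrt{2r}}\). The following sketch checks \((\mathcal{A}-r)f = 0\) numerically for both, using a central finite difference for \(f''\):

```python
import math

# For a standard Brownian motion the generator is A f = (1/2) f''. We verify
# that psi_r and phi_r solve (A - r)f = 0 up to finite-difference error.
def second_derivative(f, x, h=1e-4):
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

r = 0.05
psi = lambda x: math.exp(x * math.sqrt(2.0 * r))   # increasing solution
phi = lambda x: math.exp(-x * math.sqrt(2.0 * r))  # decreasing solution

for f in (psi, phi):
    residual = 0.5 * second_derivative(f, 1.0) - r * f(1.0)
    assert abs(residual) < 1e-5
```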

We define a set \(\mathcal{L}_{1}^{r}\) of functions \(f\) that satisfy the integrability condition \(\mathbb{E}_{x} [ \int _{0}^{\infty}e^{-r s} \left | f(X_{s}) \right | ds ] < \infty \). Using this notation, we define the inverse of the differential operator \((r-\mathcal{A})\), called the resolvent \(R_{r}\), by

$$ (R_{r}f)(x)=\mathbb{E}_{x} \bigg[ \int _{0}^{\infty}e^{-r s} f(X_{s}) ds \bigg] $$

for all \(x \in \mathbb{R}_{+}\) and \(f \in \mathcal{L}_{1}^{r}\). We also define the scale density of the diffusion by

$$ S'(x) = \exp \bigg( - \int _{0}^{x} \frac{2 \mu (z)}{\sigma ^{2}(z)}dz \bigg), $$

which is the derivative of the monotonic (and non-constant) solution to the differential equation \(\mathcal{A} f=0\).

Often in computations it is useful to use the formula

$$ \begin{aligned} (R_{r} f)(x) = & B_{r}^{-1} \varphi _{r}(x)\int _{0}^{x} \psi _{r}(y)f(y)m'(y)dy \\ + & B_{r}^{-1} \psi _{r}(x)\int _{x}^{\infty} \varphi _{r}(y) f(y) m'(y) dy, \end{aligned} $$
(3)

which connects the resolvent and the fundamental solutions \(\psi _{r}\) and \(\varphi _{r}\) (see p.19 of [6]). Here the constant

$$ B_{r} = \frac{\psi _{r}'(x)}{S'(x)} \varphi _{r}(x) - \frac{\varphi _{r}'(x)}{S'(x)} \psi _{r}(x), $$

which is positive and independent of \(x\), is the Wronskian of the fundamental solutions and

$$ m'(x) = \frac{2}{\sigma ^{2}(x)S'(x)} $$

denotes the density of the speed measure. We also recall the resolvent equation (see p.4 of [6])

$$ R_{q} R_{r}= \frac{R_{r}-R_{q}}{q-r}. $$
(4)
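A quick sanity check of (4): applied to the constant function \(f \equiv 1\), for which \((R_r 1)(x) = 1/r\) regardless of the diffusion, both sides of the resolvent equation can be evaluated by hand. The values of \(q\) and \(r\) below are arbitrary:

```python
# Resolvent equation (4) on f = 1: R_q R_r 1 = R_q (1/r) = 1/(q*r), and
# (R_r 1 - R_q 1)/(q - r) = (1/r - 1/q)/(q - r) = 1/(q*r).
q, r = 0.3, 0.1
lhs = (1.0 / q) * (1.0 / r)          # R_q R_r applied to 1
rhs = (1.0 / r - 1.0 / q) / (q - r)  # (R_r - R_q)/(q - r) applied to 1
assert abs(lhs - rhs) < 1e-9
```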

3 The Control Problems

To pose the assumptions for the control problems, we define the function \(\theta _{r}: \mathbb{R}_{+} \to \mathbb{R}\) as \(\theta _{r}(x)=\pi (x)+\gamma \rho (x)\), where \(\gamma \) is a positive constant and \(\pi : \mathbb{R}_{+} \to \mathbb{R}\) is the cost function. Here, the function \(\rho : \mathbb{R}_{+} \to \mathbb{R}\) is defined as \(\rho (x)=\mu (x)-rx\), where \(r\) is a positive constant called the discounting factor. In the economic literature, the function \(\theta _{r}\) can be understood as the net convenience yield of holding inventories [2, 8]. This function appears in a wide range of control problems of one-dimensional diffusions when the criterion to be minimized includes discounting [2, 14, 16].

In addition, we note that in the absence of discounting (\(r=0\)), the function \(\theta _{r}\) reduces to

$$ \pi _{\mu}(x) = \pi (x) + \gamma \mu (x),$$

which plays a key role in many ergodic control problems of one-dimensional diffusions [5, 15].

We study the control problems under the following assumptions, which guarantee semi-explicit solvability of the control problems defined at the end of this section.

Assumption 1

We assume that:

  1. the upper boundary \(\infty \) and the lower boundary 0 are natural;

  2. the cost function \(\pi \) is continuous, non-negative and non-decreasing;

  3. the functions \(\theta _{r}\) and id\(: x \mapsto x\) are in \(\mathcal{L}_{1}^{r}\);

  4. there is a unique state \(x^{*} \geq 0\) such that \(\theta _{r}\) (for \(r \geq 0\)) is decreasing on \((0, x^{*} )\) and increasing on \((x^{*} , \infty )\), and the limiting condition \({ \lim _{x \to \infty} \theta _{r}(x) \geq 0}\) holds.

Assumption 2

We assume that:

  1. \(m(0,y)=\int _{0}^{y} m'(z)dz <\infty \) and \(\int _{0}^{y} \pi _{\mu}(z) m'(z)dz<\infty \) for all \(y \in \mathbb{R}_{+}\);

  2. \(\lim _{x \downarrow 0} S'(x) = \infty \).

We make some remarks on these assumptions. First, the uncontrolled state variable \(X\) cannot become infinitely large or reach zero in finite time; see pp.18–20 of [6] for a characterization of the boundary behavior of diffusions. Second, the cost function is non-decreasing and non-negative, which is in line with usual economic applications. Third, the resolvents \((R_{r} \theta _{r})(x)\) and \((R_{r} \, \text{id})(x)\) exist. Fourth, we restrict our attention to the case where the function \(\theta _{r}\) (respectively \(\pi _{\mu}\)) has a unique global minimum at \(x^{*}\); this condition essentially guarantees that the equations (presented at the end of this section) for the optimal control boundaries have a unique solution. Finally, the first part of Assumption 2 guarantees that the underlying diffusion has a stationary distribution (see p.37 of [6]).

Lastly, we assume that

Assumption 3

The process \(N_{t}\) is a Poisson process with parameter \(\lambda \geq 0\), independent of the underlying diffusion \(X_{t}\). Furthermore, we assume that the filtration \(\{\mathcal{F}_{t}\}_{t \geq 0}\) is rich enough to carry the Poisson process \(N = (N_{t},\mathcal{F}_{t})_{t \geq 0}\). We denote the jump times of \(N_{t}\) by \(T_{i}\).

Before stating the control problems, we define the auxiliary functions \({\pi _{\gamma}: \mathbb{R}_{+} \to \mathbb{R}}\) and \(g: \mathbb{R}_{+} \to \mathbb{R}\) as

$$\begin{aligned} &g(x)=\gamma x-(R_{r}\pi )(x), \\ &\pi _{\gamma}(x) = \lambda \gamma x + \pi (x), \end{aligned}$$

where \(\gamma \), \(r\) are the same positive constants as in the definition of \(\theta _{r}\) and \(\lambda \) is the intensity of the Poisson process in Assumption 3. The next lemma gives useful relationships between these auxiliary functions, \(\theta _{r}\) and \(\pi \). The lemma can be proved using the resolvent equation (4) and the harmonicity property \((\mathcal{A}-r)(R_{r}\pi )(x)+\pi (x)=0\).

Lemma 1

Assume that \(g, \pi _{\gamma}, \theta _{r} \in \mathcal{L}_{1}^{r}\). Then

$$\begin{aligned} & (\mathcal{A}-r)g(x) = \theta _{r}(x), \end{aligned}$$
(5)
$$\begin{aligned} & (R_{r+\lambda} \pi _{\gamma})(x) = \lambda (R_{r+\lambda}g)(x) + (R_{r} \pi )(x), \end{aligned}$$
(6)
$$\begin{aligned} & \lambda (R_{r+\lambda}g)(x) = (R_{r+\lambda}\theta _{r})(x)+g(x). \end{aligned}$$
(7)

The following functionals play a key role in the main results of the study, as they offer convenient representations of the integral functionals that involve \(\psi _{r}\) and \(\varphi _{r}\):

$$\begin{aligned} L_{\theta}^{r}(x) & = r\int _{x}^{\infty} \varphi _{r}(y) \theta (y) m'(y)dy + \frac{\varphi '_{r}(x)}{S'(x)}\theta (x), \\ K_{\theta}^{r}(x) & = r \int _{0}^{x} \psi _{r}(y) \theta (y) m'(y)dy - \frac{\psi _{r}'(x)}{S'(x)} \theta (x), \\ H(x,y) & = \int _{x}^{y} (\pi _{\mu}(z)-\pi _{\mu}(x))m'(z)dz. \end{aligned}$$

We now formulate downward singular control problems of one-dimensional diffusions and similar problems where controlling is allowed only at exogenously given Poisson arrival times. The latter problem is called the constrained control problem.

We assume in all of the following theorems, that the controlled dynamics are given by the stochastic differential equation

$$ dX_{t}^{D} = \mu (X_{t}^{D}) dt + \sigma (X_{t}^{D})dW_{t} - \gamma dD_{t}, \quad X_{0}^{D} = x \in \mathbb{R}_{+}, $$

where \(D_{t}\) denotes the applied control policy and \(\gamma \) is a positive constant (the coefficient \(\gamma \) is often interpreted as a proportional transaction cost).

Assumption 4

Admissible control policies

  1. In the singular problems (Theorems 2 and 3 below), we call a control policy \(D^{s}_{t}\) admissible if it is non-negative, non-decreasing, right-continuous, and \(\{\mathcal{F}_{t}\}_{t \geq 0}\)-adapted, and denote the set of admissible controls by \(\mathcal{D}_{s}\).

  2. In the constrained problems (Theorems 4 and 5 below), the set of admissible controls \(\mathcal{D}\) is given by those non-decreasing, left-continuous processes \(D_{t}\) that have the representation

    $$ D_{t}=\int _{[0,t)} \eta _{s} dN_{s}, $$

    where \(N\) is the Poisson process of Assumption 3 and the integrand \(\eta \) is \(\{\mathcal{F}_{t}\}_{t \geq 0}\)-predictable.

We state the main results for the control problems below and refer to [1, 3–5, 14, 15] for full discussions.

Theorem 2

Singular control with discounted criterion (pp. 1701–1702 of [1], pp. 714–715 of [3])

Under Assumption 1, the optimal control policy minimizing the objective

$$ J_{s}(x,D^{s})=\mathbb{E}_{x} \left [ \int _{0}^{\infty} e^{-rt}(\pi (X_{t}^{D^{s}})dt + \gamma dD^{s}_{t}) \right ], $$

where \(D^{s}_{t} \in \mathcal{D}_{s}\), is

$$ D^{s}_{t} = \textstyle\begin{cases} \mathcal{L}(t, y^{*}_{s}), & t>0, \\ (x-y^{*}_{s})^{+}, & t=0, \end{cases} $$

where \(\mathcal{L}(t, y^{*}_{s})\) denotes the local time push of the process \(X_{t}\) at the boundary \(y^{*}_{s}\). The boundary \(y^{*}_{s}\) is characterized by the unique solution to the equation

$$ K_{\theta _{r}}^{r}(y^{*}_{s}) = 0. $$
(8)

Moreover, the value of the problem reads as

$$ V^{s}_{r}(x) := \inf _{D^{s} \in \mathcal{D}_{s}} J_{s}(x,D^{s})= \textstyle\begin{cases} \gamma x+\frac{\theta _{r}(y^{*}_{s})}{r}, & \quad x \geq y^{*}_{s}, \\ (R_{r} \pi )(x)-\psi _{r}(x) \frac{(R_{r}\pi )'(y^{*}_{s})-\gamma}{\psi _{r}'(y^{*}_{s})}, & \quad x< y^{*}_{s}. \end{cases} $$

Theorem 3

Singular control with ergodic criterion (pp. 16–17 of [5], p. 7 of [4])

Under Assumptions 1 and 2, the optimal control policy minimizing the objective

$$ J_{se}(x,D^{s})=\liminf _{T \to \infty} \frac{1}{T} \mathbb{E}_{x} \left [ \int _{0}^{T} (\pi (X_{t}^{D^{s}})dt + \gamma dD^{s}_{t}) \right ] $$

where \(D^{s}_{t} \in \mathcal{D}_{s}\), is

$$ D^{s}_{t} = \textstyle\begin{cases} \mathcal{L}(t, b^{*}_{s}), & t>0, \\ (x-b^{*}_{s})^{+}, & t=0, \end{cases} $$

where \(\mathcal{L}(t, b^{*}_{s})\) denotes the local time push of the process \(X_{t}\) at the boundary \(b^{*}_{s}\). The boundary \(b^{*}_{s}\) is characterized by the unique solution to the equation

$$ H(0,b^{*}_{s}) = 0. $$
(9)

Moreover, the long run average cumulative yield reads as

$$ \beta ^{s} := \inf _{D^{s} \in \mathcal{D}_{s}} J_{se}(x,D^{s}) = \pi _{\mu}(b^{*}_{s}). $$

Theorem 4

Control with discounted criterion and constraint (p. 115 of [14])

Under Assumption 1, the optimal control policy that minimizes the objective

$$ J(x,D)=\mathbb{E}_{x} \left [ \int _{0}^{\infty} e^{-rt}(\pi (X_{t}^{D})dt + \gamma dD_{t}) \right ], $$

where \(D_{t} \in \mathcal{D}\), is as follows. If the controlled process \(X^{D}\) is above the threshold \(y^{*}\) at a jump time \(T_{i}\) of \(N\), i.e. \(X^{D}_{T_{i}-} > y^{*}\), the decision maker should push the controlled process \(X^{D}\) down to \(y^{*}\). Further, the threshold \(y^{*}\) is uniquely determined by

$$ \psi _{r}'(y^{*}) L^{r+\lambda}_{g}(y^{*})=g'(y^{*})L^{r+\lambda}_{ \psi _{r}}(y^{*}), $$

which can be rewritten as

$$ \frac{L_{\theta _{r}}^{r+\lambda}(y^{*})}{\varphi _{r+\lambda}'(y^{*})} + \frac{K_{\theta _{r}}^{r}(y^{*})}{\psi '_{r}(y^{*})} = 0. $$
(10)

In addition, the value function \(V_{r, \lambda}(x):= \inf _{D \in \mathcal{D}} J(x,D)\) of the problem reads as

$$ V_{r, \lambda}(x)= \textstyle\begin{cases} \gamma x+ (R_{r+\lambda}\theta _{r})(x)- \frac{(R_{r+\lambda}\theta _{r})'(y^{*})}{\varphi _{r+\lambda}'(y^{*})} \varphi _{r+\lambda}(x)+ A(y^{*}), & \quad x \geq y^{*} \\ \gamma x+ (R_{r} \theta _{r})(x)-\psi _{r}(x) \frac{(R_{r} \theta _{r})'(y^{*})}{\psi _{r}'(y^{*})}, & \quad x< y^{*} \end{cases} $$
(11)

where

$$ A(y^{*}) = \frac{\lambda}{r} \bigg[ (R_{r+\lambda}\theta _{r})(y^{*}) - (R_{r+\lambda}\theta _{r})'(y^{*}) \frac{\varphi _{r+\lambda}(y^{*})}{\varphi _{r+\lambda}'(y^{*})} \bigg]. $$

Proof

We only prove that the optimality condition can be rewritten as (10), and refer to [14] for the rest of the claim. To prove the representation, we first use Lemma 6, and then the formulas (5) and (7), to get

$$\begin{aligned} & \frac{2\lambda S'(y^{*})}{\sigma ^{2}(y^{*})} \Big[ \psi _{r}'(y^{*}) L_{g}^{r+\lambda}(y^{*})-g'(y^{*})L_{\psi _{r}}^{r+\lambda}(y^{*}) \Big] \\ & = \psi '_{r}(y^{*})(\varphi _{r+\lambda}''(y^{*}) \lambda (R_{r+ \lambda}g)'(y^{*})-\varphi _{r+\lambda}'(y^{*})\lambda (R_{r+\lambda}g)''(y^{*})) \\ & - g'(y^{*})(\varphi _{r+\lambda}''(y^{*})\psi _{r}'(y^{*})-\varphi _{r+ \lambda}'(y^{*})\psi _{r}''(y^{*})) \\ & = \psi _{r}'(y^{*})(\varphi _{r+\lambda}''(y^{*})(R_{r+\lambda} \theta _{r})'(y^{*})- \varphi _{r+\lambda}'(y^{*}) (R_{r+\lambda} \theta _{r})''(y^{*})) \\ & - \varphi _{r+\lambda}'(y^{*})(\psi _{r}''(y^{*})(R_{r}\theta _{r})'(y^{*})- \psi _{r}'(y^{*})(R_{r}\theta _{r})''(y^{*})). \end{aligned}$$

Utilizing the Lemma 6 again, we see that the optimality condition has the form

$$ \frac{L_{\theta _{r}}^{r+\lambda}(y^{*})}{\varphi _{r+\lambda}'(y^{*})} + \frac{K_{\theta _{r}}^{r}(y^{*})}{\psi '_{r}(y^{*})} = 0. $$

 □

Theorem 5

Control with ergodic criterion and constraint (p. 16 of [15])

Under Assumptions 1 and 2, and assuming that \(\pi (x) \geq C(x^{\alpha}-1)\), where \(\alpha \) and \(C\) are positive constants, the optimal control policy minimizing the objective

$$ J_{e}(x,D)=\liminf _{T \to \infty} \frac{1}{T} \mathbb{E}_{x} \left [ \int _{0}^{T} (\pi (X_{t}^{D})dt + \gamma dD_{t}) \right ], $$

where \(D_{t} \in \mathcal{D}\), is as follows. If the controlled process \(X^{D}\) is above the threshold \(b^{*}\) at a jump time \(T_{i}\) of \(N\), i.e. \(X^{D}_{T_{i}-} > b^{*}\), the decision maker should push the controlled process \(X^{D}\) down to \(b^{*}\). Further, the threshold \(b^{*}\) is uniquely determined by

$$ \frac{L_{\pi _{\mu}}^{\lambda}(b^{*})}{\varphi _{\lambda}'(b^{*})} + \frac{H(0,b^{*})}{S'(b^{*}) m(0,b^{*})} = 0, $$
(12)

and the long run average cumulative yield \(\beta _{\lambda}\) reads as

$$ \beta _{\lambda} := \inf _{D \in \mathcal{D}} J_{e}(x,D) = m(0,b^{*})^{-1} \left [ \int ^{b^{*}}_{0} \pi _{\mu}(z)m'(z) dz \right ]. $$

Remark 1

The boundary classifications for the underlying diffusion can be relaxed in all of the above theorems. For example, in Theorem 3 it can be shown that the results stay unchanged when the lower boundary is an exit or killing boundary, see p.5 of [14].

The optimal policies in the above theorems can be summarised as follows. In the singular control problems the optimal policies are local time type barrier policies. In other words, when the process is below some constant boundary \(y^{*}_{s}\) it is left uncontrolled, but it is never allowed to cross the boundary, i.e. it is reflected at \(y^{*}_{s}\). The situation in the problems with constraint is similar: when the process is below some threshold \(y^{*}\) we do not act, but if the process is above the boundary when the Poisson process jumps, we immediately push it down to \(y^{*}\) and start it anew. In other words, the optimal strategy in all of the problems is to exert control at the ‘maximum rate’ when the process is at (or above) the corresponding boundary.
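The constrained barrier policy described above can be sketched in a few lines of simulation code. The drift, volatility, and threshold below are placeholder values chosen purely for illustration; in an application the threshold \(y^{*}\) would be computed from the optimality condition (10):

```python
import math
import random

# Between Poisson arrivals the state diffuses freely (Euler steps); at an
# arrival, any excess above the threshold is pushed back down. Coefficients
# (mu = 0.1, sigma = 0.3) and y_star = 1.0 are illustrative placeholders.
def simulate_constrained_policy(y_star=1.0, lam=5.0, T=10.0, dt=1e-3, seed=1):
    rng = random.Random(seed)
    x, t, total_push = 1.2, 0.0, 0.0
    while t < T:
        x += 0.1 * dt + 0.3 * rng.gauss(0.0, math.sqrt(dt))
        if rng.random() < lam * dt and x > y_star:  # control opportunity
            total_push += x - y_star                # cumulative control dD
            x = y_star
        t += dt
    return x, total_push

x_T, pushed = simulate_constrained_policy()
```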

4 Main Results

In the next lemma, we collect useful representations for the functionals \(K_{f}^{r}\) and \(L_{f}^{r}\).

Lemma 6

The functions \(L_{f}^{r}\) and \(K_{f}^{r}\) have the alternative representations

$$\begin{aligned} L_{f}^{r}(x) = & \frac{\sigma ^{2}(x)}{2 S'(x)}[\varphi _{r}''(x) (R_{r}f)'(x)- \varphi _{r}'(x) (R_{r}f)''(x)], \\ K_{f}^{r}(x) = & \frac{\sigma ^{2}(x)}{2 S'(x)} [\psi '_{r}(x) (R_{r} f)''(x)- \psi _{r}''(x) (R_{r} f)'(x)]. \end{aligned}$$

Proof

The claim on \(L_{f}^{r}\) is lemma 2 of [14] and the proof of the claim on \(K_{f}^{r}\) is completely analogous. □

Under our assumption that the boundaries are natural, we have that

$$\begin{aligned} \frac{\varphi '_{r}(x)}{S'(x)} = -r \int _{x}^{\infty} \varphi _{r}(y)m'(y)dy, \quad \frac{\psi '_{r}(x)}{S'(x)} = r \int _{0}^{x} \psi _{r}(y)m'(y)dy, \end{aligned}$$
(13)

and thus, we can further rewrite

$$\begin{aligned} L_{f}^{r}(x) & = r\int _{x}^{\infty} \varphi _{r}(y) (f(y)-f(x)) m'(y)dy, \\ K_{f}^{r}(x) & = r \int _{0}^{x} \psi _{r}(y) (f(y)-f(x)) m'(y)dy. \end{aligned}$$
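As a sanity check on identity (13), consider a geometric Brownian motion \(dX_t = \mu X_t dt + \sigma X_t dW_t\), for which \(\psi_r(x) = x^{a}\) with \(a > 0\) the positive root of \(\frac{1}{2}\sigma^2 a(a-1) + \mu a - r = 0\), \(S'(x) = x^{-2\mu /\sigma^2}\) and \(m'(x) = 2/(\sigma^2 x^2 S'(x))\). The parameters below are illustrative:

```python
import math

# Check psi_r'(x)/S'(x) = r * int_0^x psi_r(y) m'(y) dy for a GBM.
mu, s, r = 0.02, 0.3, 0.05
c = s * s / 2.0
# positive root of c*a*(a-1) + mu*a - r = 0
a = (-(mu - c) + math.sqrt((mu - c) ** 2 + 4.0 * c * r)) / (2.0 * c)

x = 1.5
S_prime = x ** (-2.0 * mu / (s * s))
lhs = a * x ** (a - 1.0) / S_prime   # psi_r'(x) / S'(x)

# psi_r(y)*m'(y) = (2/s^2) * y**(a + 2*mu/s^2 - 2), integrated in closed form
p = a + 2.0 * mu / (s * s) - 1.0     # p > 0 here, so the integral converges
rhs = r * (2.0 / (s * s)) * x ** p / p

assert abs(lhs - rhs) < 1e-9 * abs(lhs)
```

The two sides agree exactly because \(a(a - 1 + 2\mu/\sigma^2) = 2r/\sigma^2\) is a restatement of the characteristic equation defining \(a\).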

In the next lemma we prove that these functionals satisfy asymptotic properties that are needed to establish the relationships between the introduced control problems.

Lemma 7

Under Assumption 1, we have the limits

$$\begin{aligned} \frac{L_{\theta _{r}}^{r+\lambda}(x)}{\varphi _{r+\lambda}(x)} \xrightarrow{ \lambda \to \infty } 0, \quad \frac{K_{\theta _{r}}^{r+\lambda}(x)}{\psi _{r+\lambda}(x)} \xrightarrow{ \lambda \to \infty } 0. \end{aligned}$$

In addition, if the underlying diffusion \(X_{t}\) is recurrent, i.e. \(\mathbb{P}_{x}[\tau _{z} < \infty ] = 1\) for all \(x,z \in \mathbb{R}_{+}\), then

$$\begin{aligned} \frac{L_{\theta _{r}}^{r}(x)}{r\varphi _{r}(x)} \xrightarrow{ r \to 0 } H(x,\infty ), \quad \frac{K_{\theta _{r}}^{r}(x)}{r\psi _{r}(x)} \xrightarrow{ r \to 0 } H(0,x), \end{aligned}$$

where

$$ H(x,y) = \int _{x}^{y} (\pi _{\mu}(z)-\pi _{\mu}(x))m'(z)dz. $$

Proof

Let \(\tau _{z} = \inf \{ t \geq 0 \mid X_{t} = z \}\). Then for all \(s > 0\) we have, with the convention \(e^{-s \cdot \infty} = 0\),

$$ \mathbb{E}_{x}[e^{-s\tau _{z}}] = \textstyle\begin{cases} \dfrac{\psi _{s}(x)}{\psi _{s}(z)}, \quad x \leq z \\ \dfrac{\varphi _{s}(x)}{\varphi _{s}(z)}, \quad x > z. \end{cases} $$
(14)

Therefore, by letting \(s \to 0+\) we get by monotone convergence that

$$ \begin{aligned} & \lim _{s \to 0+} \frac{\psi _{s}(x)}{\psi _{s}(z)} = \mathbb{P}_{x}[ \tau _{z} < \infty ] = 1, \\ & \lim _{s \to 0+} \frac{\varphi _{s}(x)}{\varphi _{s}(z)} = \mathbb{P}_{x}[\tau _{z} < \infty ] = 1, \end{aligned} $$
(15)

under the assumption that the underlying diffusion is recurrent. In addition, again by (14), we find that

$$\begin{aligned} \lim _{s \to \infty} s \frac{\psi _{s}(x)}{\psi _{s}(z)} = 0, \qquad \lim _{s \to \infty} s \frac{\varphi _{s}(x)}{\varphi _{s}(z)} = 0. \end{aligned}$$
(16)

Since \(\lim _{r \to 0+} \theta _{r}(x) = \pi _{\mu}(x)\), the above observations and monotone convergence yield

$$\begin{aligned} \frac{L_{\theta _{r}}^{r}(x)}{r\varphi _{r}(x)} & = \int _{x}^{\infty} \frac{\varphi _{r}(z)}{\varphi _{r}(x)}(\theta _{r}(z)- \theta _{r}(x) )m'(z)dz \to H(x,\infty ) \text{ as } r \to 0, \\ \frac{K_{\theta _{r}}^{r}(x)}{r\psi _{r}(x)} & = \int _{0}^{x} \frac{\psi _{r}(z)}{\psi _{r}(x)}(\theta _{r}(z)- \theta _{r}(x) )m'(z)dz \to H(0,x) \text{ as } r \to 0. \end{aligned}$$

Similarly, by utilizing (16) we obtain

$$\begin{aligned} \frac{L_{\theta _{r}}^{r+\lambda}(x)}{\varphi _{r+\lambda}(x)} & = (r+ \lambda ) \int _{x}^{\infty} \frac{\varphi _{r+\lambda}(z)}{\varphi _{r+\lambda}(x)}(\theta _{r}(z)- \theta _{r}(x) )m'(z)dz \to 0 \text{ as } \lambda \to \infty , \\ \frac{K_{\theta _{r}}^{r+\lambda}(x)}{\psi _{r+\lambda}(x)} & = (r+ \lambda ) \int _{0}^{x} \frac{\psi _{r+\lambda}(z)}{\psi _{r+\lambda}(x)}(\theta _{r}(z)- \theta _{r}(x) )m'(z)dz \to 0 \text{ as } \lambda \to \infty . \end{aligned}$$

 □

Proposition 8

Asymptotics of the optimal thresholds

Under Assumptions 1, 2 and 3, the optimal thresholds satisfy the following asymptotic results in terms of the intensity of the Poisson process

$$\begin{aligned} y^{*}(\lambda ) \xrightarrow{ \lambda \to \infty } y^{*}_{s}, \quad b^{*}( \lambda ) \xrightarrow{ \lambda \to \infty } b^{*}_{s}, \end{aligned}$$

and if the underlying diffusion is also recurrent, we have the vanishing discount factor limits

$$\begin{aligned} y^{*}_{s}(r) \xrightarrow{ r \to 0 } b^{*}_{s}, \quad y^{*}(r) \xrightarrow{ r \to 0 } b^{*}. \end{aligned}$$

Proof

We prove the first and the last claim; the second claim is proposition 3 of [15] and the third claim is lemma 3.1 of [4]. Define the functions

$$\begin{aligned} G_{r, \lambda}(x) & = \frac{L_{\theta _{r}}^{r+\lambda}(x)}{\varphi _{r+\lambda}'(x)} + \frac{K_{\theta _{r}}^{r}(x)}{\psi _{r}'(x)}, \\ F_{\lambda}(x) & = \frac{L_{\pi _{\mu}}^{\lambda}(x)}{\varphi _{\lambda}'(x)} + \frac{H(0,x)}{S'(x) m(0,x)}, \end{aligned}$$

and let \(y^{*}_{s}(r)\), \(y^{*}(\lambda )\), \(b^{*}(\lambda )\) be such that \(K_{\theta _{r}}^{r}(y^{*}_{s}(r))=0\), \(G_{r, \lambda}(y^{*}(\lambda ))=0\) and \(F_{\lambda}(b^{*}(\lambda ))=0\). We note that these roots are unique and define the optimal boundaries in Theorems 2, 4 and 5.

For the first claim, we show that the unique root \(y^{*}(\lambda )\) of the function \(G_{r, \lambda}(x)\) converges to \(y^{*}_{s}\) as \(\lambda \to \infty \). We find that

$$ G_{r, \lambda}(x) = \frac{\varphi _{r+\lambda}(x)}{\varphi _{r+\lambda}'(x)} \frac{L_{\theta _{r}}^{r+\lambda}(x)}{\varphi _{r+\lambda}(x)} + \frac{K_{\theta _{r}}^{r}(x)}{\psi _{r}'(x)}. $$

Because the function \(\lambda \mapsto \frac{\varphi _{r+\lambda}(x)}{\varphi _{r+\lambda}'(x)}\) is increasing and bounded above by 0 by lemma 6 in [22], we find by Lemma 7 that

$$ G_{r, \lambda}(x) = \frac{\varphi _{r+\lambda}(x)}{\varphi _{r+\lambda}'(x)} \frac{L_{\theta _{r}}^{r+\lambda}(x)}{\varphi _{r+\lambda}(x)} + \frac{K_{\theta _{r}}^{r}(x)}{\psi _{r}'(x)} \to \frac{K_{\theta _{r}}^{r}(x)}{\psi _{r}'(x)} \text{ as } \lambda \to \infty . $$
(17)

Thus, because the function \(\frac{K_{\theta _{r}}^{r}(x)}{\psi _{r}'(x)}\) changes sign (see lemma 3.1 of [3]), we have by (17) and the equations (8) and (10) that

$$ y^{*}(\lambda ) \xrightarrow{ \lambda \to \infty } y^{*}_{s}. $$

To prove the last claim, we find utilizing (13) and Lemma 7 that

$$\begin{aligned} \frac{K_{\theta _{r}}^{r}(x)}{\psi _{r}'(x)} & = \frac{K_{\theta _{r}}^{r}(x)}{S'(x)\int _{0}^{x}\psi _{r}(z)m'(z) dz} = \frac{\frac{K_{\theta _{r}}^{r}(x)}{r\psi _{r}(x)}}{S'(x)\int _{0}^{x}\frac{\psi _{r}(z)}{r\psi _{r}(x)}m'(z) dz} \\ & \xrightarrow{ r \to 0 } \frac{H(0,x)}{S'(x)m(0,x)}. \end{aligned}$$

Hence,

$$ G_{r, \lambda}(x) \xrightarrow{ r \to 0 } \frac{L_{\pi _{\mu}}^{\lambda}(x)}{\varphi _{\lambda}'(x)} + \frac{H(0,x)}{S'(x)m(0,x)} = F_{\lambda}(x). $$
(18)

Therefore, as the function \(F_{\lambda}(x)\) changes sign (see proposition 1 of [15]), the claim follows by (18) and the equations (10) and (12). □

Similar limiting results hold also for the corresponding values of the defined control problems. However, it is clear that in terms of the vanishing discounting factor, the results hold only in the following Abelian sense.

Proposition 9

Asymptotics of the values

Under Assumptions 1, 2 and 3, the values of the control problems satisfy the following asymptotic results

$$\begin{aligned} V_{r, \lambda}(x) \xrightarrow{ \lambda \to \infty} V^{s}_{r}(x), \quad \beta _{\lambda} \xrightarrow{ \lambda \to \infty } \beta ^{s}. \end{aligned}$$

Also, if the underlying diffusion is recurrent, we have the following Abelian limits

$$\begin{aligned} r V_{r, \lambda}(x) \xrightarrow{ r \to 0 } \beta _{\lambda}, \qquad r V^{s}_{r} (x) \xrightarrow{ r \to 0 } \beta ^{s}. \end{aligned}$$

Proof

For the last claim, see lemma 3.1 of [4]. To prove the third claim, we first rewrite the value function (11) using Lemma 1 as

$$ V_{r, \lambda}(x)= \textstyle\begin{cases} \gamma x+ (R_{r+\lambda}\theta _{r})(x)- \frac{(R_{r+\lambda}\theta _{r})'(y^{*})}{\varphi _{r+\lambda}'(y^{*})} \varphi _{r+\lambda}(x)+ A(y^{*}), & \quad x \geq y^{*}, \\ \gamma x + \psi _{r}(x)\bigg[ \frac{(R_{r} \theta _{r})(x)}{\psi _{r}(x)}- \frac{(R_{r} \theta _{r})'(y^{*})}{\psi _{r}'(y^{*})} \bigg], & \quad x< y^{*}, \end{cases} $$
(19)

where

$$ A(y^{*}) = \frac{\lambda}{r} \bigg[ (R_{r+\lambda}\theta _{r})(y^{*}) - (R_{r+\lambda}\theta _{r})'(y^{*}) \frac{\varphi _{r+\lambda}(y^{*})}{\varphi _{r+\lambda}'(y^{*})} \bigg]. $$

We notice that when \(x > y^{*}\), the function \(r V_{r, \lambda}(x)\) has a representation convenient for taking the limit \(r \to 0\). However, when \(x < y^{*}\) we proceed as follows. Because \(V_{r, \lambda}(x)\) is continuous across the boundary \(y^{*}\), we find

$$\begin{aligned} & (r+\lambda )\psi _{r}'(y^{*})(\varphi _{r+\lambda}'(y^{*})(R_{r+ \lambda}\theta _{r})(y^{*}) - \varphi _{r+\lambda}(y^{*})(R_{r+ \lambda}\theta _{r})'(y^{*})) \\ = & r\varphi _{r+\lambda}'(y^{*})(\psi _{r}'(y^{*})(R_{r}\theta _{r})(y^{*}) - \psi _{r}(y^{*})(R_{r}\theta _{r})'(y^{*})), \end{aligned}$$
(20)

which can be re-organized as

$$\begin{aligned} & - r \frac{(R_{r}\theta _{r})'(y^{*})}{\psi _{r}'(y^{*})} + r \frac{(R_{r}\theta _{r})(y^{*})}{\psi _{r}(y^{*})} \\ & = (r+\lambda )\bigg( \frac{\varphi _{r+\lambda}'(y^{*})(R_{r+\lambda}\theta _{r})(y^{*}) - \varphi _{r+\lambda}(y^{*})(R_{r+\lambda}\theta _{r})'(y^{*})}{\varphi _{r+\lambda}'(y^{*}) \psi _{r}(y^{*})} \bigg). \end{aligned}$$
(21)

Thus, we get that

$$\begin{aligned} & r\gamma x + r \psi _{r}(x)\bigg[ \frac{(R_{r} \theta _{r})(x)}{\psi _{r}(x)}- \frac{(R_{r} \theta _{r})'(y^{*})}{\psi _{r}'(y^{*})} \bigg] \\ & = r \gamma x + r(R_{r} \theta _{r})(x) - r \frac{\psi _{r}(x)}{\psi _{r}(y^{*})} (R_{r}\theta _{r})(y^{*}) \\ & + (r+\lambda )\frac{\psi _{r}(x)}{\psi _{r}(y^{*})}\bigg( \frac{\varphi _{r+\lambda}'(y^{*})(R_{r+\lambda}\theta _{r})(y^{*}) - \varphi _{r+\lambda}(y^{*})(R_{r+\lambda}\theta _{r})'(y^{*})}{\varphi _{r+\lambda}'(y^{*})} \bigg). \end{aligned}$$

Using the formulas (3) and (15), we see that

$$\begin{aligned} r (R_{r} \theta _{r})(x) & = r \frac{\varphi _{r}(x) \int _{0}^{x} \psi _{r}(z)\theta _{r}(z)m'(z)dz +\psi _{r}(x)\int _{x}^{\infty} \varphi _{r}(z) \theta _{r}(z) m'(z) dz}{\frac{1}{S'(x)}[\psi '_{r}(x) \varphi _{r}(x)-\psi _{r}(x)\varphi _{r}'(x)]} \\ & = \frac{\varphi _{r}(x)\int _{0}^{x} \psi _{r}(z)\theta _{r}(z)m'(z)dz + \psi _{r}(x) \int _{x}^{\infty} \varphi _{r}(z) \theta _{r}(z) m'(z) dz}{ \varphi _{r}(x)\int _{0}^{x} \psi _{r}(z)m'(z)dz +\psi _{r}(x) \int _{x}^{\infty} \varphi _{r}(z)m'(z)dz} \\ & = \frac{\int _{0}^{x} \frac{\psi _{r}(z)}{\psi _{r}(x)}\theta _{r}(z)m'(z)dz + \int _{x}^{\infty} \frac{\varphi _{r}(z)}{\varphi _{r}(x)} \theta _{r}(z) m'(z) dz}{ \int _{0}^{x} \frac{\psi _{r}(z)}{\psi _{r}(x)}m'(z)dz + \int _{x}^{\infty} \frac{\varphi _{r}(z)}{\varphi _{r}(x)}m'(z)dz} \\ & \xrightarrow{ r \to 0 } \frac{\int _{0}^{\infty}\pi _{\mu}(z)m'(z)dz}{ \int _{0}^{\infty} m'(z)dz}, \end{aligned}$$

and thus, by (15) we have

$$ r(R_{r} \theta _{r})(x) - r \frac{\psi _{r}(x)}{\psi _{r}(y^{*})} (R_{r} \theta _{r})(y^{*}) \xrightarrow{ r \to 0 } 0. $$

Therefore, by continuity and Proposition 8, the value function satisfies

$$ rV_{r, \lambda}(x) \xrightarrow{ r \to 0 } \lambda \frac{\varphi _{\lambda}'(b^{*})(R_{\lambda} \pi _{\mu})(b^{*}) - \varphi _{\lambda}(b^{*})(R_{\lambda} \pi _{\mu})'(b^{*})}{\varphi _{\lambda}'(b^{*})}. $$

Finally, utilizing (3) and (12), the limiting value reads as

$$ -\lambda \frac{S'(b^{*})}{\varphi _{\lambda}'(b^{*})} \int _{b^{*}}^{ \infty} \pi _{\mu}(z) \varphi _{\lambda}(z) m'(z)dz = \beta _{\lambda}, $$

which completes the proof of the third claim.

To prove the first claim, we notice that the value function \(V_{r, \lambda}(x)\) is independent of \(\lambda \) when \(x< y^{*}\). Thus, we can focus on the region \(x>y^{*}\). We reorganize the terms of \(V_{r, \lambda}(x)\) in the upper region as

$$ \gamma x+ (R_{r+\lambda}\theta _{r})(x)- \frac{(R_{r+\lambda}\theta _{r})'(y^{*})}{\varphi _{r+\lambda}'(y^{*})} \varphi _{r+\lambda}(x)+ A(y^{*}), $$
(22)

where

$$ A(y^{*}) = \frac{\lambda}{r} \bigg[ (R_{r+\lambda}\theta _{r})(y^{*}) - (R_{r+\lambda}\theta _{r})'(y^{*}) \frac{\varphi _{r+\lambda}(y^{*})}{\varphi _{r+\lambda}'(y^{*})} \bigg]. $$

Because diffusions are Feller processes, we know that \(\lambda (R_{r+\lambda}\theta _{r}) \to \theta _{r}\) as \(\lambda \to \infty \) (in sup-norm), see p.235 of [21]. Thus,

$$ \gamma x + \frac{\lambda (R_{r+\lambda}\theta _{r})(x)}{\lambda} + \frac{\lambda (R_{r+\lambda}\theta _{r})(y^{*})}{r} \xrightarrow{ \lambda \to \infty } \gamma x + \frac{\theta _{r}(y^{*}_{s})}{r}. $$

To deal with the remaining terms in (19), we note that by (20)

$$\begin{aligned} \frac{(R_{r+\lambda}\theta _{r})'(y^{*})}{\varphi _{r+\lambda}'(y^{*})} & = \frac{(R_{r+\lambda}\theta _{r})(y^{*})}{\varphi _{r+\lambda}(y^{*})} - \frac{r}{r+\lambda} \frac{(R_{r}\theta _{r})(y^{*})}{\varphi _{r+\lambda}(y^{*})} \\ & + \frac{r}{r+\lambda} \frac{\psi _{r}(y^{*})}{\psi '_{r}(y^{*})} \frac{(R_{r}\theta _{r})'(y^{*})}{\varphi _{r+\lambda}(y^{*})}. \end{aligned}$$

Utilizing the above we get by (15)

$$\begin{aligned} & \frac{(R_{r+\lambda}\theta _{r})'(y^{*})}{\varphi '_{r+\lambda}(y^{*})} \varphi _{r+\lambda}(x) \\ & = \frac{\varphi _{r+\lambda}(x)}{\varphi _{r+\lambda}(y^{*})} (R_{r+ \lambda}\theta _{r})(y^{*}) - \frac{r}{r+\lambda} \frac{\varphi _{r+\lambda}(x)}{\varphi _{r+\lambda}(y^{*})} (R_{r} \theta _{r})(y^{*}) \\ & + \frac{r}{r+\lambda} \frac{\psi _{r} (y^{*})}{\psi _{r}'(y^{*})} \frac{\varphi _{r+\lambda}(x)}{\varphi _{r+\lambda}(y^{*})} (R_{r} \theta _{r})'(y^{*}) \xrightarrow{ \lambda \to \infty } 0 \end{aligned}$$

and

$$\begin{aligned} & \frac{\lambda}{r} \frac{(R_{r+\lambda}\theta _{r})'(y^{*})}{\varphi '_{r+\lambda}(y^{*})} \varphi _{r+\lambda}(y^{*}) \\ & = \frac{\lambda}{r} (R_{r+\lambda}\theta _{r})(y^{*}) - \frac{\lambda}{r+\lambda}(R_{r}\theta _{r})(y^{*}) + \frac{\lambda}{r+\lambda} \frac{\psi _{r} (y^{*})}{\psi _{r}'(y^{*})} (R_{r} \theta _{r})'(y^{*}) \\ & \xrightarrow{ \lambda \to \infty } \frac{\theta _{r}(y^{*}_{s})}{r} - (R_{r}\theta _{r})(y^{*}_{s}) + \frac{\psi _{r}(y^{*}_{s})}{\psi _{r}'(y^{*}_{s})} (R_{r}\theta _{r})'(y^{*}_{s}). \end{aligned}$$

As the value function \(V^{s}_{r}(x)\) is continuous across the boundary \(y^{*}_{s}\), we further find that

$$ \frac{\theta _{r}(y^{*}_{s})}{r} - (R_{r}\theta _{r})(y^{*}_{s}) + \frac{\psi _{r}(y^{*}_{s})}{\psi _{r}'(y^{*}_{s})} (R_{r}\theta _{r})'(y^{*}_{s}) = 0. $$

Combining the above limits, the result follows by continuity and Proposition 8.

Lastly, the final claim of the proposition follows by continuity of the functions and Proposition 8, as \(\beta ^{s}\) can also be represented as (see p.17 of [5])

$$ \beta ^{s} = m(0,b_{s}^{*})^{-1} \left [ \int ^{b_{s}^{*}}_{0} \pi _{ \mu}(z)m'(z) dz \right ]. $$

 □
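In the Brownian motion example of Sect. 5.1 below, every quantity in the last identity is available in closed form, so the identity can be verified numerically. The following Python sketch does this with arbitrary test parameters; the expressions for \(R_{r}\theta _{r}\), \(\psi _{r}\) and \(y^{*}_{s}\) are elementary computations for Brownian motion with drift \(\mu \) and \(\theta _{r}(x) = x^{2} + \gamma (\mu - rx)\), as in Sect. 5.1.

```python
import math

# Arbitrary test parameters (any gamma > 0, mu > 0, r > 0 will do)
gamma, mu, r = 0.5, 0.3, 0.2

alpha = mu + math.sqrt(mu**2 + 2*r)   # alpha_r^+ from Sect. 5.1
beta = math.sqrt(mu**2 + 2*r) - mu    # psi_r(x) = e^{beta x}, so psi_r / psi_r' = 1/beta

# theta_r and its r-resolvent for Brownian motion with drift mu
# (elementary integration using E[X_t] = x + mu t, Var[X_t] = t)
theta = lambda x: x**2 + gamma*(mu - r*x)
R = lambda x: x**2/r + 2*x*mu/r**2 + 2*mu**2/r**3 + 1/r**2 - gamma*x
dR = lambda x: 2*x/r + 2*mu/r**2 - gamma

y_s = r*gamma/2 + 1/alpha             # the singular threshold of Sect. 5.1

# The identity: theta_r(y)/r - (R_r theta_r)(y) + (psi_r/psi_r')(R_r theta_r)'(y) = 0 at y = y_s
residual = theta(y_s)/r - R(y_s) + dR(y_s)/beta
print(residual)                       # ~ 0 (up to floating-point error)
```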

Unfortunately, our results do not answer the interesting question of the rate of any of these convergences. It seems that different underlying dynamics yield different rates of convergence, and furthermore, the threshold boundaries can have a different rate of convergence than the value functions. Thus, we believe it is hard to find general conditions for the rate of convergence. However, in the next section we show the rate of convergence in an explicit case.

Another interesting question is the convergence of the controlled processes. A portfolio selection problem under vanishing transaction costs is studied in [7], where the authors show that the optimal thresholds and value functions under transaction costs converge to those without transaction costs as the costs vanish. Using this, the authors argue further that the controlled wealth processes converge weakly. Proving a similar result in our case unfortunately seems more formidable: in [7], the boundedness of the controlled processes simplifies the proof of tightness of the approximating sequence, whereas in our case this does not hold, which makes the problem more difficult. This interesting question of convergence is left for future research.

5 Illustration

5.1 Brownian Motion with Drift

Let the underlying process \(X_{t}\) be defined by

$$ dX_{t} = \mu dt + dW_{t}, \quad X_{0}=x, $$
(23)

where \(\mu > 0\). Also, we let the process evolve in ℝ and choose a quadratic running cost \(\pi (x) = x^{2}\). The minimal excessive functions are in this case known to be

$$ \varphi _{\lambda}(x) = e^{-\big(\sqrt{\mu ^{2}+2 \lambda}+\mu \big) x}, \quad \psi _{\lambda}(x) = e^{\big(\sqrt{\mu ^{2}+2 \lambda}-\mu \big) x}, $$

and the scale density and speed measure read as

$$ S'(x)= \exp (-2 \mu x), \quad m'(x)=2 \exp (2 \mu x), $$
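Since the diffusion coefficient equals one, the minimal excessive functions above are (up to positive multiples) the decreasing and increasing solutions of \(\frac{1}{2} u''(x) + \mu u'(x) = \lambda u(x)\). The following Python snippet is a quick sanity check of this with central finite differences; the parameter values are arbitrary.

```python
import math

mu, lam = 0.1, 2.0                    # hypothetical test values
s = math.sqrt(mu**2 + 2*lam)

phi = lambda x: math.exp(-(s + mu)*x) # decreasing lambda-excessive function
psi = lambda x: math.exp((s - mu)*x)  # increasing lambda-excessive function

# Check (1/2) u'' + mu u' = lam u via central finite differences
h = 1e-4
for u in (phi, psi):
    for x in (-1.0, 0.0, 1.3):
        d1 = (u(x + h) - u(x - h)) / (2*h)
        d2 = (u(x + h) - 2*u(x) + u(x - h)) / h**2
        print(abs(0.5*d2 + mu*d1 - lam*u(x)))   # all ~ 0
```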

respectively. The net convenience yield now takes the form \(\theta (x) = x^{2} + \gamma (\mu -rx)\). We notice immediately that our Assumptions 1 hold, as the boundary assumption can be relaxed in the case of the discounted problems (Theorems 2 and 4); see Remark 1. However, it is clear that the process does not satisfy the ergodicity properties needed for the results concerning the limit \(r \to 0\).

To illustrate the results of Proposition 8, we solve the optimality conditions (8) and (10). Conveniently, the solutions to these equations can be represented explicitly. To solve the equations, we need to find the functions \(K_{\theta _{r}}^{r}(x)\) and \(L_{\theta _{r}}^{r+\lambda}(x)\). Elementary integration yields

$$\begin{aligned} K_{\theta _{r}}^{r}(x) & = \frac{2 e^{x \alpha _{r}^{+}} \big( 2 + (-2 x + r \gamma ) \alpha _{r}^{+} \big)}{(\alpha _{r}^{+})^{3}}, \\ L_{\theta _{r}}^{r+\lambda}(x) & = \frac{2 e^{x \alpha _{r}^{-}} \big( 2 + (2 x - r \gamma ) \alpha _{r}^{-} \big)}{(\alpha _{r}^{-})^{3}}, \end{aligned}$$

where \(\alpha _{r}^{+} = \mu + \sqrt{2r + \mu ^{2}}\) and \(\alpha _{r}^{-} = \mu - \sqrt{2r + \mu ^{2}}\). Plugging these representations into equations (8) and (10) and simplifying yields the thresholds

Fig. 1: Relations between the control problems. These relations hold for the optimal thresholds and also for the values, in the sense of Propositions 8 and 9

$$ y^{*}_{s} = \frac{r \gamma}{2} + \frac{1}{\alpha _{r}^{+}}, \qquad y^{*} = \frac{r \gamma}{2} + \frac{1}{\alpha _{r}^{+}} + \frac{1}{\alpha _{r+\lambda}^{+}}. $$

Using these explicit representations, we get the limit as in Proposition 8. The thresholds are illustrated in Fig. 2, which also shows that the order of convergence is \(\sqrt{\lambda}\).

Fig. 2: Threshold boundaries as a function of the intensity of the Poisson process with the parameters \(\gamma = 0.001\), \(\mu =0.1\) and \(r=0.001\)
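Reading off the displayed thresholds, the gap between the constrained and singular boundaries is \(y^{*}-y^{*}_{s} = 1/\alpha _{r+\lambda}^{+}\), where \(\alpha _{r+\lambda}^{+} = \mu + \sqrt{2(r+\lambda )+\mu ^{2}}\), so that \(\sqrt{2\lambda}\,(y^{*}-y^{*}_{s}) \to 1\). The following Python sketch (with the parameters of Fig. 2) illustrates this \(\sqrt{\lambda}\) order numerically.

```python
import math

gamma, mu, r = 0.001, 0.1, 0.001              # parameters of Fig. 2

alpha_plus = lambda q: mu + math.sqrt(2*q + mu**2)

y_s = r*gamma/2 + 1/alpha_plus(r)             # singular threshold
y = lambda lam: y_s + 1/alpha_plus(r + lam)   # constrained threshold

# sqrt(2*lam) * (y - y_s) -> 1, i.e. the thresholds converge at order sqrt(lam)
for lam in (1e2, 1e4, 1e6):
    print(lam, math.sqrt(2*lam)*(y(lam) - y_s))   # tends to 1 as lam grows
```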

To find the value function \(V_{r, \lambda}(x)\) in the region \(x \geq y^{*}\), we first calculate that the resolvent is equal to

$$ \lambda (R_{r+\lambda} \theta _{r})(x)= \frac{\lambda (r+\lambda )(1+x(x-r\gamma )(r+\lambda ))+\lambda (r+\lambda )(2x+\gamma \lambda )\mu +2\lambda \mu ^{2}}{(r+\lambda )^{3}}. $$

Hence,

$$ \frac{(R_{r+\lambda} \theta _{r})'(y^{*})}{\varphi '_{r+\lambda}(y^{*})} \varphi _{r+\lambda}(x)= \frac{e^{(y^{*}-x)\big(\mu +\sqrt{2(r+\lambda )+\mu ^{2}}\big)}\big((r\gamma -2y^{*})(r+\lambda )-2\mu \big)}{(r+\lambda )^{2}\big(\mu + \sqrt{2(r+\lambda )+\mu ^{2}}\big)}, $$

where \(x > y^{*}\). Using these expressions, we can write down the value function \(V_{r, \lambda}(x)\) in (11). Further, this implies that the order of convergence of the value is \(\lambda \), which, interestingly, differs from that of the thresholds.
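As a sanity check, the resolvent expression above can be compared with a direct numerical evaluation of \(\lambda (R_{r+\lambda}\theta _{r})(x) = \lambda \int _{0}^{\infty} e^{-(r+\lambda )t}\, \mathbb{E}_{x}[\theta _{r}(X_{t})]\, dt\), where \(X_{t} \sim N(x+\mu t, t)\). A Python sketch with arbitrary test parameters:

```python
import math

gamma, mu, r = 0.2, 0.1, 0.5     # arbitrary test parameters
lam, x = 5.0, 1.3
q = r + lam

theta = lambda z: z**2 + gamma*(mu - r*z)

# Closed-form expression for lam * (R_{r+lam} theta_r)(x) from the text
closed = lam*(q*(1 + x*(x - r*gamma)*q) + q*(2*x + gamma*lam)*mu + 2*mu**2) / q**3

# Direct evaluation: X_t ~ N(x + mu t, t) gives E[theta_r(X_t)] = theta_r(x + mu t) + t,
# and we integrate lam * e^{-q t} * E[theta_r(X_t)] over [0, T] with the midpoint rule
h, T = 1e-4, 10.0
direct = lam*h*sum(math.exp(-q*(k + 0.5)*h) * (theta(x + mu*(k + 0.5)*h) + (k + 0.5)*h)
                   for k in range(int(T/h)))
print(closed, direct)            # the two values agree
```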

5.2 Controlled Ornstein-Uhlenbeck Process

Consider dynamics that are characterized by a stochastic differential equation

$$ dX_{t} = - \delta X_{t} dt + dW_{t}, \quad X_{0}=x, $$

where \(\delta > 0\). This diffusion is often used to model continuous-time systems with mean-reverting behaviour. To illustrate the results, we choose the running cost \(\pi (x) = \lvert x \rvert \), and consequently \(\theta (x) = \lvert x \rvert -\gamma (\delta + r) x\). The scale density and the density of the speed measure are in this case

$$ S'(x)= \exp ( \delta x^{2}) , \quad m'(x)=2 \exp (- \delta x^{2}), $$

and the minimal \(r\)-excessive functions read as (see p.141 of [6])

$$ \varphi _{\lambda}(x) = e^{\frac{\delta x^{2}}{2}} D_{-\lambda / \delta}(x \sqrt{2 \delta}), \quad \psi _{\lambda}(x) = e^{ \frac{\delta x^{2}}{2}} D_{-\lambda / \delta}(-x \sqrt{2 \delta}), $$

where \(D_{\nu}(x)\) is a parabolic cylinder function. We note that our Assumptions 1 and 2 are satisfied and that this process is positively recurrent (see p.141 of [6]). Unfortunately, the equations for the optimal thresholds take rather complicated forms, and thus the results of Proposition 8 are only illustrated numerically in Figs. 3 and 4.

Fig. 3: Threshold boundaries as a function of the intensity of the Poisson process with the parameters \(\gamma = 0.1\), \(\delta =1\) and \(r=1\)

Fig. 4: Threshold boundaries as a function of the discounting with the parameters \(\gamma = 0.1\), \(\delta =1\) and \(\lambda =20\)