Abstract
We study the asymptotic relations between certain singular and constrained control problems for one-dimensional diffusions with both discounted and ergodic objectives. In the constrained control problems the controlling is allowed only at independent Poisson arrival times. We show that when the underlying diffusion is recurrent, the solutions of the discounted problems converge in an Abelian sense to those of their ergodic counterparts. Moreover, we show that the solutions of the constrained problems converge to those of their singular counterparts when the Poisson rate tends to infinity. We illustrate the results with drifted Brownian motion and the Ornstein–Uhlenbeck process.
1 Introduction
This paper is concerned with asymptotic relations between certain discounted and ergodic control problems for one-dimensional diffusions. More precisely, the following control problems are considered:

(A)
Classical singular stochastic control problems with both discounted and ergodic criteria

(B)
Constrained bounded variation control problems where controlling is allowed only at the independent Poisson arrival times with both discounted and ergodic criteria
These control problems are expected to be linked to each other via certain limiting properties. For instance, it is often expected that in item (A), the values of the problems with discounted criterion are connected to the ergodic problems in an Abelian sense, when the discounting factor vanishes. This relationship, often called the vanishing discount method and sometimes applied in a heuristic manner, can be used to solve the ergodic problems [10, 19, 20].
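In symbols, the Abelian connection can be sketched as follows; here \(V_r\) denotes the value of a discounted problem and \(\beta\) the value of its ergodic counterpart (the notation anticipates Sects. 3 and 4, and the display is a heuristic summary rather than a precise statement):

```latex
% Vanishing discount (Abelian) limit: the discounted value, scaled by the
% discounting factor r, approaches the long-run average value as r vanishes.
\lim_{r \downarrow 0} \, r V_r(x) \;=\; \beta
\qquad \text{for every initial state } x .
```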
Regarding item (B), problems of this form have attracted attention in recent years [14, 15, 17, 18, 23]. For related studies in optimal stopping, see [9, 11, 13]. In these problems, it is reasonable to expect that the value functions of the constrained problems should converge to the values of their singular counterparts as the Poisson arrival rate of the control opportunities tends to infinity.
The main contribution of this paper is that we prove these expectations to be correct for time-homogeneous control problems with one-dimensional diffusion dynamics; our findings are summarized in Fig. 1. The reasons for choosing this framework are twofold. These diffusion models are important in many applications, and furthermore, the time-homogeneous structure allows explicit calculations, by which we can first solve the HJB equations of both the discounted and ergodic problems separately and then establish that the solutions satisfy the desired limiting properties. This is in contrast to the vanishing discount method, where the HJB equation of the ergodic problem is solved using the solution of the discounted problem [20].
The remainder of the paper is organized as follows. In Sect. 2, we set up the diffusion dynamics. In Sect. 3, we introduce the control problems and study the functionals appearing in their analysis. The asymptotic relations are proved in Sect. 4. The paper is concluded with explicit examples in Sect. 5.
2 Underlying Dynamics
Let \((\Omega , \mathcal{F}, \{\mathcal{F}_{t}\}_{t \geq 0}, \mathbb{P})\) be a filtered probability space satisfying the usual conditions. We consider an uncontrolled real-valued process \(X\) defined on \((\Omega , \mathcal{F}, \{\mathcal{F}_{t}\}_{t \geq 0}, \mathbb{P})\), modelled as a strong solution to the Itô stochastic differential equation
$$ dX_{t} = \mu (X_{t})\,dt + \sigma (X_{t})\,dW_{t}, \qquad X_{0} = x, $$
where \(W_{t}\) is a Wiener process and the functions \(\mu \) and \(\sigma \) are well-behaved (see Chap. 5 of [12]). For notational convenience, we consider the case where the process evolves in \(\mathbb{R}_{+}\); all results remain unchanged if the state space is replaced with any interval in ℝ.
We define the second-order linear differential operator \(\mathcal{A}\), which represents the infinitesimal generator of the diffusion \(X\), as
$$ \mathcal{A} = \frac{1}{2}\sigma ^{2}(x)\frac{d^{2}}{dx^{2}} + \mu (x)\frac{d}{dx}, $$
and for a given \(r > 0\), we respectively denote the increasing and decreasing solutions to the differential equation \((\mathcal{A}-r)f=0\) by \(\psi _{r} > 0\) and \(\varphi _{r} > 0\). These solutions are often called the fundamental solutions and can be identified as the minimal \(r\)-excessive functions of the diffusion \(X\) (see p. 19 of [6]).
We define a set \(\mathcal{L}_{1}^{r}\) of functions \(f\) that satisfy the integrability condition \(\mathbb{E}_{x} [ \int _{0}^{\infty}e^{-r s} \lvert f(X_{s}) \rvert ds ] < \infty \). Using this notation, we define the inverse of the differential operator \((r-\mathcal{A})\), called the resolvent \(R_{r}\), by
$$ (R_{r}f)(x) = \mathbb{E}_{x}\Big[ \int _{0}^{\infty} e^{-rs} f(X_{s})\,ds \Big] $$
for all \(x \in \mathbb{R}_{+}\) and \(f \in \mathcal{L}_{1}^{r}\). We also define the scale density of the diffusion by
$$ S'(x) = \exp \Big( -\int^{x} \frac{2\mu (y)}{\sigma ^{2}(y)}\,dy \Big), $$
which is the derivative of the monotonic (and non-constant) solution to the differential equation \(\mathcal{A} f=0\).
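As a concrete sanity check of the resolvent defined above, the following sketch estimates \((R_{r}f)(x)\) by Monte Carlo for a standard Brownian motion (a hypothetical special case with \(\mu \equiv 0\), \(\sigma \equiv 1\), chosen only because a closed form is available): for \(f(x) = x^{2}\) one has \((R_{r}f)(x) = x^{2}/r + 1/r^{2}\).

```python
import math
import random

def resolvent_mc(f, x, r, n_paths=2000, dt=0.01, horizon=10.0, seed=1):
    """Monte Carlo estimate of (R_r f)(x) = E_x[ int_0^inf e^{-rs} f(X_s) ds ]
    for a standard Brownian motion started at x (integral truncated at `horizon`)."""
    rng = random.Random(seed)
    decay = math.exp(-r * dt)
    steps = round(horizon / dt)
    total = 0.0
    for _ in range(n_paths):
        X, disc, acc = x, 1.0, 0.0
        for _ in range(steps):
            acc += disc * f(X) * dt      # left-endpoint Riemann sum of e^{-rs} f(X_s)
            disc *= decay
            X += math.sqrt(dt) * rng.gauss(0.0, 1.0)
        total += acc
    return total / n_paths

# For f(x) = x^2 the exact value is x^2 / r + 1 / r^2; here x = r = 1.
est = resolvent_mc(lambda y: y * y, x=1.0, r=1.0)
exact = 2.0
```

With 2000 paths the estimate agrees with the closed form to within a few percent; the truncation error is of order \(e^{-r \cdot \text{horizon}}\).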
Often in computations it is useful to use the formula
$$ (R_{r}f)(x) = B_{r}^{-1} \Big( \varphi _{r}(x) \int _{0}^{x} \psi _{r}(y) f(y) m'(y)\,dy + \psi _{r}(x) \int _{x}^{\infty} \varphi _{r}(y) f(y) m'(y)\,dy \Big), $$
which connects the resolvent and the fundamental solutions \(\psi _{r}\) and \(\varphi _{r}\) (see p. 19 of [6]). Here the positive constant, which is independent of \(x\),
$$ B_{r} = \frac{\psi _{r}'(x)\varphi _{r}(x) - \psi _{r}(x)\varphi _{r}'(x)}{S'(x)} $$
is the Wronskian of the fundamental solutions and
$$ m'(x) = \frac{2}{\sigma ^{2}(x) S'(x)} $$
denotes the density of the speed measure. We also recall the resolvent equation (see p. 4 of [6])
$$ R_{r} f - R_{q} f = (q - r) R_{r} R_{q} f, \qquad r, q > 0. $$
3 The Control Problems
To pose the assumptions for the control problems, we define the function \(\theta _{r}: \mathbb{R}_{+} \to \mathbb{R}\) as \(\theta _{r}(x)=\pi (x)+\gamma \rho (x)\), where \(\gamma \) is a positive constant and \(\pi : \mathbb{R}_{+} \to \mathbb{R}\) is the cost function. Here, the function \(\rho : \mathbb{R}_{+} \to \mathbb{R}\) is defined as \(\rho (x)=\mu (x)-rx\), where \(r\) is a positive constant called the discounting factor. In the economic literature, the function \(\theta _{r}\) can be understood as the net convenience yield of holding inventories [2, 8]. This function appears in a wide range of control problems of one-dimensional diffusions when the criterion to be minimized includes discounting [2, 14, 16].
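To see heuristically why \(\theta_r\) arises, one can apply Itô's formula to \(e^{-rt}X_t\) for a downward-controlled process \(dX_t = \mu(X_t)\,dt + \sigma(X_t)\,dW_t - dD_t\); under suitable transversality and integrability conditions, this turns the control cost into a running cost. A sketch of this standard computation (cf. [2, 14]):

```latex
% Ito's formula applied to e^{-rt} X_t for the controlled process:
e^{-rT} X_T - x
  = \int_0^T e^{-rt}\bigl(\mu(X_t) - r X_t\bigr)\,dt
    + \int_0^T e^{-rt}\sigma(X_t)\,dW_t
    - \int_0^T e^{-rt}\,dD_t .
% Taking expectations and letting T -> infinity (transversality kills the
% left-hand side and the martingale term), then multiplying by gamma:
\gamma\,\mathbb{E}_x\!\left[\int_0^\infty e^{-rt}\,dD_t\right]
  = \gamma x + \gamma\,\mathbb{E}_x\!\left[\int_0^\infty e^{-rt}\rho(X_t)\,dt\right],
% so the total discounted cost collapses to a running cost with rate theta_r:
\mathbb{E}_x\!\left[\int_0^\infty e^{-rt}\bigl(\pi(X_t)\,dt + \gamma\,dD_t\bigr)\right]
  = \gamma x + \mathbb{E}_x\!\left[\int_0^\infty e^{-rt}\theta_r(X_t)\,dt\right].
```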
In addition, we note that in the absence of discounting (\(r=0\)), the function \(\theta _{r}\) reduces to
$$ \pi _{\mu}(x) := \pi (x) + \gamma \mu (x), $$
which plays a key role in many ergodic control problems of one-dimensional diffusions [5, 15].
We study the control problems under the following assumptions, which guarantee semiexplicit solvability of the control problems defined at the end of this section.
Assumption 1
We assume that:

1.
the upper boundary \(\infty \) and the lower boundary 0 are natural,

2.
the cost function \(\pi \) is continuous, nonnegative and nondecreasing,

3.
the functions \(\theta _{r}\) and id\(: x \mapsto x\) are in \(\mathcal{L}_{1}^{r}\),

4.
there is a unique state \(x^{*} \geq 0\) such that \(\theta _{r}\) (for \(r \geq 0\)) is decreasing on \((0, x^{*} )\) and increasing on \((x^{*} , \infty )\), and satisfies the limiting condition \({ \lim _{x \to \infty} \theta _{r}(x) \geq 0}\).
Assumption 2
We assume that:

1.
\(m(0,y)=\int _{0}^{y} m'(z)dz <\infty \) and \(\int _{0}^{y} \pi _{\mu}(z) m'(z)dz<\infty \) for all \(y \in \mathbb{R}_{+}\),

2.
\(\lim _{x \downarrow 0} S'(x) = \infty \).
We make some remarks on these assumptions. First, we assume that the uncontrolled state variable \(X\) cannot become infinitely large or reach zero in finite time; see pp. 18–20 of [6] for a characterization of the boundary behavior of diffusions. Second, the cost function is nondecreasing and nonnegative, which is in line with usual economic applications. Third, the resolvents \((R_{r} \theta _{r})(x)\) and \((R_{r} \, \text{id})(x)\) exist. Fourth, we restrict our attention to the case where the function \(\theta _{r}\) (respectively \(\pi _{\mu}\)) has a unique global minimum at \(x^{*}\); item 4 of Assumption 1 essentially guarantees that the equations (presented at the end of this section) for the optimal control boundaries have a unique solution. Finally, the first part of Assumption 2 guarantees that the underlying diffusion has a stationary distribution (see p. 37 of [6]).
Lastly, we assume that
Assumption 3
The process \(N_{t}\) is a Poisson process with parameter \(\lambda \geq 0\), independent of the underlying diffusion \(X_{t}\). Furthermore, we assume that the filtration \(\{\mathcal{F}_{t}\}_{t \geq 0}\) is rich enough to carry the Poisson process \(N = (N_{t},\mathcal{F}_{t})_{t \geq 0}\). We denote the jump times of \(N_{t}\) by \(T_{i}\).
Before stating the control problems, we define the auxiliary functions \({\pi _{\gamma}: \mathbb{R}_{+} \to \mathbb{R}}\) and \(g: \mathbb{R}_{+} \to \mathbb{R}\) as
where \(\gamma \) and \(r\) are the same positive constants as in the definition of \(\theta _{r}\), and \(\lambda \) is the intensity of the Poisson process in Assumption 3. The next lemma gives useful relationships between these auxiliary functions, \(\theta _{r}\) and \(\pi \). The lemma can be proved using the resolvent equation (4) and the harmonicity property \((\mathcal{A}-r)(R_{r}\pi )(x)+\pi (x)=0\).
Lemma 1
Assume that \(g, \pi _{\gamma}, \theta _{r} \in \mathcal{L}_{1}^{r}\). Then
The following functionals play a key role in the main results of the study, as they offer convenient representations of the integral functionals involving \(\psi _{r}\) and \(\varphi _{r}\):
We now formulate downward singular control problems of one-dimensional diffusions and similar problems where controlling is allowed only at exogenously given Poisson arrival times. The latter problem is called the constrained control problem.
We assume in all of the following theorems, that the controlled dynamics are given by the stochastic differential equation
where \(D_{t}\) denotes the applied control policy and \(\gamma \) is a positive constant (the coefficient \(\gamma \) is often interpreted as a proportional transaction cost).
Assumption 4
Admissible control policies

1.
In the singular problems (Theorems 2 and 3 below), we call a control policy \(D^{s}_{t}\) admissible, if it is nonnegative, nondecreasing, rightcontinuous, and \(\{\mathcal{F}_{t}\}_{t \geq 0}\)adapted, and denote the set of admissible controls by \(\mathcal{D}_{s}\).

2.
In the constrained problems (Theorems 4 and 5 below) the set of admissible controls \(\mathcal{D}\) is given by those nondecreasing, leftcontinuous processes \(D_{t}\) that have the representation
$$ D_{t}=\int _{[0,t)} \eta _{s} dN_{s}, $$where \(N\) is a Poisson process and the integrand \(\eta \) is \(\{\mathcal{F}_{t}\}_{t \geq 0}\)predictable.
We introduce the main results of the control problems below and refer to [1, 3–5, 14, 15] for full discussions.
Theorem 2
Singular control with discounted criterion (pp. 1701–1702 of [1], pp. 714–715 of [3])
Under Assumption 1, the optimal control policy minimizing the objective
where \(D^{s}_{t} \in \mathcal{D}_{s}\) is
where \(\mathcal{L}(t, y^{*}_{s})\) denotes the local time push of the process \(X_{t}\) at the boundary \(y^{*}_{s}\). The boundary \(y^{*}_{s}\) is characterized by the unique solution to the equation
Moreover, the value of the problem reads as
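For orientation, the objective in Theorem 2 is the standard expected cumulative discounted cost with a proportional control cost, as used in [1, 3]; in the present notation it reads:

```latex
J_r\bigl(x, D^{s}\bigr)
  = \mathbb{E}_x\!\left[ \int_0^{\infty} e^{-rt}
      \Bigl( \pi\bigl(X^{D^{s}}_t\bigr)\,dt + \gamma\,dD^{s}_t \Bigr) \right],
\qquad D^{s} \in \mathcal{D}_s .
```

Minimizing this criterion over \(\mathcal{D}_s\) leads, under Assumption 1, to the reflection policy at \(y^{*}_{s}\) described in the theorem.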
Theorem 3
Singular control with ergodic criterion (pp. 16–17 of [5], p. 7 of [4])
Under Assumptions 1 and 2, the optimal control policy minimizing the objective
where \(D^{s}_{t} \in \mathcal{D}_{s}\) is
where \(\mathcal{L}(t, b^{*}_{s})\) denotes the local time push of the process \(X_{t}\) at the boundary \(b^{*}_{s}\). The boundary \(b^{*}_{s}\) is characterized by the unique solution to the equation
Moreover, the long run average cumulative yield reads as
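For comparison, the ergodic criterion of Theorem 3 is of the long-run average form used in [4, 5] (stated here with \(\liminf\); some of the cited references work with \(\limsup\)):

```latex
\liminf_{T \to \infty} \frac{1}{T}\,
  \mathbb{E}_x\!\left[ \int_0^{T} \pi\bigl(X^{D^{s}}_t\bigr)\,dt
    + \gamma D^{s}_T \right],
\qquad D^{s} \in \mathcal{D}_s ,
```

so that discounting is replaced by time-averaging, and the optimal reflection boundary becomes \(b^{*}_{s}\).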
Theorem 4
Control with discounted criterion and constraint (p. 115 of [14])
Under Assumption 1, the optimal control policy that minimizes the objective
where \(D_{t} \in \mathcal{D}\), is as follows. If the controlled process \(X^{D}\) is above the threshold \(y^{*}\) at a jump time \(T_{i}\) of \(N\), i.e. \(X^{D}_{T_{i}} > y^{*}\) for any \(i\), the decision maker should take the controlled process \(X^{D}\) to \(y^{*}\). Further, the threshold \(y^{*}\) is uniquely determined by
which can be rewritten as
In addition, the value function \(V_{r, \lambda}(x):= \inf _{D \in \mathcal{D}} J(x,D)\) of the problem reads as
where
Proof
We only prove that the optimality condition can be rewritten as (10), and refer to [14] for the rest of the claim. To prove the representation, we first use Lemma 6, and then the formulas (5) and (7), to get
Utilizing Lemma 6 again, we see that the optimality condition has the form
□
Theorem 5
Control with ergodic criterion and constraint (p. 16 of [15])
Under Assumptions 1 and 2, and assuming that \(\pi (x) \geq C(x^{\alpha}-1)\), where \(\alpha \) and \(C\) are positive constants, the optimal control policy minimizing the objective
where \(D_{t} \in \mathcal{D}\), is as follows. If the controlled process \(X^{D}\) is above the threshold \(b^{*}\) at a jump time \(T_{i}\) of \(N\), i.e. \(X^{D}_{T_{i}} > b^{*}\) for any \(i\), the decision maker should take the controlled process \(X^{D}\) to \(b^{*}\). Further, the threshold \(b^{*}\) is uniquely determined by
and the long run average cumulative yield \(\beta _{\lambda}\) reads as
Remark 1
The boundary classifications for the underlying diffusion can be relaxed in all of the above theorems. For example, in Theorem 3 it can be shown that the results stay unchanged when the lower boundary is exit or killing; see p. 5 of [14].
The optimal policies in the above theorems can be summarised as follows. In the singular control problems the optimal policies are local time type barrier policies. In other words, when the process is below some constant boundary \(y^{*}_{s}\) the process should be left uncontrolled, but it should never be allowed to cross it, i.e. it is reflected at \(y^{*}_{s}\). The situation in the problems with constraint is similar: when the process is below some threshold \(y^{*}\) we do not act, but if the process crosses the boundary, and the Poisson process jumps, we immediately push it down to \(y^{*}\) and start it anew. In other words, the optimal strategy in all of the problems is to exert control at the ‘maximum rate’ when the process is at (or above) the corresponding boundary.
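The two types of policies can be illustrated with a small simulation, here for a hypothetical drifted Brownian motion with unit volatility and an illustrative barrier \(b\) (both chosen for demonstration only, not computed from the optimality conditions): the singular policy reflects the path at \(b\), while the constrained policy may push the path down to \(b\) only at Poisson arrival times, so excursions above \(b\) survive between arrivals.

```python
import math
import random

def simulate(barrier, lam, mu=0.5, x0=0.0, T=50.0, dt=0.01, seed=7):
    """Simulate dX = mu dt + dW controlled downward at `barrier`.

    lam is None -> singular policy: the path is reflected at the barrier.
    lam >= 0    -> constrained policy: the path is pushed down to the
                   barrier only at Poisson(lam) arrival times."""
    rng = random.Random(seed)
    X, path = x0, []
    for _ in range(round(T / dt)):
        X += mu * dt + math.sqrt(dt) * rng.gauss(0.0, 1.0)
        if lam is None:
            X = min(X, barrier)                       # local-time reflection
        elif rng.random() < lam * dt and X > barrier:
            X = barrier                               # act only at Poisson times
        path.append(X)
    return path

b = 1.0
reflected = simulate(b, None)     # singular (reflecting) control
constrained = simulate(b, 5.0)    # Poisson-constrained control
```

By construction the reflected path never exceeds \(b\), whereas the constrained path overshoots it between control opportunities; increasing \(\lambda\) shortens these excursions, in line with the convergence results of Sect. 4.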
4 Main Results
In the next lemma, we collect useful representations for the functionals \(K_{f}^{r}\) and \(L_{f}^{r}\).
Lemma 6
The functions \(L_{f}^{r}\) and \(K_{f}^{r}\) have the alternative representations
Proof
The claim on \(L_{f}^{r}\) is Lemma 2 of [14]; the proof for \(K_{f}^{r}\) is completely analogous. □
Under our assumption that the boundaries are natural, we have that
and thus, we can further rewrite
In the next lemma we prove that these functionals satisfy asymptotic properties that are needed to establish the relationships between the introduced control problems.
Lemma 7
Under Assumption 1, we have the limits
In addition, if the underlying diffusion \(X_{t}\) is recurrent, i.e. \(\mathbb{P}_{x}[\tau _{z} < \infty ] = 1\) for all \(x,z \in \mathbb{R}_{+}\), then
where
Proof
Let \(\tau _{z} = \inf \{ t \geq 0 \mid X_{t} = z \}\). Then for all \(s > 0\) we have
Therefore, by letting \(s \to 0+\) we get by monotone convergence that
under the assumption that the underlying diffusion is recurrent. In addition, again by (14), we find that
Since \(\lim _{r \to 0+} \theta _{r}(x) = \pi _{\mu}(x)\), we see by using the above observations that by monotone convergence
Similarly, by utilizing (16) we obtain
□
Proposition 8
Asymptotics of the optimal thresholds
Under Assumptions 1, 2 and 3, the optimal thresholds satisfy the following asymptotic results in terms of the intensity of the Poisson process:
and if the underlying diffusion is also recurrent, we have the vanishing discount factor limits
Proof
We prove the first and the last claim, as the second claim is Proposition 3 of [15] and the third claim is Lemma 3.1 of [4]. Define the functions
and let \(y^{*}_{s}(r)\), \(y^{*}(\lambda )\), \(b^{*}(\lambda )\) be such that \(K_{\theta _{r}}^{r}(y^{*}_{s}(r))=0\), \(G_{r, \lambda}(y^{*}(\lambda ))=0\) and \(F_{\lambda}(b^{*}(\lambda ))=0\). We note that these roots are unique and define the optimal boundaries in Theorems 2, 4 and 5.
The first claim is to show that the unique root \(y^{*}(\lambda )\) of the function \(G_{r, \lambda}(x)\) converges to \(y^{*}_{s}\) in the limit \(\lambda \to \infty \). We find that
Because the function \(\lambda \mapsto \frac{\varphi _{r+\lambda}(x)}{\varphi _{r+\lambda}'(x)}\) is increasing and bounded from above by 0 (Lemma 6 of [22]), we find by Lemma 7 that
Thus, because the function \(\frac{K_{\theta _{r}}^{r}(x)}{\psi _{r}'(x)}\) changes sign (see Lemma 3.1 of [3]), we have by (17) and the equations (8) and (10) that
To prove the last claim, we find utilizing (13) and Lemma 7 that
Hence,
Therefore, as the function \(F_{\lambda}(x)\) changes sign (see Proposition 1 of [15]), the claim follows by (18) and the equations (12) and (10). □
Similar limiting results hold also for the corresponding values of the defined control problems. However, it is clear that in terms of the vanishing discount factor, the results hold only in the following Abelian sense.
Proposition 9
Asymptotics of the values
Under Assumptions 1, 2 and 3, the values of the control problems satisfy the following asymptotic results:
Also, if the underlying diffusion is recurrent, we have the following Abelian limits
Proof
For the last claim, see Lemma 3.1 of [4]. To prove the third claim, we first rewrite the value function (11) using Lemma 1 as
where
We notice that when \(x > y^{*}\), the function \(r V_{r, \lambda}(x)\) admits a convenient representation for the limit \(r \to 0\). However, when \(x < y^{*}\) we have to proceed as follows. Because \(V_{r, \lambda}(x)\) is continuous across the boundary \(y^{*}\), we find
which can be reorganized as
Thus, we get that
Using the formulas (3) and (15), we see that
and thus, by (15) we have
Therefore, by continuity and Proposition 8, the value function satisfies
Finally, utilizing (3) and (12), the limiting value reads as
which completes the proof of the third claim.
To prove the first claim, we notice that the value function \(V_{r, \lambda}(x)\) is independent of \(\lambda \) when \(x< y^{*}\). Thus, we can focus on the region \(x>y^{*}\). We reorganize the terms of \(V_{r, \lambda}(x)\) in the upper region as
where
Because diffusions are Feller processes, we know that \(\lambda (R_{r+\lambda}\theta _{r}) \to \theta _{r}\) as \(\lambda \to \infty \) (in sup-norm); see p. 235 of [21]. Thus,
To deal with the remaining terms in (19), we note that by (20)
Utilizing the above we get by (15)
and
As the value function \(V^{s}_{r}(x)\) is continuous across the boundary \(y^{*}_{s}\), we further find that
Combining the above limits, the result follows by continuity and Proposition 8.
Lastly, the second claim of the proposition follows by continuity of the functions and Proposition 8, as \(\beta ^{s}\) can also be represented as (see p. 17 of [5])
□
Unfortunately, our results do not answer the interesting question of the rate of any of these convergences. It seems that different underlying dynamics yield different rates of convergence, and further, the threshold boundaries can have a different rate of convergence than the value functions. Thus, we believe it to be hard to find general conditions for the rate of convergence. However, in the next section we show the rate of convergence in an explicit case.
Another interesting question is the convergence of the controlled processes. A portfolio selection problem under vanishing transaction costs is studied in [7], where the authors show that the optimal thresholds and value functions under transaction costs converge to those without transaction costs as the costs vanish. Using this, the authors further argue that the controlled wealth processes converge weakly. Proving a similar result in our case unfortunately seems more formidable. For instance, in [7] the fact that the controlled processes are bounded simplifies the proof of tightness of the approximating sequence. In our case, this does not hold, which makes the problem more difficult. This interesting question of convergence is left for future research.
5 Illustration
5.1 Brownian Motion with Drift
Let the underlying process \(X_{t}\) be defined by
$$ dX_{t} = \mu \,dt + dW_{t}, $$
where \(\mu > 0\). Also, we let the process evolve in ℝ and choose a quadratic running cost \(\pi (x) = x^{2}\). The minimal excessive functions are in this case known to be
$$ \psi _{r}(x) = e^{(\sqrt{\mu ^{2}+2r}-\mu )x}, \qquad \varphi _{r}(x) = e^{-(\sqrt{\mu ^{2}+2r}+\mu )x}, $$
and the scale density and speed measure read as
$$ S'(x) = e^{-2\mu x}, \qquad m'(x) = 2e^{2\mu x}, $$
respectively. The net convenience yield now takes the form \(\theta _{r}(x) = x^{2} + \gamma (\mu - rx)\). We notice immediately that Assumption 1 holds, as the boundary assumption can be relaxed in the case of the discounted problems (Theorems 2 and 4); see Remark 1. However, it is clear that the process does not satisfy the ergodicity properties needed for the results concerning the limit \(r \to 0\).
To illustrate the results of Proposition 8, we solve the optimality conditions (8) and (10). Conveniently, the solutions to these equations can be represented explicitly. To solve the equations, we need to find the functions \(K_{\theta _{r}}^{r}(x)\) and \(L_{\theta _{r}}^{r+\lambda}(x)\). Elementary integration yields
where \(\alpha _{r}^{+} = -\mu + \sqrt{2r + \mu ^{2}}\) and \(\alpha _{r}^{-} = -\mu - \sqrt{2r + \mu ^{2}}\). Plugging these representations into the equations (8) and (10), a simplification yields as solutions the thresholds
Using these explicit representations, we obtain the limits stated in Proposition 8. The thresholds are illustrated in Fig. 2, which also shows that the order of convergence is \(\sqrt{\lambda}\).
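The exponents used here can be checked numerically. The sketch below assumes unit volatility, so that \(\psi_r(x) = e^{\alpha_r^{+} x}\) and \(\varphi_r(x) = e^{\alpha_r^{-} x}\) with \(\alpha_r^{\pm} = -\mu \pm \sqrt{\mu^2 + 2r}\), and verifies by finite differences that both exponentials solve \((\mathcal{A} - r)f = \tfrac{1}{2}f'' + \mu f' - rf = 0\).

```python
import math

mu, r = 0.5, 0.1  # illustrative parameter values
alpha_plus = -mu + math.sqrt(mu * mu + 2.0 * r)   # psi_r(x) = exp(alpha_plus * x)
alpha_minus = -mu - math.sqrt(mu * mu + 2.0 * r)  # phi_r(x) = exp(alpha_minus * x)

def generator_residual(alpha, x, h=1e-4):
    """Central finite-difference evaluation of (1/2) f'' + mu f' - r f at x
    for f(y) = exp(alpha * y); it vanishes iff alpha solves
    the characteristic equation (1/2) a^2 + mu a - r = 0."""
    f = lambda y: math.exp(alpha * y)
    d1 = (f(x + h) - f(x - h)) / (2.0 * h)
    d2 = (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)
    return 0.5 * d2 + mu * d1 - r * f(x)

# alpha_plus > 0 > alpha_minus, so psi_r is increasing and phi_r is decreasing.
```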
To find the value function \(V_{r, \lambda}(x)\) in the region \(x \geq y^{*}\) we first calculate that the resolvent is equal to
Hence,
where \(x > y^{*}\). Using these expressions, we can write down the value function \(V_{r, \lambda}(x)\) in (11). Further, this implies that the order of convergence of the value is \(\lambda \), which, interestingly, differs from that of the thresholds.
5.2 Controlled OrnsteinUhlenbeck Process
Consider dynamics characterized by the stochastic differential equation
$$ dX_{t} = -\delta X_{t}\,dt + dW_{t}, $$
where \(\delta > 0\). This diffusion is often used to model continuous-time systems that have mean-reverting behaviour. To illustrate the results we choose the running cost \(\pi (x) = \lvert x \rvert \), and consequently \(\theta _{r}(x) = \lvert x \rvert - \gamma (\delta + r) x\). The scale density and the density of the speed measure are in this case
$$ S'(x) = e^{\delta x^{2}}, \qquad m'(x) = 2e^{-\delta x^{2}}, $$
and the minimal \(r\)-excessive functions read as (see p. 141 of [6])
$$ \psi _{r}(x) = e^{\delta x^{2}/2} D_{-r/\delta}\bigl(-x\sqrt{2\delta}\bigr), \qquad \varphi _{r}(x) = e^{\delta x^{2}/2} D_{-r/\delta}\bigl(x\sqrt{2\delta}\bigr), $$
where \(D_{\nu}(x)\) is a parabolic cylinder function. We note that our Assumptions 1 and 2 are satisfied and that this process is positively recurrent (see p. 141 of [6]). Unfortunately, the equations for the optimal thresholds take rather complicated forms, and thus the results of Proposition 8 are only illustrated numerically in Figs. 3 and 4.
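The positive recurrence can be made concrete via the speed measure: the stationary density is proportional to \(m'(x) = 2e^{-\delta x^{2}}\), i.e. the stationary law is Gaussian with mean 0 and variance \(1/(2\delta)\). The sketch below (with an illustrative value \(\delta = 0.8\)) checks this by numerical integration.

```python
import math

delta = 0.8  # illustrative mean-reversion rate

def m_prime(x):
    """Speed density of dX = -delta X dt + dW: m'(x) = 2 exp(-delta x^2)."""
    return 2.0 * math.exp(-delta * x * x)

def trapezoid(g, a, b, n=20000):
    """Plain trapezoidal rule on [a, b]."""
    h = (b - a) / n
    return h * (0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, n)))

mass = trapezoid(m_prime, -10.0, 10.0)                       # total speed measure
variance = trapezoid(lambda x: x * x * m_prime(x), -10.0, 10.0) / mass
# Analytically: mass = 2 * sqrt(pi / delta) and variance = 1 / (2 * delta).
```

The rapid Gaussian decay makes the truncation to \([-10, 10]\) and the trapezoidal rule essentially exact here.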
References
Alvarez, L.H.R.: Singular stochastic control, linear diffusions, and optimal stopping: a class of solvable problems. SIAM J. Control Optim. 39, 1697–1710 (2001)
Alvarez, L.H.R.: A class of solvable impulse control problems. Appl. Math. Optim. 49, 265–295 (2004)
Alvarez, L.H.R., Lempa, J.: On the optimal stochastic impulse control of linear diffusions. SIAM J. Control Optim. 47, 703–732 (2008)
Alvarez, L.H.R., Hening, A.: Optimal sustainable harvesting of populations in random environments. Stoch. Process. Appl. 150, 678–698 (2022)
Alvarez, L.H.R.: A class of solvable stationary singular stochastic control problems (2018). arXiv:1803.03464
Borodin, A.N., Salminen, P.: Handbook of Brownian Motion – Facts and Formulae, 2nd edn. Birkhäuser, Basel (2001)
Christensen, S., Irle, A., Ludwig, A.: Optimal portfolio selection under vanishing fixed transaction costs. Adv. Appl. Probab. 49, 1116–1143 (2017)
Dixit, A.K., Pindyck, R.S.: Investment Under Uncertainty. Princeton Univ. Press, Princeton (1994)
Dupuis, P., Wang, H.: Optimal stopping with random intervention times. Adv. Appl. Probab. 34, 141–157 (2002)
Gatarek, D., Stettner, L.: On the compactness method in general ergodic impulsive control of Markov processes. Stoch. Stoch. Rep. 31, 15–25 (1990)
Guo, X., Liu, J.: Stopping at the maximum of geometric Brownian motion when signals are received. J. Appl. Probab. 42, 826–838 (2005)
Karatzas, I., Shreve, S.E.: Brownian Motion and Stochastic Calculus. Springer, New York (1991)
Lempa, J.: Optimal stopping with information constraint. Appl. Math. Optim. 66, 147–173 (2012)
Lempa, J.: Bounded variation control of Itô diffusion with exogenously restricted intervention times. Adv. Appl. Probab. 46, 102–120 (2014)
Lempa, J., Saarinen, H.: Ergodic control of diffusions with random intervention times. J. Appl. Probab. 58, 1–21 (2021)
Matomäki, P.: On solvability of a two-sided singular control problem. Math. Methods Oper. Res. 76, 239–271 (2012)
Menaldi, J.L., Robin, M.: On some impulse control problems with constraint. SIAM J. Control Optim. 55, 3204–3225 (2017)
Menaldi, J.L., Robin, M.: On some ergodic impulse control problems with constraint. SIAM J. Control Optim. 56, 2690–2711 (2018)
Palczewski, J., Stettner, L.: Impulse control maximizing average cost per unit time: a nonuniformly ergodic case. SIAM J. Control Optim. 55, 936–960 (2017)
Robin, M.: Longterm average cost control problems for continuous time Markov processes: a survey. Acta Appl. Math. 1, 281–299 (1983)
Rogers, L.C.G., Williams, D.: Diffusions, Markov Processes and Martingales, vol. 1. Cambridge University Press, Cambridge (2001)
Saarinen, H.: Two-sided Poisson control of linear diffusions (2022). arXiv:2203.09573
Wang, H.: Some control problems with random intervention times. Adv. Appl. Probab. 33, 404–422 (2001)
Acknowledgements
We would like to gratefully acknowledge the emmy.network foundation under the aegis of the Fondation de Luxembourg, for its continued support.
Funding
Open Access funding provided by University of Turku (UTU) including Turku University Central Hospital.
Cite this article
Saarinen, H., Lempa, J. A Note on Asymptotics Between Singular and Constrained Control Problems of OneDimensional Diffusions. Acta Appl Math 181, 13 (2022). https://doi.org/10.1007/s1044002200530w
Keywords
 Bounded variation control
 Singular stochastic control
 Diffusion process
 Resolvent semigroup
 Poisson process
 Ergodic control