A Note on Asymptotics Between Singular and Constrained Control Problems of One-Dimensional Diffusions

We study the asymptotic relations between certain singular and constrained control problems for one-dimensional diffusions with both discounted and ergodic objectives. In the constrained control problems, controlling is allowed only at independent Poisson arrival times. We show that when the underlying diffusion is recurrent, the solutions of the discounted problems converge in an Abelian sense to those of their ergodic counterparts. Moreover, we show that the solutions of the constrained problems converge to those of their singular counterparts when the Poisson rate tends to infinity. We illustrate the results with a drifted Brownian motion and an Ornstein-Uhlenbeck process.


Introduction
This paper is concerned with asymptotic relations between certain discounted and ergodic control problems for one-dimensional diffusions. More precisely, the following control problems are considered:
(A) classical singular stochastic control problems with both discounted and ergodic criteria;
(B) constrained bounded variation control problems, where controlling is allowed only at independent Poisson arrival times, with both discounted and ergodic criteria.
These control problems are expected to be linked to each other via certain limiting properties. For instance, it is often expected that in item (A) the values of the problems with discounted criterion are connected to the ergodic problems in an Abelian sense as the discounting factor vanishes. This relationship, often called the vanishing discount method and sometimes used in a heuristic manner, can be used to solve ergodic problems [10,19,20].
Regarding item (B), problems of this form have attracted attention in recent years [14,15,17,18,23]. For related studies in optimal stopping, see [9,11,13]. In these problems, it is reasonable to expect that the value functions of the constrained problems converge to the values of their singular counterparts as the Poisson arrival rate of the control opportunities tends to infinity.
The main contribution of this paper is that we prove these expectations to be correct for time-homogeneous control problems with one-dimensional diffusion dynamics; our findings are summarized in Fig. 1. The motivation for this framework is two-fold: these diffusion models are important in many applications, and the time-homogeneous structure allows explicit calculations, by which we can first solve the HJB equations of both the discounted and the ergodic problems separately and then establish that the solutions satisfy the desired limiting properties. This is in contrast to the vanishing discount method, where the HJB equation of the ergodic problem is solved using the solution of the discounted problem [20].
The remainder of the paper is organized as follows. In Sect. 2, we set up the diffusion dynamics. In Sect. 3, we introduce the control problems and study the functionals appearing in their analysis. The asymptotic relations are proved in Sect. 4. The paper is concluded with explicit examples in Sect. 5.

Underlying Dynamics
Let (Ω, F, {F_t}_{t≥0}, P) be a filtered probability space satisfying the usual conditions. We consider an uncontrolled real-valued process X defined on (Ω, F, {F_t}_{t≥0}, P), modelled as a strong solution to the Itô stochastic differential equation

dX_t = μ(X_t) dt + σ(X_t) dW_t,

where W_t is a Wiener process and the functions μ and σ are well-behaved (see Chap. 5 of [12]). For notational convenience, we consider the case where the process evolves in R_+, even though all the results remain unchanged if the state space is replaced with any interval in R.
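As a numerical companion to the dynamics above, such an SDE can be simulated with the Euler-Maruyama scheme. The following sketch uses illustrative coefficient choices (the functions mu and sigma below are not from the paper):

```python
import numpy as np

# Euler-Maruyama sketch for dX_t = mu(X_t) dt + sigma(X_t) dW_t.
# The coefficient functions passed in below are illustrative choices.
def euler_maruyama(mu, sigma, x0, T=1.0, n=1000, seed=None):
    rng = np.random.default_rng(seed)
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))  # Brownian increment over dt
        x[i + 1] = x[i] + mu(x[i]) * dt + sigma(x[i]) * dw
    return x

path = euler_maruyama(mu=lambda x: 0.05 * x, sigma=lambda x: 0.2 * x, x0=1.0, seed=42)
```

With multiplicative coefficients as above, the sampled path stays in R_+ for moderate step sizes, matching the state space used in the paper.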
We define the second-order linear differential operator A, which represents the infinitesimal generator of the diffusion X, as

(A f)(x) = (1/2) σ²(x) f''(x) + μ(x) f'(x),

and for a given r > 0, we respectively denote the increasing and decreasing solutions to the differential equation (A − r)f = 0 by ψ_r > 0 and ϕ_r > 0. These solutions are often called the fundamental solutions and can be identified as the minimal r-excessive functions of the diffusion X (see p. 19 of [6]). We define the set L¹_r of functions f that satisfy the integrability condition E_x[∫_0^∞ e^{−rs} |f(X_s)| ds] < ∞. Using this notation, we define the inverse of the differential operator (r − A), called the resolvent R_r, by

(R_r f)(x) = E_x[∫_0^∞ e^{−rs} f(X_s) ds]

for all x ∈ R_+ and f ∈ L¹_r. We also define the scale density of the diffusion by

S'(x) = exp(−∫^x 2μ(y)/σ²(y) dy),

which is the derivative of the monotonic (and non-constant) solution to the differential equation Af = 0. In computations it is often useful to use the formula

(R_r f)(x) = B_r^{−1} (ϕ_r(x) ∫_0^x ψ_r(y) f(y) m'(y) dy + ψ_r(x) ∫_x^∞ ϕ_r(y) f(y) m'(y) dy),

which connects the resolvent and the fundamental solutions ψ_r and ϕ_r (see p. 19 of [6]).
Here the positive constant

B_r = (ψ_r'(x) ϕ_r(x) − ψ_r(x) ϕ_r'(x)) / S'(x),

which is independent of x, is the Wronskian of the fundamental solutions, and m'(x) = 2/(σ²(x) S'(x)) denotes the density of the speed measure. We also recall the resolvent equation (see p. 4 of [6])

R_r f − R_q f = (q − r) R_r R_q f,   r, q > 0.
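The fundamental solutions can be sanity-checked numerically in a concrete special case. For standard Brownian motion with drift μ and unit volatility, the increasing solution of (A − r)f = 0 with A f = 0.5 f'' + μ f' is ψ_r(x) = exp((−μ + √(μ² + 2r))x); this closed form is an illustrative assumption for the check, not part of the general setup above.

```python
import numpy as np

# Check that psi_r solves (A - r)f = 0 for Brownian motion with drift mu
# and unit volatility, where A f = 0.5 f'' + mu f'.
mu, r = 0.5, 0.1
beta = -mu + np.sqrt(mu**2 + 2 * r)   # exponent of the increasing solution
psi = lambda x: np.exp(beta * x)

x, h = 1.3, 1e-5
d1 = (psi(x + h) - psi(x - h)) / (2 * h)            # central first derivative
d2 = (psi(x + h) - 2 * psi(x) + psi(x - h)) / h**2  # central second derivative
residual = 0.5 * d2 + mu * d1 - r * psi(x)          # (A - r) psi evaluated at x
assert abs(residual) < 1e-4
```

The same finite-difference check applies to ϕ_r with the exponent −μ − √(μ² + 2r).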

The Control Problems
To pose the assumptions for the control problems, we define the function θ r : R + → R as θ r (x) = π(x) + γρ(x), where γ is a positive constant and π : R + → R is the cost function.
Here, the function ρ : R_+ → R is defined as ρ(x) = μ(x) − rx, where r is a positive constant called the discounting factor. In the economic literature, the function θ_r can be understood as the net convenience yield of holding inventories [2,8]. This function appears in a wide range of control problems of one-dimensional diffusions when the criterion to be minimized includes discounting [2,14,16].
In addition, we note that in the absence of discounting (r = 0), the function θ_r reduces to π_μ(x) := π(x) + γμ(x), which plays a key role in many ergodic control problems of one-dimensional diffusions [5,15]. We study the control problems under the following assumptions, which guarantee semi-explicit solvability of the control problems defined at the end of this section.

Assumption 1
We assume that:
1. the upper boundary ∞ and the lower boundary 0 are natural;
2. the cost function π is continuous, non-negative, and non-decreasing;
3. the functions θ_r and id : x ↦ x are in L¹_r;
4. there is a unique state x* ≥ 0 such that θ_r (for r ≥ 0) is decreasing on (0, x*) and increasing on (x*, ∞), and that it satisfies the limiting condition lim_{x→∞} θ_r(x) ≥ 0.
We make some remarks on these assumptions. First, we assume that the uncontrolled state variable X cannot become infinitely large or hit zero in finite time; see pp. 18-20 of [6] for a characterization of the boundary behavior of diffusions. Second, the cost function is non-decreasing and non-negative, which is in line with usual economic applications. Third, the resolvents (R_r θ_r)(x) and (R_r id)(x) exist. Fourth, we restrict our attention to the case where the function θ_r (π_μ) has a unique global minimum at x*. Item 4 essentially guarantees that the equations (presented at the end of this section) for the optimal control boundaries have a unique solution. Finally, the first part of Assumption 2 guarantees that the underlying diffusion has a stationary distribution (see p. 37 of [6]).
Lastly, we assume that

Assumption 3
The process N_t is a Poisson process with parameter λ ≥ 0, independent of the underlying diffusion X_t. Furthermore, we assume that the filtration {F_t}_{t≥0} is rich enough to carry the Poisson process N = (N_t, F_t)_{t≥0}. We denote the jump times of N_t by T_i.
Before stating the control problems, we define the auxiliary functions π_γ : R_+ → R and g : R_+ → R as where γ and r are the same positive constants as in the definition of θ_r and λ is the intensity of the Poisson process in Assumption 3. The next lemma gives useful relationships between these auxiliary functions, θ_r, and π. The lemma can be proved using the resolvent equation (4) and the harmonicity property (A − r)(R_r π)(x) + π(x) = 0.
Lemma 1 Assume that g, π_γ, θ_r ∈ L¹_r. Then
The following functionals play a key role in the main results of the study, as they offer convenient representations of the integral functionals that involve ψ_r and ϕ_r:
We now formulate downward singular control problems of one-dimensional diffusions and similar problems where controlling is allowed only at exogenously given Poisson arrival times. The latter problem is called a constrained control problem.
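The resolvent equation used in the proof sketch above has a clean finite-state analogue, where the generator is a rate matrix Q and the resolvent is the matrix inverse R_r = (rI − Q)^{-1}. The rate matrix below is an arbitrary illustrative example, not from the paper:

```python
import numpy as np

# Finite-state analogue of the resolvent equation:
# R_r - R_q = (q - r) R_r R_q, with R_r = (r*I - Q)^{-1}.
Q = np.array([[-1.0,  1.0,  0.0],   # illustrative conservative rate matrix
              [ 0.5, -1.5,  1.0],
              [ 0.0,  2.0, -2.0]])
I = np.eye(3)
r, q = 0.3, 0.7
Rr = np.linalg.inv(r * I - Q)
Rq = np.linalg.inv(q * I - Q)
assert np.allclose(Rr - Rq, (q - r) * Rr @ Rq)
```

The identity follows from Rr − Rq = Rr((qI − Q) − (rI − Q))Rq, which is the same algebra that yields the resolvent equation for diffusions.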
We assume in all of the following theorems that the controlled dynamics are given by the stochastic differential equation

dX_t^D = μ(X_t^D) dt + σ(X_t^D) dW_t − dD_t,

where D_t denotes the applied control policy and γ is a positive constant (the coefficient γ is often interpreted as a proportional transaction cost).

Assumption 4 (Admissible control policies)
1. In the singular problems (Theorems 2 and 3 below), we call a control policy D_t^s admissible if it is non-negative, non-decreasing, right-continuous, and {F_t}_{t≥0}-adapted, and we denote the set of admissible controls by D^s.
2. In the constrained problems (Theorems 4 and 5 below), the set of admissible controls D is given by those non-decreasing, left-continuous processes D_t that have the representation

D_t = ∫_{[0,t)} η_s dN_s,

where N is a Poisson process and the integrand η is {F_t}_{t≥0}-predictable.
We introduce the main results of the control problems below and refer to [1, 3-5, 14, 15] for full discussions.
Theorem 2 (Singular control with discounted criteria; pp. 1701-1702 of [1], pp. 714-715 of [3]) Under Assumption 1, the optimal control policy minimizing the objective

E_x[∫_0^∞ e^{−rt} π(X_t^D) dt + γ ∫_0^∞ e^{−rt} dD_t]

is the local time reflection policy D_t = L(t, y*_s), where L(t, y*_s) denotes the local time push of the process X_t at the boundary y*_s. The boundary y*_s is characterized as the unique solution to the equation K_r^{θ_r}(y*_s) = 0. Moreover, for x < y*_s the value of the problem reads as
Theorem 3 (Singular control with ergodic criteria; pp. 16-17 of [5], p. 7 of [4]) Under Assumptions 1 and 2, the optimal control policy minimizing the objective

limsup_{T→∞} (1/T) E_x[∫_0^T π(X_t^D) dt + γ D_T]

is the local time reflection policy D_t = L(t, b*_s), where L(t, b*_s) denotes the local time push of the process X_t at the boundary b*_s. The boundary b*_s is characterized as the unique solution to the equation Moreover, the long run average cumulative yield reads as

Theorem 4 (Control with discounted criteria and constraint; p. 115 of [14]) Under Assumption 1, the optimal control policy that minimizes the objective

E_x[∫_0^∞ e^{−rt} π(X_t^D) dt + γ ∫_0^∞ e^{−rt} dD_t],

where D_t ∈ D, is as follows. If the controlled process X^D is above the threshold y* at a jump time T_i of N, i.e. X^D_{T_i−} > y*, the decision maker should take the controlled process X^D down to y*. Further, the threshold y* is uniquely determined by which can be rewritten as In addition, the value function V_{r,λ}(x) := inf_{D∈D} J(x, D) of the problem reads as where

Proof We only prove that the optimality condition can be rewritten as (10), and refer to [14] for the rest of the claim. To prove the representation, we first use Lemma 6, and then the formulas (5) and (7), to get Utilizing Lemma 6 again, we see that the optimality condition takes the form

L_{r+λ}^{θ_r}(y*)/ϕ_{r+λ}(y*) + K_r^{θ_r}(y*)/ψ_r(y*) = 0.
Theorem 5 (Control with ergodic criteria and constraint; p. 16 of [15]) Under Assumptions 1 and 2, and assuming that π(x) ≥ C(x^α − 1) for some positive constants α and C, the optimal control policy minimizing the objective

limsup_{T→∞} (1/T) E_x[∫_0^T π(X_t^D) dt + γ D_T],

where D_t ∈ D, is as follows. If the controlled process X^D is above the threshold b* at a jump time T_i of N, i.e. X^D_{T_i−} > b*, the decision maker should take the controlled process X^D down to b*. Further, the threshold b* is uniquely determined by and the long run average cumulative yield β_λ reads as

Remark 1
The boundary classifications for the underlying diffusion can be relaxed in all of the above theorems. For example, in Theorem 3 it can be shown that the results stay unchanged when the lower boundary is an exit or killing boundary; see p. 5 of [14].
The optimal policies in the above theorems can be summarised as follows. In the singular control problems, the optimal policies are barrier policies of local time type. In other words, when the process is below some constant boundary y*_s, it is left uncontrolled, but it is never allowed to cross the boundary, i.e. it is reflected at y*_s. The situation in the constrained problems is similar: when the process is below some threshold y*, we do not act, but if the process crosses the boundary and the Poisson process jumps, we immediately push it down to y* and start it anew. In other words, the optimal strategy in all of the problems is to exert control at the 'maximum rate' when the process is at (or above) the corresponding boundary.
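The two policy types can be illustrated by simulation. The sketch below uses drifted Brownian motion with illustrative parameter values (mu, sigma, y_star, and the Poisson rate lam are not the optimal quantities of the paper): the singular policy reflects the path at the barrier at every step, while the constrained policy projects it down only at simulated Poisson arrival times.

```python
import numpy as np

# Simulation sketch of the two policy types for drifted Brownian motion.
def simulate(y_star, lam=None, mu=0.5, sigma=1.0, x0=0.0, T=10.0, n=10_000, seed=1):
    rng = np.random.default_rng(seed)
    dt = T / n
    x, path = x0, [x0]
    for _ in range(n):
        x += mu * dt + sigma * rng.normal(0.0, np.sqrt(dt))
        if lam is None:
            x = min(x, y_star)          # singular policy: reflect at the barrier
        elif rng.random() < lam * dt:   # constrained policy: act only at Poisson arrivals
            x = min(x, y_star)
        path.append(x)
    return np.array(path)

reflected = simulate(y_star=1.0)            # never exceeds the barrier
constrained = simulate(y_star=1.0, lam=5.0) # may overshoot between arrivals
```

As λ grows, the constrained path is projected more and more often, which is the pathwise picture behind the convergence of the constrained problems to their singular counterparts.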

Main Results
In the next lemma, we collect useful representations for the functionals K_r^f and L_r^f.

Lemma 6
The functions L_r^f and K_r^f have the alternative representations
Proof The claim for L_r^f is Lemma 2 of [14], and the proof for K_r^f is completely analogous.
Under our assumption that the boundaries are natural, we have that and thus we can rewrite further. In the next lemma we prove that these functionals satisfy the asymptotic properties needed to establish the relationships between the introduced control problems.

Lemma 7 Under the Assumption 1, we have the limits
In addition, if the underlying diffusion X_t is recurrent, i.e. P_x[τ_z < ∞] = 1 for all x, z ∈ R_+, then
Therefore, by letting s → 0+, we get by monotone convergence that
under the assumption that the underlying diffusion is recurrent. In addition, again by (14), we find that
Since lim_{r→0+} θ_r(x) = π_μ(x), we see by the above observations and monotone convergence that
Similarly, utilizing (16), we obtain

Proposition 8 (Asymptotics of the optimal thresholds) Under Assumptions 1, 2 and 3, the optimal thresholds satisfy the following asymptotic results in terms of the intensity of the Poisson process
and if the underlying diffusion is also recurrent, we have the vanishing discount factor limits Proof We prove the first and the last claim, as the second claim is Proposition 3 of [15] and the third claim is Lemma 3.1 of [4]. Define the functions and let y*_s(r), y*(λ), b*(λ) be such that K_r^{θ_r}(y*_s(r)) = 0, G_{r,λ}(y*(λ)) = 0, and F_λ(b*(λ)) = 0. We note that these roots are unique and define the optimal boundaries in Theorems 2, 4 and 5.
The first claim states that the unique root y*(λ) of the function G_{r,λ}(x) tends to y*_s in the limit λ → ∞. We find that Because the function λ ↦ ϕ'_{r+λ}(x)/ϕ_{r+λ}(x) is increasing and bounded above by 0 by Lemma 6 in [22], we find by Lemma 7 that Thus, because the function K_r^{θ_r}(x)/ψ_r(x) changes sign (see Lemma 3.1 of [3]), we have by (17) and the equations (8) and (10) that To prove the last claim, we find, utilizing (13) and Lemma 7, that (0, x). Hence, Therefore, as the function F_λ(x) changes sign (see Proposition 1 of [15]), the claim follows by (18) and the equations (12) and (10).
Similar limiting results also hold for the corresponding values of the defined control problems. However, in terms of the vanishing discounting factor, the results hold only in the following Abelian sense.
Thus, we get that Using the formulas (3) and (15), we see that and thus, by (15), we have Therefore, by continuity and Proposition 8, the value function satisfies Finally, utilizing (3) and (12), the limiting value reads as which completes the proof of the third claim.
To prove the second claim, we note that the value function V_{r,λ}(x) is independent of λ when x < y*. Thus, we can focus on the region x > y*. We re-organize the terms of V_{r,λ}(x) in the upper region as

where A(y*) = (λ/r)(R_{r+λ}θ_r)(y*) − (R_{r+λ}θ_r)'(y*) ϕ_{r+λ}(y*)/ϕ'_{r+λ}(y*).
Combining the above limits, the result follows by continuity and Proposition 8.
Lastly, the second claim of the proposition follows from the continuity of the functions and Proposition 8, as β_s can also be represented as (see p. 17 of [5]) Unfortunately, our results do not answer the interesting question of the rate of any of these convergences. Different underlying dynamics appear to yield different rates of convergence, and further, the threshold boundaries can have a different rate of convergence than the value functions. Thus, we believe it is hard to find general conditions for the rate of convergence. However, in the next section we show the rate of convergence in an explicit case.
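The Abelian relation, i.e. the discounted value scaled by the discount rate tending to the ergodic value as r → 0+, can be sanity-checked in a finite-state analogue, where V_r = (rI − Q)^{-1}c and the ergodic value is the stationary average of the running cost (the chain and cost below are illustrative, not from the paper):

```python
import numpy as np

# Abelian limit in a finite-state analogue: for an irreducible rate matrix Q
# with stationary distribution pi and running cost c, the discounted value is
# V_r = (r*I - Q)^{-1} c and r * V_r(x) -> pi @ c as r -> 0+, for every state x.
Q = np.array([[-1.0,  1.0,  0.0],
              [ 0.5, -1.5,  1.0],
              [ 0.0,  2.0, -2.0]])
c = np.array([1.0, 2.0, 3.0])

# Stationary distribution: solve pi @ Q = 0 with pi summing to one.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
ergodic_value = pi @ c

r = 1e-6
Vr = np.linalg.solve(r * np.eye(3) - Q, c)
assert np.allclose(r * Vr, ergodic_value, atol=1e-4)
```

Note that r·V_r approaches the same constant from every starting state, mirroring the fact that the ergodic value in the diffusion setting does not depend on the initial point.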
Another interesting question is the convergence of the controlled processes. A portfolio selection problem under vanishing transaction costs is studied in [7], where the authors show that the optimal thresholds and value functions under transaction costs converge to those without transaction costs as the costs vanish. Using this, the authors argue further that the controlled wealth processes converge weakly. Proving a similar result in our case unfortunately seems more formidable. For instance, in [7] the fact that the controlled processes are bounded simplifies the proof of tightness of the approximating sequence. In our case this does not hold, which makes the problem more difficult. This interesting question of convergence is left for future research.

Brownian Motion with Drift
Let the underlying process X_t be defined by the stochastic differential equation dX_t = μ dt + dW_t, where μ > 0. We let the process evolve in R and choose the quadratic running cost π(x) = x². The minimal excessive functions are in this case known to be and the scale density and speed measure read as respectively. The net convenience yield now takes the form θ(x) = x² + γ(μ − rx). We notice immediately that Assumption 1 holds, as the boundary assumption can be relaxed in the case of the discounted problems (Theorems 2 and 4); see Remark 1. However, it is clear that the process does not satisfy the ergodicity properties needed for the results in the limit r → 0.
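Item 4 of Assumption 1 is easy to verify numerically for this θ: since θ(x) = x² + γ(μ − rx) is a convex parabola, it decreases on (0, x*) and increases on (x*, ∞) with unique minimiser x* = γr/2. The parameter values below are illustrative:

```python
import numpy as np

# Check Assumption 1.4 for theta(x) = x**2 + gamma*(mu - r*x): unique
# minimiser at x* = gamma*r/2 and non-negative limit at infinity.
gamma, mu, r = 1.0, 0.5, 0.1
theta = lambda x: x**2 + gamma * (mu - r * x)
x_star = gamma * r / 2                      # root of theta'(x) = 2x - gamma*r

xs = np.linspace(0.0, 5.0, 10_001)
vals = theta(xs)
assert abs(xs[np.argmin(vals)] - x_star) < 1e-3   # grid minimiser matches x*
assert vals[-1] >= 0                              # lim theta(x) >= 0 holds trivially
```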
To illustrate the results of Proposition 8, we solve the optimality conditions (8) and (10). Conveniently, the solutions to these equations can be represented explicitly. To solve the equations, we need to find the functions K_r^{θ_r}(x) and L_{r+λ}^{θ_r}(x). Elementary integration yields where α_r^+ = −μ + √(2r + μ²) and α_r^− = −μ − √(2r + μ²). Plugging these representations into the equations (8) and (10), a simplification yields as solutions the thresholds Using these explicit representations, we get a limit as in Proposition 8. These thresholds are illustrated in Fig. 2, which also shows that the order of convergence is √λ. To find the value function V_{r,λ}(x) in the region x ≥ y*, we first calculate that the resolvent is equal to

Controlled Ornstein-Uhlenbeck Process
Consider dynamics that are characterized by a stochastic differential equation where D_ν(x) is a parabolic cylinder function. We note that our Assumptions 1 and 2 are satisfied and that this process is positively recurrent (see p. 141 of [6]). Unfortunately, the equations for the optimal thresholds take rather complicated forms, and thus the results in Proposition 8

Acknowledgements
We would like to gratefully acknowledge the emmy.network foundation under the aegis of the Fondation de Luxembourg, for its continued support.
Funding Note Open Access funding provided by University of Turku (UTU) including Turku University Central Hospital.

Declarations
Competing Interests These authors contributed equally to this work.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.