Asymptotics of Impulse Control Problem with Multiplicative Reward

We consider a long-run impulse control problem for a generic Markov process with a multiplicative reward functional. We construct a solution to the associated Bellman equation and provide a verification result. The argument is based on the probabilistic properties of the underlying process combined with the Krein-Rutman theorem applied to a specific non-linear operator. It also utilises an approximation of the problem on a bounded domain and on a dyadic time-grid.


Introduction
Impulse control constitutes a versatile framework for controlling real-life stochastic systems. In this type of control, a decision-maker determines intervention times and instantaneous after-intervention states of the controlled process. By doing so, one can affect a continuous-time phenomenon in a discrete-time manner. Consequently, impulse control has attracted considerable attention in the mathematical literature; see e.g. [4,9,26] for classic contributions and [3,10,19,21] for more recent results. In addition to generic mathematical properties, impulse control problems have been studied with reference to specific applications including, among others, controlling exchange rates, epidemics, and portfolios with transaction costs; see e.g. [18,25,27] and references therein.
When looking for an optimal impulse control strategy, one must decide on the optimality criterion. Recently, considerable attention has been paid to the so-called risk-sensitive functional given, for any γ ∈ R, by µ_γ(Z) := (1/γ) ln E[e^{γZ}] for γ ≠ 0, and µ_0(Z) := E[Z], (1.1), where Z is a (random) payoff corresponding to a chosen control strategy; see [14] for a seminal contribution. This functional with γ = 0 corresponds to the usual linear criterion and the case γ < 0 is associated with risk-averse preferences; see [5] for a comprehensive overview. Also, the functional with γ > 0 can be linked to the asymptotics of the power utility function; see [31] for details. A recent comprehensive discussion of the long-run version of µ_γ can be found in [6]. We refer also to [23] and references therein for a discussion of the connection between (1.1) and the duality of large deviations-based criteria.
In this paper we focus on the use of the functional µ_γ with γ > 0. More specifically, we consider the impulse control problem for a continuous time Markov process and construct a solution to the associated Bellman equation which characterises an optimal impulse control strategy. To do this, we study a family of impulse control problems in bounded domains and then extend the analysis to a generic locally compact state space. This idea was used in [1], where PDE techniques were applied to obtain a characterisation of controlled diffusions in the risk-sensitive setting. A similar approximation for the average cost per unit time problem was considered in [32].
The main contribution of this paper is a construction of a solution to the Bellman equation associated with the problem; see Theorem 5.1 for details. It should be noted that we get a bounded solution even though the state space could be unbounded and we assume virtually no ergodicity conditions for the uncontrolled process. Also, note that the present results for γ > 0 complement our recent findings on impulse control with risk-averse preferences; see [24] for the dyadic case and [15] for the continuous time framework. Nevertheless, it should be noted that the techniques for γ < 0 and γ > 0 are substantially different and it is not possible to directly transfer the results from one framework to the other; see e.g. [16,20] for further discussion.
The structure of this paper is as follows. In Section 2 we formally introduce the problem, discuss the assumptions and, in Theorem 2.3, provide a verification argument. Next, in Section 3 we consider an auxiliary dyadic problem in a bounded domain and in Theorem 3.1 we construct a solution to the corresponding Bellman equation. This is used in Section 4, where we extend our analysis to the unbounded domain with the dyadic time-grid; see Theorem 4.2. Next, in Section 5 we finally construct a solution to the Bellman equation for the original problem; see Theorem 5.1. Finally, in Appendix A we discuss some properties of the optimal stopping problems that are used in this paper.

Preliminaries
Let X = (X_t)_{t≥0} be a continuous time standard Feller-Markov process on a filtered probability space (Ω, F, (F_t), P). The process X takes values in a locally compact separable metric space E endowed with a metric ρ and the Borel σ-field E. With any x ∈ E we associate a probability measure P_x describing the evolution of the process X starting in x; see Section 1.4 in [28] for details. Also, we use E_x, x ∈ E, and P_t(x, A) := P_x[X_t ∈ A], t ≥ 0, x ∈ E, A ∈ E, for the corresponding expectation operator and the transition probability, respectively. By C_b(E) we denote the family of continuous bounded real-valued functions on E. Also, to ease the notation, by T, T_x, and T_{x,b} we denote the families of stopping times, P_x-a.s. finite stopping times, and P_x-a.s. bounded stopping times, respectively. Also, for any δ > 0, by T^δ ⊂ T, T^δ_x ⊂ T_x, and T^δ_{x,b} ⊂ T_{x,b}, we denote the respective subfamilies of dyadic stopping times, i.e. those taking values in the set {0, δ, 2δ, ...} ∪ {∞}.
Throughout this paper we fix some compact U ⊆ E and we assume that a decision-maker is allowed to shift the controlled process to U. This is done with the help of an impulse control strategy, i.e. a sequence V := (τ_i, ξ_i)_{i∈N*}, where (τ_i) is an increasing sequence of stopping times and (ξ_i) is a sequence of F_{τ_i}-measurable after-impulse states with values in U. With any starting point x ∈ E and a strategy V we associate a probability measure P_{(x,V)} for the controlled process Y. Under this measure, the process starts at x and follows its usual (uncontrolled) dynamics up to the time τ_1. Then, it is immediately shifted to ξ_1 and starts its evolution again, etc. More formally, we consider a countable product of filtered spaces (Ω, F, (F_t)) and a coordinate process (X^1_t, X^2_t, ...). Then, we define the controlled process Y as Y_t := X^i_t, t ∈ [τ_{i−1}, τ_i), with the convention τ_0 ≡ 0. Under the measure P_{(x,V)} we get Y_{τ_i} = ξ_i; we refer to Chapter V in [26] for the construction details; see also the Appendix in [8] and Section 2 in [29].
The family of admissible impulse control strategies is denoted by V. Also, note that, to simplify the notation, by Y_{τ_i^-} := X^i_{τ_i}, i ∈ N*, we denote the state of the process right before the ith impulse (yet, possibly, after the jump).
In this paper we study the asymptotics of the impulse control problem (2.1), where, for any x ∈ E and V ∈ V, the functional J(x, V) is defined in (2.2), with f denoting the running cost function and c denoting the shift-cost function. Note that this could be seen as a long-run standardised version of the functional (1.1) with γ > 0 applied to the impulse control framework. Here, the standardisation refers to the fact that we do not use directly the parameter γ (apart from its sign). Also, the problem is of the long-run type, i.e. the utility is averaged over time, which improves the stability of the results. The analysis in this paper is based on an approximation of the problem in a bounded domain. Thus, we fix a sequence (B_m)_{m∈N} of compact sets satisfying B_m ⊂ B_{m+1} and E = ⋃_{m=0}^∞ B_m. Also, we assume that U ⊂ B_0. Next, we assume the following conditions.
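A standard way to write such a long-run multiplicative criterion is the following sketch (an illustrative reconstruction, not the paper's verbatim display; the symbols f, c, Y, τ_i, ξ_i are as introduced above):

```latex
\sup_{V \in \mathcal{V}} J(x, V), \qquad
J(x, V) := \limsup_{T \to \infty} \frac{1}{T}
\ln \mathbb{E}_{(x,V)}\!\left[
  \exp\left( \int_0^T f(Y_s)\,\mathrm{d}s
  + \sum_{i \colon \tau_i \le T} c\big(Y_{\tau_i^-}, \xi_i\big) \right)
\right].
```

Since f and c are non-positive, the expression under the expectation is bounded by one, so the functional is well defined for every admissible strategy.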
(A1) (Cost functions). The map f : E → R_- is continuous and bounded. Also, the map c : E × U → R_- is continuous, bounded, and strictly non-positive, i.e. c(x, ξ) ≤ c_0 for some c_0 < 0, and satisfies the triangle inequality c(x, η) ≥ c(x, ξ) + c(ξ, η), x ∈ E, ξ, η ∈ U. Also, we assume that c satisfies the uniform limit at infinity condition (2.3).

(A2) (Transition probability continuity). For any t > 0, the transition probability P_t is continuous with respect to the total variation norm, i.e. for any sequence x_n → x we get ‖P_t(x_n, ·) − P_t(x, ·)‖_{TV} → 0 as n → ∞.

(A3) (Distance control). For any compact set Γ ⊂ E, t_0 > 0, and r_0 > 0, we have lim_{r→∞} M_Γ(t_0, r) = 0 and lim_{t↓0} M_Γ(t, r_0) = 0, where M_Γ(t, r) := sup_{x∈Γ} P_x[sup_{s∈[0,t]} ρ(X_s, x) ≥ r], t, r > 0.

(A4) (Process irreducibility). For any m ∈ N, x ∈ B_m, δ > 0, and any open set O ⊂ B_m, the process started at x visits O along the dyadic grid with probability one. Also, we impose a related condition on the dyadic exit times τ_{B_m} := δ inf{k ∈ N : X_{kδ} ∉ B_m}, x ∈ E, δ > 0, m ∈ N.

Before we proceed, let us comment on these assumptions. First, note that (A1) states typical cost-function conditions. In particular, the non-positivity assumption for f is merely a technical normalisation: for a generic bounded f we have J_f = J_{f − sup f} + sup f, where J_f denotes the version of the functional J from (2.2) corresponding to the running cost function f. Second, Assumption (A2) states that the transition probabilities P_t(x, ·) are continuous with respect to the total variation norm. Note that this directly implies that the transition semigroup associated to X is strong Feller, i.e. for any t > 0 and a bounded measurable map h : E → R, the map x → E_x[h(X_t)] is continuous and bounded.
Third, Assumption (A3) quantifies distance control properties of the underlying process. It states that, for a fixed time horizon, the process with high probability stays close to its starting point and, for a fixed radius, with high probability it does not leave the corresponding ball within a sufficiently short time horizon. Note that these properties are automatically satisfied if the transition semigroup is C_0-Feller; see Proposition 2.1 in [21] and Proposition 6.4 in [2] for details.
Finally, Assumption (A4) states a form of irreducibility of the process X. It requires that the process visits a sufficiently rich family of sets with unit probability.
To solve (2.1), we show the existence of a solution to the impulse control Bellman equation, i.e. a function w ∈ C_b(E) and a constant λ ∈ R satisfying (2.6), where M denotes the impulse intervention operator. We start with a simple observation giving a lower bound for the constant λ from (2.6). To do this, we define the semigroup type r(f) by (2.7); see e.g. Proposition 1 in [30] for a discussion of the properties of r(f).
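In this setting, the intervention operator, the Bellman equation, and the semigroup type typically take the following form (a hedged sketch consistent with the optimal stopping representations used in Sections 3-5; the exact displays may differ):

```latex
M w(x) := \sup_{\xi \in U} \big( c(x, \xi) + w(\xi) \big), \qquad x \in E,
\\[4pt]
w(x) = \sup_{\tau \in \mathcal{T}_x}
\ln \mathbb{E}_x\!\left[ e^{\int_0^{\tau} (f(X_s) - \lambda)\,\mathrm{d}s}
\, e^{M w(X_\tau)} \right], \qquad x \in E,
\\[4pt]
r(f) := \lim_{t \to \infty} \frac{1}{t}
\ln \sup_{x \in E} \mathbb{E}_x\!\left[ e^{\int_0^t f(X_s)\,\mathrm{d}s} \right].
```

The form of r(f) matches the bound used in the proof of Lemma A.1, where sup_x E_x[e^{∫_0^t (g(X_s)−a)ds}] is compared with e^{t(r(g)−a+ε)}.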
Lemma 2.1. Let (w, λ) be a solution to (2.6). Then, we get λ ≥ r(f).

Proof. From (2.6), for any T ≥ 0, we get the corresponding bound. Thus, using the boundedness of w and Mw, dividing both sides by T, and letting T → ∞, we get 0 ≥ r(f − λ) = r(f) − λ, which concludes the proof.
Let us now link a solution to (2.6) with the optimal value and an optimal strategy for (2.1). To ease the notation, we recursively define the strategy V̂ by (2.8), where τ̂_0 := 0 and ξ̂_0 ∈ U is some fixed point. First, we show that V̂ is a proper strategy.
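Writing V̂ = (τ̂_i, ξ̂_i) for the candidate strategy, a natural form of (2.8), consistent with the verification argument below (an illustrative sketch rather than the paper's verbatim display), defines the impulse times as hitting times of the contact set {w = Mw}:

```latex
\hat\tau_{i+1} := \inf\big\{ t \ge \hat\tau_i : w(Y_t) = M w(Y_t) \big\}, \qquad
\hat\xi_{i+1} \in \operatorname*{arg\,max}_{\xi \in U}
\big( c\big(Y_{\hat\tau_{i+1}^-}, \xi\big) + w(\xi) \big).
```

In words: intervene as soon as stopping is as good as continuing, and shift the process to a state attaining the supremum in the intervention operator.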
Recalling the compactness of U and the continuity of w, we may find r > 0 such that for any x ∈ U and y ∈ E satisfying ρ(x, y) < r we get |w(x) − w(y)| < −c_0/2.
Let us now consider the family of events B_k, k ≥ 1, together with the event A, and note that, recalling (2.11), for any k ∈ N, k ≥ 1, we get P_{(x_0,V̂)}[B_k ∩ A] = 0 and, in particular, we have (2.13). Let us now show the associated lim sup estimate. Using Assumption (A3), for any ε > 0, we may find t_0 > 0 such that the corresponding probability bound holds. Thus, using the strong Markov property and noting that X^{n+1}_{τ̂_n} ∈ U, we get the bound for any k ∈ N, k ≥ 1; recalling that ε > 0 was arbitrary, the estimate holds for any k ∈ N, k ≥ 1. Now, to ease the notation, consider the events C_k, k ∈ N. To get a contradiction, assume that lim_{k→∞} P_{(x_0,V̂)}[C_k] > 0. Then we may find i_0 ∈ N such that for any n ≥ i_0 we get τ̂_{n+1}(ω) − τ̂_n(ω) ≤ t_0/2, which leads to a contradiction. Consequently, we get lim_{k→∞} P_{(x_0,V̂)}[C_k] = 0 and, in particular, the corresponding lim sup vanishes. Hence, recalling (2.14) and (2.17), and then (2.13), for any k ∈ N, k ≥ 1, we obtain the required bound and, letting k → ∞, we conclude the proof of (2.9). Now, we show the verification result linking (2.6) with the optimal value and an optimal strategy for (2.1).
Theorem 2.3. Let (w, λ) be a solution to (2.6). Then, for any x ∈ E, we get λ = sup_{V∈V} J(x, V) and the strategy V̂ given by (2.8) is optimal.

Proof. The proof is based on the argument from Theorem 4.4 in [15], thus we show only an outline. First, we show that λ = J(x, V̂), x ∈ E, where the strategy V̂ is given by (2.8). Let us fix x ∈ E. Then, combining the argument used in Lemma 7.1 in [2] and Proposition A.3, we get the martingale property of the corresponding process. Recalling Proposition 2.2, we get τ̂_n → ∞ as n → ∞. Thus, letting n → ∞ in (2.18) and using Lebesgue's dominated convergence theorem, and then, recalling the boundedness of w, taking the logarithm of both sides, dividing by T, and letting T → ∞, we obtain λ = J(x, V̂). Second, let us fix some x ∈ E and an admissible strategy V ∈ V. We show that λ ≥ J(x, V). Using the argument from Lemma 7.1 in [2] and Proposition A.3, we get that the corresponding process is a P_{(x,V)}-supermartingale. Recalling the admissibility of V, we get τ_n → ∞ as n → ∞. Thus, letting n → ∞ in (2.19) and using Fatou's lemma, taking the logarithm of both sides, dividing by T, and letting T → ∞, we get λ ≥ J(x, V), which concludes the proof.
In the following sections we construct a solution to (2.6). In the construction we approximate the underlying problem using the dyadic time-grid. Also, we consider a version of the problem in a bounded domain.

Dyadic impulse control in a bounded set
In this section we consider a version of (2.1) with a dyadic time-grid and obligatory impulses when the process leaves some compact set. In this way, we construct a solution to the bounded-domain dyadic counterpart of (2.6). More specifically, let us fix some δ > 0 and m ∈ N. We show the existence of a map satisfying the optimal stopping characterisation (3.1). In fact, we start with the analysis of an associated one-step equation. More specifically, we show the existence of a constant λ_δ^m ∈ R and a map w_δ^m ∈ C_b(B_m) satisfying the one-step equation (3.2); see Theorem 3.1 for details. Also, note that we link (3.2) with (3.1) in Theorem 3.4.
Theorem 3.1. There exist a constant λ_δ^m ∈ R and a map w_δ^m ∈ C_b(B_m) such that (3.2) is satisfied and we get sup_{ξ∈U} w_δ^m(ξ) = 0.

Proof. The idea of the proof is to use the Krein-Rutman theorem to get a positive eigenvalue with a non-negative eigenvector of a suitable operator T̃_δ^m defined on the cone C_b^+(B_m) of non-negative continuous and bounded functions. Now, we use the Krein-Rutman theorem to show that T̃_δ^m admits a positive eigenvalue and a non-negative eigenfunction; see Theorem 4.3 in [7] for details. We start with verifying the assumptions. First, note that T̃_δ^m is positively homogeneous, monotonically increasing, and we have T̃_δ^m ½(x) ≥ e^{−δ‖f‖−‖c‖} ½(x), x ∈ B_m, where ½ denotes the function identically equal to 1 on B_m. Also, using Assumption (A2), we get that T̃_δ^m transforms C_b^+(B_m) into itself and is continuous with respect to the supremum norm. Let us now show that T̃_δ^m is in fact completely continuous. To see this, let (h_n)_{n∈N} ⊂ C_b^+(B_m) be a sequence bounded by some constant K > 0; using the Arzelà-Ascoli theorem, we show that it is possible to find a convergent subsequence of (T̃_δ^m h_n)_{n∈N}. Note that, for any n ∈ N, we get T̃_δ^m h_n ≤ e^{δ‖f‖} K, hence (T̃_δ^m h_n) is uniformly bounded. Next, let us fix some ε > 0, x ∈ B_m, and (x_k) ⊂ B_m such that x_k → x as k → ∞. Also, to ease the notation, for any n ∈ N, we define the maps H_n, x ∈ E, and note that H_n are measurable functions bounded by 2K uniformly in n ∈ N. Then, for any n, k ∈ N, we get the corresponding decomposition. Using Assumption (A1), we may find k ∈ N large enough such that the first term is small uniformly in n ∈ N. Next, using the inequality |e^y − e^z| ≤ e^{max(y,z)} |y − z|, y, z ∈ R, we may find u ∈ (0, δ) small enough such that the second term is small uniformly in n, k ∈ N. Finally, using the Markov property combined with Assumption (A2), we may find k ∈ N large enough such that the last term is small uniformly in n ∈ N. Thus, recalling (3.5)-(3.6), we get the required bound for k ∈ N large enough and any n ∈ N, which proves the equicontinuity of the family (T̃_δ^m h_n)_{n∈N}. Consequently, using the Arzelà-Ascoli theorem, we may find a uniformly (in x ∈ B_m) convergent subsequence of (T̃_δ^m h_n)_{n∈N}, and the operator T̃_δ^m is completely continuous. Thus, using the Krein-Rutman theorem, we conclude that there exist a constant λ̃_δ^m > 0 and a non-zero map h_δ^m ∈ C_b^+(B_m) satisfying the eigenvalue equation (3.7). After a possible normalisation, we assume that sup_{ξ∈U} h_δ^m(ξ) = 1. Let us now show that h_δ^m(x) > 0, x ∈ B_m. To see this, let us define D := e^{−δ‖f‖}/λ̃_δ^m and let O_h ⊂ B_m be an open set on which h_δ^m is bounded away from zero; note that this set exists thanks to the continuity of h_δ^m and the fact that h_δ^m is non-zero. Next, using (3.7), we inductively get, for any n ∈ N, the corresponding lower bounds. Thus, letting n → ∞ and using Assumption (A4) combined with (3.8), we show h_δ^m(x) > 0 for any x ∈ B_m. Next, we define w_δ^m(x) := ln h_δ^m(x), x ∈ B_m, and λ_δ^m := (1/δ) ln λ̃_δ^m. Thus, from (3.7), we get that the pair (w_δ^m, λ_δ^m) satisfies T̃_δ^m e^{w_δ^m}(x) = e^{δλ_δ^m} e^{w_δ^m}(x), x ∈ B_m, and sup_{ξ∈U} w_δ^m(ξ) = 0.
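The eigenproblem at the heart of this proof can be illustrated numerically. The sketch below is a toy finite-state stand-in for B_m with a hypothetical transition matrix P and running reward f; it is not the operator T̃_δ^m from the paper (which also involves the shift cost), but it shows how power iteration recovers the Perron eigenpair of a positive operator h ↦ e^{δf} Ph, mirroring the transformation w = ln h and the normalisation sup w = 0 used above:

```python
import numpy as np

# Toy discretisation (all names here are illustrative assumptions):
# a finite grid of n states, a stochastic matrix P standing in for P_delta,
# and a bounded non-positive running reward f, as in Assumption (A1).
rng = np.random.default_rng(0)
n, delta = 50, 1.0
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)      # rows sum to one: a transition matrix
f = -rng.random(n)                      # f : E -> R_-, continuous and bounded

def T(h):
    # One-step multiplicative operator: (T h)(x) = E_x[ e^{delta f(x)} h(X_delta) ].
    return np.exp(delta * f) * (P @ h)

h = np.ones(n)
for _ in range(2000):                   # power iteration on the positive cone
    h = T(h)
    h /= h.max()                        # keep sup h = 1 (cf. sup_{U} h = 1 above)

lam_hat = (T(h) / h).mean()             # Perron eigenvalue, i.e. e^{delta * lambda}
lam = np.log(lam_hat) / delta           # analogue of lambda_m_delta = (1/delta) ln(...)
w = np.log(h)                           # w = ln h, normalised so that sup w = 0
```

For a strictly positive matrix the iteration converges geometrically to the unique (up to scaling) positive eigenvector, which is exactly the finite-dimensional content of the Krein-Rutman theorem invoked above.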
In fact, using Assumption (A1) and the argument from Theorem 3.1 in [15], we get the corresponding uniform bounds. Finally, we extend the definition of w_δ^m to the full space E by setting w_δ^m(x) := Mw_δ^m(x) for x ∈ E \ B_m; note that the definition is correct since, on the right-hand side, we need to evaluate w_δ^m only at points from U ⊂ B_0 ⊂ B_m, where this map is already defined.
As we show now, Equation (3.2) may be linked to a specific martingale characterisation.

Proposition 3.2. Let (w_δ^m, λ_δ^m) be a solution to (3.2). Then, for any x ∈ B_m, we get that the process z_δ^m(n), n ∈ N, is a P_x-supermartingale; also, the corresponding stopped process is a P_x-martingale.

Proof. To ease the notation, we show the proof only for δ = 1; the general case follows the same logic. Let us fix m, n ∈ N and x ∈ B_m. Then, using the fact that w_1^m(y) = Mw_1^m(y), y ∉ B_m, and the corresponding inequality, we show the supermartingale property of (z_1^m(n)). Next, noting the equality on the associated event, we get the martingale property of the stopped process, which concludes the proof.
Let us denote by V_{δ,m} the family of impulse control strategies with impulse times on the time-grid {0, δ, 2δ, ...} and obligatory impulses when the controlled process exits the set B_m at some multiple of δ. Using a martingale characterisation of (3.2), we get that λ_δ^m is the optimal value of the impulse control problem with impulse strategies from V_{δ,m}. To show this result, we introduce a strategy V given by (3.9), where τ_0 := 0 and ξ_0 ∈ U is some fixed point.
Theorem 3.3. Let (w_δ^m, λ_δ^m) be a solution to (3.2). Then, for any x ∈ B_m, λ_δ^m coincides with the optimal value of the impulse control problem with strategies from V_{δ,m}. Also, the strategy V defined in (3.9) is optimal.
Proof. The proof follows the lines of the proof of Theorem 2.3 and is omitted for brevity.
Next, we link (3.2) with an infinite horizon optimal stopping problem under a non-degeneracy assumption.

Theorem 3.4. Let (w_δ^m, λ_δ^m) be a solution to (3.2) and assume that λ_δ^m > r(f). Then w_δ^m coincides with the optimal stopping value function (3.1).
Proof. First, note that for any x ∈ B_m, n ∈ N, and τ ∈ T_x^δ, using Proposition 3.2 and Doob's optional stopping theorem, we get the corresponding upper bound. Also, recalling the boundedness of w_1^m and Proposition A.2, we may let n → ∞. Next, noting that w_1^m(X_{τ∧τ_{B_m}}) ≥ Mw_1^m(X_{τ∧τ_{B_m}}), and taking the supremum over τ ∈ T_x^δ, we get one inequality. Second, using again Proposition 3.2, for any x ∈ B_m and n ∈ N, we get the reverse estimate. Using again the boundedness of w_1^m and Proposition A.2, we may let n → ∞. In fact, noting that w_1^m(X_{τ_δ^m ∧ τ_{B_m}}) = Mw_1^m(X_{τ_δ^m ∧ τ_{B_m}}), we obtain the reverse inequality. Finally, using Proposition A.4, we conclude the proof.
Remark 3.5. In Theorem 3.4 we showed that, if λ_δ^m > r(f), a solution to the one-step equation (3.2) is uniquely characterised by the optimal stopping value function (3.1). If λ_δ^m ≤ r(f), the problem is degenerate and, in particular, we cannot use the uniform integrability result from Proposition A.2. In fact, in this case it is even possible that the one-step Bellman equation admits multiple solutions and the optimal stopping characterisation does not hold; see e.g. Theorem 1.13 in [22] for details.

Dyadic impulse control
In this section we consider a dyadic full-domain version of (2.1). We construct a solution to the associated Bellman equation, which will later be used to find a solution to (2.6). The argument uses the bounded domain approximation from Section 3. More specifically, throughout this section we fix some δ > 0 and show the existence of a function w_δ ∈ C_b(E) and a constant λ_δ ∈ R which solve the dyadic Bellman equation (4.1). In fact, we set λ_δ := lim_{m→∞} λ_δ^m in (4.2); note that this constant is well-defined as, from Theorem 3.3, recalling that B_m ⊂ B_{m+1}, we get λ_δ^m ≤ λ_δ^{m+1}, m ∈ N. First, we state the lower bound for λ_δ.

Lemma 4.1. Let (w_δ, λ_δ) be a solution to (4.1). Then, we get λ_δ ≥ r(f).
Proof. The proof follows the lines of the proof of Lemma 2.1 and is omitted for brevity.
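In the notation of Section 2, the dyadic equation (4.1) and the constant (4.2) can be sketched as follows, with the supremum now taken over dyadic stopping times (an illustrative reconstruction consistent with the bounded-domain results above, not the paper's verbatim display):

```latex
w_\delta(x) = \sup_{\tau \in \mathcal{T}^{\delta}_x}
\ln \mathbb{E}_x\!\left[ e^{\int_0^{\tau} (f(X_s) - \lambda_\delta)\,\mathrm{d}s}
\, e^{M w_\delta(X_\tau)} \right], \qquad x \in E,
\\[4pt]
\lambda_\delta := \lim_{m \to \infty} \lambda^{m}_{\delta}
 = \sup_{m \in \mathbb{N}} \lambda^{m}_{\delta},
```

where the limit exists by the monotonicity λ_δ^m ≤ λ_δ^{m+1} noted above.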
Theorem 4.2. Assume that λ_δ > r(f). Then there exists a map w_δ ∈ C_b(E) such that (w_δ, λ_δ) is a solution to (4.1) and sup_{ξ∈U} w_δ(ξ) = 0.

Proof. We start with some general comments and an outline of the argument. First, note that from Theorem 3.1, for any m ∈ N, we get a solution (w_δ^m, λ_δ^m) to (3.2) satisfying sup_{ξ∈U} w_δ^m(ξ) = 0. Also, from the assumption λ_δ > r(f) we get λ_δ^m > r(f) for m ∈ N sufficiently big (for simplicity, we assume that λ_δ^0 > r(f)). Thus, using Theorem 3.4, we get that, for any m ∈ N, the pair (w_δ^m, λ_δ^m) satisfies (3.1). Second, to construct the function w_δ, we use the Arzelà-Ascoli theorem. More specifically, recalling that sup_{ξ∈U} w_δ^m(ξ) = 0 and using the fact that −‖c‖ ≤ c(x, ξ) ≤ 0, x ∈ E, ξ ∈ U, for any m ∈ N and x ∈ E, we get −‖c‖ ≤ Mw_δ^m(x) ≤ 0. Also, for any m ∈ N and x, y ∈ E, we have the corresponding modulus-of-continuity estimate. Consequently, the sequence (Mw_δ^m)_{m∈N} is uniformly bounded and equicontinuous. Thus, using the Arzelà-Ascoli theorem combined with a diagonal argument, we may find a subsequence (for brevity still denoted by (Mw_δ^m)_{m∈N}) and a map φ_δ ∈ C_b(E) such that Mw_δ^m(x) converges to φ_δ(x) as m → ∞ uniformly in x from any compact set. In fact, using Assumption (A1) and the argument from the first step of the proof of Theorem 4.1 in [15], we get that the convergence is uniform in x ∈ E. Then, we define the candidate function w_δ by (4.3). To complete the construction, we show that w_δ^m converges to w_δ uniformly on compact sets. Indeed, in this case we have Mw_δ^m → Mw_δ, thus φ_δ ≡ Mw_δ and from (4.3) we get that (4.1) is satisfied. Also, recalling that from Theorem 3.1 we get sup_{ξ∈U} w_δ^m(ξ) = 0, m ∈ N, we also get sup_{ξ∈U} w_δ(ξ) = 0. Finally, to show the convergence, we define the auxiliary functions w_δ^{m,1} and w_δ^{m,2} and split the rest of the proof into three steps.

Step 1. Note that, for any x ∈ E and m ∈ N, we have the corresponding estimate. Recalling that φ_δ is a uniform limit of Mw_δ^m as m → ∞, we conclude the proof of this step.
Step 3. We show that |w_δ^{m,2}(x) − w_δ(x)| → 0 as m → ∞ uniformly in x from compact sets. First, we show that w_δ^{m,2}(x) ≤ w_δ(x) for any m ∈ N and x ∈ E. Let ε > 0 and let τ_m^ε ∈ T_{x,b}^δ be an ε-optimal stopping time for w_δ^{m,2}(x). Then, we get the corresponding estimate. As ε > 0 was arbitrary, we get w_δ^{m,2}(x) ≤ w_δ(x), m ∈ N, x ∈ E. In fact, using a similar argument, for any x ∈ E, we may show that the map m → w_δ^{m,2}(x) is non-decreasing.
Second, let ε > 0 and let τ^ε ∈ T_{x,b}^δ be an ε-optimal stopping time for w_δ(x). Then, we obtain the corresponding bound. Thus, using (4.10) and recalling that ε > 0 was arbitrary, we get lim_{m→∞} w_δ^{m,2}(x) = w_δ(x). Also, noting that by Proposition A.3 and Proposition A.4 the maps x → w_δ(x) and x → w_δ^{m,2}(x) are continuous, and using the monotonicity of m → w_δ^{m,2}(x), from Dini's theorem we get that w_δ^{m,2}(x) converges to w_δ(x) uniformly in x from compact sets, which concludes the proof.
Theorem 4.3. Let (w_δ, λ_δ) be a solution to (4.1). Then λ_δ is the optimal value of the dyadic version of the impulse control problem (2.1) and the associated strategy defined as in (2.8) is optimal.

Proof. The proof follows the lines of the proof of Theorem 2.3 and is omitted for brevity.

Existence of a solution to the Bellman equation
In this section we construct a solution (w, λ) to (2.6), which together with Theorem 2.3 provides a solution to (2.1). The argument uses a dyadic approximation and the results from Section 4. More specifically, we consider a family of dyadic time steps δ_k := 1/2^k, k ∈ N. First, we specify the value of λ. In fact, we define λ := lim inf_{k→∞} λ_{δ_k}, where λ_{δ_k} is the constant given by (4.2) corresponding to δ_k. Note that, if for some k_0 ∈ N we get λ_{δ_{k_0}} > r(f), then using Theorem 4.3 we get λ_{δ_k} ≤ λ_{δ_{k+1}}, k ≥ k_0, and the limit inferior can be replaced by the usual limit.
Theorem 5.1. Assume that λ > r(f). Then there exists a map w ∈ C_b(E) such that (w, λ) is a solution to (2.6) and sup_{ξ∈U} w(ξ) = 0.

Proof. The argument is partially based on the one used in Theorem 4.2, thus we discuss only the main points. From the fact that λ > r(f) we get λ_{δ_k} > r(f) for sufficiently big k ∈ N; to simplify the notation, we assume λ_{δ_0} > r(f). Thus, using Theorem 4.2, for any k ∈ N, we get the existence of a map w_{δ_k} ∈ C_b(E) satisfying (4.1) and such that sup_{ξ∈U} w_{δ_k}(ξ) = 0. Thus, we get the corresponding uniform bounds and the family (Mw_{δ_k})_{k∈N} is uniformly bounded. Also, it is equicontinuous, as follows from the associated estimate. Thus, using the Arzelà-Ascoli theorem, we may choose a subsequence (for brevity still denoted by (Mw_{δ_k})) such that (Mw_{δ_k}) converges uniformly on compact sets to some map φ. In fact, using Assumption (A1) and the argument from the first step of the proof of Theorem 4.2, we get that the convergence is uniform in x ∈ E and we may define the candidate map w. In the following, we show that w_{δ_k} converges to w uniformly on compact sets as k → ∞. Then, we get that Mw_{δ_k} converges to Mw, hence Mw ≡ φ and (2.6) is satisfied.
To show the convergence, we define the auxiliary maps w_{δ_k}^0, w_{δ_k}^1, and w_{δ_k}^2. In the following, we show that |w(x) − w_{δ_k}^1(x)| → 0 and |w_{δ_k}^1(x) − w_{δ_k}(x)| → 0 as k → ∞ uniformly in x from compact sets; to show the first convergence, we use the corresponding triangle-type estimate. For transparency, we split the rest of the proof into three steps.

Step 1. We show that |w(x) − w_{δ_k}^0(x)| → 0 as k → ∞ uniformly in x from compact sets. First, note that we have w_{δ_k}^0(x) ≤ w(x), k ∈ N, x ∈ E. Next, for any x ∈ E and ε > 0, let τ^ε ∈ T_{x,b} be an ε-optimal stopping time for w(x) and let τ_ε^k be its T_{x,b}^{δ_k} approximation. Then, we get the corresponding estimate. Also, using Proposition A.2, we may let k → ∞. Consequently, recalling that ε > 0 was arbitrary, we obtain lim_{k→∞} w_{δ_k}^0(x) = w(x) for any x ∈ E. In fact, using the monotonicity of the sequence (w_{δ_k}^0)_{k∈N} combined with Proposition A.3, Proposition A.4, and Dini's theorem, we get that the convergence is uniform on compact sets, which concludes the proof of this step.
Step 2. We show that |w(x) − w_{δ_k}^2(x)| → 0 as k → ∞ uniformly in x ∈ E. Recalling that φ(·) ≤ 0, for any k ∈ N and x ∈ E, we obtain the corresponding bound. Thus, repeating the argument from the second step of the proof of Theorem 4.2, we get w_{δ_k}^2(x) → w(x) as k → ∞ uniformly in x ∈ E, which concludes the proof of this step.
Step 3. We show that |w_{δ_k}^2(x) − w_{δ_k}(x)| → 0 as k → ∞ uniformly in x from compact sets. In fact, recalling that ‖Mw_{δ_k} − φ‖ → 0 as k → ∞, the argument follows the lines of the one used in the first step of the proof of Theorem 4.2. This concludes the proof.

Appendix A. Properties of optimal stopping problems
In this section we discuss some properties of the optimal stopping problems that are used in this paper. Throughout this section we consider g, G ∈ C_b(E) and assume G(·) ≤ 0 and r(g) < 0, where r(g) is the type of the semigroup given by (2.7) corresponding to the map g. We start with a useful result, Lemma A.1, related to the asymptotic behaviour of the running cost function g.

Proof. For transparency, we prove the claims point by point.
Proof of (1). First, we show the boundedness of x → U_0^{g−a}1(x). Let ε < a − r(g). Using the definition of r(g − a), we may find t_0 ≥ 0 such that for any t ≥ t_0 we get sup_{x∈E} E_x[e^{∫_0^t (g(X_s)−a)ds}] ≤ e^{t(r(g)−a+ε)}. Then, using Fubini's theorem and noting that r(g) − a + ε < 0, for any x_0 ∈ E, we get the desired bound. For the continuity, note that using Assumption (A2) and repeating the argument used in Lemma 4 in Section II.5 of [13], we get that x → E_x[e^{∫_0^t (g(X_s)−a)ds}] is continuous for any t ≥ 0. Also, as in the proof of the boundedness, we may show 0 ≤ sup_{x∈E} E_x[e^{∫_0^t (g(X_s)−a)ds}] ≤ e^{t(‖g‖−a)} 1_{{t∈[0,t_0]}} + e^{t(r(g)−a+ε)} 1_{{t>t_0}}, and the upper bound is integrable (with respect to t). Thus, using Lebesgue's dominated convergence theorem, we get the continuity of the map x → U_0^{g−a}1(x). Using Lemma A.1 we get the uniform integrability of a suitable family of random variables. This result is extensively used throughout the paper as it simplifies numerous limiting arguments.
Proposition A.2. For any x ∈ E, the family {e^{∫_0^τ g(X_s)ds}}_{τ∈T_x} is P_x-uniformly integrable.
Proof. Let us fix some x ∈ E and, for any τ ∈ T_x and n ∈ N, define the event A_n^τ := {∫_0^τ g(X_s)ds ≥ n}. Note that the corresponding bound holds for any T ≥ 0 uniformly in x ∈ E. Then, using Lemma A.1, for any ε > 0, we may find T ≥ 0 such that for any x ∈ E the difference 0 ≤ e^{û(x)} − e^{u_T(x)} is bounded by a term smaller than ε. Thus, letting ε → 0, we get e^{u_T(x)} → e^{û(x)} as T → ∞ uniformly in x ∈ E and, consequently, u_T(x) → û(x) as T → ∞ uniformly in x ∈ E. Thus, from the continuity of x → u_T(x), T ≥ 0, we get that the map x → û(x) is continuous. Now, we show that u ≡ û. First, we show that lim_{T→∞} u_T(x) = ũ(x), where ũ denotes the corresponding value over stopping times from T_x. Letting T → ∞ and taking the supremum over τ ∈ T_x, we get lim_{T→∞} u_T(x) = ũ(x), x ∈ E. Also, using the argument from Lemma 2.2 in [17], we get ũ ≡ u. Thus, we get u(x) = lim_{T→∞} u_T(x) = û(x), x ∈ E, hence the map x → u(x) is continuous. Also, we get (A.2).
Step 2. We show the martingale properties of z. First, we focus on the stopping time τ. Let us define τ_T := inf{t ≥ 0 : u_{T−t}(X_t) ≤ G(X_t)}.
Using the argument from Proposition 11 in [16], we get that τ_T is an optimal stopping time for u_T. Also, noting that the map T → u_T(x), x ∈ E, is increasing, we get that T → τ_T is also increasing; thus we may define τ̄ := lim_{T→∞} τ_T. We show that τ̄ ≡ τ.
