Stochastic Dynamic Programming with Non-linear Discounting

In this paper, we study a Markov decision process with a non-linear discount function and with a Borel state space. We define a recursive discounted utility, which resembles non-additive utility functions considered in a number of models in economics. Non-additivity here follows from non-linearity of the discount function. Our study is complementary to the work of Jaśkiewicz et al. (Math Oper Res 38:108–121, 2013), where also non-linear discounting is used in the stochastic setting, but the expectation of utilities aggregated on the space of all histories of the process is applied leading to a non-stationary dynamic programming model. Our aim is to prove that in the recursive discounted utility case the Bellman equation has a solution and there exists an optimal stationary policy for the problem in the infinite time horizon. Our approach includes two cases: (a) when the one-stage utility is bounded on both sides by a weight function multiplied by some positive and negative constants, and (b) when the one-stage utility is unbounded from below.


Introduction
Discounting future benefits and costs is crucial in order to determine fair prices for investments and projects, in particular when long time horizons come into play, as for example in the task of pricing carbon emissions. The idea behind pricing carbon emissions is to add up the cost of the economic damage caused by these emissions for society. Up to now, the value of one ton of CO2 emissions varies quite heavily between countries, ranging for instance from $119.43 in Sweden to $2.69 in Japan (see [11]). This is of course only partly due to the use of different discount functions, but it nevertheless emphasises the role of discounting.
The traditional discounting with a constant discount factor can be traced back to [29]. This is still the most common way to discount benefits and costs, in particular because it simplifies the computation. However, Koopmans [25] gave an axiomatic characterisation of a class of recursive utilities, which also includes the classical way of discounting. He introduced an aggregator W to aggregate the current utility u_t with the future utility v_{t+1} via v_t = W(u_t, v_{t+1}). When we choose W(u, v) = u + βv, we get the classical discounting with a discount factor β.
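As a small numerical illustration (a sketch with hypothetical stage utilities, not taken from the paper), the backward recursion v_t = W(u_t, v_{t+1}) with the linear aggregator W(u, v) = u + βv recovers exactly the classical discounted sum:

```python
# Backward aggregation v_t = W(u_t, v_{t+1}); with W(u, v) = u + beta*v
# this recovers the classical discounted sum of utilities.
def aggregate(utilities, W, v_terminal=0.0):
    v = v_terminal
    for u_t in reversed(utilities):
        v = W(u_t, v)
    return v

beta = 0.9
u = [1.0, 2.0, 3.0]                       # hypothetical stage utilities
v = aggregate(u, lambda u_t, v_next: u_t + beta * v_next)
classical = sum(beta**t * u_t for t, u_t in enumerate(u))
assert abs(v - classical) < 1e-9
```

Choosing a non-linear W in the same recursion is what leads to the non-additive utilities studied below.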
In this paper, we study a Markov decision process with a Borel state space, unbounded stage utility and a non-linear discount function δ having certain properties. We use an aggregation of the form
v_t = u_t + ∫_X δ(v_{t+1}(·, x_{t+1})) q(dx_{t+1} | x_t, π_t(h_t)),
where q is the transition kernel of the Markov decision process, π_t is the decision rule at time t and h_t the history of the process. When δ(z) = βz, we are back in the classical setting. In this case, it is well known how to solve Markov decision processes with an infinite time horizon, see for example [3,8,15,16,30,31]. In the unbounded utility case, the established method is to use a weighted supremum norm and combine it with Banach's contraction theorem (see e.g. [3,4,9,14,17]). In our setting the Banach contraction principle cannot be applied. Indeed, our paper is in the spirit of [20,21,22], where non-linear discounting was also used and an extension of the Banach theorem due to Matkowski [26] was applied. Whereas the papers [21,22] consider a purely deterministic decision process, the work [20] treats a stochastic problem. However, in [20] the expectation is the final operator applied at the end of aggregation, whereas in the present paper expectation and discounting operators alternate. As will be explained in Section 3, this has the advantage that we obtain optimal stationary policies in our setting.
The main result of our paper is a solution procedure for this new kind of discounting problem with stochastic transitions. In particular, we provide an optimality equation, show that it has a solution and prove that there exists an optimal stationary policy for the problem in the infinite time horizon. Note that we allow the utility function to be unbounded.
The outline of our paper is as follows. We introduce our model data together with the assumptions in Section 2. In Section 3, we present our optimisation problem. In particular, we explain how the utility is aggregated in our model and what precisely distinguishes it from the model and results in [20]. In Section 4, we summarise some auxiliary results like a measurable selection theorem and a generalised fixed point theorem, which is used later. Next, in Subsection 5.1 we treat the model where the positive and negative parts of the one-stage utility are bounded by a weight function ω. We show in this case that the value function v* is the unique fixed point of the corresponding maximal reward operator and that every maximiser in the Bellman equation for v* defines an optimal stationary policy. In Subsection 5.2, we then consider the setting where the utility function is unbounded from below, but still bounded from above by the weight function ω. Here, we can only show that the value function v* is a fixed point of the corresponding maximal reward operator; examples show that the fixed point is not necessarily unique. Still, as in Subsection 5.1, any maximiser in the Bellman equation for v* defines an optimal stationary policy. The proof employs an approximation of v* by a monotone sequence of value functions which are bounded in absolute value by a weight function ω as in Subsection 5.1. In Section 6, we briefly discuss two numerical algorithms for the solution of the problems from Subsection 5.1, namely policy iteration and policy improvement. The last section presents some applications. We discuss two different optimal growth models, an inventory problem and a stopping problem.

The dynamic programming model
Let N (resp. R) denote the set of all positive integers (resp. all real numbers), and let R̄ := R ∪ {−∞}, R_+ := [0, ∞). A Borel space Y is a non-empty Borel subset of a complete separable metric space. By B(Y) we denote the σ-algebra of all Borel subsets of Y, and we write M(Y) to denote the set of all Borel measurable functions g: Y → R.
A discrete-time Markov decision process is specified by the following objects:
(i) X is the state space and is assumed to be a Borel space.
(ii) A is the action space and is assumed to be a Borel space.
(iii) D is a non-empty Borel subset of X × A. We assume that for each x ∈ X, the non-empty x-section A(x) := {a ∈ A : (x, a) ∈ D} of D represents the set of actions available in state x.
(iv) q is a transition probability from D to X. For each B ∈ B(X), q(B|x, a) is the probability that the new state is in the set B, given that the current state is x ∈ X and an action a ∈ A(x) has been chosen.
Let H_1 := X and let H_{n+1} be the space of admissible histories up to the n-th transition, i.e., H_{n+1} := D^n × X for n ∈ N. An element h_n of H_n is called a partial history of the process. We put H := D × D × ··· and assume that H_n and H are equipped with the product σ-algebras.
In this paper, we restrict ourselves to deterministic policies, since randomisation does not give any advantage from the point of view of utility maximisation. A policy π is a sequence (π_n) of decision rules, where for every n ∈ N, π_n is a Borel measurable mapping which associates with any admissible history h_n ∈ H_n an action a_n ∈ A(x_n). We write Π to denote the set of all policies. Let F be the set of all Borel measurable mappings f: X → A such that f(x) ∈ A(x) for every x ∈ X. When A(x) is compact for each x ∈ X, then from the Arsenin-Kunugui result (see Theorem 18.18 in [24]) it follows that F ≠ ∅. A policy π is called stationary if π_n = f for all n ∈ N and some f ∈ F. Therefore, a stationary policy π = (f, f, ...) will be identified with f ∈ F, and the set of all stationary policies will also be denoted by F.
Our next assumption is on the discount function δ.

Remark 2.1
In some empirical studies it was observed that negative and positive utilities were discounted by different discount factors ("sign effect"). Therefore, a simple non-linear discount function is δ(y) = δ_1 y for y ≤ 0 and δ(y) = δ_2 y for y > 0, where δ_1, δ_2 ∈ (0, 1) and δ_1 ≠ δ_2. For a discussion and interpretation of this and other types of discount functions the reader is referred to Jaśkiewicz, Matkowski and Nowak [21]. Additional examples of discount functions are also given in Section 7.
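For illustration (a sketch with hypothetical factors δ_1 = 0.6 and δ_2 = 0.9), such a sign-effect function is easy to implement, and its non-linearity is visible in the failure of additivity across signs:

```python
# "Sign effect": losses (y <= 0) discounted by d1, gains (y > 0) by d2.
def delta(y, d1=0.6, d2=0.9):
    return d1 * y if y <= 0 else d2 * y

assert delta(0.0) == 0.0
# delta is non-linear: additivity fails when gains and losses mix
assert delta(5.0) + delta(-5.0) != delta(5.0 + (-5.0))
```

It is this failure of linearity that makes the recursive discounted utility below non-additive.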
The following two standard sets of assumptions will be used alternatively.

Assumptions (W):
(W2.1) A(x) is compact for every x ∈ X and the set-valued mapping x → A(x) is upper semicontinuous, i.e., {x ∈ X : A(x) ∩ K ≠ ∅} is closed for each closed set K ⊂ A,
(W2.2) the function u is upper semicontinuous on D,
(W2.3) the transition probability q is weakly continuous, i.e., (x, a) → ∫_X φ(y) q(dy|x, a) is continuous on D for each bounded continuous function φ,
(W2.4) the function ω is continuous on X,
(W2.5) the function (x, a) → ∫_X ω(y) q(dy|x, a) is continuous on D.

Assumptions (S):
(S2.1) A(x) is compact for every x ∈ X,
(S2.2) the function u(x, ·) is upper semicontinuous on A(x) for every x ∈ X,
(S2.3) for each x ∈ X and every Borel set B ⊂ X, the function q(B|x, ·) is continuous on A(x),
(S2.4) the function a → ∫_X ω(y) q(dy|x, a) is continuous on A(x) for every x ∈ X.
The above conditions were used in stochastic dynamic programming by many authors, see, e.g., Schäl [30], Bäuerle and Rieder [3], Bertsekas and Shreve [7] or Hernández-Lerma and Lasserre [16,17]. Using the so-called "weight" or "bounding" function ω one can study dynamic programming models with unbounded one-stage utility u. This method was introduced by Wessels [34], but as noted by van der Wal [32], in dynamic programming with a linear discount function δ(z) = βz one can introduce an extra state x_e ∉ X, re-define the transition probability and the utility function and obtain an equivalent "bounded model". More precisely, we consider a new state space X ∪ {x_e}, where x_e is an absorbing isolated state. Let A(x_e) = {a_e} with an extra action a_e ∉ A. For x ∈ X the action sets are A(x). The transition probabilities Q and one-stage utilities R in the new model are as follows:
Q(B|x, a) := (1/(αω(x))) ∫_B ω(y) q(dy|x, a), Q({x_e}|x, a) := 1 − (1/(αω(x))) ∫_X ω(y) q(dy|x, a), R(x, a) := u(x, a)/ω(x), for (x, a) ∈ D,
and Q({x_e}|x_e, a_e) := 1, R(x_e, a_e) := 0.
Here, α is the constant from assumption (B2.4). This transformed Markov decision process is equivalent to the original one in the sense that every policy gives the same total expected discounted payoff up to the factor ω(x), where x ∈ X denotes the initial state. We would like to emphasise that in the non-linear discount function case such a transformation to a bounded model is not possible, and we need to do some extra work.
Remark 2.5 If u is bounded from above by some constant, then we can skip assumptions (A) and it is enough in assumptions (B) to require (B2.1), (B2.2) and (B2.3)(i). In this case, it suffices to put α = 1 and ω(x) = 1 for all x ∈ X, and it is easily seen that (B2.3)(ii), (B2.4), (S2.4), (W2.4) and (W2.5) hold. If, on the other hand, u is unbounded in the sense that there exists a function ω meeting conditions (A2.2), then ω(x) ≥ 1 for all x ∈ X must be unbounded as well. From Remark 2.3, it follows that (B2.3) holds when the function z → γ(z)/z is non-increasing. We would like to emphasise that condition (ii) in (B2.3) is crucial in our proofs in the case when (A2.2) holds with an unbounded function ω. The dynamic programming problems where only (A2.2) is assumed can be solved by a "truncation method", making use of an approximation by solutions for models that satisfy conditions (A).
Discounted utility evaluations: two alternative approaches

Below in this section we assume that all expectations (integrals) and limits exist. In the sequel, we shall study cases where this assumption is satisfied.
Let π ∈ Π and let x = x_1 ∈ X be an initial state. By E^π_x we denote the expectation operator with respect to the unique probability measure P^π_x on H induced by the policy π ∈ Π and the transition probability q according to the Ionescu-Tulcea theorem, see Proposition 7.28 in [7].
Definition 3.1 For any π ∈ Π and any initial state x = x_1, the n-stage discounted utility is defined as
J_n(x, π) := E^π_x [ u(x_1, a_1) + δ( u(x_2, a_2) + δ( ··· + δ(u(x_n, a_n)) ··· ) ) ],
where a_n = π_n(h_n), and the expected discounted utility over an infinite horizon is
J(x, π) := lim_{n→∞} J_n(x, π).     (3.1)
A policy π* ∈ Π is optimal in the dynamic programming model under utility evaluation (3.1) if J(x, π*) = sup_{π∈Π} J(x, π) for all x ∈ X.
Remark 3.2 Utility functions as in (3.1) have been considered by Jaśkiewicz, Matkowski and Nowak [20]. Optimal policies have been shown to exist for the model with u bounded from above satisfying assumptions (A) and either (W) or (S). However, the optimal policies obtained in [20] are history-dependent and are characterised by an infinite system of Bellman equations as in the non-stationary model of Hinderer [18].
To obtain stationary optimal policies, we shall define a recursive discounted utility using ideas similar to those developed in the papers on dynamic programming by Denardo [12] and Bertsekas [6] and in papers on economic dynamic optimisation [1,2,4,9,14,27,28,33]. The seminal article for these studies was the work of Koopmans [25] on stationary recursive utility, generalising the standard discounted utility of Samuelson [29]. To define the recursive discounted utility we must introduce some operator notation.
Let π = (π_1, π_2, ...) ∈ Π and v ∈ M(H_{k+1}). We set
(Q^δ_{π_k} v)(h_k) := u(x_k, π_k(h_k)) + ∫_X δ( v(h_k, π_k(h_k), y) ) q(dy | x_k, π_k(h_k)), h_k ∈ H_k.
These operators are well-defined, for example, when u and v are bounded from above. Similarly, we define Q^γ_{π_k} with δ replaced by γ. Observe that by (B2.1), δ(y) ≤ γ(y) for all y ≥ 0. The composition Q^δ_{π_1} ∘ ··· ∘ Q^δ_{π_n} maps M(H_{n+1}) into M(H_1) = M(X). Let 0 be the function that assigns zero to each argument y ∈ X.
Definition 3.3 For any π = (π_k) ∈ Π and any initial state x = x_1, the n-stage recursive discounted utility is defined as
U_n(x, π) := (Q^δ_{π_1} ∘ ··· ∘ Q^δ_{π_n} 0)(x),
and the recursive discounted utility over an infinite horizon is
U(x, π) := lim_{n→∞} U_n(x, π).     (3.2)
A policy π* ∈ Π is optimal in the dynamic programming model under utility evaluation (3.2) if U(x, π*) = sup_{π∈Π} U(x, π) for all x ∈ X. For instance, for n = 3 the full formula reads
U_3(x, π) = u(x_1, a_1) + ∫_X δ( u(x_2, a_2) + ∫_X δ( u(x_3, a_3) ) q(dx_3 | x_2, a_2) ) q(dx_2 | x_1, a_1),
where a_k = π_k(h_k). We would like to point out that in the special case of a linear discount function δ(z) = βz with β ∈ (0, 1), the two above-mentioned approaches coincide. In that case we deal with the usual expected discounted utility, because the linear function δ can be interchanged with the expectation operators.
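To make the coincidence in the linear case concrete, the following sketch (a hypothetical two-state kernel and state-dependent utility, actions suppressed; not from the paper) evaluates the three-stage recursive discounted utility by the recursion u(x) + Σ_y δ(·) q(y|x) and compares it with the expected discounted sum over all histories:

```python
import itertools

P = [[0.7, 0.3], [0.4, 0.6]]   # hypothetical kernel q(y|x)
u = [1.0, -2.0]                # hypothetical one-stage utility u(x)
beta = 0.9
delta = lambda z: beta * z     # linear discount function

def U_rec(x, n):
    # recursive discounted utility: u(x) + sum_y delta(U_rec(y, n-1)) q(y|x)
    if n == 0:
        return 0.0
    return u[x] + sum(P[x][y] * delta(U_rec(y, n - 1)) for y in range(2))

def U_classical(x, n):
    # E[ sum_t beta^(t-1) u(x_t) ] over all length-n histories from x
    total = 0.0
    for path in itertools.product(range(2), repeat=n - 1):
        states = (x,) + path
        prob = 1.0
        for s, t in zip(states, states[1:]):
            prob *= P[s][t]
        total += prob * sum(beta**k * u[s] for k, s in enumerate(states))
    return total

for x in range(2):
    assert abs(U_rec(x, 3) - U_classical(x, 3)) < 1e-12
```

With a non-linear δ the two evaluations differ in general, which is exactly the point of the model.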

Auxiliary results
Let Y be a Borel space. By U(Y) we denote the space of all upper semicontinuous functions on Y. We recall some results on measurable selections and a generalisation of the Banach fixed point theorem, and we present a property of a subadditive function γ.

Lemma 4.1 Assume that A(x) is compact for each x ∈ X.
(a) Let g ∈ M(D) be such that a → g(x, a) is upper semicontinuous on A(x) for each x ∈ X. Then g*(x) := max_{a∈A(x)} g(x, a) is Borel measurable and there exists a Borel measurable mapping f*: X → A such that f*(x) ∈ A(x) and g(x, f*(x)) = g*(x) for all x ∈ X.
(b) If, in addition, we assume that x → A(x) is upper semicontinuous and g ∈ U(D), then g* ∈ U(X).
Lemma 4.2 Let assumptions (B) be satisfied. Our results will be formulated using the standard dynamic programming operators. For any v ∈ M^a_b(X), put
Sv(x, a) := u(x, a) + ∫_X δ(v(y)) q(dy|x, a), (x, a) ∈ D.
Next, define
Tv(x) := sup_{a∈A(x)} Sv(x, a), x ∈ X.     (4.1)
By T^(m) we denote the composition of T with itself m times. If f ∈ F and v ∈ M^a_b(X), then we put
T_f v(x) := Sv(x, f(x)), x ∈ X.     (4.2)
The next result follows from Lemmas 4.1-4.2 and Lemmas 8.3.7 and 8.5.5 in [17].
The following fixed point theorem will play an important role in our proofs (see e.g. [26] or Theorem 5.2 in [13]).

Lemma 4.5 Let (Z, m) be a complete metric space and let ψ: R_+ → R_+ be a continuous, increasing function with ψ(x) < x for all x ∈ (0, ∞). If an operator T: Z → Z satisfies the inequality m(Tz_1, Tz_2) ≤ ψ(m(z_1, z_2)) for all z_1, z_2 ∈ Z, then T has a unique fixed point z* ∈ Z and m(T^(n)z, z*) → 0 as n → ∞ for every z ∈ Z.

For the convenience of the reader we formulate and prove a modification of Lemma 8 in [20] that is used many times in our proofs. Consider a function ψ: R_+ → R_+ and, for z > 0, put ψ_1(z) := z and ψ_{m+1}(z) := z + ψ(ψ_m(z)) for m ∈ N, so that z appears m times in the nested expression ψ_m(z).

Lemma 4.6 If ψ is increasing, subadditive and ψ(y) < y for all y > 0, then for any z > 0 the limit L(z) := lim_{m→∞} ψ_m(z) exists and is finite.

Proof For any k ∈ N, let ψ^(k) denote the composition of ψ with itself k times. Since the function ψ is increasing, we have ψ_m(z) ≤ ψ_{m+1}(z) for each m ≥ 1. Hence, the sequence (ψ_m(z)) is increasing. We show that its limit is finite. Let ǫ > 0 be fixed. Since ψ^(m)(z) → 0 as m → ∞, there exists m ≥ 1 such that ψ^(m)(z) < ǫ. From the subadditivity of ψ (set γ := ψ in (2.1)) and an induction argument it follows that the increasing sequence (ψ_m(z)) is bounded from above, and therefore its limit L(z) is finite.

Stationary optimal policies in dynamic problems with the recursive discounted utilities

In this section, we prove that if u ∈ M^a_b(D), assumptions (B) hold and either conditions (W) or (S) are satisfied, then the recursive discounted utility functions (3.2) are well-defined and there exists an optimal stationary policy. Moreover, under assumptions (W) (resp. (S)), the value function x → sup_{π∈Π} U(x, π) belongs to U^a_b(X) (resp. M^a_b(X)). The value function and an optimal policy will be characterised via a single Bellman equation. First we shall study the case u ∈ M^d_b(D) and then apply an approximation technique to the unbounded-from-below case.
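A quick numerical check of Lemma 4.6 (an illustration under the assumption that ψ_m(z) denotes the nested expression z + ψ(z + ψ(··· + ψ(z) ···)) with z appearing m times; the particular choice ψ(y) = ln(1 + y) is increasing, subadditive and satisfies ψ(y) < y for y > 0):

```python
import math

psi = lambda y: math.log1p(y)   # increasing, subadditive, psi(y) < y for y > 0

def psi_m(z, m):
    # nested expression z + psi(z + psi(... + psi(z)...)), z appearing m times
    v = z
    for _ in range(m - 1):
        v = z + psi(v)
    return v

z = 2.0
vals = [psi_m(z, m) for m in range(1, 60)]
assert all(a <= b for a, b in zip(vals, vals[1:]))   # (psi_m(z)) is increasing
assert abs(vals[-1] - vals[-2]) < 1e-10              # it converges: L(z) finite
L = vals[-1]
assert abs(L - (z + psi(L))) < 1e-9                  # the limit solves L = z + psi(L)
```

The finite limit L(z) is exactly the bound used repeatedly in the estimates of Section 5.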

One-period utilities with bounds on both sides
Assume that M^d_b(X) is endowed with the so-called weighted norm ‖·‖_ω defined as
‖v‖_ω := sup_{x∈X} |v(x)|/ω(x).
The following theorem is the main result of this subsection. Its proof is split into several parts below.
Theorem 5.1 Suppose that assumptions (A), (B) hold and assumptions (W) are satisfied. Then
(a) the Bellman equation Tv = v has a unique solution v* ∈ U^d_b(X) and v*(x) = sup_{π∈Π} U(x, π) for every x ∈ X,
(b) there exists f* ∈ F such that T_{f*} v* = v* and f* is an optimal stationary policy.

The points (a) and (b) also remain valid under assumptions (A), (B) and (S), with v* ∈ M^d_b(X).
Remark 5.2 As already mentioned, in our approach we consider only deterministic strategies. This is because the optimality results do not change when we take randomised strategies into account. Actually, we may examine a new model in which the original action sets A(x) are replaced by the sets of probability measures Pr(A(x)). Then the Bellman equation has a solution as in Theorem 5.1, but the supremum in (4.1) is taken over the set Pr(A(x)). However, due to our assumptions the maximum is also attained at a Dirac delta concentrated at some point of A(x). Therefore, randomised strategies do not influence the results. We point out that γ^(n) is the n-th iterate of the function γ.
We now prove that the recursive discounted utility (3.2) is well-defined.

Lemma 5.3 If u ∈ M^d_b(D) and assumptions (B) hold, then U(x, π) := lim_{n→∞} U_n(x, π) exists for every π ∈ Π and x ∈ X, and U(·, π) ∈ M^d_b(X).

Proof We shall prove that (U_n(·, π)) is a Cauchy sequence of functions in M^d_b(X) for each policy π ∈ Π. We first claim that the estimate (5.3) holds. Indeed, using assumptions (B) and proceeding inductively in m (the cases m = 1, 2, 3 follow directly from (B) for any h_{n+1} ∈ H_{n+1}), we obtain the bound (5.4), in which z appears m times on the right-hand side. By (5.2), γ_m(z) < L(z) < ∞. Combining (5.3) and (5.4) and making use of (B2.4) and (5.1), we arrive at (5.5). From the proof of (5.4) we deduce that U_n(·, π) ∈ M^d_b(X) for each n ∈ N. From (5.5) it follows that (U_n(x, π)) is a Cauchy sequence in the Banach space M^d_b(X).
Proof of Theorem 5.1 Consider first assumptions (W). By Lemma 4.3, T maps U^d_b(X) into itself. We show that T has a fixed point in this space. Under assumptions (B), T satisfies the contraction-type inequality required in Lemma 4.5. Since the space U^d_b(X) endowed with the metric induced by the norm ‖·‖_ω is complete, by Lemma 4.5 there exists a unique v* ∈ U^d_b(X) with Tv* = v*. By Lemma 4.1 and the assumptions that δ is increasing and continuous, it follows that there exists a maximiser f* ∈ F with T_{f*} v* = v*. The operator T_{f*} on M^d_b(X) also satisfies the assumptions of Lemma 4.5. Thus there is a unique function ṽ ∈ M^d_b(X) such that ṽ = T_{f*} ṽ and T^(n)_{f*} h → ṽ for any h ∈ M^d_b(X). Therefore, ṽ = v*. Putting h := 0, we deduce from Lemma 5.3 that U(x, f*) = v*(x) for all x ∈ X. In order to prove the optimality of f*, note that for any a ∈ A(x) and x ∈ X it holds that Sv*(x, a) ≤ v*(x). Taking any policy π = (π_n) and iterating this inequality, we obtain an upper bound for U_n(x, π) in terms of v*. We now prove that lim_{n→∞} U_n(x, π) ≤ v*(x). With this end in view, we first consider the differences U_n(x, π) − U(x, π). By Lemma 5.3, U_n(x, π) → U(x, π) for every x ∈ X as n → ∞. Therefore, U(x, π) ≤ v*(x) for every π ∈ Π, which finishes the proof under assumptions (W). For assumptions (S) the proof proceeds along the same lines: by Lemma 4.3, under (S), T maps M^d_b(X) into itself.

Remark 5.4 Under the assumptions of Theorem 5.1, the Bellman equation has a unique solution and it is the optimal value function v*(x) = sup_{π∈Π} U(x, π). Moreover, it holds that T^(n) 0 → v*. Obviously, T^(n) 0 is the value function of the n-step dynamic programming problem. One can say that the value iteration algorithm works and the iterations T^(n) 0(x) converge to v*(x) for each x ∈ X. This convergence is uniform in x ∈ X when the weight function ω is bounded.
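As a sketch of the value iteration from Remark 5.4 on a small finite model (all numbers hypothetical; the operator T uses a sign-effect discount function as in Remark 2.1), the iterates T^(n)0 approach a fixed point of T:

```python
X = [0, 1]                        # states
A = [0, 1]                        # actions, A(x) = A for both states
u = {(0, 0): 2.0, (0, 1): 0.0, (1, 0): -1.0, (1, 1): 1.0}
q = {(0, 0): [0.8, 0.2], (0, 1): [0.5, 0.5],
     (1, 0): [0.3, 0.7], (1, 1): [0.6, 0.4]}
d1, d2 = 0.6, 0.9                 # discount losses/gains differently
delta = lambda z: d1 * z if z <= 0 else d2 * z

def T(v):
    # (Tv)(x) = max_a [ u(x,a) + sum_y delta(v(y)) q(y|x,a) ]
    return [max(u[x, a] + sum(p * delta(v[y]) for y, p in enumerate(q[x, a]))
                for a in A) for x in X]

v = [0.0, 0.0]
for _ in range(500):              # value iteration: v_n = T^(n) 0
    v = T(v)
assert max(abs(a - b) for a, b in zip(v, T(v))) < 1e-10   # v is (nearly) fixed
```

Here δ admits the estimate |δ(a) − δ(b)| ≤ 0.9 |a − b|, so the iteration converges geometrically, in line with Lemma 4.5.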

One-period utilities unbounded from below
In this subsection we drop condition (A2.1) and assume that there exists c > 0 such that u(x, a) ≤ cω(x) for all (x, a) ∈ D. In other words, u ∈ M^a_b(D). Here we obtain the following result, which is proved in the remaining part of this subsection.

Remark 5.6 We shall prove that v* is the limit of a non-increasing sequence of value functions in "truncated models", i.e., models that satisfy (A2.1) and (A2.2). The convergence is monotone, but it is not uniform. The Bellman equation may have many unbounded solutions.
Remark 5.7 The assumptions of Theorem 5.5 do not guarantee uniqueness, as a very simple example shows. Assume that X = N, A = A(x) = {a}, u(x, a) = 0 for all (x, a) ∈ D, and the process moves from state x to x + 1 with probability one. The discount function is δ(y) = βy with β ∈ (0, 1). Clearly, u satisfies assumption (A2.2) with ω(x) = 1 and c = 1. Note that v(x) = r/β^x is a solution to the Bellman equation Tv = v for any r ∈ R. Clearly, v*(x) = 0 is one of them. Actually, v*(x) = 0 is the largest non-positive solution to the Bellman equation. This example does not contradict the uniqueness result in Theorem 5.1: within the class of bounded functions, v*(x) = 0 is the unique solution to the Bellman equation.
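The example can be checked directly (a sketch with β = 0.5 and a few values of r):

```python
# Remark 5.7: X = {1, 2, ...}, u = 0, deterministic move x -> x+1,
# delta(y) = beta*y. Every v(x) = r / beta**x solves (Tv)(x) = v(x).
beta = 0.5

def v(x, r):
    return r / beta**x

def Tv(x, r):
    return 0.0 + beta * v(x + 1, r)   # u = 0; next state is x + 1 w.p. 1

for r in (-3.0, 0.0, 1.0, 7.5):
    for x in range(1, 10):
        assert abs(Tv(x, r) - v(x, r)) < 1e-9
```

Each choice of r gives a different (unbounded, for r ≠ 0) solution, illustrating the non-uniqueness.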
We now prove that the recursive discounted utility (3.2) is well-defined.

Lemma 5.8 If u ∈ M^a_b(D) and assumptions (B) are satisfied, then U(x, π) := lim_{n→∞} U_n(x, π) exists in R̄ for any policy π ∈ Π and any initial state x ∈ X. Moreover, U(·, π) ∈ M^a_b(X).

Our assumption that u ∈ M^a_b(D) means that (A2.2) holds, i.e., there exists c > 0 such that u(x, a) ≤ cω(x) for all (x, a) ∈ D.
Proof of Lemma 5.8 We divide the proof into five parts.
Step 1 We start with a simple observation: for any n ∈ N, x ∈ X and π ∈ Π, the estimate (5.6) holds. Indeed, assumptions (B) yield a bound valid for all a ∈ R and x ∈ X, which applies to any h_k ∈ H_k and π_k. Furthermore, for any v ∈ M(H_{k+1}) such that v(h_k, π_k(h_k), y) ≤ ηω(y) for all y ∈ X and some η > 0, the operator Q^δ_{π_k} preserves an estimate of the same form. From this fact and (5.6) the claim follows. This finishes the first step.
Step 3 Let v ∈ M(H_{k+1}) be such that v(h_k, a_k, x_{k+1}) ≤ ηω(x_{k+1}) for every x_{k+1} ∈ X and for some η > 0. Iterating the resulting estimates, we obtain a bound in which c appears m times on the right-hand side. Putting c̄ := L(z) with z = c in (5.2), we obtain (5.7).

Step 4 For m, n ∈ N, we introduce the auxiliary functions W_{n,m}(x, π). From (5.8) it follows that (5.9) holds. Observe that a recursive relation holds for each k = 1, ..., n − 2. Now, using (5.10), for any π ∈ Π and x = x_1 we conclude that this together with (5.9) implies (5.11) for all m, n ∈ N.

Step 5 We now consider the case where U_n(x, π) > −∞ for an initial state x ∈ X, a policy π ∈ Π and all n ∈ N. From (5.8) it follows that lim_{m→∞} W_{n,m}(x, π) exists and is finite; let us denote this limit by G_n. Let ǫ > 0 be fixed. Then, by (5.11), the required estimates hold for sufficiently large n, say n > N_0, and consequently for all n > N_0. Observe that the sequence (G_n) is non-increasing, so G* := lim_{n→∞} G_n exists in the extended real line R̄. Hence, the limit lim_{n→∞} U_n(x, π) also exists and equals G*.
In the proof of Theorem 5.5 we shall need the following result (see [30] or Theorem A.1.5 in [3]). We shall also refer to the dynamic programming operators defined in (4.1) and (4.2). Moreover, for v ∈ M^a_b(X) and K ∈ N we define the corresponding operators T^K and T^K_f, obtained by replacing u in (4.1) and (4.2) with u_K, where f ∈ F and u_K(x, a) := max{u(x, a), 1 − K}, K ∈ N. The recursive discounted utility functions with one-period utility u_K in the finite (n-period) and infinite time horizon for an initial state x ∈ X and a policy π ∈ Π will be denoted by U^K_n(x, π) and U^K(x, π), respectively.
Lemma 5.10 For any n ∈ N and f ∈ F, it holds that lim_{K→∞} T^{K,(n)}_f 0(x) = T^{(n)}_f 0(x) for every x ∈ X.

Proof We proceed by induction. For n = 1 the claim is obvious. Suppose that it holds for some n ∈ N. Here, T^{K,(n)}_f denotes the n-th composition of the operator T^K_f with itself. Then, by our induction hypothesis, our assumption that δ is continuous and increasing, and the monotone convergence theorem, we infer that the claim holds with n replaced by n + 1 for every x ∈ X. The lemma now follows by the induction principle.
Proof of Theorem 5.5 Assume first (W); the proof for (S) is analogous with obvious changes. By Theorem 5.1, for any K ∈ N there exists a unique solution v^{*,K} ∈ U^d_b(X) to the Bellman equation T^K v = v, and for each K the corresponding maximiser defines an optimal stationary policy in the model with one-period utility u_K. Clearly, the sequence (v^{*,K}) is non-increasing, thus v^∞(x) := lim_{K→∞} v^{*,K}(x) exists in R̄ for every x ∈ X, and consequently,
v^∞(x) ≥ v*(x), x ∈ X.     (5.12)
From Theorem 5.1 we know that v^{*,K} is a solution to the equation T^K v^{*,K} = v^{*,K}. Since both sequences (v^{*,K}(x)), x ∈ X, and (u_K(x, a)), (x, a) ∈ D, are non-increasing, it follows from Lemma 5.9, our assumption that δ is increasing and continuous, and the monotone convergence theorem that v^∞ satisfies the Bellman equation (5.13). Moreover, in case (W) we have v^∞ ∈ U^a_b(X). From the obvious inequalities u(x, a) ≤ u_1(x, a) ≤ cω(x), (x, a) ∈ D, it follows that v^∞(x) ≤ c̄ω(x) for c̄ := L(c) and all x ∈ X (put z = c in (5.2)). By Lemma 4.1, there exists a maximiser f ∈ F on the right-hand side of equation (5.13), and we have T_f v^∞ = v^∞. Iterating this equation and using (5.7) from the proof of Lemma 5.8 (with c replaced by c̄, u replaced by u_K and π = f), we obtain a lower estimate for U_n(x, f) valid for all x ∈ X and n ∈ N.
Letting K → ∞ in the above inequality and making use of Lemma 5.10 yields that v^∞(x) ≤ U(x, f) ≤ v*(x) for all x ∈ X. From this inequality and (5.12), we conclude that v^∞ = v* and that f is an optimal stationary policy, and the proof is finished.

Computational Issues
In this section we consider the unbounded utility setting as in Theorem 5.1.

Policy iteration
An optimal stationary policy can be computed as a limit point of a sequence of decision rules. In what follows, we define V_0 = 0 and V_n := T^(n) 0 for n ∈ N. Next, for fixed x ∈ X and n ∈ N, let A*_n(x) denote the set of maximisers of a → SV_{n-1}(x, a) on A(x). In the same way, let A*(x) denote the set of maximisers of a → Sv*(x, a) on A(x). By Ls A*_n(x) we denote the upper limit of the set sequence (A*_n(x)), that is, the set of all accumulation points of sequences (a_n) with a_n ∈ A*_n(x) for all n ∈ N. The next result states that an optimal stationary policy can be obtained from accumulation points of sequences of maximisers of recursively computed value functions. Related results for dynamic programming with standard discounting are discussed, for example, in [3] and [30].

Theorem 6.1 Under the assumptions of Theorem 5.1, we obtain: Ls A*_n(x) ⊂ A*(x) for every x ∈ X. Using the arguments as in the proof of (5.5), we infer that V_n → v* in the norm ‖·‖_ω.

Let the discount function δ be as follows, with some ε ∈ (0, 1). We observe that there is no constant β ∈ (0, 1) such that δ(z) < βz for all z > 0. Then |u(x, a)| ≤ √x ≤ √(x + 1) for a ∈ A(x) = [0, x] and x ∈ X. Thus, (A) holds. Furthermore, by Jensen's inequality we can verify (B2.4). We assume that the next state evolves according to the equation x_{t+1} = (x_t − a_t)ξ_t.

In the stopping problem, the controller has to decide at each stage whether to stop the process and receive the reward R(x), where x is the current state, or to continue. In the latter case the reward C(x) (which might be a cost) is received. The aim is to find a stopping time such that the recursive discounted reward is maximised. We assume here that the controller has to stop with probability one. This problem is a special case of the more general model of Section 2. We have to choose here:
(i) X ∪ {∞} is the state space, where ∞ is an absorbing state which indicates that the process is already stopped,
(ii) A := {0, 1}, where a = 0 means continue and a = 1 means stop,
(iii) A(x) := A for all x ∈ X ∪ {∞},
(iv) q(B|x, 0) := q(B|x) for x ∈ X and Borel sets B, q({∞}|x, 1) := 1 for x ∈ X and q({∞}|∞, ·) := 1,
(v) u(x, a) := C(x)(1 − a) + R(x)a for x ∈ X and u(∞, ·) := 0.
We assume now that |C(x)| ≤ ω(x) and |R(x)| ≤ ω(x), which implies (A), and we assume (B). The optimisation problem is considered with the recursive discounted utility, with the interpretation that the receiving of rewards (costs) is stopped after a random time. This random time is a stopping time with respect to the filtration generated by the observable states (for more details see Chapter 10 in [3]). The action space A consists of two elements only, so assumptions (S) are satisfied. In the above description we already assumed (A) and (B). Therefore, the result follows from Theorem 5.1.

Let us consider a special example. Imagine a person who wants to sell her house. At the beginning of each week she receives an offer, which is randomly distributed over the interval [m, M] with 0 < m < M. The offers are independent and identically distributed with distribution q. The house seller has to decide immediately whether to accept or reject this offer. If she rejects, the offer is lost and she incurs maintenance costs. Which offer should she accept in order to maximise her expected reward?
Note that C* := ∫_{[m,M]} δ(v*(y)) q(dy) is obviously a constant independent of x. Thus, the optimal strategy is to accept the first offer which is above −c + C*. The corresponding stopping time is geometrically distributed and thus certainly satisfies P(τ* < ∞) = 1. Moreover, it is not difficult to see that whenever we have two discount functions δ_1, δ_2 which satisfy assumptions (B) and which are ordered, i.e., δ_1 ≤ δ_2, then C*_1 ≤ C*_2, because the operator T is monotone. Thus, with stricter discounting we will stop earlier.
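For a concrete sketch (hypothetical numbers: offers uniform on [m, M] = [1, 2], maintenance cost c = 0.05, linear δ(z) = 0.95z; the offer distribution is approximated by Monte Carlo sampling), the constant C* can be found by fixed-point iteration, and the acceptance threshold −c + C* lies strictly inside (m, M):

```python
import random

m, M, c, beta = 1.0, 2.0, 0.05, 0.95
delta = lambda z: beta * z
random.seed(0)
ys = [random.uniform(m, M) for _ in range(20_000)]   # samples from q

# v*(x) = max{x, -c + C*} with C* = E[ delta(v*(Y)) ]; iterate on C*
C = 0.0
for _ in range(200):
    C = sum(delta(max(y, -c + C)) for y in ys) / len(ys)

threshold = -c + C          # accept the first offer above this value
assert m < threshold < M    # reject low offers, accept high ones
```

Replacing δ with a pointwise smaller discount function lowers C and hence the threshold, matching the monotonicity remark above.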

Theorem 5.5
Suppose that assumptions (A2.2), (B) and (W) are satisfied. Then
(a) the optimal value function v*(x) := sup_{π∈Π} U(x, π), x ∈ X, is a solution to the Bellman equation Tv = v and v* ∈ M^a_b(X),
(b) there exists f ∈ F such that T_f v* = v* and f is an optimal stationary policy,
(c) v* ∈ U^a_b(X).
The points (a) and (b) also remain valid under assumptions (A2.2), (B) and (S).

Lemma 5.9
If Y is a metric space and (w_n) is a non-increasing sequence of upper semicontinuous functions w_n: Y → R, then
(a) w_∞ := lim_{n→∞} w_n exists and w_∞ is upper semicontinuous,
(b) if, additionally, Y is compact, then max_{y∈Y} lim_{n→∞} w_n(y) = lim_{n→∞} max_{y∈Y} w_n(y).

Example 7.2 (Stochastic optimal growth model 2)
We study a modified version of the model from Example 7.1. Note that
∫_{[0,∞)} √((x − a)s + 1) ν(ds) ≤ √((x − a)s̄ + 1) ≤ √(x + 1) for a ∈ A(x) and x ∈ X.
Hence, (B2.4) is satisfied with α = 1. It is obvious that conditions (W) are also met. Therefore, from Theorem 5.1, there exists an upper semicontinuous solution to the Bellman equation and a stationary optimal policy f* ∈ F.
For the house selling example, we have X := [m, M], C(x) ≡ −c and R(x) := x. From Proposition 7.5 we obtain that the value function satisfies
v*(x) = max{ x ; −c + ∫_{[m,M]} δ(v*(y)) q(dy) }.

If f_{k+1} = f_k, then f_k is an optimal stationary policy. Else set k := k + 1 and go to step 2. It is obvious that the algorithm stops in a finite number of steps if the state and action sets are finite. In general, as in the standard discounted case (see, e.g., Theorem 7.5.1 and Corollary 7.5.3 in [3]), we can only claim that v*(x) = lim_{k→∞} U(x, f_k).

There is a single good available to the consumer. The level of this good at the beginning of period t ∈ N is given by x_t ∈ X := [0, ∞). The consumer has to divide it between consumption a_t ∈ A := [0, ∞) and investment (saving) y_t = x_t − a_t. Thus, A(x_t) = [0, x_t]. From consumption a_t the consumer receives utility u(x_t, a_t) = √a_t. Investment, on the other hand, is used for production with input y_t yielding output x_{t+1} = y_t ξ_t, t ∈ N, where (ξ_t) is a sequence of i.i.d. shocks with distribution ν, a probability measure on [0, ∞). The initial state x = x_1 ∈ R_+ is non-random. Further, we assume that s̄ := ∫_{[0,∞)} s ν(ds) ≤ 1.