Robust Consumption-Investment Problem on Infinite Horizon

In this paper we consider an infinite-horizon consumption-investment problem under model misspecification in a general stochastic factor model. We formulate the problem as a stochastic game and characterize the saddle point and the value function of that game by means of an ODE of semilinear type, for which we provide a proof of an existence and uniqueness theorem. This equation is interesting in its own right, since it generalizes many other equations arising in various infinite-horizon optimization problems.


Introduction
A major weakness of portfolio optimization is its high sensitivity to estimation errors and model misspecification. Concern about model uncertainty should lead the investor to design a strategy which is robust to model imperfections. In this paper a max-min robust version of the classical Merton optimal investment-consumption model is presented. We consider a financial market consisting of a stock and a bond, whose dynamics are described by stochastic differential equations. In addition, the coefficients of our model are affected by a non-tradable but observable stochastic factor. The investor trades between these assets and is supposed to consume part of his wealth. Instead of supposing that this is the exact model, we assume here that the trader knows only that the correct model belongs to a wide class of models, which will be described later. To determine robust consumption-investment controls, the investor maximizes his worst-case total expected discounted HARA utility of consumption. In our paper the problem is formulated as a stochastic game between the market and the investor. To solve it we use a nonlinear Hamilton-Jacobi-Bellman-Isaacs equation. After several substitutions we are able to reduce it to a semilinear equation of the Hamilton-Jacobi-Bellman type, for which we provide a proof of an existence and uniqueness theorem.

D. Zawisza, Institute of Mathematics, Faculty of Mathematics and Computer Science, Jagiellonian University in Krakow, Łojasiewicza 6, 30-348 Kraków, Poland. e-mail: dariusz.zawisza@im.uj.edu.pl
Infinite-horizon consumption-investment problems in stochastic factor models, but without a model uncertainty assumption, were considered, among others, by Fleming et al. [5,6], Pang [16,17], and Hata et al. [12]. Most of these papers use a sub- and supersolution method to prove that there exists a smooth solution to the resulting equation. The exception is the paper of Fleming et al. [5], where the solution to the infinite-horizon HJB equation is approximated by solutions to finite-horizon problems. Our approach is closest to the latter, and in the proof we use stochastic methods to obtain the estimates needed to apply the Arzelà-Ascoli lemma. Moreover, our paper extends many of the aforementioned papers, since to prove that there exists a smooth solution to the resulting equation we do not need any differentiability assumptions on the model coefficients.
The finite-horizon analogue of our problem was considered and solved by Schied [18]. For a literature review of finite-horizon max-min problems we refer to Zawisza [21].
Max-min infinite-horizon optimization methods have recently gained a lot of attention in theoretical economics and finance. A variety of modifications of our problem were considered, among others, by Anderson et al. [1], Faria et al. [4], Gagliardini et al. [9], Hansen et al. [11], and Trojani et al. [19,20]. Most of these works consider the problem from an economic/financial point of view only. Even though our model description can be treated as a special case of their setting, they do not provide rigorous mathematical proofs of their findings.
It is also worth mentioning the work of Knispel [14], where a robust risk-sensitive optimization problem is solved.

Model Description
Let (Ω, F, P) be a probability space with a filtration (F_t, 0 ≤ t < +∞) (possibly enlarged to satisfy the usual assumptions) generated by two independent Brownian motions (W^1_t, 0 ≤ t < +∞), (W^2_t, 0 ≤ t < +∞). We assume that the investor has imprecise knowledge of the dynamic economic environment, and therefore the measure P should be regarded only as an approximate probabilistic description of the economy. Our economy consists of two primitive securities: a bank account (B_t, 0 ≤ t < +∞) and a share (S_t, 0 ≤ t < +∞). We assume also that the price of the share is modulated by one non-tradable (but observable) factor (Y_t, 0 ≤ t < +∞). This factor can represent an additional source of uncertainty, such as stochastic volatility, a stochastic interest rate, or other economic conditions. The processes mentioned above are solutions to the system of stochastic differential equations (2.1). The coefficients r, b, g, a, σ > 0 are continuous functions and are assumed to satisfy all the regularity conditions required to guarantee that a unique strong solution to (2.1) exists. We treat ρ ∈ [−1, 1] as a correlation coefficient.
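The system (2.1) itself is not reproduced above. Based on the listed coefficients r, b, g, a, σ and the correlation ρ, the standard form of such a stochastic factor model reads as follows; this is a hedged reconstruction on our part, not a quotation of (2.1):

```latex
\begin{aligned}
dB_t &= r(Y_t)\,B_t\,dt,\\
dS_t &= S_t\bigl(b(Y_t)\,dt + \sigma(Y_t)\,dW^1_t\bigr),\\
dY_t &= g(Y_t)\,dt + a(Y_t)\bigl(\rho\,dW^1_t + \sqrt{1-\rho^2}\,dW^2_t\bigr).
\end{aligned}
```

Here ρ couples the factor noise to the stock noise, which is consistent with the correlation interpretation of ρ stated above.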
As mentioned above, the investor believes that his model is an imprecise description of the market. A common approach to describing model uncertainty over a finite horizon T is to assume that the probability measure is not precisely known and the investor knows only a class of possible measures. In many papers (Cvitanic and Karatzas [2], Hernández and Schied [13]) it is usually assumed that this class is equal to the set of measures Q^η, where E(·)_t denotes the Doléans-Dade exponential and M denotes the set of all bounded, progressively measurable processes η = (η_1, η_2) taking values in a fixed compact, convex set Γ ⊂ R^2. In our setting we follow this type of problem formulation.
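The displayed definition of the class of measures is missing from the text above. In formulations of the Cvitanic-Karatzas type it typically takes the following form, which we sketch under the assumption that the density is a Doléans-Dade exponential driven by both Brownian motions:

```latex
\mathcal{Q} = \Bigl\{\, Q^{\eta} \;:\; \frac{dQ^{\eta}}{dP}\Big|_{\mathcal{F}_T}
  = \mathcal{E}\Bigl(\int_0^{\cdot} \eta^1_t\,dW^1_t
  + \int_0^{\cdot} \eta^2_t\,dW^2_t\Bigr)_{T},\quad \eta \in \mathcal{M} \,\Bigr\}.
```

The boundedness of η ensures that the Novikov condition holds, so each Q^η is indeed a probability measure.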
The dynamics of the investor's wealth process (X^{π,c}_t, 0 ≤ t < +∞) is given by a stochastic differential equation, where x denotes the current wealth of the investor, π can be interpreted as the capital invested in S_t, and c is the consumption per unit of time.
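The wealth equation itself is not displayed above. The standard self-financing dynamics consistent with the stated interpretation of π and c is, again as an assumed reconstruction:

```latex
dX^{\pi,c}_t = \bigl[\,r(Y_t)\,X^{\pi,c}_t + \pi_t\bigl(b(Y_t)-r(Y_t)\bigr) - c_t\,\bigr]\,dt
  + \pi_t\,\sigma(Y_t)\,dW^1_t, \qquad X^{\pi,c}_0 = x.
```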

Formulation of the Problem
We consider a hyperbolic absolute risk aversion (HARA) utility function U(x) = x^γ/γ with a parameter 0 < γ < 1. The negative parameter case (γ < 0) is discussed at the end of the paper. The objective we use is the overall discounted utility of consumption, i.e.
Here E^η_{x,y} denotes the expectation with respect to the measure Q^η. Note that we use the short notation τ_{x,y}, whereas the full form is τ^{π,c}_{x,y}.
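For the reader's convenience, the objective functional described above can be sketched as follows, assuming a discount rate w > 0 and writing τ_{x,y} for the bankruptcy time; the exact symbols are our assumption:

```latex
J^{\pi,c,\eta}(x,y) \;=\; \mathbb{E}^{\eta}_{x,y}\int_0^{\tau_{x,y}}
  e^{-wt}\,\frac{c_t^{\gamma}}{\gamma}\,dt,
\qquad
\tau_{x,y} \;=\; \inf\{\,t \ge 0 \,:\, X^{\pi,c}_t \le 0\,\}.
```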

Definition 2.1
A control (or a strategy) (π, c) = ((π_t, c_t), 0 ≤ t < +∞) is admissible for a starting point (x, y), written (π, c) ∈ A_{x,y}, if it satisfies the following assumptions. Our investor uses preferences of the Gilboa and Schmeidler [10] type to maximize his overall satisfaction. More precisely, he uses a minimax criterion and tries to maximize his objective in the worst-case model, i.e. to maximize

inf_{η∈M} J^{π,c,η}(x, y)   (2.5)

over the class of admissible strategies A_{x,y}. Problem (2.5) is considered as a zero-sum stochastic differential game. The process η is the control of player number 1 (the "market"), while the strategy (π, c) is the control of player number 2 (the "investor"). We are looking for a saddle point ((π*, c*), η*) ∈ A_{x,y} × M and a value function V(x, y) such that

J^{π,c,η*}(x, y) ≤ J^{π*,c*,η*}(x, y) ≤ J^{π*,c*,η}(x, y),

and V(x, y) = J^{π*,c*,η*}(x, y).
As usual, we will seek optimal strategies in feedback form, where π(x, y), c(x, y), η(x, y) are Borel measurable functions and X_t and Y_t are solutions to the system (2.3). Such controls are often called Markov controls and are denoted simply by (π(x, y), c(x, y), η(x, y)).

HJBI Equations and Saddle Point Derivation
We will use the standard HJB approach to solve the robust investment problem stated in the previous section. Let L^{π,c,η} denote the differential operator given by

L^{π,c,η} V(x, y) = ½ a²(y) V_yy + ½ π² σ²(y) V_xx + ρ π σ(y) a(y) V_xy + [g(y) + (ρη_1 + √(1−ρ²) η_2) a(y)] V_y + [r(y)x + π(b(y) − r(y) + σ(y)η_1) − c] V_x.

For simplicity, we omit the (x, y) variables in the functions' notation. To establish a link between this operator and a saddle point of our initial problem, we need to prove a verification theorem. The following one seems to be new in the literature.
, an admissible Markov control (π*(x, y), c*(x, y), η*(x, y)) and constants D_1, D_2 > 0 such that Proof Assume that (x, y) ∈ (0, +∞) × R is fixed. Let us first fix η ∈ M and consider the system (the Q^η dynamics of (X_t, Y_t)): where (T_n, n = 1, 2, ...), T_n → +∞, is a localizing sequence of stopping times such that Letting n → ∞ and using (3.7) we get We should consider two cases. Since (3.5) holds, Note that U(x) = x^γ/γ and (3.4) can be used to obtain In both scenarios (Cases I, II) we can deduce from (3.9) that In addition, (3.6) holds, which gives us the desired inequality If we use η* instead of η and use (3.3), then instead of (3.9) we have which means that Hence, Case I is satisfied also for η = η*, and consequently, after passing t → +∞ and using (3.6), we conclude that Next we choose (π, c) ∈ A_{x,y} and apply the Itô formula to the system Repeating the method presented above and using (3.2) we get Since V is nonnegative, we get Let us point out that conditions (3.1)-(3.3) hold if the upper and the lower Hamilton-Jacobi-Bellman-Isaacs equations are satisfied: To find the saddle point it is more convenient for us to use the upper Isaacs equation. Once we verify that it has a unique solution V, it is necessary to prove that V is also a solution to the lower equation. To do that we use the following minimax theorem proved by Fan [3, Theorem 2].

Saddle Point Derivation
As announced, to find explicit forms of the saddle point ((π*(x, y), c*(x, y)), η*(x, y)), we start with the upper Isaacs equation. This type of reasoning is well known in the literature and therefore we do not present it in full detail. Note that if there exists V ∈ C^{2,2}((0, ∞) × R) with V_xx < 0, then the maximum over (π, c) in (3.10) is well defined and achieved at The HARA type utility motivates us to seek a solution of the form Substituting (3.11) and (3.12) into (3.10) yields where λ(y) := (b(y) − r(y))/σ(y) and F should satisfy the following equation Assuming that there exists a smooth solution to (3.14), we can determine a saddle point candidate (π*(x, y), c*(x, y), η*(x, y)) by finding a Borel measurable function From the calculations (3.10)-(3.14) it follows that η*(x, y) does not depend on x and is equal to the minimizer of (3.14). Moreover, (π*(x, y), c*(x, y)) = (π*(x, y, η*_1(y)), c*(x, y)), where (π*(x, y, η), c*(x, y)) is given by (3.13). The last claim is a consequence of the following two facts: (1) the minimax equality holds: and therefore (π*(x, y), c*(x, y)) is the unique solution to the equation
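As an illustration of the first-order conditions behind (3.11)-(3.13), consider the ansatz V(x, y) = (x^γ/γ) F(y), one standard HARA form; the exact power of F used in (3.12) may differ. Maximizing the Hamiltonian pointwise for fixed η then gives:

```latex
\pi^*(x,y,\eta) \;=\; \frac{x}{1-\gamma}
  \left(\frac{\lambda(y)+\eta_1}{\sigma(y)}
  + \rho\,\frac{a(y)}{\sigma(y)}\,\frac{F_y(y)}{F(y)}\right),
\qquad
c^*(x,y) \;=\; x\,F(y)^{\frac{1}{\gamma-1}},
```

since c ↦ c^γ/γ − c V_x is maximized at c = V_x^{1/(γ−1)}, and with this ansatz V_x = x^{γ−1} F(y). Both expressions are linear in the wealth x, which is the familiar HARA scaling.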

Smooth Solution to the Resulting PDE
In this section we use stochastic methods to derive existence and uniqueness results for classical solutions to the differential equations which play a key role in the solution of our initial problem. Let us recall the equation once more. Assume now that there exists F, a solution to Eq. (4.1), such that a(y) F_y / F is bounded. In this case there exists R > 0 such that Therefore, it is reasonable to consider equations of the form where θ > 0. This type of equation can be rewritten as

½ a²(y) u_yy + max_{δ∈D} min_{η∈Γ} { [i(y) + l(δ, η) a(y)] u_y + h(y, δ, η) u }   (4.2)

where D ⊂ R^n, Γ ⊂ R^k are compact sets. To the best of our knowledge, results on classical solutions to (4.2) have not been available so far under the assumptions given here. We make the following two assumptions.
Assumption 1 The functions a, h, i, l are continuous, a²(y) > ε > 0, and there exist

4)
Remark Assume for a moment that a is constant. If (4.3) is satisfied, then (4.4) also holds with L_2 = L_1. Nevertheless, in some models the constant L_2 can be much smaller than L_1; for instance, it is worth noticing the case i(y) + l(δ, η)a(y) = −y + η, where L_2 can be set to zero.
To construct a candidate solution to our problem we use a sequence of solutions to finite-horizon problems of the form with the terminal condition u(y, T) = 0.
For more details about the verification reasoning used here, see for example the proof of Theorem 6.1 in Zawisza [22]. This implies that Since the opposite inequality holds as well, This representation confirms the uniqueness, the boundedness and the strict positivity of u(y, t). Finally, we note that instead of the class C_{m_1,m_2} in (4.7) we can restrict ourselves to the class C_{m_1,m_2}, since when u is strictly positive, the maximum with respect to c in (4.5) is achieved at a maximizer which is a continuous function.
It is also possible to rewrite Eq. (4.5) in the following form Proof It is sufficient to note that if D ⊂ R^n, Γ ⊂ R^k and f is a continuous function, then which, in addition, is bounded together with its y-derivative and bounded away from zero.

Lemma 4.2 If Assumption 1 is satisfied, then H is continuous and there exists K
Proof The solution will be constructed by taking the limit of a sequence of solutions to the finite-horizon problems (4.5). Suppose that T > 0 is fixed and let u be the solution to (4.5). To use the Arzelà-Ascoli lemma we need to prove uniform estimates for u and all its derivatives. We can use a stochastic control representation to obtain Since h is bounded and w > sup_{η,δ,y} h(y, δ, η), there exists α > 0 such that A bound for u_y will be obtained by estimating the Lipschitz constant. Note that if w > sup_{η,y} h(y, η) + L_2, then w_1 := w − L_2 > sup_{η,y} h(y, η). Moreover, we will use the fact that |e^x − e^y| ≤ |x − y| for x, y ≤ 0. For notational convenience we Using the Itô formula we have Using (4.4) we have Gronwall's lemma yields We should now consider two cases: Thanks to that we have an estimate on v_t. Namely, let t ≥ 0 be fixed. Observe that for where Note that We assumed that w > sup_{y,δ,η} h(Y_k, δ_k, η(δ_k)), hence there exists β > 0 such that for ξ > 0 we have and finally The above inequality ensures that v_t(y, t) is uniformly bounded and that v_t(y, t) converges to 0 as t → ∞, uniformly with respect to y. So far we have obtained uniform bounds for v, v_t, v_y. Moreover, we know that the equation is satisfied, H satisfies (4.8), and a²(y) > ε > 0. Hence, a corresponding bound also holds for v_yy. By the Arzelà-Ascoli lemma, there exists a sequence (t_n, n = 1, 2, ...) such that (v(y, t_n), n = 1, 2, ...) converges to a twice continuously differentiable function, which will also be denoted by v(y). What is more, the convergence holds locally uniformly together with v_y(y, t_n) and v_yy(y, t_n). This indicates that v, v_y are bounded and The uniqueness follows from the infinite-horizon analogue of the stochastic representation (4.6).
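The stochastic control representation invoked at the start of the proof is not displayed in the text. Assuming the finite-horizon equation (4.5) carries a constant source term θ > 0, as suggested by the discussion at the beginning of this section, it takes, schematically, the form:

```latex
u(y,t) \;=\; \sup_{\delta}\,\inf_{\eta}\;
  \mathbb{E}_{y}\int_t^{T} \theta\,
  \exp\Bigl(\int_t^{s}\bigl(h(Y_r,\delta_r,\eta_r)-w\bigr)\,dr\Bigr)\,ds.
```

Under the standing condition w > sup h, writing α := w − sup_{y,δ,η} h(y, δ, η) > 0, the integrand is dominated by θ e^{−α(s−t)}, which yields the uniform bound 0 ≤ u ≤ θ/α claimed in the proof.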
Combining Lemma 4.2 and Theorem 2.1 of Friedman [8], we get that if the conditions of Assumption 1 are satisfied and a ≠ 0, then for all T > 0 there exists a unique bounded solution to the finite-horizon equation (4.5). We are confident that a smooth solution to equation (4.5) exists under more general conditions, but we will treat this problem elsewhere. From now on we assume that a is a nonzero constant. We should now focus on We have already proved that if ĥ and î are continuous and then there exists a nonnegative, bounded and C²(R) solution to (4.13). In addition, m*_1 and m*_2 do not depend on R.
Proof The maximum with respect to c in (4.17) is achieved at

From Lemma 4.4 and Theorem 4.3 we know that
In that case we can set m*_2 := max{P and the conclusion follows.
Finally we are able to consider our main equation: Proof It is sufficient to note that Lemma 4.5 and the inequalities (4.10), (4.11) ensure that for all R > 0 there exists F_R, a solution to (4.13), such that F_{R,y}/F_R is bounded by a constant which is independent of R. This allows us to conclude that there exists R* such that a F_{R*,y}/F_{R*} ≤ R*, and F_{R*} is also a solution to (4.17).

Final Result
Theorem 5.1 Suppose that a ≠ 0 is a constant, g, r, λ are Lipschitz continuous functions, λ, r are bounded, and g satisfies a linear growth condition. In addition, let w > sup_{y,η} ĥ(y, η). Then there exists a saddle point (π*(x, y), c*(x, y), η*(x, y)) such that where F is the unique solution to (4.17) which is bounded together with its y-derivative and bounded away from zero. The term η* is a Borel measurable function which realizes the minimum in (4.17).
Proof It follows from Proposition 4.6 that there exists a solution to (4.17) which is positive, bounded away from zero, and bounded together with its first y-derivative. By the classical measurable selection theorem there exists a Borel measurable η*(y) ∈ Γ realizing the minimum in (4.17). If we set then, due to (3.10)-(3.14), it is sufficient to prove only that (π*(x, y), c*(x, y), η*(x, y)) is an admissible Markov saddle point and that conditions (3.6) and (3.7) hold. Let Note that ζ_1 · (b − r), ζ_1 · σ, and ζ_2 are bounded functions, since λ and λ² are bounded. Therefore, the process Z_t := X^{π*,c*}_t is the unique solution to the equation This is a linear equation with bounded stochastic coefficients, which implies that for all η ∈ M. This confirms the admissibility of (π*(x, y), c*(x, y)).

Examples
We can apply our main result to the following ε-modifications of standard stochastic volatility models: • the Scott model; • the Stein and Stein model.
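The model specifications have dropped out of the text above. The usual forms of these two models are sketched below; the mean-reversion parameters κ, θ are our notation, and our reading of the "ε-modifications" is that σ is altered on a set where it would vanish, so that λ(y) = (b − r)/σ(y) stays bounded as the main theorem requires:

```latex
\begin{aligned}
&\text{Scott:} & dS_t &= S_t\bigl(b\,dt + e^{Y_t}\,dW^1_t\bigr), &
  dY_t &= \kappa(\theta - Y_t)\,dt + a\,dW^{Y}_t,\\
&\text{Stein--Stein:} & dS_t &= S_t\bigl(b\,dt + |Y_t|\,dW^1_t\bigr), &
  dY_t &= \kappa(\theta - Y_t)\,dt + a\,dW^{Y}_t,
\end{aligned}
\qquad dW^{Y}_t = \rho\,dW^1_t + \sqrt{1-\rho^2}\,dW^2_t.
```

In both cases the factor Y is an Ornstein-Uhlenbeck process with constant diffusion coefficient a, consistent with the standing assumption that a is a nonzero constant.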

Negative HARA Parameter Case
It is easy to check that for a negative HARA parameter (γ < 0), the HJBI equation shows that there is no saddle point for that problem, since there is no constraint on the consumption process. Therefore, we may consider a constrained problem, which is based on the following investor's objective: where the dynamics of the investor's wealth process (X^{π,c}_t, 0 ≤ t < +∞) is given by the stochastic differential equation In that problem we assume that the consumption is proportional to the wealth, i.e. c_t = c̄_t X^{π,c}_t. We interpret the process c̄_t as a consumption rate and assume that it belongs to the class C_{m_1,m_2}.
After considering the HJBI equation and after several transformations (as in (3.10)-(3.14)) we obtain the equation: This may be rewritten as