1 Introduction

A major weakness of portfolio optimization is its high sensitivity to estimation errors and model misspecification. Concern about model uncertainty should lead the investor to design a strategy which is robust to model imperfections. In this paper a max–min robust version of the classical Merton optimal investment-consumption model is presented. We consider a financial market consisting of a stock and a bond, whose dynamics are given by stochastic differential equations. In addition, the coefficients of our model are affected by a non-tradable but observable stochastic factor. The investor trades between these assets and consumes part of his wealth. Instead of supposing that this is the exact model, we assume here that the trader knows only that the correct model belongs to a wide class of models, which will be described later. To determine robust consumption-investment controls, the investor maximizes his worst-case total expected discounted HARA utility of consumption. In our paper the problem is formulated as a stochastic game between the market and the investor. To solve it we use a nonlinear Hamilton–Jacobi–Bellman–Isaacs equation. After several substitutions we are able to reduce it to a semilinear equation of Hamilton–Jacobi–Bellman type, for which we prove an existence and uniqueness theorem.

Infinite horizon consumption-investment problems in stochastic factor models, but without a model uncertainty assumption, were considered, among others, by Fleming et al. [5, 6], Pang [16, 17], and Hata et al. [12]. Most of these papers use a sub- and supersolution method to prove that there exists a smooth solution to the resulting equation. The exception is the paper of Fleming et al. [5], where the solution to the infinite horizon HJB equation is approximated by solutions to finite horizon problems. Our approach is closest to the latter, and in the proof we use stochastic methods to obtain the estimates needed to apply the Arzelà–Ascoli lemma. Moreover, our paper extends many of the aforementioned papers, since to prove that there exists a smooth solution to the resulting equation we do not need any differentiability assumptions on the model coefficients.

The finite horizon analogue of our problem was considered and solved by Schied [18]. For a review of the literature on finite horizon max–min problems we refer to Zawisza [21].

Max–min infinite horizon optimization methods have recently gained a lot of attention in theoretical economics and finance. A variety of modifications of our problem were considered, among others, by Anderson et al. [1], Faria et al. [4], Gagliardini et al. [9], Hansen et al. [11], and Trojani et al. [19, 20]. Most of these works consider the problem from an economic/financial point of view only. Even though our model description can be treated as a special case of their setting, they do not provide rigorous mathematical proofs of their findings.

It is also worth mentioning the work of Knispel [14], where a robust risk-sensitive optimization problem is solved.

2 Model Description

Let \(( \Omega , \mathcal {F}, P)\) be a probability space with a filtration \((\mathcal {F}_{t}, 0 \le t < +\infty )\) (possibly enlarged to satisfy the usual assumptions) generated by two independent Brownian motions \((W_{t}^{1}, 0 \le t < +\infty )\), \((W_{t}^{2}, 0 \le t < + \infty )\). We assume that the investor has imprecise knowledge about the dynamic economic environment and therefore the measure \(P\) should be regarded only as an approximate probabilistic description of the economy. Our economy consists of two primitive securities: a bank account \((B_{t}, 0 \le t < + \infty )\) and a share \((S_{t},0 \le t < + \infty )\). We assume also that the price of the share is modulated by one non-tradable (but observable) factor \((Y_{t},0 \le t < + \infty )\). This factor can represent an additional source of uncertainty such as stochastic volatility, a stochastic interest rate, or other economic conditions. The processes mentioned above are solutions to the system of stochastic differential equations

$$\begin{aligned} {\left\{ \begin{array}{ll} dB_{t} &{}=r(Y_{t}) B_{t} dt, \\ dS_{t} &{}=b(Y_{t}) S_{t} dt + \sigma (Y_{t}) S_{t} dW_{t}^{1}, \\ dY_{t} &{}=g(Y_{t}) dt + a(Y_{t})(\rho dW_{t}^{1} + \bar{\rho } dW_{t}^{2}). \end{array}\right. } \end{aligned}$$
(2.1)

The coefficients \(r\), \(b\), \(g\), \(a\), \(\sigma > 0\) are continuous functions, assumed to satisfy all the regularity conditions required to guarantee that a unique strong solution to (2.1) exists. We treat \(\rho \in [-1,1]\) as a correlation coefficient and write \(\bar{\rho }:=\sqrt{1-\rho ^{2}}\).
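For intuition, the system (2.1) can be discretised with a simple Euler–Maruyama step. The sketch below uses purely illustrative coefficient choices, not assumptions of the paper: constant \(r\), \(b\), \(\sigma \) and an Ornstein–Uhlenbeck factor \(g(y)=\kappa (\bar{y}-y)\), \(a(y)=\alpha \), with \(\bar{\rho }=\sqrt{1-\rho ^{2}}\).

```python
import math
import random

def simulate_paths(T=1.0, n=1000, rho=-0.5, seed=0):
    """Euler-Maruyama discretisation of system (2.1).

    Toy coefficient choices (not from the paper): constant r, b, sigma
    and an Ornstein-Uhlenbeck factor g(y) = kappa*(ybar - y), a(y) = alpha.
    """
    r, b, sigma = 0.02, 0.07, 0.2
    kappa, ybar, alpha = 1.0, 0.0, 0.3
    rho_bar = math.sqrt(1.0 - rho ** 2)   # bar(rho) = sqrt(1 - rho^2)

    rng = random.Random(seed)
    dt, sdt = T / n, math.sqrt(T / n)
    B, S, Y = 1.0, 100.0, 0.0
    for _ in range(n):
        dW1 = sdt * rng.gauss(0.0, 1.0)
        dW2 = sdt * rng.gauss(0.0, 1.0)
        B += r * B * dt
        S += b * S * dt + sigma * S * dW1
        Y += kappa * (ybar - Y) * dt + alpha * (rho * dW1 + rho_bar * dW2)
    return B, S, Y
```

The bank account grows deterministically, so after \(n\) steps it should be close to \(e^{rT}\); the factor \(Y\) stays near its mean level.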

As already mentioned, the investor believes that his model is an imprecise description of the market. A common approach to describing model uncertainty over a finite horizon \(T\) is to assume that the probability measure is not precisely known and that the investor knows only a class of possible measures. In many papers (e.g. Cvitanic and Karatzas [2], Hernández and Schied [13]) it is assumed that this class equals

$$\begin{aligned} \mathcal {Q}_{T}:= \biggl \lbrace Q^{\eta }_{T} \sim P \; \vert \; \frac{dQ^{\eta }_{T}}{dP} = \mathcal {E} \biggl ( \int \eta _{1,t}dW_{t}^{1} + \eta _{2,t}dW_{t}^{2} \biggr )_{T}\; , \quad (\eta _{1},\eta _{2}) \in \mathcal {M} \biggr \rbrace ,\nonumber \\ \end{aligned}$$
(2.2)

where \(\mathcal {E}(\cdot )_{t}\) denotes the Doléans–Dade exponential and \(\mathcal {M}\) denotes the set of all bounded, progressively measurable processes \(\eta =(\eta _{1},\eta _{2})\) taking values in a fixed compact, convex set \(\Gamma \subset \mathbb {R}^{2}\). In our setting we follow this type of problem formulation.
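A quick numerical sanity check of the measure change (2.2), under the simplifying assumptions of a single Brownian motion and a constant generator \(\eta \): weighting by the Doléans–Dade exponential \(Z_T=\exp (\eta W_T - \tfrac{1}{2}\eta ^2 T)\) shifts the mean of \(W_T\) to \(\eta T\), as Girsanov's theorem predicts.

```python
import math
import random

def mean_under_Q(eta=0.3, T=1.0, n_samples=200_000, seed=1):
    """Monte Carlo check of the measure change (2.2) for a constant
    generator eta: with Z_T = E(int eta dW)_T = exp(eta*W_T - eta^2*T/2),
    Girsanov's theorem gives E_Q[W_T] = E_P[Z_T * W_T] = eta * T."""
    rng = random.Random(seed)
    sT = math.sqrt(T)
    acc = 0.0
    for _ in range(n_samples):
        w = sT * rng.gauss(0.0, 1.0)
        z = math.exp(eta * w - 0.5 * eta ** 2 * T)
        acc += z * w
    return acc / n_samples
```

With 200 000 samples the estimate should land close to \(\eta T = 0.3\).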

The dynamics of the investor's wealth process \((X^{\pi ,c}_{t}, 0 \le t < + \infty )\) is given by the stochastic differential equation

$$\begin{aligned} {\left\{ \begin{array}{ll} d X_{t} = (r(Y_{t} )X_{t} + \pi _t (b(Y_{t})-r(Y_t))) dt +\pi _{t} \sigma (Y_{t}) dW_{t}^{1}-c_{t}dt,\\ X_{0}=x, \end{array}\right. } \end{aligned}$$
(2.3)

where \(x\) denotes the current wealth of the investor, \(\pi \) can be interpreted as the capital invested in \(S_{t}\), and \(c\) is the consumption rate per unit of time.

2.1 Formulation of the Problem

We consider a hyperbolic absolute risk aversion (HARA) utility function \(U(x)=\frac{x^\gamma }{\gamma }\) with a parameter \(0<\gamma < 1\). The negative parameter case (\(\gamma <0\)) is discussed at the end of our paper. The objective is the overall discounted utility of consumption, i.e.

$$\begin{aligned} J^{\pi ,c,\eta }(x,y):= \lim _{t \rightarrow \infty } \mathbb {E}_{x,y}^{\eta ,t} \int _{0}^{t \wedge \tau _{x,y}} e ^{- w t}U(c_{t})\, dt\!=\! \lim _{t \rightarrow \infty } \mathbb {E}_{x,y}^{\eta ,t} \int _{0}^{t \wedge \tau _{x,y}} e ^{- w t}\frac{\bigl (c_{t}\bigr )^{\gamma }}{\gamma }\, dt,\nonumber \\ \end{aligned}$$
(2.4)

where \(w>0\) is a discount rate, \(\tau _{x,y}=\inf \{t>0: \; X_{t}^{\pi ,c,\eta } \le 0\}\), and \( \mathbb {E}_{x,y}^{\eta ,t}\) denotes the expectation with respect to the measure \(Q_{t}^{\eta }\). Note that we use the short notation \(\tau _{x,y}\), whereas the full form is \(\tau _{x,y}^{\pi ,c}\).
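As a degenerate special case of (2.4): if the consumption stream is constant, \(c_t \equiv c\), and ruin never occurs, the objective integrates in closed form to \(c^{\gamma }/(\gamma w)\). A Riemann-sum sketch (all parameter values are arbitrary):

```python
import math

def discounted_utility_constant(c=1.5, gamma=0.5, w=0.1, T=200.0, n=200_000):
    """For a constant consumption stream c_t = c (and no ruin), the
    objective (2.4) reduces to int_0^inf e^{-w t} c^gamma/gamma dt
    = c^gamma / (gamma * w).  Midpoint-rule check on [0, T]."""
    dt = T / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * dt          # midpoint rule
        total += math.exp(-w * t) * c ** gamma / gamma * dt
    return total
```

For the defaults the closed form gives \(1.5^{0.5}/(0.5\cdot 0.1)\approx 24.49\), and the truncation error beyond \(T=200\) is of order \(e^{-wT}\).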

Definition 2.1

A control (or a strategy) \((\pi ,c)=((\pi _{t},c_{t}), 0 \le t < +\infty )\) is admissible for a starting point \((x,y)\), \((\pi ,c) \in \mathcal {A}_{x,y}\), if it satisfies the following assumptions:

  1. (1)

    the process \((c_{t}, 0 \le t < +\infty )\) is nonnegative,

  2. (2)

    \((\pi ,c)\) is progressively measurable with respect to the filtration \((\mathcal {F}_{t}, 0 \le t < + \infty )\),

  3. (3)

    there exists a unique solution to (2.3) and

    $$\begin{aligned} \mathbb {E}_{x,y}^{\eta ,t}\sup _{0 \le s \le t \wedge \tau ^{\pi ,c}}\bigl ( X_{s}^{\pi ,c}\bigr )^{\gamma }< + \infty \end{aligned}$$

    for all \(t>0\), \(\eta \in \mathcal {M}\).

Our investor uses preferences of the Gilboa and Schmeidler [10] type to maximize his overall satisfaction. More precisely, he uses a minimax criterion and tries to maximize his objective in the worst-case model, i.e.

$$\begin{aligned} \text {maximize} \quad \inf _{\eta \in \mathcal {M}} J^{\pi ,c,\eta }(x,y) \end{aligned}$$
(2.5)

over the class of admissible strategies \(\mathcal {A}_{x,y}\).

Problem (2.5) is considered as a zero-sum stochastic differential game. The process \(\eta \) is the control of player number 1 (the “market”), while the strategy \((\pi ,c)\) is the control of player number 2 (the “investor”). We are looking for a saddle point \(((\pi ^{*},c^{*}),\eta ^{*}) \in \mathcal {A}_{x,y} \times \mathcal {M} \) and a value function \(V(x,y)\) such that

$$\begin{aligned} J^{\pi ,c,\eta ^{*}}(x,y) \leqslant J^{\pi ^{*},c^{*},\eta ^{*}} (x,y) \leqslant J^{\pi ^{*},c^{*},\eta } (x,y), \end{aligned}$$

and

$$\begin{aligned} V(x,y) = J^{\pi ^{*},c^{*},\eta ^{*}}(x,y). \end{aligned}$$

As usual, we seek optimal strategies in the feedback form \(((\pi (X_{t}, Y_{t}), c(X_{t}, Y_{t}), \eta (X_{t}, Y_{t})), 0 \le t < +\infty )\), where \(\pi (x,y)\), \(c(x,y)\), \(\eta (x,y)\) are Borel measurable functions and \(X_{t}\), \(Y_{t}\) are solutions to (2.3) and (2.1), respectively. Such controls are often called Markov controls and are denoted simply by \((\pi (x,y),c(x,y),\) \(\eta (x,y))\).

3 HJBI Equations and Saddle Point Derivation

We will use the standard HJB approach to solve the robust investment problem stated in the previous section. Let \(\mathcal {L}^{\pi ,c,\eta }\) denote the differential operator given by

$$\begin{aligned} \mathcal {L}^{\pi ,c,\eta } V (x,y)&=\frac{1}{2}a^{2}(y) V_{yy} + \frac{1}{2} \pi ^{2} \sigma ^{2}(y) V_{xx} + \rho \pi \sigma (y) a(y) V_{xy} \\&\quad + \bigl (\rho \eta _{1} + \bar{\rho } \eta _{2}\bigr )a(y) V_{y} + g(y) V_{y} + \pi \bigl (b(y)-r(y)\\&\quad +\eta _{1} \sigma (y)\bigr ) V_{x} + r(y)xV_{x} -cV_{x} . \end{aligned}$$

For simplicity, we omit \((x,y)\) variables in the functions’ notation. To establish a link between this operator and a saddle point of our initial problem, we need to prove a verification theorem. The following one seems to be new in the literature.

Theorem 3.1

Suppose there exists a function \(V \in \mathcal {C}^{2,2}((0,+\infty ) \times \mathbb {R}) \cap \mathcal {C} ([0,+\infty ) \times \mathbb {R})\), an admissible Markov control \((\pi ^{*}(x,y),c^{*}(x,y),\eta ^{*}(x,y))\) and constants \(D_1,D_2>0\) such that

$$\begin{aligned}&\mathcal {L}^{\pi ^{*}(x,y),c^{*}(x,y),\eta } V(x,y) -wV(x,y)+\frac{(c^{*}(x,y))^{\gamma }}{\gamma } \ge 0, \end{aligned}$$
(3.1)
$$\begin{aligned}&\mathcal {L}^{\pi ,c,\eta ^{*}(x,y)} V(x,y) -wV(x,y) +\frac{c^{\gamma }}{\gamma }\le 0, \end{aligned}$$
(3.2)
$$\begin{aligned}&\mathcal {L}^{\pi ^{*}(x,y), c^{*}(x,y), \eta ^{*}(x,y)} V(x,y)-wV(x,y) +\frac{(c^{*}(x,y))^{\gamma }}{\gamma } = 0, \end{aligned}$$
(3.3)
$$\begin{aligned}&D_1 x^{\gamma } \le \bigl (c^{*}(x,y)\bigr )^{\gamma }, \end{aligned}$$
(3.4)
$$\begin{aligned}&V(x,y) \le D_2 x^{\gamma } \end{aligned}$$
(3.5)

for all \(\eta \in \Gamma \), \((\pi ,c) \in \mathbb {R}\times (0,+\infty )\), \((x,y) \in (0,+\infty ) \times \mathbb {R}\) and

$$\begin{aligned}&\tau _{x,y}^{\pi ^*,c^*,\eta }= + \infty , \end{aligned}$$
(3.6)
$$\begin{aligned}&\mathbb {E}_{x,y}^{\eta ,t} \biggl ( \sup _{0 \le s \le t \wedge \tau ^{\pi ,c}} e^{-ws}|V(X_{s}^{\pi ,c},Y_{s})| \biggr ) < + \infty \end{aligned}$$
(3.7)

for all \( (x,y) \in (0,+\infty ) \times \mathbb {R}\), \(t \in [0,+\infty )\), \((\pi ,c) \in \mathcal {A}_{x,y}\), \(\eta \in \mathcal {M}\). Then

$$\begin{aligned} J^{\pi ,c,\eta ^{*}}(x,y)\le V(x,y) \le J^{\pi ^{*},c^{*},\eta }(x,y) \end{aligned}$$

for all \((\pi ,c) \in \mathcal {A}_{x,y}\), \(\eta \in \mathcal {M}\), and

$$\begin{aligned} V(x,y)= J^{\pi ^{*},c^{*},\eta ^{*}}(x,y). \end{aligned}$$

Proof

Assume that \( (x,y) \in (0,+\infty ) \times \mathbb {R}\) is fixed. Fix first \(\eta \in \mathcal {M}\) and consider the system (the \(Q^{\eta }\) dynamics of \((X_{t},Y_{t})\)):

$$\begin{aligned} {\left\{ \begin{array}{ll} dX_{t}=r(Y_{t})X_{t} dt + \pi ^{*}_{t}\bigl (b(Y_{t}) - r(Y_{t}) + \eta _{1,t} \sigma (Y_{t}) \bigr ) dt +\pi ^{*}_{t} \sigma (Y_{t}) dW_{t}^{1,\eta } -c^{*}_{t} dt,\\ dY_{t} =\bigl (g(Y_{t})+a(Y_{t}) (\eta _{1,t} \rho + \eta _{2,t} \bar{\rho }) \bigr ) dt + a(Y_{t}) (\rho dW_{t}^{1,\eta } + \bar{\rho }dW_{t}^{2,\eta }), \end{array}\right. }\quad \,\, \end{aligned}$$
(3.8)

where \(\pi ^{*}_{t}=\pi ^{*}(X_{t},Y_{t})\), \(c^{*}_{t}=c^{*}(X_{t},Y_{t})\). If we apply the Itô formula to (3.8) and the function \(e^{-wt}V(x,y)\), we get

$$\begin{aligned}&\mathbb {E}_{x,y}^{\eta ,t} \bigl (e^{-w t \wedge T_{n}} V(X_{t \wedge T_{n}},Y_{t \wedge T_{n}})\bigr ) \\&\quad = V(x,y)+\mathbb {E}_{x,y}^{\eta ,t} \int _{0}^{t \wedge T_{n}} e^{-ws} (\mathcal {L}^{\pi ^{*}_{s},c^{*}_{s},\eta _{s}} - w) V(X_{s},Y_{s}) ds + \mathbb {E}_{x,y}^{\eta ,t}\int _{0}^{t \wedge T_{n}} M_{s} dW_{s}^{\eta }, \end{aligned}$$

where \((T_{n}, n=1,2, \ldots )\), \((T_{n} \rightarrow +\infty )\) is a localizing sequence of stopping times such that

$$\begin{aligned} \mathbb {E}_{x,y}^{\eta ,t} \int _{0}^{t \wedge T_{n}} M_{s} dW_{s}^{\eta }=0. \end{aligned}$$

Applying (3.1) yields

$$\begin{aligned} \mathbb {E}_{x,y}^{\eta ,t} \bigl (e^{-w(t \wedge T_{n})}V(X_{t \wedge T_{n}},Y_{t \wedge T_{n}})\bigr ) \geqslant V(x,y)- \mathbb {E}_{x,y}^{\eta ,t} \int _{0}^{t \wedge T_{n}}e^{-ws} U(c^{*}_{s}) ds. \end{aligned}$$

By letting \((n \rightarrow \infty )\) and using (3.7) we get

$$\begin{aligned} \mathbb {E}_{x,y}^{\eta ,t} \bigl (e^{-wt }V(X_{t },Y_{t })\bigr ) + \mathbb {E}_{x,y}^{\eta ,t} \int _{0}^{t }e^{-ws} U(c^{*}_{s})\, ds\ge V(x,y). \end{aligned}$$
(3.9)

We consider two cases.

Case I

$$\begin{aligned} \lim _{t \rightarrow +\infty } \mathbb {E}_{x,y}^{\eta ,t} \int _{0}^{t }e^{-ws}( X_{s})^{\gamma } ds < +\infty . \end{aligned}$$

By (3.5),

$$\begin{aligned} \mathbb {E}_{x,y} \bigl (e^{-wt }V(X_{t },Y_{t })\bigr ) \le D_2 \mathbb {E}_{x,y} e^{-wt}( X_{t})^{\gamma }, \end{aligned}$$

which means that \(\mathbb {E}_{x,y} \bigl (e^{-wt }V(X_{t },Y_{t })\bigr )\) converges to 0.

Case II

$$\begin{aligned} \lim _{t \rightarrow +\infty } \mathbb {E}_{x,y}^{\eta ,t} \int _{0}^{t }e^{-ws}( X_{s})^{\gamma } ds = +\infty . \end{aligned}$$

Note that \(U(x)=\frac{x^{\gamma }}{\gamma }\) and (3.4) can be used to obtain

$$\begin{aligned} +\infty = \frac{D_{1}}{\gamma } \lim _{t \rightarrow +\infty }\mathbb {E}_{x,y}^{\eta ,t} \int _{0}^{t }e^{-ws}( X_{s})^{\gamma } ds \le \lim _{t \rightarrow +\infty } \mathbb {E}_{x,y}^{\eta ,t} \int _{0}^{t}e^{-ws} U(c^{*}_{s}) \,ds. \end{aligned}$$

In both scenarios (Cases I, II) we can deduce from (3.9) that

$$\begin{aligned} V(x,y) \le \lim _{t \rightarrow +\infty } \mathbb {E}_{x,y} \int _{0}^{t }e^{-ws}U(c^{*}_{s}) ds = J^{\pi ^{*},c^{*},\eta }(x,y). \end{aligned}$$

In addition (3.6) holds, which gives us the desired inequality

$$\begin{aligned} V(x,y) \le \lim _{t \rightarrow +\infty } \mathbb {E}_{x,y} ^{\eta ,t}\int _{0}^{\tau _{x,y} \wedge t }e^{-ws}U(c^{*}_{s}) ds = J^{\pi ^{*},c^{*},\eta }(x,y). \end{aligned}$$

If we use \(\eta ^{*}\) instead of \(\eta \) and use (3.3) then instead of (3.9) we have

$$\begin{aligned} \mathbb {E}_{x,y}^{\eta ^{*},t} \bigl (e^{-wt }V(X_{t },Y_{t })\bigr ) + \mathbb {E}_{x,y}^{\eta ^{*},t} \int _{0}^{t }e^{-ws} U(c^{*}_{s}) \,ds = V(x,y), \end{aligned}$$

which means that

$$\begin{aligned} D_{1} \lim _{t \rightarrow +\infty } \mathbb {E}_{x,y}^{\eta ^{*},t} \int _{0}^{t }e^{-ws}( X_{s})^{\gamma } ds \le \lim _{t \rightarrow +\infty } \mathbb {E}_{x,y}^{\eta ^{*},t} \int _{0}^{t }e^{-ws}U(c^{*}_{s}) ds < +\infty . \end{aligned}$$

Hence, Case I holds also for \(\eta =\eta ^{*}\), and consequently, after passing to the limit \(t \rightarrow +\infty \) and using (3.6), we conclude that

$$\begin{aligned} V(x,y) = \lim _{t \rightarrow +\infty } \mathbb {E}_{x,y}^{\eta ^{*},t} \int _{0}^{t }e^{-ws}U(c^{*}_{s}) ds = J^{\pi ^{*},c^{*},\eta ^{*}}(x,y). \end{aligned}$$

Next we choose \((\pi ,c) \in \mathcal {A}_{x,y}\) and apply the Itô formula to the system

$$\begin{aligned} {\left\{ \begin{array}{ll} dX_{t}=r(Y_{t})X_{t} dt + \pi _{t}\bigl (b(Y_{t})-r(Y_{t})+ \eta _{1,t}^{*} \sigma (Y_{t}) \bigr )dt +\pi _{t} \sigma (Y_{t}) dW_{t}^{1,\eta } - c_{t} dt, \\ dY_{t} =\bigl (g(Y_{t})+ a(Y_{t})\bigl (\eta _{1,t}^{*} \rho + \eta _{2,t}^{*} \bar{\rho }\bigr )\bigr ) dt + a(Y_{t})\bigl (\rho dW_{t}^{ 1,\eta } + \bar{\rho }dW_{t}^{2,\eta }\bigr ). \end{array}\right. } \end{aligned}$$

Repeating the method presented above and using (3.2) we get

$$\begin{aligned}&\mathbb {E}_{x,y}\bigl (e^{-w (t \wedge T_{n} \wedge \tau _{x,y})}V(X_{t \wedge T_{n} \wedge \tau _{x,y}}^{\pi ,c},Y_{t \wedge T_{n} \wedge \tau _{x,y}})\bigr ) \\&\quad \le V(x,y)- \mathbb {E}_{x,y} \int _{0}^{t \wedge T_{n}\wedge \tau _{x,y}}e^{-ws} U(c_{s}) \, ds. \end{aligned}$$

Since \(V\) is nonnegative, we get

$$\begin{aligned} V(x,y) \geqslant \lim _{t \rightarrow +\infty } \mathbb {E}_{x,y}^{\eta ^{*},t} \int _{0}^{t \wedge \tau _{x,y}}e^{-ws} U(c_{s}) \, ds =J^{\pi ,c,\eta ^{*}}(x,y). \end{aligned}$$

\(\square \)

Let us point out that conditions (3.1)–(3.3) hold if the upper and the lower Hamilton–Jacobi–Bellman–Isaacs equations are satisfied:

$$\begin{aligned}&\mathop {\max }\limits _{{\pi \in \mathbb {R}}}\mathop {\max }\limits _{c>0}\mathop {\min }\limits _{\eta \in \Gamma } \left( \mathcal {L}^{\pi ,c,\eta }V - wV + \frac{c^{\gamma }}{\gamma }\right) \\&\quad = \mathop {\min }\limits _{\eta \in \Gamma } \mathop {\max }\limits _{{\pi \in \mathbb {R}}} \mathop {\max }\limits _{c>0}\left( \mathcal {L}^{\pi ,c,\eta }V - wV + \frac{c^{\gamma }}{\gamma }\right) =0. \end{aligned}$$

To find the saddle point it is more convenient for us to use the upper Isaacs equation. Once we verify that it has a unique solution \(V\), it is necessary to prove that \(V\) is also a solution to the lower equation. To do that we use the following minimax theorem proved by Fan [3, Theorem 2].

Theorem 3.2

Let \(X\) be a compact Hausdorff space and \(Y\) an arbitrary set (not topologized). Let \(f\) be a real-valued function on \(X \times Y\) such that, for every \(\pi \in Y\), \(f(\pi , \eta )\) is lower semi-continuous in \(\eta \) on \(X\). If \(f\) is convex in \(\eta \) on \(X\) and concave in \(\pi \) on \(Y\), then

$$\begin{aligned} \min _{\eta \in X} \sup _{\pi \in Y} f(\pi ,\eta ) = \sup _{\pi \in Y} \min _{\eta \in X} f(\pi ,\eta ). \end{aligned}$$

3.1 Saddle Point Derivation

As announced, to find explicit forms of the saddle point \(((\pi ^{*}(x,\!y),c^{*}(x,\!y)), \eta ^{*}(x,\!y))\), we start with the upper Isaacs equation

$$\begin{aligned} \min _{\eta \in \Gamma } \max _{\pi \in \mathbb {R}} \max _{c>0} \left( \mathcal {L}^{\pi ,c,\eta }V - wV + \frac{c^{\gamma }}{\gamma }\right) =0, \end{aligned}$$

i.e.

$$\begin{aligned}&\frac{1}{2}a^{2}(y) V_{yy} + \min _{\eta \in \Gamma } \max _{\pi \in \mathbb {R}} \biggl (\frac{1}{2} \pi ^{2} \sigma ^{2}(y) V_{xx} + \rho \pi \sigma (y) a(y) V_{xy} \nonumber \\&\quad + \bigl (\rho \eta _{1} + \bar{\rho } \eta _{2}\bigr )a(y) V_{y} + g(y) V_{y} + \pi \bigl (b(y)-r(y) + \eta _{1} \sigma (y)\bigr ) V_{x} \biggr ) \nonumber \\&\quad +\; r(y)xV_{x} + \max _{c>0}\bigl (-cV_{x}+ \frac{c^{\gamma }}{\gamma }\bigr ) -wV =0. \end{aligned}$$
(3.10)

This type of reasoning is well known in the literature and therefore we do not present all the details. Note that if there exists \(V \in \mathcal {C}^{2,2}((0,\infty ) \times \mathbb {R})\) with \(V_{xx}<0\), then the maximum over \((\pi ,c)\) in (3.10) is well defined and attained at

$$\begin{aligned} \begin{aligned} \pi ^{*}(x,y,\eta )&=-\frac{\rho a(y)}{\sigma (y) } \frac{V_{xy}}{V_{xx}}- \frac{(b(y)-r(y) +\eta _{1} \sigma (y))}{ \sigma ^{2}(y)} \frac{V_{x}}{V_{xx}},\\ c^{*}(x,y)&=V_x^{\frac{1}{\gamma -1}}. \end{aligned} \end{aligned}$$
(3.11)

The HARA type utility motivates us to seek the solution of the form

$$\begin{aligned} V(x,y)=\frac{x^{\gamma }}{\gamma } F(y). \end{aligned}$$
(3.12)

Substituting the ansatz (3.12) into (3.11) and (3.10) yields

$$\begin{aligned} \begin{aligned} \pi ^{*}(x,y,\eta )&= \frac{\rho a(y) x}{(1-\gamma ) \sigma (y) } \frac{ F_{y}}{F} + \frac{(\lambda (y)+\eta _{1} )x}{(1-\gamma ) \sigma (y)}, \\ c^{*}(x,y)&=F^{\frac{1}{\gamma -1}} x, \end{aligned} \end{aligned}$$
(3.13)

where \(\displaystyle \lambda (y):=\frac{b(y)-r(y)}{\sigma (y)}\) and \(F\) should satisfy the following equation

$$\begin{aligned}&\frac{1}{2}a^2(y) F_{yy} +\frac{\rho ^{2} \gamma }{2(1-\gamma )} a^{2}(y) \frac{F_{y}^2}{F} + \bigg (g(y) + \frac{\rho \gamma }{1-\gamma } a(y) \lambda (y) \biggr )F_{y} \\&\begin{aligned}&\quad + \min _{(\eta _{1},\eta _{2} ) \in \Gamma } \biggl (\bar{\rho } \eta _{2}a(y) F_{y}+\frac{\rho }{(1-\gamma )} a(y) \eta _{1}F_{y} +\frac{\gamma }{2(1-\gamma )}\bigl ( \lambda (y)+\eta _{1} \bigr )^{2} F \biggr ) \nonumber \\&\quad + \gamma r(y)F + (1-\gamma ) F^{\frac{\gamma }{\gamma -1}} - wF =0. \nonumber \end{aligned} \end{aligned}$$
(3.14)
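The consumption feedback in (3.13) can be sanity-checked numerically: with \(V=x^{\gamma }F(y)/\gamma \) we have \(V_x = x^{\gamma -1}F\), and the maximiser of \(-cV_x + c^{\gamma }/\gamma \) over \(c>0\) is \(c^{*}=F^{1/(\gamma -1)}x\). A brute-force check at arbitrary sample values:

```python
def consumption_foc(x=2.0, F=1.3, gamma=0.5):
    """Check the feedback form (3.13): with V = x^gamma F / gamma,
    V_x = x^(gamma-1) F, and c -> -c*V_x + c^gamma/gamma is
    maximised at c* = F^(1/(gamma-1)) * x."""
    Vx = x ** (gamma - 1.0) * F
    f = lambda c: -c * Vx + c ** gamma / gamma
    c_opt = F ** (1.0 / (gamma - 1.0)) * x
    # brute-force grid search for the maximiser
    grid = [0.001 * k for k in range(1, 10_000)]
    c_grid = max(grid, key=f)
    return c_opt, c_grid
```

For the defaults, \(c^{*}=1.3^{-2}\cdot 2 = 2/1.69\), and the grid maximiser agrees up to the grid spacing.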

Assuming that there exists a smooth solution to (3.14) we can determine a saddle point candidate \((\pi ^{*}(x,y),c^{*}(x,y),\eta ^{*}(x,y))\) by finding a Borel measurable function \(\eta ^{*}(x,y)\) such that

$$\begin{aligned}&\min _{\eta \in \Gamma } \max _{\pi \in \mathbb {R}} \max _{c>0} \left( \mathcal {L}^{\pi ,c,\eta }V(x,y) -wV(x,y) + \frac{c^{\gamma }}{\gamma }\right) \\&\quad = \max _{\pi \in \mathbb {R}} \max _{c>0} \left( \mathcal {L}^{\pi ,c,\eta ^{*}(x,y)}V(x,y) - wV(x,y) + \frac{c^{\gamma }}{\gamma }\right) \end{aligned}$$

and Borel measurable functions \((\pi ^{*}(x,y),c^{*}(x,y))\) such that

$$\begin{aligned}&\max _{\pi \in \mathbb {R}} \max _{c>0} \min _{\eta \in \Gamma } \left( \mathcal {L}^{\pi ,c,\eta }V(x,y) -wV(x,y) + \frac{c^{\gamma }}{\gamma }\right) \\&\quad = \min _{\eta \in \Gamma } \left( \mathcal {L}^{\pi ^{*}(x,y),c^{*}(x,y),\eta }V(x,y) - wV(x,y) +\frac{( c^{*}(x,y))^{\gamma }}{\gamma }\right) . \end{aligned}$$

From calculations (3.10)–(3.14), it follows that \(\eta ^{*}(x,y)\) does not depend on \(x\) and is equal to the minimizer of (3.14). Moreover, \((\pi ^{*}(x,y),c^{*}(x,y))=(\pi ^{*}(x,y,\eta ^{*}_{1}(y)),c^{*}(x,y))\), where \((\pi ^{*}(x,y,\eta ),c^{*}(x,y))\) is given by (3.13). The last claim is a consequence of the following two facts:

  1. (1)

    the minimax equality holds:

    $$\begin{aligned}&\min _{\eta \in \Gamma } \max _{\pi \in \mathbb {R}} \max _{c>0} \left( \mathcal {L}^{\pi ,c,\eta }V(x,y) -wV(x,y) + \frac{c^{\gamma }}{\gamma }\right) \\&\quad =\max _{\pi \in \mathbb {R}} \max _{c>0} \min _{\eta \in \Gamma } \left( \mathcal {L}^{\pi ,c,\eta }V(x,y) - wV(x,y) + \frac{c^{\gamma }}{\gamma }\right) \\&\quad = \left( \mathcal {L}^{\pi ^{*}(x,y),c^{*}(x,y),\eta ^{*}(x,y)}V(x,y) - wV(x,y) + \frac{(c^{*}(x,y))^{\gamma }}{\gamma }\right) , \end{aligned}$$
  2. (2)

    \(\displaystyle \mathcal {L}^{\pi ^{*}(x,y),c,\eta ^{*}(x,y)}V(x,y)= \max _{\pi }\mathcal {L}^{\pi ,c,\eta ^{*}(x,y)}V(x,y) \) and therefore \((\pi ^{*}(x,y),c^{*}(x,y))\) is the unique solution to the equation

    $$\begin{aligned} \mathcal {L}^{\pi ,c,\eta ^{*}(x,y)}V(x,y)+\frac{c^{\gamma }}{\gamma }=\mathcal {L}^{\pi ^{*}(x,y),c^{*}(x,y),\eta ^{*}(x,y)}V(x,y) + \frac{\bigl (c^{*}(x,y)\bigr )^{\gamma }}{\gamma }. \end{aligned}$$

4 Smooth Solution to the Resulting PDE

In this section, we use stochastic methods to derive existence and uniqueness results for classical solutions to the differential equations which play a key role in the solution of our initial problem. Let us recall the equation once more:

$$\begin{aligned}&\frac{1}{2}a^2(y) F_{yy} +\frac{\rho ^{2} \gamma }{2(1-\gamma )} a^{2}(y) \frac{F_{y}^2}{F} + \bigg (g(y) + \frac{\rho \gamma }{1-\gamma } a(y) \lambda (y) \biggr )F_{y}\\&\quad \begin{aligned}&+ \min _{(\eta _{1},\eta _{2} ) \in \Gamma } \biggl (\bar{\rho } \eta _{2}a(y) F_{y}+\frac{\rho }{(1-\gamma )} a(y) \eta _{1}F_{y} +\frac{\gamma }{2(1-\gamma )}\bigl ( \lambda (y)+\eta _{1} \bigr )^{2} F \biggr ) \nonumber \\&+ \gamma r(y)F + (1-\gamma ) F^{\frac{\gamma }{\gamma -1}} -w F=0. \nonumber \end{aligned} \end{aligned}$$
(4.1)

Assume now that there exists a solution \(F\) to Eq. (4.1) such that \(\displaystyle \frac{a(y)F_{y}}{F}\) is bounded. In this case there exists \(R>0\) such that

$$\begin{aligned} \max _{q \in [-R,R]} \bigl (-F q^{2} + 2a(y)F_{y} q\bigr ) = a^{2}(y)\frac{F_{y}^2}{F}. \end{aligned}$$

Therefore, it is reasonable to consider equations of the form

$$\begin{aligned} \frac{1}{2}&a^{2}(y) F_{yy} +\max _{q \in [-R,R]} \bigl (-\theta F q^{2} + 2 \theta a(y) F_{y} q\bigr ) \\&+ \min _{\eta \in \Gamma } \big ([\hat{i}(y)+\hat{l}(\eta )a(y)]F_{y} + \hat{h}(y,\eta ) F \big ) + \max _{c>0}\big (-\gamma cF+c^{\gamma }\big ) -wF =0, \end{aligned}$$

where \(\theta >0\). This type of equation can be rewritten as

$$\begin{aligned} \frac{1}{2}a^{2}(y) F_{yy} +\max _{\delta \in D} \min _{\eta \in \Gamma } \bigl ([i(y)+l(\delta ,\eta )a(y)]F_{y} + h(y,\delta ,\eta ) F \bigr ) + \max _{c>0}\bigl (-\gamma cF+c^{\gamma }\bigr ) -wF =0, \end{aligned}$$
(4.2)

where \(D \subset \mathbb {R}^{n}\), \(\Gamma \subset \mathbb {R}^{k} \) are compact sets. To the best of our knowledge, results on classical solutions to (4.2) have not been available so far under the assumptions given here.

We make the following two assumptions.

Assumption 1

The functions \(a\), \(h\), \(i\), \(l\) are continuous, \(a^{2}(y)>\varepsilon >0\), and there exist \(L_{1}>0\), \(L_{2} \ge 0\) such that

$$\begin{aligned}&|h(y,\delta ,\eta )-h(\bar{y},\delta ,\eta )| + |i(y)-i(\bar{y})|\le L_1 |y- \bar{y}|, \nonumber \\&|h(y,\delta ,\eta )| \le L_{1}, \quad |i(y)+l(\delta ,\eta )a(y)| \le L_1(1+|y|),\end{aligned}$$
(4.3)
$$\begin{aligned}&(y-\bar{y})[i(y)+l(\delta ,\eta )a(y)-i(\bar{y})-l(\delta ,\eta )a(\bar{y})] + \frac{1}{2}|a(y)- a(\bar{y})|^2 \!\le \! L_2 |y-\bar{y}|^2. \nonumber \\ \end{aligned}$$
(4.4)

Remark

Assume for a moment that \(a\) is constant. If (4.3) is satisfied, then (4.4) also holds with \(L_{2}=L_{1}\). Nevertheless, in some models the constant \(L_{2}\) can be much smaller than \(L_{1}\); for instance, it is worth noticing the case \(i(y)+l(\delta ,\eta )a(y)=-y + \eta \), where \(L_2\) can be set to zero.
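The Remark's example can be checked directly: for the drift \(i(y)+l(\delta ,\eta )a(y) = -y+\eta \) with constant \(a\) (so the \(|a(y)-a(\bar{y})|^2\) term in (4.4) vanishes), the left-hand side equals \(-(y-\bar{y})^2 \le 0\), so \(L_2=0\) works.

```python
import random

def dissipativity_gap(y, ybar, eta):
    """Left-hand side of (4.4) for the drift -y + eta with constant a:
    (y - ybar) * [(-y + eta) - (-ybar + eta)] = -(y - ybar)**2 <= 0."""
    return (y - ybar) * ((-y + eta) - (-ybar + eta))
```

A spot check over random points confirms the identity up to rounding.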

Assumption 2

There exist Borel measurable functions \(\eta ^{*}(\delta ,y,u,p)\) and \(\delta ^{*}(y,u,p)\) such that

$$\begin{aligned} \eta ^{*}(\delta ,y,u,p) \in \arg \min _{\eta \in \Gamma } G(\delta ,\eta ,y,u,p), \qquad \delta ^{*}(y,u,p) \in \arg \max _{\delta \in D} \min _{\eta \in \Gamma } G(\delta ,\eta ,y,u,p), \end{aligned}$$

where

$$\begin{aligned} G(\delta ,\eta ,y,u,p)=[i(y)+l(\delta ,\eta )a(y)] p + h(y,\delta ,\eta ) u. \end{aligned}$$

Remark

By classical measurable selection results, all conditions of Assumption 2 are satisfied, for instance, when \(h(y,\delta ,\eta )=h_{1}(y,\delta )+ h_{2}(y,\eta )\), \(l(\delta ,\eta )=l_{1}(\delta )+ l_{2}(\eta )\), and \(h_{1}\), \(h_{2}\), \(l_{1}\), \(l_{2}\) are continuous functions.

To construct a candidate solution to our problem we use a sequence of solutions to finite time horizon problems of the form

$$\begin{aligned}&u_{t}+ \frac{1}{2}a^{2}(y) u_{yy} +\max _{\delta \in D} \min _{\eta \in \Gamma } \biggl ([i(y)+l(\delta ,\eta )a(y)]u_{y} + h(y,\delta ,\eta ) u\biggr ) \\&\quad +\max _{m_{1} \le c \le m_{2}}\bigl (- \gamma cu + c^{\gamma } \bigr )-wu =0, \quad \quad (y,t) \in \mathbb {R} \times [0,T),\nonumber \end{aligned}$$
(4.5)

with terminal condition \(u(y,T)=0\).
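To make the construction concrete, here is a minimal explicit finite-difference sketch of (4.5) with terminal condition \(u(y,T)=0\), marching backward in time. All coefficients and control sets below are illustrative stand-ins (the paper's assumptions only require continuity, ellipticity and the bounds of Assumption 1), and the drift term is upwinded so that the scheme stays monotone:

```python
def solve_pde_45(T=1.0, L=2.0, ny=41, nt=400,
                 gamma=0.5, w=0.2, m1=0.1, m2=2.0):
    """Backward-in-time explicit scheme for Eq. (4.5) with u(y,T)=0.

    Illustrative stand-in coefficients (not the paper's):
    a = 0.3, i(y) = -y, l(d,e) = d + e,
    h(y,d,e) = 0.1*d - 0.1*e - 0.1*y**2,
    with finite control sets D and Gamma.
    """
    D, Gamma = [-0.5, 0.0, 0.5], [-0.25, 0.25]
    cs = [m1 + (m2 - m1) * j / 20 for j in range(21)]
    a = 0.3
    dy, dt = 2 * L / (ny - 1), T / nt
    ys = [-L + dy * j for j in range(ny)]
    u = [0.0] * ny                                   # terminal condition
    for _ in range(nt):
        new = [0.0] * ny
        for j in range(ny):
            jm, jp = max(j - 1, 0), min(j + 1, ny - 1)
            uyy = (u[jp] - 2 * u[j] + u[jm]) / dy ** 2

            def flow(d, e):
                mu = -ys[j] + (d + e) * a            # drift i(y)+l(d,e)a(y)
                uy = ((u[jp] - u[j]) / dy if mu > 0
                      else (u[j] - u[jm]) / dy)      # upwind difference
                return mu * uy + (0.1 * d - 0.1 * e - 0.1 * ys[j] ** 2) * u[j]

            ham = max(min(flow(d, e) for e in Gamma) for d in D)
            cons = max(-gamma * c * u[j] + c ** gamma for c in cs)
            new[j] = u[j] + dt * (0.5 * a ** 2 * uyy + ham + cons - w * u[j])
        u = new
    return ys, u
```

Consistently with Lemma 4.1 below, the numerical solution comes out bounded and strictly positive (the consumption source term \(\max _c(-\gamma cu+c^{\gamma })\) is positive as long as \(u\) stays moderate).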

Lemma 4.1

Suppose that \(h\) and \(i\) are continuous, all conditions of Assumption 1 and Assumption 2 are satisfied and there exists \(u\)—a polynomial growth solution to (4.5). Then \(u\) is a unique polynomial growth solution to (4.5), which in addition is bounded and strictly positive. Moreover it admits a stochastic representation of the form

$$\begin{aligned} u(y,t)= \sup _{\delta \in \mathcal {D}, \; \; c \in \mathcal {C}_{m_{1},m_{2}}} \inf _{\eta \in \mathcal {N}} \mathbb {E}_{y,t}^{l(\delta ,\eta (\delta ))} \biggl ( \int _{t}^{T}e^{\int _{t}^{s}(h(Y_{k},\delta _{k},\eta (\delta _{k}))- \gamma c_{k}-w)\, dk} c_{s}^{\gamma } ds \biggr ), \end{aligned}$$
(4.6)

where \(dY_{t}=[i(Y_{t})+l(\delta _{t},\eta (\delta _{t}))a(Y_{t})]\,dt + a(Y_{t}) dW_{t}^{l(\delta ,\eta (\delta ))}\), \(\mathcal {D}\) is the class of all progressively measurable processes taking values in \(D\), \(\mathcal {N}\) is the family of all functions \(\eta :D \times [0,+\infty ) \times \Omega \rightarrow \Gamma \) with the property that for all \(\delta \in \mathcal {D}\) the process \((\eta (\delta _{t}):=\eta (\delta _{t},t,\cdot ),\; 0 \le t < +\infty )\) is progressively measurable, and \(\mathcal {C}_{m_{1},m_{2}}\) denotes the class of all continuous processes \((c_{t},\; 0 \le t < +\infty )\) such that \(m_{1} \le c_{t} \le m_{2}\).

Proof

Under the conditions of Assumption 2, for all functions \(\eta : D \rightarrow \Gamma \) and for all \(\delta \in D\), \((y,u,p) \in \mathbb {R}^{3}\), we have

$$\begin{aligned} G(\delta ,\eta ^{*}(\delta ,y,u,p),y,u,p)&\le \max _{\delta \in D} \min _{\eta \in \Gamma } G(\delta ,\eta ,y,u,p) \\&\le G(\delta ^{*}(y,u,p),\eta (\delta ^{*}(y,u,p)),y,u,p). \end{aligned}$$

In addition, let \(c^{*}(y)\) be a Borel measurable function which maximizes the consumption term in (4.5). Then for all \(\eta \in \mathcal {N}\), \(\delta \in D\), \(c \in [m_{1},m_{2}]\), \(y \in \mathbb {R}\), we get

$$\begin{aligned} \mathcal {K}^{\delta ,c,\eta ^{*}(\delta ,y,u,u_{y})}u(y,t) \le 0 \le \mathcal {K}^{\delta ^{*}(y,u,u_{y}),c^{*}(y),\eta (\delta ^{*}(y,u,u_{y}))}u(y,t), \end{aligned}$$

where

$$\begin{aligned} \mathcal {K}^{\delta ,c,\eta }u(y,t)&= u_{t}+ \frac{1}{2}a^{2}(y) u_{yy}+ [i(y)+l(\delta ,\eta )a(y)]u_{y} \\&+ h(y,\delta ,\eta ) u +\bigl (- \gamma cu + c^{\gamma } \bigr )-wu. \end{aligned}$$

Recall that the solution \(u\) satisfies a polynomial growth condition and all conditions of Assumption 1 are satisfied, which guarantees that for all \(\eta \in \mathcal {N}\) and \(\delta \in \mathcal {D}\)

$$\begin{aligned} \mathbb {E}_{y,t}^{l(\delta ,\eta (\delta ))} \sup _{t \le s \le T} |u(Y_{s})| < + \infty , \end{aligned}$$

(for the proof see Appendix D of Fleming and Soner [7]). Therefore, we can use the standard verification argument, which leads us to the conclusion

$$\begin{aligned}&\mathbb {E}_{y,t}^{l(\delta ,\eta ^{*}(\delta ))} \biggl (\int _{t}^{T}e^{\int _{t}^{s}(h(Y_{k},\delta _{k},\eta ^{*}(\delta _{k}))-\gamma c_{k}-w)\, dk} c_{s}^{\gamma } ds \biggr ) \\&\quad \le u(y,t) \le \mathbb {E}_{y,t}^{l(\delta ^{*},\eta (\delta ^{*}))} \biggl ( \int _{t}^{T}e^{\int _{t}^{s}(h(Y_{k},\delta _{k}^{*},\eta (\delta _{k}^{*}))- \gamma c_{k}-w)\, dk} c_{s}^{\gamma } ds \biggr ), \end{aligned}$$

which is true for all \(\delta \in \mathcal {D}\), \(\eta \in \mathcal {N}\), \(c \in \widehat{\mathcal {C}}_{m_{1},m_{2}}\). Here \( \widehat{\mathcal {C}}_{m_{1},m_{2}}\) denotes the class of all progressively measurable processes taking values in the interval \([m_{1},m_{2}]\), \(\eta ^{*}(\delta )\) is an abbreviation of \(\eta ^{*}(\delta ,Y,u(Y),u_{y}(Y))\), and \(\delta ^{*}\) is an abbreviation of \(\delta ^{*}(Y,u(Y),u_{y}(Y)) \). For more details about the verification reasoning used here, see for example the proof of Theorem 6.1 in Zawisza [22].

This implies that

$$\begin{aligned}&\inf _{\eta \in \mathcal {N}} \sup _{\delta \in \mathcal {D}, \; \; c \in \widehat{\mathcal {C}}_{m_{1},m_{2}}} \mathbb {E}_{y,t}^{l(\delta ,\eta (\delta ))} \biggl (\int _{t}^{T}e^{\int _{t}^{s}(h(Y_{k},\delta _{k},\eta (\delta _{k}))- \gamma c_{k}-w)\, dk} c_{s}^{\gamma } ds \biggr )\\&\quad \le u(y,t)\le \sup _{\delta \in \mathcal {D}, \; \; c \in \widehat{\mathcal {C}}_{m_{1},m_{2}}} \inf _{\eta \in \mathcal {N}} \mathbb {E}_{y,t}^{l(\delta ,\eta (\delta ))} \biggl (\int _{t}^{T}e^{\int _{t}^{s}(h(Y_{k},\delta _{k},\eta (\delta _{k}))- \gamma c_{k}-w)\, dk} c_{s}^{\gamma } ds \biggr ). \end{aligned}$$

Since the opposite inequality

$$\begin{aligned}&\sup _{\delta \in \mathcal {D}, \; \; c \in \widehat{\mathcal {C}}_{m_{1},m_{2}}}\inf _{\eta \in \mathcal {N}} \mathbb {E}_{y,t}^{l(\delta ,\eta (\delta ))} \biggl (\int _{t}^{T}e^{\int _{t}^{s}(h(Y_{k},\delta _{k},\eta (\delta _{k}))- \gamma c_{k}-w)\, dk} c_{s}^{\gamma } ds \biggr ) \\&\quad \le \inf _{\eta \in \mathcal {N}} \sup _{\delta \in \mathcal {D}, \; \; c \in \widehat{\mathcal {C}}_{m_{1},m_{2}}} \ \mathbb {E}_{y,t}^{l(\delta ,\eta (\delta ))} \biggl (\int _{t}^{T}e^{\int _{t}^{s}(h(Y_{k},\delta _{k},\eta (\delta _{k}))- \gamma c_{k}-w)\, dk} c_{s}^{\gamma } ds \biggr ) \end{aligned}$$

is always satisfied, we get

$$\begin{aligned} u(y,t)= \sup _{\delta \in \mathcal {D}, \; \; c \in \widehat{\mathcal {C}}_{m_{1},m_{2}}} \inf _{\eta \in \mathcal {N}} \mathbb {E}_{y,t}^{l(\delta ,\eta (\delta ))} \biggl ( \int _{t}^{T}e^{\int _{t}^{s}(h(Y_{k},\delta _{k},\eta (\delta _{k}))- \gamma c_{k}-w)\, dk} c_{s}^{\gamma } ds \biggr ). \end{aligned}$$
(4.7)

This representation confirms the uniqueness, boundedness and strict positivity of \(u(y,t)\).

Finally, note that instead of the class \(\widehat{\mathcal {C}}_{m_{1},m_{2}}\) in (4.7), we can limit ourselves to the class \(\mathcal {C}_{m_{1},m_{2}}\), since, when \(u\) is strictly positive, the maximum with respect to \(c\) in (4.5) is achieved at

$$\begin{aligned} c^{*}(y)={\left\{ \begin{array}{ll} m_{1}, &{}\; \text { if } \; u^{\frac{1}{\gamma -1}}(y) \le m_{1},\\ u^{\frac{1}{\gamma -1}}(y) &{}\; \text { if } \;m_{1} \le u^{\frac{1}{\gamma -1}}(y) \le m_{2},\\ m_{2} \; &{}\;\text { if } \; u^{\frac{1}{\gamma -1}}(y) \ge m_{2}, \end{array}\right. } \end{aligned}$$

which is a continuous function. \(\square \)
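The clipped maximizer above is easy to check numerically; a minimal sketch, assuming \(\gamma \in (0,1)\) (so the objective is concave) and illustrative values of \(m_{1}\), \(m_{2}\):

```python
import numpy as np

# Numerical check of the clipped maximizer c*(y) of
#   g(c) = -gamma*c*u + c**gamma  over [m1, m2],
# assuming gamma in (0,1); the first-order condition
# -gamma*u + gamma*c**(gamma-1) = 0 gives the unconstrained
# maximizer u**(1/(gamma-1)), which is then projected onto [m1, m2].
def c_star(u, gamma, m1, m2):
    return float(np.clip(u ** (1.0 / (gamma - 1.0)), m1, m2))

def g(c, u, gamma):
    return -gamma * c * u + c ** gamma

gamma, m1, m2 = 0.5, 0.2, 5.0        # illustrative values
for u in [0.1, 1.0, 3.0, 50.0]:
    cs = c_star(u, gamma, m1, m2)
    grid = np.linspace(m1, m2, 100001)
    # brute-force comparison against a fine grid on [m1, m2]
    assert g(cs, u, gamma) >= g(grid, u, gamma).max() - 1e-8
```

The three branches of the formula correspond to the projection of \(u^{1/(\gamma-1)}\) onto the two endpoints and the interior of \([m_{1},m_{2}]\).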

It is also possible to rewrite Eq. (4.5) in the following form

$$\begin{aligned} u_{t}+ \frac{1}{2}a^{2}(y) u_{yy} +H(y,u,u_{y})-wu=0, \end{aligned}$$

where

$$\begin{aligned} H(y,u,p)= \max _{\delta \in D} \min _{\eta \in \Gamma } \biggl ([i(y)+l(\delta ,\eta )a(y)] p + h(y,\delta ,\eta ) u\biggr ) + \max _{m_{1} \le c \le m_{2}}\biggl (-\gamma cu + c^{\gamma }\biggr ). \end{aligned}$$

Lemma 4.2

If Assumption 1 is satisfied then \(H\) is continuous and there exists \(K>0\) such that

$$\begin{aligned}&|H(y,0,0)| \le K, \nonumber \\&|H(y,u,p)-H(y,\bar{u},p)| \le K |u-\bar{u}|, \\&|H(y,u,p)-H(\bar{y},u,p)| \le K (1+|p|)|y-\bar{y}|, \nonumber \\&|H(y,u,p)-H(y,u,\bar{p})| \le K (1+|y|)|p-\bar{p}|.\nonumber \end{aligned}$$
(4.8)

Proof

It is sufficient to note that if \(D \subset \mathbb {R}^{n}\), \(\Gamma \subset \mathbb {R}^{k}\) and \(f\) is a continuous function then

$$\begin{aligned} |\max _{\delta \in D}\min _{\eta \in \Gamma } f(z,\delta ,\eta )-\max _{\delta \in D}\min _{\eta \in \Gamma } f(\bar{z},\delta ,\eta )| \le \max _{\delta \in D} \max _{\eta \in \Gamma } |f(z,\delta ,\eta )-f(\bar{z},\delta ,\eta )|. \end{aligned}$$

\(\square \)
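The elementary max–min inequality used in this proof can be verified on discrete surrogates; a small numerical sketch, with random tables standing in for \(f(z,\cdot,\cdot)\) and \(f(\bar{z},\cdot,\cdot)\) on finite sets \(D\), \(\Gamma\):

```python
import numpy as np

# Discrete check of the inequality from the proof of Lemma 4.2:
#   |max_d min_e f(z,d,e) - max_d min_e f(zbar,d,e)|
#     <= max_d max_e |f(z,d,e) - f(zbar,d,e)|,
# with the two values of z represented by two tables on finite sets.
def maxmin(f):
    return f.min(axis=1).max()

rng = np.random.default_rng(0)
for _ in range(1000):
    f = rng.normal(size=(7, 5))      # |D| = 7, |Gamma| = 5
    f_bar = rng.normal(size=(7, 5))
    lhs = abs(maxmin(f) - maxmin(f_bar))
    rhs = np.abs(f - f_bar).max()
    assert lhs <= rhs + 1e-12
```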

Theorem 4.3

Suppose that for each \(T>0\) there exists a unique bounded solution to (4.5), all conditions of Assumptions 1 and 2 are satisfied with \(L_{1}>0\), \(L_{2}\ge 0\) and \(w > \sup _{\eta ,\delta ,y} h(y,\delta ,\eta )+ L_{2}\). Then there exists a unique bounded solution to

$$\begin{aligned}&\frac{1}{2}a^{2}(y)u_{yy} +\max _{\delta \in D} \min _{\eta \in \Gamma } \biggl ([i(y)+l(\delta ,\eta )a(y)] u_{y} + h(y,\delta ,\eta ) u\biggr )\nonumber \\&\quad + \max _{m_{1} \le c \le m_{2}}\bigl (-\gamma cu+c^{\gamma }\bigr )-wu =0, \end{aligned}$$
(4.9)

which, in addition, is bounded together with the \(y\)-derivative and bounded away from zero.

Proof

The solution will be constructed by taking the limit in a sequence of solutions to finite horizon problems (4.5).

Suppose that \(T>0\) is fixed and let \(u\) be the solution to (4.5). To use the Arzelà–Ascoli Lemma we need to prove uniform estimates for \(u\) and all its derivatives. We can use a stochastic control representation to obtain

$$\begin{aligned} u(y,t) = \sup _{\delta \in \mathcal {D}, \; \; c \in \mathcal {C}_{m_{1},m_{2}}} \inf _{\eta \in \mathcal {N}} \mathbb {E}_{y,t}^{l(\delta ,\eta (\delta ))} \biggl ( \int _{t}^{T}e^{\int _{t}^{s}(h(Y_{k},\delta _{k},\eta (\delta _{k}))- \gamma c_{k}-w)\, dk} c_{s}^{\gamma } ds \biggr ). \end{aligned}$$

Since \(h\) is bounded and \(w > \sup _{\eta ,\delta ,y} h(y,\delta ,\eta )\), there exists \(\alpha >0\) such that

$$\begin{aligned} |u(y,t)|&\le \sup _{\delta \in \mathcal {D}, \; \; c \in \mathcal {C}_{m_{1},m_{2}}} \inf _{\eta \in \mathcal {N}} \mathbb {E}_{y,t}^{l(\delta ,\eta (\delta ))} \biggl ( \int _{t}^{T} e^{\int _{t}^{s} -\alpha -\gamma c_{k}\, dk} c_{s}^{\gamma } ds \biggr )\\&\le m_{2}^{\gamma } \int _{t}^{T} e^{-\alpha (s-t)} ds \le \frac{m_{2}^{\gamma }}{\alpha }. \end{aligned}$$

A bound for \(u_y\) will be obtained by estimating the Lipschitz constant. Note that if \(w > \sup _{\eta ,y} h(y,\eta )+ L_{2}\), then \(w_{1}:=w-L_{2} > \sup _{\eta ,y} h(y,\eta )\). Moreover, we will use the fact that \(|e^{x}-e^{y}| \le |x-y|\) for \(x,y \le 0\). For notational convenience we will write \(\mathbb {E} ^{l(\delta ,\eta (\delta ))}f(Y_{t}(y,s))\) instead of \(\mathbb {E}_{y,s}^{l(\delta ,\eta (\delta ))}f(Y_{t})\).

$$\begin{aligned} |u(y,t)-u(\bar{y},t)|&\le \sup _{ c \in \mathcal {C}_{m_{1},m_{2}}} \sup _{\eta \in \mathcal {N},\delta \in \mathcal {D}} \mathbb {E}^{l(\delta ,\eta (\delta ))} \int _{t}^{T} c_{s}^{\gamma }e^{-\int _{t}^{s} (\gamma c_{k}+L_{2})\, dk} \cdot \\&\qquad \cdot \biggl | e^{\int _{t}^{s} (h(Y_{k}(y,t),\delta _{k},\eta (\delta _{k}))-w_{1})\, dk} - e^{\int _{t}^{s} (h(Y_{k}(\bar{y},t),\delta _{k},\eta (\delta _{k}))-w_{1})\, dk} \biggr | ds \\&\le \sup _{ c \in \mathcal {C}_{m_{1},m_{2}}}\sup _{\eta \in \mathcal {N},\delta \in \mathcal {D}} \mathbb {E} ^{l(\delta ,\eta (\delta ))} \int _{t}^{T} c_{s}^{\gamma }e^{-\int _{t}^{s} (\gamma c_{k}+L_{2})\, dk} \\&\qquad \cdot \int _{t}^{s}| h(Y_{k}(y,t),\delta _{k},\eta (\delta _{k}))\, - h(Y_{k}(\bar{y},t),\delta _{k},\eta (\delta _{k}))|\, dk \, ds \\&\le L_{1} \sup _{ c \in \mathcal {C}_{m_{1},m_{2}}} \sup _{\eta \in \mathcal {N},\delta \in \mathcal {D}} \mathbb {E} ^{l(\delta ,\eta (\delta ))} \int _{t}^{T} c_{s}^{\gamma }e^{-\int _{t}^{s} \gamma c_{k}\, dk}\\&\qquad \cdot \int _{t}^{s}e^{-L_{2}(s-t)}| Y_{k}(y,t) - Y_{k}(\bar{y},t) |\, dk \, ds \\&\le L_{1} m_{2}^{\gamma } \sup _{\eta \in \mathcal {N},\delta \in \mathcal {D}} \mathbb {E}^{l(\delta ,\eta (\delta ))} \int _{t}^{T} \\&\qquad \cdot \int _{t}^{s}e^{-(L_{2}+\gamma m_{1})(s-t)}| Y_{k}(y,t) - Y_{k}(\bar{y},t) |\, dk \, ds. \end{aligned}$$

Using the Itô formula we have

$$\begin{aligned}&\mathbb {E}^{l(\delta ,\eta (\delta ))} (Y_{k}(y,t) - Y_{k}(\bar{y},t))^{2} = (y - \bar{y})^{2} + \int _{t}^{k} 2\mathbb {E}^{l(\delta ,\eta (\delta ))} (Y_{l}(y,t) - Y_{l}(\bar{y},t))\\&\qquad \cdot [i(Y_{l}(y,t))+l(\delta _{t},\eta (\delta _{t}))a(Y_{l}(y,t))-i(Y_{l}(\bar{y},t))-l(\delta _{t},\eta (\delta _{t}))a(Y_{l}(\bar{y},t))] \, dl \\&\qquad + \int _{t}^{k} \mathbb {E}^{l(\delta ,\eta (\delta ))}(a(Y_{l}(y,t))- a(Y_{l}(\bar{y},t)))^{2}\, dl. \end{aligned}$$

Using (4.4) we have

$$\begin{aligned} \mathbb {E} ^{l(\delta ,\eta (\delta ))} (Y_{k}(y,t) - Y_{k}(\bar{y},t))^{2} \le (y - \bar{y})^{2} + 2L_{2} \int _{t}^{k} \mathbb {E}^{l(\delta ,\eta (\delta ))} (Y_{l}(y,t) - Y_{l}(\bar{y},t))^{2} dl. \end{aligned}$$

Gronwall’s lemma yields

$$\begin{aligned} \mathbb {E}^{l(\delta ,\eta (\delta ))} (Y_{k}(y,s) - Y_{k}(\bar{y},s))^{2} \le (y-\bar{y})^{2} e^{2L_{2}(k-s)}. \end{aligned}$$
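The Gronwall bound can be illustrated by Monte Carlo; a sketch with a toy factor \(dY = \sin(Y)\,dt + a\,dW\), whose drift is one-sided Lipschitz with \(L_{2}=1\) and whose diffusion is constant, both paths sharing the same Brownian motion (all parameter values illustrative):

```python
import numpy as np

# Monte Carlo illustration of the Gronwall bound
#   E (Y_k(y) - Y_k(ybar))^2 <= (y - ybar)^2 * exp(2*L2*(k - t))
# for a toy factor dY = sin(Y) dt + a dW (one-sided Lipschitz drift,
# L2 = 1, constant diffusion, shared noise across the two paths).
rng = np.random.default_rng(1)
a, L2 = 0.4, 1.0
dt, n_steps, n_paths = 1e-3, 1000, 200
y0, y0_bar = 0.7, 0.2

Y = np.full(n_paths, y0)
Y_bar = np.full(n_paths, y0_bar)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)   # shared Brownian increments
    Y = Y + np.sin(Y) * dt + a * dW
    Y_bar = Y_bar + np.sin(Y_bar) * dt + a * dW

t = n_steps * dt
lhs = np.mean((Y - Y_bar) ** 2)
rhs = (y0 - y0_bar) ** 2 * np.exp(2 * L2 * t)
assert lhs <= rhs * (1.0 + 1e-6)
```

Because the diffusion terms cancel, the pathwise difference obeys the Gronwall estimate exactly, up to the Euler discretization.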

We should consider now two cases:

Case I \( L_{2} >0 \).

$$\begin{aligned} |u(y,t)-u(\bar{y},t)|&\le L_{1} m_{2}^{\gamma } |y-\bar{y}| \int _{t}^{T}e^{\int _{t}^{s} (-L_{2} - \gamma m_{1}) \, dk} \int _{t}^{s} e^{L_{2}(k-t)}dk\, ds \\&\le \frac{L_{1} m_{2}^{\gamma }}{L_{2}} |y-\bar{y}| \int _{t}^{T}e^{ -\gamma m_{1} (s-t)}\, ds \le \frac{L_{1} m_{2}^{\gamma }}{\gamma m_{1} L_{2}} |y-\bar{y}|. \nonumber \end{aligned}$$
(4.10)

Case II \( L_{2}=0\)

$$\begin{aligned} |u(y,t)-u(\bar{y},t)|&\le L_{1} m_{2}^{\gamma } |y-\bar{y}| \int _{t}^{T}e^{\int _{t}^{s} ( - \gamma m_{1}) \, dk} (s-t) ds \\&= L_{1} m_{2}^{\gamma } |y-\bar{y}| \int _{0}^{T-t}e^{ - \gamma m_{1} k } k dk \nonumber \\&= L_{1} m_{2}^{\gamma } |y-\bar{y}| \left( \frac{1- e^{ - \gamma m_{1} (T-t) }}{\gamma ^{2} m_{1}^{2}} - \frac{(T-t) e^{ - \gamma m_{1} (T-t) }}{\gamma m_{1}} \right) . \nonumber \end{aligned}$$
(4.11)
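The integral \(\int _{0}^{T-t} k\,e^{-\gamma m_{1} k}\,dk\) behind Case II can be checked by quadrature against its integration-by-parts closed form; a sketch with \(b\) standing in for \(\gamma m_{1}\) and \(S\) for \(T-t\) (values illustrative):

```python
import numpy as np

# Numerical check of the closed form used in Case II:
#   int_0^S k*exp(-b*k) dk = (1 - exp(-b*S))/b**2 - S*exp(-b*S)/b,
# obtained by integration by parts; it is bounded by 1/b**2 uniformly in S.
def closed_form(b, S):
    return (1.0 - np.exp(-b * S)) / b ** 2 - S * np.exp(-b * S) / b

for b in [0.3, 1.0, 2.5]:
    for S in [0.5, 2.0, 10.0]:
        n = 200000
        dk = S / n
        k = (np.arange(n) + 0.5) * dk          # midpoint rule
        numeric = np.sum(k * np.exp(-b * k)) * dk
        assert abs(numeric - closed_form(b, S)) < 1e-6
        assert 0.0 <= closed_form(b, S) <= 1.0 / b ** 2
```

The uniform bound \(1/b^{2}\) is what makes the Lipschitz estimate independent of the horizon.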

Note that the above estimates do not depend on the time horizon \(T\) (the last one for large values of \(T-t\)). We now consider the function \(v(y,t)=u^{T}(y,T-t)\), where \(u^{T}\) denotes the solution to equation (4.5) with the terminal condition given at time \(T\). Then \(v\) is a solution to

$$\begin{aligned} v_{t}-\frac{1}{2} a^{2}(y) v_{yy} - H(y,v,v_{y})+wv=0 \end{aligned}$$

with the initial condition \(v(y,0)=0\). From the uniqueness property we get that

$$\begin{aligned} v(y,t)\!=\!u^{t}(y,0)\!=\! \sup _{\delta \in \mathcal {D}, \; c \in \mathcal {C}_{m_{1},m_{2}}} \inf _{\eta \in \mathcal {N}} \mathbb {E}_{y,0}^{l(\delta ,\eta (\delta ))} \biggl ( \int _{0}^{t}e^{\int _{0}^{s}(h(Y_{k},\delta _{k},\eta (\delta _{k}))- \gamma c_{k}-w)\, dk} c_{s}^{\gamma } ds \biggr ). \end{aligned}$$

This gives us an estimate on \(v_{t}\). Namely, let \(t \ge 0\) be fixed. Observe that for \(\xi >0\)

$$\begin{aligned} |v(y,t+\xi )-v(y,t)| \le \sup _{\delta \in \mathcal {D}, \; c \in \mathcal {C}_{m_{1},m_{2}}} \sup _{\eta \in \mathcal {N}} \biggl | I(t+\xi ,y,\eta ,c) -I(t,y,\eta ,c) \biggr |, \end{aligned}$$

where

$$\begin{aligned} I(t,y,\eta ,c):= \mathbb {E}_{y,0} ^{l(\delta ,\eta (\delta ))} \biggl ( \int _{0}^{t}e^{\int _{0}^{s}(h(Y_{k},\delta _{k},\eta (\delta _{k}))- \gamma c_{k}-w)\, dk} c_{s}^{\gamma } ds \biggr ). \end{aligned}$$

Note that

$$\begin{aligned} \frac{\partial I}{\partial t} (t,y,\eta ,c)= \mathbb {E}_{y,0}^{l(\delta ,\eta (\delta ))} e^{\int _{0}^{t}(h(Y_{k},\delta _{k},\eta (\delta _{k}))- \gamma c_{k}-w)\, dk} c_{t}^{\gamma } . \end{aligned}$$

We assumed that \(w >\sup _{y,\delta ,\eta } h(y,\delta ,\eta )\), hence there exists \(\beta >0\) such that for \(\xi >0\) we have

$$\begin{aligned} \left| \frac{\partial I}{\partial t} (t+\xi ,y,\eta ,c)\right| \le m_{2}^{\gamma } e^{-\beta t} \end{aligned}$$

and finally

$$\begin{aligned} \biggl | I(t+\xi ,y,\eta ,c) -I(t,y,\eta ,c) \biggr | \le m_{2}^{\gamma } e^{-\beta t} |\xi |. \end{aligned}$$

The above inequality ensures that \(v_{t}(y,t)\) is uniformly bounded and that \(v_{t}(y,t)\) converges to \(0\) as \(t \rightarrow \infty \), uniformly with respect to \(y\).

We have obtained so far uniform bounds for \(v\), \(v_{t}\), \(v_{y}\). Moreover, we know that the equation

$$\begin{aligned} {\left\{ \begin{array}{ll} v_{t} - \frac{1}{2}a^{2}(y) v_{yy} +wv -H(y,v,v_{y})=0 &{}\quad \quad (y,t) \in \mathbb {R} \times (0,+\infty ),\\ v(y,0)=0 &{}\quad \quad y \in \mathbb {R}. \end{array}\right. } \end{aligned}$$
(4.12)

is satisfied, that \(H\) satisfies (4.8), and that \(a^{2}(y)>\varepsilon >0\). Hence a suitable uniform bound also holds for \(v_{yy}\).

By the Arzelà–Ascoli Lemma, there exists a sequence \((t_{n}, n=1,2,\ldots )\) such that \((v(y,t_{n}), n=1,2,\ldots )\) converges to some twice continuously differentiable function, which we still denote by \(v(y)\). Moreover, the convergence holds locally uniformly together with \(v_{y}(y,t_{n})\) and \(v_{yy}(y,t_{n})\). This implies that \(v\) and \(v_{y}\) are bounded and

$$\begin{aligned} \frac{1}{2}a^{2}(y) v_{yy}+H(y,v,v_{y})-wv=0. \end{aligned}$$

The uniqueness follows from the infinite horizon analogue of stochastic representation (4.6). \(\square \)

Combining Lemma 4.2 with Theorem 2.1 of Friedman [8], we conclude that if the conditions of Assumption 1 are satisfied and \(a \ne 0\), then for all \(T>0\) there exists a unique bounded solution to the finite horizon equation (4.5). We expect that a smooth solution to equation (4.5) exists under more general conditions, but we will treat this problem elsewhere. From now on we assume that \(a\) is a nonzero constant. We now focus on

$$\begin{aligned}&\frac{1}{2} a^{2} F_{yy} +\max _{q \in [-R,R]} \bigl (-\theta F q^{2} + 2 \theta a F_{y} q\bigr )\\&\qquad + \min _{\eta \in \Gamma } \big ([\hat{ i}(y)+a\hat{l}(\eta )]F_{y}+ \hat{ h}(y,\eta ) F \big ) + \max _{c>0}\big (-\gamma cF+c^{\gamma }\big ) -wF =0.\nonumber \end{aligned}$$
(4.13)

We have already proved that if \(\hat{h}\) and \(\hat{i}\) are continuous and

$$\begin{aligned} |\hat{h}(y,\eta )-\hat{h}(\bar{y},\eta )| + |\hat{i}(y)-\hat{i}(\bar{y})|\le L_1 |y- \bar{y}|, \nonumber \\ |\hat{h}(y,\eta )| \le L_{1}, \quad |\hat{i}(y)| \le L_1(1+|y|),\end{aligned}$$
(4.14)
$$\begin{aligned} (y-\bar{y})(\hat{i}(y)- \hat{i}(\bar{y})) \le L_2 |y-\bar{y}|^2, \end{aligned}$$
(4.15)

then there exists a nonnegative, bounded and \(\mathcal {C}^{2}(\mathbb {R})\) solution to

$$\begin{aligned}&\frac{1}{2} a^{2} F_{yy} +\max _{q \in [-R,R]} \bigl (-\theta F q^{2} + 2 \theta a F_{y} q\bigr )\nonumber \\&\quad + \min _{\eta \in \Gamma } \big ([\hat{ i}(y)+a\hat{l}(\eta )]F_{y} + \hat{ h}(y,\eta ) F \big ) + \max _{m_{1} \le c \le m_{2}} \big (-\gamma cF+c^{\gamma }\big ) -wF =0.\nonumber \\ \end{aligned}$$
(4.16)

We denote this solution by \(F_{m_{1},m_{2},R}\). The proof of Theorem 4.3 shows that

$$\begin{aligned} F_{m_{1},m_{2},R} \le \frac{m_{2}^{\gamma }}{\alpha }, \end{aligned}$$

where \(\alpha :=w-\sup _{y,\eta } \hat{h}(y,\eta )\).

Lemma 4.4

If \(\hat{h}\), \(\hat{i}\) are continuous, \(a \ne 0\) and (4.14), (4.15) are satisfied, then there exists \(P>0\) such that

$$\begin{aligned} F_{m_{1},m_{2},R} \ge P, \quad \text { for all } 0 < m_{1} \le 1 \le m_{2}, R>0. \end{aligned}$$

Proof

Since \(F_{m_{1},m_{2},R}\) is the limit of solutions to finite horizon problems, we have

$$\begin{aligned}&F_{m_{1},m_{2},R}(y) \\&\quad = \lim _{t \rightarrow \infty } \sup _{c \in C_{m_1,m_{2}}} \sup _{ q \in [-R,R] } \inf _{\eta \in \mathcal {M}} \mathbb {E}_{y,0}^{l(\eta )}\biggl ( \int _{0}^{ t}e^{\int _{0}^{s}(\hat{h}(Y_{k},\eta (\delta _{k}))-\theta q_{k}^{2}-\gamma c_{k}-w)\, dk} \bigl (c_{s}\bigr )^{\gamma }ds \biggr ) \\&\quad \ge \lim _{t \rightarrow \infty } \inf _{\eta \in \mathcal {M}} \mathbb {E}_{y,0}^{l(\eta )} \biggl ( \int _{0}^{t}e^{\int _{0}^{s}(\hat{h}(Y_{k},\eta (\delta _{k}))-\gamma - w)\, dk} ds \biggr ). \end{aligned}$$

Since \(w>\sup _{y,\eta } \hat{h}(y,\eta )\), for \(p:=w+\gamma - \inf _{y,\eta } \hat{h}(y,\eta ) >0\) we have

$$\begin{aligned} F_{m_{1},m_{2},R}(y) \ge \biggl ( \int _{0}^{+ \infty }e^{-ps} ds \biggr ) = \frac{1}{p}=:P. \end{aligned}$$

\(\square \)

Lemma 4.5

Under the conditions of Lemma 4.4 there exist \(m_{1}^{*}\) and \(m_{2}^{*}\) such that \(m_{1}^{*} \le 1 \le m_{2}^{*}\) and \(F_{m_{1}^{*},m_{2}^{*},R}\) is a solution to (4.13). In addition, \(m_{1}^{*}\) and \(m_{2}^{*}\) do not depend on \(R\).

Proof

The maximum with respect to \(c\) in (4.16) is achieved at

$$\begin{aligned} c^{*}_{m_{1},m_{2}}={\left\{ \begin{array}{ll} m_{1}, &{}\; \text { if } \; F_{m_{1},m_{2},R}^{\frac{1}{\gamma -1}} \le m_{1},\\ F_{m_{1},m_{2},R}^{\frac{1}{\gamma -1}} &{}\; \text { if } \;m_{1} \le F_{m_{1},m_{2},R}^{\frac{1}{\gamma -1}} \le m_{2},\\ m_{2} \; &{}\text { if } \; F_{m_{1},m_{2},R}^{\frac{1}{\gamma -1}}\ge m_{2}. \end{array}\right. } \end{aligned}$$

From Lemma 4.4 and Theorem 4.3 we know that

$$\begin{aligned} P \le F_{m_{1},m_{2},R} \le \frac{m_{2}^{\gamma }}{\alpha }. \end{aligned}$$

Hence, since \(\frac{1}{\gamma -1}<0\),

$$\begin{aligned} \left( \frac{m_{2}^{\gamma }}{\alpha } \right) ^{\frac{1}{\gamma -1}} \le \bigl ( F_{m_{1},m_{2},R} \bigr )^{\frac{1}{\gamma -1}} \le P ^{\frac{1}{\gamma -1}}. \end{aligned}$$

Thus we can set \(m_{2}^{*}:= \max \{ P^{\frac{1}{\gamma -1}}, 1 , \alpha ^{\frac{1}{\gamma }} \}\), \(m_{1}^{*}:= \bigl ( \frac{(m_{2}^{*})^{\gamma }}{\alpha } \bigr )^{\frac{1}{\gamma -1}}\). For such \(m_{1}^{*}\), \(m_{2}^{*}\) we have

$$\begin{aligned} \max _{c>0}\big (-\gamma c F_{m_{1}^{*},m_{2}^{*},R}+c^{\gamma }\big ) = \max _{m_{1}^{*} \le c \le m_{2}^{*}} \big (-\gamma c F_{m_{1}^{*},m_{2}^{*},R}+c^{\gamma }\big ). \end{aligned}$$

The conclusion follows. \(\square \)
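The consistency of this choice of \(m_{1}^{*}\), \(m_{2}^{*}\) can be tested numerically; a sketch in which \(\gamma \in (0,1)\), \(\alpha >0\) and \(P>0\) take illustrative values and \(F\) ranges over the admissible interval \([P, (m_{2}^{*})^{\gamma }/\alpha ]\):

```python
import numpy as np

# Sanity check of Lemma 4.5: with
#   m2* = max{P**(1/(gamma-1)), 1, alpha**(1/gamma)},
#   m1* = ((m2*)**gamma/alpha)**(1/(gamma-1)),
# every F in [P, (m2*)**gamma/alpha] has unconstrained maximizer
# F**(1/(gamma-1)) inside [m1*, m2*], so the constraint is inactive.
def check(gamma, alpha, P):
    e = 1.0 / (gamma - 1.0)                  # negative exponent
    m2 = max(P ** e, 1.0, alpha ** (1.0 / gamma))
    m1 = (m2 ** gamma / alpha) ** e
    assert m1 <= 1.0 <= m2
    ub = m2 ** gamma / alpha                 # upper bound P <= F <= ub
    for F in np.linspace(P, ub, 1000):
        c = F ** e
        assert m1 - 1e-12 <= c <= m2 + 1e-12
    return m1, m2

for gamma in [0.2, 0.5, 0.9]:
    for alpha in [0.3, 1.0, 4.0]:
        for P in [0.05, 0.5]:
            # in the text P <= (m2*)**gamma/alpha holds automatically;
            # skip toy combinations that violate it
            e = 1.0 / (gamma - 1.0)
            m2 = max(P ** e, 1.0, alpha ** (1.0 / gamma))
            if P <= m2 ** gamma / alpha:
                check(gamma, alpha, P)
```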

Finally, we can consider our main equation:

$$\begin{aligned}&\frac{1}{2}a^2 F_{yy} +\frac{\rho ^{2} \gamma }{2(1-\gamma )} a^{2} \frac{F_{y}^2}{F} + \bigg (g(y) + \frac{\rho \gamma }{1-\gamma } a \lambda (y) \biggr )F_{y}\\&\quad + \min _{(\eta _{1},\eta _{2} ) \in \Gamma } \biggl (\bar{\rho } \eta _{2}a F_{y}+\frac{\rho }{(1-\gamma )} a \eta _{1}F_{y} +\frac{\gamma }{2(1-\gamma )}\bigl ( \lambda (y)+\eta _{1} \bigr )^{2} F \biggr ) \nonumber \\&\quad + \gamma r(y)F + (1-\gamma ) F^{\frac{\gamma }{\gamma -1}} -w F =0. \nonumber \end{aligned}$$
(4.17)

Proposition 4.6

Under the conditions of Lemma 4.4 there exists a unique solution to (4.17) which is bounded together with its \(y\)-derivative and bounded away from zero.

Proof

It is sufficient to note that Lemma 4.5 and inequalities (4.10), (4.11) ensure that for all \(R>0\) there exists \(F^{R}\)—a solution to (4.13)—such that \( \frac{ F^{R}_{y}}{F^{R}}\) is bounded by a constant independent of \(R\). This allows us to conclude that there exists \(R^{*}\) such that \(\displaystyle \left| \frac{a F^{R^{*}}_{y}}{F^{R^{*}}} \right| \le R^{*}\) and \(F^{R^{*}}\) is also a solution to (4.17). \(\square \)

5 Final Result

Theorem 5.1

Suppose that \(a \ne 0\) is a constant, \(g\), \(r\), \(\lambda \) are Lipschitz continuous functions, \(\lambda \) and \(r\) are bounded, and \(g\) satisfies a linear growth condition. In addition, let \(w> \sup _{y,\eta } \hat{h}(y,\eta )+ L_{2}\), where

$$\begin{aligned} \hat{h}(y,\eta )&=\frac{\gamma }{2(1-\gamma )}\bigl ( \lambda (y)+\eta _{1} \bigr )^{2} + \gamma r(y) , \\ \hat{i}(y,\eta )&= \frac{\rho \gamma }{1-\gamma } a \lambda (y) + g(y) +\bar{\rho } \eta _{2}a + \frac{\rho }{(1-\gamma )} a \eta _{1}. \end{aligned}$$

Then there exists a saddle point \((\pi ^{*}(x,y),c^{*}(x,y),\eta ^{*}(x,y))\) such that

$$\begin{aligned} \pi ^{*}(x,y)= \frac{\rho a x}{(1-\gamma ) \sigma (y) } \frac{ F_{y}}{F} + \frac{(\lambda (y)+\eta _{1}^{*}(y) ) x}{(1-\gamma ) \sigma (y)} , \quad c^{*}(x,y):=F^{\frac{1}{\gamma -1}} x, \end{aligned}$$

where \(F\) is the unique solution to (4.17) which is bounded together with its \(y\)-derivative and bounded away from zero. The term \(\eta ^{*}\) is a Borel measurable function which realizes the minimum in (4.17).

Proof

It follows from Proposition 4.6 that there exists a solution to (4.17) which is positive, bounded away from zero, and bounded together with its first \(y\)-derivative.

By the classical measurable selection theorem there exists a Borel measurable \(\eta ^{*}(y) \in \Gamma \) realizing the minimum in (4.17). With \(\pi ^{*}\) and \(c^{*}\) defined as in the statement of the theorem, due to (3.10)–(3.14) it is sufficient to prove only that \((\pi ^{*}(x,y),c^{*}(x,y),\eta ^{*}(x,y))\) is an admissible Markov saddle point and that conditions (3.6) and (3.7) hold. Let

$$\begin{aligned} \zeta _{1}(y):= \frac{\rho a }{(1-\gamma ) \sigma (y) } \frac{ F_{y}}{F} + \frac{(\lambda (y)+\eta _{1}^{*}(y) )}{(1-\gamma ) \sigma (y)}, \quad \zeta _{2}(y):= F^{\frac{1}{\gamma -1}}. \end{aligned}$$

Note that \(\zeta _{1} \cdot (b-r)\), \(\zeta _{1} \cdot \sigma \), and \(\zeta _{2}\) are bounded functions, since \(\lambda \) is bounded. Therefore, the process \(Z_{t}: =X_{t}^{\pi ^{*},c^{*}}\) is the unique solution to the equation

$$\begin{aligned} dZ_{t}=[\zeta _{1} (Y_{t}) (b(Y_{t})-r(Y_{t})) + \eta _{1} \zeta _{1}(Y_{t}) \sigma (Y_{t}) - \zeta _{2}(Y_{t}) ]Z_{t} dt + \zeta _{1}(Y_{t}) \sigma (Y_{t}) Z_{t} dW^{1,\eta }_{t}. \end{aligned}$$

This is a linear equation with bounded stochastic coefficients, which implies that

$$\begin{aligned} \mathbb {E}_{x,y}^{\eta ,T} \sup _{0 \le s \le T} \bigl (X_{s}^{\pi ^{*},c^{*}} \bigr )^{\gamma } < + \infty , \end{aligned}$$

for all \(\eta \in \mathcal {M}\). This confirms the admissibility of \((\pi ^{*}(x,y),c^{*}(x,y))\).

In addition, \(X_{t}^{\pi ^{*},c^{*}}\) is strictly positive, which ensures that (3.6) holds. Condition (3.7) is satisfied since \(F\) is bounded and for any \((x,y) \in (0,+\infty ) \times \mathbb {R}\),

$$\begin{aligned} \mathbb {E}_{x,y}^{\eta ,T} \sup _{0 \le s \le T} |V(X_{s}^{\pi ,c},Y_{s})|&= \mathbb {E}_{x,y}^{\eta ,T} \sup _{0 \le s \le T} \bigl (X_{s}^{\pi ,c}\bigr )^{\gamma }|F(Y_{s})| < +\infty . \end{aligned}$$

\(\square \)

5.1 Examples

We can apply our main result to the following \(\varepsilon \)-modifications of standard stochastic volatility models:

  • The Scott model:

    $$\begin{aligned} {\left\{ \begin{array}{ll} dS_{t}=bdt+\sqrt{e^{ Y_{t}}+\varepsilon } dW_{t}^{1}, \quad \varepsilon >0, \\ dY_{t}= (\kappa -\theta Y_{t})dt + \rho dW_{t}^{1} + \bar{\rho } dW_{t}^{2}. \end{array}\right. } \end{aligned}$$
  • The Stein and Stein model:

    $$\begin{aligned} {\left\{ \begin{array}{ll} dS_{t}=bdt+(|Y_{t}|+ \varepsilon ) dW_{t}^{1}, \quad \varepsilon >0, \\ dY_{t}= (\kappa -\theta Y_{t})dt + \rho dW_{t}^{1} + \bar{\rho } dW_{t}^{2}. \end{array}\right. } \end{aligned}$$
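For illustration, the \(\varepsilon \)-modified Stein and Stein dynamics can be simulated with a simple Euler–Maruyama scheme; all parameter values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Euler-Maruyama sketch of the epsilon-modified Stein and Stein model
#   dS_t = b dt + (|Y_t| + eps) dW^1_t,
#   dY_t = (kappa - theta Y_t) dt + rho dW^1_t + rho_bar dW^2_t.
# All parameter values are illustrative.
rng = np.random.default_rng(7)
b, eps = 0.05, 0.1
kappa, theta = 0.0, 1.5
rho = 0.6
rho_bar = np.sqrt(1.0 - rho ** 2)   # unit total variance for the Y-noise
dt, n = 1e-3, 5000

S, Y = 1.0, 0.3
min_vol = np.inf
for _ in range(n):
    dW1 = rng.normal(0.0, np.sqrt(dt))
    dW2 = rng.normal(0.0, np.sqrt(dt))
    vol = abs(Y) + eps              # the eps-modification keeps vol >= eps
    min_vol = min(min_vol, vol)
    S += b * dt + vol * dW1
    Y += (kappa - theta * Y) * dt + rho * dW1 + rho_bar * dW2
assert min_vol >= eps
```

The point of the \(\varepsilon \)-modification is visible in the simulation: the volatility \(|Y_{t}|+\varepsilon \) stays bounded away from zero along every path.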

6 Negative HARA Parameter Case

It is easy to check that for a negative HARA parameter (\(\gamma < 0\)), the HJBI equations

$$\begin{aligned}&\max _{\pi \in \mathbb {R}} \max _{c>0}\min _{\eta \in \Gamma } ( \mathcal {L}^{\pi ,c,\eta }V(x,y) - wV(x,y) + \frac{c^{\gamma }}{\gamma }) \\&\quad = \min _{\eta \in \Gamma } \max _{\pi \in \mathbb {R}} \max _{c>0} ( \mathcal {L}^{\pi ,c,\eta }V(x,y) - wV(x,y) + \frac{c^{\gamma }}{\gamma }) =0 \end{aligned}$$

have the trivial solution \(V \equiv 0\). This may suggest that the problem is ill posed. Indeed, a careful analysis of the investor’s objective function

$$\begin{aligned} J^{\pi ,c,\eta }(x,y)= \lim _{t \rightarrow \infty } \mathbb {E}_{x,y}^{\eta ,t} \int _{0}^{t \wedge \tau _{x,y}} e ^{- w s}\frac{\bigl (c_{s}\bigr )^{\gamma }}{\gamma }\, ds, \end{aligned}$$

shows that there is no saddle point for that problem, since there is no constraint on the consumption process. Therefore, we consider a constrained problem, which is based on the following investor’s objective:

$$\begin{aligned} \bar{J}^{\pi ,c,\eta }(x,y)= \lim _{t \rightarrow \infty } \mathbb {E}_{x,y}^{\eta ,t} \int _{0}^{t \wedge \tau _{x,y}} e ^{- w s}\frac{\bigl (\bar{c}_{s} X_{s}^{\pi ,\bar{c}}\bigr )^{\gamma }}{\gamma }\, ds, \end{aligned}$$

where the dynamics of the investor’s wealth process \((X^{\pi ,\bar{c}}_{t}, 0 \le t < + \infty )\) is given by the stochastic differential equation

$$\begin{aligned} d X_{t} = (r(Y_{t} )X_{t} + \pi _t (b(Y_{t})-r(Y_t))) dt +\pi _{t} \sigma (Y_{t}) dW_{t}^{1}-\bar{c}_{t} X_{t} dt. \end{aligned}$$

In this problem we assume that consumption is proportional to wealth, i.e. \(c_{t}=\bar{c}_{t} X_{t}^{\pi ,\bar{c}}\). We interpret the process \(\bar{c}_{t}\) as a consumption rate and assume that it belongs to the class \(\mathcal {C}_{m_{1},m_{2}}\).
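The constrained wealth dynamics can be sketched by simulating the exponential form of the resulting linear SDE under constant illustrative coefficients; the proportional portfolio \(\pi _{t} = \zeta X_{t}\) and all values below are hypothetical stand-ins:

```python
import numpy as np

# Sketch of the wealth dynamics with proportional consumption,
#   dX = (r X + pi (b - r) - cbar X) dt + pi sigma dW^1,
# with pi_t = zeta * X_t. The equation is then linear in X, so we simulate
# its exact exponential (geometric) form, which keeps X strictly positive.
# All coefficient values are illustrative, not taken from the paper.
rng = np.random.default_rng(3)
r, b, sigma = 0.02, 0.06, 0.25      # constant rate, drift, volatility
zeta, cbar = 0.5, 0.1               # hypothetical weight, consumption rate
dt, n = 1e-3, 2000

mu = r + zeta * (b - r) - cbar      # wealth drift per unit of wealth
vol = zeta * sigma
X = 1.0
consumption = []
for _ in range(n):
    dW = rng.normal(0.0, np.sqrt(dt))
    X *= np.exp((mu - 0.5 * vol ** 2) * dt + vol * dW)
    consumption.append(cbar * X)    # c_t = cbar_t * X_t
assert X > 0 and min(consumption) > 0
```

The strict positivity of \(X\) under proportional controls is what makes the constrained objective well defined for \(\gamma < 0\).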

After considering the HJBI equation and performing several transformations (as in (3.10)–(3.14)), we arrive at the equation:

$$\begin{aligned}&\frac{1}{2}a^2(y) F_{yy} +\frac{\rho ^{2} \gamma }{2(1-\gamma )} a^{2}(y) \frac{F_{y}^2}{F} + \bigg (g(y) + \frac{\rho \gamma }{1-\gamma } a(y) \lambda (y) \biggr )F_{y}\\&\quad + \max _{(\eta _{1},\eta _{2} ) \in \Gamma } \biggl (\bar{\rho } \eta _{2}a(y) F_{y}+\frac{\rho }{(1-\gamma )} a(y) \eta _{1}F_{y} +\frac{\gamma }{2(1-\gamma )}\bigl ( \lambda (y)+\eta _{1} \bigr )^{2} F \biggr ) \nonumber \\&\quad + \gamma r(y)F + \min _{m_{1} \le \bar{c} \le m_{2}} \biggl (-\gamma \bar{c}F+\bar{c}^{\gamma }\biggr ) -w F =0. \nonumber \end{aligned}$$
(6.1)

This may be rewritten into

$$\begin{aligned}&\frac{1}{2} a^{2}(y)u_{yy} +\max _{\delta \in D} \min _{\eta \in \Gamma } \big ([i(y)+l(\delta ,\eta )a(y)] u_{y} + h(y,\delta ,\eta ) u\big ) \\&\quad + \min _{m_{1} \le c \le m_{2}} \big (-\gamma cu+c^{\gamma }\big )-wu =0,\nonumber \end{aligned}$$
(6.2)

where \(D \subset \mathbb {R}^{n},\Gamma \subset \mathbb {R}^{k} \) are compacts.

We have the following theorem.

Theorem 6.1

Suppose that for each \(T>0\) there exists a unique bounded solution to (4.5), all conditions of Assumption 1 and Assumption 2 are satisfied with \(L_{1}>0\), \(L_{2}\ge 0\) and \(w > \sup _{\eta ,\delta ,y} h(y,\delta ,\eta ) - \gamma m_{2} + L_{2}\). Then there exists a unique bounded solution to (6.2) which, in addition, is bounded together with the \(y\)-derivative and bounded away from zero.

Proof

In light of the proof of Theorem 4.3, it is sufficient to find estimates for \(u\) and \(u_{y}\), where \(u\) is given by

$$\begin{aligned} u(y,t) = \sup _{\delta \in \mathcal {D}} \inf _{\eta \in \mathcal {N}, \; \; \bar{c} \in \mathcal {C}_{m_{1},m_{2}}} \mathbb {E}_{y,t}^{l(\delta ,\eta (\delta ))} \biggl ( \int _{t}^{T}e^{\int _{t}^{s}(h(Y_{k},\delta _{k},\eta (\delta _{k}))- \gamma \bar{c}_{k}-w)\, dk} \bar{c}_{s}^{\gamma } ds \biggr ). \end{aligned}$$

Since \(h\) is bounded and \(w > \sup _{\eta ,\delta ,y} h(y,\delta ,\eta ) - \gamma m_{2} + L_{2}\), there exists \(\alpha >0\) such that

$$\begin{aligned} |u(y,t)|&\le \sup _{\delta \in \mathcal {D}} \inf _{\eta \in \mathcal {N}, \; \; \bar{c} \in \mathcal {C}_{m_{1},m_{2}}} \mathbb {E}_{y,t}^{l(\delta ,\eta )} \biggl ( \int _{t}^{T} e^{\int _{t}^{s} -\alpha \, dk} \bar{c}_{s}^{\gamma } ds \biggr ) \\&\le m_{1}^{\gamma } \int _{t}^{T} e^{-\alpha (s-t)} ds \le \frac{m_{1}^{\gamma }}{\alpha }. \end{aligned}$$

The bound for \(u_y\) will be obtained by estimating the Lipschitz constant. Note that if \(w > \sup _{\eta ,y} h(y,\eta )- \gamma m_{2}+ L_{2}\), then there exists \(w_{1}\) such that \(w > w_{1}>\sup _{\eta ,y} h(y,\eta )- \gamma m_{2}+ L_{2}\). We also need the separate notation \(w_{2}:=w_{1}+\gamma m_{2}-L_{2}\).

$$\begin{aligned} |u(y,t)-u(\bar{y},t)|&\le \sup _{ c \in \mathcal {C}_{m_{1},m_{2}}} \sup _{\eta \in \mathcal {N},\delta \in \mathcal {D}} \mathbb {E}^{l(\delta ,\eta (\delta ))} \int _{t}^{T} c_{s}^{\gamma }e^{-\int _{t}^{s} (w-w_{1}+L_{2})\, dk} \\&\qquad \cdot \biggl | e^{\int _{t}^{s} (h(Y_{k}(y,t),\delta _{k},\eta (\delta _{k}))-w_{2})\, dk} - e^{\int _{t}^{s} (h(Y_{k}(\bar{y},t),\delta _{k},\eta (\delta _{k}))-w_{2})\, dk}\biggr | ds \\&\le \sup _{ c \in \mathcal {C}_{m_{1},m_{2}}}\sup _{\eta \in \mathcal {N},\delta \in \mathcal {D}} \mathbb {E} ^{l(\delta ,\eta (\delta ))} \int _{t}^{T} c_{s}^{\gamma }e^{-\int _{t}^{s} (w-w_{1}+L_{2})\, dk} \\&\qquad \cdot \int _{t}^{s}| h(Y_{k}(y,t),\delta _{k},\eta (\delta _{k}))\, - h(Y_{k}(\bar{y},t),\delta _{k},\eta (\delta _{k}))|\, dk \, ds \\&\le L_{1} \sup _{ c \in \mathcal {C}_{m_{1},m_{2}}} \sup _{\eta \in \mathcal {N},\delta \in \mathcal {D}} \mathbb {E} ^{l(\delta ,\eta (\delta ))} \int _{t}^{T} c_{s}^{\gamma }e^{-\int _{t}^{s} (w-w_{1}+L_{2})\, dk} \\&\qquad \cdot \int _{t}^{s} | Y_{k}(y,t) - Y_{k}(\bar{y},t) |\, dk \, ds \\&\le L_{1} m_{1}^{\gamma } \sup _{\eta \in \mathcal {N},\delta \in \mathcal {D}} \mathbb {E}^{l(\delta ,\eta (\delta ))} \int _{t}^{T} \int _{t}^{s}e^{- (w-w_{1}+L_{2})(s-t)}| Y_{k}(y,t)\\&\quad - Y_{k}(\bar{y},t) |\, dk \, ds. \end{aligned}$$

The rest of the proof is a straightforward repetition of the proof of Theorem 4.3. \(\square \)