Abstract
We prove the dynamic programming principle for uniformly nondegenerate stochastic differential games in the framework of time-homogeneous diffusion processes considered up to the first exit time from a domain. In contrast with previous results, established for constant stopping times, we allow arbitrary stopping times and randomized ones as well. There is no assumption about the solvability of the Isaacs equation in any sense (classical or viscosity). The zeroth-order “coefficient” and the “free” term are only assumed to be measurable in the space variable. We also prove that value functions are uniquely determined by the functions defining the corresponding Isaacs equations, and thus stochastic games with the same Isaacs equation have the same value functions.
1 Introduction
The dynamic programming principle is one of the basic tools in the theory of controlled diffusion processes. It seems to the author that Fleming and Souganidis in [2] were the first to prove the dynamic programming principle with nonrandom stopping times for stochastic differential games in the whole space on a finite time horizon. They used rather involved technical constructions to overcome some measure-theoretic difficulties, a technique somewhat resembling the one in Nisio [12], and the theory of viscosity solutions.
In [4] Kovats considers time-homogeneous stochastic differential games in a “weak” formulation in smooth domains and proves the dynamic programming principle, again with nonrandom stopping times. He uses approximations of policies by piecewise constant ones and proceeds similarly to [12].
Świȩch in [13] reverses the arguments in [2] and proves the dynamic programming principle for time-homogeneous stochastic differential games in the whole space with constant stopping times “directly” from knowing that the viscosity solutions exist. His method is quite similar to the so-called verification principle from the theory of controlled diffusion processes.
It is also worth mentioning the paper [1] by Buckdahn and Li where the dynamic programming principle for constant stopping times in the time-inhomogeneous setting in the whole space is derived by using the theory of backward–forward stochastic equations.
In this paper we deal only with the dynamic programming principle for stochastic differential games and its relation to the corresponding Isaacs equations. Concerning all other aspects of the theory of stochastic differential games we refer the reader to [1, 2, 4, 12], and [13], and the references therein.
In [10] we adopted the strategy of Świȩch [13] which is based on using the fact that in many cases the Isaacs equation has a sufficiently regular solution. In [13] viscosity solutions are used and we relied on classical ones. In the present article no assumptions are made on the solvability of Isaacs equations. Here we use a very general result of [9] (see Theorem 1.1 there) about solvability of approximating Isaacs equations and our Theorem 5.2 implying that the solutions of approximating equations approximate the value function in the original problem. Then we basically pass to the limit in the formulas obtained in [10].
The main emphasis of [2, 4, 12], and [13] is on proving that the value functions for stochastic differential games are viscosity solutions of the corresponding Isaacs equations and the dynamic programming principle is used just as a tool to do that. In our setting the zeroth-order coefficient and the running payoff function can be just measurable and in this situation neither our methods nor the methods based on the notion of viscosity solution seem to be of much help while characterizing the value function as a viscosity solution.
Our main future goal is to develop tools which would allow us in a subsequent article to show that the value functions are of class \(C^{0,1}\), provided that the data are there, for possibly degenerate stochastic differential games without assuming that the zeroth-order coefficient is large enough negative (see [11]). On the way to achieving this goal one of the main steps, apart from proving the dynamic programming principle, consists of proving certain representation formulas like the ones in Theorems 3.2 and 3.3 of [10], in which the process is not assumed to be uniformly nondegenerate. The next important ingredient consists of the approximation results stated as Theorem 5.2, again for possibly degenerate processes. By combining Theorem 1.1 of [9] with Theorems 3.2 and 3.3 of [10] and Theorem 5.2, we then come to one of the main results of the present article, Theorem 2.1, about the dynamic programming principle in a very general form including stopping and randomized stopping times. Speaking somewhat informally, using randomized stopping times is equivalent to introducing a nonnegative random process which, multiplied by \(dt\), would give us the probability that we stop the process on the time interval \((t,t+dt)\) given that it was not stopped before.
In Theorem 2.2 we assert the Hölder continuity of the value function in our case where the zeroth-order coefficient and the running payoff function can be discontinuous.
Theorem 2.1 concerns time-homogeneous stochastic differential games, unlike the time-inhomogeneous ones in [2], and generalizes the corresponding results of [13] and [4], where, however, the degenerate case is not excluded.
Our Theorem 2.3 shows that the value function is uniquely defined by the corresponding Isaacs equation and is independent of the way the equation is represented as \(\sup \inf \) of linear operators (provided that they satisfy our basic assumptions). This fact in a somewhat more restricted situation is also noted in Remark 2.4 of [13].
The article is organized as follows. In Sect. 2 we state our main results, to which, as we pointed out implicitly above, Theorem 5.2 also belongs. In Sect. 3 we give a version of Theorems 2.1 and 2.3 for the whole space. Then in Sect. 4 we prove a very simple result allowing us to compare the value functions corresponding to different data.
Sections 5 and 6 are devoted to deriving approximation results. In Sect. 5 we consider the approximations from above, whereas in Sect. 6 from below. The point is that we know from [9] that one can slightly modify the underlying Isaacs equation in such a way that the modified equation has rather smooth solutions. These smooth solutions are shown to coincide with the corresponding value functions, which in addition satisfy the dynamic programming principle, and the goal of Sects. 5 and 6 is to show that when the modification “fades away” we obtain the dynamic programming principle for the original value function. Theorem 5.2 is proved for the case in which the process can degenerate. Its version for the uniformly nondegenerate case is given in Sect. 7, where we also prove Theorem 2.3 about the characterization of the value function by the Isaacs equation. In the final short Sect. 8 we combine the previous results and prove Theorems 2.1 and 2.2.
The author is sincerely grateful for the referee’s comments, which allowed him to clear up some obscure places.
2 Main results for bounded domains
Let \(\mathbb{R }^{d}=\{x=(x_{1},\ldots ,x_{d})\}\) be a \(d\)-dimensional Euclidean space and let \(d_{1}\ge d\) be an integer. Assume that we are given separable metric spaces \(A\) and \(B\) and let, for each \(\alpha \in A\) and \(\beta \in B\), the following functions on \(\mathbb{R }^{d}\) be given:
(i)
\(d\times d_{1}\) matrix-valued \(\sigma ^{\alpha \beta }( x) = \left( \sigma ^{\alpha \beta }_{ij}( x)\right) \),
(ii)
\(\mathbb{R }^{d}\)-valued \(b^{\alpha \beta }( x)= \left( b^{\alpha \beta }_{i }(x)\right) \), and
(iii)
real-valued functions \(c^{\alpha \beta }( x) , f^{\alpha \beta }( x) \), and \(g(x)\).
Take a \(\zeta \in C^{\infty }_{0}(\mathbb{R }^{d})\) with unit integral and for \(\varepsilon >0\) introduce \(\zeta _{\varepsilon }(x)=\varepsilon ^{-d}\zeta (x/\varepsilon )\). For locally summable functions \(u=u(x)\) on \(\mathbb{R }^{d}\) define the mollifications \(u^{(\varepsilon )}=u*\zeta _{\varepsilon }\).
Assumption 2.1
(i)
a) All the above functions are continuous with respect to \(\beta \in B\) for each \((\alpha ,x)\) and continuous with respect to \(\alpha \in A\) uniformly with respect to \(\beta \in B\) for each \(x\). b) These functions are Borel measurable functions of \((\alpha ,\beta ,x)\), the function \(g(x)\) is bounded and uniformly continuous on \(\mathbb{R }^{d}\), and \(c^{\alpha \beta }\ge 0\).
(ii)
For any \(x \in \mathbb{R }^{d}\)
$$\begin{aligned} \sup _{(\alpha ,\beta ) \in A\times B }( | c^{\alpha \beta }|+| f^{\alpha \beta }|)( x)<\infty , \end{aligned}$$(2.1) and for any \(x,y\in \mathbb{R }^{d}\) and \((\alpha ,\beta )\in A\times B \)
$$\begin{aligned}&\Vert \sigma ^{\alpha \beta }( x)-\sigma ^{\alpha \beta }( y)\Vert \le K_{1}|x-y|, \quad |b^{\alpha \beta }( x)-b^{\alpha \beta }( y) |\le K_{1}|x-y|,\\&\quad \Vert \sigma ^{\alpha \beta }( x )\Vert ,|b^{\alpha \beta }( x )| \le K_{0}, \end{aligned}$$ where \(K_{0}\) and \(K_{1}\) are some fixed constants.
(iii)
For any bounded domain \(D\subset \mathbb{R }^{d}\) we have
$$\begin{aligned} \Vert \sup _{(\alpha ,\beta )\in A\times B }| f^{\alpha \beta } |\,\Vert _{L_{d}(D)}&+ \Vert \sup _{(\alpha ,\beta )\in A\times B } c^{\alpha \beta } \,\Vert _{L_{d}(D)}<\infty ,\\ \Vert \sup _{(\alpha ,\beta )\in A\times B }| f^{\alpha \beta }&- ( f^{\alpha \beta })^{(\varepsilon )}|\,\Vert _{L_{d}(D)}\rightarrow 0,\\ \Vert \sup _{(\alpha ,\beta )\in A\times B }| c^{\alpha \beta }&- ( c^{\alpha \beta })^{(\varepsilon )}|\,\Vert _{L_{d}(D)}\rightarrow 0, \end{aligned}$$ as \(\varepsilon \downarrow 0\).
(iv)
There is a constant \(\delta \in (0,1]\) such that for \(\alpha \in A, \beta \in B\), and \(x,\lambda \in \mathbb{R }^{d}\) we have
$$\begin{aligned} \delta |\lambda |^{2}\le a^{\alpha \beta }_{ij}( x)\lambda _{i} \lambda _{j}\le \delta ^{-1}|\lambda |^{2}, \end{aligned}$$
where \(a^{\alpha \beta }=(a^{\alpha \beta }_{ij})=(1/2) \sigma ^{\alpha \beta }(\sigma ^{\alpha \beta })^{*}\).
The reader understands, of course, that the summation convention is adopted throughout the article.
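Assumption 2.1 (iii) admits merely measurable \(f\) and \(c\): for instance, mollifications of an indicator function converge in \(L_{d}\). Here is a small numerical sketch of this (with \(d=1\), so that \(L_{d}=L_{1}\), and with a triangular kernel in place of a \(C^{\infty }_{0}\) mollifier, which is enough for the numerical point; all concrete choices are illustrative, not taken from the text):

```python
import numpy as np

# Grid on [-2, 2]; f is a discontinuous (indicator) "running payoff".
x = np.linspace(-2.0, 2.0, 4001)
h = x[1] - x[0]
f = (np.abs(x) <= 1.0).astype(float)  # f = 1_{[-1,1]}, merely measurable

def mollify(f, eps):
    """Convolve f with zeta_eps, a triangular bump of unit integral."""
    m = int(round(eps / h))
    s = np.arange(-m, m + 1) * h
    zeta = np.maximum(0.0, 1.0 - np.abs(s) / eps) / eps  # hat kernel
    zeta /= zeta.sum() * h                               # unit integral on the grid
    return np.convolve(f, zeta, mode="same") * h

# ||f - f^{(eps)}||_{L_1} shrinks as eps -> 0, as in Assumption 2.1 (iii).
errors = [np.sum(np.abs(f - mollify(f, eps))) * h for eps in (0.4, 0.2, 0.1, 0.05)]
print(errors)  # strictly decreasing
```

For the triangular kernel the \(L_{1}\)-error decays linearly in \(\varepsilon \) (each jump of \(f\) contributes an error of order \(\varepsilon \)).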
Let \((\Omega ,\mathcal{F },P)\) be a complete probability space, let \(\{\mathcal{F }_{t},t\ge 0\}\) be an increasing filtration of \(\sigma \)-fields \(\mathcal{F }_{t}\subset \mathcal{F }\) such that each \(\mathcal{F }_{t}\) is complete with respect to \(\mathcal{F },P\), and let \(w_{t},t\ge 0\), be a standard \(d_{1}\)-dimensional Wiener process given on \(\Omega \) such that \(w_{t}\) is a Wiener process relative to the filtration \(\{\mathcal{F }_{t},t\ge 0\}\).
The set of progressively measurable \(A\)-valued processes \(\alpha _{t}=\alpha _{t}(\omega )\) is denoted by \(\mathfrak{A }\). Similarly we define \(\mathfrak{B }\) as the set of \(B\)-valued progressively measurable functions. By \( \mathbb{B }\) we denote the set of \(\mathfrak{B }\)-valued functions \({\varvec{\beta }}(\alpha _{\cdot })\) on \(\mathfrak{A }\) such that, for any \(T\in (0,\infty )\) and any \(\alpha ^{1}_{\cdot }, \alpha ^{2}_{\cdot }\in \mathfrak{A }\) satisfying
we have
For \(\alpha _{\cdot }\in \mathfrak{A }, \beta _{\cdot } \in \mathfrak{B }\), and \(x\in \mathbb{R }^{d}\) define \(x^{\alpha _{\cdot } \beta _{\cdot } x}_{t} \) as the unique solution of the Itô equation
For a sufficiently smooth function \(u=u(x)\) introduce
where, naturally, \(D_{i}=\partial /\partial x_{i}, D_{ij}=D_{i}D_{j}\). Also set
Denote
Next, fix a bounded domain \(D\subset \mathbb{R }^{d}\), define \(\tau ^{\alpha _{\cdot }\beta _{\cdot } x}\) as the first exit time of \(x^{\alpha _{\cdot } \beta _{\cdot } x}_{t}\) from \(D\), and introduce
where the indices \(\alpha _{\cdot }, {\varvec{\beta }}\), and \(x\) at the expectation sign are written to mean that they should be placed inside the expectation sign wherever and as appropriate, that is
Observe that this definition makes perfect sense due to Theorem 2.2.1 of [6], and that \(v(x)=g(x)\) in \(\mathbb{R }^{d}{\setminus } D\).
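To make the objects above concrete, here is a small Monte Carlo sketch of the payoff functional defining \(v\) in a toy case where the coefficients do not depend on \(\alpha ,\beta \), so that the game is trivial (all concrete choices are hypothetical): \(d=1\), \(D=(-1,1)\), \(\sigma =\sqrt{2}\), \(b=c=g=0\), \(f=1\). Then the payoff reduces to \(E\,\tau \), and the associated equation \(u''=-1\), \(u(\pm 1)=0\) gives \(v(0)=1/2\):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative frozen coefficients: sigma = sqrt(2), b = 0, c = 0, f = 1,
# g = 0, D = (-1, 1).  The payoff is then E[tau], and v(0) = 1/2.
n_paths, dt, t_max = 4000, 1e-3, 8.0
x = np.zeros(n_paths)          # all paths start at x = 0
tau = np.full(n_paths, t_max)  # exit times (capped at t_max)
alive = np.ones(n_paths, dtype=bool)

t = 0.0
while t < t_max and alive.any():
    x[alive] += np.sqrt(2.0 * dt) * rng.standard_normal(alive.sum())
    t += dt
    exited = alive & (np.abs(x) >= 1.0)
    tau[exited] = t
    alive &= ~exited

estimate = tau.mean()  # Monte Carlo value of E[ integral_0^tau 1 dt ]
print(estimate)        # close to 0.5 (up to discretization bias)
```

The small positive bias comes from monitoring the exit only at grid times; it vanishes as \(dt\downarrow 0\).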
Here is our first main result before which we introduce one more assumption.
Assumption 2.2
There exists a nonnegative \(G\in C(\bar{D})\cap C^{2}_{loc}(D)\) such that \(G= 0\) on \(\partial D\) and
in \(D\) for all \(\alpha \in A\) and \(\beta \in B\).
Theorem 2.1
Under the above assumptions
(i)
The function \(v(x)\) is bounded and continuous in \(\mathbb{R }^{d}\).
(ii)
Let \(\gamma ^{\alpha _{\cdot }\beta _{\cdot }x} \) be an \(\{\mathcal{F }_{t}\}\)-stopping time defined for each \(\alpha _{\cdot }\in \mathfrak{A }, \beta _{\cdot }\in \mathfrak{B }\), and \(x\in \mathbb{R }^{d}\) and such that \(\gamma ^{\alpha _{\cdot }\beta _{\cdot }x}\le \tau ^{\alpha _{\cdot }\beta _{\cdot } x}\). Also let \(\lambda _{t}^{\alpha _{\cdot } \beta _{\cdot }x}\ge 0\) be progressively measurable functions on \(\Omega \times [0,\infty )\) defined for each \(\alpha _{\cdot }\in \mathfrak{A }, \beta _{\cdot }\in \mathfrak{B }\), and \(x\in \mathbb{R }^{d}\) and such that they have finite integrals over finite time intervals (for any \(\omega \)). Then for any \(x\)
$$\begin{aligned} v(x)=\mathop {{\mathrm{inf\,\,\,sup}}}\limits _{{\varvec{\beta }}\in \mathbb{B }\,\,\alpha _{\cdot }\in \mathfrak{A }} E_{x}^{\alpha _{\cdot }{\varvec{\beta }}(\alpha _{\cdot })}\left[ v(x_{\gamma })e^{-\phi _{\gamma }-\psi _{\gamma }} +\int \limits _{0}^{\gamma } \{f( x_{t})+\lambda _{t}v(x_{t})\}e^{-\phi _{t}-\psi _{t}}\,dt \right] ,\nonumber \\ \end{aligned}$$(2.6)
where inside the expectation sign \(\gamma =\gamma ^{\alpha _{\cdot }{\varvec{\beta }}(\alpha _{\cdot })x} \) and
Remark 2.1
The above setting is almost identical to that of [10] and the statement of Theorem 2.1 is almost identical to that of Theorem 2.2 of [10]. However, here we do not impose the rather strong assumption from [10] that \(D\) can be approximated by domains in which the Isaacs equation has regular solutions. On the other hand, we pay for that by excluding the parameters \(p\), which are present in Theorem 2.2 of [10] and will reappear in our Theorem 2.3.
Note that the possibility to vary \(\lambda \) in Theorem 2.1 might be useful while considering stochastic differential games with stopping in the spirit of [5].
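The structure of the dynamic programming principle (2.6), with \(\lambda \equiv 0\) and \(\gamma \) a one-step time, has a transparent discrete analogue: the value of a finite-state min–max Markov game is the fixed point of its inf–sup Bellman operator, so applying the operator to the value leaves it unchanged. A toy sketch (all states, actions, and numbers hypothetical, not taken from the text):

```python
import numpy as np

# Toy discrete analogue of the DPP.  States 0..N with absorbing ends;
# alpha maximizes, beta minimizes; gamma plays the role of e^{-c dt}.
N, gamma = 10, 0.95
f = 0.1 * np.ones(N + 1)                # running payoff
g0, gN = 0.0, 1.0                       # boundary payoffs

def p_up(a, b):
    # probability of stepping right under actions (a, b) in {0,1}^2
    return 0.5 + 0.15 * a - 0.15 * b

def bellman(v):
    """One step of the inf-sup (here: min-max) dynamic programming operator."""
    w = v.copy()
    for i in range(1, N):
        vals = np.array([[f[i] + gamma * (p_up(a, b) * v[i + 1]
                                          + (1 - p_up(a, b)) * v[i - 1])
                          for a in (0, 1)] for b in (0, 1)])
        w[i] = vals.max(axis=1).min()   # inf over beta of sup over alpha
    w[0], w[N] = g0, gN
    return w

v = np.zeros(N + 1); v[0], v[N] = g0, gN
for _ in range(2000):                   # value iteration to the fixed point
    v = bellman(v)

# Dynamic programming principle: v is invariant under the Bellman operator.
print(np.max(np.abs(bellman(v) - v)))   # essentially zero
```

Since the operator is a \(\gamma \)-contraction in the sup-norm, the iteration converges geometrically and the one-step identity holds to machine precision.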
Theorem 2.2
The function \(v\) is locally Hölder continuous in \(D\) with exponent \(\theta \in (0,1)\) depending only on \(d\) and \(\delta \).
Next, we state a comparison result, for which we need some new objects and additional assumptions. Take an integer \(k \ge d\) and assume that on \(\mathbb{R }^{k }\) we are given a mapping
which is twice continuously differentiable with bounded and uniformly continuous first- and second-order derivatives.
The reader understands that the case \(k =d\) is not excluded, in which case \(\Pi (\check{x})\equiv \check{x}\) is allowed.
Assume that we are given a separable metric space \(\mathcal{P }\) and let, for each \(\alpha \in A, \beta \in B\), and \(p\in \mathcal{P }\), the following functions on \(\mathbb{R }^{k}\) be given:
(i)
\(k\times d_{1}\) matrix-valued \(\check{\sigma }^{\alpha \beta }(p,\check{x}) = (\check{\sigma }^{\alpha \beta }_{ij}(p,\check{x}))\),
(ii)
\(\mathbb{R }^{k}\)-valued \(\check{b}^{\alpha \beta }(p,\check{x})= (\check{b}^{\alpha \beta }_{i }(p,\check{x}))\), and
(iii)
real-valued functions \(\check{r}^{\alpha \beta }(p,\check{x}) , \check{c}^{\alpha \beta }(p,\check{x}) \), and \(\check{f}^{\alpha \beta }(p,\check{x}) \).
As usual we introduce
and for a fixed \(\bar{p}\in \mathcal{P }\) define
Assumption 2.3
(i)
All the above functions apart from \(\check{r}\) are continuous with respect to \(\beta \in B\) for each \((\alpha ,p,\check{x})\) and continuous with respect to \(\alpha \in A\) uniformly with respect to \(\beta \in B\) for each \((p,\check{x})\). Furthermore, they are Borel measurable functions of \((p,\check{x})\) for each \((\alpha ,\beta )\) and \(\check{c}^{\alpha \beta }\ge 0\).
(ii)
The functions \(\bar{\sigma }^{\alpha \beta }(\check{x} )\) and \(\bar{b}^{\alpha \beta }(\check{x} )\) are uniformly continuous with respect to \(\check{x}\) uniformly with respect to \((\alpha ,\beta )\in A\times B \) and for any \(\check{x} \in \mathbb{R }^{k}\) and \((\alpha ,\beta ,p)\in A\times B\times \mathcal{P }\)
$$\begin{aligned} \Vert \sigma ^{\alpha \beta }(p,\check{x} )\Vert ,|b^{\alpha \beta }(p,\check{x} )| \le K_{0}. \end{aligned}$$
(iii)
We have \(\bar{r}\equiv 1\) and there is a constant \(\check{\delta }_{1}\in (0,1]\) such that on \(A\times B\times \mathcal{P }\times \mathbb{R }^{k}\) we have
$$\begin{aligned} \check{r}^{\alpha \beta }(p,\check{x})\in [\check{\delta }_{1},\check{\delta }_{1}^{-1}],\quad \check{f} ^{\alpha \beta }(p,\check{x}) =\check{r}^{\alpha \beta }(p,\check{x}) \bar{f}^{\alpha \beta }(\check{x}) . \end{aligned}$$(2.7)
(iv)
The functions \(c^{\alpha \beta }( x)\) and \(f^{\alpha \beta }( x)\) are bounded on \(A\times B\times \mathbb{R }^{d}\). (This part bears on the objects introduced before Theorem 2.1.)
(v)
For any \(\check{x} \in \mathbb{R }^{k}\)
A function \(p^{\alpha _{\cdot }\beta _{\cdot }}_{t}= p^{\alpha _{\cdot }\beta _{\cdot }}_{t}(\omega )\) given on \(\mathfrak{A }\times \mathfrak{B }\times \Omega \times (0,\infty )\) is said to be control adapted if, for any \((\alpha _{\cdot }\!,\beta _{\cdot })\in \mathfrak{A }\times \mathfrak{B }\) it is progressively measurable in \((\omega ,t)\) and, for any \(T\in (0,\infty )\), we have
as long as
The set of \(\mathcal{P }\)-valued control adapted processes is denoted by \(\mathfrak{P }\).
We discussed a way in which control adapted processes appear naturally in Remark 2.4 of [10].
Fix a \(p\in \mathfrak{P }\) and for \(\alpha _{\cdot }\in \mathfrak{A }, \beta _{\cdot }\in \mathfrak{B }\), and \(\check{x}\in \mathbb{R }^{k }\) consider the following equation
Assumption 2.4
Equation (2.9) satisfies the usual hypothesis; that is, for any \(\alpha _{\cdot }\in \mathfrak{A }, \beta _{\cdot } \in \mathfrak{B }\), and \(\check{x}\in \mathbb{R }^{k }\) it has a unique solution, denoted by \( \check{x}_{t}^{\alpha _{\cdot }\beta _{\cdot }\check{x}}\), and \( \check{x}_{t}^{\alpha _{\cdot }\beta _{\cdot }\check{x}}\) is a control adapted process for each \(\check{x}\).
In order to state additional assumptions, we need a possibly unbounded domain \( D^{\check{\!\!\!}\,} \subset \mathbb{R }^{k }\) such that
Denote by \(\check{\tau }^{\alpha _{\cdot }\beta _{\cdot }\check{x}}\) the first exit time of \(\check{x}_{t}^{\alpha _{\cdot }\beta _{\cdot }\check{x}}\) from \(D^{\check{\!\!\!}\,}\) and set
Next, suppose that for each \(\varepsilon >0\) we are given real-valued Borel measurable functions \(\bar{c}^{\alpha \beta }_{\varepsilon }(\check{x})\) and \(\bar{f}^{\alpha \beta }_{\varepsilon }(\check{x})\) defined on \(A\times B\times \mathbb{R }^{k}\) and impose
Assumption 2.5
(i)
For each \(\varepsilon >0\) the functions \((\bar{c}_{\varepsilon },\bar{f}_{\varepsilon })^{\alpha \beta } \) are bounded on \(A\times B\times \bar{D^{\check{\!\!\!}\,}}\) and uniformly continuous with respect to \(\check{x}\in \bar{D^{\check{\!\!\!}\,}}\) uniformly with respect to \(\alpha ,\beta \).
(ii)
For any \(\check{x}\in D^{\check{\!\!\!}\,}\)
$$\begin{aligned}&\sup _{(\alpha _{\cdot }\!,\beta _{\cdot } )\in \mathfrak{A }\times \mathfrak{B }} E^{\alpha _{\cdot }\beta _{\cdot }}_{\check{x}} \int \limits _{0}^{\check{\tau }} \sup _{\alpha \in A,\beta \in B} |\bar{c}^{\alpha \beta } -\bar{c}^{\alpha \beta } _{\varepsilon }| (\check{x}_{t}) e^{-\check{\phi }_{t}}\,dt\rightarrow 0,\nonumber \\&\sup _{(\alpha _{\cdot }\!,\beta _{\cdot } )\in \mathfrak{A }\times \mathfrak{B }} E^{\alpha _{\cdot }\beta _{\cdot }}_{\check{x}} \int \limits _{0}^{\check{\tau }} \sup _{\alpha \in A,\beta \in B} |\bar{f}^{\alpha \beta } -\bar{f}^{\alpha \beta } _{\varepsilon }| (\check{x}_{t}) e^{-\check{\phi }_{t}}\,dt\rightarrow 0 \end{aligned}$$(2.10) with the second convergence in (2.10) being uniform in \(D^{\check{\!\!\!}\,}\).
(iii)
There exists a constant \(\check{\delta }\in (0,1]\) such that for \(\check{x}\in \mathbb{R }^{k}, p\in \mathcal{P }, \alpha \in A, \beta \in B\), and \(\lambda \in \mathbb{R }^{d}\) we have
Remark 2.2
Assumption 2.5 (iii) is equivalent to saying that for solutions of (2.9) the processes \(\Pi (\check{x}_{t})\) are uniformly nondegenerate.
It is convenient to always lift functions \(u\) given on \(\mathbb{R }^{d}\) to functions given on \(\mathbb{R }^{k}\) by the formula
For sufficiently smooth functions \(u=u(\check{x})\) introduce
(naturally, \(D_{i}=\partial /\partial \check{x}_{i}, D_{ij}=D_{i}D_{j}\)). Also set
Assumption 2.6
There exists a nonnegative (bounded) function \(\check{G}\in C(\bar{D^{\check{\!\!\!}\,}})\cap C^{2}_{loc}(D^{\check{\!\!\!}\,})\) such that \(\check{G}(\check{x})\rightarrow 0\) as \(\check{x}\in \bar{D^{\check{\!\!\!}\,}}\) and \(\mathrm{dist}\,(\Pi (\check{x}),\partial D)\rightarrow 0\) (\(\check{G} = 0\) on \(\partial D\) if \(k =d\) and \(\Pi (\check{x})\equiv \check{x}\)) and
in \(\mathcal{P }\times D^{\check{\!\!\!}\,}\) for all \(\alpha \in A\) and \(\beta \in B\).
Next, take a real-valued function \(\psi \) on \(\mathbb{R }^{k}\) with finite \(C^{2}(\mathbb{R }^{k})\)-norm and introduce
where, naturally, \(v\) is taken from Theorem 2.1. Assumption 2.5 (iii) (and the boundedness of \(D\)) and Theorem 2.2.1 of [6] allow us to conclude that \( P^{\alpha _{\cdot }\beta _{\cdot }}_{\check{x}} (\check{\tau }^{\alpha _{\cdot }\beta _{\cdot }\check{x}}<\infty )=1\). Also notice that (2.7) and Assumption 2.6 imply that for any \(\check{x}\in D^{\check{\!\!\!}\,}\)
which is finite at least for small \(\varepsilon >0\) owing to (2.10). Hence, \(\check{v}\) is well defined.
By the way, observe also that, if \(k=d\) and \(\Pi (\check{x})\equiv \check{x}\), then \(\psi (\check{x})v( \check{x}) =\psi (x)g(x)\) on \(\partial D^{\check{\!\!\!}\,}=\partial D\).
Assumption 2.7
For any function \(u\in C^{2}_{loc}(D)\) (not \(C^{2}_{loc}(D^{\check{\!\!\!}\,})\)), the function \(\psi (\check{x})u(\check{x})\) is \(p\)-insensitive in \(D^{\check{\!\!\!}\,}\) relative to \((\check{r}^{\alpha \beta }(p,\check{x}), \check{L}^{\alpha \beta }(p,\check{x}))\) in the terminology of [10], that is, for any \(\alpha _{\cdot }\!, \beta _{\cdot }\), and \(\check{x}\) we have
whenever \(t<\check{\tau }^{\alpha _{\cdot }\beta _{\cdot }\check{x}}\), where \(m_{t}\) is a local martingale starting at zero.
We discuss this assumption in Remark 2.6.
Finally, take some \(\{\mathcal{F }_{t}\}\)-stopping times \(\gamma ^{\alpha _{\cdot }\beta _{\cdot }\check{x}} \) and progressively measurable functions \(\lambda _{t}^{\alpha _{\cdot }\beta _{\cdot }\check{x}}\ge 0\) on \(\Omega \times [0,\infty )\) defined for each \(\alpha _{\cdot }\in \mathfrak{A }, \beta _{\cdot }\in \mathfrak{B }\), and \(\check{x} \in \mathbb{R }^{k}\) and such that \(\lambda _{t}^{\alpha _{\cdot }\beta _{\cdot }\check{x}}\) have finite integrals over finite time intervals (for any \(\omega \)). Introduce
In the following theorem by quadratic functions we mean quadratic functions on \(\mathbb{R }^{d}\) (not \(\mathbb{R }^{k }\)) (and if \(u\) is a function defined in \(D\) then we extend it to a function in a domain in \(\mathbb{R }^{k}\) following notation (2.11)).
Theorem 2.3
(i)
If for any \(\check{x} \in D^{\check{\!\!\!}\,}\) and quadratic function \(u\), we have
$$\begin{aligned} H[u](\Pi (\check{x}))\le 0\Longrightarrow \check{H}[u\psi ](\check{x})\le 0, \end{aligned}$$(2.12) then \(\check{v}\le \psi v\) in \(\mathbb{R }^{k}\) and for any \(\check{x}\in \mathbb{R }^{k}\)
$$\begin{aligned} v\psi (\check{x}) \ge \mathop {{\mathrm{inf\,\,\,sup}}}\limits _{{\varvec{\beta }}\in \mathbb{B }\,\,\alpha _{\cdot }\in \mathfrak{A }} E_{\check{x}}^{\alpha _{\cdot }{\varvec{\beta }}(\alpha _{\cdot })} \left[ v\psi (\check{x}_{\gamma \wedge \check{\tau }}) e^{-\check{ \phi }_{\gamma \wedge \check{\tau }}- \psi _{\gamma \wedge \check{\tau }}}\right. \nonumber \\ \left. +\int \limits _{0}^{\gamma \wedge \check{\tau }} \{\check{f}(p_{t},\check{x} _{t} )+\lambda _{t} v\psi (\check{x} _{t})\}e^{-\check{ \phi }_{t}- \psi _{t}}\,dt \right] . \end{aligned}$$(2.13)
(ii)
If for any \(\check{x} \in D^{\check{\!\!\!}\,}\) and quadratic function \(u\), we have
$$\begin{aligned} H[u](\Pi (\check{x}))\ge 0\Longrightarrow \check{H}[u\psi ](\check{x})\ge 0, \end{aligned}$$(2.14) then \(\check{v}\ge \psi v\) in \(\mathbb{R }^{k}\) and for any \(\check{x}\in \mathbb{R }^{k}\)
$$\begin{aligned} v\psi (\check{x}) \le \mathop {{\mathrm{inf\,\,\,sup}}}\limits _{{\varvec{\beta }}\in \mathbb{B }\,\,\alpha _{\cdot }\in \mathfrak{A }} E_{\check{x}}^{\alpha _{\cdot }{\varvec{\beta }}(\alpha _{\cdot })}\Biggl [ v\psi (\check{x}_{\gamma \wedge \check{\tau }}) e^{-\check{ \phi }_{\gamma \wedge \check{\tau }}- \psi _{\gamma \wedge \check{\tau }}} \nonumber \\ +\int \limits _{0}^{\gamma \wedge \check{\tau }} \{\check{f}(p_{t},\check{x} _{t} )+\lambda _{t} v\psi (\check{x} _{t})\}e^{-\check{ \phi }_{t}- \psi _{t}}\,dt \Biggr ]. \end{aligned}$$(2.15)
Remark 2.3
Under the assumptions of Theorem 2.1 suppose that \(c\) and \(f\) are bounded. Take a global barrier \(\Psi \), which is an infinitely differentiable function on \(\mathbb{R }^{d}\) such that \(\Psi \ge 1\) on \(\mathbb{R }^{d}\) and \((L^{\alpha \beta }+c^{\alpha \beta }) \Psi \le -1\) on \(D\) for all \(\alpha ,\beta \). The existence of such functions is a simple and well-known fact.
In Theorem 2.3 take \(k=d, D^{\check{\!\!\!}\,}=D\), and independent of \(p\) functions \(\check{r}\equiv 1\),
where \(D\Psi \) is the gradient of \(\Psi \) (a column vector).
A simple computation shows that
and therefore both conditions in (2.12) and (2.14) are satisfied with \(\psi =\Psi ^{-1}\), and by Theorem 2.3 we conclude that \(\check{v}=\Psi ^{-1} v\). It is still probably worth noting that to check Assumption 2.5 in this case we take \((\bar{c}_{\varepsilon }, \bar{f}_{\varepsilon })^{\alpha \beta } = [( \check{c} , \check{f})^{\alpha \beta }]^{(\varepsilon )}.\)
This simple observation sometimes helps to introduce a new \(c\ge 1\) when the initial one was zero.
Remark 2.4
If \(\check{a},\check{b},\check{c}\), and \(\check{f}\) are independent of \(p\) and \(k =d, \Pi (x)\equiv x\), and \(\psi \equiv 1\), then Theorem 2.3 implies that \(v=\check{v}\) whenever the functions \(H\) and \(\check{H}\) coincide. Therefore, \(v\) and \(\check{v}\) are uniquely defined by \(H\) and not by its particular representation (2.4) and, for that matter, not by the choice of probability space, filtration, and the Wiener process including its dimension. By Theorem 2.3 we also have that \(v=\check{v}\) if \(k =d, \Pi (x)\equiv x\), and if \(\check{a},\check{b},\check{c}\), and \(\check{f}\) do depend on \(p\) but in such a way that
since in that case any smooth function is \(p\)-insensitive. In such a situation we see that \(\check{v}\) is independent of \(p\in \mathfrak{P }\) as well.
Also notice that, if in Theorem 2.1 the functions \(c\) and \(f\) are bounded (see Assumption 2.3 (iv)) and one takes \(k=d\), assumes that the checked functions are independent of \(p\), and finally takes the checked functions equal to the unchecked ones and \((\bar{c}_{\varepsilon }, \bar{f}_{\varepsilon })^{\alpha \beta } = [( c , f)^{\alpha \beta }]^{(\varepsilon )}\), then one sees that assertion (ii) of Theorem 2.1 follows immediately from Theorem 2.2.1 of [6] and Theorem 2.3.
Remark 2.5
Here we discuss the possibility of using dilations. Take a constant \(\mu >0\) and consider the following modification of (2.3):
The solution of this equation is denoted by \(x^{\alpha _{\cdot }\beta _{\cdot } x}_{t}(\mu )\). Then let
denote by \(\tau ^{\alpha _{\cdot }\beta _{\cdot } x}(\mu )\) the first exit time of \(x^{\alpha _{\cdot }\beta _{\cdot } x}_{t}(\mu )\) from \(\mu ^{-1}D\), and set
A simple application of Theorem 2.3 with \(\Pi (x)=\mu x\) and \(\psi \equiv 1\) shows that \(v(\mu x)=v(x,\mu )\). Of course, other types of changing the coordinates are also covered by Theorem 2.3.
Remark 2.6
The case \(k >d\) will play a very important role in a subsequent article (see [11]) about stochastic differential games. To illustrate one of the applications, consider the one-dimensional Wiener process \(w_{t}\), define \(\tau _{x}\) as the first exit time of \(x+ w_{t}\) from \((-1,1)\), and introduce
so that the corresponding (Isaacs) equation becomes
in \((-1,1)\) with zero boundary data at \(\pm 1\). We want to show how Theorem 2.3 allows one to derive the following
where \(\check{\tau }_{x}\) is the first exit time of \(x+w_{t}+t\) from \((-1,1)\). (Of course, (2.17) is a simple corollary of Girsanov’s theorem.)
In order to do that consider the two-dimensional diffusion process given by
starting at
where \(\varepsilon \in (0,1)\), let \(\tau ^{\varepsilon }_{x,y}\) be the first time the process exits from this domain, and introduce
In this situation we take \(\Pi (x,y)=x\). The corresponding (Isaacs) equation is now
As \(G(x)\) and \(\check{G}(x,y)\) one can take \(1-|x|^2\) and set \(r(x,y)=y\).
It is a trivial computation to show that if \(u(x)\) satisfies \(H[u](x)\le 0\) at a point \(x\in (-1,1)\), then for \(\check{u}(x,y) :=yu(x)\) we have \(\check{H}[\check{u}](x,y)\le 0\) for any \(y>0\) and if we reverse the sign of the first inequality the same will happen with the second one. By Theorem 2.3 we have that \(\check{v}(x,y)=yv(x)\) in \(D^{\check{\!\!\!}\,}_{\varepsilon }\) and since for \(y=1\)
we conclude that for any \(\varepsilon \in (0,1)\)
where \(\tau ^{\varepsilon }_{x}\) is the minimum of the first exit time of \(x+w_{t}+t\) from \((-1,1)\) and the first exit time of \(e^{-w_{t}-(1/2)t}\) from \((\varepsilon ,\varepsilon ^{-1})\). The latter tends to infinity as \(\varepsilon \downarrow 0\) and we obtain (2.17) from (2.19) and the fact that \(v=0\) at \(\pm 1\).
The reader might have noticed that the process given by (2.18) is degenerate. It shows why in Assumption 2.5 we require only \(\Pi (\check{x}_{t})\) to be uniformly nondegenerate.
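An identity of the type (2.17) can also be checked by simulation. Here is a sketch under the assumption that the running payoff is \(f\equiv 1\) (a hypothetical choice, made so that both sides equal \(E\,\tau _{x}=1-x^{2}\)): the plain expected exit time of \(x+w_{t}\) from \((-1,1)\) is compared with the Girsanov-weighted functional of the drifted process \(x+w_{t}+t\):

```python
import numpy as np

rng = np.random.default_rng(1)

# Girsanov-type check with running payoff f = 1 (hypothetical choice):
# then both sides equal E[tau_x] = 1 - x^2 for exit from (-1, 1).
x0, dt, t_max, n = 0.5, 1e-3, 6.0, 4000
n_steps = int(t_max / dt)

def simulate(drift):
    """Return (exit times, integral of e^{-w_t - t/2} dt up to exit)."""
    x = np.full(n, x0)
    w = np.zeros(n)                      # driving Wiener process
    tau = np.full(n, t_max)
    weighted = np.zeros(n)               # int_0^tau e^{-w_t - t/2} dt
    alive = np.ones(n, dtype=bool)
    for k in range(n_steps):
        t = k * dt
        weighted[alive] += np.exp(-w[alive] - 0.5 * t) * dt
        dw = np.sqrt(dt) * rng.standard_normal(n)
        w += dw
        x[alive] += dw[alive] + drift * dt
        exited = alive & (np.abs(x) >= 1.0)
        tau[exited] = t + dt
        alive &= ~exited
    return tau, weighted

tau_plain, _ = simulate(drift=0.0)       # x + w_t: payoff is E[tau]
_, weighted = simulate(drift=1.0)        # x + w_t + t with Girsanov weight
print(tau_plain.mean(), weighted.mean()) # both close to 1 - x0**2 = 0.75
```

Both estimates carry a small discretization bias, but they agree with each other and with \(1-x_{0}^{2}\) up to Monte Carlo error.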
3 Main results for the whole space
In this section we keep the assumptions of Sect. 2 apart from Assumptions 2.2 and 2.6 concerning the existence of the barrier functions \(G\) and \(\check{G}\) and take \(D=\mathbb{R }^{d}\). In case we encounter expressions like \(v(x_{\gamma })\) we set them to be zero on the event \(\{\gamma =\infty \}\). In the whole space we need the following.
Assumption 3.1
(i)
The functions \( c,f, \check{c}, \check{f}\) are bounded.
(ii)
For a constant \(\chi >0\) we have \(c^{\alpha \beta }(p,x ),\check{c}^{\alpha \beta }(p,\check{x} )\ge \chi \) for all \(\alpha ,\beta ,p,x\) and \(\check{x}\).
Notice that in this situation \(\tau ^{\alpha _{\cdot }\beta _{\cdot }x} =\infty \); however, \(\check{\tau }^{\alpha _{\cdot }\beta _{\cdot } \check{x}}\) may still be finite.
Theorem 3.1
Under the above assumptions all assertions of Theorems 2.1 and 2.3 hold true.
Proof
First we deal with Theorem 2.1. Take \(D=D_{n}=\{x:|x|<n\}\) and \(0\) in the original Theorem 2.1 in place of \(D\) and \(g\), respectively, and denote the function \(v\) thus obtained by \(v_{n}\). It is not hard to check that, due to the boundedness of \(f\) and the condition that \(c\ge \chi \), on any compact set \(\Gamma \subset \mathbb{R }^{d}\) we have \(v_{n}\rightarrow v\) uniformly as \(n\rightarrow \infty \). Furthermore, since the boundary of \(D_{n}\) is smooth, \(\sigma ,b,c\) are bounded, and \(a\) is uniformly nondegenerate, for each \(n\) there exists a global barrier \(G_{n}\) satisfying Assumption 2.2 with \(D_{n}\) in place of \(D\). Therefore, by Theorem 2.1, the \(v_{n}\) are continuous and so is \(v\).
For each \(n\ge m\ge 1\) we also have by Theorem 2.1 that
where \(\tau _{m}^{\alpha _{\cdot }\beta _{\cdot }x}\) is the first exit time of \(x_{t}^{\alpha _{\cdot }\beta _{\cdot }x}\) from \(D_{m}\). Since \(v_{n}\rightarrow v\) uniformly on \(\bar{D}_{m}\), we conclude that
Passing to the limit as \(m\rightarrow \infty \) proves our theorem in what concerns Theorem 2.1.
In case of Theorem 2.3 the argument is quite similar and we only comment on the existence of \(\check{G}_{n}\) satisfying Assumption 2.6 with \(D^{\check{\!\!\!}\,}_{n}= \{\check{x}\in D^{\check{\!\!\!}\,}:\Pi (\check{x})\in D_{n}\}\). Under obvious circumstances one can take \(\check{G}_{n}(\check{x}) =G_{n}(\Pi (\check{x}))\). In the general case one should construct \(G_{n}\) for operators with, perhaps, a smaller ellipticity constant and larger drift terms. The theorem is proved. \(\square \)
4 An auxiliary result
In this section \(D\) is not assumed to be bounded. We need a bounded continuous function \(\Psi \) on \(\bar{D}\) such that \(\Psi \ge 0\) in \(D\) and \(\Psi =0\) on \(\partial D\) (if \(\partial D\ne \emptyset \)). We assume that we are given two continuous \(\mathcal{F }_{t}\)-adapted processes \(x^{\prime }_{t}\) and \(x^{\prime \prime }_{t}\) in \(\mathbb{R }^{d}\) with \(x^{\prime }_{0},x^{\prime \prime }_{0}\in D\) (a.s.) and progressively measurable real-valued processes \(c^{\prime }_{t},c^{\prime \prime }_{t},f^{\prime }_{t},f^{\prime \prime }_{t}\). Suppose that \(c^{\prime },c^{\prime \prime }\ge 0\).
Define \(\tau ^{\prime }\) and \(\tau ^{\prime \prime }\) as the first exit times of \(x^{\prime }_{t}\) and \(x^{\prime \prime }_{t}\) from \(D\), respectively. Then introduce
and suppose that
Remark 4.1
According to Theorem 2.2.1 of [6] the above requirements about \(f\) and \(c\) are fulfilled if Assumption 2.1 is satisfied and we take \(x_{t}\) and \((f, c)\) with prime and double prime of the type
respectively, where \(\alpha _{\cdot }\in \mathfrak{A }, \beta _{\cdot }\in \mathfrak{B }\), and \(x\in \mathbb{R }^{d}\).
Finally set
Now comes our main assumption.
Assumption 4.1
The processes
are supermartingales.
Remark 4.2
Observe that Assumption 4.1 is satisfied under the assumptions of Theorem 2.1 if we take \(\Psi =G\) from Theorem 2.1 and other objects from Remark 4.1.
Indeed, by Itô’s formula
is a local supermartingale, where
Since it is nonnegative or constant, it is a supermartingale.
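The Itô computation behind Remark 4.2 can be sketched as follows. We assume here, as one common normalization of the barrier condition in Assumption 2.2, that \(L^{\alpha \beta }G\le 0\) in \(D\); the exact constant is immaterial for the supermartingale property:

```latex
% With \Psi = G and \phi'_t = \int_0^t c'_s\,ds, It\^o's formula gives
d\bigl(G(x'_{t})\,e^{-\phi'_{t}}\bigr)
  = e^{-\phi'_{t}}\bigl(L^{\alpha_{t}\beta_{t}}G - c'_{t}G\bigr)(x'_{t})\,dt
    + dm_{t},
% where m_t is a local martingale. Since L^{\alpha\beta}G \le 0, c' \ge 0,
% and G \ge 0, the drift term is nonpositive, so the stopped process
% G(x'_{t\wedge\tau'}) e^{-\phi'_{t\wedge\tau'}} is a nonnegative local
% supermartingale, hence a true supermartingale.
```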
Denote
and by replacing \(c\) with \(f\) define \(\Delta _{f}\).
Lemma 4.1
Introduce a constant \(M_{f}\) (perhaps \(M_{f}=\infty \)) such that for each \(t\ge 0\) (a.s.)
Then
where the last two terms can be dropped if \(\tau ^{\prime }=\tau ^{\prime \prime }\) (a.s.).
Proof
We have
where, owing to (4.1), Assumption 4.1, and the fact that \(\Psi \) is bounded and nonnegative, the last expectation is dominated by
Similar estimates hold for \(v^{\prime }\) and this shows how the last terms in (4.4) appear and when they disappear.
Next,
where
By using Fubini’s theorem it is easily seen that the last expectation above equals
which owing to (4.3) is less than \(M_{f}\Delta _{c}\). This proves the lemma. \(\square \)
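The Fubini step rests on the elementary inequality \(|e^{-a}-e^{-b}|\le |a-b|\,e^{-a\wedge b}\). A sketch in the notation of the lemma, with \(\phi '_{t},\phi ''_{t}\) denoting the discounting exponents built from \(c'\) and \(c''\):

```latex
% Difference of discount factors:
\bigl|e^{-\phi'_{t}}-e^{-\phi''_{t}}\bigr|
  \le \bigl|\phi'_{t}-\phi''_{t}\bigr|\, e^{-\phi'_{t}\wedge\phi''_{t}}
  \le \Bigl(\int_{0}^{t}\bigl|c'_{s}-c''_{s}\bigr|\,ds\Bigr)
      e^{-\phi'_{t}\wedge\phi''_{t}} .
% Multiplying by |f''_t|, integrating in t, and interchanging the order of
% integration (Fubini) leads to an expectation which condition (4.3)
% dominates by M_f \Delta_c.
```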
Remark 4.3
Assumption (4.3) is satisfied if, for instance, for each \(t\ge 0\)
Indeed, in that case the left-hand side of (4.3) is less than \(\Phi _{t}\) times the left-hand side of (4.5) just because \(\Phi _{t}\) is a decreasing function of \(t\).
This observation will be later used in conjunction with Theorem 2.2.1 of [6].
5 A general approximation result from above
In this section Assumption 2.1 (iv) about the uniform nondegeneracy as well as Assumption 2.2 concerning \(G\) are not used and the domain \(D\) is not supposed to be bounded.
We impose the following.
Assumption 5.1
-
(i)
Assumptions 2.1 (i) b), (ii) are satisfied.
-
(ii)
The functions \(c^{\alpha \beta }( x)\) and \(f^{\alpha \beta }( x)\) are bounded on \(A\times B \times \mathbb{R }^{d}\) and uniformly continuous with respect to \(x\in \mathbb{R }^{d}\) uniformly with respect to \(\alpha ,\beta \).
Set
and let \(A_{2}\) be a separable metric space having no common points with \(A_{1}\).
Assumption 5.2
The functions \( \sigma ^{\alpha \beta }( x), b^{\alpha \beta }( x), c^{\alpha \beta }( x)\), and \( f^{\alpha \beta }( x)\) are also defined on \(A_{2}\times B \times \mathbb{R }^{d}\) in such a way that they are independent of \(\beta \) (on \(A_{2}\times B \times \mathbb{R }^{d}\)) and Assumptions 2.1 (i) b), (ii) are satisfied with, perhaps, larger constants \(K_{0}\), \(K_{1}\) and, of course, with \(A_{2}\) in place of \(A\). The functions \( c^{\alpha \beta }( x)\) and \( f^{\alpha \beta }( x)\) are bounded on \(A_{2}\times B \times \mathbb{R }^{d}\).
Define
Then we introduce \(\hat{\mathfrak{A }}\) as the set of progressively measurable \(\hat{A}\)-valued processes and \(\hat{\mathbb{B }}\) as the set of \(\mathfrak{B }\)-valued functions \( {\varvec{\beta }}(\alpha _{\cdot })\) on \(\hat{\mathfrak{A }}\) such that, for any \(T\in [0,\infty )\) and any \(\alpha ^{1}_{\cdot }, \alpha ^{2}_{\cdot }\in \hat{\mathfrak{A }}\) satisfying
we have
Assumption 5.3
There exists a nonnegative bounded uniformly continuous in \(\bar{D}\) function \(G\in C^{2}_{loc}(D)\) such that \(G=0\) on \(\partial D\) (if \(D\ne \mathbb{R }^{d}\)) and
in \( D\) for all \(\alpha \in \hat{A}\) and \(\beta \in B\).
Here are a few consequences of Assumption 5.3.
Lemma 5.1
For any constant \(\chi \le (2\sup _{D}G)^{-1}\) and any \(\alpha _{\cdot }\in \hat{\mathfrak{A }}, \beta _{\cdot }\in \mathfrak{B }\), and \(x\in \bar{D}\) the process
where we use notation (4.2), is a supermartingale and
In particular, for any \(T\in [0,\infty )\)
Finally, for any stopping time \(\gamma \le \tau ^{\alpha _{\cdot }\beta _{\cdot } x}\)
The proof of this lemma is easily achieved by using Itô’s formula and the fact that \(L^{\alpha \beta }G+\chi G \le -1/2\) on \(D\) for all \(\alpha ,\beta \).
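The way the constant \(\chi \le (2\sup _{D}G)^{-1}\) enters can be sketched as follows (the discounting by \(\phi _{t}\) from notation (4.2) is suppressed for clarity):

```latex
% Since L^{\alpha\beta}G + \chi G \le -1/2 in D, It\^o's formula yields
d\bigl(e^{\chi t}G(x_{t})\bigr)
  = e^{\chi t}\bigl(L^{\alpha_{t}\beta_{t}}G+\chi G\bigr)(x_{t})\,dt + dm_{t}
  \le -\tfrac12\, e^{\chi t}\,dt + dm_{t},
% so e^{\chi(t\wedge\tau)} G(x_{t\wedge\tau})
%    + (1/2)\int_0^{t\wedge\tau} e^{\chi s}\,ds
% is a supermartingale. Taking expectations at a stopping time
% \gamma \le \tau and using \int_0^\gamma e^{\chi s}ds = (e^{\chi\gamma}-1)/\chi:
\frac{1}{2\chi}\,E\bigl(e^{\chi\gamma}-1\bigr)\le G(x),
\qquad\text{hence}\qquad
E\,e^{\chi\gamma}\le 1+2\chi\sup_{D}G\le 2 .
```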
Take a constant \(K\ge 0\) and set
where
Observe that
These definitions make sense owing to Lemma 5.1, which also implies that \(v^{\alpha _{\cdot }\beta _{\cdot } }_{K}\) and \(v^{\alpha _{\cdot }\beta _{\cdot } }\) are bounded in \(\bar{D}\).
Theorem 5.2
We have \(v_{K}\rightarrow v\) uniformly on \(\bar{D}\) as \(K\rightarrow \infty \).
We need the following in which \(\pi :\hat{A}\rightarrow A_{1}\) is a mapping defined as \(\pi \alpha =\alpha \) if \(\alpha \in A_{1}\) and \(\pi \alpha =\alpha ^{*}\) if \(\alpha \in A_{2}\), where \(\alpha ^{*}\) is a fixed point in \(A\).
Theorem 5.3
There exists a constant \(N\) depending only on \(K_{0},K_{1}\), and \(d\) such that for any \(\alpha _{\cdot }\in \hat{\mathfrak{A }}, \beta _{\cdot }\in \mathfrak{B }, x\in \mathbb{R }^{d}\), \(T\in [0,\infty )\), and stopping time \(\gamma \)
where
Proof
For simplicity of notation we drop the superscripts \(\alpha _{\cdot } ,\beta _{\cdot },x\). Observe that \(x_{t}\) and \(y_{t}\) satisfy
where \(\eta _{t}=I_{t}+J_{t}\),
By Theorem II.5.9 of [6] (where we replace the processes \(x_{t}\) and \(\tilde{x}_{t}\) with appropriately stopped ones) for any \(T\in [0,\infty )\) and any stopping time \(\gamma \)
where \(N\) depends only on \(K_{1}\) and \(d\), which by Theorem III.6.8 of [8] leads to
with the constant \(N\) being three times the one from (5.1).
By using Davis’s inequality we see that for any \(T\in [0,\infty )\)
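Davis's inequality is used here in its one-sided form for continuous local martingales; we record it as a reminder (the constant 3 is the one traditionally used in this context):

```latex
% Davis's inequality (one-sided form): for a continuous local martingale
% m_t with quadratic variation \langle m\rangle_t,
E\sup_{t\le T}|m_{t}| \;\le\; 3\,E\,\langle m\rangle_{T}^{1/2}.
```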
Furthermore, almost obviously
and this in combination with (5.2) proves the theorem. \(\square \)
Proof of Theorem 5.2
Without losing generality we may assume that \(g\in C^{3}(\mathbb{R }^{d})\) since the functions of this class uniformly approximate any \(g\) which is uniformly continuous in \(\mathbb{R }^{d}\). Then notice that by Itô’s formula and Lemma 5.1 for \(g\in C^{3}(\mathbb{R }^{d})\) we have
where
which is bounded and, for \((\alpha ,\beta )\in A\times B\), is uniformly continuous in \(x\) uniformly with respect to \(\alpha ,\beta \). This argument shows that without losing generality we may (and will) also assume that \(g=0\).
Next, since \(\mathfrak{A }\subset \hat{\mathfrak{A }}\) and for \(\alpha _{\cdot }\in \hat{\mathfrak{A }}\) and \({\varvec{\beta }}\in \hat{\mathbb{B }}\) we have \({\varvec{\beta }}(\alpha _{\cdot })\in \mathfrak{B }\), it holds that
To estimate \(v_{K}\) from above, take \({\varvec{\beta }}\in \mathbb{B }\) and define \(\hat{{\varvec{\beta }}} \in \hat{\mathbb{B }}\) by
Also take any sequence \(x^{n} \in \bar{D}, n=1,2,...\), and find a sequence \(\alpha ^{n}_{\cdot }\in \hat{\mathfrak{A }}\) such that
where
It follows from Lemma 5.1 that there is a constant \(N\) independent of \(n\) and \(K\) such that \(|v^{\alpha ^{n}_{\cdot }\hat{{\varvec{\beta }}} (\alpha ^{n}_{\cdot })}(x^{n})|\le N, |v|\le N, v_{K}\ge v\ge -N\) and we conclude from (5.3) that for any \(T\in [0,\infty )\) and
we have
where and below in the proof by \(N\) we denote constants which may change from one occurrence to another and are independent of \(n, K\), and \(T\).
Next, introduce
define \(\gamma ^{n}\) as the first exit time of \(y^{n}_{t}\) from \(D\), and, with the aim of applying Lemma 4.1, observe that by identifying \(x^{n}_{t},y^{n}_{t},\tau ^{n},\gamma ^{n}\) and the objects related to them with \(x^{\prime }_{t},x^{\prime \prime }_{t},\tau ^{\prime },\tau ^{\prime \prime }\) and the objects related to them, respectively, we have
Hence for any \(T\in (0,\infty )\)
where \(W_{c }\) is the modulus of continuity of \(c \) and
By virtue of (5.4) we have \(I_{n}\le Ne^{NT }/K\) and \( J_{n }\le N e^{-\chi T}\) by Lemma 5.1, say with \(\chi = (2\sup _{D}G)^{-1}\). Therefore,
A similar estimate holds if we replace \(c\) with \(f\).
As long as the last terms in (4.4) are concerned, observe that
where \(W_{G}\) is the modulus of continuity of \(G\) and
with the second inequality following from Lemma 5.1.
Finally, in light of Lemma 5.1 one can take \(M_{f}\) in Lemma 4.1 to be a constant \(N\) independent of \(n\) and \(K\) and then by applying Lemma 4.1 we conclude from (5.3) that
where \(W(r)\) is a bounded function such that \(W(r)\rightarrow 0\) as \(r\downarrow 0\).
This result, (5.4), and Theorem 5.3 imply that, for any \(T\),
where \(w(T,K)\) is independent of \(n\) and \(w(T,K)\rightarrow 0\) as \(K\rightarrow \infty \) for any \(T\). Hence
Owing to the arbitrariness of \({\varvec{\beta }}\in \mathbb{B }\) we have
and the arbitrariness of \(x^{n}\) yields
which leads to the desired result after first letting \(K\rightarrow \infty \) and then \(T\rightarrow \infty \). The theorem is proved. \(\square \)
6 A general approximation result from below
As in Sect. 5, Assumption 2.1 (iv) about the uniform nondegeneracy as well as Assumption 2.2 concerning \(G\) are not used and the domain \(D\) is not supposed to be bounded.
However, we suppose that Assumption 5.1 is satisfied. Here we allow \(\beta \) to change in a larger set penalizing using controls other than initially available.
Set
and let \(B_{2}\) be a separable metric space having no common points with \(B_{1}\).
Assumption 6.1
The functions \( \sigma ^{\alpha \beta }( x), b^{\alpha \beta }(x), c^{\alpha \beta }(x)\), and \( f^{\alpha \beta }(x)\) are also defined on \(A\times B_{2} \times \mathbb{R }^{d}\) in such a way that they are independent of \(\alpha \) (on \(A\times B_{2} \times \mathbb{R }^{d}\)) and Assumptions 2.1 (i) b), (ii) are satisfied with, perhaps, larger constants \(K_{0}\) and \(K_{1}\) and, of course, with \(B_{2}\) in place of \(B\). The functions \( c^{\alpha \beta }(x)\) and \( f^{\alpha \beta }(x)\) are bounded on \(A\times B_{2} \times \mathbb{R }^{d}\).
Define
Then we introduce \(\hat{\mathfrak{B }}\) as the set of progressively measurable \(\hat{B}\)-valued processes and \(\hat{\mathbb{B }}\) as the set of \(\hat{\mathfrak{B }}\)-valued functions \( {\varvec{\beta }}(\alpha _{\cdot })\) on \( \mathfrak{A }\) such that, for any \(T\in [0,\infty )\) and any \(\alpha ^{1}_{\cdot }, \alpha ^{2}_{\cdot }\in \mathfrak{A }\) satisfying
we have
Assumption 6.2
There exists a nonnegative bounded uniformly continuous in \(\bar{D}\) function \(G\in C^{2}_{loc}(D)\) such that \(G=0\) on \(\partial D\) (if \(D\ne \mathbb{R }^{d}\)) and
in \( D\) for all \(\alpha \in A\) and \(\beta \in \hat{B}\).
Take a constant \(K\ge 0\) and set
where
We reiterate that
These definitions make sense by the same reason as in Sect. 5.
Theorem 6.1
We have \(v_{-K}\rightarrow v\) uniformly on \(\bar{D}\) as \(K\rightarrow \infty \).
Proof
As in the proof of Theorem 5.2 we may assume that \(g=0\). Then since \(\mathbb{B }\subset \hat{\mathbb{B }}\) we have that \(v_{-K}\le v\). To estimate \(v_{-K}\) from below take any sequence \(x^{n}\in \bar{D}\) and find a sequence \({\varvec{\beta }}^{n}\in \hat{\mathbb{B }}\) such that
Since the last supremum is certainly greater than a negative constant independent of \(n\) plus
where \(\bar{c}\) is the same as in Sect. 5, we conclude that
Next, introduce \(\pi \beta \) similarly to how \(\pi \alpha \) was introduced and find a sequence of \(\alpha ^{n}_{\cdot }\in \mathfrak{A }\) such that
By using (6.1) and arguing as in the proof of Theorem 5.2 one proves that
tends to zero as \(n\rightarrow \infty \). This leads to the desired result since
The theorem is proved. \(\square \)
7 Versions of Theorems 5.2 and 6.1 for the uniformly nondegenerate case and proof of Theorem 2.3
In Theorem 7.1 below we suppose that Assumptions 2.1 (i) b), (ii) are satisfied and the domain \(D\) is bounded. We also take extensions of \(\sigma ,b,c\) and \(f\) as in Sects. 5 and 6 satisfying Assumptions 5.2 and 6.1 and additionally require the extended \(\sigma ^{\alpha \beta }\) to also satisfy Assumption 2.1 (iv), perhaps with a different constant \(\delta \).
Finally, we suppose that Assumptions 5.3 and 6.2 are satisfied.
Then take \(\gamma \) and \(\lambda \) as in Sect. 5 (and Sect. 6) and introduce the functions \(v_{\pm K}\) and \(v\) as in Sects. 5 and 6.
Theorem 7.1
We have \(v_{\pm K}\rightarrow v\) uniformly on \(\bar{D}\) as \(K\rightarrow \infty \).
Proof
For \(\varepsilon >0\) we construct \(v_{\varepsilon ,\pm K}(x)\) and \(v_{\varepsilon }(x)\) from \(\sigma ,b,c^{(\varepsilon )},f^{(\varepsilon )}\) (mollifying only the original \(c,f\) and not their extensions) and \(g\) in the same way as \(v_{\pm K}\) and \(v\) were constructed from \(\sigma ,b,c, f\), and \(g\). By Theorems 5.2 and 6.1 we have \(v_{\varepsilon ,\pm K}\rightarrow v_{\varepsilon }\) uniformly on \(\bar{D}\) as \(K\rightarrow \infty \) for any \(\varepsilon >0\).
Therefore, we only need to show that \(|v_{\varepsilon ,\pm K}- v_{\pm K}|+|v_{\varepsilon }- v |\le W(\varepsilon )\), where \(W(\varepsilon )\) is independent of \(K\) and tends to zero as \(\varepsilon \downarrow 0\). However, by Theorem 2.2.1 of [6] and Lemma 4.1 (see also Remarks 4.1 and 4.2)
This proves the theorem. \(\square \)
In the remaining part of the section the assumptions of Theorem 2.3, that is, all the assumptions stated in Sect. 2, are supposed to be satisfied.
Proof of Theorem 2.3
For obvious reasons while proving the inequalities (2.13) and (2.15) in assertions (i) and (ii) we may assume that \(g\in C^{2}(\mathbb{R }^{d})\).
(i) First suppose that \(D\in C^{2}\). By Theorem 1.1 of [9] there is a set \(A_{2}\) and bounded continuous functions \(\sigma ^{\alpha }=\sigma ^{\alpha \beta }, b^{\alpha }=b^{\alpha \beta }, c^{\alpha }=c^{\alpha \beta }\) (independent of \(x\) and \(\beta \)), and \(f^{\alpha \beta }\equiv 0\) defined on \(A_{2}\) such that Assumption 2.1 (iv) about the uniform nondegeneracy of \(a^{\alpha }=a^{\alpha \beta } =(1/2)\sigma ^{\alpha }(\sigma ^{\alpha })^{*}\) is satisfied for \(\alpha \in A_{2}\) (perhaps with a different constant \(\delta >0\)) and such that for any \(K\ge 0\) the equation (the following notation is explained below)
(a.e.) in \(D\) with boundary condition \(u=g\) on \(\partial D\) has a unique solution
(recall Assumption 2.3 (iv) and that \(g\in C^{2}(\mathbb{R }^{d})\)). Here
Observe that
where the first equality follows from the definition of \(H[u]\), (7.3), and the fact that \(L^{\alpha \beta }\) is independent of \(\beta \) for \(\alpha \in A_{2}\).
We set \(u_{K}(x)=g(x)\) if \(x\not \in D\).
Since \(D\) is sufficiently regular by assumption, there exists a sequence \(u^{n}(x)\) of functions of class \(C^{2}(\bar{D})\), which converge to \(u_{K}\) as \(n\rightarrow \infty \) uniformly in \(\bar{D}\) and in \(W^{2}_{p}(D)\) for any \(p\ge 1\). Hence, by Theorem 4.2 of [10] we have
By Theorem 7.1 we have that \(u_{K}\rightarrow v\) uniformly on \(\bar{D}\) and, since they coincide outside \(D\), the convergence is uniform on \(\mathbb{R }^{d}\). In particular,
On the other hand by assumption, (7.1), and (7.2) we have \(\check{H}[\psi u_{K}]\le 0\) (a.e. \(D^{\check{\!\!\!}\,}\)). We also know that \(u_{K}\ge v\) and, in particular, \(\psi u_{K}\ge \check{v}\) on \(\partial D^{\check{\!\!\!}\,}\). Furthermore, \(\psi u^{n}\in C^{2}(\bar{D^{\check{\!\!\!}\,}})\), the \(\psi u^{n}\) are \(p\)-insensitive by Assumption, and, for each \(n\), the second-order derivatives of \(\psi u^{n}\) are uniformly continuous in \(\bar{D^{\check{\!\!\!}\,}}\) (because of our assumptions on \(\Pi \) and \(\psi \)). Also \(\psi u^{n}\) converge to \(\psi u_{K}\) as \(n\rightarrow \infty \) uniformly in \(\bar{D^{\check{\!\!\!}\,}}\) and, as is easy to see, for any \(\check{x}\in D^{\check{\!\!\!}\,}\)
where, as always, \(u_{K}(\check{x})=u_{K}(\Pi (\check{x}))\) and \(u^{n}(\check{x})=u^{n}(\Pi (\check{x}))\) and the constant \(N\) depends only on \(\Vert \psi , \Pi \Vert _{C^{1,1}}, d\), and \(k\). By Assumption 2.5 (iii) and Theorem 2.2.1 of [6] the last expression tends to zero as \(n\rightarrow \infty \) uniformly with respect to \(\alpha _{\cdot }\in \mathfrak{A }\) and \(\beta _{\cdot }\in \mathfrak{B }\). We also recall that the remaining parts of Assumption 2.5 are imposed and this allows us to apply Theorem 4.1 of [10] and conclude that \(\psi u_{K}\ge \check{v}\), which after setting \(K\rightarrow \infty \) yields \(\psi v\ge \check{v}\). Theorem 4.1 of [10] also says that
By letting \(K\rightarrow \infty \) in (7.5) and using the uniform convergence of \(u_{K}\) to \(v\) we easily get the desired result in our particular case of smooth \(D\).
So far we have not used the assumption concerning the boundary behavior of \(G\) and \(\check{G}\), which we need now to deal with the case of general \(D\). Take an expanding sequence of smooth domains \(D_{n}\subset D\) such that \(D=\bigcup D_{n}\) and construct the functions \(v^{n}\) in the same way as \(v\) by replacing \(D\) with \(D_{n}\). We extend \(v^{n}\) to \(\mathbb{R }^{k}\) as in (2.11).
Also construct \(\check{v}^{n}\) by replacing \(D^{\check{\!\!\!}\,}\) with
and the boundary data \(\psi v^{n}\) in place of \(\psi v\), that is
where \(\check{\tau }^{\alpha _{\cdot }\beta _{\cdot }\check{x}}(n)\) is the first exit time of \(\check{x}_{t}^{\alpha _{\cdot }\beta _{\cdot }\check{x}}\) from \(D^{\check{\!\!\!}\,}_{n}\). Then by the above we have that
and
We now claim that, as \(n\rightarrow \infty \),
That (7.8) holds is proved in [10] (see Sect. 6 there). Owing to (7.8) to prove (7.9) it suffices to show that uniformly in \(\mathbb{R }^{k }\) (notice the replacement of \(v^{n}\) by \(v\))
(recall that \(\check{\tau }^{\alpha _{\cdot }\beta _{\cdot }\check{x}}\) is the first exit time of \(\check{x}_{t}^{\alpha _{\cdot }\beta _{\cdot }\check{x}}\) from \(D^{\check{\!\!\!}\,}\)). Both sides of (7.10) coincide if \(\check{x}\not \in D^{\check{\!\!\!}\,}\). Therefore, we need to prove the uniform convergence only in \(D^{\check{\!\!\!}\,}\).
Here \(v\in C(\bar{D})\) and it is convenient to prove (7.10) just for any such \(v\), regardless of its particular construction. In that case, relying on Assumption 2.7, as in the proof of Theorem 2.2 of [10] (see Section 6 there), we reduce our problem to proving that uniformly in \(D^{\check{\!\!\!}\,}\)
with a perhaps modified \(\check{f}\) still satisfying (2.7) and (2.8) and satisfying Assumptions (i), (ii) with a modified \(\bar{f}_{\varepsilon }\).
For \(\varepsilon >0\) introduce
and observe that
where
where
By Assumption 2.5 (ii) we have that \(J_{n}(x)\rightarrow 0\) as \(\varepsilon \downarrow 0\) uniformly in \(D^{\check{\!\!\!}\,}\) (this is the only place where we use the uniformity in (2.10)). Furthermore, by Lemma 5.1 of [10]
As is easy to check, \(\Pi (\partial D^{\check{\!\!\!}\,}_{n}) \subset \partial D_{n}\), so that, if we have a sequence of points \(\check{x}_{n}\in \partial D^{\check{\!\!\!}\,}_{n}\), then \(\mathrm{dist}\,(\Pi (\check{x}_{n}),\partial D)\rightarrow 0\) as \(n\rightarrow \infty \). It follows by Assumption that the right-hand side of (7.11) goes to zero as \(n\rightarrow \infty \). This proves that \(I_{n}(x)\rightarrow 0\) uniformly in \(D^{\check{\!\!\!}\,}\), yields (7.10) and (7.9), and along with (7.8) and (7.6) proves that \(\psi v\ge \check{v}\). One passes to the limit in (7.7) similarly, and this finally brings the proof of assertion (i) to an end.
(ii) As above first suppose that \(D\in C^{2}\). By Theorem 1.3 of [9] there is a set \(B_{2}\) and bounded continuous functions \(\sigma ^{\beta }=\sigma ^{\alpha \beta }, b^{\beta }=b^{\alpha \beta }, c^{\beta }=c^{\alpha \beta }\) (independent of \(x\) and \(\alpha \)), and \(f^{\alpha \beta }\equiv 0\) defined on \(B_{2}\) such that Assumption 2.1 (iv) about the uniform nondegeneracy of \(a^{\beta }=a^{\alpha \beta } =(1/2)\sigma ^{\beta }(\sigma ^{\beta })^{*}\) is satisfied for \(\beta \in B_{2}\) (perhaps with a different constant \(\delta >0\)) and such that for any \(K\ge 0\) the equation (the following notation is explained below)
(a.e.) in \(D\) with boundary condition \(u=g\) on \(\partial D\) has a unique solution
Here
We introduce
and note that
After that it suffices to repeat the above proof relying again on Theorem 7.1.
The theorem is proved. \(\square \)
8 Proof of Theorems 2.1 and 2.2
Here all assumptions of Theorem 2.1 are supposed to be satisfied.
Proof of Theorem 2.1
If the functions \((c,f)^{\alpha \beta }(x)\) are bounded on \(A\times B\times \mathbb{R }^{d}\), then according to Remark 2.4 assertion (ii) of Theorem 2.1 follows immediately from Theorem 2.3. The continuity of \(v\) also follows from the proof of Theorem 2.3.
In the general case, for \(\varepsilon >0\), define
and construct \(v_{\varepsilon }(x)\) from \(\sigma ,b,c_{\varepsilon }, f_{ \varepsilon }\), and \(g\) in the same way as \(v\) was constructed from \(\sigma ,b,c, f\), and \(g\). By the above (2.6) holds if we replace \(f\) and \(c\) with \(f_{\varepsilon }\) and \(c_{\varepsilon }\) respectively.
We first take \(\lambda \equiv 0 \) and \(\gamma ^{\alpha _{\cdot }\beta _{\cdot }} =\tau ^{\alpha _{\cdot }\beta _{\cdot }x}\) in the counterpart of (2.6) corresponding to \(v_{\varepsilon }\). Then by Theorem 2.2.1 of [6] and Lemma 4.1 (see also Remarks 4.1 and 4.2)
It follows by Assumption 2.1 (iii) that \(v_{\varepsilon }\rightarrow v\) uniformly on \(\bar{D}\) and \(v\) is continuous in \(\bar{D}\). After that we easily pass to the limit in the counterpart of (2.6) corresponding to \(v_{\varepsilon }\) for arbitrary \(\lambda \) and \(\gamma \) again on the basis of Lemma 4.1. The theorem is proved. \(\square \)
Proof of Theorem 2.2
We know from [9] (see Remark 1.3 there) that \(u_{K}\) introduced in the proof of Theorem 2.3 (see Sect. 7) satisfies an elliptic equation
where \((a^{K}_{ij})\) satisfies the uniform nondegeneracy condition (see Assumption 2.1 (iv)) with a constant \(\delta _{1}=\delta _{1}(\delta ,d)>0\), \(|b^{K}|\) and \(c^{K}\) are bounded by a constant depending only on \(K_{0}, \delta \), and \(d\), \(c^{K}\ge 0\), and
Then according to classical results (see, for instance, [3] or [7]) there exists a constant \(\theta \in (0,1)\) depending only on \(\delta _{1}\) and \(d\), that is on \(\delta \) and \(d\), such that for any subdomain \(D^{\prime }\subset \bar{D^{\prime }} \subset D\) and \(x,y\in D^{\prime }\) we have
where \(N\) depends only on \(\delta , d\), the distance between the boundaries of \(D^{\prime }\) and \(D\), on the diameter of \(D\), and on \(K_{0}\). It is seen that (8.1) will be preserved as we let \(K \rightarrow \infty \) and then perform all other steps in the above proof of Theorem 2.1 which will lead us to the desired result. The theorem is proved. \(\square \)
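The interior estimate invoked here is of the standard Krylov–Safonov type; a sketch of the form such an estimate takes, in the notation of the proof:

```latex
% Interior H\"older estimate of Krylov--Safonov type for u_K:
|u_{K}(x)-u_{K}(y)| \;\le\; N\,|x-y|^{\theta},
\qquad x,y\in D',\quad \bar{D'}\subset D,
% with \theta \in (0,1) depending only on \delta and d, and N depending
% only on \delta, d, K_0, the distance between the boundaries of D' and D,
% and the diameter of D.
```

Since \(\theta \) and \(N\) do not depend on \(K\), the estimate survives the passage to the limit \(K\rightarrow \infty \) described in the text.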
References
Buckdahn, R., Li, J.: Stochastic differential games and viscosity solutions of Hamilton-Jacobi-Bellman-Isaacs equations. SIAM J. Control Optim. 47(1), 444–475 (2008)
Fleming, W.H., Souganidis, P.E.: On the existence of value functions of two-player, zero-sum stochastic differential games. Indiana Univ. Math. J. 38(2), 293–314 (1989)
Gilbarg, D., Trudinger, N.S.: Elliptic Partial Differential Equations of Second Order. Classics in Mathematics. Springer, Berlin (2001)
Kovats, J.: Value functions and the Dirichlet problem for Isaacs equation in a smooth domain. Trans. Am. Math. Soc. 361(8), 4045–4076 (2009)
Krylov, N.V.: On a problem with two free boundaries for an elliptic equation and optimal stopping of a Markov process. Dokl. Akad. Nauk SSSR 194(6), 1263–1265 (1970) (in Russian); English translation: Soviet Math. Dokl. 11(5), 1370–1372 (1970)
Krylov, N.V.: Controlled Diffusion Processes. Nauka, Moscow (1977) (in Russian); English translation: Springer, Berlin (1980)
Krylov, N.V.: Nonlinear Elliptic and Parabolic Equations of Second Order. Nauka, Moscow (1985) (in Russian); English translation: Reidel, Dordrecht (1987)
Krylov, N.V.: Introduction to the theory of diffusion processes. Am. Math. Soc, Providence, RI (1995)
Krylov, N.V.: On the existence of smooth solutions for fully nonlinear elliptic equations with measurable “coefficients” without convexity assumptions. Methods Appl. Anal. 19(2), 119–146 (2012)
Krylov, N.V.: On the dynamic programming principle for uniformly nondegenerate stochastic differential games in domains. Stoch. Proc. Appl. (to appear)
Krylov, N.V.: On regularity properties and approximations of value functions for stochastic differential games in domains, http://arxiv.org/abs/1207.3758
Nisio, M.: Stochastic differential games and viscosity solutions of Isaacs equations. Nagoya Math. J. 110, 163–184 (1988)
Świȩch, A.: Another approach to the existence of value functions of stochastic differential games. J. Math. Anal. Appl. 204(3), 884–897 (1996)
Additional information
The author was partially supported by NSF Grant DMS-1160569.
Krylov, N.V. On the dynamic programming principle for uniformly nondegenerate stochastic differential games in domains and the Isaacs equations. Probab. Theory Relat. Fields 158, 751–783 (2014). https://doi.org/10.1007/s00440-013-0495-y