1 Introduction

The dynamic programming principle is one of the basic tools in the theory of controlled diffusion processes. It seems to the author that Fleming and Souganidis [2] were the first to prove the dynamic programming principle with nonrandom stopping times for stochastic differential games in the whole space on a finite time horizon. They used rather involved technical constructions to overcome certain measure-theoretic difficulties, a technique somewhat resembling the one in Nisio [12], along with the theory of viscosity solutions.

In [4] Kovats considers time-homogeneous stochastic differential games in a “weak” formulation in smooth domains and proves the dynamic programming principle, again with nonrandom stopping times. He uses approximations of policies by piecewise constant ones and proceeds similarly to [12].

Świȩch in [13] reverses the arguments in [2] and proves the dynamic programming principle for time-homogeneous stochastic differential games in the whole space with constant stopping times “directly” from knowing that the viscosity solutions exist. His method is quite similar to the so-called verification principle from the theory of controlled diffusion processes.

It is also worth mentioning the paper [1] by Buckdahn and Li where the dynamic programming principle for constant stopping times in the time-inhomogeneous setting in the whole space is derived by using the theory of backward–forward stochastic equations.

In this paper we deal only with the dynamic programming principle for stochastic differential games and its relation to the corresponding Isaacs equations. Concerning all other aspects of the theory of stochastic differential games we refer the reader to [1, 2, 4, 12], and [13], and the references therein.

In [10] we adopted the strategy of Świȩch [13], which is based on the fact that in many cases the Isaacs equation has a sufficiently regular solution. In [13] viscosity solutions are used, whereas we relied on classical ones. In the present article no assumptions are made on the solvability of Isaacs equations. Here we use a very general result of [9] (see Theorem 1.1 there) about the solvability of approximating Isaacs equations, together with our Theorem 5.2, which implies that the solutions of the approximating equations approximate the value function in the original problem. Then we basically pass to the limit in the formulas obtained in [10].

The main emphasis of [2, 4, 12], and [13] is on proving that the value functions of stochastic differential games are viscosity solutions of the corresponding Isaacs equations; the dynamic programming principle is used there just as a tool to that end. In our setting the zeroth-order coefficient and the running payoff function can be just measurable, and in this situation neither our methods nor the methods based on the notion of viscosity solution seem to be of much help in characterizing the value function as a viscosity solution.

Our main future goal is to develop tools which would allow us in a subsequent article to show that the value functions are of class \(C^{0,1}\), provided that the data belong to that class, for possibly degenerate stochastic differential games without assuming that the zeroth-order coefficient is sufficiently large negative (see [11]). On the way to this goal one of the main steps, apart from proving the dynamic programming principle, consists of proving certain representation formulas like the ones in Theorems 3.2 and 3.3 of [10], in which the process is not assumed to be uniformly nondegenerate. The next important ingredient consists of the approximation results stated as Theorem 5.2, again for possibly degenerate processes. By combining Theorem 1.1 of [9] with Theorems 3.2 and 3.3 of [10] and our Theorem 5.2, we then come to one of the main results of the present article, Theorem 2.1, about the dynamic programming principle in a very general form including stopping and randomized stopping times. Speaking somewhat informally, using randomized stopping times is equivalent to introducing a nonnegative random process which, multiplied by \(dt\), gives the probability that we stop the process on the time interval \((t,t+dt)\) given that it was not stopped before.
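The informal description of randomized stopping can be illustrated by a small simulation, a sketch under our own assumptions (the constant intensity and all names below are illustrative and not part of the paper): stopping on \((t,t+dt)\) with conditional probability \(\lambda _{t}\,dt\) amounts to stopping once \(\int _{0}^{t}\lambda _{s}\,ds\) exceeds an independent standard exponential threshold.

```python
import math
import random

def randomized_stopping_time(lam, dt=1e-3, t_max=10.0, rng=random):
    """Simulate a randomized stopping time with intensity lam(t) >= 0.

    Given survival up to t, the process is stopped on (t, t + dt) with
    probability lam(t) * dt; equivalently, we stop once the integrated
    intensity reaches an independent Exp(1) threshold.
    """
    threshold = -math.log(rng.random())  # standard exponential variable
    integral, t = 0.0, 0.0
    while t < t_max:
        integral += lam(t) * dt
        if integral >= threshold:
            return t
        t += dt
    return t_max  # not stopped before the horizon

# With constant intensity lam == 2 the stopping time is Exp(2),
# so the empirical mean should be close to 1/2.
random.seed(0)
times = [randomized_stopping_time(lambda t: 2.0) for _ in range(2000)]
mean_time = sum(times) / len(times)
```

A nonconstant \(\lambda _{t}\) (for instance one depending on the simulated path) is handled by the same threshold construction.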

In Theorem 2.2 we assert the Hölder continuity of the value function in our case where the zeroth-order coefficient and the running payoff function can be discontinuous.

Theorem 2.1 concerns time-homogeneous stochastic differential games, unlike the time-inhomogeneous ones in [2], and generalizes the corresponding results of [13] and [4], where, however, the degenerate case is not excluded.

Our Theorem 2.3 shows that the value function is uniquely defined by the corresponding Isaacs equation and is independent of the way the equation is represented as \(\sup \inf \) of linear operators (provided that they satisfy our basic assumptions). This fact in a somewhat more restricted situation is also noted in Remark 2.4 of [13].

The article is organized as follows. In Sect. 2 we state our main results, among which, as we pointed out implicitly above, is Theorem 5.2. In Sect. 3 we give a version of Theorem 2.1 for the whole space. Then in Sect. 4 we prove a very simple result allowing us to compare the value functions corresponding to different data.

Sections 5 and 6 are devoted to deriving approximation results. In Sect. 5 we consider approximations from above, whereas in Sect. 6 from below. The point is that we know from [9] that one can slightly modify the underlying Isaacs equation in such a way that the modified equation has rather smooth solutions. These smooth solutions are shown to coincide with the corresponding value functions, which in addition satisfy the dynamic programming principle, and the goal of Sects. 5 and 6 is to show that when the modification “fades away” we obtain the dynamic programming principle for the original value function. Theorem 5.2 is proved for the case in which the process can degenerate. Its version for the uniformly nondegenerate case is given in Sect. 7, where we also prove Theorem 2.3 about the characterization of the value function by the Isaacs equation. In the final short Sect. 8 we combine the previous results and prove Theorems 2.1 and 2.2.

The author is sincerely grateful for the referee’s comments, which allowed him to clear up some obscure places.

2 Main results for bounded domains

Let \(\mathbb{R }^{d}=\{x=(x_{1},\ldots ,x_{d})\}\) be a \(d\)-dimensional Euclidean space and let \(d_{1}\ge d\) be an integer. Assume that we are given separable metric spaces \(A\) and \(B\) and let, for each \(\alpha \in A\) and \(\beta \in B\), the following functions on \(\mathbb{R }^{d}\) be given:

  1. (i)

    \(d\times d_{1}\) matrix-valued \(\sigma ^{\alpha \beta }( x) = \left( \sigma ^{\alpha \beta }_{ij}( x)\right) \),

  2. (ii)

    \(\mathbb{R }^{d}\)-valued \(b^{\alpha \beta }( x)= \left( b^{\alpha \beta }_{i }(x)\right) \), and

  3. (iii)

    real-valued functions \(c^{\alpha \beta }( x) , f^{\alpha \beta }( x) \), and \(g(x)\).

Take a \(\zeta \in C^{\infty }_{0}(\mathbb{R }^{d})\) with unit integral and for \(\varepsilon >0\) introduce \(\zeta _{\varepsilon }(x)=\varepsilon ^{-d}\zeta (x/\varepsilon )\). For locally summable functions \(u=u(x)\) on \(\mathbb{R }^{d}\) define

$$\begin{aligned} u^{(\varepsilon )}(x)=u*\zeta _{\varepsilon }(x). \end{aligned}$$
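Numerically, the mollification \(u^{(\varepsilon )}=u*\zeta _{\varepsilon }\) can be sketched on a grid; the one-dimensional example below uses an assumed smooth compactly supported bump for \(\zeta \) and is only an illustration of the convolution, not part of the setting:

```python
import numpy as np

def zeta(x):
    """A smooth bump supported in (-1, 1); rows are normalized below."""
    z = np.zeros_like(x, dtype=float)
    m = np.abs(x) < 1.0
    z[m] = np.exp(-1.0 / (1.0 - x[m] ** 2))
    return z

def mollify(u_vals, grid, eps):
    """Approximate u^(eps)(x) = int u(y) zeta_eps(x - y) dy on a uniform grid."""
    h = grid[1] - grid[0]
    w = zeta((grid[:, None] - grid[None, :]) / eps)   # zeta((x - y)/eps), up to scaling
    w = w / (w.sum(axis=1, keepdims=True) * h)        # enforce unit integral per row
    return (w * u_vals[None, :]).sum(axis=1) * h

grid = np.linspace(-3.0, 3.0, 601)
u = np.sign(grid)                # a discontinuous, locally summable function
u_eps = mollify(u, grid, eps=0.2)
```

The smoothed values interpolate between \(-1\) and \(1\) across a layer of width of order \(\varepsilon \), while far from the discontinuity \(u^{(\varepsilon )}\) agrees with \(u\).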

Assumption 2.1

  1. (i)

    a) All the above functions are continuous with respect to \(\beta \in B\) for each \((\alpha ,x)\) and continuous with respect to \(\alpha \in A\) uniformly with respect to \(\beta \in B\) for each \(x\). b) These functions are Borel measurable functions of \((\alpha ,\beta ,x)\), the function \(g(x)\) is bounded and uniformly continuous on \(\mathbb{R }^{d}\), and \(c^{\alpha \beta }\ge 0\).

  2. (ii)

    For any \(x \in \mathbb{R }^{d}\)

    $$\begin{aligned} \sup _{(\alpha ,\beta ) \in A\times B }( | c^{\alpha \beta }|+| f^{\alpha \beta }|)( x)<\infty , \end{aligned}$$
    (2.1)

    and for any \(x,y\in \mathbb{R }^{d}\) and \((\alpha ,\beta )\in A\times B \)

    $$\begin{aligned}&\Vert \sigma ^{\alpha \beta }( x)-\sigma ^{\alpha \beta }( y)\Vert \le K_{1}|x-y|, \quad |b^{\alpha \beta }( x)-b^{\alpha \beta }( y) |\le K_{1}|x-y|,\\&\quad \Vert \sigma ^{\alpha \beta }( x )\Vert ,|b^{\alpha \beta }( x )| \le K_{0}, \end{aligned}$$

    where \(K_{0}\) and \(K_{1}\) are some fixed constants.

  3. (iii)

    For any bounded domain \(D\subset \mathbb{R }^{d}\) we have

    $$\begin{aligned} \Vert \sup _{(\alpha ,\beta )\in A\times B }| f^{\alpha \beta } |\,\Vert _{L_{d}(D)}&+ \Vert \sup _{(\alpha ,\beta )\in A\times B } c^{\alpha \beta } \,\Vert _{L_{d}(D)}<\infty ,\\ \Vert \sup _{(\alpha ,\beta )\in A\times B }| f^{\alpha \beta }&- ( f^{\alpha \beta })^{(\varepsilon )}|\,\Vert _{L_{d}(D)}\rightarrow 0,\\ \Vert \sup _{(\alpha ,\beta )\in A\times B }| c^{\alpha \beta }&- ( c^{\alpha \beta })^{(\varepsilon )}|\,\Vert _{L_{d}(D)}\rightarrow 0, \end{aligned}$$

    as \(\varepsilon \downarrow 0\).

  4. (iv)

    There is a constant \(\delta \in (0,1]\) such that for \(\alpha \in A, \beta \in B\), and \(x,\lambda \in \mathbb{R }^{d}\) we have

    $$\begin{aligned} \delta |\lambda |^{2}\le a^{\alpha \beta }_{ij}( x)\lambda _{i} \lambda _{j}\le \delta ^{-1}|\lambda |^{2}, \end{aligned}$$

where \(a^{\alpha \beta }=(a^{\alpha \beta }_{ij})=(1/2) \sigma ^{\alpha \beta }(\sigma ^{\alpha \beta })^{*}\).

The reader understands, of course, that the summation convention is adopted throughout the article.

Let \((\Omega ,\mathcal{F },P)\) be a complete probability space, let \(\{\mathcal{F }_{t},t\ge 0\}\) be an increasing filtration of \(\sigma \)-fields \(\mathcal{F }_{t}\subset \mathcal{F }\) such that each \(\mathcal{F }_{t}\) is complete with respect to \(\mathcal{F },P\), and let \(w_{t},t\ge 0\), be a standard \(d_{1}\)-dimensional Wiener process given on \(\Omega \) such that \(w_{t}\) is a Wiener process relative to the filtration \(\{\mathcal{F }_{t},t\ge 0\}\).

The set of progressively measurable \(A\)-valued processes \(\alpha _{t}=\alpha _{t}(\omega )\) is denoted by \(\mathfrak{A }\). Similarly we define \(\mathfrak{B }\) as the set of \(B\)-valued progressively measurable functions. By \( \mathbb{B }\) we denote the set of \(\mathfrak{B }\)-valued functions \({\varvec{\beta }}(\alpha _{\cdot })\) on \(\mathfrak{A }\) such that, for any \(T\in (0,\infty )\) and any \(\alpha ^{1}_{\cdot }, \alpha ^{2}_{\cdot }\in \mathfrak{A }\) satisfying

$$\begin{aligned} P( \alpha ^{1}_{t}=\alpha ^{2}_{t} \quad \text{ for} \text{ almost} \text{ all}\quad t\le T)=1, \end{aligned}$$
(2.2)

we have

$$\begin{aligned} P( {\varvec{\beta }}_{t}(\alpha ^{1}_{\cdot })={\varvec{\beta }}_{t}(\alpha ^{2}_{\cdot }) \quad \text{ for} \text{ almost} \text{ all}\quad t\le T)=1. \end{aligned}$$

For \(\alpha _{\cdot }\in \mathfrak{A }, \beta _{\cdot } \in \mathfrak{B }\), and \(x\in \mathbb{R }^{d}\) define \(x^{\alpha _{\cdot } \beta _{\cdot } x}_{t} \) as the unique solution of the Itô equation

$$\begin{aligned} x_{t}=x+\int \limits _{0}^{t}\sigma ^{\alpha _{s} \beta _{s} }( x_{s})\,dw_{s} +\int \limits _{0}^{t}b^{\alpha _{s} \beta _{s} }( x_{s})\,ds. \end{aligned}$$
(2.3)
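A minimal Euler–Maruyama discretization of (2.3) can be sketched as follows; the particular coefficients and controls at the end are hypothetical placeholders chosen by us for illustration:

```python
import numpy as np

def euler_maruyama(x0, sigma, b, alpha, beta, T=1.0, n_steps=1000, rng=None):
    """Simulate x_t = x + int_0^t sigma^{a_s b_s}(x_s) dw_s + int_0^t b^{a_s b_s}(x_s) ds.

    sigma(a, b, x) is a d x d1 matrix, b(a, b, x) lies in R^d, and alpha(t),
    beta(t) are the two players' controls evaluated along the time grid.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    dt = T / n_steps
    x = np.array(x0, dtype=float)
    d1 = sigma(alpha(0.0), beta(0.0), x).shape[1]
    path = [x.copy()]
    for k in range(n_steps):
        t = k * dt
        a, q = alpha(t), beta(t)
        dw = rng.normal(0.0, np.sqrt(dt), size=d1)     # d1-dim Wiener increment
        x = x + sigma(a, q, x) @ dw + b(a, q, x) * dt  # one Euler step
        path.append(x.copy())
    return np.array(path)

# Illustration with d = 1, d1 = 2: the controls scale the noise and the drift.
sigma_ex = lambda a, q, x: np.array([[a, 0.5 * q]])
b_ex = lambda a, q, x: np.array([-x[0] + a - q])
path = euler_maruyama([0.0], sigma_ex, b_ex, alpha=lambda t: 1.0, beta=lambda t: 0.5)
```

With the Lipschitz and boundedness conditions of Assumption 2.1 (ii), this scheme converges to the solution of (2.3) as the step size tends to zero.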

For a sufficiently smooth function \(u=u(x)\) introduce

$$\begin{aligned} L^{\alpha \beta } u( x)=a^{\alpha \beta }_{ij}( x)D_{ij}u(x)+ b ^{\alpha \beta }_{i }( x)D_{i}u(x)-c^{\alpha \beta } ( x)u(x), \end{aligned}$$

where, naturally, \(D_{i}=\partial /\partial x_{i}, D_{ij}=D_{i}D_{j}\). Also set

$$\begin{aligned} H[u](x)=\mathop {{\mathrm{sup\,\,\,inf}}}\limits _{\alpha \in A\,\,\beta \in B} [L^{\alpha \beta } u(x)+f^{\alpha \beta } (x)]. \end{aligned}$$
(2.4)
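For finite control sets \(A\) and \(B\), the \(\sup \inf \) in (2.4) at a fixed point reduces to a matrix computation; the toy numbers below (purely illustrative, not from the paper) also exhibit the generic inequality \(\sup _{\alpha }\inf _{\beta }\le \inf _{\beta }\sup _{\alpha }\):

```python
# q[i][j] stands for L^{ab} u(x) + f^{ab}(x) at a fixed point x for
# finitely many controls a (rows) and b (columns); numbers are illustrative.
q = [[3.0, 1.0],
     [2.0, 4.0]]

sup_inf = max(min(row) for row in q)                # sup_a inf_b, as in (2.4)
inf_sup = min(max(q[i][j] for i in range(len(q)))   # inf_b sup_a
              for j in range(len(q[0])))
```

Here `sup_inf` is 2 and `inf_sup` is 3; when the two differ, the order of the players' optimizations matters, which is why the operator \(H\) fixes the order \(\sup \inf \).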

Denote

$$\begin{aligned} \phi ^{\alpha _{\cdot }\beta _{\cdot } x}_{t} =\int \limits _{0}^{t}c^{\alpha _{s} \beta _{s} }( x^{\alpha _{\cdot } \beta _{\cdot } x}_{s})\,ds. \end{aligned}$$

Next, fix a bounded domain \(D\subset \mathbb{R }^{d}\), define \(\tau ^{\alpha _{\cdot }\beta _{\cdot } x}\) as the first exit time of \(x^{\alpha _{\cdot } \beta _{\cdot } x}_{t}\) from \(D\), and introduce

$$\begin{aligned} v(x)=\mathop {{\mathrm{inf\,\,\,sup}}}\limits _{{\varvec{\beta }}\in \mathbb{B }\,\,\alpha _{\cdot }\in \mathfrak{A }} E_{x}^{\alpha _{\cdot }{\varvec{\beta }}(\alpha _{\cdot })}\left[ \int \limits _{0}^{\tau } f( x_{t})e^{-\phi _{t}}\,dt+g(x_{\tau })e^{-\phi _{\tau }}\right] , \end{aligned}$$
(2.5)

where the indices \(\alpha _{\cdot }, {\varvec{\beta }}\), and \(x\) at the expectation sign are written to mean that they should be placed inside the expectation sign wherever and as appropriate, that is

$$\begin{aligned}&E_{x}^{\alpha _{\cdot }\beta _{\cdot }}\left[ \int \limits _{0}^{\tau } f( x_{t})e^{-\phi _{t}}\,dt+g(x_{\tau })e^{-\phi _{\tau }}\right] \\&\quad := E \left[ g\left( x^{\alpha _{\cdot }\beta _{\cdot } x} _{\tau ^{\alpha _{\cdot }\beta _{\cdot } x}} \right) e^{-\phi ^{\alpha _{\cdot }\beta _{\cdot } x} _{\tau ^{\alpha _{\cdot }\beta _{\cdot } x}}} +\int \limits _{0}^{\tau ^{\alpha _{\cdot }\beta _{\cdot } x}} f^{\alpha _{t}\beta _{t} } \left( x^{\alpha _{\cdot }\beta _{\cdot } x}_{t}\right) e^{-\phi ^{\alpha _{\cdot }\beta _{\cdot } x}_{t}}\,dt\right] . \end{aligned}$$

Observe that this definition makes perfect sense owing to Theorem 2.2.1 of [6], and that \(v(x)=g(x)\) in \(\mathbb{R }^{d}{\setminus } D\).
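For a fixed pair of (constant, non-strategic) controls, the inner expectation in (2.5) can be estimated by Monte Carlo. The sketch below, with illustrative one-dimensional data chosen by us (\(D=(-1,1)\), \(\sigma \equiv 1\), \(b\equiv 0\), \(c\equiv 1\), \(f\equiv 1\), \(g\equiv 0\)), discretizes the exit time \(\tau \) and the discount \(\phi _{t}\):

```python
import math
import random

def discounted_payoff(x0, sigma, b, c, f, g, in_D, dt=1e-3, rng=random):
    """One sample of int_0^tau f(x_t) e^{-phi_t} dt + g(x_tau) e^{-phi_tau},
    where phi_t = int_0^t c(x_s) ds and tau is the first exit time from D."""
    x, phi, payoff = x0, 0.0, 0.0
    while in_D(x):
        payoff += f(x) * math.exp(-phi) * dt
        phi += c(x) * dt
        x += b(x) * dt + sigma(x) * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return payoff + g(x) * math.exp(-phi)

# D = (-1, 1), sigma = 1, b = 0, c = 1, f = 1, g = 0: the payoff is
# E int_0^tau e^{-t} dt = E[1 - e^{-tau}] for a standard Brownian motion.
random.seed(1)
samples = [discounted_payoff(0.0,
                             sigma=lambda x: 1.0, b=lambda x: 0.0,
                             c=lambda x: 1.0, f=lambda x: 1.0,
                             g=lambda x: 0.0,
                             in_D=lambda x: -1.0 < x < 1.0)
           for _ in range(500)]
est = sum(samples) / len(samples)
```

For these data \(E_{0}[e^{-\tau }]=1/\cosh \sqrt{2}\), so the exact value is \(1-1/\cosh \sqrt{2}\approx 0.54\), which the estimate should approach as \(dt\downarrow 0\); the \(\inf \sup \) over strategies in (2.5) is, of course, a much harder object than this single fixed-control expectation.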

Here is our first main result before which we introduce one more assumption.

Assumption 2.2

There exists a nonnegative \(G\in C(\bar{D})\cap C^{2}_{loc}(D)\) such that \(G= 0\) on \(\partial D\) and

$$\begin{aligned} L^{\alpha \beta }G\le -1 \end{aligned}$$

in \(D\) for all \(\alpha \in A\) and \(\beta \in B\).

Theorem 2.1

Under the above assumptions

  1. (i)

    The function \(v(x)\) is bounded and continuous in \(\mathbb{R }^{d}\).

  2. (ii)

    Let \(\gamma ^{\alpha _{\cdot }\beta _{\cdot }x} \) be an \(\{\mathcal{F }_{t}\}\)-stopping time defined for each \(\alpha _{\cdot }\in \mathfrak{A }, \beta _{\cdot }\in \mathfrak{B }\), and \(x\in \mathbb{R }^{d}\) and such that \(\gamma ^{\alpha _{\cdot }\beta _{\cdot }x}\le \tau ^{\alpha _{\cdot }\beta _{\cdot } x}\). Also let \(\lambda _{t}^{\alpha _{\cdot } \beta _{\cdot }x}\ge 0\) be progressively measurable functions on \(\Omega \times [0,\infty )\) defined for each \(\alpha _{\cdot }\in \mathfrak{A }, \beta _{\cdot }\in \mathfrak{B }\), and \(x\in \mathbb{R }^{d}\) and such that they have finite integrals over finite time intervals (for any \(\omega \)). Then for any \(x\)

    $$\begin{aligned} v(x)=\mathop {{\mathrm{inf\,\,\,sup}}}\limits _{{\varvec{\beta }}\in \mathbb{B }\,\,\alpha _{\cdot }\in \mathfrak{A }} E_{x}^{\alpha _{\cdot }{\varvec{\beta }}(\alpha _{\cdot })}\left[ v(x_{\gamma })e^{-\phi _{\gamma }-\psi _{\gamma }} +\int \limits _{0}^{\gamma } \{f( x_{t})+\lambda _{t}v(x_{t})\}e^{-\phi _{t}-\psi _{t}}\,dt \right] ,\nonumber \\ \end{aligned}$$
    (2.6)

where inside the expectation sign \(\gamma =\gamma ^{\alpha _{\cdot }{\varvec{\beta }}(\alpha _{\cdot })x} \) and

$$\begin{aligned} \psi ^{\alpha _{\cdot }\beta _{\cdot } x}_{t} =\int \limits _{0}^{t} \lambda ^{\alpha _{\cdot }\beta _{\cdot } x}_{s}\,ds. \end{aligned}$$

Remark 2.1

The above setting is almost identical to that of [10], and the statement of Theorem 2.1 is almost identical to that of Theorem 2.2 of [10]. However, here we did not impose the rather strong assumption from [10] that \(D\) can be approximated by domains in which the Isaacs equation has regular solutions. On the other hand, we pay for that by excluding the parameters \(p\), which are present in Theorem 2.2 of [10] and will reappear in our Theorem 2.3.

Note that the possibility to vary \(\lambda \) in Theorem 2.1 might be useful while considering stochastic differential games with stopping in the spirit of [5].

Theorem 2.2

The function \(v\) is locally Hölder continuous in \(D\) with exponent \(\theta \in (0,1)\) depending only on \(d\) and \(\delta \).

Next, we state a comparison result, for which we need some new objects and additional assumptions. Take an integer \(k \ge d\) and assume that on \(\mathbb{R }^{k }\) we are given a mapping

$$\begin{aligned} \Pi :\check{x}\in \mathbb{R }^{k}\rightarrow \Pi (\check{x})\in \mathbb{R }^{d} \end{aligned}$$

which is twice continuously differentiable with bounded and uniformly continuous first- and second-order derivatives.

The reader understands that the case \(k =d\) is not excluded, in which case \(\Pi (\check{x})\equiv \check{x}\) is allowed.

Assume that we are given a separable metric space \(\mathcal{P }\) and let, for each \(\alpha \in A, \beta \in B\), and \(p\in \mathcal{P }\), the following functions on \(\mathbb{R }^{k}\) be given:

  1. (i)

    \(k\times d_{1}\) matrix-valued \(\check{\sigma }^{\alpha \beta }(p,\check{x}) = (\check{\sigma }^{\alpha \beta }_{ij}(p,\check{x}))\),

  2. (ii)

    \(\mathbb{R }^{k}\)-valued \(\check{b}^{\alpha \beta }(p,\check{x})= (\check{b}^{\alpha \beta }_{i }(p,\check{x}))\), and

  3. (iii)

    real-valued functions \(\check{r}^{\alpha \beta }(p,\check{x}) , \check{c}^{\alpha \beta }(p,\check{x}) \), and \(\check{f}^{\alpha \beta }(p,\check{x}) \).

As usual we introduce

$$\begin{aligned} \check{a}^{\alpha \beta }(p,\check{x})=(1/2) \check{\sigma }^{\alpha \beta }(p,\check{x}) (\check{\sigma }^{\alpha \beta }(p,\check{x}))^{*} \end{aligned}$$

and for a fixed \(\bar{p}\in \mathcal{P }\) define

$$\begin{aligned} (\bar{a},\bar{\sigma },\bar{b},\bar{c},\bar{f},\bar{r})^{\alpha \beta }(\check{x}) =(\check{a},\check{\sigma },\check{b}, \check{c},\check{f} ,\check{r})^{\alpha \beta }(\bar{p},\check{x}). \end{aligned}$$

Assumption 2.3

  1. (i)

    All the above functions apart from \(\check{r}\) are continuous with respect to \(\beta \in B\) for each \((\alpha ,p,\check{x})\) and continuous with respect to \(\alpha \in A\) uniformly with respect to \(\beta \in B\) for each \((p,\check{x})\). Furthermore, they are Borel measurable functions of \((p,\check{x})\) for each \((\alpha ,\beta )\) and \(\check{c}^{\alpha \beta }\ge 0\).

  2. (ii)

    The functions \(\bar{\sigma }^{\alpha \beta }(\check{x} )\) and \(\bar{b}^{\alpha \beta }(\check{x} )\) are uniformly continuous with respect to \(\check{x}\) uniformly with respect to \((\alpha ,\beta )\in A\times B \) and for any \(\check{x} \in \mathbb{R }^{k}\) and \((\alpha ,\beta ,p)\in A\times B\times \mathcal{P }\)

    $$\begin{aligned} \Vert \check{\sigma }^{\alpha \beta }(p,\check{x} )\Vert ,|\check{b}^{\alpha \beta }(p,\check{x} )| \le K_{0}. \end{aligned}$$
  3. (iii)

    We have \(\bar{r}\equiv 1\) and there is a constant \(\check{\delta }_{1}\in (0,1]\) such that on \(A\times B\times \mathcal{P }\times \mathbb{R }^{k}\) we have

    $$\begin{aligned} \check{r}^{\alpha \beta }(p,\check{x})\in [\check{\delta }_{1},\check{\delta }_{1}^{-1}],\quad \check{f} ^{\alpha \beta }(p,\check{x}) =\check{r}^{\alpha \beta }(p,\check{x}) \bar{f}^{\alpha \beta }(\check{x}) . \end{aligned}$$
    (2.7)
  4. (iv)

    The functions \(c^{\alpha \beta }( x)\) and \(f^{\alpha \beta }( x)\) are bounded on \(A\times B\times \mathbb{R }^{d}\). (This part bears on the objects introduced before Theorem 2.1.)

  5. (v)

    For any \(\check{x} \in \mathbb{R }^{k}\)

$$\begin{aligned} \sup _{(\alpha ,\beta ) \in A\times B }( |\bar{c}^{\alpha \beta }|+|\bar{f}^{\alpha \beta }|)(\check{x})<\infty . \end{aligned}$$
(2.8)

A function \(p^{\alpha _{\cdot }\beta _{\cdot }}_{t}= p^{\alpha _{\cdot }\beta _{\cdot }}_{t}(\omega )\) given on \(\mathfrak{A }\times \mathfrak{B }\times \Omega \times (0,\infty )\) is said to be control adapted if, for any \((\alpha _{\cdot }\!,\beta _{\cdot })\in \mathfrak{A }\times \mathfrak{B }\) it is progressively measurable in \((\omega ,t)\) and, for any \(T\in (0,\infty )\), we have

$$\begin{aligned} P\left( p^{\alpha ^{1}_{\cdot }\beta ^{1}_{\cdot }}_{t} =p^{\alpha ^{2}_{\cdot }\beta ^{2}_{\cdot }}_{t} \quad \text{ for} \text{ almost} \text{ all}\quad t\le T\right) =1 \end{aligned}$$

as long as

$$\begin{aligned} P\left( \alpha ^{1}_{t}=\alpha ^{2}_{t} , \beta ^{1}_{t}=\beta ^{2}_{t} \quad \text{ for} \text{ almost} \text{ all}\quad t\le T\right) =1. \end{aligned}$$

The set of \(\mathcal{P }\)-valued control adapted processes is denoted by \(\mathfrak{P }\).

We discussed a way in which control adapted processes appear naturally in Remark 2.4 of [10].

Fix a \(p\in \mathfrak{P }\) and for \(\alpha _{\cdot }\in \mathfrak{A }, \beta _{\cdot }\in \mathfrak{B }\), and \(\check{x}\in \mathbb{R }^{k }\) consider the following equation

$$\begin{aligned} \check{x}_{t}=\check{x}+\int \limits _{0}^{t}\check{\sigma }^{\alpha _{s}\beta _{s}}(p^{\alpha _{\cdot }\beta _{\cdot }}_{s},\check{x}_{s})\,dw_{s}+\int \limits _{0}^{t}\check{b}^{\alpha _{s}\beta _{s}} (p^{\alpha _{\cdot }\beta _{\cdot }}_{s},\check{x}_{s})\,ds. \end{aligned}$$
(2.9)

Assumption 2.4

Equation (2.9) satisfies the usual hypothesis, that is, for any \(\alpha _{\cdot }\in \mathfrak{A }, \beta _{\cdot } \in \mathfrak{B }\), and \(\check{x}\in \mathbb{R }^{k }\) it has a unique solution, denoted by \( \check{x}_{t}^{\alpha _{\cdot }\beta _{\cdot }\check{x}}\), and \( \check{x}_{t}^{\alpha _{\cdot }\beta _{\cdot }\check{x}}\) is a control adapted process for each \(\check{x}\).

In order to state additional assumptions, we need a possibly unbounded domain \( D^{\check{\!\!\!}\,} \subset \mathbb{R }^{k }\) such that

$$\begin{aligned} \Pi ( D^{\check{\!\!\!}\,})=D. \end{aligned}$$

Denote by \(\check{\tau }^{\alpha _{\cdot }\beta _{\cdot }\check{x}}\) the first exit time of \(\check{x}_{t}^{\alpha _{\cdot }\beta _{\cdot }\check{x}}\) from \(D^{\check{\!\!\!}\,}\) and set

$$\begin{aligned} \check{\phi }^{\alpha _{\cdot }\beta _{\cdot } \check{x}}_{t} =\int \limits _{0}^{t}\check{c}^{\alpha _{s} \beta _{s} }\left( p^{\alpha _{\cdot }\beta _{\cdot }}_{s}, \check{x}^{\alpha _{\cdot } \beta _{\cdot } \check{x}}_{s}\right) \,ds. \end{aligned}$$

Next, suppose that for each \(\varepsilon >0\) we are given real-valued Borel measurable functions \(\bar{c}^{\alpha \beta }_{\varepsilon }(\check{x})\) and \(\bar{f}^{\alpha \beta }_{\varepsilon }(\check{x})\) defined on \(A\times B\times \mathbb{R }^{k}\) and impose

Assumption 2.5

  1. (i)

    For each \(\varepsilon >0\) the functions \((\bar{c}_{\varepsilon },\bar{f}_{\varepsilon })^{\alpha \beta } \) are bounded on \(A\times B\times \bar{D^{\check{\!\!\!}\,}}\) and uniformly continuous with respect to \(\check{x}\in \bar{D^{\check{\!\!\!}\,}}\) uniformly with respect to \(\alpha ,\beta \).

  2. (ii)

    For any \(\check{x}\in D^{\check{\!\!\!}\,}\)

    $$\begin{aligned}&\sup _{(\alpha _{\cdot }\!,\beta _{\cdot } )\in \mathfrak{A }\times \mathfrak{B }} E^{\alpha _{\cdot }\beta _{\cdot }}_{\check{x}} \int \limits _{0}^{\check{\tau }} \sup _{\alpha \in A,\beta \in B} |\bar{c}^{\alpha \beta } -\bar{c}^{\alpha \beta } _{\varepsilon }| (\check{x}_{t}) e^{-\check{\phi }_{t}}\,dt\rightarrow 0,\nonumber \\&\sup _{(\alpha _{\cdot }\!,\beta _{\cdot } )\in \mathfrak{A }\times \mathfrak{B }} E^{\alpha _{\cdot }\beta _{\cdot }}_{\check{x}} \int \limits _{0}^{\check{\tau }} \sup _{\alpha \in A,\beta \in B} |\bar{f}^{\alpha \beta } -\bar{f}^{\alpha \beta } _{\varepsilon }| (\check{x}_{t}) e^{-\check{\phi }_{t}}\,dt\rightarrow 0 \end{aligned}$$
    (2.10)

    with the second convergence in (2.10) being uniform in \(D^{\check{\!\!\!}\,}\).

  3. (iii)

    There exists a constant \(\check{\delta }\in (0,1]\) such that for \(\check{x}\in \mathbb{R }^{k}, p\in \mathcal{P }, \alpha \in A, \beta \in B\), and \(\lambda \in \mathbb{R }^{d}\) we have

$$\begin{aligned} \check{\delta }|\lambda |^{2}\le \big |\lambda ^{*}\frac{\partial \Pi }{\partial \check{x}}(\check{x}) \check{\sigma }^{\alpha \beta } (p,\check{x})\big |^{2}\le \check{\delta }^{-1}|\lambda |^{2}. \end{aligned}$$

Remark 2.2

Assumption 2.5 (iii) is equivalent to saying that for solutions of (2.9) the processes \(\Pi (\check{x}_{t})\) are uniformly nondegenerate.

It is convenient to always lift functions \(u\) given on \(\mathbb{R }^{d}\) to functions given on \(\mathbb{R }^{k}\) by the formula

$$\begin{aligned} u(\check{x}):=u(\Pi (\check{x})). \end{aligned}$$
(2.11)

For sufficiently smooth functions \(u=u(\check{x})\) introduce

$$\begin{aligned} \check{L}^{\alpha \beta } u(p,\check{x})&= \check{a}^{\alpha \beta }_{ij}(p,\check{x}) D_{ij}u(\check{x})+ \check{b} ^{\alpha \beta }_{i }(p,\check{x})D_{i}u(\check{x})- \check{c}^{\alpha \beta } (p, \check{x})u(\check{x}),\\ \bar{L}^{\alpha \beta }u(\check{x})&= \check{L}^{\alpha \beta }u(\bar{p}, \check{x} ) \end{aligned}$$

(naturally, \(D_{i}=\partial /\partial \check{x}_{i}, D_{ij}=D_{i}D_{j}\)). Also set

$$\begin{aligned} \check{H}[u](\check{x})=\mathop {{\mathrm{sup\,\,\,inf}}}\limits _{\alpha \in A\,\,\beta \in B} [\bar{L}^{\alpha \beta } u(\check{x})+\bar{f}^{\alpha \beta } (\check{x})]. \end{aligned}$$

Assumption 2.6

There exists a nonnegative (bounded) function \(\check{G}\in C(\bar{D^{\check{\!\!\!}\,}})\cap C^{2}_{loc}(D^{\check{\!\!\!}\,})\) such that \(\check{G}(\check{x})\rightarrow 0\) as \(\check{x}\in \bar{D^{\check{\!\!\!}\,}}\) and \(\mathrm{dist}\,(\Pi (\check{x}),\partial D)\rightarrow 0\) (\(\check{G} = 0\) on \(\partial D\) if \(k =d\) and \(\Pi (\check{x})\equiv \check{x}\)) and

$$\begin{aligned} \check{L}^{\alpha \beta }\check{G}(p,\check{x})\le -1 \end{aligned}$$

in \(\mathcal{P }\times D^{\check{\!\!\!}\,}\) for all \(\alpha \in A\) and \(\beta \in B\).

Next, take a real-valued function \(\psi \) on \(\mathbb{R }^{k}\) with finite \(C^{2}(\mathbb{R }^{k})\)-norm and introduce

$$\begin{aligned} \check{v}(\check{x})= \mathop {{\mathrm{inf\,\,\,sup}}}\limits _{{\varvec{\beta }}\in \mathbb{B }\,\,\alpha _{\cdot }\in \mathfrak{A }} E_{\check{x}}^{\alpha _{\cdot }{\varvec{\beta }}(\alpha _{\cdot })} \left[ \int \limits _{0}^{\check{\tau }} \check{f}( p_{t},\check{x}_{t})e^{-\check{\phi }_{t}}\,dt+ \psi (\check{x}_{\check{\tau }})v( \check{x}_{\check{\tau }}) e^{-\check{\phi }_{\check{\tau }}}\right] , \end{aligned}$$

where, naturally, \(v\) is taken from Theorem 2.1. Assumption 2.5 (iii) (and the boundedness of \(D\)) and Theorem 2.2.1 of [6] allow us to conclude that \( P^{\alpha _{\cdot }\beta _{\cdot }}_{\check{x}} (\check{\tau }^{\alpha _{\cdot }\beta _{\cdot }\check{x}}<\infty )=1\). Also notice that (2.7) and Assumption 2.6 imply that for any \(\check{x}\in D^{\check{\!\!\!}\,}\)

$$\begin{aligned}&\check{\delta }_{1}\sup _{(\alpha _{\cdot }\!,\beta _{\cdot })\in \mathfrak{A }\times \mathfrak{B }} E^{\alpha _{\cdot }\beta _{\cdot }}_{\check{x}} \int \limits _{0}^{\check{\tau }}|\check{f} (p_{t}, \check{x}_{t})|e^{-\check{\phi }_{t}}\,dt\\&\le \sup _{(\alpha _{\cdot }\!,\beta _{\cdot })\in \mathfrak{A }\times \mathfrak{B }} E^{\alpha _{\cdot }\beta _{\cdot }}_{\check{x}} \int \limits _{0}^{\check{\tau }}|\bar{f} ( \check{x}_{t})|e^{-\check{\phi }_{t}}\,dt\\&\le \sup _{(\alpha _{\cdot }\!,\beta _{\cdot })\in \mathfrak{A }\times \mathfrak{B }} E^{\alpha _{\cdot }\beta _{\cdot }}_{\check{x}} \int \limits _{0}^{\check{\tau }}|\bar{f} ( \check{x}_{t})-\bar{f}_{\varepsilon } ( \check{x}_{t})|e^{-\check{\phi }_{t}}\,dt+\check{G}(\check{x}) \sup _{\alpha ,\beta ,\check{y}}|\bar{f}^{\alpha \beta }_{\varepsilon } (\check{y})|, \end{aligned}$$

which is finite at least for small \(\varepsilon >0\) owing to (2.10). Hence, \(\check{v}\) is well defined.

By the way, observe also that, if \(k=d\) and \(\Pi (\check{x})\equiv \check{x}\), then \(\psi (\check{x})v( \check{x}) =\psi (x)g(x)\) on \(\partial D^{\check{\!\!\!}\,}=\partial D\).

Assumption 2.7

For any function \(u\in C^{2}_{loc}(D)\) (not \(C^{2}_{loc}(D^{\check{\!\!\!}\,})\)), the function \(\psi (\check{x})u(\check{x})\) is \(p\)-insensitive in \(D^{\check{\!\!\!}\,}\) relative to \((\check{r}^{\alpha \beta }(p,\check{x}), \check{L}^{\alpha \beta }(p,\check{x}))\) in the terminology of [10], that is, for any \(\alpha _{\cdot }\!, \beta _{\cdot }\), and \(\check{x}\) we have

$$\begin{aligned}&d\left[ (\psi u)(\check{x}_{t}^{\alpha _{\cdot }\beta _{\cdot }\check{x}}) e^{-\check{\phi }^{\alpha _{\cdot }\beta _{\cdot } \check{x}}_{t}}\right] =dm_{t}\\&\quad +e^{-\check{\phi }^{\alpha _{\cdot }\beta _{\cdot } \check{x}}_{t}} \check{r}^{\alpha _{t}\beta _{t}}(p^{\alpha _{\cdot }\beta _{\cdot }}_{t}, \check{x}_{t}^{\alpha _{\cdot }\beta _{\cdot }\check{x}}) \bar{L}^{\alpha _{t}\beta _{t}}(\psi u)( \check{x}_{t}^{\alpha _{\cdot }\beta _{\cdot }\check{x}})\,dt, \end{aligned}$$

whenever \(t<\check{\tau }^{\alpha _{\cdot }\beta _{\cdot }\check{x}}\), where \(m_{t}\) is a local martingale starting at zero.

We discuss this assumption in Remark 2.6.

Finally, take some \(\{\mathcal{F }_{t}\}\)-stopping times \(\gamma ^{\alpha _{\cdot }\beta _{\cdot }\check{x}} \) and progressively measurable functions \(\lambda _{t}^{\alpha _{\cdot }\beta _{\cdot }\check{x}}\ge 0\) on \(\Omega \times [0,\infty )\) defined for each \(\alpha _{\cdot }\in \mathfrak{A }, \beta _{\cdot }\in \mathfrak{B }\), and \(\check{x} \in \mathbb{R }^{k}\) and such that \(\lambda _{t}^{\alpha _{\cdot }\beta _{\cdot }\check{x}}\) have finite integrals over finite time intervals (for any \(\omega \)). Introduce

$$\begin{aligned} \psi ^{\alpha _{\cdot }\beta _{\cdot }\check{x}}_{t} =\int \limits _{0}^{t}\lambda _{s}^{\alpha _{\cdot }\beta _{\cdot }\check{x}}\,ds. \end{aligned}$$

In the following theorem by quadratic functions we mean quadratic functions on \(\mathbb{R }^{d}\) (not \(\mathbb{R }^{k }\)) (and if \(u\) is a function defined in \(D\) then we extend it to a function in a domain in \(\mathbb{R }^{k}\) following notation (2.11)).

Theorem 2.3

  1. (i)

    If  for any \(\check{x} \in D^{\check{\!\!\!}\,}\) and quadratic function \(u\), we have

    $$\begin{aligned} H[u](\Pi (\check{x}))\le 0\Longrightarrow \check{H}[u\psi ](\check{x})\le 0, \end{aligned}$$
    (2.12)

    then \(\check{v}\le \psi v\) in \(\mathbb{R }^{k}\) and for any \(\check{x}\in \mathbb{R }^{k}\)

    $$\begin{aligned} v\psi (\check{x}) \ge \mathop {{\mathrm{inf\,\,\,sup}}}\limits _{{\varvec{\beta }}\in \mathbb{B }\,\,\alpha _{\cdot }\in \mathfrak{A }} E_{\check{x}}^{\alpha _{\cdot }{\varvec{\beta }}(\alpha _{\cdot })} \left[ v\psi (\check{x}_{\gamma \wedge \check{\tau }}) e^{-\check{ \phi }_{\gamma \wedge \check{\tau }}- \psi _{\gamma \wedge \check{\tau }}}\right. \nonumber \\ \left. +\int \limits _{0}^{\gamma \wedge \check{\tau }} \{\check{f}(p_{t},\check{x} _{t} )+\lambda _{t} v\psi (\check{x} _{t})\}e^{-\check{ \phi }_{t}- \psi _{t}}\,dt \right] . \end{aligned}$$
    (2.13)
  2. (ii)

    If  for any \(\check{x} \in D^{\check{\!\!\!}\,}\) and quadratic function \(u\), we have

    $$\begin{aligned} H[u](\Pi (\check{x}))\ge 0\Longrightarrow \check{H}[u\psi ](\check{x})\ge 0, \end{aligned}$$
    (2.14)

    then \(\check{v}\ge \psi v\) in \(\mathbb{R }^{k}\) and for any \(\check{x}\in \mathbb{R }^{k}\)

    $$\begin{aligned} v\psi (\check{x}) \le \mathop {{\mathrm{inf\,\,\,sup}}}\limits _{{\varvec{\beta }}\in \mathbb{B }\,\,\alpha _{\cdot }\in \mathfrak{A }} E_{\check{x}}^{\alpha _{\cdot }{\varvec{\beta }}(\alpha _{\cdot })}\Biggl [ v\psi (\check{x}_{\gamma \wedge \check{\tau }}) e^{-\check{ \phi }_{\gamma \wedge \check{\tau }}- \psi _{\gamma \wedge \check{\tau }}} \nonumber \\ +\int \limits _{0}^{\gamma \wedge \check{\tau }} \{\check{f}(p_{t},\check{x} _{t} )+\lambda _{t} v\psi (\check{x} _{t})\}e^{-\check{ \phi }_{t}- \psi _{t}}\,dt \Biggr ]. \end{aligned}$$
    (2.15)

Remark 2.3

Under the assumptions of Theorem 2.1 suppose that \(c\) and \(f\) are bounded. Take a global barrier \(\Psi \), which is an infinitely differentiable function on \(\mathbb{R }^{d}\) such that \(\Psi \ge 1\) on \(\mathbb{R }^{d}\) and \((L^{\alpha \beta }+c^{\alpha \beta }) \Psi \le -1\) on \(D\) for all \(\alpha ,\beta \). The existence of such functions is a simple and well-known fact.

In Theorem 2.3 take \(k=d, D^{\check{\!\!\!}\,}=D\), and independent of \(p\) functions \(\check{r}\equiv 1\),

$$\begin{aligned} \check{\sigma }^{\alpha \beta }(x)&= \Psi ^{1/2}(x) \sigma ^{\alpha \beta }(x),\quad \check{b}^{\alpha \beta }(x) =\Psi (x)b^{\alpha \beta }(x)+2a^{\alpha \beta }(x)D\Psi (x),\\ \check{c}^{\alpha \beta }(x)&= -L^{\alpha \beta }\Psi (x),\quad \check{f}^{\alpha \beta }(x)= f^{\alpha \beta }(x), \quad \check{g}(x)=\Psi ^{-1}(x)g(x), \end{aligned}$$

where \(D\Psi \) is the gradient of \(\Psi \) (a column vector).

A simple computation shows that

$$\begin{aligned} \check{L}^{\alpha \beta }u(x)+\check{f}^{\alpha \beta } =L^{\alpha \beta }(u\Psi )(x)+f^{\alpha \beta }(x) \end{aligned}$$

and therefore both conditions in (2.12) and (2.14) are satisfied with \(\psi =\Psi ^{-1}\), and by Theorem 2.3 we conclude that \(\check{v}=\Psi ^{-1} v\). It is probably still worth noting that to check the corresponding assumption in this case we take \((\bar{c}_{\varepsilon }, \bar{f}_{\varepsilon })^{\alpha \beta } = [( \check{c} , \check{f})^{\alpha \beta }]^{(\varepsilon )}.\)

This simple observation sometimes helps one introduce a new \(c\ge 1\) when the initial one was zero.

Remark 2.4

If \(\check{a},\check{b},\check{c}\), and \(\check{f}\) are independent of \(p\) and \(k =d, \Pi (x)\equiv x\), and \(\psi \equiv 1\), then Theorem 2.3 implies that \(v=\check{v}\) whenever the functions \(H\) and \(\check{H}\) coincide. Therefore, \(v\) and \(\check{v}\) are uniquely defined by \(H\) and not by its particular representation (2.4) and, for that matter, not by the choice of probability space, filtration, and the Wiener process including its dimension. By Theorem 2.3 we also have that \(v=\check{v}\) if \(k =d, \Pi (x)\equiv x\), and if \(\check{a},\check{b},\check{c}\), and \(\check{f}\) do depend on \(p\) but in such a way that

$$\begin{aligned} (\check{a},\check{b},\check{c},\check{f})(p,x) =\check{r}(p,x)(a,b,c,f)(x) \end{aligned}$$

since in that case any smooth function is \(p\)-insensitive. In such a situation we see that \(\check{v}\) is independent of \(p\in \mathfrak{P }\) as well.

Also notice that, if in Theorem 2.1 the functions \(c\) and \(f\) are bounded (see Assumption 2.3 (iv)) and one takes \(k=d\), assumes that the checked functions are independent of \(p\), and finally takes the checked functions equal to the unchecked ones and \((\bar{c}_{\varepsilon }, \bar{f}_{\varepsilon })^{\alpha \beta } = [( c , f)^{\alpha \beta }]^{(\varepsilon )}\), then one sees that assertion (ii) of Theorem 2.1 follows immediately from Theorem 2.2.1 of [6] and Theorem 2.3.

Remark 2.5

Here we discuss the possibility to use dilations. Take a constant \(\mu >0\) and consider the following modification of (2.3)

$$\begin{aligned} x_{t}=x+\int \limits _{0}^{t} \sigma ^{\alpha _{s} \beta _{s} }( \mu x_{s})\,dw_{s} +\int \limits _{0}^{t} \mu b^{\alpha _{s} \beta _{s} }( \mu x_{s})\,ds. \end{aligned}$$
(2.16)

The solution of this equation is denoted by \(x^{\alpha _{\cdot }\beta _{\cdot } x}_{t}(\mu )\). Then let

$$\begin{aligned} \phi ^{\alpha _{\cdot }\beta _{\cdot } x}_{t}(\mu ) =\int \limits _{0}^{t}\mu ^{2}c^{\alpha _{s} \beta _{s} }(\mu x^{\alpha _{\cdot } \beta _{\cdot } x}_{s}(\mu ))\,ds, \end{aligned}$$

denote by \(\tau ^{\alpha _{\cdot }\beta _{\cdot } x}(\mu )\) the first exit time of \(x^{\alpha _{\cdot }\beta _{\cdot } x}_{t}(\mu )\) from \(\mu ^{-1}D\), and set

$$\begin{aligned} v(x,\mu )&= \mathop {{\mathrm{inf\,\,\,sup}}}\limits _{{\varvec{\beta }}\in \mathbb{B }\,\,\alpha _{\cdot }\in \mathfrak{A }} E_{x}^{\alpha _{\cdot }{\varvec{\beta }}(\alpha _{\cdot })} \left[ \int \limits _{0}^{\tau (\mu )} \mu ^{2}f(\mu x_{t}(\mu ))e^{-\phi _{t}(\mu )}\,dt \right. \\&\left. + g(\mu x_{\tau }(\mu ))e^{-\phi _{\tau (\mu )}(\mu )}\right] . \end{aligned}$$

A simple application of Theorem 2.3 with \(\Pi (x)=\mu x\) and \(\psi \equiv 1\) shows that \(v(\mu x)=v(x,\mu )\). Of course, other coordinate changes are also covered by Theorem 2.3.
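The identity \(v(\mu x)=v(x,\mu )\) can be made concrete in the simplest setting. The sketch below is only illustrative: the data \(d=1\), \(\sigma \equiv 1\), \(b=c=0\), \(f\equiv 1\), \(g=0\), \(D=(-1,1)\) are our assumptions, not taken from the text, and for them the classical fact \(E\tau =R^{2}-x^{2}\) for the exit time of \(x+w_{t}\) from \((-R,R)\) gives both sides in closed form.

```python
# Closed-form check of v(mu * x) = v(x, mu) from Remark 2.5 in an
# illustrative special case (our assumption, not from the text):
# d = 1, sigma = 1, b = c = 0, f = 1, g = 0, D = (-1, 1).
# Then v(x) = E tau_x = 1 - x^2, while the dilated process x + w_t must
# leave mu^{-1} D = (-1/mu, 1/mu) and carries the running cost mu^2 * f.

def v(x):
    return 1.0 - x * x                 # expected exit time from (-1, 1)

def v_dilated(x, mu):
    exit_time_mean = mu**-2 - x * x    # exit of x + w_t from (-1/mu, 1/mu)
    return mu**2 * exit_time_mean      # payoff integral of mu^2 * f, f = 1

for mu in (0.5, 2.0):
    for x in (-0.3, 0.0, 0.2):         # points inside mu^{-1} D for both mu
        assert abs(v(mu * x) - v_dilated(x, mu)) < 1e-12
```

The assertion is pure algebra: \(\mu ^{2}(\mu ^{-2}-x^{2})=1-(\mu x)^{2}\).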

Remark 2.6

The case \(k >d\) will play a very important role in a subsequent article (see [11]) about stochastic differential games. To illustrate one of the applications, consider the one-dimensional Wiener process \(w_{t}\), define \(\tau _{x}\) as the first exit time of \(x+ w_{t}\) from \((-1,1)\), and introduce

$$\begin{aligned} v(x)=E\int \limits _{0}^{\tau _{x}}f(x+w_{t})\,dt, \end{aligned}$$

so that the corresponding (Isaacs) equation becomes

$$\begin{aligned} H[v]:=(1/2)D^{2}v+f=0 \end{aligned}$$

in \((-1,1)\) with zero boundary data at \(\pm 1\). We want to show how Theorem 2.3 allows one to derive the following

$$\begin{aligned} v(x)=E\int \limits _{0}^{\check{\tau }_{x}}e^{-w_{t}-(1/2)t}f(x+w_{t}+t)\,dt, \end{aligned}$$
(2.17)

where \(\check{\tau }_{x}\) is the first exit time of \(x+w_{t}+t\) from \((-1,1)\). (Of course, (2.17) is a simple corollary of Girsanov’s theorem.)
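Identity (2.17) also lends itself to a quick Monte Carlo check. The sketch below is ours, not from the text: all numerical choices are arbitrary, and it takes \(f\equiv 1\), for which the left-hand side is \(v(x)=E\tau _{x}=1-x^{2}\); both sides are estimated by an Euler scheme.

```python
import numpy as np

rng = np.random.default_rng(42)

def mc_exit_payoff(x, drift, weighted, n_paths=8000, dt=2e-3, t_max=20.0):
    """Estimate E int_0^tau weight_t dt by Euler-Maruyama, with f = 1.

    drift=0, weighted=False: v(x) = E tau_x (exit of x + w_t from (-1, 1));
    drift=1, weighted=True : right-hand side of (2.17), with the weight
                             e^{-w_t - t/2} and exit of x + w_t + t.
    """
    pos = np.full(n_paths, float(x))      # x + w_t (+ t when drift = 1)
    w = np.zeros(n_paths)                 # the driving Wiener process
    alive = np.ones(n_paths, dtype=bool)
    payoff = np.zeros(n_paths)
    t = 0.0
    for _ in range(int(t_max / dt)):
        if weighted:
            payoff[alive] += np.exp(-w[alive] - 0.5 * t) * dt
        else:
            payoff[alive] += dt
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        w += dw
        pos += dw + drift * dt
        t += dt
        alive &= np.abs(pos) < 1.0
        if not alive.any():
            break
    return payoff.mean()

x0 = 0.5
lhs = mc_exit_payoff(x0, drift=0.0, weighted=False)  # E tau_x = 1 - x0^2
rhs = mc_exit_payoff(x0, drift=1.0, weighted=True)   # right side of (2.17)
```

Both estimates should agree with \(1-x_{0}^{2}=0.75\) up to Monte Carlo and discretization error.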

In order to do that consider the two-dimensional diffusion process given by

$$\begin{aligned} dx_{t}=dw_{t}+dt,\quad dy_{t}=-y_{t}\,dw_{t} \end{aligned}$$
(2.18)

starting at

$$\begin{aligned} (x,y)\in D^{\check{\!\!\!}\,}_{\varepsilon }= (-1,1)\times (\varepsilon ,\varepsilon ^{-1}), \end{aligned}$$

where \(\varepsilon \in (0,1)\), let \(\tau ^{\varepsilon }_{x,y}\) be the first time the process exits from this domain, and introduce

$$\begin{aligned} \check{v}(x,y)=E\left[ \int \limits _{0}^{\tau ^{\varepsilon }_{x,y}}y_{t}f(x_{t})\,dt +y_{\tau ^{\varepsilon }_{x,y}}v(x_{\tau ^{\varepsilon }_{x,y}})\right] . \end{aligned}$$

In this situation we take \(\Pi (x,y)=x\). The corresponding (Isaacs) equation is now

$$\begin{aligned} \check{H}[\check{v}](x,y)&:= (1/2)\frac{\partial ^{2}}{(\partial x)^{2}} \check{v}(x,y)- y\frac{\partial ^{2}}{ \partial x \partial y } \check{v}(x,y) +(1/2)y^{2}\frac{\partial ^{2}}{(\partial y)^{2}} \check{v}(x,y)\\&+\frac{\partial }{ \partial x}\check{v}(x,y)+yf(x)=0. \end{aligned}$$

As \(G(x)\) and \(\check{G}(x,y)\) one can take \(1-|x|^2\) and set \(r(x,y)=y\).

It is a trivial computation to show that if \(u(x)\) satisfies \(H[u](x)\le 0\) at a point \(x\in (-1,1)\), then for \(\check{u}(x,y) :=yu(x)\) we have \(\check{H}[\check{u}](x,y)\le 0\) for any \(y>0\); if we reverse the sign of the first inequality, the same happens with the second one. By Theorem 2.3 we have \(\check{v}(x,y)=yv(x)\) in \(D^{\check{\!\!\!}\,}_{\varepsilon }\), and since for \(y=1\)

$$\begin{aligned} y_{t}=e^{-w_{t}-(1/2)t}, \end{aligned}$$

we conclude that for any \(\varepsilon \in (0,1)\)

$$\begin{aligned} v(x)=E\left[ \int \limits _{0}^{\tau ^{\varepsilon }_{x} }e^{-w_{t}-(1/2)t} f(x+w_{t}+t)\,dt +y_{\tau ^{\varepsilon }_{x }}v(x_{\tau ^{\varepsilon }_{x} })\right] , \end{aligned}$$
(2.19)

where \(\tau ^{\varepsilon }_{x}\) is the minimum of the first exit time of \(x+w_{t}+t\) from \((-1,1)\) and the first exit time of \(e^{-w_{t}-(1/2)t}\) from \((\varepsilon ,\varepsilon ^{-1})\). The latter tends to infinity as \(\varepsilon \downarrow 0\) and we obtain (2.17) from (2.19) and the fact that \(v=0\) at \(\pm 1\).
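The "trivial computation" \(\check{H}[yu](x,y)=yH[u](x)\) used above can also be confirmed symbolically; a sketch with sympy, for generic smooth \(u\) and \(f\):

```python
import sympy as sp

x, y = sp.symbols('x y')
u = sp.Function('u')(x)   # a generic smooth function of x
f = sp.Function('f')(x)

# \check{u}(x, y) = y * u(x)
u_check = y * u

# \check{H}[\check{u}](x, y), transcribed from the display above
H_check = (sp.Rational(1, 2) * sp.diff(u_check, x, 2)
           - y * sp.diff(u_check, x, y)
           + sp.Rational(1, 2) * y**2 * sp.diff(u_check, y, 2)
           + sp.diff(u_check, x)
           + y * f)

# H[u] = (1/2) u'' + f
H_u = sp.Rational(1, 2) * sp.diff(u, x, 2) + f

residual = sp.simplify(H_check - y * H_u)
assert residual == 0   # \check{H}[y u](x, y) = y H[u](x)
```

The mixed-derivative term contributes \(-yu^{\prime }(x)\), which cancels against the first-order term \(yu^{\prime }(x)\), leaving exactly \(yH[u]\).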

The reader might have noticed that the process given by (2.18) is degenerate. This shows why in Assumption 2.5 we require only \(\Pi (\check{x}_{t})\) to be uniformly nondegenerate.

3 Main results for the whole space

In this section we keep the assumptions of Sect. 2 apart from Assumptions 2.2 and 2.6, which concern the existence of the barrier functions \(G\) and \(\check{G}\), and take \(D=\mathbb{R }^{d}\). Whenever we encounter expressions like \(v(x_{\gamma })\), we set them to be zero on the event \(\{\gamma =\infty \}\). In the whole space we need the following.

Assumption 3.1

  1. (i)

    The functions \( c,f, \check{c}, \check{f}\) are bounded.

  2. (ii)

    For a constant \(\chi >0\) we have \(c^{\alpha \beta }(p,x ),\check{c}^{\alpha \beta }(p,\check{x} )\ge \chi \) for all \(\alpha ,\beta ,p,x\) and \(\check{x}\).

Notice that in this situation \(\tau ^{\alpha _{\cdot }\beta _{\cdot }x} =\infty \); however, \(\check{\tau }^{\alpha _{\cdot }\beta _{\cdot } \check{x}}\) may still be finite.

Theorem 3.1

Under the above assumptions all assertions of Theorems 2.1 and 2.3 hold true.

Proof

First we deal with Theorem 2.1. Take \(D_{n}=\{x:|x|<n\}\) and \(0\) in place of \(D\) and \(g\), respectively, in the original theorem, and denote the function \(v\) thus obtained by \(v_{n}\). It is not hard to check that, owing to the boundedness of \(f\) and the condition \(c\ge \chi \), we have \(v_{n}\rightarrow v\) uniformly on any compact set \(\Gamma \subset \mathbb{R }^{d}\) as \(n\rightarrow \infty \). Furthermore, since the boundary of \(D_{n}\) is smooth, \(\sigma ,b,c\) are bounded, and \(a\) is uniformly nondegenerate, for each \(n\) there exists a global barrier \(G_{n}\) satisfying Assumption 2.2 with \(D_{n}\) in place of \(D\). Therefore, by Theorem 2.1, the \(v_{n}\) are continuous and so is \(v\).
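The uniform convergence \(v_{n}\rightarrow v\) invoked here comes from a tail estimate; the following is only a sketch under the present assumptions (\(f\) bounded, \(c\ge \chi >0\), \(\sigma \) and \(b\) bounded). For any \(\alpha _{\cdot }, \beta _{\cdot }\), and \(x\)

$$\begin{aligned} \left| E_{x}^{\alpha _{\cdot }\beta _{\cdot }} \int \limits _{\tau _{n}}^{\infty } f(x_{t})e^{-\phi _{t}}\,dt\right| \le \sup |f|\, E_{x}^{\alpha _{\cdot }\beta _{\cdot }}\left[ e^{-\phi _{\tau _{n}}} \int \limits _{\tau _{n}}^{\infty } e^{-\chi (t-\tau _{n})}\,dt\right] \le \frac{\sup |f|}{\chi }\, E_{x}^{\alpha _{\cdot }\beta _{\cdot }} e^{-\chi \tau _{n}}, \end{aligned}$$

and, since \(\sigma \) and \(b\) are bounded, \(\tau _{n}\rightarrow \infty \) in probability uniformly with respect to the policies and \(x\) in any compact set, so that the right-hand side tends to zero uniformly on compact sets.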

For each \(n\ge m\ge 1\) we also have by Theorem 2.1 that

$$\begin{aligned} v_{n}(x)&= \mathop {{\mathrm{inf\,\,\,sup}}}\limits _{{\varvec{\beta }}\in \mathbb{B }\,\,\alpha _{\cdot }\in \mathfrak{A }} E_{x}^{\alpha _{\cdot }{\varvec{\beta }}(\alpha _{\cdot })}\Biggl [ v_{n}(x_{\gamma \wedge \tau _{m}})e^{-\phi _{\gamma \wedge \tau _{m}} -\psi _{\gamma \wedge \tau _{m}}}\\&+\int \limits _{0}^{\gamma \wedge \tau _{m}} \{f( x_{t})+\lambda _{t}v_{n}(x_{t})\}e^{-\phi _{t}-\psi _{t}}\,dt \Biggr ], \end{aligned}$$

where \(\tau _{m}^{\alpha _{\cdot }\beta _{\cdot }x}\) is the first exit time of \(x_{t}^{\alpha _{\cdot }\beta _{\cdot }x}\) from \(D_{m}\). Since \(v_{n}\rightarrow v\) uniformly on \(\bar{D}_{m}\), we conclude that

$$\begin{aligned} v (x)&= \mathop {{\mathrm{inf\,\,\,sup}}}\limits _{{\varvec{\beta }}\in \mathbb{B }\,\,\alpha _{\cdot }\in \mathfrak{A }} E_{x}^{\alpha _{\cdot }{\varvec{\beta }}(\alpha _{\cdot })}\Biggl [ v (x_{\gamma \wedge \tau _{m}})e^{-\phi _{\gamma \wedge \tau _{m}} -\psi _{\gamma \wedge \tau _{m}}}\\&+\int \limits _{0}^{\gamma \wedge \tau _{m}} \{f( x_{t})+\lambda _{t}v (x_{t})\}e^{-\phi _{t}-\psi _{t}}\,dt \Biggr ]. \end{aligned}$$

Passing to the limit as \(m\rightarrow \infty \) proves our theorem in what concerns Theorem 2.1.

In the case of Theorem 2.3 the argument is quite similar, and we only comment on the existence of \(\check{G}_{n}\) satisfying Assumption 2.6 with \(D^{\check{\!\!\!}\,}_{n}= \{\check{x}\in D^{\check{\!\!\!}\,}:\Pi (\check{x})\in D_{n}\}\). Under obvious circumstances one can take \(\check{G}_{n}(\check{x}) =G_{n}(\Pi (\check{x}))\). In the general case one should construct \(G_{n}\) for operators with, perhaps, a smaller ellipticity constant and larger drift terms. The theorem is proved. \(\square \)

4 An auxiliary result

In this section \(D\) is not assumed to be bounded. We need a bounded continuous function \(\Psi \) on \(\bar{D}\) such that \(\Psi \ge 0\) in \(D\) and \(\Psi =0\) on \(\partial D\) (if \(\partial D\ne \emptyset \)). We assume that we are given two continuous \(\mathcal{F }_{t}\)-adapted processes \(x^{\prime }_{t}\) and \(x^{\prime \prime }_{t}\) in \(\mathbb{R }^{d}\) with \(x^{\prime }_{0},x^{\prime \prime }_{0}\in D\) (a.s.) and progressively measurable real-valued processes \(c^{\prime }_{t},c^{\prime \prime }_{t},f^{\prime }_{t},f^{\prime \prime }_{t}\). Suppose that \(c^{\prime },c^{\prime \prime }\ge 0\).

Define \(\tau ^{\prime }\) and \(\tau ^{\prime \prime }\) as the first exit times of \(x^{\prime }_{t}\) and \(x^{\prime \prime }_{t}\) from \(D\), respectively. Then introduce

$$\begin{aligned} \phi ^{\prime }_{t}=\int \limits _{0}^{t}c^{\prime }_{s}\,ds,\quad \phi ^{\prime \prime }_{t}=\int \limits _{0}^{t}c^{\prime \prime }_{s}\,ds, \end{aligned}$$

and suppose that

$$\begin{aligned} E \int \limits _{0}^{ \tau ^{\prime }}|f^{\prime }_{t}|e^{-\phi ^{\prime }_{t}}\,dt+ E \int \limits _{0}^{\tau ^{\prime \prime } }|f^{\prime \prime }_{t}|e^{-\phi ^{\prime \prime }_{t}}\,dt<\infty . \end{aligned}$$
(4.1)

Remark 4.1

According to Theorem 2.2.1 of [6], the above requirements on \(f\) and \(c\) are fulfilled if Assumption 2.1 is satisfied and we take as \(x_{t}\) and \((f, c)\), with prime and double prime, objects of the type

$$\begin{aligned} x^{\alpha _{\cdot }\beta _{\cdot }x}_{t},\quad (f,c)^{\alpha _{t}\beta _{t}}( x^{\alpha _{\cdot }\beta _{\cdot }x}_{t} ), \end{aligned}$$

respectively, where \(\alpha _{\cdot }\in \mathfrak{A }, \beta _{\cdot }\in \mathfrak{B }\), and \(x\in \mathbb{R }^{d}\).

Finally set

$$\begin{aligned} v^{\prime }=E \int \limits _{0}^{\tau ^{\prime }}f^{\prime }_{t}e^{-\phi ^{\prime }_{t}}\,dt, \quad v^{\prime \prime }=E \int \limits _{0}^{\tau ^{\prime \prime }}f^{\prime \prime }_{t}e^{-\phi ^{\prime \prime }_{t}}\,dt. \end{aligned}$$

Now comes our main assumption.

Assumption 4.1

The processes

$$\begin{aligned} \Psi (x^{\prime }_{t\wedge \tau ^{\prime }})e^{-\phi ^{\prime }_{t\wedge \tau ^{\prime }}} +\int \limits _{0}^{t\wedge \tau ^{\prime }}e^{-\phi ^{\prime }_{s}}\,ds ,\quad \Psi (x^{\prime \prime }_{t\wedge \tau ^{\prime \prime }})e^{-\phi ^{\prime \prime }_{t\wedge \tau ^{\prime \prime }}} +\int \limits _{0}^{t\wedge \tau ^{\prime \prime }}e^{-\phi ^{\prime \prime }_{s}}\,ds \end{aligned}$$

are supermartingales.

Remark 4.2

Observe that Assumption 4.1 is satisfied under the assumptions of Theorem 2.1 if we take \(\Psi =G\) from Theorem 2.1 and other objects from Remark 4.1.

Indeed, by Itô’s formula

$$\begin{aligned} G(x _{t\wedge \tau })e^{-\phi _{t\wedge \tau }} +\int \limits _{0}^{t\wedge \tau }e^{-\phi _{s}}\,ds \end{aligned}$$

is a local supermartingale, where

$$\begin{aligned} x_{t}=x^{\alpha _{\cdot }\beta _{\cdot }x}_{t},\quad \tau =\tau ^{\alpha _{\cdot }\beta _{\cdot }x},\quad \phi _{t}=\phi ^{\alpha _{\cdot }\beta _{\cdot }x}_{t} . \end{aligned}$$
(4.2)

Since it is nonnegative, this local supermartingale is in fact a supermartingale.

Denote

$$\begin{aligned} \Phi _{t}=e^{-\phi ^{\prime }_{t}}+e^{-\phi ^{\prime \prime }_{t}},\quad \Delta _{c}=E\int \limits _{0}^{\tau ^{\prime }\wedge \tau ^{\prime \prime }} |c^{\prime }_{t} - c^{\prime \prime }_{t} |\Phi _{t}\,dt, \end{aligned}$$

and by replacing \(c\) with \(f\) define \(\Delta _{f}\).

Lemma 4.1

Introduce a constant \(M_{f}\) (perhaps \(M_{f}=\infty \)) such that for each \(t\ge 0\) (a.s.)

$$\begin{aligned} I_{\tau ^{\prime }\wedge \tau ^{\prime \prime }>t}E\left\{ \int \limits _{t}^{\tau ^{\prime }\wedge \tau ^{\prime \prime }} |f^{\prime \prime }_{s}| \Phi _{s}\,ds\mid \mathcal{F }_{t}\right\} \le \Phi _{t} M_{f}. \end{aligned}$$
(4.3)

Then

$$\begin{aligned}&|v^{\prime }- v^{\prime \prime }| \le \Delta _{f}+M_{f}\Delta _{c} + \sup |f^{\prime }| EI_{\tau ^{\prime \prime }<\tau ^{\prime }}[\Psi (x^{\prime }_{ \tau ^{\prime \prime }})-\Psi (x^{\prime \prime } _{ \tau ^{\prime \prime }})] e^{-\phi ^{\prime }_{ \tau ^{\prime \prime }}}\nonumber \\&\quad + \sup |f^{\prime \prime }| EI_{\tau ^{\prime }<\tau ^{\prime \prime }}[\Psi (x^{\prime \prime }_{ \tau ^{\prime }})-\Psi (x^{\prime } _{ \tau ^{\prime }})] e^{-\phi ^{\prime \prime }_{ \tau ^{\prime }}} , \end{aligned}$$
(4.4)

where the last two terms can be dropped if \(\tau ^{\prime }=\tau ^{\prime \prime }\) (a.s.).

Proof

We have

$$\begin{aligned} \big |v^{\prime \prime } -E\int \limits _{0}^{\tau ^{\prime }\wedge \tau ^{\prime \prime }} f^{\prime \prime }_{t} e^{-\phi ^{\prime \prime }_{t}}\,dt\big |\le \sup |f^{\prime \prime }|E\int \limits _{\tau ^{\prime }\wedge \tau ^{\prime \prime }}^{\tau ^{\prime \prime }}e^{-\phi ^{\prime \prime }_{t}}\,dt, \end{aligned}$$

where, owing to (4.1), Assumption 4.1, and the fact that \(\Psi \) is bounded and nonnegative, the last expectation is dominated by

$$\begin{aligned} E\Psi (x^{\prime \prime }_{\tau ^{\prime }\wedge \tau ^{\prime \prime }})e^{-\phi ^{\prime \prime }_{\tau ^{\prime }\wedge \tau ^{\prime \prime }}} I_{\tau ^{\prime }<\tau ^{\prime \prime } }&= EI_{\tau ^{\prime }<\tau ^{\prime \prime }}\Psi (x^{\prime \prime }_{\tau ^{\prime } })e^{-\phi ^{\prime \prime }_{ \tau ^{\prime }}}\\&= EI_{\tau ^{\prime }<\tau ^{\prime \prime }}[\Psi (x^{\prime \prime }_{ \tau ^{\prime }})-\Psi (x^{\prime } _{ \tau ^{\prime }})] e^{-\phi ^{\prime \prime }_{ \tau ^{\prime }}}. \end{aligned}$$

Similar estimates hold for \(v^{\prime }\) and this shows how the last terms in (4.4) appear and when they disappear.

Next,

$$\begin{aligned} E\int \limits _{0}^{\tau ^{\prime }\wedge \tau ^{\prime \prime }}\big | f^{\prime }_{t} e^{-\phi ^{\prime }_{t}} - f^{\prime \prime }_{t} e^{-\phi ^{\prime \prime }_{t}}\big |\,dt \le \Delta _{f}+J, \end{aligned}$$

where

$$\begin{aligned} J&= E\int \limits _{0}^{\tau ^{\prime }\wedge \tau ^{\prime \prime }} |f^{\prime \prime }_{t}|\,| e^{-\phi ^{\prime \prime }_{t}} -e^{-\phi ^{\prime } _{t}}|\,dt \le E\int \limits _{0}^{\tau ^{\prime }\wedge \tau ^{\prime \prime }} |f^{\prime \prime }_{t}|C_{t} \Phi _{t}\,dt,\\ C_{t}&= \int \limits _{0}^{t}|c^{\prime }_{s}- c^{\prime \prime }_{s}|\,ds. \end{aligned}$$

By using Fubini’s theorem it is easily seen that the last expectation above equals

$$\begin{aligned} E\int \limits _{0}^{\tau ^{\prime }\wedge \tau ^{\prime \prime }}\left( \int \limits _{s}^{\tau ^{\prime }\wedge \tau ^{\prime \prime }} |f^{\prime \prime }_{t}| \Phi _{t}\,dt\right) |c^{\prime }_{s}- c^{\prime \prime }_{s}|\,ds, \end{aligned}$$

which owing to (4.3) is less than \(M_{f}\Delta _{c}\). This proves the lemma. \(\square \)
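The bound \(| e^{-\phi ^{\prime \prime }_{t}} -e^{-\phi ^{\prime } _{t}}|\le C_{t} \Phi _{t}\) used for \(J\) rests on the elementary inequality \(|e^{-a}-e^{-b}|\le |a-b|(e^{-a}+e^{-b})\) for \(a,b\ge 0\), applied with \(a=\phi ^{\prime }_{t}\), \(b=\phi ^{\prime \prime }_{t}\), and \(|a-b|\le C_{t}\). A quick numerical sanity check of this inequality (our addition, not from the text):

```python
import numpy as np

# Check |e^{-a} - e^{-b}| <= |a - b| (e^{-a} + e^{-b}) for a, b >= 0 on
# random samples.  It holds because |e^{-a} - e^{-b}| =
# e^{-min(a,b)} (1 - e^{-|a-b|}) <= e^{-min(a,b)} |a - b|,
# and e^{-min(a,b)} <= e^{-a} + e^{-b}.
rng = np.random.default_rng(7)
a = rng.uniform(0.0, 10.0, size=100_000)
b = rng.uniform(0.0, 10.0, size=100_000)
lhs = np.abs(np.exp(-a) - np.exp(-b))
rhs = np.abs(a - b) * (np.exp(-a) + np.exp(-b))
violations = int(np.sum(lhs > rhs + 1e-12))
```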

Remark 4.3

Assumption (4.3) is satisfied if, for instance, for each \(t\ge 0\)

$$\begin{aligned} I_{\tau ^{\prime \prime }>t}E\left\{ \int \limits _{t}^{\tau ^{\prime \prime }} |f^{\prime \prime }_{s}| \,ds\mid \mathcal{F }_{t}\right\} \le M_{f}. \end{aligned}$$
(4.5)

Indeed, in that case the left-hand side of (4.3) is less than \(\Phi _{t}\) times the left-hand side of (4.5), just because \(\Phi _{t}\) is a decreasing function of \(t\).

This observation will be later used in conjunction with Theorem 2.2.1 of [6].

5 A general approximation result from above

In this section Assumption 2.1 (iv) about the uniform nondegeneracy as well as Assumption 2.2 concerning \(G\) are not used and the domain \(D\) is not supposed to be bounded.

We impose the following.

Assumption 5.1

  1. (i)

    Assumptions 2.1 (i) b), (ii) are satisfied.

  2. (ii)

    The functions \(c^{\alpha \beta }( x)\) and \(f^{\alpha \beta }( x)\) are bounded on \(A\times B \times \mathbb{R }^{d}\) and uniformly continuous with respect to \(x\in \mathbb{R }^{d}\) uniformly with respect to \(\alpha ,\beta \).

Set

$$\begin{aligned} A_{1}=A \end{aligned}$$

and let \(A_{2}\) be a separable metric space having no common points with \(A_{1}\).

Assumption 5.2

The functions \( \sigma ^{\alpha \beta }( x), b^{\alpha \beta }( x), c^{\alpha \beta }( x)\), and \( f^{\alpha \beta }( x)\) are also defined on \(A_{2}\times B \times \mathbb{R }^{d}\) in such a way that they are independent of \(\beta \) (on \(A_{2}\times B \times \mathbb{R }^{d}\)) and Assumptions 2.1 (i) b), (ii) are satisfied with, perhaps, larger constants \(K_{0}\), \(K_{1}\) and, of course, with \(A_{2}\) in place of \(A\). The functions \( c^{\alpha \beta }( x)\) and \( f^{\alpha \beta }( x)\) are bounded on \(A_{2}\times B \times \mathbb{R }^{d}\).

Define

$$\begin{aligned} \hat{A}=A_{1}\cup A_{2}. \end{aligned}$$

Then we introduce \(\hat{\mathfrak{A }}\) as the set of progressively measurable \(\hat{A}\)-valued processes and \(\hat{\mathbb{B }}\) as the set of \(\mathfrak{B }\)-valued functions \( {\varvec{\beta }}(\alpha _{\cdot })\) on \(\hat{\mathfrak{A }}\) such that, for any \(T\in [0,\infty )\) and any \(\alpha ^{1}_{\cdot }, \alpha ^{2}_{\cdot }\in \hat{\mathfrak{A }}\) satisfying

$$\begin{aligned} P( \alpha ^{1}_{t}=\alpha ^{2}_{t} \quad \text{for almost all}\quad t\le T)=1, \end{aligned}$$

we have

$$\begin{aligned} P( {\varvec{\beta }}_{t}(\alpha ^{1}_{\cdot })={\varvec{\beta }}_{t}(\alpha ^{2}_{\cdot }) \quad \text{for almost all}\quad t\le T)=1. \end{aligned}$$

Assumption 5.3

There exists a nonnegative, bounded function \(G\in C^{2}_{loc}(D)\), uniformly continuous in \(\bar{D}\), such that \(G=0\) on \(\partial D\) (if \(D\ne \mathbb{R }^{d}\)) and

$$\begin{aligned} L^{\alpha \beta }G( x)\le -1 \end{aligned}$$

in \( D\) for all \(\alpha \in \hat{A}\) and \(\beta \in B\).

Here are a few consequences of Assumption 5.3.

Lemma 5.1

For any constant \(\chi \le (2\sup _{D}G)^{-1}\) and any \(\alpha _{\cdot }\in \hat{\mathfrak{A }}, \beta _{\cdot }\in \mathfrak{B }\), and \(x\in \bar{D}\) the process

$$\begin{aligned} G(x _{t\wedge \tau }) e^{ \chi (t\wedge \tau ) -\phi _{t\wedge \tau }} +(1/2)\int \limits _{0}^{t\wedge \tau } e^{ \chi s-\phi _{s}}\,ds, \end{aligned}$$

where we use notation (4.2), is a supermartingale and

$$\begin{aligned} E^{\alpha _{\cdot }\beta _{\cdot }}_{x}\int \limits _{0}^{\tau } e^{\chi t-\phi _{t}}\,dt\le 2G(x). \end{aligned}$$

In particular, for any \(T\in [0,\infty )\)

$$\begin{aligned} E^{\alpha _{\cdot }\beta _{\cdot }}_{x}I_{\tau >T}\int \limits _{T}^{\tau }e^{ -\phi _{t}}\,dt&= e^{-\chi T}E^{\alpha _{\cdot }\beta _{\cdot }}_{x}I_{\tau >T}\int \limits _{T}^{\tau }e^{\chi T -\phi _{t}}\,dt\\&\le e^{-\chi T}E^{\alpha _{\cdot }\beta _{\cdot }}_{x}\int \limits _{0}^{\tau } e^{\chi t-\phi _{t}}\,dt \le 2e^{-\chi T}G(x). \end{aligned}$$

Finally, for any stopping time \(\gamma \le \tau ^{\alpha _{\cdot }\beta _{\cdot } x}\)

$$\begin{aligned}&E^{\alpha _{\cdot }\beta _{\cdot }}_{x}I_{\gamma >T} G(x_{\gamma })e^{-\phi _{\gamma }}\le E^{\alpha _{\cdot }\beta _{\cdot }}_{x}I_{\gamma >T} G(x_{T})e^{-\phi _{T}}\\&\quad \le e^{-\chi T} E^{\alpha _{\cdot }\beta _{\cdot }}_{x} I_{\gamma >T} G(x_{T})e^{\chi T-\phi _{T}}\\&\quad \le e^{-\chi T} E^{\alpha _{\cdot }\beta _{\cdot }}_{x} G(x_{T\wedge \gamma })e^{\chi (T\wedge \gamma )- \phi _{T\wedge \gamma }}\le e^{-\chi T}G(x). \end{aligned}$$

The proof of this lemma is easily achieved by using Itô’s formula and the fact that \(L^{\alpha \beta }G+\chi G \le -1/2\) on \(D\) for all \(\alpha ,\beta \).
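For completeness, here is the computation just indicated; it is a sketch assuming, as the formulas of Remark 2.3 suggest, that \(L^{\alpha \beta }u\) contains the zeroth-order term \(-c^{\alpha \beta }u\). By Itô's formula,

$$\begin{aligned} d\left[ G(x_{t})e^{\chi t-\phi _{t}}\right] =e^{\chi t-\phi _{t}}\left[ L^{\alpha _{t}\beta _{t}}G(x_{t}) +\chi G(x_{t})\right] dt+dm_{t} \le -\frac{1}{2}\,e^{\chi t-\phi _{t}}\,dt+dm_{t}, \end{aligned}$$

where \(m_{t}\) is a local martingale. Hence the process in the lemma is a nonnegative local supermartingale and therefore a supermartingale, and taking expectations and letting \(t\rightarrow \infty \) yields \(E^{\alpha _{\cdot }\beta _{\cdot }}_{x}\int \nolimits _{0}^{\tau } e^{\chi t-\phi _{t}}\,dt\le 2G(x)\).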

Take a constant \(K\ge 0\) and set

$$\begin{aligned} v_{K}(x)=\mathop {{\mathrm{inf\,\,\,sup}}}\limits _{{\varvec{\beta }}\in \hat{\mathbb{B }}\,\,\alpha _{\cdot }\in \hat{\mathfrak{A }}} v^{\alpha _{\cdot }{\varvec{\beta }}(\alpha _{\cdot })}_{K}(x), \end{aligned}$$

where

$$\begin{aligned} v^{\alpha _{\cdot }\beta _{\cdot } }_{K}(x)&= E_{x}^{\alpha _{\cdot }\beta _{\cdot } }\left[ \int \limits _{0}^{\tau } f_{K} ( x_{t})e^{-\phi _{t} }\,dt +g(x_{\tau })e^{-\phi _{\tau } }\right] \\&= : v^{\alpha _{\cdot }\beta _{\cdot } } (x)-K E_{x}^{\alpha _{\cdot }\beta _{\cdot } }\int \limits _{0}^{\tau } I_{\alpha _{t}\in A_{2}}e^{-\phi _{t} }\,dt,\\ f^{\alpha \beta }_{K}( x)&= f^{\alpha \beta }( x)-KI_{\alpha \in A_{2}}. \end{aligned}$$

Observe that

$$\begin{aligned} v(x)=\mathop {{\mathrm{inf\,\,\,sup}}}\limits _{{\varvec{\beta }}\in \mathbb{B }\,\,\alpha _{\cdot }\in \mathfrak{A }} v^{\alpha _{\cdot }{\varvec{\beta }}(\alpha _{\cdot })}(x). \end{aligned}$$

These definitions make sense owing to Lemma 5.1, which also implies that \(v^{\alpha _{\cdot }\beta _{\cdot } }_{K}\) and \(v^{\alpha _{\cdot }\beta _{\cdot } }\) are bounded in \(\bar{D}\).

Theorem 5.2

We have \(v_{K}\rightarrow v\) uniformly on \(\bar{D}\) as \(K\rightarrow \infty \).

We need the following result, in which \(\pi :\hat{A}\rightarrow A_{1}\) is the mapping defined by \(\pi \alpha =\alpha \) if \(\alpha \in A_{1}\) and \(\pi \alpha =\alpha ^{*}\) if \(\alpha \in A_{2}\), where \(\alpha ^{*}\) is a fixed point in \(A\).

Lemma 5.3

There exists a constant \(N\) depending only on \(K_{0},K_{1}\), and \(d\) such that for any \(\alpha _{\cdot }\in \hat{\mathfrak{A }}, \beta _{\cdot }\in \mathfrak{B }, x\in \mathbb{R }^{d}\), \(T\in [0,\infty )\), and stopping time \(\gamma \)

$$\begin{aligned} E^{\alpha _{\cdot }\beta _{\cdot }}_{x} \sup _{t\le T\wedge \gamma }|x_{t}-y_{t}|\le Ne^{NT} \left( E^{\alpha _{\cdot }\beta _{\cdot }}_{x}\int \limits _{0}^{ T\wedge \gamma }I_{\alpha _{t}\in A_{2}}\,dt\right) ^{1/2} , \end{aligned}$$

where

$$\begin{aligned} y^{\alpha _{\cdot } \beta _{\cdot }x}_{t} =x^{\pi \alpha _{\cdot } \beta _{\cdot }x}_{t}. \end{aligned}$$

Proof

For simplicity of notation we drop the superscripts \(\alpha _{\cdot } ,\beta _{\cdot },x\). Observe that \(x_{t}\) and \(y_{t}\) satisfy

$$\begin{aligned} x_{t}&= x+\int \limits _{0}^{t}\sigma ^{\alpha _{s}\beta _{s}}( x_{s}) \,dw_{s}+\int \limits _{0}^{t}b^{\alpha _{s}\beta _{s}}( x_{s}) \,ds,\\ y_{t}&= x+\int \limits _{0}^{t}\sigma ^{\alpha _{s}\beta _{s}}( y_{s}) \,dw_{s}+\int \limits _{0}^{t}b^{\alpha _{s}\beta _{s}}( y_{s}) \,ds+\eta _{t}, \end{aligned}$$

where \(\eta _{t}=I_{t}+J_{t}\),

$$\begin{aligned} I_{t}&= \int \limits _{0}^{t}[\sigma ^{\pi \alpha _{s}\beta _{s}}( y_{s}) -\sigma ^{\alpha _{s}\beta _{s}}( y_{s})]\,dw_{s},\\ J_{t}&= \int \limits _{0}^{t}[b^{\pi \alpha _{s}\beta _{s}}( y_{s}) -b^{\alpha _{s}\beta _{s}}( y_{s})]\,ds. \end{aligned}$$

By Theorem II.5.9 of [6] (where we replace the processes \(x_{t}\) and \(\tilde{x}_{t}\) with appropriately stopped ones) for any \(T\in [0,\infty )\) and any stopping time \(\gamma \)

$$\begin{aligned} E\sup _{t\le T\wedge \gamma }|x _{t}-y _{t}|^{2}\le Ne^{NT}E\sup _{t\le T\wedge \gamma }|I _{t}+J _{t}|^{2}, \end{aligned}$$
(5.1)

where \(N\) depends only on \(K_{1}\) and \(d\), which by Theorem III.6.8 of [8] leads to

$$\begin{aligned} E\sup _{t\le T\wedge \gamma }|x _{t}-y _{t}| \le Ne^{NT}E\sup _{t\le T\wedge \gamma }|I _{t}+J _{t}| \end{aligned}$$
(5.2)

with the constant \(N\) being three times the one from (5.1).

By using Davis’s inequality we see that for any \(T\in [0,\infty )\)

$$\begin{aligned} E\sup _{t\le T\wedge \gamma }|I _{t}| \le NE\left( \int \limits _{0}^{T\wedge \gamma } I_{\alpha _{s}\in A_{2}}\,ds\right) ^{1/2} \le N\left( E\int \limits _{0}^{T\wedge \gamma } I_{\alpha _{s}\in A_{2}}\,ds\right) ^{1/2}. \end{aligned}$$

Furthermore, almost obviously

$$\begin{aligned} E\sup _{t\le T\wedge \gamma }|J_{t}| \le N E\int \limits _{0}^{T\wedge \gamma } I_{\alpha _{s}\in A_{2}}\,ds \le NT^{1/2}\left( E\int \limits _{0}^{T\wedge \gamma } I_{\alpha _{s}\in A_{2}}\,ds\right) ^{1/2} \end{aligned}$$

and this in combination with (5.2) proves the lemma. \(\square \)
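The square-root dependence on the occupation time of \(A_{2}\) can be seen in a toy one-dimensional example (entirely our construction, not the setting above): \(\sigma =1\) for \(\alpha \in A_{1}\), \(\sigma =2\) for \(\alpha \in A_{2}\), \(b=0\), with \(\alpha _{t}\) spending a prescribed fraction of \([0,T]\) in \(A_{2}\), so that \(y_{t}\) is driven by \(\sigma =1\) throughout.

```python
import numpy as np

# Toy illustration (our construction) of the coupling bound:
# one-dimensional dynamics with sigma = 1 for alpha in A_1 and sigma = 2
# for alpha in A_2, b = 0.  The comparison process y_t uses pi(alpha_t),
# i.e. sigma = 1 throughout, driven by the same Wiener increments, so
# x_t - y_t is a Brownian motion run for the occupation time of A_2.
rng = np.random.default_rng(1)
T, dt, n_paths = 1.0, 1e-3, 4000
n_steps = int(T / dt)

def mean_sup_gap(frac_in_a2):
    """Estimate E sup_{t <= T} |x_t - y_t| when alpha_t is in A_2
    exactly on [0, frac_in_a2 * T]."""
    in_a2 = (np.arange(n_steps) * dt) < frac_in_a2 * T
    x = np.zeros(n_paths)
    y = np.zeros(n_paths)
    gap = np.zeros(n_paths)
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        x += (2.0 if in_a2[k] else 1.0) * dw
        y += dw
        gap = np.maximum(gap, np.abs(x - y))
    return gap.mean()

# Occupation times 0.09, 0.36, 0.81: square-root growth predicts
# successive ratios of roughly 2 and 1.5.
gaps = [mean_sup_gap(f) for f in (0.09, 0.36, 0.81)]
```

The estimate on the left of the lemma grows like the square root of the expected occupation time, in line with the right-hand side.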

Proof of Theorem 5.2

Without loss of generality we may assume that \(g\in C^{3}(\mathbb{R }^{d})\), since functions of this class uniformly approximate any \(g\) which is uniformly continuous in \(\mathbb{R }^{d}\). Then notice that, by Itô's formula and Lemma 5.1, for \(g\in C^{3}(\mathbb{R }^{d})\) we have

$$\begin{aligned}&E_{x}^{\alpha _{\cdot }\beta _{\cdot } }\left[ \int \limits _{0}^{\tau } f_{K} ( x_{t})e^{-\phi _{t} }\,dt +g(x_{\tau })e^{-\phi _{\tau }}\right] \\&\quad =g(x)+E_{x }^{\alpha _{\cdot } \beta _{\cdot } } \int \limits _{0}^{\tau }[ \hat{f} ( x_{t} )-KI_{\alpha _{t}\in A_{2}}] e^{-\phi _{t} }\,dt, \end{aligned}$$

where

$$\begin{aligned} \hat{f} ^{\alpha \beta }( x ):=f^{\alpha \beta } ( x ) + L^{\alpha \beta }g( x), \end{aligned}$$

which is bounded and, for \((\alpha ,\beta )\in A\times B\), is uniformly continuous in \(x\) uniformly with respect to \(\alpha ,\beta \). This argument shows that, without loss of generality, we may (and will) also assume that \(g=0\).

Next, since \(\mathfrak{A }\subset \hat{\mathfrak{A }}\) and for \(\alpha _{\cdot }\in \hat{\mathfrak{A }}\) and \({\varvec{\beta }}\in \hat{\mathbb{B }}\) we have \({\varvec{\beta }}(\alpha _{\cdot })\in \mathfrak{B }\), it holds that

$$\begin{aligned} v_{K}\ge v. \end{aligned}$$

To estimate \(v_{K}\) from above, take \({\varvec{\beta }}\in \mathbb{B }\) and define \(\hat{{\varvec{\beta }}} \in \hat{\mathbb{B }}\) by

$$\begin{aligned} \hat{{\varvec{\beta }}}_{t}(\alpha _{\cdot })={\varvec{\beta }}_{t}(\pi \alpha _{\cdot }). \end{aligned}$$

Also take any sequence \(x^{n} \in \bar{D}, n=1,2,...\), and find a sequence \(\alpha ^{n}_{\cdot }\in \hat{\mathfrak{A }}\) such that

$$\begin{aligned} v_{K}(x^{n})&\le \sup _{\alpha _{\cdot }\in \hat{\mathfrak{A }}} E_{x^{n}}^{\alpha _{\cdot }\hat{{\varvec{\beta }}}(\alpha _{\cdot })} \int \limits _{0}^{\tau } f _{K} ( x_{t})e^{-\phi _{t} }\,dt\nonumber \\&\le 1/n+v^{\alpha ^{n}_{\cdot }\hat{{\varvec{\beta }}}(\alpha ^{n}_{\cdot })}(x^{n}) -KE\int \limits _{0}^{\tau ^{n} }I_{\alpha ^{n}_{t}\in A_{2}} e^{-\phi ^{n}_{t} }\,dt, \end{aligned}$$
(5.3)

where

$$\begin{aligned} (\tau ^{n},\phi ^{n}_{t} ) =( \tau ,\phi _{t} ) ^{\alpha ^{n}_{\cdot }\hat{{\varvec{\beta }}}(\alpha ^{n}_{\cdot })x^{n}}. \end{aligned}$$

It follows from Lemma 5.1 that there is a constant \(N\), independent of \(n\) and \(K\), such that \(|v^{\alpha ^{n}_{\cdot }\hat{{\varvec{\beta }}} (\alpha ^{n}_{\cdot })}(x^{n})|\le N\), \(|v|\le N\), and \(v_{K}\ge v\ge -N\), and we conclude from (5.3) that for any \(T\in [0,\infty )\) and

$$\begin{aligned} \bar{c}:=\sup c \end{aligned}$$

we have

$$\begin{aligned} E\int \limits _{0}^{\tau ^{n} }I_{\alpha ^{n}_{t}\in A_{2}} e^{-t\bar{c}}\,dt\le N/K,\quad E\int \limits _{0}^{\tau ^{n} \wedge T}I_{\alpha ^{n}_{t}\in A_{2}} \,dt\le Ne^{NT}/K, \end{aligned}$$
(5.4)

where and below in the proof \(N\) denotes constants which may change from one occurrence to another but are independent of \(n, K\), and \(T\).

Next, introduce

$$\begin{aligned} x^{n}_{t} = x^{\alpha ^{n}_{\cdot }\hat{{\varvec{\beta }}} (\alpha ^{n}_{\cdot }) x^{n}}_{t} ,\quad y^{n}_{t} =x^{\pi \alpha ^{n}_{\cdot }\hat{{\varvec{\beta }}} ( \alpha ^{n}_{\cdot }) x^{n} }_{t} ,\quad \pi \phi ^{n}_{t}=\int \limits _{0}^{t}c^{ \pi \alpha ^{n}_{s}\hat{{\varvec{\beta }}} _{s} ( \alpha ^{n}_{\cdot })}(y^{n}_{s} )\,ds, \end{aligned}$$

define \(\gamma ^{n}\) as the first exit time of \(y^{n}_{t}\) from \(D\), and, with the aim of applying Lemma 4.1, observe that by identifying \(x^{n}_{t},y^{n}_{t},\tau ^{n},\gamma ^{n}\) and the objects related to them with \(x^{\prime }_{t},x^{\prime \prime }_{t},\tau ^{\prime },\tau ^{\prime \prime }\) and the objects related to them, respectively, we have

$$\begin{aligned} |c^{\prime }_{t}-c^{\prime \prime }_{t}| =|c^{\alpha ^{n}_{t}\hat{{\varvec{\beta }}}_{t}(\alpha ^{n}_{\cdot })} ( x^{n}_{t})- c^{\pi \alpha ^{n}_{t}\hat{{\varvec{\beta }}}_{t}(\alpha ^{n}_{\cdot })} ( y^{n}_{t})| . \end{aligned}$$

Hence for any \(T\in (0,\infty )\)

$$\begin{aligned} \Delta _{c}^{n}&= E\int \limits _{0}^{\tau ^{n}\wedge \gamma ^{n}} |c^{\alpha ^{n}_{t}\hat{{\varvec{\beta }}}_{t}(\alpha ^{n}_{\cdot })} ( x^{n}_{t})-c^{\pi \alpha ^{n}_{t}\hat{{\varvec{\beta }}}_{t}(\alpha ^{n}_{\cdot })} ( y^{n}_{t})|(e^{-\phi _{t}^{n} }+e^{-\pi \phi _{t}^{n}})\,dt\\&\le E\int \limits _{0}^{\tau ^{n}\wedge \gamma ^{n}\wedge T} W_{c } (|x^{n}_{t} -y^{n}_{t}|) \,dt+I_{n }+J_{n }, \end{aligned}$$

where \(W_{c }\) is the modulus of continuity of \(c \) and

$$\begin{aligned} I_{n}&= N E\int \limits _{0}^{\tau ^{n}\wedge \gamma ^{n}\wedge T} I_{\alpha ^{n}_{t}\in A_{2}} \,dt,\\ J_{n }&= N E\int \limits _{\tau ^{n}\wedge \gamma ^{n}\wedge T}^{\tau ^{n}\wedge \gamma ^{n}} (e^{-\phi _{t}^{n}}+e^{-\pi \phi _{t}^{n}})\,dt. \end{aligned}$$

By virtue of (5.4) we have \(I_{n}\le Ne^{NT }/K\) and \( J_{n }\le N e^{-\chi T}\) by Lemma 5.1, say with \(\chi = (2\sup _{D}G)^{-1}\). Therefore,

$$\begin{aligned} \Delta _{c}^{n}\le TEW_{c}\left( \sup _{t\le \tau ^{n}\wedge T} |x^{n}_{t}-y^{n}_{t}|\right) + Ne^{NT}/K+Ne^{-\chi T}. \end{aligned}$$

A similar estimate holds if we replace \(c\) with \(f\).

As far as the last two terms in (4.4) are concerned, observe that

$$\begin{aligned}&E|G( x^{n}_{\tau ^{n} })-G( y^{n}_{\tau ^{n} } )| e^{-\pi \phi ^{n}_{\tau ^{n}}}I_{\tau ^{n}<\gamma ^{n}}\\&\quad \le EW_{G}\left( \sup _{t\le \tau ^{n}\wedge \gamma ^{n} \wedge T} |x^{n}_{t}-y^{n}_{t}|\right) +R_{n}, \end{aligned}$$

where \(W_{G}\) is the modulus of continuity of \(G\) and

$$\begin{aligned} R_{n}=EI_{\gamma ^{n}>\tau ^{n}>T}G(y^{n}_{\tau ^{n}}) e^{-\pi \phi ^{n}_{\tau ^{n}}} \le EI_{\gamma ^{n}\wedge \tau ^{n}>T}G(y^{n}_{\gamma ^{n}\wedge \tau ^{n}}) e^{-\pi \phi ^{n}_{\gamma ^{n}\wedge \tau ^{n}}} \le Ne^{-\chi T}, \end{aligned}$$

with the second inequality following from Lemma 5.1.

Finally, in light of Lemma 5.1 one can take \(M_{f}\) in Lemma 4.1 to be a constant \(N\) independent of \(n\) and \(K\) and then by applying Lemma 4.1 we conclude from (5.3) that

$$\begin{aligned} v_{K}(x^{n})\le 1/n&+ v^{\pi \alpha _{\cdot }^{n}{\varvec{\beta }}(\pi \alpha _{\cdot }^{n})}(x^{n})\\&+ (T+1)EW \left( \sup _{t\le \tau ^{n}\wedge \gamma ^{n}\wedge T} |x^{n}_{t}-y^{n}_{t}|\right) + Ne^{NT }/K+Ne^{-\chi T}, \end{aligned}$$

where \(W(r)\) is a bounded function such that \(W(r)\rightarrow 0\) as \(r\downarrow 0\).

This result, (5.4), and Lemma 5.3 imply that, for any \(T\),

$$\begin{aligned} v_{K}(x^{n})\le 1/n +v^{\pi \alpha _{\cdot }^{n}{\varvec{\beta }}(\pi \alpha _{\cdot }^{n})}(x^{n}) +w(T,K)+ Ne^{NT }/K+Ne^{-\chi T}, \end{aligned}$$
(5.5)

where \(w(T,K)\) is independent of \(n\) and \(w(T,K)\rightarrow 0\) as \(K\rightarrow \infty \) for any \(T\). Hence

$$\begin{aligned} v_{K}(x^{n})\le \sup _{\alpha _{\cdot }\in \mathfrak{A }} v^{ \alpha _{\cdot } {\varvec{\beta }}( \alpha _{\cdot } )}(x^{n}) +w(T,K)+ Ne^{NT }/K+Ne^{-\chi T}+1/n. \end{aligned}$$

Owing to the arbitrariness of \({\varvec{\beta }}\in \mathbb{B }\) we have

$$\begin{aligned} v_{K}(x^{n})\le v(x^{n})+w(T,K)+ Ne^{NT }/K+Ne^{-\chi T}+1/n, \end{aligned}$$

and the arbitrariness of \(x^{n}\) yields

$$\begin{aligned} \sup _{\bar{D}}(v_{K}-v) \le w(T,K)+ Ne^{NT }/K+Ne^{-\chi T} , \end{aligned}$$

which leads to the desired result after first letting \(K\rightarrow \infty \) and then \(T\rightarrow \infty \). The theorem is proved. \(\square \)
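Spelled out, the final passage to the limit is the following sketch, using only the last display and the stated properties of \(w\); the reverse inequality \(v\le v_{K}\) is the easy direction, analogous to \(v_{-K}\le v\) in Sect. 6.

```latex
% Fix T and let K -> infinity in the last display:
% w(T,K) -> 0 and Ne^{NT}/K -> 0, so
\limsup_{K\to\infty}\,\sup_{\bar D}(v_{K}-v)\;\le\; Ne^{-\chi T}.
% The left-hand side does not depend on T; letting T -> infinity,
\limsup_{K\to\infty}\,\sup_{\bar D}(v_{K}-v)\;\le\; 0,
% which, combined with the reverse inequality, gives v_K -> v uniformly on \bar D.
```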

6 A general approximation result from below

As in Sect. 5, Assumption 2.1 (iv) about the uniform nondegeneracy as well as Assumption 2.2 concerning \(G\) are not used, and the domain \(D\) is not supposed to be bounded.

However, we suppose that Assumption 5.1 is satisfied. Here we allow \(\beta \) to vary in a larger set, penalizing the use of controls other than those initially available.

Set

$$\begin{aligned} B_{1}=B \end{aligned}$$

and let \(B_{2}\) be a separable metric space having no common points with \(B_{1}\).

Assumption 6.1

The functions \( \sigma ^{\alpha \beta }( x), b^{\alpha \beta }(x), c^{\alpha \beta }(x)\), and \( f^{\alpha \beta }(x)\) are also defined on \(A\times B_{2} \times \mathbb{R }^{d}\) in such a way that they are independent of \(\alpha \) (on \(A\times B_{2} \times \mathbb{R }^{d}\)) and Assumptions 2.1 (i) b), (ii) are satisfied with, perhaps, larger constants \(K_{0}\) and \(K_{1}\) and, of course, with \(B_{2}\) in place of \(B\). The functions \( c^{\alpha \beta }(x)\) and \( f^{\alpha \beta }(x)\) are bounded on \(A\times B_{2} \times \mathbb{R }^{d}\).

Define

$$\begin{aligned} \hat{B}=B_{1}\cup B_{2}. \end{aligned}$$

Then we introduce \(\hat{\mathfrak{B }}\) as the set of progressively measurable \(\hat{B}\)-valued processes and \(\hat{\mathbb{B }}\) as the set of \(\hat{\mathfrak{B }}\)-valued functions \( {\varvec{\beta }}(\alpha _{\cdot })\) on \( \mathfrak{A }\) such that, for any \(T\in [0,\infty )\) and any \(\alpha ^{1}_{\cdot }, \alpha ^{2}_{\cdot }\in \mathfrak{A }\) satisfying

$$\begin{aligned} P( \alpha ^{1}_{t}=\alpha ^{2}_{t} \quad \text{ for} \text{ almost} \text{ all}\quad t\le T)=1, \end{aligned}$$

we have

$$\begin{aligned} P( {\varvec{\beta }}_{t}(\alpha ^{1}_{\cdot })={\varvec{\beta }}_{t}(\alpha ^{2}_{\cdot }) \quad \text{ for} \text{ almost} \text{ all}\quad t\le T)=1. \end{aligned}$$

Assumption 6.2

There exists a nonnegative bounded uniformly continuous in \(\bar{D}\) function \(G\in C^{2}_{loc}(D)\) such that \(G=0\) on \(\partial D\) (if \(D\ne \mathbb{R }^{d}\)) and

$$\begin{aligned} L^{\alpha \beta }G(x)\le -1 \end{aligned}$$

in \( D\) for all \(\alpha \in A\) and \(\beta \in \hat{B}\).

Take a constant \(K\ge 0\) and set

$$\begin{aligned} v_{-K}(x)=\inf _{{\varvec{\beta }}\in \hat{\mathbb{B }}}\sup _{\alpha _{\cdot }\in \mathfrak{A }} v^{\alpha _{\cdot }{\varvec{\beta }}(\alpha _{\cdot })}_{-K}(x), \end{aligned}$$

where

$$\begin{aligned} v^{\alpha _{\cdot }\beta _{\cdot } }_{-K}(x)&= E_{x}^{\alpha _{\cdot }\beta _{\cdot } }\left[ \int \limits _{0}^{\gamma \wedge \tau } f_{-K} ( x_{t})e^{-\phi _{t} }\,dt +g(x_{\gamma \wedge \tau })e^{-\phi _{\gamma \wedge \tau } }\right] \\&= : v^{\alpha _{\cdot }\beta _{\cdot } } (x)+K E_{x}^{\alpha _{\cdot }\beta _{\cdot } }\int \limits _{0}^{\gamma } I_{\beta _{t}\in B_{2}}e^{-\phi _{t} }\,dt,\\ f^{\alpha \beta }_{-K}( x)&= f^{\alpha \beta }( x) +KI_{\beta \in B_{2}} . \end{aligned}$$

We reiterate that

$$\begin{aligned} v(x)=\mathop {{\mathrm{inf\,\,\,sup}}}\limits _{{\varvec{\beta }}\in \mathbb{B }\,\,\alpha _{\cdot }\in \mathfrak{A }} v^{\alpha _{\cdot }{\varvec{\beta }}(\alpha _{\cdot })}(x). \end{aligned}$$

These definitions make sense for the same reason as in Sect. 5.

Theorem 6.1

We have \(v_{-K}\rightarrow v\) uniformly on \(\bar{D}\) as \(K\rightarrow \infty \).

Proof

As in the proof of Theorem 5.2 we may assume that \(g=0\). Then since \(\mathbb{B }\subset \hat{\mathbb{B }}\) we have that \(v_{-K}\le v\). To estimate \(v_{-K}\) from below take any sequence \(x^{n}\in \bar{D}\) and find a sequence \({\varvec{\beta }}^{n}\in \hat{\mathbb{B }}\) such that

$$\begin{aligned} v_{-K}(x^{n})\ge -1/n +\sup _{\alpha _{\cdot }\in \mathfrak{A }} E^{\alpha _{\cdot }{\varvec{\beta }}^{n}(\alpha _{\cdot })}_{x^{n}} \int \limits _{0}^{\tau }f_{-K}( x_{t})e^{-\phi _{t} } \,dt. \end{aligned}$$

Since the last supremum is certainly greater than a negative constant independent of \(n\) plus

$$\begin{aligned} K\sup _{\alpha _{\cdot }\in \mathfrak{A }} E^{\alpha _{\cdot }{\varvec{\beta }}^{n}(\alpha _{\cdot })}_{x^{n}} \int \limits _{0}^{\tau }I_{{\varvec{\beta }}_{t}(\alpha _{\cdot }) \in B_{2}}e^{-\bar{c}t} \,dt, \end{aligned}$$

where \(\bar{c}\) is the same as in Sect. 5, we conclude that

$$\begin{aligned} \sup _{\alpha _{\cdot }\in \mathfrak{A }} E^{\alpha _{\cdot }{\varvec{\beta }}^{n}(\alpha _{\cdot })}_{x^{n}} \int \limits _{0}^{\tau }I_{{\varvec{\beta }}_{t}(\alpha _{\cdot }) \in B_{2}}e^{-\bar{c}t} \,dt\le N/K. \end{aligned}$$
(6.1)
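A brief sketch of how (6.1) is obtained from the preceding two displays, under the standing boundedness assumptions (so that all value functions in question are bounded in absolute value by some constant \(N\) independent of \(n\) and \(K\)):

```latex
% The choice of beta^n, the decomposition f_{-K} = f + K I_{beta in B_2},
% and e^{-\phi_t} >= e^{-\bar c t} give
v_{-K}(x^{n})\;\ge\; -\tfrac1n \;-\; N
  \;+\; K\,\sup_{\alpha_\cdot\in\mathfrak A}
    E^{\alpha_\cdot\boldsymbol\beta^{n}(\alpha_\cdot)}_{x^{n}}
    \int_0^{\tau} I_{\boldsymbol\beta_t(\alpha_\cdot)\in B_2}\,e^{-\bar c t}\,dt .
% On the other hand v_{-K} <= v <= N; rearranging yields (6.1)
% with a new constant N independent of n and K.
```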

Next, introduce \(\pi \beta \) similarly to how \(\pi \alpha \) was introduced and find a sequence of \(\alpha ^{n}_{\cdot }\in \mathfrak{A }\) such that

$$\begin{aligned} E^{\alpha ^{n}_{\cdot }\pi {\varvec{\beta }}^{n}(\alpha ^{n}_{\cdot })}_{x^{n}} \int \limits _{0}^{\tau }f ( x_{t})e^{-\phi _{t} } \,dt\ge v(x^{n})-1/n. \end{aligned}$$

By using (6.1) and arguing as in the proof of Theorem 5.2 one proves that

$$\begin{aligned} I_{n}:=\big |E^{\alpha ^{n}_{\cdot }\pi {\varvec{\beta }}^{n}(\alpha ^{n}_{\cdot })}_{x^{n}} \int \limits _{0}^{ \tau }f ( x_{t})e^{-\phi _{t} } \,dt - E^{\alpha ^{n}_{\cdot } {\varvec{\beta }}^{n}(\alpha ^{n}_{\cdot })}_{x^{n}} \int \limits _{0}^{ \tau }f ( x_{t})e^{-\phi _{t} } \,dt\big | \end{aligned}$$

tends to zero as \(n\rightarrow \infty \). This leads to the desired result since

$$\begin{aligned} v_{-K}(x^{n})&\ge -1/n +E^{\alpha ^{n}_{\cdot }{\varvec{\beta }}^{n}(\alpha ^{n}_{\cdot })}_{x^{n}} \int \limits _{0}^{ \tau }f ( x_{t})e^{-\phi _{t} } \,dt\\&\ge -1/n-I_{n}+E^{\alpha ^{n}_{\cdot }\pi {\varvec{\beta }}^{n}(\alpha ^{n}_{\cdot })}_{x^{n}} \int \limits _{0}^{ \tau }f ( x_{t})e^{-\phi _{t} } \,dt\\&\ge -2/n-I_{n}+v(x^{n}). \end{aligned}$$

The theorem is proved. \(\square \)

7 Versions of Theorems 5.2 and 6.1 for the uniformly nondegenerate case and proof of Theorem 2.3

In Theorem 7.1 below we suppose that Assumptions 2.1 (i) b), (ii) are satisfied and the domain \(D\) is bounded. We also take extensions of \(\sigma ,b,c\) and \(f\) as in Sects. 5 and 6 satisfying Assumptions 5.2 and 6.1 and additionally require the extended \(\sigma ^{\alpha \beta }\) to also satisfy Assumption 2.1 (iv), perhaps with a different constant \(\delta \).

Finally, we suppose that Assumptions 5.3 and 6.2 are satisfied.

Then take \(\gamma \) and \(\lambda \) as in Sect. 5 (and Sect. 6) and introduce the functions \(v_{\pm K}\) and \(v\) as in Sects. 5 and 6.

Theorem 7.1

We have \(v_{\pm K}\rightarrow v\) uniformly on \(\bar{D}\) as \(K\rightarrow \infty \).

Proof

For \(\varepsilon >0\) we construct \(v_{\varepsilon ,\pm K}(x)\) and \(v_{\varepsilon }(x)\) from \(\sigma ,b,c^{(\varepsilon )},f^{(\varepsilon )}\) (mollifying only the original \(c,f\) and not their extensions) and \(g\) in the same way as \(v_{\pm K}\) and \(v\) were constructed from \(\sigma ,b,c, f\), and \(g\). By Theorems 5.2 and 6.1 we have \(v_{\varepsilon ,\pm K}\rightarrow v_{\varepsilon }\) uniformly on \(\bar{D}\) as \(K\rightarrow \infty \) for any \(\varepsilon >0\).

Therefore, we only need to show that \(|v_{\varepsilon ,\pm K}- v_{\pm K}|+|v_{\varepsilon }- v |\le W(\varepsilon )\), where \(W(\varepsilon )\) is independent of \(K\) and tends to zero as \(\varepsilon \downarrow 0\). However, by Theorem 2.2.1 of [6] and Lemma 4.1 (see also Remarks 4.1 and 4.2)

$$\begin{aligned} |v_{\varepsilon ,\pm K}&- v_{\pm K}|+|v_{\varepsilon }- v |\le N \Vert \sup _{\alpha \in A,\beta \in B}|f^{\alpha \beta }- (f^{\alpha \beta })^{(\varepsilon )}|\,\Vert _{L_{d}(D)}\\&+ N\Vert \sup _{\alpha \in {\hat{A}},\beta \in {\hat{B}}}|f^{\alpha \beta } |\,\Vert _{L_{d}(D)} \Vert \sup _{\alpha \in A,\beta \in B}|c^{\alpha \beta }- (c^{\alpha \beta })^{(\varepsilon )}|\,\Vert _{L_{d}(D)}. \end{aligned}$$

This proves the theorem. \(\square \)

In the remaining part of the section the assumptions of Theorem 2.3, that is, all the assumptions stated in Sect. 2, are supposed to be satisfied.

Proof of Theorem 2.3

For obvious reasons, while proving the inequalities (2.13) and (2.15) in assertions (i) and (ii), we may assume that \(g\in C^{2}(\mathbb{R }^{d})\).

(i) First suppose that \(D\in C^{2}\). By Theorem 1.1 of [9] there is a set \(A_{2}\) and bounded continuous functions \(\sigma ^{\alpha }=\sigma ^{\alpha \beta }, b^{\alpha }=b^{\alpha \beta }, c^{\alpha }=c^{\alpha \beta }\) (independent of \(x\) and \(\beta \)), and \(f^{\alpha \beta }\equiv 0\) defined on \(A_{2}\) such that Assumption 2.1 (iv) about the uniform nondegeneracy of \(a^{\alpha }=a^{\alpha \beta } =(1/2)\sigma ^{\alpha }(\sigma ^{\alpha })^{*}\) is satisfied for \(\alpha \in A_{2}\) (perhaps with a different constant \(\delta >0\)) and such that for any \(K\ge 0\) the equation (the following notation is explained below)

$$\begin{aligned} H_{K}[u]=0 \end{aligned}$$
(7.1)

(a.e.) in \(D\) with boundary condition \(u=g\) on \(\partial D\) has a unique solution

$$\begin{aligned} u_{K} \in C^{1}(\bar{D})\bigcap _{p\ge 1} W^{2}_{p}(D) \end{aligned}$$

(recall Assumption 2.3 (iv) and that \(g\in C^{2}(\mathbb{R }^{d})\)). Here

$$\begin{aligned} H_{K}[u](x)&:= \max (H[u](x), P[u](x)-K),\end{aligned}$$
(7.2)
$$\begin{aligned} P[u](x)&= \sup _{\alpha \in A_{2}} \left[ a_{ij}^{\alpha } D_{ij}u(x) +b^{\alpha }_{i} D_{i}u(x)-c^{\alpha } u(x) \right] . \end{aligned}$$
(7.3)

Observe that

$$\begin{aligned}&\max (H[u](x), P[u](x)-K)\\&\quad =\max \left\{ \mathop {{\mathrm{sup\,\,\,inf}}}\limits _{\alpha \in A_{1}\,\,\beta \in B} [L^{\alpha \beta }u(x)+f^{\alpha \beta }(x)], \mathop {{\mathrm{sup\,\,\,inf}}}\limits _{\alpha \in A_{2}\,\,\beta \in B} [L^{\alpha \beta }u(x)+f^{\alpha \beta }(x)-K]\right\} \\&\quad =\mathop {{\mathrm{sup\,\,\,inf}}}\limits _{\alpha \in \hat{A}\,\,\beta \in B} \left[ L^{\alpha \beta }u( x)+f^{\alpha \beta }_{K}( x)\right] \quad (f^{\alpha \beta }_{K}( x)=f^{\alpha \beta } ( x) I_{\alpha \in A_{1}}-KI_{\alpha \in A_{2}}), \end{aligned}$$

where the first equality follows from the definition of \(H[u]\), (7.3), and the fact that \(L^{\alpha \beta }\) is independent of \(\beta \) for \(\alpha \in A_{2}\).
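Behind the second equality is the elementary splitting of a supremum over a union, sketched here with \([\,\cdot \,]\) standing for \(L^{\alpha \beta }u(x)+f^{\alpha \beta }_{K}(x)\):

```latex
\sup_{\alpha\in\hat A}\,\inf_{\beta\in B}\,[\,\cdot\,]
  \;=\;\max\Bigl(\sup_{\alpha\in A_{1}}\inf_{\beta\in B}[\,\cdot\,],\;
                 \sup_{\alpha\in A_{2}}\inf_{\beta\in B}[\,\cdot\,]\Bigr),
% and, for alpha in A_2, f_K = -K and L^{alpha beta} does not depend on beta,
% so the second supremum equals P[u](x) - K.
```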

We set \(u_{K}(x)=g(x)\) if \(x\not \in D\).

Since \(D\) is sufficiently regular by assumption, there exists a sequence \(u^{n}(x)\) of functions of class \(C^{2}(\bar{D})\) which converges to \(u_{K}\) as \(n\rightarrow \infty \) uniformly in \(\bar{D}\) and in \(W^{2}_{p}(D)\) for any \(p\ge 1\). Hence, by Theorem 4.2 of [10] we have

$$\begin{aligned} u _{K}(x)=\mathop {{\mathrm{inf\,\,\,sup}}}\limits _{{\varvec{\beta }}\in \hat{\mathbb{B }}\,\,\alpha \in \hat{\mathfrak{A }}} E_{x}^{\alpha _{\cdot }{\varvec{\beta }}(\alpha _{\cdot })}\left[ \int \limits _{0}^{\tau } f_{K} ( x_{t})e^{-\phi _{t}}\,dt+g(x_{\tau })e^{-\phi _{\tau }}\right] . \end{aligned}$$

By Theorem 7.1 we have that \(u_{K}\rightarrow v\) uniformly on \(\bar{D}\) and, since they coincide outside \(D\), the convergence is uniform on \(\mathbb{R }^{d}\). In particular,

$$\begin{aligned} v\in C(\mathbb{R }^{d}). \end{aligned}$$
(7.4)

On the other hand, by assumption, (7.1), and (7.2) we have \(\check{H}[\psi u_{K}]\le 0\) (a.e. in \(D^{\check{\!\!\!}\,}\)). We also know that \(u_{K}\ge v\) and, in particular, \(\psi u_{K}\ge \check{v}\) on \(\partial D^{\check{\!\!\!}\,}\). Furthermore, \(\psi u^{n}\in C^{2}(\bar{D^{\check{\!\!\!}\,}})\), the functions \(\psi u^{n}\) are \(p\)-insensitive by Assumption, and, for each \(n\), the second-order derivatives of \(\psi u^{n}\) are uniformly continuous in \(\bar{D^{\check{\!\!\!}\,}}\) (because of our assumptions on \(\Pi \) and \(\psi \)). Also \(\psi u^{n}\) converge to \(\psi u_{K}\) as \(n\rightarrow \infty \) uniformly in \(\bar{D^{\check{\!\!\!}\,}}\) and, as is easy to see, for any \(\check{x}\in D^{\check{\!\!\!}\,}\)

$$\begin{aligned}&E^{\alpha _{\cdot }\beta _{\cdot }}_{\check{x}} \int \limits _{0}^{\check{\tau }}\left( |D^{2}_{\check{x}}( \psi u^{n}) - D^{2}_{\check{x}} ( \psi u_{K} )| +|D_{\check{x}}( \psi u^{n})-D_{\check{x}}( \psi u_{K}) |\right) ( \check{x} _{t})e^{-\check{\phi }_{t}}\,dt\\&\quad \le NE^{\alpha _{\cdot }\beta _{\cdot }}_{\check{x}} \int \limits _{0}^{\check{\tau }} \bigg (|D^{2}u^{n}-D^{2}u_{K}|+ |Du^{n}-Du_{K}| +| u^{n}- u_{K}|\bigg )(\Pi (\check{x}_{t}))\,dt, \end{aligned}$$

where, as always, \(u_{K}(\check{x})=u_{K}(\Pi (\check{x}))\) and \(u^{n}(\check{x})=u^{n}(\Pi (\check{x}))\) and the constant \(N\) depends only on \(\Vert \psi , \Pi \Vert _{C^{1,1}}, d\), and \(k\). By Assumption 2.5 (iii) and Theorem 2.2.1 of [6] the last expression tends to zero as \(n\rightarrow \infty \) uniformly with respect to \(\alpha _{\cdot }\in \mathfrak{A }\) and \(\beta _{\cdot }\in \mathfrak{B }\). We also recall that the remaining parts of Assumption 2.5 are imposed, which allows us to apply Theorem 4.1 of [10] and conclude that \(\psi u_{K}\ge \check{v}\), which after letting \(K\rightarrow \infty \) yields \(\psi v\ge \check{v}\). Theorem 4.1 of [10] also says that

$$\begin{aligned} \psi u_{K}(\check{x})&\ge \mathop {{\mathrm{inf\,\,\,sup}}}\limits _{{\varvec{\beta }}\in \mathbb{B }\,\,\alpha _{\cdot }\in \mathfrak{A }} E_{\check{x}}^{\alpha _{\cdot }{\varvec{\beta }}(\alpha _{\cdot })}\left[ \psi u_{K}(\check{x}_{\gamma \wedge \check{\tau }}) e^{-\check{ \phi }_{\gamma \wedge \check{\tau }}- \psi _{\gamma \wedge \check{\tau }}}\right. \nonumber \\&\left. +\int \limits _{0}^{\gamma \wedge \check{\tau }} \{\check{f}(p _{t},\check{x} _{t})+\lambda _{t} \psi u_{K}(\check{x} _{t})\}e^{-\check{ \phi }_{t}- \psi _{t}}\,dt \right] . \end{aligned}$$
(7.5)

By letting \(K\rightarrow \infty \) in (7.5) and using the uniform convergence of \(u_{K}\) to \(v\) we easily get the desired result in our particular case of smooth \(D\).

So far we have not used the assumption concerning the boundary behavior of \(G\) and \(\check{G}\), which we need now to deal with the case of general \(D\). Take an expanding sequence of smooth domains \(D_{n}\subset D\) such that \(D=\bigcup D_{n}\) and construct the functions \(v^{n}\) in the same way as \(v\) by replacing \(D\) with \(D_{n}\). We extend \(v^{n}\) to \(\mathbb{R }^{k}\) as in (2.11).

Also construct \(\check{v}^{n}\) by replacing \(D^{\check{\!\!\!}\,}\) with

$$\begin{aligned} D^{\check{\!\!\!}\,}_{n}=D^{\check{\!\!\!}\,}\cap \{\check{x}:\Pi (\check{x}) \in D_{n}\}\quad (=D_{n}\quad \text{ if} \quad k =d\quad \text{ and}\quad \Pi (x)\equiv x) \end{aligned}$$

and the boundary data \(\psi v^{n}\) in place of \(\psi v\), that is

$$\begin{aligned} \check{v}^{n}(\check{x}) =\mathop {{\mathrm{inf\,\,\,sup}}}\limits _{{\varvec{\beta }}\in \mathbb{B }\,\,\alpha _{\cdot }\in \mathfrak{A }} E_{\check{x}}^{\alpha _{\cdot }{\varvec{\beta }}(\alpha _{\cdot })} \left[ \psi v^{n}(\check{x}_{\check{\tau }(n)} ) e^{-\check{\phi }_{\check{\tau }(n)}} +\int \limits _{0}^{\check{\tau }(n)}\check{f}(p_{t},\check{x} _{t}) e^{-\check{\phi }_{t}}\,dt\right] , \end{aligned}$$

where \(\check{\tau }^{\alpha _{\cdot }\beta _{\cdot }\check{x}}(n)\) is the first exit time of \(\check{x}_{t}^{\alpha _{\cdot }\beta _{\cdot }\check{x}}\) from \(D^{\check{\!\!\!}\,}_{n}\). Then by the above we have that

$$\begin{aligned} \psi v^{n}\ge \check{v}^{n} \end{aligned}$$
(7.6)

and

$$\begin{aligned} \psi v^{n}(\check{x})&\ge \mathop {{\mathrm{inf\,\,\,sup}}}\limits _{{\varvec{\beta }}\in \mathbb{B }\,\,\alpha _{\cdot }\in \mathfrak{A }} E_{\check{x}}^{\alpha _{\cdot }{\varvec{\beta }}(\alpha _{\cdot })}\left[ \psi v^{n}(\check{x}_{\gamma \wedge \check{\tau }(n)}) e^{-\check{ \phi }_{\gamma \wedge \check{\tau }(n)} - \psi _{\gamma \wedge \check{\tau }(n)}}\right. \nonumber \\&\left. +\int \limits _{0}^{\gamma \wedge \check{\tau }(n)} \{\check{f}(p_{t},\check{x} _{t} )+\lambda _{t} \psi v^{n}(\check{x} _{t})\} e^{-\check{ \phi }_{t}- \psi _{t}}\,dt \right] . \end{aligned}$$
(7.7)

We now claim that, as \(n\rightarrow \infty \),

$$\begin{aligned} \sup _{\mathbb{R }^{d}}|v^{n}-v|\rightarrow 0 ,\end{aligned}$$
(7.8)
$$\begin{aligned} \sup _{\mathbb{R }^{k }}|\check{v}^{n}-\check{v}| \rightarrow 0. \end{aligned}$$
(7.9)

That (7.8) holds is proved in [10] (see Sect. 6 there). Owing to (7.8), to prove (7.9) it suffices to show that uniformly in \(\mathbb{R }^{k }\) (notice the replacement of \(v^{n}\) by \(v\))

$$\begin{aligned}&\mathop {{\mathrm{inf\,\,\,sup}}}\limits _{{\varvec{\beta }}\in \mathbb{B }\,\,\alpha _{\cdot }\in \mathfrak{A }} E_{\check{x}}^{\alpha _{\cdot }{\varvec{\beta }}(\alpha _{\cdot })} \left[ \psi v (\check{x}_{\check{\tau }(n)} ) e^{-\check{\phi }_{\check{\tau }(n)}} +\int \limits _{0}^{\check{\tau }(n)}\check{f}(p_{t},\check{x} _{t}) e^{-\check{\phi }_{t}}\,dt\right] \nonumber \\&\quad \rightarrow \mathop {{\mathrm{inf\,\,\,sup}}}\limits _{{\varvec{\beta }}\in \mathbb{B }\,\,\alpha _{\cdot }\in \mathfrak{A }} E_{\check{x}}^{\alpha _{\cdot }{\varvec{\beta }}(\alpha _{\cdot })} \left[ \psi v (\check{x}_{\check{\tau } } ) e^{-\check{\phi }_{\check{\tau } }} +\int \limits _{0}^{\check{\tau } }\check{f}(p_{t},\check{x} _{t}) e^{-\check{\phi }_{t}}\,dt\right] \end{aligned}$$
(7.10)

(recall that \(\check{\tau }^{\alpha _{\cdot }\beta _{\cdot }\check{x}}\) is the first exit time of \(\check{x}_{t}^{\alpha _{\cdot }\beta _{\cdot }\check{x}}\) from \(D^{\check{\!\!\!}\,}\)). Both sides of (7.10) coincide if \(\check{x}\not \in D^{\check{\!\!\!}\,}\). Therefore, we need to prove the uniform convergence only in \(D^{\check{\!\!\!}\,}\).

Here \(v\in C(\bar{D})\) and it is convenient to prove (7.10) just for any such \(v\), regardless of its particular construction. In that case, relying on Assumption 2.7, as in the proof of Theorem 2.2 of [10] (see Section 6 there), we reduce our problem to proving that uniformly in \(D^{\check{\!\!\!}\,}\)

$$\begin{aligned} \hat{v}_{n}(x)&:= \mathop {{\mathrm{inf\,\,\,sup}}}\limits _{{\varvec{\beta }}\in \mathbb{B }\,\,\alpha _{\cdot }\in \mathfrak{A }} E_{\check{x}}^{\alpha _{\cdot }{\varvec{\beta }}(\alpha _{\cdot })} \int \limits _{0}^{\check{\tau }(n)}\check{f}(p_{t},\check{x} _{t}) e^{-\check{\phi }_{t}}\,dt\\ \rightarrow \hat{v}(x)&:= \mathop {{\mathrm{inf\,\,\,sup}}}\limits _{{\varvec{\beta }}\in \mathbb{B }\,\,\alpha _{\cdot }\in \mathfrak{A }} E_{\check{x}}^{\alpha _{\cdot }{\varvec{\beta }}(\alpha _{\cdot })} \int \limits _{0}^{\check{\tau } }\check{f}(p_{t},\check{x} _{t}) e^{-\check{\phi }_{t}}\,dt \end{aligned}$$

with a perhaps modified \(\check{f}\) still satisfying (2.7) and (2.8) and satisfying Assumptions (i), (ii) with a correspondingly modified \(\bar{f}_{\varepsilon }\).

For \(\varepsilon >0\) introduce

$$\begin{aligned} N_{\varepsilon }=\sup _{(\alpha ,\beta ,\check{x})\in A\times B\times D^{\check{\!\!\!}\,}}| \bar{f}_{\varepsilon }^{\alpha \beta }(\check{x})| \end{aligned}$$

and observe that

$$\begin{aligned} |\hat{v}(\check{x})-\hat{v}_{n}(\check{x})| \le \delta _{1}^{-1}I_{n}(x), \end{aligned}$$

where

$$\begin{aligned}&\delta _{1}I_{n}(x):=\delta _{1}\sup _{\alpha _{\cdot }\in \mathfrak{A }, \beta _{\cdot }\in \mathfrak{B }}E^{\alpha _{\cdot }\beta _{\cdot }}_{\check{x}} \int \limits _{\check{\tau }_{n}}^{\check{\tau }} |f(p_{t},\check{x}_{t})|e^{-\check{\phi }_{t}}\,dt\\&\quad \le \sup _{\alpha _{\cdot }\in \mathfrak{A }, \beta _{\cdot }\in \mathfrak{B }}E^{\alpha _{\cdot }\beta _{\cdot }}_{\check{x}} \int \limits _{\check{\tau }_{n}}^{\check{\tau }} |\bar{f} (\check{x}_{t})|e^{-\check{\phi }_{t}}\,dt \le N_{\varepsilon } \sup _{\alpha _{\cdot }\in \mathfrak{A }, \beta _{\cdot }\in \mathfrak{B }}E^{\alpha _{\cdot }\beta _{\cdot }}_{\check{x}} \int \limits _{\check{\tau }_{n}}^{\check{\tau }} e^{-\check{\phi }_{t}}\,dt+J_{n}(x), \end{aligned}$$

where

$$\begin{aligned} J_{n}(x) =\sup _{\alpha _{\cdot }\in \mathfrak{A }, \beta _{\cdot }\in \mathfrak{B }}E^{\alpha _{\cdot }\beta _{\cdot }}_{\check{x}} \int \limits _{0}^{\check{\tau }} |\bar{f} (\check{x}_{t})-\bar{f}_{\varepsilon } (\check{x}_{t})|e^{-\check{\phi }_{t}}\,dt. \end{aligned}$$

By Assumption 2.5 (ii) we have that \(J_{n}(x)\rightarrow 0\) as \(\varepsilon \downarrow 0\) uniformly in \(D^{\check{\!\!\!}\,}\) (this is the only place where we use the uniformity in (2.10)). Furthermore, by Lemma 5.1 of [10]

$$\begin{aligned} \sup _{\alpha _{\cdot }\in \mathfrak{A }, \beta _{\cdot }\in \mathfrak{B }}E^{\alpha _{\cdot }\beta _{\cdot }}_{\check{x}} \int \limits _{\check{\tau }_{n}}^{\check{\tau }} e^{-\check{\phi }_{t}}\,dt\le \sup _{\partial D^{\check{\!\!\!}\,}_{n} }\check{G}. \end{aligned}$$
(7.11)

As is easy to check, \(\Pi (\partial D^{\check{\!\!\!}\,}_{n}) \subset \partial D_{n}\), so that, if we have a sequence of points \(\check{x}_{n}\in \partial D^{\check{\!\!\!}\,}_{n}\), then \(\mathrm{dist}\,(\Pi (\check{x}_{n}),\partial D)\rightarrow 0\) as \(n\rightarrow \infty \). It follows by Assumption that the right-hand side of (7.11) goes to zero as \(n\rightarrow \infty \). This proves that \(I_{n}(x)\rightarrow 0\) uniformly in \(D^{\check{\!\!\!}\,}\), yields (7.10) and (7.9), and, along with (7.8) and (7.6), proves that \(\psi v\ge \check{v}\). One passes to the limit in (7.7) similarly, and this finally brings the proof of assertion (i) to an end.
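The two-parameter limit in this step can be made explicit; schematically, writing \(\check{D}\) for \(D^{\check{\!\!\!}\,}\) and \(J(\varepsilon )\) for the quantity denoted \(J_{n}(x)\) in the text (its definition involves \(\varepsilon \) but not \(n\)), the bounds above combine to:

```latex
\delta_{1}\,\sup_{\check x\in\check D} I_{n}(\check x)
  \;\le\; N_{\varepsilon}\,\sup_{\partial \check D_{n}}\check G \;+\; J(\varepsilon).
% Given eta > 0: first choose epsilon so that J(epsilon) < eta/2
% (Assumption 2.5 (ii)), then choose n so large that
% N_epsilon * sup_{\partial \check D_n} \check G < eta/2
% (the boundary behavior of \check G via (7.11)).
```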

(ii) As above first suppose that \(D\in C^{2}\). By Theorem 1.3 of [9] there is a set \(B_{2}\) and bounded continuous functions \(\sigma ^{\beta }=\sigma ^{\alpha \beta }, b^{\beta }=b^{\alpha \beta }, c^{\beta }=c^{\alpha \beta }\) (independent of \(x\) and \(\alpha \)), and \(f^{\alpha \beta }\equiv 0\) defined on \(B_{2}\) such that Assumption 2.1 (iv) about the uniform nondegeneracy of \(a^{\beta }=a^{\alpha \beta } =(1/2)\sigma ^{\beta }(\sigma ^{\beta })^{*}\) is satisfied for \(\beta \in B_{2}\) (perhaps with a different constant \(\delta >0\)) and such that for any \(K\ge 0\) the equation (the following notation is explained below)

$$\begin{aligned} H_{-K}[u]=0 \end{aligned}$$

(a.e.) in \(D\) with boundary condition \(u=g\) on \(\partial D\) has a unique solution

$$\begin{aligned} u_{-K} \in C^{1}(\bar{D})\bigcap _{p\ge 1} W^{2}_{p}(D) . \end{aligned}$$

Here

$$\begin{aligned} H_{-K}[u](x)&:= \min (H[u](x), P[u](x)+K),\\ P[u](x)&= \inf _{\beta \in B_{2}} \left[ a_{ij}^{\beta }D_{ij}u(x) +b^{\beta }_{i}D_{i}u(x)-c^{\beta }u(x)\right] . \end{aligned}$$

We introduce

$$\begin{aligned} f^{\alpha \beta }_{K}( x)=f^{\alpha \beta }( x) I_{\beta \in B_{1}}+KI_{\beta \in B_{2}} \end{aligned}$$

and note that

$$\begin{aligned}&\mathop {{\mathrm{sup\,\,\,inf}}}\limits _{\alpha \in A\,\,\beta \in \hat{B}} \left[ L^{\alpha \beta }u( x)+f^{\alpha \beta }_{K}( x)\right] \\&\quad =\sup _{\alpha \in A }\min \left\{ \inf _{ \beta \in B} [ L^{\alpha \beta }u(x)+ f^{\alpha \beta }(x)], \inf _{\beta \in B_{2}} [L^{\alpha \beta }u(x)+f^{\alpha \beta }(x)+K]\right\} \\&\quad =\min (H[u](x), P[u](x)+K). \end{aligned}$$

After that it suffices to repeat the above proof relying again on Theorem 7.1.

The theorem is proved. \(\square \)

8 Proof of Theorems 2.1 and 2.2

Here all assumptions of Theorem 2.1 are supposed to be satisfied.

Proof of Theorem 2.1

If the functions \((c,f)^{\alpha \beta }(x)\) are bounded on \(A\times B\times \mathbb{R }^{d}\), then according to Remark 2.4 assertion (ii) of Theorem 2.1 follows immediately from Theorem 2.3. The continuity of \(v\) also follows from the proof of Theorem 2.3.

In the general case, for \(\varepsilon >0\), define

$$\begin{aligned} (c_{\varepsilon },f_{\varepsilon })^{\alpha \beta }(x)= ( c^{\alpha \beta }, f^{\alpha \beta })^{(\varepsilon )} (x) \end{aligned}$$

and construct \(v_{\varepsilon }(x)\) from \(\sigma ,b,c_{\varepsilon }, f_{ \varepsilon }\), and \(g\) in the same way as \(v\) was constructed from \(\sigma ,b,c, f\), and \(g\). By the above (2.6) holds if we replace \(f\) and \(c\) with \(f_{\varepsilon }\) and \(c_{\varepsilon }\) respectively.

We first take \(\lambda \equiv 0 \) and \(\gamma ^{\alpha _{\cdot }\beta _{\cdot }} =\tau ^{\alpha _{\cdot }\beta _{\cdot }x}\) in the counterpart of (2.6) corresponding to \(v_{\varepsilon }\). Then by Theorem 2.2.1 of [6] and Lemma 4.1 (see also Remarks 4.1 and 4.2)

$$\begin{aligned}&|v(x)-v_{\varepsilon }(x)|\le N \Vert \sup _{\alpha \in A,\beta \in B}| f^{\alpha \beta }- ( f^{\alpha \beta })^{(\varepsilon )}|\,\Vert _{L_{d}(D)}\\&\quad +N\Vert \sup _{\alpha \in A,\beta \in B}| f^{\alpha \beta } |\,\Vert _{L_{d}(D)} \Vert \sup _{\alpha \in A,\beta \in B}| c^{\alpha \beta }- ( c^{\alpha \beta })^{(\varepsilon )}|\,\Vert _{L_{d}(D)}. \end{aligned}$$

It follows by Assumption 2.1 (iii) that \(v_{\varepsilon }\rightarrow v\) uniformly on \(\bar{D}\) and \(v\) is continuous in \(\bar{D}\). After that we easily pass to the limit in the counterpart of (2.6) corresponding to \(v_{\varepsilon }\) for arbitrary \(\lambda \) and \(\gamma \) again on the basis of Lemma 4.1. The theorem is proved. \(\square \)

Proof of Theorem 2.2

We know from [9] (see Remark 1.3 there) that \(u_{K}\) introduced in the proof of Theorem 2.3 (see Sect. 7) satisfies an elliptic equation

$$\begin{aligned} a^{K}_{ij}D_{ij}u_{K}+b^{K}_{i}D_{i}u_{K}-c^{K}u_{K}+f^{K}=0, \end{aligned}$$

where \((a^{K}_{ij})\) satisfies the uniform nondegeneracy condition (see Assumption 2.1 (iv)) with a constant \(\delta _{1}=\delta _{1}(\delta ,d)>0\), the functions \(|b^{K}|\) and \(c^{K}\) are bounded by a constant depending only on \(K_{0}\), \(\delta \), and \(d\), we have \(c^{K}\ge 0\), and

$$\begin{aligned} |f^{K}|\le \sup _{\alpha ,\beta }| f^{\alpha \beta }|. \end{aligned}$$

Then according to classical results (see, for instance, [3] or [7]) there exists a constant \(\theta \in (0,1)\) depending only on \(\delta _{1}\) and \(d\), that is on \(\delta \) and \(d\), such that for any subdomain \(D^{\prime }\subset \bar{D^{\prime }} \subset D\) and \(x,y\in D^{\prime }\) we have

$$\begin{aligned} |u_{K}(x)-u_{K}(y)|\le N|x-y|^{\theta }, \end{aligned}$$
(8.1)

where \(N\) depends only on \(\delta , d\), the distance between the boundaries of \(D^{\prime }\) and \(D\), on the diameter of \(D\), and on \(K_{0}\). Estimate (8.1) is preserved as we let \(K \rightarrow \infty \) and then perform all the other steps in the above proof of Theorem 2.1, which leads us to the desired result. The theorem is proved. \(\square \)