1 Introduction

This paper studies a stochastic filtering problem on a finite time horizon [0, T], \(T > 0\), in which the dynamics of a multi-dimensional process \(X = (X_t)_{t \in [0,T]}\), called the signal or unobserved process, are additively affected by a process having components of bounded variation. The aim is to estimate the hidden state \(X_t\), at each time \(t \in [0,T]\), using the information provided by a further stochastic process \(Y = (Y_t)_{t \in [0,T]}\), called the observed process; in other words, we look for the conditional distribution of \(X_t\) given the available observation up to time t. This leads us to derive an evolution equation for the filtering process, which is a probability measure-valued process satisfying, for any given bounded and measurable function \(\varphi :\mathbb {R}^m \rightarrow \mathbb {R}\),

$$\begin{aligned} \pi _t(\varphi ) {{:}{=}} \int _{\mathbb {R}^m} \varphi (x) \, \pi _t({{\mathrm {d}}} x) = {{\mathbb {E}}} \bigl [\varphi (X_t) \bigm | {{\mathcal {Y}}} _t \bigr ], \quad t \in [0,T], \end{aligned}$$

where \(({{\mathcal {Y}}} _t)_{t \in [0,T]}\) is the natural filtration generated by Y and augmented by \({{\mathbb {P}}} \)-null sets. The process \(\pi \) provides the best estimate (in the usual \({{\mathrm {L}}} ^2\) sense) of the signal process X, given the available information obtained through the process Y.

Stochastic filtering is nowadays a well-established research topic. The literature on the subject is vast and many different applications have been studied: the reader may find a fairly detailed historical account in the book by Bain and Crisan [2]. Classic references are the books by Bensoussan [5], Kallianpur [29], Liptser and Shiryaev [35] (cf. also Brémaud [6, Chapter 4] for stochastic filtering with point process observation); more recent monographs are, e.g., the aforementioned book by Bain and Crisan [2], Crisan and Rozovskiĭ [16], and Xiong [42] (see also Cohen and Elliott [14, Chapter 22]). Recently, different cases where the signal and/or the observation processes can have discontinuous trajectories (as in the present work) have been studied and explicit filtering equations have been derived: see, for instance, Bandini et al. [4], Calvia [9], Ceci and Gerardi [12, 13], Ceci and Colaneri [10, 11], Confortola and Fuhrman [15], Grigelionis and Mikulevicius [26].

The main motivation of our analysis stems from the study of singular stochastic control problems under partial observation. Consider a continuous-time stochastic system whose position or level \(X_t\) at time \(t \in [0,T]\) is subject to random disturbances and can be adjusted instantaneously through (cumulative) actions that, as functions of time, need not be absolutely continuous with respect to Lebesgue measure. In particular, they may present a Cantor-like component and/or a jump component. The use of such singular control policies is nowadays common in applications in Economics, Finance, Operations Research, as well as in Mathematical Biology. Typical examples are, amongst others, (ir)reversible investment choices (e.g., Riedel and Su [41]), dividends’ payout (e.g., Reppen et al. [40]), inventory management problems (e.g., Harrison and Taksar [27], De Angelis et al. [18]), as well as harvesting issues (e.g., [1]). Suppose also that the decision maker acting on the system is not able to observe the dynamics of the controlled process X, but she/he can only follow the evolution of a noisy process Y, whose drift is a function of the signal process. Mathematically, we assume that the pair (X, Y) is defined on a filtered complete probability space \((\Omega , {{\mathcal {F}}} , {{\mathbb {F}}} {{:}{=}} ({{\mathcal {F}}} _t)_{t \in [0,T]}, {{\mathbb {P}}} )\) and that its dynamics are given, for any \(t \in [0,T]\), by the following system of SDEs:

$$\begin{aligned} \left\{ \begin{aligned} {{\mathrm {d}}} X_t&= b(t, X_t) \, {{\mathrm {d}}} t + \sigma (t, X_t) \, {{\mathrm {d}}} W_t + {{\mathrm {d}}} \nu _t,&X_{0^-} \sim \xi \in {\mathcal {P}}(\mathbb {R}^m), \\ {{\mathrm {d}}} Y_t&= h(t, X_t) \, {{\mathrm {d}}} t + \gamma (t) \, {{\mathrm {d}}} B_t,&Y_0 = y \in \mathbb {R}^n. \end{aligned} \right. \end{aligned}$$
(1.1)

Here: \(\xi \) is a given probability distribution on \(\mathbb {R}^m\); W and B are two independent \({{\mathbb {F}}} \)-standard Brownian motions; the coefficients \(b, \sigma , h, \gamma \) are suitable measurable functions; \(\nu \) is a càdlàg, \(\mathbb {R}^m\)-valued process with components of bounded variation, adapted to the previously introduced observation filtration \(({{\mathcal {Y}}} _t)_{t \in [0,T]}\).
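For illustration, the following minimal Python sketch (added here as an illustration, not part of the paper's analysis; the coefficients, the initial law, and the single jump of \(\nu \) are placeholder assumptions, with \(m = n = d = 1\)) simulates one path of the pair (X, Y) in (1.1) by an Euler–Maruyama scheme.

```python
# Minimal sketch: Euler-Maruyama simulation of the partially observed
# system (1.1) with m = n = d = 1. All coefficients are placeholder choices.
import numpy as np

rng = np.random.default_rng(0)
T, N = 1.0, 1000
dt = T / N
b = lambda t, x: -x            # assumed drift, Lipschitz in x
sigma = lambda t, x: 0.5       # assumed diffusion coefficient
h = lambda t, x: x             # assumed sensor function
gamma = lambda t: 1.0          # assumed observation volatility

X = np.empty(N + 1); Y = np.empty(N + 1)
X[0] = rng.normal()            # X_{0^-} ~ xi (standard Gaussian here)
Y[0] = 0.0
# nu: a single jump of size 0.5 at t = T/2, zero elsewhere (bounded variation)
dnu = np.zeros(N); dnu[N // 2] = 0.5

for k in range(N):
    t = k * dt
    dW, dB = rng.normal(scale=np.sqrt(dt), size=2)  # independent noises
    X[k + 1] = X[k] + b(t, X[k]) * dt + sigma(t, X[k]) * dW + dnu[k]
    Y[k + 1] = Y[k] + h(t, X[k]) * dt + gamma(t) * dB
```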

Clearly, the decision maker might want to adjust the dynamics of X in order to optimize a given performance criterion. Since X is unobservable, this leads to a stochastic optimal control problem under partial observation, which can be tackled by deriving and studying the so-called separated problem, an equivalent problem under full information (see, e.g., Bensoussan [5], Calvia [8, 9]), where the signal X is formally replaced by its estimate provided by the filtering process \(\pi \). However, to effectively solve the original optimization problem by means of the separated one, a first necessary step concerns the detailed study of the associated filtering problem.

As already discussed, models like (1.1) are apt to describe problems in various applied fields. A particular problem of pollution control management motivated our study. Consider a firm producing a single good on a given time horizon [0, T], \(T > 0\). The production capacity can be adjusted through an investment strategy \(\nu = (\nu _t)_{t \in [0,T]}\), which is partially reversible; that is, both increases and decreases of the production capacity are allowed. Production of this good generates pollution, in proportion to the employed investment/disinvestment plan (see Ferrari and Koch [25] and Pommeret and Prieur [38], among others). The actual level of pollution, modeled by the stochastic process \(X = (X_t)_{t \in [0,T]}\), is not directly observable and follows the SDE

$$\begin{aligned} {{\mathrm {d}}} X_t = -\delta X_t {{\mathrm {d}}} t +\sigma {{\mathrm {d}}} W_t + \beta {{\mathrm {d}}} \nu _t, \quad X_{0^-} = x_0 \in \mathbb {R}, \quad t \in [0,T], \end{aligned}$$

where \(\delta \) is an exogenous pollution decay factor, \(\sigma > 0\), W is a standard real Brownian motion, and \(\beta > 0\) is a pollution-to-investment ratio. A noisy measurement of X is provided by the observed process Y, satisfying

$$\begin{aligned} Y_t = \int _0^t h(X_s) \, {{\mathrm {d}}} s + B_t, \quad t \in [0,T], \end{aligned}$$

where \(h :\mathbb {R}\rightarrow \mathbb {R}\) is the so-called sensor function and B is a standard real Brownian motion independent of W.

The firm aims at choosing an optimal partially reversible investment strategy \(\nu \) to minimize the cost functional:

$$\begin{aligned} {{\mathbb {E}}} \biggl [\int _0^T {{\mathrm {e}}} ^{-rt}(X_t - {\bar{x}})^2 {{\mathrm {d}}} t + \int _0^T {{\mathrm {e}}} ^{-rt} K_+ {{\mathrm {d}}} \nu ^+_t + \int _0^T {{\mathrm {e}}} ^{-rt} K_- {{\mathrm {d}}} \nu ^-_t\biggr ], \end{aligned}$$

where \(r > 0\) is a given discount factor, and \(\nu ^{+}_t\) (resp. \(\nu ^{-}_t\)) is the cumulative investment (resp. disinvestment) into production performed up to time \(t \in [0,T]\). The cost functional is composed of two parts: the first one, corresponding to the first integral, models the fact that the firm must track an emission level \({{\bar{x}}}\), chosen by some central authority (e.g., a national government); the second one, corresponding to the second and third integrals, models the cost borne by the firm to implement the investment and disinvestment policies \(\nu ^+, \nu ^-\), at marginal costs \(K_+, \, K_- > 0\).
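As a rough numerical illustration (again added here; all parameter values are placeholder assumptions), the cost of a fixed, deterministic investment plan can be estimated by plain Monte Carlo:

```python
# Sketch: Monte Carlo estimate of the discounted cost functional for a
# *fixed* investment plan (constant investment rate, no disinvestment).
import numpy as np

rng = np.random.default_rng(1)
T, N, M = 1.0, 200, 5000                 # horizon, time steps, sample paths
dt = T / N
delta, sig, beta = 0.1, 0.3, 1.0         # decay, volatility, pollution ratio
r, x_bar, K_plus, K_minus = 0.05, 1.0, 2.0, 1.0
dnu_plus, dnu_minus = 0.2 * dt, 0.0      # increments of nu^+ and nu^-

X = np.full(M, 1.5)                      # X_{0^-} = x_0 = 1.5
cost = np.zeros(M)
for k in range(N):
    disc = np.exp(-r * k * dt)
    cost += disc * ((X - x_bar) ** 2 * dt
                    + K_plus * dnu_plus + K_minus * dnu_minus)
    dW = rng.normal(scale=np.sqrt(dt), size=M)
    X += -delta * X * dt + sig * dW + beta * (dnu_plus - dnu_minus)

print(f"estimated expected cost: {cost.mean():.4f}")
```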

Clearly, the firm needs to base any production plan on observable quantities, so \(\nu \) must be adapted to the observation filtration \(({{\mathcal {Y}}} _t)_{t \in [0,T]}\). Moreover, the unobservability of the pollution level X makes this a singular control problem under partial observation, for the study of which a necessary first step is the derivation of the explicit filtering equation satisfied by the filtering process \(\pi \).

To the best of our knowledge, the derivation of explicit filtering equations in the setting described above has not yet received attention in the literature, except for the linear-Gaussian case, studied in Menaldi and Robin [37]. In this paper we provide a first contribution in this direction. Indeed, the recent literature treating singular stochastic control problems under partial observation assumes that the observed process, rather than the signal one, is additively controlled (cf. Callegaro et al. [7], De Angelis [17], Décamps and Villeneuve [19], and Federico et al. [24]). Clearly, such a modeling feature leads to a filtering analysis that is completely different from ours.

By making use of the so-called reference probability measure approach, we derive the Zakai stochastic partial differential equation (SPDE) satisfied by the so-called unnormalized filtering process, a measure-valued process associated with the filtering process via a suitable change of probability measure. Then, we deduce the corresponding evolution equation for \(\pi \), namely, the so-called Kushner–Stratonovich equation or Fujisaki–Kallianpur–Kunita equation. Furthermore, we show that the unnormalized filtering process is the unique solution to the Zakai equation in the class of measure-valued processes having a square-integrable density. The latter result is proved under the technical requirement that the jump times of the process \(\nu \) affecting X in (1.1) do not accumulate over the considered time horizon. Although such a condition clearly poses a restriction on the generality of the model, we note that it is typically satisfied by the optimal control processes arising in singular stochastic control problems. It is important to notice that establishing conditions under which the unnormalized filtering process possesses a density paves the way to recasting the separated problem as a stochastic control problem in a Hilbert space, as we will briefly explain in the next section.

The rest of the introduction is now devoted to a discussion of our approach and results at a more technical level.

1.1 Methodology and Main Results

In this paper we are going to study the filtering problem described above through the so-called reference probability approach, which we briefly summarize here. To start, let us notice that the model introduced in (1.1) is somewhat ill-posed. In fact, the dynamics of the signal process X depend on the \(({{\mathcal {Y}}} _t)_{t \in [0,T]}\)-adapted process \(\nu \) while, simultaneously, the dynamics of the observed process Y depend on X. Put differently, it is not clear how to define \(\nu \), which has to be given a priori, and a circularity arises if one attempts to introduce the partially observed system (X, Y) as in (1.1).

A possible way out of this impasse is to define Y as a given Gaussian process independent of X (see (2.2)). In this way, it makes sense to fix a \(({{\mathcal {Y}}} _t)_{t \in [0,T]}\)-adapted process \(\nu \) and to define the dynamics of the signal process X as in the first SDE of (1.1) (see also (2.8)). Finally, under suitable assumptions, there exists a probability measure change (cf. (2.12)) that allows us to recover the dynamics of Y as in the second SDE of (1.1) (see also (2.13)). It is important to notice that the resulting probability depends on the initial law \(\xi \) of \(X_{0^-}\) and on \(\nu \).

To derive the associated Kushner–Stratonovich equation there are two main approaches in the literature: the innovations approach and the aforementioned reference probability approach. Although it might be possible to derive the filtering dynamics in our context by using the former, we follow the latter.

Our first main result is Theorem 3.4, where we deduce the Zakai equation satisfied by the unnormalized filtering process (see (3.3) for its definition). From this result, as a byproduct, we deduce in Theorem 3.6 the Kushner–Stratonovich equation satisfied by the filtering process. It is worth noticing that, given the presence of the bounded-variation process \(\nu \) in the dynamics of X, Theorem 3.4 cannot be obtained by invoking classical results; novel estimates need to be derived (cf. Lemma A.1 and Proposition A.2). In particular, we employ a change of variable formula for Lebesgue–Stieltjes integrals.

It is clear that in applications, for instance to optimal control problems, establishing uniqueness of the solution to the Zakai equation or to the Kushner–Stratonovich equation is essential. In the literature there are several approaches to tackle this problem, most notably the following four: the filtered martingale problem approach, originally proposed by Kurtz and Ocone [34] and later extended to singular martingale problems in [32] (see also [31]); the PDE approach, as in the book by Bensoussan [5] (see also [2, Sect. 4.1]); the functional analytic approach, introduced by Lucic and Heunis [36] (see also [2, Sect. 4.2]); the density approach, studied in Kurtz and Xiong [33] (see also [2, Sect. 7] and [42]).

The first three methods allow one to prove uniqueness of the solution to the Zakai equation in a suitable class of measure-valued processes. However, they do not guarantee that the unique measure-valued solution to the Zakai equation admits a density process, a fact that has an impact on the study of the separated problem. Indeed, without requiring or establishing conditions guaranteeing the existence of such a density process, the separated problem must be formulated in an appropriate Banach space of measures and, as a consequence, the Hamilton–Jacobi–Bellman (HJB) equation associated to the separated problem must be formulated in such a general setting as well. As a matter of fact, only recently have some techniques been developed to treat this case, predominantly in the theory of mean-field games (an application to optimal control problems with partial observation is given in [3]).

A more common approach in the literature considers, instead, the density process as the state variable for the separated problem. If it is possible to show that such a density process is the unique solution of a suitable SPDE in \({{\mathrm {L}}} ^2(\mathbb {R}^m)\), the so-called Duncan–Mortensen–Zakai equation, then this \({{\mathrm {L}}} ^2(\mathbb {R}^m)\)-valued process can be equivalently used as the state variable in the separated problem. This is particularly convenient, since for optimal control problems in Hilbert spaces a well-developed theory is available, at least in the regular case (see, e.g., the monograph by Fabbri et al. [23]). Therefore, in view of possible future applications to singular optimal control problems under partial observation, we adopt the density approach to prove that, under suitable assumptions, the unnormalized filtering process is the unique solution to the Zakai equation in the class of measure-valued processes admitting a density with respect to Lebesgue measure.

We show this result, first, in the case where \(\nu \) is a continuous process (cf. Theorem 4.6) and, then, in the case where the jump times of \(\nu \) do not accumulate in the time interval [0, T] (see Theorem 4.7). As we already observed, although this assumption prevents us from achieving full generality, it has a clear interpretation and is usually satisfied by the examples considered in the literature. From a technical side, a direct approach using the method proposed by [33] does not seem feasible in the case of accumulating jumps, due to difficulties in estimating crucial quantities related to the jump component of the filtering process. A possible workaround might consist in approximating the process \(\nu \) by removing jumps of size smaller than some \(\delta > 0\) and then, provided that a suitable tightness property holds, passing to the limit, as \(\delta \rightarrow 0\), in the relevant equations. However, this is a delicate and lengthy argument, which is left for future research.

The rest of this paper is organized as follows. Section 1.2 provides the notation used throughout this work. Section 2 introduces the filtering problem. The Zakai and Kushner–Stratonovich equations are then derived in Section 3, while the uniqueness of the solution to the Zakai equation is proved in Section 4. Finally, Appendix 1 collects the proofs of technical results.

1.2 Notation

In this section we collect the main notation used in this work. Throughout the paper, \(\mathbb {N}= \{1, 2, \dots \}\) denotes the set of natural numbers, \(\mathbb {N}_0 = \{0, 1, \dots \}\), and \(\mathbb {R}\) is the set of real numbers.

For any \(m \times n\) matrix \(A = (a_{ij})\), the symbol \(A^*\) denotes its transpose and \(\Vert {A}\Vert \) is its Frobenius norm; i.e., \(\Vert {A}\Vert = (\sum _{i=1}^m \sum _{j=1}^n a_{ij}^2)^{1/2}\). For any \(x,y\in \mathbb {R}^d\), \(\Vert {x}\Vert \) denotes the Euclidean norm of x and \(x \cdot y = x^* y\) indicates the inner product of x and y. For a fixed Hilbert space H, we denote its inner product by \(\langle \cdot , \cdot \rangle \) and by \(\Vert {\cdot }\Vert _H\) its norm.

The symbol \(\mathbf {1}_C\) denotes the indicator function of a set C, while \(\mathsf {1}\) is the constant function equal to 1. The symbol \(\int _a^b\) denotes \(\int _{[a,b]}\) for any \(-\infty< a \le b < +\infty \).

For any \(d \in \mathbb {N}\) and \(T > 0\), we denote by \({{\mathrm {C}}} ^{1,2}_b([0,T] \times \mathbb {R}^d)\) the set of real-valued bounded measurable functions on \([0,T] \times \mathbb {R}^d\) that are continuously differentiable once with respect to the first variable and twice with respect to the second, with bounded derivatives. For any such function, the symbol \(\partial _t\) denotes the derivative with respect to the first variable, while \({{\mathrm {D}}} _x = (\partial _1, \dots , \partial _d)\) and \({{\mathrm {D}}} ^2_x = (\partial ^2_{ij})_{i,j = 1}^d\) denote, respectively, the gradient and the Hessian matrix with respect to the second variable. Furthermore, we simply write \({{\mathrm {C}}} ^{2}_b(\mathbb {R}^d)\) when we are considering a real-valued bounded function on \(\mathbb {R}^d\) that is twice continuously differentiable with bounded derivatives.

For any \(d \in \mathbb {N}\) we indicate by \({{\mathrm {L}}} ^2(\mathbb {R}^d)\) the set of all square-integrable functions with respect to Lebesgue measure and for all \(k \in \mathbb {N}\) we denote by \(W^2_k(\mathbb {R}^d)\) the Sobolev space of all functions \(f \in {{\mathrm {L}}} ^2(\mathbb {R}^d)\) such that the partial derivatives \(\partial ^\alpha f\) exist in the weak sense and are in \({{\mathrm {L}}} ^2(\mathbb {R}^d)\), whenever the multi-index \(\alpha = (\alpha _1, \dots , \alpha _d)\) is such that \(\alpha _1 + \cdots + \alpha _d \le k\).

For a fixed metric space E, endowed with the Borel \(\sigma \)-algebra, we denote by \({{\mathcal {P}}} (E)\), \({{\mathcal {M}}} _+(E)\), and \({{\mathcal {M}}} (E)\) the sets of probability, finite positive, and finite signed measures on E, respectively. If \(\mu \in {{\mathcal {M}}} (E)\), then \(|{\mu }| \in {{\mathcal {M}}} _+(E)\) is the total variation of \(\mu \).

For any given càdlàg stochastic process \(Z = (Z_t)_{t \ge 0}\) defined on a probability space \((\Omega , {{\mathcal {F}}} , {{\mathbb {P}}} )\), we denote by \((Z_{t^-})_{t \ge 0}\) the left-continuous version of Z (i.e., \(Z_{t^-} = \lim _{s \rightarrow t^-} Z_s, \, {{\mathbb {P}}} \)-a.s., for any \(t \ge 0\)), and by \(\Delta Z_t {{:}{=}} Z_t - Z_{t^-}\) the jump of Z at time \(t \ge 0\). If Z has finite variation over [0, t], for all \(t \ge 0\), \(|{Z}|\) (resp. \(Z^+\), \(Z^-\)) is the variation process (resp. the positive part process, the negative part process) of Z, i.e., the process such that, for each \(t \in [0,T]\) and \(\omega \in \Omega \), \(|{Z}|_t(\omega )\) (resp. \(Z^+_t(\omega )\), \(Z^-_t(\omega )\)) is the total variation (resp. the positive part, the negative part) of the function \(s \mapsto Z_s(\omega )\) on [0, t]. It is useful to remember that \(Z = Z^+ - Z^-\), \(|{Z}| = Z^+ + Z^-\), and that \(Z^+\), \(Z^-\) are non-decreasing processes.
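As a small illustration of this notation (ours; the increments are arbitrary), for a purely discontinuous path given by its jump increments, the processes \(Z^+\), \(Z^-\), and \(|{Z}|\) are the running sums of the positive parts, the negative parts, and the absolute values of the increments:

```python
# Sketch: Jordan decomposition of a piecewise-constant finite-variation path.
import numpy as np

dZ = np.array([0.5, -0.2, 0.0, 0.3, -0.7])   # assumed jump increments of Z
Z_plus  = np.cumsum(np.maximum(dZ, 0.0))      # positive part process Z^+
Z_minus = np.cumsum(np.maximum(-dZ, 0.0))     # negative part process Z^-
Z    = Z_plus - Z_minus                       # Z = Z^+ - Z^-
varZ = Z_plus + Z_minus                       # |Z| = Z^+ + Z^-
```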

Finally, with the word measurable we refer to Borel-measurable, unless otherwise specified.

2 Model Formulation

Let \(T > 0\) be a given fixed time horizon and \((\Omega , {{\mathcal {F}}} , {{\mathbb {F}}} {{:}{=}} ({{\mathcal {F}}} _t)_{t \in [0,T]}, {{\mathbb {P}}} )\) be a complete filtered probability space, with \({{\mathbb {F}}} \) satisfying the usual assumptions.

Define on \((\Omega , {{\mathcal {F}}} , {{\mathbb {F}}} , {{\mathbb {P}}} )\) two independent \({{\mathbb {F}}} \)-adapted standard Brownian motions W and \({\overline{B}}\), taking values in \(\mathbb {R}^d\) and \(\mathbb {R}^n\), respectively, with \(d, n \in \mathbb {N}\). Let then \(\gamma :[0,T] \rightarrow \mathbb {R}^{n \times n}\) be a measurable function such that, for each \(t \in [0,T]\), \(\gamma (t)\) is symmetric, with \(\gamma _{ij} \in {{\mathrm {L}}} ^2([0,T])\), for all \(i, j = 1, \dots , n\), and uniformly positive definite; that is, there exists \(\delta > 0\) such that for all \(t \in [0,T]\) and all \(x \in \mathbb {R}^n\)

$$\begin{aligned} \gamma (t) x \cdot x \ge \delta \Vert {x}\Vert ^2. \end{aligned}$$
(2.1)

These requirements guarantee in particular that the observed process \(Y = (Y_t)_{t \in [0,T]}\), defined as

$$\begin{aligned} Y_t = y + \int _0^t \gamma (s) \, {{\mathrm {d}}} {\overline{B}}_s, \quad t \in [0,T], \, y \in \mathbb {R}^n, \end{aligned}$$
(2.2)

is an \(\mathbb {R}^n\)-valued \({{\mathbb {F}}} \)-adapted martingale, of which we take a continuous version. Clearly, it holds

$$\begin{aligned} {{\mathrm {d}}} Y_t = \gamma (t) \, {{\mathrm {d}}} {\overline{B}}_t, \quad t \in [0,T], \qquad Y_0 = y \in \mathbb {R}^n. \end{aligned}$$
(2.3)

Remark 2.1

It is not restrictive to require that \(\gamma \) is symmetric (and uniformly positive definite). Indeed, suppose that \({\overline{B}}\) is an \(\mathbb {R}^k\)-valued \({{\mathbb {F}}} \)-adapted standard Brownian motion and that \(\gamma :[0,T] \rightarrow \mathbb {R}^{n \times k}\) is such that \(\gamma \gamma ^*(t){:}{=}\gamma (t) \gamma ^*(t)\) is uniformly positive definite. Then, we can obtain an equivalent model defining the \(\mathbb {R}^n\)-valued \({{\mathbb {F}}} \)-adapted standard Brownian motion \({\widetilde{B}} = ({\widetilde{B}}_t)_{t \in [0,T]}\) through:

$$\begin{aligned} {{\mathrm {d}}} {\widetilde{B}}_t {{:}{=}} \bigl (\gamma \gamma ^*(t)\bigr )^{-1/2} \, \gamma (t) \, {{\mathrm {d}}} {\overline{B}}_t, \quad t \in [0,T]. \end{aligned}$$

In fact, in this case (2.3) becomes:

$$\begin{aligned} {{\mathrm {d}}} Y_t = \bigl (\gamma \gamma ^*(t)\bigr )^{1/2} \, {{\mathrm {d}}} {\widetilde{B}}_t, \quad t \in [0,T], \qquad Y_0 = y \in \mathbb {R}^n, \end{aligned}$$

and clearly \(\bigl (\gamma \gamma ^*(t)\bigr )^{1/2}\) is symmetric (and uniformly positive definite).
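The reduction of Remark 2.1 is easy to check numerically; the following sketch (ours, with an arbitrary \(2 \times 3\) matrix standing for \(\gamma (t)\) at a fixed time) verifies that \({\widetilde{B}}\) defined above has unit covariance rate:

```python
# Sketch: symmetrization of the observation coefficient as in Remark 2.1.
import numpy as np

gamma = np.array([[1.0, 0.5, 0.0],
                  [0.2, 1.5, 0.3]])      # assumed n x k matrix (n=2, k=3)
gg = gamma @ gamma.T                      # gamma gamma^*, positive definite
w, V = np.linalg.eigh(gg)                 # spectral decomposition
gg_half = V @ np.diag(np.sqrt(w)) @ V.T   # symmetric square root (gg)^{1/2}

# dB_tilde = (gg)^{-1/2} gamma dB_bar has identity covariance per unit time
M = np.linalg.inv(gg_half) @ gamma
print(np.allclose(M @ M.T, np.eye(2)))    # True
```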

We indicate with the symbol \({{\mathbb {Y}}} \) the completed natural filtration generated by Y, i.e., \({{\mathbb {Y}}} {{:}{=}} ({{\mathcal {Y}}} _t)_{t \in [0,T]}\), with \({{\mathcal {Y}}} _t {{:}{=}} \sigma (Y_s :0 \le s \le t) \vee {{\mathcal {N}}} \), where \({{\mathcal {N}}} \) is the collection of all \({{\mathbb {P}}} \)-null sets.

Remark 2.2

Notice that since \(\gamma \) is invertible, \({{\mathbb {Y}}} \) coincides with the completed natural filtration generated by \({\overline{B}}\) and is, therefore, right-continuous. These facts will be useful in the sequel.

Next, we consider a probability distribution \(\xi \) on \(\mathbb {R}^m\); measurable functions \(b :[0,T] \times \mathbb {R}^m \rightarrow \mathbb {R}^m\) and \(\sigma :[0,T] \times \mathbb {R}^m \rightarrow \mathbb {R}^{m \times d}\), with \(m \in \mathbb {N}\); and a \({{\mathbb {Y}}} \)-adapted, càdlàg, \(\mathbb {R}^m\)-valued process \(\nu \) whose components have paths of finite variation. We introduce the following requirements, which will be in force throughout the paper.

Assumption 2.1

  1. (i)

    There exist constants \(C_b\) and \(L_b\) such that for all \(t \in [0,T]\)

    $$\begin{aligned} \Vert {b(t,x) - b(t,x')}\Vert \le L_b \Vert {x - x'}\Vert \quad \text {and} \quad \Vert {b(t,0)}\Vert \le C_b, \quad \forall x, x' \in \mathbb {R}^m.\nonumber \\ \end{aligned}$$
    (2.4)
  2. (ii)

    There exist constants \(C_\sigma \) and \(L_\sigma \) such that for all \(t \in [0,T]\)

    $$\begin{aligned} \Vert {\sigma (t,x) - \sigma (t,x')}\Vert \le L_\sigma \Vert {x - x'}\Vert \quad \text {and} \quad \Vert {\sigma (t,0)}\Vert \le C_\sigma , \quad \forall x, x' \in \mathbb {R}^m.\nonumber \\ \end{aligned}$$
    (2.5)
  3. (iii)

    The probability law \(\xi \in {{\mathcal {P}}} (\mathbb {R}^m)\) satisfies

    $$\begin{aligned} \int _{\mathbb {R}^m} \Vert {x}\Vert ^2 \, \xi ({{\mathrm {d}}} x) < +\infty . \end{aligned}$$
    (2.6)
  4. (iv)

    The \(\mathbb {R}^m\)-valued process \(\nu \) is \({{\mathbb {Y}}} \)-adapted, càdlàg, with \(\nu _{0^-} = 0\). Its components have paths of finite variation, which in particular satisfy

    $$\begin{aligned} |{\nu ^i}|_T \le K, \qquad \forall i = 1, \dots , m, \end{aligned}$$
    (2.7)

    for some constant \(K > 0\).

Under Assumption 2.1, for any such \(\nu \), the following SDE for the signal process \(X = (X_t)_{t \in [0,T]}\) admits a unique strong solution:

$$\begin{aligned} {{\mathrm {d}}} X_t = b(t, X_t) \, {{\mathrm {d}}} t + \sigma (t, X_t) \, {{\mathrm {d}}} W_t + {{\mathrm {d}}} \nu _t, \quad t \in [0,T], \qquad X_{0^-} \sim \xi \in {{\mathcal {P}}} (\mathbb {R}^m). \end{aligned}$$
(2.8)

It is important to bear in mind, especially in applications to optimal control problems, that the solution to (2.8) and all the quantities related to it depend on the probability distribution \(\xi \) and on \(\nu \). However, for ease of exposition, we will not stress this dependence in the sequel.

Remark 2.3

Conditions (2.4) and (2.5) ensure that SDE (2.8) admits a unique strong solution for any \(\nu \). If we assume, in addition, that (2.6) and (2.7) hold, then we have that, for some constant \(\kappa \) depending on T, b, \(\sigma \), and \(\nu \),

$$\begin{aligned} {{\mathbb {E}}} [\sup _{t \in [0,T]} \Vert {X_t}\Vert ^2] \le \kappa (1+ {{\mathbb {E}}} [\Vert {X_{0^-}}\Vert ^2]) < +\infty , \end{aligned}$$
(2.9)

since \({{\mathbb {E}}} [\Vert {X_{0^-}}\Vert ^2] = \int _{\mathbb {R}^m} \Vert {x}\Vert ^2 \, \xi ({{\mathrm {d}}} x)\). Proofs of these statements are standard and can be found, for instance, in [14, 39].

We finally arrive at the model that we intend to analyze, via a change of measure. Let \(h :[0,T] \times \mathbb {R}^m \rightarrow \mathbb {R}^n\) be a measurable function satisfying the following condition, which will be in force from now on.

Assumption 2.2

There exists a constant \(C_h\) such that for all \(t \in [0,T]\)

$$\begin{aligned} \Vert {h(t,x)}\Vert \le C_h(1+\Vert {x}\Vert ), \quad \forall x \in \mathbb {R}^m. \end{aligned}$$
(2.10)

For all \(t \in [0,T]\) define then:

$$\begin{aligned} \eta _t {{:}{=}} \exp \left\{ \int _0^t \gamma ^{-1}(s) h(s, X_s) \, {{\mathrm {d}}} {\overline{B}}_s - \dfrac{1}{2} \int _0^t \Vert {\gamma ^{-1}(s) h(s, X_s)}\Vert ^2 \, {{\mathrm {d}}} s\right\} . \end{aligned}$$
(2.11)

By Proposition A.2, under Assumptions 2.1 and 2.2, \(\eta \) is a \(({{\mathbb {P}}} ,{{\mathbb {F}}} )\)-martingale. Therefore, we can introduce the probability measure \({\widetilde{{{\mathbb {P}}} }}\) on \((\Omega , {{\mathcal {F}}} _T)\) satisfying

$$\begin{aligned} \frac{{{\mathrm {d}}} {\widetilde{{{\mathbb {P}}} }}}{{{\mathrm {d}}} {{\mathbb {P}}} } \bigg |_{{{\mathcal {F}}} _T} = \eta _T. \end{aligned}$$
(2.12)

By Girsanov’s Theorem, the process \(B = (B_t)_{t \in [0,T]}\) given by \(B_t {{:}{=}} {\overline{B}}_t - \int _0^t \gamma ^{-1}(s) h(s, X_s) \, {{\mathrm {d}}} s\), \(t \in [0,T]\), is a \(({\widetilde{{{\mathbb {P}}} }},{{\mathbb {F}}} )\)-Brownian motion, and under \({\widetilde{{{\mathbb {P}}} }}\) the dynamics of the observed process are provided by the SDE:

$$\begin{aligned} {{\mathrm {d}}} Y_t = h(t, X_t) \, {{\mathrm {d}}} t + \gamma (t) \, {{\mathrm {d}}} B_t, \quad t \in [0,T], \qquad Y_0 = y \in \mathbb {R}^n. \end{aligned}$$
(2.13)

We see that Eqs. (2.8) and (2.13) are formally equivalent to model (1.1). Observe, however, that the Brownian motion driving (2.13) is not a source of noise given a priori, but is obtained through a change of probability measure; moreover, our construction implies that it depends on the initial law \(\xi \) and on the process \(\nu \). This formulation is typical in optimal control problems under partial observation (see, e.g., [5, Chapter 8]) and has the advantage of avoiding the circularity problem discussed in the Introduction.
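For concreteness, along a discretized path the density \(\eta \) of (2.11) can be accumulated in log form; the following one-dimensional sketch (ours; u stands for the sampled values of \(\gamma ^{-1}(t_k) h(t_k, X_{t_k})\)) illustrates this.

```python
# Sketch: discretized Girsanov density eta of (2.11) along one path (1-d).
import numpy as np

def eta_path(u, dB_bar, dt):
    """u[k]: value of gamma^{-1}(t_k) h(t_k, X_{t_k});
    dB_bar[k]: increment of the P-Brownian motion over [t_k, t_{k+1}]."""
    log_eta = np.cumsum(u * dB_bar - 0.5 * u ** 2 * dt)
    return np.exp(np.concatenate(([0.0], log_eta)))  # eta at t_0 equals 1

# example with placeholder inputs: u = 1 over four steps of size 0.25
rng = np.random.default_rng(3)
eta = eta_path(np.ones(4), rng.normal(scale=0.5, size=4), 0.25)
```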

Remark 2.4

If the partially observed system defined by (2.8) and (2.13) describes the state variables of a singular optimal control problem, where \(\nu \) is the control process, then condition (2.7) implies that the singular control is of finite fuel type (see El Karoui and Karatzas [21], Karatzas et al. [30] for early contributions).

Remark 2.5

It is worth noticing that all the results in this paper remain valid if we allow b to depend also on \(\omega \), as long as the map \((\omega ,t) \mapsto b(\omega ,t,x)\) is \({{\mathbb {Y}}} \)-adapted and càdlàg, for each \(x \in \mathbb {R}^m\), and condition (2.4) holds uniformly with respect to \(\omega \) (i.e., \(L_b\) and \(C_b\) do not depend on \(\omega \)). To extend our subsequent results to this case, it suffices to apply the so-called freezing lemma whenever necessary.

This modeling flexibility is important when it comes to treating controlled dynamics where b is a deterministic function, depending on an additional parameter representing the action of a regular control \(\alpha = (\alpha _t)_{t \in [0,T]}\). Clearly, this control must be càdlàg and \({{\mathbb {Y}}} \)-adapted, i.e., based on the available information. The measurability requirement above ensures that the map \((\omega , t) \mapsto b(t,x,\alpha _t(\omega ))\) is \({{\mathbb {Y}}} \)-adapted.

3 The Zakai and Kushner–Stratonovich Equations

In this section we will deduce the Zakai equation satisfied by the unnormalized filtering process, defined in (3.3). As a byproduct, we will deduce the Kushner–Stratonovich equation satisfied by the filtering process (see (3.1) for its definition). As anticipated in the Introduction, we will use the reference probability approach to achieve these results. The reference probability will be precisely \({{\mathbb {P}}} \), under which the observed process is Gaussian and satisfies (2.2). However, the probability measure that matters from a modeling point of view is \({\widetilde{{{\mathbb {P}}} }}\), which is defined in (2.12). Indeed, we will define the filtering process under this measure. It is important to bear in mind that \({\widetilde{{{\mathbb {P}}} }}\) and \({{\mathbb {P}}} \) are equivalent probability measures. Hence, any result holding \({{\mathbb {P}}} \)-a.s. also holds \({\widetilde{{{\mathbb {P}}} }}\)-a.s., and we will only write the former.

The following technical lemma is needed. Its proof is a consequence of the facts highlighted in Remark 2.2 and is omitted (the reader may refer, for instance, to [2, Prop. 3.15]). In what follows we will denote \({{\mathcal {Y}}} {{:}{=}} {{\mathcal {Y}}} _T\).

Lemma 3.1

Let Z be an \({{\mathcal {F}}} _t\)-measurable, \({{\mathbb {P}}} \)-integrable random variable, \(t \in [0,T]\). Then

$$\begin{aligned} {{\mathbb {E}}} [Z \mid {{\mathcal {Y}}} _t] = {{\mathbb {E}}} [Z \mid {{\mathcal {Y}}} ]. \end{aligned}$$

As previously anticipated, the filtering process \(\pi = (\pi _t)_{t \in [0,T]}\) is a \({{\mathcal {P}}} (\mathbb {R}^m)\)-valued process providing the conditional law of the signal X at each time \(t \in [0,T]\), given the available observation up to time t. It is defined for any bounded and measurable \(\varphi :[0,T] \times \mathbb {R}^m \rightarrow \mathbb {R}\) as:

$$\begin{aligned} \pi _t(\varphi _t) {{:}{=}} {\widetilde{{{\mathbb {E}}} }}\bigl [\varphi (t, X_t) \bigm | {{\mathcal {Y}}} _t \bigr ], \quad t \in [0,T], \end{aligned}$$
(3.1)

where \(\varphi _t(x) {{:}{=}} \varphi (t,x)\), for any \((t,x) \in [0,T] \times \mathbb {R}^m\). Since \(\mathbb {R}^m\) is a complete and separable metric space, \(\pi \) is a well-defined, \({{\mathcal {P}}} (\mathbb {R}^m)\)-valued and \({{\mathbb {Y}}} \)-adapted process. Moreover, \(\pi \) admits a càdlàg modification, since X is càdlàg (see, e.g., [2, Cor. 2.26]). Hence, in the sequel we shall consider \(\pi \) as a \({{\mathbb {Y}}} \)-progressively measurable process.

We recall the useful Kallianpur-Striebel formula, which holds thanks to Proposition A.2 for any bounded and measurable \(\varphi :[0,T] \times \mathbb {R}^m \rightarrow \mathbb {R}\) and for any fixed \(t \in [0,T]\) (for a proof see, e.g., [2, Prop. 3.16]):

$$\begin{aligned} \pi _t(\varphi _t) = \frac{{{\mathbb {E}}} \bigl [\eta _t \varphi (t, X_t) \bigm | {{\mathcal {Y}}} \bigr ]}{{{\mathbb {E}}} \bigl [\eta _t \bigm | {{\mathcal {Y}}} \bigr ]}, \quad {{\mathbb {P}}} \text {-a.s.} \end{aligned}$$
(3.2)

This formula allows us to define the measure-valued process \(\rho = (\rho _t)_{t \in [0,T]}\), called the unnormalized conditional distribution of X, or unnormalized filtering process, given, for any bounded and measurable \(\varphi :[0,T] \times \mathbb {R}^m \rightarrow \mathbb {R}\), by:

$$\begin{aligned} \rho _t(\varphi _t) {{:}{=}} {{\mathbb {E}}} \bigl [\eta _t \varphi (t, X_t) \bigm | {{\mathcal {Y}}} _t \bigr ], \quad t \in [0,T]. \end{aligned}$$
(3.3)

Given the properties of \(\pi \) and of \(\eta \) it is possible to show (see, e.g., [2], Lemma 3.18) that \(\rho \) is càdlàg and \({{\mathbb {Y}}} \)-adapted, hence \({{\mathbb {Y}}} \)-progressively measurable. Moreover, the Kallianpur-Striebel formula implies that for any bounded and measurable \(\varphi :[0,T] \times \mathbb {R}^m \rightarrow \mathbb {R}\) and for any fixed \(t \in [0,T]\):

$$\begin{aligned} \pi _t(\varphi _t) = \frac{\rho _t(\varphi _t)}{\rho _t(\mathsf {1})}, \quad {{\mathbb {P}}} \text {-a.s.}, \end{aligned}$$
(3.4)

where \(\mathsf {1}:\mathbb {R}^m \rightarrow \mathbb {R}\) is the constant function equal to 1.
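The Kallianpur-Striebel formula also suggests a simple Monte Carlo approximation of \(\rho \) and \(\pi \), sketched below for intuition (ours, with placeholder coefficients, \(m = n = d = 1\), \(\gamma \equiv 1\), and a fixed deterministic \(\nu \)): signal particles are simulated under the reference measure \({{\mathbb {P}}} \) and weighted by a discretization of \(\eta _t\) evaluated along the observed path, as in (3.2); normalizing the weights implements (3.4).

```python
# Sketch: Monte Carlo approximation of pi_T via (3.2) and (3.4).
import numpy as np

rng = np.random.default_rng(2)
T, N, M = 1.0, 200, 5000
dt = T / N
b = lambda x: -x                       # assumed drift
sig, h = 0.5, (lambda x: x)            # assumed diffusion and sensor function
dnu = np.zeros(N); dnu[N // 2] = 0.5   # assumed deterministic nu: one jump

# synthetic observation increments dY (in practice, Y is the recorded data)
X_true = rng.normal()                  # X_{0^-} ~ xi (standard Gaussian here)
dY = np.empty(N)
for k in range(N):
    dY[k] = h(X_true) * dt + rng.normal(scale=np.sqrt(dt))
    X_true += b(X_true) * dt + sig * rng.normal(scale=np.sqrt(dt)) + dnu[k]

# particles evolve under the reference measure; weights accumulate log(eta)
Xp = rng.normal(size=M)                # particles sampled from xi
logw = np.zeros(M)
for k in range(N):
    hx = h(Xp)
    logw += hx * dY[k] - 0.5 * hx ** 2 * dt
    Xp += b(Xp) * dt + sig * rng.normal(scale=np.sqrt(dt), size=M) + dnu[k]

w = np.exp(logw - logw.max())          # unnormalized weights ~ eta_T (up to a constant)
print("filter mean:", np.dot(w, Xp) / w.sum(), " true X_T:", X_true)
```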

To describe the local dynamics of the signal process X, let us introduce the operator \({{\mathcal {A}}} \), defined for any \(\varphi \in {{\mathrm {C}}} ^{1,2}_b([0,T] \times \mathbb {R}^m)\) as:

$$\begin{aligned} {{\mathcal {A}}} \varphi (t,x) {{:}{=}} {{\mathrm {D}}} _x \varphi (t,x) \cdot b(t,x) + \frac{1}{2} \mathrm{tr}\bigl ({{\mathrm {D}}} ^2_x \varphi (t,x) \, \sigma \sigma ^*(t,x)\bigr ), \quad (t,x) \in [0,T] \times \mathbb {R}^m.\nonumber \\ \end{aligned}$$
(3.5)

We can also define the family of operators \({{\mathcal {A}}} _t\), \(t \in [0,T]\), given by:

$$\begin{aligned} {{\mathcal {A}}} _t \varphi (x) = {{\mathrm {D}}} _x \varphi (x) \cdot b(t,x) + \frac{1}{2} \mathrm{tr}\bigl ({{\mathrm {D}}} ^2_x \varphi (x) \, \sigma \sigma ^*(t,x)\bigr ), \quad x \in \mathbb {R}^m, \, \varphi \in {{\mathrm {C}}} ^2_b(\mathbb {R}^m). \end{aligned}$$

To obtain the Zakai equation we need, first, to write the semimartingale decomposition of the process \(\bigl (\varphi (t,X_t)\bigr )_{t \in [0,T]}\). For any \(\varphi \in {{\mathrm {C}}} ^{1,2}_b([0,T] \times \mathbb {R}^m)\) we have, applying Itô’s formula:

$$\begin{aligned} \varphi (t,X_t) =&\varphi (0, X_{0^-}) + \int _0^t \bigl [\partial _s + {{\mathcal {A}}} \bigr ] \varphi (s, X_s) \, {{\mathrm {d}}} s + \int _0^t {{\mathrm {D}}} _x \varphi (s, X_{s^-}) \, {{\mathrm {d}}} \nu _s \nonumber \\&+ \sum _{0 \le s \le t} \Bigl [\varphi (s,X_s) - \varphi (s, X_{s^-}) - {{\mathrm {D}}} _x \varphi (s, X_{s^-}) \cdot \Delta \nu _s\Bigr ] + M_t^\varphi , \quad t \in [0,T].\nonumber \\ \end{aligned}$$
(3.6)

Here, \(M_t^\varphi {{:}{=}} \int _0^t {{\mathrm {D}}} _x \varphi (s,X_s) \, \sigma (s, X_s) \, {{\mathrm {d}}} W_s\), \(t \in [0,T]\), is a square-integrable \(({{\mathbb {P}}} , {{\mathbb {F}}} )\)-martingale, thanks to conditions (2.4) and (2.5) (see also Remark 2.3).

We need the following two technical lemmata. Up to minor modifications, their proofs follow that of [2, Lemma 3.21].

Lemma 3.2

Let \(\Psi = (\Psi _t)_{t \in [0,T]}\) be a real-valued \(({{\mathbb {P}}} , {{\mathbb {F}}} )\)-progressively measurable process such that

$$\begin{aligned} {{\mathbb {E}}} \biggl [\int _0^T \Psi _s^2 \, {{\mathrm {d}}} s \biggr ] < +\infty . \end{aligned}$$

Then, for any \(j = 1, \dots , n\) we have

$$\begin{aligned} {{\mathbb {E}}} \biggl [\int _0^t \Psi _s \, {{\mathrm {d}}} {\overline{B}}_s^j \biggm | {{\mathcal {Y}}} \biggr ] = \int _0^t {{\mathbb {E}}} [\Psi _s \mid {{\mathcal {Y}}} ] \, {{\mathrm {d}}} {\overline{B}}_s^j, \quad t \in [0,T]. \end{aligned}$$

Lemma 3.3

Let \(\Psi = (\Psi _t)_{t \in [0,T]}\) be a real-valued \(({{\mathbb {P}}} , {{\mathbb {F}}} )\)-progressively measurable process satisfying

$$\begin{aligned} {{\mathbb {E}}} \biggl [\int _0^T \Psi _s^2 \, {{\mathrm {d}}} \langle M^\varphi \rangle _s \biggr ] < +\infty . \end{aligned}$$

Then,

$$\begin{aligned} {{\mathbb {E}}} \biggl [\int _0^t \Psi _s \, {{\mathrm {d}}} M_s^\varphi \biggm | {{\mathcal {Y}}} \biggr ] = 0, \quad t \in [0,T]. \end{aligned}$$

We are now ready to state the main result of this section, namely, to provide the Zakai equation.

Theorem 3.4

Suppose that Assumptions 2.1 and 2.2 are satisfied and, moreover, that

$$\begin{aligned} \int _{\mathbb {R}^m} \Vert {x}\Vert ^3 \, \xi ({{\mathrm {d}}} x) < +\infty . \end{aligned}$$
(3.7)

Then, for any \(\varphi \in {{\mathrm {C}}} _b^{1,2}([0,T] \times \mathbb {R}^m)\), the unnormalized conditional distribution \(\rho \) satisfies the Zakai equation:

$$\begin{aligned} \rho _t(\varphi _t)&= \xi (\varphi _0) + \int _0^t \rho _s\bigl (\bigl [\partial _s + {{\mathcal {A}}} _s\bigr ] \varphi _s\bigr ) \, {{\mathrm {d}}} s + \int _0^t \rho _{s^-}\bigl ({{\mathrm {D}}} _x \varphi _s\bigr ) \, {{\mathrm {d}}} \nu _s \nonumber \\&\quad + \int _0^t \gamma ^{-1}(s) \rho _s(\varphi _s h_s) \, {{\mathrm {d}}} {\overline{B}}_s \nonumber \\&\quad + \sum _{0 \le s \le t} \Bigl [\rho _{s^-}\bigl (\varphi _s(\cdot + \Delta \nu _s) - \varphi _s - {{\mathrm {D}}} _x \varphi _s \cdot \Delta \nu _s\bigr )\Bigr ], \quad {{\mathbb {P}}} \text {-a.s.}, \quad t \in [0,T], \end{aligned}$$
(3.8)

where \(\xi (\varphi _0) {{:}{=}} \int _{\mathbb {R}^m} \varphi (0, x) \, \xi ({{\mathrm {d}}} x)\) and, for all \(t \in [0,T]\), \(h_t(\cdot ) {{:}{=}} h(t,\cdot )\),

$$\begin{aligned} \int _0^t \rho _{s^-}\bigl ({{\mathrm {D}}} _x \varphi _s\bigr ) \, {{\mathrm {d}}} \nu _s&{{:}{=}} \sum _{i=1}^m \int _0^t \rho _{s^-}\bigl (\partial _i \varphi _s\bigr ) \, {{\mathrm {d}}} \nu ^i_s, \\ \int _0^t \gamma ^{-1}(s) \rho _s(\varphi _s h_s) \, {{\mathrm {d}}} {\overline{B}}_s&{{:}{=}} \sum _{i=1}^n \sum _{j=1}^n \int _0^t \gamma ^{-1}_{ij}(s) \rho _s(\varphi _s h^j_s) \, {{\mathrm {d}}} {\overline{B}}^i_s. \end{aligned}$$
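Before giving the proof, it is useful to record a direct consequence of (3.8), stated here only for intuition (it is not needed in what follows): at a jump time s of \(\nu \), the atom of the \(\nu \)-integral and the corresponding term of the jump sum combine, so that

$$\begin{aligned} \rho _s(\varphi _s) = \rho _{s^-}\bigl (\varphi _s(\cdot + \Delta \nu _s)\bigr ), \end{aligned}$$

i.e., the unnormalized conditional distribution is translated by the observable jump \(\Delta \nu _s\), while between jump times (3.8) evolves as a Zakai equation driven by the observation noise and by the continuous part of \(\nu \).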

Proof

Fix \(t \in [0,T]\) and \(\varphi \in {{\mathrm {C}}} _b^{1,2}([0,T] \times \mathbb {R}^m)\). Let us introduce the constants

$$\begin{aligned} C_\varphi&{{:}{=}} \sup _{t,x} |{\varphi (t,x)}|,&C'_\varphi&{{:}{=}} \sup _{t,x} \Vert {{{\mathrm {D}}} _x \varphi (t,x)}\Vert ,&C''_\varphi&{{:}{=}} \sup _{t,x} \Vert {{{\mathrm {D}}} ^2_x \varphi (t,x)}\Vert , \end{aligned}$$

where the suprema are taken over \([0,T] \times \mathbb {R}^m\). The proof is organized in several steps.

Step 1 (Approximation) For any fixed \(\epsilon > 0\), define the bounded process \(\eta ^\epsilon = (\eta ^\epsilon _t)_{t \in [0,T]}\):

$$\begin{aligned} \eta ^\epsilon _t {{:}{=}} \frac{\eta _t}{1+ \epsilon \eta _t}, \quad t \in [0,T], \end{aligned}$$
(3.9)

where \(\eta \) is defined in (2.11). Both \(\eta \) and \(\eta ^\epsilon \) have continuous trajectories and this fact will be used in what follows without further mention.

Applying Itô’s formula we obtain

$$\begin{aligned} \eta ^\epsilon _t&= \dfrac{1}{1+\epsilon } - \int _0^t \dfrac{\epsilon \eta _s^2}{(1+\epsilon \eta _s)^3} \Vert {\gamma ^{-1}(s) h(s, X_s)}\Vert ^2 \, {{\mathrm {d}}} s \\&\quad + \int _0^t \dfrac{\eta _s}{(1+\epsilon \eta _s)^2} \gamma ^{-1}(s) h(s, X_s) \, {{\mathrm {d}}} {\overline{B}}_s. \end{aligned}$$

Denoting by \([\cdot , \cdot ]\) the optional quadratic covariation operator, thanks to the integration by parts rule and recalling (3.6) we get

$$\begin{aligned} \eta ^\epsilon _t \varphi (t,X_t)&= \dfrac{\varphi (0,X_{0^-})}{1+\epsilon } + \int _0^t \eta ^\epsilon _{s^-} \, {{\mathrm {d}}} \varphi (s,X_s) + \int _0^t \varphi (s,X_{s^-}) \, {{\mathrm {d}}} \eta ^\epsilon _s + \int _0^t {{\mathrm {d}}} \bigl [\eta ^\epsilon , \varphi (\cdot , X) \bigr ]_s \nonumber \\&= \dfrac{\varphi (0,X_{0^-})}{1+\epsilon } + \int _0^t \eta ^\epsilon _{s^-} \bigl [\partial _s + {{\mathcal {A}}} \bigr ] \varphi (s, X_s) \, {{\mathrm {d}}} s + \int _0^t \eta ^\epsilon _{s^-} {{\mathrm {D}}} _x \varphi (s, X_{s^-}) \, {{\mathrm {d}}} \nu _s \nonumber \\&\quad + \sum _{0 \le s \le t} \eta ^\epsilon _{s^-} \Bigl [\varphi (s,X_s) - \varphi (s, X_{s^-}) - {{\mathrm {D}}} _x \varphi (s, X_{s^-}) \, \Delta \nu _s\Bigr ] + \int _0^t \eta ^\epsilon _{s^-} \, {{\mathrm {d}}} M_s^\varphi \nonumber \\&\quad - \int _0^t \dfrac{\epsilon \eta _s^2 \varphi (s,X_{s^-})}{(1+\epsilon \eta _s)^3} \Vert {\gamma ^{-1}(s) h(s, X_s)}\Vert ^2 \, {{\mathrm {d}}} s + \int _0^t \dfrac{\eta _s \varphi (s,X_{s^-})}{(1+\epsilon \eta _s)^2} \gamma ^{-1}(s) h(s, X_s) \, {{\mathrm {d}}} {\overline{B}}_s. \end{aligned}$$
(3.10)

Step 2 (Projection onto \({{\mathbb {Y}}} \)) Notice that \(X_t = X_{t^-} + \Delta \nu _t\), \({{\mathbb {P}}} \)-a.s., \(t \in [0,T]\), and that, since \({{\mathcal {Y}}} _{0^-} = {{\mathcal {Y}}} _0 = \{\emptyset , \Omega \}\), we have

$$\begin{aligned} {{\mathbb {E}}} [\varphi (0, X_{0^-}) \mid {{\mathcal {Y}}} _{0^-} ] = \int _{\mathbb {R}^m} \varphi (0, x) \, \xi ({{\mathrm {d}}} x) = \xi (\varphi _0). \end{aligned}$$

Therefore, taking conditional expectation with respect to \({{\mathcal {Y}}} \), we have (rearranging some terms)

$$\begin{aligned} {{\mathbb {E}}} [\eta ^\epsilon _t \varphi (t,X_t) \mid {{\mathcal {Y}}} ] =&\dfrac{\xi (\varphi _0)}{1+\epsilon } + {{\mathbb {E}}} \biggl [\int _0^t \eta ^\epsilon _{s^-} \bigl [\partial _s + {{\mathcal {A}}} \bigr ] \varphi (s, X_s) \, {{\mathrm {d}}} s \biggm | {{\mathcal {Y}}} \biggr ] \nonumber \\&+ {{\mathbb {E}}} \biggl [\int _0^t\eta ^\epsilon _{s^-} {{\mathrm {D}}} _x \varphi (s, X_{s^-}) \, {{\mathrm {d}}} \nu _s \biggm | {{\mathcal {Y}}} \biggr ] + {{\mathbb {E}}} \biggl [\int _0^t\dfrac{\eta _s \varphi (s,X_{s^-})}{(1+\epsilon \eta _s)^2} \gamma ^{-1}(s) h(s, X_s) \, {{\mathrm {d}}} {\overline{B}}_s \biggm | {{\mathcal {Y}}} \biggr ] \nonumber \\&+ {{\mathbb {E}}} \biggl [\sum _{0 \le s \le t} \eta ^\epsilon _{s^-} \Bigl [\varphi (s,X_{s^-} + \Delta \nu _s) - \varphi (s, X_{s^-}) - {{\mathrm {D}}} _x \varphi (s, X_{s^-}) \cdot \Delta \nu _s\Bigr ] \biggm | {{\mathcal {Y}}} \biggr ] \nonumber \\&+ {{\mathbb {E}}} \biggl [\int _0^t \eta ^\epsilon _{s^-} \, {{\mathrm {d}}} M_s^\varphi \biggm | {{\mathcal {Y}}} \biggr ] - {{\mathbb {E}}} \biggl [\int _0^t \dfrac{\epsilon \eta _s^2 \varphi (s,X_{s^-})}{(1+\epsilon \eta _s)^3} \Vert {\gamma ^{-1}(s) h(s, X_s)}\Vert ^2 \, {{\mathrm {d}}} s \biggm | {{\mathcal {Y}}} \biggr ]. \end{aligned}$$
(3.11)

We now analyze each of the terms appearing in (3.11). For any bounded \({{\mathcal {Y}}} \)-measurable Z, thanks to conditions (2.4) and (2.5), there exists a constant \(C_1\), depending on Z, \(\epsilon \), \(\varphi \), b, and \(\sigma \), such that

$$\begin{aligned} |Z \eta ^\epsilon _t \bigl [\partial _t + {{\mathcal {A}}} \bigr ] \varphi (t, X_t)| \le C_1(1+ \Vert {X_t}\Vert ^2), \quad t \in [0,T], \end{aligned}$$

which implies, using the estimate given in (2.9),

$$\begin{aligned} {{\mathbb {E}}} \biggl [\int _0^t Z \eta ^\epsilon _s \bigl [\partial _s + {{\mathcal {A}}} \bigr ] \varphi (s, X_s) \, {{\mathrm {d}}} s\biggr ]&\le C_1 {{\mathbb {E}}} \left[ \int _0^t (1+\Vert {X_s}\Vert ^2) \, {{\mathrm {d}}} s\right] \\&\le C_1T[1+\kappa (1+ {{\mathbb {E}}} [\Vert {X_{0^-}}\Vert ^2])] < +\infty . \end{aligned}$$

Therefore, applying the tower rule and Fubini–Tonelli’s theorem,

$$\begin{aligned} {{\mathbb {E}}} \biggl [Z \, {{\mathbb {E}}} \biggl [\int _0^t \eta ^\epsilon _s \bigl [\partial _s + {{\mathcal {A}}} \bigr ] \varphi (s, X_s) \, {{\mathrm {d}}} s \biggm | {{\mathcal {Y}}} \biggr ]\biggr ] = {{\mathbb {E}}} \biggl [Z \int _0^t {{\mathbb {E}}} [\eta ^\epsilon _s \bigl [\partial _s + {{\mathcal {A}}} \bigr ] \varphi (s, X_s) \mid {{\mathcal {Y}}} ] \, {{\mathrm {d}}} s \biggr ], \end{aligned}$$

whence

$$\begin{aligned} {{\mathbb {E}}} \biggl [\int _0^t \eta ^\epsilon _s \bigl [\partial _s + {{\mathcal {A}}} \bigr ] \varphi (s, X_s) \, {{\mathrm {d}}} s \biggm | {{\mathcal {Y}}} \biggr ] = \int _0^t {{\mathbb {E}}} [\eta ^\epsilon _s \bigl [\partial _s + {{\mathcal {A}}} \bigr ] \varphi (s, X_s) \mid {{\mathcal {Y}}} ] \, {{\mathrm {d}}} s. \end{aligned}$$
(3.12)

Similarly, for any bounded \({{\mathcal {Y}}} \)-measurable Z we have that

$$\begin{aligned} \Vert {Z \eta ^\epsilon _t {{\mathrm {D}}} _x \varphi (t, X_{t^-})}\Vert \le \dfrac{|{Z}|C'_\varphi }{\epsilon } < +\infty , \quad {{\mathrm {d}}} {{\mathbb {P}}} \otimes {{\mathrm {d}}} t\text {-a.e.} \end{aligned}$$

This fact will allow us to use Fubini-Tonelli's theorem in formula (3.13) below. We need to introduce the changes of time associated with the processes \(\nu ^{i,+}\) and \(\nu ^{i,-}\), \(i = 1, \dots , m\), defined as

$$\begin{aligned} C^{i,+}_t {{:}{=}} \inf \{s \ge 0 :\nu ^{i,+}_s \ge t\}, \quad C^{i,-}_t {{:}{=}} \inf \{s \ge 0 :\nu ^{i,-}_s \ge t\}, \quad t \ge 0, \quad i = 1, \dots ,m, \end{aligned}$$

where \(\nu ^{i,+}\) (resp. \(\nu ^{i,-}\)) denotes the positive part (resp. negative part) process of the i-th component of process \(\nu \) (see the list of notation in Sect. 1.2 for a more detailed definition).

For each \(t \ge 0\) and \(i = 1, \dots , m\), \(C^{i,+}_t\) and \(C^{i,-}_t\) are \({{\mathbb {Y}}} \)-stopping times (see, e.g., [20], Chapter VI, Def. 56 or [28], Proposition I.1.28). Hence, applying the change of time formula (see, e.g., [20], Chapter VI, Equation (55.1) or [28], Equation (1), p. 29) and Fubini-Tonelli’s theorem, we get

$$\begin{aligned}&{{\mathbb {E}}} \biggl [Z \, {{\mathbb {E}}} \biggl [\int _0^t\eta ^\epsilon _s {{\mathrm {D}}} _x \varphi (s, X_{s^-}) \, {{\mathrm {d}}} \nu _s \biggm | {{\mathcal {Y}}} \biggr ]\biggr ] \nonumber \\&= {{\mathbb {E}}} \biggl [\int _0^{+\infty } \mathbf {1}_{s \le t} Z \eta ^\epsilon _s {{\mathrm {D}}} _x \varphi (s, X_{s^-}) \, {{\mathrm {d}}} \nu _s\biggr ] \nonumber \\&= \sum _{i=1}^m {{\mathbb {E}}} \biggl [\int _0^{+\infty } \mathbf {1}_{s \le t} Z \eta ^\epsilon _s \partial _i \varphi (s, X_{s^-}) \, {{\mathrm {d}}} \nu ^{i,+}_s\biggr ] - \sum _{i=1}^m {{\mathbb {E}}} \biggl [\int _0^{+\infty } \mathbf {1}_{s \le t} Z \eta ^\epsilon _s \partial _i \varphi (s, X_{s^-}) \, {{\mathrm {d}}} \nu ^{i,-}_s\biggr ] \nonumber \\&= \sum _{i=1}^m {{\mathbb {E}}} \biggl [\int _0^{+\infty } \mathbf {1}_{C_s^{i,+} \le t} Z \eta ^\epsilon _{C_s^{i,+}} \partial _i \varphi (C_s^{i,+}, X_{({C_s^{i,+}})^-}) \mathbf {1}_{C_s^{i,+}< +\infty } \, {{\mathrm {d}}} s\biggr ] \nonumber \\&\qquad - \sum _{i=1}^m {{\mathbb {E}}} \biggl [\int _0^{+\infty } \mathbf {1}_{C_s^{i,-} \le t} Z \eta ^\epsilon _{C_s^{i,-}} \partial _i \varphi (C_s^{i,-}, X_{({C_s^{i,-}})^-}) \mathbf {1}_{C_s^{i,-}< +\infty } \, {{\mathrm {d}}} s\biggr ] \nonumber \\&= \sum _{i=1}^m \int _0^{+\infty } {{\mathbb {E}}} \left[ \mathbf {1}_{C_s^{i,+} \le t} Z {{\mathbb {E}}} \bigl [\eta ^\epsilon _{C_s^{i,+}} \partial _i \varphi (C_s^{i,+}, X_{({C_s^{i,+}})^-}) \bigm | {{\mathcal {Y}}} \bigr ] \mathbf {1}_{C_s^{i,+}< +\infty }\right] \, {{\mathrm {d}}} s \nonumber \\&\qquad - \sum _{i=1}^m \int _0^{+\infty } {{\mathbb {E}}} \left[ \mathbf {1}_{C_s^{i,-} \le t} Z {{\mathbb {E}}} \bigl [\eta ^\epsilon _{C_s^{i,-}} \partial _i \varphi (C_s^{i,-}, X_{({C_s^{i,-}})^-}) \bigm | {{\mathcal {Y}}} \bigr ] \mathbf {1}_{C_s^{i,-} < +\infty }\right] \, {{\mathrm {d}}} s \nonumber \\&= \sum _{i=1}^m {{\mathbb {E}}} \biggl [\int _0^{+\infty } \mathbf {1}_{s \le t} Z {{\mathbb {E}}} [\eta ^\epsilon _s \, \partial _i \varphi (s, X_{s^-}) \mid {{\mathcal {Y}}} ] \, {{\mathrm {d}}} \nu ^{i,+}_s\biggr ] \nonumber \\&\qquad - \sum _{i=1}^m {{\mathbb {E}}} \biggl [\int _0^{+\infty } \mathbf {1}_{s \le t} Z {{\mathbb {E}}} [\eta ^\epsilon _s \, \partial _i \varphi (s, X_{s^-}) \mid {{\mathcal {Y}}} ] \, {{\mathrm {d}}} \nu ^{i,-}_s\biggr ] \nonumber \\&= {{\mathbb {E}}} \biggl [\int _0^{+\infty } \mathbf {1}_{s \le t} Z {{\mathbb {E}}} [\eta ^\epsilon _s \, {{\mathrm {D}}} _x \varphi (s, X_{s^-}) \mid {{\mathcal {Y}}} ] \, {{\mathrm {d}}} \nu _s\biggr ] = {{\mathbb {E}}} \biggl [Z \int _0^t {{\mathbb {E}}} [\eta ^\epsilon _s \, {{\mathrm {D}}} _x \varphi (s, X_{s^-}) \mid {{\mathcal {Y}}} ] \, {{\mathrm {d}}} \nu _s\biggr ], \end{aligned}$$
(3.13)

whence

$$\begin{aligned} {{\mathbb {E}}} \biggl [\int _0^t\eta ^\epsilon _s {{\mathrm {D}}} _x \varphi (s, X_{s^-}) \, {{\mathrm {d}}} \nu _s \biggm | {{\mathcal {Y}}} \biggr ] = \int _0^t {{\mathbb {E}}} [\eta ^\epsilon _s \, {{\mathrm {D}}} _x \varphi (s, X_{s^-}) \mid {{\mathcal {Y}}} ] \, {{\mathrm {d}}} \nu _s. \end{aligned}$$
(3.14)

Next, using (A.4) we obtain

$$\begin{aligned}&{{\mathbb {E}}} \biggl [\int _0^t \biggl (\frac{\eta ^\epsilon _s}{1+\epsilon \eta _s} \varphi (s,X_{s^-}) \Vert {\gamma ^{-1}(s) h(s, X_s)}\Vert \biggr )^2 \, {{\mathrm {d}}} s \biggr ] \\&\quad \le \frac{C_\varphi ^2}{\epsilon ^2} \, {{\mathbb {E}}} \biggl [\int _0^t \Vert {\gamma ^{-1}(s) h(s, X_s)}\Vert ^2 \, {{\mathrm {d}}} s \biggr ] < +\infty , \end{aligned}$$

hence, by Lemma 3.2 we have:

$$\begin{aligned}&{{\mathbb {E}}} \biggl [\int _0^t \dfrac{\eta _s \varphi (s,X_{s^-})}{(1+\epsilon \eta _s)^2} \gamma ^{-1}(s) h(s, X_s) \, {{\mathrm {d}}} {\overline{B}}_s \biggm | {{\mathcal {Y}}} \biggr ] \nonumber \\&\quad = \int _0^t {{\mathbb {E}}} \biggl [\dfrac{\eta _s \varphi (s,X_{s^-})}{(1+\epsilon \eta _s)^2} \gamma ^{-1}(s) h(s, X_s) \biggm | {{\mathcal {Y}}} \biggr ] \, {{\mathrm {d}}} {\overline{B}}_s. \end{aligned}$$
(3.15)

Recalling that \(|{\nu ^i}|_T \le K\), \({{\mathbb {P}}} \)-a.s., and hence \(|{\Delta \nu ^i_t}| \le K\), for all \(t \in [0,T]\) and all \(i=1,\dots ,m\), \({{\mathbb {P}}} \)-a.s., for any bounded \({{\mathcal {Y}}} \)-measurable Z we have that

$$\begin{aligned}&\sum _{0 \le s \le t} {{\mathbb {E}}} \biggl | Z \eta ^\epsilon _s \Bigl [\varphi (s,X_{s^-} + \Delta \nu _s) - \varphi (s, X_{s^-}) - {{\mathrm {D}}} _x \varphi (s, X_{s^-}) \, \Delta \nu _s\Bigr ] \biggr | \\&\quad \le \frac{|{Z}|}{\epsilon } \frac{C''_\varphi }{2} \sum _{0 \le s \le t} {{\mathbb {E}}} \bigl [\Vert {\Delta \nu _s}\Vert ^2\bigr ] \\&\quad = \frac{|{Z}|}{\epsilon } \frac{C''_\varphi }{2} {{\mathbb {E}}} \biggl [\sum _{0 \le s \le t} \Delta \nu _s^* \Delta \nu _s\biggr ] \le \frac{|{Z}|}{\epsilon } \frac{C''_\varphi }{2} \sum _{i=1}^m {{\mathbb {E}}} \left[ \int _0^t |{\Delta \nu ^i_s}| {{\mathrm {d}}} |{\nu ^i}|_s\right] \le \frac{|{Z}|}{\epsilon } \frac{C''_\varphi }{2} mK^2 < + \infty . \end{aligned}$$

Therefore, using once more Fubini-Tonelli’s theorem

$$\begin{aligned}&{{\mathbb {E}}} \biggl [Z \, {{\mathbb {E}}} \biggl [\sum _{0 \le s \le t} \eta ^\epsilon _s \Bigl [\varphi (s,X_{s^-} + \Delta \nu _s) - \varphi (s, X_{s^-}) - {{\mathrm {D}}} _x \varphi (s, X_{s^-}) \cdot \Delta \nu _s\Bigr ] \biggm | {{\mathcal {Y}}} \biggr ]\biggr ] \\&= {{\mathbb {E}}} \biggl [Z \sum _{0 \le s \le t} {{\mathbb {E}}} \Bigl [ \eta ^\epsilon _s \Bigl [\varphi (s,X_{s^-} + \Delta \nu _s) - \varphi (s, X_{s^-}) - {{\mathrm {D}}} _x \varphi (s, X_{s^-}) \cdot \Delta \nu _s\Bigr ] \Bigm | {{\mathcal {Y}}} \Bigr ] \, \biggr ], \end{aligned}$$

and hence

$$\begin{aligned}&{{\mathbb {E}}} \biggl [\sum _{0 \le s \le t} \eta ^\epsilon _s \Bigl [\varphi (s,X_{s^-} + \Delta \nu _s) - \varphi (s, X_{s^-}) - {{\mathrm {D}}} _x \varphi (s, X_{s^-}) \cdot \Delta \nu _s\Bigr ] \biggm | {{\mathcal {Y}}} \biggr ] \nonumber \\&= \sum _{0 \le s \le t} {{\mathbb {E}}} \Bigl [ \eta ^\epsilon _s \Bigl [\varphi (s,X_{s^-} + \Delta \nu _s) - \varphi (s, X_{s^-}) - {{\mathrm {D}}} _x \varphi (s, X_{s^-}) \cdot \Delta \nu _s\Bigr ] \Bigm | {{\mathcal {Y}}} \Bigr ]. \end{aligned}$$
(3.16)

Finally, since \(\eta ^\epsilon \) is bounded, Lemma 3.3 entails \({{\mathbb {E}}} \bigl [\int _0^t \eta ^\epsilon _s \, {{\mathrm {d}}} M_s^\varphi \bigm | {{\mathcal {Y}}} \bigr ] = 0\), and, using the same rationale as in the previous evaluations,

$$\begin{aligned}&{{\mathbb {E}}} \biggl [\int _0^t \dfrac{\epsilon \eta _s^2 \varphi (s,X_{s^-})}{(1+\epsilon \eta _s)^3} \Vert {\gamma ^{-1}(s) h(s, X_s)}\Vert ^2 \, {{\mathrm {d}}} s \biggm | {{\mathcal {Y}}} \biggr ] \nonumber \\&\quad = \int _0^t {{\mathbb {E}}} \biggl [\dfrac{\epsilon \eta _s^2 \varphi (s,X_{s^-})}{(1+\epsilon \eta _s)^3} \Vert {\gamma ^{-1}(s) h(s, X_s)}\Vert ^2 \biggm | {{\mathcal {Y}}} \biggr ] \, {{\mathrm {d}}} s. \end{aligned}$$
(3.17)

Taking into account (3.12), (3.14), (3.15), (3.16), and (3.17), Equation (3.11) becomes

$$\begin{aligned} {{\mathbb {E}}} [\eta ^\epsilon _t \varphi (t,X_t) \mid {{\mathcal {Y}}} ]&= \frac{\xi (\varphi _0)}{1+\epsilon } + \int _0^t {{\mathbb {E}}} [\eta ^\epsilon _s \bigl [\partial _s + {{\mathcal {A}}} \bigr ] \varphi (s, X_s) \mid {{\mathcal {Y}}} ] \, {{\mathrm {d}}} s \nonumber \\&\quad - \int _0^t {{\mathbb {E}}} \biggl [\dfrac{\epsilon \eta _s^2 \varphi (s,X_{s})}{(1+\epsilon \eta _s)^3} \Vert {\gamma ^{-1}(s) h(s, X_s)}\Vert ^2 \biggm | {{\mathcal {Y}}} \biggr ] \, {{\mathrm {d}}} s \nonumber \\&\quad + \int _0^t {{\mathbb {E}}} \biggl [\dfrac{\eta _s \varphi (s,X_{s})}{(1+\epsilon \eta _s)^2} \gamma ^{-1}(s) h(s, X_s) \biggm | {{\mathcal {Y}}} \biggr ] \, {{\mathrm {d}}} {\overline{B}}_s\nonumber \\&\quad + \int _0^t {{\mathbb {E}}} [\eta ^\epsilon _s \, {{\mathrm {D}}} _x \varphi (s, X_{s^-}) \mid {{\mathcal {Y}}} ] \, {{\mathrm {d}}} \nu _s \nonumber \\&\quad + \sum _{0 \le s \le t} {{\mathbb {E}}} \Bigl [ \eta ^\epsilon _s \Bigl [\varphi (s,X_{s^-} + \Delta \nu _s) - \varphi (s, X_{s^-}) - {{\mathrm {D}}} _x \varphi (s, X_{s^-}) \cdot \Delta \nu _s\Bigr ] \Bigm | {{\mathcal {Y}}} \Bigr ]. \end{aligned}$$
(3.18)

Step 3 (Taking limits) It remains to show that all the terms appearing in (3.18) converge appropriately to give (3.8). As \(\epsilon \rightarrow 0\), we have that \(\eta ^\epsilon _t \rightarrow \eta _t\), \({{\mathbb {E}}} [\eta ^\epsilon _t \varphi (t,X_t) \mid {{\mathcal {Y}}} ] \longrightarrow \rho _t(\varphi _t)\), for all \(t \in [0,T]\), and

$$\begin{aligned} {{\mathbb {E}}} [\eta ^\epsilon _t \bigl [\partial _t + {{\mathcal {A}}} \bigr ] \varphi (t, X_t) \mid {{\mathcal {Y}}} ] \longrightarrow \rho _t\bigl (\bigl [\partial _t + {{\mathcal {A}}} _t\bigr ] \varphi _t\bigr ), \quad {{\mathrm {d}}} {{\mathbb {P}}} \otimes {{\mathrm {d}}} t\text {-a.e.} \end{aligned}$$

Using boundedness of \(\varphi \) and (2.4), (2.5), we get that

$$\begin{aligned} |{{{\mathbb {E}}} [\eta ^\epsilon _t \bigl [\partial _t + {{\mathcal {A}}} \bigr ] \varphi (t, X_t) \mid {{\mathcal {Y}}} ]}| \le C_2 {{\mathbb {E}}} [\eta _t(1+\Vert {X_t}\Vert ^2) \mid {{\mathcal {Y}}} ], \quad t \in [0,T], \end{aligned}$$

for some constant \(C_2\), depending on \(\varphi \), b, and \(\sigma \). The r.h.s. of this inequality is \({{\mathrm {d}}} {{\mathbb {P}}} \otimes {{\mathrm {d}}} t\) integrable on \(\Omega \times [0,t]\), since (apply again the tower rule and Fubini-Tonelli’s theorem)

$$\begin{aligned} {{\mathbb {E}}} \left[ \int _0^t C_2 {{\mathbb {E}}} [\eta _s(1+\Vert {X_s}\Vert ^2) \mid {{\mathcal {Y}}} ] \, {{\mathrm {d}}} s\right] \le C_2 T \bigl \{1 + \kappa (1+ {\widetilde{{{\mathbb {E}}} }}[\Vert {X_{0^-}}\Vert ^2])\bigr \} < +\infty , \end{aligned}$$

where we used (2.9) (which also holds under \({\widetilde{{{\mathbb {P}}} }}\), because the dynamics of X do not change under this measure), and the fact that \({\widetilde{{{\mathbb {E}}} }}[\Vert {X_{0^-}}\Vert ^2] = {{\mathbb {E}}} [\Vert {X_{0^-}}\Vert ^2] < +\infty \).

Using the conditional form of the dominated convergence theorem, we have that, for all \(t \in [0,T]\),

$$\begin{aligned} {{\mathbb {E}}} \biggl [\int _0^t {{\mathbb {E}}} [\eta ^\epsilon _s \bigl [\partial _s + {{\mathcal {A}}} \bigr ] \varphi (s, X_s) \mid {{\mathcal {Y}}} ] \, {{\mathrm {d}}} s \biggm | {{\mathcal {Y}}} \biggr ] \longrightarrow {{\mathbb {E}}} \biggl [\int _0^t \rho _s\bigl (\bigl [\partial _s + {{\mathcal {A}}} _s\bigr ] \varphi _s\bigr ) \, {{\mathrm {d}}} s \biggm | {{\mathcal {Y}}} \biggr ], \quad {{\mathbb {P}}} \text {-a.s.}, \end{aligned}$$

as \(\epsilon \rightarrow 0\), whence, noticing that the integrals are \({{\mathcal {Y}}} \)-measurable random variables,

$$\begin{aligned} \int _0^t {{\mathbb {E}}} [\eta ^\epsilon _s \bigl [\partial _s + {{\mathcal {A}}} \bigr ] \varphi (s, X_s) \mid {{\mathcal {Y}}} ] \, {{\mathrm {d}}} s \longrightarrow \int _0^t \rho _s\bigl (\bigl [\partial _s + {{\mathcal {A}}} _s\bigr ] \varphi _s\bigr ) \, {{\mathrm {d}}} s, \quad {{\mathbb {P}}} \text {-a.s.}, \quad \forall t \in [0,T]. \end{aligned}$$

We now consider the term on the second line of (3.18). We have that, for all \(t \in [0,T]\),

$$\begin{aligned} {{\mathbb {E}}} \biggl [\dfrac{\epsilon \eta _t^2 \varphi (t,X_{t})}{(1+\epsilon \eta _t)^3} \Vert {\gamma ^{-1}(t) h(t, X_t)}\Vert ^2 \biggm | {{\mathcal {Y}}} \biggr ] \longrightarrow 0, \end{aligned}$$

as \(\epsilon \rightarrow 0\), and that

$$\begin{aligned} {{\mathbb {E}}} \biggl [\dfrac{\epsilon \eta _t^2 \varphi (t,X_{t})}{(1+\epsilon \eta _t)^3} \Vert {\gamma ^{-1}(t) h(t, X_t)}\Vert ^2 \biggm | {{\mathcal {Y}}} \biggr ] \le C_\varphi {{\mathbb {E}}} [\eta _t\Vert {\gamma ^{-1}(t) h(t, X_t)}\Vert ^2 \mid {{\mathcal {Y}}} ]. \end{aligned}$$

The r.h.s. of the last inequality is \({{\mathrm {d}}} {{\mathbb {P}}} \otimes {{\mathrm {d}}} t\)-integrable on \(\Omega \times [0,t]\), since

$$\begin{aligned}&{{\mathbb {E}}} \left[ \int _0^t C_\varphi {{\mathbb {E}}} [\eta _s\Vert {\gamma ^{-1}(s) h(s, X_s)}\Vert ^2 \mid {{\mathcal {Y}}} ] \, {{\mathrm {d}}} s\right] \\&\quad \le n C_\varphi C_h C_\gamma T[1 + \kappa (1+ {\widetilde{{{\mathbb {E}}} }}[\Vert {X_{0^-}}\Vert ^2])] < +\infty , \end{aligned}$$

where we used (A.4), which holds also under \({\widetilde{{{\mathbb {P}}} }}\) (again, because the dynamics of X do not change under this measure), and the fact that \({\widetilde{{{\mathbb {E}}} }}[\Vert {X_{0^-}}\Vert ^2] = {{\mathbb {E}}} [\Vert {X_{0^-}}\Vert ^2] < +\infty \).

Hence, reasoning as above, after applying the conditional form of the dominated convergence theorem we obtain that, for all \(t \in [0,T]\), as \(\epsilon \rightarrow 0\),

$$\begin{aligned} \int _0^t {{\mathbb {E}}} \biggl [\dfrac{\epsilon \eta _s^2 \varphi (s,X_{s})}{(1+\epsilon \eta _s)^3} \Vert {\gamma ^{-1}(s) h(s, X_s)}\Vert ^2 \biggm | {{\mathcal {Y}}} \biggr ] \, {{\mathrm {d}}} s \longrightarrow 0, \quad {{\mathbb {P}}} \text {-a.s.} \end{aligned}$$

Looking at the third line of (3.18), the next step is to show that

$$\begin{aligned} \int _0^t {{\mathbb {E}}} \biggl [\dfrac{\eta _s \varphi (s,X_{s})}{(1+\epsilon \eta _s)^2} \gamma ^{-1}(s) h(s, X_s) \biggm | {{\mathcal {Y}}} \biggr ] \, {{\mathrm {d}}} {\overline{B}}_s \longrightarrow \int _0^t \gamma ^{-1}(s) \rho _s(\varphi _s h_s) \, {{\mathrm {d}}} {\overline{B}}_s, \quad {{\mathbb {P}}} \text {-a.s.} \end{aligned}$$

The proof of this fact is standard (see, e.g., [2], Theorem 3.24 and Exercise 3.25.i, or [5], Theorem 4.1.1). It is important to notice that condition (3.7) plays a role here.

Next, we examine the integral with respect to \(\nu \) in the fourth line of (3.18). We have that

$$\begin{aligned} {{\mathbb {E}}} [\eta ^\epsilon _t \, {{\mathrm {D}}} _x \varphi (t, X_{t^-}) \mid {{\mathcal {Y}}} ] \longrightarrow \rho _{t^-}\bigl ({{\mathrm {D}}} _x \varphi _t\bigr ), \quad {{\mathbb {P}}} \text {-a.s.}, \end{aligned}$$

as \(\epsilon \rightarrow 0\), for all \(t \in [0,T]\). Notice that, for any \(t \in [0,T]\),

$$\begin{aligned} \Vert {{{\mathbb {E}}} [\eta ^\epsilon _t \, {{\mathrm {D}}} _x \varphi (t, X_{t^-}) \mid {{\mathcal {Y}}} ]}\Vert \le C'_\varphi {{\mathbb {E}}} [\eta _t \mid {{\mathcal {Y}}} ]. \end{aligned}$$

Since \(\eta \) is non-negative, a \({{\mathbb {Y}}} \)-optional version of \(\left\{ {{\mathbb {E}}} [\mathbf {1}_{t \le T} \eta _t \mid {{\mathcal {Y}}} _t]\right\} _{t \ge 0}\) is given by the \({{\mathbb {Y}}} \)-optional projection of \(\left\{ \mathbf {1}_{t \le T} \eta _t\right\} _{t \ge 0}\) (see, e.g., [14], Corollary 7.6.8). Therefore, applying [20], Chapter VI, Theorem 57, and using Lemma 3.1, we get that, for all \(i = 1, \dots , m\),

$$\begin{aligned} {{\mathbb {E}}} \biggl [\int _0^T C'_\varphi {{\mathbb {E}}} [\eta _t \mid {{\mathcal {Y}}} ] \, {{\mathrm {d}}} |{\nu ^i}|_t\biggr ]&= C'_\varphi {{\mathbb {E}}} \biggl [\int _0^{+\infty } {{\mathbb {E}}} [\mathbf {1}_{t \le T} \eta _t \mid {{\mathcal {Y}}} _t] \, {{\mathrm {d}}} |{\nu ^i}|_t\biggr ]\\&= C'_\varphi {{\mathbb {E}}} \biggl [\int _0^T \eta _t \, {{\mathrm {d}}} |{\nu ^i}|_t\biggr ] < +\infty , \end{aligned}$$

where the finiteness of \({{\mathbb {E}}} [\int _0^T \eta _t \, {{\mathrm {d}}} |{\nu ^i}|_t]\) can be established by a reasoning analogous to the proof of (A.10). Therefore, we can apply the conditional form of the dominated convergence theorem to obtain that, for all \(t \in [0,T]\), as \(\epsilon \rightarrow 0\),

$$\begin{aligned} {{\mathbb {E}}} \biggl [\int _0^t {{\mathbb {E}}} [\eta ^\epsilon _s \, {{\mathrm {D}}} _x \varphi (s, X_{s^-}) \mid {{\mathcal {Y}}} ] \, {{\mathrm {d}}} \nu _s \biggm | {{\mathcal {Y}}} \biggr ] \longrightarrow {{\mathbb {E}}} \biggl [\int _0^t \rho _{s^-}\bigl ({{\mathrm {D}}} _x \varphi _s\bigr ) \, {{\mathrm {d}}} \nu _s \biggm | {{\mathcal {Y}}} \biggr ], \quad {{\mathbb {P}}} \text {-a.s.} \end{aligned}$$

Since the integrals are \({{\mathcal {Y}}} \)-measurable random variables, this implies that, for all \(t \in [0,T]\), as \(\epsilon \rightarrow 0\),

$$\begin{aligned} \int _0^t {{\mathbb {E}}} [\eta ^\epsilon _s \, {{\mathrm {D}}} _x \varphi (s, X_{s^-}) \mid {{\mathcal {Y}}} ] \, {{\mathrm {d}}} \nu _s \longrightarrow \int _0^t \rho _{s^-}\bigl ({{\mathrm {D}}} _x \varphi _s\bigr ) \, {{\mathrm {d}}} \nu _s, \quad {{\mathbb {P}}} \text {-a.s.} \end{aligned}$$

Finally, looking at the last line of (3.18), we have that, for all \(t \in [0,T]\), as \(\epsilon \rightarrow 0\),

$$\begin{aligned} \Lambda ^\epsilon _t {{:}{=}} {{\mathbb {E}}} \Bigl [ \eta ^\epsilon _t \Bigl [\varphi (t,X_{t^-} + \Delta \nu _t) - \varphi (t, X_{t^-}) - {{\mathrm {D}}} _x \varphi (t, X_{t^-}) \cdot \Delta \nu _t\Bigr ] \Bigm | {{\mathcal {Y}}} \Bigr ] \\ \longrightarrow \rho _{t^-}\bigl (\varphi _t(\cdot + \Delta \nu _t) - \varphi _t - {{\mathrm {D}}} _x \varphi _t \cdot \Delta \nu _t\bigr ), \quad {{\mathbb {P}}} \text {-a.s.} \end{aligned}$$

Observe that, for any \(t \in [0,T]\), \(\Lambda ^\epsilon _t\) is bounded by \(\frac{1}{2} C''_\varphi {{\mathbb {E}}} [\eta _t \Vert {\Delta \nu _t}\Vert ^2 \mid {{\mathcal {Y}}} ]\), which is non-negative and integrable with respect to the product of the measure \({{\mathbb {P}}} \) and the jump measure associated with \(\nu \), since

$$\begin{aligned} {{\mathbb {E}}} \biggl [\sum _{0 \le s \le t} \frac{1}{2} C''_\varphi {{\mathbb {E}}} [\eta _s \Vert {\Delta \nu _s}\Vert ^2 \mid {{\mathcal {Y}}} ] \biggr ]&= \frac{1}{2} C''_\varphi \sum _{0 \le s \le t} {{\mathbb {E}}} [\eta _s \Vert {\Delta \nu _s}\Vert ^2] \\ {}&=\frac{1}{2} C''_\varphi {{\mathbb {E}}} \left[ \sum _{0 \le s \le t} \eta _s \Delta \nu _s \cdot \Delta \nu _s\right] \\&\le \frac{1}{2} C''_\varphi \sum _{i=1}^m {{\mathbb {E}}} \left[ \int _0^t \eta _s |{\Delta \nu ^i_s}| {{\mathrm {d}}} |{\nu ^i}|_s\right] \\ {}&\le \frac{1}{2} C''_\varphi K \sum _{i=1}^m {{\mathbb {E}}} \left[ \int _0^t \eta _s {{\mathrm {d}}} |{\nu ^i}|_s\right] < +\infty . \end{aligned}$$

By the conditional form of the dominated convergence theorem, we have that, for all \(t \in [0,T]\), as \(\epsilon \rightarrow 0\),

$$\begin{aligned} {{\mathbb {E}}} \Bigl [\sum _{0 \le s \le t} \Lambda ^\epsilon _s \Bigm | {{\mathcal {Y}}} \Bigr ] \longrightarrow {{\mathbb {E}}} \Bigl [\sum _{0 \le s \le t} \rho _{s^-}\bigl (\varphi _s(\cdot + \Delta \nu _s) - \varphi _s - {{\mathrm {D}}} _x \varphi _s \cdot \Delta \nu _s\bigr ) \Bigm | {{\mathcal {Y}}} \Bigr ], \quad {{\mathbb {P}}} \text {-a.s.} \end{aligned}$$

and since the sums are \({{\mathcal {Y}}} \)-measurable random variables, this implies that, for all \(t \in [0,T]\), as \(\epsilon \rightarrow 0\),

$$\begin{aligned} \sum _{0 \le s \le t} \Lambda ^\epsilon _s \longrightarrow \sum _{0 \le s \le t} \rho _{s^-}\bigl (\varphi _s(\cdot + \Delta \nu _s) - \varphi _s - {{\mathrm {D}}} _x \varphi _s \cdot \Delta \nu _s\bigr ), \quad {{\mathbb {P}}} \text {-a.s.} \end{aligned}$$

\(\square \)

Remark 3.1

If the jump times of the process \(\nu \) do not accumulate over [0, T], then the Zakai equation can be split into successive linear SPDEs between the jumps of \(\nu \) (i.e., of X). Set \(T_0 = 0\), denote by \((T_n)_{n \in \mathbb {N}}\) the sequence of jump times of \(\nu \), and denote by \(\nu ^c\) the continuous part of \(\nu \). Then, for any \(\varphi \in {{\mathrm {C}}} ^{1,2}_b([0,T] \times \mathbb {R}^m)\) and any \(n \in \mathbb {N}_0\) we have \({{\mathbb {P}}} \)-a.s.

$$\begin{aligned} \left\{ \begin{aligned}&{{\mathrm {d}}} \rho _t(\varphi _t) \!=\! \rho _t\bigl (\bigl [\partial _t \!+\! {{\mathcal {A}}} _t\bigr ]\! \varphi _t\bigr ) {{\mathrm {d}}} t + \rho _{t^-}\!\bigl ({{\mathrm {D}}} _x \varphi _t\bigr ) {{\mathrm {d}}} \nu ^c_t + \gamma ^{-1}\!(t) \rho _t(\varphi _t h_t) {{\mathrm {d}}} {\overline{B}}_t,&t \!\in \! [T_n \!\wedge \! T, T_{n+1} \!\wedge \! T), \\&\rho _{0^-}(\varphi _0) = \xi (\varphi _0), \\&\rho _{T_n}(\varphi ) = \rho _{{T_n}^-}\bigl (\varphi _{T_n}(\cdot + \Delta \nu _{T_n})\bigr ). \end{aligned} \right. \end{aligned}$$
(3.19)

We are now ready to deduce, from the Zakai equation, the Kushner–Stratonovich equation, i.e., the equation satisfied by the filtering process \(\pi \), defined in (3.1). The proofs of the following two results follow essentially the same steps as [2], Lemma 3.29 and Theorem 3.30, up to the modifications required by the present setting (see also [5], Lemma 4.3.1 and Theorem 4.3.1).

Lemma 3.5

Under the same assumptions as in Theorem 3.4, the process \(\bigl (\rho _t(\mathsf {1})\bigr )_{t \in [0,T]}\) satisfies, for all \(t \in [0,T]\),

$$\begin{aligned} \rho _t(\mathsf {1}) = \exp \left\{ \int _0^t \gamma ^{-1}(s) \pi _s(h_s) \, {{\mathrm {d}}} {\overline{B}}_s - \dfrac{1}{2} \int _0^t \Vert {\gamma ^{-1}(s) \pi _s(h_s)}\Vert ^2 \, {{\mathrm {d}}} s\right\} , \quad {{\mathbb {P}}} \text {-a.s.} \end{aligned}$$

Theorem 3.6

Under the same assumptions as in Theorem 3.4, the process \(\bigl (\pi _t(\varphi )\bigr )_{t \in [0,T]}\) satisfies, for all \(t \in [0,T]\) and all \(\varphi \in {{\mathrm {C}}} ^{1,2}_b([0,T] \times \mathbb {R}^m)\), the Kushner–Stratonovich equation

$$\begin{aligned} \pi _t(\varphi _t) =&\pi _{0^-}(\varphi _0) + \int _0^t \pi _s\bigl (\bigl [\partial _s + {{\mathcal {A}}} _s\bigr ] \varphi _s\bigr ) \, {{\mathrm {d}}} s + \int _0^t \pi _{s^-}\bigl ({{\mathrm {D}}} _x \varphi _s\bigr ) \, {{\mathrm {d}}} \nu _s \nonumber \\&+ \int _0^t \gamma ^{-1}(s) \Bigl \{\pi _s\bigl (\varphi _s h_s\bigr ) - \pi _s(\varphi _s) \pi _s(h_s)\Bigr \} \, \bigl [{{\mathrm {d}}} {\overline{B}}_s - \gamma ^{-1}(s)\pi _s(h_s) \, {{\mathrm {d}}} s\bigr ] \nonumber \\&+\sum _{0 \le s \le t} \Bigl [\pi _{s^-}\bigl (\varphi _s(\cdot + \Delta \nu _s) - \varphi _s - {{\mathrm {D}}} _x \varphi _s \cdot \Delta \nu _s\bigr )\Bigr ], \quad {{\mathbb {P}}} \text {-a.s.} \end{aligned}$$
(3.20)

Remark 3.2

It is not difficult to show (see, e.g., [2], Proposition 2.30, or [5], Theorem 4.3.4) that

$$\begin{aligned} I_t {{:}{=}} {\overline{B}}_t - \int _0^t \gamma ^{-1}(s)\pi _s(h_s) \, {{\mathrm {d}}} s, \quad t \in [0,T], \end{aligned}$$

is a \(({\widetilde{{{\mathbb {P}}} }}, {{\mathbb {Y}}} )\)-Brownian motion, the so-called innovation process. This allows us to rewrite the Kushner–Stratonovich equation in the (perhaps more familiar) form

$$\begin{aligned} \pi _t(\varphi _t) =&\xi (\varphi _0) + \int _0^t \pi _s\bigl (\bigl [\partial _s + {{\mathcal {A}}} _s\bigr ] \varphi _s\bigr ) \, {{\mathrm {d}}} s + \int _0^t \pi _{s^-}\bigl ({{\mathrm {D}}} _x \varphi _s\bigr ) \, {{\mathrm {d}}} \nu _s \nonumber \\&+ \int _0^t \gamma ^{-1}(s) \Bigl \{\pi _s\bigl (\varphi _s h_s\bigr ) - \pi _s(\varphi _s) \pi _s(h_s)\Bigr \} \, {{\mathrm {d}}} I_s \nonumber \\&+\sum _{0 \le s \le t} \Bigl [\pi _{s^-}\bigl (\varphi _s(\cdot + \Delta \nu _s) - \varphi _s - {{\mathrm {D}}} _x \varphi _s \cdot \Delta \nu _s\bigr )\Bigr ], \quad {\widetilde{{{\mathbb {P}}} }}\text {-a.s.}, \quad t \in [0,T]. \end{aligned}$$

Notice, however, that in this setting the innovation process is not a Brownian motion given a priori, because it depends (through the density process \(\eta \), and hence through X) on the initial law \(\xi \) of the signal process and on the process \(\nu \).

Remark 3.3

Similarly to Remark 3.1, if the jump times of the process \(\nu \) do not accumulate over [0, T], then the Kushner–Stratonovich equation can be split into successive nonlinear SPDEs between the jumps of \(\nu \) (i.e., of X). Using the same notation as in that remark, for any \(\varphi \in {{\mathrm {C}}} ^{1,2}_b([0,T] \times \mathbb {R}^m)\) and any \(n \in \mathbb {N}_0\) we have \({{\mathbb {P}}} \)-a.s.

$$\begin{aligned} \left\{ \begin{aligned}&{{\mathrm {d}}} \pi _t(\varphi _t) = \pi _t\bigl (\bigl [\partial _t + {{\mathcal {A}}} _t\bigr ] \varphi _t\bigr ) \, {{\mathrm {d}}} t + \pi _{t^-}\bigl ({{\mathrm {D}}} _x \varphi _t\bigr ) \, {{\mathrm {d}}} \nu ^c_t \\&\quad + \gamma ^{-1}(t) \Bigl \{\pi _t\bigl (\varphi _t h_t\bigr ) - \pi _t(\varphi _t) \pi _t(h_t)\Bigr \} \bigl [{{\mathrm {d}}} {\overline{B}}_t - \gamma ^{-1}(t)\pi _t(h_t) {{\mathrm {d}}} t\bigr ],&t \in [T_n \wedge T, T_{n+1} \wedge T), \\&\pi _{0^-}(\varphi _0) = \xi (\varphi _0), \\&\pi _{T_n}(\varphi ) = \pi _{{T_n}^-}\bigl (\varphi _{T_n}(\cdot + \Delta \nu _{T_n})\bigr ). \end{aligned}\right. \end{aligned}$$
(3.21)
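From a computational viewpoint, the split form (3.21) suggests a straightforward simulation scheme: between two jump times the filter evolves as in the classical diffusive case, while at each jump time the current approximation of \(\pi \) is translated by \(\Delta \nu _{T_n}\). The following sketch is purely illustrative and not part of the results of this paper: it is a minimal bootstrap particle approximation for a scalar signal and observation (\(m = n = 1\)), with an Euler discretization in time, assuming observation increments of the form \(\Delta Y \approx h(t, X_t)\Delta t + \gamma \Delta {\overline{B}}\); the inputs y_incr, nu_c_incr, and jumps are hypothetical names for the observation increments, the increments of \(\nu ^c\), and the jumps of \(\nu \).

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(y_incr, dt, b, sigma, h, gamma, nu_c_incr, jumps,
                    x0, n_part=1000):
    """Bootstrap particle approximation of the filter pi in the split
    form (3.21), for a one-dimensional signal and observation.

    y_incr[k]    : observation increment over the k-th time step
    nu_c_incr[k] : increment of the continuous part nu^c on that step
    jumps        : dict {step index k: jump size Delta nu_{T_n}}
    """
    x = np.full(n_part, float(x0))      # particle positions
    w = np.full(n_part, 1.0 / n_part)   # normalized weights
    means = []
    for k, dy in enumerate(y_incr):
        t = k * dt
        # prediction: Euler step of the signal SDE plus the d nu^c term
        x = (x + b(t, x) * dt
               + sigma(t, x) * rng.normal(0.0, np.sqrt(dt), n_part)
               + nu_c_incr[k])
        # jump of nu: shift every particle, which is precisely the update
        # pi_{T_n}(phi) = pi_{T_n^-}(phi(. + Delta nu_{T_n})) in (3.21)
        if k in jumps:
            x = x + jumps[k]
        # correction: reweight by the Gaussian observation likelihood
        w = w * np.exp(-0.5 * (dy - h(t, x) * dt) ** 2 / (gamma ** 2 * dt))
        w = w / w.sum()
        means.append(np.sum(w * x))     # estimate of pi_t applied to the identity
        # multinomial resampling when the effective sample size degrades
        if 1.0 / np.sum(w ** 2) < n_part / 2:
            idx = rng.choice(n_part, size=n_part, p=w)
            x, w = x[idx], np.full(n_part, 1.0 / n_part)
    return np.array(means)
```

Note that the jump update requires no reweighting: since the jump of X is \({{\mathcal {Y}}} \)-measurable, the conditional law is simply translated, exactly as in the last condition of (3.21).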

4 Uniqueness of the Solution to the Zakai Equation

In this section we address the issue of uniqueness of the solution to the Zakai equation (3.8), under the requirement that the jump times of the process \(\nu \) do not accumulate over [0, T]. Proving uniqueness is essential to completely characterize the unnormalized filtering process \(\rho \), defined in (3.3), and is crucial in applications, e.g., in optimal control. Indeed, once it is ensured that (3.8) (or, equivalently, (3.20)) uniquely characterizes the conditional distribution of the signal given the observation, the filtering process can be employed as a state variable to solve the related separated optimal control problem (cf. [5]).

We follow the approach in [33] (see also [2], Chapter 7, and [42], Chapter 6). The idea is to recast the measure-valued Zakai equation as an SPDE in the Hilbert space \(H {{:}{=}} {{\mathrm {L}}} ^2(\mathbb {R}^m)\) and, therefore, to look for a density of \(\rho \) in this space. To accomplish this, we smooth solutions to (3.8) using the heat kernel, and then use estimates in \({{\mathrm {L}}} ^2(\mathbb {R}^m)\) to deduce the desired result. An important role in the subsequent analysis is played by the following lemma, whose proof can be found, e.g., in [2], Solution to Exercise 7.2.

Lemma 4.1

Let \(\{\varphi _k\}_{k \in \mathbb {N}}\) be an orthonormal basis of H such that \(\varphi _k \in {{\mathrm {C}}} _b(\mathbb {R}^m)\) for any \(k \in \mathbb {N}\), and let \(\mu \in {{\mathcal {M}}} (\mathbb {R}^m)\) be a finite measure. If

$$\begin{aligned} \sum _{k \in \mathbb {N}} [\mu (\varphi _k)]^2 < +\infty , \end{aligned}$$

then \(\mu \) is absolutely continuous with respect to Lebesgue measure on \(\mathbb {R}^m\) and its density is square-integrable.
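To make the criterion of Lemma 4.1 concrete, here is a small numerical illustration (ours, not part of the paper): take \(m = 1\), the Hermite functions, which are bounded, continuous, and form an orthonormal basis of \(H = {{\mathrm {L}}} ^2(\mathbb {R})\), and let \(\mu \) be the standard Gaussian measure. By Parseval's identity the partial sums \(\sum _k [\mu (\varphi _k)]^2\) converge to the squared \({{\mathrm {L}}} ^2\)-norm of the Gaussian density, \(1/(2\sqrt{\pi })\), so the criterion is satisfied.

```python
import math
import numpy as np
from scipy.special import eval_hermite
from scipy.integrate import quad

def psi(k, x):
    # L^2(R)-orthonormal Hermite function psi_k (bounded and continuous)
    c = 1.0 / math.sqrt(2.0 ** k * math.factorial(k) * math.sqrt(math.pi))
    return c * np.exp(-x ** 2 / 2.0) * eval_hermite(k, x)

def mu_psi(k):
    # mu(psi_k) for mu = N(0, 1), computed by quadrature
    f = lambda x: psi(k, x) * math.exp(-x ** 2 / 2.0) / math.sqrt(2.0 * math.pi)
    return quad(f, -12.0, 12.0)[0]

partial_sums = np.cumsum([mu_psi(k) ** 2 for k in range(40)])
# Parseval: the sums approach ||density||_H^2 = 1/(2*sqrt(pi)) ~ 0.28209,
# so Lemma 4.1 certifies that mu has a square-integrable density.
print(partial_sums[-1], 1.0 / (2.0 * math.sqrt(math.pi)))
```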

Let \(\psi _\epsilon \) be the heat kernel, i.e., the function defined for each \(\epsilon > 0\) as

$$\begin{aligned} \psi _\epsilon (x) {{:}{=}} \frac{1}{(2\pi \epsilon )^{m/2}} {{\mathrm {e}}} ^{-\frac{\Vert {x}\Vert ^2}{2\epsilon }}, \quad x \in \mathbb {R}^m, \end{aligned}$$

and for any Borel-measurable and bounded f and \(\epsilon > 0\) define the operator

$$\begin{aligned} T_\epsilon f(x) {{:}{=}} \int _{\mathbb {R}^m} \psi _\epsilon (x-y) \, f(y) \, {{\mathrm {d}}} y, \qquad x \in \mathbb {R}^m. \end{aligned}$$

We also define the operator \(T_\epsilon :{{\mathcal {M}}} (\mathbb {R}^m) \rightarrow \mathcal M(\mathbb R^m)\) given by

$$\begin{aligned} T_\epsilon \mu (f) {{:}{=}} \mu (T_\epsilon f) = \int _{\mathbb {R}^m} f(y) \underbrace{\int _{\mathbb {R}^m} \psi _\epsilon (x-y) \, \mu ({{\mathrm {d}}} x)}_{{{:}{=}} T_\epsilon \mu (y)} {{\mathrm {d}}} y = \int _{\mathbb {R}^m} f(y) \, T_\epsilon \mu (y) \, {{\mathrm {d}}} y. \end{aligned}$$

The equalities above imply that, for any \(\mu \in {{\mathcal {M}}} (\mathbb {R}^m)\), the measure \(T_\epsilon \mu \) always possesses a density with respect to Lebesgue measure, which we will still denote by \(T_\epsilon \mu \).
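Numerically, applying \(T_\epsilon \) to a measure is a Gaussian kernel smoothing with bandwidth \(\sqrt{\epsilon }\). The following sketch (illustrative only; the atomic representation of \(\mu \) is our assumption, natural, e.g., for a particle approximation of \(\rho _t\)) evaluates the density \(T_\epsilon \mu \) at a set of query points.

```python
import numpy as np

def smoothed_density(atoms, weights, eps, queries):
    """Density y -> T_eps mu(y) of the smoothed measure, for an atomic
    measure mu = sum_i weights[i] * delta_{atoms[i]} on R^m.

    atoms   : (n, m) array of atom locations
    weights : (n,) array of nonnegative masses
    queries : (q, m) array of evaluation points
    """
    m = atoms.shape[1]
    diff = atoms[:, None, :] - queries[None, :, :]   # (n, q, m)
    sq_norm = np.sum(diff ** 2, axis=-1)             # (n, q)
    psi = np.exp(-sq_norm / (2.0 * eps)) / (2.0 * np.pi * eps) ** (m / 2.0)
    return weights @ psi                             # (q,)
```

For \(\mu = \sum _i w_i \delta _{x_i}\) this computes \(T_\epsilon \mu (y) = \sum _i w_i \psi _\epsilon (x_i - y)\), which is how quantities such as \(\Vert {T_\epsilon |{\zeta }|_t}\Vert _H\) below could be approximated in practice.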

Remark 4.1

It is important to notice that, by [2], Exercise 7.3, point ii., \(T_\epsilon \mu \in W^2_k(\mathbb {R}^m)\), for any \(\mu \in {{\mathcal {M}}} (\mathbb {R}^m)\), \(\epsilon > 0\), and \(k \in \mathbb {N}\).

Further properties of these operators that will be used in the sequel are listed in the following lemma (for its proof see, e.g., [2], Solution to Exercise 7.3, and [42], Lemma 6.7 and Lemma 6.8).

Lemma 4.2

For any \(\mu \in {{\mathcal {M}}} (\mathbb {R}^m)\), \(h \in H\), and \(\epsilon > 0\) we have that:

  (i) \(\Vert {T_{2\epsilon } |\mu |}\Vert _H \le \Vert {T_{\epsilon } |\mu |}\Vert _H\), where \(|\mu |\) denotes the total variation measure of \(\mu \);

  (ii) \(\Vert {T_\epsilon h}\Vert _H \le \Vert {h}\Vert _H\);

  (iii) \(\langle T_\epsilon \mu , h \rangle = \mu (T_\epsilon h)\);

  (iv) if, in addition, \(\partial _i h \in H\), \(i = 1, \dots , m\), then \(\partial _i T_\epsilon h = T_\epsilon \partial _i h\) (with the partial derivative understood in the weak sense);

  (v) if \(\varphi \in {{\mathrm {C}}} ^1_b(\mathbb {R}^m)\), then \(\partial _i T_\epsilon \varphi = T_\epsilon (\partial _i \varphi )\).

In this section we work under the following hypotheses on the coefficients b, \(\sigma \), and h appearing in the SDEs (2.8) and (2.13), in addition to Assumptions 2.1 and 2.2. In what follows we use the shorter notation

$$\begin{aligned} a(t,x) {{:}{=}} \frac{1}{2} \sigma \sigma ^*(t,x), \quad t \in [0,T], \, x \in \mathbb {R}^m. \end{aligned}$$
(4.1)

Assumption 4.1

There exist constants \(K_b\), \(K_\sigma \), \(K_h\), such that, for all \(i,j = 1, \dots ,m\), all \(\ell = 1, \dots , n\), all \(t \in [0,T]\), and all \(x \in \mathbb {R}^m\),

$$\begin{aligned} |{b_i(t,x)}| \le K_b, \quad |{a_{ij}(t,x)}| \le K_\sigma , \quad |{h_\ell (t,x)}| \le K_h. \end{aligned}$$

In Sect. 4.1, we obtain the uniqueness result for the solution to the Zakai equation when the process \(\nu \) has continuous paths. This will then be exploited in Sect. 4.2 to obtain the uniqueness claim when \(\nu \) has jump times that do not accumulate over [0, T].

4.1 The Case in Which \(\nu \) has Continuous Paths

We start our analysis with the following lemma, which plays a fundamental role in the sequel. Its proof can be found in Appendix A.

Lemma 4.3

Suppose that Assumption 4.1 holds. Let \(\zeta = (\zeta _t)_{t \in [0,T]}\) be a \({{\mathbb {Y}}} \)-adapted, càdlàg, \({{\mathcal {M}}} _+(\mathbb {R}^m)\)-valued solution of (3.8), with \(\zeta _{0^-} = \xi \in {{\mathcal {P}}} (\mathbb {R}^m)\). If \(\nu \) is continuous, then for any \(\epsilon > 0\)

$$\begin{aligned} {{\mathbb {E}}} [\sup _{t \in [0,T]} \Vert {T_\epsilon \zeta _{t^-}}\Vert _H^2] < +\infty . \end{aligned}$$

The next result is a useful estimate.

Proposition 4.4

Suppose that Assumption 4.1 holds. Let \(\zeta = (\zeta _t)_{t \in [0,T]}\) be a \({{\mathbb {Y}}} \)-adapted, càdlàg, \({{\mathcal {M}}} (\mathbb {R}^m)\)-valued solution of (3.8), with \(\zeta _{0^-} = \xi \in {{\mathcal {P}}} (\mathbb {R}^m)\). Define the process

$$\begin{aligned} A_t {{:}{=}} t + \sum _{i=1}^m |{\nu ^i}|_t, \quad t \in [0,T]. \end{aligned}$$
(4.2)

If \(\nu \) is continuous and if, for any \(\epsilon > 0\), \({{\mathbb {E}}} [\sup _{t \in [0,T]} \Vert {T_\epsilon |{\zeta }|_{t^-}}\Vert _H^2] < +\infty \), then there exists a constant \(M>0\) such that, for each \(\epsilon > 0\) and all \({{\mathbb {F}}} \)-stopping times \(\tau \le t\), \(t \in [0,T]\),

$$\begin{aligned} {{\mathbb {E}}} [\Vert {T_\epsilon \zeta _{\tau ^-}}\Vert ^2_H] \le \Vert {T_\epsilon \zeta _{0^-}}\Vert _H^2 + M \int _0^{\tau ^-} {{\mathbb {E}}} [\Vert {T_\epsilon |{\zeta }|_{s^-}}\Vert ^2_H] \, {{\mathrm {d}}} A_s. \end{aligned}$$
(4.3)

Proof

To ease notation, for any \(\epsilon > 0\) denote by \(Z^\epsilon \) the process \(Z^\epsilon _t {{:}{=}} T_\epsilon \zeta _t\), \(t \ge 0\). Fix \(\epsilon > 0\) and consider an orthonormal basis \(\{\varphi _k\}_{k \in \mathbb {N}}\) of H such that \(\varphi _k \in {{\mathrm {C}}} _b^2(\mathbb {R}^m)\) for any \(k \in \mathbb {N}\). Writing the Zakai equation for the function \(T_\epsilon \varphi _k\) (recall that \(\nu \) is continuous by assumption) we get:

$$\begin{aligned} \zeta _t(T_\epsilon \varphi _k)&= \xi (T_\epsilon \varphi _k) + \int _0^t \zeta _s\bigl ({{\mathcal {A}}} _s T_\epsilon \varphi _k\bigr ) \, {{\mathrm {d}}} s + \int _0^t \zeta _{s^-}\bigl ({{\mathrm {D}}} _x T_\epsilon \varphi _k\bigr ) \, {{\mathrm {d}}} \nu _s \nonumber \\&\quad + \int _0^t \gamma ^{-1}(s) \zeta _s(T_\epsilon \varphi _k h_s) \, {{\mathrm {d}}} {\overline{B}}_s, \end{aligned}$$
(4.4)

for all \(t \in [0,T]\). Notice that, for any \(\varphi \in {{\mathrm {C}}} ^2_b(\mathbb {R}^m)\) and any \(t \in [0,T]\), we can write:

$$\begin{aligned} {{\mathcal {A}}} _t \varphi (x) = \sum _{i=1}^m b_i(t,x) \partial _i \varphi (x) + \sum _{i,j=1}^m a_{ij}(t,x) \partial _{ij} \varphi (x), \quad x \in \mathbb {R}^m, \end{aligned}$$

where a is the function defined in (4.1). For any \(i,j = 1, \dots , m\), \(\ell = 1, \dots , n\), and \(t \in [0,T]\), we define the random measures on \(\mathbb {R}^m\):

$$\begin{aligned} b^i_t\zeta _t({{\mathrm {d}}} x)&{{:}{=}} b_i(t,x) \zeta _t({{\mathrm {d}}} x), \quad a^{ij}_t\zeta _t({{\mathrm {d}}} x) {{:}{=}} a_{ij}(t,x) \zeta _t({{\mathrm {d}}} x),\\ \gamma h^\ell _t\zeta _t({{\mathrm {d}}} x)&{{:}{=}} \sum _{p=1}^n \gamma ^{-1}_{\ell p}(t) h_p(t,x) \zeta _t({{\mathrm {d}}} x). \end{aligned}$$

These measures are \({{\mathbb {P}}} \)-almost surely finite, for any \(t \in [0,T]\), thanks to Assumption 4.1 and to (A.3) (see also (A.19) for the last measure).

Applying Lemma 4.2 and the integration by parts formula we get:

$$\begin{aligned} \zeta _t\bigl ({{\mathcal {A}}} _t T_\epsilon \varphi _k\bigr )&= \sum _{i=1}^m \int _{\mathbb {R}^m} b_i(t,x) \partial _i T_\epsilon \varphi _k(x) \, \zeta _t({{\mathrm {d}}} x) + \sum _{i,j=1}^m \int _{\mathbb {R}^m} a_{ij}(t,x) \partial _{ij} T_\epsilon \varphi _k(x) \, \zeta _t({{\mathrm {d}}} x) \\&= \sum _{i=1}^m \int _{\mathbb {R}^m} b_i(t,x) T_\epsilon \partial _i \varphi _k(x) \, \zeta _t({{\mathrm {d}}} x) + \sum _{i,j=1}^m \int _{\mathbb {R}^m} a_{ij}(t,x) T_\epsilon \partial _{ij} \varphi _k(x) \, \zeta _t({{\mathrm {d}}} x) \\&= \sum _{i=1}^m b^i_t\zeta _t(T_\epsilon \partial _i \varphi _k) + \sum _{i,j=1}^m a^{ij}_t\zeta _t(T_\epsilon \partial _{ij} \varphi _k) \\&= \sum _{i=1}^m \langle {T_\epsilon (b^i_t\zeta _t), \partial _i \varphi _k}\rangle + \sum _{i,j=1}^m \langle {T_\epsilon (a^{ij}_t\zeta _t), \partial _{ij} \varphi _k}\rangle \\&= \sum _{i,j=1}^m \langle \varphi _k, \partial _{ij} T_\epsilon (a^{ij}_t\zeta _t) \rangle - \sum _{i=1}^m \langle \varphi _k, \partial _i T_\epsilon (b^i_t\zeta _t)\rangle . \end{aligned}$$

In a similar way, we obtain \(\zeta _t\bigl (\partial _i T_\epsilon \varphi _k\bigr ) = - \langle \varphi _k, \partial _i T_\epsilon \zeta _t \rangle \), and

$$\begin{aligned} \sum _{j=1}^n \gamma ^{-1}_{ij}(t) \zeta _t\bigl (T_\epsilon \varphi _k h^j_t\bigr ) = \langle \varphi _k, T_\epsilon (\gamma h^i_t \zeta _t) \rangle , \quad i = 1, \dots , n. \end{aligned}$$

Putting together all these facts, we can rewrite (4.4) as

$$\begin{aligned} \langle \varphi _k, Z_t^\epsilon \rangle&= \langle \varphi _k, Z_{0^-}^\epsilon \rangle + \sum _{i,j=1}^m \int _0^t \langle \varphi _k, \partial _{ij} T_\epsilon (a^{ij}_s\zeta _s) \rangle \, {{\mathrm {d}}} s - \sum _{i=1}^m \int _0^t \langle \varphi _k, \partial _i T_\epsilon (b^i_s\zeta _s)\rangle \, {{\mathrm {d}}} s \\&\quad - \sum _{i=1}^m \int _0^t \langle \varphi _k, \partial _i T_\epsilon \zeta _{s^-} \rangle \, {{\mathrm {d}}} \nu ^i_s + \sum _{i= 1}^n \int _0^t \langle \varphi _k, T_\epsilon (\gamma h^i_s \zeta _s) \rangle \, {{\mathrm {d}}} {\overline{B}}_s^i, \quad {{\mathbb {P}}} \text {-a.s.}, \quad t \in [0,T]. \end{aligned}$$

Applying Itô’s formula we get that, for all \(t \in [0,T]\), \({{\mathbb {P}}} \)-a.s.,

$$\begin{aligned} \langle \varphi _k, Z_t^\epsilon \rangle ^2&= \langle \varphi _k, Z_{0^-}^\epsilon \rangle ^2 + \sum _{i,j=1}^m \int _0^t 2\langle \varphi _k, Z_s^\epsilon \rangle \, \langle \varphi _k, \partial _{ij} T_\epsilon (a^{ij}_s\zeta _s) \rangle \, {{\mathrm {d}}} s\\&\quad - \sum _{i=1}^m \int _0^t 2 \langle \varphi _k, Z_s^\epsilon \rangle \langle \varphi _k, \partial _i T_\epsilon (b^i_s\zeta _s)\rangle \, {{\mathrm {d}}} s + \sum _{i=1}^n \int _0^t \langle \varphi _k, T_\epsilon (\gamma h^i_s \zeta _s) \rangle ^2 \, {{\mathrm {d}}} s \\&\quad - \sum _{i=1}^m \int _0^t 2 \langle \varphi _k, Z_{s^-}^\epsilon \rangle \langle \varphi _k, \partial _i T_\epsilon \zeta _{s^-} \rangle \, {{\mathrm {d}}} \nu ^i_s + \sum _{i=1}^n \int _0^t 2 \langle \varphi _k, Z_s^\epsilon \rangle \langle \varphi _k, T_\epsilon (\gamma h^i_s \zeta _s) \rangle \, {{\mathrm {d}}} {\overline{B}}_s^i. \end{aligned}$$

Using Assumption 4.1 and (A.3), it is possible to show that the stochastic integral with respect to the Brownian motion \({\overline{B}}\) is a \({{\mathbb {P}}} \)-martingale. By the optional sampling theorem, this stochastic integral has zero expectation even when evaluated at a bounded stopping time. Therefore, picking an \({{\mathbb {F}}} \)-stopping time \(\tau \le t\) for arbitrary \(t \in [0,T]\), summing over k up to \(N \in \mathbb {N}\), and taking expectations, by Fatou’s lemma we have that

$$\begin{aligned} {{\mathbb {E}}} \left[ \Vert {Z_{\tau ^-}^\epsilon }\Vert _H^2 \right]&= {{\mathbb {E}}} \biggl [\lim _{N \rightarrow \infty } \sum _{k=1}^N \langle \varphi _k, Z_{\tau ^-}^\epsilon \rangle ^2 \biggr ] \le \liminf _{N \rightarrow \infty } {{\mathbb {E}}} \biggl [\sum _{k=1}^N \langle \varphi _k, Z_{\tau ^-}^\epsilon \rangle ^2 \biggr ] \le \Vert {Z_{0^-}^\epsilon }\Vert _H^2 \nonumber \\&\quad + \liminf _{N \rightarrow \infty } \Biggl \{\sum _{i,j=1}^m {{\mathbb {E}}} \biggl [\int _0^{\tau ^-} \sum _{k=1}^N 2\langle \varphi _k, Z_s^\epsilon \rangle \, \langle \varphi _k, \partial _{ij} T_\epsilon (a^{ij}_s\zeta _s) \rangle \, {{\mathrm {d}}} s\biggr ] \nonumber \\&\quad - \sum _{i=1}^m {{\mathbb {E}}} \biggl [\int _0^{\tau ^-} \sum _{k=1}^N 2 \langle \varphi _k, Z_s^\epsilon \rangle \langle \varphi _k, \partial _i T_\epsilon (b^i_s\zeta _s)\rangle \, {{\mathrm {d}}} s\biggr ] \nonumber \\&\quad + \sum _{i=1}^n {{\mathbb {E}}} \biggl [\int _0^{\tau ^-} \sum _{k=1}^N \langle \varphi _k, T_\epsilon (\gamma h^i_s \zeta _s) \rangle ^2 \, {{\mathrm {d}}} s\biggr ] \nonumber \\&\quad - \sum _{i=1}^m {{\mathbb {E}}} \biggl [\int _0^{\tau ^-} \sum _{k=1}^N 2 \langle \varphi _k, Z_{s^-}^\epsilon \rangle \langle \varphi _k, \partial _i T_\epsilon \zeta _{s^-} \rangle \, {{\mathrm {d}}} \nu ^i_s\biggr ]\Biggr \}, \end{aligned}$$
(4.5)

where we used the fact that, since \(Z_{0^-}^\epsilon \in H\), \(\lim \limits _{N \rightarrow \infty } \sum \limits _{k=1}^N \langle \varphi _k, Z_{0^-}^\epsilon \rangle ^2 = \Vert {Z_{0^-}^\epsilon }\Vert _H^2\). More generally, since \(Z_t^\epsilon \in H\), for all \(t \in [0,T]\), \({{\mathbb {P}}} \)-a.s. (cf. Remark 4.1), we have that

$$\begin{aligned} \sum _{k=1}^N \langle \varphi _k, Z_t^\epsilon \rangle ^2 \le \sum _{k=1}^\infty \langle \varphi _k, Z_t^\epsilon \rangle ^2 = \Vert {Z_t^\epsilon }\Vert _H^2, \quad t \in [0,T]. \end{aligned}$$
(4.6)

We now want to estimate the quantities appearing inside the limit inferior, in order to exchange the limit and the integrals in (4.5). First of all, notice that, thanks to Assumption 4.1, the following estimates hold \({{\mathbb {P}}} \)-a.s., for all \(i,j = 1, \dots , m\), all \(\ell = 1, \dots , n\), and all \(t \in [0,T]\):

$$\begin{aligned}&\Vert {\partial _{ij} T_\epsilon (a^{ij}_t\zeta _t)}\Vert _H^2 \le K_1 \Vert {T_\epsilon |{\zeta }|_t}\Vert _H^2,&\Vert {\partial _i T_\epsilon (b^i_t\zeta _t)}\Vert _H^2 \le K_2 \Vert {T_\epsilon |{\zeta }|_t}\Vert _H^2, \\&\Vert {T_\epsilon (\gamma h^\ell _t \zeta _t)}\Vert _H^2 \le K_3 \Vert {T_\epsilon |{\zeta }|_t}\Vert _H^2,&\Vert {\partial _i T_\epsilon \zeta _t}\Vert _H^2 \le K_4 \Vert {T_\epsilon |{\zeta }|_t}\Vert _H^2, \end{aligned}$$

where \(K_1 = K_1(\epsilon , m, \sigma )\), \(K_2 = K_2(\epsilon , m, b)\), \(K_3 = K_3(n, h, \gamma )\), \(K_4 = K_4(\epsilon , m)\). They can be proved by a reasoning analogous to that of [2], Lemma 7.5 (see also [42], Chapter 6).

Recalling that \(2|ab| \le a^2 + b^2\), for all \(a,b \in \mathbb {R}\), using the estimates provided above, Lemma 4.2, and (4.6), we get that, for all \(N \in \mathbb {N}\), all \(i,j = 1, \dots , m\), and all \(s \in [0,T]\),

$$\begin{aligned} \mathbf {1}_{s < \tau } \sum _{k=1}^N 2\langle \varphi _k, Z_s^\epsilon \rangle \, \langle \varphi _k, \partial _{ij} T_\epsilon (a^{ij}_s\zeta _s) \rangle&\le \sum _{k=1}^N \langle \varphi _k, Z_s^\epsilon \rangle ^2 + \sum _{k=1}^N \langle \varphi _k, \partial _{ij} T_\epsilon (a^{ij}_s\zeta _s) \rangle ^2 \\&\le \Vert {Z_s^\epsilon }\Vert _H^2 + \Vert {\partial _{ij} T_\epsilon (a^{ij}_s\zeta _s)}\Vert _H^2 \\ {}&\le (1+K_1) \Vert {T_{\epsilon /2} |{\zeta }|_s}\Vert _H^2. \end{aligned}$$

With analogous computations, we get, for all \(i = 1, \dots , m\), all \(N \in \mathbb {N}\), and all \(s \in [0,T]\),

$$\begin{aligned} \mathbf {1}_{s< \tau } \sum _{k=1}^N 2 \langle \varphi _k, Z_s^\epsilon \rangle \langle \varphi _k, \partial _i T_\epsilon (b^i_s\zeta _s)\rangle \le (1+K_2) \Vert {T_{\epsilon /2} |{\zeta }|_s}\Vert _H^2, \\ \mathbf {1}_{s < \tau } \sum _{k=1}^N 2 \langle \varphi _k, Z_{s^-}^\epsilon \rangle \langle \varphi _k, \partial _i T_\epsilon \zeta _{s^-} \rangle \le (1+K_4) \Vert {T_{\epsilon /2} |{\zeta }|_s}\Vert _H^2, \end{aligned}$$

and, for all \(N \in \mathbb {N}\) and all \(s \in [0,T]\),

$$\begin{aligned} \sum _{i=1}^n \mathbf {1}_{s < \tau } \sum _{k=1}^N \langle \varphi _k, T_\epsilon (\gamma h^i_s \zeta _s) \rangle ^2 \le n K_3 \Vert {T_\epsilon |{\zeta }|_s}\Vert _H^2. \end{aligned}$$

The terms appearing on the r.h.s. of these estimates are \({{\mathrm {d}}} t \otimes {{\mathrm {d}}} {{\mathbb {P}}} \)- and \({{\mathrm {d}}} |{\nu ^i}|_t \otimes {{\mathrm {d}}} {{\mathbb {P}}} \)-integrable on \([0,T] \times \Omega \), for all \(i = 1,\dots ,m\), since, for any \(\epsilon > 0\),

$$\begin{aligned} {{\mathbb {E}}} \left[ \int _0^T \Vert {T_\epsilon |{\zeta }|_s}\Vert _H^2 \, {{\mathrm {d}}} s \right]&\le T {{\mathbb {E}}} [\sup _{s \in [0,T]} \Vert {T_\epsilon |{\zeta }|_s}\Vert _H^2]< +\infty , \\ {{\mathbb {E}}} \left[ \int _0^T \Vert {T_\epsilon |{\zeta }|_s}\Vert _H^2 \, {{\mathrm {d}}} |{\nu ^i}|_s \right]&\le K {{\mathbb {E}}} [\sup _{s \in [0,T]} \Vert {T_\epsilon |{\zeta }|_s}\Vert _H^2] < +\infty . \end{aligned}$$

Therefore, by the dominated convergence theorem, we can pass to the limit in (4.5), as \(N \rightarrow \infty \), obtaining

$$\begin{aligned} {{\mathbb {E}}} \left[ \Vert {Z_{\tau ^-}^\epsilon }\Vert _H^2 \right]&\le \Vert {Z_{0^-}^\epsilon }\Vert _H^2 + \sum _{i,j=1}^m {{\mathbb {E}}} \biggl [\int _0^{\tau ^-} \! 2\langle Z_s^\epsilon , \partial _{ij} T_\epsilon (a^{ij}_s\zeta _s) \rangle \, {{\mathrm {d}}} s\biggr ] \nonumber \\&\quad - \sum _{i=1}^m {{\mathbb {E}}} \biggl [\int _0^{\tau ^-} \! 2 \langle Z_s^\epsilon , \partial _i T_\epsilon (b^i_s\zeta _s)\rangle \, {{\mathrm {d}}} s\biggr ] \nonumber \\&\quad + \sum _{i=1}^n {{\mathbb {E}}} \biggl [\int _0^{\tau ^-} \Vert {T_\epsilon (\gamma h^i_s \zeta _s)}\Vert _H^2 \, {{\mathrm {d}}} s\biggr ] - \sum _{i=1}^m {{\mathbb {E}}} \biggl [\int _0^{\tau ^-} 2 \langle Z_{s^-}^\epsilon , \partial _i T_\epsilon \zeta _{s^-} \rangle \, {{\mathrm {d}}} \nu ^i_s\biggr ]. \end{aligned}$$
(4.7)

We finally get the claim by bounding the terms on the r.h.s. of (4.7) as follows: for the second one, apply [42], Lemma 6.11; for the third and the last one, apply [42], Lemma 6.10; for the fourth one, use the fact that the constant \(K_3\) above does not depend on \(\epsilon \). \(\square \)

Proposition 4.4 allows us to deduce that any \({{\mathcal {M}}} _+(\mathbb {R}^m)\)-valued solution of the Zakai equation (3.8) admits a density with respect to Lebesgue measure.

Proposition 4.5

Suppose that Assumption 4.1 holds. Let \(\zeta = (\zeta _t)_{t \in [0,T]}\) be a \({{\mathbb {Y}}} \)-adapted, càdlàg, \({{\mathcal {M}}} _+(\mathbb {R}^m)\)-valued solution of (3.8), with \(\zeta _{0^-} = \xi \in {{\mathcal {P}}} (\mathbb {R}^m)\). If \(\nu \) is continuous and if \(\xi \) admits a square-integrable density with respect to Lebesgue measure on \(\mathbb {R}^m\), then there exists an H-valued process \(Z = (Z_t)_{t \in [0,T]}\) such that, for all \(t \in [0,T]\),

$$\begin{aligned} \zeta _t({{\mathrm {d}}} x) = Z_t(x) {{\mathrm {d}}} x, \quad {{\mathbb {P}}} \text {-a.s.} \end{aligned}$$

Moreover, Z is \({{\mathbb {Y}}} \)-adapted, continuous, and satisfies \({{\mathbb {E}}} [\Vert {Z_t}\Vert _H^2] < +\infty \), for all \(t \in [0,T]\).

Proof

As a consequence of Lemma 4.3, the assumptions of Proposition 4.4 hold and we have that for each \(\epsilon > 0\) and all \({{\mathbb {F}}} \)-stopping times \(\tau \le t\), \(t \in [0,T]\),

$$\begin{aligned} {{\mathbb {E}}} [\Vert {T_\epsilon \zeta _{\tau ^-}}\Vert ^2_H] \le \Vert {T_\epsilon \zeta _{0^-}}\Vert _H^2 + M \int _0^{\tau ^-} {{\mathbb {E}}} [\Vert {T_\epsilon \zeta _{s^-}}\Vert ^2_H] \, {{\mathrm {d}}} A_s. \end{aligned}$$

Therefore, we can apply Lemma A.1 and get that, for all \(t \in [0,T]\),

$$\begin{aligned} {{\mathbb {E}}} [\Vert {T_\epsilon \zeta _{t^-}}\Vert ^2_H] = {{\mathbb {E}}} [\Vert {T_\epsilon \zeta _t}\Vert ^2_H] \le \Vert {T_\epsilon \zeta _{0^-}}\Vert _H^2 {{\mathrm {e}}} ^{M(T+mK)}, \end{aligned}$$
(4.8)

where we used the fact that \(\zeta \) is continuous, since \(\nu \) is, and that \(A_t \le A_T \le T+mK\), for all \(t \in [0,T]\). Notice that, denoting by \(Z_{0^-}\) the density of \(\xi \) with respect to Lebesgue measure on \(\mathbb {R}^m\),

$$\begin{aligned} T_\epsilon \zeta _{0^-}(y) = \int _{\mathbb {R}^m} \psi _{\epsilon }(x-y) \, \xi ({{\mathrm {d}}} x) = \int _{\mathbb {R}^m} \psi _{\epsilon }(x-y) \, Z_{0^-}(x) \, {{\mathrm {d}}} x = T_\epsilon Z_{0^-}(y), \quad y \in \mathbb {R}^m. \end{aligned}$$

By point ii. of Lemma 4.2 and since the constants appearing in (4.8) do not depend on \(\epsilon \), we get

$$\begin{aligned} \sup _{\epsilon > 0} {{\mathbb {E}}} [\Vert {T_\epsilon \zeta _t}\Vert ^2_H] \le \Vert {Z_{0^-}}\Vert _H^2 {{\mathrm {e}}} ^{M(T+mK)}, \quad t \in [0,T]. \end{aligned}$$

Taking, as in the proof of Proposition 4.4, an orthonormal basis \(\{\varphi _k\}_{k \in \mathbb {N}}\) of H such that \(\varphi _k \in {{\mathrm {C}}} _b^2(\mathbb {R}^m)\), for any \(k \in \mathbb {N}\), the dominated convergence theorem entails that, for all \(k \in \mathbb {N}\),

$$\begin{aligned} \lim _{\epsilon \rightarrow 0} \langle T_\epsilon \zeta _t, \varphi _k \rangle = \lim _{\epsilon \rightarrow 0} \int _{\mathbb {R}^m} \left\{ \int _{\mathbb {R}^m} \psi _\epsilon (x-y) \varphi _k(y) \, {{\mathrm {d}}} y \right\} \zeta _t({{\mathrm {d}}} x) = \int _{\mathbb {R}^m} \varphi _k(x) \, \zeta _t({{\mathrm {d}}} x) = \zeta _t(\varphi _k). \end{aligned}$$

Applying Fatou’s lemma we get that, for all \(t \in [0,T]\),

$$\begin{aligned} {{\mathbb {E}}} \left[ \sum _{k=1}^\infty \zeta _t(\varphi _k)^2 \right]&= {{\mathbb {E}}} \left[ \sum _{k=1}^\infty \lim _{\epsilon \rightarrow 0} \langle T_\epsilon \zeta _t, \varphi _k \rangle ^2 \right] \le \liminf _{\epsilon \rightarrow 0} {{\mathbb {E}}} \left[ \sum _{k=1}^\infty \langle T_\epsilon \zeta _t, \varphi _k \rangle ^2 \right] \nonumber \\&\le \sup _{\epsilon > 0} {{\mathbb {E}}} [\Vert {T_\epsilon \zeta _t}\Vert _H^2] \le \Vert {Z_{0^-}}\Vert _H^2 {{\mathrm {e}}} ^{M(T+mK)} < +\infty , \end{aligned}$$
(4.9)

and hence, from Lemma 4.1 we deduce that, \({{\mathbb {P}}} \)-a.s., \(\zeta _t\) is absolutely continuous with respect to Lebesgue measure on \(\mathbb {R}^m\), for all \(t \in [0,T]\). Moreover, its density process \(Z = (Z_t)_{t \in [0,T]}\) takes values in H and, by standard results, is \({{\mathbb {Y}}} \)-adapted and continuous (because \(\nu \) is).

Finally, since \(\zeta _t(\varphi _k) = \int _{\mathbb {R}^m} \varphi _k(x) Z_t(x) \, {{\mathrm {d}}} x = \langle \varphi _k, Z_t \rangle \), for all \(k \in \mathbb {N}\), and all \(t \in [0,T]\), we get

$$\begin{aligned} {{\mathbb {E}}} [\Vert {Z_t}\Vert _H^2] = {{\mathbb {E}}} \left[ \sum _{k=1}^\infty \langle \varphi _k, Z_t \rangle ^2 \right] = {{\mathbb {E}}} \left[ \sum _{k=1}^\infty \zeta _t(\varphi _k)^2 \right] < +\infty , \quad t \in [0,T]. \end{aligned}$$

\(\square \)

We are now ready to state our first uniqueness result for the solution to the Zakai equation, in the case where \(\nu \) is continuous.

Theorem 4.6

Suppose that Assumptions 2.1, 2.2, 4.1, and (3.7) hold. If \(\nu \) is continuous and if \(\xi \in {{\mathcal {P}}} (\mathbb {R}^m)\) admits a square-integrable density with respect to Lebesgue measure on \(\mathbb {R}^m\), then the unnormalized filtering process \(\rho \), defined in (3.3), is the unique \({{\mathbb {Y}}} \)-adapted, continuous, \({{\mathcal {M}}} _+(\mathbb {R}^m)\)-valued solution to the Zakai equation (3.8).

Moreover, there exists a \({{\mathbb {Y}}} \)-adapted, continuous, H-valued process \(p = (p_t)_{t \in [0,T]}\) satisfying, for all \(t \in [0,T]\), \({{\mathbb {E}}} [\Vert {p_t}\Vert _H^2] < +\infty \) and \(\rho _t({{\mathrm {d}}} x) = p_t(x) {{\mathrm {d}}} x\), \({{\mathbb {P}}} \)-a.s.

Proof

Clearly, the unnormalized filtering process \(\rho \), defined in (3.3), is a \({{\mathbb {Y}}} \)-adapted, continuous (since \(\nu \) is), \({{\mathcal {M}}} _+(\mathbb {R}^m)\)-valued solution to (3.8). Therefore, the second part of the statement follows directly from Proposition 4.5.

Uniqueness can be established as follows. Let \(\zeta ^{(1)}, \zeta ^{(2)}\) be two \({{\mathbb {Y}}} \)-adapted, càdlàg, \({{\mathcal {M}}} _+(\mathbb {R}^m)\)-valued solutions to (3.8). Define \(\zeta {{:}{=}} \zeta ^{(1)}-\zeta ^{(2)} \in {{\mathcal {M}}} (\mathbb {R}^m)\) and let \(Z {{:}{=}} Z^{(1)} - Z^{(2)} \in H\) be its density process, where \(Z^{(1)}\) and \(Z^{(2)}\) are the density processes of \(\zeta ^{(1)}\) and \(\zeta ^{(2)}\), respectively, which exist thanks to Proposition 4.5.

Standard facts from measure theory show that, for all non-negative, bounded, measurable functions \(\varphi :\mathbb {R}^m \rightarrow \mathbb {R}\) and all \(t \in [0,T]\), \(|{\zeta }|_t(\varphi ) \le \zeta ^{(1)}_t(\varphi ) + \zeta ^{(2)}_t(\varphi )\). From this fact, we have that, for all \(y \in \mathbb {R}^m\) and all \(t \in [0,T]\),

$$\begin{aligned} T_\epsilon |{\zeta }|_t(y) = |{\zeta }|_t(\psi _\epsilon (\cdot -y)) \le \zeta ^{(1)}_t(\psi _\epsilon (\cdot -y)) + \zeta ^{(2)}_t(\psi _\epsilon (\cdot -y)) = T_\epsilon \zeta ^{(1)}_t(y) + T_\epsilon \zeta ^{(2)}_t(y), \end{aligned}$$

and hence, for all \(t \in [0,T]\),

$$\begin{aligned} \Vert {T_\epsilon |{\zeta }|_t}\Vert _H^2&= \int _{\mathbb {R}^m} |{T_\epsilon |{\zeta }|_t(y)}|^2 \, {{\mathrm {d}}} y \le 2 \int _{\mathbb {R}^m} |{T_\epsilon \zeta ^{(1)}_t(y)}|^2 \, {{\mathrm {d}}} y \\&\quad + 2 \int _{\mathbb {R}^m} |{T_\epsilon \zeta ^{(2)}_t(y)}|^2 \, {{\mathrm {d}}} y = 2\Vert {T_\epsilon \zeta ^{(1)}_t}\Vert _H^2 + 2\Vert {T_\epsilon \zeta ^{(2)}_t}\Vert _H^2. \end{aligned}$$

Thus, applying Lemma 4.3 we deduce that

$$\begin{aligned} {{\mathbb {E}}} [\sup _{t \in [0,T]} \Vert {T_\epsilon |{\zeta }|_{t^-}}\Vert _H^2] \le 2 {{\mathbb {E}}} [\sup _{t \in [0,T]} \Vert {T_\epsilon \zeta ^{(1)}_{t^-}}\Vert _H^2] + 2{{\mathbb {E}}} [\sup _{t \in [0,T]} \Vert {T_\epsilon \zeta ^{(2)}_{t^-}}\Vert _H^2] < +\infty . \end{aligned}$$

Therefore, from Proposition 4.4 we get that for all \(\epsilon > 0\) and all \({{\mathbb {F}}} \)-stopping times \(\tau \le t\), \(t \in [0,T]\),

$$\begin{aligned} {{\mathbb {E}}} [\Vert {T_\epsilon \zeta _{\tau ^-}}\Vert ^2_H] \le M \int _0^{\tau ^-} {{\mathbb {E}}} [\Vert {T_\epsilon |{\zeta }|_{s^-}}\Vert ^2_H] \, {{\mathrm {d}}} A_s, \end{aligned}$$
(4.10)

where A is defined in (4.2).

An application of the dominated convergence theorem shows that \(\Vert {T_\epsilon |{\zeta }|_{t^-}}\Vert _H^2 \longrightarrow \Vert {Z_{t^-}}\Vert ^2_H\), as \(\epsilon \rightarrow 0\), for all \(t \in [0,T]\), and hence we get, by Fatou’s lemma and (4.10),

$$\begin{aligned} {{\mathbb {E}}} [\Vert {Z_{\tau ^-}}\Vert ^2_H]&= {{\mathbb {E}}} [\lim _{\epsilon \rightarrow 0} \Vert {T_\epsilon \zeta _{\tau ^-}}\Vert ^2_H] \le \liminf _{\epsilon \rightarrow 0} {{\mathbb {E}}} [\Vert {T_\epsilon \zeta _{\tau ^-}}\Vert ^2_H] \\&\le \liminf _{\epsilon \rightarrow 0} M \int _0^{\tau ^-} {{\mathbb {E}}} [\Vert {T_\epsilon |{\zeta }|_{s^-}}\Vert ^2_H] \, {{\mathrm {d}}} A_s = M \int _0^{\tau ^-} {{\mathbb {E}}} [\Vert {Z_{s^-}}\Vert ^2_H] \, {{\mathrm {d}}} A_s. \end{aligned}$$

Finally, Proposition 4.5 ensures that

$$\begin{aligned} {{\mathbb {E}}} [\Vert {Z_{t^-}}\Vert ^2_H] \le 2 {{\mathbb {E}}} [\Vert {Z^{(1)}_{t^-}}\Vert ^2_H] + 2 {{\mathbb {E}}} [\Vert {Z^{(2)}_{t^-}}\Vert ^2_H] < +\infty , \quad \text {for all } t \in [0,T]. \end{aligned}$$

This allows us to use Lemma A.1 to get that, for all \(t \in [0,T]\), \({{\mathbb {E}}} [\Vert {Z_{t^-}}\Vert ^2_H] = {{\mathbb {E}}} [\Vert {Z_t}\Vert ^2_H] = 0\), whence we obtain \(\Vert {Z_t}\Vert ^2_H = 0\), \({{\mathbb {P}}} \)-a.s., and therefore uniqueness of the solution to the Zakai equation. \(\square \)

4.2 The Case in Which the Jump Times of \(\nu \) do not Accumulate

Exploiting the recursive structure of (3.19), we can prove uniqueness of the solution to the Zakai equation (3.8) also in the case where the jump times of \(\nu \) do not accumulate.

Theorem 4.7

Suppose that Assumptions 2.1, 2.2, 4.1, and (3.7) hold. If the jump times of \(\nu \) do not accumulate over [0, T] and if \(\xi \in {{\mathcal {P}}} (\mathbb {R}^m)\) admits a square-integrable density with respect to Lebesgue measure on \(\mathbb {R}^m\), then the unnormalized filtering process \(\rho \), defined in (3.3), is the unique \({{\mathbb {Y}}} \)-adapted, càdlàg, \({{\mathcal {M}}} _+(\mathbb {R}^m)\)-valued solution to the Zakai equation (3.8).

Moreover, there exists a \({{\mathbb {Y}}} \)-adapted, càdlàg, H-valued process \(p = (p_t)_{t \in [0,T]}\) satisfying, for all \(t \in [0,T]\), \({{\mathbb {E}}} [\Vert {p_t}\Vert _H^2] < +\infty \) and \(\rho _t({{\mathrm {d}}} x) = p_t(x) {{\mathrm {d}}} x\), \({{\mathbb {P}}} \)-a.s.

Proof

Let us denote by \(\rho \) the unnormalized filtering process associated with the initial law \(\xi \) and process \(\nu \), and by \(p_{0^-}\) the density of \(\xi \) with respect to Lebesgue measure on \(\mathbb {R}^m\). Let \(T_0 = 0\) and define the sequence of jump times of \(\nu \)

$$\begin{aligned} T_n {{:}{=}} \inf \{t > T_{n-1} :\Delta \nu _t \ne 0\}, \quad n \in \mathbb {N}, \end{aligned}$$

with the usual convention \(\inf \emptyset = +\infty \). Recall that also \(T_0\) can be a jump time of \(\nu \). Moreover, since the jump times of \(\nu \) do not accumulate over [0, T], we have that \(T_n \le T_{n+1}\), \({{\mathbb {P}}} \)-a.s., and \(T_n< +\infty \, \Longrightarrow T_n < T_{n+1}\), for all \(n \in \mathbb {N}_0\).

We start by noticing that the formula \(\rho _{T_n}(\varphi ) = \rho _{{T_n}^-}\bigl (\varphi _{T_n}(\cdot + \Delta \nu _{T_n})\bigr )\), \(n \in \mathbb {N}_0\), appearing in (3.19) holds for all \(\varphi \in {{\mathrm {C}}} _b(\mathbb {R}^m)\). Indeed, the continuity of the observation filtration \({{\mathbb {Y}}} \) implies that

$$\begin{aligned} \widetilde{{{\mathbb {E}}} }[\varphi (X_{T_n^-}) \mid {{\mathcal {Y}}} _{T_n}] = \widetilde{{{\mathbb {E}}} }[\varphi (X_{T_n^-}) \mid {{\mathcal {Y}}} _{T_n^-}] = \pi _{T_n^-}(\varphi ). \end{aligned}$$

Using the continuity of the process \(\eta \), the Kallianpur–Striebel formula (3.2), and the freezing lemma, we get

$$\begin{aligned} \rho _{T_n}(\varphi )&= \widetilde{{{\mathbb {E}}} }[\varphi (X_{T_n}) \mid {{\mathcal {Y}}} _{T_n}] \, {{\mathbb {E}}} \bigl [\eta _{T_n} \bigm | {{\mathcal {Y}}} \bigr ] = \widetilde{{{\mathbb {E}}} }[\varphi (X_{T_n^-} + \Delta \nu _{T_n}) \mid {{\mathcal {Y}}} _{T_n}] \, {{\mathbb {E}}} \bigl [\eta _{T_n^-} \bigm | {{\mathcal {Y}}} \bigr ] \\&= \pi _{T_n^-}(\varphi (\cdot + \Delta \nu _{T_n})) \, {{\mathbb {E}}} \bigl [\eta _{T_n^-} \bigm | {{\mathcal {Y}}} \bigr ] = \rho _{T_n^-}(\varphi (\cdot + \Delta \nu _{T_n})), \end{aligned}$$

for all \(\varphi \in {{\mathrm {C}}} _b(\mathbb {R}^m)\) and all \(n \in \mathbb {N}_0\). This, in turn, entails that if \(\rho _{T_n^-}\) admits a density \(p_{T_n^-}\) with respect to Lebesgue measure, then

$$\begin{aligned} \int _{\mathbb {R}^m} \varphi (x) \, \rho _{T_n}({{\mathrm {d}}} x)&= \rho _{T_n^-}(\varphi (\cdot + \Delta \nu _{T_n})) = \int _{\mathbb {R}^m}\varphi (x + \Delta \nu _{T_n}) p_{T_n^-}(x) \, {{\mathrm {d}}} x \\&= \int _{\mathbb {R}^m} \varphi (x) p_{T_n^-}(x - \Delta \nu _{T_n}) \, {{\mathrm {d}}} x. \end{aligned}$$

Therefore, since \({{\mathrm {C}}} _b(\mathbb {R}^m)\) is a separating set (see, e.g., [22], Chapter 3, Sect. 4), the measures \(\rho _{T_n}({{\mathrm {d}}} x)\) and \(p_{T_n^-}(x - \Delta \nu _{T_n}) \, {{\mathrm {d}}} x\) coincide, implying that \(\rho _{T_n}\) admits a density with respect to Lebesgue measure on \(\mathbb {R}^m\), given by \(p_{T_n^-}(\cdot - \Delta \nu _{T_n})\).
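In practice, if \(p_{T_n^-}\) is stored on a grid, the identity just derived amounts to a translation of the density by \(\Delta \nu _{T_n}\). A minimal sketch for \(m = 1\) (illustrative only; the grid representation is our assumption):

```python
import numpy as np

def jump_update(grid, p_minus, d_nu):
    """Density update at a jump time T_n of nu:
    p_{T_n}(x) = p_{T_n^-}(x - Delta nu_{T_n}),
    realized on a grid by linear interpolation (values outside
    the grid are set to zero)."""
    return np.interp(grid - d_nu, grid, p_minus, left=0.0, right=0.0)
```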

We can now use the recursive structure of (3.19) to get the claim. Define the process

$$\begin{aligned} \nu ^{(1)}_t {{:}{=}} \nu _t \mathbf {1}_{t < T_1} + \nu _{T_1^-} \mathbf {1}_{t \ge T_1}, \quad t \in [0,T], \end{aligned}$$

and the random measure \(\xi ^{(1)}({{\mathrm {d}}} x) {{:}{=}} p_{0^-}(x - \Delta \nu _0) \, {{\mathrm {d}}} x\), on \(\mathbb {R}^m\). Consider, for all \(\varphi \in {{\mathrm {C}}} ^2_b(\mathbb {R}^m)\), the Zakai equation

$$\begin{aligned} \rho ^{(1)}_t(\varphi )&= \xi ^{(1)}(\varphi ) + \int _0^t \rho ^{(1)}_s\bigl (\bigl [\partial _s + {{\mathcal {A}}} _s\bigr ] \varphi \bigr ) \, {{\mathrm {d}}} s \nonumber \\&\quad + \int _0^t \rho ^{(1)}_{s^-}\bigl ({{\mathrm {D}}} _x \varphi \bigr ) \, {{\mathrm {d}}} \nu ^{(1)}_s \nonumber \\&\quad + \int _0^t \gamma ^{-1}(s) \rho ^{(1)}_s(\varphi h_s) \, {{\mathrm {d}}} {\overline{B}}_s, \quad {{\mathbb {P}}} \text {-a.s.}, \quad t \in [0,T]. \end{aligned}$$
(4.11)

Since \(\nu ^{(1)}\) satisfies point (iv) of Assumption 2.1, we have that (4.11) is the Zakai equation for the filtering problem of the partially observed system (2.8)–(2.13), with initial law \(\xi ^{(1)}\) and process \(\nu ^{(1)}\), which is continuous on [0, T]. Therefore, by Theorem 4.6, \(\rho ^{(1)}\) is its unique solution and admits a density \(p^{(1)}\) with respect to Lebesgue measure on \(\mathbb {R}^m\), with \({{\mathbb {E}}} [\Vert {p^{(1)}_t}\Vert ^2_H] < +\infty \), for each \(t \in [0,T]\). It is clear that, since \(\nu _t = \nu _t^{(1)}\) on \(\{t < T_1\}\), we have that \(\rho _t = \rho _t^{(1)}\) on the same set, and hence \(\rho _t\) admits density \(p^{(1)}_t\) on \(\{t < T_1\}\).

Next, let us define the process

$$\begin{aligned} \nu ^{(2)}_t {{:}{=}} \nu _{t+T_1} \mathbf {1}_{t < T_2 - T_1} + \nu _{T_2^-} \mathbf {1}_{t \ge T_2 - T_1}, \quad t \in [0,T], \end{aligned}$$

and the random measure \(\xi ^{(2)}({{\mathrm {d}}} x) = p_{T_1^-}(x - \Delta \nu _{T_1}) \, {{\mathrm {d}}} x\), on \(\mathbb {R}^m\). Consider, for all \(\varphi \in {{\mathrm {C}}} ^2_b(\mathbb {R}^m)\), the Zakai equation

$$\begin{aligned} \rho ^{(2)}_t(\varphi )&= \xi ^{(2)}(\varphi ) + \int _0^t \rho ^{(2)}_s\bigl (\bigl [\partial _s + {{\mathcal {A}}} _{s+T_1}\bigr ] \varphi \bigr ) \, {{\mathrm {d}}} s \nonumber \\&\quad + \int _0^t \rho ^{(2)}_{s^-}\bigl ({{\mathrm {D}}} _x \varphi \bigr ) \, {{\mathrm {d}}} \nu ^{(2)}_s + \int _0^t \gamma ^{-1}(s+T_1) \rho ^{(2)}_s(\varphi h_{s+T_1}) \, {{\mathrm {d}}} {\overline{B}}_{s+T_1}, \quad {{\mathbb {P}}} \text {-a.s.}, \quad t \in [0,T]. \end{aligned}$$
(4.12)

Since \(\nu ^{(2)}\) satisfies point (iv) of Assumption 2.1, we have that (4.12) is the Zakai equation for the filtering problem of the partially observed system (2.8)–(2.13), with initial law \(\xi ^{(2)}\) and process \(\nu ^{(2)}\), which is continuous on [0, T]. Therefore, by Theorem 4.6, \(\rho ^{(2)}\) is its unique solution and admits a density \(p^{(2)}\) with respect to Lebesgue measure on \(\mathbb {R}^m\), with \({{\mathbb {E}}} [\Vert {p^{(2)}_t}\Vert ^2_H] < +\infty \), for each \(t \in [0,T]\). It is clear that, since \(\nu _t = \nu ^{(2)}_{t - T_1}\) on \(\{T_1 \le t < T_2\}\), we have that \(\rho _t = \rho ^{(2)}_{t - T_1}\) on the same set, and hence \(\rho _t\) admits density \(p^{(2)}_{t - T_1}\) on \(\{T_1 \le t < T_2\}\).

Continuing in this manner, we construct a sequence of solutions \((\rho ^{(n)})_{n \in \mathbb {N}}\) and corresponding density processes \((p^{(n)})_{n \in \mathbb {N}}\). We deduce that the unnormalized filtering process is represented by

$$\begin{aligned} \rho _t = \sum _{n=1}^\infty \rho ^{(n)}_{t-T_{n-1}} \mathbf {1}_{T_{n-1} \le t < T_n}, \quad t \in [0,T], \end{aligned}$$

and hence is the unique \({{\mathbb {Y}}} \)-adapted, càdlàg, \({{\mathcal {M}}} _+(\mathbb {R}^m)\)-valued solution to the Zakai equation (3.8), admitting a \({{\mathbb {Y}}} \)-adapted, càdlàg, H-valued density process p, given by

$$\begin{aligned} p_t = \sum _{n=1}^\infty p^{(n)}_{t-T_{n-1}} \mathbf {1}_{T_{n-1} \le t < T_n}, \quad t \in [0,T]. \end{aligned}$$

The fact that \({{\mathbb {E}}} [\Vert {p_t}\Vert _H^2] < +\infty \), for all \(t \in [0,T]\), follows from the analogous property for each of the processes \(p^{(n)}\), \(n \in \mathbb {N}\). \(\square \)
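The indicator-sum representation of p translates directly into a piecewise evaluation rule: locate the inter-jump interval containing t and query the corresponding segment solution. A small illustrative sketch (p_seg[n-1] is a hypothetical evaluator for \(p^{(n)}\)):

```python
import bisect

def evaluate_density(t, T, p_seg):
    """Piecewise evaluation of p_t = p^{(n)}_{t - T_{n-1}} on [T_{n-1}, T_n).

    T     : jump times [T_0 = 0, T_1, T_2, ...] (non-accumulating)
    p_seg : p_seg[n-1](s) evaluates the n-th segment density p^{(n)}_s
    """
    n = bisect.bisect_right(T, t)   # largest n with T[n-1] <= t
    return p_seg[n - 1](t - T[n - 1])
```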