Nonlinear Filtering of Partially Observed Systems Arising in Singular Stochastic Optimal Control

This paper deals with a nonlinear filtering problem in which a multi-dimensional signal process is additively affected by a process ν whose components have paths of bounded variation. The presence of the process ν prevents one from directly applying classical results, and novel estimates need to be derived. By making use of the so-called reference probability measure approach, we derive the Zakai equation satisfied by the unnormalized filtering process, and then we deduce the corresponding Kushner–Stratonovich equation. Under the condition that the jump times of the process ν do not accumulate over the considered time horizon, we show that the unnormalized filtering process is the unique solution to the Zakai equation in the class of measure-valued processes having a square-integrable density. Our analysis paves the way to the study of stochastic control problems where a decision maker can exert singular controls in order to adjust the dynamics of an unobservable Itô process.


Introduction
This paper studies a stochastic filtering problem on a finite time horizon [0, T], T > 0, in which the dynamics of a multi-dimensional process X = (X_t)_{t∈[0,T]}, called signal or unobserved process, are additively affected by a process having components of bounded variation. The aim is to estimate the hidden state X_t, at each time t ∈ [0, T], using the information provided by a further stochastic process Y = (Y_t)_{t∈[0,T]}, called observed process; in other words, we look for the conditional distribution of X_t given the available observation up to time t. This leads to the derivation of an evolution equation for the filtering process π = (π_t)_{t∈[0,T]}, which is a probability measure-valued process satisfying, for any given bounded and measurable function ϕ : R^m → R,

π_t(ϕ) = E[ϕ(X_t) | Y_t], P-a.s., t ∈ [0, T],

where (Y_t)_{t∈[0,T]} is the natural filtration generated by Y and augmented by the P-null sets. The process π provides the best estimate (in the usual L² sense) of the signal process X, given the available information obtained through the process Y.
Stochastic filtering is nowadays a well-established research topic. The literature on the subject is vast and many different applications have been studied: the reader may find a fairly detailed historical account in the book by Bain and Crisan [2]. Classic references are the books by Bensoussan [5], Kallianpur [29], Liptser and Shiryaev [35] (cf. also Brémaud [6, Chapter 4] for stochastic filtering with point process observation); more recent monographs are, e.g., the aforementioned book by Bain and Crisan [2], Crisan and Rozovskiȋ [16], and Xiong [42] (see also Cohen and Elliott [14, Chapter 22]). Recently, different cases where the signal and/or the observation processes can have discontinuous trajectories (as in the present work) have been studied and explicit filtering equations have been derived: see, for instance, Bandini et al. [4], Calvia [9], Ceci and Gerardi [12,13], Ceci and Colaneri [10,11], Confortola and Fuhrman [15], Grigelionis and Mikulevicius [26].
The main motivation of our analysis stems from the study of singular stochastic control problems under partial observation. Consider a continuous-time stochastic system whose position or level X_t at time t ∈ [0, T] is subject to random disturbances and can be adjusted instantaneously through (cumulative) actions that, as functions of time, do not have to be absolutely continuous with respect to Lebesgue measure. In particular, they may present a Cantor-like component and/or a jump component. The use of such singular control policies is nowadays common in applications in Economics, Finance, Operations Research, as well as in Mathematical Biology. Typical examples are, amongst others, (ir)reversible investment choices (e.g., Riedel and Su [41]), dividends' payout (e.g., Reppen et al. [40]), inventory management problems (e.g., Harrison and Taksar [27], De Angelis et al. [18]), as well as harvesting issues (e.g., [1]). Suppose also that the decision maker acting on the system is not able to observe the dynamics of the controlled process X, but she/he can only follow the evolution of a noisy process Y, whose drift is a function of the signal process. As a motivating example, consider a firm whose production generates pollution: let X_t denote the firm's pollution (emission) level at time t, which the firm adjusts through a cumulative investment-disinvestment process ν. Mathematically, we assume that the pair (X, Y) is defined on a filtered complete probability space (Ω, F, F := (F_t)_{t∈[0,T]}, P) and that its dynamics are given, for any t ∈ [0, T], by the following system of SDEs:

dX_t = −δ X_t dt + σ dW_t + β dν_t,

where δ is an exogenous pollution decay factor, σ > 0, W is a standard real Brownian motion, and β > 0 is a pollution-to-investment ratio. A noisy measurement of X is provided by the observed process Y, satisfying

dY_t = h(X_t) dt + dB_t,

where h : R → R is the so-called sensor function and B is a standard real Brownian motion independent of W.
The firm aims at choosing an optimal partially reversible investment strategy ν to minimize a cost functional of the form (with a quadratic tracking term taken for concreteness)

J(ν) = E[ ∫_0^T e^{−rt} (X_t − x̄)² dt + ∫_0^T e^{−rt} K₊ dν_t⁺ + ∫_0^T e^{−rt} K₋ dν_t⁻ ],

where r > 0 is a given discount factor, and ν_t⁺ (resp. ν_t⁻) is the cumulative investment (resp. disinvestment) into production performed up to time t ∈ [0, T]. The cost functional is composed of two parts: the first one, corresponding to the first integral, models the fact that the firm must track an emission level x̄, chosen by some central authority (e.g., a national government); the second one, corresponding to the second and third integrals, models the cost borne by the firm to implement the investment and disinvestment policies ν⁺, ν⁻, at the marginal costs K₊, K₋ > 0.
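For intuition, the partially observed dynamics and the cost above can be simulated with a simple Euler-Maruyama scheme. The following sketch uses purely illustrative parameter values and a hypothetical piecewise-constant investment plan ν = ν⁺ − ν⁻ (one investment jump and one disinvestment jump), together with a quadratic tracking cost and the sensor choice h(x) = x, all taken for concreteness; none of these numbers come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from the paper).
T, n = 1.0, 1000
dt = T / n
delta, sigma, beta = 0.5, 0.2, 1.0        # decay factor, volatility, pollution-to-investment ratio
r, K_plus, K_minus, x_bar = 0.05, 2.0, 1.0, 0.3

# A hypothetical piecewise-constant investment plan nu = nu_plus - nu_minus.
t = np.linspace(0.0, T, n + 1)
nu_plus = 0.2 * (t >= 0.3).astype(float)   # invest 0.2 at time 0.3
nu_minus = 0.1 * (t >= 0.7).astype(float)  # disinvest 0.1 at time 0.7
nu = nu_plus - nu_minus

# Euler-Maruyama for dX = -delta*X dt + sigma dW + beta dnu, X_{0-} = 1.
X = np.empty(n + 1)
X[0] = 1.0
dW = rng.normal(0.0, np.sqrt(dt), n)
for k in range(n):
    X[k + 1] = X[k] - delta * X[k] * dt + sigma * dW[k] + beta * (nu[k + 1] - nu[k])

# Noisy observation dY = h(X) dt + dB with, for illustration, h(x) = x.
dB = rng.normal(0.0, np.sqrt(dt), n)
Y = np.concatenate([[0.0], np.cumsum(X[:-1] * dt + dB)])

# Discounted cost: quadratic tracking term plus proportional (dis)investment costs.
disc = np.exp(-r * t)
J = (np.sum(disc[:-1] * (X[:-1] - x_bar) ** 2 * dt)
     + np.sum(disc[1:] * K_plus * np.diff(nu_plus))
     + np.sum(disc[1:] * K_minus * np.diff(nu_minus)))
print(round(J, 4))
```

Note that the jump of ν enters the signal path instantaneously, which is exactly the bounded-variation effect that makes the filtering analysis of the paper non-classical.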
Clearly, the firm needs to base any production plan on observable quantities, so ν must be adapted to the observation filtration (Y_t)_{t∈[0,T]}. Moreover, the unobservability of the pollution level X makes this a singular control problem under partial observation, for the study of which a necessary first step is the derivation of the explicit filtering equation satisfied by the filtering process π.
To the best of our knowledge, the derivation of explicit filtering equations in the setting described above has not yet received attention in the literature, except for the linear-Gaussian case, studied in Menaldi and Robin [37]. In this paper we provide a first contribution in this direction. Indeed, the recent literature treating singular stochastic control problems under partial observation assumes that the observed process, rather than the signal one, is additively controlled (cf. Callegaro et al. [7], De Angelis [17], Décamps and Villeneuve [19], and Federico et al. [24]). Clearly, such a modeling feature leads to a filtering analysis that is completely different from ours.
By making use of the so-called reference probability measure approach, we derive the Zakai stochastic partial differential equation (SPDE) satisfied by the so-called unnormalized filtering process, which is a measure-valued process, associated with the filtering process via a suitable change of probability measure. Then, we deduce the corresponding evolution equation for π , namely, the so-called Kushner-Stratonovich equation or Fujisaki-Kallianpur-Kunita equation. Furthermore, we show that the unnormalized filtering process is the unique solution to the Zakai equation, in the class of measure-valued processes having a square-integrable density. The latter result is proved under the technical requirement that the jump times of the process ν affecting X in (1.1) do not accumulate over the considered time-horizon. Although such a condition clearly poses a restriction on the generality of the model, we also acknowledge that it is typically satisfied by optimal control processes arising in singular stochastic control problems. It is important to notice that establishing conditions under which the unnormalized filtering process possesses a density paves the way to recast the separated problem as a stochastic control problem in a Hilbert space, as we will briefly explain in the next section.
The rest of the introduction is now devoted to a discussion of our approach and results at a more technical level.

Methodology and Main Results
In this paper we study the filtering problem described above through the so-called reference probability approach, which we briefly summarize here. To start, let us notice that the model introduced in (1.1) is somewhat ill-posed. In fact, the dynamics of the signal process X depend on the (Y_t)_{t∈[0,T]}-adapted process ν while, simultaneously, the dynamics of the observed process Y depend on X. Put differently, it is not clear how to define ν, which has to be given a priori, and circularity arises if one attempts to introduce the partially observed system (X, Y) as in (1.1).
A possible way out of this impasse is to define Y as a given Gaussian process independent of X (see (2.2)). In this way, it makes sense to fix a (Y t ) t∈[0,T ] -adapted process ν and to define the dynamics of the signal process X as in the first SDE of (1.1) (see also (2.8)). Finally, under suitable assumptions, there exists a probability measure change (cf. (2.12)) that allows us to recover the dynamics of Y as in the second SDE of (1.1) (see also (2.13)). It is important to notice that the resulting probability depends on the initial law ξ of X 0 − and on ν.
To derive the associated Kushner-Stratonovich equation there are two main approaches in the literature: the innovations approach and the aforementioned reference probability approach. Although it might be possible to derive the filtering dynamics in our context by using the former, we follow the latter.
Our first main result is Theorem 3.4, where we deduce the Zakai equation verified by the unnormalized filtering process (see (3.3) for its definition). From this result, as a byproduct, we deduce in Theorem 3.6 the Kushner-Stratonovich equation satisfied by the filtering process. It is worth noticing that, given the presence of the bounded-variation process ν in the dynamics of X, Theorem 3.4 cannot be obtained by invoking classical results, and novel estimates need to be derived (cf. Lemma A.1 and Proposition A.2). In particular, we employ a change of variable formula for Lebesgue-Stieltjes integrals.
It is clear that in applications, for instance to optimal control problems, establishing uniqueness of the solution to the Zakai equation or to the Kushner-Stratonovich equation is essential. In the literature there are several approaches to tackle this problem, most notably the following four: the filtered martingale problem approach, originally proposed by Kurtz and Ocone [34] and later extended to singular martingale problems in [32] (see also [31]); the PDE approach, as in the book by Bensoussan [5] (see also [2, Sect. 4.1]); the functional analytic approach, introduced by Lucic and Heunis [36] (see also [2, Sect. 4.2]); and the density approach, studied in Kurtz and Xiong [33] (see also [2, Sect. 7] and [42]).
The first three methods allow one to prove uniqueness of the solution to the Zakai equation in a suitable class of measure-valued processes. However, they do not guarantee that the unique measure-valued process solving the Zakai equation admits a density process, a fact that has an impact on the study of the separated problem. Indeed, without requiring or establishing conditions guaranteeing the existence of such a density process, the separated problem must be formulated in an appropriate Banach space of measures and, as a consequence, the Hamilton-Jacobi-Bellman (HJB) equation associated with the separated problem must be formulated in such a general setting as well. As a matter of fact, only recently have some techniques been developed to treat this case, predominantly in the theory of mean-field games (an application to optimal control problems with partial observation is given in [3]).
A more common approach in the literature considers, instead, the density process as the state variable for the separated problem. If it is possible to show that such a density process is the unique solution of a suitable SPDE in L 2 (R m ), the so-called Duncan-Mortensen-Zakai equation, then this L 2 (R m )-valued process can be equivalently used as state variable in the separated problem. This is particularly convenient, since for optimal control problems in Hilbert spaces a well-developed theory is available, at least in the regular case (see, e.g., the monograph by Fabbri et al. [23]). Therefore, in view of possible future applications to singular optimal control problems under partial observation, we adopted the density approach to prove that, under suitable assumptions, the unnormalized filtering process is the unique solution to the Zakai equation in the class of measure-valued processes admitting a density with respect to Lebesgue measure.
We show this result, first, in the case where ν is a continuous process (cf. Theorem 4.6) and, then, in the case where the jump times of ν do not accumulate in the time interval [0, T] (see Theorem 4.7). As we already observed, although this assumption prevents us from achieving full generality, it has a clear interpretation and is usually satisfied by the examples considered in the literature. From a technical standpoint, it seems that a direct approach using the method proposed by [33] is not feasible to treat the case of accumulating jumps, due to difficulties in estimating crucial quantities related to the jump component of the filtering process. A possible workaround might consist in approximating the process ν by cutting away jumps of size smaller than some δ > 0 and then, provided that a suitable tightness property holds, passing to the limit, as δ → 0, in the relevant equations. However, this is a delicate and lengthy reasoning, which is left for future research.
The rest of this paper is organized as follows. Section 1.2 provides the notation used throughout this work. Section 2 introduces the filtering problem. The Zakai and Kushner-Stratonovich equations are then derived in Section 3, while the uniqueness of the solution to the Zakai equation is proved in Section 4. Finally, Appendix 1 collects the proofs of technical results.

Notation
In this section we collect the main notation used in this work. Throughout the paper, N denotes the set of natural numbers N = {1, 2, ...}, N₀ := {0, 1, ...}, and R denotes the set of real numbers.
For any m × n matrix A = (a_{ij}), the symbol A* denotes its transpose and ‖A‖ its Frobenius norm, i.e., ‖A‖ = (Σ_{i=1}^m Σ_{j=1}^n a_{ij}²)^{1/2}. For any x, y ∈ R^d, |x| denotes the Euclidean norm of x and x · y = x* y indicates the inner product of x and y. For a fixed Hilbert space H, we denote its inner product by ⟨·, ·⟩ and by ‖·‖_H its norm.
The symbol 1_C denotes the indicator function of a set C, while 1 is the constant function equal to 1. The symbol ∫_a^b denotes the integral over [a, b], for any −∞ < a ≤ b < +∞. For any d ∈ N and T > 0, we denote by C^{1,2}_b([0, T] × R^d) the set of real-valued bounded measurable functions on [0, T] × R^d that are continuously differentiable once with respect to the first variable and twice with respect to the second, with bounded derivatives. For any such function, the symbol ∂_t denotes the derivative with respect to the first variable, while D_x = (∂_1, ..., ∂_d) and D²_x = (∂²_{ij})_{i,j=1}^d denote, respectively, the gradient and the Hessian matrix with respect to the second variable. Furthermore, we simply write C²_b(R^d) when we are considering a real-valued bounded function on R^d that is twice continuously differentiable with bounded derivatives.
For any d ∈ N we indicate by L²(R^d) the set of all square-integrable functions with respect to Lebesgue measure, and for all k ∈ N we denote by W^2_k(R^d) the Sobolev space of all functions f ∈ L²(R^d) such that the partial derivatives ∂^α f exist in the weak sense and belong to L²(R^d) whenever the multi-index α = (α_1, ..., α_d) satisfies α_1 + ... + α_d ≤ k. For a fixed metric space E, endowed with its Borel σ-algebra, we denote by P(E), M⁺(E), and M(E) the sets of probability, finite positive, and finite signed measures on E, respectively. If μ ∈ M(E), then |μ| ∈ M⁺(E) is the total variation of μ. For any given càdlàg stochastic process Z = (Z_t)_{t≥0} defined on a probability space (Ω, F, P), we denote by (Z_{t⁻})_{t≥0} the left-continuous version of Z (i.e., Z_{t⁻} = lim_{s→t⁻} Z_s, P-a.s., for any t ≥ 0), and by ΔZ_t := Z_t − Z_{t⁻} the jump of Z at time t ≥ 0. If Z has finite variation over [0, t], for all t ≥ 0, then |Z| (resp. Z⁺, Z⁻) is the variation process (resp. the positive part process, the negative part process) of Z, i.e., the process such that, for each t ∈ [0, T] and ω ∈ Ω, |Z|_t(ω) (resp. Z⁺_t(ω), Z⁻_t(ω)) is the total variation (resp. the positive part, the negative part) of the function s → Z_s(ω) on [0, t]. It is useful to remember that Z = Z_0 + Z⁺ − Z⁻, |Z| = Z⁺ + Z⁻, and that Z⁺, Z⁻ are non-decreasing processes.
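These variation processes can be computed explicitly for a piecewise-constant path. The following sketch (with a hypothetical sample path, chosen only for illustration) builds Z⁺, Z⁻, and |Z| from the increments and checks the Jordan decomposition recalled above:

```python
import numpy as np

# A hypothetical piecewise-constant path Z, recorded at discrete times.
Z = np.array([0.0, 1.0, 0.5, 0.5, 2.0, 1.5])

dZ = np.diff(Z)
Z_plus = np.concatenate([[0.0], np.cumsum(np.maximum(dZ, 0.0))])    # positive part process
Z_minus = np.concatenate([[0.0], np.cumsum(np.maximum(-dZ, 0.0))])  # negative part process
Z_var = Z_plus + Z_minus                                            # variation process |Z|

# Jordan decomposition: Z = Z_0 + Z_plus - Z_minus, |Z| = Z_plus + Z_minus,
# and both Z_plus and Z_minus are non-decreasing.
assert np.allclose(Z, Z[0] + Z_plus - Z_minus)
assert np.all(np.diff(Z_plus) >= 0) and np.all(np.diff(Z_minus) >= 0)
print(Z_var[-1])  # → 3.5, the total variation over the whole window
```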
Finally, with the word measurable we refer to Borel-measurable, unless otherwise specified.

Model Formulation
Let T > 0 be a given fixed time horizon and ( , F, F:=(F t ) t∈[0,T ] , P) be a complete filtered probability space, with F satisfying the usual assumptions.
Define on (Ω, F, F, P) two independent F-adapted standard Brownian motions W and B, taking values in R^d and R^n, respectively, with d, n ∈ N. Let then γ : [0, T] → R^{n×n} be a measurable function such that, for each t ∈ [0, T], γ(t) is symmetric, with γ_{ij} ∈ L²([0, T]) for all i, j = 1, ..., n, and uniformly positive definite; that is, there exists δ > 0 such that

x · γ(t)x ≥ δ |x|², for all t ∈ [0, T] and all x ∈ R^n.  (2.1)

These requirements guarantee, in particular, that the observed process

Y_t := ∫_0^t γ(s) dB_s, t ∈ [0, T],  (2.2)

is an R^n-valued F-adapted martingale, of which we take a continuous version. Clearly, it holds that

⟨Y^i, Y^j⟩_t = ∫_0^t (γ²)_{ij}(s) ds, i, j = 1, ..., n, t ∈ [0, T].  (2.3)

Remark 2.1
It is not restrictive to require that γ is symmetric (and uniformly positive definite). Indeed, suppose that B is an R^k-valued F-adapted standard Brownian motion and that γ : [0, T] → R^{n×k} is a measurable function such that γγ*(t) is uniformly positive definite. Then we can obtain an equivalent model by defining the R^n-valued F-adapted standard Brownian motion B̃_t := ∫_0^t (γγ*)^{−1/2}(s) γ(s) dB_s, t ∈ [0, T]. In fact, in this case (2.3) becomes

⟨Y^i, Y^j⟩_t = ∫_0^t (γγ*)_{ij}(s) ds, i, j = 1, ..., n, t ∈ [0, T],

and clearly (γγ*)^{1/2}(t) is symmetric (and uniformly positive definite).
We indicate with the symbol Y the completed natural filtration generated by Y, i.e., Y := (Y_t)_{t∈[0,T]}, with Y_t := σ(Y_s, 0 ≤ s ≤ t) ∨ N, where N denotes the collection of P-null sets of F.

Remark 2.2
Notice that since γ is invertible, Y coincides with the completed natural filtration generated by B and is, therefore, right-continuous. These facts will be useful in the sequel.
We now fix a Y-adapted, càdlàg, R^m-valued process ν whose components have paths of finite variation. We introduce the following requirements, which will be in force throughout the paper.
Its components have paths of finite variation, whose total variation over [0, T] is bounded, P-a.s., by some constant K > 0.
Under Assumption 2.1, for any such ν, the following SDE for the signal process X = (X_t)_{t∈[0,T]} admits a unique strong solution:

dX_t = b(t, X_t) dt + σ(t, X_t) dW_t + dν_t, t ∈ [0, T], X_{0⁻} ∼ ξ.  (2.8)

It is important to bear in mind, especially in applications to optimal control problems, that the solution to (2.8), and all the quantities related to it, depend on the probability distribution ξ and on ν. However, for ease of exposition, we will not stress this dependence in the sequel.

Remark 2.3
Conditions (2.4) and (2.5) ensure that SDE (2.8) admits a unique strong solution for any ν. If we assume, in addition, that (2.6) and (2.7) hold, then we have that, for some constant κ depending on T, b, σ, and ν,

E[ sup_{t∈[0,T]} |X_t|² ] ≤ κ (1 + E[|X_{0⁻}|²]) < +∞.  (2.9)

Proofs of these statements are standard and can be found, for instance, in [14, 39].
We finally arrive at the model we intend to analyze, via a change of measure. Let h : [0, T] × R^m → R^n be a measurable function satisfying the following condition, which will stand from now on.

Assumption 2.2 There exists a constant C_h > 0 such that |h(t, x)| ≤ C_h (1 + |x|), for all (t, x) ∈ [0, T] × R^m.
For all t ∈ [0, T] define then:

η_t := exp( ∫_0^t γ⁻¹(s) h(s, X_s) · dB_s − ½ ∫_0^t |γ⁻¹(s) h(s, X_s)|² ds ).  (2.11)

By Proposition A.2, η is a (P, F)-martingale under Assumptions 2.1 and 2.2. Therefore, we can introduce the probability measure P̃ on (Ω, F) by setting

dP̃/dP := η_T.  (2.12)

By Girsanov's theorem, the process B̃_t := B_t − ∫_0^t γ⁻¹(s) h(s, X_s) ds, t ∈ [0, T], is a (P̃, F)-Brownian motion, and under P̃ the dynamics of the observed process are provided by the SDE:

dY_t = h(t, X_t) dt + γ(t) dB̃_t, t ∈ [0, T], Y_0 = 0.  (2.13)

We see that Eqs. (2.8) and (2.13) are formally equivalent to model (1.1). Observe, however, that the Brownian motion driving (2.13) is not a source of noise given a priori, but is obtained through a change of probability measure; moreover, our construction implies that it depends on the initial law ξ and on the process ν. This formulation is typical in optimal control problems under partial observation (see, e.g., [5, Chapter 8]) and has the advantage of avoiding the circularity problem discussed in the Introduction.

Remark 2.4
If the partially observed system defined by (2.8) and (2.13) describes the state variables of a singular optimal control problem, where ν is the control process, then condition (2.7) implies that the singular control is of finite fuel type (see El Karoui and Karatzas [21], Karatzas et al. [30] for early contributions).

Remark 2.5
It is worth noticing that all the results in this paper remain valid if we allow b to depend also on ω, as long as the map (ω, t) → b(ω, t, x) is Y-adapted and càdlàg, for each x ∈ R m , and condition (2.4) holds uniformly with respect to ω (i.e., L b and C b do not depend on ω). To extend our subsequent results to this case, it suffices to apply the so-called freezing lemma whenever necessary.
This modeling flexibility is important when it comes to treating controlled dynamics where b is a deterministic function depending on an additional parameter representing the action of a regular control α = (α_t)_{t∈[0,T]}. Clearly, this control must be càdlàg and Y-adapted, i.e., based on the available information. The measurability requirement above then ensures that the map (ω, t) → b(t, x, α_t(ω)) fits the setting of Remark 2.5.

The Zakai and Kushner-Stratonovich Equations
In this section we will deduce the Zakai equation satisfied by the unnormalized filtering process, defined in (3.3). As a byproduct, we will deduce the Kushner-Stratonovich equation satisfied by the filtering process (see (3.1) for its definition). As anticipated in the Introduction, we will use the reference probability approach to achieve these results. The reference probability will be precisely P, under which the observed process is Gaussian and satisfies (2.2). However, the probability measure that matters from a modeling point of view is P̃, defined in (2.12): indeed, we will define the filtering process under this measure. It is important to bear in mind that P and P̃ are equivalent probability measures. Hence, any result holding P-a.s. also holds P̃-a.s., and we will write only the first of these two wordings.
The following technical lemma is needed. Its proof is a consequence of the facts highlighted in Remark 2.2 and is omitted (the reader may refer, for instance, to [2, Prop. 3.15]). In what follows we will denote Y := Y_T.
As previously anticipated, the filtering process π = (π_t)_{t∈[0,T]} is a P(R^m)-valued process providing the conditional law of the signal X at each time t ∈ [0, T], given the available observation up to time t. It is defined, for any bounded and measurable ϕ : R^m → R, by

π_t(ϕ) := Ẽ[ϕ(X_t) | Y_t], P-a.s., t ∈ [0, T],  (3.1)

where Ẽ denotes expectation under P̃. Since R^m is a complete and separable metric space, π is a well-defined, P(R^m)-valued and Y-adapted process. Moreover, π admits a càdlàg modification, since X is càdlàg (see, e.g., [2, Cor. 2.26]). Hence, in the sequel we shall consider π as a Y-progressively measurable process. We recall the useful Kallianpur-Striebel formula, which holds, thanks to Proposition A.2, for any bounded and measurable ϕ : [0, T] × R^m → R and for any fixed t ∈ [0, T] (for a proof see, e.g., [2, Prop. 3.16]):

Ẽ[ϕ(t, X_t) | Y_t] = E[η_t ϕ(t, X_t) | Y_t] / E[η_t | Y_t], P-a.s.  (3.2)

This formula allows us to define the measure-valued process ρ = (ρ_t)_{t∈[0,T]}, called the unnormalized conditional distribution of X, or unnormalized filtering process, given, for any bounded and measurable ϕ : R^m → R, by

ρ_t(ϕ) := E[η_t ϕ(X_t) | Y_t], P-a.s., t ∈ [0, T].  (3.3)

Given the properties of π and of η, it is possible to show (see, e.g., [2, Lemma 3.18]) that ρ is càdlàg and Y-adapted, hence Y-progressively measurable. Moreover, the Kallianpur-Striebel formula implies that, for any bounded and measurable ϕ : [0, T] × R^m → R and for any fixed t ∈ [0, T]:

π_t(ϕ(t, ·)) = ρ_t(ϕ(t, ·)) / ρ_t(1), P-a.s.,  (3.4)

where 1 : R^m → R is the constant function equal to 1.
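In discrete time, the Kallianpur-Striebel formula lends itself to a simple Monte Carlo approximation under the reference probability: signal paths are simulated independently of the (fixed) observation path and reweighted by a discretized version of the density η. The sketch below uses a scalar toy model with h(x) = x, γ ≡ 1, constant coefficients, and no singular term ν; all choices are illustrative and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy scalar model under the reference measure P:
#   dX = -X dt + dW,  Y = B (Brownian, independent of X),  h(x) = x, gamma = 1.
T, n, N = 1.0, 200, 20000
dt = T / n

# One fixed observation path Y (under P it is just a Brownian path).
dY = rng.normal(0.0, np.sqrt(dt), n)

# N independent signal paths, simulated independently of Y.
X = np.empty((N, n + 1))
X[:, 0] = rng.normal(0.0, 1.0, N)  # initial law xi = N(0, 1)
for k in range(n):
    X[:, k + 1] = X[:, k] - X[:, k] * dt + rng.normal(0.0, np.sqrt(dt), N)

# Girsanov weights eta_T = exp( int h(X) dY - 1/2 int h(X)^2 dt ), discretized.
h = X[:, :-1]  # h(x) = x evaluated along each path
log_eta = np.sum(h * dY, axis=1) - 0.5 * np.sum(h ** 2, axis=1) * dt
eta = np.exp(log_eta)

# Kallianpur-Striebel: pi_T(phi) = rho_T(phi) / rho_T(1), here with phi(x) = x.
rho_phi = np.mean(eta * X[:, -1])
rho_one = np.mean(eta)
pi_phi = rho_phi / rho_one
print(pi_phi)  # Monte Carlo estimate of the conditional mean of X_T
```

The empirical averages approximate the conditional expectations E[η_T ϕ(X_T) | Y] and E[η_T | Y], since the simulated signal paths are independent of the fixed observation path, mirroring the reference-measure construction.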
To describe the local dynamics of the signal process X, let us introduce the operator A, defined for any ϕ ∈ C^{1,2}_b([0, T] × R^m) as:

Aϕ(t, x) := ∂_t ϕ(t, x) + b(t, x) · D_x ϕ(t, x) + ½ tr[σσ*(t, x) D²_x ϕ(t, x)].

We can also define the family of operators A_t, t ∈ [0, T], acting on ϕ ∈ C²_b(R^m) and given by:

A_t ϕ(x) := b(t, x) · D_x ϕ(x) + ½ tr[σσ*(t, x) D²_x ϕ(x)].

To obtain the Zakai equation we need, first, to write the semimartingale decomposition of the process ϕ(t, X_t):

ϕ(t, X_t) = ϕ(0, X_{0⁻}) + ∫_0^t Aϕ(s, X_s) ds + ∫_0^t D_x ϕ(s, X_s) · dν^c_s + Σ_{0≤s≤t} [ϕ(s, X_s) − ϕ(s, X_{s⁻})] + M^ϕ_t,  (3.6)

where ν^c denotes the continuous part of ν and M^ϕ_t := ∫_0^t D_x ϕ(s, X_s)* σ(s, X_s) dW_s is a square-integrable (P, F)-martingale, thanks to conditions (2.4) and (2.5) (see also Remark 2.3).
We need the following two technical Lemmata. Up to minor modifications, their proofs follow that of [2], Lemma 3.21.
Lemma 3.3, in particular, ensures that conditional expectations given Y of stochastic integrals with respect to the martingales M^ϕ vanish. We are now ready to state the main result of this section, namely, to provide the Zakai equation.

Theorem 3.4 Suppose that Assumptions 2.1 and 2.2 are satisfied and, moreover, that condition (3.7) holds, where the suprema appearing therein are taken over [0, T] × R^m. Then, for any ϕ ∈ C^{1,2}_b([0, T] × R^m), the unnormalized conditional distribution ρ satisfies the Zakai equation:

ρ_t(ϕ(t, ·)) = ξ(ϕ(0, ·)) + ∫_0^t ρ_s(Aϕ(s, ·)) ds + ∫_0^t ρ_s(D_x ϕ(s, ·)) · dν^c_s + Σ_{0≤s≤t} ρ_{s⁻}(ϕ(s, · + Δν_s) − ϕ(s, ·)) + ∫_0^t ρ_s(ϕ(s, ·) h*(s, ·)) γ⁻²(s) dY_s,  (3.8)

P-a.s., for all t ∈ [0, T], where ν^c denotes the continuous part of ν. The proof is organized in several steps.
Step 1 (Approximation) For any fixed ε > 0, define the bounded process η^ε = (η^ε_t)_{t∈[0,T]} by

η^ε_t := η_t / (1 + ε η_t), t ∈ [0, T],

where η is defined in (2.11). Both η and η^ε have continuous trajectories, and this fact will be used in what follows without further mention. Applying Itô's formula, we obtain the semimartingale decomposition of η^ε. Denoting by [·, ·] the optional quadratic covariation operator, thanks to the integration by parts rule and recalling (3.6), we get equation (3.10).

Step 2 (Projection onto Y) Notice that X_t = X_{t⁻} + Δν_t, P-a.s., t ∈ [0, T]. Therefore, taking conditional expectations with respect to Y, we have (rearranging some terms) equation (3.11). We now analyze each of the terms appearing in (3.11). For any bounded Y-measurable random variable Z, thanks to conditions (2.4) and (2.5), there exists a constant C_1, depending on Z, ε, ϕ, b, and σ, dominating the integrands of the ds-terms, which implies, using the estimate given in (2.9), that these terms are integrable. Therefore, applying the tower rule and the Fubini-Tonelli theorem, the conditional expectation can be brought inside the time integrals. Similarly, for any bounded Y-measurable Z, an analogous estimate holds; this fact will allow us to use the Fubini-Tonelli theorem in formula (3.13) below. We need to introduce the changes of time associated with the processes ν^{i,+} and ν^{i,−}, i = 1, ..., m, defined as

C^{i,+}_t := inf{s ≥ 0 : ν^{i,+}_s > t},  C^{i,−}_t := inf{s ≥ 0 : ν^{i,−}_s > t},

where ν^{i,+} (resp. ν^{i,−}) denotes the positive part (resp. negative part) process of the i-th component of the process ν (see the list of notation in Sect. 1.2 for a more detailed definition). For each t ≥ 0 and i = 1, ..., m, C^{i,+}_t and C^{i,−}_t are Y-stopping times (see, e.g., [20, Chapter VI, Def. 56] or [28, Proposition I.1.28]). Hence, applying the change of time formula (see, e.g., [20, Chapter VI, Equation (55.1)] or [28, Equation (1), p. 29]) and the Fubini-Tonelli theorem, we obtain the analogous projection for the Lebesgue-Stieltjes integrals with respect to dν. Finally, η^ε being bounded, Lemma 3.3 entails E[∫_0^t η^ε_s dM^ϕ_s | Y] = 0 and, using the same rationale as in the previous evaluations, we arrive at equation (3.18).

Step 3 (Taking limits) It remains to show that all the terms appearing in (3.18) converge appropriately to give (3.8).
As ε → 0, we have that η^ε_s → η_s pointwise. Using the boundedness of ϕ and (2.4), (2.5), we obtain a bound on the integrand in the first line of (3.18), for some constant C_2 depending on ϕ, b, and σ. The r.h.s. of this inequality is dP ⊗ dt-integrable on Ω × [0, t], since (applying again the tower rule and the Fubini-Tonelli theorem) we can use (2.9), which holds also under P̃ because the dynamics of X do not change under this measure. Using the conditional form of the dominated convergence theorem, we have convergence, for all t ∈ [0, T], as ε → 0, whence, noticing that the integrals are Y-measurable random variables, the first line of (3.18) converges to the corresponding term of (3.8). We consider, now, the term on the second line of (3.18). Here the r.h.s. of the relevant inequality is dP ⊗ dt-integrable on Ω × [0, t], thanks to (A.4), which holds also under P̃ (again, because the dynamics of X do not change under this measure). Hence, reasoning as above, after applying the conditional form of the dominated convergence theorem we obtain convergence, for all t ∈ [0, T], as ε → 0. Looking at the third line of (3.18), the next step is to show the convergence of the stochastic integral with respect to Y. The proof of this fact is standard (see, e.g., [2, Theorem 3.24 and Exercise 3.25.i] or [5, Theorem 4.1.1]); it is important to notice that condition (3.7) intervenes here. Next, we examine the other integral in the third line of (3.18), which also converges as ε → 0, for all t ∈ [0, T]. Notice that, since η is non-negative, a Y-optional version of (E[1_{t≤T} η_t | Y_t])_{t≥0} is given by the Y-optional projection of (1_{t≤T} η_t)_{t≥0} (see, e.g., [14, Corollary 7.6.8]). Therefore, applying [20, Chapter VI, Theorem 57] and using Lemma 3.1, we get the required identification for all t ∈ [0, T] and all i = 1, ..., m; the needed domination can be established with a reasoning analogous to the proof of (A.10).
Therefore, we can apply the conditional form of the dominated convergence theorem to obtain convergence, for all t ∈ [0, T], as ε → 0. Since the integrals are Y-measurable random variables, this implies that, for all t ∈ [0, T], the corresponding terms of (3.18) converge, as ε → 0, to their counterparts in (3.8). Finally, looking at the fourth line of (3.18), we have convergence, for all t ∈ [0, T], as ε → 0, of the jump terms. Observe that, for any t ∈ [0, T], η^ε_t is bounded by 1/ε, and the summands are dominated by a quantity that is positive and integrable with respect to the product of the measure P and the jump measure associated with ν. By the conditional form of the dominated convergence theorem, we have convergence, for all t ∈ [0, T], as ε → 0, and since the sums are Y-measurable random variables, this implies that, for all t ∈ [0, T], as ε → 0, the fourth line of (3.18) converges to the jump term appearing in (3.8).

(3.19)
We are now ready to deduce, from the Zakai equation, the Kushner-Stratonovich equation, i.e., the equation satisfied by the filtering process π, defined in (3.1). The proofs of the following two results follow essentially the same steps as [2, Lemma 3.29 and Theorem 3.30], up to the modifications required by the present setting (see also [5, Lemma 4.3.1 and Theorem 4.3.1]).

Lemma 3.5 Under the same assumptions as Theorem 3.4, the process ρ(1) = (ρ_t(1))_{t∈[0,T]} satisfies

ρ_t(1) = 1 + ∫_0^t ρ_s(h*(s, ·)) γ⁻²(s) dY_s, P-a.s., t ∈ [0, T].  (3.20)

Remark 3.2
It is not difficult to show (see, e.g., [2, Proposition 2.30] or [5, Theorem 4.3.4]) that the process

B̄_t := ∫_0^t γ⁻¹(s) [dY_s − π_s(h(s, ·)) ds], t ∈ [0, T],

is a (P̃, Y)-Brownian motion, the so-called innovation process. This allows one to rewrite the Kushner-Stratonovich equation in the (perhaps more familiar) form driven by B̄. Notice, however, that in this setting the innovation process is not a Brownian motion given a priori, because it depends (through the density process η, and hence through X) on the initial law ξ of the signal process and on the process ν.

Remark 3.3
Similarly to what is stated in Remark 3.1, if the jump times of the process ν do not accumulate over [0, T], then the Kushner-Stratonovich equation can be split into successive nonlinear SPDEs holding between the jumps of ν (i.e., of X). Using the same notation as in the aforementioned Remark, for any ϕ ∈ C^{1,2}_b([0, T] × R^m) and any n ∈ N₀, the corresponding equation holds P-a.s. on each stochastic interval between consecutive jump times.
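Numerically, this splitting suggests a simple scheme for propagating a conditional density: between two consecutive jump times one evolves the density with the continuous-part dynamics, and at a jump time of ν the density is translated by the (observable) jump size Δν. The sketch below illustrates this on a grid for a hypothetical driftless scalar signal with a single jump of size 0.5; it is only a schematic illustration (the observation update is omitted), not the filter of the paper.

```python
import numpy as np

# Spatial grid and an initial Gaussian density.
x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]
p = np.exp(-x ** 2 / 2.0) / np.sqrt(2.0 * np.pi)

def heat_step(p, dx, dt, sigma=1.0):
    """One explicit finite-difference step of dp/dt = (sigma^2/2) p_xx
    (the Fokker-Planck part of a driftless toy signal)."""
    lap = (np.roll(p, 1) - 2.0 * p + np.roll(p, -1)) / dx ** 2
    return p + 0.5 * sigma ** 2 * lap * dt

def jump_shift(p, x, jump):
    """At a jump time of nu, the conditional density is translated by the jump:
    the new density is p(. - jump)."""
    return np.interp(x, x + jump, p, left=0.0, right=0.0)

dt = 1e-5  # small enough for stability of the explicit scheme on this grid
for _ in range(100):            # propagate on [t_n, t_{n+1})
    p = heat_step(p, dx, dt)
p = jump_shift(p, x, 0.5)       # apply the jump Delta nu = 0.5
for _ in range(100):            # propagate on the next interval
    p = heat_step(p, dx, dt)

mass = np.sum(p) * dx
print(round(mass, 3))  # → 1.0, total mass is (approximately) preserved
```

The jump update is deterministic given the observation filtration, which is precisely why the equation between jumps can be studied as an ordinary (nonlinear) SPDE.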

Uniqueness of the Solution to the Zakai Equation
In this section we will address the issue of uniqueness of the solution to the Zakai equation (3.8), under the requirement that the jump times of the process ν do not accumulate over [0, T ]. Proving uniqueness is essential to characterize completely the unnormalized filtering process ρ, defined in (3.3), and is crucial in applications, e.g., in optimal control. Indeed, having ensured that (3.8) (or, equivalently, (3.20)) uniquely characterizes the conditional distribution of the signal given the observation, the filtering process can be employed as a state variable to solve the related separated optimal control problem (cf. [5]).
We follow the approach in [33] (see also [2], Chapter 7, and [42], Chapter 6). The idea is to recast the measure-valued Zakai equation as an SPDE in the Hilbert space H := L^2(R^m) and, therefore, to look for a density of ρ in this space. To accomplish this, we will smooth solutions to (3.8) using the heat kernel, and we will then use estimates in L^2(R^m) to deduce the desired result. An important role in the subsequent analysis is played by the following lemma, whose proof can be found, e.g., in [2], Solution to Exercise 7.2.

Lemma 4.1 Let {ϕ_k}_{k∈N} be an orthonormal basis of H such that ϕ_k ∈ C_b(R^m), for any k ∈ N, and let μ ∈ M(R^m) be such that Σ_{k∈N} μ(ϕ_k)^2 < +∞;
then μ is absolutely continuous with respect to the Lebesgue measure on R^m and its density is square-integrable.
Let ψ_ε be the heat kernel, i.e., the function defined for each ε > 0 below. For any Borel-measurable and bounded f and any ε > 0, define the operator T_ε f; we also define the operator T_ε : M(R^m) → M(R^m). These definitions imply that, for any μ ∈ M(R^m), the measure T_ε μ always possesses a density with respect to the Lebesgue measure, which we will still denote by T_ε μ.
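With the usual Gaussian normalization (cf. [2], Chapter 7), the heat kernel and the associated smoothing operators take the following standard form; this is a hedged sketch in the paper's notation, not a quotation of its own display:

```latex
\psi_\varepsilon(x) := \frac{1}{(2\pi\varepsilon)^{m/2}}\,
  \exp\!\Bigl(-\frac{|x|^2}{2\varepsilon}\Bigr), \qquad
T_\varepsilon f(x) := \int_{\mathbb{R}^m} \psi_\varepsilon(x-y)\,f(y)\,\mathrm{d}y, \qquad
T_\varepsilon \mu(\mathrm{d}x) := \Bigl(\int_{\mathbb{R}^m} \psi_\varepsilon(x-y)\,\mu(\mathrm{d}y)\Bigr)\,\mathrm{d}x.
```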
Further properties of these operators that will be used in the sequel are listed in the following lemma (for its proof see, e.g., [2], Solution to Exercise 7.3, and [42], Lemmata 6.7 and 6.8).

Lemma 4.2 For any μ ∈ M(R^m), h ∈ H, and ε > 0 we have that:
(i) ‖T_{2ε}|μ|‖_H ≤ ‖T_ε|μ|‖_H, where |μ| denotes the total variation measure of μ.
In this section we will also work under the following hypotheses, in addition to Assumptions 2.1 and 2.2, concerning the coefficients b, σ, and h appearing in SDEs (2.8) and (2.13).
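Purely as an illustration of point (i), and not part of the paper's argument, for an atomic measure μ the H-norm of the smoothed density T_ε μ admits a closed form, because the L^2 inner product of two Gaussian kernels of variance ε centered at x_i and x_j equals ψ_{2ε}(x_i − x_j). A minimal numerical sketch (function names are ours; the Gaussian normalization of ψ_ε is assumed):

```python
import math

def psi(eps, x):
    # One-dimensional heat kernel: Gaussian density with variance eps
    # (assumed normalization; m = 1 here).
    return math.exp(-x * x / (2.0 * eps)) / math.sqrt(2.0 * math.pi * eps)

def smoothed_h_norm(atoms, weights, eps):
    # H-norm of T_eps(mu) for the atomic measure mu = sum_i w_i * delta_{x_i}.
    # Closed form: ||T_eps mu||_H^2 = sum_{i,j} w_i w_j psi_{2 eps}(x_i - x_j),
    # since the L^2 inner product of two Gaussians of variance eps is a
    # Gaussian of variance 2*eps evaluated at the difference of the centers.
    total = 0.0
    for xi, wi in zip(atoms, weights):
        for xj, wj in zip(atoms, weights):
            total += wi * wj * psi(2.0 * eps, xi - xj)
    return math.sqrt(total)

atoms = [0.0, 0.7, -1.3, 2.1]   # hypothetical atom locations
weights = [0.4, 0.3, 0.2, 0.1]  # hypothetical non-negative weights
n1 = smoothed_h_norm(atoms, weights, 0.5)  # ||T_eps mu||_H
n2 = smoothed_h_norm(atoms, weights, 1.0)  # ||T_{2 eps} mu||_H
print(n1, n2)  # extra smoothing can only decrease the H-norm
```

The inequality n2 ≤ n1 is exactly point (i) for this μ: T_{2ε} = T_ε ∘ T_ε, and convolution with a probability density is an L^2 contraction.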
In what follows we will use the shorter notation introduced in (4.1). In the next subsection, we obtain the uniqueness result for the solution to the Zakai equation when the process ν has continuous paths. This will then be exploited in Sect. 4.2 in order to obtain the uniqueness claim when ν has jump times that do not accumulate over [0, T].

The Case in Which ν Has Continuous Paths
We start our analysis with the following Lemma, which will play a fundamental role in the sequel. Its proof can be found in Appendix A.
The next result is a useful estimate.
If ν is continuous and if, for any ε > 0, E[sup_{t∈[0,T]} ‖T_ε|ζ|_t‖_H^2] < +∞, then there exists a constant M > 0 such that, for each ε > 0 and all F-stopping times τ ≤ t, t ∈ [0, T],

Proof To ease notation, for any ε > 0 denote by Z^ε the process Z^ε_t := T_ε ζ_t, t ≥ 0. Fix ε > 0 and consider an orthonormal basis {ϕ_k}_{k∈N} of H such that ϕ_k ∈ C_b^2(R^m), for any k ∈ N. Writing the Zakai equation for the function T_ε ϕ_k (recall that ν is continuous by assumption) we get, for all t ∈ [0, T]: Notice that, for any ϕ ∈ C_b^2(R^m) and any t ∈ [0, T], we can write: where a is the function defined in (4.1). For any i, j = 1, …, m, ℓ = 1, …, n, and t ∈ [0, T], we define the random measures on R^m: These measures are P-almost surely finite, for any t ∈ [0, T], thanks to Assumption 4.1 and to (A.3) (see also (A.19) for the last measure).

Applying Lemma 4.2 and the integration by parts formula we get: In a similar way, we obtain ⟨ζ_t, ∂_i T_ε ϕ_k⟩ = −⟨ϕ_k, ∂_i T_ε ζ_t⟩, and Putting together all these facts, we can rewrite (4.4) as Applying Itô's formula we get that, for all t ∈ [0, T], P-a.s., Using Assumption 4.1 and (A.3), it is possible to show that the stochastic integral with respect to the Brownian motion B is a P-martingale. By the optional sampling theorem, this stochastic integral has zero expectation even when evaluated at any bounded stopping time. Therefore, picking an F-stopping time τ ≤ t, for arbitrary t ∈ [0, T], summing over k up to N ∈ N, and taking the expectation, by Fatou's lemma we have that where we used the fact that, since

We now want to estimate the quantities appearing inside the limit inferior, in order to exchange the limit and the integrals in (4.5). First of all, let us notice that, thanks to Assumption 4.1, the following estimates hold P-a.s., for all i, j = 1, …, m, all ℓ = 1, …, n, and all t ∈ [0, T]: They can be proved following a reasoning analogous to that of [2], Lemma 7.5 (see also [42], Chapter 6).
Recalling that 2|ab| ≤ a^2 + b^2, for all a, b ∈ R, using the estimates provided above, Lemma 4.2, and (4.6), we get that, for all N ∈ N, all i, j = 1, …, m, and all s ∈ [0, T], With analogous computations, we get, for all i = 1, …, m, all N ∈ N, and all s ∈ [0, T], The terms appearing on the r.h.s. of these estimates are dt ⊗ dP- and d|ν^i|_t ⊗ dP-integrable on [0, T] × Ω, for all i = 1, …, m, since, for any ε > 0, Therefore, by the dominated convergence theorem, we can pass to the limit in (4.5), as N → ∞. We finally get the claim by bounding the terms on the r.h.s. of (4.7) with the following results: for the second one, apply [42], Lemma 6.11; for the third and the last one, apply [42], Lemma 6.10; for the fourth one, use the fact that the constant K_3 above does not depend on ε.
Therefore, we can apply Lemma A.1 and get that, for all t ∈ [0, T], where we used the fact that ζ is continuous, since ν is, and that A_t ≤ A_T ≤ T + mK, for all t ∈ [0, T]. Notice that, denoting by Z_{0−} the density of ξ with respect to the Lebesgue measure on R^m, By point (ii) of Lemma 4.2 and since the constants appearing in (4.8) do not depend on ε, we get Taking, as in the proof of Proposition 4.4, an orthonormal basis {ϕ_k}_{k∈N} of H such that ϕ_k ∈ C_b^2(R^m), for any k ∈ N, the dominated convergence theorem entails that, for all k ∈ N, Applying Fatou's lemma we get that, for all t ∈ [0, T], and hence, from Lemma 4.1 we deduce that, P-a.s., ζ_t is absolutely continuous with respect to the Lebesgue measure on R^m, for all t ∈ [0, T]. Moreover, its density process Z = (Z_t)_{t∈[0,T]} takes values in H and, by standard results, is Y-adapted and continuous (because ν is).
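The application of Lemma A.1 above is a stopping-time version of the Gronwall argument; for comparison, the classical deterministic Gronwall lemma that it generalizes states:

```latex
f(t) \;\le\; a + b \int_0^t f(s)\,\mathrm{d}s \quad \forall\, t \in [0,T]
\qquad\Longrightarrow\qquad
f(t) \;\le\; a\, e^{b t}, \quad t \in [0,T],
```

for f non-negative, bounded, and measurable, and constants a, b ≥ 0.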
for all ϕ ∈ C_b(R^m) and all n ∈ N_0. This, in turn, entails that if ρ_{T_n^-} admits a density p_{T_n^-} with respect to the Lebesgue measure, then Therefore, since C_b(R^m) is a separating set (see, e.g., [22], Chapter 3, Sect. 4), the measures ρ_{T_n}(dx) and p_{T_n^-}(x − ν_{T_n}) dx coincide, implying that ρ_{T_n} admits a density with respect to the Lebesgue measure on R^m, given by p_{T_n^-}(· − ν_{T_n}). We can now use the recursive structure of (3.19) to get the claim.

Define the process ν^{(1)} and the random measure ξ^{(1)}. Since ν^{(1)} satisfies point (iv) of Assumption 2.1, (4.11) is the Zakai equation for the filtering problem of the partially observed system (2.8)-(2.13), with initial law ξ^{(1)} and process ν^{(1)}, which is continuous on [0, T]. Therefore, by Theorem 4.6, ρ^{(1)} is its unique solution and admits a density p^{(1)}. Define, next, the process ν^{(2)} and the random measure ξ^{(2)}, so that, P-a.s., for all t ∈ [0, T], (4.12) holds. Since ν^{(2)} satisfies point (iv) of Assumption 2.1, (4.12) is the Zakai equation for the filtering problem of the partially observed system (2.8)-(2.13), with initial law ξ^{(2)} and process ν^{(2)}, which is continuous on [0, T]. Therefore, by Theorem 4.6, ρ^{(2)} is its unique solution and admits a density p^{(2)} with respect to the Lebesgue measure on R^m, with E[‖p^{(2)}_t‖_H^2] < +∞; moreover, ρ_t coincides with ρ^{(2)}_{t−T_1} on the same set, and hence ρ_t admits density p^{(2)}_{t−T_1}.

Continuing in this manner, we construct a sequence of solutions (ρ^{(n)})_{n∈N} and corresponding density processes (p^{(n)})_{n∈N}. We deduce that the unnormalized filtering process is the unique Y-adapted, càdlàg, M^+(R^m)-valued solution to the Zakai equation (3.8), admitting a Y-adapted, càdlàg, H-valued density process p. The fact that E[‖p_t‖_H^2] < +∞, for all t ∈ [0, T], follows from the analogous property for each of the processes p^{(n)}, n ∈ N.
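The identification of ρ_{T_n}(dx) with p_{T_n^-}(x − ν_{T_n}) dx rests on the elementary change of variables ∫ φ(x) p(x − a) dx = ∫ φ(x + a) p(x) dx. A toy numerical sanity check of this identity (the Gaussian density standing in for p and the shift a are hypothetical):

```python
import math

def gauss(x):
    # Standard Gaussian density: a hypothetical stand-in for the density p
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def trapezoid(f, lo=-20.0, hi=20.0, n=40001):
    # Plain trapezoidal quadrature on [lo, hi]
    h = (hi - lo) / (n - 1)
    total = 0.5 * (f(lo) + f(hi))
    for i in range(1, n - 1):
        total += f(lo + i * h)
    return total * h

a = 0.8          # hypothetical shift (playing the role of the jump of nu)
phi = math.cos   # a bounded continuous test function
lhs = trapezoid(lambda x: phi(x) * gauss(x - a))  # integral of phi(x) p(x - a)
rhs = trapezoid(lambda x: phi(x + a) * gauss(x))  # integral of phi(x + a) p(x)
print(lhs, rhs)
```

The two quadratures agree to numerical precision, which is exactly why testing against the separating set C_b(R^m) identifies the shifted density.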
Funding Open access funding provided by Luiss University within the CRUI-CARE Agreement.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Appendix A

Lemma A.1 Let (Ω, F, F, P) be a given filtered complete probability space, fix T > 0, and let A and H be two càdlàg, F-adapted, real-valued processes. Suppose that A is non-decreasing, with A_{0−} = 0 and A_T ≤ K, P-a.s., for some constant K > 0, and that H satisfies one of the following conditions: Assume, moreover, that for any F-stopping time τ ≤ T we have (A.1).

Proof The following reasoning is inspired by the proof of [28], Lemma IX.6.3. Let us define Ã_t := A_t 1_{t<T} + K 1_{t≥T}, t ≥ 0. Ã is still a càdlàg, F-adapted, and non-decreasing process, with Ã_{0−} = 0. Moreover, for any stopping time τ ≤ T, the random measures 1_{s<τ} dA_s and 1_{s<τ} dÃ_s agree; therefore, (A.1) implies (A.2). Next, define C_t := inf{s ≥ 0 : Ã_s ≥ t}, t ≥ 0, which (see, e.g., [20], Chapter VI, Def. 56, or [28], Proposition I.1.28) is an F-stopping time for all t ≥ 0, satisfying C_t ≤ T, thanks to the definition of Ã. We now fix t ∈ [0, K]. Using (A.2), we get Since C is a non-decreasing process, we have that {C_u < C_t} ⊂ {u < t}, and hence 1_{C_u<C_t} ≤ 1_{u<t}. Therefore If H satisfies condition b., we can directly apply the Fubini-Tonelli theorem as below. If, instead, condition a.
holds, since C_u ≤ T and, for each fixed ω ∈ Ω, the image of the map u ↦ C_u(ω) is a subset of [0, T], we have that sup_{u≥0} |H_{C_u}| ≤ sup_{t∈[0,T]} |H_t|, P-a.s. Therefore, we can apply the Fubini-Tonelli theorem and get

Proof Let us first notice a fact that will be useful in this proof: it can be easily shown that condition (2.1) implies, for some constant C_γ, Therefore, Z is an (F, P)-martingale, and hence η, which is the Doléans-Dade exponential of Z, is a non-negative local (F, P)-martingale (see, e.g., [14], Lemma 15.3.2). Thus, to prove the claim it is enough to show that E[η_t] = 1 for all t ∈ [0, T]. We first prove that E[η_t ‖X_t‖^2] ≤ C, for all t ∈ [0, T], where C is an appropriately chosen constant. For the sake of brevity, let us write b_s := b(s, X_s), σ_s := σ(s, X_s), and h_s := h(s, X_s). Applying Itô's formula we get and using the integration by parts rule we have Therefore, for any fixed ε > 0, we obtain where ν^c denotes the continuous part of the process ν.
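Since Z here is a stochastic integral with respect to a Brownian motion, hence a continuous local martingale, its Doléans-Dade exponential has the standard closed form (a reminder of the general definition, not a quotation of the paper):

```latex
\eta_t \;=\; \mathcal{E}(Z)_t \;=\; \exp\Bigl(Z_t - \tfrac{1}{2}\langle Z\rangle_t\Bigr),
\qquad \mathrm{d}\eta_t = \eta_t\,\mathrm{d}Z_t, \quad \eta_0 = 1.
```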
With standard estimates (see, e.g., [2], Solution to Exercise 3.11) it is possible to show that the stochastic integrals with respect to the Brownian motions W and B are (F, P)-martingales. This implies, thanks to the optional sampling theorem, that these stochastic integrals have zero expectation even when evaluated at any bounded stopping time. Fixing an F-stopping time τ ≤ t, for arbitrary t ∈ [0, T], taking the expectation, and noticing that the third term in (A.5) is non-negative, we get We now proceed to find suitable estimates for the terms appearing in (A.6).
Notice that, thanks to conditions (2.4) and (2.5), we have that, for some constant Recalling that η is non-negative and that E[η_t] ≤ 1, for any t ∈ [0, T], we get Next, we see that Similarly to what we did in the proof of Lemma A.1, let us define ν̃^i_t := |ν^i|_t 1_{t<T} + K 1_{t≥T}, t ≥ 0, i = 1, …, m.
It is important to stress that (A.14) holds for any t ∈ [0, T], since t was arbitrarily chosen. We can now finally obtain that E[η_t] = 1, for all t ∈ [0, T]. By Itô's formula, for an arbitrarily fixed ε > 0 and all t ∈ [0, T], Thanks to conditions (2.10) and (2.1), standard computations show that the stochastic integral is a (P, F)-martingale. Therefore, taking the expectation we get Notice that η_s^2 (1 + εη_s)^{-3} γ^{-1}(s) ‖h(s, X_s)‖^2 → 0, as ε → 0, dP ⊗ dt-a.s. Moreover,

Proof of Lemma 4.3 Fix ε > 0. To start, let us notice that the continuity of the process ν implies that ζ is continuous as well and, therefore, ζ_t = ζ_{t−} and T_ε ζ_t = T_ε ζ_{t−}, dt ⊗ dP-almost everywhere.