Stochastic Volterra equations with time-changed Lévy noise and maximum principles

Motivated by a problem of optimal harvesting of natural resources, we study a control problem for Volterra-type dynamics driven by time-changed Lévy noises, which are in general not Markovian. To exploit the nature of the noise, we make use of different kinds of information flows within a maximum principle approach. For this we work with backward stochastic differential equations (BSDEs) with time-change and exploit the non-anticipating stochastic derivative introduced in [16]. We prove both a sufficient and a necessary stochastic maximum principle.


Introduction
Optimal harvesting is a fairly classical problem in control theory, and it is still a timely question to address when thinking of sustainability in the management of natural resources. In this work we deal with a problem of optimal harvesting from a population whose growth is modelled by Volterra time dynamics of the type

X(t) = X_0 + ∫_0^t ( r(t,s) − K u(s) ) X(s) ds + ∫_0^t σ(s) X(s) dB(s),   t ∈ [0, T].   (1.1)

The term r represents the growth rate, the constant K is the catchability coefficient, and the control u is the fishing effort. The Volterra structure is inherited from the analogous deterministic models that can be found, e.g., in [10, 23, 24]. As we can see, this form of time dependence is often used in the description of fish populations. When considering fish as a commodity, the modelling of the fish population represents the possible dynamics of supply, in the interplay between supply and demand. In our work, however, we consider Volterra stochastic integral equations, which represent a natural extension including the uncertainty of the environment influencing the population growth. For this we are motivated by [4, 11].
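Purely as an illustration, the dynamics (1.1) can be discretized with a left-point Euler scheme. The sketch below is a minimal simulation under toy inputs of our own choosing (the kernel r, volatility σ and effort u passed in are placeholders, not model calibrations), with a standard Brownian driver standing in for the time-changed noise B:

```python
import math
import random

def simulate_volterra(X0, r, sigma, K, u, T, n, rng):
    """Left-point Euler discretization of
    X(t) = X0 + int_0^t (r(t,s) - K u(s)) X(s) ds + int_0^t sigma(s) X(s) dB(s).
    The Volterra kernel r(t, s) depends on the *current* time t, so the drift
    must be re-summed from 0 at every step: this is the non-Markovian feature."""
    dt = T / n
    t = [i * dt for i in range(n + 1)]
    dB = [rng.gauss(0.0, math.sqrt(dt)) for _ in range(n)]
    X = [X0] * (n + 1)
    for i in range(1, n + 1):
        # drift: kernel evaluated at the current time t[i] against the whole past
        drift = sum((r(t[i], t[j]) - K * u(t[j])) * X[j] * dt for j in range(i))
        diffusion = sum(sigma(t[j]) * X[j] * dB[j] for j in range(i))
        X[i] = X0 + drift + diffusion
    return t, X
```

The O(n^2) cost per path is the price of the genuine Volterra structure: unlike an SDE, the whole trajectory re-enters the drift through r(t, s).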
Our model has an element of novelty with respect to the others presented. This is given by the nature of the noise B, which is associated to a time-changed Brownian motion. This is well motivated by the clustering effects that such noises can describe. For a description of how time-change helps to capture clustering, we refer to a first discussion in [36, Chapter IV, 3e] and a more recent study [34, Chapter 3] in the context of market microstructure. Within population dynamics, the evidence of clustering is largely discussed in the recent literature in biology and ecology; see, e.g., [26].
We remark that in the literature of mathematical finance, dynamics of the form (1.1), but with Lévy-type noises, have been used in models such as [3]. On the other hand, time-change has been suggested in the study of volatility modelling, e.g. [5, 12, 21, 37, 38], energy markets, e.g. [9], and default models, e.g. [28]. It is also used in kinetic theory, see e.g. [29].
Keeping our motivation in mind, we treat here stochastic control for general Volterra-type dynamics, allowing also for jumps:

X^u(t) = X_0 + ∫_0^t b(t, s, λ_s, u(s), X^u(s)) ds + ∫_0^t ∫_R κ(t, s, z, λ_s, u(s), X^u(s)) µ(ds dz),   (1.2)

where the driving noise µ is given by the random measure which is the mixture of a conditional Gaussian measure B on [0, T] × {0} and a conditional centered Poisson measure on [0, T] × R_0. Here R_0 := R \ {0}, and Borel σ-algebras are denoted by B(·). Both B and H are set in relationship with a time-changed Brownian motion and a time-changed Poisson measure, respectively, via Theorem 3.1 in [35] (see also [22]). Note that the coefficients in (1.2) may also depend on the time-change via the process λ.
The time-change processes involved are of the form

Λ_t(ω) = ∫_0^t λ_s(ω) ds,   (t, ω) ∈ [0, T] × Ω, (T > 0).

Thus the driving noises (which include jumps) are actually beyond the Brownian and the pure Lévy framework. We abandon noises with independent increments and effectively deal with quite general, yet still tractable, martingales.
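As a sketch of what such a noise looks like, a time-changed Brownian motion B_{Λ_t} with Λ_t = ∫_0^t λ_s ds can be sampled by first integrating the rate λ and then drawing Gaussian increments with variance ΔΛ, conditionally on the time-change. The rate function below is an illustrative deterministic placeholder for a positive rate path:

```python
import math
import random

def time_changed_bm(lmbda, T, n, rng):
    """Sample (Lambda_t, B_{Lambda_t}) on an n-step grid over [0, T]:
    Lambda_t = int_0^t lambda_s ds (left-point rule), and the increment
    B_{Lambda_{t+dt}} - B_{Lambda_t} is N(0, Lambda_{t+dt} - Lambda_t)
    given the time-change path."""
    dt = T / n
    Lam = [0.0]
    B = [0.0]
    for i in range(n):
        rate = lmbda(i * dt)      # evaluate the (here deterministic) rate path
        dLam = rate * dt          # increment of the time change
        Lam.append(Lam[-1] + dLam)
        B.append(B[-1] + rng.gauss(0.0, math.sqrt(dLam)))
    return Lam, B
```

Bursts of large increments of B occur exactly where λ is large, which is the clustering effect mentioned above.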
Our goal is to find the optimal control û for the problem (1.4) within the set A_F of admissible F-adapted controls, where F = {F_t, t ∈ [0, T]} represents the smallest right-continuous filtration generated by µ.
Optimization problems such as (1.2), (1.4) are studied, e.g., in [2, 3, 8]. In [2, 3] the authors also present a sufficient maximum principle, and the dynamics include jumps, making use of Malliavin calculus. However, since the restrictions on the domain of the Malliavin derivative are extremely serious in the context of optimal control, the authors lifted the study into the white noise framework and worked with the Hida-Malliavin calculus on the space of stochastic distributions. The Hida-Malliavin calculus is tailored for Brownian and for centered Poisson random noises; hence this approach cannot be taken in our work, since our driving noises are not of the required nature. On the other hand, in [8], the authors propose a backward SDE approach to solve (1.4). This is possible due to the introduction of memory in (1.2) by means of convolution with a completely monotone kernel, which allows for a Markovian representation of the solution of (1.2).
Note that a Malliavin/Skorokhod calculus extension to noises with conditionally independent increments is proposed in [18] and [39]. By this, however, we cannot solve the critical issue of the natural restriction of the domains of the involved operators, and a Hida-Malliavin type extension is not yet available in the literature. Our approach is then to make use of the non-anticipating (NA) derivative. The NA derivative, introduced in [15] for general martingales and then extended to martingale random fields in [16], is the dual of the Itô integral and has an explicit representation in terms of limits of simple integrands in the Itô framework. Also, the NA derivative provides explicit stochastic integral representations. We stress that, contrary to the Malliavin derivative, the domain of the NA derivative is the whole L^2(dP), thus creating no problems in the context of optimal control. To the best of our knowledge, this is the first time that the non-anticipating derivative is used in optimal control problems such as (1.4).
Our approach to the optimization problem (1.4) is based on the analysis of the noise and the associated information flows. Indeed, we observe that there are two filtrations of interest. The first one is the already mentioned F, and the second is the filtration G := {G_t, t ∈ [0, T]}, where G_t := F_t ∨ F^Λ is generated by µ and the entire history F^Λ of the time-change processes. Note that while F_0 is substantially trivial, G_0 = F^Λ. We can regard G as the initial enlargement of F or, conversely, see F as partial information with respect to G. With this observation in hand, we work out the solution to problem (1.4) as an optimization problem under partial information. In this we have taken inspiration from [31], where the concept of partial information is however not associated to the properties of the noise, and from [19], where the dynamics are however not of Volterra type. Also, for completeness, we show that our techniques provide necessary and sufficient conditions for the corresponding optimization problem under G.

The study of maximum principles is associated to a stochastic Hamiltonian map of the so-called dual variables, which in turn are obtained from the solution of a backward stochastic equation. In the sequel, we deal with backward stochastic differential equations (BSDEs) of the type (1.6) under the filtration G.
Notice that these backward equations are not of Volterra type. This is because our Hamiltonian functional also involves the NA-derivatives of the adjoint process p. A different approach could have been to follow the work in [3], where the authors deal with a backward stochastic Volterra integral equation (BSVIE). Even though this approach would allow us to work with simpler Hamiltonian functionals (in the sense that the NA-derivative of p(t) would not be involved), we would need to assume smoothness conditions with respect to t on q(t, s, z), and, to the best of our knowledge, it is not clear to what extent those properties are satisfied. Existence and uniqueness for (1.6) can be retrieved from [19]. The study of the BSDE under G is in itself critically based on the stochastic integral representation

ξ = ξ_0 + ∫_0^T ∫_R φ(t, z) µ(dt dz),

where ξ_0 is G_0-measurable and the integrand φ is G-predictable. These results are readily available, in terms of existence, in the classical Kunita-Watanabe theorem, while the explicit form of φ is given by means of the NA derivative in [15, Theorem 3.1] and [16, Theorem 3.1].

The paper is organized as follows. In the next section we give a presentation of the framework, providing the necessary details for the random measure µ and the information flows that we are going to use. In Section 3 we prove a sufficient maximum principle and in Section 4 the corresponding necessary maximum principle. Lastly, we show how the results obtained can be applied to characterise the solution of the optimal harvesting problem associated to the dynamics (1.1).

The noise and the non-anticipating derivative
Let us consider a complete probability space (Ω, F, P) and a time horizon T < ∞. We shall consider the noise on the time-space X := [0, T] × R, where R_0 = R \ {0}. The Borel σ-algebra on X is denoted by B_X. Let L be the space of two-dimensional stochastic processes λ = (λ^B, λ^H) such that, for each component k = B, H, we have:

1. λ^k_t ≥ 0 P-a.s. for all t ∈ [0, T];
2. lim_{h→0} P(|λ^k_{t+h} − λ^k_t| ≥ ε) = 0 for all ε > 0 and almost all t ∈ [0, T];
3.
The processes λ ∈ L represent the stochastic time-change rates. Let ν be a σ-finite measure on the Borel sets of R_0 satisfying ∫_{R_0} z^2 ν(dz) < ∞. We define the random measure Λ on B_X by

Λ(∆) := ∫_0^T 1_∆(s, 0) λ^B_s ds + ∫_0^T ∫_{R_0} 1_∆(s, z) λ^H_s ν(dz) ds,   ∆ ∈ B_X.

Furthermore, we denote the restrictions of Λ to [0, T] × {0} and [0, T] × R_0 by Λ^B and Λ^H, respectively.
For later use we also introduce the filtration F^Λ = {F^Λ_t, t ∈ [0, T]}, where F^Λ_t is generated by the values of Λ on the Borel sets of [0, t] × R. Set F^Λ := F^Λ_T. We recall the following definitions.
Definition 2.1. The conditional Gaussian measure B (given F^Λ) is a signed random measure on the Borel sets of [0, T] × {0} satisfying conditions (A1)–(A2). Here Φ denotes the cumulative distribution function of a standard normal random variable.
The conditional Poisson measure H (given F^Λ) is a random measure on the Borel sets of [0, T] × R_0 satisfying (A3)–(A4); in particular, for disjoint sets ∆_1 and ∆_2, H(∆_1) and H(∆_2) are conditionally independent given F^Λ.
Moreover, we assume:

A5. B and H are conditionally independent given F^Λ.
Also, the conditional centered Poisson random measure is defined as H̃ := H − Λ^H. Observe that if λ^B and λ^H were deterministic, then B would be a Gaussian process and H a Poisson random measure. Furthermore, B would be a Wiener process if λ^B ≡ 1, and H a homogeneous Poisson random measure for λ^H ≡ 1.
Definition 2.2. We define the signed random measure µ on the Borel sets ∆ ⊆ X by

µ(∆) := B(∆ ∩ [0, T] × {0}) + H̃(∆ ∩ [0, T] × R_0).

The random measure µ has conditionally independent values, see [22, 35]. Observe that (A1) and (A3) yield

E[µ(∆) | F^Λ] = 0,   E[µ(∆)^2 | F^Λ] = Λ(∆).   (2.2)

The random measures B and H are related to a time-changed Brownian motion and a time-changed pure jump Lévy process. To illustrate, consider the processes on [0, T]

B_t := B([0, t] × {0}),   η_t := ∫_0^t ∫_{R_0} z H̃(ds dz),

and compute the characteristic functions of B and η. From (A1) we have that

E[e^{icB_t}] = ∫_0^∞ e^{−c^2 x / 2} P_{Λ^B_t}(dx),

where P_{Λ^B_t} is the probability distribution of the time-change Λ^B_t. Correspondingly, from (A3) we have that

E[e^{icη_t}] = ∫_0^∞ exp( x ∫_{R_0} (e^{icz} − 1 − icz) ν(dz) ) P_{Λ^H_t}(dx),

where P_{Λ^H_t} is the probability distribution of the time-change Λ^H_t. Indeed, we recall the following characterization [35, Theorem 3.1]: let W be a Brownian motion independent of Λ^B, and let η^0 be a centered pure jump Lévy process with Lévy measure ν independent of Λ^H; then B satisfies (A1)–(A2) if and only if, for any t ≥ 0, B_t is distributed as W_{Λ^B_t} (and correspondingly for η and η^0_{Λ^H_t}).

In the sequel we shall consider two types of information flows. The first one is represented by the filtration F = {F_t, t ∈ [0, T]}, the smallest right-continuous filtration generated by µ; see [18]. The second information flow of interest is G = {G_t, t ∈ [0, T]}, with G_t := F_t ∨ F^Λ. The filtration G is right-continuous, see [19]. Moreover, we note that G_T = F_T, G_0 = F^Λ, and F_0 is substantially trivial. Namely, G includes information on the future values of Λ^B and Λ^H. In the sequel we shall technically exploit the interplay between the two filtrations.
For ∆ ⊆ (t, T] × R, the conditional independence in (A2) and (A4), together with (2.2), yields E[µ(∆) | F_t ∨ F^Λ] = 0. Moreover, (A5) gives the conditional orthogonality of the values of B and H. It is immediate to see that µ is a martingale random field also with respect to F.
With the above structures, we access the framework of Itô stochastic integration. For this we introduce I_G ⊆ L^2(dΛ × dP), the subspace of random fields admitting a G-predictable modification, and I_F ⊂ I_G, the subspace of F-predictable random fields. Observe that, for all φ ∈ I_G, we have the Itô isometry

E[ ( ∫_0^T ∫_R φ(t, z) µ(dt dz) )^2 ] = E[ ∫_0^T ∫_R φ(t, z)^2 Λ(dt dz) ],

thanks to (A5) and the martingale property of µ.
In this work we shall make use of the non-anticipating derivative introduced in [16] for martingale random fields.
Definition 2.5. The non-anticipating derivative (NA-derivative) D is the linear operator defined, for all elements ζ ∈ L^2(dP), as the limit in L^2(dΛ × dP) of the simple G-predictable random fields

φ_n(t, z) := Σ_{k=1}^{K_n} ( E[ζ µ(∆_{nk}) | G_{s_{nk}}] / E[µ(∆_{nk})^2 | G_{s_{nk}}] ) 1_{∆_{nk}}(t, z),   n ∈ N.

Here the Borel sets ∆_{nk} take the form ∆_{nk} := (s_{nk}, u_{nk}] × B_{nk}, k = 1, ..., K_n, with 0 ≤ s_{nk} ≤ u_{nk} ≤ T and B_{nk} ∈ B, where B is any countable semi-ring generating the Borel σ-algebra B(R). Then ∪_{k=1}^{K_n} ∆_{nk} = X for every n ∈ N. With a slight abuse of terminology, we call the sets ∆_{nk}, k = 1, ..., K_n, a partition of X with refinement n. Clearly all the sets ∆_{nk}, k = 1, ..., K_n, n ∈ N, constitute a semi-ring generating B(X); see, e.g., [16] and the references therein.
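As a sanity check in the simplest sub-case (standard Brownian driver, i.e. λ^B ≡ 1 and no jump part — an assumption made only for this illustration), the NA-derivative of ζ = B(T)^2 can be computed directly from the conditional expectations over a partition:

```latex
% On (s,u] \subseteq [0,T], independence of Brownian increments gives
% E[B(T)^2 (B(u)-B(s)) \mid \mathcal F_s] = 2 B(s)(u-s), hence
\frac{E\big[\zeta\,(B(u)-B(s))\mid \mathcal F_s\big]}{u-s} = 2B(s)
\;\xrightarrow[\,u \downarrow s\,]{}\; D_{t,0}\,\zeta = 2B(t),
% in agreement with the explicit integral representation
\zeta = B(T)^2 = T + \int_0^T 2B(t)\,dB(t).
```

The limit along refining partitions reproduces Definition 2.5 in this special case; the general time-changed setting replaces F_s by G_s and the Lebesgue measure by Λ.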
The NA-derivative allows for an explicit integral representation.Namely the integrand is characterized in terms of the inputs: the very random variable to represent, the integrator, and the filtration.See Theorem 3.1 in [16].
The existence and uniqueness of a stochastic integral representation is well known from the Kunita-Watanabe theorem. Theorem 2.6 provides an explicit expression for the integrand. The spirit of this result is in line with representations à la Clark-Haussmann-Ocone (CHO), see, e.g., [20]. However, in that case the noise is either a Brownian motion or a centered Poisson random measure, and the integrand is characterized in terms of the Malliavin derivative. We remark that an extension of Malliavin calculus and CHO representations to the conditional Brownian and conditional Poisson cases is provided in [39] and [19]. When applying Malliavin calculus to optimal control, the domain of the Malliavin derivative constitutes a serious restriction, as the variables depend on a control yet to be found. In [1] this was overcome, for the Brownian and centered Poisson cases, by using the Hida-Malliavin extension, which extends Malliavin calculus to the white noise framework (stochastic distributions), see [20]. At present there is no such extension for time-changed noises, hence that method cannot be used. In this paper we propose to use the NA-derivative, which has no restrictions on its domain and is well defined on the whole of L^2(dP), with square integrable martingales as integrators. Furthermore, from Theorem 2.6 we can see that D is actually the dual of the Itô integral:

Proposition 2.7. For all φ ∈ I_G and all ξ ∈ L^2(dP), we have

E[ ξ ∫_0^T ∫_R φ dµ ] = E[ ∫_0^T ∫_R D_{t,z} ξ · φ(t, z) Λ(dt dz) ].

We also have the martingale representation theorem:

Theorem 2.8. For any square integrable G-martingale M(t), t ∈ [0, T], the following representation holds:

M(t) = M(0) + ∫_0^t ∫_R D_{s,z} M(T) µ(ds dz),   t ∈ [0, T].

For future use we also introduce the space S of G-adapted stochastic processes p(t, ω), t ∈ [0, T], ω ∈ Ω, such that E[ sup_{t ∈ [0,T]} |p(t)|^2 ] < ∞.
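Staying in the Brownian sub-case (again an assumption made for illustration only), the duality of Proposition 2.7 can be verified by hand with ξ = B(T)^2 and φ(t) = B(t), using D_{t,0} ξ = 2B(t), ∫_0^T B dB = (B(T)^2 − T)/2 and E[B(T)^4] = 3T^2:

```latex
E\Big[\xi \int_0^T \varphi\, dB\Big]
  = \tfrac12\, E\big[B(T)^2\big(B(T)^2 - T\big)\big]
  = \tfrac12\,\big(3T^2 - T^2\big) = T^2,
\qquad
E\Big[\int_0^T D_{t,0}\xi\;\varphi(t)\,dt\Big]
  = \int_0^T 2\,E\big[B(t)^2\big]\,dt = \int_0^T 2t\,dt = T^2 .
```

Both sides agree, as the duality predicts.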

A sufficient maximum principle with time-change
We are now ready to study the optimization problem (1.4) with performance functional

J(u) = E[ ∫_0^T F(t, λ_t, u(t), X^u(t)) dt + G(X^u(T)) ],   (3.1)

where F is the running gain and G the terminal one. The controlled dynamics of X are given by the equation

X^u(t) = X_0 + ∫_0^t b(t, s, λ_s, u(s), X^u(s)) ds + ∫_0^t ∫_R κ(t, s, z, λ_s, u(s), X^u(s)) µ(ds dz),   (3.2)

where X_0 ∈ R and the coefficients are given by the mappings b and κ. We also require them to be C^2 with respect to t and to x, with partial derivatives L^2-integrable with respect to dt × dP and dΛ × dP, respectively. Notice also that we will often drop the superscript u when the dependence of X on u is clear.
Later on we shall also see the coefficients b and κ in a functional setup. We assume that b and κ are Fréchet differentiable (in the standard topology of càdlàg paths) with C^2 regularity in t, x and u (with the corresponding derivatives).
In the sequel we assume existence and uniqueness of a solution for (3.2). Sufficient conditions for this are provided in the next result, which is in line with the study in [3], although there the driving noises are the Brownian motion and the Poisson random measure.

Theorem 3.1. Assume that:

3. b(t, s, λ, u, ·) and κ(t, s, z, λ, u, ·) have linear growth with respect to x.

Then there exists a unique F-adapted solution to (3.2) in L^2(dt × dP).
Proof. The proof follows a classical Picard iteration scheme; here we provide the main ideas. Fix u ∈ A_F and define the iterates X^n inductively, starting from X^0 ≡ X_0. Then, for all t ∈ [0, T] and all n ≥ 1, we estimate E[|X^{n+1}(t) − X^n(t)|^2]. By (2.1) and the Lipschitz conditions on b and κ, we get a contraction estimate with constant K := 4TC^2. Also, by the linear growth condition 3. on b and κ, we get that the iterates are uniformly bounded in L^2. Combining now (3.3) and (3.4), we conclude that {X^n(t)}_{n=1}^∞ is a Cauchy sequence in L^2(dP) and that {X^n}_{n=1}^∞ is bounded in L^2(dP × dt). Taking the limit as n → ∞ gives the solution to (3.2). Uniqueness is obtained by standard arguments and estimates similar to the ones above.
Before moving forward, we need to state a fundamental result that will allow us to rewrite X in (3.2) in differential form. This is due to [33] and is known as the transformation rule. Hereafter we state the result within our setting.
Remark 3.3 (A link with functional SDEs). Lemma 3.2 suggests a link between Volterra integral equations of the kind (3.2) and functional SDEs (FSDEs). It is in fact clear that, with suitable definitions, (3.2) can be rewritten as the functional SDE (3.7). We notice that (3.7) is a functional SDE, so we could have tried to state an existence result for functional SDEs instead of using Theorem 3.1. Some existence results for SDEs such as (3.7) are available (see e.g. [6, 13, 14, 27, 32]), but none of them deals with noises such as µ. While some of those results (e.g. [13, 27, 32]) present conditions that would be too restrictive for the current setting, we also point out that the results presented in [6, 14] could possibly be extended to the current framework. Nonetheless, this would require imposing some Lipschitz and linear growth conditions on b and κ (as in Theorem 3.1) and, additionally, a Lipschitz condition on ∂_t b, which is not required in the hypotheses of Theorem 3.1.
Having discussed the existence of a solution for (3.2), we are finally ready to proceed to our optimization results. We start by introducing the notion of admissible controls.

Definition 3.4. The admissible controls for (3.2) in the optimization problems (1.4) and (1.5) are predictable stochastic processes u : [0, T] × Ω → U such that X in (3.2) has a unique strong solution. We denote by A_F and A_G the sets of F- and G-predictable controls, respectively. We say that (û, X̂) is an optimal pair if

J(û) = sup_{u ∈ A_•} J(u),   (3.8)

where X̂ := X^û is as in (3.2).

Define R_G to be the space of G-predictable processes with values in L^2(dP). We remark that, if y ∈ R_G, then the NA-derivative (2.5) is also in R_G, i.e., for all t, z, D_{t,z} y(·) ∈ R_G. In the sequel, when no confusion arises, we will denote by D_{t,0} y(·) the NA-derivative with respect to the conditional Brownian motion, and by D_{t,z} y(·), z ∈ R_0, the NA-derivative with respect to the conditional Poisson random measure.
In view of the Volterra structure of the dynamics (3.2), the system is not Markovian. We tackle the problem (3.8) by the maximum principle approach, which is better suited in this case; see e.g. [40]. We introduce the Hamiltonian function as the sum

H(t, λ, u, x, p, q) := H_0(t, λ, u, x, p, q) + H_1(t, λ, u, x, p, q)   (3.9)

of the two components

H_0(t, λ, u, x, p, q) := F(t, λ_t, u_t, x_t) + b(t, t, λ_t, u_t, x_t) p(t) + κ(t, t, 0, λ_t, u_t, x_t) q_t(0) λ^B_t + ∫_{R_0} κ(t, t, z, λ_t, u_t, x_t) q_t(z) λ^H_t ν(dz)

and H_1, which involves the NA-derivatives of p. Here Z is the space of functions q : R → R for which the integrals above are well defined.

Remark 3.5. Following up on Remark 3.3, instead of considering (3.2) as a Volterra equation, we could have taken the FSDE (3.7) and, following e.g. [14], written the Hamiltonian functional for the functional SDE. We notice that, regardless of the chosen approach, we would end up with the same Hamiltonian functional (3.9).
Associated to H in (3.9), we introduce a BSDE of the type (1.6), which we study under G:

dp(t) = −∂_x H(t, λ, u, X, p, q) dt + ∫_R q_t(z) µ(dt dz),   p(T) = ∂_x G(X(T)),   (3.10)

where the derivative ∂_x H is meant in the Fréchet sense. Sufficient conditions guaranteeing the existence of a solution of (3.10) in R_G × I_G can be found in [19].
Remark 3.6. Notice that (3.10) is actually a BSDE and not a Volterra-type backward SDE. In fact, the term ∂_x H_1(t, λ, u, X, p, q) in the driver ∂_x H(t, λ, u, X, p, q) corresponds, after integration, to a function of the time s.
The optimal control problem (1.4) associated to the performance functional (3.1) is treated in the framework of optimization under partial information. This is inspired by [19], where this approach is taken for standard time-changed dynamics. In the Volterra case treated in the present work, the functionals stemming from (3.9) are very different from the ones in [19]. Indeed, we introduce the mapping H^F, obtained by conditioning on F_t.

Notation 1. Given u, û ∈ A_•, X, X̂ represent the associated controlled dynamics (3.2), and (p, q), (p̂, q̂) are the corresponding solutions of (3.10). From now on, if no confusion arises, we will use the compact hat-notation for quantities evaluated along (û, X̂, p̂, q̂); similarly for b, b̂, κ, κ̂, F, F̂, G, Ĝ, and for H_0, H_1.

Theorem 3.7 (Sufficient maximum principle with respect to F). Let λ ∈ L, let û ∈ A_F, and assume that the corresponding solutions X̂, (p̂, q̂) of (3.2) and (3.10) exist. Assume that:

• For any t, the map (3.13) is concave.
• û satisfies the maximum condition (3.14).

Then û is an optimal control for problem (1.4).
Proof. This proof is inspired by both [2, Theorem 4.1] and [19, Theorem 6.2]. The main difference with [2] is the use of the random measure µ instead of a Brownian motion and a compensated Poisson random measure, which requires abandoning the framework of Malliavin calculus. The main difference with [19] is the Volterra structure of the dynamics of the forward equation (3.2), which leads to more involved stochastic calculus. Recall that û ∈ A_F is a candidate optimal control and X̂ = X^û is the corresponding solution of (3.2). Choose an arbitrary u ∈ A_F with corresponding controlled dynamics X and consider J(u) − J(û) = I_1 + I_2, where I_1 collects the running terms and I_2 the terminal ones. Considering I_1, from the definition of H^F_0 we obtain an expression for the difference of the running gains. By the concavity of G, we can bound I_2. We apply the transformation rule (Lemma 3.2) to rewrite the Volterra forward dynamics of X in differential form, and likewise write the BSDE (3.10) for p̂, associated to the optimal pair (û, X̂), in differential notation. Using the Itô formula for the product, we obtain (3.17). Now notice that (3.18) holds, where we have used Fubini's theorem and the duality formula (Proposition 2.7). Substituting (3.18) into (3.17) and taking the conditional expectation given F_t, we get that J(u) − J(û) ≤ 0, dt × dP-a.e., by the maximality of û in (3.14) and the concavity condition (3.13). Hence J(u) ≤ J(û), and û is an optimal control for (3.1). This conclusion is reached by applying a separating hyperplane argument to the concave map (3.13).
Notice that a result analogous to Theorem 3.7 can also be obtained when working under the initially enlarged filtration G. Though the next result might not be of direct applicability, in view of the anticipating information included in G, the study has its own mathematical interest.
Remark 3.8. The transformation rule under (3.5) allows for the use of an Itô-type formula in the context of Volterra dynamics. If the equation did not present a Volterra structure in the stochastic integral part (i.e., in the coefficient κ), then the requirement (3.5) would clearly be lifted.

Proposition 3.9 (Sufficient maximum principle with respect to G). Let λ ∈ L, let û ∈ A_G, and assume that the corresponding solutions X̂(t), (p̂, q̂) of (3.2) and (3.10) exist. Assume that:

• For any t, p, q, the corresponding map is concave in x.

Then û is an optimal control.
Proof. When working under the filtration G, the arguments in the proof of Theorem 3.7 apply directly, without conditioning.

Necessary maximum principles with time-change
Hereafter we study necessary conditions to identify the possible candidates for optimal controls. This can be a useful starting point before applying a verification theorem ensuring optimality. We remark that our results relax the concavity conditions present in Theorem 3.7 and Proposition 3.9. However, we introduce some other assumptions on the set of admissible controls and on the first variation process of the forward dynamics (3.2).
In the literature we find a first version of a necessary maximum principle for Volterra dynamics in [1]; there the driving noises were the Gaussian and the centered Poisson random measures. Our work goes beyond these noises. For any t ∈ [0, T], we consider a random perturbation of the type

β(s) := α_t 1_{[t, t+h)}(s),   (4.1)

where α_t is a bounded F_t-measurable random variable and h ∈ [0, T − t]. We make the following assumptions:

1. The set of admissible controls A_F is such that, for all u ∈ A_F, u + εβ ∈ A_F for all perturbations β as in (4.1) and all ε > 0 sufficiently small.
2. A domination condition on the difference quotients (X^{u+εβ} − X^u)/ε holds for some K > 0 and for each fixed perturbation β.

Assumption 2. above implies that the first variation process χ(t) := lim_{ε→0} (X^{u+εβ}(t) − X^u(t))/ε exists and is well defined, whereas assumption 4. ensures that we are able to apply the transformation rule to χ. Remark that sufficient conditions for the existence of the first variation process are that b and κ are in C^1(U), uniformly for all s, t ∈ [0, T], λ ∈ [0, ∞)^2, x ∈ R, and that ∂_x b(t, s)χ(s) + ∂_u b(t, s)β(s) and ∂_x κ(t, s, z)χ(s) + ∂_u κ(t, s, z)β(s) satisfy the linear growth and Lipschitz conditions of Theorem 3.1.
As above we consider the performance functional (3.1) with the related conditions on F and G as in Section 3. We also continue using the compact notation there introduced, see Notation 1.
Proof. With (4.2), we consider, for u ∈ A_F and the perturbation (4.1), the derivative of J(u + εβ) at ε = 0. By considering a suitable increasing family of stopping times converging to T, as in [30, Theorem 2.2], we may assume that all the local martingales appearing here are true martingales. From (3.12), the transformation rule (Lemma 3.2) and the Itô formula for the product, we find (4.5). Now, recalling equality (3.18) and taking the conditional expectation given F_t, we obtain (4.6), so that, from (3.12), we can rewrite the derivative in terms of ∂_u H^F. Summarizing, equation (4.5) together with (4.6) and the perturbations in (4.1) give (4.7), and, for û, (4.3) gives the corresponding equality. Applying the Fubini theorem to the right-hand side of (4.7) and differentiating at h = 0, we obtain

E[ ∂_u H^{F,û}(t) α_t ] = 0

for all bounded F_t-measurable α_t. Hence (4.8) holds. Vice versa, if (4.8) holds, we can reverse the argument to obtain (4.3).
As in Section 3, for the sake of completeness, we propose a necessary maximum principle under the information flow G; this refers to the optimization problem (1.5). In this case we assume that, for all u ∈ A_G, u + εβ ∈ A_G for all perturbations β as in (4.1) and all ε > 0 sufficiently small.

Proposition 4.2 (Necessary maximum principle with respect to G). Let λ ∈ L. Suppose that û ∈ A_G and that the corresponding solutions X̂, (p̂, q̂) of (3.2) and (3.10) exist. If (4.9) holds, then

∂_u Ĥ(t) = 0.   (4.10)

Conversely, if (4.10) holds, then (4.9) is true.
Proof. The argument in the proof of Theorem 4.1 leading to the analogue of (4.7) still holds, with no need to use conditional expectations. Since û and û + εβ are G-predictable, we obtain the corresponding equality with H, defined as in (3.9), in place of H^F. We conclude as in Theorem 4.1.

A maximum principle approach in optimal harvesting
We now go back to the optimal harvesting problem within fishery, where the population dynamics are given by (1.1). We recall that our starting points are [10, 23, 24], where the authors consider deterministic Volterra models for population growth; following e.g. [4, 11], we introduce random fluctuations affecting the population growth. Hence, the dynamics considered are of type (1.1):

X(t) = X_0 + ∫_0^t ( r(t, s) − K u(s) ) X(s) ds + ∫_0^t σ(s) X(s) dB(s),   t ∈ [0, T],   (5.1)

where B is the conditional Gaussian measure. We assume that (5.1) admits a solution, that r(t, s) is C^2 with respect to both t and s, that σ is C^1 with respect to t, and that σ(t) > −1 for all t ∈ [0, T]. Lastly, we assume that r(t, s), ∂_t r(t, s) and σ(t) are in L^2(dt). For sufficient conditions guaranteeing the existence of a solution X we refer to Theorem 3.1. In the context of optimal harvesting of fish, r represents the growth rate, K the catchability coefficient, and the control u the fishing effort.
Our goal is to characterise the optimal solution of the maximization of the performance functional

J(u) = E[ ∫_0^T e^{−δ(T−t)} X(t) u(t) dt ],   (5.2)

where u ∈ A_F and δ > 0. In the context of optimal harvesting, this can be regarded as the aggregated net discounted revenue, see [7]. Following the approach given in this work, we consider the Hamiltonian functional (3.9), which can here be rewritten as

H^u(t) = e^{−δ(T−t)} u(t) X(t) + ( r(t, t) − K u(t) ) X(t) p(t) + σ(t) X(t) q(t) λ^B_t + ( ∫_0^t ∂_t r(t, s) X(s) ds ) p(t),

where the backward dynamics of p are given by

dp(t) = −[ e^{−δ(T−t)} u(t) + ( r(t, t) + ∫_0^t ∂_t r(t, s) ds ) p(t) + σ(t) q(t) λ^B_t ] dt + q(t) dB(t),   p(T) = 0.
(5.3)

Also, we consider the mapping H^F in (3.12). From Theorem 4.1 we see that a necessary condition for an admissible control û to be optimal is that, for all t ∈ [0, T], ∂_u H^{F,û}(t) = 0. Furthermore, from Theorem 3.7, the map (3.13) being trivially concave, the condition ∂_u H^{F,û}(t) = 0 is also sufficient for maximality. In particular, this means that an admissible control û is optimal if and only if

e^{−δ(T−t)} X̂(t) = K X̂(t) E[p̂(t) | F_t].   (5.4)

Namely, for all t ∈ [0, τ],

E[p̂(t) | F_t] = K^{−1} e^{−δ(T−t)}.   (5.5)

To find a solution to (5.3) with respect to the information flow G, we use a Girsanov change of measure as presented in [17]. Define the measure Q by dQ = M(T) dP on G_T, where

dM(t) = M(t) σ(t) dB(t),   M(0) = 1.   (5.6)

An explicit solution of (5.6) is obtained by the Itô formula (see [17]). We thus have that, under the measure Q, the process B^σ(t) := B(t) − ∫_0^t σ(s) λ^B_s ds is a G-martingale. Equation (5.3) can now be rewritten under Q as

dp(t) = −[ e^{−δ(T−t)} û(t) + r̄(t) p(t) ] dt + q(t) dB^σ(t),   p(T) = 0,   (5.7)

where we define r̄(t) := r(t, t) + ∫_0^t ∂_t r(t, s) ds. Thanks to [19], we know that (5.7) admits a unique solution (p, q) and that the process p is given by

p(t) = E_Q[ ∫_t^T exp( ∫_t^s r̄(v) dv ) e^{−δ(T−s)} û(s) ds | G_t ].

We thus obtain that

E[p(t) | F_t] = E[ (1/M(T)) ∫_t^T exp( ∫_t^s r̄(v) dv ) e^{−δ(T−s)} û(s) ds | F_t ].
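For the reader's convenience, the first-order condition behind (5.4) can be spelled out: differentiating the Hamiltonian H^u above with respect to u (only the first two terms contain u(t)), then conditioning on F_t as in (3.12) and setting the derivative to zero, gives:

```latex
\partial_u H^{\hat u}(t)
  = e^{-\delta(T-t)}\,\hat X(t) \;-\; K\,\hat X(t)\,\hat p(t),
% conditioning on \mathcal F_t and imposing the necessary condition of Theorem 4.1:
e^{-\delta(T-t)}\,\hat X(t) \;-\; K\,\hat X(t)\,E\big[\hat p(t)\mid\mathcal F_t\big] = 0,
% which, whenever \hat X(t) > 0, is exactly (5.4)-(5.5):
E\big[\hat p(t)\mid\mathcal F_t\big] = K^{-1}\, e^{-\delta(T-t)} .
```

Since H^u is affine in u, this necessary condition is also sufficient, as stated after (5.4).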

Declarations
Funding.The research leading to these results received funding from the Research Council of Norway within the project STORM: Stochastics for Time-Space Risk Models, grant number: 274410.
Conflicts of interest.The authors have no competing interests to declare that are relevant to the content of this article.
where we denote by Ξ_S the space of measurable functions on [0, T] with values in S. Then we can interpret the coefficients in (3.2) via evaluation at the point s ∈ [0, T]. Hence, µ is a martingale random field with respect to G; see, e.g., [16, Definition 2.1].

Definition 2.4. A square integrable martingale random field µ with conditionally orthogonal values is a stochastic set function µ

3. ∂_x b(t, s) and ∂_u b(t, s) are well defined and C^1 with respect to t, with partial derivatives L^2-integrable with respect to dt × dP; ∂_x κ(t, s, ·) and ∂_u κ(t, s, ·) are well defined and C^1 with respect to t, with partial derivatives L^2-integrable with respect to dΛ × dP.

4. ∂_x κ(t, s, z) and ∂_u κ(t, s, z) are such that, for all z ∈ R, λ ∈ [0, ∞)^2, u ∈ U, x ∈ R, the partial derivative of ∂_x κ + ∂_u κ with respect to t is locally bounded (uniformly in t) and satisfies