A Finite Horizon Optimal Switching Problem with Memory and Application to Controlled SDDEs

We consider an optimal switching problem where the terminal reward depends on the entire control trajectory. We show existence of an optimal control by applying a probabilistic technique based on the concept of Snell envelopes. We then apply this result to solve an impulse control problem for stochastic delay differential equations driven by a Brownian motion and an independent compound Poisson process. Furthermore, we show that the studied problem arises naturally when maximizing the revenue from operation of a group of hydro-power plants with hydrological coupling.


Introduction
The standard optimal switching problem (sometimes referred to as starting and stopping problem) is a stochastic optimal control problem of impulse type that arises when an operator controls a dynamical system by switching between the different members in a set of operation modes I = {1, . . . , m}. In the two-modes setting (m = 2) the modes may represent, for example, "operating" and "closed" when maximizing the revenue from mineral extraction in a mine as in [6]. In the multi-modes setting the operating modes may represent different levels of power production in a power plant when the owner seeks to maximize her total revenue from producing electricity [7] or the states "operating" and "closed" of single units in a multi-unit production facility as in [5].
In optimal switching the control takes the form u = (τ_1, . . . , τ_N; β_1, . . . , β_N), where τ_1 ≤ τ_2 ≤ · · · ≤ τ_N is a sequence of times when the operator intervenes on the system and β_j ∈ I_{−β_{j−1}} := I \ {β_{j−1}} is the mode in which the system is operated during [τ_j, τ_{j+1}). The standard multi-modes optimal switching problem in finite horizon (T < ∞) can be formulated as finding the control that maximizes J(u) = E[∫_0^T φ_{ξ_s}(s) ds + ψ_{ξ_T} − Σ_{j=1}^N c_{β_{j−1},β_j}(τ_j)], where ξ_s := b_0 1_{[0,τ_1)}(s) + Σ_{j=1}^N β_j 1_{[τ_j,τ_{j+1})}(s) (with τ_{N+1} := T) is the operation mode (when starting in a predefined mode b_0 ∈ I), φ_b and ψ_b are the running and terminal reward in mode b ∈ I, respectively, and c_{b,b′}(t) is the cost incurred by switching from mode b to mode b′ at time t ∈ [0, T]. The standard optimal switching problem has been thoroughly investigated in the last decades after being popularised in [6]. In [16] a solution to the two-modes problem was found by rewriting the problem as an existence and uniqueness problem for a doubly reflected backward stochastic differential equation. In [11] existence of an optimal control for the multi-modes optimal switching problem was shown by a probabilistic method based on the concept of Snell envelopes. Furthermore, existence and uniqueness of viscosity solutions to the related Bellman equation was shown for the case when the switching costs are constant and the underlying uncertainty is modeled by a stochastic differential equation (SDE) driven by a Brownian motion. In [12] the existence and uniqueness results for viscosity solutions were extended to the case when the switching costs depend on the state variable. Since then, results have been extended to Knightian uncertainty [18,17,8] and to non-Brownian filtrations and signed switching costs [24]. For the case when the underlying uncertainty can be modeled by a diffusion process, the generalization to controls entering the drift and volatility terms was treated in [14]. This was further developed to include state constraints in [20].
Another important generalization is to the case when the operator only has partial information about the present state of the diffusion process as treated in [23].
In the present work we consider the setting with running and terminal rewards that depend on the entire history of the control. We also show that a special case of the type of switching problems that we consider is that of a controlled stochastic delay differential equation (SDDE), driven by a finite intensity Lévy process.
To motivate our problem formulation we consider the situation when an operator of two hydro-power plants, located in the same river, wants to maximize her revenue from producing electricity during a fixed operation period. We assume that each plant has its own water reservoir. The power production in a hydropower plant depends on the drop height from the water level of the reservoir to the outlet and thus on the amount of water in the reservoir. As water that passes through the upstream plant will eventually reach the reservoir of the downstream plant we need to consider part of the control history in the upstream plant when optimizing operation of the downstream plant.
In this setting our cost functional can be written J(u) := E[∫_0^T φ(s, τ_1, . . . , τ_{N_s}; β_1, . . . , β_{N_s}) ds + ψ(τ_1, . . . , τ_N; β_1, . . . , β_N) − Σ_{j=1}^N c_{β_{j−1},β_j}(τ_j)], where N_s := max{j : τ_j ≤ s}. The contribution of the present work is twofold. First, we show that the problem of maximizing J can be solved, under certain assumptions on φ, ψ and the switching costs c_{·,·}, by finding an optimal control in terms of a family of interconnected value processes that we refer to as a verification family. We then show that the revenue maximization problem of the hydro-power producer can be formulated as an impulse control problem where the uncertainty is modeled by a controlled SDDE, and use our initial result to find an optimal control for this problem.

The remainder of the article is organized as follows. In the next section we state the problem, set the notation used throughout the article and detail the set of assumptions that are made. Then, in Section 3, a verification theorem is derived. This verification theorem is an extension of the original verification theorem for the multi-modes optimal switching problem developed in [11] and presumes the existence of a verification family. In Section 4 we show that, under the assumptions made, there exists a verification family, thus proving existence of an optimal control for the switching problem with cost functional J. In Section 5 we investigate the example of the hydro-power producer more carefully and show that the case of a controlled SDDE fits into the problem description investigated in Sections 3 and 4.

Preliminaries
We consider a finite horizon problem and thus assume that the terminal time T is fixed with T < ∞.
We let (Ω, F, F, P) be a probability space, with F := (F t ) 0≤t≤T a filtration satisfying the usual conditions in addition to being quasi-left continuous.
Remark 2.1. Recall here the concept of quasi-left continuity: A càdlàg process (X t : 0 ≤ t ≤ T ) is quasi-left continuous if for each predictable stopping time γ and every announcing sequence of stopping times γ k ր γ we have X γ− := lim k→∞ X γ k = X γ , P-a.s. A filtration is quasi-left continuous if F γ = F γ− for every predictable stopping time γ.
Throughout we will use the following notation:
• P_F is the σ-algebra of F-progressively measurable subsets of [0, T] × Ω.
• For p ≥ 1, we let S^p be the set of all R-valued, P_F-measurable, càdlàg processes (Z_t : 0 ≤ t ≤ T) such that E[sup_{t∈[0,T]} |Z_t|^p] < ∞, and let S^p_qlc be the subset of processes that are quasi-left continuous.
• We let T be the set of all F-stopping times and for each γ ∈ T we let T_γ be the corresponding subset of stopping times τ such that τ ≥ γ, P-a.s.
• We let U^f denote the subset of u ∈ U for which N is finite P-a.s., and for all k ≥ 0 we let U^k := {u ∈ U : N ≤ k}. For γ ∈ T we let U_γ (and U^f_γ resp. U^k_γ) be the subset of U (and U^f resp. U^k) with τ_1 ∈ T_γ.
• We define the set D of non-decreasing sequences of intervention times paired with modes, and let D^f be the corresponding subset of all finite sequences.

Problem formulation
In the above notation, our problem can be characterized by two objects: the reward functional Ψ and the switching costs c_{·,·}. We will make the following preliminary assumptions on these objects: The function Ψ is P-a.s. right-continuous in the intervention times and bounded in the sense that: (ii) For each (t, b) ∈ D^f and any β ∈ I_{−b_n} we have Ψ(t; b) > Ψ(t, T; b, β) − c_{b_n,β}(T), P-a.s.
The above assumptions are mainly standard assumptions for optimal switching problems translated to our setting. Assumptions (i.a) and (iii.a) together imply that the expected maximal reward is finite. Assumption (ii) implies that it is never optimal to switch at the terminal time. We show below that the "no-free-loop" condition (iii.b) together with (i.a) implies that, with probability one, the optimal control (whenever it exists) can only make a finite number of switches.
We consider the following problem (Problem 1): find a control that maximizes J. As a step in solving Problem 1 we need the following proposition, which is a standard result for optimal switching problems and is due to the "no-free-loop" condition.
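The counting behind this implication can be sketched as follows (a sketch assuming, as is standard, nonnegative switching costs and that every loop of modes incurs a total cost of at least some ε > 0):

```latex
% Any m consecutive switches visit m+1 modes \beta_j,\dots,\beta_{j+m}
% drawn from the m-element set I, so two of them coincide (pigeonhole)
% and the switches between them form a loop, whose total cost is at
% least \varepsilon by the no-free-loop condition. Splitting the N
% switches into \lfloor N/m \rfloor disjoint blocks of length m gives
\sum_{j=1}^{N} c_{\beta_{j-1},\beta_j}(\tau_j)
  \;\ge\; \varepsilon \Big\lfloor \frac{N}{m} \Big\rfloor
  \;\ge\; \frac{\varepsilon\,(N-m)}{m},
% so a control with uniformly bounded accumulated switching costs can
% only make finitely many switches, P-a.s.
```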

The Snell envelope
In this section we gather the main results concerning the Snell envelope that will be useful later on. Recall that a progressively measurable process U is of class [D] if the set of random variables {U τ : τ ∈ T } is uniformly integrable.
Theorem 2.4 (The Snell envelope). Let U = (U_t)_{0≤t≤T} be an F-adapted, R-valued, càdlàg process of class [D]. Then there exists a unique (up to indistinguishability), R-valued, càdlàg process Z = (Z_t)_{0≤t≤T}, called the Snell envelope of U, such that Z is the smallest supermartingale that dominates U. Moreover, the following holds (with ΔU_t := U_t − U_{t−}):
(i) For any stopping time γ, Z_γ = ess sup_{τ∈T_γ} E[U_τ | F_γ].
(ii) The Doob–Meyer decomposition of the supermartingale Z implies the existence of a triple (M, K^c, K^d), where (M_t : 0 ≤ t ≤ T) is a uniformly integrable right-continuous martingale, (K^c_t : 0 ≤ t ≤ T) is a non-decreasing, predictable, continuous process with K^c_0 = 0 and (K^d_t : 0 ≤ t ≤ T) is a non-decreasing, purely discontinuous, predictable process with K^d_0 = 0, such that Z_t = M_t − K^c_t − K^d_t.
(iii) Let θ ∈ T be given and assume that for any predictable γ ∈ T_θ and any increasing sequence {γ_k}_{k≥0} with γ_k ∈ T_θ and lim_{k→∞} γ_k = γ, P-a.s., we have lim sup_{k→∞} U_{γ_k} ≤ U_γ, P-a.s. Then the stopping time τ*_θ := inf{s ≥ θ : Z_s = U_s} ∧ T is optimal after θ, i.e. Z_θ = E[U_{τ*_θ} | F_θ] = ess sup_{τ∈T_θ} E[U_τ | F_θ].
Furthermore, in this setting the Snell envelope, Z, is quasi-left continuous, i.e. K d ≡ 0.
(iv) Let (U^k)_{k≥0} be a sequence of càdlàg processes converging pointwise to a càdlàg process U and let Z^k be the Snell envelope of U^k. Then the sequence Z^k converges pointwise to a process Z, and Z is the Snell envelope of U.
The Snell envelope will be the main tool in showing that Problem 1 has a solution.
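In discrete time the Snell envelope is computed by the backward induction Z_T = U_T, Z_t = max(U_t, E[Z_{t+1} | F_t]), which makes properties (i)-(iii) easy to experiment with. The following is a minimal sketch on a recombining binomial tree; the reward U_t = (K − S_t)^+ and all parameter values are illustrative assumptions, not taken from the paper.

```python
def snell_envelope(reward, p):
    """Backward induction for the discrete-time Snell envelope on a
    recombining binomial tree: Z_T = U_T, Z_t = max(U_t, E[Z_{t+1}]).
    reward[t][i] is U at time t in the node reached by i up-moves."""
    T = len(reward) - 1
    Z = [row[:] for row in reward]
    for t in range(T - 1, -1, -1):
        for i in range(t + 1):
            cont = p * Z[t + 1][i + 1] + (1 - p) * Z[t + 1][i]
            # smallest supermartingale dominating U: stop now or continue
            Z[t][i] = max(reward[t][i], cont)
    return Z

# illustrative example: binomial stock tree, American-put-style reward
T, S0, up, down, K, p = 3, 100.0, 1.1, 0.9, 100.0, 0.5
S = [[S0 * up**i * down**(t - i) for i in range(t + 1)] for t in range(T + 1)]
U = [[max(K - s, 0.0) for s in row] for row in S]
Z = snell_envelope(U, p)
```

On this toy tree the root value Z[0][0] strictly exceeds U[0][0] = 0, and the optimal stopping rule of Theorem 2.4.(iii) is "stop the first time Z_s = U_s".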

Additional assumptions on regularity
From the definition of the Snell envelope it is clear that we need to make some further assumptions on the regularity of the involved processes. To facilitate this we define, for each (t, b) = (t_1, . . . , t_n; b_1, . . . , b_n) ∈ D^f, the value process (V^{t;b,u}_s)_{0≤s≤T} corresponding to the control u ∈ U. We make the following additional assumptions: (ii) For all (t, b) ∈ D^f and all b ∈ I_{−b_n}, the process (ess sup_{u∈U^k} V^{t,s∨t_n;b,b,u}_s : 0 ≤ s ≤ T) belongs to S²_qlc for each k ≥ 0.

A verification theorem
The method for solving Problem 1 will be based on deriving an optimal control under the assumption that a specific family of processes exists, and then showing that the family indeed does exist. We will refer to any such family of processes as a verification family.
Definition 3.1. We define a verification family to be a family of càdlàg supermartingales ((Y^{t;b}_s)_{0≤s≤T} : (t, b) ∈ D^f) such that:
a) The family satisfies the recursion (3.1).
c) For all n ≥ 1 we have that for every b ∈ Ī^n and η ∈ T̄^n, and for all b ∈ I_{−b_n}, the stated inequality holds.
The purpose of the present section is to reduce the solution of Problem 1 to showing existence of a verification family. This is done in the following verification theorem.
Theorem 3.2. Assume that a verification family exists. Then the family is unique (i.e. there is at most one verification family, up to indistinguishability) and: (ii) it defines the optimal control u* = (τ*_1, . . . , τ*_{N*}; β*_1, . . . , β*_{N*}) for Problem 1, where (τ*_j)_{1≤j≤N*} is a sequence of F-stopping times.
The proof is divided into three steps where we first, in Steps 1 and 2, show that for any 0 ≤ j ≤ N*, (3.4) holds P-a.s. for s ∈ [τ*_j, τ*_{j+1}]. Then in Step 3 we show that u* is the optimal control, establishing (i) and (ii). A straightforward generalization to arbitrary initial conditions (t, b) ∈ D^f then gives (3.5), by which uniqueness follows.
Step 1. We start by showing that for each (t, b) ∈ D^f the recursion (3.1) can be written in terms of an F-stopping time. From (3.1) we note that, by definition, Y^{t;b} is the smallest supermartingale dominating the corresponding reward, P-a.s. By Theorem 2.4.(iii) it thus follows that for any θ ∈ T, there is a stopping time γ_θ ∈ T_{t_n∨θ} at which the supremum is attained.
Step 2. We now show that Y_0 = J(u*). We start by noting that Y is the Snell envelope of the reward process, where Ψ_0 := Ψ(∅), and by Step 1 the corresponding term is the product of an F_{τ*_j}-measurable positive r.v. and a càdlàg supermartingale; thus, it is a càdlàg supermartingale for s ≥ τ*_j. Hence, Ŷ^M is the sum of a finite number of càdlàg supermartingales and thus a càdlàg supermartingale itself. By definition we find that Ŷ^M dominates Û^M, which is of class [D] by Assumption 2.5.(i) and property b). To show that Ŷ^M is in fact the Snell envelope of Û^M, assume that Z is another càdlàg supermartingale that dominates Û^M for all s ∈ [τ*_j, T]. Hence, there is a subsequence (M_k)_{k≥1} such that the limit taken over the subsequence is 0, P-a.s. Furthermore, as the convergence is uniform, the limit process is càdlàg.
By right-continuity of the switching costs and Ψ, where for notational simplicity we abuse the notation in (3.6), we find that (M_k)_{k≥0} has a further subsequence, still denoted (M_k)_{k≥0}, such that sup_{s∈[0,T]} |U_s − Û^{M_k}_s| → 0, P-a.s. as k → ∞. This implies that U is a càdlàg process which is of class [D] by Assumption 2.5.(i) and property b).
We thus have that Û^{M_k} is a sequence of càdlàg processes of class [D] that converges pointwise to the càdlàg process U of class [D], and that Ŷ^{M_k} is the Snell envelope of Û^{M_k} for all k ≥ 0. Then by Theorem 2.4.(iv) we find that Ŷ^{M_k} converges pointwise to the Snell envelope of U.
Hence, (Y^{τ*_1,...,τ*_j;β*_1,...,β*_j}_s : τ*_j ≤ s ≤ T) is the Snell envelope of U. To arrive at the second equality in (3.4), we note that the results obtained in Step 1 imply that the equality holds along any such sequence of stopping times. Now, arguing as in the proof of Proposition 2.3 and using property b), we find that u* ∈ U^f. Letting K → ∞ and using dominated convergence we conclude that Y_0 = J(u*).
Step 3. It remains to show that the strategy u* is optimal. To do this we pick any other strategy û := (τ̂_1, . . . , τ̂_N̂; β̂_1, . . . , β̂_N̂) ∈ U^f. By the definition of Y_0 in (3.1) we obtain the corresponding inequality, and in the same way, P-a.s. By repeating this argument and using the dominated convergence theorem we find that J(u*) ≥ J(û), which proves that u* is in fact optimal. Repeating the above procedure with (t, b) ∈ D^f as initial condition, (3.5) follows.
The main difference between the above proof and the proof of Theorem 1 in the original work by Djehiche, Hamadène and Popier [11] is that, since the future reward at any time depends on the entire history of the control, we are forced to consider a family of processes indexed by an uncountable set rather than a q-tuple for some finite positive q. Hence, we cannot simply write Y^{τ*_1,...,τ*_j;β*_1,...,β*_j} as the sum of a finite number of Snell envelopes. To arrive at the above verification theorem we therefore impose the right-continuity constraint of Assumption 2.5.(i). This effectively allowed us to find two sequences of processes that approximate, in S², on the one hand the value process corresponding to the optimal control and on the other hand the dominated process.

Existence
Theorem 3.2 presumes existence of the verification family ((Y^{t;b}_s)_{0≤s≤T} : (t, b) ∈ D^f). To obtain a satisfactory solution to Problem 1, we thus need to establish that a verification family exists. This is the topic of the present section. We follow the standard existence proof, which proceeds by a Picard iteration (see [7,11,17]). We thus define a sequence ((Y^{t;b,k}_s)_{0≤s≤T} : (t, b) ∈ D^f)_{k≥0} of families of processes, for k = 0 and for k ≥ 1 respectively. Taking the supremum over all û ∈ U on both sides and using that the right-hand side is uniformly bounded by Assumption 2.2.(i.a), the first bound follows.
Concerning the second claim, we note the corresponding identity and, arguing as above, obtain the bound.
Proof. The proof will follow by induction, and we use (i') to denote the first statement without the uniformity.
For k = 0, we have Y^{t,·∨t_n;b,b,0} = V^{t,·∨t_n;b,b,∅} ∈ S²_qlc by Assumption 2.5.(ii), and (i') follows from Assumption 2.5.(i). Now, assume that there is a k′ ≥ 0 such that (i') and (ii) hold for all k ≤ k′. Applying a reasoning similar to that in the proof of Theorem 3.2, and then Assumption 2.5, we find that (i') and (ii) hold for k′ + 1. By induction, (i') and (ii) hold for all k ≥ 0.
It remains to show that (i) holds. By the above reasoning we find that, for each k, the stated bound holds, where the right-hand side of the last inequality does not depend on k and tends to zero as l → ∞ by Assumption 2.5.(i). The second statement in (i) follows by an identical argument.
Proof. This follows from the definition of Y^{t;b,k} and Propositions 4.1 and 4.2 by applying the same argument as in the proof of the verification theorem (Theorem 3.2).
Proof. Since U^k_t ⊂ U^{k+1}_t we have that, P-a.s., the sequence is non-decreasing. We thus conclude that there is a P-null set N such that for each ω ∈ Ω \ N we have K(ω) < ∞.
By the "no-free-loop" condition (Assumption 2.2.(iii.b)) and the finiteness of I we get that for any control (τ_1, . . . , τ_N; β_1, . . . , β_N), Σ_{j=1}^N c_{β_{j−1},β_j}(τ_j) ≥ ε(N − m)/m, P-a.s. For ω ∈ Ω \ N (in the remainder of the proof N denotes a generic P-null set), we thus obtain the corresponding bound. This implies that for k′ > 0 we have, where we introduce the process Ȳ^{b,t,k,k′} corresponding to the truncation (τ^k_1, . . . , τ^k_{N^k∧k′}; β^k_1, . . . , β^k_{N^k∧k′}) of the optimal control. As the truncation only affects the performance of the controller when N^k > k′, applying Hölder's inequality we get, for ω ∈ Ω \ N and all s ∈ [0, T], the corresponding estimate. We conclude that for all ω ∈ Ω \ N, the sequence (Y^{t,·∨t_n;b,b,k}(ω))_{k≥0} is a sequence of càdlàg functions that converges uniformly, which implies that the limit is a càdlàg function.
Proof. As Ȳ^{t;b} is the pointwise limit of an increasing sequence of càdlàg supermartingales, it is a càdlàg supermartingale (see p. 86 in [10]). We treat each remaining property in the definition of a verification family separately: a) Applying the convergence result to the right-hand side of (4.2) and using the fact that, by Proposition 4.4, the limit is a càdlàg process, (iv) of Theorem 2.4 gives the recursion. Uniform boundedness was shown in Proposition 4.1.
where the interchange of limits is justified by the uniform convergence shown in Proposition 4.4. The limit is càdlàg, and by Proposition 4.1 it follows that Ȳ^{t,·∨t_n;b,b} ∈ S². It remains to show that Ȳ^{t,·∨t_n;b,b} is quasi-left continuous. Using the notation from the proof of Proposition 4.4 we have, for k ≥ 0 and all ω ∈ Ω \ N with P(N) = 0, the corresponding decomposition. By Proposition 4.2.(ii) the first part tends to zero P-a.s. as j → ∞. Since k was arbitrary and C is P-a.s. bounded, the desired result follows. This finishes the proof.
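The Picard iteration above has a transparent discrete-time analogue that can serve as a sanity check: on a binomial tree with two modes, Y^k is the value when at most k further switches are allowed, and Y^k increases to the value of the unrestricted problem. The sketch below is purely illustrative; the running reward f, the tree, and the switching costs are assumptions made for the example and do not come from the paper.

```python
from copy import deepcopy

def picard_switching(T, S, p, f, cost, n_iter):
    """Discrete-time analogue of the iteration Y^0 <= Y^1 <= ...:
    Y[t][i][b] is the best expected future reward from tree node (t, i)
    in mode b; iteration k allows at most k further mode switches."""
    modes = range(len(cost))
    terminal = [[[0.0 for _ in modes] for _ in range(t + 1)]
                for t in range(T + 1)]
    iterates, Yprev = [], None
    for k in range(n_iter + 1):
        Y = deepcopy(terminal)
        for t in range(T - 1, -1, -1):
            for i in range(t + 1):
                for b in modes:
                    # stay in mode b: running reward plus expected value
                    best = f(S[t][i], b) + p * Y[t + 1][i + 1][b] \
                        + (1 - p) * Y[t + 1][i][b]
                    if Yprev is not None:
                        for b2 in modes:
                            if b2 != b:
                                # switch now, paying the cost and using
                                # one of the k allowed interventions
                                best = max(best, Yprev[t][i][b2] - cost[b][b2])
                    Y[t][i][b] = best
        iterates.append(Y)
        Yprev = Y
    return iterates

# toy data: mode 1 produces, earning (S - 95)/10 per step; mode 0 is idle
Tn, p = 4, 0.5
S = [[90.0 + 10.0 * (2 * i - t) for i in range(t + 1)] for t in range(Tn + 1)]
f = lambda s, b: (s - 95.0) / 10.0 if b == 1 else 0.0
cost = [[0.0, 0.5], [0.5, 0.0]]
Ys = picard_switching(Tn, S, p, f, cost, n_iter=8)
```

With a strictly positive round-trip cost the iterates stabilize after finitely many steps, mirroring the fact that the no-free-loop condition rules out infinitely many switches.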

Application to SDDEs with controlled volatility
We now move to the case of impulse control of SDDEs. We start by formalizing the hydro-power production problem proposed as a motivating example in the introduction.

Continuous time hydro-power planning
The increasing competitiveness of electricity markets calls for new operational standards in electric power production facilities. It has previously been acknowledged that optimal switching can be useful in deriving production schedules that maximize the revenue from electricity production [7,11,20]. Here we will extend the applicability of optimal switching by introducing a new example, the coordinated operation of hydropower plants interconnected by hydrological coupling.
We consider the situation where a central operator controls the output of two hydropower stations located in the same river (but note that the model is easily extended to consider an entire system of power stations).
We assume that Plant i, for i = 1, 2, has: • A reservoir containing a volume Z i t m 3 of water at time t.
• A stochastic inflow V i t m 3 /s to the reservoir that is modeled by a jump diffusion process.
• κ_i turbines that can be either "in operation", producing p_i(Z^i_t) MW by releasing α_i m³/s of water through the turbine, or "idle".
We assume that the power plants are hydrologically connected in such a way that the water that passes through Plant 1 will reach the reservoir of Plant 2 after δ ≥ 0 seconds.
We assume that we control the number of turbines in operation in each of the two plants. We thus let I := {0, 1, . . . , κ_1} × {0, 1, . . . , κ_2}. The dynamics of the involved processes are then given by the SDDE below, and an appropriate reward functional involves R_t, the (stochastic) electricity price at time t, and q : R²_+ → R, the value of water (per m³) stored in the reservoirs at the end of the operation period.
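The hydrological coupling is what turns this into a delay problem: water released through Plant 1 at time t only enters reservoir 2 at time t + δ. A minimal Euler-scheme sketch of the reservoir balance is given below; deterministic inflows stand in for the jump-diffusion inflows V^i, and all names and parameter values are illustrative assumptions.

```python
def simulate_reservoirs(T, dt, delay, n1, n2, inflow1, inflow2,
                        alpha1, alpha2, z1_0, z2_0):
    """Euler scheme for the coupled water balance: discharge of Plant 1
    at time t reaches reservoir 2 at time t + delay. n1(t), n2(t) give
    the number of turbines running (the switching control)."""
    steps = int(round(T / dt))
    lag = int(round(delay / dt))
    z1, z2 = z1_0, z2_0
    release1 = [0.0] * steps  # history of Plant 1 discharge (m^3/s)
    path = []
    for k in range(steps):
        t = k * dt
        out1 = alpha1 * n1(t)
        out2 = alpha2 * n2(t)
        # delayed coupling: Plant 1 discharge from `delay` seconds ago
        arrived = release1[k - lag] if k >= lag else 0.0
        z1 = max(z1 + (inflow1 - out1) * dt, 0.0)
        z2 = max(z2 + (inflow2 + arrived - out2) * dt, 0.0)
        release1[k] = out1
        path.append((t, z1, z2))
    return path

# constant one-turbine operation in both plants, purely for illustration
path = simulate_reservoirs(T=10.0, dt=0.1, delay=2.0,
                           n1=lambda t: 1, n2=lambda t: 1,
                           inflow1=5.0, inflow2=1.0,
                           alpha1=2.0, alpha2=1.0,
                           z1_0=100.0, z2_0=50.0)
```

Note that during the first δ seconds reservoir 2 sees no coupling term at all; this dependence of the state on the control history is exactly the memory effect the switching problem has to account for.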
Remark 5.1. Note that by letting χ¹ ≡ b_0, taking [h_{β_{j−1},β_j}]_1(t, x) = β_j and letting the first rows of a, σ and γ equal zero, we get [X]_1 = ξ^u, which implies that the control enters all terms in the SDDE for X^u.
We consider the situation when the functional J is given as above. We assume that the parameters of the SDDE satisfy the following conditions: ii) There is a ρ(z), with ∫ ρ^{4q}(z) µ(dz) < ∞, such that γ satisfies |γ(t, x, y, z) − γ(t, x′, y′, z)| ≤ ρ(z)(|x − x′| + |y − y′|). Furthermore:
Remark 5.3. Note in particular that since a and σ are continuous in t, a(·, 0, 0) and σ(·, 0, 0) are uniformly bounded, and Lipschitz continuity implies that ∫ |γ(t, x, y, z)|^{4q} µ(dz) ≤ C(1 + |x|^{4q} + |y|^{4q}). (5.6)
We have the following result:
Proof. We first note that existence of a unique solution to the SDDE follows by repeated use of Theorem 3.2 in [1] (where existence of a unique solution to a more general controlled SDDE is shown). It remains to show that the moment estimate holds. We have X^{u,j} = X^{u,j−1} on [−δ, τ_j) and the controlled dynamics on [τ_j, T]. By Assumption 5.2.(iii) we get, for t ∈ [τ_j, T], using integration by parts, the corresponding estimate, and by repeated application we find, with τ_0 := 0, a telescoping bound. Now, since X^{u,i} and X^{u,j} coincide on [0, τ_{i+1∧j+1}), we can bound the term ∫ (∫ |γ(s, X^{u,j}_{s−}, X^{u,j}_{s−δ}, z)|² µ(dz)) ds.
Finally, using the Burkholder–Davis–Gundy inequality in combination with (5.6) we get a bound in which the constant C does not depend on j, and it follows by Grönwall's lemma that E[sup_{t∈[0,T]} |X^{u,j}_t|^{4q}] is bounded uniformly in j. Now, the result follows since τ_j → T, P-a.s., as j → ∞.
Arguing as in the proof of Proposition 5.4 we find, for s ∈ [τ_j, T], a corresponding bound, and thus, for each u ∈ U, an estimate on the value. To show that switching does not cause solutions to diverge, we have the following useful lemma:
Lemma 5.6. For γ ∈ T and each u ∈ U_γ, let (^kZ^u)_{k≥0} and X^u be processes in S^{4q} (with E[sup_{s∈[0,γ]} |^kZ^u_s|^{4q}] uniformly bounded) that solve the SDDE (5.3)–(5.5) on (γ, T] with control u and whose initial segments converge.
Proof. By the contraction property of h_{·,·} we obtain a bound on |X^{u,j}|. Using integration by parts we get, for t ∈ [τ_j, T],

Repeated application implies that
Using Lipschitz continuity of a, σ and γ and the Burkholder–Davis–Gundy inequality we get a bound in which the constant C does not depend on the control u; by Grönwall's inequality the first claim follows, and (5.9) follows by an identical argument.
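Both this proof and that of Proposition 5.4 close with the same standard step; for completeness, the integral form of Grönwall's lemma being invoked reads:

```latex
% Gr\"onwall's lemma (integral form): if g \ge 0 is bounded and
% measurable on [0,T] and, for constants C_1, C_2 \ge 0,
g(t) \;\le\; C_1 + C_2 \int_0^t g(s)\,ds \quad \text{for all } t \in [0,T],
% then
g(t) \;\le\; C_1\, e^{C_2 t} \quad \text{for all } t \in [0,T].
% Here it is applied with g(t) := E\big[\sup_{s \in [0,t]} |X^{u,j}_s|^{4q}\big],
% after the Burkholder--Davis--Gundy step has produced a bound of the
% displayed form with C independent of j (resp. of the control u).
```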
We add the following assumptions on the components of the cost functional and the functions h.
for all x ∈ R d .
It is straightforward to see that, with the above assumptions, the Ψ defined above satisfies Assumption 2.2. The remainder of this section is devoted to showing that Ψ also satisfies Assumption 2.5, guaranteeing the existence of an optimal control for the problem of maximizing J.
We now turn to the total revenue. By right-continuity of the switching costs we find that, P-a.s., the difference in revenue can be written ∫_0^T (f(s, X_s) − f(s, Z_s)) ds + g(X_T) − g(Z_T) + Λ.
By local Lipschitz continuity of f and g we get that, for each K > 0, there is a C > 0 such that |f(t, x) − f(t, x′)| ≤ C|x − x′| and |g(x) − g(x′)| ≤ C|x − x′| on |x| + |x′| ≤ K. This gives us a relation on the set A := {ω ∈ Ω : ∫_0^T C|X_s − Z_s|² ds + C|X_T − Z_T|² > −Λ}. Doob's maximal inequality then gives the corresponding bound. We let Z̃ := X^{η;b,ũ_l}, where ũ_l is obtained from ũ as û_l was obtained from u. Now, we proceed as above and get, for each M ≥ κ, an estimate. For l sufficiently large we thus see, by (5.16) and Chebyshev's inequality, that the probability on the right-hand side can be made arbitrarily small by choosing M sufficiently large. For the third term we note a bound whose right-hand side can be made arbitrarily small by Lemma 5.10 and quasi-left continuity of the switching costs. We conclude that lim_{j→∞} E[|Y^{t,γ_j∨t_n;b,b,k}_{γ_j} − Y^{t,γ∨t_n;b,b,k}_γ|] = 0, which implies that Y^{t,γ_j∨t_n;b,b,k}_{γ_j} → Y^{t,γ∨t_n;b,b,k}_γ in probability. Now, since Y^{t,·∨t_n;b,b,k} has left limits, it follows that Y^{t,γ_j∨t_n;b,b,k}_{γ_j} → Y^{t,γ∨t_n;b,b,k}_γ, P-a.s., and we conclude that Y^{t,·∨t_n;b,b,k} ∈ S²_qlc.
By the above results we conclude that an optimal control for the hydropower planning problem does exist (under the assumptions detailed in this section). With a few notable exceptions (see e.g. [3,4] in the case of singular control problems and Chapter 7 in [25] for examples of solvable impulse control problems), finding explicit solutions to impulse control problems is difficult. Instead, we often have to resort to numerical methods to approximate the optimal control. A plausible direction for obtaining numerical approximations of solutions to the hydropower operator's problem would be to further develop the Monte Carlo technique originally proposed for optimal switching problems in [7] (and later analyzed in [2]) to obtain polynomial approximations of Y^{t;b}. Another possibility would be to apply the Markov chain approximations for stochastic control problems of delay systems developed in [21]. However, a thorough investigation of either direction is beyond the scope of the present work and is left as a topic for future research.
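To make the Monte Carlo direction concrete, the regression step of the technique in [7] can be sketched on the simplest subproblem, a single optimal stopping problem: simulate paths, then work backwards, replacing the conditional expectation E[·|F_t] by a polynomial regression on the simulated states. Everything below (model, payoff, basis, parameters) is an illustrative assumption, not the scheme analyzed in [2].

```python
import numpy as np

def regression_stopping_value(paths, payoff, deg=2):
    """Least-squares Monte Carlo estimate of sup_tau E[payoff(X_tau)]:
    regress realized continuation values on polynomials of the state and
    stop wherever immediate exercise beats the fitted continuation."""
    n_paths, n_steps = paths.shape
    value = payoff(paths[:, -1])            # stop at T as the fallback
    for t in range(n_steps - 2, 0, -1):
        exercise = payoff(paths[:, t])
        itm = exercise > 0                  # regress where stopping pays
        if itm.sum() > deg + 1:
            coef = np.polyfit(paths[itm, t], value[itm], deg)
            cont = np.polyval(coef, paths[itm, t])
            stop = np.where(itm)[0][exercise[itm] > cont]
            value[stop] = exercise[stop]
    return value.mean()

# toy model: driftless geometric Brownian motion, put-style reward
rng = np.random.default_rng(0)
n, m, S0, K, sigma = 5000, 50, 100.0, 100.0, 0.2
dt = 1.0 / (m - 1)
increments = sigma * np.sqrt(dt) * rng.standard_normal((n, m - 1))
logS = np.cumsum(increments - 0.5 * sigma**2 * dt, axis=1)
S = S0 * np.hstack([np.ones((n, 1)), np.exp(logS)])
est = regression_stopping_value(S, lambda s: np.maximum(K - s, 0.0))
```

In the switching problem proper, one such regression would be needed per mode and per iterate of the scheme, which is exactly where the polynomial approximations of Y^{t;b} mentioned above would enter.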