1 Introduction

The standard optimal switching problem (sometimes referred to as the starting and stopping problem) is a stochastic optimal control problem of impulse type that arises when an operator controls a dynamical system by switching between the members of a set of operation modes \({\mathcal {I}}=\{1,\ldots ,m\}\). In the two-modes setting (\(m=2\)) the modes may represent, for example, “operating” and “closed” when maximizing the revenue from mineral extraction in a mine as in Brennan and Schwartz (1985). In the multi-modes setting the operating modes may represent different levels of power production in a power plant when the owner seeks to maximize her total revenue from producing electricity as in Carmona and Ludkovski (2008), or the states “operating” and “closed” of single units in a multi-unit production facility as in Brekke and Øksendal (1994).

In optimal switching the control takes the form \(u=(\tau _1,\ldots ,\tau _N;\beta _1,\ldots ,\beta _N)\), where \(\tau _1\le \tau _2\le \cdots \le \tau _N\) is a sequence of times when the operator intervenes on the system and \(\beta _j\in {\mathcal {I}}^{-\beta _{j-1}}:= {\mathcal {I}}\setminus \{\beta _{j-1}\}\) is the mode in which the system is operated during \([\tau _j,\tau _{j+1})\). The standard multi-modes optimal switching problem in finite horizon (\(T<\infty \)) can be formulated as finding the control that maximizes

$$\begin{aligned} {\mathbb {E}}\left[ \int _0^T \phi _{\xi _s}(s)ds+\psi _{\xi _T}-\sum _{j=1}^Nc_{\beta _{j-1},\beta _{j}}(\tau _j)\right] , \end{aligned}$$

where \(\xi _t=b_0\mathbb {1}_{[0,\tau _{1})}(t)+\sum _{j=1}^N \beta _j\mathbb {1}_{[\tau _{j},\tau _{j+1})}(t)\) is the operation mode (when starting in a predefined mode \(b_0\in {\mathcal {I}}\)), \(\phi _b\) and \(\psi _b\) are the running and terminal reward in mode \(b\in {\mathcal {I}}\), respectively, and \(c_{b,b'}(t)\) is the cost incurred by switching from mode b to mode \(b'\) at time \(t\in [0,T]\).
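To fix ideas, the mode process \(\xi \) and the reward above can be evaluated numerically for a given control. The following Python sketch is purely illustrative (the names `mode_at` and `reward`, and the dictionary-based rewards, are our own conventions); it assumes a deterministic control and a simple Riemann discretization of the running-reward integral.

```python
from bisect import bisect_right

def mode_at(t, taus, betas, b0):
    """Operation mode xi_t for the control (taus; betas), started in mode b0."""
    j = bisect_right(taus, t)  # number of interventions with tau_j <= t
    return b0 if j == 0 else betas[j - 1]

def reward(taus, betas, b0, phi, psi, cost, T, dt=1e-3):
    """Riemann-sum approximation of
       int_0^T phi_{xi_s}(s) ds + psi_{xi_T} - sum_j c_{beta_{j-1},beta_j}(tau_j)."""
    n = int(T / dt)
    running = dt * sum(phi[mode_at(k * dt, taus, betas, b0)](k * dt)
                       for k in range(n))
    switch_costs = sum(cost(b_prev, b_new)(tau)
                       for b_prev, b_new, tau in zip([b0] + betas[:-1], betas, taus))
    return running + psi[mode_at(T, taus, betas, b0)] - switch_costs
```

For instance, with two modes, \(\phi _1\equiv 1\), \(\phi _2\equiv 2\), zero terminal rewards, a constant switching cost of 0.5 and a single switch to mode 2 at time 0.5, the reward over \([0,1]\) is \(1.5-0.5=1\).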

The standard optimal switching problem has been thoroughly investigated in the last decades after being popularised in Brennan and Schwartz (1985). In Hamadène and Jeanblanc (2007) a solution to the two-modes problem was found by rewriting the problem as an existence and uniqueness problem for a doubly reflected backward stochastic differential equation. In Djehiche et al. (2009) existence of an optimal control for the multi-modes optimal switching problem was shown by a probabilistic method based on the concept of Snell envelopes. Furthermore, existence and uniqueness of viscosity solutions to the related Bellman equation was shown for the case when the switching costs are constant and the underlying uncertainty is modeled by a stochastic differential equation (SDE) driven by a Brownian motion. In El Asri and Hamadène (2009) the existence and uniqueness results for viscosity solutions were extended to the case when the switching costs depend on the state variable. Since then, results have been extended to Knightian uncertainty (Hu and Tang 2008; Hamadène and Zhang 2010; Chassagneux et al. 2011) and to non-Brownian filtrations and signed switching costs in Martyr (2016). For the case when the underlying uncertainty can be modeled by a diffusion process, generalization to the case when the control enters the drift and volatility term was treated in Elie and Kharroubi (2014). This was further developed to include state constraints in Kharroubi (2016). Another important generalization is to the case when the operator only has partial information about the present state of the diffusion process as treated in Li et al. (2015).

In the present work we consider the setting with running and terminal rewards that depend on the entire history of the control. We also show that a special case of the type of switching problems that we consider is that of a controlled stochastic delay differential equation (SDDE) driven by a finite-intensity Lévy process.

To motivate our problem formulation we consider the situation when an operator of two hydropower plants, located in the same river, wants to maximize her revenue from producing electricity during a fixed operation period. We assume that each plant has its own water reservoir. The power production in a hydropower plant depends on the drop height from the water level of the reservoir to the outlet and thus on the amount of water in the reservoir. As water that passes through the upstream plant will eventually reach the reservoir of the downstream plant, we need to consider part of the control history in the upstream plant when optimizing operation of the downstream plant.

In this setting our cost functional can be written

$$\begin{aligned} J(u)&:={\mathbb {E}}\left[ \int _0^T\phi (s,\tau _1,\ldots ,\tau _{N_s}; \beta _1,\ldots ,\beta _{N_s})ds\nonumber \right. \\&\quad \left. + \psi (\tau _1,\ldots ,\tau _{N};\beta _1,\ldots ,\beta _{N}) -\sum _{j}c_{\beta _{j-1},\beta _j}(\tau _j)\right] , \end{aligned}$$
(1)

where \(N_s:=\max \{j:\tau _j\le s\}\). The contribution of the present work is twofold. First, we show that the problem of maximizing J can be solved under certain assumptions on \(\phi \), \(\psi \) and the switching costs \(c_{\cdot ,\cdot }\) by finding an optimal control in terms of a family of interconnected value processes, which we refer to as a verification family. We then show that the revenue maximization problem of the hydro-power producer can be formulated as an impulse control problem where the uncertainty is modeled by a controlled SDDE and use our initial result to find an optimal control for this problem.

The remainder of the article is organized as follows. In the next section we state the problem, set the notation used throughout the article and detail the set of assumptions that are made. Then, in Sect. 3 a verification theorem is derived. This verification theorem is an extension of the original verification theorem for the multi-modes optimal switching problem developed in Djehiche et al. (2009) and presumes the existence of a verification family. In Sect. 4 we show that, under the assumptions made, there exists a verification family, thus proving existence of an optimal control for the switching problem with cost functional J. In Sect. 5 we more carefully investigate the example of the hydro-power producer and show that the case of a controlled SDDE fits into the problem description investigated in Sects. 3 and 4.

2 Preliminaries

We consider a finite horizon problem and thus assume that the terminal time T is fixed with \(T<\infty \).

We let \((\varOmega ,{\mathcal {F}},{\mathbb {F}},{\mathbb {P}})\) be a probability space, with \({\mathbb {F}}:=({\mathcal {F}}_t)_{0\le t\le T}\) a filtration satisfying the usual conditions in addition to being quasi-left continuous.

Remark 1

Recall here the concept of quasi-left continuity: A càdlàg process \((X_t:0\le t\le T)\) is quasi-left continuous if for each predictable stopping time \(\gamma \) and every announcing sequence of stopping times \(\gamma _k\nearrow \gamma \) we have \(X_{\gamma -}:=\lim \limits _{k\rightarrow \infty }X_{\gamma _k} = X_\gamma \), \({\mathbb {P}}\)-a.s. A filtration is quasi-left continuous if \({\mathcal {F}}_{\gamma }={\mathcal {F}}_{\gamma -}\) for every predictable stopping time \(\gamma \).

Throughout we will use the following notation:

  • \({\mathcal {P}}_{{\mathbb {F}}}\) is the \(\sigma \)-algebra of \({\mathbb {F}}\)-progressively measurable subsets of \([0,T]\times \varOmega \).

  • For \(p\ge 1\), we let \({\mathcal {S}}^{p}\) be the set of all \({\mathbb {R}}\)-valued, \({\mathcal {P}}_{{\mathbb {F}}}\)-measurable, càdlàg processes \((Z_t: 0\le t\le T)\) such that \({\mathbb {E}}\left[ \sup _{t\in [0,T]} |Z_t|^p\right] <\infty \), and let \({\mathcal {S}}_{\textit{qlc}}^{p}\) be the subset of processes that are quasi-left continuous.

  • We let \({\mathcal {T}}\) be the set of all \({\mathbb {F}}\)-stopping times and for each \(\gamma \in {\mathcal {T}}\) we let \({\mathcal {T}}_\gamma \) be the subset of stopping times \(\tau \) such that \(\tau \ge \gamma \), \({\mathbb {P}}\)-a.s.

  • We let \({\mathcal {U}}\) be the set of all \(u=(\tau _1,\ldots ,\tau _N;\beta _1,\ldots ,\beta _N)\), where \((\tau _j)_{j=1}^N\) is a non-decreasing sequence of \({\mathbb {F}}\)-stopping times (such that \(\lim _{j\rightarrow \infty }\tau _j=T\), \({\mathbb {P}}\)-a.s.) and \(\beta _j\in {\mathcal {I}}^{-\beta _{j-1}}\) is \({\mathcal {F}}_{\tau _j}\)-measurable (with \(\beta _0:=b_0\), the initial operation mode).

  • We let \({\mathcal {U}}^f\) denote the subset of \(u\in {\mathcal {U}}\) for which N is finite \({\mathbb {P}}\)-a.s. (i.e. \({\mathcal {U}}^f:=\{u\in {\mathcal {U}}:\, {\mathbb {P}}\left[ \{\omega \in \varOmega : N(\omega )>k, \,\forall k>0\}\right] =0\}\)) and for all \(k\ge 0\) we let \({\mathcal {U}}^k:=\{u\in {\mathcal {U}}:\,N\le k\}\). For \(\gamma \in {\mathcal {T}}\) we let \({\mathcal {U}}_\gamma \) (and \({\mathcal {U}}_\gamma ^f\) resp. \({\mathcal {U}}_\gamma ^k\)) be the subset of \({\mathcal {U}}\) (and \({\mathcal {U}}^f\) resp. \({\mathcal {U}}^k\)) with \(\tau _1\in {\mathcal {T}}_\gamma \).

  • We define the set \({\mathcal {D}}:=\{(t_1,\ldots ;b_1,\ldots ):t_1\le t_2\le \cdots ,\,b_{j+1}\in {\mathcal {I}}^{-b_j}\}\) and let \({\mathcal {D}}^f\) be the corresponding subset of all finite sequences.

  • For all \(n\ge 0\), we let \({\bar{{\mathcal {I}}}}^n:=\{(b_1,\ldots ,b_n)\in {\mathcal {I}}^n:\, b_{j}\in {\mathcal {I}}^{-b_{j-1}}\}\) and \({\bar{{\mathcal {T}}}}^n:=\{(\eta _1,\ldots ,\eta _n)\in {\mathcal {T}}^n:\, \eta _1\le \eta _2\le \cdots \le \eta _n\}\).

  • For \(l\ge 0\), we let \(\varPi _l:=\{0,T2^{-l},2T2^{-l},\ldots ,T\}\) and define the map \(\varGamma ^l:\cup _{j\ge 1}{\bar{{\mathcal {T}}}}^j \rightarrow \cup _{j\ge 1}{\bar{{\mathcal {T}}}}^j\) as \(\varGamma ^l(\eta _1,\ldots ,\eta _j):=(\inf \{s\in \varPi _l:\,s \ge \eta _1\},\ldots ,\inf \{s\in \varPi _l:\,s\ge \eta _j\})\) for all \(\eta \in {\bar{{\mathcal {T}}}}^j\).
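The map \(\varGamma ^l\) rounds each time in a non-decreasing tuple up to the nearest point of the dyadic grid \(\varPi _l\). For deterministic times this is elementary; a minimal sketch (the name `gamma_l` is ours):

```python
import math

def gamma_l(etas, l, T):
    """Round each time up to the dyadic grid Pi_l = {0, T/2**l, 2*T/2**l, ..., T},
    i.e. eta -> inf{s in Pi_l : s >= eta}; monotonicity of the tuple is preserved."""
    step = T / 2 ** l
    return tuple(min(math.ceil(eta / step) * step, T) for eta in etas)
```

Grid points are fixed points of the map, and refining the grid (\(l\rightarrow \infty \)) sends \(\varGamma ^l(\eta )\) down to \(\eta \), which is what the convergence conditions appearing later (in Assumption 2 and Definition 1) exploit.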

To make notation more efficient we introduce the \({\mathcal {F}}_T\)-measurable function:

$$\begin{aligned} \varPsi (\tau _1,\ldots ,\tau _N;\beta _1,\ldots ,\beta _N)&:=\int _0^T\phi (s,\tau _1,\ldots ,\tau _{N_s};\beta _1,\ldots ,\beta _{N_s})ds\\&\quad + \psi (\tau _1,\ldots ,\tau _{N};\beta _1,\ldots ,\beta _{N}). \end{aligned}$$

2.1 Problem formulation

In the above notation, our problem can be characterized by two objects:

  • An \({\mathcal {F}}_T\otimes {\mathcal {B}}({\mathcal {D}})\)-measurable map \(\varPsi :{\mathcal {D}}\rightarrow {\mathbb {R}}\).

  • A collection, \((c_{b,b'}:\varOmega \times [0,T]\rightarrow {\mathbb {R}})_{(b,b')\in {\bar{{\mathcal {I}}}}^2}\), of \({\mathcal {P}}_{{\mathbb {F}}}\)-measurable processes.

We will make the following preliminary assumptions on these objects:

Assumption 1

  1. (i)

    The function \(\varPsi \) is \({\mathbb {P}}\)-a.s. right-continuous in the intervention times and bounded in the sense that:

    1. (a)

      \(\sup _{u\in {\mathcal {U}}}{\mathbb {E}}[ |\varPsi (\tau _1,\ldots ;\beta _1,\ldots )|^2]<\infty \).

    2. (b)

      For all \((\mathbf{t },\mathbf{b })\in {\mathcal {D}}^f\) and any \(b\in {\mathcal {I}}^{-b_n}\) we have \(\sup _{u\in {\mathcal {U}}}{\mathbb {E}}[ \sup _{s\in [t_n,T]}|\varPsi (\mathbf{t },s,\tau _1\vee s,\ldots ;\mathbf{b },b,\beta _1,\ldots )|^2]<\infty \).

  2. (ii)

    For each \((\mathbf{t },\mathbf{b })\in {\mathcal {D}}^f\) and any \(b\in {\mathcal {I}}^{-b_n}\) we have \(\varPsi (\mathbf{t };\mathbf{b })>\varPsi (\mathbf{t },T;\mathbf{b },b)-c_{b_n,b}(T)\), \({\mathbb {P}}\)-a.s.

  3. (iii)

    We assume that \((c_{b,b'})_{(b,b')\in {\bar{{\mathcal {I}}}}^2}\in ({\mathcal {S}}_{\textit{qlc}}^2)^{m(m-1)}\) are such that:

    1. (a)

      \(c_{b,b'}\ge 0\), \({\mathbb {P}}\)-a.s.

    2. (b)

      There is an \(\epsilon >0\) such that for each \((t_1,\ldots ,t_{n},b_1,\ldots ,b_n)\) with \(0\le t_1\le \cdots \le t_n\le T\) and \(b_1\in {\mathcal {I}}^{-b_n}\), and \(b_j\in {\mathcal {I}}^{-b_{j-1}}\) for \(j=2,\ldots ,n\), we have

      $$\begin{aligned} c_{b_1,b_2}(t_1)+\cdots +c_{b_n,b_1}(t_{n})\ge \epsilon , \end{aligned}$$

      \({\mathbb {P}}\)-a.s.

The above assumptions are mainly standard assumptions for optimal switching problems translated to our setting. Assumptions (i.a) and (iii.a) together imply that the expected maximal reward is finite. Assumption (ii) implies that it is never optimal to switch at the terminal time. We show below that the “no-free-loop” condition (iii.b) together with (i.a) implies that, with probability one, the optimal control (whenever it exists) can only make a finite number of switches.

We consider the following problem:

Problem 1

Find \(u^*\in {\mathcal {U}}\), such that

$$\begin{aligned} J(u^*)=\sup _{u\in {\mathcal {U}}} J(u). \end{aligned}$$
(2)

\(\square \)
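As a point of reference, in a deterministic, discrete-time setting with history-independent rewards, Problem 1 collapses to a finite dynamic program over time and modes that can be solved by backward induction. The sketch below covers only this special case (the name `switching_value` and the indexing conventions are ours); the history-dependent reward \(\varPsi \) treated in this paper does not reduce to it.

```python
def switching_value(phi, psi, c, b0):
    """Deterministic, discrete-time optimal switching by backward induction.

    phi[b][t]: reward earned in period t while in mode b (a switch at t takes
               effect immediately, mirroring xi_t = beta_j on [tau_j, tau_{j+1}))
    psi[b]:    terminal reward in mode b
    c[b][bp]:  switching cost from b to bp, with c[b][b] == 0
    Returns the optimal total reward started at time 0 in mode b0.
    """
    m, T = len(psi), len(phi[0])
    V = list(psi)  # value at the terminal time
    for t in range(T - 1, -1, -1):
        # at each step, either stay (bp == b) or pay c[b][bp] and switch
        V = [max(-c[b][bp] + phi[bp][t] + V[bp] for bp in range(m))
             for b in range(m)]
    return V[b0]
```

With two periods, per-period rewards 0 and 3 in the two modes, zero terminal rewards and unit switching cost, the optimal value from the low-reward mode is \(3+3-1=5\): switch immediately.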

As a step in solving Problem 1 we need the following proposition which is a standard result for optimal switching problems and is due to the “no-free-loop” condition.

Proposition 1

Suppose that there is a \(u^*\in {\mathcal {U}}\) such that \(J(u^*)\ge J(u)\) for all \(u\in {\mathcal {U}}\). Then \(u^*\in {\mathcal {U}}^f\).

Proof

Pick \({\hat{u}}:=({\hat{\tau }}_1,\ldots ,{\hat{\tau }}_{{\hat{N}}};{\hat{\beta }}_1,\ldots ,{\hat{\beta }}_{{\hat{N}}})\in {\mathcal {U}}\setminus {\mathcal {U}}^f\) and let \(B:=\{\omega \in \varOmega : {\hat{N}}(\omega )>k, \,\forall k>0\}\); then \({\mathbb {P}}[B]>0\). Furthermore, on B the switching mode \(\xi \) must make an infinite number of loops and

$$\begin{aligned} J({\hat{u}})&\le \sup _{u\in {\mathcal {U}}} {\mathbb {E}}\big [ |\varPsi (\tau _1,\ldots ;\beta _1,\ldots )|\big ]-\frac{k-m}{m}\epsilon {\mathbb {P}}[B]\le C-\frac{k}{m}\epsilon {\mathbb {P}}[B], \end{aligned}$$

for all \(k\ge 0\), by Assumptions 1(iii.b) and 1(i.a). However, again by Assumption 1(i.a) we have \(J(\emptyset )\ge -C\). Hence, \({\hat{u}}\) is dominated by the strategy of doing nothing and the assertion follows. \(\square \)

2.2 The Snell envelope

In this section we gather the main results concerning the Snell envelope that will be useful later on. Recall that a progressively measurable process U is of class [D] if the set of random variables \(\{U_\tau :\tau \in {\mathcal {T}}\}\) is uniformly integrable.

Theorem 1

(The Snell envelope) Let \(U=(U_t)_{0\le t\le T}\) be an \({\mathbb {F}}\)-adapted, \({\mathbb {R}}\)-valued, càdlàg process of class [D]. Then there exists a unique (up to indistinguishability), \({\mathbb {R}}\)-valued càdlàg process \(Z=(Z_t)_{0\le t\le T}\) called the Snell envelope, such that Z is the smallest supermartingale that dominates U. Moreover, the following holds (with \(\varDelta U_t:=U_{t}-U_{t-}\)):

  1. (i)

    For any stopping time \(\gamma \),

    $$\begin{aligned} Z_{\gamma }=\mathop {\mathrm{ess}\,\sup }_{\tau \in {\mathcal {T}}_{\gamma }}{\mathbb {E}}\left[ U_\tau \big |{\mathcal {F}}_\gamma \right] . \end{aligned}$$
    (3)
  2. (ii)

    The Doob–Meyer decomposition of the supermartingale Z implies the existence of a triple \((M,K^c,K^d)\) where \((M_t:0\le t\le T)\) is a uniformly integrable right-continuous martingale, \((K^c_t:0\le t\le T)\) is a non-decreasing, predictable, continuous process with \(K^c_0=0\) and \((K^d_t:0\le t\le T)\) is non-decreasing purely discontinuous predictable with \(K^d_0=0\), such that

    $$\begin{aligned} Z_t=M_t-K^c_t-K^d_t. \end{aligned}$$
    (4)

    Furthermore, \(\{\varDelta _t K^d>0\}\subset \{\varDelta _t U<0\}\cap \{Z_{t-}=U_{t-}\}\) for all \(t\in [0,T]\).

  3. (iii)

    Let \(\theta \in {\mathcal {T}}\) be given and assume that for any predictable \(\gamma \in {\mathcal {T}}_\theta \) and any increasing sequence \(\{\gamma _k\}_{k\ge 0}\) with \(\gamma _k\in {\mathcal {T}}_\theta \) and \(\lim _{k\rightarrow \infty }\gamma _k=\gamma \), \({\mathbb {P}}\)-a.s., we have \(\limsup _{k\rightarrow \infty }U_{\gamma _k}\le U_{\gamma }\), \({\mathbb {P}}\)-a.s. Then, the stopping time \(\tau ^*_{\theta }\) defined by \(\tau ^*_{\theta }:=\inf \{s\ge \theta :Z_s=U_s\}\wedge T\) is optimal after \(\theta \), i.e.

    $$\begin{aligned} Z_{\theta }={\mathbb {E}}\left[ U_{\tau ^*_\theta }\big |{\mathcal {F}}_\theta \right] . \end{aligned}$$

    Furthermore, in this setting the Snell envelope, Z, is quasi-left continuous, i.e. \(K^d\equiv 0\).

  4. (iv)

    Let \(U^k\) be a sequence of càdlàg processes converging increasingly and pointwise to the càdlàg process U and let \(Z^k\) be the Snell envelope of \(U^k\). Then the sequence \(Z^k\) converges increasingly and pointwise to a process Z, and Z is the Snell envelope of U.

In the above theorem (i)–(iii) are standard. Proofs can be found in El Karoui (1981) (see Latifa et al. 2015 for an English version), Appendix D in Karatzas and Shreve (1998), Hamadène (2002) and in the appendix of Cvitanic and Karatzas (1996). Statement (iv) was proved in Djehiche et al. (2009).
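In discrete time, the Snell envelope reduces to the classical backward recursion \(Z_N=U_N\) and \(Z_n=\max (U_n,{\mathbb {E}}[Z_{n+1}\,|\,{\mathcal {F}}_n])\). The sketch below computes it on a binomial tree and is meant only as an illustration of the discrete-time analogue of Theorem 1, not of the general càdlàg setting.

```python
def snell_envelope(payoff, p=0.5):
    """Snell envelope on a binomial tree by backward induction.

    payoff[n][i] is the reward U at time n in the node reached by i up-moves;
    p is the probability of an up-move.  Returns Z with the same shape, the
    smallest supermartingale dominating U (Z_n = max(U_n, E[Z_{n+1} | F_n])).
    """
    Z = [row[:] for row in payoff]  # Z_N = U_N at the terminal time
    N = len(payoff) - 1
    for n in range(N - 1, -1, -1):
        for i in range(n + 1):
            cont = p * Z[n + 1][i + 1] + (1 - p) * Z[n + 1][i]
            Z[n][i] = max(payoff[n][i], cont)  # stop now or continue
    return Z
```

Along any path, the first time \(n\) with \(Z_n=U_n\) is then an optimal stopping time, in line with part (iii) of Theorem 1.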

We will need the following trivial extension of (iv):

Lemma 1

Let \(U^k\) be a uniformly bounded sequence in \({\mathcal {S}}^2\) and let \(Z^k\) be the Snell envelope of \(U^k\). If there exists a process \(U\in {\mathcal {S}}^2\) such that \(\sup _{t\in [0,T]}|U^k_t-U_t|\rightarrow 0\), \({\mathbb {P}}\)-a.s. as \(k\rightarrow \infty \), then the sequence \(Z^k\) converges pointwise to a process Z and Z is the Snell envelope of U.

Proof

Note that U is a càdlàg process by the uniform convergence. Hence, it has a Snell envelope, Z. Letting \((\tau _j^k)\subset {\mathcal {T}}_t\) be a sequence of stopping times such that \(Z^k_t=\lim _{j\rightarrow \infty }{\mathbb {E}}[U^k_{\tau _j^k}|{\mathcal {F}}_t]\), we have

$$\begin{aligned} Z_t&\ge \lim _{j\rightarrow \infty }{\mathbb {E}}\big [U_{\tau _j^k}\big |{\mathcal {F}}_ t\big ]\\&= Z^k_t-\lim _{j\rightarrow \infty }{\mathbb {E}}\big [U^k_{\tau _j^k}-U_{\tau _j^k}\big |{\mathcal {F}}_ t\big ]\\&\ge Z^k_t-{\mathbb {E}}\left[ \sup _{s\in [0,T]}|U^k_{s}-U_{s}|\big |{\mathcal {F}}_ t\right] \end{aligned}$$

But similarly \(Z^k_t\ge Z_t-{\mathbb {E}}[\sup _{s\in [0,T]}|U^k_{s}-U_{s}||{\mathcal {F}}_ t]\) and we conclude that \(|Z^k_t- Z_t|\le {\mathbb {E}}[\sup _{s\in [0,T]}|U^k_{s}-U_{s}||{\mathcal {F}}_ t]\) and the assertion follows. \(\square \)

The Snell envelope will be the main tool in showing that Problem 1 has a solution.

2.3 Additional assumptions on regularity

From the definition of the Snell envelope it is clear that we need to make some further assumptions on the regularity of the involved processes. To facilitate this we define, for each \((\mathbf{t },\mathbf{b })=(t_1,\ldots ,t_n;b_1,\ldots ,b_n)\in {\mathcal {D}}^f\), the value process corresponding to the control \(u\in {\mathcal {U}}\) as

$$\begin{aligned} V^{\mathbf{t };\mathbf{b },u}_s&:={\mathbb {E}}\left[ \varPsi (\mathbf{t },t_n\vee s\vee \tau _1,\ldots ,t_n\vee s\vee \tau _N;\mathbf{b },\beta _1,\ldots ,\beta _N)\right. \\&\quad \left. - \sum _{j=1}^Nc_{\beta _{j-1},\beta _j}(t_n\vee s\vee \tau _j)|{\mathcal {F}}_s\right] , \end{aligned}$$

with \(\beta _0:=b_n\).

We make the following additional assumptions:

Assumption 2

  1. (i)

    For each \(n\ge 0\) and each \((\eta ,\mathbf{b })\in {\bar{{\mathcal {T}}}}^n\times {\bar{{\mathcal {I}}}}^n\) and \(b\in {\mathcal {I}}^{-b_n}\) there is a sequence of maps \(({\mathcal {U}}\rightarrow {\mathcal {U}}:u\rightarrow {\hat{u}}^l)_{l\ge 0}\) such that

    $$\begin{aligned}&\lim _{l\rightarrow \infty }\sup _{u\in {\mathcal {U}}}{\mathbb {E}}\left[ \sup _{s\in [0,T]}|(V^{\eta ;\mathbf{b },u}_s-V^{\varGamma ^l(\eta );\mathbf{b },{\hat{u}}^l}_s)^+\right. \\&\quad \left. +(V^{\eta ,s\vee \eta _n;\mathbf{b },b,u}_s-V^{\varGamma ^l(\eta ),s\vee \varGamma ^l(\eta _n);\mathbf{b },b,{\hat{u}}^l}_s)^+|^2\right] =0. \end{aligned}$$

    Furthermore, we have

    $$\begin{aligned}&\lim _{l\rightarrow \infty }\sup _{u\in {\mathcal {U}}_{\varGamma ^l(\eta _n)}}{\mathbb {E}}\left[ \sup _{s\in [0,T]}|(V^{\varGamma ^l(\eta );\mathbf{b },u}_s- V^{\eta ;\mathbf{b },u}_s)^+\right. \\&\quad \left. +(V^{\varGamma ^l(\eta ),s\vee \varGamma ^l(\eta _n);\mathbf{b },b,u}_s- V^{\eta ,s\vee \eta _n;\mathbf{b },b,u}_s)^+|^2\right] =0. \end{aligned}$$
  2. (ii)

    For all \((\mathbf{t },\mathbf{b })\in {\mathcal {D}}^f\) and all \(b\in {\mathcal {I}}^{-b_n}\), the process \((\mathop {\mathrm{ess}\,\sup }_{u\in {\mathcal {U}}^k}V^{\mathbf{t },s\vee t_n;\mathbf{b },b,u}_s:0\le s\le T)\) is in \({\mathcal {S}}_{\textit{qlc}}^2\) for \(k=0,1,\ldots \)

3 A verification theorem

The method for solving Problem 1 will be based on deriving an optimal control under the assumption that a specific family of processes exists, and then showing that the family indeed does exist. We will refer to any such family of processes as a verification family.

Definition 1

We define a verification family to be a family of càdlàg supermartingales \(((Y^{\mathbf{t };\mathbf{b }}_s)_{0\le s\le T}: (\mathbf{t },\mathbf{b })\in {\mathcal {D}}^f)\) such that:

  1. (a)

    The family satisfies the recursion

    $$\begin{aligned} Y^{\mathbf{t };\mathbf{b }}_s&=\mathop {\mathrm{ess}\,\sup }_{\tau \in {\mathcal {T}}_{s\vee t_n}} {\mathbb {E}}\left[ \mathbb {1}_{[\tau \ge T]}\varPsi (\mathbf{t };\mathbf{b })\right. \nonumber \\&\quad \left. +\mathbb {1}_{[\tau < T]}\max _{\beta \in {\mathcal {I}}^{-b_n}}\left\{ -c_{b_n,\beta }(\tau )+Y^{\mathbf{t },\tau ;\mathbf{b },\beta }_\tau \right\} \Big | {\mathcal {F}}_s\right] . \end{aligned}$$
    (5)
  2. (b)

    The family is bounded in the sense that \(\sup \limits _{u\in {\mathcal {U}}}{\mathbb {E}}[\sup \limits _{s\in [0,T]}|Y^{\tau _1,\ldots ,\tau _N;\beta _1,\ldots ,\beta _N}_s|^2]<\infty \).

  3. (c)

    For all \(n\ge 1\) we have that for every \(\mathbf{b }\in {\bar{{\mathcal {I}}}}^n\) and \(\eta \in {\bar{{\mathcal {T}}}}^n\),

    $$\begin{aligned} \lim _{l\rightarrow \infty }{\mathbb {E}}\left[ \sup _{s\in [0,T]}|Y^{\varGamma ^l(\eta );\mathbf{b }}_s-Y^{\eta ;\mathbf{b }}_s|^2\right] = 0 \end{aligned}$$
    (6)

    and for all \(b\in {\mathcal {I}}^{-b_n}\) we have

    $$\begin{aligned} \lim _{l\rightarrow \infty }{\mathbb {E}}\left[ \sup _{s\in [0,T]}|Y^{\varGamma ^l(\eta ),s\vee \varGamma ^l(\eta _n);\mathbf{b },b}_s-Y^{\eta ,s\vee \eta _n;\mathbf{b },b}_s|^2\right] = 0. \end{aligned}$$
    (7)
  4. (d)

    For every \((\mathbf{t },\mathbf{b })\in {\mathcal {D}}^f\) and every \(b\in {\mathcal {I}}^{-b_n}\), the process \((Y^{\mathbf{t },s;\mathbf{b },b}_s:0\le s\le T)\) is in \({\mathcal {S}}_{\textit{qlc}}^2\).

The purpose of the present section is to reduce the solution of Problem 1 to showing existence of a verification family. This is done in the following verification theorem:

Theorem 2

Assume that there exists a verification family \(((Y^{\mathbf{t };\mathbf{b }}_s)_{0\le s\le T}: (\mathbf{t },\mathbf{b })\in {\mathcal {D}}^f)\). Then the family is unique (i.e. there is at most one verification family, up to indistinguishability) and:

  1. (i)

    Satisfies \(Y_0=\sup _{u\in {\mathcal {U}}} J(u)\) (where \(Y:=Y^{\emptyset }\)).

  2. (ii)

    Defines the optimal control, \(u^*=(\tau _1^*,\ldots ,\tau _{N^*}^*;\beta _1^*,\ldots ,\beta _{N^*}^*)\), for Problem 1, where \((\tau _j^*)_{1\le j\le {N^*}}\) is a sequence of \({\mathbb {F}}\)-stopping times given by

    $$\begin{aligned} \tau ^*_j:=\inf \Big \{s&\ge \tau ^*_{j-1}:\,Y_s^{\tau ^*_{1}, \ldots ,\tau ^*_{j-1};\beta ^*_{1},\ldots ,\beta ^*_{j-1}}\\&=\max _{\beta \in {\mathcal {I}}^{-\beta ^*_{j-1}}}\Big \{-c_{\beta _{j-1}^*,\beta }(s)+ Y^{\tau ^*_{1},\ldots ,\tau ^*_{j-1},s;\beta ^*_{1},\ldots ,\beta ^*_{j-1},\beta }_s\Big \}\Big \}\wedge T, \end{aligned}$$

    \((\beta _j^*)_{1\le j\le {N^*}}\) is defined as a measurable selection of

    $$\begin{aligned} \beta ^*_j\in \mathop {\arg \max }_{\beta \in {\mathcal {I}}^{-\beta ^*_{j-1}}}\Big \{-c_{\beta _{j-1}^*,\beta }(\tau _j^*)+ Y^{\tau ^*_{1},\ldots ,\tau ^*_j;\beta ^*_{1},\ldots ,\beta ^*_{j-1},\beta }_{\tau ^*_j}\Big \} \end{aligned}$$

    and \(N^*=\max \{j:\tau ^*_j<T\}\), with \((\tau _0^*,\beta ^*_0):=(0,b_0)\).

Proof

The proof is divided into three steps. First, in Steps 1 and 2, we show that for any \(0\le j\le N^*\) we have

$$\begin{aligned} Y^{\tau ^*_1,\ldots ,\tau ^*_j;\beta ^*_1,\ldots ,\beta ^*_j}_{s}&=\mathop {\mathrm{ess}\,\sup }_{\tau \in {\mathcal {T}}_s}{\mathbb {E}}\Big [\mathbb {1}_{[\tau \ge T]}\varPsi (\tau ^*_1,\ldots ,\tau ^*_j;\beta ^*_1,\ldots ,\beta ^*_j) \nonumber \\&\quad +\mathbb {1}_{[\tau< T]}\max _{\beta \in {\mathcal {I}}^{-\beta ^*_j}}\left\{ -c_{\beta ^*_j,\beta }(\tau ) +Y^{\tau ^*_1,\ldots ,\tau ^*_{j},\tau ;\beta ^*_1,\ldots ,\beta ^*_{j},\beta }_{\tau }\right\} \Big | {\mathcal {F}}_{s}\Big ] \nonumber \\&={\mathbb {E}}\Big [\mathbb {1}_{[\tau ^*_{j+1} \ge T]}\varPsi (\tau ^*_1,\ldots ,\tau ^*_j;\beta ^*_1,\ldots ,\beta ^*_j) \nonumber \\&\quad +\mathbb {1}_{[\tau ^*_{j+1} < T]}\left\{ -c_{\beta ^*_j,\beta ^*_{j+1}}(\tau ^*_{j+1}) +Y^{\tau ^*_1,\ldots ,\tau ^*_{j+1};\beta ^*_1,\ldots ,\beta ^*_{j+1}}_{\tau ^*_{j+1}}\right\} \Big | {\mathcal {F}}_{s}\Big ], \end{aligned}$$
(8)

\({\mathbb {P}}\)-a.s. for \(s\in [\tau _j^*,\tau ^*_{j+1}]\). Then, in Step 3, we show that \(u^*\) is an optimal control, establishing (i) and (ii). A straightforward generalization to arbitrary initial conditions \((\mathbf{t },\mathbf{b })\in {\mathcal {D}}^f\) then gives that

$$\begin{aligned} Y^{\mathbf{t };\mathbf{b }}_s=\mathop {\mathrm{ess}\,\sup }_{u\in {\mathcal {U}}_{s\vee t_n}} \,{\mathbb {E}}\left[ \varPsi (\mathbf{t },\tau _1,\ldots ,\tau _N;\mathbf{b },\beta _1,\ldots ,\beta _N)-\sum _{j=1}^Nc_{\beta _{j-1},\beta _j}(\tau _j)\Big |{\mathcal {F}}_s\right] ,\qquad \end{aligned}$$
(9)

by which uniqueness follows.

Step 1 We start by showing that for each \((\mathbf{t },\mathbf{b })\in {\mathcal {D}}^f\) the recursion (5) can be written in terms of an \({\mathbb {F}}\)-stopping time. From (5) we note that, by definition, \(Y^{\mathbf{t };\mathbf{b }}\) is the smallest supermartingale that dominates

$$\begin{aligned} U^{\mathbf{t };\mathbf{b }}&:=\left( \mathbb {1}_{[s=T]}\varPsi (\mathbf{t };\mathbf{b })+\mathbb {1}_{[s < T]}\max _{\beta \in {\mathcal {I}}^{-b_n}}\left\{ -c_{b_n,\beta }(s\vee t_n) \nonumber \right. \right. \\&\quad \left. \left. +Y^{\mathbf{t },s\vee t_n;\mathbf{b },\beta }_s\right\} :\, 0\le s\le T\right) . \end{aligned}$$
(10)

Now, by Assumption 1(iii) and property (d) in the definition of a verification family (Definition 1) we note that \(U^{\mathbf{t };\mathbf{b }}\) is a càdlàg process of class [D] that is quasi-left continuous on [0, T). Furthermore, by Assumption 1(ii) and property (d) we get that for any sequence \((\eta _k)_{k\ge 0}\subset {\mathcal {T}}\) such that \(\eta _k\nearrow T\), \({\mathbb {P}}\)-a.s., we have \(\lim _{k\rightarrow \infty } U^{\mathbf{t };\mathbf{b }}_{\eta _k}\le U^{\mathbf{t };\mathbf{b }}_T\), \({\mathbb {P}}\)-a.s. By Theorem 1(iii) it thus follows that for any \(\theta \in {\mathcal {T}}\), there is a stopping time \(\gamma _\theta \in {\mathcal {T}}_{t_n\vee \theta }\) such that:

$$\begin{aligned} Y^{\mathbf{t };\mathbf{b }}_\theta ={\mathbb {E}}\left[ \mathbb {1}_{[\gamma _\theta = T]}\varPsi (\mathbf{t };\mathbf{b })+\mathbb {1}_{[\gamma _\theta < T]}\max _{\beta \in {\mathcal {I}}^{-b_n}}\left\{ -c_{b_n,\beta }(\gamma _\theta ) +Y^{\mathbf{t },\gamma _\theta ;\mathbf{b },\beta }_{\gamma _\theta }\right\} \Big | {\mathcal {F}}_\theta \right] . \end{aligned}$$

Step 2 Next, we show that \(Y_0=J(u^*)\). We start by noting that Y is the Snell envelope of

$$\begin{aligned} \left( \mathbb {1}_{[s=T]}\varPsi _0+\mathbb {1}_{[s < T]}\max _{\beta \in {\mathcal {I}}^{-b_0}}\left\{ -c_{b_0,\beta }(s)+Y^{s,\beta }_s\right\} :\, 0\le s\le T\right) , \end{aligned}$$

where \(\varPsi _0:=\varPsi (\emptyset )\), and by step 1 we thus have

$$\begin{aligned} Y_0&=\sup _{\tau \in {\mathcal {T}}} {\mathbb {E}}\left[ \mathbb {1}_{[\tau =T]}\varPsi _0+\mathbb {1}_{[\tau< T]}\max _{\beta \in {\mathcal {I}}^{-b_0}}\left\{ -c_{b_0,\beta }(\tau )+Y^{\tau ,\beta }_\tau \right\} \right] \\&={\mathbb {E}}\left[ \mathbb {1}_{[\tau ^*_1=T]}\varPsi _0+\mathbb {1}_{[\tau ^*_1< T]}\max _{\beta \in {\mathcal {I}}^{-b_0}}\left\{ -c_{b_0,\beta }(\tau ^*_1)+Y^{\tau ^*_1,\beta }_{\tau ^*_1}\right\} \right] \\&={\mathbb {E}}\left[ \mathbb {1}_{[\tau ^*_1=T]}\varPsi _0+\mathbb {1}_{[\tau ^*_1 < T]}\left\{ -c_{b_0,\beta ^*_1}(\tau ^*_1)+Y^{\tau ^*_1,\beta ^*_1}_{\tau ^*_1}\right\} \right] . \end{aligned}$$

Moving on, we pick \(j\in \{1,\ldots , N^*\}\). For \(M\ge 0\), let \(z_{-1}=-1\) and \(z_k:=kT/2^M\) for \(k=0,\ldots ,2^M\). Furthermore, we define the processes \(({\hat{Y}}^{M}_s:0\le s\le T)\) and \(({\hat{U}}^{M}_s:0\le s\le T)\) by

$$\begin{aligned} {\hat{Y}}^{M}_{s}&:=\sum _{(k_1,\ldots k_j)\in {\bar{{\mathbb {Z}}}}^j}\sum _{(b_1,\ldots ,b_j)\in {\bar{{\mathcal {I}}}}^j} {\mathbb {E}}\big [\mathbb {1}_{(z_{k_1-1},z_{k_1}]}(\tau ^*_1)\cdots \mathbb {1}_{(z_{k_j-1},z_{k_j}]}(\tau ^*_j) \mathbb {1}_{[\beta ^*_{1}=b_1]} \\&\qquad \cdots \mathbb {1}_{[\beta ^*_{j}=b_j]}\big |{\mathcal {F}}_s\big ] Y_s^{z_{k_1},\ldots ,z_{k_{j}};b_1,\ldots ,b_j}, \end{aligned}$$

and

$$\begin{aligned} {\hat{U}}^{M}_{s}&:=\sum _{(k_1,\ldots k_j)\in {\bar{{\mathbb {Z}}}}^j}\sum _{(b_1,\ldots ,b_j)\in {\bar{{\mathcal {I}}}}^j} {\mathbb {E}}\big [\mathbb {1}_{(z_{k_1-1},z_{k_1}]}(\tau ^*_1)\cdots \mathbb {1}_{(z_{k_j-1},z_{k_j}]}(\tau ^*_j) \mathbb {1}_{[\beta ^*_{1}=b_1]} \\&\qquad \cdots \mathbb {1}_{[\beta ^*_{j}=b_j]}\big |{\mathcal {F}}_s\big ]\Big (\mathbb {1}_{[s=T]}\varPsi (z_{k_1},\ldots ,z_{k_{j}};b_1,\ldots ,b_j) \\&\quad +\mathbb {1}_{[s < T]}\max _{\beta \in {\mathcal {I}}^{-b_j}}\Big \{-c_{b_j,\beta }(s\vee z_{k_{j}}) + Y^{z_{k_1},\ldots ,z_{k_{j}},s\vee z_{k_{j}};b_1,\ldots ,b_j,\beta }_s\Big \}\Big ), \end{aligned}$$

for all \(s\in [0,T]\), where \({\bar{{\mathbb {Z}}}}^j:=\{(k_1,\ldots ,k_j)\in \{0,\ldots ,2^M\}^j:k_1\le k_2\le \cdots \le k_j\}\). Now, for each \((k_1,\ldots ,k_j,b_1,\ldots ,b_j)\in {\bar{{\mathbb {Z}}}}^j\times {\bar{{\mathcal {I}}}}^j\) we have that

$$\begin{aligned} \mathbb {1}_{(z_{k_1-1},z_{k_1}]}(\tau ^*_1)\cdots \mathbb {1}_{(z_{k_j-1},z_{k_j}]}(\tau ^*_j) \mathbb {1}_{[\beta ^*_{1}=b_1]}\cdots \mathbb {1}_{[\beta ^*_{j}=b_j]} Y_s^{z_{k_1},\ldots ,z_{k_{j}};b_1,\ldots ,b_j}, \end{aligned}$$

is the product of an \({\mathcal {F}}_{\tau ^*_j}\)-measurable non-negative r.v. and a càdlàg supermartingale; thus, it is a càdlàg supermartingale for \(s\ge \tau ^*_j\). Hence, \({\hat{Y}}^{M}\) is the sum of a finite number of càdlàg supermartingales and thus a càdlàg supermartingale itself. By definition we find that \({\hat{Y}}^{M}\) dominates \({\hat{U}}^{M}\), which is of class [D] by Assumption 1(i) and property (b). To show that \({\hat{Y}}^{M}\) is in fact the Snell envelope of \({\hat{U}}^{M}\), assume that Z is another càdlàg supermartingale that dominates \({\hat{U}}^{M}\) for all \(s\in [\tau ^*_{j},T]\). Then for each \((k_1,\ldots ,k_j;b_1,\ldots ,b_j)\in {\bar{{\mathbb {Z}}}}^j\times {\bar{{\mathcal {I}}}}^j\) and \(s\ge \tau _j^*\), we have

$$\begin{aligned}&\mathbb {1}_{(z_{k_1-1},z_{k_1}]}(\tau ^*_1)\cdots \mathbb {1}_{(z_{k_j-1},z_{k_j}]}(\tau ^*_j) \mathbb {1}_{[\beta ^*_{1}=b_1]}\cdots \mathbb {1}_{[\beta ^*_{j}=b_j]}Z_s\\&\quad \ge \mathbb {1}_{(z_{k_1-1},z_{k_1}]}(\tau ^*_1)\cdots \mathbb {1}_{(z_{k_j-1},z_{k_j}]}(\tau ^*_j) \mathbb {1}_{[\beta ^*_{1}=b_1]}\\&\qquad \cdots \mathbb {1}_{[\beta ^*_{j}=b_j]}\Big (\varPsi (z_{k_1},\ldots ,z_{k_j};b_1,\ldots ,b_j)\\&\qquad +\mathbb {1}_{[s < T]}\max _{\beta \in {\mathcal {I}}^{-b_j}}\left\{ -c_{b_j,\beta }(s)+Y^{z_{k_1},\ldots ,z_{k_j},s;b_1,\ldots ,b_j,\beta }_s\right\} \Big ), \end{aligned}$$

\({\mathbb {P}}\)-a.s. which by (5) gives that

$$\begin{aligned}&\mathbb {1}_{(z_{k_1-1},z_{k_1}]}(\tau ^*_1)\cdots \mathbb {1}_{(z_{k_j-1},z_{k_j}]}(\tau ^*_j) \mathbb {1}_{[\beta ^*_{1}=b_1]}\cdots \mathbb {1}_{[\beta ^*_{j}=b_j]}Z_s\\&\quad \ge \mathbb {1}_{(z_{k_1-1},z_{k_1}]}(\tau ^*_1)\cdots \mathbb {1}_{(z_{k_j-1},z_{k_j}]}(\tau ^*_j) \mathbb {1}_{[\beta ^*_{1}=b_1]}\cdots \mathbb {1}_{[\beta ^*_{j}=b_j]}Y^{z_{k_1},\ldots ,z_{k_j};b_1,\ldots ,b_j}_s. \end{aligned}$$

Summing over all \((k_1,\ldots ,k_j;b_1,\ldots ,b_j)\in {\bar{{\mathbb {Z}}}}^j\times {\bar{{\mathcal {I}}}}^j\) we get \(Z_s\ge {\hat{Y}}^{M}_s\), \({\mathbb {P}}\)-a.s.

Noting that \({\hat{Y}}^{M}=Y^{\varGamma ^M(\tau _1^*,\ldots ,\tau _j^*);\beta _1^*,\ldots ,\beta _j^*}\) and using (6) of property (c) we find that \(\sup _{s\in [0,T]}|Y_s^{\tau ^*_1,\ldots ,\tau ^*_j;\beta ^*_1,\ldots ,\beta ^*_j}-{\hat{Y}}^{M}_{s}|\rightarrow 0\) in probability, as \(M\rightarrow \infty \). Hence, there is a subsequence \((M_k)_{k\ge 0}\) along which the limit is 0, \({\mathbb {P}}\)-a.s. Furthermore, since the convergence is uniform, the limit process is càdlàg.

By right-continuity of the switching costs and of \(\varPsi \), and by (7) of property (c), we have that \({\mathbb {E}}[\sup _{s\in [0,T]}|U_s-{\hat{U}}^{M_k}_{s}|^2]\rightarrow 0\) as \(k\rightarrow \infty \), where for notational simplicity we abuse the notation in (10) and let

$$\begin{aligned} U&:=\left( \mathbb {1}_{[s=T]}\varPsi (\tau ^*_1,\ldots ,\tau ^*_j;\beta ^*_1,\ldots ,\beta ^*_j)+\mathbb {1}_{[s < T]}\max _{\beta \in {\mathcal {I}}^{-\beta ^*_j}}\left\{ -c_{\beta ^*_j,\beta }(s)\right. \right. \\&\quad \left. \left. + Y^{\tau ^*_1,\ldots ,\tau ^*_j,s;\beta ^*_1,\ldots ,\beta ^*_j,\beta }_s\right\} :\, \tau ^*_{j}\le s\le T\right) . \end{aligned}$$

Hence, \((M_k)_{k\ge 0}\) has a subsequence \(({{\tilde{M}}}_k)_{k\ge 0}\) such that \(\sup _{s\in [0,T]}|U_s-{\hat{U}}^{{\tilde{M}}_k}_{s}|\rightarrow 0\), \({\mathbb {P}}\)-a.s. as \(k\rightarrow \infty \). This implies that U is a càdlàg process, which is of class [D] by Assumption 1(i) and property (b).

We thus have that \({\hat{U}}^{{\tilde{M}}_k}\) is a sequence of càdlàg processes, uniformly bounded in \(\mathcal {S}^2\), that converges uniformly in \(s\) to the càdlàg process U of class [D], and that \({\hat{Y}}^{{\tilde{M}}_k}\) is the Snell envelope of \({\hat{U}}^{{\tilde{M}}_k}\) for all \(k\ge 0\). By Lemma 1, \({\hat{Y}}^{{\tilde{M}}_k}\) then converges pointwise to the Snell envelope of U. Hence, \(\Big (Y^{\tau ^*_1,\ldots ,\tau ^*_j;\beta ^*_1,\ldots ,\beta ^*_j}_s:\, \tau ^*_{j}\le s\le T\Big )\) is the Snell envelope of U.

To arrive at the second equality in (8) we note that the results obtained in Step 1 imply that for any sequence \((\gamma _l)_{l\ge 0}\subset {\mathcal {T}}\) with \(\gamma _l\nearrow \gamma \in {\mathcal {T}}\) we have \(\lim _{l\rightarrow \infty }{\mathbb {E}}[{\hat{U}}^{M}_{\gamma _l}]\le {\mathbb {E}}[{\hat{U}}^{M}_{\gamma }]\) for all \(M\ge 1\). Now, for all \(k\ge 0\) this gives

$$\begin{aligned} \lim _{l\rightarrow \infty }{\mathbb {E}}[U_{\gamma _l}]&\le \lim _{l\rightarrow \infty }{\mathbb {E}}[{\hat{U}}^{{\tilde{M}}_k}_{\gamma _l}]+\lim _{l\rightarrow \infty }{\mathbb {E}}[|U_{\gamma _l}-{\hat{U}}^{{\tilde{M}}_k}_{\gamma _l}|]\\&\le {\mathbb {E}}[U_{\gamma }]+2{\mathbb {E}}\left[ \sup _{s\in [0,T]}|U_{s}-{\hat{U}}^{{\tilde{M}}_k}_{s}|\right] , \end{aligned}$$

where the last term can be made arbitrarily small; thus \(\lim _{l\rightarrow \infty }{\mathbb {E}}[U_{\gamma _l}]\le {\mathbb {E}}[U_{\gamma }]\), and (8) follows by Theorem 1(iii).

By induction we get that for each \(K\ge 0\),

$$\begin{aligned} Y_0&={\mathbb {E}}\left[ \mathbb {1}_{[N^*\le K]}\varPsi (\tau ^*_1,\ldots ,\tau ^*_{N^*};\beta ^*_1,\ldots ,\beta ^*_{N^*})-\sum _{j=1}^{K\wedge N^*}c_{\beta ^*_{j-1},\beta ^*_{j}}(\tau ^*_j)\right. \\&\quad \left. +\mathbb {1}_{[N^*> K]}\{-c_{\beta ^*_{K},\beta ^*_{K+1}}(\tau ^*_{K+1})+Y^{\tau ^*_1,\ldots ,\tau ^*_{K+1};\beta ^*_1,\ldots ,\beta ^*_{K+1}}_{\tau ^*_{K+1}}\}\right] . \end{aligned}$$

Now, arguing as in the proof of Proposition 1 and using property (b) we find that \(u^*\in {\mathcal {U}}^f\). Letting \(K\rightarrow \infty \) and using dominated convergence we conclude that \(Y_0=J(u^*)\).

Step 3 It remains to show that the strategy \(u^*\) is optimal. To do this we pick any other strategy \({\hat{u}}:=({\hat{\tau }}_1,\ldots ,{\hat{\tau }}_{{\hat{N}}};{\hat{\beta }}_1,\ldots ,{\hat{\beta }}_{{\hat{N}}})\in {\mathcal {U}}^f\). By the definition of \(Y_0\) in (5) we have

$$\begin{aligned} Y_0&\ge {\mathbb {E}}\left[ \mathbb {1}_{[{\hat{\tau }}_1 \ge T]}\varPsi _0 + \mathbb {1}_{[{\hat{\tau }}_1< T]}\max _{\beta \in {\mathcal {I}}^{-b_0}}\left\{ -c_{b_0,\beta }({\hat{\tau }}_1)+Y^{{\hat{\tau }}_1;\beta }_{{\hat{\tau }}_1}\right\} \right] \\&\ge {\mathbb {E}}\left[ \mathbb {1}_{[{\hat{\tau }}_1 \ge T]}\varPsi _0 + \mathbb {1}_{[{\hat{\tau }}_1 < T]}\left\{ -c_{b_0,{\hat{\beta }}_1}({\hat{\tau }}_1)+Y^{{\hat{\tau }}_1;{\hat{\beta }}_1}_{{\hat{\tau }}_1}\right\} \right] \end{aligned}$$

but in the same way

$$\begin{aligned} Y^{{\hat{\tau }}_1;{\hat{\beta }}_1}_{{\hat{\tau }}_1}\ge {\mathbb {E}}\Big [\mathbb {1}_{[{\hat{\tau }}_2 \ge T]}\varPsi ({\hat{\tau }}_1;{\hat{\beta }}_1) + \mathbb {1}_{[{\hat{\tau }}_2 < T]}\left\{ -c_{{\hat{\beta }}_1,{\hat{\beta }}_2}({\hat{\tau }}_2)+Y^{{\hat{\tau }}_1,{\hat{\tau }}_2;{\hat{\beta }}_1,{\hat{\beta }}_2}_{{\hat{\tau }}_2}\right\} \Big | {\mathcal {F}}_{{\hat{\tau }}_1}\Big ], \end{aligned}$$

\({\mathbb {P}}\)-a.s. By repeating this argument and using the dominated convergence theorem we find that \(J(u^*)\ge J({\hat{u}})\), which proves that \(u^*\) is in fact optimal. Repeating the above procedure with \((\mathbf{t },\mathbf{b })\in {\mathcal {D}}^f\) as initial condition, (9) follows. \(\square \)

The main difference between the above proof and the proof of Theorem 1 in the original work by Djehiche et al. (2009) is that, since the future reward at any time depends on the entire history of the control, we are forced to consider a family of processes indexed by an uncountable set rather than a q-tuple for some finite positive q. Hence, we cannot simply write \(Y^{\tau _1^*,\ldots ,\tau _j^*;\beta _1^*,\ldots ,\beta _j^*}\) as the sum of a finite number of Snell envelopes. To arrive at the above verification theorem we therefore impose the right-continuity constraint in Assumption 2(i). This effectively allows us to find two sequences of processes that approach, in \({\mathcal {S}}^2\), the value process corresponding to the optimal control on the one hand and the dominated process on the other.

4 Existence

Theorem 2 presumes existence of the verification family \(((Y^{\mathbf{t };\mathbf{b }}_s)_{0\le s\le T}: (\mathbf{t },\mathbf{b })\in {\mathcal {D}}^f)\). To obtain a satisfactory solution to Problem 1, we thus need to establish that a verification family exists. This is the topic of the present section. We follow the standard existence proof, which is based on a Picard iteration (see Carmona and Ludkovski 2008; Djehiche et al. 2009; Hamadène and Zhang 2010). We thus define a sequence \(((Y^{\mathbf{t };\mathbf{b },k}_s)_{0\le s\le T}: (\mathbf{t },\mathbf{b })\in {\mathcal {D}}^f)_{k\ge 0}\) of families of processes as

$$\begin{aligned} Y^{\mathbf{t };\mathbf{b },0}_s:={\mathbb {E}}\Big [\varPsi (\mathbf{t };\mathbf{b })\Big | {\mathcal {F}}_s\Big ] \end{aligned}$$
(11)

and

$$\begin{aligned} Y^{\mathbf{t };\mathbf{b },k}_s&:=\mathop {\mathrm{ess}\,\sup }_{\tau \in {\mathcal {T}}_{s\vee t_n}} {\mathbb {E}}\Big [\mathbb {1}_{[\tau \ge T]}\varPsi (\mathbf{t };\mathbf{b }) \nonumber \\&\quad +\mathbb {1}_{[\tau < T]}\max _{\beta \in {\mathcal {I}}^{-b_n}}\left\{ -c_{b_n,\beta }(\tau )+Y^{\mathbf{t },\tau ;\mathbf{b },\beta ,k-1}_\tau \right\} \Big | {\mathcal {F}}_s\Big ] \end{aligned}$$
(12)

for \(k\ge 1\).
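The Picard scheme (11)–(12) has a transparent discrete-time analogue: \(Y^{k}\) optimizes the first intervention and prices what comes after it with the previous iterate \(Y^{k-1}\). The sketch below is a purely illustrative, deterministic finite-grid version of our own construction (not the paper's probabilistic setting): rewards and costs are arrays, and the iterates increase towards the value with unrestricted switching.

```python
import numpy as np

def picard_switching(phi, psi, cost, tol=1e-12, max_iter=100):
    """Deterministic finite-grid analogue of the Picard iteration (11)-(12).

    phi  : (T, m) array of running rewards phi_b(t)
    psi  : (m,)   array of terminal rewards
    cost : (m, m) array of switching costs c_{b,b'} (diagonal unused)
    Returns the list of iterates Y^0, Y^1, ... until convergence.
    """
    T, m = phi.shape
    # Y^0: the value of never switching again, the analogue of (11)
    Y = np.zeros((T + 1, m))
    Y[T] = psi
    for t in range(T - 1, -1, -1):
        Y[t] = phi[t] + Y[t + 1]
    iterates = [Y]
    for _ in range(max_iter):
        prev = iterates[-1]
        Y = np.zeros((T + 1, m))
        Y[T] = psi
        for t in range(T - 1, -1, -1):
            for b in range(m):
                # either collect the running reward and continue in mode b ...
                cont = phi[t, b] + Y[t + 1, b]
                # ... or switch now and price the future with Y^{k-1}, as in (12)
                switch = max(-cost[b, bp] + prev[t, bp]
                             for bp in range(m) if bp != b)
                Y[t, b] = max(cont, switch)
        iterates.append(Y)
        if np.max(np.abs(Y - prev)) < tol:
            break
    return iterates
```

In the spirit of Proposition 4 below, the iterates are nondecreasing in k, and on a finite grid with positive costs they stabilize after finitely many iterations.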

Proposition 2

The sequence \(((Y^{\mathbf{t };\mathbf{b },k}_s)_{0\le s\le T}: (\mathbf{t },\mathbf{b })\in {\mathcal {D}}^f)_{k\ge 0}\) is uniformly bounded in the sense that there is a \(K>0\) such that,

$$\begin{aligned} \sup _{u\in {\mathcal {U}}}{\mathbb {E}}\left[ \sup _{s\in [0,T]}|Y^{\tau _1,\ldots ;\beta _1,\ldots ,k}_s|^2\right] \le K, \end{aligned}$$

and for all \((\mathbf{t },\mathbf{b })\in {\mathcal {D}}^f\) and \(b\in {\mathcal {I}}^{-b_n}\), we have

$$\begin{aligned} {\mathbb {E}}\left[ \sup _{s\in [0,T]}|Y^{\mathbf{t },s\vee t_n;\mathbf{b },b,k}_s|^2\right] \le K, \end{aligned}$$

for all \(k\ge 0\).

Proof

By the definition of \(Y^{\mathbf{t };\mathbf{b },k}\) we have that for any \(u\in {\mathcal {U}}^f\),

$$\begin{aligned} {\mathbb {E}}\Big [\varPsi (\tau _1,\ldots ;\beta _1,\ldots )\big |{\mathcal {F}}_s\Big ]\le Y^{\tau _1,\ldots ;\beta _1,\ldots ,k}_s&\le \mathop {\mathrm{ess}\,\sup }_{{\hat{u}}\in {\mathcal {U}}}{\mathbb {E}}\Big [\varPsi ({\hat{\tau }}_1,\ldots ;{\hat{\beta }}_1,\ldots )\big |{\mathcal {F}}_s\Big ]. \end{aligned}$$

By Doob’s maximal inequality we have that for any \({\hat{u}}:=({\hat{\tau }}_1,\ldots ;{\hat{\beta }}_1,\ldots )\in {\mathcal {U}}\)

$$\begin{aligned} {\mathbb {E}}\left[ \sup _{s\in [0,T]}{\mathbb {E}}\Big [|\varPsi ({\hat{\tau }}_1,\ldots ;{\hat{\beta }}_1,\ldots )|\big |{\mathcal {F}}_s\Big ]^2\right] \le C{\mathbb {E}}\left[ |\varPsi ({\hat{\tau }}_1,\ldots ;{\hat{\beta }}_1,\ldots )|^2\right] . \end{aligned}$$

Taking the supremum over all \({\hat{u}}\in {\mathcal {U}}\) on both sides and using that the right hand side is uniformly bounded by Assumption 1(i.a), the first bound follows.

Concerning the second claim, note that

$$\begin{aligned}&{\mathbb {E}}\left[ \sup _{s\in [0,T]}|Y^{\mathbf{t },s\vee t_n;\mathbf{b },b,k}_s|^2\right] \\&\quad \le \sup _{u\in {\mathcal {U}}} {\mathbb {E}}\left[ \sup _{s\in [0,T]}{\mathbb {E}}[\sup _{r\in [t_n,T]}|\varPsi (\mathbf{t },r,\tau _1\vee r,\ldots ;\mathbf{b },b,\beta _1,\ldots )|\big |{\mathcal {F}}_s]^2\right] . \end{aligned}$$

Now, arguing as above we find that

$$\begin{aligned} {\mathbb {E}}\left[ \sup _{s\in [0,T]}|Y^{\mathbf{t },s\vee t_n;\mathbf{b },b,k}_s|^2\right] \le C\sup _{u\in {\mathcal {U}}}{\mathbb {E}}\left[ \sup _{r\in [t_n,T]}|\varPsi (\mathbf{t },r,\tau _1\vee r,\ldots ;\mathbf{b },b,\beta _1,\ldots )|^2\right] \end{aligned}$$

where the right hand side is bounded by Assumption 1(i.b). \(\square \)

Proposition 3

The family of processes \(((Y^{\mathbf{t };\mathbf{b },k}_s)_{0\le s\le T}: (\mathbf{t },\mathbf{b })\in {\mathcal {D}}^f)\) satisfies:

  1. (i)

    For every \(n\ge 1\) and every \((\eta ,\mathbf{b })\in {\bar{{\mathcal {T}}}}^n\times {\bar{{\mathcal {I}}}}^n\) and \(b\in {\mathcal {I}}^{-b_n}\) we have

    $$\begin{aligned} {\mathbb {E}}\left[ \sup _{s\in [0,T]}|Y^{\varGamma ^l(\eta );\mathbf{b },k}_s-Y^{\eta ;\mathbf{b },k}_s|^2\right] \rightarrow 0 \end{aligned}$$

    and

    $$\begin{aligned} {\mathbb {E}}\left[ \sup _{s\in [0,T]}|Y^{\varGamma ^l(\eta ),s\vee \varGamma ^l(\eta _n);\mathbf{b },b_n,k}_s-Y^{\eta ,s\vee \eta _n;\mathbf{b },b_n,k}_s|^2\right] \rightarrow 0, \end{aligned}$$

    as \(l\rightarrow \infty \) uniformly in k.

  2. (ii)

    For every \((\mathbf{t },\mathbf{b })\in {\mathcal {D}}^f\) and every \(b\in {\mathcal {I}}^{-b_n}\), the process \((Y^{\mathbf{t },s\vee t_n;\mathbf{b },b,k}_s:0\le s\le T)\) is in \({\mathcal {S}}_{\textit{qlc}}^2\) for \(k=0,1,\ldots \)

Proof

The proof follows by induction; we use (i’) to denote the first statement in (i) without the uniformity in k.

For \(k=0\), we have \(Y^{\mathbf{t },\cdot \vee t_n;\mathbf{b },b,0}_\cdot =V_\cdot ^{\mathbf{t },\cdot \vee t_n;\mathbf{b },b,\emptyset }\in {\mathcal {S}}_{\textit{qlc}}^2\) by Assumption 2(ii), and (i’) follows from Assumption 2(i). Now, assume that there is a \(k'\ge 0\) such that (i’) and (ii) hold for all \(k\le k'\). Applying a reasoning similar to that in the proof of Theorem 2 we find that

$$\begin{aligned} Y^{\mathbf{t };\mathbf{b },k'+1}_s=\mathop {\mathrm{ess}\,\sup }_{u \in {\mathcal {U}}^{k'+1}_{s\vee t_n}} V^{\mathbf{t };\mathbf{b },u}_s. \end{aligned}$$

But then by Assumption 2 we find that (i’) and (ii) hold for \(k'+1\). By induction (i’) and (ii) hold for all \(k\ge 0\).

It remains to show that (i) holds. By the above reasoning we find that, for each k we have

$$\begin{aligned}&{\mathbb {E}}\left[ \sup _{s\in [0,T]}|Y^{\varGamma ^l(\eta );\mathbf{b },k}_s-Y^{\eta ;\mathbf{b },k}_s|^2\right] \\&\quad \le {\mathbb {E}}\left[ \sup _{s\in [0,T]}|(Y^{\varGamma ^l(\eta );\mathbf{b },k}_s-Y^{\eta ;\mathbf{b },k}_s)^+|^2\right] + {\mathbb {E}}\left[ \sup _{s\in [0,T]}|(Y^{\eta ;\mathbf{b },k}_s - Y^{\varGamma ^l(\eta );\mathbf{b },k}_s)^+|^2\right] \\&\quad \le \sup _{u\in {\mathcal {U}}_{\varGamma ^l(\eta _n)}}{\mathbb {E}}\left[ \sup _{s\in [0,T]}|(V^{\varGamma ^l(\eta );\mathbf{b },u}_s-V^{\eta ;\mathbf{b },u}_s)^+|^2\right] \\&\qquad + \sup _{u\in {\mathcal {U}}}{\mathbb {E}}\left[ \sup _{s\in [0,T]}|(V^{\eta ;\mathbf{b },u}_s-V^{\varGamma ^l(\eta );\mathbf{b },{\hat{u}}^l}_s)^+|^2\right] \end{aligned}$$

where the right hand side of the last inequality does not depend on k and tends to zero as \(l\rightarrow \infty \) by Assumption 2(i). The second statement in (i) follows by an identical argument. \(\square \)

Corollary 1

For each \(k\ge 0\) and each \(s\in [0,T]\) there is a \(u^k=(\tau ^k_1,\ldots ,\tau ^k_{N^k};\beta ^k_1,\ldots ,\beta ^k_{N^k})\in {\mathcal {U}}^k_{t_n\vee s}\), such that

$$\begin{aligned} Y^{\mathbf{t };\mathbf{b },k}_s={\mathbb {E}}\bigg [\varPsi (\mathbf{t },\tau ^k_1,\ldots ,\tau ^k_{N^k};\mathbf{b },\beta ^k_1,\ldots ,\beta ^k_{N^k}) -\sum _{j=1}^{N^k}c_{\beta ^k_{j-1},\beta ^k_{j}}(\tau ^k_j)\Big |{\mathcal {F}}_s\bigg ], \end{aligned}$$

with \(\beta ^k_0=b_n\).

Proof

Follows from the definition of \(Y^{\mathbf{t };\mathbf{b },k}\) and Propositions 2 and 3 by applying the same argument as in the proof of the verification theorem (Theorem 2). \(\square \)

Proposition 4

For each \((\mathbf{t },\mathbf{b })\in {\mathcal {D}}^f\), the limit \({\bar{Y}}^{\mathbf{t };\mathbf{b }}:=\lim _{k\rightarrow \infty }Y^{\mathbf{t };\mathbf{b },k}\), exists as an increasing pointwise limit, \({\mathbb {P}}\)-a.s. Furthermore, the process \({\bar{Y}}^{\mathbf{t },\cdot \vee t_n;\mathbf{b },b}_\cdot \) is càdlàg for each \(b\in {\mathcal {I}}^{-b_n}\).

Proof

Since \({\mathcal {U}}^k_t\subset {\mathcal {U}}^{k+1}_t\) we have that, \({\mathbb {P}}\)-a.s.,

$$\begin{aligned} Y^{\mathbf{t };\mathbf{b },k}_s \le Y^{\mathbf{t };\mathbf{b },k+1}_s\le \mathop {\mathrm{ess}\,\sup }_{u\in {\mathcal {U}}}{\mathbb {E}}\Big [|\varPsi (\tau _1,\ldots ;\beta _1,\ldots )|\big |{\mathcal {F}}_s\Big ], \end{aligned}$$

where the right hand side is bounded \({\mathbb {P}}\)-a.s. by Proposition 2. Hence, for each \((\mathbf{t },\mathbf{b })\in {\mathcal {D}}^f\) and \(s\in [0,T]\), the sequence \((Y^{\mathbf{t };\mathbf{b },k}_s)_{k\ge 0}\) is increasing and \({\mathbb {P}}\)-a.s. bounded, and thus converges \({\mathbb {P}}\)-a.s.

Concerning the second claim, note that for \(p\in (1,2)\), we have

$$\begin{aligned}&\sup _{s\in [0,T]}Y^{\mathbf{t },s\vee t_n;\mathbf{b },b,k}_s\le \sup _{s\in [0,T]}\sup _{r\in [0,T]}Y^{\mathbf{t },r\vee t_n;\mathbf{b },b,k}_s \\&\quad \le \sup _{s\in [0,T]}\mathop {\mathrm{ess}\,\sup }_{u\in {\mathcal {U}}}{\mathbb {E}}[\sup _{r\in [t_n,T]}|\varPsi (\mathbf{t },r,\tau _1\vee r,\ldots ;\mathbf{b },b,\beta _1,\ldots )|\big |{\mathcal {F}}_s] \\&\quad \le 1+\sup _{s\in [0,T]}\mathop {\mathrm{ess}\,\sup }_{u\in {\mathcal {U}}}{\mathbb {E}}[\sup _{r\in [t_n,T]}|\varPsi (\mathbf{t },r,\tau _1\vee r,\ldots ;\mathbf{b },b,\beta _1,\ldots )|^p\big |{\mathcal {F}}_s]=: K(\omega ) \end{aligned}$$

for all \(k\ge 0\) (where the inequalities hold \({\mathbb {P}}\)-a.s.). Now, arguing as in the proof of Proposition 2 we have

$$\begin{aligned}&{\mathbb {E}}\left[ \sup _{s\in [0,T]}\mathop {\mathrm{ess}\,\sup }_{u\in {\mathcal {U}}}{\mathbb {E}}[\sup _{r\in [t_n,T]}|\varPsi (\mathbf{t },r,\tau _1\vee r,\ldots ;\mathbf{b },b,\beta _1,\ldots )|^p\big |{\mathcal {F}}_s]^{2/p}\right] \\&\quad \le C\sup _{u\in {\mathcal {U}}}{\mathbb {E}}\left[ \sup _{r\in [t_n,T]}|\varPsi (\mathbf{t },r,\tau _1\vee r,\ldots ;\mathbf{b },b,\beta _1,\ldots )|^2\right] <\infty . \end{aligned}$$

We thus conclude that there is a \({\mathbb {P}}\)-null set \({\mathcal {N}}\) such that for each \(\omega \in \varOmega \setminus {\mathcal {N}}\) we have \(K(\omega )<\infty \).

By the “no-free-loop” condition [Assumption 1(iiib)] and the finiteness of \({\mathcal {I}}\) we get that for any control \((\tau _1,\ldots ,\tau _N;\beta _1,\ldots ,\beta _N)\),

$$\begin{aligned} \sum _{j=1}^{N}c_{\beta _{j-1},\beta _{j}}(\tau _j)\ge \epsilon (N-m)/m, \end{aligned}$$
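The counting behind this inequality is a pigeonhole argument; a sketch, under the reading of the no-free-loop condition that switching costs are nonnegative and every loop of modes costs at least \(\epsilon \):

```latex
% Since |I| = m, any m+1 consecutively visited modes \beta_i,...,\beta_{i+m}
% contain a repetition, i.e. a loop, whose total cost is at least \epsilon.
% Splitting the N switches into \lfloor N/m \rfloor disjoint blocks of m
% switches, each block contributes at least \epsilon and the remaining
% (nonnegative) costs are discarded:
\sum_{j=1}^{N} c_{\beta_{j-1},\beta_j}(\tau_j)
  \;\ge\; \epsilon \Big\lfloor \frac{N}{m} \Big\rfloor
  \;\ge\; \epsilon\,\frac{N-m}{m}.
```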

\({\mathbb {P}}\)-a.s. For \(\omega \in \varOmega \setminus {\mathcal {N}}\) (in the remainder of the proof \({\mathcal {N}}\) denotes a generic \({\mathbb {P}}\)-null set), we thus have

$$\begin{aligned} -K(\omega )\le Y^{\mathbf{t },s\vee t_n;\mathbf{b },b,k}_s(\omega )&\le {\mathbb {E}}[\varPsi (\mathbf{t },s\vee t_n,\tau ^k_1,\ldots ,\tau ^k_{N^k};\mathbf{b },b,\beta ^k_1,\ldots ,\beta ^k_{N^k}) \\&\quad -\epsilon (N^k/m-1)|{\mathcal {F}}_s](\omega ) \\&\le K(\omega )+\epsilon -(\epsilon /m){\mathbb {E}}[ N^k |{\mathcal {F}}_s](\omega ), \end{aligned}$$

where \((\tau ^k_1,\ldots ,\tau ^k_{N^k};\beta ^k_1,\ldots ,\beta ^k_{N^k})\in {\mathcal {U}}^{k}_{s\vee t_n}\) is a control corresponding to \(Y^{\mathbf{t },s\vee t_n;\mathbf{b },b,k}_s\). This implies that for \(k'>0\) we have,

$$\begin{aligned} {\mathbb {P}}[N^k>k' |{\mathcal {F}}_s](\omega )\le (2K(\omega )m/\epsilon +m)/k'. \end{aligned}$$
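For completeness, the passage from the preceding display to this tail bound is a conditional Markov inequality; a short sketch:

```latex
% Rearranging  -K(\omega) \le K(\omega) + \epsilon - (\epsilon/m)\,E[N^k | F_s](\omega)
% gives
{\mathbb E}\big[N^k \,\big|\, {\mathcal F}_s\big](\omega)
  \;\le\; \frac{m}{\epsilon}\big(2K(\omega)+\epsilon\big)
  \;=\; 2K(\omega)m/\epsilon + m,
% so that, by the conditional Markov inequality,
{\mathbb P}\big[N^k > k' \,\big|\, {\mathcal F}_s\big](\omega)
  \;\le\; \frac{{\mathbb E}[N^k \,|\, {\mathcal F}_s](\omega)}{k'}
  \;\le\; \frac{2K(\omega)m/\epsilon + m}{k'}.
```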

Now, for all \(0\le k'\le k\) we have,

$$\begin{aligned}&\breve{Y}^{\mathbf{t },s\vee t_n;\mathbf{b },b,k,k'}_s:={\mathbb {E}}\bigg [\varPsi (\mathbf{t },s\vee t_n,\tau ^k_1,\ldots ,\tau ^k_{N^k\wedge k'};\mathbf{b },b,\beta ^k_1,\ldots ,\beta ^k_{N^k\wedge k'}) \\&\quad - \sum _{j=1}^{N^k\wedge k'}c_{\beta ^k_{j-1},\beta ^k_{j}}(\tau ^k_j)\Big |{\mathcal {F}}_s\bigg ]\le Y^{\mathbf{t },s\vee t_n;\mathbf{b },b,k'}_s\le Y^{\mathbf{t },s\vee t_n;\mathbf{b },b,k}_s, \end{aligned}$$

where we introduced the process \(\breve{Y}^{\mathbf{t },\cdot \vee t_n;\mathbf{b },b,k,k'}\) corresponding to the truncation \((\tau ^k_1,\ldots ,\tau ^k_{N^k\wedge k'};\beta ^k_1,\ldots ,\beta ^k_{N^k\wedge k'})\) of the optimal control. As the truncation only affects the performance of the controller when \(N^k>k'\) we have

$$\begin{aligned}&Y^{\mathbf{t },s\vee t_n;\mathbf{b },b,k}_s-\breve{Y}^{\mathbf{t },s\vee t_n;\mathbf{b },b,k,k'}_s \\&\quad ={\mathbb {E}}\bigg [\mathbb {1}_{[N^k>k']}\Big (\varPsi (\mathbf{t },s\vee t_n,\tau ^k_1,\ldots ,\tau ^k_{N^k};\mathbf{b },b,\beta ^k_1,\ldots ,\beta ^k_{N^k}) - \sum _{j=1}^{N^k}c_{\beta ^k_{j-1},\beta ^k_{j}}(\tau ^k_j) \\&\qquad -\varPsi (\mathbf{t },s\vee t_n,\tau ^k_1,\ldots ,\tau ^k_{N^k\wedge k'};\mathbf{b },b,\beta ^k_1,\ldots ,\beta ^k_{N^k\wedge k'}) + \sum _{j=1}^{N^k\wedge k'}c_{\beta ^k_{j-1},\beta ^k_{j}}(\tau ^k_j)\Big )\Big |{\mathcal {F}}_s\bigg ] \\&\quad \quad \le {\mathbb {E}}\bigg [\mathbb {1}_{[N^k>k']}\Big (\varPsi (\mathbf{t },s\vee t_n,\tau ^k_1,\ldots ,\tau ^k_{N^k};\mathbf{b },b,\beta ^k_1,\ldots ,\beta ^k_{N^k}) \\&\qquad -\varPsi (\mathbf{t },s\vee t_n,\tau ^k_1,\ldots ,\tau ^k_{N^k\wedge k'};\mathbf{b },b,\beta ^k_1,\ldots ,\beta ^k_{N^k\wedge k'}) \Big )\Big |{\mathcal {F}}_s\bigg ]. \end{aligned}$$

Applying Hölder’s inequality we get that for \(\omega \in \varOmega \setminus {\mathcal {N}}\),

$$\begin{aligned}&Y^{\mathbf{t },s\vee t_n;\mathbf{b },b,k}_s(\omega )-\breve{Y}^{\mathbf{t },s\vee t_n;\mathbf{b },b,k,k'}_s(\omega ) \\&\quad \le 2{\mathbb {E}}[\mathbb {1}_{[N^k>k']}|{\mathcal {F}}_s]^{1/q}(\omega ) \\&\qquad \times \mathop {\mathrm{ess}\,\sup }_{u\in {\mathcal {U}}}{\mathbb {E}}\left[ \sup _{r\in [t_n,T]}|\varPsi (\mathbf{t },r,\tau _1\vee r,\ldots ;\mathbf{b },b,\beta _1,\ldots )|^p\big |{\mathcal {F}}_s\right] ^{1/p}(\omega ) \\&\quad \le 2((2K(\omega )m/\epsilon +m)/k')^{1/q}(K(\omega ))^{1/p}, \end{aligned}$$

with \(\frac{1}{p}+\frac{1}{q}=1\). There is thus a constant \(C=C(\omega )\) such that

$$\begin{aligned} Y^{\mathbf{t },s\vee t_n;\mathbf{b },b,k}_s(\omega )-Y^{\mathbf{t },s\vee t_n;\mathbf{b },b,k'}_s(\omega )\le C(k')^{-1/q}, \end{aligned}$$

for all \(s\in [0,T]\). We conclude that for all \(\omega \in \varOmega \setminus {\mathcal {N}}\), the sequence \((Y^{\mathbf{t },\cdot \vee t_n;\mathbf{b },b,k}_\cdot (\omega ))_{k\ge 0}\) is a sequence of càdlàg functions that converges uniformly, which implies that the limit is a càdlàg function. \(\square \)

Proposition 5

The family \((({\bar{Y}}^{\mathbf{t };\mathbf{b }}_s)_{0\le s\le T}:(\mathbf{t },\mathbf{b })\in {\mathcal {D}}^f)\) is a verification family.

Proof

As \({\bar{Y}}^{\mathbf{t };\mathbf{b }}\) is the pointwise limit of an increasing sequence of càdlàg supermartingales it is a càdlàg supermartingale (see p. 86 in Dellacherie and Meyer (1980)). We treat each remaining property in the definition of a verification family separately:

  1. (a)

    Applying the convergence result to the right hand side of (12) and using the fact that, by Proposition 4,

    $$\begin{aligned} \mathbb {1}_{[s\ge T]}\varPsi (\mathbf{t };\mathbf{b })+\mathbb {1}_{[s < T]}\max _{\beta \in {\mathcal {I}}^{-b_n}}\left\{ -c_{b_n,\beta }(s)+{\bar{Y}}^{\mathbf{t },s\vee t_n;\mathbf{b },\beta }_s\right\} \end{aligned}$$

    is a càdlàg process, (iv) of Theorem 1 gives

    $$\begin{aligned} {\bar{Y}}^{\mathbf{t };\mathbf{b }}_s:=\mathop {\mathrm{ess}\,\sup }_{\tau \in {\mathcal {T}}_{s}} {\mathbb {E}}\Big [&\mathbb {1}_{[\tau \ge T]}\varPsi (\mathbf{t };\mathbf{b })+\mathbb {1}_{[\tau < T]}\max _{\beta \in {\mathcal {I}}^{-b_n}}\left\{ -c_{b_n,\beta }(\tau )+{\bar{Y}}^{\mathbf{t },\tau ;\mathbf{b },\beta }_\tau \right\} \Big | {\mathcal {F}}_s\Big ]. \end{aligned}$$
  2. (b)

    Uniform boundedness was shown in Proposition 2.

  3. (c)

    We have

    $$\begin{aligned} \lim _{l\rightarrow \infty }{\mathbb {E}}\left[ \sup _{s\in [0,T]}|{\bar{Y}}^{\varGamma ^l(\eta );\mathbf{b }}_s-{\bar{Y}}^{\eta ;\mathbf{b }}_s|^2\right]&=\lim _{l\rightarrow \infty } {\mathbb {E}}\left[ \sup _{s\in [0,T]}\lim _{k\rightarrow \infty }|Y^{\varGamma ^l(\eta );\mathbf{b },k}_s-Y^{\eta ;\mathbf{b },k}_s|^2\right] \\&\le \lim _{l\rightarrow \infty }\lim _{k\rightarrow \infty }{\mathbb {E}}\left[ \sup _{s\in [0,T]}|Y^{\varGamma ^l(\eta );\mathbf{b },k}_s-Y^{\eta ;\mathbf{b },k}_s|^2\right] \\&= \lim _{k\rightarrow \infty }\lim _{l\rightarrow \infty }{\mathbb {E}}\left[ \sup _{s\in [0,T]}|Y^{\varGamma ^l(\eta );\mathbf{b },k}_s-Y^{\eta ;\mathbf{b },k}_s|^2\right] \\&=0 \end{aligned}$$

    where taking limits is interchangeable due to the uniform convergence shown in Proposition 3(i). The second statement in (c), that is equation (7), follows by an identical argument.

  4. (d)

    We know from Proposition 4 that \({\bar{Y}}^{\mathbf{t },\cdot \vee t_n;\mathbf{b },b}_\cdot \) is càdlàg and by Proposition 2 it follows that \({\bar{Y}}^{\mathbf{t },\cdot \vee t_n;\mathbf{b },b}_\cdot \in {\mathcal {S}}^2\). It remains to show that \({\bar{Y}}^{\mathbf{t },\cdot \vee t_n;\mathbf{b },b}_\cdot \) is quasi-left continuous. Using the notation from the proof of Proposition 4 we have for \(k\ge 0\),

    $$\begin{aligned}&|{\bar{Y}}^{\mathbf{t },\gamma _j(\omega )\vee t_n;\mathbf{b },b}_{\gamma _j(\omega )}(\omega )-{\bar{Y}}^{\mathbf{t },\gamma (\omega )\vee t_n;\mathbf{b },b}_{\gamma (\omega )}(\omega )|\\&\quad \le |Y^{\mathbf{t },\gamma _j(\omega )\vee t_n;\mathbf{b },b,k}_{\gamma _j(\omega )}(\omega ) -Y^{\mathbf{t },\gamma (\omega )\vee t_n;\mathbf{b },b,k}_{\gamma (\omega )}(\omega )|+2C(\omega )k^{-1/q}, \end{aligned}$$

    for all \(\omega \in \varOmega \setminus {\mathcal {N}}\) with \({\mathbb {P}}({\mathcal {N}})=0\). By Proposition 3(ii) the first part tends to zero \({\mathbb {P}}\)-a.s. as \(j\rightarrow \infty \). Since k was arbitrary and C is \({\mathbb {P}}\)-a.s. finite, the desired result follows. This finishes the proof.

\(\square \)

5 Application to SDDEs with controlled volatility

We now move to the case of impulse control of SDDEs. We begin, however, by formalizing the hydro-power production problem proposed as a motivating example in the introduction.

5.1 Continuous time hydro-power planning

The increasing competitiveness of electricity markets calls for new operational standards in electric power production facilities. It has previously been acknowledged that optimal switching can be useful in deriving production schedules that maximize the revenue from electricity production (Carmona and Ludkovski 2008; Djehiche et al. 2009; Kharroubi 2016). Here we extend the applicability of optimal switching by introducing a new example: the coordinated operation of hydro-power plants interconnected by hydrological coupling.

We consider the situation where a central operator controls the output of two hydropower stations located in the same river (but note that the model is easily extended to consider an entire system of power stations).

We assume that Plant i, for \(i=1,2\), has:

  • A reservoir containing a volume \(Z^i_t\) \(\hbox {m}^3\) of water at time t.

  • A stochastic inflow \(V^i_t\) \(\hbox {m}^3\)/s to the reservoir that is modeled by a jump diffusion process.

  • \(\kappa _i\) turbines that can be either “in operation”, producing \(p_i(Z^i_t)\) MW by releasing \(\alpha _i\) \(\hbox {m}^3\)/s of water through the turbine, or “idle”.

We assume that the power plants are hydrologically connected in such a way that the water that passes through Plant 1 will reach the reservoir of Plant 2 after \(\delta \ge 0\) seconds.

We assume that we control the number of turbines in operation in each of the two plants. We thus let \({\mathcal {I}}:=\{0,1,\ldots ,\kappa _1\}\times \{0,1,\ldots ,\kappa _2\}\). The dynamics of the involved processes are then given by

$$\begin{aligned} dV_t&=a(t,V_t)dt+\sigma (t,V_t)dW_t+\int _{{\mathbb {R}}^2\setminus \{0\}}\gamma (t,V_{t-},z)\varGamma (dt,dz)\\ dZ^1_t&=(V^1_t-\alpha _1\xi ^1_t)dt\\ dZ^2_t&=(V^2_t-\alpha _2\xi ^2_t+\alpha _1\xi ^1_{t-\delta })dt\\ (V_0,Z_0)&=(v_0,z_0)\in {\mathbb {R}}_+^4 \end{aligned}$$

and an appropriate reward functional is

$$\begin{aligned} J(u):={\mathbb {E}}\left[ \int _0^TR_t(\xi ^1_t p_1(Z^1_t)+\xi ^2_t p_2(Z^2_t))dt+q(Z^1_T,Z^2_T)\right] , \end{aligned}$$

where \(R_t\) is the (stochastic) electricity price at time t and \(q:{\mathbb {R}}_ +^2\rightarrow {\mathbb {R}}\) is the value of water (per \(\hbox {m}^3\)) stored in the reservoirs at the end of the operation period.
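For intuition, the reservoir dynamics can be discretized with a forward Euler step. The sketch below simulates \(Z^1,Z^2\) under a fixed turbine schedule; all parameter values, and the convention that \(\xi ^1\) is zero before time 0, are illustrative assumptions of ours, not part of the model.

```python
import numpy as np

def simulate_reservoirs(xi1, xi2, inflow1, inflow2, alpha1, alpha2,
                        z0=(1e6, 1e6), delay_steps=10, dt=60.0):
    """Forward-Euler sketch of the reservoir volumes Z^1, Z^2.

    xi1, xi2       : number of turbines running per step (the control xi^i)
    inflow1/2      : inflows V^i_t in m^3/s per step
    alpha1, alpha2 : discharge per running turbine in m^3/s
    delay_steps    : hydrological delay delta expressed in grid steps
    xi^1 is taken to be 0 before time 0 (an assumption of this sketch).
    """
    n = len(xi1)
    z1 = np.empty(n + 1)
    z2 = np.empty(n + 1)
    z1[0], z2[0] = z0
    for t in range(n):
        # water released through Plant 1 reaches Plant 2 delay_steps later
        delayed = alpha1 * xi1[t - delay_steps] if t >= delay_steps else 0.0
        z1[t + 1] = z1[t] + (inflow1[t] - alpha1 * xi1[t]) * dt
        z2[t + 1] = z2[t] + (inflow2[t] - alpha2 * xi2[t] + delayed) * dt
    return z1, z2
```

With the inflow to Plant 1 matched to its discharge, \(Z^1\) stays constant while \(Z^2\) starts rising only after the hydrological delay has elapsed.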

5.2 A general SDDE model

Motivated by the above example we assume that \({\mathbb {F}}\) is the completed filtration generated by a d-dimensional Brownian motion W and a d-dimensional, independent, finite activity, Poisson random measure \(\varGamma \) with intensity measure \(\nu (ds; dz) = ds \times \mu (dz)\), where \(\mu \) is the Lévy measure on \({\mathbb {R}}^d\) of \(\varGamma \) and \({\tilde{\varGamma }}(ds; dz) := (\varGamma - \nu )(ds; dz)\) is the compensated jump martingale random measure of \(\varGamma \).

For \(u\in {\mathcal {U}}\), we let \(X^{u,0}\) solve

$$\begin{aligned} dX^{u,0}_t&=a(t,X^{u,0}_t,X^{u,0}_{t-\delta })dt+\sigma (t,X^{u,0}_t,X^{u,0}_{t-\delta })dW_t\nonumber \\&\quad +\int _{{\mathbb {R}}^d\setminus \{0\}} \gamma (t,X^{u,0}_{t-},X^{u,0}_{t-\delta },z){\tilde{\varGamma }}(dt,dz),\quad \mathrm{for\, all}\,t\in (0,T], \end{aligned}$$
(13)
$$\begin{aligned} X^{u,0}_{s}&=\chi (s),\quad s\in [-\delta ,0], \end{aligned}$$
(14)

where \(\delta >0\) is a constant and \(\chi :[-\delta ,0]\rightarrow {\mathbb {R}}^d\) is a deterministic càdlàg function with \(\sup _{s\in [-\delta ,0]}|\chi (s)|\le C\), and define recursively

$$\begin{aligned} dX^{u,j}_t&=a(t,X^{u,j}_t,X^{u,j}_{t-\delta })dt+\sigma (t,X^{u,j}_t,X^{u,j}_{t-\delta })dW_t\nonumber \\&\quad +\int _{{\mathbb {R}}^d\setminus \{0\}} \gamma (t,X^{u,j}_{t-},X^{u,j}_{t-\delta },z){\tilde{\varGamma }}(dt,dz),\quad \mathrm{for\, all}\,t\in (\tau _{j},T], \end{aligned}$$
(15)
$$\begin{aligned} X^{u,j}_{\tau _j}&=h_{\beta _{j-1},\beta _j}(\tau _j,X^{u,j-1}_{\tau _j}) \end{aligned}$$
(16)
$$\begin{aligned} X^{u,j}_{s}&=X^{u,j-1}_{s},\quad s\in [-\delta ,\tau _j). \end{aligned}$$
(17)

Finally, we define the controlled process \(X^u\) as \(X^u:=\lim _{j\rightarrow \infty }X^{u,j}\) on [0, T) and \(X^u_T:=\limsup _{j\rightarrow \infty }X^{u,j}_T\).
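A minimal Euler–Maruyama sketch of the recursion (15)–(17), under simplifying assumptions: the compensated Poisson term is dropped, the delay \(\delta \) is a multiple of the step size, and the control is a fixed finite list of switches. All names and signatures are ours, for illustration only.

```python
import numpy as np

def euler_sdde_switched(a, sigma, h, chi, switches, T=1.0, delta=0.1,
                        n_steps=100, d=1, rng=None):
    """Euler-Maruyama sketch of the switched SDDE (15)-(17).

    The compensated Poisson term in (15) is omitted for brevity.
    `switches` is a list of (time, b_prev, b_next) tuples describing a
    fixed control; `chi` is the initial path on [-delta, 0]; `h` is the
    jump map of (16).
    """
    rng = np.random.default_rng(rng)
    dt = T / n_steps
    lag = int(round(delta / dt))          # delay expressed in grid steps
    x = np.empty((n_steps + lag + 1, d))  # path on the grid over [-delta, T]
    for i in range(lag + 1):
        x[i] = chi(-delta + i * dt)       # prehistory, (14)/(17)
    sw = sorted(switches)
    for k in range(n_steps):
        t = k * dt
        i = k + lag                       # grid index of time t
        while sw and sw[0][0] <= t:       # apply the jump map h at switches, (16)
            _, bp, bn = sw.pop(0)
            x[i] = h(bp, bn, t, x[i])
        dw = rng.normal(0.0, np.sqrt(dt), size=d)
        x[i + 1] = (x[i] + a(t, x[i], x[i - lag]) * dt
                    + sigma(t, x[i], x[i - lag]) * dw)
    return x[lag:]                        # the controlled path on [0, T]
```

The delayed argument \(X^{u}_{t-\delta }\) is simply a lookback of `lag` grid points, which for small t falls in the stored prehistory given by \(\chi \).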

Remark 2

Note that by letting \(\chi _1\equiv b_0\), taking \([h_{\beta _{j-1},\beta _j}]_1(t,x)=\beta _j\) and letting the first rows of a, \(\sigma \) and \(\gamma \) equal zero, we get \([X]_1=\xi ^u\), which implies that the control enters all terms in the SDDE for \(X^u\).

We consider the situation when the functional J is given by

$$\begin{aligned} J(u):={\mathbb {E}}\left[ \int _0^T f(t,X^u_t)dt+g(X^u_T)-\sum _{j=1}^N c_{\beta _{j-1},\beta _j}(\tau _j)\right] . \end{aligned}$$

We assume that the parameters of the SDDE satisfy the following conditions:

Assumption 3

  1. i)

    The functions \(a:[0,T]\times {\mathbb {R}}^d\times {\mathbb {R}}^d\rightarrow {\mathbb {R}}^d\) and \(\sigma :[0,T]\times {\mathbb {R}}^d\times {\mathbb {R}}^d\rightarrow {\mathbb {R}}^{d\times d}\) are continuous in t and satisfy

    $$\begin{aligned} |a(t,x,y)-a(t,x',y')|+|\sigma (t,x,y)-\sigma (t,x',y')|\le C(|x-x'|+|y-y'|) \end{aligned}$$

    for all \((x,x',y,y')\in {\mathbb {R}}^{4d}\).

  2. ii)

    There is a \(\rho (z)\), with \(\int \rho ^{4q}(z)\mu (dz)< \infty \) such that \(\gamma :[0,T]\times {\mathbb {R}}^d\times {\mathbb {R}}^d\times {\mathbb {R}}^d\rightarrow {\mathbb {R}}^d\) satisfies

    $$\begin{aligned} |\gamma (t,x,y,z)-\gamma (t,x',y',z)|&\le \rho (z)(|x-x'|+|y-y'|), \\ |\gamma (t,x,y,z)|&\le \rho (z)(1+|x|+|y|). \end{aligned}$$
  3. iii)

    For all \((t,x)\in [0,T]\times {\mathbb {R}}^d\) and all \((b,b')\in {\bar{{\mathcal {I}}}}^2\), the map \(h_{b,b'}:[0,T]\times {\mathbb {R}}^d\rightarrow {\mathbb {R}}^d\) satisfies

    $$\begin{aligned} |h_{b,b'}(t,x)|\le C\vee |x|. \end{aligned}$$

    Furthermore,

    $$\begin{aligned} |h_{b,b'}(t,x)-h_{b,b'}(t',x')|\le |x-x'|+C|t-t'| \end{aligned}$$

    for all \((x,x')\in {\mathbb {R}}^{2d}\) and \((t,t')\in [0,T]^2\).

Remark 3

Note in particular that, since a and \(\sigma \) are continuous in t, \(a(\cdot ,0,0)\) and \(\sigma (\cdot ,0,0)\) are uniformly bounded, and Lipschitz continuity then implies that

$$\begin{aligned}&|a(t,x,y)|^{4q}+|\sigma (t,x,y)|^{4q}+\int _{{\mathbb {R}}^d\setminus \{0\}} |\gamma (t,x,y,z)|^{4q}\mu (dz)\nonumber \\&\quad \le C(1+|x|^{4q}+|y|^{4q}). \end{aligned}$$
(18)
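For the drift term, (18) follows from the Lipschitz condition together with the boundedness of \(a(\cdot ,0,0)\); a sketch (the \(\sigma \) and \(\gamma \) terms are handled in the same way):

```latex
% |a(t,x,y)| \le |a(t,0,0)| + C(|x| + |y|) \le C'(1 + |x| + |y|), so by the
% convexity bound (u+v+w)^{4q} \le 3^{4q-1}(u^{4q} + v^{4q} + w^{4q}):
|a(t,x,y)|^{4q} \;\le\; 3^{4q-1}(C')^{4q}\big(1 + |x|^{4q} + |y|^{4q}\big).
```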

We have the following result:

Proposition 6

Under Assumption 3 the SDDE (15)–(17) admits a unique solution for each \(u\in {\mathcal {U}}\). Furthermore, the solution has moments of order 4q, i.e. \(\sup _{u\in {\mathcal {U}}}{\mathbb {E}}\left[ \sup _{t\in [0,T]}|X^u_t|^{4q}\right] <\infty \).

Proof

We first note that existence of a unique solution to the SDDE follows by repeated use of Theorem 3.2 in Agram and Øksendal (2019) (where existence of a unique solution to a more general controlled SDDE is shown). It remains to show that the moment estimate holds. We have \(X^{u,j}=X^{u,j-1}\) on \([-\delta ,\tau _{j})\) and

$$\begin{aligned} X^{u,j}_t&=h_{\beta _{j-1},\beta _{j}}(\tau _{j},X^{u,j-1}_{\tau _{j}})+\int _{\tau _{j}}^ta(s,X^{u,j}_s,X^{u,j}_{s-\delta })ds \\&\quad + \int _{\tau _{j}}^t\sigma (s,X^{u,j}_s,X^{u,j}_{s-\delta })dW_s+\int _{\tau _{j}}^t \int _{{\mathbb {R}}^d\setminus \{0\}} \gamma (s,X^{u,j}_{s-},X^{u,j}_{s-\delta },z){\tilde{\varGamma }}(ds,dz) \end{aligned}$$

on \([\tau _{j},T]\). By Assumption 3(iii) we get, for \(t\in [\tau _{j},T]\), using integration by parts, that

$$\begin{aligned} |X^{u,j}_t|^2&= |X^{u,j}_{\tau _{j}}|^2+2\int _{\tau _{j}+}^t X^{u,j}_{s-} dX^{u,j}_s+\int _{\tau _{j}+}^t d[X^{u,j},X^{u,j}]_s \\&\quad \le C\vee |X^{u,j-1}_{\tau _{j}}|^2+2\int _{\tau _{j}+}^t X^{u,j}_{s-} dX^{u,j}_s+\int _{\tau _{j}+}^t d[X^{u,j},X^{u,j}]_s \\&\quad \le C\vee |X^{u,j-1}_{\tau _{j-1}}|^2+2\int _{\tau _{j-1}+}^{\tau _{j}} X^{u,j-1}_{s-} dX^{u,j-1}_s+\int _{\tau _{j-1}+}^{\tau _{j}}d[X^{u,j-1},X^{u,j-1}]_s \\&\quad +2\int _{\tau _{j}+}^t X^{u,j}_{s-} dX^{u,j}_s+\int _{\tau _{j}+}^t d[X^{u,j},X^{u,j}]_s. \end{aligned}$$

By repeated application we find that

$$\begin{aligned} |X^{u,j}_t|^2&\le C\vee |X^{u,0}_{0}|^2+\sum _{i=0}^{j-1} \{2\int _{\tau _{i}+}^{\tau _{i+1}} X^{u,i}_{s-} dX^{u,i}_s+\int _{\tau _{i}+}^{\tau _{i+1}} d[X^{u,i},X^{u,i}]_s\} \\&\quad +2\int _{\tau _{j}+}^t X^{u,j}_{s-} dX^{u,j}_s+\int _{\tau _{j}+}^t d[X^{u,j},X^{u,j}]_s \\&\quad \le C+\sum _{i=0}^{j-1} \big \{2\int _{\tau _{i}+}^{\tau _{i+1}} X^{u,i}_{s-} dX^{u,i}_s+\int _{\tau _{i}+}^{\tau _{i+1}} d[X^{u,i},X^{u,i}]_s\big \} \\&\quad +2\int _{\tau _{j}+}^t X^{u,j}_{s-} dX^{u,j}_s+\int _{\tau _{j}+}^t d[X^{u,j},X^{u,j}]_s, \end{aligned}$$

with \(\tau _0:=0\). Now, since \(X^{u,i}\) and \(X^{u,j}\) coincide on \([0,\tau _{(i+1)\wedge (j+1)})\) we have

$$\begin{aligned}&\sum _{i=0}^{j-1} \int _{\tau _{i}+}^{\tau _{i+1}} X^{u,i}_{s-} dX^{u,i}_s+\int _{\tau _{j}+}^t X^{u,j}_{s-} dX^{u,j}_s \\&\quad =\int _{0}^{t}X_{s}^{u,j}a(s,X^{u,j}_s,X^{u,j}_{s-\delta })ds+\int _{0}^{t}X_{s}^{u,j}\sigma (s,X^{u,j}_s,X^{u,j}_{s-\delta })dW_s\\&\qquad +\int _{0}^{t}\int _{{\mathbb {R}}^d\setminus \{0\}}X_{s-}^{u,j} \gamma (s,X^{u,j}_{s-},X^{u,j}_{s-\delta },z){\tilde{\varGamma }}(ds,dz) \end{aligned}$$

and

$$\begin{aligned}&{\mathbb {E}}\left[ \sum _{i=0}^{j-1} \int _{\tau _{i}+}^{\tau _{i+1}} d[X^{u,i},X^{u,i}]_s + \int _{\tau _{j}+}^t d[X^{u,j},X^{u,j}]_s\right] \\&\quad ={\mathbb {E}}\left[ \int _{0}^{t}(|\sigma (s,X^{u,j}_s,X^{u,j}_{s-\delta })|^2+\int _{{\mathbb {R}}^d\setminus \{0\}}| \gamma (s,X^{u,j}_{s-},X^{u,j}_{s-\delta },z)|^2\mu (dz))ds\right] . \end{aligned}$$

Finally, using the Burkholder–Davis–Gundy inequality in combination with (18) we get

$$\begin{aligned} {\mathbb {E}}\left[ \sup _{s\in [0,t]}|X^{u,j}_s|^{4q}\right]&\le C + C\int _{0}^t{\mathbb {E}}\left[ \sup _{r\in [0,s]}|X^{u,j}_r|^{4q}\right] ds, \end{aligned}$$

where the constant C does not depend on j, and it follows by Grönwall’s lemma that \({\mathbb {E}}\Big [\sup _{t\in [0,T]}|X^{u,j}_t|^{4q}\Big ]\) is bounded uniformly in j. Now, the result follows since \(\tau _j\rightarrow T\), \({\mathbb {P}}\)-a.s., as \(j\rightarrow \infty \). \(\square \)
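For the reader's convenience, the form of Grönwall's lemma invoked here (and repeatedly in the proofs below) is the following integral version for a nonnegative, bounded function v; as a reminder added for exposition, not part of the original argument:

```latex
% Integral form of Gronwall's lemma, applied above with
% v(t) = E[ sup_{s in [0,t]} |X^{u,j}_s|^{4q} ]:
\begin{equation*}
  v(t) \le C + C \int_0^t v(s)\, ds \quad \text{for all } t \in [0,T]
  \qquad \Longrightarrow \qquad
  \sup_{t \in [0,T]} v(t) \le C e^{CT}.
\end{equation*}
```

Since the constant C above does not depend on j or on the control u, the resulting bound is uniform in both, which is what the proposition asserts.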

For each \((\mathbf{t },\mathbf{b })\in {\mathcal {D}}^f\) and each \(u\in {\mathcal {U}}\) we let

$$\begin{aligned} X^{\mathbf{t };\mathbf{b },u}:=X^{t_1,\ldots ,t_n,t_n\vee \tau _1,\ldots ,t_n\vee \tau _N;b_1,\ldots ,b_n,\beta _1,\ldots ,\beta _N} \end{aligned}$$

and

$$\begin{aligned} X^{\mathbf{t };\mathbf{b },u,j}:=X^{t_1,\ldots ,t_n,t_n\vee \tau _1,\ldots ,t_n\vee \tau _{N};b_1,\ldots ,b_n,\beta _1,\ldots ,\beta _{N},j}. \end{aligned}$$

Proposition 7

For all \((\mathbf{t },\mathbf{b })\in {\mathcal {D}}^f\) and all \(b\in {\mathcal {I}}^{-b_n}\) we have

$$\begin{aligned} \sup _{u\in {\mathcal {U}}}{\mathbb {E}}\left[ \sup _{s\in [0,T]}\sup _{t\in [t_n,T]}|X^{\mathbf{t },t;\mathbf{b },b,u}_s|^{4q}\right] <\infty . \end{aligned}$$

Proof

For \(t\in [t_n,T]\) we have, for \(s\ge t\),

$$\begin{aligned} X^{\mathbf{t },t;\mathbf{b },b}_s&=h_{b_n,b}(t,X^{\mathbf{t };\mathbf{b }}_t)+\int _{t}^s a(r,X^{\mathbf{t },t;\mathbf{b },b}_r,X^{\mathbf{t },t;\mathbf{b },b}_{r-\delta })dr\\&\quad + \int _{t}^s\sigma (r,X^{\mathbf{t },t;\mathbf{b },b}_r,X^{\mathbf{t },t;\mathbf{b },b}_{r-\delta })dW_r\\&\quad +\int _{t}^s \int _{{\mathbb {R}}^d\setminus \{0\}} \gamma (r,X^{\mathbf{t },t;\mathbf{b },b}_{r-},X^{\mathbf{t },t;\mathbf{b },b}_{r-\delta },z){\tilde{\varGamma }}(dr,dz). \end{aligned}$$

Arguing as in the proof of Proposition 6 we find that for \(s\in [\tau _{j},T]\),

$$\begin{aligned}&\sup _{t\in [t_n,T]}|X^{\mathbf{t },t;\mathbf{b },b,u,n+1+j}_s|^2\\&\quad \le C\vee \sup _{t\in [t_n,T]}|X^{\mathbf{t };\mathbf{b }}_{t}|^2 +\sup _{t\in [t_n,T]}\left\{ \sum _{i=0}^{j-1} \big \{2\int _{t\vee \tau _{i}+ }^{\tau _{i+1}} X^{\mathbf{t },t;\mathbf{b },b,u,n+1+i}_{r-} dX^{\mathbf{t },t;\mathbf{b },b,u,n+1+i}_r\right. \\&\qquad +\int _{\tau _{i}+}^{\tau _{i+1}} d[X^{\mathbf{t },t;\mathbf{b },b,u,n+1+i},X^{\mathbf{t },t;\mathbf{b },b,u,n+1+i}]_r\big \}\\&\qquad +2\int _{t\vee \tau _{j}+}^s X^{\mathbf{t },t;\mathbf{b },b,u,n+1+j}_{r-} dX^{\mathbf{t },t;\mathbf{b },b,u,n+1+j}_{r}\\&\qquad \left. +\int _{\tau _{j}+}^s d[X^{\mathbf{t },t;\mathbf{b },b,u,n+1+j},X^{\mathbf{t },t;\mathbf{b },b,u,n+1+j}]_r\right\} . \end{aligned}$$

We thus find that, for each \(u\in {\mathcal {U}}\),

$$\begin{aligned}&{\mathbb {E}}\left[ \sup _{s\in [0,r]}\sup _{t\in [t_n,T]}|X^{\mathbf{t },t;\mathbf{b },b,u}_s|^{4q}\right] \\&\quad \le C +C{\mathbb {E}}\left[ \sup _{s\in [0,T]}|X^{\mathbf{t };\mathbf{b }}_s|^{4q}\right] + C\int _{0}^{r}{\mathbb {E}}\left[ \sup _{s\in [0,v]}\sup _{t\in [t_n,T]}|X^{\mathbf{t },t;\mathbf{b },b,u}_s|^{4q}\right] dv \end{aligned}$$

and the assertion again follows by applying Grönwall’s lemma and using Proposition 6. \(\square \)
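The moment estimates of Propositions 6 and 7 can be checked numerically in simple cases. The sketch below simulates, by Euler–Maruyama, a scalar delay SDE corresponding to the continuous part of (15)–(17) between switching times (the jump term is omitted), and estimates \({\mathbb {E}}[\sup _{t\in [0,T]}|X_t|^{4}]\) (the case \(4q=4\)) by Monte Carlo. The coefficient choices and all function names are hypothetical stand-ins, chosen only to be Lipschitz with linear growth; this is an illustration, not the paper's model.

```python
import numpy as np

# Illustrative Euler-Maruyama scheme for a scalar delay SDE
#   dX_t = a(t, X_t, X_{t-delta}) dt + sigma(t, X_t, X_{t-delta}) dW_t.
# The coefficients are hypothetical: Lipschitz with linear growth.

def a(t, x, y):
    return -x + 0.5 * np.sin(y)          # Lipschitz, linear growth

def sigma(t, x, y):
    return 0.3 * (1.0 + np.cos(x))       # bounded, hence Lipschitz

def simulate(T=1.0, delta=0.1, n=1000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n
    lag = int(round(delta / dt))
    x = np.zeros(n + 1)                  # initial segment X_s = 0 on [-delta, 0]
    for i in range(n):
        xd = x[i - lag] if i >= lag else 0.0   # delayed value X_{t - delta}
        t = i * dt
        x[i + 1] = (x[i] + a(t, x[i], xd) * dt
                    + sigma(t, x[i], xd) * np.sqrt(dt) * rng.standard_normal())
    return x

# Monte Carlo estimate of E[ sup_{t in [0,T]} |X_t|^4 ]:
paths = [np.max(np.abs(simulate(seed=s))) for s in range(200)]
moment = float(np.mean(np.array(paths) ** 4))
```

With these mean-reverting, bounded-volatility coefficients the estimated fourth moment stays modest, consistent with the uniform bound asserted in the propositions; any other Lipschitz pair of linear growth would serve equally well.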

To show that switching does not drive solutions apart, we have the following useful lemma:

Lemma 2

For \(\gamma \in {\mathcal {T}}\) and each \(u\in {\mathcal {U}}_{\gamma }\), let \((^k Z^u)_{k\ge 0}\) and \(X^u\) be processes in \({\mathcal {S}}^{4q}\) (with \({\mathbb {E}}[\sup _{s\in [0,\gamma ]}|^k Z^u_s|^{4q}]\) bounded uniformly in k) that solve the SDDE (15)–(17) on \((\gamma ,T]\) with control u and such that

$$\begin{aligned} {\mathbb {E}}\left[ \int _0^{\gamma }|X_s^{u}-^kZ^{u}_s|^{4}ds+|X_{\gamma }^{u,0}-^kZ^{u,0}_\gamma |^{4}\right] \rightarrow 0, \end{aligned}$$
(19)

as \(k\rightarrow \infty \). Then,

$$\begin{aligned} \lim _{k\rightarrow \infty }\sup _{u\in {\mathcal {U}}_\gamma }{\mathbb {E}}\left[ \sup _{s\in [\gamma ,T]}|X_s^{u}-^kZ^{u}_s|^{2}\right] =0 \end{aligned}$$
(20)

and for all \(b\in {\mathcal {I}}^{-b_0}\) we have

$$\begin{aligned} \lim _{k\rightarrow \infty }\sup _{u\in {\mathcal {U}}_\gamma }{\mathbb {E}}\left[ \sup _{t\in [\gamma ,T]}\sup _{s\in [\gamma ,T]}|X_s^{t,b,u}-^kZ^{t,b,u}_s|^{2}\right] =0. \end{aligned}$$
(21)

Proof

By the contraction property of \(h_{\cdot ,\cdot }\) we have that \(|X_{\tau _j}^{u,j}-^kZ_{\tau _j}^{u,j}|<|X_{\tau _j}^{u,j-1}-^kZ_{\tau _j}^{u,j-1}|\). Using integration by parts we get, for \(t\in [\tau _j,T]\),

$$\begin{aligned}&|X_{t}^{u,j}-^kZ_{t}^{u,j}|^2 = |X_{\tau _j}^{u,j}-^kZ_{\tau _j}^{u,j}|^2+2\int _{\tau _j+}^t(X_{s-}^{u,j}-^kZ_{s-}^{u,j})(dX_{s}^{u,j}-d^kZ_{s}^{u,j}) \\&\qquad +\int _{\tau _j+}^t d[X^{u,j}-^kZ^{u,j},X^{u,j}-^kZ^{u,j}]_s \\&\qquad \le |X_{\tau _{j-1}}^{u,j-1}-^kZ_{\tau _{j-1}}^{u,j-1}|^2+2\int _{\tau _{j-1}}^{\tau _{j}}(X_{s-}^{u,j-1} -^kZ_{s-}^{u,j-1})(dX_{s}^{u,j-1}-d^kZ_{s}^{u,j-1}) \\&\qquad +2\int _{\tau _j+}^t(X_{s-}^{u,j}-^kZ_{s-}^{u,j})(dX_{s}^{u,j}-d^kZ_{s}^{u,j}) \\&\qquad +\int _{\tau _{j-1}+}^{\tau _{j}} d[X^{u,j-1}-^kZ^{u,j-1},X^{u,j-1}-^kZ^{u,j-1}]_s \\&\qquad +\int _{\tau _j+}^t d[X^{u,j}-^kZ^{u,j},X^{u,j}-^kZ^{u,j}]_s. \end{aligned}$$

Repeated application implies that

$$\begin{aligned} |X_{t}^{u}-^kZ_{t}^{u}|^2&\le |X_{\gamma }^{u,0}-^kZ_{\gamma }^{u,0}|^2+2\sum _{j=0}^{\infty }\int _{\tau _j+}^{\tau _{j+1}\wedge t}(X_{s-}^{u,j}-^kZ_{s-}^{u,j})(dX_{s}^{u,j}-d^kZ_{s}^{u,j}) \\&\quad +\sum _{j=0}^{\infty }\int _{\tau _j+}^{\tau _{j+1}\wedge t} d[X^{u,j}-^kZ^{u,j},X^{u,j}-^kZ^{u,j}]_s. \end{aligned}$$

Now, for \(s\in (\tau _j,T]\) we have

$$\begin{aligned} dX_{s}^{u,j}-d^kZ_{s}^{u,j}&=(a(s,X^{u,j}_s,X^{u,j}_{s-\delta })-a(s,^kZ^{u,j}_s,^kZ^{u,j}_{s-\delta }))ds \\&\quad +(\sigma (s,X^{u,j}_s,X^{u,j}_{s-\delta })-\sigma (s,^kZ^{u,j}_s,^kZ^{u,j}_{s-\delta }))dW_s\\&\quad +\int _{{\mathbb {R}}^d\setminus \{0\}} (\gamma (s,X^{u,j}_{s-},X^{u,j}_{s-\delta },z)-\gamma (s,^kZ^{u,j}_{s-},^kZ^{u,j}_{s-\delta },z)){\tilde{\varGamma }}(ds,dz). \end{aligned}$$

Using Lipschitz continuity of \(a,\sigma \) and \(\gamma \) and the Burkholder–Davis–Gundy inequality we get

$$\begin{aligned} {\mathbb {E}}\left[ \sup _{s\in [\gamma ,t]}|X_{s}^{u}-^kZ_{s}^{u}|^4\right]&\le C{\mathbb {E}}\Big [|X_{\gamma }^{u,0}-^kZ_{\gamma }^{u,0}|^4+\int _0^{\gamma }|X_{s}^{u}-^kZ_{s}^{u}|^4ds\Big ] \\&\quad + C\int _{\gamma }^{t}{\mathbb {E}}\left[ \sup _{r\in [\gamma ,s]}|X_{r}^{u}-^kZ_{r}^{u}|^4\right] ds, \end{aligned}$$

where the constant C does not depend on the control u, and by Grönwall’s inequality we have

$$\begin{aligned} {\mathbb {E}}\left[ \sup _{s\in [\gamma ,t]}|X_{s}^{u}-^kZ_{s}^{u}|^4\right]&\le C{\mathbb {E}}\Big [|X_{\gamma }^{u,0}-^kZ_{\gamma }^{u,0}|^4+\int _0^{\gamma }|X_{s}^{u}-^kZ_{s}^{u}|^4ds\Big ]. \end{aligned}$$

Now, applying Jensen’s inequality gives (20). Furthermore, we have

$$\begin{aligned}&\sup _{r\in [0,T]}|X_{t}^{r,b,u}-^kZ_{t}^{r,b,u}|^2 \le \sup _{r\in [0,T]}|X_{r}^{u,0}-^kZ_{r}^{u,0}|^2 \\&\quad +2\sup _{r\in [0,T]}\left\{ \sum _{j=0}^{\infty }\int _{(\tau _j\vee r)+}^{\tau _{j+1}\wedge t}(X_{s-}^{r,b,u,j}-^kZ_{s-}^{r,b,u,j})(dX_{s}^{r,b,u,j}-d^kZ_{s}^{r,b,u,j})\right. \\&\quad \left. +\sum _{j=0}^{\infty }\int _{\tau _j+}^{\tau _{j+1}\wedge t} d[X^{r,b,u,j}-^kZ^{r,b,u,j},X^{r,b,u,j}-^k Z^{r,b,u,j}]_s\right\} \end{aligned}$$

and (21) follows by an identical argument. \(\square \)
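The stability estimate (20) has a simple numerical companion: two Euler–Maruyama paths of the same delay SDE, driven by the same Brownian increments but started from initial segments that differ by \(\varepsilon \), remain sup-norm close on [0, T], with a gap of order \(\varepsilon \). The coefficients below are hypothetical Lipschitz choices, not the paper's abstract \(a,\sigma ,\gamma \).

```python
import numpy as np

# Companion to (20): paths with perturbed initial data, shared noise,
# stay close in sup-norm; the gap scales roughly linearly in eps.

def drift(t, x, y):
    return -x + 0.5 * np.sin(y)

def vol(t, x, y):
    return 0.3 * np.cos(x)

def em_pair(eps, T=1.0, delta=0.1, n=1000, seed=7):
    rng = np.random.default_rng(seed)
    dt = T / n
    lag = int(round(delta / dt))
    dW = np.sqrt(dt) * rng.standard_normal(n)   # shared noise for both paths
    x = np.zeros(n + 1)                         # initial segment 0
    z = np.zeros(n + 1)
    z[0] = eps                                  # initial segment perturbed by eps
    for i in range(n):
        xd = x[i - lag] if i >= lag else 0.0
        zd = z[i - lag] if i >= lag else eps
        t = i * dt
        x[i + 1] = x[i] + drift(t, x[i], xd) * dt + vol(t, x[i], xd) * dW[i]
        z[i + 1] = z[i] + drift(t, z[i], zd) * dt + vol(t, z[i], zd) * dW[i]
    return float(np.max(np.abs(x - z)))

gap_small = em_pair(1e-3)   # sup-norm gap for a small perturbation
gap_large = em_pair(1e-1)   # larger perturbation, larger (but controlled) gap
```

The Lipschitz property of the coefficients is what keeps the gap proportional to the initial perturbation, mirroring the Grönwall step in the proof above.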

We add the following assumptions on the components of the cost functional and the functions h.

Assumption 4

  1. (i)

    The functions \(f:[0,T]\times {\mathbb {R}}^d\rightarrow {\mathbb {R}}\) and \(g:{\mathbb {R}}^d\rightarrow {\mathbb {R}}\) are both locally Lipschitz in x. Furthermore, there are constants \(q> 1\) and \(K>0\) such that

    $$\begin{aligned} |f(t,x)|+|g(x)|\le K(1+|x|^q) \end{aligned}$$

    for all \((t,x)\in [0,T]\times {\mathbb {R}}^d\).

  2. (ii)

    For all \(b\in {\mathcal {I}}\) we have

    $$\begin{aligned} g(x)>\max _{b'\in {\mathcal {I}}^{-b}}g(h_{b,b'}(T,x))-c_{b,b'}(T), \end{aligned}$$

    for all \(x\in {\mathbb {R}}^d\).

  3. (iii)

    There is a constant \(\kappa >0\) such that for any sequence \((b_1,\ldots ,b_{j})\in {\bar{{\mathcal {I}}}}^j\) with \(j>\kappa \) there is a subsequence \(1 = \iota _1<\cdots <\iota _{j'}=j\) with \(j'\le \kappa \) and \((b_{\iota _1},\ldots ,b_{\iota _{j'}})\in {\bar{{\mathcal {I}}}}^{j'}\) for which

    $$\begin{aligned}&h_{b_{j-1},b_{j}}(t,\cdots h_{b_2,b_3}(t,h_{b_1,b_2}(t,x))\cdots ) \\&\quad =h_{b_{\iota _{j'-1}},b_{\iota _{j'}}}(t,\cdots h_{b_{\iota _2},b_{\iota _3}}(t,h_{b_{\iota _1},b_{\iota _2}}(t,x))\cdots ). \end{aligned}$$

It is straightforward to see that, with the above assumptions, the functional \(\varPsi \) defined by

$$\begin{aligned} \varPsi (\mathbf{t };\mathbf{b }):=\int _0^T f(t,X^{\mathbf{t };\mathbf{b }}_t)dt+g(X^{\mathbf{t };\mathbf{b }}_T) \end{aligned}$$

satisfies Assumption 1.
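Given a path simulator for the controlled process, \(\varPsi \) can be estimated by plain Monte Carlo. The sketch below is a hypothetical illustration only: a standard Brownian motion stands in for \(X^{\mathbf{t };\mathbf{b }}\), and f, g are toy choices satisfying the polynomial growth bound of Assumption 4(i) (with q = 2).

```python
import numpy as np

# Hypothetical Monte Carlo sketch of
#   Psi(t; b) = int_0^T f(s, X_s) ds + g(X_T),
# with a Brownian stand-in for the controlled state process.

def f(t, x):
    return np.exp(-0.1 * t) * x ** 2    # |f| <= K(1 + |x|^2)

def g(x):
    return np.abs(x)                    # |g| <= K(1 + |x|)

def psi_estimate(T=1.0, n_steps=500, n_paths=2000, seed=1):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    dW = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
    X = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)
    t_grid = np.linspace(0.0, T, n_steps + 1)
    # left-endpoint Riemann sum for the running-reward integral
    running = np.sum(f(t_grid[:-1], X[:, :-1]) * dt, axis=1)
    return float(np.mean(running + g(X[:, -1])))

est = psi_estimate()
```

For this stand-in, the exact value is \(\int _0^1 e^{-0.1s}s\,ds+\sqrt{2/\pi }\approx 1.27\), so the estimator can be sanity-checked; in the actual problem the simulator would have to include the delay, jumps and the switching map \(h_{b,b'}\).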

The remainder of this section is devoted to showing that \(\varPsi \) also satisfies Assumption 2, guaranteeing the existence of an optimal control to the problem of maximizing J.

Proposition 8

For each \(n\ge 1\), each \((\eta ,\mathbf{b })\in {\bar{{\mathcal {T}}}}^n\times {\bar{{\mathcal {I}}}}^n\) and each \(b\in {\mathcal {I}}^{-b_n}\) there is a sequence of maps \(({\mathcal {U}}\rightarrow {\mathcal {U}}:u\mapsto {\hat{u}}^l)_{l\ge 1}\) such that

$$\begin{aligned} \lim _{l\rightarrow \infty }\sup _{u\in {\mathcal {U}}}{\mathbb {E}}\left[ \sup _{s\in [0,T]}|(V^{\eta ;\mathbf{b },u}_s-V^{\varGamma ^l(\eta );\mathbf{b },{\hat{u}}^l}_s)^+|^2\right] =0 \end{aligned}$$
(22)

and

$$\begin{aligned} \lim _{l\rightarrow \infty }\sup _{u\in {\mathcal {U}}}{\mathbb {E}}\left[ \sup _{s\in [0,T]}|(V^{\eta ,s\vee \eta _n;\mathbf{b },b,u}_s-V^{\varGamma ^l(\eta ),s\vee \varGamma ^l(\eta _n);\mathbf{b },b,{\hat{u}}^l}_s)^+|^2\right] =0. \end{aligned}$$
(23)

Furthermore, we have

$$\begin{aligned} \lim _{l\rightarrow \infty }\sup _{u\in {\mathcal {U}}_{\varGamma ^l(\eta _n)}}{\mathbb {E}}\left[ \sup _{s\in [0,T]}|(V^{\varGamma ^l(\eta );\mathbf{b },u}_s-V^{\eta ;\mathbf{b },u}_s)^+|^2\right] =0 \end{aligned}$$
(24)

and

$$\begin{aligned} \lim _{l\rightarrow \infty }\sup _{u\in {\mathcal {U}}_{\varGamma ^l(\eta _n)}}{\mathbb {E}}\left[ \sup _{s\in [0,T]}|(V^{\varGamma ^l(\eta ),s\vee \varGamma ^l(\eta _n);\mathbf{b },b,u}_s-V^{\eta ,s\vee \eta _n;\mathbf{b },b,u}_s)^+|^2\right] =0. \end{aligned}$$
(25)

Proof

To simplify notation we let \((\zeta _i)_{1\le i\le n}\) denote \(\varGamma ^l(\eta )\) and let X and Z (resp. \(X^j\) and \(Z^j\)) denote \(X^{\eta ;\mathbf{b },u}\) and \(X^{\varGamma ^l(\eta );\mathbf{b },{\hat{u}}^l}\) (resp. \(X^{\eta ;\mathbf{b },u,j}\) and \(X^{\varGamma ^l(\eta );\mathbf{b },{\hat{u}}^l,j}\)). Furthermore, we let \(U^*_t:=\sup _{s\in [0,t]}|U_s|\) denote the running maximum of the process \(|U|\).

We have:

(i):

\(X_t= Z_t\), for all \(t\in [0,\eta _1)\), \({\mathbb {P}}\)-a.s.

(ii):

On \([\eta _1,\zeta _1)\) we have \(|X_t-Z_t|\le X_T^*+Z_T^*\).

(iii):

If \(\eta _j\le \zeta _1\), then \(\zeta _j=\zeta _{j-1}=\cdots =\zeta _1\).

Letting \(M_1:=\max \{j\ge 1: \eta _j\le \zeta _1\}\) we get

$$\begin{aligned} X^{M_1}_{\zeta _{M_1}}-Z^{M_1}_{\zeta _{M_1}}&=X^{{M_1}}_{\zeta _{M_1}}+(h_{b_{M_1-1},b_{M_1}}(\eta _{M_1},X^{M_1-1}_{\eta _{M_1}}) -X^{{M_1}}_{\eta _{M_1}}) \\&\quad -h_{b_{M_1-1},b_{M_1}}(\zeta _{M_1},Z^{M_1-1}_{\zeta _{M_1}}). \end{aligned}$$

Hence,

$$\begin{aligned} |X^{M_1}_{\zeta _{M_1}}-Z^{M_1}_{\zeta _{M_1}}|&\le |X^{{M_1}}_{\zeta _{M_1}}-X^{{M_1}}_{\eta _{M_1}}|+ C|\eta _{M_1}-\zeta _{M_1}| + |X^{M_1-1}_{\eta _{M_1}} - Z^{M_1-1}_{\zeta _{M_1}}| \\&\le C2^{-l} +|X^{{M_1}}_{\zeta _{M_1}}-X^{{M_1}}_{\eta _{M_1}}|+ |X^{{M_1-1}}_{\zeta _{M_1}}-X^{{M_1-1}}_{\eta _{M_1}}| \\&\quad + |X^{M_1-1}_{\zeta _{M_1}} - Z^{M_1-1}_{\zeta _{M_1}}|. \end{aligned}$$

But \(X^{0}_{\zeta _1}=Z^{0}_{\zeta _1}\) and by induction it follows that

$$\begin{aligned} |X^{M_1}_{\zeta _{M_1}}-Z^{M_1}_{\zeta _{M_1}}|&\le M_1C2^{-l}+\sum _{j=1}^{M_1}(|X^{{j}}_{\zeta _{j}}-X^{j}_{\eta _{j}}|+ |X^{{j-1}}_{\zeta _{j}}-X^{{j-1}}_{\eta _{j}}|). \end{aligned}$$

We iteratively define \(M_i:=\max \{j > M_{i-1} : \eta _j\le \zeta _{M_{i-1}+1}\}\), for \(i=1,\ldots ,n_M\), with \(M_{n_M}= n\) and \(M_0:=0\). Then we get, in the same manner,

$$\begin{aligned} |X^{M_i}_{\zeta _{M_i}}-Z^{M_i}_{\zeta _{M_i}}|&\le (M_i-M_{i-1}) C2^{-l}+\sum _{j=M_{i-1}+1}^{M_i}(|X^{{j}}_{\zeta _{j}}-X^{j}_{\eta _{j}}|+ |X^{{j-1}}_{\zeta _{j}}-X^{{j-1}}_{\eta _{j}}|) \\&\quad + |X^{M_{i-1}}_{\zeta _{M_i}} - Z^{M_{i-1}}_{\zeta _{M_i}}|. \end{aligned}$$

Now on \([{\zeta _{M_i}},T]\) we have

$$\begin{aligned} X_t^{M_i}-Z^{M_i}_t&= X^{M_i}_{\zeta _{M_i}}-Z^{M_i}_{\zeta _{M_i}}+\int _{\zeta _{M_i}}^t(a(s,X^{M_i}_s,X^{M_i}_{s-\delta })-a(s,Z^{M_i}_s,Z^{M_i}_{s-\delta }))ds \\&\quad +\int _{\zeta _{M_i}}^t(\sigma (s,X^{M_i}_s,X_{s-\delta }^{M_i})-\sigma (s,Z_s^{M_i},Z_{s-\delta }^{M_i}))dW_s \\&\quad +\int _{\zeta _{M_i}}^t \int _{{\mathbb {R}}^d\setminus \{0\}}(\gamma (s,X^{M_i}_{s-},X_{s-\delta }^{M_i},z)-\gamma (s,Z_{s-}^{M_i},Z_{s-\delta }^{M_i},z)){\tilde{\varGamma }}(ds,dz). \end{aligned}$$

Put together we find that for \(t\in [\zeta _{M_i},T]\) we have

$$\begin{aligned}&|X_{t}^{M_i}-Z^{M_i}_t|\le (M_i-M_{i-1}) C2^{-l}+\sum _{j=M_{i-1}+1}^{M_i}(|X^{{j}}_{\zeta _{j}}-X^{j}_{\eta _{j}}|+ |X^{{j-1}}_{\zeta _{j}}-X^{{j-1}}_{\eta _{j}}|) \\&\quad + |X^{M_i-1}_{\zeta _{M_i}} - Z^{M_i-1}_{\zeta _{M_i}}|+\int _{\zeta _{M_i}}^t|a(s,X^{M_i}_s,X^{M_i}_{s-\delta })-a(s,Z^{M_i}_s,Z^{M_i}_{s-\delta })|ds \\&\quad +|\int _{\zeta _{M_i}}^t(\sigma (s,X^{M_i}_s,X_{s-\delta }^{M_i})-\sigma (s,Z_s^{M_i},Z_{s-\delta }^{M_i}))dW_s \\&\quad +\int _{\zeta _{M_i}}^t \int _{{\mathbb {R}}^d\setminus \{0\}}(\gamma (s,X^{M_i}_{s-},X_{s-\delta }^{M_i},z)-\gamma (s,Z_{s-}^{M_i},Z_{s-\delta }^{M_i},z)){\tilde{\varGamma }}(ds,dz)|. \end{aligned}$$

Applying Theorem 66, p. 339 in Protter (2004) and Lipschitz continuity iteratively gives

$$\begin{aligned}&{\mathbb {E}}\left[ \sup _{s\in [\zeta _{M_i},t]}|X_{s}^{M_i}-Z^{M_i}_s|^4\right] \le C2^{-l}+C{\mathbb {E}}\left[ \sum _{j=1}^{M_i}(|X^{{j}}_{\zeta _{j}}-X^{j}_{\eta _{j}}|^4\right. \\&\quad \left. + |X^{{j-1}}_{\zeta _{j}}-X^{{j-1}}_{\eta _{j}}|^4)+ \int _{0}^t (|X^{M_i}_s-Z^{M_i}_s|^4+|X^{M_i}_{s-\delta }-Z_{s-\delta }^{M_i}|^4)ds\right] . \end{aligned}$$

By Grönwall’s inequality and point (ii) above we find that

$$\begin{aligned}&{\mathbb {E}}\left[ \sup _{t\in [\zeta _{M_i},T]}|X_{t}^{M_i}-Z^{M_i}_t|^4\right] \le C2^{-l}(1+(X^*_T)^4+(Z^*_T)^4) \nonumber \\&\quad +C\sum _{j=1}^{M_i}{\mathbb {E}}\big [|X^{{j}}_{\zeta _{j}}-X^{j}_{\eta _{j}}|^4+ |X^{{j-1}}_{\zeta _{j}}-X^{{j-1}}_{\eta _{j}}|^4\big ]. \end{aligned}$$
(26)

Moving on, we consider the possibility of interventions in the period \([\eta _n,\zeta _n)\). Let \(N':=\max \{j\ge 0: \tau _j<\zeta _n\}\) and note that if \(N'> \kappa \), then there is a subsequence \((\iota _j)_{j=1}^{\kappa '}\) with \(1\le \iota _1< \cdots <\iota _{\kappa '}=N'\), with \(\kappa '\le \kappa \) and \((b_n,\beta _{\iota _1}, \ldots , \beta _{\iota _{\kappa '}})\in {\bar{{\mathcal {I}}}}^{\kappa '+1}\), such that, for all \((t,x)\in [0,T]\times {\mathbb {R}}^d\),

$$\begin{aligned} h_{\beta _{N'-1},\beta _{N'}}(t,\cdots h_{b_n,\beta _1}(t,x)\cdots )= h_{\beta _{\iota _{\kappa '-1}},\beta _{\iota _{\kappa '}}}(t,\cdots h_{b_n,\beta _{\iota _1}}(t,x)\cdots ). \end{aligned}$$

We then let \({\hat{u}}^l=({\hat{\tau }}_1,\ldots ,{\hat{\tau }}_{{\hat{N}}};{\hat{\beta }}_1,\ldots ,{\hat{\beta }}_{{\hat{N}}}):= (\zeta _{n}{\mathbf{1}}_{\kappa '},\tau _{N'+1},\ldots ,\tau _N;\beta _{\iota _1},\ldots ,\beta _{\iota _{\kappa '}},\beta _{N'+1},\ldots ,\beta _N)\), where \(\zeta _{n}{\mathbf{1}}_{\kappa '}\) denotes the vector \((\zeta _n,\ldots ,\zeta _n)\) of length \(\kappa '\). Arguing as above, we find that

$$\begin{aligned} |X_{\zeta _n}-Z_{\zeta _n}|&\le N' C2^{-l}+\sum _{j=1}^{N'}(|X^{{n+j}}_{\zeta _n}-X^{n+j}_{\tau _{j}}|+ |X^{{n+j-1}}_{\zeta _n}-X^{{n+j-1}}_{\tau _{j}}|) \nonumber \\&\quad +|X^{n}_{\zeta _n}-Z^{n}_{\zeta _n}|. \end{aligned}$$
(27)

We now turn to the total revenue and let

$$\begin{aligned} \varLambda :=\sum _{j=1}^{{\hat{N}}}c_{{\hat{\beta }}_{j-1},{\hat{\beta }}_j}({\hat{\tau }}_j)-\sum _{j=1}^{N}c_{\beta _{j-1},\beta _j}(\tau _j). \end{aligned}$$

By right continuity of the switching costs, we find that

$$\begin{aligned} \lim _{l\rightarrow \infty }\varLambda \le \bigg (\frac{\kappa }{2}-\frac{N'-m}{m}\bigg )\rho , \end{aligned}$$
(28)

\({\mathbb {P}}\)-a.s. The difference in revenue can then be written

$$\begin{aligned} V^{\eta ;\mathbf{b },u}_t-V^{\zeta ;\mathbf{b },{\hat{u}}^l}_t&= {\mathbb {E}}\left[ \int _0^T (f(s,X_s)-f(s,Z_s))ds+g(X_T)-g(Z_T)+\varLambda \big | {\mathcal {F}}_t\right] . \end{aligned}$$

By local Lipschitz continuity of f and g we get that, for each \(K>0\), there is a \(C> 0\) such that \(|f(t,x)-f(t,x')|\le C|x-x'|\) and \(|g(x)-g(x')|\le C|x-x'|\) whenever \(|x|+|x'|\le K\). This gives us the relation

$$\begin{aligned}&(V^{\eta ;\mathbf{b },u}_t-V^{\zeta ;\mathbf{b },{\hat{u}}^l}_t)^+\\&\quad \le {\mathbb {E}}\Big [\left( \int _0^TC|X_s-Z_s|ds+C|X_T-Z_T|+\varLambda \right) ^+\big | {\mathcal {F}}_t\Big ]\\&\qquad +C{\mathbb {E}}[\mathbb {1}_{[X_T^*+Z^*_T> K]}(1+(X_T^*)^q+(Z^*_T)^q )|{\mathcal {F}}_t]\\&\quad \le {\mathbb {E}}\Big [\mathbb {1}_{A}\left( \int _0^TC|X_s-Z_s|ds+C|X_T-Z_T|+\varLambda ^+\right) \big | {\mathcal {F}}_t\Big ]\\&\qquad +C{\mathbb {E}}[\mathbb {1}_{[X_T^*+Z^*_T > K]}(1+(X_T^*)^q+(Z^*_T)^q )|{\mathcal {F}}_t], \end{aligned}$$

where \(A:=\{\omega \in \varOmega :\int _0^TC|X_s-Z_s|ds+C|X_T-Z_T|>-\varLambda \}\). Doob’s maximal inequality then gives that

$$\begin{aligned}&{\mathbb {E}}\left[ \sup _{t\in [0,T]}((V^{\eta ;\mathbf{b },u}_t-V^{\zeta ;\mathbf{b },{\hat{u}}^l}_t)^+)^2\right] \\&\quad \le C{\mathbb {E}}\left[ \mathbb {1}_{A}\left( \int _0^T|X_s-Z_s|^2ds+|X_T-Z_T|^2+(\varLambda ^+)^2\right) \right] \\&\qquad +C{\mathbb {E}}[\mathbb {1}_{[X_T^*+Z^*_T> K]}(1+(X_T^*)^{2q}+(Z^*_T)^{2q} )] \\&\quad \le C{\mathbb {E}}\left[ \mathbb {1}_{A}\left( \int _0^T|X_s-Z_s|^2ds+|X_T-Z_T|^2+(\varLambda ^+)^2\right) \right] \\&\qquad +C{\mathbb {P}}[X_T^*+Z^*_T > K]^{1/2}, \end{aligned}$$

where we have used Hölder’s inequality and the moment estimate in Proposition 6 to arrive at the last inequality. For any \(M>0\) we thus have

$$\begin{aligned}&{\mathbb {E}}\left[ \sup _{t\in [0,T]}((V^{\eta ;\mathbf{b },u}_t-V^{\zeta ;\mathbf{b },{\hat{u}}^l}_t)^+)^2\right] \nonumber \\&\quad \le C{\mathbb {E}}\left[ \mathbb {1}_{[N'\le M]}\left( \int _0^T|X_s-Z_s|^2ds+|X_T-Z_T|^2\right) \right] \nonumber \\&\qquad +C{\mathbb {E}}\Big [\mathbb {1}_{[N'>M]}\mathbb {1}_A((X_T^*)^2+(Z^*_T)^2)\Big ]\nonumber \\&\qquad +C{\mathbb {E}}\big [(\varLambda ^+)^2\big ]+C{\mathbb {P}}[X_T^*+Z^*_T > K]^{1/2}. \end{aligned}$$
(29)

Concerning the first term, we have that \(\mathbb {1}_{[N'\le M]}|X_s-Z_s|\le |{\tilde{X}}_s-{\tilde{Z}}_s|\), where \({\tilde{X}}=X\) and \({\tilde{Z}}=Z\) on \([N'\le M]\). On \([N'> M]\) we let \({\tilde{X}}:=X^{\eta ;\mathbf{b },{\tilde{u}}}\) with

$$\begin{aligned} {\tilde{u}}:=\left\{ \begin{array}{ll} (\tau _1,\ldots ,\tau _{M},\zeta _n,\tau _{N'+1},\ldots ,\tau _N;\beta _1,\ldots ,\beta _{M},\beta _{N'},\ldots ,\beta _N), &{}\quad \mathrm{if}\, \beta _{M}\ne \beta _{N'}, \\ (\tau _1,\ldots ,\tau _{M},\tau _{N'+1},\ldots ,\tau _N;\beta _1,\ldots ,\beta _{M},\beta _{N'+1},\ldots ,\beta _N), &{}\quad \mathrm{if}\, \beta _{M}=\beta _{N'}. \end{array}\right. \end{aligned}$$

and \({\tilde{Z}}:=X^{\eta ;\mathbf{b },{\tilde{u}}^l}\), where \({\tilde{u}}^l\) is obtained from \({\tilde{u}}\) in the same way that \({\hat{u}}^l\) was obtained from u. Proceeding as above, we get, for each \(M\ge \kappa \), that

$$\begin{aligned} |{\tilde{X}}_{\zeta _n}-{\tilde{Z}}_{\zeta _n}|&\le M C2^{-l}+\sum _{j=1}^{N'\wedge M}(|X^{{n+j}}_{\zeta _n}-X^{n+j}_{\tau _{j}}|+ |X^{{n+j-1}}_{\zeta _n}-X^{{n+j-1}}_{\tau _{j}}|) \\&\quad +|X^{n}_{\zeta _n}-Z^{n}_{\zeta _n}|. \end{aligned}$$

By (26) and (20) of Lemma 2 we then find that for each \(M\ge \kappa \), the first term on the right hand side in (29) goes to 0 as \(l\rightarrow \infty \). Concerning the second term we have, again by Hölder’s inequality and Proposition 6, that

$$\begin{aligned} {\mathbb {E}}\Big [\mathbb {1}_{[N'>M]}\mathbb {1}_A((X_T^*)^2+(Z^*_T)^2)\Big ]\le C{\mathbb {P}}[[N'>M]\cap A]^{1/2}. \end{aligned}$$

Now, \(A\subset \{\omega :C (X_T^*+Z^*_T)>-\varLambda \}\), where \(C>0\) does not depend on l. For l sufficiently large we thus see, by (28) and Chebyshev’s inequality, that the probability on the right hand side can be made arbitrarily small by choosing M sufficiently large. For the third term we note that

$$\begin{aligned} {\mathbb {E}}\big [(\varLambda ^+)^2\big ]\le \kappa ^2\sum _{(b,b')\in {\bar{{\mathcal {I}}}}^2} {\mathbb {E}}\left[ \sup _{s\in [\eta _n,\zeta _n]}|c_{b,b'}(\zeta _n)-c_{b,b'}(s)|^2\right] , \end{aligned}$$

where the right hand side goes to 0 as \(l\rightarrow \infty \) by right-continuity of the switching costs. Finally, the last term of (29) can be made arbitrarily small by choosing K large.

Concerning the second claim we note that with \(X=X^{\eta ,s\vee \eta _n,\mathbf{b },b,u}\) and \(Z=X^{\varGamma ^l(\eta ),s\vee \varGamma ^l(\eta _n),\mathbf{b },b,u}\) the relation in (27) is replaced by

$$\begin{aligned} |X_{\zeta _n}-Z_{\zeta _n}|&\le (N'+1) C2^{-l}+\sup _{r\in [\eta _n,\zeta _n]}\sum _{j=1}^{N'+1}(|X^{{n+j}}_{\zeta _n}-X^{n+j}_{r}| \\&\quad + |X^{{n+j-1}}_{\zeta _n}-X^{{n+j-1}}_{r}|)+|X^{n}_{\zeta _n}-Z^{n}_{\zeta _n}|. \end{aligned}$$

Hence, appealing to (21) of Lemma 2, right-continuity and the result in Proposition 7, the first, second and last terms in the analogue of (29) tend to 0 as \(l\rightarrow \infty \), and (23) follows.

The last two statements, given in equations (24)–(25), follow by similar reasoning, noting that in this case \(N'=0\), which implies that \(\varLambda =0\), \({\mathbb {P}}\)-a.s. \(\square \)

Lemma 3

For all \((\mathbf{t },\mathbf{b })\in {\mathcal {D}}^f\), all \(b\in {\mathcal {I}}^{-b_n}\) and all \(k\ge 0\) we have

$$\begin{aligned} \sup _{u\in {\mathcal {U}}^k}{\mathbb {E}}\left[ \sup _{s\in [t',T]}|X^{\mathbf{t },t';\mathbf{b },b,u}_s-X^{\mathbf{t },t;\mathbf{b },b,u}_s|\big |{\mathcal {F}}_{t'}\right] \rightarrow 0, \end{aligned}$$

\({\mathbb {P}}\)-a.s. as \(t'\searrow t\).

Proof

Starting with \(k=0\) we note that for \(t'\ge t\) we have

$$\begin{aligned} X^{\mathbf{t },t;\mathbf{b },b}_{t'}&=h_{b_n,b}(t,X^{\mathbf{t };\mathbf{b }}_{t})+X^{\mathbf{t },t;\mathbf{b },b}_{t'}-X^{\mathbf{t },t;\mathbf{b },b}_{t} \end{aligned}$$

which gives

$$\begin{aligned} |X^{\mathbf{t },t';\mathbf{b },b}_{t'}-X^{\mathbf{t },t;\mathbf{b },b}_{t'}|&\le C|t'-t|+|X^{\mathbf{t };\mathbf{b }}_{t'}-X^{\mathbf{t };\mathbf{b }}_{t}|+|X^{\mathbf{t },t;\mathbf{b },b}_{t'}-X^{\mathbf{t },t;\mathbf{b },b}_{t}|. \end{aligned}$$

For \(k> 0\) and \(u\in {\mathcal {U}}^k_t\) we have, for \(i\le k\),

$$\begin{aligned} X^{\mathbf{t },t;\mathbf{b },b,u,n+i+1}_{t'}&=\mathbb {1}_{[\tau _i\le t']}\{h_{\beta _{i-1},\beta _i}(\tau _i,X^{\mathbf{t },t;\mathbf{b },b,u,n+i}_{\tau _i})+X^{\mathbf{t },t;\mathbf{b },b,u,n+i+1}_{t'} \\&\quad -X^{\mathbf{t },t;\mathbf{b },b,u,n+i+1}_{\tau _i}\}+ \mathbb {1}_{[\tau _i> t']}X^{\mathbf{t },t;\mathbf{b },b,u,n+i}_{t'} \end{aligned}$$

and

$$\begin{aligned} X^{\mathbf{t },t';\mathbf{b },b,u,n+i+1}_{t'}&=\mathbb {1}_{[\tau _i\le t']}h_{\beta _{i-1},\beta _i}(t',X^{\mathbf{t },t';\mathbf{b },b,u,n+i}_{t'}) + \mathbb {1}_{[\tau _i> t']}X^{\mathbf{t },t';\mathbf{b },b,u,n+i}_{t'}, \end{aligned}$$

which gives

$$\begin{aligned}&|X^{\mathbf{t },t';\mathbf{b },b,u,n+i+1}_{t'}-X^{\mathbf{t },t;\mathbf{b },b,u,n+i+1}_{t'}| \\&\quad \le \mathbb {1}_{[\tau _i\le t']}\{C|t'-\tau _i|+|X^{\mathbf{t },t;\mathbf{b },b,u,n+i}_{t'}-X^{\mathbf{t },t';\mathbf{b },b,u,n+i}_{t'}| \\&\qquad +|X^{\mathbf{t },t;\mathbf{b },b,u,n+i}_{t'} - X^{\mathbf{t },t;\mathbf{b },b,u,n+i}_{\tau _i}| + |X^{\mathbf{t },t;\mathbf{b },b,u,n+i+1}_{t'} - X^{\mathbf{t },t;\mathbf{b },b,u,n+i+1}_{\tau _i}|\} \\&\qquad + \mathbb {1}_{[\tau _i> t']}|X^{\mathbf{t },t;\mathbf{b },b,u,n+i}_{t'}-X^{\mathbf{t },t';\mathbf{b },b,u,n+i}_{t'}|. \end{aligned}$$

Repeated application yields

$$\begin{aligned}&|X^{\mathbf{t },t';\mathbf{b },b,u}_{t'}-X^{\mathbf{t },t;\mathbf{b },b,u}_{t'}| \\&\quad \le C(k+1)|t'-t|+\sum _{i=1}^{k}\mathbb {1}_{[\tau _i\le t']}\{|X^{\mathbf{t },t;\mathbf{b },b,u,n+i}_{t'} - X^{\mathbf{t },t;\mathbf{b },b,u,n+i}_{\tau _i}| \\&\qquad + |X^{\mathbf{t },t;\mathbf{b },b,u,n+i+1}_{t'} - X^{\mathbf{t },t;\mathbf{b },b,u,n+i+1}_{\tau _i}|\}+|X^{\mathbf{t };\mathbf{b }}_{t'}-X^{\mathbf{t };\mathbf{b }}_{t}|+|X^{\mathbf{t },t;\mathbf{b },b}_{t'}-X^{\mathbf{t },t;\mathbf{b },b}_{t}|. \end{aligned}$$

Furthermore, we have

$$\begin{aligned} \int _{0}^{t'}|X^{\mathbf{t },t';\mathbf{b },b,u}_{s}-X^{\mathbf{t },t;\mathbf{b },b,u}_{s}|^4ds&\le |t'-t|((X^{\mathbf{t },t';\mathbf{b },b,u})^*_T+(X^{\mathbf{t },t;\mathbf{b },b,u})^*_T)^4, \end{aligned}$$

where the right hand side tends to zero \({\mathbb {P}}\)-a.s. as \(t'\searrow t\) by \({\mathbb {P}}\)-a.s. boundedness of \(\sup _{u\in {\mathcal {U}}}\sup _{r\in [t_n,T]}|(X^{\mathbf{t },r;\mathbf{b },b,u})^*_T|^{4}\). Arguing as in the proof of Lemma 2 we find that

$$\begin{aligned}&{\mathbb {E}}\left[ \sup _{s\in [t',T]}|X^{\mathbf{t },t';\mathbf{b },b,u}_{s}-X^{\mathbf{t },t;\mathbf{b },b,u}_{s}|^4\big |{\mathcal {F}}_{t'}\right] \\&\quad \le C(|X^{\mathbf{t },t';\mathbf{b },b,u}_{t'}-X^{\mathbf{t },t;\mathbf{b },b,u}_{t'}|^4+\int _0^{t'}|X^{\mathbf{t },t';\mathbf{b },b,u}_{s}-X^{\mathbf{t },t;\mathbf{b },b,u}_{s}|^4ds), \end{aligned}$$

and the assertion follows by right continuity of X. \(\square \)

Lemma 4

For all \((\mathbf{t },\mathbf{b })\in {\mathcal {D}}^f\) and all \(b\in {\mathcal {I}}^{-b_n}\) we have, whenever \(\gamma _j\nearrow \gamma \in {\mathcal {T}}_{t_n}\) with \((\gamma _j)_{j\ge 0}\subset {\mathcal {T}}_{t_n}\), that

$$\begin{aligned} \lim _{j\rightarrow \infty }\sup _{u\in {\mathcal {U}}^k_{\gamma _j}} {\mathbb {E}}\left[ \sup _{s\in [\gamma ,T]}|X_s^{\mathbf{t },\gamma _j;\mathbf{b },b,u}-X_s^{\mathbf{t },\gamma ;\mathbf{b },b,u}|^2\right] =0, \end{aligned}$$

for all \(0\le k<\infty \).

Proof

Arguing as in the proof of the previous lemma we find that

$$\begin{aligned}&|X^{\mathbf{t },\gamma _j;\mathbf{b },b,u}_\gamma -X^{\mathbf{t },\gamma ;\mathbf{b },b,u}_\gamma | \\&\quad \le C(k+1)(\gamma -\gamma _j)+\sum _{i=1}^{k}\mathbb {1}_{[\tau _i\le \gamma ]}\{|X^{\mathbf{t },\gamma _j;\mathbf{b },b,u,n+i}_{\gamma } - X^{\mathbf{t },\gamma _j;\mathbf{b },b,u,n+i}_{\tau _i}| \\&\qquad + |X^{\mathbf{t },\gamma _j;\mathbf{b },b,u,n+i+1}_{\gamma } - X^{\mathbf{t },\gamma _j;\mathbf{b },b,u,n+i+1}_{\tau _i}|\}+|X^{\mathbf{t };\mathbf{b }}_{\gamma }-X^{\mathbf{t };\mathbf{b }}_{\gamma _j}| \\&\qquad +|X^{\mathbf{t },\gamma _j;\mathbf{b },b}_{\gamma }-X^{\mathbf{t },\gamma _j;\mathbf{b },b}_{\gamma _j}|. \end{aligned}$$

Furthermore, by Hölder’s inequality we have

$$\begin{aligned}&{\mathbb {E}}\left[ \int _{0}^{\gamma }|X^{\mathbf{t },\gamma ;\mathbf{b },b,u}_{s} -X^{\mathbf{t },\gamma _j;\mathbf{b },b,u}_{s}|^4ds\right] \\&\quad \le C{\mathbb {E}}[\gamma -\gamma _j]^{1/p}{\mathbb {E}}\big [((X^{\mathbf{t },\gamma ;\mathbf{b },b,u})^*_T +(X^{\mathbf{t },\gamma _j;\mathbf{b },b,u})^*_T)^{4q}\big ]^{1/q}, \end{aligned}$$

where \(\frac{1}{p}+\frac{1}{q}=1\). Now, by definition, \(\gamma \) is a predictable stopping time and the jump part of our SDDE is \({\mathbb {P}}\)-a.s. constant at predictable stopping times. We can thus apply Lemma 2 and the assertion follows. \(\square \)

Proposition 9

For all \((\mathbf{t },\mathbf{b })\in {\mathcal {D}}^f\) and all \(b\in {\mathcal {I}}^{-b_n}\), the process

\((\mathop {\mathrm{ess}\,\sup }_{u\in {\mathcal {U}}^k}V^{\mathbf{t },s\vee t_n;\mathbf{b },b,u}_s:0\le s\le T)\) is in \({\mathcal {S}}_{\textit{qlc}}^2\) for all \(k\ge 0\).

Proof

Let \(Y^{\mathbf{t };\mathbf{b },k}_t:=\mathop {\mathrm{ess}\,\sup }_{u\in {\mathcal {U}}^k}V^{\mathbf{t };\mathbf{b },u}_t\). To show that \(Y^{\mathbf{t },\cdot \vee t_n;\mathbf{b },b,k}_\cdot \) has a càdlàg version we consider

$$\begin{aligned} Y^{\mathbf{t },t';\mathbf{b },b,k}_{t'}-Y^{\mathbf{t },t;\mathbf{b },b,k}_{t}=(Y^{\mathbf{t },t';\mathbf{b },b,k}_{t'}-Y^{\mathbf{t },t;\mathbf{b },b,k}_{t'})+(Y^{\mathbf{t },t;\mathbf{b },b,k}_{t'}-Y^{\mathbf{t },t;\mathbf{b },b,k}_{t}) \end{aligned}$$

where the second term on the right hand side goes to zero \({\mathbb {P}}\)-a.s. as \(t'\searrow t\) by uniform integrability and right continuity of the filtration. Concerning the first term we have

$$\begin{aligned}&|Y^{\mathbf{t },t';\mathbf{b },b,k}_{t'}-Y^{\mathbf{t },t;\mathbf{b },b,k}_{t'}| \nonumber \\&\quad \le \sup _{u\in {\mathcal {U}}^k}{\mathbb {E}}\bigg [\int _t^T|f(s,X^{\mathbf{t },t';\mathbf{b },b,u}_s) -f(s,X^{\mathbf{t },t;\mathbf{b },b,u}_s)|ds\nonumber \\&\qquad +|g(X^{\mathbf{t },t';\mathbf{b },b,u}_T) - g(X^{\mathbf{t },t;\mathbf{b },b,u}_T)|\nonumber \\&\qquad +\sum _{j=1}^N|c_{\beta _{j-1},\beta _j}(\tau _j\vee t')-c_{\beta _{j-1},\beta _j}(\tau _j\vee t)|\Big |{\mathcal {F}}_{t'}\bigg ]\nonumber \\&\quad \le \sup _{u\in {\mathcal {U}}^k}{\mathbb {E}}\bigg [\int _t^{t'}|f(s,X^{\mathbf{t },t';\mathbf{b },b}_s) -f(s,X^{\mathbf{t },t;\mathbf{b },b,u}_s)|ds\Big |{\mathcal {F}}_{t'}\bigg ]\nonumber \\&\qquad +k\sup _{s\in [t,t']}\sum _{b,b'\in {\bar{{\mathcal {I}}}}^2}|c_{b,b'}(t')-c_{b,b'}(s)|\nonumber \\&\qquad +C(K)\sup _{u\in {\mathcal {U}}^k}{\mathbb {E}}\bigg [\int _{t'}^T|X^{\mathbf{t },t';\mathbf{b },b,u}_s -X^{\mathbf{t },t;\mathbf{b },b,u}_s|+|X^{\mathbf{t },t';\mathbf{b },b,u}_T - X^{\mathbf{t },t;\mathbf{b },b,u}_T|\Big |{\mathcal {F}}_{t'}\bigg ]\nonumber \\&\qquad +C\sup _{u\in {\mathcal {U}}^k}{\mathbb {E}}\left[ \sup _{r\in [t_n,T]}\mathbb {1}_{[(X^{\mathbf{t },r;\mathbf{b },b,u})_T^*\ge K]} (1+|(X^{\mathbf{t },r;\mathbf{b },b,u})_T^*|^q) \Big |{\mathcal {F}}_{t'}\right] , \end{aligned}$$
(30)

for each \(K>0\), by the local Lipschitz property of f and g. Concerning the last term, Doob’s maximal inequality gives, for fixed \(u\in {\mathcal {U}}^k\),

$$\begin{aligned}&{\mathbb {E}}\left[ \sup _{t\in [0,T]}{\mathbb {E}}\left[ \sup _{r\in [t_n,T]} \mathbb {1}_{[(X^{\mathbf{t },r;\mathbf{b },b,u})_T^*\ge K]} |(X^{\mathbf{t },r;\mathbf{b },b,u})_T^*|^q \Big |{\mathcal {F}}_{t}\right] ^2\right] \\&\quad \le C{\mathbb {E}}\left[ \sup _{r\in [t_n,T]}\mathbb {1}_{[(X^{\mathbf{t },r;\mathbf{b },b,u})_T^*\ge K]} |(X^{\mathbf{t },r;\mathbf{b },b,u})_T^*|^{2q} \right] . \end{aligned}$$

Applying Hölder’s inequality to the right hand side and taking the supremum over \({\mathcal {U}}\), we get

$$\begin{aligned}&\sup _{u\in {\mathcal {U}}}{\mathbb {E}}\left[ \sup _{t\in [0,T]}{\mathbb {E}}\left[ \sup _{r\in [t_n,T]}\mathbb {1}_{[(X^{\mathbf{t },r;\mathbf{b },b,u})_T^*\ge K]} |(X^{\mathbf{t },r;\mathbf{b },b,u})_T^*|^q \Big |{\mathcal {F}}_{t}\right] ^2\right] \\&\quad \le \sup _{u\in {\mathcal {U}}}\left( {\mathbb {P}}\left[ \sup _{r\in [t_n,T]}(X^{\mathbf{t },r;\mathbf{b },b,u})^*_T\ge K\right] \right) ^{1/2} \sup _{u\in {\mathcal {U}}} \left( {\mathbb {E}}\left[ \sup _{r\in [t_n,T]}|(X^{\mathbf{t },r;\mathbf{b },b,u})_T^*|^{4q}\right] \right) ^{1/2}. \end{aligned}$$

Now, by Chebyshev’s inequality and Proposition 7, \(\sup _{u\in {\mathcal {U}}}{\mathbb {P}}[\sup _{r\in [t_n,T]}(X^{\mathbf{t },r;\mathbf{b },b,u})_T^*\ge K]\) can be made arbitrarily small by choosing \(K\) large. By monotonicity, it follows that the last term in (30) tends to zero, \({\mathbb {P}}\)-a.s., as \(K\rightarrow \infty \). By right continuity of the switching costs in combination with Lemma 3, we conclude that \(Y^{\mathbf{t },t';\mathbf{b },b,k}_{t'}\) tends to \(Y^{\mathbf{t },t;\mathbf{b },b,k}_{t}\), \({\mathbb {P}}\)-a.s., as \(t'\searrow t\), and it follows that \(Y^{\mathbf{t },\cdot \vee t_n;\mathbf{b },b,k}_\cdot \) has a càdlàg version.
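The Chebyshev step can be spelled out as follows, assuming (as we take Proposition 7 to provide) a moment bound on \((X^{\mathbf{t },r;\mathbf{b },b,u})_T^*\) of some order \(p\ge 1\) that is uniform in \(u\) and \(r\):

$$\begin{aligned} \sup _{u\in {\mathcal {U}}}{\mathbb {P}}\left[ \sup _{r\in [t_n,T]}(X^{\mathbf{t },r;\mathbf{b },b,u})_T^*\ge K\right] \le \frac{1}{K^{p}}\sup _{u\in {\mathcal {U}}}{\mathbb {E}}\left[ \sup _{r\in [t_n,T]}|(X^{\mathbf{t },r;\mathbf{b },b,u})_T^*|^{p}\right] \le \frac{C}{K^{p}}, \end{aligned}$$

so the probability on the left is \(O(K^{-p})\) uniformly in \(u\).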

Arguing as above, we have

$$\begin{aligned}&Y^{\mathbf{t },\gamma _j\vee t_n;\mathbf{b },b,k}_{\gamma _j}-Y^{\mathbf{t },\gamma \vee t_n;\mathbf{b },b,k}_{\gamma } \\&\quad =(Y^{\mathbf{t },\gamma _j\vee t_n;\mathbf{b },b,k}_{\gamma _j} - Y^{\mathbf{t },\gamma \vee t_n;\mathbf{b },b,k}_{\gamma _j})+(Y^{\mathbf{t },\gamma \vee t_n;\mathbf{b },b,k}_{\gamma _j}-Y^{\mathbf{t },\gamma \vee t_n;\mathbf{b },b,k}_{\gamma }). \end{aligned}$$

Letting \(j\rightarrow \infty \), the last term tends to zero \({\mathbb {P}}\)-a.s. by uniform integrability and quasi-left continuity of the filtration. Concerning the first term we have (where, for notational convenience, we assume that \(\gamma ,\gamma _j\in {\mathcal {T}}_{t_n}\))

$$\begin{aligned}&{\mathbb {E}}\big [|Y^{\mathbf{t },\gamma _j;\mathbf{b },b,k}_{\gamma _j} - Y^{\mathbf{t },\gamma ;\mathbf{b },b,k}_{\gamma _j}|\big ] \\&\quad \le \sup _{u\in {\mathcal {U}}^k}{\mathbb {E}}\bigg [\int _{\gamma _j}^{\gamma }|f(s,X^{\mathbf{t },\gamma _j;\mathbf{b },b,u}_s) -f(s,X^{\mathbf{t },\gamma ;\mathbf{b },b,u}_s)|ds\bigg ] \\&\qquad + k\sum _{(b,b')\in {\bar{{\mathcal {I}}}}^2}\sup _{\tau \in {\mathcal {T}}_{\gamma _j}}{\mathbb {E}}\big [|c_{b,b'}(\tau )-c_{b,b'}(\tau \vee \gamma )|\big ] \\&\qquad + C(K)\sup _{u\in {\mathcal {U}}^k}{\mathbb {E}}\bigg [\int _{\gamma }^T|X^{\mathbf{t },\gamma _j;\mathbf{b },b,u}_s -X^{\mathbf{t },\gamma ;\mathbf{b },b,u}_s|ds+|X^{\mathbf{t },\gamma _j;\mathbf{b },b,u}_T - X^{\mathbf{t },\gamma ;\mathbf{b },b,u}_T|\bigg ] \\&\qquad + C\sup _{u\in {\mathcal {U}}^{k+1}}{\mathbb {E}}\Big [\mathbb {1}_{[(X^{\mathbf{t };\mathbf{b },u})_T^*\ge K]}(1+ |(X^{\mathbf{t };\mathbf{b },u})_T^*|^q) \Big ] \end{aligned}$$

where the right hand side can be made arbitrarily small by Lemma 4 and quasi-left continuity of the switching costs. We conclude that

$$\begin{aligned} \lim _{j\rightarrow \infty } {\mathbb {E}}\left[ |Y^{\mathbf{t },\gamma _j\vee t_n;\mathbf{b },b,k}_{\gamma _j}-Y^{\mathbf{t },\gamma \vee t_n;\mathbf{b },b,k}_{\gamma }|\right] =0, \end{aligned}$$

which implies that \(Y^{\mathbf{t },\gamma _j\vee t_n;\mathbf{b },b,k}_{\gamma _j}\rightarrow Y^{\mathbf{t },\gamma \vee t_n;\mathbf{b },b,k}_{\gamma }\) in probability. Now, since \(Y^{\mathbf{t },\cdot \vee t_n;\mathbf{b },b,k}_{\cdot }\) has left limits, it follows that \(Y^{\mathbf{t },\gamma _j\vee t_n;\mathbf{b },b,k}_{\gamma _j}\rightarrow Y^{\mathbf{t },\gamma \vee t_n;\mathbf{b },b,k}_{\gamma }\), \({\mathbb {P}}\)-a.s., and we conclude that \(Y^{\mathbf{t },\cdot \vee t_n;\mathbf{b },b,k}_{\cdot }\in {\mathcal {S}}_{\textit{qlc}}^2\). \(\square \)

By the above results we conclude that an optimal control for the hydropower planning problem does exist (under the assumptions detailed in this section). With a few notable exceptions (see e.g. Aslaksen et al. 1990, 1993 in the case of singular control problems and Chapter 7 in Øksendal and Sulem (2007) for examples of solvable impulse control problems), finding explicit solutions to impulse control problems is difficult. Instead, we often have to resort to numerical methods to approximate the optimal control. A plausible direction for obtaining numerical approximations of solutions to the hydropower operator’s problem would be to further develop the Monte Carlo technique originally proposed for optimal switching problems in Carmona and Ludkovski (2008) (and later analyzed in Aïd et al. (2014)) to obtain polynomial approximations of \(Y^{\mathbf{t },\mathbf{b }}\). Another possibility would be to apply the Markov chain approximations for stochastic control problems of delay systems developed in Kushner (2008). However, a thorough investigation of either direction is beyond the scope of the present work and is left as a topic for future research.
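To fix ideas about the first direction, the following is a minimal regression Monte Carlo sketch in the spirit of Carmona and Ludkovski (2008) for a toy two-mode switching problem, not the hydropower model of this paper: the geometric Brownian motion dynamics, the rewards (mode 1 earns the spot price, mode 0 earns nothing) and the constant switching cost are all assumptions made purely for illustration. The continuation values are approximated by polynomial regression in a backward induction over a time grid.

```python
import numpy as np


def switching_value_mc(n_paths=20_000, n_steps=50, T=1.0, x0=1.0,
                       mu=0.0, sigma=0.3, cost=0.1, deg=3, seed=0):
    """Regression Monte Carlo for a toy two-mode optimal switching problem.

    Mode 1 ("running") earns the spot price X_t per unit time, mode 0
    ("closed") earns nothing; each switch costs `cost`. X follows a
    geometric Brownian motion. Returns the estimated time-0 values
    (V0_closed, V0_running). All model choices are illustrative.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps

    # Simulate GBM paths on the grid t_0 = 0 < t_1 < ... < t_{n_steps} = T.
    dW = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
    logX = np.cumsum((mu - 0.5 * sigma**2) * dt + sigma * dW, axis=1)
    X = np.concatenate([np.full((n_paths, 1), x0), x0 * np.exp(logX)], axis=1)

    payoff = [np.zeros_like(X), X]              # running reward per mode
    V = [np.zeros(n_paths), np.zeros(n_paths)]  # zero terminal reward

    # Backward induction with polynomial regression of the next-step
    # value on X_t to approximate the continuation values.
    for i in range(n_steps - 1, -1, -1):
        x = X[:, i]
        cont = []
        for b in (0, 1):
            coef = np.polynomial.polynomial.polyfit(x, V[b], deg)
            cont.append(np.polynomial.polynomial.polyval(x, coef))
        newV = []
        for b in (0, 1):
            stay = payoff[b][:, i] * dt + cont[b]
            switch = payoff[1 - b][:, i] * dt + cont[1 - b] - cost
            newV.append(np.maximum(stay, switch))
        V = newV

    return float(V[0].mean()), float(V[1].mean())
```

Using the regressed surface itself as the value function (rather than re-simulating the induced strategy) introduces a bias, and the actual hydropower problem would additionally require the delay and constraint structure of the paper; the sketch only conveys the backward-induction-with-regression idea.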