## 1 Introduction

Classical Markovian mixed optimal stopping and control problems, where the objective is to maximise the (linear) expectation of a payoff over a finite time horizon T, are defined as

\begin{aligned} u(t,x)= \sup _{\tau }\sup _{\alpha } \mathbb {E}^{t,x}\bigg [\int _t^\tau e^{-r(s-t)} f(\alpha _s,X^{\alpha ,t,x}_s)\,ds+e^{-r(\tau -t)}\xi (\tau , X^{\alpha ,t,x}_\tau )\bigg ], \end{aligned}
(1.1)

where $$t\in [0,T]$$ is the initial time of the control problem, $$\alpha$$ is an admissible control process, $$\tau$$ is a stopping time, and $$X^{\alpha ,t,x}$$ solves a controlled stochastic differential equation (SDE) of the form:

\begin{aligned} dX^{\alpha ,t,x}_s&= b(\alpha _s,X^{\alpha ,t,x}_s)\,ds+ \sigma (\alpha _s,X^{\alpha ,t,x}_s)\,dW_s\\&\quad +\int _E \eta (\alpha _s,X^{\alpha ,t,x}_s,e)\,{\tilde{N}}(ds,de),\; s\in [t,T]; \quad X^{\alpha ,t,x}_t=x. \end{aligned}

The positive constant r denotes the discount rate, and the functions $$\xi$$ and f represent the terminal payoff and the instantaneous reward function, respectively. Under certain regularity assumptions on the coefficients, one can demonstrate that the value function u satisfies a nonlocal Hamilton–Jacobi–Bellman variational inequality (HJBVI) in the viscosity sense.

These results are extended in Reference  to a setting where the linear expectation $$\mathbb {E}$$ is replaced by a nonlinear expectation $$\mathcal {E}^{\alpha , t,x}$$ generated by a BSDE with jumps,

\begin{aligned} u(t,x)= \sup _{\tau }\sup _{\alpha } \mathcal {E}^{\alpha , t,x}_{t,\tau }[\xi (\tau , X^{\alpha ,t,x}_\tau )]. \end{aligned}
(1.2)

It has been demonstrated in Reference  that under suitable assumptions the value function u in (1.2) (after a change of time variable) can be characterized by the viscosity solution to a more complicated HJBVI (1.3), which involves an extra nonlinearity resulting from the nonlinear expectation:

\begin{aligned}&\min \big \{u(\mathbf{x})-\zeta (\mathbf{x}),u_t+\inf _{\alpha \in \mathbf{A }}\big (-L^\alpha u-f(\alpha ,\mathbf{x},u,(\sigma ^\alpha )^T Du,B^\alpha u)\big ) \big \}= \; 0,\nonumber \\&\quad \mathbf{x}\in [0,T]\times {{\mathbb {R}}}^d; u(0,x)= \; g(x), \quad x\in {{\mathbb {R}}}^d, \end{aligned}
(1.3)

with $$\mathbf{x}= (t,x)$$, nonlocal operators $$L^\alpha$$ and $$B^\alpha$$, the driver f of the BSDE, and given functions $$\zeta$$ and g, which we will specify in Sect. 2. In particular, in the case where the driver is affine in y and independent of z and k, i.e., $$f(\alpha ,\mathbf{x},y,z,k)\equiv f(\alpha ,x)-ry$$, the generalized control problem (1.2) reduces to the classical linear expectation case (1.1), and (1.3) reduces to an HJB obstacle problem.

Nonlinear expectations as in (1.2), and hence HJBVIs as in (1.3), arise naturally in financial mathematics, for instance as models for American options in a market with constrained portfolios , from recursive utility optimization problems , from dynamic risk measures , and from robust pricing and risk measures under probability model uncertainty . In Sect. 5, we describe in more detail the situation of an ambiguity-averse investor who chooses the optimal wealth allocation and liquidation time of an asset whose price process has infinite activity jumps. This problem can be modelled naturally within the setup described above, and therefore leads directly to an HJBVI of the type (1.3) with all its key features present (i.e., singular non-local term, nonlinear driver, control optimisation, obstacle term). As it is usually difficult to obtain analytic solutions of HJBVIs, it is necessary to design efficient and robust numerical methods for solving these fully nonlinear PIDEs, of which we give a worked-out example in Sect. 5.

We also remark that, to the best of our knowledge, even for the case with linear expectations (i.e. f in the special form from above), there is no published numerical scheme covering the generality of (1.3). However, there is a vast literature on monotone approximations for local HJB equations (see, e.g., [2, 11, 13] and references therein) and a number of works covering specific extensions. For instance, monotone finite-difference quadrature schemes are proposed in References [5,6,7, 10] for nonlocal HJB equations. We refer the reader also to Reference  for penalty approximations to nonlocal variational inequalities and to Reference  for an application of policy iteration together with penalization to solve HJB obstacle problems. Probabilistic methods for solving HJB equations (without jumps and optimal stopping) can be found, for example, in Reference .

All the aforementioned PDE methods solve (1.3) (with $$f(\alpha ,\mathbf{x},y,z,k)\equiv f(\alpha ,x)-ry$$) by the standard “discretize, then optimize” approach, where one discretizes the operators in (1.3), and solves the resulting nonlinear discretized equations using policy iteration, or more generally semi-smooth Newton methods [21, 28].

However, this standard approach cannot be easily extended to nonlinear f which is only assumed to be Lipschitz (and generally is not semi-smooth ), which prevents a direct application of Newton-like solvers (see Reference  for a special case of the discrete optimization problem with $$f(\alpha ,\mathbf{x},y,z,k)\equiv f(\alpha ,x,y)$$ differentiable and concave in y, and $$\mathbf{A }$$ finite).

Moreover, at each step of policy iteration, one needs to identify the globally optimal policy at each computational node. The nonlinear driver f and the other PDE coefficients may have sufficiently complicated nonlinearities in the control variable that the only way to construct a convergent algorithm is to discretize the admissible control set and perform an exhaustive search to determine the optimal policy at each node.

Another approach to solve (1.3) uses piecewise constant policy timestepping (PCPT) as in Reference . It is based on a discrete approximation of the admissible set by a finite set, say with J elements, and defines a decoupled system of PDEs, one for each of these J (constant) controls. The information from the different solutions is assembled at the end of each timestep by taking the pointwise maximum.
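To fix ideas, the assembly step of PCPT can be sketched in a few lines. This is purely illustrative and not the paper's discretization: `pcpt_sweep` and `implicit_step` are hypothetical stand-ins for the per-control PDE solve, a common grid is assumed for all controls, and the toy step below keeps only a control-dependent running reward.

```python
import numpy as np

def pcpt_sweep(V0, controls, implicit_step, n_steps):
    """One PCPT time march: advance each constant-control problem
    independently, then couple the J solutions by a pointwise maximum
    at the end of every timestep."""
    V = V0.copy()
    for _ in range(n_steps):
        U = np.stack([implicit_step(V, a) for a in controls])  # J decoupled solves
        V = U.max(axis=0)                                      # assemble: pointwise max
    return V

# Toy stand-in for the per-control solve: no spatial operator, only a
# running reward f(a) = -(a - 0.3)^2 integrated over a step of size dt.
dt = 0.5
step = lambda V, a: V + dt * (-(a - 0.3) ** 2)
V = pcpt_sweep(np.zeros(5), controls=[0.0, 0.3, 1.0], implicit_step=step, n_steps=2)
```

Since the control value 0.3 incurs zero penalty here, the pointwise maximum selects it at every node and the value stays at zero; with a genuine discretized operator in place of `step`, the same loop structure applies.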

As a specific scheme for these semi-linear PDEs, we propose an implicit Euler time discretization, monotone (semi-Lagrangian) approximations for the local diffusions, a monotone quadrature-based scheme for the nonlocal terms, and the Lax–Friedrichs scheme for the nonlinearity in the gradient. The different solutions may be defined on different discretization grids, coupled through possibly high-order monotonicity-preserving interpolations. This approach not only avoids policy iteration, but also allows for an easier construction of convergent monotone schemes and an efficient parallel implementation of the individual semi-linear PDEs. Note that it is essential to obtain a monotone discretization, since it is well known that non-monotone schemes may fail to converge or may converge to false “solutions” . By Godunov’s theorem , one can in general expect a monotone scheme to be at most first-order accurate.

The main contributions of our paper are:

• We formulate our algorithm by approximating the solution of (1.3) by the solution to a switching system with small switching cost. We shall establish a comparison principle for the switching system and demonstrate that as the switching cost tends to zero, the solution of the switching system converges to the viscosity solution of (1.3), which extends the results in Reference  to obstacle problems of switching systems and includes nonlinear drivers.

• We discretize the switching system piecewise in time by fully implicit monotone approximations. The convergence of the scheme is demonstrated, which subsequently gives a constructive proof of the existence of a viscosity solution to the switching system. Our results extend those obtained in Reference  for standard control problems. In contrast to that setting, PCPT here leads to coupled semi-linear PDEs rather than linear PDEs, due to the nonlinear expectations. The optimal stopping is treated as an additional control and included directly in the switching, instead of via the classical penalisation approach.

• By truncation of the singular jump measure and discretization of the control set, we obtain a stochastic control and optimal stopping problem whose value function is shown to converge to the value function of the initial problem and which satisfies an HJBVI. Our result extends earlier ones obtained only in the case of a linear expectation without control and optimal stopping (see e.g. ). This control-theoretic interpretation further enables us to establish the convergence order of these approximations through a probabilistic argument.

• For practical implementations, we propose a Picard-type iteration for the efficient numerical solution without the need to invert the dense matrices resulting from the nonlocal terms.

• Numerical examples for a recursive utility maximization problem are included to investigate the convergence order of the scheme with respect to different discretization parameters.

The remainder of this paper is organized as follows. In Sect. 2, we introduce the Markovian mixed optimal stopping and control problem with nonlinear expectations, and characterize its value function as the viscosity solution of a nonlocal HJBVI. We then derive numerical schemes in Sect. 3 by approximating the HJBVI with a switching system, PCPT, and ultimately fully discrete monotone schemes. Then we move on to the convergence analysis of our numerical schemes in Sect. 4. Numerical examples for a recursive utility maximization problem are presented in Sect. 5 to illustrate the effectiveness of our algorithms. In the Appendix, we include a rigorous proof of the comparison principle for the switching system and some complementary results that are used in this article.

## 2 Problem Formulation and Preliminaries

In this section, we formulate the mixed optimal stopping and control problem with nonlinear expectation and introduce the connection between such problems and HJBVIs, which is crucial for the subsequent developments. We start with some useful notation that is needed frequently in the rest of this work.

We denote by $$T > 0$$ the terminal time, and by $$(\Omega , \mathcal {F}, P )$$ a complete probability space on which two mutually independent processes are defined: a d-dimensional Brownian motion W and a Poisson random measure N(dt, de) with compensator $$\nu (de)dt$$. We assume $$\nu$$ is a $$\sigma$$-finite measure on $$E:={{\mathbb {R}}}^n\setminus \{0\}$$ equipped with its Borel field $$\mathcal {B}(E)$$ and satisfies

\begin{aligned} \int _E (1 \wedge |e|^2)\,\nu (de) <\infty . \end{aligned}
(2.1)
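As a quick illustrative check, independent of the paper's development, one can verify (2.1) numerically for the one-dimensional α-stable-type density $$\nu (de)=|e|^{-1-\alpha }\,de$$ with $$\alpha \in (0,2)$$, which has infinite total mass near the origin yet a finite value of the integral in (2.1). The `levy_integral` helper below is a sketch under these assumptions:

```python
import numpy as np

def levy_integral(density, lo=1e-8, hi=1e6, n=400_000):
    """Approximate the integral of (1 AND |e|^2) against a symmetric
    one-dimensional Levy density on E = R without the origin.  The
    substitution e = exp(u) resolves the singularity at zero; the
    factor 2 accounts for the two signs of e."""
    u = np.linspace(np.log(lo), np.log(hi), n)
    e = np.exp(u)
    g = np.minimum(1.0, e ** 2) * density(e) * e              # Jacobian: de = e du
    return 2.0 * 0.5 * ((g[1:] + g[:-1]) * np.diff(u)).sum()  # trapezoid rule

# nu(de) = |e|^{-1-alpha} de has infinite mass near 0 yet satisfies (2.1):
alpha = 0.5
val = levy_integral(lambda e: e ** (-1.0 - alpha))
exact = 2.0 * (1.0 / (2.0 - alpha) + 1.0 / alpha)             # closed form: 16/3
```

The logarithmic substitution concentrates quadrature points near the singular origin, where all of the small-jump mass sits.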

We denote by $$\mathbb {E}$$ the usual expectation operator with respect to the measure P.

For any given $$t \in [0,T]$$, we define the t-translated Brownian motion $$W^t:=(W_s-W_t)_{s \ge t}$$ and the t-translated Poisson random measure $$N^t:=N(]t,s],\cdot )_{s \ge t}$$. We denote by $${\tilde{N}}^t(dt,de)=N^t(dt,de)-\nu (de)dt$$ the compensated process of $$N^t$$, and by $$\mathbb {F}^t=\{\mathcal {F}^t_s\}_{s\in [t,T]}$$ the filtration generated by $$W^t$$ and $$N^t$$, augmented by the P-null sets.

Furthermore, we introduce several spaces: $$L^2_\nu$$ is the space of Borel functions $$l:E\rightarrow {{\mathbb {R}}}$$ with $$\Vert l\Vert _\nu ^2:=\int _E|l(e)|^2\,\nu (de)<\infty$$; $$\mathbb {H}^2_t$$ (resp. $$\mathbb {H}^2_{t,\nu }$$) is the space of $${{\mathbb {R}}}^{d}$$-valued (resp. real-valued) $$\mathbb {F}^t$$-predictable processes $$(\pi _s)$$ (resp. $$(l_s(\cdot ))$$) with $$\mathbb {E}\int _t^T |\pi _s|^2\,ds<\infty$$ (resp. $$\mathbb {E}\int _t^T \Vert l_s\Vert _{L^2_\nu }^2\,ds<\infty$$); $$\mathcal {S}^2_t$$ is the space of real-valued $$\mathbb {F}^t$$-adapted càdlàg processes $$(\psi _s)$$ with $$\mathbb {E}[\sup _{t \le s \le T} \psi _s^2]<\infty$$.

We now proceed to introduce the control problem of interest. For each $$t\in [0,T]$$, let $$\mathcal {A}_t^t$$ be a set of admissible controls, which are $$\mathbb {F}^t$$-predictable processes $$(\alpha _s)_{s\in [t,T]}$$ valued in a compact set $$\mathbf {A}$$, and let $$\mathcal {T}_t^t$$ be the set of $$\mathbb {F}^t$$-stopping times taking values in $$[t,T]$$. For any given initial state $$x\in {{\mathbb {R}}}^d$$ and control $$\alpha \in \mathcal {A}_t^t$$, we consider the controlled jump-diffusion process $$(X^{\alpha ,t, x}_s)_{ t \le s\le T}$$ satisfying the following SDE: for each $$s\in [t,T]$$,

\begin{aligned} X^{\alpha ,t,x}_s= & {} x+\int _t^s b(\alpha _v,X^{\alpha ,t,x}_v)\,dv+\int _t^s \sigma (\alpha _v,X^{\alpha ,t,x}_v)\,dW^t_v\nonumber \\&+\int _t^s\int _E \eta (\alpha _v,X^{\alpha ,t,x}_v,e)\,{\tilde{N}}^t(dv,de), \end{aligned}
(2.2)

where b, $$\eta \in {{\mathbb {R}}}^d$$ and $$\sigma \in {{\mathbb {R}}}^{d\times d}$$ are given measurable functions. We remark that although our analyses are performed only for jump-diffusion processes with time-homogeneous coefficients, similar results are valid for controlled dynamics with time-dependent coefficients.

The performance of the control problem, depending on $$\alpha$$, is evaluated by a nonlinear expectation induced by a BSDE with a controlled driver $$f(\alpha _s,s,X_s^{\alpha ,t,x},y,z,k)$$. That is, for any given stopping time $$\tau \in \mathcal {T}_t^t$$ and any bounded Borel function $$\xi$$, we define the nonlinear expectation

\begin{aligned} \mathcal {E}^{\alpha ,t, x}_{t,\tau }[\xi (\tau ,X^{\alpha ,t,x}_\tau )]:=Y^{\alpha , \tau ,t, x}_{t}, \end{aligned}

where the process $$(Y^{\alpha , \tau , t,x}_{s})_{s\le \tau }$$ is a solution in $$\mathcal {S}_t^2$$ of the following BSDE: for each $$s\in [t,\tau ]$$,

\begin{aligned} {\left\{ \begin{array}{ll} -dY^{\alpha ,t,x}_{s,\tau }=f(\alpha _s,s,X_s^{\alpha ,t,x},Y^{\alpha ,t,x}_{s,\tau },Z^{\alpha ,t,x}_{s,\tau },K^{\alpha ,t,x}_{s,\tau })ds-Z^{\alpha ,t,x}_{s,\tau }dW^t_s\\ \quad -\int _E K^{\alpha ,t,x}_{s,\tau }(e)\,{\tilde{N}}^t(ds,de),\\ Y^{\alpha ,t,x}_{\tau ,\tau }=\xi (\tau ,X^{\alpha ,t,x}_\tau ), \end{array}\right. } \end{aligned}
(2.3)

and $$(Z^{\alpha ,t,x}_{s,\tau })$$, $$(K^{\alpha ,t,x}_{s,\tau })$$ are two associated processes, if they exist, lying in $$\mathbb {H}^2_t$$ and $$\mathbb {H}^2_{t,\nu }$$, respectively.

Now we are ready to state the generalized mixed optimal stopping and control problem. For each initial time $$t\in [0,T]$$ and initial state $$x\in {{\mathbb {R}}}^d$$, we consider the following value function:

\begin{aligned} u(t,x)= \sup _{\tau \in \mathcal {T}_t^t}\sup _{\alpha \in \mathcal {A}_t^t} \mathcal {E}^{\alpha , t,x}_{t,\tau }[\xi (\tau , X^{\alpha ,t,x}_\tau )], \end{aligned}
(2.4)

subject to the controlled SDE (2.2), where $$\xi$$ is the terminal position given by

\begin{aligned} \xi (\tau ,X^{\alpha ,t,x}_\tau )=\zeta (\tau , X^{\alpha ,t,x}_\tau )1_{t\le \tau <T}+g(X^{\alpha ,t,x}_T)1_{\tau =T}, \end{aligned}

for some reward functions $$\zeta$$ and g. Note that the value function of our control problem is constant up to a P-null set. Throughout this work, we shall perform the analysis under the following standard assumptions on the coefficients:

### Assumption 2.1

The set of control values $$\mathbf{A }$$ is compact and the driver f is a measurable function of the form $$f(\alpha ,s,x,y,z,k):={\hat{f}}(\alpha ,s,x,y,z,\int _E k(e)\gamma (x,e)\,\nu (de))1_{s\ge t}$$ for some functions $${\hat{f}}$$ and $$\gamma$$. Moreover, there exists a constant $$C>0$$ such that for any $$\alpha ,\alpha '\in \mathbf{A }$$, $$t\in [0,T]$$, $$e\in E$$, $$x,x'\in {{\mathbb {R}}}^d$$, $$u,v\in {{\mathbb {R}}}$$, $$p,q\in {{\mathbb {R}}}^d$$, $$k,k'\in {{\mathbb {R}}}$$, we have

(1) $$|b(\alpha ,x)-b(\alpha ',x')|+|\sigma (\alpha ,x)-\sigma (\alpha ',x')|\le C(|x-x'|+|\alpha -\alpha '|)$$;

(2) $$|\eta (\alpha ,x,e)-\eta (\alpha ',x',e)|\le C(|x-x'|+|\alpha -\alpha '|)(1 \wedge |e|)$$ and $$|\eta (\alpha ,x,e)|\le C(1 \wedge |e|)$$;

(3) $$|\gamma (x,e)-\gamma (x',e)|\le C|x-x'|(1 \wedge |e|^2)$$; $$|\gamma (x,e)|\le C(1 \wedge |e|)$$ and $$\gamma (x,e)\ge 0$$;

(4) $${\hat{f}}:\mathbf{A }\times [0,T]\times {{\mathbb {R}}}^d\times {{\mathbb {R}}}\times {{\mathbb {R}}}^{d}\times {{\mathbb {R}}}\rightarrow {{\mathbb {R}}}$$ is continuous in t and admits the properties:

(a) (Boundedness.) $$|{\hat{f}}(\alpha ,t,x,0,0,0)|\le C$$;

(b) (Monotonicity.) $${\hat{f}}(\alpha ,t,x,v,p,k)-{\hat{f}}(\alpha ,t,x,u,p,k)\ge C(u-v)$$ when $$u\ge v$$, and $$k\mapsto {\hat{f}}(\alpha ,t,x,u,p,k)$$ is non-decreasing;

(c) (Lipschitz continuity.) $$|{\hat{f}}(\alpha ,t,x,u,p,k)-{\hat{f}}(\alpha ',t,x',v,q,k')|\le C(|\alpha -\alpha '|+|x-x'|+|u-v|+|p-q|+|k-k'|)$$;

(5) the function $$\zeta :[0,T]\times {{\mathbb {R}}}^d\rightarrow {{\mathbb {R}}}$$ is continuous in t, and we have $$|g(x)-g(x')|+|\zeta (t,x)-\zeta (t,x')|\le C|x-x'|$$, $$|g(x)|+|\zeta (t,x)|\le C$$, and $$g(x)\ge \zeta (0,x)$$.

### Remark 1

Assumption 2.1 is the same as that made in Reference . Under assumptions (1) and (2), equation (2.2) admits a unique solution. Assumptions (3), (4.a), (4.c) and (5) guarantee the existence and uniqueness of the solution of (2.3).

Note that the Lipschitz continuity of $${\hat{f}}$$ with respect to $$(x, y, z, k)$$ is a key assumption in order to establish the “good” measurability properties of the value function $$u^\alpha (t,x)$$ with respect to $$\alpha$$ and x, where $$u^\alpha (t,x):=\sup _{\tau \in \mathcal {T}_t^t} \mathcal {E}^{\alpha , t,x}_{t,\tau }[\xi (\tau , X^{\alpha ,t,x}_\tau )]$$; these properties are needed in the proof of the dynamic programming principle in Reference  (see in particular Sect. 3.2, Theorem 3.7, which holds under Assumption 2.1).

The monotonicity of $${\hat{f}}$$ with respect to the fourth variable y is a standard assumption for the comparison principle of the viscosity solutions to the HJB equations (see e.g. Assumption (C3) in Reference ). It can be made without loss of generality by performing an exponential in time scaling of the solution (see e.g. Remark 2.1 in Reference ).

The boundedness of $${\hat{f}},g$$ and $$\zeta$$ ensures the boundedness of the value function u (see Lemma 6.2 in Reference ), which subsequently allows us to work with the infinity norm of the solution to the HJBVIs. It also plays an important role in deriving the error estimates of our approximations (see e.g. Theorem 4.3).

Finally, we remark that the Lipschitz continuity of $$\gamma$$ is needed in Reference  for the proof of the comparison principle, which ensures the uniqueness of the solution to the corresponding HJBVIs with nonlinear drivers (see e.g. Lemma 4.3 in Reference  and (A.2 vi) in Reference ).

For notational convenience, in the sequel, we will write $${\hat{f}}$$ as f, and denote by $$\psi ^\alpha$$ a generic function $$\psi$$ with control-dependence.

The rest of this section is devoted to the equivalence between the mixed control problem and a generalized nonlocal HJBVI. Specifically, we now consider a Hamilton-Jacobi-Bellman variational inequality of the following form:

\begin{aligned} 0&=F(\mathbf{x},u,Du,D^2u,\{K^\alpha u\}_{\alpha \in \mathbf{A }},\{B^\alpha u\}_{\alpha \in \mathbf{A }})\nonumber \\&={\left\{ \begin{array}{ll} \min \big \{ u(\mathbf{x})-\zeta (\mathbf{x}),u_t+\inf _{\alpha \in \mathbf{A }}\big (-L^\alpha u-f^\alpha [u] \big ) \big \}, &{} \mathbf{x}\in \mathcal {Q}_T ,\\ u(\mathbf{x})-g(x), &{} \mathbf{x}\in \{0\}\times {{\mathbb {R}}}^d, \end{array}\right. } \end{aligned}
(2.5)

where $$f^{\alpha }[u]=f(\alpha ,\mathbf{x},u,(\sigma ^\alpha )^T Du,B^\alpha u)$$, $$\mathcal {Q}_T= (0,T]\times {{\mathbb {R}}}^d$$, $$\mathbf{x}=(t,x)$$ contains both the time t and the spatial coordinate $$x\in {{\mathbb {R}}}^d$$, and the nonlocal operators $$L^\alpha :=A^\alpha +K^\alpha$$ and $$B^\alpha$$ satisfy, for $$\phi \in C^{1,2}({\bar{\mathcal {Q}}}_T)$$:

\begin{aligned} A^\alpha \phi (\mathbf{x})&=\frac{1}{2} \text {tr}(\sigma ^\alpha (x)(\sigma ^\alpha (x))^TD^2\phi (\mathbf{x}))+b^\alpha (x) \cdot D\phi (\mathbf{x}), \end{aligned}
(2.6)
\begin{aligned} K^\alpha \phi (\mathbf{x})&=\int _{{E}}\big (\phi (t,x+\eta ^\alpha (x,e))-\phi (\mathbf{x})-\eta ^\alpha (x,e)\cdot D\phi (\mathbf{x})\big )\,\nu (de), \end{aligned}
(2.7)
\begin{aligned} B^\alpha \phi (\mathbf{x})&=\int _{{E}}\big (\phi (t,x+\eta ^\alpha (x,e))-\phi (\mathbf{x})\big )\gamma (x,e)\,\nu (de), \end{aligned}
(2.8)

where $$E={{\mathbb {R}}}^n\setminus \{0\}$$ is defined at the beginning of Sect. 2 and the nonlocal operators $$K^\alpha u$$ and $$B^\alpha u$$ are well-defined under Assumption 2.1.

We emphasize that since the matrix $$\sigma ^\alpha (\sigma ^\alpha )^T$$ is only assumed to be nonnegative definite, both the diffusion coefficient $$\sigma ^\alpha (\sigma ^\alpha )^T$$ and the jump intensity $$\eta$$ in (2.5) are allowed to vanish at some points. Consequently, the degenerate equation (2.5) benefits neither from Laplacian smoothing by the second-order differential operator nor from fractional Laplacian smoothing by the nonlocal operator. Therefore, in general, this HJBVI will not admit classical solutions, and we shall interpret it in the following viscosity sense, based on semi-continuous envelopes of the equation [1, 29].

### Definition 2.2

(Viscosity solution of HJBVI) An upper (resp. lower) semicontinuous function u is said to be a viscosity subsolution (resp. supersolution) of (2.5) if and only if for any point $$\mathbf{x}_0$$ and for any $$\phi \in C^{1,2}({\bar{\mathcal {Q}}}_T)$$ such that $$\phi (\mathbf{x}_0)=u(\mathbf{x}_0)$$ and $$u-\phi$$ attains its global maximum (resp. minimum) at $$\mathbf{x}_0$$, one has

\begin{aligned}&F_*(\mathbf{x}_0,u(\mathbf{x}_0),D\phi (\mathbf{x}_0),D^2\phi (\mathbf{x}_0),\{K^\alpha \phi (\mathbf{x}_0)\}_{\alpha \in \mathbf{A }},\{B^\alpha \phi (\mathbf{x}_0)\}_{\alpha \in \mathbf{A }})\le 0, \\ \big (resp. \quad&F^*(\mathbf{x}_0,u(\mathbf{x}_0),D\phi (\mathbf{x}_0),D^2\phi (\mathbf{x}_0),\{K^\alpha \phi (\mathbf{x}_0)\}_{\alpha \in \mathbf{A }},\{B^\alpha \phi (\mathbf{x}_0)\}_{\alpha \in \mathbf{A }})\ge 0 \big ). \end{aligned}

A continuous function is a viscosity solution of the HJBVI (2.5) if it is both a viscosity subsolution and a viscosity supersolution.

Under Assumption 2.1, the HJBVI (2.5) is well-posed in the class of bounded continuous functions (see References [16, 17]). The unique viscosity solution of (2.5) (after a change of time variable) can be further characterized as the optimal value function (2.4) of the mixed control problem. In other words, to obtain the optimal value function for all initial times t and initial states x, it suffices to design effective numerical schemes for solving (2.5).

Moreover, a strong comparison principle holds for the HJBVI (2.5), the proof of which is similar to that in Reference  (without controls) and hence omitted. In particular, if U is a bounded viscosity subsolution and V is a bounded viscosity supersolution to (2.5) with $$U(0,\cdot )\le V(0,\cdot )$$, we have $$U(\mathbf{x})\le V(\mathbf{x})$$ for all $$\mathbf{x}\in {\bar{\mathcal {Q}}}_T$$.

## 3 Construction of Numerical Schemes

In this section, we design numerical schemes for solving the HJBVI (2.5). We carry out the following sequence of approximations to construct our numerical algorithm:

• truncation of the singular jump measure (Eq. (3.3) and Sect. 4.1);

• approximation of the control set with a finite set (Eq. (3.5) and Theorem 4.3);

• approximation of the discretized control problem with a switching system (Eq. (3.6) and Theorem 4.6);

• discretization in time and space (Eqs. (3.8, 3.9) and Theorem 4.7).

We start the derivation of our schemes by approximating the singular measure $$\nu$$ with a truncated non-singular measure and a modified diffusion coefficient, as suggested in Reference . This corresponds to introducing approximating jump-diffusion dynamics and an approximating backward SDE (see Sect. 4.1). More precisely, for any given $$\varepsilon >0$$, let us define the truncated measure $$\nu _{\varepsilon }(de)=1_{|e|>\varepsilon }\nu (de)$$ and the modified diffusion coefficient $${\tilde{\sigma }}^{\alpha }(x)$$ such that $${\tilde{\sigma }}^{\alpha }_{ij}(x)=\sigma ^{\alpha }_{ij}(x)$$ for $$i\not =j$$ and

\begin{aligned} {\tilde{\sigma }}^{\alpha }_{ii}(x)= & {} \text {sgn}(\sigma ^{\alpha }_{ii}(x))\bigg ((\sigma ^{\alpha }_{ii}(x))^2\nonumber \\&+\int _{|e|<\varepsilon }|\eta ^{\alpha }_i(x,e)|^2\, \nu (de)\bigg )^{1/2}, \quad i=1,\ldots , d, \; x\in {{\mathbb {R}}}^d. \end{aligned}
(3.1)
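To make (3.1) concrete, consider a hypothetical one-dimensional example: the jump coefficient $$\eta (x,e)=e$$ and the α-stable-type density $$|e|^{-1-\alpha }$$ are illustrative choices, not taken from the paper. The `sigma_tilde` helper below absorbs the second moment of the sub-ε jumps into the diffusion coefficient by quadrature:

```python
import numpy as np

def sigma_tilde(sigma_ii, eta_i, nu_density, eps, n=200_000):
    """Modified diagonal diffusion coefficient of Eq. (3.1): the second
    moment of the jumps smaller than eps is absorbed into the diffusion.
    One-dimensional, with a symmetric Levy density, for simplicity."""
    u = np.linspace(np.log(eps) - 21.0, np.log(eps), n)   # e = exp(u)
    e = np.exp(u)
    g = eta_i(e) ** 2 * nu_density(e) * e                 # Jacobian: de = e du
    small_jump_var = 2.0 * 0.5 * ((g[1:] + g[:-1]) * np.diff(u)).sum()
    return np.sign(sigma_ii) * np.sqrt(sigma_ii ** 2 + small_jump_var)

# Illustrative choices: eta(x, e) = e and nu(de) = |e|^{-3/2} de, eps = 0.1.
alpha, eps = 0.5, 0.1
st = sigma_tilde(0.2, lambda e: e, lambda e: e ** (-1.0 - alpha), eps)
```

For this density the small-jump variance has the closed form $$2\varepsilon ^{2-\alpha }/(2-\alpha )$$, against which the quadrature can be checked.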

We further introduce the modified local operator $$A^{\alpha }_{\varepsilon }$$ as:

\begin{aligned} A^{\alpha }_{\varepsilon }\phi (\mathbf{x}):=\frac{1}{2} \text {tr}({\tilde{\sigma }}^\alpha (x)({\tilde{\sigma }}^\alpha (x))^TD^2\phi (\mathbf{x})),\quad \phi \in C^{1,2}([0,T]\times {{\mathbb {R}}}^d), \end{aligned}
(3.2)

and truncated nonlocal operators $$K^{\alpha }_{\varepsilon }$$ and $$B^{\alpha }_{\varepsilon }$$ by replacing $$\nu$$ with $$\nu _{\varepsilon }$$ in (2.7) and (2.8), respectively.

With these operators in hand, we consider the following modified HJBVI:

\begin{aligned} 0&=F^\varepsilon (\mathbf{x},u,Du,D^2u,\{K_\varepsilon ^\alpha u\}_{\alpha \in \mathbf{A }},\{B_\varepsilon ^\alpha u\}_{\alpha \in \mathbf{A }})\nonumber \\&={\left\{ \begin{array}{ll} \min \big \{ u-\zeta ,u_t+\min _{\alpha \in \mathbf{A }}\big (-L_\varepsilon ^\alpha u-f^\alpha _\varepsilon [u]\big ) \big \}, &{} \mathbf{x}\in \mathcal {Q}_T ,\\ u(\mathbf{x})-g(x), &{} \mathbf{x}\in \{0\}\times {{\mathbb {R}}}^d, \end{array}\right. } \end{aligned}
(3.3)

where we have $$f_{\varepsilon }^\alpha [u] =f^{\alpha }(\mathbf{x},u,({\tilde{\sigma }}^\alpha )^T Du,B_\varepsilon ^\alpha u)$$ and $$L^{\alpha }_{\varepsilon }\phi :=A^{\alpha }_{\varepsilon }\phi +K^{\alpha }_{\varepsilon }\phi$$ for any $$\phi \in C^{1,2}([0,T]\times {{\mathbb {R}}}^d)$$. These modified coefficients clearly satisfy Assumption 2.1, and hence (3.3) is well-posed in the viscosity sense.

We then approximate the admissible control set in (3.3) by a finite set. More precisely, for a finite subset $$\mathbf{A }_\delta$$ of the compact set $$\mathbf{A }$$ such that

\begin{aligned} \max _{\alpha \in \mathbf{A }}\min _{{\tilde{\alpha }}\in \mathbf{A }_\delta }|\alpha -{\tilde{\alpha }}|<\delta , \end{aligned}
(3.4)

we introduce the finite control HJBVI by

\begin{aligned} 0&=F^{\varepsilon ,\delta }(\mathbf{x},u,Du,D^2u,\{K_\varepsilon ^\alpha u\}_{\alpha \in \mathbf{A }_\delta },\{B_\varepsilon ^\alpha u\}_{\alpha \in \mathbf{A }_\delta })\nonumber \\&={\left\{ \begin{array}{ll} \min \big \{ u-\zeta ,u_t+\min _{\alpha \in \mathbf{A }_\delta }\big (-L_\varepsilon ^\alpha u-f^{\alpha }_{\varepsilon }[u]\big ) \big \}, &{} \mathbf{x}\in \mathcal {Q}_T ,\\ u(\mathbf{x})-g(x), &{} \mathbf{x}\in \{0\}\times {{\mathbb {R}}}^d, \end{array}\right. } \end{aligned}
(3.5)

Since (3.5) is a special case of (2.5) with a finite admissible set, it is clear that (3.5) admits a unique bounded viscosity solution.
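For illustration only, suppose the compact control set is a box $$[\mathrm{lo},\mathrm{hi}]^m$$; a finite subset satisfying (3.4) can then be produced by a uniform grid whose spacing is tied to δ. The `delta_net` helper below is a hypothetical sketch of this construction, not the paper's:

```python
import numpy as np
from itertools import product

def delta_net(lo, hi, delta):
    """Finite subset of the box [lo, hi]^m whose fill distance is < delta in
    the Euclidean norm, as required by Eq. (3.4): a uniform grid with per-axis
    spacing s chosen so the worst-case distance sqrt(m) * s / 2 stays < delta."""
    lo = np.atleast_1d(np.asarray(lo, dtype=float))
    hi = np.atleast_1d(np.asarray(hi, dtype=float))
    m = lo.size
    s = 0.99 * 2.0 * delta / np.sqrt(m)
    # arange excludes the right endpoint, so append it to cover the boundary
    axes = [np.append(np.arange(l, h, s), h) for l, h in zip(lo, hi)]
    return np.array(list(product(*axes)))

net = delta_net(0.0, 1.0, 0.1)   # delta-net of A = [0, 1]
```

The worst-case point sits at the centre of a grid cell, at per-axis distance at most s/2 from the nearest node, which gives the Euclidean bound used above.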

### Remark 2

In Sect. 4.1, we provide an alternative interpretation of the above two approximations by identifying the viscosity solution of (3.5) as the value function of a mixed control problem in terms of modified SDE and BSDE. This characterization further enables us to establish the convergence of these approximations through a probabilistic argument.

Next, we approximate the finite control equation (3.5) by a switching system (see [2, 6]). Suppose the finite control set is given by $$\mathbf{A }_\delta =\{\alpha _1,\alpha _2,\ldots , \alpha _J\}$$. We denote by $$U_j^{\varepsilon ,\delta ,c}$$, $$j=1,\ldots , J$$, the solution of the following system of HJB equations:

\begin{aligned} 0&=F_j^{\varepsilon ,\delta ,c}(\mathbf{x},U_j, DU_j, D^2U_j,\{K_\varepsilon ^\alpha u\}_{\alpha \in \mathbf{A }_\delta },\{B_\varepsilon ^\alpha u\}_{\alpha \in \mathbf{A }_\delta },\{U_k\}_{k\not =j})\nonumber \\&={\left\{ \begin{array}{ll} \min \bigg [U_j-\zeta ,\; \min \big (U_{j,t}-L_\varepsilon ^{\alpha _j} U_j-f_\varepsilon ^{\alpha _j}[U_j]; \; U_{j}-\mathcal {M}_jU\big )\bigg ], &{}\mathbf{x}\in \mathcal {Q}_T,\\ U_j(\mathbf{x})-g(x), &{} \mathbf{x}\in \{0\}\times {{\mathbb {R}}}^d, \end{array}\right. } \end{aligned}
(3.6)

where we define $$\mathcal {M}_jU:=\max _{k\not =j}\big (U_k-c\big )$$ for any $$c>0$$. The positive switching cost is needed for the well-posedness of the switching system (3.6).

We now proceed to introduce a discrete approximation to the switching system based on the idea of piecewise constant policy timestepping. Define a set of nodes $$\{x_{j,i}\}$$ and timesteps $$t^n$$, with discretization parameters h and $$\Delta t$$, i.e.,

\begin{aligned} \max _{1\le j\le J,x\in {{\mathbb {R}}}^d}\min _{i}|x-x_{j,i}|=h,\quad \max _n(t^{n+1}-t^n)=\Delta t. \end{aligned}
(3.7)

By parameterizing the grid $$\Omega _{j,h}=\{\mathbf{x}_{j,i}\}_i$$ with the control index j, we allow the use of different discretization grids for different controls. We denote by $$U^{n}_{j,i}$$ the discrete approximation to $$U_{j}$$ at the point $$\mathbf{x}^n_{j,i}=(t^n,x_{j,i})$$, and extend it to the computational domain by interpolation.

Let $$L^{\alpha _j}_{\varepsilon ,h}$$ and $$f^{\alpha _j}_{\varepsilon ,h}$$ be the discrete forms of the operators $$L_\varepsilon ^{\alpha _j}$$ and $$f_\varepsilon ^{\alpha _j}$$, respectively. We discretize (3.6) on the grid $$\Omega _{j,h}$$ with the uniform time partition $$t^{n+1}-t^n=\Delta t$$ by performing piecewise constant policy timestepping and applying the constraints at the beginning of each new timestep,

\begin{aligned}&U^{n+\frac{1}{2}}_{j,i}=\max \big [\zeta _i^{n+1}, \, U^{n}_{j,i},\,\max _{k\not = j}\big ({\tilde{U}}^{n}_{k,i(j)}-c\big )\big ], \end{aligned}
(3.8)
\begin{aligned}&U^{n+1}_{j,i}-\Delta t\big (L_{\varepsilon ,h}^{\alpha _j} U_{j,i}^{n+1}+f_{\varepsilon ,h}^{\alpha _j}[U_{j,i}^{n+1}]\big )=U^{n+\frac{1}{2}}_{j,i},\quad \quad j=1,\ldots , J, \end{aligned}
(3.9)

where $${\tilde{U}}^{n}_{k,i(j)}$$ is the value of the interpolant of $$\{U^{n}_{k,l}\}_{l}$$ over the grid $$\Omega _{k,h}$$, evaluated at the i-th point of the grid $$\Omega _{j,h}$$.
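The two-stage structure of (3.8)–(3.9) can be sketched as follows. This is an illustrative toy, not the paper's full discretization: all components share one grid (so the interpolation is the identity), the driver is taken to be zero, and the implicit operator is a control-dependent constant-coefficient diffusion with frozen boundary values.

```python
import numpy as np

def switching_timestep(U, zeta, c, dt, implicit_solve):
    """One step of scheme (3.8)-(3.9) with all components on a common grid.
    Stage 1 applies the obstacle and switching constraints explicitly;
    stage 2 advances each component by its own implicit solve."""
    J = U.shape[0]
    U_half = np.empty_like(U)
    for j in range(J):
        others = np.delete(U, j, axis=0).max(axis=0) - c          # switch targets
        U_half[j] = np.maximum(np.maximum(zeta, U[j]), others)    # Eq. (3.8)
    return np.stack([implicit_solve(j, U_half[j], dt) for j in range(J)])  # Eq. (3.9)

# Toy implicit operator: (I - dt * nu_j * D_h^2) U^{n+1} = U^{n+1/2}, a pure
# diffusion with control-dependent diffusivity and frozen boundary rows.
def make_solver(nus, h):
    def solve(j, rhs, dt):
        n, lam = rhs.size, nus[j] * dt / h ** 2
        A = (np.diag((1.0 + 2.0 * lam) * np.ones(n))
             + np.diag(-lam * np.ones(n - 1), 1)
             + np.diag(-lam * np.ones(n - 1), -1))
        A[0, :] = 0.0; A[0, 0] = 1.0        # Dirichlet rows: boundary values
        A[-1, :] = 0.0; A[-1, -1] = 1.0     # are kept fixed at the rhs
        return np.linalg.solve(A, rhs)
    return solve

x = np.linspace(0.0, 1.0, 21)
U0 = np.stack([np.sin(np.pi * x), np.zeros_like(x)])
V = switching_timestep(U0, zeta=np.full_like(x, 0.1), c=0.05, dt=1e-3,
                       implicit_solve=make_solver([1.0, 0.5], x[1] - x[0]))
```

Because stage 1 enforces $$U^{n+1/2}\ge \zeta$$ and the implicit matrix is an M-matrix with unit row sums, the discrete maximum principle keeps the output between the obstacle value and the initial maximum.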

Now by rearranging the terms of (3.9), we obtain the following numerical scheme: for $$\mathbf{x}_{j,i}^n\in \mathcal {Q}_T$$ and $$j=1,\ldots , J$$,

\begin{aligned} 0&=G_j(\mathbf{x}^{n+1}_{j,i},h,U^{n+1}_{j,i}, \{U^{b+1}_{j,a}\}_{{(a,b)\!\not =\!(i,n)}}, \{ {\tilde{U}}^n_k\}_{k\not =j})\nonumber \\&=\min \bigg [U^{n+1}_{j,i}-\zeta _i^{n+1}-\Delta t\big (L_{\varepsilon ,h}^{\alpha _j} U_{j,i}^{n+1}+f_{\varepsilon ,h}^{\alpha _j}[U_{j,i}^{n+1}]\big ),\; \frac{U^{n+1}_{j,i}-U^{n}_{j,i}}{\Delta t}-L_{\varepsilon ,h}^{\alpha _j} U_{j,i}^{n+1}\nonumber \\&\quad -f_{\varepsilon ,h}^{\alpha _j}[U_{j,i}^{n+1}], U^{n+1}_{j,i}-\max _{k\not = j}({\tilde{U}}^{n}_{k,i(j)}-c) -\Delta t\big (L_{\varepsilon ,h}^{\alpha _j} U_{j,i}^{n+1}+f_{\varepsilon ,h}^{\alpha _j}[U_{j,i}^{n+1}]\big ) \bigg ]. \quad \end{aligned}
(3.10)

As seen from (3.10), performing switching at the beginning of a new timestep introduces two additional terms to both the switching part and the obstacle part of the equation, which will not appear in a straightforward discretization of the switching system (3.6). However, we will demonstrate in Sect. 4.4 that these terms vanish as $$\Delta t, h\rightarrow 0$$, and consequently our scheme (3.10) forms a consistent approximation to (3.6).

For notational simplicity, we label our approximations only by h and assume in the sequel that $$\Delta t$$ is a given function of h with $$\Delta t\rightarrow 0$$ as $$h\rightarrow 0$$.

### Remark 3

To determine the optimal stopping strategy and the optimal controls, one can simply compare the values of the obstacle and of all components of the switching system at each grid point. As we will see in Sect. 4.2, each component of the switching system converges to the solution of the HJBVI (2.5) as the discretization parameters tend to 0; therefore, although we have no convergence guarantee for the control approximation itself, the numerically determined control is close to optimal for the mixed control problem (2.4).

We now describe in detail how we perform the spatial discretizations of $$L^{\alpha }_{\varepsilon }$$ and $$B^{\alpha }_{\varepsilon }$$ to construct monotone discrete operators $$L^{\alpha }_{\varepsilon ,h}$$ and $$f^{\alpha }_{\varepsilon ,h}$$, for a fixed control parameter $$\alpha \in \{\alpha _1,\ldots ,\alpha _J\}$$. To simplify the presentation, we consider a piecewise linear or multilinear interpolation $$\mathcal {I}_h$$ on a uniform spatial grid $$h{\mathbb {Z}}^d$$. That is,

\begin{aligned} \mathcal {I}_h[\phi ](x)=\sum _{m\in {\mathbb {Z}}^d}\phi (x_m)\omega _m(x;h),\quad x\in {{\mathbb {R}}}^d, \end{aligned}
(3.11)

for the standard “tent functions” $$\omega _m$$ satisfying $$0\le \omega _m(x;h)\le 1$$, $$\omega _m(x_i;h)=\delta _{mi}$$, $$\sum _m\omega _m=1$$, $$\text {supp}\,\omega _m\subset B(x_m,2h)$$, and $$|D\omega _m|\le C/h$$.
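For illustration, the one-dimensional tent functions and the interpolation (3.11) can be sketched in a few lines of Python; the function names and the grid below are ours, introduced only for this example:

```python
import numpy as np

def tent_weights(x, h, m_range):
    """Hat-function weights w_m(x; h) on the uniform grid h*Z (1D sketch).

    Each weight is max(0, 1 - |x - m*h| / h): piecewise linear, valued in
    [0, 1], equal to delta_{mi} at the grid nodes, and summing to 1 for any
    x covered by the grid.
    """
    m = np.asarray(m_range)
    return np.maximum(0.0, 1.0 - np.abs(x - m * h) / h)

def interpolate(phi_vals, x, h, m_range):
    """I_h[phi](x) = sum_m phi(x_m) w_m(x; h), cf. (3.11)."""
    return float(np.dot(phi_vals, tent_weights(x, h, m_range)))
```

The nonnegativity of the weights and the partition of unity are exactly the properties that make the interpolation-based discretizations below monotone.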

We start with the nonlocal terms. The definition of $$K^{\alpha }_{\varepsilon }$$ gives

\begin{aligned} K^\alpha _{\varepsilon }\phi (\mathbf{x})&=\int _{|e|\ge \varepsilon }\big (\phi (t, x+\eta ^\alpha (x,e))-\phi (\mathbf{x})-\eta ^\alpha (x,e)\cdot D\phi (\mathbf{x})\big )\,\nu (de)\\&=\int _{|e|\ge \varepsilon }\big (\phi (t,x+\eta ^\alpha (x,e))-\phi (\mathbf{x})\big )\,\nu (de)\\&\quad +\int _{|e|\ge \varepsilon }-\eta ^\alpha (x,e)\,\nu (de)\cdot D\phi (\mathbf{x})\\&:=K^{\alpha ,1}_{\varepsilon }\phi (\mathbf{x})+b^{\alpha }_{\varepsilon }(x)\cdot D\phi (\mathbf{x}). \end{aligned}

Then, by replacing the integrands by their monotone interpolants (cf. ), we derive the following approximations of $$K^{\alpha ,1}_{\varepsilon }$$ and $$B^{\alpha }_{\varepsilon }$$ (where we have dropped the mesh index j in x for simplicity):

\begin{aligned} K^{\alpha ,1}_{\varepsilon ,h}\phi (t^n,x_i)&:=\int _{|e|\ge \varepsilon }\mathcal {I}_h[\phi (t^n,x_i+\cdot )-\phi (t^n,x_i)](\eta ^\alpha (x_i,e))\,\nu (de)\nonumber \\&=\sum _{m\in {\mathbb {Z}}^d}\kappa ^{\alpha ,n}_{h,m,i}[\phi (t^n,x_i+x_m)-\phi (t^n, x_i)] \end{aligned}
(3.12)
\begin{aligned} B^{\alpha }_{\varepsilon ,h}\phi (t^n,x_i)&:=\int _{|e|\ge \varepsilon }\mathcal {I}_h[\phi (t^n,x_i+\cdot )-\phi (t^n,x_i)](\eta ^\alpha (x_i,e))\gamma (x_i,e)\,\nu (de)\nonumber \\&=\sum _{m\in {\mathbb {Z}}^d}\beta ^{\alpha ,n}_{h,m,i}[\phi (t^n,x_i+x_m)-\phi (t^n, x_i)], \end{aligned}
(3.13)

with the coefficients

\begin{aligned}&\kappa ^{\alpha ,n}_{h,m,i}:=\int _{|e|\ge \varepsilon }\omega _m(\eta ^\alpha (x_i,e);h)\,\nu (de),\quad \beta ^{\alpha ,n}_{h,m,i}\nonumber \\&\quad :=\int _{|e|\ge \varepsilon }\omega _m(\eta ^\alpha (x_i,e);h)\gamma (x_i,e)\,\nu (de), \end{aligned}
(3.14)

which are well-defined and nonnegative, and consequently result in monotone approximations. These coefficients can be efficiently evaluated by using quadrature rules with positive weights, such as Gauss methods of appropriate order .
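As a concrete (hypothetical) example, the coefficients $$\kappa ^{\alpha ,n}_{h,m,i}$$ in (3.14) could be computed with a positive-weight midpoint rule as follows, in one dimension and with $$\eta ^\alpha (x,e)=e$$ for simplicity; the density `nu_density` is an illustrative stand-in for the (truncated) Lévy measure:

```python
import numpy as np

def kappa_coefficients(m_range, h, eps, nu_density, e_max=5.0, n_quad=4000):
    """Approximate kappa_m = int_{|e| >= eps} w_m(eta(e); h) nu(de), cf. (3.14).

    Midpoint rule on [-e_max, -eps] and [eps, e_max]: all quadrature weights
    are positive, so the computed coefficients stay nonnegative and the
    resulting scheme monotone.  Here eta(x, e) = e for illustration.
    """
    m = np.asarray(m_range, dtype=float)
    nodes = []
    for a, b in [(-e_max, -eps), (eps, e_max)]:
        step = (b - a) / n_quad
        nodes.append(np.linspace(a, b, n_quad, endpoint=False) + step / 2)
    e = np.concatenate(nodes)
    de = (e_max - eps) / n_quad
    # tent weights w_m(e; h) for every (m, quadrature node) pair
    w = np.maximum(0.0, 1.0 - np.abs(e[None, :] - m[:, None] * h) / h)
    return (w * nu_density(e)[None, :]).sum(axis=1) * de
```

Since the tent weights sum to 1 wherever the grid covers the integrand, the coefficients sum (approximately) to the total mass of the truncated measure, which is a convenient sanity check.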

We then turn to discretizing the local terms. We introduce the modified drift by $${\tilde{b}}^\alpha (x)=b^\alpha (x)+b^\alpha _{\varepsilon }(x)$$, and write the modified diffusion $${\tilde{\sigma }}^\alpha =({\tilde{\sigma }}^\alpha _1,\ldots , {\tilde{\sigma }}^\alpha _d)$$, where $${\tilde{\sigma }}_l^\alpha$$, $$l=1,\ldots , d$$ is the l-th column of $${\tilde{\sigma }}^\alpha$$ defined in (3.1).

With these modified coefficients, we are ready to construct the following approximations of the local operators: for any $$k>0$$,

\begin{aligned} \frac{1}{2}\text {tr}({\tilde{\sigma }}^\alpha (x)({\tilde{\sigma }}^\alpha (x))^TD^2\phi (x))&\approx \frac{1}{2}\sum _{l=1}^{d}\frac{\mathcal {I}_h[\phi ](x+k{\tilde{\sigma }}^\alpha _l)-2\mathcal {I}_h[\phi ](x)+\mathcal {I}_h[\phi ](x-k{\tilde{\sigma }}^\alpha _l)}{k^2}\\ {\tilde{b}}^\alpha (x) \cdot D\phi&\approx \frac{\mathcal {I}_h[\phi ](x+k^2{\tilde{b}}^\alpha )-\mathcal {I}_h[\phi ](x)}{k^2}, \end{aligned}

which, by using (3.11) and the fact that $$\sum _m\omega _m=1$$, can be further written in the discrete monotone form

\begin{aligned} A^{\alpha }_{\varepsilon ,h,k}\phi (t^n,x_i)=\sum _{m\in {\mathbb {Z}}^d}d^{\alpha ,n}_{h,k,m,i}[\phi (t^n,x_m)-\phi (t^n,x_i)], \end{aligned}
(3.15)

with non-negative coefficients

\begin{aligned} d^{\alpha ,n}_{h,k,m,i}= & {} \frac{1}{2}\sum _{l=1}^{d}\frac{\omega _m(x_i+k{\tilde{\sigma }}^\alpha _l(x_i);h) +\omega _m(x_i-k{\tilde{\sigma }}^\alpha _l(x_i);h)}{k^2}\nonumber \\&+\,\frac{\omega _m(x_i+k^2{\tilde{b}}^\alpha (x_i);h)}{k^2}\ge 0. \end{aligned}
(3.16)

The approximation to the local operator $$A^{\alpha }_{\varepsilon }$$ falls into the class of semi-Lagrangian schemes (see e.g. ), and provides a consistent monotone approximation for possibly degenerate, non-diagonally dominant diffusion coefficients.
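A minimal one-dimensional sketch of the semi-Lagrangian approximation (3.15)–(3.16), assuming scalar coefficients and a grid wide enough that the displaced points stay inside it (the helper names are ours):

```python
import numpy as np

def sl_operator(phi, i, k, sigma, b, x):
    """Semi-Lagrangian approximation of (1/2) sigma^2 phi'' + b phi' at the
    node x[i] in one dimension, cf. (3.15)-(3.16).

    The piecewise linear interpolant of phi is evaluated at the displaced
    points x_i +/- k*sigma and x_i + k^2*b; the resulting stencil has
    nonnegative off-diagonal coefficients, hence the scheme is monotone.
    """
    interp = lambda y: np.interp(y, x, phi)  # tent-function interpolant
    xi = x[i]
    second = (interp(xi + k * sigma) - 2 * phi[i] + interp(xi - k * sigma)) / k**2
    drift = (interp(xi + k**2 * b) - phi[i]) / k**2
    return 0.5 * second + drift
```

Consistency requires the interpolation error $$O(h^2)$$ to be small relative to $$k^2$$, which is why h and k are tied together in the analysis of Sect. 4.4.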

Before presenting our fully discrete scheme, we shall point out that by considering a truncated problem, one can without loss of generality assume that $$\sigma ^\alpha$$ is bounded, which consequently implies the Hamiltonian

\begin{aligned} {\bar{f}}^{\alpha }(\mathbf{x},u, p,k):=f^{\alpha }(\mathbf{x},u,({\tilde{\sigma }}^\alpha (x))^T p,k) \end{aligned}

is Lipschitz continuous with respect to p. Indeed, suppose $$\sigma ^\alpha$$ is unbounded, then for any given $$\mu >0$$, we define the cut-off function

\begin{aligned} \xi _\mu :{{\mathbb {R}}}^d\rightarrow [0,1],\quad \xi _\mu \equiv 1\text { for }|x|<\frac{1}{\mu },\text { and }\xi _\mu \in C_0^\infty ({{\mathbb {R}}}^d), \end{aligned}

and consider a truncated HJBVI (3.5) obtained by replacing $$\sigma ^\alpha$$ with the bounded diffusion coefficient $$\sigma _\mu ^\alpha (x):=\xi _\mu (x)\sigma ^\alpha (x)$$. Using the fact that $$1-\xi _\mu \rightarrow 0$$ uniformly on compact sets as $$\mu \rightarrow 0$$, one can prove that this additional approximation is consistent with (3.5), and hence that its viscosity solution converges to that of (3.5) uniformly on bounded sets.

The Lipschitz continuous Hamiltonian enables an approximation by the implicit Lax-Friedrichs numerical flux . For each $$l=1,\ldots , d$$, we denote by $$\Delta ^{(l)}_+ U^n_{j,i}$$ (resp. $$\Delta ^{(l)}_- U^n_{j,i}$$) the one-step forward (resp. backward) difference operator along the l-th coordinate, and by $$\Delta U^{n}_{j,i}=(\Delta ^{(1)}_+ U^n_{j,i}+\Delta ^{(1)}_- U^n_{j,i},\ldots , \Delta ^{(d)}_+ U^n_{j,i}+\Delta ^{(d)}_- U^n_{j,i})^T$$ the central difference operator at the grid point $$\mathbf{x}_{j,i}^n$$. Then for any $$\theta >0$$, the Lax-Friedrichs numerical flux is given for any $$(\mathbf{x}^{n}_{j,i}, u, k)\in \Omega _{j,h}\times {{\mathbb {R}}}\times {{\mathbb {R}}}$$ by

\begin{aligned} {\tilde{f}}^{\alpha }(\mathbf{x}^{n}_{j,i},u, \Delta U^{n}_{j,i},k):={\bar{f}}^{\alpha }\left( \mathbf{x}^{n}_{j,i},u, \frac{\Delta U^{n}_{j,i}}{2h},k\right) +\sum _{l=1}^d \frac{\theta }{\lambda }\bigg (\frac{\Delta ^{(l)}_+ U^n_{j,i}-\Delta ^{(l)}_- U^n_{j,i}}{h}\bigg ),\nonumber \\ \end{aligned}
(3.17)

where we define $$\lambda =\Delta t/h$$.
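In one dimension, the numerical flux (3.17) can be sketched as follows; here `f_bar` is the Hamiltonian viewed as a function of the gradient argument only, with the remaining arguments $$(\mathbf{x}^n_{j,i},u,k)$$ held fixed:

```python
import numpy as np

def lax_friedrichs_flux(f_bar, U, i, h, theta, lam):
    """Lax-Friedrichs flux (3.17) at the interior node i, for d = 1.

    The artificial-diffusion term (theta/lam) * (second difference)/h makes
    the flux monotone in the neighbouring values, at the price of an O(h)
    perturbation for smooth solutions.
    """
    dp = U[i + 1] - U[i]           # forward difference
    dm = U[i] - U[i - 1]           # backward difference
    central = (dp + dm) / (2 * h)  # central difference quotient
    return f_bar(central) + (theta / lam) * (dp - dm) / h
```

For a linear profile the second difference vanishes and the flux reduces to the Hamiltonian evaluated at the exact slope.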

A fully implicit time discretisation is finally given by

\begin{aligned} \begin{aligned}&U^{n}_{j,i}-\Delta t\big (A^{\alpha }_{\varepsilon ,h,k}U^{n}_{j,i} +K^{\alpha ,1}_{\varepsilon ,h}U^{n}_{j,i}\\&\quad + {\tilde{f}}^{\alpha }(\mathbf{x}^{n}_{j,i},U^{n}_{j,i}, \Delta U^{n}_{j,i},B^{\alpha }_{\varepsilon ,h}U^{n}_{j,i}) \big )-U^{n-\frac{1}{2}}_{j,i}=0. \end{aligned} \end{aligned}
(3.18)

Substituting (3.8) into (3.18), one can reformulate this implicit scheme in its equivalent form (3.10).

We end this section with a remark on the implementation of the implicit scheme (3.18). To avoid solving linear systems with the dense matrices resulting from the discretization of the nonlocal operators, we write the solution of (3.18) as the fixed point of a contraction mapping T involving only sparse linear solves, so that sufficient accuracy is achieved in practice by a few fixed-point iterations.

Given bounded functions $$U^{n-\frac{1}{2}}_{j}$$ and $$U^{n,(k)}_{j}$$, we define the following mapping T on $$\ell ^\infty ({\mathbb {Z}}^d)$$, i.e., the Banach space of bounded functions on $$h{\mathbb {Z}}^d$$ equipped with the sup-norm $$|\cdot |_0$$:

\begin{aligned}&(1-\Delta t A^{\alpha }_{\varepsilon ,h,k})(TU^{n,(k)}_{j,i}) =\Delta t \big (K^{\alpha ,1}_{\varepsilon ,h}U^{n,(k)}_{j,i}\nonumber \\&\quad + {\tilde{f}}^{\alpha }(\mathbf{x}^{n}_{j,i},U^{n,(k)}_{j,i}, \Delta U^{n,(k)}_{j,i},B^{\alpha }_{\varepsilon ,h}U^{n,(k)}_{j,i}) \big )+U^{n-\frac{1}{2}}_{j,i}. \end{aligned}
(3.19)

It is clear that a fixed point $$U_{j}^n$$ with $$TU_{j}^n=U_{j}^n$$ is a solution to (3.18). Moreover, for any given functions $$U^{n,(k)}_{j}$$ and $$V^{n,(k)}_{j}$$ in $$\ell ^\infty ({\mathbb {Z}}^d)$$, we obtain from the Lipschitz continuity of f and the $$\ell ^\infty$$ stability of the numerical flux $${\tilde{f}}$$ (see Lemma 4.11) that

\begin{aligned}&|TU^{n,(k)}_{j}-TV^{n,(k)}_{j}|_0\le \bigg (\Delta t \big [\sum _{m\not =0}\kappa ^{\alpha ,n}_{h,m,i}\\&\quad +C \big (1+\sum _{m\not =0}\beta ^{\alpha ,n}_{h,m,i}\big )\big ]+4d\theta \bigg )|U^{n,(k)}_{j}-V^{n,(k)}_{j}|_0. \end{aligned}

Since we need $$h=o(\varepsilon )$$ in general to achieve consistency of our scheme (see Lemma 4.9), it suffices to require $$\frac{\Delta t}{\varepsilon ^2}<1$$ and $$4d\theta <1$$ to ensure T is a contraction mapping on $$\ell ^\infty ({\mathbb {Z}}^d)$$. This establishes the well-posedness of (3.18) and enables us to solve the nonlinear equation (3.18) through Picard iterations by setting

\begin{aligned} U^{n,(0)}_{j}=U^{n-\frac{1}{2}}_{j},\qquad U^{n,(k+1)}_{j}=TU^{n,(k)}_{j},\quad k\ge 0. \end{aligned}

We emphasize that the criterion $$\frac{\Delta t}{\varepsilon ^2}<1$$ is a sufficient condition in the worst case, but is often far from computationally optimal since we have used no information about the exact behavior of the singular measure $$\nu$$ around zero (see Remark 6 for details). For typical Lévy measures from finance , we only need $$\Delta t=O(\varepsilon )$$ (such as in the variance gamma case in our tests) or even $$\Delta t$$ independent of $$\varepsilon$$ (for instance, for a Gaussian density) to guarantee T is a contraction mapping.

Moreover, since in practice we can evaluate the discrete nonlocal operators $$K^{\alpha ,1}_{\varepsilon ,h}U^{n}_{j}$$ and $$B^{\alpha }_{\varepsilon ,h}U^{n}_{j}$$ at all grid points in $$O(N\log N)$$ operations using an FFT (see e.g. ), and in each iteration a sparse linear system is solved (tridiagonal in one dimension), the total complexity of the implicit scheme is still close to linear.
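When the jump size does not depend on the state, i.e. $$\eta ^\alpha (x,e)=e$$, the coefficients $$\kappa ^{\alpha ,n}_{h,m,i}$$ are independent of i and the sum (3.12) becomes a discrete correlation, which the FFT evaluates at all nodes at once. A sketch with periodic wrap-around (in an actual implementation one would pad and truncate instead) reads:

```python
import numpy as np

def nonlocal_apply_fft(phi, kappa):
    """Evaluate (K phi)_i = sum_m kappa_m (phi_{i+m} - phi_i) at all nodes
    simultaneously via the FFT, assuming spatially homogeneous coefficients
    and periodic wrap-around at the boundary.

    kappa[m] is the weight of the shift by m grid points, with negative
    shifts stored at the end of the array (standard FFT index convention).
    """
    # correlation sum_m kappa_m phi_{i+m} = ifft(conj(fft(kappa)) * fft(phi))
    corr = np.real(np.fft.ifft(np.conj(np.fft.fft(kappa)) * np.fft.fft(phi)))
    return corr - kappa.sum() * phi
```

This costs $$O(N\log N)$$ per application instead of the $$O(N^2)$$ of a dense matrix-vector product.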

It is also worth pointing out that even if we choose explicit approximations for the nonlocal operators, due to the nonlinearity of f, one still has to perform iterations to solve the resulting nonlinear equations. Because we only assume Lipschitz continuity but no higher regularity of f, we adopt the Picard iteration to solve for $$U^{n}$$.
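Putting the pieces together, one implicit timestep solved by the Picard iteration (3.19) might look as follows. This is a simplified dense-matrix sketch, not the actual implementation: A, K and the driver f stand in for the discrete local operator, the discrete nonlocal operator and the (Lipschitz) driver, respectively.

```python
import numpy as np

def picard_timestep(u_prev, A, K, f, dt, tol=1e-10, max_iter=200):
    """One implicit timestep of (3.18) via the fixed-point iteration (3.19):

        (I - dt*A) u^(k+1) = dt*(K u^(k) + f(u^(k), K u^(k))) + u_prev.

    The local operator A is kept implicit (a dense solve here for clarity;
    sparse/tridiagonal in practice), while the dense nonlocal operator K
    and the driver f are frozen at the previous iterate.
    """
    N = len(u_prev)
    M = np.eye(N) - dt * A
    u = u_prev.copy()  # U^{n,(0)} = U^{n-1/2}
    for _ in range(max_iter):
        Ku = K @ u
        u_new = np.linalg.solve(M, dt * (Ku + f(u, Ku)) + u_prev)
        if np.max(np.abs(u_new - u)) < tol:
            return u_new
        u = u_new
    return u
```

Under the stepsize restrictions discussed above, the iteration contracts and only a handful of sweeps are needed per timestep.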

## 4 Convergence Analysis

In this section, we establish the convergence of the numerical approximations in Sect. 3 to the viscosity solution of the HJBVI (2.5). To simplify the notation, we will occasionally drop the terms $$\{K^\alpha u\}_{\alpha \in \mathbf{A }}$$ and $$\{B^\alpha u\}_{\alpha \in \mathbf{A }}$$ in (3.3), (3.5) and (3.6), and simply denote them by $$F(\mathbf{x},u,Du,D^2u)=0$$ in the sequel.

### 4.1 Approximation by Non-singular Measures and Finite Control Sets

In this section, we shall study the approximations of HJBVI (2.5) with a non-singular measure and finite control set.

In fact, it is not difficult to see that (3.3) and (3.5) are consistent approximations of (2.5) in the viscosity sense (see e.g. [22, 29]), such that the comparison principle of (2.5) enables us to conclude that the solutions of (3.3) and (3.5) converge to that of (2.5) on compact sets as $$\varepsilon ,\delta \rightarrow 0$$.

The remainder of this section thus focuses on obtaining an error estimate for these approximations in terms of the jump truncation size $$\varepsilon$$ and control mesh size $$\delta$$. Although it should be possible to extend the analytic arguments in Reference  to the present nonlinear setting, we follow a different path by first identifying the viscosity solution of (3.5) as the value function of a mixed control problem in terms of modified SDE and BSDE. This control-theoretical interpretation further enables us to develop a shorter proof of the convergence order of these approximations through a probabilistic argument.

We start with the truncation of singular measures. A possible way to work with a nonsingular jump measure is to introduce a backward SDE with a modified driver and an approximate jump-diffusion dynamics, in which the small-jump part is replaced by a rescaled diffusion coefficient of the Brownian motion W.

More precisely, we adopt the same probability space as introduced in Sect. 2, which supports the Brownian motion W and the independent Poisson random measure $$N(dt,de)$$. For a given jump truncation size $$\varepsilon >0$$, we define a modified diffusion coefficient $${\tilde{\sigma }}^\alpha$$ as in (3.1), and also introduce a modified driver $$f^\varepsilon (\alpha ,t,x,y,z,k):={\hat{f}}(\alpha ,t, x,y,z,\int _{|e| \ge \varepsilon } k(e)\gamma (x,e)\nu (de)),$$ where $${\hat{f}}$$ is given in Assumption 2.1.

Next we shall modify the coefficients by including the control discretization. For any given control mesh size $$\delta >0$$, we denote by $$\phi ^\delta$$ the piecewise constant approximation (on the control variable $$\alpha$$) of a generic function $$\phi$$ based on its value on $$\mathbf{A}_\delta$$, where $$\phi =b, {\tilde{\sigma }}, \eta , f^\varepsilon$$. Note that the Lipschitz continuity of the coefficients on the control parameter $$\alpha$$ (see Assumption 2.1) and the condition (3.4) imply that there exists a constant $$C\ge 0$$, such that it holds for any given $$(\alpha ,t,x,u,p,k)\in \mathbf{A }\times [0,T]\times {{\mathbb {R}}}^d\times {{\mathbb {R}}}\times {{\mathbb {R}}}^d\times {{\mathbb {R}}}$$ and discretization parameters $$\varepsilon ,\delta >0$$ that

\begin{aligned} |b(\alpha ,x)-b^\delta (\alpha ,x)|+|\sigma (\alpha ,x)-{\tilde{\sigma }}^\delta ({\alpha },x)|&\le C\bigg (\delta +\bigg |\int _{|e|<\varepsilon } (1\wedge |e|^2)\, \nu (de)\bigg |^{\frac{1}{2}}\bigg ), \end{aligned}
(4.1)
\begin{aligned} |\eta (\alpha ,x,e)-\eta ^\delta (\alpha ,x,e)|&\le C\delta (1\wedge |e|), \end{aligned}
(4.2)
\begin{aligned} |{f}(\alpha ,t,x,u,p,k)-{f}^{\varepsilon ,\delta }({\alpha },t,x,u,p,k)|&\le C\bigg (\delta +\int _{|e|<\varepsilon } k(e)\gamma (x,e)\, \nu (de)\bigg ). \end{aligned}
(4.3)

For any given initial state $$x\in {{\mathbb {R}}}^d$$, control $$\alpha \in \mathcal {A}_t^t$$ and stopping time $$\tau \in \mathcal {T}_t^t$$, we consider the modified controlled jump-diffusion process $$(X^{\varepsilon ,\delta ,\alpha ,t, x}_s)_{ t \le s\le T}$$ satisfying the following SDE: for each $$s\in [t,T]$$,

\begin{aligned} X_s= & {} x+\int _t^s b^\delta (\alpha _v,X_v)\,dv+\int _t^s {\tilde{\sigma }}^\delta (\alpha _v,X_v)\,dW^t_v\nonumber \\&+\int _t^s\int _{|e|>\varepsilon } \eta ^\delta (\alpha _v,X_v,e)\,{\tilde{N}}^t(dv,de), \end{aligned}
(4.4)

and the BSDE with the modified controlled driver $$f^{\varepsilon ,\delta }(\alpha _s,s,X_s^{\varepsilon ,\delta ,\alpha ,t,x},y,z,k)$$:

\begin{aligned} {\left\{ \begin{array}{ll} \!\begin{aligned} -dY^{\varepsilon ,\delta ,\alpha ,t,x}_{s,\tau }=&{}f^{\varepsilon ,\delta }(\alpha _s,s,X_s^{\varepsilon ,\delta ,\alpha ,t,x},Y^{\varepsilon ,\delta ,\alpha ,t,x}_{s,\tau },Z^{\varepsilon ,\delta ,\alpha ,t,x}_{s,\tau },K^{\varepsilon ,\delta ,\alpha ,t,x}_{s,\tau })ds\\ &{}\qquad -\,Z^{\varepsilon ,\delta ,\alpha ,t,x}_{s,\tau }dW^t_s-\int _{E} K^{\varepsilon ,\delta ,\alpha ,t,x}_{s,\tau }(e)\,{\tilde{N}}^t(ds,de),\qquad s\in [t,\tau );\\ Y^{\varepsilon ,\delta ,\alpha ,t,x}_{\tau ,\tau }=&{}\xi (\tau ,X^{\varepsilon ,\delta ,\alpha ,t,x}_\tau ). \end{aligned} \end{array}\right. }\nonumber \\ \end{aligned}
(4.5)

The coefficients of the above SDE and BSDE satisfy Assumption 2.1, and therefore the equations are well-posed.

Now for each time $$t\in [0,T]$$ and state $$x\in {{\mathbb {R}}}^d$$, we consider the following value function:

\begin{aligned} u^{\varepsilon ,\delta }(t,x)= \sup _{\tau \in \mathcal {T}_t^t}\sup _{\alpha \in \mathcal {A}_t^t} \mathcal {E}^{f^{\varepsilon ,\delta ,\alpha }}_{t,\tau }[\xi (\tau , X^{\varepsilon ,\delta ,\alpha , t,x}_\tau )], \end{aligned}
(4.6)

subject to the controlled SDE (4.4), where the nonlinear expectation is induced by (4.5).

The following proposition shows that $$u^{\varepsilon ,\delta }$$ is, after time reversal, the unique viscosity solution of the HJBVI (3.5) introduced in Sect. 3.

### Proposition 4.1

The function $$(t,x)\mapsto u^{\varepsilon ,\delta }(T-t,x)$$, with $$u^{\varepsilon ,\delta }$$ defined by (4.6), is the unique viscosity solution of the HJBVI (3.5).

### Proof

Due to the compactness of the set $$\mathbf A$$, one can follow the same arguments as in References [16, 17] and characterize $$u^{\varepsilon ,\delta }$$ as the unique viscosity solution of an HJBVI with coefficients $$b^\delta ,{\tilde{\sigma }}^{\delta },\eta ^\delta , f^{\varepsilon ,\delta }$$. It then suffices to observe that this HJBVI is equivalent to (3.5), since its coefficients are piecewise constant in the control parameter. $$\square$$

### Remark 4

Contrary to the case without control and optimal stopping studied in Reference , it is not clear that one can use a different approximation of the forward backward system by introducing an independent Brownian motion scaled with the standard deviation of small jumps. Indeed, the equations would be well-posed in an enlarged filtration $$\mathbb {G}$$, but the control process is $$\mathbb {F}$$-predictable, with $$\mathbb {F} \subset \mathbb {G}$$, which leads to difficulties in the derivation of the dynamic programming principle.

We now exploit the above control-theoretical characterization of the viscosity solution $$u^{\varepsilon ,\delta }$$ to obtain a convergence order in terms of the jump truncation size $$\varepsilon$$ and the control mesh size $$\delta$$. Let us first show the following uniform convergence result for the forward component $$X^{\varepsilon ,\delta , \alpha ,t,x}$$ towards $$X^{\alpha ,t,x}$$ as $$\varepsilon ,\delta$$ tend to 0.

### Lemma 4.2

For each $$\varepsilon ,\delta \in (0,1)$$, $$p\ge 2$$, $$t\in [0,T]$$, $$x \in \mathbb {R}^d$$ and $$\alpha \in \mathcal {A}_t^t$$, it holds that

\begin{aligned}&\mathbb {E}\left[ \sup _{t \le s \le T} |X_s^{\varepsilon ,\delta , \alpha ,t,x}-X_s^{\alpha ,t,x}|^p\right] \nonumber \\&\quad \le C \bigg \{\delta ^p+ \bigg (\int _{|e|\le \varepsilon }|e|^2\nu (de)\bigg )^{p/2}+\bigg (\int _{|e|\le \varepsilon } |e|^p\nu (de)\bigg )\bigg \}, \end{aligned}
(4.7)

where C is a constant independent of $$t,x,\alpha ,\varepsilon$$ and $$\delta$$.

### Proof

Fix $$\varepsilon ,\delta \in (0,1)$$, $$\alpha \in \mathcal {A}_t^t$$ and $$v \in [t,T]$$. We have:

\begin{aligned}&{\mathbb {E}}\left[ \sup _{t \le u \le v} |X_u^{\varepsilon , \delta ,\alpha ,t,x}-X_u^{\alpha ,t,x}|^p\right] \\&\quad \le C \mathbb {E}\left[ \sup _{t \le u \le v} \left| \int _t^u (b^\delta (\alpha _s, X_s^{\varepsilon , \delta ,\alpha ,t,x})-b(\alpha _s, X_s^{\alpha ,t,x}))ds\right| ^p \right] \\&\qquad + C \mathbb {E}\left[ \sup _{t \le u \le v} \left| \int _t^u (\tilde{\sigma }^\delta (\alpha _s, X_s^{\varepsilon , \delta ,\alpha ,t,x})-\sigma (\alpha _s, X_s^{\alpha ,t,x}))dW_s\right| ^p \right] \\&\qquad +C \mathbb {E}\left[ \sup _{t \le u \le v} \left| \int _t^u \int _{|e| > \varepsilon }(\eta ^\delta (\alpha _s, X_s^{\varepsilon ,\delta , \alpha ,t,x},e)-\eta (\alpha _s, X_s^{\alpha ,t,x},e))\tilde{N}(ds,de)\right| ^p \right] \\&\qquad +C \mathbb {E}\left[ \sup _{t \le u \le v} \left| \int _t^u \int _{|e| \le \varepsilon }(\eta (\alpha _s, X_s^{\alpha ,t,x},e))\tilde{N}(ds,de)\right| ^p \right] , \end{aligned}

where C is a constant independent of $$\alpha$$. The Burkholder–Davis–Gundy inequality, together with (4.1), (4.2), and the Lipschitz assumptions on the coefficients $$b, \sigma , \eta$$ (see Assumption 2.1) lead to:

\begin{aligned}&\mathbb {E}\left[ \sup _{t \le u \le v} |X_u^{\varepsilon , \delta ,\alpha ,t,x}-X_u^{\alpha ,t,x}|^p\right] \\&\quad \le \, C\bigg \{\delta ^p+\bigg (\int _{|e|\le \varepsilon }(1 \wedge |e|^2)\nu (de)\bigg )^{\frac{p}{2}}\\&\qquad +\mathbb {E}\left[ \int _t^v \bigg (\sup _{t \le u \le s}\left| X_u^{\varepsilon ,\delta , \alpha ,t,x}-X_u^{\alpha ,t,x}\right| ^p\bigg )ds \right] \bigg \}\\&\qquad +C \mathbb {E}\left[ \left( \int _t^v \int _{|e|> \varepsilon }|\eta ^\delta (\alpha _s, X_s^{\varepsilon ,\delta , \alpha ,t,x},e)-\eta (\alpha _s, X_s^{\alpha ,t,x},e)|^2\nu (de)ds\right) ^{\frac{p}{2}} \right] \\&\qquad +C \mathbb {E}\left[ \int _t^v \int _{|e| > \varepsilon }|\eta ^\delta (\alpha _s, X_s^{\varepsilon ,\delta , \alpha ,t,x},e)-\eta (\alpha _s, X_s^{\alpha ,t,x},e)|^p\nu (de)ds \right] \\&\qquad + C \mathbb {E}\left[ \left( \int _t^v \int _{|e| \le \varepsilon }|\eta (\alpha _s, X_s^{\alpha ,t,x},e)|^2\nu (de)ds\right) ^{\frac{p}{2}}\right. \\&\qquad \left. + \left( \int _t^v \int _{|e| \le \varepsilon }|\eta (\alpha _s, X_s^{\alpha ,t,x},e)|^p\nu (de)ds\right) \right] \\&\quad \le \,C \bigg \{\mathbb {E}\left[ \int _t^v \left( \sup _{t \le u \le s}|X_u^{\varepsilon ,\delta , \alpha ,t,x}-X_u^{\alpha ,t,x}|^p\right) ds \right] \\&\qquad +\delta ^p+\bigg (\int _{|e|\le \varepsilon }|e|^2\nu (de)\bigg )^{\frac{p}{2}}+\bigg (\int _{|e|\le \varepsilon } |e|^p\nu (de)\bigg )\bigg \}, \end{aligned}

where the last inequality follows from the integrability assumption on the measure $$\nu$$. We then obtain the desired result (4.7) from Gronwall's inequality. $$\square$$

Based on the above estimate, we now derive the order of convergence of the viscosity solution $$u^{\varepsilon ,\delta }$$ of (3.5) towards u in terms of $$\varepsilon$$ and $$\delta$$.

### Theorem 4.3

For any $$p>2$$, there exists a constant $$C_p$$ depending on p, such that for all $$\varepsilon ,\delta \in (0,1)$$, we have

\begin{aligned}&\left| u^{\varepsilon ,\delta } (t,x)-u(t,x)\right| \le C_p \bigg \{ \delta +\bigg (\int _{|e|\le \varepsilon }|e|^2\nu (de)\bigg )^{\frac{1}{2}}\\&\quad +\bigg (\int _{|e|\le \varepsilon }|e|^p\nu (de)\bigg )^{\frac{1}{p}}\bigg \}, \quad (t,x) \in [0,T]\times \mathbb {R}^d. \end{aligned}

### Proof

Fix $$\varepsilon ,\delta \in (0,1)$$, $$t \in [0,T]$$ and $$x \in \mathbb {R}^d$$. The definitions of $$u^{\varepsilon ,\delta }$$ and u imply that

\begin{aligned}&|u^{\varepsilon ,\delta }(t,x)-u(t,x)|^2 = \big |\sup _{\alpha \in \mathcal {A}_t^t } \sup _{\tau \in \mathcal {T}_t^t } \mathcal {E}^{f^{\varepsilon ,\delta ,\alpha }}_{t,\tau }\left[ \zeta (\tau , X_\tau ^{\varepsilon ,\delta , \alpha ,t,x})\right] \nonumber \\&\quad -\sup _{\alpha \in \mathcal {A}_t^t } \sup _{\tau \in \mathcal {T}_t^t } \mathcal {E}^{f^{\alpha }}_{t,\tau }\left[ \zeta (\tau , X_\tau ^{\alpha ,t,x})\right] \big |^2 \nonumber \\&\quad \le \sup _{\alpha \in \mathcal {A}_t^t } \sup _{\tau \in \mathcal {T}_t^t} \left| \mathcal {E}^{f^{\varepsilon ,\delta ,\alpha }}_{t,\tau }\left[ \zeta (\tau , X_\tau ^{\varepsilon ,\delta , \alpha ,t,x})\right] - \mathcal {E}^{f^{\alpha }}_{t,\tau }\left[ \zeta (\tau , X_\tau ^{\alpha ,t,x})\right] \right| ^2. \end{aligned}
(4.8)

Recall that, since $$\alpha \in \mathcal {A}_t^t$$ and $$\tau \in \mathcal {T}_t^t$$, the quantity $$\bigg |\mathcal {E}^{f^{\varepsilon ,\delta ,\alpha }}_{t,\tau }\left[ \zeta (\tau , X_\tau ^{\varepsilon ,\delta , \alpha ,t,x})\right] - \mathcal {E}^{f^{\alpha }}_{t,\tau }\left[ \zeta (\tau , X_\tau ^{\alpha ,t,x})\right] \bigg |$$ is deterministic. By the a priori estimates on the spread between the first components of the solutions of two BSDEs with jumps (see Proposition A.4. in Reference ), we derive from (4.3) that there exist $$\beta >0$$ and $$\eta >0$$, independent of $$\tau \in \mathcal {T}_t^t$$ and $$\alpha \in \mathcal {A}_t^t$$, such that

\begin{aligned}&\left| \mathcal {E}^{f^{\varepsilon ,\delta ,\alpha }}_{t,\tau }\left[ \zeta (\tau , X_\tau ^{\varepsilon ,\delta , \alpha ,t,x})\right] - \mathcal {E}^{f^{\alpha }}_{t,\tau }\left[ \zeta (\tau , X_\tau ^{\alpha ,t,x})\right] \right| ^2 \\&\quad \le \mathbb {E}\left[ e^{\beta (\tau -t)} \left( \zeta (\tau , X_\tau ^{\varepsilon , \delta ,\alpha ,t,x})-\zeta (\tau , X_\tau ^{\alpha ,t,x})\right) ^2 \right] \\&\qquad +\eta \mathbb {E} \left[ \int _t^\tau e^{\beta (s-t)} \left( f(s,\alpha _s,X_s^{\alpha ,t,x},Y_{s,\tau }^{\alpha ,t,x},Z_{s,\tau }^{\alpha ,t,x},K_{s,\tau }^{\alpha ,t,x})\right. \right. \\&\qquad \left. \left. -f^{\varepsilon ,\delta }(s,\alpha _s,X_s^{\varepsilon ,\delta ,\alpha ,t,x},Y_{s,\tau }^{\alpha ,t,x},Z_{s,\tau }^{\alpha ,t,x},K_{s,\tau }^{\alpha ,t,x})\right) ^2 ds\right] \\&\quad \le C \bigg \{ \mathbb {E}\bigg [\sup _{t \le u \le T} |X_u^{\varepsilon ,\delta , \alpha ,t,x}-X_u^{\alpha ,t,x}|^2 \bigg ]+\delta ^2\\&\qquad +\mathbb {E}\bigg [ \int _t^\tau \bigg (\int _{|e| \le \varepsilon } K_{s,\tau }^{\alpha ,t,x}(e)\gamma (X_s^{\alpha ,t,x},e)\nu (de)\bigg )^2ds\bigg ] \\&\qquad + \mathbb {E}\bigg [ \int _t^\tau \bigg (\int _{|e|> \varepsilon } K_{s,\tau }^{\alpha ,t,x}(e)(\gamma (X_s^{\alpha ,t,x},e)-\gamma (X_s^{\varepsilon ,\delta ,\alpha ,t,x},e))\nu (de)\bigg )^2 ds\bigg ] \bigg \}, \end{aligned}

where C is a constant independent of $$t,x,\varepsilon ,\delta ,\alpha , \tau$$, depending only on $$\beta$$, $$\eta$$, T and the Lipschitz constant of f.

Now we estimate the last two terms in the above inequality. For any given $$p\ge 2$$, the uniform boundedness of $$\zeta$$, g and f with respect to $$t,x,\alpha$$ and $$\tau$$ (see Assumption 2.1), together with the a priori estimates for $$L^p$$ solutions of BSDEs (see Proposition 2 in Reference ), gives us a uniform control on the $$\mathbb {H}^p_{t,\nu }$$ norm of $$K_{\cdot ,\tau }^{\alpha ,t,x}$$ (which depends only on p and the bounds of $$\zeta$$, g, f and T). Using this result and the boundedness of the map $$\gamma$$ (see Assumption 2.1), we derive from Hölder's inequality that there exists a constant C independent of $$\tau$$ and $$\alpha$$ such that

\begin{aligned}&\mathbb {E}\bigg [ \int _t^\tau \bigg (\int _{|e| \le \varepsilon } K_{s,\tau }^{\alpha ,t,x}(e)\gamma (X_s^{\alpha ,t,x},e)\nu (de)\bigg )^2ds\bigg ]\\&\quad \le \mathbb {E}\bigg [ \int _t^\tau \bigg (\int _{|e| \le \varepsilon } (K_{s,\tau }^{\alpha ,t,x})^2(e) \nu (de)\bigg ) \bigg (\!\int _{|e| \le \varepsilon } \gamma ^2(X_s^{\alpha ,t,x},e)\nu (de)\bigg )ds\bigg ]\\&\quad \le C\bigg (\int _{|e|\le \varepsilon }|e|^2\nu (de)\bigg ). \end{aligned}

Furthermore, for any given $$p>1$$, by using the Lipschitz continuity of the map $$\gamma$$, we can obtain

\begin{aligned}&\mathbb {E}\bigg [ \int _t^\tau \bigg (\int _{|e|> \varepsilon } K_{s,\tau }^{\alpha ,t,x}(e)(\gamma (X_s^{\alpha ,t,x},e)-\gamma (X_s^{\varepsilon ,\delta ,\alpha ,t,x},e))\nu (de)\bigg )^2 ds\bigg ] \nonumber \\&\le C \mathbb {E}\bigg [ \int _t^\tau \bigg (\int _{|e|> \varepsilon } |K_{s,\tau }^{\alpha ,t,x}(e)||X_s^{\alpha ,t,x}-X_s^{\varepsilon ,\delta ,\alpha ,t,x}|(1\wedge |e|^2)\nu (de)\bigg )^2 ds\bigg ]\nonumber \\&\le C \mathbb {E}\bigg [ \sup _{t\le u\le T}|X_u^{\alpha ,t,x}-X_u^{\varepsilon ,\delta ,\alpha ,t,x}|^2 \int _t^\tau \bigg (\int _{E} |K_{s,\tau }^{\alpha ,t,x}(e)|(1\wedge |e|^2)\nu (de)\bigg )^2 ds\bigg ]\nonumber \\&\le C \mathbb {E}\bigg [ \sup _{t\le u\le T}|X_u^{\alpha ,t,x}-X_u^{\varepsilon ,\delta ,\alpha ,t,x}|^2 \bigg (\int _t^\tau \int _{E} |K_{s,\tau }^{\alpha ,t,x}(e)|^2\nu (de)ds\bigg ) \bigg (\int _{E} (1\wedge |e|^4)\nu (de)\bigg )\bigg ]\nonumber \\&\le C_p \bigg (\mathbb {E}\bigg [ \sup _{t\le u\le T}|X_u^{\alpha ,t,x}-X_u^{\varepsilon ,\delta ,\alpha ,t,x}|^{2p}\bigg ]\bigg )^{1/p}, \end{aligned}
(4.9)

where we have used Hölder's inequality and the boundedness of the $$\mathbb {H}^{2p/(p-1)}_{t,\nu }$$ norm of $$K_{\cdot ,\tau }^{\alpha ,t,x}$$ for the last line. Consequently, by combining the above estimates and using Lemma 4.2, we obtain that

\begin{aligned}&\left| \mathcal {E}^{f^{\varepsilon ,\delta ,\alpha }}_{t,\tau }\left[ \zeta (\tau , X_\tau ^{\varepsilon ,\delta , \alpha ,t,x})\right] - \mathcal {E}^{f^{\alpha }}_{t,\tau }\left[ \zeta (\tau , X_\tau ^{\alpha ,t,x})\right] \right| ^2 \\&\quad \le \, C \bigg \{ \mathbb {E}\bigg [\sup _{t \le u \le T} |X_u^{\varepsilon ,\delta , \alpha ,t,x}-X_u^{\alpha ,t,x}|^2 \bigg ]+\delta ^2\\&\qquad +\int _{|e|\le \varepsilon }|e|^2\nu (de)+\bigg (\mathbb {E}\bigg [ \sup _{t\le u\le T}|X_u^{\varepsilon ,\delta ,\alpha ,t,x}-X_u^{\alpha ,t,x}|^{2p}\bigg ]\bigg )^{\frac{1}{p}}\bigg \}\\&\quad \le \,C \bigg \{ \delta ^2+\int _{|e|\le \varepsilon }|e|^2\nu (de)+\bigg [\delta ^{2p}+\bigg (\int _{|e|\le \varepsilon }|e|^2\nu (de)\bigg )^p\\&\qquad +\bigg (\int _{|e|\le \varepsilon }|e|^{2p}\nu (de)\bigg )\bigg ]^{\frac{1}{p}}\bigg \}\\&\quad \le \,C \bigg \{ \delta ^2+\bigg (\int _{|e|\le \varepsilon }|e|^2\nu (de)\bigg )+\bigg (\int _{|e|\le \varepsilon }|e|^{2p}\nu (de)\bigg )^{\frac{1}{p}}\bigg \}, \end{aligned}

from which we can conclude the desired estimate by taking the supremum over $$\alpha$$ and $$\tau$$. $$\square$$

Theorem 4.3 extends the continuous dependence result for classical nonlocal HJBVIs in Reference [22, Theorem 4.4] to the HJBVIs with nonlinear drivers and state-dependent measures (the operator $$B^\alpha$$ defined by (2.8) involves the measure $$\gamma (x,e)\nu (de)$$, which depends on the spatial variable x).

Due to the presence of the state-dependent measure, in particular the term (4.9), our error estimate has an additional term $$\big (\int _{|e|\le \varepsilon }|e|^p\nu (de)\big )^{1/p}$$, which in general cannot be compared to the term $$\big (\int _{|e|\le \varepsilon }|e|^2\nu (de)\big )^{1/2}$$ without further information. For example, if $$\nu$$ is finite around zero, then for small enough $$\varepsilon$$, the fact that $$p>2$$ and Jensen’s inequality lead to the following estimate:

\begin{aligned}&\bigg (\int _{|e|\le \varepsilon }|e|^2\nu (de)\bigg )^{\frac{p}{2}}=\nu (B_\varepsilon )^{\frac{p}{2}}\bigg (\int _{|e|\le \varepsilon }|e|^2\frac{\nu (de)}{\nu (B_\varepsilon )}\bigg )^{\frac{p}{2}}\\&\quad \le \nu (B_\varepsilon )^{\frac{p}{2}-1}\bigg (\int _{|e|\le \varepsilon }|e|^p{\nu (de)}\bigg )\le \int _{|e|\le \varepsilon }|e|^p{\nu (de)}, \end{aligned}

where we denote $$\nu (B_\varepsilon )=\nu (\{e\in E\mid 0<|e|\le \varepsilon \})$$. On the other hand, if we assume that the singular measure $$\nu$$ behaves similarly to the Lévy measures of $$\alpha$$-stable processes around zero, in the sense that $$\nu$$ admits a density $$\rho (e)$$ such that it holds for some constants $$C> 0$$ and $$\kappa \in [0, 2)$$ that

\begin{aligned} 0\le \rho (e)\le C|e|^{-n-\kappa }, \quad |e|<1, \, e\in E={{\mathbb {R}}}^n\setminus \{0\}, \end{aligned}

then a direct computation shows $$\big (\int _{|e|\le \varepsilon }|e|^p\nu (de)\big )^{1/p}=\mathcal {O}(\varepsilon ^{(p-\kappa )/p})$$, $$p\ge 2$$, which implies the jump truncation error is dominated by the term $$\big (\int _{|e|\le \varepsilon }|e|^2\nu (de)\big )^{1/2}$$, and consequently we recover the same convergence rate as that for the classical setting.
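Indeed, in the one-dimensional case $$n=1$$, the direct computation mentioned above reads

\begin{aligned} \int _{|e|\le \varepsilon }|e|^p\,\nu (de)\le C\int _{-\varepsilon }^{\varepsilon }|e|^{p-1-\kappa }\,de=\frac{2C}{p-\kappa }\,\varepsilon ^{p-\kappa }, \end{aligned}

and hence $$\big (\int _{|e|\le \varepsilon }|e|^p\nu (de)\big )^{1/p}=\mathcal {O}(\varepsilon ^{(p-\kappa )/p})$$; since $$(p-\kappa )/p=1-\kappa /p\ge 1-\kappa /2=(2-\kappa )/2$$ for $$p\ge 2$$, this term is indeed asymptotically dominated by $$\varepsilon ^{(2-\kappa )/2}$$ as $$\varepsilon \rightarrow 0$$.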

### 4.2 Approximation by Switching Systems

In this section, we study the approximation of (3.5) by switching systems. We adopt the following standard definition of a viscosity solution to switching systems of the form (3.6) (see [1, 6, 29] and references therein).

### Definition 4.4

(Viscosity solution of switching system) An $${{\mathbb {R}}}^J$$-valued upper (resp. lower) semicontinuous function U is said to be a viscosity subsolution (resp. supersolution) of (3.6) if and only if, for any point $$\mathbf{x}_0$$ and any $$\phi \in C^{1,2}({\bar{\mathcal {Q}}}_T)$$ such that $$U_j-\phi$$ attains its global maximum (resp. minimum) at $$\mathbf{x}_0$$, one has

\begin{aligned}&F_{j*}^{\varepsilon ,\delta ,c}(\mathbf{x}_0,U_j(\mathbf{x}_0), D\phi (\mathbf{x}_0), D^2\phi (\mathbf{x}_0),\{K_\varepsilon ^\alpha \phi (\mathbf{x}_0)\}_{\alpha \in \mathbf{A }_\delta },\\&\qquad \{B^\alpha _\varepsilon \phi (\mathbf{x}_0)\}_{\alpha \in \mathbf{A }_\delta },\{U_k(\mathbf{x}_0)\}_{k\not =j})\le 0\\&\quad \big (resp. \quad F_j^{\varepsilon ,\delta ,c\,*}(\mathbf{x}_0,U_j(\mathbf{x}_0), D\phi (\mathbf{x}_0), D^2\phi (\mathbf{x}_0),\{K_\varepsilon ^\alpha \phi (\mathbf{x}_0)\}_{\alpha \in \mathbf{A }_\delta },\\&\qquad \{B^\alpha _\varepsilon \phi (\mathbf{x}_0)\}_{\alpha \in \mathbf{A }_\delta }, \{U_k(\mathbf{x}_0)\}_{k\not =j})\ge 0\big ). \end{aligned}

A continuous function is a viscosity solution of the switching system (3.6) if it is both a viscosity sub- and supersolution.

Note that in the definition of the viscosity solution of $$F_j$$, the test function only replaces $$U_j$$ in the integrals and derivatives, while leaving the terms $$\{U_k\}_{k\not =j}$$ unchanged.

Now we present the comparison principle for bounded semicontinuous viscosity solutions of (3.6), which not only implies the uniqueness of bounded viscosity solutions of (3.6), but is also essential for our convergence analysis. The proof will be given in Appendix 1.

### Theorem 4.5

Let $$U{=(U_1,U_2,...,U_J)}$$ and $$V {=(V_1,V_2,...,V_J)}$$ be bounded viscosity sub- and supersolutions, respectively, of (3.6) with $$U(0,\cdot )\le V(0,\cdot )$$. Then it holds under Assumption 2.1 that $$U_j(\mathbf{x})\le V_j(\mathbf{x})$$ for all $$j=1,\ldots , J$$.

The following theorem demonstrates the convergence of the switching system to the finite control HJBVI (3.5) as the switching cost tends to 0. Convergence with order 1/3 is proved in Reference  by a different technique for nonlocal Bellman equations without obstacles and nonlinear source terms.

We momentarily assume the switching system (3.6) to admit a viscosity solution bounded independently of the (small enough) switching cost c. We give a constructive proof of existence through our numerical schemes in Sect. 4.3.

### Theorem 4.6

Under Assumption 2.1, let $$U^{\varepsilon ,\delta , c}=(U^{\varepsilon ,\delta , c}_1,\ldots , U^{\varepsilon ,\delta , c}_J)$$ and $$u^{\varepsilon ,\delta }$$ be the viscosity solution of (3.6) and (3.5), respectively. Then for fixed $$\varepsilon , \delta >0$$, we have for each $$j=1,\ldots , J$$ that $$U^{\varepsilon ,\delta , c}_j\rightarrow u^{\varepsilon ,\delta }$$ uniformly on compact sets as $$c\rightarrow 0$$.

### Proof

Since $$\varepsilon$$ and $$\delta$$ are fixed for our analysis, we shall omit the dependence on $$\varepsilon$$ and $$\delta$$, and simply denote by $$U^{c}$$ the solution of (3.6). Consider a sequence of switching costs $$c_m\rightarrow 0$$ as $$m\rightarrow \infty$$, and the corresponding viscosity solution $$U^{c_m}=(U^{c_m}_1,\ldots , U^{c_m}_J)$$. We shall first prove by contradiction that

\begin{aligned} U^{c_m}_j(\mathbf{x})\ge \mathcal {M}_j U^{c_m},\quad \mathbf{x}\in \mathcal {Q}_T, \; j=1,\ldots , J. \end{aligned}
(4.10)

Suppose the statement is false; then there would exist $$k\not = j$$ and $$\mathbf{x}_0\in \mathcal {Q}_T$$ such that $$U^{c_m}_j(\mathbf{x}_0)< U^{c_m}_k(\mathbf{x}_0)-c_m$$. We then obtain from the continuity of $$U_j^{c_m}$$ and $$U^{c_m}_k$$ that there exists a nonempty open ball B around $$\mathbf{x}_0$$ such that

\begin{aligned} U^{c_m}_j(\mathbf{x})< U^{c_m}_k(\mathbf{x})-c_m, \quad \mathbf{x}\in B. \end{aligned}

On the other hand, since semi-jets are nonempty on a dense set (see e.g. [25, Lemma 8, p. 23]), there exists a $$C^2$$ function $$\phi$$ such that $$U^{c_m}_j-\phi$$ attains its minimum at some point $$\mathbf{x}_1\in B$$. Hence we deduce from the fact that $$U^{c_m}_j$$ is a supersolution that

\begin{aligned} U^{c_m}_j(\mathbf{x}_1)\ge \mathcal {M}_j U^{c_m}(\mathbf{x}_1)\ge U^{c_m}_k(\mathbf{x}_1)-c_m, \end{aligned}

which contradicts the strict inequality on B and thus proves (4.10). We now introduce the following functions through a relaxed limit: for $$j=1,\ldots , J$$,

\begin{aligned} {\overline{U}}_j(\mathbf{x})=\lim _{r\rightarrow \infty }\sup _{m>r}\sup _{|\mathbf{y}-\mathbf{x}|<1/r}U^{c_m}_j(\mathbf{y}),\quad \underline{U}_j(\mathbf{x})=\lim _{r\rightarrow \infty }\inf _{m>r}\inf _{|\mathbf{y}-\mathbf{x}|<1/r}U^{c_m}_j(\mathbf{y}). \end{aligned}
(4.11)

It is not hard to check $${\overline{U}}_1=\ldots ={\overline{U}}_J\equiv {\overline{U}}$$ and $$\underline{U}_1=\ldots =\underline{U}_J\equiv \underline{U}$$. In fact, for any given $$j,k\in \{1,\ldots , J\}$$, $$j\not =k$$, $$\mathbf{x}\in \mathcal {Q}_T$$, and $$m,r\in {\mathbb {N}}$$, we obtain from (4.10) that $$U^{c_m}_j(\mathbf{y})\ge U^{c_m}_k(\mathbf{y})-c_m$$ for $$\mathbf{y}\in \mathcal {Q}_T$$, and hence

\begin{aligned} \sup _{m>r}\sup _{|\mathbf{y}-\mathbf{x}|<1/r}U^{c_m}_j(\mathbf{y})\ge \sup _{m>r}\sup _{|\mathbf{y}-\mathbf{x}|<1/r}U^{c_m}_k(\mathbf{y})-\sup _{m>r}c_m. \end{aligned}

Letting $$r\rightarrow \infty$$ leads to the fact that $${\overline{U}}_j\ge {\overline{U}}_k$$ for all $$j\not = k$$. The statement for $$\{\underline{U}_j\}$$ can be shown similarly.

Since $${\overline{U}}$$ and $$\underline{U}$$ are clearly bounded and upper and lower semicontinuous, respectively, we now aim to show that $${\overline{U}}$$ and $$\underline{U}$$ are respectively a sub- and supersolution of (3.5). The strong comparison principle then gives $${\overline{U}}\le \underline{U}$$, which implies that $${\overline{U}}= \underline{U}$$ is the unique viscosity solution of (3.5). Uniform convergence on compact sets follows from a variation of Dini’s theorem (see Remark 6.4 in Reference ).

We start by showing that $${\overline{U}}$$ is a subsolution of (3.5). Let $$\phi \in C^{1,2}$$ be such that $${\overline{U}}-\phi$$ has a strict global maximum at $${\hat{\mathbf{x}}}_0\in {\bar{\mathcal {Q}}}_T$$; then there is a sequence $$c_m\rightarrow 0$$ such that for each $$j\in \{1,\ldots , J\}$$, we have $${\hat{\mathbf{x}}}^j_m\rightarrow {\hat{\mathbf{x}}}_0$$, $$U_j^{c_m}({\hat{\mathbf{x}}}^j_m)\rightarrow {\overline{U}}({\hat{\mathbf{x}}}_0)$$, and $$U_j^{c_m}-\phi$$ attains a global maximum at $${\hat{\mathbf{x}}}^j_m$$. Since $$U_j^{c_m}$$ is a subsolution of (3.6) with switching cost $$c_m$$, if $${\hat{\mathbf{x}}}_0\in \{0\}\times {{\mathbb {R}}}^d$$ and $$U_j^{c_m}({\hat{\mathbf{x}}}^j_m)\le g({\hat{\mathbf{x}}}^j_m)$$ for infinitely many m and a fixed j, then it is clear that $${\overline{U}}({\hat{\mathbf{x}}}_0)\le g({\hat{\mathbf{x}}}_0)$$. Therefore, without loss of generality, we assume for all m and j that

\begin{aligned}&\min \bigg [U_j^{c_m}({\hat{\mathbf{x}}}^j_m)-\zeta ({\hat{\mathbf{x}}}^j_m),\; \min \big (\phi _{t}({\hat{\mathbf{x}}}^j_m)-L_\varepsilon ^{\alpha _j} \phi ({\hat{\mathbf{x}}}^j_m)\nonumber \\&\quad -f^{\alpha _j}({\hat{\mathbf{x}}}^j_m,U^{c_m}_{j}({\hat{\mathbf{x}}}^j_m),{\tilde{\sigma }}^{\alpha _j}\cdot D\phi ({\hat{\mathbf{x}}}^j_m),B_\varepsilon ^{\alpha _j} \phi ({\hat{\mathbf{x}}}^j_m)); \nonumber \\&\quad U^{c_m}_{j}({\hat{\mathbf{x}}}^j_m)-\mathcal {M}_jU^{c_m}({\hat{\mathbf{x}}}^j_m)\big )\bigg ]\le 0. \end{aligned}
(4.12)

We have two cases. If there exists $$j\in \{1,\ldots , J\}$$ and a subsequence of $$c_m$$ such that $$U_j^{c_m}({\hat{\mathbf{x}}}^j_m)-\zeta ({\hat{\mathbf{x}}}^j_m)\le 0$$, then by passing to the limit $$m\rightarrow \infty$$, we have $${\overline{U}}({\hat{\mathbf{x}}}_0)-\zeta ({\hat{\mathbf{x}}}_0)\le 0$$. Otherwise, by passing to subsequence, without loss of generality we can assume $$U_j^{c_m}({\hat{\mathbf{x}}}^j_m)-\zeta ({\hat{\mathbf{x}}}^j_m)>0$$ holds for all j and m. Then for each $$m\in {\mathbb {N}}$$, we can choose $$j_m\in \{1,\ldots , J\}$$ and $${\hat{\mathbf{x}}}^{j_m}_m$$ such that

\begin{aligned} (U_{j_m}^{c_m}-\phi )({\hat{\mathbf{x}}}^{j_m}_m)=\max _{j=1,\ldots , J}(U_{j}^{c_m}-\phi )({\hat{\mathbf{x}}}^{j}_m)=\max _{j=1,\ldots , J}\max _{\mathbf{x}}(U_{j}^{c_m}-\phi )(\mathbf{x}), \end{aligned}

and deduce from (4.12) that

\begin{aligned} \min \big (\phi _{t}({\hat{\mathbf{x}}}^{j_m}_m)-L_\varepsilon ^{\alpha _{j_m}} \phi ({\hat{\mathbf{x}}}^{j_m}_m)-&f^{\alpha _{j_m}}({\hat{\mathbf{x}}}^{j_m}_m,U^{c_m}_{{j_m}}({\hat{\mathbf{x}}}^{j_m}_m),{\tilde{\sigma }}^{\alpha _{j_m}}\cdot D\phi ({\hat{\mathbf{x}}}^{j_m}_m),B_\varepsilon ^{\alpha _{j_m}}\phi ({\hat{\mathbf{x}}}^{j_m}_m)); \nonumber \\&U^{c_m}_{{j_m}}({\hat{\mathbf{x}}}^{j_m}_m)-\mathcal {M}_{j_m}U^{c_m}({\hat{\mathbf{x}}}^{j_m}_m)\big )\le 0. \end{aligned}
(4.13)

Our choice of $$j_m$$ implies $$(U_{j_m}^{c_m}-\phi )({\hat{\mathbf{x}}}^{j_m}_m)\ge (U_{k}^{c_m}-\phi )({\hat{\mathbf{x}}}^{j_m}_m)$$ for all $$k\not = j_m$$, and thus $$U^{c_m}_{{j_m}}({\hat{\mathbf{x}}}^{j_m}_m)>\mathcal {M}_{j_m}U^{c_m}({\hat{\mathbf{x}}}^{j_m}_m)$$. Consequently we obtain from (4.13) that

\begin{aligned} \phi _{t}({\hat{\mathbf{x}}}^{j_m}_m)-L_\varepsilon ^{\alpha _{j_m}} \phi ({\hat{\mathbf{x}}}^{j_m}_m)-f^{\alpha _{j_m}}({\hat{\mathbf{x}}}^{j_m}_m,U^{c_m}_{{j_m}}({\hat{\mathbf{x}}}^{j_m}_m),{\tilde{\sigma }}^{\alpha _{j_m}}\cdot D\phi ({\hat{\mathbf{x}}}^{j_m}_m),B_\varepsilon ^{\alpha _{j_m}} \phi ({\hat{\mathbf{x}}}^{j_m}_m))\le 0. \end{aligned}

Since there are only finitely many choices of $$j_m$$, by passing to a further subsequence if necessary, we can assume that $$j_m\equiv j_0$$ for some fixed $$j_0$$; then letting $$m\rightarrow \infty$$ and using the continuity of the equation, we have

\begin{aligned} \phi _{t}({\hat{\mathbf{x}}}_0)-L_\varepsilon ^{\alpha _{j_0}} \phi ({\hat{\mathbf{x}}}_0)-f^{\alpha _{j_0}}({\hat{\mathbf{x}}}_0,{\overline{U}}({\hat{\mathbf{x}}}_0),{\tilde{\sigma }}^{\alpha _{j_0}}\cdot D\phi ({\hat{\mathbf{x}}}_0),B_\varepsilon ^{\alpha _{j_0}} \phi ({\hat{\mathbf{x}}}_0))\le 0. \end{aligned}

Since $$\alpha _{j_0}\in \mathbf{A }_\delta$$ is an admissible control, we obtain

\begin{aligned} \min _{\alpha \in \mathbf{A }_\delta }\big \{\phi _{t}({\hat{\mathbf{x}}}_0)-L_\varepsilon ^{\alpha } \phi ({\hat{\mathbf{x}}}_0)-f^{\alpha }({\hat{\mathbf{x}}}_0,{\overline{U}}({\hat{\mathbf{x}}}_0),{\tilde{\sigma }}^{\alpha }\cdot D\phi ({\hat{\mathbf{x}}}_0),B_\varepsilon ^\alpha \phi ({\hat{\mathbf{x}}}_0))\big \}\le 0, \end{aligned}

and conclude that $${\overline{U}}$$ is a subsolution of (3.5).

We now proceed to show that $$\underline{U}$$ is a supersolution. If $$\phi \in C^{1,2}$$ and $$\underline{U}-\phi$$ has a strict global minimum at $${\hat{\mathbf{x}}}_0\in {\bar{\mathcal {Q}}}_T$$, then for any given $$j\in \{1,\ldots , J\}$$, there are sequences $$c_m\rightarrow 0$$ and $${\hat{\mathbf{x}}}_m\rightarrow {\hat{\mathbf{x}}}_0$$ such that $$U_j^{c_m}({\hat{\mathbf{x}}}_m)\rightarrow \underline{U}({\hat{\mathbf{x}}}_0)$$ and $$U_j^{c_m}-\phi$$ attains a global minimum at $${\hat{\mathbf{x}}}_m$$. Using the fact that $$U_j^{c_m}$$ is a supersolution to (3.6), we have (by ignoring the term $$U^{c_m}_{j}({\hat{\mathbf{x}}}_m)-\mathcal {M}_jU^{c_m}({\hat{\mathbf{x}}}_m)$$):

\begin{aligned}&\min \bigg [U_j^{c_m}({\hat{\mathbf{x}}}_m)-\zeta ({\hat{\mathbf{x}}}_m),\; \phi _{t}({\hat{\mathbf{x}}}_m)-L_\varepsilon ^{\alpha _j} \phi ({\hat{\mathbf{x}}}_m)\\&\quad -f^{\alpha _j}({\hat{\mathbf{x}}}_m,U^{c_m}_{j}({\hat{\mathbf{x}}}_m),{\tilde{\sigma }}^{\alpha _j}\cdot D\phi ({\hat{\mathbf{x}}}_m),B_\varepsilon ^{\alpha _j} \phi ({\hat{\mathbf{x}}}_m))\bigg ]\ge 0, \end{aligned}

and passing to the limit $$m\rightarrow \infty$$ enables us to conclude that for any $$j\in \{1,\ldots , J\}$$,

\begin{aligned}&\min \bigg [\underline{U}({\hat{\mathbf{x}}}_0)-\zeta ({\hat{\mathbf{x}}}_0),\; \phi _{t}({\hat{\mathbf{x}}}_0)-L_\varepsilon ^{\alpha _j} \phi ({\hat{\mathbf{x}}}_0)\\&\quad -f^{\alpha _j}({\hat{\mathbf{x}}}_0,\underline{U}({\hat{\mathbf{x}}}_0),{\tilde{\sigma }}^{\alpha _j}\cdot D\phi ({\hat{\mathbf{x}}}_0),B_\varepsilon ^{\alpha _j} \phi ({\hat{\mathbf{x}}}_0))\bigg ]\ge 0, \end{aligned}

which completes our proof. $$\square$$

### 4.3 General Discrete Approximation to the Switching System

In this section, we establish the convergence of the piecewise constant policy approximation of (3.10) to the solution of the switching system (3.6). We will first summarize all the required conditions to guarantee the convergence, and perform the analysis under these assumptions. Then we will demonstrate in Sect. 4.4 that these conditions are in fact satisfied by the numerical scheme (3.18) proposed in Sect. 3.

We assume the scheme (3.10) satisfies the following conditions introduced in Reference :

### Condition 1

1. (1)

(Positive interpolation.) Let $${\tilde{U}}^n_{k,i(j)}$$ be the interpolant of the k-th grid onto the i-th point $$\mathbf{x}_{j,i}^n$$ of the j-th grid, and $$N^k(j,i,n)$$ be the neighbours of the point $$\mathbf{x}_{j,i}^n$$ on the k-th grid $$\Omega _{k,h}$$. Then there exist weights $$\{\omega ^n_{k,i(j),a}\}_{a\in N^k(j,i,n)}$$ satisfying $$\omega ^n_{k,i(j),a}\ge 0$$ and $$\sum _{a\in N^k(j,i,n)}\omega ^n_{k,i(j),a}=1$$, such that we can write

\begin{aligned} {\tilde{U}}^n_{k,i(j)}=\sum _{a\in N^k(j,i,n)}\omega ^n_{k,i(j),a} U^n_{k,a}. \end{aligned}
(4.14)
2. (2)

(Weak monotonicity.) The scheme (3.10) is monotone with respect to $$U^n_{j,i}$$ and $${\tilde{U}}^{n}_{k,i(j)}$$, i.e., if

\begin{aligned} V^n_{j,i}\ge U^n_{j,i}, \quad \forall (i,j,n); \quad {\tilde{V}}^{n}_{k,i(j)}\ge {\tilde{U}}^{n}_{k,i(j)}, \quad \forall (i,k,n), \end{aligned}

then we have

\begin{aligned}&G_j(\mathbf{x}^n_{j,i},h,U^{n+1}_{j,i}, \{V^{b+1}_{j,a}\}_{{(a,b)\!\not =\!(i,n)}}, \{ {\tilde{V}}^n_k\}_{k\not =j})\nonumber \\&\quad \le G_j(\mathbf{x}^n_{j,i},h,U^{n+1}_{j,i}, \{U^{b+1}_{j,a}\}_{(a,b)\!\not =\!(i,n)}, \{ {\tilde{U}}^n_k\}_{k\not =j}). \end{aligned}
(4.15)
3. (3)

($$\ell ^\infty$$ stability.) The solution $$U^{n+1}_{j,i}$$ of the scheme (3.10) exists and is bounded uniformly in h and c.

4. (4)

(Consistency.) Let $$\varepsilon ,\delta , c$$ be fixed. For any test functions $$\phi _j\in C^{1,2}({\bar{\mathcal {Q}}}_T)$$ and continuous $$\varphi _k$$, there exist functions $$\omega _1(h)$$ and $$\omega _2(\xi )$$, possibly depending on $$\varepsilon$$, such that $$\omega _1(h)\rightarrow 0$$ as $$h\rightarrow 0$$, $$\omega _2(\xi )\rightarrow 0$$ as $$\xi \rightarrow 0$$, and

\begin{aligned} \begin{aligned}&|G_j(\mathbf{x}^{n+1}_{j,i},h,\phi ^{n+1}_{j,i}+\xi , \{\phi ^{b+1}_{j,a}+\xi \}_{(a,b)\!\not =\!(i,n)}, \{{\tilde{\varphi }}^n_k\}_{k\not =j})\\&\quad -F^{\varepsilon ,\delta , c}_j(\mathbf{x}^{n+1}_{j,i},\phi _j(\mathbf{x}^{n+1}_{j,i}), D\phi _j(\mathbf{x}^{n+1}_{j,i}), D^2\phi _j(\mathbf{x}^{n+1}_{j,i}),\{K_\varepsilon ^\alpha \phi _j(\mathbf{x}^{n+1}_{j,i})\}_{\alpha \in \mathbf{A }_\delta },\\&\quad \{B^\alpha _\varepsilon \phi _j(\mathbf{x}^{n+1}_{j,i})\}_{\alpha \in \mathbf{A }_\delta },\{{\tilde{\varphi }}_k(\mathbf{x}^{n}_{j,i})\}_{k\not =j})|\le \omega _1(h)+\omega _2(\xi ). \end{aligned} \end{aligned}
(4.16)

### Remark 5

As pointed out in Reference , Condition 1 (1)–(2) are weaker than the standard condition that the scheme is monotone in $$U^n_{k,\alpha }$$ (see e.g. ). By only requiring that the interpolation has positive coefficients and that the numerical scheme is monotone in the interpolant $${\tilde{U}}^n_{k,\alpha }$$, we allow the use of high-order nonlinear interpolations among different grids (e.g., the monotonicity preserving interpolations in Reference ).

Also note the contrast to the linear interpolant (3.11) used in (3.12) and (3.13) for the construction of a monotone approximation to the integral operators.
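As a concrete illustration of the positive interpolation property (4.14), the following minimal Python sketch (our own toy setup with a uniform 1D grid; all names are illustrative, not the paper's multi-grid construction) builds piecewise-linear weights that are nonnegative and sum to one:

```python
# A minimal 1D illustration of the positive interpolation property (4.14).
# The uniform grid {a*h} and all names are our own toy assumptions.

def linear_weights(x, h):
    """Left/right neighbour indices of x on the grid {a*h} with their
    piecewise-linear weights: nonnegative and summing to one."""
    a = int(x // h)
    t = (x - a * h) / h          # relative position in [0, 1)
    return [(a, 1.0 - t), (a + 1, t)]

def interpolate(U, x, h):
    """Positive-coefficient interpolant: sum_a omega_a * U_a, cf. (4.14)."""
    return sum(w * U[a] for a, w in linear_weights(x, h))

h = 0.5
U = {a: (a * h) ** 2 for a in range(10)}    # grid samples of u(x) = x^2

ws = linear_weights(1.3, h)
assert all(w >= 0.0 for _, w in ws)                 # positivity
assert abs(sum(w for _, w in ws) - 1.0) < 1e-12     # partition of unity
assert abs(interpolate(U, 1.3, h) - 1.75) < 1e-12   # 0.4*1.0 + 0.6*2.25
```

Because the weights are nonnegative, raising any grid value $$U_a$$ can only raise the interpolant, which is exactly the monotonicity exploited in (4.15).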

We now present the convergence of the discrete approximation to the switching system.

### Theorem 4.7

Under Assumption 2.1, the solution to any scheme of the form (3.10) satisfying Condition 1 converges to the viscosity solution of (3.6) uniformly on bounded domains.

The proof is essentially the same as that in Reference  and is omitted. We remark that in the proof, we construct the solution of the switching system directly from the numerical solutions. Since the solution of the scheme (3.10) is uniformly bounded, Theorems 4.5 and 4.7 immediately give the existence and uniqueness of a bounded viscosity solution to the switching system (3.6).

### Corollary 4.8

Under Assumption 2.1 and the existence of a scheme satisfying Condition 1, the switching system (3.6) admits a unique viscosity solution bounded uniformly in c.

### 4.4 A Specific Implicit Scheme for the Switching System

In this section, we analyze the implicit scheme (3.18) and demonstrate that it satisfies Condition 1, which subsequently implies its convergence to the switching system.

The following estimates are essential for our consistency and stability analysis.

### Lemma 4.9

Under Assumption 2.1, there exists C independent of $$h,k,\varepsilon , \delta$$ such that, for any test function $$\phi _j\in C^{1,2}({\bar{\mathcal {Q}}}_T)$$ and any $$\varepsilon <1$$,

\begin{aligned}&|A^{\alpha }_{\varepsilon ,h,k}\phi ^{n+1}_{j,i}+K^{\alpha ,1}_{\varepsilon ,h}\phi ^{n+1}_{j,i} -A_\varepsilon ^{\alpha } \phi _j(\mathbf{x}_{j,i}^{n+1})\\&\quad -K_\varepsilon ^{\alpha } \phi _j(\mathbf{x}_{j,i}^{n+1})|\le C \left( \frac{h^2}{k^2}+\frac{h^2}{\varepsilon ^2}+\omega (\mathbf{x}^{n+1}_{j,i},k)\right) ,\\&|B^{\alpha }_{\varepsilon ,h}\phi ^{n+1}_{j,i}-B_\varepsilon ^{\alpha }\phi (\mathbf{x}^{n+1}_{j,i})|\le C\frac{h^2}{\varepsilon }. \end{aligned}

Here $$\omega (\mathbf{x}^{n+1}_{j,i},k)$$ is a function such that $$\omega (\cdot , k)\rightarrow 0$$ as $$k\rightarrow 0$$, uniformly on compact neighbourhoods of $$\mathbf{x}^{n+1}_{j,i}$$.

### Proof

We first derive the estimate for $$B^{\alpha }_{\varepsilon ,h}\phi ^{n+1}_{j,i}$$. It follows from $$|\eta ^\alpha |\le C$$ and the definitions of $$B^{\alpha }_{\varepsilon ,h}\phi$$ and $$B_\varepsilon ^{\alpha }\phi$$ that

\begin{aligned}&|B^{\alpha }_{\varepsilon ,h}\phi ^{n+1}_{j,i}-B_\varepsilon ^{\alpha }\phi (\mathbf{x}^{n+1}_{j,i})|\\&\quad \le \int _{|e|\ge \varepsilon }|\mathcal {I}_h[\phi (t^{n+1},x_{j,i}+\cdot )](\eta ^\alpha (x_{j,i},e))\\&\quad -\phi (t^{n+1},x_{j,i}+\eta ^\alpha (x_{j,i},e))| \gamma (x_{j,i},e)\,\nu (de)\\&\quad \le Ch^2|D^2\phi |_{B(\mathbf{x}^{n+1}_{j,i},C)}\int _{|e|\ge \varepsilon }(1\wedge |e|)\,\nu (de)\le C\frac{h^2}{\varepsilon }, \end{aligned}

where we have used the fact that $$|\mathcal {I}_h[\phi ]-\phi |_{B(\mathbf{x}^{n+1}_{j,i},C)}\le C|D^2\phi |_{B(\mathbf{x}^{n+1}_{j,i},C)}h^2$$. Similar arguments give us that $$|K^{\alpha ,1}_{\varepsilon ,h}\phi ^{n+1}_{j,i}-K^{\alpha ,1}_{\varepsilon }\phi (\mathbf{x}^{n+1}_{j,i})|\le Ch^2|D^2\phi |_{B(\mathbf{x}^{n+1}_{j,i},C)}\int _{|e|\ge \varepsilon }\,\nu (de)\le C\frac{h^2}{\varepsilon ^2}$$.

We then infer from Taylor’s theorem with an integral remainder that the truncation errors of the local terms can be bounded by

\begin{aligned}&|A^{\alpha }_{\varepsilon ,h,k}\phi ^{n+1}_{j,i}-A^{\alpha }_{\varepsilon }\phi (\mathbf{x}^{n+1}_{j,i})-b^\alpha _{\varepsilon }(x_{j,i})\cdot D\phi (\mathbf{x}^{n+1}_{j,i})|\\&\quad \le C|D^2\phi |_{B(\mathbf{x}^{n+1}_{j,i},C)}\frac{h^2}{k^2}+\omega (\mathbf{x}^{n+1}_{j,i},k) \end{aligned}

for some function $$\omega (\mathbf{x}^{n+1}_{j,i},k)$$ such that $$\omega (\cdot , k)\rightarrow 0$$ as $$k\rightarrow 0$$ uniformly on compact neighbourhoods of $$\mathbf{x}^{n+1}_{j,i}$$, which enables us to deduce that

\begin{aligned}&|A^{\alpha }_{\varepsilon ,h,k}\phi ^{n+1}_{j,i}+K^{\alpha ,1}_{\varepsilon ,h}\phi ^{n+1}_{j,i} -A^{\alpha }_\varepsilon \phi _j(\mathbf{x}_{j,i}^{n+1})-K^{\alpha } _\varepsilon \phi _j(\mathbf{x}_{j,i}^{n+1})|\\&\quad \le |A^{\alpha }_{\varepsilon ,h,k}\phi ^{n+1}_{j,i}-A^{\alpha }_{\varepsilon }\phi (\mathbf{x}^{n+1}_{j,i})-b^\alpha _{\varepsilon }(x_{j,i})\cdot D\phi (\mathbf{x}^{n+1}_{j,i})|+|K^{\alpha ,1}_{\varepsilon ,h}\phi ^{n+1}_{j,i}-K^{\alpha ,1}_{\varepsilon }\phi (\mathbf{x}^{n+1}_{j,i})|\\&\quad \le C\left( \frac{h^2}{k^2}+\frac{h^2}{\varepsilon ^2}+\omega (\mathbf{x}^{n+1}_{j,i},k)\right) . \end{aligned}

$$\square$$

### Lemma 4.10

Under Assumption 2.1 there exists C independent of $$h,k,\varepsilon , \delta$$ such that for all $$\varepsilon <1$$

\begin{aligned} \sum _{m\not =0}\kappa ^{\alpha ,n}_{h,m,i}\le \frac{C}{h\varepsilon }\wedge \frac{1}{\varepsilon ^2},\quad \sum _{m\not =0}\beta ^{\alpha ,n}_{h,m,i}\le \frac{C}{h}\wedge \frac{1}{\varepsilon }, \quad \sum _{m\in {\mathbb {Z}}^d}d^{\alpha ,n}_{h,k,m,i} \le \frac{C}{k^2}, \end{aligned}

where $$\kappa ^{\alpha ,n}_{h,m,i}$$, $$\beta ^{\alpha ,n}_{h,m,i}$$, and $$d^{\alpha ,n}_{h,k,m,i}$$ are defined in (3.14) and (3.16), respectively.

### Proof

We shall only prove the estimate for $$\kappa ^{\alpha ,n}_{h,m,i}$$, since the estimate for $$\beta ^{\alpha ,n}_{h,m,i}$$ follows from a similar argument, and the estimate for $$d^{\alpha ,n}_{h,k,m,i}$$ follows directly from the fact that $$\sum _m \omega _m=1$$.

The definition of $$\kappa ^{\alpha ,n}_{h,m,i}$$ and the integrability property (2.1) of $$\nu$$ imply that

\begin{aligned} \sum _{m\not =0}\kappa ^{\alpha ,n}_{h,m,i}&=\sum _{m\not =0}\int _{|e|>\varepsilon } \omega _m(\eta ^\alpha (x_i,e);h)\,\nu (de)\\&=\sum _{m\not =0}\int _{|e|>\varepsilon } \omega _m(\eta ^\alpha (x_i,e);h)1_{\{\eta ^\alpha (x_i,e)\in \text {supp}\,\omega _m\}}\,\nu (de)\\&=\sum _{m\not =0}\int _{|e|>\varepsilon } \big (\omega _m(\eta ^\alpha (x_i,e);h)-\omega _m(0;h)\big )1_{\{\eta ^\alpha (x_i,e)\in \text {supp}\,\omega _m\}}\,\nu (de)\\&\le \int _{|e|>\varepsilon }\sum _{m\not =0} |D\omega _m|_0|\eta ^\alpha (x_i,e)|1_{\{\eta ^\alpha (x_i,e)\in \text {supp}\,\omega _m\}}\,\nu (de)\\&\le \frac{C}{h}\int _{|e|>\varepsilon }(1\wedge |e|)\,\nu (de)\\&\le \frac{C}{h}\int _{|e|>\varepsilon }\frac{1\wedge |e|}{\varepsilon }(1\wedge |e|)\,\nu (de)=\frac{C}{h\varepsilon }\int _{|e|>\varepsilon }(1\wedge |e|^2)\,\nu (de)\le \frac{C}{h\varepsilon }. \end{aligned}

Alternatively, it follows directly from the identity $$\sum _{m\in {\mathbb {Z}}^d}\omega _m(\cdot ;h)\equiv 1$$ that

\begin{aligned}&\sum _{m\not =0}\kappa ^{\alpha ,n}_{h,m,i}=\sum _{m\not =0}\int _{|e|>\varepsilon } \omega _m(\eta ^\alpha (x_i,e);h)\,\nu (de)\\&\quad \le \int _{|e|>\varepsilon }\,\nu (de)\le \frac{1}{\varepsilon ^2}\int _{|e|>\varepsilon }(1\wedge |e|^2)\,\nu (de), \end{aligned}

which leads us to the desired estimates. $$\square$$

### Remark 6

Since we have not used any information on the exact behavior of the singular measure $$\nu$$ around zero, the estimates for the nonlocal terms in Lemmas 4.9 and 4.10 are not optimal for many specific cases. If one can estimate upper bounds of the density of the Lévy measure, or equivalently estimate the (pseudo-differential) orders of the nonlocal operators $$K^\alpha$$ and $$B^\alpha$$, more precise results for the truncation error of the singular measure can be deduced (Reference ).

The next lemma presents some important properties of the Lax–Friedrichs numerical flux for Lipschitz continuous Hamiltonians, which are crucial for our subsequent analysis. We refer readers to Reference  for a proof of these statements.

### Lemma 4.11

Let $${\tilde{f}}$$ be as in (3.17) and $$(\mathbf{x}^{n}_{j,i}, u, k)\in \Omega _{j,h}\times {{\mathbb {R}}}\times {{\mathbb {R}}}$$, and suppose Assumption 2.1 and the condition $$\theta >C\lambda$$ hold, where C is the Lipschitz constant of the Hamiltonian $${\bar{f}}$$. Then the following hold:

1. (1)

(Consistency.) For any test functions $$\phi \in C^{1,2}([0,T]\times {{\mathbb {R}}}^d)$$, we have

\begin{aligned} |{\tilde{f}}^{\alpha }(\mathbf{x}^{n}_{j,i},u, \Delta \phi ^{n}_{j,i},k)-{\bar{f}}^{\alpha }(\mathbf{x}_{{j,i}}^{{n}},u, D\phi (\mathbf{x}^{n}_{j,i}),k)|\le Ch^2/\Delta t. \end{aligned}
2. (2)

(Monotonicity.) If $$V^n_{j,i}\ge U^n_{j,i}$$ for all $$i,j,n$$, then we have

\begin{aligned} \Delta t{\tilde{f}}^{\alpha }(\mathbf{x}^{n}_{j,i},u, \Delta V^{n}_{j,i},k)+2d\theta V_{j,i}^n\ge \Delta t{\tilde{f}}^{\alpha }(\mathbf{x}^{n}_{j,i},u, \Delta U^{n}_{j,i},k)+2d\theta U_{j,i}^n. \end{aligned}
3. (3)

(Stability.) For any bounded functions U and V, we have

\begin{aligned}&|(\Delta t{\tilde{f}}^{\alpha }(\mathbf{x}^{n}_{j,i},u, \Delta V^{n}_{j,i},k)+2d\theta V_{j,i}^n)\\&\quad -\,( \Delta t{\tilde{f}}^{\alpha }(\mathbf{x}^{n}_{j,i},u, \Delta U^{n}_{j,i},k)+2d\theta U_{j,i}^n)| \le 2d\theta |U-V|_0. \end{aligned}
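To make the role of the condition $$\theta >C\lambda$$ concrete, here is a minimal 1D sketch (our own construction with $$d=1$$ and $$\lambda =\Delta t/h$$; the paper's exact flux (3.17) may differ in details): a central difference plus numerical viscosity $$\theta$$, for which the combination $$\Delta t{\tilde{f}}+2d\theta \phi _i$$ from Lemma 4.11 (2) is nondecreasing in the neighbour values once $$\theta$$ dominates $$C\lambda$$.

```python
import math

# A 1D sketch (d = 1, lam = dt/h) of a Lax-Friedrichs-type flux: central
# difference plus numerical viscosity theta. The paper's exact flux (3.17)
# may differ; fbar = sin is an illustrative Hamiltonian with Lipschitz
# constant C = 1.

def lf_term(fbar, phi_m, phi_i, phi_p, h, dt, theta):
    """Return dt*ftilde + 2*d*theta*phi_i, the combination appearing in
    Lemma 4.11 (2)-(3), here with d = 1."""
    central = (phi_p - phi_m) / (2.0 * h)
    ftilde = fbar(central) + (theta / dt) * (phi_p - 2.0 * phi_i + phi_m)
    return dt * ftilde + 2.0 * theta * phi_i

h, dt = 0.1, 0.05        # lam = dt/h = 0.5
theta = 0.6              # theta > C*lam, so the combination is monotone

# raising any neighbour value never decreases the combination
base = lf_term(math.sin, 0.3, -0.1, 0.7, h, dt, theta)
assert lf_term(math.sin, 0.3, -0.1, 0.9, h, dt, theta) >= base
assert lf_term(math.sin, 0.5, -0.1, 0.7, h, dt, theta) >= base
```

Note that the two occurrences of $$\phi _i$$ cancel in the combination, so monotonicity in $$\phi _{i\pm 1}$$ only requires the viscosity term to dominate the Lipschitz variation of fbar, i.e. $$\theta$$ large relative to $$C\lambda$$.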

### Proposition 4.12

Suppose Assumption 2.1, the positive interpolation property in Condition 1 and the condition $$\theta >C\lambda$$ hold. Then we have the following:

1. (1)

There exists a unique bounded solution $$U^{n}$$ of the scheme (3.18).

2. (2)

The scheme is $$\ell ^\infty$$ stable and weakly monotone. It is consistent with the switching system (3.6) provided $$h^2/\Delta t\rightarrow 0$$ and $$h/k\rightarrow 0$$ as $$h,k,\Delta t \rightarrow 0$$ ($$\varepsilon$$ is fixed here).

### Proof

We start by establishing the existence and uniqueness of a bounded solution of (3.18), as claimed in (1), by an induction argument over the time steps. It is clear that the statement holds at $$t^0=0$$ since $$U^0=g$$ is bounded. Now assume that $$\{U_j^{n-1}\}_{j=1}^J$$ are bounded functions on $$h{\mathbb {Z}}^d$$ and consider the time point $$t^n$$. The positive interpolation property implies that the interpolation step among different grids does not increase the $$\ell ^\infty$$ norm of the solution, and hence $$U_j^{n-\frac{1}{2}}$$ is bounded for each $$j=1,\ldots , J$$.

For each $$\rho >0$$ and $$j=1,\ldots , J$$, we define the operator $$\mathcal {P}$$, acting on bounded functions $$U_j^n$$ on $$h{\mathbb {Z}}^d$$, by

\begin{aligned} \mathcal {P}U_{j,i}^n=U_{j,i}^n-\rho \cdot (\text {left-hand side of } (3.18)), \quad i\in {\mathbb {Z}}^d, \end{aligned}

with a given function $$U^{n-\frac{1}{2}}_j$$. By virtue of the fact that fixed points of the equation $$\mathcal {P}U_{j}^n=U_j^n$$ are precisely the solutions to (3.18), it suffices to establish that for small enough $$\rho$$, the operator $$\mathcal {P}$$ is a contraction on $$\ell ^\infty ({\mathbb {Z}}^d)$$, i.e., the Banach space of bounded functions on $$h{\mathbb {Z}}^d$$ equipped with the sup-norm, which along with the contraction mapping theorem leads to the desired results. (Similar contraction operators have been introduced in References [7, 13] to demonstrate the well-posedness of their numerical schemes.)
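The contraction argument can be mimicked numerically. The following sketch (a toy scalar stand-in for (3.18); the operator S and all constants are our own illustrative choices, not the paper's scheme) iterates $$\mathcal {P}U=U-\rho S(U)$$ and converges to the fixed point for small $$\rho$$:

```python
# A toy scalar stand-in for the damped operator P: one implicit step is
# written abstractly as S(u) = 0 and solved by iterating P(u) = u - rho*S(u),
# which is a contraction for small rho. S and all constants are illustrative.

def solve_implicit(S, u0, rho, tol=1e-12, max_iter=10_000):
    """Banach fixed-point iteration for P(u) = u - rho*S(u)."""
    u = u0
    for _ in range(max_iter):
        u_next = u - rho * S(u)
        if abs(u_next - u) < tol:
            return u_next
        u = u_next
    raise RuntimeError("fixed-point iteration did not converge")

dt = 0.1
S = lambda u: (1.0 + 2.0 * dt) * u - 1.0   # toy implicit Euler step
u = solve_implicit(S, u0=0.0, rho=0.5)     # contraction factor |1 - 1.2*rho| = 0.4
assert abs(u - 1.0 / (1.0 + 2.0 * dt)) < 1e-10
```

The estimate below plays the role of bounding the contraction factor (the analogue of $$|1-1.2\rho |$$ here) uniformly over the grid.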

For any bounded functions $$U_j^n$$ and $$V_j^n$$, the definitions of $$\mathcal {P}$$, $$A^{\alpha }_{\varepsilon ,h,k}$$ and $$K^{\alpha ,1}_{\varepsilon ,h}$$ give that

\begin{aligned}&\mathcal {P}U_{j,i}^n-\mathcal {P}V_{j,i}^n\nonumber \\ \le&(1-\rho )(U_{j,i}^n-V_{j,i}^n)+\rho \Delta t \bigg [\sum _{m\in {\mathbb {Z}}^d}d^{\alpha ,n}_{h,k,m,i}[(U_{j,m}^n-V_{j,m}^n)-(U_{j,i}^n-V_{j,i}^n)]\nonumber \\&+ \sum _{m\not =0}\kappa ^{\alpha ,n}_{h,m,i}[(U_{j,i+m}^n-V_{j,i+m}^n)-(U_{j,i}^n-V_{j,i}^n)]\nonumber \\&+{\tilde{f}}^{\alpha }(\mathbf{x}^{n}_{j,i},U^{n}_{j,i},\Delta U^{n}_{j,i}, B^{\alpha }_{\varepsilon ,h}U^{n}_{j,i}) -{\tilde{f}}^{\alpha }(\mathbf{x}^{n}_{j,i},V^{n}_{j,i},\Delta V^{n}_{j,i},B^{\alpha }_{\varepsilon ,h}V^{n}_{j,i}) \bigg ]\nonumber \\ \le&(1-\rho -\rho \Delta t \sum _{m\in {\mathbb {Z}}^d}d^{\alpha ,n}_{h,k,m,i}-\rho \Delta t \sum _{m\not =0}\kappa ^{\alpha ,n}_{h,m,i})(U_{j,i}^n-V_{j,i}^n)\nonumber \\&\quad +\rho \Delta t( \sum _{m\in {\mathbb {Z}}^d}d^{\alpha ,n}_{h,k,m,i}+\sum _{m\not =0}\kappa ^{\alpha ,n}_{h,m,i})|U^{n}_{j}-V^{n}_{j}|_0\nonumber \\&+\rho \Delta t \big ( {\tilde{f}}^{\alpha }(\mathbf{x}^{n}_{j,i},U^{n}_{j,i},\Delta U^{n}_{j,i}, B^{\alpha }_{\varepsilon ,h}U^{n}_{j,i}) -{\tilde{f}}^{\alpha }(\mathbf{x}^{n}_{j,i},V^{n}_{j,i},\Delta U^{n}_{j,i}, B^{\alpha }_{\varepsilon ,h}U^{n}_{j,i}) \big ) \end{aligned}
(4.17)
\begin{aligned}&+\rho \Delta t \big ( {\tilde{f}}^{\alpha }(\mathbf{x}^{n}_{j,i},V^{n}_{j,i},\Delta U^{n}_{j,i}, B^{\alpha }_{\varepsilon ,h}U^{n}_{j,i}) -{\tilde{f}}^{\alpha }(\mathbf{x}^{n}_{j,i},V^{n}_{j,i},\Delta U^{n}_{j,i}, B^{\alpha }_{\varepsilon ,h}V^{n}_{j,i}) \big ) \end{aligned}
(4.18)
\begin{aligned}&+\rho \Delta t \big ( {\tilde{f}}^{\alpha }(\mathbf{x}^{n}_{j,i},V^{n}_{j,i},\Delta U^{n}_{j,i}, B^{\alpha }_{\varepsilon ,h}V^{n}_{j,i}) -{\tilde{f}}^{\alpha }(\mathbf{x}^{n}_{j,i},V^{n}_{j,i},\Delta V^{n}_{j,i},B^{\alpha }_{\varepsilon ,h}V^{n}_{j,i})\big ). \end{aligned}
(4.19)

It remains to estimate (4.17), (4.18) and (4.19). Lemma 4.11 (3) enables us to bound (4.19) by $$-\rho 2d\theta (U^{n}_{j,i}-V^{n}_{j,i})+\rho 2d\theta |U^{n}_{j}-V^{n}_{j}|_0.$$ We then derive upper bounds for (4.17) and (4.18) depending on whether $$U^n_{j,i}-V^n_{j,i}$$ or $$B^{\alpha }_{\varepsilon ,h}U^{n}_{j,i}-B^{\alpha }_{\varepsilon ,h}V^{n}_{j,i}$$ is positive. If $$U^n_{j,i}-V^n_{j,i}>0$$, the monotonicity of f in y implies that (4.17) is bounded above by $$-\rho \Delta tC(U^n_{j,i}-V^n_{j,i})$$, while if $$U^n_{j,i}-V^n_{j,i}<0$$, the Lipschitz continuity of f in y enables us to bound (4.17) by $$\rho \Delta tC|U^n_{j,i}-V^n_{j,i}|=-\rho \Delta tC(U^n_{j,i}-V^n_{j,i})$$.

We then discuss the sign of $$B^{\alpha }_{\varepsilon ,h}U^{n}_{j,i}-B^{\alpha }_{\varepsilon ,h}V^{n}_{j,i}$$. Suppose $$B^{\alpha }_{\varepsilon ,h}U^{n}_{j,i}-B^{\alpha }_{\varepsilon ,h}V^{n}_{j,i}<0$$, then we obtain from the monotonicity of f in k that (4.18)$$\le 0$$. Consequently we obtain that

\begin{aligned} \mathcal {P}U_{j,i}^n-\mathcal {P}V_{j,i}^n \le&(1-\rho -\rho \Delta t \sum _{m\in {\mathbb {Z}}^d}d^{\alpha ,n}_{h,k,m,i}\nonumber \\&-\rho \Delta t \sum _{m\not =0}\kappa ^{\alpha ,n}_{h,m,i}-\rho \Delta t C-\rho 2d\theta )(U_{j,i}^n-V_{j,i}^n)\nonumber \\&+\rho (\Delta t \sum _{m\in {\mathbb {Z}}^d}d^{\alpha ,n}_{h,k,m,i}+\Delta t\sum _{m\not =0}\kappa ^{\alpha ,n}_{h,m,i}+ 2d\theta )|U^{n}_{j}-V^{n}_{j}|_0\nonumber \\ \le&(1-\rho -\rho \Delta t C)|U_{j}^n-V_{j}^n|_0, \end{aligned}
(4.20)

provided that $$1-\rho (1+2d\theta )-\rho \Delta t\big ( \sum _{m\in {\mathbb {Z}}^d}d^{\alpha ,n}_{h,k,m,i}+\sum _{m\not =0}\kappa ^{\alpha ,n}_{h,m,i}+ C\big )>0$$, which is satisfied for small enough $$\rho$$.

On the other hand, if $$B^{\alpha }_{\varepsilon ,h}U^{n}_{j,i}-B^{\alpha }_{\varepsilon ,h}V^{n}_{j,i}>0$$, the Lipschitz continuity of f in k enables us to bound (4.18) by $$C (B^{\alpha }_{\varepsilon ,h}U^{n}_{j,i}-B^{\alpha }_{\varepsilon ,h}V^{n}_{j,i})$$, which along with (3.13) implies again (4.20) provided that the following condition is satisfied:

\begin{aligned} 1-\rho (1+2d\theta )-\rho \Delta t\bigg ( \sum _{m\in {\mathbb {Z}}^d}d^{\alpha ,n}_{h,k,m,i}+\sum _{m\not =0}(\kappa ^{\alpha ,n}_{h,m,i}+\beta ^{\alpha ,n}_{h,m,i})+ C\bigg )>0,\qquad \end{aligned}
(4.21)

which holds for small enough $$\rho$$. This completes the proof that $$\mathcal {P}$$ is a contraction operator.

We now proceed to establish the $$\ell ^\infty$$ stability of the scheme. Let $$\{U_j^{n-1}\}_{j=1}^J$$ be the solutions to (3.18). By expressing the discrete operators $$A^{\alpha }_{\varepsilon ,h,k}$$ and $$K^{\alpha ,1}_{\varepsilon ,h}$$ in the monotone form (3.15) and (3.12), and substituting them into (3.18), we obtain that

\begin{aligned}&[1+2d\theta +\Delta t\big (\sum _{m\in {\mathbb {Z}}^d}d^{\alpha ,n}_{h,k,m,i}+ \sum _{m\not =0}\kappa ^{\alpha ,n}_{h,m,i}\big )]U^n_{j,i}\\&\qquad -\Delta t\big (\sum _{m\in {\mathbb {Z}}^d}d^{\alpha ,n}_{h,k,m,i}U^n_{j,m}+\sum _{m\not =0}\kappa ^{\alpha ,n}_{h,m,i}U^{n}_{j,i+m}\big )\\&\quad =U^{n-\frac{1}{2}}_{j,i}+\Delta t {\tilde{f}}^{\alpha }(\mathbf{x}^n_{j,i},U^{n}_{j,i},\Delta U^{n}_{j,i}, B^{\alpha }_{\varepsilon ,h}U^{n}_{j,i})+2d\theta U^{n}_{j,i}, \end{aligned}

from which we can deduce

\begin{aligned}&[1+2d\theta +\Delta t\big (\sum _{m\in {\mathbb {Z}}^d}d^{\alpha ,n}_{h,k,m,i}+ \sum _{m\not =0}\kappa ^{\alpha ,n}_{h,m,i}\big )]U^n_{j,i}\nonumber \\&\qquad -\Delta t\big (\sum _{m\in {\mathbb {Z}}^d}d^{\alpha ,n}_{h,k,m,i}+\sum _{m\not =0}\kappa ^{\alpha ,n}_{h,m,i}\big )|U^{n}_{j}|_0\nonumber \\&\quad \le \Delta t\big [ f^{\alpha }(\mathbf{x}^n_{j,i},U^{n}_{j,i},\Delta U^{n}_{j,i},B^{\alpha }_{\varepsilon ,h}U^{n}_{j,i})- f^{\alpha }(\mathbf{x}^n_{j,i},0,\Delta U^{n}_{j,i},B^{\alpha }_{\varepsilon ,h}U^{n}_{j,i})\big ] \end{aligned}
(4.22)
\begin{aligned}&\qquad +\Delta t\big [f^{\alpha }(\mathbf{x}^n_{j,i},0,\Delta U^{n}_{j,i},B^{\alpha }_{\varepsilon ,h}U^{n}_{j,i})- f^{\alpha }(\mathbf{x}^n_{j,i},0,\Delta U^{n}_{j,i},0) \big ]\nonumber \\&\qquad +|U^{n-\frac{1}{2}}_j|_0+( \Delta t [{\tilde{f}}^{\alpha }(\mathbf{x}^n_{j,i},0,\Delta U^{n}_{j,i},0)-{\tilde{f}}^{\alpha }(\mathbf{x}^n_{j,i},0,0,0)]\nonumber \\&\quad \quad +2d\theta U^{n}_{j,i})+\Delta t {\tilde{f}}^{\alpha }(\mathbf{x}^n_{j,i},0,0,0) . \end{aligned}
(4.23)

Using similar arguments as those for the upper bound of (4.17), we deduce that (4.22) is bounded above by $$-\Delta tCU^n_{j,i}$$ independent of the sign of $$U^n_{j,i}$$.

Suppose now that $$B^{\alpha }_{\varepsilon ,h}U^{n}_{j,i}<0$$; then the monotonicity of f in k implies that (4.23) is nonpositive. The $$\ell ^\infty$$ stability of the numerical flux and the boundedness of $$f^{\alpha }(\mathbf{x},0,0,0)$$ then yield that

\begin{aligned} (1+\Delta tC)|U^n_{j}|_0\le |U^{n-\frac{1}{2}}_{j}|_0+ \Delta tC_1. \end{aligned}
(4.24)

Here C is the constant from Assumption 2.1 and $$C_1>0$$ is a large enough constant that we will choose later. On the other hand, if $$B^{\alpha }_{\varepsilon ,h}U^{n}_{j,i}>0$$, the Lipschitz continuity of f in k enables us to bound (4.23) by $$C B^{\alpha }_{\varepsilon ,h}U^{n}_{j,i}$$, which along with (3.12) implies again (4.24).

With the estimate (4.24) in hand, we are ready to derive a uniform bound for the solutions $$\{U_j^n\}$$ that is independent of h and c. The proof proceeds by induction. For each n, introduce the notation $$|U^n|_0=\max _{1\le j\le J}|U_j^n|_0$$ and set $$a_0=\max (|g|_0,|\zeta |_0)$$; it is then clear that $$a_0\ge \max (|U^0|_0,|\zeta |_0)$$. Suppose we have $$a_{n-1}$$ such that $$a_{n-1}\ge \max (|U^{n-1}|_0,|\zeta |_0)$$. Then the definition of $$U^{n-\frac{1}{2}}_{j,i}$$ implies that $$|U^{n-\frac{1}{2}}_j|_0\le \max (|\zeta |_0,|U^{n-1}|_0)\le a_{n-1}$$. Define the term

\begin{aligned} a_{n}:=\frac{1}{1+\Delta t C}a_{n-1}+\Delta t C_1, \end{aligned}

with the same constants as those in (4.24); then we have $$|U^{n}|_0\le a_n$$. To proceed by induction, we further require $$a_n\ge |\zeta |_0$$. Since $$a_{n-1}\ge |\zeta |_0$$ and C is fixed, it suffices to require $$C_1\ge C|\zeta |_0$$. In this way, we construct a sequence $$\{a_n\}$$ such that $$|U^{n}|_0\le a_n$$, and $$a_n$$ is uniformly bounded independently of c, h and $$\Delta t$$, which completes the proof of $$\ell ^\infty$$ stability.
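The uniform boundedness of the recursion $$a_{n}=\frac{1}{1+\Delta t C}a_{n-1}+\Delta t C_1$$ can be checked numerically. The following sketch (with arbitrary illustrative values for $$a_0$$, C, $$C_1$$ and $$\Delta t$$, not taken from the paper) iterates the map and confirms that the sequence never exceeds $$\max (a_0, a^*)$$, where $$a^*=C_1(1+\Delta t C)/C$$ is the fixed point of the map.

```python
def uniform_bound(a0, C, C1, dt, n_steps):
    """Iterate a_n = a_{n-1}/(1 + dt*C) + dt*C1 and return the sequence.

    The map is a contraction with factor 1/(1 + dt*C) < 1, so the
    sequence converges monotonically to the fixed point
    a* = C1*(1 + dt*C)/C and stays below max(a0, a*)."""
    a = [a0]
    for _ in range(n_steps):
        a.append(a[-1] / (1 + dt * C) + dt * C1)
    return a
```

Since the fixed point depends on $$\Delta t$$ only through the factor $$1+\Delta t C$$, the bound stays uniform as $$\Delta t\rightarrow 0$$, which is the point of the argument above.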

We now study the weak monotonicity of the scheme. Let $$V^n_{j,i}\ge U^n_{j,i}$$ and $${\tilde{V}}^{n}_{k,i(j)}\ge {\tilde{U}}^{n}_{k,i(j)}$$ for all i, j, k, n; then we have $$V^{n+\frac{1}{2}}_{j,i}\ge U^{n+\frac{1}{2}}_{j,i}$$. Moreover, the monotonicity of f in k and the weak monotonicity of $${\tilde{f}}$$ imply that

\begin{aligned}&\sum _{m\in {\mathbb {Z}}^d}d^{\alpha ,n}_{h,k,m,i}U^{n+1}_{j,m}+ \sum _{m\not =0}\kappa ^{\alpha ,n}_{h,m,i}U^{n+1}_{j,i+m}\\&\quad + {\tilde{f}}^{\alpha }(\mathbf{x}^{n+1}_{j,i},U^{n+1}_{j,i}, \Delta U^{n+1}_{j,i},\sum _{m\not =0}\beta ^{\alpha ,n}_{h,m,i}[U^{n+1}_{j,i+m}-U^{n+1}_{j,i}]) \end{aligned}

is nondecreasing in $$\{U^{b+1}_{j,a}\}_{(a,b)\!\not =\!(i,n)}$$, which gives the weak monotonicity of the scheme (3.18).

Finally we study the consistency of the scheme. By using the Lipschitz continuity of $$x\rightarrow \min (x,a)$$, it is clear that it suffices to bound

\begin{aligned} (I_1):=&\Delta t\big (A^{\alpha }_{\varepsilon ,h,k}\phi ^{n+1}_{j,i}+K^{\alpha ,1}_{\varepsilon ,h}\phi ^{n+1}_{j,i}+ {\tilde{f}}^{\alpha }(\mathbf{x}^{n+1}_{j,i},\phi ^{n+1}_{j,i}+\xi ,\Delta \phi ^{n+1}_{j,i}, B^{\alpha }_{\varepsilon ,h}\phi ^{n+1}_{j,i})\big )\\ (I_2):=&\bigg |\frac{\phi ^{n+1}_{j,i}-\phi _{j,i}^{n}}{\Delta t}-\big (A^{\alpha }_{\varepsilon ,h,k}\phi ^{n+1}_{j,i}+K^{\alpha ,1}_{\varepsilon ,h}\phi ^{n+1}_{j,i}\\&\quad + {\tilde{f}}^{\alpha }(\mathbf{x}^{n+1}_{j,i},\phi ^{n+1}_{j,i}+\xi ,\Delta \phi ^{n+1}_{j,i},B^{\alpha }_{\varepsilon ,h}\phi ^{n+1}_{j,i})\big )\\&\quad -\phi _{j,t}(\mathbf{x}_{j,i}^{n+1})-A_\varepsilon ^{\alpha } \phi _j(\mathbf{x}_{j,i}^{n+1})-K_\varepsilon ^{\alpha } \phi _j(\mathbf{x}_{j,i}^{n+1})\\&\quad - f^{\alpha }(\mathbf{x}^{n+1}_{j,i},\phi (\mathbf{x}^{n+1}_{j,i}),D\phi (\mathbf{x}^{n+1}_{j,i}),B_\varepsilon ^{\alpha }\phi (\mathbf{x}^{n+1}_{j,i}))\bigg |, \end{aligned}

which can be estimated by using Lemmas 4.9, 4.11, and the Lipschitz continuity of f. $$\square$$

### Remark 7

The contraction operator $$\mathcal {P}$$ is introduced to demonstrate that our scheme admits a unique solution for any given discretization parameters $$\Delta t$$, h, k and $$\varepsilon$$. However, due to its low convergence rate, it is not advisable to implement this contraction mapping directly to solve the nonlinear equation (3.18). In fact, Lemma 4.10 and the stability condition (4.20) imply that the contraction constant of $$\mathcal {P}$$ admits a lower bound depending on the spatial discretization of the diffusion operator. This undesirable dependence of $$\Delta t$$ on k can be avoided by considering the mapping T defined by (3.19), which is implicit in the local terms. It has been shown that for small enough h, the contraction constant of T is proportional to $$\theta$$, which can be chosen to achieve rapid convergence.
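The practical impact of the contraction constant can be illustrated with a generic Picard (fixed-point) iteration. The sketch below is not the scheme operator T from (3.19) itself; it applies the iteration to toy contractions with different constants, showing that the iteration count for a fixed tolerance grows sharply as the constant approaches 1.

```python
import numpy as np

def fixed_point_solve(T, U0, tol=1e-12, max_iter=500):
    """Picard iteration U <- T(U).  When T is a contraction with
    constant rho < 1, the error decays like rho**n, so the number of
    iterations needed scales like log(tol)/log(rho)."""
    U = np.asarray(U0, dtype=float)
    for it in range(1, max_iter + 1):
        U_new = T(U)
        if np.max(np.abs(U_new - U)) < tol:
            return U_new, it
        U = U_new
    raise RuntimeError("fixed-point iteration did not converge")

# Toy contractions: constants 0.1 (fast, like T with small theta)
# and 0.9 (slow, illustrating a constant near 1).
U_fast, n_fast = fixed_point_solve(lambda U: 0.1 * np.sin(U) + 1.0, np.zeros(4))
U_slow, n_slow = fixed_point_solve(lambda U: 0.9 * np.sin(U) + 1.0, np.zeros(4))
```

With constant 0.1 the iteration terminates in roughly a dozen steps, while a constant of 0.9 requires an order of magnitude more, which is why the operator T with a small $$\theta$$ is preferable to $$\mathcal {P}$$ in practice.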

## 5 Numerical Experiments

In this section, we present several numerical experiments to analyse the effectiveness of the numerical scheme proposed in Sect. 3. We shall investigate the convergence of numerical solutions with respect to the switching cost, timestep, and mesh size, and show that a relatively coarse discretization of the admissible control set already leads to an accurate approximation.

We consider a portfolio optimization problem over a time interval [0, T], in a framework of recursive utility. An investor can control their wealth process $$X^{t,x,\alpha }$$ through the selection of a control process $$\alpha \in \mathcal {A}^t_t$$, i.e., their portfolio strategy, and can also choose the duration of the investment via a stopping time $$\tau$$. If the agent chooses a strategy pair $$(\alpha ,\tau )$$, then the associated terminal reward is given by

\begin{aligned} \xi ^{t,x,\alpha }_\tau =\zeta (\tau ,X_\tau ^{t,x,\alpha })1_{t\le \tau <T}+g(X_\tau ^{t,x,\alpha })1_{\tau =T} \end{aligned}

for some utility functions $$\zeta$$ and g, where $$\tau \in \mathcal {T}_t^t$$, the set of $$\mathbb {F}^t$$-stopping times valued in [t, T].

The performance of this investment is evaluated under a particular nonlinear expectation, called the recursive utility process (see e.g. ), which is associated with a BSDE (with Lipschitz continuous drivers). It generalizes the standard additive utilities by including a dependence on the future utility (corresponding to the future wealth). Roughly speaking, the recursive utility depends on the future utility through the dependence of the driver f on y, and can also depend on the “variability” or “volatility” of the future utility through the dependence of f on z and k.

Let x be the wealth at the initial time t, let $$(\alpha ,\tau )$$ be the chosen strategy, and let $$\mathcal {E}^{\alpha ,t,x}[\cdot ]$$ be the recursive utility function associated with the BSDE with driver $$f^\alpha$$. The aim of the investor is to maximize the utility of the investment:

\begin{aligned} u(t,x):=\sup _{\tau \in \mathcal {T}_t^t}\sup _{\alpha \in \mathcal {A}_t^t} \mathcal {E}^{t,\alpha }_{t,\tau }[\xi ^{t,x,\alpha }_\tau ], \end{aligned}

over all admissible choices of $$(\alpha ,\tau )$$. Under Assumption 2.1, it can be shown that the value function u of this mixed optimization problem coincides with the unique bounded viscosity solution of the (backward) HJBVI (2.5).

For the numerical tests, we consider a financial market with a risk-free asset with an interest rate r and a risky asset whose price follows

\begin{aligned} dS_t=S_{t^-}\bigg [b \, dt+\sigma \, dW_t+\int _E \eta (e)\,{\tilde{N}}(dt,de)\bigg ], \end{aligned}

where W is a Brownian motion and $${\tilde{N}}(dt,de)=N(dt,de)-\nu (de)dt$$ is a compensated jump measure. If we denote by $$\alpha _t$$ the percentage of the portfolio held in the risky asset at time t, then the dynamics of the portfolio is given by

\begin{aligned} dX_t=\alpha _tX_{t^-}\bigg [b \, dt+\sigma \, dW_t+\int _E \eta (e)\,{\tilde{N}}(dt,de)\bigg ],\quad X_0=x_0. \end{aligned}
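As a hypothetical illustration (not part of the paper's PDE-based method), one could simulate paths of this controlled wealth process by an Euler scheme in which the infinite-activity jump part is approximated by a compound Poisson process: small jumps are truncated, the remaining jumps arrive at a finite rate `lam`, and the compensator contributes an extra drift. All names and parameter choices below are illustrative assumptions.

```python
import numpy as np

def simulate_wealth(x0, alpha, b, sigma, T, n_steps, lam, jump_sampler, jump_mean, rng):
    """One Euler path of dX = alpha*X[ b dt + sigma dW + eta dN~ ].

    The compensated jump measure N~ is approximated by a compound
    Poisson process with rate lam and jump sizes drawn from
    jump_sampler; its compensation contributes the drift
    -alpha*x*lam*jump_mean*dt."""
    dt = T / n_steps
    x = x0
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        jumps = sum(jump_sampler(rng) for _ in range(rng.poisson(lam * dt)))
        x += alpha * x * ((b - lam * jump_mean) * dt + sigma * dw + jumps)
    return x
```

As a sanity check, switching off noise and jumps ($$\sigma =0$$, `lam = 0`) with $$\alpha =1$$ reduces the scheme to compounding at rate b, so the terminal wealth approaches $$x_0e^{bT}$$ as the timestep shrinks.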

The performance will be evaluated by the recursive utility function induced by the BSDE with the following driver:

\begin{aligned} f(t,x,y,z)=\psi (t,x)-\beta y-\kappa |z|, \end{aligned}

for some instantaneous reward function $$\psi$$. Recall that any concave utility function admits a dual representation via a set of probability measures absolutely continuous with respect to the original probability measure P (see e.g. ). This result allows us to interpret $$\kappa \ge 0$$ as an ambiguity-aversion coefficient relative to the Brownian motion as suggested in [9, Sect. 3.3].

The value function of this control problem satisfies the following HJBVI:

\begin{aligned} {\left\{ \begin{array}{ll} \min \big \{ u(t,x)-\zeta (t,x),u_t+\inf _{\alpha \in [0,1]}\big (-L^\alpha u-\psi +\beta u+ \alpha \kappa \sigma |x u_x|\big ) \big \}=0,\\ u(0,x)-g(x)=0 \end{array}\right. }\nonumber \\ \end{aligned}
(5.1)

for $$(t,x)\in [0,T]\times {{\mathbb {R}}}$$, where the nonlocal operator $$L^\alpha = A^\alpha +K^\alpha$$ satisfies for $$\phi \in C^2([0,T]\times {{\mathbb {R}}})$$

\begin{aligned} A^\alpha \phi (t,x)&= \frac{1}{2}\alpha ^2\sigma ^2 x^2\phi _{xx}(t,x)+(\alpha b + (1-\alpha ) r) x \phi _x(t,x),\nonumber \\ K^\alpha \phi (t,x)&=\int _{{{\mathbb {R}}}\setminus \{0\}}\big (\phi (t,x+\alpha x\eta (e))-\phi (t,x)-\alpha x\eta (e) \phi _x(t,x)\big )\,\nu (de). \end{aligned}
(5.2)

We now specify the data for our numerical experiments. We use the exponential utility function $$\zeta (t,x)=g(x)=(1-e^{-x})^+$$, which determines both the intermediate and terminal payoffs, and serves as both the initial condition and the obstacle in the HJBVI. Moreover, we consider the tempered stable Lévy measure $$\nu (de)=\frac{e^{-\mu |e|}}{|e|}de$$ on $${{\mathbb {R}}}$$ with intensity $$\eta (e)=1\wedge |e|$$ for the jump component (which is a special case of the variance Gamma model in Reference ). For simplicity, we choose a zero interest rate, i.e., $$r=0$$.

We further choose the function $$\psi (t,x)=0.8\exp (-(T-t))\exp (-x/2)$$ as the instantaneous reward. As we will see later, this choice of $$\psi$$ implies that the optimal control $$\alpha$$ varies in the state space and evolves in time, and there can be non-trivial stopping. The resulting HJBVI will be localized to the domain (0, 2) with $$u(t,x)=g(x)$$ for $$(t,x)\in (0,T)\times {{\mathbb {R}}}\setminus (0,2)$$. The numerical values for the parameters used in the experiments are given in Table 1.

Now we are ready to discuss the selection of the discretization parameters in detail. The density of the tempered stable measure $$\nu$$ enables us to improve the estimates in Lemma 4.10 to $$\sum _{m\not =0}\kappa ^{\alpha ,n}_{h,m,i}\le |\log (\varepsilon )|$$, and hence choosing $$\varepsilon =h$$ and $$\Delta t=O(h)$$ leads us to a consistent approximation to the switching system (3.6). Moreover, choosing $$\theta =\frac{1}{40}$$ and $$\Delta t=\frac{h}{15}$$ ensures that the numerical flux is stable and the contraction constant of T in (3.19) is less than $$\frac{1}{10}$$.

The coefficients of the nonlocal terms are evaluated by the midpoint quadrature formula, which is clearly monotone and consistent. We observe that for the control problem with the parameters as in Table 1, the optimal strategy $$\alpha ^*$$ will always be obtained at one of the endpoints of [0, 1]. In fact, using Taylor’s theorem, we are able to approximate the nonlocal term $$K^\alpha u$$ by

\begin{aligned} K^\alpha u(t,x)\approx \frac{1}{2}\alpha ^2x^2 \int _{{{\mathbb {R}}}\setminus \{0\}} (1\wedge |e|)^2 \,\nu (de) u_{xx}(t,x)=\frac{1}{2}\alpha ^2x^2 C u_{xx}(t,x), \end{aligned}

at any given (tx) for which the value function lies above the obstacle and is sufficiently smooth. Then we infer from the HJBVI (5.1) that the optimal control $$\alpha ^*$$ is the maximizer of a quadratic function on [0, 1], which is attained in the interior only if

\begin{aligned} u_{xx}(t,x)<0, \quad -\frac{bu_x-\sigma |u_x|}{(\sigma ^2+C)xu_{xx}}\in (0,1). \end{aligned}

However, since we have $$b<\sigma$$, the above conditions can never hold for any $$x>0$$. Consequently, we deduce that the optimal control takes values only in the finite set $$\{0,1\}$$, so replacing [0, 1] by $$\mathbf{A }_\delta =\{0,1\}$$ in (5.1) will not introduce any discretization error. This has been confirmed by our numerical experiments. For the sake of simplicity, we discretise each component of the switching system on a single uniform mesh, so that Condition 1 (1) is trivially satisfied.
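The constant $$C=\int _{{{\mathbb {R}}}\setminus \{0\}}(1\wedge |e|)^2\,\nu (de)$$ appearing in the Taylor approximation above is finite for the tempered stable measure and can be approximated by the same midpoint quadrature used for the nonlocal terms. The following sketch computes it; the truncation level `e_max` and the grid size are illustrative choices, and the value of $$\mu$$ in the test is arbitrary rather than the one from Table 1.

```python
import numpy as np

def levy_constant(mu, e_max=30.0, n=200000):
    """Midpoint-rule approximation of C = int (1 ^ |e|)^2 nu(de) for the
    tempered stable measure nu(de) = exp(-mu*|e|)/|e| de.

    The integrand (1 ^ e)**2 * exp(-mu*e)/e behaves like e near 0, so
    it is integrable; the tail beyond e_max is exponentially small.
    By symmetry we integrate over (0, e_max] and double."""
    h = e_max / n
    e = (np.arange(n) + 0.5) * h          # midpoints stay away from 0
    vals = np.minimum(1.0, e) ** 2 * np.exp(-mu * e) / e
    return 2.0 * h * vals.sum()
```

For $$|e|\le 1$$ the integral has the closed form $$2(1-e^{-\mu }(1+\mu ))/\mu ^2$$, which gives a convenient lower bound for checking the quadrature.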

Table 2 contains the numerical solutions to the last component of the switching system at the grid point $$(T,x_0)$$ for different mesh sizes h and switching costs c. We examine the convergence of the numerical solutions, denoted by $$U_h$$, in h for fixed c, as well as their convergence with respect to the cost c. For any fixed positive switching cost c, we infer from lines (a) that the numerical solutions converge monotonically to the exact solution. Moreover, lines (c) indicate that the approximation error has asymptotic magnitude $$O(h)+O(\Delta t)$$, which appears unaffected by the size of the cost c. By considering the boldface values in Table 2 as accurate approximations to the exact solution of the switching system with a given cost c, we can further conclude that the switching system is consistent with the HJBVI (5.1) with order 1: the successive differences 0.00469, 0.00117 and 0.00029 between the last three pairs of values decrease by an approximate factor of four, matching the reduction in c. Therefore, by taking $$c=O(h)$$ and $$\Delta t=O(h)$$, we can obtain a first-order scheme for the HJBVI.
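The factor-of-four argument can be checked directly from the quoted differences. Assuming, as the text indicates, that c is reduced by a factor of four between consecutive values, the estimated consistency order is $$\log (\text {ratio})/\log 4\approx 1$$:

```python
import math

# Successive differences between the last pairs of values quoted in
# the text (taken from Table 2 of the paper).
diffs = [0.00469, 0.00117, 0.00029]

# Ratios of consecutive differences; a ratio of ~4 under a four-fold
# reduction in c corresponds to first-order consistency in c.
ratios = [diffs[i] / diffs[i + 1] for i in range(len(diffs) - 1)]
orders = [math.log(r) / math.log(4.0) for r in ratios]
```

Both estimated orders come out close to 1, consistent with the first-order claim.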

We then proceed to analyze the effect of the control discretization. We pick the same parameters as those in Table 1, except that $$b=0.25$$, chosen such that the optimal control may now be attained in the interior of (0, 1) (by a similar argument as earlier). Computations are performed using Matlab R2016b on a 3.30 GHz Intel Xeon E5-2667 16-core processor with 256 GB RAM to enable parallelization. Table 3 illustrates the numerical results for different control meshes ($$J=1/\delta +1$$) with a fixed mesh size $$h=0.005$$ and switching cost $$c=1/2560$$, and compares the runtime with and without parallelization.

We clearly observe from line (a) second-order convergence of the numerical solutions; a relatively coarse control mesh already yields an accurate approximation with a negligible control discretization error.

Next, we discuss lines (b)–(f), which analyse the algorithm’s parallel efficiency. Here, the implicit finite difference scheme for the individual components of the switching system (i.e., (3.9), for different j) is solved independently on different processors, while the maximisation step (3.8) requires communication between processors.

The total execution times with and without parallelization are presented in lines (b) and (c), respectively, and indicate a significant reduction of computational time. Moreover, by subtracting the communication time among clusters, shown in line (d), from the total runtime, we obtain the actual time spent executing the numerical scheme (line (e)). The speed-up rate of the parallelization is shown in line (f); it grows with the number of controls and levels off at the number of cores. Therefore, together with parallelization, piecewise constant timestepping enables us to achieve high accuracy in the control discretization without significantly increasing the computational time, an advantage over policy iteration, which does not parallelise naturally.

We finally examine the impact of the computational domain by performing computations on (0, 3) with $$h=1/400$$, $$\Delta t=h/20$$, $$c=1/640$$ and the parameters as in Table 1. Compared to the results in Table 2, this larger domain leads to a relative difference of $$7.53 \cdot 10^{-7}$$, which is negligible compared to the time and spatial discretization errors.

The numerical value function and the corresponding feedback control strategy with $$J=21$$ are presented in Figure 1, in which the white area represents the region where the obstacle is active; elsewhere the colour indicates the value of the optimal control, as shown in the panel on the right. The approximation to the optimal control pair $$(\tau ,\alpha )$$ was found from the numerical solution as follows (see also (3.8) and Remark 3), noting that in our tests $$x_{j,i}=x_{k,i}$$ for all j, k, and therefore no interpolation is needed:

\begin{aligned} i_n^*\in & {} \mathrm{argmax}_{k}{U}^{n}_{k,i}, \\ \theta _i^n= & {} \left\{ \begin{array}{rl} 0 &{} \max _{k}{U}^{n}_{k,i} > \xi (t_n,x_{1,i}), \\ 1 &{} \max _{k}{U}^{n}_{k,i} \le \xi (t_n,x_{1,i}), \end{array} \right. \end{aligned}

where $$\alpha _i^n = \alpha _{i_n^*}$$ is an approximation to the optimal policy and $$\{(t_n,x_{1,i}): \theta _i^n=1\}$$ is an approximation to the stopping region.
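The extraction of the feedback policy and stopping region described above can be sketched as follows; the array layout and names are illustrative assumptions, with `U[k, i]` holding the k-th switching component at node i and `obstacle[i]` the value of $$\xi (t_n,x_{1,i})$$.

```python
import numpy as np

def extract_policy(U, alpha_grid, obstacle):
    """Feedback strategy from the switching-system solution at one
    time level: return the control value at the maximising component
    per node and the indicator of the stopping region."""
    i_star = np.argmax(U, axis=0)                    # argmax over the control index k
    theta = (U.max(axis=0) <= obstacle).astype(int)  # 1 where the obstacle is active
    return alpha_grid[i_star], theta
```

Sweeping this over all time levels n reproduces the colour map and the white stopping region in Figure 1.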