1 Introduction

The theory of optimal control for stochastic differential equations is mathematically challenging and has been applied in many fields such as economics, engineering, biology and finance (Fleming and Soner 2006; Pham 2000). Stochastic optimal control problems have been studied by many researchers (Krylov 2000; Jakobsen 2003; Peyrl et al. 2005). In some cases, the well-posedness of such problems has been studied using methods such as viscosity and minimax techniques (see Crandall and Lions 1983; Crandall et al. 1984, 1992). In general, most of these problems do not have an explicit solution, and there have therefore been many attempts to develop numerical methods for their approximation. Numerical approximation of stochastic optimal control problems is thus an active research area that has attracted a lot of attention (Huang et al. 2004; Krylov 1999, 2000, 2005; Jakobsen 2003; Peyrl et al. 2005). The key challenges in solving the HJB equation are the low regularity of its solution and the lack of appropriate numerical methods to tackle the degeneracy of the differential operator. Indeed, in addition to the standard issues that arise when solving degenerate PDEs, an optimization problem must be solved at each point of the grid and at each time step. A standard approach is based on Markov chain approximation, which is essentially a finite difference method and therefore suffers from time step limitations due to stability issues (Forsyth and Labahn 2007). Many stochastic optimal control problems, such as Merton's optimal problem, have a degenerate linear operator when the spatial variables approach zero. This degeneracy has an adverse impact on accuracy when the finite difference method is used to solve such problems (Dleuna Nyoumbi and Tambue 2021; Wilmott 2005), as the monotonicity of the scheme is usually lost. When solving the HJB equation, monotonicity plays a key role in ensuring convergence of the numerical scheme toward the viscosity solution. Indeed, in the high dimensional Merton control problem, the diffusion matrix has lower rank near the origin, and it has been found in Bénézet et al. (2019) and Henderson et al. (2020) that standard finite difference schemes become non-monotone and may not converge to the viscosity solution of the HJB equation. To address this degeneracy, a fitted finite volume method has been proposed in Dleuna Nyoumbi and Tambue (2021) for one and two dimensional optimal control problems. This method uses a special technique, called the fitted technique, to tackle the degeneracy. The technique was initially developed to solve Black–Scholes PDEs for option pricing (see Wang 2004 and references therein). In Dleuna Nyoumbi and Tambue (2021), numerical experiments demonstrate that the fitted finite volume scheme is more accurate than the standard finite difference approach for one and two dimensional stochastic optimal control problems. To the best of our knowledge, even for Black–Scholes PDEs for option pricing, a fitted technique for high dimensional domains (\(n\ge 3\)) has been lacking in the literature.

The aim of this research is to introduce the first fitted finite volume method for stochastic optimal control problems in high dimensional domains (\(n\ge 3\)). The method is designed to handle the degeneracy of the linear operator while solving the HJB equation numerically. It is coupled with an implicit time-stepping method and the iterative method presented in Peyrl et al. (2005) for the optimization problem at every time step. The merit of the method is that it is unconditionally stable in time, because of the implicit nature of the time discretisation, and yields a linear system with a positive-definite M-matrix, in contrast to the standard finite difference scheme.

The novel contributions of our paper over the existing literature can be summarized as follows.

  • We have extended the fitted finite volume technique to discretize a more general HJB equation, coupled with the implicit time-stepping method for the temporal discretization and the iterative method for the optimization problem at every time step. To the best of our knowledge, such a combination has not yet been proposed for stochastic optimal control problems in high dimensional domains (\(n\ge 3\)).

  • We have proved that the corresponding matrices after spatial and temporal discretization are positive-definite M-matrices. We have also demonstrated by numerical experiments that the proposed scheme can be more accurate than the standard finite difference scheme.

The rest of the paper is organized as follows. The stochastic optimal control problem is introduced in Sect. 2. In Sect. 3, we introduce the fitted finite volume method in three dimensions and show that the system matrix of the resulting discrete equations is an M-matrix; Sect. 4 extends the method to the n dimensional case. Section 5 provides the temporal discretization and the optimization algorithm for the spatially discretized HJB equation. In Sect. 6, we present some numerical examples illustrating the accuracy of the proposed method compared to the standard finite difference method. Finally, in Sect. 7, we summarise our findings.

2 Preliminaries and Formulation

Let \(\left( \varOmega , {\mathcal {F}}, {\mathbb {F}}=({\mathcal {F}}_t)_{t \ge 0}, {\mathbb {P}}\right) \) be a filtered probability space. We consider the following controlled stochastic differential equation (SDE), defined in \({\mathbb {R}}^{ n}\) by

$$\begin{aligned} dx_s = b(s,x_s, \alpha _s)\, ds + \sigma (s, x_s, \alpha _s)\, d\omega _s,\quad s\,\in (t, T],\quad x_t=x \end{aligned}$$
(1)

where

$$\begin{aligned} b: [0, T] \times {\mathbb {R}}^n \times {\mathcal {A}} \rightarrow {\mathbb {R}}^n \quad (t,x_t, \alpha _t) \rightarrow b(t,x_t,\alpha _t) \end{aligned}$$
(2)

is the drift term and

$$\begin{aligned} \sigma :[0, T] \times {\mathbb {R}}^n\times {\mathcal {A}} \rightarrow {\mathbb {R}}^{n\times d} \quad (t,x_t,\alpha _t) \rightarrow \sigma (t,x_t,\alpha _t) \end{aligned}$$
(3)

the d-dimensional diffusion coefficient. Note that \( \omega _t \) is a d-dimensional Brownian motion with independent components on \( \left( \varOmega , {\mathcal {F}}, ({\mathcal {F}}_t)_{t \ge 0}, {\mathbb {P}} \right) \), and the control \( \alpha =(\alpha _t)_{t \ge 0}\) is an \( {\mathbb {F}} \)-adapted process, valued in a compact convex subset \( {\mathcal {A}} \) of \( {\mathbb {R}}^m \, (m \ge 1)\), satisfying some integrability conditions and/or state constraints. Precise assumptions on b and \( \sigma \) ensuring existence and uniqueness of the solution \( x_t \) of (1) can be found in Pham (2000).

Given a function f from \([0, T] \times {\mathbb {R}}^n\times {\mathcal {A}} \) into \( {\mathbb {R}}\) and g from \({\mathbb {R}}^n \) into \( {\mathbb {R}}\), the performance functional is defined as

$$\begin{aligned} J(t, x, \alpha ) = {\mathbb {E}}\, \left[ \int _t^T f(s, x_s,\alpha ) \,ds + g ( x_T) \right] , \,\,\,\forall \,x\,\in \,{\mathbb {R}}^n. \end{aligned}$$
(4)

We assume that

$$\begin{aligned} {\mathbb {E}}\, \left\{ \left[ \int _t^T f(s, x_s,\alpha ) \,ds + g (x_T) \right] \right\} < \infty . \end{aligned}$$
(5)

The model problem consists in solving the following optimization problem

$$\begin{aligned} v(t, x) = \underset{\alpha \in {\mathcal {A}}}{\sup }\, J(t, x, \alpha ), \,\,\,\,\forall \,x\,\in \,{\mathbb {R}}^n. \end{aligned}$$
(6)
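To fix ideas, the following minimal sketch illustrates the performance functional (4) for one fixed constant control: the controlled SDE (1) is simulated by the Euler–Maruyama method and \(J(t,x,\alpha)\) is estimated by plain Monte Carlo averaging. All coefficients \(b,\sigma,f,g\) below are illustrative placeholders, not the coefficients used later in the paper, and the scalar setting \(n=d=1\) is assumed only for brevity.

```python
import numpy as np

def estimate_J(t, x, alpha, T=1.0, n_steps=200, n_paths=20000, seed=0):
    # placeholder coefficients b, sigma, f, g (NOT taken from the paper)
    b = lambda s, x, a: a * x
    sigma = lambda s, x, a: 0.2 * x
    f = lambda s, x, a: -0.5 * a ** 2
    g = lambda x: np.log(np.maximum(x, 1e-12))

    rng = np.random.default_rng(seed)
    dt = (T - t) / n_steps
    X = np.full(n_paths, x, dtype=float)     # Euler-Maruyama state
    running = np.zeros(n_paths)              # accumulated running reward
    s = t
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        running += f(s, X, alpha) * dt
        X += b(s, X, alpha) * dt + sigma(s, X, alpha) * dW
        s += dt
    return float(np.mean(running + g(X)))

print(estimate_J(t=0.0, x=1.0, alpha=0.3))
```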

By dynamic programming, the resulting Hamilton–Jacobi–Bellman (HJB) equation (Krylov 2000) is given by

$$\begin{aligned} {\left\{ \begin{array}{ll} \dfrac{ \partial v(t, x)}{ \partial t} + \underset{\alpha \in {\mathcal {A}}}{\sup } \left[ L^{\alpha } v(t, x) + f(t, x,\alpha )\right] = 0 \quad \text {on} \ [0,T)\times {\mathbb {R}}^n\\ v(T, x) = g(x), \,\,\,\,x \,\in {\mathbb {R}}^n \end{array}\right. } \end{aligned}$$
(7)

where

$$\begin{aligned} L^{\alpha } v(t, x) = \sum _{i=1}^n (b(t, x,\alpha ))_i \dfrac{ \partial v(t,x)}{ \partial x_i} + \sum _{i,j=1}^n ( a^{\alpha } (t,x))_{i,j}\,\dfrac{ \partial ^2 v(t,x)}{\partial x_i\, \partial x_j}, \end{aligned}$$
(8)

and \( a^\alpha (t,x) = \dfrac{1}{2}\bigg (\sigma (t,x,\alpha )(\sigma (t,x,\alpha ))^T\bigg ) \). The resulting Hamilton–Jacobi–Bellman equation is typically a second order nonlinear partial differential equation, which can degenerate and therefore requires careful numerical treatment to be solved accurately.
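As a concrete illustration of the pointwise optimization in (7), the following minimal sketch evaluates \(\sup_{\alpha}\left[L^{\alpha}v + f\right]\) at a single point by brute-force search over a discretized control set, once the derivatives of v are known. A one-dimensional state and placeholder coefficients are assumed; they are not the coefficients of the paper.

```python
import numpy as np

def hamiltonian(t, x, v_x, v_xx, controls,
                b=lambda t, x, a: a * x,           # placeholder drift
                sigma=lambda t, x, a: 0.2 * x,     # placeholder diffusion
                f=lambda t, x, a: -0.5 * a ** 2):  # placeholder running reward
    # one-dimensional case of (8): L^alpha v = b v_x + (1/2) sigma^2 v_xx
    values = b(t, x, controls) * v_x \
             + 0.5 * sigma(t, x, controls) ** 2 * v_xx \
             + f(t, x, controls)
    i_best = int(np.argmax(values))
    return values[i_best], controls[i_best]

controls = np.linspace(0.0, 1.0, 101)   # discretized control set A
print(hamiltonian(0.0, 1.0, v_x=0.8, v_xx=-0.4, controls=controls))
```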

3 Fitted Finite Volume Method for the Three Dimensional HJB Equation

As already mentioned, even for Black–Scholes PDEs for option pricing, a fitted technique for three dimensional space has been lacking in the literature, to the best of our knowledge. The goal here is to extend the technique of Dleuna Nyoumbi and Tambue (2021) and Wang (2004) to the three dimensional HJB equation.

Consider the generalized HJB Eq. (7) in dimension 3, which, by setting \( \tau = T-t \), can be written in the form

$$\begin{aligned} -\dfrac{ \partial v(\tau ,x,y,z) }{\partial \tau } + \sup _{\alpha \in {\mathcal {A}}}\left[ \nabla \cdot \left( k (v(\tau ,x,y,z))\right) + c(\tau ,x,y,z,\alpha )\,v(\tau ,x,y,z) \right] = 0, \end{aligned}$$
(9)

where \(k(v(\tau ,x,y,z)) = A(\tau ,x,y,z,\alpha )\cdot \nabla v(\tau ,x,y,z)+ b(\tau ,x,y,z,\alpha )\,v(\tau ,x,y,z)\)  with

$$\begin{aligned} b = (x\,b_1, y\,b_2, z\,b_3)^T,\quad A=\left[ \begin{array}{ccc} a_{11} &{} a_{12} &{} a_{13} \\ a_{21} &{} a_{22}&{} a_{23} \\ a_{31} &{} a_{32}&{} a_{33} \end{array} \right] . \end{aligned}$$
(10)

Indeed this divergence form is not a restriction, as the differentiation is with respect to x, y and z, and not with respect to the control \(\alpha \), which may be discontinuous in some applications. We will assume that \(a_{21}= a_{12}, a_{31}= a_{13}\) and \( a_{32}= a_{23} \). We also define the following coefficients, which will help us to build our scheme: \( a_{11}(\tau ,x,y,z,\alpha ) = {\overline{a}}_1(\tau ,x, y, z, \alpha )\,x^2\), \(a_{22}(\tau ,x,y,z,\alpha ) = {\overline{a}}_2(\tau ,x,y,z,\alpha )\, y^2\), \(a_{33}(\tau ,x,y,z,\alpha ) = {\overline{a}}_3(\tau ,x,y,z,\alpha )\, z^2\), \( a_{1 2}=a_{2 1} = d_1(\tau ,x,y,z,\alpha )\, x\,y\,z\), \( a_{1 3}=a_{3 1} = d_2(\tau ,x,y,z,\alpha )\, x\,y\,z \) and \( a_{2 3}=a_{3 2} = d_3(\tau ,x,y,z,\alpha )\, x\,y\,z\). Although the initial value problem (9) is defined on the unbounded region \( {\mathbb {R}}^3 \), for computational reasons we restrict it to a bounded region. As usual the three dimensional domain is truncated to \( I_{x}= [0,x_{\text {max}}] \), \(I_{y}= [0,y_{\text {max}}] \) and \( I_{z}= [0,z_{\text {max}}] \). The truncated domain will be divided into \( N_1 \), \( N_2 \) and \( N_3 \) sub-intervals

$$\begin{aligned} I_{x_{i}} :=(x_i, x_{i+1}),\,\, I_{y_{j}} :=(y_j, y_{j+1}) ,\,\, I_{z_{k}} :=(z_k, z_{k+1}), \end{aligned}$$

\( i=0\ldots N_1-1,\,\,j=0\ldots N_2-1,\,\,\,k=0\ldots N_3-1\) with \( 0 = x_{0}< x_{1}< \cdots \cdots < x_{N_1} = x_{\text {max}},\,\)   \(0 = y_{0}< y_{1}< \cdots \cdots < y_{N_2}= y_{\text {max}} \) and \( 0 = z_{0}< z_{1}< \cdots \cdots < z_{N_3}= z_{\text {max}} \). This defines on \( I_{x} \times I_{y} \times I_{z} \) a rectangular mesh.

By setting

$$\begin{aligned} x_{i+1/2}:= & {} \dfrac{x_{i} + x_{i+1} }{2},\,\, x_{i-1/2} :=\dfrac{x_{i} + x_{i-1} }{2},\,\,y_{j+1/2} :=\dfrac{y_{j} + y_{j+1} }{2},\nonumber \\ \,y_{j-1/2}:= & {} \dfrac{y_{j} + y_{j-1} }{2}, \,\, \,z_{k+1/2} :=\dfrac{z_{k} + z_{k+1} }{2},\,\,z_{k-1/2} :=\dfrac{z_{k} + z_{k-1} }{2}, \end{aligned}$$
(11)

for each \( i=1\ldots N_1-1\)   \( j=1\ldots N_2-1\)  and each \( k=1\ldots N_3-1\). These mid-points form a second partition of \( I_{x} \times I_{y} \times I_{z}\) if we define \( x_{-1/2} = x_{0}\)\( x_{N_1+1/2} = x_{\text {max}}\),   \( y_{-1/2} = y_{0}\)\( y_{N_2+1/2} = y_{\text {max}}\) and \( z_{-1/2} = z_{0}\)\( z_{N_3+1/2} = z_{\text {max}}\). For each \(i = 0, 1, \ldots ,N_1 \),   \(j = 0, 1, \ldots ,N_2 \) and \(k = 0, 1,\ldots ,N_3 \), we set \(h_{x_i} = x_{i+1/2} - x_{i-1/2} \), \(h_{y_j} = y_{j+1/2} - y_{j-1/2} \),   \(h_{z_k} = z_{k+1/2} - z_{k-1/2} \) and define the grids points as

$$\begin{aligned} {\mathcal {G}}=\left\{ (x_i,y_j,z_k):~1\le i\le N_1-1;\,\,\,1\le j\le N_2-1; \,\,\,1\le k\le N_3-1\right\} . \end{aligned}$$
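For readers who wish to reproduce the mesh construction, the following minimal sketch builds, for each coordinate direction, the primal nodes, the mid-points of (11) with the boundary conventions \(x_{-1/2}=x_0\), \(x_{N_1+1/2}=x_{\text{max}}\), and the cell widths \(h_{x_i}\), \(h_{y_j}\), \(h_{z_k}\), together with the volume \(l_{i,j,k}\) of a control volume. Uniform partitions are used only for illustration; any monotone partition works the same way.

```python
import numpy as np

def build_axis(x_max, N):
    x = np.linspace(0.0, x_max, N + 1)                  # nodes x_0 < ... < x_N
    x_half = 0.5 * (x[:-1] + x[1:])                     # x_{i+1/2}, i = 0..N-1
    x_half = np.concatenate(([x[0]], x_half, [x[-1]]))  # add x_{-1/2}, x_{N+1/2}
    h = x_half[1:] - x_half[:-1]                        # h_{x_i}, i = 0..N
    return x, x_half, h

x, x_half, hx = build_axis(2.0, N=8)
y, y_half, hy = build_axis(1.0, N=6)
z, z_half, hz = build_axis(1.0, N=5)

i, j, k = 3, 2, 2                       # an interior node of the grid G
l_ijk = hx[i] * hy[j] * hz[k]           # volume of the control volume R_{i,j,k}
print(l_ijk)
```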

Integrating both sides of (9) over \( {\mathcal {R}}_{i,j,k}=\left[ x_{i-1/2}, x_{i+1/2} \right] \times \left[ y_{j-1/2}, y_{j+1/2}\right] \times \left[ z_{k-1/2}, z_{k+1/2}\right] \), we have

$$\begin{aligned}&- \int _{x_{i-1/2}}^{x_{i+1/2}} \int _{y_{j-1/2}}^{y_{j+1/2}} \int _{z_{k-1/2}}^{z_{k+1/2}} \dfrac{ \partial v }{ \partial \tau }\, dx\, dy\, dz \nonumber \\&\quad + \int _{x_{i-1/2}}^{x_{i+1/2}} \int _{y_{j-1/2}}^{y_{j+1/2}} \int _{z_{k-1/2}}^{z_{k+1/2}} \sup _{\alpha \in {\mathcal {A}}} \left[ \nabla \cdot \left( k(v)\right) + c\,v \right] \, dx\,dy\,dz = 0, \end{aligned}$$
(12)

for \( i =1,2,\ldots , N_1-1 \)\( j =1,2,\ldots , N_2-1 \)\(k =1,2,\ldots , N_3-1 \).

Applying the mid-point quadrature rule to the first and the last terms, we obtain

$$\begin{aligned} -\dfrac{\partial \, v_{i,j,k}(\tau ) }{\partial \, \tau }\,l_{i,j,k} + \sup _{\alpha \in {\mathcal {A}}} \left[ \int _{ {\mathcal {R}}_{i,j,k}}\nabla \cdot \left( k(v)\right) \,dx\,dy\,dz + c_{i,j,k}(\tau ,\alpha )\, v_{i,j,k}(\tau )\,l_{i,j,k}\right] =0 \end{aligned}$$
(13)

for \( i= 1,2,\ldots N_1-1 \)\( j= 1,2,\ldots N_2-1 \)\( k= 1,2,\ldots N_3-1 \) where \( l_{i,j,k} = \left( x_{i+1/2} - x_{i-1/2} \right) \times \left( y_{j+1/2} - y_{j-1/2}\right) \times \left( z_{k+1/2} - z_{k-1/2}\right) \) is the volume of \( {\mathcal {R}}_{i,j,k} \). Note that \( v_{i,j,k}(\tau ) \) denotes the nodal approximation to \( v(\tau , x_{i}, y_{j}, z_k) \) at each point of the grid.

We now consider the approximation of the middle term in (13). Let \(\mathbf{n} \) denote the unit vector outward-normal to \( \partial {\mathcal {R}}_{i,j,k} \). By the divergence (Ostrogradski) theorem, integrating by parts and using the definition of the flux k, we have

$$\begin{aligned}&\int _{{\mathcal {R}}_{i,j,k}} \nabla \cdot \left( k(v)\right) d x\,dy\,dz = \int _{\partial {\mathcal {R}}_{i,j,k}} k \cdot \mathbf{n } \, ds \nonumber \\&\quad = \int _{\left( x_{i+1/2},y_{j-1/2},z_{k-1/2} \right) }^{\left( x_{i+1/2}, y_{j+1/2}, z_{k+1/2} \right) }\left( a_{11}\,\dfrac{\partial v }{\partial x}+ a_{12}\,\dfrac{\partial v }{\partial y}+a_{13}\,\dfrac{\partial v }{\partial z}+ x\,b_1\,v \right) d y\,d z \nonumber \\&\qquad - \int _{\left( x_{i-1/2},y_{j-1/2},z_{k-1/2} \right) }^{\left( x_{i-1/2}, y_{j+1/2}, z_{k+1/2} \right) } \left( a_{11}\,\dfrac{\partial v }{\partial x}+ a_{12}\,\dfrac{\partial v }{\partial y}+a_{13}\,\dfrac{\partial v }{\partial z}+ x\,b_1\,v \right) \,d y\,dz \nonumber \\&\qquad +\int _{\left( x_{i-1/2},y_{j+1/2},z_{k-1/2} \right) }^{\left( x_{i+1/2}, y_{j+1/2}, z_{k+1/2} \right) } \left( a_{21}\,\dfrac{\partial v }{ \partial x}+ a_{22}\,\dfrac{\partial v }{\partial y}+a_{23}\,\dfrac{\partial v }{\partial z}+ y\,b_2\,v \right) d x\,dz\nonumber \\&\qquad - \int _{\left( x_{i-1/2},y_{j-1/2},z_{k-1/2} \right) }^{\left( x_{i+1/2}, y_{j-1/2}, z_{k+1/2} \right) } \left( a_{21}\,\dfrac{\partial v }{\partial x}+ a_{22}\,\dfrac{\partial v }{ \partial y}+a_{23}\,\dfrac{\partial v }{\partial z}+ y\,b_2\,v \right) d x\,dz \nonumber \\&\qquad + \int _{\left( x_{i-1/2},y_{j-1/2},z_{k+1/2} \right) }^{\left( x_{i+1/2}, y_{j+1/2}, z_{k+1/2} \right) } \left( a_{31}\,\dfrac{\partial v }{\partial x}+ a_{32}\,\dfrac{\partial v }{ \partial y}+a_{33}\,\dfrac{\partial v }{ \partial z}+ z\,b_3\,v \right) d x\,dy \nonumber \\&\qquad - \int _{\left( x_{i-1/2},y_{j-1/2},z_{k-1/2} \right) }^{\left( x_{i+1/2}, y_{j+1/2}, z_{k-1/2} \right) } \left( a_{31}\,\dfrac{\partial v }{\partial x}+ a_{32}\,\dfrac{\partial v }{ \partial y}+a_{33}\,\dfrac{\partial v }{\partial z}+z\, b_3\,v \right) d x\,dy. \end{aligned}$$
(14)

Note that

$$\begin{aligned} \int _{\left( x_{1},y_{1},z_{1} \right) }^{\left( x_{1}, y_{2}, z_{2} \right) } f(x,y,z) dy dz:=\int _{y_1}^{y_2} \int _{z_1}^{z_2} f(x_1,y,z) dz dy. \end{aligned}$$
(15)

We shall look at (14) term by term. For the first term, we want to approximate the integral by a constant, i.e.,

$$\begin{aligned}&\int _{\left( x_{i+1/2},y_{j-1/2},z_{k-1/2} \right) }^{\left( x_{i+1/2}, y_{j+1/2}, z_{k+1/2} \right) }\left( a_{11}\,\dfrac{\partial v }{\partial x}+ a_{12}\,\dfrac{\partial v }{ \partial y}+a_{13}\,\dfrac{\partial v }{\partial z} + x\, b_1\,v \right) d y\,dz \nonumber \\&\quad \approx \left( a_{11}\,\dfrac{\partial v }{\partial x}+ a_{12}\,\dfrac{\partial v}{\partial y}+ a_{13}\,\dfrac{ \partial v }{\partial z} + x\,b_1\,v \right) \bigg |_{\left( x_{{i+1/2}},y_{j},z_k \right) } \cdot h_{y_{j}}\cdot h_{z_{k}}. \end{aligned}$$
(16)

To achieve this, we need to derive approximations of \( k (v) \cdot \mathbf{n} \) defined above at the mid-point \( \left( x_{i+1/2},y_{j},z_{k} \right) \) of the interval \( I_{x_{i}} \) for \( i = 0, 1,\ldots N_1-1\). This discussion is divided into two cases, \( i \ge 1 \) and \( i = 0\, \) on the interval \( I_{x_0} = [0,x_{1}] \). This is an extension of the two dimensional fitted finite volume method presented in Huang et al. (2010).
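The following minimal sketch illustrates the mid-point face quadrature behind (16): a smooth placeholder integrand over one face of \({\mathcal{R}}_{i,j,k}\) is replaced by its value at the face centre times the face area, and compared with a fine reference quadrature. The face limits and the integrand are placeholders chosen only so the snippet runs.

```python
import numpy as np

g = lambda y, z: np.sin(y) * np.exp(-z)          # placeholder flux integrand
y_lo, y_hi, z_lo, z_hi = 0.4, 0.6, 0.9, 1.1      # one face of R_{i,j,k}
y_c, z_c = 0.5 * (y_lo + y_hi), 0.5 * (z_lo + z_hi)

# mid-point rule: value at the face centre times the face area h_{y_j} h_{z_k}
midpoint = g(y_c, z_c) * (y_hi - y_lo) * (z_hi - z_lo)

# reference value by a fine composite mid-point rule
M = 400
dy, dz = (y_hi - y_lo) / M, (z_hi - z_lo) / M
yy = y_lo + (np.arange(M) + 0.5) * dy
zz = z_lo + (np.arange(M) + 0.5) * dz
ref = np.sum(g(yy[:, None], zz[None, :])) * dy * dz

print(midpoint, ref)   # the two values agree up to the quadrature error
```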

Case I: For \( i\ge 1 \).

We set \( a_{11}(\tau , x, y, z, \alpha ) = {\overline{a}}_1(\tau , x, y, z, \alpha )\,x^2 \) and approximate the term \( \left( a_{11} \dfrac{\partial v}{\partial x}+ x\,b_1\,v \right) \) by solving the following two-point boundary value problem

$$\begin{aligned}&\left( {\overline{a}}_1(\tau , x_{i+1/2}, y_j, z_k, \alpha _{i,j,k})\,x \dfrac{ \partial v}{ \partial x}+ {b_1}(\tau , x_{i+1/2}, y_j, z_k, \alpha _{i,j,k})\,v \right) '= 0, \nonumber \\&v(x_{i},y_{j},z_k)= v_{i,j,k},\,\,\,\, v(x_{i+1}, y_{j}, z_k)= v_{i+1,j,k}, \end{aligned}$$
(17)

Integrating (17) yields the first-order linear equation

$$\begin{aligned} {\overline{a}}_1(\tau , x_{i+1/2}, y_j, z_k, \alpha _{i,j,k})\,x \dfrac{ \partial v}{ \partial x}+ {b_1}(\tau , x_{i+1/2}, y_j, z_k, \alpha _{i,j,k})\,v = C_1, \end{aligned}$$
(18)

where \( C_1 \) denotes an additive constant. As in Huang et al. (2010), we get

$$\begin{aligned} C_1 = \dfrac{{b_1}_{i+1/2,j,k}(\tau ,\alpha _{i,j,k})\,\left( x_{i+1}^{\beta _{i,j,k}(\tau )}\,v_{i+1,j,k}-x_{i}^{\beta _{i,j,k}(\tau )}\,v_{i,j,k} \right) }{x_{i+1}^{\beta _{i,j,k}(\tau )}-x_{i}^{\beta _{i,j,k}(\tau )}}. \end{aligned}$$
(19)

Therefore,

$$\begin{aligned}&a_{11}\,\dfrac{\partial v }{\partial x}+ a_{12}\,\dfrac{ \partial v }{ \partial y}+ a_{13}\,\dfrac{\partial v }{\partial z} +x\, b_1\,v \bigg |_{\left( x_{{i+1/2}},y_{j},z_k \right) } \nonumber \\&\quad \approx x_{i+1/2}\,\left( \dfrac{{b_1}_{i+1/2,j,k}(\tau ,\alpha _{i,j,k})\,\left( x_{i+1}^{\beta _{i,j,k}(\tau )}\,v_{i+1,j,k}-x_{i}^{\beta _{i,j,k}(\tau )}\,v_{i,j,k} \right) }{x_{i+1}^{\beta _{i,j,k}(\tau )}-x_{i}^{\beta _{i,j,k}(\tau )}} \right. \nonumber \\&\qquad \left. + {d_1}_{i,j,k}(\tau ,\alpha _{i,j,k})\,y_j\,z_k\,\dfrac{ \partial v }{\partial y} \bigg |_{\left( x_{i+1/2},y_{j},z_k \right) } + {d_2}_{i,j,k}(\tau ,\alpha _{i,j,k})\,y_j\,z_k\,\dfrac{\partial v }{\partial z} \bigg |_{\left( x_{i+1/2},y_{j},z_k \right) } \right) , \end{aligned}$$
(20)

where \( \beta _{i,j,k}(\tau ) =\dfrac{{b_1}_{i+1/2,j,k}(\tau ,\alpha _{i,j,k})}{{{\overline{a}}_1}_{i+1/2,j,k}(\tau ,\alpha _{i,j,k})} \), \( a_{12} = a_{21} = d_{1}(\tau ,x,y,z,\alpha )\,x\,y\,z \) and \( a_{13} = a_{31} = d_{2}(\tau ,x,y,z,\alpha )\,x\,y\,z \).

Note that in this deduction, we have assumed that \( {b_1}_{i+1/2,j,k}(\tau ,\alpha _{i,j,k}) \ne 0 \). Finally, we use the forward difference,

$$\begin{aligned} \dfrac{\partial v }{\partial y}\bigg |_{\left( x_{{i+1/2}},y_{j},z_k \right) } \approx \dfrac{v_{i,j+1,k}-v_{i,j,k}}{h_{y_j}},\,\,\,\dfrac{\partial v }{\partial z}\bigg |_{\left( x_{{i+1/2}},y_{j},z_k \right) } \approx \dfrac{v_{i,j,k+1}-v_{i,j,k}}{h_{z_k}}. \end{aligned}$$

We finally have

$$\begin{aligned}&\left[ a_{11}\,\dfrac{\partial v }{ \partial x}+ a_{12}\,\dfrac{ \partial v}{ \partial y}+ a_{13}\,\dfrac{\partial v}{ \partial z}+x\, b_1\,v \right] _{\left( x_{i+1/2},y_{j},z_k \right) } \cdot h_{y_{j}}\cdot h_{z_{k}} \nonumber \\&\quad \approx x_{i+1/2}\left( \dfrac{{b_1}_{i+1/2,j,k}(\tau ,\alpha _{i,j,k})\,\left( x_{i+1}^{\beta _{i,j,k}(\tau )}\,v_{i+1,j,k}-x_{i}^{\beta _{i,j,k}(\tau )}\,v_{i,j,k} \right) }{x_{i+1}^{\beta _{i,j,k}(\tau )}- x_{i}^{\beta _{i,j,k}(\tau )}}\right. \nonumber \\&\qquad \left. + {d_1}_{i,j,k}(\tau ,\alpha _{i,j,k})\,y_j\,z_k\,\dfrac{v_{i,j+1,k}-v_{i,j,k}}{h_{y_j}}+ {d_2}_{i,j,k}(\tau ,\alpha _{i,j,k})\,y_j\,z_k\,\dfrac{v_{i,j,k+1}-v_{i,j,k}}{h_{z_k}} \right) \cdot {h_{y_j}}\cdot {h_{z_k}}. \end{aligned}$$
(21)

Similarly, the second term in (14) can be approximated by

$$\begin{aligned}&\left[ a_{11}\,\dfrac{\partial v }{\partial x}+ a_{12}\,\dfrac{\partial v}{\partial y}+ a_{13}\,\dfrac{\partial v}{ \partial z}+ x\,b_1\,v \right] _{\left( x_{i-1/2},y_{j},z_k \right) } \cdot h_{y_{j}}\cdot h_{z_{k}} \nonumber \\&\quad \approx x_{i-1/2}\left( \dfrac{{b_1}_{i-1/2,j,k}(\tau ,\alpha _{i,j,k})\,\left( x_{i}^{\beta _{i-1,j,k}(\tau )}\,v_{i,j,k}-x_{i-1}^{\beta _{i-1,j,k}(\tau )}\,v_{i-1,j,k} \right) }{x_{i}^{\beta _{i-1,j,k}(\tau )}- x_{i-1}^{\beta _{i-1,j,k}(\tau )}} \right. \nonumber \\&\qquad \left. + {d_1}_{i,j,k}(\tau ,\alpha _{i,j,k})\,y_j\,z_k\,\dfrac{v_{i,j+1,k}-v_{i,j,k}}{h_{y_j}} + {d_2}_{i,j,k}(\tau ,\alpha _{i,j,k})\,y_j\,z_k\,\dfrac{v_{i,j,k+1}-v_{i,j,k}}{h_{z_k}} \right) \cdot {h_{y_j}}\cdot {h_{z_k}}. \end{aligned}$$
(22)
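A minimal sketch of the fitted two-point weights appearing in the first bracket of (21) and (22): given the local coefficients \({\overline{a}}_1\) and \(b_1\) at a cell face, the function below returns the weights multiplying \(v_{i,j,k}\) and \(v_{i+1,j,k}\); the limiting case \(b_1 \rightarrow 0\), used later in the proof of Theorem 1, is handled separately. All numerical values are placeholders.

```python
import numpy as np

def fitted_flux_weights(x_i, x_ip1, abar1, b1, eps=1e-14):
    """Weights (w_i, w_ip1) such that the fitted term
    b1 * (x_ip1**beta * v_ip1 - x_i**beta * v_i) / (x_ip1**beta - x_i**beta)
    equals w_ip1 * v_ip1 + w_i * v_i, with beta = b1 / abar1."""
    if abs(b1) < eps:                         # limit b1 -> 0 (see Theorem 1)
        w = abar1 / np.log(x_ip1 / x_i)
        return -w, w
    beta = b1 / abar1
    denom = x_ip1 ** beta - x_i ** beta
    return -b1 * x_i ** beta / denom, b1 * x_ip1 ** beta / denom

print(fitted_flux_weights(x_i=0.5, x_ip1=0.75, abar1=0.08, b1=0.05))
```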

Case II Approximation of the flux at \( i = 0 \) on the interval \( I_{x_0} = [0,x_1] \). Note that the analysis in Case I does not apply to the approximation of the flux on \( I_{x_0} \) because this is the degenerate zone. Therefore, we reconsider the following form

$$\begin{aligned}&\bigg ({{\overline{a}}_1}_{x_{1/2},j,k}(\tau ,\alpha _{1,j,k})\,x \dfrac{\partial v}{\partial x}+ {b_1}_{x_{1/2},j,k}(\tau ,\alpha )\,v \bigg )' \equiv C_2\,\,\,\mathbf{in} \,\,[0,x_1] \nonumber \\&v(x_0, y_j, z_k)= v_{0,j,k},\,\,\,\, v(x_1,y_j,z_k) = v_{1,j,k}, \end{aligned}$$
(23)

where \( C_2 \) is an unknown constant to be determined. Integrating (23), we find

$$\begin{aligned}&\left( {\overline{a}}_1(\tau ,\alpha _{1,j,k})\,x \dfrac{\partial v}{\partial x}+ {b_1}\,v\right) \bigg |_{\left( x_{1/2},y_{j},z_k \right) } \nonumber \\&\quad = \dfrac{1}{2}\left[ ({{\overline{a}}_1}_{x_{1/2},j,k}(\tau ,\alpha _{1,j,k})+{b_1}_{x_{1/2},j,k}(\tau ,\alpha _{1,j,k}))v_{1,j,k}\right. \nonumber \\&\qquad -\left. ({{\overline{a}}_1}_{x_{1/2},j,k}(\tau ,\alpha _{1,j,k})-{b_1}_{x_{1/2},j,k}(\tau ,\alpha _{1,j,k}))v_{0,j,k}\right] \end{aligned}$$
(24)

and deduce that

$$\begin{aligned}&\left[ a_{11}\,\dfrac{\partial v }{\partial x}+ a_{12}\,\dfrac{\partial v}{\partial y}+ a_{13}\,\dfrac{\partial v}{\partial z} +x\, b_1\,v \right] _{\left( x_{1/2},y_{j},z_k \right) } \cdot h_{y_j}\cdot h_{z_k}\nonumber \\&\quad \approx x_{1/2}\left( \dfrac{1}{2}\left[ ({{\overline{a}}_1}_{x_{1/2},j,k} (\tau ,\alpha _{1,j,k})+{b_1}_{x_{1/2},j,k}(\tau ,\alpha _{1,j,k}))\,v_{1,j,k}\right. \right. \nonumber \\&\qquad \left. \left. -({{\overline{a}}_1}_{x_{1/2},j,k}( \tau ,\alpha _{1,j,k})-{b_1}_{x_{1/2},j,k}(\tau ,\alpha _{1,j,k}))\,v_{0,j,k}\right] \right. \nonumber \\&\qquad \left. + {d_1}_{1,j,k}(\tau ,\alpha _{1,j,k})\,y_j\,z_k\,\dfrac{v_{1,j+1,k}-v_{1,j,k}}{h_{y_j}} \right. \nonumber \\&\qquad \left. + {d_2}_{1,j,k}(\tau ,\alpha _{1,j,k})\,y_j\,z_k\,\dfrac{v_{1,j,k+1}-v_{1,j,k}}{h_{z_k}} \right) \cdot h_{y_j}\cdot h_{z_k}. \end{aligned}$$
(25)

Remark 1

Notice that if \( I_{x}= [\zeta ,x_{\text {max}}]\) with \(\zeta \ne 0 \), the special treatment of the degenerate zone is not needed; we just apply the fitted finite volume method directly as for \( i \ge 1\).
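Analogously to the sketch after (22), the Case II weights at the degenerate face \(x_{1/2}\) follow directly from (24): the flux value is \(\tfrac{1}{2}[({\overline{a}}_1+b_1)v_{1,j,k}-({\overline{a}}_1-b_1)v_{0,j,k}]\). The coefficient values below are placeholders.

```python
def degenerate_flux_weights(abar1_half, b1_half):
    # weights of v_{0,j,k} and v_{1,j,k} in the degenerate-zone flux (24)
    w1 = 0.5 * (abar1_half + b1_half)    # multiplies v_{1,j,k}
    w0 = -0.5 * (abar1_half - b1_half)   # multiplies v_{0,j,k}
    return w0, w1

print(degenerate_flux_weights(abar1_half=0.08, b1_half=0.05))
```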

Case III For \( j\ge 1 \). For the third term in (14), we want to approximate the integral by a constant, i.e.,

$$\begin{aligned}&\int _{\left( x_{i-1/2},y_{j+1/2},z_{k-1/2} \right) }^{\left( x_{i+1/2}, y_{j+1/2}, z_{k+1/2} \right) }\left( a_{21}\,\dfrac{\partial v }{\partial x}+ a_{22}\,\dfrac{\partial v }{\partial y}+a_{23}\,\dfrac{\partial v }{\partial z}+ y\,b_2\,v \right) d x\,dz \nonumber \\&\quad \approx \left( a_{21}\,\dfrac{\partial v }{\partial x}+ a_{22}\,\dfrac{\partial v}{\partial y}+ a_{23}\,\dfrac{\partial v }{\partial z} + y\,b_2\,v \right) |_{\left( x_{i},y_{j+1/2},z_k \right) } \cdot h_{x_{i}}\cdot h_{z_{k}}. \end{aligned}$$
(26)

Following the same procedure for the case I of this section, we find that

$$\begin{aligned}&\left[ a_{21}\,\dfrac{\partial v }{\partial x}+ a_{22}\,\dfrac{\partial v}{\partial y}+ a_{23}\,\dfrac{\partial v}{ \partial z}+y\, b_2\,v \right] _{\left( x_{i},y_{j+1/2},z_k \right) } \cdot h_{x_{i}}\cdot h_{z_{k}} \nonumber \\&\quad \approx y_{j+1/2}\left( \dfrac{{b_2}_{i,j+1/2,k}(\tau ,\alpha _{i,j,k})\,\left( y_{j+1}^{{\beta _1}_{i,j,k}(\tau )}\,v_{i,j+1,k}-y_{j}^{{\beta _1}_{i,j,k}(\tau )}\,v_{i,j,k} \right) }{y_{j+1}^{{\beta _1}_{i,j,k}(\tau )}- y_{j}^{{\beta _1}_{i,j,k}(\tau )}} \right. \nonumber \\&\quad \left. + {d_1}_{i,j,k}(\tau ,\alpha _{i,j,k})\,x_i\,z_k\,\dfrac{v_{i+1,j,k}-v_{i,j,k}}{h_{x_i}} + {d_3}_{i,j,k}(\tau ,\alpha _{i,j,k})\,x_i\,z_k\,\dfrac{v_{i,j,k+1}-v_{i,j,k}}{h_{z_k}} \right) \cdot {h_{x_i}}\cdot {h_{z_k}}, \end{aligned}$$
(27)

where \( {\beta _1}_{i,j,k}(\tau ) =\dfrac{{b_2}_{i,j+1/2,k}(\tau ,\alpha _{i,j,k})}{{{\overline{a}}_2}_{i,j+1/2,k}(\tau ,\alpha _{i,j,k})} \),   \( a_{22}(\tau , x, y, z,\alpha ) = {{\overline{a}}_2}(\tau ,x,y ,z,\alpha )\,y^2, \) and \( a_{23} = a_{32} = d_{3}(\tau ,x,y,z,\alpha )\,x\,y\,z\). Similarly, the fourth term in (14) can be approximated by

$$\begin{aligned}&\left[ a_{21}\,\dfrac{\partial v }{\partial x}+ a_{22}\,\dfrac{\partial v}{\partial y}+ a_{23}\,\dfrac{\partial v}{\partial z}+y\, b_2\,v \right] _{\left( x_{i},y_{j-1/2},z_k \right) } \cdot h_{x_{i}}\cdot h_{z_{k}} \nonumber \\&\quad \approx y_{j-1/2}\left( \dfrac{{b_2}_{i,j-1/2,k}(\tau ,\alpha _{i,j,k})\,\left( y_{j}^{{\beta _1}_{i,j-1,k}(\tau )}\,v_{i,j,k}-y_{j-1}^{{\beta _1}_{i,j-1,k}(\tau )}\,v_{i,j-1,k} \right) }{y_{j}^{{\beta _1}_{i,j-1,k}(\tau )}- y_{j-1}^{{\beta _1}_{i,j-1,k}(\tau )}} \right. \nonumber \\&\qquad \left. + {d_1}_{i,j,k}(\tau ,\alpha _{i,j,k})\,x_i\,z_k\,\dfrac{v_{i+1,j,k}-v_{i,j,k}}{h_{x_i}}+ {d_3}_{i,j,k}(\tau ,\alpha _{i,j,k})\,x_i\,z_k\,\dfrac{v_{i,j,k+1}-v_{i,j,k}}{h_{z_k}} \right) \cdot {h_{x_i}}\cdot {h_{z_k}}. \end{aligned}$$
(28)

Case IV Approximation of the flux at \( I_{y_0} \), i.e. for \(j= 0 \). Using the same procedure as for the approximation of the flux at \( I_{x_0} \), we deduce that

$$\begin{aligned}&\left[ a_{21}\,\dfrac{\partial v }{ \partial x}+ a_{22}\,\dfrac{ \partial v}{\partial y}+ a_{23}\,\dfrac{\partial v}{\partial z} + y\,b_2\,v \right] _{\left( x_{i},y_{1/2},z_k \right) } \cdot h_{x_i}\cdot h_{z_k} \nonumber \\&\quad \approx y_{1/2}\left( \dfrac{1}{2}\left[ ({{\overline{a}}_2}_{i,y_{1/2},k}(\tau ,\alpha _{i,1,k})+{b_2}_{i,y_{1/2},k}(\tau ,\alpha _{i,1,k}))v_{i,1,k}\right. \right. \nonumber \\&\qquad \left. \left. -({{\overline{a}}_2}_{i,y_{1/2},k}(\tau ,\alpha _{i,1,k})-{b_2}_{i,y_{1/2},k}(\tau ,\alpha _{i,1,k}))\,v_{i,0,k}\right] \right. \nonumber \\&\qquad \left. + {d_1}_{i,1,k}(\tau ,\alpha _{i,1,k})\,x_i\,z_k\,\dfrac{v_{i+1,1,k}-v_{i,1,k}}{h_{x_i}} \right. \nonumber \\&\qquad \left. + {d_3}_{i,1,k}(\tau ,\alpha _{i,1,k})\,x_i\,z_k\,\dfrac{v_{i,1,k+1}-v_{i,1,k}}{h_{z_k}} \right) \cdot h_{x_i}\cdot h_{z_k}. \end{aligned}$$
(29)

For the fifth term in (14), we want to approximate the integral by a constant. Following the same procedure as in Cases I and III, we have

$$\begin{aligned}&\left[ a_{31}\,\dfrac{\partial v }{\partial x}+ a_{32}\,\dfrac{\partial v}{\partial y}+a_{33}\,\dfrac{\partial v}{\partial z}+z\, b_3\,v \right] _{\left( x_{i},y_{j},z_{k+1/2} \right) } \cdot h_{x_{i}}\cdot h_{y_{j}} \nonumber \\&\quad \approx z_{k+1/2}\left( \dfrac{{b_3}_{i,j,k+1/2}(\tau ,\alpha _{i,j,k})\,\left( z_{k+1}^{{\beta _2}_{i,j,k}(\tau )}\,v_{i,j,k+1}-z_{k}^{{\beta _2}_{i,j,k}(\tau )}\,v_{i,j,k} \right) }{z_{k+1}^{{\beta _2}_{i,j,k}(\tau )}- z_{k}^{{\beta _2}_{i,j,k}(\tau )}} \right. \nonumber \\&\qquad \left. + {d_2}_{i,j,k}(\tau ,\alpha _{i,j,k})\,x_i\,y_j\,\dfrac{v_{i+1,j,k}-v_{i,j,k}}{h_{x_i}}+ {d_3}_{i,j,k}(\tau ,\alpha _{i,j,k})\,x_i\,y_j\,\dfrac{v_{i,j+1,k}-v_{i,j,k}}{h_{y_j}} \right) \cdot {h_{x_i}}\cdot {h_{y_j}}, \end{aligned}$$
(30)

where \( {\beta _2}_{i,j,k}(\tau ) =\dfrac{{b_3}_{i,j,k+1/2}(\tau ,\alpha _{i,j,k})}{\bar{a_3}_{i,j,k+1/2}(\tau ,\alpha _{i,j,k})} \),   \( a_{33}(\tau , x, y, z,\alpha ) = {\overline{a}}_3(\tau ,x,y ,z,\alpha )\,z^2 \).

Similarly, the sixth term in (14) can be approximated by

$$\begin{aligned}&\left[ a_{31}\,\dfrac{\partial v }{\partial x}+ a_{32}\,\dfrac{ \partial v}{ \partial y}+a_{33}\,\dfrac{\partial v}{\partial z}+ z\,b_3\,v \right] _{\left( x_{i},y_{j},z_{k-1/2} \right) } \cdot h_{x_{i}}\cdot h_{y_{j}} \nonumber \\&\quad \approx z_{k-1/2}\left( \dfrac{{b_3}_{i,j,k-1/2}(\tau ,\alpha _{i,j,k})\,\left( z_{k}^{{\beta _2}_{i,j,k-1}(\tau )}\,v_{i,j,k}-z_{k-1}^{{{\beta _2}_{i,j,k-1}(\tau )}}\,v_{i,j,k-1} \right) }{z_{k}^{{\beta _2}_{i,j,k-1}(\tau )}- z_{k-1}^{{{\beta _2}_{i,j,k-1}(\tau )}}} \right. \nonumber \\&\qquad \left. +{d_2}_{i,j,k}(\tau ,\alpha _{i,j,k})\,x_i\,y_j\,\dfrac{v_{i+1,j,k}-v_{i,j,k}}{h_{x_i}}+ {d_3}_{i,j,k}(\tau ,\alpha _{i,j,k})\,x_i\,y_j\,\dfrac{v_{i,j+1,k}-v_{i,j,k}}{h_{y_j}} \right) \cdot {h_{x_i}}\cdot {h_{y_j}}. \end{aligned}$$
(31)

Case V Approximation of the flux at \( I_{z_0} \), i.e. for \( k = 0 \). Using the same procedure as for the approximation of the flux at \( I_{x_0} \), we deduce that

$$\begin{aligned}&\left[ a_{31}\,\dfrac{\partial v }{\partial x}+ a_{32}\,\dfrac{ \partial v}{\partial y}+ a_{33}\,\dfrac{\partial v}{\partial z} + z\,b_3\,v \right] _{\left( x_{i},y_{j},z_{1/2} \right) } \cdot h_{x_i}\cdot h_{y_j} \nonumber \\&\quad \approx z_{1/2}\left( \dfrac{1}{2}\left[ (\bar{a_3}_{i,j,z_{1/2}}(\tau ,\alpha _{i,j,1}) + {b_3}_{i,j,z_{1/2}}(\tau ,\alpha _{i,j,1}))\,v_{i,j,1} \right. \right. \nonumber \\&\qquad \left. \left. -({{\overline{a}}_3}_{i,j,z_{1/2}}(\tau ,\alpha _{i,j,1}) - {b_3}_{i,j,z_{1/2}}(\tau ,\alpha _{i,j,1}))v_{i,j,0}\right] \right. \nonumber \\&\qquad \left. + {d_2}_{i,j,1}(\tau ,\alpha _{i,j,1})\,x_i\,y_j\,\dfrac{v_{i+1,j,1}-v_{i,j,1}}{h_{x_i}} + {d_3}_{i,j,1}(\tau ,\alpha _{i,j,1}) \,x_i\,y_j\,\dfrac{v_{i,j+1,1}-v_{i,j,1}}{h_{y_j}} \right) \cdot h_{x_i}\cdot h_{y_j}. \end{aligned}$$

Replacing the flux by its approximations, Eq. (13) becomes, for \( i = 1,\ldots ,N_1-1 \),  \( j = 1,\ldots ,N_2-1 \),   \( k = 1,\ldots ,N_3-1 \) and \( N =(N_1-1)\times (N_2-1)\times (N_3-1) \),

$$\begin{aligned} {\left\{ \begin{array}{ll} -\dfrac{d\, v_{i,j,k}(\tau )}{d\, \tau } + \underset{\alpha _{i,j,k} \in {\mathcal {A}}^{N}}{\sup }\, \left[ e_{i-1,j,k}^{i,j,k}\, v_{i-1,j,k} +e_{i,j,k}^{i,j,k}\, v_{i,j,k}+ e_{i+1,j,k}^{i,j,k}\, v_{i+1,j,k} \right. \\ \left. + e_{i,j-1,k}^{i,j,k}\, v_{i,j-1,k}+e_{i,j+1,k}^{i,j,k}\, v_{i,j+1,k}+ e_{i,j,k-1}^{i,j,k}\, v_{i,j,k-1}+e_{i,j,k+1}^{i,j,k}\, v_{i,j,k+1} \right] = 0,\\ ~~~\text{ with }~~ \,\,\,\, \mathbf{v} (0) \,\,\,\text {given}, \end{array}\right. } \end{aligned}$$
(32)

This can be rewritten as a system of Ordinary Differential Equations (ODEs) coupled with an optimization problem

$$\begin{aligned} {\left\{ \begin{array}{ll} \dfrac{d\, \mathbf{v} (\tau )}{d\, \tau } =\underset{\alpha \in {\mathcal {A}}^N}{\sup }\,\left[ A (\tau ,\alpha )\,\mathbf{v} (\tau ) + G(\tau , \alpha ) \right] \\ ~~~\text{ with }~~ \,\,\,\, \mathbf{v} (0) \,\,\,\text {given}, \end{array}\right. } \end{aligned}$$
(33)

or

$$\begin{aligned} {\left\{ \begin{array}{ll} \dfrac{d\,\mathbf{v} (\tau )}{d\,\tau } + \underset{\alpha \in {\mathcal {A}}^N}{\inf }\,\left[ E (\tau , \alpha )\,\mathbf{v} (\tau ) + F(\tau ,\alpha ) \right] = 0, \\ ~~~\text{ with }~~ \,\,\,\, \mathbf{v} (0)\,\,\,\text {given}, \end{array}\right. } \end{aligned}$$
(34)

where \( A (\tau ,\alpha ) = - E (\tau ,\alpha )\) is an \(N\times N\) matrix, \( {\mathcal {A}}^{N} = \underset{(N_{1}-1)\times (N_2-1)\times (N_3-1)}{\underbrace{{\mathcal {A}} \times {\mathcal {A}}\times \cdots \times {\mathcal {A}}}} \),   \(G (\tau , \alpha ) =-F(\tau , \alpha ) \) depends on the boundary conditions and the term c, and \(\mathbf{v} = \left( v_{i,j,k}\right) \). By setting \(n_1=N_1-1,\; n_2=N_2-1,\; n_3=N_3-1, \;\; I:=I (i,j,k)= i + (j-1)n_1 +(k-1)n_1 n_2\) and \(J:=J (i',j',k')= i' + (j'-1)n_1 +(k'-1)n_1 n_2 \), we have \(E (\tau , \alpha ) (I,J)= \left( e_{i',j',k'}^{i,j,k}\right) \), \(i', i= 1,\ldots , N_1-1 \),    \( j',j = 1,\ldots , N_2-1 \)   and   \(k', k = 1,\ldots , N_3-1 \), where the coefficients are defined by

$$\begin{aligned} e_{i+1,j,k}^{i,j,k}= & {} - \dfrac{{d_2}_{i,j,k}(\tau , \alpha _{i,j,k})\,x_i\,y_j}{h_{x_i}} - \dfrac{{d_1}_{i,j,k}(\tau , \alpha _{i,j,k})\,x_i\,z_k}{h_{x_i}}\nonumber \\&- x_{i+1/2}\dfrac{{b_1}_{i+1/2,j,k}(\tau , \alpha _{i,j,k})\,x_{i+1}^{\beta _{i,j,k}(\tau )} }{h_{x_i}\left( x_{i+1}^{\beta _{i,j,k}(\tau )}- x_{i}^{\beta _{i,j,k}(\tau )}\right) },\nonumber \\ e_{i-1,j, k}^{i,j,k}= & {} - x_{i-1/2}\dfrac{{b_1}_{i-1/2,j,k}(\tau , \alpha _{i,j,k})\,x_{i-1}^{\beta _{i-1,j,k}(\tau )}}{h_{x_i}\left( x_{i}^{\beta _{i-1,j,k}(\tau )}- x_{i-1}^{\beta _{i-1,j,k}(\tau )}\right) },\nonumber \\ e_{i,j+1,k}^{i,j,k}= & {} - \dfrac{{d_1}_{i,j,k}(\tau , \alpha _{i,j,k})\,y_j\,z_k}{h_{y_j}} - \dfrac{{d_3}_{i,j,k}(\tau , \alpha _{i,j,k})\,x_i\,y_j}{h_{y_j}} \nonumber \\&- y_{j+1/2}\dfrac{{b_2}_{i,j+1/2,k}(\tau , \alpha _{i,j,k})\,y_{j+1}^{{\beta _1}_{i,j,k}(\tau )} }{h_{y_j}\left( y_{j+1}^{{\beta _1}_{i,j,k}(\tau )}- y_{j}^{{\beta _1}_{i,j,k}(\tau )}\right) },\nonumber \\ e_{i,j-1,k}^{i,j,k}= & {} - y_{j-1/2}\dfrac{{b_2}_{i,j-1/2,k}(\tau , \alpha _{i,j,k})\,y_{j-1}^{{\beta _1}_{i,j-1,k}(\tau )}}{h_{y_j}\left( y_{j}^{{\beta _1}_{i,j-1,k}(\tau )}- y_{j-1}^{{\beta _1}_{i,j-1,k}(\tau )}\right) },\nonumber \\ e_{i,j,k+1}^{i,j,k}= & {} - \dfrac{{d_2}_{i,j,k}(\tau , \alpha _{i,j,k})\,y_j\,z_k}{h_{z_k}} - \dfrac{{d_3}_{i,j,k}(\tau , \alpha _{i,j,k})x_i\,z_k}{h_{z_k}} \nonumber \\&- z_{k+1/2}\dfrac{{b_3}_{i,j,k+1/2}(\tau , \alpha _{i,j,k})\,z_{k+1}^{{\beta _2}_{i,j,k}(\tau )} }{h_{z_k}\left( z_{k+1}^{{\beta _2}_{i,j,k}(\tau )}- z_{k}^{{\beta _2}_{i,j,k}(\tau )}\right) },\nonumber \\ e_{i,j,k-1}^{i,j,k}= & {} - z_{k-1/2}\dfrac{{b_3}_{i,j,k-1/2}(\tau , \alpha _{i,j,k})\,z_{k-1}^{{\beta _2}_{i,j,k-1}(\tau )} }{h_{z_k}\left( z_{k}^{{\beta _2}_{i,j,k-1}(\tau )}- z_{k-1}^{{\beta _2}_{i,j,k-1}(\tau )}\right) }, \end{aligned}$$
(35)

and

$$\begin{aligned} e_{i,j,k}^{i,j,k}= & {} x_{i-1/2}\dfrac{{b_1}_{i-1/2,j,k}(\tau , \alpha _{i,j,k})\,x_{i}^{\beta _{i-1,j,k}(\tau )}}{h_{x_i}\left( x_{i}^{\beta _{i-1,j,k}(\tau )}- x_{i-1}^{\beta _{i-1,j,k}(\tau )}\right) } + y_{j-1/2}\dfrac{{b_2}_{i,j-1/2,k}(\tau , \alpha _{i,j,k})\,y_{j}^{{\beta _1}_{i,j-1,k}(\tau )}}{h_{y_j}\left( y_{j}^{{\beta _1}_{i,j-1,k}(\tau )}- y_{j-1}^{{\beta _1}_{i,j-1,k}(\tau )}\right) } \nonumber \\&+\dfrac{{d_3}_{i,j,k}(\tau , \alpha _{i,j,k})\,x_i\,y_j}{h_{y_j}} +\dfrac{{d_3}_{i,j,k}(\tau , \alpha _{i,j,k})\,x_i\,z_k}{h_{z_k}} + \dfrac{{d_2}_{i,j,k}(\tau , \alpha _{i,j,k})\,x_i\,y_j}{h_{x_i}}\nonumber \\&- c_{i,j,k}(\tau , \alpha _{i,j,k}) +\dfrac{{d_1}_{i,j,k}(\tau , \alpha _{i,j,k})\,y_j\,z_k}{h_{y_j}}+\dfrac{{d_2}_{i,j,k}(\tau ,\alpha _{i,j,k})\,y_j\,z_k}{h_{z_k}}+\dfrac{{d_1}_{i,j,k}(\tau ,\alpha _{i,j,k})\,x_i\,z_k}{h_{x_i}}\nonumber \\&+ z_{k-1/2}\dfrac{{b_3}_{i,j,k-1/2}(\tau , \alpha _{i,j,k})\,z_{k}^{{\beta _2}_{i,j,k-1}(\tau )} }{h_{z_k}\left( z_{k}^{{\beta _2}_{i,j,k-1}(\tau )}- z_{k-1}^{{\beta _2}_{i,j,k-1}(\tau )}\right) }+z_{k+1/2}\dfrac{{b_3}_{i,j,k+1/2}(\tau , \alpha _{i,j,k})\,z_{k}^{{\beta _2}_{i,j,k}(\tau )} }{h_{z_k}\left( z_{k+1}^{{\beta _2}_{i,j,k}(\tau )}- z_{k}^{{\beta _2}_{i,j,k}(\tau )}\right) } \nonumber \\&+ x_{i+1/2}\dfrac{{b_1}_{i+1/2,j,k}(\tau , \alpha _{i,j,k})\,x_{i}^{\beta _{i,j,k}(\tau )} }{h_{x_i}\left( x_{i+1}^{\beta _{i,j,k}(\tau )}- x_{i}^{\beta _{i,j,k}(\tau )}\right) }+y_{j+1/2}\dfrac{{b_2}_{i,j+1/2,k}(\tau , \alpha _{i,j,k})\,y_{j}^{{\beta _1}_{i,j,k}(\tau )} }{h_{y_j}\left( y_{j+1}^{{\beta _1}_{i,j,k}(\tau )}- y_{j}^{{\beta _1}_{i,j,k}(\tau )}\right) } \end{aligned}$$
(36)

for \( i = 2,\ldots , N_1-1 \), \( j = 2,\ldots , N_2-1 \) and \( k = 2,\ldots , N_3-1 \) and

$$\begin{aligned} e_{0,j,k}^{1,j,k}= & {} - \dfrac{1}{2\,x_2 } \,x_{1}(\bar{a_1}_{x_{1/2},j,k}(\tau ,\alpha _{1,j,k})- {b_1}_{x_{1/2},j,k}(\tau ,\alpha _{1,j,k}))\,v_{0,j,k},\nonumber \\ e_{1,j,k}^{1,j,k}= & {} \dfrac{1}{2\,x_2 } \,x_{1}( \bar{a_1}_{x_{1/2},j,k}(\tau ,\alpha _{1,j,k})+ {b_1}_{x_{1/2},j,k}(\tau ,\alpha _{1,j,k})) - \dfrac{1}{3}\,c_{1,j,k}(\tau ,\alpha _{1,j,k}) \nonumber \\&+\dfrac{{d_2}_{1,j,k}(\tau ,\alpha _{1,j,k})\,x_1\,y_j}{h_{x_1}} + \dfrac{{d_1}_{1,j,k}(\tau ,\alpha _{1,j,k})\,x_1\,z_k}{h_{x_1}}\nonumber \\&+ x_{1+1/2}\dfrac{{b_1}_{1+1/2,j,k}(\tau ,\alpha _{1,j,k})\,x_{1}^{\beta _{1,j,k}(\tau )} }{h_{x_1}\left( x_{2}^{\beta _{1,j,k}(\tau )}- x_{1}^{\beta _{1,j,k}(\tau )}\right) }, \nonumber \\ e_{2,j,k}^{1,j,k}= & {} -\dfrac{{d_2}_{1,j,k}(\tau ,\alpha _{1,j,k})\,x_1\,y_j}{h_{x_1}} - \dfrac{{d_1}_{1,j,k}(\tau ,\alpha _{1,j,k})\,x_1\,z_k}{h_{x_1}} \nonumber \\&- x_{1+1/2}\dfrac{{b_1}_{1+1/2,j,k}(\tau ,\alpha _{1,j,k})\,x_{2}^{\beta _{1,j,k}(\tau )} }{h_{x_1}\left( x_{2}^{\beta _{1,j,k}(\tau )}- x_{1}^{\beta _{1,j,k}(\tau )}\right) }, \nonumber \\ e_{i,0,k}^{i,1,k}= & {} -\dfrac{1}{2\,y_2 }\,y_1( \bar{a_2}_{i,y_{1/2},k}(\tau ,\alpha _{i,1,k})- {b_2}_{i,y_{1/2},k}(\tau ,\alpha _{i,1,k})) \,v_{i,0,k},\nonumber \\ e_{i,1,k}^{i,1,k}= & {} \dfrac{1}{2\,y_2 } \,y_1 ( \bar{a_2}_{i,y_{1/2},k}(\tau ,\alpha _{i,1,k})+ {b_2}_{i,y_{1/2},k}(\tau ,\alpha _{i,1,k})) - \dfrac{1}{3}\,c_{i,1,k}(\tau ,\alpha _{i,1,k}) \nonumber \\&+ \dfrac{{d_1}_{i,1,k}(\tau ,\alpha _{i,1,k})\,y_1\,z_k}{h_{y_1}} + \dfrac{{d_3}_{i,1,k}(\tau ,\alpha _{i,1,k})\,x_i\,y_1}{h_{y_1}}\nonumber \\&+y_{1+1/2}\dfrac{{b_2}_{i,1+1/2,k}(\tau ,\alpha _{i,1,k})\,y_{1}^{{\beta _1}_{i,1,k}(\tau )} }{h_{y_1}\left( y_{2}^{{\beta _1}_{i,1,k}(\tau )}- y_{1}^{{\beta _1}_{i,1,k}(\tau )}\right) },\end{aligned}$$
(37)
$$\begin{aligned} e_{i,2,k}^{i,1,k}= & {} -\dfrac{{d_1}_{i,1,k}(\tau ,\alpha _{i,1,k})\,y_1\,z_k}{h_{y_1}} - \dfrac{{d_3}_{i,1,k}(\tau ,\alpha _{i,1,k})\,x_i\,y_1}{h_{y_1}}\nonumber \\&- y_{1+1/2}\dfrac{{b_2}_{i,1+1/2,k}(\tau ,\alpha _{i,1,k})\,y_{2}^{{\beta _1}_{i,1,k}(\tau )} }{h_{y_1}\left( y_{2}^{{\beta _1}_{i,1,k}(\tau )}- y_{1}^{{\beta _1}_{i,1,k}(\tau )}\right) },\end{aligned}$$
(38)
$$\begin{aligned} e_{i,j,0}^{i,j,1}= & {} - \dfrac{1}{2\,z_2 } \,z_{1}(\bar{a_3}_{i,j,z_{1/2}}(\tau ,\alpha _{i,j,1})- {b_3}_{i,j,z_{1/2}}(\tau ,\alpha _{i,j,1}))\,v_{i,j,0},\nonumber \\ e_{i,j,1}^{i,j,1}= & {} \dfrac{1}{2\,z_2 } \,z_{1}( \bar{a_3}_{i,j,z_{1/2}}(\tau ,\alpha _{i,j,1}) + {b_3}_{i,j,z_{1/2}}(\tau ,\alpha _{i,j,1})) \nonumber \\&- \dfrac{1}{3}\,c_{i,j,1}(\tau ,\alpha _{i,j,1}) + \dfrac{{d_2}_{i,j,1}(\tau ,\alpha _{i,j,1})\,y_j\,z_1}{h_{z_1}} \nonumber \\&+ \dfrac{{d_3}_{i,j,1}(\tau ,\alpha _{i,j,1})\,x_i\,z_1}{h_{z_1}}+ z_{1+1/2}\dfrac{{b_3}_{i,j,1+1/2}(\tau ,\alpha _{i,j,1})\,z_{1}^{{\beta _2}_{i,j,1}(\tau )} }{h_{z_1}\left( z_{2}^{{\beta _2}_{i,j,1}(\tau )}- z_{1}^{{\beta _2}_{i,j,1}(\tau )}\right) },\nonumber \\ e_{i,j,2}^{i,j,1}= & {} -\dfrac{{d_2}_{i,j,1}(\tau ,\alpha _{i,j,1})\,y_j\,z_1}{h_{z_1}} - \dfrac{{d_3}_{i,j,1}(\tau ,\alpha _{i,j,1})\,x_i\,z_1}{h_{z_1}}\nonumber \\&- z_{1+1/2}\dfrac{{{b_3}_{i,j,1+1/2}(\tau ,\alpha _{i,j,1})}\,z_{2}^{{\beta _2}_{i,j,1}(\tau )} }{h_{z_1}\left( z_{2}^{{\beta _2}_{i,j,1}(\tau )}- z_{1}^{{\beta _2}_{i,j,1}(\tau )}\right) }. \end{aligned}$$
(39)

G collects the given boundary terms \( v_{0,j,k},\, v_{i,0,k}, \,v_{i,j,0},\, v_{N_1,j,k}, \, v_{i,N_2,k}\, \) and   \(v_{i,j,N_3} \) for \( i = 1,\ldots , N_1-1 \),  \( j = 1,\ldots , N_2-1 \)  and  \( k = 1,\ldots , N_3-1 \).
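The following minimal sketch shows how the lexicographic index \(I(i,j,k)= i + (j-1)n_1 +(k-1)n_1 n_2\) places the seven-point stencil coefficients \(e_{i',j',k'}^{i,j,k}\) of (35)–(39) into the matrix \(E(\tau,\alpha)\). The coefficient callback used here returns placeholder values only; in the scheme it would evaluate the formulas above for the current \(\tau\) and \(\alpha_{i,j,k}\).

```python
import numpy as np

def global_index(i, j, k, n1, n2):
    # 1-based node indices i = 1..n1, j = 1..n2, k = 1..n3, as in the paper;
    # the returned value is the corresponding 0-based row/column of E.
    return (i - 1) + (j - 1) * n1 + (k - 1) * n1 * n2

def assemble_E(n1, n2, n3, stencil_coeff):
    N = n1 * n2 * n3
    E = np.zeros((N, N))
    offsets = [(0, 0, 0), (1, 0, 0), (-1, 0, 0),
               (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for k in range(1, n3 + 1):
        for j in range(1, n2 + 1):
            for i in range(1, n1 + 1):
                row = global_index(i, j, k, n1, n2)
                for di, dj, dk in offsets:
                    ip, jp, kp = i + di, j + dj, k + dk
                    if 1 <= ip <= n1 and 1 <= jp <= n2 and 1 <= kp <= n3:
                        col = global_index(ip, jp, kp, n1, n2)
                        E[row, col] += stencil_coeff(i, j, k, di, dj, dk)
    return E

# placeholder coefficients: positive diagonal, non-positive off-diagonal entries
coeff = lambda i, j, k, di, dj, dk: 6.0 if (di, dj, dk) == (0, 0, 0) else -1.0
E = assemble_E(3, 3, 3, coeff)
print(E.shape, bool(np.all(np.diag(E) > 0)))
```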

Theorem 1

Assume that the coefficients of A given by (10) are positive and \(c<0\).

Let

$$\begin{aligned} h = \underset{{\underset{k = 1,\ldots ,N_3-1}{\underset{j = 1,\ldots ,N_2-1,}{i = 1,\ldots ,N_1-1}}}}{\max } \{h_{x_i},\, h_{y_j},\,\,h_{z_k}\}, \end{aligned}$$
(40)

If h is sufficiently small, then the matrix \(E (\tau ,\alpha )\) in the system (34) is an M-matrix for any \( \alpha \,\in \,{\mathcal {A}}^{N}\).

Proof

Let us show that \(E(\tau ,\alpha ) \) has positive diagonals, non-positive off-diagonals, and is diagonally dominant. We first note that

$$\begin{aligned}&\dfrac{{b_1}_{i+1/2,j,k}(\tau ,\alpha _{i,j,k})}{x_{i+1}^{\beta _{i,j,k}(\tau )}- x_{i}^{\beta _{i,j,k}(\tau )}} = \dfrac{\bar{a_1}_{i,j,k}(\tau ,\alpha _{i,j,k})\,\beta _{i,j,k}(\tau ) }{x_{i+1}^{\beta _{i,j,k}(\tau )}- x_{i}^{\beta _{i,j,k}(\tau )}}> 0,\nonumber \\&\dfrac{{b_2}_{i,j+1/2,k}(\tau ,\alpha _{i,j,k})}{y_{j+1}^{{\beta _1}_{i,j,k}(\tau )}- y_{j}^{{\beta _1}_{i,j,k}(\tau )}} = \dfrac{\bar{a_2}_{i,j,k}(\tau ,\alpha _{i,j,k})\,{\beta _1}_{i,j,k}(\tau ) }{y_{j+1}^{{\beta _1}_{i,j,k}(\tau )}- y_{j}^{{\beta _1}_{i,j,k}(\tau )}}> 0,\nonumber \\&\dfrac{{b_3}_{i,j,k+1/2}(\tau ,\alpha _{i,j,k})}{z_{k+1}^{{\beta _2}_{i,j,k}(\tau )}-z_{k}^{{\beta _2}_{i,j,k}(\tau )}}=\dfrac{\bar{a_3}_{i,j,k}(\tau ,\alpha _{i,j,k})\,{\beta _2}_{i,j,k}(\tau ) }{z_{k+1}^{{\beta _2}_{i,j,k}(\tau )}- z_{k}^{{\beta _2}_{i,j,k}(\tau )}} > 0, \end{aligned}$$
(41)

for \( i = 1,\ldots , N_1-1 \), \( j = 1,\ldots , N_2-1 \), \( k = 1,\ldots , N_3-1 \) and all \( {b_1}_{i+1/2,j,k}(\tau ,\alpha _{i,j,k}) \ne 0,\,\, {b_2}_{i,j+1/2,k}(\tau ,\alpha _{i,j,k}) \ne 0, \,\,{b_3}_{i,j,k+1/2}(\tau ,\alpha _{i,j,k}) \ne 0\) with \( \bar{a_1}_{i,j,k}(\tau ,\alpha _{i,j,k}) > 0 \), \( \bar{a_2}_{i,j,k}(\tau ,\alpha _{i,j,k}) > 0\) and \( \bar{a_3}_{i,j,k}(\tau ,\alpha _{i,j,k}) >0 \).

This also holds when \( {b_1}_{i+1/2,j,k}(\tau ,\alpha _{i,j,k})\rightarrow 0 \), \({b_2}_{i,j+1/2,k}(\tau ,\alpha _{i,j,k})\rightarrow 0 \) and \( {b_3}_{i,j,k+1/2}(\tau ,\alpha _{i,j,k})\rightarrow 0 \). Indeed, in this limit \( \beta _{i,j,k}(\tau )\rightarrow 0 \), so that \( x^{\beta _{i,j,k}(\tau )} = e^{\beta _{i,j,k}(\tau )\ln x} \approx 1 + \beta _{i,j,k}(\tau )\ln x \), and

$$\begin{aligned} \lim _{{b_1}_{i+1/2,j,k}(\tau ,\alpha )\rightarrow 0} \dfrac{{b_1}_{i+1/2,j,k}(\tau ,\alpha )}{x_{i+1}^{\beta _{i,j,k}(\tau )}- x_{i}^{\beta _{i,j,k}(\tau )}}= & {} \dfrac{{b_1}_{i+1/2,j,k}(\tau ,\alpha ) }{e^{\beta _{i,j,k}(\tau )\ln (x_{i+1})}-e^{\beta _{i,j,k}(\tau )\ln (x_{i})}}\nonumber \\= & {} \dfrac{{b_1}_{i+1/2,j,k}(\tau ,\alpha ) }{\beta _{i,j,k}(\tau )\ln (x_{i+1})-\beta _{i,j,k}(\tau )\ln (x_{i})}\nonumber \\= & {} \bar{a_1}_{i+1/2,j,k}(\tau ,\alpha )\ln \left( \dfrac{ x_{i+1}}{x_{i}}\right) ^{-1}> 0,\\ \lim _{{b_1}_{i-1/2,j,k}(\tau ,\alpha )\rightarrow 0} \dfrac{{b_1}_{i-1/2,j,k}(\tau )}{x_{i}^{\beta _{i-1,j,k}(\tau )}- x_{i-1}^{\beta _{i-1,j,k}(\tau )}}= & {} \dfrac{{b_1}_{i-1/2,j,k}(\tau ,\alpha ) }{e^{\beta _{i-1,j,k}(\tau )\ln (x_{i})}-e^{\beta _{i-1,j,k}(\tau )\ln (x_{i-1})}}\nonumber \\= & {} \dfrac{{b_1}_{i-1/2,j,k}(\tau ,\alpha ) }{\beta _{i-1,j,k}(\tau )\ln (x_{i})-\beta _{i-1,j,k}(\tau )\ln (x_{i-1})}\nonumber \\= & {} \bar{a_1}_{i-1/2,j,k}(\tau ,\alpha ) \ln \left( \dfrac{x_{i}}{x_{i-1}}\right) ^{-1} > 0, \end{aligned}$$

Similarly,

$$\begin{aligned}&\lim _{{b_2}_{i,j+1/2,k}(\tau ,\alpha )\rightarrow 0} \dfrac{{b_2}_{i,j+1/2,k}(\tau )}{y_{j+1}^{{\beta _1}_{i,j,k}(\tau )}- y_{j}^{{\beta _1}_{i,j,k}(\tau )}}>0,\\&\lim _{{b_2}_{i,j-1/2,k}(\tau ,\alpha )\rightarrow 0} \dfrac{{b_2}_{i,j-1/2,k}(\tau )}{y_{j}^{{\beta _1}_{i,j-1,k}(\tau )}- y_{j-1}^{{\beta _1}_{i,j-1,k}(\tau )}}>0,\\&\lim _{{b_3}_{i,j,k+1/2}(\tau ,\alpha )\rightarrow 0} \dfrac{{b_3}_{i,j,k+1/2}(\tau )}{z_{k+1}^{{\beta _2}_{i,j,k}(\tau )}- z_{k}^{{\beta _2}_{i,j,k}(\tau )}}>0,\\&\lim _{{b_3}_{i,j,k-1/2}(\tau ,\alpha )\rightarrow 0} \dfrac{{b_3}_{i,j,k-1/2}(\tau )}{z_{k}^{{\beta _1}_{i,j,k-1}(\tau )}- z_{k-1}^{{\beta _2}_{i,j,k-1}(\tau )}} >0. \end{aligned}$$

Using the definition of \(E (\tau ,\alpha ) =\left( e_{i',j',k'}^{i,j,k}\right) \), \( i',i= 1,\ldots , N_1-1 \),    \( j',j = 1,\ldots , N_2-1 \)   and   \( k',k = 1,\ldots , N_3-1 \) given above, we see that

$$\begin{aligned}&e_{i,j,k}^{i,j,k} \geqslant 0,\,\,\,e_{i+1,j,k}^{i,j,k} \leqslant 0,\,\,\, e_{i-1,j,k}^{i,j,k} \leqslant 0\,\,,\,e_{i,j+1,k}^{i,j,k} \leqslant 0, \\&e_{i,j-1,k}^{i,j,k}\leqslant 0,\,\,\,e_{i,j,k+1}^{i,j,k} \leqslant 0,\,\,\text {and}\,\, e_{i,j,k-1}^{i,j,k}\leqslant 0, \end{aligned}$$

For \( i = 2,\ldots , N_1-1 \), \( j = 2,\ldots , N_2-1 \) and \( k = 2,\ldots , N_3-1 \), since \(x_{i+1}^{\beta _{i,j,k}(\tau )} \approx x_{i}^{\beta _{i,j,k}(\tau )} + x_{i}^{\beta _{i,j,k}(\tau )-1}\,{\beta _{i,j,k}(\tau )}\,h_{x_i}\),   \(x_{i-1}^{\beta _{i-1,j,k}(\tau )} \approx x_{i}^{\beta _{i-1,j,k}(\tau )} - x_{i}^{\beta _{i-1,j,k}(\tau )-1}\,{\beta _{i-1,j,k}(\tau )}\,h_{x_i}\)    \(y_{j+1}^{{\beta _1}_{i,j,k}(\tau )} \approx y_{j}^{{\beta _1}_{i,j,k}(\tau )} + y_{j}^{{\beta _1}_{i,j,k}(\tau )-1}\,{{\beta _1}_{i,j,k}(\tau )}\,h_{y_j}\),

\(y_{j-1}^{{\beta _1}_{i,j-1,k}(\tau )} \approx y_{j}^{{\beta _1}_{i,j-1,k}(\tau )} - y_{j}^{{\beta _1}_{i,j-1,k}(\tau )-1}\,{{\beta _1}_{i,j-1,k}(\tau )}\,h_{y_j}\)   \(z_{k+1}^{{\beta _2}_{i,j,k}(\tau )} \approx z_{k}^{{\beta _2}_{i,j,k}(\tau )} + z_{k}^{{\beta _2}_{i,j,k}(\tau )-1}\,{{\beta _2}_{i,j,k}(\tau )}\,h_{z_k}\)  and \(z_{k-1}^{{\beta _2}_{i,j,k-1}(\tau )} \approx z_{k}^{{\beta _2}_{i,j,k-1}(\tau )} - z_{k}^{{\beta _2}_{i,j,k-1}(\tau )-1}\,{{\beta _2}_{i,j,k-1}(\tau )}\,h_{z_k}\), when \( h = \max \{h_{x_i},\, h_{y_j},\,\,h_{z_k}\} \longrightarrow 0 \),

$$\begin{aligned}&\left| e_{i,j,k}^{i,j,k}\right| - \left| e_{i-1,j,k}^{i,j,k} \right| - \left| e_{i,j-1,k}^{i,j,k} \right| - \left| e_{i,j,k-1}^{i,j,k} \right| - \left| e_{i+1,j,k}^{i,j,k}\right| -\left| e_{i,j+1,k}^{i,j,k} \right| -\left| e_{i,j,k+1}^{i,j,k}\right| \\&\rightarrow - c_{i,j,k}(\tau ,\alpha _{i,j,k}), \end{aligned}$$

we have

$$\begin{aligned} \left| e_{i,j,k}^{i,j,k}\right|\ge & {} \left| e_{i-1,j,k}^{i,j,k}\right| +\left| e_{i,j-1,k}^{i,j,k}\right| +\left| e_{i,j+1,k}^{i,j,k}\right| +\left| e_{i+1,j,k}^{i,j,k}\right| +\left| e_{i,j,k+1}^{i,j,k}\right| +\left| e_{i,j,k-1}^{i,j,k}\right| \\\ge & {} \sum _{m=1}^{N_1-1} \sum _{n_1=1}^{N_2-1} \sum _{n_2=1}^{N_3-1} \left| e_{m,n_1,n_2}^{i,j,k}\right| ,\,\,m \ne i,\,\,n_1 \ne j, \,\,n_2 \ne k, \end{aligned}$$

We also have similar inequalities when one of the indices i, j, k is equal to 1. Therefore \( \,E(\tau ,\alpha ) \) is an M-matrix. \(\square \)
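A minimal numerical counterpart of Theorem 1: the sketch below checks the sign pattern and (weak) row diagonal dominance of a given matrix, which are exactly the properties established in the proof. The matrix tested here is a small placeholder; in practice one would pass an assembled \(E(\tau,\alpha)\).

```python
import numpy as np

def is_m_matrix_candidate(E, tol=1e-12):
    # positive diagonal, non-positive off-diagonal entries, row diagonal dominance
    d = np.diag(E)
    off = E - np.diag(d)
    signs_ok = np.all(d > 0) and np.all(off <= tol)
    dominance_ok = np.all(d + tol >= np.sum(np.abs(off), axis=1))
    return bool(signs_ok and dominance_ok)

E = np.array([[ 4.0, -1.0, -1.0],
              [-1.0,  4.0, -1.0],
              [-1.0, -1.0,  4.0]])
print(is_m_matrix_candidate(E))   # True for this placeholder matrix
```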

4 Fitted Finite Volume Scheme in n Dimensional Spatial Domain

The goal here is to extend our three dimensional fitted scheme to a general n dimensional space (\( n \ge 3 \)). Recall that the HJB equation in \( n\ge 1 \) dimensional space is given by

$$\begin{aligned} {\left\{ \begin{array}{ll} v_t(t, x) + \underset{\alpha \in {\mathcal {A}}}{\sup } \left[ L^{\alpha } v(t, x) + f(t, x, \alpha )\right] = 0 \quad \text {on} \,\, [0,T]\times {\mathbb {R}}^n ,\\ v(T,x) = g(x), \,\,\,\,x \,\in {\mathbb {R}}^n \end{array}\right. } \end{aligned}$$
(42)
$$\begin{aligned} \text {where}\,\,\,L^\alpha \,v(t, x) = \dfrac{1}{2}\,\sum _{i,j = 1}^n (\sigma \sigma ^T)_{i,j} (t, x, \alpha )\dfrac{\partial ^2 v(t, x)}{\partial x_i\,\partial x_j}+ \sum _{i = 1}^n b_i(t, x,\alpha )\dfrac{\partial v(t, x)}{ \partial x_i}. \end{aligned}$$

By setting \( \tau = T-t \), the divergence form of equation (42) is given by

$$\begin{aligned} -\dfrac{\partial v(\tau , x) }{\partial \tau } + \sup _{\alpha \in {\mathcal {A}}}\left[ \nabla \cdot \left( k (v(\tau , x))\right) + c(\tau , x, \alpha )\,v(\tau , x) \right] = 0, \end{aligned}$$
(43)

where \(k(v(\tau , x)) = A(\tau , x,\alpha )\nabla v(\tau , x)+ b(\tau , x, \alpha )\,v(\tau , x)\)  with

$$\begin{aligned} b = (x_1\,b_1, x_2\,b_2, x_3\,b_3,\ldots , x_n\,b_n)^T,\quad A=\left[ \begin{array}{ccccc} a_{11} &{} a_{12} &{} a_{13}&{} \cdots &{} a_{1n} \\ a_{21} &{} a_{22}&{} a_{23} &{} \cdots &{} a_{2n} \\ a_{31} &{} a_{32}&{} a_{33} &{} \cdots &{} a_{3n}\\ \vdots &{} \vdots &{} \vdots &{} \vdots &{} \vdots \\ a_{n1} &{} a_{n2}&{} a_{n3} &{} \cdots &{} a_{nn} \end{array} \right] . \end{aligned}$$
(44)

Indeed this divergence form is not a restriction, as the differentiation is with respect to x and not with respect to the control \(\alpha \), which may be discontinuous in some applications. We will assume that for \( i\ne r,\,\,a_{ir} = a_{ri},\, r,i =1,\ldots ,n\). We also define the following coefficients, which will help us to build our scheme

$$\begin{aligned}&a_{ii}(\tau , x, \alpha ) = \overline{a_i}(\tau , x, \alpha )\,x_i^2\,\, \text {and}\,\, a_{ir} = a_{ri} = d_{ir}(\tau , x, \alpha ) \prod _{p=1}^{n} x_p=d_{ri}(\tau , x, \alpha ) \prod _{p=1}^{n} x_p,\,\,\,r\ne i, \end{aligned}$$

\(i,r=1,\ldots ,n. \) As usual the n dimensional domain is truncated to \( I_{x_i}= [0, {x_i}_{\text {max}}] \), \( i = 1,\ldots ,n \), and each \( I_{x_i} \) is divided into \( N_i \) sub-intervals

$$\begin{aligned} I_{{x_1}_j} =({x_1}_j, {x_1}_{j+1}),\,\, I_{{x_2}_k} =({x_2}_k, {x_2}_{k+1}) ,\,\, I_{{x_3}_l} =({x_3}_l, {x_3}_{l+1}),\ldots ,I_{{x_n}_m} =({x_n}_m, {x_n}_{m+1}) \end{aligned}$$

\( j = 0\ldots N_1-1,\,\,k = 0\ldots N_2-1,\,\,\,l=0\ldots N_3-1,\ldots , m=0\ldots N_n-1,\) with \( 0 = {x_i}_{0}< {x_i}_{1}< \cdots < {x_i}_{N_i} = {x_i}_{\text {max}}\), \( i=1,\ldots ,n \).

This defines on \( I_{x} = \mathop {\prod }\nolimits _{i=1}^{n} I_{x_i} \) a rectangular mesh. By setting

$$\begin{aligned}&{x_1}_{j+1/2} :=\dfrac{{x_1}_{j} + {x_1}_{j+1} }{2},\qquad {x_1}_{j-1/2} :=\dfrac{{x_1}_{j} + {x_1}_{j-1} }{2},\nonumber \\&{x_2}_{k+1/2} :=\dfrac{{x_2}_{k} + {x_2}_{k+1} }{2},\qquad {x_2}_{k-1/2} :=\dfrac{{x_2}_{k} + {x_2}_{k-1} }{2},\nonumber \\&{x_3}_{l+1/2} :=\dfrac{{x_3}_{l} + {x_3}_{l+1} }{2},\,\, \,\qquad {x_3}_{l-1/2} :=\dfrac{{x_3}_{l} + {x_3}_{l-1} }{2},\nonumber \\&\qquad \vdots \qquad \qquad \qquad \qquad \qquad \qquad \quad \vdots \nonumber \\&{x_n}_{m+1/2} :=\dfrac{{x_n}_{m} + {x_n}_{m+1} }{2},\,\,\,\, {x_n}_{m-1/2} :=\dfrac{{x_n}_{m} + {x_n}_{m-1} }{2}, \end{aligned}$$
(45)

for each \( j=1\ldots N_1-1\),   \( k=1\ldots N_2-1\), \( l=1\ldots N_3-1,\ldots , m=1\ldots N_n-1 \). These mid-points form a second partition of \( I_{x} = \mathop {\prod }\nolimits _{i=1}^{n} I_{x_i} \) if we define \( {x_i}_{-1/2} = {x_i}_{0}\), \( {x_i}_{N_i+1/2} = {x_i}_{\text {max}}\),   \( i=1,2,\ldots , n \). For each \(j = 0, 1, \ldots ,N_1 \),   \(k = 0, 1, \ldots ,N_2 \), \(l = 0, 1,\ldots , N_3,\ldots , m = 0, 1,\ldots ,N_n \),   we set  \(h_{{x_1}_j} = {x_1}_{j+1/2} - {x_1}_{j-1/2} \), \(h_{{x_2}_k} = {x_2}_{k+1/2} - {x_2}_{k-1/2} \),   \(h_{{x_3}_l} = {x_3}_{l+1/2} - {x_3}_{l-1/2} , \ldots , h_{{x_n}_m} = {x_n}_{m+1/2} - {x_n}_{m-1/2} \) and \( h = \max \{h_{{x_1}_j},\,h_{{x_2}_k},\,h_{{x_3}_l},\ldots , h_{{x_n}_m}\} \). Integrating both sides of (43) over \( {\mathcal {R}}_{j,k,l,\ldots ,m} = \left[ {x_1}_{j-1/2}, {x_1}_{j+1/2} \right] \times \left[ {x_2}_{k-1/2}, {x_2}_{k+1/2} \right] \times \left[ {x_3}_{l-1/2}, {x_3}_{l+1/2} \right] \times \cdots \times \left[ {x_n}_{m-1/2}, {x_n}_{m+1/2} \right] \), we have

$$\begin{aligned}&- \int _{{\mathcal {R}}_{j,k,l,\ldots ,m}} \dfrac{ \partial v }{\partial \tau }\, dx_1\,dx_2\, dx_3\ldots dx_n \nonumber \\&\quad + \int _{{\mathcal {R}}_{j,k,l,\ldots ,m}} \sup _{\alpha \in {\mathcal {A}}} \left[ \nabla \cdot \left( k(v)\right) + c\,v \right] \, dx_1\,dx_2\, dx_3\ldots dx_n = 0, \end{aligned}$$
(46)

for \( j =1,2,\ldots N_1-1 \)\( k =1,2,\ldots N_2-1 \)\(l =1,2,\ldots N_3-1,\ldots ,m =1,2,\ldots N_n-1 \).

Applying the mid-point quadrature rule to the first and the last terms, we obtain

$$\begin{aligned}&-\dfrac{d v_{j,k,l,\ldots ,m}(\tau ) }{d \tau }\,l_{j,k,l,\ldots ,m} \nonumber \\&\quad +\sup _{\alpha \in {\mathcal {A}}} \left[ \int _{ {\mathcal {R}}_{j,k,l,\ldots ,m}}\nabla \cdot \left( k(v)\right) \,dx_1\,dx_2\ldots dx_n + c_{j,k,l,\ldots ,m}(\tau , \alpha )\, v_{j,k,l,\ldots ,m}(\tau )\,l_{j,k,l,\ldots ,m}\right] =0 \end{aligned}$$
(47)

where \( l_{j,k,l,\ldots ,m} = \left( {x_1}_{j+1/2} - {x_1}_{j-1/2} \right) \times \left( {x_2}_{k+1/2} - {x_2}_{k-1/2}\right) \times \left( {x_3}_{l+1/2} - {x_3}_{l-1/2}\right) \times \cdots \times \left( {x_n}_{m+1/2} - {x_n}_{m-1/2}\right) \) is the volume of \( {\mathcal {R}}_{j,k,l,\ldots ,m} \). Note that \( v_{j,k,l,\ldots ,m}(\tau ) \) denotes the nodal approximation to \( v(\tau , {x_1}_{j}, {x_2}_{k}, {x_3}_{l},\ldots ,{x_n}_{m}) \) at each point of the grid.
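A minimal sketch of the n-dimensional tensor grid and control volumes of this section: one partition per coordinate direction, its mid-point partition, and the volume \(l_{j,k,l,\ldots,m}\) of a control volume as the product of the cell widths. Uniform partitions and the particular dimension n = 4 are used only for illustration.

```python
import numpy as np
from itertools import product

def axis(x_max, N):
    x = np.linspace(0.0, x_max, N + 1)                           # nodes of one direction
    x_half = np.concatenate(([x[0]], 0.5 * (x[:-1] + x[1:]), [x[-1]]))
    h = x_half[1:] - x_half[:-1]                                 # cell widths, index 0..N
    return x, h

n = 4                                        # spatial dimension (n >= 3)
axes = [axis(1.0 + 0.5 * i, N=4 + i) for i in range(n)]
hs = [h for _, h in axes]

def cell_volume(multi_index):
    # multi_index = (j, k, l, ..., m), one index per coordinate direction
    return float(np.prod([hs[d][multi_index[d]] for d in range(n)]))

interior = product(*[range(1, len(h) - 1) for h in hs])          # interior nodes
print(cell_volume(next(interior)))
```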

We now consider the approximation of the middle term in (47). Let \(\mathbf{n} \) denote the unit vector outward-normal to \( \partial {\mathcal {R}}_{j,k,l,\ldots ,m} \). By the divergence theorem, integrating by parts and using the definition of the flux k, we have

$$\begin{aligned}&\int _{{\mathcal {R}}_{j,k,l,\ldots ,m}} \nabla \cdot \left( k(v)\right) d x_1\,dx_2\,dx_3\ldots dx_n \\&\quad = \int _{\partial {\mathcal {R}}_{j,k,l,\ldots ,m}} k(v) \cdot \mathbf{n} \, ds \\&\quad = \int _{\left( {x_1}_{j+1/2},{x_2}_{k-1/2},{x_3}_{l-1/2},\ldots ,{x_n}_{m-1/2} \right) }^{\left( {x_1}_{j+1/2},{x_2}_{k+1/2},{x_3}_{l+1/2},\ldots ,{x_n}_{m+1/2} \right) }\left( \sum _{i = 1}^{n}a_{1i}\,\dfrac{\partial v }{\partial x_i}+ x_1\,b_1\,v \right) dx_2\,dx_3\ldots dx_n \\&\qquad - \int _{\left( {x_1}_{j-1/2},{x_2}_{k-1/2},{x_3}_{l-1/2},\ldots ,{x_n}_{m-1/2} \right) }^{\left( {x_1}_{j-1/2},{x_2}_{k+1/2},{x_3}_{l+1/2},\ldots ,{x_n}_{m+1/2} \right) } \left( \sum _{i = 1}^{n}a_{1i}\,\dfrac{ \partial v }{\partial x_i}+ x_1\,b_1\,v \right) dx_2\,dx_3\ldots dx_n \\&\qquad +\int _{\left( {x_1}_{j-1/2},{x_2}_{k+1/2},{x_3}_{l-1/2},\ldots ,{x_n}_{m-1/2} \right) }^{\left( {x_1}_{j+1/2},{x_2}_{k+1/2},{x_3}_{l+1/2},\ldots ,{x_n}_{m+1/2} \right) } \left( \sum _{i=1}^{n}a_{2i}\,\dfrac{\partial v }{ \partial x_i} + x_2\,b_2\,v \right) dx_1\,dx_3\ldots dx_n \\&\qquad - \int _{\left( {x_1}_{j-1/2},{x_2}_{k-1/2},{x_3}_{l-1/2},\ldots ,{x_n}_{m-1/2} \right) }^{\left( {x_1}_{j+1/2},{x_2}_{k-1/2},{x_3}_{l+1/2},\ldots ,{x_n}_{m+1/2} \right) } \left( \sum _{i=1}^{n}a_{2i}\,\dfrac{\partial v }{\partial x_i}+ x_2\,b_2\,v \right) dx_1\,dx_3\ldots dx_n \\&\qquad\qquad \vdots \quad \qquad\vdots \quad\quad \vdots \quad \quad \vdots \quad\quad \vdots \quad\quad \vdots \quad\quad \vdots \quad\quad \vdots \quad\quad \vdots \quad\quad \vdots \quad\quad \vdots \\&\qquad + \int _{\left( {x_1}_{j-1/2},{x_2}_{k-1/2},{x_3}_{l-1/2},\ldots ,{x_n}_{m+1/2} \right) }^{\left( {x_1}_{j+1/2},{x_2}_{k+1/2},{x_3}_{l+1/2},\ldots ,{x_n}_{m+1/2} \right) } \left( \sum _{i=1}^{n}a_{ni}\,\dfrac{\partial v }{\partial x_i} + x_n\,b_n\,v \right) dx_1\,dx_2\,dx_3\ldots dx_{n-1}\\&\qquad - \int _{\left( {x_1}_{j-1/2},{x_2}_{k-1/2},{x_3}_{l-1/2},\ldots ,{x_n}_{m-1/2} \right) }^{\left( {x_1}_{j+1/2},{x_2}_{k+1/2},{x_3}_{l+1/2},\ldots ,{x_n}_{m-1/2} \right) } \left( \sum _{i=1}^{n}a_{ni}\,\dfrac{\partial v }{\partial x_i} + x_n\,b_n\,v \right) dx_1\,dx_2\,dx_3\ldots dx_{n-1}\\&\quad = \sum _{i =1}^{n}\bigg (\int _{\left( {x_1}_{j-1/2},{x_2}_{k-1/2},{x_3}_{l-1/2},\ldots ,{x_i}_{q+1/2},\ldots ,{x_n}_{m-1/2} \right) }^{\left( {x_1}_{j+1/2},{x_2}_{k+1/2},{x_3}_{l+1/2},\ldots ,{x_i}_{q+1/2},\ldots ,{x_n}_{m+1/2} \right) } \left( \sum _{r =1}^{n} a_{ir}\,\dfrac{\partial v }{\partial x_r} + x_i\,b_i\,v \right) \prod _{i\ne r}^{n}dx_r\bigg )\\&\qquad - \sum _{i =1}^{n}\bigg (\int _{\left( {x_1}_{j-1/2},{x_2}_{k-1/2},{x_3}_{l-1/2},\ldots ,{x_i}_{q-1/2},\ldots ,{x_n}_{m-1/2} \right) }^{\left( {x_1}_{j+1/2},{x_2}_{k+1/2},{x_3}_{l+1/2},\ldots ,{x_i}_{q-1/2},\ldots ,{x_n}_{m+1/2} \right) }\left( \sum _{r =1}^{n} a_{ir}\,\dfrac{\partial v }{\partial x_r} + x_i\,b_i\,v \right) \prod _{i\ne r}^{n}dx_r\bigg ) \end{aligned}$$

We will approximate the first term using the mid-point quadrature rule as

$$\begin{aligned}&\sum _{i =1}^{n}\bigg (\int _{\left( {x_1}_{j-1/2},{x_2}_{k-1/2},{x_3}_{l-1/2},{x_i}_{q+1/2},\ldots ,{x_n}_{m-1/2} \right) }^{\left( {x_1}_{j+1/2},{x_2}_{k+1/2},{x_3}_{l+1/2},{x_i}_{q+1/2},\ldots ,{x_n}_{m+1/2} \right) } \left( \sum _{r =1}^{n} a_{ir}\,\dfrac{\partial v }{\partial x_r} + x_i\,b_i\,v \right) \prod _{i\ne r}^{n}dx_r\bigg )\nonumber \\&\quad \approx \sum _{i,r =1}^{n}\left( a_{ir}\,\dfrac{\partial v }{\partial x_r} + x_i\,b_i\,v \right) \bigg |_{\left( {x_1}_{j},{x_2}_{k},{x_3}_{l},\ldots ,{x_i}_{q+1/2},{x_i}_{s},\ldots ,{x_n}_{m}\right) } \prod _{i\ne r}^{n}h_{{x_r}_\nu }. \end{aligned}$$
(48)

where the value of the subscript \( \nu \in \{j,k,l,\ldots ,q,s,\ldots ,m\} \) depends on the value taken by \( r \in \{1,2,3,\ldots ,i, \ldots ,n\} \). To achieve this, we need to derive approximations of \( k(v) \cdot \mathbf{n} \) defined above at the mid-point \(\left( {x_1}_{j},{x_2}_{k},{x_3}_{l},\ldots ,{x_i}_{q+1/2},{x_{i+1}}_{s},\ldots ,{x_n}_{m}\right) \) of the interval \( I_{{x_i}_q} \) for \( q = 0, 1,\ldots N_i-1\),  \(i=1,2,\ldots ,n \). This discussion is divided into two cases, \( q \ge 1 \) and \( q = 0\, \) on the interval \( I_{{x_i}_0} = [0, {x_i}_1],\,\, i=1,2,\ldots ,n \). This is the generalization of the fitted finite volume scheme.

Case I For \( q\ge 1 \).

We follow the same procedure as in three dimensions and obtain the following generalization

$$\begin{aligned}&\sum _{i,r =1}^{n}\left( a_{ir}\,\dfrac{\partial v }{ \partial x_r} + x_i\,b_i\,v \right) \bigg |_{\left( {x_1}_{j},{x_2}_{k},{x_3}_{l},\ldots ,{x_i}_{q+1/2},{x_{i+1}}_{s},\ldots ,{x_n}_{m}\right) } \prod _{i\ne r}^{n} h_{{x_r}_\nu }\nonumber \\&\quad \approx \sum _{i=1}^{n}{x_i}_{q+1/2} {b_i}_{j,k,l,\ldots ,q+1/2,s\ldots ,m}(\tau , \alpha _{j,k,l,\ldots ,q,s,\ldots ,m})\nonumber \\&\qquad \times \dfrac{\left( {x_i}_{q+1}^{\beta _{j,k,l,\ldots ,q,s\ldots ,m}(\tau )}\,v_{j,k,l,\ldots ,q+1,s,\ldots ,m}-{x_i}_{q}^{\beta _{j,k,l,\ldots ,q,s\ldots ,m}(\tau )}\,v_{j,k,l,\ldots ,q,s,\ldots ,m} \right) }{{x_i}_{q+1}^{\beta _{j,k,l,\ldots ,q,s\ldots ,m}(\tau )}- {x_i}_{q}^{\beta _{j,k,l,\ldots ,q,s\ldots ,m}(\tau )}}\prod _{i\ne r}^{n} h_{{x_r}_\nu }\nonumber \\&\qquad +\sum _{\underset{i\ne r}{i,r=1}}^n{x_i}_{q+1/2}\left( {d_{ir}}_{j,k,l,\ldots ,q,s,\ldots ,m}(\tau ,\alpha _{j,k,l,\ldots ,q,s,\ldots ,m})\left( \prod _{i\ne r}{x_r}_\nu \right) \times \right. \nonumber \\&\quad \left. \dfrac{v_{j,k,l,\ldots ,q,s,\nu +1,\ldots ,m}-v_{j,k,l,\ldots ,q,s,\nu ,\ldots ,m}}{ h_{{x_r}_\nu }} \right) \prod _{i\ne r}^{n} h_{{x_r}_\nu }. \end{aligned}$$
(49)

where the subscript \( \nu \in \{j,k,l,\ldots ,q,s,\ldots ,m\} \) depends on the value taken by \( r \in \{1,2,3,\ldots ,i,\ldots ,n\} \) and \( \beta _{j,k,l,\ldots ,q,s,\nu \ldots ,m}(\tau ) = \dfrac{{b_i}_{j,k,l,\ldots ,q+1/2,s,\nu \ldots ,m}(\tau , \alpha _{j,k,l,\ldots ,q,s,\nu ,\ldots ,m})}{{{{\overline{a}}_i}}_{j,k,l,\ldots ,q+1/2,s,\nu \ldots ,m}(\tau , \alpha _{j,k,l,\ldots ,q,s,\nu ,\ldots ,m})}\ne 0.\) Similarly,

$$\begin{aligned}&\sum _{i,r =1}^{n}\left( a_{ir}\,\dfrac{\partial v }{\partial x_r} + x_i\,b_i\,v \right) \bigg |_{\left( {x_1}_{j},{x_2}_{k},{x_3}_{l},\ldots ,{x_i}_{q-1/2},{x_{i+1}}_{s},\ldots ,{x_n}_{m}\right) } \prod _{i\ne r}^{n}dx_r\nonumber \\&\quad \approx \sum _{i=1}^{n}{x_i}_{q-1/2} {b_i}_{j,k,l,\ldots ,q-1/2,s\ldots ,m}(\tau , \alpha _{j,k,l,\ldots ,q,s,\ldots ,m})\times \left( \prod _{i\ne r}^{n} h_{{x_r}_\nu }\right) \nonumber \\&\qquad \times \dfrac{\left( {x_i}_{q}^{\beta _{j,k,l,\ldots ,q-1,s\ldots ,m}(\tau )}\,v_{j,k,l,\ldots ,q,s,\ldots ,m}-{x_i}_{q-1}^{\beta _{j,k,l,\ldots ,q-1,s\ldots ,m}(\tau )}\,v_{j,k,l,\ldots ,q-1,s,\ldots ,m} \right) }{{x_i}_{q}^{\beta _{j,k,l,\ldots ,q-1,s\ldots ,m}(\tau )}- {x_i}_{q-1}^{\beta _{j,k,l,\ldots ,q-1,s\ldots ,m}(\tau )}}\nonumber \\&\qquad +\sum _{\underset{i\ne r}{i,r=1}}^n{x_i}_{q-1/2}\left( {d_{ir}}_{j,k,l,\ldots ,q,s,\ldots ,m}(\tau ,\alpha _{j,k,l,\ldots ,q,s,\ldots ,m})\left( \prod _{i\ne r}{x_r}_\nu \right) \right. \nonumber \\&\qquad \times \left. \dfrac{v_{j,k,l,\ldots ,q,s,\nu +1,\ldots ,m}-v_{j,k,l,\ldots ,q,s,\nu ,\ldots ,m}}{ h_{{x_r}_\nu }} \right) \prod _{i\ne r}^{n} h_{{x_r}_\nu }. \end{aligned}$$
(50)

where \( \beta _{j,k,l,\ldots ,q-1,s,\nu \ldots ,m}(\tau ) = \dfrac{{b_i}_{j,k,l,\ldots ,q-1/2,s,\nu ,\ldots ,m}(\tau , \alpha _{j,k,l,\ldots ,q,s,\nu ,\ldots ,m})}{{{{\overline{a}}_i}}_{j,k,l,\ldots ,q-1/2,s,\nu ,\ldots ,m}(\tau , \alpha _{j,k,l,\ldots ,q,s,\nu ,\ldots ,m})}\ne 0.\)
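To make the two-point fitted flux of Case I concrete, the sketch below (in Python) computes, for a single interval \([{x_i}_{q},{x_i}_{q+1}]\) in one coordinate direction, the weights multiplying \(v_{\ldots ,q,\ldots }\) and \(v_{\ldots ,q+1,\ldots }\) in (49). The function name, the scalar interface and the omission of the cross-derivative terms \(d_{ir}\) are illustrative assumptions, not part of the paper's implementation.

```python
def fitted_flux_weights(x_q, x_q1, b, a_bar):
    """Fitted two-point flux weights on [x_q, x_{q+1}] (Case I, q >= 1).

    With beta = b / a_bar (assumed nonzero, as required in the text), the
    advective-diffusive part of the flux at x_{q+1/2} in (49) reads
        x_{q+1/2} * b * (w_up * v_{q+1} + w_dn * v_q).
    Illustrative 1D helper; the cross terms d_ir are not included here.
    """
    beta = b / a_bar
    denom = x_q1**beta - x_q**beta
    w_up = x_q1**beta / denom    # weight of v_{q+1}
    w_dn = -x_q**beta / denom    # weight of v_q
    return w_dn, w_up

# Sanity check on a sample interval: the two weights sum to one.
w_dn, w_up = fitted_flux_weights(0.2, 0.3, 0.05, 0.03)
assert abs(w_up + w_dn - 1.0) < 1e-12
```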

Case II Approximation of the flux at \( q = 0 \) on the interval \( I_{{x_i}_0} = [0, {x_i}_1],\,\,i=1,\ldots ,n\). Note that the analysis in Case I does not apply to the approximation of the flux on \( I_{{x_i}_0} \) because this is the degenerate zone. Following the same lines as in the three-dimensional case, we get

$$\begin{aligned}&\sum _{i,r =1}^{n}\left( a_{ir}\,\dfrac{\partial v }{\partial x_r} + x_i\,b_i\,v \right) \bigg |_{\left( {x_1}_{j},{x_2}_{k},{x_3}_{l},\ldots ,{x_i}_{1/2},{x_i}_{s},\ldots ,{x_n}_{m}\right) } \prod _{i\ne r}^{n}dx_r\nonumber \\&\quad \approx \sum _{i=1}^n {x_i}_{1/2} \dfrac{1}{2}\bigg ( \bigg ({a_i}_{j,k,l,\ldots ,1/2,s\ldots ,m}(\tau , \alpha _{j,k,l,\ldots ,1,s,\ldots ,m})\nonumber \\&\qquad +{b_i}_{j,k,l,\ldots ,1/2,s\ldots ,m}(\tau , \alpha _{j,k,l,\ldots ,1,s,\ldots ,m})\bigg )\,v_{j,k,l,\ldots ,1,s,\ldots ,m} \prod _{i\ne r}^{n} h_{{x_r}_\nu }\bigg ) \nonumber \\&\qquad -\sum _{i=1}^n {x_i}_{1/2}\dfrac{1}{2}\bigg (\bigg ({a_i}_{j,k,l,\ldots ,1/2,s\ldots ,m}(\tau , \alpha _{j,k,l,\ldots ,1,s,\ldots ,m})\nonumber \\&\qquad -{b_i}_{j,k,l,\ldots ,1/2,s\ldots ,m}(\tau , \alpha _{j,k,l,\ldots ,1,s,\ldots ,m})\bigg )\,v_{j,k,l,\ldots ,0,s,\ldots ,m}\prod _{i\ne r}^{n} h_{{x_r}_\nu } \bigg )\nonumber \\&\qquad + \sum _{\underset{i\ne r}{i,r=1}}^n{x_i}_{1/2}\left( {d_{ir}}_{j,k,l,\ldots ,1,s,\ldots ,m}(\tau ,\alpha _{j,k,l,\ldots ,1,s,\ldots ,m})\left( \prod _{i\ne r}{x_r}_\nu \right) \right. \nonumber \\&\qquad \times \left. \dfrac{v_{j,k,l,\ldots ,1,s,\nu +1,\ldots ,m}-v_{j,k,l,\ldots ,1,s,\nu ,\ldots ,m}}{ h_{{x_r}_\nu }} \right) \prod _{i\ne r}^{n} h_{{x_r}_\nu }. \end{aligned}$$
(51)
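Analogously, the flux weights on the degenerate interval \(I_{{x_i}_0}=[0,{x_i}_1]\) appearing in (51) can be sketched as follows; again this is a hypothetical scalar helper covering only the advective-diffusive part of the flux.

```python
def degenerate_flux_weights(a_half, b_half):
    """Flux weights on the degenerate interval [0, x_1] (Case II, q = 0).

    Following (51), the advective-diffusive part of the flux at x_{1/2} is
    approximated by  x_{1/2} * (w1 * v_1 + w0 * v_0),  with a_half and b_half
    the coefficients evaluated at the mid-point x_{1/2}. Illustration only.
    """
    w1 = 0.5 * (a_half + b_half)    # multiplies v_{...,1,...}
    w0 = -0.5 * (a_half - b_half)   # multiplies v_{...,0,...}
    return w0, w1
```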

Replacing the flux by its approximations above, Equation (47) becomes, for \( j = 1,\ldots ,N_1-1 \),  \( k = 1,\ldots ,N_2-1 \),   \( l = 1,\ldots ,N_3-1,\ldots , m = 1,\ldots ,N_n-1 \), with \( N = \prod _{i=1}^{n} (N_i-1)\),

$$\begin{aligned}&-\dfrac{d\, v_{j,k,l,\ldots ,q,s,\nu ,\ldots ,m}(\tau )}{d\, \tau }+\dfrac{1}{l_{j,k,l,\ldots ,q,s,\ldots ,m}} \nonumber \\&\quad \times \underset{\alpha _{j,k,l,\ldots ,q,s,\ldots ,m} \in {\mathcal {A}}^{N}}{\sup } \bigg [\sum _{i=1}^{n}{x_i}_{q+1/2} {b_i}_{j,k,l,\ldots ,q+1/2,s\ldots ,m}(\tau , \alpha _{j,k,l,\ldots ,q,s,\ldots ,m})\nonumber \\&\quad \times \dfrac{\left( {x_i}_{q+1}^{\beta _{j,k,l,\ldots ,q,s\ldots ,m}(\tau )}\,v_{j,k,l,\ldots ,q+1,s,\ldots ,m}-{x_i}_{q}^{\beta _{j,k,l,\ldots ,q,s\ldots ,m}(\tau )}\,v_{j,k,l,\ldots ,q,s,\ldots ,m} \right) }{{x_i}_{q+1}^{\beta _{j,k,l,\ldots ,q,s\ldots ,m}(\tau )}- {x_i}_{q}^{\beta _{j,k,l,\ldots ,q,s\ldots ,m}(\tau )}}\prod _{i\ne r}^{n} h_{{x_r}_\nu }\nonumber \\&\quad +\sum _{\underset{i\ne r}{i,r=1}}^n{x_i}_{q+1/2}\bigg ( {d_{ir}}_{j,k,l,\ldots ,q,s,\ldots ,m}(\tau ,\alpha _{j,k,l,\ldots ,q,s,\ldots ,m})\left( \prod _{i\ne r}{x_r}_\nu \right) \nonumber \\&\quad \times \dfrac{v_{j,k,l,\ldots ,q,s,\nu +1,\ldots ,m}-v_{j,k,l,\ldots ,q,s,\nu ,\ldots ,m}}{ h_{{x_r}_\nu }} \bigg )\prod _{i\ne r}^{n} h_{{x_r}_\nu }\nonumber \\&\quad - \sum _{i=1}^{n}{x_i}_{q-1/2} {b_i}_{j,k,l,\ldots ,q-1/2,s\ldots ,m}(\tau , \alpha _{j,k,l,\ldots ,q,s,\ldots ,m})\times \left( \prod _{i\ne r}^{n} h_{{x_r}_\nu }\right) \nonumber \\&\quad \times \dfrac{\left( {x_i}_{q}^{\beta _{j,k,l,\ldots ,q-1,s\ldots ,m}(\tau )}\,v_{j,k,l,\ldots ,q,s,\ldots ,m}-{x_i}_{q-1}^{\beta _{j,k,l,\ldots ,q-1,s\ldots ,m}(\tau )}\,v_{j,k,l,\ldots ,q-1,s,\ldots ,m} \right) }{{x_i}_{q}^{\beta _{j,k,l,\ldots ,q-1,s\ldots ,m}(\tau )}- {x_i}_{q-1}^{\beta _{j,k,l,\ldots ,q-1,s\ldots ,m}(\tau )}}\nonumber \\&\quad +\sum _{\underset{i\ne r}{i,r=1}}^n{x_i}_{q-1/2}\bigg ( {d_{ir}}_{j,k,l,\ldots ,q,s,\ldots ,m}(\tau ,\alpha _{j,k,l,\ldots ,q,s,\ldots ,m})\left( \prod _{i\ne r}{x_r}_\nu \right) \nonumber \\&\quad \times \dfrac{v_{j,k,l,\ldots ,q,s,\nu +1,\ldots ,m}-v_{j,k,l,\ldots ,q,s,\nu ,\ldots ,m}}{ h_{{x_r}_\nu }} \bigg )\prod _{i\ne r}^{n} h_{{x_r}_\nu }\nonumber \\&\quad +c_{j,k,l,\ldots ,q,s,\nu ,\ldots ,m}\,v_{j,k,l,\ldots ,q,s,\nu ,\ldots ,m}\,l_{j,k,l,\ldots ,q,s,\nu ,\ldots ,m}\bigg ]=0. \end{aligned}$$
(52)

This can be rewritten as an ordinary differential equation (ODE) coupled with an optimization problem

$$\begin{aligned} {\left\{ \begin{array}{ll} \dfrac{d\,\mathbf{v} (\tau )}{d\,\tau } + \underset{\alpha \in {\mathcal {A}}^N}{\inf }\,\left[ E (\tau , \alpha )\,\mathbf{v} (\tau ) + F(\tau ,\alpha ) \right] = 0, \\ ~~~\text{ with }~~ \,\,\,\, \mathbf{v} (0)\,\,\,\text {given}, \end{array}\right. } \end{aligned}$$
(53)

where \( E (\tau ,\alpha )\) is an \(N\times N\) matrix, \( {\mathcal {A}}^N=\underset{N}{\underbrace{{\mathcal {A}}\times \cdots \times {\mathcal {A}}}} \), \(\mathbf{v} = \left( v_{j,k,l,\ldots ,q,s,\nu ,\ldots ,m}\right) \), and \(G (\tau , \alpha ) =-F(\tau , \alpha ) \) depends on the boundary condition and on the term c. By setting \(n_1=N_1-1,\; n_2=N_2-1, \;n_3=N_3-1,\ldots ,n_n=N_n-1, \;\; I:=I (j,k,l,\ldots ,q,s,\nu ,\ldots ,m)= j + (k-1)n_1 +(l-1)n_1 n_2 + \cdots + (m-1)\mathop {\prod }\nolimits _{i=1}^{n-1} n_i\) and \(J:=J (j',k',l',\ldots ,q',s',\nu ',\ldots ,m')= j' + (k'-1)n_1 +(l'-1)n_1 n_2 + \cdots + (m'-1)\mathop {\prod }\nolimits _{i=1}^{n-1} n_i \), we have \(E (\tau , \alpha ) (I,J)= \left( e_{j',k',l'\ldots ,q',s',\nu ',\ldots ,m'}^{j,k,l,\ldots ,q,s,\nu ,\ldots ,m}\right) \), \(j', j = 1,\ldots , N_1-1 \), \( k',k = 1,\ldots , N_2-1 \), \(l', l = 1,\ldots , N_3-1,\ldots ,m',m = 1,\ldots , N_n-1 \), and

$$\begin{aligned}&e_{j,k,l\ldots ,q,s,r,\ldots ,m}^{j,k,l,\ldots ,q,s,\nu ,\ldots ,m}=\dfrac{1}{l_{j,k,l,\ldots ,q,s,\ldots ,m}} \bigg [\sum _{i=1}^{n}{x_i}_{q+1/2} {b_i}_{j,k,l,\ldots ,q+1/2,s\ldots ,m}(\tau , \alpha _{j,k,l,\ldots ,q,s,\ldots ,m})\nonumber \\&\quad \times \dfrac{{x_i}_{q}^{\beta _{j,k,l,\ldots ,q,s\ldots ,m}(\tau )} }{{x_i}_{q+1}^{\beta _{j,k,l,\ldots ,q,s\ldots ,m}(\tau )}- {x_i}_{q}^{\beta _{j,k,l,\ldots ,q,s\ldots ,m}(\tau )}}\prod _{i\ne r}^{n} h_{{x_r}_\nu }\nonumber \\&\quad +\sum _{\underset{i\ne r}{i,r=1}}^n\bigg ( {d_{ir}}_{j,k,l,\ldots ,q,s,\ldots ,m}(\tau ,\alpha _{j,k,l,\ldots ,q,s,\ldots ,m})\left( \prod _{i\ne r}{x_r}_\nu \right) \dfrac{1}{h_{{x_r}_\nu }} \bigg )\prod _{i=1}^{n} h_{{x_i}_q}\nonumber \\&\quad +\sum _{i=1}^{n}{x_i}_{q-1/2} {b_i}_{j,k,l,\ldots ,q-1/2,s\ldots ,m}(\tau , \alpha _{j,k,l,\ldots ,q,s,\ldots ,m})\times \left( \prod _{i\ne r}^{n} h_{{x_r}_\nu }\right) \nonumber \\&\quad \times \dfrac{{x_i}_{q}^{\beta _{j,k,l,\ldots ,q-1,s\ldots ,m}(\tau )}}{{x_i}_{q}^{\beta _{j,k,l,\ldots ,q-1,s\ldots ,m}(\tau )}-{x_i}_{q-1}^{\beta _{j,k,l,\ldots ,q-1,s\ldots ,m}(\tau )}}\bigg ]- c_{j,k,l,\ldots ,q,s,\nu ,\ldots ,m},\nonumber \\&\sum _{\overset{j'\ne j}{j'=j-1}}^{N_1-1} \sum _{\overset{k'\ne k}{k'=k-1}}^{N_2-1} \sum _{\overset{l'\ne l}{l'=l-1}}^{N_3-1}\cdots \sum _{\overset{m'\ne m}{m'=m-1}}^{N_n-1} e_{j',k',l'\ldots ,q',s',\nu ',\ldots ,m'}^{j,k,l,\ldots ,q,s,r,\ldots ,m}\nonumber \\&\quad =\dfrac{1}{l_{j,k,l,\ldots ,q,s,\ldots ,m}} \bigg [ \sum _{i=1}^{n}{x_i}_{q-1/2} {b_i}_{j,k,l,\ldots ,q-1/2,s\ldots ,m}(\tau , \alpha _{j,k,l,\ldots ,q,s,\ldots ,m})\nonumber \\&\qquad \times \dfrac{-{x_i}_{q-1}^{\beta _{j,k,l,\ldots ,q-1,s\ldots ,m}(\tau )}}{{x_i}_{q}^{\beta _{j,k,l,\ldots ,q-1,s\ldots ,m}(\tau )}- {x_i}_{q-1}^{\beta _{j,k,l,\ldots ,q-1,s\ldots ,m}(\tau )}}\left( \prod _{i\ne r}^{n} h_{{x_r}_\nu }\right) \bigg ],\nonumber \\&\sum _{\overset{j'\ne j}{j'=j+1}}^{N_1-1} \sum _{\overset{k'\ne k}{k'=k+1}}^{N_2-1} \sum _{\overset{l'\ne l}{l'=l+1}}^{N_3-1}\cdots \sum _{\overset{m'\ne m}{m'=m+1}}^{N_n-1} e_{j',k',l'\ldots ,q',s',\nu ',\ldots ,m'}^{j,k,l,\ldots ,q,s,\nu ,\ldots ,m}\nonumber \\&\quad =\dfrac{1}{l_{j,k,l,\ldots ,q,s,\ldots ,m}} \bigg [ \sum _{i=1}^{n}{x_i}_{q+1/2} {b_i}_{j,k,l,\ldots ,q+1/2,s\ldots ,m}(\tau , \alpha _{j,k,l,\ldots ,q,s,\ldots ,m})\nonumber \\&\qquad \times \dfrac{-{x_i}_{q+1}^{\beta _{j,k,l,\ldots ,q,s\ldots ,m}(\tau )}}{{x_i}_{q+1}^{\beta _{j,k,l,\ldots ,q,s\ldots ,m}(\tau )}- {x_i}_{q}^{\beta _{j,k,l,\ldots ,q,s\ldots ,m}(\tau )}}\prod _{i\ne r}^{n} h_{{x_r}_\nu }\nonumber \\&\qquad -\sum _{\underset{i\ne r}{i,r=1}}^n\bigg ( {d_{ir}}_{j,k,l,\ldots ,q,s,\nu ,\ldots ,m}(\tau ,\alpha _{j,k,l,\ldots ,q,s,\nu ,\ldots ,m})\left( {x_i}_q {x_r}_\nu \right) \dfrac{1}{h_{{x_i}_q}} \bigg )\bigg (\prod _{r=1}^{n} h_{{x_r}_\nu }\bigg )\bigg ] \end{aligned}$$
(54)

for \(j= 2,\ldots , N_1-1 \),    \( k = 2,\ldots , N_2-1 ,\ldots ,m = 2,\ldots , N_n-1 \). If one of the indices \( j, k, l,\ldots ,m \) is equal to 1,

$$\begin{aligned}&e_{j,k,l\ldots ,1,s,\nu ,\ldots ,m}^{j,k,l,\ldots ,1,s,\nu ,\ldots ,m}\nonumber \\&\quad =\dfrac{1}{l_{j,k,l,\ldots ,1,s,\ldots ,m}} {x_i}_{1/2}\dfrac{1}{2}\bigg (\bigg ({a_i}_{j,k,l,\ldots ,1/2,s\ldots ,m}(\tau , \alpha _{j,k,l,\ldots ,1,s,\nu ,\ldots ,m})\nonumber \\&\qquad +{b_i}_{j,k,l,\ldots ,1/2,s\ldots ,m}(\tau , \alpha _{j,k,l,\ldots ,1,s,\ldots ,m})\bigg )\prod _{i\ne r}^{n} h_{{x_r}_\nu } \bigg ) \nonumber \\&\qquad +\dfrac{1}{l_{j,k,l,\ldots ,1,s,\ldots ,m}} \bigg [{x_i}_{1+1/2} {b_i}_{j,k,l,\ldots ,1/2,s\ldots ,m}(\tau , \alpha _{j,k,l,\ldots ,1,s,\ldots ,m})\nonumber \\&\qquad \times \dfrac{{x_i}_{1}^{\beta _{j,k,l,\ldots ,q,s\ldots ,m}(\tau )} }{{x_i}_{2}^{\beta _{j,k,l,\ldots ,1,s\ldots ,m}(\tau )}- {x_i}_{1}^{\beta _{j,k,l,\ldots ,1,s\ldots ,m}(\tau )}}\prod _{i\ne r}^{n} h_{{x_r}_\nu }\nonumber \\&\qquad +\sum _{\underset{i\ne r}{r=1}}^n\bigg ( {d_{ir}}_{j,k,l,\ldots ,1,s,\nu \ldots ,m}(\tau ,\alpha _{j,k,l,\ldots ,1,s,\nu ,\ldots ,m})\left( {x_i}_1{x_r}_\nu \right) \dfrac{1}{h_{{x_i}_1}} \bigg )h_{{x_i}_1}\prod _{i\ne r}^{n} h_{{x_r}_\nu }\bigg ]\nonumber \\&\qquad - \dfrac{1}{n} c_{j,k,l,\ldots ,1,s,\nu ,\ldots ,m},\nonumber \\&\sum _{{j'=j-1}}^{N_1-1} \sum _{{k'=k-1}}^{N_2-1} \sum _{{l'=l-1}}^{N_3-1}\ldots \sum _{{m'=m-1}}^{N_n-1} e_{j',k',l'\ldots ,0,s',\nu ',\ldots ,m'}^{j,k,l,\ldots ,1,s,\nu ,\ldots ,m}\nonumber \\&\quad =-\dfrac{1}{l_{j,k,l,\ldots ,1,s,\ldots ,m}} \bigg [ \sum _{i=1}^n {x_i}_{1/2}\dfrac{1}{2}\bigg (\bigg ({a_i}_{j,k,l,\ldots ,1/2,s,\nu ,\ldots ,m}(\tau , \alpha _{j,k,l,\ldots ,1,s,\nu ,\ldots ,m})\nonumber \\&\qquad -{b_i}_{j,k,l,\ldots ,1/2,s,\nu ,\ldots ,m}(\tau , \alpha _{j,k,l,\ldots ,1,s,\nu ,\ldots ,m})\bigg )\prod _{i\ne r}^{n} h_{{x_r}_\nu } \bigg )\bigg ],\nonumber \\&\sum _{{j'=j+1}}^{N_1-1} \sum _{{k'=k+1}}^{N_2-1}\sum _{{l'=l+1}}^{N_3-1}\cdots \sum _{{m'=m+1}}^{N_n-1}e_{j',k',l'\ldots ,q',s',\nu ',\ldots ,m'}^{j,k,l,\ldots ,1,s,\nu ,\ldots ,m}\nonumber \\&\quad = \dfrac{1}{l_{j,k,l,\ldots ,1,s,\nu ,\ldots ,m}} \bigg [ \sum _{i=1}^{n}{x_i}_{1+1/2} {b_i}_{j,k,l,\ldots ,1+1/2,s,\nu ,\ldots ,m}(\tau , \alpha _{j,k,l,\ldots ,1,s,\nu ,\ldots ,m})\nonumber \\&\qquad \times \bigg (\dfrac{-{x_i}_{2}^{\beta _{j,k,l,\ldots ,1,s,\nu ,\ldots ,m}(\tau )}}{{x_i}_{2}^{\beta _{j,k,l,\ldots ,1,s,\nu ,\ldots ,m}(\tau )}- {x_i}_{1}^{\beta _{j,k,l,\ldots ,1,s,\nu ,\ldots ,m}(\tau )}}\bigg )\prod _{i\ne r}^{n} h_{{x_r}_\nu }\nonumber \\&\qquad -\sum _{\underset{i\ne r}{i,r=1}}^n\bigg ( {d_{ir}}_{j,k,l,\ldots ,1,s,\nu ,\ldots ,m}(\tau ,\alpha _{j,k,l,\ldots ,1,s,\nu ,\ldots ,m})\left( {x_i}_1 {x_r}_\nu \right) \dfrac{1}{h_{{x_i}_1}} \bigg )h_{{x_i}_1}\prod _{i\ne r}^{n} h_{{x_r}_\nu }\bigg ]. \end{aligned}$$
(55)
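The lexicographic index map \(I(j,k,l,\ldots ,m)\) used above to place the entries \(e^{j,k,l,\ldots ,m}_{j',k',l',\ldots ,m'}\) into the matrix \(E(\tau ,\alpha )\) can be sketched as follows; the function name and the tuple-based interface are assumptions made only for illustration.

```python
def global_index(multi_index, n_sizes):
    """Lexicographic global index I(j,k,l,...,m) used to assemble E.

    multi_index = (j, k, ..., m) with each entry in 1..n_i, and
    n_sizes = (n_1, ..., n_n) with n_i = N_i - 1, as defined after (53).
    Returns a 1-based index, matching the paper's convention.
    """
    I = multi_index[0]
    stride = 1
    for idx, n_i in zip(multi_index[1:], n_sizes[:-1]):
        stride *= n_i
        I += (idx - 1) * stride
    return I

# Example: with n = (4, 5, 6), the node (j, k, l) = (2, 3, 1) gets
# I = 2 + (3-1)*4 + (1-1)*4*5 = 10.
assert global_index((2, 3, 1), (4, 5, 6)) == 10
```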

The monotonicity of the system matrix \(E (\tau ,\alpha ) \) is given in the following theorem.

Theorem 2

Assume that the coefficients of A given by (44) are positive and \(c<0\). If h is relatively small, then the matrix \(E (\tau ,\alpha )\) in the system (53) is an M-matrix for any \( \alpha _{j,k,l,\ldots ,m} \,\in \,{\mathcal {A}}^{N}\).

Proof

The proof follows the same lines as in Theorem 1. \(\square \)
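As a practical aid (not part of the scheme itself), the M-matrix property invoked in Theorem 2, and in Theorem 3 below, can be verified through the standard sufficient condition of a positive diagonal, nonpositive off-diagonal entries and strict diagonal dominance, for instance as in the following sketch.

```python
import numpy as np

def is_m_matrix(E):
    """Sufficient check: positive diagonal, nonpositive off-diagonal entries
    and strict diagonal dominance imply that E is a (nonsingular) M-matrix.
    Verification aid only; it does not characterise every M-matrix."""
    d = np.diag(E)
    off = E - np.diag(d)
    return bool(np.all(d > 0)
                and np.all(off <= 0)
                and np.all(d > np.abs(off).sum(axis=1)))
```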

5 Temporal Discretization and Optimization Problem

This section is devoted to the numerical time discretization method for the spatially discretized optimization problem obtained after the fitted finite volume method. Let us reconsider the differential equation coupled with an optimization problem, given in (33), as

$$\begin{aligned} \dfrac{d\,\mathbf{v} (\tau )}{d\,\tau }&= \sup _{\alpha \in {\mathcal {A}}^N} \left[ A (\tau ,\alpha ) \mathbf{v} (\tau )+ G(\tau ,\alpha ) \right] , \quad \mathbf{v} (0)\,\,\,\text {given}. \end{aligned}$$
(56)

For the temporal discretization, we use a constant time step \(\varDelta t > 0\); variable time steps could of course be used. The temporal grid points satisfy \(\varDelta t = \tau _{n+1}-\tau _n \) for \(n = 1, 2,\ldots, m-1 \). We denote \(\mathbf{v} ^n \approx \mathbf{v} (\tau _n)\), \(A^n (\alpha ) = A(\tau _n,\alpha )\) and \(G^n (\alpha )=G(\tau _n,\alpha ).\)

For \(\theta \,\in \left[ \frac{1}{2}, 1\right] \), following Peyrl et al. (2005), the \(\theta \)-method approximation in time is given by

$$\begin{aligned} \mathbf{v} ^{n+1} - \mathbf{v} ^{n}= & {} \varDelta t \,\sup _{\alpha \in {\mathcal {A}}^N} \left( \theta \, [A^{n+1} (\alpha )\,\mathbf{v} ^{n+1} + G^{n+1}(\alpha )] \right. \nonumber \\&\left. + (1-\theta )\, [A^{n} (\alpha )\,\mathbf{v} ^{n} +G^{n}(\alpha )]\right) , \end{aligned}$$
(57)

This can also be written as

$$\begin{aligned} \inf _{\alpha \in {\mathcal {A}}^N} \left( [I + \varDelta t \,\theta \, E^{n+1}(\alpha )]\,\mathbf{v} ^{n+1} + \theta \,\varDelta t \,F^{n+1}(\alpha ) - [I - (1-\theta )\,\varDelta t \, E^{n}(\alpha )]\,\mathbf{v} ^{n} + (1-\theta )\,\varDelta t \,F^{n}(\alpha )\right) = 0. \end{aligned}$$
(58)

We can see that to find the unknown \(\mathbf{v} ^{n+1}\), we also need to solve an optimization problem. Let

$$\begin{aligned} \alpha ^{n+1} \in \left( \underset{\alpha \in {\mathcal {A}}^N }{arg \sup } \left\{ \theta \,\varDelta t \left[ A^{n+1}(\alpha )\,\mathbf{v} ^{n+1} + G^{n+1}(\alpha )\right] + (1-\theta )\,\varDelta t \left[ A^{n} (\alpha )\,\mathbf{v} ^{n} + G^{n}(\alpha )\right] \right\} \right) . \end{aligned}$$
(59)

Then, the unknown \(\mathbf{v} ^{n+1}\) is the solution of the following equation

$$\begin{aligned}&[ I - \theta \, \varDelta t \,A^{n+1} (\alpha ^{n+1})]\,\mathbf{v} ^{n+1} = [I + (1-\theta )\,\varDelta t\, A^{n} (\alpha ^{n+1})]\,\mathbf{v} ^{n} \nonumber \\&\quad +[\theta \, \varDelta t \, G^{n+1}(\alpha ^{n+1})+(1-\theta ) \varDelta t\, G^{n}(\alpha ^{n+1})], \end{aligned}$$

Note that for \(\theta = \dfrac{1}{2}\) we obtain the Crank–Nicolson scheme, and for \(\theta =1\) the fully implicit scheme. Unfortunately, (57)–(59) are nonlinear and coupled, so we need to iterate at every time step. The following iterative scheme, close to the one in Peyrl et al. (2005), is used; a small code sketch of this iteration is given after the list.

  1.

    Let \( \left( \mathbf{v} ^{n+1}\right) ^0=\mathbf{v} ^{n}\),

  2.

    Let \( \hat{\mathbf{v }}^{k}= \left( \mathbf{v} ^{n+1}\right) ^k\),

  3.

    For \(k=0,1,2, \ldots \) until convergence (\(\Vert \hat{\mathbf{v }}^{k+1}-\hat{\mathbf{v }}^{k}\Vert \le \epsilon \) for a given tolerance \(\epsilon \)), solve

    $$\begin{aligned}&\alpha ^{k}_i \in \left( \underset{\alpha \in {\mathcal {A}}^N }{arg \sup } \left\{ \theta \,\varDelta t \left[ A^{n+1} (\alpha )\,\hat{\mathbf{v }}^k+ G^{n+1}(\alpha ) \right] _i + (1-\theta )\,\varDelta t \, \left[ A^{n} (\alpha )\,\mathbf{v} ^{n} + G^{n}(\alpha )\right] _i \right\} \right) \nonumber \\&\alpha ^{k}=(\alpha ^{k})_i\nonumber \\&[ I - \theta \, \varDelta t\,A^{n+1} (\alpha ^{k})]\,\hat{\mathbf{v }}^{k+1} = [I + (1-\theta )\,\varDelta t \, A ^{n}(\alpha ^{k})] \mathbf{v} ^{n} \nonumber \\&\quad +[\theta \, \varDelta t \, G^{n+1}(\alpha ^{k})+(1-\theta ) \varDelta t \,G^{n}(\alpha ^{k})], \end{aligned}$$
    (60)
  4.

    Let \(k_l\) be the last iteration in step 3; set \(\mathbf{v} ^{n+1}:=\hat{\mathbf{v }}^{k_l}\) and \(\alpha ^{n+1}:=\alpha ^{k_l}\).
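The following minimal Python sketch illustrates one time step of this iteration. For readability it assumes the coefficients are frozen over the step (so \(A^{n}=A^{n+1}\) and \(G^{n}=G^{n+1}\)) and that the control set is a finite list searched exhaustively; the callables `A_of`, `G_of` and all names are hypothetical and not taken from the paper's Matlab code.

```python
import numpy as np

def theta_step(v_n, A_of, G_of, controls, dt, theta=1.0, tol=1e-8, max_iter=50):
    """One step of the theta-scheme (57) solved with the iteration (60).

    v_n        : value vector at the current time level
    A_of, G_of : callables returning the matrix A(alpha) and vector G(alpha)
                 for a scalar control alpha (coefficients frozen over the step)
    controls   : finite list of admissible control values (brute-force search)
    """
    N = v_n.size
    Id = np.eye(N)
    v_hat = v_n.copy()                                  # step 1: (v^{n+1})^0 = v^n
    alpha = np.full(N, controls[0], dtype=float)
    for _ in range(max_iter):
        # Step 3a: componentwise maximisation defining alpha^k.
        best = np.full(N, -np.inf)
        for a in controls:
            A, G = A_of(a), G_of(a)
            val = theta * dt * (A @ v_hat + G) + (1.0 - theta) * dt * (A @ v_n + G)
            alpha = np.where(val > best, a, alpha)
            best = np.maximum(best, val)
        # Step 3b: assemble the policy matrix/vector (row i uses alpha_i) and solve.
        A_star = np.vstack([A_of(a)[i] for i, a in enumerate(alpha)])
        G_star = np.array([G_of(a)[i] for i, a in enumerate(alpha)])
        lhs = Id - theta * dt * A_star
        rhs = (Id + (1.0 - theta) * dt * A_star) @ v_n + dt * G_star
        v_new = np.linalg.solve(lhs, rhs)
        if np.linalg.norm(v_new - v_hat, ord=np.inf) <= tol:
            return v_new, alpha                         # step 4: converged iterate
        v_hat = v_new
    return v_hat, alpha
```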

The monotonicity of the system matrix of (58), more precisely \( [I + \varDelta t \,\theta\, E^{n+1}] \), is given in the following theorem.

Theorem 3

Under the same assumptions as in Theorem 1, for any given \(n = 1, 2, \ldots , m - 1 \), the system matrix \( [I + \varDelta t \,\theta\, E^{n+1}] \) in (58) is an M-matrix for each \( \alpha \in {\mathcal {A}}^N \).

Proof

The proof is straightforward. Indeed, as in Theorem 1, \( [I + \varDelta t \,\theta\, E^{n+1}] \) is strictly diagonally dominant since \( \varDelta t > 0 \); hence it is an M-matrix. \(\square \)

The merit of the proposed method is that it is unconditionally stable in time because of the implicit nature of the time discretization. More precisely, following Angermann and Wang (2007, Theorem 6 and Lemma 3), we can prove that the scheme (57) is stable and consistent, so the convergence of the scheme is ensured (see Barles and Souganidis 1991).

6 Application

To validate the method presented in the previous sections, we present here some numerical experiments. All computations were performed in Matlab 2013.

Consider the following three-dimensional Merton stochastic control problem, in which \( \alpha = \alpha _1(t,x) \) is a feedback control taking values in [0, 1], given by

$$\begin{aligned} v(t,x,y,z)=\underset{\alpha \, \in \, [0,1]}{\sup } {\mathbb {E}}\left\{ \dfrac{1}{p}\,x^p(T) \times \dfrac{1}{p}\,y^p(T)\times \dfrac{1}{p}\,z^p(T)\right\} ,\,\,\, 0< p < 1 \end{aligned}$$
(61)

s.t.

$$\begin{aligned} d x_t= & {} \left( r_1 + \alpha _t\, (\mu _1-r_1)\right) \,x_t\,dt + \sigma \,x_t \alpha _t\,d\omega _{t},\nonumber \\ d y_t= & {} \mu _2\,y_t\,dt + \sigma \,y_t\, d\omega _{t},\nonumber \\ d z_t= & {} \mu _3\,z_t\,dt + \sigma \,z_t\, d\omega _{t}. \end{aligned}$$
(62)

\(r_1\), \(\mu _1\), \(\mu _2\), \(\mu _3\), \(\sigma \) are positive constants, \( x_t,\, y_t,\, z_t\,\in \,{\mathbb {R}} \). We assume that \( \mu _1 > r_1 \). For the problem (61)–(62), the corresponding HJB equation is given by

$$\begin{aligned} {\left\{ \begin{array}{ll} \dfrac{d\,v(t, x, y, z)}{d\,t} + \underset{\alpha \in [0, 1]}{\sup } \left[ L^\alpha \,v(t, x, y, z)\right] = 0 \quad \text {on} \ [0,T)\times {\mathbb {R}}\times {\mathbb {R}}\times {\mathbb {R}}\\ v(T, x, y, z ) = \dfrac{x^p}{p}\times \dfrac{y^p}{p}\times \dfrac{z^p}{p}, \,\, \,x,\, \,y,\,\,\, z\,\in {\mathbb {R}}_+ \end{array}\right. } \end{aligned}$$
(63)

where

$$\begin{aligned} L^\alpha \,v(t, x, y, z)= & {} \dfrac{1}{2}\,\sigma ^2\,\alpha ^2\,x^2 \dfrac{d^2 v(t, x, y, z)}{d x^2} + \dfrac{1}{2}\,\sigma ^2\,y^2 \dfrac{d^2 v(t, x, y, z)}{d y^2} +\dfrac{1}{2}\,\sigma ^2\,z^2 \dfrac{d^2 v(t, x, y, z)}{d z^2} \nonumber \\&\quad + \sigma ^2\,\alpha \,x\,y\, \dfrac{d^2 v(t, x, y, z)}{d x\,d y} + \sigma ^2\,\alpha \,x\,z\, \dfrac{d^2 v(t, x, y, z)}{d x\,d z} + \sigma ^2\,z\,y\, \dfrac{d^2 v(t, x, y, z)}{d z\,d y} \nonumber \\&\quad + (r_1 + (\mu _1 -r_1)\alpha )\,x\, \dfrac{d v(t, x,y,z)}{d x} + \mu _2\,y\, \dfrac{d v(t, x, y, z)}{d y} + \mu _3\,z\, \dfrac{d v(t, x, y, z)}{d z}. \end{aligned}$$
The HJB equation (63) can be rewritten in the divergence form
$$\begin{aligned} \dfrac{d v(t,x,y,z) }{d t} + \sup _{\alpha \, \in \, [0,1]}\left[ \nabla \cdot \left( k(t,x,y,z,\alpha ) (v(t,x,y,z))\right) + c(t,x,y,z,\alpha )\,v(t,x,y,z) \right] = 0, \end{aligned}$$
(64)

where the different terms in (64) are given by

$$\begin{aligned} k(v(t,x,y,z)) = A(t,x,y,z,\alpha )\nabla v(t, x, y, z)+ b(t,x,y,z,\alpha )\,v(t, x, y, z) \end{aligned}$$

  is the flux,  \(b = (x\,b_1, y\,b_2, z\,b_3)^T\),

$$\begin{aligned} A=\left[ \begin{array}{ccc} a_{11} &{} a_{12} &{} a_{13} \\ a_{21} &{} a_{22}&{} a_{23} \\ a_{31} &{} a_{32}&{} a_{33} \end{array} \right] . \end{aligned}$$

with

$$\begin{aligned}&a_{11} = \dfrac{1}{2}\sigma ^2\,\alpha ^2\,x^2 ,~ a_{22} = \dfrac{1}{2}\sigma ^2\,y^2, ~ a_{33} = \dfrac{1}{2}\sigma ^2\,z^2, \nonumber \\&a_{12} = a_{21} = \dfrac{1}{2}\sigma ^2\,\alpha \,x\,y, ~ a_{13} = a_{31} = \dfrac{1}{2}\sigma ^2\,\alpha \,x\,z,\nonumber \\&a_{23} = a_{32} = \dfrac{1}{2}\sigma ^2\,y\,z.\end{aligned}$$
(65)
$$\begin{aligned}&{\left\{ \begin{array}{ll} &{} b_1(t,x,y,z,\alpha )= r_1 + (\mu _1-r_1)\,\alpha - \sigma ^2\,\alpha - \sigma ^2\,\alpha ^2\\ &{} b_2(t,x,y,z,\alpha ) = \mu _2 -\dfrac{1}{2}\sigma ^2\alpha - \dfrac{3}{2}\sigma ^2 \\ &{} b_3(t,x,y,z,\alpha ) = \mu _3 -\dfrac{1}{2}\sigma ^2\,\alpha - \dfrac{3}{2}\sigma ^2\\ &{} c(t,x,y,z,\alpha ) = - \left[ r_1 + (\mu _1-r_1)\,\alpha - 2\,\sigma ^2\,\alpha - \sigma ^2\,\alpha ^2 + \mu _2\,+\mu _3 - 3\,\sigma ^2 \right] . \end{array}\right. } \end{aligned}$$
(66)
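For concreteness, the coefficients (65)–(66) at a given point and control value can be assembled as in the following sketch; the function and argument names are illustrative only.

```python
import numpy as np

def merton_coefficients(x, y, z, alpha, r1, mu1, mu2, mu3, sigma):
    """Assemble A, b and c of the divergence form (64) at a grid point,
    following (65)-(66). Illustrative helper; names are not from the paper."""
    s2 = sigma**2
    A = 0.5 * s2 * np.array([
        [alpha**2 * x**2, alpha * x * y,  alpha * x * z],
        [alpha * x * y,   y**2,           y * z        ],
        [alpha * x * z,   y * z,          z**2         ],
    ])
    b1 = r1 + (mu1 - r1) * alpha - s2 * alpha - s2 * alpha**2
    b2 = mu2 - 0.5 * s2 * alpha - 1.5 * s2
    b3 = mu3 - 0.5 * s2 * alpha - 1.5 * s2
    b = np.array([x * b1, y * b2, z * b3])    # b = (x b1, y b2, z b3)^T
    c = -(r1 + (mu1 - r1) * alpha - 2.0 * s2 * alpha - s2 * alpha**2
          + mu2 + mu3 - 3.0 * s2)
    return A, b, c
```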

The domain where we compare the solutions is \( \varOmega =\left[ 0, x_{{max}}\right] \times \left[ 0, y_{{max}}\right] \times \left[ 0, z_{{max}}\right] \). For each simulation, the exact or reference solution is the analytical solution obtained with the Ansatz method developed in the next subsection.

6.1 Analytical Solution Using Ansatz Method

Here we derive the analytical solution using an Ansatz decomposition. Let us set the Ansatz decomposition of v as

$$\begin{aligned} v(t, x, y, z)=\psi (t)\times u(x)\times u(y)\times u(z), \end{aligned}$$
(67)

where   \( u(x) = \dfrac{x^p}{p},\) \(\,\, 0< p < 1\), \(\forall \, x \,\in {\mathbb {R}}_+ \) is the power utility function. The derivatives of v(t, x, y, z) are given by

$$\begin{aligned} {\left\{ \begin{array}{ll} \dfrac{d v(t,x,y,z)}{d x}=\psi (t) \dfrac{d u(x)}{d x}\, u(y)\,u(z);\,\, \dfrac{d v(t,x,y,z)}{d y}=\psi (t) \dfrac{d u(y)}{d y}\, u(x)\,u(z)\\ \dfrac{d v(t,x,y,z)}{d z}=\psi (t) \dfrac{d u(z)}{d z}\, u(x)\,u(y);\,\, \dfrac{d v(t,x,y,z)}{d t}=\psi '(t)\, u(x)\,u(y)\,u(z)\\ \dfrac{d^2 v(t,x,y,z)}{d x^2} =\psi (t) \dfrac{d^2 u(x)}{d x^2}\, u(y)\,u(z);\,\, \dfrac{d^2 v(t,x,y,z)}{d y^2} =\psi (t) \dfrac{d^2 u(y)}{d y^2}\, u(x)\,u(z)\,\\ \dfrac{d^2 v(t,x,y,z)}{d z^2} =\psi (t) \dfrac{d^2 u(z)}{d z^2}\, u(x)\,u(y);\,\, \dfrac{d^2 v(t,x,y,z)}{d x\,d y} =\psi (t) \dfrac{d u(x)}{d x}\,\dfrac{d u(y)}{d y}\, u(z)\\ \dfrac{d^2 v(t,x,y,z)}{d x\,d z} =\psi (t) \dfrac{d u(x)}{d x}\,\dfrac{d u(z)}{d z}\, u(y);\,\, \dfrac{d^2 v(t,x,y,z)}{d y\,d z} =\psi (t) \dfrac{d u(y)}{d y}\,\dfrac{d u(z)}{d z}\, u(x), \end{array}\right. } \end{aligned}$$
(68)

Plugging these into (63), we get

$$\begin{aligned} {\left\{ \begin{array}{ll} &{}\dfrac{d\,\psi (t)}{d t} \, u(x)\,u(y)\,u(z) \\ &{}\quad + r_1\,x \,\psi (t) \dfrac{d \left( u(x)\,u(y)\,u(z)\right) }{d x} +\underset{\alpha \in A}{\sup }\left[ \alpha (\mu _1-r_1)\,x\,\psi (t) \dfrac{d \left( u(x)\,u(y)\,u(z)\right) }{d x} \right. \\ &{}\quad \left. +\mu _2\,y\,\psi (t) \dfrac{d \left( u(x)\,u(y)\,u(z)\right) }{d y}+\mu _3\,z\,\psi (t) \dfrac{d \left( u(x)\,u(y)\,u(z)\right) }{d z} \right. \\ &{}\quad \left. + \dfrac{1}{2} \sigma ^2\,\alpha ^2\,x^2\psi (t) \dfrac{d^2 \left( u(x)\,u(y)\,u(z)\right) }{d x^2} + \dfrac{1}{2} \sigma ^2\,y^2 \,\psi (t) \dfrac{d^2 \left( u(x)\,u(y)\,u(z)\right) }{d y^2} \right. \\ &{}\quad \left. + \dfrac{1}{2} \sigma ^2\,z^2\psi (t) \dfrac{d^2 \left( u(x)\,u(y)\,u(z)\right) }{d z^2} + \sigma ^2\,\alpha \,\psi (t)\,x\,y\, \dfrac{d^2 }{d x\,d y} \left( u(x)\,u(y)\,u(z)\right) \right. \\ &{} \quad \left. + \sigma ^2\,\alpha \,\psi (t) \,x\,z\, \dfrac{d^2 }{d x\,d z} \left( u(x)\,u(y)\,u(z)\right) + \sigma ^2\,\psi (t) \,y\,z\, \dfrac{d^2 }{d y\,d z} \left( u(x)\,u(y)\,u(z)\right) \right] =0 \\ &{}\quad \psi (T)=1,\; (\text {since}~~ v(T,x,y,z)=\psi (T)\,u(x)\,u(y)\,u(z)= u(x)\,u(y)\,u(z)) \end{array}\right. } \end{aligned}$$
(69)

We then obtain

$$\begin{aligned}&\dfrac{d \psi (t)}{d t}+ p\,\rho \,\psi (t) = 0 ~~\text {where}~\nonumber \\&\rho = \underset{\alpha \in \,[0,1]}{\sup }\left[ r_1 + (\mu _1-r_1)\,\alpha + \mu _2+\mu _3\,+ \dfrac{1}{2} \sigma ^2\,\alpha ^2\,(p-1)\right. \nonumber \\&\quad \quad \left. + \sigma ^2\,(p-1) + 2\,\sigma ^2\,\alpha \,p +\sigma ^2\,p\right] ,\quad \psi (T) = 1. \end{aligned}$$
(70)

So, by setting \( \tau = T-t \), the analytical value function for the Ansatz method is

$$\begin{aligned} v\left( \tau ^n, x_i, y_j, z_k\right) = e^{p \times (n \times \varDelta t - T) \times \rho } \times \dfrac{(x_i)^p}{p} \times \dfrac{(y_j)^p}{p}\times \dfrac{(z_k)^p}{p}, \,\,\text {with}\,\,0< p < 1. \end{aligned}$$
(71)
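A small sketch of how this reference solution can be evaluated numerically is given below: \(\rho \) is obtained by maximising the concave quadratic in \(\alpha \) appearing in (70) over [0, 1], and (71) is then evaluated verbatim; the helper names are assumptions made for illustration.

```python
import numpy as np

def rho_from_70(r1, mu1, mu2, mu3, sigma, p):
    """Maximise over alpha in [0, 1] the quadratic inside the sup in (70)."""
    s2 = sigma**2

    def f(a):
        return (r1 + (mu1 - r1) * a + mu2 + mu3
                + 0.5 * s2 * a**2 * (p - 1) + s2 * (p - 1)
                + 2.0 * s2 * a * p + s2 * p)

    # The quadratic is concave in alpha (since p < 1): clip its vertex to [0, 1]
    # and compare with the endpoints for safety.
    a_star = float(np.clip(((mu1 - r1) + 2.0 * s2 * p) / (s2 * (1.0 - p)), 0.0, 1.0))
    return max(f(0.0), f(a_star), f(1.0))

def ansatz_value(n, dt, x, y, z, rho, p, T):
    """Reference value function (71) evaluated at tau^n = n * dt."""
    return np.exp(p * (n * dt - T) * rho) * (x**p / p) * (y**p / p) * (z**p / p)
```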

We use the following \(L^2\left( [0,T]\times \varOmega \right) \) norm of the absolute error

$$\begin{aligned}&\left\| v^m-v\right\| _{L^2\left( [0,T]\times \varOmega \right) }\nonumber \\&\quad = \left( \sum _{n = 0}^{m-1} \sum _{i = 1}^{N_1-1} \sum _{j = 1}^{N_2-1} \sum _{k = 1}^{N_3-1} (\tau _{n+1}-\tau _n)\times l_{i,j,k}\times (v_{i,j,k}^n - v\left( \tau ^n, x_{i}, y_j, z_k\right) )^2 \right) ^{1/2}, \end{aligned}$$
(72)

where \(v^m\) is the numerical approximation of v computed from our numerical scheme. For our computation, we use \(\varOmega =[0,1/2]\times [0,1/4] \times [0,1/2]\) as the computational domain with \(N_1= 10\), \(N_2 = 10\), \(N_3= 10\), \(r_1 = 0.0449\), \(\mu _1 = 0.0657\), \(\mu _2 = 0.067\), \(\mu _3 = 0.066\), \(\sigma = 0.2537\), \(p= 0.13\) and \(T=1 \). Figure 1 shows the structure of the matrix A after space discretisation with the fitted finite volume method. As can be observed, the structure of the matrix is similar to the one obtained with the finite difference method. Figure 2 shows the optimal investment policy as a function of x when using the fitted scheme. The optimal investment policy for the finite difference method is quite similar. Indeed, the optimal parameter \(\alpha \) is independent of y and z. The controller is the solution of (61). It is computed with the numerical procedure outlined in Sect. 5. We have also found that, overall, the maximum number of iterations in our optimisation algorithm is 3 for both the fitted scheme and the finite difference scheme.
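The space–time error norm (72) used to produce the tables below can be evaluated as in the following sketch; the array shapes and names are assumptions for illustration.

```python
import numpy as np

def l2_space_time_error(v_num, v_exact, dt, cell_volumes):
    """Discrete L^2([0,T] x Omega) error (72) with a constant time step.

    v_num, v_exact : arrays of shape (m, N1-1, N2-1, N3-1) holding the numerical
                     and reference solutions at the interior nodes of each level
    cell_volumes   : array of shape (N1-1, N2-1, N3-1) with the measures l_{i,j,k}
    """
    sq = (v_num - v_exact)**2 * cell_volumes   # broadcast over the time levels
    return float(np.sqrt(dt * sq.sum()))
```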

Fig. 1 Structure of the matrix A at time \(T=1\)

Fig. 2 Optimal investment policy at time \(T=1\)

We compare the fitted finite volume and the finite difference method in Table 1.

Table 1 Comparison of the implicit fitted finite volume method and implicit finite difference method

Figure 3 shows the structure of the matrix A and Fig. 4 shows the optimal investment policy as a function of x. The controller is the solution of (61). It is computed with the numerical procedure outlined in Sect. 5.

Fig. 3 Structure of the matrix A at time \(T=1.5\)

Fig. 4 Optimal investment policy at time \(T=1.5\)

In Table 2, we have used \(\varOmega =[0,1/2]\times [0,1/4] \times [0,1/2]\) as the computational domain with the following parameters: \(N_1= 8\), \(N_2 = 9\), \(N_3= 10\), \(r_1 = 0.0449\), \(r_2 = 0.0448/3\), \(r_3 = 0.0447\), \(\mu _1 = 0.0657\), \(\mu _2 = 0.0656\), \(\mu _3 = 0.0655\), \(\sigma = 0.2537\), \(p= 0.17\) and \(T=1.5 \).

Table 2 Comparison of the implicit fitted finite volume method and implicit finite difference method

Tables 1 and 2 display the numerical errors of the fitted finite volume method and the finite difference method. By fitting the data from Tables 1 and 2, we found that the convergence order in time is 1 for both the fitted finite volume method and the finite difference method. From the two tables, we can observe that the implicit fitted finite volume method is slightly more accurate than the implicit finite difference method, thanks to the fitted technique.

7 Conclusion

We have introduced a novel scheme based on the finite volume method with a fitted technique to solve high dimensional stochastic optimal control problems (\(n\ge 3\)). The optimization problem is solved at every time step using an iterative method. We have shown that the system matrix of the resulting nonlinear system is an M-matrix, and therefore the maximum principle is preserved for the discrete system obtained after the fitted finite volume spatial discretization. Numerical experiments demonstrate the accuracy of the novel scheme compared to the standard finite difference method.