1 Introduction

The method of lines (MOL) is a computational approach for solving PDE problems of the form

$$\begin{aligned} \partial _tu=f\left( t,x,u,\partial _xu,\partial ^2_{xx}u\right) . \end{aligned}$$

The numerical solution process proceeds in two steps:

  1. Space semi-discretization (using, for example, finite differences).

  2. The resulting system of semi-discrete ordinary differential equations (ODEs) is integrated in time.
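To make the two steps concrete, here is a minimal Python sketch for the model heat equation \(u_t=u_{xx}\) on \((0,1)\) with homogeneous Dirichlet conditions; the grid size and the choice of time integrator are illustrative assumptions, not part of the scheme analysed in this paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

N = 50                            # number of interior nodes (an assumption)
x = np.linspace(0.0, 1.0, N + 2)
h = x[1] - x[0]

def rhs(t, u):
    # Step 1: space semi-discretization by the second central difference.
    du = np.zeros_like(u)
    du[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
    return du                     # boundary values stay frozen at zero

u0 = np.sin(np.pi * x)            # initial condition
# Step 2: integrate the semi-discrete ODE system in time.
sol = solve_ivp(rhs, (0.0, 0.1), u0, method="BDF")
# Compare with the exact solution exp(-pi^2 t) sin(pi x).
print(np.max(np.abs(sol.y[:, -1] - np.exp(-np.pi**2 * 0.1) * u0)))
```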

From the extensive literature on the numerical method of lines for classical differential equations we mention the monographs [8, 25, 27]. MOL for nonlinear parabolic functional differential equations with initial boundary conditions of the Dirichlet type is investigated in [30], where error estimates implying the convergence of MOL on a rectangular domain are given. Shakeri and Dehghan [24] present two forms of MOL for a one-dimensional nonlocal hyperbolic partial differential equation. MOL is used in [4] to obtain numerical solutions of a quasilinear parabolic inverse problem. Parabolic inverse problems can also be reduced to a system of ODEs by a fourth-order compact scheme (see [18]).

The aim of the paper is to construct a method of lines for nonlinear parabolic functional differential equations with general initial boundary conditions on a non-rectangular domain. To deal with a cylindrical domain, we can proceed in several ways. For instance, we can use the polar coordinate system, as in [25, 27], or the Cartesian coordinate system. In the latter case we have at least two possibilities: we can add extra points either at the boundary of the domain (see [14, 17]) or outside the domain (see [19]). In this paper we consider the second possibility. In our scheme we obtain the additional points by reflection with respect to the boundary, and we extend the solution of the approximate problem outside the domain by a transformation generated by the boundary condition. This leads to differential-algebraic equations (DAEs). The theoretical analysis of DAEs and some appropriate numerical methods for initial and boundary value problems can be found in [3, 15]. In [6] the authors study numerical solutions of DAEs. Soltanian et al. [26] present a homotopy perturbation method for solving DAEs. The numerical solution of DAEs by a one-stage Rosenbrock scheme with complex coefficients is investigated in [2]. However, these works consider systems which depend only on the present time. Equations with aftereffects of various kinds (such as delays) are called functional differential equations (FDEs). The theory of FDEs can be found in [7]. Semiexplicit numerical methods of Rosenbrock type for functional differential-algebraic equations in the whole space were studied in [10]. In [13] the authors propose, on the basis of Newton's method, a fast quasilinearization numerical scheme, coupled with Rothe's method, for fully nonlinear parabolic equations. Comparison theorems for ODE and DAE systems are investigated in [11].

We point out several advantages of this scheme:

  1. Simplicity of description and implementation.

  2. No need to consider multiple cases.

  3. No need to search for the nearest boundary points.

  4. No need to approximate the boundary conditions.

  5. A universal interpolation pattern and the values of \(u\) at the grid points outside the domain \(Q\).

The new approach is applicable to \(n\)-dimensional spatial variables, even though the numerical experiments are implemented for a two-dimensional spatial case. The weak point of the proposed method is a restriction on the boundary of the spatial domain: we assume that the domain \(\Omega \) is bounded and convex with boundary of class \(C^{2+\alpha },\ 0<\alpha <1\). We need this assumption to construct a suitably regular extension of functions beyond the domain \(Q\). Fakhar-Izadi and Dehghan [5] deal with irregular domains for a weakly parabolic partial integro-differential equation; they propose a spectral method to find numerical solutions. It seems that spectral methods are more time-consuming than methods on regular meshes which obey a kind of maximum principle.

Our technique may be used in many physical problems such as chemical reactions [9] and heat conduction [22], where the flow across the boundary surface is proportional to the difference between the surrounding density and the density inside the surface. In those papers the authors consider complex approximation methods, exclusively for differential equations without any delay. The article [23] examines nonlinearities and delays with Robin boundary conditions. In our paper we deal with difference schemes for functional differential equations with Robin boundary conditions; the functional dependence covers a much wider class than delays. We can also apply our technique to a quasilinear parabolic system of PDEs, more precisely to the chemotaxis model considered in [29]. Recent advances in computational mathematical biology confirm our interest in theoretical investigations, cf. [16]. If the domain is not convex, as often happens in real-world applications (chemotaxis, reaction-diffusion, etc.), it is still possible to use our technique by means of smooth mappings which transform an irregular domain onto a disc. For instance, a bean-shaped region can simply be mapped onto a disc.

The paper is organized in the following way. In Sect. 2 we formulate the problem and set up the notation and terminology. Section 3 describes the discretization and contains auxiliary lemmas. In Sect. 4 our main results are stated and proved. In the last section numerical experiments are presented.

2 Formulation of the problem

Let \(\Omega \subset \mathbb {R}^n\) be a bounded, convex domain with boundary \(\partial \Omega \) of class \(C^{2+\alpha }\). Write

$$\begin{aligned} Q_0 = [-T_0,0]\times \bar{\Omega },\quad Q = [0,T]\times \bar{\Omega }, \end{aligned}$$

where \(T>0\) and \(T_0\in \mathbb {R}_+=[0,+\infty )\). For a function \(u:Q_0\cup Q \rightarrow \mathbb {R}\) and for \(t\in [0,T]\) we define a function \(u_{t} :Q_0 \rightarrow \mathbb {R}\) by

$$\begin{aligned} u_{t}(\tau ,x) = u(t+\tau ,x),\quad (\tau ,x)\in Q_0. \end{aligned}$$

For any metric spaces \(X\) and \(Y\) we denote by \(C(X,Y)\) the space of all continuous functions defined on \(X\) and taking values in \(Y\). In the case \(Y=\mathbb {R}\) we write \(C(X)\).

Given \(f:Q\times C(Q_0)\times \mathbb {R}^n\rightarrow \mathbb {R},\ a_{ij}:Q \rightarrow \mathbb {R},\beta :[0,T]\times \partial \Omega \rightarrow [0,+\infty )\), \(\psi :Q_0\rightarrow \mathbb {R}\), we consider the functional differential equation

$$\begin{aligned} \partial _tu -\sum _{i,j=1}^n a_{ij}(t,x)\partial _{x_ix_j}^2u = f(t,x,u_t,\partial _x u),\quad (t,x)\in Q \end{aligned}$$
(1)

with the initial boundary conditions

$$\begin{aligned} \frac{\partial u}{\partial n}+\beta (t,x)u = 0,\quad (t,x)\in [0,T]\times \partial \Omega , \end{aligned}$$
(2)
$$\begin{aligned} {\qquad \;\;}u(t,x) = \psi (t,x),\quad (t,x)\in Q_0, \end{aligned}$$
(3)

where \(n=n(x)\) is the unit outward normal on \(\partial \Omega \).

Equation (1) with a particular right hand side can be interpreted as a reaction-diffusion equation, which is widely used to model various physical, chemical and biological problems, see [1, 16, 20]. Note that the right hand side of the equation contains a functional variable; therefore, we can also consider differential equations with deviated variables or differential integral equations, as shown in Examples 1 and 2.

We transform the boundary condition (2) by considering an extension of a function \(u\) outside \(\tilde{Q}:=Q_0\cup Q\). Set \(R(x) = x\) for \(x\in \Omega \) and

$$\begin{aligned} R(x) = \mathop {\mathrm{argmin}}\limits _{\bar{x}\in \partial \Omega }\Vert {\bar{x}-x}\Vert \quad \text{for } x\in \Omega ^c. \end{aligned}$$

Given \(u:\tilde{Q} \rightarrow \mathbb {R}\) and denoting \(r(x) = 2R(x)-x,\) we extend \(u\) to the set \([-T_0,T]\times \mathbb {R}^n\) by

$$\begin{aligned} u(t,x) = u(t,r(x))\exp \{-\Vert x-r(x)\Vert \cdot \beta (t,R(x))\}\quad \text{for } x\in \Omega ^c. \end{aligned}$$

For any smooth function \(u\) this extension implies the boundary condition (2) on \([0,T]\times \partial \Omega \). If \(\beta \equiv 0\), then the extension is a mirror reflection with respect to the boundary \(\partial \Omega \), as generated by Neumann conditions.
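For the unit disc \(\Omega = B_1\) the nearest boundary point has the closed form \(R(x)=x/\Vert x\Vert \), so the extension can be sketched in a few lines of Python; the sample functions \(u\) and \(\beta \) below are hypothetical placeholders.

```python
import numpy as np

def R(x):
    """Nearest boundary point of the unit disc for an exterior point x."""
    return x / np.linalg.norm(x)

def extend(u, beta, t, x):
    """Value of the extended function at an exterior point x."""
    Rx = R(x)
    r = 2.0 * Rx - x                         # reflection across the boundary
    damping = np.exp(-np.linalg.norm(x - r) * beta(t, Rx))
    return u(t, r) * damping

# With beta == 0 the extension is a pure mirror reflection (Neumann case).
u = lambda t, z: np.cos(t * (z[0]**2 + z[1]**2 + 1.0))
beta0 = lambda t, Rx: 0.0
x_out = np.array([1.1, 0.0])                 # a point outside the disc
print(extend(u, beta0, 0.5, x_out))          # equals u(0.5, (0.9, 0))
```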

3 Discretization

We construct a regular mesh on \(\mathbb {R}^n\) in the following way. Let \(h=(h_1,\ldots ,h_n)\), \(h_i > 0\) be the steps of the mesh. For \(m\in \mathbb {Z}^n,\) \( m=(m_1,\ldots ,m_n),\) we denote nodal points in the following way: \(x^{(m)} = (m_1h_1,\ldots ,m_nh_n)\). Write \(\mathbb {R}_{h}^{n} = \{x^{(m)}: m\in \mathbb {Z}^n\}\), \(\Omega _h = \Omega \cap \mathbb {R}_{h}^{n}\) and

$$\begin{aligned} \Omega _h^{*}&= \left\{ x^{(m)}\in \mathbb {R}_{h}^{n}\setminus \Omega _h : \mathop {\exists }\limits _{x^{(\bar{m})}\in \Omega _h} \max _{i}\left| m_i-\bar{m}_i\right| \le 1\right\} ,\\ Q_{h}&= [0,T]\times \Omega _h,\quad Q_h^*= [0,T]\times \Omega _h^*,\quad Q_{0.h}=[-T_0,0]\times (\Omega ^*_h\cup \Omega _h). \end{aligned}$$

Denote \(\tilde{\Omega }_h =\Omega _h^{*}\cup \Omega _h\). Let \(p^{*}\) be the number of all nodal points of \(\Omega _h^{*}\) and \(p\) the number of all nodal points of \(\Omega _{h}\); set \(P=p^{*}+p\), see Fig. 1. For any spaces \(X\) and \(Y\) we denote by \({}^XY\) the class of all functions defined on \(X\) and taking values in \(Y\). Difference operators for the spatial variables are defined in the following way. Write \(J=\{(i,j):\ i,j=1,\ldots ,n,\ i\ne j\}\). Suppose that we have two disjoint sets \(J_+,\ J_- \subset J\) such that \(J_+\cup J_- = J\) and \((i,j)\in J_+\) if and only if \((j,i)\in J_+\). Given \(u:[-T_0,T] \rightarrow {}^{\tilde{\Omega }_h}\mathbb {R}\) and \(m\in \mathbb {Z}^n\), write

$$\begin{aligned} \delta _i^+u^{(m)}(t)&= \frac{u^{(m+e_i)}(t) - u^{(m)}(t)}{h_i},\ \delta _i^-u^{(m)}(t) = \frac{u^{(m)}(t) - u^{(m-e_i)}(t)}{h_i},\\ \delta _iu^{(m)}(t)&= \frac{1}{2}\left[ \delta _i^+u^{(m)}(t) + \delta _i^-u^{(m)}(t)\right] ,\\ \delta u^{(m)}(t)&= \left( \delta _1u^{(m)}(t),\ldots ,\delta _nu^{(m)}(t)\right) . \end{aligned}$$
Fig. 1 Nodal points of \(\Omega _h^{*}\) and \(\Omega _{h}\)

The difference operators \(\delta ^{(2)} = [\delta _{ij}]_{i,j=1,\ldots ,n},\) are defined in the following way:

$$\begin{aligned} \delta ^{(2)}_{ii}u^{(m)}(t)=\delta _i^+\delta _i^-u^{(m)}(t)\quad \text{for } i=1,\ldots ,n \end{aligned}$$

and

$$\begin{aligned} \delta ^{(2)}_{ij}u^{(m)}(t)\,&= \,\frac{1}{2}\left[ \delta _i^+\delta _j^-u^{(m)}(t) + \delta _i^-\delta _j^+u^{(m)}(t)\right] \quad \text{for}\ (i,j)\in J_-,\\ \delta ^{(2)}_{ij}u^{(m)}(t)\,&= \,\frac{1}{2}\left[ \delta _i^+\delta _j^+u^{(m)}(t) + \delta _i^-\delta _j^-u^{(m)}(t)\right] \quad \text{for}\ (i,j)\in J_+. \end{aligned}$$
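A possible array implementation of these operators on a uniform two-dimensional grid is sketched below; interior nodes only, with the periodic shifts of numpy standing in for the exterior layer \(\Omega _h^{*}\) (an assumption of the sketch, not of the scheme).

```python
import numpy as np

def sh(u, d):
    """u evaluated at the shifted multi-index m + d, d = (d1, d2)."""
    return np.roll(np.roll(u, -d[0], axis=0), -d[1], axis=1)

def delta(u, i, h):
    """Central first difference delta_i."""
    e = (1, 0) if i == 0 else (0, 1)
    return (sh(u, e) - sh(u, (-e[0], -e[1]))) / (2.0 * h[i])

def delta2_ii(u, i, h):
    """Second difference delta_i^+ delta_i^-."""
    e = (1, 0) if i == 0 else (0, 1)
    return (sh(u, e) - 2.0 * u + sh(u, (-e[0], -e[1]))) / h[i]**2

def delta2_12(u, h, plus):
    """Mixed difference; plus=True for (i,j) in J_+, False for J_-."""
    if plus:    # (delta_1^+ delta_2^+ + delta_1^- delta_2^-) / 2
        num = (sh(u, (1, 1)) - sh(u, (1, 0)) - sh(u, (0, 1)) + 2.0 * u
               - sh(u, (-1, 0)) - sh(u, (0, -1)) + sh(u, (-1, -1)))
    else:       # (delta_1^+ delta_2^- + delta_1^- delta_2^+) / 2
        num = (sh(u, (1, 0)) + sh(u, (0, 1)) + sh(u, (-1, 0)) + sh(u, (0, -1))
               - 2.0 * u - sh(u, (1, -1)) - sh(u, (-1, 1)))
    return num / (2.0 * h[0] * h[1])

# Consistency check on u(x,y) = x*y, whose mixed derivative equals 1.
h = (0.1, 0.1)
X, Y = np.meshgrid(np.arange(8) * h[0], np.arange(8) * h[1], indexing="ij")
print(delta2_12(X * Y, h, plus=True)[2:-2, 2:-2])    # ~1 in the interior
```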

Let us consider the interpolating operator

\(\mathcal {J}_h:C(Q_{0.h}\cup Q_{h}\cup Q_h^{*})\rightarrow C(conv(Q_{0.h}\cup Q_{h}\cup Q_h^{*}))\) defined by

$$\begin{aligned} \mathcal {J}_h[u](t,x) = \sum _{s\in S_+}u^{(m+s)}(t)\left( \frac{x-x^{(m)}}{h} \right) ^s \left( \mathbf 1 - \frac{x-x^{(m)}}{h} \right) ^{1-s} \end{aligned}$$
(4)

where \(x^{(m)}\le x \le x^{(m+\mathbf 1 )}\), \(\mathbf 1 :=(1,\ldots ,1)\),

$$\begin{aligned} S_+ = \{s = (s_1,\ldots ,s_n): s_i\in \{0,1\},\ 1\le i\le n \}. \end{aligned}$$

In [12] (page 85) we find another extrapolation method. It is easy to see that \(\mathcal {J}_h[u] \in C(conv(Q_{0.h}\cup Q_{h}\cup Q_h^{*}))\) and the norm of \(\mathcal {J}_h\) is equal to \(1\).
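The operator (4) is ordinary multilinear interpolation on the grid cell containing \(x\). A dimension-independent Python sketch, assuming a uniform grid with origin \(0\) and a callable `values` returning \(u^{(m)}(t)\), reads:

```python
import itertools
import numpy as np

def J_h(values, x, h):
    """Multilinear interpolation (4) of a grid function at the point x."""
    x = np.asarray(x, dtype=float)
    h = np.asarray(h, dtype=float)
    m = np.floor(x / h).astype(int)      # cell with x^{(m)} <= x <= x^{(m+1)}
    theta = x / h - m                    # (x - x^{(m)}) / h, lies in [0,1]^n
    total = 0.0
    for s in itertools.product((0, 1), repeat=len(x)):    # the set S_+
        weight = np.prod(np.where(np.array(s) == 1, theta, 1.0 - theta))
        total += weight * values(m + np.array(s))
    return total

# The weights are nonnegative and sum to 1, so the operator norm is 1:
const = lambda m: 1.0                    # constant grid function
print(J_h(const, x=[0.73, 0.41], h=[0.25, 0.25]))        # exactly 1.0
```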

Consider the differential-difference equations

$$\begin{aligned}&\frac{d}{dt} u^{(m)}(t) -\sum _{i,j=1}^n a_{ij}\left( t,x^{(m)}\right) \delta ^{(2)}_{ij}u^{(m)}(t)\nonumber \\&\quad =f\left( t,x^{(m)},\left( \mathcal {J}_h[u]\right) _t,\delta u^{(m)}(t)\right) \end{aligned}$$
(5)

for \((t,x^{(m)})\in [0,T]\times \Omega _{h}\), the algebraic equations

$$\begin{aligned} u^{(m)}(t) = \exp \left\{ -\left\| x^{(m)}-r\left( x^{(m)}\right) \right\| \,\beta \left( t,R\left( x^{(m)}\right) \right) \right\} \mathcal {J}_h[u]\left( t,r\left( x^{(m)}\right) \right) \end{aligned}$$
(6)

for \((t,x^{(m)})\in [-T_0,T]\times \Omega _h^{*}\), and with the initial condition

$$\begin{aligned} u^{(m)}(t) =\psi \left( t,x^{(m)}\right) \end{aligned}$$
(7)

for \((t,x^{(m)})\in [-T_0,0]\times \Omega _{h}.\)

The method of lines (5)–(7) can be written as the abstract differential-algebraic problem

$$\begin{aligned} \left[ \begin{array}{c@{\quad }c} I_{p\times p} &{} 0\\ 0 &{} 0 \end{array}\right] \frac{d}{dt}\mathbf u - \mathbf K \mathbf u = \varphi _h(t,\mathbf u _t) \end{aligned}$$
(8)

with the initial condition

$$\begin{aligned} \mathbf u (t) = \psi _h(t)\quad \text{for } t\in [-T_0,0]. \end{aligned}$$
(9)

\(\mathbf K \) is generated by \([a_{ij}]\) and \(\partial _{q_j}f\), while \(\varphi _h\) is generated by \(f\). The choice is not unique: here we choose \(\mathbf K \) dependent on \(a_{ij}\) for the components with indices not greater than \(p\), and \(\varphi _h(t,\mathbf u _t)\) dependent on \(f\).

Assumption A[\(\sigma \)]

  • \(\sigma :[0,T]\times \mathbb {R}_+\rightarrow \mathbb {R}_+\) is continuous non-decreasing in the second variable, \(\sigma (t,0) = 0\), and the only maximal solution \(\omega (\cdot ;\gamma ,\tilde{\gamma })\) to the Cauchy problem

    $$\begin{aligned} \omega '(t) = \sigma (t,\omega (t))+\tilde{\gamma },\quad \omega (t) = \gamma \quad \text{for } t\in [-T_0,0] \end{aligned}$$
    (10)

    tends to zero as \(\tilde{\gamma },\ \gamma \rightarrow 0\).

In particular \(\omega '(t) =\sigma (t,\omega (t)),\ \omega (t)= 0\) for \(t\in [-T_0,0]\) implies \(\omega (\cdot ;0,0)\equiv 0\).

Assumption A[K]

  • \(\mathbf K :[0,T]\rightarrow M_{P\times P}\) is bounded and continuous,

  • \(k_{i1}+\ldots +k_{iP}= 0\) for each \(i=1,\ldots ,P\),

  • \(k_{ij}\ge 0\) for \(i\ne j\), \(i,j=1,\ldots ,P\),

  • the matrix \(\mathbf K \) is DA-irreducible (i.e. \(k_{i1}+\ldots +k_{ip}>0\) for \(i>p\)).

Assumption A[C] The initial function \(\psi _h\in C([-T_0,0],\mathbb {R}^P)\) satisfies the consistency condition

$$\begin{aligned} K_3\psi _{h.D}(0) + K_4\psi _{h.A}(0) + \tilde{\varphi }_A(0)=0, \end{aligned}$$

where the subscripts \(D\) and \(A\) denote the differential (first \(p\)) and algebraic (remaining \(p^{*}\)) components, and \(K_1,\ldots ,K_4\) are the corresponding blocks of \(\mathbf K \) (see the proof of Lemma 1).

Lemma 1

Suppose that \(\mathbf {A[K]}\) and \(\mathbf {A[\sigma ]}\) are satisfied and \(\tilde{\varphi }\) is bounded and continuous. Then the problem

$$\begin{aligned} \left[ \begin{array}{c@{\quad }c} I_{p\times p} &{} 0\\ 0 &{} 0 \end{array}\right] \frac{d}{dt}\mathbf z - \mathbf K \mathbf z = \tilde{\varphi }(t) \end{aligned}$$
(11)

with the initial condition \(\mathbf z (t) = \psi _h(t)\) on \([-T_0,0]\) has exactly one solution \(\mathbf z :[-T_0,T]\rightarrow \mathbb {R}^P\), provided that the initial data satisfy the consistency condition A[C].

Proof

Problem (11) can be written in the following form

$$\begin{aligned} \left\{ \begin{array}{rcl}z_D' - K_1z_D - K_2z_A &{}=&{} \tilde{\varphi }_D,\\ -K_3z_D - K_4z_A &{}=&{} \tilde{\varphi }_A\end{array}\right. \end{aligned}$$

with the initial conditions \(z_D = (\psi _h)_D,\ z_A = (\psi _h)_A\). From the DA-irreducibility of the matrix \(\mathbf K \) it follows that \(\det (K_4)\ne 0\). Hence

$$\begin{aligned} z_A = K_4^{-1}(-K_3z_D - \tilde{\varphi }_A ). \end{aligned}$$

Therefore

$$\begin{aligned} z_D' - \left( K_1 - K_2K_4^{-1}K_3\right) z_D =-K_2K_4^{-1}\tilde{\varphi }_A +\tilde{\varphi }_D. \end{aligned}$$
(12)

In this lemma we consider only the case where the right hand side depends on \(t\) alone. Hence, by the boundedness, DA-irreducibility and continuity of \(\mathbf K \), there exists a unique solution of problem (12) (see [28, Thm. VII]). The component \(z_A\) is uniquely determined by \(z_D\), and the consistency condition A[C] guarantees the continuity of the solution.
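The elimination above translates directly into code. The following sketch solves a small instance of (11) with \(p=2\), \(P=3\) by forming the reduced ODE (12) and recovering the algebraic component; the matrix \(\mathbf K \) and the forcing terms are illustrative assumptions satisfying A[K].

```python
import numpy as np
from scipy.integrate import solve_ivp

p = 2
K = np.array([[-2.0,  1.0,  1.0],   # row sums zero, off-diagonal entries >= 0,
              [ 1.0, -2.0,  1.0],   # and k_31 + k_32 > 0 (DA-irreducibility)
              [ 0.5,  0.5, -1.0]])
K1, K2 = K[:p, :p], K[:p, p:]
K3, K4 = K[p:, :p], K[p:, p:]
K4inv = np.linalg.inv(K4)           # det(K_4) != 0 by DA-irreducibility

phi_D = lambda t: np.array([np.sin(t), 0.0])
phi_A = lambda t: np.array([0.0])

def rhs(t, zD):
    # Reduced ODE (12): z_D' = (K1 - K2 K4^{-1} K3) z_D - K2 K4^{-1} phi_A + phi_D
    return (K1 - K2 @ K4inv @ K3) @ zD - K2 @ (K4inv @ phi_A(t)) + phi_D(t)

sol = solve_ivp(rhs, (0.0, 1.0), np.array([1.0, 0.0]))
zD_T = sol.y[:, -1]
zA_T = np.linalg.solve(K4, -K3 @ zD_T - phi_A(1.0))   # algebraic component
print(zD_T, zA_T)
```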

Lemma 2

Suppose that \(\mathbf {A[\sigma ], A[K], A[C]}\) are satisfied, \(\left| {g_{i}(t,\xi )}\right| \le \sigma (t,\Vert \xi \Vert )\) for \(i\le p\), and \(g_{i}(t,\xi ) = 0\) for \(i>p\). Then

$$\begin{aligned}\Vert \mathbf z (t)\Vert \le \omega (t;\gamma ,\tilde{\gamma })\quad \text{on } [-T_0,T]\times \tilde{\Omega }_{h}, \end{aligned}$$

where \(\mathbf z \) is a solution of

$$\begin{aligned}\left[ \begin{array}{cc} I_{p\times p} &{} 0\\ 0 &{} 0 \end{array}\right] \frac{d}{dt}\mathbf z - \mathbf K \mathbf z =g(t,\mathbf z _t) +\tilde{\Gamma }(t) \end{aligned}$$

with the initial condition (9), where \(\gamma = \Vert \psi _h\Vert \) and \(\tilde{\gamma } = \Vert \tilde{\Gamma }\Vert \).

Proof

If \(\tilde{\gamma } = \gamma = 0\), then, according to [11], \(\mathbf z \equiv 0\). For \(\tilde{\gamma },\ \gamma \ge 0\) we consider the comparison system

$$\begin{aligned} x_i'&= B_i\sum _{j=1,j\ne i}^pk_{ij}\frac{x_j}{B_j} + B_i\left\{ \sigma \left( t,\sup _{1\le l\le p}\left| \frac{x_l}{B_l}\right| _t+C_K\tilde{\gamma }\right) +\tilde{\gamma }\right\} ,\quad i\le p\\ \left| {\frac{x_i}{B_i}}\right|&\le \sup _{1\le l\le p}\left| \frac{x_l}{B_l}\right| _t+C_K\tilde{\gamma },\quad i> p, \end{aligned}$$

where \(\left| z_i\right| \le \frac{x_i}{B_i}\), \(C_K = \mathop {\sup }\limits _{p+1\le i\le P}\left( \mathop {\sum }\limits _{l=1}^pk_{il}\right) ^{-1}\) and \(B_i\) is the solution of the ODE

$$\begin{aligned} B_i'=-k_{ii}B_i,\quad B_i(0)=1. \end{aligned}$$

Proceeding analogously to [11], we obtain \(x_i = x_1\frac{B_i}{B_1},\ i=2,\ldots ,P\), and

$$\begin{aligned} \left[ \frac{x_1}{B_1}\right] ' \le \sigma \left( t,\left\| \frac{x_1}{B_1}\mathbf 1 \right\| _t + C_K\tilde{\gamma }\right) +\tilde{\gamma }. \end{aligned}$$

It follows from the comparison principle for ODEs that

$$\begin{aligned} \frac{x_i}{B_i}=\frac{x_1}{B_1} \le \omega (t;\gamma ,\tilde{\gamma })-C_K\tilde{\gamma } \le \omega (t;\gamma ,\tilde{\gamma }), \end{aligned}$$

which completes the proof.

The following lemma is crucial in the proof of Theorem 1.

Lemma 3

Suppose that \(\mathbf {A[\sigma ], A[K], A[C]}\) are satisfied and

$$\begin{aligned} \left| {\varphi _{i.h}(t,w)-\varphi _{i.h}(t,\tilde{w})}\right| \le \sigma (t,\Vert w-\tilde{w}\Vert )\quad \text{for } i=1,\ldots , p \end{aligned}$$

and \(\varphi _{i.h}(t,w) = 0\) for \(i=p+1,\ldots ,P\). Then there exists exactly one solution of problem (8)–(9).

Proof

Consider an iterative method which starts from a prescribed function \(\mathbf u _0\in C([-T_0,T],\mathbb {R}^P)\) satisfying (8) with some error \(\tilde{\Gamma }(t)\), \(\tilde{\gamma } = \Vert \tilde{\Gamma }\Vert \). For each \(k=0,1,\ldots \) we consider the linear system of equations

$$\begin{aligned} \left[ \begin{array}{cc} I_{p\times p} &{} 0\\ 0 &{} 0 \end{array}\right] \frac{d}{dt}\mathbf u _{k+1} - \mathbf K \mathbf u _{k+1} = \varphi _h(t,(\mathbf u _{k})_t) \end{aligned}$$

with the initial condition (9). It follows from Lemma 1 that there is exactly one solution of the above problem. Applying Lemma 2 to the differences \(\mathbf u _{k+l} - \mathbf u _k\) for \(k,\ l \in {\mathbb {N}}\), we conclude that \(\{\mathbf{u }_k\}_{k\in {\mathbb {N}}}\) is a Cauchy sequence.
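The iteration can be illustrated on a scalar toy problem with a delayed right-hand side: each sweep integrates an ODE whose functional argument is frozen at the previous iterate, exactly as in the linear problems above. All data in this sketch are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

T0, T, tau = 1.0, 2.0, 0.5
grid = np.linspace(-T0, T, 601)
u = np.ones_like(grid)                        # iterate u_0 (and the history)

def sweep(u_prev):
    """One iteration: the delayed argument is frozen at the previous iterate."""
    prev = lambda t: np.interp(t, grid, u_prev)
    f = lambda t, y: [-0.5 * prev(t - tau)]   # toy delayed right-hand side
    sol = solve_ivp(f, (0.0, T), [1.0], dense_output=True, rtol=1e-8, atol=1e-10)
    return np.where(grid <= 0.0, 1.0, sol.sol(np.clip(grid, 0.0, T))[0])

errs = []
for _ in range(8):
    u_new = sweep(u)
    errs.append(np.max(np.abs(u_new - u)))
    u = u_new
print(errs)       # decreasing: the iterates form a Cauchy sequence
```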

4 Stability and convergence

We are now in a position to state our main stability and convergence results for the MOL corresponding to (1)–(3). We need the following assumptions on the functions \(f\), \(\beta \), \(a_{ij}\), and the steps \(h\) of the mesh.

Assumption A

  • \(f:Q\times C(Q_0)\times \mathbb {R}^n\rightarrow \mathbb {R}\) is continuous in \(t, w, q\); the derivatives \(\partial _{q_j}f\) have the same property and are bounded,

  • \(a_{ij}:Q\rightarrow \mathbb {R}\) are bounded and continuous in \(t\) for \(i,j\in \{1,\ldots ,n\}\),

  • \(\partial _{q_j}f\), \(a_{ij}\) and steps \(h\) satisfy the relations (CFL)

    $$\begin{aligned} \begin{array}{c} \displaystyle -\frac{h_j}{2}\left| {\partial _{q_j}f}\right| +a_{jj} -\sum _{l\ne j}\frac{h_j}{h_l}\left| {a_{jl}}\right| \ge 0 \\ \displaystyle a_{ij}\left( t,x^{(m)}\right) \le \,0\ \text{for}\ (i,j)\in J_-,\quad a_{ij}\left( t,x^{(m)}\right) \ge 0\ \text{for}\ (i,j)\in J_+, \end{array} \end{aligned}$$
  • \(\left| {f(t,x,\bar{w},q) - f(t,x,w,q)}\right| \le \sigma (t,\Vert \bar{w}-w\Vert ),\) \(\left| {f(t,x,0,0)}\right| \le M_f,\)

  • \(\beta \) is a bounded, continuous function such that \(\beta \ge 0\).

Theorem 1

Suppose that \(\mathbf {A[\sigma ]}\) and \(\mathbf {A}\) are satisfied. Then there exists exactly one solution \(u:[-T_0,T]\rightarrow {}^{\tilde{\Omega }_h}\mathbb {R}\) of problem (5)–(7).

Proof

Consider the following iterative method. We choose an arbitrary function \(\mathbf u _0\in C([-T_0,T],\mathbb {R}^P)\) and consider the ODE system

$$\begin{aligned} \frac{d}{dt} u_k^{(m)}(t) -\sum _{i,j=1}^n a_{ij}\left( t,x^{(m)}\right) \delta ^{(2)}_{ij}u_k^{(m)}(t) = f\left( t,x^{(m)},(\mathcal {J}_h[u_{k-1}])_t,\delta u_{k}^{(m)}(t)\right) \end{aligned}$$
(13)

with the algebraic equations (6) and the initial condition (7).

We apply the Hadamard mean value theorem to the difference

$$\begin{aligned} f\left( t,x^{(m)},(\mathcal {J}_h[u_{k-1}])_t,\delta u_{k}^{(m)}(t)\right) - f\left( t,x^{(m)},(\mathcal {J}_h[u_{k-1}])_t,\delta u_{k-1}^{(m)}(t)\right) . \end{aligned}$$

We have

$$\begin{aligned}&\frac{d}{dt} u_k^{(m)}(t) -\sum _{i,j=1}^n a_{ij}\left( t,x^{(m)}\right) \delta ^{(2)}_{ij}u_k^{(m)}(t) - \sum _{i=1}^n\int _0^1\partial _{q_i}f(\Phi (\tau ))d\tau \,\delta _i u_k^{(m)}(t) \\&\quad = f\left( t,x^{(m)},(\mathcal {J}_h[u_{k-1}])_t,\delta u_{k-1}^{(m)}(t)\right) - \sum _{i=1}^n\int _0^1\partial _{q_i}f(\Phi (\tau ))d\tau \,\delta _i u_{k-1}^{(m)}(t), \end{aligned}$$

where

$$\begin{aligned} \Phi (\tau ) = \left( t,x^{(m)},(\mathcal {J}_h[u_{k-1}])_t,(1-\tau )\delta u_{k-1}^{(m)}(t)+ \tau \delta u_k^{(m)}(t)\right) . \end{aligned}$$

We substitute the formulas for \(\delta ,\ \delta ^{(2)}\) in the above equation. The matrix \(\mathbf K \) consists of elements which are linear combinations of \(a_{ij}\) and \(\partial _{q_j}f\).

In order to apply Lemma 2, we show that the matrix \(\mathbf K \) satisfies all conditions of A[K]. Put

$$\begin{aligned} S_0&= \sum _{(i,j)\in J}\frac{1}{h_ih_j}\left| {a_{ij}\left( t,x^{(m)}\right) }\right| - 2\sum _{i=1}^n\frac{1}{h_i^2}a_{ii}\left( t,x^{(m)}\right) ,\\ S_i^+\,&= \,\frac{1}{2h_i}\int _0^1\partial _{q_i}f(\Phi (\tau ))d\tau + \frac{1}{h_i^2}a_{ii}\left( t,x^{(m)}\right) -\sum _{j=1,j\ne i}^n\frac{\left| {a_{ij}\left( t,x^{(m)}\right) }\right| }{h_ih_j},\\ S_i^-\,&= \,-\frac{1}{2h_i}\int _0^1\partial _{q_i}f(\Phi (\tau ))d\tau + \frac{1}{h_i^2}a_{ii}\left( t,x^{(m)}\right) -\sum _{j=1,j\ne i}^n\frac{\left| {a_{ij}\left( t,x^{(m)}\right) }\right| }{h_ih_j},\\ S_{ij}\,&= \,\frac{1}{2h_ih_j}\left| a_{ij}\left( t,x^{(m)}\right) \right| . \end{aligned}$$

It follows from assumption A that

$$\begin{aligned} S_0\le 0,\quad S_i^+,S_i^-\ge 0,\quad S_{ij}\ge 0,\quad i,j=1,\ldots , n \end{aligned}$$

and

$$\begin{aligned} S_0+\sum _{i=1}^n(S_i^++S_i^-) + 2\sum _{(i,j)\in J}S_{ij} = 0. \end{aligned}$$

Since \(\mathcal {J}_h[u]\) is a convex combination of the values \(u(t,x^{(m)})\) and

$$\begin{aligned} \exp \left\{ -\left\| x^{(m)}-r\left( x^{(m)}\right) \right\| \,\beta \left( t,R\left( x^{(m)}\right) \right) \right\} \le 1, \end{aligned}$$

the first two conditions of \(\mathbf A[K] \) are satisfied. Note that at least one coefficient in \(\mathcal {J}_h\) is positive; thus the last two conditions of \(\mathbf A[K] \) are satisfied. It follows from Lemma 1 that there is exactly one solution of problem (13).
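As a sanity check (not part of the proof), the sign conditions and the row-sum identity for \(S_0,\ S_i^{\pm },\ S_{ij}\) can be verified numerically for sample data satisfying the CFL relations of assumption A; the values of \(a_{ij}\), \(h\) and of the (frozen) derivatives \(\partial _{q_i}f\) below are illustrative assumptions.

```python
import numpy as np

h  = np.array([0.1, 0.1])
a  = np.array([[1.0, 0.25],
               [0.25, 1.0]])      # a_12 >= 0, so (1,2) and (2,1) lie in J_+
fq = np.array([0.5, -0.5])        # stands in for int_0^1 dq_i f dtau

# CFL relations: -h_j/2 |dq_j f| + a_jj - sum_{l != j} (h_j/h_l)|a_jl| >= 0.
assert all(-h[j]/2*abs(fq[j]) + a[j, j]
           - sum(h[j]/h[l]*abs(a[j, l]) for l in range(2) if l != j) >= 0
           for j in range(2))

S0 = (sum(abs(a[i, j])/(h[i]*h[j]) for i in range(2) for j in range(2) if i != j)
      - 2*sum(a[i, i]/h[i]**2 for i in range(2)))
Sp = [ fq[i]/(2*h[i]) + a[i, i]/h[i]**2
       - sum(abs(a[i, j])/(h[i]*h[j]) for j in range(2) if j != i)
       for i in range(2)]
Sm = [-fq[i]/(2*h[i]) + a[i, i]/h[i]**2
       - sum(abs(a[i, j])/(h[i]*h[j]) for j in range(2) if j != i)
       for i in range(2)]
S12 = abs(a[0, 1])/(2*h[0]*h[1])

print(S0 <= 0, min(Sp) >= 0, min(Sm) >= 0, S12 >= 0)   # True True True True
print(S0 + sum(Sp) + sum(Sm) + 2*(2*S12))   # 0; the inner 2 counts (1,2),(2,1)
```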

Next we analyse the differences \(\mathbf u _{k+l}-\mathbf u _k\) for \(k,l\in \mathbb {N}\). Put \(\Delta _lu_k = \mathbf u _{k+l}-\mathbf u _k\). We have

$$\begin{aligned}&\frac{d}{dt}\Delta _lu^{(m)}_k - \sum _{i,j=1}^na_{ij}\left( t,x^{(m)}\right) \delta ^{(2)}_{ij}\Delta _lu_k^{(m)}(t)\\&\quad =f\left( t,x^{(m)},(\mathcal {J}_h[u_{k+l-1}])_t,\delta u_{k+l}^{(m)}(t)\right) -f\left( t,x^{(m)},(\mathcal {J}_h[u_{k-1}])_t,\delta u_{k}^{(m)}(t)\right) . \end{aligned}$$

We apply the Hadamard mean value theorem

$$\begin{aligned}&\frac{d}{dt}\Delta _lu^{(m)}_k - \sum _{i,j=1}^na_{ij}\left( t,x^{(m)}\right) \delta ^{(2)}_{ij}\Delta _lu_k^{(m)}(t) - \sum _{i=1}^n\int _0^1\partial _{q_i}f(\Upsilon (\tau ))d\tau \delta _i\Delta _lu_k^{(m)}\\&\quad =f\left( t,x^{(m)},(\mathcal {J}_h[u_{k+l-1}])_t,\delta u_{k}^{(m)}(t)\right) - f\left( t,x^{(m)},(\mathcal {J}_h[u_{k-1}])_t,\delta u_{k}^{(m)}(t)\right) , \end{aligned}$$

where

$$\begin{aligned} \Upsilon (\tau ) = \left( t,x^{(m)},(\mathcal {J}_h[u_{k+l-1}])_t,(1-\tau )\delta u_{k}^{(m)}(t)+ \tau \delta u_{k+l}^{(m)}(t)\right) . \end{aligned}$$

Applying Lemma 2 we have

$$\begin{aligned} \Vert \Delta _lu_k(t)\Vert \le \omega _k(t,\gamma ,\tilde{\gamma }), \end{aligned}$$

where

$$\begin{aligned} \mathop {\lim }\limits _{k\rightarrow \infty }\omega _{k}(t) = 0 \end{aligned}$$

uniformly on \([0,\tilde{T}]\) for each \(\tilde{T} \in (0,T)\) (see [21]). We conclude that \(\{\mathbf{u }_k\}_{k\in {\mathbb {N}}}\) is a Cauchy sequence.

Theorem 2

Suppose that \(\mathbf {A[\sigma ]}\), A are satisfied and

  • \(u\) is a solution of (1)–(3) and \(\tilde{u}\) is a solution of (5)–(7) such that

    $$\begin{aligned}&\left| {u^{(m)}(t)-\tilde{u}^{(m)}(t)}\right| \le \gamma _h\quad \text{on } [-T_0,0]\times \tilde{\Omega }_{h},\\&\left| u^{(m)}(t)-\tilde{u}^{(m)}(t)\right| \le \gamma ^{*}_h(t)\quad \text{for } \left( t,x^{(m)}\right) \in [0,T]\times \Omega _h^{*}. \end{aligned}$$

Then there is \(\omega _h:[-T_0,T]\rightarrow \mathbb {R}_+\) such that

$$\begin{aligned} \left| {u^{(m)}(t)-\tilde{u}^{(m)}(t)}\right| \le \omega _h(t)\quad \text{and}\quad \lim _{\Vert h\Vert \rightarrow 0}\omega _h(t)=0. \end{aligned}$$
(14)

Remark 1

We assume that \(\mathrm{{sign}} \ a_{ij}\) is constant. One can omit this assumption by considering \(J_-, J_+\) as sets which depend on \(x^{(m)}\).

Proof

(Theorem 2) Let \(\Gamma _h : Q_h \rightarrow \mathbb {R}\) be defined by the relation

$$\begin{aligned}&\frac{d}{dt}u^{(m)}(t) - \sum _{i,j=1}^n a_{ij}\left( t,x^{(m)}\right) \delta ^{(2)}_{ij}u^{(m)}(t)\\&\quad = f\left( t,x^{(m)},(\mathcal {J}_h[u])_t,\delta u^{(m)}(t)\right) + \Gamma _h^{(m)}(t)\quad \text{on } Q_h. \end{aligned}$$

It follows that there is \(\tilde{\gamma }_h\) such that

$$\begin{aligned} \left| {\Gamma _h^{(m)}(t)}\right| \le \tilde{\gamma }_h\quad \text{on } Q_h\quad \text{and}\quad \mathop {\lim }\limits _{h\rightarrow 0}\tilde{\gamma }_h = 0. \end{aligned}$$

From the definition of \(\mathcal {J}_h\) we have

$$\begin{aligned} \left| {\mathcal {J}_h[u] - \mathcal {J}_h[\tilde{u}]}\right| \le \Vert u - \tilde{u}\Vert \le \gamma _h \quad \text{on } [-T_0,0]\times \Omega _{h}^{*}. \end{aligned}$$

Applying the Hadamard mean value theorem we have

$$\begin{aligned}&\frac{d}{dt}(u -\tilde{u})^{(m)}(t) - \sum _{i,j=1}^n a_{ij}\left( t,x^{(m)}\right) \delta ^{(2)}_{ij}(u -\tilde{u})^{(m)}(t)\\&\qquad -\,\sum _{i=1}^n\int _0^1\partial _{q_i}f(\Phi (\tau ))d\tau \left[ \delta _i (u -\tilde{u})^{(m)}(t)\right] \\&\quad = f\left( t,x^{(m)},(\mathcal {J}_h[u])_t,\delta \tilde{u}^{(m)}(t)\right) -f\left( t,x^{(m)},(\mathcal {J}_h[\tilde{u}])_t,\delta \tilde{u}^{(m)}(t)\right) + \Gamma _h^{(m)}, \end{aligned}$$

where

$$\begin{aligned} \Phi (\tau )=\left( t,x^{(m)},(\mathcal {J}_h[u])_t,\delta \tilde{u}^{(m)}(t) + \tau \left( \delta u^{(m)}(t)-\delta \tilde{u}^{(m)}(t)\right) \right) . \end{aligned}$$

Hence the above equation can be written in the following form

$$\begin{aligned} \left[ \begin{array}{c@{\quad }c} I_{p\times p} &{} 0\\ 0 &{} 0 \end{array}\right] \frac{d}{dt}(u -\tilde{u}) - \mathbf K (u -\tilde{u}) = g(t,(\mathcal {J}_h[(u -\tilde{u})])_t) + \Gamma _h(t), \end{aligned}$$

where

$$\begin{aligned} \Gamma _{i.h}(t) = \left\{ \begin{array}{l@{\quad }l} \Gamma _{i.h}(t), &{} i=1,\ldots , p,\\ \gamma ^{*}_{h},&{} i=p+1,\ldots ,P. \end{array}\right. \end{aligned}$$

In order to apply Lemma 2, we show that the matrix \(\mathbf K \) satisfies all conditions of A[K].

Put

$$\begin{aligned} S_0&= \sum _{(i,j)\in J}\frac{1}{h_ih_j}\left| {a_{ij}\left( t,x^{(m)}\right) }\right| - 2\sum _{i=1}^n\frac{1}{h_i^2}a_{ii}\left( t,x^{(m)}\right) ,\\ S_i^+\,&= \,\frac{1}{2h_i}\int _0^1\partial _{q_i}f(\Phi (\tau ))d\tau + \frac{1}{h_i^2}a_{ii}\left( t,x^{(m)}\right) -\sum _{j=1,j\ne i}^n\frac{\left| {a_{ij}\left( t,x^{(m)}\right) }\right| }{h_ih_j},\\ S_i^-\,&= \,-\frac{1}{2h_i}\int _0^1\partial _{q_i}f(\Phi (\tau ))d\tau + \frac{1}{h_i^2}a_{ii}\left( t,x^{(m)}\right) -\sum _{j=1,j\ne i}^n\frac{\left| {a_{ij}\left( t,x^{(m)}\right) }\right| }{h_ih_j},\\ S_{ij}&= \frac{1}{2h_ih_j}\left| a_{ij}\left( t,x^{(m)}\right) \right| . \end{aligned}$$

It follows from assumption A that

$$\begin{aligned} S_0\le 0,\quad S_i^+,S_i^-\ge 0,\quad S_{ij}\ge 0,\quad i,j=1,\ldots , n \end{aligned}$$

and

$$\begin{aligned} S_0+\sum _{i=1}^n(S_i^++S_i^-) + 2\sum _{(i,j)\in J}S_{ij} = 0. \end{aligned}$$

Since \(\mathcal {J}_h[u]\) is a convex combination of the values \(u(t,x^{(m)})\) and

$$\begin{aligned} \exp \left\{ -\left\| x^{(m)}-r\left( x^{(m)}\right) \right\| \,\beta \left( t,R\left( x^{(m)}\right) \right) \right\} \le 1, \end{aligned}$$

the first two conditions of \(\mathbf A[K] \) are satisfied. Note that at least one coefficient in \(\mathcal {J}_h\) is positive; thus the last two conditions of \(\mathbf A[K] \) are satisfied. The conclusion (14) follows by observing that the function \(\omega _h:[-T_0,T]\rightarrow \mathbb {R}_+\) is a solution of the Cauchy problem

$$\begin{aligned} \omega '(t) = \sigma (t,\omega (t))+\tilde{\gamma }_h,\quad \omega (t) = \gamma _h\quad \text{for } t\in [-T_0,0]. \end{aligned}$$

By assumption A[\(\sigma \)], \(\omega _h(t)\rightarrow 0\) as \(\gamma _h,\tilde{\gamma }_h\rightarrow 0\), i.e. as \(\Vert h\Vert \rightarrow 0\), which proves (14).

5 Numerical examples

We apply the method of Sect. 3 to a differential equation with deviated variables and to a differential integral problem. We consider our numerical examples on the cylinder \([0,T]\times B_1\), where \(B_1\) is the unit ball in \(\mathbb {R}^2\) centered at \((0,0)\).

Example 1

Consider the differential integral problem

$$\begin{aligned}&\partial _tu -\partial _{xx}^2u+\frac{1}{2}\partial ^2_{xy}u - \partial _{yy}^2u =\left( 1+x^2+y^2\right) \int _0^tu(s,x,y)ds\\&\quad + \left( 4x^2t^2-2xyt^2+4y^2t^2\right) u+\left( 4t-2-x^2-y^2\right) \sin \left( t(x^2+y^2+1)\right) , \end{aligned}$$

with initial and boundary conditions

$$\begin{aligned} u(0,x,y)&= 1\quad \text{on } B_1,\\ \frac{\partial u}{\partial n} +2t\tan \left( t(x^2+y^2+1)\right) u&= 0\quad \text{on } \left[ 0,\frac{\pi }{8}\right] \times \partial B_1. \end{aligned}$$

The exact solution of the above problem is known: \(u(t,x,y) = \cos (t(x^2+y^2+1))\).
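This can be verified symbolically. The following sketch (our addition, assuming sympy is available) checks both the differential equation and the Robin condition on \(\partial B_1\), where \(n=(x,y)\); the integral term is evaluated in closed form since \(x^2+y^2+1>0\).

```python
import sympy as sp

t, x, y, th = sp.symbols('t x y th', real=True)
u = sp.cos(t*(x**2 + y**2 + 1))

# int_0^t u(s,x,y) ds = sin(t*(x^2+y^2+1))/(x^2+y^2+1) since x^2+y^2+1 > 0.
integral = sp.sin(t*(x**2 + y**2 + 1))/(x**2 + y**2 + 1)

lhs = (sp.diff(u, t) - sp.diff(u, x, 2) + sp.Rational(1, 2)*sp.diff(u, x, y)
       - sp.diff(u, y, 2))
rhs = ((1 + x**2 + y**2)*integral
       + (4*x**2*t**2 - 2*x*y*t**2 + 4*y**2*t**2)*u
       + (4*t - 2 - x**2 - y**2)*sp.sin(t*(x**2 + y**2 + 1)))
print(sp.simplify(lhs - rhs))                                    # 0

# Robin condition on the unit circle, parametrized as (cos(th), sin(th)).
robin = x*sp.diff(u, x) + y*sp.diff(u, y) + 2*t*sp.tan(t*(x**2 + y**2 + 1))*u
print(sp.simplify(robin.subs({x: sp.cos(th), y: sp.sin(th)})))   # 0
```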

In Table 1 we give experimental values of the maximal error \(\varepsilon _{max}\) for \(h_0 = 0.01\) and \(h_1=h_2=0.125\), where \((h_0,h_1,h_2)\) are steps of the mesh with respect to \((t,x,y)\).

Table 1 Maximal error for PDE with integrals

Example 2

Consider the differential equation with deviated variables

$$\begin{aligned} \partial _{t}u&- \partial _{xx}^2u+\frac{1}{2}\partial _{xy}^2u - \partial _{yy}^2u = u\left( \frac{t}{2},x,y\right) \\&+ [-x^2-y^2+4t-4t^2x^2-2t^2xy-4t^2y^2]u - \exp \{-0.5t(x^2+y^2)\}, \end{aligned}$$

with initial and boundary conditions

$$\begin{aligned}&u(0,x,y) = 1\quad \text{on } B_1,\\&\frac{\partial u}{\partial n} +2tu = 0\quad \text{on } [0,1]\times \partial B_1. \end{aligned}$$

The exact solution of the above problem is known: \(u = e^{-t(x^2+y^2)}\).

In Table 2 we give experimental values of the maximal error for \(h_0 = 0.01\) and \(h_1=h_2=0.125\), where \((h_0,h_1,h_2)\) are steps of the mesh with respect to \((t,x,y)\).

Table 2 Maximal error for PDE with delays

The above examples are carried out for two spatial dimensions; this is done only for convenience of implementation. The theory presented in our paper is not limited with respect to the dimension of the spatial variables. The coefficients of the derivatives of the unknown function, the right hand sides of the equations, and the initial and boundary conditions all satisfy the assumptions imposed in our paper. The computed results have been compared with the exact solutions to show the accuracy of the method. The computation time is \(0.38\) s for the first example and \(24.82\) s for the second example. The presented experiments illustrate the convergence of the proposed method.