# On the numerical approximation of viscosity solutions for the differential-functional Cauchy problem

DOI: 10.1007/s10092-012-0071-3

- Cite this article as: Topolski, K.A.: Calcolo (2013) 50: 329.


## Abstract

We consider the Cauchy problem for first order differential-functional equations. We present finite difference schemes to approximate viscosity solutions of this problem. The functional dependence in the equation is of the Hale type. It contains, as particular cases, equations with retarded and deviated arguments and differential-integral equations. Numerical examples illustrating the theory are presented.

### Keywords

Viscosity solutions · Cauchy problem · Differential-functional equations · Finite difference schemes

### Mathematics Subject Classification (2010)

35D40 · 35R10 · 65M15 · 65M06

## 1 Introduction

Throughout the paper \(C(D)\) stands for the space of all continuous functions \(w: D\rightarrow \mathbb R \) with the supremum norm \(\Vert \cdot \Vert _{D}\).

Let \(f:\bar{\Theta }\times C(D)\times \mathbb R ^m \rightarrow \mathbb R \) and \({\varphi }:\Theta _{0}\rightarrow \mathbb R \) be continuous functions.

Although (1) is formulated in a rather abstract way, it contains as particular cases a large class of differential-functional equations. The most important are equations with retarded and deviated arguments, differential-integral equations, and, of course, equations without functional dependence on \(u\). All these situations can be derived from (1), (2) by specializing the function \(f\).

**Example 1**

The next example shows how to transform a differential-integral problem into (1), (2).

**Example 2**

Of course, we can combine these two kinds of functional dependence and treat them within one model. We can also allow multiple functional arguments in (3) and (4) by putting \(u(\mu _{1}(t,x),\beta _{1}(t,x)),\ldots ,u(\mu _{N}(t,x),\beta _{N}(t,x))\) in place of \(u(\mu (t,x),\beta (t,x))\) in (3) and introducing \(K_{1},\ldots ,K_{N}\) in (4).

In this paper we will investigate viscosity solutions of (1), (2).

**Definition 1**

**Definition 2**

A function \(u\in C(E)\) is a viscosity solution of (1), (2) if \(u\) is both a viscosity subsolution and supersolution of (1), (2).

The following is immediate.

*Remark 1*

If \(u\in C(E)\cap C^{1}(\Theta )\), then \(u\) is a viscosity solution (viscosity subsolution, viscosity supersolution) of (1), (2) if and only if \(u\) is a classical solution (subsolution, supersolution) of (1), (2).

We use the symbol \(SOL(f,{\varphi })\) for the set of all viscosity solutions of (1), (2).

This notion of solution was first introduced in [2, 15] for first order differential equations. Second order equations (not considered here) are treated extensively in [1]. The Cauchy problem for differential-functional equations is investigated in [20].

There are numerous papers concerning difference schemes for first order equations in which nonfunctional dependence and classical solutions are investigated. Here we concentrate on functional problems in which generalized solutions are treated. Numerical approximation of generalized solutions of first order equations was first investigated in [16] for weak solutions (in the distributional sense), and in [11–13] for almost everywhere solutions (with a restrictive assumption of convexity in the last variable). Difference methods are used in [18] to prove the existence of weak solutions for quasilinear equations with functional dependence, and in [6] for generalized solutions with an entropy uniqueness condition (see [11–13]). The method presented there leads to existence results rather than to practical applications. In the study of viscosity solutions the convexity assumption can be dropped. We also do not need an additional assumption on the solution (entropy condition). Moreover, the difference scheme applied in the purely theoretical papers [3, 17], although giving slow convergence (a square root rate), can be useful in practical experiments. In this paper we extend the results obtained in [3, 17] to the case of equations with functional dependence on \(u\). We base our reasoning on the estimates obtained for the nonfunctional case and on a priori estimates for the functional case ([19]). We also present some numerical experiments in which the functional dependence leads to many practical difficulties.

Numerical approximation of classical solutions of first order equations with functional dependence was investigated in [4], where an explicit method is considered, and in [5, 10], where implicit schemes are treated. Convergence of the difference analogue of a first order equation is investigated in [14] via difference inequalities.

## 2 Finite difference scheme

In this part we present a finite difference method to approximate viscosity solutions of (1), (2).

For two vectors \(a,b\in \mathbb R ^k\), \(a\le b\) (\(a<b\)) means \(a_i\le b_i\) (\(a_i<b_i\)) for \(i=1,\ldots ,k\). Similarly we define \(\ge \) and \(>\).

Fix \(h>0\), \(k=(k_1,\ldots ,k_m)>0\) and \(N_{0}, N\in \mathbb N \) such that \((-N_0-1)h\le -a_{0}\le -N_0h\) and \(Nh\le \tau <(N+1)h\). For \(\alpha =(\alpha _1,\ldots ,\alpha _m)\in \mathbb Z ^m\) we write \(\alpha k=(\alpha _1k_1,\ldots ,\alpha _mk_m)\) and \(\frac{\alpha }{k}=(\frac{\alpha _1}{k_1},\dots ,\frac{\alpha _m}{k_m})\). Let \(x_\alpha =\alpha k\) and \(t_n=nh\) for \(n\in \mathbb Z \). Define \(I_n=\{-N_0,\ldots ,0,\ldots ,n\}\), \({\Delta }=\{(t_n,x_\alpha ):\ \alpha \in \mathbb Z ^m,\ n\in I_N\}\), \({\Delta }_0=\{(t_n,x_\alpha ):\ \alpha \in \mathbb Z ^m,\ n\in I_0\}\).

Let \(U:I_N\times \mathbb Z ^m\rightarrow \mathbb R \), i.e. \(U=\{U_{\alpha }^{n}\}^{n\in I_N}_{\alpha \in \mathbb Z ^m}\). Of course, for fixed \(n\) we have \(U^n:\mathbb Z ^m\rightarrow \mathbb R \), and for fixed \(\alpha \), \(U_{\alpha }:I_N\rightarrow \mathbb R \). Let \(e_i\) for \(i=1,\dots ,m\) denote the standard unit vectors in \(\mathbb R ^m\). Write \({\Delta }^+U= ({\Delta }_{x_1}^+U,\ldots ,{\Delta }_{x_m}^+U)\), where \({\Delta }_{x_i}^+U\) is defined by \(({\Delta }_{x_i}^+U)_{\alpha }=U_{\alpha +e_i}-U_{\alpha }\) for \(\alpha \in \mathbb Z ^m\), \(i=1,\ldots ,m\). Put \(\frac{{\Delta }^+}{k}U=(\frac{{\Delta }_{x_1}^+U}{k_1},\ldots ,\frac{{\Delta }_{x_m}^+U}{k_m})\).
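The difference operators above can be sketched in code for the one-dimensional case \(m=1\); the following is an illustrative implementation of the definitions only (the function names are ours, not the paper's).

```python
# Sketch (m = 1) of the forward difference operator from the text:
# (Delta^+ U)_alpha = U_{alpha+1} - U_alpha, and its scaled version
# (Delta^+/k) U, the discrete analogue of the spatial derivative.

def forward_difference(U):
    """(Delta^+ U)_alpha = U[alpha+1] - U[alpha]; one entry shorter than U."""
    return [U[a + 1] - U[a] for a in range(len(U) - 1)]

def scaled_forward_difference(U, k):
    """(Delta^+/k) U: divided differences approximating u_x to first order."""
    return [d / k for d in forward_difference(U)]

k = 0.1
# grid function U_alpha = x_alpha^2 on the mesh x_alpha = alpha * k
U = [(a * k) ** 2 for a in range(6)]
D = scaled_forward_difference(U, k)  # entries (2*alpha + 1) * k, close to 2*x_alpha
```

On this quadratic grid function the divided differences equal \((2\alpha +1)k\), which approximates the exact derivative \(2x_\alpha \) with first order accuracy, as expected of a one-sided difference.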

Let \(A\subseteq \mathbb Z ^k\). We will use the symbol \(l^{\infty }(A)\) for the space of all discrete real functions bounded on \(A\) with the norm \(|U|_{\infty }=\sup {\{|U_{\beta }|:\ \beta \in A\}}\). (In the following we write \(U^n_{\alpha }\) instead of \(U_\beta =U_{(n,\alpha )}\) for \(A= I_N\times \mathbb Z ^m\).) For \(U,V\in l^{\infty }(A)\) we write \(U\le V\) if \(U_{\beta }\le V_{\beta }\) for every \(\beta \in A\).

We write \(BC(X)\) for the space of all continuous and bounded real functions on \(X\subseteq \mathbb R ^k\) and \(BC(X;L)\) for the space of all \(u\in BC(X)\) Lipschitz with a fixed constant \(L>0\).

For \(X\subseteq \mathbb R ^{1+m}\) we define \(X_t=\{(s,z)\in X:\ s\le t\}\) and write for short \(\Vert \cdot \Vert _t=\Vert \cdot \Vert _{X_{t}}\) if the set \(X\) is known from the context. Similarly we define a norm \(|\cdot |_{\infty }^n\) in \(l^{\infty }(I_{n}\times \mathbb Z ^m)\).

Assume now that \(g\) is independent of \(w\). We will write for short \(U=U_{\alpha }^{n},\ V=V_{\alpha }^{n}\). Let \(l^{\infty }(\mathbb Z ^{m};L)=\{U\in l^{\infty }(\mathbb Z ^m): |\Delta ^{+}U|_{\infty }\le Lk\}\).

**Proposition 1**

- (1)
\(G(s,U+\lambda )=G(s,U)+\lambda \) for \(\lambda \in \mathbb R \),

- (2)
if \(G(s,\cdot )\) is nondecreasing in \(l^{\infty }(\mathbb Z ^m;L_s)\), then \(|G(s,U)-G(s,V)|_{\infty }\le |U-V|_{\infty }\) for \(U,V\in l^{\infty }(\mathbb Z ^m;L_s)\),

- (3)
if \(G(s,\cdot )\) is nondecreasing in \(l^{\infty }(\mathbb Z ^m;L_s)\) and \(g\) is Lipschitz continuous in \(x\) with a constant \(L_x[g]\), then
$$\begin{aligned} |{\Delta }^{+}G(s,U)|_{\infty }\le |{\Delta }^{+}U|_{\infty }+L_x[g]hk \quad \text{ on}\ l^{\infty }(\mathbb Z ^m;L_s). \end{aligned}$$
(15)

*Proof*

As (1) is immediate, we begin with (2). Since \(U\le V+|U-V|_{\infty }\), putting \(\lambda =|U-V|_{\infty }\) we get, by (1) and the monotonicity of \(G(s,\cdot )\), \(G(s,U)-G(s,V)\le G(s,V+\lambda )-G(s,V)=\lambda \). Interchanging the roles of \(U\) and \(V\) we obtain the desired estimate.
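The mechanism behind parts (1) and (2) can be checked numerically: any monotone one-step operator that commutes with constants is nonexpansive in the sup norm. As a model operator we take an upwind Euler step for \(u_t+u_x=0\) on a periodic grid (our choice for illustration, not the scheme of the paper).

```python
# Numeric illustration of Proposition 1 (1)-(2) on a model operator:
# G(U)_a = (1 - lam) * U_a + lam * U_{a-1}, a convex combination for
# 0 <= lam <= 1, hence nondecreasing (monotone) in each argument.

import random

def G(U, lam=0.5):
    n = len(U)
    return [(1 - lam) * U[a] + lam * U[a - 1] for a in range(n)]  # periodic

def sup_dist(U, V):
    return max(abs(u - v) for u, v in zip(U, V))

random.seed(0)
U = [random.uniform(-1, 1) for _ in range(50)]
V = [random.uniform(-1, 1) for _ in range(50)]

# property (1): G(U + c) = G(U) + c for a constant c
c = 0.7
shift_err = sup_dist(G([u + c for u in U]), [g + c for g in G(U)])

# property (2): |G(U) - G(V)|_inf <= |U - V|_inf (nonexpansiveness)
contracts = sup_dist(G(U), G(V)) <= sup_dist(U, V) + 1e-15
```

Both properties hold up to floating-point rounding, exactly as the proof of (2) predicts: shifting by \(\lambda =|U-V|_{\infty }\) and using monotonicity bounds the difference of the images.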

**Lemma 1**

*Proof*

## 3 Convergence of the scheme

In this section we will consider a general situation when \(f\) depends on \(w\). Particularly interesting is the case when this dependence is functional. Our results can be applied to a large class of differential-integral equations and equations with a retarded argument (see Examples 1 and 2).

**Assumption 1**

- (1)
there exists \(\gamma >0\) such that \(|f(t,x,0,0)|\le \gamma \) in \(\bar{\Theta }\),

- (2)
\(f(t,x,u,p)\) is globally Lipschitz continuous in \(u\) and locally Lipschitz continuous in \(p\),

- (3)
there exists \(C>0\) such that
$$\begin{aligned} |f(t,x,u,p)-f(t,\bar{x},u,p)|\le C(1+L_x[u]+|p|)|x-\bar{x}| \end{aligned}$$
in \(\bar{\Theta }\times C_L(D)\times \mathbb R ^m\),
- (4)
there exists \(\Gamma :\mathbb R ^m \rightarrow [0,\infty )\) such that
$$\begin{aligned} |f(t,x,u,p)-f(\bar{t},x,u,p)|\le \Gamma (p)(1+L_t[u])|t-\bar{t}| \end{aligned}$$
in \(\bar{\Theta }\times C_L(D)\times \mathbb R ^m.\)

Here \(C_L(D)\) stands for the space of all Lipschitz functions on \(D\).

Since we use the space \(C_L(D)\) in (3) and (4), we can apply our results to equations with a retarded and deviated argument under the restriction that \(\alpha \) depends on \(t\) and \(\beta \) depends on \(x\). This would be impossible if we considered the space \(C(D)\), leaving out \(L_x[u], L_t[u]\). Of course, the assumption would then be stronger, general enough to cover only differential-integral equations and constant retarded and deviated arguments.

Assumption 1 can be formulated in a more general form (see [19]) which gives a priori estimates on the solution and its Lipschitz constant in \(x\) (with a natural assumption on \(\varphi \)). Such a general formulation can be reduced, however, to our formulation by a standard argument.

Now we will investigate the finite difference scheme (10), (11).

**Definition 3**

We say that \(g\) is consistent with \(f\) if for every \(a\in \mathbb R ^m\) we have \(g(t,x,u,a,\ldots ,a)=f(t,x,u,a)\) in \(\bar{\Theta }\times C(D)\).

In the following we assume that \(g\) is consistent with \(f\) and that the ratios \(k_{i}/h\) for \(i=1,\ldots ,m\) are constant. Put \(\lambda _{x_{i}}=k_{i}/h\).
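Consistency in the sense of Definition 3 can be illustrated on a standard Lax–Friedrichs-type numerical Hamiltonian (a common construction; the paper's \(g\) is more general, and the model \(f\) below is hypothetical).

```python
# Illustrative consistency check: a Lax-Friedrichs-type numerical
# Hamiltonian g built from a model f (assumed for illustration).
# Definition 3 requires g(t, x, u, a, ..., a) = f(t, x, u, a).

def f(t, x, u, p):
    # hypothetical model Hamiltonian, not from the paper
    return u + abs(p)

def g(t, x, u, p_minus, p_plus, theta=1.0):
    # central value of f plus an artificial viscosity term; the viscosity
    # term vanishes when both discrete gradient arguments coincide
    return f(t, x, u, 0.5 * (p_minus + p_plus)) - 0.5 * theta * (p_plus - p_minus)

a = 1.3
consistent = abs(g(0.0, 0.0, 2.0, a, a) - f(0.0, 0.0, 2.0, a)) < 1e-15
```

When \(p^-=p^+=a\) the artificial viscosity term is exactly zero, so \(g\) reduces to \(f\), which is precisely the consistency condition.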

In view of [19] we know that if \(f\) satisfies Assumption 1, \(\varphi \in BC(\Theta _{0};L_0)\) and \(\tilde{u}\in SOL(f,\varphi )\), then there exists \(L>0\) independent of \(\tilde{u}\) such that \(\tilde{u}\in BC(E;L)\). Let \(L\) be such a constant.

**Definition 4**

We say that the scheme (10), (11) is monotone on \([-L,L]\) if \(G[u](s,\cdot )\) is nondecreasing in \(l^{\infty }(\mathbb Z ^m;L)\) for every \(u\in C(D)\).

The main result of our paper is the following.

**Theorem 1**

*Proof*

Put \({\varphi }_{0}(x)={\varphi }(0,x)\) and \(\Phi _{0}(\alpha )=\Phi (0,\alpha )=\phi (0,\alpha )\) for \(\alpha \in \mathbb Z ^{m}\). Suppose that \(\tilde{u}\in SOL(f,{\varphi })\) and \(\tilde{U} \in AP(G,\Phi )\). Of course, \(\tilde{u}\in SOL(f[\tilde{u}],{\varphi }_0)\) and \(\tilde{U}\in AP(G[T\tilde{U}],\Phi _0)\). It is also clear that if \(g\) is consistent with \(f\), then \(g[\tilde{u}]\) is consistent with \(f[\tilde{u}]\).

## 4 Numerical examples

The above estimate shows that \(\lambda =k/h\ge L_{p}[f]\) is a sufficient condition for (18) to be monotone on \([-L,L]\).
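The role of this condition can be seen on a model problem (our illustration, not the paper's examples): for \(u_t+u_x=0\) we have \(L_p[f]=1\), and the upwind scheme \(U^{n+1}_{\alpha }=U^{n}_{\alpha }-\frac{h}{k}(U^{n}_{\alpha }-U^{n}_{\alpha -1})\) is monotone exactly when \(h/k\le 1\), i.e. \(k\ge h\); otherwise round-off errors are amplified and the computation blows up, as in the last columns of the tables below.

```python
# Model demonstration of the monotonicity (CFL-type) condition k/h >= L_p[f]:
# upwind scheme for u_t + u_x = 0 with periodic initial data.

import math

def run(h, k, steps=200, n=400):
    lam = h / k  # the scheme is a convex combination iff lam <= 1
    U = [math.cos(2 * math.pi * a / n) for a in range(n)]
    for _ in range(steps):
        U = [U[a] - lam * (U[a] - U[a - 1]) for a in range(n)]
    return max(abs(u) for u in U)

stable = run(h=0.01, k=0.015)    # k > h: monotone, sup norm stays bounded by 1
unstable = run(h=0.01, k=0.007)  # k < h: condition violated, the error grows
```

In the monotone case each step is a convex combination of neighboring values, so the sup norm cannot increase; when \(k<h\) the high-frequency modes are amplified at every step.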

**Example 3**

It is not difficult to verify that \(\tilde{u}(t,x)=\cos {(t-2|x|)}\) for \((t,x)\in [0,\pi ]\times \mathbb R \) is a viscosity solution of the above problem. The monotonicity condition for the scheme (18) holds if \(h\le k\) (here \(L_{p}[f]=1\) globally).

| \(\delta \) | \(h=0.01\) | \(h=0.01\) | \(h=0.01\) | \(h=0.01\) | \(K_{1}\sqrt{h}\) | \(h=0.01\) | \(h=0.01\) |
|---|---|---|---|---|---|---|---|
| \(t\) | \(k=0.015\) | \(k=0.01\) | \(k=0.009\) | \(k=0.007\) | \(t\) | \(k=0.015\) | \(k=0.01\) |
| (a) |  |  |  |  | (b) |  |  |
| 0.0 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0 | 0.0000 | 0.0000 |
| 0.3 | 0.0121 | 0.0044 | 0.0039 | 0.0030 | 0.3 | 0.0539 | 0.0431 |
| 0.6 | 0.0237 | 0.0085 | 0.0073 | 0.8469 | 0.6 | 0.3698 | 0.2958 |
| 0.9 | 0.0338 | 0.0119 | 0.0100 | 290.14 | 0.9 | 2.0465 | 1.6372 |
| 1.2 | 0.0419 | 0.0148 | 0.0247 | 98231 | 1.2 | 12.826 | 10.261 |
| 1.5 | 0.0479 | 0.0171 | 0.1192 | \(3\cdot 10^{7}\) | 1.5 | 101.27 | 81.015 |
| 1.8 | 0.0514 | 0.0185 | 0.4689 | \(1\cdot 10^{10}\) | 1.8 | 1056.2 | 844.95 |
| 2.1 | 0.0520 | 0.0188 | 2.4524 | \(4\cdot 10^{12}\) | 2.1 | 14924 | 11939 |
| 2.4 | 0.0493 | 0.0177 | 12.020 | \(1\cdot 10^{15}\) | 2.4 | \(3\cdot 10^{5}\) | \(2\cdot 10^{5}\) |
| 2.7 | 0.0516 | 0.0187 | 58.000 | \(4\cdot 10^{17}\) | 2.7 | \(8\cdot 10^{6}\) | \(6\cdot 10^{6}\) |
| 3.0 | 0.0657 | 0.0246 | 278.49 | \(1\cdot 10^{20}\) | 3.0 | \(3\cdot 10^{8}\) | \(2\cdot 10^{8}\) |

An interesting effect can be observed if we prolong the time interval beyond \(\pi \). The error grows. This is due to the fact that \(\tilde{u}(t,x)=\cos {(t-2|x|)}\) is no longer a viscosity solution for \(t>\pi \) (it is still an a.e. solution). Our method gives an approximation of the viscosity solution, which exists and is unique globally. It is rather difficult to find an explicit formula for this solution if \(t>\pi \). Maximal errors in the set \([3,5]\times [-B,B]\), where \(B\approx 150\), are given in the table below (the monotonicity condition holds).

| \(\delta \) | \(h=0.01\) | \(h=0.01\) |
|---|---|---|
| \(t\) | \(k=0.015\) | \(k=0.01\) |
| 3.0 | 0.0657 | 0.0246 |
| 3.2 | 0.0800 | 0.0303 |
| 3.4 | 0.0963 | 0.0475 |
| 3.6 | 0.1341 | 0.1373 |
| 3.8 | 0.2620 | 0.2738 |
| 4.0 | 0.4284 | 0.4507 |
| 4.2 | 0.6241 | 0.6590 |
| 4.4 | 0.8368 | 0.8869 |
| 4.6 | 1.0507 | 1.1177 |
| 4.8 | 1.2491 | 1.3289 |
| 5.0 | 1.4228 | 1.5083 |

**Example 4**

A numerical experiment was performed for \(h=0.005\) with different values of \(k\). The approximate values were obtained in the set \([0,2]\times [-B, B]\), where \(B\approx 200\). Maximal errors are given in Table (a) below. The last two columns of Table (a) represent the case of a non-monotone scheme. The numerical errors can be compared with the theoretical results by using Table (b) (see Example 3).

| \(\delta \) | \(h=0.005\) | \(h=0.005\) | \(h=0.005\) | \(h=0.005\) | \(K_{1}\sqrt{h}\) | \(h=0.005\) | \(h=0.005\) |
|---|---|---|---|---|---|---|---|
| \(t\) | \(k=0.007\) | \(k=0.005\) | \(k=0.003\) | \(k=0.004\) | \(t\) | \(k=0.007\) | \(k=0.005\) |
| (a) |  |  |  |  | (b) |  |  |
| 0.0 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0 | 0.0000 | 0.0000 |
| 0.1 | 0.0140 | 0.0022 | 0.0169 | 0.0054 | 0.1 | 0.0188 | 0.0156 |
| 0.2 | 0.0130 | 0.0045 | 1.1045 | 0.0055 | 0.2 | 0.0415 | 0.0345 |
| 0.3 | 0.0155 | 0.0068 | 329.48 | 0.0258 | 0.3 | 0.0687 | 0.0573 |
| 0.4 | 0.0207 | 0.0091 | 94870 | 0.4337 | 0.4 | 0.1013 | 0.0844 |
| 0.5 | 0.0261 | 0.0115 | \(3\cdot 10^{7}\) | 4.6522 | 0.5 | 0.1399 | 0.1166 |
| 0.6 | 0.0317 | 0.0139 | \(9\cdot 10^{9}\) | 39.357 | 0.6 | 0.1855 | 0.1546 |
| 0.7 | 0.0376 | 0.0165 | \(3\cdot 10^{12}\) | 312.76 | 0.7 | 0.2392 | 0.1994 |
| 0.8 | 0.0437 | 0.0192 | \(8\cdot 10^{14}\) | 3142.3 | 0.8 | 0.3021 | 0.2518 |
| 0.9 | 0.0501 | 0.0221 | \(3\cdot 10^{17}\) | 32585 | 0.9 | 0.3757 | 0.3131 |
| 1.0 | 0.0567 | 0.0251 | \(8\cdot 10^{19}\) | \(3\cdot 10^{5}\) | 1.0 | 0.4613 | 0.3844 |

**Example 5**

A numerical experiment was performed for \(h=0.005\) with different values of \(k\). The approximate values were obtained in the set \([0,2]\times [-B, B]\), where \(B\approx 202\). Maximal errors are given in Table (a) below. The last two columns of Table (a) represent the case of a non-monotone scheme. The errors can be compared with the theoretical results by using Table (b) (see Example 3).

| \(\delta \) | \(h=0.005\) | \(h=0.005\) | \(h=0.005\) | \(h=0.005\) | \(K_{1}\sqrt{h}\) | \(h=0.005\) | \(h=0.005\) |
|---|---|---|---|---|---|---|---|
| \(t\) | \(k=0.015\) | \(k=0.01\) | \(k=0.009\) | \(k=0.008\) | \(t\) | \(k=0.015\) | \(k=0.01\) |
| (a) |  |  |  |  | (b) |  |  |
| 0.0 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0 | 0.0000 | 0.0000 |
| 0.2 | 0.0112 | 0.0048 | 0.0038 | 0.0055 | 0.2 | 0.1382 | 0.1036 |
| 0.4 | 0.0171 | 0.0073 | 0.0060 | 0.2883 | 0.4 | 0.3376 | 0.2532 |
| 0.6 | 0.0186 | 0.0077 | 0.0156 | 8.9333 | 0.6 | 0.6184 | 0.4638 |
| 0.8 | 0.0175 | 0.0070 | 0.0575 | 199.64 | 0.8 | 1.0072 | 0.7554 |
| 1.0 | 0.0154 | 0.0057 | 0.2425 | 3839.4 | 1.0 | 1.5377 | 1.1533 |
| 1.2 | 0.0133 | 0.0045 | 0.7090 | \(1\cdot 10^{5}\) | 1.2 | 2.2538 | 1.6903 |
| 1.4 | 0.0116 | 0.0034 | 1.0670 | \(4\cdot 10^{6}\) | 1.4 | 3.2116 | 2.4087 |
| 1.6 | 0.0100 | 0.0023 | 3.1155 | \(1\cdot 10^{8}\) | 1.6 | 4.4830 | 3.3622 |
| 1.8 | 0.0084 | 0.0012 | 8.4006 | \(5\cdot 10^{9}\) | 1.8 | 6.1600 | 4.6200 |
| 2.0 | 0.0069 | 0.0007 | 22.910 | \(2\cdot 10^{11}\) | 2.0 | 8.3598 | 6.2698 |

**Example 6**

| \(t\) | \(h=0.01\), \(k=0.01\) | \(h=0.005\), \(k=0.01\) | \(h=0.01\), \(k=0.005\) |
|---|---|---|---|
| 0.0 | 0.0000 | 0.0000 | 0.0000 |
| 0.1 | 0.0041 | 0.0091 | 0.0259 |
| 0.2 | 0.0042 | 0.0124 | 0.1094 |
| 0.3 | 0.0043 | 0.0152 | 0.2517 |
| 0.4 | 0.0043 | 0.0176 | 0.4118 |
| 0.5 | 0.0042 | 0.0199 | 0.5324 |
| 0.6 | 0.0041 | 0.0220 | 0.6468 |
| 0.7 | 0.0039 | 0.0238 | 0.7819 |
| 0.8 | 0.0037 | 0.0255 | 0.9344 |
| 0.9 | 0.0034 | 0.0270 | 1.0616 |
| 1.0 | 0.0031 | 0.0284 | 1.2198 |

### Open Access

This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.