1 Introduction

Let \(A:{\mathcal {H}}\rightarrow {\mathcal {H}}\) be a bounded linear operator on a complex Hilbert space \({\mathcal {H}},\) with inner product \(\langle \cdot ,\cdot \rangle \) and associated norm \(\Vert \cdot \Vert .\) The numerical range of A is the set W(A) defined by the following:

$$\begin{aligned} W(A)=\{ {\langle }Ax, x{\rangle }:\,\,x\in {\mathcal {H}},\, \Vert x\Vert =1 \}. \end{aligned}$$
(1.1)

By the famous Toeplitz–Hausdorff Theorem [1, 2], W(A) is convex. This set has been studied extensively for many decades. It is a useful tool in the study and understanding of matrices and operators (see [3,4,5,6,7,8]), and has many applications in numerical analysis, differential equations, system theory, etc. (see [7, 9,10,11,12]). There are many generalizations of the numerical range. For \(1\le k \le n,\) the k-numerical range of a linear operator A on \({\mathbb {C}}^{n},\) introduced by Halmos [13], is defined as follows:

$$\begin{aligned} W_k(A)= \left\{ \sum _{j=1}^{k} \frac{1}{k} x_{j}^*A x_{j}: \{x_1,\dots ,x_k\} \,\, \text{ is } \text{ an } \text{ orthonormal } \text{ set } \right\} \end{aligned}$$
(1.2)

which was proved to be convex by Berger [14]. For any \(\mathbf{c }=(c_{1},\dots ,c_{n})\in {\mathbb {R}}^{n},\) the c-numerical range of a linear operator A on \({\mathbb {C}}^{n},\) first introduced by Marcus [15], is defined as follows:

$$\begin{aligned} W_c(A) = \left\{ \sum _{i=1}^{n} c_i x^{*}_i A x_i : \{x_1,\dots ,x_n\} \,\, \text{ is } \text{ an } \text{ orthonormal } \text{ set } \right\} . \end{aligned}$$
(1.3)

Extending the definition in (1.3) to a bounded or unbounded operator A on a complex Hilbert space \({\mathcal {H}},\) with inner product \(\langle \cdot ,\cdot \rangle ,\) and for \(\mathbf{c }=(c_{1},\dots ,c_{k})\in {\mathbb {R}}^{k},\) the c-numerical range \(W_{c}(A)\) of A is defined as follows:

$$\begin{aligned} W_c(A)= \left\{ \sum _{j=1}^{k} c_j\langle A x_j,x_j \rangle : (x_1,\dots ,x_k)\, \text{ is } \text{ an } \text{ orthonormal } \text{ subset } \text{ of }\, {\mathcal {D}}(A) \right\} ,\qquad \end{aligned}$$
(1.4)

where \({\mathcal {D}}(A)\) is the domain of A. Furthermore, if \(\mathbf{c }\) consists of the single number \(c_1=1,\) then \(W_c(A)\) reduces to the classical numerical range W(A) (see [13]). In addition, for \(c_1=c_2=\dots =c_k=\frac{1}{k},\) the c-numerical range \(W_c(A)\) turns into \(W_k(A).\) Indeed, \(W_k(A) \subseteq W(A).\) Westwick [16] has shown that \(W_c(A)\) is convex for any \(c\in {\mathbb {R}}^{n}.\) In addition, he gave an example showing that, for complex vectors \(c\in {\mathbb {C}}^{n}\) with \(n\ge 3,\) the range \(W_c(A)\) may fail to be convex.
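Definition (1.4) can be explored numerically. The following Python sketch (illustrative only, not part of the original computations; the matrix and weights are arbitrary examples) samples points of \(W_c(A)\) for a small matrix by orthonormalizing random complex vectors:

```python
import random

def gram_schmidt(vectors):
    """Orthonormalize a list of complex vectors (classical Gram-Schmidt)."""
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:
            ip = sum(wc * bc.conjugate() for wc, bc in zip(w, b))  # <w, b>
            w = [wc - ip * bc for wc, bc in zip(w, b)]
        norm = sum(abs(wc) ** 2 for wc in w) ** 0.5
        basis.append([wc / norm for wc in w])
    return basis

def wc_samples(A, c, trials=50, seed=0):
    """Sample points sum_j c_j <A x_j, x_j> over random orthonormal tuples."""
    rng = random.Random(seed)
    n, k = len(A), len(c)
    pts = []
    for _ in range(trials):
        raw = [[complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(n)]
               for _ in range(k)]
        xs = gram_schmidt(raw)
        val = 0j
        for cj, x in zip(c, xs):
            Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
            val += cj * sum(a * b.conjugate() for a, b in zip(Ax, x))
        pts.append(val)
    return pts
```

For instance, when \(k=n\) and all \(c_j=1\), every sampled point equals \(\mathrm{tr}\,A\), since summing \(\langle Ax_j,x_j\rangle \) over a full orthonormal basis gives the trace.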

The c-numerical range is unitary similarity invariant, \(W_{c}(A) = W_{c}(UAU^{*}),\) for any unitary matrix U. It is also transpose invariant, \(W_{c}(A) = W_{c}(A^{\text {T}}).\) Clearly, we have the basic property:

$$\begin{aligned} W_{c}(aA+bI_{n}) = aW_{c}(A)+b\sum _{j=1}^{k}c_j, \end{aligned}$$
(1.5)

for every \(a,b \in {\mathbb {C}}.\) A review of the properties of c-numerical ranges of operator matrices may be found in [17].
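Property (1.5) can be verified pointwise: a fixed orthonormal tuple produces, for \(aA+bI_{n}\), exactly \(a\) times the point it produces for \(A\), shifted by \(b\sum _{j}c_j\). A quick Python check (the matrix and scalars are arbitrary illustrations):

```python
def wc_point(A, c, xs):
    """The point sum_j c_j <A x_j, x_j> produced by the tuple xs."""
    n = len(A)
    total = 0j
    for cj, x in zip(c, xs):
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        total += cj * sum(a * b.conjugate() for a, b in zip(Ax, x))
    return total

A = [[1 + 2j, 3j, 0j], [0j, -1 + 0j, 1 + 0j], [2 + 0j, 0j, 5j]]
a, b = 2 - 1j, 0.5 + 4j
shifted = [[a * A[i][j] + (b if i == j else 0) for j in range(3)]
           for i in range(3)]           # aA + bI
c = [0.7, -1.3]
e1, e2 = [1 + 0j, 0j, 0j], [0j, 1 + 0j, 0j]   # an orthonormal pair
lhs = wc_point(shifted, c, [e1, e2])
rhs = a * wc_point(A, c, [e1, e2]) + b * sum(c)
```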

In this paper, we consider how to compute the c-numerical range by finite difference methods, which reduce the problem to that of computing the c-numerical range of a (finite) matrix or block matrix.

The paper is organized as follows. In Sect. 2.1, some theoretical results are investigated dealing with the c-numerical range of operators using the finite difference method. In Sect. 3, we apply these results to compute the c-numerical range of differential operators. We consider two particular cases. In the first case, \({\mathcal {A}}\) is the Hain–Lüst operator; the underlying Hilbert space in this case is \(H:= L^2(0,1)\times L^2(0,1)\) and the operator is as follows:

$$\begin{aligned} {\mathcal {A}} = {{{\mathcal {L}}}}:=\left( \begin{array}{ll} L &{}w\\ {\widetilde{w}} &{}z \end{array}\right) , \end{aligned}$$
(1.6)

where \(L=-\frac{\mathrm{{d}}^2}{\mathrm{{d}}x^2}+q\) is a Schrödinger operator (with bounded potential q), while w, \({\widetilde{w}}\) and z are bounded multiplication operators. The domain of \({{{\mathcal {L}}}}\) is given by the following:

$$\begin{aligned} {\mathcal {D}}({{\mathcal {L}}})= & {} \left( H^{2}(0,1)\cap H_0^1(0,1)\right) \times L^2(0,1) \\= & {} \left\{ {\left( \begin{array} {c} u \\ v \end{array}\right) }: u(0)=0=u(1), \int _0^1 (|u''|^2+|u'|^2+|u|^2+|v|^2)<\infty \right\} . \end{aligned}$$

This operator was introduced by Hain and Lüst [18] in application to problems of magnetohydrodynamics, and problems of this type have been studied in [19,20,21,22,23]. We shall be concerned with the effects of discretization, both of the unbounded second-order differential operator L and of the bounded multiplication operators which appear in the other entries.

In the second case, \( {\mathcal {A}}\) arises from a Stokes-type system of ordinary differential equations. The underlying Hilbert space is again \(H:= L^2(0,1)\times L^2(0,1)\) and the operator is as follows:

$$\begin{aligned} {\mathcal {A}}:=\left( \begin{array}{ll} -\frac{\text {d}^2}{\text {d}x^2}&{} -\frac{\text {d}}{\text {d}x}\\ \frac{\text {d}}{\text {d}x}&{}-\frac{3}{2}\end{array}\right) ; \end{aligned}$$
(1.7)

the domain of \({\mathcal {A}}\) is given by the following:

$$\begin{aligned} {\mathcal {D}}({\mathcal {A}}) = \left\{ {\left( \begin{array} {l} u \\ v \end{array}\right) }: u(0)=0=u(1), u\in H_0^1(0,1)\cap H^2(0,1)\,\, \text{ and }\,\, v\in H^1(0,1)\right\} . \end{aligned}$$

Note that \({\mathcal {A}}\) is not closed; however, \({\mathcal {A}}\) is closable and its closure is self-adjoint.

2 An Approximating Discrete Operator

We shall replace the Schrödinger operator by a matrix. Suppose \( u\in {\mathcal {D}}(L)\). Pick \(N+1\) points \(x_{0},x_{1},\ldots ,x_{N}\) in the interval [0, 1]; for the sake of simplicity, we assume equal spacing, so that \(x_{0}=0,\) \(x_{j}=jh,\) \(x_{N}=1=Nh\) with \(h=\frac{1}{N}.\) We would like to form the vector:

$$\begin{aligned} {\underline{u}}={\left( \begin{array} {ll} u(x_{1})\\ u(x_{2})\\ \vdots \\ u(x_{N-1}) \end{array}\right) }\in {\mathbb {C}}^{N-1}. \end{aligned}$$
(2.1)

Moreover, if we are to use the quadrature estimates, we shall need more smoothness in u. We, therefore, observe that the set

$$\begin{aligned} {{\mathcal {C}}(L)} := C^4(0,1)\cap H^1_0(0,1) \end{aligned}$$
(2.2)

is a core of the operator L; for \(u\in {\mathcal {C}}(L)\), Eq. (2.1) makes sense. Recall the second-order divided difference approximation of the second derivative:

$$\begin{aligned} \frac{u(x_{j+1})-2u(x_{j})+u(x_{j-1})}{h^2} =u''(x_{j})+\frac{1}{4!}h^2\left( u^{(4)}(\xi _j)+u^{(4)}(\eta _j)\right) . \end{aligned}$$
(2.3)
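The error term in (2.3) is \(O(h^2)\): halving h should reduce the error by a factor of about 4. A quick Python check (the test function \(\sin x\) and the evaluation point 0.3 are arbitrary illustrations):

```python
import math

def second_diff(u, x, h):
    """Central second difference (u(x+h) - 2u(x) + u(x-h)) / h^2."""
    return (u(x + h) - 2 * u(x) + u(x - h)) / h**2

# For u = sin, u'' = -sin; compare the errors at h and h/2
err = lambda h: abs(second_diff(math.sin, 0.3, h) + math.sin(0.3))
ratio = err(0.1) / err(0.05)
```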

Hence

$$\begin{aligned} \left( \begin{array}{c} u''(x_{1})\\ u''(x_{2})\\ \vdots \\ u''(x_{N-1}) \end{array}\right) \simeq \frac{1}{h^2}T{\underline{u}}, \end{aligned}$$

where \(T = \text{ tridiag }(1,-2,1)\) is the tridiagonal matrix with entries \(T_{jj}=-2\), \(T_{j,j\pm 1}= 1\). We also define the diagonal matrix:

$$\begin{aligned} Q_N=\text{ diag }(q(x_{1}),q(x_{2}),\ldots ,q(x_{N-1})). \end{aligned}$$
(2.4)

Our matrix replacement of the Schrödinger operator is a matrix of dimension \((N-1)\times (N-1)\) given by the following:

$$\begin{aligned} L_{N}:=-\frac{1}{h^2}T+Q_N. \end{aligned}$$
(2.5)

We shall use this technique in this paper for computing the c-numerical range. Muhammad and Marletta [24] replaced the Hain–Lüst operator and Stokes operator by matrices. To discretize the Hain–Lüst operator, suppose that \({(\begin{array} {c} u \\ v \end{array})}\in {\mathcal {D}}(A)\). Pick \(N+1\) points \(x_{0},x_{1},\ldots ,x_{N}\) in the interval [0, 1]; for the sake of simplicity, we assume equal spacing, so that \(x_{0}=0,\) \(x_{j}=jh,\) \(x_{N}=1=Nh\) with \(h=\frac{1}{N}.\) We would like to form the vectors:

$$\begin{aligned} {\underline{u}}={\left( \begin{array} {c} u(x_{1}) \\ u(x_{2})\\ \vdots \\ u(x_{N-1}) \end{array}\right) }, \;\; {\underline{v}}={\left( \begin{array} {c} v(x_{1}) \\ v(x_{2})\\ \vdots \\ v(x_{N-1}) \end{array}\right) }\in {\mathbb {C}}^{N-1}; \end{aligned}$$
(2.6)

however, since we only have \(v\in L^2(0,1)\), point values of v are meaningless. Moreover, we shall need more smoothness in u and v if we are to use the quadrature estimates we need. We, therefore, observe that the set

$$\begin{aligned} {{\mathcal {C}}}(A) := (C^4(0,1)\cap H^1_0(0,1))\times C^1(0,1) \end{aligned}$$
(2.7)

is a core of the operator A; for \({\left( \begin{array} {c} u \\ v \end{array}\right) }\in {\mathcal {C}}(A),\) Eq. (2.6) makes sense. We also define diagonal matrices:

$$\begin{aligned} B_N= & {} \text{ diag }(w(x_{1}),w(x_{2}),\ldots ,w(x_{N-1})), \\ C_N= & {} \text{ diag }({\tilde{w}}(x_{1}),{\tilde{w}}(x_{2}),\ldots ,{\tilde{w}}(x_{N-1})),\\ D_N= & {} \text{ diag }(z(x_{1}),z(x_{2}),\ldots ,z(x_{N-1})). \end{aligned}$$

Our matrix replacement of the Hain–Lüst operator is a matrix of dimension \(2(N-1)\times 2(N-1)\) given by the following:

$$\begin{aligned} {\mathbb {A}}_{N}:=\left( \begin{array}{cc} L_{N}&{} B_{N}\\ C_{N} &{}D_{N} \end{array}\right) , \end{aligned}$$
(2.8)
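The block matrix (2.8), with \(L_{N}\) as in (2.5), can be assembled in Python as follows (an illustrative sketch; the function handles `q`, `w`, `wt`, `z` stand for the coefficients \(q,\,w,\,{\widetilde{w}},\,z\)):

```python
def hain_lust_matrix(N, q, w, wt, z):
    """2(N-1) x 2(N-1) block matrix (2.8): [[L_N, B_N], [C_N, D_N]]."""
    h = 1.0 / N
    n = N - 1
    M = [[0j] * (2 * n) for _ in range(2 * n)]
    for j in range(n):
        x = (j + 1) * h
        M[j][j] = 2.0 / h**2 + q(x)          # L_N diagonal
        if j > 0:
            M[j][j - 1] = -1.0 / h**2
        if j < n - 1:
            M[j][j + 1] = -1.0 / h**2
        M[j][n + j] = w(x)                    # B_N = diag(w(x_j))
        M[n + j][j] = wt(x)                   # C_N = diag(wt(x_j))
        M[n + j][n + j] = z(x)                # D_N = diag(z(x_j))
    return M
```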

where \( L_{N}:=-\frac{1}{h^2}T+Q_N\). To discretize the Stokes operator, observe that

$$\begin{aligned}&-\left( \frac{u(x_{j+1})-2u(x_{j})+u(x_{j-1})}{h^2}\right) -\left( \frac{v\left( x_{j+\frac{1}{2}}\right) -v\left( x_{j-\frac{1}{2}}\right) }{h}\right) \nonumber \\&\quad =-\lambda u(x_{j})+O(h^{2}), \end{aligned}$$
(2.9)

and

$$\begin{aligned} -\left( \frac{u(x_{j})-u(x_{j-1})}{h}\right) -\frac{1}{2}v\left( x_{j-\frac{1}{2}}\right) =\lambda v\left( x_{j-\frac{1}{2}}\right) +O(h^{2}). \end{aligned}$$
(2.10)

Hence, our matrix replacement of the Stokes operator is a matrix of dimension \((2N-1)\times (2N-1)\) given by the following:

$$\begin{aligned} {\mathbb {M}}_{N}:=\left( \begin{array}{ll} E_{N}&{} W_{N}\\ W^{\text {T}}_{N} &{}Z_{N} \end{array}\right) , \end{aligned}$$
(2.11)

where \( E_{N}:=-\frac{1}{h^2}T,\)

$$\begin{aligned} W_{N}= -\frac{1}{h}\left( \begin{array}{cccccc} -1 &{} 1 &{} 0 &{} \cdots &{} 0 &{} 0 \\ 0 &{} -1 &{} 1 &{} \cdots &{} 0 &{} 0 \\ \vdots &{} &{} \ddots &{} \ddots &{} &{} \vdots \\ 0 &{} 0 &{} 0 &{} \cdots &{} -1 &{} 1 \end{array}\right) , \end{aligned}$$

and \(Z_{N}=\frac{-3}{2}I_{N\times N}.\)
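A Python sketch of the assembly of \({\mathbb {M}}_{N}\) (illustrative). Note that, as written in (2.11), the matrix is real symmetric, which the check below confirms:

```python
def stokes_matrix(N):
    """(2N-1) x (2N-1) matrix (2.11): [[E_N, W_N], [W_N^T, Z_N]]."""
    h = 1.0 / N
    n = N - 1                       # E_N is n x n, W_N is n x N, Z_N is N x N
    M = [[0.0] * (2 * N - 1) for _ in range(2 * N - 1)]
    for j in range(n):
        M[j][j] = 2.0 / h**2        # E_N = -(1/h^2) T
        if j > 0:
            M[j][j - 1] = -1.0 / h**2
        if j < n - 1:
            M[j][j + 1] = -1.0 / h**2
        M[j][n + j] = 1.0 / h       # W_N row j: -(1/h)*(-1), then -(1/h)*1
        M[j][n + j + 1] = -1.0 / h
        M[n + j][j] = 1.0 / h       # W_N^T
        M[n + j + 1][j] = -1.0 / h
    for j in range(N):
        M[n + j][n + j] = -1.5      # Z_N = -(3/2) I
    return M
```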

2.1 c-Numerical Range Inclusion

The following theorem shows that every point of the c-numerical range of the Schrödinger operator \(L=-\frac{{\text {d}}^{2}}{{\text {d}}x^{2}} + q(x)\) can be approximated to arbitrary accuracy by the finite difference discretization.

Theorem 2.1

Suppose \(\lambda \) is a point of the c-numerical range of the Schrödinger operator \(L=-\frac{\mathrm{{d}}^{2}}{\mathrm{{d}}x^{2}} + q(x)\) in which the potential q lies in \(C^{2}[0,1].\) Then, for all \(\epsilon >0\), there exists \({\widetilde{N}}\), such that for all \(N\ge {\widetilde{N}}\) the c-numerical range of the \((N-1)\times (N-1)\) discretization has a point \(\lambda _{N}\) with \(|\lambda -\lambda _{N}|< \epsilon .\)

Proof

Fix \( \epsilon > 0\) and \(\lambda \in W_{c}(L).\) Then, there exist \(u_{1},u_{2},\ldots ,u_{k}\in {\mathcal {C}}(L)\), such that

$$\begin{aligned} \lambda =\sum _{i=1}^{k}c_{i}\frac{\langle Lu_{i},u_{i}\rangle _{L^{2}(0,1)}}{\langle u_{i},u_{i}\rangle _{L^{2}(0,1)}}. \end{aligned}$$

We need to estimate two types of integral by quadrature.

Type 1: \(L^2\) integrals of smooth functions

Let \(u_{1},u_{2},\ldots ,u_{k}\in C^4(0,1)\) with Dirichlet boundary conditions at 0 and 1. Then

$$\begin{aligned} \sum _{i=1}^{k}c_{i} \langle u_{i},u_{i}\rangle _{L^{2}(0,1)}= & {} \sum _{i=1}^{k}c_{i}\int _0^1 u_{i}\overline{u_{i}}\,\mathrm{{d}}x = \sum _{i=1}^{k}\sum _{j=1}^{N}c_{i}\int _{x_{j-1}}^{x_j} u_{i}(x)\overline{u_{i}(x)}\,\mathrm{{d}}x, \,\, \nonumber \\= & {} \sum _{i=1}^{k}\sum _{j=1}^N c_{i}\left[ \frac{h}{2}\left( u_{i}(x_{j-1})\overline{u_{i}(x_{j-1})}+u_{i}(x_{j})\overline{u_{i}(x_{j})}\right) +O(h^{3})\right] \nonumber \\= & {} \sum _{i=1}^{k}\sum _{j=1}^{N-1}c_{i} h\left| u_{i}(x_{j})\right| ^2+O(h^{2})\;\; \text{ since } u_{i}(x_0)=0=u_{i}(x_N) \nonumber \\= & {} \sum _{i=1}^{k}c_{i}h\langle {{\underline{u}}_{i}},{{\underline{u}}_{i}}\rangle _{{\mathbb {C}}^{N-1}}+O(h^{2}) \end{aligned}$$
(2.12)

for some vectors \({\underline{u}}_{1},{\underline{u}}_{2},\ldots ,{\underline{u}}_{k}\in {\mathbb {C}}^{N-1}, \) where

$$\begin{aligned} {{\underline{u}}_{i}}={\left( \begin{array} {cc} u_{i}(x_{1}) \\ u_{i}(x_{2})\\ \vdots \\ u_{i}(x_{N-1}) \end{array}\right) }\in {\mathbb {C}}^{N-1}\,\,\,\text{ for }\, 1\le i\le k. \end{aligned}$$
(2.13)

By a straightforward computation, the following result can be verified.
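The Type 1 estimate (2.12) says that \(h\langle {\underline{u}},{\underline{u}}\rangle _{{\mathbb {C}}^{N-1}}\) approximates \(\Vert u\Vert _{L^{2}}^{2}\). A Python check with the arbitrary test function \(u(x)=x(1-x)\), for which \(\int _0^1 |u|^2 = 1/30\) (illustrative only):

```python
def discrete_l2_sq(u, N):
    """h * <u_, u_>_{C^{N-1}} for grid samples u_(j) = u(j h), h = 1/N."""
    h = 1.0 / N
    return h * sum(abs(u(j * h)) ** 2 for j in range(1, N))

u = lambda x: x * (1 - x)            # Dirichlet data; integral of u^2 is 1/30
e_coarse = abs(discrete_l2_sq(u, 50) - 1.0 / 30)
e_fine = abs(discrete_l2_sq(u, 100) - 1.0 / 30)
```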

Lemma 2.2

Suppose that \({\underline{u}}_{1},{\underline{u}}_{2},\ldots ,{\underline{u}}_{k}\in {\mathbb {C}}^{N-1},\) and \(u_{1},u_{2},\ldots ,u_{k}\) are eigenfunctions of the Schrödinger operator L; then

$$\begin{aligned} \sum _{i=1}^{k}\left( \frac{1}{h\langle {\underline{u}}_{i},{\underline{u}}_{i}\rangle _{{\mathbb {C}}^{N-1}}+O(h^{2})}\right) =\sum _{i=1}^{k}\left( \frac{1}{h\langle {\underline{u}}_{i},{\underline{u}}_{i}\rangle _{{\mathbb {C}}^{N-1}}}\right) + O(h^{2}). \end{aligned}$$

Type 2: The integrals \(\langle Lu,u\rangle _{L^2}\) for smooth u

For \(u_{1},u_{2},\ldots ,u_{k}\) as before, we observe that \( Lu_{1},Lu_{2},\ldots ,Lu_{k}\) are Lipschitz, because \(u_{1}'',u_{2}'',\ldots ,u_{k}''\in C^2(0,1)\) and \(q\in C^2(0,1):\)

$$\begin{aligned}&\sum _{i=1}^{k}c_{i} \langle Lu_{i},u_{i}\rangle _{L^{2}(0,1)} = \sum _{i=1}^{k}c_{i}\int _0^1 Lu_{i}\overline{u_{i}}\,\mathrm{{d}}x = \sum _{i=1}^{k}\sum _{j=1}^{N}c_{i}\int _{x_{j-1}}^{x_j} Lu_{i}(x)\overline{u_{i}(x)}\,\mathrm{{d}}x \nonumber \\&\quad =\sum _{i=1}^{k}\sum _{j=1}^N c_{i}\left[ \frac{h}{2}\left( Lu_{i}(x_{j-1})\overline{u_{i}(x_{j-1})}+Lu_{i}(x_{j})\overline{u_{i}(x_{j})}\right) +O(h^{3})\right] \nonumber \\&\quad =\sum _{i=1}^{k}\sum _{j=1}^{N-1}c_{i} h\left( Lu_{i}(x_{j})\overline{u_{i}(x_{j})}\right) +O(h)\;\; \nonumber \\&\quad = \sum _{i=1}^{k}\sum _{j=1}^{N-1}c_{i} h(L_{N}{{\underline{u}}_{i}})_{j}\overline{({{\underline{u}}_{i}})_{j}}+O(h)\;\; \nonumber \\&\quad =\sum _{i=1}^{k}c_{i}h\langle L_{N}{{\underline{u}}_{i}},{{\underline{u}}_{i}}\rangle _{{\mathbb {C}}^{N-1}}+O(h); \end{aligned}$$
(2.14)

also we have

$$\begin{aligned} \sum _{i=1}^{k} (L_{N}{\underline{u}}_{i}-\lambda {\underline{u}}_{i})_{j}=\sum _{i=1}^{k}(Lu_{i})(x_{j}) -\lambda \sum _{i=1}^{k}u_{i}(x_j)-\frac{1}{4!}h^2\sum _{i=1}^{k}(s_{i})_{j}, \end{aligned}$$
(2.15)

where \((s_{i})_{j}:=u_{i}^{(4)}(\xi _j)+u_{i}^{(4)}(\eta _j),\) and \(\xi _j,\,\eta _j \in (x_{j-1},x_{j+1}),\) for \(j=1,\ldots ,N-1\) and \(i=1,\ldots ,k.\) Then, by taking inner products on both sides of Eq. (2.15), we get the following:

$$\begin{aligned}&\sum _{i=1}^{k}c_{i}\langle L_{N}{\underline{u}}_{i}-\lambda {\underline{u}}_{i},{\underline{u}}_{i}\rangle _{{\mathbb {C}}^{N-1}}:=\sum _{i=1}^{k}c_{i}\langle L_{N}{\underline{u}}_{i},{\underline{u}}_{i}\rangle _{{\mathbb {C}}^{N-1}}-\sum _{i=1}^{k}c_{i} \lambda \langle {\underline{u}}_{i},{\underline{u}}_{i}\rangle _{{\mathbb {C}}^{N-1}}\nonumber \\&\quad - \sum _{i=1}^{k}c_{i}{\frac{h^{2}}{4!}\langle {\underline{s}}_{i},{\underline{u}}_{i}\rangle _{{\mathbb {C}}^{N-1}}}. \end{aligned}$$
(2.16)

Multiplying by h and adding \(\sum _{i=1}^{k}c_{i}\langle {Lu_{i}},u_{i}\rangle _{L^2}-h\sum _{i=1}^{k}c_{i}\langle L_{N}{\underline{u}}_{i},{\underline{u}}_{i}\rangle _{{\mathbb {C}}^{N-1}}\) to both sides of Eq. (2.16), we obtain the following:

$$\begin{aligned} h\sum _{i=1}^{k}c_{i}\langle L_{N}{\underline{u}}_{i},{\underline{u}}_{i}\rangle _{{\mathbb {C}}^{N-1}}+O(h)=\sum _{i=1}^{k}c_{i}\langle Lu_{i},u_{i}\rangle _{L^2}- \sum _{i=1}^{k}c_{i}{\frac{h^{3}}{4!}\langle {\underline{s}}_{i},{\underline{u}}_{i}\rangle }, \end{aligned}$$
(2.17)

and thus, using Lemma 2.2 and dividing both sides of Eq. (2.17) by \(\sum _{i=1}^{k}\langle u_{i},u_{i}\rangle _{L^{2}},\) we get the following:

$$\begin{aligned} \sum _{i=1}^{k}c_{i}\left( \frac{h\langle L_{N}{\underline{u}}_{i},{\underline{u}}_{i}\rangle _{{\mathbb {C}}^{N-1}}+O(h)}{h\langle {\underline{u}}_{i},{\underline{u}}_{i}\rangle _{{\mathbb {C}}^{N-1}}+O(h^{2})}\right) =\sum _{i=1}^{k}c_{i}\left( \frac{\langle Lu_{i},u_{i}\rangle _{L^{2}}}{\langle u_{i},u_{i}\rangle _{L^{2}}}\right) . \end{aligned}$$

This implies that

$$\begin{aligned} \sum _{i=1}^{k}c_{i}\left( \frac{\langle L_{N}{\underline{u}}_{i},{\underline{u}}_{i}\rangle _{{\mathbb {C}}^{N-1}}}{\langle {\underline{u}}_{i},{\underline{u}}_{i}\rangle _{{\mathbb {C}}^{N-1}}}\right) =\lambda +O(h). \end{aligned}$$

It follows that there exists \(\lambda _{N}\in W_{c}(L_{N})\), such that \(|\lambda -\lambda _{N}|\le \epsilon .\) \(\square \)
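The conclusion of Theorem 2.1 can be illustrated numerically. For \(q\equiv 0\) and the core function \(u(x)=x(1-x)\), we have \(\langle Lu,u\rangle /\langle u,u\rangle =(1/3)/(1/30)=10\), and the discrete quotients approach 10 as N grows. A Python sketch (illustrative only, not the paper's code):

```python
def discrete_quotient(N, u):
    """<L_N u_, u_> / <u_, u_> with q = 0, stencil as in (2.5), h = 1/N."""
    h = 1.0 / N
    us = [u(j * h) for j in range(N + 1)]   # includes u(0) = u(1) = 0
    num = sum((-us[j - 1] + 2 * us[j] - us[j + 1]) / h**2 * us[j]
              for j in range(1, N))
    den = sum(us[j] ** 2 for j in range(1, N))
    return num / den

u = lambda x: x * (1 - x)    # -u'' = 2, so <Lu,u>/<u,u> = (1/3)/(1/30) = 10
q_coarse = discrete_quotient(50, u)
q_fine = discrete_quotient(200, u)
```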

We introduce Definition 2.3, which makes sense for any bounded or unbounded linear operator, in order to bound some crucial quantities in our main Theorem 2.6. Suppose that A is an unbounded linear operator \(A:{\mathcal {D}}(A) \subset H \rightarrow H.\) Consider the sesquilinear form:

$$\begin{aligned} a(x) =\langle Ax, x \rangle \,\,\,\text{ for } \,\,x \in {\mathcal {D}}(A). \end{aligned}$$

We define the R-partial c-numerical range as follows:

Definition 2.3

Let A be an unbounded operator with quadratic form \(a(\cdot ).\) Then, the set

$$\begin{aligned} {W_{cR}(A)}&=\left\{ \sum _{i=1}^{k}c_{i}a(x_{i})\,:\,(x_1,\dots ,x_k)\,\, \text{ is } \text{ an } \text{ orthonormal } \text{ set } \text{ in } {\mathcal {D}}(A)\right. \\&\quad \left. \text{ and }\,\left| \sum _{i=1}^{k}c_{i}a(x_{i})\right| \le \,R \right\} \end{aligned}$$

is called the R-partial c-numerical range of A.

We recall some lemmas useful to prove the main Theorem 2.6.

Lemma 2.4

Let A be an operator as in Definition 2.3; then

$$\begin{aligned} \overline{W_{c}(A)}= \overline{\bigcup _{ \begin{array}{c} R>0 \end{array} } W_{cR}(A)}. \end{aligned}$$

Proof

Suppose that \(\lambda \in W_{c}(A);\) then, there exist orthonormal vectors \(x_1,\dots ,x_k\in {\mathcal {D}}(A),\) such that \(\lambda =\sum _{i=1}^{k}c_{i}a(x_{i}).\) Therefore, by Definition 2.3, \(\lambda \in W_{cR}(A)\) for any \(R\ge \left| \sum _{i=1}^{k}c_{i}a(x_{i})\right| .\) Hence, \(\overline{W_{c}(A)}\subseteq \overline{\bigcup _{R>0}W_{cR}(A)}.\) To prove the opposite inclusion, it is clear that \(W_{cR}(A)\subseteq W_{c}(A)\) for all \(R>0,\) which immediately gives \(\overline{\bigcup _{R>0}W_{cR}(A)}\subseteq \overline{W_{c}(A)}.\)

The next lemma, on constructing a piecewise linear function from a vector in \({\mathbb {C}}^{N-1}\) by interpolation, will be used in the proof of Theorem 2.6. \(\square \)

Lemma 2.5

Let \({\underline{u}}_{1},{\underline{u}}_{2},\ldots ,{\underline{u}}_{k}\in {\mathbb {C}}^{N-1},\) and let \(u_{1},u_{2},\ldots ,u_{k}\in H^{1}_{0}(0,1)\) be the piecewise linear functions given by \(u_{r}(x)=u_{r(j-1)}\frac{(x_{j}-x)}{(x_{j}-x_{j-1})}+u_{rj}\frac{(x-x_{j-1})}{(x_{j}-x_{j-1})}\) for \(1\le r \le k,\) where \(x_{j-1}\le x \le x_{j}\) for \(j=1,2,\ldots ,N \) and \(u_{r0} =0= u_{rN}.\) Then, \((\Vert u'_{r} \Vert _{L_{2}}/\Vert u_{r} \Vert _{L_{2}})^{2} \le 6/h^{2}.\)

Proof

For \(1\le r \le k,\) let \(u_{1},u_{2},\ldots ,u_{k}\in H_0^1(0,1)\) be the piecewise linear functions given by \(u_{r}(x)=u_{r(j-1)}\frac{(x_{j}-x)}{(x_{j}-x_{j-1})}+u_{rj}\frac{(x-x_{j-1})}{(x_{j}-x_{j-1})}.\) Then, on each subinterval \([x_{j-1},x_{j}]\), it is not difficult to see that

$$\begin{aligned} \int _{x_{j-1}}^{x_{j}} |u_{r}(x)|^2\,\mathrm{{d}}x= & {} \frac{h}{2}\left( |u_{rj}|^2+ |u_{r(j-1)}|^2\right) -\frac{h}{6}\left| u_{rj}- u_{r(j-1)}\right| ^2; \end{aligned}$$

thus

$$\begin{aligned} \int _0^1\left| u_{r}\right| ^{2}= & {} \sum _{j=1}^{N-1} h\left| u_{rj}\right| ^{2}-\frac{h^{2}}{6} \int _0^1\left| u'_{r}\right| ^{2}\mathrm{{d}}x. \end{aligned}$$

Rearranging gives

$$\begin{aligned} h\left\langle {\underline{u}}_{r}\, , \, {\underline{u}}_{r}\right\rangle _{{\mathbb {C}}^{N-1}} = \int _0^1 |u_{r}|^2\,\mathrm{{d}}x + \frac{h^2}{6} \Vert u'_{r} \Vert _2^2 \ge \Vert u_{r} \Vert _2^2. \end{aligned}$$
(2.18)

\(\square \)
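The exact identity behind (2.18), \(h\langle {\underline{u}}_{r},{\underline{u}}_{r}\rangle _{{\mathbb {C}}^{N-1}} = \Vert u_{r}\Vert _2^2 + \frac{h^2}{6}\Vert u'_{r}\Vert _2^2\), can be confirmed numerically for the piecewise linear interpolant. A Python check with random real data (illustrative only):

```python
import random

def pw_linear_norms_sq(vals, h):
    """Exact ||u||_2^2 and ||u'||_2^2 of the piecewise linear interpolant
    of (0, vals[0], ..., vals[-1], 0) on a grid of spacing h (real data)."""
    pts = [0.0] + list(vals) + [0.0]
    # On [x_{j-1}, x_j], the linear segment from a to b has
    # integral of u^2 equal to h (a^2 + a b + b^2) / 3 and
    # integral of (u')^2 equal to (b - a)^2 / h.
    mass = sum(h * (a * a + a * b + b * b) / 3 for a, b in zip(pts, pts[1:]))
    stiff = sum((b - a) ** 2 / h for a, b in zip(pts, pts[1:]))
    return mass, stiff

rng = random.Random(1)
N = 16
h = 1.0 / N
vals = [rng.uniform(-1, 1) for _ in range(N - 1)]
mass, stiff = pw_linear_norms_sq(vals, h)
lhs = h * sum(v * v for v in vals)      # h <u_, u_>_{C^{N-1}}
```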

The following theorem is our main result.

Theorem 2.6

Suppose that the coefficient q appearing in the Schrödinger operator \(L=-\frac{\mathrm{{d}}^{2}}{\mathrm{{d}}x^{2}} + q(x)\) is Lipschitz. Consider the \((N-1)\times (N-1)\) matrix \(L_{N}\) given in (2.5). For each \(R>0,\) there exists a constant \(K_{R}\) independent of N, such that \(W_{cR}(L_{N})\) is contained in a \(K_{R}\,N^{-1/2}-\)neighborhood of \(W_{cR}(L).\)

Proof

Suppose that \(\lambda _{N} \in W_{cR}(L_{N});\) then, there exist orthonormal vectors \({\underline{u}}_{1},{\underline{u}}_{2},\ldots ,{\underline{u}}_{k}\) in \({\mathbb {C}}^{N-1}\), such that

$$\begin{aligned} \lambda _{N}=c_{1}\left\langle L_{N}{\underline{u}}_{1},{\underline{u}}_{1}\right\rangle +c_{2}\left\langle L_{N}{\underline{u}}_{2},{\underline{u}}_{2}\right\rangle +\cdots +c_{k}\left\langle L_{N}{\underline{u}}_{k},{\underline{u}}_{k}\right\rangle . \end{aligned}$$

We begin by constructing piecewise linear functions \(u_{1},u_{2},\ldots ,u_{k}\) in \(H_0^1(0,1)\), such that \(u_{r}(x_{j})=u_{rj},\) and \(u_{r0}=0=u_{rN}\) for \(1\le r \le k.\) Clearly, \(u_{1},u_{2},\ldots ,u_{k}\notin {\mathcal {D}}(L)\), because \(u_{1},u_{2},\ldots ,u_{k}\notin H^2(0,1).\) This is not a problem when using the characterization of \(W_{c}(A)\) in Definition 2.3; we simply note that the quadratic form associated with L is the usual Dirichlet form:

$$\begin{aligned} \ell \left( u_{1},u_{1}\right) :=\int _0^1 \left( \left| u'_{1}\right| ^2+q\left| u_{1}\right| ^2\right) \mathrm{{d}}x,\ldots ,\ell \left( u_{k},u_{k}\right) :=\int _0^1 \left( \left| u'_{k}\right| ^2+q\left| u_{k}\right| ^2\right) \mathrm{{d}}x, \end{aligned}$$

so, first, we need to estimate \({\mathcal {I}}_{1}\) on the right-hand side of Eq. (2.19):

$$\begin{aligned}&\left| \sum _{i=1}^{k}c_{i}\left( \frac{\left\langle L_{N}{{\underline{u}}_{i}},{{\underline{u}}_{i}}\right\rangle _{{\mathbb {C}}^{N-1}}}{\left\langle {{\underline{u}}_{i}},{{\underline{u}}_{i}}\right\rangle _{{\mathbb {C}}^{N-1}}}\right) -\sum _{i=1}^{k}c_{i}\left( \frac{\ell \left( u_{i},u_{i}\right) }{\langle u_{i},u_{i}\rangle _{L^2}}\right) \right| \nonumber \\&\quad \le \underbrace{c_{1}\left| \frac{\langle L_{N}{{\underline{u}}_{1}},{{\underline{u}}_{1}}\rangle _{{\mathbb {C}}^{N-1}}}{\langle {{\underline{u}}_{1}},{{\underline{u}}_{1}}\rangle _{{\mathbb {C}}^{N-1}}}-\frac{\ell (u_{1},u_{1})}{\langle u_{1},u_{1}\rangle _{L^2}}\right| }_{{\mathcal {I}}_{1}}+\cdots \nonumber \\&\qquad +\underbrace{c_{k}\left| \frac{\langle L_{N}{{\underline{u}}_{k}},{{\underline{u}}_{k}}\rangle _{{\mathbb {C}}^{N-1}}}{\langle {{\underline{u}}_{k}},{{\underline{u}}_{k}}\rangle _{{\mathbb {C}}^{N-1}}}-\frac{\ell (u_{k},u_{k})}{\langle u_{k},u_{k}\rangle _{L^2}}\right| }_{{\mathcal {I}}_{k}} \end{aligned}$$
(2.19)


Here, \(c_{1}\ell (u_{1},u_{1})= c_{1} l_{0}(u_{1},u_{1})+ c_{1}\int _0^1q |u_{1}|^2\mathrm{{d}}x,\) where \(c_{1} l_{0}(u_{1},u_{1})=c_{1} \int _0^1 |u'_{1}|^2 \mathrm{{d}}x.\) By summation by parts

$$\begin{aligned} c_{1} \int _0^1 \left| u'_{1}\right| ^2\mathrm{{d}}x = \frac{c_{1}}{h}\sum _{j=1}^{N}\left| u_{1j}-u_{1(j-1)}\right| ^{2} = c_{1}h\left\langle -\frac{1}{h^{2}}T{{\underline{u}}_{1}} ,{{\underline{u}}_{1}}\right\rangle _{{\mathbb {C}}^{N-1}}, \end{aligned}$$
(2.20)

and we are left to deal with

$$\begin{aligned}&\sum _{j=1}^{N}\int _{x_{j-1}}^{x_{j}}q(x)|u_{1}|^{2} \nonumber \\&\quad =h\sum _{j=1}^{N-1}q(x_{j})\left| u_{1j}\right| ^{2} \,\ +h\sum _{j=1}^{N-1}\int _{x_{j-1}}^{x_{j}}(q(x)-q(x_{j}))\text{ Re }\left( u_{1j}\left( u_{1(j-1)}-u_{1j}\right) \right) \mathrm{{d}}x \,\ \nonumber \\&\qquad + \frac{h}{3}\sum _{j=1}^{N-1}q(x_{j})\left| u_{1j}-u_{1(j-1)}\right| ^{2}+\frac{h}{3}\sum _{j=1}^{N-1}\int _{x_{j-1}}^{x_{j}}(q(x)-q(x_{j}))\left| u_{1j}-u_{1(j-1)}\right| ^{2}\mathrm{{d}}x\nonumber \\&\qquad +h\sum _{j=1}^{N-1}q(x_{j})\text{ Re }\left( u_{1j}\left( u_{1(j-1)}-u_{1j}\right) \right) +h\sum _{j=1}^{N-1}\int _{x_{j-1}}^{x_{j}}(q(x)-q(x_{j}))\left| u_{1j}\right| ^{2}\mathrm{{d}}x.\nonumber \\ \end{aligned}$$
(2.21)

Assuming that \(q(\cdot )\) is Lipschitz, and using the Cauchy–Schwarz inequality, we get the following:

$$\begin{aligned}&\left| h\sum _{j=1}^{N-1}\int _{x_{j-1}}^{x_{j}}(q(x)-q(x_{j}))\text{ Re }\left( u_{1j}\left( u_{1(j-1)}-u_{1j}\right) \right) \mathrm{{d}}x \right| \nonumber \\&\quad \le \max _{1\le k\le N}\Big (\Big |\int _{x_{k-1}}^{x_{k}}(q(x)-q(x_{k}))\mathrm{{d}}x\Big |\Big )h \sum _{j=1}^{N}\left| u_{1j}\left( u_{1(j-1)}-u_{1j}\right) \right| \nonumber \\&\quad \le M_{1}h^{2}\Big (\sum _{j=1}^{N-1}h\left| u_{1j}\right| ^{2}\Big )^{1/2}h^{1/2}\Big (\sum _{j=1}^{N-1}\left| u_{1(j-1)}-u_{1j}\right| ^{2}\Big )^{1/2} \nonumber \\&\quad \le M_{1}h^{3}\Vert u'_{1}\Vert _{2}, \end{aligned}$$
(2.22)

in which the last inequality follows from (2.20). Here, \(\Vert u'_{1}\Vert _{2}\) can be bounded in a way which depends only on R and not on h. To see this, recall that \(\lambda _{N}\in W_{cR}(L_{N})\), and by Definition 2.3:

$$\begin{aligned} \left| h\left\langle -\frac{1}{h^{2}}T{{\underline{u}}_{1}}\, , \, {{\underline{u}}_{1}}\right\rangle _{{\mathbb {C}}^{N-1}} + h\left\langle Q_N {{\underline{u}}_{1}}\, , \,{{\underline{u}}_{1}}\right\rangle _{{\mathbb {C}}^{N-1}} \right| \le R h \left\langle {{\underline{u}}_{1}}\, , \, {{\underline{u}}_{1}}\right\rangle _{{\mathbb {C}}^{N-1}}; \end{aligned}$$

this yields, in light of (2.20):

$$\begin{aligned} \frac{\Vert u'_{1} \Vert _2^2}{ h\left\langle {{\underline{u}}_{1}}\, , {{\underline{u}}_{1}}\right\rangle _{{\mathbb {C}}^{N-1}}} \le \sup (|q|) + R. \end{aligned}$$

By Lemma 2.5 for \(r=1\):

$$\begin{aligned} h\left\langle {\underline{u}}_{1}\, , \, {\underline{u}}_{1}\right\rangle _{{\mathbb {C}}^{N-1}} = \int _0^1 |u_{1}|^2\,\mathrm{{d}}x + \frac{h^2}{6} \Vert u'_{1} \Vert _2^2 \ge \Vert u_{1} \Vert _2^2. \end{aligned}$$
(2.23)

Thus

$$\begin{aligned} \frac{\Vert u'_{1}\Vert _{2}^2}{\Vert u_{1}\Vert _{2}^2}\le R+ \text{ sup }(|q|). \end{aligned}$$
(2.24)

By the Lipschitz property of q and using (2.20), the fourth term on the right-hand side of Eq. (2.21) satisfies the following:

$$\begin{aligned} \left| \frac{h}{3}\sum _{j=1}^{N-1}\int _{x_{j-1}}^{x_{j}}(q(x)-q(x_{j}))\left| u_{1j}-u_{1(j-1)}\right| ^{2}\mathrm{{d}}x \right| \le M_{1}h^2\Vert u'_{1}\Vert _{2}^2, \end{aligned}$$
(2.25)

while the sixth term admits the bound

$$\begin{aligned} \left| h\sum _{j=1}^{N-1}\int _{x_{j-1}}^{x_{j}}(q(x)-q(x_{j}))\left| u_{1j}\right| ^{2}\mathrm{{d}}x \right| \le M_{1}h^{2}. \end{aligned}$$
(2.26)

The third term of the right-hand side of (2.21) satisfies the following:

$$\begin{aligned}&\left| \frac{h}{3}\sum _{j=1}^{N-1}q(x_{j})\left| u_{1j}-u_{1(j-1)}\right| ^{2}\right| \nonumber \\&\quad \le \max _{1\le k\le N}(|q_{k}|)h\sum _{j=1}^{N}\left| u_{1j}-u_{1(j-1)}\right| ^{2} \le M_{1}h^{2} \Vert u'_{1}\Vert _{2}^2, \end{aligned}$$
(2.27)

the last inequality again coming from (2.20). The one remaining term of the right-hand side of Eq. (2.21) is estimated by the following:

$$\begin{aligned}&\left| h\sum _{j=1}^{N-1}q(x_{j})\text{ Re }\left( u_{1j}\left( u_{1(j-1)}-u_{1j}\right) \right) \right| \le \max _{1\le k\le N}(|q_{k}|)h\sum _{j=1}^{N}\left| \left( u_{1j}\left( u_{1(j-1)}-u_{1j}\right) \right) \right| \nonumber \\&\quad \le M_{1}\Big (\sum _{j=1}^{N-1}h\left| u_{1j}\right| ^{2}\Big )^{1/2}h^{1/2}\Big (\sum _{j=1}^{N-1}\left| u_{1(j-1)}-u_{1j}\right| ^{2}\Big )^{1/2} \nonumber \\&\quad \le M_{1}h\Vert u'_{1}\Vert _{2}, \end{aligned}$$
(2.28)

with (2.20) again providing the last step. Summing all the contributions, we obtain the following:

$$\begin{aligned}&\left| \int _0^1 q \left| u_{1}\right| ^{2}-h\left\langle Q_N {{\underline{u}}_{1}}\, , \,{{\underline{u}}_{1}}\right\rangle _{{\mathbb {C}}^{N-1}}\right| \le M_{1}h\Vert u'_{1}\Vert _{2}, \end{aligned}$$
(2.29)

and so

$$\begin{aligned} h\langle Q_N{{\underline{u}}_{1}},{{\underline{u}}_{1}}\rangle _{ {\mathbb {C}}^{N-1} }= \int _0^1 q \left| u_{1}\right| ^{2}+\zeta _{h}, \end{aligned}$$
(2.30)

where \(|\zeta _{h}|\le M_{1}h \Vert u'_{1}\Vert _{2}.\)

We are now in a position to estimate the following:

$$\begin{aligned} c_{1}\left| \frac{\langle Q_{N}{{\underline{u}}_{1}},{{\underline{u}}_{1}}\rangle _{{\mathbb {C}}^{N-1}}}{\langle {{\underline{u}}_{1}},{{\underline{u}}_{1}}\rangle _{{\mathbb {C}}^{N-1}}}-\frac{\int _0^1 q \left| u_{1}\right| ^{2}}{\int _0^1 \left| u_{1}\right| ^{2}}\right| =:\varepsilon _{q}. \end{aligned}$$
(2.31)

From (2.30) and (2.23):

$$\begin{aligned} \varepsilon _{q}= & {} \left| \frac{\int _0^1 q \left| u_{1}\right| ^{2}+\zeta _{h}}{(\int _0^1\left| u_{1}\right| ^{2})\Big (1+\frac{h^{2}}{6}\frac{\Vert u'_{1}\Vert _{2}^2}{\Vert u_{1}\Vert _{2}^2}\Big )}-\frac{\int _0^1 q \left| u_{1}\right| ^{2}}{\int _0^1 \left| u_{1}\right| ^{2}}\right| \nonumber \\= & {} \left| \frac{\zeta _{h}-\frac{h^{2}}{6}\frac{\Vert u'_{1}\Vert _{2}^2}{\Vert u_{1}\Vert _{2}^2}\int _0^1 q|u_{1}|^2}{(\int _0^1\left| u_{1}\right| ^{2})\Big (1+\frac{h^{2}}{6}\frac{\Vert u'_{1}\Vert _{2}^2}{\Vert u_{1}\Vert _{2}^2}\Big )}\right| \nonumber \\\le & {} \frac{|\zeta _{h}|}{\Vert u_{1}\Vert _{2}^2}+\text{ sup }(|q|)\frac{h^{2}}{6}\Big (\frac{\Vert u'_{1}\Vert _{2}}{\Vert u_{1}\Vert _{2}}\Big )^2. \end{aligned}$$
(2.32)

Now, we estimate

$$\begin{aligned} c_{1}\Big |\frac{h\langle \frac{-1}{h^2}T{\underline{u}}_{1},{\underline{u}}_{1}\rangle _{{\mathbb {C}}^{N-1}}}{h\langle {\underline{u}}_{1},{\underline{u}}_{1}\rangle _{{\mathbb {C}}^{N-1}}}-\frac{l_{0}(u_{1},u_{1})}{\langle u_{1},u_{1}\rangle }\Big |. \end{aligned}$$
(2.33)

From Eqs. (2.20) and (2.23), we have the following:

$$\begin{aligned} c_{1}\frac{h\langle \frac{-1}{h^2}T{\underline{u}}_{1},{\underline{u}}_{1}\rangle _{{\mathbb {C}}^{N-1}}}{h\langle {\underline{u}}_{1},{\underline{u}}_{1}\rangle _{{\mathbb {C}}^{N-1}}}&=\frac{c_{1}l_{0}(u_{1},u_{1})}{(\int _0^1 |u_{1}|^{2})\Big (1+\frac{h^{2}}{6}\frac{\Vert u'_{1}\Vert _{2}^2}{\Vert u_{1}\Vert _{2}^2}\Big )}\nonumber \\&\quad \ge \frac{c_{1}l_{0}(u_{1},u_{1})\Big (1-\frac{h^{2}}{6}\frac{\Vert u'_{1}\Vert _{2}^2}{\Vert u_{1}\Vert _{2}^2}\Big )}{\int _0^1|u_{1}|^{2}}. \end{aligned}$$
(2.34)

Thus, we get the following:

$$\begin{aligned}&c_{1}\Big |\frac{h\langle \frac{-1}{h^2}T{\underline{u}}_{1},{\underline{u}}_{1}\rangle _{{\mathbb {C}}^{N-1}}}{h\langle {\underline{u}}_{1},{\underline{u}}_{1}\rangle _{{\mathbb {C}}^{N-1}}}-\frac{l_{0}(u_{1},u_{1})}{\langle u_{1},u_{1}\rangle }\Big | \nonumber \\&\quad \le c_{1}\frac{l_{0}(u_{1},u_{1})}{\int _0^1|u_{1}|^{2}}\frac{h^{2}}{6}\frac{\Vert u'_{1}\Vert _{2}^2}{\Vert u_{1}\Vert _{2}^2} \le c_{1} M_{1} \Big (\frac{\Vert u'_{1}\Vert _{2}}{\Vert u_{1}\Vert _{2}}\Big )^4 h^2. \end{aligned}$$
(2.35)

Combining Eqs. (2.32), (2.35), and using Eq. (2.24), we get the following:

$$\begin{aligned} c_{1}\Big |\frac{\langle L_{N}{\underline{u}}_{1},{\underline{u}}_{1}\rangle _{{\mathbb {C}}^{N-1}}}{\langle {\underline{u}}_{1},{\underline{u}}_{1}\rangle _{{\mathbb {C}}^{N-1}}}-\frac{\ell (u_{1},u_{1})}{\langle u_{1},u_{1}\rangle _{L^2}}\Big |\le & {} c_{1}\frac{|\zeta _{h}|}{\Vert u_{1}\Vert _{2}^2}+c_{1}\text{ sup }(|q|)\frac{h^{2}}{6}\Big (\frac{\Vert u'_{1}\Vert _{2}}{\Vert u_{1}\Vert _{2}}\Big )^2\nonumber \\&+c_{1}M_{1} \Big (\frac{\Vert u'_{1}\Vert _{2}^2}{\Vert u_{1}\Vert _{2}^2}\Big )^2 h^2 \le c_{1} M_{1}h\sqrt{R}+c_{1}M_{1}h^{2}R\nonumber \\&+c_{1}M_{1}h^{2}R^{2}. \end{aligned}$$
(2.36)

By the same argument, the second and the last terms on the right-hand side of Eq. (2.19) satisfy the following:

$$\begin{aligned}&c_{2}\Big |\frac{\langle L_{N}{\underline{u}}_{2},{\underline{u}}_{2}\rangle _{{\mathbb {C}}^{N-1}}}{\langle {\underline{u}}_{2},{\underline{u}}_{2}\rangle _{{\mathbb {C}}^{N-1}}}-\frac{\ell (u_{2},u_{2})}{\langle u_{2},u_{2}\rangle _{L^2}}\Big | \nonumber \\&\quad \le c_{2} M_{2}h\sqrt{R}+c_{2}M_{2}h^{2}R+c_{2}M_{2}h^{2}R^{2}, \end{aligned}$$
(2.37)

and

$$\begin{aligned}&c_{k}\Big |\frac{\langle L_{N}{\underline{u}}_{k},{\underline{u}}_{k}\rangle _{{\mathbb {C}}^{N-1}}}{\langle {\underline{u}}_{k},{\underline{u}}_{k}\rangle _{{\mathbb {C}}^{N-1}}}-\frac{\ell (u_{k},u_{k})}{\langle u_{k},u_{k}\rangle _{L^2}}\Big | \nonumber \\&\quad \le c_{k} M_{k}h\sqrt{R}+c_{k}M_{k}h^{2}R+c_{k}M_{k}h^{2}R^{2}. \end{aligned}$$
(2.38)

Combining Eqs. (2.36), (2.37), and (2.38), we get the following:

$$\begin{aligned}&\left| \sum _{i=1}^{k}c_{i}\left( \frac{\left\langle L_{N}{{\underline{u}}_{i}},{{\underline{u}}_{i}}\right\rangle _{{\mathbb {C}}^{N-1}}}{\left\langle {{\underline{u}}_{i}},{{\underline{u}}_{i}}\right\rangle _{{\mathbb {C}}^{N-1}}}\right) -\sum _{i=1}^{k}c_{i}\left( \frac{\ell \left( u_{i},u_{i}\right) }{\langle u_{i},u_{i}\rangle _{L^2}}\right) \right| \\&\quad \le \sum _{i=1}^{k}\left( c_{i}M_{i}h\sqrt{R}+c_{i}M_{i}h^{2}R+c_{i}M_{i}h^{2}R^{2}\right) . \end{aligned}$$
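The \(O(h)\) bound above can be probed numerically. The following is a minimal sketch in Python rather than the MATLAB used later in the paper; it assumes the standard tridiagonal second-difference matrix \(T_{N}\) and takes \(u_{1}(x)=\sqrt{2}\sin (\pi x),\) for which the Dirichlet-form Rayleigh quotient \(l_{0}(u_{1},u_{1})/\langle u_{1},u_{1}\rangle \) equals \(\pi ^2\):

```python
import numpy as np

def rayleigh_error(N):
    """Gap between the discrete Rayleigh quotient of (1/h^2) T_N and the
    continuous quotient l_0(u_1, u_1)/<u_1, u_1> = pi^2 for u_1 = sqrt(2) sin(pi x)."""
    h = 1.0 / N
    x = h * np.arange(1, N)                       # interior grid points
    u = np.sqrt(2.0) * np.sin(np.pi * x)          # samples of u_1
    # standard second-difference matrix T_N = tridiag(-1, 2, -1)
    T = 2 * np.eye(N - 1) - np.eye(N - 1, k=1) - np.eye(N - 1, k=-1)
    quotient = (u @ T @ u) / (h**2 * (u @ u))
    return abs(quotient - np.pi**2)

for N in (16, 32, 64):
    print(N, rayleigh_error(N))
```

For this particular eigenfunction the observed decay is in fact \(O(h^2),\) faster than the \(O(h\sqrt{R})\) guaranteed above; the estimate is an upper bound, not an asymptotic rate.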

3 Numerical Experiments on Differential Operator

In this section, we study some concrete examples and demonstrate that, in spite of the results obtained in the previous section, practical calculation of the c-numerical range is far from straightforward. We shall show that discretization techniques may yield misleading results, and that good knowledge of existing analytical estimates of the c-numerical range is often needed to interpret them. The computations were performed in MATLAB.

3.1 Example 1

Assume that \(w\,:[0,1]\, \rightarrow [0,\infty ),\;{\widetilde{w}}\,:[0,1]\, \rightarrow [0,\infty ),\) and \(z\,:[0,1]\, \rightarrow {\mathbb {C}}\) are such that \(w(x)=1,\;{\widetilde{w}}(x)=1,\)

$$\begin{aligned} z(x)=18\mathrm{{e}}^{2\pi i x}-20, \end{aligned}$$

for each \(x \in [0,1]\).

We introduce the differential expressions:

$$\begin{aligned} \tau _{{\widetilde{A}}}:= -\frac{\mathrm{{d}}^{2}}{\mathrm{{d}}x^{2}}, \,\,\, \tau _{{\widetilde{B}}}:=w(x), \,\,\, \tau _{{\widetilde{C}}}:={\widetilde{w}}(x),\,\,\, \tau _{{\widetilde{D}}}:=z(x). \end{aligned}$$
(3.1)

Let \(L,\,B,\,C,\,D\) be the operators in the Hilbert space \(L^{2}(0,1)\) induced by the differential expressions \(\tau _{{\widetilde{A}}},\,\tau _{{\widetilde{B}}},\,\tau _{{\widetilde{C}}},\,\tau _{{\widetilde{D}}}\) with domains:

$$\begin{aligned} {\mathcal {D}}(L):= H^{2}(0,1)\cap H_0^1(0,1),\;\;\;{\mathcal {D}}(B)={\mathcal {D}}(C)={\mathcal {D}}(D):= L^2(0,1). \end{aligned}$$

In the Hilbert space \(L^{2}_{2}(0,1):= L^{2}(0,1)\oplus L^{2}(0,1),\) we introduce the matrix differential operator:

$$\begin{aligned} {\mathcal {A}}=\left( \begin{array}{cc} L&{} B\\ C &{}D \end{array} \right) =\left( \begin{array}{ll} -\frac{\mathrm{{d}}^{2}}{\mathrm{{d}}x^{2}} &{} w(x)\\ {\widetilde{w}}(x) &{}z(x) \end{array} \right) , \end{aligned}$$
(3.2)

on the domain

$$\begin{aligned} {\mathcal {D}}({\mathcal {A}}):=\left\{ {\left( \begin{array} {l} y_{1} \\ y_{2} \end{array}\right) }: y_{1}\in H^2(0,1),\,y_{1}(0)=0=y_{1}(1)\,\, \text{ and }\,\, y_{2}\in L^2(0,1) \right\} . \end{aligned}$$

The finite difference approximation to the Hain–Lüst operator is the \(2(N-1)\times 2(N-1)\) matrix \({\mathbb {A}}_{N}\) given in Eq. (2.8).

Figure 1 shows attempts to compute approximations to the c-numerical range of the Hain–Lüst operator using the matrices (2.8) for the weight vectors \(\mathbf{c }=(1,0,\dots ,0)\) and \(\tilde{\mathbf{c }}=(1.1,0,\dots ,-0.1)\) with \(N=25.\)

Fig. 1: The c-numerical ranges \(W_{\mathbf{c }}({\mathbb {A}}_{25})\) and \(W_{\tilde{\mathbf{c }}}({\mathbb {A}}_{25})\)
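Since the entries of \({\mathbb {A}}_{N}\) from Eq. (2.8) are not reproduced in this excerpt, the sampling behind such point clouds can be described generically. A minimal Python sketch (the function name `c_range_samples` and the random-sampling strategy are ours, not the MATLAB code actually used): draw random orthonormal \(k\)-tuples via a QR factorization and evaluate \(\sum _j c_j\langle Ax_j,x_j\rangle \):

```python
import numpy as np

rng = np.random.default_rng(0)

def c_range_samples(A, c, num=2000):
    """Random points of W_c(A) = { sum_j c_j <A x_j, x_j> } over
    random orthonormal sets {x_1, ..., x_k}, k = len(c)."""
    n, k = A.shape[0], len(c)
    pts = np.empty(num, dtype=complex)
    for i in range(num):
        G = rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k))
        Q, _ = np.linalg.qr(G)                  # k orthonormal columns
        # np.vdot(q, A @ q) = <A q, q> in the inner-product convention (1.1)
        pts[i] = sum(cj * np.vdot(q, A @ q) for cj, q in zip(c, Q.T))
    return pts

# usage: scatter-plot pts.real against pts.imag for a picture like Fig. 1
pts = c_range_samples(np.diag([0.0, 1.0, 1j]), c=[1.1, -0.1])
```

Random sampling fills the interior of \(W_{c}(A)\) much faster than it reaches the boundary, which is one reason such pictures must be read alongside analytical estimates.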

3.2 Analytical Estimates for Example 1

To understand the results in Fig. 1, it is useful to find an analytical estimate for \(W_{c}({\mathcal {A}})\). Let \({\underline{y}}_{1},{\underline{y}}_{2},\ldots ,{\underline{y}}_{k}\) be orthonormal vectors in \({\mathcal {D}}({\mathcal {A}})\), where

$$\begin{aligned} {\underline{y}}_{1}= \left( \begin{array}{ll} y_{11}\\ y_{12}\end{array}\right) ,\,\, {\underline{y}}_{2} = \left( \begin{array}{ll} y_{21}\\ y_{22}\end{array}\right) ,\ldots ,\,\,{\underline{y}}_{k} = \left( \begin{array}{ll} y_{k1}\\ y_{k2}\end{array}\right) , \end{aligned}$$

and let

$$\begin{aligned} \lambda= & {} \sum _{j=1}^{k} c_j\langle {\mathcal {A}}{\underline{y}}_{j},{\underline{y}}_{j}\rangle \nonumber \\= & {} \sum _{j=1}^{k} c_j\langle -y''_{j1},y_{j1}\rangle _{L^{2}(0,1)}+2\sum _{j=1}^{k} c_j\mathfrak {R}\langle y_{j1},y_{j2}\rangle _{L^{2}(0,1)}\nonumber \\&+\sum _{j=1}^{k} c_j\langle zy_{j2},y_{j2}\rangle _{L^{2}(0,1)}. \end{aligned}$$
(3.3)

The first term of Eq. (3.3) gives, as an estimate:

$$\begin{aligned} \sum _{j=1}^{k} c_j\langle -y''_{j1},y_{j1}\rangle _{L^{2}(0,1)}\ge \pi ^2 \sum _{j=1}^{k} c_j\langle y_{j1},y_{j1}\rangle _{L^{2}(0,1)}. \end{aligned}$$
(3.4)

For the second term on the right-hand side of Eq. (3.3), the Cauchy–Schwarz inequality and Young's inequality yield the following:

$$\begin{aligned} 2\sum _{j=1}^{k} c_j\mathfrak {R}\langle y_{j1},y_{j2}\rangle _{L^{2}(0,1)} \ge - \sum _{j=1}^{k} c_j. \end{aligned}$$
(3.5)
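For completeness, the hidden step is this: each \({\underline{y}}_{j}\) is a unit vector in \(L^{2}_{2}(0,1),\) so \(\Vert y_{j1}\Vert _{2}^{2}+\Vert y_{j2}\Vert _{2}^{2}=1,\) and hence

$$\begin{aligned} 2\mathfrak {R}\langle y_{j1},y_{j2}\rangle _{L^{2}(0,1)}\ge -2\Vert y_{j1}\Vert _{2}\Vert y_{j2}\Vert _{2}\ge -\left( \Vert y_{j1}\Vert _{2}^{2}+\Vert y_{j2}\Vert _{2}^{2}\right) =-1, \end{aligned}$$

which, after multiplying by \(c_{j}\) (assumed nonnegative throughout these estimates) and summing over \(j,\) gives Eq. (3.5).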

The real part of the third term on the right-hand side of Eq. (3.3) satisfies (using \(\langle y_{j2},y_{j2}\rangle _{L^{2}(0,1)}=1-\langle y_{j1},y_{j1}\rangle _{L^{2}(0,1)}\) for the unit vectors \({\underline{y}}_{j}\)) the following:

$$\begin{aligned} \sum _{j=1}^{k} c_j\mathfrak {R}\langle zy_{j2},y_{j2}\rangle _{L^{2}(0,1)}\ge \underset{x\in [0,1]}{\text {inf}}\mathfrak {R}(z)\sum _{j=1}^{k}c_{j}(1- \langle y_{j1},y_{j1}\rangle _{L^{2}(0,1)}). \end{aligned}$$
(3.6)

Hence, from Eqs. (3.4), (3.5), and (3.6), we get that

$$\begin{aligned}&\mathfrak {R}(\lambda )\ge \left( \pi ^2- \underset{x\in [0,1]}{\text {inf}}\mathfrak {R}(z)\right) \sum _{j=1}^{k} c_j\langle y_{j1},y_{j1}\rangle _{L^{2}(0,1)}\nonumber \\&\quad +\left( \underset{x\in [0,1]}{\text {inf}}\mathfrak {R}(z)-1\right) \sum _{j=1}^{k} c_j. \end{aligned}$$
(3.7)

This yields the following:

$$\begin{aligned} \mathfrak {R}(\lambda ) \ge \left\{ \begin{array}{ll} \left( \underset{x\in [0,1]}{\text {inf}}\mathfrak {R}(z)-1\right) \mathop {\sum }\nolimits _{j=1}^{k} c_j, &{} \text{ if } \pi ^2-\underset{x\in [0,1]}{\text {inf}}\mathfrak {R}(z)\ge 0;\\ (\pi ^2 - 1)\mathop {\sum }\nolimits _{j=1}^{k} c_j, &{} \text{ if } \pi ^2-\underset{x\in [0,1]}{\text {inf}}\mathfrak {R}(z)< 0. \end{array}\right. \end{aligned}$$

For our example, \(\underset{x\in [0,1]}{\text {inf}}\mathfrak {R}(z)=-38,\) and these bounds yield \(\mathfrak {R}(\lambda )\ge -39 \sum _{j=1}^{k} c_j.\)
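The constants can be sanity-checked numerically; a short sketch, assuming only the definition of \(z\) above:

```python
import numpy as np

# z(x) = 18 exp(2 pi i x) - 20 from Example 1, sampled on a fine grid
x = np.linspace(0.0, 1.0, 100001)
z = 18 * np.exp(2j * np.pi * x) - 20

inf_re = z.real.min()       # inf Re(z) = -38, attained at x = 1/2
sup_im = z.imag.max()       # sup Im(z) = 18, attained at x = 1/4
print(inf_re, sup_im)
```

so the real-part bound becomes \(\mathfrak {R}(\lambda )\ge (-38-1)\sum _{j=1}^{k} c_j=-39\sum _{j=1}^{k} c_j.\)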

To estimate \(\text{ Im }(\lambda )\), observe that

$$\begin{aligned} \text{ Im }(\lambda )= & {} \sum _{j=1}^{k} c_j\mathfrak {I}\langle zy_{j2},y_{j2}\rangle _{L^{2}(0,1)} \nonumber \\\le & {} \underset{x\in [0,1]}{\text {sup}}\mathfrak {I}(z)\sum _{j=1}^{k} c_j \langle y_{j2},y_{j2}\rangle _{L^{2}(0,1)}\le 18\,\sum _{j=1}^{k} c_j \langle y_{j2},y_{j2}\rangle _{L^{2}(0,1)}\qquad \end{aligned}$$
(3.8)

and

$$\begin{aligned} \text{ Im }(\lambda )= & {} \sum _{j=1}^{k} c_j\mathfrak {I}\langle zy_{j2},y_{j2}\rangle _{L^{2}(0,1)}\nonumber \\\ge & {} \underset{x\in [0,1]}{\text {inf}}\mathfrak {I}(z)\sum _{j=1}^{k} c_j \langle y_{j2},y_{j2}\rangle _{L^{2}(0,1)}\ge -18\,\sum _{j=1}^{k} c_j \langle y_{j2},y_{j2}\rangle _{L^{2}(0,1)};\qquad \end{aligned}$$
(3.9)

hence

$$\begin{aligned} -18\sum _{j=1}^{k} c_j\le \text{ Im }(\lambda )\le 18\sum _{j=1}^{k} c_j. \end{aligned}$$

This completes the estimates on \(W_{c}({\mathcal {A}})\).

The following result shows that the c-numerical range of Example 1 is unbounded.

Remark 3.1

Let \(\eta _{1},\eta _{2},\ldots ,\eta _{k}\) be orthonormal vectors in \( {\mathcal {D}}({\mathcal {A}}),\) where

$$\begin{aligned} \eta _{1}= \left( \begin{array}{ll} \sqrt{2} \sin (l\pi x)\\ 0\end{array}\right) ,\,\eta _{2}= \left( \begin{array}{ll} \sqrt{2} \sin ((l+1)\pi x)\\ 0\end{array}\right) ,\ldots ,\eta _{k}= \left( \begin{array}{ll} \sqrt{2} \sin ((l+k-1)\pi x)\\ 0\end{array}\right) ; \end{aligned}$$

then

$$\begin{aligned} \mathop {\sum }\nolimits _{j=1}^{k} c_j\langle {\mathcal {A}}\eta _{j},\eta _{j}\rangle =\pi ^{2}\mathop {\sum }\nolimits _{j=1}^{k} c_j(l+j-1)^{2}\ge l^{2}\pi ^{2}\mathop {\sum }\nolimits _{j=1}^{k} c_j. \end{aligned}$$

However, \(l\in {\mathbb {Z}}\) can be arbitrarily large. This means that \(\mathfrak {R}(\lambda )\) is unbounded above.
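The remark can be confirmed by quadrature; a small sketch, assuming the Example 1 operator \({\mathcal {A}}\) (with the second component of \(\eta \) zero, only the \(\langle -y'',y\rangle \) term survives):

```python
import numpy as np

def rayleigh(l, n=200000):
    """<A eta, eta> for eta = (sqrt(2) sin(l pi x), 0)^T via the trapezoid
    rule; with eta_2 = 0 only <-y1'', y1> = (l pi)^2 ||y1||^2 survives."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    # integrand (-y1'') * y1 = (l pi)^2 * 2 sin^2(l pi x)
    f = (l * np.pi) ** 2 * 2.0 * np.sin(l * np.pi * x) ** 2
    return h * (f.sum() - 0.5 * (f[0] + f[-1]))   # trapezoid rule

for l in (1, 10, 40):
    print(l, rayleigh(l), (l * np.pi) ** 2)
```

The values grow like \(l^{2}\pi ^{2},\) confirming that \(\mathfrak {R}(\lambda )\) is unbounded above.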

3.3 Example 2

We consider the differential expressions:

$$\begin{aligned}&\tau _{{\widetilde{A}}}:= -\frac{\mathrm{{d}}^{2}}{\mathrm{{d}}x^{2}},\tau _{B}:=-\frac{\mathrm{{d}}}{\mathrm{{d}}x}, \end{aligned}$$
(3.10)
$$\begin{aligned}&\tau _{C}:=\frac{\mathrm{{d}}}{\mathrm{{d}}x},\tau _{D_{0}}:=-\frac{3}{2}. \end{aligned}$$
(3.11)

Let \(A_{0},\)\(B_{0},\)\(C_{0},\)\(D_{0}\) be the operators in the Hilbert space \(L^2(0,1)\) induced by the differential expressions \(\tau _{{\widetilde{A}}},\)\(\tau _{B},\)\(\tau _{C},\)\(\tau _{D_{0}}\) with domains:

$$\begin{aligned}&{\mathcal {D}}(A_0):= H^{2}(0,1)\cap H_0^1(0,1), \,\, {\mathcal {D}}(B_0):= H^{1}(0,1),\\&{\mathcal {D}}(C_0):=H_0^1(0,1),\,\, {\mathcal {D}}(D_0):= L^2(0,1). \end{aligned}$$

In the Hilbert space \(L^2_{2}(0,1):= L^2(0,1)\oplus L^2(0,1),\) we introduce the matrix differential operator:

$$\begin{aligned} {\mathcal {A}}:=\left( \begin{array}{cc} A_{0}&{} B_{0}\\ C_{0} &{}D_{0} \end{array} \right) =\left( \begin{array}{ll} -\frac{\mathrm{{d}}^{2}}{\mathrm{{d}}x^{2}} &{} -\frac{\mathrm{{d}}}{\mathrm{{d}}x}\\ \frac{\mathrm{{d}}}{\mathrm{{d}}x} &{}-\frac{3}{2} \end{array} \right) , \end{aligned}$$
(3.12)

on the domain

$$\begin{aligned} {\mathcal {D}}({\mathcal {A}}):= (H^{2}(0,1)\cap H_0^{1}(0,1))\oplus H^{1}(0,1). \end{aligned}$$
(3.13)

The finite difference approximation to the Stokes operator is the \((2N-1)\times (2N-1)\) matrix:

$$\begin{aligned} {\mathbb {M}}_{N}:=\left( \begin{array}{ll} E_{N}&{} W_{N}\\ W^{\text {T}}_{N} &{}Z_{N} \end{array}\right) , \end{aligned}$$

given in Eq. (2.11). Figure 2 shows an estimate of the c-numerical range of the Stokes operator using the matrices (2.11) for the weight vectors \(\mathbf{c }=(1,0,\dots ,0)\) and \(\tilde{\mathbf{c }}=(1.1,0,\dots ,-0.1)\) with \(N=25.\)

Fig. 2: The c-numerical ranges \(W_{\mathbf{c }}({\mathbb {M}}_{25})\) and \(W_{\tilde{\mathbf{c }}}({\mathbb {M}}_{25})\)

3.4 Analytical Estimates for Example 2

To understand the results in Fig. 2, it is useful to find an analytical estimate for \(W_{c}({\mathcal {A}})\). Let \({\underline{\zeta }}_{1},{\underline{\zeta }}_{2},\ldots ,{\underline{\zeta }}_{k}\) be orthonormal vectors in \({\mathcal {D}}({\mathcal {A}}),\) where

$$\begin{aligned} {\underline{\zeta }}_{1}= \left( \begin{array}{ll} \zeta _{11}\\ \zeta _{12}\end{array}\right) ,\,\, {\underline{\zeta }}_{2} = \left( \begin{array}{ll} \zeta _{21}\\ \zeta _{22}\end{array}\right) ,\ldots ,{\underline{\zeta }}_{k} = \left( \begin{array}{ll} \zeta _{k1} \\ \zeta _{k2}\end{array}\right) , \end{aligned}$$

and, writing \(\alpha :=-\frac{3}{2}\) for the constant lower-right entry of \({\mathcal {A}},\) let

$$\begin{aligned} \lambda= & {} \sum _{j=1}^{k} c_j\langle {\mathcal {A}}{\underline{\zeta }}_{j},{\underline{\zeta }}_{j}\rangle \nonumber \\= & {} \sum _{j=1}^{k} c_j\langle -\zeta ''_{j1},\zeta _{j1}\rangle _{L^{2}(0,1)}+2\sum _{j=1}^{k} c_j\mathfrak {R}\langle \zeta '_{j1},\zeta _{j2}\rangle _{L^{2}(0,1)}\nonumber \\&+\alpha \sum _{j=1}^{k} c_j\langle \zeta _{j2},\zeta _{j2}\rangle _{L^{2}(0,1)}. \end{aligned}$$
(3.14)

The first term of Eq. (3.14) gives, as an estimate, the following:

$$\begin{aligned} \sum _{j=1}^{k} c_j\langle -\zeta ''_{j1},\zeta _{j1}\rangle _{L^{2}(0,1)}\ge \pi ^2 \sum _{j=1}^{k} c_j\langle \zeta _{j1}, \zeta _{j1}\rangle _{L^{2}(0,1)}. \end{aligned}$$
(3.15)

For the second term on the right-hand side of Eq. (3.14), the Cauchy–Schwarz inequality and Young's inequality yield the following:

$$\begin{aligned} 2\sum _{j=1}^{k} c_j\mathfrak {R}\langle \zeta '_{j1},\zeta _{j2}\rangle _{L^{2}(0,1)} \ge - \sum _{j=1}^{k} c_j. \end{aligned}$$
(3.16)

The third term on the right-hand side of Eq. (3.14) satisfies (using \(\langle \zeta _{j2},\zeta _{j2}\rangle _{L^{2}(0,1)}=1-\langle \zeta _{j1},\zeta _{j1}\rangle _{L^{2}(0,1)}\) for the unit vectors \({\underline{\zeta }}_{j}\)) the following:

$$\begin{aligned} \alpha \sum _{j=1}^{k} c_j\langle \zeta _{j2},\zeta _{j2}\rangle _{L^{2}(0,1)}= \alpha \sum _{j=1}^{k}c_{j}(1- \langle \zeta _{j1},\zeta _{j1}\rangle _{L^{2}(0,1)}). \end{aligned}$$
(3.17)

Hence, from Eqs. (3.15), (3.16), and (3.17), we get that

$$\begin{aligned} \mathfrak {R}(\lambda )\ge (\pi ^2- \alpha )\sum _{j=1}^{k} c_j\langle \zeta _{j1},\zeta _{j1}\rangle _{L^{2}(0,1)}+(\alpha -1)\sum _{j=1}^{k} c_j. \end{aligned}$$
(3.18)

This yields the following:

$$\begin{aligned} \mathfrak {R}(\lambda ) \ge \left\{ \begin{array}{ll} (\alpha -1)\mathop {\sum }\nolimits _{j=1}^{k} c_j, &{} \text{ if } \;\pi ^2-\alpha \ge 0; \\ (\pi ^2 - 1)\mathop {\sum }\nolimits _{j=1}^{k} c_j, &{} \text{ if } \;\pi ^2-\alpha < 0. \end{array}\right. \end{aligned}$$

Since in our example \(\alpha =-\frac{3}{2},\) this implies that \(\mathfrak {R}(\lambda ) \ge -\frac{5}{2}\sum _{j=1}^{k} c_j.\) On the other hand, it is not difficult to see that \(\mathfrak {I}(\lambda )=0.\)

This completes the estimates on \(W_{c}({\mathcal {A}}).\)
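The conclusion \(\mathfrak {I}(\lambda )=0\) is mirrored at the discrete level: any block matrix of the form \(\left( {\begin{matrix} E &{} W \\ W^{\text {T}} &{} Z\end{matrix}}\right) \) with real symmetric \(E\) and \(Z\) is itself real symmetric, so its c-numerical range lies on the real axis. A sketch (the difference stencils below are plausible stand-ins for \({\mathbb {M}}_{N}\) of Eq. (2.11), not a verbatim copy):

```python
import numpy as np

def stokes_fd(N):
    """Finite-difference model of the Stokes-type operator (3.12): second
    differences for -d^2/dx^2, central differences for -d/dx, Z = -(3/2) I.
    Stencils assumed for illustration, following the block layout above."""
    h = 1.0 / N
    I = np.eye(N - 1)
    E = (2 * I - np.eye(N - 1, k=1) - np.eye(N - 1, k=-1)) / h**2
    W = (np.eye(N - 1, k=-1) - np.eye(N - 1, k=1)) / (2 * h)   # -d/dx
    Z = -1.5 * I
    return np.block([[E, W], [W.T, Z]])

M = stokes_fd(25)
print(np.allclose(M, M.T))               # real symmetric block matrix

# a Rayleigh quotient of a real symmetric matrix is real up to rounding
x = np.random.default_rng(1).standard_normal(2 * 24) + 0j
print(abs(np.vdot(x, M @ x).imag))
```

The same symmetry argument explains why the point clouds in Fig. 2 hug the real axis.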

4 Conclusions

The computation of c-numerical ranges of operator matrices of differential operators and block operator matrices of differential operators by finite difference techniques is easier to implement than Galerkin discretization methods, but the theoretical analysis is more involved and the results obtained are less esthetically satisfactory. In the case of Galerkin discretizations, the c-numerical ranges are nested:

$$\begin{aligned} W_{c}(A_{N+1}) \supseteq W_{c}(A_{N}) \,\, \text{ for } \text{ all }\,\, N\in {\mathbb {N}}. \end{aligned}$$

This is not guaranteed for finite differences, where we only have Theorem 2.6. However, in all cases, whether the discretization is by finite-difference or variational methods, the numerical results can only be properly understood in conjunction with an understanding of the spectral theory of the original operator.