# Local two-sided bounds for eigenvalues of self-adjoint operators

## Abstract

We examine the equivalence between an extension of the Lehmann–Maehly–Goerisch method developed a few years ago by Zimmermann and Mertins, and a geometrically motivated method developed more recently by Davies and Plum. We establish a general framework which allows us to sharpen various previously known results in these two settings and to determine explicit convergence estimates for both methods. We demonstrate the applicability of the method of Zimmermann and Mertins by means of numerical tests on the resonant cavity problem.

### Mathematics Subject Classification

65M60 65L60 65L15 65N12

## 1 Introduction

In this work we study in close detail the equivalence between two pollution-free techniques for numerical computation of eigenvalue bounds for general self-adjoint operators: a method considered a few years ago by Zimmermann and Mertins [35], and a method developed more recently by Davies and Plum [23]. These two methods are pollution-free by construction and have been proven to provide reliable numerical approximations.

The approach of Zimmermann and Mertins is built on an extension of the Lehmann–Maehly–Goerisch method [4, 26, 33] and it has proven to be highly successful in various concrete applications. These include the computation of bounds for eigenvalues of the radially reduced magnetohydrodynamics operator [15, 35], the study of complementary eigenvalue bounds for the Helmholtz equation [6] and the calculation of sloshing frequencies [4, 5].

The method of Davies and Plum, on the other hand, is based on a notion of approximated spectral distance and is highly geometrical in character. Its original formulation dates back to [21, 22, 23], but it is yet to be tested properly on models of dimension larger than one.

In this work we follow the analysis conducted in [23, Section 6], where the equivalence of these two techniques was formulated in a precise manner. Our main goal is twofold. On the one hand, we examine more closely the nature of this equivalence by considering multiple eigenvalues. On the other hand, we determine sharp estimates for both methods. These results include convergence and error estimates for both the eigenvalues and the associated eigenfunctions. We finally illustrate the applicability of the method of Zimmermann and Mertins using the Maxwell eigenvalue problem as a benchmark.

### 1.1 Context, scope and contribution of the present work

The computational approach considered in this work has a “local” character, in the sense that a shift parameter must be set beforehand. The methods derived from this approach only provide information about the spectrum in a vicinity of this parameter, in a similar fashion to the way the Galerkin method gives information only about the eigenvalues below the bottom of the essential spectrum. They give upper bounds for the eigenvalues to the right of the parameter and lower bounds for the eigenvalues to the left of it.

The method of Davies and Plum primarily relies on the geometrical properties of a notion of approximated spectral distance. We introduce this notion in Sect. 3. Our Proposition 2 was first formulated in [21, theorems 3 and 4]. These statements played a fundamental role in the proof of [23, Theorem 11] which provided crucial connections with the method of Zimmermann and Mertins. In Proposition 5 and Corollary 6 we establish an extension of [21, theorems 3 and 4] allowing multiple eigenvalues. These rely on convexity results due to Danskin (see Lemma 4 and [8, Theorem D1]) and they are of fundamental importance in various parts of our analysis.

Our Lemma 9 follows the original [23, Theorem 11] and its proof involves very similar arguments. In conjunction with Corollary 6, it leads to an alternative proof of [35, Theorem 1.1] which includes multiplicity counting. The latter is the central statement of what we call the method of Zimmermann and Mertins. This alternative derivation of the method is formulated in our main Theorem 10 and Corollary 11.

Theorems 13 and 14, and Corollary 15, are precise formulations of convergence in the setting of the method of Davies and Plum. The two theorems differ from one another in that a higher order of approximation occurs when the shift parameter is away from the spectrum. In Theorem 16 we show that, remarkably, the method of Zimmermann and Mertins always renders the higher order of approximation, as a consequence of Corollary 15. This is in good agreement, for instance, with the results presented in [34], which compare the errors in Lehmann–Goerisch and Rayleigh–Ritz bounds (see also [28], where convergence of iterative solvers is studied).

In Proposition 7 we establish upper bounds for error estimates for eigenfunctions in terms of spectral gaps. This statement is related to similar results of Weinberger [32] and Trefftz [30]. See also [33, Chapter 5]. The precise connection between Proposition 7 and all these results is unclear at present and will be examined elsewhere.

The model of the isotropic resonant cavity that we consider in Sect. 6 has been well-documented to render spectral pollution when the classical Galerkin method and finite elements of nodal type are employed for numerical approximation. We show by means of numerical tests that, remarkably, the method of Zimmermann and Mertins provides robust and accurate approximations of the eigenvalues of the Maxwell operator even when implemented on standard Lagrange elements. By construction, this method is free from spectral pollution. A more systematic investigation in this respect with many more numerical tests (including anisotropic media), a convergent algorithm and a reference to a fully reproducible computer code can be found in [3].

Preliminary information on the number of eigenvalues in a given interval, which might or might not be available in practice, allows the determination of enclosures from the one-sided bounds produced by the approaches discussed in this work. Convergence also yields enclosures in suitable asymptotic regimes. The algorithm described in [3] is an example of a concrete realisation of this assertion.

### 1.2 Outline of the analysis

Section 2 includes the notational conventions and assumptions which will be used throughout this work. Section 3 sets the general framework of approximated spectral distances and their geometrical properties. There we also discuss approximation of eigenspaces with explicit estimates. The method of Zimmermann and Mertins is derived in Sect. 4 and its convergence is established in Sect. 5. These two sections comprise the main contribution of this work. The final Sect. 6 is devoted to illustrating a concrete computational application of the method of Zimmermann and Mertins to the resonant cavity problem.

## 2 Preliminary notation, conventions, and assumptions

Let \(A:{\text {D}}(A)\longrightarrow \mathcal {H}\) be a self-adjoint operator acting on a Hilbert space \(\mathcal {H}\). Decompose the spectrum of *A* in the usual fashion, as the disjoint union of discrete and essential spectra, \(\sigma (A)=\sigma _\mathrm {disc}(A)\cup \sigma _\mathrm {ess}(A)\). Let *J* be any Borel subset of \(\mathbb R\). Below the spectral projector associated to *A* is denoted by \({\mathbbm {1} }_{J}(A)=\int _J \mathrm {d}E_\lambda \), so that \( {{\mathrm{Tr}}}{\mathbbm {1} }_{J}(A)=\dim {\mathbbm {1} }_{J}(A) \mathcal {H}.\) We write \(\mathcal {E}_J(A)=\oplus _{\lambda \in J} \ker (A-\lambda )\) with the convention \(\mathcal {E}_\lambda (A)=\mathcal {E}_{\{\lambda \}}(A)\). Generally \(\mathcal {E}_J(A)\subseteq {\mathbbm {1} }_{J}(A)\mathcal {H}\), however there is no reason for these two subspaces to be equal except when the spectrum within *J* is only pure point.

Everywhere below \(t\in \mathbb R\) will denote a scalar parameter. This is the shift parameter which is intrinsic to the methods.

For \(u\in {\text {D}}(A)\) write \(|u|_t=\Vert (A-t)u\Vert \). This defines a *t*-dependent semi-norm, which is a norm if *t* is not an eigenvalue, and it encodes the relative position of *t* for the spectrum of *A*.

Denote by \(\mathfrak {d}(t)={\text {dist}}[t,\sigma (A)]\) the distance from *t* to \(\sigma (A)\), and by \(\mathfrak {d}_j(t)\) the distance from *t* to the *j*th nearest point in \(\sigma (A)\), counting multiplicity but in a generalised sense. That is, the sequence \((\mathfrak {d}_j(t))_{j\in \mathbb N}\) becomes stationary when it attains the distance from *t* to the essential spectrum. Moreover, \(\mathfrak {n}_j^{\mp }(t)\) will denote the *j*th point in \(\sigma (A)\) to the left\((-)\)/right\((+)\) of *t*, counting multiplicities. Here \(t\in \sigma (A)\) is allowed and neither *t* nor \(\mathfrak {n}_j^{\mp }(t)\) have to be isolated from the rest of \(\sigma (A)\). Without further mention, all the statements below regarding bounds on \(\mathfrak {n}_j^\mp (t)\) will be immediate and useless in either of these two cases and so will not be considered in the proofs.

Analogously, \(\nu _j^{\pm }(t)\) will denote the *j*th points in the spectrum of *A* which are strictly to the left and strictly to the right of *t*, respectively. The inequality \(\nu _j^\pm (t)\not = \mathfrak {n}^\pm _j(t)\) only occurs when *t* is an eigenvalue.
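To fix ideas, the counting conventions above can be illustrated on a spectrum that is known exactly. The following sketch is ours (the function names and the sample spectrum are not part of the paper); it computes finite analogues of \(\mathfrak {d}_j(t)\) and \(\mathfrak {n}_j^{\mp }(t)\) for a purely discrete spectrum listed with multiplicity.

```python
import numpy as np

def d_j(t, sigma, j):
    """j-th smallest distance from t to the points of sigma (the spectrum,
    listed with multiplicity): a finite analogue of d_j(t)."""
    return np.sort(np.abs(np.asarray(sigma) - t))[j - 1]

def n_j(t, sigma, j, side):
    """j-th point of sigma to the left (side=-1) or right (side=+1) of t,
    counting multiplicity; t itself is admitted on both sides, as for n_j."""
    s = np.asarray(sigma)
    pts = np.sort(s[s <= t])[::-1] if side < 0 else np.sort(s[s >= t])
    return pts[j - 1]

sigma = [0.0, 1.0, 1.0, 2.5, 4.0]   # sample spectrum; 1.0 has multiplicity 2
```

For \(t=2\) this gives \(\mathfrak {d}_1(2)=0.5\), \(\mathfrak {d}_2(2)=1\), \(\mathfrak {n}_1^-(2)=\mathfrak {n}_2^-(2)=1\) (the double eigenvalue is counted twice) and \(\mathfrak {n}_1^+(2)=2.5\).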

Everywhere below \(\mathcal {L}\subset {\text {D}}(A)\) will be a (trial) subspace of dimension \(n=\dim \mathcal {L}\). Unless explicitly stated, we will assume the following.

### Assumption 1

*t* and subspace \(\mathcal {L}\) are such that

The integer number \(m\le n\) will always be chosen such that the following assumption holds true.

### Assumption 2

By virtue of (6), \(\delta _j(t)> \mathfrak {d}_j(t)\) for all \(j\le m\).

## 3 Approximated local counting functions

The approximated local counting functions are computed from a shift at *t* of the action of *A* onto \(\mathcal {L}\), see [21, Section 3]. For \(j\le n\), let \(F_j(t)\) be the *j*th smallest value of \(|u|_t/\Vert u\Vert \) over \(\mathcal {L}\) in the min-max sense,
$$\begin{aligned} F_j(t)=\min \Big \{\max _{0\ne u\in V}\frac{|u|_t}{\Vert u\Vert }: V\subseteq \mathcal {L},\ \dim V=j\Big \}. \end{aligned}$$
According to Lemma 1 below, there always exist at least *j* spectral points of *A* in the segment \([t-F_j(t),t+F_j(t)]\). As we shall see next, this possibly includes a part of the essential spectrum.
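In finite dimensions \(F_j(t)\) can be evaluated as a small matrix problem. The sketch below is ours: it takes a diagonal *A* (so that the spectrum is known), a trial space spanned by the columns of a matrix, and returns \(F_1(t)\le \cdots \le F_n(t)\) as square roots of the eigenvalues of the form \(\langle (A-t)u,(A-t)v\rangle \) relative to the Gram matrix of the basis, in line with the observation in the proof of Lemma 3 that \(|u|_t/\Vert u\Vert \) is the square root of the Rayleigh quotient of \((A-t)^2\).

```python
import numpy as np

def F(t, A, L):
    """Values F_1(t) <= ... <= F_n(t) for the trial space spanned by the
    columns of L: square roots of the eigenvalues of the form
    <(A-t)u,(A-t)v> relative to the Gram matrix of the basis."""
    R = (A - t*np.eye(len(A))) @ L          # (A - t) applied to the basis
    Q = R.T @ R                             # Q_jk = <(A-t)b_k, (A-t)b_j>
    G = L.T @ L                             # Gram matrix of the basis
    # reduce the pencil (Q, G) to a standard eigenproblem via G^{-1/2}
    w, U = np.linalg.eigh(G)
    Gih = U @ np.diag(w**-0.5) @ U.T
    mu = np.linalg.eigvalsh(Gih @ Q @ Gih)
    return np.sqrt(np.clip(mu, 0.0, None))

A = np.diag([0.0, 1.0, 2.5, 4.0])           # spectrum known exactly
L = np.eye(4)[:, :2]                        # trial space: eigenvectors for 0 and 1
```

When \(\mathcal {L}\) captures eigenvectors exactly, \(F_j(t)\) coincides with the corresponding distances to the captured eigenvalues: here \(F(0.75)= (0.25,\ 0.75)\).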

### Lemma 1

### Proof

Let *B* be a non-negative self-adjoint operator such that \(\mathcal {L}\subset {\text {D}}(B)\subset {\text {D}}(B^{1/2})\). Let \(b(u)=\langle B^{1/2}u,B^{1/2}u\rangle \) for all \(u\in {\text {D}}(B^{1/2})\) be the closure of the quadratic form associated to

*B*. Let

*B*. In other words, \(\mathcal {E}_{\lambda _j}(B)\ne \{0\}\). Let us firstly verify the validity of this claim.

*v* is an eigenvector associated with \(\lambda _1\). This implies the above claim for \(j=1\).

Now suppose that \(j\ge 2\). We have two possibilities. Either \(\tilde{\lambda }_j(\mathcal {L})=\lambda _j\) is in the discrete spectrum of *B* and the claim follows, or it is in the essential spectrum. In the latter case, without loss of generality we can assume that \(\tilde{\lambda }_j(\mathcal {L})\not \in \sigma _\mathrm {disc}(B)\) and \(\lambda _{j-1}\in \sigma _\mathrm {disc}(A)\). That is, \(\lambda _k\in \sigma _\mathrm {disc}(B)\) for any \(k\in \{1,\ldots ,j-1\}\) and \(\lambda _k=\lambda _j\) for any \(k\in \{j,\ldots ,n\}\).

*B*. This is the above claim for \(j\ge 2\).

We now complete the proof of the lemma. Recall (3) and (7). We have two possibilities, either \(F_j(t)=\mathfrak {d}_j(t)\) or \(F_j(t)>\mathfrak {d}_j(t)\).

*j* eigenvalues and so

*A* or is an endpoint of a segment in \(\sigma (A)\). Thus,

*j*th smallest eigenvalue \(\mu \) of the non-negative weak problem:

### 3.1 Optimal setting for local detection of the spectrum

As we show next, it is possible to detect the spectrum of *A* to the left/right of *t* by means of \(F_j\) in an optimal setting. This is a crucial ingredient in the formulation of the strategy proposed in [21, 22, 23].

The following statement was first formulated in [21, theorems 3 and 4] and will be sharpened in Corollary 6.

### Proposition 2

### Proof

*t*, from Poincaré’s Eigenvalue Separation Theorem [9, Theorem III.1.1], a necessary requirement on \(\mathcal {L}\) should certainly be the condition

### Remark 1

From Proposition 2 it follows that optimal lower bounds for \(\mathfrak {n}_j^-(t)\) are achieved by finding \(\hat{t}^-_j\le t\), the closest point to *t*, such that \(F_j(\hat{t}^-_j)=t-\hat{t}^-_j\). Indeed, by virtue of (13), \( t^- - F_j(t^-)\le \hat{t}^-_j - F_j(\hat{t}^-_j)\le \mathfrak {n}_j^-(t)\) for any other \(t^-\) as in (12). Similarly, optimal upper bounds for \(\mathfrak {n}_j^+(t)\) are found by analogous means. This observation will play a crucial role in Sect. 4.
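Remark 1 suggests a simple numerical procedure: slide \(\hat{t}\) to the left of *t* until \(F_j(\hat{t})=t-\hat{t}\). Since \(F_j\) is 1-Lipschitz, the map \(s\mapsto F_j(s)+s-t\) is non-decreasing and a bisection suffices. The sketch below is ours; it uses a toy \(F_1\) corresponding to a trial space that captures two eigenvectors exactly, and returns the resulting lower bound \(\hat{t}^-_1-F_1(\hat{t}^-_1)=2\hat{t}^-_1-t\).

```python
def F1(s, captured):
    # F_1(s) for a trial space capturing the eigenvectors of the
    # eigenvalues in `captured` exactly (a simplifying assumption)
    return min(abs(e - s) for e in captured)

def optimal_shift(t, Fj, s_lo, tol=1e-10):
    """Bisection for a point t_hat <= t with Fj(t_hat) = t - t_hat,
    using that g(s) = Fj(s) + s - t is non-decreasing (Fj is 1-Lipschitz)."""
    a, b = s_lo, t
    assert Fj(a) + a - t < 0 < Fj(b)     # root bracketed; Fj(t) > 0 is Assumption 1
    while b - a > tol:
        m = 0.5*(a + b)
        if Fj(m) + m - t < 0:
            a = m
        else:
            b = m
    return 0.5*(a + b)

t = 2.0
t_hat = optimal_shift(t, lambda s: F1(s, [0.0, 1.0]), s_lo=1.0)
lower_bound = 2*t_hat - t                # = t_hat - F1(t_hat) <= n_1^-(t)
```

In this toy setting \(\hat{t}^-_1=1.5\) and the lower bound equals 1, the exact position of \(\mathfrak {n}_1^-(2)\), reflecting that the trial space is exact.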

Proposition 2 is central to the hierarchical method for finding eigenvalue inclusions examined a few years ago in [21, 22]. For fixed \(\mathcal {L}\) this method leads to bounds for eigenvalues which are far sharper than those obtained from the obvious idea of estimating local minima of \(F_1(t)\). From an abstract perspective, Proposition 2 provides an intuitive insight on the mechanism for determining complementary bounds for eigenvalues. The method proposed in [21, 22, 23] is yet to be explored more systematically in a practical setting. However in most circumstances, the technique described in [35], considered in detail in Sect. 4, is easier to implement.

### 3.2 Geometrical properties of the first approximated counting function

We now determine various geometrical properties of \(F_1\) and examine its connection to the spectral distance.

### Lemma 3

- (a)
There exists a minimiser \(u\in \mathcal {L}\) of the right side of (7) for \(j=1\), such that \(|u|_t=\mathfrak {d}_1(t)\) for a single \(t\in (\lambda -\frac{|\lambda -\nu _1^-(\lambda )|}{2}, \lambda +\frac{|\lambda -\nu _1^+(\lambda )|}{2})\),

- (b)
\(F_1(t)=\mathfrak {d}_1(t)\) for a single \(t\in (\lambda -\frac{|\lambda -\nu _1^-(\lambda )|}{2}, \lambda +\frac{|\lambda -\nu _1^+(\lambda )|}{2})\),

- (c)
\(F_1(s)=\mathfrak {d}_1(s)\) for all \(s\in [\lambda -\frac{|\lambda -\nu _1^-(\lambda )|}{2},\lambda +\frac{|\lambda -\nu _1^+(\lambda )|}{2}]\),

- (d)
\(\mathcal {L}\cap \mathcal {E}_\lambda (A)\not =\{0\}\).

### Proof

Since \(\mathcal {L}\) is finite-dimensional, (a) and (b) are equivalent by the definitions of \(\mathfrak {d}_1(t)\), \(F_1(t)\) and \(q_t\). From the paragraph above the statement of the lemma it is clear that (d) \(\Rightarrow \) (c) \(\Rightarrow \) (b). Since \(|u|_t/ \Vert u\Vert \) is the square root of the Rayleigh quotient associated to the operator \((A-t)^2\), the fact that \(\lambda \) is isolated combined with the Rayleigh–Ritz principle, gives the implication (a) \(\Rightarrow \) (d). \(\square \)

As there can be a mixing of eigenspaces, it is not possible to replace (b) in this lemma by an analogous statement including \(t=\lambda \pm \frac{|\lambda -\nu _1^\pm (\lambda )|}{2}\). If \(\lambda '=\lambda +|\lambda -\nu _1^+(\lambda )|\) is an eigenvalue, for example, then \(F_1(\frac{\lambda +\lambda '}{2})=\mathfrak {d}_1(\frac{ \lambda +\lambda '}{2})\) ensures that \(\mathcal {L}\) contains elements of \(\mathcal {E}_\lambda (A) \oplus \mathcal {E}_{\lambda '}(A)\). However, it is not guaranteed to contain elements of either of these two subspaces.

### 3.3 Geometrical properties of the subsequent approximated counting functions

Various extensions of Lemma 3 to the case \(j>1\) are possible; however, it is difficult to write these results in a neat fashion. Proposition 5 below is one such extension.

### Lemma 4

In the statement of this lemma, note that the left and right derivatives of both \(\mathcal {J}\) and \(\tilde{\mathcal {J}}\) can be different.

### Proposition 5

- (a)
\(|F_j(t)-F_j(s)|= |t-s|\) for some \(s\not =t\).

- (b)
There exists an open segment \(J\subset \mathbb R\) containing *t* in its closure, such that
$$\begin{aligned} |F_j(t)-F_j(s)|= |t-s| \quad \forall s\in \overline{J}. \end{aligned}$$

- (c)
There exists an open segment \(J\subset \mathbb R\) containing *t* in its closure, such that
$$\begin{aligned} \forall s\in J,\text { either} \quad \mathcal {L}\cap \mathcal {E}_{s+ F_j(s)}(A)\not =\{0\} \quad \text {or} \quad \mathcal {L}\cap \mathcal {E}_{s- F_j(s)}(A)\not =\{0\}. \end{aligned}$$

### Proof

\(\underline{ \mathrm{(b)} \Rightarrow \mathrm{(c)}}\). Assume (b). Then \(s\mapsto F_j(s)\) is differentiable in *J* and its one-sided derivatives are equal to 1 or \(-1\) in the whole of this interval. For this part of the proof, we aim to apply (15) in order to get another expression for these derivatives.

Let \(\mathcal {F}_j\) be the family of \((j-1)\)-dimensional linear subspaces of \(\mathcal {L}\). Identify an orthonormal basis of \(\mathcal {L}\) with the canonical basis of \(\mathbb {C}^n\). Then any other orthonormal basis of \(\mathcal {L}\) is represented by a matrix in \(\mathrm {O}(n)\), the orthogonal group. By picking the first \((j-1)\) columns of these matrices, we cover all possible subspaces \(V\in \mathcal {F}_j\). Indeed we just have to identify \((\underline{v}_1|\cdots |\underline{v}_{j-1})\) for \([\underline{v}_{kl}]_{kl=1}^n \in \mathrm {O}(n)\) with \(V=\mathrm {Span}\{\underline{v}_k\}_{k=1}^{j-1}\).

*g*(*s*, *V*) and \(\partial _{s}^{\pm }g(s,V)\) are upper semi-continuous. Therefore, a further application of Lemma 4 yields

*u* must be an eigenvector of *A* associated with either \(s+F_j(s)\) or \(s-F_j(s)\). This is precisely (c).

\(\underline{\mathrm{(c)} \Rightarrow \mathrm{(a)}}\). Under the condition (c), there exists an open segment \(\tilde{J}\subseteq J\), possibly smaller, such that \(t\in \overline{\tilde{J}}\) and \(F_j(s)=\mathfrak {d}_j(s)\) for all \(s\in \tilde{J}\). Since \(|\mathfrak {d}_j(s)-\mathfrak {d}_j(r)|=|s-r|\), then either (a) is immediate, or it follows by taking \(r\rightarrow t\). \(\square \)

Proposition 5 leads to the following version of Proposition 2 for *t* an eigenvalue.

### Corollary 6

*k*. Let \(t^-<t<t^+\). If \(\mathcal {E}_t(A)\cap \mathcal {L}= \{0\}\), then

### Proof

*A* for all \(s\in (\tau ,t^-)\). Hence, as \(s - F_j(s)\) is continuous and \(\mathcal {H}\) is separable, this function should be constant in the segment \((\tau ,t^-)\). Moreover, due to monotonicity, for any \(s\in (\tau ,t^-)\), \(s+F_j(s)=t^-\). Hence if \(s\in (\tau ,t^-)\mapsto s - F_j(s)\) is constant (equal to some value, say *v*), then *s* is the midpoint between *t* and *v* for any \(s\in (\tau ,t^-)\). This contradicts the fact that \(\tau \ne t^-\). Hence

### 3.4 Approximated eigenspaces

We conclude this section by showing how to obtain certified information about spectral subspaces.

Our model is the implication (b) \(\Rightarrow \) (d) in Lemma 3. In a suitable asymptotic regime for \(\mathcal {L}\), the distance between these eigenfunctions and the spectral subspaces of \(|A-t|\) in the vicinity of the origin is controlled by a term which is as small as \(\mathcal {O}(\sqrt{F_j(t)-\mathfrak {d}_j(t)})\) for \(F_j(t)-\mathfrak {d}_j(t)\rightarrow 0\).

The following statement is independent, but it is clearly connected with classical results of Weinberger [32] and Trefftz [30]. Note that a shift parameter can be introduced in Weinberger’s formulation following [4].

### Proposition 7

Let *m* be as in Assumption 2. Let \(t\in \mathbb R\) and \(j\in \{1,\ldots ,m\}\) be fixed. Let \(\{u_j^t\}_{j=1}^n\subset \mathcal {L}\) be an orthonormal family of eigenfunctions associated to the eigenvalues \(\mu =F_j(t)\) of the weak problem (10). Suppose that \(F_j(t)-\mathfrak {d}_j(t)\) is small enough so that \(0<\varepsilon _j<1\) holds true in the following inductive construction,

### Proof

As is clear from the context, in this proof we suppress the index *t* on top of any vector. We write \(\Pi _\mathcal {S}\) to denote the orthogonal projection onto the subspace \(\mathcal {S}\) with respect to the inner product \(\langle \cdot ,\cdot \rangle \).

*A* is self-adjoint,

*j* up to *m* inductively as follows. Set

*j* up to \(k-1\). Define \(\mathcal {S}_k=\mathcal {E}_{\{t- \mathfrak {d}_k(t),t+ \mathfrak {d}_k(t)\}}\!(A)\ominus {\text {Span}}\{\phi _l\}_1^{k-1}\). We first show that \(\Pi _{\mathcal {S}_k}u_k\not =0\), and so we can define

### Remark 2

If \(t=\frac{\mathfrak {n}_j^-(t)+\mathfrak {n}_j^+(t)}{2}\) for a given *j*, the vectors \(\phi _j^t\) introduced in Proposition 7 (and invoked subsequently) might not be eigenvectors of *A* despite the fact that \(|A-t|\phi _j^t=\mathfrak {d}_j(t) \phi _j^t\). However, in any other circumstance \(\phi _j^t\) are eigenvectors of *A*.

## 4 Local bounds for eigenvalues

*t*. Below we will denote eigenfunctions associated with \(\tau ^\mp _{j}(t)\) by \(u_j^\mp (t)\).

Below we write most statements only for the case of “lower bounds for the eigenvalues of *A* which are to the left of *t*”. As the position of *t* relative to the essential spectrum is irrelevant here, evidently this does not restrict generality. The corresponding results regarding “upper bounds for the eigenvalues of *A* which are to the right of *t*” can be recovered by replacing *A* by \(-A\).

The left side of (14) ensures the existence of \(\tau ^-_1(t)\).

### Lemma 8

- (a\(^-\))
\(F_1(s)>t-s\) for all \(s<t\)

- (b\(^-\))
\(\frac{\langle Au,u\rangle }{\langle u,u\rangle }>t\) for all \(u\in \mathcal {L}\)

- (c\(^-\))
all the eigenvalues of (22) are positive.

### Remark 3

Let \(\mathcal {L}={\text {Span}}\{b_j\}_{j=1}^n\). The matrix \([q_t(b_j,b_k)]_{jk=1}^n\) is singular if and only if \(\mathcal {E}_t(A)\cap \mathcal {L}\ne \{0\}\). On the other hand, the kernel of (22) might be non-trivial. If \(n_0(t)\) is the dimension of this kernel and \(n_\infty (t)=\dim (\mathcal {E}_t(A)\cap \mathcal {L})\), then \(n=n_\infty (t)+n_0(t)+n^-(t)+n^+(t)\).

Note that \(n_\infty (t)\ge 1\) if and only if \(F_j(t)=0\) for \(j=1,\ldots ,n_{\infty }(t)\). In this case the conclusions of Lemma 9 and Theorem 10 below do not have any meaning. In order to write our statements in a more transparent fashion we use Assumption 1.

*t*, achievable by (13) in Proposition 2.

### 4.1 The eigenvalue to the immediate left of *t*

We begin with the case \(j=1\), see [23, Theorem 11].

### Lemma 9

### Proof

*u* for which equality is achieved is exactly \(u=u^-_1(t)\).

### 4.2 Subsequent eigenvalues

An extension of Lemma 9 to the case \(j\not =1\) is now found by induction.

### Theorem 10

*j* if and only if

### Proof

Recall that \(t\in \mathbb R\) and \(\mathcal {L}\) satisfy Assumption 1. For \(j=1\) the statements reduce to Lemma 9, taking (14) into consideration. For \(j>1\), due to the self-adjointness of the eigenproblem (22), it is enough to apply Lemma 9 again with \(\tilde{\mathcal {L}}=\mathcal {L}\ominus {\text {Span}}\{u^-_1(t),\ldots ,u^-_{j-1}(t) \}\) as the trial space. Note that the negative eigenvalues of (22) for the trial space \(\tilde{\mathcal {L}}\) are those of (22) for \(\mathcal {L}\) except for \(\tau _1^-(t),\ldots ,\tau _{j-1}^-(t)\). \(\square \)

A neat procedure for finding spectral bounds for *A*, as described in [35], can now be deduced from Theorem 10. By virtue of Proposition 2 and Remark 1, this procedure is optimal in the context of the approximated counting functions discussed in Sect. 3, see [23, Section 6]. We summarise the core statement as follows.

### Corollary 11

This corollary is an extension of the case \(j=1\) established in [23, Theorem 11]. In recent years, numerical techniques based on this statement (for \(j=1\)) have been developed to successfully compute eigenvalue bounds for the radially reduced magnetohydrodynamics operator [15, 35], for the Helmholtz equation [6] and for sloshing frequencies [5]. We show an implementation for the Maxwell operator with \(j\ge 1\) in Sect. 6. See also [3].
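In practice, this procedure amounts to assembling two matrices on the trial basis and solving a generalised eigenproblem. The following sketch is ours and rests on the assumption that problem (22) takes the weak form \(\langle (A-t)u,v\rangle =\tau \langle (A-t)u,(A-t)v\rangle \) for all \(v\in \mathcal {L}\), as in [35]; each negative eigenvalue \(\tau _j^-(t)\) then furnishes the lower bound \(t+1/\tau _j^-(t)\le \nu _j^-(t)\).

```python
import numpy as np

def zm_lower_bounds(t, A, L):
    """Lower bounds for the points of the spectrum strictly to the left of t,
    computed from the trial space spanned by the columns of L (sketch)."""
    R = (A - t*np.eye(len(A))) @ L
    M = L.T @ R                          # M_jk = <(A-t)b_k, b_j>, symmetric for A = A^T
    Q = R.T @ R                          # Q_jk = <(A-t)b_k, (A-t)b_j>
    w, U = np.linalg.eigh(Q)             # Assumption 1 guarantees Q > 0
    Qih = U @ np.diag(w**-0.5) @ U.T     # Q^{-1/2}
    tau = np.linalg.eigvalsh(Qih @ M @ Qih)
    neg = np.sort(tau[tau < 0])          # tau_1^-(t) <= ... < 0
    return t + 1.0/neg                   # j-th entry bounds nu_j^-(t) from below

A = np.diag([0.0, 1.0, 2.5, 4.0])
L = np.eye(4)                            # trial space = exact eigenvectors
bounds = zm_lower_bounds(2.0, A, L)      # exact here: [1.0, 0.0]
```

With a trial space spanning exact eigenvectors the bounds are attained, consistent with the optimality discussed after Remark 1; on genuine finite element spaces they are one-sided and converge at the rate given in Sect. 5.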

## 5 Convergence and error estimates

*A* within a certain order of precision \(\mathcal {O}(\varepsilon )\) as specified below, then the residuals

- (a)
\(\mathcal {O}(\varepsilon )\) for any \(t\in \mathbb R\),

- (b)
\(\mathcal {O}(\varepsilon ^2)\) for \(t\not \in \sigma (A)\).

*j* runs from 1 to *m*. From Assumption 2 it follows that the family \(\{\phi _j^s\}_{j=1}^m\subset \mathcal {E}_{[t-\mathfrak {d}_m(t),t+\mathfrak {d}_m(t)]}(A)\) and the family \(\{w_j^s\}_{j=1}^m\subset \mathcal {L}\) above can always be chosen piecewise constant for *s* in a neighbourhood of *t*. Moreover, they can be chosen so that jumps only occur at \(s\in \sigma (A)\).

A set \(\{w_j^t\}_{j=1}^m\) subject to (A\(_\mathrm{0}\))–(A\(_\mathrm{1}\)) is not generally orthonormal. However, according to the next lemma, it can always be substituted by an orthonormal set, provided \(\varepsilon _j\) is small enough.

### Lemma 12

*t* with jumps only at the spectrum of *A*.

### Proof

*t* on top of any vector. The desired conclusion is achieved by applying the Gram–Schmidt procedure. Let \(G=[\langle w_k,w_l\rangle ]_{kl=1}^m \in \mathbb {C}^{m\times m}\) be the Gram matrix associated to \(\{w_j\}\). Set
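The Gram-matrix step in this proof can be realised numerically in more than one way. The sketch below is ours and uses the symmetric orthonormalisation \(W G^{-1/2}\): it returns an orthonormal family spanning the same space which stays close to the original vectors when they are already nearly orthonormal, which is the situation of Lemma 12.

```python
import numpy as np

def orthonormalise(W):
    """Replace the columns of W by an orthonormal family spanning the same
    space, via the inverse square root of the Gram matrix."""
    G = W.T @ W                          # Gram matrix G_kl = <w_k, w_l>
    w, U = np.linalg.eigh(G)
    return W @ (U @ np.diag(w**-0.5) @ U.T)

rng = np.random.default_rng(0)
W = np.eye(4)[:, :2] + 1e-3*rng.standard_normal((4, 2))   # nearly orthonormal
V = orthonormalise(W)
```

The output is orthonormal to machine precision and differs from the input by a quantity of the same order as the initial perturbation.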

### 5.1 Convergence of the approximated local counting function

The next theorem addresses the claim (a) made at the beginning of this section. According to Lemma 12, in order to examine the asymptotic behaviour of \(F_j(t)\) as \(\varepsilon _j\rightarrow 0\) under the constraints (A\(_\mathrm{0}\))–(A\(_\mathrm{1}\)), without loss of generality the trial vectors \(w_j^t\) can be assumed to form an orthonormal set in the inner product \(\langle \cdot ,\cdot \rangle \).

### Theorem 13

### Proof

In terms of order of approximation, Theorem 13 will be superseded by Theorem 14 for \(t\not \in \sigma (A)\). However, if \(t\in \sigma (A)\), the trial space \(\mathcal {L}\) can be chosen so that \(F_1(t)-\mathfrak {d}_1(t)\) is only linear in \(\varepsilon _1\). Indeed, fixing any non-zero \(u\in {\text {D}}(A)\) and \(\mathcal {L}={\text {Span}}\{u\}\), yields \(F_1(t)-\mathfrak {d}_1(t)=F_1(t)=\varepsilon _1\). Therefore Theorem 13 is optimal, on the presumption that *t* is arbitrary.

The next theorem addresses the claim (b) made at the beginning of this section. Its proof is reminiscent of that of [29, Theorem 6.1].

### Theorem 14

### Proof

*t*, as \(P_\mathcal {L}\) does. We first show that, under hypothesis (27), \(\mu _\mathcal {L}^j(t)<\frac{1}{2}\). Indeed, given \(\phi \in \mathcal {F}_j\) we decompose it as \(\phi =\sum _{k=1}^j c_k \phi _k\). Then

As the next corollary shows, for \(t\in \sigma (A)\) a quadratic order of decrease of \(F_j(t)-\mathfrak {d}_j(t)\) is prevented (in the context of Theorems 13 and 14) only for *j* up to \(\dim \mathcal {E}_{t}(A)\).

### Corollary 15

*k* ensuring the following. If (A\(_\mathrm{1}\)) holds true for \(\sqrt{\sum _{j=1}^m \varepsilon _j^2}<\varepsilon \), then

### Proof

Without loss of generality we assume that \(t+\mathfrak {d}_k(t)\in \sigma (A)\). Otherwise \(t-\mathfrak {d}_k(t)\in \sigma (A)\) and the proof is analogous to the one presented below.

### 5.2 Convergence of local bounds for eigenvalues

Our next task in this section is to formulate precise statements on the convergence of the method of Zimmermann and Mertins (Sect. 4). Theorem 16 below improves upon two crucial aspects of a similar result established in [15, Lemma 2]. It allows \(j>1\) and it allows \(t\in \sigma (A)\). These two improvements are essential in order to obtain sharp bounds for those eigenvalues which are either degenerate or form a tight cluster.

### Remark 4

The constants \(\tilde{\varepsilon }_t\) and \(C_t^\pm \) below do have a dependence on *t*. This dependence can be determined explicitly from Theorem 14, Corollary 15 and the proof of Theorem 16. Despite the fact that these constants can deteriorate as *t* approaches the isolated eigenvalues of *A* and they can have jumps precisely at these points, they may be chosen independent of *t* on compact sets outside the spectrum.

### Remark 5

We regard the following as one of the main results of this work.

### Theorem 16

*A* such that \({\text {Span}}\{\phi _k\}_{k=1}^{\tilde{m}}=\mathcal {E}_J(A)\). For fixed \(t\in J\) such that Assumption 1 is satisfied, there exist \(\tilde{\varepsilon }_t>0\) and \(C^{-}_t>0\) independent of the trial space \(\mathcal {L}\), ensuring the following. If there are \(\{w_j\}_{j=1}^{\tilde{m}}\subset \mathcal {L}\) such that

### Proof

The hypotheses ensure that the number of indices \(j\le n^-(t)\) such that \(\nu ^-_j(t)\in J\) never exceeds \(\tilde{m}\). Therefore this condition in the conclusion of the theorem is consistent.

*j* be such that \(\nu ^-_j(t)\in J\). Since \( \nu _j^-(t)-(\alpha +t) \le (t+\alpha )- \nu _1^-(t) \) for all \(\alpha \) such that \(\frac{\nu _j^-(t)+\nu _1^-(t)}{2}-t\le \alpha \le 0\), then

*g* is an increasing function of \(\alpha \) and \(g(0)=F_j(t)>0\). For the strict inequality in the latter, recall Assumption 1. Moreover, according to (39),

### 5.3 Convergence to eigenfunctions

We conclude this section with a statement on convergence to eigenfunctions.

### Corollary 17

*A* such that \({\text {Span}}\{\phi _k\}_{k=1}^{\tilde{m}}=\mathcal {E}_J(A)\). For fixed \(t\in J\), there exist \(\tilde{\varepsilon }_t>0\) and \(C^{\pm }_t>0\) independent of the trial space \(\mathcal {L}\), ensuring the following. If there are \(\{w_j\}_{j=1}^{\tilde{m}}\subset \mathcal {L}\) such that (38) holds, then for all \(j\le n^\pm (t)\) such that \(\nu _j^\pm (t)\in J\) we can find \(\psi _j^{\varepsilon \pm }\in \mathcal {E}_{\{\nu _j^-(t),\nu _j^+(t)\}}(A)\) satisfying

### Proof

Fix \(t\in J\). According to Theorem 10, \( u^\pm _j(t)=u^{\hat{t}^\pm _j}_j \) in the notation for eigenvectors employed in Proposition 7. The claimed conclusion is a consequence of the latter combined with Theorem 14 or Corollary 15, as appropriate. \(\square \)

### Remark 6

Once again, we remark that the vectors in the statement of the corollary can be chosen locally constant in *t* with jumps only at the spectrum of *A*.

## 6 Implementations to the Maxwell eigenvalue problem

The method of Zimmermann and Mertins can be applied to a large variety of self-adjoint operators. Of particular interest are the operators which are not bounded below or above. A significant class of block operator matrices [31] which are highly relevant in applications fall into this category and are covered by the present framework. In order to illustrate our findings in this setting, we now apply the method of Zimmermann and Mertins to the Maxwell operator. This operator has been extensively studied in the last few years with a special emphasis on the spectral pollution phenomenon.

The orthogonal complement of this subspace is the gradient space, which has infinite dimension and lies in the kernel of the eigenvalue equation (40). Here we use the term “kernel” to refer to the solution space of the eigenvalue problem associated with \(\omega =0\). In turn, this means that the restricted problem (40)–(41) and the unrestricted problem (40) have the same non-zero spectrum and the same corresponding eigenspaces.

The numerical estimation of the eigenfrequencies of (40)-(41) is known to be extremely challenging for general regions \(\Omega \). The operator \(\mathcal {M}\) does not have a compact resolvent and it is strongly indefinite. If we consider, instead, the problem (40)-(41), this would lead to a formulation involving an operator with a compact resolvent (due to (41)), but the problem would still be strongly indefinite. By considering the square of \(\mathcal {M}\) on the solenoidal subspace, one obtains a positive definite eigenvalue problem (involving the bi-curl) which can be discretised via the Galerkin method. However, a serious drawback of this idea for practical computations is that the standard finite element spaces are not solenoidal. Usually, spurious modes associated to the infinite-dimensional kernel appear and give rise to spectral pollution. This has been well documented and it is known to be a manifest problem when the underlying mesh is unstructured, see [2, 11] and references therein.

Various ingenious methods, e.g. [7, 11, 12, 13, 14, 16, 17, 27], capable of approximating the eigenvalues of (40) by means of the finite element method have been documented in the past. In all the above-cited works, either a particular choice of finite element spaces, or an appropriate modification of the weak formulation of the problem, has to be performed prior to the computation of the eigenvalues.

The method of Zimmermann and Mertins does not need to introduce any prior change to the problem at hand in order to find eigenvalue bounds for \(\mathcal {M}\). We can even pick \(\mathcal {L}\) made of Lagrange finite elements on unstructured meshes. Convergence and absence of spectral pollution are guaranteed by Corollary 11 and Theorem 16. Our purpose below is only to illustrate the context of the theory presented in the previous sections. A more comprehensive numerical investigation of this model, including the case of anisotropic media, has been conducted in [3].

The only hypothesis required in the analysis carried out in Sect. 5, ensuring that the \(\omega ^\pm _j\) are close to \(\omega _j\), is that the trial spaces capture well the eigenfunctions in the graph norm of \({\text {D}}(\mathcal M)\), that is the \([\mathcal {H}({\text {curl}},\Omega )]^2\)-norm; see (38). Since we have substantial freedom in choosing these spaces, we have picked the Lagrange nodal elements, which constitute the simplest alternative.

That is, we assume that the eigenfunctions are regular enough to be approximated by finite elements of mesh size *h*, such that the right-hand side of (38) decreases as *h* decreases.

This regularity assumption on the corresponding vector spaces can be formulated in different ways in order to suit the chosen algorithm. For the one employed here, if we wish to obtain a lower/upper bound for the *j*-th eigenvalue to the left/right of a fixed *t* (and consequently obtain approximate eigenvectors), all the vectors in the sum of the eigenspaces up to the *j*-th have to be regular. If by some misfortune an intermediate eigenspace does not fulfil this requirement, then the algorithm will converge slowly. To circumvent this difficulty, the computational procedure can be modified in many ways. For instance, it can be allowed to split the initial interval iteratively, once it becomes clear that a prescribed accuracy cannot be achieved after a fixed number of steps. See [3, Procedure 1].
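The interval-splitting idea can be sketched generically. The following is only a hypothetical illustration of the strategy just described, not the actual [3, Procedure 1]; here `compute_gap` stands for an unspecified routine returning the width of the eigenvalue enclosure on \((a,b)\) after a given refinement step.

```python
# Hypothetical sketch: if a two-sided enclosure on (a, b) fails to reach the
# target accuracy tol within max_steps refinements, bisect the interval and
# recurse, up to a maximum recursion depth.
def enclose(a, b, compute_gap, tol, max_steps, depth=0, max_depth=10):
    for step in range(max_steps):
        gap = compute_gap(a, b, step)   # width of the enclosure on (a, b)
        if gap <= tol:
            return [(a, b, gap)]        # accuracy reached on this interval
    if depth >= max_depth:
        return [(a, b, gap)]            # give up; report best enclosure found
    mid = (a + b) / 2                   # split the interval and recurse
    return (enclose(a, mid, compute_gap, tol, max_steps, depth + 1, max_depth)
            + enclose(mid, b, compute_gap, tol, max_steps, depth + 1, max_depth))
```

In practice, each call to `compute_gap` would involve solving the weak eigenvalue problem of Sect. 5 on a refined trial space; the sketch only records the control flow.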

### 6.1 Order of convergence on a cube

The eigenfunctions of (40) are regular in the interior of a convex domain. In this case, the method of Zimmermann and Mertins for the resonant cavity problem achieves an optimal order of convergence in the context of finite elements.

The positive eigenvalues of (40) on the cube are of the form \(\sqrt{l^2+m^2+n^2}\) for non-negative integer indices (*l*, *m*, *n*), at most one of which vanishes. That is, for example, \(\omega _1=\sqrt{2}\) (the first positive eigenvalue) has multiplicity 3, corresponding to the indices \(\{(1,1,0),(0,1,1),(1,0,1)\}\), each of them contributing one dimension to the eigenspace. However, \(\omega _2=\sqrt{3}\) (the second positive eigenvalue), corresponding to the index \((1,1,1)\), has multiplicity 2, determined by \(\underline{\alpha }\) ranging over a plane.
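The multiplicities can be tabulated with a short script. This sketch assumes, consistently with the examples above but not stated there in full generality, that an index with all entries non-zero contributes a two-dimensional eigenspace (\(\underline{\alpha }\) on a plane), while an index with exactly one zero entry contributes one dimension.

```python
from math import sqrt
from collections import defaultdict

def maxwell_multiplicities(max_index):
    """Multiplicities of the positive cavity eigenvalues sqrt(l^2+m^2+n^2),
    assuming: all-nonzero index -> 2 dimensions, one zero entry -> 1 dimension."""
    mult = defaultdict(int)
    for l in range(max_index + 1):
        for m in range(max_index + 1):
            for n in range(max_index + 1):
                if (l, m, n).count(0) > 1:
                    continue  # at most one index may vanish
                mult[l * l + m * m + n * n] += 2 if 0 not in (l, m, n) else 1
    return {sqrt(k): v for k, v in sorted(mult.items())}

mults = maxwell_multiplicities(3)
# omega_1 = sqrt(2) has multiplicity 3, omega_2 = sqrt(3) has multiplicity 2
```

This reproduces the multiplicities quoted above and gives a reference list against which the computed enclosures can be checked.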

### 6.2 Benchmark eigenvalue bounds for the Fichera domain

In this next experiment we consider the region \(\Omega =\Omega _{\mathrm {F}}=(0,\pi )^3 {\setminus } [0,\pi /2]^3\). Some of the eigenvalues can be obtained by domain decomposition, and the corresponding eigenfunctions are regular. For example, eigenfunctions on the cube of side \(\pi /2\) can be assembled in a suitable fashion to build eigenfunctions on \(\Omega _{\mathrm {F}}\). Therefore the set \(\{\pm 2\sqrt{l^2+m^2+n^2}\}\), where no two indices vanish simultaneously, certainly lies inside \(\sigma (\mathcal M)\). The smallest positive eigenvalue in this set is \(2\sqrt{2}\).
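The known subset of the spectrum just described is straightforward to enumerate; the following sketch lists its positive part up to a chosen index bound.

```python
from math import sqrt

def known_fichera_eigenvalues(max_index):
    """Positive part of the subset {±2*sqrt(l^2+m^2+n^2)} of sigma(M) on the
    Fichera domain, where no two of the non-negative integer indices
    l, m, n vanish simultaneously (obtained by domain decomposition)."""
    vals = {2 * sqrt(l * l + m * m + n * n)
            for l in range(max_index + 1)
            for m in range(max_index + 1)
            for n in range(max_index + 1)
            if (l, m, n).count(0) <= 1}
    return sorted(vals)

known = known_fichera_eigenvalues(3)  # smallest entry is 2*sqrt(2)
```

These values serve as exact references for some of the computed enclosures; the remaining eigenvalues of \(\mathcal M\) on \(\Omega _{\mathrm {F}}\) are not captured by this decomposition.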

The slight numerical discrepancy shown in the table for the seemingly multiple eigenvalues appears to be a consequence of the fact that the meshes employed are not symmetric with respect to permutations of the spatial coordinates. See [3, Section 6.2] for more details.

## Acknowledgments

We kindly thank Michael Levitin and Stefan Neuwirth for their suggestions during the preparation of this manuscript. We also thank Université de Franche-Comté, University College London and the Isaac Newton Institute for Mathematical Sciences for their hospitality. Funding was provided by MOPNET, the British-French project PHC Alliance (22817YA), the British Engineering and Physical Sciences Research Council (EP/I00761X/1) and the French Ministry of Research (ANR-10-BLAN-0101).

## Copyright information

**Open Access**This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.