1 Introduction

Several schemes for the approximation of eigenvalue problems arising from partial differential equations lead to the algebraic form: find \(\lambda \in {\mathbb {R}}\) and \(x\in {\mathbb {R}}^n\) with \(x\ne 0\) such that

$$\begin{aligned} {\mathsf {A}}x=\lambda {\mathsf {B}}x, \end{aligned}$$
(1)

where \({\mathsf {A}}\) and \({\mathsf {B}}\) are matrices in \({\mathbb {R}}^{n\times n}\).

We consider the case when the matrices \({\mathsf {A}}\) and \({\mathsf {B}}\) are symmetric and positive semidefinite and may depend on a parameter. This is a typical situation in applications where elliptic partial differential equations are approximated by schemes that require suitable parameters to be tuned (for consistency and/or stability reasons). In this paper we discuss in particular applications arising from the use of the Virtual Element Method (VEM), see [5, 13, 15, 16, 20, 21, 22], where suitable parameters have to be chosen for the correct approximation. Similar situations arise, for instance, when a parameter-dependent stabilization is used for the approximation of discontinuous Galerkin formulations and when a penalty term is added to the discretization of the eigenvalue problem associated with Maxwell's equations [2, 6, 7, 8, 10, 11, 12, 23, 26].

In general, it may not be immediate to describe how the matrices \({\mathsf {A}}\) and \({\mathsf {B}}\) depend on the given parameters. For simplicity, we consider the case when the dependence is linear: under suitable assumptions it is then easy to discuss how the computed spectrum varies with respect to the parameters.

The description of the spectrum in the linear case is not surprising and is well known to a broad scientific community [14, 18, 19, 24, 25]. Nevertheless, perturbation theory for eigenvalues and eigenvectors usually focuses on the asymptotic behavior as the parameters tend to zero. In our case, the asymptotic parameter is the mesh size h and we are interested in the convergence as h goes to zero, that is, as the size of the involved matrices tends to infinity. The presence of additional parameters makes the convergence more difficult to describe and can produce unexpected results in the pre-asymptotic regime. For this reason, we start by recalling how the spectrum of problem (1) is influenced by the parameters, without considering h, and we then translate those results to an example of interest in Sect. 3.1, where the discretization parameter h is considered as well.

We assume that the matrices \({\mathsf {A}}\) and \({\mathsf {B}}\) satisfy the following condition for \({\mathsf {C}}={\mathsf {A}},{\mathsf {B}}\).

Assumption 1

The matrix \({\mathsf {C}}\) can be split into the sum

$$\begin{aligned} {\mathsf {C}}={\mathsf {C}}_1+\gamma {\mathsf {C}}_2, \end{aligned}$$
(2)

where \(\gamma\) is a nonnegative real number and \({\mathsf {C}}_1\) and \({\mathsf {C}}_2\) are symmetric. The matrices \({\mathsf {C}}_1\) and \({\mathsf {C}}_2\) satisfy the following properties:

(a) \({\mathsf {C}}_1\) is positive semidefinite with kernel \(K_{{\mathsf {C}}_1}\);

(b) \({\mathsf {C}}_2\) is positive semidefinite and positive definite on \(K_{{\mathsf {C}}_1}\);

(c) \({\mathsf {C}}_2\) vanishes on \(K_{{\mathsf {C}}_1}^\perp\), the orthogonal complement of \(K_{{\mathsf {C}}_1}\) in \({\mathbb {R}}^n\).

In Sect. 2 we describe the spectrum of (1) as a function of the parameters, in various situations that mimic the behavior of matrices \({\mathsf {A}}\) and \({\mathsf {B}}\) originating from several discretization schemes.

Section 3, which is the core of this paper, discusses the influence of the parameters on the VEM approximation of eigenvalue problems. Several numerical examples complete the paper, showing that the parameters have to be carefully tuned and that wrong choices can produce useless results.

2 Parametric algebraic eigenvalue problem

Given two symmetric and positive semidefinite matrices \({\mathsf {A}}\) and \({\mathsf {B}}\) that can be written as

$$\begin{aligned} {\mathsf {A}}={\mathsf {A}}_1+\alpha {\mathsf {A}}_2\end{aligned}$$
(3)

and

$$\begin{aligned} {\mathsf {B}}={\mathsf {B}}_1+\beta {\mathsf {B}}_2, \end{aligned}$$
(4)

with nonnegative parameters \(\alpha\) and \(\beta\), we consider the eigensolutions to the generalized problem (1).

We assume that the splitting of the matrices \({\mathsf {A}}\) and \({\mathsf {B}}\) is obtained with symmetric matrices and satisfies Assumption 1 for \({\mathsf {C}}_1={\mathsf {A}}_1,{\mathsf {B}}_1\) and \({\mathsf {C}}_2={\mathsf {A}}_2,{\mathsf {B}}_2\). Moreover, we denote by \(n_{{\mathsf {A}}_1}\) and \(n_{{\mathsf {B}}_1}\) the dimensions of \(K_{{\mathsf {A}}_1}\) and \(K_{{\mathsf {B}}_1}\), respectively.

Remark 1

Problem (1) has n eigenvalues if and only if \(\mathrm {rank}\,{\mathsf {B}}=n\), see [17]. If \({\mathsf {B}}\) is singular the spectrum can be finite, empty, or infinite (if \({\mathsf {A}}\) is singular too). If \({\mathsf {A}}\) is nonsingular, one can usually circumvent this difficulty by computing the eigenvalues of \({\mathsf {B}}x=\mu {\mathsf {A}}x\) and setting \(\lambda =1/\mu\). The kernel of \({\mathsf {B}}\) is the eigenspace associated with the vanishing eigenvalue \(\mu =0\) of multiplicity m, and the original problem has exactly m eigenvalues conventionally set to \(\infty\).
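The following minimal sketch (illustrative only, in Python/SciPy; the computations in this paper were done with Matlab's eig) shows the reciprocal trick on a \(3\times 3\) pencil with singular \({\mathsf {B}}\):

```python
# Solve B x = mu A x with A nonsingular and recover lambda = 1/mu;
# the vanishing mu's give the eigenvalues conventionally set to infinity.
import numpy as np
from scipy.linalg import eig

A = np.diag([3.0, 4.0, 5.0])      # nonsingular
B = np.diag([1.0, 1.0, 0.0])      # singular, rank 2: one infinite eigenvalue

mu = eig(B, A, right=False).real  # symmetric pencil: real eigenvalues
with np.errstate(divide="ignore"):
    lam = np.sort(1.0 / mu)       # mu = 0 maps to lambda = inf
print(lam)                        # [ 3.  4. inf]
```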

We want to study the behavior of the eigenvalues as the parameters \(\alpha\) and \(\beta\) vary. We consider three cases.

2.1 Case 1

We fix \(\beta >0\) so that \({\mathsf {B}}\) is positive definite. This implies that the eigenvalues of (1) are all nonnegative. Let us first consider \(\alpha =0\), so that (1) reduces to

$$\begin{aligned} {\mathsf {A}}_1x=\lambda {\mathsf {B}}x. \end{aligned}$$
(5)

Since \({\mathsf {A}}_1\) is positive semidefinite, \(\lambda =0\) is an eigenvalue of (5) with multiplicity equal to \(n_{{\mathsf {A}}_1}=\dim (K_{{\mathsf {A}}_1})\) and \(K_{{\mathsf {A}}_1}\) is the associated eigenspace. In addition, we have \(m_A=n-n_{{\mathsf {A}}_1}\) positive eigenvalues \(\{\mu _1\le \dots \le \mu _{m_A}\}\) counted with their multiplicity (since we are dealing with a symmetric problem, we do not distinguish between geometric and algebraic multiplicity). We denote by \(v_j\in K_{{\mathsf {A}}_1}^\perp\) the eigenvector associated with \(\mu _j\), that is

$$\begin{aligned} {\mathsf {A}}_1v_j=\mu _j {\mathsf {B}}v_j. \end{aligned}$$

Thanks to property (c) of Assumption 1 with \({\mathsf {C}}={\mathsf {A}}\), we observe that

$$\begin{aligned} {\mathsf {A}}v_j={\mathsf {A}}_1v_j+\alpha {\mathsf {A}}_2v_j={\mathsf {A}}_1v_j=\mu _j{\mathsf {B}} v_j. \end{aligned}$$

Therefore \((\mu _j,v_j)\), for \(j=1,\ldots ,m_A\), are eigensolutions of the original system (1).

On the other hand, the eigensolutions of

$$\begin{aligned} {\mathsf {A}}_2w=\nu {\mathsf {B}}w \end{aligned}$$

are characterized by the fact that \(n_{{\mathsf {A}}_1}\) eigenvalues \(\nu _i\) (\(i=1,\ldots ,n_{{\mathsf {A}}_1}\)) are strictly positive, with corresponding eigenvectors \(w_i\) belonging to \(K_{{\mathsf {A}}_1}\), while the remaining \(m_A\) eigenvalues vanish and have \(K_{{\mathsf {A}}_1}^\perp\) as eigenspace. Thus, property (a) of Assumption 1, with \({\mathsf {C}}={\mathsf {A}}\), yields

$$\begin{aligned} {\mathsf {A}}w_i={\mathsf {A}}_1w_i+\alpha {\mathsf {A}}_2w_i=\alpha {\mathsf {A}}_2w_i=\alpha \nu _i {\mathsf {B}}w_i, \end{aligned}$$

which means that \((\alpha \nu _i,w_i)\), for \(i=1,\ldots ,n_{{\mathsf {A}}_1}\), are eigensolutions of (1).

Summarizing, the eigenvalues of (1) are:

$$\begin{aligned} \lambda _k= \left\{ \begin{array}{ll} \alpha \nu _k &{} \quad {\text {if }}1\le k\le n_{{\mathsf {A}}_1}\\ \mu _{k-n_{{\mathsf {A}}_1}} &{} \quad {\text {if }} n_{{\mathsf {A}}_1}+1\le k\le n. \end{array} \right. \end{aligned}$$
(6)

The left panel in Fig. 1 shows the eigenvalues of a simple example where \({\mathsf {A}}\in {\mathbb {R}}^{6\times 6}\) is obtained by combining the diagonal matrices with entries

$$\begin{aligned} {{\,\mathrm{diag}\,}}({\mathsf {A}}_1)=[3,4,5,6,0,0],\quad {{\,\mathrm{diag}\,}}({\mathsf {A}}_2)=[0,0,0,0,1,2], \end{aligned}$$
(7)

and \({\mathsf {B}}={\mathbb {I}}_6\) is the identity matrix.

Along the vertical lines we see the eigenvalues corresponding to a fixed value of \(\alpha\). The eigenvalues 3, 4, 5, 6 are associated with eigenvectors in \(K_{{\mathsf {A}}_1}^\perp\) and do not depend on \(\alpha\). The solid lines starting at the origin display the eigenvalues 1, 2 multiplied by \(\alpha\).
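This example is easy to reproduce; the following sketch (illustrative, in Python/SciPy) verifies (6) for the matrices in (7):

```python
# Case 1 check for the diagonal example (7), with B the 6x6 identity:
# four eigenvalues stay at 3, 4, 5, 6; two move along alpha*1 and alpha*2.
import numpy as np
from scipy.linalg import eigh

A1 = np.diag([3.0, 4.0, 5.0, 6.0, 0.0, 0.0])
A2 = np.diag([0.0, 0.0, 0.0, 0.0, 1.0, 2.0])

for alpha in [0.1, 0.5, 1.0, 2.0]:
    lam = eigh(A1 + alpha * A2, eigvals_only=True)
    print(alpha, np.round(np.sort(lam), 3))
# e.g. alpha = 0.5 gives [0.5, 1.0, 3.0, 4.0, 5.0, 6.0]: the branches
# alpha*nu_k together with the alpha-independent mu_j, as predicted by (6)
```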

Fig. 1

Dependence of the eigenvalues on the parameters \(\alpha\) (Case 1) and \(\beta\) (Case 2), respectively

Remark 2

We observe that if \({\mathsf {A}}_2\) is not positive definite on \(K_{{\mathsf {A}}_1}\), then its kernel has a nontrivial intersection with \(K_{{\mathsf {A}}_1}\). Let \(n_{12}\) be the dimension of this intersection; then problem (1) admits \(n_{12}\) vanishing eigenvalues, which appear in the first case of (6).

2.2 Case 2

Let us now fix \(\alpha >0\), so that \({\mathsf {A}}\) is positive definite. Then all the eigenvalues are positive. We observe that when \(\beta =0\) the matrix \({\mathsf {B}}={\mathsf {B}}_1\) may be singular; therefore it is convenient to consider the following problem:

$$\begin{aligned} {\mathsf {B}}x=\chi {\mathsf {A}}x, \end{aligned}$$
(8)

where \(\chi =\frac{1}{\lambda }\). If \(\chi =0\), we conventionally set \(\lambda =\infty\). Problem (8) reproduces the same situation we had in Case 1, with the matrices \({\mathsf {A}}\) and \({\mathsf {B}}\) switched. Repeating the same arguments as before, we obtain that problem (8) has two families of eigenvalues

$$\begin{aligned} \chi _k= \left\{ \begin{array}{ll} \beta \xi _k &{} \quad {\text {if }}1\le k\le n_{{\mathsf {B}}_1}\\ \zeta _{k-n_{{\mathsf {B}}_1}} &{} \quad {\text {if }} n_{{\mathsf {B}}_1}+1\le k\le n, \end{array} \right. \end{aligned}$$

where

$$\begin{aligned} {\mathsf {B}}_1r_j&= \zeta _j {\mathsf {A}} r_j,\ j=1,\ldots ,n-n_{{\mathsf {B}}_1}{\text { with }}r_j\in K_{{\mathsf {B}}_1}^\perp \\ {\mathsf {B}}_2s_i&= \xi _i{\mathsf {A}}s_i,\ i=1,\ldots ,n_{{\mathsf {B}}_1}{\text { with }}s_i\in K_{{\mathsf {B}}_1}. \end{aligned}$$

Going back to the original problem (1), we can conclude that the eigensolutions of (1) are the following ones:

$$\begin{aligned}&\left( \frac{1}{\beta \xi _k},s_k\right) {\text { for }} k=1,\ldots ,n_{{\mathsf {B}}_1}\nonumber \\&\quad \left( \frac{1}{\zeta _{k-n_{{\mathsf {B}}_1}}},r_{k-n_{{\mathsf {B}}_1}}\right) {\text { for }}k=n_{{\mathsf {B}}_1}+1,\ldots ,n. \end{aligned}$$
(9)

In the right panel of Fig. 1, we report the eigenvalues of a simple example where \({\mathsf {A}}={\mathbb {I}}_6\) and \({\mathsf {B}}\) is obtained by combining \({\mathsf {B}}_1={\mathsf {A}}_1\) and \({\mathsf {B}}_2={\mathsf {A}}_2\) defined in (7). We see that the eigenvalues \(\frac{1}{3},\frac{1}{4},\frac{1}{5},\frac{1}{6}\) are independent of \(\beta\) and that the remaining two eigenvalues lie along the hyperbolas \(\frac{1}{\beta }\) and \(\frac{1}{2\beta }\), plotted with solid lines.
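A few analogous lines (again an illustrative Python/SciPy sketch) reproduce the right panel of Fig. 1 and formula (9):

```python
# Case 2 check with A = I_6 and B = B1 + beta*B2 (B1, B2 as A1, A2 in (7)):
# four eigenvalues are fixed at 1/3, ..., 1/6; two follow 1/beta, 1/(2*beta).
import numpy as np
from scipy.linalg import eigh

B1 = np.diag([3.0, 4.0, 5.0, 6.0, 0.0, 0.0])
B2 = np.diag([0.0, 0.0, 0.0, 0.0, 1.0, 2.0])

for beta in [0.5, 1.0, 2.0]:
    lam = eigh(np.eye(6), B1 + beta * B2, eigvals_only=True)
    print(beta, np.round(np.sort(lam), 4))
```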

2.3 Case 3

We now consider the case when \(\alpha\) and \(\beta\) can vary independently of each other. Different situations occur depending on the relation between \(K_{{\mathsf {A}}_1}\) and \(K_{{\mathsf {B}}_1}\). To ease the reading, let us introduce the following notation:

$$\begin{aligned} {\mathsf {A}}_1v&= \mu {\mathsf {B}}_1v \end{aligned}$$
(10a)
$$\begin{aligned} {\mathsf {A}}_1w&= \nu {\mathsf {B}}_2w \end{aligned}$$
(10b)
$$\begin{aligned} {\mathsf {A}}_2y&= \chi {\mathsf {B}}_1y \end{aligned}$$
(10c)
$$\begin{aligned} {\mathsf {A}}_2z&= \eta {\mathsf {B}}_2z. \end{aligned}$$
(10d)

In this case the space \({\mathbb {R}}^n\) can be decomposed into four mutually orthogonal subspaces

$$\begin{aligned} {\mathbb {R}}^n=(K_{{\mathsf {A}}_1}\cap K_{{\mathsf {B}}_1})\oplus (K_{{\mathsf {A}}_1}\cap K_{{\mathsf {B}}_1}^\perp )\oplus (K_{{\mathsf {A}}_1}^\perp \cap K_{{\mathsf {B}}_1})\oplus (K_{{\mathsf {A}}_1}^\perp \cap K_{{\mathsf {B}}_1}^\perp ). \end{aligned}$$

Let us denote by \(n_{{\mathsf {A}}_1\cap {\mathsf {B}}_1}\) the dimension of \(K_{{\mathsf {A}}_1}\cap K_{{\mathsf {B}}_1}\). If \(K_{{\mathsf {A}}_1}\cap K_{{\mathsf {B}}_1}\ne \{0\}\), for \(x\in K_{{\mathsf {A}}_1}\cap K_{{\mathsf {B}}_1}\) the eigenproblem to be solved is \(\alpha {\mathsf {A}}_2x=\lambda \beta {\mathsf {B}}_2x\), hence the eigenvalues are given by \(\frac{\alpha }{\beta }\eta _i\), \(i=1,\ldots ,n_{{\mathsf {A}}_1\cap {\mathsf {B}}_1}\), see (10d). Next, if \(x\in K_{{\mathsf {A}}_1}\cap K_{{\mathsf {B}}_1}^\perp\) we have to solve \(\alpha {\mathsf {A}}_2x=\lambda {\mathsf {B}}_1x\), which admits \((\alpha \chi _i,y_i)\), \(i=1,\ldots ,n_{{\mathsf {A}}_1}-n_{{\mathsf {A}}_1\cap {\mathsf {B}}_1}\), as eigensolutions, where \((\chi _i,y_i)\) are defined in (10c). Similarly, if \(x\in K_{{\mathsf {A}}_1}^\perp \cap K_{{\mathsf {B}}_1}\), we find that the eigensolutions are \(\left( \frac{1}{\beta }\nu _i,w_i\right)\), \(i=1,\ldots ,n_{{\mathsf {B}}_1}-n_{{\mathsf {A}}_1\cap {\mathsf {B}}_1}\), with \((\nu _i,w_i)\) given by (10b). In the last case, \(x\in K_{{\mathsf {A}}_1}^\perp \cap K_{{\mathsf {B}}_1}^\perp\), the matrices \({\mathsf {A}}\) and \({\mathsf {B}}\) are nonsingular and, thanks to property (c) in Assumption 1 for \({\mathsf {C}}={\mathsf {A}}\) and \({\mathsf {C}}={\mathsf {B}}\), we obtain that the eigenvalues are positive, independent of \(\alpha\) and \(\beta\), and correspond to those of (10a). In conclusion, we have

$$\begin{aligned} \lambda _k=\left\{ \begin{array}{ll} \displaystyle \frac{\alpha }{\beta }\eta _k &{}\quad {\text { if }}1\le k\le n_{{\mathsf {A}}_1\cap {\mathsf {B}}_1}\\ \alpha \chi _{k-n_{{\mathsf {A}}_1\cap {\mathsf {B}}_1}} &{}\quad {\text { if }}n_{{\mathsf {A}}_1\cap {\mathsf {B}}_1}+1\le k\le n_{{\mathsf {A}}_1}\\ \displaystyle \frac{1}{\beta }\nu _{k-n_{{\mathsf {A}}_1}} &{}\quad {\text { if }}n_{{\mathsf {A}}_1}+1\le k\le n_{{\mathsf {A}}_1}+n_{{\mathsf {B}}_1}-n_{{\mathsf {A}}_1\cap {\mathsf {B}}_1}\\ \mu _{k-n_{{\mathsf {A}}_1}-n_{{\mathsf {B}}_1}+n_{{\mathsf {A}}_1\cap {\mathsf {B}}_1}} &{}\quad {\text { if }}n_{{\mathsf {A}}_1}+n_{{\mathsf {B}}_1}-n_{{\mathsf {A}}_1\cap {\mathsf {B}}_1}+1\le k\le n. \end{array} \right. \end{aligned}$$
Fig. 2

Eigenvalues when \(K_{{\mathsf {A}}_1}\cap K_{{\mathsf {B}}_1}\ne \{0\}\) as a function of \(\alpha\) and \(\beta\)

We report in Fig. 2 the eigenvalues illustrating the case \(K_{{\mathsf {A}}_1}\cap K_{{\mathsf {B}}_1}\ne \{0\}\), obtained with the diagonal matrices given by

$$\begin{aligned} {{\,\mathrm{diag}\,}}({\mathsf {A}}_1)&= [3,0,0,4,5,6], \quad {{\,\mathrm{diag}\,}}({\mathsf {A}}_2)=[0,1,2,0,0,0],\\ {{\,\mathrm{diag}\,}}({\mathsf {B}}_1)&= [7,8,0,0,9,10], \quad {{\,\mathrm{diag}\,}}({\mathsf {B}}_2)=[0,0,0.8,1,0,0]. \end{aligned}$$

The surface contains the eigenvalues depending on both \(\alpha\) and \(\beta\), the hyperbolas those depending only on \(\beta\), and the straight lines those depending only on \(\alpha\). If we cut the three-dimensional picture with a plane at a fixed \(\beta >0\), we recognize the behavior analyzed in Sect. 2.1 and shown in Fig. 1 (left). Analogously, taking a plane with \(\alpha >0\) fixed, we recover Case 2 (see Sect. 2.2).

If \(K_{{\mathsf {A}}_1}\cap K_{{\mathsf {B}}_1}=\{0\}\), we set \(n_{{\mathsf {A}}_1\cap {\mathsf {B}}_1}=0\), hence the eigenvalues are

$$\begin{aligned} \lambda _k=\left\{ \begin{array}{ll} \alpha \chi _k &{}\quad {\text { if }}1\le k\le n_{{\mathsf {A}}_1}\\ \displaystyle \frac{1}{\beta }{\nu _{k-n_{{\mathsf {A}}_1}}} &{} \quad {\text { if }} n_{{\mathsf {A}}_1}+1\le k\le n_{{\mathsf {A}}_1}+n_{{\mathsf {B}}_1}\\ \mu _{k-n_{{\mathsf {A}}_1}-n_{{\mathsf {B}}_1}} &{}\quad {\text { if }} n_{{\mathsf {A}}_1}+n_{{\mathsf {B}}_1}+1\le k\le n. \end{array} \right. \end{aligned}$$
Fig. 3

Eigenvalues when \(K_{{\mathsf {A}}_1}\cap K_{{\mathsf {B}}_1}=\{0\}\) as a function of \(\alpha\) and \(\beta\)

In order to illustrate the case \(K_{{\mathsf {A}}_1}\cap K_{{\mathsf {B}}_1}=\{0\}\), we report in Fig. 3 the eigenvalues computed using the diagonal matrices with entries

$$\begin{aligned} {{\,\mathrm{diag}\,}}({\mathsf {A}}_1)&= [0,0,3,4,5,6], \quad {{\,\mathrm{diag}\,}}({\mathsf {A}}_2)=[1,2,0,0,0,0],\\ {{\,\mathrm{diag}\,}}({\mathsf {B}}_1)&= [7,8,9,10,0,0], \quad {{\,\mathrm{diag}\,}}({\mathsf {B}}_2)=[0,0,0,0,0.8,1]. \end{aligned}$$

For fixed \(\alpha\), we see in solid lines the hyperbolas \(\frac{\nu _j}{\beta }\), \(j=1,2\), while for fixed \(\beta\) we see the straight lines \(\alpha \chi _j\), \(j=1,2\). The remaining two eigenvalues are independent of \(\alpha\) and \(\beta\).
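The three families can be checked directly; in this diagonal example the eigenvalues are simply the entrywise ratios of the matrices (illustrative Python/SciPy sketch):

```python
# Case 3 check for the example with trivial intersection of the kernels.
import numpy as np
from scipy.linalg import eigh

A1 = np.diag([0.0, 0.0, 3.0, 4.0, 5.0, 6.0]); A2 = np.diag([1.0, 2.0, 0, 0, 0, 0])
B1 = np.diag([7.0, 8.0, 9.0, 10.0, 0.0, 0.0]); B2 = np.diag([0, 0, 0, 0, 0.8, 1.0])

alpha, beta = 2.0, 0.5
lam = np.sort(eigh(A1 + alpha * A2, B1 + beta * B2, eigvals_only=True))
print(np.round(lam, 4))
# [ 0.2857  0.3333  0.4  0.5  12.  12.5]: the families alpha*chi = [2/7, 4/8],
# mu = [3/9, 4/10], and nu/beta = [6/(1*0.5), 5/(0.8*0.5)]
```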

3 Virtual element method for eigenvalue problems

In this section we recall how algebraic eigenvalue problems similar to the ones discussed in the previous section can be obtained within the framework of the Virtual Element Method (VEM) for the discretization of elliptic eigenvalue problems, see [15, 16].

We consider the model problem for the Laplace operator. Given an open connected domain \(\Omega \subset {\mathbb {R}}^d\), \(d=2,3\), with Lipschitz continuous boundary, we look for eigenvalues \(\lambda \in {\mathbb {R}}\) and eigenfunctions \(u\ne 0\) such that

$$\begin{aligned} \left\{ \begin{array}{ll} -\Delta u=\lambda u &{}\quad \text { in }\Omega \\ u=0&{}\quad {\text { on }}\partial \Omega . \end{array} \right. \end{aligned}$$

In view of the application of VEM, we consider the weak form: find \(\lambda \in {\mathbb {R}}\) and \(u\in H^1_0(\Omega )\) with \(u\ne 0\) such that

$$\begin{aligned} a(u,v)=\lambda b(u,v)\quad \forall v\in H^1_0(\Omega ), \end{aligned}$$
(11)

where

$$\begin{aligned} a(u,v)=(\nabla u,\nabla v),\quad b(u,v)=(u,v), \end{aligned}$$

and \((\cdot ,\cdot )\) is the scalar product in \(L^2(\Omega )\).

It is well-known that problem (11) admits an infinite sequence of positive eigenvalues

$$\begin{aligned} 0<\lambda _1\le \dots \le \lambda _i\le \cdots \end{aligned}$$

repeated according to their multiplicity, each one associated with an eigenfunction \(u_i\) satisfying the following properties:

$$\begin{aligned} a(u_i,u_j)&= b(u_i,u_j)=0\quad {\text {if }}i\ne j\nonumber \\ b(u_i,u_i)&= 1,\quad a(u_i,u_i)=\lambda _i. \end{aligned}$$
(12)

Let us briefly recall the definition of the virtual element spaces and of the discrete bilinear forms which we are going to use in this section, see [1, 3]. We present only the two-dimensional spaces; the three-dimensional ones are obtained using the 2D virtual elements on the faces.

We decompose \(\Omega\) into polygons P, with diameter \(h_P\) and area |P|. Similarly, if e is an edge of an element P, we denote by \(h_e=|e|\) its length. Depending on the context \(\partial P\) refers to either the boundary of P or the set of the edges of P. The notation \({\mathcal {T}}_h\) and \({\mathcal {E}}_h\) stands for the set of the elements and the edges, respectively. As usual, \(h=\max _{P\in {\mathcal {T}}_h} h_P\). We assume the following mesh regularity condition (see [3]): there exists a positive constant \(\gamma\), independent of h, such that each element \(P\in {\mathcal {T}}_h\) is star-shaped with respect to a ball of radius greater than \(\gamma h_P\); moreover, for every element P and for every edge \(e\subset \partial P\), it holds \(h_e \ge \gamma h_P\).

For \(k\ge 1\) and \(P\in {\mathcal {T}}_h\) we define

$$\begin{aligned} {\tilde{V}}_h^k(P)=\{v\in H^1(P): v|_{\partial P}\in C^0(\partial P), v|_e\in {\mathbb {P}}_k(e)\ \forall e\subset \partial P, \Delta v\in {\mathbb {P}}_k(P)\}. \end{aligned}$$

We consider the following linear forms on the space \({\tilde{V}}_h^k(P)\)

(D1) the values \(v(V_i)\) at the vertices \(V_i\) of P;

(D2) the scaled edge moments up to order \(k-2\)

$$\begin{aligned} \dfrac{1}{|e|}\int _e vm\,{\text {d}}s\quad \forall m\in {\mathcal {M}}_{k-2}(e),\ \forall e\subset \partial P; \end{aligned}$$

(D3) the scaled element moments up to order \(k-2\)

$$\begin{aligned} \dfrac{1}{|P|}\int _P vm\,{\text {d}}x\quad \forall m\in {\mathcal {M}}_{k-2}(P), \end{aligned}$$

where \({\mathcal {M}}_{k-2}(\omega )\) is the set of scaled monomials on \(\omega\), namely

$$\begin{aligned} {\mathcal {M}}_{k-2}(\omega )=\Big \{\Big (\dfrac{{\mathbf {x}}-{\mathbf {x}}_\omega }{h_\omega }\Big )^s, |s|\le k-2\Big \}, \end{aligned}$$

with \({\mathbf {x}}_\omega\) the barycenter of \(\omega\), \(s\) a multi-index with \(|s|=s_1+s_2\), and with the convention that \({\mathcal {M}}_{-1}(\omega )=\emptyset\).
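For concreteness, here is a small illustrative Python sketch enumerating \({\mathcal {M}}_{k-2}(\omega )\) through its multi-indices:

```python
# Scaled monomials M_{k-2}(omega): each multi-index s = (s1, s2) with
# |s| = s1 + s2 <= k-2 gives ((x-xw)/hw)**s1 * ((y-yw)/hw)**s2;
# for k = 1 the list is empty, consistently with M_{-1}(omega) = {}.
def scaled_monomials(k, xw, yw, hw):
    monos = []
    for deg in range(k - 1):              # |s| = 0, ..., k-2
        for s1 in range(deg + 1):
            s2 = deg - s1
            monos.append(lambda x, y, s1=s1, s2=s2:
                         ((x - xw) / hw) ** s1 * ((y - yw) / hw) ** s2)
    return monos

print(len(scaled_monomials(3, 0.0, 0.0, 1.0)))  # 3: the monomials 1, X, Y
```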

From the values of the linear operators (D1)–(D3), on each element P we can compute a projection operator \(\Pi _k^\nabla : {\tilde{V}}_h^k(P)\rightarrow {\mathbb {P}}_k(P)\) defined as the unique solution of the following problem:

$$\begin{aligned}&a^P(\Pi _k^\nabla v-v,p)=0\quad \forall p\in {\mathbb {P}}_k(P)\nonumber \\&\quad \int _{\partial P}(\Pi _k^\nabla v-v){\text {d}}s=0, \end{aligned}$$
(13)

where \(a^P(u,v)=(\nabla u,\nabla v)_P\) and \((\cdot ,\cdot )_P\) denotes the \(L^2(P)\)-scalar product.

The local virtual space is defined as

$$\begin{aligned} V_h^k(P)=\left\{ v\in {\tilde{V}}_h^k(P):\int _P (v-\Pi _k^\nabla v) p{\text {d}}x=0\ \forall p\in ({\mathbb {P}}_k{\setminus }{\mathbb {P}}_{k-2})(P)\right\} , \end{aligned}$$
(14)

where \(({\mathbb {P}}_k{\setminus }{\mathbb {P}}_{k-2})(P)\) denotes the set of polynomials in \({\mathbb {P}}_k(P)\) that are \(L^2\)-orthogonal to \({\mathbb {P}}_{k-2}(P)\).

We recall that by construction \({\mathbb {P}}_k(P)\subset V_h^k(P)\), so that the optimal rate of convergence is ensured. Moreover, the linear operators (D1)–(D3) provide a unisolvent set of degrees of freedom (DoFs) for \(V_h^k(P)\), which allows us to define and compute \(\Pi _k^\nabla\) on \(V_h^k(P)\). In addition, the \(L^2\)-projection operator \(\Pi ^0_k:V_h^k(P)\rightarrow {\mathbb {P}}_k(P)\) is also computable from the DoFs.

The global virtual space is

$$\begin{aligned} V_h^k=\{v\in H^1_0(\Omega ): v|_P\in V_h^k(P)\ \forall P\in {\mathcal {T}}_h\}. \end{aligned}$$
(15)

In order to discretize problem (11), we introduce the discrete counterparts \(a_h\) and \(b_h\) of the bilinear forms a and b, respectively. Both discrete forms are obtained as the sum of the following local contributions: for all \(u_h,v_h\in V_h^k\)

$$\begin{aligned} a_h^P(u_h,v_h)&= a^P(\Pi _k^\nabla u_h,\Pi _k^\nabla v_h) +S_a^P((I-\Pi _k^\nabla )u_h,(I-\Pi _k^\nabla )v_h)\nonumber \\ b_h^P(u_h,v_h)&= b^P(\Pi ^0_ku_h,\Pi ^0_kv_h)+S_b^P((I-\Pi ^0_k)u_h,(I-\Pi ^0_k)v_h), \end{aligned}$$
(16)

where \(b^P(u,v)=(u,v)_P\), and \(S_a^P\) and \(S_b^P\) are symmetric positive definite bilinear forms on \(V_h^k(P)\times V_h^k(P)\) such that

$$\begin{aligned}& c_0 a^P(v,v)\le S_a^P(v,v)\le c_1a^P(v,v) \quad \forall v\in V_h^k(P){\text { with }}\Pi _k^\nabla v=0\nonumber \\& c_2 b^P(v,v)\le S_b^P(v,v)\le c_3b^P(v,v) \quad \forall v\in V_h^k(P){\text { with }}\Pi ^0_kv=0, \end{aligned}$$
(17)

for some positive constants \(c_i\) (\(i=0,\ldots ,3\)) independent of h. We define \(a_h(u_h,v_h)=\sum _{P\in {\mathcal {T}}_h}a_h^P(u_h,v_h)\) and \(b_h(u_h,v_h)=\sum _{P\in {\mathcal {T}}_h}b_h^P(u_h,v_h)\).
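At the algebraic level, the sums defining \(a_h\) and \(b_h\) correspond to the usual element-by-element assembly. A minimal sketch follows (the names local_matrices and dof_maps are illustrative assumptions, not notation from the paper):

```python
# Assemble a global sparse matrix from local element matrices, given the
# element-to-global DoF maps.
import scipy.sparse as sp

def assemble(local_matrices, dof_maps, n_dofs):
    rows, cols, vals = [], [], []
    for K, dofs in zip(local_matrices, dof_maps):
        for i, gi in enumerate(dofs):
            for j, gj in enumerate(dofs):
                rows.append(gi); cols.append(gj); vals.append(K[i, j])
    return sp.csr_matrix((vals, (rows, cols)), shape=(n_dofs, n_dofs))
```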

The virtual element counterpart of (11) reads: find \(\lambda _h\) and \(u_h\in V_h^k\) with \(u_h\ne 0\) such that

$$\begin{aligned} a_h(u_h,v_h)=\lambda _h b_h(u_h,v_h)\quad \forall v_h\in V_h^k. \end{aligned}$$
(18)

Thanks to (17), the discrete problem (18) admits \(N_h=\dim {V_h^k}\) positive eigenvalues

$$\begin{aligned} 0<\lambda _{1h}\le \dots \le \lambda _{N_hh} \end{aligned}$$

and the corresponding eigenfunctions \(u_{ih}\), for \(i=1,\ldots ,N_h\), enjoy the discrete counterpart of properties in (12).

The following convergence result has been proved in [16].

Theorem 1

Let \(\lambda\) be an eigenvalue of (11) of multiplicity m and \({\mathcal {E}}_\lambda\) the corresponding eigenspace. Then there are exactly m discrete eigenvalues \(\lambda _{j(i)h}\) (\(i=1,\ldots ,m\)) of (18) tending to \(\lambda\). Moreover, assuming that \(u\in H^{1+r}(\Omega )\) for all \(u\in {\mathcal {E}}_\lambda\), the following inequalities hold true:

$$\begin{aligned}&|\lambda -\lambda _{j(i)h}|\le Ch^{2t}\\&{{\hat{\delta }}}({\mathcal {E}}_\lambda ,\oplus _i{\mathcal {E}}_{j(i)h})\le Ch^t, \end{aligned}$$

where \(t=\min (k,r)\), \({{\hat{\delta }}}({\mathcal {E}},{\mathcal {F}})\) represents the gap between the spaces \({\mathcal {E}}\) and \({\mathcal {F}}\), and \({\mathcal {E}}_{\ell h}\) is the eigenspace spanned by \(u_{\ell h}\).

Remark 3

It is also possible to consider on the right hand side of (18) the bilinear form \({\tilde{b}}_h(u_h,v_h)=\sum _{P\in {\mathcal {T}}_h}b^P(\Pi ^0_ku_h,\Pi ^0_kv_h)\). This leads to the following discrete eigenvalue problem: find \(({\tilde{\lambda }}_h,{\tilde{u}}_h)\in {\mathbb {R}}\times V_h^k\) with \({\tilde{u}}_h\ne 0\) such that

$$\begin{aligned} a_h({\tilde{u}}_h,v_h)={\tilde{\lambda }}_h{\tilde{b}}_h({\tilde{u}}_h,v_h)\quad \forall v_h\in V_h^k. \end{aligned}$$
(19)

The analogue of Theorem 1 holds true for this partially non-stabilized discretization as well.

3.1 Computational aspects and numerical results

In order to compute the solution of problems (18) and (19), we need to describe how the matrices associated with our bilinear forms are obtained. By construction, the kernel of the matrix \({\mathsf {A}}_1\) (respectively, \({\mathsf {B}}_1\)) associated with \(\sum _P a^P(\Pi _k^\nabla \cdot ,\Pi _k^\nabla \cdot )\) (respectively, \(\sum _P b^P(\Pi ^0_k\cdot ,\Pi ^0_k\cdot )\)) corresponds to the elements \(v_h\in V_h^k\) such that \(\Pi _k^\nabla v_h\) is constant (respectively, \(\Pi ^0_kv_h=0\)) on each \(P\in {\mathcal {T}}_h\).

We observe that the local contributions of the bilinear forms displayed in (16) mimic the following exact relations

$$\begin{aligned} a^P(u_h,v_h)&= a^P(\Pi _k^\nabla u_h,\Pi _k^\nabla v_h)+a^P((I-\Pi _k^\nabla )u_h,(I-\Pi _k^\nabla )v_h)\nonumber \\ b^P(u_h,v_h)&= b^P(\Pi ^0_ku_h,\Pi ^0_kv_h)+b^P((I-\Pi ^0_k)u_h,(I-\Pi ^0_k)v_h). \end{aligned}$$
(20)

Let us denote by \({\mathsf {A}}^\ell _1\), \({\mathsf {A}}^\ell _2\), \({\mathsf {B}}^\ell _1\) and \({\mathsf {B}}^\ell _2\) the matrices whose entries are given by

$$\begin{aligned} ({\mathsf {A}}^\ell _1)_{ij}&= a^P(\Pi _k^\nabla \phi _i,\Pi _k^\nabla \phi _j),\quad ({\mathsf {A}}^\ell _2)_{ij}=a^P((I-\Pi _k^\nabla )\phi _i,(I-\Pi _k^\nabla )\phi _j)\nonumber \\ ({\mathsf {B}}^\ell _1)_{ij}&= b^P(\Pi ^0_k\phi _i,\Pi ^0_k\phi _j), \quad({\mathsf {B}}^\ell _2)_{ij}=b^P((I-\Pi ^0_k)\phi _i,(I-\Pi ^0_k)\phi _j) \end{aligned}$$
(21)

where the \(\phi _i\) are the basis functions of \(V_h^k(P)\).

Even if the global matrices \({\mathsf {A}}\) and \({\mathsf {B}}\) do not satisfy the properties stated in Assumption 1, it turns out that, at the local level, Assumption 1 is fulfilled by \({\mathsf {C}}={\mathsf {B}}^\ell _1+\beta {\mathsf {B}}^\ell _2\); moreover, \({\mathsf {C}}={\mathsf {A}}^\ell _1+\alpha {\mathsf {A}}^\ell _2\) falls into the situation described in Remark 2.

We start with the pair \({\mathsf {A}}^\ell _1\) and \({\mathsf {A}}^\ell _2\). With a slight abuse of notation, the kernel \(K_{{\mathsf {A}}^\ell _1}\) is characterized by

$$\begin{aligned} K_{{\mathsf {A}}^\ell _1}=\{v\in V_h^k(P): a^P(\Pi _k^\nabla v,\Pi _k^\nabla w)=0\ \forall w\in V_h^k(P)\}, \end{aligned}$$

that is, \(K_{{\mathsf {A}}^\ell _1}\) consists of the functions v with \(\Pi _k^\nabla v\) constant on P. Moreover, the orthogonal complement of \(K_{{\mathsf {A}}^\ell _1}\), denoted by \(K_{{\mathsf {A}}^\ell _1}^\perp\), contains the elements \(v\in V_h^k(P)\) such that \(a^P(v,w)=0\) for all \(w\in K_{{\mathsf {A}}^\ell _1}\).

We now show that \({\mathsf {A}}^\ell _2( K_{{\mathsf {A}}^\ell _1}^\perp )=0\), that is, for all \(v\in K_{{\mathsf {A}}^\ell _1}^\perp\), \(a^P((I-\Pi _k^\nabla )v,(I-\Pi _k^\nabla )w)=0\) for all \(w\in V_h^k(P)\). We recall that, if \(v\in K_{{\mathsf {A}}^\ell _1}^\perp\), then \(a^P(v,w)=0\) for all \(w\in K_{{\mathsf {A}}^\ell _1}\). This implies that for \(v\in K_{{\mathsf {A}}^\ell _1}^\perp\) and \(w\in K_{{\mathsf {A}}^\ell _1}\), it holds true that \(a^P(v,w)=a^P((I-\Pi _k^\nabla )v,(I-\Pi _k^\nabla )w)=0\). Now we can write for all \(w\in V_h^k(P)\)

$$\begin{aligned}&a^P((I-\Pi _k^\nabla )v,(I-\Pi _k^\nabla )w)\\&\quad =a^P((I-\Pi _k^\nabla )v,(I-\Pi _k^\nabla )(I-\Pi _k^\nabla )w)+ a^P((I-\Pi _k^\nabla )v,(I-\Pi _k^\nabla )\Pi _k^\nabla w)=0. \end{aligned}$$

Indeed, \(\Pi _k^\nabla (I-\Pi _k^\nabla )w=0\) implies that \((I-\Pi _k^\nabla )w\in K_{{\mathsf {A}}^\ell _1}\), and thus the first term vanishes, while for the second term it is enough to observe that \(\Pi _k^\nabla (\Pi _k^\nabla w)=\Pi _k^\nabla w\). Thus property (c) of Assumption 1 is verified for \({\mathsf {C}}={\mathsf {A}}\).

Concerning property (b) of Assumption 1, by construction we have that \(a^P((I-\Pi _k^\nabla )v,(I-\Pi _k^\nabla )v)\ge 0\) for all \(v\in V_h^k(P)\), see (20). On the other hand, if v is constant on P, then \(\Pi _k^\nabla v=v\) is constant, therefore \(v\in K_{{\mathsf {A}}^\ell _1}\) and \((I-\Pi _k^\nabla )v=0\), so that v belongs also to the kernel of \({\mathsf {A}}^\ell _2\). Hence the pair \({\mathsf {A}}^\ell _1\) and \({\mathsf {A}}^\ell _2\) does not satisfy property (b), but it is in the situation described in Remark 2.

Let us now consider the pair \({\mathsf {B}}^\ell _1\) and \({\mathsf {B}}^\ell _2\). We observe that the kernel of \({\mathsf {B}}^\ell _1\) consists of the functions v with \(\Pi ^0_kv=0\). The analysis performed for the pair \({\mathsf {A}}^\ell _1\) and \({\mathsf {A}}^\ell _2\) can be repeated and shows that in this case Assumption 1 is verified for \({\mathsf {C}}={\mathsf {B}}\).

As a consequence of the assembly of the local matrices, the global matrices \({\mathsf {A}}_1\) and \({\mathsf {A}}_2\) (respectively, \({\mathsf {B}}_1\) and \({\mathsf {B}}_2\)) no longer satisfy the properties listed in Assumption 1. In particular, for \(k=1\) we shall see that the matrices \({\mathsf {A}}_1\) and \({\mathsf {B}}_1\) are nonsingular. Nevertheless, we are going to show that the numerical results look very similar to the ones reported in Sect. 2.

Moreover, in practice the matrices \({\mathsf {A}}^\ell _2\) and \({\mathsf {B}}^\ell _2\) are not available and they are replaced by means of the local bilinear forms \(S_a^P\) and \(S_b^P\) given in (16), as follows.

Let us denote by \({\mathbf {u}}_h,{\mathbf {v}}_h\in {\mathbb {R}}^{N_P}\) the vectors containing the values of the \(N_P\) local DoFs associated with \(u_h,v_h\in V_h^k(P)\). Then we define the local stabilizing forms as

$$\begin{aligned} S_a^P(u_h,v_h)=\sigma _P {\mathbf {u}}_h^\top {\mathbf {v}}_h,\quad S_b^P(u_h,v_h)=\tau _P h_P^2{\mathbf {u}}_h^\top {\mathbf {v}}_h \end{aligned}$$

where the stability parameters \(\sigma _P\) and \(\tau _P\) are positive constants which might depend on P but are independent of h. We point out that this choice implies the stability requirements in (17). In applications, the parameter \(\sigma _P\) is usually chosen depending on the mean value of the eigenvalues of the matrix stemming from the term \(a^P(\Pi _k^\nabla \cdot ,\Pi _k^\nabla \cdot )\), and \(\tau _P\) as the mean value of the eigenvalues of the matrix resulting from \(\frac{1}{h^2_P}(\Pi ^0_k\cdot ,\Pi ^0_k\cdot )_P\). The choice of the stabilizing form \(S_a^P\) is discussed in several papers concerning the source problem, see, e.g., [4] and the references therein; an analysis of the stabilization parameter \(\sigma _P\) can be found in [9].

If \(\sigma _P\) and \(\tau _P\) vary in a small range, it is reasonable to take \(\sigma _P=\alpha\) and \(\tau _P=\beta\) for all P; this is the situation we discuss further. The structure of the matrices is then \({\mathsf {A}}={\mathsf {A}}_1+\alpha {\mathsf {A}}_2\) and \({\mathsf {B}}={\mathsf {B}}_1+\beta {\mathsf {B}}_2\), where \({\mathsf {A}}_2\) and \({\mathsf {B}}_2\) are the matrices with local contributions given by \({\mathbf {u}}_h^\top {\mathbf {v}}_h\) and \(h_P^2{\mathbf {u}}_h^\top {\mathbf {v}}_h\), respectively. We study the behavior of the eigenvalues as \(\alpha\) and \(\beta\) vary in given ranges.
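To fix ideas, we sketch the computation of the local matrices for \(k=1\); this is our reading of the standard construction (see [1, 3]) and an illustrative sketch, not the code used for the experiments below. Recall that for \(k=1\) on the space (14) the projector \(\Pi ^0_1\) coincides with \(\Pi _1^\nabla\), so the same matrix can be reused in the stabilizing part of \(b_h^P\); the consistency part \(b^P(\Pi ^0_1\cdot ,\Pi ^0_1\cdot )\) requires monomial moments over P and is omitted here for brevity.

```python
# Local VEM matrices for k = 1 on a polygon with counterclockwise vertices V
# (numpy array of shape (N, 2)): consistency term a^P(Pi u, Pi v) plus the
# "dofi-dofi" stabilizations S_a^P = alpha u.v and S_b^P = beta h_P^2 u.v.
import numpy as np

def local_matrices_k1(V, alpha, beta):
    N = len(V)
    hP = max(np.linalg.norm(V[i] - V[j]) for i in range(N) for j in range(N))
    le = np.array([np.linalg.norm(V[(i + 1) % N] - V[i]) for i in range(N)])
    # D: vertex values of the scaled monomials 1, X, Y (the vertex mean is
    # used as a proxy for the barycenter; the projector, and hence the
    # matrices below, do not depend on this choice of basis)
    D = np.ones((N, 3))
    D[:, 1:] = (V - V.mean(axis=0)) / hP
    B = np.zeros((3, N))
    # row 0: boundary average, matching the constraint in (13)
    B[0, :] = np.array([le[i - 1] + le[i] for i in range(N)]) / (2 * le.sum())
    # rows 1, 2: a^P(m, phi_j) = int_{dP} (grad m . n) phi_j ds (exact, k = 1)
    for j in range(N):
        d = V[(j + 1) % N] - V[j - 1]
        B[1, j] = 0.5 * d[1] / hP
        B[2, j] = -0.5 * d[0] / hP
    G = B @ D
    Pi = np.linalg.solve(G, B)         # monomial coefficients of Pi_1^nabla
    Gt = G.copy(); Gt[0, :] = 0.0      # a^P(1, .) = 0: drop the constant row
    S = np.eye(N) - D @ Pi             # DoF values of (I - Pi_1^nabla)
    A_loc = Pi.T @ Gt @ Pi + alpha * (S.T @ S)   # A1^l + alpha*A2^l
    B_stab = beta * hP ** 2 * (S.T @ S)          # stabilizing part of b_h^P
    return A_loc, B_stab
```

Assembling these local contributions over all elements, as in the sketch after (17), yields global matrices with the structure \({\mathsf {A}}={\mathsf {A}}_1+\alpha {\mathsf {A}}_2\) and \({\mathsf {B}}={\mathsf {B}}_1+\beta {\mathsf {B}}_2\).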

In the following tests \(\Omega\) is the unit square, partitioned using a sequence of Voronoi meshes with a given number of elements. In Fig. 4 we report the coarsest mesh with 50 elements (\(h=0.2350\), 151 edges, 102 vertices). We recall that the exact eigenvalues are given by \((i^2+j^2)\pi ^2\) for \(i,j\in {\mathbb {N}}{\setminus }\{0\}\), with eigenfunctions \(\sin (i\pi x)\sin (j\pi y)\). The numerical results below have been obtained with Matlab and, in particular, with the routine eig for the computation of the eigenvalues. In the following figures, we always report the computed eigenvalues divided by \(\pi ^2\).
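For reference, the normalized exact values \(\lambda /\pi ^2=i^2+j^2\) are easy to enumerate (illustrative Python):

```python
# First normalized exact eigenvalues of the unit square: lambda/pi^2 = i^2+j^2.
import numpy as np

vals = np.sort([i * i + j * j for i in range(1, 12) for j in range(1, 12)])
print(vals[:15])  # [ 2  5  5  8 10 10 13 13 17 17 18 20 20 25 25]
```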

Fig. 4

Voronoi mesh with 50 polygons

Tables 1 and 2 display the dimension of the kernel of the matrices \({\mathsf {A}}_1\) and \({\mathsf {B}}_1\) for \(k=1,2,3\), and for different numbers N of the elements in the mesh.

Table 1 Dimension of \(K_{{\mathsf {A}}_1}\) with respect to k and the number of elements
Table 2 Dimension of \(K_{{\mathsf {B}}_1}\) with respect to k and the number of elements

In particular we see that for \(k=1\) the matrix \({\mathsf {A}}_1\) is nonsingular.

We have computed the lowest eigenvalue of \({\mathsf {A}}_1x=\lambda {\mathsf {B}}_1x\), which gives an estimate of the inf-sup constant of the discrete problem (18). The results, presented in Table 3, show that the first eigenvalue decreases as the mesh is refined; this behavior corresponds to the fact that the bilinear form \(\sum _P a^P(\Pi _k^\nabla \cdot ,\Pi _k^\nabla \cdot )\) is not stable.
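A hedged sketch of this computation (Python/SciPy, whereas the paper uses Matlab's eig), which also covers a singular \({\mathsf {B}}_1\) by discarding infinite eigenvalues:

```python
# Smallest finite eigenvalue of the pencil A1 x = lambda B1 x via the
# QZ-based generalized solver; the symmetric pencil has real eigenvalues.
import numpy as np
from scipy.linalg import eig

def smallest_finite_eig(A1, B1):
    lam = eig(A1, B1, right=False).real
    return lam[np.isfinite(lam)].min()
```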

Table 3 First eigenvalues of \({\mathsf {A}}_1x=\lambda {\mathsf {B}}_1x\) for different meshes

We now discuss some tests, where we present the behavior of the eigenvalues as the parameters \(\alpha\) and \(\beta\) vary, for the mesh with \(N=200\) and different degree k of the polynomials in the space \(V_h^k\).

The rows of Fig. 5 contain the results for fixed k and the values \(\beta =0,1,5\), while, in the columns, \(\beta\) is fixed and k varies. In each picture, we plot in red the exact eigenvalues and with different colors those corresponding to \(\alpha =10^r\) with \(r=-3,\ldots ,1\).

These plots clearly confirm that the choice of the parameters for optimal performance is not immediate. Consider, in particular, that we are solving the Laplace eigenvalue problem (isotropic diffusion) on a domain as simple as a square; for an arbitrary elliptic problem and more general domains the situation could be much more complicated. For \(\beta =0\), the first 30 eigenvalues are well approximated with higher polynomial degrees whenever \(\alpha \ge 0.1\). The value \(\alpha =0.1\) seems to be the best choice in the case \(k=1\). Increasing \(\beta\) does not produce much improvement. All the pictures seem to indicate that higher values of \(\alpha\) might give better results. In particular, for \(k=2,3\) the first 30 eigenvalues are approximated with reasonable accuracy for \(\alpha =10\) and \(\beta =1\). Increasing \(\beta\) while keeping \(\alpha =10\), we see that a smaller number of eigenvalues are captured.

Fig. 5

First 30 eigenvalues for different values of k, \(\alpha\) and \(\beta\)

Figure 6 shows the behavior of the eigenvalues as \(\alpha\) varies from 0 to 10. At first glance the pictures are reminiscent of Fig. 1 (left), even if, as explained before, the situation does not exactly match what we discussed in Sect. 2.

Each subplot reports all computed eigenvalues between 0 and 40; the dotted horizontal lines represent the exact solutions. The first 30 computed eigenvalues are connected by lines of different colors in an automated way. An “ideal” approximation would correspond to a series of colored lines matching the dotted lines of the exact eigenvalues. It is interesting to look at the differences between the various degrees (k from 1 to 3, moving from top to bottom) and values of \(\beta\) (equal to 0, 1, and 5 from left to right).

Fig. 6

Eigenvalues versus \(\alpha\) for different values of k and \(\beta\)

Fig. 7

Same plot as in Fig. 6h with four marked (spurious) eigenvalues

Fig. 8

Eigenfunctions corresponding to the eigenvalues marked in Fig. 7

More reliable results seem to be obtained for large k and small \(\beta\). Actually, the limit case \(\beta =0\) appears to be the safest choice. This is in agreement with the claim of [5], where the authors remark that “even the value \(\sigma _E=0\) yields very accurate results, in spite of the fact that for such a value of the parameter the stability estimate and hence most of the proofs of the theoretical results do not hold” (note that \(\sigma _E\) in [5] has the same meaning as \(\beta\) in our paper). It is interesting to observe that the analysis of [16], summarized in Theorem 1, covers the case \(\beta =0\) as well. On the other hand, \(\beta =0\) may produce a singular matrix \({\mathsf {B}}\), and this may not be convenient from the computational point of view.

In order to better understand the behavior of the eigenvalues reported in Fig. 6h, we highlight in Fig. 7 four eigenvalues that are apparently aligned along an oblique line. The corresponding eigenfunctions are reported in Fig. 8. The four eigenfunctions look similar, so that the analogy with Fig. 1 (left) is even more evident.

We conclude this discussion with an example where, for a given value of \(\alpha\), a good eigenvalue (i.e., an eigenvalue corresponding to a correct approximation) crosses a spurious one (i.e., an eigenvalue lying on an oblique line). In this case the two eigenfunctions may mix together, yielding an even more complicated situation. This behavior is reported in Fig. 9, where a region of the plot shown in Fig. 6h is blown up close to an intersection point: actually three eigenvalues (a spurious one and two good ones) are clustered at the marked intersection points.

Fig. 9

Intersections of good and spurious eigenvalues

Figure 10 shows the computed eigenvalues smaller than 40 when \(\beta\) varies from 0 to 5 for a fixed value of \(\alpha\). As in Fig. 5, and in analogy with Fig. 6, the rows correspond to the degree k of the polynomials, while the columns refer to different values of \(\alpha\). The dotted horizontal lines represent the exact eigenvalues. The lines with different colors in each picture follow the n-th eigenvalue for \(n=1,\ldots ,30\). It turns out that all lines originate from curves that look like hyperbolas when \(\beta\) is large. Following each of these hyperbolas from \(\beta =+\infty\) backwards, when the hyperbola meets a correct approximation of an eigenvalue of the continuous problem, it deviates from its trajectory and becomes an (almost horizontal) straight line. In the case \(k=1\), we see that the higher eigenvalues are computed with decreasing accuracy as \(\beta\) approaches 0.

Fig. 10

Eigenvalues versus \(\beta\) for different values of k and \(\alpha\)

We recognize in these pictures the situation presented in Sect. 2.2, corresponding to the behavior of the eigenvalues when the parameter \(\beta\) in the matrix \({\mathsf {B}}\) varies. In this test, the kernel of the matrix \({\mathsf {B}}_1\) is nontrivial only for \(k=3\). Nevertheless, we can see that when \(\beta\) approaches 0 several eigenvalues go to \(\infty\). On the other hand, for larger values of \(\beta\) we obtain several spurious eigenvalues. The range of \(\beta\) giving eigenvalues close to the exact ones clearly depends on k and \(\alpha\).

Figure 11 displays, in separate pictures, the first four eigenvalues for \(k=1\), \(\alpha =10\), different values of h, and \(0\le \beta \le 400\). Taking into account that the routine eig sorts the eigenvalues in ascending order, the four pictures display, in lexicographical order, the first, second, third and fourth computed eigenvalues. In each subplot, each line refers to a particular mesh. We can see that the eigenvalues computed with the finest mesh seem to be insensitive to the value of \(\beta\). On the other hand, the coarsest mesh gives approximations of the correct values only when \(\beta\) is very small and, furthermore, the accuracy is rather low. For each eigenvalue and each fixed mesh we recognize a critical value of the parameter such that larger values of \(\beta\) produce spurious eigenvalues. The behavior of these eigenvalues clearly reproduces that of the eigenvalues in Fig. 1 (right), referring to Case 2; the results are plotted from a different perspective since they now depend also on the computational mesh. The bottom right plot of Fig. 11 highlights a phenomenon which already appears in Fig. 10i. Indeed, we see that the red line corresponding to the fourth computed eigenvalue for \(N=400\) lies along a hyperbola until \(\beta =65\), where it reaches the value 5 associated with the second and third exact eigenvalues. Between \(\beta =65\) and \(\beta =55\) the red line remains close to 5; then, decreasing \(\beta\), it follows a different hyperbola until it reaches the expected value for \(\beta =35\).

Fig. 11

First four eigenvalues

4 Conclusions

In this paper we have discussed how numerically computed eigenvalues can depend on discretization parameters. Section 2 shows the dependence on \(\alpha\) and \(\beta\) of the eigenvalues of (1) when \({\mathsf {A}}\) and \({\mathsf {B}}\) have the forms (3) and (4), respectively. In Sect. 3 we have studied the behavior of the eigenvalues of the Laplace operator computed with the Virtual Element Method. The presence of two parameters resembles the abstract setting of Sect. 2; even if the assumptions satisfied by the VEM matrices are more complicated than the ones previously discussed, the numerical results are largely in agreement. The present work opens the question of a viable choice of the parameters for eigenvalue computations when the discretization scheme depends on a suitable tuning of them (as in the case of VEM).