Exact formulae and matrix-less eigensolvers for block banded symmetric Toeplitz matrices

Precise asymptotic expansions for the eigenvalues of a Toeplitz matrix T_n(f), as the matrix size n tends to infinity, have recently been obtained, under suitable assumptions on the associated generating function f. A restriction is that f has to be polynomial, monotone, and scalar-valued. In this paper we focus on the case where f is an s × s matrix-valued trigonometric polynomial with s ≥ 1, and T_n(f) is the block Toeplitz matrix generated by f, whose size is N(n, s) = sn. The case s = 1 corresponds to that already treated in the literature. We numerically derive conditions which ensure the existence of an asymptotic expansion for the eigenvalues. Such conditions generalize those known for the scalar-valued setting. Furthermore, following a proposal in the scalar-valued case by the first author, Garoni, and the third author, we devise an extrapolation algorithm for computing the eigenvalues of banded symmetric block Toeplitz matrices with a high level of accuracy and a low computational cost. The resulting algorithm is an eigensolver that does not need to store the original matrix and does not need to perform matrix-vector products; for this reason it is called matrix-less. We use the asymptotic expansion for the efficient computation of the spectrum of special block Toeplitz structures, and we provide exact formulae for the eigenvalues of the matrices coming from the Q_p Lagrangian Finite Element approximation of a second order elliptic differential problem. Numerical results are presented and critically discussed.


Introduction
A matrix A_n of the form

$$A_n = \begin{bmatrix} A_0 & A_{-1} & \cdots & A_{-(n-1)} \\ A_1 & A_0 & \ddots & \vdots \\ \vdots & \ddots & \ddots & A_{-1} \\ A_{n-1} & \cdots & A_1 & A_0 \end{bmatrix},$$

where A_{-(n-1)}, ..., A_{n-1} are blocks in C^{s×s}, is said to be a block Toeplitz matrix. Note that the size of A_n is N(n, s) = sn. We say that φ : [−π, π] → C^{s×s} is a complex matrix-valued Lebesgue integrable function if all its components φ_{i,j} : [−π, π] → C, i, j = 1, ..., s, are complex-valued Lebesgue integrable functions. The nth block Toeplitz matrix generated by φ is defined as T_n(φ) = [φ̂_{i−j}]_{i,j=1}^{n}, where the quantities φ̂_k ∈ C^{s×s} are the Fourier coefficients of φ, that is,

$$\hat\varphi_k = \frac{1}{2\pi}\int_{-\pi}^{\pi} \varphi(\theta)\, e^{-\mathrm{i}k\theta}\, \mathrm{d}\theta, \qquad k \in \mathbb{Z}. \tag{1.1}$$

We refer to {T_n(φ)}_n as the block Toeplitz sequence generated by φ, which in turn is called the generating function or the symbol of {T_n(φ)}_n. Such matrix sequences have been studied, especially for s = 1, by many authors including Szegő, Avram, Böttcher, Parter, Silbermann, Tilli, and Tyrtyshnikov (see e.g. [17,22] and references therein). Furthermore, if φ is Hermitian almost everywhere then, by (1.1), φ̂_{−k} = φ̂_k^* for every k ∈ Z and therefore each T_n(φ) is Hermitian. As a consequence, the spectrum of T_n(φ) is real. Moreover, the analytical properties of φ decide many delicate features of the eigenvalues of T_n(φ), such as distribution, clustering, and localization, as we briefly describe below without entering into technical details.
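To make the definitions above concrete, the following sketch (ours, not part of the original presentation) assembles T_n(φ) from a dictionary of Fourier coefficients; the function name `block_toeplitz` and the test symbol f(θ) = 2 − 2 cos θ, whose Toeplitz matrix is the tridiagonal discrete Laplacian with eigenvalues 2 − 2 cos(jπ/(n+1)), are our own illustrative choices.

```python
import numpy as np

def block_toeplitz(coeffs, n):
    """Assemble T_n(phi) = [phi_hat_{i-j}]_{i,j=1}^n from a dict
    coeffs = {k: s x s array}; a missing k means a zero block."""
    s = next(iter(coeffs.values())).shape[0]
    T = np.zeros((s * n, s * n))
    for i in range(n):
        for j in range(n):
            k = i - j
            if k in coeffs:
                T[i*s:(i+1)*s, j*s:(j+1)*s] = coeffs[k]
    return T

# Scalar (s = 1) example: f(theta) = 2 - 2 cos(theta), i.e.
# phi_hat_0 = 2, phi_hat_{+-1} = -1 -> tridiagonal discrete Laplacian,
# whose eigenvalues are exactly 2 - 2 cos(j*pi/(n+1)), j = 1, ..., n.
coeffs = {0: np.array([[2.0]]), 1: np.array([[-1.0]]), -1: np.array([[-1.0]])}
T5 = block_toeplitz(coeffs, 5)
```

For s = 1 and this particular symbol the eigenvalues of T_n(f) coincide exactly with the samples of f on the grid θ_{j,n} = jπ/(n+1), a fact exploited repeatedly in what follows.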
Distribution. In [22] it was proved that {T_n(φ)}_n has an asymptotic spectral distribution, in the Weyl sense, described by φ(θ), under the assumption that φ(θ) is a Lebesgue integrable matrix-valued function which is Hermitian almost everywhere. An extension to the non-Hermitian case was given in [11], by adapting the tools introduced by Tilli in [23] for complex-valued generating functions. When the symbol φ is also continuous, i.e., each component φ_{i,j} is continuous, the present distribution result can be described as follows: for sufficiently large n, up to a small number of possible outliers, the eigenvalues of T_n(φ) can be grouped into s "branches" having approximate cardinality n, and for each q = 1, ..., s the eigenvalues belonging to the qth branch are approximately given by the samples over a certain uniform grid in [−π, π] of the qth eigenvalue function λ^{(q)}(φ).

Clustering. For any ε > 0, take an ε-neighborhood of the set R_φ, which is defined as the union of the essential ranges of the eigenvalue functions λ^{(q)}(φ). Then the spectrum of {T_n(φ)}_n is clustered at R_φ in the sense that the number of eigenvalues of T_n(φ) that do not belong to the ε-neighborhood of R_φ is o(n) as n tends to infinity. If φ is a Hermitian-valued trigonometric polynomial, then the number of such outliers is O(1), and it depends at most linearly on s and on the degree of the polynomial. Such clustering results are consequences of the distribution result.

Remark 1.1
Part 1. When the symbol φ is continuous, each eigenvalue function λ^{(q)}(φ), q = 1, ..., s, is continuous; therefore the essential infimum becomes a minimum and the essential supremum becomes a maximum (because the interval [−π, π] is a compact set), while the essential range is the standard range.
Part 2. When every eigenvalue function λ^{(q)}(φ) is even, it suffices to study its restriction to the half interval [0, π].
In this paper we focus on the case where the symbol is a Hermitian matrix-valued trigonometric polynomial (HTP) f with Fourier coefficients f̂_0, f̂_1, ..., f̂_m ∈ R^{s×s}, that is, a function of the form

$$f(\theta) = \hat f_0 + \sum_{k=1}^{m} \left(\hat f_k\, e^{\mathrm{i}k\theta} + \hat f_k^{T}\, e^{-\mathrm{i}k\theta}\right), \qquad \hat f_{-k} = \hat f_k^{T}.$$

The assumptions on f(θ) imply that T_n(f) is a real symmetric block banded matrix with "block bandwidth" 2m + 1,

$$T_n(f) = \begin{bmatrix} \hat f_0 & \cdots & \hat f_{-m} & & \\ \vdots & \ddots & & \ddots & \\ \hat f_m & & \ddots & & \hat f_{-m} \\ & \ddots & & \ddots & \vdots \\ & & \hat f_m & \cdots & \hat f_0 \end{bmatrix},$$

and that f(θ) has the same eigenvalues as f(−θ). Thus, each eigenvalue function λ^{(q)}(f) is even and we can therefore focus on its restriction λ^{(q)}(f) : [0, π] → R (in accordance with the second part of Remark 1.1). In view of the above distribution, clustering, and localization results, up to O(1) possible outliers, the eigenvalues of the symmetric matrix T_n(f) can be partitioned into s subsets (or "branches") of approximately the same cardinality n; the eigenvalues belonging to the qth branch are approximately equal to the samples of the qth eigenvalue function λ^{(q)}(f) over a uniform grid in [0, π].
In this paper we show that the different branches have a much finer structure and that, under mild restrictions, there exists a hierarchy of symbols which allows us to design extremely economical procedures for the computation of the eigenvalues of the matrices T_n(f). In particular, we conjecture, on the basis of numerical experiments, that for every integer α ≥ 0, every s ≥ 1, and every q ∈ {1, ..., s}, the following asymptotic expansion holds under the local condition and the global condition that will be discussed below: for all n ∈ N and j = 1, ..., n,

$$\lambda_{(q-1)n+j}(T_n(f)) = \lambda^{(q)}(f(\theta_{j,n})) + \sum_{k=1}^{\alpha} c_k^{(q)}(\theta_{j,n})\, h^{k} + E_{j,n,\alpha}^{(q)}, \tag{1.5}$$

where:
• the eigenvalues λ_γ(T_n(f)), γ ∈ {1, ..., N(n, s)}, are sorted so that, for each fixed q ∈ {1, ..., s}, the eigenvalues λ_{(q−1)n+j}(T_n(f)), j = 1, ..., n, are arranged in non-decreasing or non-increasing order, depending on whether λ^{(q)}(f) is increasing or decreasing (this can be seen using the local or the global condition below);
• h = 1/(n + 1) and θ_{j,n} = jπh, j = 1, ..., n;
• {c_k^{(q)}}_{k=1,...,α} is a sequence of functions from [0, π] to R which depends only on f;
• E_{j,n,α}^{(q)} is the remainder (the error), which satisfies the inequality |E_{j,n,α}^{(q)}| ≤ C_α h^{α+1} for some constant C_α depending only on α and f.

We note that in the scalar-valued case s = 1, several theoretical and computational results are available in support of the above expansion [2,5,6,9,14–16], including extensions to preconditioned matrices and to matrices arising in a differential context [1,13].
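As a numerical illustration of the α = 0 case of (1.5), the sketch below (our own, with an arbitrarily chosen 2 × 2 symbol whose two eigenvalue functions are strictly monotone with disjoint ranges) compares the sorted eigenvalues of T_n(f) with the sorted samples of the eigenvalue functions over the grid θ_{j,n}.

```python
import numpy as np

# Our own illustrative 2 x 2 symbol:
# f(theta) = f0 + f1 e^{i theta} + f1^T e^{-i theta} = f0 + 2 cos(theta) f1.
f0 = np.array([[4.0, 2.5], [2.5, 4.0]])
f1 = np.diag([-1.0, -0.5])

def T(n):
    """Block tridiagonal T_n(f): f0 on the diagonal, f1^T above, f1 below."""
    S = np.eye(n, k=1)
    return np.kron(np.eye(n), f0) + np.kron(S, f1.T) + np.kron(S.T, f1)

n = 100
h = 1.0 / (n + 1)
thetas = np.arange(1, n + 1) * np.pi * h
# sorted samples of the two eigenvalue functions over theta_{j,n} ...
samples = np.sort(np.concatenate(
    [np.linalg.eigvalsh(f0 + 2*np.cos(t)*f1) for t in thetas]))
# ... versus the sorted eigenvalues of T_n(f): the alpha = 0 expansion
# predicts agreement up to an O(h) remainder
eigs = np.linalg.eigvalsh(T(n))
max_err = np.max(np.abs(eigs - samples))
```

The observed discrepancy `max_err` shrinks roughly like h as n grows, consistent with the α = 0 case of (1.5).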
Unfortunately, as already shown in [2,15,16], the expansion (1.5) is not always satisfied even for s = 1. Below we give two conditions which ensure that the expansion holds.
Local condition. The eigenvalue λ_γ(T_n(f)) can be expanded as in (1.5) if there exists ε̄ > 0 such that, for all ε ∈ (0, ε̄) and all y ∈ (λ_γ(T_n(f)) − ε, λ_γ(T_n(f)) + ε), there exist a unique q ∈ {1, ..., s} and a unique θ̄ ∈ [0, π] for which y = λ^{(q)}(f(θ̄)). (1.6)

Global condition. A trivial global condition is obtained by imposing that the local condition is satisfied for every eigenvalue which is not an outlier (if the eigenvalue λ_γ(T_n(f)) is an outlier, then, by definition, it does not belong to the range of f and consequently relation (1.6) cannot be satisfied). A simple general assumption, which is equivalent to the trivial global condition, is that each λ^{(q)}(f), q = 1, ..., s, is strictly monotone (increasing or decreasing) over the interval [0, π] and that

$$\max_{\theta\in[0,\pi]} \lambda^{(q)}(f(\theta)) < \min_{\theta\in[0,\pi]} \lambda^{(q+1)}(f(\theta)), \qquad q = 1, \ldots, s-1,$$

where the eigenvalue functions are ordered so that λ^{(1)}(f) ≤ ··· ≤ λ^{(s)}(f). In other words, the global condition can be summarized as follows: strict monotonicity of every eigenvalue function, and the intersection of the ranges of two eigenvalue functions λ^{(j)}(f) and λ^{(k)}(f) is empty for every pair of indices j, k ∈ {1, ..., s} such that j ≠ k. This version of the global condition is of course much simpler to verify. Moreover, in the case s = 1 it reduces to the monotonicity condition already used in the literature; see [2,5,6,9,15,16] and references therein.
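The simpler version of the global condition lends itself to a numerical check; the sketch below (our own, with a hypothetical function name and sampling resolution) verifies monotonicity and range separation on a finite sample, which of course gives evidence rather than a proof.

```python
import numpy as np

def global_condition_holds(f, s, num=2000, tol=1e-12):
    """Sampled check of the simplified global condition: each eigenvalue
    function theta -> lambda^(q)(f(theta)) strictly monotone on [0, pi],
    and the ranges of distinct eigenvalue functions disjoint.
    Evidence only: a finite sample cannot prove the condition."""
    thetas = np.linspace(0.0, np.pi, num)
    lam = np.array([np.linalg.eigvalsh(f(t)) for t in thetas])  # (num, s), ascending
    for q in range(s):
        d = np.diff(lam[:, q])
        if not (np.all(d > tol) or np.all(d < -tol)):
            return False        # branch q not strictly monotone
    for q in range(s - 1):
        if lam[:, q].max() > lam[:, q + 1].min() + tol:
            return False        # ranges of branches q and q+1 overlap
    return True
```

For instance, the diagonal symbol diag(2 − 2 cos θ, 6 − 2 cos θ) passes the check (disjoint ranges [0, 4] and [4, 8], both branches increasing), while diag(2 − 2 cos θ, 3 − 2 cos θ) fails it because the ranges [0, 4] and [1, 5] overlap.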
In [15], the authors employed the asymptotic expansion (1.5) with s = 1 for computing an accurate approximation of λ_j(T_n(f)) for very large n, provided the values λ_{j_1}(T_{n_1}(f)), ..., λ_{j_k}(T_{n_k}(f)) are available for moderately sized n_1, ..., n_k such that θ_{j_1,n_1} = ··· = θ_{j_k,n_k} = θ_{j,n}. We stress that the algorithm was developed in [15] and then improved in [1,12,14], while the mathematical foundations of the considered expansions and a few numerical tests were already present in [5].
The purpose of this paper is to carry out this idea and to support it by numerical experiments accompanied by an appropriate error analysis in the more general case where s > 1. In particular, we devise an algorithm to compute λ j (T n (f)) with a high level of accuracy and a relatively low computational cost. The algorithm is completely analogous to the extrapolation procedure [21,Section 3.4], which is employed in the context of Romberg integration to obtain high precision approximations of an integral from a few coarse trapezoidal approximations. In this regard, the asymptotic expansion (1.5) plays here the same role as the Euler-Maclaurin summation formula [21,Section 3.3].
The paper is organized as follows. In Sect. 2, assuming the asymptotic eigenvalue expansion (1.5), we present our extrapolation algorithm for computing the eigenvalues of the s × s block matrix T n (f) for s > 1. In Sect. 3 we provide numerical experiments in support of the asymptotic eigenvalue expansion (1.5) in different cases and we derive exact formulae for the eigenvalues in some practical examples and for matrices coming from order p Lagrangian Finite Element approximations of a second order elliptic differential problem, which are denoted as Q p . In Sect. 4 we draw conclusions and we outline future lines of research. In "Appendix A" we formally prove (1.5) in the basic case α = 0, and in "Appendix B" we report in detail the mass and stiffness Q p elements for p = 2, 3, 4.

Algorithm for computing the eigenvalues of T n (f) for s > 1
Assuming that the expansion (1.5) holds true and taking inspiration from [14], in the present section we propose an interpolation-extrapolation algorithm for computing the eigenvalues of T_n(f). In what follows, for each positive integer n ∈ N = {1, 2, 3, ...} and each s > 1 we define N(n, s) = sn. Moreover, with each positive integer n we associate the stepsize h = 1/(n + 1) and the grid points θ_{j,n} = jπh, j = 1, ..., n. For notational convenience, unless otherwise stated, we will always denote a positive integer and the associated stepsize in a strongly related way: if the positive integer is n, then the associated stepsize is h; if the positive integer is n_1, then the associated stepsize is h_1 = 1/(n_1 + 1); and so on. Throughout this section, we make the following assumptions.
Take n ≫ n_1 and fix an index j ∈ {1, ..., n} and a branch index q ∈ {1, ..., s}. To compute an approximation of λ_γ(T_n(f)), γ = (q − 1)n + j, through the expansion (1.5), we need the value c_k^{(q)}(θ_{j,n}). This value is not available in practice, but we can approximate it by interpolating and extrapolating the values c̃_k^{(q)}(θ_{j_1,n_1}), j_1 = 1, ..., n_1, computed on the coarse grid. It is known, however, that interpolating over a large number of uniform nodes is not advisable, as it may give rise to spurious oscillations (Runge's phenomenon). It is therefore better to adopt another kind of approximation. An alternative could be the following: we approximate c_k^{(q)}(θ_{j,n}) by a low-degree piecewise interpolant of the values c̃_k^{(q)}(θ_{j_1,n_1}) at the nodes θ_{j_1,n_1}, j_1 = 1, ..., n_1. This strategy certainly removes any spurious oscillation, yet it is not accurate enough. In particular, it does not preserve the accuracy of approximation at the nodes θ_{j_1,n_1} established in Theorem 2.1, i.e., there is no guarantee that |c̃_k^{(q)}(θ_{j_1,n_1}) − c_k^{(q)}(θ_{j_1,n_1})| is of order h_1^{α−k+1}, up to a constant depending only on α and q. As proved in Theorem 2.2, a local approximation strategy that preserves the accuracy (2.4), at least if c_k^{(q)}(θ) is sufficiently smooth, is the following: let θ^{(1)}, ..., θ^{(α−k+1)} be the α − k + 1 points of the grid {θ_{1,n_1}, ..., θ_{n_1,n_1}} which are closest to the point θ_{j,n}, and let c̃_{k,j}^{(q)}(θ) be the interpolation polynomial of the data (θ^{(i)}, c̃_k^{(q)}(θ^{(i)})), i = 1, ..., α − k + 1; then we approximate c_k^{(q)}(θ_{j,n}) by c̃_{k,j}^{(q)}(θ_{j,n}). Note that, by selecting α − k + 1 points from {θ_{1,n_1}, ..., θ_{n_1,n_1}}, we are implicitly assuming that n_1 ≥ α − k + 1.
for some constant B depending only on α and f. We are now ready to formulate our algorithm for computing the eigenvalues of T_n(f).
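The coarse-grid values c̃_k^{(q)}(θ_{j_1,n_1}) referred to above are obtained, as in the scalar-valued papers, by solving a small Vandermonde-type linear system in the stepsizes h_k. A minimal sketch (the function name `expansion_coefficients` is our own, hypothetical):

```python
import numpy as np

def expansion_coefficients(lam_levels, lam0, hs, alpha):
    """Solve, for c_1(theta), ..., c_alpha(theta), the system
        sum_{m=1}^{alpha} c_m h_k^m = lambda_k - lam0,  k = 1, ..., alpha,
    where lambda_k is the eigenvalue of T_{n_k}(f) matched to the grid
    point theta, and lam0 = lambda^(q)(f(theta))."""
    V = np.array([[h**m for m in range(1, alpha + 1)] for h in hs])
    return np.linalg.solve(V, np.asarray(lam_levels, dtype=float) - lam0)
```

As a sanity check, feeding in synthetic eigenvalues of the form lam0 + 2h + 3h² recovers the coefficients (2, 3, 0) up to roundoff.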
Remark 2.1 Algorithm 1 is specifically designed for computing λ_γ(T_n(f)) in the case where n is quite large. When applying this algorithm, it is implicitly assumed that n_1 and α are small (much smaller than n), so that each n_k = 2^{k−1}(n_1 + 1) − 1 is small as well and the computation of the eigenvalues λ_γ(T_{n_k}(f)), which is required in the first step, can be efficiently performed by any standard eigensolver (e.g., the Matlab eig function).
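Putting the pieces together, the following is a compact sketch of the whole procedure for the scalar case s = 1 and a monotone non-decreasing symbol. It is a simplification of Algorithm 1 (piecewise-linear interpolation of the expansion functions c_k replaces the local polynomial interpolation of Theorem 2.2), and the symbol f(θ) = 10 − 8 cos θ − cos 2θ is our own illustrative choice.

```python
import numpy as np

def toeplitz_sym(fhat, n):
    """Banded symmetric scalar Toeplitz matrix for the symbol
    f(theta) = fhat[0] + 2 * sum_k fhat[k] * cos(k*theta)."""
    T = fhat[0] * np.eye(n)
    for k in range(1, len(fhat)):
        T += fhat[k] * (np.eye(n, k=k) + np.eye(n, k=-k))
    return T

def matrixless_eigs(fhat, f, n, n1=50, alpha=3):
    """Simplified sketch of the matrix-less eigensolver for s = 1 and a
    monotone non-decreasing symbol."""
    ns = [2**k * (n1 + 1) - 1 for k in range(alpha)]       # n_1, ..., n_alpha
    hs = [1.0 / (nk + 1) for nk in ns]
    th1 = np.arange(1, n1 + 1) * np.pi / (n1 + 1)          # coarse grid
    lam = [np.linalg.eigvalsh(toeplitz_sym(fhat, nk)) for nk in ns]
    V = np.array([[hs[k]**m for m in range(1, alpha + 1)] for k in range(alpha)])
    C = np.empty((alpha, n1))
    for j in range(1, n1 + 1):
        # nested grids: theta_{j, n1} coincides with theta_{2^k j, n_k}
        rhs = np.array([lam[k][2**k * j - 1] - f(th1[j - 1]) for k in range(alpha)])
        C[:, j - 1] = np.linalg.solve(V, rhs)
    h = 1.0 / (n + 1)
    th = np.arange(1, n + 1) * np.pi / (n + 1)
    approx = f(th)                                         # alpha = 0 term of (1.5)
    for m in range(1, alpha + 1):
        approx += np.interp(th, th1, C[m - 1]) * h**m      # h^m corrections
    return approx
```

With n = 1000, n_1 = 50, and α = 3, the eigendecompositions actually performed have size at most n_α = 203, yet the approximation agrees closely with the eigenvalues produced by a dense eigensolver on the full 1000 × 1000 matrix.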
The last theorem of the current section provides an estimate for the approximation error made by Algorithm 1.

Numerical experiments
In the current section we present a selection of numerical experiments to validate the algorithms based on the asymptotic expansion (1.5) in different cases where f is matrix-valued, and we give exact formulae for the eigenvalues in some examples of practical interest.

Description
We test the asymptotic expansion and the interpolation-extrapolation algorithm in Sect. 2 in order to obtain an approximation of the eigenvalues λ γ (T n (f)), for γ = 1, . . . , sn, for large n.
Example 1. We show that the expansion and the associated interpolation-extrapolation algorithm can be applied to the whole spectrum, since the symbol satisfies the global condition.
Example 2. We show that the expansion and the interpolation-extrapolation algorithm can be applied locally for computing an approximation of the eigenvalues verifying the local condition. In this particular case, the global condition does not hold, because the intersection of the ranges of two eigenvalue functions is a nontrivial interval and, in addition, there exists an index q ∈ {1, ..., s} such that λ^{(q)}(f) is non-monotone.
Example 3. We show that the expansion and the interpolation-extrapolation algorithm can be applied locally for the computation of the eigenvalues satisfying the local condition. For this specific example, the global condition does not hold, since there exists an index q ∈ {1, ..., s} such that λ^{(q)}(f) is non-monotone, either globally on [0, π] or just on a subinterval of [0, π].
Example 4. We show how to bypass the local condition in a few special cases: in fact, using different sampling grids, we can recover exact formulas for the parts of the spectrum where the assumption of monotonicity is violated.
Example 5. We give a closed formula for the eigenvalues of matrices arising from the rectangular Lagrange Finite Element method with polynomials of degree p > 1, usually denoted as Q_p elements. The number of eigenvalue functions, which verify the global condition, depends on the order of the Q_p elements. In this specific setting we have s = p.

Experiments
In Examples 1–3 we do not compute the eigenvalue functions of f analytically, but, for q = 1, ..., s, we are able to provide an 'exact' evaluation of λ^{(q)}(f) at θ_{j_k,n_k}, j_k = 1, ..., n_k, by exploiting the following procedure:
• sample f at θ_{j_k,n_k}, j_k = 1, ..., n_k, obtaining the n_k matrices M_{j_k} ∈ C^{s×s};
• for each j_k = 1, ..., n_k, compute the s eigenvalues of M_{j_k}, denoted λ_q(M_{j_k}), q = 1, ..., s;
• for a fixed q = 1, ..., s, the evaluation of λ^{(q)}(f) at θ_{j_k,n_k}, j_k = 1, ..., n_k, is given by λ_q(M_{j_k}), j_k = 1, ..., n_k.
This procedure is justified by the fact that here f is a trigonometric polynomial and, denoting by C_{n_k}(f) the circulant matrix generated by f, the eigenvalues of C_{n_k}(f) are given by the evaluations of λ^{(q)}(f) at the grid points θ_{r,n_k} = 2πr/n_k, r = 0, ..., n_k − 1, since

$$C_{n_k}(f) = (F_{n_k} \otimes I_s)\, \mathrm{diag}_{r=0,\ldots,n_k-1}\big(f(\theta_{r,n_k})\big)\, (F_{n_k} \otimes I_s)^{*},$$

with F_{n_k} the Fourier matrix of order n_k and I_s the s × s identity matrix [18]. Furthermore, the localization results [19,20] can be exploited to confine the spectrum of T_{n_k}(f) to the union of the ranges of the eigenvalue functions.

Example 1. In the top left panel of Fig. 2 the graphs of the three eigenvalue functions are shown. The Toeplitz matrix generated by f is a pentadiagonal block matrix T_n(f) ∈ R^{N×N}, where N = 3n, and all the blocks belong to R^{3×3}. Here f is such that the global condition is satisfied. Hence we can use the asymptotic expansion and Algorithm 1 to get an accurate approximation of the eigenvalues of T_n(f) for a large n. Solving system (2.3) with α = 4 and n_1 = 100, we obtain the approximations c̃_k^{(q)}(θ_{j_1,n_1}), k = 1, ..., α; they are displayed in the top right and bottom panels of Fig. 2. With these values at hand, we can finally compute λ̃_γ(T_n(f)) for n = 10000 by using (1.5). In the right panel of Fig. 3 (in black) we show the expansion errors E_{j_1,n_1,0}^{(q)} for q = 1, 2, 3.
The errors are significantly reduced if we compute λ̃_γ(T_n(f)), γ = 1, ..., 3n, shown in the left panel of Fig. 3, using Algorithm 1 with α = 4, n_1 = 100, and n = 10000. Furthermore, a careful study of the left panel of Fig. 3 (coloured) also reveals that, for q = 1, ..., s, the errors Ẽ_{j,n,α}^{(q)} have local minima, attained when θ_{j,n} is approximately equal to some of the coarse grid points θ_{j_1,n_1}, j_1 = 1, ..., n_1. This is no surprise, because for θ_{j,n} = θ_{j_1,n_1} we have c̃_{k,j}^{(q)}(θ_{j,n}) = c̃_k^{(q)}(θ_{j_1,n_1}), which means that the error of the approximation is just that of c̃_k^{(q)}(θ_{j_1,n_1}) itself: we are not introducing further errors due to the interpolation process.
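The circulant identity that justifies the 'exact' evaluation procedure above can be checked numerically; the 2 × 2 symbol below is our own illustrative choice, not the symbol of Example 1.

```python
import numpy as np

# Our own illustrative 2 x 2 HTP: f(theta) = f0 + 2 cos(theta) f1.
f0 = np.array([[4.0, 2.5], [2.5, 4.0]])
f1 = np.diag([-1.0, -0.5])

def block_circulant(coeffs, n, s):
    """C_n(f): block (i, j) is the sum of f_hat_k over k = i - j (mod n)."""
    C = np.zeros((s * n, s * n))
    for k, A in coeffs.items():
        for i in range(n):
            j = (i - k) % n
            C[i*s:(i+1)*s, j*s:(j+1)*s] += A
    return C

n, s = 7, 2
C = block_circulant({0: f0, 1: f1, -1: f1.T}, n, s)
# eigenvalues of C_n(f) = samples of lambda^(q)(f) at theta_r = 2 pi r / n
grid = 2 * np.pi * np.arange(n) / n
samples = np.sort(np.concatenate(
    [np.linalg.eigvalsh(f0 + 2*np.cos(t)*f1) for t in grid]))
```

The sorted spectrum of C_n(f) coincides, up to roundoff, with the sorted samples of the eigenvalue functions at the circulant grid points, as predicted by the diagonalization via the Fourier matrix.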

Example 2.
In the present example we choose block size s = 3, with eigenvalue functions λ^{(1)}(f) and λ^{(3)}(f) being strictly monotone on [0, π]. The second eigenvalue function, λ^{(2)}(f), is non-monotone on a small subinterval of [0, π]. Furthermore, the range of λ^{(2)}(f) intersects that of λ^{(3)}(f) in a nontrivial interval. When comparing with Example 1, the only difference in forming the matrix T_n(f) consists in the first Fourier coefficient f̂_0. In this example we want to show that it is possible to give an approximation of those eigenvalues λ_γ(T_n(f)), n = 10000, satisfying the local condition.
From the top left panel of Fig. 4, where the graphs of the three eigenvalue functions are displayed, we notice that:
• λ^{(1)}(f) is monotone non-decreasing and its range does not intersect that of λ^{(q)}(f), q = 2, 3. Hence, using the asymptotic expansion in (1.5), we expect that it is possible to give an approximation of the first n eigenvalues λ_γ(T_n(f)), γ = (q − 1)n + j with q = 1 and j = 1, ..., n;
• λ^{(3)}(f) is monotone non-increasing, but its range intersects that of λ^{(2)}(f). Hence, of the remaining 2n eigenvalues, we expect that it is possible to give a fast approximation just of those eigenvalues λ_γ(T_n(f)) verifying the local condition.

Example 3.
In this example we set the block size s = 3; the eigenvalue functions λ^{(q)}(f), q = 1, 2, 3, have pairwise disjoint ranges, but the assumption of monotonicity is violated. See the top left panel of Fig. 6 for the plot of λ^{(q)}(f), q = 1, 2, 3. The matrix T_n(f) ∈ R^{N×N}, N = 3n, has a pentadiagonal block structure, and all the blocks belong to R^{3×3}. In analogy with Example 2, we want to give an approximation of λ_γ(T_n(f)), for n = 10000, in a case where the global condition is not satisfied. Although the intersection of the ranges of λ^{(j)}(f) and λ^{(k)}(f) is empty for every pair (j, k), j ≠ k, j, k ∈ {1, 2, 3}, the assumption of monotonicity is violated either globally on [0, π] or on a subinterval of [0, π].
Example 4. We want to approximate the eigenvalues of T_n(f), where f(θ) is constructed from the scalar polynomials p^{(q)}(θ), q = 1, 2, 3. For the graph of the chosen polynomials see the top left panel of Fig. 8. Due to the special structure of all the f̂_k, see (3.3), T_n(f) is similar to the block diagonal matrix diag(T_n(p^{(1)}), T_n(p^{(2)}), T_n(p^{(3)})), and it is then immediate to see that the block case, in this setting, reduces to 3 different scalar problems, which can be treated separately. Differently from the previous examples, here the analytical expressions of the eigenvalue functions of f(θ) are known, since they coincide, by construction, with p^{(q)}(θ), q = 1, 2, 3. Hence we describe the spectrum of T_n(f), approximating or computing exactly the 3n eigenvalues, by treating the 3 scalar problems separately. The first n eigenvalues can be computed exactly by sampling p^{(1)} on the grid θ_{j,n} = jπ/(n + 1), j = 1, ..., n. Analogously, n further eigenvalues can be found exactly by sampling p^{(2)} on a special grid defined in [16]. For the last n eigenvalues, the grid that gives exact eigenvalues is not known, but p^{(3)} is monotone non-decreasing and consequently we can use the asymptotic expansion in the scalar case. We set the parameters as in the previous cases: n_1 = 100 and n = 10000. In the top right panel of Fig. 8 we report the expansion errors E_{j_1,n_1,0}^{(q)}, computed using the grid θ_{j_1,n_1} = j_1π/(n_1 + 1), j_1 = 1, ..., n_1, q = 1, 2, 3. It is no surprise that in the first region of the graph (green area) the error is zero, since the first n_1 eigenvalues are given exactly by sampling p^{(1)} on the standard grid θ_{j_1,n_1}. In the yellow area we see the result of the direct computation of λ_γ(T_{n_1}(f)) − λ^{(3)}(f(θ_{j_1,n_1})), for j_1 = 1, ..., n_1, as we are using the asymptotic expansion with α = 0. The red area, containing the errors related to p^{(2)}(θ), is obviously chaotic, since p^{(2)}(θ) is non-monotone.
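The reduction to scalar problems used in this example can be checked numerically: when every Fourier coefficient f̂_k is diagonal, T_n(f) is permutation-similar to the direct sum of the scalar Toeplitz matrices T_n(p^{(q)}). The sketch below uses two scalar symbols of our own choosing, not the p^{(q)} of the example.

```python
import numpy as np

def toeplitz_sym(fhat, n):
    """Banded symmetric scalar Toeplitz matrix from [f_0, f_1, ...]."""
    T = fhat[0] * np.eye(n)
    for k in range(1, len(fhat)):
        T += fhat[k] * (np.eye(n, k=k) + np.eye(n, k=-k))
    return T

# two scalar symbols of our own choosing, packed on the diagonal of f
p1 = [2.0, -1.0]          # p1(theta) = 2 - 2 cos(theta)
p2 = [6.0, -1.0, -0.5]    # p2(theta) = 6 - 2 cos(theta) - cos(2 theta)
n = 40
T_block = np.zeros((2 * n, 2 * n))
for k in range(max(len(p1), len(p2))):
    D = np.diag([p1[k] if k < len(p1) else 0.0,
                 p2[k] if k < len(p2) else 0.0])   # diagonal f_hat_k
    if k == 0:
        T_block += np.kron(np.eye(n), D)
    else:
        S = np.eye(n, k=k)
        T_block += np.kron(S, D) + np.kron(S.T, D)

# spectrum of the block matrix = union of the two scalar spectra
eigs_block = np.linalg.eigvalsh(T_block)
eigs_scalar = np.sort(np.concatenate([
    np.linalg.eigvalsh(toeplitz_sym(p1, n)),
    np.linalg.eigvalsh(toeplitz_sym(p2, n))]))
```

The two sorted spectra coincide up to roundoff, confirming that the block problem decouples into independent scalar problems in this setting.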
To map the two grids above so as to match the given symbol f(θ), we construct a composite grid on [0, π] from the standard grid and the special grid of [16]; a more general formula allows matching the grids θ^{(1)}_{n_ω} and θ^{(2)}_{n_ω+1} so that they can be evaluated on the standard symbol. In the bottom left panel of Fig. 8 we report the global expansion errors E_{j_1,n_1,0}^{(q)}, computed using the grid described above. In this way the region where the error is 0 is the second one (red area), since the eigenvalues are computed exactly by sampling p^{(2)}(θ). Furthermore, in the green and in the yellow areas we see the result of the direct computation of λ_γ(T_{n_1}(f)) − λ^{(q)}(f(θ_{j_1,n_1})), for j_1 = 1, ..., n_1, q = 1, 3, as we are using the asymptotic expansion with α = 0. Hence, the first n eigenvalues of T_n(f) can be computed exactly by sampling p^{(1)} on the grid θ_{j,n} = jπ/(n + 1), j = 1, ..., n, and n further exact eigenvalues can be found by sampling p^{(2)} on the grid (3.4). For the computation of the last n eigenvalues, we use the matrix-less procedure in the scalar setting, passing through the approximation of c_k^{(3)}(θ_{j_1,n_1}), k = 1, ..., α, for α = 4; see the bottom right panel of Fig. 8.
In fact, for α = 4 we ignore the first two evaluations of c_4^{(3)} at the initial points θ_{1,n} and θ_{2,n}, because their values behave in an erratic way. This problem was emphasized in [2] and it is due to the fact that the first and second derivatives of p^{(3)}(θ) at θ = 0 vanish simultaneously. However, we have to make two observations to clarify the situation.
• The present pathology is not a counterexample to the asymptotic expansion (1.5), since (1.5) is stated for a fixed θ and for all pairs (j, n) such that θ_{j,n} = θ; in the current case, as in the scalar-valued case considered in [2], j is fixed while n grows, so that no fixed point θ is associated with the sequence θ_{j,n}.
• There are simple ways to overcome the problem and hence to compute reliable evaluations of c_4^{(3)} at the bad points θ_{1,n} and θ_{2,n}. One of them is described in [12] and consists in choosing a sufficiently large α > 4 and computing c_k^{(3)} for k = 1, 2, 3, 4. Using this trick, the values of c_4^{(3)} at the initial points θ_{1,n} and θ_{2,n} have the expected behavior. In addition, we stress that this behavior has little impact on the numerically computed solution: assuming double precision computations, the contribution to the error deriving from c_4^{(3)}(θ_{j,n}) h^4 is numerically negligible, even for moderate n. Further discussions on the topic are presented in [12].

Example 5.
Consider the Q_p Lagrangian Finite Element approximation of the second order elliptic differential problem

$$\begin{cases} -\Delta u + \beta\cdot\nabla u + \gamma u = f & \text{in } \Omega, \\ u = 0 & \text{on } \partial\Omega, \end{cases} \tag{3.5}$$

in one dimension with β = γ = 0 and f ∈ L²(Ω). The resulting stiffness matrix A_n is a (pn − 1) × (pn − 1) matrix. The construction of the matrix and of its p × p matrix-valued symbol is given in [18]. We have that A_n coincides, up to a normalization factor, with T_n(f)_−, where the subscript − denotes that the last row and column of T_n(f) are removed; this is due to the homogeneous boundary conditions. For detailed expressions of f̂_0 and f̂_1 in the particular cases p = 2, 3, 4, see "Appendix B". In Table 1 we list seven examples of uniform grids, with varying n. The general notation for a grid, whose type is defined by the context, is θ_{j,n}, where n is the number of grid points and j runs over the indices j = 1, ..., n. The grid fineness parameter h for the respective grids is also presented in Table 1. The names of the different grids are chosen in view of their relations with the τ-algebras [7] [see specifically equations (19), (22), and (23) therein].
In Example 1 of [18] the case p = 2 is considered, and explicit formulas for the two eigenvalue functions are given; in their notation, these formulas involve the quantity 129 + 126 cos(θ) + cos²(θ).
In Fig. 9 we present the appropriate grids, defined in Table 1, for the exact eigenvalues of the stiffness matrix.

Conclusions and future work
In this paper we considered the case of f being an s × s matrix-valued trigonometric polynomial, s ≥ 1, and {T_n(f)}_n a sequence of block Toeplitz matrices generated by f, with T_n(f) of size N(n, s) = sn. We numerically observed conditions ensuring the existence of an asymptotic expansion, generalizing the assumptions known for the scalar-valued setting. Furthermore, following a proposal in the scalar-valued case by the first author, by Garoni, and by the third author, we devised an extrapolation algorithm for computing the eigenvalues in the present setting of banded symmetric block Toeplitz matrices, with a high level of accuracy and a low computational cost. The resulting algorithm is an eigensolver that does not need to store the original matrix and does not need to perform matrix-vector products: for this reason we call it matrix-less. We have used the asymptotic expansion for the efficient computation of the spectrum of special block Toeplitz structures, and we have shown exact formulae for the eigenvalues of the matrices coming from the Q_p Lagrangian Finite Element approximation of a second order elliptic differential problem.
Many open issues remain, including a formal proof of the asymptotic expansion clearly indicated by the numerical experiments, at least under the global assumption of monotonicity and pairwise separation of the eigenvalue functions.

Remark 5.1
With regard to Theorem 5.1, for q = 1, ..., s, the case where the λ^{(q)}(f) are bounded and non-monotone is even easier. If we consider λ̂^{(q)}(f), the monotone non-decreasing rearrangement of λ^{(q)}(f) on [0, π], taking into account that the derivative of λ^{(q)}(f) has at most a finite number S of sign changes, we deduce that λ̂^{(q)}(f) is Lipschitz continuous and its Lipschitz constant is bounded by that of λ^{(q)}(f) (notice that λ̂^{(q)}(f) is not necessarily continuously differentiable, but the derivative of λ̂^{(q)}(f) has at most S points of discontinuity). Furthermore, the eigenvalues λ_γ(τ_N(f)), γ = (q − 1)n + j, are exactly given by

$$\lambda^{(q)}\!\left(f\!\left(\frac{j\pi}{n+1}\right)\right), \qquad j = 1, \ldots, n,$$

so that, by ordering these values non-decreasingly, we deduce that they coincide with λ̂^{(q)}(f(x_{j,n})), with x_{j,n} of the form (jπ/(n+1))(1 + o(1)). With these premises, the proof follows exactly the same steps as in Theorem 5.1, using the MinMax characterization and the interlacing theorem for Hermitian matrices.