Exact formulae and matrix-less eigensolvers for block banded symmetric Toeplitz matrices
Abstract
Precise asymptotic expansions for the eigenvalues of a Toeplitz matrix \(T_n(f)\), as the matrix size n tends to infinity, have recently been obtained, under suitable assumptions on the associated generating function f. A restriction is that f has to be polynomial, monotone, and scalar-valued. In this paper we focus on the case where \(\mathbf {f}\) is an \(s\times s\) matrix-valued trigonometric polynomial with \(s\ge 1\), and \(T_n(\mathbf {f})\) is the block Toeplitz matrix generated by \(\mathbf {f}\), whose size is \(N(n,s)=sn\). The case \(s=1\) corresponds to that already treated in the literature. We numerically derive conditions which ensure the existence of an asymptotic expansion for the eigenvalues. Such conditions generalize those known for the scalar-valued setting. Furthermore, following a proposal in the scalar-valued case by the first author, Garoni, and the third author, we devise an extrapolation algorithm for computing the eigenvalues of banded symmetric block Toeplitz matrices with a high level of accuracy and a low computational cost. The resulting algorithm is an eigensolver that does not need to store the original matrix and does not need to perform matrix-vector products, and for this reason it is called matrix-less. We use the asymptotic expansion for the efficient computation of the spectrum of special block Toeplitz structures and we provide exact formulae for the eigenvalues of the matrices coming from the \(\mathbb {Q}_p\) Lagrangian Finite Element approximation of a second order elliptic differential problem. Numerical results are presented and critically discussed.
Keywords
Eigenvalues · Asymptotic eigenvalue expansion · Polynomial interpolation · Extrapolation · Block matrices

Mathematics Subject Classification
MSC 15B05 · MSC 65F15 · MSC 65D05 · MSC 65B05

1 Introduction
 Distribution.

In [22] it was proved that \(\{T_n(\phi )\}_n\) has an asymptotic spectral distribution, in the Weyl sense, described by \(\phi (\theta )\), under the assumption that \(\phi (\theta )\) is a Lebesgue integrable matrix-valued function which is Hermitian almost everywhere. An extension to the non-Hermitian case was given in [11], by adapting the tools introduced by Tilli in [23] for complex-valued generating functions.
When the symbol \(\phi \) is also continuous, i.e., each component \(\phi _{i,j}\) is continuous, this distribution result can be described as follows: for sufficiently large n, up to a small number of possible outliers, the eigenvalues of \(T_n(\phi )\) can be grouped into s “branches” having approximate cardinality n, and for each \(q=1,\ldots ,s\) the eigenvalues belonging to the qth branch are approximately given by the samples over a certain uniform grid in \([-\pi ,\pi ]\) of the qth eigenvalue function \(\lambda ^{(q)}(\phi )\).
 Clustering.

For any \(\epsilon >0\), take an \(\epsilon \)-neighborhood of the set \({\mathcal R}_\phi \), which is defined as the union of the essential ranges of the eigenvalue functions \(\lambda ^{(q)}(\phi )\). Then the spectrum of \(\{T_n(\phi )\}_n\) is clustered at \({\mathcal R}_\phi \) in the sense that the number of the eigenvalues of \( T_n(\phi )\) that do not belong to the \(\epsilon \)-neighborhood of \({\mathcal R}_\phi \) is o(n) as n tends to infinity. If \(\phi \) is a Hermitian-valued trigonometric polynomial, then the number of such outliers is O(1) and it depends at most linearly on s and on the degree of the polynomial. Such clustering results are consequences of the distribution result. For the case of trigonometric polynomials, on which the present work is focused, see also Appendix A.
 Localization.

Assume that \(\lambda ^{(q)}(\phi )\), \(q=1,\dots ,s\), are sorted in nondecreasing order, that is, \(\lambda ^{(1)}(\phi )\le \lambda ^{(2)}(\phi )\le \cdots \le \lambda ^{(s)}(\phi )\). Then, for all n, the eigenvalues of \(T_n(\phi )\) belong to the interval \([m_\phi ,M_\phi ]\), where \(m_\phi ={{\mathrm{ess\,inf}}}_{\theta \in [-\pi ,\pi ]} \lambda ^{(1)}(\phi )\) and \(M_\phi ={{\mathrm{ess\,sup}}}_{\theta \in [-\pi ,\pi ]}\lambda ^{(s)}(\phi )\). Moreover, if the function \(\lambda ^{(1)}(\phi )\) is not essentially constant, then the eigenvalues of \(T_n(\phi )\) belong to \((m_\phi ,M_\phi ]\), and, if the function \(\lambda ^{(s)}(\phi )\) is not essentially constant, then the eigenvalues of \(T_n(\phi )\) belong to \([m_\phi ,M_\phi )\). For such results refer to [19, 20].
Remark 1.1
Part 1. When the symbol \(\phi \) is continuous, then each eigenvalue function \(\lambda ^{(q)}(\phi )\), \(q=1,\dots , s\), is continuous and therefore the essential infimum becomes a minimum and the essential supremum becomes a maximum (because the interval \([-\pi ,\pi ]\) is a compact set), while the essential range is the standard range. Part 2. The interval \([-\pi ,\pi ]\) can be replaced by the interval \([0,\pi ]\) when \(\phi (-\theta )=\phi (\theta )^T\): this is precisely the case we consider, see (1.3).
In view of the above distribution, clustering, and localization results, up to O(1) possible outliers, the eigenvalues of the symmetric matrix \(T_n(\mathbf {f})\) can be partitioned in s subsets (or “branches”) of approximately the same cardinality n; and the eigenvalues belonging to the qth branch are approximately equal to the samples of the qth eigenvalue function \(\lambda ^{(q)}(\mathbf f)\) over a uniform grid in \([0,\pi ]\).
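The branch structure above can be made concrete with a small numpy sketch (ours, not code from the paper). It assembles \(T_n(\mathbf {f})\) from the Fourier coefficients of a banded Hermitian symbol, the (i, j) block being the \((i-j)\)-th coefficient, and, in the scalar case \(f(\theta )=2-2\cos \theta \), checks the eigenvalues against the classical exact samples \(2-2\cos (j\pi /(n+1))\):

```python
import numpy as np

def block_toeplitz(coeffs, n):
    """Assemble T_n(f) for a banded Hermitian symbol given by its Fourier
    coefficients: coeffs is a dict {k: s x s array}, and the (i, j) block
    of T_n(f) is coeffs[i - j] (zero when the coefficient is absent)."""
    s = np.atleast_2d(next(iter(coeffs.values()))).shape[0]
    T = np.zeros((n * s, n * s))
    for i in range(n):
        for j in range(n):
            k = i - j
            if k in coeffs:
                T[i*s:(i+1)*s, j*s:(j+1)*s] = np.atleast_2d(coeffs[k])
    return T

# Scalar sanity check (s = 1): f(theta) = 2 - 2cos(theta) generates the
# discrete Laplacian, whose eigenvalues are exactly 2 - 2cos(j*pi/(n+1)).
n = 8
T = block_toeplitz({0: np.array([[2.0]]),
                    1: np.array([[-1.0]]),
                    -1: np.array([[-1.0]])}, n)
exact = 2 - 2 * np.cos(np.arange(1, n + 1) * np.pi / (n + 1))
assert np.allclose(np.sort(np.linalg.eigvalsh(T)), np.sort(exact))
```

For \(s>1\) the same routine accepts \(s\times s\) coefficient blocks, and the computed spectrum splits (up to outliers) into the s branches described above.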

\(\gamma =\gamma (q,j)=(q-1)n+j\);

\(\lambda _k(T_n(\mathbf {f}))\), \(k\in \{1,\dots ,N(n,s)\}\), are the eigenvalues of \(T_n(\mathbf { f})\), which are sorted so that, for each fixed \(\bar{q}\in \{1,\dots , s\}\), the eigenvalues \(\lambda _{(\bar{q}-1)n+j}(T_n(\mathbf { f}))\), for \(j=1,\ldots ,n\), are arranged in nondecreasing or nonincreasing order, depending on whether \(\lambda ^{(\bar{q})}(\mathbf { f})\) is increasing or decreasing (this can be seen using the local or the global condition below);

\(\{c_k^{(q)}\}_{k=1,\ldots ,\alpha }\) is a sequence of functions from \([0,\pi ]\) to \(\mathbb R\) which depends only on \(\mathbf {f}\);

\(h=\frac{1}{n+1}\) and \(\theta _{j,n}=\frac{j\pi }{n+1}=j\pi h\), \(j=1,\ldots , n\);

\(E_{j,n,\alpha }^{(q)}=O(h^{\alpha +1})\) is the remainder (the error), which satisfies the inequality \(|E_{j,n,\alpha }^{(q)}|\le C_\alpha h^{\alpha +1}\) for some constant \(C_\alpha \) depending only on \(\alpha \) and \(\mathbf {f}\).
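For the reader's convenience, the expansion (1.5), whose ingredients are listed above, can be written out as follows (a reconstruction from the ingredients and from the scalar-valued analogue in [15]; the original statement is not reproduced in this excerpt):

```latex
\lambda_{\gamma}\bigl(T_n(\mathbf{f})\bigr)
  = \lambda^{(q)}\bigl(\mathbf{f}(\theta_{j,n})\bigr)
  + \sum_{k=1}^{\alpha} c_k^{(q)}(\theta_{j,n})\,h^k
  + E_{j,n,\alpha}^{(q)},
\qquad \gamma=(q-1)n+j .
```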
 Local condition.
 The eigenvalue \(\lambda _\gamma (T_n(\mathbf { f}))\) can be expanded as in (1.5) if there exists \(\bar{\epsilon }>0\) such that, for all \(\epsilon \in (0,\bar{\epsilon })\) and all \(y\in (\lambda _\gamma (T_n(\mathbf { f}))-\epsilon ,\lambda _\gamma (T_n(\mathbf { f}))+\epsilon )\), there exist a unique \(q\in \{1,\ldots ,s\}\) and a unique \(\bar{\theta }\in [0,\pi ]\) for which$$\begin{aligned} y=\lambda ^{(q)}(\mathbf { f}(\bar{\theta })). \end{aligned}$$(1.6)
 Global condition.
 A trivial global condition is obtained by imposing that the local condition is satisfied for every eigenvalue which is not an outlier (if the eigenvalue \(\lambda _\gamma (T_n(\mathbf { f}))\) is an outlier, then, by definition, it does not belong to the range of \(\mathbf { f}\) and consequently relation (1.6) cannot be satisfied). A simple general assumption, which is equivalent to the trivial global condition, is that each \(\lambda ^{(q)}(\mathbf { f})\), \(q=1,\dots , s\), is monotone (nonincreasing or nondecreasing) over the interval \([0,\pi ]\) and$$\begin{aligned} \max _{\theta \in [0,\pi ]} \lambda ^{(q)}(\mathbf { f})< \min _{\theta \in [0,\pi ]}\lambda ^{(q+1)}(\mathbf { f}) \end{aligned}$$for \(q=1,\dots ,s-1\). In other words, the global condition can be summarized as follows: strict monotonicity of every eigenvalue function, and empty intersection of the ranges of any two eigenvalue functions \(\lambda ^{(j)}(\mathbf {f})\) and \(\lambda ^{(k)}(\mathbf { f})\) for every pair of indices \(j,k\in \{1,\ldots ,s\}\) with \(j\ne k\). This version of the global condition is of course much simpler to verify. Moreover, in the case \(s=1\) it reduces to the monotonicity condition already used in the literature; see [2, 5, 6, 9, 15, 16] and references therein.
In [15], the authors employed the asymptotic expansion (1.5) with \(s=1\) for computing an accurate approximation of \(\lambda _j(T_n(\mathbf { f}))\) for very large n, if the values \(\lambda _{j_1}(T_{n_1}(\mathbf { f})), \ldots ,\lambda _{j_k}(T_{n_k}(\mathbf { f}))\) are available for moderately sized \(n_1,\ldots ,n_k\) such that \(\theta _{j_1,n_1}=\cdots =\theta _{j_k,n_k}=\theta _{j,n}\). We stress that the algorithm was developed in [15] and then improved in [1, 12, 14], while the mathematical foundations of the considered expansions and a few numerical tests were already present in [5].
The purpose of this paper is to carry out this idea and to support it by numerical experiments accompanied by an appropriate error analysis in the more general case where \(s>1\). In particular, we devise an algorithm to compute \(\lambda _j(T_n(\mathbf { f}))\) with a high level of accuracy and a relatively low computational cost. The algorithm is completely analogous to the extrapolation procedure [21, Section 3.4], which is employed in the context of Romberg integration to obtain high precision approximations of an integral from a few coarse trapezoidal approximations. In this regard, the asymptotic expansion (1.5) plays here the same role as the Euler-Maclaurin summation formula [21, Section 3.3].
The paper is organized as follows. In Sect. 2, assuming the asymptotic eigenvalue expansion (1.5), we present our extrapolation algorithm for computing the eigenvalues of the \(s\times s\) block matrix \(T_n(\mathbf { f})\) for \(s>1\). In Sect. 3 we provide numerical experiments in support of the asymptotic eigenvalue expansion (1.5) in different cases and we derive exact formulae for the eigenvalues in some practical examples and for matrices coming from order p Lagrangian Finite Element approximations of a second order elliptic differential problem, which are denoted as \(\mathbb {Q}_p\). In Sect. 4 we draw conclusions and we outline future lines of research. In “Appendix A” we formally prove (1.5) in the basic case \(\alpha =0\), and in “Appendix B” we report in detail the mass and stiffness \(\mathbb {Q}_p\) elements for \(p=2,3,4\).
2 Algorithm for computing the eigenvalues of \(T_n(\mathbf { f})\) for \(s>1\)

\(s>1\) and \(n,n_1,\alpha \in \mathbb N\) are fixed parameters.

\(n_k=2^{k-1}(n_1+1)-1\) for \(k=1,\ldots ,\alpha \).

\(j_k=2^{k-1}j_1\), where \(j_1\in \{1,\ldots ,n_1\}\) and \(k=1,\ldots ,\alpha \); the \(j_k\) are the indices such that \(\theta _{j_k,n_k}=\theta _{j_1,n_1}\).
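These choices make the evaluation points coincide across the levels, which is what allows the extrapolation at a fixed \(\theta \). A quick numerical check of this nesting property (a sketch, not code from the paper):

```python
import numpy as np

# With n_k = 2^(k-1)(n_1 + 1) - 1 and j_k = 2^(k-1) j_1, the grid point
# theta_{j_k, n_k} = j_k * pi / (n_k + 1) is the same on every level k.
n1, alpha = 10, 4
for j1 in range(1, n1 + 1):
    thetas = []
    for k in range(1, alpha + 1):
        nk = 2**(k - 1) * (n1 + 1) - 1
        jk = 2**(k - 1) * j1
        thetas.append(jk * np.pi / (nk + 1))
    # every level reproduces theta_{j1, n1} exactly
    assert np.allclose(thetas, thetas[0])
```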
Theorem 2.1
Proof
It is a straightforward adaptation of the proof given in [14, Theorem 1].
Take \(n\gg n_1\) and fix an index \(j\in \{1,\ldots ,n\}\). We henceforth assume that \(q\in \{1,2,\dots ,s\}\). To compute an approximation of \(\lambda _\gamma (T_n(\mathbf { f}))\), \(\gamma =(q-1)n+j\), through the expansion (1.5) we need the value \(c_k^{(q)}(\theta _{j,n})\) for each \(k=1,\ldots ,\alpha \). Of course, \(c_k^{(q)}(\theta _{j,n})\) is not available in practice, but we can approximate it by interpolating and extrapolating the values \(\tilde{c}_k^{(q)}(\theta _{j_1,n_1})\), \(j_1=1,\ldots ,n_1\). For example, we may define \(\tilde{c}_k^{(q)}(\theta )\) as the interpolation polynomial of the data \((\theta _{j_1,n_1},\tilde{c}_k^{(q)}(\theta _{j_1,n_1}))\), for \(j_1=1,\ldots ,n_1\)—so that \(\tilde{c}_k^{(q)}(\theta )\) is expected to be an approximation of \(c_k^{(q)}(\theta )\) over the whole interval \([0,\pi ]\)—and take \(\tilde{c}_k^{(q)}(\theta _{j,n})\) as an approximation of \(c_k^{(q)}(\theta _{j,n})\). It is known, however, that interpolating over a large number of uniform nodes is not advisable, as it may give rise to spurious oscillations (Runge’s phenomenon). It is therefore better to adopt another kind of approximation. An alternative could be the following: we approximate \(c_k^{(q)}(\theta )\) by the spline function \(\tilde{c}_k^{(q)}(\theta )\) which is linear on each interval \([\theta _{j_1,n_1},\theta _{j_1+1,n_1}]\) and takes the value \(\tilde{c}_k^{(q)}(\theta _{j_1,n_1})\) at \(\theta _{j_1,n_1}\) for all \(j_1=1,\ldots ,n_1\). This strategy certainly removes any spurious oscillation, yet it is not accurate enough.
In particular, it does not preserve the accuracy of approximation at the nodes \(\theta _{j_1,n_1}\) established in Theorem 2.1, i.e., there is no guarantee that \(|c_k^{(q)}(\theta )-\tilde{c}_k^{(q)}(\theta )|\le B_\alpha ^{(q)} h_1^{\alpha -k+1}\) for \(\theta \in [0,\pi ]\) or \(|c_k^{(q)}(\theta _{j,n})-\tilde{c}_k^{(q)}(\theta _{j,n})|\le B_\alpha ^{(q)} h_1^{\alpha -k+1}\) for \(j=1,\ldots ,n\), with \(B_\alpha ^{(q)}\) being a constant depending only on \(\alpha \) and q. As proved in Theorem 2.2, a local approximation strategy that preserves the accuracy (2.4), at least if \(c_k^{(q)}(\theta )\) is sufficiently smooth, is the following: let \(\theta ^{(1)},\ldots ,\theta ^{(\alpha -k+1)}\) be the \(\alpha -k+1\) points of the grid \(\{\theta _{1,n_1},\ldots ,\theta _{n_1,n_1}\}\) which are closest to the point \(\theta _{j,n}\),^{1} and let \(\tilde{c}_{k,j}^{(q)}(\theta )\) be the interpolation polynomial of the data \((\theta ^{(1)},\tilde{c}_k^{(q)}(\theta ^{(1)})),\ldots ,(\theta ^{(\alpha -k+1)},\tilde{c}_k^{(q)}(\theta ^{(\alpha -k+1)}))\); then, we approximate \(c_k^{(q)}(\theta _{j,n})\) by \(\tilde{c}_{k,j}^{(q)}(\theta _{j,n})\). Note that, by selecting \(\alpha -k+1\) points from \(\{\theta _{1,n_1},\ldots ,\theta _{n_1,n_1}\}\), we are implicitly assuming that \(n_1\ge \alpha -k+1\).
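The local strategy just described can be sketched as follows; the helper `local_poly_value`, the model function, and all parameter values are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def local_poly_value(nodes, values, theta, m):
    """Approximate c(theta) by the degree-(m-1) polynomial through the
    m grid nodes closest to theta (with m playing the role of
    alpha - k + 1 in the text)."""
    idx = np.argsort(np.abs(nodes - theta))[:m]        # m closest nodes
    coef = np.polynomial.polynomial.polyfit(nodes[idx], values[idx], m - 1)
    return np.polynomial.polynomial.polyval(theta, coef)

# Illustration on a smooth stand-in for an unknown expansion function c_k:
n1 = 20
nodes = np.arange(1, n1 + 1) * np.pi / (n1 + 1)
approx = local_poly_value(nodes, np.cos(nodes), 0.7, m=4)
assert abs(approx - np.cos(0.7)) < 1e-4   # O(h_1^4) local accuracy
```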
Theorem 2.2
Proof
It is a straightforward adaptation of the proof of [14, Theorem 2].
Remark 2.1
Algorithm 1 is specifically designed for computing \(\lambda _\gamma (T_n(\mathbf { f}))\) in the case where n is quite large. When applying this algorithm, it is implicitly assumed that \(n_1\) and \(\alpha \) are small (much smaller than n), so that each \(n_k=2^{k-1}(n_1+1)-1\) is small as well and the computation of the eigenvalues of the matrices \(T_{n_k}(\mathbf { f})\)—which is required in the first step—can be efficiently performed by any standard eigensolver (e.g., the Matlab eig function).
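The extrapolation step at the heart of Algorithm 1 (whose listing is not reproduced in this excerpt) amounts to solving, at each fixed grid point, a small Vandermonde-type linear system for the expansion functions. A minimal sketch, under the assumption that the symbol-sample value \(\lambda ^{(q)}(\mathbf {f}(\theta ))\) has already been subtracted from the computed eigenvalues:

```python
import numpy as np

def expansion_coeffs(errors, hs):
    """Given the errors E_k = lambda(T_{n_k}) - lambda^{(q)}(f(theta)) at
    the same theta on alpha nested levels, solve the Vandermonde-type
    system E_k = sum_m c_m * hs[k]**m for the expansion values c_m(theta)."""
    hs = np.asarray(hs, float)
    alpha = len(hs)
    V = np.vander(hs, alpha + 1, increasing=True)[:, 1:]  # columns h, h^2, ...
    return np.linalg.solve(V, np.asarray(errors, float))

# Synthetic check: fabricate errors from known coefficients, then recover them.
true_c = np.array([0.3, -1.2, 0.05])
hs = [1 / (n + 1) for n in (10, 21, 43)]   # nested levels n_k = 2^(k-1)*11 - 1
errors = [sum(c * h**(m + 1) for m, c in enumerate(true_c)) for h in hs]
assert np.allclose(expansion_coeffs(errors, hs), true_c)
```

Once the \(\tilde c_m\) are known on the coarse grid, the expansion (1.5) is evaluated at the fine grid point \(\theta _{j,n}\) with the local interpolation of Theorem 2.2.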
The last theorem of the current section provides an estimate for the approximation error made by Algorithm 1.
Theorem 2.3
Proof
3 Numerical experiments
In the current section we present a selection of numerical experiments to validate the algorithms based on the asymptotic expansion (1.5) in different cases where \(\mathbf {f}\) is matrixvalued, and we give exact formulae for the eigenvalues in some examples of practical interest.
3.1 Description
 Example 1.
We show that the expansion and the associated interpolation–extrapolation algorithm can be applied to the whole spectrum, since the symbol satisfies the global condition.
 Example 2.
We show that the expansion and the interpolation–extrapolation algorithm can be locally applied for computing the approximation of the eigenvalues verifying the local condition. In this particular case, the global condition does not hold because the intersection of the ranges of two eigenvalue functions is a nontrivial interval and, in addition, there exists an index \(q \in \{1,\dots , s\}\) such that \(\lambda ^{(q)}(\mathbf {f})\) is nonmonotone.
 Example 3.
We show that the expansion and interpolation–extrapolation algorithm can be locally applied for the computation of the eigenvalues satisfying the local condition. For the specific example, the global condition does not hold since there exists an index \(q \in \{1,\dots , s\}\) such that \(\lambda ^{(q)}(\mathbf {f})\) is nonmonotone either globally on \([0,\pi ]\) or just on a subinterval contained in \([0,\pi ]\).
 Example 4.
We show how to bypass the local condition in a few special cases: in fact, using different sampling grids, we can recover exact formulae for parts of the spectrum where the assumption of monotonicity is violated.
 Example 5.
We give a closed formula for the eigenvalues of matrices arising from the rectangular Lagrangian Finite Element method with polynomials of degree \(p>1\), usually denoted as \(\mathbb {Q}_p\) elements. The number of eigenvalue functions verifying the global condition depends on the order of the \(\mathbb {Q}_p\) elements. In this specific setting we have \(s=p\).
3.2 Experiments

sample \(\mathbf {f}\) at \(\theta _{j_k,n_k}\), \(j_k=1,\dots , n_k\), obtaining \(n_k\) \(s\times s\) matrices, \(M_{j_k};\)

for each \(j_k=1,\dots , n_k\), compute the s eigenvalues of \(M_{j_k}\), \(\lambda _q(M_{j_k})\), \(q=1,\dots ,s\);

for fixed \(q=1,\dots ,s\), the evaluation of \(\lambda ^{(q)}(\mathbf {f})\) at \(\theta _{j_k,n_k}\), \(j_k=1,\dots , n_k\), is given by \(\lambda _q(M_{j_k})\), \(j_k=1,\dots ,n_k\).
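The three steps above can be sketched as follows; the 2×2 symbol is a hypothetical illustration, not one of the paper's examples:

```python
import numpy as np

def eigenvalue_functions(f, n):
    """Sample the symbol f at theta_{j,n} = j*pi/(n+1) and diagonalize
    each s x s sample M_j; row j of the result holds the s ordered
    eigenvalues lambda_q(M_j), i.e. the eigenvalue functions on the grid."""
    thetas = np.arange(1, n + 1) * np.pi / (n + 1)
    return thetas, np.array([np.linalg.eigvalsh(f(t)) for t in thetas])

# Hypothetical 2 x 2 Hermitian trigonometric polynomial for illustration:
f = lambda t: np.array([[2 - 2*np.cos(t), 1.0],
                        [1.0, 4 - 2*np.cos(t)]])
thetas, lam = eigenvalue_functions(f, 5)
assert lam.shape == (5, 2) and np.all(lam[:, 0] <= lam[:, 1])
```

For this particular symbol the two eigenvalue functions are \(3-2\cos \theta \mp \sqrt{2}\), so their ranges never intersect, which is exactly the global-condition situation of Example 1.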
Example 1
Example 2

\(\lambda ^{(1)}(\mathbf {f})\) is monotone nondecreasing and its range does not intersect that of \(\lambda ^{(q)}(\mathbf {f})\), \(q=2,3\). Hence, using the asymptotic expansion in (1.5), we expect that it is possible to give an approximation of the first n eigenvalues \(\lambda _\gamma (T_n(\mathbf {f}))\), for \(\gamma =1,\dots , n\);
 \(\lambda ^{(3)}(\mathbf {f})\) is monotone nonincreasing and there exist \(\hat{\theta }_1\), \(\hat{\theta }_2 \in [0,\pi ]\) such that, \(\forall \theta \in [0,\hat{\theta }_1)\cup (\hat{\theta }_2,\pi ]\),$$\begin{aligned} (\lambda ^{(3)}(\mathbf {f}))(\theta )\not \in \mathrm{Range}(\lambda ^{(2)}(\mathbf {f})). \end{aligned}$$Hence, of the remaining 2n eigenvalues, we expect that it is possible to give a fast approximation just of those eigenvalues \(\lambda _\gamma (T_n(\mathbf {f}))\) verifying the local condition, that is, those satisfying the relation$$\begin{aligned} \lambda _\gamma (T_n(\mathbf {f}))\in {\Bigl [}(\lambda ^{(3)}(\mathbf {f}))(\pi ),(\lambda ^{(3)}(\mathbf {f}))(\hat{\theta }_2){\Bigr )}\ \bigcup \ {\Bigl (}(\lambda ^{(3)}(\mathbf {f}))(\hat{\theta }_1),(\lambda ^{(3)}(\mathbf {f}))(0){\Bigr ]}. \end{aligned}$$(3.1)
For \(q=2\) no extrapolation procedure can be applied with \(\tilde{c}^{(2)}_k(\theta _{j_1,n_1})\), \(k=1,\ldots ,4\), as we can see from the oscillating and irregular graph in the bottom left panel of Fig. 4. Concerning Fig. 5, the chaotic behavior of \(\tilde{c}^{(2)}_k(\theta _{j_1,n_1})\), \(k=1,\ldots ,4\), corresponds to the rather large and oscillating errors \(E_{j,n,0}^{(2)}\) and \(\tilde{E}_{j,n,\alpha }^{(2)}\). On the other hand, for \(q=3\) we can use the extrapolation procedure and the underlying asymptotic expansion with \(\tilde{c}^{(3)}_k(\theta _{j_1,n_1})\), \(k=1,\ldots ,4\), for \(\theta _{j_1,n_1}\in [0,\hat{\theta }_1)\cup (\hat{\theta }_2,\pi ]\), \(j_1=1,\dots ,n_1\).
As a consequence we compute the approximation of the first n eigenvalues \(\lambda _\gamma (T_n(\mathbf { f}))\), for \(\gamma =1,\dots ,n\), and that of \(\hat{n}_1+\hat{n}_2\) further eigenvalues, which verify (3.1). For simplicity, in the right panel of Fig. 5, we visualize them by using the nondecreasing order instead of the computational one.
Example 3
Although the intersection of the ranges of \(\lambda ^{(j)}(\mathbf { f})\) and \(\lambda ^{(k)}(\mathbf { f})\) is empty for every pair (j, k), \(j\ne k\), \(j,k\in \{1,2,3\}\), the assumption of monotonicity is violated either globally on \([0,\pi ]\) or on a subinterval in \([0,\pi ]\).

\(\lambda ^{(1)}({\mathbf {f}})\) is fully nonmonotone on \([0,\pi ]\), hence we expect that no fast approximation can be given for the first n eigenvalues \(\lambda _\gamma (T_n(\mathbf {f}))\), \(\gamma =1,\dots , n\);

\(\lambda ^{(3)}(\mathbf {f})\) is monotone nondecreasing and its range does not intersect that of \(\lambda ^{(q)}(\mathbf {f})\), \(q=1,2\). Hence we can provide an approximation of the last n eigenvalues \(\lambda _\gamma (T_n(\mathbf {f}))\), \(\gamma =2n+1,\dots , 3n\) (analogously to what we did for the first n eigenvalues in Example 2);
 \(\lambda ^{(2)}({\mathbf {f}})\) is nonmonotone on a subinterval \([0,\hat{\theta }_1]\) of \([0,\pi ]\) and monotone nondecreasing on the remaining subinterval \((\hat{\theta }_1,\pi ]\). Hence we are also able to efficiently compute the eigenvalues that verify the relation$$\begin{aligned} \lambda _\gamma (T_n(\mathbf {f}))\in {\Bigl (}(\lambda ^{(2)}(\mathbf {f}))(\hat{\theta }_1),(\lambda ^{(2)}(\mathbf {f}))(\pi ){\Bigr ]}. \end{aligned}$$(3.2)
In the top right panel of Fig. 6 we display the resulting chaotic graph of \(\tilde{c}^{(1)}_k(\theta _{j_1,n_1})\), \(k=1,\ldots ,4\). The graph confirms that, for \(q=1\), the interpolation–extrapolation algorithm cannot be used and, consequently, the first n eigenvalues \(\lambda _\gamma (T_n(\mathbf {f}))\), \(\gamma =1,\dots ,n\), cannot be efficiently computed using (1.5): the latter is confirmed by the errors \(\tilde{E}_{j,n,\alpha }^{(1)}\) and \(E_{j,n,0}^{(1)}\) in Fig. 7.
The chaotic behaviour is also present in the values \(\tilde{c}^{(2)}_k(\theta _{j_1,n_1})\), \(k=1,\ldots ,4\) (see the bottom left panel of Fig. 6), in the subinterval \([0,\hat{\theta }_1]\) of \([0,\pi ]\), which coincides with the subinterval where \(\lambda ^{(2)}({\mathbf {f}})\) is nonmonotone.
Hence, outside \([0,\hat{\theta }_1]\) the extrapolation procedure can be used again on \(\tilde{c}^{(2)}_k(\theta _{j_1,n_1})\), \(k=1,\ldots ,4\), for \(\theta _{j_1,n_1}\in (\hat{\theta }_1,\pi ]\), \(j_1=1,\dots ,n_1\). Consequently we obtain a good approximation of \(\lambda _\gamma (T_n(\mathbf {f}))\) for \(q=2\) and \(j=\hat{j},\dots ,n\), where \(\hat{j}\) is the first index in \(\{1,\dots ,n\}\) such that \({\hat{j}\pi }/({n+1})\in (\hat{\theta }_1,\pi ]\); that is, we can compute the eigenvalues belonging to the interval reported in (3.2). This is reflected, in Fig. 7, in the gradual reduction of the errors \(\tilde{E}_{j,n,\alpha }^{(2)}\) and \(E_{j,n,0}^{(2)}\) for indices larger than \(\hat{n}_1=n+\hat{j}\).
Finally, the remaining n eigenvalues can be well reconstructed with a standard matrix-less procedure, using the values of \(\tilde{c}^{(3)}_k(\theta _{j_1,n_1})\), \(k=1,\ldots ,4\), shown in the bottom right panel of Fig. 6. The errors related to the latter approximation, \(\tilde{E}_{j,n,\alpha }^{(3)}\), are shown in Fig. 7.
In total, \(3n\hat{j}+1\) eigenvalues of \(T_n(\mathbf {f})\) can be computed and plotted (in nondecreasing order) in Fig. 7.
Example 4
Differently from the previous examples, here the analytical expressions of the eigenvalue functions of \(\mathbf {f}(\theta )\) are known, since they coincide, by construction, with \(p^{(q)}(\theta )\), for \(q=1,2,3\). So we will describe the spectrum of \(T_n( {\mathbf {f}})\), approximating or calculating exactly the 3n eigenvalues, treating the 3 different scalar problems separately.
The first n eigenvalues can be calculated exactly by sampling \(p^{(1)}\) on the grid \(\theta _{j,n}={j\pi }/({n+1})\), \(j=1, \dots , n\). Analogously, n eigenvalues can be found exactly by sampling \(p^{(2)}\) on a special grid defined in [16]. For the last n eigenvalues, the grid that gives exact eigenvalues is not known, but \(p^{(3)}\) is monotone nondecreasing and consequently we can use the asymptotic expansion in the scalar case.
We set the parameters as in the previous cases: \(n_1=100\) and \(n=10000\).
In the top right panel of Fig. 8 we report the expansion errors \(E^{(q)}_{j_1,n_1,0}\), calculated using the grid \(\theta _{j_1,n_1}={j_1\pi }/({n_1+1})\), \(j_1=1,\dots ,n_1\), \(q=1,2,3\). It is no surprise that in the first region of the graph (green area) the error is zero, since the first \(n_1\) eigenvalues are given exactly by sampling \(p^{(1)}\) on the standard \(\theta _{j_1,n_1}\) grid.
The area of the graph containing the errors related to \(p^{(2)}(\theta )\) is obviously chaotic, since \(p^{(2)}(\theta )\) is nonmonotone.
Hence, the first n eigenvalues of \(T_n( {\mathbf {f}})\) can be calculated exactly by sampling \(p^{(1)}\) on the grid \(\theta _{j,n}={j\pi }/({n+1})\), \(j=1, \dots , n\), and n exact eigenvalues can be found by sampling \(p^{(2)}\) on the grid (3.4). For the computation of the last n eigenvalues, we use the matrix-less procedure in the scalar setting, passing through the approximation of \(c^{(3)}_k(\theta _{j_1,n_1})\), \(k=1,\ldots ,\alpha \), for \(\alpha =4\); see the bottom right panel of Fig. 8.

The present pathology is not a counterexample to the asymptotic expansion (1.5), since in (1.5) we take \(\theta \) fixed and consider all the pairs j, n such that \(\theta _{j,n}=\theta \); in the current case, as in the scalar-valued setting considered in [2], j is fixed and n grows, so that the point \(\theta \) is not well defined.

There are simple ways to overcome the problem and thus to compute reliable evaluations of \(c_4^{(3)} \) at the bad points \(\theta _{1,n} \) and \(\theta _{2,n}\). One of them is described in [12] and consists in choosing a sufficiently large \(\alpha >4\) and in computing \(c_k^{(3)} \) for \(k=1,2,3,4\). Using this trick, the values of \(c_4^{(3)} \) at the initial points \(\theta _{1,n} \) and \(\theta _{2,n}\) have the expected behavior. In addition, we stress that this behavior has little impact on the numerically computed solution: assuming double precision computations, the contribution to the error deriving from \(c_4^{(3)}(\theta _{j,n})h^4\) is numerically negligible, even for moderate n. Further discussions on the topic are presented in [12].
Example 5
Seven examples of uniform grids
Name  Grid  j  h  Description 

\(\tau _n\)  \(j\pi /(n+1)\)  \(1,\ldots ,n\)  \(1/(n+1)\)  \(\tau _n(0,0)\) 
\(\tau _{n-1}\)  \(j\pi /n\)  \(1,\ldots ,n-1\)  \(1/n\)  \(\tau _{n-1}(0,0)\) 
\(\tau _{n-2}\)  \(j\pi /(n-1)\)  \(1,\ldots ,n-2\)  \(1/(n-1)\)  \(\tau _{n-2}(0,0)\) 
\(\tau _{n-1}^{0}\)  \((j-1)\pi /n\)  \(1,\ldots ,n\)  \(1/n\)  \(\tau _n(-1,-1)=\{0\}\cup \tau _{n-1}(0,0)\) 
\(\tau _{n-1}^{\pi }\)  \(j\pi /n\)  \(1,\ldots ,n\)  \(1/n\)  \(\tau _n(1,1)=\tau _{n-1}(0,0)\cup \{\pi \}\) 
\(\tau _{n-2}^{0,\pi }\)  \((j-1)\pi /(n-1)\)  \(1,\ldots ,n\)  \(1/(n-1)\)  \(\{0\}\cup \tau _{n-2}(0,0)\cup \{\pi \}\) 
\(\tau _{n-1}^{0,\pi }\)  \((j-1)\pi /n\)  \(1,\ldots ,n+1\)  \(1/n\)  \(\{0\}\cup \tau _{n-1}(0,0)\cup \{\pi \}\) 
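For reference, the seven grids of Table 1 can be generated as follows (a sketch; the dictionary keys are informal transliterations of the grid names in the table):

```python
import numpy as np

# Generators for the seven uniform grids of Table 1; each takes the
# matrix size n and returns the grid points theta_j in [0, pi].
grids = {
    "tau_n":        lambda n: np.arange(1, n + 1) * np.pi / (n + 1),
    "tau_n-1":      lambda n: np.arange(1, n) * np.pi / n,
    "tau_n-2":      lambda n: np.arange(1, n - 1) * np.pi / (n - 1),
    "tau_n-1^0":    lambda n: np.arange(0, n) * np.pi / n,          # adds 0
    "tau_n-1^pi":   lambda n: np.arange(1, n + 1) * np.pi / n,      # adds pi
    "tau_n-2^0,pi": lambda n: np.arange(0, n) * np.pi / (n - 1),    # adds both
    "tau_n-1^0,pi": lambda n: np.arange(0, n + 1) * np.pi / n,      # adds both
}

n = 6
assert len(grids["tau_n"](n)) == n
assert grids["tau_n-1^0"](n)[0] == 0.0
assert np.isclose(grids["tau_n-1^pi"](n)[-1], np.pi)
assert len(grids["tau_n-1^0,pi"](n)) == n + 1
```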
In Fig. 9 we present the appropriate grids, defined in Table 1, for the exact eigenvalues of \(K_n^{(p)}\) and \(M_n^{(p)}\) with \(n=6\) and \(p=5\).
4 Conclusions and future work
In this paper we considered the case of \(\mathbf {f}\) being an \(s\times s\) matrix-valued trigonometric polynomial, \(s\ge 1\), and \(\{T_n(\mathbf {f})\}_n\) a sequence of block Toeplitz matrices generated by \(\mathbf {f}\), with \(T_n(\mathbf {f})\) of size \(N(n,s)=sn\). We numerically observed conditions ensuring the existence of an asymptotic expansion, generalizing the assumptions known for the scalar-valued setting. Furthermore, following a proposal in the scalar-valued case by the first author, by Garoni, and by the third author, we devised an extrapolation algorithm for computing the eigenvalues in the present setting of banded symmetric block Toeplitz matrices, with a high level of accuracy and with a low computational cost. The resulting algorithm is an eigensolver that does not need to store the original matrix and does not need to perform matrix-vector products: for this reason we call it matrix-less.
We have used the asymptotic expansion for the efficient computation of the spectrum of special block Toeplitz structures and we have shown exact formulae for the eigenvalues of the matrices coming from the \(\mathbb {Q}_p\) Lagrangian Finite Element approximation of a second order elliptic differential problem.
Several open issues remain, including a formal proof of the asymptotic expansion clearly indicated by the numerical experiments, at least under the global assumption of monotonicity and pairwise separation of the eigenvalue functions.
Footnotes
 1.
These \(\alpha -k+1\) points are uniquely determined by \(\theta _{j,n}\) except in the following two cases: (a) \(\theta _{j,n}\) coincides with a grid point \(\theta _{j_1,n_1}\) and \(\alpha -k+1\) is even; (b) \(\theta _{j,n}\) coincides with the midpoint between two consecutive grid points \(\theta _{j_1,n_1},\theta _{j_1+1,n_1}\) and \(\alpha -k+1\) is odd.
Notes
Acknowledgements
The research of Sven-Erik Ekström is co-financed by the Graduate School in Mathematics and Computing (FMB) and Uppsala University. Isabella Furci and Stefano Serra-Capizzano belong to the INdAM Research group GNCS, and the work of Isabella Furci is (partially) financed by the GNCS 2018 Project “Tecniche innovative per problemi di algebra lineare”.
References
1. Ahmad, F., Al-Aidarous, E.S., Abdullah Alrehaili, D., Ekström, S.E., Furci, I., Serra-Capizzano, S.: Are the eigenvalues of preconditioned banded symmetric Toeplitz matrices known in almost closed form? Numer. Algorithms 78(3), 867–893 (2018)
2. Barrera, M., Böttcher, A., Grudsky, S.M., Maximenko, E.A.: Eigenvalues of even very nice Toeplitz matrices can be unexpectedly erratic. Oper. Theory Adv. Appl. 268, 51–77 (2018)
3. Bhatia, R.: Matrix Analysis. Springer, New York (1997)
4. Bini, D., Capovani, M.: Spectral and computational properties of band symmetric Toeplitz matrices. Linear Algebra Appl. 52–53, 99–126 (1983)
5. Bogoya, J.M., Böttcher, A., Grudsky, S.M., Maximenko, E.A.: Eigenvalues of Hermitian Toeplitz matrices with smooth simple-loop symbols. J. Math. Anal. Appl. 422, 1308–1334 (2015)
6. Bogoya, J.M., Grudsky, S.M., Maximenko, E.A.: Eigenvalues of Hermitian Toeplitz matrices generated by simple-loop symbols with relaxed smoothness. Oper. Theory Adv. Appl. 259, 179–212 (2017)
7. Bozzo, E., Di Fiore, C.: On the use of certain matrix algebras associated with discrete trigonometric transforms in matrix displacement decomposition. SIAM J. Matrix Anal. Appl. 16, 312–326 (1995)
8. Brezinski, C., Redivo Zaglia, M.: Extrapolation Methods: Theory and Practice. Elsevier Science Publishers B.V., Amsterdam (1991)
9. Böttcher, A., Grudsky, S.M., Maximenko, E.A.: Inside the eigenvalues of certain Hermitian Toeplitz band matrices. J. Comput. Appl. Math. 233, 2245–2264 (2010)
10. Di Benedetto, F., Fiorentino, G., Serra, S.: C.G. preconditioning for Toeplitz matrices. Comput. Math. Appl. 25, 33–45 (1993)
11. Donatelli, M., Neytcheva, M., Serra-Capizzano, S.: Canonical eigenvalue distribution of multilevel block Toeplitz sequences with non-Hermitian symbols. Oper. Theory Adv. Appl. 221, 269–291 (2012)
12. Ekström, S.E.: Matrix-Less Methods for Computing Eigenvalues of Large Structured Matrices. Ph.D. Thesis, Uppsala University (2018)
13. Ekström, S.E., Furci, I., Garoni, C., Manni, C., Serra-Capizzano, S., Speleers, H.: Are the eigenvalues of the B-spline IgA approximation of \(-\varDelta {u}= \lambda u\) known in almost closed form? Numer. Linear Algebra Appl. https://doi.org/10.1002/nla.2198. Early version by Ekström, S.E., Furci, I., Serra-Capizzano, S. with the same title: Technical report 2017-016, Department of Information Technology, Uppsala University (2017)
14. Ekström, S.E., Garoni, C.: A matrix-less and parallel interpolation–extrapolation algorithm for computing the eigenvalues of preconditioned banded symmetric Toeplitz matrices. Numer. Algorithms. https://doi.org/10.1007/s11075-018-0508-0
15. Ekström, S.E., Garoni, C., Serra-Capizzano, S.: Are the eigenvalues of banded symmetric Toeplitz matrices known in almost closed form? Exp. Math. (in press). https://doi.org/10.1080/10586458.2017.1320241
16. Ekström, S.E., Serra-Capizzano, S.: Eigenvalues and eigenvectors of banded Toeplitz matrices and the related symbols. Numer. Linear Algebra Appl. (in press). https://doi.org/10.1002/nla.2137
17. Garoni, C., Serra-Capizzano, S.: Generalized Locally Toeplitz Sequences: Theory and Applications, Vol. I. Springer Monographs in Mathematics. Springer, Cham (2017)
18. Garoni, C., Serra-Capizzano, S., Sesana, D.: Spectral analysis and spectral symbol of \(d\)-variate \(\mathbb{Q}_{\varvec {p}}\) Lagrangian FEM stiffness matrices. SIAM J. Matrix Anal. Appl. 36, 1100–1128 (2015)
19. Serra-Capizzano, S.: Asymptotic results on the spectra of block Toeplitz preconditioned matrices. SIAM J. Matrix Anal. Appl. 20(1), 31–44 (1999)
20. Serra-Capizzano, S.: Spectral and computational analysis of block Toeplitz matrices having nonnegative definite matrix-valued generating functions. BIT 39(1), 152–175 (1999)
21. Stoer, J., Bulirsch, R.: Introduction to Numerical Analysis, 3rd edn. Springer, New York (2002)
22. Tilli, P.: A note on the spectral distribution of Toeplitz matrices. Linear Multilinear Algebra 45, 147–159 (1998)
23. Tilli, P.: Some results on complex Toeplitz eigenvalues. J. Math. Anal. Appl. 239, 390–401 (1999)
Copyright information
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.