In the current section we present a selection of numerical experiments to validate the algorithms based on the asymptotic expansion (1.5) in different cases where \(\mathbf {f}\) is matrix-valued, and we give exact formulae for the eigenvalues in some examples of practical interest.
Description
We test the asymptotic expansion and the interpolation–extrapolation algorithm in Sect. 2 in order to obtain an approximation of the eigenvalues \(\lambda _\gamma ({T_n(\mathbf {f})})\), for \(\gamma =1,\dots ,sn\), for large n.
-
Example 1.
We show that the expansion and the associated interpolation–extrapolation algorithm can be applied to the whole spectrum, since the symbol satisfies the global condition.
-
Example 2.
We show that the expansion and the interpolation–extrapolation algorithm can be locally applied for computing the approximation of the eigenvalues verifying the local condition. In this particular case, the global condition does not hold because the intersection of the ranges of two eigenvalue functions is a nontrivial interval and, in addition, there exists an index \(q \in \{1,\dots , s\}\) such that \(\lambda ^{(q)}(\mathbf {f})\) is non-monotone.
-
Example 3.
We show that the expansion and interpolation–extrapolation algorithm can be locally applied for the computation of the eigenvalues satisfying the local condition. For the specific example, the global condition does not hold since there exists an index \(q \in \{1,\dots , s\}\) such that \(\lambda ^{(q)}(\mathbf {f})\) is non-monotone either globally on \([0,\pi ]\) or just on a subinterval contained in \([0,\pi ]\).
-
Example 4.
We show how to bypass the local condition in a few special cases: in fact, using different sampling grids, we can recover exact formulas for parts of the spectrum, where the assumption of monotonicity is violated.
-
Example 5.
We give a closed formula for the eigenvalues of matrices arising from the rectangular Lagrange Finite Element method with polynomials of degree \(p>1\), usually denoted as \(\mathbb {Q}_p\) elements. The number of the eigenvalue functions which verify the global condition depends on the order of the \(\mathbb {Q}_p\) elements. In this specific setting we have \(s=p\).
Experiments
In Examples 1–3 we do not compute the eigenvalue functions of \(\mathbf {f}\) analytically, but, for \(q=1,\dots ,s\), we are able to provide an 'exact' evaluation of \(\lambda ^{(q)}(\mathbf {f})\) at \(\theta _{j_k,n_k}\), \(j_k=1,\dots , n_k\), by exploiting the following procedure:
-
sample \(\mathbf {f}\) at \(\theta _{j_k,n_k}\), \(j_k=1,\dots , n_k\), obtaining \(n_k\) \(s\times s\) matrices, \(M_{j_k};\)
-
for each \(j_k=1,\dots , n_k\), compute the s eigenvalues of \(M_{j_k}\), \(\lambda _q(M_{j_k})\), \(q=1,\dots ,s\);
-
for a fixed \(q=1,\dots ,s\), the evaluation of \(\lambda ^{(q)}(\mathbf {f})\) at \(\theta _{j_k,n_k}\), \(j_k=1,\dots , n_k\), is given by \(\lambda _q(M_{j_k})\), \(j_k=1,\dots ,n_k\).
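The three steps above can be sketched in a few lines of NumPy. The \(2\times 2\) symbol below is a hypothetical example used only for illustration (it is not one of the symbols of Examples 1–3), and the grid \(\theta _{j,n_k}=j\pi /(n_k+1)\) is assumed.

```python
import numpy as np

def eig_func_samples(f, n_k):
    """'Exact' samples of the eigenvalue functions lambda^(q)(f) on the
    grid theta_{j,n_k} = j*pi/(n_k + 1), j = 1, ..., n_k."""
    theta = np.arange(1, n_k + 1) * np.pi / (n_k + 1)
    # step 1: sample f at the grid points, obtaining n_k s x s matrices M_{j_k}
    M = np.array([f(t) for t in theta])
    # steps 2-3: the s sorted eigenvalues of each M_{j_k} are the evaluations
    # of lambda^(1)(f), ..., lambda^(s)(f) at theta_{j_k,n_k}
    return np.linalg.eigvalsh(M)  # shape (n_k, s), each row sorted ascending

# hypothetical symmetric 2 x 2 trigonometric polynomial symbol
f = lambda t: np.array([[5 - 2*np.cos(t), 1.0],
                        [1.0, -4 + np.cos(t)]])
samples = eig_func_samples(f, 100)  # column q-1 holds the samples of lambda^(q)(f)
```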
This procedure is justified by the fact that here \(\mathbf {f}\) is a trigonometric polynomial and, denoting by \(C_{n_k}(\mathbf {f})\) the circulant matrix generated by \(\mathbf {f}\), the eigenvalues of \(C_{n_k}(\mathbf {f})\) are given by the evaluations of \(\lambda ^{(q)}(\mathbf {f})\) at the grid points \(\theta _{r,n_k}=2\pi \frac{r}{n_k}\), \(r=0,\dots ,n_k-1\), since
$$\begin{aligned} C_{ n_k}(\mathbf {f})=(F_{n_k}\otimes I_s) D_{ n_k}(\mathbf {f}) (F_{ n_k}\otimes I_s)^*, \end{aligned}$$
where
$$\begin{aligned} D_{n_k}(\mathbf {f}) =\mathrm{diag}_{ 0\le r\le n_k- 1}\left( \mathbf {f}\left( \theta _{ r,n_k}\right) \right) , \quad \theta _{ r,n_k}=2\pi \frac{ {r}}{{ n_k}}, \quad F_{n_k}=\frac{1}{\sqrt{ n_k}} \left( \mathrm{e}^{-i2\pi \frac{j r}{n_k}}\right) _{ j, r=0}^{n_k-1}, \end{aligned}$$
and \(I_s\) is the \(s \times s\) identity matrix [18]. Furthermore, by exploiting the localization results [19, 20] stated in the introduction, we know that each eigenvalue of \(T_n(\mathbf {f})\), for each n, belongs to the interval
$$\begin{aligned} \left( \min _{\theta \in [0,\pi ]} \lambda ^{(1)}(\mathbf {f}), \max _{\theta \in [0,\pi ]}\lambda ^{(s)}(\mathbf {f})\right) . \end{aligned}$$
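The circulant identity above is easy to verify numerically. The sketch below builds the block circulant \(C_{n}(\mathbf {f})\) directly from the Fourier coefficients of an illustrative \(2\times 2\) symbol (again not one from the examples) and compares its spectrum with the symbol samples at \(\theta _{r,n}=2\pi r/n\).

```python
import numpy as np

# illustrative 2 x 2 symbol f(theta) = f0 + f1 e^{i theta} + f1^T e^{-i theta}
f0 = np.array([[4.0, 1.0], [1.0, -2.0]])
f1 = np.array([[-1.0, 0.5], [0.0, 1.0]])
f = lambda t: f0 + f1*np.exp(1j*t) + f1.T*np.exp(-1j*t)

n, s = 16, 2
# block circulant: block (j, l) equals fhat_{(j-l) mod n}
coeffs = {0: f0, 1: f1, n - 1: f1.T}
C = np.zeros((s*n, s*n), dtype=complex)
for j in range(n):
    for l in range(n):
        blk = coeffs.get((j - l) % n)
        if blk is not None:
            C[s*j:s*j+s, s*l:s*l+s] = blk

# eigenvalues of C_n(f): evaluations of the eigenvalue functions at 2*pi*r/n
grid_eigs = np.sort(np.concatenate(
    [np.linalg.eigvalsh(f(2*np.pi*r/n)) for r in range(n)]))
circ_eigs = np.sort(np.linalg.eigvalsh(C))
```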
Example 1
In this example we have block size \(s=3\), and each eigenvalue function \(\lambda ^{(q)}(\mathbf {f}), q=1,2,3\), is strictly monotone over \([0,\pi ]\). The eigenvalue functions satisfy
$$\begin{aligned} \max _{\theta \in [0,\pi ]} \lambda ^{(1)}(\mathbf {f})&< \min _{\theta \in [0,\pi ]}\lambda ^{(2)}(\mathbf {f}),\\ \max _{\theta \in [0,\pi ]} \lambda ^{(2)}(\mathbf {f})&< \min _{\theta \in [0,\pi ]}\lambda ^{(3)}(\mathbf {f}). \end{aligned}$$
In the top left panel of Fig. 2 the graphs of the three eigenvalue functions are shown.
The Toeplitz matrix generated by \(\mathbf {f} \) is a pentadiagonal block matrix, \(T_n(\mathbf {f})\in \mathbb {R}^{N\times N}\), where \(N=3n\), and all the blocks belong to \(\mathbb {R}^{3\times 3}\), that is
$$\begin{aligned} T_n(\mathbf {f})&=\left[ \begin{array}{cccccc} \hat{\mathbf {f}}_0&{}\quad \hat{\mathbf {f}}_1&{}\quad \hat{\mathbf {f}}_2&{}\quad &{}\quad &{}\quad \phantom {\ddots }\\ \hat{\mathbf {f}}_1&{}\quad \ddots &{}\quad \ddots &{}\quad \ddots \\ \hat{\mathbf {f}}_2&{}\quad \ddots &{}\quad \ddots &{}\quad \ddots &{}\quad \ddots \\ &{}\quad \ddots &{}\quad \ddots &{}\quad \ddots &{}\quad \ddots &{}\quad \hat{\mathbf {f}}_2\\ &{}\quad &{}\quad \ddots &{}\quad \ddots &{}\quad \ddots &{}\quad \hat{\mathbf {f}}_1\\ \phantom {\ddots }&{}\quad &{}\quad &{}\quad \hat{\mathbf {f}}_2&{}\quad \hat{\mathbf {f}}_1&{}\quad \hat{\mathbf {f}}_0 \end{array} \right] ,\quad \hat{\mathbf {f}}_0=\left[ \begin{array}{rrr} 50&{}\quad 2&{}\quad 0\\ 2&{}\quad -\,55&{}\quad 2\\ 0&{}\quad 2&{}\quad 10 \end{array}\right] , \\ \hat{\mathbf {f}}_1&=\left[ \begin{array}{rrr} 11&{}\quad -\,1&{}\quad 0\\ -\,1&{}\quad -\,6&{}\quad -\,1\\ 0&{}\quad -\,1&{}\quad 9 \end{array}\right] , \quad \hat{\mathbf {f}}_2=\begin{bmatrix} 1&\quad 0&\quad 2\\ 0&\quad 1&\quad 0\\ 2&\quad 0&\quad 1 \end{bmatrix}. \end{aligned}$$
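The matrix \(T_n(\mathbf {f})\) can be assembled from the Fourier coefficients above in a few lines; the helper `block_toeplitz` below is our own naming, not from the paper. As a sanity check, the spectrum falls inside the localization interval recalled before this example (the interval is estimated here by sampling the extreme eigenvalue functions on a fine grid).

```python
import numpy as np

def block_toeplitz(coeffs, n):
    """T_n(f) = [fhat_{i-j}]_{i,j=1}^n; coeffs maps the offset k to the block fhat_k."""
    s = next(iter(coeffs.values())).shape[0]
    T = np.zeros((s*n, s*n))
    for k, fk in coeffs.items():
        for i in range(max(0, k), min(n, n + k)):
            T[s*i:s*(i+1), s*(i-k):s*(i-k+1)] = fk
    return T

F0 = np.array([[50.0, 2, 0], [2, -55, 2], [0, 2, 10]])
F1 = np.array([[11.0, -1, 0], [-1, -6, -1], [0, -1, 9]])
F2 = np.array([[1.0, 0, 2], [0, 1, 0], [2, 0, 1]])
# fhat_{-k} = fhat_k, hence T_n(f) is a symmetric pentadiagonal block matrix
T = block_toeplitz({0: F0, 1: F1, -1: F1, 2: F2, -2: F2}, 50)
```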
Here \(\mathbf {f}\) is such that the global condition is satisfied. Hence we can use the asymptotic expansion and Algorithm 1 to get an accurate approximation of the eigenvalues of \(T_n(\mathbf {f})\) for large n. Solving system (2.3) with \(\alpha =4\) and \(n_1=100\), we obtain the approximations of \(c^{(q)}_k(\theta _{j_1,n_1})\), \(k=1,\ldots ,\alpha \). In Fig. 2, in the top right and bottom panels, the approximated expansion functions \(\tilde{c}^{(q)}_k(\theta _{j_1,n_1})\), \(k=1,\ldots ,\alpha \), \(q=1,\dots ,s\), are shown for each eigenvalue function. For a fixed \(q=1,\dots , s\), the values \(\tilde{c}^{(q)}_k(\theta _{j_1,n_1})\), \(k=1,\ldots ,\alpha \), \(j_1=1,\dots ,n_1\), are known, and finally we can compute \(\tilde{\lambda }_\gamma (T_n(\mathbf { f}))\) for \(n=10000\) by using (1.5). For simplicity we plot the eigenvalue functions and also the expansion errors, \(E^{(q)}_{j_1,n_1,0}\), for \(q=1,2,3\). In the right panel of Fig. 3 (in black) we show the errors \(E_{j,n,0}^{(q)}\), \(q=1,\dots ,3\), versus \(\gamma \), from the direct calculation of
$$\begin{aligned} \lambda _\gamma (T_n(\mathbf { f}))-\lambda ^{(q)}(\mathbf { f}(\theta _{j,n})), \end{aligned}$$
for \(j=1,\dots ,n\), \(q=1,\dots ,3\). As expected, with \(\alpha =0\) the errors \(E_{j,n,0}^{(q)}\), \(q=1,\dots ,3\), are rather large. In the right panel of Fig. 3, comparing \(E_{j,n,0}^{(q)}\) with the errors \(\tilde{E}_{j,n,\alpha }^{(q)}\), \(q=1,\dots ,3\), we see that the errors are significantly reduced if we calculate \(\tilde{\lambda }_\gamma (T_n(\mathbf { f}))\), \(\gamma =1,\dots ,3n\), shown in the left panel of Fig. 3, using Algorithm 1 with \(\alpha =4\), \(n_1=100\), and \(n=10000\). Furthermore, a careful study of the left panel of Fig. 3 (coloured) also reveals that, for \(q=1,\dots ,s\), the \(\tilde{E}_{j,n,\alpha }^{(q)}\) have local minima, attained when \(\theta _{j,n}\) is approximately equal to some of the coarse grid points \(\theta _{j_1,n_1},\ j_1=1,\ldots ,n_1\). This is no surprise, because for \(\theta _{j,n}=\theta _{j_1,n_1}\) we have \(\tilde{c}_{k,j}^{(q)}(\theta _{j,n})=\tilde{c}_k^{(q)}(\theta _{j_1,n_1})\) and \(c_k^{(q)}(\theta _{j,n})=c_k^{(q)}(\theta _{j_1,n_1})\), which means that the error of the approximation \(\tilde{c}_{k,j}^{(q)}(\theta _{j,n})\approx c_k^{(q)}(\theta _{j,n})\) reduces to the error of the approximation \(\tilde{c}_k^{(q)}(\theta _{j_1,n_1})\approx c_k^{(q)}(\theta _{j_1,n_1})\). In other words, at these points the interpolation process introduces no further error.
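To make the procedure concrete, the following is a scalar (\(s=1\)) sketch of the interpolation–extrapolation idea behind Algorithm 1, under the usual conventions \(\theta _{j,n}=j\pi /(n+1)\), \(h=1/(n+1)\) and \(n_k=2^{k-1}(n_1+1)-1\). The monotone symbol \(p(\theta )=6-5\cos \theta +\cos 2\theta \) is an illustrative choice of ours; we use \(\alpha =3\) instead of 4 and plain linear interpolation in place of the higher-order interpolation of Algorithm 1, so this is a simplified sketch rather than the paper's exact implementation.

```python
import numpy as np

def T_scalar(p_hat, n):
    """Symmetric banded Toeplitz T_n(p), p(theta) = p_hat[0] + 2*sum_k p_hat[k] cos(k*theta)."""
    T = p_hat[0] * np.eye(n)
    for k in range(1, len(p_hat)):
        T += p_hat[k] * (np.eye(n, k=k) + np.eye(n, k=-k))
    return T

# illustrative monotone scalar symbol p(theta) = 6 - 5cos(theta) + cos(2theta)
p_hat = [6.0, -2.5, 0.5]
p = lambda t: 6 - 5*np.cos(t) + np.cos(2*t)

alpha, n1 = 3, 40
ns = [(n1 + 1) * 2**k - 1 for k in range(alpha)]   # n_1, ..., n_alpha
hs = np.array([1.0 / (nk + 1) for nk in ns])
t1 = np.arange(1, n1 + 1) * np.pi / (n1 + 1)       # coarse grid theta_{j_1,n_1}

# errors E_{j_k,n_k,0} at the matching grid points theta_{j_k,n_k} = theta_{j_1,n_1}
E = np.empty((alpha, n1))
for k, nk in enumerate(ns):
    ev = np.linalg.eigvalsh(T_scalar(p_hat, nk))
    jk = 2**k * np.arange(1, n1 + 1)               # indices with matching angles
    E[k] = ev[jk - 1] - p(t1)

# solve the alpha x alpha Vandermonde-type system (2.3), all j_1 columns at once
V = hs[:, None] ** np.arange(1, alpha + 1)[None, :]
c = np.linalg.solve(V, E)                          # row k-1 holds c_k on the coarse grid

# approximate the spectrum for a larger n via (1.5), interpolating each c_k
n = (n1 + 1) * 16 - 1
h, tn = 1.0 / (n + 1), np.arange(1, n + 1) * np.pi / (n + 1)
lam = p(tn) + sum(np.interp(tn, t1, c[k]) * h**(k + 1) for k in range(alpha))

exact = np.linalg.eigvalsh(T_scalar(p_hat, n))
err0 = np.abs(exact - p(tn))   # expansion truncated at alpha = 0
err = np.abs(exact - lam)      # matrix-less approximation
```

Even with these simplifications, the approximated eigenvalues are orders of magnitude more accurate than the plain symbol samples.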
Example 2
In the present example we choose block size \(s=3\), with eigenvalue functions \(\lambda ^{(1)}({\mathbf {f}})\) and \(\lambda ^{(3)}({\mathbf {f}})\) being strictly monotone on \([0,\pi ]\). The second eigenvalue function, \(\lambda ^{(2)}({\mathbf {f}})\), is non-monotone on a small subinterval of \([0,\pi ]\). Furthermore the range of \(\lambda ^{(2)}({\mathbf {f}})\) intersects that of \(\lambda ^{(3)}({\mathbf {f}})\), that is
$$\begin{aligned} \max _{\theta \in [0,\pi ]} \lambda ^{(1)}(\mathbf {f})&< \min _{\theta \in [0,\pi ]}\lambda ^{(2)}(\mathbf {f}),\\ \max _{\theta \in [0,\pi ]} \lambda ^{(2)}(\mathbf {f})&> \min _{\theta \in [0,\pi ]}\lambda ^{(3)}(\mathbf {f}). \end{aligned}$$
Compared with Example 1, the only difference in forming the matrix \(T_n({\mathbf {f}})\) is the first Fourier coefficient, which is now defined as
$$\begin{aligned} \hat{\mathbf {f}}_0=\left[ \begin{array}{rrr} 12&{}\quad 2&{}\quad 0\\ 2&{}\quad -\,55&{}\quad 2\\ 0&{}\quad 2&{}\quad 10 \end{array}\right] . \end{aligned}$$
In this example we want to show that it is possible to give an approximation of the eigenvalues \(\lambda _\gamma (T_n({\mathbf {f}}))\), \(n=10000\), satisfying the local condition.
From the top left panel of Fig. 4, where the graphs of the three eigenvalue functions are displayed, we notice that
-
\(\lambda ^{(1)}(\mathbf {f})\) is monotone non-decreasing and its range does not intersect that of \(\lambda ^{(q)}(\mathbf {f})\), \(q=2,3\). Hence, using the asymptotic expansion in (1.5), we expect that it is possible to give an approximation of the first n eigenvalues \(\lambda _\gamma (T_n(\mathbf {f}))\), for \(\gamma =1,\dots , n\);
-
\(\lambda ^{(3)}(\mathbf {f})\) is monotone non-increasing and there exist \(\hat{\theta }_1\), \(\hat{\theta }_2 \in [0,\pi ]\) such that, \(\forall \) \(\theta \in [0,\hat{\theta }_1)\cup (\hat{\theta }_2,\pi ]\),
$$\begin{aligned} (\lambda ^{(3)}(\mathbf {f}))(\theta )\not \in \mathrm{Range}(\lambda ^{(2)}(\mathbf {f})). \end{aligned}$$
Hence, of the remaining 2n eigenvalues, we expect that it is possible to give a fast approximation just of those eigenvalues \(\lambda _\gamma (T_n(\mathbf {f}))\) verifying the local condition, that is, those satisfying the relation below
$$\begin{aligned} \lambda _\gamma (T_n(\mathbf {f}))\in {\Bigl [}(\lambda ^{(3)}(\mathbf {f}))(\pi ),(\lambda ^{(3)}(\mathbf {f}))(\hat{\theta }_2){\Bigr )}\quad \bigcup \quad {\Bigl (}(\lambda ^{(3)}(\mathbf {f}))(\hat{\theta }_1),(\lambda ^{(3)}(\mathbf {f}))(0){\Bigr ]}. \end{aligned}$$
(3.1)
We fix \(\alpha =4\), \(n_1=100\) and we proceed to calculate the approximation of \(c^{(q)}_k(\theta _{j_1,n_1}), k=1,\ldots ,\alpha \), as in the previous example. As expected, the graph of \(\tilde{c}^{(1)}_k(\theta _{j_1,n_1}),\, k=1,\ldots ,4\), shown in the top right panel of Fig. 4, reveals that we can compute \(\tilde{\lambda }_\gamma (T_n(\mathbf { f}))\), for \(q=1\) and \(j=1,\dots ,n\), using (1.5). In other words the first n eigenvalues of \(T_n(\mathbf { f})\) can be computed using our matrix-less procedure.
For \(q=2\) no extrapolation procedure can be applied with \(\tilde{c}^{(2)}_k(\theta _{j_1,n_1})\), \(k=1,\ldots ,4\), as we can see from the oscillating and irregular graph in the bottom left panel of Fig. 4. Concerning Fig. 5, the chaotic behavior of \(\tilde{c}^{(2)}_k(\theta _{j_1,n_1})\), \(k=1,\ldots ,4\), corresponds to the rather large and oscillating errors \(E_{j,n,0}^{(2)}\) and \(\tilde{E}_{j,n,\alpha }^{(2)}\). On the other hand, for \(q=3\) we can use the extrapolation procedure and the underlying asymptotic expansion with \(\tilde{c}^{(3)}_k(\theta _{j_1,n_1})\), \(k=1,\ldots ,4\), for \(\theta _{j_1,n_1}\in [0,\hat{\theta }_1)\cup (\hat{\theta }_2,\pi ]\), \(j_1=1,\dots ,n_1\).
As a consequence we compute the approximation of the first n eigenvalues \(\lambda _\gamma (T_n(\mathbf { f}))\), for \(\gamma =1,\dots ,n\), and that of the other \(\hat{n}_1+\hat{n}_2\) eigenvalues that verify (3.1). For simplicity, in the right panel of Fig. 5, we visualize them using the non-decreasing order instead of the computational one.
The good approximation of the \(\hat{n}_1+\hat{n}_2\) eigenvalues belonging to
$$\begin{aligned} {\Bigl [}(\lambda ^{(3)}(\mathbf {f}))(\pi ),(\lambda ^{(3)}(\mathbf {f}))(\hat{\theta }_2){\Bigr )}\quad \bigcup \quad {\Bigl (}(\lambda ^{(3)}(\mathbf {f}))(\hat{\theta }_1),(\lambda ^{(3)}(\mathbf {f}))(0){\Bigr ]} \end{aligned}$$
is confirmed by the error \(\tilde{E}_{j,n,\alpha }^{(3)}\) in the left panel of Fig. 5. In fact the error is quite high for \(\gamma =2n+\hat{n}_1+1,\dots ,3n-\hat{n}_2\), but it becomes sufficiently small for \(\gamma =2n+1,\dots , 2n+\hat{n}_1\) and \(\gamma =3n-\hat{n}_2+1,\dots ,3n \).
Example 3
In this example we set the block size \(s=3\), and the eigenvalue functions \( \lambda ^{(q)}(\mathbf {f}), q=1,2,3\), satisfy
$$\begin{aligned} \max _{\theta \in [0,\pi ]} \lambda ^{(1)}(\mathbf {f})&< \min _{\theta \in [0,\pi ]}\lambda ^{(2)}(\mathbf {f}),\\ \max _{\theta \in [0,\pi ]} \lambda ^{(2)}(\mathbf {f})&< \min _{\theta \in [0,\pi ]}\lambda ^{(3)}(\mathbf {f}). \end{aligned}$$
See the top left panel of Fig. 6 for the plot of \( \lambda ^{(q)}(\mathbf {f}), \,q=1,2,3\).
The matrix \(T_n(\mathbf {f})\in \mathbb {R}^{N\times N}\), \(N=3n\), has a pentadiagonal block structure, and all the blocks belong to \(\mathbb {R}^{3\times 3}\), that is
$$\begin{aligned} T_n({\mathbf {f}})&=\left[ \begin{array}{cccccc} \hat{\mathbf {f}}_0&{}\quad \hat{\mathbf {f}}_1^T&{}\quad \hat{\mathbf {f}}_2^T&{}\quad &{}\quad &{}\quad \phantom {\ddots }\\ \hat{\mathbf {f}}_1&{}\quad \ddots &{}\quad \ddots &{}\quad \ddots \\ \hat{\mathbf {f}}_2&{}\quad \ddots &{}\quad \ddots &{}\quad \ddots &{}\quad \ddots \\ &{}\quad \ddots &{}\quad \ddots &{}\quad \ddots &{}\quad \ddots &{}\quad \hat{\mathbf {f}}_2^T\\ &{}\quad &{}\quad \ddots &{}\quad \ddots &{}\quad \ddots &{}\quad \hat{\mathbf {f}}_1^T\\ \phantom {\ddots }&{}\quad &{}\quad &{}\quad \hat{\mathbf {f}}_2&{}\quad \hat{\mathbf {f}}_1&{}\quad \hat{\mathbf {f}}_0 \end{array}\right] , \hat{\mathbf {f}}_0= \frac{1}{5}\left[ \begin{array}{rrr} 16 &{}\quad -\,12 &{} \quad 5 \\ -\,12 &{}\quad 34 &{}\quad -\,10 \\ 5 &{}\quad -\,10 &{}\quad 100\\ \end{array}\right] ,\\ \hat{\mathbf {f}}_1&= \frac{1}{10}\left[ \begin{array}{rrr} -\,4 &{}\quad 7 &{} \quad 0 \\ 8&{} \quad -\,16 &{}\quad 0\\ 0 &{}\quad 0 &{}\quad -\,10 \\ \end{array}\right] , \hat{\mathbf {f}}_2 = \frac{1}{20}\left[ \begin{array}{rrr} -\,12 &{}\quad -\,12 &{} \quad 0 \\ -\,16&{} \quad 12&{} \quad 1\\ 0 &{}\quad 2 &{} \quad 0 \\ \end{array}\right] . \end{aligned}$$
In analogy with Example 2, we want to give an approximation of \(\lambda _\gamma (T_n({\mathbf {f}}))\), for \(n=10000\), in the case where the global condition is not satisfied.
Although the intersection of the ranges of \(\lambda ^{(j)}(\mathbf { f})\) and \(\lambda ^{(k)}(\mathbf { f})\) is empty for every pair (j, k), \(j\ne k\), \(j,k\in \{1,2,3\}\), the assumption of monotonicity is violated either globally on \([0,\pi ]\) or on a subinterval in \([0,\pi ]\).
In detail:
-
\(\lambda ^{(1)}({\mathbf {f}})\) is non-monotone on the whole of \([0,\pi ]\); hence we expect that no fast approximation can be given for the first n eigenvalues \(\lambda _\gamma (T_n(\mathbf {f}))\), \(\gamma =1,\dots , n\);
-
\(\lambda ^{(3)}(\mathbf {f})\) is monotone non-decreasing and its range does not intersect that of \(\lambda ^{(q)}(\mathbf {f})\), \(q=1,2\). Hence we can provide an approximation of the last n eigenvalues \(\lambda _\gamma (T_n(\mathbf {f}))\), for \(\gamma =2n+1,\dots , 3n\) (analogously to what we did for the first n eigenvalues in Example 2);
-
\(\lambda ^{(2)}({\mathbf {f}})\) is non-monotone on a subinterval \([0,\hat{\theta }_1]\) of \([0,\pi ]\) and monotone non-decreasing on the remaining subinterval \((\hat{\theta }_1,\pi ]\). Hence we are also able to efficiently compute the eigenvalues that verify the following relation
$$\begin{aligned} \lambda _\gamma (T_n(\mathbf {f}))\in {\Biggl (}(\lambda ^{(2)}(\mathbf {f}))(\hat{\theta }_1),(\lambda ^{(2)}(\mathbf {f}))(\pi ){\Biggr ]}. \end{aligned}$$
(3.2)
For the computation we set \(\alpha =4\) and \(n_1=100\), and we proceed, as in the previous examples, to calculate first the approximation of \(c^{(q)}_k(\theta _{j_1,n_1})\), \(k=1,\ldots ,\alpha \).
In the top right panel of Fig. 6 we display the resulting chaotic graph of \(\tilde{c}^{(1)}_k(\theta _{j_1,n_1})\), \(k=1,\ldots ,4\). The graph confirms that, for \(q=1\), the interpolation–extrapolation algorithm cannot be used and, consequently, the first n eigenvalues \(\lambda _\gamma (T_n(\mathbf {f}))\), \(\gamma =1,\dots ,n\), cannot be efficiently computed using (1.5): the latter is confirmed by the errors \(\tilde{E}_{j,n,\alpha }^{(1)}\) and \(E_{j,n,0}^{(1)}\) in Fig. 7.
The chaotic behaviour is also present in the values \(\tilde{c}^{(2)}_k(\theta _{j_1,n_1})\), \(k=1,\ldots ,4\) (see the bottom left panel of Fig. 6), on the subinterval \([0,\hat{\theta }_1]\) of \([0,\pi ]\), which coincides with the subinterval where \(\lambda ^{(2)}({\mathbf {f}})\) is non-monotone.
Hence, if we exclude \([0,\hat{\theta }_1]\), the extrapolation procedure can be used again on \(\tilde{c}^{(2)}_k(\theta _{j_1,n_1})\), \(k=1,\ldots ,4\), for \(\theta _{j_1,n_1}\in (\hat{\theta }_1,\pi ]\), \(j_1=1,\dots ,n_1\). Consequently we obtain a good approximation of \(\lambda _\gamma (T_n(\mathbf {f}))\), for \(q=2\), \(j=\hat{j},\dots ,n\). Notice that \(\hat{j}\) is the first index in \(\{1,\dots ,n\}\) such that \({\hat{j}\pi }/({n+1})\in (\hat{\theta }_1,\pi ]\), that is, we can compute the eigenvalues belonging to the interval reported in (3.2). This is reflected, in Fig. 7, in the gradual reduction of the errors \(\tilde{E}_{j,n,\alpha }^{(2)}\) and \(E_{j,n,0}^{(2)}\) for indices larger than \(\hat{n}_1=n+\hat{j}\).
Finally, the last n eigenvalues can be well reconstructed with a standard matrix-less procedure, using the values of \(\tilde{c}^{(3)}_k(\theta _{j_1,n_1})\), \(k=1,\ldots ,4\), shown in the bottom right panel of Fig. 6. The errors related to the latter approximation, \(\tilde{E}_{j,n,\alpha }^{(3)}\), are shown in Fig. 7.
In total, \(2n-\hat{j}+1\) eigenvalues of \(T_n(\mathbf {f})\) can be computed; they are plotted (in non-decreasing order) in Fig. 7.
Example 4
In this further example we consider three trigonometric polynomials,
$$\begin{aligned} p^{(1)}(\theta )&=2-2\cos (\theta ),\\ p^{(2)}(\theta )&=7-2\cos (2\theta ),\\ p^{(3)}(\theta )&=16-8\cos (\theta )+2\cos (2\theta )=10+(p^{(1)}(\theta ))^2, \end{aligned}$$
with the aim of approximating the eigenvalues of a block banded Toeplitz matrix, with matrix-valued generating function \({\mathbf {f}}(\theta )\), such that \(\lambda ^{(q)}({\mathbf {f}})=p^{(q)}\) for \(q=1,2,3\). We choose \(s=3\) but obviously the following procedure holds for any \(s\in \mathbb {Z}_+\) and for any chosen s trigonometric polynomials, \(p^{(1)}(\theta ),p^{(2)}(\theta ),\ldots ,p^{(s)}(\theta )\), such that
$$\begin{aligned} \max _{\theta \in [0,\pi ]} p^{(q)}(\theta )< \min _{\theta \in [0,\pi ]}p^{(q+1)}(\theta ) \end{aligned}$$
for \(q=1,\dots ,s-1\). We can define
$$\begin{aligned} {\mathbf {f}}(\theta )=Q_3 \begin{bmatrix} p^{(1)}(\theta )&\quad 0&\quad 0\\ 0&\quad p^{(2)}(\theta )&\quad 0\\ 0&\quad 0&\quad p^{(3)}(\theta ) \end{bmatrix} Q_3^{\mathrm {T}}, \end{aligned}$$
where \(Q_3\) is any orthogonal matrix in \(\mathbb {R}^{3\times 3}\). For the current example we choose
$$\begin{aligned} Q_3=\begin{bmatrix} 1&\quad 0&\quad 0\\ 0&\quad \cos (\pi /3)&\quad -\,\sin (\pi /3)\\ 0&\quad \sin (\pi /3)&\quad \cos (\pi /3) \end{bmatrix}= \frac{1}{2}\begin{bmatrix} 2&\quad 0&\quad 0\\ 0&\quad 1&\quad -\,\sqrt{3}\\ 0&\quad \sqrt{3}&\quad 1 \end{bmatrix}. \end{aligned}$$
Now we define the Fourier coefficients of \(\mathbf {f}(\theta )\), that is
$$\begin{aligned} \hat{\mathbf {f}}_{k}= Q_3\begin{bmatrix} \hat{p}_k^{(1)}&\quad 0&\quad 0\\ 0&\quad \hat{p}_k^{(2)}&\quad 0\\ 0&\quad 0&\quad \hat{p}_k^{(3)} \end{bmatrix}Q_3^{\mathrm {T}}=Q_3\hat{D}_kQ_3^T, \end{aligned}$$
(3.3)
where \(\hat{p}_k^{(q)}\) is the kth Fourier coefficient of the eigenvalue function \(p^{(q)}(\theta )\), and \(k=-m,\ldots ,m\), where \(m=\max _{q=1,\dots , s}\mathrm{deg}(p^{(q)})\). In our example \(m=2\), since \(p^{(2)}(\theta )\) and \(p^{(3)}(\theta )\) have degree 2, while \(p^{(1)}(\theta )\) has degree 1. Each \(p^{(q)}(\theta )\) is a real cosine trigonometric polynomial (RCTP), so \({\mathbf {f}}(\theta )\) is a symmetric matrix-valued function with Fourier coefficients
$$\begin{aligned} \hat{\mathbf {f}}_0&= \frac{1}{4}\left[ \begin{array}{rrr} 8&{}\quad 0&{}\quad 0\\ 0&{}\quad 55&{}\quad -\,9\sqrt{3}\\ 0&{}\quad -\,9\sqrt{3}&{}\quad 37 \end{array}\right] ,\quad \hat{\mathbf {f}}_1= \left[ \begin{array}{rrr} -\,1&{}\quad 0&{}\quad 0\\ 0&{}\quad -\,3&{}\quad \sqrt{3}\\ 0&{}\quad \sqrt{3}&{}\quad -\,1 \end{array}\right] ,\\ \hat{\mathbf {f}}_2&= \frac{1}{2}\left[ \begin{array}{rrr} 0&{}\quad 0&{}\quad 0\\ 0&{}\quad 1&{}\quad -\,\sqrt{3}\\ 0&{}\quad -\,\sqrt{3}&{}\quad -\,1 \end{array}\right] , \end{aligned}$$
where \(\hat{\mathbf {f}}_{-k}=\hat{\mathbf {f}}_{k}^T=\hat{\mathbf {f}}_{k}\), \(k=0,1,2\).
The resulting block banded Toeplitz matrix is the following matrix
$$\begin{aligned} T_n({\mathbf {f}})= \left[ \begin{array}{ccccc} \hat{\mathbf {f}}_{0}&{}\quad \hat{\mathbf {f}}_{-1}&{}\quad \hat{\mathbf {f}}_{-2}&{}\quad &{}\\ \hat{\mathbf {f}}_{1}&{}\quad \ddots &{}\quad \ddots &{}\quad \ddots &{}\\ \hat{\mathbf {f}}_{2}&{}\ddots &{}\quad \ddots &{}\quad \ddots &{}\quad \hat{\mathbf {f}}_{-2}\\ &{}\quad \ddots &{}\quad \ddots &{}\quad \ddots &{}\quad \hat{\mathbf {f}}_{-1}\\ &{}\quad &{}\quad \hat{\mathbf {f}}_{2}&{}\quad \hat{\mathbf {f}}_{1}&{}\quad \hat{\mathbf {f}}_{0} \end{array}\right] , \end{aligned}$$
with symbol
$$\begin{aligned} {\mathbf {f}}(\theta )=\hat{\mathbf {f}}_0+\sum _{k=1}^{2}\left( \hat{\mathbf {f}}_{k}e^{\mathbf {i}k\theta }+\hat{\mathbf {f}}_{-k}e^{-\mathbf {i}k\theta }\right) =\hat{\mathbf {f}}_0+2\hat{\mathbf {f}}_1\cos (\theta )+2\hat{\mathbf {f}}_2\cos (2\theta ) . \end{aligned}$$
We want to approximate the eigenvalues of \(T_n({\mathbf {f}})\), where \(\mathbf {f}(\theta )\) is constructed from \(p^{(q)}(\theta )\), \(q=1,2,3\). For the graph of the chosen polynomials see the top left panel of Fig. 8.
Due to the special structure of all \(\hat{\mathbf {f}}_{k}\), see (3.3), we have
$$\begin{aligned} T_n({\mathbf {f}})= I_n\otimes Q_3 \left[ \begin{array}{ccccc} \hat{D}_{0}&{}\quad \hat{D}_{-1}&{}\hat{D}_{-2}&{}\quad &{}\\ \hat{D}_{1}&{}\quad \ddots &{}\quad \ddots &{}\quad \ddots &{}\\ \hat{D}_{2}&{}\quad \ddots &{}\quad \ddots &{}\quad \ddots &{}\quad \hat{D}_{-2}\\ &{}\quad \ddots &{}\quad \ddots &{}\quad \ddots &{}\quad \hat{D}_{-1}\\ &{}\quad &{}\quad \hat{D}_{2}&{}\quad \hat{D}_{1}&{}\quad \hat{D}_{0} \end{array}\right] I_n\otimes Q_3^T. \end{aligned}$$
Therefore \(T_n({\mathbf {f}})\) is similar to the matrix
$$\begin{aligned} \begin{bmatrix} T_n(p^{(1)}(\theta ))&\quad 0&\quad 0\\ 0&\quad T_n(p^{(2)}(\theta ))&\quad 0\\ 0&\quad 0&\quad T_n(p^{(3)}(\theta )) \end{bmatrix}, \end{aligned}$$
and finally it is trivial to see that the block case, in this setting, is reduced to 3 different scalar problems, which can be treated separately.
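The reduction to scalar problems is easy to check numerically. The sketch below builds \(T_n(\mathbf {f})\) from the coefficients (3.3) and compares its spectrum with the union of the spectra of \(T_n(p^{(q)})\), \(q=1,2,3\); the size n is an illustrative choice.

```python
import numpy as np

c, s3 = np.cos(np.pi/3), np.sin(np.pi/3)
Q3 = np.array([[1, 0, 0], [0, c, -s3], [0, s3, c]])

# Fourier coefficients of p^(1), p^(2), p^(3): row q-1, column k = 0, 1, 2
p_hat = np.array([[ 2.0, -1.0,  0.0],    # 2 - 2cos(theta)
                  [ 7.0,  0.0, -1.0],    # 7 - 2cos(2theta)
                  [16.0, -4.0,  1.0]])   # 16 - 8cos(theta) + 2cos(2theta)

n = 40
T = np.zeros((3*n, 3*n))
for k in range(3):
    fk = Q3 @ np.diag(p_hat[:, k]) @ Q3.T        # fhat_k = Q3 Dhat_k Q3^T, see (3.3)
    for i in range(n - k):
        T[3*(i+k):3*(i+k+1), 3*i:3*(i+1)] = fk   # subdiagonal block fhat_k
        T[3*i:3*(i+1), 3*(i+k):3*(i+k+1)] = fk   # fhat_{-k} = fhat_k

def T_scalar(coeff, n):
    """Symmetric banded scalar Toeplitz with symbol c0 + 2*sum_k ck cos(k*theta)."""
    A = coeff[0] * np.eye(n)
    for k in range(1, len(coeff)):
        A += coeff[k] * (np.eye(n, k=k) + np.eye(n, k=-k))
    return A

# spectrum of the block matrix = union of the three scalar spectra
scalar_eigs = np.sort(np.concatenate(
    [np.linalg.eigvalsh(T_scalar(p_hat[q], n)) for q in range(3)]))
```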
Differently from previous examples, here the analytical expressions of the eigenvalue functions of \(\mathbf {f}(\theta )\) are known, since they coincide, by construction, with \(p^{(q)}(\theta )\), for \(q=1,2,3\). So we will describe the spectrum of \(T_n( {\mathbf {f}})\), approximating or calculating exactly the 3n eigenvalues, treating separately the 3 different scalar problems.
It is known that the first n eigenvalues can be calculated exactly by sampling \(p^{(1)}\) on the grid \(\theta _{j,n}={j\pi }/({n+1})\), \(j=1, \dots , n\). Analogously, the next n eigenvalues can be found exactly by sampling \(p^{(2)}\) on a special grid defined in [16]. For the last n eigenvalues the grid that gives exact eigenvalues is not known, but \(p^{(3)}\) is monotone non-decreasing and consequently we can use the asymptotic expansion in the scalar case.
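The first claim is the classical fact that \(T_n(2-2\cos \theta )\) is the tridiagonal \((-1,2,-1)\) matrix, whose eigenvalues are known in closed form; a one-line check:

```python
import numpy as np

n = 200
T = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # T_n(p^(1)), p^(1) = 2 - 2cos(theta)
j = np.arange(1, n + 1)
exact = 2 - 2*np.cos(j*np.pi/(n + 1))               # samples on theta_{j,n} = j*pi/(n+1)
```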
We set the parameters as in the previous cases: \(n_1=100\) and \(n=10000\).
In the top right panel of Fig. 8 we report the expansion errors \(E^{(q)}_{j_1,n_1,0}\), calculated using the grid \(\theta _{j_1,n_1}={j_1\pi }/({n_1+1})\), \(j_1=1,\dots ,n_1\), \(q=1,2,3\). It is no surprise that in the first region of the graph (green area) the error is zero, since the first \(n_1\) eigenvalues are exactly given by sampling \(p^{(1)}\) on the standard grid \(\theta _{j_1,n_1}\).
In the yellow area we see the result of the direct calculation of
$$\begin{aligned} \lambda _\gamma (T_{n_1}(\mathbf { f}))-\lambda ^{(3)}(\mathbf { f}(\theta _{j_1,n_1})), \end{aligned}$$
for \(j_1=1,\dots ,n_1\), \(q=3\), since we are using the asymptotic expansion with \(\alpha =0\).
The red area, containing the errors related to \(p^{(2)}(\theta )\), is obviously chaotic since \(p^{(2)}(\theta )\) is non-monotone.
Following the notation and the analysis in [16], since \(p^{(2)}=7-2\cos (2\theta )\) and \(n_1=100\), we have two changes of monotonicity, which we encode in the parameter \(\omega \). As a consequence, in accordance with the study in [16], we choose
$$\begin{aligned} \omega =2,\quad \beta&=\mathrm {mod}(n_1,\omega )=0, \quad n_\omega =(n_1-\beta )/\omega =50,\\ \theta _{n_\omega }^{(1)}&=\frac{j\pi }{n_\omega +1},\quad j=1,\ldots ,n_\omega ,\\ \theta _{n_\omega +1}^{(2)}&=\frac{j\pi }{n_\omega +2},\quad j=1,\ldots ,n_\omega +1. \end{aligned}$$
To map the grids above so as to match the given symbol \(\mathbf {f}(\theta )\) we construct \(\theta _{n_1}\) by
$$\begin{aligned} \theta _{n_1}=\frac{1}{2}\left\{ \theta _{n_\omega }^{(1)},\,\theta _{n_\omega }^{(1)}+\pi \right\} , \end{aligned}$$
where, since \(\beta =0\), only the grid \(\theta _{n_\omega }^{(1)}\) is used.
A more general formula to match grids \(\theta _{n_\omega }^{(1)}\) and \(\theta _{n_\omega +1}^{(2)}\) to be evaluated on the standard symbol is
$$\begin{aligned} \theta _{n}=\frac{1}{\omega }\left\{ \bigcup _{r_1=1}^{\omega -\beta }\left( \theta _{n_\omega }^{(1)}+(r_1-1)\pi \right) ,\bigcup _{r_2=1}^{\beta }\left( \theta _{n_\omega +1}^{(2)}+(r_2-1)\pi +(\omega -\beta )\pi \right) \right\} . \end{aligned}$$
(3.4)
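For \(p^{(2)}=7-2\cos (2\theta )\), \(n_1=100\), \(\omega =2\) and \(\beta =0\), formula (3.4) only involves the grid \(\theta _{n_\omega }^{(1)}\). A short sketch verifying that sampling \(p^{(2)}\) on the resulting grid reproduces the spectrum of \(T_{n_1}(p^{(2)})\) exactly:

```python
import numpy as np

n1, omega = 100, 2
beta = n1 % omega                         # = 0: only the first grid enters (3.4)
n_w = (n1 - beta) // omega                # = 50
t1 = np.arange(1, n_w + 1) * np.pi / (n_w + 1)
grid = np.concatenate([t1, t1 + np.pi]) / omega    # formula (3.4) with beta = 0

p2 = lambda t: 7 - 2*np.cos(2*t)
T = 7*np.eye(n1) - np.eye(n1, k=2) - np.eye(n1, k=-2)  # T_{n1}(p^(2))
```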
In the bottom left panel of Fig. 8 we report the global expansion errors \(E^{(q)}_{j_1,n_1,0}\), calculated using the grid described above. In this way the region where the error is 0 is the second one (red area), since those eigenvalues are calculated exactly by sampling \(p^{(2)}(\theta )\) on the grid (3.4). Furthermore, in the green and in the yellow areas we see the result of the direct calculation of
$$\begin{aligned} \lambda _\gamma (T_{n_1}(\mathbf { f}))-\lambda ^{(q)}(\mathbf { f}(\theta _{j_1,n_1})), \end{aligned}$$
for \(j_1=1,\dots ,n_1\), \(q=1,3\), since we are using the asymptotic expansion with \(\alpha =0\).
Hence, the first n eigenvalues of \(T_n( {\mathbf {f}})\) can be calculated exactly by sampling \(p^{(1)}\) on the grid \(\theta _{j,n}={j\pi }/({n+1})\), \(j=1, \dots , n\), and n further exact eigenvalues can be found by sampling \(p^{(2)}\) on the grid (3.4). For the computation of the last n eigenvalues, we use the matrix-less procedure in the scalar setting, passing through the approximation of \(c^{(3)}_k(\theta _{j_1,n_1})\), \(k=1,\ldots ,\alpha \), for \(\alpha =4\); see the bottom right panel of Fig. 8.
In fact, for \(\alpha =4\) we ignore the first two evaluations of \(c_4^{(3)}\) at the initial points \(\theta _{1,n}\) and \(\theta _{2,n}\), because their values behave in an erratic way. This problem was emphasized in [2], and it is due to the fact that the first and second derivatives of \(p^{(3)}(\theta )\) at \(\theta =0\) vanish simultaneously. However, two observations clarify the situation.
-
The present pathology is not a counterexample to the asymptotic expansion (1.5), since the expansion takes \(\theta \) fixed and considers all the pairs (j, n) such that \(\theta _{j,n}=\theta \); in the current case, as in the scalar-valued setting considered in [2], j is fixed while n grows, so the point \(\theta \) is not kept fixed.
-
There are simple ways to overcome the problem and hence to compute reliable evaluations of \(c_4^{(3)}\) at the bad points \(\theta _{1,n}\) and \(\theta _{2,n}\). One of them, described in [12], consists in choosing a sufficiently large \(\alpha >4\) and computing \(c_k^{(3)}\) for \(k=1,2,3,4\). Using this trick, the values of \(c_4^{(3)}\) at the initial points \(\theta _{1,n}\) and \(\theta _{2,n}\) show the expected behavior. In addition, we stress that this behavior has little impact on the numerically computed solution: assuming double precision computations, the contribution to the error deriving from \(c_4^{(3)}(\theta _{j,n})h^4\) is numerically negligible, even for moderate n. Further discussions on the topic are presented in [12].
Example 5
Consider the \(\mathbb {Q}_p\) Lagrangian Finite Element approximation of the second order elliptic differential problem
$$\begin{aligned} {\left\{ \begin{array}{ll} - \varDelta u+\beta \cdot \nabla u + \gamma u=f, &{} \text { in }\varOmega =(0,1)^d,\\ u=0, &{} \text { on }\partial \varOmega , \end{array}\right. } \end{aligned}$$
(3.5)
in one dimension with \(\beta =\gamma =0\), and \(f\in L^2(\varOmega )\). The resulting stiffness matrix is \(A_n^{(p)}=nK_n^{(p)}\), where \(K_n^{(p)}\) is a \((pn-1)\times (pn-1)\) block matrix. The construction of the matrix and the symbol is given in [18]. The \(p\times p\) matrix-valued symbol of \(K_n^{(p)}\) is
$$\begin{aligned} {\mathbf {f}}(\theta )=\hat{\mathbf {f}}_0+\hat{\mathbf {f}}_1e^{\mathbf {i}\theta }+\hat{\mathbf {f}}_1^{\mathrm {T}}e^{-\mathbf {i}\theta }. \end{aligned}$$
We have
$$\begin{aligned} K_n^{(p)}=T_n({\mathbf {f}})_-, \end{aligned}$$
where the subscript − denotes that the last row and column of \(T_n({\mathbf {f}})\) are removed. This is due to the homogeneous boundary conditions. For detailed expressions of \(\hat{\mathbf {f}}_0\) and \(\hat{\mathbf {f}}_1\) in the particular cases \(p=2,3,4\), see “Appendix B”.
In Table 1 we list seven examples of uniform grids, with varying n. The general notation for a grid, whose type is defined by the context, is \(\theta _{j,n}\), where n is the number of grid points and j is the index, \(j=1,\ldots ,n\). The grid fineness parameter h for the respective grids is also presented in Table 1. The names of the different grids are chosen in view of their relations with the \(\tau \)-algebras [7] [see specifically equations (19), (22), and (23) therein].
Table 1 Seven examples of uniform grids
In Example 1 of [18] the case \(p=2\) is considered, and explicit formulas for the two eigenvalue functions are given, with their notation,
$$\begin{aligned} \lambda _1(\mathbf {f}_2(\theta ))&=5+\frac{1}{3}\cos (\theta )+\frac{1}{3}\sqrt{129+126\cos (\theta )+\cos ^2(\theta )},\\ \lambda _2(\mathbf {f}_2(\theta ))&=5+\frac{1}{3}\cos (\theta )-\frac{1}{3}\sqrt{129+126\cos (\theta )+\cos ^2(\theta )}. \end{aligned}$$
Here we present the two grids used to sample the two eigenvalue functions in order to attain exact eigenvalues,
$$\begin{aligned} \lambda _1(\mathbf {f}_2(\theta _{j_1,n}^{(1)})),&\quad \theta _{j_1,n}^{(1)}=\frac{j_1\pi }{n},\quad j_1=1,\ldots , n,\\ \lambda _2(\mathbf {f}_2(\theta _{j_2,n-1}^{(2)})),&\quad \theta _{j_2,n-1}^{(2)}=\frac{j_2\pi }{n},\quad j_2=1,\ldots , n-1. \end{aligned}$$
With the notation in Table 1, we use the grid \(\tau _{n-1}^\pi \) for the first eigenvalue function, and the grid \(\tau _{n-1}\) for the second. Since for \(p>2\) the analytical expressions of the eigenvalue functions cannot be computed easily, the following four-step algorithm can be used to obtain the exact eigenvalues for any p.
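The \(p=2\) case can be verified numerically. The sketch below assembles \(K_n^{(2)}=n^{-1}A_n^{(2)}\) directly from the standard local stiffness matrix of quadratic Lagrange elements (an assumption on our part, since the coefficients of “Appendix B” are not reproduced here), and checks that its sorted spectrum coincides with the samples of \(\lambda _1(\mathbf {f}_2)\) on the n-point grid and \(\lambda _2(\mathbf {f}_2)\) on the \((n-1)\)-point grid.

```python
import numpy as np

def K_p2(n):
    """Scaled Q2 stiffness matrix K_n^{(2)} = n^{-1} A_n^{(2)} on n uniform
    elements of (0, 1) with homogeneous Dirichlet conditions; size 2n - 1."""
    # standard local stiffness of a quadratic Lagrange element, scaled by h = 1/n
    loc = np.array([[7.0, -8, 1], [-8, 16, -8], [1, -8, 7]]) / 3.0
    A = np.zeros((2*n + 1, 2*n + 1))
    for e in range(n):
        idx = [2*e, 2*e + 1, 2*e + 2]    # left endpoint, midpoint, right endpoint
        A[np.ix_(idx, idx)] += loc
    return A[1:-1, 1:-1]                 # remove the two boundary nodes

lam1 = lambda t: 5 + np.cos(t)/3 + np.sqrt(129 + 126*np.cos(t) + np.cos(t)**2)/3
lam2 = lambda t: 5 + np.cos(t)/3 - np.sqrt(129 + 126*np.cos(t) + np.cos(t)**2)/3

n = 8
g1 = np.arange(1, n + 1) * np.pi / n     # n points, pi included (tau^pi type grid)
g2 = np.arange(1, n) * np.pi / n         # n - 1 interior points (tau type grid)
exact = np.sort(np.concatenate([lam1(g1), lam2(g2)]))
```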
The mass matrix of system (3.5) (that is, with \(\gamma =1\)) is \(B_n^{(p)}=n^{-1}M_n^{(p)}\), where \(M_n^{(p)}=T_n(\mathbf {g})_-\) is the scaled mass matrix.
The \(p\times p\) matrix-valued symbol of \(M_n^{(p)}\) is given by
$$\begin{aligned} {\mathbf {g}}(\theta )=\hat{\mathbf {g}}_0+\hat{\mathbf {g}}_1e^{\mathbf {i}\theta }+\hat{\mathbf {g}}_1^{\mathrm {T}}e^{-\mathbf {i}\theta }. \end{aligned}$$
For detailed expressions of \(\hat{\mathbf {g}}_0\) and \(\hat{\mathbf {g}}_1\) in the particular cases \(p=2,3,4\), see “Appendix B”. The algorithm for writing the exact eigenvalues of \(M_n^{(p)}\) for p even is the same as the one described for \(K_n^{(p)}\) above, just replacing \(\mathbf {f}(\theta )\) with \(\mathbf {g}(\theta )\). However, for \(p>1\) odd, a slight modification is needed:
If \((p+1)/2\) is odd, that is \(p=5,9,13,\ldots \), define \(\hat{p}=p\). If \((p+1)/2\) is even, that is \(p=3,7,11,\ldots \), define \(\hat{p}=p-2\). In summary, to obtain the exact eigenvalues of \(M_n^{(p)}\), the algorithm becomes:
In Fig. 9 we present the appropriate grids, defined in Table 1, for the exact eigenvalues of \(K_n^{(p)}\) and \(M_n^{(p)}\) with \(n=6\) and \(p=5\).