
BIT Numerical Mathematics

Volume 58, Issue 1, pp. 133–162

On restarting the tensor infinite Arnoldi method

  • Giampaolo Mele
  • Elias Jarlebring
Open Access

Abstract

An efficient and robust restart strategy is important for any Krylov-based method for eigenvalue problems. The tensor infinite Arnoldi method (TIAR) is a Krylov-based method for solving nonlinear eigenvalue problems (NEPs). This method can be interpreted as an Arnoldi method applied to a linear and infinite dimensional eigenvalue problem where the Krylov basis consists of polynomials. We propose new restart techniques for TIAR and analyze their efficiency and robustness. More precisely, we consider an extension of TIAR which corresponds to generating the Krylov space using not only polynomials, but also structured functions, which are sums of exponentials and polynomials, while maintaining a memory-efficient tensor representation. We propose two restarting strategies, both derived from the specific structure of the infinite dimensional Arnoldi factorization. One restarting strategy, which we call semi-explicit TIAR restart, makes it possible to carry out locking in a compact way. The other strategy, which we call implicit TIAR restart, is based on the Krylov–Schur restart method for the linear eigenvalue problem and preserves its robustness. Both restarting strategies involve approximations of the tensor structured factorization in order to reduce the complexity and the required memory resources. We bound the error introduced by some of the approximations in the infinite dimensional Arnoldi factorization, showing that those approximations do not substantially influence the robustness of the restart approach. We illustrate the effectiveness of the approaches by applying them to solve large-scale NEPs that arise from a delay differential equation and a wave propagation problem. The advantages in comparison to other restart methods are also illustrated.

Keywords

Nonlinear eigenvalue problem · Restart · Tensor infinite Arnoldi · Krylov subspace method · Krylov–Schur method

Mathematics Subject Classification

35P30 · 65H17 · 65F60 · 15A18 · 65F15

1 Introduction

We consider the nonlinear eigenvalue problem (NEP) defined as finding \((\lambda ,v) \in \mathbb {C}\times \mathbb {C}^n {\setminus } \left\{ 0 \right\} \) such that
$$\begin{aligned} M(\lambda ) v = 0 \end{aligned}$$
(1)
where \(\lambda \in \varOmega \subseteq \mathbb {C}, \varOmega \) is an open disk centered at the origin and \(M:\varOmega \rightarrow \mathbb {C}^{n \times n}\) is analytic. The NEP has received considerable attention in the literature. See the review papers [26, 38] and the problem collection [7].

A large number of methods for (1) are available in the numerical linear algebra literature. There are specialized methods for particular classes of NEPs, such as polynomial eigenvalue problems (PEPs), see [18, 22, 23] and [2, Chapter 9], in particular quadratic eigenvalue problems (QEPs) [3, 24, 25, 33], and rational eigenvalue problems (REPs) [5, 6, 30, 36]. There are also methods that exploit the structure of the operator \(M(\lambda )\), such as Hermitian structure [31, 32] or low rank of the matrix coefficients [34]. Methods for more general classes of NEPs are also present in the literature. These include methods based on a modification of the Arnoldi method [37], which can be restarted for certain problems, Jacobi–Davidson methods [8], and Newton-like methods [9, 16, 28]. Finally, there is a class of methods (to which the presented method belongs) based on Krylov methods and rational Krylov methods, which can be interpreted either as dynamically expanding an approximation of the NEP or as applying a method to an infinite dimensional operator [11, 14, 35].

In principle, we do not assume any particular structure of the NEP except for the analyticity and the computability of certain quantities associated with \(M(\lambda )\) (described further below). This is similar to the infinite Arnoldi method (IAR) [14], which follows the same line of reasoning as our approach. IAR is equivalent to the Arnoldi method applied to a linear operator. More precisely, under the assumption that zero is not an eigenvalue, the problem (1) can be reformulated as \(\lambda B(\lambda ) v = v\), where \( B(\lambda ) = M(0)^{-1} (M(0)-M(\lambda ))/\lambda \). This problem is equivalent to the linear and infinite dimensional eigenvalue problem \(\lambda \mathscr {B}\psi (\theta ) = \psi (\theta )\), where \(\psi : \mathbb {C}\rightarrow \mathbb {C}^n\) is an analytic function [14, Theorem 3]. The operator \(\mathscr {B}\) is linear, maps functions to functions, and is defined as
$$\begin{aligned} \mathscr {B}\psi (\theta ) := \int _0^{\theta } \psi ({\hat{\theta }}) d {\hat{\theta }} + \sum _{i=0}^{\infty } \frac{B^{(i)}(0)}{i!} \psi ^{(i)}(0). \end{aligned}$$
(2)
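As an illustrative sanity check (ours, not part of the paper), the reformulation \(\lambda B(\lambda ) v = v\) can be verified on a scalar toy NEP; the names `M` and `B` below mirror the text, and the particular quadratic is an arbitrary choice:

```python
import numpy as np

# Toy scalar NEP (n = 1): M(lambda) = 2 - 3*lambda + lambda^2, with
# eigenvalues 1 and 2.  This example is illustrative only.
M = lambda lam: 2.0 - 3.0 * lam + lam**2

# B(lambda) = M(0)^{-1} (M(0) - M(lambda)) / lambda, defined for lambda != 0.
B = lambda lam: (M(0.0) - M(lam)) / (M(0.0) * lam)

# Check the fixed-point reformulation lambda * B(lambda) * v = v (v = 1 here)
# at both eigenvalues of the toy problem.
for lam in (1.0, 2.0):
    assert abs(M(lam)) < 1e-14            # lam is an eigenvalue of the NEP
    assert abs(lam * B(lam) - 1.0) < 1e-14  # and a fixed point of lam*B(lam)
```
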
IAR has a particular structure such that it can be represented with a tensor and this was the basis for the tensor infinite Arnoldi method (TIAR) in [13]. TIAR is equivalent to IAR but computationally more attractive (in terms of memory and CPU-time) due to a memory efficient representation of the basis matrix.

A problematic aspect of any algorithm based on the Arnoldi method is that, when many iterations are performed, the computation time per iteration eventually becomes large. Moreover, finite precision arithmetic may restrict the attainable accuracy. Fortunately, in many situations an appropriate restart of the algorithm can resolve these issues. There are two main classes of restarting strategies: explicit restart and implicit restart. Most explicit restart techniques correspond to selecting a starting vector that generates an Arnoldi factorization with the wanted Ritz values. An implicit restart consists of computing a new Arnoldi factorization with the wanted Ritz values, without explicitly computing a starting vector. This can be done by deflating the unwanted Ritz values as in, e.g., IRA [20], or by extracting a suitable subspace of the Krylov space using the Krylov–Schur restart approach [29]. For reasons of numerical stability, implicit restart is often considered more robust than explicit restart. See [27] for further discussion of restarting the Arnoldi method for the linear eigenvalue problem.

In this work we propose two new restart techniques for TIAR:
  • An implicit restart, which consists of an adaptation of the Krylov–Schur restart,

  • A semi-explicit restart, which consists of an explicit restart that imposes the structure on the converged locked Ritz pairs and on the starting function.

The derivation of our implicit restart procedure is based on Krylov–Schur restarting in infinite dimensions. We improve the procedure in several ways. The structure of the Arnoldi factorization allows us to perform approximations that reduce the complexity and the memory requirements. We prove that the coefficient matrix representing the basis of the Krylov space exhibits a fast decay in its singular values. This allows us to effectively use a low-rank approximation of this matrix. Moreover, we prove that there is a fast decay in the coefficients of the polynomials representing the basis of the Krylov space. Therefore, the high-order coefficients of the Krylov basis can be neglected if the power series coefficients of \(M(\lambda )\) decay to zero. We give explicit bounds on the errors due to these approximations.

A semi-explicit restart for IAR was presented in [12]. We extend the procedure to TIAR. The feature of imposing the structure on the converged Ritz values and starting function is obtained by generating the Krylov space using a particular type of structured functions, which are sums of polynomial and exponential functions. We show that such functions can be included in the framework of TIAR. In particular, we derive a memory-efficient representation of the structured functions, similar to [13].

There exist other Arnoldi-like methods combined with a companion linearization that use a memory-efficient representation of the Krylov basis matrix and that can be restarted. There are, for instance, TOAR [17, 39] and CORK [35], which are based on the concept of compact Arnoldi decompositions [21]. Similar to TIAR, the direct usage of the Krylov–Schur restart for these methods does not decrease the complexity unless SVD-based approximations are used (which is indeed suggested in the implementations of these methods). More precisely, the coefficients that represent the Krylov basis are replaced with their low-rank approximations. In contrast to those approaches, our specific setting, in particular the infinite dimensional function formulation and the representation of the basis with tensor structured functions, allows us to characterize the impact of the approximations.

The relationship with CORK [35] can be seen as follows. A variation of a special case of our approach (implicit restart without SVD-compression and without polynomial degree reduction) has similarities with a special case of a variation of CORK (single shift, with a particular companion linearization without SVD-compression). Our approach is based on a derivation involving infinite dimensionality which allows us to derive theory for the truncation and it allows us to restart with infinite dimensional objects. This strategy is effective since the invariant pairs are infinite dimensional objects. In contrast to this, CORK is derived from reasoning concerning the NEP linearization. This allows the usage of different types of companion linearizations, that correspond to different approximations of the nonlinearities of \(M(\lambda )\), and leads to a rational Krylov approach, i.e., several shifts can be used in one run.

The paper is organized as follows: in Sect. 2 we extend TIAR to tensor structured functions. In Sect. 3 we present a derivation of the Krylov–Schur type restarting in an abstract and infinite dimensional setting. Section 4 contains the derivation of a semi-explicit restart for TIAR. In Sect. 5 we carry out the adaptation of the Krylov–Schur restart for TIAR. We analyze the complexity of the proposed methods in Sect. 6. Finally, in Sect. 7 we demonstrate the effectiveness of the restarting strategies with numerical simulations on large and sparse NEPs.

We denote by \(a_{:,:,:}\) a three-dimensional tensor and by \(a_{i,:,:}, a_{:,j,:}\) and \(a_{:,:,\ell }\) the slices of the tensor with respect to the first, second and third dimension. The vector \(z_j\) denotes the j–th column of the matrix Z and \(e_j\) the j–th canonical unit vector. The matrix \(I_{m,p}\) denotes the matrix obtained by extracting the first m rows and p columns of a larger square identity matrix. The matrix \(H_k\) denotes the square matrix obtained by removing the last row from the matrix \(\underline{H}_k \in \mathbb {C}^{(k+1) \times k }\).
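This notation maps directly onto array operations; the following numpy sketch (illustrative, not part of the paper) mirrors the tensor slices, \(I_{m,p}\) and \(H_k\):

```python
import numpy as np

# A three-dimensional tensor a_{:,:,:} with d = 2, k = 3, r = 4 and its slices.
a = np.arange(24).reshape(2, 3, 4)
slice_first = a[0, :, :]     # a_{1,:,:} : a 3 x 4 matrix
slice_third = a[:, :, 2]     # a_{:,:,3} : a 2 x 3 matrix

# I_{m,p}: the first m rows and p columns of a larger identity matrix.
def I_mp(m, p):
    return np.eye(max(m, p))[:m, :p]

# H_k: the square matrix obtained by dropping the last row of a
# (k+1) x k Hessenberg matrix (here a generic Hessenberg pattern, k = 4).
H_under = np.triu(np.ones((5, 4)), k=-1)
H = H_under[:-1, :]
```
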

2 Tensor structured functions and TIAR factorizations

Our main algorithms are derived using particular types of functions. More precisely, we consider functions that can be expressed as \(\psi (\theta ) = q(\theta ) + Y \exp (S \theta ) c\) where \(q: \mathbb {C}\rightarrow \mathbb {C}^n\) is a polynomial, \(Y \in \mathbb {C}^{n \times p}, S \in \mathbb {C}^{p \times p}\), \(c \in \mathbb {C}^{p}\) and \(\exp (S \theta )\) denotes the matrix exponential. Such functions were also used in [12]. We now introduce a new memory-efficient representation of such functions involving tensors.

Definition 1

(Tensor structured function) The vector-valued function \(\psi : \mathbb {C}\rightarrow \mathbb {C}^n\) is a tensor structured function if there exist \(Y, W \in \mathbb {C}^{n \times p}, {\bar{a}}\in \mathbb {C}^{d\times r}, {\bar{b}}\in \mathbb {C}^{d\times p}, {\bar{c}} \in \mathbb {C}^{p}, S \in \mathbb {C}^{p \times p}\) and \(Z \in \mathbb {C}^{n \times r}\) where \([Z, \ W]\) has orthonormal columns and \({\text {span}}(Y)={\text {span}}(W)\), such that
$$\begin{aligned} \psi (\theta ) =&P_{d-1}(\theta ) \left( \sum _{\ell =1}^r {\bar{a}}_{:,\ell } \otimes z_\ell + \sum _{\ell =1}^p {\bar{b}}_{:,\ell } \otimes w_\ell \right) + Y \exp _{d-1} (\theta S) {\bar{c}} \end{aligned}$$
(3)
where
$$\begin{aligned} P_{d-1}(\theta ) := (1, \theta , \dots , \theta ^{d-1}) \otimes I_n \end{aligned}$$
(4)
and \( \exp _{d-1} (\theta S) := \sum _{i=d}^\infty \theta ^i S^i/i! \) is the remainder of the Taylor series expansion of the exponential function.
The matrix-valued function \(\varPsi _{k}: \mathbb {C}\rightarrow \mathbb {C}^{n \times k}\) is a tensor structured function if it can be expressed as \(\varPsi _{k}(\theta )=(\psi _1(\theta ), \dots , \psi _{k}(\theta ))\), where each \(\psi _i\) is a tensor structured function. We denote the i–th column of \(\varPsi _{k}\) by \(\psi _i\). The structure induced by (3) is now, in a compact form
$$\begin{aligned} \varPsi _{k}(\theta ) =&P_{d-1}(\theta ) \left( \sum _{\ell =1}^r a_{:,:,\ell } \otimes z_\ell + \sum _{\ell =1}^p b_{:,:,\ell } \otimes w_\ell \right) + Y \exp _{d-1} (\theta S) C \end{aligned}$$
(5)
where \(a \in \mathbb {C}^{d \times k \times r}, b \in \mathbb {C}^{d \times k \times p}\) and \(C \in \mathbb {C}^{p \times k}\). In particular, \(\varPsi _{k}(\theta )\) is represented by the matrices \((Z, W, Y, S)\) and the coefficients \((a, b, C)\). We say that \(\varPsi _{k}(\theta )\) is orthogonal if the columns are orthonormal, i.e., \(\langle \psi _i, \psi _j\rangle = \delta _{i,j}\) for \(i,j=1,\dots ,k\). We use the scalar product consistent with the other papers on the infinite Arnoldi method [12, 14], i.e., if \(\psi (\theta )=\sum _{i=0}^\infty \theta ^ix_i\) and \(\varphi (\theta )=\sum _{i=0}^\infty \theta ^iy_i\), then
$$\begin{aligned} \langle \psi ,\varphi \rangle =\sum _{i=0}^\infty y_i^Hx_i. \end{aligned}$$
The computation of this scalar product and the induced norm for tensor structured functions (3) is further characterized in this section. We let \(\Vert F \Vert \) denote the induced norm of \(F: \mathbb {C}\rightarrow \mathbb {C}^{n \times p}\). Notice that polynomials are also tensor structured functions; more precisely, they are represented as (5) with \(C=0\). In particular, by the definition (4), we have the following relation between the induced norm of a polynomial and the Frobenius norm of its coefficients
$$\begin{aligned} \Vert P_{d-1} W\Vert =\Vert W\Vert _F \end{aligned}$$
(6)
for any \(W\in \mathbb {C}^{nd\times p}\).
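As an illustrative check (ours, not from the paper), for polynomials the scalar product above reduces to the Euclidean inner product of the stacked Taylor coefficients, which is precisely the content of (6):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 4

# Two polynomials psi(theta) = sum_i theta^i x_i and phi(theta) = sum_i theta^i y_i,
# stored row-wise as (d, n) coefficient arrays.
x = rng.standard_normal((d, n)) + 1j * rng.standard_normal((d, n))
y = rng.standard_normal((d, n)) + 1j * rng.standard_normal((d, n))

# <psi, phi> = sum_i y_i^H x_i equals the inner product of the stacked
# coefficient vectors vec(x_0, ..., x_{d-1}).
ip = sum(y[i].conj() @ x[i] for i in range(d))
assert np.isclose(ip, y.conj().ravel() @ x.ravel())

# Hence the induced norm of a polynomial is the Frobenius norm of its
# coefficients, i.e., relation (6).
norm_psi = np.sqrt(np.vdot(x, x).real)
assert np.isclose(norm_psi, np.linalg.norm(x))
```
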

Similar to many restart strategies for linear eigenvalue problems, our approach is based on computation, representation and manipulation of an Arnoldi-type factorization. For our infinite dimensional operator, the analogous Arnoldi-type factorization is defined as follows.

Definition 2

(TIAR factorization) Let \(\mathscr {B}\) be the operator defined in (2). Let \(\varPsi _{k+1}: \mathbb {C}\rightarrow \mathbb {C}^{n \times (k+1)}\) be a tensor structured function with orthonormal columns and let \(\underline{H}_{k} \in \mathbb {C}^{(k+1) \times k}\) be a Hessenberg matrix with positive elements in the sub-diagonal. The pair \((\varPsi _{k+1},\underline{H}_{k})\) is a TIAR factorization of length k if
$$\begin{aligned} \mathscr {B}\varPsi _{k}(\theta ) = \varPsi _{k+1}(\theta ) \underline{H}_{k}. \end{aligned}$$
(7)

2.1 Action of \(\mathscr {B}\) on tensor structured functions

The TIAR factorization in Definition 2 involves the action of the operator \(\mathscr {B}\). In order to characterize the action of the operator \(\mathscr {B}\) on tensor structured functions (3), we need the function \(\mathbb {M}_d:\mathbb {C}^{n\times p} \times \mathbb {C}^{p\times p}\rightarrow \mathbb {C}^{n\times p}\), defined in [12] as
$$\begin{aligned} \mathbb {M}_d(Y,S) := \sum _{i=d+1}^{\infty } \frac{M_i Y S^i}{i!}, \end{aligned}$$
(8)
where we introduced the notation \(M_i:=M^{(i)}(0)\).

Theorem 3

(Action of \(\mathscr {B}\)) Let \((Z,W,Y,S)\in \mathbb {C}^{n\times r}\times \mathbb {C}^{n\times p}\times \mathbb {C}^{n\times p}\times \mathbb {C}^{p\times p}\) be the matrices and \(({\bar{a}}, {\bar{b}}, {\bar{c}}) \in \mathbb {C}^{d \times r}\times \mathbb {C}^{d \times p}\times \mathbb {C}^{p}\) be the coefficients that represent \(\psi (\theta )\) given in (3). Suppose \(\lambda (S) \subset \varOmega {\setminus } \left\{ 0 \right\} \), let \({\tilde{c}} := S^{-1} {\bar{c}}\) and
$$\begin{aligned} {\tilde{z}}: = -M_0^{-1} \left[ \sum _{i=1}^d M_i \left( \sum _{\ell =1}^r \frac{{\bar{a}}_{i,\ell }}{i} z_\ell + \sum _{\ell =1}^p \frac{{\bar{b}}_{i,\ell }}{i} w_\ell \right) + \mathbb {M}_d(Y,S){\tilde{c}} \right] . \end{aligned}$$
(9)
Under the assumption that
$$\begin{aligned} {\tilde{z}} \not \in {\text {span}}(z_1, \dots , z_r, w_1,\dots , w_p), \end{aligned}$$
(10)
let \(z_{r+1}\) be the normalized result of the Gram–Schmidt orthogonalization of \({\tilde{z}}\) against \(z_1, \dots , z_r, w_1, \dots , w_p\) and \({\tilde{a}}_{1,\ell }\) and \({\tilde{b}}_{1,\ell }\) be the orthonormalization coefficients, i.e.,
$$\begin{aligned} {\tilde{z}} = \sum _{\ell =1}^{r+1} {\tilde{a}}_{1,\ell } z_\ell + \sum _{\ell =1}^{p} {\tilde{b}}_{1,\ell } w_\ell . \end{aligned}$$
(11)
Then, the action of \(\mathscr {B}\) on the tensor structured function defined by (3) is
$$\begin{aligned} \mathscr {B}\psi (\theta ) = P_{d}(\theta ) \left( \sum _{\ell =1}^{r+1} {\tilde{a}}_{:,\ell } \otimes z_\ell + \sum _{\ell =1}^p {\tilde{b}}_{:,\ell } \otimes w_\ell \right) + Y \exp _{d} (\theta S) {\tilde{c}} \end{aligned}$$
(12)
where \({\tilde{a}} \in \mathbb {C}^{(d+1) \times (r+1)}\) and \({\tilde{b}} \in \mathbb {C}^{(d+1) \times p}\) are defined as follows
$$\begin{aligned}&{\tilde{a}}_{i,r+1} := 0, \quad i=2, \dots , d+1 , \end{aligned}$$
(13a)
$$\begin{aligned}&{\tilde{a}}_{i+1,\ell } := {\bar{a}}_{i,\ell }/i, \quad i=1, \dots , d \ ; \ \ell = 1, \dots , r , \end{aligned}$$
(13b)
$$\begin{aligned}&{\tilde{b}}_{i+1,\ell } := {\bar{b}}_{i,\ell }/i, \quad i=1, \dots , d \ ; \ \ell = 1, \dots , p. \end{aligned}$$
(13c)

Proof

With the notation
$$\begin{aligned} x_i := \sum _{\ell =1}^r {\bar{a}}_{i+1,\ell } z_\ell + \sum _{\ell =1}^p {\bar{b}}_{i+1,\ell } w_\ell , \quad i=0, \dots , d-1, \end{aligned}$$
(14)
and \(x:={{\text {vec}}}(x_0, \dots , x_{d-1}) \in \mathbb {C}^{dn}, \psi (\theta )\) defined in (3) can be expressed as
$$\begin{aligned} \psi (\theta ) = P_{d-1}(\theta ) x + Y \exp _{d-1} (\theta S) {\bar{c}}. \end{aligned}$$
(15)
By invoking [12, Theorem 4.2] and using (14), we can express the action of the operator as
$$\begin{aligned} \mathscr {B}\psi (\theta ) = P_{d}(\theta ) x_+ + Y \exp _{d} (\theta S) {\tilde{c}} \end{aligned}$$
(16)
where \(x_+:={{\text {vec}}}(x_{+,0}, \dots , x_{+,d}) \in \mathbb {C}^{(d+1)n}\) with
$$\begin{aligned} x_{+,i}&:= \sum _{\ell =1}^r \frac{\bar{a}_{i,\ell }}{i} z_\ell + \sum _{\ell =1}^p \frac{\bar{b}_{i,\ell }}{i} w_\ell , \quad i = 1, \dots , d, \\ x_{+,0}&:= -M_0^{-1} \left( \sum _{i=1}^d M_i x_{+,i} + \mathbb {M}_d(Y,S)\tilde{c} \right) . \end{aligned}$$
Substituting \(x_{+,i}\) into \(x_{+,0}\) we obtain \(x_{+,0} = {\tilde{z}} \) given in (9). Using (13) and (11) we can express \(x_+\) in terms of \({\tilde{a}}\) and \({\tilde{b}}\) and we conclude by substituting this expression for \(x_+\) in (16).
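The updates (13) amount to a weighted shift-down of the polynomial coefficients (reflecting the integration in (2)). A minimal numpy sketch, with hypothetical names, assuming the first-row coefficients from the Gram–Schmidt step (11) are already available:

```python
import numpy as np

def shift_coefficients(a_bar, b_bar, a1, b1):
    """Apply the updates (13a)-(13c): divide coefficient row i by i and
    shift it to row i+1, then place the orthogonalization coefficients
    of z~ from (11) in the first row.
    a_bar: (d, r), b_bar: (d, p); a1: (r+1,), b1: (p,)."""
    d, r = a_bar.shape
    p = b_bar.shape[1]
    a_tld = np.zeros((d + 1, r + 1), dtype=complex)
    b_tld = np.zeros((d + 1, p), dtype=complex)
    i = np.arange(1, d + 1)[:, None]     # the weights 1, ..., d
    a_tld[1:, :r] = a_bar / i            # (13b): a~_{i+1,l} = a_bar_{i,l}/i
    b_tld[1:, :] = b_bar / i             # (13c)
    a_tld[0, :] = a1                     # first row from (11)
    b_tld[0, :] = b1
    return a_tld, b_tld                  # (13a): column r+1 is zero below row 1
```
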

Remark 4

The assumption (10) can only be satisfied if \(r+p \le n\). This is the generic case that we consider in this paper. Our focus is on large-scale NEPs and, in Sect. 5.1, we introduce approximations that prevent r from becoming large. The hypothesis \(\lambda (S) \subseteq \varOmega \) is necessary in order to define \(\mathbb {M}_d(Y,S)\), which is used to compute \({\tilde{z}}\) in Eq. (9).

2.2 Orthogonality

The tensor structured functions in the TIAR factorization (Definition 2) are orthonormal. In order to impose the orthogonality in our algorithms, we now present a theory which characterizes the orthonormality of tensor structured functions in terms of their coefficients. In particular, we derive the theory necessary to carry out the Gram–Schmidt orthogonalization. Since most of the orthogonalization procedures involve linear combinations of vectors, we start with the observation that linear combinations of tensor structured functions carry over directly to the coefficients.

Observation 5

(Linearity with respect to coefficients) Let \(\varPsi _k(\theta )\) be the tensor structured function represented by the matrices \((Z,W,Y,S) \in \mathbb {C}^{n\times r}\times \mathbb {C}^{n\times p}\times \mathbb {C}^{n\times p}\times \mathbb {C}^{p\times p}\) with coefficients \((a,b,C) \in \mathbb {C}^{d \times k \times r}\times \mathbb {C}^{d \times k \times p}\times \mathbb {C}^{p \times k}\), and let \({\tilde{\varPsi }}_{{\tilde{k}}}(\theta )\) be represented by the same matrices but with coefficients \(({\tilde{a}}, {\tilde{b}}, {\tilde{C}}) \in \mathbb {C}^{d \times {\tilde{k}} \times r}\times \mathbb {C}^{d \times {\tilde{k}} \times p}\times \mathbb {C}^{p \times {\tilde{k}}}\). Let \(M \in \mathbb {C}^{k \times t}\) and \(N \in \mathbb {C}^{{\tilde{k}} \times t}\). Then the function \({\hat{\varPsi }}_{{\hat{k}}}(\theta ) := \varPsi _k(\theta )M + {\tilde{\varPsi }}_{{\tilde{k}}}(\theta ) N\) is also a tensor structured function represented by the same matrices, with coefficients \(({\hat{a}}, {\hat{b}}, {\hat{C}})\), where for \(\ell =1, \dots , r\) and \(\ell '=1, \dots , p\)
$$\begin{aligned} {\hat{a}}_{:,:,\ell } = a_{:,:,\ell }M + {\tilde{a}}_{:,:,\ell }N, \qquad { {\hat{b}}_{:,:,\ell '} = b_{:,:,\ell '}M + {\tilde{b}}_{:,:,\ell '}N, } \qquad {\hat{C}} = CM+{\tilde{C}}N. \end{aligned}$$
The above relation can be expressed also in terms of the unfolding in the first dimension, in particular for \(i=1, \dots , d\) it holds
$$\begin{aligned} {\hat{a}}_{i,:,:}^T = a_{i,:,:}^T M + {\tilde{a}}_{i,:,:}^T N, \qquad {\hat{b}}_{i,:,:}^T = b_{i,:,:}^T M + {\tilde{b}}_{i,:,:}^T N. \end{aligned}$$

Theorem 6

(Gram–Schmidt orthogonalization) Let \((Z,W,Y,S)\in \mathbb {C}^{n\times r}\times \mathbb {C}^{n\times p}\times \mathbb {C}^{n\times p}\times \mathbb {C}^{p\times p}\) be the matrices and \(({\bar{a}}, {\bar{b}}, {\bar{c}}) \in \mathbb {C}^{d \times r}\times \mathbb {C}^{d \times p}\times \mathbb {C}^{p}\) and \((a, b, C) \in \mathbb {C}^{d \times k \times r}\times \mathbb {C}^{d \times k \times p}\times \mathbb {C}^{p \times k}\) be the coefficients that represent \(\psi (\theta )\) given in (3) and \(\varPsi _k(\theta )\) given in (5). Assume that \(\varPsi _k(\theta )\) is orthogonal. Let
$$\begin{aligned} h = \sum _{\ell =1}^r (a_{:,:,\ell })^H {\bar{a}}_{:,\ell } + \sum _{\ell =1}^p (b_{:,:,\ell })^H {\bar{b}}_{:,\ell } + \sum _{i=d}^{\infty } C^H \frac{(S^i)^H Y^H Y S^i}{(i!)^2} {\bar{c}}. \end{aligned}$$
(17)
The normalized result of the Gram–Schmidt orthogonalization of \(\psi (\theta )\) against the columns of \(\varPsi _k(\theta )\) is
$$\begin{aligned} \psi ^\perp (\theta ) = P_{d-1}(\theta ) \left( \sum _{\ell =1}^r a_{:,\ell }^\perp \otimes z_\ell + \sum _{\ell =1}^p b_{:,\ell }^\perp \otimes w_\ell \right) + Y \exp _{d-1} (\theta S) c^\perp \end{aligned}$$
where
$$\begin{aligned} c^\perp&= {\bar{c}}-C h, \end{aligned}$$
(18a)
$$\begin{aligned} a_{:,\ell }^\perp&= {\bar{a}}_{:,\ell }- a_{:,:,\ell } h, \quad \ell =1, \dots , r, \end{aligned}$$
(18b)
$$\begin{aligned} b_{:,\ell }^\perp&= {\bar{b}}_{:,\ell }- b_{:,:,\ell } h, \quad \ell =1, \dots , p. \end{aligned}$$
(18c)
The vector h contains the orthogonalization coefficients, i.e., \(h_j = \langle \psi , \psi _j\rangle \). Moreover
$$\begin{aligned} \Vert \psi ^\perp \Vert := \beta = \sqrt{ \Vert a^\perp \Vert _F^2 + \Vert b^\perp \Vert _F^2 + \sum _{i=d}^{\infty } \frac{ \left( c^\perp \right) ^H \left( S^i\right) ^H Y^H Y S^i c^\perp }{(i!)^2}}. \end{aligned}$$
(19)

Proof

Let us define \(h_j:=\langle \psi , \psi _j\rangle \) for \(j=1, \dots , k\). The orthogonal complement, computed with the Gram–Schmidt process, is \(\psi ^\perp (\theta ) = \psi (\theta ) - \varPsi _k(\theta ) h\). Using Observation 5 we directly obtain (18).

We express \(\psi (\theta )\) as in (15) and the columns of \(\varPsi _k\) as
$$\begin{aligned} \psi _j(\theta ) = P_{d-1}(\theta ) x^{(j)} + Y \exp _{d-1} (\theta S) c_j \end{aligned}$$
(20)
where \(x^{(j)}:={{\text {vec}}}(x_0^{(j)}, \dots , x_{d-1}^{(j)}) \in \mathbb {C}^{dn}\), with
$$\begin{aligned} x_i^{(j)} := \sum _{\ell =1}^r a_{i+1,j,\ell } z_\ell + \sum _{\ell =1}^p b_{i+1,j,\ell } w_\ell ,&i=0, \dots , d-1. \end{aligned}$$
(21)
By applying [12, Equation (4.32)] we obtain
$$\begin{aligned} h_j = \sum _{i=0}^{d-1} (x_i^{(j)})^H x_i + c_j^H \sum _{i=d}^{\infty } \frac{(S^i)^H Y^H Y S^i}{(i!)^2} {\bar{c}},\quad j=1, \dots , k. \end{aligned}$$
(22)
We now substitute (14) and (21) in (22) and, by using the orthonormality of the vectors \(z_1, \dots , z_r, w_1, \dots , w_p\), we find that
$$\begin{aligned} h_j = \sum _{\ell =1}^r (a_{:,j,\ell })^H {\bar{a}}_{:,\ell } + \sum _{\ell =1}^p (b_{:,j,\ell })^H {\bar{b}}_{:,\ell } + \sum _{i=d}^{\infty } c_j^H \frac{(S^i)^H Y^H Y S^i}{(i!)^2} {\bar{c}}, \qquad j=1, \dots , k. \end{aligned}$$
Those are the elements of the right-hand side of (17). Using that \( \Vert \psi ^\perp \Vert ^2 = \langle \psi ^\perp , \psi ^\perp \rangle \), and repeating the same reasoning, we have
$$\begin{aligned} \Vert \psi ^\perp \Vert ^2&= \sum _{\ell =1}^r (a^\perp _{:,\ell })^H a^\perp _{:,\ell } + \sum _{\ell =1}^p (b^\perp _{:,\ell })^H b^\perp _{:,\ell } + \sum _{i=d}^{\infty } \frac{ \left( c^\perp \right) ^H \left( S^i\right) ^H Y^H Y S^i c^\perp }{(i!)^2} \end{aligned}$$
which proves (19).

3 Restarting for TIAR in an abstract setting

3.1 A TIAR expansion algorithm in finite dimension

One algorithmic component common in many restart procedures is the expansion of an Arnoldi-type factorization. The standard way to expand Arnoldi-type factorizations (as, e.g., described in [29, Sect. 3]) requires the computation of the action of the operator/matrix and orthogonalization. We now show how we can carry out an expansion of the infinite dimensional TIAR factorization (7) by only using operations on matrices and vectors (of finite dimension).

In the previous section we characterized the action of the operator \(\mathscr {B}\) and orthogonalization of tensor structured functions. Notice that we have derived the orthogonalization procedure only for tensor structured functions that have the same degree in the polynomial part. The expansion of a TIAR factorization \((\varPsi _{k},\underline{H}_{k-1})\), involves the orthogonalization of the tensor structured function \(\mathscr {B}\psi _{k}\) (computed using the Theorem 3) against the columns of \(\varPsi _{k}\). If the degree of \(\varPsi _{k}\) is \(d-1\) then the degree of \(\mathscr {B}\psi _{k}\) is d. Therefore, in order to perform the orthogonalization, we have to represent \(\varPsi _{k}\) as a tensor structured function with degree d. Starting from (5) we rewrite \(\varPsi _{k}\) as
$$\begin{aligned} \varPsi _{k}(\theta ) = P_{d-1}(\theta ) \left( \sum _{\ell =1}^r a_{:,:,\ell } \otimes z_\ell + \sum _{\ell =1}^p b_{:,:,\ell } \otimes w_\ell \right) + \frac{Y S^{d}C}{d!} \theta ^d + Y \exp _{d} (\theta S) C. \end{aligned}$$
(23)
We define
$$\begin{aligned} E:=\frac{W^H Y S^{d}C}{d!}, \qquad \begin{array}{ll} a_{d+1,j,\ell } := 0, & \quad \ell =1,\dots , r+1, \\ b_{d+1,j,\ell } := E_{\ell , j}, & \quad \ell =1,\dots , p, \end{array} \end{aligned}$$
(24)
for \(j=1,\dots , k\). Since \({\text {span}}(W)={\text {span}}(Y)\) and W has orthonormal columns, we have \(Y=WW^H Y\). Hence, using this relation and (24), the function \(\varPsi _{k}\) in (23) can be expressed as
$$\begin{aligned} \varPsi _k(\theta ) = P_{d}(\theta ) \left( \sum _{\ell =1}^{r+1} a_{:,:,\ell } \otimes z_\ell + \sum _{\ell =1}^p b_{:,:,\ell } \otimes w_\ell \right) + Y \exp _{d} (\theta S) C . \end{aligned}$$
These results can be directly combined to expand a TIAR factorization. The resulting algorithm is summarized in Algorithm 1. The action of the operator \(\mathscr {B}\), described in Theorem 3, is expressed in Steps 2–4. Step 5 corresponds to increasing the degree of the TIAR factorization as described in (23) and (24). The orthogonalization of the new function, carried out by Theorem 6, is expressed in Steps 6–8, and the orthonormalization coefficients are then stored in the Hessenberg matrix \(\underline{H}_{{\tilde{k}}}\). We can truncate the sums in (8), (17) and (19) analogously to [12]. Due to the representation of \(\varPsi _k\) as a tensor structured function, the expansion with one column corresponds to an expansion of all the coefficients representing \(\varPsi _k\). This expansion is visualized in Fig. 1.
Fig. 1

Graphical illustration of the expansion of the tensor structured function that represents the TIAR factorization in Algorithm 1

3.2 The Krylov–Schur decomposition for TIAR factorizations

We briefly recall the reasoning for the Krylov–Schur type restarting [29]. This procedure can be carried out with operations on matrices and vectors of finite size. Let \((\varPsi _{m+1}, \underline{H}_m)\) be a TIAR factorization. Let \(P \in \mathbb {C}^{m \times m}\) be such that \(P^H H_m P\) is triangular (ordered Schur factorization). Then,
$$\begin{aligned} \mathscr {B}{\hat{\varPsi }}_m = {\hat{\varPsi }}_{m+1} \begin{pmatrix} R_{1,1} & R_{1,2} & R_{1,3} \\ & R_{2,2} & R_{2,3} \\ & & R_{3,3} \\ a_1^H & a_2^H & a_3^H \end{pmatrix}, \end{aligned}$$
(25)
where \({\hat{\varPsi }}_{m+1} = \left[ \varPsi _{m} P, \ \psi _{m+1} \right] \). The matrix P is selected in a way that the matrix \(R_{1,1} \in \mathbb {C}^{p_\ell \times p_\ell }\) contains the converged Ritz values, the matrix \(R_{2,2} \in \mathbb {C}^{(p-p_\ell ) \times (p-p_\ell )}\) contains the wanted Ritz values and the matrix \(R_{3,3} \in \mathbb {C}^{(m-p) \times (m-p)}\) contains the Ritz values that we want to purge. From (25) we find that
$$\begin{aligned} \mathscr {B}{\tilde{\varPsi }}_p = {\tilde{\varPsi }}_{p+1} \begin{pmatrix} R_{1,1} & R_{1,2} \\ & R_{2,2} \\ a_1^H & a_2^H \end{pmatrix}, \end{aligned}$$
(26)
where \({\tilde{\varPsi }}_{p+1} := [ {\hat{\varPsi }}_{m} I_{m,p}, \ \psi _{m+1} ] = [{\hat{\varPsi }}_p, \ \psi _{m+1}]\). By using a product of Householder reflectors, we compute a matrix \(Q \in \mathbb {C}^{p \times p}\) such that
$$\begin{aligned} \mathscr {B}{\bar{\varPsi }}_p = {\bar{\varPsi }}_{p+1} \begin{pmatrix} R_{1,1} & F \\ & H \\ a_1^H & \beta e_{p-p_\ell }^H \end{pmatrix}, \end{aligned}$$
(27)
where \({\bar{\varPsi }}_{p+1} = [{\tilde{\varPsi }}_{p} Q, \ \psi _{m+1}]\) and H has upper Hessenberg form. Since we want to lock the Ritz values in the matrix \(R_{1,1}\), we replace the vector \(a_1\) in (27) with zeros, introducing an error \(\mathscr {O}(\Vert a_1 \Vert )\). With this approximation, (27) is the wanted TIAR factorization of length p.

Observation 7

In the TIAR factorization (27), \(({\bar{\varPsi }}_{p_\ell }, R_{1,1})\) is an approximation of an invariant pair, i.e., \(\mathscr {B}{\bar{\varPsi }}_{p_\ell } \approx {\bar{\varPsi }}_{p_\ell } R_{1,1}\). Moreover \(({\bar{\varPsi }}_{p_\ell }(0), R_{1,1}^{-1})\) is an approximation of an invariant pair of the original NEP in the sense of [16, Definition 1], see [12, Theorem 2.2].

3.3 Two structured restarting approaches

The standard restart approach for TIAR using Krylov–Schur type restarting, as described in the previous section, involves expansions and manipulations of the TIAR factorization. Due to the linearity of tensor structured functions with respect to the coefficients, described in Observation 5, the manipulations for \(\varPsi _{m}\) leading to \(\varPsi _p\) can be directly carried out on the coefficients representing \(\varPsi _m\). Unfortunately, due to the implicit representation of \(\varPsi _m\), the memory requirements are not substantially reduced since the basis matrix \(Z\in \mathbb {C}^{n\times r}\) is not modified in the manipulations. The size of the basis matrix Z is the same before and after the restart.

We propose two ways of further exploiting the structure of the functions in order to avoid a dramatic increase in the required memory resources.
  • Semi-explicit restart (Sect. 4): An invariant pair can be completely represented by exponentials and therefore does not contribute to the memory requirement for Z. The fact that invariant pairs are exponentials was exploited in the restart in [12]. We show how the ideas in [12] can be carried over to tensor structured functions. More precisely, the adaptation of [12] involves restarting the iteration with a locked pair, i.e., only the first \(p_\ell \) columns of (27), and a function f constructed in a particular way. The approach is outlined in Algorithm 2 with details specified in Sect. 4.

  • Implicit restart (Sect. 5): By only representing polynomials, we show that the TIAR factorization has a particular structure such that it can be accurately approximated. This allows us to carry out a full implicit restart, and subsequently approximate the TIAR factorization reducing the size of the matrix Z. The adaptation is given in Algorithm 3. The approximation of the TIAR factorization in Step 6 is specified in Algorithm 4 and derived in Sect. 5, including an error analysis.

4 Tensor structure exploitation for the semi-explicit restart

The IAR restart approach in [12] is based on representing functions as sums of exponentials and polynomials. An attractive feature of that approach is that the invariant pairs can be exactly represented and locking can be efficiently incorporated. Due to the explicit storage of polynomial coefficients, the approach still requires considerable memory. We here show that, by representing the functions implicitly as tensor structured functions (3), we can maintain all the advantages while improving performance (both in memory and CPU-time). This construction is equivalent to [12], but more efficient.

The expansion of the TIAR factorization with tensor structured functions (as described in Algorithm 1), combined with the locking procedure (as described in Sect. 3.2), and imposing the structure on the invariant pair as in [12], results in Algorithm 2. Steps 3–10 follow the procedure described in [12] adapted for tensor structured functions. In particular, Steps 3–6 consist of extracting and imposing the structure on the invariant pair \(({\bar{\varPsi }}, R_{1,1})\). In Steps 7–9 a new starting function f is selected and orthogonalized with respect to \({\bar{\varPsi }}\), and in Step 10 the new TIAR factorization is defined.

The computation of the invariant pair \(({\bar{\varPsi }}, R_{1,1})\) and of the new starting function f involves the matrix \({\hat{Y}}\) [12, Equation (5.11)]. This matrix can be extracted from the tensor structured representation as follows. By using Observation 5 with \(M:=P \ I_{m,p} \ Q\), we obtain
$$\begin{aligned} {\hat{Y}}&:= \varPsi _{m} (0) M = P_{d-1}(0) \left( \sum _{\ell =1}^r a_{:,:,\ell } M \otimes z_\ell + \sum _{\ell =1}^p b_{:,:,\ell } M \otimes w_\ell \right) + Y \exp _{d-1} (0) { C M } \end{aligned}$$
(28a)
$$\begin{aligned}&= \sum _{\ell =1}^r a_{1,:,\ell } M \otimes z_\ell + \sum _{\ell =1}^p b_{1,:,\ell } M \otimes w_\ell . \end{aligned}$$
(28b)
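The simplification from (28a) to (28b) can be illustrated numerically: evaluating a tensor structured function at \(\theta = 0\) annihilates every polynomial basis function except the constant one, so only the first slice of the coefficient tensor survives. The following numpy sketch checks this for a purely polynomial function (the exponential part b is omitted); the function names are ours and a scaled monomial basis \(\theta ^i/i!\) is assumed for the reference evaluation.

```python
import math
import numpy as np

def eval_at_zero(a, Z, M):
    """Evaluate the purely polynomial tensor structured function at theta = 0.

    a : (d, m, r) coefficient tensor, Z : (n, r) basis matrix, M : (m, p).
    At theta = 0 every basis polynomial vanishes except the constant one,
    so only the first slice a[0] contributes, as in (28b)."""
    return Z @ a[0].T @ M                     # n x p

def eval_poly(a, Z, M, theta):
    """Reference evaluation of the full sum, scaled monomial basis theta^i/i!."""
    return sum((theta**i / math.factorial(i)) * (Z @ a[i].T @ M)
               for i in range(a.shape[0]))

rng = np.random.default_rng(0)
n, r, m, p, d = 8, 3, 5, 2, 4
a = rng.standard_normal((d, m, r))
Z = rng.standard_normal((n, r))
M = rng.standard_normal((m, p))

Y_hat = eval_at_zero(a, Z, M)
assert np.allclose(Y_hat, eval_poly(a, Z, M, 0.0))
```

Note that the shortcut never touches the higher-degree slices, which is what makes the extraction of \({\hat{Y}}\) cheap in the tensor representation.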

5 Tensor structure exploitation for the implicit polynomial restart

In contrast to the procedure in Sect. 4, where the main idea was to do locking with exponentials and restart with a factorization of length \(p_\ell \), we now propose a fully implicit procedure involving a factorization of length p. In this setting we use \(C=0\), i.e., we only represent polynomials with the tensor structured functions. This allows us to derive theory for the structure of the coefficient matrix, which shows how to approximate the TIAR factorization. This procedure is summarized in Algorithm 3.

The approximation in Step 6 is done in order to reduce the growth in memory requirements for the representation. The approximation technique is derived in the following subsections and summarized in Algorithm 4.

Our approximation approach is based on a truncated singular value decomposition and a degree reduction. A compression with a truncated singular value decomposition was also used for the compact representations in CORK [35] and TOAR [17]. Our specific setting allows us to prove bounds on the error introduced by the approximations (Sects. 5.1, 5.2). We also show the effectiveness by proving a bound on the decay of the singular values (Sect. 5.3).

Similar to the semi-explicit restart, the approximations that we use in the implicit restart are based on the fact that the invariant pairs of \(\mathscr {B}\) are represented by exponential functions. In particular, if \((\varPsi , \varLambda ^{-1})\) is an invariant pair of \(\mathscr {B}\), then \(\varPsi (\theta ) = Y \exp (\varLambda \theta )\), see [12, Theorem 2.2]. Hence \(\varPsi \), expressed in a monomial basis, corresponds to a Taylor series expansion with coefficients having a fast decay. In this section we illustrate that, under certain hypotheses, the functions generated by Algorithm 1 and Algorithm 3 have similar decay properties. We introduce a definition that describes the decay in the magnitude of the coefficients of the power series, represented by the tensor structured function \(\varPsi _k\) (5), in terms of the representing polynomial coefficients a and b, i.e.,
$$\begin{aligned} C(\varPsi _k) := \min \left\{ \beta \in \mathbb {R}: \Vert a_{i,:,:} \Vert _F + \Vert b_{i,:,:} \Vert _F \le \frac{ \beta }{(i-1)!} \ \ \text { for } i=1, \dots , d \right\} . \end{aligned}$$
(29)
Observe that, if \((\varPsi , \varLambda ^{-1})\) is an invariant pair, due to the decay of the power series coefficients of the exponential function, we have
$$\begin{aligned} C(\varPsi )=\Vert \varLambda \Vert _F. \end{aligned}$$
(30)
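The quantity (29) is straightforward to compute from the representing coefficients. The sketch below (our own helper, not from the paper) evaluates the decay constant as the maximum of the factorially weighted slice norms, and checks it on a tensor built with exact factorial decay.

```python
import math
import numpy as np

def decay_constant(a, b=None):
    """Decay constant (29): the smallest beta such that
    ||a_i||_F + ||b_i||_F <= beta/(i-1)! for i = 1, ..., d.
    Slices are 0-indexed here, so slice i carries the weight i!."""
    terms = []
    for i in range(a.shape[0]):
        s = np.linalg.norm(a[i])              # Frobenius norm of slice i
        if b is not None:
            s += np.linalg.norm(b[i])
        terms.append(s * math.factorial(i))
    return max(terms)

# Slices with exact factorial decay ||a_i||_F = 2/i! give C(Psi) = 2.
d, k1, r = 6, 4, 3
a = np.zeros((d, k1, r))
for i in range(d):
    a[i, 0, 0] = 2.0 / math.factorial(i)
C = decay_constant(a)
```

With this normalization, a small decay constant certifies that the trailing slices are negligible, which is precisely what the truncations in Sects. 5.1 and 5.2 exploit.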

Theorem 8

Let \(\psi \) be a tensor structured function (3) such that \({\bar{c}}=0\) and \(\Vert \psi \Vert =1\). Let \( Z \in \mathbb {C}^{n \times r}\) be a matrix and \( a \in \mathbb {C}^{(d+k+1) \times (k+1) \times r}\) be a set of coefficients that represent the tensor structured function \(\varPsi _{k+1}\) and \(\underline{H}_{k} \in \mathbb {C}^{(k+1) \times k}\) be such that \((\varPsi _{k+1}, \underline{H}_{k})\) is a TIAR factorization obtained by using \(\psi \) as a starting function. Then
$$\begin{aligned} C(\varPsi _{k+1}) \le \frac{\sqrt{d-1} \, \kappa (L_k)}{\Vert L_k \Vert _F} C(\psi ) + \kappa (L_k) , \end{aligned}$$
(31)
where \( L_k:= [v, C_{k+1} v, \dots , C_{k+1}^{k} v]\), \(C_{k+1}\) is defined in [14, Eq. 29], \(v=\sum _{\ell =1}^r {\bar{a}}_{:,\ell } \otimes z_\ell \) and \(\kappa \) is the condition number with respect to the Frobenius norm.

Proof

Let \(\Phi _{k+1}(\theta ) = \left( \psi (\theta ), \mathscr {B}\psi (\theta ), \dots , \mathscr {B}^k \psi (\theta ) \right) \). By applying Theorem 3, we obtain
$$\begin{aligned} \Phi _{k+1}(\theta ) = P_{k}(\theta ) \left( \sum _{\ell =1}^r {\hat{a}}_{:,:,\ell } \otimes z_\ell \right) \end{aligned}$$
where, for \(\ell =1,\dots , r\), we have \({\hat{a}}_{:,:,\ell }=D T_\ell \in \mathbb {C}^{(d+k+1) \times (k+1)}\) with \(D \in \mathbb {C}^{(d+k+1) \times (d+k+1)}\) being a diagonal matrix with elements \(D_{i,i}=1/(i-1)!\) and \(T_\ell \in \mathbb {C}^{(d+k+1) \times (k+1)}\) a Toeplitz matrix with leading column \([{\bar{a}}_{1,\ell }, 1! {\bar{a}}_{2,\ell }, \dots , (d-1)! {\bar{a}}_{d,\ell }, 0, \dots , 0]\) and with leading row \([{\bar{a}}_{1,\ell }, {\hat{a}}_{1,2,\ell }, \dots , {\hat{a}}_{1,k+1,\ell }]\). The structure of \({\hat{a}}_{:,:,\ell }\) implies that
$$\begin{aligned} \Vert {\hat{a}}_{i,:,\ell } \Vert _F^2&= \frac{1}{(i-1)!^2} \left( \sum _{t=2}^{\min (d,i)} { {\bar{a}}_{t,\ell }^2} (t-1)!^2 + \sum _{t=1}^{k-i+2} {\hat{a}}_{1,t,\ell }^2 \right) \\&\le \frac{(d-1) C(\psi )^2 + \Vert {\hat{a}}_{1,:,\ell } \Vert _F^2 }{(i-1)!^2} \end{aligned}$$
and by using the properties of the Frobenius norm we have
$$\begin{aligned} \Vert {\hat{a}}_{i,:,:} \Vert _F^2 \le \frac{(d-1) C(\psi )^2 + \Vert {\hat{a}}_{1,:,:} \Vert _F^2 }{(i-1)!^2}. \end{aligned}$$
(32)
Since \((\varPsi _{k+1}, \underline{H}_{k})\) forms a TIAR factorization, it holds that \({\text {span}}\left( \Phi _{k+1} \right) = {\text {span}}\left( \varPsi _{k+1} \right) \). Therefore there exists an invertible matrix \(R \in \mathbb {C}^{(k+1) \times (k+1)}\) such that \(\Phi _{k+1} = \varPsi _{k+1} R\). By using Observation 5 and the sub-multiplicativity of the Frobenius norm we have that for \(i=1, \dots , k+1\)
$$\begin{aligned} \Vert a_{i,:,:} \Vert _F = \Vert {\hat{a}}_{i,:,:} { R^{-1} } \Vert _F \le \Vert {\hat{a}}_{i,:,:} \Vert _F \Vert { R^{-1} } \Vert _F. \end{aligned}$$
(33)
By combining (33), (32) and sub-additivity of the square root we obtain
$$\begin{aligned} \Vert a_{i,:,:} \Vert _F&\le \frac{\sqrt{(d-1) C(\psi )^2 + \Vert {\hat{a}}_{1,:,:} \Vert _F^2} }{(i-1)!} \Vert { R^{-1} } \Vert _F\nonumber \\&\le \frac{\sqrt{d-1} C(\psi ) \Vert { R^{-1} } \Vert _F + \Vert a_{1,:,:} \Vert _F \kappa (R) }{(i-1)!}. \end{aligned}$$
(34)
Since \(\varPsi _{k+1}\) is orthonormal, by (6) we have that \(\Vert a_{1,:,:}\Vert _F \le 1\).

Let \(L_k^\dagger \) denote the pseudo-inverse of \(L_k\). In order to show (31), we now show that \(\Vert R^{-1} \Vert _F =\Vert L_k^\dagger \Vert _F\) and \(\kappa (R) = \kappa (L_k)\), where \(\kappa (L_k)=\Vert L_k\Vert _F\Vert L_k^\dagger \Vert _F\). Due to the equivalence of TIAR and IAR and the companion matrix interpretation of IAR [14, Theorem 6], TIAR is equivalent to applying the Arnoldi method to the matrix \(C_{k+1}\) with starting vector \(v=\sum _{\ell =1}^r {\bar{a}}_{:,\ell } \otimes z_\ell \). More precisely, the relation \(\Phi _{k+1} = \varPsi _{k+1} R\) can be written in terms of vectors as \(L_{k} = V_{k+1} R\), where the first column of both \(V_{k+1}\) and \(L_{k}\) is v and \(L_k=[v, C_{k+1} v, \dots , C_{k+1}^{k}v]\). By using the orthogonality of \(V_{k+1}\) we conclude that \(\Vert R^{-1} \Vert _F =\Vert L_k^\dagger \Vert _F\) and \(\kappa (R) =\kappa (L_k)\). \(\square \)

The approximations that we introduce in the next sections (required in Step 6 of Algorithm 3) are based on the assumption that the tensor structured functions \(\varPsi ^{(j)}\) are such that the decay constant \(C(\varPsi ^{(j)})\) is small. Theorem 8 shows that this constant remains small after the TIAR expansion in Step 2 if \(\kappa (L_k)\) is not large. However, the condition number of the Krylov matrix \(L_k\) can be large, see [4]. This does not necessarily imply that the decay constant is large. Notice that if \((\varPsi _k, H_k)\) is an invariant pair, \(L_k\) has linearly dependent columns and \(\kappa (L_k)\) is infinite. Analogously, if \((\varPsi _k, H_k)\) is (in this sense) close to an invariant pair, we expect \(\kappa (L_k)\) to be large. Hence, in these situations, the right-hand side of (31) is expected to be large. However, the decay constant is not expected to be large, since the decay constant for an invariant pair is given by (30). Note that the decay is also preserved in the operations associated with the restart. After the Ritz-value selection (Steps 3–4), the new TIAR factorization is computed in Step 5. Since \(\varPsi ^{(j+1)}\) is obtained through a unitary transformation from \(\varPsi ^{(j)}\), by using the properties of the Frobenius norm, we get \(C(\varPsi ^{(j+1)}) \le \sqrt{p+1} C(\varPsi ^{(j)})\).

5.1 Approximation by SVD compression

Given a TIAR factorization with basis function \(\varPsi _k\), we now show (in the following theorem) how the basis function can be approximated with less memory, by using a thinner basis matrix \(Z\in \mathbb {C}^{n\times \tilde{r}}\), where \(\tilde{r}\) should be selected as small as possible. The theorem also shows how this approximation influences \(\varPsi _k\) as well as the residual of the TIAR factorization. It turns out that the residual error is small if \(\sigma _{\tilde{r}+1}\) is small, where \(\sigma _1,\ldots ,\sigma _r\) are the singular values associated with the coefficient tensor. This implies that \(\tilde{r}\) can be chosen small if we have a fast singular value decay. We characterize the decay of the singular values in Sect. 5.3.

Theorem 9

Let \( a \in \mathbb {C}^{(d+1) \times (k+1) \times r}, Z \in \mathbb {C}^{n \times r}\) be the coefficients and the matrix that represent the tensor structured function \(\varPsi _{k+1}\) and let \(\underline{H}_{k} \in \mathbb {C}^{(k+1) \times k}\) be such that \((\varPsi _{k+1}, \underline{H}_{k})\) is a TIAR factorization. Suppose \(\left\{ |z| \le R \right\} \subseteq \varOmega \) with \(R>1\). Let \(A:=[A_1,\ldots ,A_{d+1}] \in \mathbb {C}^{ r \times (d+1)(k+1)}\) be the unfolding of the tensor a in the sense that \(A_i=(a_{i,:,:})^T\). Given the singular value decomposition of A
$$\begin{aligned}&A=[U_1, U] {\text {diag}}(\varSigma _1,\varSigma ){ [V_1^H,\ldots ,V_{d+1}^H] } \nonumber \\&\varSigma _1={\text {diag}}(\sigma _1,\ldots ,\sigma _{{\tilde{r}}}),\;\;\;\varSigma ={\text {diag}}(\sigma _{{\tilde{r}}+1},\ldots ,\sigma _r), \end{aligned}$$
(35)
let
$$\begin{aligned} \tilde{Z} := ZU_1,\qquad \tilde{A}_i := \varSigma _1 V_i^H \qquad i=1,\ldots ,d+1. \end{aligned}$$
(36)
The tensor structured function \({\tilde{\varPsi }}_{k+1}\) is defined by \({\tilde{a}} \in \mathbb {C}^{(d+1) \times (k+1) \times {\tilde{r}}}\) and \({\tilde{Z}} \in \mathbb {C}^{n \times {\tilde{r}}}\), with \({\tilde{a}}_{i,:,:}={\tilde{A}}_i^T\). Then,
$$\begin{aligned} \Vert \varPsi _{k+1}-\tilde{\varPsi }_{k+1} \Vert&\le \sqrt{(d+1) (k+1)}\sigma _{{\tilde{r}}+1} \end{aligned}$$
(37a)
$$\begin{aligned} \Vert \mathscr {B}\tilde{\varPsi }_{k} - \tilde{\varPsi }_{k+1} \underline{H}_k \Vert&\le \sqrt{k} (C_d+C_s) \sigma _{{\tilde{r}}+1} \end{aligned}$$
(37b)
with \(C_d := \gamma + \log (d+1) + (d+1) \Vert \underline{H}_k \Vert _2\) and \(C_s := \Vert M_0^{-1} \Vert _2 \left[ (\gamma + \log (s+1) ) \max _{1 \le i \le s} \Vert M_i \Vert _2+ \max _{|\lambda |=R} \Vert M(\lambda ) \Vert _2 \right] \), where \(\gamma \approx 0.57721\) is the Euler–Mascheroni constant and \(s:=\min \left\{ s\in \mathbb {N}: C(\varPsi _k) \sqrt{k} (d-s)/R^s \le \sigma _{{\tilde{r}}+1} \right\} \).

Proof

The proof of (37a) is based on the construction of a difference function \(\hat{\varPsi }_{k+1}=\varPsi _{k+1}-\tilde{\varPsi }_{k+1}\) as follows. We define
$$\begin{aligned}&\hat{Z} := Z U,&\hat{A}_i := \varSigma V_{i}^H,\\&X_i := Z A_{i+1},&\hat{X}_i:=\hat{Z} \hat{A}_{i+1},&\tilde{X}_i:=\tilde{Z} \tilde{A}_{i+1}, \\&X := [ X_0^H \dots X_d^H ]^H,&\hat{X} := [ \hat{X}_0^H,\dots , \hat{X}_d^H ]^H,&\tilde{X} := [ \tilde{X}_0^H, \dots , \tilde{X}_d^H ]^H. \end{aligned}$$
Hence, we can express \( { \varPsi _{k+1}(\theta ) = P_d(\theta ) X } , {\tilde{\varPsi }}_{k+1}(\theta ) = P_d(\theta ) {\tilde{X}} \) and \( {\hat{\varPsi }}_{k+1}(\theta ) = P_d(\theta ) {\hat{X}} \). By using (6) and \(\Vert \hat{X}_i\Vert _F^2 \le (k+1) \Vert \hat{X}_i\Vert _2^2 = (k+1)\Vert \hat{Z}\hat{A}_{i+1}\Vert _2^2= (k+1)\Vert \hat{A}_{i+1}\Vert _2^2= {(k+1)\Vert \varSigma V_{i+1}^H\Vert _2^2} \le (k+1)\Vert \varSigma \Vert _2^2=(k+1)\sigma _{\tilde{r}+1}^2\) we obtain
$$\begin{aligned} \Vert {\hat{\varPsi }}_{k+1} \Vert ^2 = \sum _{i=0}^d \Vert {\hat{X}}_i \Vert _F^2 \le (d+1) (k+1) \sigma _{{\tilde{r}} + 1}^2 \end{aligned}$$
which proves (37a).
In order to show (37b) we first use that, since \((\varPsi _{k+1},\underline{H}_k)\) is a TIAR factorization, it holds that \( \Vert \mathscr {B}{\tilde{\varPsi }}_{k} - {\tilde{\varPsi }}_{k+1} \underline{H}_k \Vert = \Vert \mathscr {B}{\hat{\varPsi }}_{k} - {\hat{\varPsi }}_{k+1} \underline{H}_k \Vert \), and subsequently we use the decay of \(A_i\) and the analyticity of M as follows. For notational convenience we define
$$\begin{aligned} Y_i := {\hat{X}}_i I_{k+1,k},\quad \text { for }i=0, \dots , d \end{aligned}$$
(38)
and \(Y:=[ Y_0^H \dots Y_d^H ]^H\) such that we can express \( {\hat{\varPsi }}_k(\theta ) = P_{d-1}(\theta ) Y \). Using [12, theorem 4.2] for each column of \({\hat{\varPsi }}_k(\theta )\), we get \(\mathscr {B}{\hat{\varPsi }}_k (\theta ) = P_d(\theta ) Y_+\) with
$$\begin{aligned} Y_{+,i+1}&:= \frac{Y_i}{i+1}&\quad \text{ for } \qquad i=0, \dots , d-1 \qquad \text{ and } \qquad Y_{+,0} := -M_0^{-1} \sum _{i=1}^d M_i Y_{+,i}. \end{aligned}$$
By definition and (6) we have
$$\begin{aligned} \Vert \mathscr {B}{\hat{\varPsi }}_{k} - {\hat{\varPsi }}_{k+1} \underline{H}_k \Vert =\Vert P_d(\theta ) Y_+ - P_d(\theta ) {\hat{X}} \underline{H}_k \Vert = \Vert Y_+ - {\hat{X}} \underline{H}_k \Vert _F. \end{aligned}$$
Moreover, by using the two-norm bound of the Frobenius norm, (38) and that \(\Vert \hat{X}_i\Vert _2 \le \sigma _{\tilde{r}+1}\),
$$\begin{aligned} \Vert Y_+ - {\hat{X}} \underline{H}_k \Vert _F&\le \sum _{i=0}^d \Vert Y_{+,i} - {\hat{X}}_{i} \underline{H}_k \Vert _F \le \sqrt{k} \sum _{i=0}^d ( \Vert Y_{+,i} \Vert _2 + \Vert {\hat{X}}_{i} \Vert _2 \Vert \underline{H}_k \Vert _2) \end{aligned}$$
(39a)
$$\begin{aligned}&= \sqrt{k} \left( \Vert Y_{+,0} \Vert _2 + \sum _{i=1}^d \Vert Y_{+,i} \Vert _2 + \sum _{i=0}^d \Vert {\hat{X}}_{i} \Vert _2 \Vert \underline{H}_k \Vert _2 \right) \end{aligned}$$
(39b)
$$\begin{aligned}&\le \sqrt{k} \left( \Vert Y_{+,0} \Vert _2 + \sum _{i=1}^d \frac{\Vert {\hat{X}}_{i-1} I_{k+1,k}\Vert _2}{i} + \sum _{i=0}^d \Vert {\hat{X}}_{i} \Vert _2 \Vert \underline{H}_k \Vert _2\right) \end{aligned}$$
(39c)
$$\begin{aligned}&\le \sqrt{k} \left( \Vert Y_{+,0} \Vert _2 + \sum _{i=1}^d \frac{\sigma _{\tilde{r}+1}}{i} + \sum _{i=0}^d \sigma _{\tilde{r}+1} \Vert \underline{H}_k \Vert _2\right) \end{aligned}$$
(39d)
$$\begin{aligned}&\le \sqrt{k} \left[ \Vert Y_{+,0} \Vert _2 + \sigma _{{\tilde{r}}+1} \left( \gamma + \log (d+1) + (d+1) \Vert \underline{H}_k \Vert _2 \right) \right] . \end{aligned}$$
(39e)
In the last inequality we use the Euler–Mascheroni inequality where \(\gamma \) is defined in [1, Formula 6.1.3]. It remains to bound \(\Vert Y_{+,0}\Vert _2\). By using the definition of \(Y_{+,0}\) and again applying the Euler–Mascheroni inequality we have that
$$\begin{aligned} \Vert Y_{+,0} \Vert _2&\le \Vert M_0^{-1} \Vert _2 \sum _{i=1}^d \Vert M_i \Vert _2 \frac{ \Vert {\hat{X}}_{i-1} I_{k+1,k} \Vert _2}{i} \le \Vert M_0^{-1} \Vert _2 \sum _{i=1}^d \Vert M_i \Vert _2 \frac{ \Vert {\hat{X}}_{i-1} \Vert _2}{i} \nonumber \\&= \Vert M_0^{-1} \Vert _2 \left( \sum _{i=1}^s \Vert M_i \Vert _2 \frac{ \Vert {\hat{X}}_{i-1} \Vert _2}{i} + \sum _{i=s+1}^d \Vert M_i \Vert _2 \frac{ \Vert {\hat{X}}_{i-1} \Vert _2}{i}\right) \nonumber \\&\le \Vert M_0^{-1} \Vert _2 \left( \sigma _{{\tilde{r}}+1} ( \gamma + \log (s+1) ) \max _{1 \le i \le s} \Vert M_i \Vert _2 + \sum _{i=s+1}^d \Vert M_i \Vert _2 \frac{\Vert {\hat{X}}_{i-1} \Vert _2}{i} \right) . \end{aligned}$$
(40)
As a consequence of the Cauchy integral formula,
$$\begin{aligned} \Vert M_i \Vert _2 \frac{ \Vert {\hat{X}}_{i-1} \Vert _2}{i} \le \Vert M_i \Vert _2 \frac{\Vert A_{i} \Vert _2}{i} \le C(\varPsi _k) \sqrt{k} \frac{ \Vert M_i \Vert _2}{i!} \le C(\varPsi _k) \sqrt{k} \frac{ \displaystyle \max _{|\lambda |=R} \Vert M(\lambda ) \Vert _2}{R^i}. \end{aligned}$$
(41)
By substituting (41) in (40) we obtain
$$\begin{aligned} \Vert Y_{+,0} \Vert _2&\le \Vert M_0^{-1} \Vert _2 \left( \sigma _{{\tilde{r}}+1} ( \gamma + \log (s+1) ) \max _{1 \le i \le s} \Vert M_i \Vert _2\right. \nonumber \\&\quad \left. + \max _{|\lambda |=R} \Vert M(\lambda ) \Vert _2 C(\varPsi _k) \sqrt{k} \frac{ d-s }{R^s} \right) \nonumber \\&\le \sigma _{{\tilde{r}}+1} \Vert M_0^{-1} \Vert _2 \left( ( \gamma + \log (s+1) ) \max _{1 \le i \le s} \Vert M_i \Vert _2 + \max _{|\lambda |=R} \Vert M(\lambda ) \Vert _2 \right) . \end{aligned}$$
(42)
We reach the conclusion (37b) by combining (42) with (39). \(\square \)
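The compression (35)–(36) is straightforward to sketch with numpy. The code below (function and variable names are ours, not the paper's implementation) unfolds the coefficient tensor, truncates its SVD at a drop tolerance, and checks that the error of the stacked representation equals the Frobenius mass of the discarded singular values, consistent with (37a); Z is taken orthonormal, as for a TIAR basis.

```python
import numpy as np

def svd_compress(a, Z, tol):
    """Truncated-SVD compression of a tensor structured basis, cf. (35)-(36).

    a : (d1, k1, r) coefficient tensor, Z : (n, r) with orthonormal columns.
    Unfold a into A = [A_1, ..., A_d1] with A_i = a[i].T, truncate the SVD
    at `tol`, and return the thinner pair (a_tilde, Z_tilde) of rank rt."""
    d1, k1, r = a.shape
    A = np.concatenate([a[i].T for i in range(d1)], axis=1)   # r x (d1*k1)
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    rt = max(1, int(np.sum(s > tol)))
    Z_t = Z @ U[:, :rt]                                       # n x rt
    A_t = s[:rt, None] * Vh[:rt]                              # Sigma_1 V^H
    a_t = np.stack([A_t[:, i * k1:(i + 1) * k1].T for i in range(d1)])
    return a_t, Z_t, s, rt

rng = np.random.default_rng(3)
n, d1, k1, r = 30, 5, 4, 8
Z = np.linalg.qr(rng.standard_normal((n, r)))[0]              # orthonormal basis
a = rng.standard_normal((d1, k1, r)) * (10.0 ** -np.arange(r))  # decaying directions
a_t, Z_t, s, rt = svd_compress(a, Z, tol=1e-4)

# Error of the compressed representation: exactly the discarded sigma's.
X = np.concatenate([Z @ a[i].T for i in range(d1)])
Xt = np.concatenate([Z_t @ a_t[i].T for i in range(d1)])
err = np.linalg.norm(X - Xt)
assert np.isclose(err, np.linalg.norm(s[rt:]))
```

The bound (37a) is a coarser version of this identity; the exact error here is \(\big (\sum _{j>\tilde{r}} \sigma _j^2\big )^{1/2}\) because Z has orthonormal columns.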

5.2 Approximation by reducing the degree

Another approximation which reduces the complexity is obtained by truncating the polynomial in \(\varPsi _k\). The following theorem quantifies the approximation error of this approach.

Theorem 10

Let \( a \in \mathbb {C}^{(d+1) \times (k+1) \times r}, Z \in \mathbb {C}^{n \times r}\) be the coefficients and the matrix that represent the tensor structured function \(\varPsi _{k+1}\) and let \(\underline{H}_{k} \in \mathbb {C}^{(k+1) \times k}\) be such that \((\varPsi _{k+1}, \underline{H}_{k})\) is a TIAR factorization. For \({\tilde{d}} \le d\) let
$$\begin{aligned} {\tilde{\varPsi }}_{k+1}(\theta ) := P_{{\tilde{d}}}(\theta ) \left( \sum _{\ell =1}^r {\tilde{a}}_{:,:,\ell } \otimes z_\ell \right) \end{aligned}$$
(43)
where \({\tilde{a}}_{i,j,\ell }=a_{i,j,\ell }\) for \(i=1, \dots , {\tilde{d}}, j = 1, \dots , k+1\) and \(\ell =1, \dots , r\). Then
$$\begin{aligned} \Vert {\tilde{\varPsi }}_{k+1} - \varPsi _{k+1} \Vert&\le C(\varPsi _{k+1}) \frac{(d-{\tilde{d}}) }{{\tilde{d}}!} \end{aligned}$$
(44)
$$\begin{aligned} \Vert \mathscr {B}{\tilde{\varPsi }}_k - {\tilde{\varPsi }}_{k+1}\underline{H}_k \Vert&\le C(\varPsi _{k+1}) \left( \max _{{\tilde{d}}+1 \le i \le d} \Vert M_i \Vert _F \right) \Vert M_0^{-1} \Vert _F \frac{d-{\tilde{d}}}{({\tilde{d}}+1)!} . \end{aligned}$$
(45)

Proof

We define \( X_i := Z A_{i+1} \) for \( i=0, \dots , d \) and \( X:=[X_0^T, \dots , X_d^T]\) and \({\tilde{X}}:=[X_0^T, \dots , X_{{\tilde{d}}}^T]\) such that \(\varPsi _{k+1} (\theta ) = P_{d} (\theta ) X\) and \({\tilde{\varPsi }}_{k+1} (\theta ) = P_{{\tilde{d}}} (\theta ) {\tilde{X}}\). We have
$$\begin{aligned} \Vert \varPsi _{k+1} (\theta ) - {\tilde{\varPsi }}_{k+1} (\theta ) \Vert ^2 = \sum _{i={\tilde{d}}+1}^d \Vert X_i \Vert _F^2 = \sum _{i={\tilde{d}}+1}^d \Vert A_i \Vert _F^2 . \end{aligned}$$
By using the definition of \(C(\varPsi _{k+1})\) we obtain (44).

By definition, \( \varPsi _k(\theta ) = \varPsi _{k+1}(\theta ) I_{k+1,k} \) and \( {\tilde{\varPsi }}_k(\theta ) = {\tilde{\varPsi }}_{k+1}(\theta ) I_{k+1,k} \). Using Observation 5, if we define \(Y_i := X_i I_{k+1,k}\) for \(i=0, \dots , d-1\), \(Y:=[ Y_0^H, \dots , Y_{d-1}^H ]^H\) and \({\tilde{Y}}:=[ Y_0^H, \dots , Y_{{\tilde{d}} - 1}^H ]^H\), we can express \( \varPsi _k(\theta ) = P_{d-1}(\theta ) Y \) and \( {\tilde{\varPsi }}_k(\theta ) = P_{{\tilde{d}}-1}(\theta ) {\tilde{Y}}. \)

Using [12, theorem 4.2] for each column of \(\varPsi _k(\theta )\) and \({\tilde{\varPsi }}_k(\theta )\), we get \(\mathscr {B}\varPsi _k (\theta ) = P_d(\theta ) Y_+\) and \({ \mathscr {B}{\tilde{\varPsi }}_k (\theta ) = P_{{\tilde{d}}}(\theta ) {\tilde{Y}}_+ }\) with
$$\begin{aligned} Y_{+,i+1}&:= \frac{Y_i}{i+1}&\quad \text{ for }\; i=0, \dots , d-1 \qquad \text{ and } \qquad Y_{+,0} := -M_0^{-1} \sum _{i=1}^d M_i Y_{+,i} \\ {\tilde{Y}}_{+,i+1}&:= Y_{+,i+1}&\quad \text{ for } \; i=0, \dots , {\tilde{d}}-1 \qquad \text{ and } \qquad {\tilde{Y}}_{+,0} := -M_0^{-1} \sum _{i=1}^{{\tilde{d}}} M_i Y_{+,i} \end{aligned}$$
In our notation, the fact that \((\varPsi _{k+1}, \underline{H}_k)\) is a TIAR factorization, can be expressed as \(P_d(\theta )Y_+=P_d(\theta )X\underline{H}_k\), which implies that the monomial coefficients are equal, i.e.,
$$\begin{aligned} Y_{+,i} = X_i \underline{H}_k\quad \text { for }i=0, \dots , d. \end{aligned}$$
(46)
Hence, from (6) we have
$$\begin{aligned} \Vert \mathscr {B}{\tilde{\varPsi }}_k - {\tilde{\varPsi }}_{k+1} \underline{H}_k \Vert ^2&= \Vert P_{d}(\theta ) {\tilde{Y}}_+- P_{d}(\theta ) {\tilde{X}} \underline{H}_k \Vert ^2\\&= \Vert {\tilde{Y}}_+ - {\tilde{X}} \underline{H}_k \Vert _F^2 \\&= \Vert {\tilde{Y}}_{+,0} - X_0 \underline{H}_k \Vert _F^2 + \sum _{i=1}^{{\tilde{d}}} \Vert Y_{+,i} - X_i \underline{H}_k \Vert _F^2\\&= \Vert {\tilde{Y}}_{+,0} - X_0 \underline{H}_k \Vert _F^2. \end{aligned}$$
In the last step we applied (46). Moreover, by again using (46), we have
$$\begin{aligned} Y_{+,0} - X_0 \underline{H}_k&= -M_0^{-1} \sum _{i=1}^d M_i Y_{+,i} - X_0 \underline{H}_k \\&= -M_0^{-1} \sum _{i=1}^{{\tilde{d}}} M_i {\tilde{Y}}_{+,i} -M_0^{-1} \sum _{i={\tilde{d}}+1}^d M_i Y_{+,i} - X_0 \underline{H}_k \\&= {\tilde{Y}}_{+,0} - X_0 \underline{H}_k -M_0^{-1} \sum _{i={\tilde{d}}+1}^d M_i \frac{X_{i-1} I_{k+1,k}}{i}. \end{aligned}$$
Therefore \(\Vert {\tilde{Y}}_{+,0} - X_0 \underline{H}_k \Vert _F \le \Vert M_0^{-1} \Vert _F \sum _{i={\tilde{d}}+1}^d \frac{\Vert M_i \Vert _F \Vert A_i\Vert _F}{i}\). We obtain (45) by using the properties of the Frobenius norm and the definition of \(C(\varPsi _{k+1})\). \(\square \)

Remark 11

The approximation given in Theorem 10 can only be effective if \( \left( \max _{{\tilde{d}}+1 \le i \le d} \Vert M_i \Vert _F \right) /({\tilde{d}}+1)!\) is small. In particular, this condition is satisfied if the Taylor coefficients \( M_i /i!\) decay fast, i.e., if the coefficients of the power series expansion of \(M(\lambda )\) decay to zero.

The final approximation procedure is summarized in Algorithm 4. In particular, Steps 1–2 correspond to the approximation by SVD compression, described in Sect. 5.1, whereas Steps 3–4 correspond to the approximation by reducing the degree, described in Sect. 5.2.
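The degree-reduction step of Algorithm 4 can be sketched as follows (our illustration, not the paper's implementation): assuming slices with factorial decay as in (29), dropping all trailing slices below a drop tolerance discards only a Frobenius mass of the order of the tolerance, in line with Theorem 10.

```python
import math
import numpy as np

def truncate_degree(a, tol):
    """Degree reduction, cf. Theorem 10: keep the slices up to (and
    including) the last one whose Frobenius norm exceeds `tol`."""
    norms = np.array([np.linalg.norm(a[i]) for i in range(a.shape[0])])
    big = np.nonzero(norms > tol)[0]
    keep = int(big[-1]) + 1 if big.size else 1
    return a[:keep], norms

d1, k1, r = 15, 4, 3
a = np.zeros((d1, k1, r))
for i in range(d1):
    a[i, 0, 0] = 1.0 / math.factorial(i)     # exact factorial decay

a_t, norms = truncate_degree(a, tol=1e-8)
tail = np.linalg.norm(norms[a_t.shape[0]:])  # discarded Frobenius mass
```

Here the truncated degree is determined automatically by the drop tolerance, matching how d is selected adaptively in Algorithm 4; the discarded mass `tail` stays below the tolerance thanks to the factorial decay.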

5.3 The fast decay of singular values

Finally, as a further justification for our approximation procedure, we now show how fast the singular values decay. The fast decay in the singular values illustrated below justifies the effectiveness of the truncation in Sect. 5.1.

Lemma 12

Let \( a \in \mathbb {C}^{(d+1) \times (k+1) \times r}, Z \in \mathbb {C}^{n \times r}\) be the coefficients and the matrix that represent the tensor structured function \(\varPsi _{k+1}\) and let \(\underline{H}_{k} \in \mathbb {C}^{(k+1) \times k}\) be such that \((\varPsi _{k+1}, \underline{H}_{k})\) is a TIAR factorization. Then, the tensor a is generated by \(d+1\) vectors, in the sense that each vector \(a_{i,j,:}\) for \(i=1, \dots , d+1\) and \(j=1, \dots , k+1\) can be expressed as linear combination of the vectors \( a_{i,1,:}\) and \(a_{1,j,:}\) for \( i = 1, \dots , d-k \) and \( j = 1, \dots , k+1\).

Proof

The proof is based on induction over the length k of the TIAR factorization. The result is trivial if \(k=1\). Suppose the result holds for some k. Let \(Z \in \mathbb {C}^{n \times (r-1)}, a \in \mathbb {C}^{d \times k \times (r-1)}\) represent the tensor structured function \(\varPsi _{k}\) and let \(\underline{H}_{k-1} \in \mathbb {C}^{k \times (k-1)}\) be an upper Hessenberg matrix such that \((\varPsi _k,\underline{H}_{k-1})\) is a TIAR factorization. If we expand the TIAR factorization \((\varPsi _k,\underline{H}_{k-1})\) by using Algorithm 1, more precisely by using (13b) and (18b), we obtain \(\beta a_{i+1,k+1,:} = a_{i,k,:}/i - \sum _{j=1}^k h_j a_{i,j,:}\) for \(i=1, \dots , d\). We reach the conclusion by induction. \(\square \)

Theorem 13

Under the same hypothesis of Lemma 12, let A be the unfolding of the tensor a in the sense that \(A=[A_1, \dots , A_{d+1}]\) such that \(A_i:=(a_{i,:,:})^T\). We have the following decay in the singular values
$$\begin{aligned} \sigma _{i} \le C(\varPsi _{k+1}) \frac{d-(R-k-1)}{(R-k)!} \qquad i=R+1, \dots , d+1, \end{aligned}$$
where \(k \le R \le d+k+1\).

Proof

We define the matrix \({\tilde{A}}:=[ A_1, \dots , A_{ R-k}, 0, \dots , 0] \in \mathbb {C}^{r \times (k+1)(d+1)}\). Notice that the columns of the matrices A and \({\tilde{A}}\) correspond to the vectors \(a_{i,j,:}^T\). In particular, using Lemma 12, we have that \({\text {rank}}(A_1)=k+1\), whereas \({\text {rank}}(A_j)=1\) if \(j \le d-k\) and \({\text {rank}}(A_j)=0\) otherwise. Then we have that \({\text {rank}}(A) = d+1\) and \({\text {rank}}( {\tilde{A}}) = R\). Using Weyl's theorem [10, Corollary 8.6.2] and the definition of \(C(\varPsi _{k+1})\) in (29), we have for \(i \ge R + 1\)
$$\begin{aligned} \sigma _{i}&\le \Vert A - {\tilde{A}} \Vert _F \le \sum _{ i = R-k+1 }^{d+1} \Vert A_i \Vert _F \le \sum _{ i = R-k+1 }^{d+1} \frac{ C(\varPsi _{k+1}) }{(i-1)!}\le C(\varPsi _{k+1}) \frac{ d-(R-k-1)}{(R-k)!}. \end{aligned}$$
\(\square \)
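The mechanism behind Theorem 13 can be checked numerically: when the blocks of the unfolding decay factorially, zeroing the tail produces a low-rank matrix, and by Weyl's theorem every singular value beyond that rank is bounded by the norm of the discarded tail. A generic sketch with random blocks (not an actual TIAR tensor):

```python
import math
import numpy as np

rng = np.random.default_rng(4)
r, k1, d1 = 10, 2, 12
blocks = [rng.standard_normal((r, k1)) / math.factorial(i) for i in range(d1)]
A = np.concatenate(blocks, axis=1)            # unfolding [A_1, ..., A_d1]

keep = 3                                      # zero all but the first 3 blocks
A_t = np.concatenate(blocks[:keep] + [np.zeros((r, k1))] * (d1 - keep), axis=1)
R = np.linalg.matrix_rank(A_t)                # <= keep * k1
s = np.linalg.svd(A, compute_uv=False)
tail = np.linalg.norm(A - A_t)                # Frobenius mass of the tail

# Weyl: sigma_i(A) <= ||A - A_t||_2 <= ||A - A_t||_F for i > rank(A_t).
assert all(s[i] <= tail + 1e-12 for i in range(R, len(s)))
```

Because the tail mass itself decays factorially with the number of kept blocks, the trailing singular values inherit the factorial decay, which is what makes the truncation of Sect. 5.1 effective.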

6 Complexity analysis

We presented above two different restarting strategies: the structured semi-explicit restart and the implicit restart. They perform differently on different problems, and we have not been able to conclusively determine if one is better than the other. The best choice of restarting strategy appears to depend on many problem properties, and it may be convenient to test both methods on the same problem. We now discuss their general performance, in terms of complexity and stability. The complexity discussion is based on the assumption that the cost of applying \(M_0^{-1}\) is negligible in comparison to the other parts of the algorithm.

6.1 Complexity of expanding the TIAR factorization

The main computational effort of Algorithms 2 and 3, independent of which restarting strategy is used, is the expansion of a TIAR factorization described in Algorithm 1. The essential computational effort of Algorithm 1 is the computation of \({\tilde{z}}\), given in Eq. (9). This operation has complexity \(\mathscr {O}(drn)\) per iteration. In the implicit restart (Algorithm 3), r and d are in general not large, due to the way they are automatically selected in the approximation procedure in Algorithm 4. In the semi-explicit restart (Algorithm 2), we instead have \(r, d \le m\).

6.2 Complexity of the restarting strategies

After an implicit restart we obtain a TIAR factorization of length p, whereas after a semi-explicit restart we obtain a TIAR factorization of length \(p_\ell \). This means that the semi-explicit restart requires a re-computation phase, i.e., after the restart we need to perform an extra \(p-p_\ell \) steps in order to obtain a TIAR factorization of length p. If \(p-p_\ell \) is large, i.e., if few Ritz values converged in comparison to the restarting parameter p, then the re-computation phase dominates the computational effort of the algorithm. Notice that this is hard to predict, since we do not know in advance how fast the Ritz values will converge.

6.3 Stability of the restarting strategies

We will illustrate in Sect. 7 that the restarting approaches have different stability properties. The semi-explicit restart tends to be efficient if only a few eigenvalues are wanted, i.e., if \(p_\mathrm{max}\) is small. This is due to the fact that we impose the structure on the starting function. On the other hand, the implicit restart requires a thick restart in order to be stable in several situations; see the corresponding discussion for the linear case in [19, Chapter 8]. Then p has to be large enough in the sense that, at each restart, some of the p retained Ritz values have residuals that are not yet small. This requires additional computational and memory resources.

If we use the semi-explicit restart, then the computation of \({\tilde{z}}\) in Eq. (9) involves the term \(\mathbb {M}_d(Y,S)\). This quantity can be computed in different ways; in the simulations we must choose between (8) and [12, Equation (4.8)]. The choice influences the stability of the algorithm. In particular, if an eigenvalue of S is close to \(\partial \varOmega \) and \(M(\lambda )\) is not analytic on \(\partial \varOmega \), the series (8) may converge slowly and in practice overflow can occur. In such situations, [12, Equation (4.8)] is preferable. Notice that it is not always possible to use [12, Equation (4.8)], since many problems cannot be formulated as a short sum of products of matrices and functions.

6.4 Memory requirements of the restarting strategies

From a memory point of view, the essential part of the semi-explicit restart is the storage of the matrices Z and Y, that is, \(\mathscr {O}(nm+np)\). In the implicit restart the essential part is the storage of the matrix Z, which requires \(\mathscr {O}(nr_{\max })\), where \(r_{\max }\) denotes the maximum value that the variable r takes in Algorithm 1. The size of \(r_{\max }\) is not predictable since it depends on the SVD-approximation introduced in Algorithm 4. Since the variable r is increased in each iteration of Algorithm 1, it holds that \(r_{\max } \ge m-p\). Therefore, in the optimal case where \(r_{\max }\) attains this lower bound, the two methods are comparable in terms of memory requirements. Notice that the semi-explicit restart in general requires less memory and has the advantage that the required memory is problem independent.

7 Numerical experiments

7.1 Delay eigenvalue problem

In order to illustrate properties of the proposed restart methods and advantages in comparison to other approaches, we carried out numerical simulations\(^{1}\) for solving a delay eigenvalue problem (DEP). More precisely, we consider the DEP obtained by discretizing the characteristic equation of the delay differential equation defined in [15, Sect. 4.2, Eq. (22a)] with \(\tau =1\). By using a standard second order finite difference discretization, the DEP is formulated as \(M(\lambda ) = - \lambda ^2 I + \lambda T_1 + T_0 + e^{-\lambda } T_2\). We show how the proposed methods perform in terms of m, the maximum length of the TIAR factorization, and the restarting parameter p.
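For concreteness, the action of \(M(\lambda )\) on a vector can be sketched as follows, with random stand-ins for the finite-difference matrices \(T_0, T_1, T_2\) (the actual matrices come from the discretization in [15]):

```python
import numpy as np

# Sketch of the action of M(lambda) = -lambda^2 I + lambda T1 + T0
# + exp(-lambda) T2 on a vector. T0, T1, T2 are random stand-ins for
# the finite-difference matrices of the actual DEP.
rng = np.random.default_rng(0)
n = 50
T0, T1, T2 = (rng.standard_normal((n, n)) for _ in range(3))

def M_matvec(lam, v):
    return -lam**2 * v + lam * (T1 @ v) + T0 @ v + np.exp(-lam) * (T2 @ v)

v = rng.standard_normal(n)
print(np.linalg.norm(M_matvec(0.1 + 0.2j, v)))
```

Only matrix-vector products with the (sparse) coefficient matrices are needed, which is what makes the method suitable for large n.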

Table 1a, b show the advantages of our semi-explicit restart approach in comparison to the equivalent method described in [12]. Our new approach is faster in terms of CPU-time and can solve larger problems due to the memory efficient representation of the Krylov basis.

Table 2a, b show the effectiveness of the approximations introduced in Sects. 5.1 and 5.2 in comparison to the corresponding restart procedure without approximations. In particular, in Algorithm 4 we consider a drop tolerance \(\varepsilon =10^{-14}\). Since the DEP is defined by entire functions, the power series coefficients decay to zero and, according to Remark 11, the approximation by reducing the degree is expected to be effective. By approximating the TIAR factorization, the implicit restart requires less resources in terms of memory and CPU-time and can solve larger problems.
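The role of the drop tolerance can be sketched as follows: singular values below \(\varepsilon \) times the largest one are discarded, which reduces the width of the stored factor. This is an illustration of the truncation idea, not the actual Algorithm 4:

```python
import numpy as np

# Sketch of SVD truncation with a drop tolerance: singular values
# below eps times the largest one are discarded. Illustration of the
# truncation idea used in Algorithm 4, not the algorithm itself.
def svd_truncate(A, eps=1e-14):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    r = int(np.sum(s > eps * s[0]))          # retained numerical rank
    return U[:, :r], np.diag(s[:r]) @ Vt[:r, :]

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 30))  # rank 5
U, B = svd_truncate(A)
print(U.shape[1], np.linalg.norm(A - U @ B))
```

For a matrix of numerical rank 5 only 5 columns are retained, while the reconstruction error stays at the level of the drop tolerance.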
Fig. 2

Implicit and semi-explicit restart for DEP of size \(n=40401\) with \(m=20\), \(p=5\), restarting 7 times (restart = 7)

Fig. 3

Implicit and semi-explicit restart for DEP of size \(n=40401\) with \(m=40\), \(p=10\), restarting 4 times (restart = 4)

We now illustrate the differences between the semi-explicit and the implicit restart. More precisely, we show how the parameters m and p influence the convergence of the Ritz values with respect to the number of iterations. The accuracy of an eigenpair \((\lambda , v)\) of the nonlinear eigenvalue problem is measured with the relative residual
\[
\operatorname{Res}(\lambda ,v) = \frac{\Vert M(\lambda ) v\Vert }{\left( \sum _{i=1}^q \vert f_i(\lambda )\vert \, \Vert T_i\Vert \right) \Vert v\Vert },
\]
where \(M(\lambda )=\sum _{i=1}^q T_i f_i(\lambda )\). The convergence of the semi-explicit restart appears to be slower when p is not sufficiently large; see Fig. 2. For larger m and p the convergence speed of the two restarting strategies is comparable; see Fig. 3.
In practice, the performance of the two restarting strategies corresponds to a trade-off between CPU-time and memory. In particular, due to the fact that we impose the structure, the semi-explicit restart does not have a growth in the polynomial part at each restart and therefore requires less memory. On the other hand, for this problem, the semi-explicit restart appears to be slower in terms of CPU-time; see Tables 1 and 2. For completeness we provide the total number of accumulated inner iterations: 134 for Table 1a, 154 for Table 1b, 110 for Table 2a, and 130 for Table 2b. An illustration of the CPU-time for a fixed accuracy is given in Table 3.
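The residual measure above can be sketched in code for the DEP, with \(f_1(\lambda ),\dots ,f_4(\lambda ) = -\lambda ^2,\ \lambda ,\ 1,\ e^{-\lambda }\) and random stand-ins for the matrices (Frobenius norms are used here; this is an illustration, not the exact setup of the experiments):

```python
import numpy as np

# Sketch of the relative residual for the DEP, with
# f_1, ..., f_4 = -lambda^2, lambda, 1, exp(-lambda) and random
# stand-ins for T1, T0, T2. Frobenius norms are used; illustrative
# only, not the exact norm choice of the experiments.
rng = np.random.default_rng(0)
n = 40
I = np.eye(n)
T0, T1, T2 = (rng.standard_normal((n, n)) for _ in range(3))

def rel_residual(lam, v):
    Mv = -lam**2 * v + lam * (T1 @ v) + T0 @ v + np.exp(-lam) * (T2 @ v)
    denom = (abs(lam)**2 * np.linalg.norm(I, 'fro')
             + abs(lam) * np.linalg.norm(T1, 'fro')
             + np.linalg.norm(T0, 'fro')
             + abs(np.exp(-lam)) * np.linalg.norm(T2, 'fro'))
    return np.linalg.norm(Mv) / (denom * np.linalg.norm(v))

v = rng.standard_normal(n)
print(rel_residual(0.3 + 0.1j, v))
```

By construction the measure is invariant under scaling of v, so it can be compared across iterations and problem sizes.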
Table 1 DEP, semi-explicit restart: tensor structured functions vs. the original approach [12]

(a) \(m=20, p=5\), restart = 7

| Size      | CPU (tensor struct. functions) | Memory (tensor struct. functions) | CPU (original approach [12]) | Memory (original approach [12]) |
|-----------|-----------|-----------|-----------|-----------|
| 10,201    | 19.07 s   | 3.7 MB    | 31.41 s   | 65.38 MB  |
| 40,401    | 30.14 s   | 14.8 MB   | 1 m 30 s  | 258.92 MB |
| 160,801   | 1 m 47 s  | 58.9 MB   | 6 m 04 s  | 1.01 GB   |
| 641,601   | 7 m 30 s  | 235.0 MB  | 24 m 27 s | 4.02 GB   |
| 1,002,001 | 12 m 01 s | 366.9 MB  | —         | —         |

(b) \(m=40, p=10\), restart = 4

| Size      | CPU (tensor struct. functions) | Memory (tensor struct. functions) | CPU (original approach [12]) | Memory (original approach [12]) |
|-----------|-----------|-----------|-----------|-----------|
| 10,201    | 13.47 s   | 7.62 MB   | 1 m 05 s  | 255.27 MB |
| 40,401    | 41.81 s   | 30.20 MB  | 4 m       | 1 GB      |
| 160,801   | 144.79 s  | 120.23 MB | 15 m 54 s | 3.93 GB   |
| 641,601   | 10 m 43 s | 479.71 MB | —         | —         |
| 1,002,001 | 16 m 21 s | 749.18 MB | —         | —         |

7.2 Waveguide eigenvalue problem

In order to illustrate how the performance depends on the problem properties, we now consider a NEP defined by functions with branch point and branch cut singularities. More precisely, we consider the waveguide eigenvalue problem (WEP) described in [13, Section 5.1] after the Cayley transformation with pole removal. The NEP consists of a constant matrix and two second degree matrix polynomials in \(\lambda \), all of which are sparse, together with one block that is defined by nonlinear functions of \(\lambda \) involving square roots of polynomials and is dense. However, in our framework an explicit storage of the dense block is not necessary, since its action on a vector can be computed efficiently using two Fast Fourier Transforms (FFTs) and a multiplication with a diagonal matrix. See [13] for a full description of the problem. In this NEP, \(\varOmega \) is the unit disc and there are branch point singularities on \(\partial \varOmega \). Thus, due to the slow convergence of the power series of \(M(\lambda )\), in the semi-explicit restart we have to use [12, Equation (4.8)] to compute \(\mathbb {M}_d(Y,S)\). This also implies that the approximation by reducing the degree is not expected to be effective, since the power series coefficients of \(M(\lambda )\) do not decay to zero.
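The FFT-based product can be sketched for a matrix of the form \(C = F^{-1} D F\), with \(F\) the discrete Fourier transform and \(D\) diagonal; the diagonal below is a random stand-in, not the actual WEP data:

```python
import numpy as np

# Sketch of a fast matrix-vector product with a dense matrix of the
# form C = F^{-1} D F, where F is the discrete Fourier transform and
# D is diagonal: the product costs two FFTs and a diagonal scaling
# instead of O(n^2) work. The diagonal d is a random stand-in, not
# the actual WEP data.
rng = np.random.default_rng(0)
n = 1024
d = rng.standard_normal(n) + 1j * rng.standard_normal(n)

def C_matvec(v):
    return np.fft.ifft(d * np.fft.fft(v))

v = rng.standard_normal(n)
w = C_matvec(v)
print(np.linalg.norm(w))
```

The matrix C is never formed; each application costs \(\mathscr {O}(n \log n)\), which is what makes the large WEP sizes tractable.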
Table 2 DEP, implicit restart: with vs. without the approximations of Sects. 5.1 and 5.2

(a) \(m=20, p=5\), restart = 7

| Size      | CPU (approximation) | Memory (approximation) | CPU (no approximation) | Memory (no approximation) |
|-----------|-----------|----------|-----------|----------|
| 10,201    | 6.82 s    | 7.8 MB   | 11.95 s   | 17.1 MB  |
| 40,401    | 21.96 s   | 30.8 MB  | 37.63 s   | 67.8 MB  |
| 160,801   | 1 m 20 s  | 120.2 MB | 2 m 21 s  | 269.9 MB |
| 641,601   | 5 m 24 s  | 469.9 MB | 9 m 33 s  | 1.1 GB   |
| 1,002,001 | 8 m 36 s  | 733.9 MB | 15 m 16 s | 1.6 GB   |

(b) \(m=40, p=10\), restart = 4

| Size      | CPU (approximation) | Memory (approximation) | CPU (no approximation) | Memory (no approximation) |
|-----------|-----------|----------|-----------|----------|
| 10,201    | 9.54 s    | 11.1 MB  | 16.61 s   | 20.2 MB  |
| 40,401    | 30.48 s   | 43.8 MB  | 50.66 s   | 80.1 MB  |
| 160,801   | 1 m 54 s  | 174.2 MB | 3 m 11 s  | 319.0 MB |
| 641,601   | 8 m 05 s  | 695.1 MB | 13 m 14 s | 1.2 GB   |
| 1,002,001 | 12 m 17 s | 1.1 GB   | 20 m 57 s | 1.9 GB   |

Table 3 DEP, implicit and semi-explicit restart, stopped when the residual of \(p_\mathrm{max}=10\) Ritz values is less than \(10^{-10}\)

| Size      | Nr. restarts (implicit) | CPU (implicit) | Memory (implicit) | Nr. restarts (semi-explicit) | CPU (semi-explicit) | Memory (semi-explicit) |
|-----------|---|-----------|-----------|---|-----------|-----------|
| 40,401    | 6 | 13.8 s    | 29.59 MB  | 7 | 28.97 s   | 14.79 MB  |
| 160,801   | 4 | 37.02 s   | 115.32 MB | 6 | 1 m 29 s  | 58.89 MB  |
| 641,601   | 4 | 2 m 31 s  | 450.34 MB | 6 | 6 m 23 s  | 234.96 MB |
| 1,002,001 | 4 | 3 m 58 s  | 703.30 MB | 5 | 8 m 29 s  | 366.94 MB |
| 2,563,201 | — | —         | —         | 5 | 21 m 10 s | 938.67 MB |

Fig. 4

Implicit and semi-explicit restart for WEP of size \(n=\dots \) with \(m=\dots \), \(p=\dots \) and restart = 4

Fig. 5

Implicit and semi-explicit restart for WEP of size \(n=\dots \) with \(m=\dots \), \(p=\dots \) and restart = 6

In analogy with the previous subsection, we carried out numerical simulations to compare the semi-explicit and the implicit restart. In Figs. 4 and 5 we illustrate the performance of the two restarting approaches with respect to the choice of the parameters m and p. When p is sufficiently large, the residual in the semi-explicit restart appears to stagnate after the first restart, whereas it decreases in a regular way in the implicit restart; see Fig. 4. This is due to the fact that the semi-explicit restart imposes the structure on p vectors, which is not beneficial when they do not contain eigenvector approximations. On the other hand, when p is small, the behavior of the residuals is reversed; see Fig. 5. This is a consequence of the fact that, already after the first restart, the Krylov subspace is almost an invariant subspace (since p Ritz pairs are quite accurate). This is consistent with the linear case, where implicit restarting with a Krylov subspace that is almost an invariant subspace is known to suffer from numerical instabilities. It is known that this specific problem has two eigenvalues. Therefore, in order to reduce the CPU-time and the memory resources, the restarting parameter p should be selected small. As a consequence of the above discussion, we conclude that the semi-explicit restart is the better restarting strategy for this problem.

8 Concluding remarks and outlook

In this work we have derived an extension of the TIAR algorithm and two restarting strategies. Both restarting strategies are based on approximating the TIAR factorization. In other works on the IAR method it has been proven that the basis matrix contains structure that can be exploited, e.g., for NEPs with low-rank structure in the coefficients [34]. Combining the approximations of the TIAR factorization with such structures of the NEP seems possible but deserves further attention.

Although the framework of TIAR and restarted TIAR is general, a specialization of the methods to the NEP at hand is required in order to solve the problem efficiently. More precisely, an efficient procedure for computing (9) is required. This is a nontrivial task for many applications and requires problem-specific research.

Footnotes

  1. All simulations were carried out on an Intel octa-core i7-3770 CPU at 3.40 GHz with 16 GB RAM using MATLAB.


Acknowledgements

We gratefully acknowledge the support of the Swedish Research Council under Grant No. 621-2013-4640.

References

  1. Abramowitz, M., Stegun, I.: Handbook of Mathematical Functions: With Formulas, Graphs, and Mathematical Tables, vol. 55. Courier Corporation, North Chelmsford (1964)
  2. Bai, Z., Demmel, J., Dongarra, J., Ruhe, A., van der Vorst, H.: Templates for the Solution of Algebraic Eigenvalue Problems: A Practical Guide, vol. 11. SIAM, Philadelphia (2000)
  3. Bai, Z., Su, Y.: SOAR: a second-order Arnoldi method for the solution of the quadratic eigenvalue problem. SIAM J. Matrix Anal. Appl. 26(3), 640–659 (2005)
  4. Beckermann, B.: The condition number of real Vandermonde, Krylov and positive definite Hankel matrices. Numer. Math. 85(4), 553–577 (2000)
  5. Betcke, M.M., Voss, H.: Restarting projection methods for rational eigenproblems arising in fluid-solid vibrations. Math. Model. Anal. 13(2), 171–182 (2008)
  6. Betcke, M.M., Voss, H.: Restarting iterative projection methods for Hermitian nonlinear eigenvalue problems with minmax property. Numer. Math. 135(2), 397–430 (2017)
  7. Betcke, T., Higham, N.J., Mehrmann, V., Schröder, C., Tisseur, F.: NLEVP: a collection of nonlinear eigenvalue problems. Technical Report, Manchester Institute for Mathematical Sciences (2011)
  8. Betcke, T., Voss, H.: A Jacobi–Davidson-type projection method for nonlinear eigenvalue problems. Future Gener. Comput. Syst. 20(3), 363–372 (2004)
  9. Effenberger, C.: Robust solution methods for nonlinear eigenvalue problems. Ph.D. thesis, École polytechnique fédérale de Lausanne (2013)
  10. Golub, G.H., Van Loan, C.F.: Matrix Computations, vol. 3. JHU Press, Baltimore (2012)
  11. Güttel, S., Van Beeumen, R., Meerbergen, K., Michiels, W.: NLEIGS: a class of fully rational Krylov methods for nonlinear eigenvalue problems. SIAM J. Sci. Comput. 36(6), A2842–A2864 (2014)
  12. Jarlebring, E., Meerbergen, K., Michiels, W.: Computing a partial Schur factorization of nonlinear eigenvalue problems using the infinite Arnoldi method. SIAM J. Matrix Anal. Appl. 35(2), 411–436 (2014)
  13. Jarlebring, E., Mele, G., Runborg, O.: The waveguide eigenvalue problem and the tensor infinite Arnoldi method. SIAM J. Sci. Comput. 39(3), A1062–A1088 (2017)
  14. Jarlebring, E., Michiels, W., Meerbergen, K.: A linear eigenvalue algorithm for the nonlinear eigenvalue problem. Numer. Math. 122(1), 169–195 (2012)
  15. Jarlebring, E., Poloni, F.: Iterative methods for the delay Lyapunov equation with T-Sylvester preconditioning. Technical Report (2015). arXiv:1507.02100
  16. Kressner, D.: A block Newton method for nonlinear eigenvalue problems. Numer. Math. 114(2), 355–372 (2009)
  17. Kressner, D., Roman, J.E.: Memory-efficient Arnoldi algorithms for linearizations of matrix polynomials in Chebyshev basis. Numer. Linear Algebra Appl. 21(4), 569–588 (2014)
  18. Lancaster, P., Psarrakos, P.: On the pseudospectra of matrix polynomials. SIAM J. Matrix Anal. Appl. 27(1), 115–129 (2005)
  19. Lehoucq, R.B.: Analysis and implementation of an implicitly restarted Arnoldi iteration. Ph.D. thesis, Rice University (1995)
  20. Lehoucq, R.B., Sorensen, D.C.: Deflation techniques for an implicitly restarted Arnoldi iteration. SIAM J. Matrix Anal. Appl. 17(4), 789–821 (1996)
  21. Lu, D., Su, Y., Bai, Z.: Stability analysis of the two-level orthogonal Arnoldi procedure. SIAM J. Matrix Anal. Appl. 37(1), 195–214 (2016)
  22. Mackey, D.S., Mackey, N., Mehl, C., Mehrmann, V.: Structured polynomial eigenvalue problems: good vibrations from good linearizations. SIAM J. Matrix Anal. Appl. 28(4), 1029–1051 (2006)
  23. Mackey, D.S., Mackey, N., Tisseur, F.: Polynomial eigenvalue problems: theory, computation, and structure. In: Numerical Algebra, Matrix Theory, Differential-Algebraic Equations and Control Theory, pp. 319–348. Springer (2015)
  24. Meerbergen, K.: Locking and restarting quadratic eigenvalue solvers. SIAM J. Sci. Comput. 22(5), 1814–1839 (2001)
  25. Meerbergen, K.: The quadratic Arnoldi method for the solution of the quadratic eigenvalue problem. SIAM J. Matrix Anal. Appl. 30(4), 1463–1482 (2008)
  26. Mehrmann, V., Voss, H.: Nonlinear eigenvalue problems: a challenge for modern eigenvalue methods. GAMM Mitt. 27(2), 121–152 (2004)
  27. Morgan, R.: On restarting the Arnoldi method for large nonsymmetric eigenvalue problems. Math. Comput. 65(215), 1213–1230 (1996)
  28. Neumaier, A.: Residual inverse iteration for the nonlinear eigenvalue problem. SIAM J. Numer. Anal. 22(5), 914–923 (1985)
  29. Stewart, G.W.: A Krylov–Schur algorithm for large eigenproblems. SIAM J. Matrix Anal. Appl. 23(3), 601–614 (2002)
  30. Su, Y., Bai, Z.: Solving rational eigenvalue problems via linearization. SIAM J. Matrix Anal. Appl. 32(1), 201–216 (2011)
  31. Szyld, D., Vecharynski, E., Xue, F.: Preconditioned eigensolvers for large-scale nonlinear Hermitian eigenproblems with variational characterizations. II. Interior eigenvalues. SIAM J. Sci. Comput. 37(6), A2969–A2997 (2015)
  32. Szyld, D., Xue, F.: Preconditioned eigensolvers for large-scale nonlinear Hermitian eigenproblems with variational characterizations. I. Extreme eigenvalues. Math. Comput. 85(302), 2887–2918 (2016)
  33. Tisseur, F., Meerbergen, K.: The quadratic eigenvalue problem. SIAM Rev. 43(2), 235–286 (2001)
  34. Van Beeumen, R., Jarlebring, E., Michiels, W.: A rank-exploiting infinite Arnoldi algorithm for nonlinear eigenvalue problems. Numer. Linear Algebra Appl. 23(4), 607–628 (2016)
  35. Van Beeumen, R., Meerbergen, K., Michiels, W.: Compact rational Krylov methods for nonlinear eigenvalue problems. SIAM J. Matrix Anal. Appl. 36(2), 820–838 (2015)
  36. Voss, H.: A maxmin principle for nonlinear eigenvalue problems with application to a rational spectral problem in fluid-solid vibration. Appl. Math. 48(6), 607–622 (2003)
  37. Voss, H.: An Arnoldi method for nonlinear eigenvalue problems. BIT 44(2), 387–401 (2004)
  38. Voss, H.: Nonlinear eigenvalue problems. In: Hogben, L. (ed.) Handbook of Linear Algebra, 2nd edn., no. 164 in Discrete Mathematics and Its Applications. Chapman and Hall/CRC (2013)
  39. Zhang, Y., Su, Y.: A memory-efficient model order reduction for time-delay systems. BIT 53(4), 1047–1073 (2013)

Copyright information

© The Author(s) 2017

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. Department of Mathematics, Swedish e-Science Research Center (SeRC), KTH Royal Institute of Technology, Stockholm, Sweden
