On restarting the tensor infinite Arnoldi method

An efficient and robust restart strategy is important for any Krylov-based method for eigenvalue problems. The tensor infinite Arnoldi method (TIAR) is a Krylov-based method for solving nonlinear eigenvalue problems (NEPs). This method can be interpreted as an Arnoldi method applied to a linear and infinite dimensional eigenvalue problem where the Krylov basis consists of polynomials. We propose new restart techniques for TIAR and analyze their efficiency and robustness. More precisely, we consider an extension of TIAR which corresponds to generating the Krylov space using not only polynomials, but also structured functions, which are sums of exponentials and polynomials, while maintaining a memory efficient tensor representation. We propose two restarting strategies, both derived from the specific structure of the infinite dimensional Arnoldi factorization. One restarting strategy, which we call semi-explicit TIAR restart, provides the possibility to carry out locking in a compact way. The other strategy, which we call implicit TIAR restart, is based on the Krylov–Schur restart method for the linear eigenvalue problem and preserves its robustness. Both restarting strategies involve approximations of the tensor structured factorization in order to reduce the complexity and the required memory resources. We bound the error introduced by some of the approximations in the infinite dimensional Arnoldi factorization, showing that those approximations do not substantially influence the robustness of the restart approach. We illustrate the effectiveness of the approaches by applying them to solve large-scale NEPs that arise from a delay differential equation and a wave propagation problem. The advantages in comparison to other restart methods are also illustrated.


Introduction
We consider the nonlinear eigenvalue problem (NEP) defined as finding (λ, v) ∈ C × C^n \ {0} such that M(λ)v = 0 (1), where λ ∈ Ω ⊆ C, Ω is an open disk centered at the origin and M : Ω → C^{n×n} is analytic. The NEP has received considerable attention in the literature; see the review papers [26,38] and the problem collection [7]. A large number of methods for (1) are available in the numerical linear algebra literature. There are specialized methods for solving particular classes of NEPs, such as polynomial eigenvalue problems (PEPs), see [18,22,23] and [2, Chapter 9], in particular quadratic eigenvalue problems (QEPs) [3,24,25,33], and rational eigenvalue problems (REPs) [5,6,30,36]. There are also methods that exploit the structure of the operator M(λ), like Hermitian structure [31,32] or low rank of the matrix coefficients [34]. Methods for solving more general classes of NEPs are also present in the literature. There exist methods based on modifications of the Arnoldi method [37], which can be restarted for certain problems, Jacobi-Davidson methods [8], and Newton-like methods [9,16,28]. Finally, there is a class of methods (to which the presented method belongs) based on Krylov and rational Krylov methods that can be interpreted either as dynamically expanding an approximation of the NEP or as applying a method to an infinite dimensional operator [11,14,35].
In principle, we do not assume any particular structure of the NEP except for analyticity and the computability of certain quantities associated with M(λ) (further described later). This is similar to the infinite Arnoldi method (IAR) [14], which follows the same line of reasoning as our approach. IAR is equivalent to the Arnoldi method applied to a linear operator. More precisely, under the assumption that zero is not an eigenvalue, the problem (1) can be reformulated as λB(λ)v = v, where B(λ) = M(0)^{-1}(M(0) − M(λ))/λ. This problem is equivalent to the linear and infinite dimensional eigenvalue problem λBψ(θ) = ψ(θ), where ψ : C → C^n is an analytic function [14, Theorem 3]. The operator B is linear, maps functions to functions, and is defined in (2); it acts by integration combined with a constant term determined by the derivatives of M at zero [14]. IAR has a particular structure such that it can be represented with a tensor, and this was the basis for the tensor infinite Arnoldi method (TIAR) in [13]. TIAR is equivalent to IAR but computationally more attractive (in terms of memory and CPU-time) due to a memory efficient representation of the basis matrix.
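The reformulation λB(λ)v = v can be verified numerically. The following sketch (illustrative, not from the paper: a small random quadratic NEP solved via a companion linearization) checks that an eigenpair of M satisfies the reformulated problem, assuming λ ≠ 0 and M(0) nonsingular:

```python
import numpy as np
from scipy.linalg import eig

# Small random quadratic NEP M(lam) = A0 + lam*A1 + lam^2*A2 (illustrative)
rng = np.random.default_rng(0)
n = 5
A0, A1, A2 = (rng.standard_normal((n, n)) for _ in range(3))

def M(lam):
    return A0 + lam * A1 + lam**2 * A2

# Companion linearization: [[0, I], [-A0, -A1]] x = lam [[I, 0], [0, A2]] x
L = np.block([[np.zeros((n, n)), np.eye(n)], [-A0, -A1]])
K = np.block([[np.eye(n), np.zeros((n, n))], [np.zeros((n, n)), A2]])
evals, evecs = eig(L, K)
idx = np.argmin(np.abs(evals - 1.0))      # pick some nonzero eigenvalue
lam, v = evals[idx], evecs[:n, idx]
v = v / np.linalg.norm(v)

def B(lam):
    # B(lam) = M(0)^{-1} (M(0) - M(lam)) / lam, with M(0) = A0 here
    return np.linalg.solve(A0, (M(0.0) - M(lam)) / lam)

# An eigenpair of the NEP satisfies lam * B(lam) v = v
assert np.linalg.norm(M(lam) @ v) < 1e-8
assert np.linalg.norm(lam * B(lam) @ v - v) < 1e-8
```

The identity holds algebraically: λB(λ)v = v − M(0)^{-1}M(λ)v = v whenever M(λ)v = 0.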
A problematic aspect of any algorithm based on the Arnoldi method is that, when many iterations are performed, the computation time per iteration eventually becomes large. Moreover, finite arithmetic aspects may restrict the accuracy. Fortunately, an appropriate restart of the algorithm can resolve these issues in many situations. There exist two main classes of restarting strategies: explicit restart and implicit restart. Most explicit restart techniques correspond to selecting a starting vector that generates an Arnoldi factorization with the wanted Ritz values. The implicit restart consists of computing a new Arnoldi factorization with the wanted Ritz values, without explicitly computing a starting vector. This can be done by deflating the unwanted Ritz values as in, e.g., IRA [20], or by extracting a proper subspace of the Krylov space using the Krylov-Schur restart approach [29]. For reasons of numerical stability, the implicit restart is often considered more robust than the explicit restart. See [27] for further discussions about the restart of the Arnoldi method for the linear eigenvalue problem.
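For the linear case, the Krylov-Schur restart [29] can be sketched in a few lines: run Arnoldi, reorder the Schur form of the Hessenberg matrix so that the wanted Ritz values come first, and truncate. The following self-contained sketch (with illustrative parameters and a random matrix, not the paper's algorithm) verifies that the truncated factorization A W = W T_p + v b^T still holds:

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(1)
n, m, p = 100, 20, 5
A = rng.standard_normal((n, n)) / np.sqrt(n)

# Arnoldi: A V_m = V_m H_m + h_{m+1,m} v_{m+1} e_m^T
V = np.zeros((n, m + 1), dtype=complex)
H = np.zeros((m + 1, m), dtype=complex)
v0 = rng.standard_normal(n)
V[:, 0] = v0 / np.linalg.norm(v0)
for k in range(m):
    w = A @ V[:, k]
    for j in range(k + 1):                    # modified Gram-Schmidt
        H[j, k] = np.vdot(V[:, j], w)
        w = w - H[j, k] * V[:, j]
    H[k + 1, k] = np.linalg.norm(w)
    V[:, k + 1] = w / H[k + 1, k]

# Sorted Schur form of H_m: wanted Ritz values (largest real part) first
ritz = np.linalg.eigvals(H[:m, :m])
s = np.sort(ritz.real)
thr = (s[-p] + s[-p - 1]) / 2
T, Q, sdim = schur(H[:m, :m], output='complex', sort=lambda x: x.real > thr)

# Truncate to a length-p Krylov-Schur factorization A W = W T_p + v_next b^T
W = V[:, :m] @ Q[:, :p]
Tp = T[:p, :p]
b = H[m, m - 1] * Q[m - 1, :p]
res = np.linalg.norm(A @ W - W @ Tp - np.outer(V[:, m], b))
assert res < 1e-10
```

Note that the residual identity holds for any ordering of the Schur form; the sorting only determines which Ritz values survive the truncation.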
In this work we propose two new restart techniques for TIAR:
- An implicit restart, which consists of an adaptation of the Krylov-Schur restart.
- A semi-explicit restart, which consists of an explicit restart that imposes the structure on the converged locked Ritz pairs and on the starting function.
The derivation of our implicit restart procedure is based on Krylov-Schur restarting in infinite dimensions. We improve the procedure in several ways. The structure of the Arnoldi factorization allows us to perform approximations that reduce the complexity and the memory requirements. We prove that the coefficient matrix, representing the basis of the Krylov space, exhibits a fast decay in its singular values. This allows us to effectively use a low rank approximation of this matrix. Moreover, we prove that there is a fast decay in the coefficients of the polynomials representing the basis of the Krylov space. Therefore, the high order coefficients of the Krylov basis can be neglected if the power series coefficients of M(λ) decay to zero. We give explicit bounds on the errors due to those approximations.
A semi-explicit restart for IAR was presented in [12]. We extend the procedure to TIAR. The feature of imposing the structure on the converged Ritz values and the starting function is obtained by generating the Krylov space using a particular type of structured functions, which are sums of polynomial and exponential functions. We show that such functions can be included in the framework of TIAR. In particular, we derive a memory-efficient representation of the structured functions, similar to [13].
There exist other Arnoldi-like methods combined with a companion linearization that use a memory-efficient representation of the Krylov basis matrix and that can be restarted. There are, for instance, TOAR [17,39] and CORK [35], which are based on the concept of compact Arnoldi decompositions [21]. Similar to TIAR, the direct usage of the Krylov-Schur restart for these methods does not decrease the complexity unless SVD-based approximations are used (which is indeed suggested in the implementations of these methods). More precisely, the coefficients that represent the Krylov basis are replaced with their low rank approximations. In contrast to those approaches, our specific setting, in particular the infinite dimensional function formulation and the representation of the basis with tensor structured functions, allows us to characterize the impact of the approximations.
The relationship with CORK [35] can be seen as follows. A variation of a special case of our approach (implicit restart without SVD-compression and without polynomial degree reduction) has similarities with a special case of a variation of CORK (single shift, with a particular companion linearization without SVD-compression). Our approach is based on a derivation involving infinite dimensionality which allows us to derive theory for the truncation and it allows us to restart with infinite dimensional objects. This strategy is effective since the invariant pairs are infinite dimensional objects. In contrast to this, CORK is derived from reasoning concerning the NEP linearization. This allows the usage of different types of companion linearizations, that correspond to different approximations of the nonlinearities of M(λ), and leads to a rational Krylov approach, i.e., several shifts can be used in one run.
The paper is organized as follows: in Sect. 2 we extend TIAR to tensor structured functions. In Sect. 3 we present a derivation of the Krylov-Schur type restarting in an abstract and infinite dimensional setting. Section 4 contains the derivation of a semi-explicit restart for TIAR. In Sect. 5 we carry out the adaptation of the Krylov-Schur restart for TIAR. We analyze the complexity of the proposed methods in Sect. 6. Finally, in Sect. 7 we show the effectiveness of the restarting strategies through numerical simulations on large and sparse NEPs.
We denote by a_{:,:,:} a three-dimensional tensor and by a_{i,:,:}, a_{:,j,:} and a_{:,:,ℓ} the slices of the tensor with respect to the first, second and third dimension. The vector z_j denotes the j-th column of the matrix Z and e_j the j-th canonical unit vector. The matrix I_{m,p} denotes the matrix obtained by extracting the first m rows and p columns of a larger square identity matrix. The matrix H_k denotes the square matrix obtained by removing the last row from the rectangular Hessenberg matrix in C^{(k+1)×k}.

Tensor structured functions and TIAR factorizations
Our main algorithms are derived using particular types of functions. More precisely, we consider functions that can be expressed as ψ(θ) = q(θ ) + Y exp(Sθ)c where q : C → C n is a polynomial, Y ∈ C n× p , S ∈ C p× p , c ∈ C p and exp(Sθ) denotes the matrix exponential. Such functions were also used in [12]. We now introduce a new memory-efficient representation of such functions involving tensors.
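The splitting of the exponential part into a Taylor polynomial and a remainder, which underlies the representation in Definition 1 below, can be illustrated directly. In the following sketch (with made-up dimensions; exp_{d−1} denotes the Taylor remainder of the matrix exponential), the polynomial part plus the remainder part reconstructs Y exp(Sθ)c exactly:

```python
import numpy as np
from scipy.linalg import expm
from math import factorial

rng = np.random.default_rng(2)
n, p, d = 6, 2, 8
Y = rng.standard_normal((n, p))
S = rng.standard_normal((p, p)) / 4   # scaled so the series converges quickly
c = rng.standard_normal(p)
theta = 0.7

# Taylor part of degree d-1 of Y exp(S theta) c
poly_part = sum(theta**i / factorial(i) * Y @ np.linalg.matrix_power(S, i) @ c
                for i in range(d))
# remainder exp_{d-1}(theta S) = exp(theta S) - sum_{i<d} (theta S)^i / i!
exp_rem = expm(theta * S) - sum(np.linalg.matrix_power(theta * S, i) / factorial(i)
                                for i in range(d))
full = Y @ expm(theta * S) @ c
assert np.linalg.norm(poly_part + Y @ exp_rem @ c - full) < 1e-12
```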

Definition 1 (Tensor structured function)
The vector-valued function ψ : C → C^n is a tensor structured function if there exist Y, W ∈ C^{n×p}, ā ∈ C^{d×r}, b̄ ∈ C^{d×p}, c̄ ∈ C^p, S ∈ C^{p×p} and Z ∈ C^{n×r}, where [Z, W] has orthonormal columns and span(Y) = span(W), such that ψ admits the representation (3), where exp_{d−1}(θS) := Σ_{i=d}^{∞} θ^i S^i / i! is the remainder of the Taylor series expansion of the exponential function.
The matrix-valued function Ψ_k : C → C^{n×k} is a tensor structured function if it can be expressed as Ψ_k(θ) = (ψ_1(θ), …, ψ_k(θ)), where each ψ_i is a tensor structured function. We denote the i-th column of Ψ_k by ψ_i. The structure induced by (3) can then be written in the compact form (5), where a ∈ C^{d×k×r}, b ∈ C^{d×k×p} and C ∈ C^{p×k}. In particular, Ψ_k(θ) is represented by the matrices (Z, W, Y, S) and the coefficients (a, b, C). We say that Ψ_k(θ) is orthogonal if its columns are orthonormal, i.e., ⟨ψ_i, ψ_j⟩ = δ_{i,j} for i, j = 1, …, k.
We use the scalar product consistent with the other papers about the infinite Arnoldi method [12,14], i.e., if ψ(θ) = Σ_{i=0}^{∞} θ^i x_i and ϕ(θ) = Σ_{i=0}^{∞} θ^i y_i, then ⟨ψ, ϕ⟩ := Σ_{i=0}^{∞} y_i^H x_i. The computation of this scalar product and the induced norm for the tensor structured functions (3) is further characterized in this section. We let ‖F‖ denote the induced norm of F : C → C^{n×p}. Notice that polynomials are also tensor structured functions; more precisely, they are represented as (5) with C = 0. In particular, by definition of (4), we have the relation (6) between the induced norm of a polynomial and the Frobenius norm of its coefficients, which holds for any W ∈ C^{nd×p}. Similar to many restart strategies for linear eigenvalue problems, our approach is based on the computation, representation and manipulation of an Arnoldi-type factorization. For our infinite dimensional operator, the analogous Arnoldi-type factorization is defined as follows.
Definition 2 (TIAR factorization) Let B be the operator defined in (2). Let Ψ_{k+1} : C → C^{n×(k+1)} be a tensor structured function with orthonormal columns and let H_k ∈ C^{(k+1)×k} be a Hessenberg matrix with positive elements in the subdiagonal. If the pair satisfies BΨ_k = Ψ_{k+1} H_k, (7) then (Ψ_{k+1}, H_k) is called a TIAR factorization.

Action of B on tensor structured functions
The TIAR factorization in Definition 2 involves the action of the operator B. In order to characterize the action of B on tensor structured functions (3), we need the function M_d : C^{n×p} × C^{p×p} → C^{n×p}, defined in [12] as in (8), where we introduce the notation M_i := M^{(i)}(0).

Remark 4
The assumption (10) can only be satisfied if r + p ≤ n. This is the generic case that we consider in this paper. Our focus is on large-scale NEPs and, in Sect. 5.1, we introduce approximations that prevent r from becoming large. The hypothesis λ(S) ⊆ Ω is necessary in order to define M_d(Y, S), which is used to compute z̃ in Eq. (9).

Orthogonality
The tensor structured functions in the TIAR factorization (Definition 2) are orthonormal. In order to impose the orthogonality in our algorithms, we now present a theory which characterizes the orthonormality of tensor structured functions in terms of their coefficients. In particular, we derive the theory necessary to carry out the Gram-Schmidt orthogonalization. Since most of the orthogonalization procedures involve linear combinations of vectors, we start with the observation that linear combinations of tensor structured functions carry over directly to the coefficients.
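In coefficient form, the scalar product of polynomial tensor structured functions is the Euclidean inner product of the stacked Taylor coefficients, so Gram-Schmidt can operate directly on the coefficients. A minimal sketch (polynomial case only, with made-up dimensions and assuming the coefficient-wise inner product convention):

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, k = 5, 4, 3
# Taylor coefficients of k polynomial functions psi_j: column j of each
# slice X[i] holds the i-th coefficient vector x_i of psi_j
X = rng.standard_normal((d, n, k))

def inner(x, y):
    # <psi, phi> = sum_i y_i^H x_i : Euclidean product of stacked coefficients
    return np.sum(y.conj() * x)

# Gram-Schmidt carried out directly on the coefficients
Q = X.copy()
for j in range(k):
    for i in range(j):
        Q[:, :, j] -= inner(Q[:, :, j], Q[:, :, i]) * Q[:, :, i]
    Q[:, :, j] /= np.sqrt(inner(Q[:, :, j], Q[:, :, j]))

# the resulting functions are orthonormal in the induced inner product
G = np.array([[inner(Q[:, :, a], Q[:, :, b]) for b in range(k)] for a in range(k)])
assert np.linalg.norm(G - np.eye(k)) < 1e-12
```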

A TIAR expansion algorithm in finite dimension
One algorithmic component common in many restart procedures is the expansion of an Arnoldi-type factorization. The standard way to expand Arnoldi-type factorizations (as, e.g., described in [29,Sect. 3]) requires the computation of the action of the operator/matrix and orthogonalization. We now show how we can carry out an expansion of the infinite dimensional TIAR factorization (7) by only using operations on matrices and vectors (of finite dimension).
In the previous section we characterized the action of the operator B and the orthogonalization of tensor structured functions. Notice that we have derived the orthogonalization procedure only for tensor structured functions that have the same degree in the polynomial part. The expansion of a TIAR factorization (Ψ_k, H_{k−1}) involves the orthogonalization of the tensor structured function Bψ_k (computed using Theorem 3) against the columns of Ψ_k. If the degree of Ψ_k is d − 1, then the degree of Bψ_k is d. Therefore, in order to perform the orthogonalization, we have to represent Ψ_k as a tensor structured function of degree d. Starting from (5), we rewrite Ψ_k as in (23) and define the corresponding coefficients as in (24). Using this relation, the function Ψ_k in (23) can be expressed as a tensor structured function of degree d. These results can be directly combined to expand a TIAR factorization. The resulting algorithm is summarized in Algorithm 1. The action of the operator B, described in Theorem 3, is expressed in Steps 2-4.
Step 5 corresponds to increasing the degree of the TIAR factorization as described in (23) and (24). The orthogonalization of the new function, carried out by Theorem 6, is expressed in Steps 6-8, and the orthonormalization coefficients are then stored in the Hessenberg matrix H_k. We can truncate the sums in (8), (19) and (17) analogously to [12]. Due to the representation of Ψ_k as a tensor structured function, the expansion with one column corresponds to an expansion of all the coefficients representing Ψ_k. This expansion is visualized in Fig. 1.

Two structured restarting approaches
The standard restart approach for TIAR using Krylov-Schur type restarting, as described in the previous section, involves expansions and manipulations of the TIAR factorization. Due to the linearity of tensor structured functions with respect to the coefficients, described in Observation 5, the manipulations for Ψ_m leading to Ψ_p can be directly carried out on the coefficients representing Ψ_m. Unfortunately, due to the implicit representation of Ψ_m, the memory requirements are not substantially reduced, since the basis matrix Z ∈ C^{n×r} is not modified in the manipulations. The size of the basis matrix Z is the same before and after the restart. We propose two ways of further exploiting the structure of the functions in order to avoid a dramatic increase in the required memory resources.
- Semi-explicit restart (Sect. 4): An invariant pair can be completely represented by exponentials and therefore does not contribute to the memory requirement for Z. The fact that invariant pairs are exponentials was exploited in the restart in [12]. We show how the ideas in [12] can be carried over to tensor structured functions. More precisely, the adaptation of [12] involves restarting the iteration with a locked pair, i.e., only the first p columns of (27), and a function f constructed in a particular way. The approach is outlined in Algorithm 2 with details specified in Sect. 4.
- Implicit restart (Sect. 5): By only representing polynomials, we show that the TIAR factorization has a particular structure such that it can be accurately approximated. This allows us to carry out a full implicit restart, and subsequently approximate the TIAR factorization, reducing the size of the matrix Z. The adaptation is given in Algorithm 3. The approximation of the TIAR factorization in Step 6 is specified in Algorithm 4 and derived in Sect. 5, including an error analysis.

Tensor structure exploitation for the semi-explicit restart
The IAR restart approach in [12] is based on representing functions as sums of exponentials and polynomials. An attractive feature of that approach is that the invariant pairs can be exactly represented and locking can be efficiently incorporated. Due to the explicit storage of polynomial coefficients, the approach still requires considerable memory. We here show that, by representing the functions implicitly as tensor structured functions (3), we can maintain all the advantages but improve performance (in both memory and CPU-time). This construction is equivalent to [12], but more efficient.
The expansion of the TIAR factorization with tensor structured functions (as described in Algorithm 1), combined with the locking procedure (as described in Sect. 3.2) and with imposing the structure on the invariant pair as in [12], results in Algorithm 2. Steps 3-10 follow the procedure described in [12], adapted for tensor structured functions. In particular, Steps 3-6 consist of extracting and imposing the structure on the invariant pair (Ψ, R_{1,1}). In Steps 7-9 a new starting function f is selected and orthogonalized with respect to Ψ, and in Step 10 the new TIAR factorization is defined.
The computation of the invariant pair (Ψ, R_{1,1}) and of the new starting function f involves the matrix Ŷ [12, Equation (5.11)]. This matrix can be extracted from the tensor structured representation as follows. By using Observation 5 with M := P I_{m,p} Q, we obtain the desired representation of Ŷ.

Tensor structure exploitation for the implicit polynomial restart
In contrast to the procedure in Sect. 4, where the main idea was to do locking with exponentials and restart with a factorization of length p′, we now propose a fully implicit procedure involving a factorization of length p. In this setting we use C = 0, i.e., we only represent polynomials with the tensor structured functions. This allows us to derive theory for the structure of the coefficient matrix, which shows how to approximate the TIAR factorization. This procedure is summarized in Algorithm 3. The approximation in Step 6 is done in order to reduce the growth in memory requirements for the representation. The approximation technique is derived in the following subsections and summarized in Algorithm 4.
Our approximation approach is based on a truncated singular value decomposition and a degree reduction. A compression with a truncated singular value decomposition was also used for the compact representations in CORK [35] and TOAR [17]. Our specific setting allows us to prove bounds on the error introduced by the approximations (Sects. 5.1, 5.2). Observe that, if (Ψ, Λ^{-1}) is an invariant pair, then, due to the decay of the power series coefficients of the exponential function, the coefficients representing Ψ exhibit the decay (30).

Theorem 8
Let ψ be a tensor structured function (3) such that c̄ = 0 and ‖ψ‖ = 1. Let Z ∈ C^{n×r} be a matrix and a ∈ C^{(d+k+1)×(k+1)×r} a set of coefficients that represent the tensor structured function Ψ_{k+1}, and let H_k ∈ C^{(k+1)×k} be such that (Ψ_{k+1}, H_k) is a TIAR factorization obtained by using ψ as a starting function. Then the bound (31) holds.
By combining (33), (32) and the sub-additivity of the square root, we obtain a bound on ‖a_{i,:,:}‖_F in terms of C(ψ) and ‖a_{1,:,:}‖_F. Since Ψ_k is orthonormal, by (6) we have that ‖a_{1,:,:}‖_F ≤ 1. Let L_k^† denote the pseudo-inverse of L_k. In order to show (31), we now show that ‖R^{-1}‖_F = ‖L_k^†‖_F and κ(R) = κ(L_k). Due to the equivalence of TIAR and IAR and the companion matrix interpretation of IAR [14, Theorem 6], TIAR is equivalent to using the Arnoldi method on the matrix C_{k+1} with starting vector v = Σ_{ℓ=1}^{r} ā_{:,ℓ} ⊗ z_ℓ. More precisely, the relation between the function-valued Krylov matrix and Ψ_{k+1} can be written in terms of vectors as L_{k+1} = V_{k+1}R, where the first column of V_{k+1} and L_{k+1} is v and L_{k+1} = [v, C_{k+1}v, …, C_{k+1}^{k}v]. By using the orthogonality of V_{k+1} we conclude that ‖R^{-1}‖_F = ‖L_k^†‖_F and κ(R) = κ(L_k).
The approximations that we introduce in the next sections (required in Step 6 of Algorithm 3) are based on the assumption that the tensor structured functions Ψ^{(j)} are such that the decay constant C(Ψ^{(j)}) is small. Theorem 8 shows that this constant remains small after the TIAR expansion in Step 2 if κ(L_k) is not large. However, the condition number of the Krylov matrix L_k can be large, see [4]. This does not necessarily imply that the decay constant is large. Notice that if (Ψ_k, H_k) is an invariant pair, L_k has linearly dependent columns and κ(L_k) is infinite. Analogously, if (Ψ_k, H_k) is (in this sense) close to an invariant pair, we expect κ(L_k) to be large. Hence, in these situations, the right-hand side of (31) is expected to be large. However, the decay constant is not expected to be large, since the decay constant for an invariant pair is given by (30). Note that the decay is also preserved in the operations associated with the restart. After the Ritz-value selection (Steps 3-4) the new TIAR factorization is computed in Step 5. Since Ψ^{(j+1)} is obtained through a unitary transformation from Ψ^{(j)}, by using the properties of the Frobenius norm, we get C(Ψ^{(j+1)}) ≤ √(p+1) C(Ψ^{(j)}).

Approximation by SVD compression
Given a TIAR factorization with basis function Ψ_k, we now show (in the following theorem) how we can approximate the basis function with less memory, by using a thinner basis matrix Z̃ ∈ C^{n×r̃}, where r̃ should be selected as small as possible. The theorem also shows how this approximation influences the basis function Ψ_k as well as the residual of the TIAR factorization. It turns out that the residual error is small if σ_{r̃+1} is small, where σ_1, …, σ_r are the singular values associated with the coefficient tensor. This implies that r̃ can be chosen small if we have a fast singular value decay. We characterize the decay of the singular values in Sect. 5.3.
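The compression step can be sketched as follows: unfold the coefficient tensor, compute its SVD, and keep the r̃ singular values above a drop tolerance; by the Eckart-Young theorem the two-norm error is exactly the first neglected singular value. (Illustrative data with artificially decaying slices, not the output of a TIAR run.)

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, k = 40, 12, 10
# coefficient "slices" with an imposed decay, mimicking the decay of
# a_{i,:,:} analyzed in this section (illustrative data only)
a = np.array([rng.standard_normal((k, n)) * 2.0**(-i) for i in range(d)])

A = a.reshape(d * k, n)                  # unfolding of the tensor
U, s, Vt = np.linalg.svd(A, full_matrices=False)
r_new = int(np.sum(s > 1e-2 * s[0]))     # drop tolerance, cf. eps in Algorithm 4
A_trunc = U[:, :r_new] * s[:r_new] @ Vt[:r_new]
err = np.linalg.norm(A - A_trunc, 2)
# Eckart-Young: the two-norm error equals the first neglected singular value
assert r_new == len(s) or abs(err - s[r_new]) < 1e-8 * s[0]
```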

By definition and (6), we obtain the corresponding bound on the approximation error. Moreover, by using the two-norm bound of the Frobenius norm, (38), and the fact that ‖X_i‖_2 ≤ σ_{r̃+1}, the bound on the residual follows.

Approximation by reducing the degree
Another approximation which reduces the complexity can be done by truncating the polynomial in Ψ k . The following theorem illustrates the approximation properties of this approach.

Remark 11
The approximation given in Theorem 10 can only be effective under the condition that max_{d̃+1≤i≤d} ‖M_i‖_F/(d̃+1)! is small. In particular, this condition is satisfied if the Taylor coefficients M_i/i! exhibit a fast decay, i.e., if the coefficients of the power series expansion of M(λ) decay to zero.
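For instance, for a DEP of the form used in Sect. 7, M(λ) = −λ²I + λT_1 + T_0 + e^{−λ}T_2, the derivatives M_i are explicitly available and M_i/i! decays factorially, so truncating the power series at a moderate degree is accurate. A sketch with random matrices (the truncation degree and evaluation point are illustrative):

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(5)
n = 8
T0, T1, T2 = (rng.standard_normal((n, n)) for _ in range(3))
I = np.eye(n)

def M(lam):
    return -lam**2 * I + lam * T1 + T0 + np.exp(-lam) * T2

def M_deriv(i):
    # i-th derivative of M at 0 for the DEP above (illustrative matrices)
    base = (-1.0)**i * T2              # contribution of the exp(-lam) T2 term
    if i == 0: base = base + T0
    if i == 1: base = base + T1
    if i == 2: base = base - 2.0 * I
    return base

lam, d_trunc = 0.3, 15
approx = sum(lam**i / factorial(i) * M_deriv(i) for i in range(d_trunc + 1))
err = np.linalg.norm(M(lam) - approx)
# the Taylor coefficients M_i / i! decay factorially, so a moderate
# truncation degree already gives a tiny error
assert err < 1e-10
```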
The final approximation procedure is summarized in Algorithm 4. In particular, Steps 1-2 correspond to the approximation by SVD compression, described in Sect. 5.1, whereas Steps 3-4 correspond to the approximation by reducing the degree, described in Sect. 5.2.

The fast decay of singular values
Finally, as a further justification of our approximation procedure, we now show how fast the singular values decay. The fast decay illustrated below justifies the effectiveness of the truncation in Sect. 5.1.

Lemma 12
Let a ∈ C^{(d+1)×(k+1)×r} and Z ∈ C^{n×r} be the coefficients and the matrix that represent the tensor structured function Ψ_{k+1}, and let H_k ∈ C^{(k+1)×k} be such that (Ψ_{k+1}, H_k) is a TIAR factorization. Then the tensor a is generated by d + 1 vectors.

Theorem 13
Under the same hypotheses as Lemma 12, let A be the unfolding of the tensor a in the sense that A = [A_1, …, A_{d+1}], where A_i := (a_{i,:,:})^T. We have the following decay in the singular values. Proof We define an auxiliary matrix Ã. Notice that the columns of the matrices A and Ã correspond to the vectors a_{i,j,:}^T. In particular, using Lemma 12, we have that rank(A_1) = k+1, whereas rank(A_j) = 1 if j ≤ d−k and rank(A_j) = 0 otherwise. Then we have that rank(A) = d+1 and rank(Ã) = R. Using Weyl's theorem [10, Corollary 8.6.2] and Theorem 8, we obtain the claimed decay for i ≥ R+1.
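The mechanism behind this proof, Weyl's theorem applied to a rank-R structured part, can be checked numerically: adding a rank-R matrix shifts the singular value index by at most R. A sketch with arbitrary sizes:

```python
import numpy as np

rng = np.random.default_rng(7)
m, n, R = 30, 20, 4
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, R)) @ rng.standard_normal((R, n))   # rank <= R

sA = np.linalg.svd(A, compute_uv=False)
sAB = np.linalg.svd(A + B, compute_uv=False)
# Weyl: sigma_{i+R}(A + B) <= sigma_i(A) + sigma_{R+1}(B) = sigma_i(A),
# since the (R+1)-st singular value of a rank-R matrix vanishes
assert np.all(sAB[R:] <= sA[:len(sA) - R] + 1e-10)
```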

Complexity analysis
We presented above two different restarting strategies: the structured semi-explicit restart and the implicit restart. They perform differently on different problems, and we have not been able to conclusively determine whether one is better than the other. The best choice of restarting strategy appears to depend on many problem properties, so it may be convenient to test both methods on the same problem. We now discuss the general performance in terms of complexity and stability. The complexity discussion is based on the assumption that the cost of applying M_0^{-1} is negligible in comparison to the other parts of the algorithm.

Complexity of expanding the TIAR factorization
The main computational effort of Algorithm 2 and Algorithm 3 is the expansion of a TIAR factorization described in Algorithm 1, independently of which restarting strategy is used. The essential computational effort of Algorithm 1 is the computation of z̃, given in Eq. (9). This operation has complexity O(drn) per iteration. In the implicit restart, Algorithm 3, r and d are in general not large, due to the way they are automatically selected in the approximation procedure in Algorithm 4. In the semi-explicit restart, Algorithm 2, we have instead r, d ≤ m.

Complexity of the restarting strategies
After an implicit restart we obtain a TIAR factorization of length p, whereas after a semi-explicit restart we obtain a TIAR factorization of length p′. This means that the semi-explicit restart requires a re-computation phase, i.e., after the restart we need to perform p − p′ extra steps in order to obtain a TIAR factorization of length p. If p − p′ is large, i.e., if not many Ritz values converged in comparison to the restarting parameter p, then the re-computation phase is the essential computational effort of the algorithm. Notice that this is hard to predict, since we do not know in advance how fast the Ritz values will converge.

Stability of the restarting strategies
We will illustrate in Sect. 7 that the restarting approaches have different stability properties. The semi-explicit restart tends to be efficient if only a few eigenvalues are wanted, i.e., if p_max is small. This is due to the fact that we impose the structure in the starting function. On the other hand, the implicit restart requires a thick restart in order to be stable in several situations; see the corresponding discussions for the linear case in [19, Chapter 8]. Then p has to be large enough, in the sense that at each restart some of the p wanted Ritz values have residuals that are not yet small. This leads to additional computational and memory resources.
If we use the semi-explicit restart, the computation of z̃ in Eq. (9) involves the term M_d(Y, S). This quantity can be computed in different ways. In the simulations we must choose between (8) and [12, Equation (4.8)]. The choice influences the stability of the algorithm. In particular, if an eigenvalue of S is close to ∂Ω and M(λ) is not analytic on ∂Ω, the series (8) may converge slowly and in practice overflow can occur. In such situations, [12, Equation (4.8)] is preferable. Notice that it is not always possible to use [12, Equation (4.8)], since many problems cannot be formulated as a short sum of products of matrices and functions.

Memory requirements of the restarting strategies
From a memory point of view, the essential part of the semi-explicit restart is the storage of the matrices Z and Y, that is, O(nm + np). In the implicit restart the essential part is the storage of the matrix Z, which requires O(nr_max), where r_max denotes the maximum value that the variable r takes in Algorithm 1. The size of r_max is not predictable, since it depends on the SVD-approximation introduced in Algorithm 4. Since the variable r is increased in each iteration of Algorithm 1, it holds that r_max ≥ m − p. Therefore, in the optimal case where r_max attains this lower bound, the two methods are comparable in terms of memory requirements. Notice that the semi-explicit restart in general requires less memory and has the advantage that the required memory is problem independent.

Delay eigenvalue problem
In order to illustrate properties of the proposed restart methods and advantages in comparison to other approaches, we carried out numerical simulations for solving a delay eigenvalue problem (DEP). More precisely, we consider the DEP obtained by discretizing the characteristic equation of the delay differential equation defined in [15, Sect. 4.2, Eq. (22a)] with τ = 1. By using a standard second order finite difference discretization, the DEP is formulated as M(λ) = −λ²I + λT_1 + T_0 + e^{−λ}T_2. We show how the proposed methods perform in terms of m, the maximum length of the TIAR factorization, and the restarting parameter p. Table 1a, b show the advantages of our semi-explicit restart approach in comparison to the equivalent method described in [12]. Our new approach is faster in terms of CPU-time and can solve larger problems due to the memory efficient representation of the Krylov basis. Table 2a, b show the effectiveness of the approximations introduced in Sects. 5.1 and 5.2 in comparison to the corresponding restart procedure without approximations. In particular, in Algorithm 4 we consider a drop tolerance ε = 10^{−14}. Since the DEP is defined by entire functions, the power series coefficients decay to zero and, according to Remark 11, the approximation by reducing the degree is expected to be effective. By approximating the TIAR factorization, the implicit restart requires fewer resources in terms of memory and CPU-time and can solve larger problems.
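At toy scale, a DEP of the same form can be reproduced as follows: construct small matrices with a known eigenvalue λ* (by forcing M(λ*) to be singular), truncate the power series of M at degree N, and solve the resulting polynomial eigenvalue problem by companion linearization, in the same Taylor-expansion spirit as IAR/TIAR. All sizes, seeds and tolerances below are illustrative, not the paper's experimental setup:

```python
import numpy as np
from math import factorial
from scipy.linalg import eig

rng = np.random.default_rng(6)
n, N = 6, 10
T1, T2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
I = np.eye(n)

# Force M(lam_star) e_1 = 0 so that lam_star is an eigenvalue of the DEP
lam_star = 0.5
S = rng.standard_normal((n, n)); S[:, 0] = 0.0           # S e_1 = 0
T0 = S + lam_star**2 * I - lam_star * T1 - np.exp(-lam_star) * T2
# DEP as in Sect. 7: M(lam) = -lam^2 I + lam T1 + T0 + exp(-lam) T2

def M_deriv(i):
    base = (-1.0)**i * T2                                # from exp(-lam) T2
    if i == 0: base = base + T0
    if i == 1: base = base + T1
    if i == 2: base = base - 2.0 * I
    return base

# Degree-N Taylor truncation of M, solved as a PEP by linearization
C = [M_deriv(i) / factorial(i) for i in range(N + 1)]
A = np.zeros((n * N, n * N)); B = np.eye(n * N)
A[:n * (N - 1), n:] = np.eye(n * (N - 1))                # shift blocks
for i in range(N):
    A[n * (N - 1):, n * i:n * (i + 1)] = -C[i]
B[n * (N - 1):, n * (N - 1):] = C[N]
evals = eig(A, B, right=False)
evals = evals[np.isfinite(evals)]
assert np.min(np.abs(evals - lam_star)) < 1e-6
```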
We now illustrate the differences between the semi-explicit and the implicit restart. More precisely, we show how the parameters m and p influence the convergence of the Ritz values with respect to the number of iterations. The accuracy of an eigenpair (λ, v) of the nonlinear eigenvalue problem is measured with the relative residual. The convergence of the semi-explicit restart appears to be slower when p is not sufficiently large; see Fig. 2. The convergence speed of both restarting strategies is comparable for larger m and p; see Fig. 3.
In practice, the performance of the two restarting strategies corresponds to a trade-off between CPU-time and memory. In particular, due to the fact that we impose the structure, the semi-explicit restart does not have a growth in the polynomial part at each restart and therefore requires less memory. On the other hand, for this problem, the semi-explicit restart appears to be slower in terms of CPU-time. See Tables 1 and 2. For completeness we provide the total number of accumulated inner iterations: 134 for Table 1a, 154 for Table 1b, 110 for Table 2a and 130 for Table 2b. An illustration of the CPU-time for a fixed accuracy is given in Table 3.

Waveguide eigenvalue problem
In order to illustrate how the performance depends on the problem properties, we now consider a NEP defined by functions with branch point and branch cut singularities. More precisely, we consider the waveguide eigenvalue problem (WEP) described in [13, Section 5.1], after the Cayley transformation with pole removal.
The matrix C_2^T and the second degree polynomials F_A(λ) and F_{C_1}(λ) are sparse. The matrix P(λ) is defined by nonlinear functions of λ involving square roots of polynomials, and it is dense. However, in our framework, an explicit storage of this matrix is not necessary, since the matrix-vector product P(λ)w can be computed efficiently using two Fast Fourier Transforms (FFTs) and a multiplication with a diagonal matrix. See [13] for a full description of the problem. In this NEP, Ω is the unit disk and there are branch point singularities on ∂Ω. Thus, due to the slow convergence of the power series of M(λ), in the semi-explicit restart we have to use [12, Equation (4.8)] in order to compute M_d(Y, S). This also implies that the approximation by reducing the degree is not expected to be effective, since the power series coefficients of M(λ) do not decay to zero. In analogy to the previous subsection, we carried out numerical simulations in order to compare the semi-explicit and the implicit restart. In Figs. 4 and 5 we illustrate the performance of the two restarting approaches with respect to the choice of the parameters m and p. When p is sufficiently large, the residual in the semi-explicit restart appears to stagnate after the first restart, whereas it decreases in a regular way in the implicit restart; see Fig. 4. This is due to the fact that the semi-explicit restart imposes the structure on p vectors, which is not beneficial when they do not contain eigenvector approximations. On the other hand, when p is small, the behavior of the residuals appears to be reversed; see Fig. 5. This is a consequence of the fact that, already after the first restart, the Krylov subspace is almost an invariant subspace (since p Ritz pairs are quite accurate). This is consistent with the linear case, where implicit restarting with a Krylov subspace that is almost an invariant subspace is known to suffer from numerical instabilities.
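The matrix-free application of P(λ) follows the standard pattern of an operator that is diagonal in the Fourier basis: P(λ)w = F^{-1} diag(d(λ)) F w. The sketch below illustrates this pattern with a hypothetical placeholder symbol d; the actual WEP symbol involves square roots of second degree polynomials in λ (see [13]).

```python
import numpy as np

def apply_fourier_diagonal(w, d):
    """Apply P w = F^{-1} diag(d) F w, i.e., an operator diagonal in the
    Fourier basis, using two FFTs and one diagonal multiplication."""
    return np.fft.ifft(d * np.fft.fft(w))

# Hypothetical symbol; the true WEP symbol depends nonlinearly on lmbda.
n = 8
k = np.fft.fftfreq(n) * n                   # Fourier mode indices
d = np.sqrt(1.0 + k.astype(complex)**2)     # placeholder symbol values
w = np.random.default_rng(1).standard_normal(n)
Pw = apply_fourier_diagonal(w, d)
```

The cost per product is thus O(n log n) instead of the O(n^2) of a dense matrix-vector multiplication, which is why storing P(λ) explicitly is never necessary.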
It is known that this specific problem has two eigenvalues. Therefore, in order to reduce the CPU-time and the memory resources, the restarting parameter p should be selected small. As a consequence of the above discussion, we conclude that the semi-explicit restart is the preferable restarting strategy for this problem.

Concluding remarks and outlook
In this work we have derived an extension of the TIAR algorithm and two restarting strategies. Both restarting strategies are based on approximating the TIAR factorization. In other works on the IAR method it has been shown that the basis matrix contains a structure that can be exploited, e.g., for NEPs with low rank structure in the coefficients [34]. An investigation of the combination of the approximations of the TIAR factorization with such structures of the NEP seems possible but deserves further attention.
Although the framework of TIAR and restarted TIAR is general, a specialization of the methods to the NEP at hand is required in order to solve the problem efficiently. More precisely, an efficient procedure for computing (9) is required. This is a nontrivial task for many applications and requires problem specific research.