Restarting for the Tensor Infinite Arnoldi method

An efficient and robust restart strategy is important for any Krylov-based method for eigenvalue problems. The tensor infinite Arnoldi method (TIAR) is a Krylov-based method for solving nonlinear eigenvalue problems (NEPs). This method can be interpreted as an Arnoldi method applied to a linear and infinite dimensional eigenvalue problem, where the Krylov basis consists of polynomials. We propose new restart techniques for TIAR and analyze their efficiency and robustness. More precisely, we consider an extension of TIAR which corresponds to generating the Krylov space using not only polynomials, but also structured functions that are sums of exponentials and polynomials, while maintaining a memory-efficient tensor representation. We propose two restarting strategies, both derived from the specific structure of the infinite dimensional Arnoldi factorization. One restarting strategy, which we call semi-explicit TIAR restart, provides the possibility to carry out locking in a compact way. The other strategy, which we call implicit TIAR restart, is based on the Krylov-Schur restart method for linear eigenvalue problems and preserves its robustness. Both restarting strategies involve approximations of the tensor structured factorization in order to reduce the complexity and the required memory resources. We bound the error in the infinite dimensional Arnoldi factorization, showing that the approximation does not substantially influence the robustness of the restart approach. We illustrate the approaches by applying them to solve large-scale NEPs that arise from a delay differential equation and a wave propagation problem. The advantages in comparison to other restart methods are also illustrated.


Introduction
We consider the nonlinear eigenvalue problem (NEP) defined as finding (λ, v) ∈ C × C^n \ {0} such that M(λ)v = 0, (1) where λ ∈ Ω ⊆ C, Ω is an open disk centered at the origin and M : Ω → C^{n×n} is analytic. The NEP has received considerable attention in the literature; see the review papers [26,37] and the problem collection [8].
We do not assume any particular structure of the NEP except for the analyticity and the computability of certain quantities associated with M(λ) (further described later). In this paper we consider the infinite Arnoldi method (IAR) [15], which is equivalent to the Arnoldi method applied to a linear operator. More precisely, under the assumption that zero is not an eigenvalue, the problem (1) can be reformulated as λB(λ)v = v, where B(λ) = M(0)^{-1}(M(0) − M(λ))/λ. This problem is equivalent to the linear and infinite dimensional eigenvalue problem λBψ(θ) = ψ(θ), where ψ : C → C^n is an analytic function [15, Theorem 3]. The operator B is linear, maps functions to functions, and is defined as in [15]. The tensor infinite Arnoldi method (TIAR), which is an improvement of IAR, was presented in [14]. This method is equivalent to IAR but computationally more attractive. In contrast to IAR, the basis of the Krylov space, which consists of polynomials, is implicitly represented in a memory-efficient way. This improves the performance in terms of memory and CPU-time. Another improvement of IAR was presented in [13]. This method consists of generating the Krylov space by using structured functions, which are sums of polynomials and exponential functions. The main advantage of this approach is the possibility to perform a semi-explicit restart by imposing the structure. In this paper we extend the framework of TIAR to structured functions and study restart techniques.
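The reformulation λB(λ)v = v can be checked numerically. The following sketch uses a small random polynomial NEP as a stand-in for M (the matrices A0 and A1 are illustrative, not from the paper): an eigenpair obtained from a companion linearization satisfies the reformulated problem.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A0 = rng.standard_normal((n, n))
A1 = rng.standard_normal((n, n))
I = np.eye(n)

def M(lam):
    # illustrative quadratic NEP: M(lam) = -lam^2 I + lam A1 + A0
    return -lam**2 * I + lam * A1 + A0

# Companion linearization: eigenpairs of [[0, I], [A0, A1]] are (lam, [v; lam v]).
L = np.block([[np.zeros((n, n)), I], [A0, A1]])
lams, Vs = np.linalg.eig(L)
lam, v = lams[0], Vs[:n, 0]

def B(lam):
    # B(lam) = M(0)^{-1} (M(0) - M(lam)) / lam with M(0) = A0
    return np.linalg.solve(A0, A0 - M(lam)) / lam

assert np.linalg.norm(M(lam) @ v) < 1e-8            # (lam, v) solves the NEP
assert np.linalg.norm(lam * B(lam) @ v - v) < 1e-8  # reformulated problem holds
```

The check is exact in exact arithmetic: λB(λ)v = M(0)^{-1}(M(0)v − M(λ)v)/1 = v whenever M(λ)v = 0 and λ ≠ 0.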
A problematic aspect of any algorithm based on the Arnoldi method is that, when many iterations are performed, numerical complexity and stability issues arise. Fortunately, an appropriate restart of the algorithm can partially resolve these issues. There exist two main classes of restarting strategies: explicit restart and implicit restart. Most explicit restart techniques consist of selecting a starting vector that generates an Arnoldi factorization with the wanted Ritz values. The implicit restart consists of computing a new Arnoldi factorization with the wanted Ritz values. This can be done by deflating the unwanted Ritz values as in, e.g., IRA [21], or by extracting a proper subspace of the Krylov space by using the Krylov-Schur restart approach [29]. Both approaches are mathematically equivalent. For reasons of numerical stability it is in general preferable to use an implicit restart. See [27] for further discussion about the restart of the Arnoldi method for linear eigenvalue problems.
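For the linear case, the Krylov-Schur restart mentioned above can be sketched in a few lines. The example below (plain Arnoldi on a random matrix; all names are illustrative, not from the paper) builds an Arnoldi factorization, reorders the Schur form of the square part of H_m so that the p largest Ritz values come first, and truncates to a length-p factorization.

```python
import numpy as np
import scipy.linalg

rng = np.random.default_rng(1)
n, m, p = 50, 20, 5
A = rng.standard_normal((n, n))

# Arnoldi factorization: A V[:, :m] = V[:, :m+1] H
V = np.zeros((n, m + 1), dtype=complex)
H = np.zeros((m + 1, m), dtype=complex)
V[:, 0] = rng.standard_normal(n)
V[:, 0] /= np.linalg.norm(V[:, 0])
for k in range(m):
    w = A @ V[:, k]
    H[: k + 1, k] = V[:, : k + 1].conj().T @ w
    w = w - V[:, : k + 1] @ H[: k + 1, k]
    H[k + 1, k] = np.linalg.norm(w)
    V[:, k + 1] = w / H[k + 1, k]

# Ordered Schur form: keep the p Ritz values of largest magnitude.
ritz = np.sort(np.abs(np.linalg.eigvals(H[:m, :m])))[::-1]
t = (ritz[p - 1] + ritz[p]) / 2
T, Q, sdim = scipy.linalg.schur(H[:m, :m], output="complex",
                                sort=lambda x: abs(x) > t)
assert sdim == p

# Truncated Krylov-Schur factorization of length p:
# A Vp = Vp T[:p, :p] + v_{m+1} b, with b the transformed last row of H.
Vp = V[:, :m] @ Q[:, :p]
b = H[m, m - 1] * Q[m - 1, :p]
R = A @ Vp - Vp @ T[:p, :p] - np.outer(V[:, m], b)
assert np.linalg.norm(R) < 1e-10
```

The residual check confirms that the truncated pair is again a valid Krylov-Schur factorization, which can then be expanded back to length m.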
The paper is organized as follows: in Section 2 we extend TIAR to tensor structured functions. In Section 4 we propose a semi-explicit restart for TIAR. This new algorithm is equivalent to [13], but the implicit representation of the Krylov basis gives improvements in terms of memory and CPU-time. In Section 5 we propose an implicit restart for TIAR based on an adaptation of the Krylov-Schur restart. The Krylov-Schur restart for the Arnoldi method in the linear case has constant CPU-time per outer iteration. In contrast to this, a direct usage of the Krylov-Schur restart for TIAR does not give a substantial improvement, due to the memory-efficient representation of the Krylov basis. We show that the structure of the Arnoldi factorization allows us to perform approximations that reduce the complexity and the memory requirements. We prove that the coefficient matrices representing the basis of the Krylov space exhibit a fast decay in the singular values, and we use this in the derivation of a low-rank approximation of these matrices. Moreover, we prove that there is a fast decay in the coefficients of the polynomial part of the functions in the Krylov space. This can be used to introduce another approximation when the power series coefficients of M(λ) decay to zero. We give explicit bounds on the errors due to these approximations.
There exist other Arnoldi-like methods, combined with a companion linearization, that use a memory-efficient representation of the Krylov basis matrix; see CORK [5], TOAR [18] and [38]. Similar to TIAR, the direct usage of the Krylov-Schur restart for these methods does not decrease the complexity unless SVD-based approximations are used. More precisely, the coefficients that represent the Krylov basis are replaced with their low-rank approximations. In contrast to those approaches, our specific setting allows us to characterize the impact of the approximations.
Finally, in Section 7 we show, with numerical simulations, the effectiveness of the restarting strategies.

Tensor structured functions and TIAR factorizations
Similar to many restart strategies for linear eigenvalue problems, our approach is based on the computation, representation and manipulation of an Arnoldi-type factorization. For our infinite dimensional operator, the analogous Arnoldi-type factorization is defined as follows. The functions Ψ_k are represented with a particular tensor structure, which we further describe in Section 2.1.
Definition 1 (TIAR factorization) Let Ψ_{k+1}(θ) be a tensor structured function with orthogonal columns and let H_k ∈ C^{(k+1)×k} be a Hessenberg matrix with positive elements in the subdiagonal. The pair (Ψ_{k+1}, H_k) is called a TIAR factorization if it satisfies the relation (2).

Representation and properties of the tensor structured functions
We consider a class of structured functions introduced in [13], represented in a different and memory-efficient way.
The matrix-valued function Ψ_k : C → C^{n×k} is a tensor structured function if it can be expressed as Ψ_k(θ) = (ψ_1(θ), . . . , ψ_k(θ)), where each column ψ_i is a structured function of the form (3). We denote the i-th column of Ψ_k by ψ_i. The structure induced by (3) is now, in compact form, determined by a ∈ C^{d×k×r}, b ∈ C^{d×k×p} and C ∈ C^{p×k}. We say that Ψ_k(θ) is orthogonal if its columns are orthonormal, i.e., ⟨ψ_i(θ), ψ_j(θ)⟩ = δ_{i,j} for i, j = 1, . . . , k.
We use the scalar product consistent with the other papers on the infinite Arnoldi method [13,15]. The computation of this scalar product and the corresponding norms for the tensor structured functions (3) can be done analogously to [13]. In particular, by the definition (4) we have for any W ∈ C^{nd×p}.
Remark 3 (Representation of tensor structured functions) The polynomial part of a tensor structured function (5) is a linear combination of the columns of the matrices Z and W, using the coefficients given by the tensors a and b. The exponential part is given by a linear combination of the columns of the matrix Y, using as coefficients the matrix C multiplied by the powers of S. Therefore we can represent a tensor structured function (5) by the matrices (Z, W, Y, S) and the coefficients (a, b, C).
Observation 4 (Linearity with respect to the coefficients) Let Ψ_k(θ) be the tensor structured function represented by (Z, W, Y, S) with coefficients (a, b, C), and let Ψ̃_k(θ) be represented by the same matrices but with coefficients (ã, b̃, C̃). The function Ψ̂_k(θ) = Ψ_k(θ)M + Ψ̃_k(θ)N is also a tensor structured function represented by the same matrices, with coefficients (â, b̂, Ĉ) defined, for ℓ = 1, . . . , r, by the corresponding combination of the coefficients.

The matrix function M_d(Y, S) is defined as in [13]. In particular, any nonlinear function M can be represented as a sum of products of scalar nonlinearities, and we define M_d via (8), which can equivalently be expressed as the series (9). The action of the operator B on functions represented as in (3) can now be expressed in closed form using the notation above.
The assumption (11) can only be satisfied if r + p ≤ n. This is the case we consider in this paper, since we assume the NEP to be large-scale, and in Section 5.1 we introduce approximations that prevent r from becoming large. The hypothesis λ(S) ⊆ Ω is necessary in order to define M_d(Y, S), which is used to compute z̃ in equation (10).

Orthogonalization
In order to expand a TIAR factorization (Ψ_k, H_{k−1}), we need to orthogonalize the tensor structured function Bψ_k (computed using Theorem 5) against the columns of Ψ_k(θ). The degree of Ψ_k(θ) is d − 1, whereas the degree of Bψ_k(θ) is d. In order to perform the orthogonalization, we transform them to the same degree d. Starting from (5) we can rewrite Ψ_k as in (20)-(21) for j = 1, . . . , k. Since span(W) = span(Y) and since W is orthogonal, we have that Y = W W^H Y. Hence, using this relation and (21), the function Ψ_k in (20) can be expressed in degree-d form. The orthogonal complement of ψ(θ) against the columns of Ψ_k(θ) is given in (23). The vector h contains the orthogonalization coefficients, i.e., h_j = ⟨ψ_j, ψ⟩. Moreover, with β defined as in (24), it holds that ‖ψ_⊥‖ = β.
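Since the scalar product of tensor structured functions is computed from their coefficients, the orthogonalization above acts purely on finite vectors. A minimal sketch, under the simplifying assumption that the function scalar product coincides with the Euclidean inner product of the stacked coefficient vectors (as it does for an orthonormal basis matrix):

```python
import numpy as np

rng = np.random.default_rng(2)
N, k = 30, 4                        # N = total number of stacked coefficients
# coefficients of Psi_k with orthonormal columns
Psi = np.linalg.qr(rng.standard_normal((N, k)))[0]
psi = rng.standard_normal(N)        # coefficients of the new function B psi_k

h = Psi.conj().T @ psi              # orthogonalization coefficients h_j = <psi_j, psi>
psi_perp = psi - Psi @ h            # orthogonal complement
beta = np.linalg.norm(psi_perp)     # beta = ||psi_perp||

assert np.allclose(Psi.conj().T @ psi_perp, 0)      # psi_perp orthogonal to Psi_k
assert np.isclose(np.linalg.norm(psi_perp / beta), 1.0)  # normalized new column
```

In the actual algorithm the coefficient vector is the concatenation of the entries of a, b and C for one column, but the Gram-Schmidt mechanics are identical.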

A TIAR expansion algorithm in finite dimension
One algorithmic component common to many restart procedures is the expansion of an Arnoldi-type factorization. The standard way to expand Arnoldi-type factorizations (as, e.g., described in [29, Section 3]) involves the computation of the action of the operator/matrix and orthogonalization. We now show how we can carry out an expansion of the infinite dimensional TIAR factorization (2) by only using operations on matrices and vectors of finite dimension.
In the previous subsections we presented the action of the operator B and the orthogonalization for tensor structured functions (3). These results can be directly combined to expand the TIAR factorization. The resulting algorithm is summarized in Algorithm 1. The action of the operator B described in Theorem 5 is expressed in Steps 2-4. The orthogonalization of the new function using Theorem 7 is expressed in Steps 6-7, and Step 5 corresponds to increasing the degree as described in (20) and (21). Due to the representation of Ψ_k as a tensor structured function, the expansion with one column corresponds to an expansion of all the coefficients representing Ψ_k. This expansion is visualized in Figure 1.

Algorithm 1: Expansion of a TIAR factorization
  3 Compute z_{r̃+1} and increase r̃ := r̃ + 1
  4 Set ã, b̃ and c̃ as in (14)
  5 Compute E and expand the tensors a and b as in (21); increase d̃ := d̃ + 1
  6 Compute h using (22), where ā = ã, b̄ = b̃ and c̄ = c̃
  7 Compute a_⊥, b_⊥, c_⊥ using (23) and β using (24)
  8 Expand c_{k+1} := c_⊥/β, a_{:,k+1,:} := a_⊥/β and b_{:,k+1,:} := b_⊥/β
end

Restarting for TIAR in an abstract setting

The Krylov-Schur decomposition for TIAR-factorizations
We briefly recall the reasoning behind the Krylov-Schur type restarting [29] in an abstract and infinite dimensional setting. We later show that the operations can be carried out with operations on matrices and vectors of finite size. Let (Ψ_{m+1}, H_m) be a TIAR factorization. Let P be such that P^H H_m P is triangular (ordered Schur factorization); then
contains the Ritz values that we want to purge. From (28) we find the relation (29). Using a composition of Householder reflections, we compute a matrix Q such that (30) holds. Since we want to lock the Ritz values in the matrix R_{1,1}, we replace the vector a_1 in (30) with zeros, introducing an error O(‖a_1‖). With this approximation, (30) is the wanted TIAR factorization of length p.

Two structured restarting approaches
The standard restart approach for TIAR using Krylov-Schur type restarting, as described in the previous section, involves expansions and manipulations of the TIAR factorization. Due to the linearity of tensor structured functions described in Observation 4, the manipulations of Ψ_m leading to Ψ_p can be directly carried out on the coefficients representing Ψ_m. Unfortunately, due to the implicit representation of Ψ_m, the memory requirements are not substantially reduced, since the basis matrix Z ∈ C^{n×r} is not modified in the manipulations. The size of the basis matrix Z is the same before and after the restart.
We propose two ways of further exploiting the structure of the functions in order to avoid a dramatic increase in the required memory resources.
- Semi-explicit restart (Section 4): An invariant pair can be completely represented by exponentials and therefore does not contribute to the memory requirement for Z. The fact that invariant pairs are exponentials was exploited in the restart in [13]. We show how the ideas in [13] can be carried over to tensor structured functions. More precisely, the adaptation of [13] involves restarting the iteration with a locked pair, i.e., only the first p̄ columns of (30), and a function f constructed in a particular way. The approach is outlined in Algorithm 2, with details specified in Section 4.
- Implicit restart (Section 5): By only representing polynomials, we show that the TIAR factorization has a particular structure such that it can be accurately approximated. This allows us to carry out a full implicit restart and subsequently approximate the TIAR factorization such that the matrix Z can be reduced in size. The adaptation is given in Algorithm 3, with details about the approximation specified in Section 5.
Step 6 of Algorithm 3 is given in Algorithm 4.
Algorithm 2: Semi-explicit restarting for TIAR in operator setting
  input: A normalized tensor structured function
  2 Expand the TIAR factorization (Ψ^(j), H^(j)) to length m using Algorithm 1
  3 Compute the p converged Ritz pairs and P, R_{i,j} and a_i given in (28)
  4 Compute the matrices Q, F, H and β given in (30)
end

Algorithm 3: Implicit restarting for TIAR in operator setting
  2 Expand the TIAR factorization (Ψ^(j), H^(j)) to length m using Algorithm 1
  3 Compute the p converged Ritz pairs and P, R_{i,j} and a_i given in (28)
  4 Compute the matrices Q, F, H and β given in (30)
  6 Approximation of the TIAR factorization (Algorithm 4)
end
7 Return the eigenvalues of R_{1,1}

Tensor structure exploitation for the semi-explicit restart

A restarting strategy for IAR, based on representing functions as sums of exponentials and polynomials, was presented in [13]. A nice feature of that approach is that invariant pairs can be exactly represented, and locking can be efficiently incorporated. Due to the explicit storage of polynomial coefficients in [13], the approach still requires considerable memory. We show here that, by representing the functions implicitly as tensor structured functions (3), we can maintain the advantages of [13] but improve performance (both in memory and CPU-time). The construction is equivalent to [13], but more efficient.
The expansion of the TIAR factorization with tensor structured functions (as described in Algorithm 1), combined with the locking procedure (as described in Section 3.1), results in Algorithm 2. Steps 3-7 follow the procedure described in [13], adapted for tensor structured functions. In Step 6, the function used as the new starting function can be extracted from the tensor structured representation as follows, completely equivalently to [13].
We can use Observation 4 to compute Ỹ from the tensor structured function representation. We define M := P I_{k,p} Q such that we obtain Ỹ.

Tensor structure exploitation for the implicit polynomial restart

In contrast to the procedure in Section 4, where the main idea was to perform locking with exponentials and restart with a factorization of length p̄, we now propose a fully implicit procedure involving a factorization of length p. In this setting we use Y = 0, i.e., we only represent polynomials with the tensor structured functions. This allows us to develop theory for the structure of the coefficient matrix, which can be exploited in an approximation of the TIAR factorization. The algorithm is summarized in Algorithm 3. The approximation in Step 6 is done in order to avoid the growth in memory requirements of the representation. The approximation technique is derived in the following subsections and summarized in Algorithm 4.
Our approximation approach is based on degree reduction and on approximation with a truncated singular value decomposition. A compression with a truncated singular value decomposition was also used for the compact representations in CORK [5] and TOAR [18]. In contrast to [5,18], our specific setting allows us to prove bounds on the error introduced by the approximations (Sections 5.1-5.2). We also show the effectiveness by proving a bound on the decay of the singular values (Section 5.3).
We first note the following decay in the magnitude of the elements of the tensor a, which are the coefficients representing Ψ k .
Observation 10 In the numerical simulations we observed a very fast decay of the norm of the matrices a_{i,:,:} with respect to i. Unfortunately, the condition number of the Krylov matrix [v, C_{k+1}v, . . . , C_{k+1}^{k+1}v] grows at least exponentially with respect to k; see [4] and the references therein. The bound provided by Theorem 9 is pessimistic and not sharp; we use it only for theoretical purposes. Let us consider Algorithm 3 with a constant starting function in Step 1, i.e., ψ(θ) such that a_{i,1,:} = 0 if i > 1. As a consequence of Theorem 9, after the expansion of the TIAR factorization in Step 2, the norm of a_{i,:,:} satisfies (31). By using Corollary 11 we obtain that this relation is preserved also after Step 5, which consists of writing the new TIAR factorization with the wanted Ritz values. In conclusion, in Algorithm 3 the coefficients of the Krylov basis Ψ^(j) always fulfill (31). This allows us to introduce an approximation of the TIAR factorization.

Approximation by SVD compression
Given a TIAR factorization with basis function Ψ_k, we show in the following theorem how we can approximate the basis function with less memory, more precisely with a smaller Z-matrix. The theorem also shows how this approximation influences Ψ_k. Moreover, we show that the approximation has a small impact also on the residual of the TIAR factorization.
Theorem 12 Let a ∈ C^{(d+1)×k×r}, Z ∈ C^{n×r} be the coefficients that represent the tensor structured function, and let Ψ̃_{k+1} be the tensor structured function defined by the coefficients ã ∈ C^{(d+1)×(k+1)×r̃} and Z̃ ∈ C^{n×r̃}, with ã_{i,:,:} = Ã_i^T. Then (36) holds, where γ ≈ 0.57721 is the Euler-Mascheroni constant, s is defined as the smallest s ∈ N satisfying the truncation condition, and C is defined in Corollary 11.
Proof The proof of (36a) is based on constructing a difference function Ψ̂_{k+1} = Ψ_{k+1} − Ψ̃_{k+1} as follows. We define X̂ such that we can express Ψ_{k+1} = P_d(θ)X, where Ψ̃_{k+1}(θ) = P_d(θ)X̃ and Ψ̂_{k+1}(θ) = P_d(θ)X̂; the bound follows by using (6) and ‖X̂_i‖_2. In order to show (36b), we first use that ‖BΨ̃_{k+1} − Ψ̃_k H_k‖ = ‖BΨ̂_{k+1} − Ψ̂_k H_k‖, since (Ψ_{k+1}, H_k) is a TIAR factorization, and subsequently use the decay of ‖Â_i‖ and the analyticity of M as follows. For notational convenience we define Y_i := X̂_i I_{k+1,k} for i = 0, . . . , d − 1 (37) and Y := [Y_0^H . . . Y_d^H]^H, such that we can express Ψ̂_k(θ) = P_{d−1}(θ)Y. Using [13, Theorem 4.2] for each column of Ψ̂_k(θ), we get BΨ̂_k(θ) = P_d(θ)Y_+. By definition and (6) we obtain the bound (38). Moreover, by using the two-norm bound of the Frobenius norm, (37) and the fact that ‖X̂_i‖ ≤ σ_{r̃+1}, we obtain (39). In the last inequality we used the Euler-Mascheroni inequality, where γ is defined in [1, Formula 6.1.3]. It remains to bound ‖Y_{+,0}‖. By using the definition of Y_{+,0} and again applying the Euler-Mascheroni inequality, we obtain (40) as a consequence of the Cauchy integral formula. By substituting (40) in (39) we obtain (41). We reach the conclusion (36b) by combining (41) with (38).
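The compression of this section can be sketched generically: the columns of the basis function involve Z only through products with the coefficient tensor, so a truncated SVD of the unfolded coefficients lets us shrink Z. The shapes and the decay model below are illustrative and not tied to a particular NEP:

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, k, r, tol = 200, 6, 10, 40, 1e-8
Z = np.linalg.qr(rng.standard_normal((n, r)))[0]   # orthonormal basis, n x r
a = rng.standard_normal((d, k, r))
a *= (1e-3 ** np.arange(d))[:, None, None]         # mimic the decay of Theorem 9

A = a.reshape(d * k, r)                            # unfolding of the tensor a
U, sv, Vh = np.linalg.svd(A, full_matrices=False)
s = int(np.sum(sv > tol * sv[0]))                  # truncation index

Z_new = Z @ Vh[:s].T                               # compressed basis, n x s
a_new = (U[:, :s] * sv[:s]).reshape(d, k, s)       # compressed coefficients

# Every product Z @ a[i, j, :] is reproduced up to the discarded singular values.
err = np.linalg.norm(A @ Z.T - a_new.reshape(d * k, s) @ Z_new.T)
assert err <= np.sqrt(r) * tol * sv[0]             # error bounded by the SVD tail
assert s < r                                       # the basis actually shrank
```

Because Z has orthonormal columns, the Frobenius error of the compressed representation equals the Frobenius error of the truncated SVD of the unfolding, which is controlled by the discarded singular values.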

Approximation by reducing the degree
Another approximation which reduces the storage requirements can be done by truncating the polynomial in Ψ_k. The following theorem illustrates the approximation properties of this approach.
Remark 14 The approximation given in Theorem 13 can only be effective if max_{d̃+1≤i≤d} ‖M_i‖/(d̃+1)! is small. In particular, this condition is satisfied if the Taylor coefficients M_i/i! exhibit a fast decay. More precisely, this condition corresponds to the coefficients of the power series expansion of M(λ) decaying to zero.
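A scalar stand-in illustrates Remark 14 (illustrative only, not the matrix-valued setting of the theorem): for M(λ) = e^{−λ}, whose power series coefficients decay to zero, truncating the polynomial degree introduces an error bounded by the discarded tail.

```python
import numpy as np
from math import factorial

d, d_trunc = 20, 8
lam = 0.5
# Taylor coefficients M_i / i! of exp(-lam)
coeffs = [(-1) ** i / factorial(i) for i in range(d + 1)]

full = sum(c * lam**i for i, c in enumerate(coeffs))
trunc = sum(c * lam**i for i, c in enumerate(coeffs[: d_trunc + 1]))

# The degree-reduction error is bounded by the discarded tail of the series.
tail = sum(abs(lam) ** i / factorial(i) for i in range(d_trunc + 1, d + 1))
assert abs(full - trunc) <= tail + 1e-16
assert abs(full - np.exp(-lam)) < 1e-12   # the full polynomial matches exp(-lam)
```

When the coefficients decay like 1/i!, even an aggressive degree reduction loses very little accuracy, which is the situation exploited in Algorithm 4.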

The fast decay of singular values
Finally, as a further justification for our approximation procedure, we now show how fast the singular values decay. The fast decay in the singular values illustrated below justifies the effectiveness of the truncation in Section 5.1.

Lemma 15
Let Z ∈ C^{n×r}, a ∈ C^{d×(k+1)×r} represent the tensor structured function Ψ_{k+1} as in (5) with Y = W = 0, and let H_k ∈ C^{(k+1)×k} be a Hessenberg matrix such that (Ψ_{k+1}, H_k) is a TIAR factorization. Then the tensor a is generated by d vectors, in the sense that each vector a_{i,j,:} for i = 1, . . . , d and j = 1, . . . , k can be expressed as a linear combination of the vectors a_{i,1,:} and a_{1,k,:} for i = 1, . . . , k − d and j = 1, . . . , k.

Algorithm 4: Approximation of TIAR factorization
input: A TIAR factorization (Ψ_{k+1}, H_k) expressed by Y, W ∈ C^{n×p}, a ∈ C^{d×k×r}, b ∈ C^{d×k×p} and C ∈ C^{p×k}
output: An approximated TIAR factorization (Ψ̃_{k+1}, H_k) expressed by Y, W ∈ C^{n×p}, ã ∈ C^{d̃×k×r̃}, b̃ ∈ C^{d̃×k×p} and C̃ ∈ C^{p×k}
1 Compute the SVD decomposition given in (34)

Proof Let Z ∈ C^{n×(r−1)}, a ∈ C^{(d−1)×k×r} represent the tensor structured function Ψ_k and let H_{k−1} ∈ C^{k×(k−1)} be a Hessenberg matrix such that (Ψ_k, H_{k−1}) is a TIAR factorization. If we expand the TIAR factorization (Ψ_k, H_{k−1}) by using Algorithm 1, more precisely by using (14b) and (23b), we obtain the expanded coefficients. We reach the conclusion by induction.
Theorem 16 Under the same hypotheses of Lemma 15, let A be the unfolding of the tensor a in the sense that A = [A_1, . . . , A_d] with A_i := (a_{i,:,:})^T. Then we have the following decay in the singular values, where k ≤ R ≤ d and C is the constant provided by Corollary 11.

Complexity analysis
We presented two different restarting strategies: the structured semi-explicit restart and the implicit restart. They have different performance characteristics and, in general, one is not preferable to the other. The best choice of restarting strategy depends on the features of the problem, and it may be convenient to test both methods on the same problem. We now discuss the general performance in terms of complexity and stability. The complexity discussion is based on the assumption that the cost of applying M(0)^{-1} is negligible in comparison to the other parts.

Complexity of expanding the TIAR factorization
Independently of which restarting strategy is used, the main computational effort of Algorithms 2 and 3 is the expansion of a TIAR factorization described in Algorithm 1. The essential computational effort of Algorithm 1 is the computation of z̃, given in equation (10). This operation has complexity O(drn) for each iteration. In both restarting strategies, r and d are in general not large, due to the way they are automatically selected in Algorithm 4.

Complexity of the restarting strategies
After an implicit restart we obtain a TIAR factorization of length p, whereas after a semi-explicit restart we obtain a TIAR factorization of length p̄. This means that the semi-explicit restart requires a re-computation phase, i.e., after the restart we need to perform an extra p − p̄ steps in order to obtain a TIAR factorization of length p. If p − p̄ is large, i.e., not many Ritz values have converged in comparison to the restarting parameter p, then the re-computation phase is the dominant computational effort of the algorithm. Notice that this is hard to predict, since we do not know in advance how fast the Ritz values will converge.

Stability of the restarting strategies
We will illustrate in Section 7 that the restarting approaches have different stability properties. The semi-explicit restart tends to be efficient if only a few eigenvalues are wanted, i.e., if p is small. This is due to the fact that we impose the structure on the starting function. On the other hand, the implicit restart requires a thick restart in order to be stable in several situations; see the corresponding discussions for the linear case in [20, Chapter 8]. Then p has to be large enough in the sense that, at each restart, some of the p wanted Ritz values have a residual that is not small. This leads to additional computational and memory resources.
If we use the semi-explicit restart, then the computation of z̃ in equation (10) involves the term M_d(Y, S). This quantity can be computed in different ways; in the simulations one must choose between (8) and (9), and the choice influences the stability of the algorithm. In particular, if an eigenvalue of S is close to ∂Ω and M(λ) is not analytic on ∂Ω, the series (9) converges slowly and in practice overflow can occur. In such situations, (8) is preferable. Notice that it is not always possible to use (8), since many problems cannot be formulated as (7) with small q.

Memory requirements of the restarting strategies
From a memory point of view, the essential part of the semi-explicit restart is the storage of the matrices Z and Y, which is O(nm + np). In the implicit restart the essential part is the storage of the matrix Z, which requires O(n r_max), where r_max denotes the maximum value that the variable r takes in the algorithm. The size of r_max is not predictable, since it depends on the SVD approximation introduced in Algorithm 4. Since the variable r is increased in each iteration of Algorithm 1, it holds that r_max ≥ m − p. Therefore, in the optimal case where r_max attains this lower value, the two methods are comparable in terms of memory requirements. Notice that the semi-explicit restart requires less memory and has the advantage that the required memory is problem independent.

Delay eigenvalue problem
In order to illustrate properties of the proposed restart methods and advantages in comparison to other approaches, we carried out numerical simulations for solving the delay eigenvalue problem (DEP). More precisely, we consider the DEP associated with the delay differential equation defined in [16, Section 4.2] with τ = 1. By using a standard second order finite difference discretization, the DEP is formulated as M(λ) = −λ²I + λA_1 + A_0 + e^{−λ}A_2 + I.
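For a DEP of this form, the derivatives M_j = M^(j)(0) used by the infinite Arnoldi method have a closed form. The sketch below uses small random A_0, A_1, A_2 as stand-ins for the discretization matrices (illustrative, not the actual discretization) and verifies the Taylor expansion numerically:

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(4)
n = 6
A0, A1, A2 = (rng.standard_normal((n, n)) for _ in range(3))
I = np.eye(n)

def M(lam):
    # M(lam) = -lam^2 I + lam A1 + A0 + exp(-lam) A2 + I, as above
    return -lam**2 * I + lam * A1 + A0 + np.exp(-lam) * A2 + I

def M_derivative(j):
    # closed-form derivatives at lam = 0
    if j == 0:
        return A0 + A2 + I
    if j == 1:
        return A1 - A2
    if j == 2:
        return -2 * I + A2
    return (-1) ** j * A2    # j >= 3: only the exponential term survives

# The Taylor series sum_j M_j lam^j / j! reproduces M(lam) for small lam.
lam = 0.3
taylor = sum(M_derivative(j) * lam**j / factorial(j) for j in range(25))
assert np.linalg.norm(taylor - M(lam)) < 1e-12
```

Only the exponential term contributes beyond the second derivative, which is why the DEP fits the structured (polynomial plus exponential) representation so naturally.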
We show how the proposed methods perform in terms of m, the maximum length of the TIAR factorization, and p, the number of wanted Ritz values. Tables 1a and 1b show the advantages of our semi-explicit restart approach in comparison to the equivalent method described in [13]. Our new approach is faster in terms of CPU-time and can solve larger problems, due to the memory-efficient representation of the Krylov basis. Tables 2a and 2b show the effectiveness of the approximations introduced in Sections 5.1 and 5.2 in comparison to the corresponding restart procedure without approximations. In particular, in Algorithm 4 we consider a drop tolerance ε = 10^{−14}. Since the DEP is defined by entire functions, the power series coefficients decay to zero and, according to Remark 14, the approximation by reducing the degree is expected to be effective. By approximating the TIAR factorization, the implicit restart requires fewer resources in terms of memory and CPU-time and can solve larger problems.
We now illustrate the differences between the semi-explicit and the implicit restart. More precisely, we show how the parameters m and p influence the convergence of the Ritz values with respect to the number of iterations. The convergence appears to be slower for the semi-explicit restart when p is not sufficiently large; see Figure 2a. The convergence speed of both restarting strategies is comparable for larger m and p; see Figure 3a.
In practice, the performance of the two restarting strategies corresponds to a trade-off between CPU-time and memory. In particular, due to the fact that we impose the structure, the semi-explicit restart does not have a growth in the polynomial part at each restart and therefore requires less memory. On the other hand, for this problem, the semi-explicit restart appears to be slower in terms of CPU-time. See Figure 2.

Waveguide eigenvalue problem
In order to illustrate how the performance depends on the problem properties, we now consider a NEP defined by functions with branch point and branch cut singularities. More precisely, we consider the waveguide eigenvalue problem (WEP) described in [14, Section 5.1] after the Cayley transformation. In this problem, Ω is the unit disk and there are branch point singularities on ∂Ω. Thus, due to the slow convergence of the power series, in the semi-explicit restart we have to use (9) in order to compute M_d(Y, S). This also implies that the approximation by reducing the degree is not expected to be effective, since the power series coefficients of M(λ) do not decay to zero.
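The slow convergence caused by a branch point on ∂Ω can be illustrated with a scalar example (not the WEP itself, just an illustration of the decay rates): the Taylor coefficients of sqrt(1 − λ), which has a branch point at λ = 1, decay only algebraically, while those of the entire function e^{−λ} decay superexponentially.

```python
import numpy as np
from math import factorial
from scipy.special import binom

j = np.arange(1, 40)
# |binomial(1/2, j)| ~ j^{-3/2}: slow, algebraic decay (branch point on the boundary)
sqrt_coeffs = np.abs(binom(0.5, j))
# 1/j!: superexponential decay (entire function)
exp_coeffs = 1.0 / np.array([factorial(int(i)) for i in j])

assert sqrt_coeffs[-1] > 1e-4        # still far from zero at j = 39
assert exp_coeffs[-1] < 1e-40        # essentially zero at j = 39
assert np.all(np.diff(sqrt_coeffs) < 0)   # decreasing, but slowly
```

This is why degree reduction pays off for the entire-function DEP but not for the WEP, and why the series (9) is delicate near ∂Ω.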
In analogy with the previous subsection, we carried out numerical simulations in order to compare the semi-explicit and the implicit restart. With Figures 4a and 5a we illustrate the performance of the two restarting approaches with respect to the choice of the parameters m and p. When p is sufficiently large, the residual in the semi-explicit restart appears to stagnate after the first restart, whereas it decreases in a regular way in the implicit restart; see Figure 4a. On the other hand, when p is small, the behavior of the residual appears to be reversed; see Figure 5a. This is due to the fact that the semi-explicit restart imposes the structure on p vectors, which is not beneficial when they do not contain eigenvector approximations. It is known that this specific problem has two eigenvalues. Therefore, in order to reduce the CPU-time and the memory resources, the number of wanted Ritz values p should be selected small. As a consequence of the above discussion, we conclude that the semi-explicit restart is the best restarting strategy for this problem.

Concluding remarks and outlook
In this work we have derived an extension of the TIAR algorithm and two restarting strategies. Both restarting strategies are based on approximating the TIAR factorization. In other works on the IAR method it has been proven that the basis matrix contains a structure that allows further exploitation, e.g., for NEPs with low-rank structure in the coefficients [34]. An investigation of the combination of the approximations of the TIAR factorization with such structures of the NEP seems possible but deserves further attention.
Although the framework of TIAR and restarted TIAR is general, a specialization of the methods to the specific NEP is required in order to efficiently solve the problem. More precisely, an efficient procedure for computing (10) is required. This is a nontrivial task for many applications and requires problem specific research.

Table 3 Implicit and semi-explicit restart for the waveguide problem.