Asymptotic convergence of spectral inverse iterations for stochastic eigenvalue problems

We consider and analyze a spectral inverse iteration algorithm and its subspace iteration variant for computing eigenpairs of an elliptic operator with random coefficients. With these iterative algorithms the solution is sought from a finite-dimensional space formed as the tensor product of an approximation space for the underlying stochastic function space and an approximation space for the underlying spatial function space. Sparse polynomial approximation is employed to obtain the former, while classical finite elements are employed to obtain the latter. An error analysis is presented for the asymptotic convergence of the spectral inverse iteration to the smallest eigenvalue and the associated eigenvector of the problem. A series of detailed numerical experiments supports the conclusions of this analysis. Numerical experiments are also presented for the spectral subspace iteration, and convergence of the algorithm is observed in an example case where the eigenvalues cross within the parameter space. The outputs of both algorithms are verified by comparison with solutions obtained by a sparse stochastic collocation method.


Introduction
In recent years the numerical solution of stochastic partial differential equations (sPDEs) has attracted a lot of attention and become a well-established field. The study of stochastic eigenvalue problems (sEVPs) and their numerical solution, however, is still in its infancy. It is natural that, once the source problem is well understood, more effort is put into addressing the eigenvalue problem.
A few different algorithms have recently been suggested for computing approximate eigenpairs of sEVPs. As with sPDEs, the solution methods are typically divided into intrusive and non-intrusive ones. A benchmark for non-intrusive methods is the sparse collocation algorithm suggested and thoroughly analyzed by Andreev and Schwab in [1]. An attempt towards a Galerkin-based (intrusive) method was made by Verhoosel et al. in [20], though this method omits uniform normalization of the eigenmodes. Very recently Meidani and Ghanem proposed a spectral power iteration, in which the eigenmodes are normalized using a quadrature rule over the parameter space [16]. The algorithm has been further developed and studied by Sousedík and Elman in [19]. However, neither of the papers present a comprehensive error analysis for the method.
Inspired by the original method of Meidani and Ghanem we have suggested a purely Galerkin-based spectral inverse iteration, in which normalization of the eigenmodes is achieved via the solution of a simple nonlinear system [10]. This method, and its generalization to a spectral subspace iteration, is the focus of the current paper. Although the algorithms in [16] and [19] differ from ours in the way normalization is performed, the basic principles are the same, and hence our results on convergence should apply to these methods as well.
In this work we consider computing eigenpairs of an elliptic operator with random coefficients. We assume a physical domain D ⊂ R d and, in order to capture the random dimension of the system, a parameter domain Γ ⊂ R ∞ with an associated measure ν. One may think of a parametrization that arises from Karhunen–Loève representations of the random coefficients in the system, for instance. Discretization in space is achieved by standard FEM and associated with a discretization parameter h, whereas discretization in the random dimension is achieved using collections of certain multivariate polynomials. These collections are represented by multi-index sets A ε of increasing cardinality #A ε as ε → 0.
In the current paper we present a step-by-step analysis that leads to the main result: the asymptotic convergence of the spectral inverse iteration towards the exact eigenpair (µ, u). In this context the eigenpair of interest is the ground state, i.e., the smallest eigenvalue and the associated eigenfunction of the system. However, analogously to the classical inverse iteration, the computation of other eigenpairs may be possible by using a suitably chosen shift parameter λ ∈ R. We show that under sufficient assumptions the iterates (µ k , u k ) of the algorithm for k = 1, 2, . . . obey the estimates (1) and (2), where l ∈ N is the degree of the polynomials used in the spatial discretization and r > 0 depends on the properties of the region to which the solution, as a function of the parameter vector, admits a complex-analytic extension. The quantity λ 1/2 reflects the gap between the two smallest eigenvalues of the system and should be less than one. The first term in the formulas (1) and (2) is justified by the standard theory for Galerkin approximation of eigenvalue problems, a simple consequence of which we have recapped in Theorem 1. The second term can be deduced from Theorem 2, which bounds the Galerkin approximation errors by residuals of certain polynomial approximations of the solution. Using best P -term polynomial approximations, we see that these residuals are ultimately expected to decay at an algebraic rate r > 0, see [15] and [6]. Finally, the third term follows from Theorem 3, which states that asymptotically the iterates of the spectral inverse iteration converge to a fixed point in a geometric fashion.
Here the analogy to classical inverse iteration is evident. Each of these three important steps that comprise the main result is separately verified through detailed numerical examples.
A variant of our algorithm for spectral subspace iteration is also presented. No analysis of this algorithm is given, but the numerical experiments support the conclusion that it converges towards the exact subspace of interest, and that the rate of convergence is analogous to what we would expect from classical theory. This is despite the fact that the individual eigenmodes, as defined by the pointwise order of magnitude of the eigenvalues, are not continuous functions over the parameter space due to an eigenvalue crossing. To the authors' knowledge such a scenario has not yet been considered in the scientific literature.
The rest of the paper is organized as follows. Our model problem and its fundamental properties are assessed in Sections 2 and 3. A detailed review of the discretization of the spatial and stochastic approximation spaces is given in Section 4. Analysis of the spectral inverse iteration, supported by thorough numerical experiments, is given in Section 5. Finally, the algorithm of spectral subspace iteration and numerical experiments of its convergence are presented in Section 6.

Problem statement
In this work we consider eigenvalue problems of elliptic operators with random coefficients. It is assumed that the random coefficients admit a parametrization with respect to countably many independent and bounded random variables. As a model problem we consider the eigenvalue problem of a diffusion operator with a random diffusion coefficient. It will be evident, however, that our methods and analysis in fact cover a much broader class of problems.

Model problem
Let (Ω, F, P) be a probability space, Ω being the set of outcomes, F a σ-algebra of events, and P a probability measure defined on Ω. We denote by L 2 P (Ω) the space of square integrable random variables on Ω and define for v ∈ L 2 P (Ω) the expected value

E[v] = ∫ Ω v(ω) dP(ω).

Let D ⊂ R d be a bounded convex domain with a sufficiently smooth boundary and assume a diffusion coefficient a : D × Ω → R that is a random field on D. The diffusion coefficient is assumed to be strictly uniformly positive and uniformly bounded, i.e., for some positive constants a min and a max it holds that

0 < a min ≤ a(x, ω) ≤ a max for a.e. x ∈ D (3)

P-almost surely. We now formulate the model problem as: find functions µ : Ω → R and u : D × Ω → R such that the equations

−∇ · (a(·, ω)∇u(·, ω)) = µ(ω)u(·, ω) in D,
u(·, ω) = 0 on ∂D (4)

hold P-almost surely. In order to make the solutions physically meaningful we also impose the normalization condition ||u(·, ω)|| L 2 (D) = 1, which should also hold P-almost surely.

Parametrization of the random input
We make the assumption that the input random field admits a representation of the form

a(x, ω) = a 0 (x) + Σ ∞ m=1 a m (x)y m (ω), (5)

where {y m } ∞ m=1 are mutually independent and bounded random variables. For simplicity, we assume here that each y m is uniformly distributed. Thus, after a possible rescaling, the dependence on ω is parametrized by the vector y = (y 1 , y 2 , . . .) ∈ Γ := [−1, 1] ∞ . We denote by ν the underlying uniform product probability measure and by L 2 ν (Γ) the corresponding weighted L 2 -space.
The usual convention is that the parametrization (5) results from a Karhunen-Loève expansion, which gives a(x, ω) as a linear combination of the eigenfunctions of the associated covariance operator. The distinguishing feature of the Karhunen-Loève expansion compared to other linear expansions is that it minimizes the mean square truncation error [8].
It is easy to see that a 0 ∈ L ∞ (D) and

Σ ∞ m=1 ||a m || L ∞ (D) < ess inf x∈D a 0 (x) (6)

are sufficient conditions to ensure the assumption (3). In order to ensure analyticity of the eigenpair (µ, u) with respect to the parameter vector y = (y 1 , y 2 , . . .) we assume that

Σ ∞ m=1 ||a m || p 0 L ∞ (D) < ∞ (7)

for some p 0 ∈ (0, 1), and that for a certain level of smoothness s ∈ N we have a 0 ∈ W s,∞ (D) and

Σ ∞ m=1 ||a m || p s W s,∞ (D) < ∞ (8)

for some p s ∈ (0, 1). In particular, we consider the interesting case of algebraic decay of the coefficients in the series (5).
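To make the decay assumptions concrete, the following sketch evaluates a coefficient of the form (5) with algebraically decaying amplitudes and checks the uniform positivity (3) over a sample of parameter vectors. The profiles a m (x) = m^{-σ} sin(mπx) and the exponent σ = 3 are illustrative choices, not the data of our numerical experiments.

```python
import numpy as np

def coeff(x, y, sigma=3.0):
    """Illustrative coefficient a(x, y) = 2 + sum_m m^{-sigma} sin(m*pi*x) y_m,
    a stand-in for (5); sigma controls the algebraic decay ||a_m|| ~ m^{-sigma}."""
    a = 2.0 * np.ones_like(x)
    for m, ym in enumerate(y, start=1):
        a += m**(-sigma) * np.sin(m * np.pi * x) * ym
    return a

x = np.linspace(0.0, 1.0, 201)
rng = np.random.default_rng(0)
# Since sum_m m^{-3} < 1.21 < 2, uniform positivity (3) holds with
# a_min >= 2 - 1.21; check it over a sample of parameter vectors y in [-1,1]^20.
amin = min(coeff(x, rng.uniform(-1.0, 1.0, 20)).min() for _ in range(100))
print(amin)
```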

Parametric eigenvalue problem and its variational formulation
With the diffusion coefficient given by (5), the model problem (4) becomes an eigenvalue problem of the operator

A(y)v := −∇ · (a(·, y)∇v).

Thus, we obtain the parametric eigenvalue problem: find µ : Γ → R and u : Γ → H 1 0 (D) such that

A(y)u(y) = µ(y)u(y), y ∈ Γ. (9)

We denote by σ(A(y)) the set of eigenvalues of A(y) for y ∈ Γ. For any fixed y ∈ Γ the problem (9) reduces to a single deterministic eigenvalue problem. In variational form this is given by: find µ(y) ∈ R and u(·, y) ∈ H 1 0 (D) such that

b(y; u(·, y), v) = µ(y)(u(·, y), v) L 2 (D) for all v ∈ H 1 0 (D), (10)

where b(y; u, v) := ∫ D a(x, y)∇u(x) · ∇v(x) dx. Under assumption (6) the bilinear form b(y; u, v) is continuous and elliptic. Thus, as in [10] and [1], we deduce that the problem (10) admits a countable number of real eigenvalues and corresponding eigenfunctions that form an orthogonal basis of L 2 (D).

Analyticity of eigenmodes
A key issue in the analysis of parametric eigenvalue problems is that eigenvalues may cross within the parameter space. Here we first disregard this possibility and recap the main results from [1] for simple eigenvalues that are sufficiently well separated from the rest of the spectrum. After this we briefly comment on the case of possibly clustered eigenvalues and associated invariant subspaces.

Eigenmodes of simple eigenvalues
Let us first restrict our analysis to eigenvalues that are strictly nondegenerate. We call an eigenvalue µ of problem (9) strictly nondegenerate if (i) µ(y) is simple as an eigenvalue of A(y) for all y ∈ Γ and (ii) the minimum spectral gap inf y∈Γ dist(µ(y), σ(A(y))\{µ(y)}) is positive.
In the case of strictly nondegenerate eigenvalues, the eigenpair (µ, u) is in fact analytic with respect to the parameter vector y.
Proposition 1. Consider a strictly nondegenerate eigenvalue µ of the problem (9) and the corresponding eigenfunction u normalized so that ||u(y)|| L 2 (D) = 1 for all y ∈ Γ. For s ∈ N assume that a 0 ∈ W s,∞ (D) and that the assumptions (6)–(8) hold for some p 0 , p s ∈ (0, 1). Then there exist C 1 > 0 independent of m, C 2 > 0 arbitrary, and a sequence τ = (τ 1 , τ 2 , . . .) of positive radii such that the eigenpair (µ, u) can be extended to a jointly complex-analytic function on the region E(τ ) with values in C × H 1 0 (D).
Proof. This is analogous to Corollary 2 of Theorem 4 in [1].
It is well known that for elliptic operators on a connected domain D the smallest eigenvalue is simple [11]. Thus, Proposition 1 may at least be applied to the smallest eigenvalue of problem (9).

Finite dimensional subspaces
Let us now consider invariant subspaces for which the corresponding cluster of eigenvalues is sufficiently well separated from the rest of the spectrum. Assume a cluster M(y) = {µ q (y)} Q q=1 of eigenvalues of (9) so that (i) each µ q (y) is of finite multiplicity as an eigenvalue of A(y) for all y ∈ Γ and (ii) the minimum spectral gap inf y∈Γ dist(M(y), σ(A(y))\M(y)) is positive.
It is in general difficult to consider the analyticity of each of the eigenmodes separately. However, we might still expect the associated invariant subspace to be analytic as a function of y. More precisely, let {u q (y)} Q q=1 be a maximal collection of linearly independent eigenfunctions corresponding to the eigenvalues M(y) for all y ∈ Γ. It is not completely unreasonable to assume that span{u q (y)} Q q=1 is analytic, in a suitable sense, as a function of the parameter vector y. This assumption is the basis of our Algorithm of spectral subspace iteration given in Section 6. For more information on the regularity of perturbed eigenvalues see [13] and [14].

Stochastic finite elements
Proposition 1, under sufficient assumptions, guarantees the existence of an analytic eigenpair for problem (9). It now makes sense to look for the eigenvalue in the space L 2 ν (Γ) and the eigenfunction in the space L 2 ν (Γ) ⊗ H 1 0 (D). The space H 1 0 (D) may be discretized by means of the traditional finite element method. For the discretization of L 2 ν (Γ), we follow the usual convention in stochastic Galerkin methods and construct a basis of orthogonal polynomials of the input random variables. Orthogonal polynomials for various probability distributions exist and the use of these as the approximation basis has been observed to yield optimal rates of convergence [18], [21]. Here we consider uniformly distributed random variables which lead to the choice of tensorized Legendre polynomials.

Galerkin discretization in space
Let V h ⊂ H 1 0 (D) denote a finite dimensional approximation space associated with the discretization parameter h > 0. We assume the approximation estimates (11) and (12) that are standard for piecewise polynomials of degree l. Fix y ∈ Γ and let (µ h , u h ) be the solution to the variational equation

b(y; u h (·, y), v) = µ h (y)(u h (·, y), v) L 2 (D) for all v ∈ V h , (13)

where b(y; ·, ·) is as in (10). Then we have the following bounds for the discretization error.
Proof. This follows from the theory of Galerkin approximation for variational eigenvalue problems. See Section 8 in [3] and Section 9 in [7].
Let {ϕ i } i∈J be a basis of V h with N = #J. Then (13) can be written as a parametric matrix eigenvalue problem: find µ h : Γ → R and u h : Γ → R N such that

K(y)u h (y) = µ h (y)Mu h (y), y ∈ Γ, (16)

where u h (x, y) = Σ i∈J ϕ i (x)(u h ) i (y). The coefficient matrices are given by

[K(y)] ij = b(y; ϕ j , ϕ i ) and [M] ij = (ϕ j , ϕ i ) L 2 (D) .

For each fixed y ∈ Γ the problem (16) reduces to a positive-definite generalized matrix eigenvalue problem.
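For a fixed y ∈ Γ the problem (16) is a standard generalized eigenvalue problem and can be solved with off-the-shelf routines. The following sketch assembles the P1 finite element matrices for a one-dimensional analogue of the model problem (the coefficient is an illustrative stand-in, not our benchmark data) and solves for the smallest eigenpair; note that `scipy.linalg.eigh` returns eigenvectors normalized so that u^T M u = 1, matching the normalization condition.

```python
import numpy as np
from scipy.linalg import eigh

def assemble(y, n=100, sigma=3.0):
    """P1 finite element matrices for -(a(x,y) u')' = mu u on (0,1) with
    u(0) = u(1) = 0; a(x, y) = 2 + sum_m m^{-sigma} sin(m*pi*x) y_m is an
    illustrative stand-in for the expansion (5)."""
    h = 1.0 / (n + 1)
    mid = (np.arange(n + 1) + 0.5) * h          # element midpoints
    a = 2.0 + sum(m**(-sigma) * np.sin(m * np.pi * mid) * ym
                  for m, ym in enumerate(y, start=1))
    K = np.zeros((n, n)); M = np.zeros((n, n))
    for e in range(n + 1):                      # element e spans nodes e, e+1
        ke = a[e] / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
        me = h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
        dofs = (e - 1, e)                       # interior dof indices
        for li, gi in enumerate(dofs):
            for lj, gj in enumerate(dofs):
                if 0 <= gi < n and 0 <= gj < n:
                    K[gi, gj] += ke[li, lj]
                    M[gi, gj] += me[li, lj]
    return K, M

K, M = assemble(np.zeros(5))                    # y = 0: constant a = 2
mu, U = eigh(K, M)                              # generalized EVP (16) at fixed y
print(mu[0])                                    # close to 2*pi^2 = 19.739...
```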

Legendre chaos
Recall that y = (y 1 , y 2 , . . .) ∈ Γ is a vector of mutually independent uniform random variables and ν is the underlying uniform product probability measure. Now

E[vw] = ∫ Γ v(y)w(y) dν(y)

defines an inner product on L 2 ν (Γ) whenever the integral is finite. We define (N ∞ 0 ) c to be the set of all multi-indices with finite support, i.e.,

(N ∞ 0 ) c := {α ∈ N ∞ 0 | α m ≠ 0 for only finitely many m ∈ N},

and associate to each α ∈ (N ∞ 0 ) c the multivariate Legendre polynomial

Λ α (y) := Π ∞ m=1 L α m (y m ),

where L p (x) denotes the univariate Legendre polynomial of degree p. We will assume the normalization E[Λ 2 α ] = 1 for all α ∈ (N ∞ 0 ) c . The system {Λ α } α∈(N ∞ 0 ) c forms an orthonormal basis of L 2 ν (Γ), and every v ∈ L 2 ν (Γ) admits an expansion

v(y) = Σ α∈(N ∞ 0 ) c v α Λ α (y)

with convergence in L 2 ν (Γ). The expansion coefficients are given by v α = E[vΛ α ]. Setting c mαβ := E[y m Λ α Λ β ] and c αβγ := E[Λ α Λ β Λ γ ], we now obtain moment matrices G (m) ∈ R P ×P for m ∈ N 0 and G (α) ∈ R P ×P for α ∈ A by setting [G (m) ] αβ = c mαβ and [G (α) ] βγ = c αβγ .

Sparse polynomial approximation in the parameter domain
We fix a finite set A ⊂ (N ∞ 0 ) c and employ the approximation space W A = span{Λ α } α∈A ⊂ L 2 ν (Γ). We let P A and R A denote the underlying projection and residual operators, so that v ∈ L 2 ν (Γ) is approximated by

P A v = Σ α∈A v α Λ α

and the approximation error is given by

||v − P A v|| 2 L 2 ν (Γ) = ||R A v|| 2 L 2 ν (Γ) = Σ α∉A v 2 α ,

from which we conclude that the choice of the multi-index set A ultimately determines the accuracy of our expansion.
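In the univariate case the projection P A and the residual norm can be computed by Gauss–Legendre quadrature. The sketch below, for an illustrative analytic test function, computes normalized Legendre coefficients and verifies that the truncation error, which by Parseval's identity equals the l2-norm of the discarded coefficients, decays geometrically.

```python
import numpy as np
from numpy.polynomial import legendre as L

def pc_coeffs(f, P, quad=64):
    """Coefficients v_p = E[f * Lam_p] in the Legendre basis normalized so
    that E[Lam_p^2] = 1, with E taken w.r.t. the uniform measure on [-1, 1]."""
    x, w = L.leggauss(quad)
    w = w / 2.0                                  # E[g] = (1/2) int_{-1}^{1} g dy
    return np.array([np.sqrt(2*p + 1)
                     * np.sum(w * L.legval(x, [0.0]*p + [1.0]) * f(x))
                     for p in range(P)])

f = lambda y: 1.0 / (2.2 - y)                    # analytic on [-1, 1] (pole at 2.2)
c = pc_coeffs(f, 20)
# By Parseval, the truncation error is the l2 tail of the coefficients:
errs = [np.sqrt(np.sum(c[P:]**2)) for P in range(1, 10)]
print(errs[0], errs[-1])                         # geometric decay
```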

Remark 1.
The orthogonality of the basis {Λ α } α∈A yields easy formulas for the mean and variance:

E[P A v] = v 0 and Var[P A v] = Σ α∈A\{0} v 2 α .

We proceed as in [15] and use best P -term approximations to prove convergence of the approximation error.
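The formulas of Remark 1 are easy to verify numerically; in the following univariate sketch (with an arbitrary smooth test function) the mean and variance recovered from the Legendre coefficients agree with direct quadrature.

```python
import numpy as np
from numpy.polynomial import legendre as L

x, w = L.leggauss(64)
w = w / 2.0                                      # quadrature for E[.] on [-1, 1]
f = lambda y: np.exp(y)                          # arbitrary smooth test function

# Legendre coefficients c_p = E[f * Lam_p] with the normalization E[Lam_p^2] = 1
c = np.array([np.sqrt(2*p + 1)
              * np.sum(w * L.legval(x, [0.0]*p + [1.0]) * f(x))
              for p in range(15)])

mean_pc, var_pc = c[0], np.sum(c[1:]**2)         # formulas of Remark 1
mean_q = np.sum(w * f(x))                        # direct quadrature
var_q = np.sum(w * (f(x) - mean_q)**2)
print(mean_pc - mean_q, var_pc - var_q)          # both differences are tiny
```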

Proposition 2. Let H be a Hilbert space. Assume that v : Γ → H admits a complex-analytic extension in the region E(τ ). Then for each ε > 0, with P = #A ε , we have an algebraic bound for the residual as ε → 0.

Proof. This is essentially Proposition 3.1 in [15], the proof of which can be found in [5]. Here we recapitulate the main ideas. Define a truncation v M of v; by Lemmas 4.3 and 4.4 in [2], we obtain a bound on the truncation error for any M ≥ C(ε)P r/(ε−1) . On the other hand, the proof of Lemma A.3 in [5] now applies to v M , and we obtain a bound on the Legendre coefficients (note the different scaling as compared to [5]). We set ϑ := (r + 1/2) −1 so that 0 < ϑ < 2. Using the geometric series formula we compute a bound for the tail of the coefficient sequence, assuming the coefficients are ordered by decreasing H-norm and that A ε represents a selection of the P largest coefficients. Using (29) we may now bound the residual, and the claim follows from combining (26) and (30).
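The idea of best P -term approximation can be illustrated in two variables: compute the tensorized Legendre coefficients of an anisotropic analytic function, keep the P largest ones, and observe the decay of the resulting error. The test function below is an illustrative stand-in, not the solution of our model problem.

```python
import numpy as np
from numpy.polynomial import legendre as L

x, w = L.leggauss(32)
w = w / 2.0
# Anisotropic analytic test function: the second variable matters less,
# mimicking the decaying influence of the variables y_m in (5).
f = lambda y1, y2: 1.0 / (2.0 + 0.9*y1 + 0.3*y2)

Y1, Y2 = np.meshgrid(x, x, indexing="ij")
W = np.outer(w, w)
deg = 12
C = np.zeros((deg, deg))                         # tensor Legendre coefficients
for p in range(deg):
    Lp = np.sqrt(2*p + 1) * L.legval(x, [0.0]*p + [1.0])
    for q in range(deg):
        Lq = np.sqrt(2*q + 1) * L.legval(x, [0.0]*q + [1.0])
        C[p, q] = np.sum(W * np.outer(Lp, Lq) * f(Y1, Y2))

# Best P-term approximation: keep the P largest coefficients; by Parseval
# the error is the l2-norm of the discarded ones.
mags = np.sort(np.abs(C).ravel())[::-1]
err = lambda P: np.sqrt(np.sum(mags[P:]**2))
print(err(5), err(10), err(20))                  # decreasing
```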

Stochastic Galerkin approximation of vectors and matrices
We now generalize the concept of sparse polynomial approximation to vector and matrix valued functions. Assume that the dimensions of the approximation spaces V h and W A are N and P respectively. We denote by W N A and W N ×N A the spaces of vector and matrix valued functions whose components belong to W A . Moreover, we associate any v ∈ W A with the array of coefficients v̂ := {v α } α∈A ∈ R P and similarly any v ∈ W N A with the array of coefficients v̂ := {v αi } α∈A,i∈J ∈ R P N .
We denote by ·, · R N M the inner product on R N induced by the positive definite matrix M and by || · || R N M the associated norm. Furthermore, we let || · || R P denote the standard norm on R P and || · || R P ⊗R N M denote the tensorized norm on R P N given by

||v̂|| 2 R P ⊗R N M := Σ α∈A ||v α || 2 R N M .

Let us consider the linear system defined by a parametric matrix A ∈ W N ×N A . The Galerkin approximation of this system is: given f ∈ W N A , find v ∈ W N A such that P A (Av) = f . (32) Using the notation introduced earlier we may write (32) as the fully discrete system: given f̂ ∈ R P N , find v̂ ∈ R P N such that (33) holds, with the coefficient matrix assembled from the moment matrices G (m) and the coefficient matrices of A. The existence of a solution, i.e. the invertibility of the coefficient matrix, is guaranteed by the following Lemma.
Lemma 1. Every eigenvalue χ of the coefficient matrix of the fully discrete system satisfies

χ ≥ inf y∈Γ λ(y), (35)

where λ(y) is the smallest eigenvalue of A(y) for each y ∈ Γ.
Proof. Observe that the system (34) is equivalent to the variational form (36). The left hand side of (36) is a symmetric and elliptic bilinear form, so the existence of a unique solution is guaranteed by the Lax–Milgram lemma. Moreover, the associated coefficient matrix in (33) is positive definite. Now let λ̃ ∈ R be such that λ̃ < inf y∈Γ λ(y). The matrix A(y) − λ̃I N , where I N is an identity matrix, is positive definite for all y ∈ Γ, and hence the eigenvalues of the associated coefficient matrix are positive. Let χ be an eigenvalue of (33), i.e., there exists w ∈ W N A such that (37) holds. Then we deduce that χ > λ̃. Equation (35) now follows by taking the limit λ̃ → inf y∈Γ λ(y).
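The bound (35) can be observed numerically. The sketch below assembles the Galerkin coefficient matrix I P ⊗ K 0 + G ⊗ K 1 for a small affine parametric family A(y) = K 0 + yK 1 with a univariate normalized Legendre basis (an illustrative example, not our model matrices) and compares its smallest eigenvalue with inf y λ(y); since λ min (A(y)) is concave in y, the infimum is attained at an endpoint of [−1, 1].

```python
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(1)
N, P = 6, 8
B0 = rng.standard_normal((N, N))
K0 = B0 @ B0.T + N * np.eye(N)                   # SPD "mean" matrix
B1 = rng.standard_normal((N, N))
K1 = 0.5 * (B1 + B1.T)
K1 *= 0.5 / np.linalg.norm(K1, 2)                # ||K1|| = 1/2, so A(y) stays SPD

x, w = L.leggauss(32)
w = w / 2.0
Lam = np.array([np.sqrt(2*p + 1) * L.legval(x, [0.0]*p + [1.0])
                for p in range(P)])              # normalized Legendre values
G = (Lam * (w * x)) @ Lam.T                      # G_pq = E[y Lam_p Lam_q]

# Galerkin coefficient matrix of the fully discrete system for A(y) = K0 + y*K1
A_hat = np.kron(np.eye(P), K0) + np.kron(G, K1)
chi_min = np.linalg.eigvalsh(A_hat).min()

# lambda_min(A(y)) is concave in y, so its infimum over [-1, 1] is attained
# at an endpoint; a grid containing the endpoints therefore finds it.
grid = np.linspace(-1.0, 1.0, 41)
lam_inf = min(np.linalg.eigvalsh(K0 + y * K1).min() for y in grid)
print(chi_min, lam_inf)
```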

Spectral inverse iteration
In this section we introduce the algorithm of spectral inverse iteration, analyze its asymptotic convergence, and present numerical examples to support our analysis. The spectral inverse iteration, see [10], can be considered as an extension of the classical inverse iteration to the case of parametric matrix eigenvalue problems. In the spectral version each of the elementary operations is computed in the Galerkin sense via projection onto the sparse polynomial basis of W A . Optimal convergence of the algorithm requires that the eigenmode of interest, i.e., the one associated with the smallest eigenvalue of the parametric matrix, is strictly nondegenerate.

Algorithm description
Fix a finite set of multi-indices A ⊂ (N ∞ 0 ) c and let P = #A. The spectral inverse iteration for the system (16) is defined in Algorithm 1. One should note that, if the projections in the algorithm were exact, the algorithm would correspond to performing classical inverse iteration pointwise over the parameter space Γ. We expect the algorithm to converge to an approximation of the eigenvector corresponding to the smallest eigenvalue of the system.

Algorithm 1 (Spectral inverse iteration). Fix tol > 0 and let u (0) ∈ W N A be an initial guess for the eigenvector. For k = 1, 2, . . . do
(1) Solve v ∈ W N A from the linear equation (39).
(2) Solve s ∈ W A from the nonlinear equation (40).
(3) Solve u (k) ∈ W N A from the linear equation (41).
(4) Stop if ||u (k) − u (k−1) || < tol and return u (k) as the approximate eigenvector.
Once the approximate eigenvector u (k) ∈ W N A has been computed, the corresponding eigenvalue µ (k) ∈ W A may be evaluated from the Rayleigh quotient, as in [10], or alternatively from the linear system (42). Lemma 1 guarantees the invertibility of the linear system (39) and, assuming that s(y) > 0 for all y ∈ Γ, the invertibility of the systems (41) and (42). The nonlinear system (40) may be solved using, for instance, Newton's method.
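If the projections were exact, one step of Algorithm 1 would reduce, for each fixed y, to the classical inverse iteration with M-norm normalization. The following sketch of that pointwise iteration (on an illustrative matrix pair with known spectrum, not our FEM matrices) shows the steps that the spectral version performs in the Galerkin sense.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def inverse_iteration(K, M, u0, iters=30):
    """Classical inverse iteration for K u = mu M u with |u|_M = 1;
    Algorithm 1 performs these same steps in the Galerkin sense."""
    lu = lu_factor(K)
    u = u0 / np.sqrt(u0 @ M @ u0)
    for _ in range(iters):
        v = lu_solve(lu, M @ u)          # step (1): solve K v = M u
        s = np.sqrt(v @ M @ v)           # step (2): the normalizing factor
        u = v / s                        # step (3): normalize in the M-norm
    mu = (u @ K @ u) / (u @ M @ u)       # Rayleigh quotient for the eigenvalue
    return mu, u

rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((8, 8)))
K = Q @ np.diag(np.arange(1.0, 9.0)) @ Q.T       # known spectrum {1, ..., 8}
M = np.eye(8)
mu, u = inverse_iteration(K, M, rng.standard_normal(8))
print(mu)                                        # converges to the smallest eigenvalue
```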

Remark 3.
For the computation of non-extremal eigenmodes, one may proceed as in [10] and replace K(y) in (39) with (K(y) − λM), where λ ∈ R is a suitably chosen parameter. In this case we expect the algorithm to converge to an eigenpair for which the eigenvalue is close to λ. Note, however, that now the existence of a unique solution to (39) is not necessarily guaranteed by Lemma 1.
We now write Algorithm 1 in a computationally more convenient form. The projections in the algorithm can be computed explicitly using the notation introduced in Section 4. It is easy to verify that equations (39)–(41) become, on the level of the coefficient arrays, systems involving sums over β ∈ A and γ ∈ A weighted by the moments c mαβ and c αβγ . Given ŝ = {s α } α∈A ∈ R P we define the matrices ∆(ŝ) and S, where M (A) := max{m ∈ N | ∃α ∈ A s.t. α m ≠ 0} and I P ∈ R P ×P and I N ∈ R N ×N are identity matrices. We also define the nonlinear function F : R P × R P N → R P and let F s : R P × R P → R P and F v : R P N × R P N → R P denote the associated bilinear forms given by F s α (ŝ, t̂) := ŝ · G (α) t̂ and F v α (v̂, ŵ) := v̂ · (G (α) ⊗ M)ŵ. Now Algorithm 1 may be rewritten in the following form.
Algorithm 2 (Spectral inverse iteration in tensor form). Fix tol > 0 and let û (0) ∈ R P N be an initial guess for the eigenvector. For k = 1, 2, . . . do
(1) Solve v̂ ∈ R P N from the linear system (46).
(2) Solve ŝ = {s α } α∈A ∈ R P from the nonlinear system (47).
(3) Solve û (k) = {u (k) αi } α∈A,i∈J ∈ R P N from the linear system (48).
(4) Stop if ||û (k) − û (k−1) || < tol and return û (k) as the approximate eigenvector.
The approximate eigenvalue µ̂ (k) ∈ R P may now be solved from the equation (49). In general it is not guaranteed that the Newton iteration for the system (47) converges to a solution. The following proposition gives some insight into the conditions under which this is the case.

Proposition 3. Let ŝ (0) = {s (0) α } α∈A ∈ R P be given by s (0) α = δ α0 ||v̂|| R P ⊗R N M . Assume that there is a norm || · || * on R P and r > 0 such that the conditions of the Newton–Kantorovich theorem are satisfied on the ball B(ŝ (0) , r); then the Newton method for F (·, v̂) = 0 with the initial guess ŝ (0) converges to a unique solution in B(ŝ (0) , r).
Proof. This is a direct application of the Newton–Kantorovich theorem to the equation F (·, v̂) = 0, see [12] (Theorem 6, 1.XVIII). Note that the first derivative (Jacobian) of F (·, v̂) at ŝ (0) is 2||v̂|| R P ⊗R N M I P and the second derivative is represented by the tensor of coefficients 2c αβγ .
From Proposition 3 we see that convergence of the Newton iteration is a consequence of the boundedness of the function F s , which in turn is ultimately determined by the structure of the multi-index set A.
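A minimal univariate sketch of the Newton solve for a system of the form (47): the moment tensor c αβγ is computed by quadrature, and the Jacobian of the quadratic map is 2∆(ŝ). The target coefficients ŝ* below are an arbitrary illustrative choice representing a positive s(y).

```python
import numpy as np
from numpy.polynomial import legendre as L

P = 6
x, w = L.leggauss(32)
w = w / 2.0
Lam = np.array([np.sqrt(2*p + 1) * L.legval(x, [0.0]*p + [1.0])
                for p in range(P)])
c = np.einsum('ai,bi,gi,i->abg', Lam, Lam, Lam, w)   # c_abg = E[Lam_a Lam_b Lam_g]

# An arbitrary positive s(y) given by its coefficients; the right-hand side
# b is then the coefficient array of P_A(s^2), as in step (2) of the iteration.
s_star = np.array([2.0, 0.4, 0.2, 0.1, 0.05, 0.02])
b = np.einsum('abg,b,g->a', c, s_star, s_star)

# Newton iteration for F(s) = P_A(s^2) - b = 0; the Jacobian is 2*Delta(s).
s = np.zeros(P)
s[0] = np.sqrt(b[0])                                 # mean-only initial guess
for _ in range(30):
    F = np.einsum('abg,b,g->a', c, s, s) - b
    J = 2.0 * np.einsum('abg,g->ab', c, s)
    s -= np.linalg.solve(J, F)
print(np.abs(s - s_star).max())                      # recovers s_star
```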

Analysis of convergence
Due to a lack of general mathematical theory for multi-parametric eigenvalue problems we rely on a slightly unconventional approach in analyzing our algorithm. First of all, we restrict ourselves to asymptotic analysis since the underlying problem is nonlinear and thus hard to analyze globally. Second, we will analyze the solutions pointwise in the parameter space and deduce convergence theorems from classical eigenvalue perturbation bounds.

Characterization of the dominant fixed point
The classical inverse iteration converges to the dominant eigenpair of the inverse matrix. In a somewhat similar fashion the spectral inverse iteration tends to converge to a certain fixed point, which we shall refer to as the dominant fixed point. Here we will establish a connection between this dominant fixed point of the spectral inverse iteration and the dominant eigenpair of the inverse of the parametric matrix under consideration. This connection is obtained by considering the fixed point as a pointwise perturbation of the eigenvalue problem of the parametric matrix.
If u A ∈ W N A is a fixed point of Algorithm 1, then there exists a pair (s, v) ∈ W A × W N A that solves the coupled system (50). We call u A the dominant fixed point if, whenever (s̃, ṽ) ≠ (s, v) also solves the system (50), then s(y) > s̃(y) for all y ∈ Γ. For any fixed y ∈ Γ we may write (50) as the pointwise system (51). The following Lemma will be helpful in establishing a connection between the eigenpair of interest and the system (51).
(iii) From f (κ) = 0 and κ ≥ 1/2 we deduce the remaining estimates.

Applying Lemma 2 to the system (51) pointwise for y ∈ Γ we obtain the following result.

Proposition 4. Let u A ∈ W N A be the dominant fixed point of Algorithm 1 and denote by (s, v) the associated pair in W A × W N A that solves (50). Let µ A ∈ W A be such that P A (sµ A ) = 1. For y ∈ Γ denote by λ(y) the gap between the two largest eigenvalues of K −1 (y)M. Assume that inf y∈Γ s(y) > 0 and inf y∈Γ λ(y) > 0. Then there exists C > 0 such that pointwise error bounds hold for µ A and v, where µ h : Γ → R is the smallest eigenvalue of M −1 K(y) and u h : Γ → R N is the corresponding eigenvector normalized in || · || R N M (and with appropriate sign).
Proof. It is easy to see that the system (51) is equivalent to where w(y) = s −1 (y)M

Convergence of the dominant fixed point to a parametric eigenpair
The next step in our analysis is to bound the error between the dominant fixed point of Algorithm 1 and the dominant eigenpair of the inverse of the parametric matrix. To this end we will use the pointwise estimate obtained previously.
From Proposition 4 we may easily deduce the following result.

Theorem 2. Let u A ∈ W N A be the dominant fixed point of Algorithm 1 and denote by (s, v) the associated pair in W A × W N A that solves (50). Let µ A ∈ W A be such that P A (sµ A ) = 1. For y ∈ Γ denote by λ(y) the gap between the two largest eigenvalues of K −1 (y)M. Assume that s * := inf y∈Γ s(y) > 0, that λ * := inf y∈Γ λ(y) > 0, and that the residual quantity appearing in the bounds is small enough. Then there exists C > 0 such that the bounds (75) and (76) hold.

Proof. With r defined as in Proposition 4 we may bound the relevant residual terms, and the bounds (75) and (76) then follow from Proposition 4.
By Proposition 1 the exact eigenvalue and eigenvector of problem (9) are analytic functions of the parameter vector y ∈ Γ. This suggests that the residuals on the right hand side of equations (75) and (76) can be asymptotically estimated from Proposition 2.

Convergence of the spectral inverse iteration to the dominant fixed point
The classical inverse iteration converges to the dominant eigenpair of the inverse matrix at a speed characterized by the gap between the two largest eigenvalues. Here we will establish a similar asymptotic result for the convergence of the spectral inverse iteration towards the dominant fixed point.
Fixed points of the spectral inverse iteration may be characterized using the tensor notation of Algorithm 2. Let û A ∈ R P N be a fixed point of the algorithm, i.e., û A = Sv̂, where (ŝ, v̂) ∈ R P × R P N satisfy the corresponding fixed-point equations. Define the associated linear operator R(ŝ, v̂) : R P N → R P N . The convergence of the spectral inverse iteration to the fixed point û A can now be related to the ratio of the norms of ∆ −1 (ŝ) and R(ŝ, v̂)S −1 .

Theorem 3. Let û A ∈ R P N be a fixed point of Algorithm 2 and let (ŝ, v̂) ∈ R P × R P N be the associated pair. Then the estimates (80) and (81) hold for û (k) sufficiently close to û A . Furthermore, there exists C > 0 such that the convergence is geometric with ratio ψ max /φ min .

Proof. The partial derivative (Jacobian) of the function F (ŝ + t̂, v̂ + ŵ) with respect to t̂ at t̂ = 0 is given by 2∆(ŝ). The implicit function theorem now guarantees that there is a unique differentiable function t̂(ŵ) defined in a neighbourhood of ŵ = 0 such that F (ŝ + t̂(ŵ), v̂ + ŵ) = 0. Computing the first order approximation of this function we see that (82) holds for ŵ small enough, where h.o.t. stands for higher order terms. From (82) we obtain a one-step contraction estimate for the iteration. Equations (80) and (81) now follow from (85) and the fact that µ̂ (k) is asymptotically given as a linear function of û (k) .
Adapting Theorem 3 to the context of Algorithm 1 we obtain the following Corollary.

Corollary 1. Let u A ∈ W N A be a fixed point of Algorithm 1 and (s, v) ∈ W A × W N A a corresponding solution to (50). Let µ A ∈ W A be such that P A (sµ A ) = 1. Assume that s * := inf y∈Γ s(y) > 0 and let ψ max be as in Theorem 3. Then the iterates of Algorithm 1 converge to the fixed point geometrically with ratio at most ψ max /s * .

Proof. Interpret Theorem 3 in the context of Algorithm 1. The bound φ min ≥ s * is a consequence of Lemma 1.
Obviously the previous corollary has practical value only if ψ max < s * . Here we briefly discuss the value of ψ max in the case that (ŝ, v̂) ∈ R P × R P N is associated to the dominant fixed point of Algorithm 2. Observe that the equation ẑ = R(ŝ, v̂)ŵ is equivalent to the system z(y) = w(y) − P A (tu A )(y) for all y ∈ Γ. We see that if w = v then z = 0, whereas if w(y), v(y) R N M = 0 for all y ∈ Γ then z = w. Thus, the matrix R(ŝ, v̂) acts as a deflation that shrinks vectors that are close to v(y) and preserves vectors that are almost orthogonal to v(y). From Proposition 4 we know that s −1 (y) is an approximation of the smallest eigenvalue of M −1 K(y) and v(y) is an approximation of the corresponding eigenvector. By Lemma 1 the operator norm of S −1 is bounded by sup y∈Γ λ −1 1 (y), where λ 1 (y) is the smallest eigenvalue of M −1 K(y). Analogously, since the eigenvector corresponding to this smallest eigenvalue is deflated by R(ŝ, v̂), we expect the norm of R(ŝ, v̂)S −1 to be bounded by a value close to sup y∈Γ λ −1 2 (y), where λ 2 (y) is the second smallest eigenvalue of M −1 K(y). With this reasoning, if the deflation is sufficient, there is ψ * max ∈ R such that (88) holds. The speed of convergence of the spectral inverse iteration is then characterized by what is essentially the largest ratio of the two smallest eigenvalues of the parametric matrix M −1 K(y).
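The heuristic above mirrors the classical picture: for a fixed matrix, inverse iteration contracts the eigenvector error by the ratio of the two smallest eigenvalues per step. A small sketch (with an illustrative diagonal matrix, not our parametric system) measures this rate.

```python
import numpy as np

lam = np.array([1.0, 2.5, 4.0, 7.0])     # eigenvalues; contraction ratio 1/2.5
K = np.diag(lam)
u = np.ones(4) / 2.0
errs = []
for k in range(25):
    v = np.linalg.solve(K, u)            # one inverse iteration step
    u = v / np.linalg.norm(v)
    errs.append(np.linalg.norm(u - np.array([1.0, 0.0, 0.0, 0.0])))
# Asymptotically the error shrinks by lambda_1/lambda_2 = 0.4 per step:
rates = [errs[k + 1] / errs[k] for k in range(15, 20)]
print(rates)
```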

Combined error analysis
Let (µ, u) ∈ L 2 ν (Γ) × L 2 ν (Γ) ⊗ H 1 0 (D) be the smallest eigenvalue and the associated eigenfunction of the continuous problem (9). Let (µ h , u h ) ∈ L 2 ν (Γ) × L 2 ν (Γ) ⊗ V h be the corresponding eigenpair of the semi-discrete problem (13). Assume that there exists a dominant fixed point u A ∈ W N A of Algorithm 1 and an associated eigenvalue approximation µ h,A := µ A ∈ W A as in Proposition 4. Denote by u (k) ∈ W N A the k-th iterate of Algorithm 1 and by µ h,A,k := µ (k) ∈ W A the associated solution to (42). Let u h,A and u h,A,k denote the functions in W A ⊗ V h whose coordinates are defined by the vectors u A and u (k) respectively. The spatial, stochastic, and iteration errors may now be separated in the sense of the triangle inequalities (89) and (90). Under sufficient conditions we may bound each term in the equations (89) and (90) separately using the theory developed earlier in this section. The first term may be estimated using Theorem 1, the second term using Theorem 2 and Proposition 2, and the third term using Corollary 1 of Theorem 3 and the hypothesis (88). We therefore expect that, with an optimal choice of the multi-index sets A ε for ε > 0, the output of the spectral inverse iteration converges to the exact solution according to (91) for certain rates r > 0 and l > 0.
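Written out, the splittings (89) and (90) are triangle inequalities of the following form (a sketch consistent with the notation above; the norms are those of L 2 ν (Γ) ⊗ H 1 0 (D) and L 2 ν (Γ) respectively, the three terms being the spatial, stochastic, and iteration errors):

```latex
\|u - u_{h,\mathcal{A},k}\|
  \le \|u - u_h\| + \|u_h - u_{h,\mathcal{A}}\| + \|u_{h,\mathcal{A}} - u_{h,\mathcal{A},k}\|,
\qquad
\|\mu - \mu_{h,\mathcal{A},k}\|
  \le \|\mu - \mu_h\| + \|\mu_h - \mu_{h,\mathcal{A}}\| + \|\mu_{h,\mathcal{A}} - \mu_{h,\mathcal{A},k}\|
```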

Numerical examples
We present numerical evidence to verify the equations (91). The multi-index sets A ε are chosen as in the proof of Proposition 2. Multi-index sets of this form have been introduced in [6], and in [15] an algorithm for generating them has been suggested. We use a matrix-free formulation of the conjugate gradient method for solving the linear systems (46) and (48). The preconditioner is constructed using the mean of the parametric matrix in question [17], and as an initial guess we use the solution of the system from the previous iteration. We note that in this setting only very few iterations of the conjugate gradient method are needed at each step of the spectral inverse iteration.
In the absence of an exact solution we compute an overkill solution (µ * , u * ) for which the number of deterministic degrees of freedom is N = 36741, the parameter ε is chosen such that #A ε = 264, and the number of iterations is k = 16. This results in roughly 10 7 degrees of freedom in total. The number of active dimensions in the overkill solution is M (A ε ) = 113. All the numerical examples in this section have been computed using this overkill solution as a reference. The expected value and variance of the eigenfunction are presented in Figure 1.

Convergence in space
Keeping the number of stochastic degrees of freedom #A ε = 264 and the number of iterations k = 16 fixed, we may investigate the convergence of the solution (µ *,h , u *,h ) as a function of the spatial discretization parameter h. This convergence for piecewise quadratic basis functions is illustrated in Figure 2. We observe algebraic convergence rates of order 3 and 4 for the eigenfunction and the eigenvalue respectively, exactly as predicted by Theorem 1. Thus, the error behaves like N −3/2 and N −2 with respect to the number of deterministic degrees of freedom.

Convergence in the parameter domain
Keeping the number of spatial degrees of freedom N = 36741 and the number of iterations k = 16 fixed, we may investigate the convergence of the solution (µ_{*,A_ε}, u_{*,A_ε}) as a function of #A_ε as ε → 0. This convergence is illustrated in Figure 3. We observe an approximate algebraic convergence rate of order −r = −1.9 with respect to the number of stochastic degrees of freedom #A_ε.
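Rates like the −1.9 above are read off as slopes of least-squares fits in log-log coordinates of error versus degrees of freedom; a minimal sketch with synthetic error data (the numbers are illustrative, not the paper's):

```python
import numpy as np

# Synthetic errors decaying like (#A)^(-1.9), mimicking the trend in Figure 3.
dofs = np.array([16.0, 32.0, 64.0, 128.0, 264.0])
err = 5.0 * dofs ** -1.9 * (1.0 + 0.05 * np.sin(dofs))  # mild "noise" on the trend

# Observed algebraic rate = slope of the least-squares line in log-log scale.
slope, intercept = np.polyfit(np.log(dofs), np.log(err), 1)
```

The recovered slope is close to −1.9 despite the perturbation, which is why such fits are a robust way to report empirical convergence orders.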
In Figure 4 we have presented the norms of the Legendre coefficients of the overkill solution. The ordering of the coefficients is the same as the order in which they appear in the multi-index set A_ε as ε → 0. We see that the norms converge at the rate −r − 1/2 = −2.4, exactly as we would expect from the proof of Proposition 2. In Figure 5 we have presented the norms of the Legendre coefficients of the eigenvalue. Interestingly, we observe two well-separated clusters of values in Figure 4b. It seems that many of the multi-indices that correspond to relatively large Legendre coefficients of the eigenfunction account for only a marginal contribution to the eigenvalue.
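The decay of Legendre coefficients underlying Proposition 2 is easiest to see in one dimension: for a function analytic in a neighbourhood of [−1, 1] the coefficients decay geometrically, and the algebraic rate −r − 1/2 observed above comes from ordering a multi-dimensional index set by coefficient size. A small sketch with the illustrative coefficient f(y) = 1/(2 + y), which is not taken from the paper:

```python
import numpy as np
from numpy.polynomial import legendre

# Legendre coefficients of f(y) = 1/(2 + y) on [-1, 1] via Gauss-Legendre quadrature.
nodes, weights = legendre.leggauss(80)
f = 1.0 / (2.0 + nodes)

coeffs = []
for deg in range(20):
    Pn = legendre.Legendre.basis(deg)(nodes)
    # c_deg = (2*deg + 1)/2 * integral over [-1, 1] of f * P_deg
    coeffs.append((2 * deg + 1) / 2.0 * np.sum(weights * f * Pn))
coeffs = np.abs(np.array(coeffs))

# Geometric decay: successive ratios approach 1/(2 + sqrt(3)) ≈ 0.27,
# governed by the distance of the pole y = -2 from the interval.
ratio = coeffs[12] / coeffs[11]
```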

Convergence of the iteration error
Keeping the number of spatial basis functions N = 36741 and the parameter ε fixed so that #A_ε = 264, we may investigate the convergence of the solution (µ_{*,k}, u_{*,k}) as a function of the number of iterations k. This convergence is illustrated in Figure 6. Assuming that the variation of the eigenvalues within the parameter space is small, the value λ_{1/2} defined in (88) may be approximated by the ratio of the two smallest eigenvalues of the problem at y = 0. Thus, Figure 6 suggests that the error behaves asymptotically like λ_{1/2}^k, just as predicted by Corollary 1. It is worth noting that, from the analysis of the classical inverse iteration, one might expect the eigenvalue to converge faster than the eigenfunction. Indeed, the eigenvalue exhibits a faster rate of convergence at first, the error behaving like λ_{1/2}^{2k}. Comparing with the results of the previous example, we see that k ≈ 9 represents a turning point after which the stochastic error in the eigenfunction starts to dominate the iteration error. Hence, for k ≥ 9 the polynomial approximation in the parameter domain is insufficient to guarantee the accuracy required for the eigenvalue to exhibit the faster rate of convergence otherwise characteristic of it.
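The rates λ_{1/2}^k for the eigenvector and λ_{1/2}^{2k} for the eigenvalue mirror the classical deterministic inverse iteration; a small numpy check on a fixed symmetric matrix with eigenvalue ratio λ₁/λ₂ = 1/2 (a toy stand-in, not the paper's discretized operator):

```python
import numpy as np

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
lam = np.array([1.0, 2.0, 3.5, 5.0, 7.0, 9.0])   # lambda_1 / lambda_2 = 1/2
A = Q @ np.diag(lam) @ Q.T
v1 = Q[:, 0]                                     # exact ground-state eigenvector

x = rng.standard_normal(6)
vec_err, val_err = [], []
for _ in range(12):
    x = np.linalg.solve(A, x)                    # one inverse iteration step
    x /= np.linalg.norm(x)
    mu = x @ A @ x                               # Rayleigh quotient
    vec_err.append(np.linalg.norm(x - np.sign(x @ v1) * v1))
    val_err.append(abs(mu - lam[0]))

rho_vec = vec_err[-1] / vec_err[-2]              # ~ lambda_1/lambda_2 = 0.5
rho_val = val_err[-1] / val_err[-2]              # ~ (lambda_1/lambda_2)^2 = 0.25
```

The per-step contraction factors recover 1/2 and 1/4 respectively, i.e. the eigenvalue error contracts at the squared rate until some other error source (here, machine precision; in the paper, the stochastic error) takes over.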

Concluding remarks and comparison to sparse collocation
Using the finest levels of discretization, i.e., N = 9296 degrees of freedom for the approximation in space and #A_ε = 121 degrees of freedom for the approximation in the parameter domain, and computing k = 9 steps of the inverse iteration, we obtain a solution for which the L²_ν(Γ) ⊗ L²(D) error of the eigenfunction is approximately 3 · 10⁻⁶. The total number of degrees of freedom in this case is more than 10⁶ and the number of active dimensions is M(A_ε) = 60. Comparing with a solution obtained by the sparse collocation method, the statistics of the two solutions seem to almost coincide. Again using the finest levels of discretization (N = 9296 and #A_ε = 121) for both methods, the L²(D) errors of the mean and variance of the eigenfunction are both less than 3 · 10⁻⁸, and the errors in the eigenvalue are less than 3 · 10⁻¹¹ and 3 · 10⁻⁹ for the mean and variance respectively.

Spectral subspace iteration
In this section we extend the spectral inverse iteration to a spectral subspace iteration, with which we can compute dominant subspaces of the inverse of the parametric matrix under consideration. The underlying assumption is that the subspace is sufficiently smooth with respect to the parameters. Convergence of the spectral subspace iteration is verified through numerical experiments.

Algorithm description
As with the classical subspace iteration, the idea in the spectral version is to perform inverse iteration for a set of vectors and to orthogonalize these vectors at each step. Orthogonality should here be understood in the sense that the vectors are orthogonal at every point of the parameter space Γ. This can be approximately achieved by performing the Gram–Schmidt orthogonalization process for the vectors in the Galerkin sense, i.e., by projecting each elementary operation onto the basis W_A.
Fix a finite set of multi-indices A ⊂ (ℕ₀^∞)_c and let P = #A. The spectral subspace iteration for the system (16) is now defined in Algorithm 3. Observe that, if the projections were exact, the algorithm would correspond to performing the classical subspace iteration pointwise on Γ. Orthogonalization of the basis vectors via the Gram–Schmidt process is achieved in step (2). We expect Algorithm 3 to converge to an approximate basis for the Q-dimensional invariant subspace associated to the smallest eigenvalues of the system.

Algorithm 3 (Spectral subspace iteration). Fix tol > 0 and let {u^(0,q)}_{q=1}^Q ⊂ W^N_A be an initial guess for the basis of the subspace. For k = 1, 2, . . . do
(1) For each q = 1, . . . , Q solve v^(q) ∈ W^N_A from the linear equation
(2) For q = 1, . . . , Q do
(2.2) Solve s^(q) ∈ W_A from the nonlinear equation
(3) Stop if a suitable criterion is satisfied and return {u^(k,q)}_{q=1}^Q ⊂ W^N_A as the approximate basis for the subspace.
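Pointwise in y, Algorithm 3 reduces to the classical inverse subspace iteration; a deterministic sketch of that pointwise analogue (a toy symmetric matrix, and QR factorization in place of the projected Gram–Schmidt of step (2)):

```python
import numpy as np

def inverse_subspace_iteration(A, Q_dim, steps, rng):
    """Inverse iteration for Q_dim vectors with re-orthonormalization each step."""
    U = rng.standard_normal((A.shape[0], Q_dim))
    for _ in range(steps):
        V = np.linalg.solve(A, U)   # step (1): one inverse iteration per basis vector
        U, _ = np.linalg.qr(V)      # step (2): Gram-Schmidt orthonormalization
    return U

rng = np.random.default_rng(2)
lam = np.array([1.0, 1.1, 1.3, 4.0, 5.0])        # three smallest form a cluster
P, _ = np.linalg.qr(rng.standard_normal((5, 5)))
A = P @ np.diag(lam) @ P.T

U = inverse_subspace_iteration(A, Q_dim=3, steps=40, rng=rng)
# Cosine of the angle between span(U) and the exact invariant subspace:
cos_angle = abs(np.linalg.det(P[:, :3].T @ U))
```

Even though the three smallest eigenvalues are clustered, the computed basis converges to the invariant subspace at the rate set by the gap between the third and fourth eigenvalues, while the individual columns need not converge to particular eigenvectors.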
In general we can not expect the output vectors {u (k,q) } Q q=1 ⊂ W N A of Algorithm 3 to converge to any particular eigenvectors of the system (16). However, we still expect them to approximately span the subspace associated to the smallest eigenvalues of the system. In view of Section 3, if a cluster of eigenvalues is sufficiently well separated from the rest of the spectrum, then we assume the associated subspace to be analytic with respect to the parameter vector y ∈ Γ. In this case we may expect optimal convergence of the projections in the Algorithm.

Remark 6.
In order to measure the convergence of Algorithm 3, we should be able to estimate the angle between subspaces over the parameter space Γ. It is not entirely trivial to perform this kind of computation in practice. The numerical examples in Section 6.2 will hopefully give some more insight into this.

Remark 7.
As noted in Section 3, the smallest eigenvalue of the problem (9) is always simple and hence analytic. For more general problems this might not be the case. For instance, in the event of an eigenvalue crossing, the eigenmode corresponding to the pointwise smallest eigenvalue is not, in general, even a continuous function of the parameter vector y. In this case we may modify Algorithm 3 by adding the step (2.1). This should ensure optimal convergence, since even if the eigenmodes change places, we still expect their sum to be smooth with respect to y.
Using the tensors defined in Section 5 we may write Algorithm 3 in the following form.
(3) Stop if a suitable criterion is satisfied and return {û^(k,q)}_{q=1}^Q ⊂ ℝ^{PN} as the approximate basis for the subspace.

Numerical examples
We use Algorithm 3 to compute the 3-dimensional subspace associated with the smallest eigenvalues of the model problem considered in Section 5.3. We let the deterministic mesh be a uniform grid of second order quadrilateral elements with N = 2465 degrees of freedom. As an initial guess we use the smallest eigenvectors of the problem at y = 0. In Figure 7 we have presented the four smallest eigenvalues of the problem as functions of y₁ when y₂ = y₃ = · · · = 0. The smallest eigenvalue is well separated, but we observe a crossing of the second and third smallest eigenvalues, due to which the eigenvectors corresponding to the pointwise second and third smallest eigenvalues are discontinuous as functions of y.
In order to investigate the convergence of the spectral subspace iteration, we attempt to estimate the angle between the exact invariant subspace and the approximate one computed by Algorithm 3. For any fixed y ∈ Γ we let v₁(y), . . . , v_Q(y) ∈ ℝ^N be a set of M-orthonormal exact eigenvectors corresponding to the Q smallest eigenvalues of the problem. We define θ_k(y) := |det(Θ^(k)(y))|, where Θ^(k)(y) ∈ ℝ^{Q×Q} is the matrix of inner products between the exact eigenvectors and the iterates {u^(k,q)(y)}_{q=1}^Q. Now θ_k(y) can be viewed as the cosine of the angle between the two subspaces at y ∈ Γ (see for instance formula (2.2) in [9]). Thus, convergence of the algorithm can be measured in terms of the statistics of θ_k. In the following examples we have estimated the mean and variance of θ_k using the non-composite version of the sparse collocation operator employed in [1]. For the definition of the collocation operator we have used the overkill multi-index set of Section 5.3 (#A_ε = 264).
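What the statistics of θ_k measure can be sketched by sampling y, computing the exact subspace at each sample, and taking the determinant of the matrix of inner products. The paper estimates the statistics with sparse collocation; plain Monte Carlo and a toy parametric matrix are used below for simplicity:

```python
import numpy as np

rng = np.random.default_rng(3)
n, Qdim = 8, 2

# Toy parametric symmetric matrix A0 + y*A1 with a large gap after eigenvalue 2.
A0 = np.diag([1.0, 2.0, 10.0, 11.0, 12.0, 13.0, 14.0, 15.0])
B = rng.standard_normal((n, n))
A1 = 0.05 * (B + B.T)

_, V0 = np.linalg.eigh(A0)
U = V0[:, :Qdim]            # surrogate for the iterate: exact subspace at y = 0

def theta(y):
    # |det| of the inner-product matrix = cosine of the subspace angle at y.
    _, V = np.linalg.eigh(A0 + y * A1)
    return abs(np.linalg.det(V[:, :Qdim].T @ U))

ys = rng.uniform(-1.0, 1.0, size=200)
vals = np.array([theta(y) for y in ys])
mean_theta, var_theta = vals.mean(), vals.var()
```

Here the invariant subspace varies only mildly over the parameter range, so E[θ] stays close to 1 with small variance; for the iterates of Algorithm 3 one watches these statistics approach that regime as k grows.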
Convergence of the spectral subspace iteration for Q = 3 is illustrated in Figure 8. We see that the values arccos(E[θ_k]) behave like λ_{3/4}^k, where λ_{3/4} is the ratio of the third and fourth smallest eigenvalues of the problem. Simultaneously the values Var[θ_k] converge to zero. These results suggest that the angle between the exact subspace and the approximation computed by Algorithm 3 converges to zero on Γ. Furthermore, the rate of convergence is characterised by λ_{3/4}^k, much like for the classical subspace iteration. Note, however, that with a fixed basis for the polynomial approximation, i.e., a fixed multi-index set A, only a certain accuracy of the output may be reached; increasing the number of basis polynomials makes more accurate solutions achievable.

Conclusions and future prospects
We have presented a comprehensive error analysis for the spectral inverse iteration, when applied to solving the ground state of a stochastic elliptic operator. We have also proposed a method of spectral subspace iteration and, using numerical examples, shown its potential in computing approximate subspaces associated to possibly clustered eigenvalues. Further analysis, both numerical and theoretical, of this algorithm is left for future research.
The numerical examples suggest that our algorithms are both accurate and efficient. However, theoretical estimates for the computational complexity are not entirely trivial to obtain, as this would require information on the structure of the tensor of coefficients c_{αβγ}. Moreover, when iterative solvers are used, the optimal strategy is to increase the associated tolerances in the course of the iteration. We note that sparse products of the spatial and stochastic approximation spaces, as in [15], may be applied to further reduce the computational effort, and that matrix-free algorithms also allow for easy parallelization.