A note on the Krylov solvability of compact normal operators on Hilbert space

We analyse the Krylov solvability of inverse linear problems on a Hilbert space $\mathcal{H}$ whose underlying operator is compact and normal. Krylov solvability is an important feature of inverse linear problems, with profound implications in theoretical and applied numerical analysis: it is critical for understanding whether Krylov-based methods are suitable for solving a given inverse problem. Our results describe explicitly, for the first time, the Krylov subspace for such operators given any datum vector $g\in\mathcal{H}$, and prove that all inverse linear problems are Krylov solvable provided that $g$ is in the range of such an operator. We thereby expand the known class of Krylov solvable operators to include the compact normal operators. We close the study by proving an isomorphism between the closed Krylov subspace for a general bounded normal operator and an $L^2$-measure space based on the scalar spectral measure.


Introduction
The question of 'Krylov solvability' of inverse linear problems is operator theoretic with deep roots in numerical applications and profound implications for the use of Krylov-based methods to solve inverse linear problems. Recently this phenomenon has been studied for both bounded and unbounded operators [8,4,5,6], after having received some past attention in the bounded setting [17,18,11,13,14,3,2,9].
In general one has a linear operator A : H → H acting on some Hilbert space H, which in many applications represents some physical law, and a datum vector g ∈ ran A that represents some measurable output. The inverse linear problem is formulated as

(1.1) $Af = g$,

where f ∈ H is a solution to the problem, which in applications is often a-priori unknown. We call the problem (1.1) solvable since g ∈ ran A. If A is injective, we call (1.1) well-defined, and if additionally A⁻¹ ∈ B(H) we call the problem (1.1) well-posed.
The question of 'Krylov solvability' becomes relevant in applications when one attempts to solve (1.1) by means of the very popular and celebrated family of Krylov algorithms, which search for solution(s) to (1.1) in the distinguished Krylov subspace

(1.2) $\mathcal{K}(A,g) := \operatorname{span}\{g, Ag, A^2 g, \dots\}$.

Therefore, one naturally wants to know whether a solution f ∈ H is approximable by vectors in the Krylov subspace, in other words, whether there exists a solution f ∈ K(A, g) to (1.1). Such a solution we call a Krylov solution, and an inverse linear problem that exhibits such an occurrence we call Krylov solvable. The practical advantage of Krylov solvable inverse linear problems is that one may construct a solution using the easy-to-compute vectors g, Ag, A²g, .... Of course, the critical importance of having such knowledge a-priori is that one may decide whether a given problem is indeed a suitable candidate for treatment by a Krylov-based algorithm before it is used. The above question is already well-understood and under good control in the finite-dimensional setting, and is treated in several well-known monographs [24,16,12]. To a lesser extent the question of Krylov solvability has been studied in several past works in the infinite-dimensional setting for bounded operators, most of which remain within a particular class of operators (e.g., positive self-adjoint), using specific Krylov algorithms (e.g., GMRES, MINRES, CG, LSQR). Recently, the problem has been studied using operator-theoretic techniques in the infinite-dimensional setting, including unbounded operators [4,5,19], and has also resulted in a recent monograph on the topic [6].
In this work we choose to remain in the abstract infinite-dimensional Hilbert space setting where, innocent as the question of Krylov solvability may seem, several results appear un-intuitive when coming from a finite-dimensional perspective. (Indeed, several good examples of Krylov solvability, or lack thereof, may be found in [8, Ex. 3.1].) One strategy to confront certain difficulties that naturally arise in the infinite-dimensional setting has been to identify classes of operators with favourable Krylov solvability properties. In a previous study [8] we identified that the bounded self-adjoint operators always give rise to Krylov-solvable inverse linear problems, and we also identified a new class of operators, which we called the 'K-class', that also always exhibits Krylov solvability. We recently expanded our study of the K-class under the effects of perturbations in [7]. The operator classes just described in fact belong to the larger class of Krylov solvable operators, i.e., the collection of linear operators on H that always admit a Krylov solution to (1.1) given any g in the range of the operator.
Here we expand our knowledge of the class of Krylov solvable operators by proving that the compact normal operators on Hilbert space always belong to this class. The analysis that permits us to conclude such a result is based primarily on the functional calculus for bounded operators on Banach space (see, for example, [20,15,23,1]) and the canonical decomposition of compact normal operators [1,15]. Moreover, we are able to explicitly describe the Krylov subspace in terms of the datum g and the projection operators onto the eigenspaces of A.
We begin this note with a preparatory theorem in Section 2 (Theorem 2.2) before moving on to the analysis specific to compact normal operators in Section 3 (Propositions 3.1, 3.4, and 3.9), and finally in Section 4 we close with two theorems for general bounded normal operators (Theorems 4.1 and 4.4).
Notation. Throughout this note H denotes an abstract Hilbert space with scalar product ⟨·, ·⟩, antilinear in the first argument, and norm ‖·‖_H. We use |ϕ⟩⟨ψ|, for ϕ, ψ ∈ H, to denote the rank-1 linear map v ↦ ⟨ψ, v⟩ϕ for v ∈ H; and ‖·‖_op to denote the standard operator norm on B(H).

A preparatory theorem
Our preparatory Theorem 2.2 concerns the approximation, in the operator norm ‖·‖_op, of certain Riesz projections by polynomials of A ∈ B(H). We shall use Theorem 2.2 in Section 3 in order to analyse the structure of the Krylov subspace K(A, g) itself given some g ∈ H. We begin with a simple definition before deriving the main result.

Definition 2.1 ([20,1]). An admissible domain U of an operator A ∈ B(H) is a non-empty bounded open subset of the complex plane C such that the boundary ∂U consists of finitely many rectifiable Jordan curves contained in the resolvent set ρ(A) of the operator A and oriented in the positive sense.
Theorem 2.2. Let A ∈ B(H) with spectrum σ(A). Suppose that σ(A) is separated into two parts σ₁ and σ₂ such that there are admissible domains U₁ and U₂ containing σ₁ and σ₂ respectively; suppose further that U₁ and U₂ have disjoint closures and that C \ (U₁ ∪ U₂) is connected. Then there exist polynomial sequences (p_j^{(1)})_{j∈N} and (p_j^{(2)})_{j∈N} such that

$\| p_j^{(n)}(A) - P_n \|_{\mathrm{op}} \xrightarrow{j\to\infty} 0, \qquad n \in \{1,2\}$,

where $P_n := \frac{1}{2\pi\mathrm{i}} \oint_{\partial U_n} R(A,z)\,\mathrm{d}z$ is the Riesz projection associated with σ_n and $R(A,z) := (z\mathbb{1} - A)^{-1}$ denotes the resolvent of A.

Proof. Let f be the function equal to the constant 1 on U₁ and to 0 on U₂; f is holomorphic on U₁ ∪ U₂. We know from [22, Th. 13.7] that there exists a polynomial sequence (p_j^{(1)})_{j∈N} converging to f uniformly on the compact set $\overline{U_1 \cup U_2}$, as its complement is connected. As f(A), defined through the holomorphic functional calculus, is expressible as a limit of Riemann sums over ∂U₁ ∪ ∂U₂, we see that

(*) $\| p_j^{(1)}(A) - f(A) \|_{\mathrm{op}} \le \frac{l}{2\pi} \max_{z \in \partial U_1 \cup \partial U_2} \big| p_j^{(1)}(z) - f(z) \big|\, \| R(A,z) \|_{\mathrm{op}}$,

where l < +∞ is the length of the curve ∂U₁ ∪ ∂U₂. The right side of (*) vanishes as j → ∞ owing to the analyticity of R(A, z) for all z in the compact curve coupled with the uniform vanishing of |p_j^{(1)}(z) − f(z)|. As f(A) = P₁, we have our conclusion for n = 1. The proof is similar for n = 2 by replacing the function f with 1 − f.
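The mechanism of Theorem 2.2 can be illustrated in finite dimension (a toy sketch under stated assumptions, not the theorem itself: here the spectrum is finite, so a single interpolating polynomial replaces the limiting sequence, and all matrices below are illustrative):

```python
import numpy as np

# sigma1 and sigma2 are two separated parts of the (finite) spectrum of a
# normal (here: diagonal) matrix A on C^3.
sigma1 = np.array([1 + 1j, 1.2 + 0.9j])
sigma2 = np.array([5.0 + 0j])
lams = np.concatenate([sigma1, sigma2])
A = np.diag(lams)

# Coefficients of the polynomial p with p = 1 on sigma1 and p = 0 on sigma2,
# obtained by solving the Vandermonde interpolation system.
V = np.vander(lams, increasing=True)
c = np.linalg.solve(V, np.array([1, 1, 0], dtype=complex))

# Evaluating p(A) as a matrix polynomial recovers the Riesz (spectral)
# projection P1 onto the eigenspaces belonging to sigma1.
pA = sum(ck * np.linalg.matrix_power(A, k) for k, ck in enumerate(c))
P1 = np.diag([1.0, 1.0, 0.0]).astype(complex)
assert np.allclose(pA, P1)
```

In the infinite-dimensional setting of the theorem the spectrum need not be finite, which is why a uniformly convergent polynomial sequence (guaranteed by Runge-type approximation on a set with connected complement) is needed instead of one interpolant.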

Compact normal operators
In this section we derive fundamental results that both describe the Krylov subspace and prove Krylov solvability for a compact normal operator. We use this to provide a simple proof of the cyclicity of these operators on separable Hilbert space (Corollary 3.2).
We use the representation given in [15, Ch. V] for a compact normal operator A on Hilbert space H, namely

(3.1) $A = \sum_{n \in S \setminus \{0\}} \lambda_n P_n$,

where S ⊂ N₀ is an index set with 0 ∈ S, and λ_n ∈ C \ {0} for all n ∈ S \ {0} are the distinct non-zero eigenvalues of A, with λ₀ = 0 not necessarily an eigenvalue of A. (λ_n)_{n∈S} is a bounded sequence in C such that λ_n → 0 as n → ∞ when S is infinite. The P_n's are mutually orthogonal projections given by

(3.2) $P_n = \frac{1}{2\pi\mathrm{i}} \oint_{\partial U_n} R(A,z)\,\mathrm{d}z$,

where U_n is an admissible domain that contains only the single point λ_n from the spectrum σ(A). P₀ is the orthogonal projection onto ker A (which is O when A is injective), and P₀P_n = O for all n ∈ N. Convergence of the sum (3.1) occurs in the operator norm topology.
We have that the partial sums of (P_n)_{n∈S} form a resolution of the identity,

(3.3) $\sum_{n \in S} P_n = \mathbb{1}$,

where convergence of the sum is in the strong operator topology, so that

(3.4) $\mathbb{1} - P_0 = \sum_{n \in S \setminus \{0\}} P_n$.

Finally, we recall the important fact that for any normal operator we have the relation ker A = ker A*, ensuring that (ran A)^⊥ = ker A.
Our first proposition reveals explicitly the structure of the Krylov subspace in a way that is more easily accessible and more meaningful for the purposes of investigating Krylov solvability and structural properties of the space than the standard definition (1.2).

Proposition 3.1. Let A ∈ B(H) be a compact normal operator and g ∈ H. Then

(3.5) $\overline{\mathcal{K}(A,g)} = \overline{\operatorname{span}}\{P_n g \mid n \in S\}$.

Proof. When A = O, the result is trivially true as P₀ = 𝟙 and S = {0}.
Assume A ≠ O, so that S ≠ {0}, and take n₀ ∈ S \ {0}. Let B be a bounded open ball about 0 such that λ_{n₀} ∉ B and ∂B ⊂ ρ(A) (this is always possible as σ(A) = {λ_n}_{n∈S} accumulates at most at 0). There are at most finitely many points of σ(A) outside B, as 0 is the only point of accumulation possible in the spectrum. For each remaining point λ_n ∈ σ(A) \ B other than λ_{n₀} we construct bounded open balls B_n, containing only the respective point λ_n, with pairwise disjoint closures that are also disjoint from the closures of B and B_{n₀} (clearly there are finitely many such balls). Setting

$U := B \cup \bigcup_{n \in S' \setminus \{n_0\}} B_n$,

where S′ ⊂ S is the finite set containing the indices of all the spectral points λ_n not in B, the sets U and B_{n₀} are admissible domains with mutually disjoint closures, and moreover C \ (U ∪ B_{n₀}) is connected (indeed, $\overline{U \cup B_{n_0}}$ is the union of finitely many bounded, disjoint, closed balls).
Applying Theorem 2.2 with U₁ = B_{n₀} and U₂ = U, there exists a polynomial sequence (p_j)_{j∈N} such that ‖p_j(A) − P_{n₀}‖_op → 0 as j → ∞, whence ‖p_j(A)g − P_{n₀}g‖_H → 0. Therefore P_{n₀}g ∈ K(A, g). As n₀ ∈ S \ {0} was arbitrary, we have the inclusion

(3.6) $\operatorname{span}\{P_n g \mid n \in S \setminus \{0\}\} \subset \overline{\mathcal{K}(A,g)}$.

As $\sum_{n \in S \setminus \{0\}} P_n g \in \overline{\mathcal{K}(A,g)}$ from (3.6), combining this with (3.4) implies that (𝟙 − P₀)g ∈ K(A, g). The linearity of K(A, g) and the fact that g ∈ K(A, g) imply P₀g ∈ K(A, g). We therefore have the inclusion

$\operatorname{span}\{P_n g \mid n \in S\} \subset \overline{\mathcal{K}(A,g)}$.

The reverse inclusion follows from the fact that for any k ∈ N,

$A^k g = \sum_{n \in S \setminus \{0\}} \lambda_n^k P_n g \in \overline{\operatorname{span}}\{P_n g \mid n \in S\}$.

From equation (3.3), $g = \sum_{n \in S} P_n g$, so that g ∈ span‾{P_n g | n ∈ S}, and so

$\mathcal{K}(A,g) \subset \overline{\operatorname{span}}\{P_n g \mid n \in S\}$.

We conclude by taking the closure.
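Proposition 3.1 can be sanity-checked numerically in a finite-dimensional toy model (a sketch, assuming the diagonal normal matrix below stands in for a compact normal operator; all names are illustrative):

```python
import numpy as np

# Distinct eigenvalues (0 included), each with a two-dimensional eigenspace
lams = np.array([1 + 1j, 2.0 + 0j, -1j, 0.0 + 0j])
A = np.diag(np.repeat(lams, 2))          # normal (diagonal) matrix on C^8
g = np.ones(8, dtype=complex)            # nonzero component in every eigenspace

# Krylov matrix [g, Ag, A^2 g, ...]
K = np.column_stack([np.linalg.matrix_power(A, k) @ g for k in range(8)])

# P_n g for each distinct eigenvalue: keep only the matching coordinates
P = np.column_stack(
    [np.where(np.isclose(np.diag(A), lam), g, 0) for lam in lams]
)

# Both families span the same 4-dimensional subspace, as (3.5) predicts
assert np.linalg.matrix_rank(K) == 4
assert np.linalg.matrix_rank(P) == 4
assert np.linalg.matrix_rank(np.hstack([K, P])) == 4
```

The rank is the number of distinct eigenvalues whose projection of g is nonzero, not the dimension of the space, which is exactly the content of (3.5).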
Here we present the following corollary of Proposition 3.1 that provides a simple exposition of the cyclicity of compact normal operators with simple eigenvalues. We recall that an operator A ∈ B(H) is called cyclic if there exists some g ∈ H such that K(A, g) is dense in H. Though the conclusion of Corollary 3.2 is already known (see, for example, [10, Cor. 30.15] or [21, Th. 1.1]), we choose to present it here through the lens of our explicit knowledge of the Krylov subspace provided by Proposition 3.1.

Corollary 3.2. Let A ∈ B(H) be a compact normal operator on a separable Hilbert space H with dim(ker A) ≤ 1, and dim(ran P_n) = 1 for all n ∈ S \ {0}. Then A is cyclic.
Proof. We know that the P_n's form a resolution of the identity and are mutually orthogonal, so there exists an orthogonal basis {ϕ_n}_{n∈S} of H such that P_n = |ϕ_n⟩⟨ϕ_n|, where ‖ϕ_n‖_H = 1 for all n ∈ S \ {0}, and ϕ₀ spans ker A when dim(ker A) = 1. Choosing $g := \sum_{n \in S} c_n \varphi_n$ with coefficients c_n ≠ 0 such that $\sum_{n \in S} |c_n|^2 < +\infty$, Proposition 3.1 gives K(A, g) = span‾{P_n g | n ∈ S} = span‾{ϕ_n | n ∈ S} = H, whence A is cyclic.

Remark 3.3. We elaborate on the comparison between the cyclicity result Corollary 3.2 and the more general cyclicity condition for normal operators presented in [21, Th. 1.1]. Theorem 1.1 of [21] states that a normal operator A ∈ B(H) is cyclic if and only if there exists a positive, finite, Borel measure µ on σ(A) such that A is unitarily equivalent to the multiplication operator M_z : L²(µ) → L²(µ), f(z) ↦ z f(z). Indeed, it is enough to consider A and g ∈ H as given in the statement and proof respectively of Corollary 3.2, and the finite positive Borel measure µ(Ω) := ⟨E^{(A)}(Ω)g, g⟩, where E^{(A)} is the unique projection-valued spectral measure for A. µ has support exactly on σ(A), and there exists a unitary operator T : L²(µ) → H. We see that T(M_z f) = A T f for all f ∈ L²(µ), i.e., M_z = T*AT; thus the setting of Corollary 3.2 satisfies the conditions of [21, Th. 1.1] and therefore A is cyclic.
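The cyclicity mechanism of Corollary 3.2 is easy to see in finite dimension (an illustrative sketch: a diagonal normal matrix with simple, distinct eigenvalues and dim(ker A) = 1 stands in for A):

```python
import numpy as np

# Distinct simple eigenvalues, one of which is 0 (so dim ker A = 1)
lams = np.array([1 + 1j, -2.0 + 0j, 3j, 0.5 + 0j, 0.0 + 0j])
A = np.diag(lams)
g = np.ones(5, dtype=complex)    # nonzero component in every eigenspace

# The Krylov matrix [g, Ag, ..., A^4 g] has full rank: g is a cyclic vector
K = np.column_stack([np.linalg.matrix_power(A, k) @ g for k in range(5)])
assert np.linalg.matrix_rank(K) == 5
```

With distinct eigenvalues the Krylov matrix is (up to scaling of rows) a Vandermonde matrix, which is why any g with all nonzero eigenspace components is cyclic.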
The following proposition reveals our main result, namely that compact normal operators give rise to Krylov solvable inverse linear problems, and therefore belong to the larger class of operators that always exhibit Krylov solvability (this class contains, for example, the bounded self-adjoint and K-class operators). The proof of this proposition rests on the explicit knowledge of the Krylov subspace revealed in Proposition 3.1.
Proposition 3.4. Let A ∈ B(H) be a compact normal operator. If g ∈ ran A, then Af = g has a unique and minimal norm Krylov solution. If in addition A ≠ O, then the Krylov solution is

(3.7) $f^\bullet := \sum_{n \in S \setminus \{0\}} \lambda_n^{-1} P_n g$.

Proof. When A = O the conclusion is obvious: g ∈ ran A implies that g = 0 and therefore K(A, g) = {0}, with the solution f• = 0 ∈ K(A, g) to the inverse linear problem. Therefore we consider the case A ≠ O. First we show that the vector f• in (3.7) is in H. Indeed, as g ∈ ran A there exists some f ∈ H such that Af = g. Given any n ∈ S \ {0}, owing to the mutual orthogonality of the projections (P_n)_{n∈S} we get P_n g = λ_n P_n f. Therefore P_n f = λ_n⁻¹ P_n g for all n ∈ S \ {0} and

$\sum_{n \in S \setminus \{0\}} \| \lambda_n^{-1} P_n g \|_{\mathcal{H}}^2 = \sum_{n \in S \setminus \{0\}} \| P_n f \|_{\mathcal{H}}^2 = \| (\mathbb{1} - P_0) f \|_{\mathcal{H}}^2 \le \| f \|_{\mathcal{H}}^2 < +\infty$,

where equation (3.4) is used in the last equality, so that indeed f• ∈ H.
Next we show by direct substitution that f• is a solution to Af = g. Indeed,

$A f^\bullet = \Big( \sum_{m \in S \setminus \{0\}} \lambda_m P_m \Big) \sum_{n \in S \setminus \{0\}} \lambda_n^{-1} P_n g = \sum_{n \in S \setminus \{0\}} P_n g$.

As g ∈ ran A and ran A ⊥ ker A, this implies P₀g = 0, meaning that $g = \sum_{n \in S \setminus \{0\}} P_n g$, and indeed f• is a solution to the inverse linear problem as claimed.
As f• ⊥ ker A, any solution f to Af = g must be of the form f = f• + ψ with ψ ∈ ker A; as f• ⊥ ψ we have ‖f‖²_H = ‖f•‖²_H + ‖ψ‖²_H, whence f• is the minimal norm solution. From Proposition 3.1, f• ∈ K(A, g), and this Krylov solution is unique in K(A, g) owing to [8, Prop. 3.9].

Remark 3.5. We recall that for any normal A ∈ B(H) with Krylov solvable inverse linear problem Af = g, g ∈ ran A, [8, Prop. 3.9] states that there exists exactly one Krylov solution.

Remark 3.6. The argument for the norm minimality of the Krylov solution (if it exists) from the proof of Proposition 3.4 can be extended beyond the class of compact normal operators to the whole class of bounded normal operators. Indeed, let f• ∈ K(A, g) be the Krylov solution (when it exists) to the inverse linear problem. As K(A, g) ⊂ ran A ⊥ ker A it follows that f• ⊥ ker A, and any solution f to Af = g has the form f = f• + ψ, where ψ ∈ ker A. Therefore, as f• ⊥ ψ we have ‖f‖²_H = ‖f•‖²_H + ‖ψ‖²_H, from which f• is a minimal norm solution.

Remark 3.7. The above considerations in Remarks 3.5 and 3.6 also hold for Krylov solutions (if they exist) to the inverse linear problem arising from any operator A ∈ B(H), provided that ker A ⊂ ker A*.
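The explicit solution formula (3.7) and its minimal-norm property can be checked in a finite-dimensional toy model (a sketch, assuming a diagonal normal matrix with nontrivial kernel stands in for A; all names below are illustrative):

```python
import numpy as np

lams = np.array([2.0 + 0j, 1j, 0.5 - 0.5j])
diag = np.concatenate([np.repeat(lams, 2), [0, 0]])   # 0 eigenvalue: ker A nontrivial
A = np.diag(diag)

f = np.arange(1, 9, dtype=complex)   # a generic solution, with a kernel component
g = A @ f                            # datum in ran A by construction

# f_bullet := sum over nonzero eigenvalues of (1/lambda_n) P_n g, as in (3.7)
safe = np.where(diag != 0, diag, 1)            # avoid dividing by the 0 eigenvalue
f_bullet = np.where(diag != 0, g / safe, 0)

assert np.allclose(A @ f_bullet, g)                    # f_bullet solves Af = g
assert np.linalg.norm(f_bullet) <= np.linalg.norm(f)   # minimal norm
assert np.allclose(f_bullet[diag == 0], 0)             # f_bullet is orthogonal to ker A
```

The kernel component of f is simply discarded by f•, which is exactly why f• is the minimal norm solution.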
Our next proposition analyses an important structural property of the Krylov subspace, informally known as Krylov reducibility, that is intimately linked to the Krylov solvability properties of inverse linear problems (see [8, Prop. 3.3]). First we recall the appropriate definition for bounded operators. (For the unbounded setting one may refer to [4].)

Definition 3.8 ([8]). Let A ∈ B(H) and g ∈ H. If both K(A, g) and K(A, g)^⊥ are invariant under A, i.e.,

(3.8) A K(A, g) ⊂ K(A, g), A K(A, g)^⊥ ⊂ K(A, g)^⊥,

then we say that A is K(A, g)-reduced. If A is K(A, g)-reduced then we also have A* K(A, g) ⊂ K(A, g) [8, Lem. 2.2].
Proposition 3.9.Let A ∈ B(H) be a compact normal operator, and let g ∈ H.
Then A is K (A, g)-reduced.
Proof. The action of A* is given by

$A^* = \sum_{n \in S \setminus \{0\}} \overline{\lambda_n}\, P_n$.

Therefore

$A^* g = \sum_{n \in S \setminus \{0\}} \overline{\lambda_n}\, P_n g \in \overline{\operatorname{span}}\{P_n g \mid n \in S\}$,

where by Proposition 3.1 it follows that A*g ∈ K(A, g). We know that for bounded normal operators, A is K(A, g)-reduced if and only if A*g ∈ K(A, g) [8, Prop. 2.4]. This completes the proof.
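The reducibility criterion A*g ∈ K(A, g) can also be observed in a finite-dimensional toy model (a sketch, with an illustrative diagonal normal matrix standing in for A):

```python
import numpy as np

lams = np.array([1 + 2j, -1j, 3.0 + 0j, 0.0 + 0j])
A = np.diag(np.repeat(lams, 2))          # normal matrix on C^8
g = np.ones(8, dtype=complex)

K = np.column_stack([np.linalg.matrix_power(A, k) @ g for k in range(8)])
r = np.linalg.matrix_rank(K)

# A*g lies in the Krylov space: appending it does not raise the rank, which is
# the finite-dimensional shadow of the criterion in [8, Prop. 2.4]
assert np.linalg.matrix_rank(np.column_stack([K, A.conj().T @ g])) == r
```

For a non-normal matrix this test can fail, which is consistent with Krylov reducibility being a genuinely normal-operator phenomenon here.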