Compressions of Self-Adjoint Extensions of a Symmetric Operator and M.G. Krein’s Resolvent Formula

Abstract. Let S be a symmetric operator with finite and equal defect numbers in the Hilbert space H. We study the compressions P_H Ã|_H of the self-adjoint extensions Ã of S in some Hilbert space H̃ ⊃ H. These compressions are symmetric extensions of S in H. We characterize properties of these compressions through the corresponding parameter of Ã in M.G. Krein's resolvent formula. If dim(H̃ ⊖ H) is finite, according to Stenger's lemma the compression of Ã is self-adjoint. In this case we express the corresponding parameter for the compression of Ã in Krein's formula through the parameter of the self-adjoint extension Ã.

Mathematics Subject Classification. 47B25, 47A20, 47A56.


intersection (dom Ã) ∩ H. If
Ã is self-adjoint and the extending space H̃ ⊖ H is finite dimensional, then Stenger's lemma [24] yields that the compression C_H(Ã) is also self-adjoint; if H̃ ⊖ H is infinite dimensional this is in general no longer true.
Compressions of linear operators or relations were recently studied in the papers [2-5,23], and [11]. In the latter we gave a description, in terms of certain parameters, of the compressions of the self-adjoint extensions Ã of a symmetric operator S with finite and equal defect numbers d > 0 under the assumption that dim(H̃ ⊖ H) < ∞. According to Stenger's lemma these compressions are self-adjoint extensions of S, and hence their resolvents can also be described by Krein's resolvent formula. Such a description was given at the end of [11].
In the present paper Krein's resolvent formula is the starting point. We consider the self-adjoint extensions Ã_T of a symmetric operator S with exit in a space H̃ such that dim(H̃ ⊖ H) is not necessarily finite. Here T is the parameter in Krein's formula (see (1.12)) which characterizes the self-adjoint extension Ã_T: it is a d × d relation valued Nevanlinna function. The compressions C_H(Ã_T) are in general symmetric and closed, but not self-adjoint, extensions of S, acting in the space H. We describe these compressions and, in particular, we describe those parameters T for which the compression C_H(Ã_T) coincides with S or with the self-adjoint extension A_0 of S which acts as basic operator in Krein's formula (1.12). If dim(H̃ ⊖ H) < ∞ and hence the compression is self-adjoint, we show that the corresponding parameter for C_H(Ã_T) in Krein's formula is T(∞) = lim_{z→∞} T(z), where the limit is understood in the sense of linear relations, see (2.5).
A short synopsis is as follows. In the next two subsections of this Introduction we recall some facts about matrix or relation valued Nevanlinna functions, and about Krein's resolvent formula. In Sect. 2 we prove some statements connected with Krein's formula which might be of general interest. In Subsect. 2.1 we derive a relation that connects the parameters for two Krein formulas with basic extensions A_0 and A_1; in Subsect. 2.2 we prove a representation of the resolvent of the self-adjoint extension Ã_T using the operator or relation representation of the parameter T (comp. [9]). It leads to a formula for the extension Ã_T which is the starting point for our study of the compressions of Ã_T in Sect. 3. There, in Theorem 3.2, we give sufficient conditions on the parameters T which lead to extensions Ã_T such that S ⊂ C_H(Ã_T) ⊂ A_0. Conditions under which in this relation the inclusions ⊂ become equalities are given in Subsect. 3.2. In Subsect. 3.3 we show that any symmetric extension S′ of S in H is the compression C_H(Ã) for some self-adjoint extension Ã of S. Clearly, because of Stenger's lemma, if S′ is not self-adjoint the extending space H̃ ⊖ H has to be infinite dimensional.
In Sect. 3 the parameter T is assumed to be a matrix function. The main results there can easily be adapted to the case where T is a relation valued function. This is indicated in Remark 4.2.
In Sect. 4 we consider extensions Ã_T with finite-dimensional exit space and hence with self-adjoint compressions. Such a self-adjoint compression corresponds to a constant parameter in Krein's formula. As one of the main results of this paper we show in Theorem 4.6 that this parameter is the limit T(∞).
Finally, in an Appendix we show that the dilation theory for dissipative operators as developed in [19,20] and [21] leads in a natural way to self-adjoint extensions for which the compression is the original symmetric operator S.

This paper is dedicated to our colleague and dear friend Rien Kaashoek, in appreciation of his leading role in operator theory and also to thank him for his personal support in establishing the contact of the second author with the colleagues in Groningen.

1.2. Matrix and Relation Valued Nevanlinna Functions
In this subsection we collect some facts about matrix and relation valued Nevanlinna functions. Let d ∈ N. The d × d matrix valued function N, defined on C \ R, is a Nevanlinna function if it has one of the following equivalent properties:

(a) N is holomorphic and satisfies N(z)* = N(z*) and Im N(z)/Im z ≥ 0, z ∈ C \ R;

(b) N admits the integral representation

    N(z) = A + zB + ∫_R ( 1/(t − z) − t/(1 + t²) ) dΣ(t),   (1.1)

where A and B are symmetric d × d matrices, B ≥ 0, and Σ is a symmetric non-decreasing d × d matrix function on R such that ∫_R dΣ(t)/(1 + t²) < ∞.

The properties (a) and (b) are also equivalent to the following:

(c) N admits an operator or relation representation, that is, there exist a Hilbert space H_N, a self-adjoint linear relation B_N in H_N, and, after fixing a point z_0 ∈ C \ R, a linear mapping δ : C^d → H_N such that

    N(z) = N(z_0)* + (z − z_0*) δ* (I + (z − z_0)(B_N − z)^{-1}) δ,   z ∈ C \ R.   (1.2)

We denote by R_N(z) := (B_N − z)^{-1} the resolvent of B_N, and set

    δ_z := (I + (z − z_0) R_N(z)) δ;   (1.3)

then for arbitrary z, w ∈ C \ R the relation (1.2) becomes

    N(z) = N(w)* + (z − w*) δ_w* δ_z.

The operator representation (1.2) will always be chosen minimal, which means that

    H_N = clsp { δ_z c : z ∈ C \ R, c ∈ C^d }.   (1.4)

The triplet (H_N, B_N, δ) is sometimes called a model of the function N. The above relations extend to points z ∈ R into which N can be continued analytically or, equivalently, which belong to ρ(B_N).
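As a concrete check of the representation (1.2), consider the scalar case d = 1 with N(z) = Σ_k w_k/(t_k − z): one minimal model takes H_N = C^m, B_N = diag(t_1, …, t_m), and δ with components √w_k/(t_k − z_0*). The following numerical sketch (the nodes t_k, weights w_k, and the point z_0 are hypothetical choices, not data from the paper) verifies (1.2) at a few sample points:

```python
import numpy as np

# Scalar (d = 1) model for N(z) = sum_k w_k / (t_k - z):
# H_N = C^m, B_N = diag(t_k), delta_k = sqrt(w_k) / (t_k - conj(z0)).
t = np.array([-1.0, 0.0, 2.0])        # spectrum of B_N (hypothetical)
w = np.array([0.5, 1.0, 0.25])        # positive weights (hypothetical)
z0 = 1j                               # fixed point z_0 in C \ R

def N(z):
    return np.sum(w / (t - z))

B = np.diag(t)
delta = (np.sqrt(w) / (t - np.conj(z0))).reshape(-1, 1)

def model_N(z):
    # right-hand side of (1.2):
    # N(z0)* + (z - z0*) delta* (I + (z - z0) R_N(z)) delta
    R = np.linalg.inv(B - z * np.eye(3))
    val = np.conj(N(z0)) + (z - np.conj(z0)) * \
        (delta.conj().T @ (np.eye(3) + (z - z0) * R) @ delta)
    return val.item()

for z in [2j, 1.5 + 0.5j, -3.0 + 1j]:
    assert np.isclose(N(z), model_N(z))
print("operator representation (1.2) verified at sampled points")
```

For mutually distinct nodes t_k the vectors δ_z also span C^3, so this model is minimal in the sense of (1.4).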
If N is a d × d relation valued Nevanlinna function, then its values admit the representation

    N(z) = N_op(z) ⊕ ({0} × L_∞)   (1.5)

with respect to a decomposition C^d = L_op ⊕ L_∞; here N_op is a matrix valued Nevanlinna function in L_op, the operator part of N, and L_∞ is the z-independent multi-valued part of N(z). With the orthogonal projection P onto L_op, the relation (1.5) can also be written as

    N(z) = { {Px, N_op(z)Px + (I − P)x} : x ∈ C^d }.   (1.6)

Clearly, N is a matrix valued function if and only if P = I. The first summand N_op(z)Px on the right-hand side of (1.6) can be decomposed further. An intrinsic definition of d × d relation valued Nevanlinna functions, with the above decompositions as consequences, was given in [22].
If the d × d matrix valued Nevanlinna function N is rational its representation (1.1) becomes

    N(z) = Σ_{j=1}^{ℓ} A_j/(α_j − z) + A + zB,   (1.7)

with mutually distinct points α_j ∈ R and nonzero d × d matrices A_j ≥ 0, j = 1, 2, . . . , ℓ, a symmetric d × d matrix A, and a d × d matrix B ≥ 0. In the following, the asymptotic behavior of the d × d matrix Nevanlinna function N with the representation (1.1) or (1.7) plays an essential role. We mention the following relations:

    B = lim_{y↑∞} N(iy)/(iy),   (1.8)

and, if z ∈ C \ R,

    Im N(z)/Im z = B + ∫_R dΣ(t)/|t − z|².   (1.9)

In the sequel, using the language of linear relations, we often make no distinction between operators and their graphs (as, for example, in [7,10] and [9]).
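The defining properties and the asymptotics can be checked numerically for a rational function of the form (1.7); in the following sketch the residues A_j, the matrices A, B, and the poles α_j are hypothetical data:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2
alphas = np.array([-1.0, 0.5, 2.0])     # mutually distinct real poles
A_js = []
for _ in alphas:
    M = rng.standard_normal((d, d))
    A_js.append(M @ M.T)                # nonzero PSD residues A_j
A = np.diag([0.3, -0.7])                # symmetric d x d matrix
B = np.array([[1.0, 0.0], [0.0, 0.0]])  # PSD linear term

def N(z):
    # N(z) = sum_j A_j/(alpha_j - z) + A + z B, cf. (1.7)
    val = A + z * B
    for a_j, A_j in zip(alphas, A_js):
        val = val + A_j / (a_j - z)
    return val

def is_psd(M, tol=1e-10):
    H = (M + M.conj().T) / 2
    return np.min(np.linalg.eigvalsh(H)) >= -tol

for z in [1j, 2 + 3j, -1.7 - 0.4j, 100j]:
    imag_part = (N(z) - N(z).conj().T) / 2j
    assert is_psd(imag_part / z.imag)                 # Im N(z)/Im z >= 0
    assert np.allclose(N(z).conj().T, N(np.conj(z)))  # N(z)* = N(z*)

# asymptotics: B is recovered as lim N(iy)/(iy)
assert np.allclose(N(1e8j) / 1e8j, B, atol=1e-6)
print("Nevanlinna properties verified at sampled points")
```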

1.3. Krein's Resolvent Formula
Here we recall M.G. Krein's resolvent formula. In the following, S denotes a densely defined, closed symmetric operator in a Hilbert space H with finite and equal defect numbers d > 0. We choose a canonical self-adjoint extension A_0 of S (canonical means that A_0 acts in H), a point z_0 ∈ C \ R, and a bijective mapping γ : C^d → ker(S* − z_0). With γ and the canonical self-adjoint extension A_0 we define a so-called γ-field

    γ_z := (I + (z − z_0)(A_0 − z)^{-1}) γ,   z ∈ C \ R.   (1.10)

Evidently, γ_z : C^d → ker(S* − z) is a bijection, and γ_{z_0} = γ. Note that for each z, w ∈ C \ R

    γ_w = (I + (w − z)(A_0 − w)^{-1}) γ_z.

With the γ-field γ_z there is defined a corresponding Q-function Q_0 by the relation

    Q_0(z) − Q_0(w)* = (z − w*) γ_w* γ_z,   z, w ∈ C \ R,   (1.11)

see [18]. It is a d × d matrix valued function, which is determined by (1.11) up to a constant symmetric d × d matrix summand. Evidently,

    Im Q_0(z)/Im z = γ_z* γ_z > 0.

If γ′_z : C^d → ker(S* − z) is another γ-field with corresponding Q-function Q′_0(z), then there are an invertible d × d matrix C and a symmetric d × d matrix D such that

    γ′_z = γ_z C  and  Q′_0(z) = C* Q_0(z) C + D,   z ∈ C \ R.

The Q-function plays an essential role in M.G. Krein's resolvent formula. If Ã is any self-adjoint extension of S, acting in H or in some larger Hilbert space H̃, the compressed resolvent P_H(Ã − z)^{-1}|_H of Ã is called a generalized resolvent of S, corresponding to the extension Ã. The set of all generalized resolvents of S can be described as follows (see [17]):

    P_H(Ã − z)^{-1}|_H = (A_0 − z)^{-1} − γ_z (Q_0(z) + T(z))^{-1} γ_{z*}*,   z ∈ C \ R,   (1.12)

where the parameter T runs through the d × d matrix or relation valued Nevanlinna functions, including the z-independent self-adjoint relations in C^d. We call (1.12) Krein's resolvent formula. It depends on the chosen canonical self-adjoint extension A_0 of S, which determines the γ-field and the Q-function. To express this dependence on A_0 we sometimes call (1.12) Krein's formula based on A_0. The operator Ã on the left-hand side of (1.12) corresponding to T is denoted by Ã_T. If T is relation valued, the inverse on the right-hand side of (1.12) reads as

    (Q_0(z) + T(z))^{-1} = (P Q_0(z) P + T_op(z))^{-1} P,

where the operator part T_op(z) of T(z) and the projection P are as in (1.6), and the inverse on the right-hand side is taken in ran P; see also [17, Theorem 5.1] and [22, (1.8)].
In Krein's resolvent formula, the parameter T(z) is a z-independent self-adjoint relation in C^d if and only if Ã_T is a canonical self-adjoint extension of S. If T is a rational d × d relation valued function then the extending space H̃ ⊖ H is finite-dimensional, its dimension being the total multiplicity of the poles (including ∞) of T_op. The parameter T is a matrix valued function if and only if Ã_T ∩ A_0 = S (comp. Proposition 3.4 below).
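For a rational parameter as in (1.7), the multiplicity of a pole α_j of T_op equals the rank of the residue A_j, and the pole at ∞ has multiplicity rank B, so the dimension of the extending space can be counted directly. A small sketch with hypothetical matrices:

```python
import numpy as np

def total_pole_multiplicity(A_js, B, tol=1e-12):
    """Total multiplicity of the poles (including infinity) of the
    rational Nevanlinna function sum_j A_j/(alpha_j - z) + A + z B:
    the rank of each residue A_j plus the rank of the linear term B."""
    return sum(np.linalg.matrix_rank(A_j, tol=tol) for A_j in A_js) \
        + np.linalg.matrix_rank(B, tol=tol)

# Example: two simple poles with rank-1 residues and a rank-1 linear
# term; for such a parameter T the extending space is 3-dimensional.
A1 = np.array([[1.0, 0.0], [0.0, 0.0]])
A2 = np.array([[0.5, 0.5], [0.5, 0.5]])
B = np.array([[0.0, 0.0], [0.0, 2.0]])
print(total_pole_multiplicity([A1, A2], B))
```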

2.1. The Parameters in Two Krein Formulas
In this subsection we compare the parameters in two Krein formulas based on two different canonical self-adjoint extensions. Let S be a densely defined, closed symmetric operator in a Hilbert space H with finite and equal defect numbers d > 0. Let A_0 and A_1 be two canonical self-adjoint extensions of S, denote by Q_0(z) and Q_1(z) corresponding Q-functions and by γ_{0,z} and γ_{1,z} corresponding γ-fields. The latter means that for j = 0, 1 and z, w ∈ C \ R

    γ_{j,w} = (I + (w − z)(A_j − w)^{-1}) γ_{j,z},   Q_j(z) − Q_j(w)* = (z − w*) γ_{j,w}* γ_{j,z}.

Then, by Krein's formulas based on A_0 and A_1, respectively,

    (A_1 − z)^{-1} = (A_0 − z)^{-1} − γ_{0,z}(Q_0(z) + T_0)^{-1} γ_{0,z*}*,   z ∈ C \ R,   (2.1)

where T_0 is a z-independent self-adjoint relation in C^d, the parameter of A_1 in Krein's formula based on A_0.
Let Ã be a self-adjoint extension of S in a possibly larger Hilbert space H̃ ⊃ H. Then, by Krein's formula, there exist d × d matrix or relation valued Nevanlinna functions S_0, S_1 such that

    P_H(Ã − z)^{-1}|_H = (A_0 − z)^{-1} − γ_{0,z}(Q_0(z) + S_0(z))^{-1}γ_{0,z*}* = (A_1 − z)^{-1} − γ_{1,z}(Q_1(z) + S_1(z))^{-1}γ_{1,z*}*,   z ∈ C \ R.   (2.2)

In the present subsection we prove a formula connecting S_0(z) and S_1(z).
To this end we 'normalize' the Q-functions and the γ-fields as follows. We fix z_0 ∈ C \ R and a bijection γ_{z_0} : C^d → ker(S* − z_0), and then choose γ_{j,z} and Q_j(z), j = 0, 1, such that

    γ_{0,z_0} = γ_{1,z_0} = γ_{z_0},   Q_0(z_0) = Q_1(z_0) = ½ (z_0 − z_0*) γ_{z_0}* γ_{z_0} =: Q.   (2.3)

The latter normalization can be made since a Q-function is determined up to a constant symmetric d × d matrix summand. Then Q is an invertible skew-Hermitian matrix: Q* = −Q.

Theorem 2.1. With the normalization (2.3) and under the assumption that S_0(z) is a matrix function we have

    S_1(z) = S_0(z) + (S_0(z) − Q)(T_0 − S_0(z))^{-1}(S_0(z) + Q).   (2.4)

Recall that if F and G are linear relations in C^d then

    F + G = { {x, f + g} : {x, f} ∈ F, {x, g} ∈ G }.

On the right-hand side of (2.4), T_0 and (T_0 − S_0(z))^{-1} can be relations; then the sums and products are understood in the sense of linear relations, and so is S_1(z). In this case the multi-valued part of S_1(z) is independent of z: here we use that the subspace ker Im S_0(z) of C^d is independent of z and that S_0(z) restricted to this subspace is identically equal to a constant matrix C, say (see [13, Lemma 5.3] and [6, Step 1 in the proof of Theorem 3]); for x in this subspace it follows that S_0(z)x = Cx.

In the proof of Theorem 2.1 we use properties of the convergence of linear relations. Let T and T_n, n ∈ N, be linear relations in C^d. We say that T_n converges to T as n → ∞, in symbols T_n → T, if every {u, w} ∈ T is the limit of a sequence of elements {u_n, w_n} ∈ T_n and if, conversely, the limit of every converging sequence of elements {u_n, w_n} ∈ T_n belongs to T.   (2.5)

Lemma 2.2. Let T, T_n, n ∈ N, be linear relations in C^d with T_n → T, and let A be an invertible d × d matrix. Then:

(ii) T_n A → T A and A T_n → A T.

(iv) If in addition T_n and T are matrices and the T_n's are uniformly bounded, then T_n → T in the matrix norm.
Proof. We only prove the first statement in (ii) and (iv). Let L be the limit of T n A and let {u, w} ∈ T A. Then {Au, w} ∈ T and hence there is a sequence {v n , w n } ∈ T n converging to {Au, w}. Set u n = A −1 v n . Then {u n , w n } ∈ T n A and this sequence converges to {u, w}. Hence {u, w} ∈ L and T A ⊂ L.
To prove (iv), let {x, Tx} ∈ T. Then there are {u_n, v_n} ∈ T_n converging to {x, Tx}. Hence, if ‖·‖ denotes the norm in C^d,

    ‖T_n x − T x‖ ≤ ‖T_n (x − u_n)‖ + ‖v_n − T x‖ ≤ sup_n ‖T_n‖ ‖x − u_n‖ + ‖v_n − T x‖ → 0.

Proof of Theorem 2.1. The proof is split into two parts. In the first part we additionally assume that T_0 is a matrix, in the second part T_0 is a relation.
(i) Assume T_0 is a matrix; then all the inverses appearing below exist as matrices. Via Krein's formula based on A_1 the generalized resolvent P_H(Ã − z)^{-1}|_H determines and is determined by the parameter S_1(z). Thus, if we assume that S_1(z) is given by (2.4), then (2.2) implies that the theorem is proved by showing the equality (2.6). We set D := T_0 + Q. Now assume that S_1(z) is given as in the theorem. Then, by (2.8), it suffices to show that the two defining equalities in the set on the right-hand side imply that

    γ_{1,z} h = γ_{0,z} Δ_0(z) γ_{0,z*}* u.   (2.9)
Then (2.6) and hence the claim in the theorem are proved. From ⋯, we apply T_0 − S_0(z) to both sides of this equality and use the relation ⋯.

(ii) Now we drop the assumption that T_0 is a matrix. Then it is a relation

    T_0 = { {P_0 x, T_{0,op}P_0 x + (I − P_0)x} : x ∈ C^d },

where P_0 is an orthogonal projection in C^d and T_{0,op} is the operator part of T_0. Let (T_n)_{n∈N} be a sequence of matrices such that T_n → T_0 if n → ∞. For example, relative to the decomposition C^d = ker P_0 ⊕ ran P_0 we choose T_n = diag(n I, T_{0,op}). Let A_{1,n} be the canonical self-adjoint extension of S which corresponds to the parameter T_n in Krein's formula based on A_0:

    (A_{1,n} − z)^{-1} = (A_0 − z)^{-1} − γ_{0,z}(Q_0(z) + T_n)^{-1}γ_{0,z*}*.

In what follows we fix z ∈ C \ R. Then there exists a c > 0 such that Im Q_0(z)/Im z = γ_{0,z}*γ_{0,z} > c and hence the matrices (Q_0(z) + T_n)^{-1} are uniformly bounded:

    ‖(Q_0(z) + T_n)^{-1}‖ ≤ 1/(c |Im z|).

From Lemma 2.2 it follows that for n → ∞ they converge to the block matrix

    [ 0  0 ; 0  (P_0 Q_0(z)P_0 + T_{0,op})^{-1} ]   (2.10)

relative to the decomposition C^d = ker P_0 ⊕ ran P_0. The equality (2.10) implies

    (A_{1,n} − z)^{-1} → (A_1 − z)^{-1},   n → ∞,

where, by Krein's formula, A_1 is a canonical self-adjoint extension of S. Denote by γ_{1,n;z} and Q_{1,n}(z) the γ-field and Q-function associated with A_{1,n} and S, normalized so that, in accordance with (2.3),

    γ_{1,n;z_0} = γ_{z_0},   Q_{1,n}(z_0) = Q.   (2.11)

Then there exist parameters S_{1,n}(z) of the form

    S_{1,n}(z) = { {P_n x, P_n S_{1,n;op}(z)P_n x + (I − P_n)x} : x ∈ C^d },

where P_n is an orthogonal projection in C^d and S_{1,n;op}(z) is the operator part of S_{1,n}(z), such that

    P_H(Ã − z)^{-1}|_H = (A_{1,n} − z)^{-1} − γ_{1,n;z}(Q_{1,n}(z) + S_{1,n}(z))^{-1}γ_{1,n;z*}*.   (2.12)

By part (i) they are given by

    S_{1,n}(z) = S_0(z) + (S_0(z) − Q)(T_n − S_0(z))^{-1}(S_0(z) + Q).

It remains to show that S_1(z) is the parameter associated with Ã in Krein's formula based on A_1. This follows from the equality (2.12) by letting n → ∞. Indeed, from (2.11) and the equalities

    γ_{1,n;z} = (I + (z − z_0)(A_{1,n} − z)^{-1})γ_{z_0}  and  Q_{1,n}(z) = Q* + (z − z_0*)γ_{z_0}* γ_{1,n;z}

it follows that γ_{1,n;z} → γ_{1,z} and Q_{1,n}(z) → Q_1(z). The latter convergence implies (as in the beginning of the proof of part (ii)) that the matrices (Q_{1,n}(z) + S_{1,n}(z))^{-1} are uniformly bounded.
Hence, by Lemma 2.2,

    (Q_{1,n}(z) + S_{1,n}(z))^{-1} = [ 0  0 ; 0  (P_n Q_{1,n}(z)P_n + S_{1,n;op}(z))^{-1} ] → (Q_1(z) + S_1(z))^{-1}.  □
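The convergence of the matrices (Q_0(z) + T_n)^{-1} used in part (ii) can be illustrated numerically. In the following sketch (the matrix Q standing in for Q_0(z) and the scalar t standing in for the operator part T_{0,op} are hypothetical), T_n = diag(n, t) relative to C² = ker P_0 ⊕ ran P_0, and the inverses converge to the block matrix with (P_0 Q P_0 + t)^{-1} in the lower right corner:

```python
import numpy as np

Q = np.array([[1.0 + 2.0j, 0.3 + 0.1j],
              [0.3 + 0.1j, 0.8 + 1.5j]])   # stand-in for Q_0(z); Im Q > 0
t = 0.7                                     # operator part of T_0 on ran P_0

# limit block matrix: zero on ker P_0, (P_0 Q P_0 + t)^{-1} on ran P_0
limit = np.zeros((2, 2), dtype=complex)
limit[1, 1] = 1.0 / (Q[1, 1] + t)

for n in [1e2, 1e4, 1e6]:
    T_n = np.diag([n, t])                   # T_n -> T_0 in the sense of (2.5)
    err = np.abs(np.linalg.inv(Q + T_n) - limit).max()
    print(n, err)

assert np.abs(np.linalg.inv(Q + np.diag([1e8, t])) - limit).max() < 1e-6
```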

Remark 2.3. If in Theorem 2.1 T_0 is a matrix, then (2.4) can be rewritten in a form into which for S_0(z) we can insert the elements of a sequence (S_{0,n}(z)) of d × d matrix Nevanlinna functions which tend to the relation {0} × C^d as n → ∞. Then the corresponding relations S_{1,n}(z) tend to −T_0. According to (2.2), to S_{0,n}(z) there correspond generalized resolvents P_H(Ã_n − z)^{-1}|_H of S which converge for n → ∞ strongly to (A_0 − z)^{-1}, and from the second equality in (2.2) we obtain

    (A_0 − z)^{-1} = (A_1 − z)^{-1} − γ_{1,z}(Q_1(z) − T_0)^{-1}γ_{1,z*}*.

This relation should be compared with (2.1):

    (A_1 − z)^{-1} = (A_0 − z)^{-1} − γ_{0,z}(Q_0(z) + T_0)^{-1}γ_{0,z*}*.

Hence, for two canonical self-adjoint extensions A_0 and A_1 of S the parameters in Krein's formula for A_0, based on A_1, and in Krein's formula for A_1, based on A_0, differ just in their sign.

2.2. The Resolvent of the Extension Ã_T
In this subsection we assume that the parameter T in Krein's formula (1.12) is a d × d matrix Nevanlinna function with minimal operator or relation representation as in (1.2): B_T and R_T(z), z ∈ ρ(B_T), denote the representing relation for T in H_T and its resolvent, respectively, and we define δ_z, z ∈ C \ R, as in (1.3). Since T is a matrix function, for the self-adjoint extension Ã_T corresponding to T we have Ã_T ∩ A_0 = S.
The following theorem was proved in [9, (1.10)] by means of boundary triplets. For the convenience of the reader we give a proof, using the minimal model for the function T. Clearly, the set on the right-hand side of (2.14) is independent of z ∈ C \ R: if we replace z by w, then, by the resolvent identity, the set remains the same. The entry in the left upper corner of the matrix in the middle term of (2.13) is the generalized resolvent of S generated by Ã = Ã_T.

Proof of Theorem 2.4. We first observe that the block operator matrix R̃(z) in (2.13) satisfies R̃(z)* = R̃(z*). Using the equalities for the γ-field γ_z and the mapping δ_z we find that R̃(z) also satisfies the resolvent identity

    R̃(z) − R̃(w) = (z − w) R̃(z) R̃(w).

Hence it is the resolvent of the self-adjoint relation Ã in (2.14).
To prove that Ã is an operator consider f ⊕ g ∈ Ã(0). Then, since S ⊂ Ã and Ã is self-adjoint, f ⊕ g is orthogonal to dom Ã ⊃ dom S. It follows that f = 0, because S is densely defined. Thus, for all z ∈ C \ R,

    (Ã − z)^{-1}(0 ⊕ g) = 0.

The top component on the right-hand side of (2.13) applied to 0 ⊕ g being zero and the relation

    γ_z* γ_z = Im Q_0(z)/Im z > 0   (2.15)

imply that (Q_0(z) + T(z))^{-1} δ_{z*}* g = 0 for all z ∈ C \ R. The relation (2.15) also implies that the matrix Q_0(z) + T(z) is invertible, hence δ_{z*}* g = 0. Thus (g, δ_{z*} x)_{H_T} = 0 for all z ∈ C \ R and x ∈ C^d. From the minimality of the operator B_T in the representation model (1.2) for T it follows that also g = 0. Hence Ã(0) = {0}, that is, Ã is an operator.
It remains to show that the extension Ã is minimal, that is,

    H ⊕ H_T = clsp { (I + (z − w)(Ã − z)^{-1}) f : f ∈ H, z, w ∈ C \ R }.

To this end, since H is contained in the set on the right-hand side (choose z = w), it suffices to prove the implication

    g ∈ H_T,  (g, (Ã − z)^{-1} f) = 0 for all f ∈ H and z ∈ C \ R  ⟹  g = 0.

This implication follows from the same arguments used above to show that Ã is an operator.

Let T be a d × d matrix Nevanlinna function with integral representation (1.1) and operator representation (1.2). In the next lemma the multi-valued part B_T(0) of the self-adjoint relation B_T in (1.2) is related to the matrix B in (1.1) (comp. [1, Theorem 3]). We denote by P_{B_T(0)} the orthogonal projection in H_T onto B_T(0).

Lemma 2.5. Let T be a d × d matrix Nevanlinna function with integral and operator representations (1.1) and (1.2). Then, in particular, δ*|_{B_T(0)} is a bijection onto ran B: if f ∈ B_T(0) and δ* f = 0, then f is orthogonal to all elements δ_z c, and the minimality of the operator representation of T (see (1.4)) implies that f = 0.
The claim (iii) follows from (ii), and (iv) follows from (ii) and (iii). □

3. Compressions of Self-Adjoint Extensions: S ⊂ C_H(Ã_T) ⊂ A_0

In this subsection we formulate conditions on the parameter T under which

    S ⊂ C_H(Ã_T) ⊂ A_0,   (3.1)

where A_0 is the basic canonical self-adjoint extension of S in Krein's formula (1.12). In Subsect. 3.2 we are interested in the extreme cases C_H(Ã_T) = S and C_H(Ã_T) = A_0. Writing formula (2.14) in full detail we get

    Ã_T = { ⋯ : f ∈ H, g ∈ H_T }.

Hence the restriction Ã_T|_H becomes ⋯, and we obtain for the compression C_H(Ã_T) the formula (3.3).

Proof. The 'if' part of the statement follows from (3.3) and from the equality ⋯. As to the 'only if' part, assume C_H(Ã_T) ⊂ A_0 and consider f ∈ H and g ∈ H_T satisfying ⋯.
The following theorem gives a sufficient condition on the matrix function T for the implication (3.4) to hold.

(3.7) and hence ⋯

defines a bijective correspondence between all subspaces L of C^d and all symmetric extensions S′ of S in H such that S ⊂ S′ ⊂ A_0; under the assumption (3.6) the inclusion is an equality.

The set on the right-hand side of (3.12) is independent of z. This follows from the inclusion S ⊂ S_{C^d} and the following equivalent statements for f ∈ H: ⋯. Thus the set of all S′ with S ⊂ S′ ⊂ A_0 coincides with the set of all S_L, where L runs through the set of subspaces of C^d.

As to the bijective correspondence: with

    T(z) = { {Px, T_op(z)Px + (I − P)x} : x ∈ C^d },

where P is a projection in C^d and T_op is the operator part of T acting in ran P, Krein's formula for Ã_T acting in the Hilbert space H ⊕ H_T becomes ⋯. Then Pγ_{z*}* f = 0 and ⋯. Since Ã_T is self-adjoint, we find (z − z*)(g_z, g_z)_{H_T} = 0, that is, g_z = 0. Thus ⋯. To prove the reverse inclusion assume {R_0(z)f, f + zR_0(z)f} ∈ Ã_T ∩ A_0. Then, by Krein's formula, ⋯ for some f, h ∈ H and g ∈ H_T with ⋯, and this readily implies that the map P_{B_T(0)} δ : C^d → B_T(0) is injective. The relation (3.17) now follows from Lemma 2.5.

3.3. Every Intermediate Symmetric Extension Is a Compression
The following theorem implies that every symmetric operator between S and A_0 is the compression C_H(Ã) of some self-adjoint extension Ã of S.

Proof. For a given extension S′ we choose the subspace L such that S′ = S_L as in (3.12). Consider a d × d matrix Nevanlinna function T with operator representation (1.2). The defining relations g ∈ B_T(0) and ⋯ imply ⋯. Hence if we choose T such that it satisfies (3.6) and ⋯; here the elements are of the form Σ_{j=1}^∞ α_j e_j with α_j ∈ C and Σ_{j=1}^∞ |α_j|² < ∞.
We now show that T, that is, m and the basis x of C^d, can be chosen such that (3.21) is satisfied: L = δ* B_T(0). We have ⋯. Denote by y = (y_1 ⋯ y_r) a basis for L, r := dim L. We choose m = r and claim that there is a basis x for C^d such that ⋯. The claim and (3.22) imply L = δ_{z*}* B_T(0) and hence (3.21). To prove the claim, note that the d × d Vandermonde matrix ⋯ is invertible. Extend the basis y of L by y_0 to a basis (y  y_0) of C^d. Then ⋯ is also a basis of C^d. Let x be the basis of C^d dual to it. If we multiply both sides of this equality from the right by the d × r matrix (I_r  0)^t we obtain ⟨y, x⟩ = V_r, and then (3.23) follows and the claim is proved.
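The proof of the claim rests on two standard linear-algebra facts: a Vandermonde matrix at mutually distinct nodes is invertible, and every basis of C^d has a dual basis. Both can be checked directly in a small sketch (the nodes and the basis below are hypothetical data):

```python
import numpy as np

nodes = np.array([-1.0, 0.5, 2.0, 3.5])   # mutually distinct real nodes
V = np.vander(nodes, increasing=True)     # d x d Vandermonde matrix
assert abs(np.linalg.det(V)) > 1e-9       # invertible: nodes are distinct

rng = np.random.default_rng(1)
Y = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
assert np.linalg.matrix_rank(Y) == 4      # columns y_1,...,y_4 form a basis
X = np.linalg.inv(Y.conj().T)             # columns x_1,...,x_4: dual basis
# Gram matrix of the inner products <y_i, x_j> is the identity
assert np.allclose(Y.conj().T @ X, np.eye(4))
print("Vandermonde invertible; dual basis constructed")
```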
Moreover, with the representation (4.1) for T(z) the limit T(∞) is given by ⋯.

Proof of Theorem 4.6. To apply Theorem 2.1 we set γ_{0,z} = γ_z, S_0(z) = T(z), and T_0 = T(∞). Further, we define the canonical self-adjoint extension A_1 of S by ⋯ and the parameter S_1(z) = T_1(z) by ⋯. Without loss of generality we can suppose that Q_0(z), γ_{0,z}, Q_1(z), and γ_{1,z} are normalized to satisfy (2.3). Then we write

    T(z) = S_0(z) = Σ_{j=1}^{ℓ} A_j/(α_j − z) + A + zB,   (4.6)

and introduce the following decomposition of C^d:

    C^d = L ⊕ L′ ⊕ ran B,   (4.7)

where L = ker B ∩ ⋂_{j=1}^{ℓ} ker A_j and L′ = ker B ⊖ L.
With respect to the decomposition (4.7) the equation ⋯ becomes ⋯. Since A = A* and, by the normalization (2.3), Im Q > 0, it follows that x_2 = 0 and x_3 = 0. Thus x = 0 and this implies (4.5). □
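The limit T(∞) appearing in Theorem 4.6 can be illustrated numerically for a function of the form (4.6): T(iy)x converges to Ax for x ∈ ker B, while ‖T(iy)x‖ grows without bound for x ∉ ker B, in accordance with an operator part acting on ker B and a multi-valued part containing ran B. A sketch with hypothetical data:

```python
import numpy as np

# Rational Nevanlinna function T(z) = sum_j A_j/(alpha_j - z) + A + z B
alphas = np.array([1.0, -2.0])
A1 = np.array([[1.0, 1.0], [1.0, 1.0]])
A2 = np.array([[0.5, 0.0], [0.0, 0.0]])
A = np.array([[0.2, 0.0], [0.0, -0.4]])
B = np.array([[0.0, 0.0], [0.0, 1.0]])   # ker B = span{e_1}

def T(z):
    return A + z * B + sum(Aj / (a - z) for a, Aj in zip(alphas, [A1, A2]))

x_ker = np.array([1.0, 0.0])             # in ker B
x_out = np.array([0.0, 1.0])             # not in ker B
for y in [1e2, 1e4, 1e6]:
    print(y,
          np.abs(T(1j * y) @ x_ker - A @ x_ker).max(),   # -> 0
          np.linalg.norm(T(1j * y) @ x_out))             # blows up

assert np.abs(T(1e8j) @ x_ker - A @ x_ker).max() < 1e-6
assert np.linalg.norm(T(1e8j) @ x_out) > 1e6
```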