Local spectral multiplicity of selfadjoint couplings with general interface conditions

We consider selfadjoint operators obtained by pasting a finite number of boundary relations with one-dimensional boundary space. A typical example of such an operator is the Schrödinger operator on a star-graph with a finite number of finite or infinite edges and an interface condition at the common vertex. A wide class of "selfadjoint" interface conditions, subject to an assumption which is generically satisfied, is considered. We determine the spectral multiplicity function on the singular spectrum (continuous as well as point) in terms of the spectral data of decoupled operators.


Introduction
In this paper we analyze the singular spectrum of a selfadjoint operator built by gluing together a finite number of selfadjoint operators with simple spectrum by means of interface conditions. We realize the operators with the help of boundary triplets, and understand interface conditions as linear dependencies among boundary values. An archetypical example of such a glued-together operator is a Schrödinger operator on a star-graph with an interface condition at the inner vertex.
The reader who is not familiar with the language of boundary triplets and couplings of such (e.g. [BHdS20, DHMdS09]) is advised to pause for a moment and, before proceeding here, read through Section 1.1 below in order to get an intuition; there we elaborate in detail the above mentioned example. We also point out that all necessary notions and results from the literature are recalled in Section 2.
Assume we have selfadjoint operators L_l, l = 1, ..., n, in Hilbert spaces H_l, l = 1, ..., n, that emerge from boundary relations with one-dimensional boundary value space (cf. Section 2.3). Denote by L_0 the orthogonal coupling, i.e., the diagonal operator L_0 := ⊕_{l=1}^n L_l acting in H := ⊕_{l=1}^n H_l. The spectrum of L_0 and its multiplicity is easily understood: letting N_l and N_0 be the respective spectral multiplicity functions of L_l and L_0, it holds that N_0 = Σ_{l=1}^n N_l. Here, and always, we tacitly understand that any relation between spectral multiplicity functions should be valid only after making an appropriate choice of representatives for each of them.
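The additivity N_0 = Σ_l N_l can be illustrated with a finite-dimensional toy example of our own (not from the paper): for a block-diagonal Hermitian matrix, eigenvalue multiplicities of the summands simply add.

```python
import numpy as np

# Toy illustration: for the orthogonal coupling L0 = L1 (+) L2 the eigenvalue
# multiplicities add, N0 = N1 + N2.  L1, L2 are our own sample matrices.
L1 = np.diag([1.0, 2.0])          # simple spectrum {1, 2}
L2 = np.diag([1.0, 3.0])          # simple spectrum {1, 3}
L0 = np.block([[L1, np.zeros((2, 2))], [np.zeros((2, 2)), L2]])

def mult(L, x, tol=1e-9):
    """Multiplicity of x as an eigenvalue of the Hermitian matrix L."""
    return int(np.sum(np.abs(np.linalg.eigvalsh(L) - x) < tol))

print(mult(L0, 1.0))  # 2 = 1 + 1: the eigenvalue 1 is shared by both summands
print(mult(L0, 2.0))  # 1: the eigenvalue 2 comes only from L1
```

The same counting holds layer by layer for the spectral multiplicity functions of the infinite-dimensional operators in the text.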
The orthogonal coupling can be seen as gluing together the single operators without allowing any interaction between them. Much more interesting is what happens when the single operators do influence each other after gluing. Assume we have an interface condition, formulated as a linear dependence among boundary values and described by matrices A, B (cf. Section 2.4), such that the corresponding operator L_{A,B} is selfadjoint, and let N_{A,B} be the spectral multiplicity function of L_{A,B}. The basic question is: how to compute N_{A,B} from N_l, l = 1, ..., n? By a simple dimension argument, it always holds that N_{A,B}(x) ≤ n. Further, the Kato-Rosenblum theorem fully settles the question on the absolutely continuous part of the spectrum: there we have N_{A,B}(x) = N_0(x), since L_{A,B} and L_0 are finite rank perturbations of each other in the resolvent sense. Contrasting this, the singular spectrum (eigenvalues as well as singular continuous part) may behave wildly, and much less is known.
A classical result for the case n = 2 is Kac's theorem [Kac62, Kac63], which says in the present language that for a particular interface condition, namely the standard condition, the multiplicity of the singular spectrum does not exceed 1. Kac's proof proceeds via an analysis of the Cauchy transforms of the involved spectral measures and the Titchmarsh-Kodaira formula. Different proofs are given in [Gil98] (by using subordinacy theory) and in [Sim05] (by reducing to the Aronszajn-Donoghue theorem). Generalizations of the theorem of Kac are given in [SW14] and in [Mal19]. In the first reference, we allow arbitrary n but still prescribe the standard interface condition. The second reference goes into another direction: there still n = 2, but the boundary value spaces are allowed to have higher dimensions and a certain class of interface conditions is permitted.
In the present paper we allow arbitrary n and consider a fairly rich class of interface conditions defined by an algebraic property (cf. Section 3). This property expresses, at least in some sense, that all single operators influence each other and no splitting into independent blocks happens; one has to be careful with this intuition, though, as it is only very rough. The previously considered standard condition belongs to that class. A striking difference is that interface conditions of the presently considered class can give rise to perturbations of any rank up to n, while the standard condition always yields a rank one perturbation. Our main result says that, letting r be the rank of the perturbation, on the singular spectrum the relation (1.1) holds. A formulation in terms of spectral measures, and without reference to a particular choice of representatives of multiplicity functions, is given in Theorem 4.3. The proof of the theorem is carried out by an in-depth analysis of Cauchy integrals and of the matrix measure in the Titchmarsh-Kodaira formula. We exploit algebraic properties of the considered class of interface conditions to obtain the rank of the derivative of that matrix measure w.r.t. its trace measure, and this leads to (1.1).
Other approaches to the above emphasised basic question might proceed via the already mentioned work of M. Malamud [Mal19], or via a generalization of the Aronszajn-Donoghue theorem given by C. Liaw and S. Treil in [LT20]. To the best of our knowledge, no such results have been obtained so far using these approaches.
The present paper is organized as follows. In Section 2 we introduce objects and tools needed to formulate and prove our result. In particular, these include boundary relations, pastings of such with selfadjoint interface conditions, matrix-valued Weyl functions and corresponding measures. In Section 3 we discuss in detail the class of selfadjoint interface conditions that we consider, namely, their description in terms of matrices A, B and the additional assumption that we make about them. In Section 4 we give the statement of the main result, Theorem 4.3. Before that we formulate the result separately for the case of point spectrum, Theorem 4.1, since this can be shown under slightly weaker assumptions. In Section 5 we prove Theorem 4.1. In Section 6 we prove the part of Theorem 4.3 concerning the case where many layers of the spectrum "overlap". In Section 7 we prove the remaining part of Theorem 4.3.

The Schrödinger operator on a star-graph
Let us discuss, as an example, the Schrödinger operator on a metric star-graph. In fact, this example can serve as a model for the general case. We denote the edges of the graph by E_1, ..., E_n and associate them with intervals [0, e_l), where the endpoint 0 corresponds to the inner vertex and e_l can be finite or infinite. Assume we are given data:
(1) Real-valued potentials q_l ∈ L_{1,loc}([0, e_l)) for l = 1, ..., n.
(2) Boundary conditions at e_l for those l = 1, ..., n, for which q_l is regular or is in the limit circle case at e_l.
For l = 1, ..., n let H_l be the Hilbert space L_2(0, e_l), and let L_l be the selfadjoint Schrödinger operator with Dirichlet boundary condition at 0, i.e., L_l u := −u'' + q_l u on the domain

dom L_l := { u ∈ L_2(0, e_l) : u, u' are absolutely continuous, −u'' + q_l u ∈ L_2(0, e_l), u(0) = 0, u satisfies the boundary condition at e_l (if present) }.

Now assume we have an interface condition at the inner vertex written in the form

A (u_l(0))_{l=1}^n + B (u_l'(0))_{l=1}^n = 0, (1.2)

where A and B are n × n matrices such that

AB* = BA* and rank(A, B) = n. (1.3)

Here (A, B) denotes the n × 2n matrix which has A as its first n columns and B as its last n columns. The operator L_{A,B} is defined in the Hilbert space H := ⊕_{l=1}^n L_2(0, e_l) and acts by the rule (L_{A,B} u)_l := −u_l'' + q_l u_l (1.4) on the domain

dom L_{A,B} := { u = (u_1, ..., u_n) ∈ H : u_l, u_l' are absolutely continuous, −u_l'' + q_l u_l ∈ L_2(0, e_l), u_l satisfies the boundary condition at e_l (if present), u_1, ..., u_n satisfy the interface condition (1.2) }. (1.5)

Since the matrices A and B satisfy (1.3), the operator L_{A,B} is selfadjoint [KS99].
Obviously, this correspondence between matrices and operators is not one-to-one: one can multiply A and B simultaneously from the left by any invertible matrix, and this defines the same interface condition and the same operator.
The orthogonal coupling L_0 := ⊕_{l=1}^n L_l corresponds to the matrices A_0 = I, B_0 = 0: in the above notation, L_0 = L_{I,0}. The standard interface condition corresponds to the matrices A_st, B_st in (1.6), where the first n − 1 rows of A_st express continuity of the solution at the vertex (u_1(0) = u_2(0) = ... = u_n(0)), and the last row of B_st corresponds to the Kirchhoff condition that the sum of derivatives vanishes (Σ_{l=1}^n u_l'(0) = 0). The class of interface conditions we consider in the present paper is given by those matrices A, B subject to (1.3) which satisfy in addition the following assumption: each set of rank B many different columns of B is linearly independent. Under this assumption the rank of the difference of resolvents of L_{A,B} and L_0 equals rank B, and (1.1) holds.
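For concreteness, the continuity-plus-Kirchhoff matrices can be written down and condition (1.3) verified numerically. The sketch below is our own (with n = 4, not taken from the paper); it also confirms that rank B_st = 1, so for the standard condition the assumption above only asks that every single column of B_st is nonzero.

```python
import numpy as np

def standard_matrices(n):
    """Standard (continuity + Kirchhoff) interface condition on n edges."""
    A = np.zeros((n, n))
    for i in range(n - 1):          # rows encoding u_i(0) - u_{i+1}(0) = 0
        A[i, i], A[i, i + 1] = 1.0, -1.0
    B = np.zeros((n, n))
    B[n - 1, :] = 1.0               # last row: sum of derivatives vanishes
    return A, B

n = 4
A, B = standard_matrices(n)
AB = np.hstack([A, B])              # the n x 2n matrix (A, B)
print(np.allclose(A @ B.T, B @ A.T))          # A B* = B A*
print(np.linalg.matrix_rank(AB) == n)         # rank (A, B) = n
print(np.linalg.matrix_rank(B))               # rank B = 1
```

The first two printed checks are exactly condition (1.3); the last one shows that the standard condition always yields a rank one perturbation.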

Boundary behavior of Herglotz functions
Recall the notion of matrix valued Herglotz functions (often also called Nevanlinna functions).
2.1 Definition. An analytic function M : C \ R → C^{n×n} is called a (matrix valued) Herglotz function, if M(z̄) = M(z)* for all z ∈ C \ R and Im M(z) ≥ 0 for Im z > 0.

Any Herglotz function M admits an integral representation. Namely, there exists a finite positive n×n-matrix valued Borel measure Ω (which means that Ω(∆) is a positive semidefinite matrix for every Borel set ∆), a selfadjoint matrix a, and a positive semidefinite matrix b, such that

M(z) = a + bz + ∫_R ( 1/(t − z) − t/(1 + t²) ) dΩ(t). (2.1)

For the scalar case, this goes back to [Her11]; for the matrix valued case see, e.g., [GT00, Theorem 5.4]. We use several known facts about the boundary behavior of Herglotz functions which relate normal or nontangential boundary limits to the measure Ω in (2.1). The key notion in this context is the symmetric derivative of one measure relative to another. If σ is a positive Borel measure and ν is a positive Borel measure or a complex Borel measure absolutely continuous w.r.t. σ, then we define the symmetric derivative (dν/dσ)(x) at a point x ∈ R as the limit

(dν/dσ)(x) := lim_{ε→0+} ν((x − ε, x + ε)) / σ((x − ε, x + ε)),

whenever it exists in [0, ∞], or in C, respectively.
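As a toy numerical illustration (entirely our own): when dν = f dλ and dσ = g dλ are absolutely continuous with continuous densities, the symmetric derivative at x is just f(x)/g(x), and the ratio of interval masses converges as the interval shrinks.

```python
# Toy illustration of the symmetric derivative
# (d nu / d sigma)(x) = lim_{eps->0} nu((x-eps, x+eps)) / sigma((x-eps, x+eps)).
# Here nu has density f(t) = t**2 + 1 and sigma has density g(t) = 2 w.r.t.
# Lebesgue measure, so the limit at x equals (x**2 + 1) / 2.

def nu(a, b):        # nu((a, b)) via the antiderivative t**3/3 + t
    F = lambda t: t**3 / 3 + t
    return F(b) - F(a)

def sigma(a, b):     # sigma((a, b)) = 2 * (b - a)
    return 2.0 * (b - a)

x = 1.0
for eps in [1e-1, 1e-3, 1e-6]:
    print(nu(x - eps, x + eps) / sigma(x - eps, x + eps))
# The printed ratios approach (1**2 + 1) / 2 = 1 as eps shrinks.
```

Of course, the interesting cases in the text are the singular ones, where the limit may be 0 or ∞; this sketch only shows the mechanism of the definition.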

2.2 Proposition ([DiB02]). There exists a Borel set X ⊆ R with σ(R \ X) = 0, such that the symmetric derivative (dν/dσ)(x) exists for all x ∈ X and the function dν/dσ is measurable on X.
By the de la Vallée Poussin theorem [Sak64, DiB02] the function dν/dσ is a Radon-Nikodym derivative of ν with respect to σ. We formulate two corollaries of the de la Vallée Poussin theorem which will be of particular convenience to us in what follows. An explicit proof can be found in [SW14].
The first corollary concerns properties of sets. A set X ⊆ R is said to carry a Borel measure ν (or to be a carrier of ν), if ν(R \ X) = 0.

2.3 Corollary. Let ν and σ be positive Borel measures, and let X ⊆ R.
The following statements concern the relationship between boundary behavior of Herglotz functions and symmetric derivatives of the measures associated with them.

2.4 Proposition.
(1) Let ν and σ be finite positive Borel measures and x ∈ R, and assume that the corresponding symmetric derivatives at x exist.
(2) Let ν be a finite positive Borel measure, let m_ν be a Herglotz function with the measure ν in its integral representation, and let x ∈ R.
(3) Let ν and σ be finite positive Borel measures with ν ≪ σ, and let σ_s be the singular part of σ w.r.t. λ. Again let m_ν and m_σ be two Herglotz functions with ν or σ, respectively, in their integral representations.
Note that in (1) we can use dσ := dλ/(1 + x²), which implies the corresponding boundary limit relation whenever (dν/dλ)(x) exists and is finite. For a more detailed compilation and references about symmetric derivatives and boundary values we refer the reader to [SW14, §2.3 and §2.4].
Item (1) of the above proposition has an obvious extension to matrix valued functions and measures.
For ρ-a.a. x ∈ R the symmetric derivative (dΩ/dρ)(x) exists and is related to M by (2.2).
Proof. Let x ∈ R, and assume that all symmetric derivatives at x of the positive and negative parts of the real and imaginary parts of the entries of Ω w.r.t. ρ exist and are finite. This is fulfilled for ρ-a.a. x ∈ R.

Spectral multiplicity
Let L be a selfadjoint operator in a Hilbert space H and E_L be its projection valued spectral measure. A subspace G ⊆ H is called generating, if H = clos span{ E_L(∆)g : g ∈ G, ∆ ⊆ R Borel }. The minimum of the dimensions of generating subspaces is called the (global) spectral multiplicity of the operator L and is denoted by mult L.
The operators that we deal with always have finite spectral multiplicity, hence we shall assume from now on that mult L < ∞. There exist elements g_1, ..., g_{mult L} ∈ H, such that their linear span is a generating subspace and the positive measures

ν_l := ( E_L(·) g_l, g_l ), l = 1, ..., mult L, (2.3)

are ordered in the sense of absolute continuity as ν_1 ≫ ν_2 ≫ ... ≫ ν_{mult L}. An element g_1 occurring in a collection of elements with these properties is called an element of maximal type. Fix a choice of such elements g_1, ..., g_{mult L}. Then the operator L is unitarily equivalent to the operator of multiplication by the independent variable in the space ⊕_{l=1}^{mult L} L_2(R, ν_l). Choose Borel sets Y_l ⊆ R such that ν_l is mutually absolutely continuous with the restriction of ν_1 to Y_l, and define the (local) spectral multiplicity function of L as the equivalence class (i.e., functions which are E_L-a.e. equal) of the function

N_L(x) := #{ l : x ∈ Y_l }. (2.4)

The need of considering an equivalence class of functions arises since the sets Y_l are defined non-uniquely, up to ν_1-zero sets, and thus the function #{l : x ∈ Y_l} can be changed on ν_1-zero sets.
Intuitively, the sets Y_l correspond to the "layers" of the spectrum, and hence indeed N_L expresses spectral multiplicity in a natural way. If x is an eigenvalue of L, then N_L(x) = dim ker(L − xI) is the usual multiplicity of the eigenvalue.
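A purely atomic toy example of our own makes the layer picture concrete: with two nested layers Y_2 ⊆ Y_1, the function N_L counts the layers covering each point, and for a matrix this agrees with dim ker(L − xI).

```python
import numpy as np

# Toy "layers" picture of the local spectral multiplicity (2.4): N_L(x)
# counts how many of the nested sets Y_1, Y_2, ... contain x.  L is a sample
# diagonal matrix whose spectrum realizes exactly these two layers.
Y = [{1.0, 2.0, 3.0}, {1.0, 2.0}]          # Y_2 is a subset of Y_1
N = lambda x: sum(1 for Yl in Y if x in Yl)

L = np.diag([1.0, 2.0, 3.0, 1.0, 2.0])     # eigenvalue x repeated N(x) times

for x in [1.0, 2.0, 3.0]:
    dim_ker = int(sum(np.isclose(np.diag(L), x)))   # dim ker(L - xI)
    print(x, N(x), dim_ker)                          # the two counts agree
```

This is only the finite-dimensional shadow of the construction; for continuous spectrum the sets Y_l are determined up to ν_1-zero sets, as the text explains.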
The spectral multiplicity function is a unitary invariant of the operator and does not depend on a choice of generating basis.
2.7 Remark. We will also speak of the spectral multiplicity function of a selfadjoint linear relation L in a Hilbert space H. This notion is defined by simply ignoring the multivalued part (and doing so is natural, since one can think of the multivalued part as an eigenspace for the eigenvalue ∞). To be precise, let mul L := {g ∈ H : (0; g) ∈ L}. Then the operator part L_op := L ∩ ((mul L)^⊥ × (mul L)^⊥) is a selfadjoint operator (recall that we identify operators with their graphs) in the Hilbert space (mul L)^⊥, and we set N_L := N_{L_op}. The following classical fact will be used below.
2.8 Lemma. Let L_1 and L_2 be selfadjoint relations in Hilbert spaces H_1 and H_2, respectively, with finite multiplicities. Set L := L_1 ⊕ L_2, and let µ ∼ E_{L_1} and ν ∼ E_{L_2} be scalar measures defined by elements of maximal type f for L_1 and g for L_2 via (2.3). Let µ = µ_ac + µ_s, ν = ν_ac + ν_s be the Lebesgue decompositions of the measures µ, ν with respect to each other: µ_ac ≪ ν, µ_s ⊥ ν, and ν_ac ≪ µ, ν_s ⊥ µ. Then N_L = N_{L_1} + N_{L_2} holds µ_ac-a.e. and ν_ac-a.e.

Consider the measure µ + ν and suitably chosen sets X_1, X_2 carrying µ and ν, respectively. Then the sets X_1 \ X_2, X_1 ∩ X_2 and X_2 \ X_1 carry the measures µ_s, µ_ac ∼ ν_ac and ν_s, respectively. We can see that caution is needed when dealing with local spectral multiplicities: the values of the function N_{L_2} have no meaning on the set X_1 \ X_2 and can be changed arbitrarily there, and the same holds for N_{L_1} on X_2 \ X_1. So the statement N_L = N_{L_1} + N_{L_2}, looking natural, is in fact wrong: the functions N_{L_1} and N_{L_2} are defined non-uniquely, each in its own sense.
The next lemma is a general folklore-type result on the behavior of the local spectral multiplicity of an operator under finite-dimensional perturbations, generalized to the case of linear relations. We do not know an explicit reference for it, and for the reader's convenience we provide its proof.
2.9 Lemma. Let L_1 and L_2 be selfadjoint relations in the Hilbert space H such that for some (and hence for all) λ ∈ C \ R

rank[ (L_1 − λI)^{−1} − (L_2 − λI)^{−1} ] ≤ k.

Let µ ∼ E_{L_1} and ν ∼ E_{L_2} be scalar measures defined by elements of maximal type f for L_1 and g for L_2 via (2.3), and let N_1, N_2 be their local spectral multiplicity functions. Let µ = µ_ac + µ_s, ν = ν_ac + ν_s be the Lebesgue decompositions of the measures µ, ν with respect to each other. The second part, a simple symmetric operator S̃, acts in the subspace H_S := H ⊖ H_L. The subspaces H_L and H_S reduce the relations S, L_1, L_2, and S*, see [BHdS20, Lemma 3.4.2], and S = L ⊕ S̃. Thus L_1 and L_2 have orthogonal decompositions L_i = L ⊕ L̃_i, i = 1, 2, where the linear relations L̃_1 and L̃_2 are selfadjoint extensions of S̃.
Consider λ ∈ C \ R and the subspace H_λ := { v ∈ H : (L_1 − λI)^{−1} v = (L_2 − λI)^{−1} v }. We have: for every v ∈ H_λ the element (L_1 − λI)^{−1} v belongs to dom S and v ∈ ran(S − λI). The converse is also true, and therefore H_λ = ran(S − λI). By assumption the subspace H_λ has codimension k, hence the deficiency index of S is (k, k). The deficiency index of S̃ coincides with that of S and hence is (k, k).
Then the spectral multiplicities of both relations L̃_1 and L̃_2 do not exceed k, because defect subspaces of simple symmetric operators are generating subspaces for the operator parts of their selfadjoint extensions. The rest follows from Lemma 2.8. One should write out the "triple" Lebesgue decomposition for scalar spectral measures of maximal type of L, L̃_1 and L̃_2 w.r.t. each other and count the differences of multiplicities a.e. with respect to each part according to Lemma 2.8; we skip the details.

Boundary relations and the Titchmarsh-Kodaira formula
Using the abstract setting of boundary relations leads to a unified approach to the spectral theory of many concrete operators. A recent standard reference for this theory is [BHdS20]; we shall sometimes also refer to [DHMdS06]. Let us now recall some basic facts used in the present paper.
2.10 Definition. Let H and B be Hilbert spaces, let S be a closed symmetric linear relation in H, and let Γ be a linear relation from H² to B². Then Γ is called a boundary relation for S*, if the following holds.
(BR1) The domain of Γ is contained in S * and is dense there.
In what follows we consider only boundary relations with B = C^n. In this case it is known that the deficiency indices of S are equal and the domain of Γ is equal to S*. If additionally mul Γ = {(0; 0)}, then Γ is a bounded operator from S* to C^{2n} which can be written in the form Γ = (Γ_1, Γ_2), Γ_i : S* → C^n, i = 1, 2, and then the collection (C^n; Γ_1, Γ_2) is called an (ordinary) boundary triplet for S*.
For the case of a boundary triplet (C^n; Γ_1, Γ_2) for S*, symmetric and selfadjoint extensions of S can be described in a neat way. To this end introduce the indefinite scalar product [·, ·] on C^n × C^n defined by the Gram operator

J_{C^n} := ( 0  −iI_{C^n} ; iI_{C^n}  0 ).

Explicitly, this is [(a; b), (c; d)] = (J_{C^n}(a; b), (c; d)) = i( (a, d) − (b, c) ). Then symmetric extensions A ⊆ H × H of S correspond bijectively to [·, ·]-neutral subspaces θ ⊆ C^n × C^n by means of the inverse image map Γ^{−1}. In this correspondence A is selfadjoint if and only if θ is a maximal neutral subspace (equivalently, θ is neutral and dim θ = n). Such subspaces are sometimes also called Lagrange planes, e.g., [Har00]. One has S = ker Γ, and the operator induced by Γ on the quotient space S*/S is a linear isomorphism onto C^{2n}. In the general case of a boundary relation Γ there is a linear isomorphism between the quotient spaces S*/S = dom Γ/ ker Γ and ran Γ/ mul Γ. For this reason a statement about spectrum can only be expected for boundary relations of simple symmetric linear relations (which therefore are operators, but may be nondensely defined, so that their adjoint may be a multivalued linear relation). A cornerstone in Weyl theory is the Titchmarsh-Kodaira formula.
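The Lagrange-plane picture can be checked numerically in a small example of our own: with the common choice of Gram operator J = (0, −iI; iI, 0) and the standard n = 2 interface matrices, the subspace θ_{A,B} = ker(A, B) has dimension n and the indefinite form vanishes identically on it, i.e., it is a maximal neutral subspace.

```python
import numpy as np

n = 2
A = np.array([[1.0, -1.0], [0.0, 0.0]])   # continuity row + zero row
B = np.array([[0.0, 0.0], [1.0, 1.0]])    # Kirchhoff row
J = np.block([[np.zeros((n, n)), -1j * np.eye(n)],
              [1j * np.eye(n), np.zeros((n, n))]])

# theta_{A,B} = ker(A, B) as a subspace of C^{2n}; basis from the SVD.
AB = np.hstack([A, B]).astype(complex)
_, s, Vh = np.linalg.svd(AB)
V = Vh.conj().T[:, int(np.sum(s > 1e-12)):]   # columns span the kernel

print(V.shape[1])                          # dimension n = 2: maximal
print(np.allclose(V.conj().T @ J @ V, 0))  # neutrality: the form vanishes
```

The sign convention for J varies in the literature; neutrality of ker(A, B) is insensitive to that choice as long as AB* = BA*.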
We give a formulation which is in fact a generalization of the Titchmarsh-Weyl-Kodaira theory [Tit62], [Wey10], [Kod49] for one-dimensional Schrödinger operators to the abstract setting of boundary relations. Then the operator part of L is unitarily equivalent to the operator of multiplication by the independent variable in the space L_2(R, Ω). The spectral multiplicity function N_L of L is given as

N_L(x) = rank (dΩ/dρ)(x) for ρ-a.a. x ∈ R. (2.6)

Pasting of boundary relations
In this subsection we describe what we understand by a pasting of boundary relations by means of interface conditions. For more details on operations with boundary relations we refer the reader to [BHdS20] or [SW14, §3]. We restrict ourselves to pastings of relations from a particular simple class (B = C); however, the construction that we use would also make sense in a more general setting. We use the word pasting in two meanings: for boundary relations Γ_l and for selfadjoint linear relations L_l. For the former, a pasting is obtained from a fractional linear transform w of Γ_0 = ⊕_{l=1}^n Γ_l, where the matrix w is J-unitary and is constructed in a non-unique way from the matrices A and B which determine the interface condition. For the latter, the pasting is uniquely determined by A and B.
We consider the following setting.
(D1) Let n ≥ 2 and for l ∈ {1, ..., n} let either H_l be a Hilbert space, S_l a simple closed symmetric linear relation in H_l with deficiency index (1, 1) (which is hence an operator, possibly not densely defined), Γ_l ⊆ H_l² × C² a boundary relation of function type for S_l*, and L_l = ker[π_1 ∘ Γ_l] a selfadjoint linear relation, or (an artificial edge) H_l = {0}, S_l = L_l = S_l* = {(0; 0)}, and Γ_l = {0}² × mul Γ_l ⊆ {0}² × C². We assume that for at least one l we do not have an artificial edge. Note that our assumptions in (D1) imply that boundary relations Γ_l which are not boundary functions must be artificial. Namely, if mul Γ_l ≠ {(0; 0)}, then by a dimension argument we must have H_l = {0} and Γ_l = {0}² × {(a; m_l a) : a ∈ C} with some constant m_l ∈ R serving as Weyl function. Allowing such degenerate cases has meaningful applications: in [SW14], using a construction with an artificial edge, we showed that the Aronszajn-Donoghue result [Aro57], [Don65] can be deduced from the result of that work (these results actually imply each other).
Set S := ⊕_{l=1}^n S_l and Γ_0 := ⊕_{l=1}^n Γ_l. We see that S is a simple closed symmetric linear relation in H with deficiency index at most (n, n), and Γ_0 is a boundary relation for S*. The Weyl family of Γ_0 is given as the diagonal relation M_0 = diag(m_1, ..., m_n), where the m_l are the Weyl functions of the boundary relations Γ_l or the real constants associated with artificial edges, respectively. Since the Γ_l are of function type, so is Γ_0, and M_0 is a diagonal matrix function.
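A quick numerical sanity check of our own: a diagonal matrix function built from scalar Herglotz functions is again matrix Herglotz, i.e., Im M_0(z) is positive semidefinite in the upper half-plane. The scalar functions m_1, m_2 below are sample Herglotz functions chosen for illustration, not the Weyl functions of any particular operator.

```python
import numpy as np

# Check Im M_0(z) >= 0 for Im z > 0, where M_0(z) = diag(m_1(z), m_2(z)).
m1 = lambda z: -1.0 / z          # Herglotz: Im(-1/z) = Im(z) / |z|**2 > 0
m2 = lambda z: z                 # Herglotz: Im(z) > 0

def M0(z):
    return np.diag([m1(z), m2(z)])

rng = np.random.default_rng(0)
for _ in range(100):
    z = complex(rng.normal(), abs(rng.normal()) + 1e-3)   # Im z > 0
    eigs = np.linalg.eigvalsh(M0(z).imag)                 # Im part, Hermitian
    assert eigs.min() >= 0                                # positive semidefinite
print("Im M_0(z) >= 0 verified at 100 sample points")
```

For the diagonal coupling this is immediate (the imaginary parts sit on the diagonal); the point is that the fractional linear transform below preserves this Herglotz property in a much less obvious way.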
Set L_0 := ker[π_1 ∘ Γ_0] = ⊕_{l=1}^n L_l, and we think of L_0 as the uncoupled selfadjoint relation.
In order to model interaction between the boundary relations Γ_1, ..., Γ_n, one can use fractional linear transforms defined by J_{C^n}-unitary matrices w. Recall here that a matrix w ∈ C^{2n×2n} is called J_{C^n}-unitary, if w* J_{C^n} w = J_{C^n}. Also note that w is J_{C^n}-unitary if and only if w* is. Given matrices A, B which satisfy (D3), we can construct a J_{C^n}-unitary matrix w ∈ C^{2n×2n} with A and B used as upper blocks. The following result is of a folklore kind; however, for completeness we provide its proof.
2.13 Lemma. Let A, B ∈ C^{n×n} be given. There exist C, D ∈ C^{n×n} such that the matrix

w := ( A  B ; C  D ) ∈ C^{2n×2n} (2.7)

is J_{C^n}-unitary, if and only if AB* = BA* and rank(A, B) = n.

Proof. The backwards implication readily follows: if we find C and D such that (2.7) is J_{C^n}-unitary, then BA* = AB* and ker[(A, B)*] = {0}. In order to show the forward implication, assume that A and B satisfy these two conditions. Then a suitable completion (C, D) can be constructed.

One can show that Γ_w := w ∘ Γ_0 is a boundary relation for S* and that the Weyl family of Γ_w is M_w = (C + D M_0)(A + B M_0)^{−1}; in particular, M_w is a matrix valued function and Γ_w is of function type. The selfadjoint relation (2.5) for Γ_w is given as

L_{A,B} := ker[π_1 ∘ Γ_w]. (2.10)

It depends only on the first row of w, i.e., on the matrices A and B, and hence we may legitimately call L_{A,B} the pasting of the relations L_l = ker(π_1 ∘ Γ_l) by means of the interface conditions (A, B). We think of L_{A,B} as the coupled selfadjoint relation.
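Lemma 2.13 can be tested numerically in a concrete example of our own: for n = 2 take the standard interface matrices A, B, which satisfy AB* = BA* and rank(A, B) = n, and the blocks C, D below, which form one possible completion (chosen by hand for this illustration; completions are not unique).

```python
import numpy as np

# Numerical check of Lemma 2.13 for one concrete example (n = 2).
n = 2
A = np.array([[1.0, -1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 1.0]])
C = np.array([[0.0, 0.0], [-0.5, -0.5]])   # hand-picked completion blocks
D = np.array([[0.5, -0.5], [0.0, 0.0]])

w = np.block([[A, B], [C, D]]).astype(complex)
J = np.block([[np.zeros((n, n)), -1j * np.eye(n)],
              [1j * np.eye(n), np.zeros((n, n))]])

print(np.allclose(A @ B.conj().T, B @ A.conj().T))   # AB* = BA*
print(np.linalg.matrix_rank(np.hstack([A, B])) == n) # rank (A, B) = n
print(np.allclose(w.conj().T @ J @ w, J))            # w* J w = J
print(np.allclose(w @ J @ w.conj().T, J))            # equivalently w J w* = J
```

The last two checks together illustrate the remark above that w is J_{C^n}-unitary if and only if w* is.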
The relation L_{A,B} is a finite-rank perturbation of L_0 in the resolvent sense, with the actual rank of the perturbation not exceeding n. This holds simply because both are extensions of the symmetry S, which has deficiency index at most (n, n). The following lemma helps to estimate this rank.
Denote for A, B ∈ C^{n×n}

θ_{A,B} := { (a; b) ∈ C^n × C^n : Aa + Bb = 0 }.

2.14 Lemma. Let (A, B) and (A', B') satisfy (D3). Then

rank[ (L_{A,B} − λI)^{−1} − (L_{A',B'} − λI)^{−1} ] ≤ dim( θ_{A,B} ⊖ (θ_{A,B} ∩ θ_{A',B'}) ), λ ∈ C \ R.

Proof. Under (D3), θ_{A,B} and θ_{A',B'} are Lagrange planes in C^n × C^n which correspond to the relations L_{A,B} and L_{A',B'}. ❑

For the uncoupled relation L_0 = L_{I,0} we have θ_{I,0} = {0} × C^n and θ_{A,B} ∩ θ_{I,0} = { (0; b) : Bb = 0 }, and the dimension above is computed as dim( θ_{A,B} ⊖ (θ_{A,B} ∩ θ_{I,0}) ) = n − dim ker B = rank B. Note that this is a condition on B only.
We have no fully precise intuition for (D4). However, in some sense it is related to how different edges are mixed. To illustrate the situation, observe that the matrix B_st from the standard interface conditions in (1.6) satisfies (D4), and apparently the Kirchhoff condition Σ_{l=1}^n u_l'(0) = 0 for a star-graph combines all edges. On the other hand, consider for instance the following matrices A_1, A_2, B. Obviously, B does not satisfy (D4). Thinking of a situation as in (1.4)-(1.5), we see that interface conditions with this matrix B fail to mix all edges in the second component of their boundary values. For example, the operator corresponding to the interface conditions (A_1, B) splits into two uncoupled parts, one corresponding to the first two edges and another to the third edge. At the same time, the operator corresponding to (A_2, B) will mix all edges.
The condition (D4) can be characterized in different ways. Here we call a linear subspace L of C^n a coordinate plane, if L = span{e_{i_1}, ..., e_{i_l}} with some 1 ≤ i_1 < i_2 < ... < i_l ≤ n, where e_i denotes the i-th canonical basis vector in C^n.
3.1 Lemma. For a matrix B ∈ C^{n×n} the following statements are equivalent.
(2) For every coordinate plane L in C n with dim L ≤ rank B, the restriction of B to L is injective.
Finally, assume (3) and let 1 ≤ i_1 < ... < i_r ≤ n. Let X be the diagonal matrix having diagonal entry 1 in the i_l-th column, l = 1, ..., r, and 0 otherwise. Then the linear span K of the i_1, ..., i_r-th columns of B is nothing but the range of BX. This means that the i_1, ..., i_r-th columns of B are linearly independent, and we have (1).

❑
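Condition (D4) is straightforward to test by brute force; the sketch below (our own) checks every set of rank(B)-many columns directly, and the two sample matrices are ours: the Kirchhoff matrix B_st satisfies (D4), while a rank-two matrix with two proportional columns does not.

```python
import numpy as np
from itertools import combinations

def satisfies_D4(B, tol=1e-10):
    """Check (D4): every set of rank(B)-many distinct columns of B is
    linearly independent (brute force over all column subsets)."""
    r = np.linalg.matrix_rank(B, tol=tol)
    return all(np.linalg.matrix_rank(B[:, list(cols)], tol=tol) == r
               for cols in combinations(range(B.shape[1]), r))

# Standard Kirchhoff matrix: rank 1, every column nonzero -> (D4) holds.
B_st = np.zeros((3, 3)); B_st[2, :] = 1.0
print(satisfies_D4(B_st))          # True

# Hypothetical counterexample: rank 2, but columns 0 and 1 are proportional,
# so one pair of columns is dependent -> (D4) fails.
B_bad = np.array([[1.0, 2.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [0.0, 0.0, 0.0]])
print(satisfies_D4(B_bad))         # False
```

This is exactly statement (1) of Lemma 3.1; statement (2) rephrases it as injectivity of B on small coordinate planes.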
The selfadjoint relation L_{A,B} defined as pasting by means of the interface conditions (A, B), cf. (2.10), does not uniquely correspond to (A, B).
Another way to modify A and B without essentially changing the corresponding operator is to simultaneously permute the columns of A and B; this corresponds to "renumbering the edges": let π be a permutation of {1, ..., n}, and let P be the corresponding permutation matrix. Then the operator L_{A,B} defined as pasting of Γ_1, ..., Γ_n with interface conditions (A, B) is unitarily equivalent to the operator built from Γ_{π(1)}, ..., Γ_{π(n)} with (AP, BP), and hence these two operators share all their spectral properties. We are going to use the above two operations to reduce interface conditions to a suitable normal form. For matrices A, B, Ã, B̃ ∈ C^{n×n} let us write (A, B) ∼ (Ã, B̃), if there exist Q, P ∈ C^{n×n} with Q invertible and P a permutation matrix, such that (Ã, B̃) = Q(A, B) diag(P, P).
Clearly, ∼ is an equivalence relation. Two equivalent pairs (A, B) and (Ã, B̃) either both satisfy (D3) or both do not. Further, for any matrices B, Q, P ∈ C^{n×n} with Q invertible and P a permutation matrix, the matrices B and QBP either both satisfy (D4) or both do not.
3.2 Lemma. Let A, B ∈ C^{n×n} and assume that (D3) and (D4) hold. Denote r := rank B. Then there exist A_1 ∈ C^{(n−r)×r} and A_2 ∈ C^{r×r} with A_2 = A_2*, such that (A, B) is equivalent to a pair of the form (3.3). (I_{C^k} denotes the identity matrix of size k × k, and block matrices are understood with respect to the decomposition C^n = C^{n−r} ⊕ C^r.)

Proof. By the definition of r we find an invertible Q_1 ∈ C^{n×n} such that Q_1 B has vanishing first n − r rows, with some blocks B_21 ∈ C^{r×(n−r)} and B_22 ∈ C^{r×r} in the last r rows. Since rank(Q_1 A, Q_1 B) = rank(A, B) = n, the first n − r rows of Q_1 A are linearly independent. Hence, we find an invertible Q_2 ∈ C^{(n−r)×(n−r)} and a permutation matrix P such that the transformed A has the block I_{C^{n−r}} in its upper left corner, with some blocks A_12 ∈ C^{(n−r)×r} and A_22 ∈ C^{r×r}. By (D4), the last r columns of Q_1 BP are linearly independent; equivalently, the right lower r × r block is invertible. Normalizing this block to I_{C^r}, we obtain the required form with some block B'_21 ∈ C^{r×(n−r)}. Putting everything together proves the assertion. ❑

Conclusion: When investigating spectral properties of pastings of boundary relations with interface conditions given by matrices A and B subject to (D3) and (D4), we may restrict attention, without loss of generality, to pairs (A, B) of the form (3.3) with some A_1 ∈ C^{(n−r)×r} and A_2 ∈ C^{r×r}, A_2 = A_2*.
3.3 Remark. Two pairs of the form (3.3) can be equivalent modulo ∼; for example, this happens if there exist permutation matrices P_1 ∈ C^{(n−r)×(n−r)} and P_2 ∈ C^{r×r} relating the respective blocks. Validity of (D4) for a matrix B of the form (3.3) can also be characterized in different ways.
3.4 Lemma. Let B_1 ∈ C^{r×(n−r)}, and let B be the matrix of the form (3.3) built from B_1. Then the following statements are equivalent.
(2) All minors of size r of the matrix (B_1, I_{C^r}) are nonzero.
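The equivalence of (D4) with the minor condition can be confirmed by brute force; the sketch below (our own) tests it on random integer matrices B_1 in the block form (3.3), comparing a direct column-subset check with the r × r minors of (B_1, I_{C^r}).

```python
import numpy as np
from itertools import combinations

def D4(B, tol=1e-9):
    """Direct check: every set of rank(B)-many columns is independent."""
    r = np.linalg.matrix_rank(B, tol=tol)
    return all(np.linalg.matrix_rank(B[:, list(c)], tol=tol) == r
               for c in combinations(range(B.shape[1]), r))

def minors_nonzero(B1, tol=1e-9):
    """Lemma 3.4 (2): all r x r minors of (B_1, I_r) are nonzero."""
    r, m = B1.shape
    M = np.hstack([B1, np.eye(r)])
    return all(abs(np.linalg.det(M[:, list(c)])) > tol
               for c in combinations(range(m + r), r))

rng = np.random.default_rng(1)
n, r = 4, 2
for _ in range(200):
    B1 = rng.integers(-2, 3, size=(r, n - r)).astype(float)
    B = np.block([[np.zeros((n - r, n - r)), np.zeros((n - r, r))],
                  [B1, np.eye(r)]])
    assert D4(B) == minors_nonzero(B1)
print("Lemma 3.4 equivalence confirmed on 200 random examples")
```

Integer entries keep the minors away from spurious rounding, so the tolerance comparisons are reliable here.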
The formulation of the result for the singular spectrum is considerably more elaborate than that for the point spectrum, but only because of the measure theoretic nature of the involved quantities. The basic message is much the same. Again there occurs a distinction into two cases: many layers of spectrum vs. few layers of spectrum. If locally at a point the uncoupled operator L_0 has many layers of spectrum compared to the rank of the matrix B, then after performing the perturbation the multiplicity will have decreased by rank B. In particular, if we had exactly rank B many layers, the point will no longer contribute to the spectral measure. If L_0 has few layers of spectrum, then the multiplicity will not become too large after performing the perturbation. Depending on the ratio of r and N_0(x) it may increase or must decrease.
We use the simplified notation 1_{{N_0 = l}} · ν for the part 1_{Y_l} · ν of a measure ν ≪ µ (and their sums such as 1_{{N_0 > r}} · ν), cf. the definition of Y_l and N_0 in Section 2.2. This definition does not depend on the choice of a representative of the equivalence class of sets {N_0 = l}, owing to the absolute continuity of ν w.r.t. µ.
4.3 Theorem. Let data be given according to (D1)-(D4) above. Let w be a J-unitary matrix provided by Lemma 2.13, cf. (2.7), Γ_w = w ∘ Γ_0 be the pasting of Γ_1, ..., Γ_n, and L_{A,B} = ker[π_1 ∘ Γ_w] be the pasting of L_1, ..., L_n by means of the interface condition (A, B), cf. (2.10). Let m_l be the (scalar) Weyl functions for Γ_l, M_0 and M_w be the matrix valued Weyl functions for Γ_0 and Γ_w, and µ_l, Ω_0, Ω_w be the measures in their Herglotz integral representations (if the "l-th edge" is "artificial", we assume µ_l = 0). Let µ := Σ_{l=1}^n µ_l = tr Ω_0 and ρ := tr Ω_w be scalar spectral measures of the linear selfadjoint relations L_0, L_{A,B}, and let N_0, N_{A,B} be their spectral multiplicity functions.
Assume in addition that we do not have too many artificial edges, in the sense that (D5) #{ l : mul Γ_l = {(0; 0)} } ≥ r.
Then the following statements hold.

Remark.
1. Item (S1) follows from [GT00] and is included here only for completeness.
2. Note that the spectral multiplicity functions N_0 and N_{A,B} are not defined uniquely and can be changed on µ- and ρ-zero sets, respectively. However, these sets of non-uniqueness are ρ_{s,ac}-zero, because ρ_{s,ac} ≪ ρ and ρ_{s,ac} ≪ µ; thus we can compare N_{A,B} and N_0 in (S3) and (S4).
The proof of Theorem 4.1 is by linear algebra and is given in Section 5. The proof of Theorem 4.3 is given in Sections 6 and 7. For the many layers case we proceed via the Titchmarsh-Kodaira formula and the boundary behavior of the Weyl function, while the few layers case is settled by a geometric reduction.

The point spectrum
In this section we prove Theorem 4.1. Throughout the section let data be given as in this theorem, where A, B are assumed to be in block form as in (3.3). Moreover, fix x ∈ R.
We have x ∈ σ_p(S_l*) if and only if there exists f_l ∈ H_l \ {0} with (f_l; x f_l) ∈ S_l*. Assume that x ∈ σ_p(S_l*). Then the boundary values a_l, b_l are uniquely determined by the element f_l, and f_l itself is unique up to a scalar multiple. Fix a choice of f_l. Since S_l is simple, we have for the corresponding boundary values (a_l; b_l) ≠ 0. Moreover, x ∈ σ_p(L_l) if and only if a_l = 0.

❑
In the third step we compute or estimate, respectively, the dimensions of ker γ and ker γ ∩ ker Ξ.
5.2 Lemma.
Proof. Consider the case that #((J*_p \ J_p) ∪ J_m) ≤ n − r. We show that (5.2) holds. The inclusion "⊇" holds since m_l = 0 for l ∉ (J*_p \ J_p) ∪ J_m. Let c ∈ ker γ. Since D_{1,...,n−r} B = 0, it follows that D_{1,...,n−r} A D_{(J*_p \ J_p) ∪ J_m} c = 0. The left side is a linear combination of at most n − r columns of the matrix (I_{C^{n−r}}, A_1), and Lemma 3.4 implies that D_{(J*_p \ J_p) ∪ J_m} c = 0 and hence M c = 0. From this we obtain B D_{J_p} c = 0.
Thus the inclusion "⊆" also holds and the proof of (5.2) is complete. The sets J*_p \ J_p, J_m, and J_p are pairwise disjoint, and each r columns of B are linearly independent. Thus (5.2) implies the asserted formula for dim ker γ. We note here that #J_p < r. Since R + R' ⊆ ran γ, we obtain (5.3). From this, clearly, dim ker γ ≤ r − #J_p. Next, we have ker D_{J*_p ∪ J_m} ⊆ ker γ ∩ ker Ξ, and hence dim(ker γ ∩ ker Ξ) ≥ n − #J*_p − #J_m.

❑
It is easy to deduce Theorem 4.1 from the above lemma.
Proof of Theorem 4.1. Assume that N_0^p(x) ≥ r, and note that N_0^p(x) = #J_p. Then case (1) in Lemma 5.2 takes place, and it follows that the asserted equality holds. On the other hand, using (7.3), dim θ_k = dim θ', and the respective canonical injections, yields the claim.

❑
It is now easy to deduce the few layers case in Theorem 4.3.
(1) |N_1 − N_2| ≤ k, µ_ac-a.e. and ν_ac-a.e., (2) N_1 ≤ k, µ_s-a.e., (3) N_2 ≤ k, ν_s-a.e.
Proof. Consider the symmetric linear relation S := L_1 ∩ L_2 (recall once more that we identify operators with their graphs). It has an orthogonal decomposition into a selfadjoint and a simple (i.e., completely nonselfadjoint) symmetric part. The first, a selfadjoint linear relation L, acts in the subspace H_L := ⋂_{λ∈C\R} ran(S − λI).

2.11 Definition. Let Γ be a boundary relation. The map M which assigns to a point z ∈ C \ R the linear relation

M(z) := { (a; b) ∈ C^n × C^n : ∃ f ∈ H with ((f; zf); (a; b)) ∈ Γ }

is called the Weyl family of Γ. The Weyl family M of a boundary relation of function type (where B = C^n) is an n × n-matrix-valued Herglotz function, and is also called the Weyl function. The Weyl family of a boundary relation is intimately related to the spectral theory of selfadjoint extensions of S, specifically, to the spectrum of the selfadjoint (see [DHMdS06, Proposition 4.22]) relation

L := ker[π_1 ∘ Γ], (2.5)

where π_1 : C^n × C^n → C^n is the projection onto the first component, π_1((a; b)) := a. In the case of a boundary triplet (C^n; Γ_1, Γ_2) we have L = ker Γ_1. Since the Weyl family comprises the information given by defect elements, naturally, a selfadjoint part of S (including its multivalued part) cannot be accessed using M(z).

2.12 Theorem (Titchmarsh-Kodaira formula). Let S be a closed simple symmetric linear relation in a Hilbert space H and Γ ⊆ H² × C^{2n} be a boundary relation of function type for S*. Consider the selfadjoint extension L := ker[π_1 ∘ Γ] of S. Moreover, let M be the Weyl function associated with Γ, let Ω be the measure in the Herglotz integral representation of M, and let ρ be its trace measure ρ := tr Ω.

❑

Discussion of interface conditions
In our present work we investigate spectral properties of coupled operators L_{A,B} whose interface conditions (A, B) are subject to an additional mixing condition. Namely, besides the general condition

(D3) AB* = BA* and rank(A, B) = n,

we are going to assume the condition

(D4) each set of rank B many different columns of B is linearly independent.
We have ker Ξ = { c ∈ C^n : D_{J*_p} c = 0 }, and see that ker γ ∩ ker Ξ = { c ∈ C^n : D_{J*_p} c = D_{J_m} c = 0 }. Thus dim(ker γ ∩ ker Ξ) = n − #J*_p − #J_m. Consider now the case that #((J*_p \ J_p) ∪ J_m) > n − r. We show that

dim ran γ ≥ (n − r) + #J_p. (5.3)

Choose n − r indices in (J*_p \ J_p) ∪ J_m, and let R be the linear span of the corresponding columns of γ. Denote by π_+ the projection in C^n onto the first n − r coordinates (π_+ = D_{{1,...,n−r}}). The image π_+(R) is the span of n − r columns of the matrix (I_{C^{n−r}}, A_1), and hence has dimension n − r. It follows that dim R = n − r and R ∩ ker π_+ = {0}. Let R' be the linear span of the columns of γ corresponding to indices in J_p. For those indices the columns of γ are actually columns of B, and it follows that dim R' = #J_p and R' ⊆ ker π_+.
Let ρ = ρ_{ac,k} + ρ_{s,k} be the Lebesgue decomposition of ρ w.r.t. ρ_k. Since ρ_k(W_k^{(k)}) = 0, it follows that ρ_{ac,k}(W_k^{(k)}) = 0 or, equivalently, 1_{W_k^{(k)}} · ρ_{ac,k} = 0, hence

1_{W_k^{(k)}} · ρ = 1_{W_k^{(k)}} · ρ_{s,k}. (7.4)

Lemma 2.14 applied to A, B and A_k, B_k gives

rank[ (L_{A,B} − λI)^{−1} − (L_{A_k,B_k} − λI)^{−1} ] ≤ dim( θ_{A,B} ⊖ (θ_{A,B} ∩ θ_{A_k,B_k}) ) = r − k,

and Lemma 2.9 applied to L_{A,B} and L_{A_k,B_k} gives N_{A,B} ≤ r − k, ρ_{s,k}-a.e. and, owing to (7.4), 1_{W_k^{(k)}} · ρ_{s,ac}-a.e. On the other hand, from the definitions of the sets W_k^{(k)} we have N_0 = k, 1_{W_k^{(k)}} · µ-a.e. and hence 1_{W_k^{(k)}} · ρ_{s,ac}-a.e.

The matrix w = ( A  B ; C  D ) is J_{C^n}-unitary, if and only if AB* = BA* and rank(A, B) = n.
Proof. Multiplying out the product w J_{C^n} w* shows that w is J_{C^n}-unitary if and only if the four equations AB* = BA*, CD* = DC*, AD* − BC* = I and DA* − CB* = I hold.