Abstract
We consider selfadjoint operators obtained by pasting a finite number of boundary relations with one-dimensional boundary space. A typical example of such an operator is the Schrödinger operator on a star-graph with a finite number of finite or infinite edges and an interface condition at the common vertex. A wide class of “selfadjoint” interface conditions, subject to an assumption which is generically satisfied, is considered. We determine the spectral multiplicity function on the singular spectrum (continuous as well as point) in terms of the spectral data of decoupled operators.
1 Introduction
In this paper we analyze the singular spectrum of a selfadjoint operator built by gluing together a finite number of selfadjoint operators with simple spectrum by means of interface conditions. We realize the operators with help of boundary triplets, and understand interface conditions as linear dependencies among boundary values. An archetypical example for such a glued together operator is a Schrödinger operator on a star-graph with an interface condition at the inner vertex.
The reader who is not familiar with the language of boundary triplets and couplings of such (e.g. [2, 4]) is advised to pause for a moment and, before proceeding, read through Sect. 1.1 below in order to get an intuition; there we elaborate the above mentioned example in detail. We also point out that all necessary notions and results from the literature are recalled in Sect. 2.
Assume we have selfadjoint operators \(L_l\), \(l=1,\ldots ,n\), in Hilbert spaces \(H_l\), \(l=1,\ldots ,n\), that emerge from boundary relations with one-dimensional boundary value space (cf. Sect. 2.3). Denote by \(L_0\) the orthogonal coupling, i.e., the diagonal operator \(L_0:=\prod _{l=1}^n L_l\) acting in \(H:=\prod _{l=1}^n H_l\). The spectrum of \(L_0\) and its multiplicity are easily understood: letting \(N_l\) and \(N_0\) be the respective spectral multiplicity functions of \(L_l\) and \(L_0\), it holds that \(N_0=\sum _{l=1}^n N_l\). Here, and always below, we tacitly understand that any relation between spectral multiplicity functions is valid only after making an appropriate choice of representatives for each of them.
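On the point spectrum the additivity \(N_0=\sum _{l=1}^n N_l\) can already be seen in a finite-dimensional toy model. The following Python sketch is purely illustrative and not part of the formal development (the matrices are ad hoc choices): eigenvalue multiplicities of a block-diagonal sum add up.

```python
import numpy as np

# Toy model: the orthogonal coupling L0 = L1 x L2 is block diagonal,
# so eigenvalue multiplicities simply add: N_0(x) = N_1(x) + N_2(x).
L1 = np.diag([0.0, 1.0])        # N_1(1) = 1
L2 = np.diag([1.0, 1.0, 2.0])   # N_2(1) = 2

def mult(L, x, tol=1e-9):
    """Multiplicity of x as an eigenvalue of the Hermitian matrix L."""
    return int(np.sum(np.abs(np.linalg.eigvalsh(L) - x) < tol))

L0 = np.block([[L1, np.zeros((2, 3))], [np.zeros((3, 2)), L2]])
assert mult(L0, 1.0) == mult(L1, 1.0) + mult(L2, 1.0)  # 3 == 1 + 2
```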
The orthogonal coupling can be seen as gluing together the single operators without allowing any interaction between them. Much more interesting is what happens when the single operators do influence each other after gluing. Assume we have an interface condition, formulated as a linear dependence among boundary values and described by matrices A, B (cf. Sect. 2.4), such that the corresponding operator \(L_{A,B}\) is selfadjoint, and let \(N_{A,B}\) be the spectral multiplicity function of \(L_{A,B}\). The basic question is:
How to compute \(N_{A,B}\) from \(N_l\), \(l=1,\ldots ,n\) ?
By a simple dimension argument, it always holds that \(N_{A,B}(x)\le n\). Further, the Kato-Rosenblum theorem fully settles the question on the absolutely continuous part of the spectrum: there we have \(N_{A,B}(x)=N_0(x)\) since \(L_{A,B}\) and \(L_0\) are finite rank perturbations of each other in the resolvent sense. Contrasting this, the singular spectrum (eigenvalues as well as singular continuous part) may behave wildly, and much less is known.
A classical result for the case \(n=2\) is Kac’s Theorem [11, 12], which says in the present language that for a particular interface condition, namely the standard condition, the multiplicity of the singular spectrum does not exceed 1. His proof proceeds via an analysis of the Cauchy transforms of the involved spectral measures and the Titchmarsh-Kodaira formula. Different proofs are given in [7] (by using subordinacy theory) and in [21] (by reducing to the Aronszajn-Donoghue Theorem). Generalizations of the theorem of Kac are given in [22] and in [16]. In the first reference, we allow arbitrary n but still prescribe the standard interface condition. The second reference goes in a different direction: there still \(n=2\), but the boundary value spaces are allowed to have higher dimensions and a certain class of interface conditions is permitted.
In the present paper we allow arbitrary n and consider a fairly rich class of interface conditions defined by an algebraic property (cf. Sect. 3). Roughly speaking, this property expresses that all single operators influence each other and no splitting into independent blocks occurs, though one has to be careful with this intuition: it is only a very rough one. The previously considered standard condition belongs to that class. A striking difference is that interface conditions of the presently considered class can give rise to perturbations of any rank up to n, while the standard condition always yields a rank one perturbation. Our main result says that, letting r be the rank of the perturbation, on the singular spectrum the relation
$$\begin{aligned} N_{A,B}(x)=\max \big \{N_0(x)-(n-r),0\big \} \end{aligned}$$
(1.1)
holds. A formulation in terms of spectral measures, and without reference to a particular choice of representatives of multiplicity functions, is given in Theorem 4.3. The proof of the theorem is carried out by an in-depth analysis of Cauchy integrals and of the matrix measure in the Titchmarsh-Kodaira formula. We exploit algebraic properties of the considered class of interface conditions to obtain the rank of the derivative of that matrix measure w.r.t. its trace measure, and this leads to (1.1).
Other approaches to the basic question emphasised above might proceed via the already mentioned work of M. Malamud [16], or via a generalization of the Aronszajn-Donoghue theorem given by C. Liaw and S. Treil in [15]. To the best of our knowledge, no such results have been obtained so far using these approaches.
The present paper is organized as follows. In Sect. 2 we introduce objects and tools needed to formulate and prove our result. In particular, these include boundary relations, pasting of such with selfadjoint interface conditions, matrix-valued Weyl functions and corresponding measures. In Sect. 3 we discuss in detail the class of selfadjoint interface conditions that we consider, namely, their description in terms of matrices A, B and the additional assumption that we make about them. In Sect. 4 we give the statement of the main result, Theorem 4.3. Before that we formulate the result separately for the case of point spectrum, Theorem 4.1, since this can be shown under slightly weaker assumptions. In Sect. 5 we prove Theorem 4.1. In Sect. 6 we prove the part of Theorem 4.3 concerning the case where many layers of the spectrum “overlap”. In Sect. 7 we prove the remaining part of Theorem 4.3.
1.1 The Schrödinger Operator on a Star-Graph
Let us discuss, as an example, the Schrödinger operator on a metric star-graph. In fact, this example can serve as a model for the general case. We denote the edges of the graph by \(E_1,\ldots ,E_n\) and associate them with intervals \([0,e_l)\), where the endpoint 0 corresponds to the inner vertex and \(e_l\) can be finite or infinite. Assume we are given data:
(1) Real-valued potentials \(q_l\in L_{1,{{\,\textrm{loc}\,}}}([0,e_l))\) for \(l=1,\ldots ,n\).
(2) Boundary conditions at \(e_l\) for those \(l=1,\ldots ,n\) for which \(q_l\) is regular or in the limit circle case at \(e_l\).
For \(l=1,\ldots ,n\) let \(H_l\) be the Hilbert space \(L_2(0,e_l)\), and let \(L_l\) be the selfadjoint Schrödinger operator with Dirichlet boundary conditions:
Now assume we have an interface condition at the inner vertex written in the form
where A and B are \(n\times n\) matrices such that
Here (A, B) denotes the \(n\times 2n\) matrix which has A as its first n columns and B as its last n columns. The operator \(L_{A,B}\) is defined in the Hilbert space \(H:=\prod _{l=1}^n L_2(0,e_l)\) and acts by the rule
on the domain
Since the matrices A and B satisfy (1.3), the operator \(L_{A,B}\) is selfadjoint [14]. Obviously, this correspondence between matrices and operators is not one-to-one: one can multiply A and B simultaneously from the left by any invertible matrix, and this defines the same interface condition and the same operator.
The orthogonal coupling \(L_0:=\prod _{l=1}^nL_l\) corresponds to the matrices \(A_0=I\), \(B_0=0\): in the above notation \(L_0=L_{I,0}\). The standard interface condition corresponds to the matrices
where the first \(n-1\) rows express continuity of the solution at the vertex, and the last row corresponds to the Kirchhoff condition that the sum of derivatives vanishes. The class of interface conditions we consider in the present paper is given by those matrices A, B subject to (1.3) which satisfy in addition the following assumption: each set of \({{\,\textrm{rank}\,}}B\) many different columns of B is linearly independent. Under this assumption the rank of the difference of the resolvents of \(L_{A,B}\) and \(L_0\) equals \(\mathrm{rank\,}B\), and (1.1) holds.
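The matrices of the standard condition, and the assumption just formulated, can be checked numerically. The following Python sketch is illustrative only (the choice \(n=3\) is arbitrary): it verifies (1.3), computes \({{\,\textrm{rank}\,}}B=1\), and confirms that every set of \({{\,\textrm{rank}\,}}B\) columns of B is linearly independent.

```python
import numpy as np

# Standard (continuity + Kirchhoff) interface condition for n = 3 edges:
# rows 1..n-1 of A encode continuity of the boundary values, the last
# row of B encodes the Kirchhoff condition (sum of derivatives = 0).
n = 3
A = np.zeros((n, n)); B = np.zeros((n, n))
for i in range(n - 1):
    A[i, i], A[i, i + 1] = 1.0, -1.0   # a_i - a_{i+1} = 0
B[n - 1, :] = 1.0                      # b_1 + ... + b_n = 0

# (1.3): A B* = B A* and rank (A, B) = n
assert np.allclose(A @ B.conj().T, B @ A.conj().T)
assert np.linalg.matrix_rank(np.hstack([A, B])) == n

# rank B = 1, and every single column of B is nonzero, so each set of
# rank(B) = 1 columns of B is linearly independent.
r = np.linalg.matrix_rank(B)
assert r == 1
assert all(np.linalg.norm(B[:, j]) > 0 for j in range(n))
```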
2 Preliminaries
2.1 Boundary Behavior of Herglotz Functions
Recall the notion of matrix valued Herglotz functions (often also called Nevanlinna functions).
2.1 Definition
An analytic function \(M:{\mathbb {C}}\setminus {\mathbb {R}}\rightarrow {\mathbb {C}}^{n\times n}\) is called a (\(n\times n\)-matrix valued) Herglotz function, if
(H1) \(M({\overline{z}})=M(z)^*\), \(z\in {\mathbb {C}}\setminus {\mathbb {R}}\).
(H2) For each \(z\in {\mathbb {C}}^+\), the matrix \({{\,\textrm{Im}\,}}M(z):=\frac{1}{2i}(M(z)-M(z)^*)\) is positive semidefinite.
Any Herglotz function M admits an integral representation. Namely, there exists a finite positive \(n\!\times \!n\)-matrix valued Borel measure \(\Omega \) (which means that \(\Omega (\Delta )\) is a positive semidefinite matrix for every Borel set \(\Delta \)), a selfadjoint matrix a, and a positive semidefinite matrix b, such that
For the scalar case, this goes back to [10], for the matrix valued case see, e.g., [8, Theorem 5.4].
We use several known facts about the boundary behavior of Herglotz functions which relate normal or nontangential boundary limits to the measure \(\Omega \) in (2.1). The key notion in this context is the symmetric derivative of one measure relative to another. If \(\sigma \) is a positive Borel measure and \(\nu \) is a positive Borel measure or a complex Borel measure absolutely continuous w.r.t. \(\sigma \), then we define the symmetric derivative \(\frac{d\nu }{d\sigma }(x)\) at a point \(x\in {\mathbb {R}}\) as the limit
$$\begin{aligned} \frac{d\nu }{d\sigma }(x):=\lim _{\varepsilon \downarrow 0}\frac{\nu ([x-\varepsilon ,x+\varepsilon ])}{\sigma ([x-\varepsilon ,x+\varepsilon ])} , \end{aligned}$$
whenever it exists in \([0,\infty ]\), or in \({\mathbb {C}}\), respectively.
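For absolutely continuous \(\nu \) the symmetric derivative w.r.t. the Lebesgue measure \(\lambda \) simply recovers the density at points of continuity. A small numeric sketch in Python (illustrative only; the measure is an ad hoc choice):

```python
# Symmetric derivative of d nu = t^2 d lambda w.r.t. Lebesgue measure
# lambda: the quotient nu([x-e, x+e]) / lambda([x-e, x+e]) tends to the
# density t^2 evaluated at x.
def nu(a, b):                              # nu([a,b]) = integral of t^2
    return (b**3 - a**3) / 3.0

def sym_derivative(x, eps=1e-6):
    return nu(x - eps, x + eps) / (2 * eps)   # lambda([x-e,x+e]) = 2e

assert abs(sym_derivative(1.0) - 1.0) < 1e-9
assert abs(sym_derivative(0.5) - 0.25) < 1e-9
```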
2.2 Proposition
([5]) There exists a Borel set \(X\subseteq {\mathbb {R}}\) with \(\sigma ({\mathbb {R}}\setminus X)=0\), such that the symmetric derivative exists for all \(x\in X\) and the function \(\frac{d\nu }{d\sigma }\) is measurable on X.
By the de la Vallée Poussin theorem [5, 20] the function \(\frac{d\nu }{d\sigma }\) is a Radon-Nikodym derivative of \(\nu \) with respect to \(\sigma \). We formulate two corollaries of the de la Vallée Poussin theorem which will be of particular convenience to us in what follows. An explicit proof can be found in [22].
The first corollary concerns properties of sets. A set \(X\subseteq {\mathbb {R}}\) is called \(\nu \)-zero, if there exists a Borel set \(X'\supseteq X\) such that \(\nu (X')=0\); a set is called \(\nu \)-full, if its complement is \(\nu \)-zero. For a Borel set X the measure \(\mathbbm {1}_X\cdot \nu \) is defined as \((\mathbbm {1}_X\cdot \nu )(\Delta )=\nu (X\cap \Delta )\) on Borel sets \(\Delta \).
2.3 Corollary
Let \(\nu \) and \(\sigma \) be positive Borel measures, and let \(X\subseteq {\mathbb {R}}\).
(1) If \(\frac{d\nu }{d\sigma }(x)=0\) for all \(x\in X\), then X is \(\nu \)-zero.
(2) If the set X is \(\nu \)-zero, then \(\frac{d\nu }{d\sigma }(x)=0\) for \(\sigma \)-a.a. \(x\in X\).
(3) If X is a Borel set and \(\frac{d\nu }{d\sigma }(x)\in [0,\infty )\) for all \(x\in X\), then \(\mathbbm {1}_X\cdot \nu \ll \sigma \).
(4) If X is a Borel set and \(\frac{d\nu }{d\sigma }(x)\in (0,\infty )\) for all \(x\in X\), then \(\mathbbm {1}_X\cdot \nu \sim \mathbbm {1}_X\cdot \sigma \).
The second corollary concerns properties of the symmetric derivative.
2.4 Corollary
Let \(\nu \) and \(\sigma \) be positive Borel measures on \({\mathbb {R}}\). Let \(\nu =\nu _{ac}+\nu _s\) and \(\sigma =\sigma _{ac}+\sigma _s\) be the Lebesgue decompositions of \(\nu \) with respect to \(\sigma \) and of \(\sigma \) with respect to \(\nu \), respectively. Then
(1) \(\frac{d\nu }{d\sigma }(x)\in [0,\infty )\), \(\sigma \)-a.e.
(2) \(\frac{d\nu }{d\sigma }(x)\in (0,\infty ]\), \(\nu \)-a.e.
(3) \(\frac{d\nu }{d\sigma }(x)\in (0,\infty )\), \(\nu _{ac}\)-a.e. and \(\sigma _{ac}\)-a.e.
(4) \(\frac{d\nu }{d\sigma }(x)=\infty \), \(\nu _s\)-a.e.
(5) \(\frac{d\nu }{d\sigma }(x)=0\), \(\sigma _s\)-a.e.
The following statements concern the relationship between boundary behavior of Herglotz functions and symmetric derivatives of the measures associated with them.
2.5 Proposition
(1) Let \(\nu \) and \(\sigma \) be finite positive Borel measures and \(x\in {\mathbb {R}}\). Assume that \(\frac{d\nu }{d\sigma }(x)\) exists in \([0,\infty )\) and \(\frac{d\sigma }{d\lambda }(x)\) exists in \((0,\infty ]\). Let \(m_\nu \) and \(m_\sigma \) be two Herglotz functions with the measures \(\nu \) and \(\sigma \), respectively, in their integral representations (2.1). Then
$$\begin{aligned} \lim _{z\downarrow x}\frac{{{\,\textrm{Im}\,}}m_\nu (z)}{{{\,\textrm{Im}\,}}m_\sigma (z)}=\frac{d\nu }{d\sigma }(x) . \end{aligned}$$
(2) Let \(\nu \) be a finite positive Borel measure, let \(m_\nu \) be a Herglotz function with the measure \(\nu \) in its integral representation, and let \(x\in {\mathbb {R}}\). If \(\frac{d\nu }{d\lambda }(x)=\infty \), then
$$\begin{aligned} \lim _{z\downarrow x}{{\,\textrm{Im}\,}}m_\nu (z)=\infty . \end{aligned}$$
(3) Let \(\nu \) and \(\sigma \) be finite positive Borel measures with \(\nu \ll \sigma \), and let \(\sigma _s\) be the singular part of \(\sigma \) w.r.t. \(\lambda \). Again let \(m_\nu \) and \(m_\sigma \) be two Herglotz functions with \(\nu \) or \(\sigma \), respectively, in their integral representations. Then
$$\begin{aligned} \lim _{z\downarrow x}\frac{m_\nu (z)}{m_\sigma (z)}=\frac{d\nu }{d\sigma }(x)\quad \text {for }\sigma _s\text {-a.a. }x\in {\mathbb {R}} . \end{aligned}$$
Note that in (1) we can use \(d\sigma :=\frac{d\lambda }{1+x^2}\), which implies
$$\begin{aligned} \lim _{z\downarrow x}{{\,\textrm{Im}\,}}m_\nu (z)=\pi \,\frac{d\nu }{d\lambda }(x) \end{aligned}$$
whenever \(\frac{d\nu }{d\lambda }(x)\) exists and is finite.
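The boundary relation \({{\,\textrm{Im}\,}}m_\nu (x+iy)\rightarrow \pi \frac{d\nu }{d\lambda }(x)\) as \(y\downarrow 0\), valid wherever the Lebesgue density exists finitely, admits a quick numeric check. The following Python sketch is illustrative only (\(\nu \) is an ad hoc choice):

```python
import cmath, math

# For nu = Lebesgue measure restricted to [0,1], the Cauchy transform is
# m(z) = int_0^1 dt/(t - z) = log(1 - z) - log(-z), and for x in (0,1)
# Im m(x + iy) tends to pi * (d nu / d lambda)(x) = pi as y -> 0.
def m(z):
    return cmath.log(1 - z) - cmath.log(-z)

x, y = 0.5, 1e-8
assert abs(m(x + 1j * y).imag - math.pi) < 1e-6
```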
For a more detailed compilation and references about symmetric derivatives and boundary values we refer the reader to [22, §2.3 and §2.4].
Item (1) of the above proposition has an obvious extension to matrix valued functions and measures.
2.6 Lemma
Let M be a \(n\times n\)-matrix valued Herglotz function and let \(\Omega \) be the measure in its integral representation (2.1). Denote by \(\rho \) the trace measure of \(\Omega \), i.e., \(\rho (\Delta ):={{\,\textrm{tr}\,}}\Omega (\Delta )\) for every Borel set \(\Delta \). Then \(\Omega \ll \rho \), and for \(\rho \)-a.a. \(x\in {\mathbb {R}}\) the symmetric derivative \(\frac{d\Omega }{d\rho }(x)\) exists and is related to M by
Proof
Let \(x\in {\mathbb {R}}\), and assume that all symmetric derivatives at x of positive and negative parts of the real and the imaginary parts of entries of \(\Omega \) w.r.t. \(\rho \) exist and are finite. This is fulfilled for \(\rho \)-a.a. \(x\in {\mathbb {R}}\).
We have \(M({\bar{z}})=M(z)^*\) and hence
Hence, using linearity of the integral in the measure, Proposition 2.5, (1), and Corollary 2.4, (1),(2), we obtain
\(\square \)
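The content of Lemma 2.6 can be illustrated numerically. In the following Python sketch (illustrative only; the function M below is an ad hoc example, not from the paper) we take \(M(z)={{\,\textrm{diag}\,}}(-1/z,\,-1/(z-1))\), whose measure is \(\Omega =\delta _0\oplus \delta _1\) with trace measure \(\rho =\delta _0+\delta _1\), and check that the normal limit of \({{\,\textrm{Im}\,}}M(z)/{{\,\textrm{tr}\,}}{{\,\textrm{Im}\,}}M(z)\) at \(x=0\) recovers \(\frac{d\Omega }{d\rho }(0)={{\,\textrm{diag}\,}}(1,0)\).

```python
import numpy as np

# M(z) = diag(-1/z, -1/(z-1)) is a 2x2 Herglotz function whose measure
# is Omega = delta_0 (+) delta_1; its trace measure is rho = delta_0 +
# delta_1.  Near x = 0, Im M / tr Im M approximates dOmega/drho(0).
def M(z):
    return np.diag([-1 / z, -1 / (z - 1)])

Mz = M(0 + 1e-8j)
ImM = (Mz - Mz.conj().T) / 2j
dOmega = ImM / np.trace(ImM).real
assert np.allclose(dOmega.real, np.diag([1.0, 0.0]), atol=1e-6)
```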
2.2 Spectral Multiplicity
Let L be a selfadjoint operator in a Hilbert space H and \(E_L\) be its projection valued spectral measure. A subspace G of H is called generating, if
The minimum of the dimensions of generating subspaces is called the (global) spectral multiplicity of the operator L and is denoted by \({{\,\textrm{mult}\,}}L\).
The operators that we deal with always have finite spectral multiplicity, hence we shall assume from now on that \({{\,\textrm{mult}\,}}L<\infty \). There exist elements \(g_1,\ldots ,g_{{{\,\textrm{mult}\,}}L}\in H\), such that their linear span is a generating subspace and the positive measures
are ordered in the sense of absolute continuity as
An element \(g_1\) occurring in a collection of elements with these properties is called an element of maximal type.
Fix a choice of such elements \(g_1,\ldots ,g_{{{\,\textrm{mult}\,}}L}\). Then the operator L is unitarily equivalent to the operator of multiplication by the independent variable in the space \(\prod _{l=1}^{{{\,\textrm{mult}\,}}L}L_2({\mathbb {R}},\nu _l)\). We choose Borel sets \(Y_1,\ldots ,Y_{{{\,\textrm{mult}\,}}L}\) such that
and define the (local) spectral multiplicity function of L as the equivalence class (i.e., functions which are \(E_L\)-a.e. equal) of the function
The need to consider an equivalence class of functions arises since the sets \(Y_l\) are defined only up to \(\nu _1\)-zero sets, and thus the function \(\#\{l:x\in Y_l\}\) can be changed on \(\nu _1\)-zero sets.
Intuitively, the sets \(Y_l\) correspond to the “layers” of the spectrum, and hence indeed \(N_L\) expresses spectral multiplicity in a natural way. If x is an eigenvalue of L, then \(N_L(x)=\mathrm{dim\,ker\,}(L-xI)\) is the usual multiplicity of an eigenvalue.
The spectral multiplicity function is a unitary invariant of the operator and does not depend on the choice of the elements \(g_1,\ldots ,g_{{{\,\textrm{mult}\,}}L}\).
2.7 Remark
We will also speak of the spectral multiplicity function of a selfadjoint linear relation L in a Hilbert space H. This notion is defined by simply ignoring the multivalued part (and doing so is natural, since one can think of the multivalued part as an eigenspace for the eigenvalue \(\infty \)). To be precise, let \({{\,\textrm{mul}\,}}L:=\{g\in H:(0;g)\in L\}\). Then
is a selfadjoint operator (recall that we identify operators with their graphs) in the Hilbert space \(({{\,\textrm{mul}\,}}L)^\perp \), and \(L=L_{op}\oplus (\{0\}\times {{\,\textrm{mul}\,}}L)\). Now we define \(N_L:=N_{L_{op}}\).
The following classical fact will be used below.
2.8 Lemma
Let \(L_1\) and \(L_2\) be selfadjoint relations in Hilbert spaces \(H_1\) and \(H_2\), respectively, with finite multiplicities. Set \(H:=H_1\oplus H_2\) and \(L:=L_1\oplus L_2\). Let \(\mu =(E_{L_1}f,f)\sim E_{L_1}\), \(\nu =(E_{L_2}g,g)\sim E_{L_2}\) be scalar measures defined by elements of maximal type f for \(L_1\) and g for \(L_2\) via (2.3). Let
be the Lebesgue decompositions of the measures \(\mu \), \(\nu \) with respect to each other: \(\mu _{ac}\ll \nu \), \(\mu _s\perp \nu \); \(\nu _{ac}\ll \mu \), \(\nu _s\perp \mu \), \(\mu _{ac}\sim \nu _{ac}\). Then
Consider the measure \(\mu +\nu \) and the sets
Then the sets \(X_1\setminus X_2\), \(X_1\cap X_2\) and \(X_2\setminus X_1\) carry the measures \(\mu _s\), \(\mu _{ac}\sim \nu _{ac}\) and \(\nu _s\), respectively. We see that caution is needed when dealing with local spectral multiplicities: the values of the function \(N_{L_2}\) have no meaning on the set \(X_1\setminus X_2\) and can be changed arbitrarily there, and the same holds for \(N_{L_1}\) on \(X_2\setminus X_1\). So the statement \(N_L=N_{L_1}+N_{L_2}\), natural as it looks, is in fact wrong: the functions \(N_{L_1}\) and \(N_{L_2}\) are defined non-uniquely, each in its own sense.
The next lemma is a folklore-type result on the behavior of the local spectral multiplicity of an operator under a finite-dimensional perturbation, generalized to the case of linear relations. We do not know an explicit reference for it, and for the reader’s convenience we provide its proof.
2.9 Lemma
Let \(L_1\) and \(L_2\) be selfadjoint relations in the Hilbert space H such that for (some \(\Leftrightarrow \) every) \(\lambda \in {\mathbb {C}}\setminus {\mathbb {R}}\)
Let \(E_1, E_2\) be the projection valued spectral measures of their operator parts, let \(\mu =(E_1f,f)\sim E_1\) and \(\nu =(E_2g,g)\sim E_2\) be scalar measures defined by elements of maximal type f for \(L_1\) and g for \(L_2\) via (2.3), and let \(N_1,N_2\) be their local spectral multiplicity functions. Let
be Lebesgue decompositions of the measures \(\mu \), \(\nu \) with respect to each other: \(\mu _{ac}\ll \nu \), \(\mu _s\perp \nu \); \(\nu _{ac}\ll \mu \), \(\nu _s\perp \mu \), \(\mu _{ac}\sim \nu _{ac}\). Then
(1) \(N_1\leqslant k\), \(\mu _s\)-a.e.,
(2) \(|N_1-N_2|\leqslant k\), \(\mu _{ac}\)-a.e. and \(\nu _{ac}\)-a.e.,
(3) \(N_2\leqslant k\), \(\nu _s\)-a.e.
Proof
Consider the symmetric linear relation \(S:=L_1\cap L_2\) (recall once more that we identify operators with their graphs). It has an orthogonal decomposition into a selfadjoint and a simple (i.e., completely nonselfadjoint) symmetric part. The first, a selfadjoint linear relation L, acts in the subspace
The second, a simple symmetric operator \({\widetilde{S}}\), acts in the subspace
The subspaces \(H_L\) and \({\widetilde{H}}_S\) reduce the relations S, \(L_1\), \(L_2\), and \(S^*\), see [2, Lemma 3.4.2], and \(S=L\oplus {\widetilde{S}}\). Thus \(L_1\) and \(L_2\) have orthogonal decompositions
where the linear relations \({\widetilde{L}}_1\) and \({\widetilde{L}}_2\) are selfadjoint extensions of \({\widetilde{S}}\).
Consider \(\lambda \in {\mathbb {C}}\setminus {\mathbb {R}}\) and the subspace
We have: for every \(v\in H_{\lambda }\)
hence \((u;v)\in (L_1-\lambda I)\cap (L_2-\lambda I)=S-\lambda I\) and \(v\in {{\,\textrm{ran}\,}}(S-\lambda I)\). The converse is also true, and therefore
By assumption the subspace \(H_{\lambda }\) has codimension k, hence the deficiency index of S is (k, k). The deficiency index of \({\widetilde{S}}\) coincides with that of S and hence is also (k, k). Therefore the spectral multiplicities of both relations \({\widetilde{L}}_1\) and \({\widetilde{L}}_2\) do not exceed k, because the defect subspaces of a simple symmetric operator are generating subspaces for the operator parts of its selfadjoint extensions.
The rest follows from Lemma 2.8. One should write out the “triple” Lebesgue decomposition for the scalar spectral measures of maximal type of L, \({\widetilde{L}}_1\) and \({\widetilde{L}}_2\) w.r.t. each other and count the differences of multiplicities a.e. with respect to each part according to Lemma 2.8; we skip the details. \(\square \)
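A finite-dimensional illustration of Lemma 2.9 (with matrices in place of relations; purely illustrative and not from the paper): if the difference of two Hermitian matrices has rank k, then so has the difference of their resolvents, and eigenvalue multiplicities can change by at most k.

```python
import numpy as np

# L2 is a rank-one perturbation of L1, so k = 1 and the multiplicity of
# any eigenvalue changes by at most 1 (here: 3 vs 2 at x = 1).
L1 = np.diag([1.0, 1.0, 1.0, 2.0])
P = np.zeros((4, 4)); P[0, 0] = 5.0      # rank-one perturbation
L2 = L1 + P                               # eigenvalues 6, 1, 1, 2

def mult(L, x, tol=1e-9):
    return int(np.sum(np.abs(np.linalg.eigvalsh(L) - x) < tol))

k = np.linalg.matrix_rank(L2 - L1)
assert k == 1
assert abs(mult(L1, 1.0) - mult(L2, 1.0)) <= k
```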
2.3 Boundary Relations and the Titchmarsh–Kodaira Formula
Using the abstract setting of boundary relations leads to a unified approach to the spectral theory of many concrete operators. A recent standard reference for this theory is [2]; we shall sometimes also refer to [3]. Let us now recall some basic facts used in the present paper.
2.10 Definition
Let H and B be Hilbert spaces, S be a closed symmetric linear relation in H and \(\Gamma \) be a linear relation from \(H^2\) to \(B^2\) (i.e., \(\Gamma \subseteq H^2\times B^2\)). Then \(\Gamma \) is called a boundary relation for \(S^*\), if the following holds.
(BR1) The domain of \(\Gamma \) is contained in \(S^*\) and is dense there.
(BR2) For all \(((f;g);(a;b)),((f';g');(a';b'))\in \Gamma \) the abstract Green’s identity holds:
$$\begin{aligned} (g,f')_{H}-(f,g')_{H}=(b,a')_{B}-(a,b')_{B} . \end{aligned}$$
(BR3) The relation \(\Gamma \) is maximal with respect to the properties (BR1) and (BR2).
If \({{\,\textrm{mul}\,}}\Gamma \cap \big (\{0\}\times B\big )=\{0\}\), then we say that \(\Gamma \) is of function type. If \({{\,\textrm{mul}\,}}\Gamma =\{(0;0)\}\), then \(\Gamma \) is called a boundary function.
In what follows we consider only boundary relations with \(B={\mathbb {C}}^n\). In this case it is known that the deficiency indices of S are equal and that the domain of \(\Gamma \) is equal to \(S^*\). If additionally \({{\,\textrm{mul}\,}}\Gamma =\{(0;0)\}\), then \(\Gamma \) is a bounded operator from \(S^*\) to \({\mathbb {C}}^{2n}\) which can be written in the form \(\Gamma =(\Gamma _1,\Gamma _2)\), \(\Gamma _i:S^*\rightarrow {\mathbb {C}}^n\), \(i=1,2\); the collection \(({\mathbb {C}}^n;\Gamma _1,\Gamma _2)\) is then called an (ordinary) boundary triplet for \(S^*\).
For the case of a boundary triplet \(({\mathbb {C}}^n;\Gamma _1,\Gamma _2)\) for \(S^*\) symmetric and selfadjoint extensions of S can be described in a neat way. To this end introduce the indefinite scalar product \([\cdot ,\cdot ]\) on \({\mathbb {C}}^n\times {\mathbb {C}}^n\) defined by the Gram operator
Explicitly, this is
Then symmetric extensions \(A\subseteq {\mathcal {H}}\times {\mathcal {H}}\) of S correspond bijectively to \([\cdot ,\cdot ]\)-neutral subspaces \(\theta \subseteq {\mathbb {C}}^n\times {\mathbb {C}}^n\) by means of the inverse image map \(\Gamma ^{-1}\). In this correspondence A is selfadjoint if and only if \(\theta \) is a maximal neutral subspace (equivalently, \(\theta \) is neutral and \(\dim \theta =n\)). Such subspaces are sometimes also called Lagrange planes, e.g., [9]. One has \(S=\ker \Gamma \), and the operator \({\widetilde{\Gamma }}\) induced by \(\Gamma \) on the quotient space \(S^*/S\) is a linear isomorphism onto \({\mathbb {C}}^{2n}\). In the general case of a boundary relation \(\Gamma \) there is a linear isomorphism between the quotient spaces \(S^*/S=\mathrm{{{\,\textrm{dom}\,}}\,}\Gamma /\ker \Gamma \) and \({{\,\textrm{ran}\,}}\Gamma /\mathrm{{{\,\textrm{mul}\,}}\,}\Gamma \).
2.11 Definition
Let \(\Gamma \) be a boundary relation. The map M which assigns to a point \(z\in {\mathbb {C}}\setminus {\mathbb {R}}\) the linear relation
is called the Weyl family of \(\Gamma \).
The Weyl family M of a boundary relation of function type (where \(B={\mathbb {C}}^n\)) is a \(n\times n\)-matrix-valued Herglotz function, and is also called the Weyl function.
The Weyl family of a boundary relation is intimately related to the spectral theory of selfadjoint extensions of S, specifically, to the spectrum of the selfadjoint (see [3, Proposition 4.22]) relation
where \(\pi _1:{\mathbb {C}}^n\times {\mathbb {C}}^n\rightarrow {\mathbb {C}}^n\) is the projection onto the first component, \(\pi _1((a;b)):=a\). In the case of a boundary triplet \(({\mathbb {C}}^n;\Gamma _1,\Gamma _2)\) we have \(L=\ker \Gamma _1\). Since the Weyl family comprises the information given by defect elements, the selfadjoint part of S (including its multivalued part) naturally cannot be accessed using M(z). For this reason a statement about the spectrum can only be expected for boundary relations of simple symmetric linear relations (which are therefore operators, but may be nondensely defined, so that their adjoint may be a multivalued linear relation). A cornerstone in Weyl theory is the Titchmarsh–Kodaira formula. We give a formulation which is in fact a generalization of the Titchmarsh–Weyl–Kodaira theory [13, 23, 24] for one-dimensional Schrödinger operators to the abstract setting of boundary relations.
2.12 Theorem
(Titchmarsh–Kodaira formula) Let S be a closed simple symmetric linear relation in a Hilbert space H and \(\Gamma \subseteq H^2\times {\mathbb {C}}^{2n}\) be a boundary relation of function type for \(S^*\). Consider the selfadjoint extension \(L:=\ker [\pi _1\circ \Gamma ]\) of S. Moreover, let M be the Weyl function associated with \(\Gamma \), let \(\Omega \) be the measure in the Herglotz integral representation of M, and let \(\rho \) be its trace measure \(\rho :={{\,\textrm{tr}\,}}\Omega \).
Then the operator part of L is unitarily equivalent to the operator of multiplication by the independent variable in the space \(L_2({\mathbb {R}},\Omega )\). The spectral multiplicity function \(N_L\) of L is given as
2.4 Pasting of Boundary Relations
In this subsection we describe what we understand by a pasting of boundary relations by means of interface conditions. For more details on operations with boundary relations we refer the reader to [2] or [22, §3]. We restrict ourselves to pastings of relations from a particular simple class (\(B={\mathbb {C}}\)), however the construction that we use would also make sense in a more general case.
We use the word pasting in two meanings: for boundary relations \(\Gamma _l\) and for selfadjoint linear relations \(L_l=\ker [\pi _1\circ \Gamma _l]\). For the former, a pasting is obtained as a fractional linear transform of \(\Gamma _0=\prod _{l=1}^n\Gamma _l\) defined by a matrix w, where w is J-unitary and is constructed in a non-unique way from the matrices A and B which determine the interface condition. For the latter, the pasting is uniquely determined by A and B.
We consider the following setting.
(D1) Let \(n\ge 2\) and for \(l\in \{1,\ldots ,n\}\) let either \(H_l\) be a Hilbert space, \(S_l\) a simple closed symmetric linear relation in \(H_l\) with deficiency index (1, 1) (which is hence an operator, possibly not densely defined), \(\Gamma _l\subseteq H_l^2\times {\mathbb {C}}^2\) a boundary relation of function type for \(S_l^*\) and \(L_l=\ker [\pi _1\circ \Gamma _l]\) a selfadjoint linear relation, or (an artificial edge) \(H_l=\{0\}\), \(S_l=L_l=S_l^*=\{(0;0)\}\), and \(\Gamma _l=\{0\}^2\times {{\,\textrm{mul}\,}}\Gamma _l\subseteq \{0\}^2\times {\mathbb {C}}^2\). We assume that for at least one l we do not have an artificial edge.
(D2) Denote \(H:=\prod _{l=1}^nH_l\), \(S:=\prod _{l=1}^nS_l\), \(\Gamma _0:=\prod _{l=1}^n\Gamma _l\) and \(L_0:=\prod _{l=1}^nL_l\).
(D3) Let \(A,B\in {\mathbb {C}}^{n\times n}\) be such that \(AB^*=BA^*\) and \({{\,\textrm{rank}\,}}(A,B)=n\).
Note that our assumptions in (D1) imply that boundary relations \(\Gamma _l\) which are not boundary functions must be artificial. Namely, if \({{\,\textrm{mul}\,}}\Gamma _l\ne \{0\}^2\), then by a dimension argument we must have \(H_l=\{0\}\) and \(\Gamma _l=\{0\}^2\times \{(a;m_la):a\in {\mathbb {C}}\}\) with some constant \(m_l\in {\mathbb {R}}\) serving as Weyl function. Allowing such degenerate cases has meaningful applications: in [22], using a construction with an artificial edge, we showed that the Aronszajn–Donoghue result [1, 6] can be deduced from the result of that work (these results actually imply each other).
We see that S is a simple closed symmetric linear relation in H with deficiency index at most (n, n), and
is a boundary relation for \(S^*\). The Weyl family of \(\Gamma _0\) is given as the diagonal relation \(M_0={{\,\textrm{diag}\,}}(m_1,\ldots ,m_n)\), where \(m_l\) are Weyl functions of boundary relations \(\Gamma _l\) or the real constants associated with artificial edges, respectively. Since \(\Gamma _l\) are of function type, so is \(\Gamma _0\), and \(M_0\) is a diagonal matrix function. Obviously
and we think of \(L_0\) as the uncoupled selfadjoint relation.
In order to model interaction between the boundary relations \(\Gamma _1,\ldots ,\Gamma _n\), one can use fractional linear transforms defined by \(J_{{\mathbb {C}}^n}\)-unitary matrices w. Recall here that a matrix \(w\in {\mathbb {C}}^{2n\times 2n}\) is called \(J_{{\mathbb {C}}^n}\)-unitary, if \(w^*J_{{\mathbb {C}}^n}w=J_{{\mathbb {C}}^n}\). Also note that w is \(J_{{\mathbb {C}}^n}\)-unitary if and only if \(w^*\) is:
Given matrices A, B which satisfy (D3) we can construct a \(J_{{\mathbb {C}}^n}\)-unitary matrix \(w\in {\mathbb {C}}^{2n\times 2n}\) with A and B used as upper blocks. The following result is of a folklore kind, however, for completeness we provide its proof.
2.13 Lemma
Let \(A,B\in {\mathbb {C}}^{n\times n}\) be given. There exist \(C,D\in {\mathbb {C}}^{n\times n}\) such that the matrix
$$\begin{aligned} w=\begin{pmatrix}A&{}\quad B\\ C&{}\quad D\end{pmatrix} \end{aligned}$$
is \(J_{{\mathbb {C}}^n}\)-unitary, if and only if \(AB^*=BA^*\) and \({{\,\textrm{rank}\,}}(A,B)=n\).
Proof
Multiplying out the product \(wJ_{{\mathbb {C}}^n}w^*\) shows that w is \(J_{{\mathbb {C}}^n}\)-unitary if and only if the four equations
hold.
The backwards implication readily follows: if we find C and D such that (2.7) is \(J_{{\mathbb {C}}^n}\)-unitary, then \(BA^*=AB^*\) and \(\ker [(A,B)^*]=\{0\}\). In order to show the forward implication, assume that A and B satisfy these two conditions. Then the matrix
\(X:=AA^*+BB^*\) is positive definite. Set \(C:=-X^{-1}B\) and \(D:=X^{-1}A\). Then all four relations in (2.8) are fulfilled. \(\square \)
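The completion in the proof can be checked numerically. The following sketch is an illustration only; it assumes the fundamental symmetry \(J_{{\mathbb {C}}^n}=\left( {\begin{matrix} 0 &{} -iI \\ iI &{} 0\end{matrix}} \right) \) (the four equations characterizing \(J_{{\mathbb {C}}^n}\)-unitarity come out the same for the common conventions) and the completion \(X=AA^*+BB^*\), \(C=-X^{-1}B\), \(D=X^{-1}A\) from the proof:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
I = np.eye(n)

# A pair (A, B) with AB* = BA* and rank(A, B) = n: take A Hermitian
# and B = I, so that AB* = A = BA* and (A, B) has full row rank.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (A + A.conj().T) / 2
B = I.copy()

# Completion as in the proof of Lemma 2.13.
X = A @ A.conj().T + B @ B.conj().T     # positive definite
C = -np.linalg.solve(X, B)              # C = -X^{-1} B
D = np.linalg.solve(X, A)               # D =  X^{-1} A

w = np.block([[A, B], [C, D]])
# Assumed convention for the fundamental symmetry J_{C^n}.
J = np.block([[np.zeros((n, n)), -1j * I],
              [1j * I, np.zeros((n, n))]])

assert np.allclose(w.conj().T @ J @ w, J)   # w is J-unitary
assert np.allclose(w @ J @ w.conj().T, J)   # and so is w*
print("w and w* are J-unitary")
```

Since \(J_{{\mathbb {C}}^n}^2=I\), the identities \(w^*J_{{\mathbb {C}}^n}w=J_{{\mathbb {C}}^n}\) and \(wJ_{{\mathbb {C}}^n}w^*=J_{{\mathbb {C}}^n}\) are equivalent, in line with the remark preceding the lemma.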
Given w of the form (2.7), consider the relation
One can show that \(\Gamma _w\) is a boundary relation for \(S^*\) and that the Weyl family of \(\Gamma _w\) is
If \(A+BM_0(z)\) is invertible for every \(z\in {\mathbb {C}}\setminus {\mathbb {R}}\), then
is a matrix valued function and \(\Gamma _w\) is of function type.
The selfadjoint relation (2.5) for \(\Gamma _w\) is given as
It depends only on the first block row of w, i.e., on the matrices A and B, and hence we may legitimately call \(L_{A,B}\) the pasting of the relations \(L_l=\ker (\pi _1\circ \Gamma _l)\) by means of the interface conditions (A, B). We think of \(L_{A,B}\) as the coupled selfadjoint relation.
The relation \(L_{A,B}\) is a finite-rank perturbation of \(L_0\) in the resolvent sense with the actual rank of the perturbation not exceeding n. This holds simply because both are extensions of the symmetry S which has deficiency index at most (n, n). The following lemma helps to estimate this rank.
Denote for \(A,B\in {\mathbb {C}}^{n\times n}\)
$$\begin{aligned} \theta _{A,B}:=\big \{(a;b)\in {\mathbb {C}}^n\times {\mathbb {C}}^n:\ Aa+Bb=0\big \} . \end{aligned}$$(2.11)
2.14 Lemma
Let \(A,B\in {\mathbb {C}}^{n\times n}\) and \(A',B'\in {\mathbb {C}}^{n\times n}\) satisfy assumption (D3). Then
Proof
Under (D3), \(\theta _{A,B}\) and \(\theta _{A',B'}\) are Lagrange planes in \({\mathbb {C}}^n\times {\mathbb {C}}^n\) which correspond to the relations \(L_{A,B}\) and \(L_{A',B'}\) in the sense that \(\Gamma _0^{-1}(\theta _{A,B})=L_{A,B}\) and \(\Gamma _0^{-1}(\theta _{A',B'})=L_{A',B'}\). For each \(z\in {\mathbb {C}}\setminus {\mathbb {R}}\) the rank of the resolvent difference is
We have:
\(\square \)
2.15 Corollary
Let \(A,B\in {\mathbb {C}}^{n\times n}\) satisfy (D3). Then
Proof
We have \(\theta _0:=\theta _{I,0}=\{(a,b)\in {\mathbb {C}}^n\times {\mathbb {C}}^n:\ a=0\}\). By the lemma for \(z\in {\mathbb {C}}\setminus {\mathbb {R}}\)
and this dimension is computed as
\(\square \)
3 Discussion of Interface Conditions
In our present work we investigate spectral properties of coupled operators \(L_{A,B}\) whose interface conditions (A, B) are subject to an additional mixing condition. Namely, besides the general condition
-
(D3)
\(AB^*=BA^*\) and \({{\,\textrm{rank}\,}}(A,B)=n\)
we are going to assume the condition
-
(D4)
every set of \({{\,\textrm{rank}\,}}B\) distinct columns of B is linearly independent.
Note that this is a condition on B only.
We have no fully precise intuition for (D4). However, in some sense it is related to how the different edges are mixed. To illustrate the situation, observe that the matrix \(B_{st}\) from the standard interface conditions in (1.6) satisfies (D4), and indeed the Kirchhoff condition \(\sum _{l=1}^n u_l'(0)=0\) for a star-graph combines all edges. On the other hand, consider for instance the matrices
Obviously, B does not satisfy (D4). Thinking of a situation as in (1.4)–(1.5), we see that interface conditions with this matrix B fail to mix all edges in the second component of their boundary values. For example, the operator corresponding to the interface conditions \((A_1,B)\) splits in two uncoupled parts, one corresponding to the first two edges and another to the third edge. At the same time, the operator corresponding to \((A_2,B)\) will mix all edges.
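The dichotomy just described can be made concrete with a small numerical check of (D4). The matrices below are hypothetical stand-ins (the matrices of the above display and \(B_{st}\) from (1.6) are not reproduced here): a Kirchhoff-type matrix whose only nonzero row consists of ones, and a matrix whose zero pattern decouples the third edge.

```python
import numpy as np
from itertools import combinations

def satisfies_D4(B):
    """(D4): every set of rank(B) distinct columns of B is linearly independent."""
    r = np.linalg.matrix_rank(B)
    return all(
        np.linalg.matrix_rank(B[:, list(cols)]) == r
        for cols in combinations(range(B.shape[1]), r)
    )

# Kirchhoff-type matrix: rank 1, and every single column is nonzero,
# so (D4) holds.  (Hypothetical stand-in for B_st.)
B_kirch = np.array([[0.0, 0.0, 0.0],
                    [0.0, 0.0, 0.0],
                    [1.0, 1.0, 1.0]])

# A matrix that does not mix the third edge with the first two:
# rank 2, but columns 1 and 2 are linearly dependent, so (D4) fails.
B_split = np.array([[1.0, 1.0, 0.0],
                    [1.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0]])

assert satisfies_D4(B_kirch)
assert not satisfies_D4(B_split)
print("B_kirch satisfies (D4); B_split does not")
```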
The condition (D4) can be characterized in different ways. Here we call a linear subspace \({\mathcal {L}}\) of \({\mathbb {C}}^n\) a coordinate plane, if \({\mathcal {L}}={{\,\textrm{span}\,}}\{e_{i_1},\ldots ,e_{i_l}\}\) with some \(1\le i_1<i_2<\ldots <i_l\le n\), where \(e_i\) denotes the i-th canonical basis vector of \({\mathbb {C}}^n\).
3.1 Lemma
For a matrix \(B\in {\mathbb {C}}^{n\times n}\) the following statements are equivalent.
-
(1)
B satisfies (D4).
-
(2)
For every coordinate plane \({\mathcal {L}}\) in \({\mathbb {C}}^n\) with \(\dim {\mathcal {L}}\le {{\,\textrm{rank}\,}}B\), the restriction of B to \({\mathcal {L}}\) is injective.
-
(3)
For every positive semidefinite diagonal matrix X it holds that
$$\begin{aligned} {{\,\textrm{rank}\,}}(BXB^*)=\min \{{{\,\textrm{rank}\,}}X,{{\,\textrm{rank}\,}}B\} . \end{aligned}$$(3.1)
Proof
Assume (1), i.e., that B satisfies (D4), and let \({\mathcal {L}}={{\,\textrm{span}\,}}\{e_{i_1},\ldots ,e_{i_l}\}\) be a coordinate plane with dimension \(l\le {{\,\textrm{rank}\,}}B\). The range of \(B|_{{\mathcal {L}}}\) is the linear span of the corresponding columns of B, and hence has dimension l. Thus \(B|_{{\mathcal {L}}}\) is injective, and we have (2).
Assume (2) and let X be a positive semidefinite diagonal matrix. Since \(X\ge 0\), we have
$$\begin{aligned} BXB^*=\big (BX^{\frac{1}{2}}\big )\big (BX^{\frac{1}{2}}\big )^* , \end{aligned}$$
and hence
$$\begin{aligned} {{\,\textrm{rank}\,}}(BXB^*)={{\,\textrm{rank}\,}}\big (BX^{\frac{1}{2}}\big ) . \end{aligned}$$(3.2)
Denote \(r:={{\,\textrm{rank}\,}}B\) and \(r':={{\,\textrm{rank}\,}}X\). Clearly, \({{\,\textrm{rank}\,}}(BX^{\frac{1}{2}})\le \min \{r,r'\}\). Let \(1\le i_1<\ldots <i_{r'}\le n\) be those indices for which the corresponding diagonal entry of X is nonzero. Then \({{\,\textrm{ran}\,}}X^{\frac{1}{2}}={{\,\textrm{span}\,}}\{e_{i_1},\ldots ,e_{i_{r'}}\}=:{\mathcal {L}}\). If \(r'\le r\), then B acts injectively on \({\mathcal {L}}\), and hence \({{\,\textrm{rank}\,}}(BX^{\frac{1}{2}})=r'\). If \(r'>r\), then B acts injectively on \({{\,\textrm{span}\,}}\{e_{i_1},\ldots ,e_{i_r}\}\), and hence \({{\,\textrm{rank}\,}}(BX^{\frac{1}{2}})=r\). Put together, we arrive at the asserted formula (3.1), i.e., we have (3).
Finally, assume (3) and let \(1\le i_1<\ldots <i_r\le n\). Let X be the diagonal matrix having diagonal entry 1 in the \(i_l\)-th position, \(l=1,\ldots ,r\), and 0 otherwise. Then the linear span \({\mathcal {K}}\) of the \(i_1,\ldots ,i_r\)-th columns of B is nothing but the range of \(BX^{\frac{1}{2}}\). Remembering (3.2), we obtain from (3.1) that
This means that the \(i_1,\ldots ,i_r\)-th columns of B are linearly independent and we have (1).
Thus items (1), (2) and (3) are equivalent. \(\square \)
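Identity (3.1) lends itself to a direct numerical check. A minimal sketch (illustration only; the rank-2 matrix B below is hand-picked so that every pair of its columns is linearly independent, i.e., (D4) holds, and B is real, so \(B^*=B^T\)):

```python
import numpy as np
from itertools import product

# A rank-2 matrix B (n = 3) whose columns are pairwise linearly
# independent, so B satisfies (D4); hand-picked for the illustration.
B = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0]])    # third row = first + second, so rank 2
r = int(np.linalg.matrix_rank(B))
assert r == 2

# Check (3.1) for every 0/1 pattern of the positive semidefinite
# diagonal matrix X.
for diag in product([0.0, 1.0], repeat=3):
    X = np.diag(diag)
    assert (np.linalg.matrix_rank(B @ X @ B.T)
            == min(np.linalg.matrix_rank(X), r))
print("(3.1) holds for all 0/1 diagonal patterns")
```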
The selfadjoint relation \(L_{A,B}\) defined as the pasting by means of the interface conditions (A, B), cf. (2.10), does not determine the pair (A, B) uniquely. Clearly, if \(Q\in {\mathbb {C}}^{n\times n}\) is invertible, then \(L_{A,B}=L_{QA,QB}\). Another way to modify A and B without essentially changing the corresponding operator is to simultaneously permute the columns of A and B; this corresponds to “renumbering the edges”: Let \(\pi \) be a permutation of \(\{1,\ldots ,n\}\), and let P be the corresponding permutation matrix. Then the operator \(L_{A,B}\) defined as the pasting of \(\Gamma _1,\ldots ,\Gamma _n\) with interface conditions (A, B) is unitarily equivalent to the operator built from \(\Gamma _{\pi (1)},\ldots ,\Gamma _{\pi (n)}\) with (AP, BP), and hence these two operators share all their spectral properties.
We are going to use the above two operations to reduce interface conditions to a suitable normal form. For matrices \(A,B,{\tilde{A}},{\tilde{B}}\in {\mathbb {C}}^{n\times n}\) let us write \((A,B)\sim ({\tilde{A}},{\tilde{B}})\), if there exist \(Q,P\in {\mathbb {C}}^{n\times n}\) with Q invertible and P a permutation matrix, such that
Clearly, \(\sim \) is an equivalence relation. If two pairs (A, B) and \(({\tilde{A}},{\tilde{B}})\) are equivalent, then either both satisfy (D3) or neither does. Further, for any matrices \(B,Q,P\in {\mathbb {C}}^{n\times n}\) with Q invertible and P a permutation matrix, either both B and QBP satisfy (D4) or neither does.
3.2 Lemma
Let \(A,B\in {\mathbb {C}}^{n\times n}\) and assume that (D3) and (D4) hold. Denote \(r:={{\,\textrm{rank}\,}}B\). Then there exist \(A_1\in {\mathbb {C}}^{(n-r)\times r}\) and \(A_2\in {\mathbb {C}}^{r\times r}\) with \(A_2=A_2^*\), such that (\(I_{{\mathbb {C}}^k}\) denotes the identity matrix of size \(k\times k\), and block matrices are understood with respect to the decomposition \({\mathbb {C}}^n={\mathbb {C}}^{n-r}\times {\mathbb {C}}^r\))
Proof
By the definition of r we find an invertible \(Q_1\in {\mathbb {C}}^{n\times n}\) such that
with some blocks \(B_{21}\in {\mathbb {C}}^{r\times (n-r)}\) and \(B_{22}\in {\mathbb {C}}^{r\times r}\). Since \({{\,\textrm{rank}\,}}(Q_1A,Q_1B)={{\,\textrm{rank}\,}}(A,B)=n\), the first \(n-r\) rows of \(Q_1A\) are linearly independent. Hence, we find an invertible \(Q_2\in {\mathbb {C}}^{(n-r)\times (n-r)}\) and a permutation matrix \(P\in {\mathbb {C}}^{n\times n}\) such that
with some blocks \(A_{12}\in {\mathbb {C}}^{(n-r)\times r}\) and \(A_{22}\in {\mathbb {C}}^{r\times r}\). By (D4), the last r columns of \(Q_1BP\) are linearly independent. Equivalently, the right lower \(r\times r\)-block \(B_{22}'\) of \(Q_1BP\) is invertible. Setting \(Q_3:=(B_{22}')^{-1}\), we obtain
with some block \(B_{21}'\in {\mathbb {C}}^{r\times (n-r)}\).
Putting together, we have
From \(AB^*=BA^*\), we obtain that \(Q_3A_{22}=(Q_3A_{22})^*\) and \(B_{21}'=-A_{12}^*\). \(\square \)
Conclusion: When investigating spectral properties of pastings of boundary relations with interface conditions given by matrices A and B subject to (D3) and (D4), we may restrict attention without loss of generality to pairs (A, B) of the form
$$\begin{aligned} A=\begin{pmatrix} I_{{\mathbb {C}}^{n-r}} &{} A_1 \\ 0 &{} A_2 \end{pmatrix} ,\qquad B=\begin{pmatrix} 0 &{} 0 \\ -A_1^* &{} I_{{\mathbb {C}}^r} \end{pmatrix} , \end{aligned}$$(3.3)
with some \(A_1\in {\mathbb {C}}^{(n-r)\times r}\) and \(A_2\in {\mathbb {C}}^{r\times r}\), \(A_2=A_2^*\).
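As a sanity check, one can verify numerically that a pair in the normal form of Lemma 3.2 satisfies (D3). The sketch below spells out the assumed block structure (A with upper blocks \(I,A_1\) and lower blocks \(0,A_2\); B with lower blocks \(-A_1^*,I\)) for randomly generated \(A_1\) and Hermitian \(A_2\):

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 5, 3

# Normal-form pair: A = [[I, A1], [0, A2]], B = [[0, 0], [-A1*, I]],
# with A2 Hermitian.
A1 = rng.standard_normal((n - r, r)) + 1j * rng.standard_normal((n - r, r))
A2 = rng.standard_normal((r, r)) + 1j * rng.standard_normal((r, r))
A2 = (A2 + A2.conj().T) / 2

A = np.block([[np.eye(n - r), A1],
              [np.zeros((r, n - r)), A2]])
B = np.block([[np.zeros((n - r, n - r)), np.zeros((n - r, r))],
              [-A1.conj().T, np.eye(r)]])

# (D3): AB* = BA* and rank(A, B) = n; moreover rank B = r.
assert np.allclose(A @ B.conj().T, B @ A.conj().T)
assert np.linalg.matrix_rank(np.hstack([A, B])) == n
assert np.linalg.matrix_rank(B) == r
print("normal form satisfies (D3) with rank B =", r)
```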
3.3 Remark
Two pairs of the form (3.3) can be equivalent modulo \(\sim \). For example,
if there exist permutation matrices \(P_1\in {\mathbb {C}}^{(n-r)\times (n-r)}\) and \(P_2\in {\mathbb {C}}^{r\times r}\) such that
Validity of (D4) for a matrix B of form (3.3) can also be characterized in different ways.
3.4 Lemma
Let \(B_1\in {\mathbb {C}}^{r\times (n-r)}\), and set
$$\begin{aligned} B:=\begin{pmatrix} 0 &{} 0 \\ B_1 &{} I_{{\mathbb {C}}^r} \end{pmatrix} . \end{aligned}$$
Then the following statements are equivalent.
-
(1)
B satisfies (D4).
-
(2)
All minors of size r of the matrix \((B_1,I_{{\mathbb {C}}^r})\) are nonzero.
-
(3)
All minors of size \(n-r\) of the matrix \((I_{{\mathbb {C}}^{n-r}},-B_1^*)\) are nonzero.
-
(4)
All minors of \(B_1\) are nonzero.
Proof
Since the upper \(n-r\) rows of B contain only zeros, a set of columns of B is linearly independent if and only if the set consisting of the same columns of \((B_1,I_{{\mathbb {C}}^r})\) is linearly independent. This shows the equivalence of (1) and (2).
Let \(k\le \min \{n-r,r\}\), and let \(1\le j_1<\ldots <j_k\le n-r\) and \(n-r<i_1<\ldots <i_k\le n\). Denote by d the minor of size k of the matrix \(B_1\) obtained by selecting the columns \(j_1,\ldots ,j_k\) and the rows \(i_1,\ldots ,i_k\) of B. Let \(1\le j'_1<\ldots <j'_{n-r-k}\le n-r\) and \(n-r<i'_1<\ldots <i'_{r-k}\le n\) be the complementary index sets, i.e.,
Then the minor d is up to a sign equal to the minor of size r of \((B_1,I_{{\mathbb {C}}^r})\) made from the \(j_1,\ldots ,j_k,n-r+i'_1,\ldots ,n-r+i'_{r-k}\)-th columns of this matrix. Further, it is up to a sign and complex conjugation equal to the minor of size \(n-r\) of \((I_{{\mathbb {C}}^{n-r}},-B_1^*)\) made from the \(j'_1,\ldots ,j'_{n-r-k},n-r+i_1,\ldots ,n-r+i_k\)-th columns of this matrix. This shows the equivalence of (2), (3), and (4). \(\square \)
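The equivalences of Lemma 3.4 can be tested numerically by brute force over all minors. A small sketch (illustration only, with hand-picked \(2\times 2\) examples for \(B_1\), so \(r=2\), \(n=4\)):

```python
import numpy as np
from itertools import combinations

def all_minors_nonzero(M, tol=1e-12):
    """Every minor of every size of M is nonzero (statement (4))."""
    rows, cols = M.shape
    return all(
        abs(np.linalg.det(M[np.ix_(ri, ci)])) > tol
        for k in range(1, min(rows, cols) + 1)
        for ri in combinations(range(rows), k)
        for ci in combinations(range(cols), k)
    )

def all_maximal_minors_nonzero(M, tol=1e-12):
    """Every r x r minor of the r x m matrix M is nonzero (statement (2))."""
    r, m = M.shape
    return all(
        abs(np.linalg.det(M[:, list(ci)])) > tol
        for ci in combinations(range(m), r)
    )

examples = [np.array([[1.0, 2.0], [3.0, 4.0]]),   # all minors nonzero
            np.array([[1.0, 2.0], [3.0, 6.0]]),   # vanishing 2 x 2 minor
            np.array([[0.0, 2.0], [3.0, 4.0]])]   # vanishing 1 x 1 minor
for B1 in examples:
    ext = np.hstack([B1, np.eye(2)])              # the matrix (B_1, I_r)
    assert all_minors_nonzero(B1) == all_maximal_minors_nonzero(ext)
print("statements (2) and (4) agree on all examples")
```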
4 Formulation of the Main Result
In this section we formulate the main result of the present paper, Theorem 4.3 below. It is preceded by a corresponding theorem about the point spectrum, Theorem 4.1, which we state separately for several reasons: the behavior of eigenvalue multiplicities serves as a simple and elementary model for the behavior of the spectral multiplicity on the singular spectrum, it admits an independent proof by linear algebra, and one assumption of Theorem 4.3, namely (D5), can be dropped.
The result about the point spectrum now reads as follows.
4.1 Theorem
Let data be given according to (D1)–(D4) above. Let \(L_{A,B}\) be the selfadjoint relation constructed by pasting \(L_1,\ldots ,L_n\) by means of the interface conditions (A, B), cf. (2.10). Let \(r:={{\,\textrm{rank}\,}}B\). For \(x\in {\mathbb {R}}\) denote by \(N^p_{A,B}(x)\) the multiplicity of x as an eigenvalue of \(L_{A,B}\), and by \(N^p_0(x)\) the multiplicity of x as an eigenvalue of \(L_0\), i.e.,
Then the following statements hold.
-
(P1)
If \(N_0^p(x)\ge r\), then \(N^p_{A,B}(x)=N_0^p(x)-r\).
-
(P2)
If \(N_0^p(x)< r\), then \(N^p_{A,B}(x)\le r-N_0^p(x)\).
4.2 Remark
Recall that for any choice of a representative of the multiplicity function \(N_0\) we have \(N_0(x)=N_0^p(x)\) whenever \(N_0^p(x)>0\) (and analogously for \(N_{A,B}\)). It is possible that \(N_0^p(x)=0\) while \(N_0(x)>0\).
The formulation of the result for the singular spectrum is considerably more elaborate than for the point spectrum, but only because of the measure theoretic nature of the involved quantities. The basic message is much the same. Again there occurs a distinction into two cases: many layers of spectrum vs. few layers of spectrum. If locally at a point the uncoupled operator \(L_0\) has many layers of spectrum compared to the rank of the matrix B, then after performing the perturbation the multiplicity will have decreased by \({{\,\textrm{rank}\,}}B\). In particular, if there were exactly \({{\,\textrm{rank}\,}}B\) layers, the point no longer contributes to the spectral measure. If \(L_0\) has few layers of spectrum, then the multiplicity will not become too large after performing the perturbation; depending on the relation between r and \(N_0(x)\), it may increase or must decrease.
We use the simplified notation \(\mathbbm {1}_{\{N_0=l\}}\cdot \nu \) for the part \(\mathbbm {1}_{Y_l}\cdot \nu \) of a measure \(\nu \ll \mu \) (and their sums such as \(\mathbbm {1}_{\{N_0>r\}}\cdot \nu \)), cf. the definition of \(Y_l\) and \(N_0\) in Sect. 2.2. This definition does not depend on the choice of a representative of the equivalence class of sets \(\{N_0=l\}\) owing to absolute continuity of \(\nu \) w.r.t. \(\mu \).
4.3 Theorem
Let data be given according to (D1)–(D4) above. Let w be a J-unitary matrix provided by Lemma 2.13, cf. (2.7), \(\Gamma _w=w\circ \Gamma _0\) be the pasting of \(\Gamma _1,\ldots ,\Gamma _n\), and \(L_{A,B}=\ker [\pi _1\circ \Gamma _w]\) be the pasting of \(L_1,\ldots ,L_n\) by means of the interface condition (A, B), cf. (2.10). Let \(m_l\) be the (scalar) Weyl functions for \(\Gamma _l\), \(M_0\) and \(M_w\) be the matrix valued Weyl functions for \(\Gamma _0\) and \(\Gamma _w\), and \(\mu _l\), \(\Omega _0\), \(\Omega _w\) be the measures in their Herglotz integral representations (if the “l-th edge” is “artificial”, we assume \(\mu _l=0\)). Let \(\mu :=\sum _{l=1}^n\mu _l={{\,\textrm{tr}\,}}\Omega _0\) and \(\rho :={{\,\textrm{tr}\,}}\Omega _w\) be scalar spectral measures of the linear selfadjoint relations \(L_0\), \(L_{A,B}\), and \(N_0\), \(N_{A,B}\) be their spectral multiplicity functions.
Let \(\mu =\mu _{ac}+\mu _s\) and \(\rho =\rho _{ac}+\rho _s\) be Lebesgue decompositions of \(\mu \) and \(\rho \) w.r.t. the Lebesgue measure \(\lambda \), and \(\rho _s=\rho _{s,ac}+\rho _{s,s}\) the Lebesgue decomposition of \(\rho _s\) w.r.t. \(\mu \). Let \(r:={{\,\textrm{rank}\,}}B\).
Assume in addition that we do not have too many artificial edges in the sense that
-
(D5)
\(\#\{l:{{\,\textrm{mul}\,}}\Gamma _l=\{(0;0)\}\}\ge r\).
Then the following statements hold.
-
(S1)
\(\rho _{ac}\sim \mu _{ac}\) and \(N_{A,B}=N_0\) holds \(\rho _{ac}\)-a.e.
-
(S2)
\(\mathbbm {1}_{\{N_0=r\}}\cdot \rho _{s,ac}=0\) or, equivalently, \(\rho _{s,ac}\sim \mathbbm {1}_{\{N_0>r\}}\cdot \rho _{s,ac}+\mathbbm {1}_{\{0<N_0<r\}}\cdot \rho _{s,ac}\).
-
(S3)
\(\mathbbm {1}_{\{N_0>r\}}\cdot \rho _{s,ac}\sim \mathbbm {1}_{\{N_0>r\}}\cdot \mu _s\) and \(N_{A,B}=N_0-r\) holds \(\mathbbm {1}_{\{N_0>r\}}\cdot \rho _{s,ac}\)-a.e.
-
(S4)
\(N_{A,B}\le r-N_0\) holds \(\mathbbm {1}_{\{0<N_0<r\}}\cdot \rho _{s,ac}\)-a.e.
-
(S5)
\(N_{A,B}\le r\) holds \(\rho _{s,s}\)-a.e.
4.4 Remark
-
1.
Item (S1) follows from [8] and is included here only for completeness. Item (S5) follows from Lemma 2.9 and (S1) since \(\rho =(\rho _{ac}+\rho _{s,ac})+\rho _{s,s}\) is the Lebesgue decomposition of \(\rho \) w.r.t. \(\mu \).
-
2.
Note that the spectral multiplicity functions \(N_0\) and \(N_{A,B}\) are not uniquely defined and can be changed on \(\mu \)- and \(\rho \)-zero sets, respectively. However, these sets of non-uniqueness are \(\rho _{s,ac}\)-zero, because \(\rho _{s,ac}\ll \rho \) and \(\rho _{s,ac}\ll \mu \); thus we can compare \(N_{A,B}\) and \(N_0\) in (S3) and (S4).
-
3.
Items (S2) and (S3) correspond to the “many layers case”, while item (S4) is the “few layers case”.
The proof of Theorem 4.1 is by linear algebra and is given in Sect. 5. The proof of Theorem 4.3 is given in Sects. 6 and 7. For the many layers case we proceed via the Titchmarsh–Kodaira formula and the boundary behavior of the Weyl function, while the few layers case is settled by a geometric reduction.
5 The Point Spectrum
In this section we prove Theorem 4.1. Throughout the section let data be given as in this theorem, where A, B are assumed to be in block form as in (3.3). Moreover, fix \(x\in {\mathbb {R}}\).
As a first step we study the eigenspaces \(\ker (S_l^*-xI)\) for each \(l\in \{1,\ldots ,n\}\) separately. Recall that \(\dim \ker (S_l^*-xI)\le 1\) and \(\dim {{\,\textrm{mul}\,}}\Gamma _l\le 1\). If \({{\,\textrm{mul}\,}}\Gamma _l\ne \{(0;0)\}\) then \(H_l=\{0\}\) and in particular \(\sigma _p(S_l^*)=\emptyset \). Since \(\Gamma _l\) is of function type, \({{\,\textrm{mul}\,}}\Gamma _l\) cannot contain (0; 1).
We have \(x\in \sigma _p(S_l^*)\) if and only if there exist \(f_l\in H_l\setminus \{0\}\) and \(a_l,b_l\in {\mathbb {C}}\) with
Assume that \(x\in \sigma _p(S_l^*)\). Then the boundary values \(a_l,b_l\) are uniquely determined by the element \(f_l\), and \(f_l\) itself is unique up to a scalar multiple. Fix a choice of \(f_l\). Since \(S_l\) is simple, we have for the corresponding boundary values \((a_l;b_l)\ne 0\). Moreover, \(x\in \sigma _p(L_l)\) if and only if \(a_l=0\).
We fix the following notation throughout:
-
(1)
If \(x\in \sigma _p(L_l)\), let \({\tilde{f}}_l\in H_l\setminus \{0\}\) be the unique element with
$$\begin{aligned} \big (({\tilde{f}}_l;x{\tilde{f}}_l);(0;1)\big )\in \Gamma _l . \end{aligned}$$ -
(2)
If \(x\in \sigma _p(S_l^*)\setminus \sigma _p(L_l)\), let \({\tilde{f}}_l\in H_l\setminus \{0\}\) and \(m_l\in {\mathbb {C}}\) be the unique elements with
$$\begin{aligned} \big (({\tilde{f}}_l;x{\tilde{f}}_l);(1;m_l)\big )\in \Gamma _l . \end{aligned}$$ -
(3)
If \({{\,\textrm{mul}\,}}\Gamma _l\ne \{(0;0)\}\), let \(m_l\in {\mathbb {C}}\) be the unique element such that
$$\begin{aligned} ((0;0);(1;m_l))\in \Gamma _l . \end{aligned}$$
Note that, by the abstract Green’s identity, we have \(m_l\in {\mathbb {R}}\) in this case. Moreover, we denote:
-
(4)
If \(x\notin \sigma _p(S_l^*)\), set \({\tilde{f}}_l:=0\) (including the case \({{\,\textrm{mul}\,}}\Gamma _l\ne \{(0;0)\}\)).
-
(5)
If \(x\in \sigma _p(L_l)\), or \({{\,\textrm{mul}\,}}\Gamma _l=\{(0;0)\}\) and \(x\notin \sigma _p(S_l^*)\), set \(m_l:=0\).
In the second step we identify \(\ker (L_{A,B}-xI)\). Set
and let \({\hat{f}}_l\in H\) be the element whose l-th coordinate is \({\tilde{f}}_l\) and all other coordinates are 0. For a subset J of \(\{1,\ldots ,n\}\) let \(D_J\) be the diagonal matrix whose i-th diagonal entry is 1 if \(i\in J\) and 0 otherwise. Moreover, let M be the diagonal matrix with diagonal entries \(m_1,\ldots ,m_n\).
5.1 Lemma
Set
and let \(\Xi :{\mathbb {C}}^n\rightarrow H\) be the map (we write \(c=(c_l)_{l=1}^n\))
Then \(\ker (L_{A,B}-xI)=\Xi (\ker \gamma )\), and in particular
Proof
We have
In this representation of \(\ker (S^*-xI)\) the constants \(c_l\) for \(l\notin J_p^*\) are irrelevant, since \({\hat{f}}_l=0\) for \(l\notin J_p^*\). This changes when we turn to \(\ker (L_{A,B}-xI)\subseteq \ker (S^*-xI)\), since the values of \(c_l\) for \(l\in J_m\) influence boundary values. For every \({\hat{f}}=\sum _{l=1}^nc_l{\hat{f}}_l\in \ker (S^*-xI)\) there exists (possibly not unique) \((a;b)\in {\mathbb {C}}^{2n}\) such that
and \({\hat{f}}\in \ker (L_{A,B}-xI)\) if and only if there exists \((a;b)\in {\mathbb {C}}^{2n}\) such that \(Aa+Bb=0\) in addition to (5.1). According to (1)–(3), necessarily
which means that
Then \(Aa+Bb=0\) is equivalent to
We conclude that
\(\square \)
In the third step we compute or estimate, respectively, the dimensions of \(\ker \gamma \) and \(\ker \gamma \cap \ker \Xi \).
5.2 Lemma
-
(1)
If \(\#\big ((J_p^*\setminus J_p)\cup J_m\big )\le n-r\), then
$$\begin{aligned}&\dim \ker \gamma =\max \big \{\#J_p-r,0\big \}+n-\#J_p^*-\#J_m , \\&\dim \big (\ker \gamma \cap \ker \Xi \big )=n-\#J_p^*-\#J_m . \end{aligned}$$ -
(2)
If \(\#\big ((J_p^*\setminus J_p)\cup J_m\big )>n-r\), then
$$\begin{aligned}&\dim \ker \gamma \le r-\#J_p , \\&\dim \big (\ker \gamma \cap \ker \Xi \big )\ge n-\#J_p^*-\#J_m . \end{aligned}$$
Proof
Consider the case that \(\#\big ((J_p^*\setminus J_p)\cup J_m\big )\le n-r\). We show that
The inclusion “\(\supseteq \)” holds since \(m_l=0\) for \(l\notin (J_p^*\setminus J_p)\cup J_m\). Let \(c\in \ker \gamma \). Since \(D_{1,\ldots ,n-r}B=0\), it follows that
The left side is a linear combination of at most \(n-r\) columns of the matrix \((I_{{\mathbb {C}}^{n-r}},A_1)\), and Lemma 3.4 implies that \(D_{(J_p^*\setminus J_p)\cup J_m}c=0\) and hence \(Mc=0\). From this we obtain
Thus the inclusion “\(\subseteq \)” also holds and the proof of (5.2) is complete.
The sets \(J_p^*\setminus J_p\), \(J_m\), and \(J_p\) are pairwise disjoint, and any r columns of B are linearly independent. Thus (5.2) implies
We have
and see that
Thus \(\dim (\ker \gamma \cap \ker \Xi )=n-\#J_p^*-\#J_m\).
Consider now the case that \(\#\big ((J_p^*\setminus J_p)\cup J_m\big )>n-r\). We show that
Choose \(n-r\) indices in \((J_p^*\setminus J_p)\cup J_m\), and let \({\mathcal {R}}\) be the linear span of the corresponding columns of \(\gamma \). Denote by \(\pi _+\) the projection in \({\mathbb {C}}^n\) onto the first \(n-r\) coordinates (\(\pi _+=D_{\{1,\ldots ,n-r\}}\)). The image \(\pi _+({\mathcal {R}})\) is the span of \(n-r\) columns of the matrix \((I_{{\mathbb {C}}^{n-r}},A_1)\), and hence has dimension \(n-r\). It follows that
Let \({\mathcal {R}}'\) be the linear span of the columns of \(\gamma \) corresponding to indices in \(J_p\). For those indices the columns of \(\gamma \) are actually columns of B, and it follows that
Note here that \(\#J_p<r\). Since \({\mathcal {R}}+{\mathcal {R}}'\subseteq {{\,\textrm{ran}\,}}\gamma \), we obtain
From this, clearly, \(\dim \ker \gamma \le r-\#J_p\). Next, we have \(\ker D_{J_p^*\cup J_m}\subseteq \ker \gamma \cap \ker \Xi \), and hence
\(\square \)
It is easy to deduce Theorem 4.1 from the above lemma.
Proof of Theorem 4.1
Assume that \(N_0^p(x)\ge r\), and note that \(N_0^p(x)=\#J_p\). Then case (1) in Lemma 5.2 takes place, and it follows that
Assume now that \(N_0^p(x)<r\). If in Lemma 5.2 case (1) takes place, then \(N_{A,B}^p(x)=0\). If case (2) takes place, we obtain the bound
\(\square \)
6 The Many Layers Case
In the present section we prove assertions (S2) and (S3) from Theorem 4.3. The argument rests on a boundary limit formula for matrix valued Nevanlinna functions: provided that the boundary relation \(\Gamma _w\) is of function type, we employ the Titchmarsh–Kodaira formula and Lemma 2.6 to obtain that
Throughout this section let data and notation be as in Theorem 4.3. In addition assume that the matrices A, B describing the interface condition are of the form
$$\begin{aligned} A=\begin{pmatrix} I_{{\mathbb {C}}^{n-r}} &{} A_1 \\ 0 &{} A_2 \end{pmatrix} ,\qquad B=\begin{pmatrix} 0 &{} 0 \\ -A_1^* &{} I_{{\mathbb {C}}^r} \end{pmatrix} , \end{aligned}$$
with some \(A_1\in {\mathbb {C}}^{(n-r)\times r}\) and \(A_2\in {\mathbb {C}}^{r\times r}\) such that all minors of \(A_1\) are nonzero and \(A_2=A_2^*\). Recall that, by the discussion in Sect. 3, this assumption is no loss in generality.
The algebraic core of the proof is the following auxiliary proposition.
6.1 Proposition
Let \(r<n\), \(B_1\in {\mathbb {C}}^{r\times (n-r)}\) be such that all minors of \(B_1\) are nonzero, and let \(X_1\in {\mathbb {C}}^{(n-r)\times (n-r)}\) and \(X_2\in {\mathbb {C}}^{r\times r}\) be two positive semidefinite diagonal matrices such that \({{\,\textrm{rank}\,}}X_1+{{\,\textrm{rank}\,}}X_2\ge r\). Then
-
(1)
the matrix \(B_1X_1B_1^*+X_2\) is invertible, and
-
(2)
\({\displaystyle {{\,\textrm{rank}\,}}\big (X_1-X_1B_1^*(B_1X_1B_1^*+X_2)^{-1}B_1X_1\big )={{\,\textrm{rank}\,}}X_1+{{\,\textrm{rank}\,}}X_2-r . }\)
Proof
To show (1) set
Then
Clearly, \({{\,\textrm{rank}\,}}X={{\,\textrm{rank}\,}}X_1+{{\,\textrm{rank}\,}}X_2\). By Lemma 3.4 the matrix B satisfies (D4), and by Lemma 3.1 we have
This means that the \(r\times r\) matrix \(B_1X_1B_1^*+X_2\) is invertible.
The assertion in (2) requires a slightly more elaborate argument. Set
To avoid confusion with dimensions let us here make explicit that
The essence of the argument is the following four relations:
The equalities in (6.2) follow by plugging in \(X_2=H-B_1X_1B_1^*\); the first relation in (6.3) is just the definition of G, and the second is \(H^{-1}H=I_{{\mathbb {C}}^r}\).
Together (6.2) and (6.3) show that \(H^{-1}B_1|_{\ker G}\) and \(X_1B_1^*|_{\ker X_2}\) are mutually inverse bijections between \(\ker G\) and \(\ker X_2\). In particular, this implies that
The assertion in (2) is now easily deduced. We have \(\ker (GX_1)=X_1^{-1}(\ker G)\) (the preimage of \(\ker G\)). The definition of G shows that \(\ker G\subseteq {{\,\textrm{ran}\,}}X_1\), and hence
is surjective. Clearly, \(\ker X_1\subseteq X_1^{-1}(\ker G)\), and hence
Together this implies that
and hence that
\(\square \)
We return to the setting of the theorem. Denote for \(z\in {\mathbb {C}}\setminus {\mathbb {R}}\) by \(M_{11}(z)\) and \(M_{22}(z)\) the diagonal blocks of \(M_0(z)\) of sizes \(n-r\) and r, respectively. Writing out the block form of \(A+BM_0(z)\) gives
$$\begin{aligned} A+BM_0(z)= \begin{pmatrix} I_{{\mathbb {C}}^{n-r}} &{} A_1 \\ -A_1^*M_{11}(z) &{} A_2+M_{22}(z) \end{pmatrix} . \end{aligned}$$
Denote also
$$\begin{aligned} D(z):=A_2+M_{22}(z)+A_1^*M_{11}(z)A_1 . \end{aligned}$$(6.4)
We see that D(z) is the Schur complement of the upper-left block of \(A+BM_0(z)\).
6.2 Lemma
Let (D1)–(D5) hold with \(r<n\). Then \(\Gamma _w\) is of function type and for every \(z\in {\mathbb {C}}\setminus {\mathbb {R}}\)
-
(1)
D(z) is invertible,
-
(2)
\(A+BM_0(z)\) is invertible,
$$\begin{aligned} (A+BM_0(z))^{-1}= \begin{pmatrix} I_{{\mathbb {C}}^{n-r}}-A_1D(z)^{-1}A_1^*M_{11}(z) &{} -A_1D(z)^{-1} \\ D(z)^{-1}A_1^*M_{11}(z) &{} D(z)^{-1} \end{pmatrix} \end{aligned}$$(6.5)and
$$\begin{aligned} {{\,\textrm{Im}\,}}M_w(z)=((A+BM_0(z))^{-1})^*{{\,\textrm{Im}\,}}M_0(z)(A+BM_0(z))^{-1} . \end{aligned}$$(6.6)
Proof
The imaginary part of D(z) is
By assumption (D5) at least r of the functions \(m_l\) are not real constants. Thus
and Proposition 6.1, (1), implies that \({{\,\textrm{Im}\,}}D\) is invertible. Since \({{\,\textrm{Im}\,}}D(z)\ge 0\), it follows that D(z) is also invertible: if \(D(z)u=0\), then
and hence \(u=0\). Since the Schur complement D(z) is invertible, we conclude that \(A+BM_0(z)\) is invertible and the formula (6.5) holds.
As we have noted in Sect. 2.4, invertibility of \(A+BM_0(z)\) implies that \(M_w(z)\) is a matrix for every \(z\in {\mathbb {C}}\setminus {\mathbb {R}}\) and that \(\Gamma _w\) is of function type. Formula (6.6) holds by [8, Lemma 6.3, (i)]. \(\square \)
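Formula (6.5) is the standard block inverse built from the Schur complement D(z). A numerical sanity check (illustration only; it assumes the block forms of A and B from Sect. 3, a real \(A_1\), and a diagonal Herglotz-type value \(M_0(z)\) with positive imaginary part at a fixed nonreal z):

```python
import numpy as np

rng = np.random.default_rng(3)
n, r = 5, 2
A1 = rng.standard_normal((n - r, r))
A2 = rng.standard_normal((r, r))
A2 = (A2 + A2.T) / 2                      # Hermitian (real symmetric)

# Value of M0 at some fixed nonreal z: diagonal, positive imaginary part.
M0 = np.diag(rng.standard_normal(n) + 1j * rng.uniform(0.5, 1.5, n))
M11, M22 = M0[:n - r, :n - r], M0[n - r:, n - r:]

A = np.block([[np.eye(n - r), A1],
              [np.zeros((r, n - r)), A2]])
B = np.block([[np.zeros((n - r, n - r)), np.zeros((n - r, r))],
              [-A1.T, np.eye(r)]])

K = A + B @ M0
D = A2 + M22 + A1.T @ M11 @ A1            # Schur complement of upper-left block
Dinv = np.linalg.inv(D)

# Right-hand side of (6.5):
inv_formula = np.block([
    [np.eye(n - r) - A1 @ Dinv @ A1.T @ M11, -A1 @ Dinv],
    [Dinv @ A1.T @ M11,                      Dinv],
])
assert np.allclose(inv_formula, np.linalg.inv(K))
print("(6.5) confirmed numerically")
```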
6.3 Remark
The case \(r=n\) needs separate attention (note that in this case none of the functions \(m_l\) is a real constant). In this case there is no block structure and we can write
and
is invertible, hence \(A+BM_0(z)\) is also invertible and \(\Gamma _w\) is of function type; (6.5) becomes \((A+BM_0(z))^{-1}=(A_{22}+M_0(z))^{-1}\), and (6.6) continues to hold.
For points \(x\in {\mathbb {R}}\) with sufficiently nice properties, formulae (6.5)–(6.6) can be used to compute (6.1). Here, and in the following, we denote \(m:=\sum _{l=1}^n m_l\).
6.4 Lemma
Let (D1)–(D5) hold with \(r<n\) and let \(x\in {\mathbb {R}}\). Assume that
-
(1)
the symmetric derivative \(\frac{d\Omega _0}{d\mu }(x)\) exists,
-
(2)
the symmetric derivative \(\frac{d\mu }{d\lambda }(x)\) exists and equals \(\infty \),
-
(3)
\(\lim \limits _{z\downarrow x}\frac{M_0(z)}{m(z)}\) exists and equals \(\frac{d\Omega _0}{d\mu }(x)\).
Set \(\alpha (x):=\frac{d\Omega _0}{d\mu }(x)\),
let as usual \(\alpha _{11}(x)\) and \(\alpha _{22}(x)\) denote the diagonal blocks of \(\alpha (x)\) of sizes \(n-r\) and r, respectively, and let
$$\begin{aligned} H(x):=\alpha _{22}(x)+A_1^*\alpha _{11}(x)A_1 . \end{aligned}$$
If \({{\,\textrm{rank}\,}}\alpha (x)> r\), then H(x) is invertible, the following limit exists and equals
and its rank equals \({{\,\textrm{rank}\,}}\alpha (x)-r\). Assume that in addition to (1)–(3) it holds that
-
(4)
the symmetric derivative \(\frac{d\rho }{d\mu }(x)\) exists and is finite.
Then \(\frac{d\rho }{d\mu }(x)=0\) if and only if \({{\,\textrm{rank}\,}}\alpha (x)=r\); if \({{\,\textrm{rank}\,}}\alpha (x)>r\), then the limit \(\lim _{z\downarrow x}\frac{{{\,\textrm{Im}\,}}M_w(z)}{{{\,\textrm{Im}\,}}{{\,\textrm{tr}\,}}M_w(z)}\) exists and
Proof
Let (1)–(3) hold. Set \(H(x):=\alpha _{22}(x)+A_1^*\alpha _{11}(x)A_1\). From (2) it follows by Proposition 2.5, (2), that \(m(z)\rightarrow \infty \) as \(z\downarrow x\). From (3) and (6.4) we have \(\lim _{z\downarrow x}\frac{D(z)}{m(z)}=H(x)\). Proposition 6.1, (1), tells us that if \({{\,\textrm{rank}\,}}\alpha (x)>r\), then H(x) is invertible. Hence, in this case we may conclude that
Now the representation (6.5) yields
Since the symmetric derivative \(\frac{d\Omega _0}{d\mu }(x)=\alpha (x)\) exists and \(\frac{d\mu }{d\lambda }(x)=\infty \), we get by Proposition 2.5, (1), together with Lemma 2.6 that
Using (6.6) we see that the limit exists
Applying Proposition 6.1, (2), we obtain that
Now assume that also (4) holds. Then, since \(\frac{d\mu }{d\lambda }(x)=\infty \), by Proposition 2.5, (1), it follows that
Since the matrix \(\lim _{z\downarrow x}\frac{{{\,\textrm{Im}\,}}M_w(z)}{{{\,\textrm{Im}\,}}m(z)}\) is positive semidefinite, we have
If \({{\,\textrm{rank}\,}}\alpha (x)>r\), then \(\frac{d\rho }{d\mu }(x)>0\) and it follows that the limit exists
and hence \({{\,\textrm{rank}\,}}\,\lim _{z\downarrow x}\frac{{{\,\textrm{Im}\,}}M_w(z)}{{{\,\textrm{Im}\,}}{{\,\textrm{tr}\,}}M_w(z)}={{\,\textrm{rank}\,}}\alpha (x)-r\). \(\square \)
6.5 Remark
Again consider the case \(r=n\) separately. In this case \(\alpha (x)=\alpha _{22}(x)\). If \({{\,\textrm{rank}\,}}\alpha (x)=r=n\), then under conditions (1)–(3)
and owing to (6.6) \(\frac{{{\,\textrm{Im}\,}}M_w(z)}{{{\,\textrm{Im}\,}}m(z)}\rightarrow 0\) as \(z\downarrow x\). If also (4) holds, then \(\frac{d\rho }{d\mu }(x)=\lim _{z\downarrow x}\frac{{{\,\textrm{Im}\,}}{{\,\textrm{tr}\,}}M_w(z)}{{{\,\textrm{Im}\,}}m(z)}=0\).
The proof of the many layers case of Theorem 4.3, (S2)–(S3), is now completed by observing that sufficiently many points \(x\in {\mathbb {R}}\) satisfy conditions (1)–(4) of Lemma 6.4.
6.6 Lemma
There exists a Borel set \(W\subseteq {\mathbb {R}}\) with
such that for every \(x\in W\)
-
(1)
\(\frac{d\Omega _0}{d\mu }(x)\) exists,
-
(2)
\(\frac{d\mu }{d\lambda }(x)=\infty \),
-
(3)
\(\lim \limits _{z\downarrow x}\frac{M_0(z)}{m(z)}\) exists and equals \(\frac{d\Omega _0}{d\mu }(x)\),
-
(4)
\(\frac{d\rho }{d\mu }(x)\) exists and is finite,
and such that the functions \(\frac{d\mu _l}{d\mu }\), \(l=1,\ldots ,n\), are measurable on W.
Proof
By Proposition 2.2 there exists a Borel set X such that \(\mu ({\mathbb {R}}\setminus X)=0\) and symmetric derivatives \(\frac{d\mu _l}{d\mu }(x)\) exist for every \(x\in X\), \(l=1,\ldots ,n\), and are measurable functions from X to [0, 1] (X is the intersection of such sets for each of the measures \(\mu _l\)). Then (1) holds for all \(x\in X\).
By Corollary 2.4, (1), items (1) and (4) hold \(\mu \)-a.e. and hence \(\mu _s\)-a.e. By Corollary 2.4, (4), (1), item (2) holds \(\mu _s\)-a.e. and on a set of zero Lebesgue measure. By Proposition 2.5, (3), item (3) holds \(\mu _s\)-a.e. Since the intersection of \(\mu _s\)-full sets is a \(\mu _s\)-full set, all of (1)–(4) hold \(\mu _s\)-a.e. and on a set of Lebesgue measure zero. There exists a Borel set W contained in this intersection (including X) such that \(\lambda (W)=0\) and \(\mu _s({\mathbb {R}}\setminus W)=0\). The functions \(\frac{d\mu _l}{d\mu }\), \(l=1,\ldots ,n\), remain measurable on W.\(\square \)
Proof of Theorem 4.3, (S2)–(S3)
Consider the set W from the above lemma. Define for \(l=1,\ldots ,n\) Borel sets
which are Borel owing to measurability of \(\frac{d\mu _l}{d\mu }\) on W for each l.
Since \(\{x\in {\mathbb {R}}:\exists \frac{d\Omega _0}{d\mu }(x),{{\,\textrm{rank}\,}}\frac{d\Omega _0}{d\mu }(x)=k\}\setminus W^{(k)}\) are \(\mu _s\)-zero sets, we have \(\mathbbm {1}_{\{N_0=k\}}\cdot \nu =\mathbbm {1}_{W^{(k)}}\cdot \nu \) for every measure \(\nu \ll \mu _s\) (in particular, for \(\rho _{s,ac}\)).
By Lemma 6.4 (and Remark 6.5) we have \(\frac{d\rho }{d\mu }(x)=0\) for all \(x\in W^{(r)}\), hence by Corollary 2.3, (1), \(\rho (W^{(r)})=0\). Therefore \(\rho _{s,ac}(W^{(r)})=0\), which means that \(\mathbbm {1}_{\{N_0=r\}}\cdot \rho _{s,ac}=0\), and that is (S2).
Further, by Lemma 6.4 we have \(\frac{d\rho }{d\mu }(x)\in (0,\infty )\) for all \(x\in W^{(>r)}\), and hence by Corollary 2.3, (4), \(\mathbbm {1}_{W^{(>r)}}\cdot \mu \sim \mathbbm {1}_{W^{(>r)}}\cdot \rho \). Since \(\lambda (W^{(>r)})=0\), it follows that \(\mathbbm {1}_{W^{(>r)}}\cdot \mu _s=\mathbbm {1}_{W^{(>r)}}\cdot \mu \sim \mathbbm {1}_{W^{(>r)}}\cdot \rho =\mathbbm {1}_{W^{(>r)}}\cdot \rho _s\). Moreover, \(\frac{d\rho }{d\mu }(x)\in [0,\infty )\) for all \(x\in W\), and by (S1) the measure \(\rho _{s,s}\) is the singular part in the Lebesgue decomposition of \(\rho \) w.r.t. \(\mu \); hence Corollary 2.4, (4), gives \(\rho _{s,s}(W)=0\). In particular, \(\mathbbm {1}_{W^{(>r)}}\cdot \rho _{s,s}=0\). Therefore we have
which is the first part of (S3).
For all \(x\in W^{(>r)}\) (and therefore \(\mathbbm {1}_{\{N_0>r\}}\cdot \rho _{s,ac}\)-a.e.) by Lemma 6.4 it holds that
By (6.1) this means that
which completes the proof of (S3). \(\square \)
7 The Few Layers Case
In this section we prove assertions (S4) and (S5) from Theorem 4.3. This will be done by reducing to the already established many layers case (S2)–(S3), and referring to the perturbation Lemma 2.9.
The reduction is achieved by means of the following theorem, which is purely geometric. Recall that we denote for a pair (A, B) of matrices which satisfies (D3) the corresponding Lagrange plane as \(\theta _{A,B}\), cf. (2.11).
7.1 Theorem
Let \(A,B\in {\mathbb {C}}^{n\times n}\) such that (A, B) satisfies (D3) and (D4). Then for each \(k\in \{1,\ldots ,{{\,\textrm{rank}\,}}B\}\) there exist \(A_k,B_k\in {\mathbb {C}}^{n\times n}\) such that
(1) \((A_k,B_k)\) satisfies (D3) and (D4),
(2) \({{\,\textrm{rank}\,}}B_k=k\) and \(\dim \left( {\theta _{A,B}}\big /{\theta _{A,B}\cap \theta _{A_k,B_k}}\right) ={{\,\textrm{rank}\,}}B-k\).
The proof of this theorem relies on two lemmata.
7.2 Lemma
Let \(\theta \) and \(\theta _0\) be Lagrange planes in \({\mathbb {C}}^n\times {\mathbb {C}}^n\), and let \(\Pi \) be a linear subspace of \({\mathbb {C}}^n\times {\mathbb {C}}^n\) with
Then there exists a Lagrange plane \(\theta '\) in \({\mathbb {C}}^n\times {\mathbb {C}}^n\) such that
Proof
We have
Choose a linear subspace \(\Pi '\) with \((\theta +\Pi )^{[\perp ]}=(\theta \cap \theta _0)\dot{+}\Pi '\). Observe that
In particular, we have \(\Pi '\cap \Pi =\{0\}\). Now set \(\theta ':=\Pi '\dot{+}\Pi \). Since \(\Pi '\) and \(\Pi \) are neutral subspaces with \(\Pi '[\perp ]\Pi \), the subspace \(\theta '\) is neutral. We have
and hence can compute
Thus \(\dim \theta '=\dim \Pi +\dim \Pi '=n\), and we see that \(\theta '\) is a Lagrange plane.
and from the latter
\(\square \)
7.3 Lemma
Let \({\mathcal {L}}\) be a linear subspace of \({\mathbb {C}}^n\) with \(\dim {\mathcal {L}}<n\), and let \({\mathcal {C}}_1,\ldots ,{\mathcal {C}}_N\) be a finite number of linear subspaces of \({\mathbb {C}}^n\) with
Then there exists a linear subspace \({\mathcal {L}}'\) with
Proof
Choose a linear subspace \({\mathcal {K}}\) with \({\mathbb {C}}^n={\mathcal {L}}\dot{+}{\mathcal {K}}\), and let \(\pi :{\mathbb {C}}^n\rightarrow {\mathcal {K}}\) be the corresponding projection onto the second summand.
We first observe that for every subspace \({\mathcal {C}}\) with \({\mathcal {L}}\cap {\mathcal {C}}=\{0\}\) and every \(z\in {\mathcal {K}}\setminus \pi ({\mathcal {C}})\) it holds that \(({\mathcal {L}}+{{\,\textrm{span}\,}}\{z\})\cap {\mathcal {C}}=\{0\}\).
Indeed, assume that \(x\in {\mathcal {L}},\lambda \in {\mathbb {C}}\), and \(x+\lambda z\in {\mathcal {C}}\setminus \{0\}\). Since \({\mathcal {L}}\cap {\mathcal {C}}=\{0\}\), we must have \(\lambda \ne 0\), and hence \(z=\frac{1}{\lambda }\pi (x+\lambda z)\in \pi ({\mathcal {C}})\).
Now let \({\mathcal {C}}_1,\ldots ,{\mathcal {C}}_N\) be as in the statement of the lemma. Then
Each \(\pi ({\mathcal {C}}_j)\) is a linear subspace of \({\mathcal {K}}\) with
and hence is a closed subset of \({\mathcal {K}}\) with empty interior. Thus also the finite union \(\bigcup _{j=1}^N\pi ({\mathcal {C}}_j)\) is a subset of \({\mathcal {K}}\) with empty interior; in particular, \({\mathcal {K}}\setminus \bigcup _{j=1}^N\pi ({\mathcal {C}}_j)\ne \emptyset \).
For each element \(z\in {\mathcal {K}}\setminus \bigcup _{j=1}^N\pi ({\mathcal {C}}_j)\), the space \({\mathcal {L}}':={\mathcal {L}}+{{\,\textrm{span}\,}}\{z\}\) has the required properties. \(\square \)
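The genericity argument behind Lemma 7.3 is easy to illustrate numerically. The following Python sketch (our own illustration, not part of the paper; all function names are ours) enlarges \({\mathcal {L}}\) by a random vector and verifies, via a rank identity, that the intersections with the given subspaces stay trivial. A random choice avoids the finite union of proper subspaces almost surely, mirroring the empty-interior argument in the proof.

```python
import numpy as np

rng = np.random.default_rng(1)

def trivial_intersection(A, B):
    """True iff span(A) ∩ span(B) = {0}, using the identity
    dim(U ∩ V) = dim U + dim V - dim(U + V)."""
    rA = np.linalg.matrix_rank(A)
    rB = np.linalg.matrix_rank(B)
    return rA + rB == np.linalg.matrix_rank(np.column_stack([A, B]))

def extend_avoiding(L, Cs, n, tries=100):
    """Return z such that span(L, z) has one dimension more than
    span(L) and still meets each subspace in Cs trivially.  A random
    complex z works almost surely: the excluded vectors form a finite
    union of proper subspaces, which has empty interior."""
    for _ in range(tries):
        z = rng.standard_normal(n) + 1j * rng.standard_normal(n)
        Lp = np.column_stack([L, z])
        if (np.linalg.matrix_rank(Lp) == np.linalg.matrix_rank(L) + 1
                and all(trivial_intersection(Lp, C) for C in Cs)):
            return z
    raise RuntimeError("generic choice failed (numerically unexpected)")

# Example in C^3: L = span{(1,1,1)}, C_1 and C_2 two coordinate axes.
L = np.array([[1.0], [1.0], [1.0]])
C1 = np.eye(3)[:, [0]]
C2 = np.eye(3)[:, [1]]
z = extend_avoiding(L, [C1, C2], 3)
Lp = np.column_stack([L, z])
print(np.linalg.matrix_rank(Lp))      # 2
print(trivial_intersection(Lp, C1))   # True
print(trivial_intersection(Lp, C2))   # True
```

Here the rank identity replaces an explicit computation of the projection \(\pi \) used in the proof; both encode the same dimension count.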
Proof of Theorem 7.1
We use downward induction on k. For \(k={{\,\textrm{rank}\,}}B\) there is nothing to prove: just set \(A_{{{\,\textrm{rank}\,}}B}:=A\) and \(B_{{{\,\textrm{rank}\,}}B}:=B\). Assume now that for some \(k\in \{2,\ldots ,{{\,\textrm{rank}\,}} B\}\) we have \((A_k,B_k)\) with (1) and (2). The aim is to construct \((A_{k-1},B_{k-1})\).
We work with Lagrange planes rather than matrices: denote
Further, set \({\mathcal {L}}:=\ker B_k\), so that \(\theta _k\cap \theta _0=\{0\}\times {\mathcal {L}}\). Then \(\dim {\mathcal {L}}=n-k<n\), and by (D4) we have \({\mathcal {L}}\cap {\mathcal {C}}=\{0\}\) for every coordinate plane \({\mathcal {C}}\) with dimension \(k-1\). According to Lemma 7.3 we find \({\mathcal {L}}'\supseteq {\mathcal {L}}\) with \(\dim {\mathcal {L}}'=n-k+1\) such that still \({\mathcal {L}}'\cap {\mathcal {C}}=\{0\}\) for all coordinate planes \({\mathcal {C}}\) with dimension \(k-1\).
Now set \(\Pi :=\{0\}\times {\mathcal {L}}'\). According to Lemma 7.2 we find a Lagrange plane \(\theta '\) with
We need to compute the dimension of the factor space \(\theta /(\theta \cap \theta ')\). The canonical map
is injective, and \(\dim \theta =\dim \theta '\). Hence,
On the other hand, using (7.3), \(\dim \theta _k=\dim \theta '\), and the respective canonical injections, yields
From
we now obtain
and hence
It remains to choose \((A_{k-1},B_{k-1})\) with \(\theta '=\theta _{A_{k-1},B_{k-1}}\). Then \((A_{k-1},B_{k-1})\) satisfies (1) and (2). \(\square \)
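To make the quotient dimension appearing in item (2) of Theorem 7.1 concrete, here is a small numerical sketch (ours, not from the paper). It assumes the parametrization \(\theta _{A,B}=\{(x,y)\in {\mathbb {C}}^n\times {\mathbb {C}}^n:Ax+By=0\}\), a common convention for boundary conditions — the paper's (2.11) may differ in signs or ordering — under which all the dimensions involved reduce to matrix ranks.

```python
import numpy as np

def quotient_dim(A, B, A2, B2):
    """dim( theta_{A,B} / (theta_{A,B} ∩ theta_{A2,B2}) ), where we
    take theta_{A,B} = {(x, y) : A x + B y = 0} (an assumed
    convention).  Then dim theta_{A,B} = 2n - rank [A B], and the
    intersection is the null space of the stacked system."""
    n = A.shape[0]
    M1 = np.hstack([A, B])
    M2 = np.hstack([A2, B2])
    dim_theta = 2 * n - np.linalg.matrix_rank(M1)
    dim_cap = 2 * n - np.linalg.matrix_rank(np.vstack([M1, M2]))
    return dim_theta - dim_cap

I = np.eye(2)
print(quotient_dim(I, I, I, I))      # 0: the plane compared with itself
print(quotient_dim(I, I, I, 2 * I))  # 2: the two planes meet only in {0}
```

If, as is standard for selfadjoint boundary conditions, (D3) entails that \((A\;B)\) has maximal rank, each plane is n-dimensional and this quotient dimension measures how far \(\theta _{A_k,B_k}\) moves away from \(\theta _{A,B}\); Theorem 7.1 asserts it can be made exactly \({{\,\textrm{rank}\,}}B-k\).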
It is now easy to deduce the few layers case in Theorem 4.3.
Proof of Theorem 4.3, (S1), (S4)–(S5)
First note that (S1) follows from [8], as we mentioned above; (S4) and (S5) are obtained by reduction to the many layers case together with an application of Lemma 2.9, as follows. Consider \(k\in \{1,\ldots ,r-1\}\). Choose \((A_k,B_k)\) according to Theorem 7.1. Denote by \(\rho _k\) the scalar spectral measure of \(L_{A_k,B_k}\) (i.e., \(\rho _k={{\,\textrm{tr}\,}}\Omega _{A_k,B_k}\)), let \(W_k\subseteq {\mathbb {R}}\) be a set as constructed in Lemma 6.6 for \((A_k,B_k)\), and let \(W^{(k)}_k\) be the corresponding set from (6.8). Then we know, from the already proven part (S2), that \(\rho _k(W^{(k)}_k)=0\).
Let \(\rho =\rho _{ac,k}+\rho _{s,k}\) be the Lebesgue decomposition of \(\rho \) w.r.t. \(\rho _k\). Since \(\rho _k(W^{(k)}_k)=0\), it follows that \(\rho _{ac,k}(W^{(k)}_k)=0\) or, equivalently, \(\mathbbm {1}_{W^{(k)}_k}\cdot \rho _{ac,k}=0\), hence
Lemma 2.14 applied to A, B and \(A_k,B_k\) gives
and Lemma 2.9 applied to \(L_{A,B}\) and \(L_{A_k,B_k}\) gives
and, owing to (7.4), \(\mathbbm {1}_{W^{(k)}_k}\cdot \rho _{s,ac}\)-a.e. On the other hand, from the definitions of sets \(W^{(k)}_k\) we have
therefore
Consider the intersection
which is also a \(\lambda \)-zero and \(\mu _s\)-full set, and analogously to (6.8) define \({\widetilde{W}}^{(k)}:=\{x\in {\widetilde{W}}:{{\,\textrm{rank}\,}}\alpha (x)=k\}\) for \(k=1,\ldots ,n\). Then we have \(\mathbbm {1}_{(\cup _{k=1}^{r-1}{\widetilde{W}}^{(k)})}\cdot \mu _s=\mathbbm {1}_{\{0<N_0<r\}}\cdot \mu _s\). Since for every \(k=1,\ldots ,n\) one has \(\mu _s(W_k^{(k)}\setminus {\widetilde{W}}^{(k)})=0\), it is true that \(\mathbbm {1}_{(\cup _{k=1}^{r-1}W_k^{(k)})}\cdot \mu _s=\mathbbm {1}_{\{0<N_0<r\}}\cdot \mu _s\) as well, and therefore also \(\mathbbm {1}_{(\cup _{k=1}^{r-1}W_k^{(k)})}\cdot \rho _{s,ac}=\mathbbm {1}_{\{0<N_0<r\}}\cdot \rho _{s,ac}\). Thus (7.5) in fact coincides with the assertion of (S5), which completes the proof. \(\square \)
References
Aronszajn, N.: On a problem of Weyl in the theory of singular Sturm-Liouville equations. Am. J. Math. 79, 597–610 (1957)
Behrndt, J., Hassi, S., de Snoo, H.S.V.: Boundary Value Problems. Weyl Functions and Differential Operators. Birkhäuser, Basel (2020)
Derkach, V.A., Hassi, S., Malamud, M.M., de Snoo, H.S.V.: Boundary relations and their Weyl families. Trans. Am. Math. Soc. 358(12), 5351–5400 (2006)
Derkach, V.A., Hassi, S., Malamud, M.M., de Snoo, H.S.V.: Boundary relations and generalized resolvents of symmetric operators. Russ. J. Math. Phys. 16(1), 17–60 (2009)
DiBenedetto, E.: Real Analysis. Birkhäuser Advanced Texts: Basler Lehrbücher. Birkhäuser, Boston (2002)
Donoghue, W.F.: On the perturbation of spectra. Commun. Pure Appl. Math. 18, 559–579 (1965)
Gilbert, D.J.: On subordinacy and spectral multiplicity for a class of singular differential operators. Proc. R. Soc. Edinb. Sect. A 128(3), 549–584 (1998)
Gesztesy, F., Tsekanovskii, E.: On matrix-valued Herglotz functions. Math. Nachr. 218(1), 61–138 (2000)
Harmer, M.: Hermitian symplectic geometry and extension theory. J. Phys. A Math. Gen. 33(50), 9193–9203 (2000)
Herglotz, G.: Über Potenzreihen mit positivem, reellem Teil im Einheitskreis. Leipz. Ber. 63, 501–511 (1911)
Kac, I.S.: Spectral multiplicity of a second-order differential operator and expansion in eigenfunctions. Dokl. Akad. Nauk. SSSR 145, 510–513 (1962)
Kac, I.S.: Spectral multiplicity of a second-order differential operator and expansion in eigenfunctions (Russian). Izv. Akad. Nauk SSSR Ser. Mat. 27, 1081–1112 (1963)
Kodaira, K.: The eigenvalue problem for ordinary differential equations of the second order and Heisenberg’s theory of \(S\)-matrices. Am. J. Math. 71, 921–945 (1949)
Kostrykin, V., Schrader, R.: Kirchhoff’s rule for quantum wires. J. Phys. A Math. Gen. 32, 595–630 (1999)
Liaw, C., Treil, S.: Matrix measures and finite rank perturbations of self-adjoint operators. J. Spectral Theory 10(4), 1173–1210 (2020)
Malamud, M.M.: On singular spectrum of finite dimensional perturbations (to the Aronszajn-Donoghue-Kac theory). Dokl. Math. 100(1), 358–362 (2019)
Pearson, D.B.: Quantum Scattering and Spectral Theory. Techniques of Physics, vol. 9. Academic Press Inc, London (1988)
Poltoratski, A.: On the boundary behaviour of pseudocontinuable functions. St. Petersburg Math. J. 5, 389–406 (1994)
Poltoratski, A.: Asymptotic behavior of arguments of Cauchy integrals. In: Linear and Complex Analysis, Amer. Math. Soc. Transl. Ser. 2, vol. 226, pp. 133–144 (2009)
Saks, S.: Theory of the Integral, 2nd edn. Dover Publications Inc, New York (1964)
Simon, B.: On a theorem of Kac and Gilbert. J. Funct. Anal. 223, 109–115 (2005)
Simonov, S., Woracek, H.: Spectral multiplicity of selfadjoint Schrödinger operators on star-graphs with standard interface conditions. Integr. Equ. Oper. Theory 78(4), 523–575 (2014)
Titchmarsh, E.C.: Eigenfunction Expansions Associated with Second-Order Differential Equations. Part I, 2nd edn. Clarendon Press, Oxford (1962)
Weyl, H.: Über gewöhnliche Differentialgleichungen mit Singularitäten und die zugehörigen Entwicklungen willkürlicher Funktionen (German). Math. Ann. 68(2), 220–269 (1910)
Acknowledgements
The first author was supported by the RScF 20-11-20032 grant (Sects. 1–4), and by the RFBR 19-01-00565A and RFBR 19-01-00657A grants (Sects. 5–7). He appreciates the hospitality of TU Wien, where most of this work was done. The second author was supported by the joint project I 4600 of the Austrian Science Fund (FWF) and the Russian Foundation for Basic Research (RFBR). This research was funded in whole or in part by the Austrian Science Fund (FWF) [10.55776/I4600]. For open access purposes, the author has applied a CC BY public copyright license to any author accepted manuscript version arising from this submission.
Funding
Open access funding provided by TU Wien (TUW).
Ethics declarations
Conflict of interest
There are no conflicts of interest. Data sharing is not applicable to this article, as no datasets were generated or analysed during the current study.
About this article
Cite this article
Simonov, S., Woracek, H. Local Spectral Multiplicity of Selfadjoint Couplings with General Interface Conditions. Integr. Equ. Oper. Theory 96, 18 (2024). https://doi.org/10.1007/s00020-024-02767-6
Keywords
- Schrödinger operator
- Quantum graph
- Singular spectrum
- Spectral multiplicity
- Weyl theory
- Ordinary boundary triplet
- Boundary relation
- Herglotz function