Abstract
The goal of this chapter is to describe the most general vertex conditions for Schrödinger operators on metric graphs and how these conditions are connected to the graph's topology.
The goal of this chapter is to describe the most general vertex conditions for Schrödinger operators on metric graphs and how these conditions are connected to the graph's topology. As we already mentioned, different types of vertex conditions may be required in order to reflect special properties of the vertices. Considering only standard and Dirichlet conditions is often sufficient, so one may get the impression that this chapter can be skipped by readers not aiming to study differential operators on metric graphs in full detail. This is not completely true, since the ideas developed in this chapter will be used later on, for example when deriving the trace formula.
3.1 Preliminary Discussion
We have seen that differential operators on metric graphs require introducing special conditions connecting limiting values of functions and their normal derivatives at the vertices. The role of such vertex conditions is two-fold:
-
to connect together different edges,
-
to make the differential operator self-adjoint (symmetric).
The Hilbert space \(L_2 (\Gamma ) \) and the formal differential expression (2.17) do not reflect how different edges are connected to each other. It is the vertex conditions that determine the connectivity of the graph, and therefore this question requires more attention than one might expect at first glance.
Assume that a metric graph is given and we are interested in studying all appropriate vertex conditions. Our experience tells us that we need as many conditions as the number of endpoints—the sum of degrees of all vertices. In order to reflect the graph’s connectivity properly, these conditions should connect together only the limit values associated with each vertex separately. It follows that each vertex can be considered independently, and therefore it is wise to write the boundary form (2.25) collecting together the terms corresponding to each vertex:
For every vertex of valence \( d^m \) one writes precisely \( d^m \) linearly independent conditions so that the corresponding expression
vanishes for each \( m\), ensuring that the operator is symmetric. Here,
denote the \( d^m\)-dimensional vectors of limit values at the vertex \( V^m \). It is not hard to give examples of vertex conditions that guarantee that the boundary form vanishes:
-
Dirichlet conditions:
$$\displaystyle \begin{aligned} \vec{u} (V^m) = \vec{0},\end{aligned}$$ -
Neumann conditions:
$$\displaystyle \begin{aligned} \partial \vec{u} (V^m) = \vec{0},\end{aligned}$$ -
(generalised) Robin conditions:
$$\displaystyle \begin{aligned} \partial \vec{u} (V^m) = A^m \vec{u} (V^m),\end{aligned}$$where \( A^m \) is a Hermitian matrix in \( \mathbb C^{d^m}. \)
However, these families do not cover all possible vertex conditions. In order to obtain all possible conditions, one needs to consider a certain combination of Robin and Dirichlet conditions, as will be shown in the following section.
One may think that any set of \( d^m \) such conditions guaranteeing zero boundary form is appropriate, but one more aspect has to be taken into account. Assume that the endpoints in the vertex \( V^m \) can be divided into two non-intersecting classes \( {V^m}' \) and \( {V^m}'', \)
so that the vertex conditions connect only the limit values associated with each of these subclasses separately (see Fig. 3.1). Then such vertex conditions correspond to the graph \( \Gamma \) with two vertices \( {V^m}' \) and \( {V^m}''\), rather than with one vertex \( V^m.\) If such a separation is impossible, then the vertex conditions will be called properly connecting. In what follows we consider only properly connecting conditions unless something else is required for other reasons. If the separation described above is possible, we are going to say that the vertex \( V^m \) splits into two vertices \( {V^m}' \) and \( {V^m}''. \)
In this chapter, we are going to describe all appropriate vertex conditions for star graphs. This parametrisation can be done in different (equivalent) ways, and we collect the most widely used parametrisations employed in the book. We are convinced that the parametrisation using the irreducible unitary matrix \( S \) (3.21) is the most appropriate, since this parameter has a clear physical interpretation: it coincides with the vertex scattering matrix. Moreover, this parametrisation is unique and guarantees that the vertex conditions are properly connecting.
3.2 Vertex Conditions for the Star Graph
Consider any star graph formed by \( d \) semi-infinite edges \( E_n = [x_n, \infty ) , \, n= 1,2, \dots , d ,\) joined together at one central vertex \( V = \{ x_1, x_2, \dots , x_d \} \) (having degree \( d \)). The boundary form of the maximal operator is given by:
where \( U = (\vec {u}, \partial \vec {u}) \in \mathbb C^{2d}.\) The (sesquilinear) form \( B \) introduced above does not depend on the behaviour of the functions \( u \) and \( v \) inside the edges, but only on their limit values at the vertex.
We have seen that in order to determine a self-adjoint operator corresponding to the formal expression (2.17), one has to introduce precisely \( d \) linearly independent conditions connecting the limit values \( U = (\vec {u}, \partial \vec {u}) \in \mathbb C^{2d}. \) These conditions should be chosen so that the boundary form \( B [U, V] \) vanishes whenever both \( U \) and \( V \) satisfy the conditions. In other words, in the space \( \mathbb C^{2d} \) one has to select a \( d\)-dimensional subspace \( M \) such that \( B[U,V] \) vanishes, provided \( U, V \in M. \) This is a standard problem from linear algebra and it is not hard to give examples of such subspaces, but we would like to describe all possible such subspaces. The corresponding conditions will be called Hermitian.
Definition 3.1
Conditions relating the limit values \( (\vec {u}, \partial \vec {u}) \in \mathbb C^{2d} \) at a vertex \( V \) of degree \( d \) are called Hermitian if and only if
-
the boundary form (3.4) vanishes whenever \( u \) and \( v \) satisfy these conditions;
-
the subspace in \( \mathbb C^{2d} \) formed by all limit values satisfying these conditions has the maximal dimension \( d\).
Every \( d\)-dimensional subspace \( M \subset \mathbb C^{2d} \) can be described as the image of a linear map from \( \mathbb C^{d} \) to \( \mathbb C^{2d} \), and hence as the set of \( (Et, Ft) \) for \( t \in \mathbb C^d, \) where \( E \) and \( F \) are \( d \times d \) matrices. For reasons that will become clear in a moment, we shall write \( E = B^* \) and \( F = A^* \) for suitable matrices \( A \) and \( B .\)
The subspace
has dimension \( d \) only if the \( d \times 2d \) matrix \( (A,B) \) has maximal rank:
In fact, the dimension of \( M \) is less than \( d \) if and only if there exists a vector \( t_0 \in \mathbb C^d, \, t_0 \neq \vec {0}, \) such that \( B^* t_0 = A^* t_0 = 0. \) Hence, for any \( s \in \mathbb C^d \), we have
i.e. the ranges of \( A \) and \( B \) are both orthogonal to \( t_0 \), so \( \mathrm {rank} (A,B) < d. \)
The boundary form \( B \) vanishes on \( M \times M \) if and only if the matrix \( AB^* \) is Hermitian, i.e. \( AB^* = BA^*. \)
To prove this statement, let us consider two arbitrary vectors \( U, V \in M \)
where \( t,s \in \mathbb C^d. \) The boundary form can be expressed using \( s, t \) as follows:
which vanishes if and only if \( AB^* \) is Hermitian. Thus we have proven that all self-adjoint operators on the star graph can be parametrised by \( d\)-dimensional subspaces \( M \) of the form (3.5). But this description of self-adjoint extensions is not convenient, since in order to determine whether a function \( u \) belongs to the domain of the operator, one has to check whether its limit values \( U \) can be represented as \( U = (B^*t, A^*t) \) with a certain vector \( t \in \mathbb C^d .\)
It turns out that \( M \) can be described as the set of all vectors \( U \in \mathbb C^{2d} \) satisfying the vertex conditions [309]
It is trivial that every \( U \in M \) satisfies (3.9), as the matrix \( AB^* \) is Hermitian and therefore \( AB^* t = BA^* t. \) Moreover, due to (3.6), the set of vectors satisfying (3.9) forms a \( d\)-dimensional subspace, which has to be equal to \( M \), since \( M \) is also \( d\)-dimensional. Formula (3.9) explains our unusual choice of the matrices \( B^* \) and \( A^* \) instead of \( E \) and \( F \) in the definition of \( M. \)
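The computation above is easy to check numerically. The following sketch (a hypothetical illustration using NumPy: the pair \( (A,B) \) is a random choice with \( AB^* \) Hermitian, and the boundary form is taken here in the form \( \langle \partial \vec u, \vec v \rangle - \langle \vec u, \partial \vec v \rangle \) on the limit values, an assumption consistent with the calculation) evaluates the form on two random elements of \( M \):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

# Build a pair (A, B) with A B* Hermitian: take B invertible and A = H (B*)^{-1}
# for a random Hermitian H, so that A B* = H.  (Illustrative choice, not from the text.)
B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (H + H.conj().T) / 2                       # Hermitian
A = H @ np.linalg.inv(B.conj().T)

def boundary_form(u, du, v, dv):
    # Assumed form of the vertex boundary form: <du, v> - <u, dv>.
    return np.vdot(v, du) - np.vdot(dv, u)

# Random elements of M: U = (B* t, A* t), V = (B* s, A* s).
t = rng.normal(size=d) + 1j * rng.normal(size=d)
s = rng.normal(size=d) + 1j * rng.normal(size=d)
val = boundary_form(B.conj().T @ t, A.conj().T @ t,
                    B.conj().T @ s, A.conj().T @ s)
print(abs(val) < 1e-8)   # True: the form vanishes on M x M
```

Replacing \( H \) by a non-Hermitian matrix makes the printed value `False`, in line with the "if and only if" statement.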
We have proved the following theorem:
Theorem 3.2
Any Hermitian vertex condition at the vertex \( V \) of degree \( d \) can be written in the form
where \( \vec {u} \) and \( \partial \vec {u} \) denote the vectors of limit values of the functions (2.12) and their extended normal derivatives (2.26) at the vertex. The \( d \times d \) matrices \( A \) and \( B \) can be chosen arbitrarily, provided that the rank of the \( d \times 2d \) matrix \( (A,B) \) is maximal and the matrix \( AB^* \) is Hermitian.
The subspace \( M \) (and therefore the self-adjoint operator) is not changed if the matrices \( A \) and \( B \) are replaced with \( CA \) and \( CB \), where \( C \) is any non-singular \( d \times d\) matrix. It follows that there is no one-to-one correspondence between the pairs of matrices and the self-adjoint operators. This fact makes it difficult to use this parametrisation when inverse problems are discussed. It is also not straightforward to check whether the corresponding conditions are properly connecting or not. It is clear that if \( A \) and \( B \) are both block-diagonal with the same block structure, then the vertex conditions are not properly connecting. Consider the following explicit example.
Example 3.3
Let \( \Gamma \) be the star graph formed by three semi-axes joined together at the vertex \( V = \{ x_1, x_2, x_3 \}\) (see Fig. 3.2), and let the vertex conditions be given by a pair of block-diagonal matrices \( A \) and \( B \).
It is clear that \( AB^* = 0 = BA^* \) and the rank of \( (A,B) \) is \( 3 .\) Therefore the corresponding vertex conditions are Hermitian.
But both \( A \) and \( B \) are block-diagonal matrices with blocks of size \( 2 \times 2 \) and \( 1 \times 1, \) which allows one to write the same vertex conditions in the form:
or even as
These conditions are not properly connecting and correspond to a line and a half-line, independent of each other, rather than to the star graph formed by three semi-axes.
Multiplication of the matrices \( A \) and \( B \) by a non-singular matrix \( C \) may destroy the block-diagonal structure, in which case it is hard to see that these conditions can be rewritten so that they connect only the limit values corresponding to the two subvertices.
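The effect is easy to demonstrate numerically. In the following sketch (illustrative matrices, not the ones displayed in Example 3.3, and assuming the conditions are written in the form \( A \vec u = B \,\partial \vec u \) of (3.10)) a block-diagonal pair is multiplied by a random non-singular \( C \): the selected subspace \( M \) is unchanged, while the block structure of the matrices disappears.

```python
import numpy as np

rng = np.random.default_rng(1)

# Block-diagonal pair (illustrative): a 2x2 block giving standard conditions
# on the endpoints x1, x2 and a 1x1 Dirichlet block for x3.
A = np.array([[1.0, -1.0, 0.0],
              [0.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
B = np.array([[0.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])

def subspace(A, B):
    """Orthonormal basis of M = { (u, du) : A u = B du }, via the SVD."""
    C = np.hstack([A, -B])                    # conditions  A u - B du = 0
    _, s, Vh = np.linalg.svd(C)
    rank = np.sum(s > 1e-12)
    return Vh[rank:].conj().T                 # null-space basis

C = rng.normal(size=(3, 3))                   # generically non-singular
M1 = subspace(A, B)
M2 = subspace(C @ A, C @ B)

# Same subspace: the orthogonal projectors onto M coincide ...
P1 = M1 @ M1.conj().T
P2 = M2 @ M2.conj().T
print(np.allclose(P1, P2))                    # True
# ... although C A is (generically) no longer block-diagonal:
print(abs((C @ A)[0, 2]) > 1e-6)              # True
```

The projector comparison is used because the null-space bases returned by the SVD need not coincide even when the subspaces do.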
3.3 Vertex Conditions Via the Vertex Scattering Matrix
In this section we are going to describe another possible equivalent parametrisation of all vertex conditions using the scattering matrix—a unitary matrix describing how the waves are transmitted by the vertex. This parametrisation has the following advantages:
-
the matrix giving this parametrisation is unique;
-
the parameter has a clear interpretation;
-
characterisation of all properly connecting conditions is straightforward.
In what follows, we are mainly going to use this parametrisation in our studies.
3.3.1 The Vertex Scattering Matrix
We introduce here the notion of the vertex scattering matrix. Consider the Laplace operator \( L^{(A,B)} \) on the star graph, defined by \( - \frac {d^2}{dx^2} \) on the domain of functions satisfying (3.10). The absolutely continuous spectrum of this operator is the same as for the Dirichlet Laplacian \( L^D\) (the second derivative operator defined on functions satisfying Dirichlet conditions at the vertex): it coincides with the interval \( [0, \infty ) \) and has multiplicity \( d. \) The corresponding generalised eigenfunctions of \( L^{(A,B)} \), often called scattered waves, are the uniformly bounded solutions to the differential equation
satisfying the vertex conditions (3.21). Every solution to this differential equation on each interval \( [x_j, \infty ) \) can be written in the form
One should think of the wave \( \mathrm {e}^{-ik(x-x_j)} b_j \) as an incoming wave, which after interaction with the vertex is reflected as the outgoing wave \( \mathrm {e}^{ik(x-x_j)} a_j. \) Of course, the amplitudes \( b_j \) of the incoming waves are arbitrary, while the amplitudes \( a_i \) of the outgoing waves are determined by the whole set of \( b_j,\; j = 1,2, \dots , d.\) This relation can be written in matrix form as
where \( S_{\mathbf {v}} (k) \) is called the vertex scattering matrix corresponding to the energy \( \lambda = k^2. \) In our case, the relation between the amplitudes of the incoming and outgoing waves is obtained by inserting the function given by (3.12) into the vertex conditions.
Let us calculate \( S_{\mathbf {v}} (k) \) determined by the vertex conditions (3.10). The limit values of the function \( \psi \) are
Substitution into (3.10) gives the relation
leading to
where one takes into account that the vector \( \vec {b} \) of amplitudes of incoming waves is arbitrary. The matrix \( A- \mathrm {i} k B \) is invertible, since otherwise its adjoint \( A^* + \mathrm {i} k B^* \) has a nontrivial kernel, i.e. there exists \( t \neq 0 \) such that \( (A^* + \mathrm {i} k B^*) t = 0 .\) But then multiplying by \( A \) and taking the scalar product with \( t \) we arrive at
Since both \( \| A^* t \|{ }^2 \) and \( \langle AB^* t, t \rangle \) are real (\(AB^* \) is Hermitian), it follows that \( A^* t = 0. \) In a similar way we may prove that \( B^* t = 0 \), which contradicts the second assumption in (3.11) that \( \mathrm {rank}\, (A,B) = d. \)
The vertex scattering matrix can now be calculated from (3.14)
It is easy to see that the matrix \( S_{\mathbf {v}} (k) \) is unitary for real \( k \):
$$\displaystyle \begin{aligned} S_{\mathbf{v}} (k) \, S_{\mathbf{v}} (k)^* = (A - \mathrm{i} k B)^{-1} (A + \mathrm{i} k B)(A^* - \mathrm{i} k B^*)(A^* + \mathrm{i} k B^*)^{-1} = I, \end{aligned}$$
since \( (A + \mathrm{i} k B)(A^* - \mathrm{i} k B^*) = (A - \mathrm{i} k B)(A^* + \mathrm{i} k B^*) \) holds precisely when \( AB^* = BA^* \), where we used that \( BA^* \) is Hermitian due to (3.11). Note that we were able to prove that \( S_{\mathbf {v}} (k) \) is unitary only because \( A \) and \( B \) satisfy both conditions (3.11) and \( k \) is real. As we shall see later, the vertex scattering matrix has norm less than \( 1 \) if \( \mbox{Im} \, k > 0 \).
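Unitarity on the real line can be confirmed numerically. The sketch below assumes the resolved form \( S_{\mathbf v}(k) = -(A - \mathrm i k B)^{-1}(A + \mathrm i k B) \), which is consistent with inserting \( \vec u = \vec a + \vec b \), \( \partial \vec u = \mathrm i k (\vec a - \vec b) \) into \( A \vec u = B\, \partial \vec u \); the particular Robin pair \( (A, B) \) is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 3

# Illustrative generalised Robin conditions: A Hermitian, B = I, so that
# A B* = A is Hermitian and rank (A, B) = d holds automatically.
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
A = (A + A.conj().T) / 2
B = np.eye(d)

def S_v(k):
    # Assumed resolved form of (3.14): (A - ikB) a = -(A + ikB) b.
    return -np.linalg.solve(A - 1j * k * B, A + 1j * k * B)

k = 1.7
S = S_v(k)
print(np.allclose(S.conj().T @ S, np.eye(d)))        # unitary for real k

# Consistency check: the scattered wave satisfies the vertex conditions.
b = rng.normal(size=d) + 1j * rng.normal(size=d)     # incoming amplitudes
a = S @ b                                            # outgoing amplitudes
u, du = a + b, 1j * k * (a - b)                      # limit values of the Ansatz
print(np.allclose(A @ u, B @ du))                    # True
```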
Unitarity of \( S_{\mathbf {v}} (k) \) implies that not only the vectors \( \vec {b} \) of incoming amplitudes but also the vectors \( \vec {a} \) of outgoing amplitudes span the whole of \( \mathbb C^d \). In other words, given any \( \vec {a} \in \mathbb C^d\) one may find a set of incoming amplitudes such that (3.13) holds. On the other hand, some entries in the scattering matrix may vanish: for example, if \( (S_{\mathbf {v}} (k))_{12} \) is zero, then the amplitude of the outgoing wave on the first edge is independent of the amplitude of the incoming wave on the second edge.
3.3.2 Scattering Matrix as a Parameter in the Vertex Conditions
Our idea is to use the vertex scattering matrix to parameterise the set of vertex conditions. It is easy to see that the values of \( S_{\mathbf {v}} (k) \) for different \( k \in \mathbb R \) are determined by each other. In particular, we are going to prove the following explicit formula (which probably appeared for the first time in [310]):
where \( I \) denotes the \( d \times d \) unit matrix. In what follows we are going to identify \( \alpha \) with \( \alpha I. \) The particular value of \( k_0 \) chosen in this parametrisation is of no significance, so let us use \( k_0 = 1 \) in what follows and introduce the notation:
The unitary matrix \( S \) is uniquely determined by \( A \) and \( B \), but not vice versa. The matrices \( A \) and \( B \) can be chosen equal to
$$\displaystyle \begin{aligned} A = \mathrm{i}\, (S - I), \quad B = S + I .\end{aligned}$$
It is an easy exercise to check that the corresponding \( S_{\mathbf {v}} (1) = S. \) One may also prove that such a pair \( (A,B)\) satisfies conditions (3.11). The first condition can be shown by taking into account that the matrix \( S \) is unitary:
$$\displaystyle \begin{aligned} A B^* = \mathrm{i}\, (S - I)(S^* + I) = \mathrm{i}\, (S - S^*) = (A B^*)^* .\end{aligned}$$
The second condition follows from
$$\displaystyle \begin{aligned} A A^* + B B^* = (S-I)(S^*-I) + (S+I)(S^*+I) = 4 I ,\end{aligned}$$
which holds for any unitary \( S \) and implies that \( \mathrm{rank}\, (A,B) = d. \)
To prove formula (3.17) we substitute \( (A,B) \) from (3.19) into formula (3.15) for the scattering matrix:
which is essentially (3.17) in the special case \( k_0 =1 . \) One just needs to take into account that the matrices involved commute, so that \( S_{\mathbf {v}} (k) \) can be written as a quotient.
In what follows we shall need the special case of (3.17), which expresses the vertex scattering matrix through the unitary parameter \( S\):
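The agreement between the direct computation and the closed formula can be checked numerically. In the sketch below, both the resolved form \( S_{\mathbf v}(k) = -(A - \mathrm i kB)^{-1}(A + \mathrm i kB) \) and the shape \( S_{\mathbf v}(k) = \big( (k+1)I + (k-1)S \big)^{-1} \big( (k+1)S + (k-1)I \big) \) of the energy-dependence formula are assumptions consistent with the derivation above.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 3

# Random unitary parameter S, generated via a QR decomposition.
S, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))

# The choice A = i (S - I), B = S + I, for which S_v(1) = S.
A = 1j * (S - np.eye(d))
B = S + np.eye(d)

def S_v_direct(k, A, B):
    # Assumed resolved form of (3.14): (A - ikB) a = -(A + ikB) b.
    return -np.linalg.solve(A - 1j * k * B, A + 1j * k * B)

def S_v_formula(k, S):
    # Assumed shape of the closed formula (3.20).
    I = np.eye(S.shape[0])
    return np.linalg.solve((k + 1) * I + (k - 1) * S, (k + 1) * S + (k - 1) * I)

for k in (0.5, 1.0, 3.0):
    print(np.allclose(S_v_direct(k, A, B), S_v_formula(k, S)))   # True
print(np.allclose(S_v_formula(1.0, S), S))                       # S_v(1) = S
```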
3.3.3 On Properly Connecting Vertex Conditions
We are now going to discuss which matrices \( S \) lead to properly connecting vertex conditions. Let us recall that vertex conditions are called properly connecting if and only if the vertex cannot be divided into two (or more) vertices so that the vertex conditions connect only limit values belonging to each of the new vertices separately. We have seen that one faces certain difficulties in characterising all possible properly connecting conditions when the description (3.10) via the pair \( (A,B) \) is used. On the other hand, it is clear that all vertex conditions that are not properly connecting lead to vertex scattering matrices \( S_{\mathbf {v}} \) of block-diagonal form. Conversely, every such matrix leads to vertex conditions that are not properly connecting.
A matrix is called reducible if and only if it can be transformed into block upper-triangular form by a permutation of coordinates. But every unitary block upper-triangular matrix is block diagonal, so all properly connecting vertex conditions are in one-to-one correspondence with irreducible unitary matrices \( S. \) Therefore, without loss of generality, we are going to restrict ourselves to irreducible unitary matrices \( S \) parameterising the vertex conditions.
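Irreducibility of a concrete matrix can be tested mechanically: reducibility by permutations is equivalent to disconnectedness of the support graph with edges \( i \sim j \) whenever \( S_{ij} \neq 0 \) or \( S_{ji} \neq 0 \). The helper below and both sample matrices are illustrative.

```python
import numpy as np

def is_irreducible(S, tol=1e-12):
    """Test whether the support graph of S (i ~ j when S_ij or S_ji is
    non-zero) is connected; for unitary S this is equivalent to S not being
    block-diagonalisable by a permutation of coordinates."""
    d = S.shape[0]
    adj = (np.abs(S) > tol) | (np.abs(S.T) > tol)
    reached = {0}
    frontier = [0]
    while frontier:                      # graph search starting from index 0
        i = frontier.pop()
        for j in np.nonzero(adj[i])[0]:
            if j not in reached:
                reached.add(int(j))
                frontier.append(int(j))
    return len(reached) == d

# A block-diagonal S of the kind arising from conditions that are not
# properly connecting (free passage between x1, x2; Dirichlet at x3):
S_split = np.array([[0.0, 1.0, 0.0],
                    [1.0, 0.0, 0.0],
                    [0.0, 0.0, -1.0]])
print(is_irreducible(S_split))               # False

# Standard conditions couple everything (R = -1 + 2/d, T = 2/d, Sect. 3.5.3):
d = 3
S_st = 2.0 / d * np.ones((d, d)) - np.eye(d)
print(is_irreducible(S_st))                  # True
```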
Theorem 3.2 can be reformulated as follows.
Theorem 3.4
The set of Hermitian properly connecting vertex conditions at the vertex \( V \) of degree \( d \) can be uniquely parametrised by \( d \times d \) irreducible unitary matrices \( S \), writing conditions (3.10) in the form
where \( \vec {u} \) and \( \partial \vec {u} \) denote the vectors of limit values of the functions (2.12) and their extended normal derivatives (2.26) at the vertex.
Since every self-adjoint extension of the minimal operator \( L^{\mathrm {min}} \) leads to a certain unitary vertex scattering matrix \( S_{\mathbf {v}} (k) \), the vertex conditions (3.21) describe all possible self-adjoint extensions [90, 442, 506].
In what follows, the self-adjoint operator corresponding to the differential expression \( \tau _{q,a} \) given by (2.17) on a metric graph \( \Gamma \) with vertex conditions (3.21) will be denoted by \( L_{q,a}^S (\Gamma ) .\) We shall often omit certain indices, hoping that no misunderstanding occurs.
A few other possible parametrisations of vertex conditions are described in Appendix 2. In our opinion, the parametrisation (3.21) is the most appropriate, and we are going to use it in what follows. We are going to illustrate the advantages of this parametrisation in the following section, where different properties of vertex scattering matrices are addressed.
Let us consider just one (rather applied) example that illustrates the power of this parametrisation.
Example 3.5 ([338])
Experimental physicists [470] considered transport properties of the system of nano-wires depicted in Fig. 3.3. This problem can be described by the Schrödinger equation on \( \Gamma _B \) and requires Hermitian vertex conditions at the vertex \( V = \{ x_1, x_2, x_3, x_4 \}. \) The main question is: how does one select these conditions in order to reflect the geometry of the coupling? It is clear that, in the ballistic regime, the probabilities of transport between the points \( x_1 \) and \( x_3 \), as well as between \( x_2 \) and \(x_4 \), are negligible. Hence it is natural to look for vertex conditions that guarantee that the following entries in the vertex scattering matrix are zero:
One may also assume that the reflection is small, leading to
If a certain entry in the vertex scattering matrix is equal to zero for one particular energy, one cannot be sure that it remains zero for all other values of the energy, since the vertex scattering matrices in general depend on the energy (see (3.20)). One may show that the vertex scattering matrix is independent of the energy if and only if the parameter \( S \) is not only unitary, but also Hermitian: \( S= S^{-1} = S^* \) (see Sect. 3.5.1).
Every \( 4 \times 4 \) real unitary Hermitian matrix satisfying conditions (3.22)–(3.23) is of the form
where \( \sigma = \pm 1 \) and \( \alpha , \beta \in \mathbb R \) are subject to
We required that the matrix is real in order to guarantee that all eigenfunctions may be chosen real. In order to guarantee that the vertex conditions are properly connecting one should require that
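The constraints can be explored numerically. The family constructed below is a guess consistent with the stated requirements (real, Hermitian, unitary, zero diagonal, vanishing \( 13 \)-, \( 31 \)-, \( 24 \)- and \( 42 \)-entries, parameters with \( \alpha ^2 + \beta ^2 = 1 \)); it is not necessarily the book's displayed formula (3.24).

```python
import numpy as np

def S_example(alpha, beta, sigma=1):
    # One family of real, Hermitian, unitary 4x4 matrices with zero diagonal
    # (no reflection) and vanishing 13-, 31-, 24-, 42-entries (no transport
    # between x1, x3 and between x2, x4).  A guess consistent with the stated
    # constraints, not necessarily the formula (3.24).
    a, b, s = alpha, beta, sigma
    return np.array([[0.0,     a,    0.0,      b],
                     [  a,   0.0,  s * b,    0.0],
                     [0.0, s * b,    0.0, -s * a],
                     [  b,   0.0, -s * a,    0.0]])

alpha = 0.6
beta = np.sqrt(1 - alpha**2)        # the parameters satisfy alpha^2 + beta^2 = 1
S = S_example(alpha, beta)

print(np.allclose(S, S.T))                     # real symmetric, hence Hermitian
print(np.allclose(S @ S.T, np.eye(4)))         # unitary
print(S[0, 2] == 0.0 and S[1, 3] == 0.0)       # conditions (3.22)
print(np.allclose(np.diag(S), 0.0))            # condition (3.23)
```

Both \( \alpha \) and \( \beta \) must be non-zero for the matrix to be irreducible, in line with the properly connecting requirement.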
3.4 Parametrisation Via Hermitian Matrices
Consider the eigenprojector \( P_{-1} \) associated with the eigenvalue \( -1 \) (if any) of the unitary matrix \( S \) appearing in the parametrisation (3.21). The complementary projector \( P_{-1}^\perp = I - P_{-1} \) projects on the linear span of the eigensubspaces associated with all other eigenvalues of \( S. \) Multiplying (3.21) by \( P_{-1} \) from the left we arrive at
This condition means that the vector \( \vec {u} \) has to be orthogonal to the eigenvectors of \( S\) associated with the eigenvalue \( -1. \)
The second condition is obtained by multiplying (3.21) by \( P_{-1}^\perp \):
where we used that \( S \) commutes with its eigenprojectors. The matrix \( (S+I) \) is invertible on the range of \( P_{-1}^\perp \), hence we have
The ranges of \( P_{-1} \) and \( P_{-1}^\perp \) span the space \( \mathbb C^d \), hence condition (3.21) is equivalent to
where
The matrix \( A_S \) appearing in this parametrisation is Hermitian and its eigenvectors coincide with the eigenvectors of the unitary matrix \( S \) (not corresponding to the eigenvalue \( -1\)). To prove this let us write \( A_S \) in the form
and take the adjoint
This parametrisation shows that the most general vertex conditions at a vertex can be considered as a combination of Dirichlet and Robin type conditions:
-
the first condition in (3.27) is precisely of Dirichlet type,
-
the second condition in (3.27) is of Robin type.
This form of vertex conditions will be extremely useful when quadratic forms of operators are discussed (see Chap. 11).
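The equivalence of the two parametrisations can be checked numerically. In this sketch the explicit form \( A_S = \mathrm i\, (S+I)^{-1}(S-I) \) restricted to the range of \( P_{-1}^\perp \) is a guess consistent with the derivation above (the displayed formula for \( A_S \) is not reproduced here), and the subspace \( M \) is sampled using the pair \( A = \mathrm i (S-I) \), \( B = S+I \).

```python
import numpy as np

rng = np.random.default_rng(5)
d = 4

# A unitary S with a genuine eigenvalue -1, to exercise both parts of (3.27);
# the eigenvalues and eigenvectors below are illustrative choices.
theta = np.array([np.pi, 0.3, -1.2, 2.0])
Q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
S = Q @ np.diag(np.exp(1j * theta)) @ Q.conj().T

P_minus = np.outer(Q[:, 0], Q[:, 0].conj())      # eigenprojector for -1
P_perp = np.eye(d) - P_minus

# Guessed explicit form of A_S: i (S + I)^{-1} (S - I) on the range of P_perp;
# the pseudoinverse realises the inverse of S + I on that range.
A_S = 1j * P_perp @ np.linalg.pinv(S + np.eye(d)) @ (S - np.eye(d)) @ P_perp
print(np.allclose(A_S, A_S.conj().T))            # A_S is Hermitian

# A point of the subspace M, sampled as u = B* t, du = A* t with
# A = i (S - I), B = S + I:
t = rng.normal(size=d) + 1j * rng.normal(size=d)
u = (S + np.eye(d)).conj().T @ t
du = (1j * (S - np.eye(d))).conj().T @ t

print(np.allclose(P_minus @ u, 0))               # Dirichlet part of (3.27)
print(np.allclose(P_perp @ du, A_S @ u))         # Robin part of (3.27)
```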
3.5 Scaling-Invariant and Standard Conditions
3.5.1 Energy Dependence of the Vertex S-matrix
Let us now discuss how the vertex scattering matrix depends on the energy. Since the matrix \( S \) is unitary, it is convenient to use its spectral representation
where \( \theta _n \in (-\pi , \pi ], \vec {e}_n \in \mathbb C^d, S \vec {e}_n = \mathrm {e}^{\mathrm {i}\theta _n} \vec {e}_n. \) We use that \( S_{\mathbf {v}} \) is a rational function of \( S\); hence formula (3.20) implies
The unitary matrix \( S_{\mathbf {v}} (k) \) has the same eigenvectors as the matrix \( S \), but the corresponding eigenvalues in general depend on the energy. The eigenvalues \( \pm 1 \) are invariant; all other eigenvalues (i.e. different from \( \pm 1 \)) tend to \( 1 \) as \( k \rightarrow \infty . \)
If \( S \) is not Hermitian, one may calculate both the high and the low energy limits of \( S_{\mathbf {v}} (k) \):
Here we used the notation \( P_{\pm 1} \) for the spectral projectors associated with the eigenvalues \( \pm 1 :\)
The vertex scattering matrix is independent of the energy if and only if the vertex conditions are non-Robin, or scaling-invariant as described in the following section.
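The motion of the eigenvalues can be made explicit. Assuming, as before, the shape \( S_{\mathbf v}(k) = ((k+1)I + (k-1)S)^{-1}((k+1)S + (k-1)I) \) of (3.20), an eigenvalue \( \mathrm e^{\mathrm i \theta} \) of \( S \) is mapped to \( \big( (k+1)\mathrm e^{\mathrm i\theta} + (k-1) \big) / \big( (k+1) + (k-1)\mathrm e^{\mathrm i\theta} \big) \); the sketch below checks that \( \pm 1 \) are fixed while other eigenvalues drift towards \( 1 \).

```python
import numpy as np

def mobius(k, theta):
    # Eigenvalue map induced by the (assumed) formula (3.20).
    z = np.exp(1j * theta)
    return ((k + 1) * z + (k - 1)) / ((k + 1) + (k - 1) * z)

# The eigenvalues +1 and -1 are fixed for every k ...
for k in (0.5, 2.0, 10.0):
    print(np.isclose(mobius(k, 0.0), 1.0), np.isclose(mobius(k, np.pi), -1.0))

# ... while any other eigenvalue tends to 1 in the high energy limit.
print(abs(mobius(1e8, 1.0) - 1.0) < 1e-6)          # True
```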
3.5.2 Scaling-Invariant, or Non-Robin Vertex Conditions
For the star graph formed by the edges \( E_n = [x_n, \infty ) , \; n =1,2, \dots , d, \) consider the scaling transformation
$$\displaystyle \begin{aligned} E_n \ni x \mapsto x_n + c\, (x - x_n) \in E_n , \quad c > 0 .\end{aligned}$$
This transformation naturally induces the function transformation \( u \mapsto u_c \), so that if \( y \in E_n = [x_n, \infty ) \), then
$$\displaystyle \begin{aligned} u_c (y) = u \Big( x_n + \frac{y - x_n}{c} \Big) .\end{aligned}$$
It is natural to call vertex conditions scaling invariant if and only if any function \( u \) and its scaling \( u_c \) satisfy conditions simultaneously.
It is clear that the limit values of \( u \) and \( u_c \) are related via
$$\displaystyle \begin{aligned} \vec{u}_c (V) = \vec{u} (V), \quad \partial \vec{u}_c (V) = c^{-1}\, \partial \vec{u} (V) ,\end{aligned}$$
provided the magnetic potential is zero. Vertex conditions (3.27) are invariant under this scaling if and only if the matrix \( A_S \) is identically zero. As one can see from (3.28), the parameter matrix \( S \) then has only the eigenvalues \( 1 \) and \( -1 \); hence \( S \) is not only unitary but also Hermitian. Therefore any scaling-invariant vertex conditions can be written in the form:
where \( P_{\pm 1} \) are the eigenprojectors on the two orthogonal eigensubspaces spanning \( \mathbb C^d. \) These conditions can be seen as a combination of Dirichlet and Neumann conditions. The corresponding matrix \( A_S \) appearing in the Hermitian parametrisation is zero, and therefore scaling-invariant vertex conditions are often called non-Robin. In the two extreme cases \( P_{-1} = I \; (P_1 = 0) \) and \( P_{-1} = 0 \; (P_1 = I)\) the conditions reduce to the usual Dirichlet and Neumann ones.
A characteristic property of scaling-invariant vertex conditions is that the corresponding vertex scattering matrix is independent of the energy (as can be seen from (3.30)) and can be written as a sum of two projectors
3.5.3 Standard Vertex Conditions
Standard vertex conditions (2.27)
appear naturally if we impose the requirement that the functions are continuous at the nodes. Continuity of the wave-function is a natural requirement and is usually welcomed in applications. It is customary to use these conditions if there is no preference or it is not known which particular vertex conditions should be used, which explains the name. We have already considered standard vertex conditions in Sect. 2.1.3. Writing these conditions using matrices \( A \) and \( B \) is not difficult:
The first \( d-1 \) equations imply that the function \( u \) is continuous, while the last equation corresponds to the Kirchhoff condition.
Let us discuss how to describe the standard conditions using the scattering matrix. To this end we calculate the vertex scattering matrix. Substituting Ansatz (3.12) into (3.35) and taking into account that the ranges of the two matrices are orthogonal, we get:
These conditions can be written as
Then it is clear that the edges are indistinguishable, i.e. the conditions are invariant under permutations of the edges. Therefore the vertex scattering matrix should satisfy the equation
for any permutation \(P_\sigma \), and therefore be of the form:
At this stage we cannot exclude the possibility that the transmission \( T \) and reflection \( R \) coefficients depend on the spectral parameter \( k.\) Let us assume that there is just one incoming wave arriving along the edge \( E_1 \): the corresponding scattered wave is given by the Ansatz
Substituting this Ansatz into the standard conditions (2.27) leads to the following linear system
Solving the linear system we get the transmission and reflection coefficients
The matrix \( S^{\mathrm {st}} \) corresponding to standard vertex conditions is then given by
which allows one to write the standard vertex conditions in the form (3.21) with \( S = S^{\mathrm {st}} .\)
The scattering matrix is independent of the energy and therefore can be written using two projectors. One may also introduce the eigensubspaces \( N_1 = \mathcal L \{ (1,1,1,\dots ,1)\}\) and \( N_{-1} = N_1^\perp \) corresponding to the eigenvalues \( \pm 1. \) The orthogonal projectors \( P_{\pm 1} = P_{N_{\pm 1}} \) allow one to write standard vertex conditions also in the form (3.33).
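The two equations for a single incoming wave can be solved mechanically. The sketch below solves the system given by continuity (\( 1 + R = T \)) and the Kirchhoff condition (\( (R-1) + (d-1)T = 0 \)) and checks the basic properties of the resulting matrix; the matrix with \( R \) on the diagonal and \( T \) elsewhere follows the permutation-invariance argument above.

```python
import numpy as np

d = 5

# Continuity: 1 + R = T  <=>  R - T = -1.
# Kirchhoff:  (R - 1) + (d - 1) T = 0  <=>  R + (d - 1) T = 1.
M = np.array([[1.0,  -1.0],
              [1.0, d - 1.0]])
rhs = np.array([-1.0, 1.0])
R, T = np.linalg.solve(M, rhs)
print(np.isclose(R, 2.0 / d - 1.0), np.isclose(T, 2.0 / d))   # True True

# The resulting vertex scattering matrix: R on the diagonal, T elsewhere.
S_st = T * np.ones((d, d)) + (R - T) * np.eye(d)
print(np.allclose(S_st @ S_st, np.eye(d)))      # real symmetric with S^2 = I,
                                                # hence Hermitian and unitary
print(np.allclose(S_st @ np.ones(d), np.ones(d)))   # eigenvalue 1 on (1,...,1)
```

Since \( S^{\mathrm{st}} \) is Hermitian, the corresponding vertex scattering matrix is energy independent, in agreement with Sect. 3.5.1.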
Standard vertex conditions at a degree two vertex mean that the function and its first derivative are continuous at the vertex. As a result, the corresponding vertex scattering matrix describes free passage through the vertex
Hence degree two vertices with standard conditions can always be removed: the two edges joined at the vertex can be substituted with one edge whose length equals the sum of the lengths of the two edges.
Conversely, every point inside an edge can be seen as a degree two vertex with standard conditions.
3.6 Signing Conditions for Degree Two Vertices
The signing conditions resemble the standard conditions, differing just by two extra signs, hence the name
These conditions correspond to multiplication of the function by \( -1 \) while crossing the vertex. The corresponding vertex scattering matrix is
These conditions will play a very important role when discussing the solution of the inverse problem using magnetic flux dependent spectral data.
For example, introducing signing conditions connecting the endpoints of the same interval corresponds to the loop graph with magnetic flux equal to \( \pi \).
We borrow the name signing conditions from discrete graph theory, see for example [89, 384].
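The sign-flip behaviour can be verified directly. The sketch below assumes the signing conditions in the form \( u_1 + u_2 = 0 \), \( \partial u_1 - \partial u_2 = 0 \) (the standard degree-two conditions with two extra signs) and the resolved form of the scattering matrix used earlier.

```python
import numpy as np

# Signing conditions at a degree two vertex, written as A u = B du
# (assumed form: the standard conditions with two extra signs).
A = np.array([[1.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, -1.0]])

def S_v(k):
    # Same resolved form of (3.14) as before: (A - ikB) a = -(A + ikB) b.
    return -np.linalg.solve(A - 1j * k * B, A + 1j * k * B)

# The scattering matrix is energy independent and equals minus the swap:
# the wave passes the vertex multiplied by -1.
for k in (0.7, 1.0, 4.0):
    print(np.allclose(S_v(k), [[0, -1], [-1, 0]]))   # True
```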
3.7 Generalised Delta Couplings
In this section we present yet another class of vertex conditions. These conditions were introduced in order to guarantee that the ground state eigenfunction may be chosen positive. They are characterised by the property that the domain of the quadratic form is invariant under taking the absolute value and the value of the quadratic form does not increase (see Sect. 4.5).
With any vertex \( V \) of degree \( d \) we associate \( n \leq d \) arbitrary vectors \( \vec {a}_j \) with the following properties:
-
all coordinates of \( \vec {a}_j \) are non-negative numbers
$$\displaystyle \begin{aligned} \vec{a}_j \in \mathbb R_+^{d} ;\end{aligned}$$ -
the vectors have disjoint supports so that
$$\displaystyle \begin{aligned} \vec{a}_j (x_l) \; \vec{a}_i (x_l) = 0, \; \mbox{provided} \; j \neq i , \; x_l \in V,\end{aligned}$$holds.
Without loss of generality we assume that the vectors \( \vec {a}_j \) are normalised:
The coordinates of the vectors \( \vec {a}_j \) will be called weights.
In addition to the vectors \( \vec {a}_j \), we pick a Hermitian \( n \times n \) matrix \( \mathbf A \) playing the role of a Robin parameter. Then the generalised delta couplings are written as follows:
The dimension \( n \) of the subspace
will be referred to as the order of the generalised delta-condition (Fig. 3.4).
The first condition in (3.44) is a weighted continuity condition, since it can be written as follows:
The difference from the classical delta coupling (see Appendix 1) is that the function is not necessarily continuous at the vertex. In the case \( n=1 \), when the corresponding vector \( \vec {a}_1 \) has maximal support, any coordinate of \( \vec {u} \) determines all other coordinates: the value of \( u \) at one endpoint determines its values at all other endpoints. But the values may be different if the weights are different; one may say that the function is continuous with weights in this case. If \( n \geq 2 \), then the entries of \( \vec {u} \) are determined by \( n \) arbitrary parameters. Every coordinate of \( \vec {u} \) belongs to the support of at most one vector \( \vec {a}_j \) and thus determines all other coordinates in the support of the same \( \vec {a}_j. \) The wave function \( u \) attains \( n \) independent weighted values associated with the different groups of endpoints joined at the vertex. One should think of this condition as a weighted continuity of \( u \) on each group of endpoints.
Changing the order \( n \), \( 1 \leq n \leq d \), of the delta coupling allows one to interpolate between the classical delta coupling and the most general vertex conditions: \( n=1 \) corresponds to the weighted delta coupling and \( n=d \) to the most general Robin condition of the form \( \partial \vec {u} = A \;\vec {u}. \)
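The weighted continuity condition can be illustrated numerically. The following sketch (a hypothetical example, not from the text) builds a generalised delta coupling of order \( n=2 \) at a vertex of degree \( d=4 \): two normalised weight vectors with disjoint supports, and a vector of limit values determined by two free parameters.

```python
import numpy as np

# Hypothetical vertex of degree d = 4 with a coupling of order n = 2.
# Two non-negative weight vectors with disjoint supports, normalised.
a1 = np.array([1.0, 2.0, 0.0, 0.0]); a1 /= np.linalg.norm(a1)
a2 = np.array([0.0, 0.0, 1.0, 1.0]); a2 /= np.linalg.norm(a2)

# Disjoint supports: the coordinate-wise products vanish.
assert np.allclose(a1 * a2, 0.0)

# Weighted continuity: on supp(a_j) the limit values of u are proportional
# to a_j, so the whole vector u is fixed by n = 2 free parameters.
u1, u2 = 3.0, -1.5
u = u1 * a1 + u2 * a2          # limit values at the four endpoints

# The reduced values are recovered thanks to the normalisation of a_j.
assert np.isclose(a1 @ u, u1)
assert np.isclose(a2 @ u, u2)
```

The same computation with \( n=1 \) and a vector of full support reproduces the weighted continuity of \( u \) at the whole vertex.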
Note that in Eq. (3.45) we introduced a new vector \(\vec {\mathbf {u}} = (\mathbf u_1, \mathbf u_2, \dots , \mathbf u_n)\)—the reduced vector containing common weighted values of the vector \( \vec {u} \). The dimension of the vector coincides with the dimension \( n \) of the linear subspace \( \mathcal B \).
The second equation in (3.44) is a balance equation for the normal derivatives. The sum of normal derivatives connected with endpoints from the support of one of the vectors \( \vec {a}_j \) is connected via the coupling matrix \( \mathbf A \) to the common values of \( u \) at all other groups of endpoints, since we have
Here we used that the vector \( \vec {a}_i \) is normalised.
For generalised delta couplings to be properly connecting, two requirements should be fulfilled:
-
(1)
The union of supports of the vectors \( \vec {a}_j \) coincides with all endpoints in V :
$$\displaystyle \begin{aligned} {} \cup_{j=1}^n \mathrm{supp}\; (\vec{a}_j) = \{ x_l \}_{x_l \in V}. \end{aligned} $$(3.46)
-
(2)
The matrix \( \mathbf A = \{ A_{ji} \}_{j,i =1}^n \) is irreducible, i.e. it cannot be put into a block-diagonal form by simultaneous permutations of its rows and columns.
If the first condition is not satisfied, then we have classical Dirichlet conditions at certain endpoints:
Dirichlet endpoints always form separate vertices.
If the second condition is not satisfied, then the vertex \( V \) can be chopped into two (or more) vertices preserving the vertex conditions. Such conditions correspond to the metric graph in which the vertex \( V \) is divided.
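Irreducibility of \( \mathbf A \) amounts to connectivity of the graph on the indices \( 1, \dots , n \) with edges where \( A_{ji} \neq 0 \). A small numpy sketch of this test (the function name is ours, not from the text):

```python
import numpy as np

def is_irreducible(A, tol=1e-12):
    """True iff A cannot be permuted into block-diagonal form, i.e. the
    graph on indices with edges where A_ij != 0 is connected."""
    n = A.shape[0]
    adj = np.abs(A) > tol
    reached, frontier = {0}, [0]
    while frontier:
        i = frontier.pop()
        for j in range(n):
            if adj[i, j] and j not in reached:
                reached.add(j)
                frontier.append(j)
    return len(reached) == n

# Coupling matrix mixing both groups: the vertex is properly connecting.
A_good = np.array([[2.0, 1.0], [1.0, -1.0]])
# Vanishing off-diagonal part: the vertex splits into two separate vertices.
A_bad = np.diag([2.0, -1.0])

assert is_irreducible(A_good)
assert not is_irreducible(A_bad)
```

For `A_bad` the two groups of endpoints decouple completely, which is exactly the chopping of the vertex described above.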
As we already pointed out, the vertex conditions described here will play a crucial role in proving that the ground state eigenfunction can be chosen positive. For that purpose, all the weights should be real and the matrix \( \mathbf A \) should be not only Hermitian but real, with non-positive entries off the diagonal. You will read more about generalised delta couplings in Sect. 4.5, where, in particular, the corresponding quadratic form is calculated and its properties are discussed.
3.8 Vertex Conditions for Arbitrary Graphs and Definition of the Magnetic Schrödinger Operator
3.8.1 Scattering Matrix Parametrisation of Vertex Conditions
In this section we discuss the most general vertex conditions for arbitrary compact finite graphs, generalising Sect. 3.3. Our main focus will be on which properties of these conditions guarantee their admissibility, and therefore we still assume that the potentials satisfy (2.19) and (2.20).
The standard self-adjoint operator \( L_{q,a}^{\mathrm {st}} \) associated with a symmetric differential expression on a metric graph \( \Gamma \) has already been defined in Sect. 2.1 (Definition 2.2). This operator is selected by introducing standard vertex conditions (2.27) at the vertices. Let us discuss how to introduce other types of vertex conditions, so that the vertex structure of the graph \( \Gamma \) is respected. The boundary form of the maximal operator \( L_{q,a}^{\mathrm {max}} \) can be written as
Let us introduce the vectors \( \vec {U}, \partial \vec {U} \) of limit values of the function \( u \) at all endpoints:
The dimension of these vectors coincides with the number \( D \) of endpoints in \( \mathbf V.\)
In vector notation the boundary form (3.47) looks as follows
and coincides with the standard symplectic form in the space \( \mathbb C^{2D} \ni (\vec {U}, \partial \vec {U}) \). The set of self-adjoint restrictions of the maximal operator \( L^{\mathrm {max}}_{q,a} \) can be described by Lagrangian planes, i.e. maximal isotropicFootnote 6 subspaces in \( \mathbb C^{2D}. \) But not all such Lagrangian subspaces respect the vertex structure of the underlying metric graph. In order to select proper conditions let us re-write the boundary form as follows
Each subspace \( \mathbb C^{2 d^m} \) associated with the vertex \( V^m \) can be considered separately. The corresponding appropriate Lagrangian planes, or vertex conditions, have already been discussed in Sect. 3.3 in the context of star graphs.
With every vertex \( V^m \) we associate a \( d^m \times d^m \) unitary irreducible matrix \( S^m \) and introduce the vertex conditions
In what follows, we are going to limit our studies to the case of irreducible matrices \( S^m.\) The corresponding vertex conditions will be called admissible.
It will be convenient to consider the vectors \( \vec {u}(V^m) \) as elements of \( \mathbb C^{D}\), extending them by zero at all endpoints not belonging to \( V^m\). The unitary matrices \( S^m \) are then identified with the \( D \times D \) matrices obtained by setting to zero all entries with indices \( ij \) such that either \( x_i \notin V^m\) or \(x_j \notin V^m\). Then the matrix \( \mathbf S \) given by
is unitary and describes the vertex conditions at all vertices via
Note that the sum in (3.52) is orthogonal, since each matrix \( S^m \) acts on the limiting values at its own vertex. The matrix \( \mathbf S \) is in general reducible, and its invariant subspaces are determined by the vertices.
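The assembly of \( \mathbf S \) from the vertex blocks can be checked numerically. A sketch with a hypothetical two-vertex graph (degrees 2 and 3; the concrete blocks are our choice, not from the text):

```python
import numpy as np

# Degree-2 vertex: a unitary swap of the two endpoints.
S1 = np.array([[0, 1], [1, 0]], dtype=complex)
# Degree-3 vertex: the Hermitian unitary matrix (2/d)J - I
# corresponding to standard conditions.
d = 3
S2 = (2.0 / d) * np.ones((d, d), dtype=complex) - np.eye(d)

# Ordering the D = 5 endpoints vertex by vertex, S is block diagonal.
D = S1.shape[0] + S2.shape[0]
S = np.zeros((D, D), dtype=complex)
S[:2, :2] = S1
S[2:, 2:] = S2

# Unitarity of each block implies unitarity of S ...
assert np.allclose(S.conj().T @ S, np.eye(D))
# ... and S is reducible by construction: each vertex subspace is invariant.
assert np.allclose(S[:2, 2:], 0) and np.allclose(S[2:, :2], 0)
```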
Then the self-adjoint operator is defined as the restriction of the maximal operator to the domain of functions satisfying vertex conditions (3.51).
Definition 3.6
The magnetic Schrödinger operator\( L_{q,a}^{\mathbf {S}} \) is defined by the differential expression (2.17) on the domain of functions from the Sobolev space \( W_2^2 (\Gamma \setminus \mathbf V) \) satisfying the vertex conditions (3.51) at each vertex.
In this definition it is important that each matrix \( S^m \) is irreducible, while the matrix \( \mathbf S \) is reducible by construction (assuming, of course, that \( \Gamma \) has more than one vertex). The case where at least one of the matrices \( S^m \) is reducible corresponds to a different metric graph. The corresponding graph can be obtained from the graph \( \Gamma \) by splitting one of the vertices into two or more equivalence classes—new vertices (see Fig. 3.1). Thus taking \( \mathbf S = - \mathbf I \) we get the Dirichlet operator \( L_{q,a}^D \) corresponding to the graph consisting of disconnected edges.
Theorem 3.7
The operator \( L_{q,a}^{\mathbf {S}} \) is self-adjoint, provided that the matrix \( \mathbf {S} \) is unitary.
Proof
Consider the minimal operator associated with the differential expression \( L_{q,a} \) in \( L_2 (\Gamma ) \). The adjoint operator is determined by the same differential expression on the domain \( W_2^2 (\Gamma \setminus \mathbf V ). \) This follows directly from the fact that the differential expression \( L_{q,a} \) is formally symmetric.
To prove that \( L_{q,a}^{\mathbf {S}} \) is self-adjoint, one may repeat step-by-step the proof of Theorems 3.2 and 3.4.
The boundary form of the operator is given by (3.47) and it vanishes due to vertex conditions (3.51), since it can be re-written as
Each term in the sum vanishes separately. Calculating the adjoint operator \( (L_{q,a}^{\mathbf {S}})^* \) all vertices may also be treated separately, and therefore the corresponding calculations can be repeated without any major changes. □
Following (3.20), it is natural to introduce the corresponding (global) vertex scattering matrix
This matrix coincides with the scattering matrix for the vertex of valency \( D \) with the vertex conditions given by formula (3.53). This matrix will be used in what follows to calculate the positive spectrum and to establish the corresponding trace formulas.
3.8.2 Quadratic Form Parametrisation of Vertex Conditions
In mathematical physics one often determines self-adjoint operators via their quadratic, or more precisely sesquilinear, form. The reason is two-fold:
-
On one side, there is a one-to-one correspondence between semibounded self-adjoint operators and their quadratic forms.
-
Quadratic forms can be used directly in Min-Max and Max-Min principles to determine the discrete spectrum and the corresponding eigenfunctions.
All operators we discuss here are semibounded, so let us look at their quadratic forms. The sesquilinear form of the operator \( L^S_{q,a} \) can be calculated explicitly:
The domain \( \mathrm {Dom}\,Q_{q,a}^{\mathbf {S}} \) of the sesquilinear form is obtained by closing the domain \( \mathrm {Dom}\,(L_{q,a}^{\mathbf {S}}) \) with respect to the norm \( Q_{q,a}^{\mathbf {S}} (u,u) + C \| u \|^2, \) where the constant \( C \) is chosen sufficiently large to ensure positivity. Let us remember that we assume that \( q \) and \( a \) satisfy assumptions (2.19) and (2.20) respectively. Under these assumptions \( Q^{\mathbf {S}}_{q,a} (u,u) \) is bounded if and only if \( u \in W_2^1 (\Gamma \setminus \mathbf V) \), since \( a u \in L_2 (\Gamma ) \) and \( q | u |^2 \in L_1 (\Gamma ). \) It remains to understand what happens to the vertex conditions. Every function from \( W_2^1 (\Gamma \setminus \mathbf V) \) is continuous on every edge, but the first derivatives are not continuous anymore; in other words, the functionals \( u \mapsto u'(x) \) are not bounded with respect to the norm in the Sobolev space \( W_2^1 (\Gamma \setminus \mathbf V).\) It follows that the Robin part of the vertex conditions, that is, the second equation in (3.27), is not preserved. On the other hand, every function from the closure of \( \mathrm {Dom}\,(L^{\mathbf {S}}_{q,a}) \) with respect to the \(W_2^1\)-norm satisfies the Dirichlet part, i.e. the first equation in (3.27).
Summing up, the domain of the quadratic form consists of all functions from the Sobolev space \( W_2^1 (\Gamma \setminus \mathbf V) \) satisfying just the first condition in (3.27)
The second condition is not preserved, since the functionals \( u \mapsto u'(x) \) are not bounded with respect to the norm in the Sobolev space \( W_2^1 (\Gamma \setminus \mathbf V).\)
The Robin part of the vertex conditions is not preserved in the description of the quadratic form domain; nevertheless it can be reconstructed. In other words, the quadratic form \( Q_{q,a}^{\mathbf {S}} \) determines the vertex conditions uniquely. The domain of the quadratic form determines the projectors \( P_{-1}^m \) and hence the subspaces \( (I-P_{-1}^m) \mathbb C^{d^m} . \) The quadratic forms \( \langle A^m \vec {u} (V^m), \vec {u} (V^m) \rangle _{\mathbb C^{d^m}} \) determine the Hermitian matrices \( A^m .\) Therefore the unitary matrices \( S^m \) are given by the formula
The standard vertex conditions correspond to the quadratic form
where vertex terms are absent. The domain is given by all \( W_2^1 (\Gamma \setminus \mathbf V ) \) functions, which are in addition continuous at the vertices. Starting from this quadratic form, which is the most natural candidate from the physical point of view, we get the Schrödinger operator determined by the standard vertex conditions. Hence standard vertex conditions appear if one requires that the functions from the domain of the operator are continuous at the vertices and the quadratic form contains no vertex terms.
Consider the quadratic form given by the same formula (3.58) on the domain of functions from \( W_2^1 (\Gamma \setminus \{ V^m\}_{m=1}^M ) \) without requiring any continuity at the vertices. The corresponding Schrödinger operator is defined on the domain of functions satisfying Neumann conditions at all endpoints of the edges, i.e. the corresponding graph consists of \( N \) completely disconnected intervals.
Notes
- 1.
The vertex scattering matrix introduced in this way coincides with the formal scattering matrix given as a product of wave operators associated with the self-adjoint operators \( L^D\) (the unperturbed operator) and \( L^{(A,B)} \) (the perturbed operator) as is done in abstract scattering theory [90, 442, 506].
- 2.
- 3.
We have already mentioned that these two conditions are sometimes also called Kirchhoff, free or Neumann.
- 4.
We are going to return to this question in Sect. I.
- 5.
We study only the case where the weights \( \vec {a}_j (x_l) \) are non-negative reals, but in principle complex values may be allowed.
- 6.
A subspace is called isotropic if and only if the symplectic form vanishes for any two vectors from the subspace. Every such maximal subspace has dimension \( D. \)
References
T. Aktosun, M. Klaus, R. Weder, Small-energy analysis for the self-adjoint matrix Schrödinger operator on the half line. J. Math. Phys. 52(10), 102101, 24 (2011). https://doi.org/10.1063/1.3640029. MR2894582
M. Astudillo, P. Kurasov, M. Usman, RT-symmetric Laplace operators on star graphs: real spectrum and self-adjointness. Adv. Math. Phys. Posted on 2015, Art. ID 649795, 9. https://doi.org/10.1155/2015/649795. MR3442618
Y. Bilu, N. Linial, Lifts, discrepancy and nearly optimal spectral gap. Combinatorica 26(5), 495–519 (2006). https://doi.org/10.1007/s00493-006-0029-7. MR2279667
M.Sh. Birman, M.Z. Solomjak, Spectral Theory of Selfadjoint Operators in Hilbert Space. Mathematics and its Applications (Soviet Series) (D. Reidel Publishing Co., Dordrecht, 1987). Translated from the 1980 Russian original by S. Khrushchëv and V. Peller. MR1192782
T. Cheon, Reflectionless and Equiscattering Quantum Graphs. ICQNM 2011: The Fifth International Conference on Quantum, Nano and Micro Technologies (2011), pp. 18–22
T. Cheon, Reflectionless and equiscattering quantum graphs and their applications. Int. J. Syst. Meas. 5, 34–44 (2012)
V.A. Derkach, M.M. Malamud, On the Weyl function and Hermite operators with lacunae. Dokl. Akad. Nauk SSSR 293(5), 1041–1046 (1987; Russian). MR890193
V.A. Derkach, M.M. Malamud, Generalized resolvents and the boundary value problems for Hermitian operators with gaps. J. Funct. Anal. 95(1), 1–95 (1991). https://doi.org/10.1016/0022-1236(91)90024-Y. MR1087947
V.I. Gorbachuk, M.L. Gorbachuk, Boundary Value Problems for Operator Differential Equations. Mathematics and Its Applications (Soviet Series), vol. 48 (Kluwer Academic Publishers Group, Dordrecht, 1991). Translated and revised from the 1984 Russian original. MR1154792
M. Harmer, Hermitian symplectic geometry and extension theory. J. Phys. A 33(50), 9193–9203 (2000). https://doi.org/10.1088/0305-4470/33/50/305. MR1804888
M. Harmer, Hermitian symplectic geometry and the factorization of the scattering matrix on graphs. J. Phys. A 33(49), 9015–9032 (2000). https://doi.org/10.1088/0305-4470/33/49/302. MR1811226
M.S. Harmer, Inverse scattering for the matrix Schrödinger operator and Schrödinger operator on graphs with general self-adjoint boundary conditions. ANZIAM J. 44(1), 161–168 (2002). https://doi.org/10.1017/S1446181100008014. Kruskal, 2000 (Adelaide). MR1919936
M. Harmer, Inverse scattering on matrices with boundary conditions. J. Phys. A 38(22), 4875–4885 (2005). https://doi.org/10.1088/0305-4470/38/22/012. MR2148630
J.M. Harrison, U. Smilansky, B. Winn, Quantum graphs where back-scattering is prohibited. J. Phys. A 40(47), 14181–14193 (2007). https://doi.org/10.1088/1751-8113/40/47/010. MR2438119
A.N. Kočubeĭ, Extensions of symmetric operators and of symmetric binary relations. Mat. Zametki 17, 41–48 (1975; Russian). MR0365218
V. Kostrykin, R. Schrader, Kirchhoff’s rule for quantum wires. J. Phys. A 32(4), 595–630 (1999). https://doi.org/10.1088/0305-4470/32/4/006. MR1671833
V. Kostrykin, R. Schrader, Kirchhoff’s rule for quantum wires. II. The inverse problem with possible applications to quantum computers. Fortschr. Phys. 48(8), 703–716 (2000). https://doi.org/10.1002/1521-3978(200008)48:8<703::AID-PROP703>3.0.CO;2-O. MR1778728
P. Kuchment, Quantum graphs. I. Some basic structures. Waves Random Media 14(1), S107–S128 (2004). https://doi.org/10.1088/0959-7174/14/1/014. Special section on quantum graphs. MR2042548
P. Kurasov, M. Enerbäck, Aharonov-Bohm ring touching a quantum wire: how to model it and to solve the inverse problem. Rep. Math. Phys. 68(3), 271–287 (2011). https://doi.org/10.1016/S0034-4877(12)60010-X. MR2900850
P. Kurasov, M. Nowaczyk, Geometric properties of quantum graphs and vertex scattering matrices. Opuscula Math. 30(3), 295–309 (2010). https://doi.org/10.7494/OpMath.2010.30.3.295. MR2669120
P. Kurasov, R. Ogik, On equi-transmitting matrices. Rep. Math. Phys. 78(2), 199–218 (2016). https://doi.org/10.1016/S0034-4877(16)30063-5. MR3569205
P. Kurasov, F. Stenberg, On the inverse scattering problem on branching graphs. J. Phys. A 35(1), 101–121 (2002). https://doi.org/10.1088/0305-4470/35/1/309. MR1891815
P. Kurasov, R. Ogik, A. Rauf, On reflectionless equi-transmitting matrices. Opuscula Math. 34(3), 483–501 (2014). https://doi.org/10.7494/OpMath.2014.34.3.483. MR3239078
A.W. Marcus, D.A. Spielman, N. Srivastava, Interlacing families I: bipartite Ramanujan graphs of all degrees. Ann. Math. (2) 182(1), 307–325 (2015). https://doi.org/10.4007/annals.2015.182.1.7. MR3374962
R. Ogik, Quantum graphs and equi-transmitting matrices. Licentiate Thesis, Stockholm University (2014)
R. Ogik, Scattering amplitudes in the theory of quantum graphs. PhD Thesis, University of Nairobi (2015)
M. Reed, B. Simon, Methods of Modern Mathematical Physics. I-IV (Academic, New York-London, 1972)
F.S. Rofe-Beketov, Selfadjoint extensions of differential operators in a space of vector-valued functions. Dokl. Akad. Nauk SSSR 184, 1034–1037 (1969; Russian). MR0244808
F.S. Rofe-Beketov, Selfadjoint extensions of differential operators in a space of vector-valued functions. Teor. Funkciĭ Funkcional. Anal. i Priložen. Vyp. 8, 3–24 (1969; Russian). MR0281055
I.A. Shelykh, N.G. Galkin, N.T. Bagraev, Conductance of a gated Aharonov-Bohm ring touching a quantum wire. Phys. Rev. B 74, 165331 (2006)
O. Turek, T. Cheon, Quantum graph vertices with permutation-symmetric scattering probabilities. Phys. Lett. A 375(43), 3775–3780 (2011). https://doi.org/10.1016/j.physleta.2011.09.006. MR2843588
O. Turek, T. Cheon, Hermitian unitary matrices with modular permutation symmetry. Linear Algebra Appl. 469, 569–593 (2015). https://doi.org/10.1016/j.laa.2014.12.011. MR3299079
D.R. Yafaev, Mathematical Scattering Theory. Mathematical Surveys and Monographs, vol. 158 (American Mathematical Society, Providence, 2010). Analytic theory. MR2598115
Appendices
Appendix 1: Important Classes of Vertex Conditions
3.1.1 \(\delta \) and \( \delta '\)-Couplings
It is probably worth mentioning that the continuity requirement does not necessarily lead to standard vertex conditions. All self-adjoint operators described by conditions other than standard vertex conditions are usually considered as certain point perturbations of standard operators. For each vertex the following one-parameter family of vertex conditions is usually called a \( \delta \)-coupling at the vertex
Since the function \( u \) is continuous at the vertex, its value \( u(V) \) is well-defined. The real parameter \( \alpha \) describes the strength of the \( \delta \)-coupling.
Another one-parameter family is sometimes called \( \delta '\)-coupling and is in some sense dual to the \(\delta \)-coupling. It is described by the conditions
The first condition substitutes the continuity condition, while the second condition contains the parameter \( \beta \) describing the strength of the \(\delta '\)-coupling.
The \(\delta \) and \( \delta '\)-couplings can formally be considered for infinite values of the coupling parameters. The \(\delta \)-coupling with \( \alpha = \infty \) corresponds to the Dirichlet condition \( u(V) = 0 \), i.e. \( u(x_j)=0, \) whereas \( \beta = \infty \) leads to the Neumann condition \( \partial u (V) = 0 \), i.e. \( \partial u (x_j) = 0 .\) Note that these Dirichlet and Neumann conditions describe completely independent edges and therefore are not properly connecting (unless of course the valency is trivial, \( d =1 \)).
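The self-adjointness of the \( \delta \)-coupling for real \( \alpha \) can be tested in the linear-relation form \( A \vec {u} + B \partial \vec {u} = 0 \) discussed in Appendix 2, where the condition is that \( AB^* \) be Hermitian. A sketch (the encoding of the rows below is one possible choice, ours):

```python
import numpy as np

def delta_coupling(alpha, d=3):
    """A u + B du = 0 for the delta-coupling at a degree-d vertex:
    continuity rows plus sum of normal derivatives = alpha * u(V)."""
    A = np.zeros((d, d), dtype=complex)
    B = np.zeros((d, d), dtype=complex)
    for j in range(d - 1):                # continuity: u_j = u_{j+1}
        A[j, j], A[j, j + 1] = 1, -1
    A[d - 1, 0] = -alpha                  # ... = alpha * u(x_1) = alpha * u(V)
    B[d - 1, :] = 1                       # sum of the normal derivatives
    return A, B

# A B* is Hermitian exactly when the strength alpha is real.
for alpha, expect_hermitian in [(2.5, True), (1j, False)]:
    A, B = delta_coupling(alpha)
    AB = A @ B.conj().T
    assert np.allclose(AB, AB.conj().T) == expect_hermitian
```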
3.1.2 Circulant Conditions
In many applications it is important to choose vertex conditions satisfying certain additional assumptions. In this section we shall study the case where the vertex conditions are invariant under cyclic permutations. Assume that the limit values \( (\vec {u}, \partial \vec {u}) \) satisfy the vertex conditions whenever \( (\mathcal R \vec {u}, \mathcal R \partial \vec {u}) \) satisfy the same conditions, where \( \mathcal R \) is the rotation matrix
Substitution of the limit values \( (\mathcal R \vec {u}, \mathcal R \partial \vec {u}) \) into original vertex conditions (3.21) gives
Multiplying the last equality by \( \mathcal R^{-1} \) from the left we get
Since the parametrisation (3.21) is one-to-one the two vertex conditions are equivalent if and only if
It follows that the matrix \( S \) is circulant as was probably expected by the reader:
We have already seen the following important examples of circulant vertex conditions: standard (3.42), \(\delta \)- and \( \delta '\)-couplings (3.59), (3.60). Circulant conditions in connection with \( \mathcal P \mathcal T\)-symmetric operators on graphs have been discussed in [34].
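The criterion \( \mathcal R^{-1} S \mathcal R = S \), i.e. that \( S \) commutes with the rotation, is easy to verify numerically. A sketch (our own toy matrices):

```python
import numpy as np

d = 4
# Cyclic shift (rotation) matrix acting on the endpoint indices.
R = np.roll(np.eye(d), 1, axis=0)

# The matrix (2/d)J - I of standard conditions is circulant,
# hence commutes with the rotation ...
S_circ = (2.0 / d) * np.ones((d, d)) - np.eye(d)
assert np.allclose(R @ S_circ, S_circ @ R)

# ... while a transposition of two endpoints is unitary but not circulant.
S_not = np.eye(d)
S_not[[0, 1]] = S_not[[1, 0]]
assert not np.allclose(R @ S_not, S_not @ R)
```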
3.1.3 ‘Real’ Conditions
The standard Schrödinger equation possesses an important property: its eigenfunctions can always be chosen real, since if \( \psi \) is an eigenfunction, then \( \overline {\psi } \) is also an eigenfunction. This property is also known as time-reversal symmetry. Let us study which vertex conditions possess this property. We have to check under which conditions the limit values \( (\vec {u}, \partial \vec {u} ) \) satisfy (3.21) whenever \( (\overline {\vec {u}}, \overline { \partial \vec {u}} ) \) satisfy the same equation.
The limit values \( (\overline {\vec {u}}, \overline { \partial \vec {u}} ) \) satisfy (3.21) if and only if
holds. Multiplying the last equality by \( (\overline {S})^{-1} = \overline {S}^* = S^{\mathrm {t}} \) we arrive at
These vertex conditions are equivalent to (3.21) if and only if
i.e. \( S \) is a complex symmetric matrix (but not necessarily Hermitian).
Let us note that all ‘real’ vertex conditions leading to energy independent vertex scattering matrices are described by real symmetric matrices. This fact is important for physical applications. Physically relevant models are usually described by matrices with real entries. The requirement that the corresponding Hamiltonian is time-reversal invariant leads directly to scaling-invariant vertex scattering matrices.
3.1.4 Indistinguishable Edges
Let us study which class of vertex conditions corresponds to indistinguishable edges, i.e. vertex conditions invariant under arbitrary permutations of the edges. The corresponding matrices \( S \) satisfy the equation
where \( P_\sigma \) is any permutation matrix corresponding to permutation \( \sigma \). Every matrix \( S \) satisfying (3.63) is of the form
where \( R, T \) are arbitrary complex numbers. In order for \( S \) to be unitary and irreducible one has to require that
The reflection coefficient \( R \) may be equal to zero only if \( d = 2, \) since otherwise the second equality would imply \( T = 0 \) as well.
Consider the case of real \( T \) and \( R. \) The corresponding linear system
has just two solutions
The first solution corresponds to standard vertex conditions (3.41) (which coincides with the \( \delta \)-coupling with \( \alpha = 0\)), the second solution—to \( \delta '\)-coupling (3.60) with \( \beta = 0 . \) This fact underlines the importance of the family of \( \delta '\)-couplings, which was introduced originally just as a certain dual to the family of \( \delta \)-couplings. One can read more about such vertex conditions in [486].
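The two real solutions can be recovered numerically from the unitarity equations for \( S = (R-T)I + T J \), with \( J \) the all-ones matrix: \( R^2 + (d-1)T^2 = 1 \) and \( 2RT + (d-2)T^2 = 0 \). A sketch for \( d=5 \):

```python
import numpy as np

d = 5
solutions = []
for T in (2.0 / d, -2.0 / d):          # the two nonzero real solutions for T
    R = -(d - 2) * T / 2.0             # from 2RT + (d-2)T^2 = 0
    S = (R - T) * np.eye(d) + T * np.ones((d, d))
    assert np.allclose(S @ S.T, np.eye(d))   # S is real symmetric and unitary
    solutions.append((R, T))

# (R, T) = (2/d - 1, 2/d): standard conditions;
# (R, T) = (1 - 2/d, -2/d): delta'-coupling with beta = 0.
assert np.allclose(solutions,
                   [(2.0 / d - 1, 2.0 / d), (1 - 2.0 / d, -2.0 / d)])
```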
3.1.5 Equi-transmitting Vertices
In quantum mechanics, transition probabilities \( \rho _{ij} \) are given by squared absolute values of the scattering coefficients \( \rho _{ij} = \vert s_{ij} \vert ^2. \) Therefore the edges meeting at a vertex are equivalent, from the quantum mechanical point of view, if all non-diagonal entries of the vertex scattering matrix have the same absolute value. The diagonal elements have equal absolute values as well.
Definition 3.8
([263]) A \( d \times d \) unitary matrix \( S \) is called equi-transmitting if and only if
-
\( \vert s_{jj} \vert = \vert s_{ll} \vert , \; j, l = 1,2, \dots , d; \)
-
\( \vert s_{ij} \vert = \vert s_{lm} \vert , \, \mbox{for} \; i \neq j, l \neq m. \)
Equi-transmitting unitary matrices attracted attention in recent years with the hope of repairing the apparently non-physical behaviour of the vertex scattering matrices (3.41) corresponding to standard vertex conditions:
$$\displaystyle \begin{aligned} {} \left( S_{\mathbf v} \right)_{ij} = \frac{2}{d} - \delta_{ij} \; \longrightarrow \; - \delta_{ij}, \quad \mbox{as} \; d \to \infty . \end{aligned} $$(3.64)
In other words, vertices with a large degree are similar to Dirichlet vertices. This is against the physical intuition that by increasing the number of edges, one increases penetrability of the vertex.
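The entrywise convergence behind this remark can be checked directly, assuming the standard-conditions scattering matrix \( s_{ij} = 2/d - \delta _{ij} \) of (3.41):

```python
import numpy as np

for d in (3, 10, 100):
    S = (2.0 / d) * np.ones((d, d)) - np.eye(d)   # standard conditions
    assert np.allclose(S @ S.T, np.eye(d))        # Hermitian and unitary
    # Every entry differs from -delta_ij by exactly 2/d, so the matrix
    # approaches the Dirichlet matrix -I as the degree grows.
    assert np.isclose(np.max(np.abs(S + np.eye(d))), 2.0 / d)
```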
In the first step, reflectionless equi-transmitting matrices leading to scaling-invariant vertex conditions were studied [358]. Reflectionless means that all diagonal elements are zero. Such matrices exist only in even dimensions: the trace is zero, but the eigenvalues of Hermitian unitary matrices are just \( \pm 1 \), and their sum can be equal to zero only if the dimension \( d \) is even. It is relatively easy to characterise these matrices in low dimensions \( d = 2, 4, 6, \) which is done in the article mentioned above.
Equi-transmitting matrices leading to scaling-invariant vertex conditions were investigated in [263, 348, 486, 487]. The class of equi-transmitting matrices is invariant under multiplication by \(-1\), hence without loss of generality we may assume the number \( \nu ^+\) of positive diagonal elements is not less than the number of negative ones. In this case the trace of \( S \) is equal to
where \( r = \vert s_{jj} \vert \) and \( \nu ^+ \geq d/2.\) On the other hand, the matrix \( S \) is unitary and Hermitian and therefore its spectrum is given by \( \pm 1. \) Denoting by \( d^+ \) the multiplicity of \( + 1 \) we calculate the trace using the spectrum
implying \( d^+ \geq d/2\), since the trace is non-negative as calculated above. Comparing these formulas we get
In the special case \( \nu ^+ = d^+ = d/2 \) the reflection amplitude \( r \) remains undetermined.
If \( \nu ^+ = d^+ \), then \( r = 1 \), which means that the corresponding unitary matrix \( S \) is diagonal and determines vertex conditions which are not properly connecting (unless \( d= 1\) of course). Moreover one needs \( d^+ < \nu ^+ \) in order to guarantee that \( r < 1. \) Hence all possible values of \( r\) are given by formula (3.65), where the parameters \( d^+ \) and \( \nu ^+ \) should satisfy:
All possible values of \( r \) are obtained by going through all natural numbers satisfying the above inequalities; these are the only possible values of the reflection amplitude in odd dimensions, although, surprisingly, not all cases described by (3.65) can be realised. If \( d \) is even, then \( r \) may be arbitrary, provided \( d^+ = \nu ^+ = d/2. \)
Equi-transmitting matrices in low dimensions (\( d \leq 6 \)) are completely described in [348]. It turns out that matrices equivalent to those corresponding to standard vertex conditions play a very exceptional role. For example for \( d= 5 \) admissible values of \( (d^+, \nu ^+) \) are \( (4,5), (3,5) \) and \( (3,4) \) leading to the following possible values of \( r \) respectively:
The case \( r= 3/5 \) corresponds to standard vertex conditions and hence is realisable. The other two cases \( r= 1/5 \) and \( r= 1/3\) do not lead to any equi-transmitting matrix. The same phenomenon is observed when \( d= 3. \) For more details see [411, 412]. In [136, 137], approximations of low-dimensional equi-transmitting matrices are discussed.
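The candidate values above follow from comparing the two trace computations, \( r\,(2\nu ^+ - d) = 2d^+ - d \), i.e. formula (3.65). A quick check with exact rationals (the helper name `r` is ours):

```python
from fractions import Fraction

def r(d_plus, nu_plus, d):
    """Reflection amplitude from r * (2*nu+ - d) = 2*d+ - d (formula (3.65))."""
    return Fraction(2 * d_plus - d, 2 * nu_plus - d)

# Admissible pairs (d+, nu+) for d = 5 give r = 3/5, 1/5, 1/3; only the
# first (standard conditions) is realised by an equi-transmitting matrix.
d = 5
values = [r(dp, nup, d) for dp, nup in [(4, 5), (3, 5), (3, 4)]]
assert values == [Fraction(3, 5), Fraction(1, 5), Fraction(1, 3)]
```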
The case of large dimensions is much less studied. Equi-transmitting unitary matrices can be constructed using Dirichlet characters [263], but the construction heavily depends on the dimension. Standard vertex conditions lead to equi-transmitting matrices, proving that such matrices exist in any dimension. Reflectionless equi-transmitting matrices may exist in even dimensions only, as discussed above, but it is not clear whether they are realisable in every even dimension.
Studies of equi-transmitting matrices may be extended by considering unitary symmetric (not necessarily Hermitian) matrices. An interesting example of such a matrix for \( d= 5 \) was constructed in [263]
Appendix 2: Parametrisation of Vertex Conditions: Historical Remarks
It is almost impossible to mention all articles where vertex conditions for differential operators on graphs are considered. As we already mentioned the whole set of vertex conditions giving all possible self-adjoint extensions of \( L^{\mathrm {min}} \) can be described either using von Neumann formulas, or the theory of boundary triplets [165, 166, 243, 301, 445, 446], or Lagrangian planes corresponding to the symplectic form given by (3.4). We shall just mention here the most important parametrisations.
3.1.1 Parametrisation Via Linear Relations
V. Kostrykin and R. Schrader [309] suggested the following explicit parametrisation of vertex conditions
where \( A_1 \) and \( B_1 \) are two \( d \times d \) matrices satisfying the following conditions:
-
(1)
the matrix \( A_1B_1^* \) is Hermitian;
-
(2)
the \(d \times 2d \) matrix \( (A_1,B_1) \) has the maximal possible rank \( d\).
The first condition is needed to guarantee that the operator is symmetric. The second condition says that formula (3.66) imposes sufficiently many independent conditions on the functions.
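Both requirements are easy to verify for the standard conditions at a degree-3 vertex, written with integer matrices (the concrete rows below are one possible choice, ours):

```python
import numpy as np

# Standard conditions in the form A u + B du = 0:
# two continuity rows and one Kirchhoff row.
A = np.array([[1, -1, 0],
              [0, 1, -1],
              [0, 0, 0]])
B = np.array([[0, 0, 0],
              [0, 0, 0],
              [1, 1, 1]])

# (1) A B* is Hermitian (here it is in fact the zero matrix).
AB = A @ B.conj().T
assert np.allclose(AB, AB.conj().T)

# (2) the d x 2d matrix (A, B) has maximal rank d = 3.
assert np.linalg.matrix_rank(np.hstack([A, B])) == 3
```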
A similar parametrisation of all possible vertex conditions was given by T. Aktosun, M. Klaus, and R. Weder in [22]:
where \( A_2 \) and \( B_2 \) are two \( v \times v \) matrices satisfying the following relations:
-
(1)
the matrix \( B_2^* A_2 \) is Hermitian;
-
(2)
the self-adjoint matrix \( A_2^* A_2 + B_2^* B_2 \) is positive.
These two parametrisations are completely equivalent and parametrise all possible self-adjoint extensions of the minimal operator. Their advantage is that the matrices \( A \) and \( B \) can often be chosen with integer entries (making calculations easier). For example, the standard vertex conditions can be written as (3.35) using just integers.
Both conditions (3.66) and (3.67) can be multiplied on the left by any invertible matrix without changing the set of admissible functions. It follows that such parametrisations are not unique and therefore their use for inverse problems is limited. Moreover it might be difficult to determine whether vertex conditions written in the form (3.66) or (3.67) really connect together all limiting values of \( u \) at the vertex \( V \), or the vertex can be split into two (see discussion in [355] for details).
3.1.2 Parametrisation Using Hermitian Operators
Formula (3.66) determines a certain linear relation for the limit values \( \vec {u}, \partial \vec {u}. \) Therefore, it is natural to parametrise all such linear relations using the linear subspace \( (I-P_{-1}) \mathbb C^v = \left ( \mathrm {Ker}\,(S+I) \right )^\perp \) and the Hermitian operator \( A_S = (I-P_{-1}) \mathrm {i}\frac {I-S}{I+S} (I-P_{-1}) \) acting on this subspace. Such a parametrisation was suggested by P. Kuchment [326], and it is given by formula (3.27) (in our notations).
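The Cayley-type relation between the Hermitian matrix \( A_S \) and the unitary matrix \( S \) can be checked numerically. A minimal numpy sketch, ignoring the projector \( I - P_{-1} \) (i.e. assuming \( -1 \) is not an eigenvalue of \( S \)):

```python
import numpy as np

def cayley(A):
    """Unitary S from a Hermitian A via S = (iI - A)(iI + A)^{-1},
    the inverse of the Cayley transform A = i (I - S)(I + S)^{-1}."""
    n = A.shape[0]
    I = np.eye(n)
    return (1j * I - A) @ np.linalg.inv(1j * I + A)

A = np.array([[1.0, 2.0 - 1.0j],
              [2.0 + 1.0j, -3.0]])          # a Hermitian test matrix
S = cayley(A)

assert np.allclose(S.conj().T @ S, np.eye(2))   # S is unitary
# -1 is never an eigenvalue of such S, so I + S is invertible
# and the Hermitian parameter A is recovered:
A_back = 1j * (np.eye(2) - S) @ np.linalg.inv(np.eye(2) + S)
assert np.allclose(A_back, A)
```

Eigenvalues \( \lambda \) of \( A \) are mapped to \( (\mathrm i - \lambda )/(\mathrm i + \lambda ) \) on the unit circle, which never equals \( -1 \) for real \( \lambda \); the excluded point \( -1 \) is exactly what the projector \( P_{-1} \) accounts for.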
3.1.3 Unitary Matrix Parametrisation
The first explicit parametrisation of vertex conditions using unitary matrices was suggested by M. Harmer [255,256,257,258]. It is almost identical to parametrisation (3.21), but its relation to the vertex scattering matrix remained hidden. The idea to parametrise vertex conditions via the vertex scattering matrix is clear from the physical point of view, and it was realised independently by P. Kurasov and M. Nowaczyk [347], leading to parametrisation (3.21). As we already mentioned, this parametrisation of vertex conditions is the most suitable from our point of view.
Problem 9
Consider the star graph formed by three semi-infinite edges \( [x_j, \infty ),\)\( j=1,2,3. \) Express the standard vertex conditions
(1) using matrices \( A \) and \( B \),
(2) using the unitary matrix \( S \).
Are these vertex conditions properly connecting and scaling-invariant?
Problem 10
Consider the lasso graph depicted in Fig. 2.5 with the magnetic Schrödinger operator satisfying the standard vertex conditions at the vertex, i.e. the operator \( L^{\mathrm {st}}_{0,a}.\) Assume that the electric potential is zero, \( q (x) = 0 \), everywhere on \( \Gamma \), while the magnetic potential is zero on the semi-infinite edge. Let us denote by \( \Phi \) the flux of the magnetic field through the loop: \( \Phi = \int _{x_1}^{x_2} a(x)\, dx . \) Let \( U_a \) be the unitary transformation \( u(x) \mapsto \exp \left ( - i \int _{x_1}^x a(y)\, dy \right ) u(x) \) removing the magnetic potential on the loop. Consider the Laplacian \( L_\Phi := U_a L^{\mathrm {st}}_{0,a} U_a^{-1} \) obtained via this transformation.
(a) How do the vertex conditions for \( L_{\Phi } \) depend on the magnetic flux \( \Phi \)?
(b) Calculate the scattering matrix for the operator \( L_\Phi \).
(c) Determine the scattering matrix for the original operator \( L_{0,a}.\)
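A numerical sketch for parts (a) and (b), under a working ansatz (to be compared with the gauge transform \( U_a \)) that the flux enters the vertex conditions of \( L_\Phi \) through the phase \( u(x_2) = e^{-\mathrm i\Phi} u(x_1) \), together with the matching flux-twisted derivative condition written in the code comments. The loop length \( \ell = 1 \) and the sample values of \( k \) and \( \Phi \) are purely illustrative.

```python
import numpy as np

def reflection(k, phi, ell=1.0):
    """Reflection coefficient R(k) on the half-line of the lasso graph for the
    flux-twisted conditions (ansatz): v(x1) = v_half(0) = c, v(x2) = e^{-i phi} c,
    and dv(x1) - e^{i phi} dv(ell) + dv_half(0) = 0 (normal derivatives).
    Loop wave: a e^{ikt} + b e^{-ikt}; half-line wave: e^{-ikx} + R e^{ikx}."""
    e_p, e_m = np.exp(1j * k * ell), np.exp(-1j * k * ell)
    f = np.exp(1j * phi)
    # Linear system for the unknowns (a, b, R):
    M = np.array([
        [e_p - 1 / f, e_m - 1 / f, 0],   # v(x2) = e^{-i phi} v(x1)
        [1, 1, -1],                      # v(x1) = v_half(0): a + b = 1 + R
        [1 - f * e_p, -1 + f * e_m, 1],  # flux-twisted Kirchhoff condition
    ], dtype=complex)
    rhs = np.array([0, 1, 1], dtype=complex)
    a, b, R = np.linalg.solve(M, rhs)
    return R

R0 = reflection(1.3, 0.0)
Rpi = reflection(1.3, np.pi)
print(abs(abs(R0) - 1) < 1e-9)   # True: |R| = 1, as self-adjointness demands
print(abs(R0 - Rpi) > 1e-6)      # True: scattering depends on the flux
```

The check \( |R(k)| = 1 \) for real \( k \) is a sanity test of the ansatz: the flux-twisted conditions are unitarily equivalent to the standard ones, which is exactly the content of the problem.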
Problem 11
Vertex conditions can be written in the form (3.21), where \( S= S_{\mathbf {v}} (1) \) is used as a parameter. How should formula (3.21) be modified so that \( S_{\mathbf {v}} (k_0) \) is used as a parameter instead of \( S_{\mathbf {v}}(1)\), for \( k_0 \in \mathbb R \), \( k_0 \neq 1\)?
Problem 12
Let \( \Gamma _5 \) be a graph formed by \( 4\) edges \( [x_{2j-1}, x_{2j}], j=1,2,\ldots ,4.\) Let \( L \) be the corresponding Laplace operator defined on the domain of functions satisfying the vertex conditions:
The corresponding vertex scattering matrix is energy independent. Reconstruct the metric graph, taking into account that the vertex conditions respect the connectivity of the graph.
Write the vertex conditions using the other two standard parametrisations:
(1) via the vertex scattering matrix (canonical);
(2) via subspaces and Hermitian matrices (Kuchment).
Hint: Use the fact that the vertex conditions lead to an energy independent vertex scattering matrix and, therefore, can be written using projectors as (3.34) or (3.33). Hence it is enough to calculate the kernels of the matrices on the different sides of (3.68). The corresponding kernels should be orthogonal and span \( \mathbb C^8.\)
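Since (3.68) is not reproduced above, the hint's kernel computation can only be sketched with hypothetical stand-in matrices (here on \( \mathbb C^4 \) rather than \( \mathbb C^8 \)); the method is the same: compute the two kernels, check that they are orthogonal and together span the whole space, and pass to projectors.

```python
import numpy as np

# Hypothetical stand-in for conditions A u = B du (NOT the actual (3.68)):
# continuity across four endpoints and a Kirchhoff-type derivative condition.
A = np.array([[1, -1, 0, 0],
              [0, 1, -1, 0],
              [0, 0, 1, -1],
              [0, 0, 0, 0]], dtype=float)
B = np.array([[0, 0, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 0],
              [1, 1, 1, 1]], dtype=float)

def null_space(M, tol=1e-10):
    """Orthonormal basis of Ker M via SVD."""
    _, s, vh = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return vh[rank:].T

ker_A = null_space(A)   # admissible u:  span{(1,1,1,1)}
ker_B = null_space(B)   # admissible du: vectors with zero sum

# The two kernels are orthogonal and together span C^4 ...
print(np.allclose(ker_A.T @ ker_B, 0))          # True
print(ker_A.shape[1] + ker_B.shape[1] == 4)     # True

# ... so the conditions take projector form: P u = 0, (I - P) du = 0,
# with P the orthogonal projector onto (Ker A)^perp.
P = np.eye(4) - ker_A @ ker_A.T
print(np.allclose(P, np.eye(4) - np.ones((4, 4)) / 4))  # True
```

Replacing \( A \) and \( B \) by the matrices from (3.68) (an \( 8\times 8 \) system) turns this sketch into a solution of the problem.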
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
Copyright information
© 2024 The Author(s)
Kurasov, P. (2024). Vertex Conditions. In: Spectral Geometry of Graphs. Operator Theory: Advances and Applications, vol 293. Birkhäuser, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-67872-5_3
Publisher Name: Birkhäuser, Berlin, Heidelberg
Print ISBN: 978-3-662-67870-1
Online ISBN: 978-3-662-67872-5