The goal of this chapter is to describe the most general vertex conditions for Schrödinger operators on metric graphs and how these conditions are connected to the graph’s topology. As we already mentioned, different types of vertex conditions may be required in order to reflect special properties of the vertices. Considering only standard and Dirichlet conditions is often sufficient, so one may get the impression that this chapter can be dropped by readers not aiming to study differential operators on metric graphs in full detail. This is not completely true, since the ideas developed in this chapter will be used later on, for example when deriving the trace formula.

3.1 Preliminary Discussion

We have seen that differential operators on metric graphs require introducing special conditions connecting limiting values of functions and their normal derivatives at the vertices. The role of such vertex conditions is two-fold:

  • to connect together different edges,

  • to make the differential operator self-adjoint (symmetric).

The Hilbert space \(L_2 (\Gamma ) \) and the formal differential expression (2.17) do not reflect how different edges are connected to each other. It is the vertex conditions that determine the connectivity of the graph, and therefore this question requires more attention than one might expect at first glance.

Assume that a metric graph is given and we are interested in studying all appropriate vertex conditions. Our experience tells us that we need as many conditions as the number of endpoints—the sum of degrees of all vertices. In order to reflect the graph’s connectivity properly, these conditions should connect together only the limit values associated with each vertex separately. It follows that each vertex can be considered independently, and therefore it is wise to write the boundary form (2.25) collecting together the terms corresponding to each vertex:

$$\displaystyle \begin{aligned} {} \langle L_{q,a}^{\mathrm{max}} u, v \rangle - \langle u, L_{q,a}^{\mathrm{max}} v \rangle = \sum_{m=1}^M \left( \sum_{x_j \in V^m} \left\{ \overline{\partial u (x_j)} \cdot v (x_j) - \overline{u (x_j)} \cdot \partial v (x_j) \right\} \right). \end{aligned} $$
(3.1)

For every vertex of valence \( d^m \) one writes precisely \( d^m \) linearly independent conditions so that the corresponding expression

$$\displaystyle \begin{aligned} {} & \sum_{x_j \in V^m} \left\{ \overline{\partial u (x_j)} \cdot v (x_j) - \overline{u (x_j)} \cdot \partial v (x_j) \right\} \\ &\quad = \left\langle \partial \vec{u} (V^m), \vec{v} (V^m) \right\rangle_{\mathbb C^{d^m}} - \left\langle \vec{u} (V^m), \partial \vec{v} (V^m) \right\rangle_{\mathbb C^{d^m}} \end{aligned} $$
(3.2)

vanishes for each \( m\), ensuring that the operator is symmetric. Here,

$$\displaystyle \begin{aligned} {} \vec{u} (V^m) = \{ u(x_j) \}_{j=1}^{d^m} \quad \mbox{and} \quad \partial \vec{u} (V^m) = \{ \partial u(x_j) \}_{j=1}^{d^m} , \end{aligned} $$
(3.3)

denote the \( d^m\)-dimensional vectors of limit values at the vertex \( V^m \). It is not hard to give examples of vertex conditions that guarantee that the boundary form vanishes:

  • Dirichlet conditions:

    $$\displaystyle \begin{aligned} \vec{u} (V^m) = \vec{0},\end{aligned}$$
  • Neumann conditions:

    $$\displaystyle \begin{aligned} \partial \vec{u} (V^m) = \vec{0},\end{aligned}$$
  • (generalised) Robin conditions:

    $$\displaystyle \begin{aligned} \partial \vec{u} (V^m) = A^m \vec{u} (V^m),\end{aligned}$$

    where \( A^m \) is a Hermitian matrix in \( \mathbb C^{d^m}. \)
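The vanishing of the boundary form in the Robin case can also be checked numerically. The following minimal sketch (Python with NumPy; the Hermitian matrix \( A^m \) and the limit values are generated at random) confirms that (3.2) vanishes whenever \( \partial \vec{u} = A^m \vec{u} \) and \( \partial \vec{v} = A^m \vec{v} \):

```python
# Minimal numerical check (NumPy): Robin conditions with a Hermitian A
# annihilate the boundary form (3.2) at a single vertex of degree d.
import numpy as np

rng = np.random.default_rng(0)
d = 4
X = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
A = (X + X.conj().T) / 2              # Hermitian Robin matrix A^m

u = rng.standard_normal(d) + 1j * rng.standard_normal(d)
v = rng.standard_normal(d) + 1j * rng.standard_normal(d)
du, dv = A @ u, A @ v                 # normal derivatives from the Robin condition

# boundary form (3.2): <du, v> - <u, dv>, conjugation in the first argument
form = np.vdot(du, v) - np.vdot(u, dv)
print(abs(form))                      # ~ 1e-15, i.e. zero up to rounding
```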

However, these families do not cover all possible vertex conditions. In order to obtain all possible conditions, one needs to consider a certain combination of Robin and Dirichlet conditions (as will be shown in the following section).

One may think that any set of \( d^m \) such conditions guaranteeing zero boundary form is appropriate, but it is necessary to take into account one more aspect. Assume that the endpoints in the vertex \( V^m \) can be divided into two non-intersecting classes \( {V^m}' \) and \( {V^m}'', \)

$$\displaystyle \begin{aligned} {V^m}' \cup {V^m}'' = V^m, \; \; {V^m}' \cap {V^m}'' = \emptyset,\end{aligned}$$

so that the vertex conditions connect only the limit values associated with each of these subclasses separately (see Fig. 3.1). Then such vertex conditions correspond to the graph obtained from \( \Gamma \) by replacing the vertex \( V^m \) with the two vertices \( {V^m}' \) and \( {V^m}''. \) If such a separation is impossible, then the vertex conditions will be called properly connecting. In what follows we consider only properly connecting conditions unless something else is required for different reasons. If the separation described above is possible, we are going to say that the vertex \( V^m \) splits into two vertices \( {V^m}' \) and \( {V^m}''. \)

Fig. 3.1 Splitting a vertex: seven edges meeting at a single vertex are divided into a vertex joining four edges and a vertex joining three edges

In this chapter, we are going to describe all appropriate vertex conditions for star graphs. Such a parametrisation can be done in different (equivalent) ways, and we collect here the most widely used parametrisations to be employed in the book. We are convinced that the parametrisation using the irreducible unitary matrix \( S \) (3.21) is the most appropriate, since this parameter has a clear physical interpretation—it coincides with the vertex scattering matrix. Moreover, this parametrisation is unique and guarantees that the vertex conditions are properly connecting.

3.2 Vertex Conditions for the Star Graph

Consider any star graph formed by \( d \) semi-infinite edges \( E_n = [x_n, \infty ) , \, n= 1,2, \dots , d ,\) joined together at one central vertex \( V = \{ x_1, x_2, \dots , x_d \} \) (having degree \( d \)). The boundary form of the maximal operator is given by:

$$\displaystyle \begin{aligned} {} \begin{array}{ccl} \displaystyle \langle L^{\mathrm{max}} u, v \rangle_{L_2(\Gamma)} - \langle u, L^{\mathrm{max}} v \rangle_{L_2(\Gamma)} & = & \displaystyle \langle \partial \vec{u}, \vec{v} \rangle_{\mathbb C^d} - \langle \vec{u}, \partial \vec{v} \rangle_{\mathbb C^d} \\ & =: & \displaystyle B [ U,V], \end{array} \end{aligned} $$
(3.4)

where \( U = (\vec {u}, \partial \vec {u}) \in \mathbb C^{2d}.\) The (sesquilinear) form \( B \) introduced above does not depend on the behaviour of the functions \( u \) and \( v \) inside the edges, but is determined by their limit values at the vertex.

We have seen that in order to determine a self-adjoint operator corresponding to the formal expression (2.17), one has to introduce precisely \( d \) linearly independent conditions connecting the limit values \( U = (\vec {u}, \partial \vec {u}) \in \mathbb C^{2d}. \) These conditions should be chosen so that the boundary form \( B [U, V] \) vanishes whenever both \( U \) and \( V \) satisfy the conditions. In other words, in the space \( \mathbb C^{2d} \) one has to select a \( d\)-dimensional subspace \( M \) such that \( B[U,V] \) vanishes, provided \( U, V \in M. \) This is a standard problem from linear algebra and it is not hard to give examples of such subspaces, but we would like to describe all possible such subspaces. The corresponding conditions will be called Hermitian.

Definition 3.1

Conditions relating the limit values \( (\vec {u}, \partial \vec {u}) \in \mathbb C^{2d} \) at a vertex \( V \) of degree \( d \) are called Hermitian if and only if

  • the boundary form (3.4) vanishes whenever \( u \) and \( v \) satisfy these conditions;

  • the subspace in \( \mathbb C^{2d} \) formed by all limit values satisfying these conditions has the maximal dimension \( d\).

Every \( d\)-dimensional subspace \( M \subset \mathbb C^{2d} \) can be described as the image of a linear map from \( \mathbb C^{d} \) to \( \mathbb C^{2d} \), and hence as the set of \( (Et, Ft) \) for \( t \in \mathbb C^d, \) where \( E \) and \( F \) are \( d \times d \) matrices. For reasons that will become clear in a moment, we shall write \( E = B^* \) and \( F = A^* \) for suitable matrices \( A \) and \( B .\)

The subspace

$$\displaystyle \begin{aligned} {} M := \left\{ U = (B^* t,A^*t): t \in \mathbb C^d \right\} \end{aligned} $$
(3.5)

has dimension \( d \) only if the \( d \times 2d \) matrix \( (A,B) \) has maximal rank:

$$\displaystyle \begin{aligned} {} \mathrm{rank}\, (A,B) = d. \end{aligned} $$
(3.6)

In fact, the dimension of \( M \) is less than \( d \) if and only if there exists a vector \( t_0 \in \mathbb C^d, \, t_0 \neq \vec {0}, \) such that \( B^* t_0 = A^* t_0 = 0. \) Hence, for any \( s \in \mathbb C^d \), we have

$$\displaystyle \begin{aligned} \langle B^* t_0, s \rangle = \langle A^* t_0, s \rangle = 0 \Leftrightarrow \langle t_0, Bs \rangle = \langle t_0, As \rangle = 0 ,\end{aligned}$$

i.e. the ranges of \( A \) and \( B \) are both orthogonal to \( t_0 \), so \( \mathrm {rank} (A,B) < d. \)

The boundary form \( B \) vanishes on \( M \times M \) provided the matrix \( AB^* \) is Hermitian:

$$\displaystyle \begin{aligned} A B^* = BA^*. \end{aligned} $$
(3.7)

To prove this statement, let us consider two arbitrary vectors \( U, V \in M \)

$$\displaystyle \begin{aligned} U = (B^* t, A^* t), \; \, V= (B^* s, A^*s ) ,\end{aligned}$$

where \( t,s \in \mathbb C^d. \) The boundary form can be expressed using \( s, t \) as follows:

$$\displaystyle \begin{aligned} \begin{array}{ccl} \displaystyle B[U,V] & = & \displaystyle \langle \partial \vec{u}, \vec{v} \rangle_{\mathbb C^d} - \langle \vec{u}, \partial \vec{v} \rangle_{\mathbb C^d} \\[2mm] & = & \displaystyle \langle B^*t, A^*s \rangle_{\mathbb C^d} - \langle A^* t, B^*s \rangle_{\mathbb C^d} \\[2mm] & = & \displaystyle \langle A B^*t, s \rangle_{\mathbb C^d} - \langle BA^* t, s \rangle_{\mathbb C^d} ,\\[2mm] \end{array} \end{aligned} $$
(3.8)

which vanishes if and only if \( AB^* \) is Hermitian. Thus we have proven that all self-adjoint operators on the star graph can be parameterised by \( d\)-dimensional subspaces \( M \) of the form (3.5). But this description of self-adjoint extensions is not convenient, since in order to determine whether a function \( u \) belongs to the domain of the operator, one has to check whether its limit values \( U \) can be represented as \( U = (B^*t, A^*t) \) with a certain vector \( t \in \mathbb C^d .\)

It turns out that \( M \) can be described as the set of all vectors \( U \in \mathbb C^{2d} \) satisfying the vertex conditions [309]

$$\displaystyle \begin{aligned} {} A \vec{u} = B \partial \vec{u}. \end{aligned} $$
(3.9)

It is trivial that every \( U \in M \) satisfies (3.9), as the matrix \( AB^* \) is Hermitian and therefore \( AB^* t = BA^* t. \) Moreover, due to (3.6), the set of vectors satisfying (3.9) forms a \( d\)-dimensional subspace, which has to be equal to \( M \), since \( M \) is also \( d\)-dimensional. Formula (3.9) explains our unusual choice of the matrices \( B^* \) and \( A^* \) instead of \( E \) and \( F \) in the definition of \( M. \)

We have proved the following theorem:

Theorem 3.2

Any Hermitian vertex condition at the vertex \( V \) of degree \( d \) can be written in the form

$$\displaystyle \begin{aligned} {} A \, \vec{u} = B \,\partial \vec{u}, \end{aligned} $$
(3.10)

where \( \vec {u} \) and \( \partial \vec {u} \) denote the vectors of limit values of the functions (2.12) and their extended normal derivatives (2.26) at the vertex. The \( d \times d \) matrices \( A \) and \( B \) can be chosen arbitrarily, provided that the rank of the \( d \times 2d \) matrix \( (A,B) \) is maximal and the matrix \( AB^* \) is Hermitian:

$$\displaystyle \begin{aligned} {} \mathrm{rank}\, (A,B ) = d \quad \mathrm{and} \quad AB^* = BA^*. \end{aligned} $$
(3.11)

The subspace \( M \) (and therefore the self-adjoint operator) is not changed if the matrices \( A \) and \( B \) are replaced with \( CA \) and \( CB \), where \( C \) is any \( d \times d\) non-singular matrix. It follows that there is no one-to-one correspondence between the pairs of matrices and the self-adjoint operators. This fact makes it difficult to use this parametrisation when inverse problems are discussed. It is also not straightforward to check whether the corresponding conditions are properly connecting or not. It is clear that if \( A \) and \( B \) are both block-diagonal with blocks of the same sizes, then the vertex conditions are not properly connecting. Consider the following explicit example.

Example 3.3

Let \( \Gamma \) be the star graph formed by three semi-axes joined together at the vertex \( V = \{ x_1, x_2, x_3 \}\) (see Fig. 3.2) and the vertex conditions be given by

$$\displaystyle \begin{aligned} \left( \begin{array}{ccc} 1 & -1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{array} \right) \vec{u} = \left( \begin{array}{ccc} 0 & 0 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & 0 \end{array} \right) \partial \vec{u} .\end{aligned}$$

It is clear that \( AB^* = 0 = BA^* \) and the rank of \( (A,B) \) is \( 3 .\) Therefore the corresponding vertex conditions are Hermitian.

Fig. 3.2 Star graph with not properly connecting conditions: the endpoints \( x_1 \) and \( x_2 \) are joined together, while \( x_3 \) is separated

But both \( A \) and \( B \) are block-diagonal matrices with blocks of size \( 2 \times 2 \) and \( 1 \times 1, \) which allows one to write the same vertex conditions in the form:

$$\displaystyle \begin{aligned} \left( \begin{array}{cc} 1 & -1 \\ 0 & 0 \end{array} \right) \left( \begin{array}{c} u(x_1) \\ u(x_2) \end{array} \right) = \left( \begin{array}{cc} 0 & 0 \\ 1 & 1 \end{array} \right) \left( \begin{array}{c} \partial u(x_1) \\ \partial u(x_2) \end{array} \right) \; \; \& \; \; u(x_3) = 0,\end{aligned}$$

or even as

$$\displaystyle \begin{aligned} \left\{ \begin{array}{l} \displaystyle u(x_1) = u(x_2) \\ \displaystyle \partial u (x_1) = - \partial u(x_2) \end{array} \right. \; \; \& \; \; u(x_3 ) = 0.\end{aligned}$$

These conditions are not properly connecting and correspond to a line and a half-line, independent of each other, rather than to the star graph formed by three semi-axes.

Multiplication of the matrices \( A \) and \( B \) by a non-singular matrix \( C \) may destroy the block-diagonal structure, in which case it is hard to see that the conditions can be rewritten so that they connect only the limit values corresponding to the two subvertices.
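The properties of this example are easy to check numerically. The following sketch (Python with NumPy) verifies conditions (3.11) for the matrices above and illustrates that replacing \( (A, B) \) with \( (CA, CB) \) does not change the vertex scattering matrix at \( k = 1 \), anticipating formulas (3.15) and (3.18) of the following section:

```python
# Numerical check (NumPy) for Example 3.3: the pair (A, B) satisfies (3.11),
# and multiplying by an invertible matrix C leaves S_v(1) unchanged.
import numpy as np

A = np.array([[1., -1., 0.],
              [0.,  0., 0.],
              [0.,  0., 1.]])
B = np.array([[0., 0., 0.],
              [1., 1., 0.],
              [0., 0., 0.]])
print(np.allclose(A @ B.T, B @ A.T))              # AB* = BA* (both are zero)
print(np.linalg.matrix_rank(np.hstack([A, B])))   # 3 = d

def S_v1(A, B):
    # vertex scattering matrix at k = 1, see (3.15) and (3.18) below
    return -np.linalg.solve(A - 1j * B, A + 1j * B)

C = np.array([[2., 1., 0.],
              [1., 1., 1.],
              [0., 1., 3.]])                      # non-singular
print(np.allclose(S_v1(A, B), S_v1(C @ A, C @ B)))  # True: same operator
```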

3.3 Vertex Conditions Via the Vertex Scattering Matrix

In this section we are going to describe another possible equivalent parametrisation of all vertex conditions using the scattering matrix—a unitary matrix describing how the waves are transmitted by the vertex. This parametrisation has the following advantages:

  • the matrix giving this parametrisation is unique;

  • the parameter has a clear interpretation;

  • characterisation of all properly connecting conditions is straightforward.

In what follows, we are mainly going to use this parametrisation in our studies.

3.3.1 The Vertex Scattering Matrix

We introduce here the notion of the vertex scattering matrix. Consider the Laplace operator \( L^{(A,B)} \) on the star graph, defined by \( - \frac {d^2}{dx^2} \) on the domain of functions satisfying (3.10). The absolutely continuous spectrum for this operator is the same as for the Dirichlet Laplacian \( L^D\)—the second derivative operator defined on functions satisfying Dirichlet conditions at the vertex—and coincides with the interval \( [0, \infty ) \); the multiplicity is \( d. \) The corresponding generalised eigenfunctions of \( L^{(A,B)} \), often called scattered waves, are uniformly bounded solutions to the differential equation

$$\displaystyle \begin{aligned} - \frac{d^2}{dx^2} \psi = \lambda \psi\end{aligned}$$

satisfying the vertex conditions (3.10). Every solution to this differential equation on each interval \( [x_j, \infty ) \) can be written in the form

$$\displaystyle \begin{aligned} {} \psi (x) \vert_{E_j = [x_j, \infty)} = \mathrm{e}^{-ik(x-x_j)} b_j + \mathrm{e}^{ik(x-x_j)} a_j, \; \, k \in \mathbb R_+. \end{aligned} $$
(3.12)

One should think of the wave \( \mathrm {e}^{-ik(x-x_j)} b_j \) as an incoming wave, which after interaction with the vertex is reflected into the outgoing wave \( \mathrm {e}^{ik(x-x_j)} a_j. \) Of course, the amplitudes \( b_j \) of the incoming waves are arbitrary, while the amplitudes \( a_i \) of the outgoing waves are determined by the whole set of \( b_j,\; j = 1,2, \dots , d.\) This relation can be written in the matrix form as

$$\displaystyle \begin{aligned} {} \vec{a} = S_{\mathbf{v}} (k) \vec{b} ,\end{aligned} $$
(3.13)

where \( S_{\mathbf {v}} (k) \) is called the vertex scattering matrix corresponding to the energy \( \lambda = k^2. \) In our case, the relation between the amplitudes of incoming and outgoing waves is obtained by inserting the function given by (3.12) into the vertex conditions.

Let us calculate \( S_{\mathbf {v}} (k) \) determined by the vertex conditions (3.10). The limit values of the function \( \psi \) are

$$\displaystyle \begin{aligned} \begin{array}{ccl} \vec{\psi} & = & \vec{b} + S_{\mathbf{v}} (k) \vec{b}, \\ \partial \vec{\psi} & = & -ik \vec{b} + ik S_{\mathbf{v}} (k) \vec{b}. \end{array}\end{aligned}$$

Substitution into (3.10) gives the relation

$$\displaystyle \begin{aligned} A (I + S_{\mathbf{v}}(k)) \vec{b} = B \mathrm{i}k (-I + S_{\mathbf{v}} (k) ) \vec{b} ,\end{aligned}$$

leading to

$$\displaystyle \begin{aligned} {} A + \mathrm{i} k B = - (A- \mathrm{i} kB) S_{\mathbf{v}} (k), \end{aligned} $$
(3.14)

where one takes into account that the vector \( \vec {b} \) of amplitudes of incoming waves is arbitrary. The matrix \( A- \mathrm {i} k B \) is invertible, since otherwise the adjoint matrix \( A^* + \mathrm {i} k B^* \) has a nontrivial kernel, i.e. there exists \( t \) such that \( (A^* + \mathrm {i} k B^*) t = 0 .\) But then multiplying by \( A \) and taking the scalar product with \( t \) we arrive at

$$\displaystyle \begin{aligned} \| A^* t \|{}^2 - \mathrm{i} k \langle AB^* t, t \rangle = 0.\end{aligned}$$

Since both \( \| A^* t \|{ }^2 \) and \( \langle AB^* t, t \rangle \) are real (\(AB^* \) is Hermitian), it follows that \( A^* t = 0. \) In a similar way we may prove that \( B^* t = 0 \), which contradicts the assumption in (3.11) that \( \mathrm {rank}\, (A,B) = d. \)

The vertex scattering matrix can now be calculated from (3.14)

$$\displaystyle \begin{aligned} {} S_{\mathbf{v}} (k) = - (A- \mathrm{i} kB)^{-1} \left( A + \mathrm{i} k B \right). \end{aligned} $$
(3.15)

It is easy to see that the matrix \( S_{\mathbf {v}} (k) \) is unitary:

$$\displaystyle \begin{aligned} {} \begin{array}{ccl} \displaystyle S_{\mathbf{v}} (k) \, S_{\mathbf{v}}^* (k) & = & \displaystyle (A- \mathrm{i} kB)^{-1} (A+ \mathrm{i} kB) (A^*- \mathrm{i} kB^*) (A^*+ \mathrm{i} kB^*)^{-1} \\[2mm] & = & \displaystyle (A- \mathrm{i} kB)^{-1} \left( AA^* + k^2 BB^* \right) (A^*+ \mathrm{i} kB^*)^{-1} \\[2mm] & = & \displaystyle (A- \mathrm{i} kB)^{-1} (A- \mathrm{i} kB) (A^*+ \mathrm{i} kB^*) (A^*+ \mathrm{i} kB^*)^{-1} = I, \end{array} \end{aligned} $$
(3.16)

where we used that \( AB^* = BA^* \) due to (3.11). Note that we were able to prove that \( S_{\mathbf {v}} (k) \) is unitary only because \( A \) and \( B \) satisfy both conditions (3.11) and \( k \) is real. As we shall see later, the vertex scattering matrix has norm less than 1 if \( \mathrm{Im} \, k > 0 \).
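The following numerical sketch (Python with NumPy) illustrates these properties for a sample Robin pair, with \( A \) a randomly generated positive definite Hermitian matrix and \( B = I \), so that both conditions (3.11) hold:

```python
# Numerical illustration (NumPy) of (3.15): for a Robin pair A = A* > 0,
# B = I the matrix S_v(k) is unitary for real k and, in this example,
# contractive for Im k > 0.
import numpy as np

rng = np.random.default_rng(1)
d = 3
X = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
A = X @ X.conj().T + np.eye(d)        # positive definite Hermitian
B = np.eye(d)                         # then AB* = BA* and rank (A,B) = d

def S_v(k):
    return -np.linalg.solve(A - 1j * k * B, A + 1j * k * B)    # (3.15)

S = S_v(2.0)
print(np.allclose(S.conj().T @ S, np.eye(d)))     # True: unitary for real k
print(np.linalg.norm(S_v(2.0 + 0.5j), 2) < 1)     # True for this example
```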

Unitarity of \( S_{\mathbf {v}} (k) \) implies that not only do the vectors \( \vec {b} \) of incoming amplitudes span the whole of \( \mathbb C^d \), but so do the vectors \( \vec {a} \) of outgoing amplitudes. In other words, given any \( \vec {a} \in \mathbb C^d\) one may find the set of incoming amplitudes such that (3.13) holds. On the other hand, some entries in the scattering matrix may vanish: for example, if \( (S_{\mathbf {v}} (k))_{12} \) is zero, then the amplitude of the outgoing wave on the first edge is independent of the amplitude of the incoming wave on the second edge.

3.3.2 Scattering Matrix as a Parameter in the Vertex Conditions

Our idea is to use the vertex scattering matrix to parameterise the set of vertex conditions. It is easy to see that the values of \( S_{\mathbf {v}} (k) \) for different \( k \in \mathbb R \) are determined by each other. In particular, we are going to prove the following explicit formula (which probably appeared for the first time in [310]):

$$\displaystyle \begin{aligned} {} S_{\mathbf{v}} (k) = \frac{(k+k_0) S_{\mathbf{v}} (k_0) + (k-k_0) I }{(k-k_0) S_{\mathbf{v}} (k_0) + (k+ k_0)I}, \end{aligned} $$
(3.17)

where \( I \) denotes the \( d \times d \) unit matrix. In what follows we are going to identify \( \alpha \) with \( \alpha I. \) The particular value of \( k_0 \) chosen in our parametrisation is of no significance, so let us use \( k_0 = 1 \) in what follows and introduce the notation:

$$\displaystyle \begin{aligned} S := S_{\mathbf{v}} (1) = - (A- \mathrm{i} B)^{-1} \left( A + \mathrm{i} B \right). \end{aligned} $$
(3.18)

The unitary matrix \( S \) is uniquely determined by \( A \) and \( B \), but not vice versa. The matrices \( A \) and \( B \) can be chosen equal to

$$\displaystyle \begin{aligned} {} \left\{ \begin{array}{ccc} A & = & \mathrm{i}(S-I) \\ B & = & S+I \end{array} \right. . \end{aligned} $$
(3.19)

It is an easy exercise to check that the corresponding \( S_{\mathbf {v}} (1) = S. \) One may also prove that such a pair \( (A,B)\) satisfies conditions (3.11). The second condition in (3.11) can be shown by taking into account that the matrix \( S \) is unitary:

$$\displaystyle \begin{aligned} AB^* = \mathrm{i} (S-I)(S^*+I) = \mathrm{i} (S - S^*) = - \mathrm{i} (S+I)(S^*-I) = BA^*.\end{aligned}$$

The first condition in (3.11) follows from

$$\displaystyle \begin{aligned} \mathrm{rank}\, (A,B) = \mathrm{rank}\, (S-I, S+I) = d,\end{aligned}$$

which holds for any unitary \( S. \)

To prove formula (3.17) we substitute \( (A,B) \) from (3.19) into formula (3.15) for the scattering matrix:

$$\displaystyle \begin{aligned} \begin{array}{ccl} \displaystyle S_{\mathbf{v}} (k) & = & \displaystyle - \left( \mathrm{i} S -\mathrm{i} - \mathrm{i} k S - \mathrm{i} k\right)^{-1} \left( \mathrm{i} S - \mathrm{i} + \mathrm{i} k S + \mathrm{i} k\right) \\[2mm] & = & \displaystyle \left( (k-1) S + ( k+1) \right)^{-1} \left( (k+1) S + (k-1) \right), \end{array} \end{aligned}$$

which is essentially (3.17) in the special case \( k_0 =1 . \) One just needs to take into account that the matrices commute, so that \( S_{\mathbf {v}} (k) \) can be written as a quotient.

In what follows we shall need the special case of (3.17), which expresses the vertex scattering matrix through the unitary parameter \( S\):

$$\displaystyle \begin{aligned} {} S_{\mathbf{v}} (k) = \frac{(k+1) S + (k-1) I }{(k-1) S + (k+1) I}. \end{aligned} $$
(3.20)
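The agreement between (3.15) and (3.20) under the choice (3.19) can be tested numerically; the following sketch (Python with NumPy, with a randomly generated unitary \( S \)) does precisely this:

```python
# Consistency check (NumPy): with A = i(S - I), B = S + I as in (3.19),
# formula (3.15) and formula (3.20) give the same matrix S_v(k).
import numpy as np

rng = np.random.default_rng(2)
d = 4
Z = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
S, _ = np.linalg.qr(Z)                # random unitary parameter S
I = np.eye(d)
A, B = 1j * (S - I), S + I            # (3.19)

k = 1.7
lhs = -np.linalg.solve(A - 1j * k * B, A + 1j * k * B)        # (3.15)
rhs = np.linalg.solve((k - 1) * S + (k + 1) * I,
                      (k + 1) * S + (k - 1) * I)              # (3.20)
print(np.allclose(lhs, rhs))          # True
```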

3.3.3 On Properly Connecting Vertex Conditions

We are now going to discuss which matrices \( S \) lead to properly connecting vertex conditions. Let us recall that vertex conditions are called properly connecting if and only if the vertex cannot be divided into two (or more) vertices, so that the vertex conditions connect only limit values belonging to each of the new vertices separately. We have seen that one faces certain difficulties in characterising all possible properly connecting conditions when the description (3.10) via the pair \( (A,B) \) is used. On the other hand, it is clear that all not properly connecting vertex conditions lead to vertex scattering matrices \( S_{\mathbf {v}} \) having block-diagonal form. Conversely, every such matrix leads to not properly connecting vertex conditions.

A matrix is called reducible if and only if it can be transformed into block upper-triangular form by a permutation of coordinates. But every unitary block upper-triangular matrix is block diagonal, so all properly connecting vertex conditions are in one-to-one correspondence with irreducible unitary matrices \( S. \) Therefore, without loss of generality, we are going to restrict ourselves to irreducible unitary matrices \( S \) parameterising the vertex conditions.

Theorem 3.2 can be reformulated as follows.

Theorem 3.4

The set of Hermitian properly connecting vertex conditions at the vertex \( V \) of degree \( d \) can be uniquely parameterised by \( d \times d \) irreducible unitary matrices \( S \), writing conditions (3.10) in the form

$$\displaystyle \begin{aligned} {} \mathrm{i} (S-I) \, \vec{u} = (S+I) \,\partial \vec{u}, \end{aligned} $$
(3.21)

where \( \vec {u} \) and \( \partial \vec {u} \) denote the vectors of limit values of the functions (2.12) and their extended normal derivatives (2.26) at the vertex.

Since every self-adjoint extension of the minimal operator \( L^{\mathrm {min}} \) leads to a certain unitary vertex scattering matrix \( S_{\mathbf {v}} (k) \), the vertex conditions (3.21) describe all possible self-adjoint extensions [90, 442, 506].

In what follows, the self-adjoint operator corresponding to the differential expression \( \tau _{q,a} \) given by (2.17) on a metric graph \( \Gamma \) and vertex conditions (3.21) will be denoted by \( L_{q,a}^S (\Gamma ) .\) We shall often omit certain indices hoping that no misunderstanding occurs.

A few other possible parametrisations of vertex conditions are described in Appendix 2. In our opinion, the parametrisation (3.21) is the most appropriate, and we are going to use it in what follows. We are going to illustrate the advantages of this parametrisation in the following section, where different properties of vertex scattering matrices are addressed.

Let us consider just one (rather applied) example that illustrates the power of this parametrisation.

Example 3.5 ([338])

Experimental physicists [470] considered transport properties of the system of nano-wires depicted in Fig. 3.3. This problem can be described by the Schrödinger equation on \( \Gamma _B \) and requires Hermitian vertex conditions at the vertex \( V = \{ x_1, x_2, x_3, x_4 \}. \) The main question is: how does one select these conditions in order to reflect the geometry of the coupling? It is clear that, in the ballistic regime, the probabilities of transport between the points \( x_1 \) and \( x_3 \), as well as between \( x_2 \) and \(x_4 \), are negligible. Hence it is natural to look for vertex conditions that guarantee that the following entries in the vertex scattering matrix are zero:

$$\displaystyle \begin{aligned} {} s_{31} = s_{13} = s_{24} = s_{42} = 0 . \end{aligned} $$
(3.22)

One may also assume that the reflection is small, leading to

$$\displaystyle \begin{aligned} {} s_{11} = s_{22} = s_{33} = s_{44} = 0. \end{aligned} $$
(3.23)

If a certain entry in the vertex scattering matrix is equal to zero for one particular energy, one cannot be sure that it remains zero for all other values of the energy, since the vertex scattering matrices in general depend on the energy (see (3.20)). One may show that the vertex scattering matrix is independent of the energy if and only if the parameter \( S \) is not only unitary, but also Hermitian: \( S= S^{-1} = S^* \) (see Sect. 3.5.1).

Fig. 3.3 The graph \( \Gamma _B \): a bounded wire with an Aharonov-Bohm ring attached; the line and the ring intersect at the vertices \( x_1, x_2, x_3 \), and \( x_4 \)

Every \( 4 \times 4 \) real unitary Hermitian matrix satisfying conditions (3.22)–(3.23) is of the form

$$\displaystyle \begin{aligned} {} S = \left( \begin{array}{cccc} 0 & \alpha & 0 & \beta \\ \alpha & 0 & \sigma \beta & 0 \\ 0 & \sigma \beta & 0 & - \sigma \alpha \\ \beta & 0 & -\sigma \alpha & 0 \end{array} \right), \end{aligned} $$
(3.24)

where \( \sigma = \pm 1 \) and \( \alpha , \beta \in \mathbb R \) are subject to

$$\displaystyle \begin{aligned} {} \alpha^2 + \beta^2 = 1. \end{aligned} $$
(3.25)

We required that the matrix is real in order to guarantee that all eigenfunctions may be chosen real. In order to guarantee that the vertex conditions are properly connecting one should require that

$$\displaystyle \begin{aligned} {} \alpha \neq 0 \neq \beta. \end{aligned} $$
(3.26)
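For the sample values \( \alpha = 0.6, \; \beta = 0.8 \) satisfying (3.25)–(3.26) and \( \sigma = -1 \), the required properties of the matrix (3.24) are easily checked numerically (Python with NumPy):

```python
# Quick check (NumPy) of the matrix (3.24) for alpha = 0.6, beta = 0.8
# (so that (3.25) holds) and sigma = -1: the matrix is real, unitary and
# Hermitian, hence the corresponding scattering is energy independent.
import numpy as np

alpha, beta, sigma = 0.6, 0.8, -1.0
S = np.array([[0.0,         alpha,       0.0,         beta],
              [alpha,       0.0,         sigma*beta,  0.0],
              [0.0,         sigma*beta,  0.0,        -sigma*alpha],
              [beta,        0.0,        -sigma*alpha, 0.0]])
print(np.allclose(S @ S.T, np.eye(4)))   # unitary (real orthogonal)
print(np.allclose(S, S.T))               # Hermitian (real symmetric)
```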

3.4 Parametrisation Via Hermitian Matrices

Consider the eigenprojector \( P_{-1} \) associated with the eigenvalue \( -1 \) (if any) of the unitary matrix \( S \) appearing in the parametrisation (3.21). The complementary projector \( P_{-1}^\perp = I - P_{-1} \) projects on the linear span of the eigensubspaces associated with all other eigenvalues of \( S. \) Multiplying (3.21) by \( P_{-1} \) from the left we arrive at

$$\displaystyle \begin{aligned} - 2 i P_{-1} \vec{u} = 0 \Leftrightarrow P_{-1} \vec{u} = 0.\end{aligned}$$

This condition means that the vector \( \vec {u} \) has to be orthogonal to the eigenvectors of \( S\) associated with the eigenvalue \( -1. \)

The second condition is obtained by multiplying (3.21) by \( P_{-1}^\perp \):

$$\displaystyle \begin{aligned} i (S-I) P_{-1}^\perp \vec{u} = (S+I) P_{-1}^\perp \partial \vec{u},\end{aligned}$$

where we used that \( S \) commutes with its eigenprojectors. The matrix \( (S+I) \) is invertible on the range of \( P_{-1}^\perp \), hence we have

$$\displaystyle \begin{aligned} i (S+I)^{-1} (S-I) P_{-1}^\perp \vec{u} = P_{-1}^\perp \partial \vec{u}.\end{aligned}$$

The ranges of \( P_{-1} \) and \( P_{-1}^\perp \) span the space \( \mathbb C^d \), hence condition (3.21) is equivalent to

$$\displaystyle \begin{aligned} {} \left\{ \begin{array}{l} \displaystyle P_{-1} \vec{u} = 0, \\ \displaystyle (I-P_{-1}) \partial \vec{u} = A_S (I-P_{-1}) \vec{u} , \end{array} \right. \end{aligned} $$
(3.27)

where

$$\displaystyle \begin{aligned} {} A_S = \mathrm{i}\frac{S-I}{S+I} P_{-1}^\perp, \; \mathrm{with} \; P_{-1}^\perp := I - P_{-1}. \end{aligned} $$
(3.28)

The matrix \( A_S \) appearing in this parametrisation is Hermitian and its eigenvectors coincide with the eigenvectors of the unitary matrix \( S \) (not corresponding to the eigenvalue \( -1\)). To prove this let us write \( A_S \) in the form

$$\displaystyle \begin{aligned} A_S = \mathrm{i} P_{-1}^\perp (S+I)^{-1} (S-I) P_{-1}^\perp\end{aligned}$$

and take the adjoint

$$\displaystyle \begin{aligned} \begin{array}{ccl} \displaystyle A_S^* & = & \displaystyle - i P_{-1}^\perp (S^*-I) (S^*+I)^{-1} P_{-1}^\perp \\ & = & \displaystyle - i P_{-1}^\perp (S^*- S S^*) (S^*+S^* S)^{-1} P_{-1}^\perp \\ & = & \displaystyle - i P_{-1}^\perp (I-S) {S^*} ({S^*})^{-1} (I+S)^{-1} P_{-1}^\perp \\ & = & A_S. \end{array} \end{aligned}$$
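The following numerical sketch (Python with NumPy; the unitary matrix \( S \) is assembled from a random orthonormal basis and prescribed eigenvalues, one of which equals \( -1 \)) illustrates the construction (3.28) and the Hermiticity of \( A_S \):

```python
# Numerical sketch (NumPy) of the parametrisation (3.27)-(3.28): build a
# unitary S with eigenvalue -1, form A_S on the orthogonal complement of
# the corresponding eigenspace and check that A_S is Hermitian.
import numpy as np

rng = np.random.default_rng(3)
d = 3
Z = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
Q, _ = np.linalg.qr(Z)                             # random orthonormal basis
eigs = np.array([-1.0 + 0j, np.exp(0.4j), np.exp(-1.1j)])
S = Q @ np.diag(eigs) @ Q.conj().T                 # unitary, eigenvalue -1

P = Q[:, :1] @ Q[:, :1].conj().T                   # eigenprojector P_{-1}
Pp = np.eye(d) - P                                 # complement P_{-1}^perp
# (S + I) is invertible on the range of Pp; pinv inverts it there
A_S = 1j * Pp @ (S - np.eye(d)) @ np.linalg.pinv(S + np.eye(d)) @ Pp
print(np.allclose(A_S, A_S.conj().T))              # True: A_S = A_S*
```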

This parametrisation shows that the most general vertex conditions at a vertex can be considered as a combination of Dirichlet and Robin type conditions:

  • the first condition in (3.27) is precisely of Dirichlet type,

  • the second condition in (3.27) is of Robin type.

This form of vertex conditions will be extremely useful when quadratic forms of operators are discussed (see Chap. 11).

3.5 Scaling-Invariant and Standard Conditions

3.5.1 Energy Dependence of the Vertex S-matrix

Let us now discuss how the vertex scattering matrix depends on the energy. Since the matrix \( S \) is unitary, it is convenient to use its spectral representation

$$\displaystyle \begin{aligned} {} S = \sum_{n=1}^d \mathrm{e}^{\mathrm{i}\theta_n} \langle \vec{e}_n, \cdot \rangle_{\mathbb C^d} \vec{e}_n, \end{aligned} $$
(3.29)

where \( \theta _n \in (-\pi , \pi ], \vec {e}_n \in \mathbb C^d, S \vec {e}_n = \mathrm {e}^{\mathrm {i}\theta _n} \vec {e}_n. \) We use that \( S_{\mathbf {v}} \) is a rational function of \( S\); hence formula (3.20) implies

$$\displaystyle \begin{aligned} {} \begin{array}{ccl} \displaystyle S_{\mathbf{v}} (k) & = & \displaystyle \sum_{n=1}^d \frac{(k+1) \mathrm{e}^{\mathrm{i}\theta_n} + (k-1) }{(k-1) \mathrm{e}^{\mathrm{i}\theta_n} + (k+1) } \; \langle \vec{e}_n, \cdot \rangle_{\mathbb C^d} \vec{e}_n \\ & = & \displaystyle \sum_{n=1}^d \frac{k( \mathrm{e}^{\mathrm{i}\theta_n} +1)+ (\mathrm{e}^{\mathrm{i}\theta_n}-1) }{k( \mathrm{e}^{\mathrm{i}\theta_n} +1)- (\mathrm{e}^{\mathrm{i}\theta_n}-1) } \; \langle \vec{e}_n, \cdot \rangle_{\mathbb C^d} \vec{e}_n \\ & = & \displaystyle \hspace{-3mm} \sum_{n: \theta_n = \pi} (-1) \; \langle \vec{e}_n, \cdot \rangle_{\mathbb C^d} \vec{e}_n + \hspace{-3mm} \sum_{n: \theta_n \neq \pi} \frac{k( \mathrm{e}^{\mathrm{i}\theta_n} +1)+ (\mathrm{e}^{\mathrm{i}\theta_n}-1) }{k( \mathrm{e}^{\mathrm{i}\theta_n} +1)- (\mathrm{e}^{\mathrm{i}\theta_n}-1) } \; \langle \vec{e}_n, \cdot \rangle_{\mathbb C^d} \vec{e}_n. \end{array} \end{aligned} $$
(3.30)

The unitary matrix \( S_{\mathbf {v}} (k) \) has the same eigenvectors as the matrix \( S \), but the corresponding eigenvalues in general depend on the energy. The eigenvalues \( \pm 1 \) are invariant; all other eigenvalues (i.e. different from \( \pm 1 \)) tend to \( 1 \) as \( k \rightarrow \infty . \)

Even if \( S \) is not Hermitian, one may calculate the high and the low energy limits of \( S_{\mathbf {v}} (k) \):

$$\displaystyle \begin{aligned} {} \begin{array}{ccccccc} \displaystyle S _{\mathbf{v}} (\infty) & = & \displaystyle \lim_{k \rightarrow \infty} S_{\mathbf{v}} (k) & = & \displaystyle - P_{-1} + (I-P_{-1}) & = & \displaystyle I - 2 P_{-1}, \\ \displaystyle S _{\mathbf{v}} (0) & = & \displaystyle \lim_{k \rightarrow 0} S_{\mathbf{v}} (k) & = & \displaystyle P_{1} - (I-P_{1}) & = & \displaystyle 2 P_1 - I. \end{array} \end{aligned} $$
(3.31)

Here we used the notations \( P_{\pm 1} \) for the spectral projectors associated with the eigenvalues \( \pm 1 :\)

$$\displaystyle \begin{aligned} P_{-1} = \sum_{\theta_n = \pi} \langle \vec{e}_n, \cdot \rangle_{\mathbb C^d} \vec{e}_n, \; \; P_{1} = \sum_{\theta_n = 0} \langle \vec{e}_n, \cdot \rangle_{\mathbb C^d} \vec{e}_n. \end{aligned}$$
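These limits can be observed numerically. In the following sketch (Python with NumPy) the unitary matrix \( S \) is assembled with the prescribed eigenvalues \( -1, \; 1 \) and \( \mathrm{e}^{2\mathrm{i}} \), and \( S_{\mathbf{v}} (k) \) is evaluated via (3.20) for very large and very small \( k \):

```python
# Numerical sketch (NumPy) of the limits (3.31), using formula (3.20) with
# a unitary S having the eigenvalues -1, +1 and e^{2i}.
import numpy as np

rng = np.random.default_rng(4)
d = 3
Z = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
Q, _ = np.linalg.qr(Z)
eigs = np.array([-1.0 + 0j, 1.0 + 0j, np.exp(2j)])
S = Q @ np.diag(eigs) @ Q.conj().T
I = np.eye(d)

def S_v(k):                                        # formula (3.20)
    return np.linalg.solve((k - 1) * S + (k + 1) * I, (k + 1) * S + (k - 1) * I)

P_m = Q[:, :1] @ Q[:, :1].conj().T                 # P_{-1}
P_p = Q[:, 1:2] @ Q[:, 1:2].conj().T               # P_{1}
print(np.allclose(S_v(1e6),  I - 2 * P_m, atol=1e-4))   # high energy limit
print(np.allclose(S_v(1e-6), 2 * P_p - I, atol=1e-4))   # low energy limit
```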

The vertex scattering matrix is independent of the energy if and only if the vertex conditions are non-Robin, or scaling-invariant as described in the following section.

3.5.2 Scaling-Invariant, or Non-Robin Vertex Conditions

For the star graph formed by the edges \( E_n = [x_n, \infty ) , \; n =1,2, \dots , d \) consider the scaling transformation

$$\displaystyle \begin{aligned} {}[x_n, \infty) \ni x \mapsto y = x_n + c (x-x_n) \in [x_n, \infty), \quad c > 0.\end{aligned}$$

This transformation naturally induces the function transformation

$$\displaystyle \begin{aligned} u \mapsto u_c\end{aligned}$$

so that if \( y \in E_n = [x_n, \infty ) \) then

$$\displaystyle \begin{aligned} u_c (y) = u \left( x_n + \frac{y-x_n}{c} \right).\end{aligned}$$

It is natural to call vertex conditions scaling-invariant if and only if any function \( u \) and its scaling \( u_c \) satisfy the conditions simultaneously.

It is clear that the limit values of \( u \) and \( u_c \) are related via

$$\displaystyle \begin{aligned} \vec{u} = \vec{u}_c, \; \; \; \partial \vec{u} = c \,\partial \vec{u}_c, \end{aligned} $$
(3.32)

provided the magnetic potential is zero. Vertex conditions (3.27) are invariant under scaling if and only if the matrix \( A_S \) is identically zero. As one can see from (3.28), this happens precisely when the parameter matrix \( S \) has only the eigenvalues \( 1 \) and \( -1 \), hence \( S \) is not only unitary but also Hermitian. Therefore any scaling-invariant vertex conditions can be written in the form:

$$\displaystyle \begin{aligned} {} \left\{ \begin{array}{l} \displaystyle P_{-1} \vec{u} = 0, \\ \displaystyle P_{1}\partial \vec{u} = 0 , \end{array} \right. \end{aligned} $$
(3.33)

where \( P_{\pm 1} \) are the eigenprojectors on the two orthogonal eigensubspaces spanning \( \mathbb C^d. \) These conditions can be seen as a combination of Dirichlet and Neumann conditions. The corresponding matrix \( A_S \) appearing in the Hermitian parametrisation is zero, therefore scaling-invariant vertex conditions are often called non-Robin. In the two extreme cases \( P_{-1} = I \; (P_1 = 0) \) and \( P_{-1} = 0 \; (P_1 = I)\) the conditions reduce to the usual Dirichlet and Neumann ones.

A characteristic property of scaling-invariant vertex conditions is that the corresponding vertex scattering matrix is independent of the energy (as can be seen from (3.30)) and can be written as the difference of the two eigenprojectors

$$\displaystyle \begin{aligned} {} S_{\mathbf{v}} (k) \equiv S = P_{1} - P_{-1} . \end{aligned} $$
(3.34)

3.5.3 Standard Vertex Conditions

Standard vertex conditions (2.27)

$$\displaystyle \begin{aligned} \left\{ \begin{array}{ll} \displaystyle u(x_1) = u(x_2) = \dots = u(x_d) & - \; \mbox{continuity condition}, \\ \displaystyle \sum_{j=1}^d \partial u (x_j) = 0 & - \; \mbox{Kirchhoff condition}, \end{array} \right. \end{aligned}$$

appear naturally if we impose the requirement that the functions are continuous at the vertices. Continuity of the wave-function is a natural requirement and is usually welcomed in applications. It is customary to use these conditions if there is no preference or it is not known which particular vertex conditions should be used, which explains the name. We have already considered standard vertex conditions in Sect. 2.1.3. Writing these conditions using the matrices \( A \) and \( B \) is not difficult:

$$\displaystyle \begin{aligned} {} \left( \begin{array}{cccccc} 1 & -1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & -1 & \cdots & 0 & 0 \\ 0 & 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 1 & -1 \\ 0 & 0 & 0 & \cdots & 0 & 0 \end{array} \right) \vec{u} = \left( \begin{array}{cccccc} 0 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 0 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 0 & 0 \\ 1 & 1 & 1 & \cdots & 1 & 1 \end{array} \right) \partial \vec{u} , \end{aligned} $$
(3.35)

The first \( d-1 \) equations imply that the function \( u \) is continuous at the vertex, while the last equation corresponds to the Kirchhoff condition.

Let us discuss how to describe the standard conditions using the scattering matrix. To this end we calculate the vertex scattering matrix. Substituting Ansatz (3.12) into (3.35) and taking into account that the ranges of the two matrices are orthogonal, we get:

$$\displaystyle \begin{aligned} \begin{array}{l} \left( \begin{array}{cccccc} 1 & -1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & -1 & \cdots & 0 & 0 \\ 0 & 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 1 & -1 \\ 0 & 0 & 0 & \cdots & 0 & 0 \end{array} \right) (\vec{b} + \vec{a} ) = 0; \\ \\ {ik} \left( \begin{array}{cccccc} 0 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 0 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 0 & 0 \\ 1 & 1 & 1 & \cdots & 1 & 1 \end{array} \right) (- \vec{b} + \vec{a} ) = 0. \end{array} \end{aligned} $$
(3.36)

These conditions can be written as

$$\displaystyle \begin{aligned} {} \left\{ \begin{array}{l} a_{i} + b_{i} = a_{j} + b_{j}, \quad i,j = 1,2, \dots, d, \\ \displaystyle \sum_{j=1}^d (a_j - b_j) = 0. \end{array} \right. \end{aligned} $$
(3.37)

It is then clear that the edges are indistinguishable, i.e. the conditions are invariant under permutations of the edges. Therefore the vertex scattering matrix should satisfy the equation

$$\displaystyle \begin{aligned} S_{\mathbf{v}} (k) = P_\sigma S_{\mathbf{v}} (k) P^{-1}_\sigma\end{aligned}$$

for any permutation matrix \(P_\sigma \), and therefore is of the form:

$$\displaystyle \begin{aligned} S_{ij} (k) = \left\{ \begin{array}{ll} T, & i \neq j, \\ R, & i = j, \end{array} \right. \Rightarrow S (k) = \left( \begin{array}{cccc} R & T & T & \cdots \\ T & R & T & \cdots \\ T & T & R & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{array} \right). \end{aligned} $$
(3.38)

At this stage we cannot exclude the possibility that the transmission \( T \) and reflection \( R \) coefficients depend on the spectral parameter \( k.\) Let us assume that there is just one incoming wave arriving along the edge \( E_1 \): the corresponding scattered wave is given by the Ansatz

$$\displaystyle \begin{aligned} \psi (x) = \left\{ \begin{array}{ll} \displaystyle e^{-i k (x-x_1)} + R e^{i k (x-x_1)}, & \displaystyle x \in E_1 = [x_1, \infty), \\ \displaystyle T e^{i k (x-x_n)}, & \displaystyle x \in E_n = [x_n, \infty), \; n= 2,3, \dots, d. \end{array} \right.\end{aligned}$$

Substituting this Ansatz into the standard conditions (2.27) leads to the following linear system

$$\displaystyle \begin{aligned} \left\{ \begin{array}{l} \displaystyle 1+ R = T \\ \displaystyle ik \left(-1+R + (d-1) T \right) = 0. \end{array} \right. \end{aligned}$$
$$\displaystyle \begin{aligned} {} \Rightarrow \left\{ \begin{array}{l} \displaystyle T-R = 1 \\ \displaystyle (d-1) T+ R = 1. \end{array} \right. \end{aligned} $$
(3.39)

Solving the linear system we get the transmission and reflection coefficients

$$\displaystyle \begin{aligned} {} \left\{ \begin{array}{ccl} T & = & 2/d, \\ R & = & -1 + 2/d. \end{array} \right. \end{aligned} $$
(3.40)

The matrix \( S^{\mathrm {st}} \) corresponding to standard vertex conditions is then given by

$$\displaystyle \begin{aligned} {} S^{\mathrm{st}} = S^{\mathrm{st}}_d = \left( \begin{array}{cccc} -1+2/d & 2/d & 2/d & \cdots \\ 2/d & -1 +2/d & 2/d & \cdots \\ 2/d & 2/d & -1 + 2/d & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{array} \right), \end{aligned} $$
(3.41)

which allows one to write the standard vertex conditions in the form (3.21):

$$\displaystyle \begin{aligned} {} \mathrm{i} \left( S^{\mathrm{st}} - I \right) \vec{u} = \left( S^{\mathrm{st}} + I \right) \partial \vec{u}. \end{aligned} $$
(3.42)

The scattering matrix is independent of the energy and therefore can be written using two projectors. One may also introduce the eigensubspaces \( N_1 = \mathcal L \{ (1,1,1,\dots ,1)\}\) and \( N_{-1} = N_1^\perp \) corresponding to the eigenvalues \( \pm 1. \) The orthogonal projectors \( P_{\pm 1} = P_{N_{\pm 1}} \) allow one to write standard vertex conditions also in the form (3.33).
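The whole computation can be reproduced numerically. The following sketch (Python with NumPy) assembles the matrices (3.35) for \( d = 5 \) and checks that formula (3.15) at \( k = 1 \) gives the matrix (3.41):

```python
# Numerical check (NumPy): the matrices (3.35) for standard conditions at a
# vertex of degree d, inserted into (3.15) at k = 1, reproduce S^st of (3.41).
import numpy as np

d = 5
A = np.zeros((d, d))
B = np.zeros((d, d))
for j in range(d - 1):                # continuity rows of (3.35)
    A[j, j], A[j, j + 1] = 1.0, -1.0
B[d - 1, :] = 1.0                     # Kirchhoff row of (3.35)

S_st = -np.linalg.solve(A - 1j * B, A + 1j * B)      # (3.15) at k = 1
expected = 2.0 / d * np.ones((d, d)) - np.eye(d)     # (3.41)
print(np.allclose(S_st, expected))    # True
```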

Standard vertex conditions for degree two vertices mean that the function and its first derivative are continuous at the vertex. As a result, the corresponding vertex scattering matrix describes free passage through the vertex

$$\displaystyle \begin{aligned} S^{\mathrm{st}}_2 = \left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right).\end{aligned}$$

Hence degree two vertices with standard conditions can always be removed, with the two edges joined at the vertex substituted by a single edge whose length equals the sum of the lengths of the two edges.

Conversely, every internal point of an edge can be seen as a degree two vertex with standard conditions.

3.6 Signing Conditions for Degree Two Vertices

The signing conditions resemble the standard conditions, differing by two extra signs, hence the name:

$$\displaystyle \begin{aligned} {} \left\{ \begin{array}{l} u(x_1) = - u(x_2), \\ \partial u(x_1) - \partial u(x_2) = 0. \end{array} \right. \end{aligned} $$
(3.43)

These conditions correspond to multiplication of the function by \( -1 \) while crossing the vertex. The corresponding vertex scattering matrix is

$$\displaystyle \begin{aligned} S^{\mathrm{sign}} = \left( \begin{array}{cc} 0 & -1 \\ -1 & 0 \end{array} \right) = - S^{\mathrm{st}}_2 .\end{aligned}$$

These conditions will play a very important role when discussing the solution of the inverse problem using magnetic flux dependent spectral data.

For example, introducing signing conditions connecting the endpoints of the same interval corresponds to the loop graph with magnetic flux equal to \( \pi \).

We borrow the name signing conditions from discrete graph theory, see for example [89, 384].

3.7 Generalised Delta Couplings

In this section we present yet another class of vertex conditions. These conditions were introduced in order to guarantee that the ground state eigenfunction may be chosen positive. They are characterised by the property that the domain of the quadratic form is invariant under taking the absolute value of a function, while the value of the quadratic form does not increase (see Sect. 4.5).

With any vertex \( V \) of degree \( d \) we associate \( n \leq d \) arbitrary vectors \( \vec {a}_j \) with the following properties:

  • all coordinates of \( \vec {a}_j \) are non-negative numbers

    $$\displaystyle \begin{aligned} \vec{a}_j \in \mathbb R_+^{d} ;\end{aligned}$$
  • the vectors have disjoint supports so that

    $$\displaystyle \begin{aligned} \vec{a}_j (x_l) \; \vec{a}_i (x_l) = 0, \; \mbox{provided} \; j \neq i , \; x_l \in V,\end{aligned}$$

    holds.

Without loss of generality we assume that the vectors \( \vec {a}_j \) are normalised:

$$\displaystyle \begin{aligned} \| \vec{a}_j \|{}^2 := \sum_{x_l \in V} \vert \vec{a}_j (x_l) \vert^2 = 1.\end{aligned}$$

The coordinates of the vectors \( \vec {a}_j \) will be called weights.

In addition to the vectors \( \vec {a}_j \) we pick a Hermitian \( n \times n \) matrix \( \mathbf A \) playing the role of a Robin parameter. Then the generalised delta couplings are written as follows:

$$\displaystyle \begin{aligned} {} \left\{ \begin{array}{l} \displaystyle \vec{u} \in \mathfrak L \{ \vec{a}_1, \vec{a}_2, \dots, \vec{a}_n \} ;\\ \displaystyle \langle \vec{a}_j, \partial \vec{u} \rangle = \sum_{i=1}^n A_{ji} \langle \vec{a}_i, \vec{u} \rangle. \end{array} \right. \end{aligned} $$
(3.44)

The dimension \( n \) of the subspace

$$\displaystyle \begin{aligned} \mathcal B := \mathfrak L \{ \vec{a}_1, \vec{a}_2, \dots, \vec{a}_n \}\end{aligned}$$

will be referred to as the order of the generalised delta-condition (Fig. 3.4).

Fig. 3.4 Generalised delta couplings when \( d= 9 \) and \( n = 3 \): nine edges meeting at the vertex \( V^1 \)

The first condition in (3.44) is a weighted continuity condition, since it can be written as follows:

$$\displaystyle \begin{aligned} {} \frac{u(x_k)}{ \vec{a}_j (x_k)} = \frac{u(x_l)}{\vec{a}_j (x_l)} := {\mathbf{u}}_j, \; \, x_k, x_l \in \mathrm{supp}\,\vec{a}_j, \; \; j = 1,2, \dots, n. \end{aligned} $$
(3.45)

The difference from the classical delta coupling (see Appendix 1) is that the function is not necessarily continuous at the vertex. In the case \( n=1 \), when the corresponding vector \( \vec {a}_1 \) has maximal support, any coordinate of \( \vec {u} \) determines all other coordinates—the value of \( u \) at one endpoint determines its values at all other endpoints. But the values may be different if the weights are different. One may say that the weighted function is continuous in this case. If \( n \geq 2 \), then the entries of \( \vec {u} \) are determined by \( n \) arbitrary parameters. Every coordinate in \( \vec {u} \) belongs to the support of at most one vector \( \vec {a}_j \) and thus determines all other coordinates in the support of that \( \vec {a}_j. \) The wave function \( u \) attains \( n \) independent weighted values associated with different groups of endpoints joined at the vertex. One should think about this condition as a weighted continuity of \( u \) at each group of endpoints.

Changing the order \( n, \; 1 \leq n \leq d, \) of the delta coupling allows one to interpolate between the classical delta coupling and the most general vertex conditions, so that \( n=1 \) corresponds to the weighted delta coupling and \( n=d \) to the most general Robin condition of the form \( \partial \vec {u} = A \;\vec {u}. \)

Note that in Eq. (3.45) we introduced a new vector \(\vec {\mathbf {u}} = (\mathbf u_1, \mathbf u_2, \dots , \mathbf u_n)\)—the reduced vector containing common weighted values of the vector \( \vec {u} \). The dimension of the vector coincides with the dimension \( n \) of the linear subspace \( \mathcal B \).

The second equation in (3.44) is a balance equation for the normal derivatives. The weighted sum of the normal derivatives at the endpoints from the support of one of the vectors \( \vec {a}_j \) is connected via the coupling matrix \( \mathbf A \) to the common weighted values of \( u \) at all groups of endpoints, since we have

$$\displaystyle \begin{aligned} \langle \vec{a}_i, \vec{u} \rangle = \sum_{x_l \in \mathrm{supp}\,\vec{a}_i} \vec{a}_i (x_l) u (x_l) = \sum_{x_l \in \mathrm{supp}\,\vec{a}_i} \vert \vec{a}_i (x_l) \vert^2 \frac{u(x_l)}{\vec{a}_i (x_l)} = {\mathbf{u}}_i. \end{aligned}$$

Here we used that the vector \( \vec {a}_i \) is normalised.
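The vanishing of the boundary form under the conditions (3.44) can be illustrated numerically. In the following minimal sketch (Python with NumPy) the vectors \( \vec{a}_j \) and the matrix \( \mathbf A \) are sample data with \( d = 5 \) and \( n = 2 \):

```python
# Minimal sketch (NumPy) of a generalised delta coupling (3.44) with d = 5,
# n = 2: random limit values satisfying (3.44) give a vanishing boundary form.
import numpy as np

rng = np.random.default_rng(5)
d, n = 5, 2
a1 = np.array([1.0, 2.0, 2.0, 0.0, 0.0]); a1 /= np.linalg.norm(a1)
a2 = np.array([0.0, 0.0, 0.0, 1.0, 1.0]); a2 /= np.linalg.norm(a2)
a = np.stack([a1, a2])                        # disjoint supports, normalised
H = np.array([[1.0, -0.5],
              [-0.5, 2.0]])                   # Hermitian coupling matrix A

def admissible():
    # (u, du) satisfying (3.44): u in the span of a_1, a_2 and
    # <a_j, du> = sum_i A_ji <a_i, u>; the part of du orthogonal
    # to all the vectors a_j is not restricted
    c = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    u = a.T @ c
    w = rng.standard_normal(d) + 1j * rng.standard_normal(d)
    w -= a.T @ (a @ w)                        # remove components along a_j
    du = w + a.T @ (H @ (a @ u))
    return u, du

u, du = admissible()
v, dv = admissible()
print(abs(np.vdot(du, v) - np.vdot(u, dv)))   # ~ 1e-15
```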

For generalised delta couplings to be properly connecting, two requirements should be fulfilled:

  1. (1)

    The union of supports of the vectors \( \vec {a}_j \) coincides with the set of all endpoints in \( V \):

    $$\displaystyle \begin{aligned} {} \cup_{j=1}^n \mathrm{supp}\; (\vec{a}_j) = \{ x_l \}_{x_l \in V}. \end{aligned} $$
    (3.46)
  2. (2)

    The matrix \( \mathbf A = \{ A_{ji} \}_{j,i =1}^n \) is irreducible, i.e. it cannot be put into a block-diagonal form by permutations.

If the first condition is not satisfied, then we have classical Dirichlet conditions at certain endpoints:

$$\displaystyle \begin{aligned} u(x_l) = 0, \; \mathrm{provided} \; x_l \notin \cup_{j=1}^n \mathrm{supp}\; (\vec{a}_j) .\end{aligned}$$

Dirichlet endpoints always form separate vertices.

If the second condition is not satisfied, then the vertex \( V \) can be chopped into two (or more) vertices preserving the vertex conditions. Such conditions correspond to the metric graph where the vertex \( V \) is divided.

As we already pointed out, the described vertex conditions will play a crucial role in proving that the ground state eigenfunction can be chosen positive. For that purpose, all the weights should be real and the matrix \( \mathbf A \) should be not only Hermitian but real with non-positive entries outside the diagonal. You will read more about generalised delta couplings in Sect. 4.5, where, in particular, the corresponding quadratic form is calculated and its properties are discussed.

3.8 Vertex Conditions for Arbitrary Graphs and Definition of the Magnetic Schrödinger Operator

3.8.1 Scattering Matrix Parametrisation of Vertex Conditions

In this section we discuss the most general vertex conditions for arbitrary compact finite graphs, generalising Sect. 3.3. Our main focus will be on which properties of these conditions guarantee their admissibility, and therefore we still assume that the potentials satisfy (2.19) and (2.20).

The standard self-adjoint operator \( L_{q,a}^{\mathrm {st}} \) associated with a symmetric differential expression on a metric graph \( \Gamma \) has already been defined in Sect. 2.1 (Definition 2.2). This operator is selected by introducing standard vertex conditions (2.27) at the vertices. Let us discuss how to introduce other types of vertex conditions, so that the vertex structure of the graph \( \Gamma \) is respected. The boundary form of the maximal operator \( L_{q,a}^{\mathrm {max}} \) can be written as

$$\displaystyle \begin{aligned} {} \begin{array}{cl} & \displaystyle \langle L_{q,a}^{\mathrm{max}} u, v \rangle - \langle u, L_{q,a}^{\mathrm{max}} v \rangle \\ & \\ = & \displaystyle \sum_{n=1}^N \int_{E_n} \left\{ \overline{\left(\mathrm{i}\frac{d}{dx} + a(x) \right)^2u(x)}\, v(x) - \overline{u(x)} \left( \mathrm{i}\frac{d}{dx} + a(x) \right)^2v(x) \right\} dx \\ & \\ = & \displaystyle \sum_{x_j} \left( \overline{\partial u(x_j)} v(x_j) - \overline{u(x_j)} \partial v(x_j) \right). \end{array} \end{aligned} $$
(3.47)

Let us introduce the vectors \( \vec {U}, \partial \vec {U} \) of limit values of the function \( u \) at all endpoints:

$$\displaystyle \begin{aligned} \begin{array}{ccl} \displaystyle \vec{U} & = & \displaystyle \left( u(x_1), u(x_2), \dots \right),\\ \displaystyle \partial \vec{U} & = & \displaystyle \left( \partial u(x_1), \partial u(x_2), \dots \right). \end{array} \end{aligned} $$
(3.48)

The dimension of these vectors coincides with the number \( D \) of endpoints in \( \mathbf V.\)

In vector notation the boundary form (3.47) looks as follows

$$\displaystyle \begin{aligned} \displaystyle \langle L_{q,a}^{\mathrm{max}} u, v \rangle - \langle u, L_{q,a}^{\mathrm{max}} v \rangle = \left\langle \left( \begin{array}{cc} 0 & I \\ - I & 0 \end{array} \right) \left( \begin{array}{c} \vec{U} \\ \partial \vec{U} \end{array} \right), \left( \begin{array}{c} \vec{V} \\ \partial \vec{V} \end{array} \right) \right\rangle_{\mathbb C^{2D}}, \end{aligned} $$
(3.49)

and coincides with the standard symplectic form in the space \( \mathbb C^{2D} \ni (\vec {U}, \partial \vec {U}) \). The set of self-adjoint restrictions of the maximal operator \( L^{\mathrm {max}}_{q,a} \) can be described by Lagrangian planes, i.e. maximal isotropic subspaces in \( \mathbb C^{2D}. \) But not all such Lagrangian subspaces respect the vertex structure of the underlying metric graph. In order to select proper conditions let us re-write the boundary form as follows

$$\displaystyle \begin{aligned} \begin{array}{cl} & \displaystyle \langle L_{q,a}^{\mathrm{max}} u, v \rangle - \langle u, L_{q,a}^{\mathrm{max}} v \rangle \\ & \\ = & \displaystyle \sum_{m=1}^M \left\{ \sum_{x_j \in V^m} \left( \overline{\partial u(x_j) } v (x_j) - \overline{u(x_j) } \partial v (x_j) \right) \right\} \\ = & \displaystyle \sum_{m=1}^M \left\langle \left( \begin{array}{cc} 0 & I \\ -I & 0 \end{array} \right) \left( \begin{array}{c} \vec{u}(V^m) \\ \partial \vec{u} (V^m) \end{array} \right), \left( \begin{array}{c} \vec{v} (V^m) \\ \partial \vec{v} (V^m) \end{array} \right) \right\rangle_{\mathbb C^{2d^m}}. \end{array} \end{aligned} $$
(3.50)

Each subspace \( \mathbb C^{2 d^m} \) associated with the vertex \( V^m \) can be considered separately. The corresponding appropriate Lagrangian planes, or vertex conditions, have already been discussed in Sect. 3.3 in the context of star graphs.

With every vertex \( V^m ,\) we associate a \( d^m \times d^m \) unitary irreducible matrix \( S^m \) and introduce the vertex conditions

$$\displaystyle \begin{aligned} {} \mathrm{i}(S^m-I) \vec{u} (V^m) = (S^m+I) \partial \vec{u} (V^m), \; \; m=1,2,\dots, M. \end{aligned} $$
(3.51)

In what follows, we are going to limit our studies to the case of irreducible matrices \( S^m.\) The corresponding vertex conditions will be called admissible.

It will be convenient to consider the vectors \( \vec {u}(V^m) \) as elements of \( \mathbb C^{D}\), extending them by zero to all endpoints not in \( V^m\). Then the unitary matrices \( S^m \) are identified with the \( D \times D \) matrices obtained by setting equal to zero all entries with indices \( ij \) such that either \( x_i \notin V^m\) or \(x_j \notin V^m\). Then the matrix \( \mathbf S \) given by

$$\displaystyle \begin{aligned} {} \mathbf S = \bigoplus_{m=1}^M S^m \end{aligned} $$
(3.52)

is unitary and describes the vertex conditions at all vertices via

$$\displaystyle \begin{aligned} {} i (\mathbf S - \mathbf I) \vec{U} = (\mathbf S + \mathbf I) \partial \vec{U}. \end{aligned} $$
(3.53)

Note that the sum in (3.52) is orthogonal, since the matrices \( S^m \) act on the limit values at different vertices. The matrix \( \mathbf S \) in general is reducible and its invariant subspaces are determined by the vertices.

Then the self-adjoint operator is defined as the restriction of the maximal operator to the domain of functions satisfying vertex conditions (3.51).

Definition 3.6

The magnetic Schrödinger operator\( L_{q,a}^{\mathbf {S}} \) is defined by the differential expression (2.17) on the domain of functions from the Sobolev space \( W_2^2 (\Gamma \setminus \mathbf V) \) satisfying the vertex conditions (3.51) at each vertex.

In this definition it is important that each matrix \( S^m \) is irreducible, while the matrix \( \mathbf S \) is reducible by construction (assuming, of course, that \( \Gamma \) has more than one vertex). The case where at least one of the matrices \( S^m \) is reducible corresponds to a different metric graph. The corresponding graph can be obtained from the graph \( \Gamma \) by splitting one of the vertices into two or more equivalence classes—new vertices (see Fig. 3.1). Thus taking \( \mathbf S = - \mathbf I \) we get the Dirichlet operator \( L_{q,a}^D \) corresponding to the graph consisting of disconnected edges.

Theorem 3.7

The operator \( L_{q,a}^{\mathbf {S}} \) is self-adjoint, provided that the matrix \( \mathbf {S} \) is unitary.

Proof

Consider the minimal operator associated with the differential expression \( L_{q,a} \) in \( L_2 (\Gamma ) \). The adjoint operator is determined by the same differential expression on the domain \( W_2^2 (\Gamma \setminus \mathbf V ). \) This follows directly from the fact that the differential expression \( L_{q,a} \) is formally symmetric.

To prove that \( L_{q,a}^{\mathbf {S}} \) is self-adjoint, one may repeat step-by-step the proof of Theorems 3.2 and 3.4.

The boundary form of the operator is given by (3.47) and it vanishes due to vertex conditions (3.51), since it can be re-written as

$$\displaystyle \begin{aligned} \langle L_{q,a}^{\mathrm{max}} u, v \rangle - \langle u, L_{q,a}^{\mathrm{max}} v \rangle = \sum_{m=1}^M \left( \sum_{x_j \in V^m} \left( \overline{\partial u(x_j)} v(x_j) - \overline{u(x_j)} \partial v(x_j) \right) \right). \end{aligned}$$

Each term in the sum vanishes separately. Calculating the adjoint operator \( (L_{q,a}^{\mathbf {S}})^* \) all vertices may also be treated separately, and therefore the corresponding calculations can be repeated without any major changes. □

Following (3.20), it is natural to introduce the corresponding (global) vertex scattering matrix

$$\displaystyle \begin{aligned} {} \mathbf S_{\mathbf{v}} (k) = \frac{ \displaystyle (k+1) \mathbf S_{\mathbf{v}}(1) + (k-1) \mathbf I}{\displaystyle (k-1) \mathbf S_{\mathbf{v}} (1) + (k+1) \mathbf I}. \end{aligned} $$
(3.54)

This matrix coincides with the scattering matrix for the vertex of valency \( D \) with the vertex conditions given by formula (3.53). This matrix will be used in what follows to calculate the positive spectrum and to establish the corresponding trace formulas.
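The construction (3.52)–(3.54) is illustrated by the following sketch (Python with NumPy, with randomly generated per-vertex unitary matrices \( S^1 \) and \( S^2 \)):

```python
# Sketch (NumPy) of the construction (3.52)-(3.54) for a graph with two
# vertices of degrees 2 and 3 (D = 5): the block matrix S is unitary and the
# global S_v(k) stays block diagonal, i.e. the vertices do not interact.
import numpy as np

rng = np.random.default_rng(6)

def random_unitary(d):
    Z = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    Q, _ = np.linalg.qr(Z)
    return Q

S1, S2 = random_unitary(2), random_unitary(3)     # per-vertex matrices S^m
S = np.block([[S1, np.zeros((2, 3))],
              [np.zeros((3, 2)), S2]])            # (3.52)
I = np.eye(5)
print(np.allclose(S.conj().T @ S, I))             # unitary

k = 2.5
Sv = np.linalg.solve((k - 1) * S + (k + 1) * I,
                     (k + 1) * S + (k - 1) * I)   # (3.54)
print(np.allclose(Sv[:2, 2:], 0))                 # True: block structure kept
```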

3.8.2 Quadratic Form Parametrisation of Vertex Conditions

In mathematical physics one often determines self-adjoint operators via their quadratic, or more precisely sesquilinear, form. The reason is two-fold:

  • On one side, there is a one-to-one correspondence between semibounded self-adjoint operators and their quadratic forms.

  • Quadratic forms can be used directly in Min-Max and Max-Min principles to determine the discrete spectrum and the corresponding eigenfunctions.

All operators we discuss here are semibounded, so let us look at their quadratic forms. The sesquilinear form of the operator \( L^{\mathbf{S}}_{q,a} \) can be calculated explicitly:

$$\displaystyle \begin{aligned} {} \begin{array}{cl} & \displaystyle Q_{q,a}^{\mathbf{S}} (u,u) \equiv \langle L_{q,a}^{\mathbf{S}} u, u \rangle_{L_2(\Gamma)} \\ = & \displaystyle \sum_{n=1}^N \left( \int_{E_n} - \overline{\left(\frac{d}{dx} - \mathrm{i}a(x) \right)^2 u (x) } u(x) dx + \int_{E_n} q(x) \vert u(x) \vert^2 dx \right) \\ = & \displaystyle \sum_{x_j} \overline{\partial u} (x_j) u (x_j) + \sum_{n=1}^N \left( \int_{E_n} \left\vert \left(\frac{d}{dx} - \mathrm{i}a(x) \right) u (x) \right\vert^2 dx + \int_{E_n} q(x) \vert u(x) \vert^2 dx \right) \\ = & \displaystyle \sum_{m=1}^M \langle \vec{\partial u} (V^m), \vec{u} (V^m) \rangle_{\mathbb C^{d^m}} \\ &\quad \displaystyle+ \sum_{n=1}^N \left( \int_{E_n} \left\vert \left(\frac{d}{dx} - \mathrm{i}a(x) \right) u (x) \right\vert^2 dx + \int_{E_n} q(x) \vert u(x) \vert^2 dx \right) \\ = & \displaystyle \sum_{m=1}^M \langle A_{S^m} \vec{u} (V^m), \vec{u} (V^m) \rangle_{\mathbb C^{d^m}} \\ & \displaystyle + \sum_{n=1}^N \left( \int_{E_n} \left\vert \left(\frac{d}{dx} - \mathrm{i}a(x) \right) u (x) \right\vert^2 dx + \int_{E_n} q(x) \vert u(x) \vert^2 dx \right) . \end{array} \end{aligned} $$
(3.55)

The domain \( \mathrm {Dom}\,Q_{q,a}^{\mathbf {S}} \) of the sesquilinear form is obtained by closing the domain \( \mathrm {Dom}\,(L_{q,a}^{\mathbf {S}}) \) with respect to the norm given by \( Q_{q,a}^{\mathbf {S}} (u,u) + C \| u \|{ }^2, \) where the constant \( C \) is chosen sufficiently large to ensure positivity. Let us remember that we assume that \( q \) and \( a \) satisfy assumptions (2.19) and (2.20) respectively. Under these assumptions \( Q^{\mathbf {S}}_{q,a} (u,u) \) is bounded if and only if \( u \in W_2^1 (\Gamma \setminus \mathbf V) \), since \( a u \in L_2 (\Gamma ) \) and \( q \vert u \vert^2 \in L_1 (\Gamma ). \) It remains to understand what happens to the vertex conditions. Every function from \( W_2^1 (\Gamma \setminus \mathbf V) \) is continuous on every edge, but the first derivatives are not necessarily continuous anymore; in other words, the functionals \( u \mapsto u'(x) \) are not bounded with respect to the norm in the Sobolev space \( W_2^1 (\Gamma \setminus \mathbf V).\) It follows that the Robin part of the vertex conditions, that is, the second equation in (3.27), is not preserved. On the other hand, every function from the closure of \( \mathrm {Dom}\,(L^{\mathbf {S}}_{q,a}) \) with respect to the \(W_2^1\)-norm satisfies the Dirichlet part, i.e. the first equation in (3.27).

Summing up, the domain of the quadratic form consists of all functions from the Sobolev space \( W_2^1 (\Gamma \setminus \mathbf V) \) satisfying just the first condition in (3.27)

$$\displaystyle \begin{aligned} {} P_{-1}^m \vec{u} (V^m) = 0, \; m=1, 2, \dots, M. \end{aligned} $$
(3.56)

Although the Robin part of the vertex conditions is not visible in the description of the quadratic form domain, it can nevertheless be reconstructed. In other words, the quadratic form \( Q_{q,a}^{\mathbf {S}} \) determines the vertex conditions uniquely. The domain of the quadratic form determines the projectors \( P_{-1}^m \) and hence the subspaces \( (I-P_{-1}^m) \mathbb C^{d^m} . \) The quadratic forms \( \langle A^m \vec {u} (V^m), \vec {u} (V^m) \rangle _{\mathbb C^{d^m}} \) determine the Hermitian matrices \( A^m .\) Therefore the unitary matrices \( S^m \) are given by the formula

$$\displaystyle \begin{aligned} S^m = {P_{-1}^m}^\perp \frac{\mathrm{i}I+A^m}{\mathrm{i}I - A^m} {P_{-1}^m}^\perp \oplus (- P_{-1}^m), \; \mathrm{where} \; {P_{-1}^m}^\perp = (I-P_{-1}^m). \end{aligned} $$
(3.57)

The standard vertex conditions correspond to the quadratic form

$$\displaystyle \begin{aligned} {} Q_{q,a} (u,u) = \sum_{n=1}^N \left( \int_{E_n} \left\vert \left(\frac{d}{dx} - \mathrm{i}a(x) \right) u (x) \right\vert^2 dx + \int_{E_n} q(x) \vert u(x) \vert^2 dx \right), \end{aligned} $$
(3.58)

where vertex terms are absent. The domain is given by all \( W_2^1 (\Gamma \setminus \mathbf V ) \) functions, which are in addition continuous at the vertices. Starting from this quadratic form, which is the most natural candidate from the physical point of view, we get the Schrödinger operator determined by the standard vertex conditions. Hence standard vertex conditions appear if one requires that the functions from the domain of the operator are continuous at the vertices and the quadratic form contains no vertex terms.

Consider the quadratic form given by the same formula (3.58) on the domain of functions from \( W_2^1 (\Gamma \setminus \{ V^m\}_{m=1}^M ) \) without requiring any continuity at the vertices. The corresponding Schrödinger operator is defined on the domain of functions satisfying Neumann conditions at all endpoints of the edges, i.e. the corresponding graph consists of \( N \) completely disconnected intervals.