1 Introduction

The present paper represents the second part of our investigations on linear hyperbolic systems. Given a metric graph \({\mathcal {G}}\), i.e., a graph \({\mathsf {G}}=({\mathsf {V}},{\mathsf {E}})\) each of whose edges \({\mathsf {e}}\in {\mathsf {E}}\) is identified with an interval \((0,\ell _{\mathsf {e}})\subset {\mathbb {R}}\), we are going to study evolution equations of the form

$$\begin{aligned} \dot{u_{\mathsf {e}}}(t,x)=M_{\mathsf {e}}(x)u'_{\mathsf {e}}(t,x)+ N_{\mathsf {e}}(x) u_{\mathsf {e}}(t,x), \quad t\ge 0,\ x\in (0,\ell _{\mathsf {e}}),\ {\mathsf {e}}\in {\mathsf {E}}, \end{aligned}$$
(1.1)

where \(u_{\mathsf {e}}\) is a vector-valued function of size \(k_{\mathsf {e}}\in {\mathbb {N}}_1:=\{1,2,3, \ldots \}\), and \(M_{\mathsf {e}}\) and \(N_{\mathsf {e}}\) are matrix-valued functions of size \(k_{\mathsf {e}}\times k_{\mathsf {e}}\). Hence, each of these equations is supported on a single edge; they are coupled by means of suitable transmission conditions in the vertices. In [28], we have proposed a parametrization of such conditions that bears some similarity to the boundary conditions for scalar-valued, multi-dimensional transport equations studied in [31]. The goal of this paper is to extend it to general conditions that may be either of stationary type, as in [28], or of dynamic type.

In the case of systems of parabolic equations, dynamic boundary conditions have been studied at least since [42] and classically interpreted as conditions of Wentzell type arising in the theory of stochastic processes, see [39] and references therein. For hyperbolic systems, however, dynamic boundary conditions have been discussed far less frequently in the literature. Specific classes of problems arising in applied mathematics have been investigated in [10, 11, 16, 19, 47] (systems of first-order problems) and [14, 15, 23, 24, 36] (systems of strings and/or beams with point masses at the junctions). At a more abstract level, a semigroup approach combined with boundary control systems was used in [18, 48] to consider flows in networks with dynamic ramification nodes, and infinite-dimensional port-Hamiltonian systems coupled with finite-dimensional systems that impose dynamical feedback on the boundary were studied in [7, 32, 46].

In this paper, we propose a unified formalism to capture hyperbolic systems with hybrid transmission conditions, including the extreme cases of purely stationary or purely dynamic conditions; in fact, we can also allow for conditions that are dynamic only at some vertices and only on some of the unknown’s components. As we will see, this rather general setting is motivated by applications and leads to introducing a block operator matrix

$$\begin{aligned} {{\mathbb {A}}}:=\begin{pmatrix} {\mathcal {A}} &{}\quad 0\\ {\mathcal {B}} &{}\quad { {\mathcal {C}}} \end{pmatrix} \end{aligned}$$

with coupled (i.e., nondiagonal) domain on a suitable direct sum of Hilbert spaces: \({\mathcal {A}}\) is a first-order differential operator encoding the dynamics driving (1.1), while the operators \(\mathcal {B,C}\) model (possibly nonlocal) damping phenomena in the vertices.

Just like in [28], our main assumptions involve the existence of a Friedrichs symmetrizer, an idea that goes back to [20]. The importance of symmetrizable systems has been recognized by many authors (see for instance [21, 22] and [34, Chap. 2]), since it leads to well-posedness and stability results for various equations of mathematical physics, like Maxwell’s equations of electromagnetism, the wave equation, the Euler equations of compressible gas dynamics, and the shallow water wave equations (see below for further examples). The use of a symmetrizer corresponds to a re-norming of the energy space. In comparison with the port-Hamiltonian approach, we have many options for defining the new scalar product. At first glance, this might seem physically incorrect, since the associated norm does not necessarily relate to the energy of the system. However, we gain more flexibility in the choice of the boundary conditions, and those leading to well-posedness (and especially to contractivity) are a fingerprint of correct modeling. We are going to show that, unlike in the canonical setting considered in the literature, the Friedrichs symmetrizer of a hyperbolic system with dynamic boundary conditions is an operator matrix, with additional terms that control the boundary space: in the case most relevant for us, a space of functions supported on the graph’s vertices. Note, finally, that this approach allows one to express the boundary conditions directly in terms of the physical variables, which seems not always to be the case if the system is transformed into characteristic form via Riemann coordinates.

It is known from the theory of parabolic and wave equations with dynamic boundary conditions that boundary operators of higher order are useful to model feedbacks that may stabilize the system. The role of the operator that couples the hyperbolic evolution with the boundary dynamics (\({\mathcal {B}}\), in the notation above) is even more central in the present context: indeed, we show that the dimensions of its range and null space directly impact the maximality of \({\mathbb {A}}\), and hence the well-posedness of the associated Cauchy problem, see our main Theorem 3.3; backward well-posedness as well as energy conservation or decay properties can be characterized in terms of boundary conditions, too. These results, presented in Sect. 3, contain our main findings from [28] as special cases; they can be regarded as a parametrization of infinitely many realizations enjoying particularly good properties.

In Sect. 4, we then discuss qualitative properties enjoyed by solutions of our hyperbolic systems: in particular, we consider two relevant order intervals of the Hilbert space and discuss their invariance under the semigroup that governs the system by presenting sufficient (and, sometimes, necessary) conditions on the boundary conditions.

In Sect. 5, we revisit some known hyperbolic-type equations with dynamic conditions, including transport equations [48], a second sound model [45], and a 1D Maxwell system [10]. We also consider Dirac equations on networks, for which a parametrization of infinitely many realizations governed by a unitary group (resp., contractive semigroup) was first studied in [12] (resp., [28]); we show that infinitely many further relevant realizations naturally arise by allowing for dynamic conditions.

We furthermore study qualitative properties of solutions of these equations by applying our abstract theory. It turns out that the above-mentioned conditions for invariance are rather restrictive: while real-valued initial data give rise to real-valued solutions in most applications, positivity or a priori estimates in \(\infty \)-norm for the solutions can seldom be observed.

2 General setting

We collect here the different sets of assumptions that will be imposed in the following; roughly speaking, they are of combinatorial, analytic, and operator-theoretic nature, respectively.

Assumption 2.1

\({\mathsf {G}}=({\mathsf {V}},{\mathsf {E}})\) is a nonempty, finite combinatorial graph, \((k_{\mathsf {e}})_{{\mathsf {e}}\in {\mathsf {E}}}\) is a family of positive integers and \((\ell _{\mathsf {e}})_{{\mathsf {e}}\in {\mathsf {E}}}\) is a family of positive numbers.

In the following, we adopt the notation

$$\begin{aligned} k:=\sum _{{\mathsf {e}}\in {\mathsf {E}}} k_{\mathsf {e}}\quad \text {and} \quad k_{\mathsf {v}}:=\sum _{{\mathsf {e}}\in {\mathsf {E}}_{\mathsf {v}}} k_{\mathsf {e}}, \end{aligned}$$

where \({\mathsf {E}}_{\mathsf {v}}\) is the set of all edges incident to \({\mathsf {v}}\). Notice that

$$\begin{aligned} \sum _{{\mathsf {v}}\in {\mathsf {V}}}k_{\mathsf {v}}=2k, \end{aligned}$$
(2.1)

by the Handshaking Lemma.
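The identity (2.1) is a simple double count: each edge \({\mathsf {e}}\) contributes \(k_{\mathsf {e}}\) to \(k_{\mathsf {v}}\) at both of its endpoints. A minimal Python sketch on a hypothetical triangle graph with made-up sizes \(k_{\mathsf {e}}\):

```python
# Hypothetical triangle graph: vertices v1, v2, v3; edges e1, e2, e3 with
# the endpoints below and made-up system sizes k_e.
edges = {"e1": ("v1", "v2"), "e2": ("v2", "v3"), "e3": ("v3", "v1")}
k_e = {"e1": 2, "e2": 1, "e3": 3}

k = sum(k_e.values())  # total size k = sum over all edges of k_e

# k_v = sum of k_e over the edges incident to v
k_v = {v: sum(k_e[e] for e, ends in edges.items() if v in ends)
       for v in ("v1", "v2", "v3")}

# Handshaking Lemma, eq. (2.1): every edge is counted at both endpoints
assert sum(k_v.values()) == 2 * k
```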

We turn \({\mathsf {G}}\) into a metric graph (or network) \({\mathcal {G}}\) by identifying each \({\mathsf {e}}\in {\mathsf {E}}\) with an interval \([0,\ell _{\mathsf {e}}]\subset {\mathbb {R}}\); a more precise definition can be found in [37]. We further impose standard assumptions on the coefficient matrices \(M,N\) that appear in (1.1); additionally, we require the existence of a Friedrichs symmetrizer \(Q\).

Assumption 2.2

For each \({\mathsf {e}}\in {\mathsf {E}}\), \(M_{\mathsf {e}},N_{\mathsf {e}}:[0,\ell _{\mathsf {e}}]\rightarrow M_{k_{\mathsf {e}}}({\mathbb {C}})\) are mappings such that the following hold.

(1) \([0,\ell _{\mathsf {e}}]\ni x\mapsto M_{\mathsf {e}}(x)\in M_{k_{\mathsf {e}}}({\mathbb {C}})\) is Lipschitz continuous; and \(M_{\mathsf {e}}(x)\) is invertible for each \(x\in [0,\ell _{\mathsf {e}}]\).

(2) \([0,\ell _{\mathsf {e}}]\ni x\mapsto N_{\mathsf {e}}(x)\in M_{k_{\mathsf {e}}}({\mathbb {C}})\) is of class \(L^\infty \).

(3) There exists a Lipschitz continuous mapping \([0,\ell _{\mathsf {e}}]\ni x\mapsto Q_{\mathsf {e}}(x)\in M_{k_{\mathsf {e}}}({\mathbb {C}})\) such that

  (i) \(Q_{\mathsf {e}}(x)\) and \(Q_{\mathsf {e}}(x) M_{\mathsf {e}}(x)\) are Hermitian for all \(x\in [0,\ell _{\mathsf {e}}]\); and

  (ii) \(Q_{\mathsf {e}}(\cdot )\) is uniformly positive definite, i.e., there exists \(q>0\) such that

  $$\begin{aligned}Q_{\mathsf {e}}(x)\xi \cdot {\bar{\xi }} \ge q \Vert \xi \Vert ^2 \text { for all }\xi \in {\mathbb {C}}^{k_{\mathsf {e}}} \text { and } x\in [0,\ell _{\mathsf {e}}].\end{aligned}$$

Assumption 2.2 is identical to [28, Assumptions 2.1].
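For illustration, here is a minimal numerical sketch of Assumption 2.2(3) with hypothetical constant coefficients: the matrix \(M\) below is not symmetric, but the diagonal matrix \(Q\) is Hermitian, uniformly positive definite, and makes \(QM\) Hermitian, i.e., it is a Friedrichs symmetrizer.

```python
# Hypothetical constant-coefficient 2x2 system with M = [[0, a], [b, 0]],
# a, b > 0: M is not symmetric, but Q = diag(b, a) symmetrizes it.
a, b = 3.0, 2.0
M = [[0.0, a], [b, 0.0]]
Q = [[b, 0.0], [0.0, a]]

# QM = [[0, a*b], [a*b, 0]] is symmetric (Hermitian, in the real case)
QM = [[sum(Q[i][r] * M[r][j] for r in range(2)) for j in range(2)]
      for i in range(2)]

assert QM[0][1] == QM[1][0]            # Q M is Hermitian
assert Q[0][0] > 0 and Q[1][1] > 0     # diagonal Q is positive definite
```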

We introduce for each \({\mathsf {v}}\in {\mathsf {V}}\) the trace operator \(\gamma _{\mathsf {v}}: \bigoplus _{{\mathsf {e}}\in {\mathsf {E}}} H^1(0,\ell _{\mathsf {e}})^{k_{\mathsf {e}}}\rightarrow {\mathbb {C}}^{k_{\mathsf {v}}}\) defined by

$$\begin{aligned} \gamma _{\mathsf {v}}(u):= \left( u_{\mathsf {e}}({\mathsf {v}})\right) _{{\mathsf {e}}\in {\mathsf {E}}_{\mathsf {v}}},\qquad {\mathsf {v}}\in {\mathsf {V}}, \end{aligned}$$

and the \(k_{\mathsf {v}}\times k_{\mathsf {v}}\) block-diagonal matrix \(T_{\mathsf {v}}\) with \(k_{\mathsf {e}}\times k_{\mathsf {e}}\) diagonal blocks

$$\begin{aligned} T_{\mathsf {v}}:= {{\,\mathrm{diag}\,}}\left( Q_{\mathsf {e}}({\mathsf {v}}) M_{\mathsf {e}}({\mathsf {v}}) {\iota }_{{\mathsf {v}}{\mathsf {e}}} \right) _{{\mathsf {e}}\in {\mathsf {E}}_{\mathsf {v}}},\qquad {\mathsf {v}}\in {\mathsf {V}}, \end{aligned}$$
(2.2)

where we recall that the \(|{\mathsf {V}}|\times |{\mathsf {E}}|\) (signed) incidence matrix \({\mathcal {I}}=(\iota _{{\mathsf {v}}{\mathsf {e}}})\) of the graph \({\mathsf {G}}\) is defined by

$$\begin{aligned} {\mathcal {I}}:={\mathcal {I}}^+-{\mathcal {I}}^- \end{aligned}$$
(2.3)

with \({\mathcal {I}}^+=(\iota ^+_{{\mathsf {v}}{\mathsf {e}}})\) and \(\mathcal I^-=(\iota _{{\mathsf {v}}{\mathsf {e}}}^-)\) given by

$$\begin{aligned} {\iota }_{{\mathsf {v}}{\mathsf {e}}}^+:=\left\{ \begin{array}{ll} 1 &{}\quad \hbox {if } {\mathsf {v}}\hbox { is terminal endpoint of } {\mathsf {e}}, \\ 0 &{}\quad \hbox {otherwise,} \end{array}\right. \qquad {\iota }_{{\mathsf {v}}{\mathsf {e}}}^-:=\left\{ \begin{array}{ll} 1 &{}\quad \hbox {if } {\mathsf {v}}\hbox { is initial endpoint of } {\mathsf {e}}, \\ 0 &{}\quad \hbox {otherwise.} \end{array}\right. \end{aligned}$$
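As a concrete illustration, the incidence matrices of a hypothetical oriented triangle can be assembled as follows; since each edge has exactly one initial and one terminal endpoint, every column of \({\mathcal {I}}\) contains exactly one entry \(+1\) and one entry \(-1\).

```python
# Hypothetical oriented triangle: each edge e = (initial, terminal) endpoint.
vertices = ["v1", "v2", "v3"]
edges = [("v1", "v2"), ("v2", "v3"), ("v3", "v1")]

# iota^+_{v,e} = 1 iff v is the terminal endpoint of e; iota^-_{v,e} analogously
I_plus  = [[1 if v == term else 0 for (init, term) in edges] for v in vertices]
I_minus = [[1 if v == init else 0 for (init, term) in edges] for v in vertices]

# signed incidence matrix, eq. (2.3)
I = [[p - m for p, m in zip(rp, rm)] for rp, rm in zip(I_plus, I_minus)]

# one initial and one terminal endpoint per edge: columns of I sum to zero
assert all(sum(col) == 0 for col in zip(*I))
```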

Unlike in our earlier work [28], our aim is to develop a setting that will eventually allow us to impose dynamic boundary conditions on a subset of the vertex set \({\mathsf {V}}\). Ideas that go back to [1, 3, 6] suggest studying the relevant evolution equation as a Cauchy problem on a larger Hilbert space. The necessary formalism can be introduced as follows.

Assumption 2.3

For each \({\mathsf {v}}\in {\mathsf {V}}\), the following holds.

(1) \(Y^{(d)}_{\mathsf {v}}\subset Y_{\mathsf {v}}\) are subspaces of \({\mathbb {C}}^{k_{\mathsf {v}}}\);

(2) \(B_{\mathsf {v}}:Y_{\mathsf {v}}\rightarrow Y^{(d)}_{\mathsf {v}}\) is a linear operator;

(3) \(C_{\mathsf {v}}\) is a linear operator on \(Y^{(d)}_{\mathsf {v}}\);

(4) \(Q_{\mathsf {v}}\) is a Hermitian and positive definite operator on \({Y}^{(d)}_{\mathsf {v}}\).

We stress that the assumptions on \(Q_{\mathsf {e}}\) and \(Q_{\mathsf {v}}\) are structurally different. While, given a system of differential equations, we can only study it by means of the theory presented in this paper if we are able to find suitable Friedrichs symmetrizers \(Q_{\mathsf {e}}\) making \(Q_{\mathsf {e}}M_{\mathsf {e}}\) Hermitian, in the following we are free to choose \(Q_{\mathsf {v}}\) as we wish. The “lazy” choice of \(Q_{\mathsf {v}}={\mathbb {I}}\) is always allowed, but the main results in Sect. 3 show that it pays off to pick \(Q_{\mathsf {v}}\) tailored to enforce energy conservation or decay.

With these objects, we set

$$\begin{aligned} {\mathbf {L}}^2({\mathcal {G}}):=\bigoplus _{{\mathsf {e}}\in {\mathsf {E}}} L^2(0,\ell _{\mathsf {e}})^{k_{\mathsf {e}}}\qquad \hbox {and}\qquad Y^{(d)}:=\bigoplus \limits _{{\mathsf {v}}\in {\mathsf {V}}} Y^{(d)}_{\mathsf {v}}\end{aligned}$$

and introduce the Hilbert space

$$\begin{aligned} {{\mathbf {L}}}^2_{d}({\mathcal {G}}):={\mathbf {L}}^2({\mathcal {G}})\oplus Y^{(d)}, \end{aligned}$$

equipped with the inner product

$$\begin{aligned} \begin{aligned}&\left( \begin{pmatrix} u \\ {\mathsf {x}}\end{pmatrix}, \begin{pmatrix} v \\ {\mathsf {y}}\end{pmatrix}\right) _d := \sum _{{\mathsf {e}}\in {\mathsf {E}}}\int _0^{\ell _{\mathsf {e}}} Q_{\mathsf {e}}(x)u_{\mathsf {e}}(x)\cdot {\overline{v}}_{\mathsf {e}}(x)\ dx+\sum _{{\mathsf {v}}\in {\mathsf {V}}} Q_{\mathsf {v}}{\mathsf {x}}_{\mathsf {v}}\cdot {\bar{{\mathsf {y}}}}_{\mathsf {v}},\\&\qquad u,v\in {\mathbf {L}}^2({\mathcal {G}}),\ {\mathsf {x}},{\mathsf {y}}\in {Y^{(d)}}, \end{aligned} \end{aligned}$$
(2.4)

which is equivalent to the canonical one. This is the function space setup we are going to use to deal with dynamic boundary conditions.

We stress that we are not assuming \(B_{\mathsf {v}}\) to be surjective; hence \({{\,\mathrm{Ran}\,}}B_{\mathsf {v}}\) need not coincide with \(Y^{(d)}_{\mathsf {v}}\). Accordingly, we split up \(Y^{(d)}_{\mathsf {v}}\) as

$$\begin{aligned} Y^{(d)}_{\mathsf {v}}={{\,\mathrm{Ran}\,}}B_{\mathsf {v}}\oplus {{\,\mathrm{Ker}\,}}B_{\mathsf {v}}^*, \end{aligned}$$
(2.5)

where the sum is orthogonal with respect to the inner product of \(Y^{(d)}_{\mathsf {v}}\) induced by the Euclidean inner product of \({\mathbb {C}}^{k_{\mathsf {v}}}\). We shall denote by \(P_{\mathsf {v}}^{(d)}\) (resp., \(P_{\mathsf {v}}^{(d,0)}\)) the orthogonal projector of \({\mathbb {C}}^{k_{\mathsf {v}}}\) onto \(Y^{(d)}_{\mathsf {v}}\) (resp. of \(Y^{(d)}_{\mathsf {v}}\) onto \({{\,\mathrm{Ker}\,}}B_{\mathsf {v}}^*\)), of course with respect to said inner product. In the same spirit, if U is a vector space included into \({\mathbb {C}}^{k_{\mathsf {v}}}\) (resp. \(Y_{\mathsf {v}}\)), we denote by \(U^{\perp }\) (resp. \(U^{\perp _y}\)) its orthogonal complement in \({\mathbb {C}}^{k_{\mathsf {v}}}\) (resp. \(Y_{\mathsf {v}}\)) with respect to said inner product.
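The splitting (2.5) can be illustrated on a small example; the following sketch uses a hypothetical boundary operator \(B_{\mathsf {v}}\) with \(Y_{\mathsf {v}}=Y^{(d)}_{\mathsf {v}}={\mathbb {R}}^2\) (real coefficients for simplicity), for which \({{\,\mathrm{Ran}\,}}B_{\mathsf {v}}\) and \({{\,\mathrm{Ker}\,}}B_{\mathsf {v}}^*\) can be computed by hand.

```python
# Hypothetical boundary operator B on Y_v = Y_v^(d) = R^2, B = [[1, 1], [0, 0]]:
# Ran B = span{(1, 0)} and Ker B^* = span{(0, 1)}, realizing the orthogonal
# splitting Y_v^(d) = Ran B (+) Ker B^* of eq. (2.5).
B = [[1.0, 1.0], [0.0, 0.0]]
Bt = [[B[j][i] for j in range(2)] for i in range(2)]  # B^* (transpose, real case)

ran_B = [1.0, 0.0]    # spans Ran B
ker_Bt = [0.0, 1.0]   # candidate spanning vector of Ker B^*

# ker_Bt is indeed annihilated by B^*
assert all(sum(Bt[i][j] * ker_Bt[j] for j in range(2)) == 0.0 for i in range(2))
# and it is orthogonal to Ran B, so the two lines together span Y_v^(d)
assert sum(x * y for x, y in zip(ran_B, ker_Bt)) == 0.0
```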

3 Well-posedness of systems with dynamic vertex conditions

Inspired by the discussion in [10, § 8.2], where time-dependent transmission conditions for the 1D Maxwell’s equation are derived by methods of asymptotic analysis, we are going to introduce an abstract framework in order to investigate well-posedness of (1.1) under general transmission conditions of dynamic type.

We first introduce the linear and continuous operators \({\mathcal A}\) and \({\mathcal {B}}\) from

$$\begin{aligned} D_{\max }:= \bigoplus _{{\mathsf {e}}\in {\mathsf {E}}} H^1(0,\ell _{\mathsf {e}})^{k_{\mathsf {e}}} \end{aligned}$$

to \({\mathbf {L}}^2({\mathcal {G}})\) and \(Y^{(d)}\), respectively, by

$$\begin{aligned} \begin{aligned} ({{\mathcal {A}}} u)_{\mathsf {e}}&:= M_{\mathsf {e}}u'_{\mathsf {e}}+ N_{\mathsf {e}}u_{\mathsf {e}}, \quad {\mathsf {e}}\in {\mathsf {E}}, \\ ({\mathcal {B}} u)_{\mathsf {v}}&:= B_{\mathsf {v}}\gamma _{\mathsf {v}}(u), \quad {\mathsf {v}}\in {\mathsf {V}}, \end{aligned} \end{aligned}$$

as well as the operator \({\mathcal {C}}\) on \(Y^{(d)}\) defined by

$$\begin{aligned} ({\mathcal {C}}{\mathsf {x}})_{\mathsf {v}}:=C_{\mathsf {v}}{\mathsf {x}}_{\mathsf {v}},\quad {\mathsf {v}}\in {\mathsf {V}}, \end{aligned}$$

and study the operator

$$\begin{aligned} {{\mathbb {A}}}:=\begin{pmatrix} {\mathcal {A}} &{} 0\\ {\mathcal {B}} &{} { {\mathcal {C}}} \end{pmatrix}, \end{aligned}$$
(3.1)

with domain

$$\begin{aligned} D({\mathbb {A}}):=\left\{ \begin{pmatrix} u \\ {\mathsf {x}}\end{pmatrix}\in D_{\max }\oplus Y^{(d)}: \gamma _{\mathsf {v}}( u )\in Y_{\mathsf {v}}\hbox { and }{\mathsf {x}}_{\mathsf {v}}=P^{(d)}_{\mathsf {v}}\gamma _{\mathsf {v}}( u ) \hbox { for all } {\mathsf {v}}\in {\mathsf {V}}\right\} . \end{aligned}$$
(3.2)

The present setting is a strict generalization of the context discussed in our previous investigation [28], which corresponds to taking \(Y^{(d)}_{\mathsf {v}}={{\,\mathrm{Ker}\,}}B_{\mathsf {v}}^*= \{0\}\) and \({{\,\mathrm{Ker}\,}}B_{\mathsf {v}}=Y_{\mathsf {v}}\) for all \({\mathsf {v}}\in {\mathsf {V}}\). In our main well-posedness results there ([28, Thm. 3.7 and Thm. 4.1]), we had to assume each \(Y_{\mathsf {v}}\) to be a subspace of the null or nonpositive isotropic cone of the quadratic form

$$\begin{aligned} q_{\mathsf {v}}(\xi ):=T_{\mathsf {v}}\xi \cdot {\bar{\xi }},\qquad \xi \in {\mathbb {C}}^{k_{\mathsf {v}}}, \end{aligned}$$
(3.3)

i.e., \(q_{\mathsf {v}}(\xi )\) to be identically zero or nonpositive for all \(\xi \in Y_{\mathsf {v}}\) and all \({\mathsf {v}}\in {\mathsf {V}}\) (see [28, App. C] for more details), in order to control the boundary terms that arise from integration by parts when checking dissipativity of the relevant operator \({\mathcal {A}}\). In the present context, these conditions have to be adapted. More precisely, the definition of \({\mathbb {A}}\) and computations analogous to those at the beginning of [28, §3] show that for any \({\mathbb {u}} :={u \atopwithdelims (){\mathsf {x}}} \in D({\mathbb {A}})\),

$$\begin{aligned} \begin{aligned} \Re \left( {\mathbb {A}} {\mathbb {u}}, {\mathbb {u}}\right) _d&= \Re \sum _{{\mathsf {e}}\in {\mathsf {E}}}\int _0^{\ell _{\mathsf {e}}} \left( Q_{\mathsf {e}}N_{\mathsf {e}}u_{\mathsf {e}}\cdot {\bar{u}}_{\mathsf {e}}\right) \, dx -\frac{1}{2}\sum _{{\mathsf {e}}\in {\mathsf {E}}}\int _0^{\ell _{\mathsf {e}}} \left( Q_{\mathsf {e}}M_{\mathsf {e}}\right) ' u_{\mathsf {e}}\cdot {\bar{u}}_{\mathsf {e}}\,dx \\&\quad +\frac{1}{2} \sum _{{\mathsf {v}}\in {\mathsf {V}}} T_{\mathsf {v}}\gamma _{\mathsf {v}}(u)\cdot \gamma _{\mathsf {v}}({\bar{u}}) \\&\quad + \Re \sum _{{\mathsf {v}}\in {\mathsf {V}}} \left( Q_{\mathsf {v}}\left( B_{\mathsf {v}}+C_{\mathsf {v}}P^{(d)}_{\mathsf {v}}\right) \gamma _{\mathsf {v}}(u) \cdot P^{(d)}_{\mathsf {v}}\gamma _{\mathsf {v}}({\bar{u}}) \right) . \end{aligned} \end{aligned}$$
(3.4)

Rearranging the terms and using the fact that

$$\begin{aligned} Q_{\mathsf {v}}B_{\mathsf {v}}\gamma _{\mathsf {v}}(u)\cdot P^{(d)}_{\mathsf {v}}\gamma _{\mathsf {v}}({\bar{u}}) = P^{(d)}_{\mathsf {v}}Q_{\mathsf {v}}B_{\mathsf {v}}\gamma _{\mathsf {v}}(u)\cdot \gamma _{\mathsf {v}}({\bar{u}}) =Q_{\mathsf {v}}B_{\mathsf {v}}\gamma _{\mathsf {v}}(u)\cdot \gamma _{\mathsf {v}}({\bar{u}}), \end{aligned}$$
(3.5)

since \(Q_{\mathsf {v}}\) maps into \(Y^{(d)}_{\mathsf {v}}\), we obtain

$$\begin{aligned} \begin{aligned} \Re \left( {\mathbb {A}} {\mathbb {u}}, {\mathbb {u}}\right) _d&= \frac{1}{2}\sum _{{\mathsf {e}}\in {\mathsf {E}}}\int _0^{\ell _{\mathsf {e}}} \left( Q_{\mathsf {e}}N_{\mathsf {e}}+N_{\mathsf {e}}^*Q_{\mathsf {e}}-(Q_{\mathsf {e}}M_{\mathsf {e}})'\right) u_{\mathsf {e}}\cdot {\bar{u}}_{\mathsf {e}}\,dx \\&\quad + \frac{1}{2} \sum _{{\mathsf {v}}\in {\mathsf {V}}} (Q_{\mathsf {v}}C_{\mathsf {v}}+ C^*_{\mathsf {v}}Q_{\mathsf {v}}){\mathsf {x}}_{\mathsf {v}}\cdot {\bar{{\mathsf {x}}}}_{\mathsf {v}}+\frac{1}{2} \sum _{{\mathsf {v}}\in {\mathsf {V}}} (T_{\mathsf {v}}+Q_{\mathsf {v}}B_{\mathsf {v}}+B_{\mathsf {v}}^*Q_{\mathsf {v}}) \gamma _{\mathsf {v}}(u)\cdot \gamma _{\mathsf {v}}({\bar{u}}). \end{aligned} \end{aligned}$$
(3.6)

We hence have two boundary terms: one in \(Y^{(d)}_{\mathsf {v}}\) and one in the whole of \(Y_{\mathsf {v}}\).

As in [28, §3], the maximality property of \(\pm {\mathbb {A}}\) relies on a basis property of some specific vectors of \({\mathbb {C}}^k\). We first need to introduce some notations: we write \(I_{\mathsf {v}}:=\{1,2,\ldots , {\dim Y_{\mathsf {v}}^{\perp }}\}\), \(J_{\mathsf {v}}^{(R)}:=\{1,2,\ldots , \dim {{\,\mathrm{Ran}\,}}B_{\mathsf {v}}\}\), \(J_{\mathsf {v}}^{(K)}:=\{1,2,\ldots , \dim {{\,\mathrm{Ker}\,}}B_{\mathsf {v}}^*\}\) and fix bases \(\{{\mathsf {w}}^{({\mathsf {v}}, i)}\}_{i\in I_{\mathsf {v}}}\), \(\{{\mathsf {y}}^{({\mathsf {v}}, j)}\}_{j\in J_{\mathsf {v}}^{(R)}}\), \(\{ {\mathsf {w}}_{KB^*}^{({\mathsf {v}}, l)}\}_{l\in J_{\mathsf {v}}^{(K)}}\) of the subspaces \(Y_{\mathsf {v}}^{\perp }\), \({{\,\mathrm{Ran}\,}}B_{\mathsf {v}}\), and \({{\,\mathrm{Ker}\,}}B_{\mathsf {v}}^*\), respectively. Furthermore, let \( {\mathsf {w}}_{RB^*}^{({\mathsf {v}}, j)}:=B_{\mathsf {v}}^{*} {\mathsf {y}}^{({\mathsf {v}}, j)}, j\in J_{\mathsf {v}}^{(R)}. \) Note that

$$\begin{aligned} {{\,\mathrm{span}\,}}\{{{\mathsf {w}}_{RB^*}^{({\mathsf {v}}, j)}}\}_{{j\in J_{\mathsf {v}}^{(R)}}} ={{\,\mathrm{Ran}\,}}B_{\mathsf {v}}^*\subset Y_{\mathsf {v}}\end{aligned}$$
(3.7)

and \(\dim {{\,\mathrm{Ran}\,}}B_{\mathsf {v}}= \dim {{\,\mathrm{Ran}\,}}B_{\mathsf {v}}^*\). Finally, we introduce the space

$$\begin{aligned} Z_{\mathsf {v}}:={Y_{\mathsf {v}}^{\perp } \oplus \left( {{\,\mathrm{Ran}\,}}B_{\mathsf {v}}^*+ {{\,\mathrm{Ker}\,}}B_{\mathsf {v}}^*\right) }\subset {\mathbb {C}}^{k_{\mathsf {v}}} \end{aligned}$$
(3.8)

which is spanned by the set of vectors

$$\begin{aligned} {\mathcal {W}}_{\mathsf {v}}:=\{{\mathsf {w}}^{({\mathsf {v}}, i)} : i\in I_{\mathsf {v}}\} \cup \{ {{\mathsf {w}}_{RB^*}^{({\mathsf {v}}, j)}} : j\in J_{\mathsf {v}}^{(R)}\}\cup \{{\mathsf {w}}_{KB^*}^{({\mathsf {v}}, l)} : l\in J_{\mathsf {v}}^{(K)} \}. \end{aligned}$$
(3.9)

The choice of this space is guided by the maximality argument for the operator \({\mathbb {A}}\); see the proof of Theorem 3.3.

Any element \({\mathsf {w}}\in {\mathbb {C}}^{k_{\mathsf {v}}}\) can be identified with a vector \(({\mathsf {w}}_{\mathsf {e}})_{{\mathsf {e}}\in {\mathsf {E}}_{\mathsf {v}}}\) and we denote by \(\widetilde{\mathsf {w}}\in {\mathbb {C}}^k\) its extension to the whole set of edges, namely,

$$\begin{aligned} {\widetilde{{\mathsf {w}}}}_{\mathsf {e}}:= \left\{ \begin{array}{ll} {\mathsf {w}}_{\mathsf {e}}, &{}\quad \hbox { if } {\mathsf {e}}\in {\mathsf {E}}_{\mathsf {v}},\\ 0, &{}\quad \hbox { else. } \end{array} \right. \end{aligned}$$
(3.10)
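The zero-extension (3.10) acts blockwise, edge by edge; a minimal sketch with hypothetical sizes \(k_{\mathsf {e}}\):

```python
# Hypothetical graph data: all edges with their sizes k_e, in a fixed order.
k_e = {"e1": 2, "e2": 1}

def extend(w_blocks, incident):
    """Zero-extension (3.10): w = (w_e)_{e in E_v} in C^{k_v}, padded to C^k."""
    return [x for e in k_e
              for x in (w_blocks[e] if e in incident else [0.0] * k_e[e])]

# a vertex incident only to e1: the e2-block of the extension is zero
w_tilde = extend({"e1": [1.0, 2.0]}, incident={"e1"})
assert w_tilde == [1.0, 2.0, 0.0]
```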

In the same way, each coordinate of an element of a subset \(U\subset {\mathbb {C}}^{k_{\mathsf {v}}}\) corresponds to some \({\mathsf {e}}\in {\mathsf {E}}_{{\mathsf {v}}}\) and, as above, we can extend these sets to \({\mathbb {C}}^k\) by setting a 0 in each coordinate corresponding to \({\mathsf {e}}\) whenever \({\mathsf {e}}\notin {\mathsf {E}}_{\mathsf {v}}\). We denote these extensions by \({\widetilde{U}}\subset {\mathbb {C}}^k\). Using this notation, we will assume that

$$\begin{aligned} \text {the set }\widetilde{{\mathcal {W}}}:=\bigcup _{{\mathsf {v}}\in {\mathsf {V}}} \widetilde{{\mathcal {W}}}_{\mathsf {v}}\text { is a basis of }{\mathbb {C}}^{k}. \end{aligned}$$
(3.11)
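Condition (3.11) amounts to a rank condition on the vectors of \(\widetilde{{\mathcal {W}}}\). The following sketch checks it on hypothetical toy data: a single edge with \(k_{\mathsf {e}}=k=2\), a stationary condition at one endpoint and one dynamic condition, with surjective \(B\), at the other.

```python
# Hypothetical toy data: a single edge e with k_e = k = 2, endpoints v1, v2.
# At v1 (stationary): Y_{v1} = span{(1, 0)}, so Y_{v1}^perp contributes (0, 1).
# At v2 (dynamic): Y_{v2} = C^2, B_{v2} has matrix [1, 1] and maps onto a
# one-dimensional Y_{v2}^(d), so Ran B_{v2}^* is spanned by (1, 1) and
# Ker B_{v2}^* = {0}.
W = [[0.0, 1.0],   # w^{(v1,1)}, spanning Y_{v1}^perp
     [1.0, 1.0]]   # w_{RB*}^{(v2,1)} = B_{v2}^* y

# with one edge the tilde-extension is trivial, so (3.11) holds iff the
# two rows are linearly independent, i.e., the determinant is nonzero
det = W[0][0] * W[1][1] - W[0][1] * W[1][0]
assert det != 0
```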

Remark 3.1

Let us mention two special cases in which condition (3.11) can be reformulated in terms of a dimension equation.

(1) First, note that in the case of only stationary boundary conditions (i.e., when \(Y^{(d)}_{\mathsf {v}}=\{0\}\) and hence \({{\,\mathrm{Ran}\,}}B_{\mathsf {v}}= {{\,\mathrm{Ran}\,}}B_{\mathsf {v}}^*= {{\,\mathrm{Ker}\,}}B_{\mathsf {v}}^*= \{0\}\) and \({{\,\mathrm{Ker}\,}}B_{\mathsf {v}}=Y_{\mathsf {v}}\)), we have \(J_{\mathsf {v}}^{(R)}=J^{(K)}_{\mathsf {v}}=\emptyset \) and \(Z_{\mathsf {v}}=Y_{\mathsf {v}}^{\perp }\). By [28, Lemma 3.5], the set \(\widetilde{{\mathcal {W}}}=\{{\widetilde{{\mathsf {w}}}}^{({\mathsf {v}}, i)}\}_{i\in I_{\mathsf {v}}, {\mathsf {v}}\in {\mathsf {V}}}\) is a basis of \({\mathbb {C}}^{k}\) if and only if

$$\begin{aligned} \dim \sum _{{\mathsf {v}}\in {\mathsf {V}}} \widetilde{Y_{{\mathsf {v}}}^\perp }= k = \sum _{{\mathsf {v}}\in {\mathsf {V}}} \dim Y_{\mathsf {v}}. \end{aligned}$$

(2) Let us now more generally consider the case of dynamic boundary conditions with surjective operator \(B_{\mathsf {v}}\). Then, \(Z_{\mathsf {v}}\) reduces to the direct sum

$$\begin{aligned} Z_{\mathsf {v}}= Y_{\mathsf {v}}^{\perp } \oplus {{\,\mathrm{Ran}\,}}B_{\mathsf {v}}^*\end{aligned}$$
(3.12)

and \(J^{(K)}_{\mathsf {v}}=\emptyset \). In this case, \(\widetilde{{\mathcal {W}}}_{\mathsf {v}}\) is a basis of \( {{\widetilde{Z}}}_{\mathsf {v}}\) and, by the same reasoning as in the proof of [28, Lemma 3.5], we see that (3.11) holds if and only if

$$\begin{aligned} \dim \sum _{{\mathsf {v}}\in {\mathsf {V}}} {\widetilde{Z}}_{{\mathsf {v}}} = k= \sum _{{\mathsf {v}}\in {\mathsf {V}}} \dim Z^\perp _{\mathsf {v}}. \end{aligned}$$

Observe that \(Z^\perp _{\mathsf {v}}= Y_{\mathsf {v}}\cap ({{\,\mathrm{Ran}\,}}B^*_{\mathsf {v}})^\perp = {{\,\mathrm{Ker}\,}}B_{\mathsf {v}}\). By the surjectivity of \(B_{\mathsf {v}}\), we further have \(\dim {{\,\mathrm{Ker}\,}}B_{\mathsf {v}}= \dim Y_{\mathsf {v}}- \dim Y_{\mathsf {v}}^{(d)}\) and thus (3.11) is equivalent to

$$\begin{aligned} { \dim \sum _{{\mathsf {v}}\in {\mathsf {V}}} {\widetilde{Z}}_{{\mathsf {v}}} = k= \sum _{{\mathsf {v}}\in {\mathsf {V}}} \left( \dim Y_{\mathsf {v}}- \dim Y_{\mathsf {v}}^{(d)} \right) . }\end{aligned}$$
(3.13)
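The bookkeeping in (3.13) can be verified on the same kind of toy data (all of it hypothetical): one edge with \(k=2\), a stationary vertex \({\mathsf {v}}_1\) and a dynamic vertex \({\mathsf {v}}_2\) with surjective \(B_{{\mathsf {v}}_2}\).

```python
# Hypothetical toy data: one edge with k = 2; stationary v1 with
# dim Y_{v1} = 1 and Y_{v1}^(d) = {0}; dynamic v2 with Y_{v2} = C^2,
# dim Y_{v2}^(d) = 1 and B_{v2} surjective.
k = 2
dim_Y = {"v1": 1, "v2": 2}
dim_Yd = {"v1": 0, "v2": 1}

# right-hand equality in (3.13): sum over v of (dim Y_v - dim Y_v^(d)),
# which by rank-nullity equals the sum of dim Ker B_v
rhs = sum(dim_Y[v] - dim_Yd[v] for v in dim_Y)

# left-hand equality: Z_{v1} = Y_{v1}^perp = span{(0, 1)} and
# Z_{v2} = Ran B_{v2}^* = span{(1, 1)}; together they span C^2
Z = [[0.0, 1.0], [1.0, 1.0]]
det = Z[0][0] * Z[1][1] - Z[0][1] * Z[1][0]
dim_sum_Z = 2 if det != 0 else 1

assert dim_sum_Z == k == rhs   # (3.13) holds for this toy configuration
```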

Remark 3.2

Let us reverse our perspective and assume that we are interested in deriving new well-posed systems from known ones, rather than in modeling problems with dynamic conditions stemming from applications; this is similar to the goal of extension theory in mathematical physics, where one is interested in describing as many realizations of a given Hamiltonian as possible, subject to the condition that such realizations still govern a well-behaved PDE. The condition in (3.13) shows that, in spite of superficial similarities, the present situation is different from that discussed in [23] in the context of parabolic equations. Roughly speaking, the findings in [23] show that, as soon as a choice of a family of spaces \(Y_{\mathsf {v}}\), \({\mathsf {v}}\in {\mathsf {V}}\), defines boundary conditions leading to well-posedness, each choice of subspaces \(Y^{(d)}_{\mathsf {v}}\) of \(Y_{\mathsf {v}}\), \({\mathsf {v}}\in {\mathsf {V}}\), will lead to a new well-posed system. As a matter of fact, modifying a well-posed hyperbolic system in order to allow for dynamic vertex conditions is a delicate issue: we will see in Sect. 5 that, starting from any well-posed hyperbolic system (say, taken from [28, § 5]) driven by the operator \({\mathcal {A}}\) with stationary conditions

$$\begin{aligned} \gamma _{\mathsf {v}}(u)\in Y^{(0)}_{\mathsf {v}}\end{aligned}$$

encoded in a space \(Y^{(0)}_{\mathsf {v}}\), switching to a dynamic setting requires carefully enlarging these spaces to find suitable \(Y_{\mathsf {v}}\) while at the same time allowing for nontrivial \(Y^{(d)}_{\mathsf {v}}\), if we want (3.13) to be satisfied.

The next results extend [28, Thm. 3.7 and Thm. 4.1] to the case where both dynamic and stationary conditions are allowed. We adopt the terminology of [28, Appendix C]. Extending the statement to the case of \(\lambda \ne 0\) might look superfluous, but it will prove useful when discussing concrete systems of PDEs, cf. Sect. 5.3.

Theorem 3.3

For all \({\mathsf {v}}\in {\mathsf {V}}\), let (3.11) hold and let moreover \(Y_{\mathsf {v}}\) be a subspace of the nonpositive isotropic cone of the quadratic form associated with \(T_{\mathsf {v}}+Q_{\mathsf {v}}B_{\mathsf {v}}+B_{\mathsf {v}}^*Q_{\mathsf {v}}-\lambda P_{\mathsf {v}}^{(d)} Q_{\mathsf {v}}P_{\mathsf {v}}^{(d)}\) for some \(\lambda \ge 0\). Then, \({\mathbb {A}}\) generates a strongly continuous semigroup on \({\mathbf {L}}^2_d({\mathcal {G}})\).

Proof

First of all, let us observe that \({\mathbb {A}}\) is densely defined by [41, Lemma 5.6]. As \({\mathbb {A}}\) arises by the bounded perturbation \((u,{\mathsf {x}})^\top \mapsto (Nu , {\mathcal {C}}{\mathsf {x}}+ {\mathcal {P}}^{(d,0)}{\mathsf {x}})^\top \), the claim will follow if we can prove that the operator matrix

$$\begin{aligned} {\mathbb {A}}_0:=\begin{pmatrix} M\frac{d}{dx} &{} 0\\ {\mathcal {B}} &{} -{\mathcal {P}}^{(d,0)} \end{pmatrix},\qquad D({\mathbb {A}}_0):=D({\mathbb {A}}), \end{aligned}$$

with \(( {{\mathcal {P}}}^{(d,0)}{\mathsf {x}})_{\mathsf {v}}= P^{(d,0)}_{\mathsf {v}}{\mathsf {x}}_{\mathsf {v}}\), which corresponds to \({\mathbb {A}}\) with the choice \(N=0\) and \({{\mathcal {C}}=- {\mathcal {P}}^{(d,0)}}\), is quasi-m-dissipative.

Formula (3.6) and the assumptions on the matrices \(Q_{\mathsf {e}}\) and \(M_{\mathsf {e}}\) show that \({\mathbb {A}}_0-\lambda {\mathbb {I}}\) is dissipative on \(D({\mathbb {A}})\); let us check maximality.

To this aim, for any \({f}\in {\mathbf {L}}^2({\mathcal {G}})\) and any \({\mathsf {g}}\in Y^{(d)}\), we first look for a solution \({\mathbb {u}}:=(u,{\mathsf {x}})^\top \in D({\mathbb {A}})\) of

$$\begin{aligned} {\mathbb {A}}_0 (u,{\mathsf {x}})^\top = ({f},{\mathsf {g}})^\top , \end{aligned}$$

namely, a solution of

$$\begin{aligned} M_{\mathsf {e}}(x)u'_{\mathsf {e}}(x)={f}_{\mathsf {e}}(x)\quad \hbox {for }x\in (0,\ell _{\mathsf {e}})\hbox { and all } {\mathsf {e}}\in {\mathsf {E}}, \end{aligned}$$

and of

$$\begin{aligned} B_{\mathsf {v}}\gamma _{\mathsf {v}}( u )-P_{\mathsf {v}}^{(d,0)} {\mathsf {x}}_{\mathsf {v}}={\mathsf {g}}_{\mathsf {v}}\quad \text {for all }{\mathsf {v}}\in {\mathsf {V}}. \end{aligned}$$
(3.14)

Such a solution is given by

$$\begin{aligned} u_{\mathsf {e}}(x)=K_{\mathsf {e}}+u_{\mathsf {e}}^{\mathrm{nh}}(x)\quad \hbox {for all } x\in [0,\ell _{\mathsf {e}}], {\mathsf {e}}\in {\mathsf {E}}, \end{aligned}$$

with \(K_{\mathsf {e}}\in {\mathbb {C}}^{k_{\mathsf {e}}}\) and where

$$\begin{aligned}u_{\mathsf {e}}^{\mathrm{nh}}(x)= \int _0^xM^{-1}_{\mathsf {e}}(y){f}_{\mathsf {e}}(y)\,dy\quad \hbox {for all } x\in [0,\ell _{\mathsf {e}}], {\mathsf {e}}\in {\mathsf {E}}. \end{aligned}$$

It remains to fix the vectors \(K_{\mathsf {e}}\). For that purpose, we recall (see [28, §3]) that the condition \(\gamma _{\mathsf {v}}(u)\in Y_{\mathsf {v}}\) at any vertex \({\mathsf {v}}\in {\mathsf {V}}\) is equivalent to

$$\begin{aligned} (K_{\mathsf {e}})_{{\mathsf {e}}\in {\mathsf {E}}}\cdot \overline{{\widetilde{{\mathsf {w}}}}^{({\mathsf {v}}, i)}}=-(u_{\mathsf {e}}^{\mathrm{nh}}({\mathsf {v}}))_{{\mathsf {e}}\in {\mathsf {E}}}\cdot \overline{{\widetilde{{\mathsf {w}}}}^{({\mathsf {v}}, i)}} \quad \hbox {for all } i\in I_{\mathsf {v}}. \end{aligned}$$
(3.15)

On the other hand, problem (3.14) is, by (2.5) and the definition of the bases, equivalent to

$$\begin{aligned} B_{\mathsf {v}}\gamma _{\mathsf {v}}( u )\cdot {\overline{ {\mathsf {y}}^{({\mathsf {v}}, j)}}}= & {} {\mathsf {g}}_{\mathsf {v}}\cdot {\overline{ {\mathsf {y}}^{({\mathsf {v}}, j)}}} \quad \text {for all } j\in {J_{\mathsf {v}}^{(R)}}, {\mathsf {v}}\in {\mathsf {V}},\\ - P_{\mathsf {v}}^{(d,0)}\gamma _{\mathsf {v}}( u )\cdot {\overline{{\mathsf {w}}_{KB^*}^{({\mathsf {v}}, l)}}}= & {} {\mathsf {g}}_{\mathsf {v}}\cdot {\overline{ {\mathsf {w}}_{KB^*}^{({\mathsf {v}}, l)}}} \quad \text {for all } l\in J_{\mathsf {v}}^{(K)}, {\mathsf {v}}\in {\mathsf {V}}, \end{aligned}$$

and hence to

$$\begin{aligned} (K_{\mathsf {e}})_{{\mathsf {e}}\in {\mathsf {E}}}\cdot \overline{{{\widetilde{{\mathsf {w}}}}_{RB^*}^{({\mathsf {v}}, j)}}}= & {} {\mathsf {g}}_{\mathsf {v}}\cdot {\overline{ {\widetilde{{\mathsf {y}}}}^{({\mathsf {v}}, j)}}} -(u_{\mathsf {e}}^{\mathrm{nh}}({\mathsf {v}}))_{{\mathsf {e}}\in {\mathsf {E}}}\cdot \overline{{\widetilde{\mathsf {w}}_{RB^*}^{({\mathsf {v}}, j)}}} \quad \text {for all } j\in {J_{\mathsf {v}}^{(R)}}, {\mathsf {v}}\in {\mathsf {V}},\nonumber \\ \end{aligned}$$
(3.16)
$$\begin{aligned} (K_{\mathsf {e}})_{{\mathsf {e}}\in {\mathsf {E}}}\cdot {\overline{{\widetilde{\mathsf {w}}_{KB^*}^{({\mathsf {v}}, l)}}}}= - {\mathsf {g}}_{\mathsf {v}}\cdot {\overline{{\widetilde{\mathsf {w}}_{KB^*}^{({\mathsf {v}}, l)}}}} -(u_{\mathsf {e}}^{\mathrm{nh}}({\mathsf {v}}))_{{\mathsf {e}}\in {\mathsf {E}}}\cdot {\overline{ {{\widetilde{{\mathsf {w}}}}_{KB^*}^{({\mathsf {v}}, l)}}} }\quad \text {for all } l\in J_{\mathsf {v}}^{(K)}, {\mathsf {v}}\in {\mathsf {V}}. \end{aligned}$$
(3.17)

By (3.11), it follows that (3.15)–(3.16)–(3.17) is a \(k\times k\) linear system in \((K_{\mathsf {e}})_{{\mathsf {e}}\in {\mathsf {E}}}\) that has a unique solution. This shows that the operator \({\mathbb {A}}_0\) is an isomorphism from \(D({\mathbb {A}})\) onto \({{\mathbf {L}}}^2_{d}({\mathcal {G}})\) and, in particular, it is closed. Hence, by dissipativity of \(\mathbb {A}_0-\lambda {{\mathbb {I}}}\), it is also quasi-m-dissipative. We conclude that \({\mathbb {A}}_0\), and hence also \({\mathbb {A}}\), generates a strongly continuous semigroup on \({\mathbf {L}}_d^2({\mathcal {G}})\). \(\square \)
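Numerically, once bases of the relevant trace spaces are fixed, solving (3.15)–(3.16)–(3.17) amounts to one call to a dense linear solver. The following is a minimal sketch with hypothetical toy data: here \(W\) stacks the conjugated test vectors into a \(k\times k\) matrix, so its invertibility plays the role of condition (3.11).

```python
import numpy as np

# Toy setup: the k unknown entries of (K_e)_{e in E}, stacked into one
# vector K.  Each row of W is one (conjugated) test vector from the
# left-hand sides of (3.15)-(3.17); rhs collects the right-hand sides.
rng = np.random.default_rng(0)
k = 3
W = rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k))
rhs = rng.standard_normal(k) + 1j * rng.standard_normal(k)

# Condition (3.11) corresponds to W having full rank k:
assert np.linalg.matrix_rank(W) == k

K = np.linalg.solve(W, rhs)   # the unique coefficient vector (K_e)
assert np.allclose(W @ K, rhs)
```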

Repeating the same argument for \(-{\mathbb {A}}\) yields the following.

Corollary 3.4

For all \({\mathsf {v}}\in {\mathsf {V}}\), let (3.11) hold and let moreover \(Y_{\mathsf {v}}\) be a subspace of the null isotropic cone of the quadratic form associated with \(T_{\mathsf {v}}+Q_{\mathsf {v}}B_{\mathsf {v}}+B_{\mathsf {v}}^*Q_{\mathsf {v}}\). Then, \({\mathbb {A}}\) generates a strongly continuous group on \({\mathbf {L}}^2_d({\mathcal {G}})\).

Remark 3.5

Because \(\dim Y^{(d)}\le \dim Y\le 2k<\infty \), the compact embedding of each \(H^1(0,\ell _{\mathsf {e}})\) in \(L^2(0,\ell _{\mathsf {e}})\), and hence of \(\bigoplus _{{\mathsf {e}}\in {\mathsf {E}}}H^1(0,\ell _{\mathsf {e}})\) in \(\bigoplus _{{\mathsf {e}}\in {\mathsf {E}}}L^2(0,\ell _{\mathsf {e}})\), directly implies that \({\mathbb {A}}\) has compact resolvent, regardless of the imposed transmission conditions at the vertices.

Remark 3.6

(1) Formula (3.6) shows that, in order to obtain dissipativity (rather than mere quasi-dissipativity) of \({\mathbb {A}}\) on \({\mathbf {L}}^2_d({\mathcal {G}})\), and hence generation of a contractive semigroup, the assumptions of Theorem 3.3 must be complemented by the following:

  • \(Q_{\mathsf {e}}(x)N_{\mathsf {e}}(x)+N_{\mathsf {e}}(x)^*Q_{\mathsf {e}}(x)-(Q_{\mathsf {e}}M_{\mathsf {e}})'(x)\) is negative semi-definite, for all \({\mathsf {e}}\in {\mathsf {E}}\) and a.e. \(x\in (0,\ell _{\mathsf {e}})\); and

  • \(Y^{(d)}_{\mathsf {v}}\) is for all \({\mathsf {v}}\in {\mathsf {V}}\) a subspace of the negative isotropic cone of the quadratic form associated with \(Q_{\mathsf {v}}C_{\mathsf {v}}+C^*_{\mathsf {v}}Q_{\mathsf {v}}\).

(2) If, additionally to the assumptions of Corollary 3.4,

  • \(Q_{\mathsf {e}}(x)N_{\mathsf {e}}(x)+N_{\mathsf {e}}(x)^*Q_{\mathsf {e}}(x)=(Q_{\mathsf {e}}M_{\mathsf {e}})'(x)\), for all \({\mathsf {e}}\in {\mathsf {E}}\) and a.e. \(x\in (0,\ell _{\mathsf {e}})\); and

  • \(Y^{(d)}_{\mathsf {v}}\) is for all \({\mathsf {v}}\in {\mathsf {V}}\) a subspace of the null isotropic cone of the quadratic form associated with \(Q_{\mathsf {v}}C_{\mathsf {v}}+C^*_{\mathsf {v}}Q_{\mathsf {v}}\),

then \({\mathbb {A}}\) generates in fact a unitary group on \({\mathbf {L}}^2_d({\mathcal {G}})\).

In both cases, the quadratic form on \(Y^{(d)}_{\mathsf {v}}\) is considered with respect to the Euclidean inner product. Observe, however, that both contractivity and unitarity—hence decay or conservation of (an appropriate notion of) energy—hold of course, under the above assumptions, with respect to the equivalent norm of \({\mathbf {L}}^2(\mathcal G)\oplus Y^{(d)}\) defined in (2.4), which depends on the matrices \(Q_{\mathsf {e}}(x)\) and \(Q_{\mathsf {v}}\), \(x\in (0,\ell _{\mathsf {e}})\), \({\mathsf {e}}\in {\mathsf {E}}\), \({\mathsf {v}}\in {\mathsf {V}}\).
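For constant coefficients, the conditions in the two lists above reduce to finite-dimensional semi-definiteness checks that are easy to verify numerically. The following sketch uses toy matrices (all values are illustrative; for constant \(Q_{\mathsf {e}}\) and \(M_{\mathsf {e}}\), the derivative term \((Q_{\mathsf {e}}M_{\mathsf {e}})'\) vanishes).

```python
import numpy as np

# Edge condition: S := Q N + N* Q - (Q M)' must be negative semi-definite.
# With constant coefficients the derivative term (QM)' is zero.
Q = np.diag([1.0, 2.0])             # Hermitian, positive definite weight
N = np.array([[-1.0, 0.5],
              [0.0, -2.0]])
S = Q @ N + N.conj().T @ Q
assert np.all(np.linalg.eigvalsh(S) <= 1e-12)   # negative semi-definite

# Vertex condition: the quadratic form of Qv Cv + Cv* Qv must be
# nonpositive on Y^(d)_v (here a toy one-dimensional choice).
Qv = np.eye(2)
Cv = np.array([[-1.0, 1.0],
               [0.0, -1.0]])
F = Qv @ Cv + Cv.conj().T @ Qv
Yd = np.array([[1.0], [1.0]])       # span of (1, 1)^T
form_on_Yd = Yd.conj().T @ F @ Yd
assert np.all(np.real(form_on_Yd) <= 1e-12)
```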

It turns out that condition (3.11) is not satisfied in some relevant applications, see, e.g., Sect. 5.5. We therefore present a different approach that requires proving the dissipativity of both \({\mathbb {A}}\) and its adjoint \(\mathbb {A}^*\). To begin with, let us elaborate on some ideas presented in [28, §3] and describe \({\mathbb {A}}^*\).

Lemma 3.7

The adjoint of the operator \({\mathbb {A}}\) is given by

$$\begin{aligned} \begin{aligned} D({{\mathbb {A}}}^*)&=\left\{ \begin{pmatrix} v \\ {\mathsf {y}}\end{pmatrix}\in {{\mathbf {L}}}^2_{d}({\mathcal {G}}): v\in D_{\max } \hbox { such that} \begin{pmatrix} \gamma _{\mathsf {v}}(v) \\ {\mathsf {y}}_{\mathsf {v}}\end{pmatrix}\in {\mathbb {Y}}^*_{\mathsf {v}}\hbox { for all }{\mathsf {v}}\in {\mathsf {V}}\right\} ,\\ {{\mathbb {A}}}^*&=\begin{pmatrix} {{\mathcal {A}}}^*&{} 0\\ \widetilde{{\mathcal {B}}} &{} \widetilde{ {\mathcal {C}}} \end{pmatrix}, \end{aligned} \end{aligned}$$

where

$$\begin{aligned} \begin{aligned} ({{\mathcal {A}}}^*v)_{\mathsf {e}}&:=- M_{\mathsf {e}}v'_{\mathsf {e}}-Q_{\mathsf {e}}^{-1}\left( Q_{\mathsf {e}}M_{\mathsf {e}}\right) ' v_{\mathsf {e}}+Q_{\mathsf {e}}^{-1}N_{\mathsf {e}}^*Q_{\mathsf {e}}v_{\mathsf {e}},\qquad {\mathsf {e}}\in {\mathsf {E}}, \end{aligned} \end{aligned}$$

and

$$\begin{aligned} \begin{aligned} (\widetilde{{\mathcal {B}}} v)_{\mathsf {v}}&:= Q_{\mathsf {v}}^{-1}P_{{\mathsf {v}}}^{(d)} T_{\mathsf {v}}\gamma _{\mathsf {v}}( v), \quad {\mathsf {v}}\in {\mathsf {V}}, \\ (\widetilde{{\mathcal {C}}} v)_{\mathsf {v}}&:=Q_{\mathsf {v}}^{-1}P_{{\mathsf {v}}}^{(d)} B_{\mathsf {v}}^*Q_{\mathsf {v}}{\mathsf {y}}_{\mathsf {v}}+ Q_{\mathsf {v}}^{-1} C_{\mathsf {v}}^*Q_{\mathsf {v}}{\mathsf {y}}_{\mathsf {v}}, \quad {\mathsf {v}}\in {\mathsf {V}}, \end{aligned} \end{aligned}$$

and, finally, the subspace \({\mathbb {Y}}^*_{\mathsf {v}}\) of \({\mathbb {C}}^{k_{\mathsf {v}}}\oplus Y_{\mathsf {v}}^{(d)}\) is defined by

$$\begin{aligned} {\mathbb {Y}}^*_{\mathsf {v}}:={{\,\mathrm{Ker}\,}}\begin{pmatrix} P^{(d),\perp }_{\mathsf {v}}T_{\mathsf {v}}&P^{(d),\perp }_{\mathsf {v}}B^*_{\mathsf {v}}Q_{\mathsf {v}}\end{pmatrix}, \end{aligned}$$
(3.18)

where \(P_{{\mathsf {v}}}^{(d),\perp }\) is the orthogonal projector onto \((Y_{\mathsf {v}}^{(d)})^{\perp _y}\) with respect to the Euclidean inner product.

Proof

First, we notice that \(D({\mathbb {A}})\) is dense. Indeed, given \(\begin{pmatrix} g \\ {\mathsf {h}}\end{pmatrix}\in {{\mathbf {L}}}^2_{d}(\mathcal G)\), by the surjectivity of the trace mapping, there exists \(u\in D_{\max }\) such that

$$\begin{aligned} {\mathsf {h}}=P_{\mathsf {v}}^{(d)} \gamma _{\mathsf {v}}(u), \end{aligned}$$

and \(\gamma _{\mathsf {v}}(u)\in Y_{\mathsf {v}}\), for all \({\mathsf {v}}\in {\mathsf {V}}\). This in particular means that the pair \(\begin{pmatrix} u \\ {\mathsf {h}}\end{pmatrix}\in D({\mathbb {A}})\). Now, since \(g-u\in {{\mathbf {L}}}^2(\mathcal G)\), there exists a sequence of elements \(\varphi ^{(n)}\in \bigoplus _{{\mathsf {e}}\in {\mathsf {E}}} {{\mathcal {D}}}(0,\ell _{\mathsf {e}})^{k_{\mathsf {e}}}\) such that

$$\begin{aligned} \varphi ^{(n)}\rightarrow g-u \hbox { in } {{\mathbf {L}}}^2({\mathcal {G}}). \end{aligned}$$

Since \(\begin{pmatrix} \varphi ^{(n)} \\ 0\end{pmatrix}\) belongs trivially to \(D({\mathbb {A}})\), we get that \(\begin{pmatrix} u+\varphi ^{(n)} \\ {\mathsf {h}}\end{pmatrix}\) belongs to \(D({\mathbb {A}})\) and satisfies

$$\begin{aligned} \begin{pmatrix} u+\varphi ^{(n)} \\ {\mathsf {h}}\end{pmatrix} \rightarrow \begin{pmatrix} g \\ {\mathsf {h}}\end{pmatrix}\hbox { in } {{\mathbf {L}}}^2_d(\mathcal G). \end{aligned}$$

By definition, \(\begin{pmatrix} v \\ {\mathsf {y}}\end{pmatrix}\in {{\mathbf {L}}}^2_{d}({\mathcal {G}})\) belongs to \(D({\mathbb {A}}^*)\) if and only if there exists \(\begin{pmatrix} g \\ {\mathsf {h}}\end{pmatrix}\in {{\mathbf {L}}}^2_{d}({\mathcal {G}})\) such that

$$\begin{aligned} \left( {\mathbb {A}} \begin{pmatrix} u \\ {\mathsf {x}}\end{pmatrix} , \begin{pmatrix} v \\ {\mathsf {y}}\end{pmatrix}\right) _d=\left( \begin{pmatrix} u \\ {\mathsf {x}}\end{pmatrix}, \begin{pmatrix} g \\ {\mathsf {h}}\end{pmatrix}\right) _d\quad \hbox {for all } \begin{pmatrix} u \\ {\mathsf {x}}\end{pmatrix}\in D({\mathbb {A}}) \end{aligned}$$

and in such a case

$$\begin{aligned} {{\mathbb {A}}}^*v=\begin{pmatrix} g \\ {\mathsf {h}}\end{pmatrix}. \end{aligned}$$

Taking first \( {\mathsf {x}}=0\) and \(u_{\mathsf {e}}\in {\mathcal {D}}(0,\ell _{\mathsf {e}})\) (which yields a pair \(\begin{pmatrix} u \\ {\mathsf {x}}\end{pmatrix}\in D(\mathbb {A})\)), we find that

$$\begin{aligned} -Q_{\mathsf {e}}M_{\mathsf {e}}v'_{\mathsf {e}}-\left( Q_{\mathsf {e}}M_{\mathsf {e}}\right) ' v_{\mathsf {e}}+N_{\mathsf {e}}^*Q_{\mathsf {e}}v_{\mathsf {e}}=Q_{\mathsf {e}}g_{\mathsf {e}}\end{aligned}$$
(3.19)

holds in the distributional sense, hence v belongs to \(D_{\max }\). We can thus apply the identity

$$\begin{aligned} \begin{aligned} \left( {\mathcal {A}} u, v\right)&= \sum _{{\mathsf {e}}\in {\mathsf {E}}}\int _0^{\ell _{\mathsf {e}}} u_{\mathsf {e}}\cdot \overline{\left( -Q_{\mathsf {e}}M_{\mathsf {e}}v'_{\mathsf {e}}-\left( Q_{\mathsf {e}}M_{\mathsf {e}}\right) ' v_{\mathsf {e}}+N_{\mathsf {e}}^*Q_{\mathsf {e}}v_{\mathsf {e}}\right) } \,dx \\&\qquad + \sum _{{\mathsf {v}}\in {\mathsf {V}}} T_{\mathsf {v}}\gamma _{\mathsf {v}}(u)\cdot \gamma _{\mathsf {v}}(\bar{v}) \qquad \hbox {for all } u, v \in D_{\max } \end{aligned} \end{aligned}$$
(3.20)

(see the proof of [28, Lem. 3.10]). By (3.19), the definition of \({\mathbb {A}}\), and the inner product (2.4), we obtain

$$\begin{aligned}&\sum _{{\mathsf {v}}\in {\mathsf {V}}} T_{\mathsf {v}}\gamma _{\mathsf {v}}(u)\cdot \gamma _{\mathsf {v}}({\bar{v}}) +\sum _{{\mathsf {v}}\in {\mathsf {V}}} \left( Q_{\mathsf {v}}\left( {B_{\mathsf {v}}\gamma _{\mathsf {v}}(u) +C_{\mathsf {v}}{\mathsf {x}}_{\mathsf {v}}} \right) \cdot {\bar{{\mathsf {y}}}}_{\mathsf {v}}\right) \\&\quad =\sum _{{\mathsf {v}}\in {\mathsf {V}}} Q_{\mathsf {v}}{\mathsf {x}}_{\mathsf {v}}\cdot {\bar{{\mathsf {h}}}}_{\mathsf {v}}, \qquad \hbox {for all } \begin{pmatrix} u \\ {\mathsf {x}}\end{pmatrix}\in D({\mathbb {A}}). \end{aligned}$$

As \({\mathsf {x}}_{\mathsf {v}}=P_{\mathsf {v}}^{(d)} \gamma _{\mathsf {v}}(u)\), we further have

$$\begin{aligned}&\sum _{{\mathsf {v}}\in {\mathsf {V}}} T_{\mathsf {v}}\gamma _{\mathsf {v}}(u)\cdot \gamma _{\mathsf {v}}({\bar{v}}) +\sum _{{\mathsf {v}}\in {\mathsf {V}}} \left( Q_{\mathsf {v}}\left( B_{\mathsf {v}}+C_{\mathsf {v}}P^{(d)}_{\mathsf {v}}\right) \gamma _{\mathsf {v}}(u) \cdot {\bar{{\mathsf {y}}}}_{\mathsf {v}}\right) \\&\quad =\sum _{{\mathsf {v}}\in {\mathsf {V}}} Q_{\mathsf {v}}P_{\mathsf {v}}^{(d)} \gamma _{\mathsf {v}}(u)\cdot \bar{\mathsf {h}}_{\mathsf {v}}, \qquad \hbox {for all } { u\in D({\mathcal {A}}),} \end{aligned}$$

which we can equivalently write as

$$\begin{aligned} \sum _{{\mathsf {v}}\in {\mathsf {V}}} \gamma _{\mathsf {v}}(u)\cdot \overline{\left( T_{\mathsf {v}}\gamma _{\mathsf {v}}( v) + \left( B_{\mathsf {v}}^*+ C_{\mathsf {v}}^*\right) Q_{\mathsf {v}}{\mathsf {y}}_{\mathsf {v}}- Q_{\mathsf {v}}{\mathsf {h}}_{\mathsf {v}}\right) }=0, \qquad \hbox {for all }{ u\in D({\mathcal {A}}).} \end{aligned}$$

By the surjectivity of the trace mapping, since \(\gamma _{\mathsf {v}}(u)\) ranges over all of \(Y_{\mathsf {v}}\), we find that

$$\begin{aligned} P_{Y_{\mathsf {v}}} \left( T_{\mathsf {v}}\gamma _{\mathsf {v}}( v) + \left( B_{\mathsf {v}}^*+ C_{\mathsf {v}}^*\right) Q_{\mathsf {v}}{\mathsf {y}}_{\mathsf {v}}- Q_{\mathsf {v}}{\mathsf {h}}_{\mathsf {v}}\right) =0, \end{aligned}$$
(3.21)

where \(P_{Y_{\mathsf {v}}}\) is the orthogonal projector onto \(Y_{\mathsf {v}}\) with respect to the Euclidean inner product.

Since \(Y_{\mathsf {v}}=Y_{\mathsf {v}}^{(d)}\oplus (Y_{\mathsf {v}}^{(d)})^{\perp _y}\) (orthogonal sum), and since \(C_{\mathsf {v}}^*Q_{\mathsf {v}}{\mathsf {y}}_{\mathsf {v}}- Q_{\mathsf {v}}{\mathsf {h}}_{\mathsf {v}}\) belongs to \(Y_{\mathsf {v}}^{(d)}\), (3.21) is equivalent to

$$\begin{aligned} P_{{\mathsf {v}}}^{(d),\perp } \left( T_{\mathsf {v}}\gamma _{\mathsf {v}}( v) + B_{\mathsf {v}}^*Q_{\mathsf {v}}{\mathsf {y}}_{\mathsf {v}}\right) =0, \end{aligned}$$
(3.22)

and

$$\begin{aligned} P_{{\mathsf {v}}}^{(d)} \left( T_{\mathsf {v}}\gamma _{\mathsf {v}}( v) + B_{\mathsf {v}}^*Q_{\mathsf {v}}{\mathsf {y}}_{\mathsf {v}}\right) + C_{\mathsf {v}}^*Q_{\mathsf {v}}{\mathsf {y}}_{\mathsf {v}}- Q_{\mathsf {v}}{\mathsf {h}}_{\mathsf {v}}=0. \end{aligned}$$
(3.23)

Finally, we notice that (3.22) means equivalently that \(\begin{pmatrix} \gamma _{\mathsf {v}}(v) \\ {\mathsf {y}}_{\mathsf {v}}\end{pmatrix}\in \mathbb {Y}^*_{\mathsf {v}}\). On the other hand, (3.23) defines \({\mathsf {h}}_{\mathsf {v}}\): namely, it is equivalent to

$$\begin{aligned} {\mathsf {h}}_{\mathsf {v}}= Q_{\mathsf {v}}^{-1}P_{{\mathsf {v}}}^{(d)} \left( T_{\mathsf {v}}\gamma _{\mathsf {v}}( v) + B_{\mathsf {v}}^*Q_{\mathsf {v}}{\mathsf {y}}_{\mathsf {v}}\right) + Q_{\mathsf {v}}^{-1} C_{\mathsf {v}}^*Q_{\mathsf {v}}{\mathsf {y}}_{\mathsf {v}}. \end{aligned}$$

This concludes the proof. \(\square \)

Remark 3.8

Observe that (3.22) is a property similar to \({\mathsf {x}}=P_{{\mathsf {v}}}^{(d)} \gamma _{\mathsf {v}}( u)\) and to the boundary condition \(\gamma _{\mathsf {v}}( u)\in Y_{\mathsf {v}}\), since these two conditions can be compactly written as

$$\begin{aligned} P_{{\mathsf {v}}}^{(d),\perp \perp }(\gamma _{\mathsf {v}}(u)-{\mathsf {x}})=0, \end{aligned}$$
(3.24)

where \(P_{{\mathsf {v}}}^{(d),\perp \perp }\) denotes the orthogonal projector onto the orthogonal complement of \((Y_{\mathsf {v}}^{(d)})^\perp \) in \({\mathbb {C}}^{k_{\mathsf {v}}}\) (equal to \(Y_{\mathsf {v}}^{(d)}\oplus Y_{\mathsf {v}}^\perp \)) with respect to the Euclidean inner product. Indeed, (3.24) means that

$$\begin{aligned} \gamma _{\mathsf {v}}(u)-{\mathsf {x}}\in (Y_{\mathsf {v}}^{(d)})^\perp , \end{aligned}$$

or, equivalently,

$$\begin{aligned} \gamma _{\mathsf {v}}(u)={\mathsf {x}}+{\mathsf {y}}\end{aligned}$$

with \({\mathsf {y}}\in (Y_{\mathsf {v}}^{(d)})^\perp \). This gives \(\gamma _{\mathsf {v}}(u)\in Y_{\mathsf {v}}\) and, upon taking the projection onto \(Y_{\mathsf {v}}^{(d)}\), that \({\mathsf {x}}=P_{{\mathsf {v}}}^{(d)} \gamma _{\mathsf {v}}( u)\).

If, in particular, \(Y^{(d)}_{\mathsf {v}}=\{0\}\), then \(B^*_{\mathsf {v}}=0\) and the range of \(P^{(d),\perp }_{\mathsf {v}}\) is \(Y^\perp _{\mathsf {v}}\); the assertion of Lemma 3.7 thus agrees with [28, Lemma 3.10].

We are finally in a position to propose a set of sufficient conditions for well-posedness different from those in Theorem 3.3 and Corollary 3.4.

Theorem 3.9 For all \({\mathsf {v}}\in {\mathsf {V}}\), let

  • \(Y^{(d)}_{\mathsf {v}}\) be a subspace of the nonpositive isotropic cone of the quadratic form on \(Y^{(d)}_{\mathsf {v}}\) associated with

    $$\begin{aligned} T_{\mathsf {v}}+Q_{\mathsf {v}}B_{\mathsf {v}}+B_{\mathsf {v}}^*Q_{\mathsf {v}}-\lambda P_{\mathsf {v}}^{(d)} Q_{\mathsf {v}}P_{\mathsf {v}}^{(d)} \end{aligned}$$

    for some \(\lambda \ge 0\), and

  • \({\mathbb {Y}}^*_{\mathsf {v}}\) as in (3.18) be a subspace of the nonpositive isotropic cone (with respect to the Euclidean inner product in \({\mathbb {C}}^{k_{\mathsf {v}}}\oplus Y^{(d)}_{\mathsf {v}}\)) of the quadratic form associated with

    $$\begin{aligned} \begin{pmatrix} -T_{\mathsf {v}}-2\mu {{\,\mathrm{Id}\,}}&{} T_{\mathsf {v}}\\ P_{{\mathsf {v}}}^{(d)}T_{\mathsf {v}}&{} (P_{{\mathsf {v}}}^{(d)}B_{\mathsf {v}}^*-\mu {{\,\mathrm{Id}\,}}) Q_{\mathsf {v}}+Q_{\mathsf {v}}( B_{\mathsf {v}}-\mu {{\,\mathrm{Id}\,}}) \end{pmatrix} \end{aligned}$$

    for some \(\mu \ge 0\).

Then, \({\mathbb {A}}\) is a quasi-m-dissipative operator. In particular, \({\mathbb {A}}\) generates a strongly continuous semigroup on \({\mathbf {L}}^2_d({\mathcal {G}})\).

Proof

We already know that \({\mathbb {A}}\) is densely defined. Also, it is not difficult to prove that \({\mathbb {A}}\) is closed: this can be seen by invoking [27, Lemma 2.3], since closedness of \({\mathcal {A}}\) has already been observed in [28], based on computations in [9].

By [17, Cor. II.3.17], m-dissipativity of \({\mathbb {A}}\) will follow if we can check that both \({\mathbb {A}}\) and its adjoint \({\mathbb {A}}^*\) are dissipative. Similarly to what we have already done in Theorem 3.3, for the sake of simplicity and without loss of generality we assume in the following that \(N_{\mathsf {e}}=C_{\mathsf {v}}=0\).

The proof of Theorem 3.3 shows that \({\mathbb {A}}\) is dissipative under our assumptions. In order to check dissipativity of \({\mathbb {A}}^*\), we start from the identity

$$\begin{aligned} \begin{aligned} \Re \left( {{\mathcal {A}}}^*u, u\right)&= \Re \sum _{{\mathsf {e}}\in {\mathsf {E}}}\int _0^{\ell _{\mathsf {e}}} \left( -\left( Q_{\mathsf {e}}M_{\mathsf {e}}\right) ' u_{\mathsf {e}}+N_{\mathsf {e}}^*Q_{\mathsf {e}}u_{\mathsf {e}}\right) \cdot {\bar{u}}_{\mathsf {e}}\,dx\\&\quad +\frac{1}{2} \sum _{{\mathsf {e}}\in {\mathsf {E}}}\int _0^{\ell _{\mathsf {e}}} \left( Q_{\mathsf {e}}M_{\mathsf {e}}\right) ' u_{\mathsf {e}}\cdot {\bar{u}}_{\mathsf {e}}\,dx -\frac{1}{2}\sum _{{\mathsf {v}}\in {\mathsf {V}}} T_{\mathsf {v}}\gamma _{\mathsf {v}}(u)\cdot \gamma _{\mathsf {v}}({\bar{u}}) \end{aligned} \end{aligned}$$
(3.25)

which was derived in the proof of [28, Thm. 3.11] for all \(u\in D_{\max }\). We then find that for all \(\mathfrak u=(u,{\mathsf {x}})^\top \in D({{\mathbb {A}}}^*)\),

$$\begin{aligned} \begin{aligned} \Re \left( {{\mathbb {A}}}^*{\mathfrak {u}}, {\mathfrak {u}}\right) _d&= \Re \sum _{{\mathsf {e}}\in {\mathsf {E}}}\int _0^{\ell _{\mathsf {e}}} \left( -\left( Q_{\mathsf {e}}M_{\mathsf {e}}\right) ' u_{\mathsf {e}}+N_{\mathsf {e}}^*Q_{\mathsf {e}}u_{\mathsf {e}}\right) \cdot {\bar{u}}_{\mathsf {e}}\,dx\\&\quad +\frac{1}{2} \sum _{{\mathsf {e}}\in {\mathsf {E}}}\int _0^{\ell _{\mathsf {e}}} \left( Q_{\mathsf {e}}M_{\mathsf {e}}\right) ' u_{\mathsf {e}}\cdot {\bar{u}}_{\mathsf {e}}\,dx -\frac{1}{2}\sum _{{\mathsf {v}}\in {\mathsf {V}}} T_{\mathsf {v}}\gamma _{\mathsf {v}}(u)\cdot \gamma _{\mathsf {v}}({\bar{u}}) \\&\quad +\Re \sum _{{\mathsf {v}}\in {\mathsf {V}}}\left( P_{{\mathsf {v}}}^{(d)} \left( T_{\mathsf {v}}\gamma _{\mathsf {v}}( u) + B_{\mathsf {v}}^*Q_{\mathsf {v}}{\mathsf {x}}_{\mathsf {v}}\right) \right) \cdot {\bar{{\mathsf {x}}}}_{\mathsf {v}}. \end{aligned} \end{aligned}$$
(3.26)

Hence, \({{\mathbb {A}}}^*\) is quasi-dissipative if, for some \(\mu \ge 0\),

$$\begin{aligned} -\frac{1}{2}T_{\mathsf {v}}\xi \cdot {\bar{\xi }} +\Re \left( \left( P_{{\mathsf {v}}}^{(d)} \left( T_{\mathsf {v}}\xi + B_{\mathsf {v}}^*Q_{\mathsf {v}}{\mathsf {x}}\right) \right) \cdot {\bar{{\mathsf {x}}}}\right) \le {\mu } \left\| \begin{pmatrix} \xi \\ Q_{\mathsf {v}}^\frac{1}{2} {\mathsf {x}}\end{pmatrix} \right\| ^2_{{\mathbb {C}}^{k_{\mathsf {v}}}\oplus Y_{\mathsf {v}}^{(d)}} \quad \hbox {for all } \begin{pmatrix} \xi \\ {\mathsf {x}}\end{pmatrix}\in {\mathbb {Y}}^*_{\mathsf {v}},\end{aligned}$$

where the inner product and the norm are the Euclidean ones. This is equivalent to

$$\begin{aligned} \left( \begin{pmatrix} -T_{\mathsf {v}}&{} T_{\mathsf {v}}\\ P_{{\mathsf {v}}}^{(d)}T_{\mathsf {v}}&{} P_{{\mathsf {v}}}^{(d)}B_{\mathsf {v}}^*Q_{\mathsf {v}}+Q_{\mathsf {v}}B_{\mathsf {v}}\end{pmatrix} \begin{pmatrix} \xi \\ {\mathsf {x}}\end{pmatrix}, \begin{pmatrix} \xi \\ {\mathsf {x}}\end{pmatrix}\right) _{{\mathbb {C}}^{k_{\mathsf {v}}}\oplus Y_{\mathsf {v}}^{(d)}} \le 2\mu \left\| \begin{pmatrix} \xi \\ Q_{\mathsf {v}}^\frac{1}{2}{\mathsf {x}}\end{pmatrix} \right\| ^2_{{\mathbb {C}}^{k_{\mathsf {v}}}\oplus Y_{\mathsf {v}}^{(d)}} \quad \hbox {for all } \begin{pmatrix} \xi \\ {\mathsf {x}}\end{pmatrix}\in {\mathbb {Y}}^*_{\mathsf {v}}, \end{aligned}$$

and the claim follows. \(\square \)

Again, repeating the same argument for \(-{\mathbb {A}}\) yields the following.

Corollary 3.10 For all \({\mathsf {v}}\in {\mathsf {V}}\), let

  • \(Y^{(d)}_{\mathsf {v}}\) be a subspace of the null isotropic cone of the quadratic form on \(Y^{(d)}_{\mathsf {v}}\) associated with

    $$\begin{aligned} T_{\mathsf {v}}+Q_{\mathsf {v}}B_{\mathsf {v}}+B_{\mathsf {v}}^*Q_{\mathsf {v}}-\lambda P_{\mathsf {v}}^{(d)} Q_{\mathsf {v}}P_{\mathsf {v}}^{(d)} \end{aligned}$$

    for some \(\lambda \ge 0\), and

  • \({\mathbb {Y}}^*_{\mathsf {v}}\) as in (3.18) be a subspace of the null isotropic cone of the quadratic form on \({\mathbb {C}}^{k_{\mathsf {v}}}\oplus Y^{(d)}_{\mathsf {v}}\) associated with

    $$\begin{aligned} \begin{pmatrix} -T_{\mathsf {v}}-2\mu {{\,\mathrm{Id}\,}}&{} T_{\mathsf {v}}\\ P_{{\mathsf {v}}}^{(d)}T_{\mathsf {v}}&{} (P_{{\mathsf {v}}}^{(d)}B_{\mathsf {v}}^*-\mu {{\,\mathrm{Id}\,}}) Q_{\mathsf {v}}+Q_{\mathsf {v}}( B_{\mathsf {v}}-\mu {{\,\mathrm{Id}\,}}) \end{pmatrix} \end{aligned}$$

    for some \(\mu \ge 0\).

Then, both \(\pm {\mathbb {A}}\) are quasi-m-dissipative operators, and accordingly \({\mathbb {A}}\) generates a strongly continuous group on \({\mathbf {L}}^2_d({\mathcal {G}})\).

Remark 3.11

We can formulate conditions for dissipativity (rather than mere quasi-dissipativity) and unitarity of the (semi)group generated by \({\mathbb {A}}\) along the lines of Remark 3.6.

(1) \({\mathbb {A}}\) generates a contractive semigroup on \({\mathbf {L}}^2_d({\mathcal {G}})\) if the assumptions of Theorem 3.9 are complemented by the following:

  • \(Q_{\mathsf {e}}(x)N_{\mathsf {e}}(x)+N_{\mathsf {e}}(x)^*Q_{\mathsf {e}}(x)-(Q_{\mathsf {e}}M_{\mathsf {e}})'(x)\) is negative semi-definite, for all \({\mathsf {e}}\in {\mathsf {E}}\) and a.e. \(x\in (0,\ell _{\mathsf {e}})\); and

  • \(Y^{(d)}_{\mathsf {v}}\) is for all \({\mathsf {v}}\in {\mathsf {V}}\) a subspace of the negative isotropic cone of the quadratic form associated with \(Q_{\mathsf {v}}C_{\mathsf {v}}+C^*_{\mathsf {v}}Q_{\mathsf {v}}\).

(2) If, additionally to the assumptions of Corollary 3.10,

  • \(Q_{\mathsf {e}}(x)N_{\mathsf {e}}(x)+N_{\mathsf {e}}(x)^*Q_{\mathsf {e}}(x)=(Q_{\mathsf {e}}M_{\mathsf {e}})'(x)\), for all \({\mathsf {e}}\in {\mathsf {E}}\) and a.e. \(x\in (0,\ell _{\mathsf {e}})\);

  • \(Y^{(d)}_{\mathsf {v}}\) is for all \({\mathsf {v}}\in {\mathsf {V}}\) a subspace of the null isotropic cone of the quadratic form associated with \(Q_{\mathsf {v}}C_{\mathsf {v}}+C^*_{\mathsf {v}}Q_{\mathsf {v}}\); and

  • \({\mathbb {Y}}^*_{\mathsf {v}}\) is for all \({\mathsf {v}}\in {\mathsf {V}}\) a subspace of the null isotropic cone of the quadratic form associated with

    $$\begin{aligned} \begin{pmatrix} 0 &{} 0\\ 0 &{} Q_{\mathsf {v}}C_{\mathsf {v}}+C^*_{\mathsf {v}}Q_{\mathsf {v}}\end{pmatrix}, \end{aligned}$$

    then \({\mathbb {A}}\) generates a unitary group on \({\mathbf {L}}^2_d({\mathcal {G}})\).

Remark 3.12

We can, moreover, easily replace the local boundary conditions by global ones: for this purpose, we take the \(2k\times 2k\) matrix \(T\) given by

$$\begin{aligned} T:= \begin{pmatrix} -{{\,\mathrm{diag}\,}}\left( Q_{\mathsf {e}}(0) M_{\mathsf {e}}(0)\right) _{{\mathsf {e}}\in {\mathsf {E}}}&{} 0 \\ 0&{} {{\,\mathrm{diag}\,}}\left( Q_{\mathsf {e}}(\ell _{\mathsf {e}}) M_{\mathsf {e}}(\ell _{\mathsf {e}})\right) _{{\mathsf {e}}\in {\mathsf {E}}} \end{pmatrix} \end{aligned}$$
(3.27)

and replace \(B_{\mathsf {v}}, C_{\mathsf {v}}, Q_{\mathsf {v}}\) by globally defined operators \(B:Y\rightarrow Y^{(d)}\), \(C^{(d)}, Q^{(d)} :Y^{(d)}\rightarrow Y^{(d)}\) for some subspaces \(Y^{(d)}\subset Y\subset {\mathbb {C}}^{2k}\). With the notation

$$\begin{aligned} \gamma (u):= \left( \left( u_{\mathsf {e}}(0)\right) _{{\mathsf {e}}\in {\mathsf {E}}}, \left( u_{\mathsf {e}}(\ell _{\mathsf {e}})\right) _{{\mathsf {e}}\in {\mathsf {E}}}\right) ^\top , \end{aligned}$$

we thus consider the operator \({\mathbb {A}}\) defined as in (3.1) with domain

$$\begin{aligned} D({\mathbb {A}}):=\left\{ \begin{pmatrix} u \\ {\mathsf {x}}\end{pmatrix}\in D_{\max }\oplus Y^{(d)}: \gamma ( u )\in Y\hbox { and }{\mathsf {x}}={P}^{(d)}\gamma ( u ) \right\} \end{aligned}$$
(3.28)

and assume \(Y\) to be a subspace of the appropriate isotropic cone of the quadratic form associated with \(T+Q^{(d)} B + B^*Q^{(d)}\). In this case \(Z = (Y^\perp \oplus {{\,\mathrm{Ran}\,}}B^*) + {{\,\mathrm{Ker}\,}}B^*\subset {\mathbb {C}}^{2k}\) and the well-posedness condition (3.11) becomes

$$\begin{aligned} \dim Z=\dim P_K Z = k, \end{aligned}$$
(3.29)

where \(P_K\) is the orthogonal projector onto

$$\begin{aligned} K=\left\{ \left( \left( K_{\mathsf {e}}\right) _{{\mathsf {e}}\in {\mathsf {E}}}, \left( K_{\mathsf {e}}\right) _{{\mathsf {e}}\in {\mathsf {E}}}\right) ^\top : K_{\mathsf {e}}\in {\mathbb {C}}^{k_{\mathsf {e}}} \text { for all }{\mathsf {e}}\in {\mathsf {E}}\right\} \end{aligned}$$

with respect to the Euclidean inner product of \({\mathbb {C}}^{2k}\); see [28, Rem. 3.13] for details. In Sect. 5.5, we are going to see that (3.11) and, equivalently, (3.29) may fail to hold even when the equation can be proved to be well-posed by other means.
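The rank condition (3.29) is elementary to test in examples. Below is a sketch for \(k=2\) with a hypothetical subspace \(Z\subset {\mathbb {C}}^4\); the projector \(P_K\) acts as \((x,y)\mapsto \tfrac{1}{2}(x+y,x+y)\), and for this particular choice of \(Z\) the condition fails, in line with the possibility just mentioned.

```python
import numpy as np

k = 2
# Orthogonal projector onto K = {(K, K)^T : K in C^k} inside C^{2k}:
I = np.eye(k)
P_K = 0.5 * np.block([[I, I], [I, I]])

# A toy subspace Z of C^{2k}, basis vectors as columns.
Z = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0,  0.0],
              [0.0, -1.0]])

# (3.29) requires dim Z = dim P_K Z = k; here the second rank drops,
# so the condition fails for this choice of Z.
assert np.linalg.matrix_rank(Z) == 2
assert np.linalg.matrix_rank(P_K @ Z) == 1
```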

4 Qualitative properties

We now study when the (semi)group generated by \({\mathbb {A}}\) is real, positive, or \(\infty \)-contractive. Let \(C\subset {\mathbb {C}}\) be a closed and convex set; we will denote by \(P_C:{\mathbb {C}}\rightarrow {\mathbb {C}}\) the projector onto C. As in [28, §4], we shall apply to the Hilbert space of C-valued vectors in \({\mathbf {L}}^2_d({\mathcal {G}})\), i.e., to

$$\begin{aligned} K :={\mathbf {L}}^2_d({\mathcal {G}};C):= {\mathbf {L}}^2({\mathcal {G}};C) \oplus Y^{(d)}_C, \end{aligned}$$

a generalization (cf. [28, Lemma 4.3]) of a classical result by Brezis for the invariance of the convex subsets of Hilbert spaces; here

$$\begin{aligned} {\mathbf {L}}^2({\mathcal {G}};C):=\{u\in L^2({\mathcal {G}}): u_{\mathsf {e}}(x)\in C^{k_{\mathsf {e}}}\ \hbox { for a.e. }x\in (0,\ell _{\mathsf {e}})\hbox { and all }{\mathsf {e}}\in {\mathsf {E}}\} \end{aligned}$$

and

$$\begin{aligned} Y^{(d)}_C := \{ {\mathsf {x}}\in Y^{(d)}:{\mathsf {x}}_{\mathsf {v}}\in C^{k_{\mathsf {v}}} \hbox { for all }{\mathsf {v}}\in {\mathsf {V}}\}. \end{aligned}$$

(Observe that the latter might well be trivial, like in the case of \(Y^{(d)}\) spanned by the vector \((1,-1)^\top \) and \(C={\mathbb {R}}_+\).)

To this end, we first need to relate the minimizing projector \({\mathbb {P}}_K^Q\) with respect to the inner product \((\cdot ,\cdot )_d\) in the Hilbert space \({{\mathbf {L}}}^2_{d}({\mathcal {G}})\) defined in (2.4) to the minimizing projectors \(P_K\) and \(P^{(d)}_K\) with respect to the standard inner products in the Hilbert spaces \(\mathbf{L}^2({\mathcal {G}})\) and \(Y^{(d)}\), respectively: i.e., the products

$$\begin{aligned} \langle u,v \rangle := \sum _{{\mathsf {e}}\in {\mathsf {E}}}\int _0^{\ell _{\mathsf {e}}} u_{\mathsf {e}}(x)\cdot {\overline{v}}_{\mathsf {e}}(x)\ dx, \quad u,v\in \mathbf{L}^2({\mathcal {G}}), \end{aligned}$$
(4.1)
$$\begin{aligned} {\mathsf {x}}\cdot {\bar{{\mathsf {y}}}} := \sum _{{\mathsf {v}}\in {\mathsf {V}}} {\mathsf {x}}_{\mathsf {v}}\cdot \bar{\mathsf {y}}_{\mathsf {v}}, \quad {\mathsf {x}},{\mathsf {y}}\in Y^{(d)}. \end{aligned}$$
(4.2)

By following the steps in the proof of [28, Lemma 4.4] and performing the calculations for each component of K separately, we obtain the following characterization.

Lemma 4.1

Assume \(Q_{\mathsf {e}}^{\frac{1}{2}}(x)\) and \(Q^{\frac{1}{2}}_{\mathsf {v}}\) to be bijective maps on \(C^{k_{\mathsf {e}}}\) and \(C^{k_{\mathsf {v}}}\) for all \({\mathsf {e}}\in {\mathsf {E}}\) and all \(x\in [0,\ell _{\mathsf {e}}]\) and for all \({\mathsf {v}}\in {\mathsf {V}}\), respectively. Then, the minimizing projector \({\mathbb {P}}_K^Q\) with respect to the inner product (2.4) onto \(K={\mathbf {L}}^2_d({\mathcal {G}};C)\) is given by

$$\begin{aligned} {\mathbb {P}}_{K}^Q=\begin{pmatrix}Q^{-\frac{1}{2}} P_K Q^{\frac{1}{2}} &{} 0\\ 0 &{}(Q^{(d)})^{-\frac{1}{2}} P^{(d)}_K (Q^{(d)})^{\frac{1}{2}} \end{pmatrix} \end{aligned}$$
(4.3)

where \(Q:={{\,\mathrm{diag}\,}}(Q_{\mathsf {e}})_{{\mathsf {e}}\in {\mathsf {E}}}\) and \(Q^{(d)}:={{\,\mathrm{diag}\,}}(Q_{\mathsf {v}})_{{\mathsf {v}}\in {\mathsf {V}}}\) are block-diagonal matrices, while \(P_K\) and \(P^{(d)}_K\) are the minimizing projectors with respect to the standard inner products (4.1) and (4.2), respectively.
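The structure of (4.3) can be checked in a single fibre: for a diagonal positive weight \(Q\), the point minimizing the \(Q\)-weighted distance to the cone of componentwise nonnegative vectors is \(Q^{-\frac{1}{2}}(Q^{\frac{1}{2}}x)^+\), and it satisfies the variational characterization of projections onto convex sets. A numerical sketch (toy dimension and weights):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
q = np.array([1.0, 2.0, 0.5, 3.0])   # diagonal of the weight Q, positive
x = rng.standard_normal(n)            # point to project

# Candidate projection from (4.3): Q^{-1/2} (Q^{1/2} x)^+ ; for diagonal Q
# this collapses to the componentwise positive part x^+.
p = np.maximum(np.sqrt(q) * x, 0.0) / np.sqrt(q)
assert np.allclose(p, np.maximum(x, 0.0))

# Variational characterization of the projection onto a convex set K in
# the Q-inner product:  (x - p, k - p)_Q <= 0 for every k in K.
for _ in range(1000):
    kvec = np.abs(rng.standard_normal(n))   # random element of K = R_+^n
    assert np.sum(q * (x - p) * (kvec - p)) <= 1e-10
```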

In the following, we are going to focus on the cases of

  • \(C={\mathbb {R}}\),

  • \(C={\mathbb {R}}_+\),

  • \(C=\{z\in {\mathbb {C}}:|z|\le 1\}\).

Our arguments in the following rely upon [28, Lemma 4.3], which holds for quasi-m-dissipative operators; but in the first two cases (\(C={\mathbb {R}}\), \(C={\mathbb {R}}_+\)), the relevant conditions for invariance are equivalent in the quasi-dissipative and dissipative case, since reality and positivity of a semigroup are not affected by a scalar additive perturbation of its generator.

To begin with, let us consider \(C={\mathbb {R}}\): then Lemma 4.1 states that if \(Q_{\mathsf {v}}\) and \(Q_{\mathsf {e}}\) are real-valued, then the minimizing projector onto \(K={\mathbf {L}}_d^2({\mathcal {G}};{\mathbb {R}})\) is given by

$$\begin{aligned} {\mathbb {P}}_{K}^Q {u\atopwithdelims (){\mathsf {x}}} = \begin{pmatrix}Q^{-\frac{1}{2}} \Re \left( Q^{\frac{1}{2}} u\right) \\ (Q^{(d)})^{-\frac{1}{2}} \Re \left( (Q^{(d)})^{\frac{1}{2}} {\mathsf {x}}\right) \end{pmatrix} = {{\Re u}\atopwithdelims (){\Re {\mathsf {x}}}},\qquad {u\atopwithdelims (){\mathsf {x}}}\in {\mathbf {L}}^2_d({\mathcal {G}}). \end{aligned}$$

This allows for an extension of [28, Prop. 4.5].

Proposition 4.2

Under the assumptions of Theorem 3.3 or Theorem 3.9, let

$$\begin{aligned} \Re \xi \in \bigoplus _{{\mathsf {v}}\in {\mathsf {V}}} Y_{\mathsf {v}}\hbox { for all } \xi \in \bigoplus _{{\mathsf {v}}\in {\mathsf {V}}} Y_{\mathsf {v}}\quad \text {and} \quad \Re {\mathsf {x}}\in \bigoplus _{{\mathsf {v}}\in {\mathsf {V}}} Y_{\mathsf {v}}^{(d)} \hbox { for all } {\mathsf {x}}\in \bigoplus _{{\mathsf {v}}\in {\mathsf {V}}} Y_{\mathsf {v}}^{(d)}, \end{aligned}$$
(4.4)

let the matrix-valued mapping \(Q_{\mathsf {e}}\) be real-valued for all \({\mathsf {e}}\in {\mathsf {E}}\), and let the matrices \(Q_{\mathsf {v}},B_{\mathsf {v}},C_{\mathsf {v}}\) be real for all \({\mathsf {v}}\in {\mathsf {V}}\). Then, the semigroup generated by \({\mathbb {A}}\) is real if the matrix-valued mappings \(M_{\mathsf {e}},N_{\mathsf {e}}\) are real-valued for all \({\mathsf {e}}\in {\mathsf {E}}\).

Proof

First observe that by [28, Lemma 4.7], (4.4) holds if and only if \(Y_{\mathsf {v}}, Y_{\mathsf {v}}^{(d)}\), for each \({\mathsf {v}}\in {\mathsf {V}}\), are spanned by entry-wise real vectors only. Thus, the orthogonal projectors \(P^{(d)}_{\mathsf {v}}\) are real matrices for all \({\mathsf {v}}\) (see, e.g., [35, (5.13.3)]). By the assumptions, we then obtain,

$$\begin{aligned} {\mathbb {P}}_K^Q {u\atopwithdelims (){\mathsf {x}}}\in D({\mathbb {A}})\text { whenever }{u\atopwithdelims (){\mathsf {x}}} \in D({\mathbb {A}}). \end{aligned}$$

As in the proof of [28, Prop. 4.5], we deduce that the reality of the semigroup is equivalent to

$$\begin{aligned} \left( {\mathbb {A}} {{\Re u}\atopwithdelims (){\Re {\mathsf {x}}}},{{\Im u}\atopwithdelims (){\Im {\mathsf {x}}}}\right) _d =\left( {{\mathcal {A}}{\Re u}\atopwithdelims (){\mathcal {B}}{\Re u}+{\mathcal {C}}{\Re {\mathsf {x}}}},{{\Im u}\atopwithdelims (){\Im {\mathsf {x}}}}\right) _d\in {\mathbb {R}} \quad \text {for all } {u\atopwithdelims (){\mathsf {x}}} \in D({\mathbb {A}}), \end{aligned}$$
(4.5)

using the notation from (3.1).

Now, the first term reads \(\sum _{{\mathsf {e}}\in {\mathsf {E}}}\int _0^{\ell _{\mathsf {e}}}Q_{\mathsf {e}}(M_{\mathsf {e}}\frac{d}{dx}+N_{\mathsf {e}}) \Re u\cdot \Im {\bar{u}}\ dx\in {\mathbb {R}}\) for all \(u\in D({\mathcal {A}})\), which by [28, Lemma 4.6] is the case if and only if \(M_{\mathsf {e}},N_{\mathsf {e}}\) are real-valued for all \({\mathsf {e}}\in {\mathsf {E}}\). The boundary term

$$\begin{aligned} \sum _{{\mathsf {v}}\in {\mathsf {V}}} Q_{\mathsf {v}}\left( B_{\mathsf {v}}+C_{\mathsf {v}}P^{(d)}_{\mathsf {v}}\right) \gamma _{\mathsf {v}}(\Re u) \cdot { P^{(d)}_{\mathsf {v}}\gamma _{\mathsf {v}}(\Im {\bar{u}} )} \in {\mathbb {R}}\end{aligned}$$

if and only if \(B_{\mathsf {v}}+C_{\mathsf {v}}P^{(d)}_{\mathsf {v}}\) is real for all \({\mathsf {v}}\in {\mathsf {V}}\), since all entries of \(Q_{\mathsf {v}},P^{(d)}_{\mathsf {v}}\) are real. Finally, the reality of \(B_{\mathsf {v}},C_{\mathsf {v}}\) is sufficient to ensure the reality of \(B_{\mathsf {v}}+C_{\mathsf {v}}P^{(d)}_{\mathsf {v}}\). \(\square \)

We continue with the study of positivity. Without loss of generality we restrict ourselves to the real Hilbert space \({\mathbf {L}}^2_d(\mathcal G;{\mathbb {R}})\) and consider the convex subset \(C={\mathbb {R}}_+\). First, let us recall that by [28, Lemma 4.8], a real symmetric and positive definite matrix is a lattice isomorphism if and only if it is diagonal. Therefore we shall assume that the matrices \(Q_{\mathsf {v}}\) and \(Q_{\mathsf {e}}(x)\) are real and diagonal for all \({\mathsf {v}}\in {\mathsf {V}}\), \({\mathsf {e}}\in {\mathsf {E}}\), and \(x\in [0,\ell _{\mathsf {e}}]\). Hence the minimizing projector \({\mathbb {P}}_K^Q\) onto \(K={\mathbf {L}}^2_d({\mathcal {G}};{\mathbb {R}}_+)\) given in (4.3) again takes a simpler form,

$$\begin{aligned} {\mathbb {P}}_K^Q {u\atopwithdelims (){\mathsf {x}}} = \begin{pmatrix}Q^{-\frac{1}{2}} \left( Q^{\frac{1}{2}} u\right) ^+\\ (Q^{(d)})^{-\frac{1}{2}} \left( (Q^{(d)})^{\frac{1}{2}} {\mathsf {x}}\right) ^+\end{pmatrix} = {{u^+}\atopwithdelims (){{\mathsf {x}}^+}}. \end{aligned}$$

Proposition 4.3

Under the assumptions of Theorem 3.3 or Theorem 3.9, let the matrices

  • \(N_{\mathsf {e}}(x), Q_{\mathsf {e}}(x)\), \(Q_{\mathsf {v}}, B_{\mathsf {v}},C_{\mathsf {v}}\) be real-valued,

  • \(M_{\mathsf {e}}(x), Q_{\mathsf {e}}(x), Q_{\mathsf {v}}\) be diagonal, and

  • the projector \(P^+_{\mathsf {v}}\) onto the positive cone of \({\mathbb {R}}^{k_{\mathsf {v}}}\) commute with \(P_{\mathsf {v}}^{(d)}\),

for all \({\mathsf {e}}\in {\mathsf {E}}\), a.e. \(x\in [0,\ell _{\mathsf {e}}]\), and all \({\mathsf {v}}\in {\mathsf {V}}\). Furthermore, let

$$\begin{aligned} \xi ^+ \in {\bigoplus _{{\mathsf {v}}\in {\mathsf {V}}} Y_{\mathsf {v}}} \hbox { for all } \xi \in {\bigoplus _{{\mathsf {v}}\in {\mathsf {V}}} Y_{\mathsf {v}}} {\quad \text {and} \quad {\mathsf {x}}^+\in \bigoplus _{{\mathsf {v}}\in {\mathsf {V}}} Y_{\mathsf {v}}^{(d)} \hbox { for all } {\mathsf {x}}\in \bigoplus _{{\mathsf {v}}\in {\mathsf {V}}} Y_{\mathsf {v}}^{(d)}.} \end{aligned}$$
(4.6)

If, additionally, all matrices \(B_{\mathsf {v}}+C_{\mathsf {v}}P^{(d)}_{\mathsf {v}}\) are positive, then the semigroup generated by \({\mathbb {A}}\) on \({\mathbf {L}}^2_d({\mathcal {G}}, {\mathbb {R}})\) is positive if for all \({\mathsf {e}}\in {\mathsf {E}}\) and a.e. \(x\in [0,\ell _{\mathsf {e}}]\) all off-diagonal entries of the matrices \(N_{\mathsf {e}}(x)\) are nonnegative. In the special case of \(B_{\mathsf {v}}=0\) for all \({\mathsf {v}}\in {\mathsf {V}}\), the semigroup generated by \(\mathbb {A}\) is positive if all off-diagonal entries of the matrices \(N_{\mathsf {e}}(x)\) and \(C_{\mathsf {v}}\) are nonnegative, for all \({\mathsf {e}}\in {\mathsf {E}}\), a.e. \(x\in [0,\ell _{\mathsf {e}}]\), and all \({\mathsf {v}}\in {\mathsf {V}}\).

We stress that nonnegativity of the off-diagonal entries of \(N_{\mathsf {e}}\) and \(C_{\mathsf {v}}\) amounts to asking that the semigroups generated by \(N_{\mathsf {e}}\) and \(C_{\mathsf {v}}\) are both positive.
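This equivalence lends itself to a quick numerical sanity check: a real matrix with nonnegative off-diagonal entries (a so-called Metzler matrix) generates a semigroup whose matrices are entrywise nonnegative, while a negative off-diagonal entry destroys positivity already for small times. The following sketch uses illustrative \(2\times 2\) matrices of our own choosing, not objects from the paper.

```python
import numpy as np
from scipy.linalg import expm

# Off-diagonal entries nonnegative (Metzler matrix): the generated
# semigroup (e^{tC})_{t>=0} is positive, i.e., entrywise nonnegative.
C_pos = np.array([[-2.0, 1.0],
                  [0.5, -1.0]])

# One negative off-diagonal entry: positivity fails.
C_neg = np.array([[-2.0, -1.0],
                  [0.5, -1.0]])

for t in [0.1, 0.5, 1.0, 5.0]:
    assert (expm(t * C_pos) >= 0).all()

# For small t > 0, e^{tC} is approximately I + tC, so the negative
# off-diagonal entry shows up directly.
assert (expm(0.1 * C_neg) < 0).any()
```

Checking finitely many times is of course no proof, but it illustrates the dichotomy behind the criterion.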

Proof

Also in this case, it follows from the assumptions that

$$\begin{aligned} {\mathbb {P}}_K^Q {u\atopwithdelims (){\mathsf {x}}}\in D({\mathbb {A}})\text { whenever }{u\atopwithdelims (){\mathsf {x}}} \in D({\mathbb {A}}). \end{aligned}$$

By repeating the arguments in the proof of [28, Prop. 4.9], we obtain that the semigroup is positive if and only if

$$\begin{aligned} \left( {{\mathbb {A}}} {{u^+}\atopwithdelims (){{\mathsf {x}}^+}}, {{u^-}\atopwithdelims (){{\mathsf {x}}^-}}\right) _d \ge 0\quad \text {for all } {u\atopwithdelims (){\mathsf {x}}} \in D({\mathbb {A}}). \end{aligned}$$
(4.7)

We are going to consider the two components separately. For the first one, we have that, by [28, Lemma 4.11], \(( {\mathcal {A}} u^+, u^-) \ge 0\) if and only if the matrices \(M_{\mathsf {e}}(x)\) are diagonal and all off-diagonal entries of the matrices \(N_{\mathsf {e}}(x)\) are nonnegative. Let us turn to the second component: by surjectivity of \(\gamma _{\mathsf {v}}:D({\mathcal {A}})\rightarrow Y_{\mathsf {v}}\) and (3.5), nonnegativity of the boundary term \( \sum _{{\mathsf {v}}\in {\mathsf {V}}} Q_{\mathsf {v}}\left( B_{\mathsf {v}}+C_{\mathsf {v}}P^{(d)}_{\mathsf {v}}\right) \gamma _{\mathsf {v}}(u^+) \cdot { P^{(d)}_{\mathsf {v}}\gamma _{\mathsf {v}}(u^- )}\) for all \(u\in D({\mathcal {A}})\) is equivalent to

$$\begin{aligned} \sum _{{\mathsf {v}}\in {\mathsf {V}}} Q_{\mathsf {v}}\left( B_{\mathsf {v}}+C_{\mathsf {v}}P^{(d)}_{\mathsf {v}}\right) {{\mathsf {y}}^+_{\mathsf {v}}\cdot { {\mathsf {y}}^-_{\mathsf {v}}}}\ge 0\quad \hbox {for all }{{\mathsf {y}}} \in Y_{\mathsf {v}}; \end{aligned}$$
(4.8)

or, in the special case \(B_{\mathsf {v}}=0\), to

$$\begin{aligned} \sum _{{\mathsf {v}}\in {\mathsf {V}}} Q_{\mathsf {v}}C_{\mathsf {v}}{\mathsf {x}}^+_{\mathsf {v}}\cdot {\mathsf {x}}^-_{\mathsf {v}}\ge 0\quad \hbox {for all }{\mathsf {x}}\in Y^{(d)}_{\mathsf {v}}. \end{aligned}$$
(4.9)

Now, (4.8) certainly holds whenever \(B_{\mathsf {v}}+C_{\mathsf {v}}P^{(d)}_{\mathsf {v}}\) is a positive matrix. On the other hand, by [44, Thm. 2.6] (4.9) is equivalent to positivity of the semigroup generated by \(C_{\mathsf {v}}\), i.e., to the condition that the real matrix \(C_{\mathsf {v}}\) has nonnegative off-diagonal entries.

\(\square \)

Let us finally address the question whether our semigroup is \(\infty \)-contractive: this is a natural issue, since the prototypical example of a hyperbolic equation—the transport equation on \({\mathbb {R}}\)—is governed by a semigroup of isometries on \(L^p({\mathbb {R}})\) for all \(p\in [1,\infty ]\). To this aim, let us introduce the Lebesgue-type spaces

$$\begin{aligned} {{\mathbf {L}}}^p_{d}({\mathcal {G}}):={\mathbf {L}}^p({\mathcal {G}})\oplus Y^{(d)},\qquad p\in [1,\infty ], \end{aligned}$$

equipped with the canonical p-norm.

Proposition 4.4

Assume our standing Assumptions 2.2 and 2.3 hold with \(Q_{\mathsf {e}},Q_{\mathsf {v}}\) identity matrices. Under the assumptions of Theorem 3.3 and Remark 3.6.(1) or else of Theorem 3.9 and Remark 3.11.(1), let for all \({\mathsf {e}}\in {\mathsf {E}}\), all \(x\in [0,\ell _{\mathsf {e}}]\), and all \({\mathsf {v}}\in {\mathsf {V}}\) the matrices

  • \(M_{\mathsf {e}}(x)\) be diagonal and

  • \(N_{\mathsf {e}}(x)\) generate semigroups on \(L^2({\mathcal {G}})\) that are contractive with respect to the \(\infty \)-norm.

Furthermore, let

$$\begin{aligned} (1\wedge |\xi |){{\,\mathrm{sgn}\,}}\xi \in {\bigoplus _{{\mathsf {v}}\in {\mathsf {V}}} Y_{\mathsf {v}}} \hbox { for all } \xi \in {\bigoplus _{{\mathsf {v}}\in {\mathsf {V}}} Y_{\mathsf {v}}}\quad \hbox {and}\quad (1\wedge |{\mathsf {x}}|){{\,\mathrm{sgn}\,}}{\mathsf {x}}\in {\bigoplus _{{\mathsf {v}}\in {\mathsf {V}}} Y_{\mathsf {v}}^{ (d)} } \hbox { for all } {\mathsf {x}}\in {\bigoplus _{{\mathsf {v}}\in {\mathsf {V}}} Y_{\mathsf {v}}^{(d)} }. \end{aligned}$$
(4.10)

If additionally the matrix \(B_{\mathsf {v}}+C_{\mathsf {v}}P^{(d)}_{\mathsf {v}}\) is \(\infty \)-contractive for all \({\mathsf {v}}\in {\mathsf {V}}\), then the semigroup generated by \({\mathbb {A}}\) on \({\mathbf {L}}^2_d({\mathcal {G}})\) is \(\infty \)-contractive.

In the special case of \(B_{\mathsf {v}}=0\) for all \({\mathsf {v}}\in {\mathsf {V}}\), the semigroup generated by \({\mathbb {A}}\) is \(\infty \)-contractive if the semigroup generated by \(C_{\mathsf {v}}\) on \({\mathbb {C}}^{k_{\mathsf {v}}}\) is \(\infty \)-contractive.

Let us recall that

$$\begin{aligned} {{\,\mathrm{sgn}\,}}z:=\frac{z}{|z|}, \quad z\in {\mathbb {C}}\setminus \{0\},\quad \text {and}\quad {{\,\mathrm{sgn}\,}}0:= 0. \end{aligned}$$

The sign of a vector with complex entries is defined accordingly, entry-wise.

We observe that the proof of [38, Lemma 6.1] can be easily seen to extend to our setting, where the weight matrices \(Q_{\mathsf {e}},Q_{\mathsf {v}}\) are identity matrices; accordingly, the semigroups generated by \(-N_{\mathsf {e}}(x)\) and \(-C_{\mathsf {v}}\), with \(N_{\mathsf {e}}(x)=(n_{\mathsf {e}}^{i,j}(x))_{1\le i,j\le k_{\mathsf {e}}}\) and \(C_{\mathsf {v}}=(c_{\mathsf {v}}^{h,\ell })_{1\le h,\ell \le k_{\mathsf {v}}}\), are \(\infty \)-contractive if and only if

$$\begin{aligned} \Re n_{\mathsf {e}}^{i,i}(x)\ge \sum _{j\ne i}|n_{\mathsf {e}}^{i,j}(x)|,\qquad \Re c_{\mathsf {v}}^{h,h}\ge \sum _{\ell \ne h}|c_{\mathsf {v}}^{h,\ell }|\qquad \hbox {for all } {\mathsf {e}}\in {\mathsf {E}}\hbox {, a.e.}\ x\in (0,\ell _{\mathsf {e}})\hbox {, and all }{\mathsf {v}}\in {\mathsf {V}}. \end{aligned}$$
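This row-dominance condition is precisely dissipativity of \(-N_{\mathsf {e}}(x)\) with respect to the \(\infty \)-norm, and it can be probed numerically: for a matrix N satisfying it, \(\Vert e^{-tN}\Vert _\infty \le 1\) for all \(t\ge 0\), while a violated row produces a norm exceeding 1 already for small times. A sketch with illustrative matrices (our choice, not taken from the paper):

```python
import numpy as np
from scipy.linalg import expm

def inf_norm(A):
    # Operator norm on (C^n, ||.||_inf): maximal absolute row sum.
    return np.abs(A).sum(axis=1).max()

# Every row satisfies Re n_ii >= sum_{j != i} |n_ij|.
N_ok = np.array([[2.0, 1.0],
                 [0.5, 1.0]])

# First row violates the condition: Re n_11 = 0.1 < |n_12| = 1.
N_bad = np.array([[0.1, 1.0],
                  [0.0, 1.0]])

for t in [0.1, 1.0, 5.0]:
    assert inf_norm(expm(-t * N_ok)) <= 1 + 1e-12

# The violation is visible already for small times.
assert inf_norm(expm(-0.1 * N_bad)) > 1
```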

Proof

First of all, observe that \({\mathbb {A}}\) is by assumption m-dissipative in \({\mathbf {L}}^2_d({\mathcal {G}})\), hence we can apply [28, Lemma 4.3] in order to study invariance of the unit ball K of \({\mathbf {L}}^\infty _d({\mathcal {G}})\) under the semigroup generated by \({\mathbb {A}}\). Furthermore, we can apply Lemma 4.1: (4.10) now guarantees that \(D({\mathbb {A}})\) is left invariant under \({\mathbb {P}}^Q_K\) and, in view of the known formula for the minimizing projector onto the unit ball with respect to the \(\infty \)-norm and of [44, Thm. 2.13], we deduce that the relevant condition for invariance of the unit ball of \({\mathbf {L}}^\infty _d({\mathcal {G}})\) under the semigroup generated by \({\mathbb {A}}\) is

$$\begin{aligned} \Re \left( {{\mathbb {A}}} {{(1\wedge |u|){{\,\mathrm{sgn}\,}}u}\atopwithdelims (){(1\wedge |{\mathsf {x}}|){{\,\mathrm{sgn}\,}}{\mathsf {x}}}}, {{(|u|-1)^+{{\,\mathrm{sgn}\,}}u}\atopwithdelims (){(|{\mathsf {x}}|-1)^+{{\,\mathrm{sgn}\,}}{\mathsf {x}}}}\right) _d \le 0\quad \text {for all } {u\atopwithdelims (){\mathsf {x}}} \in D({\mathbb {A}}). \end{aligned}$$
(4.11)

Because \(\frac{d}{dx}(1\wedge |u(x)|){{\,\mathrm{sgn}\,}}u(x)\) and \((|{\bar{u}}(x)|-1)^+{{\,\mathrm{sgn}\,}}{\bar{u}}(x)\) have disjoint supports, by diagonality of the matrices \(M_{\mathsf {e}}(x)\) one sees that

$$\begin{aligned} \begin{aligned} \Re \int _0^{\ell _{\mathsf {e}}}&\left( M_{\mathsf {e}}(x)\frac{d}{dx}+N_{\mathsf {e}}(x)\right) (1\wedge |u(x)|){{\,\mathrm{sgn}\,}}u(x)\cdot (|{\bar{u}}(x)|-1)^+{{\,\mathrm{sgn}\,}}{\bar{u}}(x) dx\\&\quad = \Re \int _0^{\ell _{\mathsf {e}}} N_{\mathsf {e}}(x) (1\wedge |u(x)|){{\,\mathrm{sgn}\,}}u(x)\cdot (|{\bar{u}}(x)|-1)^+{{\,\mathrm{sgn}\,}}{\bar{u}}(x) dx \end{aligned} \end{aligned}$$
(4.12)

hence the first term in (4.11) is nonpositive if the semigroup generated by the matrix \(Q_{\mathsf {e}}(x)N_{\mathsf {e}}(x)\) on the unweighted space \({\mathbb {C}}^{k_{\mathsf {e}}}\) is \(\infty \)-contractive for a.e. \(x\in (0,\ell _{\mathsf {e}})\), since in this case the integrand in the second line of (4.12) is a nonpositive function.

Again by [44, Thm. 2.13], the boundary term in (4.11) is nonpositive if in particular \(B_{\mathsf {v}}+C_{\mathsf {v}}P^{(d)}_{\mathsf {v}}\) is \(\infty \)-contractive; or more generally, cf. the proof of Proposition 4.2, if—provided \(B_{\mathsf {v}}=0\)—merely the semigroup generated by \(C_{\mathsf {v}}\) is \(\infty \)-contractive. \(\square \)

Remark 4.5

The assumption that the matrices \(M_{\mathsf {e}},Q_{\mathsf {e}},Q_{\mathsf {v}}\) are diagonal is very restrictive and hints at the fact that very few linear hyperbolic systems are governed by an \(\infty \)-contractive semigroup. This is not overly surprising: contractive semigroups on \({\mathbf {L}}^2_d({\mathcal {G}})\) that are furthermore \(\infty \)-contractive extrapolate by the Riesz–Thorin Theorem to all \({\mathbf {L}}^p_d({\mathcal {G}})\)-spaces, \(p\ge 2\). However, Brenner’s Theorem (see [5, Thm. 8.4.3]) poses a serious limit to \(L^p\)-well-posedness even for systems less general than ours.

5 Examples

5.1 Transport equation

Arguably, transport equations represent the easiest setting in which our Assumption 2.2 is satisfied. The transport equations

$$\begin{aligned} {{\dot{u}}_{\mathsf {e}}=c_{\mathsf {e}}u'_{\mathsf {e}}} \end{aligned}$$

on a network consisting of \(|{\mathsf {E}}|\) edges of unit length with transmission conditions in \(|{\mathsf {V}}|\) vertices given as

$$\begin{aligned} u(t,1)\in {{\,\mathrm{Ran}\,}}({\mathcal {I}}^-_\omega )^\top {\quad \text {and}\quad {\mathcal {I}}^- u(t,1) = {\mathcal {I}}^+_\omega u(t,0)} \end{aligned}$$

have been introduced in [29], where their well-posedness in an \(L^1\)-setting was proved. Here, \(c_{\mathsf {e}}>0\) are constant velocity coefficients, \({\mathcal {I}}^+_\omega \) is the Kronecker product of \({\mathcal {I}}^+\) with a column stochastic \(|{\mathsf {V}}|\times |{\mathsf {E}}|\) matrix \({\mathcal {W}}=(\omega _{{\mathsf {v}}{\mathsf {e}}})\), and \({\mathcal {I}}^\pm \) are the signed incidence matrices introduced in (2.3); see [48, §2] for details. It is assumed that both signed incidence matrices are surjective, that is, of rank \(|{\mathsf {V}}|\): by [8, Thm. 2.1] this is the case if and only if the graph contains neither sinks nor sources. As shown in [48, §3], it is possible to consider dynamic conditions as well, by replacing the second (stationary) condition above by a dynamic condition of the form

$$\begin{aligned} \frac{\partial }{\partial t}{\mathcal {I}}^- u(t,1)={\mathcal {I}}^+_\omega u(t,0)+C{\mathcal {I}}^- u(t,1). \end{aligned}$$

Well-posedness of the corresponding abstract Cauchy problem was proved in [48, Thm. 4.5]. Here, we are adopting a global formalism, assuming

$$\begin{aligned}\gamma (u) := \begin{pmatrix} u(1) \\ u(0)\end{pmatrix} \quad \text { and } \quad T: = {{\,\mathrm{diag}\,}}\begin{pmatrix} -{{\,\mathrm{diag}\,}}(c_{\mathsf {e}})_{{\mathsf {e}}\in {\mathsf {E}}} &{} 0\\ 0 &{} {{\,\mathrm{diag}\,}}(c_{\mathsf {e}})_{{\mathsf {e}}\in {\mathsf {E}}} \end{pmatrix}, \end{aligned}$$

see Remark 3.12. Note that, contrary to our notation, in [48] the initial endpoint of an edge is assumed to be at 1 and the terminal endpoint at 0. In order to be able to compare the results, we stick to this convention in the context of the present example.

Let us show that the setting in [48] is a special case of ours: we recover the above boundary conditions letting

$$\begin{aligned} Y:={{\,\mathrm{Ran}\,}}({\mathcal {I}}^-_\omega )^\top \oplus {\mathbb {C}}^{|{\mathsf {E}}|}{\simeq {\mathbb {C}}^{|{\mathsf {V}}|}\oplus {\mathbb {C}}^{|{\mathsf {E}}|} } \quad \hbox {and}\quad Y^{(d)} := {\mathbb {C}}^{|{\mathsf {V}}|}{\oplus \{{{\mathbf {0}}}\} } \end{aligned}$$

as well as

$$\begin{aligned} B:=\begin{pmatrix} 0&{\mathcal {I}}^+_\omega \end{pmatrix} \quad \text {and} \quad P^{(d)}:=\begin{pmatrix} {\mathcal {I}}^-&0\end{pmatrix}. \end{aligned}$$

We simply take identity matrices for \(Q_{\mathsf {e}}\) and \(Q_{\mathsf {v}}\). In this way, \(\gamma (u)\in Y\) imposes that the values \(u_{\mathsf {e}}({\mathsf {v}}),u_{\mathsf {f}}({\mathsf {v}})\) agree for any two edges \({\mathsf {e}},{\mathsf {f}}\) with common tail \({\mathsf {v}}\), up to proper weights:

$$\begin{aligned} \frac{u_{\mathsf {e}}({\mathsf {v}})}{\omega _{{\mathsf {v}},{\mathsf {e}}}} = \frac{u_{\mathsf {f}}({\mathsf {v}})}{\omega _{{\mathsf {v}},{\mathsf {f}}}}. \end{aligned}$$

Observe that \(\dim {{\,\mathrm{Ran}\,}}B = {{\,\mathrm{rank}\,}}({\mathcal {I}}^+_\omega ) = |{\mathsf {V}}|\), so B is surjective and by Remark 3.12 we have

$$\begin{aligned} Z= Y^\perp \oplus {{\,\mathrm{Ran}\,}}B^*{= {{\,\mathrm{Ker}\,}}({\mathcal {I}}^-_\omega ) \oplus {{\,\mathrm{Ran}\,}}({\mathcal {I}}^+_\omega )^\top } \subset {\mathbb {C}}^{2|{\mathsf {E}}|}. \end{aligned}$$

In this case, \(\dim {{\,\mathrm{Ran}\,}}B^*= \dim {{\,\mathrm{Ran}\,}}B = |{\mathsf {V}}|\) and \(\dim Y^\perp = \dim {{\,\mathrm{Ker}\,}}({\mathcal {I}}^-_\omega ) = |{\mathsf {E}}| - |{\mathsf {V}}|\) by the Rank–Nullity Theorem. Accordingly, condition (3.29) is satisfied. Hence, the system has the right number of transmission conditions and we recover contractive well-posedness by our Theorem 3.3. We can easily apply the results in Sect. 4 and deduce that the semigroup is real (resp., positive) if and only if the matrix \({\mathcal {C}}\) is real (resp., has nonnegative off-diagonal entries). Furthermore, if the semigroup generated by \({\mathcal {C}}\) (resp., \({\mathcal {C}}^*\)) on \(Y^{(d)}\) is contractive with respect to the \(\infty \)-norm, then the semigroup generated by \({\mathbb {A}}\) is contractive on \(L^\infty _d({\mathcal {G}})\) (resp., on \(L^1_d(\mathcal G)\)), hence on \(L^p_d({\mathcal {G}})\) for all \(p\in [2,\infty ]\) (resp., for \(p\in [1,2]\)).
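The rank and dimension counts above can be checked on a concrete toy network. In the following sketch the graph and incidence matrices are our own illustrative choice; the weights \(\omega _{{\mathsf {v}}{\mathsf {e}}}\) are suppressed, which does not affect the ranks, since each column of \({\mathcal {I}}^\pm \) contains a single nonzero entry.

```python
import numpy as np

# Toy network: vertices {v1, v2} and edges e1: v1 -> v2,
# e2: v2 -> v1, e3: v1 -> v2. Every vertex has both outgoing and
# incoming edges, so the graph has neither sinks nor sources.
# I_minus[v, e] = 1 if v is the tail of e; I_plus[v, e] = 1 if head.
I_minus = np.array([[1, 0, 1],
                    [0, 1, 0]])
I_plus = np.array([[0, 1, 0],
                   [1, 0, 1]])
n_V, n_E = I_minus.shape

# Both signed incidence matrices are surjective, i.e., of rank |V|.
assert np.linalg.matrix_rank(I_minus) == n_V
assert np.linalg.matrix_rank(I_plus) == n_V

# Rank-nullity: dim Ker(I_minus) = |E| - |V|; together with the |V|
# conditions encoded by Ran B*, this gives |E| transmission
# conditions in total, as required.
assert (n_E - np.linalg.matrix_rank(I_minus)) + n_V == n_E
```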

5.2 Telegrapher’s equations

The \(2\times 2\) hyperbolic system

$$\begin{aligned} \left\{ \begin{array}{ll} {\dot{p}}+L q' +G p+Hq=0 \quad \hbox { in } (0,\ell ) \times (0,+\infty ),\\ {\dot{q}}+P p' +K q+Jp=0 \quad \hbox { in } (0,\ell )\times (0,+\infty ), \end{array} \right. \end{aligned}$$
(5.1)

on a real interval \((0,\ell )\) generalizes the first-order reduction of the wave equation and offers a general framework to treat models that appear in several applications. The analysis of this system on networks with different boundary conditions has been performed in [43].

In electrical engineering [10, 26, 33], p (resp. q) represents the voltage V (resp. the electrical current I) at \((x,t)\), \(H=J=0\), \(L=\frac{1}{C}\), \(P=\frac{1}{L}\), \(G=\frac{{\hat{G}}}{C}\), \(K=\frac{R}{L}\), where \(C>0\) is the capacitance, \(L>0\) the inductance, \({\hat{G}}\ge 0\) the conductance, and \(R\ge 0\) the resistance: (5.1) is then referred to as “telegrapher’s equation.”

Also, Maxwell’s equations in tube-like 3D domains can be intuitively reduced to a system on 1D networks [25] for \(P=L=-1\) and \(G=H=K=J=0\), where p (resp. q) represents the electric field E (resp. the magnetic field B). Accurate asymptotic analysis of the system shows that the 1D model is indeed related to the full 3D model, up to errors that can be estimated [26]; more general settings have been considered in [10, 11]. The 1D Maxwell’s equations are also derived from physical principles in [49, § 2], yielding again a special instance of (5.1).

Assuming that L and P are two real numbers, both positive or both negative, Assumption 2.2 holds for system (5.1) with \(u_{\mathsf {e}}=(p_{\mathsf {e}}, q_{\mathsf {e}})^\top \) and

$$\begin{aligned} M_{\mathsf {e}}= - \begin{pmatrix} 0 &{} L\\ P &{} 0 \end{pmatrix},\quad N_{\mathsf {e}}=-\begin{pmatrix} G &{} H\\ K &{} J \end{pmatrix}, \quad \hbox {and}\quad Q_{\mathsf {e}}=\begin{pmatrix} |P| &{} 0\\ 0 &{} |L| \end{pmatrix}. \end{aligned}$$

In such a case, we see that

$$\begin{aligned} Q_{\mathsf {e}}M_{\mathsf {e}}=-\begin{pmatrix} 0 &{} L |P|\\ L |P| &{} 0 \end{pmatrix}. \end{aligned}$$
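A quick numerical check (with sample values of L and P, chosen only for illustration) confirms that \(Q_{\mathsf {e}}M_{\mathsf {e}}\) is symmetric precisely when L and P have the same sign:

```python
import numpy as np

def QM(L, P):
    # M_e and Q_e as defined above for system (5.1).
    M = -np.array([[0.0, L],
                   [P, 0.0]])
    Q = np.diag([abs(P), abs(L)])
    return Q @ M

# Same sign: Q_e M_e is symmetric, as Assumption 2.2 requires.
A = QM(L=2.0, P=0.5)
assert np.allclose(A, A.T)

# Opposite signs: Q_e M_e is antisymmetric, hence not symmetric.
B = QM(L=2.0, P=-0.5)
assert not np.allclose(B, B.T)
```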

Since telegrapher’s equation (5.1) on networks with nondynamic boundary conditions from [13, 43] enters into the framework of [28], we concentrate here on dynamic boundary conditions. We start with a simple example and then consider a system set on a star-shaped network.

5.2.1 Maxwell system with dynamic boundary conditions

Let us study the Maxwell system

$$\begin{aligned} \left\{ \begin{aligned} {\dot{p}}&=q',\\ {\dot{q}}&= p', \end{aligned} \right. \end{aligned}$$
(5.2)

a special case of (5.1), on two adjacent intervals \({\mathsf {e}}_1=(-1,0)\) and \({\mathsf {e}}_2=(0,1)\) (with common vertex \({\mathsf {v}}_0\equiv 0\)). We denote by \(u_i:=(p_i, q_i)^\top \) the unknowns on the edge \({\mathsf {e}}_i\), \(i=1,2\). We impose the electric boundary condition at \(-1\) and the magnetic condition at 1, complemented by continuity of p at 0 along with a dynamic boundary condition. This means that the boundary/dynamic conditions can be written as

$$\begin{aligned} p_1(t,-1)&= q_2(t,1)=0, \end{aligned}$$
(5.3)
$$\begin{aligned} p_1(t,0)&= p_2(t,0), \end{aligned}$$
(5.4)
$$\begin{aligned} \frac{d}{dt}p_{1}(t,0)&= q_2(t, 0)- q_1(t, 0). \end{aligned}$$
(5.5)

To write the system in the formalism introduced in Sect. 3, we define

$$\begin{aligned} \gamma _{{\mathsf {v}}_{-1}}(u)&:= (p_1(-1),q_1(-1)) \in {\mathbb {C}}^2,\\ \gamma _{{\mathsf {v}}_1}(u)&:= (p_2(1),q_2(1)) \in {\mathbb {C}}^2,\\ \gamma _{{\mathsf {v}}_0}(u)&:= (p_1(0),q_1(0),p_2(0),q_2(0)) \in {\mathbb {C}}^4. \end{aligned}$$

In the vertices \({\mathsf {v}}_{-1}\) and \({\mathsf {v}}_1\), we only have the stationary boundary conditions (5.3), which are satisfied by taking

$$\begin{aligned} Y_{{\mathsf {v}}_{-1}}:= \{0\}\oplus {\mathbb {C}}\quad \text {and}\quad Y_{{\mathsf {v}}_1}:= {\mathbb {C}}\oplus \{0\}. \end{aligned}$$

In \({\mathsf {v}}_0\), we enforce the stationary condition (5.4) by taking \(Y_{{\mathsf {v}}_0}:=\{ (1,0,-1,0)^\top \}^\perp \) while for the dynamic condition we take

$$\begin{aligned} Y_{{\mathsf {v}}_0}^{(d)}:={{\,\mathrm{span}\,}}\{ (1,0,1,0)^\top \} \subset Y_{{\mathsf {v}}_0}, \end{aligned}$$

and define

$$\begin{aligned} B_{{\mathsf {v}}_0}:=\left( \begin{array}{llll} 0&{}\quad -1&{}\quad 0&{}\quad 1\\ 0&{}\quad 0&{}\quad 0&{}\quad 0\\ 0&{}\quad -1&{}\quad 0&{}\quad 1\\ 0&{}\quad 0&{}\quad 0&{}\quad 0 \end{array} \right) . \end{aligned}$$

With this choice, we see that (5.5) is equivalent to

$$\begin{aligned} {\dot{{\mathsf {x}}}}_{{\mathsf {v}}_0}= B_{{\mathsf {v}}_0} \gamma _{{\mathsf {v}}_0}(u) \quad \text {where}\quad {\mathsf {x}}_{{\mathsf {v}}_0}=P_{{\mathsf {v}}_0}^{(d)} \gamma _{{\mathsf {v}}_0}(u)\text { and } \gamma _{{\mathsf {v}}_0}(u)\in Y_{{\mathsf {v}}_0}. \end{aligned}$$
(5.6)

Now, by taking \(Q_{{\mathsf {v}}_0}=I\), we notice that the boundary term in (3.4) corresponding to \({\mathsf {v}}_0\) is equal to

$$\begin{aligned} \Re \left( p_1(0){\bar{q}}_1(0)-p_2(0){\bar{q}}_2(0)+(q_2(0)-q_1(0))\bar{p}_1(0)\right) , \end{aligned}$$

which by (5.4) is zero. Similarly, due to the boundary condition at the two endpoints \({\mathsf {v}}_{-1}, {\mathsf {v}}_1\), their corresponding boundary terms in (3.4) are zero. We are thus in the setting of Remark 3.6.(2).
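The cancellation in this boundary term can be confirmed numerically: for random complex traces subject to the continuity condition (5.4), the displayed expression vanishes. A minimal check (a sanity test of the algebra only):

```python
import numpy as np

rng = np.random.default_rng(42)

for _ in range(100):
    # Random complex traces at the common vertex v0 ...
    q1 = rng.normal() + 1j * rng.normal()
    q2 = rng.normal() + 1j * rng.normal()
    p1 = rng.normal() + 1j * rng.normal()
    p2 = p1  # ... subject to the continuity condition p1(0) = p2(0).

    term = (p1 * np.conj(q1) - p2 * np.conj(q2)
            + (q2 - q1) * np.conj(p1)).real
    assert abs(term) < 1e-12
```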

So, it remains to check (3.11). But as \(B_{{\mathsf {v}}_0} \) is surjective, \( Z_{{\mathsf {v}}_0}\) is given by (3.12) and since \({{\,\mathrm{Ran}\,}}B_{{\mathsf {v}}_0}^*={{\,\mathrm{span}\,}}\{(0, -1, 0, 1)^\top \}\), we find

$$\begin{aligned} {{\widetilde{Z}}}_{{\mathsf {v}}_0} = Z_{{\mathsf {v}}_0} = {{\,\mathrm{span}\,}}\{ (1,0,-1,0)^\top , (0, -1, 0, 1)^\top \}. \end{aligned}$$

For the two endpoints \({\mathsf {v}}_{-1}, {\mathsf {v}}_1\), we only have stationary conditions, hence \(Z_{{\mathsf {v}}_{-1}} = Y_{{\mathsf {v}}_{-1}}^\perp \), \(Z_{{\mathsf {v}}_{1}} = Y_{{\mathsf {v}}_{1}}^\perp \) and

$$\begin{aligned} {{\widetilde{Z}}}_{{\mathsf {v}}_{-1}} = {\mathbb {C}}\oplus \{0\}\oplus \{0\}\oplus \{0\},\quad {{\widetilde{Z}}}_{{\mathsf {v}}_{1}} = \{0\}\oplus \{0\}\oplus \{0\}\oplus {\mathbb {C}}. \end{aligned}$$

It is now easy to verify the dimension equation (3.13), hence Corollary 3.4 can be applied. We finally obtain that the considered problem is governed by a unitary group.

According to Proposition 4.2, the group is real since all involved constants are real, but we may expect that it does not preserve positivity and is not \(\infty \)-contractive since \(M_{\mathsf {e}}\) is not diagonal.

5.2.2 Telegrapher’s equations with dynamic boundary conditions

Here, we analyze the electrical formulation of system (5.1) on a Y-shaped structure with the transmission conditions from [10, §8.2] at the common vertex (the so-called improved Kirchhoff condition). Hence, in reference to the electrical interpretation, we assume that P and L are two positive constants, \(H=J=G=K=0\), and p (resp. q) is denoted by V (resp. I). More precisely, the network consists of three edges \({\mathsf {e}}_i\), \(i=0,1,2\), each identified with (0, 1) and having a common vertex \({\mathsf {v}}_1\equiv 0\), where the edge \({\mathsf {e}}_0\) plays a specific role since the transmission conditions at 0 from [10, (8.9)] are given by

$$\begin{aligned} \begin{aligned} \sum _{k=1}^{2}{{\mathcal {L}}}_{j k} {\dot{I}}_{k}(t,0)&=V_{0}(t,0)-V_{j}(t,0) \quad \text {for }j \in \{1,2\}, t>0,\\ {\dot{V}}_{0}(t,0)&=-\sum _{j=0}^{2}I_{j}(t,0)\quad t>0 \end{aligned} \end{aligned}$$
(5.7)

where \({\mathcal {L}}=({\mathcal {L}}_{j k})_{2 \times 2}\) is a symmetric, real, positive definite matrix. Here, for simplicity, we take all the other coefficients equal to 1. At the endpoints, we take the boundary conditions

$$\begin{aligned} I_0(t,1)=V_j(t,1)=0 \quad \text { for } j=1,2. \end{aligned}$$
(5.8)

To write the system in our formalism, we define

$$\begin{aligned} \gamma _{{\mathsf {v}}_1}(u)=( I_1(0), I_2(0), I_0(0), V_1(0), V_2(0), V_0(0))^\top , \end{aligned}$$

so that \({\mathbb {C}}^{k_{{\mathsf {v}}_1}}={\mathbb {C}}^6\). Since only dynamic conditions are imposed at \({\mathsf {v}}_1\), we take \(Y_{{\mathsf {v}}_1}:={\mathbb {C}}^6\); we choose

$$\begin{aligned} Y_{{\mathsf {v}}_1}^{(d)}:={{\,\mathrm{span}\,}}\{ (1,0,0,0,0,0)^\top , (0,1,0,0,0,0)^\top , (0,0,0,0,0,1)^\top \}, \end{aligned}$$

and we define

$$\begin{aligned}B_{{\mathsf {v}}_1}:= \left( \begin{array}{cccccc} 0&{}\quad 0&{}\quad 0&{}\quad -a_{11}&{}\quad -a_{12}&{}\quad a_{11}+a_{12}\\ 0&{}\quad 0&{}\quad 0&{}\quad -a_{12}&{}\quad -a_{22}&{}\quad a_{12}+a_{22}\\ 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0\\ 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0\\ 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0\\ - 1&{}\quad -1&{}\quad -1&{}\quad 0&{}\quad 0&{}\quad 0 \end{array} \right) , \end{aligned}$$

where \({\mathcal {L}}^{-1}=\left( \begin{array}{cc} a_{11}&{}\quad a_{12} \\ a_{12}&{}\quad a_{22} \end{array} \right) \). With these notations, we see that (5.7) is equivalent to

$$\begin{aligned} {\dot{{\mathsf {x}}}}_{{\mathsf {v}}_1}= B_{{\mathsf {v}}_1} \gamma _{{\mathsf {v}}_1}(u)\quad \text { where }\quad {\mathsf {x}}_{{\mathsf {v}}_1}=P_{{\mathsf {v}}_1}^{(d)} \gamma _{{\mathsf {v}}_1}(u). \end{aligned}$$

Now, by taking \(Q_{{\mathsf {v}}_1}=PL {{\,\mathrm{diag}\,}}({\mathcal {L}}, 1)\), we notice that the boundary term in (3.4) corresponding to \({\mathsf {v}}_1\) is equal to 0.

We immediately check that \({{\,\mathrm{Ran}\,}}B_{{\mathsf {v}}_1}= Y_{{\mathsf {v}}_1}^{(d)}\) hence, by (3.12), \(Z_{{\mathsf {v}}_1} = {{\,\mathrm{Ran}\,}}B_{{\mathsf {v}}_1}^{*}\). Further, (5.8) yields

$$\begin{aligned} {\tilde{{\mathsf {w}}}}^{({\mathsf {v}}_2, 1)}=(0,0,1,0,0,0)^\top , {\tilde{{\mathsf {w}}}}^{({\mathsf {v}}_3, 1)}=(0,0,0,1,0,0)^\top , {\tilde{{\mathsf {w}}}}^{({\mathsf {v}}_3, 2)}=(0,0,0,0,1,0)^\top . \end{aligned}$$

Since the three columns of \(B_{{\mathsf {v}}_1}^{*}\) and these three vectors form a basis of \({\mathbb {C}}^6\), Corollary 3.4 shows that the considered problem is governed by a group of isometries.

Note that if in (5.1) we allow \(H, J, G\), and K to be different from zero, the considered problem is governed by a group.

As before, according to Proposition 4.2, the group is real since all involved constants are real, but we are not able to say anything about positivity or \(\infty \)-contractivity since \(M_{\mathsf {e}}\) is not diagonal.

5.3 Second sound in networks

A wave-like form of thermal propagation was conjectured by Lev Landau to exist in superfluid helium and is now known under the name of “second sound”; it has since been experimentally observed in several materials. One classical model boils down to the linear equations of thermoelasticity

$$\begin{aligned} \left\{ \begin{array}{rcll} \ddot{z}-\alpha z'' + \beta \theta '&{}=&{}0 \quad &{}\hbox { in } (0,\ell ) \times (0,+\infty ),\\ {\dot{\theta }}+\gamma q'+\delta {\dot{z}}'&{}=&{}0 \quad &{}\hbox { in } (0,\ell ) \times (0,+\infty ),\\ \tau _0 {\dot{q}}+q + \kappa \theta '&{}=&{}0 \quad &{}\hbox { in } (0,\ell ) \times (0,+\infty ), \end{array} \right. \end{aligned}$$
(5.9)

where z, \(\theta \), and q represent the displacement, the temperature difference to a fixed reference temperature, and the heat flux, respectively, and \(\alpha , \beta ,\gamma , \delta , \tau _0,\kappa \) are positive constants. Racke has discussed in [45] the asymptotic stability of this system under three different classes of boundary conditions, including

$$\begin{aligned} \alpha z'(0)=\beta \theta (0),\quad \theta '(0)=0,\quad z(\ell )=\theta (\ell )=0. \end{aligned}$$

While he does not point it out explicitly, this leads to a dynamic condition: \(\theta '(0)\) is not well defined if \(\theta \) is merely of class \(H^1\); however, if the initial data are smooth enough, the third equation in (5.9) can be evaluated at 0, yielding

$$\begin{aligned} \tau _0 {\dot{q}}(0)+q(0) + \kappa \theta '(0)=0 \end{aligned}$$

hence the condition \(\theta '(0)=0\) leads to

$$\begin{aligned} {\dot{q}}(0)=-\frac{1}{\tau _0} q(0), \end{aligned}$$
(5.10)

which can indeed be made sense of even for general initial data, and then studied by the method introduced in the previous section. In summary, we now study system (5.9) with the dynamic boundary condition (5.10) and the stationary ones

$$\begin{aligned} \alpha z'(0)=\beta \theta (0),\quad \quad z(\ell )=\theta (\ell )=0. \end{aligned}$$
(5.11)

We observe that Assumption 2.2 is satisfied by taking \(u = (z', {\dot{z}}, \theta , q)\),

$$\begin{aligned} M_{\mathsf {e}}:= \begin{pmatrix} 0&{} 1&{} 0&{} 0\\ \alpha &{} 0&{} -\beta &{}0\\ 0&{} -\delta &{} 0 &{}-\gamma \\ 0&{} 0&{} -\frac{\kappa }{\tau _0}&{} 0 \end{pmatrix},\quad Q_{\mathsf {e}}:= \begin{pmatrix} \alpha \delta &{} 0&{} 0&{} 0\\ 0&{} \delta &{} 0&{}0\\ 0&{} 0 &{}\beta &{}0\\ 0&{} 0&{} 0 &{}\frac{\beta \gamma \tau _0}{\kappa } \end{pmatrix},\quad \text {and}\quad N_{\mathsf {e}}:= \begin{pmatrix} 0 &{} 0&{} 0&{} 0\\ 0&{} 0 &{} 0&{}0\\ 0&{} 0 &{} 0 &{}0\\ 0&{} 0&{} 0 &{} -\frac{1}{\tau _0} \end{pmatrix}.\end{aligned}$$

A direct computation shows that

$$\begin{aligned} Q_{\mathsf {e}}M_{\mathsf {e}}=\begin{pmatrix} 0 &{} \alpha \delta &{} 0 &{} 0\\ \alpha \delta &{} 0 &{} -\beta \delta &{} 0\\ 0 &{} -\beta \delta &{} 0 &{} -\beta \gamma \\ 0 &{} 0 &{} -\beta \gamma &{} 0 \end{pmatrix} \end{aligned}$$

with four eigenvalues of the form \(\pm \sqrt{\frac{H\pm \sqrt{K}}{2}},\) where \(H:= \alpha ^2 \delta ^2+\beta ^2 \delta ^2+ \beta ^2\gamma ^2\) and \(K:= H^2 - 4 \alpha ^2\beta ^2\gamma ^2\delta ^2\). Because \(0\le K<H^2\) whenever \(\alpha ,\beta ,\gamma ,\delta >0\), \(Q_{\mathsf {e}}M_{\mathsf {e}}\) has two positive and two negative eigenvalues. This is consistent with the above choice (5.10)–(5.11) of boundary conditions in the purely hyperbolic case of \(\tau _0>0\).
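The eigenvalue signature can be double-checked numerically. The characteristic polynomial of the tridiagonal matrix above is \(\lambda ^4-H\lambda ^2+\alpha ^2\beta ^2\gamma ^2\delta ^2\), so both roots in \(\lambda ^2\) are positive. A sketch with sample constants (our choice, for illustration only):

```python
import numpy as np

# Sample positive constants.
alpha, beta, gamma, delta = 1.0, 2.0, 0.5, 1.5
a, b, c = alpha * delta, beta * delta, beta * gamma

QM_e = np.array([[0.0, a, 0.0, 0.0],
                 [a, 0.0, -b, 0.0],
                 [0.0, -b, 0.0, -c],
                 [0.0, 0.0, -c, 0.0]])

eig = np.linalg.eigvalsh(QM_e)
# Two positive and two negative eigenvalues, as claimed.
assert (eig > 0).sum() == 2 and (eig < 0).sum() == 2

# The squared eigenvalues solve s^2 - H*s + (a*c)^2 = 0.
H = a ** 2 + b ** 2 + c ** 2
disc = np.sqrt(H ** 2 - 4 * (a * c) ** 2)
expected = np.sqrt([(H - disc) / 2, (H + disc) / 2])
assert np.allclose(np.sort(np.abs(eig))[[0, 2]], expected)
```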

If the endpoint 0 (resp. \(\ell \)) is identified with \({\mathsf {v}}_1\) (resp. \({\mathsf {v}}_2\)), we take

$$\begin{aligned} Y_{{\mathsf {v}}_1}= \left\{ x\in {\mathbb {C}}^4: x_1=\frac{\beta }{\alpha } x_3\right\} ,\quad Y^{(d)}_{{\mathsf {v}}_1}= \{0\}\oplus \{0\}\oplus \{0\}\oplus {\mathbb {C}}\subset Y_{{\mathsf {v}}_1}, \end{aligned}$$

and

$$\begin{aligned} Y_{{\mathsf {v}}_2}={\mathbb {C}}\oplus \{0\}\oplus \{0\}\oplus {\mathbb {C}}. \end{aligned}$$

Observe that \(\gamma _{{\mathsf {v}}_1}(u)=u(0)\in Y_{{\mathsf {v}}_1}\), \(\gamma _{{\mathsf {v}}_2}(u)=u(\ell )\in Y_{{\mathsf {v}}_2}\) return all stationary conditions, whereas

$$\begin{aligned} \frac{d}{dt}P^{(d)}_{{\mathsf {v}}_1}\gamma _{{\mathsf {v}}_1}(u)=C_{{\mathsf {v}}_1} P^{(d)}_{{\mathsf {v}}_1}\gamma _{{\mathsf {v}}_1}(u) \end{aligned}$$

with

$$\begin{aligned} C_{{\mathsf {v}}_1}=\begin{pmatrix} 0 &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} -\frac{1}{\tau _0} \end{pmatrix} \end{aligned}$$

corresponds to the dynamic condition (5.10). Also, observe that \(B_{{\mathsf {v}}_1}=0\); therefore, by (3.8), we have

$$\begin{aligned} Z_{{\mathsf {v}}_1}= Y_{{\mathsf {v}}_1}^{\perp } + {{\,\mathrm{Ker}\,}}B_{{\mathsf {v}}_1}^*=Y_{{\mathsf {v}}_1}^{\perp } + Y^{(d)}_{{\mathsf {v}}_1}={{\,\mathrm{span}\,}}\{ (\alpha ,0,-\beta ,0)^\top , (0, 0, 0, 1)^\top \}. \end{aligned}$$

Furthermore,

$$\begin{aligned} {Z_{{\mathsf {v}}_2}=} Y_{{\mathsf {v}}_2}^\perp = {{\,\mathrm{span}\,}}\{ (0 ,1,0,0)^\top , (0 ,0,1,0)^\top \}, \end{aligned}$$

and it is easy to see that (3.11) applies.

Now, by taking \(Q_{{\mathsf {v}}_1}=\tau _0 \beta \), we notice that the boundary term in (3.6) corresponding to \({\mathsf {v}}_1\) is equal to

$$\begin{aligned} 2\beta \gamma \Re (\theta (0){{\overline{q}}}(0))-\beta |\theta (0)|^2= & {} -|\gamma q(0)-\beta \theta (0)|^2+ \gamma ^2 |q(0)|^2 \\\le & {} \gamma ^2 |q(0)|^2= \gamma ^2 |P_{{\mathsf {v}}_1}^{(d)} \gamma _{{\mathsf {v}}_1}(u)|^2. \end{aligned}$$

Hence, in view of Theorem 3.3, the system is well-posed. More precisely, the initial value problem associated with (5.9) with the above boundary conditions is governed by a strongly continuous semigroup on \(L^2_d(\mathcal G)\equiv L^2(0,\ell )\oplus Y^{(d)}\).

As before, according to Proposition 4.2, the semigroup is real since all involved constants are real, but again positivity and \(\infty \)-contractivity cannot be checked by our abstract results since \(M_{\mathsf {e}}\) is not diagonal.

System (5.9) on a network with stationary boundary conditions at the nodes (namely, continuity of \(z\) and \(q\) and Kirchhoff-type conditions for \(z'\) and \(\theta \)) was described in [28, § 5.6]. With the method described above, we can, e.g., impose dynamic conditions on the vertex evaluation of \(z\) and/or \(q\) at an arbitrary subset of \({\mathsf {V}}\), while keeping Kirchhoff-type conditions for \(z'\) and \(\theta \), and still retain a well-posed system.

5.4 Wave-type equations

Wave-type equations on graphs have attracted the attention of many authors; see [2, 4, 30, 40] and the references cited there. Here, we show that our framework can be applied to rather general elastic systems modeled as

$$\begin{aligned} \ddot{u}_{\mathsf {e}}(t,x)=u_{\mathsf {e}}''(t,x)+\alpha _{\mathsf {e}}{\dot{u}}_{\mathsf {e}}'(t,x)+\beta _{\mathsf {e}}{\dot{u}}_{\mathsf {e}}(t,x)+\gamma _{\mathsf {e}}u'_{\mathsf {e}}(t,x),\qquad t\ge 0,\ x\in (0,\ell _{\mathsf {e}}), \end{aligned}$$
(5.12)

where \(\alpha _{\mathsf {e}}\in C^1([0,\ell _{\mathsf {e}}])\) and \(\beta _{\mathsf {e}}, \gamma _{\mathsf {e}}\in L^\infty (0,\ell _{\mathsf {e}})\) are real-valued functions. For the sake of simplicity, as in [28, §5.23], we restrict ourselves to stars with \(J\ge 2\) edges as in Fig. 1, which can be regarded as building blocks of more general networks; but contrary to [28, §5.23], we assume that the edges are connected by a point mass at their common vertex, see [14, 24] for the case \(J=2\) and the wave equation, i.e., \(\alpha _{\mathsf {e}}=\beta _{\mathsf {e}}=\gamma _{\mathsf {e}}=0\) (see also [36] for a cable with a tip mass).

Fig. 1: A star-shaped network with one incoming and \(J-1\) outgoing edges

It turns out that (5.12) is equivalent to

$$\begin{aligned} {\dot{U}}_{\mathsf {e}}= M_{\mathsf {e}}U_{\mathsf {e}}'+ N_{\mathsf {e}}U_{\mathsf {e}}, \end{aligned}$$

for the vector function \(U_{\mathsf {e}}=(u_{\mathsf {e}}', {\dot{u}}_{\mathsf {e}})^\top \), where

$$\begin{aligned} M_{\mathsf {e}}=\begin{pmatrix} 0 &{} 1 \\ 1 &{} \alpha _{\mathsf {e}}\end{pmatrix},\qquad N_{\mathsf {e}}=\begin{pmatrix} 0 &{} 0\\ \gamma _{\mathsf {e}}&{} \beta _{\mathsf {e}}\\ \end{pmatrix}. \end{aligned}$$

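The equivalence of (5.12) and the first-order system for \(U_{\mathsf {e}}=(u_{\mathsf {e}}', {\dot{u}}_{\mathsf {e}})^\top \) can be verified symbolically; the following SymPy sketch (ours, not part of the paper) checks that the residual of the first-order system reduces to the residual of the wave-type equation:

```python
import sympy as sp

t, x = sp.symbols('t x')
al, be, ga = sp.symbols('alpha beta gamma', real=True)
u = sp.Function('u')(t, x)

# U = (u_x, u_t), M and N as in the text
U = sp.Matrix([u.diff(x), u.diff(t)])
M = sp.Matrix([[0, 1], [1, al]])
N = sp.Matrix([[0, 0], [ga, be]])

res = U.diff(t) - (M * U.diff(x) + N * U)

# First component is trivially zero (equality of mixed partials).
assert sp.simplify(res[0]) == 0

# Second component equals the residual of (5.12).
wave = u.diff(t, 2) - u.diff(x, 2) - al*u.diff(t).diff(x) \
       - be*u.diff(t) - ga*u.diff(x)
assert sp.simplify(res[1] - wave) == 0
```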
As \(M_{\mathsf {e}}\) is symmetric, Assumption 2.2 is automatically satisfied by choosing \(Q_{\mathsf {e}}\) as the identity matrix. As before, the boundary conditions at the vertices are related to the values of \(M_{\mathsf {e}}\) at the endpoints of the edge \({\mathsf {e}}\), which are generically given by

$$\begin{aligned} M_{\mathsf {e}}({\mathsf {v}})=\begin{pmatrix} 0&{} 1 \\ 1 &{} \alpha _{\mathsf {e}}({\mathsf {v}}) \end{pmatrix}, \end{aligned}$$

when \({\mathsf {v}}\) is one of the endpoints of \({\mathsf {e}}\); hence, \(M_{\mathsf {e}}({\mathsf {v}})\) has two real eigenvalues of opposite sign,

$$\begin{aligned} \lambda _\pm =\frac{1}{2}\left( \alpha _{\mathsf {e}}({\mathsf {v}})\pm \sqrt{\alpha _{\mathsf {e}}({\mathsf {v}})^2+4}\right) . \end{aligned}$$
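Since \(\det M_{\mathsf {e}}({\mathsf {v}})=-1\), the two eigenvalues indeed always have opposite signs, regardless of \(\alpha _{\mathsf {e}}({\mathsf {v}})\); a brief numerical check (ours, for a few sample values of \(\alpha _{\mathsf {e}}({\mathsf {v}})\)):

```python
import numpy as np

# For any real a, M = [[0, 1], [1, a]] has det = -1, hence one positive
# and one negative real eigenvalue, matching lambda_pm above.
for a in (-3.0, 0.0, 0.7, 5.0):
    M = np.array([[0.0, 1.0], [1.0, a]])
    lam = np.linalg.eigvalsh(M)     # symmetric => real, ascending
    closed = [(a - np.sqrt(a**2 + 4))/2, (a + np.sqrt(a**2 + 4))/2]
    assert np.allclose(lam, closed)
    assert lam.min() < 0 < lam.max()
```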

We then need J boundary conditions at the common node \({\mathsf {v}}_0\) and one boundary condition at each endpoint \({\mathsf {v}}_i\), \(i=1,\dots , J\).

For an exterior vertex \({\mathsf {v}}_i\) (\(i=1,\dots , J\)), we choose the Dirichlet boundary condition

$$\begin{aligned} u_{{\mathsf {e}}_i}({\mathsf {v}}_i)=0, \end{aligned}$$

which leads to \({\dot{u}}_{{\mathsf {e}}_i}({\mathsf {v}}_i)=0\) and corresponds to choosing \(Y_{{\mathsf {v}}_i}\) spanned by \((1,0)^\top \), which is a totally isotropic subspace associated with \(T_{{\mathsf {v}}_i}\), where \(T_{{\mathsf {v}}_1} = -M_{{\mathsf {e}}_1}({\mathsf {v}}_1)\) and \(T_{{\mathsf {v}}_i} = M_{{\mathsf {e}}_i}({\mathsf {v}}_i)\) for \(i=2,\dots ,J\). We refer to [28, §5.2] for other boundary conditions at the exterior vertices.

Now, inspired by [14, 24], we impose the following boundary conditions at \({\mathsf {v}}_0\), namely continuity of \(u_{\mathsf {e}}\) at \({\mathsf {v}}_0\) and

$$\begin{aligned} { -\sum _{i=1}^J u_{{\mathsf {e}}_i}'({\mathsf {v}}_0) \iota _{{\mathsf {v}}_0{{\mathsf {e}}_i}}= \delta \ddot{u}_{{\mathsf {e}}_1}({\mathsf {v}}_0), } \end{aligned}$$
(5.13)

for some positive constant \(\delta \). Let us check that such a boundary condition corresponds to a dynamical one. Indeed, the continuity condition of \(u_{\mathsf {e}}\) at \({\mathsf {v}}_0\) implies that

$$\begin{aligned} \gamma _{{\mathsf {v}}_0}(U)\!\in \! Y _{{\mathsf {v}}_0}\! :=\! \{(x,y)^{\!\top }\,:\,x\in {\mathbb {C}}^J, y=\alpha {{\mathbf {1}}}, \alpha \in {\mathbb {C}}\}=\left( \!{\mathbb {C}}^{J}\oplus \{{\mathbf{0}^\top }\}\!\right) \oplus {{\,\mathrm{span}\,}}\{({{\mathbf {0}}}, {{\mathbf {1}}})^{\!\top }\}, \end{aligned}$$

where we write

$$\begin{aligned} \gamma _{{\mathsf {v}}_0}(U) :=(u'_{{\mathsf {e}}_1}({\mathsf {v}}_0), \dots , u'_{{\mathsf {e}}_J}({\mathsf {v}}_0), {\dot{u}}_{{\mathsf {e}}_1}({\mathsf {v}}_0), \dots , {\dot{u}}_{{\mathsf {e}}_J}({\mathsf {v}}_0))^\top , \end{aligned}$$

and \({{\mathbf {1}}}, {{\mathbf {0}}}\) are the row vectors in \({\mathbb {C}}^{J}\) all of whose entries equal 1 and 0, respectively. In order to formulate (5.13) in our setting, we set

$$\begin{aligned} Y^{(d)} _{{\mathsf {v}}_0} := {{\,\mathrm{span}\,}}\{ {({{\mathbf {0}}}, {\mathbf{1}})}^\top \}\subset Y _{{\mathsf {v}}_0}, \end{aligned}$$

and introduce \(B_{{\mathsf {v}}_0}\) as the \(2J\times 2J\) matrix

$$\begin{aligned} { {B}_{{\mathsf {v}}_0} := -\frac{1}{\delta } \left( \begin{array}{llll} {{\mathbf {0}}}&{}\quad {{\mathbf {0}}}\\ \vdots &{}\quad \vdots \\ {{\mathbf {0}}}&{}\quad {{\mathbf {0}}}\\ \iota _{{\mathsf {v}}_0,*}&{}\quad {{\mathbf {0}}}\\ \vdots &{}\quad \vdots \\ \iota _{{\mathsf {v}}_0,*}&{}\quad {{\mathbf {0}}}\\ \end{array} \right) , } \end{aligned}$$

where \(\iota _{{\mathsf {v}}_0,*}\) is the row of the incidence matrix \({\mathcal {I}}\) corresponding to \({\mathsf {v}}_0\). We then readily see that (5.13) is equivalent to

$$\begin{aligned} {{\dot{{\mathsf {x}}}}}_{{\mathsf {v}}_0}= B_{{\mathsf {v}}_0} \gamma _{{\mathsf {v}}_0}(U), \end{aligned}$$

where \({{\mathsf {x}}_{{\mathsf {v}}_0}}=P^{(d)}_{{\mathsf {v}}_0} \gamma _{{\mathsf {v}}_0}(U)\), recalling the continuity condition of \({\dot{u}}_{\mathsf {e}}\) at \({\mathsf {v}}_0\).
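The bookkeeping behind this equivalence can be illustrated numerically. In the following sketch (ours, not from the paper) we take \(J=3\), an assumed incidence row \(\iota _{{\mathsf {v}}_0,*}=(1,-1,-1)\), and an arbitrary common trace value for \({\dot{u}}_{\mathsf {e}}({\mathsf {v}}_0)\), and check that the dynamic components of \(B_{{\mathsf {v}}_0}\gamma _{{\mathsf {v}}_0}(U)\) reproduce \(-\frac{1}{\delta }\sum _i u'_{{\mathsf {e}}_i}({\mathsf {v}}_0)\iota _{{\mathsf {v}}_0{\mathsf {e}}_i}\), i.e., the right-hand side \(\ddot{u}_{{\mathsf {e}}_1}({\mathsf {v}}_0)\) of (5.13):

```python
import numpy as np

J, delta = 3, 2.0
iota = np.array([1.0, -1.0, -1.0])   # assumed orientation of edges at v0

# B_{v0}: J zero rows, then J rows equal to -iota/delta (paired with zeros)
B = np.zeros((2*J, 2*J))
B[J:, :J] = -iota / delta

rng = np.random.default_rng(0)
u_prime = rng.standard_normal(J)         # traces u'_{e_i}(v0)
u_dot = np.full(J, 0.37)                 # common value: continuity at v0
gamma = np.concatenate([u_prime, u_dot]) # trace vector gamma_{v0}(U)

lhs = B @ gamma
rhs = -(iota @ u_prime) / delta          # = u''_tt at v0, by (5.13)
assert np.allclose(lhs[:J], 0.0)         # stationary slots untouched
assert np.allclose(lhs[J:], rhs)         # dynamic slots carry (5.13)
```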

Now, by taking \(Q_{{\mathsf {v}}_0}=\delta \), we notice that the boundary term in (3.4) corresponding to \({\mathsf {v}}_0\) is equal to

$$\begin{aligned} \frac{|{\dot{u}}_{\mathsf {e}}({\mathsf {v}}_0)|^2}{2}\sum _{i=1}^J\alpha _{{\mathsf {e}}_i} ({\mathsf {v}}_0) \iota _{{\mathsf {v}}_0{\mathsf {e}}_i} = \frac{|P^{(d)}_{{\mathsf {v}}_0} \gamma _{{\mathsf {v}}_0}(U)|^2}{2}\sum _{i=1}^J \alpha _{{\mathsf {e}}_i} ({\mathsf {v}}_0)\iota _{{\mathsf {v}}_0{\mathsf {e}}_i}. \end{aligned}$$

Finally, we readily check that \({{\,\mathrm{Ran}\,}}B_{{\mathsf {v}}_0}= Y_{{\mathsf {v}}_0}^{(d)}\), hence by (3.12) and since \({{\,\mathrm{Ran}\,}}B_{{\mathsf {v}}_0}^*=Y_{{\mathsf {v}}_0}^{(d)}\) as well, we find

$$\begin{aligned} Z_{{\mathsf {v}}_0}= {\{{{\mathbf {0}}}^\top \} \oplus {\mathbb {C}}^J.} \end{aligned}$$

Further, as \(Z_{{\mathsf {v}}_i} = Y_{{\mathsf {v}}_i}^\perp \) and \(\sum _{i=1}^J {{\widetilde{Y}}}_{{\mathsf {v}}_i}^\perp ={{\mathbb {C}}^{J}\oplus \{{{\mathbf {0}}}^\top \}}\), (3.13) holds for \(k=2J\) and we conclude that system (5.12) with the previous boundary conditions is governed by a group.

In conclusion, owing to Theorem 3.3, the system is well-posed. More precisely, the initial value problem associated with (5.12) with the above boundary conditions is governed by a strongly continuous group on \(L^2_d({\mathcal {G}})\).

As before, according to Proposition 4.2, the semigroup is real since all involved constants are real; but again, positivity and \(\infty \)-contractivity cannot be checked by our abstract results, since \(M_{\mathsf {e}}\) is not diagonal.

We have discussed in [28] how our formalism can be used to study networks of beams under rather general transmission conditions of stationary type. We refrain from elaborating on this topic, but it should by now be clear to the reader that suitable, different choices of \(Y_{\mathsf {v}}\) (cf. Remark 3.2), and of course suitable choices of \(Y^{(d)}_{\mathsf {v}}\), promptly lead to models of networks of beams with dynamic transmission conditions, which can then be studied by our theory. We mention that comparable well-posedness results have been recently obtained in [23].

5.5 The Dirac equation

The 1D Dirac equation on a network, as studied in [12], takes on each edge the form

$$\begin{aligned} \imath \hbar \frac{\partial }{\partial t} \psi =\left( \hbar c\begin{pmatrix}0 &{} -1\\ 1 &{} 0\end{pmatrix} \frac{\partial }{\partial x} +mc^2 \begin{pmatrix}1 &{} 0\\ 0 &{} -1\end{pmatrix}\right) \psi \ \end{aligned}$$

for a \({\mathbb {C}}^2\)-valued unknown \(\psi =(\psi ^{(1)},\psi ^{(2)})\). A parametrization of skew-adjoint realizations on a network was presented in [12]; in [28], we took advantage of our theory and provided further realizations generating (semi)groups, since our Assumption 2.2 is satisfied by letting

$$\begin{aligned} M_{\mathsf {e}}=\begin{pmatrix} 0 &{} \imath c\\ -\imath c &{} 0 \end{pmatrix},\quad Q_{\mathsf {e}}=\begin{pmatrix} 1 &{} 0\\ 0 &{} 1 \end{pmatrix},\quad \hbox {and}\quad N_{\mathsf {e}}=\begin{pmatrix} - \imath \frac{mc^2}{\hbar } &{} 0\\ 0 &{} \imath \frac{mc^2}{\hbar } \end{pmatrix},\qquad {\mathsf {e}}\in {\mathsf {E}}. \end{aligned}$$

Let us now study the quadratic form \(q_{\mathsf {v}}\), cf. (3.3). We first observe that \(T_{\mathsf {v}}\) is a \(2|{\mathsf {E}}_{\mathsf {v}}|\times 2|{\mathsf {E}}_{\mathsf {v}}|\) block-diagonal matrix with diagonal blocks equaling \(M_{\mathsf {e}}\iota _{{\mathsf {v}}{\mathsf {e}}}\). Hence, if we write

$$\begin{aligned} \gamma _{\mathsf {v}}(U):=(\psi ^{(1)}_{{\mathsf {e}}}({\mathsf {v}}), \psi ^{(2)}_{{\mathsf {e}}}({\mathsf {v}}))_{{\mathsf {e}}\in {\mathsf {E}}_{{\mathsf {v}}}}, \end{aligned}$$

then \((\xi ,\eta )^\top \in {\mathbb {C}}^{2|{\mathsf {E}}_{\mathsf {v}}|}\), with \(\xi := (\psi ^{(1)}_{\mathsf {e}}({\mathsf {v}}))_{{\mathsf {e}}\in {\mathsf {E}}_{\mathsf {v}}}\) and \(\eta :=(\psi ^{(2)}_{\mathsf {e}}({\mathsf {v}}))_{{\mathsf {e}}\in {\mathsf {E}}_{\mathsf {v}}}\) is an isotropic vector for the associated quadratic form \(q_{\mathsf {v}}\) if and only if

$$\begin{aligned} \sum _{{\mathsf {e}}\in {\mathsf {E}}_{\mathsf {v}}} \iota _{{\mathsf {v}}{\mathsf {e}}} \Im (\xi _{{\mathsf {e}}} \bar{\eta }_{{\mathsf {e}}})=0. \end{aligned}$$
(5.14)
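The computation behind (5.14) is elementary: on a single edge the Hermitian form of the block \(M_{\mathsf {e}}\iota _{{\mathsf {v}}{\mathsf {e}}}\) evaluates to \(2c\,\iota _{{\mathsf {v}}{\mathsf {e}}}\,\Im (\xi _{\mathsf {e}}{{\bar{\eta }}}_{\mathsf {e}})\), so isotropy of \((\xi ,\eta )^\top \) amounts to the vanishing of the sum over \({\mathsf {E}}_{\mathsf {v}}\). A quick numerical check (ours, with \(c=1\) and random traces):

```python
import numpy as np

c = 1.0
M = np.array([[0, 1j*c], [-1j*c, 0]])   # edgewise block of T_v, up to iota

rng = np.random.default_rng(1)
for iota in (1, -1):
    xi, eta = rng.standard_normal(2) + 1j*rng.standard_normal(2)
    x = np.array([xi, eta])
    form = np.vdot(x, (iota*M) @ x)     # x^* (iota M) x; real since M = M^*
    assert abs(form.imag) < 1e-12
    # matches 2*c*iota*Im(xi * conj(eta)), the summand in (5.14)
    assert np.isclose(form.real, 2*c*iota*(xi*np.conj(eta)).imag)
```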

A somewhat canonical choice is that of conditions of continuity and of Kirchhoff-type on \(\psi ^{(1)}\) and \(\psi ^{(2)}\), respectively, at each \({\mathsf {v}}\in {\mathsf {V}}\); this fits in our abstract framework by letting

$$\begin{aligned} Y_{\mathsf {v}}:={{\,\mathrm{span}\,}}\{{{\mathbf {1}}}_{{\mathsf {E}}_{\mathsf {v}}}\} \oplus {{\,\mathrm{span}\,}}\left\{ \iota _{{\mathsf {E}}_{\mathsf {v}}}\right\} ^\perp \end{aligned}$$

(we recall that \(\mathbf {\iota }_{{\mathsf {E}}_{\mathsf {v}}}\) denotes the vector in \({\mathbb {C}}^{|{\mathsf {E}}_{\mathsf {v}}|}\) whose \({\mathsf {e}}\)-th entry is \(\iota _{{\mathsf {v}}{\mathsf {e}}}\)); this choice is easily seen to lead to a hyperbolic system governed by a unitary group. Further instances of the Dirac equation governed by a unitary group, and hence with a quantum mechanical significance, can easily be produced by applying the theory presented above; we will only focus on one such realization. Keeping the continuity property of \(\psi ^{(1)}\) at the vertices, we here take

$$\begin{aligned} Y_{\mathsf {v}}:={{\,\mathrm{span}\,}}\{{{\mathbf {1}}}_{{\mathsf {E}}_{\mathsf {v}}}\}\oplus {\mathbb {C}}^{{|{\mathsf {E}}_{\mathsf {v}}|}}, \end{aligned}$$
(5.15)

and we let

$$\begin{aligned} Y^{(d)}_{\mathsf {v}}:={{\,\mathrm{span}\,}}\{{{\mathbf {1}}}_{{\mathsf {E}}_{\mathsf {v}}}\}\oplus \{\mathbf{0}_{{\mathsf {E}}_{\mathsf {v}}}\}\subset Y_{\mathsf {v}}. \end{aligned}$$

Let us finally define

$$\begin{aligned} B_{\mathsf {v}}:Y_{\mathsf {v}}\ni (\xi ,\eta )^\top \mapsto -\imath (\eta \cdot \iota _{{\mathsf {E}}_{\mathsf {v}}}) ({{\mathbf {1}}}_{{\mathsf {E}}_{\mathsf {v}}}\oplus {{\mathbf {0}}}_{{\mathsf {E}}_{\mathsf {v}}}) \in Y_{\mathsf {v}}^{(d)} \end{aligned}$$

and

$$\begin{aligned} {C_{\mathsf {v}}}: Y^{(d)}_{\mathsf {v}}\ni (\xi ,0)^\top \mapsto ( {C_{\mathsf {v}}}^{(1)}\xi ,0)^\top \in Y^{(d)}_{\mathsf {v}}, \end{aligned}$$

for any skew-Hermitian \(|{\mathsf {E}}_{\mathsf {v}}|\times |{\mathsf {E}}_{\mathsf {v}}|\)-matrix \(C_{\mathsf {v}}^{(1)}\). This corresponds to imposing

  • continuity conditions across each vertex on \(\psi ^{(1)}\) as well as

  • dynamic conditions

    $$\begin{aligned} \frac{d\psi ^{(1)}}{dt}(t,{\mathsf {v}})= { - \imath \sum _{{\mathsf {e}}\in {\mathsf {E}}_{{\mathsf {v}}}} \psi ^{(2)}_{{\mathsf {e}}}(t,{\mathsf {v}}) \iota _{{\mathsf {v}}{\mathsf {e}}}+C_{\mathsf {v}}^{(1)}\psi ^{(1)}(t,{\mathsf {v}}),}\qquad {\mathsf {v}}\in {\mathsf {V}}. \end{aligned}$$

Observe that

$$\begin{aligned} \dim Y_{\mathsf {v}}= 1+ {|{\mathsf {E}}_{\mathsf {v}}|}, \qquad \dim Y^{(d)}_{\mathsf {v}}= 1, \end{aligned}$$
(5.16)

but this is not sufficient to guarantee (3.13) and thus (3.11). As \(B_{\mathsf {v}}\) is surjective, simple calculations show that (using the parametrization of the edges so that for both of them, \({\mathsf {v}}_1\) is identified with 0 and \({\mathsf {v}}_2\) is identified with 1)

$$\begin{aligned} Z_{{\mathsf {v}}_1}=Z_{{\mathsf {v}}_2}={{\,\mathrm{span}\,}}\{(1,0,-1,0)^\top , (0,1,0,1)^\top \}, \end{aligned}$$

hence (3.11) cannot hold.

However, by taking \(Q_{{\mathsf {v}}}:=c {\mathbb {I}}\) at each \({\mathsf {v}}\in {\mathsf {V}}\), one can show that \({\mathbb {A}}^*=-{\mathbb {A}}\). Furthermore, the boundary terms in (3.6) vanish. Indeed, this is clear by assumption for the term involving \(C^{(1)}\), whereas the remaining boundary term equals

$$\begin{aligned} -c\sum _{{\mathsf {e}}\in {\mathsf {E}}_{{\mathsf {v}}}}\iota _{{\mathsf {v}}{\mathsf {e}}} \Im (\xi _{{\mathsf {e}}} \bar{\eta }_{{\mathsf {e}}}) -\imath c \Re ((\eta \cdot \iota _{{\mathsf {E}}_{\mathsf {v}}}) ({\mathbf {1}}_{{\mathsf {E}}_{\mathsf {v}}}\cdot \xi ))=0\qquad \hbox {for all }{\mathsf {v}}\in {\mathsf {V}}, \end{aligned}$$

since the entries \(\xi _{\mathsf {e}}\) share a common value whenever \((\xi ,\eta )^\top \in Y_{\mathsf {v}}\), and \(\Im (z)=-\Re (\imath z)\) for all \(z\in {\mathbb {C}}\). Hence, we can invoke Corollary 3.10 and Remark 3.11 with \(\alpha =\beta =0\) and deduce that \({\mathbb {A}}\) generates a unitary group on \({\mathbf {L}}^2_d({\mathcal {G}})\). This is a new unitary realization of the Dirac equation that does not appear in the classification in [12], as the latter is restricted to stationary vertex conditions.

By Proposition 4.2, this semigroup is not real, hence not positive either. On the other hand, Proposition 4.4 does not apply, although, as mentioned in Remark 4.5, it looks rather plausible that no realization of the Dirac equation is governed by an \(\infty \)-contractive semigroup.