Abstract
Linear evolution equations are usually considered with the time variable defined on an interval, where typically initial conditions or time periodicity of solutions is imposed to single out certain solutions. Here, we would like to make a point of allowing time to be defined on a metric graph or network, where coupling conditions are imposed at the branching points, so that time can have ramifications and even loops. This not only generalizes the classical setting and allows for more freedom in the modeling of coupled and interacting systems of evolution equations, but it also provides a unified framework for initial value and time-periodic problems. For these time-graph Cauchy problems we study well-posedness and regularity of solutions in the parabolic case, along with the question of which time-graph Cauchy problems cannot be reduced to an iteratively solvable sequence of Cauchy problems on intervals. Based on two different approaches—an application of the Kalton–Weis theorem on the sum of closed operators and an explicit computation of a Green’s function—we present the main well-posedness and regularity results. We further study some qualitative properties of solutions. While we mainly focus on parabolic problems, we also explain how other Cauchy problems can be studied along the same lines. This is exemplified by discussing coupled systems with constraints that are non-local in time, akin to periodicity.
Introduction
Time has classically been considered as a linear phenomenon, especially in western cultures. This has been clearly mirrored in the physical description of the world, all the way from ancient Greek philosophy to modern partial differential equations of mathematical physics. Many real world phenomena can be—more or less naively—modeled as abstract Cauchy problems
such as the heat, transport or Schrödinger equation, which are classically considered with the time variable t ranging in a finite interval [0, a] or a half-line \([0,\infty )\); a unique solution can only be singled out once an initial condition \(\psi (0)=g\) is imposed. Here, for simplicity, one may have in mind a sectorial operator \(A\) in a Hilbert space X.
The western philosophy has, ever since Aristotle [10] and perhaps Heraclitus, most commonly regarded time as a linear instance that allows one to order events according to the notions of before and after: Similar ideas are also typical in western monotheistic religions. It is folklore that, throughout the world, different cultures have had diverging approaches to the interpretation of time: Some religions of Indian origin—most notably Hinduism and Jainism, unlike Buddhism [4, 11]—postulate that time consists of ages featuring repeating patterns, leading to a cyclic existence described by the K\(\bar{a}\)lacakra; but also the cosmological implications of the Xiuhmolpilli (52-year cycles in the Aztec calendar) or the Bak’tun (144,000-day cycles in the Maya calendar) suggest a cyclic understanding of time [35], with such cycles conveniently clocking existence. This does not necessarily lead to mathematical clashes: Indeed, if the time variable t is cyclic and hence lives in a torus \(\mathbb {S}^1\) or the full real line \(\mathbb {R}\), then looking for solutions of (1.1) amounts to inquiring about the existence of periodic solutions.
In each case the time domain is an oriented one-dimensional manifold; thus, there is a clear direction at each point in time and a well-defined time before and after it. Going beyond this, there are different perceptions of time, expressed for instance in the multiverse interpretation of quantum mechanics or in the discussions on closed timelike curves in general relativity. More recently, the theoretical physicist Carlo Rovelli has been advocating the necessity of giving up even the weakly ordered structure offered by Albert Einstein’s conception of time. He writes in [38, Chapter 6]:
None of the pieces that time has lost (singularity, direction, independence, the present, continuity) puts into question the fact that the world is a network of events. On the one hand, there was time, with its many determinations; on the other, the simple fact that nothing is: things happen.
The absence of the quantity “time” in the fundamental equations does not imply a world that is frozen and immobile. On the contrary, it implies a world in which change is ubiquitous, without being ordered by Father Time; without innumerable events being necessarily distributed in good order, or along the single Newtonian time line, or according to Einstein’s elegant geometry.
In this article, we would like to invite the reader to participate in a thought experiment and to assume that time does not consist of a one-dimensional manifold, but rather of a metric graph or network. Such ramified structures consist—roughly speaking—of intervals glued together at their endpoints and allow for more freedom in the modeling of evolutionary systems in real and some possibly hypothetical applications. The purpose of this note is to widen the scope of classical evolution equations and to show how graphs can be used to model time evolution. The main idea and recurrent motive is to consider initial conditions as boundary conditions in time: We will make this more precise in the following.
We notice in passing that there do exist classical settings where the notion of one-dimensional time is generalized: In the context of analytic semigroups, time is allowed to lie in a sector of the complex plane, as sketched in Fig. 1d. This has a plethora of pleasant mathematical consequences, but it is not evident how to make sense of it physically. Instead, we reckon that allowing time to live on network-like structures may have a practical interpretation, as will be discussed in terms of examples.
From initial conditions to boundary conditions in time
To begin with, considering the classical cases illustrated in Fig. 1a–c, one first notices that for the real line or the torus there are no initial conditions; in fact, adding initial conditions would overdetermine the system. For a bounded interval or the half-line, the initial value problem can be decomposed, using linearity, into two separate problems
Both equations can be analyzed in terms of semigroup theory: If A generates a \(C_0\)-semigroup, then the mild solutions to these equations are given by the variation of constants formula and the semigroup, i.e.,
where the solution to the inhomogeneous initial value problem is \(\psi =\psi _f +\psi _0\) and the solution space depends on the regularity of the data.
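In finite dimensions, where the semigroup is simply the matrix exponential, the variation of constants formula can be sanity-checked numerically. The following sketch, with a made-up matrix \(A\), force \(f\) and initial value \(g\) (none of which appear in the paper), verifies that \(\psi (t)=e^{tA}g+\int _0^t e^{(t-s)A}f(s)\,\mathrm{d}s\) satisfies \(\psi '=A\psi +f\):

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad_vec

# Made-up data: a 2x2 matrix A (so e^{tA} is a plain matrix exponential),
# an initial value g, and a smooth force f.
A = np.array([[-1.0, 1.0], [0.0, -0.5]])
g = np.array([1.0, 2.0])

def f(s):
    return np.array([np.sin(s), np.cos(s)])

def psi(t):
    # Variation of constants: psi(t) = e^{tA} g + int_0^t e^{(t-s)A} f(s) ds
    integral, _ = quad_vec(lambda s: expm((t - s) * A) @ f(s), 0.0, t)
    return expm(t * A) @ g + integral

# psi solves psi' = A psi + f, checked by a centered difference:
t, h = 0.7, 1e-3
dpsi = (psi(t + h) - psi(t - h)) / (2 * h)
assert np.allclose(dpsi, A @ psi(t) + f(t), atol=1e-4)
```

The same computation with \(g=0\) gives \(\psi _f\), and with \(f=0\) gives \(\psi _0\), matching the decomposition above.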
The problem on an interval (0, a) with periodicity conditions exhibits similarities with the first equation in (1.2), and it can be written as
This already indicates which possible ‘initial conditions’—or rather ‘inhomogeneous boundary conditions in time’—can be imposed; namely, one can solve
This means there is no freedom left for initial conditions, but one is free to choose any fixed jump condition \(\psi (0)-\psi (a)=g\), and the solution can be expressed (provided \(\mathbb {1} - e^{a \, A}\) is invertible) as
which solves (1.4) on (0, a). This solution can be extended to the full real line; it then solves
In particular, this extension does not lift to a solution on the torus. So, in order to interpret time as a loop, one has to consider the periodic extension of (1.5). This is in general a discontinuous periodic function on \(\mathbb {R}\), which then lifts to a function on the torus.
The regularity of \(\psi _0\) given in (1.5) clearly depends on the regularity of \((\mathbb {1}-e^{a\, A})^{-1}g\), and therefore on g, as well as on the mapping properties of \((\mathbb {1}-e^{a\, A})^{-1}\). The usual notions of mild, strong and classical solutions, defined edgewise in the same way as, for example, in [13, 16], extend naturally to the setting of metric graphs in time, see also Sect. 6.5.
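For a finite-dimensional toy model one can check formula (1.5) directly: with a made-up matrix \(A\) and jump vector \(g\) (chosen here purely for illustration), the function \(\psi _0(t)=e^{tA}(\mathbb {1}-e^{aA})^{-1}g\) satisfies both the homogeneous equation and the jump condition:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical data: a 2x2 matrix A whose eigenvalues -1, -2 guarantee
# that 1 - e^{aA} is invertible, a period a, and a jump vector g.
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
a = 1.0
g = np.array([1.0, -1.0])

M = np.linalg.inv(np.eye(2) - expm(a * A))  # (1 - e^{aA})^{-1}

def psi0(t):
    # Candidate solution e^{tA} (1 - e^{aA})^{-1} g from (1.5)
    return expm(t * A) @ M @ g

# Jump condition psi(0) - psi(a) = g:
assert np.allclose(psi0(0.0) - psi0(a), g)

# psi' = A psi, checked by a centered difference at an interior time:
h = 1e-5
assert np.allclose((psi0(0.3 + h) - psi0(0.3 - h)) / (2 * h),
                   A @ psi0(0.3), atol=1e-6)
```

The jump identity holds exactly, since \(\psi _0(0)-\psi _0(a)=(\mathbb {1}-e^{aA})(\mathbb {1}-e^{aA})^{-1}g=g\).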
Considering only the first equation in (1.2), it can be solved—instead of using the variation of constants formula—by means of operator theory, namely by finding realizations of \(\partial _t\) with initial condition \(\psi (0)=0\) such that the sum of closed operators \(\partial _t - A\) is invertible. For \(L^p\)-spaces in time this approach has been carried out successfully; the essential ingredient is the theorem of Kalton and Weis on the sum of closed operators. Similarly, Eq. (1.3) can be solved by considering a periodic realization of the time derivative.
Time-graph Cauchy problem
Consider again evolution equations whose time domains are intervals, as in Fig. 1a–c: Both under initial and periodicity conditions, they can be split into a part with force but homogeneous boundary condition in time, and a part without force but with inhomogeneous boundary condition in time. We therefore consider finitely many inhomogeneous evolution equations
on time intervals of length \(a_i>0\), \(i=1,\ldots ,n\), where we assume that \(A_i\) are generators of analytic semigroups in Hilbert spaces \(X_i\), \(f_i\in L^2(0,a_i;X_i)\) are given, and the coupling is defined by
where \(\mathbb {B}\) is a bounded operator in \(X_1\oplus \cdots \oplus X_n\) which encodes the geometry of the graph by means of transmission conditions, and \(g_i\in X_i\) are given ‘inhomogeneous boundary conditions in time,’ in analogy to the fixed jump conditions for the periodic case. This class of time-graph Cauchy problems comprises the classical settings: the classical initial value problem corresponds to \(\mathbb {B}=0\), and the time-periodic problem is given by \(\mathbb {B}=\mathbb {1}\) with \(g_i=0\) for \(i=1, \ldots , n\).
We present two strategies to solve this problem: First, when all \(g_i=0\), one can apply the Kalton–Weis theorem on sums of closed operators to suitable time and space operators. Second, going beyond this, explicit formulae in terms of semigroups and transmission conditions as in (1.5) can be derived by a Green’s function Ansatz, interpreting the system \(\partial _t - A_i\) as a system of vector-valued ordinary differential equations in time in which inhomogeneous boundary conditions in time are included.
First examples, results and outlook
As a next step toward more nonstandard examples, one can extend the timeperiodic situation: Instead of pure periodicity, we may for instance impose a phase shift after one time period \(a>0\), i.e.,
which corresponds to \(\mathbb {B}=\alpha \cdot \mathbb {1}\) and \(g=0\). As we will see later in Sect. 6, the solution to this problem is given—provided that A generates an analytic semigroup and the operator \(\mathbb {1}-\alpha e^{aA}\) is invertible—by
In particular, extending this to the real line, one has
and the phase shift occurs only after the first time period starting at 0 and ending at a, i.e., for \(n=0\), as sketched in Fig. 2b. If instead one considers \(\psi \) as a function on [0, a) and extends it periodically to \(\mathbb {R}\) by setting \(\psi (t+an)=\psi (t)\) for \(n\in \mathbb {Z}\), then for \(\alpha \ne 1\) this \(\psi \) is a discontinuous periodic function with the additional property that
and this can be represented in Fig. 2a. Another model is sketched in Fig. 2c. Here, time is represented by the real line, a phase shift with phase \(\alpha _1\) occurs at time \(a_1\), and a second phase shift with phase \(\alpha _2\) occurs at time \(a_2\).
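For a scalar toy generator \(A=i\omega \) (only a formal illustration here, since \(i\omega \) does not generate an analytic semigroup; Sect. 8 discusses Schrödinger-type extensions), the invertibility of \(\mathbb {1}-\alpha e^{aA}\) becomes a resonance condition: it fails precisely when \(\alpha e^{i\omega a}=1\). A quick check with made-up numbers:

```python
import cmath

# Phase-shift condition psi(0) = alpha * psi(a) with the scalar generator
# A = i*omega; omega and a are made up for illustration.
omega, a = 2.0, 1.0

# The resonant phase alpha = e^{-i omega a} makes 1 - alpha e^{aA} vanish:
resonant_alpha = cmath.exp(-1j * omega * a)
assert abs(1 - resonant_alpha * cmath.exp(1j * omega * a)) < 1e-12

# Any other phase on the unit circle stays safely away from resonance:
alpha = cmath.exp(0.3j)
assert abs(1 - alpha * cmath.exp(1j * omega * a)) > 1e-3
```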
To illustrate various features of time graphs one can consider the graphs depicted in Fig. 3. Building on the initial example of time periodicity, one can take its state at a certain time as input to a new system. This would correspond to the tadpole-like graph in Fig. 3a with matching of the type
where \(\psi _1\) lives on the loop and \(\psi _2\) lives on the adjacent interval.
More generally, basic building blocks are the joining and the splitting of two systems—as depicted in Fig. 3b, c—which can be used to describe a system that splits into two non-interacting dynamics, or two systems that interact after some time by means of some superposition. These blocks can be assembled to form graphs with cycles, see Fig. 3d. Similarly, one may think of the interaction of various periodic systems with dynamics on the time line, see Fig. 3e, which shares some features with Fig. 3a, d.
Time-graph Cauchy problems can be understood as a system of Cauchy problems on intervals with possibly non-local constraints such as periodicity, fixed jump conditions, or certain symmetries. Since the map \(\mathbb {B}=(\mathbb {B}_{ij})_{1\le i,j\le n}\) is a block operator matrix with \(\mathbb {B}_{ij}{:}\,X_j \rightarrow X_i\), one can rewrite (1.6) as
that is, a Cauchy problem is assigned on each interval and their ‘jump conditions’ are interdependent. If \(\mathbb {B}_{jj}\ne 0\), the Cauchy problem on \((0,a_j)\) is non-local and resembles periodicity, while for \(\mathbb {B}_{jj}=0\) the Cauchy problem on \((0,a_j)\) is an initial value problem.
Time graphs with oriented loops can also be used to model closed-loop systems and other control-theoretic gadgets, cf. [30]. One can also think of signals that after a certain time are processed differently, as illustrated in Fig. 3i. This means that a system changes its character after a certain time. For instance, a heat equation may be followed after a certain time by a transport process, which after a further time turns again into a heat equation, thus modeling time delays in a diffusive process. Moreover, couplings at the vertices of a time graph can be frequency dependent, and thus frequency-dependent dynamics can be modeled, too. Also, there are some more non-standard situations where time graphs come into play. A tree graph as depicted in Fig. 3f can serve as an illustration for the multiverse interpretation of quantum mechanics, where it is assumed that, in contrast to a probabilistic interpretation, each possible state is actually attained, but each in one separate universe. Figure 3g, h illustrates some possibilities of how one may represent time travel—independent of its actual physical possibility—using time graphs, see also Sect. 9.
Our main result states the well-posedness of such time-graph models, under some compatibility assumption on the matrix \(\mathbb {B}\), which encodes the transmission conditions in time, and on the ‘spatial’ operators \(A_i\). In particular, a generalized variation of constants formula is obtained, allowing us to derive additional mapping properties.
The question of whether the time-graph Cauchy problem reduces to a sequence of Cauchy problems on intervals that can be solved iteratively is traced back to the block structure of \(\mathbb {B}\). We point out that loops reflected in the transmission conditions \(\mathbb {B}\) prevent such iterative solvability, so that in these situations one indeed needs tools for global solvability, as in the case of periodicity. The methods developed for parabolic problems can also be adapted to some non-parabolic problems such as Schrödinger equations, wave equations, or even coupled dynamics of different types, such as first- and second-order Cauchy problems, as illustrated in Fig. 3j.
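The iterative-solvability criterion can be phrased algorithmically: block j feeds into block i whenever \(\mathbb {B}_{ij}\ne 0\), and the system of Cauchy problems can be solved interval by interval precisely when this dependency graph is acyclic. The following sketch (a standard topological sort, not taken from the paper) makes this concrete:

```python
# Hypothetical check of iterative solvability: problem i needs the
# terminal value of problem j whenever B[i][j] != 0, so the system can be
# solved one interval after another iff this dependency graph is acyclic.
def solving_order(B):
    """Topological order of the blocks (Kahn's algorithm), or None if a
    cycle in the transmission conditions forces a global solve."""
    n = len(B)
    deps = [{j for j in range(n) if B[i][j] != 0} for i in range(n)]
    order, ready = [], [i for i in range(n) if not deps[i]]
    while ready:
        j = ready.pop()
        order.append(j)
        for i in range(n):
            if j in deps[i]:
                deps[i].discard(j)
                if not deps[i]:
                    ready.append(i)
    return order if len(order) == n else None

# A strictly triangular B (pure "feed-forward" time graph) is iteratively
# solvable; B = identity (time periodicity) is not:
assert solving_order([[0, 0], [1, 0]]) == [0, 1]
assert solving_order([[1, 0], [0, 1]]) is None
```

In particular, a non-zero diagonal block \(\mathbb {B}_{jj}\) is a self-loop and immediately rules out an iterative solution, matching the periodicity-like case described above.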
Organization of the paper
In the subsequent Sect. 2 we recapitulate key elements of the classical theory of evolution equations, some of which are necessary in order to develop our approach to time graphs. Thereafter, in Sect. 3, the notion of networks and function spaces thereon are made precise. In Sect. 4 the Banach space-valued time-derivative operator on graphs with couplings, as well as the spatial operator, are studied. In Sect. 5 the time-graph problem for the case \(g=0\) is tackled using the Kalton–Weis sum theorem on commuting operators, applied to the time-derivative and the spatial operator, where some compatibility assumptions on the boundary conditions are required. Section 6 follows a more direct approach, computing the Green’s function for the time-graph problem explicitly. This gives our main result on the solvability of the time-graph Cauchy problem for g in a trace space under less restrictive compatibility conditions. Section 7 addresses the question under which conditions solutions to time-graph problems can be reduced to Cauchy problems on intervals. In Sect. 8 we discuss a few examples, focusing on specific instances of time graphs and broaching extensions to classes of non-parabolic evolution equations, including Schrödinger, wave and mixed-order equations.
Some of the suggested settings may look mostly motivated by science-fictional or hypothetical physical scenarios, as they may allow for a loss of causality: In Sect. 9 we discuss these and further related aspects by commenting on tentative interpretations of evolution supported on network-type time structures.
Classical Cauchy problems
Many of the methods applied here make use of classical results from the theory of evolution equations and initial value problems. It is well established that the initial value problem
with A being a closed linear operator on a Banach space X has for all \(g\in X\) a unique mild solution if and only if A generates a \(C_0\)-semigroup on X, cf. [5, Thm. 3.1.12], where at least \(f\in L^1(0,a;X)\) is admissible, \(a>0\). If X is a Hilbert space and \(g=0\), the stronger condition of maximal \(L^2\)-regularity amounts to requiring that for every \(f\in L^2(0,a;X)\) there is a unique solution \(\psi \) of (2.1) in the maximal \(L^2\)-regularity space, i.e.,
such that
for a constant \(C>0\) independent of f. Maximal \(L^2\)regularity holds if and only if the semigroup generated by A on X is analytic. This is related to the notion of sectorial operators: considering sectors in the complex plane
recall that a closed densely defined linear operator B is sectorial of angle \(\omega \in (0,\pi )\) if

\(\sigma (B)\subset \overline{\Sigma _\omega }\) and

\(\sup \{ \Vert \lambda (\lambda - B)^{-1}\Vert {:}\,\lambda \in \mathbb {C}{\setminus } \{0\}{,}\ \nu \le |\arg (\lambda )|\le \pi \}<\infty \) for all \(\nu \in (\omega , \pi )\),
cf. [34, Theorem 1.11 ff.]. Note that if \(B=-A\) is sectorial of angle smaller than \(\pi /2\), then A is the generator of an analytic semigroup. In the literature, there are several slightly diverging definitions of sectorial operators. For example, in [16, Definition 4.1] it is the generator itself (rather than its negative) which is called ‘sectorial,’ while in [31, Chapter 3, §3.10] an operator on a Hilbert space is called ‘sectorial’ if its numerical range lies in a sector.
For Banach spaces X of class UMD, maximal \(L^p\)-regularity can be characterized using the notions of \({\mathcal R}\)-sectoriality and \({\mathcal H}^{\infty }\)-calculus, where one implication follows from the Dore–Venni-type sum theorem of Kalton and Weis on commuting operators [33, Thm. 6.3], cited here in Theorem 2.1. The key idea in the original Dore–Venni theorem and its generalizations is to look at evolution equations on a Banach space X as stationary equations on a Bochner space of X-valued functions.
Theorem 2.1
(Sum theorem of Kalton and Weis) Suppose that \(A\in {\mathcal H}^{\infty }(X)\) and \(B\in {\mathcal R}{\mathcal S}(X)\) are commuting operators such that \(\phi _A^{\infty }+\phi _B^R<\pi \). Then, \(A+B\) is closed with domain \(D(A+B)=D(A)\cap D(B)\), \(A+B\in {\mathcal R}{\mathcal S}(X)\) with \(\phi _{A+B}\le \max \{\phi _A^{\infty },\phi _B^R\}\), and for some constant \(C>0\)
The operator \(A+B\) is invertible if A or B is invertible.
In the following we will seldom use this result in its full generality, as we mostly restrict ourselves to the case of Hilbert spaces; we refer the interested reader to the classic monograph [13] by Denk, Hieber and Prüß, where all these notions are introduced. Theorem 2.1 is formulated for a Banach space X. If X is a Hilbert space, however, then the notions of \({\mathcal R}\)-sectoriality and sectoriality agree. We recall that whenever \(A\) is sectorial, the solution \(\psi (t):=e^{-tA}g\) lies in D(A) for all \(t>0\) and all initial data \(g\in X\); moreover, \(\psi \) lies for all \(p\in (1,\infty )\) in the maximal \(L^p\)-regularity space whenever the initial data belong to the trace space, i.e., \(g\in (X,D(A))_{1-1/p,p}\), given by the real interpolation functor \((\cdot ,\cdot )_{\theta ,p}\), cf. [37, § 3.4].
The Ansatz using the Kalton and Weis sum theorem has been applied successfully by Arendt and Bu [1,2,3]. In particular, the fact that both time domains \(\mathbb {R}\) and \({\mathbb {S}}^1\) are groups has allowed them to apply methods of harmonic analysis and to deliver a comprehensive theory of Cauchy problems with time-periodic boundary conditions. A general scheme for periodic and almost periodic solutions to semilinear equations has been proposed by Hieber and coauthors, cf. [18, 27] and also [17, 23, 26, 29], where in particular for applications in fluid mechanics semigroup theory plays an important role, cf. [22]. For a similar approach, where the stationary part is treated separately, and in particular for applications to quasi- and semilinear problems, see also the works of Kyed and coauthors, cf. [9, 14, 32]. Existence of time-periodic solutions for (linear or even nonlinear) hyperbolic equations is well known for a large class of problems, cf. the comprehensive monograph [39].
Finite metric graphs
Finite graphs
A graph is a 4-tuple
where \({{\mathcal {V}}}\) denotes the set of vertices, \({\mathcal I}\) the set of internal edges and \({\mathcal E}\) the set of external edges, with \({\mathcal E}\cap {\mathcal I}=\emptyset \). We refer to elements of the set \({\mathcal E}\cup {\mathcal I}\) collectively as edges. To avoid notational ambiguities, we also assume \({{\mathcal {V}}}\cap {\mathcal E}={{\mathcal {V}}}\cap {\mathcal I}=\emptyset \). In order to fix an orientation, one distinguishes incoming \({\mathcal E}_-\) and outgoing \({\mathcal E}_+\) external edges, where \({\mathcal E}={\mathcal E}_-\cup {\mathcal E}_+\) and \({\mathcal E}_-\cap {\mathcal E}_+=\emptyset \).
The structure of the graph is given by the boundary map \(\partial \). On one hand, it assigns to each internal edge \(i\in {\mathcal I}\) an ordered pair of vertices \(\partial (i)=\left( \partial _-(i),\partial _+(i)\right) \in {{\mathcal {V}}}\times {{\mathcal {V}}}\), where \(\partial _-(i)\) is called its initial vertex and \(\partial _+(i)\) its terminal vertex. On the other hand, each incoming external edge \(e_-\in {\mathcal E}_-\) and each outgoing external edge \(e_+\in {\mathcal E}_+\) is associated by means of \(\partial (e_-)=\partial _-(e_-)\) and \(\partial (e_+)=\partial _+(e_+)\) with a single vertex (its initial and terminal vertex, respectively). A graph is called balanced if \(|{\mathcal E}_-|=|{\mathcal E}_+|\). We will see that orientations play a role only when we study evolution equations that are of first (or, more generally, odd) order in time; for equations of even order in time, orientations are only imposed for the sake of a consistent parameterization. A graph is called finite if \(|{{\mathcal {V}}}|+|{\mathcal I}|+|{\mathcal E}|<\infty \), and a finite graph is called compact if \({\mathcal E}=\emptyset \).
The structure of the network is given by the \({{\mathcal {V}}}\times ({\mathcal E}\cup {\mathcal I})\) outgoing and incoming incidence matrices \(I^+:=(\iota ^+_{\mathsf {v}\mathsf {e}})\) and \(I^-:=(\iota ^-_{\mathsf {v}\mathsf {e}})\) defined by
This encodes the structure of the graph and allows one to define directions on \({\mathcal G}\). The network \({\mathcal G}\) is the directed graph whose signed incidence matrix is \(I:=(\iota _{\mathsf {v}\mathsf {e}})\), defined by \(I:=I^+-I^-\); we will occasionally need the underlying undirected graph, which is fully defined by the (signless) incidence matrix \(I^+ +I^-\). Roughly speaking, a directed graph is the version of the graph where one can move only along the prescribed direction, while in the undirected graph \({\mathcal G}\) one can move in both directions.
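As an elementary illustration, with a made-up directed 3-cycle and under the convention (assumed here) that \(\iota ^-_{\mathsf {v}\mathsf {e}}=1\) if \(\mathsf {v}\) is the initial vertex of \(\mathsf {e}\) and \(\iota ^+_{\mathsf {v}\mathsf {e}}=1\) if it is the terminal vertex, the incidence matrices can be assembled as follows:

```python
import numpy as np

# Tiny made-up directed graph: three vertices and internal edges
# e0 = (v0 -> v1), e1 = (v1 -> v2), e2 = (v2 -> v0), i.e., a 3-cycle.
vertices = [0, 1, 2]
edges = [(0, 1), (1, 2), (2, 0)]     # (initial vertex, terminal vertex)

n_v, n_e = len(vertices), len(edges)
I_minus = np.zeros((n_v, n_e))
I_plus = np.zeros((n_v, n_e))
for e, (v_init, v_term) in enumerate(edges):
    I_minus[v_init, e] = 1           # edge e leaves its initial vertex
    I_plus[v_term, e] = 1            # edge e arrives at its terminal vertex

I_signed = I_plus - I_minus          # signed incidence matrix I = I^+ - I^-
I_signless = I_plus + I_minus        # defines the underlying undirected graph

# Each edge contributes +1 at its terminal and -1 at its initial vertex,
# so every column of the signed matrix sums to zero:
assert np.allclose(I_signed.sum(axis=0), 0)
assert np.allclose(I_signless.sum(axis=0), 2)
```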
Function spaces on metric graphs
A graph \({\mathcal G}\) is endowed with the following metric structure. Each internal edge \(i\in {\mathcal I}\) is associated with an interval \([0,a_i]\), with \(a_i>0\), such that its initial vertex corresponds to 0 and its terminal vertex to \(a_i\). Each external edge \(e\in {\mathcal E}_-\) and \(e\in {\mathcal E}_+\) is associated with a half-line \([0,\infty )\) and \((-\infty ,0]\), respectively, such that \(\partial (e)\) corresponds to 0. The numbers \(a_i\) are called the lengths of the internal edges \(i\in {\mathcal I}\), and they are collected into the vector
The pair consisting of a finite graph and a metric structure is called a metric graph \(({\mathcal G},\underline{a})\). The metric on the undirected metric graph \(({\mathcal G},\underline{a})\) is defined via minimal path lengths along connected vertices, while for the directed metric graph minimal path lengths are computed taking the directions into account.
Let, for each \(j\in {\mathcal I}\cup {\mathcal E}\), \(X_j\) be a complex Banach space with norm \(\Vert \cdot \Vert _{X_j}\). Then, any collection of functions
can be identified with a map
where the notation for elements in
is shortened to t and \(\psi \), and occasionally we write slightly redundantly \(\psi _j(t)=\psi _j(t_j)\). The metric graph \(({\mathcal G},\underline{a})\) is identified with a quotient of \(\bigsqcup _{j\in {\mathcal I}\cup {\mathcal E}} \overline{I_j}\), and therefore, \(t\in ({\mathcal G},\underline{a})\) is identified with \(t=t_j\in \overline{I_j}\) for some \(j\in {\mathcal E}\cup {\mathcal I}\). Similarly, the maps \(\psi \) defined as in (3.2) can be identified with maps on \(({\mathcal G},\underline{a})\), where on the vertices in general a set of values can be attained, because for the different edges adjacent to a vertex the edgewise defined functions \(\psi _j\) can in general take different values.
Equipping each edge of the oriented or nonoriented metric graph with the onedimensional vectorvalued Bochner–Lebesgue measure, one obtains a measure space. One defines
where \(\mathrm{d}t_{j}\) refers to integration with respect to the Bochner–Lebesgue measure on \(I_j\). We set
and introduce, with a slight abuse of notation, several related spaces: For \(p\in (1,\infty )\) the space
defines a Banach space, and indeed a Hilbert space provided \(p=2\) and \(X_j\) are Hilbert spaces; the canonical norm and inner product are given by
respectively. The corresponding Sobolev spaces are defined for \(p\in [1,\infty )\) and \(m \in \mathbb {N}\) by
Recall that for \(\psi \in W^{m,p}({\mathcal G},\underline{a};{\mathcal X})\), \(m\in \mathbb {N}\), \(p\in [1,\infty )\), traces up to order \(m-1\) are well defined, i.e.,
Also, using
one sets
Operators on metric graphs
As a first step to study the motivating problem, i.e.,
the derivative operator with transmission conditions on graphs is analyzed.
Derivative operators on graphs
One considers the nth derivative operators \(D_n\) on graphs formally given by
where one can define minimal and maximal operators in \(L^{p}({\mathcal G},\underline{a};{\mathcal X})\) by
These are closed linear operators. If \(p=2\) and each \(X_j\) is a Hilbert space, one has \((D_n^{\min })^* = (-1)^{n}D_n^{\max }\), and hence \(D_n^{\min }\) is symmetric if n is even and skew-symmetric if n is odd. In this article, the focus lies on the first and second derivative operators, for which we use the notation
Accretive coupling conditions for the first derivative
When considering the first derivative operator, it is assumed that \({\mathcal G}\) is balanced, i.e., that there are as many outgoing as incoming external edges. From now on, let the \(X_j\) be Hilbert spaces. On \(L^2({\mathcal G},\underline{a};{\mathcal X})\) a class of m-accretive realizations of \(D_t\) defined by boundary conditions is studied, i.e., we consider operators \(D_t^{b.c}\) with
where \(\rho (D_t^{b.c})\ne \emptyset \), and
Integrating by parts yields the following Lagrange identity for the first derivative operator
where
One introduces the space of boundary values
where the claimed isomorphism holds because the graph is balanced, i.e., \(|{\mathcal E}_+|=|{\mathcal E}_-|\): The vectors of boundary values \(\underline{\psi }_-\in {\mathcal K}\) and \(\underline{\psi }_+\in {\mathcal K}\) are then defined by
where for a fixed bijection
i.e., one orders the outgoing and incoming edges into pairs, and defines
Hence, one obtains
For any subspace \({\mathcal M}\subset {\mathcal K}^2\) one can define a realization by
with the extremal cases \(D_t({\mathcal K}^2)=D_t^{\max }\) and \(D_t(\{0\})=D_t^{\min }\), which are clearly edgewise decoupled; couplings can be implemented by means of boundary conditions.
Lemma 4.1
The operator \(D_t({\mathcal M})\) is closed if and only if \({\mathcal M}\subset {\mathcal K}^2\) is closed.
Proof
If \({\mathcal M}\subset {\mathcal K}^2\) is closed, then \(\psi _n\rightarrow \psi \) and \(\psi _n'\rightarrow \varphi \) in \(L^2({\mathcal G},\underline{a};{\mathcal X})\) for \(\psi _n\in D(D_t({\mathcal M}))\) imply, first, due to the closedness of \(D_t^{\max }\), that \(\psi \in D(D_t^{\max })\) and \(\varphi =\psi '\); second, due to the boundedness of the trace operator, one has in \({\mathcal K}^2\) that \(\underline{\psi _n}\rightarrow \underline{\psi }\in {\mathcal M}\), and hence \(\psi \in D(D_t({\mathcal M}))\).
If \({\mathcal M}\subset {\mathcal K}^2\) is not closed, then there exists a sequence \((\underline{\psi _n})\subset {\mathcal M}\) with \(\underline{\psi _n}\rightarrow \underline{\psi }\notin {\mathcal M}\). Note that there exist smooth cutoff functions \(\eta _{j}^{\pm }{:}\,I_j \rightarrow [0,1]\) with \(\eta _{j}^{\pm }=1\) close to \(\partial _{\pm }(e_j)\) and zero around \(\partial _{\mp }(e_j)\). Then, \(\psi _{n,j}=\eta _j^{+} ((\underline{\psi _n})_{+})_j +\eta _j^{-} ((\underline{\psi _n})_{-})_j\) defines functions \(\psi _{n}\in D(D_t({\mathcal M}))\) with \(\psi _{n}\rightarrow \psi \) and \(\psi _{n}'\rightarrow \psi '\), but \(\psi \notin D(D_t({\mathcal M}))\). \(\square \)
Here, the following type of boundary conditions is considered. Let \(\mathbb {B}\in \mathcal {L}({\mathcal K})\) be a bounded operator on \({\mathcal K}\). Note that \(\mathbb {B}\) is a block operator matrix with respect to the decomposition \({\mathcal K}:= \bigoplus _{i\in {\mathcal I}\cup {\mathcal E}_-} X_i\), i.e.,
For such \(\mathbb {B}\in \mathcal {L}({\mathcal K})\) we consider the boundary conditions defined by
One defines the operator
Under additional assumptions these boundary conditions force the numerical ranges of \(-D_t(\mathbb {B})\) and \(-D_t(\mathbb {B})^*\) to lie in a left half-plane of the complex plane.
Lemma 4.2
(Adjoint operator and numerical range) Let \(\mathbb {B}\in \mathcal {L}({\mathcal K})\). Then, \(D_t(\mathbb {B})\) is closed, its Hilbert space adjoint in \(L^2({\mathcal G},\underline{a};{\mathcal X})\) is given by
and furthermore
Proof
By Lemma 4.1, \(D_t(\mathbb {B})\) is closed since \({\mathcal M}(\mathbb {B})\) is closed. Note that from \(D_t^{\min }\subset D_t(\mathbb {B})\subset D^{\max }_t\) it follows by taking adjoints that \(-D_t^{\min }\subset D_t(\mathbb {B})^*\subset -D_t^{\max }\). Hence, it follows from (4.3) that
Note that
Moreover, for \(\psi \in D(D_t(\mathbb {B}))\) one obtains by integration by parts
A similar proof yields the claimed identity for \({{\,\mathrm{Re}\,}}\langle D_t(\mathbb {B})^*\psi ,\psi \rangle \). \(\square \)
Remark 4.3
(Spectral inclusion) Note that if \(\mathbb {B}\) is a contraction, then \(\sigma (-D_t(\mathbb {B}))\subset \{z\in \mathbb {C}{:}\,{{\,\mathrm{Re}\,}}z \le 0\}\), since the spectrum is contained in the closure of the numerical range. The case \(\sigma (D_t(\mathbb {B}))=\emptyset \) can occur; in particular, for a compact graph with \(\mathbb {B}=0\) (which corresponds to the boundary condition \(\psi (\partial _-(i))=0\) for all \(i\in {\mathcal I}\)) one has \(\sigma (D_t(\mathbb {B}))=\emptyset \).
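On a single loop of length a with scalar coupling \(\mathbb {B}=\beta \), \(0<\beta \le 1\), this can be made explicit by hand: the eigenvalue equation \(\psi '=\lambda \psi \) with \(\psi (0)=\beta \psi (a)\) forces \(\beta e^{\lambda a}=1\), hence \(\lambda _k=(-\log \beta +2\pi i k)/a\) with \({{\,\mathrm{Re}\,}}\lambda _k=-\log \beta /a\ge 0\). A quick numerical check of this (hand-derived, illustrative) formula:

```python
import cmath

# Single loop of length a with the scalar condition psi(0) = beta * psi(a);
# a and beta are made up, with |beta| <= 1 so that B is a contraction.
a, beta = 1.0, 0.5

# Candidate eigenvalues lam_k = (-log(beta) + 2*pi*i*k) / a:
eigs = [(-cmath.log(beta) + 2j * cmath.pi * k) / a for k in range(-3, 4)]

for lam in eigs:
    # Eigenvalue condition beta * e^{lam a} = 1:
    assert abs(beta * cmath.exp(lam * a) - 1) < 1e-12
    # All eigenvalues lie in the closed right half-plane,
    # i.e., the spectrum of -D_t(B) lies in {Re z <= 0}:
    assert lam.real >= 0
```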
Proposition 4.4
(M-accretivity and invertibility of \(D_t(\mathbb {B}))\) Let \(\mathbb {B}\in \mathcal {L}({\mathcal K})\).

(a)
If \(\mathbb {B}\) is a contraction on \({\mathcal K}\), i.e., \(\Vert \mathbb {B}\Vert _{\mathcal {L}({\mathcal K})}\le 1\), then \(D_t(\mathbb {B})\) is m-accretive in \(L^2({\mathcal G},\underline{a};{\mathcal X});\)

(b)
If \(\mathbb {B}\) is a strict contraction, i.e., \(\Vert \mathbb {B}\Vert _{\mathcal {L}({\mathcal K})}< 1\), then \(D_t(\mathbb {B})\) is boundedly invertible,

(c)
If \(\mathbb {B}\) is unitary, i.e., \(\mathbb {B}^*\mathbb {B}=\mathbb {B}\mathbb {B}^*=\mathbb {1}\), then \(D_t(\mathbb {B})\) is skew-self-adjoint, i.e., \(D_t(\mathbb {B})^*=-D_t(\mathbb {B})\).
Proof
It is a direct consequence of Lemma 4.2 that if \(\Vert \mathbb {B}\Vert _{\mathcal {L}({\mathcal K})}\le 1\), then
for all \(\psi \in D(D_t(\mathbb {B}))\) and all \(\varphi \in D(D_t(\mathbb {B})^*)\), respectively, i.e., \(D_t(\mathbb {B})\) is m-accretive [16, Cor. 3.17]. Recall that for the operator norm in \({\mathcal K}\) one has \(\Vert \mathbb {B}\Vert _{\mathcal {L}({\mathcal K})}^2 =\Vert \mathbb {B}^*\mathbb {B}\Vert _{\mathcal {L}({\mathcal K})}=\Vert \mathbb {B}\mathbb {B}^*\Vert _{\mathcal {L}({\mathcal K})}\). Hence, the remaining claims follow from Lemma 4.2, where part (b) follows using Remark 4.3. \(\square \)
By well-established results about operators with bounded \(H^{\infty }\)-calculus in Hilbert spaces, cf. [8, 5.2.2. Thm.] and also [34, Chapt. 11], [20, Cor. 7.1.8], the following holds; the notation \(\phi ^{\infty }\) for the angle of the bounded \(H^\infty \)-calculus is introduced in [8, § 4.5].
Corollary 4.5
(Bounded \(H^{\infty }\)-calculus for \(D_t(\mathbb {B})\)) If \(\mathbb {B}\in {\mathcal L}({\mathcal K})\) is a contraction, then \(D_t(\mathbb {B})\) has a bounded \(H^{\infty }\big (L^2({\mathcal G},\underline{a};{\mathcal X})\big )\)-calculus of angle \(\phi _{D_t(\mathbb {B})}^{\infty }=\frac{\pi }{2}\).
Spatial operators
As before we assume that the \(X_j\) are Hilbert spaces. For each edge \(j\in {\mathcal I}\cup {\mathcal E}\) let \(A_j\) be a given operator in \(X_j\) with \(D(A_j)\subset X_j\). We consider the abstract time-graph Cauchy problem
Note that the operators \(A_j\) in \(X_j\) induce operators in \(L^2(I_j;X_j)\) which, with a slight abuse of notation, are also denoted by \(A_j\), with \(D(A_j)=L^2(I_j;D(A_j))\). Using this we define the operator \(A_{\mathcal E}\) in \(L^2({\mathcal G},\underline{a};{\mathcal X})\): It acts on functions supported on the time branches by
and with this the Cauchy problem (4.5) can be formulated as a maximal regularity problem
Moreover, the operators \(A_j\) induce an operator \(A_{{\mathcal V}}\) in the space of boundary values \({\mathcal K}\) that acts on functions supported on the vertices by
which in turn induces an operator in \({\mathcal K}^2\) by
The following lemma is straightforward.
Lemma 4.6
(Spectrum of induced operators) Let \(A_j\) be operators in \(X_j\) with domain \(D(A_j)\) for \(j\in {\mathcal I}\cup {\mathcal E}\). Then for the induced operators \(A_{{\mathcal E}}\) in \(L^2({\mathcal G},\underline{a};{\mathcal X})\), \(A_{{\mathcal V}}\) in \({\mathcal K}\), and \(A_{{\mathcal V}^2}\) in \({\mathcal K}^2\) the following holds:

(a)
\(\sigma (A_{{\mathcal E}})=\sigma (A_{{\mathcal V}})= \sigma (A_{{\mathcal V}^2})=\bigcup _{j\in {\mathcal I}\cup {\mathcal E}}\sigma (A_j)\) as an equality of sets, i.e., without counting multiplicities;

(b)
If \(A_j\) are sectorial of angle \(\phi _{A_j}\in [0,\pi )\), then \(A_{{\mathcal E}},A_{{\mathcal V}},A_{{\mathcal V}^2}\) are sectorial with the same sectoriality angle
$$\begin{aligned} \phi _{A_{{\mathcal E}}}=\phi _{A_{{\mathcal V}}}=\phi _{A_{{\mathcal V}^2}}=\max _{j\in {\mathcal I}\cup {\mathcal E}}\phi _{A_j}. \end{aligned}$$
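In finite dimensions, Lemma 4.6(a) is simply the statement that a block-diagonal matrix has as spectrum the union of the spectra of its blocks. A minimal numerical sketch with two hypothetical \(2\times 2\) edge blocks standing in for the \(A_j\):

```python
import numpy as np

# Lemma 4.6(a) in finite dimensions: the block-diagonal operator induced by
# the edge operators has spectrum equal to the union of the individual
# spectra (as a set, without counting multiplicities).
A1 = np.array([[-1.0, 0.5],
               [0.0, -2.0]])
A2 = np.array([[-3.0, 0.0],
               [1.0, -4.0]])
Z = np.zeros((2, 2))
A_E = np.block([[A1, Z],
                [Z, A2]])   # stand-in for A_E acting edgewise

union = set(np.round(np.linalg.eigvals(A1), 8)) | set(np.round(np.linalg.eigvals(A2), 8))
assert set(np.round(np.linalg.eigvals(A_E), 8)) == union
```

The infinite-dimensional statement for \(A_{\mathcal E}\), \(A_{\mathcal V}\), \(A_{{\mathcal V}^2}\) follows the same decoupling principle.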
The Kalton and Weis sum theorem and the parabolic operator
Solvability of the inhomogeneous problem with homogeneous boundary conditions
Having specified the time-derivative and the spatial operators, one can now define the parabolic operator
The Kalton–Weis sum theorem, formulated here in Theorem 2.1, can now be applied to \(D_t(\mathbb {B})\) and \(A_{\mathcal E}\) using Corollary 4.5, assuming that \(A_{\mathcal E}\) is sectorial and commutes with \(D_t(\mathbb {B})\). This gives well-posedness for the time-graph Cauchy problem with homogeneous initial conditions and inhomogeneous right-hand side.
Proposition 5.1
Let \({\mathcal G}\) be balanced and \(\mathbb {B}\) be a contraction in \({\mathcal K}\). For all \(j\in {\mathcal I}\cup {\mathcal E}\) let \(X_j\) be Hilbert spaces and \(A_j\) sectorial operators of angle \(\phi _{A_j}<\pi /2\) on \(X_j\). Assume that \(D_t(\mathbb {B})\) and \(A_{\mathcal E}\) are resolvent commuting. Then, the operator \(P(\mathbb {B})\) is closed.
If furthermore \(A_{\mathcal E}\) or \(D_t(\mathbb {B})\) is boundedly invertible, then so is \(D_t(\mathbb {B})-A_{\mathcal E}\), and in this case there is a constant \(C=C(\mathbb {B},{\mathcal G},\underline{a})>0\) such that for any \(f\in L^2({\mathcal G},\underline{a};{\mathcal X})\) there is a unique solution \(\psi \) to (4.5) with
Remark 5.2
A criterion ensuring that the operators \(D_t(\mathbb {B})\) and \(A_{\mathcal E}\) commute is that \((A_{{\mathcal V}}-\lambda )^{-1}\) and \(\mathbb {B}\) commute for \(\lambda \in \rho (A_{{\mathcal V}})\).
Trace spaces and the parabolic operator
The approach using the Kalton–Weis result on commuting operators provides a simple way to check solvability for the time-graph Cauchy problem with homogeneous boundary data. However, the condition that \(D_t(\mathbb {B})\) and \(A_{{\mathcal E}}\) commute seems too strict, since (4.5) makes sense without it, and in fact closedness of the parabolic operator can be ensured under weaker assumptions.
For notational simplicity we assume from now on that there are no external edges, i.e., the time graph is assumed to be compact. Considering the maximal parabolic operator
where \({\mathcal E}=\emptyset \), one defines the corresponding trace space
where \([\cdot ,\cdot ]_\theta \) for \(\theta \in (0,1)\) denotes the complex interpolation functor. Recall that for sectorial \(A\) one has the continuous embedding
where BUC stands for the space of bounded uniformly continuous functions, cf. [37, Section 3.4] or [7, Theorem 4.10.2].
Definition 5.3
(Boundary conditions compatible with trace space) The operator \(\mathbb {B}\in {\mathcal L}({\mathcal K})\) is said to be compatible with \({\mathcal K}_A\) if it restricts to an operator in \({\mathcal K}_A\), i.e., \(\mathbb {B}\vert _{{\mathcal K}_A}\in {\mathcal L}({\mathcal K}_A)\) holds.
Remark 5.4

(a)
The actual definition of trace spaces \(({\mathcal X},D(A_{\mathcal V}))_{1-1/p,p}\) uses the real interpolation functor for \(p\in (1,\infty )\), and here it is used that for \(p=2\) one has \([\cdot ,\cdot ]_{1/2}=(\cdot ,\cdot )_{1/2,2}\). If \(D(A_{\mathcal V})\subset {\mathcal X}\) is dense, then all interpolation spaces \(({\mathcal X},D(A_{\mathcal V}))_{\theta ,p},[ {\mathcal X},D(A_{\mathcal V})]_{\theta }\subset {\mathcal X}\) for \(\theta \in [0,1], p\in (1,\infty )\) are dense in \({\mathcal X}\).

(b)
Note that for \(A_j\) having bounded imaginary powers and \(A_j\) injective one has \([X_j,D(A_j)]_{1/2}=D(A_j^{1/2})\), and compatibility with the trace space \({\mathcal K}_A\) in the sense of Definition 5.3 holds provided \(A_{{\mathcal V}}^{1/2}\) and \(\mathbb {B}\) commute.
Lemma 5.5
(Closedness of the parabolic operator) Let each \(A_j\) be sectorial of angle smaller than \(\pi /2\), and let \(\mathbb {B}\in {\mathcal L}({\mathcal K})\) be compatible with \({\mathcal K}_A\). Then, \(P(\mathbb {B})\) is a closed operator on \(L^2({\mathcal G},\underline{a};{\mathcal X})\).
Proof
One shows first that \(P^{\max }\) is closed. Note that \(P^{\max }\) decouples the edges, and hence it is sufficient to prove closedness for a graph consisting of a single interval [0, a]. Consider the operator \(P(\mathbb {B})=P_{0,\delta }\) for \(\mathbb {B}=0\) on \([-\delta ,a]\) for \(\delta >0\). This is closed, and to trace this property back to \(P^{\max }\) one considers continuous extension and restriction operators
with \(R\circ E = \mathbb {1}_{D(P^{\max })}\), where the extension can be realized for instance by even reflection and then multiplying by a cutoff function with value one on \([-\delta /2,a]\) and zero in a neighborhood of \(-\delta \). Then, \(P^{\max }=R \circ P_{0,\delta }\circ E\) and closedness follows in a straightforward manner.
Now, let \(\varphi _n\in D(P(\mathbb {B}))\) with
Then by closedness of \(P^{\max }\) and since \(P(\mathbb {B})\) is a restriction of \(P^{\max }\), \(\varphi \in D(P^{\max })\) and \(\psi =P^{\max }\varphi \). Using (5.1), it follows that \(\underline{\varphi _n}_{\pm } \rightarrow \underline{\varphi }_{\pm }\), and hence \(\varphi \in D(P(\mathbb {B}))\). \(\square \)
The parabolic operator and the Green’s functions approach
The operator theoretical consideration of the parabolic operator gives information on the solvability for homogeneous boundary data. However, it does not provide a solution formula, and it does not include the case of inhomogeneous boundary data. To address these issues we supplement our findings by computing explicitly the Green’s function for (4.5).
Green’s function for the parabolic problem
Now, we are in the position to collect suitable assumptions for the time-graph Cauchy problem; we stress that the following are more general than the ones in Proposition 5.1, where here \({\mathcal E}=\emptyset \) has been assumed for notational simplicity only.
Assumption 6.1
Let \({\mathcal E}=\emptyset \) and \(\mathbb {B}\in {\mathcal L}({\mathcal K})\). Let \(X_j\) be a Hilbert space and \(A_j\) a sectorial operator of angle \(\phi _{A_j}<\pi /2\) on \(X_j\) for each \(j\in {\mathcal I}\).
In the following, a solution formula is derived generalizing the variation of constants formula from semigroup theory. Note that square integrable maps
define integral operators acting on \(L^2({\mathcal G},\underline{a};{\mathcal X})\) via
In this sense, the Green’s function for zero initial conditions, i.e., for \(\mathbb {B}=0\), is
Since each operator \(A_j\) is sectorial, it generates an analytic \(C_0\)-semigroup; in particular, \(e^{t_j A_j}\) is a well-defined bounded linear operator on \(X_j\) for each \(t_j\) in the time branch \(I_j\). In the following we adopt, for \(\underline{t}= \{t_j\}_{j\in {\mathcal I}}\), the notation
and hence \(e^{\underline{t}A}\in {\mathcal {L}}({\mathcal K})\) is a diagonal block operator matrix in \({\mathcal K}\).
Proposition 6.2
(Inhomogeneous problem with homogeneous boundary conditions) Under Assumption 6.1, let \(\mathbb {B}\) be compatible with \({\mathcal K}_A\), and let \((\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})\vert _{{\mathcal K}_A}\) be boundedly invertible in \({\mathcal K}_A\). Then, \(P(\mathbb {B})\) is boundedly invertible, i.e., for each \(f\in L^2({\mathcal G},\underline{a};{\mathcal X})\) there exists a unique solution \(\psi \) to (4.5) in \(D(D_t(\mathbb {B}))\cap D(A_{\mathcal E})\). This \(\psi \) is given by
where
with \(r_0(t,s;A_{\mathcal E})\) given by (6.1) and
Remark 6.3

(a)
Note that \(e^{\underline{a}A}{:}\,{\mathcal K}_A \rightarrow D(A_{\mathcal V})\subset {\mathcal K}_A\), and therefore, if \(\mathbb {B}\) is compatible with \({\mathcal K}_A\), then also \(\mathbb {1}-\mathbb {B}\, e^{\underline{a}A}\) is compatible with \({\mathcal K}_A\).

(b)
Moreover, if \(\mathbb {1}-\mathbb {B}\, e^{\underline{a}A}\in {\mathcal L}({\mathcal K})\) and \((\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})\vert _{{\mathcal K}_A}\in {\mathcal L}({\mathcal K}_A)\) hold, then \((\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})\vert _{{\mathcal K}_A}^{-1}\in {\mathcal L}({\mathcal K}_A)\) implies that \((\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})^{-1}\in {\mathcal L}({\mathcal K})\). To this end, knowing that \((\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})\vert _{{\mathcal K}_A}\) is closable in \({\mathcal K}\), it is sufficient to prove that \((\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})\vert _{{\mathcal K}_A}^{-1}\) is closable in \({\mathcal K}\), cf. [19, Lemma 2.28]. Now let
$$\begin{aligned} (\underline{\psi }_n)_{n\in \mathbb {N}}\subset {\mathcal K}_A \quad \hbox {with } \underline{\psi }_n \rightarrow 0 \hbox { and } (\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})\vert _{{\mathcal K}_A}^{-1}\underline{\psi }_n \rightarrow \underline{\varphi } \hbox { in } {\mathcal K}\hbox { as } n\rightarrow \infty . \end{aligned}$$Then, since \(\mathbb {1}-\mathbb {B}\, e^{\underline{a}A}\in {\mathcal L}({\mathcal K})\), one has
$$\begin{aligned} (\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})(\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})\vert _{{\mathcal K}_A}^{-1}\underline{\psi }_n=\underline{\psi }_n \rightarrow (\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})\underline{\varphi }, \end{aligned}$$and since \(\underline{\psi }_n \rightarrow 0\) the claim follows.
Proof of Proposition 6.2
A solution to the Eq. (4.5) is on each edge \(i \in {\mathcal I}\) of the form
for some vector \(c_i\in X_i\) that is ‘inherited’ from the final state in the preceding edges. Indeed, the boundary condition can be used to determine \(c_i\). Since
recalling that \(a_i\) denotes the length of the edge i, the condition \(\underline{\psi }_-=\mathbb {B}\underline{\psi }_+\) gives
Hence, we obtain the vector-valued identity for \(c=\{c_i\}_{i\in {\mathcal I}}\)
and because \(\mathbb {1}-\mathbb {B}\, e^{\underline{a}A}\) is assumed to be invertible
whence
Recall that the Green’s function is the integral kernel of the resolvent operator, i.e., a function \(r(t,s):=r(t,s;\mathbb {B},A_{\mathcal E})\) that defines a left and right inverse of \(P(\mathbb {B})\), i.e.,

(a)
\(\varphi (t)=(D_t(\mathbb {B})-A_{\mathcal E})\int _{{\mathcal G}} r(t,s)\varphi (s) \mathrm{d}s\) for \(\varphi \in L^2({\mathcal G},\underline{a};{\mathcal X})\),

(b)
\(\psi (t)=\int _{{\mathcal G}}r(t,s) (D_t(\mathbb {B})-A_{\mathcal E})\psi (s) \mathrm{d}s\) for \(\psi \in D(P(\mathbb {B}))\).
First, note that
solve \((\partial _t-A_j)(\psi _{f}^0)_j=f_j\) and \((\partial _t-A_j)(\psi _{f}^1)_j=0\) on each edge \(j\in {\mathcal I}\), respectively, where one applies the classical variation of constants formula and the properties of the semigroups \(e^{t_jA_j}\). Hence, \(\psi =\psi _f^0+\psi _f^1\) solves \((\partial _t-A_j)\psi _j=f_j\) on each edge with \(\psi \in D(P^{\max })\). Here, \(\psi _f^1\) is the correction term for the variation of constants term \(\psi _f^0\) ensuring that the boundary conditions are satisfied.
Secondly, one has to prove that \(\psi \) satisfies the boundary conditions, and indeed
hence \(\underline{\psi }_- - \mathbb {B}\underline{\psi }_+=0\). We conclude that \(\int _{{\mathcal G}} r(\cdot ,s;\mathbb {B},A)\cdot \mathrm{d}s\) is a right inverse of \(P(\mathbb {B})\).
Because the adjoint kernels
consist of the Green’s functions for the time-reversed problems
and since \( (-\partial _{t_j} - A_j^*) e^{(a_j-t_j)A^*} =0\) one has
with
and concerning the boundary conditions
hence \(\underline{\psi }_+ - \mathbb {B}^*\underline{\psi }_-=0\).
To conclude, note that \(\mathbb {1}- e^{A^*\underline{a}}\,\mathbb {B}^*\) is invertible if and only if so is its adjoint \(\mathbb {1}-\mathbb {B}\, e^{\underline{a}A}\). We have thus proven that the adjoint of \(\int _{{\mathcal G}} r(\cdot ,s;\mathbb {B},A)\cdot \mathrm{d}s\) is a right inverse of \(P(\mathbb {B})^*\): We hence take adjoints and find \(\int _{{\mathcal G}} r(\cdot ,s;\mathbb {B},A)\cdot \mathrm{d}s \, P(\mathbb {B})= \mathbb {1}_{D(P(\mathbb {B}))}\). We conclude that \(\int _{{\mathcal G}}r(\cdot ,s;\mathbb {B},A)\cdot \mathrm{d}s\) is also a left inverse of \(P(\mathbb {B})\). \(\square \)
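The fixed-point computation for the vector \(c\) in the proof can be reproduced numerically. The following sketch treats a single loop edge \([0,a]\) with hypothetical matrices \(A\), \(\mathbb {B}\) and a constant inhomogeneity \(f\) standing in for the operators in the text, and verifies that \(c=(\mathbb {1}-\mathbb {B}e^{\underline{a}A})^{-1}\mathbb {B}J\) indeed enforces the coupling condition.

```python
import numpy as np
from scipy.linalg import expm

# On a single loop edge [0, a], psi(t) = e^{tA} c + int_0^t e^{(t-s)A} f ds
# together with the coupling psi(0) = B psi(a) determines
# c = (1 - B e^{aA})^{-1} B J with J = int_0^a e^{(a-s)A} f ds.
a = 1.5
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])          # eigenvalues in the open left half-plane
B = 0.8 * np.eye(2)                  # a strict contraction
f = np.array([1.0, -1.0])            # constant inhomogeneity

E = expm(a * A)
J = np.linalg.solve(A, (E - np.eye(2)) @ f)   # J = A^{-1}(e^{aA} - 1) f for constant f
c = np.linalg.solve(np.eye(2) - B @ E, B @ J)

psi_0, psi_a = c, E @ c + J          # traces of psi at the two endpoints
assert np.allclose(psi_0 - B @ psi_a, 0.0, atol=1e-10)
```

Here the homogeneous boundary datum corresponds to \(g=0\); the inhomogeneous case of Sect. 6.2 adds \(g\) to the right-hand side of the linear system for \(c\).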
Remark 6.4
Two sufficient conditions for invertibility of \(\mathbb {1}-\mathbb {B}e^{\underline{a}A}\) are that each \(A_i\) is m-dissipative and \(\mathbb {B}\) is a strict contraction; or else that each \(A_i+\epsilon \) is m-dissipative for some \(\epsilon >0\) and \(\mathbb {B}\) is a contraction.
For general \(\mathbb {B}\), the resolvent of \(D_t(\mathbb {B})\) can be obtained by applying Proposition 6.2 to \(A_j:=\lambda \mathbb {1}_{X_j}\), \(\lambda \in \mathbb {C}\), which induces an operator \(A_{{\mathcal E}}=\lambda _{{\mathcal E}}\).
Corollary 6.5
(Resolvent of \(D_t(\mathbb {B})\)) Let \({\mathcal E}=\emptyset \) and \(\mathbb {B}\in {\mathcal L}({\mathcal K})\). If \(\mathbb {1}-\mathbb {B}e^{\lambda \underline{a}}\) is invertible in \({\mathcal K}\), then \(\lambda \in \rho (D_t(\mathbb {B}))\) and the unique solution \(\psi \in D(D_t(\mathbb {B}))\) to
is given by \( \psi =\int _{{\mathcal G}}r(\cdot ,s;\mathbb {B},\lambda _{{\mathcal E}})f(s) \mathrm{d}s\).
The inverse of the parabolic operator \(P(\mathbb {B})\) can be seen as being given by a functional calculus where the spectral parameter in Corollary 6.5 is replaced by the operator A. This is akin to the case of classical semigroups, where the solution operator of the ordinary differential equation \((\partial _t -\lambda )\psi =f, \quad \psi (0)=\psi _0,\) is considered, and semigroup theory—interpreted as a functional calculus for the exponential functions—allows one to ‘replace \(\lambda \) by some generator A.’
Inhomogeneous boundary conditions
So far, we have implicitly focused on the case of zero boundary conditions imposed at the sources of the time graph, i.e., at the initial endpoints of those time branches that have no predecessors. This is clearly a relevant limitation and would, for example, lead to identically vanishing solutions as soon as \(f\equiv 0\). Initial conditions can be introduced by interpreting them as inhomogeneous boundary conditions with respect to time. Thus, one considers the problem
for given \(\underline{g}\in {\mathcal K}\) and \(f\in L^2({\mathcal G},\underline{a};{\mathcal X})\). For \(\mathbb {B}=0\) this corresponds to the usual initial condition \(\psi (0)=\psi _{0}\).
The solution to this problem can be computed using the Green’s function, where—as for ordinary differential equations with inhomogeneous boundary conditions—the Lagrange identity (4.1) plays an important role. We start with a heuristic argument. Integration by parts yields
where
Due to the properties of the Green’s function
Hence,
where one uses that
assuming that \(\psi \) solves the Cauchy problem. Hence,
where \( \psi _0\) is given more explicitly by
For \(\underline{g}:= \underline{\psi }_- - \mathbb {B}\underline{\psi }_+\) one obtains \(\psi _0(t) = e^{\underline{t}A} [\mathbb {1} + (\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})^{-1}\mathbb {B}e^{\underline{a}A}]\underline{g}= e^{\underline{t}A}(\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})^{-1}\underline{g}\).
Theorem 6.6
Let Assumption 6.1 be fulfilled and let \(\mathbb {B}\) be compatible with \({\mathcal K}_A\). If \((\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})\vert _{{\mathcal K}_A}\) is boundedly invertible in \({\mathcal K}_A\), then for any \(\underline{g}\in {\mathcal K}_A\) and \(f\in L^2({\mathcal G},\underline{a};{\mathcal X})\) there is a unique solution
to (6.6). The solution is given by
where the kernel \(r(\cdot ,\cdot ;\mathbb {B},A)\) is given in (6.2), and
In particular there exists a constant C independent of f and \(\underline{g}\) such that
Proof
Note that \(\psi _0\in W^{1,2}({\mathcal G},\underline{a};{\mathcal X}) \cap L^2({\mathcal G},\underline{a};D(A_{\mathcal E}))\) since \((\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})^{-1}\underline{g}\in {\mathcal K}_A\) and all \(A_j\) are sectorial, cf. [37, § 3.4] and in particular [37, Prop. 3.4.2], which can be adapted to finite intervals. Its traces satisfy
and hence \(\underline{\psi _0}_- - \mathbb {B}\underline{\psi _0}_+=\underline{g}\), and \((\psi _0)_j\) solves \((\partial _{t_j}-A_j)(\psi _0)_j=0\) on each edge \(j\in {\mathcal I}\).
To prove uniqueness, assume that there is another solution \(\psi _0'\) to (6.6) in the solution space, and consider the difference \( \psi _{\delta }:=\psi _0'-\psi _0 \), which, due to the linearity of the equation, solves
Because of the former equation, there exists \(\underline{g}'\in {\mathcal K}_A\) such that \(\psi _{\delta }=e^{\underline{t}A}\underline{g}'\), while the latter implies
and by the invertibility of \(\mathbb {1}-\mathbb {B}e^{\underline{a}A}\) it follows that \(\psi _\delta \equiv 0\). Hence, the inhomogeneous boundary value problem is uniquely solvable with \(\psi _0\) in the maximal \(L^2\)-regularity class, and the rest of the statement follows from Proposition 6.2. \(\square \)
Remark 6.7

(a)
The solution formula given in Theorem 6.6 is a generalization of the well-known variation of constants formula. Considering only one interval [0, a] with boundary condition \(u(0)=0\), i.e., \(\mathbb {B}=0\), we find that \(\psi _0(t) = e^{tA} \underline{\psi }_{0}\) and \(r(t,s;\mathbb {B},A_{\mathcal E})=r_0(t,s;A_{\mathcal E})\).

(b)
Another classical case is that of periodic boundary conditions: one interval [0, a] with \(u(0)=u(a)\), i.e., \(\mathbb {B}=\mathbb {1}\).

(c)
The solution to the timegraph Cauchy problem certainly satisfies a semigroup law on each edge.
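The reduction in Remark 6.7(a) can be checked numerically. The following sketch (with a hypothetical matrix A, datum g and inhomogeneity f in place of the abstract operators) evaluates the variation of constants formula by quadrature and compares it with a direct ODE solve on one interval.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# For one interval [0, a] with B = 0 the solution formula reduces to the
# classical Duhamel formula psi(t) = e^{tA} g + int_0^t e^{(t-s)A} f(s) ds.
a = 1.0
A = np.array([[-1.0, 2.0],
              [0.0, -2.0]])
g = np.array([1.0, 0.5])
f = lambda t: np.array([np.sin(t), np.cos(t)])

def duhamel(t, m=2000):
    """Evaluate the variation of constants formula by the trapezoidal rule."""
    ts = np.linspace(0.0, t, m + 1)
    vals = np.array([expm((t - s) * A) @ f(s) for s in ts])
    dt = ts[1] - ts[0]
    integral = dt * (vals[1:-1].sum(axis=0) + (vals[0] + vals[-1]) / 2)
    return expm(t * A) @ g + integral

sol = solve_ivp(lambda t, y: A @ y + f(t), (0.0, a), g, rtol=1e-10, atol=1e-12)
assert np.allclose(duhamel(a), sol.y[:, -1], atol=1e-5)
```

On a general time graph, the Green's function \(r_0+r_1\) plays the role of the exponential kernel in this formula, with \(r_1\) carrying the coupling through \(\mathbb {B}\).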
Mapping properties
Assume that \(X_j=L^2(S_j,\mu _j)\) for some measure spaces \((S_j,\Sigma _j,\mu _j)\) are spaces of complex-valued functions, and denote by \(X_{j,\mathbb {R}}\) the cone of real-valued functions. This induces spaces \({\mathcal K}_{\mathbb {R}}\) and \({\mathcal K}_{\mathbb {R}}^2\).
Proposition 6.8
Let the assumptions of Theorem 6.6 be satisfied and let \(X_j=L^2(S_j,\mu _j)\) be Hilbert spaces of complex-valued functions.

(a)
If \(\mathbb {B}\) and the operator families \((e^{t_jA_j})_{t_j\in I_j}\) leave \({\mathcal K}_{\mathbb {R}}\) invariant, then the solution \(\psi \) in Theorem 6.6 is real for real data \(\underline{g}\) and f.

(b)
If in addition to (a) the operators \(\mathbb {B}\), \((\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})^{-1}\) and the operator families \((e^{t_jA_j})_{t_j\in I_j}\) are positivity preserving, then the solution \(\psi \) in Theorem 6.6 is positive for all times \(t\in ({\mathcal G},\underline{a})\) provided the data \(\underline{g}\) and f are positive.

(c)
If the operator families \((e^{t_jA_j})_{t_j\in I_j}\) as well as \(\mathbb {B}\) and \((\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})^{-1} \) are \(L^{\infty }\)-bounded, then the solution operator in Theorem 6.6 is \(L^{\infty }\)-bounded. The solution operator defined by \(\psi _0\) is \(L^\infty \)-contractive whenever so are the operator families \((e^{t_jA_j})_{t_j\in I_j}\) and \((\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})^{-1} \). If additionally \(\mathbb {B}\) and \((\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})^{-1} \) are \(L^1\)-bounded, then the solution operator extrapolates to all \(L^p\)-spaces.
Proof
We have shown in Proposition 6.2 that the Green’s function is given by \(r_0(\cdot ,\cdot ;A)+r_1(\cdot ,\cdot ;\mathbb {B},A)\). It is apparent that the claimed properties for the solution to (6.6) follow as soon as the corresponding properties hold for \(r_0(\cdot ,\cdot ;A)\), \(r_1(\cdot ,\cdot ;\mathbb {B},A)\), and \(\psi _0\), where the corresponding properties of \(r_0\) are covered by the classical theory. Now, \(r_1\) can be studied using its factorization into operators that also enjoy the corresponding properties. For the mapping properties of \(\psi _0\) analogous arguments apply. \(\square \)
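The positivity statement of Proposition 6.8(b) has a transparent finite-dimensional analogue, sketched below under assumptions made only for the illustration: A is a Metzler matrix (so its semigroup is positivity preserving), \(\mathbb {B}\) is entrywise nonnegative, and \(\Vert \mathbb {B}e^{aA}\Vert <1\) so that the inverse is a convergent Neumann series of nonnegative terms.

```python
import numpy as np
from scipy.linalg import expm

# If A has nonnegative off-diagonal entries (Metzler), then e^{tA} >= 0
# entrywise; if moreover B >= 0 and ||B e^{aA}|| < 1, then
# (1 - B e^{aA})^{-1} = sum_k (B e^{aA})^k is entrywise nonnegative, so the
# homogeneous part e^{tA}(1 - B e^{aA})^{-1} g of the solution is
# nonnegative for nonnegative data g.
a = 1.0
A = np.array([[-2.0, 1.0],
              [0.5, -1.5]])          # Metzler: positivity-preserving semigroup
B = np.array([[0.2, 0.1],
              [0.0, 0.3]])           # entrywise nonnegative contraction
E = expm(a * A)
M = B @ E
assert np.all(E >= 0) and np.linalg.norm(M, 2) < 1

inv = np.linalg.inv(np.eye(2) - M)   # limit of a nonnegative Neumann series
g = np.array([1.0, 2.0])
assert np.all(inv >= -1e-12) and np.all(inv @ g >= 0)
```

The same factorization argument as in the proof handles the inhomogeneous part via the nonnegativity of \(r_0\) and \(r_1\).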
Maximal \(L^p\)-regularity
For notational and mathematical simplicity, we have focused on the Hilbert space case and on maximal \(L^2\)-regularity. In the case of evolution equations on \({\mathbb {R}}_+\), under the assumptions of Proposition 6.8(c) the semigroup \(e^{tA}\) extrapolates to a \(C_0\)-semigroup on all \(L^p\)-spaces, \(p\in [1,\infty )\); this semigroup is additionally analytic on \(L^p\), \(p\in [1,\infty )\), if \(e^{tA}\) satisfies Gaussian estimates. By a celebrated result in [28] this in turn implies maximal \(L^p\)-regularity for \(p\in (1,\infty )\), but our theory does not seem to allow us to discuss kernel estimates. However, the solution formulae (6.2) and (6.7) suggest a straightforward generalization to the general case of maximal \(L^p\)-regularity in Banach spaces.
To this end, let \(X_j\) be Banach spaces and \(p\in (1,\infty )\). Consider the trace space
and collect the following assumptions.
Assumption 6.9
Assume that \({\mathcal E}=\emptyset \), that the \(X_j\) are Banach spaces of class UMD, and that \(p\in (1,\infty )\). Suppose that \(A_j\) are \({\mathcal R}\)-sectorial operators in \(X_j\) of angle smaller than \(\pi /2\), and that \(\mathbb {B}\in {\mathcal L}({\mathcal K})\).
Proposition 6.10
(Maximal \(L^p\)-regularity) Let Assumption 6.9 be fulfilled and \(\mathbb {B}\) be compatible with \({\mathcal K}_{A,p}\). If \(\mathbb {1}-\mathbb {B}\, e^{\underline{a}A}\) is boundedly invertible in \({\mathcal K}_{A,p}\), then for any \(\underline{g}\in {\mathcal K}_{A,p}\) and \(f\in L^p({\mathcal G},\underline{a};{\mathcal X})\) there is a unique solution
to (6.6), where the solution is given by the same formulae as in Theorem 6.6 and
Proof
The unperturbed part of the Green’s function \(r_0(\cdot ,\cdot ; A)\) defines a bounded operator
It remains to verify that the correction terms have the same mapping properties. First,
and by assumption \((\mathbb {1}-\mathbb {B}\, e^{\underline{a}A})^{-1}\underline{g}\in {\mathcal K}_{A,p}\), and hence \(\psi _0\) lies in the maximal regularity space, cf. [37, Prop. 3.4.2], which can be adapted to finite intervals. Secondly,
Using that \(\int _{{\mathcal G}}e^{(\underline{a}-\underline{s})A} f(s) \mathrm{d}s\in {\mathcal K}_{A,p}\), which follows from the classical variation of constants formula and maximal \(L^p\)-regularity of the initial value problem, it follows that \(\int _{{\mathcal G}}r_1(t,s;\mathbb {B},A) f(s)\mathrm{d}s\) is in the maximal \(L^p\)-regularity space.
The proofs of Proposition 6.2 and Theorem 6.6 now carry over to the present situation. \(\square \)
Remark 6.11
(Transference principle) Transference principles relating maximal \(L^p\)-regularity for the initial value problem to the maximal \(L^p\)-regularity problem with time periodicity on the real line are well established. Here, if \({\mathcal E}=\emptyset \) and \(P(\mathbb {B})\) for \(\mathbb {B}=0\) has maximal \(L^p\)-regularity, then \(P(\mathbb {B})\) has maximal \(L^p\)-regularity for any \(\mathbb {B}\in {\mathcal L}({\mathcal K})\) satisfying the assumptions of Proposition 6.10.
Regularity and other notions of solutions
So far, we have focused on solvability in maximal regularity spaces, since these fit into a suitable functional analytic framework. Note that the solution formula from Theorem 6.6 can be made sense of even under milder assumptions. Consider the case where \({\mathcal E}=\emptyset \), the \(X_j\) are Banach spaces, the operators \(A_j\) generate \(C_0\)-semigroups in \(X_j\), and \(\mathbb {B}\in {\mathcal L}({\mathcal K})\). Classical results from semigroup theory carry over as long as sufficient compatibility of \(\mathbb {B}\) is assumed. For instance, smoother inhomogeneous transmission data improve the regularity of solutions.
Mild solutions
Under the assumptions that
the function defined by (6.7) is a mild solution on each edge, i.e., \(\psi _j\in C(\overline{I_j};X_j)\) for each \(j\in {\mathcal I}\), cf. [5, Prop. 1.3.4] or [16, Prop. VI.7.4 and Prop. VI.7.5], and the boundary conditions are attained in the larger space \({\mathcal K}\), while in Theorem 6.6 they are attained even in \({\mathcal K}_A\).
Classical solutions
For \(\mathbb {B}\) being compatible with \(D(A_{\mathcal V})\), the conditions
respectively, imply that the solution is classical on each edge, i.e., continuously differentiable with respect to time, cf. [16, Cor. VI.7.6] and [16, Cor. VI.7.8]. Of course, there are many refinements of classical semigroup theory which one can carry over to time graphs by assuming sufficient compatibility between \(\mathbb {B}\) and the inverse of \(\mathbb {1}-\mathbb {B}\, e^{\underline{a}A}\).
Iterative solvability
A time-graph Cauchy problem is iteratively solvable if it reduces to a finite sequence of initial value problems. This is made precise in the following definition.
Definition 7.1
(Iterative solvability) Assume that \({\mathcal E}=\emptyset \) and \(\vert {\mathcal I}\vert =n\), and that there exists an ordering of the edges \(i_1, \ldots , i_n\) such that the solution to (4.5) satisfies
and for any \(1\le j\le n-1\) there exists some linear function \(\varphi _{j+1}\) such that
Then, we say that (4.5) is iteratively solvable as a sequence of Cauchy problems on intervals. If \(\mathbb {B}_{jj}=0\) for all \(j=1, \ldots ,n\), then (4.5) is iteratively solvable as a sequence of initial value problems.
Iterative solvability can be traced back to the block structure of \(\mathbb {B}\).
Proposition 7.2
(Characterization of iterative solvability) Let \({\mathcal E}=\emptyset \) and \(\mathbb {B}\in {\mathcal L}({\mathcal K}_A)\).

(a)
The Cauchy problem (4.5) is iteratively solvable as a sequence of Cauchy problems on intervals if and only if, up to a permutation of the edges, \(\mathbb {B}\) is lower block triangular, i.e., there exists an ordering of the edges \(i_1, \ldots , i_n\) such that \(\mathbb {B}_{ij}=0\) for \(j>i\).

(b)
The Cauchy problem (4.5) is iteratively solvable as a sequence of initial value problems if and only if, up to a permutation of the edges, \(\mathbb {B}\) is lower block triangular with zero diagonal.
Proof
To prove (a), we start by assuming that \(\mathbb {B}\) is lower block triangular. Then,
Hence, \(\psi _n\) is the solution to the Cauchy problem on \(i_n\), and \(\psi _j\) for \(j=1, \ldots , n-1\) solves the Cauchy problem on \(i_j\) with
Conversely, if (4.5) is iteratively solvable as a sequence of Cauchy problems on intervals, then there exists an ordering of the edges such that \(\mathbb {B}_{n,j}=0\) for \(j\ne n\), and since \(\varphi _j\) depends only on \(\psi _{j+1}, \ldots , \psi _n\) and g one concludes that \(\mathbb {B}_{ij}=0\) for \(j>i\).
For (b) notice that at each step one has an initial value problem if and only if \(\mathbb {B}_{jj}=0\) for all \(j\in {\mathcal I}\). \(\square \)
Remark 7.3
Invertibility of \(\mathbb {1}-\mathbb {B}\, e^{\underline{a}A}\) is automatically satisfied under the assumptions of Proposition 7.2(b), since
This was a crucial assumption in Theorem 6.6: The structure of the time graph already implies unique solvability. Under the weaker assumptions of Proposition 7.2(a), instead, invertibility of all \(\mathbb {1}-\mathbb {B}_{ii}e^{a_iA_i}\) has to be imposed additionally.
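Remark 7.3 can be illustrated in finite dimensions with scalar edges (\(X_j=\mathbb {C}\), a simplification made only for this sketch): if \(\mathbb {B}\) is strictly lower triangular in the edge ordering, then so is \(\mathbb {B}e^{\underline{a}A}\), which is therefore nilpotent, and the inverse of \(\mathbb {1}-\mathbb {B}e^{\underline{a}A}\) is a finite Neumann series.

```python
import numpy as np

# With strictly lower triangular B and diagonal e^{aA}, the product
# M = B e^{aA} is strictly lower triangular, hence M^n = 0, and
# (1 - M)^{-1} = 1 + M + ... + M^{n-1}: invertibility is automatic.
n = 4
rng = np.random.default_rng(0)
B = np.tril(rng.standard_normal((n, n)), k=-1)  # strictly lower triangular
a = rng.uniform(0.5, 2.0, size=n)               # edge lengths
lam = -rng.uniform(0.5, 2.0, size=n)            # scalar "sectorial" operators
M = B @ np.diag(np.exp(a * lam))                # B e^{aA}, nilpotent

neumann = sum(np.linalg.matrix_power(M, k) for k in range(n))
assert np.allclose(neumann @ (np.eye(n) - M), np.eye(n))
```

Under the weaker triangularity of Proposition 7.2(a), the diagonal blocks contribute the extra factors \(\mathbb {1}-\mathbb {B}_{ii}e^{a_iA_i}\) whose invertibility must be imposed separately.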
An oriented graph \(({\mathcal G},\underline{a})\) contains a directed loop if there exists a sequence of edges \(i_{1}, \ldots , i_m\) such that
(We stress that this usage of the notion of loop is slightly different from that in the literature on metric graphs, in that we do not require the intersection of the loop’s closure and its complement’s closure (in the time graph) to be a singleton.) We say that a loop is reflected by the boundary conditions if \(\mathbb {B}_{i_j i_{j+1}}\ne 0\) for each \(i_j \in \{i_{1}, \ldots , i_m\}\), where one sets \(i_{m+1}:=i_1\).
Corollary 7.4
(Loops prevent iterative solvability) Let \(({\mathcal G},\underline{a})\) contain a directed loop which is reflected by the boundary conditions.

(a)
Then, (4.5) is not iteratively solvable as a sequence of initial value problems.

(b)
If the loop contains more than one edge, then (4.5) is not iteratively solvable as a sequence of Cauchy problems on intervals.
Proof
To prove (b): By assumption, \(m\ge 2\), \(\mathbb {B}_{i_1 i_2}, \ldots , \mathbb {B}_{i_{m-1} i_m}\ne 0\) and \(\mathbb {B}_{i_m i_1}\ne 0\). In particular, for any permutation \(\pi \) of the edges one has \(\mathbb {B}_{\pi (1)\pi (2)}\ne 0\) and \(\mathbb {B}_{\pi (m)\pi (1)}\ne 0\), and therefore \(\mathbb {B}\) cannot be lower block triangular; the claim follows from Proposition 7.2(a).
To prove (a), use Proposition 7.2(b) and note that if \(m=1\), then \(\mathbb {B}_{11}\ne 0\), while for \(m\ge 2\) the claim follows already from (b). \(\square \)
Remark 7.5
(Graph symmetries and symmetries of solutions) Periodic functions on \(\mathbb {R}\) clearly induce functions on \({\mathbb {S}}^1\): Are there further symmetries which can be encoded into a time graph? Many graphs have a natural symmetry which corresponds to a group structure. Given a map
this induces a map on function spaces
see [36, § 8.2]. Let \(\Gamma \) be a group of mappings acting on \(({\mathcal G},\underline{a})\). Assume that
(For example, the shift on a loop satisfies (7.1) with respect to the derivative operator with periodic boundary conditions.) It seems that there are very few graphs whose automorphism group is an infinite Lie group: In most cases, the automorphism group is finite. Given this, for any \(G\in \Gamma \), \({\hat{G}}\) commutes with the solution operator given in Theorem 6.6. Thus, the symmetry is reflected by the solution.
Examples and applications
The case of a loop with phase shift has already been discussed in the introduction as a small modification of the classical periodic case, and so has the tadpole-like graph. We now discuss some other cases depicted in Fig. 3.
Splitting of systems
Take the graph consisting of three internal edges as in Fig. 3b, and consider for sectorial spatial operators \(A_i\) in Hilbert spaces \(X_i\), \(i=1,2,3\), the problem \(\partial _t\psi _i - A_i\psi _i =f_i\), \( i=1,2,3\), with homogeneous boundary conditions
This corresponds to (6.6) for
If \(\mathbb {B}\) is compatible with the trace space \({\mathcal K}_A\), one observes that
is invertible for any \(\mathbb {B}_{21}, \mathbb {B}_{32}\), cf. Remark 7.3. Therefore, by Theorem 6.6, a unique solution to this problem exists for all \(\underline{g}\in {\mathcal K}_A\); in particular, \(\underline{g}=(g_1,0,0)^T \in {\mathcal K}_A\) means that \(g_1\in [X_1,D(A_1)]_{1/2}\).
The tree graph given in Fig. 3f results from an iteration of such a splitting procedure, where, as above, any splitting condition is admissible as long as it is compatible with the trace space. That such splitting problems can be solved iteratively as a sequence of initial value problems is straightforward; it can also be seen more formally by applying Proposition 7.2.
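As a sanity check of the iterative procedure, here is a deliberately simple scalar caricature (each operator \(A_i\) replaced by a number \(-\lambda_i\), constant sources, made-up coupling weights; none of this data is from the paper), in which each edge is solved by its explicit exponential formula and the terminal value feeds the next edge's initial condition:

```python
import math

def solve_edge(lam, a, psi0, c):
    """Exact terminal value psi(a) for psi' = -lam*psi + c on [0, a], psi(0) = psi0."""
    return math.exp(-lam * a) * psi0 + c * (1 - math.exp(-lam * a)) / lam

# Hypothetical scalar data: A_i = -lam_i, edge lengths a_i, constant sources c_i
lam, a, c = [1.0, 2.0, 0.5], [1.0, 0.5, 2.0], [0.0, 1.0, -1.0]
b21, b32 = 0.7, -0.3   # arbitrary coupling weights playing the role of B_21, B_32
g1 = 1.0               # initial datum on the first edge

psi1_end = solve_edge(lam[0], a[0], g1, c[0])
psi2_end = solve_edge(lam[1], a[1], b21 * psi1_end, c[1])  # psi_2(0) = B_21 psi_1(a_1)
psi3_end = solve_edge(lam[2], a[2], b32 * psi2_end, c[2])  # psi_3(0) = B_32 psi_2(a_2)
```

Each step is an ordinary initial value problem; the block lower triangular dependency structure is what makes this sequential treatment possible.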
Superposition of systems
Analogously, take the graph consisting of three internal edges as in Fig. 3c, and consider for sectorial spatial operators \(A_i\) in Hilbert spaces \(X_i\), \(i=1,2,3\), the problem \(\partial _t\psi _i - A_i\psi _i =f_i\), \( i=1,2,3\), with homogeneous boundary conditions
for \(i=1,2,3\). This corresponds to (6.6) for
Assuming that \(\mathbb {B}\) is compatible with \({\mathcal K}_A\), one observes that \(\mathbb {1}-\mathbb {B}e^{\underline{a}\, A}\) is invertible in \({\mathcal K}_A\), cf. Remark 7.3. So, Theorem 6.6 is applicable again, and there is a unique solution to this problem if \(g_1\in [X_1,D(A_1)]_{1/2}\) and \(g_2\in [X_2,D(A_2)]_{1/2}\). One observes that this is iteratively solvable as a sequence of initial value problems, too.
Tadpole graph
Consider two edges with
Note that \(\mathbb {1}-\mathbb {B}e^{\underline{a}A}\) is invertible if and only if \(\mathbb {1}-e^{a_1 A_1}\) is invertible. So, solvability is assured if, for instance, \(\Vert e^{a_1 A_1}\Vert <1\), i.e., if the semigroup generated by \(A_1\) is contractive and exponentially decaying. This tadpole graph system can be interpreted as a time-periodic system whose output is used as initial data for a new system. In the notation introduced in Sect. 7, this means the problem can be solved iteratively: first a time-periodic problem, then an initial value problem whose data depend on the solution obtained in the first step.
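The two-step procedure (first a time-periodic problem on the loop, then an initial value problem on the attached edge) can be illustrated by a scalar sketch. All numerical data below are hypothetical; taking \(A_1=-\lambda_1\) with \(\lambda_1>0\) makes \(\Vert e^{a_1A_1}\Vert<1\), so the periodic fixed-point equation on the loop is uniquely solvable:

```python
import math

lam1, a1, c1 = 1.5, 2.0, 3.0   # hypothetical data on the loop edge
lam2, a2, c2 = 0.5, 1.0, 0.0   # hypothetical data on the attached edge

# Step 1: time-periodic problem on the loop, psi_1(0) = psi_1(a_1).
# The fixed-point equation g = e^{-lam1*a1} g + c1*(1 - e^{-lam1*a1})/lam1
# is uniquely solvable because 1 - e^{-lam1*a1} != 0.
E1 = math.exp(-lam1 * a1)
g = (c1 * (1 - E1) / lam1) / (1 - E1)   # simplifies to c1/lam1 for constant forcing

# Step 2: initial value problem on the second edge with datum psi_1(a_1) = g.
psi2_end = math.exp(-lam2 * a2) * g + c2 * (1 - math.exp(-lam2 * a2)) / lam2
```

This mirrors the text: the periodic problem is solved globally on the loop, and only then does the attached edge see its data, as an ordinary Cauchy problem.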
Frequency-dependent couplings
Frequency-dependent transition conditions between time branches may also be considered. Assume for simplicity that \(A_1=A_2\) are positive self-adjoint operators with discrete spectrum \(k_1\le k_2\le \cdots \) (counted with multiplicities): Any element of \({\mathcal K}\) can thus be expanded in terms of eigenfunctions. For two edges one can, for example, consider the left shift operator \(S_-\) defined by
where \((\psi _n)\) is an orthonormal basis of eigenfunctions: This induces a map also in \(D(A^{1/2})\). Then,
is an admissible transmission condition where the first row induces an initial condition on \(\psi _1(0)\) and the second is the frequency shift. One may also consider a projection \(P_I\) onto certain frequency ranges \(I\subset \mathbb {N}\),
and one could have a splitting of the system
corresponding to
This is iteratively solvable as a sequence of initial value problems, and hence, it is well posed.
Lions’ maximal \(L^2\)-regularity problem for non-autonomous Cauchy problems
Lions’ maximal regularity problem for non-autonomous Cauchy problems considers
for a Hilbert space X and \(f\in L^2(0,T;X)\), and asks whether the solution satisfies \(\psi \in W^{1,2}(0,T;X)\); this would in turn imply that also \(A(\cdot )\psi \in L^2(0,T;X)\). This problem has a long history and much remarkable work has been devoted to it. More precisely, depending on \(A(\cdot )\) there are counterexamples as well as criteria which ensure an affirmative answer: We refer the interested reader to [24] for an early study of maximal regularity for non-autonomous problems, and to [6] for further information and updated references.
The particular case of \(A(\cdot )\) being a (matrix-valued) step function with matching trace spaces
has already been used in [15] as a first step toward \(A(\cdot )\) of bounded variation. The time-graph approach does not give any additional information at this point, but it underlines the role of the compatibility assumption for the trace spaces. Using our approach we directly see that the corresponding abstract time-graph Cauchy problem can be studied by means of
and hence, it is iteratively solvable by Proposition 7.2.
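In the scalar case, the equivalence of the step-function problem with an iteratively solved two-edge time-graph problem can be verified directly. The following sketch (with made-up numerical data, not taken from [15]) glues two exponential flows by the identity coupling \(\psi_2(0)=\psi_1(t_1)\) and compares the result with integrating the step generator in one sweep:

```python
import math

# Step-function generator, scalar caricature: A(t) = -lam_a on [0, t1],
# A(t) = -lam_b on [t1, T]; all values are hypothetical illustration data.
lam_a, lam_b, t1, T, g = 2.0, 0.5, 1.0, 3.0, 1.0

# Time-graph view: two edges glued by the identity coupling psi_2(0) = psi_1(t1),
# solved one after the other (a block lower triangular system).
psi1_end = g * math.exp(-lam_a * t1)
psi2_end = psi1_end * math.exp(-lam_b * (T - t1))

# Direct view: integrating the piecewise-constant generator over [0, T] in one go.
direct = g * math.exp(-(lam_a * t1 + lam_b * (T - t1)))
```

Both viewpoints yield \(e^{-3}\) here; the time-graph formulation merely makes the sequential structure, and the role of matching trace spaces, explicit.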
Outline on non-parabolic Cauchy problems
So far, the focus has been on parabolic Cauchy problems, but the Green’s function ansatz makes sense also in some non-parabolic situations.
Schrödinger equation
Let us study the Schrödinger-type problem
Provided that \(\mathbb {1}-\mathbb {B}\, e^{i\underline{a}A}\) is invertible in \({\mathcal K}\), the solution map
given by \(\psi _0\) in Theorem 6.6 is well defined for all \(\underline{g}\in {\mathcal K}\) and defines a mild solution. (Notice that invertibility in \({\mathcal K}\) is sufficient since here we merely aim at mild solutions.)
Remark 8.1
While the time graph \({\mathcal G}\) need not display a group structure and, therefore, the issue of time reversal need generally not be well defined, we may still wonder whether the family of solution operators that governs (8.1) consists of unitary operators. Assume that A and \(\mathbb {B}\in {\mathcal L}({\mathcal K})\) are self-adjoint, that \(e^{i\underline{a}A}\) and \(\mathbb {B}\) commute, and that \(\mathbb {1}-\mathbb {B}\, e^{i\underline{a}A}\) is invertible. Then, the solution operators to the time-graph Schrödinger equation (8.1) are unitary, i.e.,
if and only if \(\mathbb {B}^2 = 2\mathbb {B}\cos (\underline{a}A)\). This follows, using that all operators commute, from
which is in turn equivalent to \(\mathbb {B}^2 = 2\mathbb {B}\cos (\underline{a}A)\). If, in particular, \(\mathbb {B}\) is invertible, then the above condition amounts to \(\mathbb {B}=2 \cos ( \underline{a}A)\). Accordingly, there is a unitary solution operator for the fixed jump condition \(\mathbb {B}=\mathbb {1}\) if and only if \(\sigma (\underline{a}A) \subset \pm \pi /3+2\pi \mathbb {Z}\). So, the classical case \(\mathbb {B}=0\) is not the only case of a unitary solution operator. Note that time inversion is still possible even for a non-unitary solution operator, but the time-reversed dynamics differs from the time-forward dynamics, and going forth and back is not necessarily the same as staying at one time.
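In the scalar case (a single eigenvalue \(\theta\) of \(\underline{a}A\)) the condition \(\mathbb{B}=2\cos(\underline{a}A)\) can be verified numerically: with \(B=2\cos\theta\) one has \(1-Be^{i\theta}=1-(e^{i\theta}+e^{-i\theta})e^{i\theta}=-e^{2i\theta}\), a number of modulus one, so the scalar solution operator is indeed unitary. A quick check (illustrative only, eigenvalues chosen arbitrarily):

```python
import cmath
import math

# Scalar caricature: theta plays the role of an eigenvalue of aA,
# and B = 2 cos(theta) is the coupling singled out in Remark 8.1.
for theta in (0.3, 1.0, 2.5):
    B = 2 * math.cos(theta)
    z = 1 - B * cmath.exp(1j * theta)
    # Algebraically z = -e^{2 i theta}, hence |z| = 1: the scalar inverse is unitary.
    assert abs(abs(z) - 1) < 1e-12
    assert abs(z + cmath.exp(2j * theta)) < 1e-12
```

The same computation applied to \(B=0\) gives \(z=1\), recovering the classical unitary case mentioned in the text.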
Second-order Cauchy problem
The above setting can be generalized to different kinds of evolution equations. Let \(\mathbb {B}_1,\mathbb {B}_2\) be bounded linear operators on \({\mathcal {K}}^2\) and consider the second-order Cauchy problem
The idea is to decompose this into
where A is skew-symmetric. Hence, one has to solve two first-order problems iteratively, assuming, in addition to the assumptions of Theorem 6.6, that \(A_j\) is self-adjoint and boundedly invertible for each \(j\in {\mathcal I}\). Then, for any \(\underline{g}_1, \underline{g}_2\in {\mathcal K}\), there is a unique solution to (8.2).
Outline on mixed-order systems
It is also possible to discuss evolution equations whose nature differs on each time branch; in particular, it is possible to define an operator on \(\mathbb {T}\) which agrees with a first derivative on a subset of \(\mathbb {T}\) and with a second derivative on the remaining time branches. Defining appropriate transition conditions is, however, less obvious: A thorough discussion of ‘well-behaved’ transition conditions can be found in [25]. Following these lines one can use the Kalton–Weis approach to solve a Cauchy problem of the type
Focusing on this example we consider operators \(A_1,A_2\) in Hilbert spaces \(X_1,X_2\) and couplings defined by
for an orthogonal projection P in \({\mathcal K}:= X_1\oplus X_2\), \(P^{\perp }=\mathbb {1}-P\), and \(L\in {\mathcal L}({\mathcal K})\) with \(LP^{\perp }=P^{\perp }L\). With these couplings the operator \(\mathbb {T}_{P,L}\) defined on \(L^2(0,a_1;X_1)\oplus L^2(0,a_2;X_2)\) by
is m-dissipative if \(L\) is dissipative, see [25, Theorem 4.1], and similarly to Corollary 4.5 one concludes that it has a bounded \(H^\infty \)-calculus of angle \(\pi /2\). If the spatial operator \(A\) is sectorial and commutes with the boundary conditions, then one can apply the Kalton–Weis theorem to obtain well-posedness in a maximal regularity space. A Green’s function approach could be pursued as well, along the lines of the Green’s function from [25, Proposition 6.6].
A tentative interpretation of time graphs
Several convenient properties of semigroups depend decisively on the order structure of the underlying set, so it is conceivable to relax the standard approach to time evolution and study semigroup-like operator families that are indexed on posets different from \(\mathbb {R}_+\).
One particularly simple case is that of a tree-like time structure. More precisely, we allow for a time that looks like a rooted tree, see Fig. 3f. This seems conceptually very close to H. Everett’s many-worlds interpretation of quantum mechanics [12], but our mathematical theory is not restricted to Schrödinger-type equations, see Sect. 6, and a precise analysis of similarities and differences with Everett’s interpretation goes far beyond the scope of this note. In a very simplified synopsis, the many-worlds interpretation claims, in order to reconcile probabilistic and deterministic interpretations of quantum mechanics, that time splits at each point in time and each possible state is actually attained in one of the parallel universes.
Imagining parallel universes, one would have no evidence of them in the case of a tree graph. Only if there is some interaction between different ‘time branches’ can one know of the other, now non-parallel but interacting, universes. This leads to an interpretation of time travel in terms of time graphs, and it seems that time graphs are a convenient way of picturing time travel to oneself, independently of its actual physical meaning.
Note that in the case of tree graphs, an evolutionary system can be solved iteratively: starting with the first edge, where some initial condition is imposed, one determines the state at the end of the edge, which is then used as a new initial condition for the next level of the tree, etc. Interesting problems arise when oriented loops are allowed, as outlined in Sect. 7. Then, the time evolution can no longer be described iteratively and there is no clear direction distinguishing ‘before’ and ‘after.’ This is one of the problems occurring in the scientific interpretation of closed timelike curves in general relativity, cf. [21, Chapter 5] for a discussion of closed timelike curves occurring in exact solutions to the Einstein equations; much earlier, this has spurred the imagination of science fiction authors. Just to mention a few, there are H.G. Wells’ novel The Time Machine (1895) as well as The Man in the High Castle (1962) and further works written by Philip K. Dick between the 1950s and the 1970s, whose main theme is interacting parallel universes.
Variations of these themes, and in particular the so-called grandfather paradox—preventing one’s own birth during a journey back in time or violating causality in some other way—are at the origin of several mainstream movies, like the classic Back to the Future (1985) or the more recent Looper (2012), in whose plot a person is sent back from 2074 to 2044 so that he can meet and be killed by his younger self.^{Footnote 1} A further, different narrative trick is that of time loops, illustrated for instance in the movies Groundhog Day (1993) or Miss Peregrine’s Home for Peculiar Children (2016), which tell the stories of Phil Connors—respectively, of a group of children and their guardian—who get trapped in a time loop around February 2, 1992—respectively, September 3, 1943—before eventually managing to escape. In mathematical terms, this does not mean that a certain function is periodic (the days spent by the main characters in either loop are not identical), but rather that it only satisfies a certain identity condition at different instants (Phil Connors’ environment is reset every day at 6:00 am, and so is the environment of the peculiar children and their guardian, every day at 9:07 pm): This time development can be captured in Fig. 3e or 4d. This suggests a much more down-to-earth interpretation of evolution on branching time structures: Namely, it conveniently allows us to formalize the requirement that solutions at different time instants respect certain algebraic relations.
The question of whether such fictional situations can be reconciled with our deeply rooted perception of time as linear seems to us related to the problem of representing a time-graph Cauchy problem as a sequence of initial value problems which can be solved one after another. As we have pointed out in Sect. 7, this is closely related to the absence of loops inside the time graph; in the presence of loops, the solution operator acts in a truly global (in time) fashion, as the solution at some point depends on all other times, including future-like times.
Time travel, multiverses and the grandfather paradox
A time-travel scenario can be depicted as in Fig. 3g, with a link between some point in the future and a point which might be in the past. Considering such a graph one can, for example, impose the transmission conditions
that correspond to
Therefore, this is not solvable as a sequence of Cauchy problems on intervals, but it is well posed in the sense of Theorem 6.6 under suitable assumptions on the spatial operators \(A_i\). In a sense, time travel occurs in this deterministic world, but there is no free will to cause a grandfather paradox: The system is forced to be contradiction-free. This resembles the case of time-periodic solutions. Given a solution \(\psi \) to
one can compare this to the solution to
Due to the uniqueness of solutions, the system comes back to its original state, i.e., \(\psi =\varphi \) and \(\varphi (1)=\psi (1)=\psi (0)\). Living in a time-periodic world would thus be locally like living in a time-interval world with initial conditions, but globally it is nevertheless time-periodic. Similarly, in the scenarios of Fig. 3g or 4a, solutions have constraints that are nonlocal in time and are seen only on a global level.
Considering the graph from Fig. 3h one can, for example, impose the transmission conditions
which leads to
This is solvable as a sequence of initial value problems: because \(\mathbb {B}_{14}=0\), the loop in the graph is not reflected by the boundary conditions. A grandfather paradox does not occur because we actually have a sequence of initial value problems, and this seems to be the way we represent time travel in our thoughts when watching a science fiction movie: A time traveler reaches a past whose initial conditions are the actual state of the past plus the time traveler. This mixture gives new initial conditions which lead to new events that do not actually affect the time from which the time traveler comes. Traveling back to the present does not lead to any contradictions, since one has a simple superposition of two time branches, so that changes due to the altered time branch can be incorporated. This becomes more transparent when, as in Fig. 4c—compared to Fig. 4b—an auxiliary edge is inserted.
Caught in a time loop
The scenario of being caught in a time loop, just as in Groundhog Day, can also be represented by time graphs. Given a time graph as in Fig. 4d, one can impose at the first vertex a splitting into several copies of the world. At each subsequent vertex there is a superposition of the original state with the time evolution of the incoming edge, where the superposition is such that only the main character is replaced, while all the rest goes back to the original state. Such conditions would allow for iterative solvability as a sequence of initial value problems.
Representing such a plot properly would, in addition, need a dynamical graph the development of which depends on the solution; one would also need to incorporate an end condition stating that if the solution reaches a certain state, then the time evolution proceeds as a usual time axis. This would be a nonlinear feature. Typically, such an end condition consists in the main character’s reaching a certain goal or a key insight into the meaning of life.
Summarizing, it seems that our thinking is predisposed to represent time as linear with well-specified past and future, and even when imagining science-fictional scenarios of time travel, time loops and parallel universes, we search for an ordering, asking ‘what happened first?’, ‘what happened then?’, ‘\(\ldots \) and then?’, etc. So, in our language every science-fictional scenario needs a representation as an iteratively solvable sequence of initial value problems. The other way round, a properly time-periodic movie would be quite repetitive.
Notes
 1.
At one point, he urges his younger self to refrain from theoretical considerations stating ‘I don’t want to talk about time travel [...]. If we start talking about it, we’re gonna be here all day. Making diagrams with straws’: a witty allusion to time as a graph.
References
 1.
W. Arendt and S. Bu. The operator-valued Marcinkiewicz multiplier theorem and maximal regularity. Math. Z., 240:311–343, 2002.
 2.
W. Arendt and S. Bu. Operator-valued Fourier multipliers on periodic Besov spaces and applications. Proc. Edinb. Math. Soc., 47:15–33, 2004.
 3.
W. Arendt and S. Bu. Operator-valued multiplier theorems characterizing Hilbert spaces. J. Aust. Math. Soc., 77:175–184, 2004.
 4.
M. Abe. Time in Buddhism. In S. Heine, editor, Zen and Comparative Studies, Library of Philosophy and Religion, pp. 163–169. MacMillan, London, 1997.
 5.
W. Arendt, C.J.K. Batty, M. Hieber, and F. Neubrander. Vector-Valued Laplace Transforms and Cauchy Problems—Second Edition, volume 96 of Monographs in Mathematics. Birkhäuser, Basel, 2010.
 6.
W. Arendt, D. Dier, and S. Fackler. J. L. Lions’ problem on maximal regularity. Arch. Math., 109:59–72, 2017.
 7.
H. Amann. Linear and Quasilinear Parabolic Problems. Vol. 1: Abstract Linear Theory. Birkhäuser, Basel, 1995.
 8.
W. Arendt. Semigroups and evolution equations: Functional calculus, regularity and kernel estimates. In C.M. Dafermos and E. Feireisl, editors, Handbook of Differential Equations: Evolutionary Equations—Vol. 1. North Holland, Amsterdam, 2004.
 9.
A. Celik and M. Kyed. Nonlinear wave equation with damping: periodic forcing and non-resonant solutions to the Kuznetsov equation. ZAMM Z. Angew. Math. Mech., 98(3):412–430, 2018.
 10.
U. Coope. Time for Aristotle: Physics IV. 10–14. Oxford Aristotle Studies. Oxford University Press, New York, 2005.
 11.
H. Coward. Time in Hinduism. J. Hindu-Christian Stud., 12:22–27, 1999.
 12.
B.S. DeWitt and N. Graham, editors. The Many-Worlds Interpretation of Quantum Mechanics. Princeton University Press, Princeton, 2015.
 13.
R. Denk, M. Hieber, and J. Prüss. \({{R}}\)-Boundedness, Fourier Multipliers and Problems of Elliptic and Parabolic Type, volume 788 of Mem. Amer. Math. Soc. Amer. Math. Soc., Providence, RI, 2003.
 14.
T. Eiter and M. Kyed. Time-periodic linearized Navier–Stokes equations: an approach based on Fourier multipliers. In Particles in Flows, Adv. Math. Fluid Mech., pp. 77–137. Birkhäuser, Cham, 2017.
 15.
O. ElMennaoui and H. Laasri. On evolution equations governed by nonautonomous forms. Arch. Math., 107:43–57, 2016.
 16.
K.J. Engel and R. Nagel. One-Parameter Semigroups for Linear Evolution Equations, volume 194 of Graduate Texts in Mathematics. Springer, New York, 2000.
 17.
G.P. Galdi, M. Hieber, and T. Kashiwabara. Strong time-periodic solutions to the 3D primitive equations subject to arbitrarily large forces. Nonlinearity, 30(10):3979–3992, 2017.
 18.
M. Geissert, M. Hieber, and Th. H. Nguyen. A general approach to time periodic incompressible viscous fluid flow problems. Arch. Ration. Mech. Anal., 220(3):1095–1118, 2016.
 19.
D. M. Gitman, I. V. Tyutin, and B. L. Voronov. Self-adjoint extensions in quantum mechanics, volume 62 of Progress in Mathematical Physics. Birkhäuser/Springer, New York, 2012. General theory and applications to Schrödinger and Dirac equations with singular potentials.
 20.
M. Haase. The Functional Calculus for Sectorial Operators, volume 169 of Oper. Theory Adv. Appl. Birkhäuser, Basel, 2006.
 21.
S. W. Hawking and G. F. R. Ellis. The Large Scale Structure of Space-Time. Cambridge University Press, London-New York, 1973. Cambridge Monographs on Mathematical Physics, No. 1.
 22.
M. Hieber. On operator semigroups arising in the study of incompressible viscous fluid flows. Philos. Trans. R. Soc. A, 378(2185):618–639, 2020.
 23.
M. Hieber, N. Kajiwara, K. Kress, and P. Tolksdorf. The periodic version of the Da Prato–Grisvard theorem and applications to the bidomain equations with FitzHugh–Nagumo transport. Ann. Mat. Pura Appl. (4), 199(6):2435–2457, 2020.
 24.
M. Hieber and S. Monniaux. Heat-kernels and maximal \({L}^p\)-\({L}^q\)-estimates: the non-autonomous case. J. Fourier Anal. Appl., 6:467–481, 2000.
 25.
A. Hussein and D. Mugnolo. Quantum graphs with mixed dynamics: the transport/diffusion case. J. Phys. A, 46:235202, 2013.
 26.
M. Hieber, A. Mahalov, and R. Takada. Time periodic and almost time periodic solutions to rotating stratified fluids subject to large forces. J. Differ. Equ., 266(23):977–1002, 2019.
 27.
M. Hieber, Th. H. Nguyen, and A. Seyfert. On periodic and almost periodic solutions to incompressible viscous fluid flow problems on the whole line. In Mathematics for Nonlinear Phenomena—Analysis and Computation, volume 215 of Springer Proc. Math. Stat., pp. 51–81. Springer, Cham, 2017.
 28.
M. Hieber and J. Prüss. Heat kernels and maximal \({L}^p\)-\({L}^q\) estimates for parabolic evolution equations. Comm. Partial Differ. Equ., 22:1647–1669, 1997.
 29.
M. Hieber and C. Stinner. Strong time periodic solutions to Keller–Segel systems: an approach by the quasilinear Arendt–Bu theorem. J. Differ. Equ., 269(2):1636–1655, 2020.
 30.
B. Jacob and H. Zwart. Linear Port-Hamiltonian Systems on Infinite-dimensional Spaces, volume 223 of Oper. Theory Adv. Appl. Birkhäuser, Basel, 2012.
 31.
T. Kato. Perturbation Theory for Linear Operators, volume 132 of Grundlehren der mathematischen Wissenschaften. Springer, Berlin, 1980.
 32.
M. Kyed and J. Sauer. A method for obtaining time-periodic \(L^p\) estimates. J. Differ. Equ., 262:633–652, 2017.
 33.
N.J. Kalton and L. Weis. The \({H}^\infty \)-calculus and sums of closed operators. Math. Ann., 321:319–345, 2001.
 34.
P.C. Kunstmann and L. Weis. Maximal \(L_p\)-regularity for parabolic equations, Fourier multiplier theorems and \(H^\infty \)-functional calculus. In Functional Analytic Methods for Evolution Equations, volume 1855 of Lect. Notes Math., pp. 65–311. Springer-Verlag, Berlin, 2004.
 35.
S. Martin. Time, kingship, and the Maya universe. Expedition, 54:18–23, 2012.
 36.
D. Mugnolo. Semigroup Methods for Evolution Equations on Networks. Underst. Compl. Syst. Springer, Berlin, 2014.
 37.
J. Prüss and G. Simonett. Moving Interfaces and Quasilinear Parabolic Evolution Equations, volume 105 of Monographs in Mathematics. Birkhäuser/Springer, Cham, 2016.
 38.
C. Rovelli. The Order of Time. Penguin Random House, New York, 2018.
 39.
O. Vejvoda. Partial Differential Equations: Time-Periodic Solutions. Martinus Nijhoff Publishers, The Hague, 1982.
Additional information
Dedicated to Matthias Hieber on the occasion of his 60th birthday and in recognition of his brilliant work pointing to the future
Delio Mugnolo was partially supported by the Deutsche Forschungsgemeinschaft (Grant 397230547).
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Hussein, A., Mugnolo, D. If time were a graph, what would evolution equations look like? J. Evol. Equ. (2021). https://doi.org/10.1007/s00028-021-00672-8
Keywords
 Evolution equations
 Cauchy problems
 Time evolution on graphs
Mathematics Subject Classification
 Primary: 47D99
 Secondary: 47D06
 35B10
 34G10