1 Introduction

Consider a finite network (e.g., of pipelines) in which some material is transported along its branches (e.g., pipes). The velocity of the transport depends on the given branch but may also change in time. We would like to know under which conditions such a system can be modelled in such a way that, for any given initial distribution, we are able to predict the state of the system at any time. We would also like to obtain stable solutions that depend continuously on the initial state. In this case we call our problem well-posed.

Such transport problems on networks have already been studied by several authors. The operator-theoretical approach by means of abstract Cauchy problems on Banach spaces was initiated by the second author and Sikolya [14]; for an overview and further references we refer to the survey [15]. However, the majority of the publications concentrates on time-independent transport and hence on autonomous abstract Cauchy problems. A first attempt at non-autonomous problems of this kind was made by Bayazit et al. [7]. They considered transport on networks with boundary conditions changing in time. The advantage of such an approach is that the corresponding operator does not change its action on the Banach space; only its domain changes in time. Our aim is to allow the operator itself to be non-autonomous, that is, we study transport problems on finite metric graphs with time-dependent velocities along the edges. We use evolution families and evolution semigroups as studied by Nickel [18] and show that the abstract Cauchy problem that can be associated with the transport equation on these graphs is well-posed.

Let us also mention that diffusion and other processes on networks have also been studied by semigroup techniques, see e.g. the monograph by Mugnolo [16]. Time-dependent diffusion on networks was considered in [4] where a non-autonomous form method was used that is only applicable in Hilbert spaces.

This paper is structured as follows. The second section recalls some notions from graph theory as well as the definition of and some basic results on general non-autonomous abstract Cauchy problems and the associated evolution families. In the third section we present our non-autonomous transport problem on a metric graph and rewrite it in an operator-theoretical context. We first consider non-autonomous operators A(t) with a common time-independent domain \(\textrm{D}( A(t)) \equiv \textrm{D}\). Acquistapace and Terreni [1] studied operators of this kind, but their results are not applicable in our situation since they assume the generation of analytic semigroups, which is not the case here. We can, however, apply the results by Kato [12]. In Sect. 4 we treat the general case in which the operators A(t) do not necessarily share a common domain. We obtain the desired evolution family as a composition of a translation and a multiplication semigroup, where the multiplication semigroup is constructed from the known solution semigroups of the autonomous case using the results by Graser [11] on unbounded operator multipliers. In order to determine the domain of the generator of the evolution semigroup we follow the approach via extrapolation spaces as presented by Nagel et al. [17]. Here we make use of the fact that in our case the extrapolated operators share a common extrapolation space.

2 Preliminaries

2.1 Metric graphs

We shall use the notation presented, for example, in [5, 14]. We take a simple, connected, directed, finite graph \((\textrm{V},\textrm{E})\) with the set of vertices \(\textrm{V}=\{\textrm{v}_1,\dots ,\textrm{v}_n\}\) and the set of directed edges \( \textrm{E}= \{\textrm{e}_1,\dots ,\textrm{e}_m\}\subseteq \textrm{V}\times \textrm{V}\). We parametrise each edge by the interval [0, 1] and thus obtain a metric object called a metric graph (or network). By an abuse of notation we denote the endpoints of the edge \(\textrm{e}=(\textrm{v}_i,\textrm{v}_j)\) by \(\textrm{v}_i = \textrm{e}(1)\) and \(\textrm{v}_j = \textrm{e}(0)\). For technical reasons we assume that the graph is oriented contrary to the parametrisation of the edges. The structure of the graph can be described by its incidence matrices as follows. The outgoing incidence matrix \(\Phi ^{-}:=(\phi ^{-}_{ij})_{n\times m}\) and the incoming incidence matrix \(\Phi ^{+}:=(\phi ^{+}_{ij})_{n\times m}\) are defined as

$$\begin{aligned} \phi ^{-}_{ij} := {\left\{ \begin{array}{ll} 1, &{} \text {if } \textrm{e}_j(1) = \textrm{v}_i,\\ 0, &{} \text {otherwise}, \end{array}\right. } \qquad \text {and} \qquad \phi ^{+}_{ij} := {\left\{ \begin{array}{ll} 1, &{} \text {if } \textrm{e}_j(0) = \textrm{v}_i,\\ 0, &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$

We will further assign some weights \(0 \le w_{ij} \le 1\) to the edges of our metric graph such that

$$\begin{aligned} \sum _{j=1}^m{w_{ij}}=1 \text { for all } i=1,\dots , n. \end{aligned}$$
(2.1)

The so-called weighted (transposed) adjacency matrix of the line graph \(\mathbb {B} = (\mathbb {B}_{ij})_{m\times m}\) defined by

$$\begin{aligned} \mathbb {B}_{ij}:={\left\{ \begin{array}{ll} w_{ki},&{} \text { if } \textrm{e}_j (0)=\textrm{v}_k =\textrm{e}_i(1),\\ 0, &{} \text { otherwise}, \end{array}\right. } \end{aligned}$$
(2.2)

contains all information on our weighted metric graph. By (2.1), the matrix \(\mathbb {B}\) is column stochastic, i.e., the sum of entries of each column is 1, and defines a bounded positive operator on \(\mathbb {C}^m\) with \(r(\mathbb {B})= \Vert \mathbb {B}\Vert =1\). Since the graph is connected, \(\mathbb {B}\) is an irreducible matrix. For additional terminology and properties we refer to [5, Ch. 18].
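
For concreteness, the following short numerical sketch assembles the incidence matrices and the matrix \(\mathbb {B}\) for a directed cycle on three vertices and verifies the properties just stated. The graph, the weights and all identifiers below are illustrative choices of ours and are not taken from [5, 14].

```python
import numpy as np

# Illustrative example (our own choice, not from the cited references):
# directed cycle v1 -> v2 -> v3 -> v1 with edges e1=(v1,v2), e2=(v2,v3),
# e3=(v3,v1); by the convention above, e_j(1) is the tail and e_j(0) the
# head of the edge e_j.
n, m = 3, 3
tails = [0, 1, 2]                   # index i such that e_j(1) = v_i
heads = [1, 2, 0]                   # index i such that e_j(0) = v_i

Phi_minus = np.zeros((n, m))        # outgoing incidence matrix
Phi_plus = np.zeros((n, m))         # incoming incidence matrix
for j in range(m):
    Phi_minus[tails[j], j] = 1.0
    Phi_plus[heads[j], j] = 1.0

# weights w_{ij}: all mass leaving v_i enters its unique outgoing edge,
# so each row sums to 1, cf. (2.1)
w = Phi_minus.copy()

# weighted (transposed) adjacency matrix of the line graph, cf. (2.2)
B = np.zeros((m, m))
for i in range(m):
    for j in range(m):
        k = heads[j]                # vertex v_k = e_j(0) ...
        if tails[i] == k:           # ... which must also equal e_i(1)
            B[i, j] = w[k, i]

print(B)                            # permutation matrix of the cycle
print(B.sum(axis=0))                # column sums equal 1 (column stochastic)
print(max(abs(np.linalg.eigvals(B))))   # spectral radius r(B) = 1
```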

2.2 Non-autonomous abstract Cauchy problems

The abstract operator-theoretical setting we will use is that of so-called non-autonomous abstract Cauchy problems. Let \((A(t),\textrm{D}(A(t)))_{t\in \mathbb {R}}\) be a family of (unbounded) operators on a Banach space X. The associated non-autonomous abstract Cauchy problem is given by

$$\begin{aligned} {\left\{ \begin{array}{ll} \dot{u}(t)=A(t)u(t),&{}\quad t\ge s,\\ u(s)=x \in X. \end{array}\right. } \end{aligned}$$
(nACP)

The autonomous case, i.e., when the operators \(A(t)\equiv A\) do not depend on time, is well understood by means of strongly continuous one-parameter operator semigroups, or \(C_0\)-semigroups for short. There are several monographs on this theory; we refer to Engel and Nagel [10]. To some extent this theory can be carried over to the non-autonomous case, as described below. First, let us recall the notions of (classical) solutions and well-posedness of (nACP).

Definition 2.1

[10, Def. VI.9.1] Let \((A(t),\textrm{D}(A(t)))\), \(t\in \mathbb {R}\), be linear operators on a Banach space X and take \(s\in \mathbb {R}\) and \(x\in \textrm{D}(A(s))\). A (classical) solution of (nACP) is a function \(u(\cdot ,s,x)=u\in \textrm{C}^1\left( \left[ s,\infty \right) ,X\right) \) such that \(u(t)\in \textrm{D}(A(t))\) and u satisfies (nACP) for \(t\ge s\). The Cauchy problem (nACP) is called well-posed (on spaces \(Y_t\)) if there are subspaces \(Y_s\subseteq \textrm{D}(A(s))\), \(s\in \mathbb {R}\), dense in X, such that for \(s\in \mathbb {R}\) and \(x\in Y_s\) there is a unique solution \(t\mapsto u(t,s,x)\in Y_t\) of (nACP). In addition, for \(s_n\rightarrow s\) and \(Y_{s_n}\ni x_n\rightarrow x\), we have \(\widetilde{u}(t,s_n,x_n)\rightarrow \widetilde{u}(t,s,x)\) uniformly for t in compact intervals in \(\mathbb {R}\), where we set \(\widetilde{u}(t,s,x):={u}(t,s,x)\) for \(t\ge s\) and \(\widetilde{u}(t,s,x):=x\) for \(t<s\).

In the autonomous case, the solutions are represented by \(C_0\)-semigroups. An appropriate analogue for the solutions of (nACP) is given by so-called evolution families.

Definition 2.2

[10, Def. VI.9.2] A family of bounded operators \((U(t,s))_{t,s\in \mathbb {R}, t\ge s}\) on a Banach space X is called a (strongly continuous) evolution family if

  1. (i)

    \(U(t,s)=U(t,r)U(r,s)\) and \(U(s,s)=\textrm{I}\) for \(t\ge r\ge s\) and \(t,r,s\in \mathbb {R}\),

  2. (ii)

    the mapping \(\left\{ (\tau ,\sigma )\in \mathbb {R}^2:\ \tau \ge \sigma \right\} \ni (t,s)\mapsto U(t,s)\) is strongly continuous,

  3. (iii)

    there exist \(M\ge 1\) and \(\omega \in \mathbb {R}\) such that \(\left\| U(t,s)\right\| \le M\textrm{e}^{\omega (t-s)}\) for all \(t\ge s\).

We say that \((U(t,s))_{t\ge s}\) solves the Cauchy problem (nACP) (on spaces \(Y_t\)) if there are dense subspaces \(Y_s\subseteq X\), \(s\in \mathbb {R}\), such that \(U(t,s)Y_s\subseteq Y_t\subseteq \textrm{D}(A(t))\) for \(t\ge s\) and the function \(t\mapsto U(t,s)x\) is a solution of (nACP) for \(s\in \mathbb {R}\) and \(x\in Y_s\).
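
As a simple illustration of these notions (a standard scalar example, not specific to the references above), take \(X=\mathbb {C}\) and \(A(t)=a(t)\) for a bounded, real-valued \(a\in \textrm{C}(\mathbb {R})\). Then

$$\begin{aligned} U(t,s):=\textrm{e}^{\int _s^t a(r)\,\textrm{d}r},\quad t\ge s, \end{aligned}$$

defines a strongly continuous evolution family: properties (i) and (ii) of Definition 2.2 follow from the additivity and continuity of the integral, (iii) holds with \(M=1\) and \(\omega =\sup _{r\in \mathbb {R}}a(r)\), and \(t\mapsto U(t,s)x\) solves (nACP) for every \(x\in \mathbb {C}\); hence this evolution family solves (nACP) in the sense of Definition 2.2 with \(Y_s=\mathbb {C}\).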

Proposition 2.3

[18, Prop. 2.5] The Cauchy problem (nACP) is well-posed (on \(Y_t\)) if and only if there is an evolution family solving (nACP) (on \(Y_t\)).

A uniform generation theorem in the style of Hille–Yosida is unfortunately not known for non-autonomous Cauchy problems. In fact, the existence of solutions of (nACP) is a priori not clear. However, there have been several attempts at characterising the well-posedness of certain classes of non-autonomous abstract Cauchy problems; let us only mention the work by Acquistapace and Terreni [2], Kato and Tanabe [13], as well as the survey by Schnaubelt [20]. One possible approach to such well-posedness results applies operator semigroup theory by means of so-called evolution semigroups, as follows. To any evolution family \((U(t,s))_{t\ge s}\) on a Banach space X one can associate an operator semigroup \((\mathscr {T}(t))_{t\ge 0}\) on the space \(\textrm{C}_0(\mathbb {R},X)\) by

$$\begin{aligned} (\mathscr {T}(t)f)(s):= U(s,s-t)f(s-t),\quad t\ge 0,\ f\in \textrm{C}_0(\mathbb {R},X),\ s\in \mathbb {R}. \end{aligned}$$

By [10, Lemma VI.9.10] this semigroup is strongly continuous on \(\textrm{C}_0(\mathbb {R},X)\) and is called the evolution semigroup. We denote its generator by \((G,\textrm{D}(G))\). The following result characterizes evolution semigroups. By \((T_\textrm{r}(t))_{t\ge 0}\) we denote the right translation semigroup on \(\textrm{C}_{\textrm{c}}(\mathbb {R})\) defined by \((T_\textrm{r}(t)\varphi )(s):= \varphi (s-t)\).
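
Note that the algebraic properties of an evolution family immediately yield the semigroup law for \((\mathscr {T}(t))_{t\ge 0}\): for \(f\in \textrm{C}_0(\mathbb {R},X)\), \(s\in \mathbb {R}\) and \(t,r\ge 0\) we have

$$\begin{aligned} (\mathscr {T}(t)\mathscr {T}(r)f)(s)&=U(s,s-t)(\mathscr {T}(r)f)(s-t)=U(s,s-t)U(s-t,s-t-r)f(s-t-r)\\&=U(s,s-t-r)f(s-t-r)=(\mathscr {T}(t+r)f)(s), \end{aligned}$$

so only the strong continuity requires an argument, cf. [10, Lemma VI.9.10].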

Theorem 2.4

[10, Thm. VI.9.14] Let \((T(t))_{t\ge 0}\) be a \(C_0\)-semigroup on \(\textrm{C}_0(\mathbb {R},X)\) with generator \((G,\textrm{D}(G))\). Then the following assertions are equivalent.

  1. (a)

    \((T(t))_{t\ge 0}\) is an evolution semigroup.

  2. (b)

    \(T(t)(\varphi f)=(T_\textrm{r}(t)\varphi )T(t)f\) for all \(\varphi \in \textrm{C}_{\textrm{c}}(\mathbb {R})\), \(f\in \textrm{C}_0(\mathbb {R},X)\) and \(t\ge 0\).

  3. (c)

    For \(f\in \textrm{D}(G)\) and \(\varphi \in \textrm{C}_{\textrm{c}}^1(\mathbb {R})\) we have \(\varphi f\in \textrm{D}(G)\) and \(G(\varphi f)=\varphi Gf-\varphi 'f\).

Finally, one can characterize the well-posedness of (nACP) as follows.

Theorem 2.5

[18, Thm. 2.9] Let \((A(t),\textrm{D}(A(t)))_{t\in \mathbb {R}}\) be a family of linear operators on a Banach space X. The following are equivalent.

  1. (a)

    (nACP) is well-posed for the family of operators \((A(t),\textrm{D}(A(t)))_{t\in \mathbb {R}}\).

  2. (b)

    There exists a unique evolution semigroup \((T(t))_{t\ge 0}\) with generator \((G,\textrm{D}(G))\) and a T(t)-invariant core \(D\subseteq \textrm{C}_0^1(\mathbb {R},X)\cap \textrm{D}(G)\) such that \(Gf+f'=A(\cdot )f\) for all \(f\in D\).

The above result shows that determining the domain of the generator of the evolution semigroup is the most important step on the way to solving a given non-autonomous abstract Cauchy problem.

3 The non-autonomous network transport problem

3.1 The setting

Let us now consider the following non-autonomous dynamical system taking place along the m edges of a metric graph:

$$\begin{aligned} {\left\{ \begin{array}{ll} \frac{\partial }{\partial t}u_j(x,t)=c_j(t)\frac{\partial }{\partial x}u_j(x,t),&{}\quad x\in \left( 0,1\right) ,\ t\ge s,\\ u_j(x,s)= f_j(x),&{}\quad x\in \left( 0,1\right) ,\\ \phi ^-_{ij}c_j(t)u_j(1,t)= w_{ij}\sum _{k=1}^m{\phi ^+_{ik}c_k(t)u_k(0,t)},&{}\quad t\ge s, \end{array}\right. } \end{aligned}$$
(nF)

for \(i=1,\dots ,n\), \(j=1,\dots , m\). The first equation models the transport along the edge \(\textrm{e}_j\), where \(c_j(t)\) is the time-dependent velocity coefficient along this edge. In what follows, we will assume that \(c_j(t)>0\) and that there exist \(c_{\min }>0\) and \(c_{\max }>0\) such that \(c_{\min }\le c_j(t)\le c_{\max }\) for all \(j=1,\ldots ,m\) and all \(t\in \mathbb {R}\). The second equation gives the initial mass distribution along the edges, while the third equation represents the boundary conditions in the vertices of the graph. Here, the graph structure is encoded in terms of the incidence matrices. Observe that the weight \(w_{ij}\) gives the proportion of the material arriving at the vertex \(\textrm{v}_i\) that is distributed to the edge \(\textrm{e}_j\). By (2.1), the mass is conserved at all times, i.e., the Kirchhoff law holds in all vertices.
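
Before passing to the operator-theoretical reformulation, the following minimal numerical sketch may help to visualise (nF). It simulates the system on a directed three-edge cycle with an explicit upwind scheme and checks the conservation of mass; the graph, the matrix \(\mathbb {B}\), the velocity profiles \(c_j\), the initial distribution and all discretisation parameters are illustrative assumptions of ours. The vertex conditions are imposed in the equivalent matrix form \(u(1,t)=C(t)^{-1}\mathbb {B}C(t)u(0,t)\), cf. (3.2) below.

```python
import numpy as np

# Minimal sketch of (nF) on a directed three-edge cycle (illustrative only;
# all choices below are assumptions, not data from the paper).
B = np.array([[0., 0., 1.],
              [1., 0., 0.],
              [0., 1., 0.]])        # weighted adjacency matrix (2.2), column stochastic

def c(t):
    # assumed velocity profiles, bounded away from 0 and from infinity
    return np.array([1.0 + 0.5 * np.sin(t), 1.5, 1.0 + 0.3 * np.cos(t)])

m_edges, N = 3, 400
dx = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)
u = np.tile(np.exp(-50.0 * (x - 0.5) ** 2), (m_edges, 1))   # initial data f_j

t, T = 0.0, 2.0
while t < T:
    ct = c(t)
    dt = 0.5 * dx / ct.max()        # CFL condition for the explicit upwind step
    # du_j/dt = c_j(t) du_j/dx: material is transported towards x = 0
    u[:, :-1] += (ct[:, None] * dt / dx) * (u[:, 1:] - u[:, :-1])
    # vertex conditions in matrix form: u(1,t) = C(t)^{-1} B C(t) u(0,t)
    u[:, -1] = np.diag(1.0 / ct) @ B @ np.diag(ct) @ u[:, 0]
    t += dt

print("total mass:", dx * u.sum())  # approximately constant (Kirchhoff law)
```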

We will show that there exists a solution of the non-autonomous set of equations (nF) by means of evolution families. The corresponding problem in the autonomous case, i.e., when all \(c_j(t)\equiv c_j\) are independent of time, has already been considered by several authors, cf. [14, 15]. A first approach to time-dependent transport equations on networks was taken by Bayazit et al. [7]. In particular, they studied transport processes with time-dependent weights \(w_{ij}(t)\) but constant velocities \(c_j(t)\equiv 1\). This can be interpreted as autonomous transport with time-dependent boundary conditions (i.e., the structure of the graph changes in time). They were able to obtain well-posedness as well as some results on the asymptotic behaviour. In our case, already the transport equation on each edge is non-autonomous and a different approach is needed.

We use an operator-theoretical approach and first define the Banach space

$$\begin{aligned} X:=\textrm{L}^1\left( \left[ 0,1\right] ,\mathbb {C}^m\right) \cong \textrm{L}^1\left( \left[ 0,1\right] ,\mathbb {C}\right) ^m, \end{aligned}$$

with the usual norm

$$\begin{aligned} \left\| f\right\| _X:=\sum _{j=1}^m\int _0^1{\left| f_j(x)\right| \ \textrm{d}{x}}. \end{aligned}$$

On this Banach space X we consider the family of operators \((A(t),\textrm{D}(A(t)))\) defined by

$$\begin{aligned} A(t)&:=\begin{pmatrix} c_1(t)\frac{\textrm{d}}{\textrm{d}{x}}&{} &{}0\\ &{} \ddots &{}\\ 0&{} &{} c_m(t)\frac{\textrm{d}}{\textrm{d}{x}} \end{pmatrix},\end{aligned}$$
(3.1)
$$\begin{aligned} \textrm{D}(A(t))&:=\left\{ u\in \textrm{W}^{1,1}\left( \left[ 0,1\right] ,\mathbb {C}^m\right) :\ u(1)=\mathbb {B}_C(t)u(0)\right\} \end{aligned}$$
(3.2)

where the matrix \(\mathbb {B}_C(t)\) is defined by \(\mathbb {B}_C(t):=C(t)^{-1}\mathbb {B}C(t)\) with \(C(t):=\textrm{diag}(c_j(t))_j\), and \(\mathbb {B}\) is the adjacency matrix given in (2.2). We emphasize that \(C(t)^{-1}\) exists for all \(t\in \mathbb {R}\) since we assumed that \(c_{\min }\le c_j(t)\le c_{\max }\) for all \(j=1,\ldots ,m\) and all \(t\in \mathbb {R}\). We can now rewrite our transport problem (nF) in the form of an abstract Cauchy problem as follows.

Lemma 3.1

The abstract non-autonomous Cauchy problem

$$\begin{aligned} {\left\{ \begin{array}{ll} \dot{u}(t)=A(t)u(t),&{}\quad t,s\in \mathbb {R},\ t\ge s,\\ u(s)=(f_j)_{j=1}^m, \end{array}\right. } \end{aligned}$$
(3.3)

associated to the family of operators \((A(t),\textrm{D}(A(t)))\) defined in (3.1)–(3.2) is equivalent to the transport problem stated in (nF).

Proof

We only need to check that the condition in the domains \(\textrm{D}(A(t))\) is equivalent to the boundary condition of (nF). This can be done in the same way as in the proof of [5, Prop. 18.2]. \(\square \)

Our main goal is to show that problem (3.3) is well-posed. We start with the simplest situation.

3.2 Operators with constant domains

In the literature treating non-autonomous Cauchy problems it is often assumed that the domains of the operators A(t) appearing in (3.3) do not depend on t. In this case, the smoothness of the coefficients \(c_j(\cdot )\) yields the following well-posedness result.

Proposition 3.2

Let \((A(t),\textrm{D}(A(t)))\) be the operator family defined in (3.1)–(3.2). Assume that \(\textrm{D}(A(t)) = \textrm{D}(A(0)) =:D\) for all \(t\in \mathbb {R}\) and that \(c_j(\cdot )\in \textrm{C}^1(\mathbb {R})\) for all \(j=1,\dots ,m\). Then (nF) is a well-posed problem.

Proof

This follows from the well-known result by Kato once we verify the assumptions of [12, Thm. 5]. First, note that by [5, Cor. 18.15], for each fixed \(t\in \mathbb {R}\), the operator (A(t), D) generates a contractive \(C_0\)-semigroup. Next, the smoothness of the coefficients \(c_j(\cdot )\) implies that the mapping \(t\mapsto A(t)f\) is continuously differentiable for each \(f\in D\). Finally, the latter condition is known to be equivalent to Kato’s assumptions, see [10, Sect. VI.9.5] or [19, Prop. 2.1].\(\square \)

Let us state some simple conditions that yield t-independence of the domains \(\textrm{D}(A(t))\).

Lemma 3.3

The following two properties of the coefficients \(c_j(\cdot )\) appearing in (3.1)–(3.2) are equivalent.

  1. (i)

    Whenever \(\mathbb {B}_{ij}\ne 0 \), \(c_i(t_1) c_j(t_1)^{-1} = c_i(t_2) c_j(t_2)^{-1}\) for any \(t_1,t_2 \in \mathbb {R}\).

  2. (ii)

    \(C(t)=\alpha (t) D\), \(t\in \mathbb {R}\), for some scalar function \(\alpha (\cdot )\) and diagonal matrix D.

Moreover, each of these properties implies that \(\textrm{D}(A(t_1)) = \textrm{D}(A(t_2))\) for any \(t_1,t_2 \in \mathbb {R}\).

Proof

It is easy to see that (i) implies (ii). Indeed, by taking \(t_1=t\) and \(t_2=0\), and denoting \(D=\textrm{diag}(d_i)_i:=C(0)\), we can reformulate (i) as \(c_i(t) d_i^{-1}=c_j(t) d_j^{-1}=:\alpha (t)\), which yields (ii). We further observe that \(\left( \mathbb {B}_C(t)\right) _{ij} = c_i^{-1}(t) \mathbb {B}_{ij} c_j(t)\). Hence, (ii) clearly implies \(\mathbb {B}_C(t_1) = \mathbb {B}_C(t_2)\) and thus also (i).\(\square \)

One can interpret condition (i) in the above lemma as follows: whenever there is an inflow into the i-th edge from the j-th edge, at all times the velocities of the flow should stay in the same ratio which is, for example, determined by the radii of the respective pipelines. This is reasonable in many situations. However, our main result will show that neither constant domains nor the \(C^1\)-condition on the coefficients \(c_j(\cdot )\) is necessary for the well-posedness of (nF).
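
The dichotomy of Lemma 3.3 can also be checked numerically. In the following sketch (with hypothetical velocity profiles chosen by us) the matrix \(\mathbb {B}_C(t)=C(t)^{-1}\mathbb {B}C(t)\), and hence the domain condition in (3.2), is independent of t exactly when the velocities keep constant ratios.

```python
import numpy as np

# Numerical illustration of Lemma 3.3 (hypothetical velocity profiles).
B = np.array([[0., 0., 1.],
              [1., 0., 0.],
              [0., 1., 0.]])        # adjacency matrix of a three-edge cycle

def B_C(cvals):
    C = np.diag(cvals)
    return np.linalg.inv(C) @ B @ C     # B_C(t) = C(t)^{-1} B C(t)

# condition (ii) holds: c(t) = alpha(t) * (2, 1, 3) -> B_C(t) does not depend on t
for t in (0.0, 1.0, 2.0):
    alpha = 1.0 + 0.5 * np.sin(t)
    print(B_C(alpha * np.array([2.0, 1.0, 3.0])))

# condition (ii) fails: the ratio c_1(t)/c_3(t) varies -> B_C(t) changes with t
for t in (0.0, 1.0):
    print(B_C(np.array([1.0 + t, 1.0, 3.0])))
```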

4 Well-posedness of the general non-autonomous problem

Let us now consider the general situation when the domains of operators A(t) are not necessarily constant. We start by observing that the domains of the adjoint operators \(A'(t)\), however, do not depend on t.

Lemma 4.1

The adjoints of the operators \((A(t),\textrm{D}(A(t)))\) defined in (3.1)–(3.2) are given by

$$\begin{aligned} A'(t)&:=\begin{pmatrix} - c_1(t)\frac{\textrm{d}}{\textrm{d}{x}}&{} &{}0\\ &{} \ddots &{}\\ 0&{} &{}- c_m(t)\frac{\textrm{d}}{\textrm{d}{x}} \end{pmatrix},\\ \textrm{D}(A'(t))&:=\left\{ v\in \textrm{W}^{1,\infty }\left( \left[ 0,1\right] ,\mathbb {C}^m\right) :\ v(0)=\mathbb {B}^{\top }v(1)\right\} . \end{aligned}$$

Proof

For \(u\in \textrm{D}(A(t))\) and \(v\in \textrm{W}^{1,\infty }\left( \left[ 0,1\right] ,\mathbb {C}^m\right) \) we first compute

$$\begin{aligned} \langle A(t) u,v\rangle&= \sum _{k=1}^m \int _0^1 c_k(t) u'_k(x) v_k(x) \; dx \\&= \sum _{k=1}^m c_k(t) \left( u_k(1) v_k(1) - u_k(0) v_k(0) \right) -\sum _{k=1}^m \int _0^1 c_k(t) u_k(x) v'_k(x) \; dx. \end{aligned}$$

Now we observe that \(v \in \textrm{D}(A'(t))\) if and only if

$$\begin{aligned} 0&= \sum _{k=1}^m c_k(t) \left( u_k(1) v_k(1) - u_k(0) v_k(0) \right) \\&=\sum _{k=1}^m c_k(t) \left( \sum _{i=1}^m \left( \left( \mathbb {B}_C(t)\right) _{ki} u_i(0) \right) v_k(1) - u_k(0) v_k(0) \right) \\&= \sum _{i=1}^m \left( \sum _{k=1}^m \left( \mathbb {B}_C(t)^{\top }\right) _{ik} c_k(t) v_k(1) \right) u_i(0) - \sum _{k=1}^m c_k(t) v_k(0) u_k(0) \end{aligned}$$

for all \(u\in \textrm{D}(A(t))\). By using \(\mathbb {B}_C(t) ^{\top } C(t) = C(t) \mathbb {B}^{\top } \), which holds since \(\mathbb {B}_C(t)^{\top }=C(t)\mathbb {B}^{\top }C(t)^{-1}\) for the diagonal matrix \(C(t)\), we obtain that this is further equivalent to

$$\begin{aligned} C(t) \mathbb {B}^{\top } v(1) - C(t) v(0)=0\end{aligned}$$

and since C(t) is invertible, we are done.\(\square \)

We shall now employ the evolution family and evolution semigroup approach to show the well-posedness of (nACP) for the operators A(t) given by (3.1) with non-autonomous domains of the form (3.2). We proceed in several steps.

4.1 The associated multiplication semigroup

By [5, Cor. 18.15], for each fixed \(s\in \mathbb {R}\), the abstract Cauchy problem corresponding to the single operator \((A(s),\textrm{D}(A(s)))\) is well-posed on X and the solution is given by a positive contraction semigroup \((T_s(t))_{t\ge 0}\). From this we construct a new operator semigroup \((\mathscr {S}(t))_{t\ge 0}\) on the vector-valued function space \(\textrm{C}_0(\mathbb {R},X)\) by

$$\begin{aligned} (\mathscr {S}(t)f)(s):=T_s(t)f(s),\quad t\ge 0,\ f\in \textrm{C}_0(\mathbb {R},X),\ s\in \mathbb {R}. \end{aligned}$$
(4.1)

We will show that \((\mathscr {S}(t))_{t\ge 0}\) is a strongly continuous semigroup on \(\textrm{C}_0(\mathbb {R},X)\) and give its generator.

Proposition 4.2

Assume that \(c_j(\cdot )\in \textrm{C}(\mathbb {R})\) and that there exist \(c_{\min }>0\) and \(c_{\max }>0\) such that \(c_{\min }\le c_j(t)\le c_{\max }\) for each \(t\in \mathbb {R}\). Then, the operator semigroup \((\mathscr {S}(t))_{t\ge 0}\) is strongly continuous on \(\textrm{C}_0(\mathbb {R},X)\) and its generator \((\mathscr {A},\textrm{D}(\mathscr {A}))\) is given by the multiplication operator on \(\textrm{C}_0(\mathbb {R},X)\) induced by the family of differential operators on \(X=\textrm{L}^1\left( \left[ 0,1\right] ,\mathbb {C}^m\right) \) defined in (3.1)–(3.2), i.e.,

$$\begin{aligned} (\mathscr {A}f)(s)&=A(s)f(s),\\ \textrm{D}(\mathscr {A})&=\left\{ f\in \textrm{C}_0(\mathbb {R},X): \ f(s)\in \textrm{D}(A(s)) \ { \forall s\in \mathbb {R}\ \text {and} } \ A(\cdot )f(\cdot )\in \textrm{C}_0(\mathbb {R},X)\right\} . \end{aligned}$$

In addition, the set \(\rho (\mathscr {A})\cap \rho (A(s))\) is nonempty for each \(s\in \mathbb {R}\) and the resolvent \(R(\lambda ,\mathscr {A})\) is a bounded operator multiplier for all \(\lambda \in \rho (\mathscr {A})\), given by

$$\begin{aligned} \left( R(\lambda ,\mathscr {A})f\right) (s) = R(\lambda , A(s)) f(s),\quad s\in \mathbb {R}, f\in \textrm{C}_0(\mathbb {R},X).\end{aligned}$$

Proof

The assertion follows directly from [11, Thm. 3.4 & Lem. 3.5] once we show that the map \(\mathbb {R}\times \mathbb {R}_{+}\ni (s,t)\mapsto T_s(t)\) is strongly continuous. To this end we first prove that the mapping \(s\mapsto R(\lambda ,A(s))\) is strongly continuous. For any fixed \(s\in \mathbb {R}\) we can use [5, Prop. 18.12] and obtain an explicit formula for the resolvent \(R(\lambda ,A(s))\). In particular, for \(\textrm{Re} \lambda >0\) we have

$$\begin{aligned} R(\lambda ,A(s))=\left( \textrm{I}+E_\lambda (\cdot ,s)(1-\mathbb {B}_{C,\lambda }(s))^{-1}\mathbb {B}_{C,\lambda }(s)\otimes \delta _0\right) R_{\lambda }(s), \end{aligned}$$

where

$$\begin{aligned} E_{\lambda }(\tau ,s)=\textrm{diag}\left( \textrm{e}^{(\lambda /c_k(s))\tau }\right) ,\quad \mathbb {B}_{C,\lambda }(s)=E_\lambda (-1,s)\mathbb {B}_C(s), \end{aligned}$$

\(\delta _0\) is the point evaluation at 0, and

$$\begin{aligned} (R_\lambda (s)f)(\tau )=\int _{\tau }^1{E_\lambda (\tau -\xi ,s)C^{-1}(s)f(\xi )\ \textrm{d}\xi }. \end{aligned}$$

It is easy to see from the explicit resolvent formula that, by our continuity assumptions on \(c_j(\cdot )\), the mapping \(s\mapsto R(\lambda ,A(s))x\) is continuous for every \(x\in X\). Now we estimate

$$\begin{aligned} \Vert T_{s_0}(t_0)f(s_0) - T_s(t)f(s)\Vert&\le \Vert T_{s_0}(t_0)f(s_0) - T_{s_0}(t)f(s_0)\Vert \\&+ \Vert T_{s_0}(t)f(s_0) - T_s(t)f(s_0)\Vert + \Vert T_{s}(t)f(s_0) - T_s(t)f(s)\Vert \end{aligned}$$

and note that the first term tends to 0 when \(t\rightarrow t_0\) since the semigroup \((T_{s_0}(t))_{t\ge 0}\) is strongly continuous, while the third term tends to 0 when \(s\rightarrow s_0\) since \((T_{s}(t))_{t\ge 0}\) is contractive and f is continuous. Finally, due to the strong continuity of \(s\mapsto R(\lambda ,A(s))\) we can apply the Trotter–Kato theorem [10, Thm. III.4.9] and see that also the second term converges to 0 as \(s\rightarrow s_0\).\(\square \)

4.2 The associated evolution semigroup and its generator

Inspired by the work of R. Nagel, G. Nickel and S. Romanelli [17, Sect. 4] we define another semigroup \((\mathscr {T}(t))_{t\ge 0}\) on \(\textrm{C}_0(\mathbb {R},X)\) by

$$\begin{aligned} (\mathscr {T}(t)f)(s):={(\mathscr {S}(t)f)}(s-t),\quad t\ge 0,\ f\in \textrm{C}_0(\mathbb {R},X),\ s\in \mathbb {R}, \end{aligned}$$
(4.2)

where \((\mathscr {S}(t))_{t\ge 0}\) is the multiplication semigroup given in (4.1). Hence, \((\mathscr {T}(t))_{t\ge 0}\) is the composition of the translation semigroup and the multiplication semigroup. An application of Theorem 2.4 shows that \((\mathscr {T}(t))_{t\ge 0}\) is an evolution semigroup.
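
Indeed, condition (b) of Theorem 2.4 can be checked by a direct computation: for \(\varphi \in \textrm{C}_{\textrm{c}}(\mathbb {R})\), \(f\in \textrm{C}_0(\mathbb {R},X)\), \(t\ge 0\) and \(s\in \mathbb {R}\) we have

$$\begin{aligned} (\mathscr {T}(t)(\varphi f))(s)=\left( \mathscr {S}(t)(\varphi f)\right) (s-t)=\varphi (s-t)\, T_{s-t}(t)f(s-t)=\left( (T_\textrm{r}(t)\varphi )\,\mathscr {T}(t)f\right) (s), \end{aligned}$$

that is, \(\mathscr {T}(t)(\varphi f)=(T_\textrm{r}(t)\varphi )\mathscr {T}(t)f\).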

Our next goal is to determine the generator of this semigroup. This can be done, as already mentioned in [17, Sect. 4], by means of extrapolation spaces. Recall that the first extrapolation space of a Banach space X with respect to a semigroup generator \((A,\textrm{D}(A))\) is defined to be the completion of X with respect to a new norm \(\left\| \cdot \right\| _{-1}\) on X defined by

$$\begin{aligned} \left\| x\right\| _{-1}:=\left\| {R({\lambda _0}, A)} x\right\| ,\quad x\in X \end{aligned}$$
(4.3)

for some \({\lambda _0}\in \rho (A)\) (usually one may assume that \({\lambda _0} =0\)). This completion is denoted by \(X_{-1}\) and is called the first extrapolation space. If we want to stress the dependence on the operator \((A,\textrm{D}(A))\) we write \(X_{-1}^{(A)}\). By continuity, we can extend \((T(t))_{t\ge 0}\) to a semigroup of linear operators on \(X_{-1}^{(A)}\). This semigroup is denoted by \((T_{-1}(t))_{t\ge 0}\). The first extrapolation space and the corresponding extrapolated semigroup have the following properties.

Proposition 4.3

[10, Thm. II.5.5] Let \((T(t))_{t\ge 0}\) be a strongly continuous semigroup on X with generator \((A,\textrm{D}(A))\). The following assertions hold true.

  1. (i)

    X is dense in \(X_{-1}^{(A)}\).

  2. (ii)

    \((T_{-1}(t))_{t\ge 0}\) is a strongly continuous semigroup on \(X_{-1}^{(A)}\).

  3. (iii)

    The generator \(A_{-1}\) of \((T_{-1}(t))_{t\ge 0}\) has domain \(\textrm{D}(A_{-1})=X\).

  4. (iv)

    For \(\lambda _0\in \rho (A)\), which is used to define the norm \(\left\| \cdot \right\| _{-1}\) in (4.3), \(\lambda _0-A_{-1}:X\rightarrow X_{-1}^{(A)}\) is the unique extension of \(\lambda _0-A:\textrm{D}(A)\rightarrow X\) to an isometry.

Typical examples of such extrapolation spaces are Sobolev spaces and weighted \(\textrm{L}^p\)-spaces. For more explicit examples we refer to [10, Ex. II.5.7,5.8], [17, Sect. 3] and [8, Sect. 5]. Similar results have recently been obtained for Bochner \(\textrm{L}^p\)-spaces as well, cf. [9]. Recall that by [11, Thm. 4.7], the first extrapolation space of \(\textrm{C}_0(\mathbb {R},X)\) with respect to the multiplication operator \((\mathscr {A},\textrm{D}(\mathscr {A}))\) is given by

$$\begin{aligned} \left[ \textrm{C}_0(\mathbb {R},X)\right] _{-1}^{\mathscr {A}}=\left\{ f\in \prod _{s\in \mathbb {R}}{{X_{-1}^{A(s)}}}:\ f\ \text {is fiber-continuous and vanishes at infinity}\right\} . \end{aligned}$$

Here, for fixed \(s\in \mathbb {R}\), we denote by \(X_{-1}^{A(s)}\) the first extrapolation space of X with respect to the operator \((A(s),\textrm{D}(A(s)))\). A function \(f\in \prod _{s\in \mathbb {R}}{X_{-1}^{A(s)}}\) is called fiber-continuous if for any \(s_0\in \mathbb {R}\) and \(\varepsilon >0\) there exist \(x\in X\) and \(\delta >0\) such that \(\left| s_0-s\right| <\delta \) implies that

$$\left\| f(s_0)-x\right\| _{-1,s_0}+\left\| f(s)-x\right\| _{-1,s}<\varepsilon ,$$

see [11, Def. 4.4]. Moreover, such a function vanishes at infinity if \(\lim \limits _{\left| s\right| \rightarrow \infty }{\left\| f(s)\right\| _{-1,s}}=0\).

At first glance, it seems that the elements of the family of extrapolation spaces \((X_{-1}^{A(s)})_{s\in \mathbb {R}}\) may differ from each other and hence the product of the extrapolated translation semigroup and the extrapolated multiplication semigroup may not be well-defined. However, in our case the operators \((A(s),\textrm{D}(A(s)))\), \(s\in \mathbb {R}\), have a special form. Let \({\lambda _0}\in \mathbb {C}\) with \(\textrm{Re}({\lambda _0})>0\). By [5, Cor. 18.13] one has \({\lambda _0}\in \rho (A(s))\) for all \(s\in \mathbb {R}\). Now, by [3, Rem. 6.6(c)], Lemma 4.1, and an application of [10, Exercise II.5.9(1)], it follows that all extrapolation spaces \(X_{-1}^{A(s)}\) are equivalent, i.e.,

$$\begin{aligned} X_{-1}^{A(s)}\cong X_{-1}^{A(0)}=:X_{-1} \text { for all } s\in \mathbb {R}\end{aligned}$$

and there exists \(\eta >0\) such that \(\frac{1}{\eta }\left\| x\right\| _{X_{-1}}\le \left\| x\right\| _{X_{-1}^{A(s)}}\le \eta \left\| x\right\| _{X_{-1}}\) for all \(s\in \mathbb {R}\). Hence, we have that \(\left[ \textrm{C}_0(\mathbb {R},X)\right] _{-1}^{\mathscr {A}}=\textrm{C}_0(\mathbb {R},X_{-1})\). By following the procedure in [17, Sect. 4], we can then describe the domain \(\textrm{D}(\mathscr {G})\) of the generator of the evolution semigroup \((\mathscr {T}(t))_{t\ge 0}\).

Theorem 4.4

Assume that \(c_j(\cdot )\in \textrm{C}(\mathbb {R})\) and that there exist \(c_{\min }>0\) and \(c_{\max }>0\) such that \(c_{\min }\le c_j(t)\le c_{\max }\) for each \(t\in \mathbb {R}\). The generator \((\mathscr {G},\textrm{D}(\mathscr {G}))\) of the \(C_0\)-semigroup \((\mathscr {T}(t))_{t\ge 0}\) on \(\textrm{C}_0(\mathbb {R},X)\) defined in (4.2) is given by

$$\begin{aligned} \begin{aligned} (\mathscr {G}f)(s)&=A_{-1}(s)f(s)-f'(s),\\ \textrm{D}(\mathscr {G})&=\left\{ f\in \textrm{C}_0(\mathbb {R},X)\cap \textrm{C}_0^1(\mathbb {R},X_{-1}):\ {\mathscr {A}_{-1}} f-f'\in \textrm{C}_0(\mathbb {R},X)\right\} . \end{aligned} \end{aligned}$$
(4.4)

Proof

As mentioned above, one has that \(\left[ \textrm{C}_0(\mathbb {R},X)\right] _{-1}^{\mathscr {A}}=\textrm{C}_0(\mathbb {R},X_{-1})\). On this space we consider the extrapolated multiplication semigroup \((\mathscr {S}_{-1}(t))_{t\ge 0}\) which is defined by

$$\begin{aligned} (\mathscr {S}_{-1}(t)f)(s):=T_{-1,s}(t)f(s), \end{aligned}$$

where \((T_{-1,s}(t))_{t\ge 0}\) is the extrapolated semigroup for \((T_s(t))_{t\ge 0}\) on \(X_{-1}\). Its generator is denoted by \(\mathscr {A}_{-1}\) and is by [11, Lem. 4.6] given as

$$\begin{aligned} \left( \mathscr {A}_{-1}f\right) (s)= A_{-1}(s)f(s),\quad \textrm{D}(\mathscr {A}_{-1}) = \textrm{C}_0(\mathbb {R},X).\end{aligned}$$

On the space \(\textrm{C}_0(\mathbb {R},X_{-1})\) we also consider the operator \((\mathscr {B},\textrm{D}(\mathscr {B}))\) defined by

$$\begin{aligned} \mathscr {B}f:=f',\quad \textrm{D}(\mathscr {B}):=\textrm{C}^1_0(\mathbb {R},X_{-1}), \end{aligned}$$

so that \(-\mathscr {B}\) generates the right translation semigroup on \(\textrm{C}_0(\mathbb {R},X_{-1})\). Now, define the evolution semigroup \((\widetilde{\mathscr {T}}(t))_{t\ge 0}\) on \(\textrm{C}_0(\mathbb {R},X_{-1})\) by

$$\begin{aligned} (\widetilde{\mathscr {T}}(t)f)(s):={\left( \mathscr {S}_{-1}(t)f\right) }(s-t),\quad t\ge 0, s\in \mathbb {R}. \end{aligned}$$

Since our multiplication and translation semigroups commute, one obtains for the generator \((\widetilde{\mathscr {G}},\textrm{D}(\widetilde{\mathscr {G}}))\) of the evolution semigroup \((\widetilde{\mathscr {T}}(t))_{t\ge 0}\) that for every \(f\in \textrm{D}(\mathscr {A}_{-1})\cap \textrm{D}(\mathscr {B})\), which is a core for \(\widetilde{\mathscr {G}}\), one has

$$\begin{aligned} (\widetilde{\mathscr {G}}f)(s)=-f'(s)+(\mathscr {A}_{-1}f)(s)=-f'(s)+A_{-1}(s)f(s),\quad s\in \mathbb {R}, \end{aligned}$$

see [10, Sec. II.2.7]. By observing that \((\mathscr {T}(t))_{t\ge 0}\) given in (4.2) coincides with \((\widetilde{\mathscr {T}}(t))_{t\ge 0}\) restricted to the space \(\textrm{C}_0(\mathbb {R},X)\), one has that its generator \(\mathscr {G}\) is the part of \(\widetilde{\mathscr {G}}\) in \(\textrm{C}_0(\mathbb {R},X)\), that is,

$$\begin{aligned} \textrm{D}(\mathscr {G})=\{f\in \textrm{C}_0(\mathbb {R},X)\cap \textrm{D}(\widetilde{\mathscr {G}}):\ \widetilde{\mathscr {G}}f\in \textrm{C}_0(\mathbb {R},X)\} \end{aligned}$$

and \(\mathscr {G}f=\widetilde{\mathscr {G}}f\) for all \(f\in \textrm{D}(\mathscr {G})\), see [10, Sec. II.2.3]. It only remains to specify the domain \(\textrm{D}(\mathscr {G})\). We already know that

$$\begin{aligned} \textrm{C}_0(\mathbb {R},X_{-1}) \supset \textrm{D}(\widetilde{\mathscr {G}}) \supset \textrm{D}(\mathscr {A}_{-1})\cap \textrm{D}(\mathscr {B}) = \textrm{C}_0(\mathbb {R},X) \cap \textrm{C}^1_0(\mathbb {R},X_{-1}).\end{aligned}$$

Now take \(f\in \textrm{C}_0(\mathbb {R},X)\cap \textrm{D}(\widetilde{\mathscr {G}}) = \textrm{D}(\mathscr {A}_{-1})\cap \textrm{D}(\widetilde{\mathscr {G}})\). Then both \(\lim \limits _{t\rightarrow 0}{\frac{\mathscr {S}_{-1}(t)f-f}{t}}\) and \(\lim \limits _{t\rightarrow 0}{\frac{\widetilde{\mathscr {T}}(t)f-f}{t}}\) exist in \(\left[ \textrm{C}_0(\mathbb {R},X)\right] _{-1}^{\mathscr {A}}=\textrm{C}_0(\mathbb {R},X_{-1})\). We notice that

$$\begin{aligned}&\lim _{t\rightarrow 0}{\frac{(\widetilde{\mathscr {T}}(t)f-f)(s)}{t}}-\lim _{t\rightarrow 0}{\frac{(\mathscr {S}_{-1}(t)f-f)(s)}{t}}\nonumber \\&\quad =\lim _{t\rightarrow 0}{\frac{({\mathscr {S}}_{-1}(t)f)(s-t)-f(s)}{t}}-\lim _{t\rightarrow 0}{\frac{(\mathscr {S}_{-1}(t)f)(s)-f(s)}{t}}\nonumber \\&\quad =\lim _{t\rightarrow 0}{\frac{T_{-1,s-t}(t)f(s-t)-f(s)}{t}}-\lim _{t\rightarrow 0}{\frac{T_{-1,s}(t)f(s)-f(s)}{t}}\nonumber \\&\quad =\lim _{t\rightarrow 0}{\frac{T_{-1,s-t}(t)f(s-t)-f(s-t)}{t}}-\lim _{t\rightarrow 0}{\frac{T_{-1,s}(t)f(s)-f(s)}{t}} \nonumber \\&\qquad + \lim _{t\rightarrow 0}{\frac{f(s-t)-f(s)}{t}}. \end{aligned}$$
(4.5)

Observe that \(f(\tilde{s}) \in \textrm{D}(A_{-1}(\tilde{s}))=X\) for each \(\tilde{s}\in \mathbb {R}\) and the operator \(A_{-1}(\tilde{s})\) generates \((T_{-1,\tilde{s}}(t))_{t\ge 0}\) on \(X_{-1}\). Moreover, we will now show that the mapping \(\tilde{s}\mapsto A_{-1}(\tilde{s})x\) is continuous for all \(x\in X\). Indeed, let \(s,\tilde{s}\in \mathbb {R}\) and \(x\in X\). First recall that \(n\in \rho (A(s))\) for all \(n\in \mathbb {N}\) and all \(s\in \mathbb {R}\) (see [5, Cor. 18.13]) and we can define two sequences \((x_n)_{n\in \mathbb {N}}\) and \((\tilde{x}_n)_{n\in \mathbb {N}}\),

$$\begin{aligned} x_n:=nR(n,A_{-1}(s))x=nR(n,A(s))x\quad \text {and}\quad \tilde{x}_n:=nR(n,A_{-1}(\tilde{s}))x=nR(n,A(\tilde{s}))x. \end{aligned}$$

By [10, Lem. II.3.4(i)] we know that \(x_n\rightarrow x\) and \(\tilde{x}_n\rightarrow x\) as \(n\rightarrow \infty \). We also see that by the resolvent identity one gets

$$\begin{aligned} A_{-1}(s)x_n=n^2R(n,A(s))x-nx\quad \text {and}\quad A_{-1}(\tilde{s})\tilde{x}_n=n^2R(n,A(\tilde{s}))x-nx. \end{aligned}$$
(4.6)

Fix some \(\lambda _0\in \rho (A(s))\), which is used to define the norm \(\left\| \cdot \right\| _{-1}\) in (4.3), and let \(\varepsilon >0\) be arbitrary. Then

$$\begin{aligned} \left\| A_{-1}(s)x-A_{-1}(\tilde{s})x\right\| _{-1} =&\left\| (\lambda _0-A_{-1}(s))x-(\lambda _0-A_{-1}(\tilde{s}))x\right\| _{-1} \nonumber \\ \le&\left\| (\lambda _0-A_{-1}(s))(x-x_n)\right\| _{-1}\nonumber \\&+\left\| (\lambda _0-A_{-1}(s))x_n-(\lambda _0-A_{-1}(\tilde{s}))\tilde{x}_n\right\| _{-1} \nonumber \\&+\left\| (\lambda _0-A_{-1}(\tilde{s}))(\tilde{x}_n-x)\right\| _{-1} \nonumber \\ \le&\left\| x_n-x\right\| +\left\| \lambda _0(x_n-\tilde{x}_n)\right\| _{-1}\nonumber \\&+\left\| A_{-1}(\tilde{s})\tilde{x}_n-A_{-1}(s)x_n\right\| _{-1}+\left\| \tilde{x}_n-x\right\| , \end{aligned}$$
(4.7)

where in the last step we used the isometric property of operators \(\lambda _0-A_{-1}(s)\) and \(\lambda _0-A_{-1}(\tilde{s})\), see Proposition 4.3. For n large enough, we have \(\left\| x_n-x\right\| <\varepsilon /4\) and \(\left\| \tilde{x}_n-x\right\| <\varepsilon /4\). Further, by applying the definition of both sequences we get

$$\begin{aligned} \left\| \lambda _0 (x_n-\tilde{x}_n)\right\| _{-1}\le \left| \lambda _0\right| C n\left\| R(n,A(s))x-R(n,A(\tilde{s}))x\right\| \end{aligned}$$

and, by (4.6), also

$$\begin{aligned} \left\| A_{-1}(\tilde{s})\tilde{x}_n-A_{-1}(s)x_n\right\| _{-1}\le n^2 D\left\| R(n,A(s))x-R(n,A(\tilde{s}))x\right\| \end{aligned}$$

for some constants \(C>0\) and \(D>0\). As already observed in the proof of Proposition 4.2, the assumption \(c_j(\cdot )\in \textrm{C}(\mathbb {R})\) implies the strong continuity of the map \(s\mapsto R(n,A(s))\). In particular, we can find \(\delta >0\) such that for \(\left| s-\tilde{s}\right| <\delta \) one has that

$$\begin{aligned} \left\| R(n,A(s))x-R(n,A(\tilde{s}))x\right\| <\frac{\varepsilon }{4n^2}, \end{aligned}$$

showing that also the remaining two summands in (4.7) become arbitrarily small as soon as \(\left| s-\tilde{s}\right| <\delta \). We have thus shown that \(\left\| A_{-1}(s)x-A_{-1}(\tilde{s})x\right\| _{-1}<\varepsilon \) whenever \(\left| s-\tilde{s}\right| <\delta \). Now, by combining the continuity of \(\tilde{s}\mapsto A_{-1}(\tilde{s})x\) with the continuity of \((\tilde{s},t)\mapsto T_{-1,\tilde{s}}(t)f(\tilde{s})\) (shown in the proof of Proposition 4.2) we see that the first limit in (4.5) equals \(A_{-1}(s)f(s)\). The second limit, by definition, also equals \(A_{-1}(s)f(s)\) and hence the first two limits in (4.5) cancel out. We obtain that \(\lim \limits _{t\rightarrow 0}{\frac{f(s-t)-f(s)}{t}}\) exists in \(\textrm{C}_0(\mathbb {R},X_{-1})\) and thus \(f\in \textrm{D}(\mathscr {B})=\textrm{C}^1_0(\mathbb {R},X_{-1})\), which finally yields

$$\begin{aligned} \textrm{C}_0(\mathbb {R},X)\cap \textrm{D}(\widetilde{\mathscr {G}}) =\textrm{C}_0(\mathbb {R},X)\cap \textrm{C}^1_0(\mathbb {R},X_{-1}). \end{aligned}$$

\(\square \)

Now we are in a position to show that problem (3.3)—and thus (nF)—is well-posed.

Corollary 4.5

Assume that \(c_j(\cdot )\in \textrm{C}(\mathbb {R})\) and that there exist \(c_{\min }>0\) and \(c_{\max }>0\) such that \(c_{\min }\le c_j(t)\le c_{\max }\) for each \(t\in \mathbb {R}\). Then (nF) is a well-posed problem.

Proof

First observe that \(\textrm{D}(\mathscr {G})\cap \textrm{C}_0^1(\mathbb {R},X)=\textrm{D}(\mathscr {G})\), as we showed that \(\textrm{C}_0(\mathbb {R},X)\cap \textrm{D}(\widetilde{\mathscr {G}}) =\textrm{C}_0(\mathbb {R},X)\cap \textrm{C}^1_0(\mathbb {R},X_{-1})\) and \(\mathscr {G}\) is the part of \(\widetilde{\mathscr {G}}\) in \(\textrm{C}_0(\mathbb {R},X)\). By Theorem 4.4, \(\textrm{D}(\mathscr {G})\) is the domain of the generator of the evolution semigroup, so \(\textrm{D}(\mathscr {G})\) is a core. Hence, by applying Theorem 2.5 we obtain the desired result. \(\square \)

Unfortunately, we cannot give an explicit expression for the corresponding evolution family, due to the fact that there is no closed formula for the semigroups \((T_s(t))_{t\ge 0}\). Theoretically speaking, given an evolution semigroup \((\mathscr {T}(t))_{t\ge 0}\) on \(\textrm{C}_0(\mathbb {R},X)\), one defines the corresponding evolution family implicitly by

$$\begin{aligned} U(t,s)x:=(T_{\textrm{r}}(s-t)\mathscr {T}(t-s)f)(s),\quad s\in \mathbb {R},\ t\ge s, \end{aligned}$$
(4.8)

for \(f\in \textrm{C}_0(\mathbb {R},X)\) with \(f(s) = x\) and \((T_\textrm{r}(t))_{t\in \mathbb {R}}\) the right-translation group on \(\textrm{C}_0(\mathbb {R},X)\), so that \(T_\textrm{r}(s-t)\) makes sense also for \(s-t\le 0\).
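
If \((\mathscr {T}(t))_{t\ge 0}\) is the evolution semigroup associated with an evolution family \((U(t,s))_{t\ge s}\), then (4.8) indeed recovers this family and, in particular, its right-hand side depends only on the value \(x=f(s)\):

$$\begin{aligned} (T_{\textrm{r}}(s-t)\mathscr {T}(t-s)f)(s)=(\mathscr {T}(t-s)f)(t)=U(t,s)f(s)=U(t,s)x. \end{aligned}$$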

Remark 4.6

  1. (a)

    Following the construction of the evolution family given by (4.8) via the contraction semigroups \((T_s(t))_{t\ge 0}\), \((\mathscr {S}(t))_{t\ge 0}\), \((\mathscr {T}(t))_{t\ge 0}\), and translation semigroups we see that \((U(t,s))_{t\ge s}\) actually becomes an evolution family of contractions, i.e., \(\left\| U(t,s)\right\| \le 1\) for all \(t\ge s\).

  2. (b)

    Since each of the operators \((A(t),\textrm{D}(A(t)))\), \(t\in \mathbb {R}\), generates a positive operator semigroup, our construction also yields that the evolution family \((U(t,s))_{t\ge s}\) obtained in (4.8) consists of positive operators.

  3. (c)

    The analysis of the asymptotic behaviour of the solutions to non-autonomous problems is in general a very difficult task. Batty, Chill, and Tomilov characterized strong stability of a bounded evolution family on a Banach space X by the stability of its associated evolution semigroup on \(\textrm{L}^p( \mathbb {R}_+,X)\) for some \(1 \le p\le \infty \) or, equivalently, by the density of the range of the corresponding generator on \(\textrm{L}^1( \mathbb {R}_+,X)\), see [6, Thm. 2.2]. These conditions are, however, not easy to verify in our case.

  4. (d)

    Note that most of our results also hold in the case of non-constant weights \(w_{ij}(t)\) on the edges as long as (2.1) holds for every \(t\in \mathbb {R}\). Therefore we obtain a generalisation of the well-posedness results of Bayazit et al. [7]. The authors of [7] also studied the long-term behaviour for operators with a periodic non-autonomous domain. They used an explicit description of the evolution family and the fact that in this case the period map is a multiplication operator. However, such results are not available in our general situation, so, a priori, no statements regarding the asymptotic behaviour are possible.

Finally, we consider a slightly modified version of (nF) by adding a time-varying absorption term. In particular, we consider the equation given by

$$\begin{aligned} {\left\{ \begin{array}{ll} \frac{\partial }{\partial t}u_j(t,x)=c_j(t)\frac{\partial }{\partial x}u_j(t,x)+q_j(t,x)u_j(t,x),&{}\quad x\in \left( 0,1\right) ,\ t\ge s,\\ u_j(s,x)= f_j(x),&{}\quad x\in \left( 0,1\right) ,\\ \phi ^-_{ij}c_j(t)u_j(t,1)= w_{ij}\sum _{k=1}^m{\phi ^+_{ik}c_k(t)u_k(t,0)},&{}\quad t\ge s, \end{array}\right. } \end{aligned}$$
(pnF)

for \(i=1,\dots ,n\), \(j=1,\dots , m\). Then (pnF) can be written as an abstract Cauchy problem of the form

$$\begin{aligned} {\left\{ \begin{array}{ll} \dot{u}(t)=(A(t)+B(t))u(t),&{}\quad t,s\in \mathbb {R},\ t\ge s,\\ u(s)=\textsf{f}, \end{array}\right. } \end{aligned}$$
(4.9)

where the family of operators \((A(t),\textrm{D}(A(t)))_{t\in \mathbb {R}}\) is again defined by (3.1)–(3.2), \(\textsf{f} = (f_j)_{j=1}^m\), and the family of bounded linear operators \((B(t))_{t\in \mathbb {R}}\) is defined by

$$\begin{aligned} B(t):=\textrm{diag}(M_{{ q_j(t,\cdot )}}), \end{aligned}$$
(4.10)

where \(M_{{ q_j(t,\cdot )}}\) denotes the multiplication operator on X induced by the function \(q_j(t,\cdot )\). Under sufficient regularity conditions on functions \(q_j\), classical bounded perturbation results [10, Cor. VI.9.20] yield the following.

Corollary 4.7

Let \(c_j(\cdot )\in \textrm{C}(\mathbb {R})\) be such that there exist \(c_{\min }>0\) and \(c_{\max }>0\) with \(c_{\min }\le c_j(t)\le c_{\max }\) for each \(t\in \mathbb {R}\). Under the assumption that \(q_j(t,\cdot )\in \textrm{L}^{\infty }\left( \left[ 0,1\right] ,\mathbb {C}\right) \) and \(q_j(\cdot ,x)\in \textrm{C}_\textrm{b}(\mathbb {R},\mathbb {C})\), there exists a unique evolution family \((U_B(t,s))_{t\ge s}\) generated by the family of operators \((A(t)+B(t),\textrm{D}(A(t)))_{t\in \mathbb {R}}\).

It is worth mentioning that Corollary 4.7 does not imply that (pnF) is well-posed, see [10, Ex. VI.9.21]. Nonetheless, for the evolution family \((U_B(t,s))_{t\ge s}\) obtained in the corollary, the function \(U_B(\cdot ,s)\textsf{f}\) can be interpreted as a mild solution of the problem (4.9).
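
Let us finally record, for orientation only and without proof, the standard variation of constants relation linking \((U_B(t,s))_{t\ge s}\) to the unperturbed family \((U(t,s))_{t\ge s}\) from (4.8):

$$\begin{aligned} U_B(t,s)x=U(t,s)x+\int _s^t U(t,r)B(r)U_B(r,s)x\,\textrm{d}r,\quad t\ge s,\ x\in X. \end{aligned}$$

This relation underlies the interpretation of \(U_B(\cdot ,s)\textsf{f}\) as a mild solution of (4.9).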