1 Introduction

In this article we bring together two theories for dealing with partial differential equations: the theory of \(C_{0}\)-semigroups on the one hand and the theory of evolutionary equations on the other hand. In particular, we show how \(C_{0}\)-semigroups can be associated with a given evolutionary equation.

The framework of evolutionary equations was introduced in the seminal paper [13]. Evolutionary equations are equations of the form

$$\begin{aligned} \left( \partial _{t}M(\partial _{t})+A\right) U=F, \end{aligned}$$
(1.1)

where \(\partial _{t}\) denotes the temporal derivative, \(M(\partial _{t})\) is a bounded operator in space-time defined via a functional calculus for \(\partial _{t}\), and A is an, in general, unbounded spatial operator. The function F, defined on \({\mathbb {R}}\) and taking values in some Hilbert space, is a given source term, and one seeks a solution U of the above equation. Here, the notion of solution is quite weak, since one only requires that the solution belong to some exponentially weighted \(L_{2}\)-space. Thus, all operators have to be introduced on these spaces. In particular, the time derivative is introduced as an unbounded normal operator on such a space, so that solving (1.1) amounts to dealing with the sum of two unbounded operators (\(\partial _{t}\) and A). Problems of the form (1.1) cover a broad spectrum of differential equations, such as hyperbolic, parabolic, elliptic and mixed-type problems, integro-differential equations [23], delay equations [9] and fractional differential equations [15]. Also, generalisations to some nonlinear [19, 20] and non-autonomous problems [16, 25, 29, 30] are possible. The solution theory is rather simple and relies solely on Hilbert space methods. Moreover, in applications the conditions for the well-posedness of the corresponding evolutionary equations can often easily be verified, since they usually reduce to positivity constraints on the coefficients (see Example 2.12).

On the other hand, there is the well-established theory of \(C_{0}\)-semigroups dealing with so-called Cauchy problems (see e.g. [6, 8, 12]). These are abstract equations of the form

$$\begin{aligned} (\partial _{t}+A)U&=F,\nonumber \\ U(0)&=U_{0}, \end{aligned}$$
(1.2)

where A is a suitable operator acting on some Banach space. Although (1.2) seems to be just a special case of (1.1) for \(M(\partial _{t})=1\), the theories are quite different. While the theory of evolutionary equations focuses on solutions lying in \(L_{2}\), one seeks continuous solutions in the framework of \(C_0\)-semigroups. Moreover, while (1.1) is posed on the whole time horizon \({\mathbb {R}}\), (1.2) is posed on \({\mathbb {R}}_{\ge 0}\) only and is completed by an initial condition. The existence of a \(C_0\)-semigroup associated with (1.2) can be characterised by the celebrated Theorem of Hille-Yosida. In fact, one needs suitable a-priori bounds for all powers of the resolvents \((\lambda +A)^{-1}\). However, for the well-posedness of (1.2) as an evolutionary equation (i.e. without an initial condition and with F given on the whole real line), the boundedness of the resolvents \((\lambda +A)^{-1}\) suffices, and higher powers of these operators do not have to be considered. So, the extra regularity with respect to time required in the theory of \(C_0\)-semigroups restricts the choice of possible operators A. For example, the operator

$$\begin{aligned} A{:}{=}\begin{pmatrix} \mathrm {i}{\text {m}}&{} \mathrm {i}{\text {m}}\\ 0 &{} \mathrm {i}{\text {m}}\end{pmatrix} \end{aligned}$$

considered as an operator on \(L_2({\mathbb {R}}_{>0})\times L_2({\mathbb {R}}_{>0})\), where \({\text {m}}\) denotes multiplication by the argument, i.e. \({\text {m}}f=(t\mapsto tf(t))\) with maximal domain, is not a Hille-Yosida operator and thus does not generate a \(C_0\)-semigroup. However, the evolutionary equation \((\partial _t-A)u=f\) is well-posed in the sense of Theorem 2.10. So, roughly speaking, the theory of \(C_0\)-semigroups can be seen as a regularity theory within the framework of evolutionary equations, which requires stronger assumptions on the operators involved. It is the main goal of the present article to work out these additional assumptions and to provide a way to associate a \(C_0\)-semigroup with an abstract evolutionary equation (1.1).
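To make this obstruction concrete, the following small numerical experiment (an illustration only; the value of \(\lambda \), the grid for the variable t and the use of numpy are ad-hoc choices and not part of the theory) evaluates \(\sup _{t>0}\Vert \lambda ^{n}(\lambda -a(t))^{-n}\Vert \) for the \(2\times 2\) symbol \(a(t)\) of the operator A above. A scaling argument shows that this quantity does not depend on \(\lambda >0\), and the computed values grow without bound in n (roughly like \(\sqrt{n}\)), which is incompatible with any estimate of the form \(\Vert (\lambda -A)^{-n}\Vert \le M(\lambda -\omega )^{-n}\) as required by the Hille-Yosida Theorem; for \(n=1\), in contrast, the values stay of order one.

import numpy as np

lam = 1.0                              # the computed supremum is independent of lam > 0
ts = np.linspace(1e-3, 20.0, 4000)     # grid approximating the multiplication variable t > 0

for n in (1, 2, 5, 10, 50, 100):
    sup = 0.0
    for t in ts:
        a = np.array([[1j * t, 1j * t], [0.0, 1j * t]])              # symbol of A at the point t
        r = np.linalg.matrix_power(np.linalg.inv(lam * np.eye(2) - a), n)
        sup = max(sup, lam ** n * np.linalg.norm(r, 2))               # spectral norm
    print(f"n = {n:3d}:  sup_t ||lam^n (lam - a(t))^(-n)|| ~ {sup:.3f}")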

As we have indicated above, equations of the form (1.1) also cover delay equations, where it is more natural to prescribe histories instead of an initial state at time 0. Moreover, (1.1) also covers so-called differential-algebraic equations (see [10] for the finite-dimensional case and [26,27,28] for infinite dimensions), where not every element of the underlying state space can be used as an initial state. Thus, one is confronted with the problem of defining the ‘right’ initial values and histories for (1.1), depending on the operators involved. Moreover, one has to incorporate these initial conditions within the framework of evolutionary equations; that is, initial conditions should enter the equation as a suitable source term on the right-hand side. This can be done by using extrapolation spaces and by extending the solution theory to those. This idea was already used to formulate initial value problems for certain evolutionary equations in [14, Section 4.2]. It will turn out that initial conditions can be formulated by distributional right-hand sides, which belong to a suitable extrapolation space associated with the time derivative operator \(\partial _{t}\). Having the right formulation of initial value problems at hand, one can associate a \(C_{0}\)-semigroup on a product space consisting of the current state in the first component and the past of the unknown in the second component. This idea was already used to deal with delay equations within the theory of \(C_{0}\)-semigroups, see [3]. As it turns out, this product space is not closed (as a subspace of a suitable Hilbert space), and in order to extend the associated \(C_{0}\)-semigroup to its closure one needs to impose conditions similar to those in the Hille-Yosida Theorem. The key result used to extend the semigroup is the Theorem of Widder-Arendt (see [1] or Theorem 6.6 below).

The paper is structured as follows: We begin by recalling the basic notions and well-posedness results for evolutionary problems (Sect. 2) and for extrapolation spaces (Sect. 3). Then, in order to formulate initial value problems within the framework of evolutionary equations, we introduce a cut-off operator as an unbounded operator on the extrapolation space associated with the time derivative and discuss some of its properties (Sect. 4). Section 5 is then devoted to determining the ‘right’ space of admissible histories and initial values for a given evolutionary problem. We note here that we restrict ourselves to homogeneous problems in the sense that we do not involve an additional source term besides the given history. The main reason is that such source terms would restrict and change the set of admissible histories, a fact which is well known in the theory of differential-algebraic equations. In Sect. 6 we associate a \(C_{0}\)-semigroup on the previously introduced product space of admissible initial values and histories and prove the main result of this article (Theorem 6.9). In the last section we discuss two examples. First, we apply the results to abstract differential-algebraic equations and thereby obtain the Theorem of Hille-Yosida as a special case. In the second example, we discuss a concrete hyperbolic delay equation and prove that we can associate a \(C_{0}\)-semigroup with this problem.

Throughout, every Hilbert space is assumed to be complex and the inner product \(\langle \cdot ,\cdot \rangle \) is conjugate-linear in the first and linear in the second argument.

2 Evolutionary problems

We recall the basic notions and results for evolutionary problems, as they were introduced in [13] (see also [14, Chapter 6]). We begin with the definition of the time derivative operator on an exponentially weighted \(L_{2}\)-space (see also [17]).

Definition 2.1

Let \(\rho \in {\mathbb {R}}\) and H a Hilbert space. We set

$$\begin{aligned} L_{2,\rho }({\mathbb {R}};H){:}{=}\{f:{\mathbb {R}}\rightarrow H\,;\,f\text { measurable},\,\int _{{\mathbb {R}}}\Vert f(t)\Vert ^{2}\mathrm {e}^{-2\rho t}\,\mathrm {d}t<\infty \} \end{aligned}$$

with the common identification of functions coinciding almost everywhere. Then \(L_{2,\rho }({\mathbb {R}};H)\) is a Hilbert space with respect to the inner product

$$\begin{aligned} \langle f,g\rangle _{\rho }{:}{=}\int _{{\mathbb {R}}}\langle f(t),g(t)\rangle \mathrm {e}^{-2\rho t}\,\mathrm {d}t\quad (f,g\in L_{2,\rho }({\mathbb {R}};H)). \end{aligned}$$

Moreover, we define the operator

$$\begin{aligned} \partial _{t,\rho }:H_{\rho }^{1}({\mathbb {R}};H)\subseteq L_{2,\rho }({\mathbb {R}};H)\rightarrow L_{2,\rho }({\mathbb {R}};H),\;f\mapsto f', \end{aligned}$$

where

$$\begin{aligned} H_{\rho }^{1}({\mathbb {R}};H){:}{=}\{f\in L_{2,\rho }({\mathbb {R}};H)\,;\,f'\in L_{2,\rho }({\mathbb {R}};H)\} \end{aligned}$$

with \(f'\) denoting the usual distributional derivative.

We recall some facts on the operator \(\partial _{t,\rho }\) and refer to [9] for the respective proofs.

Proposition 2.2

Let \(\rho \in {\mathbb {R}}\) and H a Hilbert space.

(a)

    The operator \(\partial _{t,\rho }\) is densely defined, closed and linear and \(C_{c}^{\infty }({\mathbb {R}};H)\) is a core for \(\partial _{t,\rho }\).

(b)

    The spectrum of \(\partial _{t,\rho }\) is given by

    $$\begin{aligned} \sigma (\partial _{t,\rho })=\{\mathrm {i}t+\rho \,;\,t\in {\mathbb {R}}\}. \end{aligned}$$
(c)

    For \(\rho \ne 0\) the operator \(\partial _{t,\rho }\) is boundedly invertible with \(\Vert \partial _{t,\rho }^{-1}\Vert =\frac{1}{|\rho |}\) and the inverse is given by

    $$\begin{aligned} \left( \partial _{t,\rho }^{-1}f\right) (t)={\left\{ \begin{array}{ll} \int _{-\infty }^{t}f(s)\,\mathrm {d}s &{} \text { if }\rho >0,\\ -\int _{t}^{\infty }f(s)\,\mathrm {d}s &{} \text { if }\rho <0 \end{array}\right. } \end{aligned}$$

    for \(f\in L_{2,\rho }({\mathbb {R}};H)\) and \(t\in {\mathbb {R}}\).

(d)

    The operator \(\partial _{t,\rho }\) is normal with \(\partial _{t,\rho }^{*}=-\partial _{t,\rho }+2\rho .\)

(e)

    The following variant of Sobolev’s embedding theorem holds:

    $$\begin{aligned} H_{\rho }^{1}({\mathbb {R}};H)\hookrightarrow C_{\rho }({\mathbb {R}};H) \end{aligned}$$

    continuously, where

    $$\begin{aligned} C_{\rho }({\mathbb {R}};H){:}{=}\{f:{\mathbb {R}}\rightarrow H\,;\,f\text { continuous, }\sup _{t\in {\mathbb {R}}}\Vert f(t)\Vert \mathrm {e}^{-\rho t}<\infty \}. \end{aligned}$$

As a normal operator, \(\partial _{t,\rho }\) possesses a natural functional calculus, which can be described via the so-called Fourier-Laplace transform.

Definition 2.3

Let \(\rho \in {\mathbb {R}}\) and H a Hilbert space. We denote by \({\mathcal {L}}_{\rho }\) the unitary extension of the mapping

$$\begin{aligned} C_{c}({\mathbb {R}};H)\subseteq L_{2,\rho }({\mathbb {R}};H)\rightarrow L_{2}({\mathbb {R}};H),\;f\mapsto \left( t\mapsto \frac{1}{\sqrt{2\pi }}\int _{{\mathbb {R}}}\mathrm {e}^{-(\mathrm {i}t+\rho )s}f(s)\,\mathrm {d}s\right) . \end{aligned}$$

Remark 2.4

Note that for \(\rho =0,\) the operator \({\mathcal {L}}_{0}\) is nothing but the classical Fourier transform, which is unitary due to Plancherel’s Theorem (see e.g. [18, Theorem 9.13]). Since \({\mathcal {L}}_{\rho }={\mathcal {L}}_{0}\exp (-\rho \cdot )\) with

$$\begin{aligned} \exp (-\rho \cdot ):L_{2,\rho }({\mathbb {R}};H)\rightarrow L_{2}({\mathbb {R}};H),\;f\mapsto \left( t\mapsto f(t)\mathrm {e}^{-\rho t}\right) , \end{aligned}$$

it follows that \({\mathcal {L}}_{\rho }\) is unitary as a composition of unitary operators.
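As a quick plausibility check of this factorisation (an illustration only; the sample function, the grid and the FFT-based discretisation are ad-hoc choices), the following sketch approximates \({\mathcal {L}}_{\rho }f\) by applying a discrete Fourier transform to \(\exp (-\rho \cdot )f\) and confirms the unitarity relation \(\Vert {\mathcal {L}}_{\rho }f\Vert _{L_{2}}=\Vert f\Vert _{L_{2,\rho }}\) up to discretisation error.

import numpy as np

rho = 0.7
n = 4096
t = np.linspace(-30.0, 30.0, n, endpoint=False)
dt = t[1] - t[0]

f = np.exp(-(t - 3.0) ** 2) * np.cos(2.0 * t)      # a rapidly decaying sample function

weighted = np.exp(-rho * t) * f                    # exp(-rho .) f, cf. the factorisation above
# Riemann-sum approximation of L_rho f on the dual grid; the grid-offset phase factor
# is irrelevant for the norm computed below
hat = np.fft.fft(weighted) * dt / np.sqrt(2.0 * np.pi)
ds = 2.0 * np.pi / (n * dt)                        # spacing of the dual (frequency) grid

norm_rho = np.sqrt(np.sum(np.abs(f) ** 2 * np.exp(-2.0 * rho * t)) * dt)   # ||f||_{L_{2,rho}}
norm_hat = np.sqrt(np.sum(np.abs(hat) ** 2) * ds)                          # ||L_rho f||_{L_2}
print(norm_rho, norm_hat)   # the two values coincide up to discretisation error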

Proposition 2.5

([9, Corollary 2.5]) Let \(\rho \in {\mathbb {R}}\) and H a Hilbert space. We define the operator \({\text {m}}\) by

$$\begin{aligned} {\text {m}}&:{\text {dom}}({\text {m}})\subseteq L_{2}({\mathbb {R}};H)\rightarrow L_{2}({\mathbb {R}};H),\;f\mapsto \left( t\mapsto tf(t)\right) \end{aligned}$$

with maximal domain

$$\begin{aligned} {\text {dom}}({\text {m}}){:}{=}\{f\in L_{2}({\mathbb {R}};H)\,;\,(t\mapsto tf(t))\in L_{2}({\mathbb {R}};H)\}. \end{aligned}$$

Then

$$\begin{aligned} \partial _{t,\rho }={\mathcal {L}}_{\rho }^{*}(\mathrm {i}{\text {m}}+\rho ){\mathcal {L}}_{\rho }. \end{aligned}$$

Using the latter proposition, we can define an operator-valued functional calculus for \(\partial _{t,\rho }\) as follows.

Definition 2.6

Let \(\rho \in {\mathbb {R}}\) and H a Hilbert space. Let \(F:\{\mathrm {i}t+\rho \,;\,t\in {\mathbb {R}}\}\rightarrow L(H)\) be strongly measurable and bounded. Then we define

$$\begin{aligned} F(\partial _{t,\rho }){:}{=}{\mathcal {L}}_{\rho }^{*}F(\mathrm {i}{\text {m}}+\rho ){\mathcal {L}}_{\rho }\in L(L_{2,\rho }({\mathbb {R}};H)), \end{aligned}$$

where

$$\begin{aligned} F(\mathrm {i}{\text {m}}+\rho )f{:}{=}\left( t\mapsto F(\mathrm {i}t+\rho )f(t)\right) \quad (f\in L_{2}({\mathbb {R}};H)). \end{aligned}$$

An important class of operator-valued functions of \(\partial _{t,\rho }\) consists of those functions yielding causal operators.

Proposition 2.7

([14, Theorem 6.1.1, Theorem 6.1.4]) Let \(\rho _{0}\in {\mathbb {R}}\) and H a Hilbert space. If \(M:{\mathbb {C}}_{{\text {Re}}>\rho _{0}}\rightarrow L(H)\) is analytic and bounded, then \(M(\partial _{t,\rho })\) is causal for each \(\rho >\rho _{0};\) i.e., for \(f\in L_{2,\rho }({\mathbb {R}};H)\) with \({\text {spt}}f\subseteq {\mathbb {R}}_{\ge a}\) for some \(a\in {\mathbb {R}}\) it follows that

$$\begin{aligned} {\text {spt}}M(\partial _{t,\rho })f\subseteq {\mathbb {R}}_{\ge a}. \end{aligned}$$

Moreover, \(M(\partial _{t,\rho })\) is independent of the choice of \(\rho >\rho _{0}\) in the sense that

$$\begin{aligned} M(\partial _{t,\rho })f=M(\partial _{t,\mu })f\quad (f\in L_{2,\rho }({\mathbb {R}};H)\cap L_{2,\mu }({\mathbb {R}};H)) \end{aligned}$$

for each \(\rho ,\mu >\rho _{0}\).

Remark 2.8

(a) The proof of causality is based on a theorem by Paley and Wiener, which characterises the functions in \(L_{2}({\mathbb {R}}_{\ge 0};H)\) in terms of their Laplace transform (see [11] or [18, 19.2 Theorem]). The independence of \(\rho \) is a simple application of Cauchy’s Theorem for analytic functions.

(b) It is noteworthy that causal, translation-invariant and bounded operators are always of the form \(M(\partial _{t,\rho })\) for some analytic and bounded mapping defined on a right half plane (see [7, 32]).
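A concrete instance of such a material law (an illustration only; the values of \(\rho \), h, the sample function and the FFT-based discretisation below are ad-hoc choices) is \(M(z){:}{=}\mathrm {e}^{-hz}\) for some \(h>0\), which is analytic and bounded on \({\mathbb {C}}_{{\text {Re}}>0}\). A short computation with Definition 2.3 shows that \(M(\partial _{t,\rho })f=f(\cdot -h)\); that is, the functional calculus turns this M into the time shift (delay) by h, which is evidently causal and independent of \(\rho \). The following sketch verifies this numerically.

import numpy as np

rho, h = 0.5, 2.0
n = 4096
t = np.linspace(-20.0, 30.0, n, endpoint=False)
dt = t[1] - t[0]

f = np.exp(-(t - 5.0) ** 2)                        # a sample function

# apply M(partial_{t,rho}) with M(z) = exp(-h z): transform, multiply by M(i s + rho),
# transform back (cf. Definition 2.6)
weighted = np.exp(-rho * t) * f
s = 2.0 * np.pi * np.fft.fftfreq(n, d=dt)          # dual variable of the discrete transform
Mf = np.exp(rho * t) * np.fft.ifft(np.exp(-h * (1j * s + rho)) * np.fft.fft(weighted))

print(np.max(np.abs(Mf - np.exp(-(t - h - 5.0) ** 2))))   # close to 0: M(partial) f = f(. - h)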

Finally, we are in the position to define well-posed evolutionary problems.

Definition 2.9

(a) Let \(\rho _{0}\in {\mathbb {R}}\) and H a Hilbert space. Moreover, let \(M:{\mathbb {C}}_{{\text {Re}}>\rho _{0}}\rightarrow L(H)\) be analytic and bounded and \(A:{\text {dom}}(A)\subseteq H\rightarrow H\) densely defined, closed and linear. Then we call an equation of the form

$$\begin{aligned} (\partial _{t,\rho }M(\partial _{t,\rho })+A)u=f \end{aligned}$$

the evolutionary equation associated with (M, A). The problem is called well-posed if there is \(\rho _{1}>\rho _{0}\) such that \(zM(z)+A\) is boundedly invertible for each \(z\in {\mathbb {C}}_{{\text {Re}}\ge \rho _{1}}\) and

$$\begin{aligned} {\mathbb {C}}_{{\text {Re}}\ge \rho _{1}}\ni z\mapsto (zM(z)+A)^{-1} \end{aligned}$$

is bounded. Moreover, we define \(s_{0}(M,A)\) as the infimum over all such \(\rho _{1}>\rho _{0}\).

Theorem 2.10

Let \(\rho _{0}\in {\mathbb {R}}\) and H a Hilbert space. Moreover, let \(M:{\mathbb {C}}_{{\text {Re}}>\rho _{0}}\rightarrow L(H)\) be analytic and bounded and \(A:{\text {dom}}(A)\subseteq H\rightarrow H\) densely defined, closed and linear. We assume that the evolutionary equation associated with (M, A) is well-posed. Then \(\overline{\partial _{t,\rho }M(\partial _{t,\rho })+A}\) is boundedly invertible as an operator on \(L_{2,\rho }({\mathbb {R}};H)\) for each \(\rho >s_{0}(M,A)\). Moreover, the inverse

$$\begin{aligned} S_{\rho }{:}{=}\left( \overline{\partial _{t,\rho }M(\partial _{t,\rho })+A}\right) ^{-1} \end{aligned}$$

is causal and independent of the choice of \(\rho >s_{0}(M,A)\) in the sense of Proposition 2.7.

Proof

We note that the operator \(\overline{\partial _{t,\rho }M(\partial _{t,\rho })+A}\) for \(\rho >s_{0}(M,A)\) is unitarily equivalent to the multiplication operator on \(L_{2}({\mathbb {R}};H)\) associated with the operator-valued function

$$\begin{aligned} F(t){:}{=}\left( \mathrm {i}t+\rho \right) M(\mathrm {i}t+\rho )+A, \end{aligned}$$

see [22, Lemma 2.2], which is boundedly invertible by assumption. The causality and independence of \(\rho \) are an immediate consequence of Proposition 2.7, since \(S_{\rho }=N(\partial _{t,\rho })\) for the analytic and bounded function \(N(z){:}{=}(zM(z)+A)^{-1}\) for \(z\in {\mathbb {C}}_{{\text {Re}}>s_{0}(M,A)}\). \(\square \)

Remark 2.11

(a) The latter theorem shows the well-posedness of the evolutionary equation

$$\begin{aligned} \left( \overline{\partial _{t,\rho }M(\partial _{t,\rho })+A}\right) U=F \end{aligned}$$

in the sense of Hadamard; i.e., uniqueness, existence and continuous dependence of the solution U on the given right-hand side F. Indeed, the injectivity of \(\left( \overline{\partial _{t,\rho }M(\partial _{t,\rho })+A}\right) \) yields uniqueness, its surjectivity yields existence, and the continuity of the inverse yields the continuous dependence of the solution, which is then simply given by

$$\begin{aligned} U=\left( \overline{\partial _{t,\rho }M(\partial _{t,\rho })+A}\right) ^{-1}F =S_\rho F\in L_{2,\rho }({\mathbb {R}};H). \end{aligned}$$

Moreover, the causality of the operator \(S_\rho \) implies that, as long as F vanishes, so does U. This is a crucial and desirable property for physical processes depending on time. Moreover, it will allow us to formulate initial value problems within the framework of evolutionary equations.

(b) If \(M(z)=1\) for each \(z\in {\mathbb {C}}\); that is, if we deal with the problem

$$\begin{aligned} (\partial _{t,\rho }+A)U=F \end{aligned}$$

then well-posedness in the sense of Definition 2.9 is a weaker assumption than well-posedness in the sense of \(C_0\)-semigroups (meaning that \(-A\) generates a \(C_0\)-semigroup). Indeed, Definition 2.9 just requires the invertibility of \((z+A)\) for z in a certain half-plane such that the resolvents are uniformly bounded, while for \(-A\) generating a \(C_0\)-semigroup one has to require a suitable boundedness of all powers of the resolvents \((z+A)^{-1}\). This is also reflected in the regularity of solutions. While we just have \(L_2\)-solutions in the case of Definition 2.9 (as in Theorem 2.10), the assumption of \(-A\) being a generator of a \(C_0\)-semigroup implies the continuity of the solutions.

In order to illustrate the versatility of the framework of evolutionary equations, we present two elementary examples.

Example 2.12

Let \(\varOmega \subseteq {\mathbb {R}}^n\) be open.

(a) The heat equation in its simplest form consists of a balance equation relating the heat density \(\theta \) and the heat flux q, namely

$$\begin{aligned} \partial _t \mu \theta +{\text {div}}q=f, \end{aligned}$$

where \(\mu :\varOmega \rightarrow {\mathbb {R}}\) describes the density of the underlying material and f is an external heat source. The equation is completed by a constitutive relation, which for instance can be given by Fourier’s law; that is,

$$\begin{aligned} q=-k{\text {grad}}\theta , \end{aligned}$$

where \(k:\varOmega \rightarrow {\mathbb {R}}\) describes the heat-conductivity of the underlying medium. Assuming that k is strictly positive, we can rewrite the latter two equations as a system of the form

$$\begin{aligned} \left( \partial _t \begin{pmatrix} \mu &{} 0 \\ 0 &{} 0 \end{pmatrix}+ \begin{pmatrix} 0 &{} 0 \\ 0 &{} k^{-1} \end{pmatrix} +\begin{pmatrix} 0 &{} {\text {div}}\\ {\text {grad}}&{} 0 \end{pmatrix}\right) \begin{pmatrix} \theta \\ q \end{pmatrix}=\begin{pmatrix} f \\ 0 \end{pmatrix}. \end{aligned}$$

Indeed, this equation has the form of an evolutionary equation with

$$\begin{aligned}A{:}{=}\begin{pmatrix} 0 &{} {\text {div}}\\ {\text {grad}}&{} 0 \end{pmatrix} \text{ and } M(z)=\begin{pmatrix} \mu &{} 0 \\ 0 &{} z^{-1} k^{-1} \end{pmatrix}\quad (z\in {\mathbb {C}}_{{\text {Re}}\ge 1}). \end{aligned}$$

Assuming now suitable boundary conditions, the operator A turns out to be skew-selfadjoint (for example, if we impose homogeneous Dirichlet conditions for \(\theta \) or homogeneous Neumann conditions for q) or, more generally, m-accretive (for example for certain kinds of Robin-type boundary conditions). Moreover, assuming that \(\mu \) and k are strictly positive, we easily verify that (note that \({\text {Re}}\langle Au,u\rangle \ge 0\) for each \(u\in {\text {dom}}(A)\) due to the accretivity of A)

$$\begin{aligned} {\text {Re}}\langle (zM(z)+A)u,u\rangle \ge {\text {Re}}z\langle M(z) u ,u\rangle \ge c\Vert u\Vert ^2 \quad (z \in {\mathbb {C}}_{{\text {Re}}\ge 1}) \end{aligned}$$

for each \(u\in {\text {dom}}(A)\), where \(c>0\) depends on \(\mu \) and k. The same applies for the adjoint \((zM(z)+A)^*=(zM(z))^*+A^*\), since \(A^*\) is accretive due to the m-accretivity of A, see e.g. [4, Proposition 2.2]. Since A is closed (as an m-accretive opertor) and zM(z) is bounded, we infer that \( (zM(z)+A)^{-1} \in L(L_2(\varOmega )\times L_2(\varOmega )^n)\) with \(\Vert (zM(z)+A)^{-1}\Vert \le \frac{1}{c}\) for each \(z\in {\mathbb {C}}_{{\text {Re}}\ge 1}\) and thus, the corresponding evolutionary equation is well-posed.

(b) The wave equation (or more generally the equations of linear elasticity) is given by a balance equation between the displacement u and the stress T of the form

$$\begin{aligned} \partial _t^2 \mu u-{\text {div}}T=f, \end{aligned}$$

where again \(\mu :\varOmega \rightarrow {\mathbb {R}}\) describes the density of the medium and f is an external force. The equation is completed by Hooke’s law, linking stress and strain by

$$\begin{aligned} T= d{\text {grad}}u, \end{aligned}$$

where \(d:\varOmega \rightarrow {\mathbb {R}}\) describes the elastic behaviour of the medium. Introducing \(v{:}{=}\partial _t u\) as a new unknown, we end up with a system of the form

$$\begin{aligned} \left( \partial _t \begin{pmatrix} \mu &{} 0 \\ 0 &{} d^{-1} \end{pmatrix} +\begin{pmatrix} 0 &{} -{\text {div}}\\ -{\text {grad}}&{} 0 \end{pmatrix}\right) \begin{pmatrix} v \\ T \end{pmatrix}=\begin{pmatrix} f \\ 0 \end{pmatrix}. \end{aligned}$$

Again, suitable boundary conditions and positivity constraints on the coefficients \(\mu \) and d yield the well-posedness of the evolutionary problem.
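To complement part (a), the following small numerical sketch illustrates the well-posedness estimate (an illustration only: the one-dimensional geometry, the grid, the coefficients \(\mu \), k and the sample points z are ad-hoc choices, and the finite-difference discretisation is not part of the theory developed here). On \(\varOmega =(0,1)\) with homogeneous Dirichlet conditions for \(\theta \), a discrete gradient G with an exact summation-by-parts structure yields a skew-symmetric discretisation \(A_{h}\) of A, and the bound \(\Vert (zM(z)+A_{h})^{-1}\Vert \le \frac{1}{c}\) with \(c=\min \{{\text {Re}}z\,\min \mu ,1/\max k\}\) can then be checked directly.

import numpy as np

m = 200
h = 1.0 / m
x_nodes = np.linspace(h, 1.0 - h, m - 1)       # interior nodes (theta lives here)
x_cells = np.linspace(h / 2, 1.0 - h / 2, m)   # cell midpoints (q lives here)

mu = 1.0 + 0.5 * np.sin(2 * np.pi * x_nodes) ** 2    # density, strictly positive
k = 2.0 + np.cos(3 * np.pi * x_cells) ** 2           # heat conductivity, strictly positive

# forward-difference gradient from interior nodes to cells (zero Dirichlet boundary values)
G = np.zeros((m, m - 1))
for i in range(m):
    if i < m - 1:
        G[i, i] = 1.0 / h
    if i > 0:
        G[i, i - 1] = -1.0 / h
A_h = np.block([[np.zeros((m - 1, m - 1)), -G.T],
                [G, np.zeros((m, m))]])              # skew-symmetric, mimics [[0, div], [grad, 0]]

rng = np.random.default_rng(0)
for _ in range(5):
    z = (1.0 + 9.0 * rng.random()) + 1j * 50.0 * (rng.random() - 0.5)   # sample z with Re z >= 1
    zM = np.diag(np.concatenate([z * mu, 1.0 / k]))                     # the block z M(z)
    smin = np.linalg.svd(zM + A_h, compute_uv=False).min()
    c = min(z.real * mu.min(), 1.0 / k.max())
    print(f"Re z = {z.real:5.2f}: ||(zM(z)+A_h)^(-1)|| = {1.0 / smin:6.3f} <= 1/c = {1.0 / c:6.3f}")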

Remark 2.13

The previous simple examples illustrate how to formulate equations from mathematical physics within the framework of evolutionary equations. It is remarkable that the difference between the (parabolic) heat equation and the (hyperbolic) wave equation lies solely in the different choice of the material law operator M. This is one of the key features of evolutionary equations: it shifts the complexity of the problem under consideration to the bounded material law operator and leaves the unbounded operator A rather simple. Indeed, using more complicated material laws we can easily modify the above heat or wave equation to incorporate, for instance, certain delay effects without changing the operator A; thus, the domain of the operator involved stays rather simple (see also Sect. 7.2). This is one advantage of the theory of evolutionary equations in contrast to the theory of \(C_0\)-semigroups, where the whole complexity is hidden in the generator A, which comes along with a highly non-trivial domain.

3 Extrapolation spaces

In this section we recall the notion of extrapolation spaces associated with a boundedly invertible operator on some Hilbert space H. We refer to [14, Section 2.1] for the proofs of the results presented here.

Definition 3.1

Let \(C:{\text {dom}}(C)\subseteq H\rightarrow H\) be a densely defined, closed, linear and boundedly invertible operator on some Hilbert space H. We define the Hilbert space

$$\begin{aligned} H^{1}(C){:}{=}{\text {dom}}(C) \end{aligned}$$

equipped with the inner product

$$\begin{aligned} \langle x,y\rangle _{H^{1}(C)}{:}{=}\langle Cx,Cy\rangle \quad (x,y\in {\text {dom}}(C)). \end{aligned}$$

Moreover, we set

$$\begin{aligned} H^{-1}(C){:}{=}H^{1}(C^{*})', \end{aligned}$$

the dual space of \(H^{1}(C^{*})\).

Remark 3.2

Another way to introduce the space \(H^{-1}(C)\) is to take the completion of H with respect to the norm

$$\begin{aligned} x\mapsto \Vert C^{-1}x\Vert . \end{aligned}$$

Proposition 3.3

([14, Theorem 2.1.6]) Let \(C:{\text {dom}}(C)\subseteq H\rightarrow H\) be a densely defined, closed, linear and boundedly invertible operator on some Hilbert space H. Then \(H^{1}(C)\hookrightarrow H\hookrightarrow H^{-1}(C)\) with dense and continuous embeddings. Here, the second embedding is given by

$$\begin{aligned} H\rightarrow H^{-1}(C),\;x\mapsto \left( {\text {dom}}(C^{*})\ni y\mapsto \langle x,y\rangle \right) . \end{aligned}$$

Moreover, the operator

$$\begin{aligned} C:H^{1}(C)\rightarrow H \end{aligned}$$

is unitary and

$$\begin{aligned} C:{\text {dom}}(C)\subseteq H\rightarrow H^{-1}(C) \end{aligned}$$

possesses a unitary extension, which will again be denoted by C.

Example 3.4

Let \(\rho \ne 0\) and H a Hilbert space. Then we set

$$\begin{aligned} H_{\rho }^{1}({\mathbb {R}};H)&{:}{=}H^{1}(\partial _{t,\rho }),\\ H_{\rho }^{-1}({\mathbb {R}};H)&{:}{=}H^{-1}(\partial _{t,\rho }). \end{aligned}$$

Then the Dirac distribution \(\delta _{t}\) at a point \(t\in {\mathbb {R}}\) belongs to \(H_{\rho }^{-1}({\mathbb {R}};{\mathbb {C}})\) and

$$\begin{aligned} \partial _{t,\rho }^{-1}\delta _{t}={\left\{ \begin{array}{ll} \mathrm {e}^{2\rho t}\chi _{{\mathbb {R}}_{\ge t}} &{} \text { if }\rho >0,\\ -\mathrm {e}^{2\rho t}\chi _{{\mathbb {R}}_{\le t}} &{} \text { if }\rho <0. \end{array}\right. } \end{aligned}$$

Indeed, for \(\rho >0\) we have that

$$\begin{aligned} \langle \partial _{t,\rho }\chi _{{\mathbb {R}}_{\ge t}},\varphi \rangle _{H_{\rho }^{-1}({\mathbb {R}};{\mathbb {C}})\times H_{\rho }^{1}({\mathbb {R}};{\mathbb {C}})}&=\int _{t}^{\infty }\left( \partial _{t,\rho }^{*}\varphi \right) (s)\mathrm {e}^{-2\rho s}\,\mathrm {d}s\\&=-\int _{t}^{\infty }\left( \varphi \mathrm {e}^{-2\rho \cdot }\right) '(s)\,\mathrm {d}s\\&=\varphi (t)\mathrm {e}^{-2\rho t} \end{aligned}$$

for each \(\varphi \in C_{c}^{\infty }({\mathbb {R}};{\mathbb {C}})\), which shows the asserted formula. The statement for \(\rho <0\) follows by the same rationale.
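The integration-by-parts computation above can also be checked numerically. The following quadrature sketch (an illustration only; \(\rho \), the point t and the test function are ad-hoc choices, and a rapidly decaying smooth function is used in place of a \(C_{c}^{\infty }\) function) confirms that \(\int _{t}^{\infty }(\partial _{t,\rho }^{*}\varphi )(s)\mathrm {e}^{-2\rho s}\,\mathrm {d}s=\varphi (t)\mathrm {e}^{-2\rho t}\) with \(\partial _{t,\rho }^{*}\varphi =-\varphi '+2\rho \varphi \).

import numpy as np

rho, t0 = 0.8, 1.3
s = np.linspace(t0, 40.0, 200001)

phi = np.exp(-(s - 3.0) ** 2) * np.sin(s)                 # a smooth, rapidly decaying test function
dphi = np.exp(-(s - 3.0) ** 2) * (np.cos(s) - 2.0 * (s - 3.0) * np.sin(s))

g = (-dphi + 2.0 * rho * phi) * np.exp(-2.0 * rho * s)    # (partial* phi)(s) e^{-2 rho s}
lhs = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(s))         # trapezoidal rule on [t0, 40]
rhs = np.exp(-(t0 - 3.0) ** 2) * np.sin(t0) * np.exp(-2.0 * rho * t0)
print(lhs, rhs)                                           # agree up to quadrature error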

Proposition 3.5

Let \(\rho _{0}\ge 0\) and H a Hilbert space. Moreover, let \(M:{\mathbb {C}}_{{\text {Re}}>\rho _{0}}\rightarrow L(H)\) be analytic and bounded and \(A:{\text {dom}}(A)\subseteq H\rightarrow H\) densely defined, linear and closed such that the evolutionary problem associated with (M, A) is well-posed. Then for each \(\rho >s_{0}(M,A)\) we obtain

$$\begin{aligned} S_{\rho }[H_{\rho }^{1}({\mathbb {R}};H)]\subseteq H_{\rho }^{1}({\mathbb {R}};H) \end{aligned}$$

and

$$\begin{aligned} S_{\rho }:L_{2,\rho }({\mathbb {R}};H)\subseteq H_{\rho }^{-1}({\mathbb {R}};H)\rightarrow H_{\rho }^{-1}({\mathbb {R}};H) \end{aligned}$$

is bounded and thus has a unique bounded extension to the whole \(H_{\rho }^{-1}({\mathbb {R}};H).\)

Proof

The assertion follows immediately by realising that

$$\begin{aligned} \left( \partial _{t,\rho }M(\partial _{t,\rho })+A\right) \partial _{t,\rho }\subseteq \partial _{t,\rho }\left( \partial _{t,\rho }M(\partial _{t,\rho })+A\right) . \end{aligned}$$

Indeed, this inclusion implies that \(S_{\rho }\) and \(\partial _{t,\rho }^{-1}\) commute, from which both statements follow. \(\square \)

We recall that for a densely defined, closed, linear operator \(A:{\text {dom}}(A)\subseteq H_{0}\rightarrow H_{1}\) between two Hilbert spaces \(H_{0}\) and \(H_{1}\), the operators \(A^{*}A\) and \(AA^{*}\) are selfadjoint and positive. Then the moduli of A and \(A^{*}\) are defined by

$$\begin{aligned} |A|{:}{=}\sqrt{A^{*}A},\quad |A^{*}|{:}{=}\sqrt{AA^{*}} \end{aligned}$$

and are selfadjoint positive operators, too (see e.g. [31, Theorem 7.20]).

Proposition 3.6

([14, Lemma 2.1.16]) Let \(H_{0},H_{1}\) be Hilbert spaces and \(A:{\text {dom}}(A)\subseteq H_{0}\rightarrow H_{1}\) densely defined, closed and linear. Then

$$\begin{aligned} A:{\text {dom}}(A)\subseteq H_{0}\rightarrow H^{-1}(|A^{*}|+1) \end{aligned}$$

is bounded and hence, possesses a bounded extension to \(H_{0}\).

4 Cut-off operators

The main goal of the present section is to extend the cut-off operators \(\chi _{{\mathbb {R}}_{\ge t}}\) and \(\chi _{{\mathbb {R}}_{\le t}}\) for some \(t\in {\mathbb {R}}\) defined on \(L_{2,\rho }({\mathbb {R}};H)\) to the extrapolation space \(H_{\rho }^{-1}({\mathbb {R}};H).\) For doing so, we start with the following observation.

Lemma 4.1

Let \(\rho >0,t\in {\mathbb {R}}\) and H be a Hilbert space. We define the operators

$$\begin{aligned} \chi _{{\mathbb {R}}_{\ge t}}({\text {m}})&:L_{2,\rho }({\mathbb {R}};H)\rightarrow L_{2,\rho }({\mathbb {R}};H),\quad f\mapsto \left( s\mapsto \chi _{{\mathbb {R}}_{\ge t}}(s)f(s)\right) ,\\ \chi _{{\mathbb {R}}_{\le t}}({\text {m}})&:L_{2,\rho }({\mathbb {R}};H)\rightarrow L_{2,\rho }({\mathbb {R}};H),\quad f\mapsto \left( s\mapsto \chi _{{\mathbb {R}}_{\le t}}(s)f(s)\right) . \end{aligned}$$

Then for \(f\in L_{2,\rho }({\mathbb {R}};H)\) we have

$$\begin{aligned} \chi _{{\mathbb {R}}_{\ge t}}({\text {m}})f&=\partial _{t,\rho }\chi _{{\mathbb {R}}_{\ge t}}({\text {m}})\partial _{t,\rho }^{-1}f-\mathrm {e}^{-2\rho t}\left( \partial _{t,\rho }^{-1}f\right) (t+)\delta _{t},\\ \chi _{{\mathbb {R}}_{\le t}}({\text {m}})f&=\partial _{t,\rho }\chi _{{\mathbb {R}}_{\le t}}({\text {m}})\partial _{t,\rho }^{-1}f+\mathrm {e}^{-2\rho t}\left( \partial _{t,\rho }^{-1}f\right) (t-)\delta _{t}. \end{aligned}$$

Proof

We just prove the formula for \(\chi _{{\mathbb {R}}_{\ge t}}({\text {m}})\). So, let \(f\in L_{2,\rho }({\mathbb {R}};H)\) and set \(F{:}{=}\partial _{t,\rho }^{-1}f.\) We recall from Proposition 2.2 (c) that

$$\begin{aligned} F(t)=\int _{-\infty }^{t}f(s)\,\mathrm {d}s\quad (t\in {\mathbb {R}}). \end{aligned}$$

For \(g\in C_{c}^{\infty }({\mathbb {R}};H)\) we compute

$$\begin{aligned}&\langle \partial _{t,\rho }\chi _{{\mathbb {R}}_{\ge t}}({\text {m}})\partial _{t,\rho }^{-1}f,g\rangle _{H^{-1}(\partial _{t,\rho })\times H^{1}(\partial _{t,\rho }^{*})}\\&=\langle \chi _{{\mathbb {R}}_{\ge t}}({\text {m}})\partial _{t,\rho }^{-1}f,\partial _{t,\rho }^{*}g\rangle _{L_{2,\rho }({\mathbb {R}};H)}\\&=\int _{t}^{\infty }\langle F(s),-g'(s)+2\rho g(s)\rangle \mathrm {e}^{-2\rho s}\,\mathrm {d}s\\&=\int _{t}^{\infty }\langle f(s),g(s)\rangle \mathrm {e}^{-2\rho s}\,\mathrm {d}s+F(t+)g(t)\mathrm {e}^{-2\rho t}\\&=\langle \chi _{{\mathbb {R}}_{\ge t}}({\text {m}})f,g\rangle _{L_{2,\rho }({\mathbb {R}};H)}+\langle \mathrm {e}^{-2\rho t}F(t+)\delta _{t},g\rangle _{H^{-1}(\partial _{t,\rho })\times H^{1}(\partial _{t,\rho }^{*}).} \end{aligned}$$

Since \(C_{c}^{\infty }({\mathbb {R}};H)\) is dense in \(H^{1}(\partial _{t,\rho }^{*})\) by Proposition 2.2 (a), we derive the asserted formula. \(\square \)
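The following quadrature sketch (an illustration only; \(\rho \), t, the function f and the test function g below are ad-hoc choices) checks the first formula of Lemma 4.1 in the weak form used in the proof: with \(F=\partial _{t,\rho }^{-1}f\) one has \(\int _{t}^{\infty }\langle F(s),(-g'+2\rho g)(s)\rangle \mathrm {e}^{-2\rho s}\,\mathrm {d}s=\int _{t}^{\infty }\langle f(s),g(s)\rangle \mathrm {e}^{-2\rho s}\,\mathrm {d}s+\mathrm {e}^{-2\rho t}F(t+)g(t)\).

import numpy as np

rho, t0 = 0.6, 0.7
s = np.linspace(-20.0, 40.0, 300001)
ds = s[1] - s[0]

f = np.exp(-(s - 1.0) ** 2)                                          # a sample f
F = np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * ds)))  # F = partial^{-1} f (cumulative integral)
g = np.exp(-(s + 2.0) ** 2 / 8.0)                                    # a smooth test function
dg = -(s + 2.0) / 4.0 * g

trap = lambda y, x: np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))      # trapezoidal rule
i0 = np.searchsorted(s, t0)                                          # grid index of the cut at t
w = np.exp(-2.0 * rho * s[i0:])

lhs = trap(F[i0:] * (-dg[i0:] + 2.0 * rho * g[i0:]) * w, s[i0:])
rhs = trap(f[i0:] * g[i0:] * w, s[i0:]) + np.exp(-2.0 * rho * s[i0]) * F[i0] * g[i0]
print(lhs, rhs)                                                      # agree up to quadrature error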

The latter representation of the cut-off operators on \(L_{2,\rho }({\mathbb {R}};H)\) leads to the following definition on \(H_{\rho }^{-1}({\mathbb {R}};H).\)

Definition 4.2

Let \(\rho >0\) and H a Hilbert space. For \(t\in {\mathbb {R}}\) we define the operators

$$\begin{aligned} P_{t}&:{\text {dom}}(P_{t})\subseteq H_{\rho }^{-1}({\mathbb {R}};H)\rightarrow H_{\rho }^{-1}({\mathbb {R}};H),\\ Q_{t}&:{\text {dom}}(Q_{t})\subseteq H_{\rho }^{-1}({\mathbb {R}};H)\rightarrow H_{\rho }^{-1}({\mathbb {R}};H), \end{aligned}$$

with the domains

$$\begin{aligned} {\text {dom}}(P_{t})&{:}{=}\{f\in H_{\rho }^{-1}({\mathbb {R}};H)\,;\,(\partial _{t,\rho }^{-1}f)(t+)\text { exists}\},\\ {\text {dom}}(Q_{t})&{:}{=}\{f\in H_{\rho }^{-1}({\mathbb {R}};H)\,;\,(\partial _{t,\rho }^{-1}f)(t-)\text { exists}\} \end{aligned}$$

by

$$\begin{aligned} P_{t}f{:}{=}\partial _{t,\rho }\chi _{{\mathbb {R}}_{\ge t}}({\text {m}})\partial _{t,\rho }^{-1}f-\mathrm {e}^{-2\rho t}\left( \partial _{t,\rho }^{-1}f\right) (t+)\delta _{t}\quad (f\in {\text {dom}}(P_{t})) \end{aligned}$$

and

$$\begin{aligned} Q_{t}f{:}{=}\partial _{t,\rho }\chi _{{\mathbb {R}}_{\le t}}({\text {m}})\partial _{t,\rho }^{-1}f+\mathrm {e}^{-2\rho t}\left( \partial _{t,\rho }^{-1}f\right) (t-)\delta _{t}\quad (f\in {\text {dom}}(Q_{t})). \end{aligned}$$

Remark 4.3

For a function \(f\in L_{1,\mathrm {loc}}({\mathbb {R}};H)\) we say that \(a{:}{=}f(t+)\) for some \(t\in {\mathbb {R}}\) if

$$\begin{aligned} \forall \varepsilon>0\,\exists \delta>0:\;\lambda \left( \{s\in [t,t+\delta [\,;\,|f(s)-a|>\varepsilon \}\right) =0, \end{aligned}$$

where \(\lambda \) denotes the Lebesgue measure on \({\mathbb {R}}.\) The expression \(f(t-)\) is defined analogously.

We conclude this section with some properties of the cut-off operators introduced above.

Proposition 4.4

Let H be a Hilbert space, \(\rho >0\), \(y\in H\) and \(s,t\in {\mathbb {R}}.\) Then the following statements hold.

(a)

    \(\delta _{s}y\in {\text {dom}}(P_{t})\) and

    $$\begin{aligned} P_{t}\delta _{s}y={\left\{ \begin{array}{ll} \delta _{s}y &{} \text { if }s>t,\\ 0 &{} \text { if }s\le t. \end{array}\right. } \end{aligned}$$
(b)

    For \(f\in {\text {dom}}(P_{t})\cap {\text {dom}}(Q_{t})\) we obtain

    $$\begin{aligned} f=P_{t}f+Q_{t}f+\mathrm {e}^{-2\rho t}\left( \left( \partial _{t,\rho }^{-1}f\right) (t+)-\left( \partial _{t,\rho }^{-1}f\right) (t-)\right) \delta _{t}. \end{aligned}$$
(c)

    For \(f\in H_{\rho }^{-1}({\mathbb {R}};H)\) we have \({\text {spt}}f\subseteq {\mathbb {R}}_{\le t}\) if and only if \(f\in \ker (P_{t}).\) Here, the support \({\text {spt}}f\) is meant in the sense of distributions.

Proof

(a)

We note that \(\partial _{t,\rho }^{-1}\delta _{s}y=\mathrm {e}^{2\rho s}\chi _{{\mathbb {R}}_{\ge s}}y\) and hence, \(\delta _{s}y\in {\text {dom}}(P_{t}).\) Moreover,

    $$\begin{aligned} P_{t}\delta _{s}y=\partial _{t,\rho }\chi _{{\mathbb {R}}_{\ge t}}({\text {m}})\chi _{{\mathbb {R}}_{\ge s}}y\mathrm {e}^{2\rho s}-\mathrm {e}^{-2\rho t}\left( \mathrm {e}^{2\rho s}\chi _{{\mathbb {R}}_{\ge s}}y\right) (t+)\delta _{t}={\left\{ \begin{array}{ll} \delta _{s}y &{} \text { if }s>t,\\ 0 &{} \text { if }s\le t. \end{array}\right. } \end{aligned}$$
(b)

    If \(f\in {\text {dom}}(P_{t})\cap {\text {dom}}(Q_{t})\) we compute

    $$\begin{aligned} P_{t}f+Q_{t}f&=\partial _{t,\rho }\chi _{{\mathbb {R}}_{\ge t}}({\text {m}})\partial _{t,\rho }^{-1}f-\mathrm {e}^{-2\rho t}\left( \partial _{t,\rho }^{-1}f\right) (t+)\delta _{t}\\&\quad +\partial _{t,\rho }\chi _{{\mathbb {R}}_{\le t}}({\text {m}})\partial _{t,\rho }^{-1}f+\mathrm {e}^{-2\rho t}\left( \partial _{t,\rho }^{-1}f\right) (t-)\delta _{t}\\&=\partial _{t,\rho }\partial _{t,\rho }^{-1}f-\mathrm {e}^{-2\rho t}\left( \left( \partial _{t,\rho }^{-1}f\right) (t+)-\left( \partial _{t,\rho }^{-1}f\right) (t-)\right) \delta _{t}\\&=f-\mathrm {e}^{-2\rho t}\left( \left( \partial _{t,\rho }^{-1}f\right) (t+)-\left( \partial _{t,\rho }^{-1}f\right) (t-)\right) \delta _{t}. \end{aligned}$$
(c)

    Let \(f\in H_{\rho }^{-1}({\mathbb {R}};H)\) and assume first that \({\text {spt}}f\subseteq {\mathbb {R}}_{\le t}\). We first prove that \(\partial _{t,\rho }^{-1}f\) is constant on \({\mathbb {R}}_{\ge t}\). For doing so, we define

    $$\begin{aligned} V{:}{=}\{\chi _{{\mathbb {R}}_{\ge t}}x\,;\,x\in H\}\subseteq L_{2,\rho }({\mathbb {R}};H). \end{aligned}$$

Then V is a closed subspace and for \(g\in L_{2,\rho }({\mathbb {R}};H)\) we have that

$$\begin{aligned} g\in V^{\bot }\quad \Leftrightarrow \quad \int _{t}^{\infty }g(s)\mathrm {e}^{-2\rho s}\,\mathrm {d}s=0. \end{aligned}$$

For \(g\in L_{2,\rho }({\mathbb {R}};H)\) we obtain

$$\begin{aligned} \langle \chi _{{\mathbb {R}}_{\ge t}}({\text {m}})\partial _{t,\rho }^{-1}f,g\rangle _{L_{2,\rho }({\mathbb {R}};H)}=\langle f,\left( \partial _{t,\rho }^{*}\right) ^{-1}\chi _{{\mathbb {R}}_{\ge t}}({\text {m}})g\rangle _{H_{\rho }^{-1}({\mathbb {R}};H)\times H_{\rho }^{1}({\mathbb {R}};H)} \end{aligned}$$

and an elementary computation shows

$$\begin{aligned} \left( \left( \partial _{t,\rho }^{*}\right) ^{-1}\chi _{{\mathbb {R}}_{\ge t}}({\text {m}})g\right) (s)=\int _{s}^{\infty }\chi _{{\mathbb {R}}_{\ge t}}(r)g(r)\mathrm {e}^{2\rho (s-r)}\,\mathrm {d}r\quad (s\in {\mathbb {R}}). \end{aligned}$$

Consequently, for \(g\in V^{\bot }\) we infer that \(\left( \partial _{t,\rho }^{*}\right) ^{-1}\chi _{{\mathbb {R}}_{\ge t}}({\text {m}})g=0\) on \({\mathbb {R}}_{\le t}.\) Hence, \(\langle \chi _{{\mathbb {R}}_{\ge t}}({\text {m}})\partial _{t,\rho }^{-1}f,g\rangle _{L_{2,\rho }({\mathbb {R}};H)}=0\) for each \(g\in V^{\bot }\) and thus, \(\chi _{{\mathbb {R}}_{\ge t}}({\text {m}})\partial _{t,\rho }^{-1}f\in V\), which proves that \(\partial _{t,\rho }^{-1}f\) is constant on \({\mathbb {R}}_{\ge t}.\) In particular, this shows \(f\in {\text {dom}}(P_{t})\) and

$$\begin{aligned} P_{t}f&=\partial _{t,\rho }\chi _{{\mathbb {R}}_{\ge t}}({\text {m}})\partial _{t,\rho }^{-1}f-\mathrm {e}^{-2\rho t}\left( \partial _{t,\rho }^{-1}f\right) (t+)\delta _{t}\\&=\partial _{t,\rho }\chi _{{\mathbb {R}}_{\ge t}}\left( \partial _{t,\rho }^{-1}f\right) (t+)-\mathrm {e}^{-2\rho t}\left( \partial _{t,\rho }^{-1}f\right) (t+)\delta _{t}\\&=0. \end{aligned}$$

Assume on the other hand that \(f\in \ker (P_{t})\) and let \(\varphi \in C_{c}^{\infty }({\mathbb {R}}_{>t};H).\) We then compute, using that \({\text {spt}}\partial _{t,\rho }^{*}\varphi \subseteq {\mathbb {R}}_{>t}\)

$$\begin{aligned} \langle f,\varphi \rangle _{H_{\rho }^{-1}({\mathbb {R}};H)\times H_{\rho }^{1}({\mathbb {R}};H)}&=\langle \partial _{t,\rho }^{-1}f,\partial _{t,\rho }^{*}\varphi \rangle _{L_{2,\rho }({\mathbb {R}};H)}\\&=\langle P_{t}f,\varphi \rangle _{H_{\rho }^{-1}\times H_{\rho }^{1}}+\mathrm {e}^{-2\rho t}\left( \partial _{t,\rho }^{-1}f\right) (t+)\varphi (t)\\&=0, \end{aligned}$$

which gives \({\text {spt}}f\subseteq {\mathbb {R}}_{\le t}.\) \(\square \)

5 Admissible histories for evolutionary equations

In this section we study evolutionary problems of the following form

$$\begin{aligned} \left( \partial _{t,\rho }M(\partial _{t,\rho })+A\right) u&=0\quad \text {on }{\mathbb {R}}_{>0},\nonumber \\ u&=g\quad \text {on }{\mathbb {R}}_{<0}, \end{aligned}$$
(5.1)

where M and A are as in Theorem 2.10 and g is a given function on \({\mathbb {R}}_{<0}\). The first goal is to rewrite this ‘initial value problem’ as a proper evolutionary equation as introduced in Sect. 2. For doing so, we start with some heuristics to motivate the definition which will be made below. In particular, for the moment we will not pay attention to the domains of the operators involved.

We will now write (5.1) as an evolutionary equation for the unknown \(v{:}{=}u|_{{\mathbb {R}}_{\ge 0}}\), which is the part of u to be determined. For doing so, we first assume that \(u\in H_{\rho }^{1}({\mathbb {R}};H)\) for some \(\rho >0,\) which means that \(v+g\in H_{\rho }^{1}({\mathbb {R}};H).\) We interpret the first line of (5.1) as

$$\begin{aligned} P_{0}\left( \partial _{t,\rho }M(\partial _{t,\rho })+A\right) u=0, \end{aligned}$$

where \(P_{0}\) is the cut-off operator introduced in Sect. 4. The latter gives

$$\begin{aligned} 0&=P_{0}\left( \partial _{t,\rho }M(\partial _{t,\rho })+A\right) u\\&=P_{0}\left( \partial _{t,\rho }M(\partial _{t,\rho })+A\right) v+P_{0}\left( \partial _{t,\rho }M(\partial _{t,\rho })+A\right) g\\&=\partial _{t,\rho }P_{0}M(\partial _{t,\rho })v+AP_{0}v-\left( M(\partial _{t,\rho })v\right) (0+)\delta _{0}+P_{0}\partial _{t,\rho }M(\partial _{t,\rho })g+AP_{0}g\\&=\partial _{t,\rho }P_{0}M(\partial _{t,\rho })v+Av+P_{0}\partial _{t,\rho }M(\partial _{t,\rho })g-\left( M(\partial _{t,\rho })v\right) (0+)\delta _{0}. \end{aligned}$$

Since v is supported on \({\mathbb {R}}_{\ge 0}\) by assumption and \(M(\partial _{t,\rho })\) is causal by Proposition 2.7, we infer that \(M(\partial _{t,\rho })v\) is also supported on \({\mathbb {R}}_{\ge 0}\) and so, \(P_{0}M(\partial _{t,\rho })v=M(\partial _{t,\rho })v.\) Hence, we arrive at an evolutionary problem for v of the form

$$\begin{aligned} \left( \partial _{t,\rho }M(\partial _{t,\rho })+A\right) v=\left( M(\partial _{t,\rho })v\right) (0+)\delta _{0}-P_{0}\partial _{t,\rho }M(\partial _{t,\rho })g. \end{aligned}$$

Since \(u=v+g\in H_{\rho }^{1}({\mathbb {R}};H)\) by assumption, we infer that u is continuous by Proposition 2.2 (e) and hence, the limits \(v(0+)\) and \(g(0-)\) exist and coincide. Hence, \(v-\chi _{{\mathbb {R}}_{\ge 0}}g(0-)\in H_{\rho }^{1}({\mathbb {R}};H)\) and vanishes on \({\mathbb {R}}_{<0}.\) The latter gives

$$\begin{aligned} \left( M(\partial _{t,\rho })v\right) (0+)&=\left( M(\partial _{t,\rho })(v-\chi _{{\mathbb {R}}_{\ge 0}}g(0-))\right) (0+)+\left( M(\partial _{t,\rho })\chi _{{\mathbb {R}}_{\ge 0}}g(0-)\right) (0+)\\&=\left( M(\partial _{t,\rho })\chi _{{\mathbb {R}}_{\ge 0}}g(0-)\right) (0+), \end{aligned}$$

where in the last equality we have used that \(M(\partial _{t,\rho })(v-\chi _{{\mathbb {R}}_{\ge 0}}g(0-))\in H_{\rho }^{1}({\mathbb {R}};H)\), hence it is continuous, and vanishes on \({\mathbb {R}}_{\le 0}\) due to causality. Summarising, we end up with the following problem for v

$$\begin{aligned} \left( \partial _{t,\rho }M(\partial _{t,\rho })+A\right) v=\left( M(\partial _{t,\rho })\chi _{{\mathbb {R}}_{\ge 0}}g(0-)\right) (0+)\delta _{0}-P_{0}\partial _{t,\rho }M(\partial _{t,\rho })g. \end{aligned}$$
(5.2)

Now, to make sense of (5.2) we need to ensure that the right-hand side is well-defined. In particular, we need that \(\left( M(\partial _{t,\rho })\chi _{{\mathbb {R}}_{\ge 0}}g(0-)\right) (0+)\) exists. In order to ensure that, we introduce the following notion.

Definition 5.1

Let H be a Hilbert space, \(\rho _{0}\ge 0\) and \(M:{\mathbb {C}}_{{\text {Re}}>\rho _{0}}\rightarrow L(H)\) be analytic and bounded. We call M regularising if for all \(x\in H,\rho >\rho _{0}\) the limit

$$\begin{aligned} \left( M(\partial _{t,\rho })\chi _{{\mathbb {R}}_{\ge 0}}x\right) (0+) \end{aligned}$$

exists. Moreover, for \(\rho >0\) we define the space

$$\begin{aligned} H_{\rho }^{1}({\mathbb {R}}_{\le 0};H){:}{=}\left\{ f|_{{\mathbb {R}}_{\le 0}}\,;\,f\in H_{\rho }^{1}({\mathbb {R}};H)\right\} . \end{aligned}$$

As it turns out, this assumption suffices to obtain a well-defined expression on the right-hand side of (5.2).

Proposition 5.2

Let H be a Hilbert space, \(\rho _{0}\ge 0\) and \(M:{\mathbb {C}}_{{\text {Re}}>\rho _{0}}\rightarrow L(H)\) be analytic and bounded and assume that M is regularising. Then for each \(g\in H_{\rho }^{1}({\mathbb {R}}_{\le 0};H)\) with \(\rho >\rho _{0}\) we have that

$$\begin{aligned} \partial _{t,\rho }M(\partial _{t,\rho })g\in {\text {dom}}(P_{0}). \end{aligned}$$

Proof

By assumption \(g=f|_{{\mathbb {R}}_{\le 0}}\) for some \(f\in H_{\rho }^{1}({\mathbb {R}};H).\) Hence, \(g(0-)=f(0)\) exists and an easy computation shows that \(g+\chi _{{\mathbb {R}}_{\ge 0}}g(0-)\in H_{\rho }^{1}({\mathbb {R}};H)\). Hence, also \(M(\partial _{t,\rho })\left( g+\chi _{{\mathbb {R}}_{\ge 0}}g(0-)\right) \in H_{\rho }^{1}({\mathbb {R}};H)\) and thus,

$$\begin{aligned} \left( M(\partial _{t,\rho })g\right) (0+)=\left( M(\partial _{t,\rho })\left( g+\chi _{{\mathbb {R}}_{\ge 0}}g(0-)\right) \right) (0+)-\left( M(\partial _{t,\rho })\chi _{{\mathbb {R}}_{\ge 0}}g(0-)\right) (0+) \end{aligned}$$

exists and so, \(\partial _{t,\rho }M(\partial _{t,\rho })g\in {\text {dom}}(P_{0}).\) \(\square \)

We are now in the position to define the space of admissible history functions g.

Definition 5.3

Let H be a Hilbert space, \(\rho _{0}\ge 0\) and \(M:{\mathbb {C}}_{{\text {Re}}>\rho _{0}}\rightarrow L(H)\) be analytic, bounded and regularising. Moreover, let \(A:{\text {dom}}(A)\subseteq H\rightarrow H\) be densely defined, closed and linear. For notational convenience, we set

$$\begin{aligned} \varGamma _{\rho }:H_{\rho }^{1}({\mathbb {R}}_{\le 0};H)\rightarrow H,\quad g\mapsto \left( M(\partial _{t,\rho })\chi _{{\mathbb {R}}_{\ge 0}}g(0-)\right) (0+) \end{aligned}$$

and

$$\begin{aligned} K_{\rho }:H_{\rho }^{1}({\mathbb {R}}_{\le 0};H)\rightarrow H_{\rho }^{-1}({\mathbb {R}};H),\quad g\mapsto P_{0}\partial _{t,\rho }M(\partial _{t,\rho })g \end{aligned}$$

for \(\rho >\rho _{0}.\) Furthermore, we assume that the evolutionary problem associated with (M, A) is well-posed and define

$$\begin{aligned} {\text {His}}_{\rho }{:}{=}\{g\in H_{\rho }^{1}({\mathbb {R}}_{\le 0};H)\,;\,S_{\rho }\left( \varGamma _{\rho }g\delta _{0}-K_{\rho }g\right) +g\in H_{\rho }^{1}({\mathbb {R}};H)\} \end{aligned}$$

for each \(\rho >s_{0}(M,A),\) the space of admissible histories. Here \(S_{\rho }\) denotes the extension of the solution operator \((\partial _{t,\rho }M(\partial _{t,\rho })+A)^{-1}\) to \(H_{\rho }^{-1}({\mathbb {R}};H)\) (cp. Proposition 3.5). Moreover, we set

$$\begin{aligned} {\text {IV}}_{\rho }{:}{=}\left\{ g(0-)\,;\,g\in {\text {His}}_{\rho }\right\} \end{aligned}$$

the space of admissible initial values.

Remark 5.4

We have

$$\begin{aligned} \varGamma _{\rho }g=(M(\partial _{t,\rho })g)(0-)-\left( M(\partial _{t,\rho })g\right) (0+) \end{aligned}$$

for \(g\in H_{\rho }^{1}({\mathbb {R}}_{\le 0};H).\) Indeed, since \(M(\partial _{t,\rho })\) is causal we infer

$$\begin{aligned} (M(\partial _{t,\rho })g)(0-)&=(M(\partial _{t,\rho })(g+\chi _{{\mathbb {R}}_{\ge 0}}g(0-)))(0-)\\&=(M(\partial _{t,\rho })(g+\chi _{{\mathbb {R}}_{\ge 0}}g(0-)))(0+), \end{aligned}$$

since \(g+\chi _{{\mathbb {R}}_{\ge 0}}g(0-)\in H_{\rho }^{1}({\mathbb {R}};H).\) Thus,

$$\begin{aligned} (M(\partial _{t,\rho })g)(0-)-\left( M(\partial _{t,\rho })g\right) (0+)=\left( M(\partial _{t,\rho })\chi _{{\mathbb {R}}_{\ge 0}}g(0-)\right) (0+)=\varGamma _{\rho }g. \end{aligned}$$

We come back to the heuristic computation at the beginning of this section and show that, for \(g\in {\text {His}}_{\rho }\), the computation can be made rigorous.

Proposition 5.5

Let H be a Hilbert space, \(\rho _{0}\ge 0\) and \(M:{\mathbb {C}}_{{\text {Re}}>\rho _{0}}\rightarrow L(H)\) be analytic, bounded and regularising. Moreover, let \(A:{\text {dom}}(A)\subseteq H\rightarrow H\) be densely defined, closed and linear and assume that the evolutionary problem associated with (M, A) is well-posed. Let \(\rho >s_{0}(M,A)\) and \(g\in {\text {His}}_{\rho }\). We set

$$\begin{aligned} v{:}{=}S_{\rho }\left( \varGamma _{\rho }g\delta _{0}-K_{\rho }g\right) \end{aligned}$$

and \(u{:}{=}v+g.\) Then \({\text {spt}}v\subseteq {\mathbb {R}}_{\ge 0}\) and \(u\in H_{\rho }^{1}({\mathbb {R}};H)\) satisfies (5.1).

Proof

Note that by assumption \(u=v+g\in H_{\rho }^{1}({\mathbb {R}};H)\) and thus, \(v=u-g\in L_{2,\rho }({\mathbb {R}};H).\) We prove that \({\text {spt}}v\subseteq {\mathbb {R}}_{\ge 0}.\) For doing so, we compute

$$\begin{aligned} \partial _{t,\rho }^{-1}v&=\partial _{t,\rho }^{-1}S_{\rho }\left( \varGamma _{\rho }g\delta _{0}-K_{\rho }g\right) \\&=S_{\rho }\left( \partial _{t,\rho }^{-1}\varGamma _{\rho }g\delta _{0}-\partial _{t,\rho }^{-1}K_{\rho }g\right) \\&=S_{\rho }\left( \varGamma _{\rho }g\chi _{{\mathbb {R}}_{\ge 0}}-\chi _{{\mathbb {R}}_{\ge 0}}({\text {m}})M(\partial _{t,\rho })g+\left( M(\partial _{t,\rho })g\right) (0+)\chi _{{\mathbb {R}}_{\ge 0}}\right) \end{aligned}$$

and hence, \({\text {spt}}\partial _{t,\rho }^{-1}v\subseteq {\mathbb {R}}_{\ge 0}\) by causality of \(S_{\rho }.\) The latter implies \({\text {spt}}v\subseteq {\mathbb {R}}_{\ge 0}\). Thus, we have \(u=g\) on \({\mathbb {R}}_{<0}\) and we are left to show

$$\begin{aligned} \left( \partial _{t,\rho }M(\partial _{t,\rho })+A\right) u=0\quad \text {on }{\mathbb {R}}_{>0}. \end{aligned}$$

For doing so, let \(\varphi \in C_{c}^{\infty }({\mathbb {R}}_{>0};{\text {dom}}(A^{*}))\). We compute

$$\begin{aligned}&\langle \left( \partial _{t,\rho }M(\partial _{t,\rho })+A\right) u,\varphi \rangle _{L_{2,\rho }({\mathbb {R}};H^{-1}(|A^{*}|+1))\times L_{2,\rho }({\mathbb {R}};H^{1}(|A^{*}|+1))}\\&\quad = \langle u,\left( \partial _{t,\rho }M(\partial _{t,\rho })+A\right) ^{*}\varphi \rangle _{L_{2,\rho }({\mathbb {R}};H)}\\&\quad = \langle v,\left( \partial _{t,\rho }M(\partial _{t,\rho })+A\right) ^{*}\varphi \rangle _{L_{2,\rho }({\mathbb {R}};H)}+\langle g,\left( \partial _{t,\rho }M(\partial _{t,\rho })+A\right) ^{*}\varphi \rangle _{L_{2,\rho }({\mathbb {R}};H)}\\&\quad = \langle \varGamma _{\rho }g\delta _{0}-K_{\rho }g,\varphi \rangle _{H_{\rho }^{-1}({\mathbb {R}};H)\times H_{\rho }^{1}({\mathbb {R}};H)}+\langle g,\left( \partial _{t,\rho }M(\partial _{t,\rho })\right) ^{*}\varphi \rangle _{L_{2,\rho }({\mathbb {R}};H)}, \end{aligned}$$

where in the last line we have used \(\langle g,A^{*}\varphi \rangle =0,\) since \({\text {spt}}g\subseteq {\mathbb {R}}_{\le 0}\). Moreover, we compute

$$\begin{aligned}&\langle \varGamma _{\rho }g\delta _{0}-K_{\rho }g,\varphi \rangle _{H_{\rho }^{-1}({\mathbb {R}};H)\times H_{\rho }^{1}({\mathbb {R}};H)}\\&=-\langle K_{\rho }g,\varphi \rangle _{H_{\rho }^{-1}({\mathbb {R}};H)\times H_{\rho }^{1}({\mathbb {R}};H)}\\&=-\langle \partial _{t,\rho }\chi _{{\mathbb {R}}_{\ge 0}}({\text {m}})M(\partial _{t,\rho })g-\left( M(\partial _{t,\rho })g\right) (0+)\delta _{0},\varphi \rangle _{H_{\rho }^{-1}({\mathbb {R}};H)\times H_{\rho }^{1}({\mathbb {R}};H)}\\&=-\langle M(\partial _{t,\rho })g,\partial _{t,\rho }^{*}\varphi \rangle _{L_{2,\rho }({\mathbb {R}};H)}, \end{aligned}$$

where we have used twice that \(\varphi (0)=0.\) Plugging this formula into the above computation, we infer that

$$\begin{aligned} \langle \left( \partial _{t,\rho }M(\partial _{t,\rho })+A\right) u,\varphi \rangle _{L_{2,\rho }({\mathbb {R}};H^{-1}(|A^{*}|+1))\times L_{2,\rho }({\mathbb {R}};H^{1}(|A^{*}|+1))}=0, \end{aligned}$$

which shows the claim.\(\square \)

6 \(C_{0}\)-semigroups associated with evolutionary problems

Throughout this section, let H be a Hilbert space, \(\rho _{0}\ge 0\) and \(M:{\mathbb {C}}_{{\text {Re}}>\rho _{0}}\rightarrow L(H)\) analytic, bounded and regularising. Moreover, let \(A:{\text {dom}}(A)\subseteq H\rightarrow H\) be densely defined, closed and linear such that the evolutionary problem associated with (M, A) is well-posed.

In this section we aim to associate a \(C_{0}\)-semigroup with the evolutionary problem for (M, A), acting on a suitable subspace of \({\text {IV}}_{\rho }\times {\text {His}}_{\rho }\) for \(\rho >s_{0}(M,A).\) For doing so, we first need to prove that \({\text {His}}_{\rho }\) is left invariant by the time evolution. The precise statement is as follows.

Theorem 6.1

Let \(\rho >s_{0}(M,A)\) and \(g\in {\text {His}}_{\rho }.\) Moreover, let \(v{:}{=}S_{\rho }\left( \varGamma _{\rho }g\delta _{0}-K_{\rho }g\right) \) and \(u{:}{=}v+g.\) For \(t>0\) we set \(h{:}{=}\chi _{{\mathbb {R}}_{\le 0}}({\text {m}})u(t+\cdot )\) and \(w{:}{=}\chi _{{\mathbb {R}}_{\ge 0}}({\text {m}})u(t+\cdot ).\) Then \(h\in {\text {His}}_{\rho }\) and

$$\begin{aligned} w=S_{\rho }\left( \varGamma _{\rho }h\delta _{0}-K_{\rho }h\right) . \end{aligned}$$

In particular, \(w(0+)=h(0-)\in {\text {IV}}_{\rho }.\)

Proof

We first note that

$$\begin{aligned} \left( \partial _{t,\rho }M(\partial _{t,\rho })+A\right) \tau _{t}=\tau _{t}\left( \partial _{t,\rho }M(\partial _{t,\rho })+A\right) , \end{aligned}$$

where \(\tau _{t}u{:}{=}u(t+\cdot )\) for \(u\in L_{2,\rho }({\mathbb {R}};H)\), and hence,

$$\begin{aligned} {\text {spt}}\left( \partial _{t,\rho }M(\partial _{t,\rho })+A\right) \tau _{t}u\subseteq {\mathbb {R}}_{\le 0}. \end{aligned}$$

The latter gives, employing the causality of \(M(\partial _{t,\rho })\),

$$\begin{aligned} \left( \partial _{t,\rho }M(\partial _{t,\rho })+A\right) \tau _{t}u&=\chi _{{\mathbb {R}}_{\le 0}}({\text {m}})\left( \partial _{t,\rho }M(\partial _{t,\rho })+A\right) \tau _{t}u\\&=\partial _{t,\rho }\chi _{{\mathbb {R}}_{\le 0}}({\text {m}})M(\partial _{t,\rho })\tau _{t}u+\left( M(\partial _{t,\rho })\tau _{t}u\right) (0-)\delta _{0}+Ah\\&=\partial _{t,\rho }\chi _{{\mathbb {R}}_{\le 0}}({\text {m}})M(\partial _{t,\rho })h+\left( M(\partial _{t,\rho })h\right) (0-)\delta _{0}+Ah\\&=Q_{0}\partial _{t,\rho }M(\partial _{t,\rho })h+Ah. \end{aligned}$$

The latter yields

$$\begin{aligned} \left( \partial _{t,\rho }M(\partial _{t,\rho })+A\right) w&=\left( \partial _{t,\rho }M(\partial _{t,\rho })+A\right) \left( \tau _{t}u-h\right) \\&=Q_{0}\partial _{t,\rho }M(\partial _{t,\rho })h-\partial _{t,\rho }M(\partial _{t,\rho })h. \end{aligned}$$

Now, since \(\partial _{t,\rho }M(\partial _{t,\rho })h\in {\text {dom}}(P_{0})\cap {\text {dom}}(Q_{0})\) (by Proposition 5.2, since \(h\in H_{\rho }^{1}({\mathbb {R}}_{\le 0};H)\), and by causality of \(M(\partial _{t,\rho })\), respectively), we use Proposition 4.4 (b) and Remark 5.4 to derive

$$\begin{aligned} \left( \partial _{t,\rho }M(\partial _{t,\rho })+A\right) w&=-P_{0}\partial _{t,\rho }M(\partial _{t,\rho })h-\left( \left( M(\partial _{t,\rho })h\right) (0+)-\left( M(\partial _{t,\rho })h\right) (0-)\right) \delta _{0}\\&=\varGamma _{\rho }h\delta _{0}-K_{\rho }h, \end{aligned}$$

which yields the desired formula for w. Now \(h\in {\text {His}}_{\rho }\) follows, since by definition

$$\begin{aligned} S_{\rho }\left( \varGamma _{\rho }h\delta _{0}-K_{\rho }h\right) +h=w+h=\tau _{t}u\in H_{\rho }^{1}({\mathbb {R}};H). \end{aligned}$$

\(\square \)

The latter theorem allows us to define a semigroup associated with (M, A).

Definition 6.2

Let \(\rho >s_{0}(M,A)\) and set

$$\begin{aligned} D_{\rho }{:}{=}\{(g(0-),g)\,;\,g\in {\text {His}}_{\rho }\}. \end{aligned}$$

For \(g\in {\text {His}}_{\rho }\) we set

$$\begin{aligned} v{:}{=}S_{\rho }\left( \varGamma _{\rho }g\delta _{0}-K_{\rho }g\right) \end{aligned}$$

and \(u{:}{=}v+g.\) For \(t\ge 0\) we define

$$\begin{aligned} T_{1}^{\rho }(t):&D_{\rho }\subseteq {\text {IV}}_{\rho }\times {\text {His}}_{\rho }\rightarrow {\text {IV}}_{\rho },\quad (g(0-),g)\mapsto u(t),\\ T_{2}^{\rho }(t):&D_{\rho }\subseteq {\text {IV}}_{\rho }\times {\text {His}}_{\rho }\rightarrow {\text {His}}_{\rho },\quad (g(0-),g)\mapsto \chi _{{\mathbb {R}}_{\le 0}}({\text {m}})\tau _{t}u \end{aligned}$$

and

$$\begin{aligned} T^{\rho }(t){:}{=}(T_{1}^{\rho }(t),T_{2}^{\rho }(t)):D_{\rho }\subseteq {\text {IV}}_{\rho }\times {\text {His}}_{\rho }\rightarrow {\text {IV}}_{\rho }\times {\text {His}}_{\rho }. \end{aligned}$$

We call \((T^{\rho }(t))_{t\ge 0}\) the semigroup associated with (M, A).

Remark 6.3

The semigroup \(T^{\rho }\) defined above consists of two components: the current state u(t) and the whole past of that state, that is, \(\chi _{{\mathbb {R}}_{\le 0}}({\text {m}})\tau _{t}u=u(t+\cdot )\) regarded as a function on \({\mathbb {R}}_{\le 0}\). This construction appears naturally when dealing with problems with memory, since computing the state at some time \(t\ge 0\) requires the information of the whole trajectory up to time t (we also refer to [3] for semigroups associated with delay equations). Moreover, the current state and the whole past should fit together, which is reflected in the definition of the space \(D_\rho \) (in [3] this condition is incorporated within the domain of the semigroup generator).

First we show that \(T^{\rho }\) defined above is indeed a strongly continuous semigroup.

Proposition 6.4

Let \(\rho >s_{0}(M,A)\) and \(T^{\rho }\) be the semigroup associated with (M, A). Then \(T^{\rho }\) is a strongly continuous semigroup. More precisely,

$$\begin{aligned} T^{\rho }(t+s)=T^{\rho }(t)T^{\rho }(s)\quad (t,s\ge 0) \end{aligned}$$

and

$$\begin{aligned} T^{\rho }(t)(g(0-),g)\rightarrow (g(0-),g)\quad (t\rightarrow 0+) \end{aligned}$$

in \(H\times L_{2,\rho }({\mathbb {R}};H)\) for each \(g\in {\text {His}}_{\rho }.\)

Proof

Let \(g\in {\text {His}}_{\rho }\) and \(t,s\ge 0.\) We set \(v{:}{=}S_{\rho }\left( \varGamma _{\rho }g\delta _{0}-K_{\rho }g\right) \) and \(u{:}{=}v+g.\) By Theorem 6.1 we have that

$$\begin{aligned} \chi _{{\mathbb {R}}_{\ge 0}}({\text {m}})\tau _{s}u=S_{\rho }\left( \varGamma _{\rho }\left( \chi _{{\mathbb {R}}_{\le 0}}({\text {m}})\tau _{s}u\right) \delta _{0}-K_{\rho }\left( \chi _{{\mathbb {R}}_{\le 0}}({\text {m}})\tau _{s}u\right) \right) . \end{aligned}$$

and thus,

$$\begin{aligned} T^{\rho }(t)T^{\rho }(s)(g(0-),g)&=T^{\rho }(t)\left( u(s),\chi _{{\mathbb {R}}_{\le 0}}({\text {m}})\tau _{s}u\right) \\&=(u(t+s),\chi _{{\mathbb {R}}_{\le 0}}({\text {m}})\tau _{t}\tau _{s}u)\\&=T^{\rho }(t+s)(g(0-),g). \end{aligned}$$

Moreover,

$$\begin{aligned}&\Vert T^{\rho }(t)(g(0-),g)-(g(0-),g)\Vert _{H\times L_{2,\rho }({\mathbb {R}};H)}^{2}\\&=\Vert u(t)-g(0-)\Vert _{H}^{2}+\Vert \chi _{{\mathbb {R}}_{\le 0}}({\text {m}})\tau _{t}u-g\Vert _{L_{2,\rho }({\mathbb {R}};H)}^{2}\\&=\Vert u(t)-u(0)\Vert _{H}^{2}+\Vert \chi _{{\mathbb {R}}_{\le 0}}({\text {m}})(\tau _{t}u-u)\Vert _{L_{2,\rho }({\mathbb {R}};H)}^{2}\\&\le \Vert u(t)-u(0)\Vert _{H}^{2}+\Vert \tau _{t}u-u\Vert _{L_{2,\rho }({\mathbb {R}};H)}^{2}\rightarrow 0\quad (t\rightarrow 0+), \end{aligned}$$

by the continuity of u and the strong continuity of translation in \(L_{2,\rho }.\) \(\square \)

In the rest of this section we characterise when \(T^{\rho }\) can be extended to a \(C_{0}\)-semigroup on the space

$$\begin{aligned} X_{\rho }^{\mu }{:}{=}\overline{D_{\rho }}^{H\times L_{2,\mu }({\mathbb {R}};H)}\subseteq H\times L_{2,\mu }({\mathbb {R}};H) \end{aligned}$$

for some \(\mu \le \rho .\) We first prove that it suffices to consider the family \(T_{1}^{\rho }\).

Proposition 6.5

Let \(\rho >s_{0}(M,A)\) and \(\mu \le \rho .\) Assume that

$$\begin{aligned} T_{1}^{\rho }:D_{\rho }\subseteq X_{\rho }^{\mu }\rightarrow C_{\omega }({\mathbb {R}}_{\ge 0};H) \end{aligned}$$

is bounded for some \(\omega \in {\mathbb {R}}.\) Then

$$\begin{aligned} T_{2}^{\rho }:D_{\rho }\subseteq X_{\rho }^{\mu }\rightarrow C_{\max \{\mu ,\omega \}+\varepsilon }({\mathbb {R}}_{\ge 0};L_{2,\mu }({\mathbb {R}};H)) \end{aligned}$$

is bounded for each \(\varepsilon >0.\)

Proof

Let \(\varepsilon >0\) and \(g\in {\text {His}}_{\rho }.\) We note that

$$\begin{aligned} \left( T_{2}^{\rho }(t)(g(0-),g)\right) (s)={\left\{ \begin{array}{ll} g(t+s) &{} \text { if }s<-t,\\ T_{1}^{\rho }(t+s)(g(0-),g) &{} \text { if }-t\le s\le 0 \end{array}\right. }\quad (t\ge 0,s\le 0). \end{aligned}$$

Hence, we may estimate

$$\begin{aligned} \Vert T_{2}^{\rho }(t)(g(0-),g)\Vert _{L_{2,\mu }({\mathbb {R}};H)}^{2}&=\int _{-\infty }^{-t}\Vert g(t+s)\Vert ^{2}\mathrm {e}^{-2\mu s}\,\mathrm {d}s+\int _{-t}^{0}\Vert T_{1}^{\rho }(t+s)(g(0-),g)\Vert ^{2}\mathrm {e}^{-2\mu s}\,\mathrm {d}s\\&\le \int _{-\infty }^{0}\Vert g(s)\Vert ^{2}\mathrm {e}^{-2\mu s}\,\mathrm {d}s\;\mathrm {e}^{2\mu t}+M\Vert (g(0-),g)\Vert _{X_{\rho }^{\mu }}^{2}\int _{-t}^{0}\mathrm {e}^{2\omega (t+s)}\mathrm {e}^{-2\mu s}\,\mathrm {d}s\\&=\Vert g\Vert _{L_{2,\mu }({\mathbb {R}};H)}^{2}\mathrm {e}^{2\mu t}+M\Vert (g(0-),g)\Vert _{X_{\rho }^{\mu }}^{2}\mathrm {e}^{2\omega t}\frac{1}{2(\omega -\mu )}(1-\mathrm {e}^{-2(\omega -\mu )t})\\&\le \Vert g\Vert _{L_{2,\mu }({\mathbb {R}};H)}^{2}\mathrm {e}^{2\mu t}+M\Vert (g(0-),g)\Vert _{X_{\rho }^{\mu }}^{2}t\mathrm {e}^{2\max \{\mu ,\omega \}t}\\&\le C\mathrm {e}^{2(\max \{\mu ,\omega \}+\varepsilon )t}\Vert (g(0-),g)\Vert _{X_{\rho }^{\mu }}^{2} \end{aligned}$$

for each \(g\in {\text {His}}_{\rho },\) where M denotes the norm of \(T_{1}^{\rho }\) and \(C{:}{=}\max _{t\ge 0}(1+Mt)\mathrm {e}^{-2\varepsilon t}.\)
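The elementary inequality used in the penultimate step can be justified as follows (for \(\omega \ne \mu \); the case \(\omega =\mu \) is immediate from the integral representation):

$$\begin{aligned} \mathrm {e}^{2\omega t}\frac{1}{2(\omega -\mu )}\left( 1-\mathrm {e}^{-2(\omega -\mu )t}\right) =\int _{0}^{t}\mathrm {e}^{2\omega (t-s)+2\mu s}\,\mathrm {d}s\le t\mathrm {e}^{2\max \{\mu ,\omega \}t}\quad (t\ge 0), \end{aligned}$$

since the exponent of the integrand is at most \(2\max \{\mu ,\omega \}t\) for \(0\le s\le t.\)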

\(\square \)

In order to extend \(T_{1}^{\rho }\) to \(X_{\rho }^{\mu }\) we make use of the Widder-Arendt theorem.

Theorem 6.6

(Widder-Arendt, [1, 2, Theorem 2.2.3]) Let H be a Hilbert space and \(r\in C^{\infty }({\mathbb {R}}_{>0};H)\) such that

$$\begin{aligned} M{:}{=}\sup _{\lambda >0,k\in {\mathbb {N}}}\frac{\lambda ^{k+1}}{k!}\Vert r^{(k)}(\lambda )\Vert <\infty . \end{aligned}$$

Then there is \(f\in L_{\infty }({\mathbb {R}}_{\ge 0};H)\) such that \(\Vert f\Vert _{\infty }=M\) and

$$\begin{aligned} r(\lambda )=\int _{0}^{\infty }\mathrm {e}^{-\lambda t}f(t)\,\mathrm {d}t\quad (\lambda >0). \end{aligned}$$

Remark 6.7

The latter theorem was first proved by Widder in the scalar-valued case [33] and then generalised by Arendt to the vector-valued case in [1]. It is noteworthy that the theorem is also true in Banach spaces satisfying the Radon-Nikodym property (see [5, Chapter III]) and, in fact, this property of the underlying Banach space is equivalent to the validity of Theorem 6.6, see [1, Theorem 1.4].
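As a simple illustration of the condition in Theorem 6.6, take \(f\equiv x\) for some fixed \(x\in H\); then

$$\begin{aligned} r(\lambda )=\int _{0}^{\infty }\mathrm {e}^{-\lambda t}x\,\mathrm {d}t=\frac{1}{\lambda }x\quad \text {and}\quad \frac{\lambda ^{k+1}}{k!}\Vert r^{(k)}(\lambda )\Vert =\frac{\lambda ^{k+1}}{k!}\frac{k!}{\lambda ^{k+1}}\Vert x\Vert =\Vert x\Vert \quad (\lambda >0,k\in {\mathbb {N}}), \end{aligned}$$

so that indeed \(M=\Vert x\Vert =\Vert f\Vert _{\infty }\) in this case.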

We now identify the function r mentioned in Theorem 6.6 within the presented framework.

Proposition 6.8

Let \(\rho >s_{0}(M,A)\) and \(g\in {\text {His}}_{\rho }.\) We set \(v{:}{=}S_{\rho }\left( \varGamma _{\rho }g\delta _{0}-K_{\rho }g\right) \in L_{2,\rho }({\mathbb {R}};H)\) and

$$\begin{aligned} r_{g}(\lambda ){:}{=}\sqrt{2\pi }({\mathcal {L}}_{\lambda }v)(0)\quad (\lambda >\rho ). \end{aligned}$$

Then \(r_{g}\in C^{\infty }({\mathbb {R}}_{>\rho };H)\). Moreover,

$$\begin{aligned} r_{g}(\lambda )=(\lambda M(\lambda )+A)^{-1}\left( \left( M(\partial _{t,\rho })g\right) (0-)-\lambda \sqrt{2\pi }{\mathcal {L}}_{\lambda }(\chi _{{\mathbb {R}}_{\ge 0}}({\text {m}})M(\partial _{t,\rho })g)(0)\right) \quad (\lambda >\rho ). \end{aligned}$$

Proof

We note that

$$\begin{aligned} ({\mathcal {L}}_{\lambda }v)(0)=\frac{1}{\sqrt{2\pi }}\int _{0}^{\infty }\mathrm {e}^{-\lambda t}v(t)\,\mathrm {d}t\quad (\lambda >\rho ) \end{aligned}$$

and hence, the regularity of \(r_{g}\) follows. Moreover,

$$\begin{aligned} \partial _{t,\lambda }^{-1}v&=\partial _{t,\rho }^{-1}v\\&=S_{\rho }\left( \partial _{t,\rho }^{-1}\varGamma _{\rho }g\delta _{0}-\partial _{t,\rho }^{-1}K_{\rho }g\right) \\&=S_{\rho }\left( \varGamma _{\rho }g\chi _{{\mathbb {R}}_{\ge 0}}-\chi _{{\mathbb {R}}_{\ge 0}}({\text {m}})M(\partial _{t,\rho })g+(M(\partial _{t,\rho })g)(0+)\chi _{{\mathbb {R}}_{\ge 0}}\right) \\&=S_{\lambda }\left( \varGamma _{\rho }g\chi _{{\mathbb {R}}_{\ge 0}}-\chi _{{\mathbb {R}}_{\ge 0}}({\text {m}})M(\partial _{t,\rho })g+(M(\partial _{t,\rho })g)(0+)\chi _{{\mathbb {R}}_{\ge 0}}\right) , \end{aligned}$$

where we have used the independence of \(\rho \) stated in Theorem 2.10. Hence,

$$\begin{aligned} r_{g}(\lambda )&=\sqrt{2\pi }({\mathcal {L}}_{\lambda }v)(0)\\&=\lambda \sqrt{2\pi }({\mathcal {L}}_{\lambda }\partial _{t,\lambda }^{-1}v)(0)\\&=\lambda \sqrt{2\pi }\left( \lambda M(\lambda )+A\right) ^{-1}\left( \frac{1}{\lambda \sqrt{2\pi }}\varGamma _{\rho }g-{\mathcal {L}}_{\lambda }\left( \chi _{{\mathbb {R}}_{\ge 0}}({\text {m}})M(\partial _{t,\rho })g\right) (0)+\frac{1}{\lambda \sqrt{2\pi }}\left( M(\partial _{t,\rho })g\right) (0+)\right) \\&=\left( \lambda M(\lambda )+A\right) ^{-1}\left( \varGamma _{\rho }g+\left( M(\partial _{t,\rho })g\right) (0+)-\lambda \sqrt{2\pi }{\mathcal {L}}_{\lambda }\left( \chi _{{\mathbb {R}}_{\ge 0}}({\text {m}})M(\partial _{t,\rho })g\right) (0)\right) \\&=\left( \lambda M(\lambda )+A\right) ^{-1}\left( \left( M(\partial _{t,\rho })g\right) (0-)-\lambda \sqrt{2\pi }{\mathcal {L}}_{\lambda }\left( \chi _{{\mathbb {R}}_{\ge 0}}({\text {m}})M(\partial _{t,\rho })g\right) (0)\right) \end{aligned}$$

for each \(\lambda >\rho ,\) where we have used the formula for \(\varGamma _{\rho }\) stated in Remark 5.4 as well as \(\lambda \sqrt{2\pi }\left( {\mathcal {L}}_{\lambda }\chi _{{\mathbb {R}}_{\ge 0}}\right) (0)=\lambda \int _{0}^{\infty }\mathrm {e}^{-\lambda t}\,\mathrm {d}t=1.\) \(\square \)

With these preparations at hand, we can now state and prove the main result of this article.

Theorem 6.9

Let \(\rho >s_{0}(M,A)\) and \(T^{\rho }\) be the semigroup on \(D_{\rho }\) associated with (M, A). Moreover, for \(g\in {\text {His}}_{\rho }\) we set

$$\begin{aligned} r_{g}(\lambda ){:}{=}\left( \lambda M(\lambda )+A\right) ^{-1}\left( \left( M(\partial _{t,\rho })g\right) (0-)-\lambda \sqrt{2\pi }{\mathcal {L}}_{\lambda }\left( \chi _{{\mathbb {R}}_{\ge 0}}({\text {m}})M(\partial _{t,\rho })g\right) (0)\right) \quad (\lambda >\rho ). \end{aligned}$$

For \(\mu \le \rho \) the following statements are equivalent:

(i) \(T^{\rho }\) can be extended to a \(C_{0}\)-semigroup on \(X_{\rho }^{\mu }=\overline{D_{\rho }}^{H\times L_{2,\mu }({\mathbb {R}};H)}\subseteq H\times L_{2,\mu }({\mathbb {R}};H).\)

(ii) There exist \(M\ge 1\) and \(\omega \ge \rho \) such that

$$\begin{aligned} \frac{(\lambda -\omega )^{k+1}}{k!}\Vert r_{g}^{(k)}(\lambda )\Vert \le M\left( \Vert g(0-)\Vert _{H}+\Vert g\Vert _{L_{2,\mu }({\mathbb {R}};H)}\right) \end{aligned}$$

for each \(\lambda >\omega ,k\in {\mathbb {N}}\) and \(g\in {\text {His}}_{\rho }.\)

In this case

$$\begin{aligned} T_{1}^{\rho }&:X_{\rho }^{\mu }\rightarrow C_{\omega }({\mathbb {R}}_{\ge 0};H),\\ T_{2}^{\rho }&:X_{\rho }^{\mu }\rightarrow C_{\omega +\varepsilon }({\mathbb {R}}_{\ge 0};L_{2,\mu }({\mathbb {R}};H)) \end{aligned}$$

are bounded for each \(\varepsilon >0.\)

Proof

(i) \(\Rightarrow \)(ii): Since \(T^{\rho }:X_{\rho }^{\mu }\rightarrow X_{\rho }^{\mu }\) is a \(C_{0}\)-semigroup, we find \(M\ge 1\) and \(\omega \ge \rho \) such that

$$\begin{aligned} \Vert T^{\rho }(t)\Vert \le M\mathrm {e}^{\omega t}\quad (t\ge 0). \end{aligned}$$

In particular, we infer that

$$\begin{aligned} \Vert T_{1}^{\rho }(t)(g(0-),g)\Vert \le M\mathrm {e}^{\omega t}\Vert (g(0-),g)\Vert _{X_{\rho }^{\mu }}\quad (t\ge 0,g\in {\text {His}}_{\rho }). \end{aligned}$$

Since \(r_{g}(\lambda )=\sqrt{2\pi }{\mathcal {L}}_{\lambda }\left( T_{1}^{\rho }(\cdot )(g(0-),g)\right) (0)\) for \(\lambda >\omega \) by Proposition 6.8 (note that \(T_{1}^{\rho }(t)(g(0-),g)=v(t)\) for \(t>0,\) with v as in Proposition 6.8, since g vanishes on \({\mathbb {R}}_{>0}\)), we infer that

$$\begin{aligned} \Vert r_{g}^{(k)}(\lambda )\Vert&=\left\| \int _{0}^{\infty }\mathrm {e}^{-\lambda t}(-t)^{k}T_{1}^{\rho }(t)(g(0-),g)\,\mathrm {d}t\right\| \\&\le \int _{0}^{\infty }\mathrm {e}^{-\lambda t}t^{k}M\mathrm {e}^{\omega t}\,\mathrm {d}t\Vert (g(0-),g)\Vert _{X_{\rho }^{\mu }}\\&=M\frac{k!}{(\lambda -\omega )^{k+1}}\Vert (g(0-),g)\Vert _{X_{\rho }^{\mu }}, \end{aligned}$$

which shows (ii).

(ii)\(\Rightarrow \)(i): Let \(g\in {\text {His}}_{\rho }\) and define \({\widetilde{r}}:{\mathbb {R}}_{>0}\rightarrow H\) by \({\widetilde{r}}(\lambda )=r_{g}(\lambda +\omega )\) for \(\lambda >0.\) Then \({\widetilde{r}}\) satisfies the assumptions of Theorem 6.6 and hence, there is \(f\in L_{\infty }({\mathbb {R}}_{\ge 0};H)\) with \(\Vert f\Vert _{\infty }\le M\left( \Vert g(0-)\Vert _{H}+\Vert g\Vert _{L_{2,\mu }({\mathbb {R}};H)}\right) \) such that

$$\begin{aligned} r_{g}(\lambda +\omega )=\int _{0}^{\infty }\mathrm {e}^{-\lambda t}f(t)\,\mathrm {d}t=\int _{0}^{\infty }\mathrm {e}^{-(\lambda +\omega )t}\mathrm {e}^{\omega t}f(t)\,\mathrm {d}t \end{aligned}$$

for each \(\lambda >0.\) In particular, setting \(v{:}{=}T_{1}^{\rho }(\cdot )(g(0-),g)\) we obtain

$$\begin{aligned} \int _{0}^{\infty }\mathrm {e}^{-\lambda t}v(t)\,\mathrm {d}t=r_{g}(\lambda )=\int _{0}^{\infty }\mathrm {e}^{-\lambda t}\mathrm {e}^{\omega t}f(t)\,\mathrm {d}t\quad (\lambda >\omega ) \end{aligned}$$

and by analytic extension it follows that

$$\begin{aligned} {\mathcal {L}}_{\lambda }v={\mathcal {L}}_{\lambda }(\mathrm {e}^{\omega \cdot }f)\quad (\lambda >\omega ). \end{aligned}$$

Thus, \(v=\mathrm {e}^{\omega \cdot }f\) and hence,

$$\begin{aligned} \Vert v(t)\Vert =\mathrm {e}^{\omega t}\Vert f(t)\Vert \le M\mathrm {e}^{\omega t}\left( \Vert g(0-)\Vert _{H}+\Vert g\Vert _{L_{2,\mu }({\mathbb {R}};H)}\right) . \end{aligned}$$

Thus, since v is continuous on \({\mathbb {R}}_{\ge 0}\), we derive that

$$\begin{aligned} T_{1}^{\rho }:D_{\rho }\subseteq X_{\rho }^{\mu }\rightarrow C_{\omega }({\mathbb {R}}_{\ge 0};H) \end{aligned}$$

is bounded and therefore extends to \(X_{\rho }^{\mu }\). Then, by Proposition 6.5 we obtain that

$$\begin{aligned} T_{2}^{\rho }:D_{\rho }\subseteq X_{\rho }^{\mu }\rightarrow C_{\omega +\varepsilon }({\mathbb {R}}_{\ge 0};L_{2,\mu }({\mathbb {R}};H)) \end{aligned}$$

is also bounded for each \(\varepsilon >0\) and hence, (i) follows.\(\square \)

Remark 6.10

The latter theorem characterises, in terms of the Laplace transform of the corresponding solution, when the semigroup \(T^\rho \) can actually be extended to the closure of \(D_\rho \) (hence, to a Hilbert space). The operators extended in this way obviously still form a \(C_0\)-semigroup, since the extension still takes values in the space of continuous functions. This is where the Widder-Arendt theorem comes into play: it provides an \(L_\infty \) estimate for the solutions and hence allows for an extension of the operators to the right spaces.

7 Applications

7.1 Differential-algebraic equations and classical Cauchy problems

In this section we consider initial value problems of the form

$$\begin{aligned} \left( \partial _{t,\rho }E+A\right) u&=0\quad \text {on }{\mathbb {R}}_{>0},\\ u&=g\quad \text {on }{\mathbb {R}}_{\le 0}, \end{aligned}$$

for a bounded operator \(E\in L(H)\), H a Hilbert space, and a densely defined linear and closed operator \(A:{\text {dom}}(A)\subseteq H\rightarrow H.\) We note that this corresponds to the abstract initial value problem (5.1) with

$$\begin{aligned} M(z){:}{=}E\quad (z\in {\mathbb {C}}). \end{aligned}$$

We assume that the evolutionary problem is well-posed, that is we assume that there is \(\rho _{1}\in {\mathbb {R}}_{\ge 0}\) such that \(zE+A\) is boundedly invertible for each \(z\in {\mathbb {C}}_{{\text {Re}}\ge \rho _{1}}\) and

$$\begin{aligned} \sup _{z\in {\mathbb {C}}_{{\text {Re}}\ge \rho _{1}}}\Vert (zE+A)^{-1}\Vert <\infty . \end{aligned}$$

We again denote the infimum over all such \(\rho _{1}\in {\mathbb {R}}_{\ge 0}\) by \(s_{0}(E,A).\)
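To illustrate this condition, consider \(E=1\) and an m-accretive operator A (cf. Remark 7.12); that is, \({\text {Re}}\langle Ax,x\rangle \ge 0\) for all \(x\in {\text {dom}}(A)\) and \(1+A\) is onto. Then

$$\begin{aligned} \Vert (z+A)^{-1}\Vert \le \frac{1}{{\text {Re}}z}\quad (z\in {\mathbb {C}}_{{\text {Re}}>0}), \end{aligned}$$

and hence \(s_{0}(1,A)=0.\)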

Lemma 7.1

The function M given by \(M(z){:}{=}E\) for \(z\in {\mathbb {C}}\) is regularising. Moreover, \(\varGamma _{\rho }g=Eg(0-)\) and \(K_{\rho }g=0\) for each \(g\in H_{\rho }^{1}({\mathbb {R}}_{\le 0};H)\) and \(\rho \in {\mathbb {R}}_{>0}.\) In particular, for \(\rho >s_{0}(E,A)\) we have that

$$\begin{aligned} {\text {IV}}_{\rho }=\{x\in H\,;\,S_{\rho }(\delta Ex)-\chi _{{\mathbb {R}}_{\ge 0}}x\in H_{\rho }^{1}({\mathbb {R}};H)\} \end{aligned}$$

and

$$\begin{aligned} {\text {His}}_{\rho }=\{g\in H_{\rho }^{1}({\mathbb {R}}_{\le 0};H)\,;\,g(0-)\in {\text {IV}}_{\rho }\}. \end{aligned}$$

Moreover, \(X_{\rho }^{\mu }=\overline{{\text {IV}}_{\rho }}\times L_{2,\mu }({\mathbb {R}}_{\le 0};H)\) for each \(\mu \le \rho .\)

Proof

For \(x\in H\), \(\rho >0\) we have

$$\begin{aligned} M(\partial _{t,\rho })\chi _{{\mathbb {R}}_{\ge 0}}x=\chi _{{\mathbb {R}}_{\ge 0}}Ex \end{aligned}$$

and thus, M is regularising with \(\varGamma _{\rho }g=Eg(0-)\) for each \(g\in H_{\rho }^{1}({\mathbb {R}}_{\le 0};H).\) Moreover, we have

$$\begin{aligned} K_{\rho }g&=P_{0}\partial _{t,\rho }Eg\\&=\partial _{t,\rho }\chi _{{\mathbb {R}}_{\ge 0}}({\text {m}})Eg-\delta _{0}(Eg)(0+)\\&=0. \end{aligned}$$

Hence, for \(g\in H_{\rho }^{1}({\mathbb {R}}_{\le 0};H)\), \(\rho >s_{0}(E,A)\), we have

$$\begin{aligned} g\in {\text {His}}_{\rho }&\Leftrightarrow \,S_{\rho }(\delta _{0}Eg(0-))+g\in H_{\rho }^{1}({\mathbb {R}};H)\\&\Leftrightarrow \,S_{\rho }(\delta _{0}Eg(0-))-\chi _{{\mathbb {R}}_{\ge 0}}g(0-)+g+\chi _{{\mathbb {R}}_{\ge 0}}g(0-)\in H_{\rho }^{1}({\mathbb {R}};H)\\&\Leftrightarrow \,S_{\rho }(\delta _{0}Eg(0-))-\chi _{{\mathbb {R}}_{\ge 0}}g(0-)\in H_{\rho }^{1}({\mathbb {R}};H) \end{aligned}$$

which proves the asserted equalities for \({\text {His}}_{\rho }\) and \({\text {IV}}_{\rho }\).

Finally, let \(x\in \overline{{\text {IV}}_{\rho }}\) and \(g\in L_{2,\mu }({\mathbb {R}}_{\le 0};H)\) for some \(\mu \le \rho \) with \(\rho >s_{0}(E,A).\) Then we find a sequence \((x_{n})_{n\in {\mathbb {N}}}\) in \({\text {IV}}_{\rho }\) and a sequence \((\varphi _{n})_{n\in {\mathbb {N}}}\) in \(C_{c}^{\infty }({\mathbb {R}}_{<0};H)\) such that \(x_{n}\rightarrow x\) and \(\varphi _{n}\rightarrow g\) in H and \(L_{2,\mu }({\mathbb {R}}_{\le 0};H)\), respectively. Moreover, we set

$$\begin{aligned} \psi _{n}(t){:}{=}{\left\{ \begin{array}{ll} \left( nt+1\right) x_{n} &{} \text { if }-\frac{1}{n}\le t\le 0,\\ 0 &{} \text { else} \end{array}\right. }\quad (t\in {\mathbb {R}}_{\le 0},n\in {\mathbb {N}}) \end{aligned}$$

and obtain a sequence \((\psi _{n})_{n\in {\mathbb {N}}}\) in \(H_{\rho }^{1}({\mathbb {R}}_{\le 0};H)\) with \(\psi _{n}(0-)=x_{n}\) for \(n\in {\mathbb {N}}\) and \(\psi _{n}\rightarrow 0\) as \(n\rightarrow \infty \) in \(L_{2,\mu }({\mathbb {R}}_{\le 0};H).\) Consequently, setting \(g_{n}{:}{=}\psi _{n}+\varphi _{n}\in H_{\rho }^{1}({\mathbb {R}}_{\le 0};H)\) for \(n\in {\mathbb {N}}\) we obtain a sequence \((x_{n},g_{n})_{n\in {\mathbb {N}}}\) in \(D_{\rho }\) with \((x_{n},g_{n})\rightarrow (x,g)\) in \(H\times L_{2,\mu }({\mathbb {R}};H)\) and thus, \((x,g)\in X_{\rho }^{\mu }.\) Since the other inclusion holds obviously, this proves the assertion.

\(\square \)

We now inspect the space \({\text {IV}}_{\rho }\) more closely. In particular, we are able to determine its closure \(\overline{{\text {IV}}_{\rho }}\) and a suitable dense subset of \(\overline{{\text {IV}}_{\rho }}\).

Proposition 7.2

We set

$$\begin{aligned} U{:}{=}\{x\in {\text {dom}}(A)\,;\,\exists y\in {\text {dom}}(A):\,Ax=Ey\}. \end{aligned}$$

Then \(U\subseteq {\text {IV}}_{\rho }\) and \({\overline{U}}=\overline{{\text {IV}}_{\rho }}\) for each \(\rho >s_{0}(E,A)\). In particular, \(\overline{{\text {IV}}_{\rho }}\) does not depend on the particular choice of \(\rho >s_{0}(E,A)\).

Proof

Let \(\rho >s_{0}(E,A)\), \(x\in U\) and \(y\in {\text {dom}}(A)\) with \(Ax=Ey.\) Then we compute

$$\begin{aligned} S_{\rho }\left( \delta Ex\right) -\chi _{{\mathbb {R}}_{\ge 0}}x&=\left( \partial _{t,\rho }E+A\right) ^{-1}(\delta Ex-\delta Ex-\chi _{{\mathbb {R}}_{\ge 0}}Ax)\\&=-(\partial _{t,\rho }E+A)^{-1}(\chi _{{\mathbb {R}}_{\ge 0}}Ey)\\&=-(\partial _{t,\rho }E+A)^{-1}(\partial _{t,\rho }E\partial _{t,\rho }^{-1}\chi _{{\mathbb {R}}_{\ge 0}}y)\\&=-\partial _{t,\rho }^{-1}\chi _{{\mathbb {R}}_{\ge 0}}y+(\partial _{t,\rho }E+A)^{-1}(\partial _{t,\rho }^{-1}\chi _{{\mathbb {R}}_{\ge 0}}Ay)\in H_{\rho }^{1}({\mathbb {R}};H), \end{aligned}$$

which shows that \(x\in {\text {IV}}_{\rho }\) by Lemma 7.1. To show the remaining assertion, we prove that \({\text {IV}}_{\rho }\subseteq {\overline{U}}.\) To this end, let \(x\in {\text {IV}}_{\rho }\) and set \(v{:}{=}S_{\rho }(\delta Ex).\) Then

$$\begin{aligned} \partial _{t,\rho }E(v-\chi _{{\mathbb {R}}_{\ge 0}}x)&=(\partial _{t,\rho }E+A)v-\delta Ex-Av\\&=-Av, \end{aligned}$$

and since the left-hand side belongs to \(L_{2,\rho }({\mathbb {R}};H)\) we infer that \(v\in L_{2,\rho }({\mathbb {R}};{\text {dom}}(A)).\) Hence, \(\partial _{t,\rho }^{-1}v\in H_{\rho }^{1}({\mathbb {R}};{\text {dom}}(A))\hookrightarrow C_{\rho }({\mathbb {R}};{\text {dom}}(A))\) and so \(\int _{0}^{t}v(s)\,\mathrm {d}s=\left( \partial _{t,\rho }^{-1}v\right) (t)\in {\text {dom}}(A)\) for each \(t\ge 0\) and

$$\begin{aligned} A\int _{0}^{t}v(s)\,\mathrm {d}s=Ex-Ev(t)\quad (t\ge 0). \end{aligned}$$

Consequently,

$$\begin{aligned} \int _{0}^{t}v(s)\,\mathrm {d}s\in A^{-1}[{\text {ran}}(E)]\quad (t\ge 0) \end{aligned}$$

and since v is continuous on \({\mathbb {R}}_{\ge 0}\), so that \(\frac{1}{t}\int _{0}^{t}v(s)\,\mathrm {d}s\rightarrow v(0+)=x\) as \(t\rightarrow 0+\), it suffices to prove \(A^{-1}[{\text {ran}}(E)]\subseteq {\overline{U}}\). To this end, let \(y\in A^{-1}[{\text {ran}}(E)]\); i.e., \(y\in {\text {dom}}(A)\) and \(Ay=Ez\) for some \(z\in H.\) We choose a sequence \((z_{n})_{n\in {\mathbb {N}}}\) in \({\text {dom}}(A)\) with \(z_{n}\rightarrow z\) as \(n\rightarrow \infty \) and define

$$\begin{aligned} y_{n}{:}{=}\left( \lambda E+A\right) ^{-1}(\lambda Ey+Ez_{n})\quad (n\in {\mathbb {N}}), \end{aligned}$$

where \(\lambda >s_{0}(E,A)\) is fixed. Then \(y_{n}\in U,\) since

$$\begin{aligned} Ay_{n}=A\left( \lambda E+A\right) ^{-1}(\lambda Ey+Ez_{n})=E\left( \lambda E+A\right) ^{-1}(\lambda Ay+Az_{n})\in E[{\text {dom}}(A)] \end{aligned}$$

and since \(Ez_{n}\rightarrow Ez=Ay,\) we infer that \(y_{n}=\left( \lambda E+A\right) ^{-1}(\lambda Ey+Ez_{n})\rightarrow \left( \lambda E+A\right) ^{-1}(\lambda E+A)y=y\) and hence, \(y\in {\overline{U}}\). \(\square \)

Theorem 7.3

Let \(M(z){:}{=}E\) for \(z\in {\mathbb {C}}\), \(\rho >s_{0}(E,A)\) and let \(T^{\rho }:D_{\rho }\subseteq {\text {IV}}_{\rho }\times {\text {His}}_{\rho }\rightarrow {\text {IV}}_{\rho }\times {\text {His}}_{\rho }\) denote the semigroup associated with (M, A). Moreover, for \(x\in H\) we define

$$\begin{aligned} f_{x}(t){:}{=}{\left\{ \begin{array}{ll} (t+1)x &{} \text { if }t\in [-1,0],\\ 0 &{} \text { else} \end{array}\right. }\quad (t\in {\mathbb {R}}_{\le 0}). \end{aligned}$$

Then the following statements are equivalent:

(i) \(T^{\rho }\) extends to a \(C_{0}\)-semigroup in \(\overline{{\text {IV}}_{\rho }}\times L_{2,\mu }({\mathbb {R}}_{\le 0};H)\) for some \(\mu \le \rho .\)

(ii) There exist \(M\ge 1\) and \(\omega \ge \rho \) such that

$$\begin{aligned} \Vert \left( (\lambda E+A)^{-1}E\right) ^{n}\Vert \le \frac{M}{(\lambda -\omega )^{n}}\quad (\lambda >\omega ,n\in {\mathbb {N}}). \end{aligned}$$
(7.1)

(iii) \(T^{\rho }\) extends to a \(C_{0}\)-semigroup in \(\overline{{\text {IV}}_{\rho }}\times L_{2,\mu }({\mathbb {R}}_{\le 0};H)\) for each \(\mu \le \rho .\)

(iv) The family of functions

$$\begin{aligned} S^{\rho }(t):{\text {IV}}_{\rho }\subseteq \overline{{\text {IV}}_{\rho }}\rightarrow \overline{{\text {IV}}_{\rho }},\quad x\mapsto T_{1}^{\rho }(t)(x,f_{x}) \end{aligned}$$

for \(t\ge 0\) extends to a \(C_{0}\)-semigroup on \(\overline{{\text {IV}}_{\rho }}.\)

In the latter case, \(S^{\rho }(t)x=T_{1}^{\rho }(t)(x,0)\) for each \(x\in \overline{{\text {IV}}_{\rho }}\) and \(t\ge 0\).

Proof

We first compute the function \(r_{g}\) for \(g\in {\text {His}}_{\rho }\) as it was defined in Theorem 6.9. We have that

$$\begin{aligned} \left( M(\partial _{t,\rho })g\right) (0-)-\lambda \sqrt{2\pi }{\mathcal {L}}_{\lambda }\left( \chi _{{\mathbb {R}}_{\ge 0}}({\text {m}})M(\partial _{t,\rho })g\right) (0)=Eg(0-)\quad (\lambda >\rho ) \end{aligned}$$

and hence,

$$\begin{aligned} r_{g}(\lambda )=(\lambda E+A)^{-1}Eg(0-)\quad (\lambda >\rho ). \end{aligned}$$

Consequently,

$$\begin{aligned} r_{g}^{(k)}(\lambda )=(-1)^{k}k!\left( \left( \lambda E+A\right) ^{-1}E\right) ^{k+1}g(0-)\quad (k\in {\mathbb {N}}_{0},\lambda >\rho ). \end{aligned}$$

(i) \(\Rightarrow \) (ii): By Theorem 6.9 (note that \(X_{\rho }^{\mu }=\overline{{\text {IV}}_{\rho }}\times L_{2,\mu }({\mathbb {R}}_{\le 0};H)\) by Lemma 7.1) we know that there exist \(M\ge 1\) and \(\omega \ge \rho \) such that

$$\begin{aligned} \frac{(\lambda -\omega )^{k+1}}{k!}\Vert r_{g}^{(k)}(\lambda )\Vert \le M\left( \Vert g(0-)\Vert _{H}+\Vert g\Vert _{L_{2,\mu }({\mathbb {R}};H)}\right) \end{aligned}$$

for each \(\lambda >\omega ,k\in {\mathbb {N}}\) and \(g\in {\text {His}}_{\rho }.\) Choosing now \(x\in \overline{{\text {IV}}_{\rho }}\) we infer that

$$\begin{aligned} \Vert \left( (\lambda E+A)^{-1}E\right) ^{n}x\Vert&=\frac{1}{(n-1)!}\Vert r_{f_{x}(k\cdot )}^{(n-1)}(\lambda )\Vert \\&\le \frac{M}{(\lambda -\omega )^{n}}\left( \Vert x\Vert _{H}+\Vert f_{x}(k\cdot )\Vert _{L_{2,\mu }({\mathbb {R}};H)}\right) \end{aligned}$$

for each \(\lambda >\omega \), \(n,k\in {\mathbb {N}}\). Since \(f_{x}(k\cdot )\rightarrow 0\) as \(k\rightarrow \infty \), we infer that

$$\begin{aligned} \Vert \left( (\lambda E+A)^{-1}E\right) ^{n}\Vert \le \frac{M}{(\lambda -\omega )^{n}}\quad (\lambda >\omega ,n\in {\mathbb {N}}). \end{aligned}$$
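Here, the convergence \(f_{x}(k\cdot )\rightarrow 0\) in \(L_{2,\mu }({\mathbb {R}};H)\) can be seen from

$$\begin{aligned} \Vert f_{x}(k\cdot )\Vert _{L_{2,\mu }({\mathbb {R}};H)}^{2}=\int _{-\frac{1}{k}}^{0}(kt+1)^{2}\mathrm {e}^{-2\mu t}\,\mathrm {d}t\,\Vert x\Vert ^{2}\le \frac{\mathrm {e}^{2|\mu |/k}}{3k}\Vert x\Vert ^{2}\rightarrow 0\quad (k\rightarrow \infty ). \end{aligned}$$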

(ii) \(\Rightarrow \) (iii): Let \(\mu \le \rho \). By assumption, there exist \(M\ge 1,\omega \ge \rho \) such that

$$\begin{aligned} \frac{(\lambda -\omega )^{k+1}}{k!}\Vert r_{g}^{(k)}(\lambda )\Vert&=(\lambda -\omega )^{k+1}\Vert \left( (\lambda E+A)^{-1}E\right) ^{k+1}g(0-)\Vert \\&\le M\Vert g(0-)\Vert _{H}\\&\le M\left( \Vert g(0-)\Vert +\Vert g\Vert _{L_{2,\mu }({\mathbb {R}};H)}\right) \end{aligned}$$

for each \(\lambda >\omega ,k\in {\mathbb {N}}_{0}\) and \(g\in {\text {His}}_{\rho }\) and hence, the assertion follows from Theorem 6.9 and Lemma 7.1.

(iii) \(\Rightarrow \) (iv): Since, by (iii) applied with \(\mu =0\), \(T^{\rho }\) extends to a \(C_{0}\)-semigroup on \(\overline{{\text {IV}}_{\rho }}\times L_{2}({\mathbb {R}}_{\le 0};H)\) and since

$$\begin{aligned} \Vert f_{x}\Vert _{L_{2}({\mathbb {R}};H)}\le \Vert x\Vert _{H}\quad (x\in H), \end{aligned}$$

we infer that there are \(M\ge 1\) and \(\omega \in {\mathbb {R}}\) such that

$$\begin{aligned} \Vert S^{\rho }(t)x\Vert \le 2M\mathrm {e}^{\omega t}\Vert x\Vert \quad (x\in {\text {IV}}_{\rho }) \end{aligned}$$

and thus, \((S^{\rho }(t))_{t\ge 0}\) extends to a \(C_{0}\)-semigroup on \(\overline{{\text {IV}}_{\rho }}\). Moreover, since

$$\begin{aligned} S^{\rho }(t)x&=T_{1}^{\rho }(t)(x,f_{x})\\&=\left( (\partial _{t,\rho }E+A)^{-1}(\delta _{0}Ex)\right) (t)\\&=T_{1}^{\rho }(t)(x,0) \end{aligned}$$

for each \(t\ge 0,x\in \overline{{\text {IV}}_{\rho }}\), we obtain the asserted formula at the end of the theorem.

(iv) \(\Rightarrow \) (i): By assumption, there are \(M\ge 1\) and \(\omega \in {\mathbb {R}}\) such that

$$\begin{aligned} \Vert T_{1}^{\rho }(t)(x,f_{x})\Vert \le M\mathrm {e}^{\omega t}\Vert x\Vert \quad (x\in {\text {IV}}_{\rho },t\ge 0). \end{aligned}$$

Moreover, since \(\varGamma _{\rho }g=Eg(0-)=Ex\) and \(K_{\rho }g=0\) for \((x,g)\in D_{\rho }\) by Lemma 7.1 and therefore

$$\begin{aligned} T_{1}^{\rho }(t)(x,g)=T_{1}^{\rho }(t)(x,f_{x})\quad \left( (x,g)\in D_{\rho }\right) , \end{aligned}$$

we infer that

$$\begin{aligned} T_{1}^{\rho }:D_{\rho }\subseteq \overline{{\text {IV}}_{\rho }}\times L_{2,\mu }({\mathbb {R}}_{\le 0};H)\rightarrow C_{\omega }({\mathbb {R}}_{\ge 0};H) \end{aligned}$$

is continuous and hence, the assertion follows by Proposition 6.5.\(\square \)

Remark 7.4

We remark that in the case of classical Cauchy problems, i.e. \(E=1\), condition (7.1) is nothing but the classical Hille-Yosida condition for generators of \(C_{0}\)-semigroups (see e.g. [6, Chapter II, Theorem 3.8]). Note that in this case, \(U={\text {dom}}(A^{2})\) in Proposition 7.2 and hence, \(\overline{{\text {IV}}_{\rho }}={\overline{U}}=H.\)
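For example, if \(E=1\) and A is skew-selfadjoint (such as the operators \(A_{D/N}\) considered in the next subsection), then \({\text {Re}}\langle (\lambda +A)x,x\rangle =\lambda \Vert x\Vert ^{2}\) yields \(\Vert (\lambda +A)^{-1}\Vert \le \frac{1}{\lambda }\) for \(\lambda >0\) and hence

$$\begin{aligned} \Vert \left( (\lambda +A)^{-1}\right) ^{n}\Vert \le \frac{1}{\lambda ^{n}}\le \frac{1}{(\lambda -\omega )^{n}}\quad (\lambda >\omega ,n\in {\mathbb {N}}) \end{aligned}$$

for any \(\omega \ge \rho ,\) so that (7.1) holds with \(M=1\); this is consistent with the fact that \(-A\) then generates a contraction semigroup (indeed a unitary group).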

7.2 A hyperbolic delay equation

As a slight generalisation of [3, Example 3.17] we consider a concrete delay equation of the form

$$\begin{aligned} \partial _{t,\rho }^{2}u-{\text {div}}k{\text {grad}}u-\sum _{i=1}^{n}c_{i}\tau _{-h_{i}}\partial _{i}u-c_{0}\tau _{-h_{0}}\partial _{t,\rho }u&=0\quad \text {on }{\mathbb {R}}_{>0},\nonumber \\ u&=g\quad \text {on }{\mathbb {R}}_{<0}. \end{aligned}$$
(7.2)

Here, u attains values in \(L_{2}(\varOmega )\) for some open set \(\varOmega \subseteq {\mathbb {R}}^{n}\) as underlying domain, \(h_{0},\ldots ,h_{n}>0\) are given real numbers and \(k,c_{0},\ldots ,c_{n}\) are bounded operators on \(L_{2}(\varOmega )^{n}\) and \(L_{2}(\varOmega ),\) respectively. The operators \({\text {grad}}\) and \({\text {div}}\) denote the usual gradient and divergence with respect to the spatial variables and will be introduced rigorously later. It is our first goal to rewrite this equation as a suitable evolutionary problem. For doing so, we need the following definition.

Definition 7.5

Let \(c_{0},\ldots ,c_{n}\in L(L_{2}(\varOmega ))\) and \(k\in L(L_{2}(\varOmega )^{n})\) selfadjoint such that \(k\ge d\) for some \(d\in {\mathbb {R}}_{>0}\). We define the function \(M_{1}:{\mathbb {C}}\rightarrow L(L_{2}(\varOmega )\times L_{2}(\varOmega )^{n};L_{2}(\varOmega ))\) by

$$\begin{aligned} M_{1}(z)(v,q){:}{=}-c_{0}\mathrm {e}^{-h_{0}z}v+\sum _{i=1}^{n}c_{i}\mathrm {e}^{-h_{i}z}(k^{-1}q)_{i}\quad (z\in {\mathbb {C}},v\in L_{2}(\varOmega ), q\in L_{2}(\varOmega )^{n}). \end{aligned}$$

Furthermore, we define \(M:{\mathbb {C}}\setminus \{0\}\rightarrow L(L_{2}(\varOmega )\times L_{2}(\varOmega )^{n})\) by

$$\begin{aligned} M(z)\left( \begin{array}{c} v\\ q \end{array}\right) {:}{=}\left( \begin{array}{c} v+z^{-1}M_{1}(z)(v,q)\\ k^{-1}q \end{array}\right) . \end{aligned}$$

Remark 7.6

Since \(\left( {\mathcal {L}}_{\rho }\tau _{h}u\right) (t)=\mathrm {e}^{(\mathrm {i}t+\rho )h}\left( {\mathcal {L}}_{\rho }u\right) (t)\) for each \(u\in L_{2,\rho }({\mathbb {R}};H)\) and \(t,h\in {\mathbb {R}},\) we have that

$$\begin{aligned} M_{1}(\partial _{t,\rho })(v,q)=-c_{0}\tau _{-h_{0}}v+\sum _{i=1}^{n}c_{i}\tau _{-h_{i}}(k^{-1}q)_{i} \end{aligned}$$

for each \(v\in L_{2,\rho }({\mathbb {R}};L_{2}(\varOmega ))\) and \(q\in L_{2,\rho }({\mathbb {R}};L_{2}(\varOmega )^{n}).\)

Obviously, the function M defined in this way is analytic, and its restriction to any open half-plane \({\mathbb {C}}_{{\text {Re}}>\rho _{0}}\) with \(\rho _{0}>0\) is bounded. Thus, we may consider the operator \(M(\partial _{t,\rho })\) for \(\rho >0\).

Lemma 7.7

The function M is regularising.

Proof

We need to prove that \(\left( M(\partial _{t,\rho })\chi _{{\mathbb {R}}_{\ge 0}}x\right) (0+)\) exists for all \(x=({\check{x}},{\hat{x}})\in L_{2}(\varOmega )\times L_{2}(\varOmega )^{n}\) and \(\rho >0\). We have that

$$\begin{aligned} M(\partial _{t,\rho })\chi _{{\mathbb {R}}_{\ge 0}}x=\left( \begin{array}{c} \chi _{{\mathbb {R}}_{\ge 0}}{\check{x}}+\partial _{t,\rho }^{-1}M_{1}(\partial _{t,\rho })\chi _{{\mathbb {R}}_{\ge 0}}x\\ \chi _{{\mathbb {R}}_{\ge 0}}k^{-1}{\hat{x}} \end{array}\right) \end{aligned}$$

and since \(M_{1}(\partial _{t,\rho })\) is causal, we infer that \(\partial _{t,\rho }^{-1}M_{1}(\partial _{t,\rho })\chi _{{\mathbb {R}}_{\ge 0}}x\in H_{\rho }^{1}({\mathbb {R}};L_{2}(\varOmega ))\) is supported on \({\mathbb {R}}_{>0}\) and hence, \(\left( \partial _{t,\rho }^{-1}M_{1}(\partial _{t,\rho })\chi _{{\mathbb {R}}_{\ge 0}}x\right) (0+)=0.\) Thus, M is regularising. \(\square \)

We now rewrite (7.2) as an evolutionary equation. We introduce \(v{:}{=}\partial _{t,\rho }u\) and \(q{:}{=}-k{\text {grad}}u\) as new unknowns and rewrite (7.2) as

$$\begin{aligned} \left( \partial _{t,\rho }M(\partial _{t,\rho })+\left( \begin{array}{cc} 0 &{} {\text {div}}\\ {\text {grad}}&{} 0 \end{array}\right) \right) \left( \begin{array}{c} v\\ q \end{array}\right) =0\quad \text {on }{\mathbb {R}}_{>0}. \end{aligned}$$
(7.3)
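Indeed, since \(\partial _{t,\rho }M(\partial _{t,\rho })\left( \begin{array}{c} v\\ q \end{array}\right) =\left( \begin{array}{c} \partial _{t,\rho }v+M_{1}(\partial _{t,\rho })(v,q)\\ \partial _{t,\rho }k^{-1}q \end{array}\right) ,\) a formal computation with \(v=\partial _{t,\rho }u\) and \(q=-k{\text {grad}}u\) (so that \((k^{-1}q)_{i}=-\partial _{i}u\)) shows that the two rows of (7.3) read

$$\begin{aligned} \partial _{t,\rho }v+M_{1}(\partial _{t,\rho })(v,q)+{\text {div}}q&=\partial _{t,\rho }^{2}u-{\text {div}}k{\text {grad}}u-\sum _{i=1}^{n}c_{i}\tau _{-h_{i}}\partial _{i}u-c_{0}\tau _{-h_{0}}\partial _{t,\rho }u,\\ \partial _{t,\rho }k^{-1}q+{\text {grad}}v&=-{\text {grad}}\partial _{t,\rho }u+{\text {grad}}\partial _{t,\rho }u=0, \end{aligned}$$

so that the first row reproduces (7.2), while the second row holds identically.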

Of course (7.2) has to be completed by suitable boundary conditions. This will be done by introducing the differential operators \({\text {div}}\) and \({\text {grad}}\) in a suitable way.

Definition 7.8

We define \({\text {grad}}_{0}:{\text {dom}}({\text {grad}}_{0})\subseteq L_{2}(\varOmega )\rightarrow L_{2}(\varOmega )^{n}\) as the closure of the operator

$$\begin{aligned} C_{c}^{\infty }(\varOmega )\subseteq L_{2}(\varOmega )\rightarrow L_{2}(\varOmega )^{n},\,\varphi \mapsto \left( \partial _{j}\varphi \right) _{j\in \{1,\ldots ,n\}} \end{aligned}$$

and similarly \({\text {div}}_{0}:{\text {dom}}({\text {div}}_{0})\subseteq L_{2}(\varOmega )^{n}\rightarrow L_{2}(\varOmega )\) as the closure of

$$\begin{aligned} C_{c}^{\infty }(\varOmega )^{n}\subseteq L_{2}(\varOmega )^{n}\rightarrow L_{2}(\varOmega ),\;(\varphi _{j})_{j\in \{1,\ldots ,n\}}\mapsto \sum _{j=1}^{n}\partial _{j}\varphi _{j}. \end{aligned}$$

Moreover, we set

$$\begin{aligned} {\text {grad}}&{:}{=}-({\text {div}}_{0})^{*}\\ {\text {div}}&{:}{=}-({\text {grad}}_{0})^{*}. \end{aligned}$$

Remark 7.9

We note that \({\text {dom}}({\text {grad}}_{0})\) coincides with the classical Sobolev space \(H_{0}^{1}(\varOmega )\) of weakly differentiable \(L_{2}\)-functions with vanishing Dirichlet trace. Moreover, \({\text {dom}}({\text {grad}})\) is nothing but the Sobolev space \(H^{1}(\varOmega )\). Likewise, \({\text {dom}}({\text {div}})\) consists of the \(L_2\)-vector fields whose distributional divergence can be represented by an \(L_2\)-function, and the elements in \({\text {dom}}({\text {div}}_0)\) additionally satisfy a homogeneous Neumann boundary condition in a suitable sense.

Thus, by replacing \({\text {div}}\) by \({\text {div}}_{0}\) or \({\text {grad}}\) by \({\text {grad}}_{0}\) in (7.3), we can model homogeneous Neumann or Dirichlet boundary conditions, respectively. Using these operators, we immediately obtain the following result.

Lemma 7.10

We set

$$\begin{aligned} A_{N}{:}{=}\left( \begin{array}{cc} 0 &{} {\text {div}}_{0}\\ {\text {grad}}&{} 0 \end{array}\right) :{\text {dom}}({\text {grad}})\times {\text {dom}}({\text {div}}_{0})\subseteq L_{2}(\varOmega )\times L_{2}(\varOmega )^{n}\rightarrow L_{2}(\varOmega )\times L_{2}(\varOmega )^{n} \end{aligned}$$

and

$$\begin{aligned} A_{D}{:}{=}\left( \begin{array}{cc} 0 &{} {\text {div}}\\ {\text {grad}}_{0} &{} 0 \end{array}\right) :{\text {dom}}({\text {grad}}_{0})\times {\text {dom}}({\text {div}})\subseteq L_{2}(\varOmega )\times L_{2}(\varOmega )^{n}\rightarrow L_{2}(\varOmega )\times L_{2}(\varOmega )^{n}. \end{aligned}$$

Then both operators are skew-selfadjoint, i.e. \(A_{N}^{*}=-A_{N}\) and \(A_{D}^{*}=-A_{D}.\)
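One way to see this for \(A_{D}\) (the argument for \(A_{N}\) is analogous) is to compute the adjoint entrywise, which is possible due to the product structure of the domain:

$$\begin{aligned} A_{D}^{*}=\left( \begin{array}{cc} 0 &{} ({\text {grad}}_{0})^{*}\\ ({\text {div}})^{*} &{} 0 \end{array}\right) =\left( \begin{array}{cc} 0 &{} -{\text {div}}\\ -{\text {grad}}_{0} &{} 0 \end{array}\right) =-A_{D}, \end{aligned}$$

where we used \({\text {div}}=-({\text {grad}}_{0})^{*}\) and \(({\text {div}})^{*}=-({\text {grad}}_{0})^{**}=-{\text {grad}}_{0}\), the latter by the closedness of \({\text {grad}}_{0}\).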

We now prove that the evolutionary problems associated with \((M,A_{D/N})\) are well-posed.

Proposition 7.11

Let \(c_{0},\ldots ,c_{n}\in L(L_{2}(\varOmega ))\) and \(k\in L(L_{2}(\varOmega )^{n})\) selfadjoint such that \(k\ge d\) for some \(d\in {\mathbb {R}}_{>0}\). Then the evolutionary problems associated with \((M,A_{D/N})\) are well-posed.

Proof

We first note that \(k^{-1}\) is selfadjoint and satisfies \(k^{-1}\ge \frac{1}{\Vert k\Vert }.\) Moreover, since \(A_{D/N}\) is skew-selfadjoint, we infer that

$$\begin{aligned} {\text {Re}}\langle A_{D/N}x,x\rangle =0\quad (x\in {\text {dom}}(A_{D/N})). \end{aligned}$$

Hence, we may estimate for \(x=(x_{1},x_{2})\in {\text {dom}}(A_{D/N})\)

$$\begin{aligned} {\text {Re}}\langle (zM(z)+A_{D/N})x,x\rangle&={\text {Re}}\langle zM(z)x,x\rangle \\&={\text {Re}}\langle zx_{1},x_{1}\rangle +{\text {Re}}\langle zk^{-1}x_{2},x_{2}\rangle +{\text {Re}}\langle M_{1}(z)x,x_{1}\rangle \\&\ge {\text {Re}}z\min \{1,\frac{1}{\Vert k\Vert }\}\Vert x\Vert ^{2}-\Vert M_{1}(z)\Vert \Vert x\Vert ^{2}. \end{aligned}$$

Moreover, we estimate

$$\begin{aligned} \Vert M_{1}(z)\Vert \le \Vert c_{0}\Vert \mathrm {e}^{-h_{0}{\text {Re}}z}+\sum _{i=1}^{n}\Vert c_{i}\Vert \Vert k^{-1}\Vert \mathrm {e}^{-h_{i}{\text {Re}}z} \end{aligned}$$

and hence, we infer that \(\Vert M_{1}(z)\Vert \rightarrow 0\) as \({\text {Re}}z\rightarrow \infty .\) Thus, we find \(c>0\) and \(\rho _{0}>0\) such that

$$\begin{aligned} {\text {Re}}\langle (zM(z)+A_{D/N})x,x\rangle \ge c\Vert x\Vert ^{2}\quad (z\in {\mathbb {C}}_{{\text {Re}}\ge \rho _{0}}), \end{aligned}$$

which yields the well-posedness for the evolutionary problem associated with \((M,A_{D/N}).\) \(\square \)

Remark 7.12

We note that the above proof also works for m-accretive operators A instead of \(A_{D/N}.\) This allows for the treatment of more general boundary conditions and we refer to [21] for a characterisation result about those boundary conditions (including also nonlinear ones).

Having these results at hand, we are now in a position to consider the history space for (7.3). From now on, to avoid cluttered notation, we will simply write A and note that A can be replaced by \(A_{N}\) and \(A_{D}\), respectively.

Proposition 7.13

Let \(\rho >s_{0}(M,A)\). Then

$$\begin{aligned} \varGamma _{\rho }g=\left( \begin{array}{cc} 1 &{} 0\\ 0 &{} k^{-1} \end{array}\right) g(0-),\quad K_{\rho }g=\chi _{{\mathbb {R}}_{\ge 0}}({\text {m}})\left( \begin{array}{c} M_{1}(\partial _{t,\rho })\\ 0 \end{array}\right) g \end{aligned}$$

for each \(g\in H_{\rho }^{1}({\mathbb {R}}_{\le 0};L_{2}(\varOmega )\times L_{2}(\varOmega )^{n}).\) Moreover,

$$\begin{aligned} \left\{ g\in H_{\rho }^{1}({\mathbb {R}}_{\le 0};{\text {dom}}(A))\,;\,\forall j\in \{0,\ldots ,n\}:g(-h_{j})=0,\,\left( \begin{array}{cc} 1 &{} 0\\ 0 &{} k \end{array}\right) Ag(0-)\in {\text {dom}}(A)\right\} \subseteq {\text {His}}_{\rho }\nonumber \\ \end{aligned}$$
(7.4)

and consequently,

$$\begin{aligned} X_{\rho }^{\mu }=\left( L_{2}(\varOmega )\times L_{2}(\varOmega )^{n}\right) \times L_{2,\mu }({\mathbb {R}}_{\le 0};L_{2}(\varOmega )\times L_{2}(\varOmega )^{n}) \end{aligned}$$

for each \(\mu \le \rho .\)

Proof

Let \(g\in H_{\rho }^{1}({\mathbb {R}}_{\le 0};L_{2}(\varOmega )\times L_{2}(\varOmega )^{n})\). Then

$$\begin{aligned} \varGamma _{\rho }g&=\left( \left( \left( \begin{array}{cc} 1 &{} 0\\ 0 &{} k^{-1} \end{array}\right) +\partial _{t,\rho }^{-1}\left( \begin{array}{c} M_{1}(\partial _{t,\rho })\\ 0 \end{array}\right) \right) \chi _{{\mathbb {R}}_{\ge 0}}g(0-)\right) (0+)\\&=\left( \left( \begin{array}{cc} 1 &{} 0\\ 0 &{} k^{-1} \end{array}\right) \chi _{{\mathbb {R}}_{\ge 0}}g(0-)\right) (0+)\\&=\left( \begin{array}{cc} 1 &{} 0\\ 0 &{} k^{-1} \end{array}\right) g(0-), \end{aligned}$$

where we have used

$$\begin{aligned} \partial _{t,\rho }^{-1}\left( \begin{array}{c} M_{1}(\partial _{t,\rho })\\ 0 \end{array}\right) \chi _{{\mathbb {R}}_{\ge 0}}g(0-)\in H_{\rho }^{1}({\mathbb {R}};H) \end{aligned}$$

and hence

$$\begin{aligned} \left( \partial _{t,\rho }^{-1}\left( \begin{array}{c} M_{1}(\partial _{t,\rho })\\ 0 \end{array}\right) \chi _{{\mathbb {R}}_{\ge 0}}g(0-)\right) (0+)&=\left( \partial _{t,\rho }^{-1}\left( \begin{array}{c} M_{1}(\partial _{t,\rho })\\ 0 \end{array}\right) \chi _{{\mathbb {R}}_{\ge 0}}g(0-)\right) (0-)=0 \end{aligned}$$

by causality. Moreover,

$$\begin{aligned} K_{\rho }g&=P_{0}\partial _{t,\rho }M(\partial _{t,\rho })g\\&=P_{0}\partial _{t,\rho }\left( \begin{array}{cc} 1 &{} 0\\ 0 &{} k^{-1} \end{array}\right) g+\chi _{{\mathbb {R}}_{\ge 0}}({\text {m}})\left( \begin{array}{c} M_{1}(\partial _{t,\rho })\\ 0 \end{array}\right) g\\&=\chi _{{\mathbb {R}}_{\ge 0}}({\text {m}})\left( \begin{array}{c} M_{1}(\partial _{t,\rho })\\ 0 \end{array}\right) g, \end{aligned}$$

since

$$\begin{aligned} {\text {spt}}\partial _{t,\rho }\left( \begin{array}{cc} 1 &{} 0\\ 0 &{} k^{-1} \end{array}\right) g\subseteq {\mathbb {R}}_{\le 0} \end{aligned}$$

and thus, \(P_{0}\partial _{t,\rho }\left( \begin{array}{cc} 1 &{} 0\\ 0 &{} k^{-1} \end{array}\right) g=0\) by Proposition 4.4 (c). Let now g be an element of the set on the left hand side of (7.4). Then, we compute

$$\begin{aligned}&S_{\rho }\left( \delta _{0}\left( \begin{array}{cc} 1 &{} 0\\ 0 &{} k^{-1} \end{array}\right) g(0-)-\chi _{{\mathbb {R}}_{\ge 0}}({\text {m}})\left( \begin{array}{c} M_{1}(\partial _{t,\rho })\\ 0 \end{array}\right) g\right) -\chi _{{\mathbb {R}}_{\ge 0}}g(0-)\\&\quad = S_{\rho }\left( \delta _{0}\left( \begin{array}{cc} 1 &{} 0\\ 0 &{} k^{-1} \end{array}\right) g(0-)-\chi _{{\mathbb {R}}_{\ge 0}}({\text {m}})\left( \begin{array}{c} M_{1}(\partial _{t,\rho })\\ 0 \end{array}\right) g\right. \\&\left. \quad -\partial _{t,\rho }\left( \begin{array}{cc} 1 &{} 0\\ 0 &{} k^{-1} \end{array}\right) \chi _{{\mathbb {R}}_{\ge 0}}g(0-)-\left( \begin{array}{c} M_{1}(\partial _{t,\rho })\\ 0 \end{array}\right) \chi _{{\mathbb {R}}_{\ge 0}}g(0-)-\chi _{{\mathbb {R}}_{\ge 0}}Ag(0-)\right) \\&\qquad = -S_{\rho }\left( \chi _{{\mathbb {R}}_{\ge 0}}({\text {m}})\left( \begin{array}{c} M_{1}(\partial _{t,\rho })\\ 0 \end{array}\right) (g+\chi _{{\mathbb {R}}_{\ge 0}}g(0-))\right) -S_{\rho }\left( \chi _{{\mathbb {R}}_{\ge 0}}Ag(0-)\right) . \end{aligned}$$

We now treat both terms separately. We note that

$$\begin{aligned} \left( \chi _{{\mathbb {R}}_{\ge 0}}({\text {m}})c_{j}\tau _{-h_{j}}f_{j}\right) (t)={\left\{ \begin{array}{ll} c_{j}f_{j}(t-h_{j}) &{} \text { if }t\ge 0,\\ 0 &{} \text { otherwise} \end{array}\right. } \end{aligned}$$

for \(f_{j}\in H_{\rho }^{1}({\mathbb {R}};L_{2}(\varOmega ))\) and thus, \(\chi _{{\mathbb {R}}_{\ge 0}}({\text {m}})c_{j}\tau _{-h_{j}}f_{j}\in H_{\rho }^{1}({\mathbb {R}};L_{2}(\varOmega ))\) if \(f_{j}(-h_{j})=0.\) Thus, by the constraints on g, we infer that

$$\begin{aligned} \chi _{{\mathbb {R}}_{\ge 0}}({\text {m}})\left( \begin{array}{c} M_{1}(\partial _{t,\rho })\\ 0 \end{array}\right) (g+\chi _{{\mathbb {R}}_{\ge 0}}g(0-))\in H_{\rho }^{1}({\mathbb {R}};L_{2}(\varOmega )\times L_{2}(\varOmega )^{n}). \end{aligned}$$

Thus, we are left to consider the last term. By assumption, we find \(x\in {\text {dom}}(A)\) with \(Ag(0-)=\left( \begin{array}{cc} 1 &{} 0\\ 0 &{} k^{-1} \end{array}\right) x\) and hence,

$$\begin{aligned} S_{\rho }\left( \chi _{{\mathbb {R}}_{\ge 0}}Ag(0-)\right)&=\partial _{t,\rho }^{-1}S_{\rho }\left( \partial _{t,\rho }\left( \begin{array}{cc} 1 &{} 0\\ 0 &{} k^{-1} \end{array}\right) \chi _{{\mathbb {R}}_{\ge 0}}x\right) \\&=\partial _{t,\rho }^{-1}\left( \chi _{{\mathbb {R}}_{\ge 0}}x-S_{\rho }\left( \left( \left( \begin{array}{c} M_{1}(\partial _{t,\rho })\\ 0 \end{array}\right) +A\right) \chi _{{\mathbb {R}}_{\ge 0}}x\right) \right) \\&\in H_{\rho }^{1}({\mathbb {R}};L_{2}(\varOmega )\times L_{2}(\varOmega )^{n}), \end{aligned}$$

which proves the claim. \(\square \)

We conclude this section by proving that the associated semigroup can be extended to \(X_{\rho }^{\mu }\) for each \(\mu \le \rho .\)

Theorem 7.14

Let \(\rho >s_{0}(M,A)\) and let \(T^{\rho }\) denote the semigroup associated with (M, A) on \(D_{\rho }.\) Then, for \(\rho \) large enough, \(T^{\rho }\) extends to a \(C_{0}\)-semigroup on \(\left( L_{2}(\varOmega )\times L_{2}(\varOmega )^{n}\right) \times L_{2,\mu }({\mathbb {R}}_{\le 0};L_{2}(\varOmega )\times L_{2}(\varOmega )^{n})\) for each \(\mu \le \rho .\)

Proof

We prove the assertion by a perturbation argument. To this end, we consider the evolutionary problem associated with \(\left( E,A\right) \), where

$$\begin{aligned} E{:}{=}\left( \begin{array}{cc} 1 &{} 0\\ 0 &{} k^{-1} \end{array}\right) . \end{aligned}$$

We note that this problem is well-posed with \(s_{0}\left( E,A\right) =0\) (compare the proof of Proposition 7.11). We denote the associated semigroup by \({\widetilde{T}}^{\rho }\). By Proposition 7.2 we know that the closure of the initial value space for \({\widetilde{T}}^{\rho }\) is given by

$$\begin{aligned} \overline{\{x\in {\text {dom}}(A)\,;\,E^{-1}Ax\in {\text {dom}}(A)\}}=L_{2}(\varOmega )\times L_{2}(\varOmega )^{n}. \end{aligned}$$

Moreover, by Theorem 7.3, \({\widetilde{T}}^{\rho }\) extends to a \(C_{0}\)-semigroup on \(\left( L_{2}(\varOmega )\times L_{2}(\varOmega )^{n}\right) \times L_{2,\mu }({\mathbb {R}}_{\le 0};L_{2}(\varOmega )\times L_{2}(\varOmega )^{n})\) if and only if

$$\begin{aligned} \Vert \left( (\lambda E+A)^{-1}E\right) ^{n}\Vert \le \frac{M}{(\lambda -\omega )^{n}}\quad (\lambda >\omega ,n\in {\mathbb {N}}) \end{aligned}$$

for some \(M\ge 1,\,\omega \ge \rho \). We note that E is selfadjoint and strictly positive definite and thus,

$$\begin{aligned} (\lambda E+A)^{-1}=\sqrt{E^{-1}}\left( \lambda +\sqrt{E^{-1}}A\sqrt{E^{-1}}\right) ^{-1}\sqrt{E^{-1}}. \end{aligned}$$
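This identity can be obtained from the factorisation

$$\begin{aligned} \lambda E+A=\sqrt{E}\left( \lambda +\sqrt{E^{-1}}A\sqrt{E^{-1}}\right) \sqrt{E}\quad \text {on }{\text {dom}}(A). \end{aligned}$$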

Iterating, we obtain

$$\begin{aligned} \left( (\lambda E+A)^{-1}E\right) ^{n}=\sqrt{E^{-1}}\left( \lambda +\sqrt{E^{-1}}A\sqrt{E^{-1}}\right) ^{-n}\sqrt{E} \end{aligned}$$

for each \(n\in {\mathbb {N}}\). Since A is skew-selfadjoint, so is \(\sqrt{E^{-1}}A\sqrt{E^{-1}}\) and thus,

$$\begin{aligned} \Vert \left( (\lambda E+A)^{-1}E\right) ^{n}\Vert \le \frac{\Vert \sqrt{E}\Vert \Vert \sqrt{E^{-1}}\Vert }{\lambda ^{n}}\quad (\lambda >0,n\in {\mathbb {N}}) \end{aligned}$$

and hence, \({\widetilde{T}}^{\rho }\) extends to a bounded \(C_{0}\)-semigroup on \(\left( L_{2}(\varOmega )\times L_{2}(\varOmega )^{n}\right) \times L_{2,\mu }({\mathbb {R}}_{\le 0};L_{2}(\varOmega )\times L_{2}(\varOmega )^{n})\). Now we come to the semigroup \(T^{\rho }.\) We will prove that \(T_{1}^{\rho }:D_{\rho }\subseteq \left( L_{2}(\varOmega )\times L_{2}(\varOmega )^{n}\right) \times L_{2,\mu }({\mathbb {R}}_{\le 0};L_{2}(\varOmega )\times L_{2}(\varOmega )^{n})\rightarrow C_{\omega }({\mathbb {R}}_{\ge 0};L_{2}(\varOmega )\times L_{2}(\varOmega )^{n})\) is bounded for some \(\omega \in {\mathbb {R}}\), which would imply the claim by Proposition 6.5. Let \((g(0-),g)\in D_{\rho }.\) We then have, using the formulas in Proposition 7.13,

$$\begin{aligned} T_{1}^{\rho }(g(0-),g)&=\left( \partial _{t,\rho }M(\partial _{t,\rho })+A\right) ^{-1}\left( \delta _{0}Eg(0-)-\chi _{{\mathbb {R}}_{\ge 0}}({\text {m}})\left( \begin{array}{c} M_{1}(\partial _{t,\rho })\\ 0 \end{array}\right) g\right) \\&=\left( \partial _{t,\rho }E+A+\left( \begin{array}{c} M_{1}(\partial _{t,\rho })\\ 0 \end{array}\right) \right) ^{-1}\left( \delta _{0}Eg(0-)\right) \\&\quad -\left( \partial _{t,\rho }M(\partial _{t,\rho })+A\right) ^{-1}\left( \chi _{{\mathbb {R}}_{\ge 0}}({\text {m}})\left( \begin{array}{c} M_{1}(\partial _{t,\rho })\\ 0 \end{array}\right) g\right) . \end{aligned}$$

The first term in the latter expression can be rewritten as

$$\begin{aligned} \left( \partial _{t,\rho }E+A+\left( \begin{array}{c} M_{1}(\partial _{t,\rho })\\ 0 \end{array}\right) \right) ^{-1}\left( \delta _{0}Eg(0-)\right) =\left( 1+\left( \partial _{t,\rho }E+A\right) ^{-1}\left( \begin{array}{c} M_{1}(\partial _{t,\rho })\\ 0 \end{array}\right) \right) ^{-1}{\widetilde{T}}_{1}^{\rho }\left( Eg(0-)\right) .\end{aligned}$$

Now, since \({\widetilde{T}}_{1}^{\rho }\left( Eg(0-)\right) \in L_{2,\rho }({\mathbb {R}};L_{2}(\varOmega )\times L_{2}(\varOmega )^{n})\) and \(\Vert M_{1}(\partial _{t,\rho })\Vert \rightarrow 0\) as \(\rho \rightarrow \infty ,\) we infer that

$$\begin{aligned} \left( \partial _{t,\rho }E+A+\left( \begin{array}{c} M_{1}(\partial _{t,\rho })\\ 0 \end{array}\right) \right) ^{-1}\left( \delta _{0}Eg(0-)\right) \in L_{2,\rho }({\mathbb {R}};L_{2}(\varOmega )\times L_{2}(\varOmega )^{n}) \end{aligned}$$

for \(\rho \) large enough by the Neumann series. Since clearly

$$\begin{aligned} L_{2}(\varOmega )\times L_{2}(\varOmega )^{n}\ni x\mapsto \left( \partial _{t,\rho }E+A+\left( \begin{array}{c} M_{1}(\partial _{t,\rho })\\ 0 \end{array}\right) \right) ^{-1}\left( \delta _{0}Ex\right) \in H_{\rho }^{-1}({\mathbb {R}};L_{2}(\varOmega )\times L_{2}(\varOmega )^{n}) \end{aligned}$$

is bounded, we obtain

$$\begin{aligned} \left\| \left( \partial _{t,\rho }E+A+\left( \begin{array}{c} M_{1}(\partial _{t,\rho })\\ 0 \end{array}\right) \right) ^{-1}\left( \delta _{0}Eg(0-)\right) \right\| _{L_{2,\rho }({\mathbb {R}};L_{2}(\varOmega )\times L_{2}(\varOmega )^{n})}\le C\Vert g(0-)\Vert _{L_{2}(\varOmega )\times L_{2}(\varOmega )^{n}} \end{aligned}$$

for some \(C\ge 0\) by the closed graph theorem. Hence,

$$\begin{aligned}&\Vert T_{1}^{\rho }(g(0-),g)\Vert _{L_{2,\rho }({\mathbb {R}};L_{2}(\varOmega )\times L_{2}(\varOmega )^{n})}\\&\le C\Vert g(0-)\Vert _{L_{2}(\varOmega )\times L_{2}(\varOmega )^{n}}+\left\| \left( \partial _{t,\rho }M(\partial _{t,\rho })+A\right) ^{-1}\left( \chi _{{\mathbb {R}}_{\ge 0}}({\text {m}})\left( \begin{array}{c} M_{1}(\partial _{t,\rho })\\ 0 \end{array}\right) g\right) \right\| _{L_{2,\rho }({\mathbb {R}};L_{2}(\varOmega )\times L_{2}(\varOmega )^{n})}\\&\le C\Vert g(0-)\Vert _{L_{2}(\varOmega )\times L_{2}(\varOmega )^{n}}+C_{1}\Vert g\Vert _{L_{2,\rho }({\mathbb {R}};L_{2}(\varOmega )\times L_{2}(\varOmega )^{n})}\\&\le {\widetilde{C}}\Vert (g(0-),g)\Vert _{X_{\rho }^{\mu }} \end{aligned}$$

for suitable \(C_{1},{\widetilde{C}}\ge 0.\) Thus,

$$\begin{aligned} T_{1}^{\rho }:D_{\rho }\subseteq X_{\rho }^{\mu }\rightarrow L_{2,\rho }({\mathbb {R}}_{\ge 0};L_{2}(\varOmega )\times L_{2}(\varOmega )^{n}) \end{aligned}$$

is bounded and hence extends to a bounded operator on \(X_{\rho }^{\mu }\). Moreover, for \(f\in C_{c}^{\infty }({\mathbb {R}}_{\ge 0};L_{2}(\varOmega )\times L_{2}(\varOmega )^{n})\) we may estimate

$$\begin{aligned} \left\| \left( (\partial _{t,\rho }E+A)^{-1}f\right) (t)\right\|&=\left\| \int _{0}^{t}{\widetilde{T}}_{\rho }^{(1)}(t-s)E^{-1}f(s)\,\mathrm {d}s\right\| \\&\le M\Vert E^{-1}\Vert \int _{0}^{t}\Vert f(s)\Vert \,\mathrm {d}s\\&\le \frac{M\Vert E^{-1}\Vert }{\sqrt{2\rho }}\Vert f\Vert _{L_{2,\rho }({\mathbb {R}};L_{2}(\varOmega )\times L_{2}(\varOmega )^{n})}\mathrm {e}^{\rho t} \end{aligned}$$

for each \(t\ge 0\), which proves that

$$\begin{aligned} (\partial _{t,\rho }E+A)^{-1}:L_{2,\rho }({\mathbb {R}}_{\ge 0};L_{2}(\varOmega )\times L_{2}(\varOmega )^{n})\rightarrow C_{\rho }({\mathbb {R}}_{\ge 0};L_{2}(\varOmega )\times L_{2}(\varOmega )^{n}) \end{aligned}$$

is bounded. Now, let \((x,g)\in X_{\rho }^{\mu }\) and set \(u{:}{=}T_{1}^{\rho }(x,g)\in L_{2,\rho }({\mathbb {R}}_{\ge 0};L_{2}(\varOmega )\times L_{2}(\varOmega )^{n})\). Then

$$\begin{aligned} (\partial _{t,\rho }E+A)u=\delta _{0}Ex-\chi _{{\mathbb {R}}_{\ge 0}}({\text {m}})\left( \begin{array}{c} M_{1}(\partial _{t,\rho })\\ 0 \end{array}\right) g-\left( \begin{array}{c} M_{1}(\partial _{t,\rho })\\ 0 \end{array}\right) u \end{aligned}$$

and hence, we derive that

$$\begin{aligned} u&=(\partial _{t,\rho }E+A)^{-1}\left( \delta _{0}Ex-\chi _{{\mathbb {R}}_{\ge 0}}({\text {m}})\left( \begin{array}{c} M_{1}(\partial _{t,\rho })\\ 0 \end{array}\right) g-\left( \begin{array}{c} M_{1}(\partial _{t,\rho })\\ 0 \end{array}\right) u\right) \\&={\widetilde{T}}_{1}^{\rho }(x,g)-(\partial _{t,\rho }E+A)^{-1}\left( \chi _{{\mathbb {R}}_{\ge 0}}({\text {m}})\left( \begin{array}{c} M_{1}(\partial _{t,\rho })\\ 0 \end{array}\right) g+\left( \begin{array}{c} M_{1}(\partial _{t,\rho })\\ 0 \end{array}\right) u\right) \\&\in C_{\rho }({\mathbb {R}}_{\ge 0};L_{2}(\varOmega )\times L_{2}(\varOmega )^{n}) \end{aligned}$$

and hence, again by the closed graph theorem

$$\begin{aligned} T_{1}^{\rho }:X_{\rho }^{\mu }\rightarrow C_{\rho }({\mathbb {R}}_{\ge 0};L_{2}(\varOmega )\times L_{2}(\varOmega )^{n}) \end{aligned}$$

is bounded. \(\square \)

Remark 7.15

If \(c_0=\cdots =c_n=0\) and \(k=1\), that is, if we just deal with the classical wave equation without any delay effects, then we end up with an equation of the form

$$\begin{aligned} \left( \partial _{t,\rho } +A\right) \begin{pmatrix} v\\ q \end{pmatrix} =0 \text{ on } {\mathbb {R}}_{>0}, \end{aligned}$$

which is already covered by the results obtained in Sect. 7.1. In this situation the solution just depends on the present state and not on the whole history and thus gives rise to a \(C_0\)-semigroup on \(\overline{{\text {IV}}_{\rho }}=\overline{{\text {dom}}(A^2)}=L_2(\varOmega )\times L_2(\varOmega )^n\) (cp. Theorem 7.3). In this way we recover the result of [2, Example 8.4.9].

We remark here that within the theory of \(C_0\)-semigroups, the wave equation is often written as

$$\begin{aligned} \partial _{t}\begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} 0 &{} 1 \\ \varDelta &{} 0 \end{pmatrix} \begin{pmatrix} u\\ v \end{pmatrix} \end{aligned}$$

with state space \(H^1(\varOmega )\times L_2(\varOmega )\) or \(H^1_0(\varOmega )\times L_2(\varOmega )\) in case of Neumann or Dirichlet boundary conditions, respectively. Thus, in this formulation initial conditions are prescribed for u and \(v=u'\), while in our setting above the state space is \(L_2(\varOmega )\times L_2(\varOmega )^n\) and the initial conditions are prescribed for \(v=u'\) and \(q=-{\text {grad}}u\). Note, however, that the initial value for u in the classical formulation is an element of \(H^1_{(0)}(\varOmega )\) and thus automatically yields an initial value for q. Hence, initial value problems posed in the classical formulation are also covered by the formulation considered above.