1 Introduction

Consider a bounded, linear operator G on \(L^2(\mathbb R_+,U)\) for a complex Hilbert space U. As usual, \(L^2(\mathbb R_+,U)\) denotes the Hilbert space of (equivalence classes of) square-integrable functions \(\mathbb R_+ \rightarrow U\), and \(\mathbb R_+\) denotes the positive real numbers. We say that G is right-shift invariant if it commutes with the right-shift semigroup \((\sigma ^\tau )_{\tau \ge 0}\) defined by

$$\begin{aligned} (\sigma ^\tau f)(t) = \begin{cases} 0 & t < \tau \\ f(t-\tau ) & t \ge \tau \end{cases} \quad \text {for almost all } t \ge 0. \end{aligned}$$
(1.1)

Note that \(\sigma \) leaves \(L^2(\mathbb R_+,U)\) invariant. A famous result in the study of such right-shift invariant operators is the following multiplier-type theorem.
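The action (1.1) is straightforward to realise numerically. The following sketch (our own illustration; the grid spacing, truncation horizon and sample function are arbitrary choices) checks the semigroup law \(\sigma^{\tau_1}\sigma^{\tau_2} = \sigma^{\tau_1+\tau_2}\) and the fact that each \(\sigma^\tau\) acts isometrically on \(L^2(\mathbb R_+)\):

```python
import numpy as np

# Discretise L^2(R_+) on a uniform grid; sigma^tau becomes a zero-padding shift.
dt = 0.01
t = np.arange(0, 20, dt)

def shift(f, tau):
    """Right shift by tau >= 0: (sigma^tau f)(t) = f(t - tau), zero before tau."""
    n = int(round(tau / dt))
    out = np.zeros_like(f)
    out[n:] = f[:len(f) - n]
    return out

f = np.exp(-t)  # a sample function in L^2(R_+)

# Semigroup property: sigma^a sigma^b = sigma^{a+b}.
assert np.allclose(shift(shift(f, 0.5), 0.25), shift(f, 0.75))

# Each sigma^tau is isometric on L^2(R_+) (on a truncated grid this holds
# up to the negligible tail of f beyond the horizon).
norm = lambda g: np.sqrt(np.sum(np.abs(g) ** 2) * dt)
assert abs(norm(shift(f, 0.5)) - norm(f)) < 1e-3
```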

Theorem 1.1

The operator \(G: L^2(\mathbb R_+,U)\rightarrow L^2(\mathbb R_+,U)\) is bounded, linear and right-shift invariant if, and only if, it is of the form \(G = {{\mathcal {L}}}^{-1} {{\mathcal {M}}}_\textbf{G}{{\mathcal {L}}}\), for some (unique) holomorphic function \(\textbf{G}: \mathbb C_0 \rightarrow {{\mathcal {B}}}(U)\). Furthermore, \(\Vert G \Vert = \sup _{s \in \mathbb C_0} \Vert \textbf{G}(s) \Vert _{{{\mathcal {B}}}(U)}\).

In the above theorem \(\mathbb C_0\) denotes the open right-half complex plane, \({{\mathcal {L}}}\) denotes the (unilateral) Laplace transform, \({{\mathcal {M}}}_\textbf{G}\) denotes the multiplication operator by \(\textbf{G}\), and \({{\mathcal {B}}}(U)\) the Banach space of bounded linear operators \(U \rightarrow U\) equipped with the uniform topology. Theorem 1.1 is a special case of a general result proved in [15], and appears with a simpler proof in [42, Theorem 2.3, Remark 2.4], along with a number of bibliographical notes.
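To illustrate Theorem 1.1 in the scalar case \(U = \mathbb C\), consider the symbol \(\textbf G(s) = 1/(1+s)\), for which \(G = {{\mathcal {L}}}^{-1}{{\mathcal {M}}}_{\textbf G}{{\mathcal {L}}}\) is convolution with \(t \mapsto \mathrm e^{-t}\), and \(\Vert G \Vert = \sup _{s \in \mathbb C_0}|\textbf G(s)| = 1\). A minimal numerical sketch (ours; the quadrature step and truncation are arbitrary choices):

```python
import numpy as np

# Scalar illustration of Theorem 1.1 (U = C): the symbol G(s) = 1/(1+s) lies in
# H^infty(C_0), and L^{-1} M_G L is convolution with h(t) = e^{-t}.
dt = 1e-3
t = np.arange(0, 40, dt)
h = np.exp(-t)  # impulse response of G

def laplace(f, s):
    """Quadrature approximation of the Laplace transform at Re(s) > 0."""
    return np.sum(f * np.exp(-s * t)) * dt

# L(h)(s) = 1/(1+s) at a few sample points in C_0.
for s in [0.3, 1.0, 2.0 + 1.5j]:
    assert abs(laplace(h, s) - 1.0 / (1.0 + s)) < 2e-3

# By Theorem 1.1, ||G|| = sup_{s in C_0} |1/(1+s)| = 1 (approached as s -> 0).
```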

Bounded, linear, right-shift invariant operators on \(L^2(\mathbb R_+,U)\) arise naturally in mathematical systems and control theory, as they are precisely the so-called input–output maps G of linear, time-invariant, input–output stable control systems with input u and output Gu; see, for instance, [43] and [36]. The terminology input–output stable refers to the property that inputs in \(L^2(\mathbb R_+,U)\) are continuously mapped to outputs in \(L^2(\mathbb R_+,U)\). Both u and Gu are assumed to take their values in the space U. In this setting, the symbol \(\textbf{G}\) in Theorem 1.1 is called the transfer function associated with the input–output map G. The independent variable \(t \in \mathbb R_+\) denotes time, and the choice of the semi-infinite real axis \(\mathbb R_+\) is important (as we shall note later) for developing a theory which facilitates two features: (a) a stability theory, which requires an unbounded time domain; and (b) the treatment of initial value problems associated with, for example, controlled and observed evolution equations [40], which requires an initial time, and hence functions whose support is bounded to the left.

Theorem 1.1 is in the spirit of operator-valued multiplier theorems for pseudo-differential operators, which consider translation-invariant, linear operators of the form \(a(D):= {{\mathcal {F}}}^{-1} {{\mathcal {M}}}_a {{\mathcal {F}}}\), where the symbol a is typically defined on \(\mathbb R^n\), takes values in \({{\mathcal {B}}}(U)\), and \({{\mathcal {F}}}\) denotes the Fourier transform. A common aim is to determine conditions on the symbol a so that a(D) has desired boundedness properties, with, for example, Mikhlin’s theorem [25] being a classical result in the field. Fourier multiplier theorems have been studied extensively, although somewhat less so in the vector-valued case. The paper [2], and texts [1, 3] by the same author, treat this problem in considerable detail, and contain a substantial history of the area. That Theorem 1.1 contains necessary and sufficient conditions is an exceptional consequence of the imposed Hilbert space structure (\(L^p(\mathbb R_+, U)\) with \(p=2\) and Hilbert space U) via the Paley–Wiener Theorem.

Connecting these ideas back to control theory, it is well-known (from, for example [39, Theorem 6.2]) that there is a one-to-one relationship between bounded, linear, right-shift invariant operators on \(L^2(\mathbb R_+,U)\) and bounded, linear, translation-invariant, causal operators on \(L^2(\mathbb R,U)\). Recall that a linear, translation-invariant operator \(F: L^2(\mathbb R,U) \rightarrow L^2(\mathbb R,U)\) is called causal (or non-anticipative) if \(L^2(\mathbb R_+,U)\) is an F-invariant subspace. In the so-called well-posed linear systems literature—a class of physically-motivated infinite-dimensional linear control systems, see [35]—input–output operators are typically considered to/from function spaces on \(\mathbb R_+\) or to/from function spaces on \(\mathbb R\). The latter framework facilitates a connection between the control theoretic input–output maps of [35] and the scattering theory of Lax and Phillips [21], see [36, Section 6]. By the above discussion these two approaches are equivalent for \(L^2\)-spaces. Indeed, it can be shown that (interpreted carefully) F and G as above coincide on \(L^2(\mathbb R_+,U)\).

Here, we present a number of far-reaching generalisations of Theorem 1.1, broadly investigating representation and boundedness properties of linear, right-shift invariant operators between certain interpolation spaces of Lebesgue and (usual) Sobolev spaces, or certain fractional-order Bessel potential spaces. Our first result contains a characterisation of bounded, linear, right-shift invariant operators between two interpolation spaces of the form

$$\begin{aligned} \big [H_0^{m}(\mathbb R_+,U), H^{m+1}_0(\mathbb R_+,U)\big ]_{\gamma } \quad m \in \{0,1,2, \dots \}, \; \gamma \in [0,1], \end{aligned}$$

as necessarily multiplication operators \({{\mathcal {L}}}^{-1} {{\mathcal {M}}}_\textbf{G}{{\mathcal {L}}}\) for some holomorphic symbol \(\textbf{G}: \mathbb C_0 \rightarrow {{\mathcal {B}}}(U)\). Moreover, boundedness properties of G are characterised by boundedness conditions on \(\textbf{G}\) involving the interpolation exponents. This result appears as Theorem 3.1. Our main result is Theorem 3.5 which contains a characterisation now for such operators between \(H^\gamma (\mathbb R_+,U)\) spaces for \(\gamma \ge 0\)—which are fractional-order Bessel potential spaces when \(\gamma \) is not a nonnegative integer—and combines the previously-mentioned boundedness condition with a strong Hardy space \({{\mathcal {H}}}^2_\textrm{str}\)-condition. Furthermore, Proposition 3.10 provides a characterisation of certain bounded, linear, right-shift invariant operators in terms of a strong convolution representation. We discuss how our results relate to others in the literature in Sect. 3.2.

We outline our argument, which relies on a few crucial ingredients. In this first study we consider the Hilbert-space setting only. Roughly, right-shift invariant operators commute with the generator of the associated right-shift semigroup, which is a differentiation operator on \(\mathbb R_+\) whose domain includes a zero-trace boundary condition. Consequently, right-shift invariant operators commute with the fractional powers of this generator, which are well defined. The images of certain fractional powers form a scale of so-called fractional power spaces, each isometrically isomorphic to \(L^2(\mathbb R_+,U)\). Moreover, we exploit a powerful result relating interpolation spaces and fractional power spaces (see, for example, [17, Theorem 6.6.9]). The upshot is that we are able to use Theorem 1.1 itself to prove a number of generalisations of that very result.

Given the extensive research on operator-valued Fourier multiplier theorems on Euclidean space, where the case \(n = 1\) so that \(\mathbb R^n = \mathbb R\) is arguably the simplest to treat, it seems natural to approach the present problem by relating the half-line case to the whole-line case. Although we argue differently here, this approach may be used in the “zero-trace case”, Theorem 3.1, as certain zero-trace functions on the half-line may be continuously extended by zero to functions on the whole line, and causality plays a key role. However, when non-zero initial conditions are imposed, as will generally happen when considering \(H^\gamma (\mathbb R_+,U)\) for \(\gamma >1/2\), another argument seems to be required. Indeed, our theory identifies numerous examples of bounded, linear, translation-invariant and causal \(F: L^2(\mathbb R) \rightarrow L^2(\mathbb R)\) (here \(U = \mathbb C\)) which restrict to bounded operators \(H^1(\mathbb R) \rightarrow H^1(\mathbb R)\), but whose restrictions to \(H^1(\mathbb R_+)\) do not map continuously into \(H^1(\mathbb R_+)\). It is this key distinction which has, in part, motivated the current study, along with the control-theoretic motivation of investigating when additional regularity of an input signal is continuously inherited by the corresponding output signal.

The paper is organised as follows. Section 2 gathers notation and preliminary results. Our main results are contained in Sect. 3 and examples are presented in Sect. 4 which include connections of the current results to Regular Linear Systems and Pritchard–Salamon Systems in Sects. 4.1 and 4.2, respectively. A number of further and technical details appear in the Appendix.

2 Preliminaries

We gather requisite notation and preliminary material.

2.1 Notation

Most mathematical notation used is standard. As usual, let \(\mathbb N\), \(\mathbb Z\), \(\mathbb R\) and \(\mathbb C\) denote the positive integers (natural numbers), integers, real numbers and complex numbers, respectively. Furthermore, we set

$$\begin{aligned} \mathbb Z_+:= \mathbb N\cup \{0\}, \quad \mathbb R_+:= (0, \infty ) \quad \text {and} \quad \mathbb C_\alpha := \big \{ s \in \mathbb C: \textrm{Re}\,(s) > \alpha \big \} \quad \forall \, \alpha \in \mathbb R. \end{aligned}$$

Throughout the work we let \((U, \vert \cdot \vert _U)\) denote a complex, separable Hilbert space. The theory developed applies in the setting of real spaces U by considering their complexifications so as to make sense of Laplace transforms in the usual way.

For another Hilbert space V, we let \({{\mathcal {B}}}(U,V)\) denote the Banach space of all bounded linear operators \(U\rightarrow V\), with the usual operator norm \(\Vert \cdot \Vert \) induced from U and V, and set \({{\mathcal {B}}}(U):={{\mathcal {B}}}(U,U)\). We write \(U \hookrightarrow V\) if U is continuously embedded in V, meaning

$$\begin{aligned} | u |_V \lesssim | u |_U \quad \forall \, u \in U.\end{aligned}$$

The symbol \(\lesssim \) (\(\gtrsim \)) means less (greater) than or equal to, up to a general multiplicative constant independent of the variables appearing. Its use is intended to clarify the exposition by reducing the number of constants which appear in estimates. The symbol \(\dot{=}\) means equals with equivalent norms.

2.2 Function Spaces

We let \((L^2(\mathbb R_+,U), \Vert \cdot \Vert _{L^2(\mathbb R_+)})\) denote the usual Lebesgue space of (equivalence classes of Bochner measurable) square-integrable functions \(\mathbb R_+ \rightarrow U\), which is a Hilbert space when U is; see, for example, [39, Section 1]. For simplicity, we write \(L^2(\mathbb R_+)\) for \(L^2(\mathbb R_+,\mathbb C)\).

We shall require Sobolev spaces of vector-valued functions and we refer the reader to, for example, the texts [22, Chapter 8], [1, Chapter III, Sections 4.1, 4.2] or [3, Chapter VII]. For \(m \in \mathbb N\), we recall the (integer) Sobolev spaces

$$\begin{aligned} H^m(\mathbb R_+,U): =\big \{ u \in L^2(\mathbb R_+,U) \,: \, u^{(j)} \in L^2(\mathbb R_+,U), \; \forall \, j \in \{1,2,\dots ,m\} \big \}, \end{aligned}$$

with norm

$$\begin{aligned} \Vert u \Vert _{H^m(\mathbb R_+)}:= \Big (\sum _{k=0}^m \Vert u^{(k)}\Vert _{L^2(\mathbb R_+)}^2\Big )^{\frac{1}{2}} \quad \forall \, u \in H^m(\mathbb R_+,U). \end{aligned}$$

Here the symbol \(u^{(j)}\) denotes the j-th (weak) derivative of u for \(j \in \mathbb Z_+\), with \(u^{(0)} = u\). If u has a j-th classical derivative, then this is also denoted by \(u^{(j)}\).

It follows from, for instance, [22, Theorem 8.57] that elements of \(H^1(\mathbb R_+,U)\) (that is, equivalence classes of functions) may be identified with locally absolutely continuous functions \(\mathbb R_+ \rightarrow U\), and that

$$\begin{aligned} f(t) = f(a) + \int _a^t g(s) \, \textrm{d}s \quad \forall \, t,a >0, \; \forall \, f \in H^1(\mathbb R_+,U), \end{aligned}$$
(2.1)

for some \(g \in L^2(\mathbb R_+,U)\). We shall make this identification. Furthermore, taking the limit \(t \searrow 0\) in the right-hand side of (2.1) gives that every \(f \in H^1(\mathbb R_+,U)\) is well-defined at zero, with value denoted f(0).

Recall that \(H^k_0(\mathbb R_+,U)\) is defined as the closure in \(H^k(\mathbb R_+,U)\) of compactly supported smooth functions \(\mathbb R_+ \rightarrow U\). Repeated application of [26, Lemma B.7.9] gives the description that, for every \(m \in \mathbb N\),

$$\begin{aligned} H^m_0(\mathbb R_+,U) = \big \{ u \in H^m(\mathbb R_+, U) \,: \, u(0) = \dots = u^{(m-1)}(0) = 0\big \}. \end{aligned}$$
(2.2)

We shall require certain interpolation spaces. For thorough treatments we refer the reader to, for example, [1] or [5]. Let \(\theta \in (0,1)\). For reasons we discuss below, we borrow the notation of the so-called Lions–Magenes spaces from [23, Chapter 1, Section 11.7], and define the interpolation spaces

$$\begin{aligned} H_{00}^\theta (\mathbb R_+, U)&:= \big [L^2(\mathbb R_+, U),H^1_0(\mathbb R_+, U)\big ]_\theta \nonumber \\&{{\,\mathrm{\,\dot{=}\,}\,}}\big (L^2(\mathbb R_+, U),H^1_0(\mathbb R_+, U)\big )_{\theta ,2}. \end{aligned}$$
(2.3a)

Here \([\cdot , \, \cdot ]_\theta \) denotes the usual complex interpolation functor, and \((\cdot , \, \cdot )_{\theta ,2}\) denotes a real interpolation functor. By the results of [7], in the current Hilbert space setting, the choice of K-method or J-method for real interpolation gives rise to the same interpolation space.

For \(\theta = m + \alpha \) where \(m \in \mathbb N\) and \(\alpha \in (0,1)\), we set

$$\begin{aligned} H_{00}^\theta (\mathbb R_+, U)&:= \big [H^m_0(\mathbb R_+, U), H^{m+1}_0(\mathbb R_+, U)\big ]_\alpha \nonumber \\&{{\,\mathrm{\,\dot{=}\,}\,}} \big (H^m_0(\mathbb R_+, U), H^{m+1}_0(\mathbb R_+, U)\big )_{\alpha ,2}. \end{aligned}$$
(2.3b)

The second equalities in (2.3) are well-known; see, for example [7, Remark 3.6], as all the spaces appearing in the interpolation functors are Hilbert spaces. We set \(H^{m}_{00}(\mathbb R_+, U):= H^m_0(\mathbb R_+,U)\) (as in (2.2)) for \(m \in \mathbb Z_+\).

Apart from certain “borderline” values, the spaces \(H_{00}^\theta (\mathbb R_+, U)\) may be related to zero-trace Bessel potential spaces, which we now recall. To this end, for \(\theta \in \mathbb R\), the Bessel potential space \(H^{\theta }_\textrm{B}(\mathbb R,U)\) is defined as the set of all \(u \in L^2(\mathbb R,U)\) such that

$$\begin{aligned} \big \Vert {{\mathcal {F}}}^{-1} {{\mathcal {M}}}_{b_\theta } {{\mathcal {F}}}u \big \Vert _{L^2(\mathbb R)} < \infty , \end{aligned}$$

where \({{\mathcal {F}}}\) denotes the Fourier transform and \(b_{\theta }(\xi ):= (1 + |\xi |^2)^{\frac{\theta }{2}}\) is the so-called Bessel potential. This space is a Hilbert space when equipped with the norm

$$\begin{aligned} \Vert u \Vert _{H^{\theta }_\textrm{B}(\mathbb R)}: = \big \Vert {{\mathcal {F}}}^{-1} {{\mathcal {M}}}_{b_\theta } {{\mathcal {F}}}u \big \Vert _{L^2(\mathbb R)}. \end{aligned}$$
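As a sanity check of this definition, Plancherel-type identities can be verified numerically. The sketch below (our illustration, scalar-valued, under the convention \(({{\mathcal {F}}}u)(\xi ) = \int _{\mathbb R} u(t)\,\mathrm e^{-\mathrm i \xi t}\, \textrm{d}t\), so that Plancherel carries a factor \(1/2\pi \)) confirms for the Gaussian \(u(t) = \mathrm e^{-t^2/2}\) that the frequency-side norm for \(\theta = 1\) agrees with the time-side norm \((\Vert u \Vert ^2 + \Vert u' \Vert ^2)^{1/2}\):

```python
import numpy as np

# Bessel potential norm for theta = 1 via the Gaussian u(t) = exp(-t^2/2),
# whose Fourier transform is u_hat(xi) = sqrt(2*pi) exp(-xi^2/2).
xi = np.linspace(-30, 30, 600001)
dxi = xi[1] - xi[0]
u_hat_sq = 2 * np.pi * np.exp(-xi ** 2)  # |u_hat(xi)|^2

# Frequency side: ||u||^2_{H^1} = (1/2pi) * int (1 + xi^2) |u_hat|^2 dxi.
h1_freq = np.sum((1 + xi ** 2) * u_hat_sq) * dxi / (2 * np.pi)

# Time side: ||u||^2 + ||u'||^2 = sqrt(pi) + sqrt(pi)/2.
h1_time = np.sqrt(np.pi) * 1.5

assert abs(h1_freq - h1_time) < 1e-6
```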

Let \({\mathbb {K}}= {{\,\textrm{cl}\,}}(\mathbb R_+) = [0, \infty )\) or \(\mathbb R_+ =(0,\infty )\). The Bessel potential spaces \(H^{\theta }_\textrm{B}({\mathbb {K}},U)\) are defined as the restriction of elements in \(H^{\theta }_\textrm{B}(\mathbb R,U)\) to \({\mathbb {K}}\), with norm

$$\begin{aligned} \Vert u \Vert _{H^{\theta }_\textrm{B}({\mathbb {K}})}:= \inf _{\begin{array}{c} v \in H^\theta _\textrm{B}(\mathbb R,U) \\ v\vert _{{\mathbb {K}}} = u \end{array}} \Vert v \Vert _{H^{\theta }_\textrm{B}(\mathbb R)} . \end{aligned}$$

Let \({\mathbb {X}}= \mathbb R\) or \({\mathbb {K}}\). It follows from [3, VII, Theorem 4.3.2] that

$$\begin{aligned} H^m_\textrm{B}({\mathbb {X}},U) {{\,\mathrm{\,\dot{=}\,}\,}}H^m({\mathbb {X}},U) \quad \forall \, m \in \mathbb Z_+. \end{aligned}$$

Therefore, from hereon in we omit the subscript \(\textrm{B}\) from Bessel potential spaces, as the use of the same symbol for both Bessel potential spaces and Sobolev spaces in this case is unproblematic, up to equivalent norms.

We highlight that, following [3], Sobolev spaces and Bessel potential spaces may be defined on closed sets, such as \(\mathbb H:= [0, \infty )\), which is not the approach usually taken elsewhere in the literature. Typically, spaces of differentiable functions are defined on open sets. In fact, the results of [3] show that, for example, the spaces \(H^{\theta }(\mathbb R_+,U)\) and \(H^{\theta }(\mathbb H,U)\) coincide. We refer the reader specifically to [3, VIII, Section 1.9, Notes] for more information.

We now consider zero-trace Bessel potential spaces. To summarise [3, VIII, pp. 299–300], for \(\theta > k + 1/2\) and \(k \in \mathbb Z_+\), it follows that the trace operator of order k on \(\partial \mathbb H\), denoted \(\textrm{tr}_k\) and given by

$$\begin{aligned} {{\,\textrm{tr}\,}}_k u = u^{(k)}(0) \quad \forall \, u \in H^{\theta }(\mathbb H,U), \end{aligned}$$

is well-defined. Moreover, the traces are continuous maps from \(H^{\theta }(\mathbb H,U)\) to U, so that

$$\begin{aligned} \max _{j \in \{0, \dots , k\}} | u^{(j)}(0) | \lesssim \Vert u \Vert _{H^{\theta }(\mathbb H)} \quad \forall \, u \in H^{\theta }(\mathbb H,U). \end{aligned}$$
(2.4)

Define \(H^{\theta }_0(\mathbb H,U)\) as the closure in \(H^\theta (\mathbb H,U)\) of the set of compactly supported smooth functions \((0,\infty ) \rightarrow U\). (This agrees with \(H^m_0(\mathbb R_+,U)\) already introduced when \(\theta = m \in \mathbb Z_+\).) With this definition, the result [3, VIII, Theorem 1.6.8] gives that:

$$\begin{aligned} H^{\theta }_0(\mathbb H,U) = H^{\theta }(\mathbb H,U) \quad \forall \, \theta \in (0,1/2), \end{aligned}$$
(2.5)

and, if \(k \in \mathbb Z_+\) and \(k + 1/2< \theta < k+3/2\), then

$$\begin{aligned} \big \{ u \in H^{\theta }(\mathbb H,U) \,: \, u^{(j)}(0) = 0, \; \forall \, j \in \{0, \dots , k\} \big \} = H^{\theta }_0(\mathbb H,U). \end{aligned}$$
(2.6)

It can be shown that, for \(\theta \in (0,1)\),

$$\begin{aligned} H_{0}^{m+\theta }(\mathbb R_+, U) {{\,\mathrm{\,\dot{=}\,}\,}} \big [H^{m}_0(\mathbb R_+, U),H^{m+1}_0(\mathbb R_+, U)\big ]_\theta \quad \text {whenever } \theta \ne \frac{1}{2}. \end{aligned}$$
(2.7)

For scalar-valued functions, the equality is contained in [23, Theorem 11.6, p. 64]. The (Hilbert space) vector-valued case can be established by adapting arguments from [3, proof of Theorem 1.6.4, p. 320], particularly [3, equation (1.6.10), p. 320].

The upshot of (2.2) and (2.7) is that

$$\begin{aligned} H_{00}^\theta (\mathbb R_+, U) {{\,\mathrm{\,\dot{=}\,}\,}}H^\theta _0(\mathbb R_+, U) \quad \text {whenever } \theta \not \in \frac{1}{2}+\mathbb Z_+. \end{aligned}$$
(2.8)

An explicit characterisation of \(H_{00}^{\frac{1}{2}}(\mathbb R_+)\) is given in [23, Theorem 11.7], again in the scalar-valued case.

We comment that, in light of the interpolation description (2.7), the symbol \(H^\theta _{00}\) is usually reserved in the literature for the borderline values \(\theta \in 1/2 + \mathbb Z_+\). However, it is notationally convenient for us to use \(H_{00}^\theta \) everywhere as it is, by definition, an interpolation space, a property which shall be important later in the context of domains of fractional powers of certain operators. In the sequel, we adopt the perspective that, whenever \(\theta \) is not a borderline value, then \(H_{00}^\theta \) also admits a characterisation as a zero-trace space \(H^\theta _0\), that is, (2.8) holds. Finally, we note that a discussion of \(H_{00}^{1/2}\) also appears in [38, Section 33].

2.3 Hardy Spaces and Laplace Transforms

For \(\alpha \in \mathbb R\), we let \({{\mathcal {H}}}^\infty _\alpha ({{\mathcal {B}}}(U))\) denote the Hardy space of all holomorphic functions \(\mathbb C_\alpha \rightarrow {{\mathcal {B}}}(U)\) which are bounded in the norm

$$\begin{aligned} \Vert \textbf{H}\Vert _{{{\mathcal {H}}}^\infty _\alpha }:=\sup _{s\in \mathbb C_\alpha }\Vert \textbf{H}(s)\Vert . \end{aligned}$$

The space \({{\mathcal {H}}}^\infty _\alpha ({{\mathcal {B}}}(U))\), endowed with the above norm, is a Banach space. For notational simplicity we set \({{\mathcal {H}}}^\infty ({{\mathcal {B}}}(U)) = {{\mathcal {H}}}^\infty _0({{\mathcal {B}}}(U))\). For a complex Banach space E we recall the Hardy space \({{\mathcal {H}}}^2(E) = {{\mathcal {H}}}^2(\mathbb C_0,E)\) as the complex vector space of holomorphic functions \(\mathbb C_0 \rightarrow E\) bounded in the norm

$$\begin{aligned} \Vert \textbf{H}\Vert _{{{\mathcal {H}}}^2(E)}:=\Big (\sup _{x>0} \int _{-\infty }^\infty \vert \textbf{H}(x+iy) \vert _E^2 \, \textrm{d}y\Big )^{\frac{1}{2}}. \end{aligned}$$

When \(E = {{\mathcal {B}}}(U)\), equipped with the uniform operator topology, then we obtain the (uniform) Hardy space \({{\mathcal {H}}}^2({{\mathcal {B}}}(U))\). We shall more frequently require the space \({{\mathcal {B}}}(U, {{\mathcal {H}}}^2(\mathbb C_0,U))\) which, by [27, Lemma 4.1], may be (isometrically) identified with the so-called strong Hardy space, denoted \({{\mathcal {H}}}^2_\textrm{str}({{\mathcal {B}}}(U))\), of all holomorphic \(\textbf{G}: \mathbb C_0 \rightarrow {{\mathcal {B}}}(U)\) such that

$$\begin{aligned} \Vert \textbf{G}\Vert _{{{\mathcal {H}}}^2_\textrm{str}({{\mathcal {B}}}(U))}:= \sup _{\Vert u \Vert \le 1} \Vert s \mapsto \textbf{G}(s) u\Vert _{{{\mathcal {H}}}^2(U)} < \infty . \end{aligned}$$

Evidently, the following estimate holds

$$\begin{aligned} \Vert s \mapsto \textbf{G}(s) v \Vert _{{{\mathcal {H}}}^2(U)} \lesssim \vert v \vert _U \quad \forall \, \textbf{G}\in {{\mathcal {H}}}^2_\textrm{str}({{\mathcal {B}}}(U)), \; \forall \, v \in U. \end{aligned}$$

From [26, Lemma F.3.2] it follows that if \(\dim (U)<\infty \), then

$$\begin{aligned} {{\mathcal {H}}}^2({{\mathcal {B}}}(U)) {{\,\mathrm{\,\dot{=}\,}\,}}{{\mathcal {H}}}^2_\textrm{str}({{\mathcal {B}}}(U)). \end{aligned}$$

We note that there is no distinction between \({{\mathcal {H}}}^\infty _\textrm{str}({{\mathcal {B}}}(U))\) and \({{\mathcal {H}}}^\infty ({{\mathcal {B}}}(U))\).

Throughout the paper we abuse notation by using the same symbol to associate \(D \in {{\mathcal {B}}}(U)\) with the bounded linear operators \(L^2(\mathbb R_+, U) \rightarrow L^2(\mathbb R_+, U)\) or \({{\mathcal {H}}}^2(U) \rightarrow {{\mathcal {H}}}^2(U)\) given by \(u \mapsto Du\).

We let \({{\mathcal {L}}}\) denote the usual (one-sided) Laplace transform. By the Paley–Wiener Theorem \({{\mathcal {L}}}\) is, up to a multiplicative constant, an isometric isomorphism \(L^2(\mathbb R_+,U) \rightarrow {{\mathcal {H}}}^2(\mathbb C_0, U)\) (see, for instance [33, Theorem E, p. 91] or [4, Theorem 1.8.3, p. 47]). One consequence of the vector-valued Paley–Wiener Theorem is the following operator-valued version (which is routinely established or appears in [26, Lemma F.3.4 (d), p. 1019]).

Lemma 2.1

The Laplace transform \({{\mathcal {L}}}\) is (up to a multiplicative constant) an isometric isomorphism \({{\mathcal {B}}}(U, L^2(\mathbb R_+,U)) \rightarrow {{\mathcal {B}}}(U, {{\mathcal {H}}}^2(\mathbb C_0,U))\).
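In the scalar case, the isometric (up to the constant \(2\pi \)) nature of \({{\mathcal {L}}}\) can be seen concretely: for \(f(t) = \mathrm e^{-t}\) one has \(({{\mathcal {L}}}f)(s) = 1/(1+s)\), and \(\Vert f \Vert ^2_{L^2(\mathbb R_+)} = 1/2\) should equal \(\frac{1}{2\pi }\int _{\mathbb R} \vert ({{\mathcal {L}}}f)(\mathrm iy)\vert ^2 \, \textrm{d}y\). A quick numerical confirmation (ours; the truncation of the boundary integral is an arbitrary choice):

```python
import numpy as np

# Paley-Wiener sanity check (U = C): f(t) = e^{-t}, Lf(s) = 1/(1+s).
y = np.linspace(-2000, 2000, 2000001)
dy = y[1] - y[0]
boundary_sq = 1.0 / (1.0 + y ** 2)  # |Lf(iy)|^2 = 1/(1 + y^2)

rhs = np.sum(boundary_sq) * dy / (2 * np.pi)  # -> (1/2pi) * pi = 1/2
lhs = 0.5                                     # int_0^infty e^{-2t} dt

assert abs(rhs - lhs) < 1e-3
```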

2.4 Right-Shift Semigroups and Their Fractional Powers

Here we gather preliminary material on right-shift semigroups and the fractional powers of their generators which shall play an important auxiliary role in proving our main results. The overall idea is that fractional powers of the generator of the right-shift semigroup (which is a differentiation operator and commutes with the focal objects of the present paper) induce a scale of fractional power spaces, which are naturally isometrically isomorphic to \(L^2(\mathbb R_+,U)\), and admit a representation in terms of interpolation spaces. This latter property facilitates a connection to the \(H^\theta _{00}(\mathbb R_+,U)\) spaces from Sect. 2.2. The upshot is that we are able to prove generalisations of Theorem 1.1 by mapping back to the case of bounded, linear, right-shift invariant operators on \(L^2(\mathbb R_+,U)\), where Theorem 1.1 applies.

Let \(\sigma \) denote the right-shift semigroup on \(L^2(\mathbb R_+,U)\), so that \(\sigma ^\tau \) denotes right-shift by \(\tau \) as in (1.1), which is a contraction semigroup. From, for example [40, Example 2.4.5], the generator A of \(\sigma \) equals minus the derivative operator, with domain \(H^1_0(\mathbb R_+,U)\). Note that the graph norm of A is simply the \(H^1(\mathbb R_+,U)\) norm. Further, \(\sigma \) restricts to a strongly continuous semigroup on \(H^m_0(\mathbb R_+,U)\) for \(m \in \mathbb Z_+\) with generator the restriction of A to \(H^{m+1}_0(\mathbb R_+,U)\).

Set \({{\mathcal {V}}}:= L^2(\mathbb R_+,U)\) and define the operators \(R_0:= I\), the identity on \({{\mathcal {V}}}_0: = {{\mathcal {V}}}\), and

$$\begin{aligned} R_\theta z: =\frac{1}{\Gamma (\theta )}\int _0^\infty \tau ^\theta \textrm{e}^{(A-I)\tau }z \,\frac{\textrm{d}\tau }{\tau } \quad \forall \, z \in {{\mathcal {V}}}, \; \forall \, \theta >0 . \end{aligned}$$
(2.9)

Since the growth bound of \(\sigma \) equals zero, we have that 1 belongs to the resolvent set of A and, therefore, an application of [35, Lemma 3.9.5] gives that \(R_\theta \) is a bounded, injective operator on \({{\mathcal {V}}}\). Moreover, its image equipped with the norm \(v \mapsto \Vert R_\theta ^{-1} v\Vert _{{\mathcal {V}}}\) is a Hilbert space, which we denote by \({{\mathcal {V}}}_\theta \), and is called the fractional power space of index \(\theta \) for A. Consequently, the operator \(R_\theta :{{\mathcal {V}}}\rightarrow {{\mathcal {V}}}_\theta \) is an isometric surjection; its inverse \((I-A)^\theta :{{\mathcal {V}}}_\theta \rightarrow {{\mathcal {V}}}\) is called the fractional power of \(I-A\) of index \(\theta \) (and is also an isometric surjection).

We shall also require the fractional power spaces \({{\mathcal {V}}}_\theta \) for negative \(\theta \). These are defined, as usual, as the completion of \({{\mathcal {V}}}\) with respect to the (weaker) norm

$$\begin{aligned} v \mapsto \Vert R_{-\theta } v \Vert _{{\mathcal {V}}}\quad \forall \, \theta <0. \end{aligned}$$

It is well-known that the scale of spaces \({{\mathcal {V}}}_\theta \) is nested with continuous embeddings, in the sense that \({{\mathcal {V}}}_{\theta _1} \hookrightarrow {{\mathcal {V}}}_{\theta _2}\) for all \(\theta _2 < \theta _1\). Moreover,

$$\begin{aligned} \left. \begin{aligned} R_\theta \vert _{{{\mathcal {V}}}_\alpha }&: {{\mathcal {V}}}_\alpha \rightarrow {{\mathcal {V}}}_{\alpha +\theta } \\ \quad \text {and} \quad R_\theta ^{-1} \vert _{{{\mathcal {V}}}_{\alpha +\theta }}&: {{\mathcal {V}}}_{\alpha +\theta } \rightarrow {{\mathcal {V}}}_\alpha \end{aligned}\right\} \quad \text {are isometries} \quad \forall \,\theta > 0, \; \forall \, \alpha \in \mathbb R, \end{aligned}$$

(see, for example, [35, p. 148]).

We record further properties of \(R_\theta \) and important consequences in the next lemma.

Lemma 2.2

Let \(\theta > 0\), \(\alpha \in \mathbb R\), \({{\mathcal {V}}}:= L^2(\mathbb R_+,U)\) and let \(R_\theta \) be as in (2.9). Define

$$\begin{aligned} q_\theta (t):=\frac{1}{\Gamma (\theta )}t^{\theta -1}\textrm{e}^{-t} \quad \forall \, t >0. \end{aligned}$$

The following statements hold.

(i)

    \(q_\theta \in L^1(\mathbb R_+)\) with \({{\mathcal {L}}}(q_\theta )(s) = 1/(1+s)^\theta \) for all \(s \in \mathbb C_0\) and

    $$\begin{aligned} R_\theta z = q_\theta *z \quad \forall \, z \in {{\mathcal {V}}}. \end{aligned}$$
    (2.10)
(ii)

    \(R_\theta \in {{\mathcal {B}}}( {{\mathcal {V}}}_{\alpha }, {{\mathcal {V}}}_{\alpha +\theta })\) and is right-shift invariant. Moreover, \(R_\theta \) is invertible and \(R_\theta ^{-1}\) is a right-shift invariant operator in \({{\mathcal {B}}}({{\mathcal {V}}}_{\alpha +\theta }, {{\mathcal {V}}}_{\alpha })\).

Now additionally assume that \(\alpha \ge 0\).

(iii)

    \({{\mathcal {V}}}_{\alpha } {{\,\mathrm{\,\dot{=}\,}\,}}H^\alpha _{00}(\mathbb R_+,U)\) and so \(R_\theta \) and \(R_\theta ^{-1}\) in statement (ii) satisfy

    $$\begin{aligned} R_\theta&\in {{\mathcal {B}}}( H^\alpha _{00}(\mathbb R_+,U), H^{\alpha +\theta }_{00}(\mathbb R_+,U)) \\ \text {and} \quad R_\theta ^{-1}&\in {{\mathcal {B}}}( H^{\alpha +\theta }_{00}(\mathbb R_+,U), H^{\alpha }_{00}(\mathbb R_+,U)). \end{aligned}$$

Proof

The first two claims in statement (i) are routinely established. To minimise disruption to the current section, the proof of equality (2.10) is relegated to Appendix 4.2.

The bulk of the argument for statement (ii) has been given in the text preceding the statement of the lemma. Right-shift invariance of \(R_\theta \) follows from the convolution representation in (2.10). The Laplace transform of \(q_\theta \) equals \(s \mapsto (1+s)^{-\theta }\), whose reciprocal \(s \mapsto (1+s)^{\theta }\) is a polynomially bounded holomorphic function on \(\mathbb C_0\). Therefore, by [46, Theorem 6.5-1, p. 121], this reciprocal is the Laplace transform of a distribution with support in \([0,\infty )\) and, moreover, the inverse of \(R_\theta \) equals convolution with this distribution. Convolution with such a distribution is right-shift invariant.

That statement (iii) holds when \(\alpha = m\in \mathbb Z_+\) does not require interpolation spaces and follows as

$$\begin{aligned}{{\mathcal {V}}}_m = D((I-A)^m) = D(A^m) = H^m_0(\mathbb R_+,U).\end{aligned}$$

To prove statement (iii) for non-integer exponents, let \(\theta \in (0,1)\). We seek to apply [17, Theorem 6.6.9] to \(I-A\). To this end, routine calculations show that \((-\infty ,0)\) is contained in the resolvent set of \(I-A\) and

$$\begin{aligned} \sup _{t >0} \Vert t(tI + (I-A))^{-1} \Vert < \infty ,\end{aligned}$$

hence \(I-A\) is sectorial by [17, Proposition 2.2.1]. Furthermore, \(I + (I-A) = 2I - A\) is injective and A is skew-symmetric on its domain, so that \(\textrm{Re}\langle A v, v\rangle = 0\) for all \(v \in D(A)\). Thus,

$$\begin{aligned} \textrm{Re}\langle (I+ (I-A))v, v \rangle = 2\Vert v \Vert ^2 \ge 0 \quad \forall \, v \in D(A), \end{aligned}$$

and so \(I + (I-A)\) is m-accretive in the sense of [17, Appendix C.7], being also closed with dense range. We conclude that \(I + (I-A)\) has bounded imaginary powers by [17, Corollary 7.1.8]. The hypotheses of [17, Theorem 6.6.9] are satisfied, and this result yields that

$$\begin{aligned} D((I-A)^\theta ) = \big [ {{\mathcal {V}}}, D(I-A) \big ]_\theta . \end{aligned}$$

However, \(D(I-A) = D(A)\), and so

$$\begin{aligned} D((I-A)^\theta ) = \big [ L^2(\mathbb R_+,U), H_0^1(\mathbb R_+,U) \big ]_\theta =: H^\theta _{00}(\mathbb R_+,U), \end{aligned}$$

where the final equality follows from (2.3). Since \({{\mathcal {V}}}_\theta = D((I - A)^\theta )\) and \(\theta \) was arbitrary, the claim is proven for \(\theta \in (0,1)\). Applying the construction to the restriction of A to an operator \(H^{m+1}_0(\mathbb R_+, U)\rightarrow H^m_0(\mathbb R_+, U)\), the claim is proven for any \(\theta >0\). The claimed boundedness of \(R_\theta \) and \(R_\theta ^{-1}\) follows from the equalities \({{\mathcal {V}}}_\gamma {{\,\mathrm{\,\dot{=}\,}\,}}H^\gamma _{00}(\mathbb R_+,U)\) when \(\gamma \ge 0\). \(\square \)
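The scalar kernel \(q_\theta \) in Lemma 2.2 is the density of a Gamma distribution, and both its Laplace transform and the implied convolution semigroup property \(q_\theta * q_\eta = q_{\theta +\eta }\) (reflecting \(R_\theta R_\eta = R_{\theta +\eta }\)) can be checked numerically. A sketch of ours (midpoint quadrature; step size and truncation horizon are arbitrary choices):

```python
import numpy as np
from math import gamma

# Numerical check of Lemma 2.2(i) in the scalar case: for
# q_theta(t) = t^{theta-1} e^{-t} / Gamma(theta) one has
# L(q_theta)(s) = (1+s)^{-theta}, whence q_theta * q_eta = q_{theta+eta}.
dt = 1e-4
t = (np.arange(int(60 / dt)) + 0.5) * dt  # midpoint grid, avoids t = 0

def q(theta):
    return t ** (theta - 1) * np.exp(-t) / gamma(theta)

def laplace(f, s):
    return np.sum(f * np.exp(-s * t)) * dt

for theta, s in [(0.5, 1.0), (1.5, 0.25), (2.0, 2.0)]:
    assert abs(laplace(q(theta), s) - (1.0 + s) ** (-theta)) < 1e-2

# Convolution semigroup property, checked pointwise away from t = 0
# (linear convolution via FFT with zero padding).
n = 2 * len(t)
conv = np.fft.irfft(np.fft.rfft(q(0.7), n) * np.fft.rfft(q(0.8), n), n)[: len(t)] * dt
mask = t > 1.0
assert np.max(np.abs(conv[mask] - q(1.5)[mask])) < 1e-2
```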

3 Representations and Regularity of Right-Shift Invariant Operators on Half-Line Bessel Potential Spaces

Lemma 2.2 facilitates the following theorem—a generalisation of Theorem 1.1. Recall that \({{\mathcal {V}}}_\theta \) denotes the fractional power spaces from Sect. 2.4, with \({{\mathcal {V}}}_0 = {{\mathcal {V}}}= L^2(\mathbb R_+,U)\).

Theorem 3.1

Let \(\alpha , \beta \in \mathbb R\) be given. The following statements hold.

  (1)

    If \(G:{{\mathcal {V}}}_{\alpha } \rightarrow {{\mathcal {V}}}_{\beta }\) is a bounded, linear, right-shift invariant operator, then there exists a unique holomorphic function \(\textbf{G}:\mathbb C_0\rightarrow {{\mathcal {B}}}(U)\) such that \(G ={{\mathcal {L}}}^{-1}{{\mathcal {M}}}_\textbf{G}{{\mathcal {L}}}\) and

    $$\begin{aligned} s \mapsto (1+s)^{\beta - \alpha }\textbf{G}(s) \in {{\mathcal {H}}}^\infty ({{\mathcal {B}}}(U)). \end{aligned}$$
    (3.1)
  (2)

    If a holomorphic function \(\textbf{G}:\mathbb C_0\rightarrow {{\mathcal {B}}}(U)\) satisfies (3.1), then \(G:={{\mathcal {L}}}^{-1}{{\mathcal {M}}}_\textbf{G}{{\mathcal {L}}}~\) defines a bounded, linear, right-shift invariant operator \({{\mathcal {V}}}_{\alpha } \rightarrow {{\mathcal {V}}}_{\beta }\).

In either case, we have that

$$\begin{aligned} \big \Vert G \big \Vert _{{{\mathcal {B}}}({{\mathcal {V}}}_{\alpha }, {{\mathcal {V}}}_{\beta })} = \big \Vert s \mapsto (1+s)^{\beta - \alpha } \textbf{G}(s)\big \Vert _{{{\mathcal {H}}}^\infty }. \end{aligned}$$
(3.2)

Proof

To prove statement (1), an application of Lemma 2.2 yields that \(R_{\beta }^{-1}G R_{\alpha }\) is a bounded, linear, right-shift invariant operator on \(L^2(\mathbb R_+, U)\). Therefore, by Theorem 1.1, there exists a function \(\textbf{H}\in {{\mathcal {H}}}^\infty ({{\mathcal {B}}}(U))\) such that

$$\begin{aligned} R_{\beta }^{-1}G R_{\alpha } z={{\mathcal {L}}}^{-1}{{\mathcal {M}}}_\textbf{H}{{\mathcal {L}}}z, \end{aligned}$$

for all \(z\in L^2(\mathbb R_+, U)\). As convolution operators, we have

$$\begin{aligned} R_{\alpha }={{\mathcal {L}}}^{-1}{{\mathcal {M}}}_{(1+s)^{-\alpha }}{{\mathcal {L}}}\quad \text {and} \quad R_{\beta }={{\mathcal {L}}}^{-1}{{\mathcal {M}}}_{(1+s)^{-\beta }}{{\mathcal {L}}}. \end{aligned}$$
(3.3)

Consequently,

$$\begin{aligned} {{\mathcal {L}}}^{-1}{{\mathcal {M}}}_{(1+s)^{\beta }}{{\mathcal {L}}}G{{\mathcal {L}}}^{-1}{{\mathcal {M}}}_{(1+s)^{-\alpha }}{{\mathcal {L}}}={{\mathcal {L}}}^{-1}{{\mathcal {M}}}_\textbf{H}{{\mathcal {L}}}, \end{aligned}$$

which, under simplification and rearrangement, gives

$$\begin{aligned} {{\mathcal {L}}}G{{\mathcal {L}}}^{-1}={{\mathcal {M}}}_{(1+s)^{-\beta }}{{\mathcal {M}}}_\textbf{H}{{\mathcal {M}}}_{(1+s)^{\alpha }}. \end{aligned}$$

As a composition of multiplication operators, we infer that  \(G={{\mathcal {L}}}^{-1}{{\mathcal {M}}}_\textbf{G}{{\mathcal {L}}}\), where

$$\begin{aligned} \textbf{G}(s):=(1+s)^{-\beta }\textbf{H}(s)(1+s)^{\alpha }, \end{aligned}$$
(3.4)

which is evidently holomorphic \(\mathbb C_0 \rightarrow {{\mathcal {B}}}(U)\). Moreover, from the equality (3.4) we conclude the desired boundedness property, namely,

$$\begin{aligned} s \mapsto (1+s)^{\beta -\alpha }\textbf{G}(s)=\textbf{H}(s) \in {{\mathcal {H}}}^\infty ({{\mathcal {B}}}(U)). \end{aligned}$$

The proof of statement (2) follows along the same lines by reversing the above steps, and using that multiplication by a \({{\mathcal {H}}}^\infty ({{\mathcal {B}}}(U))\) function induces a bounded, linear, right-shift invariant operator on \(L^2(\mathbb R_+, U)\), again by Theorem 1.1.

To establish the equality of norms (3.2), we invoke the corresponding equality of norms in Theorem 1.1, which here gives that

$$\begin{aligned} \big \Vert R_{\beta }^{-1}G R_{\alpha } \big \Vert _{{{\mathcal {B}}}(L^2(\mathbb R_+, U))} = \big \Vert \textbf{H}\big \Vert _{{{\mathcal {H}}}^\infty }. \end{aligned}$$

Simplifying both sides of the above and using that \(R_{\beta }^{-1}\) and \(R_\alpha \) are isometric isomorphisms completes the proof. \(\square \)
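By way of illustration of Theorem 3.1 (our example, not drawn from the cited references), consider the scalar case \(U = \mathbb C\) and the symbol \(\textbf{G}(s) = (1+s)^{-1}\), the transfer function of convolution with \(g_1 = t \mapsto \textrm{e}^{-t}\):

```latex
% Hypothetical example: U = \mathbb{C}, \mathbf{G}(s) = (1+s)^{-1}.
(1+s)^{\beta-\alpha}\,\mathbf{G}(s) = \frac{(1+s)^{\beta-\alpha}}{1+s} = 1
   \in \mathcal{H}^\infty \quad \text{whenever } \beta - \alpha = 1,
% so Theorem 3.1 gives boundedness V_alpha -> V_{alpha+1}, and by (3.2):
\Vert G \Vert_{\mathcal{B}(\mathcal{V}_\alpha,\,\mathcal{V}_{\alpha+1})}
   = \sup_{s \in \mathbb{C}_0} \bigl|(1+s)\,\mathbf{G}(s)\bigr| = 1.
```

Thus this G gains exactly one order of regularity, matching the decay of its symbol at infinity.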

The following corollary is a special case of Theorem 3.1 wherein \(\alpha =\beta \ge 0\), also using the identification \({{\mathcal {V}}}_\beta {{\,\mathrm{\,\dot{=}\,}\,}}H^\beta _{00}(\mathbb R_+,U)\) from Lemma 2.2.

Corollary 3.2

If \(G: L^2(\mathbb R_+,U) \rightarrow L^2(\mathbb R_+,U)\) is bounded, linear and right-shift invariant, then G maps \(H^\beta _{00}(\mathbb R_+,U)\) continuously into itself for all \(\beta \ge 0\).

Corollary 3.2 may be interpreted in terms of compressions of bounded, linear and right-shift invariant operators on \(L^2(\mathbb R_+,U)\). Analogously, it follows that a bounded, linear and right-shift invariant operator on \(H^\beta _{00}(\mathbb R_+,U)\) uniquely dilates to a bounded operator on \(H^\gamma _{00}(\mathbb R_+,U)\) for all \(\gamma \in [0, \beta )\). By (3.2), the operator norms of these dilations are equal.

We proceed to investigate boundedness properties of linear, right-shift invariant operators \(H^{\alpha }(\mathbb R_+, U)\rightarrow H^{\beta }(\mathbb R_+, U)\). In light of

$$\begin{aligned} H^{\gamma }_{00}(\mathbb R_+,U) \subsetneq H^{\gamma }(\mathbb R_+,U) \quad \text {for}\, \gamma \ge 1/2, \end{aligned}$$

we should not expect any such boundedness properties to follow from Theorem 3.1 alone.

We introduce a construction that we shall repeatedly exploit. Since \(H^\gamma _{00}(\mathbb R_+,U)\) is a closed subspace of the Hilbert space \(H^\gamma (\mathbb R_+,U)\) for all \(\gamma \ge 0\), the direct sum decomposition

$$\begin{aligned} H^\gamma (\mathbb R_+, U)=H^\gamma _{00}(\mathbb R_+, U) \, \dot{+} \, {{\mathcal {W}}}_{\gamma }, \end{aligned}$$
(3.5)

is valid for some subspace \({{\mathcal {W}}}_\gamma \) of \(H^\gamma (\mathbb R_+, U)\). When \(\gamma \) is not a “borderline” value, it is straightforward to construct, in terms of a family of so-called Bohl functions, a subspace \({{\mathcal {W}}}_\gamma \) such that (3.5) holds, and this construction comprises the content of Lemma 3.3 below. For which purpose, for \(k \in \mathbb N\) define \(g_k: \mathbb R_+ \rightarrow \mathbb C\) by

$$\begin{aligned} g_k(t):= \frac{t^{k-1} \textrm{e}^{-t}}{(k-1)!} \quad \forall \, t \in \mathbb R_+. \end{aligned}$$
(3.6)

It is clear that \(g_k \in H^\gamma (\mathbb R_+)\) for all \(k \in \mathbb N\) and all \(\gamma \ge 0\) and, further, that

$$\begin{aligned} g_k^{(r)}(0) = \left\{ \begin{aligned}&1&r = k-1 \\ {}&0&r < k-1\end{aligned}\right. \quad \forall \, k \in \mathbb N. \end{aligned}$$
(3.7)
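The trace values (3.7) can be checked mechanically. In the following sketch (ours, for illustration only), \(g_k\) is represented by the polynomial factor \(p_k(t) = t^{k-1}/(k-1)!\); differentiation of \(p(t)\textrm{e}^{-t}\) acts on coefficient lists via \(p \mapsto p' - p\), and the value at \(t = 0\) is the constant coefficient.

```python
from fractions import Fraction
from math import factorial

def bohl_poly(k):
    # Coefficients of p_k(t) = t^(k-1)/(k-1)!, so that g_k(t) = p_k(t) * exp(-t)
    p = [Fraction(0)] * k
    p[k - 1] = Fraction(1, factorial(k - 1))
    return p

def deriv(p):
    # d/dt [p(t) e^{-t}] = (p'(t) - p(t)) e^{-t}: return the new polynomial factor
    n = len(p)
    p_prime = [(j + 1) * p[j + 1] if j + 1 < n else Fraction(0) for j in range(n)]
    return [p_prime[j] - p[j] for j in range(n)]

def g_deriv_at_zero(k, r):
    # g_k^{(r)}(0): differentiate r times, then evaluate at t = 0
    p = bohl_poly(k)
    for _ in range(r):
        p = deriv(p)
    return p[0]

# (3.7): g_k^{(r)}(0) = 1 if r = k - 1, and 0 if r < k - 1
assert all(g_deriv_at_zero(k, k - 1) == 1 for k in range(1, 8))
assert all(g_deriv_at_zero(k, r) == 0 for k in range(2, 8) for r in range(k - 1))
```

The same representation also confirms the value \(g_1'(0) = -1\) used implicitly in the proof of Lemma 3.3 below.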

Lemma 3.3

Let \(\gamma \ge 0\), \(\gamma \not \in 1/2 + \mathbb Z_+\) be given. Define \({{\mathcal {W}}}_\gamma \) by

$$\begin{aligned} \left. \begin{aligned} {{\mathcal {W}}}_\gamma&:= \{0\} \quad 0< \gamma < \frac{1}{2}, \\ {{\mathcal {W}}}_\gamma&:= \big \langle g_k v \,: \, v \in U, \; k \in \{1,\dots , {{\,\textrm{arg min}\,}}_{\ell \in \mathbb N} |\gamma - \ell |\} \big \rangle \quad \gamma > \frac{1}{2}, \end{aligned}\right\} \end{aligned}$$
(3.8)

(linear span of vectors in second equality). Then \({{\mathcal {W}}}_\gamma \) satisfies (3.5) and, for all \(u \in H^\gamma (\mathbb R_+,U)\), there exists a unique \(\xi _u \in {{\mathcal {W}}}_\gamma \) such that

$$\begin{aligned} u = \big ( u -\xi _u \big ) + \xi _u \in H^\gamma _{00}(\mathbb R_+, U) + {{\mathcal {W}}}_{\gamma }, \end{aligned}$$

and the mapping

$$\begin{aligned} H^\gamma (\mathbb R_+,U) \rightarrow ({{\mathcal {W}}}_\gamma , \Vert \cdot \Vert _{H^\kappa (\mathbb R_+,U)}), \quad u \mapsto \xi _u, \end{aligned}$$

is continuous for all \(\kappa \ge 0\).

In words, a suitable \({{\mathcal {W}}}_\gamma \) is a linear space of scalar-valued Bohl functions, with dimension tied to \(\gamma \), tensored with the Hilbert space U.

Proof of Lemma 3.3

Let \(\gamma \ge 0\) be such that \(\gamma \not \in 1/2 + \mathbb Z_+\). If \(\gamma \in (0,1/2)\), then the claim follows immediately from (2.5).

The proof for \(\gamma >1/2\) relies on (2.8), namely that \(H^\gamma _{00}(\mathbb R_+, U) {{\,\mathrm{\,\dot{=}\,}\,}}H^\gamma _{0}(\mathbb R_+, U)\) for these \(\gamma \), and on the description (2.6) of \(H^\gamma _{0}(\mathbb R_+, U)\). Let \(m:= {{\,\textrm{arg min}\,}}_{\ell \in \mathbb N} |\gamma - \ell |\ge 1\) and define

$$\begin{aligned} \xi _u:= \sum _{k=1}^m d_k g_k \quad \forall \, u \in H^\gamma (\mathbb R_+,U),\end{aligned}$$

where the \(d_k \in U\) are to be determined. In fact, the \(d_k\) should be chosen so that

$$\begin{aligned} 0 = u^{(r)}(0) - \sum _{k =1}^m d_k g_k^{(r)}(0) \quad \forall \, r \in \{0,1,\dots , m-1\}, \end{aligned}$$
(3.9)

which is m equations in the m unknowns \(d_k\).

Taking \(r=0\) gives \(d_1 = u(0)\), and taking \(r=1\) gives \(d_2 = u'(0) + u(0)\). More generally, we iterate over increasing \(r \in \{0,1,\dots , m-1\}\) and use the expression for \(g_k^{(r)}(0)\) in (3.7) to note that

$$\begin{aligned} 0= u^{(r)}(0) - \sum _{k =1}^{r+1} d_k g_k^{(r)}(0) = u^{(r)}(0) - \sum _{k =1}^{r} d_k g_k^{(r)}(0) - d_{r+1} , \end{aligned}$$

which determines \(d_{r+1}\) in terms of \(u^{(r)}(0)\) and (the known) \(d_k\) for \(k \le r\). Therefore, we have shown that

$$\begin{aligned} u = (u-\xi _u) + \xi _u \in H^\gamma _{00}(\mathbb R_+,U) + {{\mathcal {W}}}_\gamma \quad \forall \, u \in H^\gamma (\mathbb R_+,U),\end{aligned}$$

so that

$$\begin{aligned} H^\gamma (\mathbb R_+, U) = H^\gamma _{00}(\mathbb R_+, U) \, + \, {{\mathcal {W}}}_{\gamma }. \end{aligned}$$

If \(v \in H^\gamma _{00}(\mathbb R_+, U) \cap {{\mathcal {W}}}_{\gamma }\), then \(v = \xi _v\) and, in light of the unique solvability of (3.9), it follows that \(d_1 = \dots = d_m = 0\). Hence, \(v =0\) and the intersection is trivial. We conclude that \({{\mathcal {W}}}_\gamma \) as in (3.8) satisfies the direct sum decomposition (3.5).

The map \(u \mapsto \xi _u\) is evidently linear, and so to prove the claimed continuity, we invoke the trace bound (2.4) to majorise

$$\begin{aligned} \Big \Vert \sum _{k =1}^m d_k g_k \Big \Vert _{H^\kappa (\mathbb R_+,U)}&\le \sum _{k =1}^m | d_k |_U \Vert g_k \Vert _{H^\kappa (\mathbb R_+)} \nonumber \\&\lesssim \Big ( \sum _{k =1}^m \Vert g_k \Vert _{H^\kappa (\mathbb R_+)}\Big )\max _{0\le k \le m-1}\big \{| u^{(k)}(0) |_U\big \} \nonumber \\&\lesssim \Vert u \Vert _{H^\gamma (\mathbb R_+,U)}\,, \end{aligned}$$

for any \(\kappa \ge 0\), as required. \(\square \)
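The forward substitution in the proof above admits a short computational sketch (ours; the function names are illustrative). Using the closed form \(g_k^{(r)}(0) = (-1)^{r-k+1}\binom{r}{k-1}\) for \(r \ge k-1\), which follows from Leibniz' rule, the system (3.9) is solved for \(d_1,\dots ,d_m\) given the traces \(u^{(r)}(0)\):

```python
from math import comb

def g_deriv_at_zero(k, r):
    # g_k^{(r)}(0) for g_k(t) = t^(k-1) e^{-t} / (k-1)!, by Leibniz' rule
    if r < k - 1:
        return 0
    return (-1) ** (r - (k - 1)) * comb(r, k - 1)

def bohl_coefficients(traces):
    # Forward substitution for (3.9), given traces = [u(0), u'(0), ..., u^{(m-1)}(0)];
    # the system is triangular since g_{r+1}^{(r)}(0) = 1 and g_k^{(r)}(0) = 0 for k > r + 1
    d = []
    for r, trace in enumerate(traces):
        known = sum(d[k - 1] * g_deriv_at_zero(k, r) for k in range(1, r + 1))
        d.append(trace - known)
    return d

# Matches the proof: d_1 = u(0) and d_2 = u'(0) + u(0)
assert bohl_coefficients([1, 2]) == [1, 3]
```

For scalar data the coefficients are numbers; in the vector-valued setting of Lemma 3.3 the same recursion runs with \(d_k \in U\).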

Remark 3.4

Observe that \({{\mathcal {W}}}_\gamma \) in (3.8) is finite dimensional when U is. However, even in the scalar-valued \(U =\mathbb C\) setting, the second expression for \({{\mathcal {W}}}_\gamma \) in (3.8) cannot satisfy (3.5) when \(\gamma \in 1/2 +\mathbb Z_+\). Indeed, Lemma 3.3 shows that the inclusion operator \(H^\gamma _{00}(\mathbb R_+)\rightarrow H^\gamma (\mathbb R_+)\) (which is injective) is Fredholm for \(\gamma \ge 0\) such that \(\gamma \not \in 1/2 + \mathbb Z_+\). The theory of Fredholm operators on interpolation spaces, particularly [14, Corollary 5.2], gives that the dimension of the cokernel is continuous in \(\gamma \), and is also integer valued. However, Lemma 3.3 further shows that the dimension of the cokernel jumps by one across values in \(1/2 +\mathbb Z_+\), and hence the inclusion operator is not Fredholm at these points. In particular, the quotient space \(H^\gamma (\mathbb R_+)/H^\gamma _{00}(\mathbb R_+)\) must be infinite dimensional at these values of \(\gamma \). We do not have an explicit characterisation of a direct sum decomposition (3.5) at these borderline values. Consequently, the approach we adopt below is not currently applicable for those borderline values.

In overview, Theorem 3.1 provides a characterisation of bounded, linear, right-shift invariant operators between \(H^{\gamma }_{00}(\mathbb R_+,U)\) spaces. In light of the direct-sum decomposition (3.5), a necessary and sufficient condition for such operators to be bounded between \(H^{\gamma }(\mathbb R_+,U)\) spaces is that they behave well on the complementary subspaces \({{\mathcal {W}}}_\gamma \). Characterising this last property essentially comprises the content of the next theorem, which is our main result.

Theorem 3.5

Let \(\alpha , \beta \ge 0\) with \(\alpha , \beta \not \in 1/2 + \mathbb Z_+\), and let

$$\begin{aligned} G = {{\mathcal {L}}}^{-1} {{\mathcal {M}}}_\textbf{G}{{\mathcal {L}}}: L^2(\mathbb R_+,U) \rightarrow {{\mathcal {V}}}_{\beta -\alpha }, \end{aligned}$$

denote a bounded, linear, right-shift invariant operator, for some holomorphic \(\textbf{G}: \mathbb C_0 \rightarrow {{\mathcal {B}}}(U)\). The following statements are equivalent.

  (1)

    The restriction of G to \(H^{\alpha }(\mathbb R_+,U)\) maps continuously into \(H^{\beta }(\mathbb R_+,U)\) ;

  (2)

    \(G(g_1 v) \in H^{\beta }(\mathbb R_+,U)\) for all \(v \in U\) ;

  (3)

    Let \(m = {{\,\textrm{arg min}\,}}_{\ell \in \mathbb Z_+} |\beta - \ell |\).

    (a)

      If \(m =0\), then

      $$\begin{aligned} s\mapsto (1+s)^{\beta } \frac{\textbf{G}(s)}{1+s} \in {{\mathcal {H}}}^2_\textrm{str}({{\mathcal {B}}}(U)). \end{aligned}$$
      (3.10a)
    (b)

      If \(m \ge 1\), then there exist \(D_k \in {{\mathcal {B}}}(U)\) for \(k \in \{1,\dots , m\}\) such that

      $$\begin{aligned} s\mapsto (1+s)^{\beta }\Big ( \frac{\textbf{G}(s)}{1+s} - \sum _{k=1}^{m} \frac{D_k}{(1+s)^{k}} \Big ) \in {{\mathcal {H}}}^2_\textrm{str}({{\mathcal {B}}}(U)). \end{aligned}$$
      (3.10b)

If any of the above statements hold with \(\beta > 1/2\), then \(D_1\) in (3.10b) is given by \(D_1 v = G(g_1 v)(0)\) for all \(v \in U\).

Some remarks on Theorem 3.5 are in order. Recall that \(g_1 = t \mapsto e^{-t}\) is the first Bohl function in (3.6). Next, G in the statement of Theorem 3.5 must necessarily be of the form \({{\mathcal {L}}}^{-1} {{\mathcal {M}}}_\textbf{G}{{\mathcal {L}}}\) for \(\textbf{G}\) as stated by Theorem 3.1 and, by the same result, \(\textbf{G}\) satisfies the \({{\mathcal {H}}}^\infty \)-condition (3.1). Additionally, since \(\alpha \) and \(\beta \) are not borderline values, we may by (2.8) view G as a bounded linear operator \(H^{\alpha }_{0}(\mathbb R_+,U) \rightarrow H^{\beta }_{0}(\mathbb R_+,U)\).

Observe that \(\alpha \) does not appear in statements (2) and (3), which place constraints on \(\beta \) only. We reconcile this by recalling that the \({{\mathcal {H}}}^\infty \)-condition (3.1) places a constraint on \(\alpha -\beta \) which, of course, when combined with a condition on \(\beta \) is equivalent to a condition on the pair \(\alpha \) and \(\beta \).

If \(\alpha \in (0,1/2)\), then (3.1) is sufficient for the strong \({{\mathcal {H}}}^2\)-condition (3.10a) as

$$\begin{aligned} s\mapsto (1+s)^{\beta } \frac{\textbf{G}(s)}{1+s} = (1+s)^{\beta - \alpha } \textbf{G}(s) \times \frac{1}{(1+s)^{1-\alpha }}, \end{aligned}$$

is the product of an \({{\mathcal {H}}}^\infty \) function and an \({{\mathcal {H}}}^2\) function, and hence a fortiori belongs to \({{\mathcal {H}}}^2_\textrm{str}\). Alternatively, for \(\alpha \in (0, 1/2)\), we have that

$$\begin{aligned} H^{\alpha }_{00}(\mathbb R_+,U) {{\,\mathrm{\,\dot{=}\,}\,}}H^{\alpha }_0(\mathbb R_+,U) {{\,\mathrm{\,\dot{=}\,}\,}}H^{\alpha }(\mathbb R_+,U), \end{aligned}$$

from (2.5), and hence the result follows immediately from Theorem 3.1.

The proof of Theorem 3.5 is aided by the following technical lemmas.

Lemma 3.6

Let \(\alpha , \beta \ge 0\) and let \(G: L^2(\mathbb R_+,U) \rightarrow {{\mathcal {V}}}_{\beta -\alpha }\) denote a bounded, linear, right-shift invariant operator. It follows that \(G(g_1 v) \in H^{\beta }(\mathbb R_+,U)\) for all \(v \in U\) if, and only if, \(v \mapsto G(g_1 v) \in {{\mathcal {B}}}(U,H^{\beta }(\mathbb R_+,U))\).

Proof

Sufficiency is immediate. For necessity, assume that \(G(g_1 v) \in H^{\beta }(\mathbb R_+,U)\) for all \(v \in U\). We seek to apply the Closed Graph Theorem. For which purpose, let \(v_n \rightarrow 0\) in U and \(G(g_1 v_n) \rightarrow y\) in \(H^{\beta }(\mathbb R_+,U)\) as \(n \rightarrow \infty \). Since

$$\begin{aligned} g_1 v_n \rightarrow 0 \quad \text {in}~L^2(\mathbb R_+,U)\, \text { as}~n \rightarrow \infty , \end{aligned}$$

we conclude from the assumed continuity of G that

$$\begin{aligned} G(g_1 v_n) \rightarrow 0 \quad \text {in } {{\mathcal {V}}}_{\beta -\alpha } \text { as } n \rightarrow \infty . \end{aligned}$$

Therefore, we reach the desired conclusion that \(y = 0\), as the inclusion map \(H^{\beta }(\mathbb R_+,U) \hookrightarrow {{\mathcal {V}}}_{\beta -\alpha }\) is injective. \(\square \)

The next lemma may be summarised in words as: if \(G(g_1 v)\) has certain regularity properties, then these are inherited by \(G(g_k v)\) for all \(k \in \mathbb N\). Recall that \(g_k\) denotes the k-th Bohl function defined in (3.6).

Lemma 3.7

Let \(\alpha , \beta \ge 0\) and let \(G: L^2(\mathbb R_+,U) \rightarrow {{\mathcal {V}}}_{\beta -\alpha }\) denote a bounded, linear, right-shift invariant operator. The following statements are equivalent.

  (i)

     \(v \mapsto G(g_1 v) \in {{\mathcal {B}}}(U,H^{\beta }(\mathbb R_+,U))\) ;

  (ii)

     \(v \mapsto G(g_k v) \in {{\mathcal {B}}}(U,H^{k - 1+ \beta }(\mathbb R_+,U))\) for all \(k \in \mathbb N\).

Proof

That statement (ii) implies statement (i) is clear by taking \(k=1\). Conversely, assume that statement (i) holds. We use an induction argument. The base case is true by hypothesis. For the inductive step, assume that statement (ii) holds for some \(k-1 \in \mathbb N\). An elementary calculation shows that \(g_k\) satisfies the ordinary differential equation

$$\begin{aligned} g_{k}' = - g_k + g_{k-1}.\end{aligned}$$

Right-shift invariance of G gives that

$$\begin{aligned} \frac{1}{\tau } \big ( \sigma ^\tau G - G \big ) = \frac{1}{\tau } \big ( G\sigma ^\tau - G \big ) = G\Big ( \frac{\sigma ^\tau - I}{\tau } \Big ) \quad \forall \, \tau > 0. \end{aligned}$$

Therefore, in light of the continuity of G from Theorem 3.1, taking the limit \(\tau \searrow 0\) above, it follows that

$$\begin{aligned} (G(u))' = G(u') \quad \forall \, u \in {{\mathcal {V}}}_{\alpha +\gamma +1}, \; \forall \, \gamma \ge 0, \end{aligned}$$

as an equality holding in \({{\mathcal {V}}}_{\beta - \alpha -\gamma }\). Thus,

$$\begin{aligned} G(g_kv)' = G(g_k'v) = -G(g_kv) + G(g_{k-1}v) \quad \forall \, v \in U. \end{aligned}$$
(3.11)

By induction hypothesis,

$$\begin{aligned} v \mapsto G(g_{k-1}v) \in {{\mathcal {B}}}(U,H^{k - 2 + \beta }(\mathbb R_+,U)), \end{aligned}$$
(3.12)

and, in light of (3.7) and the description (2.6),

$$\begin{aligned} g_k v\in H^{\gamma }_{00}(\mathbb R_+,U) \quad \forall \, v \in U, \; \forall \, \gamma \in (k-3/2, k-1/2), \end{aligned}$$

(where recall that \(k \ge 2\)). Therefore, the continuity of G from Theorem 3.1 and the above inclusion combine to give

$$\begin{aligned} v \mapsto G(g_k v) \in {{\mathcal {B}}}(U, {{\mathcal {V}}}_{\gamma + \beta -\alpha }) \quad \forall \, \gamma \in (k -3/2, k-1/2). \end{aligned}$$
(3.13)

In light of (3.11), (3.12) and (3.13), we see that \(G(g_k v)\) has one more unit of regularity than that claimed in (3.13), provided \(k-2 +\beta > \gamma + \beta -\alpha \). Bootstrapping this argument, which replaces \(\gamma + \beta -\alpha \) by \(\gamma + \beta -\alpha +1\) and so on, eventually gives that

$$\begin{aligned} v \mapsto G(g_{k}v) \in {{\mathcal {B}}}\big (U,H^{(k - 2+ \beta )+1}(\mathbb R_+,U)\big ) = {{\mathcal {B}}}\big (U,H^{k - 1+ \beta }(\mathbb R_+,U)\big ), \end{aligned}$$

as required. \(\square \)

Proof of Theorem 3.5

We prove that statements (1) and (2) are equivalent, and that statements (2) and (3) are equivalent. We shall use throughout that, by Lemma 3.6, statement (2) is equivalent to \(v \mapsto G(g_1 v) \in {{\mathcal {B}}}(U,H^{\beta }(\mathbb R_+,U))\).

Assume first that statement (1) holds. That statement (2) is true is clear, as \(g_1 v \in H^\gamma (\mathbb R_+,U)\) for all \(\gamma \ge 0\). Conversely, suppose that statement (2) holds. Since \(\alpha \not \in \mathbb Z_+ + 1/2\), the conjunction of Lemmas 3.3 and 3.7 gives that the restriction of G to \({{\mathcal {W}}}_{\alpha }\) is continuous \({{\mathcal {W}}}_{\alpha } \rightarrow H^{\beta }(\mathbb R_+,U)\). The hypotheses of Theorem 3.1 are satisfied by assumption, and this result gives that the restriction of G to \(H^{\alpha }_{00}(\mathbb R_+,U)\) is continuous

$$\begin{aligned} H^{\alpha }_{00}(\mathbb R_+,U) \rightarrow H^{\beta }_{00}(\mathbb R_+,U) \hookrightarrow H^{\beta }(\mathbb R_+,U).\end{aligned}$$

Statement (1) now follows from these ingredients and the direct sum decomposition (3.5).

We next prove the equivalence of statements (2) and (3). Suppose first that statement (2) holds. If \(m= 0\), then

$$\begin{aligned} v \mapsto G(g_1 v) \in {{\mathcal {B}}}(U, H^{\beta }_{00}(\mathbb R_+,U)), \end{aligned}$$

and thus Lemma 2.2 yields that

$$\begin{aligned} v \mapsto R_{\beta }^{-1} G(g_1 v) \in {{\mathcal {B}}}(U, L^2(\mathbb R_+,U)). \end{aligned}$$
(3.14)

If \(m \ge 1\), then appealing to the decomposition in Lemma 3.3, we write

$$\begin{aligned} \xi _{G(g_1 v)} = \sum _{k=1}^m d_k g_k \in {{\mathcal {W}}}_{\beta } \quad \text {so that} \quad G(g_1 v) - \xi _{G(g_1 v)} \in H^{\beta }_{00}(\mathbb R_+,U), \end{aligned}$$

for some \(d_k \in U\). The proof of Lemma 3.3 shows that the \(d_k\) are linear combinations of the traces \(G(g_1 v)^{(j)}(0)\) and hence, by the trace estimate (2.4), \(v \mapsto d_k =: D_k v\) defines \(D_k \in {{\mathcal {B}}}(U)\). In particular, taking \(r = 0\) in (3.9) gives the claimed equality \(d_1 = G(g_1 v)(0) =: D_1 v\) for all \(v \in U\). Putting the above together, we conclude that

$$\begin{aligned} v \mapsto G(g_1 v) - \sum _{k=1}^m D_k v g_k \in {{\mathcal {B}}}(U, H^{\beta }_{00}(\mathbb R_+,U)), \end{aligned}$$

and thus

$$\begin{aligned} v \mapsto R_{\beta }^{-1} \Big (G(g_1 v) - \sum _{k=1}^m D_k v g_k \Big ) \in {{\mathcal {B}}}(U, L^2(\mathbb R_+,U)). \end{aligned}$$
(3.15)

In either case for m, an application of Lemma 2.1 yields that the Laplace transform of the above belongs to \({{\mathcal {B}}}(U, {{\mathcal {H}}}^2(\mathbb C_0,U))\). Computing the Laplace transforms of (3.14) and (3.15) gives exactly (3.10). The converse argument reverses these steps. \(\square \)
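To spell out the Laplace-transform computation just invoked (a routine expansion, recorded here for convenience), recall that \({{\mathcal {L}}}(g_k)(s) = (1+s)^{-k}\) and that, by (3.3), \(R_{\beta }^{-1}\) acts as multiplication by \((1+s)^{\beta }\) on the transform side:

```latex
% Transform of (3.14), using L(g_1)(s) = (1+s)^{-1}:
\mathcal{L}\bigl( R_\beta^{-1}\, G(g_1 v) \bigr)(s)
   = (1+s)^{\beta}\, \mathbf{G}(s)\, \frac{v}{1+s},
% Transform of (3.15), using L(g_k)(s) = (1+s)^{-k}:
\mathcal{L}\Bigl( R_\beta^{-1} \bigl( G(g_1 v)
     - \textstyle\sum_{k=1}^{m} D_k v\, g_k \bigr) \Bigr)(s)
   = (1+s)^{\beta} \Bigl( \frac{\mathbf{G}(s)}{1+s}
       - \sum_{k=1}^{m} \frac{D_k}{(1+s)^{k}} \Bigr) v.
```

Membership of these expressions in \({{\mathcal {H}}}^2(\mathbb C_0,U)\), uniformly over \(\Vert v \Vert _U \le 1\), is precisely (3.10a) and (3.10b), respectively.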

Remark 3.8

Developing the train of thought from Remark 3.4, and imposing the notation of Theorem 3.5 but relaxing the requirement that \(\alpha , \beta \not \in 1/2 + \mathbb Z_+\), it follows that statement (1) is further equivalent to

(\(2^*\)):

the restriction of G to \({{\mathcal {W}}}_{\alpha }\) as in (3.5) maps continuously into \(H^{\beta }(\mathbb R_+,U)\).

However, in the absence of a description of \({{\mathcal {W}}}_{\gamma }\) at the borderline values, this result is not constructive.

3.1 A Convolution Characterisation

Here we provide a further characterisation of bounded, linear, right-shift invariant operators \(H^{\beta }(\mathbb R_+,U) \rightarrow H^{\beta }(\mathbb R_+,U)\) in terms of so-called strong convolution operators, which appears as Proposition 3.10 below. For which purpose, the following lemma describes operators defined in terms of convolution with elements in \({{\mathcal {B}}}(U, L^2(\mathbb R_+,U))\). The present formulation is inspired by [35, Theorem A.3.5], [26, Lemma F.2.2] and [26, Lemma F.3.7].

Lemma 3.9

Let \(h \in {{\mathcal {B}}}(U, L^2(\mathbb R_+,U))\). There exists a unique bounded, linear, right-shift invariant operator \(H: L^1(\mathbb R_+,U) \rightarrow L^2(\mathbb R_+,U)\) with the property that

$$\begin{aligned} \big (H (uv)\big )(t)&= \int _0^t h(u(t-s)v )(s) \, \textrm{d}s = \int _0^t (h v)(s) u(t-s) \, \textrm{d}s \\&\qquad \forall \, t \ge 0, \; \forall \, v \in U, \; \forall \, u \in L^1(\mathbb R_+,\mathbb C). \end{aligned}$$
(3.16)

We write \(H u = h *u\) for all \(u \in L^1(\mathbb R_+,U)\). Moreover, the following statements hold.

  (i)

    \(H( g_1 v) = h *(g_1v) = (hv) *g_1\) belongs to \(H^1_0(\mathbb R_+,U)\) for all \(v \in U\), with

    $$\begin{aligned} \big (H( g_1 v)\big )' = hv - H( g_1 v) \quad \text {and} \quad \Vert H( g_1 v)\Vert _{H^1(\mathbb R_+)} \lesssim |v |_U \quad \forall \, v \in U. \end{aligned}$$
  (ii)

    Suppose that \(\textbf{G}\in {{\mathcal {H}}}^2_\textrm{str}({{\mathcal {B}}}(U)) \cap {{\mathcal {H}}}^\infty ({{\mathcal {B}}}(U))\), and set \(h: = {{\mathcal {L}}}^{-1}(\textbf{G}) \in {{\mathcal {B}}}(U, L^2(\mathbb R_+,U))\). Then the restriction of H to \(L^1 \cap L^2\) has a unique extension to a bounded, linear, right-shift invariant operator on \(L^2(\mathbb R_+,U) \). We write this extension as \(H_\textrm{e} u = h *_\textrm{e} u\) for all \(u \in L^2(\mathbb R_+,U)\).

  (iii)

    Let \(G = {{\mathcal {L}}}^{-1} {{\mathcal {M}}}_\textbf{G}{{\mathcal {L}}}\) with \(\textbf{G}\in {{\mathcal {H}}}^\infty ({{\mathcal {B}}}(U))\). If \(\textbf{G}\in {{\mathcal {H}}}^2_\textrm{str}({{\mathcal {B}}}(U))\), then \(G u = h *_\textrm{e} u\), where \(h = {{\mathcal {L}}}^{-1}(\textbf{G})\) and \(*_\textrm{e}\) is as described in statement (ii).

A sufficient condition for \(h \in {{\mathcal {B}}}(U, L^2(\mathbb R_+,U))\) is that \(h: \mathbb R_+ \rightarrow {{\mathcal {B}}}(U)\) is Bochner-measurable (also known as uniformly measurable), or even just strongly measurable, with \(\Vert h\Vert _{{{\mathcal {B}}}(U)} \in L^2(\mathbb R_+)\). However, as demonstrated in [35, Remark A.3.6, p. 742], not every \(h \in {{\mathcal {B}}}(U, L^2(\mathbb R_+,U))\) as in Lemma 3.9 is of this form when U is infinite dimensional.

By way of further commentary, recall the so-called strong \(L^2\)-space, denoted \(L^2_\textrm{str}(\mathbb R_+,{{\mathcal {B}}}(U))\), which comprises all \(f: \mathbb R_+ \rightarrow {{\mathcal {B}}}(U)\) such that \(f v \in L^2(\mathbb R_+,U)\) for all \(v \in U\) and

$$\begin{aligned} \Vert f \Vert _{L^2_\textrm{str}(\mathbb R_+)}:= \sup _{\Vert v \Vert \le 1} \Vert t \mapsto f(t) v\Vert _{L^2(\mathbb R_+,U)} < \infty . \end{aligned}$$

This space can be identified as a subspace of \({{\mathcal {B}}}(U,L^2(\mathbb R_+,U))\), and a version of Lemma 3.9 applies for \(h \in L^2_\textrm{str}(\mathbb R_+,{{\mathcal {B}}}(U))\). Moreover,

$$\begin{aligned} L^2_\textrm{str}(\mathbb R_+,{{\mathcal {B}}}(U)) {{\,\mathrm{\,\dot{=}\,}\,}}L^2(\mathbb R_+,{{\mathcal {B}}}(U)),\end{aligned}$$

when U is finite dimensional; see [26, Lemma F.1.5, p. 1003]. Strong \(L^p\)-spaces are studied in some generality in [26, Appendix F].

However, the main motivation for our present focus on \({{\mathcal {B}}}(U, L^2(\mathbb R_+,U))\), rather than the strong \(L^2\)-space, is that the latter space is not isomorphic to \({{\mathcal {H}}}^2_\textrm{str}({{\mathcal {B}}}(U))\) under the Laplace transform, that is, the corresponding version of Lemma 2.1 does not hold here. This claim is proven in [26, Example F.3.6, p. 1020]. As the proof of Theorem 3.5 illustrates, to obtain various characterisations of boundedness properties of bounded, linear, right-shift invariant operators, we make essential use of spaces which are isomorphic under the Laplace transform.

We use the notation \(*_\textrm{e}\) for convenience, even though it is an extension of convolution in the sense of (3.16). For brevity, we write \(h \,*_\textrm{e}\) in place of \(u \mapsto h *_\textrm{e} u\).

Proof

The first claim is taken from [35, Theorem A.3.5] in the case \(p=2\).

Statement (i) is routine to prove in light of the equality (3.16), namely, that

$$\begin{aligned} \big (H( g_1 v)\big )(t) = \int _0^t (hv)(s) \textrm{e}^{-(t-s)} \, \textrm{d}s \quad \forall \, t \ge 0, \; \forall \, v \in U. \end{aligned}$$

The estimate for \(\Vert H( g_1 v)\Vert _{H^1(\mathbb R_+)}\) follows from the expression for \((H( g_1 v))'\) together with the bounds

$$\begin{aligned} \Vert hv\Vert _{L^2(\mathbb R_+)} \lesssim |v |_U \quad \text {and} \quad \Vert H( g_1 v) \Vert _{L^2(\mathbb R_+)} \lesssim \Vert g_1 v \Vert _{L^1(\mathbb R_+)} \lesssim |v |_U \quad \forall \, v \in U. \end{aligned}$$

To prove statement (ii), we first estimate that

$$\begin{aligned} \Vert H u \Vert _{L^2(\mathbb R_+)}&\lesssim \Vert {{\mathcal {L}}}( H u) \Vert _{{{\mathcal {H}}}^2(U)} = \Vert s \mapsto \textbf{G}(s) {{\mathcal {L}}}(u)(s) \Vert _{{{\mathcal {H}}}^2(U)} \nonumber \\&\le \Vert \textbf{G}\Vert _{{{\mathcal {H}}}^\infty } \Vert {{\mathcal {L}}}(u) \Vert _{{{\mathcal {H}}}^2(U)} \nonumber \\&\lesssim \Vert u\Vert _{L^2(\mathbb R_+)} \quad \forall \, u \in L^1(\mathbb R_+,U) \cap L^2(\mathbb R_+,U). \end{aligned}$$
(3.17)

Here we have used that

$$\begin{aligned} {{\mathcal {L}}}( H u)(s) = \textbf{G}(s) {{\mathcal {L}}}(u)(s) \quad \forall \, s \in \mathbb C_0,\; \forall \, u \in L^1(\mathbb R_+,U),\end{aligned}$$

which follows from [35, Theorem A.3.5]. Since \(L^1(\mathbb R_+,U) \cap L^2(\mathbb R_+,U)\) is dense in \(L^2(\mathbb R_+,U)\), the estimate (3.17) gives the unique claimed extension.

Finally, statement (iii) follows from statement (ii), noting that \({{\mathcal {L}}}(G u) = {{\mathcal {L}}}(H_\textrm{e} u)\) for all \(u \in L^2(\mathbb R_+,U)\). Hence, \(G = H_\textrm{e}\). \(\square \)

Proposition 3.10

Imposing the notation of Theorem 3.5, in the situation that \(\alpha = \beta >0\), each statement is additionally equivalent to:

  (4a)

    if \(\beta \in (0,1/2)\), then there exists \(h \in {{\mathcal {B}}}(U,L^2(\mathbb R_+,U))\) such that \(G = R_{1-\beta }^{-1} h \,*_\textrm{e}\,\);

  (4b)

    if \(\beta \in (1/2,1)\), then there exist \(D \in {{\mathcal {B}}}(U)\) and \(h \in {{\mathcal {B}}}(U,L^2(\mathbb R_+,U))\) such that \(G - D = R_{1-\beta }^{-1} h \,*_\textrm{e}\,\);

  (4c)

    if \(\beta \ge 1\), then there exist \(D \in {{\mathcal {B}}}(U)\) and \(h \in {{\mathcal {B}}}(U,H^{\beta -1}(\mathbb R_+,U))\) such that \(G - D = h \,*_\textrm{e}~\).

For the above proposition to be true, note that the claimed conditions must each be equivalent to \(G(g_1 v) \in H^{\beta }(\mathbb R_+,U)\) for all \(v \in U\). Observe that \(h *(g_1 v) \in H^1_0(\mathbb R_+,U)\) when \(h \in {{\mathcal {B}}}(U, L^2(\mathbb R_+, U))\). In particular, when \(\beta < 1\) as is the case in statements (4a) and (4b), then \(R_{1-\beta }^{-1}\) is the inverse of convolution with an \(L^1\)-function, and so is convolution with some distribution. This (in general) removes regularity. Although we have presented the case \(\beta >1\) differently above, essentially the opposite is happening, as now \(R^{-1}_{1-\beta }\) is convolution with a function, which is smoothing. We note that G and \(G-D\) in statements (4a) and (4b), respectively, are still strong convolution operators.
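To make the smoothing claim concrete, we record a standard Laplace-transform pair (stated here for orientation; it is not needed in the proof below). By (3.3), \(R_{1-\beta }^{-1}\) has symbol \((1+s)^{1-\beta }\) and, when \(\beta > 1\),

```latex
% Standard transform pair, valid for \beta > 1:
\mathcal{L}\Bigl( t \mapsto \frac{t^{\beta-2}\, \mathrm{e}^{-t}}{\Gamma(\beta-1)} \Bigr)(s)
   = \frac{1}{(1+s)^{\beta-1}} = (1+s)^{1-\beta},
```

so that \(R_{1-\beta }^{-1}\) is convolution with an integrable function. For \(\beta < 1\), the symbol \((1+s)^{1-\beta }\) is unbounded on \(\mathbb C_0\) and no integrable convolution kernel exists.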

Proof of Proposition 3.10

Assume first that any of the statements of Theorem 3.5 hold.

The proofs of statements (4a) and (4b) are similar. We give the latter, as the former essentially has the same calculations only with \(D =0\). On the one hand, as statement (3) holds with \(m=1\), it follows that

$$\begin{aligned} s \mapsto (1+s)^{\beta -1}\big (\textbf{G}(s) - D\big ) \in {{\mathcal {H}}}^2_\textrm{str}({{\mathcal {B}}}(U)).\end{aligned}$$

On the other hand, as \(\alpha = \beta \) it follows from Theorem 3.1 that \(\textbf{G}\in {{\mathcal {H}}}^\infty ({{\mathcal {B}}}(U))\) and, therefore, that

$$\begin{aligned} s \mapsto (1+s)^{\beta -1}\big (\textbf{G}(s) - D\big ) \in {{\mathcal {H}}}^\infty ({{\mathcal {B}}}(U)). \end{aligned}$$

Define \(G_0: = {{\mathcal {L}}}^{-1} {{\mathcal {M}}}_{\textbf{G}- D} {{\mathcal {L}}}= G - D \in {{\mathcal {B}}}(L^2(\mathbb R_+,U))\). An application of statement (iii) of Lemma 3.9 to \(G_0\) yields that

$$\begin{aligned} R_{1-\beta } \big (G - D \big ) = R_{1-\beta } G_0 = h *_\textrm{e},\end{aligned}$$

for some \(h \in {{\mathcal {B}}}(U,L^2(\mathbb R_+,U))\), and where we have invoked the property of \(R_{1-\beta }\) as a multiplication operator as in (3.3). The desired expression in statement (4b) is obtained.

In the case that \(\beta \ge 1\), the conclusions of Theorem 3.5 are valid with \(\beta \) replaced by 1. Hence, the above argument, with \(R_0 = I\), now gives again the desired expression for \(G_0 = G - D\). Here it remains to see that \(h \in {{\mathcal {B}}}(U, H^{\beta -1}(\mathbb R_+,U))\) when \(\beta >1\). The operator \(G_0\) clearly also satisfies statement (2), as well as \(G_0(g_1 v) = h *g_1 v\). Therefore, rearranging the first equation in statement (i) of Lemma 3.9 gives that

$$\begin{aligned} hv = \big (G_0 (g_1v)\big )' + G_0(g_1v) \quad \forall \, v \in U. \end{aligned}$$
(3.18)

Viewed as a linear operator in v, the right-hand side of the above belongs to \({{\mathcal {B}}}(U,H^{\beta -1}(\mathbb R_+,U))\) by hypothesis, and hence so does the left-hand side. We have proven statement (4c).

Conversely, assume that statement (4a), (4b) or (4c) holds. Where applicable, set \(G_0: = G - D\). We seek to prove that statement (2) of Theorem 3.5 holds. Since

$$\begin{aligned} R_{1-\beta }^{-1} = {{\mathcal {L}}}^{-1} {{\mathcal {M}}}_{(1+s)^{1-\beta }} {{\mathcal {L}}},\end{aligned}$$

it follows from Theorem 3.1 with \(\alpha = 1\) that \(R_{1-\beta }^{-1}\) is a bounded linear operator \(H^1_{00}(\mathbb R_+,U) \rightarrow H^\beta _{00}(\mathbb R_+,U)\). Statement (i) of Lemma 3.9 gives that \(h *(g_1 v) \in H^1_0(\mathbb R_+,U) {{\,\mathrm{\,\dot{=}\,}\,}}H^1_{00}(\mathbb R_+,U)\), and hence

$$\begin{aligned} G_0(g_1 v) = R^{-1}_{1-\beta } h *(g_1v) \in H^\beta _{0}(\mathbb R_+,U) \quad \forall \, v \in U. \end{aligned}$$
(3.19)

If \(\beta \in (0,1/2)\), then \(H^\beta _{0}(\mathbb R_+,U) {{\,\mathrm{\,\dot{=}\,}\,}}H^\beta (\mathbb R_+,U)\) by (2.5). We now see from (3.19) that \(G = G_0\) has the desired regularity. If \(\beta \in (1/2,1)\), then from (3.19) we now conclude that \(G(g_1 v) = G_0(g_1 v) + D(g_1 v)\) has the required regularity.

For \(\beta \ge 1\), we again consider \(G_0(g_1 v) = h *g_1 v\). Rearranging (3.18) yields that

$$\begin{aligned} \big (G_0 (g_1v)\big )' = - G_0(g_1v) + h v \quad \forall \, v \in U. \end{aligned}$$

The initial value problem

$$\begin{aligned} y' = - y + hv, \quad y(0) = 0,\end{aligned}$$

has the unique solution \(y = g_1 *h v \in H^{\beta }(\mathbb R_+,U)\), since \(h v \in H^{\beta -1}(\mathbb R_+,U)\) by hypothesis. Therefore, \(G_0(g_1v) = y \in H^{\beta }(\mathbb R_+,U)\), which proves statement (2).\(\square \)
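The closing step, that \(y = g_1 * hv\) solves the initial value problem, is the standard variation-of-parameters formula and can be checked numerically. The sketch below assumes \(g_1(t) = \textrm{e}^{-t}\) (the kernel with Laplace transform \(1/(1+s)\), consistent with the transfer function computations in Sect. 4) and substitutes the illustrative scalar datum \(f = \sin \) for \(t \mapsto (hv)(t)\); the function names are ours, not the paper's.

```python
import numpy as np

def trapz(y, x):
    # simple trapezoidal rule (avoiding version-specific numpy helpers)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def conv_g1(f, t, n=200_000):
    # (g_1 * f)(t) = int_0^t e^{-(t - tau)} f(tau) dtau, with g_1(t) = e^{-t}
    tau = np.linspace(0.0, t, n)
    return trapz(np.exp(-(t - tau)) * f(tau), tau)

f = np.sin   # illustrative stand-in for t -> (hv)(t)
t = 2.0
y = conv_g1(f, t)

# For f = sin, the unique solution of y' = -y + f, y(0) = 0 is
# y(t) = (sin t - cos t + e^{-t})/2.
y_exact = (np.sin(t) - np.cos(t) + np.exp(-t)) / 2.0
assert abs(y - y_exact) < 1e-6

# Check the differential equation itself, via a central difference for y'.
h = 1e-4
dy = (conv_g1(f, t + h) - conv_g1(f, t - h)) / (2 * h)
assert abs(dy + y - f(t)) < 1e-4
```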

Our final result of the section is a corollary in the special case that \(\alpha = \beta = m \in \mathbb N\), so that the focal object is a bounded, linear, right-shift invariant operator on \(L^2(\mathbb R_+,U)\) or, equivalently, a symbol in \({{\mathcal {H}}}^\infty ({{\mathcal {B}}}(U))\). This setting is particularly relevant in the study of well-posed linear control systems.

Corollary 3.11

Let G denote a bounded, linear, right-shift invariant operator \(L^2(\mathbb R_+,U) \rightarrow L^2(\mathbb R_+,U)\), with \(\textbf{G}\) as in Theorem 1.1, and fix \(m \in \mathbb N\). The following statements are equivalent.

  1. (1)

    The restriction of G to \(H^m(\mathbb R_+,U)\) maps continuously into \(H^m(\mathbb R_+,U)\) ;

  2. (2)

    \(G(g_1 v) \in H^m(\mathbb R_+,U)\) for all \(v \in U\) ;

  3. (3)

    There exist \(D_k \in {{\mathcal {B}}}(U)\) for \(k \in \{1, \dots , m\}\) such that

    $$\begin{aligned} s \mapsto {(1+s)^{m}}\Big (\frac{\textbf{G}(s)}{1+s} - \sum _{k=1}^{m} \frac{D_k}{(1+s)^k} \Big )\in {{\mathcal {H}}}^2_\textrm{str}({{\mathcal {B}}}(U)); \end{aligned}$$
    (3.20)
  4. (4)

    There exist \(D \in {{\mathcal {B}}}(U)\) and \(h \in {{\mathcal {B}}}(U, H^{m-1}(\mathbb R_+,U))\) such that \(G - D = h *_\textrm{e}\).

When \(m=1\), the condition (3.20) in statement (3) simplifies to \(\textbf{G}- D_1 \in {{\mathcal {H}}}^2_\textrm{str}({{\mathcal {B}}}(U))\). This condition plays a key role in [27, Theorem 1.2], in the context of so-called Pritchard–Salamon systems, as we discuss in Sect. 4.2.

Each of the statements of Corollary 3.11 is in turn equivalent to \(H^m(\mathbb R_+,U)\) being an invariant subspace for G as in the statement of the result. The non-trivial claim here, of course, is that invariance of a one-dimensional subspace of \(H^m(\mathbb R_+,U)\) is sufficient for the invariance of the whole subspace. We note that if \(H^m(\mathbb R_+,U)\) is an invariant subspace for G, then \(G|_{H^m(\mathbb R_+,U)}\) is continuous in the stronger norm of \(H^m(\mathbb R_+,U)\) by the Closed Graph Theorem.

The next remark addresses the situation of linear, right-shift invariant operators mapping between vector-valued function spaces with distinct spaces of function values. Hitherto, a single Hilbert space has been used as a space of values in both the domain and codomain, and has always been denoted U.

Remark 3.12

The results of the present paper may easily be extended to operators of the form \({{\mathcal {L}}}^{-1} {{\mathcal {M}}}_{\textbf{G}} {{\mathcal {L}}}\) for holomorphic \(\textbf{G}: \mathbb C_0 \rightarrow {{\mathcal {B}}}(U,Y)\) and complex Hilbert space \(Y \ne U\). Specifically, a trick is used which considers the block operators on the product space \(U \times Y\) given by

$$\begin{aligned} {{\tilde{\textbf{G}}}}: = \begin{pmatrix}0 &{} 0 \\ \textbf{G}&{} 0\end{pmatrix}: \mathbb C_0 \rightarrow {{\mathcal {B}}}(U\times Y) \quad \text {and} \quad {\tilde{G}}:= {{\mathcal {L}}}^{-1} {{\mathcal {M}}}_{{{\tilde{\textbf{G}}}}} {{\mathcal {L}}}. \end{aligned}$$

The results of the present paper are applied to \({{\tilde{\textbf{G}}}}\) and \({\tilde{G}}\), and then \(\textbf{G}\) and G are recovered by restriction and projection. The current choice of \(U = Y\) has been made primarily to simplify the presentation.

3.2 Connections to Other Results

We conclude the section by describing how our results relate to others in the literature. The paper [29] considers bounded, linear, right-shift invariant operators \(X \rightarrow X\) where X denotes a Banach space of locally integrable scalar-valued functions on \(\mathbb R_+\) with certain properties. The main result of [29] is [29, Theorem 2.1] which shows that such operators are necessarily of the form \({{\mathcal {L}}}^{-1} {{\mathcal {M}}}_\textbf{G}{{\mathcal {L}}}\) for some symbol \(\textbf{G}\in {{\mathcal {H}}}^\infty (\mathbb C)\) or, in words, “represented by transfer functions”. The authors of [29] acknowledge that the methods used only apply to spaces X with zero boundary conditions at zero (where these evaluations make sense). As such there is some overlap between [29, Theorem 2.1] and Theorem 3.1, although neither result truly generalises the other. In particular, whilst the Banach space X includes spaces of functions other than \(H^\alpha _{00}(\mathbb R_+,U)\) for \(\alpha \ge 0\), Theorem 3.1 considers operators between different (vector-valued) spaces, allows for the situation that the symbol \(\textbf{G}\) is unbounded, and permits G to map between spaces of distributions (\({{\mathcal {V}}}_\alpha \), \({{\mathcal {V}}}_\beta \) with negative exponent). The argumentation used presently and that of [29] are very different, and so these works are complementary in this sense as well.

There is some minor overlap between Theorem 3.5 and [30, Theorem 6]. However, in the proof of [30, Theorem 6] it is erroneously claimed that \(C_0^\infty (\mathbb R_+,U)\) is dense in \(W^{\alpha ,2}(\mathbb R_+,U)\) for all \(\alpha >0\). Consequently, no condition of the form (3.10) appears in [30, Theorem 6]. In fact, the authors of [30] essentially prove another version of Theorem 3.1 where \(\beta =0\), although from a state-space perspective.

Regarding connections to Proposition 3.10, we comment that convolution operators are well-studied objects, approached from a variety of perspectives, and the associated literature is vast. We relate the present work to three papers. First, the work [8], which builds on the earlier paper [45], considers convolution operators

$$\begin{aligned} (G u)(t) = \int _0^t h(t-\tau ) u(\tau )\, \textrm{d}\tau , \quad t > 0, \end{aligned}$$
(3.21)

(mostly) in the setting that the Laplace transform of \(h: \mathbb R_+ \rightarrow \mathbb R^{l \times m}\) is a rational matrix function. The main results of [8] are summarised in [8, Table 1] and derive exact formulae, or computable upper bounds, for the norm of G viewed as an operator \(L^{p_1}\big (\mathbb R_+,(\mathbb R^m, |\cdot |_{r_1}) \big ) \rightarrow L^{p_2}\big (\mathbb R_+,(\mathbb R^{l}, |\cdot |_{r_2}) \big )\) for \(1 \le p_i, r_i \le \infty \), where \(|\cdot |_r\) denotes the Euclidean r-norm.

Second, the work [18] also considers convolution operators of the form (3.21) but with operator-valued kernels \(h: \mathbb R_+ \rightarrow {{\mathcal {B}}}(U,Y)\) for Hilbert spaces U and Y. Two problems are studied, related to characterising when G is a bounded operator \(L^2(\mathbb R_+, w(s)\, \textrm{d}s, U) \rightarrow L^2(\mathbb R_+, m(s)\, \textrm{d}s, Y)\). Here the terms w and m are weighting functions and, roughly, either the weight pair (w, m), or the kernel h, is fixed, and the problem is to characterise the other quantity so as to ensure boundedness of G. Positive results are given in both cases.

Third, the paper [13] broadly addresses solvability properties of Wiener–Hopf equations of the form

$$\begin{aligned} H(\phi )(t): = \phi (t) + \int _0^\infty k(t-\tau ) \phi (\tau ) \, \textrm{d}\tau = f(t) \quad t \in \mathbb R_+, \end{aligned}$$
(3.22)

on various function spaces on \(\mathbb R_+\), for given right-hand side f and kernel \(k \in L^1(\mathbb R)\). There are some differences to the situation considered presently, namely that k is not assumed to be supported in \(\mathbb R_+\) and so H need not be causal (and hence need not be right-shift invariant). However, when k is supported on \(\mathbb R_+\), then H is a linear, right-shift invariant operator of the form \(H - I = k *\). In order to address solvability of (3.22) in [13, Section 6], the author in [13, Section 4] considers a number of boundedness properties of H between various function spaces, with results [13, Theorems 11, 13, 14], the last of which addresses boundedness between Bessel potential spaces.

A direct comparison between Proposition 3.10 and [13, Theorem 14] is difficult owing to the different assumptions imposed but, in the case that \(p=2\) and the kernel k is supported on \(\mathbb R_+\), Proposition 3.10 extends [13, Theorem 14] to the vector-valued setting and kernels not in \(L^1\), and, in some sense, provides a converse. The overlap between our results and those of [8] or [18] is minimal, as these works both consider convolution operators between various (possibly weighted) Lebesgue spaces, and neither considers their continuity between Bessel potential spaces, which is the main focus of the present work.

4 Examples

We illustrate our results through four examples. As mentioned, one motivation for the present study comes from mathematical systems and control theory, where bounded, linear, right-shift invariant operators are called input–output maps of linear, time-invariant control systems, and the associated symbol (should such a multiplication representation exist) is called the transfer function. Consequently, our examples are drawn from this field, although the presentation is elementary, does not require extensive knowledge of the area, and the examples are primarily intended to illustrate the theory.

There are a number of frameworks for extracting a transfer function from so-called infinite-dimensional linear control systems, such as those specified by partial- or delay-differential equations. These frameworks are broadly equivalent, and we refer the reader to, for example, [16, Remark 7.6], as well as [20, Chapter 12] and [47] for more information. For brevity, in the following examples we do not give extensive derivations of transfer functions.

Example 4.1

For fixed \(\tau >0\) the right-shift semigroup \(G = \sigma ^\tau \) is evidently a bounded, linear, right-shift invariant operator \(L^2(\mathbb R_+) \rightarrow L^2(\mathbb R_+)\). It is intuitively clear that the conclusions of Theorem 3.1 with \(U = \mathbb C\) and \({{\mathcal {B}}}(U) = \mathbb C\) should be true here — namely that \(\sigma ^\tau \) has bounded compressions \(H^\beta _{00}(\mathbb R_+) \rightarrow H^\beta _{00}(\mathbb R_+)\) for all \(\beta \ge 0\). Indeed, \(\sigma ^\tau \) is essentially “the identity map delayed by \(\tau \) and with zeros inserted beforehand”, which preserves all zero boundary conditions and the regularity of a function. To formalise these observations, Theorem 3.1 is applicable with \(\alpha = \beta \ge 0\) as \(G = {{\mathcal {L}}}^{-1} {{\mathcal {M}}}_\textbf{G}{{\mathcal {L}}}\) with \(s \mapsto \textbf{G}(s) = \textrm{e}^{-s \tau }\), which belongs to \({{\mathcal {H}}}^\infty (\mathbb C)\). We note that \(\alpha = \beta \) is optimal insofar as \(s \mapsto (1+s)^\gamma \textrm{e}^{-s\tau } \not \in {{\mathcal {H}}}^\infty (\mathbb C)\) for any \(\gamma >0\).

To address the action of \(\sigma ^\tau \) on functions with non-zero boundary conditions, observe that \(t \mapsto G(g_1 v)(t)\) is discontinuous at \(t = \tau \), so does not belong to \(H^\beta (\mathbb R_+)\) for any \(\beta > 1/2\). In particular, Theorem 3.5 yields that G does not restrict to a bounded operator \(H^\beta (\mathbb R_+) \rightarrow H^\beta (\mathbb R_+)\) for such \(\beta \). For completeness, we note that \(D_1\) in statement (3) of Theorem 3.5 (with \(m=1\)) equals zero, and \(\textbf{G}- D_1 = \textbf{G}\not \in {{\mathcal {H}}}^2(\mathbb C_0)\).
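The jump at \(t = \tau \) can be seen concretely: assuming \(g_1(t) = \textrm{e}^{-t}\) (the kernel with Laplace transform \(1/(1+s)\)), the function \(\sigma ^\tau g_1\) vanishes on \([0,\tau )\) and takes the value 1 at \(t = \tau \). A minimal numerical sketch:

```python
import math

tau = 1.5  # illustrative delay

def shifted_g1(t):
    # (sigma^tau g_1)(t), assuming g_1(t) = e^{-t}
    return 0.0 if t < tau else math.exp(-(t - tau))

# One-sided limits at t = tau: the jump has size 1, so sigma^tau g_1 is
# discontinuous and cannot belong to H^beta(R_+) for any beta > 1/2.
eps = 1e-9
left = shifted_g1(tau - eps)   # 0
right = shifted_g1(tau + eps)  # close to 1
assert left == 0.0
assert abs(right - left - 1.0) < 1e-8
```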

Consider next the controlled ordinary differential equation

$$\begin{aligned} \dot{z}(t) = -z(t) + u(t), \quad t >0, \quad z(0) = 0, \end{aligned}$$

with input u, and delayed output \(y = \sigma ^\tau z\) (based on [12, Example 7.1.1]), with resulting input–output map \(G_1\) given by \(G_1 u= \sigma ^\tau (g_1 *u)\) for all \(u \in L^2(\mathbb R_+)\). The discontinuity in Gu introduced by the delay has been removed as \(G_1(g_1 v)\) is identically zero on a neighbourhood of zero, and hence infinitely differentiable on the same neighbourhood, with

$$\begin{aligned} \big (G_1(g_1 )\big )^{(k-1)}(0) = 0 \quad \forall \, k \in \mathbb N.\end{aligned}$$

Consequently, if condition (3.10b) is to hold, then it must hold with \(D_k = 0\) for every k as, recall, the proof of Theorem 3.5 showed that the \(v \mapsto D_k v\) are linear combinations of \(\big (G_1(g_1 )\big )^{(j)}(0)\).

The transfer function \(\textbf{G}_1\) is given by \(s \mapsto \textrm{e}^{-s \tau }/(s+1)\) on \(\mathbb C_0\) which satisfies

$$\begin{aligned} s \mapsto (1+s)^m\frac{\textbf{G}_1(s)}{1+s} \in {{\mathcal {H}}}^2(\mathbb C_0), \end{aligned}$$

for \(m \in \mathbb Z_+\) if, and only if, \(m = 1\). We conclude that the restriction of \(G_1\) to \(H^m(\mathbb R_+)\) maps boundedly into \(H^m(\mathbb R_+)\) when \(m =1\), and does not when \(m \ge 2\).
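This dichotomy is visible on the boundary: \((1+s)^m \textbf{G}_1(s)/(1+s) = (1+s)^{m-2}\textrm{e}^{-s\tau }\), and \(\vert (1+i\omega )^{m-2}\textrm{e}^{-i\omega \tau }\vert ^2 = (1+\omega ^2)^{m-2}\), which is integrable over \(\mathbb R\) precisely when \(m = 1\). A quick numerical sketch of the truncated boundary integrals:

```python
import numpy as np

def trapz(y, x):
    # simple trapezoidal rule (avoiding version-specific numpy helpers)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def truncated_h2_integral(m, R, n=2_000_001):
    # int_{-R}^{R} (1 + w^2)^{m - 2} dw: the truncated boundary H^2-integral
    # of (1+s)^{m-2} e^{-s tau} (the factor |e^{-i w tau}| = 1 drops out)
    w = np.linspace(-R, R, n)
    return trapz((1.0 + w**2) ** (m - 2), w)

# m = 1: the integral converges to pi as R grows, so membership of H^2 holds.
assert abs(truncated_h2_integral(1, 1e4) - np.pi) < 1e-3
# m = 2: the integrand is identically 1, so the integral grows like 2R.
assert abs(truncated_h2_integral(2, 1e4) - 2e4) < 1.0
```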

Example 4.2

The controlled and observed neutral delay differential equation

$$\begin{aligned}\dot{w}- \sigma ^{r} \dot{w}=-aw+u, \quad y = w, \end{aligned}$$

is considered in [24]. Here, as usual, u and y denote the input and output variables, respectively, and \(a, r>0\) are positive parameters. We have \(U = \mathbb C\). The associated transfer function \(\textbf{G}\) is given by

$$\begin{aligned} \textbf{G}(s)=\frac{1}{s(1-\textrm{e}^{-rs})+a}. \end{aligned}$$

It is clear that \(\textbf{G}\in {{\mathcal {H}}}^\infty _\alpha \) for all \(\alpha > 0\). Furthermore, it follows from [24, Propositions 3.1 and 3.4] that:

  • there exists an open set \(\Omega \) containing the closed right-half complex plane such that \(\textbf{G}\) is holomorphic on \(\Omega \) ;

  •  \(\textbf{G}\) is not bounded on \(\mathbb C_0\), that is, \(\textbf{G}\not \in {{\mathcal {H}}}^\infty (\mathbb C_0)\); and

  •  \(s \mapsto \textbf{G}(s)/(1+s)\) is bounded on \(\mathbb C_0\).

The final property ensures that condition (3.1) in Theorem 3.1 applies with \(\alpha - \beta = 1\). Consequently, for example, the associated input–output operator G is continuous \(H^1_0(\mathbb R_+) \rightarrow L^2(\mathbb R_+)\).

We claim that \(s \mapsto \textbf{G}(s)/(1+s) \in {{\mathcal {H}}}^2(\mathbb C_0)\), so that statement (3) of Theorem 3.5 holds with \(\beta = m = 1\) and \(D_1 =0\). Thus, by that result, it follows that G is continuous as an operator \(H^1(\mathbb R_+) \rightarrow L^2(\mathbb R_+)\). To this end, we compute that

$$\begin{aligned} \Big |\frac{\textbf{G}(i\omega )}{1+i\omega } \Big |^2&= \frac{1}{(1+\omega ^2) \Big ( (a-\omega \sin (r \omega ))^2 + \omega ^2(1-\cos (r \omega ))^2\Big )} \quad \forall \, \omega \in \mathbb R. \end{aligned}$$

Since for all \(a,r >0\), there exists \(b > 0\) such that

$$\begin{aligned} (a-\omega \sin (r \omega ))^2 + \omega ^2(1-\cos (r \omega ))^2 \ge b \quad \forall \, \, \omega \in \mathbb R,\end{aligned}$$

we conclude that

$$\begin{aligned} \left| \frac{\textbf{G}(i\omega )}{1+i\omega } \right| ^2 \le \frac{1}{b\,(1+\omega ^2)} \quad \forall \, \, \omega \in \mathbb R.\end{aligned}$$

Therefore, \(s \mapsto \textbf{G}(s)/(1+s) \in {{\mathcal {H}}}^2(\mathbb C_0)\), as required.
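The two qualitative features above, \(\textbf{G}\not \in {{\mathcal {H}}}^\infty (\mathbb C_0)\) yet \(s \mapsto \textbf{G}(s)/(1+s)\) bounded, can be probed numerically on the imaginary axis. The sketch below takes the illustrative parameter values \(a = r = 1\) and samples near the resonances \(r\omega \approx 2\pi k\), where \(\vert \textbf{G}(i\omega )\vert \) is large:

```python
import cmath
import math

a, r = 1.0, 1.0  # illustrative parameter values (here r = 1)

def G(s):
    # Transfer function of the neutral delay equation
    return 1.0 / (s * (1.0 - cmath.exp(-r * s)) + a)

# Near r*omega = 2*pi*k the denominator is small; refine omega = 2*pi*k + eps
# so that omega * sin(eps) = a, via a fixed-point iteration (valid for r = 1).
peaks = []
for k in range(1, 51):
    eps = 0.0
    for _ in range(50):
        eps = math.asin(a / (2 * math.pi * k + eps))
    peaks.append(2 * math.pi * k + eps)

# |G(i omega)| blows up along the resonances: consistent with G not in H^infty...
assert max(abs(G(1j * w)) for w in peaks) > 100.0
# ... but |G(i omega)/(1 + i omega)| stays bounded, consistent with the third
# bullet point (it hovers near 2/a^2 at the peaks).
assert all(abs(G(1j * w) / (1 + 1j * w)) < 10.0 for w in peaks)
```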

Example 4.3

Consider the ubiquitous finite-dimensional controlled and observed system of linear ordinary differential equations

$$\begin{aligned} \dot{x} = Ax + Bu, \quad x(0) = x^0, \quad y = Cx +Du, \end{aligned}$$
(4.1)

with input, state and output denoted u, x and y, respectively. The input, state and output spaces are \(U = \mathbb C^p\), \(X = \mathbb C^n\) and U, respectively, and A, B, C and D may be identified with compatibly-sized complex matrices. Let \(h: \mathbb R_+ \rightarrow \mathbb C^{p\times p}\) be given by \(t \mapsto h(t):= Ce^{At} B\). With this notation, we have that the input–output map G associated with (4.1) satisfies

$$\begin{aligned} G u = h * u + Du.\end{aligned}$$

If every eigenvalue of A has negative real part, then, in light of

$$\begin{aligned} t \mapsto h^{(k)}(t) = Ce^{At}A^k B \in L^2(\mathbb R_+, {{\mathcal {B}}}(U)) \subseteq {{\mathcal {B}}}(U, L^2(\mathbb R_+, U)) \quad \forall \, k \in \mathbb N_0, \end{aligned}$$

it follows from Proposition 3.10 that the restriction of G to \(H^m(\mathbb R_+,U)\) maps continuously into \(H^m(\mathbb R_+,U)\) for every \(m \in \mathbb N\).

Consider now the case that (4.1) denotes (at least formally) an infinite-dimensional linear control system, where \(A: X \supseteq D(A) \rightarrow X\) generates an exponentially stable \(C_0\)-semigroup. If C is bounded, meaning \(C \in {{\mathcal {B}}}(X,U)\), and \(A^k B \in {{\mathcal {B}}}(U,X)\) for some \(k \in \mathbb Z_+\), then the restriction of G to \(H^m(\mathbb R_+,U)\) maps continuously into \(H^m(\mathbb R_+,U)\) for \(m = k+1\). We refer the reader to [28] for a number of examples of controlled and observed partial differential equations where the condition \(A^k B \in {{\mathcal {B}}}(U,X)\) is satisfied.
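In the finite-dimensional case, the identity \({{\mathcal {L}}}(h)(s) = C(sI-A)^{-1}B\) underlying the transfer function \(\textbf{G}(s) = C(sI-A)^{-1}B + D\) is easy to verify numerically. The sketch below uses illustrative data with A diagonal and Hurwitz, so that \(e^{At}\) is explicit and no matrix-exponential routine is needed:

```python
import numpy as np

def trapz(y, x):
    # simple trapezoidal rule (avoiding version-specific numpy helpers)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Illustrative stable data: A diagonal, so e^{At} = diag(e^{lambda_i t}).
lam = np.array([-1.0, -2.0])      # eigenvalues of A (negative real parts)
A = np.diag(lam)
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 1.0]])
w = C[0, :] * B[:, 0]             # modal weights of h(t) = C e^{At} B

t = np.linspace(0.0, 40.0, 400_001)
h_vals = np.exp(np.outer(t, lam)) @ w   # h(t) = sum_i w_i e^{lambda_i t}

# Laplace transform of h at s = 1 via quadrature ...
s = 1.0
laplace_h = trapz(np.exp(-s * t) * h_vals, t)

# ... against the resolvent formula C (sI - A)^{-1} B = 1/(s+1) + 1/(s+2).
resolvent = (C @ np.linalg.inv(s * np.eye(2) - A) @ B)[0, 0]
assert abs(laplace_h - resolvent) < 1e-5

# Exponential decay makes h square-integrable (as are all h^{(k)} = C e^{At} A^k B),
# the key hypothesis of Proposition 3.10 in this example; here the exact
# value of the integral of h^2 is 1/2 + 2/3 + 1/4 = 17/12.
assert abs(trapz(h_vals**2, t) - 17.0 / 12.0) < 1e-3
```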

Our final example considers an operator-valued transfer function based on [16, Example 7.14].

Example 4.4

Consider the following controlled and observed heat equation on the unit square \(\Omega :=(0,1)\times (0,1)\):

$$\begin{aligned}&\frac{\partial w}{\partial t}(x_1,x_2,t)=\frac{\partial ^2 w}{\partial x_1^2}(x_1,x_2,t) +\frac{\partial ^2 w}{\partial x_2^2}(x_1,x_2,t),\\&w(0,x_2,t)=0,\quad w(1,x_2,t)=0,\\&\frac{\partial w}{\partial x_2}(x_1,0,t)=0,\quad \frac{\partial w}{\partial x_2}(x_1,1,t)=u(x_1,t),\\&y(x_1,t)=w(x_1,\kappa ,t)\,, \end{aligned}$$

where \(\kappa \in [0,1)\) is a parameter. Here we choose as input and output space \(U: = L^2(0,1)\). The input u represents a Neumann boundary control term along the top edge of the square. The measurement y is observation of w along the line parallel to the \(x_1\)-axis at \(x_2\)-position \(\kappa \) and, as may be shown by arguments analogous to those used in [6], the mapping \(u \mapsto y\) under zero initial conditions is a well-defined and continuous operator \(L^2(\mathbb R_+,U) \rightarrow L^2(\mathbb R_+,U)\). We refer the reader to [6] for more details of controlled and observed heat equations on bounded domains in \(\mathbb R^n\).

Routine modifications to the calculation in [16, Example 7.14] show that the transfer function \(\textbf{G}\) is given by

$$\begin{aligned} \textbf{G}(s)v=\sum _{n=1}^\infty h_n(s;\kappa ) \gamma _n(v)\, \sqrt{2}\sin (n\pi \,\cdot ) \quad \forall \, v \in L^2(0,1), \end{aligned}$$

where \(\gamma _n\) are the Fourier sine coefficients of v, namely,

$$\begin{aligned} \gamma _n(v)=\sqrt{2}\langle v , \sin (n\pi \,\cdot ) \rangle _{L^2(0,1)}=\sqrt{2}\int _0^1v(x_1) \sin (n\pi x_1)\, \textrm{d} x_1 \quad \forall \, n \in \mathbb N, \end{aligned}$$

and

$$\begin{aligned} h_n(s; \kappa ):=\frac{\cosh (\kappa \sqrt{s+n^2\pi ^2})}{\sqrt{s+n^2\pi ^2}\,\sinh (\sqrt{s+n^2\pi ^2})} \quad \forall \, s \in \mathbb C_0,\; \forall \, n \in \mathbb N. \end{aligned}$$

The function \(\textbf{G}\) belongs to \({{\mathcal {H}}}^\infty (\mathbb C_0, {{\mathcal {B}}}(U))\) and so, by Theorem 3.1 with \(\alpha = \beta \ge 0\), the associated input–output operator \(u \mapsto y = G(u)\) maps \(H^\alpha _{00}(\mathbb R_+,U)\) continuously into itself.

We investigate the extent to which the hypotheses of Theorem 3.5 hold. To this end, for \(\omega \in \mathbb R\), set \(z_n: = \sqrt{i \omega +n^2\pi ^2} \ne 0\) for all \(n \in \mathbb N\), which further satisfies

$$\begin{aligned} z_n = (\omega ^2 + n^4 \pi ^4)^{\frac{1}{4}} \textrm{e}^{i \arg (z_n)} \quad \text {and} \quad {{\,\textrm{Re}\,}}z_n = (\omega ^2 + n^4 \pi ^4)^{\frac{1}{4}} \cos (\arg (z_n)).\end{aligned}$$

Straightforward hyperbolic identities give that

$$\begin{aligned} \cosh (\kappa z_n)&= \cosh (\kappa {{\,\textrm{Re}\,}}z_n)\cos (\kappa {{\,\textrm{Im}\,}}z_n) + i \sinh (\kappa {{\,\textrm{Re}\,}}z_n)\sin (\kappa {{\,\textrm{Im}\,}}z_n) \\ \text {and} \quad |\cosh (\kappa z_n) |^2&= \cosh ^2(\kappa {{\,\textrm{Re}\,}}z_n) - \sin ^2(\kappa {{\,\textrm{Im}\,}}z_n) \le \cosh ^2(\kappa {{\,\textrm{Re}\,}}z_n) . \end{aligned}$$

Similarly,

$$\begin{aligned} \sinh (z_n)&= \sinh ({{\,\textrm{Re}\,}}z_n)\cos ({{\,\textrm{Im}\,}}z_n) + i \cosh ({{\,\textrm{Re}\,}}z_n)\sin ({{\,\textrm{Im}\,}}z_n) \\ \text {and} \quad |\sinh (z_n) |^2&= \sinh ^2({{\,\textrm{Re}\,}}z_n) + \sin ^2({{\,\textrm{Im}\,}}z_n) \ge \sinh ^2({{\,\textrm{Re}\,}}z_n) . \end{aligned}$$

Therefore,

$$\begin{aligned} \left|\frac{\cosh (\kappa z_n)}{\sinh (z_n) }\right|&\le \frac{\cosh (\kappa {{\,\textrm{Re}\,}}z_n)}{\sinh ( {{\,\textrm{Re}\,}}z_n)} \lesssim \exp \big ((\kappa -1)(\omega ^2 + n^4 \pi ^4)^{\frac{1}{4}} \cos (\arg (z_n))\big ) \nonumber \\&\le \exp \big ((1/\sqrt{2})(\kappa -1)(\omega ^2 + n^4 \pi ^4)^{\frac{1}{4}} \big ) \quad \forall \, \omega \in \mathbb R, \quad \forall \, n \in \mathbb N\,, \end{aligned}$$

where we have used that \(\kappa - 1 <0\) and \(\arg (z_n) \in (-\pi /4, \pi /4)\). Consequently,

$$\begin{aligned} \vert h_n(i \omega ; \kappa ) \vert \lesssim \frac{1}{n \pi } \exp \big ((1/\sqrt{2})(\kappa -1)(\omega ^2 + n^4 \pi ^4)^{\frac{1}{4}} \big )\quad \forall \, \omega \in \mathbb R, \; \forall \, n \in \mathbb N.\end{aligned}$$

Noting that

$$\begin{aligned} \sqrt{\omega } + n \pi \lesssim (\omega ^2 + n^4 \pi ^4)^{\frac{1}{4}} \quad \forall \, \omega \in \mathbb R_+, \; \omega \ge n^2 \pi ^2, \end{aligned}$$

we estimate that

$$\begin{aligned}&\int _{\mathbb R} \Big \vert (1+ i\omega )^\beta \frac{h_n(i \omega ; \kappa )}{1+ i \omega } \Big \vert ^2 \, \textrm{d}\omega \\&\quad \lesssim \frac{2}{(n \pi )^2}\Big (\int _0^{n^2 \pi ^2} + \int _{n^2 \pi ^2}^\infty \Big )(1+\omega ^2)^{\beta -1} \exp \big (\sqrt{2}(\kappa -1)(\omega ^2 + n^4 \pi ^4)^{\frac{1}{4}} \big )\, \textrm{d}\omega \\&\quad \lesssim \frac{2}{(n \pi )^2} \textrm{e}^{\sqrt{2}(\kappa -1)n\pi } \Big (\int _0^{n^2 \pi ^2} (1+\omega ^2)^{\beta -1} \, \textrm{d}\omega + \int _{n^2 \pi ^2}^\infty (1+\omega ^2)^{\beta -1} \textrm{e}^{2c_1 (\kappa -1)\sqrt{\omega }} \, \textrm{d}\omega \Big ) \\&\quad \le \frac{q_\beta (n\pi )}{2n} \textrm{e}^{\sqrt{2}(\kappa -1)n \pi } \quad \forall \, n \in \mathbb N, \; \forall \, \beta \ge 0\,, \end{aligned}$$

where \(q_\beta \) and \(c_1\) are a certain polynomial and positive constant, respectively. Hence, we have shown that

$$\begin{aligned} \Vert s \mapsto (1+s)^{\beta -1} h_n(s;\kappa ) \Vert _{{{\mathcal {H}}}^2} \lesssim \frac{\sqrt{q_\beta (n\pi )}}{n} \textrm{e}^{(\sqrt{2}/2)(\kappa -1)n \pi } \quad \forall \, n \in \mathbb N, \; \forall \, \beta \ge 0. \end{aligned}$$

Evidently, by the Cauchy–Schwarz inequality,

$$\begin{aligned} |\gamma _n(v) |= |\sqrt{2}\langle v , \sin (n\pi \,\cdot ) \rangle _{L^2(0,1)} |&\lesssim \Vert v\Vert _{L^2(0,1)} \Vert \sin (n\pi \,\cdot )\Vert _{L^2(0,1)} \\&\lesssim \Vert v \Vert _{L^2(0,1)} \quad \forall \, n \in \mathbb N. \end{aligned}$$

Consequently, invoking the above inequalities, we have that

$$\begin{aligned}&\Vert s \mapsto (1+s)^{\beta -1} \textbf{G}(s)v \Vert _{{{\mathcal {H}}}^2(L^2(0,1))} \\&\le \sum _{n=1}^\infty \Vert s \mapsto (1+s)^{\beta -1}h_n(s;\kappa ) \Vert _{{{\mathcal {H}}}^2} \, \Vert \gamma _n(v)\, \sqrt{2}\sin (n\pi \,\cdot ) \Vert _{L^2(0,1)} \\&\lesssim \Vert v \Vert _{L^2(0,1)} \sum _{n=1}^\infty \frac{\sqrt{q_\beta (n\pi )}}{n} \textrm{e}^{(\sqrt{2}/2)(\kappa -1)n \pi } \lesssim \Vert v \Vert _{L^2(0,1)} \quad \forall \, v \in L^2(0,1) \,, \end{aligned}$$

where we have crucially used that \(\kappa \in [0,1)\) so that the infinite series involving \(q_\beta \) is summable. We conclude that \(s \mapsto (1+s)^{\beta -1}\textbf{G}(s) \in {{\mathcal {H}}}^2_\textrm{str}({{\mathcal {B}}}(U))\) and, therefore, from Theorem 3.5 with \(D_k = 0\) for every \(k \in \mathbb N\), that the associated input–output map G maps \(H^\beta (\mathbb R_+,U)\) continuously into itself for all \(\beta \ge 0\). The above analysis relies essentially on the strict inequality \(\kappa <1\) and fails when \(\kappa = 1\). Indeed, in this case it can be shown that G does not continuously map \(H^\beta (\mathbb R_+,U)\) into itself for \(\beta > 1/2\).
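The decisive estimate above, \(\vert h_n(i\omega ;\kappa )\vert \lesssim (n\pi )^{-1}\exp \big ((1/\sqrt{2})(\kappa -1)(\omega ^2+n^4\pi ^4)^{1/4}\big )\), admits a numerical sanity check. In the sketch below the implied constant is taken to be 3, a value we verify only on the sampled grid, with the illustrative choice \(\kappa = 1/2\):

```python
import numpy as np

kappa = 0.5  # illustrative parameter in [0, 1)

def h_n(omega, n):
    # h_n(i omega; kappa) = cosh(kappa z) / (z sinh(z)), z = sqrt(i omega + n^2 pi^2)
    z = np.sqrt(1j * omega + (n * np.pi) ** 2)
    return np.cosh(kappa * z) / (z * np.sinh(z))

def claimed_bound(omega, n):
    # (n pi)^{-1} exp((1/sqrt(2)) (kappa - 1) (omega^2 + n^4 pi^4)^{1/4})
    rho = (omega**2 + (n * np.pi) ** 4) ** 0.25
    return np.exp((kappa - 1.0) * rho / np.sqrt(2.0)) / (n * np.pi)

omegas = np.linspace(0.0, 50.0, 5001)
for n in range(1, 7):
    ratio = np.abs(h_n(omegas, n)) / claimed_bound(omegas, n)
    # The ratio stays below a modest constant on the sampled grid.
    assert ratio.max() < 3.0
```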

4.1 Regular Linear Systems

Here we connect the results of Sect. 3 to the concept of regular systems in mathematical systems and control theory. Regularity in this context was originally defined as a property of G, but a number of characterisations are available in terms of the associated function \(\textbf{G}\) as in Theorem 1.1. We recall from [35, Definition 5.6.1, p. 318] that weak, strong or uniform regularity is equivalent to the existence of the following limit

$$\begin{aligned}\lim _{\begin{array}{c} s \in \mathbb R_+ \\ s \rightarrow \infty \end{array}} \textbf{G}(s),\end{aligned}$$

in the weak, strong or uniform topology, respectively. The above concept of regularity dates back to [41], and was further developed in, for example, [43] (see also the discussion on [43, p. 833]) and [36]. A number of further refinements of regularity appear in [26, Definition 6.2.3], including that of uniform line-regularity, namely that \(\textbf{G}(s)\) has a limit in the uniform topology as \(\textrm{Re}(s) \rightarrow \infty \). As noted in [26, Section 6.2, Notes], the concept of uniform line-regularity dates much further back to the 1970s in [19, p. 155], although the terminology regular is not used there.

In all cases, the resulting linear operator D defined by

$$\begin{aligned} D u:= \lim _{\begin{array}{c} s \in \mathbb R_+ \\ s \rightarrow \infty \end{array}} \textbf{G}(s) u \quad \forall \, u \in U, \end{aligned}$$

is called the feedthrough operator, and belongs to \({{\mathcal {B}}}(U)\) by the uniform boundedness principle. To quote [35, p. 318]: “Most of the systems appearing in practice seem to be regular.” An example of a non-regular function may be found in [37, Example 8.4], viz. the function \(\textbf{G}: \mathbb C_0 \rightarrow \mathbb C\) given by

$$\begin{aligned} \textbf{G}(s) = \cos (\log (s^2 + 1)),\end{aligned}$$

where \(\log \) is defined to be analytic on the split plane \(\mathbb C\backslash (-\infty ,0]\). Furthermore, [35, Example 5.7.4] contains a function which is weakly regular, but not strongly regular, and one which is strongly regular, but not uniformly regular.
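The failure of regularity for this example is visible numerically: along \(s_k = \sqrt{\textrm{e}^{k\pi }-1} \rightarrow \infty \) we have \(\log (s_k^2+1) = k\pi \), so \(\textbf{G}(s_k) = \cos (k\pi ) = (-1)^k\) oscillates and no limit along \(\mathbb R_+\) exists. A minimal check:

```python
import math

def G(s):
    # The non-regular symbol of [37, Example 8.4], evaluated on the positive reals
    return math.cos(math.log(s * s + 1.0))

# Along s_k = sqrt(e^{k pi} - 1) -> infinity, G(s_k) = cos(k pi) = (-1)^k,
# so G has no limit as s -> infinity in R_+ (no feedthrough operator exists).
values = [G(math.sqrt(math.expm1(k * math.pi))) for k in range(1, 11)]
assert all(abs(v - (-1.0) ** k) < 1e-6 for k, v in enumerate(values, start=1))
```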

Our next result relates the regularity property to additional continuity properties of bounded, linear, right-shift invariant operators on \(L^2(\mathbb R_+,U)\). It follows from Corollary 3.11.

Corollary 4.5

Suppose that \(G = {{\mathcal {L}}}^{-1} {{\mathcal {M}}}_\textbf{G}{{\mathcal {L}}}: L^2(\mathbb R_+,U) \rightarrow L^2(\mathbb R_+,U)\) is a bounded, linear, right-shift invariant operator, with \(\textbf{G}\in {{\mathcal {H}}}^\infty ({{\mathcal {B}}}(U))\). The following statements are equivalent.

  1. (i)

    The restriction of G to \(H^1(\mathbb R_+,U)\) maps continuously into \(H^1(\mathbb R_+,U)\) ;

  2. (ii)

    G is uniformly line-regular with feedthrough D and \(\textbf{G}- D \in {{\mathcal {H}}}^2_\textrm{str}({{\mathcal {B}}}(U))\) ;

  3. (iii)

    G is weakly regular with feedthrough D and \(\textbf{G}- D \in {{\mathcal {H}}}^2_\textrm{str}({{\mathcal {B}}}(U))\).

In each case the feedthrough operator is equal to \(u \mapsto G(g_1 u)(0)\).

Proof

If statement (i) holds, then an application of Corollary 3.11 yields that \(G - D = h *_\textrm{e}\) for some \(D \in {{\mathcal {B}}}(U)\) and \(h \in {{\mathcal {B}}}(U, L^2(\mathbb R_+,U))\). It now follows from [26, Proposition 6.3.4, (a3)] with \(p=2\) that \(G - D\) is uniformly line-regular with zero feedthrough, and hence G is trivially uniformly line-regular with feedthrough D. Further, invoking Lemma 3.9, we compute that

$$\begin{aligned} G(g_1 u)(0) = (h *(g_1 u))(0) + D (g_1 u)(0) = Du \quad \forall \, u \in U,\end{aligned}$$

giving the desired formula for D. Theorem 3.5 gives that \(D_1 v:= G(g_1 v)(0) = D v\) is such that \(\textbf{G}- D_1 \in {{\mathcal {H}}}^2_\textrm{str}({{\mathcal {B}}}(U))\), as required.

That statement (ii) implies statement (iii) is trivial, and that statement (iii) implies statement (i) follows from Corollary 3.11 with \(m = 1\) and \(D_1 = D\). \(\square \)

An interesting facet of Corollary 4.5 is that the combination of weak regularity with feedthrough D and \(\textbf{G}- D \in {{\mathcal {H}}}^2_\textrm{str}({{\mathcal {B}}}(U))\) is sufficient for the a priori stronger property of uniform line-regularity with feedthrough D and \(\textbf{G}- D \in {{\mathcal {H}}}^2_\textrm{str}({{\mathcal {B}}}(U))\). Observe further that regularity is a necessary condition for G to map \(H^1(\mathbb R_+,U)\) continuously into itself. As a complementary approach, in Appendix 4.2 we provide an elementary proof of the regularity aspect in the implication (i) \(\Rightarrow \) (ii) which does not require the results of [26].

4.2 Pritchard–Salamon Systems

Pritchard–Salamon (PS) systems are a class of infinite-dimensional state-space linear control systems, dating back to [31, 32]. At their heart are three operators (ABC) and Hilbert spaces \(W \hookrightarrow V\) with A generating a \(C_0\)-semigroup on V, which restricts to a semigroup on W. The input map B is bounded \(U \rightarrow V\), and induces a bounded controllability operator \(L^2((0,t),U) \rightarrow W\) for some (hence all) \(t>0\). The output map C is bounded \(W \rightarrow U\), and induces a bounded observability operator \(V \rightarrow L^2((0,t),U)\) for some (hence all) \(t>0\). These admissibility concepts are dual to one another. Nowadays, Pritchard–Salamon systems have been generalised to well-posed linear systems and system nodes, but they were popular for a number of years and arguably helped pave the way for the contemporary abstract functional-analytic understanding of infinite-dimensional state-space linear control systems. They also have a number of appealing properties, such as being closed under feedback. Studies of Pritchard–Salamon systems include [10, 34] and [44] and, for more historical information, we refer the reader to [35, Section 2.9] as well as, for example, [9] and [11].

By appealing to the combination of our results and those of [27], we are able to provide a criterion for when the conclusions of Proposition 3.10 hold with \(\beta =1\). For convenience, we recall [27, Theorem 1.2], the main result of that paper, presented with the notation used currently. The symbol D below refers to a feedthrough operator. For brevity, we refer to [27] for the remaining definitions.

Theorem 4.6

(Theorem 1.2, [27]) Let \(\gamma \in \mathbb R\) and let \(\textbf{G}: \mathbb C_\gamma \rightarrow {{\mathcal {B}}}(U)\) be holomorphic. The following statements hold.

  1. (1)

    \(\textbf{G}\) has a realisation with a bounded input operator and \(D=0\) if, and only if, there exists \(\alpha \in \mathbb R\) such that \(s \mapsto \textbf{G}(s)u \in {{\mathcal {H}}}^2(\mathbb C_\alpha , U)\) for all \(u \in U\).

  2. (2)

    \(\textbf{G}\) has a realisation with a bounded output operator and \(D=0\) if, and only if, there exists \(\alpha \in \mathbb R\) such that \(s \mapsto \textbf{G}({\overline{s}})^*y \in {{\mathcal {H}}}^2(\mathbb C_\alpha , U)\) for all \(y \in U\).

  3. (3)

    \(\textbf{G}\) has a realisation as a Pritchard–Salamon system with \(D=0\) if, and only if, the conditions in both statements (1) and (2) hold.

The Closed Graph Theorem yields that the above condition \(s \mapsto \textbf{G}(s)u \in {{\mathcal {H}}}^2(\mathbb C_0, U)\) for all \(u \in U\) (here \(\alpha =0\)) is equivalent to \(\textbf{G}\in {{\mathcal {H}}}^2_\textrm{str}({{\mathcal {B}}}(U))\)—see also [27, Lemma 3.1]. The analogous conclusion applies to the condition \(s \mapsto \textbf{G}({\overline{s}})^*y \in {{\mathcal {H}}}^2(\mathbb C_0, U)\) for all \(y \in U\). The conditions (1) and (2) above are equivalent when U is finite-dimensional, but are not equivalent in general.

The upshot of the above result is that, in light of Corollary 3.11, when U is finite-dimensional the bounded, linear, right-shift invariant operators \(G: L^2(\mathbb R_+,U) \rightarrow L^2(\mathbb R_+,U)\) whose restrictions to \(H^1(\mathbb R_+,U)\) map continuously into \(H^1(\mathbb R_+,U)\) are precisely those that may be realised by an input–output stable Pritchard–Salamon system. In particular, this immediately provides numerous examples and counterexamples of where Corollary 3.11 applies; for example, those discussed in [9] and [11].