1 Introduction

Following the seminal work of J.R. Cannon [6], a semigroup-theoretical study of diffusion and wave equations associated with one-dimensional Laplace operators equipped with integral conditions has recently been commenced in [9], where an abstract framework for studying such problems in Hilbert spaces has been proposed. Paper [5] presents a different approach, applicable apparently in a broader context: it shows that the recently developed Lord Kelvin’s method of images [3, 4] provides natural tools for constructing moments-preserving cosine families. In particular, the main theorem of [5] states that there is a unique cosine family generated by a Laplace operator in C[0,1] that preserves the moments of orders 0 and 1 (about 0).

In this context, a natural question arises of whether and to what extent the main result of [5] may be generalized to the case of two arbitrary moments of orders, say, i and j, where i<j are two non-negative integers. Our main theorem (Theorem 2.1) provides the following answer to this question: a necessary and sufficient condition for existence of a cosine family generated by a Laplace operator in C[0,1] that preserves the moments of order i and j (about 0) is that i=0. In particular, there are no Laplace-operator-generated cosine families that preserve two moments of order larger than 0. Moreover, the cosine family that preserves the moments of order 0 and j≥1 is uniquely determined, and can be constructed (almost) explicitly by means of the abstract Kelvin formula (3.1). Extensions to non-integer moments are also discussed.

2 Preservation of moments about 0

Let C[0,1] be the Banach space of continuous functions on the unit interval, and let \(C^2[0,1]\) be its subspace of twice continuously differentiable functions. (In what follows we think of real-valued functions, but this is merely to fix attention; the same analysis can be performed in the space of complex functions as well.) By \(\mathfrak{L}_{c}\) we denote the class of restrictions of the Laplace operator \(Lf=f''\), \(D(L)=C^2[0,1]\), to various domains that generate (strongly continuous) cosine families in C[0,1]. In other words, a member of \(\mathfrak{L}_{c}\) is a closed linear operator A that generates a strongly continuous cosine family in C[0,1] and coincides with L on its domain. The cosine family generated by A will be denoted \(C_{A}= (C_{A}(t))_{t\in\mathbb {R}}\). We recall (see, e.g., [1, Proof of Theorem 3.14.17] or [8, Theorem 8.7]) that \(A \in\mathfrak{L}_{c}\) also generates a strongly continuous semigroup \(S_A=(S_A(t))_{t\ge 0}\); the latter semigroup is given by the abstract Weierstrass formula

$$ S_A(t)f = \frac{1}{\sqrt{\pi t}} \int_0^{\infty} \mathrm{e}^{-\tau^2/4t} C_A(\tau)f \, \mathrm{d}\tau\quad(t > 0, f \in C[0,1]), $$
(2.1)

and \(S_A(0)=\operatorname{Id}_{C[0,1]}\) (the identity operator in C[0,1]). In particular, \(\mathfrak{L}_{c}\) is a subclass of the class \(\mathfrak{L}_{s}\) of restrictions of the Laplace operator that generate strongly continuous semigroups in C[0,1].
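The Weierstrass formula (2.1) can be sanity-checked in the simplest scalar model, where the cosine family generated by \(A=-\omega^2\) is \(C(\tau)=\cos(\omega\tau)\) and the semigroup should come out as \(S(t)=\mathrm{e}^{-\omega^2 t}\). A numerical sketch (Python with SciPy; the parameter values are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

def weierstrass(omega, t):
    # right-hand side of (2.1) for the scalar cosine family
    # C(tau) = cos(omega * tau), generated by A = -omega**2
    integrand = lambda tau: np.exp(-tau**2 / (4 * t)) * np.cos(omega * tau)
    value, _ = quad(integrand, 0.0, np.inf)
    return value / np.sqrt(np.pi * t)

# the semigroup generated by A = -omega**2 is S(t) = exp(-omega**2 * t)
print(weierstrass(2.0, 0.5), np.exp(-2.0**2 * 0.5))
```

The classical Gaussian integral \(\int_0^\infty \mathrm{e}^{-\tau^2/4t}\cos(\omega\tau)\,\mathrm{d}\tau=\sqrt{\pi t}\,\mathrm{e}^{-\omega^2 t}\) is what makes (2.1) work in this model.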

Let \(\mathbb{N}\) be the set of non-negative integers, and let \(F_i\) denote the moment of order i about 0, i.e., the linear functional on C[0,1] defined by

$$ F_i f := \int_0^1 \mathrm{k}_i(x) f(x)\, \mathrm{d}x,\quad{i\in\mathbb{N}}, $$
(2.2)

where

$$\mathrm{k}_i(x) := x^i . $$

Given \(A \in\mathfrak{L}_{c}\), we say that the related cosine family \(C_A\) preserves the ith moment iff for all \(t\in\mathbb{R}\) and \(f \in C[0,1]\), we have \(F_i C_A(t)f=F_i f\). Analogously, for \(A \in \mathfrak{L}_{s}\) we say that the related semigroup preserves the ith moment iff for all t≥0 and \(f \in C[0,1]\), we have \(F_i S_A(t)f=F_i f\). Observe that, by the Weierstrass formula, if \(C_A\) preserves \(F_i\) then so does \(S_A\) (see Proposition 2.2, later on).

Theorem 2.1

Let \(i,j\in\mathbb{N}\) with i<j be given.

  1. (a)

    If i=0, then there is exactly one \(A \in\mathfrak{L}_{c}\) such that the related cosine family preserves F i and F j .

  2. (b)

    If i>0, then there is no \(A \in\mathfrak{L}_{s}\) such that the related semigroup preserves F i and F j .

This theorem will be proved in two steps. First, following [5], we will relate preservation of moments to boundary conditions and show that a generator \(A \in\mathfrak{L}_{s}\) of a semigroup that preserves two moments of orders i,j≥1 cannot be densely defined (thus establishing (b)). Then, in the next section, we will construct the moments-preserving cosine family of part (a).

Before continuing, we note that in case (a), a generation theorem for moments-preserving semigroups has been obtained in [9, Theorem 3.4].

Proposition 2.2

Let A be a member of \(\mathfrak{L}_{c}\) and let \(i \in\mathbb{N}\). The following statements are equivalent.

  1. (a)

    The cosine family C A preserves F i .

  2. (b)

    The semigroup S A preserves F i .

  3. (c)

    For \(f \in D(A)\), we have \(F_i(f'')=0\).

  4. (d)

    For \(f \in D(A)\), we have:

    $$ \begin{array}{ll@{\quad}l} f'(0)=& f'(1) &\textit{if }i =0,\\ f'(1)=& f(1) - f(0) &\textit{if }i =1, \\ if(1)=& i(i-1)F_{i-2}f + f'(1) &\textit{if }i \ge2. \end{array} $$
    (2.3)

Proof

If we assume that C A preserves F i , then since F i is bounded,

$$F_i S_A(t) f = \frac{1}{\sqrt{\pi t}} \int_0^{\infty} \mathrm{e}^{-\tau^2/4t} F_i C_A(\tau) f \, \mathrm{d}\tau= F_i f, $$

by (2.1) and because \(\int_{0}^{\infty} \mathrm{e}^{-\tau^{2}/4t} \, \mathrm{d}\tau= \sqrt{\pi t} \). This proves that (a) implies (b).

In order to prove that (b) implies (c), let \(f \in D(A)\) and consider

$$u_i:[0,\infty) \ni t \mapsto F_i(S_A(t) f) \in\mathbb{R}. $$

The scalar-valued function \(u_i\) is differentiable with \(u_{i}'(t) = F_{i}(S_{A}(t) A f) = F_{i}(S_{A}(t) f'')\). If \(S_A\) preserves \(F_i\), the function \(u_i\) is constant, and hence \(u_{i}'(t) = 0 \) for t∈[0,∞). Thus, in particular, \(u_{i}'(0) = F_{i}(f'') = 0 \).

The equivalence of (c) and (d) is evident since

$$ F_i (f') = f(1) - iF_{i-1} f $$
(2.4)

holds for all i≥1 and all \(f \in C^1[0,1]\).
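Identity (2.4) is just integration by parts, \(\int_0^1 x^i f'(x)\,\mathrm{d}x = f(1) - i\int_0^1 x^{i-1}f(x)\,\mathrm{d}x\); a symbolic spot check with an arbitrary sample polynomial (SymPy):

```python
import sympy as sp

x = sp.symbols('x')
f = x**3 + 2 * x + 1          # a sample C^1 function on [0,1]

for i in range(1, 6):
    lhs = sp.integrate(x**i * sp.diff(f, x), (x, 0, 1))              # F_i(f')
    rhs = f.subs(x, 1) - i * sp.integrate(x**(i - 1) * f, (x, 0, 1))  # f(1) - i F_{i-1} f
    assert sp.simplify(lhs - rhs) == 0
print("identity (2.4) verified for i = 1,...,5")
```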

Finally, assume condition (c) holds. We show that the cosine family \(C_A\) preserves \(F_i\). First observe that if \(f \in D(A)\), then \(F_{i}(\frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}}C_{A}(t) f)= F_{i}(A C_{A}(t) f) = 0 \), because \(C_A(t)(D(A))\subset D(A)\). Then, similarly as in the proof of (b) ⇒ (c),

$$v_i:[0,\infty) \ni t \mapsto F_i(C_A(t) f) \in\mathbb{R} $$

is a scalar-valued, twice differentiable function with \(v_{i}'(t) = F_{i}(\int_{0}^{t} C_{A}(s) \, \mathrm{d}s f'') \) and \(v_{i}''(t) = F_{i}(C_{A}(t) f'') = 0\). Therefore, since \(v_{i}'(0) = 0 \), \(v_i\) is constant, i.e., \(F_i(C_A(t)f)=F_i(C_A(0)f)=F_i f\) for \(f \in D(A)\). This completes the proof, because D(A) is dense in C[0,1]. □

Corollary 2.3

For \(i,j\ge 1\), \(i<j\), the set

$$D_{i,j} = \{ f \in C^2[0,1] \mid F_i(f'') = F_j(f'') =0 \} $$

is not dense in C[0,1].

Proof

Since (c) and (d) in Proposition 2.2 are equivalent,

$$D_{i,j} \subset\operatorname{Ker}H_{i,j}, $$

where H i,j is a bounded linear functional on C[0,1] given by

$$H_{i,j} f= (j-1) f(1) + f(0) - j(j-1) F_{j-2} f $$

for i=1, and

$$H_{i,j} f = (i-j) f(1) - i(i-1) F_{i-2} f + j(j-1) F_{j-2} f $$

for i≥2. We note that H i,j is non-zero, because

$$H_{1,j} \mathrm{k}_2 = j-1 - j(j-1) \frac{1}{j+1} = \frac{j-1}{j+1} \neq0, $$

and for i≥2,

$$H_{i,j} \mathrm{k}_2 = i - j - \frac{i(i-1)}{i+1} + \frac{j(j-1)}{j+1} =\frac{2(i-j)}{(i+1)(j+1)} \neq0. $$

Hence, the corollary follows since \(\operatorname{Ker}H_{i,j}\) is closed and not equal to C[0,1]. □
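The two displayed values of \(H_{i,j}\mathrm{k}_2\) can be confirmed symbolically; the sketch below recomputes them for a range of indices (SymPy; the ranges are arbitrary):

```python
import sympy as sp

x = sp.symbols('x')

def F(i, f):
    # moment of order i about 0, as in (2.2)
    return sp.integrate(x**i * f, (x, 0, 1))

k2 = x**2
for j in range(2, 8):     # the case i = 1
    H = (j - 1) * k2.subs(x, 1) + k2.subs(x, 0) - j * (j - 1) * F(j - 2, k2)
    assert sp.simplify(H - sp.Rational(j - 1, j + 1)) == 0
for i in range(2, 6):     # the case i >= 2
    for j in range(i + 1, 9):
        H = (i - j) * k2.subs(x, 1) - i * (i - 1) * F(i - 2, k2) \
            + j * (j - 1) * F(j - 2, k2)
        assert sp.simplify(H - sp.Rational(2 * (i - j), (i + 1) * (j + 1))) == 0
print("both formulas for H_{i,j} k_2 confirmed")
```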

This corollary clearly implies (b) in Theorem 2.1. For, if the semigroup generated by \(A \in\mathfrak{L}_{s}\) preserves the moments \(F_i\) and \(F_j\), then, by Proposition 2.2, \(D(A)\subset D_{i,j}\), which contradicts the fact that D(A) is dense in C[0,1].

3 Proof of the case i=0,j≥1

Let \(C(\mathbb{R})\) be the Fréchet space of continuous functions on \(\mathbb{R}\) with topology of almost uniform convergence, and let \((C(t))_{t \in \mathbb{R} }\) be the basic cosine family in \(C(\mathbb{R})\) given by the D’Alembert formula,

$$ C(t) f (x) := \frac{1}{2} (f(x+t) + f(x-t)) , \quad t,x \in\mathbb{R}. $$
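That the D’Alembert formula defines a cosine family, i.e., that \(2C(t)C(s)=C(t+s)+C(t-s)\), reduces to regrouping the four translates of f; a numerical illustration with an arbitrary sample function:

```python
import numpy as np

def C(t, g):
    # d'Alembert formula: (C(t)g)(x) = (g(x+t) + g(x-t)) / 2
    return lambda z: 0.5 * (g(z + t) + g(z - t))

f = lambda z: np.sin(z) + 0.3 * z**2     # arbitrary f in C(R)
xs = np.linspace(-3.0, 3.0, 13)
t, s = 0.7, 1.9

lhs = 2.0 * C(t, C(s, f))(xs)
rhs = C(t + s, f)(xs) + C(t - s, f)(xs)
assert np.allclose(lhs, rhs)
print("cosine functional equation holds for the sample f")
```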

Also, let F j be the linear functional on \(C(\mathbb{R})\) defined by

$$ F_j f := \int_0^1 \mathrm{k}_j(x) f(x)\, \mathrm{d}x,\quad{j\in\mathbb{N}}. $$

The fact that two distinct objects, the functional in \(C(\mathbb{R})\) defined here, and the functional on C[0,1] defined in (2.2), are denoted by the same letter, should not lead to misunderstanding. Clearly, F i is continuous both on C[0,1] and \(C(\mathbb{R})\) for all \(i\in\mathbb{N}\).

In the theory of semigroups of linear operators and the related theory of cosine families, Lord Kelvin’s method of images can be thought of as a way of constructing families of operators generated by an operator with a boundary condition by means of families generated by the same operator in a larger space, where no boundary conditions are imposed (cf. [3, 4]). In our particular context, the method boils down to constructing a cosine family \(C_{\mathrm{mp}}=(C_{\mathrm{mp}}(t))_{t \in\mathbb{R}}\) in C[0,1] via the formula

$$ C_{\mathrm{mp}}(t) f(x) = C(t) \widetilde{f}(x), \quad x \in[0,1],\; t\in\mathbb{R},\; f \in C[0,1], $$
(3.1)

where ‘mp’ stands for ‘moments-preserving’ and, more importantly, \(\widetilde{f}\in C(\mathbb{R}) \) is a certain extension of f, chosen in such a way that \(C_{\mathrm{mp}}\) preserves both \(F_0\) and \(F_j\). To be more specific: given \(f \in C[0,1]\) and j≥1, we are looking for an \(\widetilde{f}:\mathbb{R}\to\mathbb{R}\) such that

  1. (A)

    \(\widetilde{f}\in C(\mathbb{R})\) and \(\widetilde {f}(x) = f(x)\) for all x∈[0,1],

  2. (B)

    \(F_{0} C(t) \widetilde{f}= F_{0} f\) for all \(t \in\mathbb {R}\), and

  3. (C)

    \(F_{j} C(t) \widetilde{f}= F_{j} f\) for all \(t \in\mathbb{R}\).

Existence of such an extension is secured by Proposition 3.2, later on.

Before proceeding, we need to introduce some notation. For a function f defined on [0,1], let \(f^e\) be its symmetric reflection about \(\frac{1}{2} \), that is, \(f^e(x)=f(1-x)\) for x∈[0,1]. Moreover, given two functions \(f,\phi \in C[0,1]\), let \(f\ast\phi \in C[0,1]\) be their convolution:

$$(f \ast\phi)(x) = \int_0^x f(x-y) \phi(y) \, \mathrm{d}y, \quad x \in[0,1]. $$

Observe that

$$\lVert f * \phi\rVert_{C[0,1]} \leq\lVert f \rVert_{C[0,1]} \lVert \phi\rVert_{C[0,1]}. $$

Finally, if \(f \in C^1[0,1]\), then \(f\ast\phi \in C^1[0,1]\) with

$$ (f \ast\phi)' = f(0) \phi+ f' \ast\phi. $$
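Both displayed properties of the convolution are standard; the derivative rule, in particular, can be spot-checked symbolically (SymPy, with an arbitrary sample pair f, φ):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.cos(2 * x) + 1        # sample C^1 function
phi = x**2 - x               # sample continuous function

# (f*phi)(x) = int_0^x f(x-y) phi(y) dy
conv = sp.integrate(f.subs(x, x - y) * phi.subs(x, y), (y, 0, x))
lhs = sp.diff(conv, x)
# the rule (f*phi)' = f(0) phi + f' * phi
rhs = f.subs(x, 0) * phi \
    + sp.integrate(sp.diff(f, x).subs(x, x - y) * phi.subs(x, y), (y, 0, x))

for xv in (0.25, 0.5, 1.0):
    assert abs(float((lhs - rhs).subs(x, xv))) < 1e-12
print("derivative rule for the convolution confirmed at sample points")
```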

For the proof of Proposition 3.2, we need the following lemma.

Lemma 3.1

For \(g,\phi \in C[0,1]\), there exists a unique \(f \in C[0,1]\) such that

$$f - \phi\ast f = g. $$

Moreover, if \(\phi \in C^1[0,1]\) and \(g \in C^2[0,1]\), then \(f \in C^2[0,1]\).

Proof

Let us define the Bielecki-type norm [2, 7]

$$ \lVert f\rVert_{\lambda} = \sup_{x \in[0,1]} \lvert\mathrm{e}^{-\lambda x} f(x)\rvert $$

in C[0,1] with λ>0. We note that this norm is equivalent to the usual supremum norm in this space. Consider the operator T:C[0,1]→C[0,1] given by

$$Tf= g + \phi\ast f. $$

If \(f_1, f_2 \in C[0,1]\), then

$$\begin{aligned} \lVert Tf_1 - Tf_2 \rVert_{\lambda} &= \lVert(f_1 - f_2) *\phi\rVert_{\lambda} \\ &\leq\sup_{x \in[0,1]} \int_0^x \mathrm{e}^{-\lambda(x-y)} \lVert f_1 - f_2 \rVert_{\lambda} |\phi(x-y) |\, \mathrm{d}y \\ &\le\int_0^1\mathrm{e}^{-\lambda x}\, \mathrm{d}x \, \lVert\phi\rVert_{C[0,1]} \lVert f_1 - f_2 \rVert_{\lambda}. \end{aligned}$$

Hence, λ can be chosen so large that \(\lVert Tf_1 - Tf_2 \rVert_{\lambda} \le C_{\lambda} \lVert f_1 - f_2 \rVert_{\lambda}\) for some \(C_{\lambda} \in (0,1)\), and by the Banach fixed point theorem, there exists a unique \(f \in C[0,1]\) such that Tf=f.

In order to prove the second part of the lemma, recall that the unique \(f \in C[0,1]\) constructed above satisfies \(f=\lim_{n\to\infty} f_n\) in C[0,1], where

$$f_n = T^n g = g + \sum_{k=1}^{n} \phi_k \ast g, $$

and \(\phi_1:=\phi\), \(\phi_{k+1}:=\phi_k\ast\phi\), k≥1. Observe that for \(\phi \in C^1[0,1]\) we have \(\phi_k \in C^1[0,1]\), k≥1, by induction. Hence, if we assume that g is twice continuously differentiable, then so is \(f_n\), n≥1. Moreover, since \(\phi_{k+1}' = \phi(0) \phi_{k} + \phi' \ast\phi_{k} \), k≥1, we have

$$\begin{aligned} Lf_n = f_n'' &= g'' + g'(0) \sum_{k=1}^n \phi_k + g(0) \sum_{k=1}^n\phi_k' + g'' \ast\sum_{k=1}^n \phi_k \\ &= g'' + g(0) \phi' + g'(0) \phi_n + g'' \ast\phi_n \\ &\phantom{={}} + \bigl(g'(0) + g(0)\phi(0)\bigr) \sum_{k=1}^{n-1}\phi_k \\ &\phantom{={}} + \bigl(g(0) \phi' + g''\bigr) \ast\sum_{k=1}^{n-1} \phi_k. \end{aligned}$$

Therefore, \((f_{n}'')_{n \geq1} \) converges in C[0,1], for

$$\lVert\phi_k \rVert_{C[0,1]} \leq\frac{\lVert\phi \rVert _{C[0,1]}^k}{(k-1)!}, \quad k \geq 1. $$

Since L is closed, f=lim n→∞ f n is twice continuously differentiable, which completes the proof. □
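The proof of Lemma 3.1 is constructive: the solution is the limit of the Picard iterates \(f_{n+1}=g+\phi\ast f_n\). A numerical sketch of this iteration on a grid (sample g and φ, grid size, and iteration count are arbitrary choices):

```python
import numpy as np

n = 1501
xs = np.linspace(0.0, 1.0, n)
h = xs[1] - xs[0]
g = np.cos(3 * xs)        # sample data g
phi = 1.0 + xs**2         # sample kernel phi

def conv(a, b):
    # (a*b)(x_k) = int_0^{x_k} a(x_k - y) b(y) dy, trapezoidal rule
    out = np.zeros(n)
    for k in range(1, n):
        w = a[k::-1] * b[:k + 1]
        out[k] = h * (w.sum() - 0.5 * (w[0] + w[-1]))
    return out

f = g.copy()
for _ in range(50):       # Picard iteration f_{n+1} = Tf_n = g + phi * f_n
    f = g + conv(phi, f)

residual = np.max(np.abs(f - conv(phi, f) - g))
print("max residual of f - phi*f = g:", residual)
```

The iterates converge factorially fast, in line with the bound \(\lVert\phi_k\rVert_{C[0,1]}\le\lVert\phi\rVert_{C[0,1]}^k/(k-1)!\) above, so fifty iterations reach machine precision.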

Proposition 3.2

For \(f \in C[0,1]\), an extension \(\widetilde{f}\) that fulfills conditions (A)–(C), listed above, exists and is uniquely determined.

Proof

It suffices to find, for all \(n \in\mathbb{N}\), functions \(g_n, h_n \in C[0,1]\) related to \(\widetilde{f} \) as follows:

$$ g_n(x) = \widetilde{f}(x+n), \qquad h_n(x) = \widetilde{f}(1-x-n), \quad x\in[0,1]. $$
(3.2)

Since we want \(\widetilde{f} \) to be well-defined and continuous, these functions must satisfy compatibility conditions:

$$ h_{n+1}(0) = h_n(1), \qquad g_{n+1}(0) = g_n(1), \quad n \in\mathbb{N}. $$
(3.3)

The proof of [5, Proposition 2.2] shows that condition (B) is satisfied if and only if

$$ g_{n+1} + h_{n+1} = g_n + h_n, \quad n \in\mathbb{N}. $$
(3.4)

On the other hand, condition (C) holds if and only if

$$\int_t^{1+t} \mathrm{k}_j(y-t) \widetilde{f}(y) \, \mathrm{d}y + \int _{-t}^{1-t} \mathrm{k}_j(y+t)\widetilde{f}(y) \, \mathrm{d}y = 2\int_0^1 \mathrm{k}_j(x) f(x) \, \mathrm{d}x, $$

for t≥0. Differentiating with respect to t and then writing t=n+x,x∈[0,1], we see that this is equivalent to

$$\begin{aligned} & g_{n+1}(x) - j \int_{n+x}^{1+n+x} \mathrm{k}_{j-1}(y - n - x)\widetilde{f}(y) \, \mathrm{d}y \\ &\quad {} -h_n(x)+ j \int_{-n-x}^{1-n-x} \mathrm{k}_{j-1}(y + n + x) \widetilde{f}(y)\, \mathrm{d}y = 0, \end{aligned}$$

which is satisfied if and only if

$$\begin{aligned} & g_{n+1}(x) - j \biggl[\int_x^1 \mathrm{k}_{j-1}(y-x)g_n(y) \, \mathrm{d}y+\int_0^x \mathrm{k}_{j-1}^e (x-y) g_{n+1}(y) \, \mathrm{d}y\biggr] \\ &\quad {} - h_n(x) +j \biggl[ \int_0^x \mathrm{k}_{j-1} (x-y) h_{n+1}(y)\, \mathrm{d}y +\int_x^1\mathrm{k}_{j-1}^e(y-x) h_n(y) \, \mathrm{d}y\biggr] = 0, \end{aligned}$$

for x∈[0,1] and \(n \in\mathbb{N}\). Finally, since \(h_{n+1}=g_n+h_n-g_{n+1}\) by (3.4), condition (C) holds if and only if

$$\begin{aligned} &g_{n+1} - j (\mathrm{k}_{j-1} + \mathrm{k}_{j-1}^e) \ast g_{n+1} \\ &\quad{} = j (g_n^e *\mathrm{k}_{j-1})^e - j\mathrm{k}_{j-1} \ast(g_n + h_n) - j (h_n^e * \mathrm{k}_{j-1}^e)^e + h_n, \quad n\in\mathbb{N}. \end{aligned}$$
(3.5)

By Lemma 3.1, g n+1 is uniquely determined by the pair (g n ,h n ), and by (3.4) so is h n+1.

It remains to prove that for \(g_n\) and \(h_n\) defined recursively by \(h_0=f^e\), \(g_0=f\), (3.4), and (3.5), conditions (3.3) are satisfied. To this end, let

$$d_n := j F_{j-1}(h_n^e - g_n), \quad n \in\mathbb{N}. $$

Then, by (3.5),

$$ g_{n+1}(0) = h_n(0) - d_n \quad\mbox{and} \quad g_{n+1}(1) = h_n(1) - d_{n+1}. $$
(3.6)

By (3.4) this yields

$$ h_{n+1}(0) =g_n (0) + d_n \quad\mbox{and} \quad h_{n+1}(1) = g_n(1) + d_{n+1}. $$
(3.7)

By induction argument, it follows that for all \(n \in\mathbb{N}\),

$$ h_n(1) - g_n(0) = h_n(0) - g_n(1) = d_n. $$
(3.8)

For, the formula is evidently true for n=0, since \(d_0=jF_{j-1}(h_0^e-g_0)=jF_{j-1}0=0\), \(h_0(0)=f(1)=g_0(1)\), and \(h_0(1)=f(0)=g_0(0)\); relations (3.6) and (3.7) then yield the induction step.

Equalities (3.8) in turn show

$$ g_{n+1}(0) = h_n(0) - d_n = g_n(1) + d_n - d_n = g_n(1), $$

and

$$ h_{n+1}(0) = g_n(0) + d_n = h_n(1) - d_n + d_n = h_n(1), $$

establishing (3.3) and completing the proof. □
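The induction step from (3.6) and (3.7) to (3.8) is bookkeeping with the four boundary values; it can be checked symbolically (SymPy; the symbol names are ad hoc):

```python
import sympy as sp

# boundary values at step n and the constants d_n, d_{n+1}
gn0, gn1, hn0, hn1, dn, dn1 = sp.symbols('g_n0 g_n1 h_n0 h_n1 d_n d_np1')

# induction hypothesis (3.8): h_n(1) - g_n(0) = h_n(0) - g_n(1) = d_n
hyp = {hn1: gn0 + dn, hn0: gn1 + dn}

# recurrences (3.6) and (3.7)
gnp1_0 = hn0 - dn       # g_{n+1}(0)
gnp1_1 = hn1 - dn1      # g_{n+1}(1)
hnp1_0 = gn0 + dn       # h_{n+1}(0)
hnp1_1 = gn1 + dn1      # h_{n+1}(1)

# conclusion (3.8) at step n+1
assert sp.simplify((hnp1_1 - gnp1_0).subs(hyp) - dn1) == 0
assert sp.simplify((hnp1_0 - gnp1_1).subs(hyp) - dn1) == 0
print("induction step for (3.8) verified")
```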

Definition 3.3

Let \(f \in C[0,1]\). The function \(\widetilde{f}:\mathbb{R}\to\mathbb{R}\) defined in accordance with the rules (3.2), (3.4) and (3.5) is called the integral extension of f.

We note that the extension operator

$$E: C[0,1] \ni f \mapsto Ef := \widetilde{f}\in C(\mathbb{R}) $$

is continuous. Let

$$ D_j = \{f \in C^2[0,1]: F_0(f'')=F_j(f'')=0\}. $$
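A concrete element of \(D_j\) can be produced by fixing a polynomial ansatz and imposing the two conditions \(F_0(f'')=F_j(f'')=0\); by Proposition 2.2 such a function must then satisfy the boundary conditions (2.3) for i=0 and i=j. A sketch with j=3 and an arbitrary quartic ansatz (SymPy):

```python
import sympy as sp

x, a, b = sp.symbols('x a b')
j = 3
f = x**4 + a * x**3 + b * x**2   # ansatz; a, b to be determined

f2 = sp.diff(f, x, 2)
sol = sp.solve([sp.integrate(f2, (x, 0, 1)),          # F_0(f'') = 0
                sp.integrate(x**j * f2, (x, 0, 1))],  # F_j(f'') = 0
               [a, b])
fD = f.subs(sol)                 # an element of D_3

fp = sp.diff(fD, x)
# condition (2.3) with i = 0: f'(0) = f'(1)
assert sp.simplify(fp.subs(x, 0) - fp.subs(x, 1)) == 0
# condition (2.3) with i = j >= 2: j f(1) = j(j-1) F_{j-2} f + f'(1)
Fjm2 = sp.integrate(x**(j - 2) * fD, (x, 0, 1))
assert sp.simplify(j * fD.subs(x, 1) - j * (j - 1) * Fjm2 - fp.subs(x, 1)) == 0
print("membership in D_3 implies the boundary conditions (2.3)")
```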

Lemma 3.4

Let \(f \in D_j\). Then its integral extension Ef is twice continuously differentiable on (−1,2).

Proof

The case j=1 is proved in [5, Lemma 2.4], hence we restrict ourselves to j≥2. By (3.4), (3.5), and Lemma 3.1, restrictions of Ef to the intervals [−1,0], [0,1], and [1,2] are twice continuously differentiable. We need to show that

$$g_1'(0) = f'(1), \qquad g_1''(0) = f''(1), \qquad h_1'(0) = -f'(0), \quad\mbox{and}\quad h_1''(0) = f''(0). $$

Differentiating (3.5) with n=0 we obtain

$$g_1'(0) = 2jf(1) - 2j(j-1) F_{j-2} f - f'(1). $$

Since fD j , by the equivalence of conditions (c) and (d) in Proposition 2.2, we see that

$$f'(1) = jf(1) - j(j-1) F_{j-2} f, $$

which proves \(g_{1}'(0) = f'(1) \), the first of the desired equalities. The third equality now follows from \(F_{0}(f'')=f'(1)-f'(0)=0\) and (3.4) with n=0:

$$h_1'(0) = f'(0) - f'(1) - g_1'(0) = -f'(1). $$

Turning to the second equality we consider the cases j=2 and j>2 separately. If j=2, then (3.5) gives

$$\begin{aligned} g_1''(0) =& j f'(0) + j(j-1) f(0) - j(j-1)(f(0) + f(1))\\ & {}- jf'(1) + j(j-1) f(1) + f''(1)\\ =& f''(1). \end{aligned}$$

Similarly, if j>2, then

$$\begin{aligned} & g_1''(0) - jf'(1) + j(j-1) f(1)\\ & \quad{}= j(j-1)(j-2) F_{j-3} f - jf'(1) + j(j-1)f(1)\\ & \qquad{}- j(j-1)(j-2) F_{j-3} f + f''(1), \end{aligned}$$

hence \(g_{1}''(0) = f''(1) \). Finally, by (3.4) with n=0, it follows that

$$h_1''(0) = f''(0) + f''(1) - g_1''(0) = f''(0), $$

which completes the proof. □

Theorem 3.5

The abstract Kelvin formula (3.1) defines a strongly continuous cosine family \((C_{\mathrm{mp}}(t))_{t\in\mathbb{R}}\) on C[0,1]. This family preserves both functionals F 0 and F j . Moreover, the generator A of \((C_{\mathrm{mp}}(t))_{t\in\mathbb{R}}\) is a member of \(\mathfrak{L}_{c}\) and its domain is D j .

Proof

Let \(R: C(\mathbb{R}) \to C[0,1]\) map a member of \(C(\mathbb{R})\) to its restriction to [0,1]. Then (3.1) takes the form

$$ C_{\mathrm{mp}}(t)= RC(t)E, \quad t \in\mathbb{R}. $$
(3.9)

By (3.4) and (3.5), a pair \((g_{n+1},h_{n+1})\) is obtained from \((g_n,h_n)\) by means of a bounded linear operator mapping C[0,1]×C[0,1] into itself. Since, for any t, RC(t)Ef depends on only a finite number of such pairs, it follows that \(C_{\mathrm{mp}}(t)\) is a bounded linear operator in C[0,1]. That the operators \(C_{\mathrm{mp}}(t)\) preserve the functionals \(F_0\) and \(F_j\) is clear by Proposition 3.2.

Fix \(f \in C[0,1]\) and \(s \in\mathbb{R}\). Clearly, C(s)Ef extends RC(s)Ef and, by the cosine equation for C and the definition of Ef, we have

$$F_i C(t) C(s) Ef= F_i f= F_i RC(s)Ef,\quad i=0,j, t \in\mathbb{R}. $$

By uniqueness of integral extensions, this shows that C(s)Ef is the integral extension of RC(s)Ef:

$$ERC(s)Ef = C(s)Ef,\quad s\in\mathbb{R}. $$

Using this and the cosine equation for C, we check that

$$2C_{\mathrm{mp}}(t)C_{\mathrm{mp}}(s) f = C_{\mathrm{mp}}(t+s)f +C_{\mathrm{mp}}(t-s) f,\quad t,s \in\mathbb{R}, $$

i.e., that \(C_{\mathrm{mp}}\) is a cosine family. This family is strongly continuous, i.e., we have \(\lim_{t\to 0} RC(t)Ef=f\) for all \(f \in C[0,1]\), since Ef, restricted to any compact interval, is a uniformly continuous function, and on [0,1] it coincides with f.

Turning to the characterization of the generator: Lemma 3.4 and the Taylor formula imply that for \(f \in D_j\),

$$\lim_{t\to0} \frac{2}{t^2} (C(t) \widetilde{f}(x) - \widetilde{f}(x)) = \widetilde{f}''(x),\quad x \in(-1,2); $$

the limit is uniform in x∈[0,1] since \(\widetilde{f}''\) is uniformly continuous in any compact subinterval of (−1,2). By (3.9) this proves that f belongs to D(A) and we have Af=f″.

Finally, we observe that Proposition 2.2 implies D(A)⊂D j , which shows that D(A)=D j , and completes the proof. □

When combined with Proposition 2.2, Theorem 3.5 proves not only existence of the cosine family that preserves moments of order 0 and j≥1, but also its uniqueness (in the class of cosine families generated by members of \(\mathfrak{L}_{c}\)). For, by Proposition 2.2, the domain of the generator of a cosine family preserving these moments is contained in D j . Since no member of \(\mathfrak{L}_{c}\) is a proper extension of another member, this generator must coincide with the generator described in Theorem 3.5. In particular, we have completed the proof of Theorem 2.1.

We conclude this section with a remark on symmetries in the moments-preserving cosine families. We say that a function \(f \in C[0,1]\) is symmetric about \(\frac{1}{2} \) if \(f=f^e\) and, similarly, that f is asymmetric about \(\frac{1}{2} \) if \(f=-f^e\). By \(C_{\mathrm{even}}[0,1]\) and \(C_{\mathrm{odd}}[0,1]\) we denote the spaces of symmetric and asymmetric functions, respectively.

In [5, Proposition 3.2] it is proved that in the case i=0, j=1, the moments-preserving cosine family C mp leaves the spaces C even[0,1] and C odd[0,1] invariant. This allows decomposition of C mp into ‘smaller’ pieces which are easier to handle. (For example, one of the pieces is the cosine family related to the Neumann boundary conditions.) As we shall see now, such a decomposition is not possible in general. More specifically, for j≥2 the space C odd[0,1] is invariant for C mp (the reason for that is formula (3.4) showing that integral extensions of asymmetric functions are asymmetric) but the space C even[0,1] does not possess this property.

Indeed, suppose that, contrary to our claim, \(C_{\mathrm{mp}}(t) C_{\mathrm{even}}[0,1]\subset C_{\mathrm{even}}[0,1]\) for all \(t \in \mathbb{R}\). If A is the generator of \(C_{\mathrm{mp}}\), then the generator of \(C_{\mathrm{mp}}\) restricted to \(C_{\mathrm{even}}[0,1]\) is \(A_p\), the part of A in \(C_{\mathrm{even}}[0,1]\). Hence, for \(f \in D(A_p)\) the first and the third conditions in (2.3) hold and, f being even, \(f'(1)=f'(0)=0\). Therefore \(D(A_{p}) \subset \operatorname{Ker}H_{j}\), where \(H_j\) is the bounded linear functional on \(C_{\mathrm{even}}[0,1]\) given by

$$H_j f = j(j-1)F_{j-2}f - jf(1). $$

Since \(D(A_p)\) is dense in \(C_{\mathrm{even}}[0,1]\) and \(\operatorname{Ker}H_{j}\) is closed, \(\operatorname{Ker} H_{j} = C_{\mathrm{even}}[0,1]\). This contradicts the fact that for \(f \in C_{\mathrm{even}}[0,1]\) given by \(f(x) = (x-\frac{1}{2})^{2} \) we have \(H_{j} f = - \frac{j-1}{j+1} \neq 0\) for j≥2, completing the proof of the claim.

In this context it is natural to ask whether there is a subspace of C[0,1] that is complementary to C odd[0,1] and invariant for C mp. However, at present an answer to this question eludes us. In fact, as we have noted above, the requirement of preservation of the first moment forces the cosine family to leave the space C odd[0,1] invariant, yet in general it is unclear how to relate a moment to be preserved with an invariant subspace for the cosine family.

4 Extensions

The article focuses on non-negative integer moments. However, the main theorem (Theorem 2.1) may be extended to the case of non-negative real moments, as follows.

Because \(\mathrm{k}_i\) is integrable over [0,1] for real i>−1, formula (2.2) defines a bounded linear functional on C[0,1] for all such i. Hence, relation (2.4) remains true for real i>0, and Proposition 2.2 may be extended to real i≥0 by writing (2.3) in the form:

$$ \begin{array}{lll@{\quad}l} f'(0)&=& f'(1) &\mbox{if }i =0,\\ f'(1)&=& iF_{i-1} f' &\mbox{if } i \in(0,1),\\ f'(1)&=& f(1) - f(0) &\mbox{if }i =1, \\ if(1)&=& i(i-1)F_{i-2}f + f'(1) &\mbox{if }i > 1. \end{array} $$
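The new row for i∈(0,1) comes from a single integration by parts, \(F_i(f'')=f'(1)-iF_{i-1}(f')\), with \(x^{i-1}\) still integrable near 0. A symbolic spot check at i=1/2 with a sample f (SymPy):

```python
import sympy as sp

x = sp.symbols('x')
i = sp.Rational(1, 2)
f = x**3 - x**2                    # sample C^2 function

Fi_f2 = sp.integrate(x**i * sp.diff(f, x, 2), (x, 0, 1))       # F_i(f'')
rhs = sp.diff(f, x).subs(x, 1) \
    - i * sp.integrate(x**(i - 1) * sp.diff(f, x), (x, 0, 1))  # f'(1) - i F_{i-1}(f')
assert sp.simplify(Fi_f2 - rhs) == 0
print("F_i(f'') = f'(1) - i F_{i-1}(f') at i = 1/2")
```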

Next, for real i,j≥1, Corollary 2.3 remains valid, but for i or j in (−1,1), it does not (see below). Finally, for real positive j, unless j=1 or j=2, the argument used in Lemma 3.4 requires existence of F j−3 f, and does not work for j∈(0,1)∪(1,2). To recapitulate, we have the following theorem.

Theorem 4.1

Let i,j≥0 be two real numbers with i<j.

  1. (a)

    If i=0, and j=1 or j≥2, then there is exactly one \(A \in\mathfrak{L}_{c}\) such that the related cosine family preserves F i and F j .

  2. (b)

    If i≥1, then there is no \(A \in\mathfrak{L}_{s}\) such that the related semigroup preserves F i and F j .

If C[0,1] is the space of complex functions, similar extensions are possible for moments of complex order i with ℜi≥0.

Except for the claim that D i,j of Corollary 2.3 is dense for i or j in (−1,1), all the statements presented above are proved precisely as in the case where i and j are integers. Hence, we restrict ourselves to showing the former.

First, let −1<i<j<1. Given C>0, let α,β∈[−C,C]. Observe that there is a constant K=K(i,j)>0 and a function \(f_{\alpha,\beta,C} \in C[0,1]\) such that \(F_i f_{\alpha,\beta,C}=\alpha\), \(F_j f_{\alpha,\beta,C}=\beta\), and \(\lVert f_{\alpha,\beta,C}\rVert_{C[0,1]}\le CK\). Indeed, let \(f_{\alpha,\beta,C}(x)=(x-1)(ax+b)\), where the real numbers a and b are chosen so that

$$\begin{cases} F_i f_{\alpha,\beta,C}= \alpha,\\ F_j f_{\alpha,\beta,C}= \beta, \end{cases} \mbox{i.e.,}\quad \begin{cases} (i+1)a + (i+3)b = -\alpha P(i),\\ (j+1)a + (j+3)b = -\beta P(j), \end{cases} $$

where P(x)=(x+1)(x+2)(x+3). Precisely, we have

$$a = \frac{(j+3) P(i) \alpha- (i+3) P(j) \beta}{2(j-i)}, \qquad b = \frac{(i+1)P(j) \beta- (j+1)P(i) \alpha}{2(j-i)}. $$
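The displayed formulas for a and b can be verified by substituting them back into the linear system, and the construction can be spot-checked for concrete non-integer moments (SymPy; i=1/4, j=1/2, α=2, β=−1 are arbitrary choices):

```python
import sympy as sp

x, i, j, alpha, beta = sp.symbols('x i j alpha beta')
P = lambda t: (t + 1) * (t + 2) * (t + 3)

a = ((j + 3) * P(i) * alpha - (i + 3) * P(j) * beta) / (2 * (j - i))
b = ((i + 1) * P(j) * beta - (j + 1) * P(i) * alpha) / (2 * (j - i))

# the linear system from the two moment conditions
assert sp.simplify((i + 1) * a + (i + 3) * b + alpha * P(i)) == 0
assert sp.simplify((j + 1) * a + (j + 3) * b + beta * P(j)) == 0

# spot check: F_i f = alpha, F_j f = beta for f(x) = (x-1)(ax+b)
vals = {i: sp.Rational(1, 4), j: sp.Rational(1, 2), alpha: 2, beta: -1}
f = (x - 1) * (a.subs(vals) * x + b.subs(vals))
assert sp.simplify(sp.integrate(x**sp.Rational(1, 4) * f, (x, 0, 1)) - 2) == 0
assert sp.simplify(sp.integrate(x**sp.Rational(1, 2) * f, (x, 0, 1)) + 1) == 0
print("formulas for a and b verified")
```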

Then, for \(g_{\alpha,\beta,C}\) defined by \(g_{\alpha,\beta,C}(x) = \int_{1}^{x} \int_{1}^{y} f_{\alpha,\beta,C}(z) \, \mathrm{d}z \, \mathrm{d}y\), x∈[0,1], we obtain \(g_{\alpha,\beta,C}(1) = g_{\alpha,\beta,C}'(1)= g_{\alpha,\beta,C}''(1) = 0 \), \(F_{i} g_{\alpha,\beta,C}'' = \alpha\), \(F_{j} g_{\alpha,\beta,C}'' = \beta\), and \(\lVert g_{\alpha,\beta,C}\rVert_{C[0,1]} < CK\).

Given \(f \in C^2[0,1]\), let c>0 be chosen so that c+j−1<0. For k>1 we define

$$ f_k (x) = f(x) - \frac{1}{k^c} h_k ( k x ), \quad x \in[0,1] $$
(4.1)

where

$$h_k (x) = \begin{cases} g_{\alpha,\beta,C}(x), & x \in[0,1], \\ 0, & x \ge1, \end{cases} $$

for \(C:=\max(\lvert F_{i} f''\rvert,\lvert F_{j} f''\rvert)\), \(\alpha:=k^{c+i-1} F_{i} f''\), and \(\beta:=k^{c+j-1} F_{j} f''\). Then

$$ F_i f_{k}'' = F_i f'' - \frac{1}{k^c} \int_0^1 k^2 h_{k}''(k x) x^i\, \mathrm{d}x = F_i f'' - \frac{1}{k^{c+i-1}} \int_0^1 h_k''(y) y^i \, \mathrm{d}y = 0, $$

and similarly \(F_{j} f_{k}'' = 0 \), that is, \(f_k \in D_{i,j}\). Furthermore, \(\lVert f - f_{k} \rVert_{C[0,1]} = \frac{1}{k^{c}} \lVert h_{k}\rVert _{C[0,1]} < \frac{1}{k^{c}} C K \), where C depends merely on f, and K depends on i and j. Since k>1 is arbitrary, this shows that \(D_{i,j}\) is dense in \(C^2[0,1]\), and the claim follows in this case.

For the proof of the other case, where i∈(−1,1) and j≥1, we claim first that for \(f \in C[0,1]\) and k>1, there is \(f_j \in C^2[0,1]\) such that \(F_{j} f_{j}'' = 0 \) and \(\lVert f - f_{j} \rVert _{C[0,1]} < \frac{1}{k} \). Since \(C^2[0,1]\) is dense in C[0,1], it suffices to show this for \(f \in C^2[0,1]\). Given k>1, let \(g_a \in C^2[0,1]\), a≥1, be defined by \(g_{a}(x) = \frac{1}{k} x^{a} \). Then \(\lVert g_{a} \rVert_{C[0,1]} \le \frac{1}{k} \) and \(F_{j} g_{a}'' = \frac{1}{k} \frac{a(a-1)}{a-1+j}\). Hence, \(a \mapsto F_{j} g_{a}'' \in\mathbb{R}^{+} \) is a continuous function satisfying \(F_{j} g_{1}'' = 0 \) and \(\lim_{a \to\infty} F_{j} g_{a}'' = +\infty\). Thus, by the intermediate value theorem, there exists \(a_0 \ge 1\) such that \(F_{j} g_{a_{0}}'' = |F_{j} f''| \). Finally, let \(f_{j} = f - \operatorname{sgn} (F_{j}f'') \, g_{a_{0}} \). Then \(f_j \in C^2[0,1]\), \(\lVert f - f_{j} \rVert_{C[0,1]} \leq\frac{1}{k} \) and \(F_{j} f_{j}'' = 0 \), as desired.

To complete the proof, we fix k>1. Defining \(f_k\) via (4.1) with c>0, c+i−1<0, \(C := \lvert F_{i} f''_{j}\rvert\), \(\alpha= k^{c+i-1} F_{i} f_{j}'' \), and β=0, and with f replaced by \(f_j\), we obtain \(f_k \in C^2[0,1]\) and \(F_{i} f_{k}'' = 0 \) as before, and

$$ F_j f_k'' = F_j f_j'' - k^{1-j-c} \int_0^{k} h_k''(y) y^j \, \mathrm{d}y = F_j f_j'' = 0, $$

since β=0. Thus \(f_k \in D_{i,j}\), and \(\lVert f_j - f_{k} \rVert_{C[0,1]} < \frac{1}{k^{c}} CK \), so that \(\lVert f - f_{k} \rVert_{C[0,1]} < \frac{1}{k} + \frac{1}{k^{c}} CK \), where C and K are as before. This completes the proof of the claim.