1 Introduction

In this paper we consider differential equations of mixed type in abstract form; a concrete model example is

$$\begin{aligned} r (x,t) u_t - \text {div} \, (|Du|^{p-2} Du) = f , \qquad p \geqslant 2 , \end{aligned}$$
(1)

where r is a function which may take positive, zero and negative values; consequently this equation may be of elliptic–parabolic type, or parabolic both forward and backward.

Equations of mixed type have been studied for at least a century: according to many authors, they are already mentioned in [7]. Here we recall some simple and well-known examples:

$$\begin{aligned} x \, \frac{\partial u}{\partial t} - \frac{\partial ^{2} u}{\partial x^{2}} = 0 , \qquad \text {sgn}(x) \, \frac{\partial u}{\partial t} - \frac{\partial ^{2} u}{\partial x^{2}} + k \, u = f. \end{aligned}$$

The first was considered in [1] in 1968, the second in [15] in 1971; both are clearly particular cases of equations of the form

$$\begin{aligned} r (x) u_t + A u = f \end{aligned}$$
(2)

where A is an elliptic operator and r a sign-changing function. Equations of this type seem to be of interest in many areas and have arisen in connection with many different problems: in the study of some stochastic differential equations, in kinetic theory, and in some physical models (such as electron scattering and neutron transport). For these applications we confine ourselves to quoting the recent paper [10]; for the many others we refer to the references contained in the already quoted paper [15] and in [2, 3]. Precisely in these papers Beals treated equations like (2), but always with a simple r. For instance, in [2] the equation

$$\begin{aligned} x \, \frac{\partial u}{\partial t} - \frac{\partial }{\partial x} \left( (1-x^2) \frac{\partial u}{\partial x} \right) \ = 0 \end{aligned}$$

is considered; however, as Gevrey seems to suggest and as Beals remarks, coefficients like

$$\begin{aligned} r (x) = x^m , \ m \text { odd} \qquad \text {or} \qquad r (x) = \text {sgn}(x) \, |x|^p \end{aligned}$$

are of interest in some applications. Among the papers dealing with equations with a more general r (and A linear) we recall [9, 20]. In particular, in [9] a coefficient depending also on time is considered and a regularity condition in time is required, i.e.

$$\begin{aligned} r = r (x,t), \qquad r , r_t \in L^{\infty }. \end{aligned}$$

A recent paper where the author considers \(r = r (x,t)\) (and A linear) is [8]. Another paper we want to recall is [16], which is more general, even if incomplete.

As regards equations like (2) with \(r \geqslant 0\), the known results are more general than those for forward–backward parabolic equations. We recall the paper [21] and the book [22] (see Chapter 3) for some general results. Finally, we recall [4] for many examples and applications of equations with non-negative coefficients.

All the results cited above are generalized in the present paper and in [17], which considers the abstract equation

$$\begin{aligned} (\mathcal {R}u)' + \mathcal {A}u = f \end{aligned}$$
(3)

with suitable boundary data, where \(\mathcal {A}\) is a monotone operator and \(\mathcal {R}\) is a linear operator, possibly depending also on time, which may fail to be invertible. When \(\mathcal {R}\) is a multiplication operator, i.e. \((\mathcal {R}u) (x,t) :=r(x,t) u(x,t)\), discontinuous and unbounded coefficients, i.e. \(r \in L^1_{loc}\), are also admitted, without assuming the existence of \(r_t\). Just to give an example, an equation covered by the result in [17] is

$$\begin{aligned} \big ( r(x,t) u \big )_t - \text {div} \, (|Du|^{p-2} Du) = f, \qquad p \geqslant 2. \end{aligned}$$

The aim of this paper is to give existence results for the equation

$$\begin{aligned} \mathcal {R}u' + \mathcal {A}u = f , \end{aligned}$$
(4)

with \(\mathcal {R}\) and \(\mathcal {A}\) operators and suitable boundary data, in a quite general setting, and to extend to this general setting the results concerning Eq. (3). As regards boundary data, roughly speaking and supposing \((\mathcal {R}u) (x,t) :=r(x,t) u(x,t)\), we prescribe an initial datum for u at time 0 where r “is positive”, a final datum for u at time T where r “is negative”, and no datum where \(r = 0\), neither at time \(t=0\) nor at time \(t=T\). These boundary conditions are consistent with the Fichera conditions for the well-posedness of a boundary value problem of elliptic–parabolic type (see the paper [6], and also [11] for a more recent paper discussing these conditions).
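For instance (this is only meant as a rough illustration of the rule just described, not as one of the examples of the last section), in the model case \(r(x,t) = x\) on \(\Omega = (-1,1)\) one would prescribe

$$\begin{aligned} u(x,0) \quad \text {for } x \in (0,1) \qquad \text {and} \qquad u(x,T) \quad \text {for } x \in (-1,0) , \end{aligned}$$

with no condition imposed on \(\{ x = 0 \}\), where the coefficient vanishes.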

In the last section we give some examples which may help to clarify this.

Since we consider an abstract equation, we deal with functions defined in [0, T] with values in a Banach space. The setting, however, is not standard: we consider a family of triplets

$$\begin{aligned} V(t) \subset H(t) \subset V'(t), \qquad t \in [0,T] , \end{aligned}$$
(5)

where V(t) is a reflexive Banach space which continuously embeds in the Hilbert space H(t), while \(V'(t)\) denotes the dual space of V(t). In this way our functions will be defined in [0, T] and, for each \(t \in [0,T]\), u(t) will denote an element in V(t), H(t) or \(V'(t)\).

In the last section we show with some examples why this setting can be interesting.

2 Notations, hypotheses and preliminary results

Consider the following family of evolution triplets

$$\begin{aligned} V(t) \subset H(t) \subset V'(t) , \qquad t \in [0,T] , \end{aligned}$$
(6)

where H(t) is a separable Hilbert space, V(t) a reflexive Banach space which continuously and densely embeds in H(t) and \(V'(t)\) the dual space of V(t), and we suppose there is a constant k which satisfies

$$\begin{aligned} \Vert w \Vert _{V'(t)} \leqslant k \, \Vert w \Vert _{H(t)}, \qquad \text {and} \qquad \Vert v \Vert _{H(t)} \leqslant k \, \Vert v \Vert _{V(t)} \end{aligned}$$
(7)

for every \(w \in H(t)\), \(v \in V(t)\) and every \(t \in [0,T]\).

We will suppose the existence of a Banach space U such that (for simplicity we consider the constant k as in (7))

$$\begin{aligned} \begin{array}{l} U \subset V(t) \qquad \text {and } \Vert u \Vert _{V(t)} \leqslant k \, \Vert u \Vert _U \qquad \text {for a.e. } t \in [0,T] \\ U \text { dense in } V(t) \qquad \text { for a.e. } t \in [0,T] \end{array} \end{aligned}$$
(8)

and define, for some \(p \geqslant 2\), the set

$$\begin{aligned} \mathcal {U}:=L^p(0,T; U) . \end{aligned}$$

Moreover we will suppose that the functions

$$\begin{aligned} t \mapsto \Vert u(t) \Vert _{V(t)}, t \mapsto \Vert u(t) \Vert _{H(t)}, t \mapsto \Vert u(t) \Vert _{V'(t)}, t \in [0,T] , \end{aligned}$$

are measurable for every \(u \in \mathcal {U}\). For the same \(p \in [2, +\infty )\) used to define \(\mathcal {U}\) we define the spaces

$$\begin{aligned} \mathcal {V}, \qquad \mathcal {H}, \qquad \mathcal {V}^{*} \end{aligned}$$
(9)

as the completion of \(\mathcal {U}\) with respect to the norms

$$\begin{aligned} \Vert v \Vert _{\mathcal {V}}&:=\left( \int _0^T \Vert v(t) \Vert _{V(t)}^p dt \right) ^{1/p} , \\ \Vert v \Vert _{\mathcal {H}}&:=\left( \int _0^T \Vert v(t) \Vert _{H(t)}^2 dt \right) ^{1/2} , \\ \Vert f \Vert _{\mathcal {V}^{*}}&:=\left( \int _0^T \Vert f(t) \Vert _{V'(t)}^{p'} dt \right) ^{1/p'} \, . \end{aligned}$$

Notice that \(C^1 ([0,T] ; U)\) is dense in \(\mathcal {V}, \mathcal {H}, \mathcal {V}^{*}\) and that, for \(f \in \mathcal {V}^{*}\) and \(v \in \mathcal {V}\), it makes sense to evaluate

$$\begin{aligned} \int _0^T \langle f (t), v (t)\rangle _{V'(t) \times V(t)} dt . \end{aligned}$$

From now on we will suppose that the following hold true:

$$\begin{aligned} \begin{array}{l} [0,T] \ni t \mapsto ( u_1, u_2 )_{H(t)} \quad \text {belongs to } C^0 ( [0,T] ) \qquad \text {for every } u_1, u_2 \in U , \\ [0.5em] [0,T] \ni t \mapsto \Vert u \Vert _{V(t)} \quad \text {belongs to } C^0 ( [0,T] ) \qquad \text {for every } u \in U .\\ [0.5em] [0,T] \ni t \mapsto \Vert u \Vert _{V'(t)} \quad \text {belongs to } C^0 ( [0,T] ) \qquad \text {for every } u \in U .\\ \end{array} \end{aligned}$$
(10)
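The simplest instance in which (7), (8) and (10) are satisfied is the one where the spaces do not depend on t; just to fix ideas (the genuinely time-dependent examples being postponed to the last section, and \(\Omega \) being here a hypothetical bounded open set of \(\mathbf{R}^n\)), one can take

$$\begin{aligned} V(t) \equiv W^{1,p}_0 (\Omega ) , \qquad H(t) \equiv L^2 (\Omega ) , \qquad U = W^{1,p}_0 (\Omega ) , \end{aligned}$$

so that \(\mathcal {V}= L^p(0,T; W^{1,p}_0(\Omega ))\), \(\mathcal {H}= L^2(0,T; L^2(\Omega ))\), \(\mathcal {V}^{*} = L^{p'}(0,T; W^{-1,p'}(\Omega ))\), and the continuity requirements in (10) hold trivially, the norms being independent of t.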

Lemma 2.1

Denote by \(\mathcal {V}'\) the dual space of \(\mathcal {V}\) and assume (10) holds. Then \(\mathcal {V}^{*} = \mathcal {V}'\).

Proof

Consider the application

$$\begin{aligned} T : \mathcal {V}^{*} \rightarrow \mathcal {V}' , \qquad \langle T f , v \rangle _{\mathcal {V}' \times \mathcal {V}} = \int _0^T \langle f (t), v (t)\rangle _{V'(t) \times V(t)} dt . \end{aligned}$$

On one hand we have immediately that

$$\begin{aligned} \Vert T f \Vert _{\mathcal {V}'} \leqslant \Vert f \Vert _{\mathcal {V}^{*}} . \end{aligned}$$

On the other hand, consider \(f \in \mathcal {V}^{*}\). For every \(\varepsilon > 0\) one can choose a subdivision of [0, T], i.e. \(t_0 = 0< t_1< t_2< \dots < t_N = T\), and a step function

$$\begin{aligned} u = \sum _{i = 1}^N u_i \chi _i (t) , \qquad u_i \in U , \quad \chi _i (t) = 1 \text { for } t \in [t_{i-1}, t_i), \chi _i (t) = 0 \text { for } t \not \in [t_{i-1}, t_i) , \end{aligned}$$

and

$$\begin{aligned} \Vert f - u \Vert _{\mathcal {V}^{*}} < \varepsilon . \end{aligned}$$

Moreover consider \(\varphi : [ 0, T] \rightarrow \mathbf{R}\), \(\varphi \geqslant 0\), \(\varphi \in L^p (0,T)\), such that

$$\begin{aligned} \Vert \varphi \Vert _{L^p (0,T)} \leqslant 1 \qquad \text {and} \qquad \int _0^T \Vert u (t) \Vert _{V' (t)} \varphi (t)\,dt \geqslant \Vert u \Vert _{\mathcal {V}^{*}} - \varepsilon . \end{aligned}$$
(11)

Notice that for \(g \in H(t), z \in V(t)\) the duality between g and z satisfies

$$\begin{aligned} \langle g , z \rangle _{V'(t) \times V(t)} = ( g , z )_{H(t)} . \end{aligned}$$

In particular if \(g, z \in U\) we get that \(t \mapsto \langle g , z \rangle _{V'(t) \times V(t)}\) is continuous, and in fact uniformly continuous in [0, T]. Now we choose \(v \in \mathcal {V}\) a step function

$$\begin{aligned} v = \sum _{i = 1}^N w_i \varphi (t) \chi _i (t) \end{aligned}$$

with \(w_i \in U\) satisfying

$$\begin{aligned}&\Vert w_i \Vert _{V (t_i)} = 1 ,\\&\langle u_i , w_i \rangle _{V'(t_i) \times V(t_i)} \geqslant \Vert u_i \Vert _{V' (t_i)} - \varepsilon ,\\&\int _{t_{i - 1}}^{t_i} \langle u_i , w_i \rangle _{V'(t) \times V(t)}\,dt \geqslant \int _{t_{i - 1}}^{t_i} \langle u_i , w_i \rangle _{V'(t_i) \times V(t_i)} - \varepsilon (t_i - t_{i - 1}) \\&\quad = \Big [ \langle u_i , w_i \rangle _{V'(t_i) \times V(t_i)} - \varepsilon \Big ] (t_i - t_{i - 1}), \end{aligned}$$

where the last inequality is due to the fact that \(t \mapsto \langle u_i , w_i \rangle _{V'(t) \times V(t)}\) is uniformly continuous.

By the (uniform) continuity of \(t \mapsto \Vert w_i \Vert _{V(t)}\) for every i we get that, choosing the points \(t_i\) sufficiently close to each other, we can suppose that

$$\begin{aligned} \Vert w_i \Vert _{V(t)} \leqslant 1 + \varepsilon \qquad \text { for every } t \in [t_{i - 1}, t_i ] \text { and for every } i \in \{1, \dots N \} . \end{aligned}$$

By (10) we can also choose \(\{ t_i \}\) in such a way that

$$\begin{aligned} \Vert u_i \Vert _{V' (t_i)} \geqslant \Vert u_i \Vert _{V' (t)} - \varepsilon \qquad \text { for every } t \in [t_{i - 1}, t_i ] \text { and for every } i \in \{1, \dots N \} . \end{aligned}$$

Notice that

$$\begin{aligned} \Vert v (t) \Vert _{V(t)} = \left\| \sum _{i = 1}^N w_i \varphi (t) \chi _i (t) \right\| _{V(t)} = \varphi (t) \Vert w_i \Vert _{V(t)} < (1 + \varepsilon ) \varphi (t) \end{aligned}$$

and then, by (11),

$$\begin{aligned} \Vert v \Vert _{\mathcal {V}} = \left( \int _0^T \Vert v (t) \Vert _{V(t)}^p\,dt \right) ^{1/p} < 1 + \varepsilon . \end{aligned}$$

Then we have

$$\begin{aligned} \langle T f , v \rangle _{\mathcal {V}' \times \mathcal {V}}&= \int _0^T \langle u (t), v (t)\rangle _{V'(t) \times V(t)} dt + \int _0^T \langle f (t) - u (t) , v (t) \rangle _{V'(t) \times V(t)} dt \\&\geqslant \int _0^T \langle u (t), v (t) \rangle _{V'(t) \times V(t)} dt - \varepsilon \, (1 + \varepsilon ) . \end{aligned}$$

Now we focus our attention on the first term in the right hand side:

$$\begin{aligned} \int _0^T \langle u (t), v (t) \rangle _{V'(t) \times V(t)} dt&= \int _0^T \sum _{i = 1}^N \langle u_i, w_i \rangle _{V'(t) \times V(t)} \varphi (t) \chi _i (t) dt \\&\geqslant \int _0^T \sum _{i = 1}^N \Big [ \langle u_i , w_i \rangle _{V'(t_i) \times V(t_i)} - \varepsilon \Big ] \varphi (t) \chi _i (t) dt \\&\geqslant \int _0^T \sum _{i = 1}^N \Big [ \Vert u_i \Vert _{V' (t_i)} - 2 \varepsilon \Big ] \varphi (t) \chi _i (t) dt \\&\geqslant \int _0^T \sum _{i = 1}^N \Big [ \Vert u_i \Vert _{V' (t)} - 3 \varepsilon \Big ] \varphi (t) \chi _i (t) dt \\&\geqslant \Vert u \Vert _{\mathcal {V}^{*}} - 4 \varepsilon - 3 \varepsilon \int _0^T \varphi (t) dt \\&\geqslant \Vert f \Vert _{\mathcal {V}^{*}} - 5 \varepsilon - 3 \varepsilon \int _0^T \varphi (t) dt . \end{aligned}$$

Summing up we have

$$\begin{aligned} \Vert T f \Vert _{\mathcal {V}'}&\geqslant \frac{\langle Tf , v \rangle _{\mathcal {V}' \times \mathcal {V}}}{1 + \varepsilon } \\&\geqslant \frac{1}{1 + \varepsilon } \left[ \Vert f \Vert _{\mathcal {V}^{*}} - 5 \varepsilon - 3 \varepsilon \int _0^T \varphi (t) dt \right] - \varepsilon . \end{aligned}$$

By the arbitrariness of \(\varepsilon \) we finally get

$$\begin{aligned} \Vert T f \Vert _{\mathcal {V}'} \geqslant \Vert f \Vert _{\mathcal {V}^{*}} . \end{aligned}$$

Then we have shown that T is a linear isometry between \(\mathcal {V}^{*}\) and \(\mathcal {V}'\) and in particular that

$$\begin{aligned} T (\mathcal {V}^{*}) \quad \text {is a closed subspace of} \quad \mathcal {V}' . \end{aligned}$$

Now consider \(v \in \mathcal {V}\) and suppose that

$$\begin{aligned} \langle T f , v \rangle _{\mathcal {V}' \times \mathcal {V}} = 0 \qquad \text { for every } f \in \mathcal {V}^{*}. \end{aligned}$$

In particular, taking \(\varphi \in C^0 ([0,T]; \mathbf{R})\), \(f \in \mathcal {V}^{*}\) and \(g = \varphi f \in \mathcal {V}^{*}\) we have

$$\begin{aligned} 0 = \langle T g , v \rangle _{\mathcal {V}' \times \mathcal {V}} = \int _0^T \varphi (t) \, \langle f (t), v (t) \rangle _{V'(t) \times V(t)}\,dt \end{aligned}$$

for every continuous function \(\varphi \). Then \(\langle f (t), v (t) \rangle _{V'(t) \times V(t)} = 0\) for almost every \(t \in [0,T]\).

Since \(\mathcal {V}\subset \mathcal {H}\subset \mathcal {V}'\) and for \(g \in \mathcal {H}, u \in \mathcal {V}\) one has

$$\begin{aligned} \langle g , u \rangle _{\mathcal {V}' \times \mathcal {V}} = (g, u )_{\mathcal {H}} \end{aligned}$$

one gets, taking \(f = v\), that

$$\begin{aligned} 0 = \int _0^T \langle v (t), v (t) \rangle _{V'(t) \times V(t)}\,dt = \int _0^T ( v (t), v (t) )_{H(t)}\,dt = (v , v)_{\mathcal {H}} \end{aligned}$$

which clearly implies that \(v = 0\). From this we finally derive that \(T (\mathcal {V}^{*})\) is dense in \(\mathcal {V}'\). Since it is also closed, we conclude that \(T (\mathcal {V}^{*}) = \mathcal {V}'\). \(\square \)

Under the assumptions above, and supposing (10), the previous lemma yields

$$\begin{aligned} \mathcal {V}\subset \mathcal {H}\subset \mathcal {V}' \end{aligned}$$

with continuous and dense embeddings, with

$$\begin{aligned} \Vert w \Vert _{\mathcal {V}'} \leqslant k\,\Vert w \Vert _{\mathcal {H}}, \qquad \text {and} \qquad \Vert v \Vert _{\mathcal {H}} \leqslant k\,\Vert v \Vert _{\mathcal {V}}. \end{aligned}$$
(12)

Definition 2.2

Consider a family of linear operators R(t), depending on the parameter \(t \in [0,T]\), such that

$$\begin{aligned} R(t) \in {\mathcal {L}} (H(t)) \qquad \text {for every } t \in [0,T] \end{aligned}$$
(13)

where \({\mathcal {L}} (H(t))\) denotes the set of linear and bounded operators from H(t) into itself. We say that R belongs to the class \({\mathcal {E}}(C_1, C_2)\), \(C_1, C_2 \geqslant 0\), if the following hold for every \(u, v \in U\):

$$\begin{aligned} \begin{array}{ll} \diamond &{} R(t) \qquad \text {is self-adjoint } \text { and } \Vert R(t)\Vert _{\mathcal {L}(H(t))} \leqslant C_1 \text { for every } t \in [0,T], \\ \diamond &{} t \mapsto \big ( R(t) u , v \big )_{H(t)} \text { is absolutely continuous in } [0,T],\\ \diamond &{} \Big | {\displaystyle \frac{d}{dt}} \big ( R(t) u , v \big )_{H(t)} \Big | \leqslant C_2\,\Vert u \Vert _{V(t)} \Vert v \Vert _{V(t)} \qquad \text { for a.e. } t \in [0,T]. \end{array} \end{aligned}$$

Now, given two non-negative constants \(C_1\) and \(C_2\), consider \(R \in {\mathcal {E}}(C_1, C_2)\). For every \(t \in [0,T]\) we consider the spectral decomposition of R(t) (see, e.g., Section 8.4 in [14]) as follows: since R(t) is self-adjoint we get that \(R (t)^2 = R^* (t) \circ R (t)\) is a positive operator; then we can define the square root of \(R (t)^2\) (see, e.g., Chapter 3 in [14]), which is a positive operator,

$$\begin{aligned} |R(t)| = \big ( R(t)^2 \big )^{1/2} \end{aligned}$$

and then define the two positive operators

$$\begin{aligned} R_+(t) :=\frac{1}{2} \big ( |R(t)| + R(t) \big ) , \qquad R_-(t) {:}{=} |R(t)| - R_+(t). \end{aligned}$$

By this decomposition we can also write \(H(t) = H_+(t) \oplus H_0(t) \oplus H_-(t)\) where \(H_+(t) = (\text {Ker}\,{R}_+(t) )^{\perp }\) and \(H_-(t) = (\text {Ker}\,{R}_-(t) )^{\perp }\) and \(H_0(t)\) is the kernel of R(t). Finally we denote \({\tilde{H}}_0(t) = H_0(t) = \text {Ker}\,R(t)\) and

$$\begin{aligned} {\tilde{H}}(t), {\tilde{H}}_+(t), {\tilde{H}}_-(t) = \text {the completion respectively of } H (t), H_+(t), H_-(t) \end{aligned}$$
(14)

with respect to the norm \(\Vert w\Vert _{{\tilde{H}}(t)} = \Vert |R(t)|^{1/2}w\Vert _{H (t)}\).

Clearly the operation \(\ \widetilde{}\ \) depends on R. In this way \({R}(t) = {R}_+(t) - {R}_-(t)\), \(|R(t)| =R_+(t) + R_-(t)\) and \({R}_+(t) \circ {R}_-(t) = {R}_-(t) \circ {R}_+(t) = 0\) (see, e.g., Theorem 10.37 in [14]) and \({R}_+(t): H_+ (t) \rightarrow H_+ (t)\) and \({R}_-(t) : H_- (t) \rightarrow H_- (t)\) turn out to be invertible.
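As a concrete illustration (purely for orientation, and assuming that the resulting R indeed belongs to \({\mathcal {E}}(C_1, C_2)\)), let \(H(t) = L^2(\Omega )\) and let R(t) be the multiplication operator \(R(t) w = r(\cdot ,t)\, w\) with \(r(\cdot ,t) \in L^{\infty }(\Omega )\). Then

$$\begin{aligned} |R(t)| w = |r(\cdot ,t)|\, w , \qquad R_+(t) w = r^+(\cdot ,t)\, w , \qquad R_-(t) w = r^-(\cdot ,t)\, w , \end{aligned}$$

where \(r^{\pm } = \max \{ \pm r, 0 \}\); \(H_0(t)\) consists of the functions vanishing a.e. on the set \(\{ r(\cdot ,t) \ne 0 \}\), \(H_+(t)\) and \(H_-(t)\) of the functions supported where \(r(\cdot ,t) > 0\) and \(r(\cdot ,t) < 0\) respectively, and \({\tilde{H}}(t)\) can be identified with the weighted space \(L^2(\Omega , |r(\cdot ,t)|\, dx)\).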

Given an operator \(R \in {\mathcal {E}}(C_1, C_2)\) it is possible to define two other linear operators. First we can define the derivative of R which, unlike R, is valued in \({\mathcal {L}} (V(t), V'(t))\), i.e. the set of linear and bounded operators from V(t) to \(V'(t)\): since \(R \in {\mathcal {E}}(C_1, C_2)\) we can define a family of equibounded operators

$$\begin{aligned} R'(t), \quad t \in [0,T] , \qquad R'(t) : V(t) \rightarrow V'(t) \qquad \text {by} \\ \langle R' (t) u , v \rangle _{V'(t) \times V(t)} :=\frac{d}{dt} \big ( R(t) u , v \big )_{H(t)} , \qquad u , v \in U. \end{aligned}$$

By the density of U in V(t) we can extend \(R'(t)\) to V(t).
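For the multiplication operator of the previous example, and assuming that r is regular enough in time for the quantities below to make sense, the operator \(R'(t)\) acts formally as

$$\begin{aligned} \langle R'(t) u , v \rangle _{V'(t) \times V(t)} = \int _{\Omega } r_t (x,t)\, u(x)\, v(x)\, dx , \qquad u, v \in U ; \end{aligned}$$

the point of Definition 2.2 is that only the bound \(C_2 \Vert u \Vert _{V(t)} \Vert v \Vert _{V(t)}\) on this bilinear form is required, and not that \(r_t\) exists as a bounded function.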

Remark 2.3

Notice that the last request in Definition 2.2 and (8) imply that

$$\begin{aligned} R \in W^{1, \infty } (0, T ; {\mathcal {L}} (U, U') ) . \end{aligned}$$

Clearly if \(V(t) = V\) and \(H (t) = H\) for every \(t \in [0,T]\), \(R \in {\mathcal {E}}(C_1, C_2)\) simply means

$$\begin{aligned} R \in L^{\infty }(0,T; {{\mathcal {L}}}(H)) \cap W^{1,\infty }(0,T; {{\mathcal {L}}}(V, V')) . \end{aligned}$$

Via \(R, R_+, R_-\) and \(R'\) we can also define

$$\begin{aligned} \begin{array}{ll} {{\mathcal {R}}} : \mathcal {H}\rightarrow \mathcal {H}, &{}\quad ({\mathcal {R}} u ) (t) :=R(t) u(t) , \\ {{\mathcal {R}}_+} : \mathcal {H}\rightarrow \mathcal {H}, &{}\quad (\mathcal {R}_+ u ) (t) :=R_+(t) u(t) , \\ {{\mathcal {R}}_-} : \mathcal {H}\rightarrow \mathcal {H}, &{}\quad (\mathcal {R}_- u ) (t) :=R_-(t) u(t) , \end{array} \end{aligned}$$
(15)

which turn out to be linear and bounded by the constant \(C_1\), and, by density of \(\mathcal {U}\) in \(\mathcal {V}\), an operator

$$\begin{aligned} \mathcal {R}': \mathcal {V}\rightarrow \mathcal {V}' \ \ \ \ \ \ \text {by }\ \ \ \ \langle \mathcal {R}' u , v \rangle _{\mathcal {V}' \times \mathcal {V}} {:}{=} \int _0^T \langle R' (t) u(t) , v(t) \rangle _{V'(t) \times V(t)} dt \end{aligned}$$
(16)

which turns out to be linear, self-adjoint and bounded by \(C_2\). As before, in a way analogous to that used for the spaces (14), we define

$$\begin{aligned} \tilde{\mathcal {H}},\tilde{\mathcal {H}}_+,\tilde{\mathcal {H}}_- = \text {the completion respectively of }\mathcal {H}, \mathcal {H}_+, \mathcal {H}_- \end{aligned}$$
(17)

with respect to the norm \(\Vert w \Vert _{\tilde{\mathcal {H}}} = \Vert |\mathcal {R}|^{1/2} w \Vert _{\mathcal {H}}\), where \(|\mathcal {R}| = \mathcal {R}_+ + \mathcal {R}_-\).

Analogously we define \(\mathcal {H}_+\) and \(\mathcal {H}_-\), and we denote by \(\mathcal {P}_+\) and \(\mathcal {P}_-\) the orthogonal projections from \(\tilde{\mathcal {H}}\) onto \(\mathcal {H}_+\) and \(\mathcal {H}_-\) respectively. \(\mathcal {H}_0\) is the kernel of \(\mathcal {R}\) and \(\mathcal {P}_0\) the projection defined on \(\mathcal {H}\) onto \(\mathcal {H}_0\).

Remark 2.4

Notice that since R is self-adjoint and bounded we can define \(|R|(t)^{1/2}\), \(R_+(t)^{1/2}\), \(R_-(t)^{1/2}\) (see, e.g., Chapter 3 in [14]).

3 The existence result

In this section we will give one of the main results of the paper. We will consider R such that, for two given non-negative constants \(C_1, C_2\),

$$\begin{aligned} R \in {\mathcal {E}}(C_1, C_2) \end{aligned}$$
(18)

and all the spaces introduced in (6), (8), (9), (14); from now on we will also assume (10), so that Lemma 2.1 holds.

Our goal is to give an existence result for an abstract equation like

$$\begin{aligned} \mathcal {R}u' + \mathcal {A}u = f \end{aligned}$$

for some suitable operator \(\mathcal {A}\) we will specify below.

We want to stress that, despite the fact that no derivative of \(\mathcal {R}\) appears in the equation, we require \(\mathcal {R}\) to be differentiable, i.e. \(R \in {\mathcal {E}}(C_1, C_2)\). This fact will be needed to get the existence of a solution to the previous equation, and we will also show (see example 6 in the last section) that without this assumption at least uniqueness of the solution fails. Anyway, requiring that R be differentiable is not so restrictive (as shown in the examples in the last section) because if, for instance, \(\mathcal {R}\) is a multiplication operator, i.e. \(\mathcal {R}u = r(x,t) u(x,t)\) for some function r, \(\mathcal {R}\) may be differentiable even if r is discontinuous.

We will use this assumption about \(\mathcal {R}\) to split properly the operator \(u \mapsto \mathcal {R}u' + \mathcal {A}u\) as indicated in (33) to give the existence of a solution.

We will use the operator \(\mathcal {R}'\) to define \(\mathcal {R}u'\) in an apparently involved way. First, for a function \(u \in \mathcal {V}\subset \mathcal {H}\), we consider the generalized derivative of \(\mathcal {R}u\) and require that it belongs to \(\mathcal {V}'\), where the generalized derivative is defined as a function \(w \in \mathcal {V}'\) such that

$$\begin{aligned} \big \langle w (t) , v \big \rangle _{V'(t) \times V(t)} = \frac{d}{dt} \big ( \mathcal {R}u (t) , v \big )_{H(t)} \qquad \text { for every } v \in U \, . \end{aligned}$$

Notice that by (8) we have

$$\begin{aligned} U \subset V(t) \subset H(t) \subset V'(t) \subset U', \end{aligned}$$

then we can define in the classical way the generalized derivative of \(\mathcal {R}u\) in \(L^{p'}(0,T; U')\) (here \(p'\) denotes \(p/(p-1)\)), and then require that \((\mathcal {R}u)' \in \mathcal {V}'\).

For a function \(u \in \mathcal {V}\) for which \(\mathcal {R}u\) admits generalized derivative in \(\mathcal {V}'\) we define

$$\begin{aligned} \mathcal {R}u' :=(\mathcal {R}u)' - \mathcal {R}' u. \end{aligned}$$
(19)
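To see what (19) encodes, consider once more, purely formally, the multiplication operator \((\mathcal {R}u)(x,t) = r(x,t) u(x,t)\) with smooth r and u: in that case

$$\begin{aligned} (\mathcal {R}u)' = (r u)_t = r\, u_t + r_t\, u = \mathcal {R}u' + \mathcal {R}' u , \end{aligned}$$

so that (19) isolates the term corresponding to \(r\, u_t\), i.e. the first addend in Eq. (1), without requiring that \(u_t\) or \(r_t\) exist separately.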

With this definition in mind we now define the space

$$\begin{aligned} \mathcal {W}_{\mathcal {R}} :=\big \{ u \in \mathcal {V}\ |\ \mathcal {R}u' \in \mathcal {V}'\big \} , \qquad \Vert u \Vert _{\mathcal {W}_{\mathcal {R}}} = \Vert u \Vert _{\mathcal {V}} + \Vert \mathcal {R}u' \Vert _{\mathcal {V}'}. \end{aligned}$$
(20)

Remark 3.1

Notice that the space \(\big \{ u \in \mathcal {V}\ |\ (\mathcal {R}u)' \in \mathcal {V}'\big \}\) endowed with the norm \(\Vert \cdot \Vert _{\mathcal {W}_{\mathcal {R}}}\) coincides with \(\mathcal {W}_{\mathcal {R}}\). Indeed, in view of the definition of \(\mathcal {R}u'\) given in (19), \(\mathcal {R}' u\) belongs to \(\mathcal {V}'\).

Moreover one can endow this space both with the norm \(\Vert \cdot \Vert _{\mathcal {W}_{\mathcal {R}}}\) and with the norm \(\Vert u \Vert = \Vert u \Vert _{\mathcal {V}} + \Vert (\mathcal {R}u)' \Vert _{\mathcal {V}'}\) since they are equivalent.

Proposition 3.2

Under assumption (18) we have that for every \(u,v \in \mathcal {W}_{\mathcal {R}}\) the function \(t \mapsto (R(t) u (t), v(t))_{H(t)}\) is absolutely continuous and the following hold:

$$\begin{aligned} \frac{d}{dt} (\mathcal {R}&u (t), v(t))_{H(t)} \\&= \langle \mathcal {R}' u(t) , v(t) \rangle _{V'(t) \times V(t)} + \langle \mathcal {R}u' (t), v(t) \rangle _{V'(t)\times V(t)} + \langle \mathcal {R}v' (t), u(t) \rangle _{V'(t)\times V(t)} \end{aligned}$$

and there exists a constant c, which depends only on T, such that

$$\begin{aligned} \max _{[0,T]}&| (R(t) u (t), v(t))_{H(t)} | \\&\leqslant c \Big [ \Vert \mathcal {R}u' \Vert _{\mathcal {V}'} \Vert v\Vert _{\mathcal {V}} + \Vert \mathcal {R}v' \Vert _{\mathcal {V}'} \Vert u\Vert _{\mathcal {V}} + \Vert \mathcal {R}' \Vert _{{{\mathcal {L}}}{(\mathcal {V},\mathcal {V}')}} \Vert u\Vert _{\mathcal {V}} \Vert v\Vert _{\mathcal {V}} + \Vert \mathcal {R}\Vert _{{\mathcal {L}} (\mathcal {H})} \Vert u\Vert _{\mathcal {H}} \Vert v\Vert _{\mathcal {H}} \Big ]. \end{aligned}$$

In particular if \(u = v\) we have

$$\begin{aligned} \displaystyle \int _s^t&\big \langle \mathcal {R}u' (\tau ), u(\tau )\big \rangle _{V'(\tau ) \times V(\tau )} d\tau \\&= {\displaystyle \frac{1}{2}} \Big [ (R(t) u (t), u(t))_{H(t)} - (R(s) u (s), u(s))_{H(s)} \Big ] - {\displaystyle \frac{1}{2}} \displaystyle \int _s^t \langle \mathcal {R}' u (\tau ), u(\tau ) \rangle _{V'(\tau ) \times V(\tau )} d\tau \end{aligned}$$

and

$$\begin{aligned} \max _{[0,T]} |(R(t) u (t), u(t))_{H(t)} | \leqslant c\,\Vert u \Vert _{\mathcal {W}_{\mathcal {R}}}^2 \end{aligned}$$
(21)

where c depends (only) on \(T^{-1}, \Vert \mathcal {R}' \Vert _{{{\mathcal {L}}}{(\mathcal {V},\mathcal {V}')}}, \Vert \mathcal {R}\Vert _{{\mathcal {L}} (\mathcal {H})}\).

Proof

For \(u, v \in C^1 ([0,T] ; U)\) one has

$$\begin{aligned} \frac{d}{dt} (\mathcal {R}&u (t), v(t))_{H(t)}\\&= \langle \mathcal {R}' u(t) , v(t) \rangle _{V'(t) \times V(t)} + ( \mathcal {R}u' (t), v(t) )_{H (t)} + (\mathcal {R}u (t), v' (t) )_{H(t)}\\&= \langle \mathcal {R}' u(t) , v(t) \rangle _{V'(t) \times V(t)} + \langle \mathcal {R}u' (t), v(t) \rangle _{V'(t)\times V(t)} + \langle \mathcal {R}v' (t), u(t) \rangle _{V'(t)\times V(t)}. \end{aligned}$$

By the density of \(C^1 ([0,T] ; U)\) one gets the first part of the thesis. Now we show the estimate. First one can extend R, a function \(w \in \mathcal {W}_{\mathcal {R}}\) and \(f \in \mathcal {V}'\) to \([-T, T]\) as follows

$$\begin{aligned} {\tilde{R}} (t)&{:}{=}&\left\{ \begin{array}{ll} R (t) &{} t \in [0,T] \\ R(-t) &{} t \in [-T,0) \end{array} \right. \\ {\tilde{w}} (t)&{:}{=}&\left\{ \begin{array}{ll} w (t) &{} t \in [0,T] \\ w(-t) &{} t \in [-T,0) \end{array} \right. \\ {\tilde{f}} (t)&{:}{=}&\left\{ \begin{array}{ll} f (t) &{} t \in [0,T] \\ f(-t) &{} t \in [-T,0) \end{array} \right. , \end{aligned}$$

and similarly one extends

$$\begin{aligned} ({\tilde{w}}_1 (t), {\tilde{w}}_2 (t))_{{\tilde{H}} (t)} = (w_1 (-t), w_2 (-t))_{H (-t)} \\ \langle {\tilde{f}} (t), {\tilde{w}} (t) \rangle _{V' (t) \times V (t) } = \langle f (-t), w (-t) \rangle _{V' (-t) \times V (-t) } \end{aligned}$$

for \(t \in [-T, 0]\), \(w_1, w_2 \in \mathcal {H}\), \(w \in \mathcal {V}\), \(f \in \mathcal {V}'\). Then consider a differentiable function \(\varphi : [-T, T] \rightarrow \mathbf{R}\) such that

$$\begin{aligned} \varphi (t) = 1 \quad \text {for } t \in [0, T], \qquad \varphi (-T) = 0 , \qquad 0 \leqslant \varphi ' (t) \leqslant 2/T \quad \text {for every } t \in [-T, T] . \end{aligned}$$

Finally, for \(t \in [0, T]\) and \(u, v \in C^1 ([0,T]; U)\), we have

$$\begin{aligned} \big (R(t) u (t), v(t) \big )_{H(t)}&= \int _{-T}^t \frac{d}{ds} \Big [ \varphi (s) \big ( {\tilde{R}} (s) {\tilde{u}} (s), {\tilde{v}} (s) \big )_{{\tilde{H}}(s)} \Big ]\,ds \\&= \int _{-T}^t \varphi ' (s) \big ( {\tilde{R}} (s) {\tilde{u}} (s), {\tilde{v}} (s) \big )_{{\tilde{H}}(s)}\,ds + \int _{-T}^t \varphi (s) \big \langle {\tilde{R}}' (s) {\tilde{u}} (s), {\tilde{v}} (s) \big \rangle _{{\tilde{H}}(s)}\,ds\\&\quad + \int _{-T}^t \varphi (s) \big \langle {\tilde{R}} (s) {\tilde{u}}' (s), {\tilde{v}} (s) \big \rangle _{{\tilde{H}}(s)}\,ds + \int _{-T}^t \varphi (s) \big \langle {\tilde{R}} (s) {\tilde{v}}' (s), {\tilde{u}} (s) \big \rangle _{{\tilde{H}}(s)}\,ds \end{aligned}$$

by which one concludes. \(\square \)

Remark 3.3

By the previous result we get that \([0,T] \ni t \mapsto (R(t) u (t), w)_{H(t)}\) is continuous for every \(u \in \mathcal {W}_{\mathcal {R}}\) and \(w \in U\). In particular taking \(w_+\) such that \(P_- (t) w_+ \equiv 0\) one gets that

$$\begin{aligned}{}[0,T] \ni t \mapsto (R(t) u (t), w_+ )_{H(t)} = (R_+ (t) u (t), w_+ )_{H(t)} \qquad \text {is continuous}. \end{aligned}$$

In an analogous way, taking \(w_- \in U \cap (H_- (t) \oplus H_0 (t))\), one gets that \((R_- (t) u (t), w_- )_{H(t)}\) is continuous.

Now we prove a compactness result and, following the analogous one in [13] (precisely Lemma 5.1 and Theorem 5.1), we divide the proof into a preliminary lemma and the compactness result itself. We prove both results because our situation is very different from the standard one, since the spaces defined in (5) depend on a parameter, while in [13] the spaces are fixed.

The proof of Theorem 3.5 follows the ideas of the analogous one of Lions, but it is not an immediate adaptation. To prove it we will use the following lemma.

Lemma 3.4

Given \(R \in {\mathcal {E}}(C_1, C_2)\), we have that for each \(\eta > 0\) there is \(c_{\eta } \geqslant 0\) such that

$$\begin{aligned} \Vert | \mathcal {R}|^{1/2} u \Vert _{\mathcal {H}} \leqslant \eta \Vert u \Vert _{\mathcal {V}} + c_{\eta } \Vert \mathcal {R}u \Vert _{\mathcal {V}'} \end{aligned}$$

for every \(u \in \mathcal {U}\).

Proof

If, by contradiction, we suppose that the thesis is false, then there exists a value of \(\eta \), say \(\bar{\eta }\), such that for each \(h \in \mathbf{N}\) there is \(u_h \in \mathcal {U}\) for which

$$\begin{aligned} \begin{array}{l} \Vert u_h \Vert _{\mathcal {V}} = 1 , \\ \Vert |\mathcal {R}|^{1/2} u_h \Vert _{\mathcal {H}} > \bar{\eta } + h\,\Vert \mathcal {R}u_h \Vert _{\mathcal {V}'} , \end{array} \end{aligned}$$
(22)

Then, since (k being the constant in (7))

$$\begin{aligned} \Vert |\mathcal {R}|^{1/2} u_h \Vert _{\mathcal {H}} \leqslant \sqrt{C_1}\,\Vert u_h \Vert _{\mathcal {H}} \leqslant \sqrt{C_1}\,k\,\Vert u_h \Vert _{\mathcal {V}} = \sqrt{C_1} \, k \end{aligned}$$

we derive that

$$\begin{aligned} \lim _{h \rightarrow +\infty } \Vert \mathcal {R}u_h \Vert _{\mathcal {V}'} = 0 . \end{aligned}$$
(23)

Moreover, since \(\Vert u_h \Vert _{\mathcal {V}} = 1\), we also get (up to selecting a subsequence, still denoted by \((u_h)_h\)) the existence of \({\bar{u}} \in \mathcal {V}\) such that

$$\begin{aligned} u_h \rightarrow {\bar{u}} \quad \text {in } \mathcal {V}\text { - weak}. \end{aligned}$$

By this and (23) we have

$$\begin{aligned} \lim _{h \rightarrow +\infty } \big \langle \mathcal {R}u_h , u_h \big \rangle _{\mathcal {V}' \times \mathcal {V}} = \lim _{h \rightarrow +\infty } \big ( \mathcal {R}u_h , u_h \big )_{\mathcal {H}} = \lim _{h \rightarrow +\infty } \big \Vert \mathcal {R}^{1/2} u_h \big \Vert _{\mathcal {H}} = 0 , \end{aligned}$$

where \(\mathcal {R}^{1/2} = \mathcal {R}_+^{1/2} - \mathcal {R}_-^{1/2}\). Since \(\Vert \mathcal {R}^{1/2} u_h \Vert _{\mathcal {H}} = \Vert |\mathcal {R}^{1/2}| u_h \Vert _{\mathcal {H}}\) this contradicts (22). \(\square \)

Theorem 3.5

Let \(R \in {\mathcal {E}}(C_1, C_2)\). Then the space \(\mathcal {W}_{\mathcal {R}}\) compactly embeds in \(\tilde{\mathcal {H}}\).

Proof

Consider a sequence \((u_h)_h\) such that \(\Vert u_h \Vert _{\mathcal {W}_{\mathcal {R}}} \leqslant c\). In particular, up to a subsequence, \(u_h \rightarrow u\) weakly in \(\mathcal {V}\); for simplicity we can suppose that \(u = 0\) and that \(\Vert u_h \Vert _{\mathcal {V}} = 1\), since otherwise one can replace \(u_h\) by \((u_h - u)/\Vert u_h - u \Vert _{\mathcal {V}}\). Then by Lemma 3.4 we have

$$\begin{aligned} \Vert |\mathcal {R}|^{1/2} u_h \Vert _{\mathcal {H}} \leqslant \eta + c_{\eta } \Vert \mathcal {R}u_h\Vert _{\mathcal {V}'}. \end{aligned}$$

We conclude if we show that

$$\begin{aligned} \lim _{h \rightarrow +\infty } \Vert \mathcal {R}u_h\Vert _{\mathcal {V}'} = 0 . \end{aligned}$$
(24)

Notice that, by Proposition 3.2, we have that

$$\begin{aligned} \max _{[0,T]} |(R(t) u_h (t), u_h (t))_{H(t)} | \leqslant {\tilde{c}}^2 \end{aligned}$$

for some positive constant \({\tilde{c}}\). Now define, for each \(t \in [0,T]\),

$$\begin{aligned} R^{1/2} (t) :=R_+^{1/2}(t) - R_-^{1/2}(t) . \end{aligned}$$

Then we get

$$\begin{aligned} \max _{[0,T]} |(R^{1/2} (t) u_h (t), R^{1/2} (t)u_h (t))_{H(t)} | \leqslant {\tilde{c}}^2 , \end{aligned}$$

that is (we recall that \(\Vert R(t)\Vert _{\mathcal {L}(H(t))} \leqslant C_1\) for each t and that, since \(R_+(t) \circ R_-(t) = 0\), one has \(R^{1/2}(t) \circ R^{1/2}(t) = |R(t)|\), so that \(\Vert R(t) w \Vert _{H(t)} = \Vert R^{1/2}(t) \circ R^{1/2}(t)\, w \Vert _{H(t)}\) for every \(w \in H(t)\))

$$\begin{aligned} \Vert R (t) u_h (t) \Vert _{V' (t)} \leqslant k\,\Vert R^{1/2} \circ R^{1/2} (t) u_h (t) \Vert _{H (t)} \leqslant k \, \sqrt{C_1}\,{\tilde{c}} . \end{aligned}$$
(25)

Since \(\Vert R (t) u_h (t) \Vert _{V' (t)}\) is equibounded, to get (24) it suffices to show that the sequence \(\{ \Vert R (t) u_h (t) \Vert _{V' (t)}\}_h\) converges to 0 pointwise. To show this we confine ourselves to \(t = 0\), the choice of t being irrelevant. Fix \(\delta > 0\) and consider \(\phi \in U\) such that

$$\begin{aligned} \begin{array}{l} \Vert R(0) u_h (0) \Vert _{V' (0)} < \big \langle R(0) u_h (0) , \phi \big \rangle _{V' (0) \times V(0)} + \delta = \big ( R(0) u_h (0) , \phi \big )_{H(0)} + \delta , \\ \Vert \phi \Vert _{V(0)} \leqslant 1 . \end{array} \end{aligned}$$
(26)

Notice that the function

$$\begin{aligned}{}[0,T] \ni \sigma \mapsto \big ( R(\sigma ) u_h (\sigma ) , \phi \big )_{H(\sigma )} \qquad \text {is continuous,} \end{aligned}$$

in particular at \(\sigma = 0\). Consider \(\lambda \in (0,1]\), which will be fixed later, and define the two spaces \(\mathcal {H}_{\lambda }\) and \(\mathcal {V}_{\lambda }'\) as the completion of \(\mathcal {U}\) with respect to the norms, respectively,

$$\begin{aligned} \Vert u \Vert _{\mathcal {H}_{\lambda }} :=\left( \int _0^T \Vert u(t) \Vert _{H(\lambda t)}^2 dt \right) ^{1/2} , \qquad \Vert u \Vert _{\mathcal {V}_{\lambda }'} :=\left( \int _0^T \Vert u(t) \Vert _{V'(\lambda t)}^{p'} dt \right) ^{1/p'}. \end{aligned}$$

Now define \(v_h (t) :=R( \lambda t) u_h ( \lambda t)\) and notice that

$$\begin{aligned} \Vert v_h \Vert _{\mathcal {H}_{\lambda }} \leqslant \lambda ^{-1/2} \Vert \mathcal {R}u_h \Vert _{\mathcal {H}} , \qquad \Vert v_h' \Vert _{\mathcal {V}_{\lambda }'} \leqslant \lambda ^{1 - 1/p'} \Vert (\mathcal {R}u_h)' \Vert _{\mathcal {V}'} . \end{aligned}$$

Consider a differentiable \(\varphi : [0,T] \rightarrow [-1,0]\) such that \(\varphi (0) = -1\) and \(\varphi (T) = 0\): we have that

$$\begin{aligned} \big ( R (0) u_h (0) , \phi \big )_{H(0)}&\,= \int _0^T \left[ \frac{d}{dt} \bigg ( \varphi (t) \big ( R(\lambda t) u_h(\lambda t) , \phi \big )_{H(\lambda t)} \bigg ) \right] dt \\&\,= \int _0^T \varphi '(t) \big ( R(\lambda t) u_h(\lambda t) , \phi \big )_{H(\lambda t)} dt\,\\&\qquad + \int _0^T \varphi (t) \lambda \big \langle (\mathcal {R}u_h) ' (\lambda t) , \phi \big \rangle _{V' (\lambda t) \times V(\lambda t)} dt. \end{aligned}$$

Denote by \(a_h\) the first summand and by \(b_h\) the second one. Notice that

$$\begin{aligned}&| b_h | \leqslant \lambda ^{1 - 1/p' - 1/p} \Vert (\mathcal {R}u_h)' \Vert _{\mathcal {V}'} \left( \int _0^{\lambda T} \Vert \phi \Vert _{V (s)}^p\,ds \right) ^{1/p} \leqslant c \left( \int _0^{\lambda T} \Vert \phi \Vert _{V (s)}^p\,ds \right) ^{1/p} . \end{aligned}$$

Then, after fixing \(\varepsilon > 0\), we can choose \(\lambda \) small enough such that

$$\begin{aligned} | b_h | \leqslant \varepsilon . \end{aligned}$$

For the term \(a_h\) we have

$$\begin{aligned} | a_h | \leqslant \left| \int _0^T \varphi ' (t) \big ( R(\lambda t) u_h(\lambda t) , \phi \big )_{H(\lambda t)} dt \right| = \left| \lambda ^{-1} \int _0^{\lambda T} \varphi ' ( \lambda ^{-1} s ) \big ( R(s) u_h(s) , \phi \big )_{H(s)} ds \right| . \end{aligned}$$

Now, since \(u_h \rightarrow 0\) weakly in \(\mathcal {V}\), we have that \(\mathcal {R}u_h \rightarrow 0\) weakly in \(\mathcal {H}\) and then we derive that

$$\begin{aligned} a_h \rightarrow 0 . \end{aligned}$$

Then, since \(\big | \big ( v_h (0) , \phi \big )_{H(0)} \big | = \big | \big ( R(0) u_h (0) , \phi \big )_{H(0)} \big | \leqslant | a_h | + | b_h |\), we conclude that

$$\begin{aligned} \lim _{h \rightarrow +\infty } \big ( R(0) u_h (0) , \phi \big )_{H(0)} = 0 . \end{aligned}$$

Then, from (26), we derive that

$$\begin{aligned} \limsup _{h \rightarrow +\infty } \Vert R(0) u_h (0) \Vert _{V' (0)} \leqslant \delta . \end{aligned}$$

We can repeat the same argument for every \(t \in [0,T]\) and get that for every \(\delta > 0\)

$$\begin{aligned} \limsup _{h \rightarrow +\infty } \Vert R(t) u_h (t) \Vert _{V' (t)} \leqslant \delta \qquad \text {for every } t \in [0,T]. \end{aligned}$$

By the arbitrariness of \(\delta \) we derive that \(\Vert R(t) u_h (t) \Vert _{V' (t)}\) converges to 0 for every \(t \in [0, T]\) and, since by (25) \(\{\Vert R(t) u_h (t) \Vert _{V' (t)}\}_h\) is bounded, by Lebesgue’s theorem we get that

$$\begin{aligned} \mathop {\lim }\limits _{h \rightarrow + \infty } \Vert \mathcal {R}u_h\Vert _{\mathcal {V}'} = 0\, . \end{aligned}$$

\(\square \)

Before stating the existence result we recall some definitions and a classical result, for which we refer to [23] (see Section 32.4).

Definition 3.6

Given a Banach space X, we say that an operator \(B : X \rightarrow X'\) is coercive if

$$\begin{aligned} \lim _{\Vert x\Vert \rightarrow + \infty } \frac{\langle B x, x \rangle }{\Vert x\Vert } = + \infty , \end{aligned}$$

is bounded if it maps bounded sets into bounded sets, and is hemicontinuous if the map

$$\begin{aligned} t \mapsto \big \langle B (u + t v) , w \big \rangle _{X' \times X} \qquad \text {is continuous in } [0,1] \qquad \text {for every } u, v, w \in X . \end{aligned}$$

A monotone and hemicontinuous operator B is of type M (see, for instance, Basic Ideas of the Theory of Monotone Operators in volume B of [23] or Lemma 2.1 in [22]), i.e. it satisfies the following: for every sequence \((u_j)_{j \in \mathbf{N}} \subset X\) such that

$$\begin{aligned} \left. \begin{array}{l} u_j \rightarrow u \qquad \text {in } X \text {-weak}\\ B u_j \rightarrow b \qquad \text {in } X'\text {-weak}\\ {\displaystyle \limsup _{j \rightarrow +\infty }}\,\big \langle B u_j , u_j \big \rangle _{X' \times X} \leqslant \big \langle b, u \big \rangle _{X' \times X} \end{array} \right| \quad \Longrightarrow \qquad B u = b . \end{aligned}$$
(M)

Theorem 3.7

Let \(M: X \rightarrow X'\) be monotone, bounded, coercive and hemicontinuous. Suppose \(L: X \rightarrow 2^{X'}\) to be maximal monotone. Then for every \(f \in X'\) the following equation has a solution

$$\begin{aligned} L u + M u \ni f \end{aligned}$$

and in particular, if L and M are single-valued, the equation \(Lu + Mu = f\) has a solution.

If, moreover, M is strictly monotone the solution is unique.

We now introduce our assumptions. Consider a family of operators

$$\begin{aligned} A(t): V(t) \longrightarrow V'(t) \end{aligned}$$

such that

$$\begin{aligned} t \mapsto \big \langle A(t) u, v \big \rangle _{V'(t) \times V(t)} \, \text { measurable on }[0,T] \quad \text { for every } u, v \in U \end{aligned}$$

and such that, if we define the abstract operator

$$\begin{aligned} \mathcal {A}:\mathcal {V}\longrightarrow \mathcal {V}' , \qquad (\mathcal {A}u) (t) = A(t) u(t) , \qquad 0\leqslant t \leqslant T , \end{aligned}$$
(27)

this turns out to be

$$\begin{aligned} \mathcal {A}\,\text {monotone, bounded, coercive and hemicontinuous.} \end{aligned}$$
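A natural example, consistent with the model equation (1) (a sketch: \(\Omega \) bounded and the time dependence of the spaces dropped), is the p-Laplacian,

$$\begin{aligned} V(t) \equiv W^{1,p}_0(\Omega ) , \qquad \langle A(t) u , v \rangle _{V'(t) \times V(t)} = \int _{\Omega } |Du|^{p-2} Du \cdot Dv \, dx , \end{aligned}$$

for which the operator \(\mathcal {A}\) defined in (27) is monotone, bounded, coercive and hemicontinuous on \(\mathcal {V}= L^p(0,T; W^{1,p}_0(\Omega ))\).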

We denote

$$\begin{aligned} \mathcal {P}:\ \mathcal {W}_{\mathcal {R}} \longrightarrow \mathcal {V}', \qquad (\mathcal {P}u) (t) \, =\, \mathcal {R}u'(t) + \mathcal {A}u(t) , \qquad 0 \leqslant t \leqslant T \end{aligned}$$

where \(\mathcal {R}\) is defined in (15) and satisfies (18).

Thanks to Remark 3.3 it makes sense to consider the two quantities

$$\begin{aligned} (R_+ (0) u (0), w_+ )_{H(0)} \quad \text { and } \quad (R_- (T) u (T), w_- )_{H(T)} \end{aligned}$$

for every \(w_+ \in U \cap (H_+ (0) \oplus H_0 (0))\) and \(w_- \in U \cap (H_- (T) \oplus H_0 (T))\) and then to define

$$\begin{aligned} \mathcal {W}_{\mathcal {R}}^0 = \{ u\in \mathcal {W}_{\mathcal {R}} \ | \ P_+(0)u(0)=0 ,\ P_-(T)u(T)=0 \}. \end{aligned}$$
(28)

Now we consider the three operators

$$\begin{aligned} {{\mathcal {L}}}_1 u = \mathcal {R}u' + \frac{1}{2} \mathcal {R}' u , \qquad {{\mathcal {L}}}_2 u = \mathcal {R}u' , \qquad {{\mathcal {L}}}_3 u = (\mathcal {R}u)', \qquad D({{\mathcal {L}}}_i) = \mathcal {W}_{\mathcal {R}}^0 , \quad i = 1, 2, 3, \end{aligned}$$

and state the following result.

Proposition 3.8

The operator \({{\mathcal {L}}}_1: \mathcal {W}_{\mathcal {R}}^0 \subset \mathcal {V}\rightarrow \mathcal {V}'\) is maximal monotone.

The operator \({{\mathcal {L}}}_2: \mathcal {W}_{\mathcal {R}}^0 \subset \mathcal {V}\rightarrow \mathcal {V}'\) is maximal monotone if

$$\begin{aligned} \langle \mathcal {R}' u, u \rangle _{\mathcal {V}' \times \mathcal {V}} \leqslant 0 \qquad \text {for every } u \in \mathcal {V}, \end{aligned}$$
(29)

and the operator \({{\mathcal {L}}}_3: \mathcal {W}_{\mathcal {R}}^0 \subset \mathcal {V}\rightarrow \mathcal {V}'\) is maximal monotone if

$$\begin{aligned} \langle \mathcal {R}' u, u \rangle _{\mathcal {V}' \times \mathcal {V}} \geqslant 0 \qquad \text {for every } u \in \mathcal {V}. \end{aligned}$$
(30)

Proof

The proof is the same as that of the analogous result in [17]. The only difference is that one has to consider first \(u \in C^1 ([0,T]; U) \cap \mathcal {W}_{\mathcal {R}}^0\) and then use the definition of \(\mathcal {V},\mathcal {V}^*\) and Lemma 2.1. \(\square \)

Definition 3.9

We say u is a solution of the problem

$$\begin{aligned} \left\{ \begin{array}{l} \mathcal {R}u' + \mathcal {A}u = f \\ {P}_+ (0)u (0) = \varphi \\ {P}_- (T)u (T) = \psi , \end{array} \right. \end{aligned}$$

where \(f \in \mathcal {V}'\), \(\varphi \in {\tilde{H}}_+(0), \psi \in {\tilde{H}}_-(T)\), if \(u \in \mathcal {W}_{\mathcal {R}}\) and

$$\begin{aligned} \mathcal {R}u'(t) + \mathcal {A}u (t) = f(t) \qquad \text {for a.e. } t\in [0,T] \end{aligned}$$

and the two conditions \({P}_+ (0)u (0) = \varphi \) and \({P}_- (T)u (T) = \psi \) are satisfied.

We say u is a solution of the problem

$$\begin{aligned} \left\{ \begin{array}{l} (\mathcal {R}u)' + \mathcal {A}u = f \\ {P}_+ (0)u (0) = \varphi \\ {P}_- (T)u (T) = \psi , \end{array} \right. \end{aligned}$$
(31)

where \(f \in \mathcal {V}'\), \(\varphi \in {\tilde{H}}_+(0), \psi \in {\tilde{H}}_-(T)\), if \(u \in \mathcal {W}_{\mathcal {R}}\) and

$$\begin{aligned} (\mathcal {R}u)'(t) + \mathcal {A}u (t) = f(t) \qquad \text {for a.e. } t\in [0,T] \end{aligned}$$

and the two conditions \({P}_+ (0)u (0) = \varphi \) and \({P}_- (T)u (T) = \psi \) are satisfied.

We start by giving an existence result for the following problem

$$\begin{aligned} \left\{ \begin{array}{l} \mathcal {R}u' + \mathcal {A}u = f \\ [0.3em] P_+ (0)u (0) = 0 \\ [0.3em] P_- (T)u (T) = 0 \end{array} \right. \end{aligned}$$
(32)

The idea now is to apply the previous proposition and Theorem 3.7 to the equation \(\mathcal {R}u' +\mathcal {A}u = f\), adding and subtracting a term involving the derivative of \(\mathcal {R}\) so as to write the left hand side as the sum of two operators satisfying the assumptions of Theorem 3.7. We will see with an example (see example 6 in the last section) that, even if the derivative of \(\mathcal {R}\) does not appear in the equation, the lack of regularity in time of \(\mathcal {R}\) can cause problems, at least the loss of uniqueness of solutions.

With this in mind we write

$$\begin{aligned} \mathcal {R}u' + \mathcal {A}u = \left( \mathcal {R}u' + {\displaystyle \frac{1}{2}} \mathcal {R}' u \right) + \left( \mathcal {A}u - {\displaystyle \frac{1}{2}} \mathcal {R}' u \right) . \end{aligned}$$
(33)
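In the model case of a multiplication operator with, say, a coefficient r which is Lipschitz in time, the splitting (33) reads, formally,

$$\begin{aligned} r\, u_t + A u = \Big ( r\, u_t + \frac{1}{2}\, r_t\, u \Big ) + \Big ( A u - \frac{1}{2}\, r_t\, u \Big ) , \end{aligned}$$

the first bracket playing the role of the maximal monotone operator \({{\mathcal {L}}}_1\) of Proposition 3.8 and the second one that of the operator \({{\mathcal {M}}}\) appearing in the theorem below (provided \(r_t\) is dominated in such a way that \({{\mathcal {M}}}\) is still monotone and coercive).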

Theorem 3.10

Suppose \(\mathcal {R}\) satisfies assumption (18) and that one of the following holds:

i):

\({{\mathcal {M}}} = \mathcal {A}- \frac{1}{2} \mathcal {R}'\) is monotone, bounded, coercive and hemicontinuous.

ii ):

\({{\mathcal {M}}} = \mathcal {A}\) is monotone, bounded, coercive, hemicontinuous and \(\langle \mathcal {R}' u, u \rangle _{\mathcal {V}' \times \mathcal {V}} \leqslant 0\) for every \(u \in \mathcal {V}\).

Then problem (32) admits a solution for every \(f \in \mathcal {V}'\). If moreover \({{\mathcal {M}}}\) is strictly monotone, the solution is unique.

Suppose now that one of the following holds:

iii\(\, )\):

\({{\mathcal {M}}} = \mathcal {A}+ \frac{1}{2} \mathcal {R}'\) is monotone, bounded, coercive and hemicontinuous.

iv\(\, )\):

\({{\mathcal {M}}} = \mathcal {A}\) is monotone, bounded, coercive, hemicontinuous and \(\langle \mathcal {R}' u, u \rangle _{\mathcal {V}' \times \mathcal {V}} \geqslant 0\) for every \(u \in \mathcal {V}\).

Then problem (31) with \(\varphi = \psi = 0\) admits a solution for every \(f \in \mathcal {V}'\). If moreover \({{\mathcal {M}}}\) is strictly monotone, the solution is unique.

Proof

The proof follows from Theorem 3.7 and Proposition 3.8, taking in Theorem 3.7 \(M = {{\mathcal {M}}}\) and \(L u = \mathcal {R}u' + \frac{1}{2} \mathcal {R}' u\) in points \(i\,)\) and \(iii\,)\), \(L u = \mathcal {R}u'\) in point \(ii\,)\), and \(L u = (\mathcal {R}u)'\) in point \(iv\,)\). \(\square \)

Remark 3.11

In fact we obtain an existence result also for the Cauchy problem

$$\begin{aligned} \mathcal {R}u' + \mathcal {A}u \ni f , \qquad u \in \mathcal {W}_{\mathcal {R}}^0 . \end{aligned}$$

Now we want to consider the Cauchy-Dirichlet problem with non-zero “initial” data

$$\begin{aligned} \left\{ \begin{array}{l} \mathcal {R}u' + \mathcal {A}u = f \\ P_+ (0)u (0) = \varphi \\ P_- (T)u (T) = \psi . \end{array} \right. \end{aligned}$$
(34)

To do that we have to add some assumptions, both on \(\mathcal {A}\) and \(\mathcal {R}\). Assumptions on \(\mathcal {A}\) are explicitly given in the theorem which follows. Assumptions on \(\mathcal {R}\) are more implicit and are hidden in (35). Consider the following spaces:

$$\begin{aligned} U_+(0) = \big \{ w \in U \ \big | \ [P_+(0)+P_0(0)] w \in U \big \} = U \cap ({\tilde{H}}_+(0)\oplus {\tilde{H}}_0(0)) , \\ U_-(T) = \big \{ w \in U \ \big | \ [P_-(T)+P_0(T)] w \in U \big \} = U \cap ({\tilde{H}}_-(T)\oplus {\tilde{H}}_0(T)) . \end{aligned}$$

(see (14) for the definition of \({\tilde{H}}_-, {\tilde{H}}_0, {\tilde{H}}_+\)). Then we suppose

$$\begin{aligned} U_+(0) \text { dense in } {\tilde{H}}_+(0), \qquad U_-(T) \text { dense in } {\tilde{H}}_-(T). \end{aligned}$$
(35)

This assumption indirectly involves the operator \(\mathcal {R}\), as we show with an example at the end of the paper (see example 2 in the last section).

Remark 3.12

We want to stress that we remove the assumption \(H_+(0) \cap H_-(T) = \{ 0 \}\) which, on the contrary, was made in [17].

Then the following theorem holds.

Theorem 3.13

Suppose (35) holds. Define the operator \(\mathcal {P}: \mathcal {W}_{\mathcal {R}} \rightarrow \mathcal {V}'\) by \(\mathcal {P}u = \mathcal {R}u' + \mathcal {A}u\) where \(\mathcal {R}\) is defined in (15), \(R \in {\mathcal {E}}(C_1, C_2)\) and \(\mathcal {A}: \mathcal {V}\rightarrow \mathcal {V}'\) is hemicontinuous. Suppose that there exist two constants \(\alpha , \beta > 0\) such that

$$\begin{aligned} \begin{array}{l} \big \langle \mathcal {A}u - \mathcal {A}v - \frac{1}{2} (\mathcal {R}' u - \mathcal {R}' v), u - v \big \rangle _{\mathcal {V}' \times \mathcal {V}} \geqslant \alpha \Vert u - v \Vert _{\mathcal {V}}^2, \\ \Vert \mathcal {A}u - \frac{1}{2} \mathcal {R}' u \Vert _{\mathcal {V}'} \leqslant \beta \Vert u \Vert _{\mathcal {V}} \end{array} \end{aligned}$$
(36)

or for some \(p \in (2, +\infty )\)

$$\begin{aligned} \begin{array}{l} \langle \mathcal {A}u - \mathcal {A}v, u - v \rangle _{\mathcal {V}' \times \mathcal {V}} \geqslant \alpha \Vert u - v \Vert _{\mathcal {V}}^p, \Vert \mathcal {A}u \Vert _{\mathcal {V}'} \leqslant \beta \Vert u \Vert _{\mathcal {V}}^{p-1} \\ \langle \mathcal {R}' u, u \rangle _{\mathcal {V}' \times \mathcal {V}} \leqslant 0 \end{array} \end{aligned}$$
(37)

for every \(u , v \in \mathcal {V}\). Then there is a constant \(c = c(\alpha , \beta , p)\) (depending only on \(\alpha , \beta \), p and proportional to \(\alpha ^{-\frac{1}{p}} + \alpha ^{-\frac{p-1}{p}})\) such that for every \(u \in \mathcal {W}_{\mathcal {R}}\)

$$\begin{aligned} \Vert u \Vert _{\mathcal {W}_{\mathcal {R}}}&\leqslant c\ \Big [ \Vert \mathcal {P}u \Vert _{\mathcal {V}'} + \Vert \mathcal {P}u \Vert _{\mathcal {V}'}^{1/(p-1)} + \Vert R_-^{1/2}(T) u(T) \Vert _{H_-(T)}^{2(p-1)/p} + \Vert R_+^{1/2}(0) u(0) \Vert _{H_+(0)}^{2(p-1)/p}\\&\quad + \Vert R_-^{1/2}(T) u(T) \Vert _{H_-(T)}^{2/p} + \Vert R_+^{1/2}(0) u(0) \Vert _{H_+(0)}^{2/p} \Big ]\, . \end{aligned}$$

Moreover for every \(f \in \mathcal {V}'\), \(\varphi \in {\tilde{H}}_+(0)\), \(\psi \in {\tilde{H}}_-(T)\) problem (34) has a unique solution.

Proof

First we show the existence result: consider \(\Phi , \Psi , \varphi , \psi \in U\) with \(P_+(0) \Phi = \varphi \), \(P_-(T) \Psi = \psi \) and define

$$\begin{aligned} \eta (t) :=\frac{T - t}{T}\,\varphi + \frac{t}{T}\,\psi = \varphi + \frac{t}{T} \big ( \psi - \varphi \big ), \qquad t \in [0, T]. \end{aligned}$$

Now solve the following problem (one can easily verify that the operator \({\tilde{\mathcal {A}}} v :=\mathcal {A}(v + \eta )\) is bounded, coercive, strongly monotone and hemicontinuous)

$$\begin{aligned} \left\{ \begin{array}{l} \mathcal {R}v' + \mathcal {A}(v + \eta ) = f - \mathcal {R}\eta ' \\ P_+(0) v(0) = 0 \\ P_-(T) v(T) = 0 \end{array} \right. \end{aligned}$$

and consider \(u = v + \eta \); then the function u solves (34) with \(\varphi , \psi \in U\). Before concluding, we turn to the estimates.

Notice that, writing \(\mathcal {P}u = \mathcal {R}u' + \mathcal {A}u\) and, for \(p = 2\), \(\mathcal {P}u = \big ( \mathcal {R}u' + \frac{1}{2} \mathcal {R}' u \big ) + \big ( \mathcal {A}u - \frac{1}{2} \mathcal {R}' u \big )\), by Proposition 3.2 we get for \(p > 2\)

$$\begin{aligned} \alpha \Vert u \Vert _{\mathcal {V}}^p\leqslant & {} \left\langle \mathcal {A}u, u \right\rangle _{\mathcal {V}' \times \mathcal {V}} = \left\langle \mathcal {P}u , u \right\rangle _{\mathcal {V}' \times \mathcal {V}} - \left\langle \mathcal {R}u', u \right\rangle _{\mathcal {V}' \times \mathcal {V}} \\= & {} \ \left\langle \mathcal {P}u , u \right\rangle _{\mathcal {V}' \times \mathcal {V}} - \frac{1}{2} \left[ (R(T) u (T), u(T))_{H(T)} - (R(0) u (0), u(0))_{H(0)} \right] \\&+ \left\langle \frac{1}{2} \mathcal {R}' u, u \right\rangle _{\mathcal {V}' \times \mathcal {V}} \\= & {} \ \left\langle \mathcal {P}u , u \right\rangle _{\mathcal {V}' \times \mathcal {V}} + \frac{1}{2} \left[ (R_-(T) u (T), u(T))_{H(T)} + (R_+(0) u (0), u(0))_{H(0)} \right] \\&- \frac{1}{2} \left[ (R_+(T) u (T), u(T))_{H(T)} + (R_-(0) u (0), u(0))_{H(0)} \right] + \left\langle \frac{1}{2} \mathcal {R}' u, u \right\rangle _{\mathcal {V}' \times \mathcal {V}} \\\leqslant & {} \ \left\langle \mathcal {P}u , u \right\rangle _{\mathcal {V}' \times \mathcal {V}} + \frac{1}{2} \left[ (R_-(T) u (T), u(T))_{H(T)} + (R_+(0) u (0), u(0))_{H(0)} \right] \end{aligned}$$

and for \(p = 2\)

$$\begin{aligned} \alpha \Vert u \Vert _{\mathcal {V}}^2&\leqslant \left\langle \mathcal {A}u - \frac{1}{2} \mathcal {R}' u, u \right\rangle _{\mathcal {V}' \times \mathcal {V}} \\&= \ \left\langle \mathcal {P}u , u \right\rangle _{\mathcal {V}' \times \mathcal {V}} - \left\langle \mathcal {R}u' + \frac{1}{2} \mathcal {R}' u, u \right\rangle _{\mathcal {V}' \times \mathcal {V}} \\&= \ \left\langle \mathcal {P}u , u \right\rangle _{\mathcal {V}' \times \mathcal {V}} - \frac{1}{2} \left[ (R(T) u (T), u(T))_{H(T)} - (R(0) u (0), u(0))_{H(0)} \right] \\&\leqslant \ \left\langle \mathcal {P}u , u \right\rangle _{\mathcal {V}' \times \mathcal {V}} + \frac{1}{2} \left[ (R_-(T) u (T), u(T))_{H(T)} + (R_+(0) u (0), u(0))_{H(0)} \right] . \end{aligned}$$

Now, if we denote by q the quantity \(p/(p-1)\), for every \(\epsilon > 0\) we can write

$$\begin{aligned} \Vert \mathcal {P}u \Vert _{\mathcal {V}'} \Vert u\Vert _{\mathcal {V}} \leqslant \frac{\epsilon ^p}{p} \Vert u \Vert _{\mathcal {V}}^p + \frac{1}{q}\Big (\frac{1}{\epsilon }\Big )^{q/p^2} \Vert \mathcal {P}u \Vert _{\mathcal {V}'}^q \end{aligned}$$
(38)

by which we get the existence of a constant \(c = c (\alpha , p)\) which is proportional to \(\alpha ^{-1/p}\) (e.g. choosing \(\epsilon \) such that \(\epsilon ^p/p = \alpha /2\)) such that

$$\begin{aligned} \Vert u \Vert _{\mathcal {V}} \leqslant c(\alpha , p) \Big [ \Vert \mathcal {P}u \Vert _{\mathcal {V}'}^{1/(p-1)} + \big ( R_-(T) u (T), u(T) \big )_{H(T)}^{1/p} + \big ( R_+(0) u (0), u(0) \big )_{H(0)}^{1/p} \Big ] \end{aligned}$$

for every \(p \in [2, +\infty )\). Since \(\mathcal {R}u' = \mathcal {P}u - \mathcal {A}u\) we immediately get

$$\begin{aligned} \left\| \mathcal {R}u' \right\| _{\mathcal {V}'} \leqslant&\,\left\| \mathcal {P}u \right\| _{\mathcal {V}'} + \left\| \mathcal {A}u \right\| _{\mathcal {V}'} \leqslant \Vert \mathcal {P}u \Vert _{\mathcal {V}'} + c\,\Vert u \Vert _{\mathcal {V}}^{p-1} \end{aligned}$$

where c depends on \(\beta \) and \(\Vert \mathcal {R}' \Vert \) when \(p = 2\) and c depends only on \(\beta \) when \(p > 2\), by which we get the thesis.

Now, to conclude the existence result for general \(\varphi \) and \(\psi \), one can consider \(\varphi \in {\tilde{H}}_+(0)\), \(\psi \in {\tilde{H}}_-(T)\), two sequences \((\varphi _n)_n \subset U_+(0), (\psi _n)_n \subset U_-(T)\) with \(\varphi _n \rightarrow \varphi \) in \({\tilde{H}}_+(0)\) and \(\psi _n \rightarrow \psi \) in \({\tilde{H}}_-(T)\), any \(f \in \mathcal {V}'\), and the corresponding solutions \(u_n\). This can be done thanks to assumption (35).

Arguing as above one can obtain the following estimate for the difference \(u_n - u_m\):

$$\begin{aligned}&\Vert u_n - u_m \Vert _{\mathcal {V}}^p \,+ \Vert \mathcal {R}( u_n' - u_m') \Vert _{\mathcal {V}'} \\&\quad \leqslant c \ \bigg [ \,\Vert R_-^{1/2}(T) (\psi _n - \psi _m) \Vert _{H_-(T)}^{\frac{2}{p}} + \Vert R_+^{1/2}(0) ( \varphi _n - \varphi _m) \Vert _{H_+(0)}^{\frac{2}{p}} \\&\qquad + \Vert R_-^{1/2}(T) (\psi _n - \psi _m) \Vert _{H_-(T)}^{2 \frac{p-1}{p}} + \Vert R_+^{1/2}(0) (\varphi _n - \varphi _m) \Vert _{H_+(0)}^{2 \frac{p-1}{p}} \bigg ] \end{aligned}$$

for every \(n, m \in \mathbf{N}\); hence there is a function \(u \in \mathcal {W}_{\mathcal {R}}\) such that

$$\begin{aligned}&u_n \rightarrow u \quad \text { in } \mathcal {V}\qquad \text { and } \qquad \mathcal {R}u_n' \rightarrow \mathcal {R}u' \quad \text { in } \mathcal {V}' . \end{aligned}$$

Up to selecting a subsequence we also get that \(\mathcal {A}u_n\) converges weakly to some \(b \in \mathcal {V}'\) and then \(\langle \mathcal {A}u_n , u_n \rangle _{\mathcal {V}' \times \mathcal {V}} \rightarrow \langle b , u \rangle _{\mathcal {V}' \times \mathcal {V}}\). Since \(\mathcal {A}\) is monotone and hemicontinuous, \(\mathcal {A}\) is of type M, and we can conclude that \(b = \mathcal {A}u\). \(\square \)

Similarly one proves the following theorem, which extends the result stated in [17] to the case of Banach spaces depending on the parameter t.

Theorem 3.14

Suppose (35) holds. Define the operator \(\mathcal {P}: \mathcal {W}_{\mathcal {R}} \rightarrow \mathcal {V}'\) by \(\mathcal {P}u = (\mathcal {R}u)' + \mathcal {A}u\) where \(\mathcal {R}\) is defined in (15), \(R \in {\mathcal {E}}(C_1, C_2)\) and \(\mathcal {A}: \mathcal {V}\rightarrow \mathcal {V}'\) is hemicontinuous. Suppose that there exist two constants \(\alpha , \beta > 0\) such that

$$\begin{aligned} \begin{array}{l} \big \langle \mathcal {A}u - \mathcal {A}v + \frac{1}{2} (\mathcal {R}' u - \mathcal {R}' v), u - v \big \rangle _{\mathcal {V}' \times \mathcal {V}} \geqslant \alpha \Vert u - v \Vert _{\mathcal {V}}^2, \\ \Vert \mathcal {A}u + \frac{1}{2} \mathcal {R}' u \Vert _{\mathcal {V}'} \leqslant \beta \Vert u \Vert _{\mathcal {V}} \end{array} \end{aligned}$$
(39)

or for some \(p \in (2, +\infty )\)

$$\begin{aligned} \begin{array}{l} \langle \mathcal {A}u - \mathcal {A}v, u - v \rangle _{\mathcal {V}' \times \mathcal {V}} \geqslant \alpha \Vert u - v \Vert _{\mathcal {V}}^p, \Vert \mathcal {A}u \Vert _{\mathcal {V}'} \leqslant \beta \Vert u \Vert _{\mathcal {V}}^{p-1} \\ \langle \mathcal {R}' u, u \rangle _{\mathcal {V}' \times \mathcal {V}} \geqslant 0 \end{array} \end{aligned}$$
(40)

for every \(u , v \in \mathcal {V}\). Then there is a constant \(c = c(\alpha , \beta , p)\) (depending only on \(\alpha , \beta \), p and proportional to \(\alpha ^{-\frac{1}{p}} + \alpha ^{-\frac{p-1}{p}})\) such that for every \(u \in \mathcal {W}_{\mathcal {R}}\)

$$\begin{aligned}&\Vert u \Vert _{\mathcal {W}_{\mathcal {R}}} \leqslant c\ \Big [ \Vert \mathcal {P}u \Vert _{\mathcal {V}'} + \Vert \mathcal {P}u \Vert _{\mathcal {V}'}^{1/(p-1)} + \Vert R_-^{1/2}(T) u(T) \Vert _{H_-(T)}^{2(p-1)/p} + \Vert R_+^{1/2}(0) u(0) \Vert _{H_+(0)}^{2(p-1)/p} \\&\quad + \Vert R_-^{1/2}(T) u(T) \Vert _{H_-(T)}^{2/p} + \Vert R_+^{1/2}(0) u(0) \Vert _{H_+(0)}^{2/p} \Big ]\, . \end{aligned}$$

Moreover for every \(f \in \mathcal {V}'\), \(\varphi \in {\tilde{H}}_+(0)\), \(\psi \in {\tilde{H}}_-(T)\) problem (31) has a unique solution.

Remark 3.15

If \(\mathcal {A}\) is linear we also have the corresponding existence results for the problems

$$\begin{aligned} \left\{ \begin{array}{l} \mathcal {R}u' + \mathcal {A}u + \lambda \,\mathcal {R}u = f \\ P_+(0) u(0) = \varphi \\ P_-(T) u(T) = \psi \end{array} \right. \qquad \qquad \left\{ \begin{array}{l} (\mathcal {R}u)' + \mathcal {A}u + \lambda \,\mathcal {R}u = f \\ P_+(0) u(0) = \varphi \\ P_-(T) u(T) = \psi \end{array} \right. \end{aligned}$$

for every \(\lambda \in \mathbf{R}\). Indeed, it suffices to consider the change of variable

$$\begin{aligned} v(t) = e^{\lambda t} u(t) \end{aligned}$$

(one could also consider \(v(t) = e^{\lambda (t-T)} u(t)\) or \(v(t) = e^{\lambda (T-t)} u(t)\)) to obtain

$$\begin{aligned} \left\{ \begin{array}{l} \mathcal {R}v' + \mathcal {A}v = {\tilde{f}} = f e^{\lambda t} \\ P_+(0) v(0) = \varphi \\ P_-(T) v(T) = e^{\lambda T} \psi \end{array} \right. \qquad \qquad \left\{ \begin{array}{l} (\mathcal {R}v)' + \mathcal {A}v = {\tilde{f}} = f e^{\lambda t} \\ P_+(0) v(0) = \varphi \\ P_-(T) v(T) = e^{\lambda T} \psi \end{array} \right. \end{aligned}$$

each of which has a unique solution v. Then \(u(t) = e^{- \lambda t} v(t)\) solves the corresponding original problem.
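A quick check of the change of variable, using only the linearity of \(\mathcal {R}(t)\) and \(\mathcal {A}(t)\): with \(v(t) = e^{\lambda t} u(t)\) one has

$$\begin{aligned} \mathcal {R}v' + \mathcal {A}v = \mathcal {R}\big ( \lambda e^{\lambda t} u + e^{\lambda t} u' \big ) + e^{\lambda t} \mathcal {A}u = e^{\lambda t} \big ( \mathcal {R}u' + \mathcal {A}u + \lambda \, \mathcal {R}u \big ) = e^{\lambda t} f = {\tilde{f}} , \end{aligned}$$

and analogously \((\mathcal {R}v)' + \mathcal {A}v = e^{\lambda t} \big ( (\mathcal {R}u)' + \mathcal {A}u + \lambda \, \mathcal {R}u \big )\) for the second problem, while \(P_+(0) v(0) = P_+(0) u(0) = \varphi \) and \(P_-(T) v(T) = e^{\lambda T} P_-(T) u(T) = e^{\lambda T} \psi \).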

3.1 A time regularity result

Here we give a regularity result for the solution of (34) and (31) only in the following special case:

$$\begin{aligned}&p = 2 \quad \text {and} \quad \mathcal {A}\ \mathbf{linear} \\&V(t) = V, \quad H(t) \equiv H , \\&A(t) \equiv A , \quad \text {i.e. } A \text { independent of } t , \\&R(t) \equiv R , \quad \text {i.e. } R \text { independent of } t . \end{aligned}$$

For a much more detailed study about regularity, not only in t, we refer to [19].

Theorem 3.16

Under the assumptions of Theorem 3.13, given \(f \in \mathcal {V}'\), \(\varphi \in {\tilde{H}}_+\), \(\psi \in {\tilde{H}}_-\), denote by u the solution of problem (34). Suppose moreover that \(R \in {\mathcal {E}}(C_1, C_2)\) and that A satisfies

$$\begin{aligned} \langle A u - A v, u - v \rangle _{V' \times V} \geqslant \alpha \Vert u - v \Vert _{V}^2, \Vert A u \Vert _{V'} \leqslant \beta \Vert u \Vert _{V} \end{aligned}$$

for every \(u, v \in V\). Assume that \(f'\), the generalized derivative of f, belongs to \(\mathcal {V}'\) and suppose there exist \(u_0 , u_T \in V\) such that \(P_+ u_0 = \varphi \) and \(P_- u_T = \psi \) (\(P_+\) and \(P_-\) being the projections from H onto \({\tilde{H}}_+\) and \({\tilde{H}}_-\), respectively), and that \(f(0) - A u_0 \in \mathrm{Im}\ R\), \(f(T) - A u_T \in \mathrm{Im}\ R\). Then

  (i)

    \(u \in H^1(0,T; V)\),

  (ii)

    there is \(c > 0\), depending only on \(\alpha ^{-1/2}\), \(\beta \) and \(\Vert A \Vert \), such that

    $$\begin{aligned}&\Vert u \Vert _{\mathcal {W}_{\mathcal {R}}} + \Vert u' \Vert _{\mathcal {W}_{\mathcal {R}}} + \sup _{t \in [0,T]} \Vert u(t) \Vert _{V} \\&\quad \leqslant \,c\,\Big [ \Vert f \Vert _{\mathcal {V}'} + \Vert R_+^{1/2} u_0 \Vert _{H_+} + \Vert R_-^{1/2} u_T \Vert _{H_-} \\&\qquad + \Vert f' \Vert _{\mathcal {V}'} + \Vert R_+^{-1/2}(0) \big [ P_+ \big ( f(0) - A u_0 \big ) \big ] \Vert _{H_+} \\&\qquad + \Vert R_-^{-1/2}(T) \big [ P_- \big ( f(T) - A u_T \big ) \big ] \Vert _{H_-} \Big ] \end{aligned}$$

Proof

Let u be the solution of the problem

$$\begin{aligned} \left\{ \begin{array}{l} R u' + A u = f \\ P_+ u (0) = \varphi \\ P_- u (T) = \psi \end{array} \right. \end{aligned}$$

and v the solution of

$$\begin{aligned} \left\{ \begin{array}{l} R v' + A v = f'\\ P_+ v (0) = R_+^{-1} \big [ P_+ \big ( f(0) - A u_0 \big ) \big ]\\ P_- v (T) = R_-^{-1} \big [ P_- \big ( f(T) - A u_T \big ) \big ] \end{array} \right. \end{aligned}$$
(41)

with

$$\begin{aligned}&u_0 \in V, u_T \in V, P_+ u_0 = \varphi , \ P_- u_T = \psi ,\nonumber \\&P_+ [f(0) - A u_0 ] \in \text {Im}\,R_+,\nonumber \\&P_- [f(T) - A u_T ] \in \text {Im}\,R_-. \end{aligned}$$
(42)

With v the solution of (41), consider

$$\begin{aligned} w(u_0;t) = w(t) = u_0 + \int _0^t v(s)\,ds . \end{aligned}$$

Clearly \(w \in H^1 (0,T; V)\). Since here R does not depend on t, \(R v' = (R v)'\); integrating the equation in (41) over [0, t] one obtains

$$\begin{aligned} R v(t) = R v (0) - \int _0^t A v(s) ds + f(t) - f(0) . \end{aligned}$$

Since \(w' = v\), the previous equality yields

$$\begin{aligned} R w' (t) =&\ R v (0) - \int _0^t A w' (s) ds + f(t) - f(0) \\ =&\ R v (0) - A w(t) + A w(0) + f(t) - f(0) . \end{aligned}$$

Then we get

$$\begin{aligned} R (w - u)'(t) + A ( w -u ) (t) =&R w' (t) + A w (t) - f (t) \\ =&\big [ R v (0) - A w(t) + A w(0) + f(t) - f(0) \big ] + A w (t) - f (t) \\ =&R v(0) - f(0) + A u_0. \end{aligned}$$

Notice that, in general, \(R v(0) - f(0) + A u_0\) need not vanish in \(H\): by (41) we only have

$$\begin{aligned} P_+ \Big [ R v(0) - f(0) + A u_0 \Big ] = 0 \qquad \text { in } H_+ \end{aligned}$$

but we can choose \(u_0\) in such a way that

$$\begin{aligned} R v(0) - f(0) + A u_0 = 0 \qquad \text { in } H. \end{aligned}$$
(43)

We can do this, modifying \(u_0\) if necessary and taking \(u_0 \in V\) to be the solution of the problem

$$\begin{aligned} A z = f(0) - R v(0), z \in V. \end{aligned}$$

Clearly in this way the initial condition in (41) is maintained. Therefore, since \(u_0\) satisfies (43), we have that the function \(w - u\) solves the problem

$$\begin{aligned} \left\{ \begin{array}{l} R y' + A y = 0 \\ P_+ y (0) = 0 \\ {\displaystyle P_- y (T) = P_- \Big [ u_0 + \int _0^T v(s) ds - u_T \Big ] } \end{array} \right. \end{aligned}$$
(44)

Now consider the function

$$\begin{aligned} {\tilde{w}}(u_T;t) = {\tilde{w}}(t) : = u_T + \int _T^t v(s)\,ds = \Big [u_T - \int _0^T v(s) ds \Big ] + \int _0^t v(s) ds \, . \end{aligned}$$

Arguing as for w, but now integrating between T and \(t \in (0,T)\), we get that the function \({\tilde{w}}-u\) solves the problem

$$\begin{aligned} \left\{ \begin{array}{l} R y' + A y = 0 \\ {\displaystyle P_+ y (0) = P_+ \Big [ u_T - \int _0^T v(s) ds - u_0 \Big ] } \\ P_- y (T) = 0 \end{array} \right. \end{aligned}$$

where \(u_T\) is chosen in such a way as to solve \(A u_T = f (T) - R v(T)\), \(u_T \in V\).
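For completeness, the computation behind the equation runs as follows: integrating the equation in (41) between T and t and using \({\tilde{w}}' = v\), \({\tilde{w}}(T) = u_T\), one gets

$$\begin{aligned} R {\tilde{w}}' (t) =&\ R v (T) - \int _T^t A {\tilde{w}}' (s) ds + f(t) - f(T) \\ =&\ R v (T) - A {\tilde{w}}(t) + A u_T + f(t) - f(T) , \end{aligned}$$

so that, using \(R u' + A u = f\),

$$\begin{aligned} R ({\tilde{w}} - u)'(t) + A ({\tilde{w}} - u)(t) = R v(T) - f(T) + A u_T = 0 \end{aligned}$$

thanks to the choice of \(u_T\).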

Now define the function \(z:=w - {\tilde{w}}\) and notice that

$$\begin{aligned} w(u_0,t) - {\tilde{w}}(u_T,t) = u_0 - u_T + \int _0^T v(s) ds \, \text { is independent of } t \end{aligned}$$

and in particular z is the solution of the (elliptic) problem

$$\begin{aligned} R z' + A z = A z = 0. \end{aligned}$$
(45)

Then, by the coercivity of A, \(z = 0\), i.e. \(w - {\tilde{w}} = 0\), i.e.

$$\begin{aligned} u_0 - u_T + \int _0^T v(s) ds = 0 . \end{aligned}$$

From this we deduce that the final datum in (44) vanishes and therefore, by uniqueness, the solution of problem (44) is 0, i.e.

$$\begin{aligned} w - u = 0 . \end{aligned}$$

By this, since \(u = w\) and \(u' = w' = v\) solves (41), we also get the estimate of point (ii). \(\square \)

4 Examples

In this section we present some simple examples of possible choices of \(\mathcal {R}\) and \(\mathcal {A}\) for the equation considered in Sect. 3. These examples should clarify which kinds of operators \(\mathcal {R}\) are admissible; by combining them one can imagine more general situations satisfying the conditions assumed in the theorems of the previous sections.

I - The equation \(\mathcal {R}u' + \mathcal {A}u = f\)

In the first examples which follow we consider \(T > 0\), \(\Omega \subset \mathbf{R}^n\) an open set with Lipschitz boundary, and

$$\begin{aligned}&U \equiv V(t) = H^1_0(\Omega ) \quad \text {and} \quad H(t) = L^2(\Omega ) \qquad \text {for every } t \in [0,T], \nonumber \\&A(t) : H^1_0(\Omega ) \rightarrow H^{-1}(\Omega ) \nonumber \\&\big ( A(t) u \big ) (x) {:}{=} - \, \text {div} \, a\big ( x,t, Du (x) \big ) + b(x,t,u) \,\nonumber \\&\text {with } a : \Omega \times (0,T) \times \mathbf{R}^n \rightarrow \mathbf{R}^n, \quad b : \Omega \times (0,T) \times \mathbf{R}\rightarrow \mathbf{R}, \nonumber \\&\text {verifying} \qquad \lambda _o\,| \xi |^2 \leqslant a(x,t,\xi ) \cdot \xi \leqslant \Lambda _o\,|\xi |^2 \qquad \text {and} \qquad | b (x,t,u) | \leqslant M\,|u| \end{aligned}$$
(46)

for every \(\xi \in \mathbf{R}^n\), for some positive \(\lambda _o, \Lambda _o\) and some \(M \geqslant 0\). Then \(\mathcal {A}\) will be defined as in (27). We now focus our attention on the operator \(\mathcal {R}\). Consider a function

$$\begin{aligned} r: \Omega \times [0,T] \rightarrow \mathbf{R}, \qquad r \in L^{\infty } (\Omega \times (0,T)) \end{aligned}$$

and

$$\begin{aligned} R(t) : L^2(\Omega ) \rightarrow L^2(\Omega ), \qquad \big ( R(t) u \big ) (x) :=r(x,t) u(x). \end{aligned}$$

Finally for every \(t \in [0,T]\) we denote

$$\begin{aligned}&\Omega _+ (t) : = \big \{ x \in \Omega \, \big | \, r(x , t) > 0 \big \} , \nonumber \\&\Omega _- (t) : = \big \{ x \in \Omega \, \big | \, r(x , t) < 0 \big \} , \nonumber \\&\Omega _0 (t) : = \Omega \setminus \big ( \Omega _+ (t) \cup \Omega _- (t) \big ) \end{aligned}$$
(47)

and (see also (14))

$$\begin{aligned}&r_+ \quad \text {the positive part of } r , \qquad r_- \quad \text {the negative part of } r , \nonumber \\&{\tilde{H}}_+(0) = L^2 \big (\Omega _+(0), r_+(\cdot , 0) \big ) \quad \text {the completion of }C_c(\Omega _+(0)) \nonumber \\&\qquad \text { w.r.t. the norm } \Vert w \Vert ^2 = \int _{\Omega _+(0)} w^2(x) \, r_+(x, 0) dx , \nonumber \\&{\tilde{H}}_-(T) = L^2 \big (\Omega _-(T), r_-(\cdot , T) \big ) \quad \text {the completion of }C_c(\Omega _-(T)) \nonumber \\&\qquad \text { w.r.t. the norm } \Vert w \Vert ^2 = \int _{\Omega _-(T)} w^2(x) \, r_-(x, T) dx \, . \end{aligned}$$
(48)

Then consider the problem, for some \(f \in L^2(0,T; H^{-1}(\Omega )), \varphi \in {\tilde{H}}_+(0), \psi \in {\tilde{H}}_-(T)\)

$$\begin{aligned} \left\{ \begin{array}{ll} r(x,t) \, u_t + \mathcal {A}\, u = f(x,t) &{}\quad \text {in } \Omega \times (0,T) \\ u(x,t) = 0 &{}\quad \text {for } (x,t) \in \ \partial \Omega \times (0,T)\\ u(x,0) = \varphi (x) &{}\quad \text {for } x \in \Omega _+(0)\\ u(x,T) = \psi (x) &{}\quad \text {for } x \in \Omega _-(T) \, . \end{array} \right. \end{aligned}$$
(49)
  1.

    Clearly Theorem 3.13 includes the “standard” equations. If \(r \equiv 1\) we have the forward parabolic equation

    $$\begin{aligned} \left\{ \begin{array}{ll} u_t + \mathcal {A}\, u = f(x,t) &{}\quad \text {in } \Omega \times (0,T) \\ u(x,t) = 0 &{}\quad \text {for } (x,t) \in \ \partial \Omega \times (0,T) \\ u(x,0) = \varphi (x) &{}\quad \text {for } x \in \Omega , \end{array} \right. \end{aligned}$$

    if \(r \equiv -1\) we have the backward parabolic equation

    $$\begin{aligned} \left\{ \begin{array}{ll} - u_t + \mathcal {A}\, u = f(x,t) &{}\quad \text {in } \Omega \times (0,T) \\ u(x,t) = 0 &{}\quad \text {for } (x,t) \in \ \partial \Omega \times (0,T) \\ u(x,T) = \psi (x) &{}\quad \text {for } x \in \Omega , \end{array} \right. \end{aligned}$$

    if \(r \equiv 0\) we have a family of elliptic equations

    $$\begin{aligned} \left\{ \begin{array}{ll} A(t) \, u (t) = f(t) &{}\quad \text {in } \Omega \ \text {for a.e. } t \in (0,T) \\ u(\cdot ,t) = 0 &{}\quad \text {for } (x,t) \in \ \partial \Omega \times (0,T) \, . \end{array} \right. \end{aligned}$$
  2.

    Suppose \(r = r(x)\), \(r \in L^{\infty }(\Omega )\). As long as \(r \geqslant 0\) every function is admitted, even, for example,

    $$\begin{aligned}&r(x) = 1 \quad \text {in } \Omega _+ , \qquad r(x) = 0 \quad \text {in } \Omega _0 , \\&\Omega _+ \text { and }\Omega _0 \quad \text {Cantor-type sets of positive measure.} \end{aligned}$$

    This is because R clearly belongs to the class \({\mathcal {E}}\) defined in Definition 2.2 and because assumption (35) is satisfied. This last assumption might fail for a generic \(r \in L^{\infty }(\Omega )\), for instance if

    $$\begin{aligned}&r(x) = 1 \quad \text {in } \Omega _+ , \qquad r(x) = 0 \quad \text {in } \Omega _0 , \qquad r(x) = -1 \quad \text {in } \Omega _- , \\&\Omega _+ , \ \Omega _0 , \ \Omega _- \quad \text {Cantor-type sets of positive measure.} \end{aligned}$$

    Requirement (35) is certainly satisfied if there are two open sets \(A_1, A_2\) with

    $$\begin{aligned} A_1 \cap A_2 = \emptyset , \qquad \Omega _+ \subset A_1 , \qquad \Omega _- \subset A_2 \, . \end{aligned}$$
  3.

    Suppose \(r = r(t)\). Assumptions (18) are satisfied whenever \(r \in W^{1,\infty }(0,T)\), so every \(r \in W^{1,\infty }(0,T)\) is admitted. Two interesting situations are the following: the first, when \(r(0) > 0\) and \(r(T) < 0\), leads to the problem

    $$\begin{aligned} \left\{ \begin{array}{ll} r(t) u_t + r'(t) u + \mathcal {A}u = f &{}\quad \text { in } \Omega \times (0,T) \\ u(x,t) = 0 &{}\quad \text { for } (x,t) \in \ \partial \Omega \times (0,T) \\ u(x,0) = \varphi (x) &{}\quad \text { for } x \in \Omega \\ u(x,T) = \psi (x) &{}\quad \text { for } x \in \Omega \end{array} \right. \end{aligned}$$

    where a datum is given in the whole \(\Omega \) both at time 0 and at time T; the second where \(r(0) \leqslant 0\) and \(r(T) \geqslant 0\), which leads to the problem

    $$\begin{aligned} \left\{ \begin{array}{ll} r(t) u_t + r'(t) u + \mathcal {A}u = f &{}\quad \text { in } \Omega \times (0,T) \\ u(x,t) = 0 &{}\quad \text { for } (x,t) \in \ \partial \Omega \times (0,T) \end{array} \right. \end{aligned}$$

    where no datum is prescribed in \(\Omega \), neither at time 0 nor at time T (a concrete instance of both situations is sketched right after this example).
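    A concrete instance of the two situations (our own illustration) is the following: taking

    $$\begin{aligned} r(t) = \frac{T}{2} - t \qquad \text {one has} \qquad r(0) = \frac{T}{2}> 0 , \quad r(T) = -\frac{T}{2} < 0 , \end{aligned}$$

    so a datum is prescribed in the whole \(\Omega \) both at \(t = 0\) and at \(t = T\), while \(r(t) = t - T/2\) gives the opposite situation, with no datum prescribed at either time.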

  4.

    More interesting is the case when \(r = r (x,t)\). As long as

    $$\begin{aligned} r \text { and } \frac{\partial r}{\partial t} \in L^{\infty }(\Omega \times (0,T)) \end{aligned}$$

    the situation is very similar to that analyzed in example 3, so every r such that \(r, r_t \in L^{\infty }(\Omega \times (0,T))\) is admitted, provided that assumption (35) is satisfied. Suppose now

    $$\begin{aligned} r \quad \text { does not admit a partial derivative with respect to time}. \end{aligned}$$

    Assumption (18) may nevertheless be satisfied. To show this we consider a very simple example: let \(n = 1\), \(\Omega = (a,b)\), \(T > 0\), and consider a function

    $$\begin{aligned} \gamma : [0,T] \rightarrow (a,b) , \qquad \gamma \in W^{1,\infty }(0,T) \end{aligned}$$

    and define the sets

    $$\begin{aligned} \omega _+ {:}{=} \big \{ (x,t) \in \Omega \times (0,T) \, \big | \, x < \gamma (t) \big \} , \qquad \omega _0 {:}{=} \big ( \Omega \times (0,T) \big ) \setminus \omega _+ \end{aligned}$$
    (50)

    and the function r

    $$\begin{aligned} r(x,t) = \chi _{\omega _+} (x,t) {:}{=} \left\{ \begin{array}{ll} 1 &{}\quad \text { in } \ \omega _+ \\ 0 &{}\quad \text { in } \ \omega _0 \, . \end{array} \right. \end{aligned}$$
    (51)

    To verify that \(R \in {\mathcal {E}}\) we consider \(w_1,w_2 \in H^1_0(a,b)\) and evaluate

    $$\begin{aligned} \frac{d}{dt} \big ( R(t) w_1 , w_2 \big )_{L^2(a,b)}&\, = \frac{d}{dt} \int _a^b w_1(x) w_2(x) r(x,t) \, dx =\\&\, = \frac{d}{dt} \int _a^{\gamma (t)} w_1(x) w_2(x) \, dx = w_1(\gamma (t)) \, w_2(\gamma (t)) \, \gamma '(t) , \end{aligned}$$

    then for \(u \in \mathcal {V}= L^2(0,T; H^1_0(a,b))\) we have

    $$\begin{aligned} \big \langle \mathcal {R}' u, u \big \rangle _{\mathcal {V}' \times \mathcal {V}} = \int _0^T \big ( u(\gamma (t),t) \big )^2 \, \gamma '(t) \, dt \, . \end{aligned}$$

    Moreover notice that, as long as \(\gamma \) is decreasing so that \(\gamma ' \leqslant 0\), assumption (39) in Theorem 3.13 is easily satisfied, since

    $$\begin{aligned} \mathcal {A}- \frac{1}{2}\mathcal {R}' : \mathcal {V}\rightarrow \mathcal {V}' \end{aligned}$$

    turns out to be bounded thanks to the fact that \(\gamma '\) is bounded, and

    $$\begin{aligned} \big \langle \mathcal {A}u - \mathcal {A}v - \frac{1}{2}(\mathcal {R}' u - \mathcal {R}' v), u - v \big \rangle _{\mathcal {V}' \times \mathcal {V}} \geqslant \big \langle \mathcal {A}u - \mathcal {A}v , u - v \big \rangle _{\mathcal {V}' \times \mathcal {V}} \geqslant \lambda _o \Vert u - v \Vert _{\mathcal {V}}^2 \, . \end{aligned}$$

    In Figure 1 below two possible admissible configurations are shown: the first refers to a situation where \(\omega _+\) and \(\omega _0\) are as defined in (50), the second to a possible configuration with

    $$\begin{aligned} \omega _+ {:}{=} \big \{ (x,t) \in \Omega \times (0,T) \, \big | \, x > \gamma (t) \big \} , \qquad \omega _0 {:}{=} \big ( \Omega \times (0,T) \big ) \setminus \omega _+ . \end{aligned}$$

    Now suppose that

    $$\begin{aligned} \mathop {\mathrm{ess sup}}\limits _{[0,T]} \gamma '(t) > 0 \, . \end{aligned}$$

    Notice that there is a constant C such that for every \(w \in H^1_0(a,b)\)

    $$\begin{aligned} | w(x) | \leqslant C \, \Vert w \Vert _{H^1_0(a,b)} \qquad \text { for every } x \in [a,b] , \end{aligned}$$

    then, finally, for \(u \in \mathcal {V}= L^2(0,T; H^1_0(\Omega ))\) we have

    $$\begin{aligned} - \frac{1}{2} \big \langle \mathcal {R}' u, u \big \rangle _{\mathcal {V}' \times \mathcal {V}}&= - \frac{1}{2} \int _0^T \big ( u(\gamma (t),t) \big )^2 \, \gamma '(t) \, dt \\&\geqslant \frac{1}{2} \int _0^T \big ( u(\gamma (t),t) \big )^2 \, \mathop {\mathrm{ess inf}}\limits _{[0,T]} \big ( - \gamma '(t) \big ) \, dt \\&\geqslant - \frac{1}{2} \int _0^T \big ( u(\gamma (t),t) \big )^2 \, \mathop {\mathrm{ess sup}}\limits _{[0,T]} \gamma '(t) \, dt \\&\geqslant - \frac{C^2}{2} \, \mathop {\mathrm{ess sup}}\limits _{[0,T]} \gamma '(t) \, \Vert u \Vert _{\mathcal {V}}^2 \end{aligned}$$

    and then

    $$\begin{aligned} \big \langle \mathcal {A}u - \mathcal {A}v - \frac{1}{2}(\mathcal {R}' u - \mathcal {R}' v), u - v \big \rangle _{\mathcal {V}' \times \mathcal {V}} \geqslant \left( \lambda _o - \frac{C^2}{2} \, \mathop {\mathrm{ess sup}}\limits _{[0,T]} \gamma '(t) \right) \Vert u - v \Vert _{\mathcal {V}}^2 \, . \end{aligned}$$

    Then the first assumption required in (39) in Theorem 3.13 is satisfied if

    $$\begin{aligned} \lambda _o - \frac{C^2}{2} \, \mathop {\mathrm{ess sup}}\limits _{[0,T]} \gamma '(t) > 0 \end{aligned}$$

    and then \(\gamma \) can also be increasing, provided that

    $$\begin{aligned} \mathop {\mathrm{ess sup}}\limits _{[0,T]} \gamma '(t) < \frac{2 \, \lambda _o}{C^2} \, . \end{aligned}$$
    (52)

    A possible configuration is shown in Figure 2, where \(\gamma '\) is not necessarily negative but has to satisfy (52); a simple explicit instance is sketched below.
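    As a simple explicit instance (our own illustration): with \(\Omega = (a,b)\) one may take an affine

    $$\begin{aligned} \gamma (t) = \gamma _0 + \varepsilon \, t , \qquad \gamma _0 \in (a,b) , \qquad 0< \varepsilon < \min \left\{ \frac{2 \, \lambda _o}{C^2} , \frac{b - \gamma _0}{T} \right\} , \end{aligned}$$

    so that \(\gamma \) is increasing, takes values in (a, b), and satisfies (52).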

Fig. 1 Two possible examples with \(\mathcal {R}'\leqslant 0\)

Fig. 2 An example where the sign of \(\mathcal {R}'\) changes

Analogous considerations can be made if \(n \geqslant 2\): consider the simplest example

$$\begin{aligned} r(x,t) = \chi _{\omega _+} (x,t) \end{aligned}$$

where for each \(t \in [0,T]\) we have \(\Omega = \Omega _+(t) \cup \Omega _0(t)\) and

$$\begin{aligned} \omega _+ {:}{=} \bigcup _{t \in (0,T)} \Omega _+(t) \times \{ t \} , \qquad \omega _0 {:}{=} \bigcup _{t \in (0,T)} \Omega _0(t) \times \{ t \} \, . \end{aligned}$$

In this case we need that, for \(w_1,w_2 \in H^1_0(\Omega )\), the following hold:

$$\begin{aligned}&t \mapsto \int _{\Omega _+(t)} w_1(x) w_2(x) \, dx \qquad \text {is differentiable} , \\&\left| \frac{d}{dt} \int _{\Omega _+(t)} w_1(x) w_2(x) \, dx \right| \leqslant {\tilde{C}} \, \Vert w_1 \Vert _{H^1_0(\Omega )} \Vert w_2 \Vert _{H^1_0(\Omega )} \end{aligned}$$

for some positive \({\tilde{C}}\), since

$$\begin{aligned} \big ( R(t) w_1 , w_2 \big )_{L^2(\Omega )} = \int _{\Omega } w_1(x) w_2(x) r(x,t) \, dx = \int _{\Omega _+(t)} w_1(x) w_2(x) \, dx \, . \end{aligned}$$

These hold if \(\Omega _+(t)\) is open and the interface separating \(\Omega _+(t)\) and \(\Omega _0(t)\) is Lipschitz continuous (see, e.g., Proposition 3, section 3.4.4, in [5]). Moreover, since \(w_1, w_2 \in H^1_0(\Omega )\), it makes sense to consider their traces on this interface (see, e.g., Theorem 1, section 4.3, in [5]).
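The role of the Lipschitz interface can be illustrated by the following heuristic computation (ours), under the additional assumption that the interface moves in time with a bounded (scalar) normal velocity V: a Reynolds-type transport formula gives, for \(w_1, w_2 \in H^1_0(\Omega )\),

$$\begin{aligned} \frac{d}{dt} \int _{\Omega _+(t)} w_1(x)\, w_2(x) \, dx = \int _{\partial \Omega _+(t) \cap \Omega } w_1(x)\, w_2(x)\, V(x,t) \, d{\mathcal {H}}^{n-1}(x) , \end{aligned}$$

hence, by the Cauchy–Schwarz inequality and the trace theorem on the (Lipschitz) interface,

$$\begin{aligned} \left| \frac{d}{dt} \int _{\Omega _+(t)} w_1 w_2 \, dx \right| \leqslant \Vert V \Vert _{L^{\infty }} \Vert w_1 \Vert _{L^2(\partial \Omega _+(t) \cap \Omega )} \Vert w_2 \Vert _{L^2(\partial \Omega _+(t) \cap \Omega )} \leqslant {\tilde{C}} \, \Vert w_1 \Vert _{H^1_0(\Omega )} \Vert w_2 \Vert _{H^1_0(\Omega )} , \end{aligned}$$

provided the trace constants can be chosen uniformly in t; for \(n = 1\) this reduces to the computation carried out above with \(V = \gamma '\).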

  5.

    We want to show a small example where the regularity result stated in Theorem 3.16 holds. First suppose that (35) and (39) are satisfied, so as to have a solution.

    As regards r, consider

    $$\begin{aligned} r =r(x), \qquad r\in L^{\infty }(\Omega ). \end{aligned}$$

    As regards the operator \(\mathcal {A}\), suppose, for instance, that it is as in (46), but with a and b independent of t, i.e.

    $$\begin{aligned} a=a(x), \qquad b=b(x) \end{aligned}$$

    To consider a simpler example, suppose \(b(x,u) = b(x)\, u\) with \(b \in L^{\infty } (\Omega )\). Now consider \(f \in H^1(0,T; H^{-1}(\Omega ))\) and the two functions \(u_0\) and \(u_T\), solutions respectively of the two problems

    $$\begin{aligned} \left\{ \begin{array}{ll} - \, \text {div} \, a\big ( x, Du \big ) + b(x) u = f(0) &{}\quad \text { in } \Omega \\ u = 0 &{}\quad \text { in } \partial \Omega \end{array} \right. \\ \left\{ \begin{array}{ll} - \, \text {div} \, a\big ( x, Du \big ) + b(x) u = f(T) &{}\quad \text { in } \Omega \\ u = 0 &{}\quad \text { in } \partial \Omega \end{array} \right. \end{aligned}$$

    and let \(\varphi \) be the restriction of \(u_0\) to \(\Omega _+(0)\) (see (47)) and \(\psi \) the restriction of \(u_T\) to \(\Omega _-(T)\).

    Then the solution u of (49) belongs to \(H^1(0,T; H^1_0(\Omega ))\).

  6.

    The equation we considered in Sect. 3 is \(\mathcal {R}u' + \mathcal {A}u = f\). Nevertheless we required some regularity assumptions on \(\mathcal {R}\), precisely that \(R \in {\mathcal {E}}\), the class defined in Definition 2.2. With this example we want to show that at least uniqueness is lost if \(R \not \in {\mathcal {E}}\).

    Consider

    $$\begin{aligned} r = r(t) = \left\{ \begin{array}{rl} -1 &{}\quad \text { for } \ t < T/2 \\ 1 &{}\quad \text { for } \ t \geqslant T/2 \end{array} \right. \end{aligned}$$

    and the problem (49) with this r. Clearly

    $$\begin{aligned} \Omega _+(0) = \Omega _-(T) = \emptyset \, . \end{aligned}$$

    Then we can fix \(\eta \in H^1_0(\Omega )\) and solve separately the two problems

    $$\begin{aligned} \begin{array}{ll} \left\{ \begin{array}{ll} -u_t + \mathcal {A}u = f &{}\quad \text { in } \Omega \times (0,T/2) \\ u(x,t) = 0 &{}\quad \text { in } \partial \Omega \times (0,T/2)\\ u(x,T/2) = \eta (x) &{}\quad \text { for } x \in \Omega \end{array} \right. &{} \left\{ \begin{array}{ll} u_t + \mathcal {A}u = f &{}\quad \text { in } \Omega \times (T/2,T) \\ u(x,t) = 0 &{}\quad \text { in } \partial \Omega \times (T/2,T) \\ u(x,T/2) = \eta (x) &{}\quad \text { for } x \in \Omega \end{array} \right. \end{array} \end{aligned}$$

    and call \(u_1\) the solution of the first problem and \(u_2\) the solution of the second. Notice that the function \(u^{\eta }(t) = u_1(t)\) for \(t \in [0, T/2]\), \(u^{\eta }(t) = u_2(t)\) for \(t \in [T/2, T]\), solves the problem

    $$\begin{aligned} \left\{ \begin{array}{ll} r\, u_t + \mathcal {A}u = f &{}\quad \text { in } \Omega \times (0,T) \\ u(x,t) = 0 &{}\quad \text { in } \partial \Omega \times (0,T) \end{array} \right. \end{aligned}$$

    and this is true for every \(\eta \in H^1_0(\Omega )\), so we have infinitely many different solutions.

    Notice that if r depends only on t, with \(r(0) < 0\), \(r(T) > 0\), and r increasing and continuous, the problem above has a unique solution, even though no initial or final data are prescribed.

In the following example we modify (46) and consider for \(\mathcal {A}\) a monotone operator whose growth is more than linear. We consider the simple example where \(p > 2\) (one could also consider \(p > 2n/(2+n)\), in such a way that \(W^{1,p} \subset L^2\), but for simplicity we confine ourselves to \(p \geqslant 2\))

$$\begin{aligned}&U \equiv V(t) = W^{1,p}_0(\Omega ) \quad \text {and} \quad H(t) = L^2(\Omega ) \qquad \text {for every } t \in [0,T] , \nonumber \\&A(t) : W^{1,p}_0(\Omega ) \rightarrow W^{-1,p'}(\Omega ) \nonumber \\&\big ( A(t) u \big ) (x) {:}{=} - \, \text {div} \, a\big ( x,t, Du (x) \big ) ,\nonumber \\&\text {with } a : \Omega \times (0,T) \times \mathbf{R}^n \rightarrow \mathbf{R}^n , \nonumber \\&\text {verifying} \qquad \lambda _o \, | \xi |^p \leqslant a(x,t,\xi ) \cdot \xi \leqslant \Lambda _o \, |\xi |^p \end{aligned}$$
(53)

for every \(\xi \in \mathbf{R}^n\) and for some positive \(\lambda _o, \Lambda _o\).

  7.

    With \(\mathcal {A}\) as in (53), everything we said in examples 1, 2, 3, 4 holds, with one exception. Since \(\mathcal {R}'\) is linear, it is not comparable with \(\mathcal {A}\); hence, in this case, to have assumption (40) in Theorem 3.13 satisfied we have to restrict to operators \(\mathcal {R}\) such that

    $$\begin{aligned} - \frac{1}{2} \mathcal {R}' \quad \text {is a positive operator}. \end{aligned}$$

    Then we can consider the functions of example 4, but we have to restrict to a non-increasing \(\gamma \) in the first example and to sets \(\Omega _+(t)\) such that

    $$\begin{aligned} \frac{d}{dt} \int _{\Omega _+(t)} w_1(x) w_2(x) \, dx \leqslant 0 \quad \text {(and clearly bounded)} \end{aligned}$$

    in the more general case. So in this case examples like those shown in Figure 1 are admissible, but that in Figure 2 is not.

  8.

    Now consider the following \(R : [0,T] \rightarrow {\mathcal {L}} (L^2(\Omega ))\). For a fixed function \(r \in L^{\infty } \big ( \Omega \times \Omega \times [0,T] \big )\) we define

    $$\begin{aligned} \big (R(t) u \big )(x) {:}{=} \int _{\Omega } r(x,y,t) u(y) \, dy \qquad u \in L^2(\Omega ) \, . \end{aligned}$$

    Clearly r could be a convolution kernel, i.e. \(r(x,y,t) = r(x-y,t)\) (suitably extended by zero outside \(\Omega \times (0,T)\)). If assumptions (35) and (39) are satisfied when \(p = 2\) (e.g. in the situation (46)), or (35) and (40) are satisfied when \(p > 2\) (e.g. in the situation (53)), we have existence and uniqueness of the solution of the following problem

    $$\begin{aligned} \left\{ \begin{array}{ll} {\displaystyle \int _{\Omega } r(x,y,t) u_t(y,t) dy } + \mathcal {A}u = f &{}\quad \text {in } \Omega \times (0,T) , \\ u = 0 &{}\quad \text {in } \partial \Omega \times (0,T) , \\ u(\cdot ,0) = \varphi &{}\quad \text {in } {\tilde{H}}_+(0) , \\ u(\cdot ,T) = \psi &{}\quad \text {in } {\tilde{H}}_-(T) \, . \end{array} \right. \end{aligned}$$

    Notice that \(\mathcal {R}u'\) in (34) a priori only belongs to \(\mathcal {V}'\), but it is well defined since, we recall, \(\mathcal {R}u' = (\mathcal {R}u)' - \mathcal {R}' u\) (a formal computation of \(\mathcal {R}'\) for this kernel operator is sketched right after this example).

    In this case the initial and final data have to be given in the spaces \({\tilde{H}}_+(0)\) and \({\tilde{H}}_-(T)\) defined in (14), which in the previous cases coincide with those defined in (48).
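    To make the previous remark concrete, here is a formal computation of \(\mathcal {R}'\) for this kernel operator, under the additional assumption (ours, not needed in general) that \(r_t\) exists and belongs to \(L^{\infty } \big ( \Omega \times \Omega \times (0,T) \big )\): for \(u, v \in L^2(\Omega )\),

    $$\begin{aligned} \frac{d}{dt} \big ( R(t) u , v \big )_{L^2(\Omega )} = \frac{d}{dt} \int _{\Omega } \! \int _{\Omega } r(x,y,t)\, u(y)\, v(x) \, dy \, dx = \int _{\Omega } \! \int _{\Omega } r_t(x,y,t)\, u(y)\, v(x) \, dy \, dx , \end{aligned}$$

    so that, for bounded \(\Omega \), \(\big | \frac{d}{dt} \big ( R(t) u , v \big )_{L^2(\Omega )} \big | \leqslant \Vert r_t \Vert _{L^{\infty }} \, |\Omega | \, \Vert u \Vert _{L^2(\Omega )} \Vert v \Vert _{L^2(\Omega )}\), which is the type of control on \(t \mapsto ( R(t) u , v )\) used throughout.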

Now we want to show some examples in which the Banach spaces V(t) vary with time.

  9.

    Unbounded coefficients. Another admissible situation is the following. Consider two functions

    $$\begin{aligned} \mu , \lambda \in L^1(\Omega \times (0,T)) \, . \end{aligned}$$

    Suppose \(\lambda > 0\) a.e. while \(\mu \) can change sign and also be zero. Denote by \(|{\tilde{\mu }}|\) a suitable function (see [18] or [19] for this detail) such that \(|{\tilde{\mu }}| > 0\) a.e. (we choose \(|{\tilde{\mu }}| = \lambda \) where \(\mu \equiv 0\)) and

    $$\begin{aligned} |{\tilde{\mu }}| = \left\{ \begin{array}{rl} \mu &{}\quad \text { in } \big \{ (x,t) \in \Omega \times (0,T) \, \big | \, \mu (x,t) > 0 \big \} \\ -\mu &{}\quad \text { in } \big \{ (x,t) \in \Omega \times (0,T) \, \big | \, \mu (x,t) < 0 \big \} \end{array} \right. \end{aligned}$$

    and the weighted Sobolev spaces, for \(p \geqslant 2\) (also for the details about these spaces we refer to [18] or to [19]; a concrete instance of \(\mu \) is sketched at the end of this example)

    $$\begin{aligned} H(t) := L^2 \big ( \Omega , |{\tilde{\mu }}| (\cdot , t) \big ) , \qquad V(t) := W^{1,p}_0 \big ( \Omega , |\mu | (\cdot , t), \lambda (\cdot , t) \big ) \, . \end{aligned}$$

    In this case (see again [18]) one has that there is \(q > p\) such that \(W^{1,q}_0(\Omega )\) is dense in V(t) for every \(t \in [0,T]\). Then we consider

    $$\begin{aligned}&U = W^{1,q}_0(\Omega ) , \qquad V(t) \quad \text {and} \quad H(t) \qquad \text {as above } , \nonumber \\&A(t) : V(t) \rightarrow V'(t)\nonumber \\&\big ( A(t) u \big ) (x) {:}{=} - \, \text {div} \, a\big ( x,t, Du (x) \big ) ,\nonumber \\&\text {with } a : \Omega \times (0,T) \times \mathbf{R}^n \rightarrow \mathbf{R}^n , \nonumber \\&\text {verifying} \qquad \lambda (x,t) \, | \xi |^p \leqslant a(x,t,\xi ) \cdot \xi \leqslant L \, \lambda (x,t) \, |\xi |^p \end{aligned}$$
    (54)

    for every \(\xi \in \mathbf{R}^n\) and for some \(L \geqslant 1\).

    Consider the spaces and the operator just introduced. Setting

    $$\begin{aligned}&\Omega _+ (t) : = \big \{ x \in \Omega \, \big | \, \mu (\cdot , t) > 0 \big \} , \\&\Omega _- (t) : = \big \{ x \in \Omega \, \big | \, \mu (\cdot , t) < 0 \big \} , \end{aligned}$$

    we define the operators

    $$\begin{aligned}&R(t) : L^2 \big ( \Omega , |\mu | (\cdot , t) \big ) \rightarrow L^2 \big ( \Omega , |\mu | (\cdot , t) \big ) , \qquad R(t) {:}{=} P_+(t) - P_-(t) , \\&P_+(t) : L^2 \big ( \Omega , |\mu | (\cdot , t) \big ) \rightarrow L^2 \big ( \Omega _+ (t), |\mu | (\cdot , t) \big ) \qquad \text {the orthogonal projection} , \\&P_-(t) : L^2 \big ( \Omega , |\mu | (\cdot , t) \big ) \rightarrow L^2 \big ( \Omega _- (t), |\mu | (\cdot , t) \big ) \qquad \text {the orthogonal projection} . \end{aligned}$$

    In this way R(t) turns out to be bounded for every t, even if \(\mu \) is unbounded; we will need (see (13)) that the following function is absolutely continuous (and differentiable) for every \(u, v \in W^{1,q}_0 (\Omega )\):

    $$\begin{aligned} t \mapsto \big ( R(t) u, v \big )_{H(t)} =&\int _{\Omega _+(t)} u(x) \, v(x) \, |{\tilde{\mu }}| (x,t) \, dx - \int _{\Omega _-(t)} u(x) \, v(x) \, |{\tilde{\mu }}| (x,t) \, dx = \\ =&\int _{\Omega } u(x) \, v(x) \, \mu (x,t) \, dx \, . \end{aligned}$$

    Then for every \(\varphi \in L^2 \big ( \Omega _+ (0), \mu _+ (\cdot , 0) \big )\), \(\psi \in L^2 \big ( \Omega _- (T), \mu _- (\cdot , T) \big )\) and \(f \in \mathcal {V}'\) the problem

    $$\begin{aligned} \left\{ \begin{array}{ll} \mu (x,t) \, u_t + \mathcal {A}\, u = f(x,t) &{}\quad \text {in } \Omega \times (0,T) \\ u(x,t) = 0 &{}\quad \text {in } \partial \Omega \times (0,T) \\ u(x,0) = \varphi (x) &{}\quad \text {in } \Omega _+(0) \\ u(x,T) = \psi (x) &{}\quad \text {in } \Omega _-(T) \end{array} \right. \end{aligned}$$

    has a unique solution.
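    Just to fix ideas, a possible concrete choice of \(\mu \) (our own illustration; whether it fulfils all the structural hypotheses required in [18, 19] has of course to be checked there) is, for \(n = 1\) and \(\Omega = (-1,1)\),

    $$\begin{aligned} \mu (x,t) = \mu (x) = \text {sgn}(x)\, |x|^{-1/2} , \qquad \lambda \equiv 1 , \qquad |{\tilde{\mu }}|(x) = |x|^{-1/2} , \end{aligned}$$

    which is unbounded, changes sign and belongs to \(L^1(\Omega \times (0,T))\); here \(\Omega _+(t) \equiv (0,1)\) and \(\Omega _-(t) \equiv (-1,0)\).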

  10.

    The analogue of example 8 with an unbounded coefficient can also be considered: adapting examples 8 and 9, one can consider the problem

    $$\begin{aligned} \left\{ \begin{array}{ll} {\displaystyle \int _{\Omega } \mu (x,y,t) u_t(y,t) dy } + \mathcal {A}u (x,t)= f (x,t) &{}\quad \text {in } \Omega \times (0,T) , \\ u = 0 &{}\quad \text {in } \partial \Omega \times (0,T) , \\ u(\cdot ,0) = \varphi &{}\quad \text {in } {\tilde{H}}_+(0) , \\ u(\cdot ,T) = \psi &{}\quad \text {in } {\tilde{H}}_-(T) \, . \end{array} \right. \end{aligned}$$

    where

    $$\begin{aligned} \mu \in L^1(\Omega \times \Omega \times (0,T)) \end{aligned}$$

    and \(\mathcal {A}\), for instance, as in (54).

  11.

    Another example of varying spaces is the following: consider first a function \(q : \Omega \rightarrow [1, +\infty )\) and the space

    $$\begin{aligned} L^{q(\cdot )}(\Omega ) {:}{=} \left\{ u \in L^1_\mathrm{loc}(\Omega ) \, \Big | \, \int _{\Omega } | u(x) |^{q(x)} \, dx < + \infty \right\} \end{aligned}$$

    endowed with the norm (see, for instance, [12] for definitions and properties of these spaces; a sanity check for constant exponents is sketched at the end of this example)

    $$\begin{aligned} \Vert u \Vert _{ L^{q(\cdot )}(\Omega ) } {:}{=} \inf \left\{ \lambda > 0 \, \Big | \, \int _{\Omega } \left| \frac{u(x)}{\lambda } \right| ^{q(x)} \, dx \leqslant 1 \right\} \, . \end{aligned}$$

    Clearly \(W^{1, q(\cdot )}_0 (\Omega )\) is defined as the space

    $$\begin{aligned} W^{1, q(\cdot )}_0 (\Omega ) {:}{=} \left\{ u \in W^{1,1}_\mathrm{loc}(\Omega ) \, \Big | \, u \in L^{q(\cdot )}(\Omega ) \text { and } Du \in L^{q(\cdot )}(\Omega ) \right\} \end{aligned}$$

    endowed with the norm \(\Vert u \Vert _{ L^{q(\cdot )}(\Omega ) } + \Vert Du \Vert _{ L^{q(\cdot )}(\Omega ) }\). If now we have a function

    $$\begin{aligned} p : \Omega \times [0,T] \rightarrow [2, p_o] \end{aligned}$$

    for some \(p_o \geqslant 2\) we can consider

    $$\begin{aligned}&U = W^{1,p_o}_0(\Omega ) , \qquad V(t) = W^{1, p(\cdot , t)}_0 (\Omega ) , \qquad H(t) = L^2(\Omega ) , \\&A(t) : V(t) \rightarrow V'(t)\\&\big ( A(t) u \big ) (x) {:}{=} - \, \text {div} \, a\big ( x,t, Du (x) \big ) ,\\&\text {with } a : \Omega \times (0,T) \times \mathbf{R}^n \rightarrow \mathbf{R}^n ,\\&\text {verifying} \qquad \lambda _o \, | \xi |^{p(x,t)} \leqslant a(x,t,\xi ) \cdot \xi \leqslant \Lambda _o \, |\xi |^{p(x,t)} \end{aligned}$$

    for every \(\xi \in \mathbf{R}^n\) and for some positive \(\lambda _o, \Lambda _o\). If

    $$\begin{aligned} p : \Omega \times [0,T] \rightarrow [2, +\infty ) \end{aligned}$$

    one can simply consider, if U does not need to be a Banach space,

    $$\begin{aligned} U = C^1_0(\Omega ) \, . \end{aligned}$$

    Then problem (49) has a unique solution for r as in examples 1–8.
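    As a quick sanity check on the norm above (our own remark, not needed in the sequel): when the exponent is constant, \(q(x) \equiv q\), the Luxemburg norm reduces to the usual \(L^q\) norm, since

    $$\begin{aligned} \int _{\Omega } \left| \frac{u(x)}{\lambda } \right| ^{q} dx \leqslant 1 \quad \Longleftrightarrow \quad \lambda \geqslant \left( \int _{\Omega } |u(x)|^{q} \, dx \right) ^{1/q} , \end{aligned}$$

    so that the infimum over such \(\lambda \) is exactly \(\Vert u \Vert _{L^{q}(\Omega )}\).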