1 Introduction

In this paper, we study a class of Cauchy problems for evolution inclusions of subdifferential type in the framework of an evolution triple of spaces. Recently, the problem has been studied in [21] without the convex subdifferential term and under a sign condition on the nonconvex potential, and in [8], where the problem, reformulated as a variational–hemivariational inequality, was analyzed under a restrictive smallness hypothesis on the constants of the operators A and \(\partial J\) [see problem (1.1), (1.2) below]. Here, we replace this restriction by a weaker condition and, in addition, examine the continuous dependence of solutions for such inclusions, which has not been studied before. In this way, the present paper is a continuation of [8]. Moreover, we provide a new application of our results to a semipermeability problem with nonmonotone and possibly multivalued subdifferential boundary conditions.

The main feature of the evolution inclusion under consideration is that both multivalued terms are generated by subdifferential operators which take their values in the dual space and not in a pivot space; moreover, the inclusion depends on operators involved in the subdifferential maps which are assumed to be history-dependent. We note that Gasinski et al. [6] investigated an abstract first-order evolution inclusion in a reflexive Banach space, extending several earlier works on parabolic hemivariational inequalities by Migórski [13], Migórski and Ochal [15], and others.

The initial value problem for the evolution inclusion under consideration reads as follows. Find \(w \in L^2(0, T; V)\) with \(w' \in L^2(0,T; V^*)\) such that

$$\begin{aligned}& w'(t) + A(t, w(t)) + (R_1 w)(t) + M^* \partial J (t, (Sw)(t), Mw(t)) \nonumber \\& \quad + N^* \partial \varphi (t, (Rw)(t), Nw(t)) \ni f(t) \ \ \text{ a.e. } \ t \in (0,T), \end{aligned}$$
(1.1)
$$\begin{aligned}& w(0) = w_0 . \end{aligned}$$
(1.2)

In inclusion (1.1), the symbol \(\partial J\) stands for the generalized gradient of a locally Lipschitz function \(J(t, z, \cdot )\) and \(\partial \varphi \) denotes the convex subdifferential of a convex and lower semicontinuous function \(\varphi (t, y, \cdot )\). Furthermore, (1.1) involves three nonlinear operators R, \(R_1\) and S assumed to be history-dependent. It is worth observing that inclusion (1.1) admits an equivalent formulation as the following history-dependent variational–hemivariational inequality, see [8, Problem 7]:

$$\begin{aligned}& \langle w'(t) + A(t, w(t)) + ({R}_1 w)(t) - f(t), v - w(t) \rangle _{V^* \times V} \nonumber \\& + J^0 (t, ({S} w)(t), w(t); v - w(t)) + \, \varphi (t, ({R} w)(t), v) - \varphi (t, ({R} w)(t), w(t)) \ge 0 \end{aligned}$$
(1.3)

for all \(v \in V\), a.e. \(t \in (0, T)\). Existence and uniqueness results for a particular form of inclusion (1.3), under more restrictive hypotheses, have recently been proved in [22], where \(J \equiv 0\), and in [23], where J is independent of the history-dependent operator. We refer to [14, 17, 18, 20, 28, 29, 31, 33] for various related results on history-dependent inequality problems, and to the recent monograph [32] for a comprehensive study. Moreover, various classes of related differential variational inequalities and differential hemivariational inequalities have been studied only recently in [10,11,12, 24, 35].

The novelties of the paper are the following. First, we prove the existence of a unique solution to problem (1.1), (1.2) under a weaker (relaxed) smallness condition than in [8, Theorem 9]. Since the smallness condition is simpler here, it applies to a wider class of evolution inclusions. Second, we deliver a continuous dependence result, in weak topologies, for the map \((f, w_0) \mapsto w\) for problem (1.1), (1.2). Note that a convergence result in norm topologies for a problem in which \(\partial J\) does not depend on the history-dependent operator was obtained in [32, Theorem 99]. Third, we provide a new application of the continuous dependence result of Theorem 7 to a dynamic frictional contact model with history-dependent operators, and a new well-posedness result for a semipermeability problem. Finally, note that the continuous dependence result in weak topologies obtained in this paper can be used in the analysis of various optimization and optimal control problems for variational and variational–hemivariational inequalities with history-dependent operators.

2 Essential tools

Let us recall some basic definitions from nonlinear analysis in Banach spaces. For more details in this connection, we refer to, e.g., [2,3,4].

Let X be a Banach space. Throughout this paper, we denote by \(\langle \cdot , \cdot \rangle _{X^*\times X}\) the duality pairing between a Banach space X and its dual \(X^*\), and by \(\Vert \cdot \Vert _X\) the norm in X. When no confusion arises, we often drop the subscripts. A function \(\varphi :X \rightarrow {\mathbb {R}}\cup \{ +\infty \}\) is proper if its effective domain \(\mathrm{dom} \, \varphi = \{ x \in X \mid \varphi (x) < +\infty \} \not = \emptyset \). It is lower semicontinuous (l.s.c.) if \(x_n \rightarrow x\) in X entails \(\varphi (x) \le \liminf \varphi (x_n)\).

Let \(\varphi :X \rightarrow {\mathbb {R}}\cup \{ +\infty \}\) be a convex function. An element \(x^* \in X^*\) is called a subgradient of \(\varphi \) at \(u \in X\) if

$$\begin{aligned} \langle x^*, v - u \rangle _{X^* \times X} \le \varphi (v) - \varphi (u) \ \ \text{ for } \text{ all } \ v \in X. \end{aligned}$$
(2.1)

The set of all \(x^* \in X^*\) which satisfy (2.1) is called the (convex) subdifferential of \(\varphi \) at u and is denoted by \(\partial \varphi (u)\). Next, we recall the notions of the generalized directional derivative and the generalized gradient of Clarke for a locally Lipschitz function \(\psi :X \rightarrow {\mathbb {R}}\). The generalized directional derivative of \(\psi \) at \(u \in X\) in the direction \(v \in X\) is defined by

$$\begin{aligned} \psi ^{0}(u; v) =\limsup _{y \rightarrow u, \ t \downarrow 0}\frac{\psi (y+ t v) - \psi (y)}{t}. \end{aligned}$$

The generalized gradient of \(\psi \) at \(u \in X\) is given by

$$\begin{aligned} \partial \psi (u) = \{ u^* \in X^* \mid \psi ^{0}(u; v) \ge {\langle u^*, v \rangle }_{X^* \times X}\ \ \mathrm{for\ all\ } \ v \in X \}. \end{aligned}$$

The locally Lipschitz function \(\psi \) is called regular (in the Clarke sense) at \(u \in X\) if for all \(v \in X\) the one-sided directional derivative \(\psi '(u; v)\) exists and satisfies \(\psi ^0(u; v) = \psi '(u; v)\). In what follows, the generalized gradient of Clarke for a locally Lipschitz function and the subdifferential of a convex function will be denoted in the same way.
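To illustrate the difference between these notions, consider the simple example (ours, not taken from the references) of the locally Lipschitz, nonconvex function \(\psi (x) = -|x|\) on \(X = {\mathbb {R}}\). A direct computation from the definitions gives

$$\begin{aligned} \psi ^{0}(0; v) = \limsup _{y \rightarrow 0, \ t \downarrow 0}\frac{|y| - |y + t v|}{t} = |v|, \qquad \partial \psi (0) = \{ u^* \in {\mathbb {R}} \mid |v| \ge u^* v \ \ \mathrm{for\ all\ } \ v \in {\mathbb {R}} \} = [-1, 1], \end{aligned}$$

while the one-sided directional derivative is \(\psi '(0; v) = -|v|\). Since \(\psi ^0(0; v) \not = \psi '(0; v)\) for \(v \not = 0\), the function \(\psi \) is not regular at 0. In contrast, every convex function is regular, and for the convex function \(x \mapsto |x|\) the convex subdifferential and the generalized gradient at 0 coincide and both equal \([-1, 1]\).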

Recall that an operator \(A :X \rightarrow X^*\) is said to be demicontinuous if for all \(v \in X\), the functional \(u \mapsto \langle A u, v \rangle _{X^* \times X}\) is continuous, i.e., A is continuous as a map from X to \(X^*\) endowed with the weak topology. Let \(0< T < \infty \) and \(A :(0, T) \times X \rightarrow X^*\). The Nemytskii (superposition) operator associated with A is the operator \({{\mathcal {A}}} :L^2(0,T; X) \rightarrow L^2(0,T; X^*)\) defined by

$$\begin{aligned} ({{\mathcal {A}}} v)(t) = A(t, v(t)) \ \ \text{ for } \ \ v \in L^2(0,T; X), \ \ \text{ a.e. } \ \ t \in (0, T). \end{aligned}$$

A multivalued operator \(A :X \rightarrow 2^{X^*}\) is called coercive if either its domain \(D(A)=\{u\in X\mid Au\not =\emptyset \}\) is bounded or D(A) is unbounded and

$$\begin{aligned}\lim _{\Vert u \Vert _X \rightarrow \infty , \, \, u \in D(A)} \frac{\inf \, \{\, \langle u^*, u \rangle _{X^* \times X} \mid u^* \in A u\,\}}{\Vert u \Vert _X} = + \infty . \end{aligned}$$

Given a set D in a normed space E, we define \(\Vert D \Vert _E = \sup \{ \Vert x \Vert _E \mid x \in D \}\). The space of linear and bounded operators from a normed space E to a normed space F is denoted by \(\mathcal {L}(E, F)\). It is endowed with the standard operator norm \(\Vert \cdot \Vert _{\mathcal {L}(E, F)}\). For an operator \(L \in \mathcal {L}(E, F)\), we denote its adjoint by \(L^* \in \mathcal {L}(F^*, E^*)\).

Recall that the spaces \((V, H, V^*)\) form an evolution triple of spaces if V is a reflexive and separable Banach space, H is a separable Hilbert space, and the embedding \(V \subset H\) is dense and continuous. We introduce the following Bochner spaces \(\mathcal {V} = L^2(0,T; V)\), \(\mathcal {V}^* = L^2(0,T; V^*)\), and \(\mathcal {W} = \{ w \in \mathcal {V} \mid w' \in \mathcal {V}^*\}\). It follows from standard results, see, e.g., [3, Section 8.4], that the space \(\mathcal {W}\) endowed with the graph norm \(\Vert w \Vert _{\mathcal {W}} = \Vert w \Vert _{\mathcal {V}} + \Vert w' \Vert _{\mathcal {V}^*}\) is a separable and reflexive Banach space, and each element in \({{\mathcal {W}}}\), after a modification on a set of null measure, can be identified with a unique continuous function on [0, T] with values in H. Further, the embedding \(\mathcal {W} \subset C(0, T; H)\) is continuous, where C(0, T; H) stands for the space of continuous functions on [0, T] with values in H.
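A standard example of an evolution triple, recalled here only for orientation and not specific to this paper, is \(V = H^1_0(\Omega )\), \(H = L^2(\Omega )\) and \(V^* = H^{-1}(\Omega )\) for a bounded domain \(\Omega \subset {\mathbb {R}}^d\): the embedding \(H^1_0(\Omega ) \subset L^2(\Omega )\) is dense and continuous (in fact compact), and identifying H with its dual yields the chain

$$\begin{aligned} V \subset H \cong H^* \subset V^* \ \ \text{ with } \ \ \langle u, v \rangle _{V^* \times V} = (u, v)_H \ \ \text{ for } \text{ all } \ u \in H, \ v \in V. \end{aligned}$$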

Finally, we state a fixed point result (see [9, Lemma 7] or [30, Proposition 3.1]) which is a consequence of the Banach contraction principle.

Lemma 1

Let X be a Banach space and \(0< T < \infty \). Let \(F :L^2(0,T; X) \rightarrow L^2(0,T; X)\) be an operator such that

$$\begin{aligned}\Vert (Fv_1)(t) - (Fv_2)(t)\Vert ^2_X \le c \int _0^t \Vert v_1(s) - v_2(s)\Vert ^2_X \, \mathrm{d}s \end{aligned}$$

for all \(v_1\), \(v_2 \in L^2(0,T;X)\), a.e. \(t \in (0,T)\) with a constant \(c > 0\). Then, there exists a unique \(v^* \in L^2(0,T;X)\) such that \(Fv^* = v^*\).
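For the reader's convenience, we sketch the standard argument behind Lemma 1: one equips \(L^2(0,T;X)\) with the equivalent weighted (Bielecki-type) norm \(\Vert v \Vert ^2_{\lambda } = \int _0^T e^{-\lambda t} \Vert v(t) \Vert ^2_X \, \mathrm{d}t\), \(\lambda > 0\). Using the hypothesis on F and the Fubini theorem, we obtain

$$\begin{aligned} \Vert F v_1 - F v_2 \Vert ^2_{\lambda } \le c \int _0^T e^{-\lambda t} \int _0^t \Vert v_1(s) - v_2(s)\Vert ^2_X \, \mathrm{d}s \, \mathrm{d}t \le \frac{c}{\lambda } \int _0^T e^{-\lambda s} \Vert v_1(s) - v_2(s)\Vert ^2_X \, \mathrm{d}s = \frac{c}{\lambda } \, \Vert v_1 - v_2 \Vert ^2_{\lambda }, \end{aligned}$$

so F is a contraction with respect to \(\Vert \cdot \Vert _{\lambda }\) for every \(\lambda > c\), and the Banach contraction principle yields the unique fixed point.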

3 History-dependent evolution inclusions

We start with the study of existence and uniqueness for an abstract first-order evolution subdifferential inclusion in a general form. Our study is a continuation of the paper [8], where a class of general dynamic history-dependent variational–hemivariational inequalities has been investigated. The aim is to provide an improved version of the result in [8, Theorem 6], which in fact holds under a more general smallness hypothesis.

We study the operator inclusions in the standard functional setting used for evolution problems which exploits the notion of an evolution triple of spaces \((V, H, V^*)\). We use the notation \(\mathcal {V}\), \(\mathcal {V}^*\) and \(\mathcal {W}\), recalled in the previous section.

Given \(A :(0, T) \times V \rightarrow V^*\), \(\psi :(0, T) \times V \rightarrow {\mathbb {R}}\), \(f:(0,T)\rightarrow V^*\) and \(w_0\in V\), we consider the following Cauchy problem for the evolution inclusion.

Problem 2

Find \(w \in {{\mathcal {W}}}\) such that

$$\begin{aligned}{\left\{ \begin{array}{ll} \displaystyle w'(t) + A(t, w(t)) + \partial \psi (t, w(t)) \ni f(t) \ \ \text{ a.e. } \ t \in (0,T), \\ w(0) = w_0 . \end{array}\right. } \end{aligned}$$

Here, \(\psi (t, \cdot )\) is a locally Lipschitz function and \(\partial \psi \) denotes its Clarke generalized gradient. We recall that a function \(w \in {{\mathcal {W}}}\) is a solution of Problem 2, if there exists \(w^* \in {{\mathcal {V}}^*}\) such that \(w'(t) + A(t, w(t)) + w^*(t) = f(t)\) a.e. \(t \in (0,T)\), \(w^*(t) \in \partial \psi (t, w(t))\) a.e. \(t \in (0,T)\), and \(w(0) = w_0\).

In the study of Problem  2, we need the following hypotheses.

\(\underline{H(A)}{:}\,\, \displaystyle A :(0, T) \times V \rightarrow V^*\) is such that

  1. (i)

    \(A(\cdot , v)\) is measurable on (0, T) for all \(v \in V\).

  2. (ii)

    \(A(t,\cdot )\) is demicontinuous on V for a.e. \(t \in (0, T)\).

  3. (iii)

    \(\Vert A(t, v) \Vert _{V^*} \le a_0(t) + a_1 \Vert v \Vert _V\) for all \(v \in V\), a.e. \(t \in (0,T)\) with \(a_0 \in L^2(0,T)\), \(a_0 \ge 0\) and \(a_1 \ge 0\).

  4. (iv)

    \(A(t, \cdot )\) is strongly monotone for a.e. \(t \in (0,T)\), i.e., for a constant \(m_A > 0\),

    $$\begin{aligned}\langle A (t, v_1) - A(t, v_2), v_1 - v_2 \rangle _{V^* \times V} \ge m_A \Vert v_1 - v_2 \Vert _V^2 \end{aligned}$$

    for all \(v_1\), \(v_2 \in V\), a.e. \(t \in (0, T)\).

\(\underline{H(\psi )}{:}\,\, \psi :(0, T) \times V \rightarrow {\mathbb {R}}\) is such that

  1. (i)

    \(\psi (\cdot , v)\) is measurable on (0, T) for all \(v \in V\).

  2. (ii)

    \(\psi (t, \cdot )\) is locally Lipschitz on V for a.e. \(t \in (0, T)\).

  3. (iii)

    \(\Vert \partial \psi (t, v) \Vert _{V^*} \le c_0 (t) + c_1 \Vert v \Vert _V\) for all \(v \in V\), a.e. \(t \in (0, T)\) with \(c_0 \in L^2(0, T)\), \(c_0 \ge 0\), \(c_1 \ge 0\).

  4. (iv)

    \(\partial \psi (t, \cdot )\) is relaxed monotone for a.e. \(t \in (0,T)\), i.e., for a constant \(m_\psi \ge 0\),

    $$\begin{aligned} \langle z_1 - z_2, v_1 - v_2 \rangle _{V^* \times V} \ge -m_\psi \Vert v_1 - v_2 \Vert ^2_V \end{aligned}$$

    for all \(z_i \in \partial \psi (t, v_i)\), \(v_i \in V\), \(i = 1\), 2, a.e. \(t \in (0, T)\).
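A minimal example of a relaxed monotone (but not monotone) subdifferential, given here only for illustration: on a Hilbert space V identified with its dual, the smooth concave function \(\psi (t, v) = -\frac{m_\psi }{2} \Vert v \Vert ^2_V\) has \(\partial \psi (t, v) = \{ -m_\psi v \}\), and

$$\begin{aligned} \langle -m_\psi v_1 - (-m_\psi v_2), v_1 - v_2 \rangle _{V^* \times V} = - m_\psi \Vert v_1 - v_2 \Vert ^2_V \ \ \text{ for } \text{ all } \ v_1, v_2 \in V, \end{aligned}$$

so \(H(\psi )\)(iv) holds with equality. Hypothesis \((H_1)\) then compensates this lack of monotonicity by the strong monotonicity of \(A(t, \cdot )\).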

\(\underline{(H_0)}{:}\,\, f \in {{\mathcal {V}}}^*\), \(w_0 \in V\).

\(\underline{(H_1)}{:}\,\, m_A > m_\psi \).

We have the following existence and uniqueness result.

Theorem 3

Under hypotheses H(A), \(H(\psi )\), \((H_0)\) and \((H_1)\), Problem  2 has a unique solution.

Proof

The proof follows the lines of [8, Theorem 6]. For this reason, we provide only an argument for the coercivity of the operator \({{\mathcal {F}}}:{\mathcal {V}}\rightarrow 2^{{{\mathcal {V}}}^*}\) defined by \({{\mathcal {F}}}v={{\mathcal {A}}}v+{{\mathcal {B}}}v\) for \(v\in {{\mathcal {V}}}\), where \({{\mathcal {A}}} :{{\mathcal {V}}} \rightarrow {{\mathcal {V}}^*}\) and \({{\mathcal {B}}} :{{\mathcal {V}}} \rightarrow 2^{{\mathcal {V}}^*}\) are the Nemytskii operators corresponding to the translations of \(A(t, \cdot )\) and \(\partial \psi (t, \cdot )\) by the initial condition \(w_0\):

$$\begin{aligned}&({{\mathcal {A}}} v)(t) = A(t, v(t) + w_0), \\&{{\mathcal {B}}} v = \{ \, v^* \in {{\mathcal {V}}^*} \mid v^*(t) \in \partial \psi (t, v(t) + w_0) \ \ \text{ for } \text{ a.e. } \ t \in (0, T) \, \} \end{aligned}$$

for \(v \in {{\mathcal {V}}}\) and a.e. \(t \in (0, T)\). By H(A)(iii), (iv) and \(H(\psi )\)(iii), (iv), we have

$$\begin{aligned}&\langle A(t, z) + \partial \psi (t, z), z \rangle _{V^* \times V} =\langle A(t, z) - A(t, 0), z \rangle _{V^* \times V} \\&\quad + \langle \partial \psi (t, z) - \partial \psi (t, 0), z \rangle _{V^* \times V} + \langle A(t, 0) + \partial \psi (t, 0), z \rangle _{V^* \times V} \\&\qquad \ge (m_A - m_\psi ) \Vert z \Vert ^2_V - (a_0(t) + c_0(t)) \Vert z \Vert _V \end{aligned}$$

for all \(z \in V\), a.e. \(t \in (0, T)\). Next, let \(v \in {\mathcal V}\). We obtain

$$\begin{aligned}&\langle {{\mathcal {A}}} v + {{\mathcal {B}}} v, v \rangle _{\mathcal {V^*} \times \mathcal {V}} = \int _0^T \langle ({{\mathcal {A}}} v)(t) + ({{\mathcal {B}}} v)(t), v(t) \rangle _{V^* \times V} \, \mathrm{d}t \nonumber \\&\quad = \int _0^T \langle A(t, v(t) + w_0) + \partial \psi (t, v(t) + w_0), v(t) + w_0 \rangle _{V^* \times V} \, \mathrm{d}t \nonumber \\&\qquad - \int _0^T \langle A(t, v(t) + w_0) + \partial \psi (t, v(t) + w_0), w_0 \rangle _{V^* \times V} \, \mathrm{d}t . \end{aligned}$$
(3.1)

Using H(A)(iii) and \(H(\psi )\)(iii), we get

$$\begin{aligned}&|\langle A(t, v(t) + w_0) + \partial \psi (t, v(t) + w_0), w_0 \rangle _{V^* \times V}| \\&\quad \le (\Vert A(t, v(t) + w_0)\Vert _{V^*} + \Vert \partial \psi (t, v(t) + w_0) \Vert _{V^*}) \Vert w_0 \Vert _V \\&\qquad \le (a_0(t) + c_0(t) + (a_1+c_1) \Vert w_0\Vert _V) \Vert w_0 \Vert _V + (a_1+c_1) \Vert w_0\Vert _V \Vert v(t) \Vert _V. \end{aligned}$$

Integrating this inequality on (0, T), by the Hölder inequality, we deduce

$$\begin{aligned}& - \int _0^T \langle A(t, v(t) + w_0) + \partial \psi (t, v(t) + w_0), w_0 \rangle _{V^* \times V} \, \mathrm{d}t \ge - (a_1+c_1) T \Vert w_0 \Vert ^2_V \nonumber \\& \quad -\sqrt{T} (\Vert a_0\Vert _{L^2} + \Vert c_0 \Vert _{L^2}) \Vert w_0 \Vert _V - \sqrt{T} (a_1+c_1) \Vert w_0 \Vert _V \Vert v \Vert _{{\mathcal {V}}}. \end{aligned}$$
(3.2)

Combining (3.1) and (3.2), we obtain

$$\begin{aligned}&\langle {{\mathcal {A}}} v + {{\mathcal {B}}} v, v \rangle _{\mathcal {V^*} \times \mathcal {V}} \ge (m_A - m_\psi ) \Vert v + w_0 \Vert ^2_{{\mathcal {V}}} - (\Vert a_0\Vert _{L^2} + \Vert c_0 \Vert _{L^2}) \Vert v + w_0 \Vert _{{\mathcal {V}}} \\&\quad -\sqrt{T} (\Vert a_0\Vert _{L^2} + \Vert c_0 \Vert _{L^2}) \Vert w_0 \Vert _V - (a_1+c_1) T \Vert w_0 \Vert ^2_V - (a_1+c_1)\Vert w_0 \Vert _V \sqrt{T} \Vert v \Vert _{{\mathcal {V}}} \end{aligned}$$

for all \(v \in {{\mathcal {V}}}\). Finally, by the inequality \((a+b)^2 \ge \frac{1}{2} a^2 - b^2\) for all a, \(b \in {\mathbb {R}}\), we conclude

$$\begin{aligned}& \langle {{\mathcal {F}}} v, v \rangle _{\mathcal {V^*} \times \mathcal {V}} \ge \frac{1}{2} (m_A - m_\psi ) \Vert v \Vert ^2_{\mathcal V} - (m_A - m_\psi ) \Vert w_0 \Vert ^2_{{\mathcal {V}}} - (\Vert a_0\Vert _{L^2} + \Vert c_0 \Vert _{L^2}) \Vert v \Vert _{{\mathcal {V}}} \\&- 2\, (\Vert a_0\Vert _{L^2} + \Vert c_0 \Vert _{L^2}) \Vert w_0 \Vert _{{\mathcal {V}}} -(a_1+c_1) T \Vert w_0 \Vert ^2_V -(a_1+c_1)\Vert w_0 \Vert _V \sqrt{T} \Vert v \Vert _{{\mathcal {V}}}. \end{aligned}$$

From the smallness hypothesis \((H_1)\), we infer that \({{\mathcal {F}}}\) is a coercive operator. The rest of the proof of this theorem is analogous to [8, Theorem 6], and therefore, it is omitted here. \(\square \)
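For completeness, we note that the elementary inequality used in the proof above is a direct consequence of the Young inequality \(2|ab| \le \frac{1}{2}\, a^2 + 2 b^2\):

$$\begin{aligned} (a+b)^2 = a^2 + 2ab + b^2 \ge a^2 - \Big ( \frac{1}{2}\, a^2 + 2 b^2 \Big ) + b^2 = \frac{1}{2}\, a^2 - b^2 \ \ \text{ for } \text{ all } \ a, b \in {\mathbb {R}}. \end{aligned}$$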

Observe that the existence and uniqueness result of Theorem 3 was proved earlier in [8] under the more restrictive smallness assumption \(m_A > \max \{ m_\psi , 2 \sqrt{2} \, c_1 \}\).

Note that condition \(H(\psi )\)(iv) has an equivalent formulation in terms of the generalized directional derivative of \(\psi (t, \cdot )\), that is, \(\partial \psi (t, \cdot )\) is relaxed monotone for a.e. \(t \in (0,T)\) with constant \(m_\psi \ge 0\) if and only if

$$\begin{aligned} \psi ^0(t, v_1; v_2 - v_1) + \psi ^0(t, v_2; v_1 - v_2) \le m_{\psi } \, \Vert v_1 - v_2 \Vert ^2_V \end{aligned}$$

for all \(v_i \in V\), \(i= 1\), 2, a.e. \(t \in (0, T)\). For details and examples, see, e.g., [7, Remark 3.1], and [19, 20, 32].
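One implication is immediate from the definitions: since \(\langle z, h \rangle _{V^* \times V} \le \psi ^0(t, u; h)\) for all \(z \in \partial \psi (t, u)\) and \(h \in V\) by the definition of the generalized gradient, we have, for any \(z_i \in \partial \psi (t, v_i)\),

$$\begin{aligned} - \langle z_1 - z_2, v_1 - v_2 \rangle _{V^* \times V} = \langle z_1, v_2 - v_1 \rangle _{V^* \times V} + \langle z_2, v_1 - v_2 \rangle _{V^* \times V} \le \psi ^0(t, v_1; v_2 - v_1) + \psi ^0(t, v_2; v_1 - v_2), \end{aligned}$$

so the displayed condition implies \(H(\psi )\)(iv). The converse implication follows from the fact that \(\psi ^0(t, u; h) = \max \{ \langle z, h \rangle _{V^* \times V} \mid z \in \partial \psi (t, u) \}\).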

We pass now to the evolution inclusion with history-dependent operators which is the main object of our study in this paper.

Problem 4

Find \(w \in {{\mathcal {W}}}\) such that

$$\begin{aligned}{\left\{ \begin{array}{ll} \displaystyle w'(t) + A(t, w(t)) + (R_1 w)(t) + M^* \partial J (t, (Sw)(t), Mw(t)) \\ \quad + N^* \partial \varphi (t, (Rw)(t), Nw(t)) \ni f(t) \ \ \text{ a.e. } \ t \in (0,T), \\ w(0) = w_0 . \end{array}\right. } \end{aligned}$$

In this problem, \(\partial J\) denotes the generalized gradient of a locally Lipschitz function \(J(t, z, \cdot )\) and \(\partial \varphi \) is the convex subdifferential of a convex and lower semicontinuous function \(\varphi (t, y, \cdot )\). Besides the two subdifferential terms generated by convex and (in general) nonconvex functions, the inclusion involves three nonlinear operators R, \(R_1\) and S, called history-dependent operators.

We introduce the following hypotheses on the data of Problem 4. Let X, Y, Z and U be Banach spaces.

\(\underline{H(J)}{:}\,\, J :(0, T) \times Z \times X \rightarrow {\mathbb {R}}\) is such that

  1. (i)

    \(J(\cdot , z, v)\) is measurable on (0, T) for all \(z \in Z\), \(v \in X\).

  2. (ii)

    \(J(t, \cdot , v)\) is continuous on Z for all \(v \in X\), a.e. \(t \in (0, T)\).

  3. (iii)

    \(J(t, z, \cdot )\) is locally Lipschitz on X for all \(z \in Z\), a.e. \(t \in (0, T)\).

  4. (iv)

    \(\Vert \partial J(t, z, v) \Vert _{X^*} \le c_{0J} (t) + c_{1J} \Vert z \Vert _Z + c_{2J} \Vert v \Vert _X\) for all \(z \in Z\), \(v \in X\), a.e. \(t \in (0, T)\) with \(c_{0J} \in L^2(0, T)\), \(c_{0J}\), \(c_{1J}\), \(c_{2J} \ge 0\).

  5. (v)

    \(\partial J\) is relaxed monotone in the following sense

    $$\begin{aligned}& \langle \partial J(t, z_1, v_1) - \partial J(t, z_2, v_2), v_1 - v_2 \rangle _{X^* \times X} \\&\qquad \ge - m_J \Vert v_1 - v_2 \Vert ^2_X - {\bar{m}}_{J} \, \Vert z_1 - z_2 \Vert _Z \Vert v_1 - v_2 \Vert _X \end{aligned}$$

    for all \(z_i \in Z\), \(v_i \in X\), \(i = 1\), 2, a.e. \(t \in (0, T)\) with \(m_{J}\ge 0\), \({\bar{m}}_J \ge 0\).

\(\underline{H(\varphi )}{:}\,\, \varphi :(0, T) \times Y \times U \rightarrow {\mathbb {R}}\) is such that

  1. (i)

    \(\varphi (\cdot , y, u)\) is measurable on (0, T) for all \(y \in Y\), \(u \in U\).

  2. (ii)

    \(\varphi (t, \cdot , u)\) is continuous on Y for all \(u \in U\), a.e. \(t \in (0, T)\).

  3. (iii)

    \(\varphi (t, y, \cdot )\) is convex and l.s.c. on U for all \(y \in Y\), a.e. \(t \in (0, T)\).

  4. (iv)

    \(\Vert \partial \varphi (t, y, u) \Vert _{U^*} \le c_{0\varphi } (t) + c_{1\varphi } \Vert y \Vert _Y + c_{2\varphi } \Vert u \Vert _U\) for all \(y \in Y\), \(u \in U\), a.e. \(t \in (0, T)\) with \(c_{0\varphi } \in L^2(0, T)\), \(c_{0\varphi }\), \(c_{1\varphi }\), \(c_{2\varphi } \ge 0\).

  5. (v)

    \( \varphi (t, y_1, u_2) - \varphi (t, y_1, u_1) + \varphi (t, y_2, u_1) - \varphi (t, y_2, u_2) \le \beta _{\varphi } \, \Vert y_1 - y_2 \Vert _Y \Vert u_1 - u_2 \Vert _U\) for all \(y_i \in Y\), \(u_i \in U\), \(i = 1\), 2, a.e. \(t \in (0, T)\) with \(\beta _{\varphi } \ge 0\).

\(\underline{H(M,N)}{:}\,\, M \in {{\mathcal {L}}}(V, X)\), \(N \in {{\mathcal {L}}}(V, U)\).

\(\underline{(H_2)}{:}\,\, {R} :{{\mathcal {V}}} \rightarrow L^2(0, T; Y)\), \({R}_1 :{{\mathcal {V}}} \rightarrow {{\mathcal {V}}^*}\), and \({S} :{{\mathcal {V}}} \rightarrow L^2(0, T; Z)\) are such that

  1. (i)

    \(\displaystyle \Vert ({R} v_1)(t) - ({R} v_2)(t) \Vert _Y \le c_{R} \int _0^t \Vert v_1(s) - v_2(s) \Vert _V \mathrm{d}s\) for all \(v_1\), \(v_2 \in {{\mathcal {V}}}\), a.e. \(t\in (0, T)\) with \(c_{R} > 0\).

  2. (ii)

    \(\displaystyle \Vert ({R}_1 v_1)(t) - ({R}_1 v_2)(t) \Vert _{V^*} \le c_{R_1} \int _0^t \Vert v_1(s) - v_2(s) \Vert _V \mathrm{d}s\) for all \(v_1\), \(v_2 \in {{\mathcal {V}}}\), a.e. \(t\in (0, T)\) with \(c_{R_1} > 0\).

  3. (iii)

    \(\displaystyle \Vert ({S} v_1)(t) - ({S} v_2)(t) \Vert _Z \le c_{S} \int _0^t \Vert v_1(s) - v_2(s) \Vert _V \mathrm{d}s\) for all \(v_1\), \(v_2 \in {{\mathcal {V}}}\), a.e. \(t\in (0, T)\) with \(c_{S} > 0\).

\(\underline{(H_3)}{:}\,\, m_A > m_J \Vert M \Vert ^2\).

Theorem 5

Under hypotheses H(A), H(J), \(H(\varphi )\), H(M, N), \((H_0)\), \((H_2)\) and \((H_3)\), Problem 4 has a unique solution.

Proof

The proof is based on Theorem 3, uses some ideas from [8, Theorem 9] and consists of three steps.

Step 1 Let us fix \(\xi \in L^2(0, T; V^*)\), \(\eta \in L^2(0, T; Y)\) and \(\zeta \in L^2(0, T; Z)\) and define a functional \(\psi _{\xi \eta \zeta } :(0, T) \times V \rightarrow {\mathbb {R}}\) by

$$\begin{aligned}\psi _{\xi \eta \zeta } (t, v) = \langle \xi (t), v \rangle _{V^* \times V} + J(t, \zeta (t), M v) + \varphi (t, \eta (t), N v) \end{aligned}$$

for all \(v \in V\), a.e. \(t \in (0,T)\).

We will verify that \(\psi _{\xi \eta \zeta }\) satisfies hypothesis \(H(\psi )\). First, it is clear from assumptions H(J)(i), (ii) and \(H(\varphi )\)(i), (ii) that both \(J(\cdot , \cdot , v)\) and \(\varphi (\cdot , \cdot , u)\) are Carathéodory functions for all \(v \in V\), \(u \in U\). Composing them with the measurable functions \((0,T) \ni t \mapsto \zeta (t) \in Z\) and \((0,T) \ni t \mapsto \eta (t) \in Y\), we see that the functional \(\psi _{\xi \eta \zeta } (\cdot , v)\) is measurable on (0, T) for all \(v \in V\). This implies \(H(\psi )\)(i). Next, we observe that \(H(\psi )\)(ii) is satisfied. Indeed, using [3, Proposition 5.2.10] and \(H(\varphi )\)(iii), we infer that \(\varphi (t, y,\cdot )\) is locally Lipschitz on U for all \(y \in Y\), a.e. \(t \in (0, T)\). Hence, taking into account H(J)(iii) combined with H(M, N), we obtain that the functional \(\psi _{\xi \eta \zeta }(t, \cdot )\) is locally Lipschitz on V for a.e. \(t \in (0, T)\), i.e., \(H(\psi )\)(ii) holds.

Subsequently, since \(J(t, z, M \cdot )\) and \(\varphi (t, y, N \cdot )\) are locally Lipschitz for all \(z \in Z\) and \(y \in Y\), a.e. \(t \in (0, T)\), we apply the chain rule in [3, Proposition 5.6.23] to get

$$\begin{aligned} \partial \psi _{\xi \eta \zeta }(t, v) \subset \xi (t) + M^* \partial J (t, \zeta (t), M v) + N^* \partial \varphi (t, \eta (t), N v) \end{aligned}$$
(3.3)

for all \(v \in V\) and a.e. \(t \in (0,T)\). Exploiting (3.3), we easily have

$$\begin{aligned} \Vert \partial \psi _{\xi \eta \zeta } (t, v) \Vert _{V^*}&\le \Vert \xi (t) \Vert _{V^*} + \Vert M^* \partial J(t, \zeta (t), M v) \Vert _{V^*} + \Vert N^* \partial \varphi (t, \eta (t), Nv) \Vert _{V^*} \\&\le \Vert \xi (t) \Vert _{V^*} + \Vert M \Vert \big ( c_{0J}(t) + c_{1J} \Vert \zeta (t) \Vert _Z \big ) + c_{2J} \Vert M \Vert ^2 \Vert v \Vert _V \\&\quad + \Vert N \Vert \big ( c_{0\varphi }(t) + c_{1\varphi } \Vert \eta (t) \Vert _Y \big ) + c_{2\varphi } \Vert N \Vert ^2 \Vert v \Vert _V \\&\le c_0(t) + c_1 \Vert v \Vert _V \end{aligned}$$

for all \(v \in V\), a.e. \(t\in (0,T)\), where \(c_0 \in L^2(0,T)\), \(c_0 \ge 0\) and

$$\begin{aligned} c_1 = c_{2J} \Vert M \Vert ^2 + c_{2\varphi } \Vert N \Vert ^2 \ge 0. \end{aligned}$$

Condition \(H(\psi )\)(iii) follows.

Finally, we will check the relaxed monotonicity condition \(H(\psi )\)(iv). We use H(J)(v) and the monotonicity of \(\partial \varphi (t, y, \cdot )\) for all \(y \in Y\), a.e. \(t \in (0, T)\), see [3, Theorem 6.3.19], which entail

$$\begin{aligned}&\langle \partial \psi _{\xi \eta \zeta } (t, v_1) - \partial \psi _{\xi \eta \zeta } (t, v_2), v_1 - v_2 \rangle _{V^* \times V} \\&\quad = \langle \partial J(t, \zeta (t), Mv_1) - \partial J(t, \zeta (t), M v_2), M v_1 - M v_2 \rangle _{X^* \times X} \\&\qquad +\langle \partial \varphi (t, \eta (t), N v_1) - \partial \varphi (t, \eta (t), N v_2), N v_1 - N v_2 \rangle _{U^* \times U}\\&\quad \ge - m_J \, \Vert M (v_1 - v_2) \Vert ^2_X \ge - m_J \Vert M \Vert ^2 \Vert v_1 - v_2 \Vert ^2_V \end{aligned}$$

for all \(v_1\), \(v_2 \in V\), a.e. \(t \in (0, T)\). We deduce that condition \(H(\psi )\)(iv) is satisfied with \(m_{\psi } = m_{J}\Vert M \Vert ^2\). This completes the proof of \(H(\psi )\).

Since \(m_\psi = m_J \Vert M \Vert ^2\), hypothesis \((H_3)\) implies condition \((H_1)\). Therefore, by applying Theorem 3, we obtain that there exists a unique element \(w_{\xi \eta \zeta } \in {{\mathcal {W}}}\) which solves the inclusion in Problem 2 with \(\psi \) replaced by \(\psi _{\xi \eta \zeta }\). Moreover, by (3.3), it is clear that \(w_{\xi \eta \zeta } \in {{\mathcal {W}}}\) is also a solution to the following problem: find \(w \in {{\mathcal {W}}}\) such that

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle w'(t) + A(t, w(t)) + \xi (t) + M^* \partial J (t, \zeta (t), Mw(t)) \\ \quad + N^* \partial \varphi (t, \eta (t), Nw(t)) \ni f(t) \ \ \text{ a.e. } \ t \in (0,T), \\ w(0) = w_0 . \end{array}\right. } \end{aligned}$$
(3.4)

Step 2 We claim that the solution to the problem (3.4) is unique. Let \(w_1\), \(w_2 \in {{\mathcal {W}}}\) be solutions to the problem (3.4). For simplicity, we suppress the subscripts \(\xi \), \(\eta \) and \(\zeta \) in this part of the proof. We subtract the relation (3.4) satisfied by \(w_2\) from the one satisfied by \(w_1\), and take the duality pairing of the result with \(w_1(t) - w_2(t)\). We have

$$\begin{aligned}& \langle w'_1(t) - w'_2(t), w_1(t) - w_2(t) \rangle _{V^* \times V} + \langle A(t, w_1(t)) - A(t, w_2(t)), w_1(t) - w_2(t)\rangle _{V^* \times V}\\& \quad + \langle \partial J (t, \zeta (t), Mw_1(t)) - \partial J (t, \zeta (t), Mw_2(t)), Mw_1 (t) - M w_2(t) \rangle _{X^* \times X} \\& \quad + \langle \partial \varphi (t, \eta (t), N w_1(t)) - \partial \varphi (t, \eta (t), N w_2(t)), N w_1 (t) - N w_2(t) \rangle _{U^* \times U} = 0 \end{aligned}$$

for a.e. \(t \in (0, T)\). Next, we integrate the above relation on (0, t) for all \(t \in [0, T]\), and then use integration by parts, H(A)(iv), H(J)(v), the monotonicity of the convex subdifferential, and the condition \(w_1(0)-w_2(0)=0\) to deduce

$$\begin{aligned} \frac{1}{2}\Vert w_1(t) - w_2(t) \Vert ^2_H + m_A \int _{0}^{t}\Vert w_1(s) - w_2(s) \Vert _V^2 \, \mathrm{d}s - m_{J} \Vert M \Vert ^2 \int _{0}^{t}\Vert w_1(s) - w_2(s)\Vert _V^2 \, \mathrm{d}s \le 0 \end{aligned}$$

for all \(t \in [0,T]\). By the smallness condition \((H_3)\), we conclude \(w_1 = w_2\). The solution to problem (3.4) is unique.

Step 3 Define the operator \(\Lambda :L^2(0,T; V^* \times Y \times Z) \rightarrow L^2(0,T;V^* \times Y \times Z)\) by

$$\begin{aligned} \Lambda (\xi , \eta , \zeta ) = ({R}_1 w_{\xi \eta \zeta }, {R} w_{\xi \eta \zeta }, {S} w_{\xi \eta \zeta }) \quad \mathrm{for\ all} \ \ (\xi , \eta , \zeta ) \in L^2(0,T;V^* \times Y \times Z), \end{aligned}$$

where \(w_{\xi \eta \zeta } \in {{\mathcal {W}}}\) denotes the unique solution to the problem (3.4) corresponding to \((\xi , \eta , \zeta )\). By an argument similar to the one used in [8, Theorem 9 and Lemma 3], we deduce that there exists a unique fixed point \((\xi ^*,\eta ^*,\zeta ^*)\) of \(\Lambda \), i.e.,

$$\begin{aligned} (\xi ^*,\eta ^*,\zeta ^*) \in L^2(0,T;V^* \times Y \times Z) \ \ \ \mathrm{and}\ \ \ \Lambda (\xi ^*, \eta ^*, \zeta ^*) = (\xi ^*, \eta ^*, \zeta ^*). \end{aligned}$$

Let \(w_{\xi ^* \eta ^* \zeta ^*} \in {{\mathcal {W}}}\) be the unique solution to the problem (3.4) corresponding to \((\xi ^*,\eta ^*, \zeta ^*)\). By definition of operator \(\Lambda \), we have

$$\begin{aligned} \xi ^* = {R}_1(w_{\xi ^* \eta ^* \zeta ^*}), \ \ \eta ^* = {R}(w_{\xi ^* \eta ^* \zeta ^*}) \ \ \ \text{ and } \ \ \ \zeta ^* = {S}(w_{\xi ^* \eta ^* \zeta ^*}) . \end{aligned}$$

Finally, we use these equalities in (3.4), and conclude \(w_{\xi ^* \eta ^* \zeta ^*}\) is the unique solution of Problem 4. This completes the proof of the theorem. \(\square \)

4 A continuous dependence result

In this section, we provide a new continuous dependence result for Problem 4. We study the continuity, in weak topologies, of the map that assigns to the right-hand side and the initial condition in Problem 4 its unique solution.

We first prove the following a priori estimate for a solution.

Proposition 6

Under hypotheses of Theorem 5, if \(w \in {{\mathcal {W}}}\) is a solution to Problem 4, then there exists a constant \(C > 0\) such that

$$\begin{aligned}&\Vert w \Vert _{C(0,T;H)} + \Vert w \Vert _{{\mathcal {W}}} \le C \big ( 1 + \Vert w_0 \Vert _V + \Vert f \Vert _{{\mathcal {V}}^*} + \Vert R0 \Vert _{L^2(0, T; Y)} \\&\quad + \Vert R_10 \Vert _{L^2(0, T; V^*)} + \Vert S 0 \Vert _{L^2(0, T; Z)} \big ) . \end{aligned}$$

Proof

Let us denote by \(w \in {{\mathcal {W}}}\) a solution to Problem 4. This means that there are \(\xi \in L^2(0, T; X^*)\) and \(\eta \in L^2(0,T; U^*)\) such that

$$\begin{aligned}& w'(t) + A (t, w(t)) + (R_1 w)(t) + M^* \xi (t) + N^* \eta (t) = f(t) \ \ \text{ a.e. } \ \ t \in (0, T), \end{aligned}$$
(4.1)
$$\begin{aligned}& \xi (t) \in \partial J(t, (Sw)(t), Mw(t)) \ \ \text{ a.e. } \ \ t \in (0, T), \end{aligned}$$
(4.2)
$$\begin{aligned}& \eta (t) \in \partial \varphi (t, (Rw)(t), N w(t)) \ \ \text{ a.e. } \ \ t \in (0, T), \end{aligned}$$
(4.3)
$$\begin{aligned}& w(0) = w_0. \end{aligned}$$
(4.4)

We take the duality pairing with w(t) in (4.1) to get

$$\begin{aligned}&\langle w'(t) + A(t, w(t)) + (R_1w)(t), w(t) \rangle _{V^* \times V} + \langle \xi (t), M w(t) \rangle _{X^* \times X} \nonumber \\&\quad + \langle \eta (t), N w(t) \rangle _{U^* \times U} = \langle f(t), w(t) \rangle _{V^* \times V}. \end{aligned}$$
(4.5)

In the estimates below, we use several times the Hölder inequality, Young’s inequality \(ab \le \frac{\varepsilon ^2}{2}a^2 + \frac{1}{2\varepsilon ^2} b^2\) with \(\varepsilon >0\), and elementary inequality \((a+b)^2 \le 2\, (a^2+b^2)\) for all a, \(b \in {\mathbb {R}}\). Let \(t \in [0,T]\). From hypothesis \((H_2)\), we have

$$\begin{aligned}&\Vert (Rw)(t) \Vert ^2_{Y} \le 2 \, c_R^2 \, t \int _0^t \Vert w(s) \Vert ^2_V \, \mathrm{d}s + 2 \, \Vert (R0)(t)\Vert ^2_{Y}, \end{aligned}$$
(4.6)
$$\begin{aligned}&\Vert (R_1w)(t) \Vert ^2_{V^*} \le 2 \, c_{R_1}^2 \, t \int _0^t \Vert w(s) \Vert ^2_V \, \mathrm{d}s + 2 \, \Vert (R_10)(t)\Vert ^2_{V^*}, \end{aligned}$$
(4.7)
$$\begin{aligned}&\Vert (Sw)(t) \Vert ^2_{Z} \le 2 \, c_S^2 \, t \int _0^t \Vert w(s) \Vert ^2_V \, \mathrm{d}s + 2 \, \Vert (S0)(t)\Vert ^2_{Z}. \end{aligned}$$
(4.8)
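Estimates (4.6)–(4.8) follow from the history-dependence condition via the Cauchy–Schwarz inequality and \((a+b)^2 \le 2(a^2+b^2)\). As an illustration, a discrete analogue of (4.6) can be verified numerically for a sample scalar history-dependent operator \((Rw)(t) = c_R \int _0^t w(s)\,\mathrm{d}s + g(t)\) (our choice for the sketch, not an operator from the paper):

```python
import math, random

def R(w, g, cR, h):
    # sample history-dependent operator (Rw)(t_n) = cR * int_0^{t_n} w ds + g(t_n),
    # discretized by a left Riemann sum with step h; here (R0)(t) = g(t)
    out, acc = [], 0.0
    for n in range(len(w)):
        out.append(cR * acc + g[n])
        acc += h * w[n]
    return out

random.seed(1)
h, N, cR = 0.01, 200, 0.7
w = [random.uniform(-1, 1) for _ in range(N)]
g = [math.sin(0.1 * n) for n in range(N)]
Rw = R(w, g, cR, h)
for n in range(N):
    t = n * h
    int_w2 = h * sum(x * x for x in w[:n])
    # discrete analogue of (4.6): |(Rw)(t)|^2 <= 2 cR^2 t int_0^t |w|^2 + 2 |(R0)(t)|^2
    assert Rw[n]**2 <= 2 * cR**2 * t * int_w2 + 2 * g[n]**2 + 1e-9
```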

By assumption \(H(\varphi )\)(iv) and (4.6), we obtain

$$\begin{aligned}& |\langle \partial \varphi (t, (Rw)(t), 0), Nw(t) \rangle _{U^* \times U}| \le \big (c_{0\varphi }(t) + c_{1\varphi } \Vert (Rw)(t)\Vert _Y\big ) \Vert N \Vert \Vert w(t) \Vert _V \\&\le \frac{\varepsilon ^2}{2} \Vert N \Vert ^2 \Vert w(t) \Vert _V^2 + \frac{1}{\varepsilon ^2} \big ( c_{0\varphi }^2(t) + c_{1\varphi }^2 \Vert (Rw)(t)\Vert ^2_Y \big ) \\&\le \frac{\varepsilon ^2}{2} \Vert N \Vert ^2 \Vert w(t) \Vert _V^2 + \frac{c_{0\varphi }^2(t)}{\varepsilon ^2} + \frac{2 c_{1\varphi }^2 c_R^2 T}{\varepsilon ^2} \int _0^t \Vert w(s) \Vert _V^2\, \mathrm{d}s + \frac{2 c_{1\varphi }^2}{\varepsilon ^2} \Vert (R0)(t)\Vert ^2_Y. \end{aligned}$$

The latter combined with (4.3) and the monotonicity of the convex subdifferential implies

$$\begin{aligned}& \langle \eta (t), N w(t) \rangle _{U^* \times U} = \langle \partial \varphi (t, (Rw)(t), N w(t)), N w(t) \rangle _{U^* \times U} \nonumber \\&\ge \langle \partial \varphi (t, (Rw)(t), 0), N w(t) \rangle _{U^* \times U} \nonumber \\&\ge - \frac{\varepsilon ^2}{2} \Vert N \Vert ^2 \Vert w(t) \Vert _V^2 - \frac{c_{0\varphi }^2(t)}{\varepsilon ^2} - \frac{2 c_{1\varphi }^2 c_R^2 T}{\varepsilon ^2} \int _0^t \Vert w(s) \Vert _V^2\, \mathrm{d}s - \frac{2 c_{1\varphi }^2}{\varepsilon ^2} \Vert (R0)(t)\Vert ^2_Y. \end{aligned}$$
(4.9)

Similarly, by hypotheses H(J)(iv) and (v), (4.2), and (4.8), we get

$$\begin{aligned}& \langle \xi (t), M w(t) \rangle _{X^* \times X} = \langle \partial J (t, (Sw)(t), M w(t)), M w(t) \rangle _{X^* \times X} \nonumber \\&\ge \langle \partial J(t, (Sw)(t), 0), M w(t) \rangle _{X^* \times X} - m_{J} \Vert M \Vert ^2 \Vert w(t) \Vert _V^2 \nonumber \\&\ge - m_{J} \Vert M \Vert ^2 \Vert w(t) \Vert _V^2 - \frac{\varepsilon ^2}{2} \Vert M \Vert ^2 \Vert w(t) \Vert _V^2 - \frac{c_{0J}^2(t)}{\varepsilon ^2} \nonumber \\&\quad - \frac{2 c_{1J}^2 c_S^2 T}{\varepsilon ^2} \int _0^t \Vert w(s) \Vert _V^2\, \mathrm{d}s - \frac{2 c_{1J}^2}{\varepsilon ^2} \Vert (S 0)(t)\Vert ^2_Z. \end{aligned}$$
(4.10)

Next, exploiting (4.7), we have

$$\begin{aligned}& \int _0^t \langle (R_1w)(s), w(s) \rangle _{V^* \times V} \, \mathrm{d}s \le \frac{\varepsilon ^2}{2} \int _0^t \Vert w(s) \Vert _V^2\, \mathrm{d}s \nonumber \\&+ \frac{c_{R_1}^2 t}{\varepsilon ^2} \int _0^t \Big ( \int _0^s \Vert w(\tau ) \Vert _V^2 \, \mathrm{d}\tau \Big )\, \mathrm{d}s + \frac{1}{\varepsilon ^2} \int _0^t \Vert (R_1 0)(s) \Vert ^2_{V^*}\, \mathrm{d}s. \end{aligned}$$
(4.11)

On the other hand, a simple calculation gives

$$\begin{aligned}& \int _0^t \langle f(s), w(s) \rangle _{V^* \times V} \, \mathrm{d}s \le \frac{\varepsilon ^2}{2} \int _0^t \Vert w(s) \Vert _V^2\, \mathrm{d}s + \frac{1}{2\varepsilon ^2} \int _0^t \Vert f(s) \Vert ^2_{V^*}\, \mathrm{d}s, \end{aligned}$$
(4.12)
$$\begin{aligned}&\int _0^t a_0(s) \Vert w(s) \Vert _V\, \mathrm{d}s \le \frac{\varepsilon ^2}{2} \Vert w \Vert _{L^2(0,t;V)}^2 + \frac{1}{2\varepsilon ^2} \Vert a_0 \Vert ^2_{L^2(0,T)} . \end{aligned}$$
(4.13)
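The applications of Young's inequality above can be sanity-checked numerically; a minimal sketch (the helper name `young` is ours):

```python
import random

def young(a, b, eps):
    # Young's inequality with parameter eps > 0: a*b <= (eps^2/2) a^2 + b^2/(2 eps^2)
    return (eps**2 / 2) * a**2 + b**2 / (2 * eps**2)

random.seed(0)
for _ in range(1000):
    a = random.uniform(0, 10)
    b = random.uniform(0, 10)
    eps = random.uniform(0.1, 5)
    assert a * b <= young(a, b, eps) + 1e-12          # Young's inequality
    assert (a + b)**2 <= 2 * (a**2 + b**2) + 1e-12    # elementary inequality
```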

Subsequently, we integrate (4.5) over (0, t) for all \(t \in [0,T]\), use the integration by parts formula in [4, Proposition 8.4.14], hypotheses H(A)(iii) and (iv), and inequalities (4.9)–(4.13), to deduce

$$\begin{aligned}& \frac{1}{2} \Vert w(t)\Vert _H^2 + \Big (m_A - m_{J} \Vert M \Vert ^2 -\frac{3\varepsilon ^2}{2} -\frac{\varepsilon ^2}{2} (\Vert N \Vert ^2 + \Vert M \Vert ^2)\Big ) \int _0^t \Vert w(s) \Vert _V^2 \, \mathrm{d}s \\&\le \frac{1}{2} \Vert w_0 \Vert ^2_H + \frac{1}{2\varepsilon ^2} \Vert a_0 \Vert ^2_{L^2(0,T)} + \frac{1}{\varepsilon ^2} (\Vert c_{0\varphi }\Vert ^2_{L^2(0,T)} + \Vert c_{0J}\Vert ^2_{L^2(0,T)}) \\&\quad + \frac{1}{\varepsilon ^2} (2c_{1\varphi }^2 c_R^2 T + 2 c_{1J}^2 c_S^2 T + c_{R_1}^2 T) \int _0^t \Big ( \int _0^s \Vert w(\tau ) \Vert ^2_V \, \mathrm{d}\tau \Big )\, \mathrm{d}s + \frac{1}{2\varepsilon ^2} \int _0^t \Vert f(s) \Vert ^2_{V^*}\, \mathrm{d}s \\&\quad + \frac{2c_{1\varphi }^2}{\varepsilon ^2} \int _0^t \Vert (R0)(s) \Vert _Y^2 \, \mathrm{d}s + \frac{1}{\varepsilon ^2} \int _0^t \Vert (R_10)(s) \Vert _{V^*}^2 \, \mathrm{d}s + \frac{2 c_{1J}^2}{\varepsilon ^2} \int _0^t \Vert (S0)(s) \Vert _Z^2 \, \mathrm{d}s. \end{aligned}$$

Now, by \((H_3)\), we choose \(\varepsilon > 0\) such that

$$\begin{aligned} m_A - m_{J} \Vert M \Vert ^2 -\frac{3\varepsilon ^2}{2} -\frac{\varepsilon ^2}{2} \big (\Vert N \Vert ^2 + \Vert M \Vert ^2 \big ) > 0. \end{aligned}$$
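In other words, \((H_3)\) guarantees a nonempty admissible range \(\varepsilon ^2 < 2\,(m_A - m_J \Vert M \Vert ^2)/(3 + \Vert N \Vert ^2 + \Vert M \Vert ^2)\). A sketch with purely illustrative constants (the function name `max_eps` is ours):

```python
import math

def max_eps(mA, mJ, normM, normN):
    # smallness condition (H_3): mA - mJ*||M||^2 > 0; any eps with
    # eps^2 < 2*(mA - mJ*||M||^2) / (3 + ||N||^2 + ||M||^2) keeps the
    # coefficient of the integral term positive
    gap = mA - mJ * normM**2
    assert gap > 0, "smallness condition (H_3) violated"
    return math.sqrt(2 * gap / (3 + normN**2 + normM**2))

# sample constants (illustrative only)
mA, mJ, normM, normN = 2.0, 0.5, 1.0, 1.0
eps = 0.9 * max_eps(mA, mJ, normM, normN)
coeff = mA - mJ * normM**2 - 1.5 * eps**2 - 0.5 * eps**2 * (normN**2 + normM**2)
assert coeff > 0
```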

Hence, for some positive constants \(d_i\), \(i=1,\ldots ,5\), we have

$$\begin{aligned}& d_1 \int _0^t \Vert w(s) \Vert _V^2 \, \mathrm{d}s \le \frac{1}{2} \Vert w_0 \Vert ^2_H + d_2 \Vert f \Vert ^2_{L^2(0,t;V^*)} + d_3 \int _0^t \Big ( \int _0^s \Vert w(\tau ) \Vert ^2_V \, \mathrm{d}\tau \Big )\, \mathrm{d}s \\&\quad + \, d_4 \int _0^t \Big ( \Vert (R0)(s) \Vert _Y^2 + \Vert (R_10)(s) \Vert _{V^*}^2 + \Vert (S0)(s) \Vert _Z^2 \Big ) \, \mathrm{d}s + d_5 \end{aligned}$$

for all \(t \in [0,T]\). Applying the Gronwall lemma, see, e.g., [19, Lemma 2.7], we deduce the desired estimate on the term \(\Vert w \Vert _{{\mathcal {V}}}\). By (4.1), we obtain the bound on \(\Vert w' \Vert _{{\mathcal {V}}^*}\), and finally also on the norm of the solution w in \(C(0, T; H)\). This proves the estimate in the statement of the proposition and completes the proof. \(\square \)
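The Gronwall step can be illustrated in a time-discrete setting: if \(x_n \le a + L h \sum _{k<n} x_k\), then \(x_n \le a\,(1+Lh)^n \le a\, e^{Lnh}\). A minimal numeric check (our discretization, not the precise lemma of [19]):

```python
import math

def gronwall_extremal(a, L, h, N):
    # saturate the discrete Gronwall inequality x_n <= a + L*h*sum_{k<n} x_k;
    # the resulting sequence is the extremal case x_n = a*(1 + L*h)^n
    x = []
    for n in range(N):
        x.append(a + L * h * sum(x))
    return x

a, L, h, N = 1.0, 2.0, 0.01, 500
x = gronwall_extremal(a, L, h, N)
for n in range(N):
    # discrete Gronwall bound: x_n <= a * exp(L * t_n) with t_n = n*h
    assert x[n] <= a * math.exp(L * n * h) + 1e-9
```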

To provide a result on the continuous dependence, we need stronger versions of the hypotheses introduced in the previous section. In particular, the operator A will be assumed to be time independent and weakly–weakly continuous, which obviously implies the demicontinuity in H(A)(ii). All hypotheses introduced below are clearly satisfied in the applications in Sects. 5 and 6.

\(\underline{H(A)_1}{:}\,\, \displaystyle A :V \rightarrow V^*\) is such that

  1. (i)

    A is weakly–weakly continuous.

  2. (ii)

    \(\Vert A v \Vert _{V^*} \le a_0 + a_1 \Vert v \Vert _V\) for all \(v \in V\) with \(a_0\), \(a_1 \ge 0\).

  3. (iii)

    A is strongly monotone with constant \(m_A > 0\), i.e.,

    $$\begin{aligned}\langle A v_1 - A v_2, v_1 - v_2 \rangle _{V^* \times V} \ge m_A \Vert v_1 - v_2 \Vert _V^2 \end{aligned}$$

    for all \(v_1\), \(v_2 \in V\).

\(\underline{H(J)_1}{:}\,\, J :(0, T) \times Z \times X \rightarrow {\mathbb {R}}\) is such that H(J)(i)–(v) hold and

  1. (vi)

    \(J^0(t, \cdot , \cdot ; w) :Z \times X \rightarrow {\mathbb {R}}\) is upper semicontinuous on \(Z \times X\) for all \(w \in X\).

\(\underline{H(M,N)_1}{:}\,\, M\) and N satisfy \(H(M, N)\), and their Nemytskii operators

$$\begin{aligned} {{\mathcal {M}}} :{{\mathcal {W}}} \subset {{\mathcal {V}}} \rightarrow L^2(0, T;X) \ \ \text{ and } \ \ {{\mathcal {N}}} :{{\mathcal {W}}} \subset {{\mathcal {V}}} \rightarrow L^2(0, T; U) \end{aligned}$$

are compact.

\(\underline{(H_4)}{:}\,\, {R}\), \({R}_1\) and S satisfy \((H_2)\), and

  1. (i)

    \({R} :{{\mathcal {W}}} \subset {{\mathcal {V}}} \rightarrow L^2(0, T; Y)\) and \({S} :{{\mathcal {W}}} \subset {{\mathcal {V}}} \rightarrow L^2(0, T; Z)\) are compact.

  2. (ii)

    \({R}_1 :{{\mathcal {V}}} \rightarrow {{\mathcal {V}}^*}\) is weakly–weakly continuous.

  3. (iii)

    \((R0, R_10, S0)\) remains in a bounded subset of \(L^2(0, T; Y\times V^* \times Z)\).

Theorem 7

Under hypotheses \(H(A)_1\), \(H(J)_1\), \(H(\varphi )\), \(H(M,N)_1\), \((H_0)\), \((H_3)\), and \((H_4)\), if \(\{ f_n \} \subset {{\mathcal {V}}^*}\), \(f_n \rightarrow f\) weakly in \({{\mathcal {V}}^*}\), \(\{ w_0^n \} \subset V\), \(w_0^n \rightarrow w_0\) weakly in V, and \(\{ w_n \} \subset {{\mathcal {W}}}\), \(w \in {{\mathcal {W}}}\) are the unique solutions to Problem 4 corresponding to \(\{ (f_n, w_0^n) \}\) and \((f, w_0)\), respectively, then \(w_n \rightarrow w\) weakly in \({{\mathcal {W}}}\), as \(n \rightarrow \infty \).

Proof

The existence and uniqueness of a solution is a consequence of Theorem 5. We prove the continuous dependence result. Let \(\{ f_n \} \subset {{\mathcal {V}}^*}\), \(f_n \rightarrow f\) weakly in \({{\mathcal {V}}^*}\), \(\{ w_0^n \} \subset V\), \(w_0^n \rightarrow w_0\) weakly in V, and let \(\{ w_n \} \subset {{\mathcal {W}}}\) be the unique solutions to Problem 4 corresponding to \((f_n, w_0^n)\), \(n \in {\mathbb {N}}\). Then,

$$\begin{aligned}& w_n'(t) + A w_n(t) + (R_1w_n)(t) + M^* \xi _n(t) + N^* \eta _n(t) = f_n(t) \ \ \text{ a.e. } \ \ t \in (0, T), \end{aligned}$$
(4.14)
$$\begin{aligned}& \xi _n(t) \in \partial J(t, (Sw_n)(t), Mw_n(t)) \ \ \text{ a.e. } \ \ t \in (0, T), \end{aligned}$$
(4.15)
$$\begin{aligned}& \eta _n(t) \in \partial \varphi (t, (Rw_n)(t), Nw_n(t)) \ \ \text{ a.e. } \ \ t \in (0, T), \end{aligned}$$
(4.16)
$$\begin{aligned}& w_n(0) = w_0^n. \end{aligned}$$
(4.17)

By Proposition 6 combined with \((H_4)\)(iii) and weak convergences of \(\{ f_n \}\) and \(\{ w_0^n \}\), the sequence \(\{ w_n \}\) is uniformly bounded in \({{\mathcal {W}}}\). By the reflexivity of \({\mathcal W}\), we can find a subsequence, denoted in the same way, such that \(w_n \rightarrow w\) weakly in \({{\mathcal {W}}}\) with \(w \in {{\mathcal {W}}}\), as \(n \rightarrow \infty \). We will prove that w is the unique solution in \({{\mathcal {W}}}\) to Problem 4 corresponding to \((f, w_0)\).

Using an argument similar to [34, Lemma 13], from hypothesis \(H(A)_1\), we know that

$$\begin{aligned} {{\mathcal {A}}} w_n \rightarrow {{\mathcal {A}}} w \ \ \text{ weakly } \text{ in } \ {{\mathcal {V}}^*}, \ \text{ as } \ n \rightarrow \infty . \end{aligned}$$
(4.18)

By assumption \((H_4)\)(ii), it follows that

$$\begin{aligned} R_1 w_n \rightarrow R_1 w \ \ \text{ weakly } \text{ in } \ {{\mathcal {V}}^*}, \ \text{ as } \ n \rightarrow \infty . \end{aligned}$$
(4.19)

Further, we use \(H(M,N)_1\) to get \({{\mathcal {M}}} w_n \rightarrow {{\mathcal {M}}}w\) in \(L^2(0, T; X)\) and \({{\mathcal {N}}} w_n \rightarrow {{\mathcal {N}}}w\) in \(L^2(0, T; U)\), which, passing to a further subsequence, entails

$$\begin{aligned} M w_n(t) \rightarrow M w (t)\ \ \text{ in } \ X, \ \ \text{ and } \ \ N w_n(t) \rightarrow N w (t) \ \text{ in } \ U, \ \text{ for } \text{ a.e. } \ t \in (0,T). \end{aligned}$$
(4.20)

On the other hand, by \((H_4)\)(i), we have \(R w_n \rightarrow R w\) in \(L^2(0, T; Y)\) and \(S w_n \rightarrow S w\) in \(L^2(0, T; Z)\). Hence, again for a subsequence, it holds

$$\begin{aligned} (R w_n)(t) \rightarrow (R w)(t)\ \ \text{ in } \ Y, \ \ \text{ and } \ \ (Sw_n)(t) \rightarrow (Sw)(t) \ \text{ in } \ Z, \ \text{ for } \text{ a.e. } \ t \in (0,T). \end{aligned}$$
(4.21)

Next, we prove the following claims.

Claim 1. If \(J :(0, T) \times Z \times X \rightarrow {\mathbb {R}}\) satisfies H(J)(iii), (iv) and \(H(J)_1\)(vi), then the multivalued map

$$\begin{aligned} Z \times X \ni (z,x) \mapsto \partial J(t,z,x) \subset X^* \end{aligned}$$

is upper semicontinuous from \(Z \times X\) into \(X^*\) endowed with the weak topology with nonempty, closed and convex values, for a.e. \(t \in (0, T)\). Indeed, let us fix \(t \in (0, T) \setminus N_1\), \(m(N_1) = 0\). Having in mind [3, Proposition 4.1.4], it is sufficient to show that for any weakly closed subset D of \(X^*\), the weak inverse image \((\partial J)^{-}(D)\) of D under \(\partial J(t, \cdot , \cdot )\) is closed in the norm topology, where \((\partial J)^{-}(D)\) is defined by

$$\begin{aligned}(\partial J)^{-}(D)=\big \{\, (z,x)\in Z \times X \mid \partial J(t, z, x)\cap D\ne \emptyset \, \big \}. \end{aligned}$$

Let \(\{(z_n,x_n)\}\subset (\partial J)^{-}(D)\) be such that \((z_n,x_n)\rightarrow (z,x)\) in \(Z\times X\), as \(n\rightarrow \infty \). Then, there is \(\{\zeta _n\}\subset X^*\) such that \(\zeta _n\in \partial J(t, z_n, x_n)\cap D\) for each \(n\in {\mathbb {N}}\). Hypothesis H(J)(iv) implies that the sequence \(\{\zeta _n\}\) is bounded in \(X^*\). Hence, from the reflexivity of \(X^*\), without any loss of generality, we may assume that \(\zeta _n \rightarrow \zeta \) weakly in \(X^*\). Since D is weakly closed, we have \(\zeta \in D\), and by condition \(\zeta _n\in \partial J (t, z_n, x_n)\), we get

$$\begin{aligned}\langle \zeta _n, w \rangle _{X^*\times X} \le J^0(t,z_n,x_n;w) \ \ \text{ for } \text{ all }\ \ w \in X. \end{aligned}$$

Taking into account hypothesis \(H(J)_1\)(vi) and passing to the limit, we have

$$\begin{aligned}\langle \zeta , w \rangle _{X^*\times X} =\limsup _{n\rightarrow \infty } \langle \zeta _n, w \rangle _{X^*\times X} \le \limsup _{n\rightarrow \infty } J^0(t, z_n, x_n; w)\le J^0(t, z, x; w) \end{aligned}$$

for all \(w\in X\). Thus, \(\zeta \in \partial J(t, z, x)\), and finally, we get \(\zeta \in \partial J(t, z, x)\cap D\), i.e., \((z,x)\in (\partial J)^{-}(D)\). Hence, \((\partial J)^{-}(D)\) is closed in \(Z\times X\). The fact that the map \(\partial J\) has nonempty, closed and convex values follows from, e.g., [19, Theorem 3.23(iv)]. This completes the proof of the claim.

Claim 2. If \(\varphi :(0, T) \times Y \times U \rightarrow {\mathbb {R}}\) satisfies \(H(\varphi )\)(ii)–(iv), then the multivalued map

$$\begin{aligned} Y \times U \ni (y,u) \mapsto \partial \varphi (t,y,u) \subset U^* \end{aligned}$$

is upper semicontinuous from \(Y \times U\) into \(U^*\) endowed with the weak topology with nonempty, closed and convex values for a.e. \(t \in (0, T)\). This fact can be proved similarly to Claim 1. The map \(\partial \varphi \) has nonempty, closed and convex values since \(\varphi \) has finite values, see [4, Proposition 6.3.10]. Let \(t \in (0, T) \setminus N_1\) with \(m(N_1) = 0\), let \(E \subset U^*\) be weakly closed, and let

$$\begin{aligned}(\partial \varphi )^{-}(E) =\big \{\, (y,u)\in Y \times U \mid \partial \varphi (t, y, u)\cap E\ne \emptyset \, \big \}. \end{aligned}$$

Let \(\{(y_n,u_n)\}\subset (\partial \varphi )^{-}(E)\) and \((y_n,u_n)\rightarrow (y,u)\) in \(Y\times U\), as \(n\rightarrow \infty \). We can find \(\{\rho _n\}\subset U^*\) such that \(\rho _n\in \partial \varphi (t, y_n, u_n)\cap E\) for each \(n\in {\mathbb {N}}\). It is clear from \(H(\varphi )\)(iv) that the sequence \(\{\rho _n\}\) is bounded in \(U^*\), which by the reflexivity of \(U^*\) entails, at least for a subsequence, \(\rho _n \rightarrow \rho \) weakly in \(U^*\). Obviously, \(\rho \in E\) and

$$\begin{aligned}\langle \rho _n, w \rangle _{U^*\times U} \le \varphi (t,y_n,w)- \varphi (t,y_n, u_n) \ \ \text{ for } \text{ all }\ \ w \in U. \end{aligned}$$

Next, it follows from [3, Proposition 5.2.10] that \(\varphi (t, y, \cdot )\) is locally Lipschitz. Hence, using \(H(\varphi )\)(ii)–(iv), by [19, Lemma 3.43], we infer that \(\varphi (t, \cdot , \cdot )\) is continuous on \(Y \times U\). This allows us to pass to the limit below:

$$\begin{aligned}&\langle \rho , w -u \rangle _{U^*\times U} =\limsup _{n\rightarrow \infty } \langle \rho _n, w - u_n\rangle _{U^*\times U} \\&\quad \le \lim _{n\rightarrow \infty } \left( \varphi (t,y_n,w)- \varphi (t,y_n, u_n) \right) = \varphi (t,y,w)- \varphi (t,y,u) \end{aligned}$$

for all \(w\in U\). This means that \(\rho \in \partial \varphi (t, y, u) \cap E\) and finally \((y, u) \in (\partial \varphi )^{-}(E)\). Hence, the set \((\partial \varphi )^{-}(E)\) is closed in \(Y \times U\), which by [3, Proposition 4.1.4] implies the desired upper semicontinuity of \(\partial \varphi (t,\cdot , \cdot )\).

We now pass to the limits in (4.15) and (4.16). In both cases, we apply a convergence theorem of Aubin and Cellina [1], in a version provided in [16, Proposition 2] or in [27, Lemma 2.6]. Exploiting hypotheses H(J)(iv) and \(H(\varphi )\)(iv), the sequences \(\{ \xi _n \}\) and \(\{ \eta _n \}\) are uniformly bounded in \(L^2(0,T;X^*)\) and \(L^2(0,T; U^*)\), respectively. Hence, again passing to a subsequence if necessary, we may assume that

$$\begin{aligned} \xi _n \rightarrow \xi \ \ \text{ weakly } \text{ in } \ L^2(0,T;X^*) \ \ \text{ and } \ \ \eta _n \rightarrow \eta \ \text{ weakly } \text{ in } \ L^2(0,T;U^*) \end{aligned}$$
(4.22)

with \((\xi , \eta ) \in L^2(0,T;X^* \times U^*)\).

From convergences (4.20)–(4.22) and the properties of \(\partial J\) established in Claim 1, we apply the aforementioned convergence theorem to deduce

$$\begin{aligned} \xi (t) \in \partial J(t, (Sw)(t), Mw(t)) \ \ \text{ a.e. } \ \ t \in (0, T). \end{aligned}$$
(4.23)

In a similar way, by (4.20), (4.21), (4.22) and Claim 2, we are able to apply the same convergence theorem to get

$$\begin{aligned} \eta (t) \in \partial \varphi (t, (Rw)(t), Nw(t)) \ \ \text{ a.e. } \ \ t \in (0, T). \end{aligned}$$
(4.24)

On the other hand, it is immediate to see that

$$\begin{aligned} {{\mathcal {M}}}^* \xi _n \rightarrow {{\mathcal {M}}}^* \xi \ \ \text{ and } \ \ {{\mathcal {N}}}^* \eta _n \rightarrow {{\mathcal {N}}}^* \eta \ \ \text{ weakly } \text{ in } \ \ {{\mathcal {V}}^*}. \end{aligned}$$
(4.25)

Since the map

$$\begin{aligned} {{\mathcal {W}}} \ni w \mapsto w(0) \in H \end{aligned}$$

is linear and continuous, from the convergence \(w_n \rightarrow w\) weakly in \({{\mathcal {W}}}\), we have \(w_n(0) \rightarrow w(0)\) weakly in H. Passing to the weak limit in H in (4.17), we obtain \(w(0) = w_0\). Applying the convergences \(w_n' \rightarrow w'\) weakly in \({{\mathcal {V}}^*}\), (4.18), (4.19), (4.25), and \(f_n \rightarrow f\) weakly in \({{\mathcal {V}}^*}\), we pass to the limit in \(w_n' + {{\mathcal {A}}} w_n + R_1 w_n + {{\mathcal {M}}}^* \xi _n + {\mathcal N}^* \eta _n = f_n\) in \({{\mathcal {V}}^*}\). Hence,

$$\begin{aligned} w' + {{\mathcal {A}}} w + R_1 w + {{\mathcal {M}}}^* \xi + {{\mathcal {N}}}^* \eta = f \ \ \text{ in } \ \ {{\mathcal {V}}^*}. \end{aligned}$$

The latter combined with (4.23), (4.24) and \(w(0) = w_0\) implies that \(w \in {{\mathcal {W}}}\) is a solution to Problem 4 corresponding to \((f, w_0)\). Since the solution is unique, we conclude that the whole sequence \(\{ w_n \}\) converges weakly in \({{\mathcal {W}}}\) to w. This completes the proof of the theorem. \(\square \)

5 Application to a frictional contact problem

In this section, we provide a new continuous dependence result for a dynamic viscoelastic contact problem with friction which, in its weak form, leads to a history-dependent evolution inclusion of the type analyzed in Sect. 4. Note that an existence and uniqueness result for this problem was obtained in [8] under a more restrictive smallness condition, while the continuous dependence in weak topologies for this problem has not been studied before.

We briefly recall the necessary notation and refer to [8, 19, 30] for a detailed explanation and a discussion of the mechanical interpretation. Let \(\Omega \subset {\mathbb {R}}^d\), \(d=2\), 3, be a regular domain occupied in its reference configuration by a viscoelastic body. Its boundary \(\Gamma \) consists of three disjoint measurable parts \(\Gamma _D\), \(\Gamma _N\) and \(\Gamma _C\), such that \(m(\Gamma _D) > 0\). The body is clamped on \(\Gamma _D\) (the displacement field vanishes there), surface tractions act on \(\Gamma _N\), and \(\Gamma _C\) is a contact surface. All indices i, j, k, l run between 1 and d and, unless stated otherwise, the summation convention over repeated indices is applied. We use \(\varvec{u}=(u_i)\), \(\varvec{\sigma }=(\sigma _{ij})\) and \(\varvec{\varepsilon }(\varvec{u})=(\varepsilon _{ij}(\varvec{u}))\) to denote the displacement vector, the stress tensor and the linearized strain tensor, respectively. The latter is defined by

$$\begin{aligned} \varepsilon _{ij}(\varvec{u}) = \frac{1}{2}\, (u_{i,j} + u_{j,i}) \end{aligned}$$

where \(u_{i,j}=\partial u_i/\partial x_j\). For a vector field, we use the notation \(v_\nu \) and \(\varvec{v}_\tau \) for the normal and tangential components of \(\varvec{v}\) on \(\partial \Omega \) given by \(v_\nu =\varvec{v}\cdot \varvec{\nu }\) and \(\varvec{v}_\tau =\varvec{v}-v_\nu \varvec{\nu }\), where \({\varvec{\nu }}\) stands for the outward unit normal on the boundary. The normal and tangential components of the stress field \(\varvec{\sigma }\) on the boundary are defined by \(\sigma _\nu =(\varvec{\sigma }\varvec{\nu })\cdot \varvec{\nu }\) and \(\varvec{\sigma }_\tau =\varvec{\sigma }\varvec{\nu }-\sigma _\nu \varvec{\nu }\), respectively. The symbol \({\mathbb {S}}^{d}\) stands for the space of symmetric matrices of order d, and the canonical inner products on \({\mathbb {R}}^d\) and \({\mathbb {S}}^{d}\) are given by \(\varvec{u}\cdot \varvec{v}= u_iv_i\) for all \(\varvec{u}=(u_i)\), \(\varvec{v}=(v_i)\in {\mathbb {R}}^d\), and \(\varvec{\sigma }\cdot \varvec{\tau }=\sigma _{ij}\tau _{ij}\) for all \(\varvec{\sigma }=(\sigma _{ij})\), \(\varvec{\tau }=(\tau _{ij}) \in {\mathbb {S}}^{d}\), respectively.
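The normal–tangential decomposition \(v_\nu =\varvec{v}\cdot \varvec{\nu }\), \(\varvec{v}_\tau =\varvec{v}-v_\nu \varvec{\nu }\) can be sketched in coordinates as follows (a pointwise, finite-dimensional illustration; names are ours):

```python
import random

def decompose(v, nu):
    # normal and tangential parts of v with respect to a unit vector nu:
    # v_nu = v . nu,  v_tau = v - v_nu * nu
    v_nu = sum(vi * ni for vi, ni in zip(v, nu))
    v_tau = [vi - v_nu * ni for vi, ni in zip(v, nu)]
    return v_nu, v_tau

random.seed(2)
nu = [0.0, 0.0, 1.0]                        # a unit outward normal (example)
v = [random.uniform(-1, 1) for _ in range(3)]
v_nu, v_tau = decompose(v, nu)
# v_tau is orthogonal to nu and v = v_nu*nu + v_tau
assert abs(sum(t * n for t, n in zip(v_tau, nu))) < 1e-12
assert all(abs(vi - (v_nu * ni + ti)) < 1e-12 for vi, ni, ti in zip(v, nu, v_tau))
```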

We are interested in the evolution process of the mechanical state of the body on a finite time interval. The classical formulation of the contact problem reads as follows.

Problem 8

Find a displacement field \(\varvec{u}:\Omega \times (0, T) \rightarrow {\mathbb {R}}^d\) and a stress field \(\varvec{\sigma }:\Omega \times (0, T) \rightarrow {\mathbb {S}}^d\) such that for all \(t\in (0,T)\),

$$\begin{aligned} \varvec{\sigma }(t)&={{\mathscr {A}}}\varvec{\varepsilon }({\varvec{u}}'(t)) +{{\mathscr {B}}}\varvec{\varepsilon }({\varvec{u}}(t))+\int _0^t{\mathscr {C}}(t-s)\varvec{\varepsilon }({\varvec{u}}'(s))\,\mathrm{d}s&\mathrm{in}\,\,&\Omega , \end{aligned}$$
(5.1)
$$\begin{aligned} \rho \, {\varvec{u}}''(t)&=\mathrm{Div}\,\varvec{\sigma }(t)+\varvec{f}_0(t)&\mathrm{in}\,\,&\Omega , \end{aligned}$$
(5.2)
$$\begin{aligned} \varvec{u}(t)&=\varvec{0}&\mathrm{on}\,\,&\Gamma _D,\end{aligned}$$
(5.3)
$$\begin{aligned} \varvec{\sigma }(t)\varvec{\nu }&=\varvec{f}_2(t)&\mathrm{on}\,\,&\Gamma _N, \end{aligned}$$
(5.4)
$$\begin{aligned} -\sigma _\nu (t)&\in k ( u_\nu (t))\partial j_\nu ({u}_\nu '(t))&\mathrm{on}\,\,&\Gamma _C, \end{aligned}$$
(5.5)
$$\begin{aligned} \Vert \varvec{\sigma }_\tau (t)\Vert&\le F_b\Big (\int _0^t\Vert \varvec{u}_\tau (s)\Vert \,\mathrm{d}s\Big ),\quad&\nonumber \\ -\varvec{\sigma }_\tau (t)&=F_b\Big (\int _0^t\Vert \varvec{u}_\tau (s)\Vert \,\mathrm{d}s\Big ) \frac{{\varvec{u}}_\tau '(t)}{\Vert {\varvec{u}}_\tau '(t)\Vert }\ \ \mathrm{if}\ {\varvec{u}}_\tau '(t)\ne \varvec{0}&\mathrm{on}\,\,&\Gamma _C, \end{aligned}$$
(5.6)

and

$$\begin{aligned} \varvec{u}(0)=\varvec{u}_0,\quad {\varvec{u}}'(0)=\varvec{w}_0\quad \mathrm{in}\ \Omega . \end{aligned}$$
(5.7)

Now, we recall the weak formulation of Problem 8. To this end, we need the classical Hilbert spaces

$$\begin{aligned}V=\left\{ \varvec{v}\in H^1(\Omega ;{\mathbb {R}}^d) \mid \varvec{v}=\varvec{0}\ \text{ on }\ \Gamma _D\right\} , \quad H=L^2(\Omega ;{\mathbb {R}}^d), \quad {{\mathcal {H}}}= L^2(\Omega ;{\mathbb {S}}^{d}) \end{aligned}$$

with their standard inner products and norms. The symbol \(\Vert \gamma \Vert \) represents the norm of the trace operator \(\gamma :V \rightarrow L^2(\Gamma ; {\mathbb {R}}^d)\). For \(\varvec{v}\in H^1(\Omega ; {\mathbb {R}}^d)\), we use the same symbol \(\varvec{v}\) for the trace of \(\varvec{v}\) on \(\Gamma \) and we use the notation \(v_\nu \) and \(\varvec{v}_\tau \) for its normal and tangential traces.

For the weak formulation of the problem and further discussion, we need the following hypotheses.

\(\underline{H({{\mathscr {A}}})}{:}\,\, {{\mathscr {A}}} :\Omega \times {\mathbb {S}}^d \rightarrow {\mathbb {S}}^d\) is such that

  1. (i)

    \({{\mathscr {A}}}(\varvec{x},\varvec{\varepsilon }) =a(\varvec{x})\varvec{\varepsilon }\) for all \(\varvec{\varepsilon }\in {\mathbb {S}}^d\), a.e. \(\varvec{x}\in \Omega \).

  2. (ii)

    \(a(\varvec{x}) = \{ a_{ijkl}(\varvec{x})\}\) with \(a_{ijkl} = a_{jikl} = a_{lkij} \in L^\infty (\Omega )\), \(i,j,k,l=1,\ldots ,d\).

  3. (iii)

    \(a_{ijkl}(\varvec{x}) \varepsilon _{ij} \varepsilon _{kl} \ge \alpha \Vert \varvec{\varepsilon }\Vert ^2\) for all \(\varvec{\varepsilon }= (\varepsilon _{ij}) \in {\mathbb {S}}^d\), a.e. \(\varvec{x}\in \Omega \) with \(\alpha > 0\).

\(\underline{H({{\mathscr {B}}})}{:}\,\, {{\mathscr {B}}} :\Omega \times {\mathbb {S}}^d \rightarrow {\mathbb {S}}^d\) is such that

  1. (i)

    \({{\mathscr {B}}}(\varvec{x},\varvec{\varepsilon }) =b(\varvec{x})\varvec{\varepsilon }\) for all \(\varvec{\varepsilon }\in {\mathbb {S}}^d\), a.e. \(\varvec{x}\in \Omega \).

  2. (ii)

    \(b(\varvec{x}) = \{ b_{ijkl}(\varvec{x})\}\) with \(b_{ijkl} = b_{jikl} = b_{lkij} \in L^\infty (\Omega )\), \(i,j,k,l=1,\ldots ,d\).

  3. (iii)

    \(b_{ijkl}(\varvec{x}) \varepsilon _{ij} \varepsilon _{kl} \ge 0\) for all \(\varvec{\varepsilon }= (\varepsilon _{ij}) \in {\mathbb {S}}^d\), a.e. \(\varvec{x}\in \Omega \).

\(\underline{H({{\mathscr {C}}})}{:}\,\, {{\mathscr {C}}} :Q \times {\mathbb {S}}^d \rightarrow {\mathbb {S}}^d\) is such that

  1. (i)

    \({{\mathscr {C}}}(\varvec{x},t,\varvec{\varepsilon }) =c(\varvec{x},t)\varvec{\varepsilon }\) for all \(\varvec{\varepsilon }\in {\mathbb {S}}^d\), a.e. \((\varvec{x},t) \in Q\).

  2. (ii)

    \(c(\varvec{x},t) = \{ c_{ijkl}(\varvec{x},t)\}\) with \(c_{ijkl} = c_{jikl} = c_{lkij} \in L^\infty (Q)\), \(i,j,k,l=1,\ldots ,d\).

For the potential function \(j_\nu \), we assume

\(\underline{H(j_\nu )}{:}\,\, j_\nu :\Gamma _C \times {\mathbb {R}}\rightarrow {\mathbb {R}}\) is such that

  1. (i)

    \(j_\nu (\cdot , r)\) is measurable on \(\Gamma _C\) for all \(r \in {\mathbb {R}}\) and there is \({\bar{e}} \in L^2(\Gamma _C)\) such that \(j_\nu (\cdot , {\bar{e}} (\cdot )) \in L^1(\Gamma _C)\).

  2. (ii)

    \(j_\nu (\varvec{x}, \cdot )\) is locally Lipschitz on \({\mathbb {R}}\) for a.e. \(\varvec{x}\in \Gamma _C\).

  3. (iii)

    \(| \partial j_\nu (\varvec{x}, r) | \le {\bar{c}}_0\) for all \(r \in {\mathbb {R}}\), a.e. \(\varvec{x}\in \Gamma _C\) with \({\bar{c}}_0\ge 0\).

  4. (iv)

    \((\partial j_\nu (\varvec{x}, r_1) - \partial j_\nu (\varvec{x}, r_2))(r_1 - r_2) \ge -{{\bar{\beta }}} \, |r_1 - r_2|^2\) for all \(r_1\), \(r_2 \in {\mathbb {R}}\), a.e. \(\varvec{x}\in \Gamma _C\) with \({{\bar{\beta }}}\ge 0\).

  5. (v)

    \(j_\nu (\varvec{x}, \cdot )\) or \(-j_\nu (\varvec{x}, \cdot )\) is regular for a.e. \(\varvec{x}\in \Gamma _C\).

For the damper coefficient k and the friction bound \(F_b\), we assume

\(\underline{H(k)}{:}\,\, k :\Gamma _C \times {\mathbb {R}}\rightarrow {\mathbb {R}}_+\) is such that

  1. (i)

    \(k(\cdot ,r)\) is measurable on \(\Gamma _C\) for all \(r\in {\mathbb {R}}\).

  2. (ii)

    there are \(k_1\), \(k_2\) such that \(0 < k_1 \le k(\varvec{x}, r) \le k_2\) for all \(r \in {\mathbb {R}}\), a.e. \(\varvec{x}\in \Gamma _C\).

  3. (iii)

    there is \(L_{k}>0\) such that \(|k(\varvec{x},r_1)-k(\varvec{x},r_2)|\le L_{k}|r_1-r_2|\) for all \(r_1\), \(r_2\in {{\mathbb {R}}}\), a.e. \(\varvec{x}\in \Gamma _C\).

\(\underline{H(F_b)}{:}\,\, F_b :\Gamma _C \times {\mathbb {R}}\rightarrow {\mathbb {R}}_+\) is such that

  1. (i)

    \(F_b(\cdot ,r)\) is measurable on \(\Gamma _C\) for all \(r\in {\mathbb {R}}\).

  2. (ii)

    there is \(L_{F_b}>0\) such that \(|F_b(\varvec{x},r_1)-F_b(\varvec{x},r_2)|\le L_{F_b}|r_1-r_2|\) for all \(r_1\), \(r_2\in {{\mathbb {R}}}\), a.e. \(\varvec{x}\in \Gamma _C\).

  3. (iii)

    \(F_b(\cdot , 0) \in L^2(\Gamma _C)\).

\(\underline{H(f_0)}:\)    the densities of body forces, surface tractions and the initial data satisfy

$$\begin{aligned} \varvec{f}_0 \in L^2(0,T;L^2(\Omega ; {\mathbb {R}}^d)), \quad \varvec{f}_2\in L^2(0,T;L^2(\Gamma _N; {\mathbb {R}}^d)), \quad \varvec{u}_0, \, \varvec{w}_0 \in V. \end{aligned}$$
(5.8)

Finally, we define \(\varvec{f}:(0, T) \rightarrow V^*\) by

$$\begin{aligned} \langle \varvec{f}(t), \varvec{v}\rangle _{V^*\times V} =\langle \varvec{f}_0(t),\varvec{v}\rangle _{H} + \langle \varvec{f}_2(t),\gamma \varvec{v}\rangle _{L^2(\Gamma _N;{\mathbb {R}}^d)} \end{aligned}$$
(5.9)

for all \(\varvec{v}\in V\) and a.e. \(t \in (0, T)\). Under the above notation, we obtain the following weak formulation of Problem 8 in terms of the displacement.

Problem 9

Find \(\varvec{u}:(0, T) \rightarrow V\) such that for all \(\varvec{v}\in V\), a.e. \(t \in (0, T)\),

$$\begin{aligned}&\int _\Omega {\varvec{u}}''(t)\cdot (\varvec{v}-{\varvec{u}}'(t))\,\mathrm{d}x+({\mathscr {A}}\varvec{\varepsilon }({\varvec{u}}'(t)), \varvec{\varepsilon }(\varvec{v}-{\varvec{u}}'(t)))_{{\mathcal {H}}} \\&\qquad + ({\mathscr {B}}\varvec{\varepsilon }({\varvec{u}}(t)),\varvec{\varepsilon }(\varvec{v}-{\varvec{u}}'(t)))_{{\mathcal {H}}} +\Big (\int _0^t {{\mathscr {C}}} (t-s)\varvec{\varepsilon }({\varvec{u}}'(s))\,\mathrm{d}s, \varvec{\varepsilon }(\varvec{v}-{\varvec{u}}'(t))\Big )_{{\mathcal {H}}} \\&\qquad +\int _{\Gamma _C}F_b\Big (\int _0^t\Vert \varvec{u}_\tau (s)\Vert \,\mathrm{d}s \Big ) \, \big (\Vert \varvec{v}_\tau \Vert -\Vert \varvec{u}_\tau '(t)\Vert \big ) \, \mathrm{d}\Gamma \\&\qquad +\int _{\Gamma _C} k( u_\nu (t))\, j_\nu ^0 ({u}_\nu '(t); v_\nu -{u}_\nu '(t))\, \mathrm{d}\Gamma \ge \langle \varvec{f}(t),\varvec{v}-{\varvec{u}}'(t)\rangle _{V^*\times V}, \end{aligned}$$

and

$$\begin{aligned} \varvec{u}(0)=\varvec{u}_0,\quad {\varvec{u}}'(0)=\varvec{w}_0. \end{aligned}$$

Let \(\varvec{w}= \varvec{u}'\). Then

$$\begin{aligned} \varvec{u}(t) = \int _0^t \varvec{w}(s) \, \mathrm{d}s + \varvec{u}_0, \end{aligned}$$
(5.10)

and subsequently, \(u_\nu (t) = \int _0^t w_\nu (s) \, \mathrm{d}s + u_{0\nu }\) and \(\varvec{u}_\tau (t) = \int _0^t \varvec{w}_\tau (s) \, \mathrm{d}s + \varvec{u}_{0\tau }\) for \(t \in (0,T)\). Using these relations, Problem 9 can be equivalently formulated in terms of the velocity as follows.
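Relation (5.10) is a simple time quadrature; a sketch in a time-discrete setting using the trapezoidal rule (our discretization choice, not prescribed by the paper):

```python
def displacement(w, u0, h):
    # u(t_n) = int_0^{t_n} w(s) ds + u0, approximated by the trapezoidal rule
    # on a uniform grid with step h; w is the list of velocity values w(t_n)
    u, acc = [u0], 0.0
    for n in range(1, len(w)):
        acc += h * (w[n - 1] + w[n]) / 2
        u.append(u0 + acc)
    return u

h, N = 0.01, 101
w = [2 * n * h for n in range(N)]          # w(t) = 2t, so u(t) = t^2 + u0
u = displacement(w, u0=1.0, h=h)
t = (N - 1) * h
assert abs(u[-1] - (t**2 + 1.0)) < 1e-9    # trapezoidal rule is exact for linear w
```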

Problem 10

Find

\(\varvec{w}:(0, T) \rightarrow V\) such that for all \(\varvec{v}\in V\), a.e. \(t \in (0, T)\),

$$\begin{aligned}&\int _\Omega {\varvec{w}}'(t)\cdot (\varvec{v}-{\varvec{w}}(t))\,\mathrm{d}x +({{\mathscr {A}}}\varvec{\varepsilon }({\varvec{w}}(t)), \varvec{\varepsilon }(\varvec{v}-{\varvec{w}}(t)))_{{\mathcal {H}}} \\&\qquad + \Big ({{\mathscr {B}}}\varvec{\varepsilon }\Big (\int _0^t \varvec{w}(s) \, \mathrm{d}s + \varvec{u}_0 \Big ), \varvec{\varepsilon }(\varvec{v}-{\varvec{w}}(t))\Big )_{{\mathcal {H}}} +\Big (\int _0^t{{\mathscr {C}}}(t-s)\varvec{\varepsilon }({\varvec{w}}(s))\,\mathrm{d}s, \varvec{\varepsilon }(\varvec{v}-{\varvec{w}}(t))\Big )_{{\mathcal {H}}} \\&\qquad +\int _{\Gamma _C}F_b\Big (\int _0^t \left\| \int _0^s \varvec{w}_\tau (r) \, \mathrm{d}r + \varvec{u}_{0\tau } \right\| \,\mathrm{d}s \Big ) \, \big (\Vert \varvec{v}_\tau \Vert -\Vert \varvec{w}_\tau (t)\Vert \big ) \, \mathrm{d}\Gamma \\&\qquad +\int _{\Gamma _C} k\Big (\int _0^t w_\nu (s) \, \mathrm{d}s + u_{0\nu }\Big ) \, j_\nu ^0 ({w}_\nu (t); v_\nu -{w}_\nu (t))\, \mathrm{d}\Gamma \ge \langle \varvec{f}(t),\varvec{v}-{\varvec{w}}(t)\rangle _{V^*\times V}, \end{aligned}$$

and

$$\begin{aligned}\varvec{w}(0)=\varvec{w}_0. \end{aligned}$$

Next, let \(X = Y = Z = L^2(\Gamma _C)\), \(U = L^2(\Gamma _C;{\mathbb {R}}^d)\), and introduce operators \(A :V \rightarrow V^*\), \({R} :{\mathcal V} \rightarrow L^2(0, T; Y)\), \({R}_1 :{{\mathcal {V}}} \rightarrow {{\mathcal {V}}^*}\), \({S} :{{\mathcal {V}}} \rightarrow L^2(0, T; Z)\), \(M :V \rightarrow X\), and \(N :V \rightarrow U\) as follows:

$$\begin{aligned}& \langle A\varvec{w},\varvec{v}\rangle _{V^*\times V} = ({{\mathscr {A}}} \varvec{\varepsilon }(\varvec{w}),\varvec{\varepsilon }(\varvec{v}))_{{\mathcal {H}}} \quad \text{ for } \text{ all }\ \ \varvec{w},\varvec{v}\in V, \end{aligned}$$
(5.11)
$$\begin{aligned}& (R\varvec{w})(t) = \int _0^t\Big \Vert \int _0^s \varvec{w}_\tau (r)\,\mathrm{d}r+\varvec{u}_{0\tau }\Big \Vert \,\mathrm{d}s \quad \text{ for } \text{ all } \ \varvec{w}\in {{\mathcal {V}}},\ t \in (0,T), \end{aligned}$$
(5.12)
$$\begin{aligned}& \langle (R_1 \varvec{w})(t),\varvec{v}\rangle _{V^*\times V} = \Big ( {{\mathscr {B}}}\Big (\int _0^t\varvec{\varepsilon }({\varvec{w}}(s))\,\mathrm{d}s+\varvec{\varepsilon }(\varvec{u}_0)\Big ), \varvec{\varepsilon }(\varvec{v}) \Big )_{{\mathcal {H}}} \nonumber \\&\qquad +\Big (\int _0^t {{\mathscr {C}}} (t-s)\varvec{\varepsilon }({\varvec{w}}(s))\,\mathrm{d}s, \varvec{\varepsilon }(\varvec{v})\Big )_{{\mathcal {H}}} \quad \text{ for } \text{ all } \ \varvec{w}\in {{\mathcal {V}}},\ \varvec{v}\in V, \ t \in (0, T), \end{aligned}$$
(5.13)
$$\begin{aligned}& (S\varvec{w})(t) = \int _0^t w_\nu (s)\,\mathrm{d}s+u_{0\nu } \quad \text{ for } \text{ all } \ \varvec{w}\in {{\mathcal {V}}},\ t \in (0,T), \end{aligned}$$
(5.14)
$$\begin{aligned}& M \varvec{v}= v_\nu , \quad N\varvec{v}= \varvec{v}_{\tau } \ \ \text{ for } \text{ all } \ \ \varvec{v}\in V. \end{aligned}$$
(5.15)
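For intuition, the history-dependent operators (5.12) and (5.14) can be approximated on a uniform time grid by cumulative sums. The following sketch is an illustration only, not part of the analysis: the boundary traces are replaced by finite-dimensional arrays, the integrals by left Riemann sums, and the names `history_R`, `history_S` are ours.

```python
import numpy as np

def history_S(w_nu, u0_nu, dt):
    # discrete analogue of (5.14): (S w)(t) = ∫_0^t w_ν(s) ds + u_{0ν}
    return u0_nu + dt * np.cumsum(w_nu)

def history_R(w_tau, u0_tau, dt):
    # discrete analogue of (5.12): (R w)(t) = ∫_0^t ‖ ∫_0^s w_τ(r) dr + u_{0τ} ‖ ds
    accumulated = u0_tau + dt * np.cumsum(w_tau, axis=0)   # inner integral, per time step
    return dt * np.cumsum(np.linalg.norm(accumulated, axis=1))
```

For a constant tangential velocity \(\varvec{w}_\tau \equiv (1,0)\) and \(\varvec{u}_{0\tau } = 0\), the discrete \((R\varvec{w})(t)\) approaches \(t^2/2\), the closed-form value of (5.12) in this case.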

Further, consider the boundary potentials \(J :(0, T) \times Z \times X \rightarrow {\mathbb {R}}\) and \(\varphi :(0, T) \times Y \times U \rightarrow {\mathbb {R}}\) defined by

$$\begin{aligned}& J(t,z,v)=\int _{\Gamma _C} k(z)\, j_\nu (v) \,\mathrm{d}\Gamma \quad \text{ for } \text{ all } \ z\in Z,\ v\in X,\ t\in (0, T), \end{aligned}$$
(5.16)
$$\begin{aligned}& \varphi (t,y,\varvec{v})=\int _{\Gamma _C} F_b(y) \, \Vert \varvec{v}\Vert \,\mathrm{d}\Gamma \quad \text{ for } \text{ all }\ y\in Y,\ \varvec{v}\in U,\ t \in (0,T). \end{aligned}$$
(5.17)

With the notation above, we formulate the following history-dependent evolution inclusion associated with Problem 10.

Problem 11

Find \(\varvec{w}\in {{\mathcal {W}}}\) such that

$$\begin{aligned}{\left\{ \begin{array}{ll} \displaystyle \varvec{w}'(t) + A \varvec{w}(t) + (R_1 \varvec{w})(t) + M^* \partial J (t, (S\varvec{w})(t), M\varvec{w}(t)) \\ \quad + N^* \partial \varphi (t, (R\varvec{w})(t), N\varvec{w}(t)) \ni \varvec{f}(t) \ \ \text{ a.e. } \ t \in (0,T), \\ \varvec{w}(0) = \varvec{w}_0 . \end{array}\right. } \end{aligned}$$

The following result concerns the well-posedness of Problem 11.

Theorem 12

Assume hypotheses \(H({\mathscr {A}})\), \(H({\mathscr {B}})\), \(H({\mathscr {C}})\), \(H(j_\nu )\), H(k), \(H(F_b)\), and \(H(f_0)\). If

$$\begin{aligned} \alpha > {{\bar{\beta }}} \, k_2 \Vert \gamma \Vert ^2, \end{aligned}$$
(5.18)

then Problem 11 has a unique solution \(\varvec{w}\in {{\mathcal {W}}}\). Moreover, the map \({{\mathcal {V}}}^* \times V \ni (\varvec{f}, \varvec{w}_0) \mapsto \varvec{w}\in {{\mathcal {W}}}\) is continuous in weak topologies.

Proof

We will apply Theorem 7 with a suitable choice of spaces, operators and functionals introduced above. First, it is clear that under \(H({\mathscr {A}})\) the operator A defined by (5.11) satisfies \(A \in {{\mathcal {L}}}(V, V^*)\), and \(H(A)_1\) with \(a_0 = 0\), \(a_1 = \Vert A \Vert _{{{\mathcal {L}}}(V,V^*)}\) and \(m_A = \alpha \).

Now, we verify properties in \(H(J)_1\) for the functional J given by (5.16). Conditions H(J)(i)–(iii) follow from \(H(j_\nu )\)(i)–(iii), H(k)(i), (ii) and [19, Theorem 3.47(i)–(iii)]. From [19, Proposition 3.37(ii), Theorem 3.47(v), (vi)], we have

$$\begin{aligned} \partial J (t,z, v) \subset \int _{\Gamma _C} k(z) \partial j_\nu (v) \, \mathrm{d}\Gamma \ \ \text{ for } \text{ all } \ \ z \in Z, \ v \in X, \ \text{ a.e. } \ \ t \in (0, T). \end{aligned}$$

Hence, we obtain that H(J)(iv) holds with \(c_{0J}(t) = k_2 {\bar{c}}_0 \sqrt{|\Gamma _C|}\), and \(c_{1J} = c_{2J} = 0\). Moreover, by a similar argument as in [8, Theorem 13], we get

$$\begin{aligned}& J^0(t, z_1, v_1; v_2 - v_1) + J^0(t, z_2, v_2; v_1 - v_2) \\&\le m_{J} \Vert v_1 - v_2 \Vert ^2_X + {\bar{m}}_{J} \Vert z_1 - z_2 \Vert _Z \Vert v_1 - v_2 \Vert _X \end{aligned}$$

for all \(z_1\), \(z_2 \in Z\), \(v_1\), \(v_2 \in X\), a.e. \(t \in (0, T)\), where \(m_{J} = {{\bar{\beta }}} k_2\) and \({\bar{m}}_{J} = {\bar{c}}_0 L_k\). Hence, H(J)(v) follows.

Next, we verify \(H(J)_1\)(vi). Let \(\{ z_n \} \subset Z\), \(\{ v_n \} \subset X\), \(z_n \rightarrow z\) in Z and \(v_n \rightarrow v\) in X. By passing to a subsequence, we may suppose that \(z_n(x) \rightarrow z(x)\) and \(v_n(x) \rightarrow v(x)\) for a.e. \(x \in \Gamma _C\). Then, we use properties of the generalized directional derivative and generalized gradient stated in [19, Proposition 3.23(ii), (iii), Theorem 3.47(iv)]. By \(H(j_\nu )\) and H(k), for all \(w \in X\) and a.e. \(t \in (0, T)\), we obtain

$$\begin{aligned}&J^0(t, z_n, v_n; w) \le \int _{\Gamma _C} k(z_n(x)) j_\nu ^0(v_n(x); w(x)) \, \mathrm{d}\Gamma \\&\quad \le L_k \, {\bar{c}}_0 \int _{\Gamma _C} |z_n (x) - z (x)| |w(x)| \, \mathrm{d}\Gamma + \int _{\Gamma _C} k(z(x)) j_\nu ^0(v_n(x); w(x)) \, \mathrm{d}\Gamma \end{aligned}$$

which entails

$$\begin{aligned}&\limsup J^0(t, z_n, v_n; w) \le \limsup \int _{\Gamma _C} k(z(x)) j_\nu ^0(v_n(x); w(x)) \, \mathrm{d}\Gamma \\&\quad \le \int _{\Gamma _C} k(z(x)) j_\nu ^0(v(x); w(x)) \, \mathrm{d}\Gamma = J^0(t, z, v; w). \end{aligned}$$

The last inequality follows from hypothesis \(H(j_\nu )\)(v) and [19, Theorem 3.47(vii)]. Hence, we deduce that \(J^0(t, \cdot , \cdot ; w) :Z \times X \rightarrow {\mathbb {R}}\) is upper semicontinuous for all \(w \in X\), a.e. \(t \in (0,T)\). Condition \(H(J)_1\)(vi) is verified.

Subsequently, by a modification of the reasoning used in [8, Theorem 13], we easily verify that \(\varphi \) satisfies conditions \(H(\varphi )\)(ii)–(iv) with \(c_{0\varphi }(t) = \sqrt{2} \Vert F_b(0) \Vert _Y\), \(c_{1\varphi } = \sqrt{2} L_{F_b}\) and \(c_{2\varphi } = 0\). Further, a direct calculation implies that \(H(\varphi )\)(v) holds with \(\beta _\varphi = L_{F_b}\). We conclude that condition \(H(\varphi )\) is satisfied.

By definition (5.15), it is clear that the operators M and N satisfy H(MN) and are compact. From [21, Theorem 2.18], we infer that the Nemytskii operators corresponding to M and N are compact too. Therefore, condition \(H(M,N)_1\) holds.

Next, by hypothesis \(H(f_0)\) and definition (5.8), we know that the regularity condition \((H_0)\) holds. Hypothesis \((H_3)\) is a consequence of the smallness condition (5.18).
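The smallness condition (5.18) is a scalar inequality on the data and can be checked directly; a minimal sketch follows (the function and argument names are ours).

```python
def smallness_5_18(alpha, beta_bar, k2, gamma_norm):
    # condition (5.18): α > β̄ k₂ ‖γ‖²
    return alpha > beta_bar * k2 * gamma_norm ** 2
```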

We now verify condition \((H_4)\). It follows from [33] that the operators R, \({R}_1\) and S given by (5.12)–(5.14), under hypotheses \(H({{\mathscr {B}}})\) and \(H({{\mathscr {C}}})\), are history-dependent with constants \(c_R = T \Vert \gamma \Vert \), \(c_{R_1} = \Vert {{\mathscr {B}}}\Vert + \Vert {{\mathscr {C}}}\Vert _{L^\infty (Q;{{\mathbb {S}}}^d)}\) and \(c_S = \Vert \gamma \Vert \). Hence, \((H_2)\) follows. Next, we prove the compactness of the operators R and S. We present the proof for R; the proof for S is analogous and therefore omitted. Let \(\varvec{v}_n \rightarrow \varvec{v}\) weakly in \({{\mathcal {W}}}\). From [21, Theorem 2.18], we know that \( \gamma \varvec{v}_n \rightarrow \gamma \varvec{v}\) in \(L^2(0, T; L^2(\Gamma _C;{\mathbb {R}}^d))\), where \(\gamma :{{\mathcal {V}}} \rightarrow L^2(0, T; L^2(\Gamma _C;{\mathbb {R}}^d))\) is the Nemytskii operator corresponding to the trace \(\gamma \) (denoted, for simplicity, in the same way). By Hölder’s inequality and a direct calculation, we have

$$\begin{aligned}&\Vert ({R}\varvec{v}_n)(t) - ({R}\varvec{v})(t)\Vert _Y \le T \int _0^t \Vert \varvec{v}_n(s) - \varvec{v}(s) \Vert _{L^2(\Gamma _C;{\mathbb {R}}^d)} \, \mathrm{d}s \\&\quad \le T \sqrt{T} \Vert \varvec{v}_n - \varvec{v}\Vert _{L^2(0,T; L^2(\Gamma _C;{\mathbb {R}}^d))} \end{aligned}$$

for a.e. \(t \in (0, T)\). Hence, \(({R}\varvec{v}_n)(t) \rightarrow ({R}\varvec{v})(t)\) in Y for a.e. \(t \in (0, T)\), and by the Lebesgue dominated convergence theorem, \({R}\varvec{v}_n \rightarrow {R}\varvec{v}\) in \(L^2(0, T; Y)\). Thus, the operator \({R} :{{\mathcal {W}}} \subset {{\mathcal {V}}} \rightarrow L^2(0, T; Y)\) is compact. It is also clear that (R0, S0) belongs to a bounded subset of \(L^2(0, T; Y\times Z)\).
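The final estimate above can also be observed numerically in a discretized model. The following sketch is an illustration under simplifying assumptions: the trace space \(L^2(\Gamma _C;{\mathbb {R}}^d)\) is replaced by scalars, \(\varvec{u}_{0\tau } = 0\), and the integrals are left Riemann sums; it checks the discrete analogue of \(\sup _t |({R}\varvec{v}_n)(t) - ({R}\varvec{v})(t)| \le T\sqrt{T}\, \Vert \varvec{v}_n - \varvec{v}\Vert \).

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 1.0, 400
dt = T / n

def R_disc(w):
    # scalar stand-in for (5.12): the norm on Gamma_C replaced by |.|
    return dt * np.cumsum(np.abs(dt * np.cumsum(w)))

v = rng.standard_normal(n)
v_n = v + 0.1 * rng.standard_normal(n)

lhs = np.max(np.abs(R_disc(v_n) - R_disc(v)))                # sup_t |(R v_n)(t) - (R v)(t)|
rhs = T * np.sqrt(T) * np.sqrt(dt * np.sum((v_n - v) ** 2))  # T sqrt(T) ||v_n - v||_{L^2}
assert lhs <= rhs
```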

To prove the continuity of \({R}_1\) in weak topologies, we denote

$$\begin{aligned}&\langle (R_{11} \varvec{w})(t),\varvec{v}\rangle _{V^*\times V} = \Big ( {\mathscr {B}}\Big (\int _0^t\varvec{\varepsilon }({\varvec{w}}(s))\,\mathrm{d}s+\varvec{\varepsilon }(\varvec{u}_0)\Big ), \varvec{\varepsilon }(\varvec{v}) \Big )_{{\mathcal {H}}}, \\&\langle (R_{12} \varvec{w})(t),\varvec{v}\rangle _{V^*\times V} = \Big (\int _0^t {{\mathscr {C}}} (t-s)\varvec{\varepsilon }({\varvec{w}}(s))\,\mathrm{d}s, \varvec{\varepsilon }(\varvec{v})\Big )_{{\mathcal {H}}} \end{aligned}$$

for all \(\varvec{w}\in {{\mathcal {V}}}\), \(\varvec{v}\in V\), \(t \in (0, T)\). Let \(\{ \varvec{v}_n \} \subset {{\mathcal {V}}}\) be such that \(\varvec{v}_n \rightarrow \varvec{v}\) weakly in \({{\mathcal {V}}}\). Then, for all \(\psi \in V^*\), all \(t \in [0, T]\), we have

$$\begin{aligned}&\left\langle \int _0^t \varvec{v}_n(s) \, \mathrm{d}s, \psi \right\rangle _{V^* \times V} = \int _0^t \langle \varvec{v}_n(s), \psi \rangle _{V^* \times V} \, \mathrm{d}s =\langle \varvec{v}_n, \psi \rangle _{{{\mathcal {V}}}^* \times {{\mathcal {V}}}} \\&\quad \rightarrow \langle \varvec{v}, \psi \rangle _{{{\mathcal {V}}}^* \times {{\mathcal {V}}}} = \int _0^t \langle \varvec{v}(s), \psi \rangle _{V^* \times V} \, \mathrm{d}s = \left\langle \int _0^t \varvec{v}(s) \, \mathrm{d}s, \psi \right\rangle _{V^* \times V}, \end{aligned}$$

that is,

$$\begin{aligned} \int _0^t \varvec{v}_n(s) \, \mathrm{d}s + \varvec{u}_0 \rightarrow \int _0^t \varvec{v}(s) \, \mathrm{d}s + \varvec{u}_0 \ \ \text{ weakly } \text{ in } \ \, V, \ \text{ for } \text{ all } \ t \in [0, T]. \end{aligned}$$
(5.19)
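A standard concrete example illustrating the mode of convergence in (5.19) (a numerical sketch, with \((0,1)\) in place of \((0,T)\) and scalar functions in place of V-valued ones): \(v_n(s) = \sin (ns)\) converges weakly to 0 in \(L^2(0,1)\) without converging strongly, yet its running integrals \(\int _0^t v_n(s)\,\mathrm{d}s = (1 - \cos nt)/n\) vanish uniformly in t.

```python
import numpy as np

s = np.linspace(0.0, 1.0, 20001)
for n in (10, 100, 1000):
    v_n = np.sin(n * s)  # v_n -> 0 weakly in L^2(0,1), but ||v_n|| does not -> 0
    # trapezoidal running integral of v_n; exact value is (1 - cos(n t)) / n
    running = np.concatenate(([0.0], np.cumsum(0.5 * (v_n[1:] + v_n[:-1]) * np.diff(s))))
    assert np.max(np.abs(running)) <= 2.0 / n + 1e-3
```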

Since \({{\mathscr {B}}}\) is linear and continuous, we deduce that \({R}_{11} \varvec{v}_n \rightarrow {R}_{11}\varvec{v}\) weakly in \({{\mathcal {V}}}^*\). Since \({R}_{12}\) is linear and continuous, it is weakly–weakly continuous, and therefore \({R}_{12} \varvec{v}_n \rightarrow {R}_{12}\varvec{v}\) weakly in \({{\mathcal {V}}}^*\). We conclude that \({R}_1\) is weakly–weakly continuous and history-dependent, and clearly \({R}_1 0\) is bounded in \(L^2(0, T; V^*)\). In this way, condition \((H_4)\) is verified. The conclusion of the theorem now follows from Theorem 7, which completes the proof. \(\square \)

Observe that Problems 10 and 11 are equivalent. This follows from the facts that every solution to Problem 11 is a solution to Problem 10, and that both problems have unique solutions. Thus, \(\varvec{w}\in {{\mathcal {W}}}\) solves the inequality in Problem 10 if and only if it solves the inclusion in Problem 11. We apply Theorem 12 to deduce the following well-posedness result for the variational–hemivariational inequality in Problem 9. It shows the continuous dependence of the solution to the contact problem with respect to the densities of the applied forces and the initial data.

Corollary 13

Assume hypotheses \(H({\mathscr {A}})\), \(H({\mathscr {B}})\), \(H({\mathscr {C}})\), \(H(j_\nu )\), H(k), \(H(F_b)\), \(H(f_0)\), and (5.18). Then, Problem 9 has a unique solution with regularity \(\varvec{u}\in {{\mathcal {V}}}\) and \(\varvec{u}' \in {{\mathcal {W}}}\). Moreover, if \(\{ (\varvec{f}_0^n, \varvec{f}_N^n, \varvec{u}_0^n, \varvec{w}_0^n )\} \subset L^2(0,T;L^2(\Omega ; {\mathbb {R}}^d)\times L^2(\Gamma _N; {\mathbb {R}}^d)) \times V \times V\), and

$$\begin{aligned}&(\varvec{f}_0^n, \varvec{f}_N^n) \rightarrow (\varvec{f}_0, \varvec{f}_N) \ \ \text{ weakly } \text{ in } \ \, L^2(0,T;L^2(\Omega ; {\mathbb {R}}^d)\times L^2(\Gamma _N; {\mathbb {R}}^d)), \end{aligned}$$
(5.20)
$$\begin{aligned}&(\varvec{u}_0^n, \varvec{w}_0^n) \rightarrow (\varvec{u}_0, \varvec{w}_0) \ \ \text{ weakly } \text{ in } \ \, V \times V, \end{aligned}$$
(5.21)

then

$$\begin{aligned}&\varvec{u}_n(t) \rightarrow \varvec{u}(t) \ \ \text{ weakly } \text{ in } \ \, V, \ \text{ for } \text{ all } \ t \in [0, T], \nonumber \\&\varvec{u}_n' \rightarrow \varvec{u}' \ \ \text{ weakly } \text{ in } \ \, {{\mathcal {W}}}, \end{aligned}$$
(5.22)

where \(\{ \varvec{u}_n \}\) and \(\varvec{u}\) are unique solutions to Problem 9 corresponding to \((\varvec{f}_0^n, \varvec{f}_N^n, \varvec{u}_0^n, \varvec{w}_0^n)\) and \((\varvec{f}_0, \varvec{f}_N, \varvec{u}_0, \varvec{w}_0)\), respectively.

Proof

Assume (5.20) and (5.21). Let \(\varvec{f}_n\), \(\varvec{f}\in {{\mathcal {V}}}^*\) be the elements defined by (5.9) corresponding to \((\varvec{f}_0^n, \varvec{f}_N^n)\) and \((\varvec{f}_0, \varvec{f}_N)\), respectively. Since the map \((\varvec{f}_0, \varvec{f}_N) \mapsto \varvec{f}\) is linear and continuous, we have \(\varvec{f}_n \rightarrow \varvec{f}\) weakly in \({{\mathcal {V}}}^*\). Combining this with the hypothesis \(\varvec{w}_0^n \rightarrow \varvec{w}_0\) weakly in V, by Theorem 12, we infer that \(\varvec{w}_n \rightarrow \varvec{w}\) weakly in \({{\mathcal {W}}}\), where \(\varvec{u}_n(t) = \int _0^t \varvec{w}_n(s) \, \mathrm{d}s + \varvec{u}_0^n\) and \(\varvec{u}(t) = \int _0^t \varvec{w}(s) \, \mathrm{d}s + \varvec{u}_0\) for all \(t \in [0, T]\), see (5.10).

Clearly, we have \(\varvec{u}_n' \rightarrow \varvec{u}'\) weakly in \({{\mathcal {W}}}\), and analogously as in the proof of (5.19), we obtain \(\int _0^t \varvec{w}_n(s) \, \mathrm{d}s \rightarrow \int _0^t \varvec{w}(s) \, \mathrm{d}s\) weakly in V, for all \(t \in [0, T]\). The latter together with hypothesis (5.21) implies (5.22). This completes the proof. \(\square \)

6 Application to a semipermeability problem

In this section, we illustrate the applicability of the results of Sect. 4 in the analysis of a semipermeability problem. Our aim is to derive the weak formulation of the problem, which takes the form of a variational–hemivariational inequality without history-dependent operators, and to establish its well-posedness.

Semipermeability boundary conditions describe the behavior of various types of membranes, both natural and artificial. They arise in models of heat conduction, electrostatics, hydraulics and in the description of the flow of Bingham fluids, where the solution represents the temperature, the electric potential and the pressure, respectively. These boundary conditions were first examined by Duvaut and Lions [5] in the convex setting, where the semipermeability relations were assumed to be monotone and led to variational inequalities. More generally, nonmonotone semipermeability conditions can be modeled by the Clarke generalized gradient, see, e.g., [6, 15, 25, 26] and the references therein.

Let \(\Omega \) be a bounded domain in \({\mathbb {R}}^d\) with Lipschitz continuous boundary \(\Gamma \). The latter is decomposed into three mutually disjoint and relatively open subsets \(\Gamma _a\), \(\Gamma _b\) and \(\Gamma _c\) of \(\Gamma \) such that \(\Gamma = {{\bar{\Gamma }}}_a \cup {{\bar{\Gamma }}}_b \cup {{\bar{\Gamma }}}_c\) and \(m(\Gamma _c) > 0\). We denote \(Q = \Omega \times (0, T)\), \(\Sigma _a = \Gamma _a \times (0, T)\), \(\Sigma _b = \Gamma _b \times (0, T)\) and \(\Sigma _c = \Gamma _c \times (0, T)\) with \(0< T < \infty \). Consider the following initial-boundary value problem.

Problem 14

Find \(u = u(x, t)\) such that

$$\begin{aligned} \begin{array}{rcl} \displaystyle \frac{\partial u}{\partial t} + A u + \partial j_1 (u) + \partial g_1(u) \ni f_1 &{}\text{ in }&{} \ Q \\ \displaystyle \frac{\partial u}{\partial \nu _A} + \partial j_2 (u) \ni f_a &{}\text{ on }&{} \ \Sigma _a \\ \displaystyle \frac{\partial u}{\partial \nu _A} + \partial g_2 (u) \ni f_b &{}\text{ on }&{} \ \Sigma _b \\ \displaystyle u = 0 &{}\text{ on }&{} \ \Sigma _c \\ \displaystyle u(0) = u_0 &{}\text{ in }&{} \ \Omega . \end{array} \end{aligned}$$

where \(A :V \rightarrow V^*\) is a linear operator, \(\frac{\partial u}{\partial \nu _A}\) denotes the conormal derivative with respect to the operator A, and \(\nu \) stands for the unit outward normal on the boundary. Problem 14 has been studied in [6] under more restrictive hypotheses and with a different weak formulation.

To provide the weak formulation of Problem 14, we introduce assumptions on the data of the problem. Let \(V = \{ v \in H^1(\Omega ) \mid v = 0 \ \text{ on }\ \Gamma _c \}\), \(H = L^2(\Omega )\), \({{\mathcal {V}}} = L^2(0, T; V)\), \({{\mathcal {W}}} = \{ \, u \in {{\mathcal {V}}} \mid u' \in {{\mathcal {V}}}^* \, \}\). We denote by \(i :V \rightarrow H\) the embedding operator and by \(\gamma :V \rightarrow L^2(\Gamma )\) the trace operator. For \(v\in H^1(\Omega )\), we always write v instead of iv and \(\gamma v\).

We need the following hypotheses on the data.

\(\underline{H(A)_2}{:}\,\, \displaystyle A :V \rightarrow V^*\) is such that \(A= -\sum _{i,j=1}^d D_i \big ( a_{ij}(x) D_j \big ) \), and

  1. (i)

    \(a_{ij} \in L^\infty (\Omega )\) for i, \(j=1, \ldots , d\).

  2. (ii)

    \(\sum _{i,j=1}^d a_{ij}(x) \xi _i \xi _j \ge \alpha \Vert \xi \Vert ^2\) for all \(\xi \in {\mathbb {R}}^d\), a.e. \(x \in \Omega \) with \(\alpha > 0\).
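For a constant coefficient matrix \((a_{ij})\), the ellipticity constant in \(H(A)_2\)(ii) can be taken as the smallest eigenvalue of the symmetric part of the matrix, since \(\xi ^{\top } a \xi = \xi ^{\top } \frac{1}{2}(a + a^{\top }) \xi \). A minimal numerical sketch (the helper name is ours):

```python
import numpy as np

def ellipticity_constant(a):
    # H(A)_2(ii): xi^T a xi >= lambda_min((a + a^T)/2) * ||xi||^2,
    # so this eigenvalue is an admissible alpha when it is positive
    return float(np.linalg.eigvalsh(0.5 * (a + a.T)).min())
```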

\(\underline{H(j_1)}{:}\,\, j_1 :Q \times {\mathbb {R}}\rightarrow {\mathbb {R}}\) is such that

  1. (i)

    \(j_1(\cdot , \cdot , r)\) is measurable on Q for all \(r \in {\mathbb {R}}\) and there exists \(e_1 \in L^2(\Omega )\) such that \(j_1(\cdot , \cdot , e_1(\cdot )) \in L^1(Q)\).

  2. (ii)

    \(j_1(x, t, \cdot )\) is locally Lipschitz for a.e. \((x, t) \in Q\).

  3. (iii)

    \(| \partial j_1(\varvec{x}, t, r) | \le c_{0j}(t) + c_{1j} |r|\) for all \(r \in {\mathbb {R}}\), a.e. \((\varvec{x},t) \in Q\) with \(c_{0j} \in L^2_+(0,T)\), \(c_{1j} \ge 0\).

  4. (iv)

    \((\partial j_1(\varvec{x}, t, r_1) - \partial j_1(\varvec{x}, t, r_2))(r_1 - r_2) \ge - \beta _{1j} |r_1 - r_2|^2\) for all \(r_1\), \(r_2 \in {\mathbb {R}}\), a.e. \((\varvec{x},t) \in Q\) with \(\beta _{1j}\ge 0\).

\(\underline{H(j_2)}{:}\,\, j_2 :\Sigma _a \times {\mathbb {R}}\rightarrow {\mathbb {R}}\) is such that

  1. (i)

    \(j_2(\cdot , \cdot , r)\) is measurable on \(\Sigma _a\) for all \(r \in {\mathbb {R}}\) and there exists \(e_2 \in L^2(\Gamma _a)\) such that \(j_2(\cdot , \cdot , e_2(\cdot )) \in L^1(\Sigma _a)\).

  2. (ii)

    \(j_2(x, t, \cdot )\) is locally Lipschitz for a.e. \((x, t) \in \Sigma _a\).

  3. (iii)

    \(| \partial j_2(\varvec{x}, t, r) | \le c_{2j}(t) + c_{3j} |r|\) for all \(r \in {\mathbb {R}}\), a.e. \((\varvec{x},t) \in \Sigma _a\) with \(c_{2j}\in L^2_+(0,T)\), \(c_{3j} \ge 0\).

  4. (iv)

    \((\partial j_2(\varvec{x}, t, r_1) - \partial j_2(\varvec{x}, t, r_2))(r_1 - r_2) \ge - \beta _{2j} |r_1 - r_2|^2\) for all \(r_1\), \(r_2 \in {\mathbb {R}}\), a.e. \((\varvec{x},t) \in \Sigma _a\) with \(\beta _{2j}\ge 0\).

\(\underline{H(g_1)}{:}\,\, g_1 :Q \times {\mathbb {R}}\rightarrow {\mathbb {R}}\) is such that

  1. (i)

    \(g_1(\cdot , \cdot , r)\) is measurable on Q for all \(r \in {\mathbb {R}}\).

  2. (ii)

    \(g_1(x, t, \cdot )\) is convex and l.s.c. for a.e. \((x, t) \in Q\).

  3. (iii)

    \(| \partial g_1(\varvec{x}, t, r) | \le c_{0g}(t) + c_{1g} |r|\) for all \(r \in {\mathbb {R}}\), a.e. \((\varvec{x},t) \in Q\) with \(c_{0g}\in L^2_+(0,T)\), \(c_{1g} \ge 0\).

\(\underline{H(g_2)}{:}\,\, g_2 :\Sigma _b \times {\mathbb {R}}\rightarrow {\mathbb {R}}\) is such that

  1. (i)

    \(g_2(\cdot , \cdot , r)\) is measurable on \(\Sigma _b\) for all \(r \in {\mathbb {R}}\).

  2. (ii)

    \(g_2(x, t, \cdot )\) is convex and l.s.c. for a.e. \((x, t) \in \Sigma _b\).

  3. (iii)

    \(| \partial g_2(\varvec{x}, t, r) | \le c_{2g}(t) + c_{3g} |r|\) for all \(r \in {\mathbb {R}}\), a.e. \((\varvec{x},t) \in \Sigma _b\) with \(c_{2g}\in L^2_+(0,T)\), \(c_{3g} \ge 0\).

\(\underline{H(f)}{:}\,\, f_1 \in L^2(Q), \ f_a \in L^2(\Sigma _a), \ f_b \in L^2(\Sigma _b), \ u_0 \in V. \)

\(\underline{(H_5)}{:}\,\, \alpha > \beta _{1j} \Vert i \Vert ^2 + \beta _{2j} \Vert \gamma \Vert ^2. \)
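Condition \((H_5)\) plays the role of a smallness assumption: it guarantees that the relaxed monotonicity defects of \(\partial j_1\) and \(\partial j_2\), measured through the embedding i and the trace \(\gamma \), do not destroy the coercivity of A. Schematically (a sketch; here \(\alpha \) comes from \(H(A)_2\)(ii) together with the equivalence of the gradient seminorm with \(\Vert \cdot \Vert _V\), valid by Friedrichs' inequality since \(m(\Gamma _c) > 0\)):

```latex
\[
\langle A u - A v, u - v \rangle_{V^* \times V}
  - \beta_{1j} \| i (u - v) \|_{H}^{2}
  - \beta_{2j} \| \gamma (u - v) \|_{L^2(\Gamma_a)}^{2}
\;\ge\;
\bigl( \alpha - \beta_{1j} \| i \|^{2} - \beta_{2j} \| \gamma \|^{2} \bigr)
\, \| u - v \|_{V}^{2}
\;>\; 0
\quad \text{for } u \neq v .
\]
```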

By a standard procedure, we obtain the following weak formulation of Problem 14.

Problem 15

Find \(u \in {{\mathcal {W}}}\) such that for all \(v\in V\), a.e. \(t \in (0, T)\), we have

$$\begin{aligned}& \langle u'(t) + A u(t) - f(t), v - u(t) \rangle _{V^* \times V} \\& \displaystyle \quad + \int _{\Omega } j_1^0(x,t, u(t); v-u(t)) \, \mathrm{d}x + \int _{\Gamma _a} j_2^0(x,t, u(t); v-u(t)) \, \mathrm{d}\Gamma \\& \displaystyle \qquad + \int _\Omega (g_1(x,t, v) - g_1(x,t,u(t))) \, \mathrm{d}x + \int _{\Gamma _b} (g_2(x,t, v) - g_2(x,t,u(t))) \, \mathrm{d}\Gamma \ge 0, \\& u(0)=u_0 . \end{aligned}$$

Here, \(f :(0, T) \rightarrow V^*\) is given by

$$\begin{aligned}\langle f(t), v \rangle _{V^* \times V} = \int _\Omega f_1(t) v \, \mathrm{d}x + \int _{\Gamma _a} f_a(t) v \, \mathrm{d}\Gamma + \int _{\Gamma _b} f_b(t) v \, \mathrm{d}\Gamma \ \ \end{aligned}$$

for \(v \in V\), a.e. \(t \in (0, T)\).
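For the reader's convenience, we indicate the formal derivation behind Problem 15 (a sketch only; the rigorous version follows the standard procedure mentioned above). If u solves Problem 14 with selections \(\xi _1 \in \partial j_1(u)\), \(\eta _1 \in \partial g_1(u)\) in Q, \(\xi _2 \in \partial j_2(u)\) on \(\Sigma _a\) and \(\eta _2 \in \partial g_2(u)\) on \(\Sigma _b\), then multiplying the equation by \(v - u(t)\), integrating over \(\Omega \) and applying Green's formula together with the boundary conditions gives

```latex
\[
\langle u'(t) + A u(t) - f(t), v - u(t) \rangle_{V^* \times V}
  + \int_\Omega (\xi_1 + \eta_1)\,(v - u(t)) \,\mathrm{d}x
  + \int_{\Gamma_a} \xi_2 \,(v - u(t)) \,\mathrm{d}\Gamma
  + \int_{\Gamma_b} \eta_2 \,(v - u(t)) \,\mathrm{d}\Gamma
  = 0 .
\]
```

The inequality in Problem 15 then follows from the subdifferential inequalities \(\xi _1 (v - u) \le j_1^0(u; v - u)\), \(\xi _2 (v - u) \le j_2^0(u; v - u)\), \(\eta _1 (v - u) \le g_1(v) - g_1(u)\) and \(\eta _2 (v - u) \le g_2(v) - g_2(u)\).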

Theorem 16

Assume hypotheses \(H(A)_2\), \(H(j_1)\), \(H(j_2)\), \(H(g_1)\), \(H(g_2)\), H(f), and \((H_5)\). Then, Problem 15 has a unique solution \(u \in {{\mathcal {W}}}\) and the map

$$\begin{aligned} L^2(Q) \times L^2(\Sigma _a) \times L^2(\Sigma _b) \times V \ni (f_1, f_a, f_b, u_0) \mapsto u \in {{\mathcal {W}}} \end{aligned}$$

is continuous, where all spaces are endowed with their weak topologies.

Proof

We sketch the main points of the proof. We introduce functionals \(J_1 :(0, T) \times H \rightarrow {\mathbb {R}}\), \(J_2 :(0, T) \times L^2(\Gamma _a) \rightarrow {\mathbb {R}}\), \(\varphi _1 :(0, T) \times H \rightarrow {\mathbb {R}}\), and \(\varphi _2 :(0, T) \times L^2(\Gamma _b) \rightarrow {\mathbb {R}}\) defined by

$$\begin{aligned}&J_1 (t, v) = \int _{\Omega } j_1(x, t, v(x)) \, \mathrm{d}x \ \ \text{ for } \ v \in H, \\&J_2 (t, v) = \int _{\Gamma _a} j_2(x, t, v(x)) \, \mathrm{d}\Gamma \ \ \text{ for } \ v \in L^2(\Gamma _a), \\&\varphi _1 (t, v) = \int _{\Omega } g_1(x, t, v(x)) \, \mathrm{d}x \ \ \text{ for } \ v \in H, \\&\varphi _2 (t, v) = \int _{\Gamma _b} g_2(x, t, v(x)) \, \mathrm{d}\Gamma \ \ \text{ for } \ v \in L^2(\Gamma _b), \end{aligned}$$

and operators \(M_1 = N_1 = i \in {{\mathcal {L}}}(V, H)\), \(M_2 = N_2 = \gamma \in {{\mathcal {L}}}(V, L^2(\Gamma ))\). With this notation, we consider the following evolution inclusion.

Problem 17

Find \(u \in {{\mathcal {W}}}\) such that

$$\begin{aligned}{\left\{ \begin{array}{ll} \displaystyle u'(t) + A u(t) + M_1^* \partial J_1 (t, M_1 u(t)) + M_2^* \partial J_2 (t, M_2 u(t)) \\ \quad + N_1^* \partial \varphi _1 (t, N_1 u(t)) + N_2^* \partial \varphi _2 (t, N_2 u(t)) \ni f(t)\ \ \text{ a.e. } \ t \in (0,T), \\ u(0) = u_0 . \end{array}\right. } \end{aligned}$$
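In computations, integral functionals such as \(J_1\) reduce to weighted sums over quadrature nodes. A minimal sketch for the smooth case (an illustration only; for smooth \(j\) the discrete gradient is the weighted Nemytskii map \(x \mapsto j'(v(x))\), whereas in the nonsmooth case \(\partial J_1\) is multivalued; the helper names are ours):

```python
import numpy as np

def J1(j, v, w):
    # quadrature for J_1(v) = ∫_Ω j(v(x)) dx: nodes carry values v, weights w
    return float(np.dot(w, j(v)))

def grad_J1(jprime, v, w):
    # for smooth j, the gradient of the discretized J_1 is the weighted
    # Nemytskii map x -> j'(v(x))
    return w * jprime(v)
```

For instance, with midpoint nodes on \(\Omega = (0,1)\) and \(j(r) = r^2\), the quadrature reproduces \(\int _0^1 x^2 \, \mathrm{d}x = 1/3\), and the directional derivative of the discrete \(J_1\) matches a finite-difference check.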

It is clear from the definitions of the convex and Clarke subdifferentials, and the properties of the generalized directional derivative, see [19, Theorem 3.47], that any solution to Problem 17 is also a solution to Problem 15. We will show below that the solution to Problem 17 is unique, and moreover, by a direct calculation we verify that under our hypotheses the solution to Problem 15 is also unique. Hence, we conclude that Problems 15 and 17 are equivalent.

Problem 17 is now treated by exploiting the methods used in Theorems 5 and 7. Since the operator \(A \in {{\mathcal {L}}}(V, V^*)\) is coercive, it satisfies condition \(H(A)_1\). Note that \(J_1\) satisfies condition H(J), where the variable z is omitted, \(X = H\), \(m_J = \beta _{1j}\), and \({\bar{m}}_J =0\). Taking into account that \(J_1^0(t,\cdot ;w) :X \rightarrow {\mathbb {R}}\) is upper semicontinuous for all \(w \in H\), a.e. \(t \in (0, T)\), see [19, Proposition 3.23(ii)], we deduce that \(H(J)_1\) holds too. Analogously, we check that \(J_2\) satisfies \(H(J)_1\) with \(m_J = \beta _{2j}\), and \({\bar{m}}_J =0\).

Furthermore, we easily verify that both \(\varphi _1\) and \(\varphi _2\) satisfy condition \(H(\varphi )\). In particular, \(H(\varphi )\)(v) holds automatically. The Nemytskii operators corresponding to \(M_1\) and \(M_2\) are compact, see [6, Examples 5.2 and 5.3]. Hence, \(H(M,N)_1\) is satisfied. Conditions \((H_2)\) and \((H_4)\) hold trivially, and condition \((H_5)\) implies the smallness assumption of type \((H_3)\). We are thus in a position to apply Theorems 5 and 7 to Problem 17 to obtain its unique solvability and a continuous dependence result. Since Problems 15 and 17 are equivalent and each has a unique solution, this completes the proof. \(\square \)