1 Introduction

The goal of this paper is to study a mechanical contact problem for beams with nonconvex and nonsmooth superpotentials. Contact problems have recently been investigated in the literature for various classes of processes. Considerable progress has been achieved in their modeling, mathematical analysis and numerical simulation, and, as a result, a general Mathematical Theory of Contact Mechanics is currently emerging. It is concerned with the mathematical structures which underlie general contact problems with different constitutive laws, i.e., materials, various geometries and different contact conditions, see for instance [7, 8, 14, 16, 17, 19] and the references therein. A large number of contact problems arising in Mechanics, Physics and Engineering Science lead to mathematical models expressed in terms of subdifferential inclusions, and variational and hemivariational inequalities. For this reason the mathematical literature dedicated to Contact Mechanics is extensive and the progress made in the last two decades is impressive. The analysis of nonlinear inclusions and hemivariational inequalities, including existence and uniqueness results, can be found in [3, 4, 13, 14, 17].

The interest in contact problems involving beams lies in the fact that their mathematical analysis may provide insight into the possible types of behavior of the solutions and, on occasion, leads to the decoupling of some of the equations, thus simplifying the approach. Moreover, one may use such models as tests and benchmarks for computer schemes meant for the simulation of complicated multidimensional contact problems. Models, analysis and simulations of contact problems for beams can be found in [2, 6, 10, 11, 18] and the references therein. In [2], a mathematical model which describes the unilateral contact of a beam between two deformable obstacles was considered. The unique weak solvability of the model was obtained by using the control variational method, and numerical simulations related to this method were presented as well.

This paper is a continuation of [2, 13]. Its aim is to complete [13] with new existence and uniqueness results in the study of a class of subdifferential inclusions and hemivariational inequalities, and to apply these results in the analysis of a quasistatic contact model for elastic beams which extends the contact model considered in [2]. A brief comparison between the results obtained in the current paper and those in [2, 13] is the following.

In the proof of the unique solvability of the inclusions we use the method already employed in [13], based on a surjectivity result for pseudomonotone multivalued operators. Nevertheless, we note that the inclusion formulated in this paper is more general than that in [13] and, moreover, it is studied under different hypotheses on the data. More precisely, the sign condition on the superpotential considered in [13] is replaced in this paper by a smallness assumption on the constants involved in the problem. Also, we deal with operators between a reflexive Banach space and its dual without introducing an additional intermediate space as in [13]. The uniqueness of the solution is proved, analogously to [13], under a regularity hypothesis on the superpotential. Next, we specialize our existence and uniqueness result to the study of a time-dependent hemivariational inequality. In contrast with the hemivariational inequality considered in [13], where the superpotential was defined on the boundary of a domain, in the current paper the superpotential is defined inside the domain under consideration.

The mathematical model we consider in this paper describes the contact between an Euler–Bernoulli beam and a reactive obstacle. We model the contact with a new and nonstandard boundary condition which involves both the subdifferential of a nonconvex function and a Volterra-type integral term. This contact condition includes the normal compliance condition as a particular case and also takes into account the memory effects of the obstacle. In a variational formulation, the model leads to a history-dependent hemivariational inequality for the displacement field. We prove the unique weak solvability of the problem. With respect to [2], the main novelty of the model studied in this paper lies in the contact condition we use. As a consequence, the problem we study here is time-dependent and, therefore, neither the arguments on stationary variational inequalities nor the arguments on the control variational method used in [2] work in this case. For this reason we use the arguments on history-dependent hemivariational inequalities developed earlier in this paper. In this way we exemplify one of the main features of the Mathematical Theory of Contact Mechanics, which consists in the cross-fertilization between modeling and applications on the one hand, and nonlinear mathematical analysis on the other hand. Indeed, within the setting of the equilibrium process of an elastic beam, we show how new models of contact lead to a new type of hemivariational inequalities and, conversely, how new abstract results on hemivariational inequalities can be applied to prove the solvability of new contact problems.

The rest of the paper is structured as follows. In Sect. 2 we provide the existence and uniqueness of the solution to a class of history-dependent subdifferential inclusions and in Sect. 3 we specialize this result to the study of history-dependent hemivariational inequalities. We proceed with Sect. 4, in which we describe the model of contact between the elastic beam and the reactive obstacle. Then we list the assumptions on the data, derive the variational formulation of the problem and prove an existence and uniqueness result, Theorem 12. Finally, in Sect. 5 we provide a numerical algorithm and simulations for the problem under consideration.

2 History-Dependent Subdifferential Inclusions

In this section we deal with a nonlinear abstract inclusion of subdifferential type which depends on the time variable, the latter playing the role of a parameter in the problem. The main goal is to provide a result on the unique solvability of this subdifferential inclusion involving a history-dependent operator. We start with basic notation and preliminary results on abstract history-dependent subdifferential inclusions. For additional details on the material presented in this section we refer to [3–5, 13–15, 17].

Let \((E, \Vert \cdot \Vert _E)\) be a Banach space and \(h :E \rightarrow \mathbb {R}\) be a locally Lipschitz function on \(E\). The generalized directional derivative of \(h\) at \(x \in E\) in the direction \(v \in E\), denoted by \(h^{0}(x; v)\), is defined by

$$\begin{aligned} h^{0}(x; v) = \limsup _{y \rightarrow x, \ \lambda \downarrow 0} \frac{h(y + \lambda v) - h(y)}{\lambda } \end{aligned}$$

and the generalized gradient of \(h\) at \(x\), denoted by \(\partial h(x)\), is a subset of a dual space \(E^*\) given by

$$\begin{aligned} \partial h(x) = \{\, \zeta \in E^* \mid h^{0}(x; v) \ge {\langle \zeta , v \rangle }_{E^* \times E} \ \text{ for } \text{ all }\, \ v \in E\, \}, \end{aligned}$$

where \(\langle \cdot , \cdot \rangle _{E^* \times E}\) denotes the duality pairing between \(E^*\) and \(E\). A locally Lipschitz function \(h\) is called regular (in the sense of Clarke) at \(x \in E\) if for all \(v \in E\) the one-sided directional derivative \(h'(x; v)\) exists and satisfies \(h^0(x; v) = h'(x; v)\). The symbol \(w\text{- }E\) is used for the space \(E\) endowed with the weak topology. The space of all linear and continuous operators from a normed space \(E\) to a normed space \(F\) is denoted by \({\mathcal L}(E, F)\).
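For a simple illustration of these notions, consider the nonconvex function \(h(r) = -|r|\) on \(E = \mathbb {R}\). A direct computation from the definitions gives

$$\begin{aligned} h^{0}(0; v) = \limsup _{y \rightarrow 0, \ \lambda \downarrow 0} \frac{|y| - |y + \lambda v|}{\lambda } = |v|, \qquad \partial h(0) = \{\, \zeta \in \mathbb {R} \mid |v| \ge \zeta v \ \text{ for } \text{ all } \ v \in \mathbb {R}\, \} = [-1, 1], \end{aligned}$$

while the one-sided directional derivative equals \(h'(0; v) = -|v|\). Since \(h^0(0; v) \ne h'(0; v)\) for \(v \ne 0\), the function \(h\) is not regular at \(0\).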

Let \(V\) be a reflexive Banach space and \(V^*\) its dual. Given \(0 < T < +\infty \), we introduce the spaces \({\mathcal V} = L^2(0,T; V)\) and \({\mathcal V^*} = L^2(0,T; V^*)\). Let \(X\) be a separable reflexive Banach space and \(M :V \rightarrow X\) a linear continuous operator. We denote by \(\Vert M \Vert \) the norm of the operator \(M\) in \({\mathcal L}(V, X)\) and by \(M^* :X^* \rightarrow V^*\) the adjoint operator of \(M\).

Let \(A :(0, T) \times V \rightarrow V^*\), \({\mathcal S} :{\mathcal V} \rightarrow {\mathcal V^*}\), \(J :(0, T) \times X \rightarrow \mathbb {R}\) and \({\widetilde{f}} :(0, T) \rightarrow V^*\) be given. We consider the following time-dependent abstract subdifferential inclusion.

Problem 1

Find \(u \in {\mathcal V}\) such that

$$\begin{aligned} A(t, u(t)) + ({\mathcal S} u)(t) + M^* \, \partial J(t, M u(t)) \ni {\widetilde{f}}(t) \ \ \text{ a.e. } \ t \in (0,T). \end{aligned}$$

The symbol \(\partial J(t, \cdot )\) denotes the Clarke generalized gradient of \(J(t, \cdot )\) for \(t \in (0, T)\).

Definition 2

A function \(u \in {\mathcal V}\) is called a solution to Problem 1 if and only if there exists \(\zeta \in {\mathcal V^*}\) such that

$$\begin{aligned} \left. \begin{array}{l} \displaystyle A(t, u(t)) + ({\mathcal S} u)(t) + \zeta (t) = {\widetilde{f}}(t) \ \ \text{ a.e. } \ t \in (0, T) \\ \zeta (t) \in M^* \partial J(t, M u(t)) \ \ \text{ a.e. } \ t \in (0, T). \end{array} \right\} \end{aligned}$$

In order to provide a result on the solvability of Problem 1, we need the following hypotheses on the data.

$$\begin{aligned} \left. \begin{array}{l} A :(0,T) \times V \rightarrow V^* \ \text{ is } \text{ such } \text{ that } \\ \ \ \mathrm{(a)} \ A(\cdot , v) \ \text{ is } \text{ measurable } \text{ on } \ (0,T) \ \text{ for } \text{ all } \ v \in V. \\ \ \ \mathrm{(b)} \ A(t,\cdot ) \ \text{ is } \text{ pseudomonotone } \text{ and } \text{ coercive } \text{ with } \\ \qquad \ \ \ \text{ constant } \ \alpha > 0, \text{ i.e., }\ \ {\langle A(t, v), v \rangle }_{V^* \times V} \ge \alpha \Vert v \Vert _V^{2} \\ \qquad \ \ \ \text{ for } \text{ all } \ v \in V, \ \text{ for } \text{ a.e. } \ t \in (0, T). \\ \ \ \mathrm{(c)} \ A(t,\cdot ) \ \text{ is } \text{ strongly } \text{ monotone } \text{ for } \text{ a.e. } \ t \in (0, T), \ \mathrm{i.e.,}\ \\ \qquad \ \ \ {\langle A(t, v_1) - A(t, v_2), v_1 - v_2 \rangle }_{V^* \times V} \ge m_1 \Vert v_1 - v_2 \Vert _V^{2} \\ \qquad \ \ \ \text{ for } \text{ all } \ v_1, v_2 \in V, \ \text{ a.e. } \ t \in (0, T) \ \text{ with } \ m_1 > 0. \end{array} \right\} \end{aligned}$$
(1)
$$\begin{aligned} \left. \begin{array}{l} {\mathcal S} :{\mathcal V} \rightarrow {\mathcal V^*} \ \text{ is } \text{ such } \text{ that } \\ \quad \Vert ({\mathcal {S}} u_1)(t) - ({\mathcal {S}} u_2)(t)\Vert _{V^*} \le L_{\mathcal S} \displaystyle {\int _0^t\Vert u_1(s)-u_2(s)\Vert _V \,ds}\\ \quad \text{ for } \text{ all } \, u_1,\,u_2\in {\mathcal V}, \ \mathrm{a.e.} \ t \in (0,T) \ \text{ with } \ L_{\mathcal S} > 0. \end{array} \right\} \end{aligned}$$
(2)
$$\begin{aligned} \left. \begin{array}{l} J :(0,T) \times X \rightarrow \mathbb {R}\ \text{ is } \text{ such } \text{ that } \\ \ \ \mathrm{(a)} \ J(\cdot , u) \ \text{ is } \text{ measurable } \text{ on } \ (0, T) \ \text{ for } \text{ all } \ u \in X. \\ \ \ \mathrm{(b)} \ J(t, \cdot ) \ \text{ is } \text{ locally } \text{ Lipschitz } \text{ on } \ X \ \text{ for } \text{ a.e. } \ t \in (0, T).\ \\ \ \ \mathrm{(c)} \ \Vert \partial J(t, u) \Vert _{X^*} \le c_0(t) + c_1 \, \Vert u \Vert _{X} \ \text{ for } \text{ all } \ u \in X, \\ \qquad \ \ \ \text{ a.e. } \ t \in (0, T) \ \text{ with } \ c_0\in L^2(0,T), c_0(t), c_1 \ge 0. \\ \ \ \mathrm{(d)} \ \langle z_1 - z_2, u_1 - u_2 \rangle _{X^* \times X} \ge -m_2 \Vert u_1 - u_2 \Vert ^2_{X} \\ \qquad \ \ \ \text{ for } \text{ all } \ z_i \in \partial J(t, u_i), z_i \in X^*, \, u_i \in X, \, i = 1, 2, \\ \qquad \ \ \ \text{ a.e. } \ t \in (0, T) \ \text{ with } \ m_2 \ge 0. \end{array} \right\} \end{aligned}$$
(3)
$$\begin{aligned}&M \in {\mathcal L}(V, X) \ \text{ is } \text{ compact }.\end{aligned}$$
(4)
$$\begin{aligned}&{\widetilde{f}} \in {\mathcal V^*}.\end{aligned}$$
(5)
$$\begin{aligned}&\max \{ \, c_1, m_2 \, \}\, \Vert M \Vert ^2 < \min \{ \, \alpha , m_1 \, \}. \end{aligned}$$
(6)

Following the terminology introduced in [20], an operator which satisfies condition (2) is called a history-dependent operator. For this reason, we refer to Problem 1 as a history-dependent subdifferential inclusion.

In order to establish the existence and uniqueness result for Problem 1, we start with an auxiliary result on the unique solvability of a subdifferential inclusion in which the time variable plays the role of a parameter.

Lemma 3

Assume that the hypotheses (1) and (3)–(6) hold. Then the problem

$$\begin{aligned} A(t, u(t)) + M^* \partial J(t, M u(t)) \ni {\widetilde{f}}(t) \ \ \text{ a.e. } \ t \in (0,T) \end{aligned}$$
(7)

has a unique solution \(u \in {\mathcal V}\) which satisfies

$$\begin{aligned} \Vert u \Vert _{\mathcal V} \le c \, \left( 1 + \Vert {\widetilde{f}} \Vert _{\mathcal V^*} \right) \end{aligned}$$
(8)

with some constant \(c > 0\).

Proof

We define the operator \(B :(0, T) \times V \rightarrow 2^{V^*}\) by

$$\begin{aligned} B(t, v) = M^* \, \partial J(t, M v) \ \ \ \text{ for } \text{ all } \ v \in V, \ \text{ a.e. } \ t \in (0, T). \end{aligned}$$

We will establish the following properties of the operator \(B\).

$$\begin{aligned} \left. \begin{array}{l} \mathrm{(a)} \ B (\cdot , v) \ \text{ is } \text{ measurable } \text{ for } \text{ all } \ v \in V.\\ \mathrm{(b)} \ \Vert B (t, v) \Vert _{V^*} \le \Vert M \Vert \, (c_0(t) + c_1 \Vert M \Vert \Vert v \Vert _V) \ \text{ for } \text{ all } \ v \in V, \\ \quad \ \ \text{ a.e. } \ t \in (0, T).\\ \mathrm{(c)} \ \text{ for } \text{ all } \ v \in V \ \text{ and } \text{ a.e. } \ t \in (0, T), \ B (t, v) \ \text{ is } \text{ nonempty, } \text{ convex }, \\ \quad \ \ \text{ weakly } \text{ compact } \text{ subset } \text{ of } \ V^*. \\ \mathrm{(d)} \ \langle B (t, v), v \rangle _{V^* \times V} \ge - c_1 \, \Vert M \Vert ^2 \Vert v \Vert _V^2 - c_0(t) \, \Vert M \Vert \Vert v \Vert _V \ \text{ for } \text{ all } \\ \quad \ \ \ v \in V, \ \text{ a.e. } \ t \in (0, T). \\ \mathrm{(e)} \ \text{ the } \text{ graph } \text{ of } \ B(t, \cdot ) \ \text{ is } \text{ closed } \text{ in } \ (w{-}V) \times (w{-}V^*) \ \text{ topology } \\ \quad \ \ \text{ for } \text{ a.e. } \ t \in (0, T), \ (\text{ i.e., } \text{ for } \text{ fixed } \ t\in (0,T) \ \text{ if } \ \zeta _n \in B (t, v_n) \\ \quad \ \ \text{ with } \ v_n, v \in V, v_n \rightarrow v \ \text{ weakly } \text{ in } \ V \ \text{ and } \ \zeta _n, \zeta \in V^*, \ \zeta _n \rightarrow \zeta \\ \quad \ \ \text{ weakly } \text{ in } \ V^*, \ \text{ then } \ \zeta \in B(t, v)) \ \text{ and } \ \lim \, \langle \zeta _n, v_n-v \rangle _{V^* \times V} = 0.\ \ \end{array} \right\} \end{aligned}$$
(9)

Using the separability of \(X\), Proposition 3.44 in [14] and hypothesis (3)(a), (b), we deduce that \(\partial J(\cdot , v)\) is a measurable multifunction on \((0, T)\) for all \(v \in X\). From Lemma 5.10 of [14] and (4), we infer that the map \(M^* \partial J(\cdot , M v)\) is measurable for all \(v \in V\). Hence, for all \(v \in V\), \(B(\cdot , v)\) is measurable, i.e., (9)(a) holds.

Next, from (3)(c) and the continuity of the operator \(M\), we obtain

$$\begin{aligned} \Vert B (t, v) \Vert _{V^*} \le \Vert M^* \Vert \, \Vert \partial J(t, M v) \Vert _{X^*} \le \Vert M \Vert \left( c_0 (t) + c_1 \, \Vert M \Vert \, \Vert v \Vert _V \right) \end{aligned}$$
(10)

for all \(v \in V\), a.e. \(t\in (0,T)\), which proves (9)(b).

In order to establish (9)(c), we recall that the values of \(\partial J(t,\cdot )\) are nonempty, convex, and weakly compact subsets of \(X^*\) for a.e. \(t\in (0,T)\). Let \(v \in V\) and \(t \in (0, T)\) be fixed. Then \(B (t, v)\) is a nonempty and convex subset of \(V^*\). To show that \(B (t, v)\) is weakly compact in \(V^*\), we prove that it is closed in \(V^*\). Indeed, let \(\{ \zeta _n \} \subset B (t, v)\) be such that \(\zeta _n \rightarrow \zeta \) in \(V^*\). We write \(\zeta _n = M^* z_n\) with \(z_n \in \partial J (t, M v)\) and, exploiting the weak compactness of \(\partial J(t, M v)\) in \(X^*\), we may assume, passing to a subsequence if necessary, that \(z_n \rightarrow z\) weakly in \(X^*\) with some \(z \in \partial J(t, M v)\). Since the operator \(M^*\) is linear and continuous, it is also weakly continuous, so \(\zeta _n = M^* z_n \rightarrow M^* z\) weakly in \(V^*\), which implies \(\zeta = M^* z \in B (t, v)\). Therefore, the set \(B (t, v)\) is closed and convex in \(V^*\), so it is also weakly closed in \(V^*\). Since \(B (t, v)\) is a bounded set in the reflexive Banach space \(V^*\), we obtain that \(B (t, v)\) is weakly compact in \(V^*\), which shows (9)(c).

To prove (9)(d), let \(v \in V\) and \(t \in (0, T)\). Using (10), we have

$$\begin{aligned} |\langle B(t, v), v \rangle _{V^* \times V}|&\le \Vert B (t, v) \Vert _{V^*} \Vert v \Vert _V \\&\le \Vert M \Vert \left( c_0 (t) + c_1 \, \Vert M \Vert \, \Vert v \Vert _V \right) \, \Vert v \Vert _V . \end{aligned}$$

Hence

$$\begin{aligned} \langle B(t, v), v \rangle _{V^* \times V} \ge - c_1 \, \Vert M \Vert ^2 \, \Vert v \Vert ^2_V - c_0 (t) \, \Vert M \Vert \, \Vert v \Vert _V \end{aligned}$$

and (9)(d) holds.

For the proof of (9)(e), let \(t \in (0, T)\) be fixed, \(\zeta _n \in B(t, v_n)\), where \(v_n\), \(v \in V\), \(v_n \rightarrow v\) weakly in \(V\), \(\zeta _n\), \(\zeta \in V^*\) and \(\zeta _n \rightarrow \zeta \) weakly in \(V^*\). Then \(\zeta _n = M^* z_n \) and \(z_n \in \partial J(t, M v_n)\). The compactness of the operator \(M\) (cf. (4)) implies \(M v_n \rightarrow M v\) in \(X\) and the bound (3)(c) gives that, at least for a subsequence, we have \(z_n \rightarrow z\) weakly in \(X^*\) with some \(z \in X^*\). Hence,

$$\begin{aligned} \lim \, \langle \zeta _n, v_n - v \rangle _{V^* \times V} = \lim \, \langle z_n, M v_n - M v \rangle _{X^*\times X} = 0. \end{aligned}$$

Moreover, from the equality \(\zeta _n = M^* z_n\), we easily obtain \(\zeta = M^* z\). Since the graph of \(\partial J(t, \cdot )\) is closed in the \(X \times (w\text{- }X^*)\) topology, from \(z_n \in \partial J(t, M v_n)\), we get \(z \in \partial J(t, M v)\), and subsequently \(\zeta \in M^* \, \partial J(t, M v)\), i.e., \(\zeta \in B (t, v)\). The proof of all conditions in (9) is complete.

Next, we define the multivalued map \({\mathcal F} :(0, T) \times V \rightarrow 2^{V^*}\) by \({\mathcal F}(t, v) = A(t, v) + B(t, v)\) for all \(v \in V\) and a.e. \(t \in (0, T)\). From (1)(a) and (9)(a), it is clear that \({\mathcal F}(\cdot , v)\) is a measurable multifunction for all \(v \in V\). We show that \({\mathcal F}(t, \cdot )\) is pseudomonotone (cf. Definition 6.3.63 of [5]) and coercive for a.e. fixed \(t \in (0, T)\). To this end, we use the fact (cf. Definition 3.58 of [14]) that a generalized pseudomonotone operator which is bounded and has nonempty, closed and convex values, is pseudomonotone. From the property (9)(c), we know that \({\mathcal F}(t, \cdot )\) has nonempty, convex and closed values in \(V^*\). Since \(A(t, \cdot )\) is pseudomonotone, it is bounded (see Definition 3.65 in [14]). Thus, by (9)(b), it follows that \({\mathcal F}(t, \cdot )\) is a bounded map, i.e., it maps bounded subsets of \(V\) into bounded subsets of \(V^*\).

We prove that \({\mathcal F}(t, \cdot )\) is a generalized pseudomonotone operator for a.e. \(t \in (0, T)\). To this end, let \(t \in (0, T)\) be fixed, \(v_n\), \(v \in V\), \(v_n \rightarrow v\) weakly in \(V\), \(v^*_n\), \(v^* \in V^*\), \(v_n^* \rightarrow v^*\) weakly in \(V^*\), \(v_n^* \in {\mathcal F} (t, v_n)\) and assume that \(\limsup \, \langle v_n^*, v_n - v \rangle _{V^* \times V} \le 0\). We show that \(v^* \in {\mathcal F}(t, v)\) and \(\langle v_n^*, v_n \rangle _{V^* \times V} \rightarrow \langle v^*, v \rangle _{V^* \times V}\). We have \(v_n^* = A (t, v_n) + \zeta _n\) with \(\zeta _n \in B (t, v_n)\). By the boundedness of \(B(t, \cdot )\) for fixed a.e. \(t\in (0,T)\) (cf. (9)(b)), passing to a subsequence, if necessary, we have

$$\begin{aligned} \zeta _n \rightarrow \zeta \ \ \text{ weakly } \text{ in } \ V^* \ \text{ with } \text{ some } \ \zeta \in V^*. \end{aligned}$$
(11)

From (9)(e) and (11), since \(\zeta _n \in B (t, v_n)\), we infer immediately that \(\zeta \in B (t, v)\). Furthermore, exploiting the equality

$$\begin{aligned} \langle v_n^*, v_n - v \rangle _{V^* \times V} = \langle A (t, v_n), v_n - v \rangle _{V^* \times V} + \langle \zeta _n, v_n - v \rangle _{V^* \times V}, \end{aligned}$$

we obtain

$$\begin{aligned} \displaystyle \limsup \, \langle A (t, v_n), v_n - v \rangle _{V^* \times V} = \limsup \, \langle v_n^*, v_n - v \rangle _{V^* \times V} \le 0. \end{aligned}$$

Using the pseudomonotonicity of \(A(t, \cdot )\), by Proposition 3.66 of [14], we deduce that

$$\begin{aligned} A (t, v_n) \rightarrow A (t, v) \ \ \text{ weakly } \text{ in } \ V^* \end{aligned}$$
(12)

and

$$\begin{aligned} \lim \, \langle A (t, v_n), v_n - v \rangle _{V^* \times V} = 0. \end{aligned}$$
(13)

Therefore, passing to the limit in the equation \(v_n^* = A (t, v_n) + \zeta _n\), we obtain \(v^* = A (t, v) + \zeta \) which, together with \(\zeta \in B(t, v)\), implies \(v^* \in A(t, v) + B (t, v) = {\mathcal F} (t, v)\). Next, from convergences (11)–(13) and (9)(e), we get

$$\begin{aligned} \lim \, \langle v_n^*, v_n \rangle _{V^* \times V}&= \lim \, \langle A (t, v_n), v_n - v \rangle _{V^* \times V} + \lim \, \langle A (t, v_n), v \rangle _{V^* \times V}\\&\quad + \lim \, \langle \zeta _n, v_n \rangle _{V^* \times V} \\&= \langle A (t, v), v \rangle _{V^* \times V} + \langle \zeta , v \rangle _{V^* \times V} = \langle v^*, v \rangle _{V^* \times V}. \end{aligned}$$

This, according to Definition 3.57 of [14], shows that \({\mathcal F}(t, \cdot )\) is a generalized pseudomonotone operator and, consequently, completes the proof of the pseudomonotonicity of \({\mathcal F}(t, \cdot )\) for a.e. \(t \in (0, T)\).

Next, by hypothesis (1)(b) and property (9)(d), we have

$$\begin{aligned} \langle {\mathcal F} (t, v), v \rangle _{V^* \times V}&= \langle A (t, v), v \rangle _{V^* \times V} + \langle B (t, v), v \rangle _{V^* \times V}\\&\ge (\alpha - c_1\Vert M \Vert ^2) \Vert v \Vert ^2_V - c_0(t)\,\Vert M \Vert \, \Vert v \Vert _V \end{aligned}$$

for all \(v \in V\) and a.e. \(t \in (0, T)\) which, by hypothesis (6), implies that the operator \({\mathcal F}(t, \cdot )\) is coercive.

Since \({\mathcal F}(t, \cdot )\) is pseudomonotone and coercive for a.e. \(t \in (0, T)\), the surjectivity result (cf., e.g., Theorem 6.3.70 of [5]) shows that \({\mathcal F}(t, \cdot )\) is surjective, which implies that for a.e. \(t \in (0, T)\) there exists a solution \(u(t) \in V\) of problem (7). Furthermore, using the coercivity of \({\mathcal F}(t, \cdot )\), we deduce

$$\begin{aligned} \Big ( (\alpha - c_1 \Vert M \Vert ^2) \Vert u(t) \Vert _V - c_0(t) \Vert M \Vert \Big ) \, \Vert u(t)\Vert _V \le \Vert {\widetilde{f}}(t) \Vert _{V^*} \Vert u(t)\Vert _V, \end{aligned}$$

which implies the following estimate

$$\begin{aligned} \Vert u(t) \Vert _V \le \frac{1}{\alpha - c_1 \Vert M \Vert ^2} \left( \Vert {\widetilde{f}}(t) \Vert _{V^*} + c_0(t) \Vert M \Vert \right) \ \text{ for } \text{ a.e. } \ t \in (0,T). \end{aligned}$$
(14)

We now prove that the solution to problem (7) is unique. Let \(t \in (0, T)\) and let \(u_1(t)\), \(u_2 (t) \in V\) be solutions to problem (7). Then, there exist \(z_i(t) \in X^*\) with \(z_i (t) \in \partial J(t, M u_i(t))\), \(i = 1, 2\), such that

$$\begin{aligned} A (t, u_i(t)) + M^* z_i (t) = \widetilde{f}(t) \ \ \text{ for } \ \ i = 1, 2. \end{aligned}$$
(15)

Subtracting the above two equations, multiplying the result by \(u_1 (t) - u_2 (t)\) and using the strong monotonicity of \(A(t, \cdot )\), we obtain

$$\begin{aligned} m_1 \Vert u_1(t) - u_2(t) \Vert _V^2 + \langle M^* z_1 (t) - M^* z_2(t), u_1(t) - u_2(t) \rangle _{V^* \times V} \le 0. \end{aligned}$$

Next, by the relaxed monotonicity of \(\partial J(t,\cdot )\) (cf. (3)(d)), we deduce

$$\begin{aligned}&\langle M^* z_1 (t) {-} M^* z_2 (t), u_1(t) {-} u_2(t) \rangle _{V^* \times V} {=} \langle z_1 (t) {-} z_2(t), M u_1 (t) {-} M u_2 (t) \rangle _{X^*\times X}\\&\quad \ge - m_2 \, \Vert M u_1 (t) - M u_2 (t) \Vert ^2_{X} \ge - m_2 \, \Vert M \Vert ^2 \, \Vert u_1 (t) - u_2 (t) \Vert ^2_V. \end{aligned}$$

Hence

$$\begin{aligned} m_1 \Vert u_1(t) - u_2(t) \Vert _V^2 - m_2 \, \Vert M \Vert ^2 \, \Vert u_1 (t) - u_2 (t) \Vert ^2_V \le 0 \end{aligned}$$

which, in view of hypothesis \(m_1 > m_2 \, \Vert M \Vert ^2\) (cf. (6)), implies \(u_1(t) = u_2(t)\). Furthermore, from (15), we deduce that \(z_1(t) = z_2(t)\). This completes the proof of the uniqueness of the solution.

Next, we prove that the solution \(u(t)\) to problem (7) is a measurable function of \(t \in (0, T)\). To this end, given \(g \in V^*\), we denote by \(w \in V\) the unique solution of the following auxiliary problem

$$\begin{aligned} A(t, w) + M^* \, \partial J(t, M w) \ni g \ \ \text{ a.e. } \ \ t \in (0, T). \end{aligned}$$
(16)

Since \(A\) and \(J\) depend on the parameter \(t\), the solution \(w\) is also a function of \(t\), i.e., \(w=w(t)\). We claim that for a.e. \(t\in (0,T)\) the solution \(w\) depends continuously on the right hand side \(g\). Indeed, let \(g_1\), \(g_2 \in V^*\) and \(w_1\), \(w_2 \in V\) be the corresponding solutions to (16). We have

$$\begin{aligned}&A(t, w_1) + \zeta _1 = g_1 \ \ \text{ a.e. } \ \ t \in (0, T),\\&A(t, w_2) + \zeta _2 = g_2 \ \ \text{ a.e. } \ \ t \in (0, T),\\&\zeta _1 \in M^* \, \partial J(t, M w_1), \ \ \zeta _2 \in M^* \, \partial J(t, M w_2) \ \ \text{ a.e. } \ \ t \in (0, T). \end{aligned}$$

Subtracting the above two equations and multiplying the result by \(w_1 - w_2\), we obtain

$$\begin{aligned}&\langle A(t, w_1) - A(t, w_2), w_1 - w_2 \rangle _{V^* \times V} \\&\quad + \langle \zeta _1 - \zeta _2, w_1 - w_2 \rangle _{V^* \times V} = \langle g_1 - g_2, w_1 - w_2 \rangle _{V^* \times V} \end{aligned}$$

for a.e. \(t \in (0, T)\). Since \(\zeta _i = M^* z_i\) with \(z_i \in \partial J(t, M w_i)\) for a.e. \(t \in (0, T)\) and \(i=1\), \(2\), by the strong monotonicity of \(A(t, \cdot )\) (cf. (1)(c)) and the relaxed monotonicity of \(\partial J(t, \cdot )\) (cf. (3)(d)), we have

$$\begin{aligned} m_1 \Vert w_1 - w_2 \Vert ^2_V - m_2 \, \Vert M \Vert ^2 \Vert w_1 - w_2 \Vert ^2_V \le \Vert g_1 - g_2 \Vert _{V^*} \, \Vert w_1 - w_2 \Vert _V. \end{aligned}$$

Exploiting hypothesis (6), we obtain

$$\begin{aligned} \Vert w_1 - w_2 \Vert _V \le {\widetilde{c}} \, \Vert g_1 - g_2 \Vert _{V^*}, \end{aligned}$$

where \({\widetilde{c}} = (m_1 - m_2 \Vert M \Vert ^2)^{-1} > 0\) is independent of \(t\). Hence, for a.e. \(t\in (0,T)\), the mapping \(V^* \ni g \mapsto w = w(t)\in V\) is continuous, which proves the claim. Now, using the continuous dependence of the solution of (16) on the right hand side, and the measurability of \(\widetilde{f}\), we deduce that the unique solution \(u(\cdot )\) of problem (7) is measurable on \((0, T)\). Since \(\widetilde{f} \in {\mathcal V^*}\), from the estimate (14), we conclude that \(u \in {\mathcal V}\) and, moreover (8) holds. This completes the proof of the lemma. \(\square \)

In order to prove the existence and uniqueness result for Problem 1, we recall the following result (cf. Lemma 7 in [9]) which is a consequence of the Banach contraction principle.

Lemma 4

Let \((E, \Vert \cdot \Vert _E)\) be a Banach space and \(T > 0\). Let \(\Lambda :L^2(0,T; E) \rightarrow L^2(0,T; E)\) be an operator satisfying

$$\begin{aligned} \Vert (\Lambda \eta _1)(t) - (\Lambda \eta _2)(t)\Vert _E \le c \int _0^t \Vert \eta _1(s) - \eta _2(s)\Vert _E \, ds \end{aligned}$$

for every \(\eta _1\), \(\eta _2 \in L^2(0,T;E)\), a.e. \(t \in (0,T)\) with a constant \(c > 0\). Then \(\Lambda \) has a unique fixed point in \(L^2(0, T; E)\), i.e. there exists a unique \(\eta ^* \in L^2(0,T;E)\) such that \(\Lambda \eta ^* = \eta ^*\).
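Although the complete proof is given in [9], the result follows from a standard estimate which we sketch here for the reader's convenience: iterating the assumed inequality \(k\) times and then applying the Cauchy–Schwarz inequality in time gives

$$\begin{aligned} \Vert (\Lambda ^k \eta _1)(t) - (\Lambda ^k \eta _2)(t)\Vert _E \le \frac{c^k}{(k-1)!} \int _0^t (t-s)^{k-1} \, \Vert \eta _1(s) - \eta _2(s)\Vert _E \, ds \le \frac{c^k \, T^{k-1/2}}{(k-1)! \, \sqrt{2k-1}} \, \Vert \eta _1 - \eta _2 \Vert _{L^2(0,T;E)} \end{aligned}$$

for a.e. \(t \in (0,T)\), and hence \(\Vert \Lambda ^k \eta _1 - \Lambda ^k \eta _2 \Vert _{L^2(0,T;E)} \le \frac{c^k \, T^{k}}{(k-1)! \, \sqrt{2k-1}} \, \Vert \eta _1 - \eta _2 \Vert _{L^2(0,T;E)}\). For \(k\) large enough the constant on the right-hand side is smaller than one, so some power of \(\Lambda \) is a contraction and the Banach contraction principle yields the unique fixed point.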

The following existence and uniqueness result is the main theorem of this paper.

Theorem 5

Assume that (1)–(6) hold. Then Problem 1 has a unique solution.

Proof

We use a fixed point argument. Let \(\eta \in {\mathcal V^*}\). We denote by \(u_\eta \in {\mathcal V}\) the solution of the following problem

$$\begin{aligned} A(t, u_\eta (t)) + M^* \, \partial J(t, M u_\eta (t)) \ni {\widetilde{f}}(t) - \eta (t) \ \ \text{ a.e. } \ t \in (0, T). \end{aligned}$$
(17)

It is clear from Lemma 3 that \(u_\eta \in {\mathcal V}\) exists and is unique. We consider the operator \(\Lambda :{\mathcal V^*} \rightarrow {\mathcal V^*}\) defined by

$$\begin{aligned} (\Lambda \eta )(t) = (\mathcal {S}u_\eta )(t) \ \ \ \text{ for } \text{ all } \ \ \eta \in {\mathcal V^*}, \ \text{ a.e. } \ t \in (0, T). \end{aligned}$$
(18)

We prove that the operator \(\Lambda \) has a unique fixed point. To this end, let \(\eta _1\), \(\eta _2 \in {\mathcal V^*}\) and let \(u_1 = u_{\eta _1}\) and \(u_2 = u_{\eta _2}\) be the corresponding unique solutions to (17). We have \(u_1\), \(u_2 \in {\mathcal V}\) and

$$\begin{aligned}&A(t, u_1(t)) + \zeta _1(t) = {\widetilde{f}}(t) - \eta _1(t) \ \ \mathrm{\ a.e.} \ t \in (0, T),\end{aligned}$$
(19)
$$\begin{aligned}&A(t, u_2(t)) + \zeta _2(t) = {\widetilde{f}}(t) - \eta _2(t) \ \ \mathrm{a.e.} \ t \in (0, T),\\&\zeta _1(t) \in M^* \partial J(t, M u_1(t)), \quad \zeta _2(t) \in M^* \partial J(t, M u_2(t)) \ \ \mathrm{a.e.} \ t \in (0, T).\nonumber \end{aligned}$$
(20)

Subtracting (20) from (19), multiplying the result by \(u_1(t) - u_2(t)\) and using (1)(c), (3)(d) and (6), we infer

$$\begin{aligned} \Vert u_1(t) - u_2(t) \Vert _V \le {\widetilde{c}} \, \Vert \eta _1(t) - \eta _2(t) \Vert _{V^*} \ \ \text{ for } \text{ a.e. } \ t \in (0, T), \end{aligned}$$
(21)

where \({\widetilde{c}} = (m_1 - m_2 \, \Vert M \Vert ^2)^{-1} > 0\). From (2), (18) and (21), we deduce

$$\begin{aligned} \Vert (\Lambda \eta _1)(t) \!-\! (\Lambda \eta _2) (t) \Vert _{V^*} \!\le L_{\mathcal S} \int _0^t \Vert u_1(s) \!-\! u_2(s) \Vert _{V} \, ds \le c \!\int _0^t \Vert \eta _1(s) \!-\! \eta _2(s) \Vert _{V^*} \, ds \end{aligned}$$

for a.e. \(t \in (0,T)\) with a positive constant \(c\). Applying Lemma 4, we deduce that \(\Lambda \) has a unique fixed point \(\eta ^* \in {\mathcal V^*}\). Thus \(u_{\eta ^*}\) is a solution to Problem 1, which concludes the existence part of the theorem.

The uniqueness part follows from the uniqueness of the fixed point of \(\Lambda \). Indeed, let \(u \in {\mathcal V}\) be a solution to Problem 1 and define the element \(\eta \in {\mathcal V^*}\) by \(\eta (t) = \mathcal {S} u(t)\) for a.e. \(t \in (0, T)\). It follows that \(u\) is the solution to problem (17) and, by the uniqueness of solutions to (17), we obtain \(u = u_\eta \). This implies \(\Lambda \eta =\mathcal {S}u_\eta =\mathcal {S}u= \eta \) and by the uniqueness of the fixed point of \(\Lambda \) we have \(\eta = \eta ^*\), so \(u = u_{\eta ^*}\), which completes the proof. \(\square \)

3 History-Dependent Hemivariational Inequalities

In this section we deal with a hemivariational inequality involving a history-dependent operator.

Let \(\Omega \subset \mathbb {R}^d\) be an open, bounded set with a Lipschitz continuous boundary \(\partial \Omega \). Let \(V\) be a reflexive Banach space, \(V^*\) its dual, \(s \ge 1\), and let \(M :V \rightarrow L^2(\Omega ; \mathbb {R}^s)\) be an embedding operator satisfying (4).

The problem under consideration reads as follows.

Problem 6

Find \(u \in {\mathcal V}\) such that

$$\begin{aligned}&\langle A(t, u(t)), v \rangle _{V^* \times V} + \langle (\mathcal {S}u)(t), v \rangle _{V^* \times V} + \int _{\Omega } \varphi ^0 (x, t, M (u(t))(x); M v(x)) \, dx \nonumber \\&\quad \ge \langle {\widetilde{f}}(t), v \rangle _{V^* \times V} \ \ \ \text{ for } \text{ all } \ v \in V \ \text{ and } \text{ a.e. }\ t\in (0,T). \end{aligned}$$
(22)

We refer to Problem 6 as a history-dependent hemivariational inequality. In its study, in addition to assumptions (1), (2) and (5), we need the following hypothesis.

$$\begin{aligned} \left. \begin{array}{l} \varphi :\Omega \times (0, T) \times \mathbb {R}^s \rightarrow \mathbb {R}\ \text{ is } \text{ such } \text{ that } \\ \ \ \mathrm{(a)} \ \varphi (\cdot , \cdot , \xi ) \ \text{ is } \text{ measurable } \text{ on } \ \Omega \times (0, T) \ \text{ for } \text{ all } \ \xi \in \mathbb {R}^s \ \text{ and } \\ \qquad \ \ \ \varphi (\cdot , \cdot , e(\cdot )) \in L^1(\Omega \times (0, T)) \ \text{ with } \ e \in L^2(\Omega ; \mathbb {R}^s). \\ \ \ \mathrm{(b)} \ \varphi (x, t, \cdot ) \ \text{ is } \text{ locally } \text{ Lipschitz } \text{ on } \ \mathbb {R}^s \ \text{ for } \text{ a.e. } \ (x, t) \in \Omega \times (0, T). \\ \ \ \mathrm{(c)} \ \Vert \partial \varphi (x, t, \xi ) \Vert _{\mathbb {R}^s} \le {\overline{c}}_0(t) + {\overline{c}}_1 \, \Vert \xi \Vert _{\mathbb {R}^s} \ \text{ for } \text{ a.e. } \ (x, t) \in \Omega \times (0, T), \\ \qquad \ \ \ \text{ all } \ \xi \in \mathbb {R}^s \ \text{ with } \ {\overline{c}}_0(t), {\overline{c}}_1 \ge 0, {\overline{c}}_0 \in L^2(0, T). \\ \ \ \mathrm{(d)} \ (\zeta _1 - \zeta _2) \cdot (\xi _1 - \xi _2) \ge - {\overline{m}}_2 \Vert \xi _1 - \xi _2 \Vert ^2_{\mathbb {R}^s} \ \text{ for } \text{ all } \ \zeta _i, \xi _i \in \mathbb {R}^s, \\ \qquad \ \ \zeta _i \in \partial \varphi (x, t, \xi _i), i = 1, 2, \ \text{ a.e. } \ (x, t) \in \Omega \times (0, T) \\ \qquad \ \ \text{ with } \ {\overline{m}}_2 \ge 0. \end{array} \right\} \end{aligned}$$
(23)

In condition (23)(d) the dot denotes the inner product in \(\mathbb {R}^s\).

We have the following existence and uniqueness result.

Theorem 7

Assume that hypotheses (1), (2), (5), (23) are satisfied, the embedding operator \(M :V \rightarrow L^2(\Omega ; \mathbb {R}^s)\) is compact and, moreover,

$$\begin{aligned} \max \{ \, \sqrt{3} \, {\overline{c}}_1, {\overline{m}}_2 \, \} \Vert M \Vert ^2 < \min \{ \, \alpha , m_1 \, \}. \end{aligned}$$
(24)

Then Problem 6 has a solution \(u \in {\mathcal V}\). If, in addition,

$$\begin{aligned} \text{ either } \ \varphi (x, t, \cdot ) \ \text{ or } \ - \varphi (x, t, \cdot ) \ \text{ is } \text{ regular } \text{ on } \ \mathbb {R}^s \ \text{ for } \text{ a.e. } \ (x, t) \in \Omega \times (0, T), \end{aligned}$$
(25)

then the solution of Problem 6 is unique.

To provide the proof of Theorem 7 we start by introducing the functional \(J :(0, T) \times L^2(\Omega ; \mathbb {R}^s) \rightarrow \mathbb {R}\) defined by

$$\begin{aligned} J(t, v) = \int _{\Omega } \varphi (x, t, v(x)) \, dx \ \ \text{ for } \ v \in L^2(\Omega ; \mathbb {R}^s), \ \text{ a.e. } \ t \in (0, T). \end{aligned}$$
(26)

The following result on the properties of the functional \(J\) represents a direct consequence of Theorem 3.47 of [14].

Lemma 8

Assume that (23) holds. Then the functional \(J\) given by (26) satisfies the following properties.

(a) \(J(\cdot , v)\) is measurable on \((0, T)\) for all \(v \in L^2(\Omega ; \mathbb {R}^s)\).

(b) \(J(t, \cdot )\) is locally Lipschitz on \(L^2(\Omega ; \mathbb {R}^s)\) for a.e. \(t \in (0, T)\).

(c) \(\Vert \partial J(t, v) \Vert _{L^2(\Omega ; \mathbb {R}^s)} \le \sqrt{3\, \mathrm{meas}(\Omega )} \, {\overline{c}}_0 (t) + \sqrt{3} \, {\overline{c}}_1 \, \Vert v \Vert _{L^2(\Omega ; \mathbb {R}^s)}\) for all \(v \in L^2(\Omega ; \mathbb {R}^s)\), a.e. \(t \in (0, T)\).

(d) \((z_1 - z_2, w_1 - w_2)_{L^2(\Omega ; \mathbb {R}^s)} \ge - {\overline{m}}_2 \Vert w_1 - w_2 \Vert ^2_{L^2(\Omega ; \mathbb {R}^s)}\) for all \(z_i\), \(w_i \in L^2(\Omega ; \mathbb {R}^s)\), \(z_i \in \partial J(t, w_i)\), \(i=1\), \(2\), a.e. \(t \in (0, T)\).

(e) for all \(u\), \(v \in L^2(\Omega ; \mathbb {R}^s)\) and a.e. \(t \in (0, T)\), we have

$$\begin{aligned} J^0(t, u; v) \le \int _{\Omega } \varphi ^0(x, t, u(x); v(x)) \, dx, \end{aligned}$$

where \(J^0(t, u; v)\) denotes the generalized directional derivative of \(J(t, \cdot )\) at the point \(u \in L^2(\Omega ; \mathbb {R}^s)\) in the direction \(v \in L^2(\Omega ; \mathbb {R}^s)\).

Moreover, if (25) holds, then either \(J(t, \cdot )\) or \(- J(t, \cdot )\) is regular on \(L^2(\Omega ; \mathbb {R}^s)\) for a.e. \(t \in (0, T)\), respectively, and (e) holds with equality.

Proof of Theorem 7

We apply Theorem 5 with \(X = L^2(\Omega ; \mathbb {R}^s)\) and the functional \(J\) defined by (26). From Lemma 8 we know that \(J\) satisfies hypothesis (3) with \(c_0 = \sqrt{3\, \mathrm{meas}(\Omega )}\, {\overline{c}}_0\), \(c_1 = \sqrt{3}\, {\overline{c}}_1\) and \(m_2 = {\overline{m}}_2\), so that condition (24) guarantees the smallness assumption (6). By Theorem 5, we deduce that there exists a unique solution \(u \in {\mathcal V}\) of the operator inclusion

$$\begin{aligned} A(t, u(t)) + ({\mathcal S} u)(t) + M^* \partial J(t, M u(t)) \ni {\widetilde{f}}(t) \ \ \text{ a.e. } \ t \in (0,T). \end{aligned}$$

Exploiting condition (e) of Lemma 8, it follows that \(u \in {\mathcal V}\) is also a solution to Problem 6. Indeed, according to Definition 2, there exists \(\zeta \in L^2(0, T; X^*)\), \(\zeta (t) \in \partial J(t, M u(t))\) for a.e. \(t \in (0, T)\) such that

$$\begin{aligned} A(t, u(t)) + ({\mathcal S} u)(t) + M^* \zeta (t) = {\widetilde{f}}(t) \ \ \text{ a.e. } \ t \in (0, T). \end{aligned}$$

Hence, we obtain

$$\begin{aligned} \langle {\widetilde{f}}(t) - A(t, u(t)) - ({\mathcal S} u)(t), v \rangle _{V^* \times V}&= \langle M^* \zeta (t), v \rangle _{V^* \times V} \\&= \langle \zeta (t), M v \rangle _{X^*\times X} \le J^0 (t, M u(t); M v) \\&\le \int _{\Omega } \varphi ^0 (x, t, M (u(t))(x); M v(x)) \,dx \end{aligned}$$

for all \(v \in V\), a.e. \(t \in (0, T)\). It follows from the last inequality that \(u \in {\mathcal V}\) is a solution to Problem 6.

Next, we assume the regularity condition (25). In order to prove uniqueness of solutions to Problem 6, let \(u \in {\mathcal V}\) be a solution to Problem 6. By Lemma 8, we know that either \(J(t, \cdot )\) or \(- J(t, \cdot )\) is regular for a.e. \(t \in (0, T)\), and condition (e) of that lemma holds with equality. Therefore, using this equality, we have

$$\begin{aligned} \langle A(t, u(t)) + ({\mathcal S} u)(t) - {\widetilde{f}}(t), v \rangle _{V^* \times V} + J^0(t, M (u(t)); M v) \ge 0 \end{aligned}$$

for all \(v \in V\), a.e. \(t \in (0, T)\). Also, by Proposition 2.1(i) of [12], we obtain

$$\begin{aligned} \langle {\widetilde{f}}(t) - A(t, u(t)) - ({\mathcal S} u)(t), v \rangle _{V^* \times V} \le (J \circ M)^0 (t, u(t); v) \end{aligned}$$

for all \(v \in V\), a.e. \(t \in (0, T)\). Subsequently, using the definition of the Clarke subdifferential and Proposition 2.1(ii) of [12], the previous inequality implies

$$\begin{aligned} {\widetilde{f}}(t) - A(t, u(t)) - ({\mathcal S} u)(t) \in \partial (J \circ M) (t, u(t)) = M^* \partial J (t, M u(t)) \end{aligned}$$

for a.e. \(t \in (0, T)\). Therefore, we deduce that \(u \in {\mathcal V}\) is a solution to Problem 1. The uniqueness of solution to Problem 6 is now a consequence of the uniqueness part in Theorem 5. This concludes the proof of the theorem. \(\square \)

4 A Contact Model for an Elastic Beam

The physical setting and the process are as follows. An elastic beam occupies, in the reference configuration, the interval \([0, L]\) of the \(Ox\) axis; it is clamped at its left end and its right end is free. The beam is acted upon by an applied force of (linear) density \(f = f(x, t)\), where \(x\) is the spatial variable and \(t\) represents the time variable. Here \(t \in [0,T]\) with \(T>0\), and \([0, T]\) represents the time interval of interest. For \(x \in [0,L]\) and \(t \in [0,T]\) we denote by \(u = u(x,t)\) the vertical displacement of the beam. Everywhere in what follows, when the meaning is clear, we do not indicate explicitly the dependence of the various variables on \(x\), or on both \(x\) and \(t\). The beam may come into contact with an obstacle \(S\), parallel to the \(Ox\) axis, situated below the beam at the level \(g \le 0\). Note that \(g\) may depend on the spatial variable \(x\) but, for simplicity, we assume in what follows that it is a given constant. The obstacle is deformable and reactive. Therefore, penetration is allowed and it arises when \(g - u \ge 0\). Otherwise, when \(g - u < 0\), the beam is not in contact with the obstacle. The physical setting is depicted in Fig. 1.

Fig. 1  A beam in potential contact with an obstacle

We use the Euler–Bernoulli model for the beam and denote \(A_e = EI\), where \(I\) is the moment of inertia of the cross-section of the beam and \(E\) is its Young modulus. We have

$$\begin{aligned} \frac{d^2}{dx^2} \left( A_e \,\frac{d^2u}{dx^2}(x,t)\right) = f(x,t) + \xi (x,t) \quad \mathrm{in}\ \ (0,L)\times (0,T) \end{aligned}$$
(27)

which is the classical equilibrium equation of the beam, where \(\xi \) represents the contact force. We assume that this force has an additive decomposition of the form

$$\begin{aligned} \xi (x,t) = \xi ^D(x,t) + \xi ^M(x,t) \ \ \text{ for } \ \ (x, t) \in (0,L)\times (0,T), \end{aligned}$$
(28)

where

$$\begin{aligned}&-\xi ^D (x,t) \in \partial j(x,t, g - u(x,t)) \ \ \text{ for } \ \ (x, t) \in (0,L)\times (0,T),\end{aligned}$$
(29)
$$\begin{aligned}&-\xi ^M(x,t) = \displaystyle \int _0^t b(t-s)\,(g-u(x,s))^+ \,ds \ \ \text{ for } \ \ (x, t) \in (0,L)\times (0,T) \end{aligned}$$
(30)

with \(r^+ = \max \{ r, 0 \}\). Here and below the quantity \(g - u(x,t)\), when positive, represents a measure of the penetration of the point \(x\) of the beam into the obstacle at the time moment \(t\). The part \(\xi ^D\) of the force \(\xi \) describes the reaction of the obstacle due to its current deformability; it follows a normal compliance condition of Clarke-subdifferential type, as shown in (29). Concrete examples of such a condition can be found in [14]. The part \(\xi ^M\) of the force \(\xi \) describes the memory effects of the obstacle and satisfies condition (30), in which \(b\) is a given function. It follows from here that the memory effects of the obstacle depend on the history of the penetration. If \(b>0\) then \(\xi ^M>0\) and, therefore, \(\xi ^M\) describes a pressure towards the beam. If \(b<0\) then \(\xi ^M<0\) and, therefore, \(\xi ^M\) describes a force which pulls down the beam. Such behavior could arise in the case of adhesive contact, for instance.
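For intuition, the memory term (30) is easy to evaluate numerically from a stored history of the penetration. The following minimal Python sketch is purely illustrative: the names `memory_force` and `penetration_history`, as well as the rectangle quadrature rule, are our own choices and are not part of the model.

```python
import numpy as np

def memory_force(b, penetration_history, dt):
    """Approximate xi^M(x, t_k) from (30), where -xi^M(x, t_k) is the integral
    of b(t_k - s) * (g - u(x, s))^+ over (0, t_k).  Here `b` is a callable and
    `penetration_history[j]` stores (g - u(x, s_j))^+ at s_j = j * dt."""
    k = len(penetration_history) - 1            # current time index, t_k = k * dt
    t_k = k * dt
    s = dt * np.arange(k + 1)                   # history times s_0, ..., s_k
    kernel = np.array([b(t_k - sj) for sj in s])
    integral = np.sum(kernel * penetration_history) * dt   # rectangle rule
    return -integral                            # xi^M(x, t_k)

# Example: constant kernel b = 1 and a penetration growing linearly in time
history = np.linspace(0.0, 0.02, 11)            # nonnegative (g - u)^+ samples
print(memory_force(lambda r: 1.0, history, dt=0.1))
```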

Finally, since the beam is rigidly attached at its left end, we impose the condition

$$\begin{aligned} u(0,t) = \frac{du}{dx}(0,t) = 0 \quad \mathrm{for}\ \ t \in (0,T) \end{aligned}$$
(31)

and, since neither a bending moment nor a shear force acts at the free end of the beam, we have

$$\begin{aligned} \frac{d^2u}{dx^2}(L,t) = \frac{d^3u}{dx^3}(L,t) = 0 \quad \mathrm{for}\ \ t \in (0,T). \end{aligned}$$
(32)

We collect the equations and conditions above to obtain the following classical formulation of the contact problem.

Problem 9

Find a displacement field \(u :[0,L]\times [0,T]\rightarrow \mathbb {R}\) which satisfies relations (27)–(30), together with the boundary conditions (31) and (32).

We now turn to derive a weak or variational formulation of Problem 9. To this end, we assume in what follows that

$$\begin{aligned} A_e&\in L^\infty (0,L),\ \text{ there } \text{ is } \ m_A > 0 \ \mathrm{such\ that\ }A_e(x)\ge m_A \ \ \mathrm{a.e.}\ x \in (0,L).\end{aligned}$$
(33)
$$\begin{aligned} f&\in L^2(0,T;L^2(0,L)).\end{aligned}$$
(34)
$$\begin{aligned} b&\in L^\infty (0,T).\end{aligned}$$
(35)
$$\begin{aligned} g&\le 0. \end{aligned}$$
(36)

Also, the contact potential \(j\) satisfies the following hypothesis.

$$\begin{aligned} \left. \begin{array}{l} j:(0,L)\times (0,T)\times \mathbb {R} \rightarrow \mathbb {R}\ \text{ is } \text{ such } \text{ that }\\ \ \ \mathrm{(a)}\ j(\cdot , \cdot , r) \ \text{ is } \text{ measurable } \text{ on } \ (0,L)\times (0,T) \ \text{ for } \text{ all }\ r \in \mathbb {R} \ \text{ and } \text{ there } \\ \qquad \quad \text{ exists } \ e_1 \in L^2(0, L) \ \text{ such } \text{ that } \ j(\cdot , \cdot , e_1(\cdot )) \in L^1((0,L)\times (0,T)). \\ \ \ \mathrm{(b)}\ j(x, t, \cdot ) \ \text{ is } \text{ locally } \text{ Lipschitz } \text{ on } \ \mathbb {R}\ \text{ for } \text{ a.e. }\ (x, t) \in \ (0,L)\times (0,T).\\ \ \ \mathrm{(c)}\ | \partial j(x, t, r) | \le d_0(t) + d_1 | r | \ \text{ for } \text{ all } \ r \in \mathbb {R}, \ \text{ a.e. }\ (x, t) \in \ (0,L)\times (0,T) \\ \qquad \quad \text{ with } \ d_0(t), d_1 \ge 0, d_0 \in L^2(0, T).\\ \ \ \mathrm{(d)}\ (\zeta _1 - \zeta _2) (r_1 - r_2) \ge - m | r_1 - r_2 |^2\ \text{ for } \text{ all }\ \zeta _i \in \partial j(x, t, r_i), \\ \qquad \quad r_i \in \mathbb {R}, i =1, 2, \ \text{ a.e. } \ (x, t) \in \ (0,L)\times (0,T)\ \text{ with }\ m \ge 0. \end{array} \right\} \qquad \end{aligned}$$
(37)

In what follows we use the subscripts \(x\) and \(xx\) to denote the first and the second derivatives with respect to \(x\), respectively. We introduce the closed subspace of \(H^2(0,L)\) given by

$$\begin{aligned} V=\{\, v\in H^2(0,L) \mid v(0) = v_x(0) = 0 \, \}. \end{aligned}$$
(38)

We note that there exists \(c>0\) such that \(\Vert v \Vert _{L^2(0,L)} \le c \, \Vert v_x \Vert _{L^2(0,L)}\) for all \(v\in H^1(0,L)\) satisfying \(v(0)=0\); applying this inequality to \(v\) and to \(v_x\) (note that \(v_x(0)=0\) for \(v \in V\)), we find that, with a possibly different constant \(c > 0\),

$$\begin{aligned} \Vert v \Vert _{H^2(0,L)} \le c \, \Vert v_{xx} \Vert _{L^2(0,L)} \ \ \text{ for } \text{ all } \ v \in V. \end{aligned}$$
(39)

We consider now the inner product on \(V\) given by \((u,v)_V = (u_{xx}, v_{xx})_{L^2(0,L)}\) and let \(\Vert \cdot \Vert _V\) be the associated norm. Using (39) we find that \(\Vert \cdot \Vert _{H^2(0,L)}\) and \(\Vert \cdot \Vert _V\) are equivalent norms on \(V\) and, therefore, \((V,(\cdot ,\cdot )_V)\) is a real Hilbert space.

The next lemma provides a simple estimate of the embedding constant for the inclusion \(V\subset L^2(0,L)\).

Lemma 10

We have \(\Vert v\Vert _{L^2(0,L)}\le \frac{L^2}{3}\Vert v\Vert _V\) for all \(v\in V\).

Proof

Let \(v\in V\). For all \(y\in [0,L]\) we have

$$\begin{aligned} \int _0^y|v_{xx}(r)|\,dr\le \left( \int _0^y dr\right) ^{\frac{1}{2}}\left( \int _0^y|v_{xx}(r)|^2\,dr\right) ^{\frac{1}{2}} \le \sqrt{y}\,\Vert v_{xx}\Vert _{L^2(0,L)}. \end{aligned}$$

Hence, for all \(x\in [0,L]\), we obtain

$$\begin{aligned} |v(x)|&=\left| \int _0^x\left( \int _0^y v_{xx}(r)\,dr\right) dy\right| \le \int _0^x\left( \int _0^y|v_{xx}(r)|\,dr\right) dy\nonumber \\&\le \int _0^x \sqrt{y} \, \Vert v_{xx}\Vert _{L^2(0,L)}\,dy=\frac{2}{3} x^{\frac{3}{2}} \Vert v_{xx}\Vert _{L^2(0,L)}. \end{aligned}$$

It follows that

$$\begin{aligned} \Vert v\Vert _{L^2(0,L)}^2=\int _0^L|v(x)|^2\,dx\le \frac{4}{9} \Vert v_{xx}\Vert ^2_{L^2(0,L)}\int _0^Lx^{3}\,dx =\frac{L^4}{9} \Vert v_{xx}\Vert ^2_{L^2(0,L)}, \end{aligned}$$

whence the assertion follows. \(\square \)
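As a quick numerical sanity check of this bound (not needed for the analysis), one can evaluate both sides for a particular element of \(V\); the short Python sketch below does so for \(v(x) = x^2\), an arbitrary choice satisfying \(v(0) = v_x(0) = 0\).

```python
import numpy as np

L = 1.0
x = np.linspace(0.0, L, 10001)
dx = x[1] - x[0]

v = x**2                                # v(0) = v_x(0) = 0, so v belongs to V
v_xx = np.full_like(x, 2.0)             # second derivative of x**2

norm_L2 = np.sqrt(np.sum(v**2) * dx)    # ||v||_{L^2(0,L)}  (simple Riemann sum)
norm_V = np.sqrt(np.sum(v_xx**2) * dx)  # ||v||_V = ||v_xx||_{L^2(0,L)}

print(norm_L2, (L**2 / 3.0) * norm_V)   # approx. 0.447 <= 0.667, as predicted
```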

In addition, we consider the bilinear form \(a :V \times V \rightarrow \mathbb {R}\), and the operator \(\mathcal {S}:{\mathcal V} \rightarrow {\mathcal V}^*\) given by

$$\begin{aligned}&a(u, v)=\int _{0}^{L}\,A_e \,u_{xx}v_{xx}\,dx \ \ \text{ for } \text{ all } \ u, \,v \in V,\end{aligned}$$
(40)
$$\begin{aligned}&\langle (\mathcal {S}u)(t), v \rangle _{V^* \times V} = \displaystyle \int _0^L \left( \int _0^t b(t-s)\,(g-u(x, s))^+ \, ds \right) v(x) \, dx \\&\quad \text{ for } \text{ all } \ u \in {\mathcal V}, \, v \in V, \ \text{ a.e. } \ t \in (0, T).\nonumber \end{aligned}$$
(41)

We note that, by (33) and (35), the integrals in (40) and (41) are well defined. Moreover, the form \(a\) is bilinear, continuous, symmetric and coercive on \(V\).

Assume now that \(u\) is a sufficiently smooth solution of Problem 9, let \(v\) be an arbitrary element in \(V\) and let \(t\in [0,T]\). Then, it follows from (27) that

$$\begin{aligned} \int _0^L\,\frac{d^2}{dx^2}\left( A_e \,\frac{d^2u}{dx^2}(x,t)\right) v(x)\,dx =\int _0^L\,f(x,t)\,v(x)\,dx +\int _0^L\,\xi (x,t)\,v(x)\,dx. \end{aligned}$$

Performing two integrations by parts and using the boundary conditions (31) and (32), we have

$$\begin{aligned} \int _0^L\,A_e u_{xx}(x,t) \, v_{xx}(x)\,dx = \int _0^L\,f(x,t)\,v(x)\,dx + \int _0^L\,\xi (x,t)\,v(x)\,dx. \end{aligned}$$
(42)

On the other hand, using (28)–(30) and the definition of the Clarke subdifferential, we deduce that

$$\begin{aligned} \int _0^L\,\xi (x,t)\,v(x)\,dx&\ge -\int _0^L\,j^0(x,t,g-u(x,t); v(x)) \,dx \nonumber \\&\quad - \int _0^L \left( \int _0^t b(t-s)\,(g-u(x, s))^+ \, ds \right) v(x) \, dx. \end{aligned}$$
(43)

We now combine (42) and (43), use the notation (40) and (41), and omit the dependence of the various functions on \(x\). As a result, we obtain the following variational formulation of Problem 9.

Problem 11

Find a displacement field \(u :(0,T) \rightarrow V\) such that

$$\begin{aligned}&a(u(t),v)+\langle (\mathcal {S}u)(t), v \rangle _{V^* \times V} + \int _0^L\,j^0(t,g-u(t); v)\,dx \ge \int _0^Lf(t)\,v\,dx \end{aligned}$$

for all \(v \in V\), a.e. \(t \in (0, T)\).

Our main result in the study of Problem 11 is the following.

Theorem 12

Assume that (33)–(37) hold and

$$\begin{aligned} \max \{\, \sqrt{3} \, d_1, m \,\} L^4 < 9 \, m_A. \end{aligned}$$
(44)

Then Problem 11 has at least one solution \(u\in {\mathcal V}\). If, in addition,

$$\begin{aligned} \left. \begin{array}{l} \text{ either } \ \ j(x, t, \cdot ) \ \text{ or } \ - j(x, t, \cdot ) \ \text{ is } \text{ regular } \text{ on } \ \mathbb {R}\\ \text{ for } \text{ a.e. } \ (x, t) \in (0,L)\times (0,T), \end{array} \right\} \end{aligned}$$
(45)

then the solution of Problem 11 is unique.

Proof

We apply Theorem 7 with \(\Omega = (0, L)\), \(s = 1\) and \(V\) defined by (38). It is clear that the embedding operator \(M :V \rightarrow L^2(0,L)\) is compact. We define the operator \(A :V \rightarrow V^*\) by

$$\begin{aligned} \langle Au, v \rangle _{V^* \times V} = a(u, v) \ \ \text{ for } \text{ all } \ u, v \in V, \end{aligned}$$
(46)

the function \(\varphi :(0, L) \times (0, T) \times \mathbb {R}\rightarrow \mathbb {R}\) by

$$\begin{aligned} \varphi (x, t, r) = j(x, t, g - r) \ \ \text{ for } \text{ all } \ r \in \mathbb {R}, \ \text{ a.e. } \ (x, t) \in (0, L) \times (0, T) \end{aligned}$$
(47)

and we introduce the function \({\widetilde{f}} :(0, T) \rightarrow V^*\) by

$$\begin{aligned} \langle {\widetilde{f}}(t), v \rangle _{V^* \times V} = \int _0^L f(t) \, v \, dx \ \ \text{ for } \text{ all } \ v \in V, \, \text{ a.e. } \ t \in (0, T). \end{aligned}$$
(48)

First, since the form \(a\) defined by (40) is bilinear, continuous and coercive, the operator \(A\) given by (46) satisfies hypothesis (1) with \(\alpha = m_1 = m_A\). This follows from the fact that every bounded, hemicontinuous and monotone operator is pseudomonotone (cf. Proposition 27.6 of [21]).

Next, we show that the operator \(\mathcal {S}\) defined by (41) satisfies condition (2). Let \(u_1\), \(u_2 \in {\mathcal V}\). For \(v \in V\) and a.e. \(t \in (0, T)\), we have

$$\begin{aligned}&\langle (\mathcal {S}u_1)(t) - (\mathcal {S}u_2)(t), v \rangle _{V^* \times V} \\&\quad = \int _0^L \left( \int _0^t b(t-s)\, [ (g - u_1(x, s))^+ - (g - u_2(x, s))^+ ]\, ds \right) v(x) \, dx \\&\quad \le c\, \Vert \int _0^t b(t-s) \, [ (g - u_1(s))^+ - (g - u_2(s))^+ ]\, ds \Vert _{L^2(0, L)} \, \Vert v \Vert _V \end{aligned}$$

with \(c>0\). From this and the elementary inequality \(| a^+ - b^+| \le |a - b|\), valid for all \(a\), \(b \in \mathbb {R}\), it follows that

$$\begin{aligned} \Vert (\mathcal {S}u_1)(t) - (\mathcal {S}u_2)(t) \Vert _{V^*}&\le c\, \Vert b \Vert _{L^\infty (0, T)} \Vert \int _0^t | u_1(s) - u_2(s)| \, ds \Vert _{L^2(0, L)} \\&\le c\, \Vert b \Vert _{L^\infty (0, T)} \int _0^t \Vert u_1(s) - u_2(s)\Vert _{V} \, ds. \end{aligned}$$

Since \({\mathcal {S}} 0 = 0\), we easily infer that \(\Vert \mathcal {S}u \Vert _{\mathcal V^*} \le c\, T \, \Vert b \Vert _{L^\infty (0, T)} \Vert u \Vert _{\mathcal V}\) for all \(u \in {\mathcal V}\). This implies that the operator \(\mathcal {S}\) is well defined, takes values in \({\mathcal V^*}\) and condition (2) holds with \(L_{\mathcal S} = c\, \Vert b \Vert _{L^\infty (0, T)}\).

Subsequently, we prove that the function \(\varphi \) given by (47) satisfies hypothesis (23). Indeed, from (a) and (b) of (37), it is clear that (a) and (b) of (23) hold. Since \(\partial \varphi (x, t, r) = - \partial j(x, t, g - r)\) for all \(r \in \mathbb {R}\), a.e. \((x, t) \in (0, L) \times (0, T)\), we infer that condition (23)(c) is satisfied with \({\overline{c}}_0(t) = d_0(t) + d_1 |g|\) and \({\overline{c}}_1 = d_1\).

Let \(r_i\), \(s_i \in \mathbb {R}\), \(s_i \in \partial \varphi (x, t, r_i)\), \(i=1\), \(2\) with \((x, t) \in (0, L) \times (0, T)\). Thus \(s_i = -\zeta _i\), \(\zeta _i \in \partial j(x, t, g - r_i)\) and condition (37)(d) implies \((\zeta _1 - \zeta _2)(r_2 - r_1) \ge - m | r_1 - r_2|^2\). Hence

$$\begin{aligned} (s_1 - s_2)(r_1 - r_2) = (-\zeta _1 + \zeta _2)(r_1 - r_2) = (\zeta _1 - \zeta _2)(r_2 - r_1) \ge - m |r_1 - r_2 |^2 \end{aligned}$$

which proves (23)(d) with \({\overline{m}}_2 = m\). Hence condition (23) follows.

It is clear that the function \({\widetilde{f}}\) defined by (48) satisfies, by Lemma 10, the inequality \(\Vert {\widetilde{f}} \Vert _{\mathcal V^*} \le \frac{L^2}{3}\, \Vert f \Vert _{L^2(0, T; L^2(0, L))}\), so it satisfies condition (5). For the embedding operator, again by Lemma 10, we have \(\Vert M\Vert \le \frac{L^2}{3}\). Thus, condition (44) implies hypothesis (24). We deduce from Theorem 7 that Problem 11 has a solution \(u \in L^2(0, T; V)\).

Finally, if, in addition, the regularity hypothesis (45) holds, then by Proposition 3.37 of [14], we conclude that the function \(\varphi \) given by (47) satisfies condition (25). Therefore the uniqueness of solution is a consequence of Theorem 7. This concludes the proof of the theorem. \(\square \)

5 Numerical Approximation

In this section we formulate the Galerkin scheme for Problem 11 and describe the primal-dual active set technique that is used to solve the approximate problem efficiently.

Let \(0 = x_0 < x_1 < \cdots < x_N = L\) be a partition of the interval \([0, L]\) such that \(x_i = i h\) for \(i=0\), \(\ldots \), \(N\) and \(h=L/N\). Let \(I_n = (x_{n-1}, x_n)\) for \(n=1\), \(\ldots \), \(N\). We define the following finite element space

$$\begin{aligned} V_h = \{ \, v\in C^1([0,L]) \mid \ v_{|I_n} \in \mathbb {P}_3 (I_n) \ \ \text{ for } \text{ all } \ n = 1,\ldots ,N, \ v(0) = v_{x}(0) = 0 \, \}, \end{aligned}$$

where \(\mathbb {P}_m(K)\) denotes the space of polynomials of degree \(\le m\) on an interval \(K\). The basis of this space consists of the functions

$$\begin{aligned}&v_n \in V_h \ \ \text{ such } \text{ that } \ \ v_n(x_i) = \delta _{ni}, \ v_n'(x_i) = 0 \ \text{ for } \ i, n=1, \ldots , N,\\&v_{n+N}\in V_h \ \ \text{ such } \text{ that } \ \ v_{n+N}(x_i) = 0, \ v_{n+N}'(x_i)=\delta _{ni} \ \text{ for }\ i,n=1,\ldots ,N, \end{aligned}$$

where \(\delta _{ni}\) denotes the Kronecker delta. We also introduce the space

$$\begin{aligned} W_h = \{ \, w :[0,L] \rightarrow \mathbb {R}\mid w|_{I_n} \in \mathbb {P}_0(I_n)\ \ \text{ for } \text{ all } \ n=1,\ldots ,N \, \} \end{aligned}$$

and the projection operator \(\Pi _h :L^1(0,L)\rightarrow W_h\) given by

$$\begin{aligned} \Pi _h(v)(x) = \frac{1}{h} \int _{x_{n-1}}^{x_n} v(y) \, dy \ \ \text{ for }\ \ x \in I_n, \ n=1, \ldots , N. \end{aligned}$$

The basis of the space \(W_h\) consists of the characteristic functions of intervals \(I_i\) for \(i=1\), \(\ldots \), \(N\). We are now in a position to define the semidiscrete version of Problem 11.
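A minimal Python sketch of the ingredients of this discretization is given below: it evaluates the standard cubic Hermite shape functions that generate \(V_h\) on a reference element and applies the piecewise constant projection \(\Pi _h\). The helper names are ours, the projection is approximated by a midpoint quadrature, and assembly details are omitted.

```python
import numpy as np

def hermite_shape(xi):
    """Cubic Hermite shape functions on the reference element [0, 1]:
    N1, N2 interpolate the nodal values, N3, N4 the nodal derivatives
    (the derivative functions must be scaled by the element length h)."""
    N1 = 1 - 3 * xi**2 + 2 * xi**3
    N2 = 3 * xi**2 - 2 * xi**3
    N3 = xi - 2 * xi**2 + xi**3
    N4 = -xi**2 + xi**3
    return N1, N2, N3, N4

def project_piecewise_constant(v, nodes):
    """Pi_h: cell averages of a callable v over the intervals I_n = (x_{n-1}, x_n),
    here approximated by the midpoint value of v on each interval."""
    mids = 0.5 * (nodes[:-1] + nodes[1:])
    return np.array([v(m) for m in mids])

# Example: N = 4 elements on [0, L]
L, N = 1.0, 4
nodes = np.linspace(0.0, L, N + 1)
print(hermite_shape(0.5))                          # shape functions at the midpoint
print(project_piecewise_constant(np.sin, nodes))   # Pi_h applied to sin
```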

Problem 13

Find a displacement \(u_h :[0,T] \rightarrow V_h\) and a reaction force \(\xi ^D_h :[0,T] \rightarrow W_h\) such that

$$\begin{aligned}&a(u_h(t),v_h) + \int _0^t \int _0^L b(t-s) (\Pi _h(g-u_h(s)))^+ v_h \, dx \,ds \\&= \int _0^L \xi ^D_h(t) v_h \, dx+ \int _0^L f(t) v_h \, dx \end{aligned}$$

for all \(v_h \in V_h\), all \(t\in [0,T]\), where

$$\begin{aligned} -\xi ^D_h(x,t) \in \partial j(t,\Pi _h(g-u_h(t))(x))\ \ \text{ for } \text{ all } \ \ (x,t) \in [0,L] \times [0,T]. \end{aligned}$$

In what follows, for the convenience of the reader we assume that in Problem 13, the function \(j\) does not depend on \(x\).

Now we pass to the fully discrete formulation. To this end we introduce the time mesh \(0 = t_0 < t_1 < \cdots < t_M = T\), where \(t_k = k \tau \) for \(k=0\), \(\ldots \), \(M\) and \(\tau =T/M\). The time integrals will be approximated by the trapezoidal rule

$$\begin{aligned} \int _0^{t_k}h(t)\,dt\simeq \frac{\tau }{2}h(0)+\tau \sum _{j=1}^{k-1}h(t_j)+\frac{\tau }{2}h(t_k) \ \ \text{ for } \ k = 1, \ldots , M. \end{aligned}$$

For a function \(h :\{t_0, t_1, \ldots , t_M \}\rightarrow Y\), where \(Y\) is a vector space, we denote \(h^j = h(j \tau )\) for \(j = 0\), \(\ldots \), \(M\). The fully discrete problem corresponding to Problem 13 reads as follows.

Problem 14

Find \((u^k_{h\tau }, \xi ^{Dk}_{h\tau }) \in V_h \times W_h\) for \(k=0,\ldots ,M\) such that

$$\begin{aligned} a(u^0_{h\tau },v_h) - \int _0^L \xi ^{D0}_{h\tau }\, v_h \, dx = \int _0^L f^0\, v_h \, dx \ \ \text{ for } \text{ all } \ v_h \in V_h \end{aligned}$$
(49)

and

$$\begin{aligned}&a(u^k_{h\tau },v_h) + \frac{\tau }{2}\int _0^Lb^0(\Pi _h(g-u^k_{h\tau }))^+v_h\,dx - \int _0^L\xi ^{Dk}_{h\tau }\,v_h\,dx \nonumber \\&\quad = -\frac{\tau }{2}\int _0^Lb^k(\Pi _h(g-u^0_{h\tau }))^+v_h\,dx - \tau \sum _{j=1}^{k-1}\int _0^Lb^{k-j}(\Pi _h(g-u^j_{h\tau }))^+ v_h \,dx \nonumber \\&\qquad + \int _0^L f^k\, v_h\,dx \ \ \text{ for } \text{ all } \ v_h \in V_h, \ \text{ for } \text{ all } \ k = 1, \ldots , M, \end{aligned}$$
(50)

where

$$\begin{aligned} -\xi ^{Dk}_{h\tau }(x) \in \partial j(t_k,\Pi _h(g-u^k_{h\tau })(x))\ \ \text{ for } \text{ all } \ \ x \in [0,L], \ k = 0, \ldots , M. \end{aligned}$$

Note that, at the time step \(k\), the terms on the left-hand side of (50) depend on the unknown functions \(u^k_{h\tau }\) and \(\xi ^{Dk}_{h\tau }\), while the terms on the right-hand side of (50) depend on the history values \(u^j_{h\tau }\) for \(j=0,\ldots ,k-1\) and on the function \(f^k\). Hence, we first calculate \(u^0_{h\tau }\) and \(\xi ^{D0}_{h\tau }\) from (49), and then solve (50) recursively to obtain the values \(u^k_{h\tau }\) and \(\xi ^{Dk}_{h\tau }\) at the consecutive points of the time mesh. The solution of (49), (50) in each time step can be obtained by means of the primal-dual active set strategy.
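To make the structure of the right-hand side of (50) explicit, the following sketch assembles the history contribution by the trapezoidal rule. Here the vectors G[j], standing for the integrals \(\int _0^L (\Pi _h(g-u^j_{h\tau }))^+ v_h\,dx\) over the basis functions of \(V_h\), are assumed to be precomputed, and all names are placeholders rather than part of the scheme.

```python
import numpy as np

def history_rhs(b, k, tau, G):
    """History term on the right-hand side of (50), trapezoidal rule in time.

    b : callable memory function, so that b^j = b(j*tau)
    G : list of vectors, G[j][i] ~ int_0^L (Pi_h(g - u^j))^+ v_i dx for the
        basis functions v_i of V_h (precomputed elsewhere)
    Returns -tau/2 * b^k * G[0] - tau * sum_{j=1}^{k-1} b^{k-j} * G[j].
    """
    rhs = -0.5 * tau * b(k * tau) * np.asarray(G[0])
    for j in range(1, k):
        rhs = rhs - tau * b((k - j) * tau) * np.asarray(G[j])
    return rhs

# toy usage with two basis functions and fabricated G-vectors (illustration only)
G = [np.array([0.1, 0.2]), np.array([0.05, 0.1]), np.array([0.0, 0.05])]
print(history_rhs(lambda s: -np.exp(-s), k=3, tau=0.1, G=G))
```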

We now describe the primal-dual active set strategy in the case the superpotential \(j:\mathbb {R}\rightarrow \mathbb {R}\) is given by

$$\begin{aligned} j(s) = {\left\{ \begin{array}{ll} 0 &{}\quad \text {for}\ s \le 0,\\ -\frac{1}{2} \, \lambda \, s^2 &{}\quad \text {for} \ s \in (0,R),\\ -\frac{1}{2} \, \lambda \, R^2 &{}\quad \text {for} \ s \ge R, \end{array}\right. } \end{aligned}$$
(51)

where \(R>0\) is the threshold above which the obstacle breaks and no reaction occurs any more, and \(\lambda >0\) is the foundation compliance coefficient. We remark that this method can be easily generalized to the case where the graph of \(\partial j\) consists of a finite number of line segments. It is clear that the Clarke subdifferential of the function \(j\) defined by (51) is of the form

$$\begin{aligned} \partial j(s) = {\left\{ \begin{array}{ll} \{ 0 \}&{}\quad \text {for}\ s \le 0,\\ \{ -\lambda s \}&{}\quad \text {for}\ s \in (0,R),\\ \,[-\lambda R,0] &{}\quad \text {for}\ s=R,\\ \{0\}&{}\quad \text {for} \ s > R. \end{array}\right. } \end{aligned}$$
(52)

Clearly, \(j\) defined by (51) satisfies (37)(a), (b). From (52), it follows that (37)(c) holds with \(d_1=0\) and \(d_2\equiv \lambda R\). Moreover, (37)(d) holds with \(m=\lambda \). Indeed, if we define \(j_1(s)=j(s)+\frac{1}{2}\lambda s^2\) then it is easy to see that \(j_1\) is convex and \(\partial j_1 (s) = \partial j(s)+\{\lambda s\}\) for all \(s\in \mathbb {R}\). Now, condition (37)(d) with constant \(m=\lambda \) is a consequence of the fact that \(\partial j_1\) is monotone.
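For reference, a direct encoding of (51) and (52) could look as follows; this is only a sketch, the function names are ours, and the default values \(\lambda =1\), \(R=0.5\) are those used in the simulations below.

```python
def j(s, lam=1.0, R=0.5):
    """Superpotential (51)."""
    if s <= 0.0:
        return 0.0
    if s < R:
        return -0.5 * lam * s**2
    return -0.5 * lam * R**2

def dj(s, lam=1.0, R=0.5):
    """Clarke subdifferential (52), returned as an interval [lo, hi]."""
    if s <= 0.0 or s > R:
        return (0.0, 0.0)        # no reaction: s <= 0 or broken obstacle
    if s < R:
        return (-lam * s, -lam * s)   # linear branch
    return (-lam * R, 0.0)       # s == R: the vertical segment of the graph
```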

We put \(A_e\equiv 1\). Then \(a(u,v) = \int _0^L u_{xx}v_{xx}\, dx\) and \(m_A = 1\) in (33). Furthermore, we assume the following smallness condition on the beam length

$$\begin{aligned} \lambda L^4 < 9. \end{aligned}$$

Then condition (44) holds and we can apply Theorem 12.

To solve the discretized problem (49), (50) in a given time step, it is required to know the relation between \(\Pi _h(g-u^k_{h\tau })\) and \(-\xi ^{Dk}_{h\tau }\) on every interval of the space mesh. It is enough to know to which of the four segments of the graph (52) the pair \((\Pi _h(g-u^k_{h\tau }), -\xi ^{Dk}_{h\tau })\) belongs. Therefore, we divide the set \(\{1,\ldots ,N\}\) into four disjoint subsets \(\displaystyle \{1,\ldots ,N\} = \cup _{j=1}^4 A_j\), where

$$\begin{aligned} i \in A_1&\Leftrightarrow \ \Pi _h(g-u^k_{h\tau })(x)\le 0 \ \ \text{ and }\ \ \xi ^{Dk}_{h\tau }(x)=0, \\ i \in A_2&\Leftrightarrow \Pi _h(g-u^k_{h\tau })(x)\in (0,R) \ \ \text{ and } \ \ \xi ^{Dk}_{h\tau }(x)=\lambda \,\Pi _h(g-u^k_{h\tau })(x), \\ i\in A_3&\Leftrightarrow \ \Pi _h(g-u^k_{h\tau })(x)= R \ \ \text{ and } \ \ \xi ^{Dk}_{h\tau }(x)\in [0,\lambda R],\\ i \in A_4&\Leftrightarrow \ \Pi _h(g-u^k_{h\tau })(x) > R \ \ \text{ and } \ \ \xi ^{Dk}_{h\tau }(x)=0, \end{aligned}$$

for \(x\in I_i\), \(i=1,\ldots ,N\). If the above division is known, then finding the solution reduces to solving a system of \(3N\) linear equations, in which \(2N\) unknowns are the coefficients of \(u_{h\tau }^k\) in the basis of \(V_h\) and the remaining \(N\) unknowns are the values of \(\xi ^{Dk}_{h\tau }\) on the corresponding intervals. However, since we do not know this division a priori, we need to apply the following iterative procedure to find it.

Step 0. In the first time step set \(A_1^{(0)}:=\{1,\ldots ,N\}\), \(A_2^{(0)}=A_3^{(0)}=A_4^{(0)}:=\emptyset \), \(l:=1\). In the following time steps initialize the sets by taking them from the previous time step.

Step 1. \(A_1^{(l)}:=A_1^{(l-1)}\), \(A_2^{(l)}:=A_2^{(l-1)}\), \(A_3^{(l)}:=A_3^{(l-1)}\), \(A_4^{(l)}:=A_4^{(l-1)}\).

Step 2. Solve the following auxiliary linear problem (here only the case \(k\ge 1\) is considered, the case \(k=0\) is analogous).

Problem 15

Find \((u^{k(l)}_{h\tau },\xi ^{Dk(l)}_{h\tau })\in V_h\times W_h\) such that

$$\begin{aligned}&a(u^{k(l)}_{h\tau },v_h) + \frac{\tau }{2}\int _{[0,L]\setminus \bigcup _{i\in A_1^{(l)}} I_i}b^0(\Pi _h(g-u^{k(l)}_{h\tau }))^+ \, v_h\,dx - \int _0^L\xi ^{Dk(l)}_{h\tau }\, v_h \, dx \nonumber \\&\quad = -\frac{\tau }{2}\int _0^Lb^k(\Pi _h(g-u^0_{h\tau }))^+ v_h \, dx - \tau \sum _{j=1}^{k-1}\int _0^Lb^{k-j}(\Pi _h(g-u^j_{h\tau }))^+ v_h \, dx \nonumber \\&\qquad + \int _0^Lf^k \, v_h\,dx \ \ \text{ for } \text{ all } \ \ v_h\in V_h \end{aligned}$$
(53)

with

$$\begin{aligned}&\xi ^{Dk(l)}_{h\tau }(x) = 0 \ \ \text{ for } \ \ x\in I_i \ \ \text{ and }\ \ i\in A_1^{(l)} \cup A_4^{(l)}, \end{aligned}$$
(54)
$$\begin{aligned}&\xi ^{Dk(l)}_{h\tau }(x)=\lambda \Pi _h(g-u^{k(l)}_{h\tau })(x)\ \ \text{ for } \ \ x\in I_i \ \ \text{ and } \ \ i\in A_2^{(l)},\end{aligned}$$
(55)
$$\begin{aligned}&\Pi _h(g-u^{k(l)}_{h\tau })(x)= R \ \ \text{ for } \ \ x\in I_i \ \ \text{ and } \ \ i\in A_3^{(l)}. \end{aligned}$$
(56)

After representing \(u^{k(l)}_{h\tau }\) in the basis of \(V_h\) and \(\xi ^{Dk(l)}_{h\tau }\) in the basis of \(W_h\), and substituting the basis functions of \(V_h\) for \(v_h\), equation (53) leads to a system of \(2N\) linear equations. The remaining \(N\) linear equations are obtained from (54)–(56) for \(i=1,\ldots ,N\).

Step 3. Sets update. The sets in each iteration step are updated according to the following rules. For \(i=1,\ldots ,N\)

  • if \(i\in A_1^{(l-1)}\) and \(\Pi _h(g-u_{h\tau }^{k(l)})(x)> 0\) for \(x\in I_i\), then \(A_1^{(l)}:=A_1^{(l)}\setminus \{i\}\) and \(A_2^{(l)}:=A_2^{(l)}\cup \{i\}\),

  • if \(i\in A_2^{(l-1)}\) and \(\Pi _h(g-u_{h\tau }^{k(l)})(x)<0\) for \(x\in I_i\), then \(A_2^{(l)}:=A_2^{(l)}\setminus \{i\}\) and \(A_1^{(l)}:=A_1^{(l)}\cup \{i\}\),

  • if \(i\in A_2^{(l-1)}\) and \(\Pi _h(g-u_{h\tau }^{k(l)})(x)>R\) for \(x\in I_i\), then \(A_2^{(l)}:=A_2^{(l)}\setminus \{i\}\) and \(A_3^{(l)}:=A_3^{(l)}\cup \{i\}\),

  • if \(i\in A_3^{(l-1)}\) and \(\xi ^{Dk(l)}_{h\tau }(x)>\lambda R\) for \(x\in I_i\), then \(A_3^{(l)}:=A_3^{(l)}\setminus \{i\}\) and \(A_2^{(l)}:=A_2^{(l)}\cup \{i\}\),

  • if \(i\in A_3^{(l-1)}\) and \(\xi ^{Dk(l)}_{h\tau }(x)<0\) for \(x\in I_i\), then \(A_3^{(l)}:=A_3^{(l)}\setminus \{i\}\) and \(A_4^{(l)}:=A_4^{(l)}\cup \{i\}\),

  • if \(i\in A_4^{(l-1)}\) and \(\Pi _h(g-u_{h\tau }^{k(l)})(x)<R\) for \(x\in I_i\), then \(A_4^{(l)}:=A_4^{(l)}\setminus \{i\}\) and \(A_3^{(l)}:=A_3^{(l)}\cup \{i\}\).

Step 4. If \(A_1^{(l)}=A_1^{(l-1)}\) and \(A_2^{(l)}=A_2^{(l-1)}\) and \(A_3^{(l)}=A_3^{(l-1)}\) and \(A_4^{(l)}=A_4^{(l-1)}\), then STOP, else \(l:=l+1\), and go to Step 1.

From the construction of the above algorithm it follows that, once it stops, the obtained pair \((u^{k(l)}_{h\tau },\xi ^{Dk(l)}_{h\tau })\) solves Problem 14 at the time step \(k\).
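A compact sketch of the update in Step 3 is given below; here w[i] and xi[i] stand for the constant values of \(\Pi _h(g-u^{k(l)}_{h\tau })\) and \(\xi ^{Dk(l)}_{h\tau }\) on the interval \(I_i\), and all identifiers are our own. The stopping test of Step 4 then amounts to checking that the returned sets coincide with the input ones.

```python
def update_sets(A1, A2, A3, A4, w, xi, lam=1.0, R=0.5):
    """One pass of Step 3: move interval indices between the sets
    A1 (no contact), A2 (active, linear branch), A3 (threshold s = R)
    and A4 (broken obstacle), based on the last auxiliary solution.

    Membership tests use the previous sets A1, ..., A4; the updates are
    applied to copies N1, ..., N4, as in Steps 1 and 3 of the algorithm.
    """
    N1, N2, N3, N4 = set(A1), set(A2), set(A3), set(A4)
    for i in A1:
        if w[i] > 0.0:
            N1.discard(i); N2.add(i)
    for i in A2:
        if w[i] < 0.0:
            N2.discard(i); N1.add(i)
        elif w[i] > R:
            N2.discard(i); N3.add(i)
    for i in A3:
        if xi[i] > lam * R:
            N3.discard(i); N2.add(i)
        elif xi[i] < 0.0:
            N3.discard(i); N4.add(i)
    for i in A4:
        if w[i] < R:
            N4.discard(i); N3.add(i)
    return N1, N2, N3, N4
```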

We now provide numerical simulations for the following data: \(R=0.5\), \(\lambda =1\), \(g=-0.1\), \(L=1\), \(T=5\) and \(f(x)=-5\) for all \(x \in [0, 1]\). Five examples of the memory function \(b\) were considered:

$$\begin{aligned} b_1(s)=0, \quad b_2(s)=-1, \quad b_3(s)=-e^{-s}, \quad b_4(s)=1, \quad b_5(s)=e^{-s} \end{aligned}$$

for all \(s\in [0,5]\). The history terms in the right-hand side of (50) were recorded and updated in consecutive time steps according to the following formulas. For \(b_2\), we put

$$\begin{aligned} B_2^k = \frac{\tau }{2}\int _0^L(\Pi _h(g-u^0_{h\tau }))^+v_h\,dx + \tau \sum _{j=1}^{k-1}\int _0^L(\Pi _h(g-u^j_{h\tau }))^+v_h\,dx \end{aligned}$$

and hence we have the recursive scheme

$$\begin{aligned} B_2^{1}=\frac{\tau }{2}\int _0^L(\Pi _h(g-u^0_{h\tau }))^+v_h\,dx, \ \ B_2^{k+1}=B_2^{k}+\tau \int _0^L(\Pi _h(g-u^k_{h\tau }))^+v_h\,dx. \end{aligned}$$

For \(b_3\), we put

$$\begin{aligned} B_3^k = \frac{\tau }{2}\int _0^L e^{-k\tau }(\Pi _h(g-u^0_{h\tau }))^+v_h\,dx + \tau \sum _{j=1}^{k-1}\int _0^L e^{-(k-j)\tau }(\Pi _h(g-u^j_{h\tau }))^+v_h\,dx \end{aligned}$$

which leads to the recursive scheme

$$\begin{aligned} B_3^{1}=\frac{\tau }{2}\int _0^L e^{-\tau }(\Pi _h(g-u^0_{h\tau }))^+v_h\,dx, \ \ B_3^{k+1}=e^{-\tau }B_3^{k}+\tau \int _0^L e^{-\tau }(\Pi _h(g-u^k_{h\tau }))^+v_h\,dx. \end{aligned}$$

The above formulas allow us to avoid storing the whole history of the solution needed to compute the right-hand side of (50).
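In code, the recursion for \(B_3\) can be maintained with constant storage per basis function. The sketch below follows the recursion written above; the class and variable names are ours, and G_k denotes the vector of integrals \(\int _0^L(\Pi _h(g-u^k_{h\tau }))^+v_h\,dx\), assumed to be supplied by the assembly routine.

```python
import numpy as np

class ExponentialHistory:
    """Running history term for the memory kernel b(s) = -e^{-s}.

    Maintains B^k = (tau/2) e^{-k tau} G^0 + tau * sum_{j=1}^{k-1} e^{-(k-j) tau} G^j
    without storing the individual vectors G^j.
    """
    def __init__(self, tau, G0):
        self.tau = tau
        self.decay = np.exp(-tau)
        self.B = 0.5 * tau * self.decay * np.asarray(G0)   # B^1

    def advance(self, Gk):
        """Update B^k -> B^{k+1} once the step-k solution (hence G^k) is known."""
        self.B = self.decay * (self.B + self.tau * np.asarray(Gk))
        return self.B

# toy usage with two basis functions (illustration only)
hist = ExponentialHistory(tau=0.1, G0=[0.2, 0.1])
print(hist.B)                        # B^1
print(hist.advance([0.18, 0.09]))    # B^2
```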

The space interval \([0,1]\) was divided into \(30\) elements of equal length, which resulted in \(60\) basis functions of \(V_h\) (\(30\) for the values of the function and \(30\) for the values of its derivative). The time step length was taken equal to \(0.1\). The deformed configuration of the beam after \(0, 10, 20, 30, 40\) and \(50\) time steps (which corresponds to \(t=0, 1, 2, 3, 4\) and \(5\)) is shown in Fig. 2.

Fig. 2 The deformed configuration of the beam for the memory functions \(b_1,\ldots ,b_5\) for time instants \(t=0,1,2,3,4,5\). For the memory function \(b_1\), the solutions, which are time independent, are plotted for various values of \(\lambda \)

A quick analysis of the results presented in this figure leads to the following comments.

First, Fig. 2a corresponds to the case when the memory function \(b\) vanishes. In this case the obstacle does not produce memory effects and, therefore, the process is stationary. The solutions are plotted for various values of \(\lambda \). Note that for \(\lambda =0\) the exact solution is given by \(u(x) = -\frac{5}{24}x^4+\frac{5}{6}x^3-\frac{5}{4}x^2\) for \(x\in [0,1]\). Figure 2b corresponds to the case when the memory function \(b\) is negative, which introduces an additional reaction of the obstacle towards the beam. The process is evolutionary and the penetration at the right extremity of the beam (i.e., at the point \(x=1\)) is decreasing in time, since the beam is pushed up. This case corresponds to a hardening of the obstacle. A similar behavior of the solution is obtained in Fig. 2c which, again, corresponds to a negative memory function \(b\) and describes a hardening of the obstacle. Figure 2d and e correspond to the case when the memory function \(b\) is positive. In these cases the process is evolutionary but the penetration at the right extremity of the beam is increasing in time, since the beam is pulled down. This situation corresponds to a softening of the obstacle. We conclude from the above that our model of contact describes both the hardening and the softening of the obstacle’s surface.
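The closed-form solution quoted above for \(\lambda =0\) (a clamped-free beam with \(u_{xxxx}=f\), \(f=-5\), since \(A_e\equiv 1\)) can be verified symbolically; the short sketch below, using sympy, is only a check and not part of the numerical scheme.

```python
import sympy as sp

x = sp.symbols('x')
u = sp.Rational(-5, 24) * x**4 + sp.Rational(5, 6) * x**3 - sp.Rational(5, 4) * x**2

# Cantilever beam: u'''' = f with f = -5, clamped at x = 0, free at x = 1
print(sp.diff(u, x, 4))                                          # -5, i.e. equals f
print(u.subs(x, 0), sp.diff(u, x).subs(x, 0))                    # clamped end: u(0) = u'(0) = 0
print(sp.diff(u, x, 2).subs(x, 1), sp.diff(u, x, 3).subs(x, 1))  # free end: u''(1) = u'''(1) = 0
```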

Moreover, we note that the penetration at \(x=1\) is less important in the case (b) in comparison with that in the case (c), at each time moment. The reason arises in the inequality

$$\begin{aligned} e^{-s}\le 1, \end{aligned}$$

valid for all \(s\in [0,5]\), which shows that the part of the reaction of the obstacle that is due to the memory is more important in the case (b) than in the case (c). Consequently, the surface hardening is more important in the case (b) than in the case (c). We have a similar comment concerning the cases (d) and (e): the penetration at \(x=1\) is more important in the case (d) in comparison with that in the case (e), at each time moment. This shows that the softening of the obstacle is more important in the case (d) than in the case (e).

Finally, we note that in the cases (c) and (e) the penetration at \(x=1\) seems to stabilize to a limit value as time tends to infinity. The reason arises in the limit

$$\begin{aligned} \lim _{s\rightarrow \infty }e^{-s}=0 \end{aligned}$$

which implies that, for large time intervals, the variation of the memory effects of the contact is very small. Therefore, these effects do not produce extra hardening or softening.

We conclude this section with some remarks on the convergence of the proposed numerical scheme. We ran the simulation for the case without memory (\(b\equiv b_1\)) with \(\lambda =1\), \(L=1\) and \(f(x)=-6\) for \(x\in [0,1]\), on various meshes, to see whether the contact area converges. The results are presented in Table 1. They provide numerical evidence that the contact area converges as the space step length decreases.

Table 1 Cardinalities of sets \(A_1,\ldots ,A_4\) for five space meshes

The convergence of the solutions of the fully discretized problems to the solution of the original problem, as well as the derivation of error estimates and of the convergence order, remain open. We expect that error estimates can be obtained by generalizing the methods of [1]. Since no time derivatives are present in the problem under consideration and the time stepping scheme is implicit, we expect that the convergence holds without any additional relation between the time step \(\tau \) and the space step \(h\).

The number of steps of the primal-dual active set algorithm required for convergence was never greater than four. For finer meshes, in order to decrease this number, instead of starting from the configuration in which all intervals belong to \(A_1\), we started from the configuration obtained from the solution on the coarser mesh.