1 Introduction

We consider a bounded domain \(\Omega \) in \({\mathbb {R}}^d\) whose regular boundary \(\Gamma \) is the union of three mutually disjoint portions \(\Gamma _{i}\), \(i=1\), 2, 3 with \(|\Gamma _{i}|>0\), where \(|\Gamma _i|\) denotes the \((d-1)\)-dimensional Hausdorff measure of the portion \(\Gamma _i\) on \(\Gamma \). The outward unit normal vector on the boundary is denoted by \(n\). We formulate the following two steady-state heat conduction problems with mixed boundary conditions:

$$\begin{aligned}&-\Delta u=g \ \ \text{ in } \ \ \Omega , \ \ \quad u\big |_{\Gamma _{1}}=0, \ \ \quad -\frac{\partial u}{\partial n}\big |_{\Gamma _{2}}=q, \ \ \quad u\big |_{\Gamma _{3}}=b, \end{aligned}$$
(1)
$$\begin{aligned}&-\Delta u=g \ \ \text{ in } \ \ \Omega , \ \ \quad u\big |_{\Gamma _{1}}=0, \ \ \quad -\frac{\partial u}{\partial n}\big |_{\Gamma _{2}}=q, \ \ \quad -\frac{\partial u}{\partial n}\big |_{\Gamma _{3}} =\alpha (u-b), \end{aligned}$$
(2)

where u is the temperature in \(\Omega \), g is the internal energy in \(\Omega \), b is the temperature on \(\Gamma _{3}\) in (1) and the temperature of the external neighborhood of \(\Gamma _{3}\) in (2), q is the heat flux on \(\Gamma _{2}\), and \(\alpha >0\) is the heat transfer coefficient on \(\Gamma _{3}\). The data are assumed to satisfy \(g\in L^2(\Omega )\), \(q\in L^2(\Gamma _2)\) and \(b\in H^{\frac{1}{2}}(\Gamma _3)\).

Throughout the paper we use the following notation

$$\begin{aligned}&V=H^{1}(\Omega ), \quad V_{0}=\{v\in V \mid v = 0 \ \ \text{ on } \ \ \Gamma _{1} \}, \\&\quad K=\{v\in V \mid v = 0 \ \ \text{ on } \ \ \Gamma _{1},\ v = b \ \ \text{ on } \ \ \Gamma _{3} \}, \quad \\&\quad K_{0}=\{v\in V \mid v = 0 \ \ \text{ on } \ \ \Gamma _{1}\cup \Gamma _3 \}, \\&\quad a(u,v)=\int _{\Omega }\nabla u \, \nabla v \, dx, \quad a_{\alpha }(u,v)=a(u,v)+\alpha \int \limits _{\Gamma _3}\gamma u \, \gamma v \, d\Gamma , \\&\quad L(v)= \int _{\Omega }g v \,dx - \int _{\Gamma _{2}}q \, \gamma v \,d\Gamma , \quad L_{{\alpha }}(v)=L(v) +\alpha \int \limits _{\Gamma _3}b \, \gamma v \, d\Gamma , \end{aligned}$$

where \(\gamma :V \rightarrow L^2(\Gamma )\) denotes the trace operator on \(\Gamma \). In what follows, we write u for the trace of a function \(u \in V\) on the boundary. In a standard way, we obtain the following variational formulations of (1) and (2), respectively:

$$\begin{aligned}& \text{ find } \ \ u_{\infty }\in K \ \ \text{ such } \text{ that }\ \ a(u_{\infty },v)=L(v) \ \ \text{ for } \text{ all } \ \ v\in K_{0}, \end{aligned}$$
(3)
$$\begin{aligned}& \text{ find } \ \ u_{\alpha }\in V_0 \ \ \text{ such } \text{ that }\ \ a_{\alpha }(u_{\alpha },v)=L_{\alpha }(v) \ \ \text{ for } \text{ all } \ \ v\in V_{0}. \end{aligned}$$
(4)
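
For the reader's convenience, we sketch the standard computation behind (4); the derivation of (3) is analogous. Multiplying the equation in (2) by a test function \(v \in V_0\) and applying Green's formula (formally, for smooth enough u), we obtain

$$\begin{aligned} \int _{\Omega }\nabla u \, \nabla v \, dx - \int _{\Gamma } \frac{\partial u}{\partial n} \, \gamma v \, d\Gamma = \int _{\Omega } g v \, dx \ \ \text{ for } \text{ all } \ \ v\in V_{0}. \end{aligned}$$

Since \(\gamma v = 0\) on \(\Gamma _1\), \(-\frac{\partial u}{\partial n} = q\) on \(\Gamma _2\) and \(-\frac{\partial u}{\partial n} = \alpha (u - b)\) on \(\Gamma _3\), this identity becomes

$$\begin{aligned} a(u,v) + \alpha \int _{\Gamma _3} (\gamma u - b)\, \gamma v \, d\Gamma = \int _{\Omega }g v \,dx - \int _{\Gamma _{2}}q \, \gamma v \,d\Gamma , \end{aligned}$$

that is, \(a_{\alpha }(u,v) = L_{\alpha }(v)\), which is exactly problem (4).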

The standard norms on V and \(V_0\) are denoted by

$$\begin{aligned}&\Vert v \Vert _V = \Big ( \Vert v \Vert ^2_{L^2(\Omega )} + \Vert \nabla v \Vert ^2_{L^2(\Omega ;{\mathbb {R}}^d)} \Big )^{1/2} \ \ \text{ for } \ \ v \in V, \\&\quad \Vert v \Vert _{V_0} = \Vert \nabla v \Vert _{L^2(\Omega ;{\mathbb {R}}^d)} \ \ \text{ for } \ \ v \in V_0. \end{aligned}$$

It is well known by the Poincaré inequality, see [5, Proposition 2.94], that on \(V_0\) the above two norms are equivalent. Note that the form a is bilinear, symmetric, continuous and coercive with constant \(m_a > 0\), i.e.

$$\begin{aligned} a(v, v) = \Vert v\Vert ^{2}_{V_0} \ge m_a \Vert v\Vert ^{2}_{V} \ \ \text{ for } \text{ all } \ \ v\in V_{0}. \end{aligned}$$
(5)

It is well known that the regularity of solutions to the mixed elliptic problems (1) and (2) is problematic in the neighborhood of a part of the boundary, see for example the monograph [12]. Regularity results for elliptic problems with mixed boundary conditions can be found in [1, 2, 14]. Moreover, sufficient hypotheses on the data in order to have \(H^{2}\) regularity for elliptic variational inequalities are given in [23]. We remark that, under additional hypotheses on the data g, q and b, problems (1) and (2) can be considered as steady-state two-phase Stefan problems, see, for example, [10, 26, 28, 30].

Problems (3) and (4) have been extensively studied in several papers such as [10, 26, 27, 28, 29]. Some monotonicity and convergence properties obtained in the aforementioned works, as the parameter \(\alpha \) goes to infinity, are recalled in the following result.

Theorem 1

If the data satisfy \(b=const.> 0\), \(g\in L^2(\Omega )\) and \(q\in L^2(\Gamma _2)\) with the properties \(q\ge 0\) on \(\Gamma _{2}\) and \(g\le 0\) in \(\Omega \), then

  1. (i)

    \(u_{\infty }\le b\) in \(\Omega \),

  2. (ii)

    \(u_{\alpha }\le b\) in \(\Omega \),

  3. (iii)

    \(u_{\alpha }\le u_{\infty }\) in \(\Omega \),

  4. (iv)

    if \(\alpha _{1} \le \alpha _{2}\), then \(u_{\alpha _{1}}\le u_{\alpha _{2}}\) in \(\Omega \),

  5. (v)

    \(u_{\alpha } \rightarrow u_{\infty }\) in V, as \(\alpha \rightarrow \infty \).
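
These convergence and comparison properties can also be observed numerically. The following minimal finite-difference sketch is purely illustrative and is not taken from the cited works; it assumes the hypothetical one-dimensional setting \(\Omega =(0,1)\), \(\Gamma _1=\{0\}\), \(\Gamma _3=\{1\}\), with \(\Gamma _2\) taken empty for simplicity, and the data \(g=-1\le 0\), \(b=1>0\), so that Theorem 1 applies.

```python
# Illustrative sketch only: compare the Robin solutions u_alpha of problem (2)
# with the Dirichlet solution u_infty of problem (1) in 1-D.
# Assumed setting: Omega = (0,1), Gamma_1 = {0}, Gamma_2 empty, Gamma_3 = {1},
# g = -1 and b = 1 (hypothetical data chosen so that Theorem 1 applies).
import numpy as np

def solve_robin(alpha, g=-1.0, b=1.0, n=200):
    """Finite differences for -u'' = g, u(0) = 0, u'(1) = alpha*(b - u(1))."""
    h = 1.0 / n
    A = np.zeros((n, n))                  # unknowns u_1, ..., u_n (u_0 = 0)
    rhs = np.full(n, g * h**2)
    for i in range(n - 1):                # 3-point stencil at interior nodes
        A[i, i], A[i, i + 1] = 2.0, -1.0
        if i > 0:
            A[i, i - 1] = -1.0
    # first-order discretization of the Robin condition at x = 1
    A[n - 1, n - 1], A[n - 1, n - 2] = 1.0 + alpha * h, -1.0
    rhs[n - 1] = alpha * h * b
    return np.linalg.solve(A, rhs)

def u_dirichlet(g=-1.0, b=1.0, n=200):
    """Exact solution of -u'' = g, u(0) = 0, u(1) = b."""
    x = np.linspace(0.0, 1.0, n + 1)[1:]
    return b * x + 0.5 * g * x * (1.0 - x)

u_inf = u_dirichlet()
for alpha in [1.0, 10.0, 100.0, 1000.0]:
    u_a = solve_robin(alpha)
    # max(u_a - u_inf) <= 0 illustrates (iii); the shrinking gap illustrates (v)
    print(alpha, np.max(u_a - u_inf), np.max(np.abs(u_a - u_inf)))
```

Running the sketch with two coefficients \(\alpha _1 \le \alpha _2\) and comparing the corresponding outputs likewise illustrates the monotonicity property (iv).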

The main goal of this paper is to study a generalization of problem (2) for which we provide sufficient conditions that guarantee the comparison properties and the asymptotic behavior, as \(\alpha \rightarrow \infty \), stated in Theorem 1. Moreover, for a more general problem, we also show a result on the continuous dependence of the solution on the data g and q.

The mixed nonlinear boundary value problem for the elliptic equation under consideration reads as follows.

$$\begin{aligned} -\Delta u=g \ \ \text{ in } \ \ \Omega , \ \quad u\big |_{\Gamma _{1}}=0, \ \quad -\frac{\partial u}{\partial n}\big |_{\Gamma _{2}}=q, \ \quad -\frac{\partial u}{\partial n}\big |_{\Gamma _{3}} \in \alpha \, \partial j(u). \end{aligned}$$
(6)

Here \(\alpha \) is a positive constant, while the function \(j :\Gamma _{3} \times {\mathbb {R}}\rightarrow {\mathbb {R}}\), called a superpotential (nonconvex potential), is such that \(j(x, \cdot )\) is locally Lipschitz for a.e. \(x \in \Gamma _3\) and not necessarily differentiable. Since in general \(j(x, \cdot )\) is nonconvex, the multivalued condition on \(\Gamma _3\) in problem (6) describes a nonmonotone relation expressed by the generalized gradient of Clarke. Such a multivalued relation in problem (6) arises in certain types of steady-state heat conduction problems (the behavior of a semipermeable membrane of finite thickness, temperature control problems, etc.). Further, problem (6) can be considered as a prototype of several boundary semipermeability models, see [15, 19, 20, 32], which are motivated by problems arising in hydraulics, fluid flow through porous media, and electrostatics, where the solution represents the pressure or the electric potential. Note that analogous problems with maximal monotone multivalued boundary relations (that is, the case when \(j(x, \cdot )\) is a convex function) were considered in [3, 9], see also the references therein.

Under the above notation, the weak formulation of the elliptic problem (6) becomes the following boundary hemivariational inequality:

$$\begin{aligned} \text{ find } \ \ u \in V_0 \ \ \text{ such } \text{ that } \ \ a(u,v) + \alpha \int _{\Gamma _{3}}j^{0}(u;v)\, d\Gamma \ge L(v) \ \ \text{ for } \text{ all } \ \ v\in V_{0}. \end{aligned}$$
(7)

Here and in what follows we often omit the variable x and simply write j(r) instead of j(x, r). Observe that if \(j(x, \cdot )\) is a convex function for a.e. \(x \in \Gamma _3\), then problem (7) reduces to the variational inequality of the second kind:

$$\begin{aligned}&\text{ find } \ u \in V_0 \ \text{ such } \text{ that } \ a(u,v-u) +\alpha \int _{\Gamma _{3}}(j(v) - j(u)) \, d\Gamma \nonumber \\&\quad \ge L(v-u) \ \ \text{ for } \text{ all } \ \ v\in V_{0}. \end{aligned}$$
(8)

Note that when \(j(r) = \frac{1}{2} (r-b)^2\), problem (8) reduces to a variational inequality corresponding to problem (2). Several other examples of convex potentials can be found in various diffusion problems. For instance, the following convex functions:

$$\begin{aligned} j(r) = |r|,\ \ \ j(r) = {\left\{ \begin{array}{ll} \beta (r-c)^5 &{}\text { if} \ r \ge c, \\ 0 &{}\text {if } \ r< c, \end{array}\right. } \ \ \text{ and } \ \ j(r) = {\left\{ \begin{array}{ll} \beta r^{9/4} &{}\text {if} \ r \ge 0, \\ 0 &{}\text {if } \ r < 0, \end{array}\right. } \end{aligned}$$

with suitable constants \(\beta > 0\) and \(c \in {\mathbb {R}},\) appear in models which describe a free boundary problem with the Tresca condition, see [4], the Stefan–Boltzmann heat radiation law, and natural convection, respectively, see [3, 13], and the references therein for further applications and extensions. On the other hand, stationary heat conduction models with nonmonotone multivalued subdifferential interior and boundary semipermeability relations cannot be described by convex potentials. They use locally Lipschitz potentials, and their weak formulations lead to hemivariational inequalities, see [19, Chapter 5.5.3] and [20].

We mention that the theory of hemivariational inequalities was proposed in the 1980s by Panagiotopoulos, see [19, 21, 22], as a variational formulation of important classes of inequality problems in mechanics. In the last few years, new kinds of variational, hemivariational, and variational-hemivariational inequalities have been investigated, see the recent monographs [5, 17, 25], and the theory has emerged today as a new and interesting branch of applied mathematics.

The rest of the paper is structured as follows. In Sect. 2 we provide a new existence result for problem (7). In Sect. 3 we establish two comparison properties for solutions to problem (7). The result on convergence of solutions of problem (7) to the solution of problem (3), when the parameter \(\alpha \) goes to infinity, is provided in Sect. 4. In Sect. 5 we study the continuous dependence of a solution to problem (7) on the internal energy g and the heat flux q. The proofs are based on arguments of compactness, lower semicontinuity, monotonicity, various estimates, the theory of elliptic hemivariational inequalities and nonsmooth analysis [6, 7, 8, 9, 11, 17, 22, 24, 25, 31]. Finally, in Sect. 6 we deliver several examples of convex and nonconvex potentials which satisfy the hypotheses on the function j required in this paper.

2 Preliminaries

In this section we first recall standard notation and preliminary concepts, and then provide a new result on the existence of a solution to the elliptic hemivariational inequality (7).

Let \((X, \Vert \cdot \Vert _{X})\) be a Banach space, \(X^{*}\) be its dual, and \(\langle \cdot , \cdot \rangle \) denote the duality between \(X^*\) and X. For a real valued function defined on X, we have the following definitions, see [6, Sect. 2.1] and [7, 17].

Definition 2

A function \(\varphi :X\rightarrow {\mathbb {R}}\) is said to be locally Lipschitz if for every \(x\in X\) there exist a neighborhood \(U_{x}\) of x and a constant \(L_{x}>0\) such that

$$\begin{aligned} |\varphi (y)-\varphi (z)|\le L_{x}\Vert y-z\Vert _{X} \ \ \text{ for } \text{ all } \ \ y, z\in U_{x}. \end{aligned}$$

For such a function the generalized (Clarke) directional derivative of \(\varphi \) at the point \(x\in X\) in the direction \(v\in X\) is defined by

$$\begin{aligned} \varphi ^{0}(x;v)=\limsup \limits _{y \rightarrow x, \, \lambda \rightarrow 0^{+}} \frac{\varphi (y +\lambda v)-\varphi (y)}{\lambda } \, . \end{aligned}$$

The generalized gradient (subdifferential) of \(\varphi \) at x is a subset of the dual space \(X^{*}\) given by

$$\begin{aligned} \partial \varphi (x)=\{\zeta \in X^{*} \mid \varphi ^{0}(x;v)\ge \langle \zeta ,v\rangle \ \ \text{ for } \text{ all } \ \ v \in X\}. \end{aligned}$$
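
As a small numerical illustration of this definition (ours, not part of the original exposition), the sketch below approximates the generalized directional derivative of the locally Lipschitz, nonconvex function \(\varphi (r) = -|r|\) at \(x=0\) in the direction \(v=1\); the value is \(\varphi ^0(0;1)=1\), whereas the classical one-sided directional derivative equals \(-1\). The helper clarke_dd is a hypothetical, crude approximation of the \(\limsup \) obtained by sampling.

```python
# Illustrative approximation of the Clarke directional derivative of
# phi(r) = -|r| at x = 0 in the direction v = 1 (expected value: 1).
import numpy as np

phi = lambda r: -abs(r)

def clarke_dd(phi, x, v, eps=1e-3, n=400):
    """Crudely approximate phi^0(x; v) by maximizing the difference quotient
    (phi(y + lam*v) - phi(y)) / lam over y near x and small lam > 0."""
    ys = x + eps * np.linspace(-1.0, 1.0, n)
    lams = eps * np.linspace(1e-3, 1.0, n)
    Y, LAM = np.meshgrid(ys, lams)
    return np.max((phi(Y + LAM * v) - phi(Y)) / LAM)

print(clarke_dd(phi, 0.0, 1.0))          # approx  1.0 (Clarke derivative)
print((phi(1e-8) - phi(0.0)) / 1e-8)     # approx -1.0 (one-sided derivative)
```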

We shall use the following properties of the generalized directional derivative and the generalized gradient, see [17, Proposition 3.23].

Proposition 3

Assume that \(\varphi :X\rightarrow {\mathbb {R}}\) is a locally Lipschitz function. Then the following hold:

  1. (i)

    for every \(x\in X\), the function \(X\ni v\mapsto \varphi ^{0}(x;v)\in {\mathbb {R}}\) is positively homogeneous, and subadditive, i.e.,

    $$\begin{aligned}&\varphi ^{0}(x;\lambda v)=\lambda \varphi ^{0}(x;v) \ \ \text{ for } \text{ all } \ \ \lambda \ge 0, \ v\in X, \\&\quad \varphi ^0(x; v_1+v_2) \le \varphi ^0(x; v_1) + \varphi ^0(x; v_2) \ \ \text{ for } \text{ all } \ \ v_1, v_2 \in X, \end{aligned}$$

    respectively.

  2. (ii)

    for every \(x\in X\), we have \(\varphi ^{0}(x;v) =\max \{\langle \zeta , v\rangle \mid \zeta \in \partial \varphi (x)\}\).

  3. (iii)

    the function \(X \times X \ni (x, v) \mapsto \varphi ^0(x; v) \in {\mathbb {R}}\) is upper semicontinuous.

  4. (iv)

    for every \(x\in X\), the gradient \(\partial \varphi (x)\) is a nonempty, convex, and weakly\(\,^*\) compact subset of \(X^*\).

  5. (v)

the graph of the generalized gradient \(\partial \varphi \) is closed in \(X \times X^*\), where X is endowed with the strong topology and \(X^*\) with the weak\(^*\) topology.

Now, we pass to a result on the existence of a solution to the elliptic hemivariational inequality:

$$\begin{aligned} \text{ find } \ \ u \in V_0 \ \ \text{ such } \text{ that } \ \ a(u, v) + \alpha \int _{\Gamma _{3}}j^{0}(u; v)\, d\Gamma \ge f(v) \ \ \text{ for } \text{ all } \ \ v\in V_{0}. \end{aligned}$$
(9)

We adopt the following standing hypothesis.

H(j): \(j:\Gamma _3 \times {\mathbb {R}}\rightarrow {\mathbb {R}}\) is such that

  1. (a)

    \(j(\cdot , r)\) is measurable for all \(r \in {\mathbb {R}}\),

  2. (b)

    \(j(x, \cdot )\) is locally Lipschitz for a.e. \(x \in \Gamma _3\),

  3. (c)

    there exist \(c_0\), \(c_1 \ge 0\) such that \(| \partial j(x, r)| \le c_0 + c_1 |r|\) for all \(r \in {\mathbb {R}}\), a.e. \(x\in \Gamma _3\),

  4. (d)

    \(j^0(x, r; b-r) \le 0\) for all \(r \in {\mathbb {R}}\), a.e. \(x \in \Gamma _{3}\) with a constant \(b \in {\mathbb {R}}\).

Here the constant b in H(j)(d) is the same as in the boundary condition on \(\Gamma _3\) (see (1) or (2)).

Note that existence results for elliptic hemivariational inequalities can be found in several contributions, see [5, 16, 17, 18, 19]. In comparison to other works, the new hypothesis is H(j)(d). Under this condition we will show both the existence of a solution to problem (9) and a convergence result when \(\alpha \rightarrow \infty \). We underline that, if the hypothesis H(j)(d) is replaced by the relaxed monotonicity condition (see Remark 10 for details)

$$\begin{aligned} j^0(x, r; s-r) + j^0(x,s; r-s) \le m_j \, |r-s|^2 \end{aligned}$$

for all r, \(s \in {\mathbb {R}}\), a.e. \(x\in \Gamma _3\) with \(m_j \ge 0\), and the following smallness condition

$$\begin{aligned} m_a > \alpha \, m_j \Vert \gamma \Vert ^2 \end{aligned}$$

is assumed, then problem (9) is uniquely solvable, see [18, Lemma 20] for the proof. However, this smallness condition is not suitable in the study of problem (9), since it fails for sufficiently large values of \(\alpha \).

In the following result we apply a surjectivity result in [17, Proposition 3.61] and partially follow arguments of [18, Lemma 20]. For completeness we provide the proof.

Theorem 4

If H(j) holds, \(f \in V_0^*\) and \(\alpha > 0\), then the hemivariational inequality (9) has a solution.

Proof

Let \(\langle \cdot , \cdot \rangle \) stand for the duality pairing between \(V^*_0\) and \(V_0\). Let \(A :V_0 \rightarrow V_0^*\) be defined by

$$\begin{aligned} \langle Au, v \rangle = a(u, v) \ \ \text{ for } \ \ u, v \in V_0. \end{aligned}$$

It is obvious that the operator A is linear, bounded and coercive, i.e., \(\langle Av, v \rangle \ge \Vert v \Vert ^2_{V_0}\) for all \(v \in V_0\). Moreover, let \(J :L^2(\Gamma _3) \rightarrow {\mathbb {R}}\) be given by

$$\begin{aligned} J(w) = \int _{\Gamma _3} j(x, w(x)) \, d\Gamma \ \ \text{ for } \text{ all } \ \ w \in L^2(\Gamma _3). \end{aligned}$$

From H(j)(a)–(c), by [17, Corollary 4.15], we infer that the functional J enjoys the following properties:

  1. (p1)

    J is well defined and Lipschitz continuous on bounded subsets of \(L^2(\Gamma _3)\), hence also locally Lipschitz,

  2. (p2)

    \(\displaystyle J^0(w; z) \le \int _{\Gamma _3} j^0(x, w(x); z(x))\, d\Gamma \) for all w, \(z \in L^2(\Gamma _3)\),

  3. (p3)

    \(\Vert \partial J(w) \Vert _{L^2(\Gamma _3)} \le {{\overline{c}}}_0 + {{\overline{c}}}_1 \, \Vert w \Vert _{L^2(\Gamma _3)}\) for all \(w \in L^2(\Gamma _3)\) with \({{\overline{c}}}_0\), \({{\overline{c}}}_1 \ge 0\).

We introduce the operator \(B :V_0 \rightarrow 2^{V_0^*}\) defined by

$$\begin{aligned} Bv = \alpha \, \gamma ^* \partial J (\gamma v) \ \ \text{ for } \text{ all } \ \ v \in V_0, \end{aligned}$$

where \(\gamma ^* :L^2(\Gamma ) \rightarrow V_0^*\) denotes the adjoint to the trace \(\gamma \).

We show that B is pseudomonotone and bounded from \(V_0\) to \(2^{V_0^*}\), see [17, Definition 3.57]. By Proposition 3 (iv), it follows that the values of \(\partial J\) are nonempty, convex and weakly compact subsets of \(L^2(\Gamma _3)\). Hence, the set Bv is nonempty, closed and convex in \(V_0^*\) for all \(v \in V_0\). The operator B is bounded, which is a consequence of the following estimate

$$\begin{aligned} \Vert Bv\Vert _{V_0^*} \le \alpha \, \Vert \gamma ^*\Vert \, \Vert \partial J (\gamma v) \Vert _{L^2(\Gamma _3)} \le \alpha \, \Vert \gamma ^*\Vert \, ({{\overline{c}}}_0 + {{\overline{c}}}_1 \Vert \gamma \Vert \Vert v \Vert _{V_0} ) \ \ \text{ for } \text{ all } \ \ v \in V_0, \end{aligned}$$

where \(\Vert \gamma \Vert \) denotes the norm of the trace operator. In order to establish pseudomonotonicity of the operator B, we take into account [17, Proposition 3.58(ii)], and prove that B is generalized pseudomonotone.

Let \(v_n\), \(v \in V_0\), \(v_n \rightarrow v\) weakly in \(V_0\), \(v^*_n\), \(v^* \in V_0^*\), \(v^*_n \rightarrow v^*\) weakly in \(V_0^*\), \(v_n^* \in Bv_n\) and \(\limsup \, \langle v_n^*, v_n - v \rangle \le 0\). We show that

$$\begin{aligned} v^* \in Bv \ \ \ \text{ and } \ \ \ \langle v_n^*, v_n \rangle \rightarrow \langle v^*, v \rangle . \end{aligned}$$

From the condition \(v_n^* \in Bv_n\), it follows that \(v_n^* = \alpha \, \gamma ^* \eta _n\) with \(\eta _n \in \partial J(\gamma v_n)\). By the estimate (p3), it is clear that \(\{ \eta _n \}\) remains in a bounded subset of \(L^2(\Gamma _3)\). Thus, at least for a subsequence, denoted in the same way, we may suppose that \(\eta _n \rightarrow \eta \) weakly in \(L^2(\Gamma _3)\) with \(\eta \in L^2(\Gamma _3)\). Using the compactness of the trace operator, we have \(\gamma v_n \rightarrow \gamma v\) in \(L^2(\Gamma _3)\). Now, we employ the strong-weak closedness of the graph of \(\partial J\), see Proposition 3 (v), to obtain \(\eta \in \partial J(\gamma v)\). On the other hand, from \(v_n^* = \alpha \, \gamma ^* \eta _n\), it follows that \(v^* = \alpha \, \gamma ^* \eta \). Hence, we get \(v^* \in \alpha \, \gamma ^* \partial J(\gamma v) = Bv\). Now, it is obvious that

$$\begin{aligned} \langle v_n^*, v_n \rangle = \alpha \langle \eta _n, \gamma v_n \rangle _{L^2(\Gamma _3)} \longrightarrow \alpha \langle \eta , \gamma v \rangle _{L^2(\Gamma _3)} = \langle \alpha \gamma ^* \eta , v \rangle = \langle v^*, v \rangle . \end{aligned}$$

This completes the proof that B is generalized pseudomonotone. Hence, the operator B is also pseudomonotone.

Subsequently, we note that \(A :V_0 \rightarrow V_0^*\) is pseudomonotone, see [17, Theorem 3.69], since it is linear, bounded and nonnegative. Therefore, A is pseudomonotone and bounded as a multivalued operator from \(V_0\) to \(2^{V_0^*}\), see [17, Sect. 3.4]. Since the sum of multivalued pseudomonotone operators remains pseudomonotone, see [17, Proposition 3.59 (ii)], we infer that \(A+B\) is bounded and pseudomonotone.

Next, we prove that the operator \(A+B\) is coercive. In view of the coercivity of A, it is enough to show that

$$\begin{aligned} \langle Bv, v \rangle \ge -d_0 - d_1 \Vert v\Vert _{V_0} \ \ \text{ for } \text{ all } \ \ v \in V_0 \end{aligned}$$
(10)

with \(d_0\), \(d_1 \ge 0\). First, from hypothesis H(j)(d), by Proposition 3 (i)–(ii), we have

$$\begin{aligned} j^0(x, r; -r)= & {} j^0(x, r; b-r-b) \le j^0(x, r;b-r) + j^0(x, r; -b) \\\le & {} j^0(x, r; -b) \le |\partial j(x, r)| \, |-b| \le |b| (c_0 + c_1 |r|) \end{aligned}$$

for all \(r \in {\mathbb {R}}\), a.e. \(x \in \Gamma _3\). Next, let \(v \in V_0\), \(v^* \in Bv\). Thus, \(v^* = \alpha \, \gamma ^* \eta \) with \(\eta \in \partial J(\gamma v)\). Hence, by the definition of the generalized gradient and the property (p2), we obtain

$$\begin{aligned} \alpha \, \langle \eta , -\gamma v \rangle _{L^2(\Gamma _3)}\le & {} \alpha \, J^0(\gamma v; - \gamma v) \le \alpha \int _{\Gamma _{3}} j^0(\gamma v; -\gamma v) \, d\Gamma \\\le & {} \alpha \, |b| \int _{\Gamma _{3}} (c_0 + c_1 |\gamma v (x)|)\, d\Gamma \le d_0 + d_1 \Vert v \Vert _{V_0} \end{aligned}$$

with \(d_0\), \(d_1 \ge 0\). Using the latter and the equality

$$\begin{aligned} \alpha \, \langle \eta , \gamma v \rangle _{L^2(\Gamma _3)} = \langle \alpha \gamma ^* \eta , v \rangle = \langle v^*, v \rangle , \end{aligned}$$

we deduce

$$\begin{aligned} \langle v^*, v \rangle \ge -d_0 - d_1 \Vert v\Vert _{V_0} \ \ \text{ for } \text{ all } \ \ v \in V_0 \end{aligned}$$

which proves (10). In consequence, we have

$$\begin{aligned} \langle (A+B)v, v \rangle \ge \Vert v \Vert _{V_0}^2 - d_1 \Vert v\Vert _{V_0} - d_0. \end{aligned}$$

We conclude that the multivalued operator \(A+B\) is bounded, pseudomonotone, and coercive, hence surjective, see [17, Proposition 3.61]. We infer that there exists \(u \in V_0\) such that \((A+B) u \ni f\).

In the final step of the proof, we observe that any solution \(u\in V_0\) to the inclusion \((A+B) u \ni f\) is a solution to problem (9). Indeed, we have

$$\begin{aligned} A u + \alpha \, \gamma ^* \eta = f \ \ \text{ with } \ \ \eta \in \partial J(\gamma u) \end{aligned}$$

and hence

$$\begin{aligned} \langle Au, v \rangle + \alpha \langle \eta , \gamma v \rangle _{L^2(\Gamma _{3})} = \langle f, v \rangle \end{aligned}$$

for all \(v \in V_0\). Combining the latter with the definition of the generalized gradient and the property (p2), we obtain

$$\begin{aligned} \langle f, v \rangle= & {} \langle Au, v \rangle + \alpha \, \langle \eta , \gamma v \rangle _{L^2(\Gamma _{3})} \le \langle Au, v \rangle + \alpha \, J^0(\gamma u; \gamma v) \\\le & {} a(u, v) + \alpha \int _{\Gamma _{3}} j^0(\gamma u; \gamma v) \, d\Gamma \end{aligned}$$

for all \(v \in V_0\). This means that \(u\in V_0\) solves problem (9). This completes the proof. \(\square \)

3 Comparison Results

In this section we study the following two problems under the standing hypothesis H(j) on the superpotential.

For every \(\alpha > 0\), we consider the hemivariational inequality of the form

$$\begin{aligned} \text{ find } \ \ u \in V_0 \ \ \text{ such } \text{ that } \ \ a(u, v) + \alpha \int _{\Gamma _{3}}j^{0}(u; v)\, d\Gamma \ge L(v) \ \ \text{ for } \text{ all } \ \ v\in V_{0} \end{aligned}$$
(11)

and the weak form of the elliptic equation

$$\begin{aligned} \text{ find } \ \ u_{\infty }\in K \ \ \text{ such } \text{ that }\ \ a(u_{\infty },v)=L(v) \ \ \text{ for } \text{ all } \ \ v\in K_{0}. \end{aligned}$$
(12)

Recall that

$$\begin{aligned} K=\{v\in V \mid v = 0 \ \ \text{ on } \ \ \Gamma _{1},\ v = b \ \ \text{ on } \ \ \Gamma _{3} \}, \quad K_{0}=\{v\in V \mid v = 0 \ \ \text{ on } \ \ \Gamma _{1}\cup \Gamma _3 \}. \end{aligned}$$

It follows from Theorem 4 that for each \(\alpha > 0\), problem (11) has a solution \(u_\alpha \in V_0\) while [5, Corollary 2.102] entails that problem (12) has a unique solution \(u_\infty \in K\). Moreover, it is easy to observe that problem (12) can be equivalently formulated as follows

$$\begin{aligned} \text{ find } \ \ u_{\infty }\in K \ \ \text{ such } \text{ that }\ \ a(u_{\infty },v-u_\infty ) = L(v-u_\infty ) \ \ \text{ for } \text{ all } \ \ v\in K. \end{aligned}$$
(13)

In what follows we need the following hypothesis on the data.

\({{(H_0)}}\):    \(g \in L^2(\Omega )\), \(g \le 0\) in \(\Omega \), \(q \in L^2(\Gamma _2)\), \(q \ge 0\) on \(\Gamma _2\).

Theorem 5

If H(j), \((H_0)\) hold and \(b\ge 0\), then

  1. (a)

    \(u_{\alpha }\le b\) in \(\Omega \),

  2. (b)

    \(u_{\alpha }\le u_{\infty }\) in \(\Omega \),

where \(u_\alpha \in V_0\) is a solution to problem (11) and \(u_\infty \in K\) is the unique solution to problem (12).

Proof

  1. (a)

    Let \(w = u_\alpha -b\). We shall prove that \(w^+ = 0\), where \(r^+ = \max \{ 0, r\}\) for \(r \in {\mathbb {R}}\). Since \(w\big |_{\Gamma _{1}}=-b \le 0\), we have \(w^+\big |_{\Gamma _{1}}=0\). We choose \(v = -w^+ \in V_0\) in problem (11) to get

    $$\begin{aligned} a(u_\alpha , -w^+) + \alpha \int _{\Gamma _{3}} j^0(u_\alpha ; -w^+) \, d\Gamma \ge L(-w^+). \end{aligned}$$

    By the linearity of the form a, we easily obtain

    $$\begin{aligned} a(u_\alpha , -w^+) = - a(w^+, w^+), \end{aligned}$$

    while \((H_0)\) implies \(L(w^+) \le 0\). Hence

    $$\begin{aligned} -a(w^+, w^+) + \alpha \int _{\Gamma _{3}} j^0(u_\alpha ; -w^+) \, d\Gamma \ge L(-w^+) \ge 0, \end{aligned}$$

    and

    $$\begin{aligned} a(w^+, w^+) \le \alpha \int _{\Gamma _{3}} j^0(u_\alpha ; -(u_\alpha -b)^+) \, d\Gamma . \end{aligned}$$

    Subsequently, H(j)(d) entails

    $$\begin{aligned} j^0(x, r; -(r-b)^+) \le 0 \ \ \text{ for } \text{ all } \ \ r \in {\mathbb {R}}, \ \text{ a.e. } \ x\in \Gamma _{3}. \end{aligned}$$
    (14)

    Indeed, if \(r \le b\), then \((r-b)^+ =0\) and \(j^0(x, r; -(r-b)^+) = j^0(x, r; 0) = 0 \le 0\). If \(r > b\), then \((r-b)^+ = r-b\) and \(j^0(x, r; -(r-b)^+) = j^0(x, r; b-r) \le 0\). Using the coercivity condition (5) of the form a and (14), we deduce \(m_a \Vert w^+ \Vert _V^2 \le 0\). Hence \(w^+ = 0\) in \(\Omega \), and finally \(u_\alpha \le b\) in \(\Omega \).

  2. (b)

    We denote \(w = u_\alpha -u_\infty \). It is enough to show that \(w^+ = 0\) in \(\Omega \). We observe that \(w \big |_{\Gamma _{1}}=0\). This allows us to choose \(v = -w^+ \in V_0\) in problem (11) to obtain

    $$\begin{aligned} a(u_\alpha -u_\infty , -w^+) + a(u_\infty , -w^+) + \alpha \int _{\Gamma _{3}} j^0(u_\alpha ; -w^+) \, d\Gamma \ge L(-w^+). \end{aligned}$$

    Exploiting the relation \(a(u_\alpha - u_\infty , -w^+) = -a(w^+, w^+)\), we have

    $$\begin{aligned} -a(w^+, w^+) + a(u_\infty , -w^+) + \alpha \int _{\Gamma _{3}} j^0(u_\alpha ; -w^+) \, d\Gamma \ge L(-w^+). \end{aligned}$$
    (15)

    Next, part (a) of the proof shows that

    $$\begin{aligned} w \big |_{\Gamma _{3}} = (u_\alpha - b) \big |_{\Gamma _{3}} \le 0 \end{aligned}$$

    and \(w^+ \big |_{\Gamma _{3}}=0\), and consequently \(w^+ \in K_0\). Since \(u_\infty \in K\) solves (12), taking \(v = w^+ \in K_0\) in (12) gives \(a(u_\infty , w^+) = L(w^+)\); using this equality in (15), it follows that

    $$\begin{aligned} a(w^+, w^+) \le \alpha \int _{\Gamma _{3}} j^0(u_\alpha ; -w^+) \, d\Gamma . \end{aligned}$$

    Since \(u_\infty = b\) on \(\Gamma _{3}\), by (14), we get

    $$\begin{aligned} j^0(x, u_\alpha ; -(u_\alpha -u_\infty )^+) = j^0(x, u_\alpha ; -(u_\alpha -b)^+) \le 0 \ \ \text{ a.e. } \text{ on } \ \Gamma _{3}. \end{aligned}$$

    Again, by the coercivity of the form a, we have \(m_a \Vert w^+ \Vert _V^2 \le 0\). Therefore, \(w^+ = 0\) in \(\Omega \), and finally \(u_\alpha \le u_\infty \) in \(\Omega \). This completes the proof.

\(\square \)

Note that properties (a) and (b) of Theorem 5, obtained here for the hemivariational inequality (11), correspond to properties (ii) and (iii) of Theorem 1 for the linear problems (3) and (4).

In what follows, we comment on the monotonicity property analogous to condition (iv) stated for problem (3) in Theorem 1.

Proposition 6

Assume that H(j) and \((H_0)\) hold, and

$$\begin{aligned} j^0(x, r; -(r-s)^+) + c \, j^0(x,s; (r-s)^+) \le 0 \end{aligned}$$
(16)

for all \(c \ge 1\), all r, \(s \in {\mathbb {R}}\), a.e. \(x\in \Gamma _3\). Let \(u_{\alpha _i} \in V_0\) denote the unique solution to the inequality (11) corresponding to \(\alpha _i > 0\), \(i=1\), 2. Then the following monotonicity property holds:

$$\begin{aligned} \alpha _1 \le \alpha _2 \ \ \Longrightarrow \ \ u_{\alpha _1} \le u_{\alpha _2} \ \ \text{ in } \ \ \Omega . \end{aligned}$$

Proof

Let \(0 < \alpha _1 \le \alpha _2\) and \(w = u_{\alpha _1} - u_{\alpha _2}\) in \(\Omega \). It is sufficient to prove that \(w^+ = 0\) in \(\Omega \). Since \(w \big |_{\Gamma _{1}} = 0\), we have \(w^+ \in V_0\). We choose \(v = -w^+ \in V_0\) in problem (11) for \(\alpha _1\), and \(v = w^+ \in V_0\) in problem (11) for \(\alpha _2\) to get

$$\begin{aligned}&a(u_{\alpha _1}, -w^+) + \alpha _1 \int _{\Gamma _{3}} j^{0}(u_{\alpha _1}; -w^+)\, d\Gamma \ge L(-w^+), \\&\quad a(u_{\alpha _2}, w^+) + \alpha _2 \int _{\Gamma _{3}} j^{0}(u_{\alpha _2}; w^+)\, d\Gamma \ge L(w^+). \end{aligned}$$

By adding the last two inequalities, we have

$$\begin{aligned} -a(w, w^+) + \alpha _1 \int _{\Gamma _{3}} j^{0}(u_{\alpha _1}; -w^+)\, d\Gamma + \alpha _2 \int _{\Gamma _{3}} j^{0}(u_{\alpha _2}; w^+)\, d\Gamma \ge 0 \end{aligned}$$

which implies

$$\begin{aligned} a(w^+, w^+)\le & {} \int _{\Gamma _{3}} \Big ( \alpha _1 \, j^{0}(u_{\alpha _1}; -w^+) + \alpha _2 \, j^{0}(u_{\alpha _2}; w^+) \Big ) \, d\Gamma \\= & {} \alpha _1 \int _{\Gamma _{3}} \Big ( j^{0}(u_{\alpha _1}; -w^+) + \frac{\alpha _2}{\alpha _1} \, j^{0}(u_{\alpha _2}; w^+) \Big ) \, d\Gamma \le 0. \end{aligned}$$

Using the coercivity of the form a, we deduce that \(w^+ = 0\), which completes the proof. \(\square \)

It is easy to check that the following two simple examples satisfy H(j) and the condition (16):

$$\begin{aligned} j(r) = {\left\{ \begin{array}{ll} b-r &{}\text { for} \ r< b, \\ 0 &{}\text {for} \ r \ge b, \end{array}\right. } \ \ \ \ \text{ and } \ \ \ \ j(r) = {\left\{ \begin{array}{ll} (b-r)^2+1 &{}\text { for} \ r < b, \\ 1 &{}\text { for} \ r \ge b. \end{array}\right. } \end{aligned}$$
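
A brief numerical spot-check of the first of these potentials is sketched below; it is purely illustrative and uses the hypothetical value \(b=1\). It relies on the closed-form generalized gradient \(\partial j(r) = \{-1\}\) for \(r < b\), \([-1, 0]\) for \(r = b\), \(\{0\}\) for \(r > b\), together with the max formula of Proposition 3(ii); the check for the second potential is analogous.

```python
# Illustrative spot-check (assumed b = 1) that j(r) = max(b - r, 0) satisfies
# H(j)(d) and condition (16), using j^0(r; d) = max{zeta*d : zeta in dj(r)}.
import itertools
import numpy as np

b = 1.0

def j0(r, d):
    lo, hi = ((-1.0, -1.0) if r < b else ((-1.0, 0.0) if r == b else (0.0, 0.0)))
    return max(lo * d, hi * d)

rs = np.linspace(b - 3.0, b + 3.0, 61)       # sample points r, s
cs = [1.0, 2.0, 10.0, 1e3]                   # sample constants c >= 1
ok_d = all(j0(r, b - r) <= 0.0 for r in rs)                        # H(j)(d)
ok_16 = all(j0(r, -max(r - s, 0.0)) + c * j0(s, max(r - s, 0.0)) <= 1e-12
            for r, s, c in itertools.product(rs, rs, cs))          # condition (16)
print(ok_d, ok_16)   # expected: True True
```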

Note also that hypothesis (16) implies that the function \(j(x, \cdot )\) is convex for a.e. \(x \in \Gamma _3\). In fact, if \(r>s\), then \((r-s)^+=r-s\) and (16) gives

$$\begin{aligned} j^0(x, r; s-r) + c \, j^0(x,s; r-s) \le 0 \ \ \text{ for } \text{ all }\ \ c\ge 1. \end{aligned}$$

In particular, taking \(c=1\) we obtain a condition equivalent to the relaxed monotonicity condition with \(m_j=0\), which means that \(j(x,\cdot )\) is convex (see Remark 10).

We conclude that the monotonicity property of Proposition 6 holds for convex potentials, i.e., for variational inequalities. The proof of the monotonicity property for hemivariational inequalities remains an open problem.

4 Asymptotic Behavior of Solutions

In this section we investigate the asymptotic behavior of solutions to problem (11) when \(\alpha \rightarrow \infty \). To this end, we need the following additional hypothesis on the superpotential j.

\({{(H_1)}}\):    for a.e. \(x \in \Gamma _{3}\) and all \(r \in {\mathbb {R}}\), if \(j^0(x, r; b-r) = 0\), then \(r = b\).

Theorem 7

Assume H(j), \((H_0)\) and \((H_1)\). Let \(\{ u_\alpha \} \subset V_0\) be a sequence of solutions to problem (11) and \(u_\infty \in K\) be the unique solution to problem (12). Then \(u_\alpha \rightarrow u_\infty \) in V, as \(\alpha \rightarrow \infty \).

Proof

First, we prove an estimate for the sequence \(\{ u_\alpha \}\) in V. We choose \(v = u_\infty - u_\alpha \in V_0\) as a test function in problem (11) to obtain

$$\begin{aligned} a(u_\alpha , u_\infty -u_\alpha ) + \alpha \int _{\Gamma _{3}} j^0(u_\alpha ; u_\infty -u_\alpha ) \, d\Gamma \ge L(u_\infty -u_\alpha ). \end{aligned}$$

From the equality \(a(u_\alpha , u_\infty -u_\alpha ) = - a(u_\infty -u_\alpha , u_\infty -u_\alpha ) + a(u_\infty , u_\infty -u_\alpha )\), we get

$$\begin{aligned} a(v,v) - \alpha \int _{\Gamma _{3}} j^0(u_\alpha ; v) \, d\Gamma \le a(u_\infty , v) - L(v). \end{aligned}$$
(17)

We observe that \(j^0(x, u_\alpha ; v) = j^0(x, u_\alpha ; b-u_\alpha )\) on \(\Gamma _{3}\), and by H(j)(d), we have \(j^0(x, u_\alpha ; v) \le 0\) on \(\Gamma _{3}\). Hence

$$\begin{aligned} a(v,v) \le a(u_\infty , v) - L(v). \end{aligned}$$

By the boundedness and coercivity of a, we infer

$$\begin{aligned} m_a \Vert v \Vert _V^2 \le (M \Vert u_\infty \Vert _V + \Vert L \Vert _{V^*}) \, \Vert v \Vert _V \end{aligned}$$

with \(M > 0\), and subsequently

$$\begin{aligned} \Vert u_\alpha \Vert _V \le \Vert v \Vert _V + \Vert u_\infty \Vert _V \le \frac{1}{m_a} (M \Vert u_\infty \Vert _V + \Vert L \Vert _{V^*}) + \Vert u_\infty \Vert _V =: C, \end{aligned}$$
(18)

where \(C > 0\) is independent of \(\alpha \). Hence, since \(a(v,v) \ge 0\), from (17), we have

$$\begin{aligned} - \alpha \int _{\Gamma _{3}} j^0(u_\alpha ; v) \, d\Gamma \le (M \Vert u_\infty \Vert _V + \Vert L \Vert _{V^*}) \, \Vert v \Vert _V \le \frac{1}{m_a} (M \Vert u_\infty \Vert _V + \Vert L \Vert _{V^*})^2 =: C_1, \end{aligned}$$

where \(C_1 > 0\) is independent of \(\alpha \). Thus

$$\begin{aligned} - \int _{\Gamma _{3}} j^0(u_\alpha ; v) \, d\Gamma \le \frac{C_1}{\alpha }. \end{aligned}$$
(19)

It follows from (18) that \(\{ u_\alpha \}\) remains in a bounded subset of V. Thus, there exists \(u^* \in V\) such that, by passing to a subsequence if necessary, we have

$$\begin{aligned} u_\alpha \rightarrow u^* \ \ \text{ weakly } \text{ in } \ \ V, \ \text{ as } \ \alpha \rightarrow \infty . \end{aligned}$$
(20)

Next, we show that \(u^* = u_\infty \). We observe that \(u^* \in V_0\) because \(\{ u_\alpha \} \subset V_0\) and \(V_0\) is sequentially weakly closed in V. Let \(w \in K\) and \(v = w - u_\alpha \in V_0\). From (11), we have

$$\begin{aligned} L(w-u_\alpha ) \le a(u_\alpha , w -u_\alpha ) + \alpha \int _{\Gamma _{3}} j^0(u_\alpha ; w-u_\alpha ) \, d\Gamma . \end{aligned}$$

Since \(w = b\) on \(\Gamma _3\), by H(j)(d), we have

$$\begin{aligned} \alpha \int _{\Gamma _{3}} j^0(u_\alpha ; w-u_\alpha ) \, d\Gamma = \alpha \int _{\Gamma _{3}} j^0(u_\alpha ; b-u_\alpha ) \, d\Gamma \le 0 \end{aligned}$$

which implies

$$\begin{aligned} L(w-u_\alpha ) \le a(u_\alpha , w -u_\alpha ). \end{aligned}$$
(21)

Next, we use the weak lower semicontinuity of the functional \(V \ni v \mapsto a(v,v) \in {\mathbb {R}}\) and from (21), we deduce

$$\begin{aligned} u^* \in V_0 \ \ \text{ satisfies } \ \ L(w-u^*) \le a(u^*, w-u^*) \ \ \text{ for } \text{ all } \ \ w \in K. \end{aligned}$$
(22)

Subsequently, we show that \(u^* \in K\). In fact, from (20), by the compactness of the trace operator, we have \(u_\alpha \big |_{\Gamma _{3}} \rightarrow u^* \big |_{\Gamma _{3}}\) in \(L^2(\Gamma _3)\), as \(\alpha \rightarrow \infty \). Passing to a subsequence if necessary, we may suppose that \(u_\alpha (x) \rightarrow u^*(x)\) for a.e. \(x \in \Gamma _3\) and there exists \(h \in L^2(\Gamma _3)\) such that \(|u_\alpha (x)| \le h(x)\) a.e. \(x \in \Gamma _3\). Using the upper semicontinuity of the function \({\mathbb {R}}\times {\mathbb {R}}\ni (r, s) \mapsto j^0(x, r; s) \in {\mathbb {R}}\) for a.e. \(x \in \Gamma _3\), see Proposition 3(iii), we get

$$\begin{aligned} \limsup j^0(x, u_\alpha (x); u_\infty (x) - u_\alpha (x)) \le j^0(x,u^*(x); u_\infty (x)-u^*(x)) \ \ \text{ a.e. } \ \ x \in \Gamma _3. \end{aligned}$$

Next, taking into account the estimate

$$\begin{aligned} |j^0(x, u_\alpha (x); u_\infty (x) - u_\alpha (x))| \le (c_0 + c_1 |u_\alpha (x)|) \, |b-u_\alpha (x)| \le k(x) \ \ \text{ a.e. } \ \ x \in \Gamma _3 \end{aligned}$$

with \(k \in L^1(\Gamma _3)\) given by \(k(x) = (c_0 + c_1 h(x)) (|b| + h(x))\), by the dominated convergence theorem, see [8, Theorem 2.2.33], we obtain

$$\begin{aligned} \limsup \int _{\Gamma _3} j^0(u_\alpha ; u_\infty - u_\alpha ) \, d\Gamma \le \int _{\Gamma _3} j^0(u^*; u_\infty -u^*)\, d\Gamma . \end{aligned}$$

Consequently, from H(j)(d) and (19), we have

$$\begin{aligned} 0 \le -\int _{\Gamma _{3}} j^0(u^*; b-u^*) \, d\Gamma \le \liminf \left( -\int _{\Gamma _{3}} j^0(u_\alpha ; u_\infty -u_\alpha ) \, d\Gamma \right) \le 0 \end{aligned}$$

which gives \(\int _{\Gamma _{3}} j^0(u^*; b-u^*) \, d\Gamma =0\). Again by H(j)(d), we get \(j^0(x, u^*; b-u^*) = 0\) a.e. \(x\in \Gamma _{3}\). Using \((H_1)\), we have \(u^*(x) = b\) for a.e. \(x \in \Gamma _3\), which together with (22) implies

$$\begin{aligned} u^* \in K \ \ \text{ satisfies } \ \ L(w-u^*) \le a(u^*, w-u^*) \ \ \text{ for } \text{ all } \ \ w \in K. \end{aligned}$$

Next, we prove that \(u^* = u_\infty \). To this end, let \(v := w -u^* \in K_0\) with arbitrary \(w \in K\). Hence, \(L(v) \le a(u^*, v)\) for all \(v \in K_0\). Recalling that \(v \in K_0\) implies \(-v \in K_0\), we obtain \(a(u^*, v) \le L(v)\) for all \(v \in K_0\). Hence, we conclude that

$$\begin{aligned} u^* \in K \ \ \text{ satisfies } \ \ a(u^*, v) = L(v)\ \ \text{ for } \text{ all } \ \ v \in K_0, \end{aligned}$$

i.e., \(u^* \in K\) is a solution to problem (12). By the uniqueness of the solution to problem (12), we have \(u^* = u_\infty \). Moreover, since \(\{ u_\alpha \}\) is bounded in V and every weakly convergent subsequence has the same limit \(u_\infty \), the whole sequence \(\{ u_\alpha \}\) converges weakly in V to \(u_\infty \), as \(\alpha \rightarrow \infty \).

Finally, we prove the strong convergence \(u_\alpha \rightarrow u_\infty \) in V, as \(\alpha \rightarrow \infty \). Choosing \(v = u_\infty -u_\alpha \in V_0\) in problem (11), we obtain

$$\begin{aligned} a(u_\alpha , u_\infty -u_\alpha ) + \alpha \int _{\Gamma _{3}} j^0(u_\alpha ; u_\infty -u_\alpha ) \, d\Gamma \ge L(u_\infty -u_\alpha ). \end{aligned}$$

Hence

$$\begin{aligned}&a(u_\infty -u_\alpha , u_\infty -u_\alpha ) \le a(u_\infty , u_\infty -u_\alpha ) + L(u_\alpha - u_\infty ) \nonumber \\&\qquad + \alpha \int _{\Gamma _{3}} j^0(u_\alpha ; u_\infty -u_\alpha ) \, d\Gamma . \end{aligned}$$

Since \(u_\infty = b\) on \(\Gamma _3\), by H(j)(d) and the coercivity of the form a, we have

$$\begin{aligned} m_a \, \Vert u_\infty -u_\alpha \Vert ^2_V \le a(u_\infty , u_\infty -u_\alpha ) + L(u_\alpha - u_\infty ). \end{aligned}$$

Employing the weak continuity of both \(a(u_\infty , \cdot )\) and L, we conclude that \(u_\alpha \rightarrow u_\infty \) in V, as \(\alpha \rightarrow \infty \). This completes the proof. \(\square \)

5 Continuous Dependence Result

In this section we provide a result on the continuous dependence of the solution to problem (11) on the internal energy g and the heat flux q, for fixed \(\alpha >0\).

First, from the compactness of the embedding V into \(L^2(\Omega )\) and of the trace operator from V into \(L^2(\Gamma )\), we obtain the following convergence result.

Lemma 8

Let \(g_n \in L^2(\Omega )\), \(q_n \in L^2(\Gamma _2)\) for \(n \in {\mathbb {N}}\). Define \(L_n \in V^*\), \(n \in {\mathbb {N}}\), by

$$\begin{aligned} L_n(v) = \int _{\Omega } g_n v \, dx - \int _{\Gamma _{2}} q_n v \, d\Gamma \ \ \text{ for } \ \ v \in V. \end{aligned}$$

If \(g_n \rightarrow g\) weakly in \(L^2(\Omega )\), \(q_n \rightarrow q\) weakly in \(L^2(\Gamma _{2})\), and \(v_n \in V\), \(v_n \rightarrow v\) weakly in V, then

$$\begin{aligned} L_n(v_n) \rightarrow L(v), \ \ \text{ as } \ \ n \rightarrow \infty , \end{aligned}$$

and there exists a constant \(C> 0\) independent of n such that \(\Vert L_n \Vert _{V^*} \le C\) for all \(n \in {\mathbb {N}}\).

The continuous dependence result reads as follows.

Theorem 9

Assume that \(\alpha > 0\) is fixed, L, \(L_n \in V^*\), \(n \in {\mathbb {N}}\) and H(j) holds. Let \(u_n \in V_0\), \(n \in {\mathbb {N}}\), be a solution to problem (11) corresponding to \(L_n\), and

$$\begin{aligned} \lim L_n(z_n) = L(z) \ \ \text{ for } \text{ any } \ \ z_n \rightarrow z \ \text{ weakly } \text{ in } \ V, \ \text{ as } \ n \rightarrow \infty . \end{aligned}$$
(23)

Then, there exists a subsequence of \(\{ u_n \}\) which converges weakly in V to a solution of problem (11) corresponding to L. If, in addition, the following hypotheses hold

$$\begin{aligned}& j^0(x, r; s-r) + j^0(x, s; r-s) \le m_j \, |r-s|^2 \ \ \text{ for } \text{ all } \ \ r, s \in {\mathbb {R}}, \ \text{ a.e. } \ x \in \Gamma _3, \end{aligned}$$
(24)
$$\begin{aligned}&m_a > \alpha \, m_j \Vert \gamma \Vert ^2, \end{aligned}$$
(25)

where \(m_j \ge 0\), then problem (11) has unique solutions u and \(u_n \in V_0\) corresponding to L and \(L_n\), respectively, and the whole sequence \(\{ u_n \}\) converges to u in V, as \(n \rightarrow \infty \).

Proof

Let \(u_n \in V_0\) be a solution to problem (11) corresponding to \(L_n\), and \(u_\infty \in K\) be the solution to problem (12). We have

$$\begin{aligned} a(u_n, u_\infty -u_n) + \alpha \int _{\Gamma _{3}} j^0(u_n; u_\infty -u_n) \, d\Gamma \ge L_n(u_\infty -u_n). \end{aligned}$$

Hence

$$\begin{aligned}&a(u_\infty - u_n, u_\infty -u_n) \le a(u_\infty , u_\infty -u_n) + L_n (u_n-u_\infty ) \\&\qquad + \alpha \int _{\Gamma _{3}} j^0(u_n; b-u_n) \, d\Gamma . \end{aligned}$$

From hypothesis H(j)(d), since the form a is bounded and coercive, we get

$$\begin{aligned} m_a \Vert u_\infty - u_n \Vert _V^2\le & {} a(u_\infty , u_\infty -u_n) + L_n (u_n-u_\infty ) \\\le & {} M \Vert u_\infty \Vert _V \Vert u_\infty - u_n \Vert _V + \Vert L_n \Vert _{V^*} \Vert u_\infty - u_n \Vert _V, \end{aligned}$$

and subsequently

$$\begin{aligned} \Vert u_n \Vert _V \le \Vert u_\infty - u_n \Vert _V + \Vert u_\infty \Vert _V \le \frac{1}{m_a} (M \Vert u_\infty \Vert _V + k_1) + \Vert u_\infty \Vert _V \le k_2 \end{aligned}$$

for all \(n \in {\mathbb {N}}\) with \(k_1\), \(k_2 > 0\) independent of n. Hence, \(\{ u_n \}\) is uniformly bounded in V and also in \(V_0\). From the reflexivity of \(V_0\), there exist \(\xi \in V_0\) and a subsequence of \(\{ u_n \}\), denoted in the same way, such that

$$\begin{aligned} u_n \rightarrow \xi \ \ \text{ weakly } \text{ in } \ V_0, \ \text{ as } \ n \rightarrow \infty . \end{aligned}$$

We show that \(\xi \in V_0\) satisfies (11). We know that \(u_n \in V_0\) and

$$\begin{aligned} a(u_n, v) + \alpha \int _{\Gamma _{3}} j^0(u_n; v) \, d\Gamma \ge L_n(v) \ \ \text{ for } \text{ all } \ \ v \in V_0. \end{aligned}$$

Taking the upper limit, we use the weak continuity of \(a(\cdot , v)\) and (23) to get

$$\begin{aligned} a(\xi , v) + \alpha \limsup \int _{\Gamma _{3}} j^0(u_n; v) \, d\Gamma \ge \lim L_n(v) = L(v) \ \ \text{ for } \text{ all } \ \ v \in V_0. \end{aligned}$$
(26)

By the compactness of the trace operator from V into \(L^2(\Gamma _{3})\), we have \(u_n \big |_{\Gamma _{3}} \rightarrow \xi \big |_{\Gamma _{3}}\) in \(L^2(\Gamma _3)\), as \(n \rightarrow \infty \), and at least for a subsequence, \(u_n(x) \rightarrow \xi (x)\) for a.e. \(x \in \Gamma _3\) and \(|u_n(x)| \le \eta (x)\) a.e. \(x \in \Gamma _3\), where \(\eta \in L^2(\Gamma _3)\). Since, for a.e. \(x \in \Gamma _3\), the function \({\mathbb {R}}\times {\mathbb {R}}\ni (r, s) \mapsto j^0(x, r; s) \in {\mathbb {R}}\) is upper semicontinuous, see Proposition 3(iii), we obtain

$$\begin{aligned} \limsup j^0(x, u_n(x); v(x)) \le j^0(x, \xi (x); v(x)) \ \ \text{ a.e. } \ \ x \in \Gamma _3. \end{aligned}$$

Recalling the estimate

$$\begin{aligned} |j^0(x, u_n(x); v(x))| \le (c_0 + c_1 |u_n(x)|) \, |v(x)| \le k(x) \ \ \text{ a.e. } \ \ x \in \Gamma _3 \end{aligned}$$

where \(k \in L^1(\Gamma _3)\), \(k(x) = (c_0 + c_1 \eta (x)) |v(x)|\), we apply the dominated convergence theorem, see [8, Theorem 2.2.33] to get

$$\begin{aligned} \limsup \int _{\Gamma _3} j^0(u_n; v) \, d\Gamma \le \int _{\Gamma _3} \limsup j^0(u_n; v) \, d\Gamma \le \int _{\Gamma _3} j^0(\xi ; v)\, d\Gamma . \end{aligned}$$

Using the latter in (26) entails

$$\begin{aligned} a(\xi , v) + \alpha \int _{\Gamma _{3}} j^0(\xi ; v) \, d\Gamma \ge L(v) \ \ \text{ for } \text{ all } \ \ v \in V_0, \end{aligned}$$
(27)

which means that \(\xi \in V_0\) is a solution to problem (11), and completes the first part of the proof.

Next, in addition, we assume (24) and (25). The existence of a solution to (11) follows from the first part of the theorem. To prove uniqueness, let \(u_1\), \(u_2 \in V_0\) solve (11). Then, taking \(u_2-u_1\in V_0\) as a test function for \(u_1\) and \(u_1-u_2\in V_0\) for \(u_2\), and adding the corresponding inequalities, we obtain

$$\begin{aligned} a(u_1-u_2, u_2-u_1) + \alpha \int _{\Gamma _{3}} \left( j^0(u_1; u_2-u_1) + j^0(u_2; u_1-u_2) \right) \, d\Gamma \ge 0. \end{aligned}$$

From the coercivity of the form a and (24), we have

$$\begin{aligned} m_a \, \Vert u_1-u_2 \Vert ^2_V \le \alpha \, m_j \int _{\Gamma _{3}} | u_1(x) - u_2(x) |^2 \, d\Gamma \le \alpha \, m_j \Vert \gamma \Vert ^2 \, \Vert u_1-u_2 \Vert ^2_V. \end{aligned}$$

Hence \((m_a - \alpha \, m_j \Vert \gamma \Vert ^2) \, \Vert u_1-u_2 \Vert ^2_V \le 0\), and by the smallness condition (25), we get \(u_1=u_2\). We deduce that the solutions u, \(u_n \in V_0\) to (11) are unique, and by (27), we immediately have \(\xi = u\).

Finally, we show the strong convergence of \(\{u_n \}\) to u in V. We choose suitable test functions from \(V_0\) in (11) and (27) to obtain

$$\begin{aligned}&a(u_n, u-u_n) + \alpha \int _{\Gamma _{3}} j^0(u_n; u-u_n) \, d\Gamma \ge L_n(u-u_n) \\&\quad a(u, u_n-u) + \alpha \int _{\Gamma _{3}} j^0(u; u_n-u) \, d\Gamma \ge L(u_n-u). \end{aligned}$$

Adding the two inequalities, we have

$$\begin{aligned}&a(u_n-u, u-u_n) + \alpha \int _{\Gamma _{3}} \left( j^0(u_n; u-u_n) + j^0(u; u_n-u) \right) \, d\Gamma \\&\qquad \ge L_n(u-u_n) + L(u_n-u). \end{aligned}$$

Using the coercivity of the form a and (24), we get

$$\begin{aligned} m_a \, \Vert u_n-u \Vert ^2_V \le \alpha \, m_j \Vert \gamma \Vert ^2 \, \Vert u_n-u \Vert ^2_V + L_n(u_n-u) + L(u-u_n) \end{aligned}$$

which entails

$$\begin{aligned} (m_a - \alpha \, m_j \Vert \gamma \Vert ^2) \, \Vert u_n-u \Vert ^2_V \le L_n(u_n-u) + L(u-u_n). \end{aligned}$$

From hypotheses (23) and (25), we deduce that \(\Vert u_n - u\Vert _V \rightarrow 0\), as \(n\rightarrow \infty \). Since \(u \in V_0\) is unique, we infer that the whole sequence \(\{u_n \}\) converges in V to u. The proof is complete. \(\square \)
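
The continuous dependence asserted in Theorem 9 can be observed numerically in the smooth convex case \(j(r) = \frac{1}{2}(r-b)^2\) of Example 13, for which \(m_j = 0\), so that the smallness condition (25) holds for every \(\alpha > 0\), and problem (11) reduces to the weak form (4) of the Robin problem (2). The sketch below is purely illustrative; it assumes the hypothetical one-dimensional setting \(\Omega = (0,1)\), \(\Gamma _1 = \{0\}\), \(\Gamma _2\) empty, \(\Gamma _3 = \{1\}\), \(\alpha = 5\), \(b = 1\), and the perturbed internal energies \(g_n(x) = -1 + \sin (n \pi x)/n \rightarrow g(x) = -1\) in \(L^2(0,1)\).

```python
# Illustrative sketch only: continuous dependence of the Robin solution on g.
# Assumed setting: Omega = (0,1), Gamma_1 = {0}, Gamma_2 empty, Gamma_3 = {1},
# alpha = 5, b = 1, g = -1 and g_n = g + sin(n*pi*x)/n -> g in L^2(0,1).
import numpy as np

def solve(g_vals, alpha=5.0, b=1.0):
    """Finite differences for -u'' = g, u(0) = 0, u'(1) = alpha*(b - u(1))."""
    n = len(g_vals)
    h = 1.0 / n
    A = np.zeros((n, n))
    rhs = np.asarray(g_vals, dtype=float) * h**2
    for i in range(n - 1):
        A[i, i], A[i, i + 1] = 2.0, -1.0
        if i > 0:
            A[i, i - 1] = -1.0
    A[n - 1, n - 1], A[n - 1, n - 2] = 1.0 + alpha * h, -1.0
    rhs[n - 1] = alpha * h * b
    return np.linalg.solve(A, rhs)

m = 400
x = np.linspace(0.0, 1.0, m + 1)[1:]
u = solve(np.full(m, -1.0))
for k in [1, 10, 100]:
    u_k = solve(-1.0 + np.sin(k * np.pi * x) / k)
    print(k, np.sqrt(np.mean((u_k - u) ** 2)))   # discrete L^2 distance, decreasing
```

The printed discrete \(L^2\) distances decrease as \(g_n\) approaches g, in agreement with the theorem.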

Remark 10

It is known that for a locally Lipschitz function \(j :{\mathbb {R}}\rightarrow {\mathbb {R}}\), the condition (24) is equivalent to the so-called relaxed monotonicity condition of the subdifferential

$$\begin{aligned} (\eta _1 - \eta _2) (r_1-r_2) \ge - m_j \, | r_1 -r_2 |^2 \end{aligned}$$
(28)

for all \(r_i \in {\mathbb {R}}\), \(\eta _i \in \partial j(r_i)\), \(i=1\), 2. The latter was extensively used in the literature, see [17] and the references therein. Condition (24) can be verified by proving that the function

$$\begin{aligned} {\mathbb {R}}\ni r \mapsto j(r) + \frac{m_j}{2} |r|^2 \in {\mathbb {R}}\end{aligned}$$

is convex. An example of a nonconvex function which satisfies the condition (24) is given in Example 11. Note that if \(j :{\mathbb {R}}\rightarrow {\mathbb {R}}\) is convex, then (24) and (28) hold with \(m_j =0\). In fact, by convexity,

$$\begin{aligned} j^0(r; s-r) \le j(s) - j(r)\quad \mathrm{and} \quad j^0(s; r-s) \le j(r) - j(s) \end{aligned}$$

for all r, \(s \in {\mathbb {R}}\) which imply \(j^0(r; s-r) + j^0(s; r-s) \le 0\). Therefore, for a convex function \(j :{\mathbb {R}}\rightarrow {\mathbb {R}}\), condition (24) or, equivalently, (28) reduces to monotonicity of the (convex) subdifferential, i.e., \(m_j = 0\).

6 Examples

The following examples provide nonconvex and convex functions which satisfy the hypotheses H(j), \((H_1)\) and (24).

Example 11

Let \(j :{\mathbb {R}}\rightarrow {\mathbb {R}}\) be the function defined by

$$\begin{aligned} j(r) = \left\{ \begin{array}{cc} (r-b)^2 &{}\text {if} \ \ r < b, \\ 1-e^{-(r-b)} &{}\text {if} \ \ r\ge b \end{array}\right. \end{aligned}$$

for \(r \in {\mathbb {R}}\) with a constant \(b \in {\mathbb {R}}\). This function is nonconvex, locally Lipschitz and its subdifferential is given by

$$\begin{aligned} \partial j(r) = \left\{ \begin{array}{cc} 2(r-b) &{}\text { if }\, r < b, \\ {[}0,1{]} &{}\text { if }\, r = b, \\ e^{-(r-b)} &{}\text { if }\, r > b \end{array}\right. \end{aligned}$$

for all \(r \in {\mathbb {R}}\). Hence, we have \(|\partial j(r)| \le 1+ 2|b| + 2|r|\) for all \(r \in {\mathbb {R}}\). Moreover, using Proposition 3(ii), one has

$$\begin{aligned} j^0(r; b-r) =\max \{ \zeta \, (b-r) \mid \zeta \in \partial j(r) \} = \left\{ \begin{array}{cc} -2(b-r)^2 &{}\text { if } r < b, \\ 0 &{}\text { if } r = b, \\ e^{-(r-b)}(b-r) &{}\text {if } r > b \end{array}\right. \end{aligned}$$

for all \(r \in {\mathbb {R}}\). Thus H(j) is satisfied. By the above formula, we also infer that \((H_1)\) is satisfied. Further, we show that condition (24) holds with \(m_j = 1\). The condition (24) is equivalent to the relaxed monotonicity of the subdifferential

$$\begin{aligned} (\partial j(r) - \partial j(s))(r-s) \ge - |r-s|^2 \ \ \text{ for } \text{ all } \ \ r, s \in {\mathbb {R}}. \end{aligned}$$

The latter means that

$$\begin{aligned} \left( \partial \left( j(r)+\frac{1}{2}r^2 \right) - \partial \left( j(s)+\frac{1}{2} s^2 \right) \right) (r-s) \ge 0 \ \ \text{ for } \text{ all } \ \ r, s \in {\mathbb {R}}, \end{aligned}$$

i.e., the subdifferential \(\partial \psi \) of the function \(\psi :{\mathbb {R}}\rightarrow {\mathbb {R}}\) defined by \(\psi (r) = j(r) + \frac{1}{2} r^2\) is monotone. Now, the monotonicity of \(\partial \psi \) can be verified using the formula

$$\begin{aligned} \partial \psi (r) = \left\{ \begin{array}{cc} 3 r - 2 b &{}\text { if }\, r < b, \\ {[}b, b +1{]} &{}\text { if }\, r = b, \\ e^{-(r-b)} + r &{}\text { if }\, r > b \end{array}\right. \end{aligned}$$

for all \(r \in {\mathbb {R}}\). We conclude that H(j), \((H_1)\) and (24) are satisfied.
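
A quick numerical sanity check of this example is sketched below (illustrative only, with the hypothetical choice \(b = 0.5\)): it samples the closed-form expression for \(j^0(r; b-r)\), confirming H(j)(d) and the \((H_1)\)-type behavior, and verifies that the selection of \(\partial \psi \) given above is nondecreasing on the sampled grid, which reflects condition (24) with \(m_j = 1\).

```python
# Illustrative check of Example 11 with the assumed value b = 0.5.
import numpy as np

b = 0.5

def j0_b_minus_r(r):
    # closed-form j^0(r; b - r) from Example 11
    return np.where(r < b, -2.0 * (b - r) ** 2, np.exp(-(r - b)) * (b - r))

def dpsi(r):
    # single-valued selection of the subdifferential of psi(r) = j(r) + r^2/2
    return np.where(r < b, 3.0 * r - 2.0 * b, np.exp(-(r - b)) + r)

r = np.linspace(b - 5.0, b + 5.0, 2001)
print(np.all(j0_b_minus_r(r) <= 0.0))                        # H(j)(d): True
print(np.all(j0_b_minus_r(r[np.abs(r - b) > 1e-9]) < 0.0))   # (H_1)-type: True
print(np.all(np.diff(dpsi(r)) >= 0.0))                       # monotone selection: True
```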

Example 12

(see [15, Example 3]) Let the function \(j :{\mathbb {R}}\rightarrow {\mathbb {R}}\) be given by

$$\begin{aligned} j(r) = \min \{ j_1(r), j_2(r) \} \end{aligned}$$

for \(r \in {\mathbb {R}}\), where \(j_i :{\mathbb {R}}\rightarrow {\mathbb {R}}\) are convex, quadratic and such that \(j_i'(b)=0\), \(i=1\), 2. It is known, see [6, Theorem 2.5.1], that

$$\begin{aligned} \partial j(r) \subset \mathrm{conv} \{ j_1'(r), j_2'(r) \} \ \ \text{ for } \text{ all } \ \ r \in {\mathbb {R}}, \end{aligned}$$

so the subgradient of j has at most linear growth. Using the monotonicity of the subgradient of a convex function, we get

$$\begin{aligned} 0\le \left( j_i'(b)-j_i'(r)\right) (b-r) = -j_i'(r)(b-r) \ \ \text{ for } \text{ all } \ \ r \in {\mathbb {R}},\ i=1, 2, \end{aligned}$$

and, by Proposition 3(ii), we have

$$\begin{aligned} j^0(r; b-r)= & {} \max \{ \zeta (b-r) \mid \zeta \in \partial j(r)\} \\\le & {} \max \{ \left( \lambda j_1'(r) + (1-\lambda ) j_2'(r)\right) (b-r) \mid \lambda \in [0, 1]\} \le 0. \end{aligned}$$

Hence, we deduce that condition H(j) is satisfied. Similarly, if \(j^0(r; b-r)=0\), then \(\lambda j_1'(r) (b-r) = 0\) and \((1-\lambda ) j_2'(r)(b-r)=0\) for some \(\lambda \in [0,1]\), which is possible only when \(r=b\). So, j also satisfies \((H_1)\). Further, it is easy to observe that when the graphs of the functions \(j_1\) and \(j_2\) have two common points, the function j is nonconvex.
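
To make this example concrete, the sketch below uses the hypothetical pair \(j_1(r) = (r-b)^2\), \(j_2(r) = 3(r-b)^2 - 2\) with \(b = 0\) (this particular choice is ours and is not taken from [15]): the two graphs meet at \(r = b \pm 1\), a midpoint test exhibits the nonconvexity of \(j = \min \{ j_1, j_2\}\), and the bound \(j^0(r; b-r) \le \max \{ j_1'(r)(b-r),\, j_2'(r)(b-r)\} \le 0\) confirms H(j)(d) on a sample grid.

```python
# Illustrative instance of Example 12: j1(r) = r^2, j2(r) = 3r^2 - 2, b = 0.
import numpy as np

b = 0.0
j1 = lambda r: (r - b) ** 2
j2 = lambda r: 3.0 * (r - b) ** 2 - 2.0
d1 = lambda r: 2.0 * (r - b)              # j1'
d2 = lambda r: 6.0 * (r - b)              # j2'
j = lambda r: np.minimum(j1(r), j2(r))

r = np.linspace(b - 3.0, b + 3.0, 1201)
print(np.all(np.maximum(d1(r) * (b - r), d2(r) * (b - r)) <= 0.0))   # H(j)(d): True

# nonconvexity witness across the intersection point r = b + 1
x, y = b + 0.5, b + 1.5
print(j(0.5 * (x + y)) > 0.5 * (j(x) + j(y)))                        # True
```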

Example 13

Let \(j:{\mathbb {R}}\rightarrow {\mathbb {R}}\) be the function defined by

$$\begin{aligned} j(r)=\frac{1}{2}(r-b)^{2} \end{aligned}$$

for \(r \in {\mathbb {R}}\) with \(b \in {\mathbb {R}}\). Then

$$\begin{aligned} j^{0}(r; s)=(r-b)\, s \ \ \text{ and } \ \ \partial j(r)=r-b \end{aligned}$$

for r, \(s \in {\mathbb {R}}\). Moreover, we have \(j^0(r; b-r) = (r-b)\, (b-r) = - (b-r)^2 \le 0\) for all \(r \in {\mathbb {R}}\). Also, for all \(r \in {\mathbb {R}}\), if \(j^0(r; b-r) = 0\), then \((r-b)\, (b-r) = - (b-r)^2 = 0\), which implies \(r =b\). Hence we deduce that j satisfies properties H(j) and \((H_1)\). By Remark 10, it is clear that j satisfies (24) with \(m_j =0\).

Example 14

Let \(m_{1}\), \(m_{2}\), \(r_{0}\in {\mathbb {R}}\) be constants such that \(m_{1}\le -r_{0}<0\) and \(m_{2}\ge r_{0}>0\). Consider the function \(j:{\mathbb {R}}\rightarrow {\mathbb {R}}\) defined by

$$\begin{aligned} j(r) = {\left\{ \begin{array}{ll} \frac{r_{0}^{2}}{2}+m_{1}[r-(b-r_{0})] &{}\text {if } \ \ r< b-r_{0}, \\ \frac{1}{2}(r-b)^2 &{}\text {if} \ \ b-r_{0}\le r\le b+r_{0}, \\ \frac{r_{0}^{2}}{2}+m_{2}{[}r-(b+r_{0}){]} &{}\text {if} \ \ r> b+r_{0} \end{array}\right. } \end{aligned}$$

for \(r \in {\mathbb {R}}\), \(b \in {\mathbb {R}}\). The function j is convex, its subdifferential is given by

$$\begin{aligned} \partial j(r) = {\left\{ \begin{array}{ll} m_1 &{}\text {if} \ \ r< b-r_{0}, \\ \left[ m_{1},-r_{0}\right] &{}\text {if} \ \ r= b-r_{0}, \\ r-b &{}\text {if} \ \ b-r_{0}< r < b+r_{0}, \\ \left[ r_{0},m_{2}\right] &{}\text {if} \ \ r= b+r_{0}, \\ m_2 &{}\text {if} \ \ r> b+r_{0} \\ \end{array}\right. } \end{aligned}$$

for all \(r \in {\mathbb {R}}\), and its generalized directional derivative has the form

$$\begin{aligned} j^0(r; b-r) = {\left\{ \begin{array}{ll} m_{1}(b-r)< 0 &{}\text {if} \ r< b-r_{0}, \\ - r_0^2< 0 &{} \text {if} \ r=b-r_{0}, \\ -(b-r)^{2} \le 0 &{}\text {if}\ b-r_{0}< r< b+r_{0},\\ - r_0^2< 0 &{}\text {if} \ r=b+r_{0}, \\ m_{2}(b-r)< 0 &{}\text {if} \ r> b+r_{0} \\ \end{array}\right. } \end{aligned}$$

for all \(r \in {\mathbb {R}}\). Hence, we obtain that \(j^{0}(r; b-r)\le 0\) for all \(r \in {\mathbb {R}}\), and \(j^0(r;b-r)=0\) only when \(r=b\). We conclude that j satisfies H(j) and \((H_1)\). Moreover, the function j, being convex, satisfies (24) with \(m_j =0\), see Remark 10.
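
A short numerical check of this example is sketched below (illustrative only), with the hypothetical parameters \(b = 0\), \(r_0 = 1\), \(m_1 = -2\) and \(m_2 = 3\), so that \(m_1 \le -r_0\) and \(m_2 \ge r_0\): the second differences of j are nonnegative (convexity) and the closed-form expression for \(j^0(r; b-r)\) is nonpositive, vanishing only at \(r = b\).

```python
# Illustrative check of Example 14 with b = 0, r0 = 1, m1 = -2, m2 = 3.
import numpy as np

b, r0, m1, m2 = 0.0, 1.0, -2.0, 3.0

def j(r):
    return np.where(r < b - r0, r0**2 / 2 + m1 * (r - (b - r0)),
           np.where(r > b + r0, r0**2 / 2 + m2 * (r - (b + r0)),
                    0.5 * (r - b) ** 2))

def j0(r):
    # closed-form j^0(r; b - r) from Example 14
    return np.where(r < b - r0, m1 * (b - r),
           np.where(r > b + r0, m2 * (b - r), -(b - r) ** 2))

r = np.linspace(b - 4.0, b + 4.0, 801)
print(np.all(np.diff(j(r), 2) >= -1e-12))          # convexity of j: True
print(np.all(j0(r) <= 0.0))                        # H(j)(d): True
print(np.all(j0(r[np.abs(r - b) > 1e-9]) < 0.0))   # (H_1)-type: True
```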

Example 15

We define \(j :{\mathbb {R}}\rightarrow {\mathbb {R}}\) by

$$\begin{aligned} j(r) = |r-b| = {\left\{ \begin{array}{ll} -r+b &{}\text {if} \ \ r \le b, \\ r-b &{}\text {if} \ \ r > b \end{array}\right. } \end{aligned}$$

for \(r \in {\mathbb {R}}\) with a constant \(b \in {\mathbb {R}}\). Then, we have

$$\begin{aligned} \partial j(r) = {\left\{ \begin{array}{ll} -1 &{}\text { if} \ \ r < b, \\ {[}-1, 1{]} &{}\text {if} \ \ r = b, \\ 1 &{}\text {if} \ \ r > b \end{array}\right. } \end{aligned}$$

for all \(r \in {\mathbb {R}}\), and

$$\begin{aligned} j^0(r; b-r) = {\left\{ \begin{array}{ll} b-r &{}\text { if} \ \ r > b, \\ 0 &{}\text { if} \ \ r = b, \\ r-b &{}\text {if} \ \ r < b \end{array}\right. } \end{aligned}$$

for all \(r \in {\mathbb {R}}\). Thus, \(j^0(r; b-r) \le 0\) for all \(r \in {\mathbb {R}}\). Also, we observe that \(j^{0}(r; b-r)=0\) only when \(r=b\). In consequence, the properties H(j) and \((H_1)\) are verified. Further, since j is convex, it satisfies (24) with \(m_j =0\), see Remark 10.

7 Conclusions

We have studied a nonlinear elliptic problem with mixed boundary conditions involving a nonmonotone multivalued subdifferential boundary condition on a part of the boundary. Based on the notion of the Clarke generalized gradient, the variational form of the problem leads to an elliptic boundary hemivariational inequality. We have provided results on existence, comparison of solutions and continuous dependence on the data. Sufficient conditions have been found which guarantee the convergence of solutions, as the heat transfer coefficient tends to infinity, to the solution of a problem with the Dirichlet boundary condition. Under our hypotheses, the proof of the monotonicity property of Theorem 1(iv) for the elliptic hemivariational inequality (7) remains an interesting open problem. We have also given some examples of locally Lipschitz (nondifferentiable and nonconvex) functions to which our results can be applied.