Introduction

Considerable attention has been devoted by many researchers to the study of variational inclusions (inequalities) and their generalized forms, which play a leading and significant role in connecting research in analysis, geometry, biology, elasticity, optimization, image processing, and the biomedical and mathematical sciences. A broad range of problems encountered in physics, economics, management science, and operations research can be formulated as an inclusion problem \(0 \in T(x)\) for a given set-valued mapping T on a Hilbert space H. Thus, the problem of finding a zero of T, i.e., a point \(x \in H\) such that \(0 \in T(x)\), is a fundamental problem in many areas of the applied sciences.

On the other hand, it is well known that monotone operators on Hilbert spaces can be regularized into single-valued Lipschitzian monotone operators via a process known as the Yosida approximation. These Yosida approximation operators are instrumental in approximating the solutions of general variational inclusion problems using non-expansive resolvent operators. Recently, many authors [2, 3, 5, 6, 8–10] have applied Yosida approximation operators and their generalized forms to solve some variational inclusion problems. Zou and Huang [14] and Ahmad et al. [1] introduced and studied the graph convergence of \(H(\cdot ,\cdot )\)-accretive operators and \(H(\cdot ,\cdot )\)-co-accretive operators, respectively, for solving variational inclusion problems and their systems. For more details, we refer to [4, 11, 12, 15].

This paper deals with the introduction of a generalized Yosida approximation operator together with some of its properties. Using the concept of graph convergence of \(H(\cdot ,\cdot )\)-accretive operators, we prove the convergence of the generalized Yosida approximation operator. Finally, we solve a Yosida inclusion problem in q-uniformly smooth Banach spaces. A MATLAB program related to the graph convergence of the generalized Yosida approximation operator is discussed with a consolidated example. Our results are new in this direction and refine the results of Li and Huang [7].

Preliminaries

Let X be a real Banach space with its dual space \(X^*\). We denote the duality pairing between X and \(X^*\) by \(\langle \cdot ,\cdot \rangle\), and \(2^X\) is the family of all nonempty subsets of X.

The generalized duality mapping \(F_q:X \rightarrow 2^{X^*}\) is defined by

$$\begin{aligned} F_q(x)=\left\{ f^* \in X^*: \langle x,f^*\rangle =\Vert x\Vert ^{q},\Vert f^*\Vert =\Vert x\Vert ^{q-1}\right\} ,\quad \forall x\in X, \end{aligned}$$

where \(q>1\) is a constant. For \(q=2\), \(F_q\) coincides with the normalized duality mapping. If X is a Hilbert space, \(F_2\) becomes the identity mapping on X. It is to be noted that if X is uniformly smooth, then \(F_q\) is single-valued. Throughout the paper, we assume that X is a real Banach space and \(F_q\) is single-valued.
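In a Hilbert space such as Euclidean \(\mathbb {R}^n\), the generalized duality mapping has the explicit single-valued form \(F_q(x)=\Vert x\Vert ^{q-2}x\). The following sketch (illustrative only, not part of the paper) checks the two defining identities numerically:

```python
import math

def F_q(x, q):
    # Generalized duality mapping on Euclidean R^n (a Hilbert space);
    # here it is single-valued: F_q(x) = ||x||^(q-2) * x.
    norm = math.sqrt(sum(t * t for t in x))
    if norm == 0.0:
        return [0.0] * len(x)
    return [norm ** (q - 2) * t for t in x]

x = [3.0, 4.0]                               # ||x|| = 5
f = F_q(x, q=3)
dot = sum(a * b for a, b in zip(x, f))
norm_x = 5.0
norm_f = math.sqrt(sum(t * t for t in f))
assert abs(dot - norm_x ** 3) < 1e-9         # <x, F_q(x)> = ||x||^q
assert abs(norm_f - norm_x ** 2) < 1e-9      # ||F_q(x)|| = ||x||^{q-1}
assert F_q(x, q=2) == x                      # q = 2: identity on a Hilbert space
```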

The function \(\rho _{X}:[0,\infty ) \rightarrow [0,\infty )\), called the modulus of smoothness of X, is defined by

$$\begin{aligned} \rho _{X}(t)=\sup \left\{ \frac{\Vert x+y\Vert +\Vert x-y\Vert }{2}-1: \Vert x\Vert \le 1, \Vert y\Vert \le t \right\} . \end{aligned}$$

A Banach space X is called

  1.

    uniformly smooth if \(\lim \limits _{t \rightarrow 0} \frac{\rho _{X}(t)}{t}=0\);

  2.

    q-uniformly smooth if there exists a constant \(c>0\), such that

    $$\begin{aligned} \rho _{X}(t) \le c~t^q,\quad q>1. \end{aligned}$$
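For instance, in a Hilbert space one has \(\rho _{X}(t)=\sqrt{1+t^2}-1\le \frac{t^2}{2}\), so every Hilbert space is 2-uniformly smooth with \(c=\frac{1}{2}\). A short numerical sketch (illustrative only) estimates the modulus of smoothness of Euclidean \(\mathbb {R}^2\); by rotation invariance it suffices to fix \(\Vert x\Vert =1\) and sweep the angle of y with \(\Vert y\Vert =t\):

```python
import math

# Numerical check of the modulus of smoothness for X = Euclidean R^2.
# For a Hilbert space rho_X(t) = sqrt(1 + t^2) - 1 <= t^2/2, so X is
# 2-uniformly smooth with c = 1/2.
def rho_estimate(t, steps=4000):
    x = (1.0, 0.0)                       # sup over ||x|| <= 1 is attained at ||x|| = 1
    best = 0.0
    for k in range(steps):
        a = 2.0 * math.pi * k / steps
        y = (t * math.cos(a), t * math.sin(a))
        s = (math.hypot(x[0] + y[0], x[1] + y[1])
             + math.hypot(x[0] - y[0], x[1] - y[1])) / 2.0 - 1.0
        best = max(best, s)
    return best

for t in (0.1, 0.5, 1.0):
    exact = math.sqrt(1.0 + t * t) - 1.0
    est = rho_estimate(t)
    assert exact - 1e-4 <= est <= exact + 1e-9   # sampled sup matches rho_X(t)
    assert exact <= t * t / 2.0                  # hence rho_X(t) <= c t^2 with q = 2
```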

In connection with the characteristic inequalities in q-uniformly smooth Banach spaces, Xu [13] proved the following important lemma.

Lemma 1

Let X be a real uniformly smooth Banach space. Then, X is q-uniformly smooth if and only if there exists a constant \(c_q>0\), such that for all \(x,y \in X,\)

$$\begin{aligned} \Vert x+y\Vert ^q \le \Vert x\Vert ^q + q \langle y,F_{q}(x)\rangle +c_q \Vert y\Vert ^q. \end{aligned}$$

The following definitions and concepts are essential to achieve the aim of this paper.

Definition 1

[14] Let \(A,B:X \rightarrow X\) and \(H:X \times X \rightarrow X\) be single-valued mappings.

  1.

    A is said to be accretive, if

    $$\begin{aligned} \langle A(x)-A(y),F_q(x-y)\rangle \ge 0,\quad \forall x,\quad y \in X; \end{aligned}$$
  2.

    A is said to be strictly accretive, if A is accretive and

    $$\begin{aligned} \langle A(x)-A(y),F_q(x-y)\rangle = 0,\quad \mathrm{if} \; \mathrm{and} \; \mathrm{only}\; \mathrm{if}\; x=y; \end{aligned}$$
  3.

    A is said to be \(\delta _A\)-strongly accretive, if there exists a constant \(\delta _A>0\), such that

    $$\begin{aligned} \langle Ax-Ay,F_q(x-y)\rangle \ge {\delta }_{A} \Vert x-y\Vert ^q; \end{aligned}$$
  4.

    A is said to be \(\gamma _A\)-Lipschitz continuous, if there exists a constant \(\gamma _A>0\), such that

    $$\begin{aligned} \Vert Ax-Ay\Vert \le {\gamma }_{A} \Vert x-y\Vert , \quad \forall x,\quad y \in X; \end{aligned}$$
  5.

    \(H(A,\cdot )\) is said to be \(\alpha\)-strongly accretive with respect to A, if there exists a constant \(\alpha >0\), such that

    $$\begin{aligned} \langle H(Ax,\cdot )-H(Ay,\cdot ),F_q(x-y)\rangle \ge {\alpha } \Vert x-y\Vert ^q, \quad \forall x,\quad y \in X; \end{aligned}$$
  6.

    \(H(\cdot ,B)\) is said to be \(\beta\)-relaxed accretive with respect to B, if there exists a constant \(\beta >0\), such that

    $$\begin{aligned} \langle H(\cdot ,Bx)-H(\cdot ,By),F_q(x-y)\rangle \ge -{\beta } \Vert x-y\Vert ^q,\quad \forall x,\quad y \in X; \end{aligned}$$
  7.

    \(H(A,\cdot )\) is said to be \(\sigma\)-Lipschitz continuous with respect to A, if there exists a constant \(\sigma >0\), such that

    $$\begin{aligned} \Vert H(Ax,\cdot )-H(Ay,\cdot )\Vert \le {\sigma } \Vert x-y\Vert ,\quad \forall x,\quad y \in X. \end{aligned}$$

Similarly, we can define the Lipschitz continuity of H with respect to B.

Definition 2

[14] Let \(H:X \rightarrow X\) be a single-valued mapping and \(M:X \rightarrow 2^X\) be a set-valued mapping. The mapping M is said to be

  1.

    accretive, if

    $$\begin{aligned} \langle u-v,F_q(x-y)\rangle \ge 0,\quad \forall x,y \in X,\quad u \in M(x), v \in M(y); \end{aligned}$$
  2.

    m-accretive, if M is accretive and \((I+\lambda M)(X)=X\), for all \(\lambda >0\), where I is the identity operator on X;

  3.

    H-accretive, if M is accretive and \((H+\lambda M)(X)=X\), for all \(\lambda >0.\)

Definition 3

[14] Let \(A,B:X \rightarrow X\), \(H:X\times X \rightarrow X\) be the single-valued mappings and \(M:X \rightarrow 2^X\) be a set-valued mapping. The mapping M is said to be \(H(\cdot ,\cdot )\)-accretive with respect to A and B, if M is accretive and \([H(A,B)+\lambda M](X)=X,\) for every \(\lambda >0.\)

Lemma 2

[14] Let H(A, B) be \(\alpha\)-strongly accretive with respect to A, \(\beta\)-relaxed accretive with respect to B, and \(\alpha > \beta\). Let M be an \(H(\cdot ,\cdot )\)-accretive operator with respect to A and B. Then, the operator \([H(A,B)+\lambda M]^{-1}\) is single-valued and is called the resolvent operator, i.e., \(R^{H(\cdot ,\cdot )}_{M,\lambda }:X \rightarrow X\), such that

$$\begin{aligned} R^{H(\cdot ,\cdot )}_{M,\lambda }(u)=[H(A,B)+\lambda M]^{-1}(u), \quad \forall u \in X, \quad \lambda >0. \end{aligned}$$
(1)

Furthermore, the resolvent operator defined by Eq. (1) is \(\frac{1}{(\alpha -\beta )}\)-Lipschitz continuous.
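The Lipschitz bound of Lemma 2 can be illustrated on a simple scalar model with assumed (hypothetical) data, not taken from the paper: \(X=\mathbb {R}\), \(A=B=I\), \(H(u,v)=3.5u-0.5v\) (so \(H(A,B)x=3x\), \(\alpha =3.5\), \(\beta =0.5\)), \(M(x)=x\), and \(\lambda =1\):

```python
# Scalar sanity check of Lemma 2 with assumed (hypothetical) data:
# X = R, A = B = I, H(u, v) = 3.5*u - 0.5*v, M(x) = x, lambda = 1.
# Then H(A,B)x = 3x and the resolvent is [H(A,B) + M]^{-1}(u) = u/4,
# which is 1/4-Lipschitz, consistent with 1/4 <= 1/(alpha - beta) = 1/3.
alpha, beta = 3.5, 0.5
R = lambda u: u / 4.0                  # resolvent: inverse of x -> 3x + x
pairs = [(-2.0, 5.0), (0.3, 0.7), (10.0, -10.0)]
for u, v in pairs:
    assert abs(R(u) - R(v)) <= (1.0 / (alpha - beta)) * abs(u - v) + 1e-12
```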

Lemma 3

[3]  Let \(\{a_n\}\) and \(\{b_n\}\) be two non-negative real sequences satisfying

$$\begin{aligned} a_{n+1} \le k a_n + b_n, \end{aligned}$$

with \(0< k <1\) and \(b_n \rightarrow 0.\) Then, \(\lim \nolimits _{n \rightarrow \infty } a_n=0.\)
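A quick numerical illustration of Lemma 3, with the assumed sample data \(k=\frac{1}{2}\) and \(b_n=\frac{1}{n}\) (arbitrary choices satisfying the hypotheses):

```python
# Numerical illustration of Lemma 3 with the assumed sample data
# k = 0.5 and b_n = 1/n: the recursion a_{n+1} <= k*a_n + b_n forces a_n -> 0.
k = 0.5
a = 1.0                       # a_0
for n in range(1, 20001):
    b_n = 1.0 / n             # b_n -> 0
    a = k * a + b_n           # worst case: equality in the recursion
assert 0.0 < a < 1e-3         # a_n has become negligibly small
```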

Generalized Yosida approximation operator and its convergence

We define the generalized Yosida approximation operator using the resolvent operator defined by Eq. (1), that is

$$\begin{aligned} R^{H(\cdot ,\cdot )}_{M,\lambda }(u)=[H(A,B)+\lambda M]^{-1}(u),\quad \forall u \in X,\quad \lambda >0. \end{aligned}$$

Definition 4

The generalized Yosida approximation operator denoted by \(J^{H(\cdot ,\cdot )}_{M,\lambda }\) is defined as

$$\begin{aligned} J^{H(\cdot ,\cdot )}_{M,\lambda }(u)=\frac{1}{\lambda }\left[ I-R^{H(\cdot ,\cdot )}_{M,\lambda }\right] (u),\quad \forall u \in X\quad \mathrm{and}\quad \lambda >0, \end{aligned}$$
(2)

where I is the identity mapping on X.

Lemma 4

The generalized Yosida approximation operator defined by Eq. (2) is

  1.

    \(\theta _1\)-Lipschitz continuous, where \(\theta _1=\frac{[\alpha -\beta +1]}{\lambda (\alpha -\beta )}, \alpha > \beta .\)

  2.

    \(\theta _2\)-strongly monotone, where \(\theta _2=\frac{[(\alpha -\beta )-1]}{\lambda (\alpha -\beta )}, \alpha > \beta .\)

Proof

  1.

    Let \(u,v \in X\) and \(\lambda >0\). Using Lemma 2, we have

    $$\begin{aligned} \left\| J^{H(\cdot ,\cdot )}_{M,\lambda }(u)-J^{H(\cdot , \cdot )}_{M,\lambda }(v)\right\|&= \frac{1}{\lambda } \left\| [I(u)-R^{H(\cdot ,\cdot )}_{M,\lambda }(u)]-[I(v) -R^{H(\cdot ,\cdot )}_{M,\lambda }(v)]\right\| \\&\le \frac{1}{\lambda }\left[ \Vert u-v\Vert +\left\| R^{H(\cdot ,\cdot )}_{M,\lambda } (u)-R^{H(\cdot ,\cdot )}_{M,\lambda }(v)\right\| \right] \\&\le \frac{1}{\lambda }[\Vert u-v\Vert + \frac{1}{[\alpha - \beta ]} \Vert u-v\Vert ]\\&= \frac{1}{\lambda }\left[ \frac{\alpha -\beta +1}{\alpha -\beta }\right] \Vert u-v\Vert , \end{aligned}$$

    i.e.,

    $$\begin{aligned} \left\| J^{H(\cdot ,\cdot )}_{M,\lambda }(u)-J^{H(\cdot , \cdot )}_{M,\lambda }(v)\right\| \le \theta _1 \Vert u-v\Vert , \end{aligned}$$
    (3)

    where \(\theta _1= \frac{[\alpha -\beta +1]}{\lambda (\alpha -\beta )}, \alpha > \beta .\)

  2.

For any \(u,v \in X\) and \(\lambda >0\), using Lemma 2, we have

    $$\begin{aligned} \Big \langle&J^{H(\cdot ,\cdot )}_{M,\lambda }(u)-J^{H(\cdot ,\cdot )}_{M,\lambda }(v), F_q(u-v)\Big \rangle \\&\quad = \frac{1}{\lambda } \Big \langle I(u)-R^{H(\cdot ,\cdot )}_{M,\lambda }(u)-[I(v)-R^{H(\cdot , \cdot )}_{M,\lambda }(v)],F_q(u-v) \Big \rangle \\&\quad = \frac{1}{\lambda }\Big [ \Big \langle u-v,F_q(u-v) \Big \rangle -\Big \langle R^{H(\cdot ,\cdot )}_{M,\lambda }(u)-R^{H(\cdot , \cdot )}_{M,\lambda }(v),F_q(u-v) \Big \rangle \Big ]\\&\quad \ge \frac{1}{\lambda }\Big [\Vert u-v\Vert ^q-\Big \Vert R^{H(\cdot , \cdot )}_{M,\lambda }(u)-R^{H(\cdot ,\cdot )}_{M,\lambda }(v) \Big \Vert \Vert u-v\Vert ^{q-1}\Big ]\\&\quad \ge \frac{1}{\lambda }\Big [\Vert u-v\Vert ^q-\frac{1}{[\alpha -\beta ]}\Vert u-v\Vert \Vert u-v\Vert ^{q-1}\Big ]\\&\quad =\frac{1}{\lambda }\Big [\Vert u-v\Vert ^q-\frac{1}{[\alpha -\beta ]} \Vert u-v\Vert ^{q}\Big ]\\&\quad =\frac{[(\alpha -\beta )-1]}{\lambda (\alpha -\beta )}\Vert u-v\Vert ^q. \end{aligned}$$

    i.e.,

$$\begin{aligned} \left\langle J^{H(\cdot ,\cdot )}_{M,\lambda }(u)-J^{H(\cdot ,\cdot )}_{M,\lambda }(v), F_q(u-v)\right\rangle \ge \theta _2 \Vert u-v\Vert ^q, \quad \forall u,v \in X, \lambda >0 \end{aligned}$$

    and \(\theta _2=\frac{[(\alpha -\beta )-1]}{\lambda (\alpha -\beta )}, \alpha > \beta .\)

\(\square\)

Note 1

It is interesting to note that the resolvent operator defined by Eq. (1) and the generalized Yosida approximation operator defined by Eq. (2) are connected by the following relation:

$$\begin{aligned} \lambda J^{H(\cdot ,\cdot )}_{M,\lambda }(x) \in [\lambda M+H(A,B)-I]\left( R^{H(\cdot ,\cdot )}_{M,\lambda }(x)\right) . \end{aligned}$$
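This relation can be checked numerically on the scalar data of Example 1 below with \(\lambda =1\), where M is single-valued, so the inclusion reduces to an equality; the sketch below is illustrative only:

```python
import math

def cbrt(t):                            # real cube root, valid for negative inputs
    return math.copysign(abs(t) ** (1.0 / 3.0), t)

# Scalar data of Example 1 with lambda = 1 (all mappings single-valued):
H = lambda u: u ** 3 / 8.0 - u / 2.0    # H(A,B)u = A(u) - B(u)
M = lambda u: u / 2.0
R = lambda x: 2.0 * cbrt(x)             # resolvent [H(A,B) + M]^{-1}
J = lambda x: x - R(x)                  # generalized Yosida operator, lambda = 1

# Check lambda*J(x) = [lambda*M + H(A,B) - I](R(x)) on sample points.
for x in (-3.0, -0.7, 0.2, 1.0, 4.5):
    lhs = J(x)
    rhs = M(R(x)) + H(R(x)) - R(x)
    assert abs(lhs - rhs) < 1e-9
```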

Let \(M:X \rightarrow 2^X\) be a set-valued mapping. The graph of the mapping M is defined by

$$\begin{aligned} \mathrm{graph}(M)=\{(x,y) \in X \times X: y\in M(x)\}. \end{aligned}$$

Definition 5

[7] Let \(A,B:X\rightarrow X\) and \(H:X\times X \rightarrow X\) be single-valued mappings. Let \(M_n,M:X \rightarrow 2^{X}\) be \(H(\cdot ,\cdot )\)-accretive operators for \(n=0,1,2,\ldots\). The sequence \(\{M_n\}\) is said to graph-converge to M, denoted by \(M_n \overset{G}{\rightarrow } M\), if for every \((x,y)\in \mathrm{graph}(M)\), there exists a sequence \((x_n,y_n)\in \mathrm{graph}(M_n)\), such that

$$\begin{aligned} x_n \rightarrow x,\quad y_n \rightarrow y\quad \mathrm{as}\; n \rightarrow \infty . \end{aligned}$$

Theorem 1

[7]  Let \(M_n,M:X \rightarrow 2^X\) be \(H(\cdot ,\cdot )\)-accretive operators for \(n=0,1,2,\ldots\). Assume that \(H:X\times X \rightarrow X\) is a single-valued mapping, such that

  1.

    H(A, B) is \(\alpha\)-strongly accretive with respect to A and \(\beta\)-relaxed accretive with respect to B, \(\alpha > \beta\);

  2.

    H(A, B) is \(\gamma _1\)-Lipschitz continuous with respect to A and \(\gamma _2\)-Lipschitz continuous with respect to B.

Then, \(M_n \overset{G}{\rightarrow } M\) if and only if

$$\begin{aligned} R^{H(\cdot ,\cdot )}_{M_n,\lambda }(u) \rightarrow R^{H(\cdot ,\cdot )}_{M,\lambda }(u),\quad \forall u \in X,\quad \lambda >0, \end{aligned}$$

where \(R^{H(\cdot ,\cdot )}_{M_n,\lambda }=[H(A,B)+\lambda M_n]^{-1}\) and \(R^{H(\cdot ,\cdot )}_{M,\lambda }=[H(A,B)+\lambda M]^{-1}.\)

Now, we prove the convergence of the generalized Yosida approximation operator in the light of graph convergence of \(H(\cdot ,\cdot )\)-accretive operators, without using the convergence of the resolvent operator defined by Eq. (1).

Theorem 2

Let \(M_n,M:X \rightarrow 2^X\) be \(H(\cdot ,\cdot )\)-accretive operators for \(n=0,1,2,\ldots\), and \(H:X\times X \rightarrow X\) be a single-valued mapping, such that conditions (1) and (2) of Theorem 1 hold.

Then \(M_n \overset{G}{\rightarrow } M\) if and only if

$$\begin{aligned} J^{H(\cdot ,\cdot )}_{M_n,\lambda }(x) \rightarrow J^{H(\cdot ,\cdot )}_{M,\lambda }(x),\quad \forall x \in X,\quad \lambda >0, \end{aligned}$$

where

$$\begin{aligned} J^{H(\cdot ,\cdot )}_{M_n,\lambda }(x)=\frac{1}{\lambda } \left[ I-R^{H(\cdot ,\cdot )}_{M_n,\lambda }\right] (x), \quad J^{H(\cdot ,\cdot )}_{M,\lambda }(x)=\frac{1}{\lambda } \left[ I-R^{H(\cdot ,\cdot )}_{M,\lambda }\right] (x),\quad \forall x\in X, \end{aligned}$$

and \(R^{H(\cdot ,\cdot )}_{M_n,\lambda }\) and \(R^{H(\cdot ,\cdot )}_{M,\lambda }\) are defined in Theorem 1.

Proof

Necessary part: Suppose that \(M_n \overset{G}{\rightarrow } M\). For any given \(x \in X\), let

$$\begin{aligned} z_n=J^{H(\cdot ,\cdot )}_{M_n,\lambda }(x)\quad \mathrm{and}\quad z =J^{H(\cdot ,\cdot )}_{M,\lambda }(x). \end{aligned}$$

Then,

$$\begin{aligned} z= J^{H(\cdot ,\cdot )}_{M,\lambda }(x)=\frac{1}{\lambda } \left[ I-R^{H(\cdot ,\cdot )}_{M,\lambda }\right] (x), \end{aligned}$$

implies that

$$\begin{aligned} (x-\lambda z)= R^{H(\cdot ,\cdot )}_{M,\lambda }(x) =[H(A,B)+\lambda M]^{-1}(x), \end{aligned}$$

i.e.,

$$\begin{aligned} x \in H(A,B)(x-\lambda z) +\lambda M(x-\lambda z). \end{aligned}$$

It follows that

$$\begin{aligned} \frac{1}{\lambda }[x-H(A,B)(x-\lambda z)]\in & M(x-\lambda z). \end{aligned}$$

That is

$$\begin{aligned} \left( x-\lambda z,\frac{1}{\lambda }[x-H(A,B)(x-\lambda z)]\right)\in & \;\mathrm{graph}(M). \end{aligned}$$

By Definition 5, there exists a sequence \((w_n,y_n) \in \mathrm{graph}(M_n)\), such that

$$\begin{aligned} w_n \rightarrow (x-\lambda z),\quad y_n \rightarrow \frac{1}{\lambda }[x-H(A,B)(x-\lambda z)]. \end{aligned}$$
(4)

Since \(y_n \in M_n(w_n)\), we have

$$\begin{aligned} H(Aw_n,Bw_n)+\lambda y_n \in [H(A,B)+\lambda M_n](w_n), \end{aligned}$$

and so,

$$\begin{aligned} w_n&= [H(A,B)+\lambda M_n]^{-1}[H(Aw_n,Bw_n)+\lambda y_n],\\&= R^{H(\cdot ,\cdot )}_{M_n,\lambda }[H(Aw_n,Bw_n)+\lambda y_n],\\&= \left[ I-\lambda J^{H(\cdot ,\cdot )}_{M_n,\lambda }\right] [H(Aw_n,Bw_n)+\lambda y_n], \end{aligned}$$

which implies that

$$\begin{aligned} \frac{1}{\lambda }w_n=\frac{1}{\lambda } H(Aw_n,Bw_n)+ y_n- J^{H(\cdot ,\cdot )}_{M_n,\lambda }[H(Aw_n,Bw_n)+\lambda y_n]. \end{aligned}$$
(5)

Using (1) of Lemma 4 and Eq. (5), we have

$$\begin{aligned}&\left\| z_n-z\right\| \nonumber \\&\quad =\, \left\| J^{H(\cdot ,\cdot )}_{M_n,\lambda }(x)-z\right\| \nonumber \\&\quad =\,\left\| J^{H(\cdot ,\cdot )}_{M_n,\lambda }(x)+\frac{1}{\lambda }w_n-\frac{1}{\lambda }w_n-z\right\| \nonumber \\&\quad =\,\Big \Vert J^{H(\cdot ,\cdot )}_{M_n,\lambda }(x)+\frac{1}{\lambda } H(Aw_n,Bw_n)+y_n-J^{H(\cdot ,\cdot )}_{M_n,\lambda }[H(Aw_n,Bw_n)+\lambda y_n] - \frac{1}{\lambda }w_n-z\Big \Vert \nonumber \\&\quad \le \,\left\| J^{H(\cdot ,\cdot )}_{M_n,\lambda }(x) -J^{H(\cdot ,\cdot )}_{M_n,\lambda }[H(Aw_n,Bw_n) +\lambda y_n]\right\| + \left\| \frac{1}{\lambda }H(Aw_n,Bw_n)+y_n-\frac{1}{\lambda }w_n-z\right\| \nonumber \\&\quad \le \,\theta _1 \left\| x-H(Aw_n,Bw_n)-\lambda y_n\right\| +\left\| \frac{1}{\lambda }H(Aw_n,Bw_n)+y_n-\frac{1}{\lambda }x\right\| + \left\| \frac{1}{\lambda }w_n-\frac{1}{\lambda } x+z\right\| \nonumber \\&\quad =\,\left( \theta _1+\frac{1}{\lambda }\right) \left\| x-H(Aw_n,Bw_n) -\lambda y_n\right\| +\frac{1}{\lambda } \left\| w_n-x +\lambda z\right\| \nonumber \\&\quad =\,\left( \theta _1+\frac{1}{\lambda }\right) \left\| x-H(Aw_n,Bw_n) +H(A,B)(x-\lambda z)-H(A,B)(x-\lambda z) -\lambda y_n\right\| \nonumber \\&\qquad +\,\frac{1}{\lambda } \left\| w_n-x+\lambda z\right\| \nonumber \\&\quad \le \left( \theta _1+\frac{1}{\lambda }\right) \left\| x-H(A,B) (x-\lambda z)-\lambda y_n\right\| \nonumber \\&\qquad +\,\left( \theta _1+\frac{1}{\lambda }\right) \left\| H(A,B)(x-\lambda z) -H(Aw_n,Bw_n)\right\| +\frac{1}{\lambda } \left\| w_n-x+\lambda z\right\| . \end{aligned}$$
(6)

Since H is \(\gamma _1\)-Lipschitz continuous with respect to A and \(\gamma _2\)-Lipschitz continuous with respect to B, we have

$$\begin{aligned}&\Big \Vert H(A,B)(x-\lambda z)-H(A,B) w_n\Big \Vert \nonumber \\&\quad= \Big \Vert H(A(x-\lambda z),B(x-\lambda z)) -H(A(x-\lambda z),Bw_n) \nonumber \\&\qquad+H(A(x-\lambda z),Bw_n)-H(Aw_n,Bw_n) \Big \Vert \nonumber \\&\quad\le \left\| H(A(x-\lambda z),B(x-\lambda z))-H(A(x-\lambda z), Bw_n)\right\| \nonumber \\&\qquad+\left\| H(A(x-\lambda z),Bw_n) -H(Aw_n,Bw_n)\right\| \nonumber \\&\quad\le \gamma _2 \left\| x-\lambda z-w_n\right\| +\gamma _1 \left\| x-\lambda z-w_n\right\| \nonumber \\&\quad= (\gamma _1+\gamma _2) \left\| x-\lambda z-w_n\right\| . \end{aligned}$$
(7)

Using Eq. (7), (6) becomes

$$\begin{aligned} \left\| z_n-z\right\|\le & \left( \theta _1+\frac{1}{\lambda }\right) \left\| x-H(A,B)(x-\lambda z)-\lambda y_n\right\| \\&+\,\left[ \left( \theta _1+\frac{1}{\lambda }\right) (\gamma _1+\gamma _2) +\frac{1}{\lambda }\right] \left\| w_n-x+\lambda z\right\| . \end{aligned}$$

By Eq. (4), we have

$$\begin{aligned} w_n \rightarrow (x-\lambda z),\quad y_n \rightarrow \frac{1}{\lambda }[x-H(A,B)(x-\lambda z)], \end{aligned}$$

i.e.,

$$\begin{aligned} \Vert w_n-x+ \lambda z\Vert \rightarrow 0,\quad \frac{1}{\lambda }\Vert x-H(A,B)(x-\lambda z)-\lambda y_n\Vert \rightarrow 0, \end{aligned}$$

and so

$$\left\| z_n-z\right\| \rightarrow 0,\quad ~\mathrm{as}\; n \rightarrow \infty,$$

i.e.,

$$\begin{aligned} J^{H(\cdot ,\cdot )}_{M_n,\lambda }(x) \rightarrow J^{H(\cdot ,\cdot )}_{M,\lambda }(x). \end{aligned}$$

Sufficient Part: Suppose that

$$\begin{aligned} J^{H(\cdot ,\cdot )}_{M_n,\lambda }(x) \rightarrow J^{H(\cdot ,\cdot )}_{M,\lambda }(x),\quad \forall x \in X,\quad \lambda >0. \end{aligned}$$

For any \((x,y)\in \mathrm{graph}(M),\) we have \(y \in M(x)\), and hence

$$\begin{aligned} H(Ax,Bx)+\lambda y\in & [H(A,B)+\lambda M](x). \end{aligned}$$

Therefore,

$$\begin{aligned} x= & \left[ I-\lambda J^{H(\cdot ,\cdot )}_{M,\lambda }\right] \left( H(Ax,Bx)+\lambda y\right) . \end{aligned}$$

Let \(x_n=\left[ I-\lambda J^{H(\cdot ,\cdot )}_{M_n,\lambda }\right] \left( H(Ax,Bx)+\lambda y\right)\). This implies that

$$\begin{aligned} \frac{1}{\lambda }[H(Ax,Bx)-H(Ax_n,Bx_n)+\lambda y] \in M_n(x_n). \end{aligned}$$

Let \(y_n^{\prime }=\frac{1}{\lambda }[H(Ax,Bx)-H(Ax_n,Bx_n)+\lambda y]\) and using the same arguments as for Eq. (7), we have

$$\begin{aligned} \left\| y_n^{\prime }-y\right\|= & \left\| \frac{1}{\lambda } [H(Ax,Bx)-H(Ax_n,Bx_n)+\lambda y]-y\right\| \nonumber \\= & \frac{1}{\lambda } \left\| H(Ax,Bx)-H(Ax_n,Bx_n)\right\| \nonumber \\= & \frac{1}{\lambda } \left\| H(Ax,Bx)-H(Ax_n,Bx)+H(Ax_n,Bx) -H(Ax_n,Bx_n)\right\| \nonumber \\\le & \frac{1}{\lambda } \left\| H(Ax,Bx)-H(Ax_n,Bx)\right\| \nonumber \\&+\,\frac{1}{\lambda }\left\| H(Ax_n,Bx)-H(Ax_n,Bx_n)\right\| \nonumber \\\le & \left( \frac{\gamma _1+\gamma _2}{\lambda }\right) \left\| x_n-x\right\| . \end{aligned}$$
(8)

Using the above arguments, we have

$$\begin{aligned} \Vert x_n-x\Vert&= \left\| \left( I- \lambda J^{H(\cdot ,\cdot )}_{M_n,\lambda }\right) [H(Ax,Bx)+\lambda y]-\left( I- \lambda J^{H(\cdot ,\cdot )}_{M,\lambda }\right) [H(Ax,Bx)+\lambda y] \right\| \nonumber \\&= \left\| \left[ \left( I- \lambda J^{H(\cdot ,\cdot )}_{M_n,\lambda }\right) -\left( I- \lambda J^{H(\cdot ,\cdot )}_{M,\lambda }\right) \right] [H(Ax,Bx)+\lambda y] \right\| . \end{aligned}$$
(9)

Since \(J^{H(\cdot ,\cdot )}_{M_n,\lambda }(x) \rightarrow J^{H(\cdot ,\cdot )}_{M,\lambda }(x)\), we have from (9) that

$$\begin{aligned} \left\| x_n-x\right\| \rightarrow 0\quad ~\mathrm{as}\; n\rightarrow \infty . \end{aligned}$$

Thus, from (8), it follows that \(y_n^{\prime } \rightarrow y\) as \(n \rightarrow \infty ,\)

i.e.,

$$\begin{aligned} M_n \overset{G}{\rightarrow } M. \end{aligned}$$

This completes the proof. \(\square\)

Combining Theorems 1 and 2, we have the following remark.

Remark 1

The convergence of the resolvent operator, \(R^{H(\cdot ,\cdot )}_{M_n,\lambda }(x) \rightarrow R^{H(\cdot ,\cdot )}_{M,\lambda }(x)\), and the convergence of the generalized Yosida approximation operator, \(J^{H(\cdot ,\cdot )}_{M_n,\lambda }(x) \rightarrow J^{H(\cdot ,\cdot )}_{M,\lambda }(x)\), are equivalent, and each holds if and only if \(M_n \overset{G}{\rightarrow } M\).

Proof

Suppose that \(M_n \overset{G}{\rightarrow } M\) and \(R^{H(\cdot ,\cdot )}_{M_n,\lambda }(x) \rightarrow R^{H(\cdot ,\cdot )}_{M,\lambda }(x)\). Then

$$\begin{aligned}&R^{H(\cdot ,\cdot )}_{M_n,\lambda }(x) \rightarrow R^{H(\cdot ,\cdot )}_{M,\lambda }(x),\quad \forall x \in X\\&\quad \Rightarrow \left[ I-R^{H(\cdot ,\cdot )}_{M_n,\lambda }\right] (x) \rightarrow \left[ I-R^{H(\cdot ,\cdot )}_{M,\lambda }\right] (x)\\&\quad \Rightarrow \frac{1}{\lambda }\left[ I-R^{H(\cdot ,\cdot )}_{M_n,\lambda }\right] (x) \rightarrow \frac{1}{\lambda }\left[ I-R^{H(\cdot ,\cdot )}_{M,\lambda }\right] (x)\\&\quad \Rightarrow J^{H(\cdot ,\cdot )}_{M_n,\lambda }(x) \rightarrow J^{H(\cdot ,\cdot )}_{M,\lambda }(x),\quad \forall x \in X. \end{aligned}$$

In a similar way, we can show that \(J^{H(\cdot ,\cdot )}_{M_n,\lambda }(x) \rightarrow J^{H(\cdot ,\cdot )}_{M,\lambda }(x)\) implies that \(R^{H(\cdot ,\cdot )}_{M_n,\lambda }(x) \rightarrow R^{H(\cdot ,\cdot )}_{M,\lambda }(x)\). \(\square\)

We construct the following consolidated example, which shows that the mapping M is \(H(\cdot ,\cdot )\)-accretive with respect to A and B, that \(M_n \overset{G}{\rightarrow }M\), and that \(J^{H(\cdot ,\cdot )}_{M_n, \lambda }\rightarrow J^{H(\cdot ,\cdot )}_{M, \lambda }\). Through MATLAB programming, we show some graphics for the convergence of the generalized Yosida approximation operator.

Example 1

Let \(X=\mathbb {R}\); \(A,B:\mathbb {R} \rightarrow \mathbb {R}\) and \(H:\mathbb {R} \times \mathbb {R} \rightarrow \mathbb {R}\) be the mappings defined by

$$\begin{aligned} A(x)=\frac{x^3}{8}, \quad B(x)=\frac{x}{2}, \end{aligned}$$

and

$$\begin{aligned} H(A(x),B(x))=A(x)-B(x),\quad ~\forall x \in \mathbb {R}, \end{aligned}$$

with the condition \(x^2+y^2+xy\ge 1\). Suppose \(M_n, M:\mathbb {R} \rightarrow 2^\mathbb {R}\) are the set-valued mappings defined by

$$\begin{aligned} M_n(x)=\frac{x}{2}+\frac{1}{n^2}, \end{aligned}$$

and

$$\begin{aligned} M(x)=\frac{x}{2}. \end{aligned}$$

Then, for any fixed \(u \in \mathbb {R},\) we have

$$\begin{aligned} \langle H(Ax,u)-H(Ay,u),x-y\rangle= & \langle Ax-Ay,x-y\rangle \\= & \frac{1}{8}(x-y)^2(x^2+y^2+xy)\\\ge & \frac{1}{8}(x-y)^2=\frac{1}{8} \Vert x-y\Vert ^2. \end{aligned}$$

Hence, H(A, B) is \(\frac{1}{8}\)-strongly accretive with respect to A. In addition

$$\begin{aligned} \langle H(u,Bx)-H(u,By),x-y\rangle =-\langle Bx-By,x-y\rangle =-\frac{1}{2}(x-y)^2\ge -\frac{3}{2}(x-y)^2. \end{aligned}$$

Hence, H(A, B) is \(\frac{3}{2}\)-relaxed accretive with respect to B.

One can easily verify that for \(\lambda =1\),

$$\begin{aligned} {[H(A,B)+\lambda M]}(\mathbb {R})=\mathbb {R}. \end{aligned}$$

Hence, M is \(H(\cdot ,\cdot )\)-accretive with respect to A and B.

Now, we show that \(M_n \overset{G}{\rightarrow }M.\) For any \((x,y) \in \mathrm{graph}(M)\), we can choose a sequence \((x_n,y_n)\in \mathrm{graph}(M_n)\) by setting

$$\begin{aligned} x_n=\left( 1+\frac{1}{n}\right) x, \end{aligned}$$

and

$$\begin{aligned} y_n=M_n(x_n)=\frac{x_n}{2}+\frac{1}{n^2},\quad \forall n\in \mathbb {N}. \end{aligned}$$

Since

$$\begin{aligned} \lim \limits _n {x_n}= \lim \limits _n\left[ \left( 1+\frac{1}{n}\right) x \right] =x, \end{aligned}$$

we have,

$$\begin{aligned} x_n \rightarrow x \quad ~ \mathrm{as} \; n \rightarrow \infty . \end{aligned}$$

In addition, by the definition of \(M_n\), it follows that

$$\begin{aligned} \lim \limits _n {y_n}= \lim \limits _n \left( \frac{x_n}{2}+\frac{1}{n^2}\right) =\frac{1}{2}x=M(x)=y. \end{aligned}$$

It follows that \(y_n \rightarrow y\) as \(n \rightarrow \infty\) and hence, \(M_n \overset{G}{\rightarrow }M.\)

Furthermore, we show that \(J^{H(\cdot ,\cdot )}_{M_n, \lambda }\rightarrow J^{H(\cdot ,\cdot )}_{M, \lambda }\) as \(M_n \overset{G}{\rightarrow }M.\)

For \(\lambda =1\), the resolvent operators are given by

$$\begin{aligned} R^{H(\cdot ,\cdot )}_{M_n, \lambda }(x)=[H(A,B)+\lambda M_n]^{-1}(x)=2 \root 3 \of {\left( x-\frac{1}{n^2}\right) }, \end{aligned}$$

and

$$\begin{aligned} R^{H(\cdot ,\cdot )}_{M, \lambda }(x)=[H(A,B)+\lambda M]^{-1}(x)=2 \root 3 \of {x}, \end{aligned}$$

and the generalized Yosida approximation operators are given by

$$\begin{aligned} J^{H(\cdot ,\cdot )}_{M_n, \lambda }(x)=\frac{1}{\lambda }\left[ I-R^{H(\cdot ,\cdot )}_{M_n, \lambda }\right] (x)=\left[ x-2\root 3 \of {\left( x-\frac{1}{n^2}\right) }\right] , \end{aligned}$$

and

$$\begin{aligned} J^{H(\cdot ,\cdot )}_{M, \lambda }(x)=\frac{1}{\lambda }\left[ I-R^{H(\cdot ,\cdot )}_{M, \lambda }\right] (x)=\left( x-2\root 3 \of {x}\right) . \end{aligned}$$

We evaluate

$$\begin{aligned} \left\| J^{H(\cdot ,\cdot )}_{M_n, \lambda }- J^{H(\cdot ,\cdot )}_{M, \lambda }\right\| =\left\| \left[ x-2\root 3 \of {\left( x-\frac{1}{n^2}\right) } \right] -\left( x-2\root 3 \of {x}\right) \right\| , \end{aligned}$$

which shows that

$$\begin{aligned} \left\| J^{H(\cdot ,\cdot )}_{M_n, \lambda }- J^{H(\cdot ,\cdot )}_{M, \lambda }\right\| \rightarrow 0\quad ~\mathrm{as} \; n \rightarrow \infty , \end{aligned}$$

i.e.,

$$\begin{aligned} J^{H(\cdot ,\cdot )}_{M_n, \lambda }\rightarrow J^{H(\cdot ,\cdot )}_{M, \lambda }\quad ~\mathrm{as} \; M_n \overset{G}{\rightarrow }M. \end{aligned}$$
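The convergence can also be checked numerically (here in Python rather than MATLAB) using the closed forms above; the grid of sample points is an arbitrary illustrative choice:

```python
import math

def cbrt(t):                       # real cube root, valid for negative inputs
    return math.copysign(abs(t) ** (1.0 / 3.0), t)

def J_n(x, n):                     # closed form from Example 1, lambda = 1
    return x - 2.0 * cbrt(x - 1.0 / n ** 2)

def J(x):                          # limiting generalized Yosida operator
    return x - 2.0 * cbrt(x)

# Maximum deviation |J_n(x) - J(x)| over a sample grid, for growing n.
xs = [i / 10.0 for i in range(-50, 51)]
ns = (1, 2, 5, 15, 1000)
sup_diffs = [max(abs(J_n(x, n) - J(x)) for x in xs) for n in ns]

assert sup_diffs == sorted(sup_diffs, reverse=True)  # deviation decreases with n
assert sup_diffs[-1] < 0.05                          # n = 1000: nearly zero
```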

Using the above example, the convergence of generalized Yosida approximation operator \(J^{H(\cdot ,\cdot )}_{M_n, \lambda }\) to \(J^{H(\cdot ,\cdot )}_{M,\lambda }\) is illustrated in the following figure for \(n=1,2,5,15\).

[Figure: graphs of \(J^{H(\cdot ,\cdot )}_{M_n, \lambda }\) for \(n=1,2,5,15\), together with \(J^{H(\cdot ,\cdot )}_{M, \lambda }\).]

A Yosida inclusion problem and existence of solution

First, we state a Yosida inclusion problem and its equivalence with a fixed point problem.

Let X be a q-uniformly smooth Banach space and let \(M: X \rightarrow 2^X\) be an \(H(\cdot ,\cdot )\)-accretive operator. We consider the following problem.

Find \(x \in X\), such that

$$\begin{aligned} 0 \in J^{H(\cdot ,\cdot )}_{M,\lambda }(x)+M(x),\quad \lambda >0, \end{aligned}$$
(10)

where \(J^{H(\cdot ,\cdot )}_{M,\lambda }\) is the generalized Yosida approximation operator defined by Eq. (2). Problem (10) is called the Yosida inclusion problem.

The fixed point formulation of the problem Eq. (10) is as follows:

$$\begin{aligned} x=R^{H(\cdot ,\cdot )}_{M,\lambda }\left[ H(A,B)x- \lambda J^{H(\cdot ,\cdot )}_{M,\lambda }(x)\right] , \forall x \in X, \;\lambda >0. \end{aligned}$$
(11)

Using the definition of the resolvent operator \(R^{H(\cdot ,\cdot )}_{M,\lambda }\) defined by Eq. (1), one can easily obtain the equivalence of Eqs. (10) and (11).

Based on Eq. (11), we construct the following iterative algorithm for solving Yosida inclusion problem Eq. (10).

Algorithm 1 For any \(x_0 \in X\), compute the sequence \(\{x_n\} \subset X\) by the following scheme:

$$\begin{aligned} x_{n+1}=R^{H(\cdot ,\cdot )}_{M_n,\lambda }\left[ H(A,B)x_n-\lambda J^{H(\cdot ,\cdot )}_{M_n,\lambda }(x_n)\right] ,\quad \mathrm{where}\; \lambda >0,\quad n=0,1,2,\ldots . \end{aligned}$$
(12)

If \(J^{H(\cdot ,\cdot )}_{M_n,\lambda }=T\), where \(T:X \rightarrow X\) is a mapping, then the Yosida inclusion problem (10) and Algorithm 1 reduce to the variational inclusion problem and algorithm of Li and Huang [7], respectively. Note that for suitable choices of the operators in the formulation of (12), one can obtain many existing problems and algorithms in the literature.
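As an illustration only, scheme (12) can be run on the scalar data of Example 1 with \(\lambda =1\). Note that Example 1 does not satisfy all hypotheses of Theorem 3 below, so this run is a numerical sketch rather than a consequence of the theorem; in this run the iterates settle at a point x with \(0 = J^{H(\cdot ,\cdot )}_{M,\lambda }(x)+M(x)\), i.e., \(\frac{3x}{2}=2\root 3 \of {x}\):

```python
import math

def cbrt(t):                        # real cube root, valid for negative inputs
    return math.copysign(abs(t) ** (1.0 / 3.0), t)

# Scalar data of Example 1 with lambda = 1:
H   = lambda u: u ** 3 / 8.0 - u / 2.0            # H(A,B)u
R_n = lambda x, n: 2.0 * cbrt(x - 1.0 / n ** 2)   # resolvent of M_n
J_n = lambda x, n: x - R_n(x, n)                  # Yosida operator of M_n

x = 0.5                              # arbitrary starting point x_0
for n in range(1, 201):              # scheme (12): x_{n+1} = R_n[H x_n - J_n(x_n)]
    x = R_n(H(x) - J_n(x, n), n)

# The limit solves the Yosida inclusion 0 = J(x) + M(x), i.e. 3x/2 = 2*cbrt(x).
residual = 1.5 * x - 2.0 * cbrt(x)
assert abs(residual) < 1e-3
```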

Theorem 3

Let X be a q-uniformly smooth Banach space and \(A,B:X \rightarrow X\) be single-valued mappings. Let \(H:X \times X \rightarrow X\) be a single-valued mapping and \(M_n, M: X \rightarrow 2^X\) be \(H(\cdot ,\cdot )\)-accretive operators, such that \(M_n \overset{G}{\rightarrow } M\). Assume that

  1.

    H(A, B) is \(\alpha\)-strongly accretive with respect to A and \(\beta\)-relaxed accretive with respect to B and \(\alpha > \beta\);

  2.

    H(A, B) is \(\gamma _1\)-Lipschitz continuous with respect to A and \(\gamma _2\)-Lipschitz continuous with respect to B;

  3.

\((\alpha -\beta ) > \root q \of {1+c_q(\gamma _1+\gamma _2)^{q}-q(\alpha -\beta )}+\root q \of {1-q \lambda \theta _2 +c_q \lambda ^{q}\theta _1^{q}};\)

  4.

    \((\alpha -\beta ) \ge [\gamma _1 + \gamma _2 + \lambda \theta _1].\)

where \(\theta _1=\frac{[(\alpha -\beta )+1]}{\lambda (\alpha -\beta )}\), \(\theta _2=\frac{[(\alpha -\beta )-1]}{\lambda (\alpha -\beta )}\), \(\alpha >\beta\), and \(c_q\) is the same as in Lemma 1. Then, the Yosida inclusion problem (10) has a unique solution x and the iterative sequence \(\{x_n\}\) generated by Algorithm 1 converges strongly to x.

Proof

Let the mapping \(F:X \rightarrow X\) be defined by

$$\begin{aligned} F(x)=R^{H(\cdot ,\cdot )}_{M,\lambda }\left[ H(A,B)x- \lambda J^{H(\cdot ,\cdot )}_{M,\lambda }(x)\right] ,\quad \forall x \in X, \quad \lambda > 0. \end{aligned}$$

For any \(x,y \in X\) and using Lemma 2, we have

$$\begin{aligned} \Vert F(x)-F(y)\Vert&= \Big \Vert R^{H(\cdot ,\cdot )}_{M,\lambda }\left[ H(A,B)x- \lambda J^{H(\cdot ,\cdot )}_{M,\lambda }(x)\right] -R^{H(\cdot ,\cdot )}_{M,\lambda }\left[ H(A,B)y- \lambda J^{H(\cdot ,\cdot )}_{M,\lambda }(y)\right] \Big \Vert \nonumber \\&\le \frac{1}{(\alpha -\beta )} \left\| H(A,B)x- \lambda J^{H(\cdot ,\cdot )}_{M,\lambda }(x)-H(A,B)y+ \lambda J^{H(\cdot ,\cdot )}_{M,\lambda }(y)\right\| \nonumber \\&= \frac{1}{(\alpha -\beta )} \Big \Vert H(A,B)x-H(A,B)y-(x-y)+(x-y)- \lambda J^{H(\cdot ,\cdot )}_{M,\lambda }(x)+ \lambda J^{H(\cdot ,\cdot )}_{M,\lambda }(y)\Big \Vert \nonumber \\&\le \frac{1}{(\alpha -\beta )} \left\| H(A,B)x-H(A,B)y-(x-y)\right\| \nonumber \\&\quad +\frac{1}{(\alpha -\beta )}\left\| (x-y)- \lambda \left( J^{H(\cdot ,\cdot )}_{M,\lambda }(x)-J^{H(\cdot ,\cdot )}_{M,\lambda }(y)\right) \right\| . \end{aligned}$$
(13)

Using the same arguments as in Li and Huang [7], we have

$$\left\| H(A,B)x-H(A,B)y-(x-y)\right\| ^q \le \left[ 1+c_q(\gamma _1+\gamma _2)^{q}-q(\alpha -\beta )\right] \Vert x-y\Vert ^{q},$$

and hence

$$\left\| H(A,B)x-H(A,B)y-(x-y)\right\| \le \root q \of {\left[ 1+c_q(\gamma _1+\gamma _2)^{q}-q(\alpha -\beta )\right] }\Vert x-y\Vert .$$
(14)

Using (1) and (2) of Lemma 4, we obtain

$$\begin{aligned}&\left\| (x-y)-\lambda \left( J^{H(\cdot ,\cdot )}_{M,\lambda }(x) -J^{H(\cdot ,\cdot )}_{M,\lambda }(y)\right) \right\| ^{q}\\&\quad \le \,\Vert x-y\Vert ^{q}-q \lambda \left\langle J^{H(\cdot ,\cdot )}_{M,\lambda } (x)-J^{H(\cdot ,\cdot )}_{M,\lambda }(y),F_q(x-y)\right\rangle \\&\qquad +\,c_q \lambda ^{q} \left\| J^{H(\cdot ,\cdot )}_{M,\lambda }(x) -J^{H(\cdot ,\cdot )}_{M,\lambda }(y)\right\| ^{q}\\&\quad \le \, \Vert x-y\Vert ^{q}-q \lambda \theta _2 \Vert x-y\Vert ^q+c_q \lambda ^{q} \theta _1^{q} \Vert x-y\Vert ^{q}\\&\quad =\, \left( 1-q \lambda \theta _2 +c_q \lambda ^{q}\theta _1^{q} \right) \Vert x-y\Vert ^{q}, \end{aligned}$$

i.e.,

$$\begin{aligned} \left\| (x-y)-\lambda \left( J^{H(\cdot ,\cdot )}_{M,\lambda }(x) -J^{H(\cdot ,\cdot )}_{M,\lambda }(y)\right) \right\| \le \root q \of {1-q \lambda \theta _2 +c_q \lambda ^{q}\theta _1}\, \Vert x-y\Vert . \end{aligned}$$
(15)

Substituting (14) and (15) into (13), we obtain

$$\Vert F(x)-F(y)\Vert \le \frac{1}{\alpha -\beta }\left[ \root q \of {1+c_q(\gamma _1 +\gamma _2)^{q}-q(\alpha -\beta )}+\root q \of {1-q \lambda \theta _2 +c_q \lambda ^{q}\theta _1}\right] \Vert x-y\Vert ,$$

i.e.,

$$\Vert F(x)-F(y)\Vert \le k\Vert x-y\Vert ,$$
(16)

where

$$k=\frac{1}{\alpha -\beta }\left[ \root q \of {1+c_q(\gamma _1 +\gamma _2)^{q}-q(\alpha -\beta )}+\root q \of {1-q \lambda \theta _2 +c_q \lambda ^{q}\theta _1}\right] .$$

By condition (3), it follows that \(0< k < 1\), and so (16) implies that the mapping F is a contraction. By the Banach contraction principle, F has a unique fixed point \(x \in X\). Thus, x is the unique solution of the Yosida inclusion problem (10).
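As a quick numerical sanity check, the contraction constant k can be evaluated for sample parameter values. The values below are hypothetical, chosen only so that condition (3) holds; they are not taken from the paper. We take \(q=2\) and \(c_q=1\), and compute \(\theta _1\) from the formula \(\theta _1=\frac{\alpha -\beta +1}{\lambda (\alpha -\beta )}\) used later in the proof.

```python
# Illustrative (hypothetical) parameters -- not taken from the paper.
q, c_q = 2, 1.0            # q-uniformly smooth space with smoothness constant c_q
alpha, beta = 2.0, 0.5     # accretivity constants of H(., .) w.r.t. A and B
gamma1, gamma2 = 0.8, 0.8  # Lipschitz constants of H(., .) in each argument
lam = 1.0                  # parameter lambda > 0
theta2 = 1.2               # strong accretivity constant of the Yosida operator

# Lipschitz constant of the generalized Yosida approximation operator,
# as given in the proof: theta1 = (alpha - beta + 1) / (lambda * (alpha - beta)).
theta1 = (alpha - beta + 1) / (lam * (alpha - beta))

# Contraction constant k from the proof.
term1 = (1 + c_q * (gamma1 + gamma2) ** q - q * (alpha - beta)) ** (1 / q)
term2 = (1 - q * lam * theta2 + c_q * lam ** q * theta1) ** (1 / q)
k = (term1 + term2) / (alpha - beta)

print(f"k = {k:.4f}")
assert 0 < k < 1  # condition (3) is satisfied for these sample values
```

For these sample values the two q-th roots are roughly 0.75 and 0.52, giving \(k \approx 0.84 < 1\), so the fixed-point mapping F is indeed a contraction under this parameter choice.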

Next, we show that the sequence \(\{x_n\}\) generated by Algorithm 1 converges strongly to x.

Using Eqs. (11) and (12), we obtain

$$\begin{aligned} \Vert x_{n+1}-x\Vert&= \left\| R^{H(\cdot ,\cdot )}_{M_n,\lambda } \left[ H(A,B)x_n-\lambda J^{H(\cdot ,\cdot )}_{M_n,\lambda }(x_n) \right] -R^{H(\cdot ,\cdot )}_{M,\lambda } \left[ H(A,B)x-\lambda J^{H(\cdot ,\cdot )}_{M,\lambda }(x) \right] \right\| \nonumber \\&= \Big \Vert R^{H(\cdot ,\cdot )}_{M_n,\lambda } \left[ H(Ax_n,Bx_n)-\lambda J^{H(\cdot ,\cdot )}_{M_n,\lambda }(x_n) \right] -R^{H(\cdot ,\cdot )}_{M,\lambda } \left[ H(Ax_n,Bx_n)-\lambda J^{H(\cdot ,\cdot )}_{M_n,\lambda }(x_n) \right] \nonumber \\&\quad+\,R^{H(\cdot ,\cdot )}_{M,\lambda } \left[ H(Ax_n,Bx_n)-\lambda J^{H(\cdot ,\cdot )}_{M_n,\lambda }(x_n) \right] -R^{H(\cdot ,\cdot )}_{M,\lambda } \left[ H(Ax,Bx)-\lambda J^{H(\cdot ,\cdot )}_{M,\lambda }(x) \right] \Big \Vert \nonumber \\&\le \Big \Vert R^{H(\cdot ,\cdot )}_{M_n,\lambda } \left[ H(Ax_n,Bx_n)-\lambda J^{H(\cdot ,\cdot )}_{M_n,\lambda }(x_n) \right] -R^{H(\cdot ,\cdot )}_{M,\lambda } \left[ H(Ax_n,Bx_n)-\lambda J^{H(\cdot ,\cdot )}_{M_n,\lambda }(x_n) \right] \Big \Vert \nonumber \\&\quad +\,\Big \Vert R^{H(\cdot ,\cdot )}_{M,\lambda } \left[ H(Ax_n,Bx_n)-\lambda J^{H(\cdot ,\cdot )}_{M_n,\lambda }(x_n) \right] -R^{H(\cdot ,\cdot )}_{M,\lambda } \left[ H(Ax,Bx)-\lambda J^{H(\cdot ,\cdot )}_{M,\lambda }(x) \right] \Big \Vert \nonumber \\&\le b_n+\frac{1}{\alpha -\beta }\Big \Vert H(Ax_n,Bx_n)-\lambda J^{H(\cdot ,\cdot )}_{M_n,\lambda }(x_n)- \left[ H(Ax,Bx)-\lambda J^{H(\cdot ,\cdot )}_{M,\lambda }(x) \right] \Big \Vert , \end{aligned}$$
(17)

where

$$\begin{aligned} b_n=\Big \Vert R^{H(\cdot ,\cdot )}_{M_n,\lambda } \left[ H(Ax_n,Bx_n)-\lambda J^{H(\cdot ,\cdot )}_{M_n,\lambda }(x_n) \right] -R^{H(\cdot ,\cdot )}_{M,\lambda } \left[ H(Ax_n,Bx_n)-\lambda J^{H(\cdot ,\cdot )}_{M_n,\lambda }(x_n) \right] \Big \Vert . \end{aligned}$$

Using the Lipschitz continuity of H(A, B) in both arguments and the Lipschitz continuity of the generalized Yosida approximation operator, we obtain

$$\begin{aligned}&\Big \Vert H(Ax_n,Bx_n)-H(Ax,Bx)-\lambda \Big [J^{H(\cdot ,\cdot )}_{M_n,\lambda }(x_n) -J^{H(\cdot ,\cdot )}_{M,\lambda }(x)\Big ]\Big \Vert \nonumber \\&\quad = \,\Big \Vert H(Ax_n,Bx_n)-H(Ax_n,Bx)+H(Ax_n,Bx)-H(Ax,Bx) \nonumber \\&\qquad -\,\lambda \left[ J^{H(\cdot ,\cdot )}_{M_n,\lambda }(x_n) -J^{H(\cdot ,\cdot )}_{M,\lambda }(x_n)+J^{H(\cdot ,\cdot )}_{M,\lambda }(x_n) -J^{H(\cdot ,\cdot )}_{M,\lambda }(x)\right] \Big \Vert \nonumber \\&\quad \le \, \left\| H(Ax_n,Bx_n)-H(Ax_n,Bx)\right\| +\left\| H(Ax_n,Bx) -H(Ax,Bx)\right\| \nonumber \\&\qquad +\,\lambda \Big \Vert J^{H(\cdot ,\cdot )}_{M_n,\lambda }(x_n)-J^{H(\cdot , \cdot )}_{M,\lambda }(x_n)\Big \Vert +\lambda \Big \Vert J^{H(\cdot ,\cdot )}_{M,\lambda }(x_n)-J^{H(\cdot , \cdot )}_{M,\lambda }(x)\Big \Vert \nonumber \\&\quad \le \, \gamma _2 \Vert x_n-x\Vert +\gamma _1\Vert x_n-x\Vert +\lambda c_n+\lambda \theta _1 \Vert x_n-x\Vert , \end{aligned}$$
(18)

where \({c_n}=\Big \Vert J^{H(\cdot ,\cdot )}_{M_n,\lambda }(x_n)-J^{H(\cdot , \cdot )}_{M,\lambda }(x_n)\Big \Vert .\)

Substituting (18) into (17), we obtain

$$\begin{aligned} \Vert x_{n+1}-x\Vert \le {b_n}+\frac{1}{\alpha -\beta }\left[ (\gamma _1+\gamma _2+\lambda \theta _1)\Vert x_n-x\Vert +\lambda {c_n}\right] , \end{aligned}$$

where \(\theta _1=\frac{[\alpha -\beta +1]}{\lambda (\alpha -\beta )}.\)

By Theorems 1 and 2, we have

$$\begin{aligned} R^{H(\cdot ,\cdot )}_{M_n,\lambda }\left[ H(Ax_n,Bx_n)-\lambda J^{H(\cdot ,\cdot )}_{M_n,\lambda }(x_n)\right] \rightarrow R^{H(\cdot ,\cdot )}_{M,\lambda }\left[ H(Ax_n,Bx_n)-\lambda J^{H(\cdot ,\cdot )}_{M_n,\lambda }(x_n)\right] \end{aligned}$$

and hence,

$$\begin{aligned} J^{H(\cdot ,\cdot )}_{M_n,\lambda }(x_n) \rightarrow J^{H(\cdot ,\cdot )}_{M,\lambda }(x_n). \end{aligned}$$

Thus, \(b_n \rightarrow 0\) and \(c_n \rightarrow 0\) as \(n \rightarrow \infty .\) It follows that

$$\begin{aligned} \Vert x_{n+1}-x\Vert \le P(\theta ) \Vert x_n-x\Vert +d_n, \end{aligned}$$

where \(d_n=b_n + \frac{\lambda }{\alpha -\beta }\,c_n\) and \(P(\theta )=\frac{1}{(\alpha -\beta )}[\gamma _1+\gamma _2+\lambda \theta _1].\) By condition (4), we have \(0< P(\theta ) < 1\), and \(d_n \rightarrow 0\) as \(n \rightarrow \infty\) since \(b_n, c_n \rightarrow 0.\) Hence, by Lemma 3, we have

$$\begin{aligned} \Vert x_{n+1}-x\Vert \rightarrow 0. \end{aligned}$$

This completes the proof. \(\square\)
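The final step of the proof rests on the standard fact (Lemma 3 here) that a recursion \(e_{n+1} \le P\, e_n + d_n\) with \(0< P < 1\) and \(d_n \rightarrow 0\) forces \(e_n \rightarrow 0\). A minimal numerical sketch, with hypothetical values \(P=0.85\) and \(d_n = 0.5^n\) chosen purely for illustration:

```python
# Illustrative simulation of the error recursion e_{n+1} = P * e_n + d_n
# with 0 < P < 1 and d_n -> 0 (here d_n = 0.5**n, a hypothetical choice).
P = 0.85     # plays the role of P(theta) in the proof
e = 1.0      # initial error ||x_0 - x||
for n in range(200):
    d_n = 0.5 ** n     # vanishing perturbation term
    e = P * e + d_n    # worst case: the inequality holds with equality

print(f"error after 200 iterations: {e:.2e}")
assert e < 1e-10  # the error has been driven to (numerical) zero
```

Even though the perturbation \(d_n\) keeps re-injecting error at every step, the contraction factor \(P<1\) damps it geometrically, which is exactly why \(\Vert x_{n+1}-x\Vert \rightarrow 0\) in Theorem 3.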

Remark 2

If we take \(J^{H(\cdot ,\cdot )}_{M,\lambda }=T\), where \(T:X \rightarrow X\) is a mapping, and delete condition (4) from Theorem 3, then we obtain Theorem 4.1 of Li and Huang [7].