1 Introduction

In 1959, Signorini [1] posed the first problem involving a variational inequality, the so-called Signorini contact problem. The term variational inequality for such a problem was first coined by Fichera. The theory of variational inequalities emerged in the 1960s, after Stampacchia [2] and Fichera [3] initiated its study in 1964, and interest in this theory has increased considerably during the past five decades. This interest is explained by the fact that over the last 50 years variational inequalities have emerged as a powerful tool in the mathematical study of many nonlinear problems of physics and mechanics, as the complexity of the boundary conditions and the diversity of the constitutive equations lead to variational formulations of inequality type. The motivation for developing and generalizing variational inequalities in various directions comes from the study of problems arising in different fields of science and engineering, which lead to problems involving generalized forms of variational inequalities. The need to extend and generalize variational inequalities in different directions gave rise to substantial progress in this area. For more detail, we refer the interested reader, for example, to [4–9] and the references therein.

Among the generalizations existing in the literature, variational inclusion has appeared as an efficient and productive tool for solving and studying a large number of problems arising in diverse branches of pure and applied sciences. For this reason, since the introduction of the concept of variational inclusion, the last twenty years have seen major activity in the study of various kinds of variational inclusion problems in Hilbert and Banach space settings; see, for instance, [10–20] and the references therein.

Simultaneously with the study of variational inequality (inclusion) problems, many authors have turned their attention to the design of methods for approximating their solutions. Thereby, over the past few decades, many computational methods for solving them have been developed in the setting of Hilbert and Banach spaces. The simplest of these is the projection method; the resolvent operator technique, a generalization of the projection method, is one of the best-known and most widely used methods for solving variational inequalities/inclusions and related optimization problems. For a detailed description of this method along with relevant commentaries, the reader is referred to [10, 13–16, 21–27] and the references therein.

Monotone operators were independently introduced by Browder [28] and Minty [29, 30]. It is worth mentioning that the term monotone operator first appeared in a work by Kačurovskiĭ [31], who proved that the subdifferential of a convex function on a Hilbert space is monotone. Monotone operators that are maximal or satisfy the range condition play a key role in modern optimization and variational analysis. The study of the notion of accretive mapping goes back to the 1960s, with the pioneering studies of Browder [32] and Kato [33]. Interest in such mappings stems mainly from their firm connection with the existence theory for nonlinear evolution equations in Banach spaces. At the same time, the study of nonlinear semigroups, differential equations in Banach spaces, and fully nonlinear partial differential equations is closely connected with the notion of an m-accretive mapping, that is, an accretive mapping that satisfies the range condition. Inspired by their wide applications, the notions of maximal monotone operators and m-accretive mappings have been developed and generalized frequently in recent decades. For example, the classes of maximal η-monotone operators [34], η-subdifferential operators [17, 35], generalized m-accretive mappings [36], H-monotone operators [14], general H-monotone operators [37], H-accretive mappings [13], \((H,\eta )\)-monotone operators [16], and several other interesting generalizations have been introduced in the literature in this direction. With the goal of providing a unifying framework for the classes mentioned above, Kazmi and Khan [24] introduced and studied the class of P-η-accretive mappings in the setting of real q-uniformly smooth Banach spaces, defined the associated resolvent operator, and presented some of its properties. It is worth mentioning that the main result in [24] concerning P-η-accretive mappings, the Lipschitz continuity of the resolvent operator associated with a P-η-accretive mapping, is not necessarily true; in fact, it is true under the condition that the underlying space is a real 2-uniformly smooth Banach space. One year later, without pointing out the errors in [24], Peng and Zhu [26] revisited the class of P-η-accretive mappings and the relevant results given in [24]. They provided correct versions of the corresponding results in [24] and considered a system of variational inclusions involving P-η-accretive mappings in a real q-uniformly smooth Banach space setting. They demonstrated the existence of a unique solution for the system of variational inclusions and constructed a Mann iterative algorithm for approximating its unique solution. Moreover, they discussed the convergence of the sequence generated by their proposed iterative algorithm under some suitable hypotheses.

Around the middle of the 1980s, the concept of graph convergence for operators was initially introduced by Attouch [38]. It is important to emphasize that the concept of graph convergence in [38] was restricted to maximal monotone operators. Afterward, in parallel to the introduction of various classes of generalized monotone operators and generalized accretive mappings, the development and generalization of the notion of graph convergence for them have been flourishing areas of research in the last two decades and have led to an extensive literature. Further information along with more details can be found in [21, 25, 27, 39, 40] and the references therein.

On the other hand, fixed point theory is one of the important thrust areas of research in nonlinear analysis, and it has played a central role in the problem-solving techniques of nonlinear functional analysis. Due to its applicability in different areas of the mathematical sciences, fixed point theory has grown tremendously since the last century. There is a strong connection between the classes of monotone and accretive operators, two classes of operators that arise naturally in the theory of differential equations, and the class of nonexpansive mappings. Therefore, since the appearance of the notion of nonexpansive mapping, the fixed point theory for such mappings has rapidly grown into an important field of study in both pure and applied mathematics, and it has become one of the most essential tools in nonlinear functional analysis. For this reason, during the past few decades, many researchers have shown interest in extending the notion of nonexpansive mapping, and the fixed point theory for generalized nonexpansive mappings has also attracted increasing attention.

There is no doubt that the class of asymptotically nonexpansive mappings, whose introduction and study date back to the work of Goebel and Kirk [41] in the early 1970s, is one of the most important and well-known generalizations appearing in the literature. Since then, efforts have been made to unify the existing classes of generalized nonexpansive mappings, and in one of these attempts, Sahu [42] succeeded in introducing a class of generalized nonexpansive mappings, the so-called nearly asymptotically nonexpansive mappings, which properly contains the class of asymptotically nonexpansive mappings. The reader is referred to [22, 41–48] and the references therein for more detail and further information. It is well known that variational inequality (inclusion) problems are deeply related to fixed point problems. This fact has always been one of the main incentives of researchers for presenting a unified approach to these two different problems. For more related details, we refer the readers to [5, 6, 22, 44, 49–55] and the references therein.

Motivated and inspired by the research going on in this field, in this paper we study a class of generalized nonexpansive mappings called generalized nearly asymptotically nonexpansive mappings and illustrate by a concrete example that this class is essentially wider than that of nearly asymptotically nonexpansive mappings. The existence of a unique solution for a system of generalized nonlinear variational-like inclusions (SGNVLI) involving P-η-accretive mappings is proved under some appropriate conditions. We investigate the problem of finding a point lying in the intersection of the set of solutions of the SGNVLI and the set of fixed points of a given generalized nearly asymptotically nonexpansive mapping. For finding such a point, we suggest a new iterative algorithm with mixed errors. In the final section, we use the notions of graph convergence and the resolvent operator associated with a P-η-accretive mapping and establish a new equivalence relationship between the graph convergence and the resolvent operator convergence of a sequence of P-η-accretive mappings. Finally, as an application of this equivalence, under some suitable assumptions imposed on the parameters, we verify the strong convergence and stability of the sequence generated by the proposed iterative algorithm to a common element of the above two sets.

2 P-η-accretive mappings: preliminary results and some notation

Unless otherwise stated, we always assume that E is a real Banach space with norm \(\Vert \cdot \Vert \), \(E^{*}\) is the topological dual space of E, \(\langle \cdot ,\cdot \rangle \) is the dual pair between E and \(E^{*}\), and \(2^{E}\) denotes the family of all nonempty subsets of E.

For an arbitrary but fixed real number \(q>1\), the multivalued mapping \(J_{q}:E\rightarrow 2^{E^{*}}\) defined by

$$\begin{aligned} J_{q}(x):=\bigl\{ x^{*}\in E^{*}:\bigl\langle x,x^{*}\bigr\rangle = \Vert x \Vert ^{q}, \bigl\Vert x^{*} \bigr\Vert = \Vert x \Vert ^{q-1}\bigr\} ,\quad x\in E, \end{aligned}$$

is called the generalized duality mapping of E. In particular, \(J_{2}\) is the usual normalized duality mapping. It is known that, in general, \(J_{q}(x)=\Vert x\Vert ^{q-2}J_{2}(x)\) for all \(x\neq 0\), and \(J_{q}\) is single-valued if \(E^{*}\) is strictly convex. We recall that a Banach space E is said to be strictly convex if \(\frac{\Vert x+y\Vert}{2}<1\) for all \(x,y\in U=\{z\in E:\Vert z\Vert =1\}\) such that \(x\neq y\). If E is a Hilbert space, then \(J_{2}\) becomes the identity mapping on E.
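
In a real Hilbert space the generalized duality mapping has the explicit form \(J_{q}(x)=\Vert x\Vert ^{q-2}x\). As a quick sanity check (ours, not part of the original text), the following Python sketch verifies the two defining identities numerically on \(\mathbb{R}^{n}\) with the Euclidean norm; the helper `J_q` and the chosen values of q are our own illustrative choices.

```python
import numpy as np

# Numerical check (ours): in a real Hilbert space such as R^n with the Euclidean norm,
# the generalized duality mapping is single-valued and given by J_q(x) = ||x||^{q-2} x,
# so in particular J_2 is the identity mapping.
def J_q(x, q):
    norm = np.linalg.norm(x)
    return np.zeros_like(x) if norm == 0 else norm ** (q - 2) * x

rng = np.random.default_rng(0)
x = rng.standard_normal(5)
for q in (1.5, 2.0, 3.0):
    jx = J_q(x, q)
    # defining identities: <x, J_q(x)> = ||x||^q and ||J_q(x)|| = ||x||^{q-1}
    assert np.isclose(np.dot(x, jx), np.linalg.norm(x) ** q)
    assert np.isclose(np.linalg.norm(jx), np.linalg.norm(x) ** (q - 1))
print("J_2(x) == x:", np.allclose(J_q(x, 2.0), x))
```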

The modulus of smoothness of E is the function \(\rho _{E}:(0,\infty )\rightarrow (0,\infty )\) defined by

$$\begin{aligned} \rho _{E}(t)=\sup \biggl\{ \frac{1}{2}\bigl( \Vert x+y \Vert + \Vert x-y \Vert \bigr)-1:x,y \in E, \Vert x \Vert \leq 1, \Vert y \Vert \leq t\biggr\} . \end{aligned}$$

A Banach space E is called uniformly smooth if \(\lim_{t\rightarrow 0}\frac{\rho _{E}(t)}{t}=0\).

For a real constant \(q>1\), a Banach space E is called q-uniformly smooth if there exists a constant \(C>0\) such that \(\rho _{E}(t)\leq Ct^{q}\) for all \(t\in [0,+\infty )\). It is well known that (see, e.g., [56]) \(L_{q}\) (or \(l_{q}\)) is q-uniformly smooth for \(1< q\leq 2\) and is 2-uniformly smooth for \(q\geq 2\). Note that \(J_{q}\) is single-valued if E is smooth.

Concerned with the characteristic inequalities in q-uniformly smooth Banach spaces, Xu [56] proved the following result.

Lemma 2.1

Let E be a real uniformly smooth Banach space. For a real constant \(q>1\), E is q-uniformly smooth if and only if there exists a constant \(c_{q}>0\) such that for all \(x,y\in E\),

$$\begin{aligned} \Vert x+y \Vert ^{q}\leq \Vert x \Vert ^{q}+q\bigl\langle y,J_{q}(x)\bigr\rangle +c_{q} \Vert y \Vert ^{q}. \end{aligned}$$
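
For orientation, here is a minimal numerical sketch (ours, not from [56]) of the inequality of Lemma 2.1 in the simplest case \(E=\mathbb{R}^{n}\) with the Euclidean norm, a Hilbert space and hence 2-uniformly smooth, where \(J_{2}\) is the identity and the constant \(c_{2}=1\) works (the inequality is then an equality).

```python
import numpy as np

# Sanity check (ours) of Lemma 2.1 for q = 2 on E = R^4 with the Euclidean norm:
# ||x + y||^2 <= ||x||^2 + 2 <y, J_2(x)> + c_2 ||y||^2 holds with c_2 = 1,
# since J_2 is the identity in a Hilbert space.
rng = np.random.default_rng(1)
for _ in range(1000):
    x, y = rng.standard_normal(4), rng.standard_normal(4)
    lhs = np.linalg.norm(x + y) ** 2
    rhs = np.linalg.norm(x) ** 2 + 2 * np.dot(y, x) + 1.0 * np.linalg.norm(y) ** 2
    assert lhs <= rhs + 1e-9
print("Lemma 2.1 verified numerically for q = 2, c_2 = 1 on R^4")
```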

We now introduce some notation and terminology and present some elementary results, which will be used in later sections.

Definition 2.2

Let E be a real q-uniformly smooth Banach space, and let \(P:E\rightarrow E\) and \(\eta :E\times E\rightarrow E\) be single-valued mappings. Then P is said to be

  1. (i)

    η-accretive if

    $$\begin{aligned} \bigl\langle P(x)-P(y),J_{q}\bigl(\eta (x,y)\bigr)\bigr\rangle \geq 0\quad \forall x,y \in E; \end{aligned}$$
  2. (ii)

    strictly η-accretive if P is η-accretive and equality holds if and only if \(x=y\);

  3. (iii)

    γ-strongly η-accretive (or strongly η-accretive with a constant \(\gamma >0\)) if there exists a constant \(\gamma >0\) such that

    $$\begin{aligned} \bigl\langle P(x)-P(y),J_{q}\bigl(\eta (x,y)\bigr)\bigr\rangle \geq \gamma \Vert x-y \Vert ^{q} \quad \forall x,y\in E; \end{aligned}$$
  4. (iv)

    ξ-Lipschitz continuous if there exists a constant \(\xi >0\) such that

    $$\begin{aligned} \bigl\Vert P(x)-P(y) \bigr\Vert \leq \xi \Vert x-y \Vert \quad \forall x,y\in E. \end{aligned}$$

Note that if \(\eta (x,y)=x-y\) for all \(x,y\in E\), then parts (i)–(iii) of Definition 2.2 reduce to the definitions of accretivity, strict accretivity, and strong accretivity of the mapping P, respectively.

Definition 2.3

([13, 26])

Let E be a real q-uniformly smooth Banach space, let \(P:E\rightarrow E\) be a single-valued mapping, and let \(M:E\rightarrow 2^{E}\) be a multivalued mapping. Then M is said to be

  1. (i)

    accretive if

    $$\begin{aligned} \bigl\langle u-v,J_{q}(x-y)\bigr\rangle \geq 0\quad \forall (x,u),(y,v)\in \operatorname{Graph}(M), \end{aligned}$$

    where \(\operatorname{Graph}(M)=\{(x,u)\in E\times E:u\in M(x)\}\);

  2. (ii)

    m-accretive if M is accretive and \((I+\lambda M)(E)=E\) for every real constant \(\lambda >0\), where I is the identity mapping on E;

  3. (iii)

    P-accretive if M is accretive and \((P+\lambda M)(E)=E\) for every \(\lambda >0\).

The notion of generalized m-accretive (also referred to as m-η-accretive and also η-m-accretive [11]) mappings was initially introduced by Huang and Fang [36] in 2001 as an extension of m-accretive mappings as follows.

Definition 2.4

([11, 36])

Let E be a real q-uniformly smooth Banach space, let \(\eta :E\times E\rightarrow E\) be a vector-valued mapping, and let \(M:E\rightarrow 2^{E}\) be a multivalued mapping. Then M is said to be

  1. (i)

    η-accretive if

    $$\begin{aligned} \bigl\langle u-v,J_{q}\bigl(\eta (x,y)\bigr)\bigr\rangle \geq 0\quad \forall (x,u),(y,v) \in \operatorname{Graph}(M); \end{aligned}$$
  2. (ii)

    generalized m-accretive if M is η-accretive and \((I+\lambda M)(E)=E\) for every real constant \(\lambda >0\).

It is essential to note that M is a generalized m-accretive mapping if and only if M is η-accretive and there is no other η-accretive mapping whose graph strictly contains \(\operatorname{Graph}(M)\). Generalized m-accretivity is thus to be understood in terms of inclusion of graphs: if \(M:E\rightarrow 2^{E}\) is a generalized m-accretive mapping, then enlarging its graph to obtain the graph of a new multivalued mapping destroys the η-accretivity, that is, the extended mapping is no longer η-accretive. In other words, for every pair \((x,u)\in E\times E\backslash \operatorname{Graph}(M)\), there exists \((y,v)\in \operatorname{Graph}(M)\) such that \(\langle u-v,J_{q}(\eta (x,y))\rangle <0\). In light of these arguments, a necessary and sufficient condition for a multivalued mapping \(M:E\rightarrow 2^{E}\) to be generalized m-accretive is that the property

$$\begin{aligned} \bigl\langle u-v,J_{q}\bigl(\eta (x,y)\bigr)\bigr\rangle \geq 0\quad \forall (y,v)\in \operatorname{Graph}(M) \end{aligned}$$

is equivalent to \((x,u)\in \operatorname{Graph}(M)\). The above characterization of generalized m-accretive mappings provides a useful and manageable way of recognizing whether an element u belongs to \(M(x)\).

With the goal of presenting a unifying framework for H-accretive (P-accretive) mappings, \((H,\eta )\)-monotone operators [16], H-monotone operators [14], generalized m-accretive mappings, m-accretive mappings, maximal η-monotone operators [36], and maximal monotone operators, Peng and Zhu [26] and Kazmi and Khan [24] were the first to introduce and study the notion of P-η-accretive (which is also referred to as \((H,\eta )\)-accretive) mappings as follows.

Definition 2.5

([24, 26])

Let E be a real q-uniformly smooth Banach space, let \(P:E\rightarrow E\) and \(\eta :E\times E\rightarrow E\) be single-valued mappings, and let \(M:E\rightarrow 2^{E}\) be a multivalued mapping. Then M is said to be P-η-accretive if M is η-accretive and \((P+\lambda M)(E)=E\) for every \(\lambda >0\).

Note that for given mappings \(\eta :E\times E\rightarrow E\) and \(P:E\rightarrow E\), a P-η-accretive mapping may be neither P-accretive nor generalized m-accretive. In support of this fact, we present the following example.

Example 2.6

Let \(\phi :\mathbb{N}\rightarrow (0,+\infty )\) be a function and consider the complex linear space \(l^{2}_{\phi}\), the weighted \(l^{2}\) space consisting of all infinite complex sequences \((z_{n})_{n=1}^{\infty}\) such that \(\sum_{n=1}^{\infty}|z_{n}|^{2}\phi (n)<\infty \). It is well known that

$$\begin{aligned} l^{2}_{\phi}=\Biggl\{ z=\{z_{n}\}_{n=1}^{\infty}:\sum_{n=1}^{\infty} \vert z_{n} \vert ^{2}\phi (n)< \infty , z_{n}\in \mathbb{F}=\mathbb{R}\text{ or }\mathbb{C}\Biggr\} \end{aligned}$$

with respect to the inner product \(\langle \cdot ,\cdot \rangle :l^{2}_{\phi}\times l^{2}_{\phi}\rightarrow \mathbb{C}\) defined by

$$\begin{aligned} \langle z,w\rangle =\sum_{n=1}^{\infty }z_{n} \overline{w_{n}}\phi (n),\quad z=\{z_{n}\}_{n=1}^{\infty}, w=\{w_{n} \}_{n=1}^{\infty}\in l^{2}_{\phi}, \end{aligned}$$

is a Hilbert space, and so it is a 2-uniformly smooth Banach space. The above inner product induces the norm

$$\begin{aligned} \Vert z \Vert _{l^{2}_{\phi}}=\sqrt{\langle z,z\rangle}=\Biggl(\sum _{n=1}^{ \infty} \vert z_{n} \vert ^{2}\phi (n)\Biggr)^{\frac{1}{2}},\quad z=\{z_{n} \}_{n=1}^{ \infty}\in l^{2}_{\phi}. \end{aligned}$$

For any \(z=\{z_{n}\}_{n=1}^{\infty}=\{x_{n}+iy_{n}\}_{n=1}^{\infty}\in l^{2}_{ \phi}\), we have

$$\begin{aligned} z =&(x_{1}+iy_{1},x_{2}+iy_{2}, \dots ,x_{r}+iy_{r},0,0, \dots ) \\ &{} +(0,0,\dots ,0,x_{r+1}+iy_{r+1},x_{r+2}+iy_{r+2}, \dots ,x_{2r}+iy_{2r},0,0, \dots )+\cdots \\ &{} +(0,0,\dots ,0,x_{tr+1}+iy_{tr+1},x_{tr+2}+iy_{tr+2}, \dots ,x_{(t+1)r}+iy_{(t+1)r},0,0, \dots )+\cdots \\ =&\sum_{t=0}^{\infty}(0,0,\dots ,0,x_{tr+1}+iy_{tr+1},x_{tr+2}+iy_{tr+2}, \dots ,x_{(t+1)r}+iy_{(t+1)r},0,0,\dots ), \end{aligned}$$

where \(r\geq 2\) is an arbitrary natural constant. For each \(t\geq 0\), we obtain

$$\begin{aligned}& (0,0,\dots ,0,x_{tr+1}+iy_{tr+1},x_{tr+2}+iy_{tr+2}, \dots ,x_{(t+1)r}+iy_{(t+1)r},0,0,\dots ) \\& \quad =(0,0,\dots ,0,x_{tr+1}+iy_{tr+1},0,0,\dots ,0,x_{(t+1)r}+iy_{(t+1)r},0,0, \dots ) \\& \qquad {} +(0,0,\dots ,0,x_{tr+2}+iy_{tr+2},0,0,\dots ,0,x_{(t+1)r-1}+iy_{(t+1)r-1},0,0, \dots ) \\& \qquad {} +\cdots +(0,0,\dots ,0,x_{\frac{(2t+1)r}{2}}+iy_{\frac{(2t+1)r}{2}},x_{ \frac{(2t+1)r}{2}+1} +iy_{\frac{(2t+1)r}{2}+1},0,0,\dots ) \\& \quad =\sum_{j=tr+1}^{\frac{(2t+1)r}{2}}(0,0,\dots ,0,x_{j}+iy_{j},0,0, \dots ,0,x_{(2t+1)r-j+1}+iy_{(2t+1)r-j+1},0,0, \dots ). \end{aligned}$$

Accordingly, for any \(z=\{z_{n}\}_{n=1}^{\infty}=\{x_{n}+iy_{n}\}_{n=1}^{\infty}\in l^{2}_{ \phi}\),

$$\begin{aligned} z =&\sum_{t=0}^{\infty}(0,0, \dots ,0,x_{tr+1}+iy_{tr+1},x_{tr+2}+iy_{tr+2}, \dots ,x_{(t+1)r}+iy_{(t+1)r},0,0,\dots ) \\ =&\sum_{t=0}^{\infty}\sum _{j=tr+1}^{ \frac{(2t+1)r}{2}}(0,0,\dots ,0,x_{j}+iy_{j},0,0, \dots ,0,x_{(2t+1)r-j+1}+iy_{(2t+1)r-j+1},0,0, \dots ) \\ =&\sum_{t=0}^{\infty} \sum_{j=tr+1}^{ \frac{(2t+1)r}{2}} \biggl[ \frac{y_{j}+y_{(2t+1)r-j+1} -i(x_{j}+x_{(2t+1)r-j+1})}{2}u_{j,(2t+1)r-j+1} \\ &{} +\frac{y_{j}-y_{(2t+1)r-j+1} -i(x_{j}-x_{(2t+1)r-j+1})}{2}u'_{j,(2t+1)r-j+1} \biggr], \end{aligned}$$

where for all \(t\in \mathbb{N}\cup \{0\}\) and \(j\in \{tr+1,tr+2,\dots ,\frac{(2t+1)r}{2}\}\),

$$\begin{aligned} u_{j,(2t+1)r-j+1}=(0,0,\dots ,0,i_{j},0,0,\dots ,0,i_{(2t+1)r-j+1},0,0, \dots ) \end{aligned}$$

with i at the jth and \(((2t+1)r-j+1)\)th coordinates and all other coordinates zero, and

$$\begin{aligned} u'_{j,(2t+1)r-j+1}=(0,0,\dots ,0,i_{j},0,0,\dots ,0,-i_{(2t+1)r-j+1},0,0, \dots ) \end{aligned}$$

with i and −i at the jth and \(((2t+1)r-j+1)\)th coordinates, respectively, and zeros elsewhere. Therefore the set

$$\begin{aligned} \mathfrak{B}=\biggl\{ u_{j,(2t+1)r-j+1},u'_{j,(2t+1)r-j+1}:t\in \mathbb{N}\cup \{0\}; j=tr+1,tr+2,\dots ,\frac{(2t+1)r}{2}\biggr\} \end{aligned}$$

spans the Banach space \(l^{2}_{\phi}\). We can easily observe that the set \(\mathfrak{B}\) is linearly independent, and so it is a basis for \(l^{2}_{\phi}\). For all \(t\in \mathbb{N}\cup \{0\}\) and \(j\in \{tr+1,tr+2,\dots ,\frac{(2t+1)r}{2}\}\), let us now take

$$\begin{aligned}& v_{j,(2t+1)r-j+1} \\& \quad =\biggl(0,0,\dots ,0,\frac{1}{\sqrt{2\phi (j)}}i_{j},0,0, \dots ,0, \frac{1}{\sqrt{2\phi ((2t+1)r-j+1)}}i_{(2t+1)r-j+1},0,0, \dots \biggr) \end{aligned}$$

and

$$\begin{aligned}& v'_{j,(2t+1)r-j+1} \\& \quad =\biggl(0,0,\dots ,0,\frac{1}{\sqrt{2\phi (j)}}i_{j},0,0, \dots ,0, -\frac{1}{\sqrt{2\phi ((2t+1)r-j+1)}}i_{(2t+1)r-j+1},0,0, \dots \biggr). \end{aligned}$$

Obviously, the set

$$\begin{aligned} \bigl\{ v_{j,(2t+1)r-j+1},v'_{j,(2t+1)r-j+1}:t\in \mathbb{N}\cup \{0\}; j=tr+1,tr+2,\dots ,\frac{(2t+1)r}{2}\bigr\} \end{aligned}$$

is linearly independent and orthonormal. Let the mappings \(M:l^{2}_{\phi}\rightarrow 2^{l^{2}_{\phi}}\), \(\eta :l^{2}_{\phi}\times l^{2}_{\phi}\rightarrow l^{2}_{\phi}\) and \(P:l^{2}_{\phi}\rightarrow l^{2}_{\phi}\) be defined as

$$\begin{aligned}& M(z)=\textstyle\begin{cases} \Phi ,& z=v_{k,(2s+1)r-k+1}, \\ -z+ \{\sqrt{\frac{(n+3)!}{2\times 3!n!3^{n}\phi (n)}} +i\sqrt{ \frac{(n+3)!}{2\times 3!n!3^{n}\phi (n)}} \}_{n=1}^{\infty},& z\neq v_{k,(2s+1)r-k+1}, \end{cases}\displaystyle \\& \eta (z,w)=\textstyle\begin{cases} \alpha (w-z),& z,w\neq v_{k,(2s+1)r-k+1}, \\ \mathbf{0}& \text{otherwise, } \end{cases}\displaystyle \end{aligned}$$

and \(P(z)=\beta z+\gamma \{\sqrt{ \frac{(n+3)!}{2\times 3!n!3^{n}\phi (n)}} +i\sqrt{ \frac{(n+3)!}{2\times 3!n!3^{n}\phi (n)}} \}_{n=1}^{\infty}\) for \(z,w\in l^{2}_{\phi}\), where

$$\begin{aligned} \Phi =\biggl\{ v_{j,(2t+1)r-j+1}-v_{k,(2s+1)r-k+1}, v'_{j,(2t+1)r-j+1}-v_{k,(2s+1)r-k+1}: t\in \mathbb{N}\cup \{0\}; j=tr+1,tr+2,\dots ,\frac{(2t+1)r}{2}\biggr\} , \end{aligned}$$

\(\alpha ,\beta ,\gamma \in \mathbb{R}\) are arbitrary constants such that \(\beta <0<\alpha \), \(s\in \mathbb{N}\cup \{0\}\) and \(k\in \{sr+1,sr+2,\dots ,\frac{(2s+1)r}{2}\}\) are chosen arbitrarily but fixed, and \(\mathbf{0}\) is the zero vector of the space \(l^{2}_{\phi}\). Since \(\sum_{n=1}^{\infty}\frac{(n+3)!}{3!n!3^{n}}\) is convergent, it follows that \(\{\sqrt{\frac{(n+3)!}{2\times 3!n!3^{n}\phi (n)}} +i\sqrt{\frac{(n+3)!}{2\times 3!n!3^{n}\phi (n)}} \}_{n=1}^{\infty}\in l^{2}_{\phi}\). Then, for all \(z,w\in l^{2}_{\phi}\), \(z\neq w\neq v_{k,(2s+1)r-k+1}\), we have

$$\begin{aligned}& \bigl\langle M(z)-M(w),J_{2}(z-w)\bigr\rangle \\& \quad =\bigl\langle M(z)-M(w),z-w \bigr\rangle \\& \quad =\biggl\langle -z+ \biggl\{ \sqrt{\frac{(n+3)!}{2\times 3!n!3^{n}\phi (n)}}+i \sqrt{ \frac{(n+3)!}{2\times 3!n!3^{n}\phi (n)}} \biggr\} _{n=1}^{\infty }+w \\& \qquad {} - \biggl\{ \sqrt{\frac{(n+3)!}{2\times 3!n!3^{n}\phi (n)}} +i\sqrt{ \frac{(n+3)!}{2\times 3!n!3^{n}\phi (n)}} \biggr\} _{n=1}^{\infty},z-w \biggr\rangle \\& \quad =\langle w-z,z-w\rangle \\& \quad =- \Vert z-w \Vert ^{2}_{l^{2}_{\phi}} \\& \quad =- \sum_{n=1}^{\infty} \vert z_{n}-w_{n} \vert ^{2}\phi (n)< 0, \end{aligned}$$

which means that M is not accretive, and so it is not a P-accretive mapping. For any given \(z,w\in l^{2}_{\phi}\), \(z\neq w\neq v_{k,(2s+1)r-k+1}\), we have

$$\begin{aligned}& \bigl\langle M(z)-M(w),J_{2}\bigl(\eta (z,w) \bigr)\bigr\rangle \\& \quad =\bigl\langle M(z)-M(w), \eta (z,w)\bigr\rangle \\& \quad =\biggl\langle -z+ \biggl\{ \sqrt{\frac{(n+3)!}{2\times 3!n!3^{n}\phi (n)}}+i \sqrt{ \frac{(n+3)!}{2\times 3!n!3^{n}\phi (n)}} \biggr\} _{n=1}^{\infty} \\& \qquad {} +w- \biggl\{ \sqrt{\frac{(n+3)!}{2\times 3!n!3^{n}\phi (n)}}+i \sqrt{\frac{(n+3)!}{2\times 3!n!3^{n}\phi (n)}} \biggr\} _{n=1}^{\infty}, \alpha (w-z)\biggr\rangle \\& \quad =\alpha \langle w-z,w-z\rangle \\& \quad =\alpha \Vert w-z \Vert ^{2}_{l^{2}_{ \phi}} \\& \quad =\alpha \sum_{n=1}^{\infty} \vert z_{n}-w_{n} \vert ^{2}\phi (n)>0. \end{aligned}$$

For each of the cases where \(z\neq w=v_{k,(2s+1)r-k+1}\), \(w\neq z=v_{k,(2s+1)r-k+1}\), and \(z=w=v_{k,(2s+1)r-k+1}\), since \(\eta (z,w)=\mathbf{0}\), we conclude that

$$\begin{aligned} \bigl\langle u-v,J_{2}\bigl(\eta (z,w)\bigr)\bigr\rangle =\bigl\langle u-v,\eta (z,w)\bigr\rangle =0\quad \forall u\in M(z), v\in M(w). \end{aligned}$$

Hence M is an η-accretive mapping. Taking into account that for any \(z\in l^{2}_{\phi}\), \(z\neq v_{k,(2s+1)r-k+1}\),

$$\begin{aligned} \bigl\Vert (I+M) (z) \bigr\Vert ^{2}_{l^{2}_{\phi}} =& \biggl\Vert \biggl\{ \sqrt{ \frac{(n+3)!}{2\times 3!n!3^{n}\phi (n)}}+i\sqrt{ \frac{(n+3)!}{2\times 3!n!3^{n}\phi (n)}} \biggr\} _{n=1}^{\infty} \biggr\Vert ^{2}_{l^{2}_{ \phi}} \\ =&\sum_{n=1}^{\infty}\frac{(n+3)!}{3!n!3^{n}}>0 \end{aligned}$$

and

$$\begin{aligned} (I+M) (v_{k,(2s+1)r-k+1})=\biggl\{ v_{j,(2t+1)r-j+1},v'_{j,(2t+1)r-j+1}: t\in \mathbb{N}\cup \{0\}; j=tr+1,tr+2,\dots ,\frac{(2t+1)r}{2}\biggr\} , \end{aligned}$$

where I is the identity mapping on \(E=l^{2}_{\phi}\), it follows that \(\mathbf{0}\notin (I+M)(l^{2}_{\phi})\). Thus \(I+M\) is not surjective, and, consequently, M is not a generalized m-accretive mapping. For any \(\lambda >0\) and \(z\in l^{2}_{\phi}\), taking \(w=\frac{1}{\beta -\lambda}z+\frac{\gamma +\lambda}{\lambda -\beta} \{\sqrt{\frac{(n+3)!}{2\times 3!n!3^{n}\phi (n)}}+i\sqrt{ \frac{(n+3)!}{2\times 3!n!3^{n}\phi (n)}} \}_{n=1}^{\infty}\) (\(\lambda \neq \beta \), because \(\beta <0\)), we obtain

$$\begin{aligned}& (P+\lambda M) (w) \\& \quad = (P+\lambda M) \biggl( \frac{1}{\beta -\lambda}z +\frac{\gamma +\lambda}{\lambda -\beta} \biggl\{ \sqrt{ \frac{(n+3)!}{2\times 3!n!3^{n}\phi (n)}} +i \sqrt{ \frac{(n+3)!}{2\times 3!n!3^{n}\phi (n)}} \biggr\} _{n=1}^{\infty}\biggr) \\& \quad = z. \end{aligned}$$

Accordingly, for any real constant \(\lambda >0\), the mapping \(P+\lambda M\) is surjective, and so M is a P-η-accretive mapping.
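
The surjectivity step above can also be checked numerically. The following Python sketch (ours, not from the paper) works on a finite truncation of \(l^{2}_{\phi}\) with \(\phi \equiv 1\) and verifies that, for a generic z away from the exceptional vectors \(v_{k,(2s+1)r-k+1}\), the element w constructed in the last computation indeed satisfies \((P+\lambda M)(w)=z\); the truncation length N and the constants β, γ, λ are arbitrary choices of ours.

```python
import numpy as np
from math import factorial

# Numerical sketch (ours) of the final surjectivity computation in Example 2.6, on a finite
# truncation of l^2_phi with phi(n) = 1 and N coordinates.  Away from the exceptional vectors
# v_{k,(2s+1)r-k+1} we have M(z) = -z + c and P(z) = beta*z + gamma*c, where
# c_n = sqrt((n+3)!/(2*3!*n!*3^n*phi(n)))*(1 + i); the element
# w = z/(beta - lam) + (gamma + lam)/(lam - beta)*c then satisfies (P + lam*M)(w) = z.
N, beta, gamma, lam = 20, -1.5, 0.7, 2.0                 # beta < 0 and lam > 0, as in the example
c = np.array([np.sqrt(factorial(n + 3) / (2 * factorial(3) * factorial(n) * 3 ** n))
              for n in range(1, N + 1)]) * (1 + 1j)

P = lambda z: beta * z + gamma * c
M = lambda z: -z + c                                     # generic branch of M (z != v_{k,...})

rng = np.random.default_rng(2)
z = rng.standard_normal(N) + 1j * rng.standard_normal(N) # a generic target point
w = z / (beta - lam) + (gamma + lam) / (lam - beta) * c
assert np.allclose(P(w) + lam * M(w), z)                 # so P + lam*M is onto
print("the identity (P + lam*M)(w) = z holds on the truncated space")
```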

The following example illustrates that for given mappings \(\eta :E\times E\rightarrow E\) and \(P:E\rightarrow E\), a generalized m-accretive mapping need not be P-η-accretive.

Example 2.7

Let \(H_{2}(\mathbb{C})\) be the set of all \(2\times 2\) Hermitian matrices with complex entries. Recall that a square matrix A is said to be Hermitian (or self-adjoint) if it is equal to its own Hermitian conjugate, i.e., \(A^{*}=\overline{A^{t}}=A\). In view of the definition of a Hermitian \(2\times 2\) matrix, the condition \(A^{*}=A\) implies that the \(2\times 2\) matrix \(A=\begin{pmatrix} a & b \\ c & d \end{pmatrix}\) is Hermitian if and only if \(a,d\in \mathbb{R}\) and \(b=\bar{c}\). Thus

$$\begin{aligned} H_{2}(\mathbb{C})=\biggl\{ \begin{pmatrix} z & x-iy \\ x+iy & w \end{pmatrix} \Bigm| x,y,z,w\in \mathbb{R}\biggr\} , \end{aligned}$$

and it is a subspace of \(M_{2}(\mathbb{C})\), the space of all \(2\times 2\) matrices with complex entries, with respect to the operations of addition and scalar multiplication defined on \(M_{2}(\mathbb{C})\) when \(M_{2}(\mathbb{C})\) is considered as a real vector space. Considering the scalar product on \(H_{2}(\mathbb{C})\) given by \(\langle A,B\rangle :=\frac{1}{2}\operatorname{tr}(AB)\) for \(A,B\in H_{2}(\mathbb{C})\), we can easily check that \(\langle \cdot ,\cdot \rangle \) is an inner product, that is, \((H_{2}(\mathbb{C}),\langle \cdot ,\cdot \rangle )\) is an inner product space. The inner product defined above induces the following norm on \(H_{2}(\mathbb{C})\):

$$\begin{aligned} \Vert A \Vert =\sqrt{\langle A,A\rangle}=\sqrt{\frac{1}{2}\operatorname{tr}(AA)} =\biggl\{ \frac{1}{2}\operatorname{tr}\biggl( \begin{pmatrix} x^{2}+y^{2}+z^{2} & (z+w)(x-iy) \\ (z+w)(x+iy) & x^{2}+y^{2}+w^{2} \end{pmatrix} \biggr)\biggr\} ^{\frac{1}{2}} =\sqrt{x^{2}+y^{2}+\frac{1}{2}\bigl(z^{2}+w^{2}\bigr)},\quad A\in H_{2}(\mathbb{C}). \end{aligned}$$

Since every finite-dimensional normed space is a Banach space, it follows that \((H_{2}(\mathbb{C}),\Vert \cdot \Vert )\) is a Hilbert space, and so it is a 2-uniformly smooth Banach space. Assume that the mappings \(P,M:H_{2}(\mathbb{C})\rightarrow H_{2}(\mathbb{C})\) and \(\eta :H_{2}(\mathbb{C})\times H_{2}(\mathbb{C})\rightarrow H_{2}(\mathbb{C})\) are defined, respectively, as

$$\begin{aligned}& P(A)=P\biggl( \begin{pmatrix} z & x-iy \\ x+iy & w \end{pmatrix} \biggr)= \begin{pmatrix} 2\sin ^{2}z & x^{2}-iy^{2} \\ x^{2}+iy^{2} & -4\cos ^{2}w \end{pmatrix} , \\& M(A)=M\biggl( \begin{pmatrix} z & x-iy \\ x+iy & w \end{pmatrix} \biggr)= \begin{pmatrix} -4\cos ^{2}z & x-iy \\ x+iy & 2\sin ^{2}w \end{pmatrix} \end{aligned}$$

and

$$\begin{aligned} \eta (A,B)&=\eta \biggl( \begin{pmatrix} z & x-iy \\ x+iy & w \end{pmatrix} , \begin{pmatrix} \widehat{z} & \widehat{x}-i\widehat{y} \\ \widehat{x}+i\widehat{y} & \widehat{w} \end{pmatrix} \biggr) \\ &= \begin{pmatrix} -4(\cos ^{2}z-\cos ^{2}\widehat{z}) & x-\widehat{x}-i(y-\widehat{y}) \\ x-\widehat{x}+i(y-\widehat{y}) & 2(\sin ^{2}w-\sin ^{2}\widehat{w}) \end{pmatrix} \end{aligned}$$

for

$$\begin{aligned} A= \begin{pmatrix} z & x-iy \\ x+iy & w \end{pmatrix} ,\qquad B= \begin{pmatrix} \widehat{z} & \widehat{x}-i\widehat{y} \\ \widehat{x}+i\widehat{y} & \widehat{w} \end{pmatrix} \in H_{2}(\mathbb{C}). \end{aligned}$$

Then, for any

$$\begin{aligned} A= \begin{pmatrix} z & x-iy \\ x+iy & w \end{pmatrix} ,\qquad B= \begin{pmatrix} \widehat{z} & \widehat{x}-i\widehat{y} \\ \widehat{x}+i\widehat{y} & \widehat{w} \end{pmatrix} \in H_{2}(\mathbb{C}), \end{aligned}$$

we get

$$\begin{aligned}& \bigl\langle M(A)-M(B),J_{2}\bigl(\eta (A,B)\bigr)\bigr\rangle \\& \quad =\bigl\langle M(A)-M(B),\eta (A,B)\bigr\rangle \\& \quad =\biggl\langle \begin{pmatrix} -4(\cos ^{2}z-\cos ^{2}\widehat{z}) & x-\widehat{x}-i(y-\widehat{y}) \\ x-\widehat{x}+i(y-\widehat{y}) & 2(\sin ^{2}w-\sin ^{2}\widehat{w}) \end{pmatrix} , \begin{pmatrix} -4(\cos ^{2}z-\cos ^{2}\widehat{z}) & x-\widehat{x}-i(y-\widehat{y}) \\ x-\widehat{x}+i(y-\widehat{y}) & 2(\sin ^{2}w-\sin ^{2}\widehat{w}) \end{pmatrix} \biggr\rangle \\& \quad =\frac{1}{2}\operatorname{tr}\biggl( \begin{pmatrix} \Delta _{11}(x,\widehat{x},y,\widehat{y},z,\widehat{z}) & \Delta _{12}(x,\widehat{x},y,\widehat{y},z,\widehat{z}) \\ \Delta _{21}(x,\widehat{x},y,\widehat{y},z,\widehat{z}) & \Delta _{22}(x,\widehat{x},y,\widehat{y},z,\widehat{z}) \end{pmatrix} \biggr) \\& \quad =8\bigl(\cos ^{2}z-\cos ^{2}\widehat{z}\bigr)^{2}+2\bigl(\sin ^{2}w-\sin ^{2}\widehat{w}\bigr)^{2}+(x-\widehat{x})^{2}+(y-\widehat{y})^{2}\geq 0, \end{aligned}$$
(2.1)

where

$$\begin{aligned}& \Delta _{11}(x,\widehat{x},y,\widehat{y},z, \widehat{z})= 16\bigl(\cos ^{2}z-\cos ^{2}\widehat{z} \bigr)^{2}+(x-\widehat{x})^{2}+(y- \widehat{y})^{2}, \\& \Delta _{12}(x,\widehat{x},y,\widehat{y},z,\widehat{z})= \bigl(x- \widehat{x}-i(y-\widehat{y})\bigr) \bigl(-4\bigl(\cos ^{2}z-\cos ^{2}\widehat{z}\bigr)+2\bigl( \sin ^{2}w-\sin ^{2} \widehat{w}\bigr)\bigr), \\& \Delta _{21}(x,\widehat{x},y,\widehat{y},z,\widehat{z})= \bigl(x- \widehat{x}+i(y-\widehat{y})\bigr) \bigl(-4\bigl(\cos ^{2}z-\cos ^{2}\widehat{z}\bigr)+2\bigl( \sin ^{2}w-\sin ^{2} \widehat{w}\bigr)\bigr), \\& \Delta _{22}(x,\widehat{x},y,\widehat{y},z,\widehat{z})= (x- \widehat{x})^{2}+(y-\widehat{y})^{2}+4\bigl(\sin ^{2}w-\sin ^{2} \widehat{w}\bigr)^{2}. \end{aligned}$$

By (2.1) we infer that M is an η-accretive mapping. Let us define the functions \(f,g:\mathbb{R}\rightarrow \mathbb{R}\) as \(f(t):=2\sin ^{2}t-4\cos ^{2}t\) and \(g(t):=t^{2}+t\) for \(t\in \mathbb{R}\). Then, for any \(A=\begin{pmatrix} z & x-iy \\ x+iy & w \end{pmatrix}\in H_{2}(\mathbb{C})\), we have

$$\begin{aligned} (P+M) (A)&=(P+M) \biggl( \begin{pmatrix} z & x-iy \\ x+iy & w \end{pmatrix} \biggr) \\ &= \begin{pmatrix} 2\sin ^{2}z-4\cos ^{2}z & x^{2}+x-i(y^{2}+y) \\ x^{2}+x+i(y^{2}+y) & 2\sin ^{2}w-4\cos ^{2}w \end{pmatrix} \\ &= \begin{pmatrix} f(z) & g(x)-ig(y) \\ g(x)+ig(y) & f(w) \end{pmatrix} . \end{aligned}$$

Since

$$\begin{aligned} f(t)=2\sin ^{2}t-4\cos ^{2}t=2\sin ^{2}t-4\bigl(1-\sin ^{2}t\bigr)=6\sin ^{2}t-4 \quad \forall t\in \mathbb{R}, \end{aligned}$$

it follows that \(-4\leq f(t)\leq 2\) for all \(t\in \mathbb{R}\). At the same time, for all \(t\in \mathbb{R}\), we have \(g(t)=t^{2}+t=(t+\frac{1}{2})^{2}-\frac{1}{4}\geq -\frac{1}{4}\). Hence \(f(\mathbb{R})=[-4,2]\subsetneq \mathbb{R}\) and \(g(\mathbb{R})=[-\frac{1}{4},+\infty )\subsetneq \mathbb{R}\). These facts ensure that \((P+M)(H_{2}(\mathbb{C}))\subsetneq H_{2}(\mathbb{C})\), i.e., \(P+M\) is not surjective, and, consequently, M is not P-η-accretive. Now let λ be an arbitrary positive real constant, and let the functions \(\widetilde{f},\widetilde{g},\widetilde{h}:\mathbb{R}\rightarrow \mathbb{R}\) be defined as \(\widetilde{f}(t):=t-4\lambda \cos ^{2}t\), \(\widetilde{g}(t):=t+2\lambda \sin ^{2}t\), and \(\widetilde{h}(t):=(1+\lambda )t\) for \(t\in \mathbb{R}\). Then, for any \(A=\begin{pmatrix} z & x-iy \\ x+iy & w \end{pmatrix}\in H_{2}(\mathbb{C})\), we have

$$\begin{aligned} (I+\lambda M) (A)&=(I+\lambda M) \biggl( \begin{pmatrix} z & x-iy \\ x+iy & w \end{pmatrix} \biggr) \\ &= \begin{pmatrix} z-4\lambda \cos ^{2}z & (1+\lambda )x-i(1+\lambda )y \\ (1+\lambda )x+i(1+\lambda )y & w+2\lambda \sin ^{2}w \end{pmatrix} \\ &= \begin{pmatrix} \widetilde{f}(z) & \widetilde{h}(x)-i\widetilde{h}(y) \\ \widetilde{h}(x)+i\widetilde{h}(y) & \widetilde{g}(w) \end{pmatrix} , \end{aligned}$$

where I is the identity mapping on \(H_{2}(\mathbb{C})\). Since \(\widetilde{f}(\mathbb{R})=\widetilde{g}(\mathbb{R})=\widetilde{h}(\mathbb{R})=\mathbb{R}\), we deduce that \((I+\lambda M)(H_{2}(\mathbb{C}))=H_{2}(\mathbb{C})\), that is, \(I+\lambda M\) is a surjective mapping. Taking into account the arbitrariness of \(\lambda >0\), it follows that M is a generalized m-accretive mapping.
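
Here is a short numerical illustration (ours) of the two range arguments used above: the diagonal entries of \((P+M)(A)\) are values of \(f(t)=6\sin ^{2}t-4\), whose range \([-4,2]\) misses, for instance, the value 5, whereas for any \(\lambda >0\) the equation \(\widetilde{f}(t)=t-4\lambda \cos ^{2}t=c\) is solvable for every real c; the grid, the value λ = 3, and the target 5 are arbitrary choices.

```python
import numpy as np

# Quick numerical illustration (ours) of the range arguments in Example 2.7.  The diagonal
# entries of (P + M)(A) are values of f(t) = 6 sin^2 t - 4, whose range is [-4, 2]; a target
# diagonal entry of 5 is therefore never attained, so P + M is not surjective.  In contrast,
# f_tilde(t) = t - 4*lam*cos^2 t (the diagonal map of I + lam*M) attains every real value,
# which we check by solving f_tilde(t) = 5 by bisection.
t = np.linspace(-50, 50, 200_001)
f = 6 * np.sin(t) ** 2 - 4
print("range of f on the grid:", (f.min(), f.max()))        # approximately (-4, 2)

lam, target = 3.0, 5.0
f_tilde = lambda s: s - 4 * lam * np.cos(s) ** 2
lo, hi = target - 1, target + 4 * lam + 1                   # f_tilde(lo) < target < f_tilde(hi)
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f_tilde(mid) < target else (lo, mid)
print("f_tilde(t*) =", f_tilde(lo), "at t* =", lo)           # ~ 5.0: the value 5 is attained
```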

Example 2.8

Let \(m,n\in \mathbb{N}\) be arbitrary but fixed, and let \(M_{m\times n}(\mathbb{F})\) be the vector space of all \(m\times n\) matrices with real or complex entries. Then

$$\begin{aligned} M_{m\times n}(\mathbb{F})=\bigl\{ A=( a_{ij} ) \mid a_{ij}\in \mathbb{F}, i=1,2,\dots ,m; j=1,2,\dots ,n, \mathbb{F}=\mathbb{R}\text{ or }\mathbb{C}\bigr\} \end{aligned}$$

is a Hilbert space with respect to the Hilbert–Schmidt norm

$$\begin{aligned} \Vert A \Vert =\Biggl(\sum_{i=1}^{m}\sum_{j=1}^{n} \vert a_{ij} \vert ^{2}\Biggr)^{\frac{1}{2}},\quad A\in M_{m\times n}(\mathbb{F}), \end{aligned}$$

induced by the Hilbert–Schmidt inner product

$$\begin{aligned} \langle A,B\rangle =\operatorname{tr}\bigl(A^{*}B\bigr)=\sum_{i=1}^{m}\sum_{j=1}^{n}\overline{a_{ij}}\,b_{ij},\quad A,B\in M_{m\times n}(\mathbb{F}), \end{aligned}$$

where tr denotes the trace, that is, the sum of the diagonal entries, and \(A^{*}\) denotes the Hermitian conjugate (or adjoint) of a matrix A, that is, \(A^{*}=\overline{A^{t}}\), the complex conjugate of the transpose of A, where the bar denotes entrywise complex conjugation and the superscript t denotes the transpose. Denote by \(D_{n}(\mathbb{R})\) the space of all diagonal \(n\times n\) matrices with real entries, that is, the \((i,j)\)-entry is an arbitrary real number if \(i=j\) and is zero if \(i\neq j\). Then

$$\begin{aligned} D_{n}(\mathbb{R})=\bigl\{ A=( a_{ij} ) \mid a_{ij}\in \mathbb{R}, a_{ij}=0 \text{ if } i\neq j; i,j=1,2,\dots ,n\bigr\} \end{aligned}$$

is a subspace of \(M_{n\times n}(\mathbb{R})=M_{n}(\mathbb{R})\) with respect to the operations of addition and scalar multiplication on \(M_{n}(\mathbb{R})\). Furthermore, the Hilbert–Schmidt inner product on \(D_{n}(\mathbb{R})\) and the Hilbert–Schmidt norm induced by it become

$$\begin{aligned} \langle A,B\rangle =\operatorname{tr}\bigl(A^{*}B\bigr)=\operatorname{tr}(AB) \end{aligned}$$

and

$$\begin{aligned} \Vert A \Vert =\sqrt{\langle A,A\rangle}=\sqrt{\operatorname{tr}(AA)}= \Biggl(\sum _{i=1}^{n}a_{ii}^{2} \Biggr)^{\frac{1}{2}}, \end{aligned}$$

respectively. Define the mappings \(P_{1},P_{2},M:D_{n}(\mathbb{R})\rightarrow D_{n}(\mathbb{R})\) and \(\eta :D_{n}(\mathbb{R})\times D_{n}(\mathbb{R})\rightarrow D_{n}(\mathbb{R})\) by \(P_{1}(A)=P_{1}(( a_{ij} ))=( a'_{ij} )\), \(P_{2}(A)=P_{2}(( a_{ij} ))=( a''_{ij} )\), \(M(A)=M(( a_{ij} ))=( a'''_{ij} )\), and \(\eta (A,B)=\eta (( a_{ij} ),( b_{ij} ))=( c_{ij} )\) for \(A=( a_{ij} )\), \(B=( b_{ij} )\in D_{n}(\mathbb{R})\), where for all \(i,j\in \{1,2,\dots ,n\}\),

$$\begin{aligned}& a'_{ij}=\textstyle\begin{cases} \sin ^{2l}a_{ii}+\cos ^{2l}a_{ii}-\gamma \sqrt[k]{a_{ii}},& i=j, \\ 0,& i\neq j, \end{cases}\displaystyle \\& a''_{ij}=\textstyle\begin{cases} \alpha a^{q}_{ii},& i=j, \\ 0,& i\neq j, \end{cases}\displaystyle \\& a'''_{ij}=\textstyle\begin{cases} \gamma \sqrt[k]{a_{ii}},& i=j, \\ 0,& i\neq j, \end{cases}\displaystyle \end{aligned}$$

and

$$ c_{ij}=\textstyle\begin{cases} \frac{\beta (\sqrt[p]{a_{ii}}-\sqrt[p]{b_{ii}})}{1+a^{2}_{ii}+b^{2}_{ii}},& i=j, \\ 0,& i\neq j, \end{cases} $$

where β, γ are arbitrary positive real constants, α is an arbitrary real constant, k, p, and q are arbitrary but fixed odd natural numbers, and \(l\in \mathbb{N}\backslash \{1\}\) is an arbitrary but fixed natural number. Then, for any \(A=( a_{ij} )\), \(B=( b_{ij} )\in D_{n}(\mathbb{R})\), we have

$$\begin{aligned} \begin{aligned} \bigl\langle M(A)-M(B),J_{2} \bigl(\eta (A,B)\bigr)\bigr\rangle &=\bigl\langle M(A)-M(B), \eta (A,B)\bigr\rangle \\ & =\operatorname{tr} \big( \bigl(a'''_{ij}-b'''_{ij} \bigr) ( c_{ij} ) \big) \\ & =\beta \gamma \sum_{i=1}^{n} \frac{(\sqrt[k]{a_{ii}}-\sqrt[k]{b_{ii}}) (\sqrt[p]{a_{ii}}-\sqrt[p]{b_{ii}})}{1+a^{2}_{ii}+b^{2}_{ii}}. \end{aligned} \end{aligned}$$
(2.2)

If \(a_{ii}=b_{ii}=0\), then \((\sqrt[k]{a_{ii}}-\sqrt[k]{b_{ii}})(\sqrt[p]{a_{ii}}- \sqrt[p]{b_{ii}})=0\). In the case where \(a_{ii}\neq 0=b_{ii}\),

$$\begin{aligned} \bigl(\sqrt[k]{a_{ii}}-\sqrt[k]{b_{ii}}\bigr) \bigl( \sqrt[p]{a_{ii}}-\sqrt[p]{b_{ii}}\bigr)= \sqrt[k]{a_{ii}} \sqrt[p]{a_{ii}}=\sqrt[kp]{a_{ii}^{k+p}}. \end{aligned}$$

At the same time, if \(a_{ii}=0\neq b_{ii}\), then

$$\begin{aligned} \bigl(\sqrt[k]{a_{ii}}-\sqrt[k]{b_{ii}}\bigr) \bigl( \sqrt[p]{a_{ii}}-\sqrt[p]{b_{ii}}\bigr)= \sqrt[k]{b_{ii}} \sqrt[p]{b_{ii}}=\sqrt[kp]{b_{ii}^{k+p}}. \end{aligned}$$

Since k and p are odd natural numbers, in both of the last two cases we deduce that

$$\begin{aligned} \bigl(\sqrt[k]{a_{ii}}-\sqrt[k]{b_{ii}}\bigr) \bigl( \sqrt[p]{a_{ii}}-\sqrt[p]{b_{ii}}\bigr)>0. \end{aligned}$$

If \(a_{ii},b_{ii}\neq 0\) and \(a_{ii}\neq b_{ii}\) (the case \(a_{ii}=b_{ii}\) being trivial), then we infer that

$$\begin{aligned}& \sqrt[k]{a_{ii}}-\sqrt[k]{b_{ii}}= \frac{a_{ii}-b_{ii}}{\sum_{r=1}^{k} \sqrt[k]{a_{ii}^{k-r}b_{ii}^{r-1}}} \quad \text{and} \\& \sqrt[p]{a_{ii}}-\sqrt[p]{b_{ii}}= \frac{a_{ii}-b_{ii}}{\sum_{t=1}^{p}\sqrt[p]{a_{ii}^{p-t}b_{ii}^{t-1}}}. \end{aligned}$$

Since k and p are odd natural numbers, it follows that \(\sum_{r=1}^{k}\sqrt[k]{a_{ii}^{k-r}b_{ii}^{r-1}}>0\) and \(\sum_{t=1}^{p}\sqrt[p]{a_{ii}^{p-t}b_{ii}^{t-1}}>0\), which imply that

$$\begin{aligned} \bigl(\sqrt[k]{a_{ii}}-\sqrt[k]{b_{ii}}\bigr) \bigl( \sqrt[p]{a_{ii}}-\sqrt[p]{b_{ii}}\bigr)= \frac{(a_{ii}-b_{ii})^{2}}{ (\sum_{r=1}^{k}\sqrt[k]{a_{ii}^{k-r}b_{ii}^{r-1}})(\sum_{t=1}^{p}\sqrt[p]{a_{ii}^{p-t}b_{ii}^{t-1}})}>0. \end{aligned}$$

Since \(\beta ,\gamma >0\), in light of the arguments mentioned above and making use of (2.2), it follows that

$$\begin{aligned} \bigl\langle M(A)-M(B),J_{2}\bigl(\eta (A,B)\bigr)\bigr\rangle \geq 0\quad \forall A,B\in D_{n}(\mathbb{R}), \end{aligned}$$

which ensures that M is an η-accretive mapping. Assume that the function \(f:\mathbb{R}\rightarrow \mathbb{R}\) is defined by \(f(x):=\sin ^{2l}x+\cos ^{2l}x\) for \(x\in \mathbb{R}\). Then, for any \(A=( a_{ij} )\in D_{n}(\mathbb{R})\), we obtain

$$\begin{aligned} (P_{1}+M) (A)=(P_{1}+M) \big(( a_{ij} ) \big)=\bigl( a'_{ij}+a'''_{ij} \bigr)=( \widetilde{a}_{ij} ), \end{aligned}$$

where for each \(i,j\in \{1,2,\dots ,n\}\),

$$ \widetilde{a}_{ij}=\textstyle\begin{cases} \sin ^{2l}a_{ii}+\cos ^{2l}a_{ii},& i=j, \\ 0,& i\neq j, \end{cases}\displaystyle =\textstyle\begin{cases} f(a_{ii}),& i=j, \\ 0,& i\neq j. \end{cases} $$

The fact that \(f(\mathbb{R})=[2^{1-l},1]\) implies that \((P_{1}+M)(D_{n}(\mathbb{R}))\subsetneq D_{n}(\mathbb{R})\), which means that \(P_{1}+M\) is not surjective, and so M is not a \(P_{1}\)-η-accretive mapping. Now suppose that \(\lambda >0\) is an arbitrary real constant, and let the function \(g:\mathbb{R}\rightarrow \mathbb{R}\) be defined by \(g(x):=\alpha x^{q}+\lambda \gamma \sqrt[k]{x}\) for \(x\in \mathbb{R}\). Then, for any \(A=( a_{ij} )\in D_{n}(\mathbb{R})\), we get

$$\begin{aligned} (P_{2}+\lambda M) (A)=(P_{2}+\lambda M) \big( ( a_{ij} )\big)=\bigl( a''_{ij}+\lambda a'''_{ij} \bigr)=( \widehat{a}_{ij} ), \end{aligned}$$

where for each \(i,j\in \{1,2,\dots ,n\}\),

$$ \widehat{a}_{ij}=\textstyle\begin{cases} \alpha a_{ii}^{q}+\lambda \gamma \sqrt[k]{a_{ii}},& i=j, \\ 0,& i\neq j, \end{cases}\displaystyle =\textstyle\begin{cases} g(a_{ii}),& i=j, \\ 0,& i\neq j. \end{cases} $$

Since the natural numbers q and k are odd, we deduce that \(g(\mathbb{R})=\mathbb{R}\), which implies that \((P_{2}+\lambda M)(D_{n}(\mathbb{R}))=D_{n}(\mathbb{R})\). Thereby \(P_{2}+\lambda M\) is a surjective mapping. Since \(\lambda >0\) was arbitrary, it follows that M is a \(P_{2}\)-η-accretive mapping.
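
As a quick numerical spot check (ours, not from the paper), the following sketch samples random diagonal matrices and confirms the nonnegativity of the inner product computed in (2.2); the dimension n = 4, the exponents k = 3, p = 5, and β = γ = 1 are arbitrary choices.

```python
import numpy as np

# Random spot check (ours) of the computation leading to (2.2) in Example 2.8 with n = 4,
# k = 3, p = 5, beta = gamma = 1.  For diagonal matrices A = diag(a), B = diag(b),
# <M(A) - M(B), eta(A, B)> = sum_i (a_i^{1/k} - b_i^{1/k})(a_i^{1/p} - b_i^{1/p})
#                                  / (1 + a_i^2 + b_i^2),
# which should be nonnegative because k and p are odd.
odd_root = lambda x, m: np.sign(x) * np.abs(x) ** (1.0 / m)   # real m-th root for odd m

rng = np.random.default_rng(4)
k, p = 3, 5
for _ in range(10_000):
    a, b = rng.standard_normal(4), rng.standard_normal(4)     # diagonals of A and B
    val = np.sum((odd_root(a, k) - odd_root(b, k)) * (odd_root(a, p) - odd_root(b, p))
                 / (1 + a ** 2 + b ** 2))
    assert val >= -1e-12
print("the inner product in (2.2) is nonnegative for all sampled pairs")
```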

Note, in particular, that if \(P\equiv I\), the identity mapping on E, then the definition of a P-η-accretive mapping reduces to that of a generalized m-accretive mapping. In fact, there is a close relation between the two classes of P-η-accretive mappings and generalized m-accretive mappings in the framework of Banach spaces. On the other hand, in the light of Example 2.6, for given mappings \(P:E\rightarrow E\) and \(\eta :E\times E\rightarrow E\), a P-η-accretive mapping need not be generalized m-accretive. The following lemma shows that, under suitable conditions, a P-η-accretive mapping M is generalized m-accretive.

Lemma 2.9

([26, Theorem 3.1(a)])

Let E be a real q-uniformly smooth Banach space, let \(\eta :E\times E\rightarrow E\) be a vector-valued mapping, let \(P:E\rightarrow E\) be a strictly η-accretive mapping, let \(M:E \rightarrow 2^{E}\) be a P-η-accretive mapping, and let \(x,u\in E\). If \(\langle u-v,J_{q}(\eta (x,y))\rangle \geq 0\) for all \((y,v)\in \operatorname{Graph}(M)\), then \((x,u)\in \operatorname{Graph}(M)\).

As Example 2.7 shows, for given mappings \(P:E\rightarrow E\) and \(\eta :E\times E\rightarrow E\), a generalized m-accretive mapping may not be P-η-accretive. In the following assertion, we state sufficient conditions for a generalized m-accretive mapping to be P-η-accretive. Before turning to it, we need to recall the following notions.

Definition 2.10

Let E be a real q-uniformly smooth Banach space. A mapping \(P:E\rightarrow E\) is said to be coercive if

$$\begin{aligned} \lim_{ \Vert x \Vert \rightarrow +\infty} \frac{\langle P(x),J_{q}(x)\rangle}{ \Vert x \Vert }=+\infty . \end{aligned}$$

Definition 2.11

Let E be a real q-uniformly smooth Banach space, and let \(P:E\rightarrow E\) be a single-valued mapping. P is said to be

  1. (i)

    bounded if \(P(A)\) is a bounded subset of E for every bounded subset A of E,

  2. (ii)

    hemicontinuous if for any fixed points \(x,y,z\in E\), the function \(t\longmapsto \langle P(x+ty),J_{q}(z)\rangle \) is continuous at 0+.

Theorem 2.12

Let E be a real q-uniformly smooth Banach space, let \(\eta :E\times E\rightarrow E\) be a single-valued mapping, and let \(P:E\rightarrow E\) be a bounded, coercive, hemicontinuous, and η-accretive mapping. If \(M:E\rightarrow 2^{E}\) is a generalized m-accretive mapping, then M is P-η-accretive.

Proof

Since the mapping P is bounded, coercive, hemicontinuous, and η-accretive, it follows from Theorem 3.1 of Guo [57, p. 401] that \(P+\lambda M\) is surjective for every \(\lambda >0\), i.e., \(\operatorname{Range}(P+\lambda M)=E\) for every \(\lambda >0\), where \(\operatorname{Range}(P+\lambda M)\) denotes the range of \(P+\lambda M\). Hence M is a P-η-accretive mapping. The proof is finished. □

Theorem 2.13

Suppose that E is a real q-uniformly smooth Banach space, \(\eta :E\times E\rightarrow E\) is a vector-valued mapping, \(P:E\rightarrow E\) is a strictly η-accretive mapping, and \(M:E\rightarrow 2^{E}\) is an η-accretive mapping. Then the mapping \((P+\lambda M)^{-1}:\operatorname{Range}(P+\lambda M)\rightarrow E\) is single-valued for every constant \(\lambda >0\).

Proof

Assume that a constant \(\lambda >0\) and a point \(u\in \operatorname{Range}(P+\lambda M)\) are chosen arbitrarily but fixed. Then for any \(x,y\in (P+\lambda M)^{-1}(u)\), we have \(u=(P+\lambda M)(x)=(P+\lambda M)(y)\), from which we deduce that

$$\begin{aligned} \lambda ^{-1}\bigl(u-P(x)\bigr)\in M(x) \quad \text{and}\quad \lambda ^{-1}\bigl(u-P(y)\bigr)\in M(y). \end{aligned}$$

Taking into account that M is η-accretive, we infer that

$$\begin{aligned} 0 \leq& \bigl\langle \lambda ^{-1}\bigl(u-P(x)\bigr)-\lambda ^{-1}\bigl(u-P(y)\bigr),J_{q}\bigl(\eta (x,y)\bigr) \bigr\rangle \\ =&\lambda ^{-1}\bigl\langle P(x)-P(y),J_{q}\bigl(\eta (x,y)\bigr)\bigr\rangle . \end{aligned}$$

Since P is a strictly η-accretive mapping, from the last inequality we conclude that \(x=y\), which ensures that the mapping \((P+\lambda M)^{-1}\) from \(\operatorname{Range}(P+\lambda M)\) into E is single-valued. This gives the desired result. □

The following statement due to Kazmi and Khan [24] is an immediate consequence of Theorem 2.13.

Lemma 2.14

([26, Theorem 3.1(b)])

Let E be a real q-uniformly smooth Banach space, let \(\eta :E\times E\rightarrow E\) be a vector-valued mapping, let \(P:E\rightarrow E\) be a strictly η-accretive mapping, and let \(M:E\rightarrow 2^{E}\) be a P-η-accretive mapping. Then the mapping \((P+\lambda M)^{-1}:E\rightarrow E\) is single-valued for every real constant \(\lambda >0\).

The resolvent operator \(R^{P,\eta}_{M,\lambda}\) associated with the mappings P, η, M and constant \(\lambda >0\) is defined based on Lemma 2.14 as follows.

Definition 2.15

([24, 26])

Let E be a real q-uniformly smooth Banach space, let \(\eta :E\times E\rightarrow E\) be a vector-valued mapping, let \(P:E\rightarrow E\) be a strictly η-accretive mapping, let \(M:E \rightarrow 2^{E}\) be a P-η-accretive mapping, and let \(\lambda >0\) be an arbitrary real constant. The resolvent operator \(R^{P,\eta}_{M,\lambda}:E\rightarrow E\) associated with P, η, M, and λ is defined by

$$\begin{aligned} R^{P,\eta}_{M,\lambda}(u)=(P+\lambda M)^{-1}(u),\quad u\in E. \end{aligned}$$

Definition 2.16

A mapping \(\eta :E\times E\rightarrow E\) is said to be τ-Lipschitz continuous if there exists a constant \(\tau >0\) such that \(\Vert \eta (x,y)\Vert \leq \tau \Vert x-y\Vert \) for all \(x,y\in E\).

Under some suitable conditions, Peng and Zhu [26] proved the Lipschitz continuity of the resolvent operator \(R^{P,\eta}_{M,\lambda}\) and obtained an estimate of its Lipschitz constant as follows.

Lemma 2.17

([26, Lemma 2.4])

Let E be a real q-uniformly smooth Banach space, let \(\eta :E\times E\rightarrow E\) be a τ-Lipschitz continuous mapping, let \(P:E\rightarrow E\) be a γ-strongly η-accretive mapping, let \(M:E\rightarrow 2^{E}\) be a P-η-accretive mapping, and let \(\lambda >0\) be an arbitrary real constant. Then the resolvent operator \(R^{P,\eta}_{M,\lambda}:E\rightarrow E\) is Lipschitz continuous with constant \(\frac{\tau ^{q-1}}{\gamma}\), i.e.,

$$\begin{aligned} \bigl\Vert R^{P,\eta}_{M,\lambda}(u)-R^{P,\eta}_{M,\lambda}(v) \bigr\Vert \leq \frac{\tau ^{q-1}}{\gamma} \Vert u-v \Vert \quad \forall u,v\in E. \end{aligned}$$
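
To make the bound concrete, here is a one-dimensional sketch (ours, not from [26]): with \(E=\mathbb{R}\), \(\eta (x,y)=x-y\) (so τ = 1 and q = 2), \(P(x)=\gamma x\), and \(M(x)=x^{3}\), the mapping M is P-η-accretive, the resolvent is the unique root of \(\gamma x+\lambda x^{3}=u\), and Lemma 2.17 predicts the Lipschitz constant \(1/\gamma \); the bisection solver and the sampled points are our own choices.

```python
import numpy as np

# One-dimensional sketch (ours), assuming E = R (a Hilbert space, so q = 2),
# eta(x, y) = x - y (hence tau = 1), P(x) = gamma*x (gamma-strongly eta-accretive), and
# M(x) = x**3 (eta-accretive with P + lam*M onto R, hence P-eta-accretive).  The resolvent
# R(u) = (P + lam*M)^{-1}(u) is the unique root of gamma*x + lam*x**3 = u, and Lemma 2.17
# predicts |R(u) - R(v)| <= (tau^{q-1}/gamma)|u - v| = |u - v|/gamma.
gamma, lam = 2.0, 0.5

def resolvent(u, tol=1e-12):
    lo, hi = -1 - abs(u) / gamma, 1 + abs(u) / gamma   # bracket containing the unique root
    while hi - lo > tol:                               # bisection on the increasing map
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if gamma * mid + lam * mid ** 3 < u else (lo, mid)
    return 0.5 * (lo + hi)

rng = np.random.default_rng(3)
u, v = 10 * rng.standard_normal(1000), 10 * rng.standard_normal(1000)
R = np.vectorize(resolvent)
assert np.all(np.abs(R(u) - R(v)) <= np.abs(u - v) / gamma + 1e-8)
print("the resolvent is 1/gamma-Lipschitz on all sampled pairs")
```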

3 System of variational inclusions: existence and uniqueness of solution and iterative algorithm

Let for \(i\in \{1,2\}\), \(E_{i}\) be a real \(q_{i}\)-uniformly smooth Banach space with norm \(\Vert \cdot \Vert _{i}\) and \(q_{i}>1\). Suppose that for \(i\in \{1,2\}\) and \(j\in \{1,2\}\backslash \{i\}\), \(P_{i},f_{i},g_{i},h_{i}:E_{i}\rightarrow E_{i}\), \(\eta _{i}:E_{i}\times E_{i}\rightarrow E_{i}\), \(F_{i}:E_{1}\times E_{2}\rightarrow E_{i}\), and \(T_{i}:E_{j}\times E_{i}\rightarrow E_{i}\) are single-valued nonlinear mappings, and \(M_{i}:E_{i}\times E_{j}\rightarrow 2^{E_{i}}\) are any multivalued nonlinear mappings such that for all \(x_{j}\in E_{j}\), \(M_{i}(\cdot ,x_{j}):E_{i}\rightarrow 2^{E_{i}}\) is a \(P_{i}\)-\(\eta _{i}\)-accretive mapping with \(h_{i}(E_{i})\cap \operatorname{dom}M_{i}(\cdot ,x_{j})\neq \emptyset \). We consider the problem of finding \((x,y)\in E_{1}\times E_{2}\) such that

$$\begin{aligned}& \textstyle\begin{cases} 0\in F_{1}(x,y-f_{2}(y))+T_{1}(y,x-g_{1}(x))+M_{1}(h_{1}(x),y), \\ 0\in F_{2}(x-f_{1}(x),y)+T_{2}(x,y-g_{2}(y))+M_{2}(h_{2}(y),x), \end{cases}\displaystyle \end{aligned}$$
(3.1)

which we call the system of generalized nonlinear variational-like inclusions \((\operatorname{SGNVLI})\) with P-η-accretive mappings.

If for \(i=1,2\), \(h_{i}\equiv I_{i}\), the identity mapping on \(E_{i}\), \(f_{i}=g_{i}=T_{i}\equiv 0\), \(F_{1}=F\), \(F_{2}=G\), and \(M_{1}=M:E_{1}\rightarrow 2^{E_{1}}\) and \(M_{2}=N:E_{2}\rightarrow 2^{E_{2}}\) are two univariate multivalued nonlinear mappings, then SGNVLI (3.1) reduces to the problem of finding \((x,y)\in E_{1}\times E_{2}\) such that

$$\begin{aligned}& \textstyle\begin{cases} 0\in F(x,y)+M(x), \\ 0\in G(x,y)+N(y), \end{cases}\displaystyle \end{aligned}$$

which was introduced and studied by Peng and Zhu [26].

Remark that for appropriate and suitable choices of the mappings \(P_{i}\), \(\eta _{i}\), \(f_{i}\), \(g_{i}\), \(h_{i}\), \(F_{i}\), \(T_{i}\), \(M_{i}\) and the underlying spaces \(E_{i}\) (\(i=1,2\)), SGNVLI (3.1) reduces to various classes of variational inclusions and variational inequalities; see, for example, [15, 18, 23, 37, 58, 59] and the references therein.

The following statement, which shows that SGNVLI (3.1) is equivalent to a fixed point problem, provides a characterization of the solution of SGNVLI (3.1).

Lemma 3.1

Let \(E_{i}\), \(P_{i}\), \(\eta _{i}\), \(F_{i}\), \(T_{i}\), \(M_{i}\), \(f_{i}\), \(g_{i}\), \(h_{i}\) (\(i=1,2\)) be as in SGNVLI (3.1) and such that for each \(i\in \{1,2\}\), \(P_{i}\) is a strictly \(\eta _{i}\)-accretive mapping with \(\operatorname{dom}(P_{i})\cap h_{i}(E_{i})\neq \emptyset \). Then \((x,y)\in E_{1}\times E_{2}\) is a solution of SGNVLI (3.1) if and only if

$$\begin{aligned}& \textstyle\begin{cases} h_{1}(x)=R^{P_{1},\eta _{1}}_{M_{1}(\cdot ,y),\lambda}[P_{1}(h_{1}(x))- \lambda (F_{1}(x,y-f_{2}(y))+T_{1}(y,x-g_{1}(x)))], \\ h_{2}(y)=R^{P_{2},\eta _{2}}_{M_{2}(\cdot ,x),\rho}[P_{2}(h_{2}(y))-\rho (F_{2}(x-f_{1}(x),y)+T_{2}(x,y-g_{2}(y)))], \end{cases}\displaystyle \end{aligned}$$

where \(R^{P_{1},\eta _{1}}_{M_{1}(\cdot ,y),\lambda}=(P_{1}+\lambda M_{1}(\cdot ,y))^{-1}\) and \(R^{P_{2},\eta _{2}}_{M_{2}(\cdot ,x),\rho}=(P_{2}+\rho M_{2}(\cdot ,x))^{-1}\), and \(\lambda ,\rho >0\) are two constants.

Proof

The conclusions follow directly from Definition 2.15 and some simple arguments. □
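
To illustrate how Lemma 3.1 is used, the following toy Python sketch (ours) instantiates SGNVLI (3.1) on \(E_{1}=E_{2}=\mathbb{R}\) with \(h_{i}=I\), \(f_{i}=g_{i}=T_{i}\equiv 0\), \(P_{i}(t)=2t\), \(\eta _{i}(s,t)=s-t\), \(M_{i}\equiv M\) with \(M(t)=t^{3}\), \(\lambda =\rho =1\), and mappings \(F_{1}\), \(F_{2}\) chosen by us so that the resulting fixed-point map is a contraction; the plain Picard iteration below is only a stand-in for the iterative algorithm with mixed errors developed later in the paper.

```python
import numpy as np

# Toy instance (ours) of the fixed-point characterization in Lemma 3.1.  All choices below
# are assumptions for illustration: E_1 = E_2 = R, P_i(t) = 2t, eta_i(s, t) = s - t,
# h_i = I, f_i = g_i = T_i = 0, lambda = rho = 1, M_i(., .) = M with M(t) = t**3, and
# F_1(x, y) = 3x + 0.5*sin(y) - 1,  F_2(x, y) = 3y + 0.5*cos(x) - 2.
# Lemma 3.1 then says: (x, y) solves the system iff x = R(2x - F_1(x, y)) and
# y = R(2y - F_2(x, y)), where R(u) = (P + M)^{-1}(u) is the unique root of 2t + t**3 = u.
F1 = lambda x, y: 3 * x + 0.5 * np.sin(y) - 1
F2 = lambda x, y: 3 * y + 0.5 * np.cos(x) - 2

def R(u, tol=1e-13):                       # resolvent: bisection for the root of 2t + t**3 = u
    lo, hi = -1 - abs(u) / 2, 1 + abs(u) / 2
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if 2 * mid + mid ** 3 < u else (lo, mid)
    return 0.5 * (lo + hi)

x, y = 5.0, -5.0                           # arbitrary starting point
for _ in range(100):                       # plain Picard iteration on the fixed-point equations
    x, y = R(2 * x - F1(x, y)), R(2 * y - F2(x, y))
print("approximate solution:", (x, y))
print("residuals:", F1(x, y) + x ** 3, F2(x, y) + y ** 3)   # both close to 0
```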

Before dealing with the main result of this section, let us present some definitions.

Definition 3.2

Let E be a real q-uniformly smooth Banach space. A mapping \(T:E\rightarrow E\) is said to be \((\xi ,\varsigma )\)-relaxed cocoercive if there exist two constants \(\xi ,\varsigma >0\) such that

$$\begin{aligned} \bigl\langle T(x)-T(y),J_{q}(x-y)\bigr\rangle \geq -\xi \bigl\Vert T(x)-T(y) \bigr\Vert ^{q}+ \varsigma \Vert x-y \Vert ^{q} \quad \forall x,y\in E. \end{aligned}$$

Definition 3.3

Let E be a real q-uniformly smooth Banach space, let \(F:E\times E\rightarrow E\) and \(T:E\rightarrow E\) be two nonlinear mappings, and let \((a,b)\in E\times E\). Then the mapping

  1. (i)

    \(F(a,\cdot )\) is said to be k-strongly accretive with respect to T (or T-strongly accretive with constant k) if there exists a constant \(k>0\) such that

    $$\begin{aligned} \bigl\langle F(a,x)-F(a,y),J_{q}\bigl(T(x)-T(y)\bigr)\bigr\rangle \geq& k \Vert x-y \Vert ^{q}\quad \forall x,y\in E; \end{aligned}$$
  2. (ii)

    \(F(\cdot ,b)\) is said to be r-strongly accretive with respect to T (or T-strongly accretive with constant r) if there exists a constant \(r>0\) such that

    $$\begin{aligned} \bigl\langle F(x,b)-F(y,b),J_{q}\bigl(T(x)-T(y)\bigr)\bigr\rangle \geq& r \Vert x-y \Vert ^{q}\quad \forall x,y\in E; \end{aligned}$$
  3. (iii)

    \(F(a,\cdot )\) is said to be γ-Lipschitz continuous if there exists a constant \(\gamma >0\) such that

    $$\begin{aligned} \bigl\Vert F(a,x)-F(a,y) \bigr\Vert \leq& \gamma \Vert x-y \Vert \quad \forall x,y \in E; \end{aligned}$$
  4. (iv)

    \(F(\cdot ,b)\) is said to be μ-Lipschitz continuous if there exists a constant \(\mu >0\) such that

    $$\begin{aligned} \bigl\Vert F(x,b)-F(y,b) \bigr\Vert \leq& \mu \Vert x-y \Vert \quad \forall x,y \in E; \end{aligned}$$
  5. (v)

    \(F(\cdot ,\cdot )\) is said to be \((\varrho ,\xi )\)-mixed Lipschitz continuous in the first and second arguments if there exist two constants \(\varrho ,\xi >0\) such that

    $$\begin{aligned} \bigl\Vert F(x,y)-F\bigl(x',y'\bigr) \bigr\Vert \leq& \varrho \bigl\Vert x-x' \bigr\Vert +\xi \bigl\Vert y-y' \bigr\Vert \quad \forall x,x',y,y'\in E. \end{aligned}$$

Theorem 3.4

Let for each \(i\in \{1,2\}\), \(E_{i}\) be a real \(q_{i}\)-uniformly smooth Banach space with norm \(\Vert \cdot \Vert _{i}\) and \(q_{i}>1\). For each \(i\in \{1,2\}\), suppose that the mapping \(\eta _{i}:E_{i}\times E_{i}\rightarrow E_{i}\) is \(\tau _{i}\)-Lipschitz continuous, the mapping \(P_{i}:E_{i}\rightarrow E_{i}\) is \(\gamma _{i}\)-strongly \(\eta _{i}\)-accretive and \(\delta _{i}\)-Lipschitz continuous, the mapping \(f_{i}:E_{i}\rightarrow E_{i}\) is \((\sigma _{i},\varsigma _{i})\)-relaxed cocoercive and \(\pi _{i}\)-Lipschitz continuous, the mapping \(g_{i}:E_{i}\rightarrow E_{i}\) is \((\zeta _{i},\nu _{i})\)-relaxed cocoercive and \(\varrho _{i}\)-Lipschitz continuous, and the mapping \(h_{i}:E_{i}\rightarrow E_{i}\) is \((\varpi _{i},\theta _{i})\)-relaxed cocoercive and \(\iota _{i}\)-Lipschitz continuous such that \(\operatorname{dom}(P_{i})\cap h_{i}(E_{i})\neq \emptyset \). Let for \(i=1,2\), \(F_{i}:E_{1}\times E_{2}\rightarrow E_{i}\) be two nonlinear mappings such that for any given point \((a,b)\in E_{1}\times E_{2}\), \(F_{1}(\cdot ,b)\) is \(r_{1}\)-strongly accretive with respect to \(P_{1}\circ h_{1}\) and \(s_{1}\)-Lipschitz continuous, \(F_{1}(a,\cdot )\) is \(\xi _{1}\)-Lipschitz continuous, \(F_{2}(a,\cdot )\) is \(r_{2}\)-strongly accretive with respect to \(P_{2}\circ h_{2}\) and \(s_{2}\)-Lipschitz continuous, and \(F_{2}(\cdot ,b)\) is \(\xi _{2}\)-Lipschitz continuous. For \(i\in \{1,2\}\) and \(j\in \{1,2\}\backslash \{i\}\), let \(T_{i}:E_{j}\times E_{i}\rightarrow E_{i}\) be \((\varepsilon _{i},\mu _{i})\)-mixed Lipschitz continuous in the first and second arguments, and let \(M_{i}:E_{i}\times E_{j}\rightarrow 2^{E_{i}}\) be multivalued nonlinear mappings such that for all \(x_{j}\in E_{j}\), \(M_{i}(\cdot ,x_{j}):E_{i}\rightarrow 2^{E_{i}}\) is a \(P_{i}\)-\(\eta _{i}\)-accretive mapping with \(h_{i}(E_{i})\cap \operatorname{dom}M_{i}(\cdot ,x_{j})\neq \emptyset \). Suppose further that there exist constants \(o_{i}>0\) (\(i=1,2\)) such that

$$\begin{aligned} &\bigl\Vert R^{P_{1},\eta _{1}}_{M_{1}(\cdot ,u),\lambda}(w)-R^{P_{1},\eta _{1}}_{M_{1}(\cdot ,v), \lambda}(w) \bigr\Vert \leq o_{1} \Vert u-v \Vert _{1}\quad \forall u,v,w\in E_{1}, \end{aligned}$$
(3.2)
$$\begin{aligned} &\bigl\Vert R^{P_{2},\eta _{2}}_{M_{2}(\cdot ,u),\rho}(w)-R^{P_{2},\eta _{2}}_{M_{2}(\cdot ,v), \rho}(w) \bigr\Vert \leq o_{2} \Vert u-v \Vert _{2} \quad \forall u,v,w\in E_{2}. \end{aligned}$$
(3.3)

Suppose that there exist two constants \(\lambda ,\rho >0\) such that

$$\begin{aligned}& \begin{aligned} &o_{2}+ \sqrt[q_{1}]{1-q_{1} \theta _{1}+(c_{q_{1}}+q_{1}\varpi _{1}) \iota _{1}^{q_{1}}} \\ &\quad {}+\frac{\tau _{1}^{q_{1}-1}}{\gamma _{1}} \bigl( \sqrt[q_{1}]{\delta _{1}^{q_{1}}\iota _{1}^{q_{1}}-q_{1}\lambda r_{1}+\lambda ^{q_{1}}c_{q_{1}}s_{1}^{q_{1}}} \\ &\quad{} +\lambda \mu _{1} \sqrt[q_{1}]{1-q_{1}\nu _{1}+(c_{q_{1}}+q_{1}\zeta _{1})\varrho _{1}^{q_{1}}} \bigr) \\ &\quad {}+\frac{\rho \tau _{2}^{q_{2}-1}}{\gamma _{2}}\bigl(\xi _{2} \sqrt[q_{1}]{1-q_{1}\varsigma _{1} +(c_{q_{1}}+q_{1}\sigma _{1})\pi _{1}^{q_{1}}}+\varepsilon _{2}\bigr)< 1 \end{aligned} \end{aligned}$$
(3.4)

and

$$\begin{aligned}& \begin{aligned} &o_{1}+ \sqrt[q_{2}]{1-q_{2} \theta _{2}+(c_{q_{2}}+q_{2}\varpi _{2})\iota _{2}^{q_{2}}} \\ &\quad {} +\frac{\tau _{2}^{q_{2}-1}}{\gamma _{2}} \bigl( \sqrt[q_{2}]{ \delta _{2}^{q_{2}}\iota _{2}^{q_{2}}-q_{2} \rho r_{2}+\rho ^{q_{2}}c_{q_{2}}s_{2}^{q_{2}}} \\ &\quad {}+\rho \mu _{2} \sqrt[q_{2}]{1-q_{2}\nu _{2}+(c_{q_{2}}+q_{2}\zeta _{2})\varrho _{2}^{q_{2}}} \bigr) \\ &\quad {}+\frac{\lambda \tau _{1}^{q_{1}-1}}{\gamma _{1}}\bigl(\xi _{1} \sqrt[q_{2}]{1-q_{2}\varsigma _{2} +(c_{q_{2}}+q_{2}\sigma _{2})\pi _{2}^{q_{2}}}+ \varepsilon _{1}\bigr)< 1, \end{aligned} \end{aligned}$$
(3.5)

where \(c_{q_{1}}\) and \(c_{q_{2}}\) are constants guaranteed by Lemma 2.1, and for the case where \(q_{1}\) and \(q_{2}\) are even natural numbers, in addition to (3.4) and (3.5), the following conditions hold:

$$\begin{aligned}& \begin{aligned} \textstyle\begin{cases} q_{i}\theta _{i}< 1+(c_{q_{i}}+q_{i}\varpi _{i})\iota _{i}^{q_{i}} \quad (i=1,2), \\ q_{i}\nu _{i}< 1+(c_{q_{i}}+q_{i}\zeta _{i})\varrho _{i}^{q_{i}} \quad (i=1,2), \\ q_{i}\varsigma _{i}< 1+(c_{q_{i}}+q_{i}\sigma _{i})\pi _{i}^{q_{i}} \quad (i=1,2), \\ q_{1}\lambda r_{1}< \delta _{1}^{q_{1}}\iota _{1}^{q_{1}}+\lambda ^{q_{1}}c_{q_{1}}s_{1}^{q_{1}}, \\ q_{2}\rho r_{2}< \delta _{2}^{q_{2}}\iota _{2}^{q_{2}}+\rho ^{q_{2}}c_{q_{2}}s_{2}^{q_{2}}. \end{cases}\displaystyle \end{aligned} \end{aligned}$$
(3.6)

Then SGNVLI (3.1) admits a unique solution.

Proof

For any given \(\lambda ,\rho >0\), define the mappings \(G_{\lambda}:E_{1}\times E_{2}\rightarrow E_{1}\) and \(Q_{\rho}:E_{1}\times E_{2}\rightarrow E_{2}\) by

$$\begin{aligned} G_{\lambda}(x,y) =&x-h_{1}(x) \\ &{}+R^{P_{1},\eta _{1}}_{M_{1}(\cdot ,y),\lambda} \bigl[P_{1}\bigl(h_{1}(x)\bigr)- \lambda \bigl(F_{1}\bigl(x,y-f_{2}(y)\bigr)+T_{1} \bigl(y,x-g_{1}(x)\bigr)\bigr)\bigr] \end{aligned}$$
(3.7)

and

$$\begin{aligned} Q_{\rho}(x,y) =&y-h_{2}(y) \\ &{}+R^{P_{2},\eta _{2}}_{M_{2}(\cdot ,x),\rho} \bigl[P_{2}\bigl(h_{2}(y)\bigr)- \rho \bigl(F_{2} \bigl(x-f_{1}(x),y\bigr)+T_{2}\bigl(x,y-g_{2}(y) \bigr)\bigr)\bigr] \end{aligned}$$
(3.8)

for \((x,y)\in E_{1}\times E_{2}\).

Using Lemma 2.17 and (3.2), for all \((x,y),(x',y')\in E_{1}\times E_{2}\), we have

$$\begin{aligned}& \bigl\Vert G_{\lambda}(x,y)-G_{\lambda} \bigl(x',y'\bigr) \bigr\Vert _{1} \\& \quad = \bigl\Vert x-h_{1}(x) +R^{P_{1},\eta _{1}}_{M_{1}(\cdot ,y),\lambda} \bigl[P_{1}\bigl(h_{1}(x)\bigr)- \lambda \bigl(F_{1}\bigl(x,y-f_{2}(y)\bigr)+T_{1} \bigl(y,x-g_{1}(x)\bigr)\bigr)\bigr] \\& \qquad {} -\bigl(x'-h_{1}\bigl(x' \bigr)+R^{P_{1},\eta _{1}}_{M_{1}(\cdot ,y'),\lambda}\bigl[P_{1}\bigl(h_{1} \bigl(x'\bigr)\bigr) -\lambda \bigl(F_{1} \bigl(x',y'-f_{2}\bigl(y'\bigr) \bigr) \\& \qquad {}+T_{1}\bigl(y',x'-g_{1} \bigl(x'\bigr)\bigr)\bigr)\bigr] \bigr) \bigr\Vert _{1} \\& \quad \leq \bigl\Vert x-x'-\bigl(h_{1}(x)-h_{1} \bigl(x'\bigr)\bigr) \bigr\Vert _{1} \\& \qquad {} + \bigl\Vert R^{P_{1},\eta _{1}}_{M_{1}(\cdot ,y),\lambda}\bigl[P_{1} \bigl(h_{1}(x)\bigr)- \lambda \bigl(F_{1} \bigl(x,y-f_{2}(y)\bigr)+T_{1}\bigl(y,x-g_{1}(x) \bigr)\bigr)\bigr] \\& \qquad {} -R^{P_{1},\eta _{1}}_{M_{1}(\cdot ,y'),\lambda}\bigl[P_{1}\bigl(h_{1}(x) \bigr)- \lambda \bigl(F_{1}\bigl(x,y-f_{2}(y) \bigr)+T_{1}\bigl(y,x-g_{1}(x)\bigr)\bigr)\bigr] \bigr\Vert _{1} \\& \begin{aligned} &\qquad {} + \bigl\Vert R^{P_{1},\eta _{1}}_{M_{1}(\cdot ,y'),\lambda}\bigl[P_{1} \bigl(h_{1}(x)\bigr)- \lambda \bigl(F_{1} \bigl(x,y-f_{2}(y)\bigr)+T_{1}\bigl(y,x-g_{1}(x) \bigr)\bigr)\bigr] \\ &\qquad {} -R^{P_{1},\eta _{1}}_{M_{1}(\cdot ,y'),\lambda}\bigl[P_{1}\bigl(h_{1} \bigl(x'\bigr)\bigr)- \lambda \bigl(F_{1} \bigl(x',y'-f_{2}\bigl(y'\bigr) \bigr)+T_{1}\bigl(y',x'-g_{1} \bigl(x'\bigr)\bigr)\bigr)\bigr] \bigr\Vert _{1} \end{aligned}\\& \quad \leq \bigl\Vert x-x'-\bigl(h_{1}(x)-h_{1} \bigl(x'\bigr)\bigr) \bigr\Vert _{1}+o_{1} \bigl\Vert y-y' \bigr\Vert _{2} \\& \qquad {} +\frac{\tau _{1}^{q_{1}-1}}{\gamma _{1}} \bigl( \bigl\Vert P_{1}\bigl(h_{1}(x) \bigr)-P_{1}\bigl(h_{1}\bigl(x'\bigr)\bigr)- \lambda \bigl(F_{1}\bigl(x,y-f_{2}(y)\bigr)-F_{1} \bigl(x',y'-f_{2}\bigl(y'\bigr) \bigr)\bigr) \bigr\Vert _{1} \\& \qquad {} + \lambda \bigl\Vert T_{1}\bigl(y,x-g_{1}(x) \bigr)-T_{1}\bigl(y',x'-g_{1} \bigl(x'\bigr)\bigr) \bigr\Vert _{1} \bigr) \\& \quad \leq \bigl\Vert x-x'-\bigl(h_{1}(x)-h_{1} \bigl(x'\bigr)\bigr) \bigr\Vert _{1}+o_{1} \bigl\Vert y-y' \bigr\Vert _{2} \\& \qquad {} +\frac{\tau _{1}^{q_{1}-1}}{\gamma _{1}} \bigl( \bigl\Vert P_{1}\bigl(h_{1}(x) \bigr)-P_{1}\bigl(h_{1}\bigl(x'\bigr)\bigr)- \lambda \bigl(F_{1}\bigl(x,y-f_{2}(y)\bigr)-F_{1} \bigl(x',y-f_{2}(y)\bigr)\bigr) \bigr\Vert _{1} \\& \qquad {} + \lambda \bigl\Vert F_{1}\bigl(x',y-f_{2}(y) \bigr)-F_{1}\bigl(x',y'-f_{2} \bigl(y'\bigr)\bigr) \bigr\Vert _{1} \\& \qquad {}+\lambda \bigl\Vert T_{1}\bigl(y,x-g_{1}(x)\bigr)-T_{1} \bigl(y',x'-g_{1}\bigl(x'\bigr) \bigr) \bigr\Vert _{1} \bigr). \end{aligned}$$
(3.9)

By Lemma 2.1 there exists a constant \(c_{q_{1}}>0\) such that

$$\begin{aligned} \bigl\Vert x-x'-\bigl(h_{1}(x)-h_{1} \bigl(x'\bigr)\bigr) \bigr\Vert _{1}^{q_{1}} \leq& \bigl\Vert x-x' \bigr\Vert _{1}^{q_{1}}-q_{1} \bigl\langle h_{1}(x)-h_{1}\bigl(x' \bigr),J_{q_{1}}\bigl(x-x'\bigr) \bigr\rangle \\ &{} +c_{q_{1}} \bigl\Vert h_{1}(x)-h_{1} \bigl(x'\bigr) \bigr\Vert _{1}^{q_{1}}. \end{aligned}$$

From the \((\varpi _{1},\theta _{1})\)-relaxed cocoercivity and \(\iota _{1}\)-Lipschitz continuity of \(h_{1}\) it follows that

$$\begin{aligned}& \bigl\Vert x-x'-\bigl(h_{1}(x)-h_{1} \bigl(x'\bigr)\bigr) \bigr\Vert _{1}^{q_{1}} \\& \quad \leq \bigl\Vert x-x' \bigr\Vert _{1}^{q_{1}}-q_{1} \theta _{1} \bigl\Vert x-x' \bigr\Vert _{1}^{q_{1}}+(c_{q_{1}}+q_{1} \varpi _{1})\iota _{1}^{q_{1}} \bigl\Vert x-x' \bigr\Vert _{1}^{q_{1}} \\& \quad =\bigl(1-q_{1}\theta _{1}+(c_{q_{1}}+q_{1} \varpi _{1})\iota _{1}^{q_{1}}\bigr) \bigl\Vert x-x' \bigr\Vert _{1}^{q_{1}}, \end{aligned}$$

which implies that

$$\begin{aligned} \bigl\Vert x-x'-\bigl(h_{1}(x)-h_{1} \bigl(x'\bigr)\bigr) \bigr\Vert _{1}\leq \sqrt[q_{1}]{1-q_{1}\theta _{1}+(c_{q_{1}}+q_{1} \varpi _{1})\iota _{1}^{q_{1}}} \bigl\Vert x-x' \bigr\Vert _{1}. \end{aligned}$$
(3.10)

Using Lemma 2.1 and taking into account that the mappings \(P_{1}\) and \(h_{1}\) are \(\delta _{1}\)-Lipschitz continuous and \(\iota _{1}\)-Lipschitz continuous, respectively, the mapping \(F_{1}(\cdot ,b)\) is \(r_{1}\)-strongly accretive with respect to \(P_{1}\circ h_{1}\) and \(s_{1}\)-Lipschitz continuous, and the mapping \(F_{1}(a,\cdot )\) is \(\xi _{1}\)-Lipschitz continuous for any \((a,b)\in E_{1}\times E_{2}\), we obtain

$$\begin{aligned}& \bigl\Vert P_{1} \bigl(h_{1}(x)\bigr)-P_{1}\bigl(h_{1} \bigl(x'\bigr)\bigr)-\lambda \bigl(F_{1} \bigl(x,y-f_{2}(y)\bigr)-F_{1}\bigl(x',y-f_{2}(y) \bigr)\bigr) \bigr\Vert _{1}^{q_{1}} \\& \quad \leq \bigl\Vert P_{1}\bigl(h_{1}(x) \bigr)-P_{1}\bigl(h_{1}\bigl(x'\bigr)\bigr) \bigr\Vert _{1}^{q_{1}}-q_{1} \lambda \bigl\langle F_{1}\bigl(x,y-f_{2}(y)\bigr)-F_{1} \bigl(x',y-f_{2}(y)\bigr), \\& J_{q_{1}}\bigl(P_{1}\bigl(h_{1}(x) \bigr)-P_{1}\bigl(h_{1}\bigl(x'\bigr)\bigr) \bigr)\bigr\rangle +\lambda ^{q_{1}}c_{q_{1}} \bigl\Vert F_{1}\bigl(x,y-f_{2}(y)\bigr)-F_{1} \bigl(x',y-f_{2}(y)\bigr) \bigr\Vert _{1}^{q_{1}} \\& \quad \leq \delta _{1}^{q_{1}} \bigl\Vert h_{1}(x)-h_{1} \bigl(x'\bigr) \bigr\Vert _{1}^{q_{1}}-q_{1} \lambda r_{1} \bigl\Vert x-x' \bigr\Vert _{1}^{q_{1}}+\lambda ^{q_{1}}c_{q_{1}}s_{1}^{q_{1}} \bigl\Vert x-x' \bigr\Vert _{1}^{q_{1}} \\& \quad \leq \bigl(\delta _{1}^{q_{1}}\iota _{1}^{q_{1}}-q_{1} \lambda r_{1}+ \lambda ^{q_{1}}c_{q_{1}}s_{1}^{q_{1}} \bigr) \bigl\Vert x-x' \bigr\Vert _{1}^{q_{1}} \end{aligned}$$
(3.11)

and

$$\begin{aligned} \bigl\Vert F_{1}\bigl(x',y-f_{2}(y) \bigr)-F_{1}\bigl(x',y'-f_{2} \bigl(y'\bigr)\bigr) \bigr\Vert _{1}\leq \xi _{1} \bigl\Vert y-y'-\bigl(f_{2}(y)-f_{2} \bigl(y'\bigr)\bigr) \bigr\Vert _{2}. \end{aligned}$$
(3.12)

The previous inequality (3.11) implies that

$$\begin{aligned}& \begin{aligned} & \bigl\Vert P_{1} \bigl(h_{1}(x)\bigr)-P_{1}\bigl(h_{1} \bigl(x'\bigr)\bigr)-\lambda \bigl(F_{1} \bigl(x,y-f_{2}(y)\bigr)-F_{1}\bigl(x',y-f_{2}(y) \bigr)\bigr) \bigr\Vert _{1} \\ &\quad \leq \sqrt[q_{1}]{\delta _{1}^{q_{1}}\iota _{1}^{q_{1}}-q_{1}\lambda r_{1}+\lambda ^{q_{1}}c_{q_{1}}s_{1}^{q_{1}}} \bigl\Vert x-x' \bigr\Vert _{1}. \end{aligned} \end{aligned}$$
(3.13)

Since \(f_{2}\) is \((\sigma _{2},\varsigma _{2})\)-relaxed cocoercive and \(\pi _{2}\)-Lipschitz continuous, in a similar way to the proof of (3.10), using Lemma 2.1, we can deduce that

$$\begin{aligned} \bigl\Vert y-y'-\bigl(f_{2}(y)-f_{2} \bigl(y'\bigr)\bigr) \bigr\Vert _{2} \leq& \sqrt[q_{2}]{1-q_{2}\varsigma _{2}+(c_{q_{2}}+q_{2} \sigma _{2})\pi _{2}^{q_{2}}} \bigl\Vert y-y' \bigr\Vert _{2}. \end{aligned}$$
(3.14)

Since \(T_{1}\) is \((\varepsilon _{1},\mu _{1})\)-mixed Lipschitz continuous in the first and second arguments, we obtain

$$\begin{aligned}& \begin{aligned} & \bigl\Vert T_{1} \bigl(y,x-g_{1}(x)\bigr)-T_{1}\bigl(y',x'-g_{1} \bigl(x'\bigr)\bigr) \bigr\Vert _{1} \\ &\quad \leq \varepsilon _{1} \bigl\Vert y-y' \bigr\Vert _{2}+\mu _{1} \bigl\Vert x-x'- \bigl(g_{1}(x)-g_{1}\bigl(x'\bigr)\bigr) \bigr\Vert _{1}. \end{aligned} \end{aligned}$$
(3.15)

By an argument analogous to that used for (3.10), from the \((\zeta _{1},\nu _{1})\)-relaxed cocoercivity and \(\varrho _{1}\)-Lipschitz continuity of \(g_{1}\) it follows that

$$\begin{aligned} \bigl\Vert x-x'-\bigl(g_{1}(x)-g_{1} \bigl(x'\bigr)\bigr) \bigr\Vert _{1} \leq& \sqrt[q_{1}]{1-q_{1}\nu _{1}+(c_{q_{1}}+q_{1} \zeta _{1})\varrho _{1}^{q_{1}}} \bigl\Vert x-x' \bigr\Vert _{1}. \end{aligned}$$
(3.16)

Combining (3.9)–(3.16), we obtain

$$\begin{aligned}& \begin{aligned} &\bigl\Vert G_{\lambda}(x,y)-G_{\lambda} \bigl(x',y'\bigr) \bigr\Vert _{1} \\ &\quad \leq \sqrt[q_{1}]{1-q_{1}\theta _{1} +(c_{q_{1}}+q_{1}\varpi _{1})\iota _{1}^{q_{1}}} \bigl\Vert x-x' \bigr\Vert _{1}+o_{1} \bigl\Vert y-y' \bigr\Vert _{2} \\ & \qquad {}+\frac{\tau _{1}^{q_{1}-1}}{\gamma _{1}} \bigl( \sqrt[q_{1}]{\delta _{1}^{q_{1}} \iota _{1}^{q_{1}}-q_{1}\lambda r_{1}+ \lambda ^{q_{1}}c_{q_{1}}s_{1}^{q_{1}}} \bigl\Vert x-x' \bigr\Vert _{1} \\ &\qquad {} +\lambda \xi _{1} \sqrt[q_{2}]{1-q_{2}\varsigma _{2} +(c_{q_{2}}+q_{2}\sigma _{2})\pi _{2}^{q_{2}}} \bigl\Vert y-y' \bigr\Vert _{2} \\ &\qquad {} +\lambda \varepsilon _{1} \bigl\Vert y-y' \bigr\Vert _{2}+\lambda \mu _{1} \sqrt[q_{1}]{1-q_{1} \nu _{1}+(c_{q_{1}}+q_{1}\zeta _{1}) \varrho _{1}^{q_{1}}} \bigl\Vert x-x' \bigr\Vert _{1} \bigr) \\ & \quad =\vartheta _{1} \bigl\Vert x-x' \bigr\Vert _{1}+\phi _{1} \bigl\Vert y-y' \bigr\Vert _{2}, \end{aligned} \end{aligned}$$
(3.17)

where

$$\begin{aligned} \vartheta _{1} =& \sqrt[q_{1}]{1-q_{1} \theta _{1}+(c_{q_{1}}+q_{1}\varpi _{1}) \iota _{1}^{q_{1}}} \\ &{}+ \frac{\tau _{1}^{q_{1}-1}}{\gamma _{1}} \bigl( \sqrt[q_{1}]{\delta _{1}^{q_{1}}\iota _{1}^{q_{1}}-q_{1}\lambda r_{1}+\lambda ^{q_{1}}c_{q_{1}}s_{1}^{q_{1}}} \\ &{} +\lambda \mu _{1} \sqrt[q_{1}]{1-q_{1}\nu _{1}+(c_{q_{1}}+q_{1}\zeta _{1}) \varrho _{1}^{q_{1}}} \bigr) \end{aligned}$$

and

$$\begin{aligned} \phi _{1} =&o_{1}+\frac{\lambda \tau _{1}^{q_{1}-1}}{\gamma _{1}}\bigl(\xi _{1} \sqrt[q_{2}]{1-q_{2}\varsigma _{2} +(c_{q_{2}}+q_{2}\sigma _{2})\pi _{2}^{q_{2}}}+ \varepsilon _{1}\bigr). \end{aligned}$$

By Lemma 2.1, Lemma 2.17, and (3.3), following an argument similar to that in the proof of (3.17) with suitable changes, we can prove that

$$\begin{aligned} \bigl\Vert Q_{\rho}(x,y)-Q_{\rho} \bigl(x',y'\bigr) \bigr\Vert _{2} \leq& \vartheta _{2} \bigl\Vert x-x' \bigr\Vert _{1}+\phi _{2} \bigl\Vert y-y' \bigr\Vert _{2}, \end{aligned}$$
(3.18)

where

$$\begin{aligned} \phi _{2} =& \sqrt[q_{2}]{1-q_{2} \theta _{2}+(c_{q_{2}}+q_{2}\varpi _{2})\iota _{2}^{q_{2}}} \\ &{}+ \frac{\tau _{2}^{q_{2}-1}}{\gamma _{2}} \bigl( \sqrt[q_{2}]{\delta _{2}^{q_{2}}\iota _{2}^{q_{2}}-q_{2} \rho r_{2}+\rho ^{q_{2}}c_{q_{2}}s_{2}^{q_{2}}} \\ &{} +\rho \mu _{2} \sqrt[q_{2}]{1-q_{2}\nu _{2}+(c_{q_{2}}+q_{2}\zeta _{2})\varrho _{2}^{q_{2}}} \bigr) \end{aligned}$$

and

$$\begin{aligned} \vartheta _{2} =&o_{2}+\frac{\rho \tau _{2}^{q_{2}-1}}{\gamma _{2}}\bigl( \xi _{2} \sqrt[q_{1}]{1-q_{1}\varsigma _{1} +(c_{q_{1}}+q_{1}\sigma _{1})\pi _{1}^{q_{1}}}+\varepsilon _{2}\bigr). \end{aligned}$$

For any \(\lambda ,\rho >0\), define the mapping \(\Psi _{\lambda ,\rho}:E_{1}\times E_{2}\rightarrow E_{1}\times E_{2}\) by

$$\begin{aligned} \Psi _{\lambda ,\rho}(x,y)=\bigl(G_{\lambda}(x,y),Q_{\rho}(x,y) \bigr),\quad (x,y) \in E_{1}\times E_{2}. \end{aligned}$$
(3.19)

Let us define the norm \(\Vert \cdot \Vert _{*}\) on \(E_{1}\times E_{2}\) by

$$\begin{aligned} \bigl\Vert (x,y) \bigr\Vert _{*}= \Vert x \Vert _{1}+ \Vert y \Vert _{2},\quad (x,y) \in E_{1} \times E_{2}. \end{aligned}$$
(3.20)

We can easily see that \((E_{1}\times E_{2},\Vert \cdot \Vert _{*})\) is a Banach space. Then by (3.17) and (3.18) we get

$$\begin{aligned}& \begin{aligned} & \bigl\Vert G_{\lambda}(x,y)-G_{\lambda} \bigl(x',y'\bigr) \bigr\Vert _{1}+ \bigl\Vert Q_{\rho}(x,y)-Q_{\rho}\bigl(x',y' \bigr) \bigr\Vert _{2} \\ &\quad \leq (\vartheta _{1}+\vartheta _{2}) \bigl\Vert x-x' \bigr\Vert _{1}+(\phi _{1}+ \phi _{2}) \bigl\Vert y-y' \bigr\Vert _{2} \\ &\quad \leq k \bigl\Vert (x,y)-\bigl(x',y'\bigr) \bigr\Vert _{*}, \end{aligned} \end{aligned}$$
(3.21)

where \(k=\max \{\vartheta _{1}+\vartheta _{2},\phi _{1}+\phi _{2}\}\). Evidently, (3.4)–(3.6) imply that \(k\in (0,1)\), and using (3.21), we conclude that \(\Psi _{\lambda ,\rho}\) is a contraction mapping. According to the Banach fixed point theorem, there exists a unique point \((x^{*},y^{*})\in E_{1}\times E_{2}\) such that \(\Psi _{\lambda ,\rho}(x^{*},y^{*})=(x^{*},y^{*})\). From (3.7), (3.8), and (3.19) it follows that

$$\begin{aligned}& \textstyle\begin{cases} h_{1}(x^{*})=R^{P_{1},\eta _{1}}_{M_{1}(\cdot ,y^{*}),\lambda}[P_{1}(h_{1}(x^{*}))- \lambda (F_{1}(x^{*},y^{*}-f_{2}(y^{*}))+T_{1}(y^{*},x^{*}-g_{1}(x^{*})))], \\ h_{2}(y^{*})=R^{P_{2},\eta _{2}}_{M_{2}(\cdot ,x^{*}),\rho}[P_{2}(h_{2}(y^{*}))- \rho (F_{2}(x^{*}-f_{1}(x^{*}),y^{*})+T_{2}(x^{*},y^{*}-g_{2}(y^{*})))]. \end{cases}\displaystyle \end{aligned}$$

Now Lemma 3.1 ensures that \((x^{*},y^{*})\) is the unique solution of SGNVLI (3.1). This completes the proof. □
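The contraction argument above can also be visualized numerically. The following is a minimal sketch (not the SGNVLI setting itself): two toy affine maps on \(\mathbb{R}\times \mathbb{R}\) obeying componentwise Lipschitz bounds of the form (3.17) and (3.18), for which the Picard iteration of \(\Psi _{\lambda ,\rho}\) converges in the norm (3.20) at a rate governed by \(k=\max \{\vartheta _{1}+\vartheta _{2},\phi _{1}+\phi _{2}\}\). All numerical constants are arbitrary choices, not derived from the hypotheses of Theorem 3.4.

import numpy as np

# toy bounds playing the roles of (vartheta_1, phi_1) and (vartheta_2, phi_2);
# here k = max(0.3 + 0.4, 0.2 + 0.3) = 0.7 < 1
theta1, phi1 = 0.3, 0.2
theta2, phi2 = 0.4, 0.3
k = max(theta1 + theta2, phi1 + phi2)

def G(x, y):   # toy map with |G(x,y) - G(x',y')| <= theta1|x - x'| + phi1|y - y'|
    return theta1 * x + phi1 * y + 1.0

def Q(x, y):   # toy map with |Q(x,y) - Q(x',y')| <= theta2|x - x'| + phi2|y - y'|
    return theta2 * x + phi2 * y - 0.5

def Psi(p):    # Psi(x, y) = (G(x, y), Q(x, y)), as in (3.19)
    return np.array([G(p[0], p[1]), Q(p[0], p[1])])

p = np.array([5.0, -7.0])
for _ in range(60):            # Picard iteration; the error contracts by at least k per step
    p = Psi(p)
print(p, np.linalg.norm(Psi(p) - p, ord=1))   # residual in the sum norm (3.20) is ~ 0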

For a given real normed space E with norm \(\Vert \cdot \Vert \), we recall that a nonlinear mapping \(T:E\rightarrow E\) is said to be nonexpansive if \(\Vert T(x)-T(y)\Vert \leq \Vert x-y\Vert \) for all \(x,y\in E\). In fact, nonexpansive mappings are exactly the Lipschitz continuous mappings with Lipschitz constant equal to 1. Since nonexpansive mappings have many diverse applications in fixed point theory, during the last five decades a lot of attention has been devoted to introducing generalizations of them in the framework of different spaces. Some important classes of generalized nonexpansive mappings are recalled in the following:

Definition 3.5

A nonlinear mapping \(T:E\rightarrow E\) is said to be

  1. (i)

    L-Lipschitzian if there exists a constant \(L>0\) such that

    $$\begin{aligned} \bigl\Vert T(x)-T(y) \bigr\Vert \leq L \Vert x-y \Vert \quad \forall x,y\in E; \end{aligned}$$
  2. (ii)

    uniformly L-Lipschitzian if there exists a constant \(L>0\) such that for each \(n\in \mathbb{N}\),

    $$\begin{aligned} \bigl\Vert T^{n}(x)-T^{n}(y) \bigr\Vert \leq L \Vert x-y \Vert \quad \forall x,y \in E; \end{aligned}$$
  3. (iii)

    asymptotically nonexpansive [41] if there exists a sequence \(\{a_{n}\}\subset (0,+\infty )\) with \(\lim_{n\rightarrow \infty}a_{n}=0\) such that for each \(n\in \mathbb{N}\),

    $$\begin{aligned} \bigl\Vert T^{n}(x)-T^{n}(y) \bigr\Vert \leq (1+a_{n}) \Vert x-y \Vert \quad \forall x,y\in E. \end{aligned}$$

    Equivalently, we say that the mapping T is asymptotically nonexpansive if there exists a sequence \(\{k_{n}\}\subset [1,+\infty )\) with \(\lim_{n\rightarrow \infty}k_{n}=1\) such that for each \(n\in \mathbb{N}\),

    $$\begin{aligned} \bigl\Vert T^{n}(x)-T^{n}(y) \bigr\Vert \leq k_{n} \Vert x-y \Vert \quad \forall x,y \in E; \end{aligned}$$
  4. (iv)

    nearly nonexpansive [42] if there exists a nonnegative real sequence \(\{a_{n}\}\) with \(a_{n}\rightarrow 0\) as \(n\rightarrow \infty \) such that for each \(n\in \mathbb{N}\),

    $$\begin{aligned} \bigl\Vert T^{n}(x)-T^{n}(y) \bigr\Vert \leq \Vert x-y \Vert +a_{n}\quad \forall x,y \in E; \end{aligned}$$
  5. (v)

    nearly uniformly L-Lipschitzian [42] if there exist a real constant \(L>0\) and a nonnegative real sequence \(\{a_{n}\}\) with \(a_{n}\rightarrow 0\) as \(n\rightarrow \infty \) such that for each \(n\in \mathbb{N}\),

    $$\begin{aligned} \bigl\Vert T^{n}(x)-T^{n}(y) \bigr\Vert \leq L\bigl( \Vert x-y \Vert +a_{n}\bigr)\quad \forall x,y\in E; \end{aligned}$$
  6. (vi)

    nearly asymptotically nonexpansive (or \((\{a_{n}\},\{b_{n}\})\)-nearly asymptotically nonexpansive) [42] if there exist nonnegative real sequences \(\{a_{n}\}\) and \(\{b_{n}\}\) with \(a_{n},b_{n}\rightarrow 0\) as \(n\rightarrow \infty \) such that for each \(n\in \mathbb{N}\),

    $$\begin{aligned} \bigl\Vert T^{n}(x)-T^{n}(y) \bigr\Vert \leq \Vert x-y \Vert +a_{n} \Vert x-y \Vert +b_{n} \quad \forall x,y \in E. \end{aligned}$$

    Equivalently, the mapping T is called nearly asymptotically nonexpansive if there exist real sequences \(\{k_{n}\}\subset [1,+\infty )\) and \(\{\sigma _{n}\}\subset [0,+\infty )\) with \(k_{n}\rightarrow 1\) and \(\sigma _{n}\rightarrow 0\) as \(n\rightarrow \infty \) such that for each \(n\in \mathbb{N}\),

    $$\begin{aligned} \bigl\Vert T^{n}(x)-T^{n}(y) \bigr\Vert \leq k_{n}\bigl( \Vert x-y \Vert +\sigma _{n}\bigr) \quad \forall x,y\in E. \end{aligned}$$

It is essential to note that every uniformly L-Lipschitzian mapping is L-Lipschitzian, but the converse need not be true. In other words, the class of L-Lipschitzian mappings is more general than that of uniformly L-Lipschitzian mappings. We illustrate this fact by the following example.

Example 3.6

Assume that \(\beta >0\) is an arbitrary constant and consider \(E=(-\infty ,\beta ]\) with the Euclidean norm \(\Vert \cdot \Vert =|\cdot |\) defined on \(\mathbb{R}\). Suppose further that the self-mapping T of E is defined by

$$ T(x)=\textstyle\begin{cases} \alpha x & \text{if } x\in (-\infty ,0], \\ x & \text{if } x\in [0,\beta ), \\ \beta & \text{if } x\in [\beta ,+\infty ), \end{cases} $$

where \(\alpha >1\) is an arbitrary constant. From the facts that

  1. (i)

    for all \(x,y\in (-\infty ,0]\),

    $$\begin{aligned} \bigl\vert T(x)-T(y) \bigr\vert = \vert \alpha x-\alpha y \vert \leq \alpha \vert x-y \vert , \end{aligned}$$
  2. (ii)

    for all \(x,y\in [0,\beta )\),

    $$\begin{aligned} \bigl\vert T(x)-T(y) \bigr\vert = \vert x-y \vert < \alpha \vert x-y \vert , \end{aligned}$$
  3. (iii)

    for all \(x,y\in [\beta ,+\infty )\),

    $$\begin{aligned} \bigl\vert T(x)-T(y) \bigr\vert =0< \alpha \vert x-y \vert , \end{aligned}$$
  4. (iv)

    for all \(x\in (-\infty ,0]\) and \(y\in [0,\beta )\),

    $$\begin{aligned} \bigl\vert T(x)-T(y) \bigr\vert = \vert \alpha x-y \vert < \vert \alpha x-\alpha y \vert =\alpha \vert x-y \vert , \end{aligned}$$
  5. (v)

    for all \(x\in [0,\beta )\) and \(y\in [\beta ,+\infty )\),

    $$\begin{aligned} \vert Tx-Ty \vert = \vert x-\beta \vert \leq \vert x-y \vert < \alpha \vert x-y \vert , \end{aligned}$$
  6. (vi)

    for all \(x\in (-\infty ,0]\) and \(y\in [\beta ,+\infty )\),

    $$\begin{aligned} \bigl\vert T(x)-T(y) \bigr\vert = \vert \alpha x-\beta \vert < \vert \alpha x-\alpha \beta \vert =\alpha \vert x- \beta \vert \leq \alpha \vert x-y \vert , \end{aligned}$$

it follows that T is an α-Lipschitzian mapping. However, taking into account that \(\alpha >1\), we infer that for all \(n\in \mathbb{N}\setminus \{1\}\),

$$\begin{aligned} \bigl\vert T^{n}(x)-T^{n}(y) \bigr\vert =\alpha ^{n} \vert x-y \vert >\alpha \vert x-y \vert \quad \forall x,y \in (-\infty ,0], \end{aligned}$$

which implies that T is not a uniformly α-Lipschitzian mapping.
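The dichotomy in Example 3.6 can also be checked numerically. Below is a small sketch with the arbitrary choices \(\alpha =2\) and \(\beta =1\): the single-step α-Lipschitz bound holds on sampled pairs, whereas \(|T^{n}(x)-T^{n}(y)|=\alpha ^{n}|x-y|\) on \((-\infty ,0]\) already exceeds \(\alpha |x-y|\) for \(n=2\).

alpha, beta = 2.0, 1.0        # arbitrary choices with alpha > 1 and beta > 0

def T(x):                     # the mapping of Example 3.6 on E = (-inf, beta]
    if x <= 0.0:
        return alpha * x
    return min(x, beta)       # identity on [0, beta), value beta at the right endpoint

def T_iter(x, n):             # n-fold composition T^n
    for _ in range(n):
        x = T(x)
    return x

x, y = -3.0, -1.0             # both points lie in (-inf, 0]
for n in (1, 2, 5):
    print(n, abs(T_iter(x, n) - T_iter(y, n)), alpha * abs(x - y))
# n = 1: 4.0 vs 4.0 (the alpha-Lipschitz bound is attained)
# n >= 2: alpha^n * |x - y| = 8.0, 64.0 strictly exceeds alpha * |x - y| = 4.0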

It is also remarkable that for a given nonnegative real sequence \(\{a_{n}\}\), every nearly nonexpansive mapping with respect to the sequence \(\{a_{n}\}\) is a nearly uniformly L-Lipschitzian mapping with \(L=1\), but the converse is not necessarily true. Moreover, a nearly uniformly L-Lipschitzian mapping with respect to the sequence \(\{a_{n}\}\) may not be uniformly L-Lipschitzian. In fact, for a given nonnegative real sequence \(\{a_{n}\}\), the class of nearly uniformly L-Lipschitzian mappings with respect to the sequence \(\{a_{n}\}\) properly contains the class of nearly nonexpansive mappings with respect to the sequence \(\{a_{n}\}\) and the class of uniformly L-Lipschitzian mappings. The following example supports these facts.

Example 3.7

Consider \(E=(-\infty ,\alpha ]\) with the Euclidean norm \(\Vert \cdot \Vert =|\cdot |\) defined on \(\mathbb{R}\) and let the self-mapping T of E be defined as follows:

$$ T(x)=\textstyle\begin{cases} \frac{1}{l} & \text{if } x\in (-\infty ,k)\cup \{l\}, \\ l & \text{if } x=k, \\ 0 & \text{if } x\in (k,l)\cup (l,\alpha ], \end{cases} $$

where \(k>0\) and \(\frac{k+\sqrt{k^{2}+4}}{2}< l<\alpha \) are arbitrary real constants such that \(kl>1\). It is well known that every asymptotically nonexpansive mapping is Lipschitzian and every Lipschitzian mapping is continuous. Since the mapping T is discontinuous at the points \(x=k\) and \(x=l\), it follows that T is not Lipschitzian, and so it is not an asymptotically nonexpansive mapping. Taking into account that \(l>\frac{k+\sqrt{k^{2}+4}}{2}\), we deduce that \(l-\frac{1}{l}>k\). Let us now take \(a_{n}=\frac{k}{\beta ^{n}}\) for each \(n\in \mathbb{N}\), where \(\beta \in (\max \{1,\frac{k}{\alpha -k}\},+\infty )\) is an arbitrary constant, \(x=k\), and \(y=\frac{k}{\beta}\). Then we have \(T(x)=l\), \(T(y)=\frac{1}{l}\), and

$$\begin{aligned} \bigl\vert T(x)-T(y) \bigr\vert =l-\frac{1}{l}>k=\frac{(\beta -1)k}{\beta}+ \frac{k}{\beta}= \vert x-y \vert +a_{1}, \end{aligned}$$

that is, T is not a nearly nonexpansive mapping with respect to the sequence \(\{a_{n}\}=\{\frac{k}{\beta ^{n}}\}\). Moreover, picking \(x=k\) and \(y\in (k,\frac{(\beta +1)k}{\beta} )\), we have \(T(x)=l\), \(T(y)=0\), and \(0<|x-y|<\frac{k}{\beta}\), and so \(|T(x)-T(y)|>\frac{\beta l}{k}|x-y|\). Thus T is not a uniformly \(\frac{\beta l}{k}\)-Lipschitzian mapping. However, for all \(x,y\in E\),

$$\begin{aligned} \bigl\vert T(x)-T(y) \bigr\vert \leq l\leq \frac{\beta l}{k} \biggl( \vert x-y \vert +\frac{k}{\beta}\biggr)= \frac{\beta l}{k}\bigl( \vert x-y \vert +a_{1}\bigr), \end{aligned}$$
(3.22)

and for all \(n\geq 2\), because of \(T^{n}(z)=\frac{1}{l}\) for all \(z\in E\), we get

$$\begin{aligned} \bigl\vert T^{n}(x)-T^{n}(y) \bigr\vert < \frac{\beta l}{k}\biggl( \vert x-y \vert +\frac{k}{\beta ^{n}}\biggr)= \frac{\beta l}{k}\bigl( \vert x-y \vert +a_{n}\bigr). \end{aligned}$$
(3.23)

Thereby (3.22) and (3.23) imply that T is a nearly uniformly \(\frac{\beta l}{k}\)-Lipschitzian mapping with respect to the sequence \(\{a_{n}\}=\{\frac{k}{\beta ^{n}}\}\).
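For concreteness, the claims of Example 3.7 can be probed numerically with the arbitrary choices \(k=1\), \(l=2\), \(\alpha =3\), and \(\beta =2\) (so that \(a_{n}=2^{-n}\)); the snippet below is only a sanity check on sampled points, not a proof.

import random

k, l, alpha, beta = 1.0, 2.0, 3.0, 2.0   # arbitrary: l > (k + sqrt(k^2+4))/2, kl > 1, beta > 1

def T(x):                                # the mapping of Example 3.7 on E = (-inf, alpha]
    if x < k or x == l:
        return 1.0 / l
    if x == k:
        return l
    return 0.0                           # x in (k, l) or (l, alpha]

a1, L = k / beta, beta * l / k           # a_1 = k/beta and the constant L = beta*l/k

print(abs(T(k) - T(k / beta)) > abs(k - k / beta) + a1)   # True: not nearly nonexpansive
print(abs(T(k) - T(1.25 * k)) > L * abs(k - 1.25 * k))    # True: the L-bound fails at n = 1

pts = [random.uniform(-5.0, alpha) for _ in range(400)]
print(all(abs(T(u) - T(v)) <= L * (abs(u - v) + a1)       # the bound (3.22) on sampled pairs
          for u in pts for v in pts))                     # True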

We now focus our attention on another class of generalized nonexpansive mappings, which unifies the mappings appearing in parts (ii)–(vi) of Definition 3.5.

Definition 3.8

A nonlinear mapping \(T:E\rightarrow E\) is called generalized \((L,\{a_{n}\},\{b_{n}\})\)-nearly asymptotically nonexpansive if there exist a constant \(L>0\) and nonnegative real sequences \(\{a_{n}\}\) and \(\{b_{n}\}\) with \(a_{n},b_{n}\rightarrow 0\) as \(n\rightarrow \infty \) such that for each \(n\in \mathbb{N}\),

$$\begin{aligned} \bigl\Vert T^{n}(x)-T^{n}(y) \bigr\Vert \leq L\bigl( \Vert x-y \Vert +a_{n} \Vert x-y \Vert +b_{n}\bigr)\quad \forall x,y\in E. \end{aligned}$$

It should also be pointed out that every nearly asymptotically nonexpansive mapping is a generalized \((L,\{a_{n}\},\{b_{n}\})\)-nearly asymptotically nonexpansive mapping with \(L=1\), but the converse is not true in general. The following example shows that the class of nearly asymptotically nonexpansive mappings is strictly contained within the class of generalized \((L,\{a_{n}\},\{b_{n}\})\)-nearly asymptotically nonexpansive mappings.

Example 3.9

Consider \(E=\mathbb{R}\) with the Euclidean norm \(\Vert \cdot \Vert =|\cdot |\) and let the self-mapping T of E be defined as follows:

$$ T(x)=\textstyle\begin{cases} p & \text{if } x\in (-\infty ,p), \\ q & \text{if } x\in (p,\frac{1}{q})\cup (\frac{1}{q},k), \\ \frac{1}{q} & \text{if } x\in \{p,\frac{1}{q}\}\cup [k,+\infty ), \end{cases} $$

where \(0\leq p< k\) and \(q>\frac{k+\sqrt{k^{2}+4}}{2}\) are arbitrary real constants such that \(pq<1<kq\). Evidently, the mapping T is discontinuous at the points \(x=p,k,\frac{1}{q}\). This fact ensures that T is not Lipschitzian, and so it is not an asymptotically nonexpansive mapping. Let us now take \(a_{n}=\frac{\alpha}{n}\) and \(b_{n}=\frac{k}{\gamma ^{n}}\) for \(n\in \mathbb{N}\), where γ is an arbitrary real constant such that \(\gamma \in (1,kq)\cup (kq,+\infty )\) if \(p=0\) and \(\gamma \in (1,\frac{k}{p})\backslash \{kq\}\) if \(p>0\), and let \(\alpha \in (0,\frac{\gamma (q^{2}-kq-1)}{kq(\gamma -1)} )\). The fact that \(q>\frac{k+\sqrt{k^{2}+4}}{2}\) ensures that \(q^{2}-kq-1>0\). Picking \(x=k\) and \(y=\frac{k}{\gamma}\), we have \(T(x)=\frac{1}{q}\) and \(T(y)=q\), and since \(0<\alpha <\frac{\gamma (q^{2}-kq-1)}{kq(\gamma -1)}\), we deduce that

$$\begin{aligned} \bigl\vert T(x)-T(y) \bigr\vert =&q- \frac{1}{q} \\ >&\frac{(\gamma -1)k}{\gamma} + \frac{\alpha (\gamma -1)k}{\gamma}+\frac{k}{\gamma} \\ =& \vert x-y \vert +a_{1} \vert x-y \vert +b_{1}, \end{aligned}$$

which guarantees that T is not an \((\{a_{n}\},\{b_{n}\})=(\{\frac{\alpha}{n}\},\{\frac{k}{\gamma ^{n}} \})\)-nearly asymptotically nonexpansive mapping. However, for all \(x,y\in E\), we obtain

$$\begin{aligned}& \begin{aligned} \bigl\vert T(x)-T(y) \bigr\vert &\leq q\leq \frac{\gamma q}{k}\biggl( \vert x-y \vert +\alpha \vert x-y \vert + \frac{k}{\gamma}\biggr) \\ & =\frac{\gamma q}{k}\bigl( \vert x-y \vert +a_{1} \vert x-y \vert +b_{1}\bigr), \end{aligned} \end{aligned}$$
(3.24)

and for all \(n\geq 2\), since \(T^{n}(z)=\frac{1}{q}\) for all \(z\in E\), we have

$$\begin{aligned}& \begin{aligned} \bigl\vert T^{n}(x)-T^{n}(y) \bigr\vert &< \frac{\gamma q}{k}\biggl( \vert x-y \vert + \frac{\alpha}{n} \vert x-y \vert +\frac{k}{\gamma ^{n}}\biggr) \\ & =\frac{\gamma q}{k}\bigl( \vert x-y \vert +a_{n} \vert x-y \vert +b_{n}\bigr). \end{aligned} \end{aligned}$$
(3.25)

Now by (3.24) and (3.25) it follows that T is a generalized \((\frac{\gamma q}{k},\{\frac{\alpha}{n}\},\{\frac{k}{\gamma ^{n}}\})\)-nearly asymptotically nonexpansive mapping.

Lemma 3.10

Let for each \(i\in \{1,2\}\), \(E_{i}\) be a real Banach space with a norm \(\Vert \cdot \Vert _{i}\), and let \(S_{i}:E_{i}\rightarrow E_{i}\) be a generalized \((L_{i},\{a_{n,i}\}_{n=1}^{\infty},\{b_{n,i}\}_{n=1}^{\infty} )\)-nearly asymptotically nonexpansive mapping. Suppose further that Q is a self-mapping of \(E_{1}\times E_{2}\) defined by

$$\begin{aligned} Q(x_{1},x_{2})=(S_{1}x_{1},S_{2}x_{2}),\qquad (x_{1},x_{2})\in E_{1} \times E_{2}. \end{aligned}$$
(3.26)

Then Q is a generalized \((\max \{L_{1},L_{2}\},\{a_{n,1}+a_{n,2}\}_{n=1}^{\infty}, \{b_{n,1}+b_{n,2} \}_{n=1}^{\infty} )\)-nearly asymptotically nonexpansive mapping.

Proof

Relying on the fact that for each \(i\in \{1,2\}\), \(S_{i}\) is a generalized \((L_{i},\{a_{n,i}\}_{n=1}^{\infty},\{b_{n,i}\}_{n=1}^{\infty} )\)-nearly asymptotically nonexpansive mapping, for all \((x_{1},x_{2}),(y_{1},y_{2})\in E_{1}\times E_{2}\) and \(n\in \mathbb{N}\), we have

$$\begin{aligned}& \bigl\Vert Q^{n}(x_{1},x_{2})-Q^{n}(y_{1},y_{2}) \bigr\Vert _{*} \\& \quad = \bigl\Vert \bigl(S_{1}^{n}x_{1},S_{2}^{n}x_{2} \bigr)-\bigl(S_{1}^{n}y_{1},S_{2}^{n}y_{2} \bigr) \bigr\Vert _{*} \\& \quad = \bigl\Vert \bigl(S_{1}^{n}x_{1}-S_{1}^{n}y_{1},S_{2}^{n}x_{2}-S_{2}^{n}y_{2} \bigr) \bigr\Vert _{*} \\& \quad = \bigl\Vert S^{n}_{1}x_{1}-S^{n}_{1}y_{1} \bigr\Vert _{1}+ \bigl\Vert S^{n}_{2}x_{2}-S^{n}_{2}y_{2} \bigr\Vert _{2} \\& \quad \leq L_{1} \bigl( \Vert x_{1}-y_{1} \Vert _{1}+a_{n,1} \Vert x_{1}-y_{1} \Vert _{1}+b_{n,1} \bigr) \\& \qquad {} +L_{2} \bigl( \Vert x_{2}-y_{2} \Vert _{2}+a_{n,2} \Vert x_{2}-y_{2} \Vert _{2}+b_{n,2} \bigr) \\& \quad \leq \max \{L_{1},L_{2}\} \bigl( \Vert x_{1}-y_{1} \Vert _{1}+ \Vert x_{2}-y_{2} \Vert _{2} \\& \qquad {} +(a_{n,1}+a_{n,2}) \bigl( \Vert x_{1}-y_{1} \Vert _{1}+ \Vert x_{2}-y_{2} \Vert _{2}\bigr)+b_{n,1}+b_{n,2} \bigr), \end{aligned}$$

where \(\Vert \cdot \Vert _{*}\) is the norm on \(E_{1}\times E_{2}\) defined by (3.20). From the preceding relation it follows that Q is a generalized \((\max \{L_{1},L_{2}\},\{a_{n,1}+a_{n,2}\}_{n=1}^{\infty}, \{b_{n,1}+b_{n,2} \}_{n=1}^{\infty} )\)-nearly asymptotically nonexpansive mapping. The proof is completed. □

Let for \(i\in \{1,2\}\), \(E_{i}\) be a real \(q_{i}\)-uniformly smooth Banach space with norm \(\Vert \cdot \Vert _{i}\) and \(q_{i}>1\), and let \(S_{i}:E_{i}\rightarrow E_{i}\) be a generalized \((L_{i},\{a_{n,i}\}_{n=1}^{\infty},\{b_{n,i}\}_{n=1}^{\infty})\)-nearly asymptotically nonexpansive mapping. Furthermore, let Q be a self-mapping of \(E_{1}\times E_{2}\) defined by (3.26). Denote the sets of all the fixed points of \(S_{i}\) (\(i=1,2\)) and Q by \(\operatorname{Fix}(S_{i})\) (\(i=1,2\)) and \(\operatorname{Fix}(Q)\), respectively. At the same time, denote by \(\Omega _{\operatorname{SGNVLI}}\) the set of all the solutions of SGNVLI (3.1), where for each \(i\in \{1,2\}\), \(P_{i}\) is a nonlinear strictly \(\eta _{i}\)-accretive mapping with \(\operatorname{dom}(P_{i})\cap h_{i}(E_{i})\neq \emptyset \). Using (3.26), it follows that for any \((x_{1},x_{2})\in E_{1}\times E_{2}\), \((x_{1},x_{2})\in \operatorname{Fix}(Q)\) if and only if \(x_{i}\in \operatorname{Fix}(S_{i})\) for each \(i\in \{1,2\}\), i.e., \(\operatorname{Fix}(Q)=\operatorname{Fix}(S_{1},S_{2})=\operatorname{Fix}(S_{1}) \times \operatorname{Fix}(S_{2})\). If \((x^{*},y^{*})\in \operatorname{Fix}(Q)\cap \Omega _{\operatorname{SGNVLI}}\), then using Lemma 3.1, we can easily observe that for each \(n\in \mathbb{N}\),

$$\begin{aligned}& \textstyle\begin{cases} x^{*}=S^{n}_{1}x^{*}=x^{*}-h_{1}(x^{*})+R^{P_{1},\eta _{1}}_{M_{1}(\cdot ,y^{*}), \lambda}[P_{1}(h_{1}(x^{*})) \\ \hphantom{x^{*}=}{}- \lambda (F_{1}(x^{*},y^{*}-f_{2}(y^{*}))+T_{1}(y^{*},x^{*}-g_{1}(x^{*})))] \\ \hphantom{x^{*}}=S^{n}_{1} (x^{*}-h_{1}(x^{*})+R^{P_{1},\eta _{1}}_{M_{1}(\cdot ,y^{*}), \lambda}[P_{1}(h_{1}(x^{*})) \\ \hphantom{x^{*}=}{}- \lambda (F_{1}(x^{*},y^{*}-f_{2}(y^{*}))+T_{1}(y^{*},x^{*}-g_{1}(x^{*})))] ), \\ y^{*}=S^{n}_{2}y^{*}=y^{*}-h_{2}(y^{*})+R^{P_{2},\eta _{2}}_{M_{2}(\cdot ,x^{*}), \rho}[P_{2}(h_{2}(y^{*})) \\ \hphantom{y^{*}=}{}- \rho (F_{2}(x^{*}-f_{1}(x^{*}),y^{*})+T_{2}(x^{*},y^{*}-g_{2}(y^{*})))] \\ \hphantom{y^{*}}=S^{n}_{2} (y^{*}-h_{2}(y^{*})+R^{P_{2},\eta _{2}}_{M_{2}(\cdot ,x^{*}), \rho}[P_{2}(h_{2}(y^{*})) \\ \hphantom{y^{*}=}{}- \rho (F_{2}(x^{*}-f_{1}(x^{*}),y^{*})+T_{2}(x^{*},y^{*}-g_{2}(y^{*})))] ). \end{cases}\displaystyle \end{aligned}$$
(3.27)

The fixed point formulation (3.27) allows us to construct an iterative algorithm for finding a common element of the two sets of \(\operatorname{Fix}(Q)=\operatorname{Fix}(S_{1},S_{2})\) and \(\Omega _{\operatorname{SGNVLI}}\) as follows.

Algorithm 3.11

Let \(E_{i}\), \(f_{i}\), \(g_{i}\), \(h_{i}\), \(F_{i}\), and \(T_{i}\) (\(i=1,2\)) be as in SGNVLI (3.1). Suppose that for all \(n\geq 0\) and \(i\in \{1,2\}\), \(\eta _{n,i}:E_{i}\times E_{i}\rightarrow E_{i}\) and \(P_{n,i}:E_{i}\rightarrow E_{i}\) are single-valued nonlinear mappings such that for all \(n\geq 0\) and \(i\in \{1,2\}\), \(P_{n,i}\) is strictly \(\eta _{n,i}\)-accretive with \(\operatorname{dom}(P_{n,i})\cap h_{i}(E_{i})\neq \emptyset \). Assume that for \(i\in \{1,2\}\), \(j\in \{1,2\}\backslash \{i\}\), and \(n\geq 0\), \(M_{n,i}:E_{i}\times E_{j}\rightarrow 2^{E_{i}}\) is a multivalued nonlinear mapping such that for all \(x_{j}\in E_{j}\) and \(n\geq 0\), \(M_{n,i}(\cdot ,x_{j}):E_{i}\rightarrow 2^{E_{i}}\) is a \(P_{n,i}\)-\(\eta _{n,i}\)-accretive mapping with \(h_{i}(E_{i})\cap \operatorname{dom}M_{n,i}(\cdot ,x_{j})\neq \emptyset \). Suppose further that for \(i\in \{1,2\}\), \(S_{i}:E_{i}\rightarrow E_{i}\) is a generalized \((L_{i},\{a_{n,i}\}_{n=0}^{\infty},\{b_{n,i}\}_{n=0}^{\infty})\)-nearly asymptotically nonexpansive mapping. For any given \((x_{0},y_{0})\in E_{1}\times E_{2}\), define the iterative sequence \(\{(x_{n},y_{n})\}_{n=0}^{\infty}\) in \(E_{1}\times E_{2}\) in the following way:

$$\begin{aligned}& \textstyle\begin{cases} x_{n+1}=\alpha _{n,1}x_{n}+(1-\alpha _{n,1}-\beta _{n,1})S^{n}_{1}\{z_{n,1}-h_{1}(z_{n,1}) \\ \hphantom{x_{n+1}=} +{}R^{P_{n,1},\eta _{n,1}}_{M_{n,1}(\cdot ,t_{n,1}),\lambda _{n}}(\Phi (z_{n,1},t_{n,1}))\} +\alpha _{n,1}e_{n,1}+\beta _{n,1}k_{n,1}+l_{n,1}, \\ y_{n+1}=\alpha _{n,1}y_{n}+(1-\alpha _{n,1}-\beta _{n,1})S^{n}_{2}\{t_{n,1}-h_{2}(t_{n,1}) \\ \hphantom{y_{n+1}=}{}+R^{P_{n,2},\eta _{n,2}}_{M_{n,2}(\cdot ,z_{n,1}),\rho _{n}} (\Delta (z_{n,1},t_{n,1}))\} +\alpha _{n,1}\hat{e}_{n,1}+\beta _{n,1}\hat{k}_{n,1}+\hat{l}_{n,1}, \\ z_{n,i}=\alpha _{n,i+1}x_{n}+(1-\alpha _{n,i+1}-\beta _{n,i+1})S^{n}_{1}\{z_{n,i+1}-h_{1}(z_{n,i+1}) \\ \hphantom{z_{n,i}=}{} +R^{P_{n,1},\eta _{n,1}}_{M_{n,1}(\cdot ,t_{n,i+1}),\lambda _{n}}(\Phi (z_{n,i+1},t_{n,i+1}))\} \\ \hphantom{z_{n,i}=}{}+\alpha _{n,i+1}e_{n,i+1}+\beta _{n,i+1}k_{n,i+1}+l_{n,i+1}, \\ t_{n,i}=\alpha _{n,i+1}y_{n}+(1-\alpha _{n,i+1}-\beta _{n,i+1})S^{n}_{2}\{t_{n,i+1}-h_{2}(t_{n,i+1}) \\ \hphantom{t_{n,i}=}{}+R^{P_{n,2},\eta _{n,2}}_{M_{n,2}(\cdot ,z_{n,i+1}),\rho _{n}}(\Delta (z_{n,i+1},t_{n,i+1}))\} \\ \hphantom{t_{n,i}=}{}+\alpha _{n,i+1}\hat{e}_{n,i+1}+\beta _{n,i+1}\hat{k}_{n,i+1}+\hat{l}_{n,i+1}, \\ \dots \\ z_{n,p-1}=\alpha _{n,p}x_{n}+(1-\alpha _{n,p}-\beta _{n,p})S^{n}_{1}\{x_{n}-h_{1}(x_{n}) \\ \hphantom{z_{n,p-1}=}{}+R^{P_{n,1},\eta _{n,1}}_{M_{n,1}(\cdot ,y_{n}),\lambda _{n}}(\Phi (x_{n},y_{n}))\} +\alpha _{n,p}e_{n,p}+\beta _{n,p}k_{n,p}+l_{n,p}, \\ t_{n,p-1}=\alpha _{n,p}y_{n}+(1-\alpha _{n,p}-\beta _{n,p})S^{n}_{2}\{y_{n}-h_{2}(y_{n}) \\ \hphantom{t_{n,p-1}=}{} +R^{P_{n,2},\eta _{n,2}}_{M_{n,2}(\cdot ,x_{n}),\rho _{n}}(\Delta (x_{n},y_{n}))\} +\alpha _{n,p}\hat{e}_{n,p}+\beta _{n,p}\hat{k}_{n,p}+\hat{l}_{n,p},\end{cases}\displaystyle \end{aligned}$$
(3.28)

for \(i=1,2,\dots ,p-2\); for all \(n\in \mathbb{N}\cup \{0\}\) and \(i=1,2,\dots ,p-1\),

$$\begin{aligned}& \Phi (z_{n,i},t_{n,i})=P_{1} \bigl(h_{1}(z_{n,i})\bigr)-\lambda _{n} \bigl(F_{1}\bigl(z_{n,i},t_{n,i}-f_{2}(t_{n,i}) \bigr) +T_{1}\bigl(t_{n,i},z_{n,i}-g_{1}(z_{n,i}) \bigr)\bigr), \\& \Delta (z_{n,i},t_{n,i})=P_{2} \bigl(h_{2}(t_{n,i})\bigr)-\rho _{n} \bigl(F_{2}\bigl(z_{n,i}-f_{1}(z_{n,i}),t_{n,i} \bigr) +T_{2}\bigl(z_{n,i},t_{n,i}-g_{2}(t_{n,i}) \bigr)\bigr), \\& \Phi (x_{n},y_{n})=P_{1}\bigl(h_{1}(x_{n}) \bigr)-\lambda _{n}\bigl(F_{1}\bigl(x_{n},y_{n}-f_{2}(y_{n}) \bigr)+T_{1}\bigl(y_{n},x_{n}-g_{1}(x_{n}) \bigr)\bigr), \\& \Delta (x_{n},y_{n})=P_{2}\bigl(h_{2}(y_{n}) \bigr)-\rho _{n}\bigl(F_{2}\bigl(x_{n}-f_{1}(x_{n}),y_{n} \bigr)+T_{2}\bigl(x_{n},y_{n}-g_{2}(y_{n}) \bigr)\bigr), \end{aligned}$$

\(\lambda _{n},\rho _{n}>0\) (\(n=0,1,2,\dots \)) are constants, \(\{\alpha _{n,i}\}_{n=0}^{\infty}\), \(\{\beta _{n,i}\}_{n=0}^{\infty}\) (\(i=1,2,\dots ,p\)) are 2p sequences in \((0,1)\) such that \(\sum_{n=0}^{\infty}\prod_{i=1}^{p}(1-\alpha _{n,i})=\infty \), \(\sum_{n=0}^{\infty}\beta _{n,i}<\infty \) for \(i=1,2,\dots ,p\), \(\alpha _{n,i}+\beta _{n,i}\in (0,1]\) for all \(n\geq 0\) and \(i=1,2,\dots ,p\), and \(\{e_{n,i}\}_{n=0}^{\infty}\), \(\{\hat{e}_{n,i}\}_{n=0}^{\infty}\), \(\{k_{n,i}\}_{n=0}^{\infty}\), \(\{\hat{k}_{n,i}\}_{n=0}^{\infty}\), \(\{l_{n,i}\}_{n=0}^{\infty}\), and \(\{\hat{l}_{n,i}\}_{n=0}^{\infty}\) \((i=1,2,\dots ,p)\) are 6p sequences introduced to take into account a possible inexact computation of the resolvent operator point and satisfying the following conditions: For \(i=1,2,\dots ,p\), \(\{k_{n,i}\}_{n=0}^{\infty}\) are p bounded sequences in \(E_{1}\), \(\{\hat{k}_{n,i}\}_{n=0}^{\infty}\) are p bounded sequences in \(E_{2}\), and \(\{(e_{n,i},\hat{e}_{n,i})\}_{n=0}^{\infty}\) and \(\{(l_{n,i},\hat{l}_{n,i})\}_{n=0}^{\infty}\) are 2p sequences in \(E_{1}\times E_{2}\) such that for all \(n\in \mathbb{N}\cup \{0\}\) and \(i=1,2,\dots ,p\),

$$\begin{aligned}& \textstyle\begin{cases} e_{n,i}=e'_{n,i}+e''_{n,i}, \qquad \hat{e}_{n,i}=\hat{e}'_{n,i}+\hat{e}''_{n,i},\\ \lim_{n\rightarrow \infty} \Vert (e'_{n,i},\hat{e}'_{n,i}) \Vert _{*}=0,\\ \sum_{n=0}^{\infty} \Vert (e''_{n,i},\hat{e}''_{n,i}) \Vert _{*}< \infty ,\\ \sum_{n=0}^{\infty} \Vert (l_{n,i},\hat{l}_{n,i}) \Vert _{*}< \infty . \end{cases}\displaystyle \end{aligned}$$
(3.29)

Let \(\{(u_{n},v_{n})\}_{n=0}^{\infty}\) be any sequence in \(E_{1}\times E_{2}\) and define \(\{\epsilon _{n}\}_{n=0}^{\infty}\) by

$$\begin{aligned}& \textstyle\begin{cases} \epsilon _{n}= \Vert (u_{n+1},v_{n+1})-(L_{n},D_{n}) \Vert _{*}, \\ L_{n}=\alpha _{n,1}u_{n}+(1-\alpha _{n,1}-\beta _{n,1})S^{n}_{1}\{\nu _{n,1}-h_{1}(\nu _{n,1}) \\ \hphantom{L_{n}=}{}+R^{P_{n,1},\eta _{n,1}}_{M_{n,1}(\cdot ,\omega _{n,1}),\lambda _{n}}(\Phi (\nu _{n,1},\omega _{n,1}))\} +\alpha _{n,1}e_{n,1}+\beta _{n,1}k_{n,1}+l_{n,1}, \\ D_{n}=\alpha _{n,1}v_{n}+(1-\alpha _{n,1}-\beta _{n,1})S^{n}_{2}\{\omega _{n,1}-h_{2}(\omega _{n,1}) \\ \hphantom{D_{n}=}{}+R^{P_{n,2},\eta _{n,2}}_{M_{n,2}(\cdot ,\nu _{n,1}),\rho _{n}}(\Delta (\nu _{n,1},\omega _{n,1}))\} +\alpha _{n,1}\hat{e}_{n,1}+\beta _{n,1}\hat{k}_{n,1}+\hat{l}_{n,1}, \\ \nu _{n,1}=\alpha _{n,2}u_{n}+(1-\alpha _{n,2}-\beta _{n,2})S^{n}_{1}\{\nu _{n,2}-h_{1}(\nu _{n,2}) \\ \hphantom{\nu _{n,1}=}{}+R^{P_{n,1},\eta _{n,1}}_{M_{n,1}(\cdot ,\omega _{n,2}),\lambda _{n}}(\Phi (\nu _{n,2},\omega _{n,2}))\} +\alpha _{n,2}e_{n,2}+\beta _{n,2}k_{n,2}+l_{n,2}, \\ \omega _{n,1}=\alpha _{n,2}v_{n}+(1-\alpha _{n,2}-\beta _{n,2})S^{n}_{2}\{\omega _{n,2}-h_{2}(\omega _{n,2}) \\ \hphantom{\omega _{n,1}=}{}+R^{P_{n,2},\eta _{n,2}}_{M_{n,2}(\cdot ,\nu _{n,2}),\rho _{n}}(\Delta (\nu _{n,2},\omega _{n,2}))\} +\alpha _{n,2}\hat{e}_{n,2}+\beta _{n,2}\hat{k}_{n,2}+\hat{l}_{n,2}, \\ \dots \\ \nu _{n,p-2}=\alpha _{n,p-1}u_{n}+(1-\alpha _{n,p-1}-\beta _{n,p-1})S^{n}_{1}\{\nu _{n,p-1}-h_{1}(\nu _{n,p-1}) \\ \hphantom{\nu _{n,p-2}=}{}+R^{P_{n,1},\eta _{n,1}}_{M_{n,1}(\cdot ,\omega _{n,p-1}),\lambda _{n}}(\Phi (\nu _{n,p-1},\omega _{n,p-1}))\} \\ \hphantom{\nu _{n,p-2}=}{}+\alpha _{n,p-1}e_{n,p-1}+\beta _{n,p-1}k_{n,p-1}+l_{n,p-1}, \\ \omega _{n,p-2}=\alpha _{n,p-1}v_{n}+(1-\alpha _{n,p-1}-\beta _{n,p-1})S^{n}_{2}\{\omega _{n,p-1}-h_{2}(\omega _{n,p-1}) \\ \hphantom{\omega _{n,p-2}=}{} +R^{P_{n,2},\eta _{n,2}}_{M_{n,2}(\cdot ,\nu _{n,p-1}),\rho _{n}}(\Delta (\nu _{n,p-1},\omega _{n,p-1}))\} \\ \hphantom{\omega _{n,p-2}=}{}+\alpha _{n,p-1}\hat{e}_{n,p-1}+\beta _{n,p-1}\hat{k}_{n,p-1}+\hat{l}_{n,p-1}, \\ \nu _{n,p-1}=\alpha _{n,p}u_{n}+(1-\alpha _{n,p}-\beta _{n,p})S^{n}_{1}\{u_{n}-h_{1}(u_{n}) \\ \hphantom{\nu _{n,p-1}=}{} +R^{P_{n,1},\eta _{n,1}}_{M_{n,1}(\cdot ,v_{n}),\lambda _{n}}(\Phi (u_{n},v_{n}))\} +\alpha _{n,p}e_{n,p}+\beta _{n,p}k_{n,p}+l_{n,p}, \\ \omega _{n,p-1}=\alpha _{n,p}v_{n}+(1-\alpha _{n,p}-\beta _{n,p})S^{n}_{2}\{v_{n}-h_{2}(v_{n}) \\ \hphantom{\omega _{n,p-1}=}{}+R^{P_{n,2},\eta _{n,2}}_{M_{n,2}(\cdot ,u_{n}),\rho _{n}}(\Delta (u_{n},v_{n}))\} +\alpha _{n,p}\hat{e}_{n,p}+\beta _{n,p}\hat{k}_{n,p}+\hat{l}_{n,p}, \end{cases}\displaystyle \end{aligned}$$
(3.30)

where for all \(n\in \mathbb{N}\cup \{0\}\) and \(i=1,2,\dots ,p-1\),

$$\begin{aligned}& \Phi (\nu _{n,i},\omega _{n,i})=P_{1} \bigl(h_{1}(\nu _{n,i})\bigr)-\lambda _{n} \bigl(F_{1}\bigl(\nu _{n,i},\omega _{n,i}-f_{2}( \omega _{n,i})\bigr) +T_{1}\bigl(\omega _{n,i},\nu _{n,i}-g_{1}(\nu _{n,i})\bigr)\bigr), \\& \Delta (\nu _{n,i},\omega _{n,i})=P_{2} \bigl(h_{2}(\omega _{n,i})\bigr)-\rho _{n} \bigl(F_{2}\bigl(\nu _{n,i}-f_{1}(\nu _{n,i}),\omega _{n,i}\bigr) +T_{2}\bigl(\nu _{n,i},\omega _{n,i}-g_{2}(\omega _{n,i}) \bigr)\bigr), \\& \Phi (u_{n},v_{n})=P_{1}\bigl(h_{1}(u_{n}) \bigr)-\lambda _{n}\bigl(F_{1}\bigl(u_{n},v_{n}-f_{2}(v_{n}) \bigr)+T_{1}\bigl(v_{n},u_{n}-g_{1}(u_{n}) \bigr)\bigr), \\& \Delta (u_{n},v_{n})=P_{2}\bigl(h_{2}(v_{n}) \bigr)-\rho _{n}\bigl(F_{2}\bigl(u_{n}-f_{1}(u_{n}),v_{n} \bigr)+T_{2}\bigl(u_{n},v_{n}-g_{2}(v_{n}) \bigr)\bigr). \end{aligned}$$
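To clarify the bookkeeping in Algorithm 3.11, the following is a schematic sketch of one sweep of (3.28), with the error terms \(e_{n,i}\), \(k_{n,i}\), \(l_{n,i}\) omitted. All callables are placeholders to be supplied by the user: resolvent_1(t, w) and resolvent_2(z, w) stand for \(R^{P_{n,1},\eta _{n,1}}_{M_{n,1}(\cdot ,t),\lambda _{n}}(w)\) and \(R^{P_{n,2},\eta _{n,2}}_{M_{n,2}(\cdot ,z),\rho _{n}}(w)\), S1n and S2n for the iterates \(S_{1}^{n}\) and \(S_{2}^{n}\), and Phi, Delta for the maps displayed above; alpha[0], beta[0] correspond to \(\alpha _{n,1}\), \(\beta _{n,1}\) and alpha[p-1], beta[p-1] to \(\alpha _{n,p}\), \(\beta _{n,p}\). This is only an organizational sketch, not an executable specification of the scheme.

def sweep(x, y, p, alpha, beta, S1n, S2n, h1, h2, resolvent_1, resolvent_2, Phi, Delta):
    """One sweep of (3.28): (x, y) = (x_n, y_n) -> (x_{n+1}, y_{n+1}); assumes p >= 2."""
    def step1(u, z, t, a, b):   # first-component pattern at a level with weights a, b
        return a * u + (1.0 - a - b) * S1n(z - h1(z) + resolvent_1(t, Phi(z, t)))

    def step2(v, z, t, a, b):   # second-component pattern at a level with weights a, b
        return a * v + (1.0 - a - b) * S2n(t - h2(t) + resolvent_2(z, Delta(z, t)))

    # innermost level (z_{n,p-1}, t_{n,p-1}) is built directly from (x_n, y_n)
    z = step1(x, x, y, alpha[p - 1], beta[p - 1])
    t = step2(y, x, y, alpha[p - 1], beta[p - 1])
    # intermediate levels i = p-2, ..., 1 use (x_n, y_n) and the previous level
    for i in range(p - 2, 0, -1):
        z, t = step1(x, z, t, alpha[i], beta[i]), step2(y, z, t, alpha[i], beta[i])
    # outermost level produces (x_{n+1}, y_{n+1}) from (z_{n,1}, t_{n,1})
    return step1(x, z, t, alpha[0], beta[0]), step2(y, z, t, alpha[0], beta[0])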

4 Graph convergence and an application

Definition 4.1

Given multivalued mappings \(M_{n},M:E\rightarrow 2^{E}\) (\(n\geq 0\)), the sequence \(\{M_{n}\}_{n=0}^{\infty}\) is said to be graph-convergent to M, denoted by \(M_{n}\stackrel{G}{\longrightarrow} M\), if for every point \((x,u)\in \operatorname{Graph}(M)\), there exists a sequence of points \((x_{n},u_{n})\in \operatorname{Graph}(M_{n})\) such that \(x_{n}\rightarrow x\) and \(u_{n}\rightarrow u\) as \(n\rightarrow \infty \).
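As a simple illustration of this definition (an elementary example added here for orientation, not tied to the setting of the next theorem), take \(E=\mathbb{R}\), \(M_{n}(x)=\{(1+\frac{1}{n+1})x\}\) for each \(n\geq 0\), and \(M(x)=\{x\}\). For any \((x,x)\in \operatorname{Graph}(M)\) the points \((x_{n},u_{n})=(x,(1+\frac{1}{n+1})x)\in \operatorname{Graph}(M_{n})\) satisfy \(x_{n}\rightarrow x\) and \(u_{n}\rightarrow x\) as \(n\rightarrow \infty \), so \(M_{n}\stackrel{G}{\longrightarrow}M\).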

In the next theorem, a new equivalence relationship between the graph convergence of a sequence of P-η-accretive mappings and the associated resolvent operators to a given P-η-accretive mapping and the associated resolvent operator is established.

Theorem 4.2

Suppose that E is a real q-uniformly smooth Banach space, \(\eta :E\times E\rightarrow E\) is a vector-valued mapping, \(P:E\rightarrow E\) is a strictly η-accretive mapping, and \(M:E\rightarrow 2^{E}\) is a P-η-accretive mapping. Let for each \(n\geq 0\), \(\eta _{n}:E\times E\rightarrow E\) be a \(\tau _{n}\)-Lipschitz continuous mapping, \(P_{n}:E\rightarrow E\) be a \(\gamma _{n}\)-strongly \(\eta _{n}\)-accretive and \(\delta _{n}\)-Lipschitz continuous mapping, and \(M_{n}:E\rightarrow 2^{E}\) be a \(P_{n}\)-\(\eta _{n}\)-accretive mapping. Let \(\lim_{n\rightarrow \infty}P_{n}(x)=P(x)\) for all \(x\in E\), and let \(\{\frac{1}{\gamma _{n}}\}_{n=0}^{\infty}\), \(\{\tau _{n}\}_{n=0}^{\infty}\), and \(\{\delta _{n}\}_{n=0}^{\infty}\) be bounded sequences. Assume further that \(\{\lambda _{n}\}_{n=0}^{\infty}\) is a sequence of positive real constants convergent to a positive real constant λ. Then \(M_{n}\stackrel{G}{\longrightarrow}M\) if and only if \(R^{P_{n},\eta _{n}}_{M_{n},\lambda _{n}}(z)\rightarrow R^{P,\eta}_{M, \lambda}(z)\) for all \(z\in E\) as \(n\rightarrow \infty \), where for each \(n\geq 0\), \(R^{P_{n},\eta _{n}}_{M_{n},\lambda _{n}}=(P_{n}+\lambda _{n}M_{n})^{-1}\) and \(R^{P,\eta}_{M,\lambda}=(P+\lambda M)^{-1}\).

Proof

Suppose first that for all \(z\in E\), we have \(\lim_{n\rightarrow \infty}R^{P_{n},\eta _{n}}_{M_{n}, \lambda _{n}}(z)= R^{P,\eta}_{M,\lambda}(z)\). Then, for any \(u\in M(x)\), we have \(x=R^{P,\eta}_{M,\lambda}[P(x)+\lambda u]\), and so \(R^{P_{n},\eta _{n}}_{M_{n},\lambda _{n}}[P(x)+\lambda u]\rightarrow x\) as \(n\rightarrow \infty \). Taking \(x_{n}=R^{P_{n},\eta _{n}}_{M_{n},\lambda _{n}}[P(x)+\lambda u]\) for each \(n\geq 0\), it follows that \(P(x)+\lambda u\in (P_{n}+\lambda _{n}M_{n})(x_{n})\). Hence, for each \(n\geq 0\), we can choose \(u_{n}\in M_{n}(x_{n})\) such that \(P(x)+\lambda u=P_{n}(x_{n})+\lambda _{n}u_{n}\). Then, for each \(n\geq 0\), we obtain

$$\begin{aligned} \Vert \lambda _{n}u_{n}-\lambda u \Vert =& \bigl\Vert P_{n}(x_{n})-P(x) \bigr\Vert \\ \leq& \bigl\Vert P_{n}(x_{n})-P_{n}(x) \bigr\Vert + \bigl\Vert P_{n}(x)-P(x) \bigr\Vert \\ \leq& \delta _{n} \Vert x_{n}-x \Vert + \bigl\Vert P_{n}(x)-P(x) \bigr\Vert . \end{aligned}$$

Since the sequence \(\{\delta _{n}\}_{n=0}^{\infty}\) is bounded and \(x_{n}\rightarrow x\) and \(P_{n}(x)\rightarrow P(x)\) as \(n\rightarrow \infty \), we conclude that \(\lambda _{n}u_{n}\rightarrow \lambda u\) as \(n\rightarrow \infty \). Moreover, for each \(n\geq 0\), we get

$$\begin{aligned} \lambda \Vert u_{n}-u \Vert =& \Vert \lambda u_{n}-\lambda u \Vert \\ \leq& \Vert \lambda _{n} u_{n}-\lambda u_{n} \Vert + \Vert \lambda _{n} u_{n}-\lambda u \Vert \\ =& \vert \lambda _{n}-\lambda \vert \Vert u_{n} \Vert + \Vert \lambda _{n} u_{n}-\lambda u \Vert . \end{aligned}$$

Taking into account that \(\lambda _{n}\rightarrow \lambda >0\) and \(\lambda _{n}u_{n}\rightarrow \lambda u\) as \(n\rightarrow \infty \) (so that, in particular, the sequence \(\{u_{n}\}\) is bounded), it follows that the right-hand side of the above inequality approaches zero as \(n\rightarrow \infty \). Thus \(u_{n}\rightarrow u\) as \(n\rightarrow \infty \). Now, in view of Definition 4.1, \(M_{n}\stackrel{G}{\longrightarrow}M\).

Conversely, assume that \(M_{n}\stackrel{G}{\longrightarrow}M\), and let \(z\in E\) be chosen arbitrarily but fixed. Since the mapping M is P-η-accretive, it follows that \((P+\lambda M)(E)=E\), and so there exists a point \((x,u)\in \operatorname{Graph}(M)\) such that \(z=P(x)+\lambda u\). By Definition 4.1 there exists a sequence \(\{(x_{n},u_{n})\}_{n=0}^{\infty}\subset \operatorname{Graph}(M_{n})\) such that \(x_{n}\rightarrow x\) and \(u_{n}\rightarrow u\) as \(n\rightarrow \infty \). Since \((x,u)\in \operatorname{Graph}(M)\) and \((x_{n},u_{n})\in \operatorname{Graph}(M_{n})\) for all \(n\geq 0\), we have

$$\begin{aligned} x=R^{P,\eta}_{M,\lambda}\bigl[P(x)+\lambda u\bigr] \quad \text{and}\quad x_{n}=R^{P_{n}, \eta _{n}}_{M_{n},\lambda _{n}} \bigl[P_{n}(x_{n})+\lambda _{n} u_{n} \bigr]\quad \forall n\geq 0. \end{aligned}$$
(4.1)

Picking \(z_{n}=P_{n}(x_{n})+\lambda _{n}u_{n}\) for each \(n\geq 0\), by Lemma 2.17, (4.1), and the assumptions, for each \(n\geq 0\), we obtain

$$\begin{aligned}& \bigl\Vert R^{P_{n},\eta _{n}}_{M_{n},\lambda _{n}}(z)-R^{P,\eta}_{M, \lambda}(z) \bigr\Vert \\& \quad \leq \bigl\Vert R^{P_{n},\eta _{n}}_{M_{n},\lambda _{n}}(z)-R^{P_{n},\eta _{n}}_{M_{n}, \lambda _{n}}(z_{n}) \bigr\Vert + \bigl\Vert R^{P_{n},\eta _{n}}_{M_{n},\lambda _{n}}(z_{n})-R^{P, \eta}_{M,\lambda}(z) \bigr\Vert \\& \quad \leq \frac{\tau _{n}^{q-1}}{\gamma _{n}} \Vert z_{n}-z \Vert + \bigl\Vert R^{P_{n}, \eta _{n}}_{M_{n},\lambda _{n}}\bigl[P_{n}(x_{n})+\lambda _{n} u_{n}\bigr]-R^{P, \eta}_{M,\lambda}\bigl[P(x)+ \lambda u\bigr] \bigr\Vert \\& \quad \leq \frac{\tau _{n}^{q-1}}{\gamma _{n}} \Vert z_{n}-z \Vert + \Vert x_{n}-x \Vert \\& \quad =\frac{\tau _{n}^{q-1}}{\gamma _{n}} \bigl\Vert P_{n}(x_{n})+\lambda _{n} u_{n}-P(x)- \lambda u \bigr\Vert + \Vert x_{n}-x \Vert \\& \quad \leq \frac{\tau _{n}^{q-1}}{\gamma _{n}}\bigl( \bigl\Vert P_{n}(x_{n})-P(x) \bigr\Vert + \Vert \lambda _{n} u_{n}-\lambda u \Vert \bigr)+ \Vert x_{n}-x \Vert \\& \quad \leq \frac{\tau _{n}^{q-1}}{\gamma _{n}}\bigl( \bigl\Vert P_{n}(x_{n})-P_{n}(x) \bigr\Vert + \bigl\Vert P_{n}(x)-P(x) \bigr\Vert \\& \qquad {} + \Vert \lambda _{n} u_{n}-\lambda _{n} u \Vert + \Vert \lambda _{n} u- \lambda u \Vert \bigr)+ \Vert x_{n}-x \Vert \\& \quad \leq \biggl(1+\frac{\delta _{n}\tau _{n}^{q-1}}{\gamma _{n}}\biggr) \Vert x_{n}-x \Vert + \frac{\tau _{n}^{q-1}}{\gamma _{n}} \bigl\Vert P_{n}(x)-P(x) \bigr\Vert \\& \qquad {} +\frac{\lambda _{n}\tau _{n}^{q-1}}{\gamma _{n}} \Vert u_{n}-u \Vert + \frac{ \vert \lambda _{n}-\lambda \vert \tau _{n}^{q-1}}{\gamma _{n}} \Vert u \Vert . \end{aligned}$$

Since \(\lim_{n\rightarrow \infty}\lambda _{n}=\lambda \) and the sequences \(\{\frac{1}{\gamma _{n}}\}_{n=0}^{\infty}\) and \(\{\tau _{n}\}_{n=0}^{\infty}\) are bounded, we deduce that the sequence \(\{\frac{\lambda _{n}\tau _{n}^{q-1}}{\gamma _{n}}\}_{n=0}^{\infty}\) is also bounded. Taking into account the assumptions, we infer that the right-hand side of the preceding inequality tends to zero as \(n\rightarrow \infty \), which guarantees that \(R^{P_{n},\eta _{n}}_{M_{n},\lambda _{n}}(z)\rightarrow R^{P,\eta}_{M, \lambda}(z)\) as \(n\rightarrow \infty \). The proof is finished. □
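A one-dimensional sanity check of this equivalence may help the reader (a sketch with \(E=\mathbb{R}\), P the identity, \(\eta (x,y)=x-y\), and all concrete values chosen arbitrarily): for \(M_{n}(x)=(1+\frac{1}{n+1})x\) graph-converging to \(M(x)=x\) and \(\lambda _{n}\rightarrow \lambda \), the resolvents \((P+\lambda _{n}M_{n})^{-1}\) converge pointwise to \((P+\lambda M)^{-1}\).

lam, z = 0.7, 3.0                       # arbitrary limit parameter and test point
for n in (1, 10, 100, 1000, 10000):
    lam_n = lam + 1.0 / n               # lambda_n -> lambda
    slope_n = 1.0 + 1.0 / (n + 1)       # M_n(x) = slope_n * x, graph-converging to M(x) = x
    R_n = z / (1.0 + lam_n * slope_n)   # resolvent (P + lambda_n M_n)^{-1}(z), P = identity
    R = z / (1.0 + lam)                 # resolvent (P + lambda M)^{-1}(z)
    print(n, R_n, R)                    # R_n approaches R, as Theorem 4.2 predicts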

Definition 4.3

For \(i=1,2\), let \(E_{i}\) be real Banach spaces, and let T be a self-mapping of \(E_{1}\times E_{2}\). Suppose that \((x_{0},y_{0})\in E_{1}\times E_{2}\) and \((x_{n+1},y_{n+1})=f(T,x_{n},y_{n})\) defines the iterative procedure, which yields a sequence of points \(\{(x_{n},y_{n})\}_{n=0}^{\infty}\) in \(E_{1}\times E_{2}\). Assume that \(\operatorname{Fix}(T)=\{(x,y)\in E_{1}\times E_{2}:(x,y)=T(x,y)\}\neq \emptyset \) and \(\{(x_{n},y_{n})\}_{n=0}^{\infty}\) converges to some \((x^{*},y^{*})\in \operatorname{Fix}(T)\). Furthermore, let \(\{(z_{n},w_{n})\}_{n=0}^{\infty}\) be an arbitrary sequence in \(E_{1}\times E_{2}\), and denote \(\epsilon _{n}=\Vert (z_{n+1},w_{n+1})-f(T,z_{n},w_{n})\Vert \) for \(n\in \mathbb{N}\cup \{0\}\). If \(\lim_{n\rightarrow \infty}\epsilon _{n}=0\) implies that \(\lim_{n\rightarrow \infty}(z_{n},w_{n})=(x^{*},y^{*})\), then the iterative procedure defined by \((x_{n+1},y_{n+1})=f(T,x_{n},y_{n})\) is said to be T-stable or stable with respect to T.
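The following minimal sketch illustrates the definition for the trivial procedure \((x_{n+1},y_{n+1})=T(x_{n},y_{n})\) with a toy contraction T on \(\mathbb{R}^{2}\) (chosen arbitrarily and unrelated to Algorithm 3.11): a perturbed sequence whose residuals \(\epsilon _{n}\) tend to zero still converges to the fixed point, which is exactly the T-stability property.

import numpy as np

def T(p):                                    # toy contraction with fixed point (2, -2)
    return 0.5 * p + np.array([1.0, -1.0])

fixed_point = np.array([2.0, -2.0])          # solves p = T(p)

z = np.array([40.0, 13.0])                   # perturbed sequence (z_n, w_n)
for n in range(1, 300):
    z = T(z) + np.array([1.0, 1.0]) / n**2   # eps_n = ||(z_{n+1}, w_{n+1}) - T(z_n, w_n)|| -> 0
print(np.linalg.norm(z - fixed_point))       # small: the perturbed iterates still converge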

Remark 4.4

It is significant to mention that in the last decades a lot of studies on the stability of iterative procedures for variational inequalities and variational inclusions have been carried out by researchers; see, for example, [21–23, 25, 40, 49, 60–64] and the references therein.

We now prove, as an application of the notion of graph convergence for P-η-accretive mappings, the strong convergence of the iterative sequence generated by Algorithm 3.11 to a common element of the two sets \(\Omega _{\operatorname{SGNVLI}}\) and \(\operatorname{Fix}(Q)\), where Q is a self-mapping of \(E_{1}\times E_{2}\) defined by (3.26). At the same time, we establish the stability of the iterative sequence generated by Algorithm 3.11. Before presenting the most important result of this paper, we need to recall the following lemma, which will be used in our proof.

Lemma 4.5

Let \(\{a_{n}\}\), \(\{b_{n}\}\), and \(\{c_{n}\}\) be three nonnegative real sequences satisfying the following conditions: there exists a natural number \(n_{0}\) such that

$$\begin{aligned} a_{n+1}\leq (1-t_{n})a_{n}+b_{n}t_{n}+c_{n}\quad \forall n\geq n_{0}, \end{aligned}$$

where \(t_{n}\in [0,1]\), \(\sum_{n=0}^{\infty }t_{n}=\infty \), \(\lim_{n\rightarrow \infty}b_{n}=0\), and \(\sum_{n=0}^{\infty }c_{n}<\infty \).

Then \(\lim_{n\rightarrow \infty}a_{n}=0\).

Proof

The proof follows directly from Lemma 2 in [65]. □
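A quick numerical illustration of the lemma (a sketch with arbitrarily chosen sequences satisfying its hypotheses, taking equality in the recursion as the worst case allowed by the bound):

a = 10.0
for n in range(1, 200001):
    t = 1.0 / n          # t_n in [0, 1] with divergent sum
    b = n ** -0.5        # b_n -> 0
    c = n ** -2.0        # summable c_n
    a = (1.0 - t) * a + b * t + c
print(a)                 # close to 0, as Lemma 4.5 predicts (decay is roughly of order n**-0.5 here)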

Theorem 4.6

Let \(E_{i}\), \(\eta _{i}\), \(P_{i}\), \(f_{i}\), \(g_{i}\), \(h_{i}\), \(F_{i}\), \(T_{i}\), \(M_{i}\) (\(i=1,2\)) be as in Theorem 3.4, and let all the conditions of Theorem 3.4 hold. Assume that \(\eta _{n,i}\), \(P_{n,i}\), \(M_{n,i}\) (\(n\geq 0\) and \(i=1,2\)) are as in Algorithm 3.11. Let for each \(i\in \{1,2\}\), \(S_{i}:E_{i}\rightarrow E_{i}\) be a generalized \((L_{i},\{a_{n,i}\}_{n=0}^{\infty},\{b_{n,i}\}_{n=0}^{\infty})\)-nearly asymptotically nonexpansive mapping such that \(L_{i}(k+1)<2\), where k is as in (3.21), and let Q be a self-mapping of \(E_{1}\times E_{2}\) defined by (3.26) such that \(\operatorname{Fix}(Q)\cap \Omega _{\operatorname{SGNVLI}}\neq \emptyset \). Suppose that for all \(n\geq 0\) and \(i\in \{1,2\}\), \(\eta _{n,i}\) is \(\tau _{n,i}\)-Lipschitz continuous, and \(P_{n,i}\) is \(\gamma _{n,i}\)-strongly \(\eta _{n,i}\)-accretive and \(\delta _{n,i}\)-Lipschitz continuous such that \(\lim_{n\rightarrow \infty}P_{n,i}(x_{i})=P_{i}(x_{i})\) for all \(x_{i}\in E_{i}\). For all \(i\in \{1,2\}\) and \(j\in \{1,2\}\backslash \{i\}\), let \(M_{n,i}(\cdot ,x_{j})\stackrel{G}{\longrightarrow}M_{i}(\cdot ,x_{j})\) for all \(x_{j}\in E_{j}\), and let \(\gamma _{n,i}\rightarrow \gamma _{i}\), \(\tau _{n,i}\rightarrow \tau _{i}\), and \(\delta _{n,i}\rightarrow \delta _{i}\) as \(n\rightarrow \infty \). Let there exist constants \(o_{n,i}\) such that for all \(n\geq 0\),

$$\begin{aligned} &\bigl\Vert R^{P_{n,1},\eta _{n,1}}_{M_{n,1}(\cdot ,u),\lambda _{n}}(w)-R^{P_{n,1}, \eta _{n,1}}_{M_{n,1}(\cdot ,v),\lambda _{n}}(w) \bigr\Vert \leq o_{n,1} \Vert u-v \Vert _{1}\quad \forall u,v,w\in E_{1}, \end{aligned}$$
(4.2)
$$\begin{aligned} &\bigl\Vert R^{P_{n,2},\eta _{n,2}}_{M_{n,2}(\cdot ,u),\rho _{n}}(w)-R^{P_{n,2}, \eta _{n,2}}_{M_{n,2}(\cdot ,v),\rho _{n}}(w) \bigr\Vert \leq o_{n,2} \Vert u-v \Vert _{2} \quad \forall u,v,w\in E_{2}. \end{aligned}$$
(4.3)

Further, let there exist constants \(o_{i}>0\) (\(i=1,2\)) and \(\lambda ,\rho >0\) such that (3.2)(3.5) hold, \(\lambda _{n}\rightarrow \lambda \), \(\rho _{n}\rightarrow \rho \), \(o_{n,i}\rightarrow o_{i}\) as \(n\rightarrow \infty \) for each \(i\in \{1,2\}\), and for the cases where \(q_{1}\) and \(q_{2}\) are even natural numbers, in addition to (3.4) and (3.5), let (3.6) hold. Then

  1. (i)

    the iterative sequence \(\{(x_{n},y_{n})\}_{n=0}^{\infty}\) generated by Algorithm 3.11 converges strongly to the unique element \((x^{*},y^{*})\) of \(\operatorname{Fix}(Q)\cap \Omega _{\operatorname{SGNVLI}}\);

  2. (ii)

    if, in addition, there exists a constant \(\alpha >0\) such that \(\alpha +\alpha _{n,1}\leq 1\) for each \(n\geq 0\), then \(\lim_{n\rightarrow \infty}(u_{n},v_{n})=(x^{*},y^{*})\) if and only if \(\lim_{n\rightarrow \infty}\epsilon _{n}=0\), where \(\{(u_{n},v_{n})\}_{n=0}^{\infty}\) is any sequence in \(E_{1}\times E_{2}\) satisfying (3.30).

Proof

Since all the conditions of Theorem 3.4 hold, the existence of a unique solution \((x^{*},y^{*})\in E_{1}\times E_{2}\) for SGNVLI (3.1) is guaranteed by Theorem 3.4. Then Lemma 3.1 implies that

$$\begin{aligned}& \textstyle\begin{cases} x^{*}=x^{*}-h_{1}(x^{*})+R^{P_{1},\eta _{1}}_{M_{1}(\cdot ,y^{*}),\lambda}[P_{1}(h_{1}(x^{*})) \\ \hphantom{x^{*}=}{} -\lambda (F_{1}(x^{*},y^{*}-f_{2}(y^{*}))+T_{1}(y^{*},x^{*}-g_{1}(x^{*})))], \\ y^{*}=y^{*}-h_{2}(y^{*})+R^{P_{2},\eta _{2}}_{M_{2}(\cdot ,x^{*}),\rho}[P_{2}(h_{2}(y^{*})) \\ \hphantom{y^{*}=}{} -\rho (F_{2}(x^{*}-f_{1}(x^{*}),y^{*})+T_{2}(x^{*},y^{*}-g_{2}(y^{*})))]. \end{cases}\displaystyle \end{aligned}$$
(4.4)

Since \(\Omega _{\operatorname{SGNVLI}}\) is a singleton set and \(\operatorname{Fix}(Q)\cap \Omega _{\operatorname{SGNVLI}}\neq \emptyset \), we deduce that \(x^{*}\in \operatorname{Fix}(S_{1})\) and \(y^{*}\in \operatorname{Fix}(S_{2})\). Therefore, using (4.4), for each \(n\geq 0\), we can write

$$\begin{aligned}& \textstyle\begin{cases} x^{*}=\alpha _{n,1}x^{*}+(1-\alpha _{n,1}-\beta _{n,1})S^{n}_{1} (x^{*}-h_{1}(x^{*}) +R^{P_{1},\eta _{1}}_{M_{1}(\cdot ,y^{*}),\lambda}(\Phi (x^{*},y^{*})) )+\beta _{n,1}x^{*}, \\ y^{*}=\alpha _{n,1}y^{*}+(1-\alpha _{n,1}-\beta _{n,1})S^{n}_{2} (y^{*}-h_{2}(y^{*}) +R^{P_{2},\eta _{2}}_{M_{2}(\cdot ,x^{*}),\rho}(\Delta (x^{*},y^{*})) )+ \beta _{n,1}y^{*}, \end{cases}\displaystyle \end{aligned}$$
(4.5)

where

$$\begin{aligned}& \Phi \bigl(x^{*},y^{*} \bigr)=P_{1}\bigl(h_{1}\bigl(x^{*}\bigr)\bigr)- \lambda \bigl(F_{1}\bigl(x^{*},y^{*}-f_{2} \bigl(y^{*}\bigr)\bigr)+T_{1}\bigl(y^{*},x^{*}-g_{1} \bigl(x^{*}\bigr)\bigr)\bigr), \\& \Delta \bigl(x^{*},y^{*}\bigr)=P_{2} \bigl(h_{2}\bigl(y^{*}\bigr)\bigr)-\rho \bigl(F_{2} \bigl(x^{*}-f_{1}\bigl(x^{*}\bigr),y^{*} \bigr)+T_{2}\bigl(x^{*},y^{*}-g_{2} \bigl(y^{*}\bigr)\bigr)\bigr). \end{aligned}$$

Letting

$$\begin{aligned} N =&\max \Bigl\{ \sup_{n\geq 0} \bigl\Vert k_{n,i}-x^{*} \bigr\Vert _{1},\sup_{n\geq 0} \bigl\Vert \hat{k}_{n,i}-y^{*} \bigr\Vert _{2}:i=1,2,\dots ,p \Bigr\} \end{aligned}$$

and using (3.28), (4.2), (4.5), Lemma 2.17, and the assumptions, we can obtain that

$$\begin{aligned} \bigl\Vert x_{n+1}-x^{*} \bigr\Vert _{1} =& \bigl\Vert \alpha _{n,1}x_{n}+(1- \alpha _{n,1}-\beta _{n,1})S^{n}_{1} \bigl\{ z_{n,1}-h_{1}(z_{n,1}) \\ &{} +R^{P_{n,1},\eta _{n,1}}_{M_{n,1}(\cdot ,t_{n,1}),\lambda _{n}}\bigl( \Phi (z_{n,1},t_{n,1}) \bigr)\bigr\} +\alpha _{n,1}e_{n,1}+\beta _{n,1}k_{n,1}+l_{n,1} \\ &{} - \bigl(\alpha _{n,1}x^{*}+(1-\alpha _{n,1}- \beta _{n,1})S^{n}_{1} \bigl\{ x^{*}-h_{1} \bigl(x^{*}\bigr) \\ &{} +R^{P_{1},\eta _{1}}_{M_{1}(\cdot ,y^{*}),\lambda}\bigl(\Phi \bigl(x^{*},y^{*} \bigr)\bigr) \bigr\} +\beta _{n,1}x^{*} \bigr) \bigr\Vert _{1} \\ \leq& \alpha _{n,1} \bigl\Vert x_{n}-x^{*} \bigr\Vert _{1}+(1-\alpha _{n,1}- \beta _{n,1})L_{1} \bigl((1+a_{n,1}) \bigl(\vartheta _{1}(n) \bigl\Vert z_{n,1}-x^{*} \bigr\Vert _{1} \\ &{} +\phi _{1}(n) \bigl\Vert t_{n,1}-y^{*} \bigr\Vert _{2}+\Psi _{1}(n)+ \bigl\Vert \Gamma _{1}(n) \bigr\Vert _{1} \bigr)+b_{n,1} \bigr) \\ &{} +\alpha _{n,1} \bigl\Vert e'_{n,1} \bigr\Vert _{1}+ \bigl\Vert e''_{n,1} \bigr\Vert _{1}+ \Vert l_{n,1} \Vert _{1}+ \beta _{n,1}N, \end{aligned}$$
(4.6)

where for each \(n\geq 0\),

$$\begin{aligned}& \vartheta _{1}(n)= \sqrt[q_{1}]{1-q_{1} \theta _{1}+(c_{q_{1}}+q_{1}\varpi _{1}) \iota ^{q_{1}}_{1}} +\frac{\tau ^{q_{1}-1}_{n,1}}{\gamma _{n,1}} \bigl( \sqrt[q_{1}]{\delta ^{q_{1}}_{n,1}\iota ^{q_{1}}_{1}-q_{1}\lambda _{n}r_{1} +\lambda ^{q_{1}}_{n}c_{q_{1}}s^{q_{1}}_{1}} \\& \hphantom{\vartheta _{1}(n)=}{} +\lambda _{n}\mu _{1} \sqrt[q_{1}]{1-q_{1}\nu _{1}+(c_{q_{1}}+q_{1} \zeta _{1})\varrho ^{q_{1}}_{1}} \bigr), \\& \phi _{1}(n)=o_{n,1}+ \frac{\lambda _{n}\tau ^{q_{1}-1}_{n,1}}{\gamma _{n,1}} \bigl(\xi _{1} \sqrt[q_{2}]{1-q_{2}\varsigma _{2} +(c_{q_{2}}+q_{2}\sigma _{2})\pi ^{q_{2}}_{2}}+ \varepsilon _{1} \bigr), \\& \Psi _{1}(n)=\frac{\tau ^{q_{1}-1}_{n,1}}{\gamma _{n,1}} \bigl( \bigl\Vert P_{n,1}\bigl(h_{1}\bigl(x^{*}\bigr) \bigr)-P_{1}\bigl(h_{1}\bigl(x^{*}\bigr)\bigr) \bigr\Vert _{1}+ \vert \lambda _{n}-\lambda \vert \bigl( \bigl\Vert F_{1}\bigl(x^{*},y^{*}-f_{2} \bigl(y^{*}\bigr)\bigr) \bigr\Vert _{1} \\& \hphantom{\Psi _{1}(n)=}{}+ \bigl\Vert T_{1}\bigl(y^{*},x^{*}-g_{1} \bigl(x^{*}\bigr)\bigr) \bigr\Vert _{1}\bigr) \bigr), \\& \Gamma _{1}(n)=R^{P_{n,1},\eta _{n,1}}_{M_{n,1}(\cdot ,y^{*}),\lambda _{n}}\bigl( \Phi \bigl(x^{*},y^{*}\bigr)\bigr) -R^{P_{1},\eta _{1}}_{M_{1}(\cdot ,y^{*}),\lambda} \bigl( \Phi \bigl(x^{*},y^{*}\bigr)\bigr). \end{aligned}$$

In a similar way, using (3.28), (4.3), (4.5), Lemma 2.17, and the assumptions, for each \(n\geq 0\), we have

$$\begin{aligned} \bigl\Vert y_{n+1}-y^{*} \bigr\Vert _{2} \leq& \alpha _{n,1} \bigl\Vert y_{n}-y^{*} \bigr\Vert _{2}+(1-\alpha _{n,1}-\beta _{n,1})L_{2} \bigl((1+a_{n,2}) \bigl( \vartheta _{2}(n) \bigl\Vert z_{n,1}-x^{*} \bigr\Vert _{1} \\ &{} +\phi _{2}(n) \bigl\Vert t_{n,1}-y^{*} \bigr\Vert _{2}+\Psi _{2}(n)+ \bigl\Vert \Gamma _{2}(n) \bigr\Vert _{2} \bigr)+b_{n,2} \bigr) \\ &{} +\alpha _{n,1} \bigl\Vert \hat{e}'_{n,1} \bigr\Vert _{2}+ \bigl\Vert \hat{e}''_{n,1} \bigr\Vert _{2}+ \Vert \hat{l}_{n,1} \Vert _{2}+\beta _{n,1}N, \end{aligned}$$
(4.7)

where for each \(n\geq 0\),

$$\begin{aligned}& \phi _{2}(n)= \sqrt[q_{2}]{1-q_{2} \theta _{2}+(c_{q_{2}}+q_{2}\varpi _{2})\iota ^{q_{2}}_{2}} +\frac{\tau ^{q_{2}-1}_{n,2}}{\gamma _{n,2}} \bigl( \sqrt[q_{2}]{ \delta ^{q_{2}}_{n,2}\iota ^{q_{2}}_{2}-q_{2} \rho _{n}r_{2} +\rho ^{q_{2}}_{n}c_{q_{2}}s^{q_{2}}_{2}} \\& \hphantom{\phi _{2}(n)=}{} +\rho _{n}\mu _{2} \sqrt[q_{2}]{1-q_{2} \nu _{2}+(c_{q_{2}}+q_{2}\zeta _{2})\varrho ^{q_{2}}_{2}} \bigr), \\& \vartheta _{2}(n)=o_{n,2}+ \frac{\rho _{n}\tau ^{q_{2}-1}_{n,2}}{\gamma _{n,2}} \bigl(\xi _{2} \sqrt[q_{1}]{1-q_{1}\varsigma _{1} +(c_{q_{1}}+q_{1}\sigma _{1})\pi ^{q_{1}}_{1}}+\varepsilon _{2} \bigr), \\& \Psi _{2}(n)=\frac{\tau ^{q_{2}-1}_{n,2}}{\gamma _{n,2}} \bigl( \bigl\Vert P_{n,2}\bigl(h_{2}\bigl(y^{*}\bigr) \bigr)-P_{2}\bigl(h_{2}\bigl(y^{*}\bigr)\bigr) \bigr\Vert _{2}+ \vert \rho _{n}-\rho \vert \bigl( \bigl\Vert F_{2}\bigl(x^{*}-f_{1}\bigl(x^{*} \bigr),y^{*}\bigr) \bigr\Vert _{2} \\& \hphantom{\Psi _{2}(n)=}{} + \bigl\Vert T_{2}\bigl(x^{*},y^{*}-g_{2} \bigl(y^{*}\bigr)\bigr) \bigr\Vert _{2}\bigr) \bigr), \\& \Gamma _{2}(n)=R^{P_{n,2},\eta _{n,2}}_{M_{n,2}(\cdot ,x^{*}),\rho _{n}}\bigl( \Delta \bigl(x^{*},y^{*}\bigr)\bigr) -R^{P_{2},\eta _{2}}_{M_{2}(\cdot ,x^{*}),\rho} \bigl( \Delta \bigl(x^{*},y^{*}\bigr)\bigr). \end{aligned}$$

Letting \(L=\max \{L_{1},L_{2}\}\), from (4.6) and (4.7) it follows that

$$\begin{aligned} \bigl\Vert (x_{n+1},y_{n+1})- \bigl(x^{*},y^{*}\bigr) \bigr\Vert _{*}={}& \bigl\Vert x_{n+1}-x^{*} \bigr\Vert _{1}+ \bigl\Vert y_{n+1}-y^{*} \bigr\Vert _{2} \\ \leq{}& \alpha _{n,1}\bigl( \bigl\Vert x_{n}-x^{*} \bigr\Vert _{1}+ \bigl\Vert y_{n}-y^{*} \bigr\Vert _{2}\bigr) \\ &{} +(1-\alpha _{n,1}-\beta _{n,1})L (\bigl(\vartheta _{1}(n)+ \vartheta _{2}(n)\bigr) \bigl\Vert z_{n,1}-x^{*} \bigr\Vert _{1} \\ &{} +\bigl(\phi _{1}(n)+\phi _{2}(n)\bigr) \bigl\Vert t_{n,1}-y^{*} \bigr\Vert _{2} \\ &{} +\Psi _{1}(n)+\Psi _{2}(n)+ \bigl\Vert \Gamma _{1}(n) \bigr\Vert _{1}+ \bigl\Vert \Gamma _{2}(n) \bigr\Vert _{2} \\ &{} +(a_{n,1}+a_{n,2}) \bigl(\bigl(\vartheta _{1}(n)+\vartheta _{2}(n)\bigr) \bigl\Vert z_{n,1}-x^{*} \bigr\Vert _{1} \\ &{} +\bigl(\phi _{1}(n)+\phi _{2}(n)\bigr) \bigl\Vert t_{n,1}-y^{*} \bigr\Vert _{2}+ \Psi _{1}(n)+\Psi _{2}(n) \\ &{} + \bigl\Vert \Gamma _{1}(n) \bigr\Vert _{1} + \bigl\Vert \Gamma _{2}(n) \bigr\Vert _{2} \bigr)+b_{n,1}+b_{n,2} ) \\ & {}+\alpha _{n,1}\bigl( \bigl\Vert e'_{n,1} \bigr\Vert _{1}+ \bigl\Vert \hat{e}'_{n,1} \bigr\Vert _{2}\bigr)+ \bigl\Vert e''_{n,1} \bigr\Vert _{1} \\ \begin{aligned}&{} + \bigl\Vert \hat{e}''_{n,1} \bigr\Vert _{2}+ \Vert l_{n,1} \Vert _{1}+ \Vert \hat{l}_{n,1} \Vert _{2}+2\beta _{n,1}N \\ \leq{}& \alpha _{n,1} \bigl\Vert (x_{n},y_{n})- \bigl(x^{*},y^{*}\bigr) \bigr\Vert _{*} \end{aligned}\\ &{} +(1-\alpha _{n,1}-\beta _{n,1})L \bigl(k(n) \bigl\Vert (z_{n,1},t_{n,1})-\bigl(x^{*},y^{*}\bigr) \bigr\Vert _{*} \\ &{} +\Psi _{1}(n)+\Psi _{2}(n)+ \bigl\Vert \bigl(\Gamma _{1}(n),\Gamma _{2}(n)\bigr) \bigr\Vert _{*} \\ &{} +(a_{n,1}+a_{n,2}) \bigl(k(n) \bigl\Vert (z_{n,1},t_{n,1})-\bigl(x^{*},y^{*}\bigr) \bigr\Vert _{*} \\ &{} +\Psi _{1}(n)+\Psi _{2}(n)+ \bigl\Vert \bigl(\Gamma _{1}(n),\Gamma _{2}(n)\bigr) \bigr\Vert _{*} \bigr)+b_{n,1}+b_{n,2} \bigr) \\ &{} +\alpha _{n,1} \bigl\Vert \bigl(e'_{n,1}, \hat{e}'_{n,1}\bigr) \bigr\Vert _{*} + \bigl\Vert \bigl(e''_{n,1},\hat{e}''_{n,1} \bigr) \bigr\Vert _{*}+ \bigl\Vert (l_{n,1}, \hat{l}_{n,1}) \bigr\Vert _{*} \\ ={}&\alpha _{n,1} \bigl\Vert (x_{n},y_{n})- \bigl(x^{*},y^{*}\bigr) \bigr\Vert _{*} \\ &{} +(1-\alpha _{n,1}-\beta _{n,1})Lk(n) \bigl\Vert (z_{n,1},t_{n,1})-\bigl(x^{*},y^{*}\bigr) \bigr\Vert _{*} \\ & {}+\Upsilon _{1}(n)+\alpha _{n,1} \bigl\Vert \bigl(e'_{n,1},\hat{e}'_{n,1}\bigr) \bigr\Vert _{*} + \bigl\Vert \bigl(e''_{n,1}, \hat{e}''_{n,1}\bigr) \bigr\Vert _{*} \\ &{} + \bigl\Vert (l_{n,1},\hat{l}_{n,1}) \bigr\Vert _{*}+2\beta _{n,1}N, \end{aligned}$$
(4.8)

where for each \(n\geq 0\),

$$\begin{aligned} k(n) =&\max \bigl\{ \vartheta _{1}(n)+\vartheta _{2}(n),\phi _{1}(n)+\phi _{2}(n) \bigr\} \end{aligned}$$

and

$$\begin{aligned} \Upsilon _{1}(n) =&(1-\alpha _{n,1}-\beta _{n,1})L \bigl( \Psi _{1}(n)+\Psi _{2}(n)+ \bigl\Vert \bigl(\Gamma _{1}(n),\Gamma _{2}(n)\bigr) \bigr\Vert _{*} \\ &{}+(a_{n,1}+a_{n,2}) \bigl(k(n) \bigl\Vert (z_{n,1},t_{n,1})-\bigl(x^{*},y^{*}\bigr) \bigr\Vert _{*} +\Psi _{1}(n) \\ &{} +\Psi _{2}(n)+ \bigl\Vert \bigl(\Gamma _{1}(n),\Gamma _{2}(n)\bigr) \bigr\Vert _{*}\bigr)+b_{n,1}+b_{n,2} \bigr). \end{aligned}$$

By arguments similar to those in the proofs of (4.6)–(4.8), we can show that for \(i=1,2,\dots ,p-2\),

$$\begin{aligned}& \bigl\Vert (z_{n,i},t_{n,i})- \bigl(x^{*},y^{*}\bigr) \bigr\Vert _{*} \\& \quad \leq \alpha _{n,i+1} \bigl\Vert (x_{n},y_{n})- \bigl(x^{*},y^{*}\bigr) \bigr\Vert _{*} \\& \qquad {} +(1-\alpha _{n,i+1}-\beta _{n,i+1})Lk(n) \bigl\Vert (z_{n,i+1},t_{n,i+1})-\bigl(x^{*},y^{*}\bigr) \bigr\Vert _{*} \\& \qquad {} +\Upsilon _{i+1}(n) +\alpha _{n,i+1} \bigl\Vert \bigl(e'_{n,i+1},\hat{e}'_{n,i+1}\bigr) \bigr\Vert _{*}+ \bigl\Vert \bigl(e''_{n,i+1}, \hat{e}''_{n,i+1}\bigr) \bigr\Vert _{*} \\& \qquad {} + \bigl\Vert (l_{n,i+1},\hat{l}_{n,i+1}) \bigr\Vert _{*}+2\beta _{n,i+1}N \end{aligned}$$
(4.9)

and

$$\begin{aligned}& \bigl\Vert (z_{n,p-1},t_{n,p-1})- \bigl(x^{*},y^{*}\bigr) \bigr\Vert _{*} \\& \quad \leq \alpha _{n,p} \bigl\Vert (x_{n},y_{n})- \bigl(x^{*},y^{*}\bigr) \bigr\Vert _{*} \\& \qquad {} +(1-\alpha _{n,p}-\beta _{n,p})Lk(n) \bigl\Vert (x_{n},y_{n})-\bigl(x^{*},y^{*}\bigr) \bigr\Vert _{*} \\& \qquad {} +\Upsilon _{p}(n) +\alpha _{n,p} \bigl\Vert \bigl(e'_{n,p},\hat{e}'_{n,p}\bigr) \bigr\Vert _{*}+ \bigl\Vert \bigl(e''_{n,p}, \hat{e}''_{n,p}\bigr) \bigr\Vert _{*} \\& \qquad {} + \bigl\Vert (l_{n,p},\hat{l}_{n,p}) \bigr\Vert _{*}+2\beta _{n,p}N, \end{aligned}$$
(4.10)

where for \(i=1,2,\dots ,p-2\),

$$\begin{aligned} \Upsilon _{i+1}(n) =&(1-\alpha _{n,i+1}-\beta _{n,i+1})L \bigl(\Psi _{1}(n)+\Psi _{2}(n) + \bigl\Vert \bigl(\Gamma _{1}(n),\Gamma _{2}(n)\bigr) \bigr\Vert _{*} \\ &{} +(a_{n,1}+a_{n,2}) \bigl(k(n) \bigl\Vert (z_{n,i+1},t_{n,i+1})-\bigl(x^{*},y^{*}\bigr) \bigr\Vert _{*} +\Psi _{1}(n) \\ &{} +\Psi _{2}(n)+ \bigl\Vert \bigl(\Gamma _{1}(n),\Gamma _{2}(n)\bigr) \bigr\Vert _{*} \bigr)+b_{n,1}+b_{n,2} \bigr) \end{aligned}$$

and

$$\begin{aligned} \Upsilon _{p}(n) =&(1-\alpha _{n,p}-\beta _{n,p})L \bigl( \Psi _{1}(n)+\Psi _{2}(n) + \bigl\Vert \bigl(\Gamma _{1}(n),\Gamma _{2}(n)\bigr) \bigr\Vert _{*} \\ &{} +(a_{n,1}+a_{n,2}) \bigl(k(n) \bigl\Vert (x_{n},y_{n})-\bigl(x^{*},y^{*}\bigr) \bigr\Vert _{*} +\Psi _{1}(n) \\ &{} +\Psi _{2}(n)+ \bigl\Vert \bigl(\Gamma _{1}(n),\Gamma _{2}(n)\bigr) \bigr\Vert _{*} \bigr)+b_{n,1}+b_{n,2} \bigr). \end{aligned}$$

Obviously, \(k(n)\rightarrow k=\max \{\vartheta _{1}+\vartheta _{2},\phi _{1}+ \phi _{2}\}\) as \(n\rightarrow \infty \), where \(\vartheta _{1}\), \(\vartheta _{2}\), \(\phi _{1}\), \(\phi _{2}\) are as in (3.17) and (3.18). Then for \(\hat{k}=\frac{1}{2}(k+1)\), which lies in \((k,1)\) because \(\hat{k}-k=1-\hat{k}=\frac{1-k}{2}>0\) whenever \(k<1\), there exists \(n_{0}\in \mathbb{N}\) such that \(k(n)<\hat{k}\) for all \(n\geq n_{0}\). Employing (4.9) and (4.10), we can show that for all \(n\geq n_{0}\),

$$\begin{aligned}& \bigl\Vert (z_{n,1},t_{n,1})- \bigl(x^{*},y^{*}\bigr) \bigr\Vert _{*} \\& \quad \leq \alpha _{n,2} \bigl\Vert (x_{n},y_{n})- \bigl(x^{*},y^{*}\bigr) \bigr\Vert _{*} +(1- \alpha _{n,2}-\beta _{n,2})L\hat{k} \bigl\Vert (z_{n,2},t_{n,2})-\bigl(x^{*},y^{*}\bigr) \bigr\Vert _{*} \\& \qquad {} +\Upsilon _{2}(n)+\alpha _{n,2} \bigl\Vert \bigl(e'_{n,2},\hat{e}'_{n,2}\bigr) \bigr\Vert _{*} + \bigl\Vert \bigl(e''_{n,2}, \hat{e}''_{n,2}\bigr) \bigr\Vert _{*}+ \bigl\Vert (l_{n,2}, \hat{l}_{n,2}) \bigr\Vert _{*}+2\beta _{n,2}N \\& \quad \leq \Biggl(\alpha _{n,2}+(1-\alpha _{n,2}-\beta _{n,2})\alpha _{n,3}L \hat{k} +(1-\alpha _{n,2}- \beta _{n,2}) (1-\alpha _{n,3}-\beta _{n,3}) \alpha _{n,4}L^{2}\hat{k}^{2} \\& \qquad {} +\cdots +\prod_{i=2}^{p-1}(1-\alpha _{n,i}-\beta _{n,i}) \alpha _{n,p}L^{p-2} \hat{k}^{p-2} \\& \qquad {} +\prod_{i=2}^{p}(1-\alpha _{n,i}-\beta _{n,i})L^{p-1} \hat{k}^{p-1} \Biggr) \bigl\Vert (x_{n},y_{n})-\bigl(x^{*},y^{*} \bigr) \bigr\Vert _{*} \\& \qquad {} +\prod_{i=2}^{p-1}(1- \alpha _{n,i}- \beta _{n,i})L^{p-2} \hat{k}^{p-2}\Upsilon _{p}(n) +\prod _{i=2}^{p-2}(1- \alpha _{n,i}-\beta _{n,i})L^{p-3}\hat{k}^{p-3}\Upsilon _{p-1}(n) \\& \qquad {} +\cdots +(1-\alpha _{n,2}-\beta _{n,2}) (1-\alpha _{n,3}-\beta _{n,3})L^{2} \hat{k}^{2} \Upsilon _{4}(n) \\& \qquad {} +(1-\alpha _{n,2}-\beta _{n,2})L\hat{k}\Upsilon _{3}(n)+ \Upsilon _{2}(n) \\& \qquad {} +\prod_{i=2}^{p-1}(1-\alpha _{n,i}-\beta _{n,i}) \alpha _{n,p}L^{p-2} \hat{k}^{p-2} \bigl\Vert \bigl(e'_{n,p}, \hat{e}'_{n,p}\bigr) \bigr\Vert _{*} \\& \qquad {} +\prod_{i=2}^{p-2}(1-\alpha _{n,i}-\beta _{n,i}) \alpha _{n,p-1}L^{p-3} \hat{k}^{p-3} \bigl\Vert \bigl(e'_{n,p-1}, \hat{e}'_{n,p-1}\bigr) \bigr\Vert _{*}+\cdots \\& \qquad {} +(1-\alpha _{n,2}-\beta _{n,2}) (1-\alpha _{n,3}-\beta _{n,3}) \alpha _{n,4}L^{2} \hat{k}^{2} \bigl\Vert \bigl(e'_{n,4}, \hat{e}'_{n,4}\bigr) \bigr\Vert _{*} \\& \qquad {} +(1-\alpha _{n,2}-\beta _{n,2})\alpha _{n,3}L \hat{k} \bigl\Vert \bigl(e'_{n,3}, \hat{e}'_{n,3} \bigr) \bigr\Vert _{*} +\alpha _{n,2} \bigl\Vert \bigl(e'_{n,2},\hat{e}'_{n,2}\bigr) \bigr\Vert _{*} \\& \qquad {} +\prod_{i=2}^{p-1}(1-\alpha _{n,i}-\beta _{n,i})L^{p-2} \hat{k}^{p-2} \bigl\Vert \bigl(e''_{n,p}, \hat{e}''_{n,p}\bigr) \bigr\Vert _{*} \\& \qquad {} +\prod_{i=2}^{p-2}(1-\alpha _{n,i}-\beta _{n,i})L^{p-3} \hat{k}^{p-3} \bigl\Vert \bigl(e''_{n,p-1}, \hat{e}''_{n,p-1}\bigr) \bigr\Vert _{*}+\cdots \\& \qquad {} +(1-\alpha _{n,2}-\beta _{n,2}) (1-\alpha _{n,3}-\beta _{n,3})L^{2} \hat{k}^{2} \bigl\Vert \bigl(e''_{n,4}, \hat{e}''_{n,4}\bigr) \bigr\Vert _{*} \\& \qquad {} +(1-\alpha _{n,2}-\beta _{n,2})L\hat{k} \bigl\Vert \bigl(e''_{n,3}, \hat{e}''_{n,3} \bigr) \bigr\Vert _{*} + \bigl\Vert \bigl(e''_{n,2}, \hat{e}''_{n,2}\bigr) \bigr\Vert _{*} \\& \qquad {} +\prod_{i=2}^{p-1}(1-\alpha _{n,i}-\beta _{n,i})L^{p-2} \hat{k}^{p-2} \bigl\Vert (l_{n,p},\hat{l}_{n,p}) \bigr\Vert _{*} \\& \qquad {} +\prod_{i=2}^{p-2}(1-\alpha _{n,i}-\beta _{n,i})L^{p-3} \hat{k}^{p-3} \bigl\Vert (l_{n,p-1},\hat{l}_{n,p-1}) \bigr\Vert _{*}+\cdots \\& \qquad {} +(1-\alpha _{n,2}-\beta _{n,2}) (1-\alpha _{n,3}-\beta _{n,3})L^{2} \hat{k}^{2} \bigl\Vert (l_{n,4},\hat{l}_{n,4}) \bigr\Vert _{*} \\& \qquad {} +(1-\alpha _{n,2}-\beta _{n,2})L\hat{k} \bigl\Vert (l_{n,3},\hat{l}_{n,3}) \bigr\Vert _{*} + \bigl\Vert (l_{n,2},\hat{l}_{n,2}) \bigr\Vert _{*} \\& \qquad {} +2 \Biggl(\prod_{i=2}^{p-1}(1-\alpha _{n,i}-\beta _{n,i}) \beta _{n,p}L^{p-2} \hat{k}^{p-2} \\& \qquad {} +\prod_{i=2}^{p-2}(1-\alpha _{n,i}-\beta _{n,i}) \beta _{n,p-1}L^{p-3} 
\hat{k}^{p-3}+\cdots \\& \qquad {} +(1-\alpha _{n,2}-\beta _{n,2}) (1-\alpha _{n,3}-\beta _{n,3}) \beta _{n,4}L^{2} \hat{k}^{2} \\& \qquad {} +(1-\alpha _{n,2}-\beta _{n,2})\beta _{n,3}L \hat{k}+\beta _{n,2} \Biggr)N. \end{aligned}$$
(4.11)

By (4.8), (4.11), and the fact that \(0<\alpha \leq \prod_{i=1}^{p}(1-\alpha _{n,i})\) for all \(n\geq n_{0}\), we can prove that

$$\begin{aligned}& \bigl\Vert (x_{n+1},y_{n+1})- \bigl(x^{*},y^{*}\bigr) \bigr\Vert _{*} \\& \quad \leq \alpha _{n,1} \bigl\Vert (x_{n},y_{n})- \bigl(x^{*},y^{*}\bigr) \bigr\Vert _{*} \\& \qquad {} +(1-\alpha _{n,1}-\beta _{n,1})L\hat{k} \bigl\Vert (z_{n,1},t_{n,1})-\bigl(x^{*},y^{*}\bigr) \bigr\Vert _{*} \\& \qquad {} +\Upsilon _{1}(n) +\alpha _{n,1} \bigl\Vert \bigl(e'_{n,1}, \hat{e}'_{n,1}\bigr) \bigr\Vert _{*} + \bigl\Vert \bigl(e''_{n,1},\hat{e}''_{n,1} \bigr) \bigr\Vert _{*} \\& \begin{aligned} &\qquad {} + \bigl\Vert (l_{n,1},\hat{l}_{n,1}) \bigr\Vert _{*}+2\beta _{n,1}N \\ &\quad \leq \Biggl(1-L^{p-1}\hat{k}^{p-1}(1-L\hat{k}) \prod _{i=1}^{p}(1- \alpha _{n,i}) \Biggr) \bigl\Vert (x_{n},y_{n})-\bigl(x^{*},y^{*} \bigr) \bigr\Vert _{*} \end{aligned}\\& \qquad {} +L^{p-1}\hat{k}^{p-1}(1-L\hat{k})\prod _{i=1}^{p}(1- \alpha _{n,i}) \frac{\chi (n)}{L^{p-1}\hat{k}^{p-1}(1-L\hat{k})\alpha} \\& \qquad {} +\sum_{i=1}^{p-1}\prod _{j=1}^{i}(1-\alpha _{n,j}- \beta _{n,j}) L^{i}\hat{k}^{i} \bigl\Vert \bigl(e''_{n,i+1},\hat{e}''_{n,i+1} \bigr) \bigr\Vert _{*}+ \bigl\Vert \bigl(e''_{n,1}, \hat{e}''_{n,1}\bigr) \bigr\Vert _{*} \\& \qquad {} +\sum_{i=1}^{p-1}\prod _{j=1}^{i}(1-\alpha _{n,j}- \beta _{n,j}) L^{i}\hat{k}^{i} \bigl\Vert (l_{n,i+1},\hat{l}_{n,i+1}) \bigr\Vert _{*}+ \bigl\Vert (l_{n,1},\hat{l}_{n,1}) \bigr\Vert _{*} \\& \qquad {} +2 \Biggl(\sum_{i=1}^{p-1}\prod _{j=1}^{i}(1- \alpha _{n,j}-\beta _{n,j})\beta _{n,i+1} L^{i}\hat{k}^{i} \Biggr)N, \end{aligned}$$
(4.12)

where for each \(n\geq 0\),

$$\begin{aligned} \chi (n) =&\Upsilon _{1}(n)+\sum _{i=1}^{p-1} \prod_{j=1}^{i}(1- \alpha _{n,j}-\beta _{n,j})L^{i} \hat{k}^{i} \Upsilon _{i+1}(n) \\ &{} +\sum_{i=1}^{p-1}\prod _{j=1}^{i}(1-\alpha _{n,j}- \beta _{n,j})\alpha _{n,i+1}L^{i}\hat{k}^{i} \bigl\Vert \bigl(e'_{n,i+1}, \hat{e}'_{n,i+1} \bigr) \bigr\Vert _{*}+\alpha _{n,1} \bigl\Vert \bigl(e'_{n,1},\hat{e}'_{n,1}\bigr) \bigr\Vert _{*}. \end{aligned}$$

The fact that \(L_{i}(k+1)<2\) for each \(i\in \{1,2\}\), where k is as in (3.20), ensures that \(L\hat{k}<1\). From Theorem 4.2 it follows that \(\Vert (\Gamma _{1}(n),\Gamma _{2}(n))\Vert _{*}\rightarrow 0\) as \(n\rightarrow \infty \). Since \(P_{n,i}(x_{i})\rightarrow P_{i}(x_{i})\) for all \(x_{i}\in E_{i}\) and \(i\in \{1,2\}\), and since \(\lambda _{n}\rightarrow \lambda \) and \(\rho _{n}\rightarrow \rho \) as \(n\rightarrow \infty \), we deduce that \(\Psi _{1}(n),\Psi _{2}(n)\rightarrow 0\) as \(n\rightarrow \infty \). Taking into account that \(a_{n,i}\rightarrow 0\) and \(b_{n,i}\rightarrow 0\) as \(n\rightarrow \infty \) for each \(i\in \{1,2\}\), we infer that \(\Upsilon _{i}(n)\rightarrow 0\) as \(n\rightarrow \infty \) for each \(i\in \{1,2,\dots ,p\}\). Since \(\sum_{n=0}^{\infty}\beta _{n,i}<\infty \) for each \(i\in \{1,2,\dots ,p\}\), with the help of (3.29), we observe that all the conditions of Lemma 4.5 are satisfied. Hence Lemma 4.5 and (4.12) guarantee that \((x_{n},y_{n})\rightarrow (x^{*},y^{*})\) as \(n\rightarrow \infty \), that is, the iterative sequence \(\{(x_{n},y_{n})\}_{n=0}^{\infty}\) generated by Algorithm 3.11 converges strongly to the unique element \((x^{*},y^{*})\) of \(\operatorname{Fix}(Q)\cap \Omega _{\operatorname{SGNVLI}}\).
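For the reader's convenience, the elementary arithmetic behind the two facts just invoked can be written out; the displays below merely restate estimates already contained in (4.12) and in the hypotheses quoted above (in particular, they take for granted the standard requirements \(\alpha _{n,i}\in [0,1]\) and \(0<\alpha \leq \prod_{i=1}^{p}(1-\alpha _{n,i})\) for \(n\geq n_{0}\)). Writing \(t=L\hat{k}\), we have

$$\begin{aligned} L\hat{k}=\max \{L_{1},L_{2}\}\cdot \frac{1}{2}(k+1) =\frac{1}{2}\max_{i\in \{1,2\}}L_{i}(k+1)< 1 \quad \text{whenever } L_{i}(k+1)< 2 \text{ for } i=1,2, \end{aligned}$$

and, since \(t\in (0,1)\) implies \(t^{p-1}(1-t)\in (0,1)\), the coefficient on the right-hand side of (4.12) satisfies

$$\begin{aligned} 0< 1-t^{p-1}(1-t)\prod_{i=1}^{p}(1-\alpha _{n,i}) \leq 1-t^{p-1}(1-t)\alpha < 1 \quad (n\geq n_{0}), \end{aligned}$$

which is exactly the contraction-type structure needed for the application of Lemma 4.5.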

Now we prove conclusion (ii). Using (3.30) yields

$$\begin{aligned}& \bigl\Vert (u_{n+1},v_{n+1})- \bigl(x^{*},y^{*}\bigr) \bigr\Vert _{*} \\& \quad \leq \bigl\Vert (u_{n+1},v_{n+1})-(L_{n},D_{n}) \bigr\Vert _{*}+ \bigl\Vert (L_{n},D_{n})- \bigl(x^{*},y^{*}\bigr) \bigr\Vert _{*} \\& \quad =\epsilon _{n}+ \bigl\Vert L_{n}-x^{*} \bigr\Vert _{1}+ \bigl\Vert D_{n}-y^{*} \bigr\Vert _{2}. \end{aligned}$$
(4.13)
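Here, as in the first line of (4.8), \(\Vert \cdot \Vert _{*}\) denotes the sum norm on the product space \(E_{1}\times E_{2}\), so the equality in (4.13) simply unpacks the definition of \(\epsilon _{n}\):

$$\begin{aligned} \bigl\Vert (u,v) \bigr\Vert _{*}= \Vert u \Vert _{1}+ \Vert v \Vert _{2}, \qquad \epsilon _{n}= \bigl\Vert (u_{n+1},v_{n+1})-(L_{n},D_{n}) \bigr\Vert _{*}. \end{aligned}$$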

By the same arguments as in the proofs of (4.6) and (4.7), we can establish that

$$\begin{aligned} \bigl\Vert L_{n}-x^{*} \bigr\Vert _{1} \leq& \alpha _{n,1} \bigl\Vert u_{n}-x^{*} \bigr\Vert _{1}+(1-\alpha _{n,1}-\beta _{n,1})L_{1} \bigl((1+a_{n,1}) \bigl( \vartheta _{1}(n) \bigl\Vert \nu _{n,1}-x^{*} \bigr\Vert _{1} \\ & {}+\phi _{1}(n) \bigl\Vert \omega _{n,1}-y^{*} \bigr\Vert _{2}+\Psi _{1}(n)+ \bigl\Vert \Gamma _{1}(n) \bigr\Vert _{1}\bigr)+b_{n,1} \bigr) \\ &{} +\alpha _{n,1} \bigl\Vert e'_{n,1} \bigr\Vert _{1}+ \bigl\Vert e''_{n,1} \bigr\Vert _{1}+ \Vert l_{n,1} \Vert _{1}+ \beta _{n,1}N \end{aligned}$$
(4.14)

and

$$\begin{aligned} \bigl\Vert D_{n}-y^{*} \bigr\Vert _{2} \leq& \alpha _{n,1} \bigl\Vert v_{n}-y^{*} \bigr\Vert _{2}+(1-\alpha _{n,1}-\beta _{n,1})L_{2} \bigl((1+a_{n,2}) \bigl( \vartheta _{2}(n) \bigl\Vert \nu _{n,1}-x^{*} \bigr\Vert _{1} \\ &{} +\phi _{2}(n) \bigl\Vert \omega _{n,1}-y^{*} \bigr\Vert _{2}+\Psi _{2}(n)+ \bigl\Vert \Gamma _{2}(n) \bigr\Vert _{2}\bigr)+b_{n,2} \bigr) \\ &{} +\alpha _{n,1} \bigl\Vert \hat{e}'_{n,1} \bigr\Vert _{2}+ \bigl\Vert \hat{e}''_{n,1} \bigr\Vert _{2}+ \Vert \hat{l}_{n,1} \Vert _{2}+\beta _{n,1}N, \end{aligned}$$
(4.15)

where for all \(n\geq 0\), \(\vartheta _{1}(n)\) and \(\phi _{1}(n)\) are as in (4.6), and \(\vartheta _{2}(n)\) and \(\phi _{2}(n)\) are as in (4.7). Since \(0<\alpha \leq \prod_{i=1}^{p}(1-\alpha _{n,i})\) for all \(n\geq n_{0}\), using (4.13)–(4.15) and arguing as in the proof of (4.12), we obtain

$$\begin{aligned}& \bigl\Vert (u_{n+1},v_{n+1})- \bigl(x^{*},y^{*}\bigr) \bigr\Vert _{*} \\& \quad \leq \Biggl(1-L^{p-1}\hat{k}^{p-1}(1-L\hat{k}) \prod _{i=1}^{p}(1- \alpha _{n,i}) \Biggr) \bigl\Vert (u_{n},v_{n})-\bigl(x^{*},y^{*} \bigr) \bigr\Vert _{*} \\& \begin{aligned} & \qquad {}+L^{p-1}\hat{k}^{p-1}(1-L\hat{k})\prod _{i=1}^{p}(1- \alpha _{n,i}) \frac{\chi '(n)}{L^{p-1}\hat{k}^{p-1}(1-L\hat{k})\alpha} \\ & \qquad {} +\sum_{i=1}^{p-1}\prod _{j=1}^{i}(1-\alpha _{n,j}- \beta _{n,j})L^{i}\hat{k}^{i} \bigl\Vert \bigl(e''_{n,i+1},\hat{e}''_{n,i+1} \bigr) \bigr\Vert _{*}+ \bigl\Vert \bigl(e''_{n,1}, \hat{e}''_{n,1}\bigr) \bigr\Vert _{*} \end{aligned}\\& \qquad {} +\sum_{i=1}^{p-1}\prod _{j=1}^{i}(1-\alpha _{n,j}- \beta _{n,j})L^{i}\hat{k}^{i} \bigl\Vert (l_{n,i+1},\hat{l}_{n,i+1}) \bigr\Vert _{*}+ \bigl\Vert (l_{n,1},\hat{l}_{n,1}) \bigr\Vert _{*} \\& \qquad {} +2 \Biggl(\sum_{i=1}^{p-1}\prod _{j=1}^{i}(1- \alpha _{n,j}-\beta _{n,j})\beta _{n,i+1}L^{i}\hat{k}^{i} \Biggr)N, \end{aligned}$$
(4.16)

where for each \(n\geq 0\),

$$\begin{aligned} \chi '(n) =&\Upsilon _{1}(n)+ \sum_{i=1}^{p-1} \prod _{j=1}^{i}(1-\alpha _{n,j}-\beta _{n,j}) L^{i}\hat{k}^{i} \Upsilon _{i+1}(n) \\ &{} +\sum_{i=1}^{p-1} \prod _{j=1}^{i}(1-\alpha _{n,j}- \beta _{n,j})\alpha _{n,i+1}L^{i}\hat{k}^{i} \bigl\Vert \bigl(e'_{n,i+1},\hat{e}'_{n,i+1} \bigr) \bigr\Vert _{*} \\ &{} +\alpha _{n,1} \bigl\Vert \bigl(e'_{n,1}, \hat{e}'_{n,1}\bigr) \bigr\Vert _{*}+ \epsilon _{n}. \end{aligned}$$

If \(\lim_{n\rightarrow \infty}\epsilon _{n}=0\), then from (3.29), (4.16), and Lemma 4.5 it follows that \(\lim_{n\rightarrow \infty}(u_{n},v_{n})=(x^{*},y^{*})\).

Conversely, suppose that \(\lim_{n\rightarrow \infty}(u_{n},v_{n})=(x^{*},y^{*})\). Employing (4.14) and (4.15), we have

$$\begin{aligned} \epsilon _{n}={}& \bigl\Vert (u_{n+1},v_{n+1})-(L_{n},D_{n}) \bigr\Vert _{*} \\ \leq{}& \bigl\Vert (u_{n+1},v_{n+1})-\bigl(x^{*},y^{*} \bigr) \bigr\Vert _{*}+ \bigl\Vert (L_{n},D_{n})- \bigl(x^{*},y^{*}\bigr) \bigr\Vert _{*} \\ \leq {}&\bigl\Vert (u_{n+1},v_{n+1})-\bigl(x^{*},y^{*} \bigr) \bigr\Vert _{*} \\ \begin{aligned} & {}+ \Biggl(1-L^{p-1}\hat{k}^{p-1}(1-L\hat{k}) \prod _{i=1}^{p}(1- \alpha _{n,i}) \Biggr) \bigl\Vert (u_{n},v_{n})-\bigl(x^{*},y^{*} \bigr) \bigr\Vert _{*} \\ &{} +L^{p-1}\hat{k}^{p-1}(1-L\hat{k})\prod _{i=1}^{p}(1- \alpha _{n,i}) \frac{\chi (n)}{L^{p-1}\hat{k}^{p-1}(1-L\hat{k})\alpha} \end{aligned}\\ &{} +\sum_{i=1}^{p-1} \prod_{j=1}^{i}(1- \alpha _{n,j}- \beta _{n,j})L^{i}\hat{k}^{i} \bigl\Vert \bigl(e''_{n,i+1}, \hat{e}''_{n,i+1} \bigr) \bigr\Vert _{*}+ \bigl\Vert \bigl(e''_{n,1}, \hat{e}''_{n,1}\bigr) \bigr\Vert _{*} \\ &{} +\sum_{i=1}^{p-1}\prod _{j=1}^{i}(1-\alpha _{n,j}- \beta _{n,j})L^{i}\hat{k}^{i} \bigl\Vert (l_{n,i+1},\hat{l}_{n,i+1}) \bigr\Vert _{*}+ \bigl\Vert (l_{n,1},\hat{l}_{n,1}) \bigr\Vert _{*} \\ & {}+2 \Biggl(\sum_{i=1}^{p-1}\prod _{j=1}^{i}(1- \alpha _{n,j}-\beta _{n,j})\beta _{n,i+1}L^{i}\hat{k}^{i} \Biggr)N. \end{aligned}$$
(4.17)

Obviously, (3.29) implies that \(\lim_{n\rightarrow \infty}\Vert (e''_{n,i},\hat{e}''_{n,i}) \Vert _{*} =\lim_{n\rightarrow \infty}\Vert (l_{n,i},\hat{l}_{n,i}) \Vert _{*}=0\) for each \(i\in \{1,2,\dots ,p\}\). In view of the facts that \(L\hat{k}<1\), \(\lim_{n\rightarrow \infty}\Upsilon _{i}(n)=0\), \(\lim_{n\rightarrow \infty}\Vert (e'_{n,i},\hat{e}'_{n,i}) \Vert _{*}=0\), and \(\sum_{n=0}^{\infty}\beta _{n,i}<\infty \) for each \(i\in \{1,2,\dots ,p\}\), together with the assumption that \((u_{n},v_{n})\rightarrow (x^{*},y^{*})\) as \(n\rightarrow \infty \), we conclude that the right-hand side of (4.17) tends to zero as \(n\rightarrow \infty \). This completes the proof. □
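Since Lemma 4.5 is applied twice above but its statement is not reproduced in this section, a small numerical illustration may be helpful. The sketch below is not the authors' algorithm; it merely iterates, with equality, a recursion of the shape in which (4.12) and (4.16) are used, assuming Lemma 4.5 has the standard form: if \(a_{n+1}\leq (1-\lambda _{n})a_{n}+\lambda _{n}\sigma _{n}+\gamma _{n}\) with \(\lambda _{n}\in (0,1]\), \(\sum_{n}\lambda _{n}=\infty \), \(\sigma _{n}\rightarrow 0\), and \(\sum_{n}\gamma _{n}<\infty \), then \(a_{n}\rightarrow 0\). All names and the particular parameter choices in the code are illustrative assumptions only.

```python
# Minimal numerical sketch (illustrative only; not the algorithm of the paper).
# It iterates, with equality, the recursive inequality to which Lemma 4.5 is
# assumed to apply:  a_{n+1} <= (1 - lam_n) * a_n + lam_n * sig_n + gam_n,
# where, in the notation of (4.12)/(4.16),
#   lam_n plays the role of L^{p-1} k^{p-1} (1 - L k) * prod_i (1 - alpha_{n,i}),
#   sig_n plays the role of chi(n) / (L^{p-1} k^{p-1} (1 - L k) * alpha) -> 0,
#   gam_n collects the summable error/perturbation terms.

def recursion_bound(a0: float, n_max: int) -> float:
    """Return a_{n_max} for the model recursion with hypothetical parameters."""
    a = a0
    for n in range(1, n_max + 1):
        lam = 0.3          # lam_n in (0, 1], so sum_n lam_n = infinity
        sig = 1.0 / n      # sig_n -> 0
        gam = 1.0 / n**2   # gam_n summable
        a = (1.0 - lam) * a + lam * sig + gam
    return a

if __name__ == "__main__":
    # a_n decays toward 0, mirroring how (4.12)/(4.16) force the iterates
    # to converge to (x*, y*) once all hypotheses of Lemma 4.5 hold.
    for n_max in (10, 100, 1000, 10000):
        print(n_max, recursion_bound(5.0, n_max))
```

Running the sketch shows the iterate shrinking toward zero as \(n\) grows, which is the qualitative behaviour that the recursive estimates (4.12) and (4.16) transfer to \(\Vert (x_{n},y_{n})-(x^{*},y^{*})\Vert _{*}\) and \(\Vert (u_{n},v_{n})-(x^{*},y^{*})\Vert _{*}\), respectively.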