1 Introduction

Over the last 50 years, the theory of variational inequalities has proved to be a powerful and important tool in the study of a wide class of problems arising in various fields, such as mechanics, physics, optimization and control, nonlinear programming, elasticity, and the applied sciences; see, for example, [6, 27, 32, 38] and the references therein. Owing to these diverse applications, over the past few decades, various generalizations of this theory have been proposed and analyzed using novel and innovative techniques. In recent years, the variational inclusion problem, as a useful and important extension of the variational inequality problem, has received a great deal of attention, both in terms of mathematical investigation and of applications. For further details and applications of variational inclusions, we refer the reader to [5].

In the meantime, much attention has been devoted to developing general techniques for the sensitivity analysis of the solution sets of various classes of variational inequalities and inclusions. It is worth mentioning that, among the methods for solving variational inclusion problems, resolvent operator techniques have become increasingly popular. For more information and further details, see, e.g., [2, 8, 20,21,22, 25, 26, 28, 29, 33, 39, 42, 45, 47, 54, 55] and the references therein. In particular, applications of the generalized resolvent operator technique have been explored and improved recently. To study different classes of variational inequality and variational inclusion problems, many authors have introduced various concepts of generalized monotone operators and generalized accretive mappings. For instance, in 2001, Huang and Fang [25] introduced a class of accretive mappings in the setting of Banach spaces, the so-called generalized m-accretive mappings, which can be viewed as a unifying framework for the classes of maximal monotone operators, maximal \(\eta \)-monotone operators [26], and \(\eta \)-subdifferential operators [18, 34]. They defined the resolvent operator associated with a generalized m-accretive mapping and established some of its properties. In 2003, Fang and Huang [20] were the first to introduce the notion of H-monotone operator as an extension of the maximal monotone operator and gave the definition of the resolvent operator associated with an H-monotone operator in the setting of a real Hilbert space. Based on the resolvent operator technique, they constructed an iterative algorithm for solving a class of variational inclusions involving H-monotone operators. In [21], a notion analogous to that of H-monotone operator, called H-accretive operator, was defined in the more general setting of Banach spaces. Subsequently, further generalized monotone and accretive operators, such as \((H,\eta )\)-monotone operators [22], A-monotone operators [45], M-monotone operators [42], and \((A,\eta )\)-monotone operators [47] in the setting of real Hilbert spaces, and \((P,\eta )\)-accretive (also referred to as P-\(\eta \)-accretive) mappings [28, 39], \((A,\eta )\)-accretive (also referred to as A-maximal m-relaxed \(\eta \)-accretive) mappings [33], and H(., .)-accretive operators [54, 55] in the framework of Banach spaces, were introduced and studied. By defining the resolvent operators associated with these operators and deriving their properties, the authors constructed iterative algorithms based on the resolvent operator technique to find approximate solutions of various classes of variational inclusion problems and studied the convergence of the sequences generated by these algorithms. Later, motivated and inspired by the works mentioned above, Kazmi et al. [29] introduced the notion of generalized H(., .)-accretive mapping in real Banach spaces as a unifying framework for the classes of H-monotone operators, H-accretive operators, M-monotone operators, and H(., .)-accretive operators. They defined the proximal-point mapping associated with a generalized H(., .)-accretive mapping and obtained some of its properties. By employing the proximal-point mapping method, they proposed an iterative algorithm for solving a system of generalized variational inclusions involving generalized H(., .)-accretive mappings in real q-uniformly smooth Banach spaces.
At the same time, under some suitable conditions, they proved the strong convergence of the sequence generated by their proposed iterative algorithm to a unique solution of the system of generalized variational inclusions.

On the other hand, fixed point theory, whose study began almost a century ago in the field of algebraic topology, has gained impetus due to its wide range of applicability in resolving diverse problems emanating from the theory of nonlinear differential equations, the theory of nonlinear integral equations, game theory, mathematical economics, and control theory. Because of the very close relation between variational inequality/inclusion problems and fixed point problems, in recent decades, a significant amount of research has been devoted to presenting a unified approach to these two different problems; see, e.g., [4, 7,8,9, 12, 43] and the references therein. Meanwhile, due to the connection with the geometry of Banach spaces and the relevance of these mappings in the theory of monotone and accretive operators, over the past 50 years, the study of the class of nonexpansive mappings has been one of the major and highly active research areas of nonlinear analysis. For this reason, since the 1970s, many authors have proceeded to generalize the concept of nonexpansive mappings in different space settings. In 1972, Goebel and Kirk [23] introduced a generalized nonexpansive mapping, the so-called asymptotically nonexpansive mapping. During the past two decades, further extensions of the class of nonexpansive mappings have been introduced in the literature. For instance, in 2006, Alber et al. [1] were the first to introduce the notion of total asymptotically nonexpansive mapping, which is more general than that of asymptotically nonexpansive mapping, and studied methods for the approximation of fixed points of this class of mappings. For more information and details regarding various extensions of the class of nonexpansive mappings, along with several illustrative examples, we refer readers to [1, 10, 23, 30, 40] and the references therein.

The system of generalized variational-like inclusions and fixed point problems in real Banach spaces is a mathematical framework that encompasses a wide range of problems in optimization, equilibrium theory, and fixed point theory. It involves the study of solutions to variational-like inclusions and of fixed point problems in the setting of real Banach spaces. The concept of total asymptotically nonexpansive mappings plays a crucial role in this framework: such mappings satisfy a relaxed nonexpansiveness condition, governed by suitable null sequences and a control function, and thus generalize nonexpansive mappings; they play a fundamental role in solving variational-like inclusions and fixed point problems. Work in this direction has led to significant contributions, including results on the existence and uniqueness of solutions to generalized variational-like inclusions and fixed point problems, the convergence analysis of iterative algorithms together with conditions guaranteeing convergence to a solution, applications in areas such as optimization, game theory, and equilibrium problems, and various generalizations and extensions of the framework to more complex and specialized problems. Overall, the study of systems of generalized variational-like inclusions and fixed point problems in real Banach spaces, particularly with a focus on total asymptotically nonexpansive mappings and iterative schemes, has contributed significantly to optimization, equilibrium theory, and fixed point theory; the theoretical developments and practical applications have enhanced our understanding of these problems and provided valuable tools for solving them effectively. For more information and further details, see, for example, [3, 11, 13, 19, 44, 48, 50, 51].

The paper is outlined as follows. In Sect. 2, we review some background material on the \((P,\eta )\)-accretive mapping and its associated resolvent operator in a real Banach space setting, and also provide the necessary notation and some results to be used in the rest of the paper. In Sect. 3, a system of generalized variational-like inclusions \((\mathop {{\textrm{SGVLI}}})\) involving \((P,\eta )\)-accretive mappings in real q-uniformly smooth Banach spaces is considered, and, under some appropriate conditions, the existence and uniqueness of a solution of the \(\mathop {{\textrm{SGVLI}}}\) is discussed. Section 4 is devoted to introducing a new iterative process, based on the resolvent operator technique, for finding a common element of the set of fixed points of a total asymptotically nonexpansive mapping and the set of solutions of the \(\mathop {{\textrm{SGVLI}}}\). As an application of the notion of \((P,\eta )\)-accretive mapping and of our suggested iterative algorithm, in Sect. 5, we show that the sequence generated by the iterative algorithm proposed in Sect. 4 converges strongly to a common element of the above two sets under some conditions controlling the parameters. Finally, in Sect. 6, the concept of generalized H(., .)-accretive mapping as defined by Kazmi et al. [29] is investigated and analyzed. We provide some comments regarding it and verify that, under the conditions considered in [29], every generalized H(., .)-accretive mapping is, in fact, P-accretive and thus not a new notion. Furthermore, we illustrate that all the conclusions presented in [29] can be deduced from our results derived in Sects. 2–5.

2 Preliminary materials and results

Throughout this paper, unless otherwise specified, we assume that E is a real Banach space with a norm \(\Vert .\Vert \), that \(E^*\) is the continuous dual of E, and that E and \(E^*\) are paired by \(\langle .,.\rangle \). For the sake of simplicity, the norm of \(E^*\) is also denoted by the symbol \(\Vert .\Vert \). As usual, \(w^*\) stands for the weak star topology on \(E^*\), and the family of all nonempty subsets of E is denoted by \(2^E\). Meanwhile, \(B_E\) and \(S_E\) denote, respectively, the unit ball and the unit sphere in E. The graph of a given multi-valued mapping \(M:E\rightarrow 2^E\) is defined by

$$\begin{aligned} \mathop {{\textrm{Graph}}}(M):=\{(x,u)\in E\times E:u\in M(x)\}. \end{aligned}$$

Definition 2.1

A normed space E is called strictly convex if the inequality \(\Vert x+y\Vert <2\) holds for all \(x,y\in S_E\), such that \(x\ne y\).

Recall that a normed space E is said to be smooth if, for every \(x\in S_E\), there exists a unique \(x^*\in E^*\), such that \(\Vert x^*\Vert =\langle x,x^*\rangle =1\). It is well known that E is smooth if \(E^*\) is strictly convex, and E is strictly convex if \(E^*\) is smooth.
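For instance, the space \(\mathbb {R}^2\) equipped with the maximum norm \(\Vert (a,b)\Vert _\infty =\max \{|a|,|b|\}\) is neither strictly convex nor smooth: for \(x=(1,1)\) and \(y=(1,-1)\) in \(S_E\), one has

$$\begin{aligned} \Vert x+y\Vert _\infty =\Vert (2,0)\Vert _\infty =2, \end{aligned}$$

so strict convexity fails, while both \(x^*=(1,0)\) and \(x^*=(0,1)\) satisfy \(\Vert x^*\Vert =\langle x,x^*\rangle =1\), so the supporting functional at x is not unique and smoothness fails. In contrast, \(\mathbb {R}^2\) with the Euclidean norm is both strictly convex and smooth.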

Definition 2.2

A normed space E is called uniformly convex if, for any given \(\varepsilon >0\), there exists \(\delta >0\), such that for all \(x,y\in B_E\) with \(\Vert x-y\Vert \ge \varepsilon \), the inequality \(\Vert x+y\Vert \le 2(1-\delta )\) holds.

The function \(\delta _E:[0,2]\rightarrow [0,1]\) defined by the formula

$$\begin{aligned} \delta _E(\varepsilon )=\inf \left\{ 1-\frac{1}{2}\Vert x+y\Vert :x,y\in B_E, \Vert x-y\Vert \ge \varepsilon \right\} \end{aligned}$$

is called the modulus of convexity of E.

Definition 2.3

A normed space E is said to be uniformly smooth if, for any given \(\varepsilon >0\), there exists \(\delta >0\), such that for all \(x\in S_E\) and all \(y\in E\) with \(\Vert y\Vert \le \delta \), the inequality

$$\begin{aligned} 2^{-1}(\Vert x+y\Vert +\Vert x-y\Vert )-1\le \varepsilon \Vert y\Vert \end{aligned}$$

holds.

The function \(\rho _E:[0,\infty )\rightarrow [0,\infty )\) defined by the formula

$$\begin{aligned} \rho _E(\tau ):=\sup \left\{ \frac{\Vert x+y\Vert +\Vert x-y\Vert }{2}-1:x,\frac{y}{\tau }\in B_E\right\} \end{aligned}$$

is called the modulus of smoothness of the space E. Taking into account the definitions of the functions \(\delta _E\) and \(\rho _E\), we note that a normed space E is uniformly convex if and only if \(\delta _E(\varepsilon )>0\) for every \(\varepsilon \in (0,2]\), and it is uniformly smooth if and only if \(\lim _{\tau \rightarrow 0}\frac{\rho _E(\tau )}{\tau }=0\). It should be pointed out that a Banach space E is uniformly convex (resp., uniformly smooth) if and only if \(E^*\) is uniformly smooth (resp., uniformly convex).

The spaces \(l^p\), \(L^p\) and \(W_m^p\), \(1<p<\infty \), \(m\in \mathbb {N}\), are uniformly convex as well as uniformly smooth; for these facts, and for the moduli of convexity and smoothness of Hilbert spaces and of the spaces \(l^p\), \(L^p\) and \(W_m^p\), we refer the reader to [17, 24, 35].
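For instance, for a real Hilbert space \(\mathcal {H}\), the parallelogram law yields the classical formulas

$$\begin{aligned} \delta _{\mathcal {H}}(\varepsilon )=1-\sqrt{1-\frac{\varepsilon ^2}{4}},\quad \varepsilon \in [0,2],\qquad \rho _{\mathcal {H}}(\tau )=\sqrt{1+\tau ^2}-1\le \frac{\tau ^2}{2},\quad \tau \ge 0, \end{aligned}$$

so that \(\delta _{\mathcal {H}}(\varepsilon )>0\) for every \(\varepsilon \in (0,2]\) and \(\frac{\rho _{\mathcal {H}}(\tau )}{\tau }\rightarrow 0\) as \(\tau \rightarrow 0\); in particular, every Hilbert space is uniformly convex and uniformly smooth.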

For an arbitrary but fixed real number \(q>1\), the multi-valued mapping \(J_q:E\rightarrow 2^{E^*}\) defined by the formula

$$\begin{aligned} J_q(x)=\{x^*\in E^*:\langle x,x^*\rangle =\Vert x\Vert ^q,\Vert x^*\Vert =\Vert x\Vert ^{q-1}\},\quad \forall x\in E \end{aligned}$$

is called the generalized duality mapping of E.

In particular, \(J_2\) is the usual normalized duality mapping. It is known that, in general, \(J_q(x)=\Vert x\Vert ^{q-2}J_2(x)\), for all \(x\ne 0\). Note that \(J_q\) is single-valued whenever \(E^*\) is strictly convex; in particular, this is the case if E is uniformly smooth. If E is a Hilbert space, then \(J_2\) becomes the identity mapping on E.
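For example, if E is a Hilbert space, then combining the last two observations gives the explicit formula

$$\begin{aligned} J_q(x)=\Vert x\Vert ^{q-2}x,\quad \forall x\in E\backslash \{0\}, \end{aligned}$$

which indeed satisfies \(\langle x,J_q(x)\rangle =\Vert x\Vert ^{q-2}\langle x,x\rangle =\Vert x\Vert ^q\) and \(\Vert J_q(x)\Vert =\Vert x\Vert ^{q-1}\), in accordance with the definition above.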

Definition 2.4

For a real constant \(q>1\), a Banach space E is called q-uniformly smooth if there exists a constant \(C>0\), such that \(\rho _E(\tau )\le C\tau ^q\), for all \(\tau \in [0,+\infty )\).

It is well known that (see, e.g., [49]) \(L_q\) (or \(l_q\)) is q-uniformly smooth for \(1<q\le 2\) and is 2-uniformly smooth for \(q\ge 2\).

Concerned with the characteristic inequalities in q-uniformly smooth Banach spaces, Xu [49] proved the following result.

Lemma 2.5

Let E be a real uniformly smooth Banach space. For a real constant \(q>1\), E is q-uniformly smooth if and only if there exists a constant \(c_q>0\), such that for all \(x,y\in E\)

$$\begin{aligned} \Vert x+y\Vert ^q\le \Vert x\Vert ^q+q\langle y,J_q(x)\rangle +c_q\Vert y\Vert ^q. \end{aligned}$$
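For instance, if E is a Hilbert space, then E is 2-uniformly smooth and one may take \(q=2\) and \(c_2=1\): since \(J_2\) is the identity mapping in this setting, the above inequality reduces to the elementary identity

$$\begin{aligned} \Vert x+y\Vert ^2=\Vert x\Vert ^2+2\langle y,x\rangle +\Vert y\Vert ^2,\quad \forall x,y\in E. \end{aligned}$$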

Before proceeding to the main results of the paper, we also recall some necessary notation and a few useful results.

Definition 2.6

Let E be a real q-uniformly smooth Banach space and let \(P:E\rightarrow E\) and \(\eta :E\times E\rightarrow E\) be two vector-valued mappings. Then, P is said to be

  1. (i)

    \(\eta \)-accretive if,

    $$\begin{aligned} \langle P(x)-P(y),J_q(\eta (x,y))\rangle \ge 0,\quad \forall x,y\in E; \end{aligned}$$
  2. (ii)

    strictly \(\eta \)-accretive if P is \(\eta \)-accretive and the equality holds if and only if \(x=y\);

  3. (iii)

    r-strongly \(\eta \)-accretive if there exists a constant \(r>0\), such that

    $$\begin{aligned} \langle P(x)-P(y),J_q(\eta (x,y))\rangle \ge r\Vert x-y\Vert ^q,\quad \forall x,y\in E; \end{aligned}$$
  4. (iv)

    \(\varsigma \)-Lipschitz continuous if there exists a constant \(\varsigma >0\), such that

    $$\begin{aligned} \Vert P(x)-P(y)\Vert \le \varsigma \Vert x-y\Vert ,\quad \forall x,y\in E. \end{aligned}$$

It should be noted that if \(\eta (x,y)=x-y\), for all \(x,y\in E\), then parts (i) to (iii) of Definition 2.6 reduce to the definitions of accretivity, strict accretivity, and strong accretivity of the mapping P, respectively.
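As a simple illustration, let E be a real Hilbert space (so that \(q=2\) and \(J_2\) is the identity mapping), and for fixed constants \(r,c>0\) define \(P(x)=rx\) and \(\eta (x,y)=c(x-y)\), for all \(x,y\in E\). Then

$$\begin{aligned} \langle P(x)-P(y),J_2(\eta (x,y))\rangle =\langle r(x-y),c(x-y)\rangle =rc\Vert x-y\Vert ^2,\quad \forall x,y\in E, \end{aligned}$$

so that P is rc-strongly \(\eta \)-accretive (and hence strictly \(\eta \)-accretive and \(\eta \)-accretive), while P is clearly r-Lipschitz continuous.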

Definition 2.7

[21, Definition 1.2] Let E be a real q-uniformly smooth Banach space, \(P:E\rightarrow E\) be a single-valued mapping and \({\widehat{M}}:E\rightarrow 2^E\) be a multi-valued mapping. \({\widehat{M}}\) is said to be

  1. (i)

    accretive if

    $$\begin{aligned} \langle u-v,J_q(x-y)\rangle \ge 0,\quad \forall (x,u),(y,v)\in \mathop {{\textrm{Graph}}}({\widehat{M}}); \end{aligned}$$
  2. (ii)

    r-strongly accretive if there exists a constant \(r>0\), such that

    $$\begin{aligned} \langle u-v,J_q(x-y)\rangle \ge r\Vert x-y\Vert ^q,\quad \forall (x,u),(y,v)\in \mathop {{\textrm{Graph}}}({\widehat{M}}); \end{aligned}$$
  3. (iii)

    m-accretive if \({\widehat{M}}\) is accretive and \((I+\lambda {\widehat{M}})(E)=E\) holds for every \(\lambda >0\), where I is the identity mapping on E.

In 2001, Huang and Fang [25] introduced and studied the class of generalized m-accretive (also referred to as m-\(\eta \)-accretive or \(\eta \)-m-accretive [16]) mappings which includes those of m-accretive mappings, maximal \(\eta \)-monotone operators [26], and maximal monotone operators [52] as special cases.

Definition 2.8

[16, 25] Let E be a real q-uniformly smooth Banach space, \(\eta :E\times E\rightarrow E\) be a vector-valued mapping and \({\widehat{M}}:E\rightarrow 2^E\) be a multi-valued mapping. \({\widehat{M}}\) is said to be

  1. (i)

    \(\eta \)-accretive if

    $$\begin{aligned} \langle u-v,J_q(\eta (x,y))\rangle \ge 0,\quad \forall (x,u),(y,v)\in \mathop {{\textrm{Graph}}}({\widehat{M}}); \end{aligned}$$
  2. (ii)

    \(\gamma \)-strongly \(\eta \)-accretive if there exists a constant \(\gamma >0\), such that

    $$\begin{aligned} \langle u-v,J_q(\eta (x,y))\rangle \ge \gamma \Vert x-y\Vert ^q,\quad \forall (x,u),(y,v)\in \mathop {{\textrm{Graph}}}({\widehat{M}}); \end{aligned}$$
  3. (iii)

    generalized m-accretive if, \({\widehat{M}}\) is \(\eta \)-accretive and \((I+\lambda {\widehat{M}})(E)=E\) holds for every \(\lambda >0\).

We note that \({\widehat{M}}\) is a generalized m-accretive mapping if and only if \({\widehat{M}}\) is \(\eta \)-accretive and there is no other \(\eta \)-accretive mapping whose graph strictly contains \(\mathop {{\textrm{Graph}}}({\widehat{M}})\); that is, generalized m-accretivity is to be understood as maximality with respect to inclusion of graphs. Thus, if \({\widehat{M}}:E\rightarrow 2^E\) is a generalized m-accretive mapping, then any proper enlargement of its graph destroys the \(\eta \)-accretivity: for every pair \((x,u)\in E\times E\backslash \mathop {{\textrm{Graph}}}({\widehat{M}})\), there exists \((y,v)\in \mathop {{\textrm{Graph}}}({\widehat{M}})\), such that \(\langle u-v,J_q(\eta (x,y))\rangle <0\). Consequently, a necessary and sufficient condition for a multi-valued mapping \({\widehat{M}}:E\rightarrow 2^E\) to be generalized m-accretive is that the property

$$\begin{aligned} \langle u-v,J_q(\eta (x,y))\rangle \ge 0,\quad \forall (y,v)\in \mathop {{\textrm{Graph}}}({\widehat{M}}) \end{aligned}$$

is equivalent to \(u\in {\widehat{M}}(x)\).

The above characterization of generalized m-accretive mappings provides a useful and manageable way for recognizing that an element u belongs to \({\widehat{M}}(x)\).

Subsequently, Fang and Huang [21] introduced and studied the class of H-accretive (also referred to as P-accretive) mappings, which includes those of m-accretive and maximal monotone operators as special cases.

Definition 2.9

[21] Let E be a real q-uniformly smooth Banach space, \(P:E\rightarrow E\) be a single-valued mapping, and let \({\widehat{M}}:E\rightarrow 2^E\) be a multi-valued mapping. \({\widehat{M}}\) is said to be P-accretive if \({\widehat{M}}\) is accretive and \((P+\lambda {\widehat{M}})(E)=E\) holds for every constant \(\lambda >0\).

Kazmi and Khan [28] and later Peng and Zhu [39] introduced and studied another class of generalized accretive operators known as P-\(\eta \)-accretive (also referred to as \((H,\eta )\)-accretive) mappings. This class serves as an extension of H-accretive mappings, \((H,\eta )\)-monotone operators [22], generalized m-accretive mappings, m-accretive mappings, maximal \(\eta \)-monotone operators, and maximal monotone operators as follows.

Definition 2.10

[28, 39] Let E be a real q-uniformly smooth Banach space, \(P:E\rightarrow E\) and \(\eta :E\times E\rightarrow E\) be two vector-valued mappings, and \({\widehat{M}}:E\rightarrow 2^E\) be a multi-valued mapping. \({\widehat{M}}\) is said to be \((P,\eta )\)-accretive (also referred to as P-\(\eta \)-accretive) if \({\widehat{M}}\) is \(\eta \)-accretive and \((P+\lambda {\widehat{M}})(E)=E\) holds for every constant \(\lambda >0\).

The following example shows that for given mappings \(\eta :E\times E\rightarrow E\) and \(P:E\rightarrow E\), a \((P,\eta )\)-accretive mapping may be neither P-accretive nor generalized m-accretive.

Example 2.11

Let m and n be arbitrary but fixed natural numbers and \(M_{m\times n}(\mathbb {F})\) be the vector space of all \(m\times n\) matrices with real or complex entries, where \(\mathbb {F}=\mathbb {R} \text{ or } \mathbb {C}\). Then, \(M_{m\times n}(\mathbb {F})\) is a Hilbert space with the Hilbert–Schmidt inner product

$$\begin{aligned} \langle A,B\rangle =tr(A^*B)=\sum \limits _{l=1}^m\sum \limits _{j=1}^n{\bar{a}}_{lj}b_{lj},\quad \forall A,B\in M_{m\times n}(\mathbb {F}), \end{aligned}$$

where tr(C) denotes the trace of the matrix C, that is, the sum of the diagonal entries of C, and \(A^*\) denotes the Hermitian conjugate (or adjoint) of the matrix A, that is, \(A^*=\overline{A^t}\), the entrywise complex conjugate of the transpose of A. The Hilbert–Schmidt inner product defined above induces a norm on \(M_{m\times n}(\mathbb {F})\), the so-called Hilbert–Schmidt norm, given by \(\Vert A\Vert =\big (\sum _{l=1}^m\sum _{j=1}^n|a_{lj}|^2\big )^{\frac{1}{2}}\), for all \(A\in M_{m\times n}(\mathbb {F})\). Let us assume that \(\mathbb {F}=\mathbb {C}\) and that m is an even natural number. Then, for any \(A=\left( \begin{array}{c} a_{lj} \\ \end{array} \right) \in M_{m\times n}(\mathbb {C})\), we have \(A=\sum _{l=1}^{\frac{m}{2}}\sum _{j=1}^nA_{lj}\), that is, every \(m\times n\) matrix \(A\in M_{m\times n}(\mathbb {C})\) can be written as a sum of \(\frac{mn}{2}\) matrices \(A_{lj}\), where for each \(l\in \{1,2,\dots ,\frac{m}{2}\}\) and \(j\in \{1,2,\dots ,n\}\), \(A_{lj}\) is the \(m\times n\) matrix whose (lj) and \(({\hat{l}},j)\)-entries equal \(a_{lj}=x_{lj}+iy_{lj}\) and \(a_{{\hat{l}}j}=x_{{\hat{l}}j}+iy_{{\hat{l}}j}\), respectively, and whose all other entries equal zero, where \({\hat{l}}=m-l+1\in \{\frac{m}{2}+1,\frac{m}{2}+2,\dots ,m\}\). For each \(l\in \{1,2,\dots ,\frac{m}{2}\}\) and \(j\in \{1,2,\dots ,n\}\), it follows that

$$\begin{aligned} A_{lj}=\left( \begin{array}{ccccccc} 0 &{} 0 \cdots &{} 0 \cdots &{} 0 &{} 0 \\ 0 &{} 0 \cdots &{} 0 \cdots &{} 0 &{} 0 \\ \vdots &{} \vdots \cdots &{} \vdots \cdots &{} \vdots &{} \vdots \\ 0 &{} 0 \cdots &{} x_{lj}+iy_{lj} \cdots &{} 0 &{} 0 \\ \vdots &{} \vdots \cdots &{} \vdots \cdots &{} \vdots &{} \vdots \\ 0 &{} 0 \cdots &{} x_{{\hat{l}}j}+iy_{{\hat{l}}j} \cdots &{} 0 &{} 0 \\ \vdots &{} \vdots \cdots &{} \vdots \cdots &{} \vdots &{} \vdots \\ 0 &{} 0 \cdots &{} 0 \cdots &{} 0 &{} 0 \\ 0 &{} 0 \cdots &{} 0 \cdots &{} 0 &{} 0 \\ \end{array} \right) =\frac{y_{lj}+y_{{\hat{l}}j}-i(x_{lj}+x_{{\hat{l}}j})}{2}Q_{lj} +\frac{y_{lj}-y_{{\hat{l}}j}-i(x_{lj}-x_{{\hat{l}}j})}{2}Q'_{lj}, \end{aligned}$$

where for each \(l\in \{1,2,\dots ,\frac{m}{2}\}\) and \(j\in \{1,2,\dots ,n\}\), \(Q_{lj}\) is an \(m\times n\) matrix in which the (lj) and \(({\hat{l}},j)\)-entries equal to i and all other entries equal to zero, \(Q'_{lj}\) is an \(m\times n\) matrix with the entries i and \(-i\) at the (lj) and \(({\hat{l}},j)\) places, respectively, and 0’s everywhere else. Thus, any matrix \(A\in M_{m\times n}(\mathbb {C})\) can be written as a linear combination of mn matrices \(Q_{lj}\) and \(Q'_{lj}\) \((l=1,2,\dots ,\frac{m}{2}\) and \(j=1,2,\dots ,n)\) as follows:

$$\begin{aligned} A=\sum \limits _{l=1}^{\frac{m}{2}}\sum \limits _{j=1}^nA_{lj} =\sum \limits _{l=1}^{\frac{m}{2}}\sum \limits _{j=1}^n \left[ \frac{y_{lj}+y_{{\hat{l}}j}-i(x_{lj}+x_{{\hat{l}}j})}{2}Q_{lj} +\frac{y_{lj}-y_{{\hat{l}}j}-i(x_{lj}-x_{{\hat{l}}j})}{2}Q'_{lj}\right] . \end{aligned}$$

Accordingly, the set

$$\begin{aligned} \left\{ Q_{lj},Q'_{lj}:l=1,2,\dots ,\frac{m}{2};j=1,2,\dots ,n\right\} \end{aligned}$$

spans the Hilbert space \(M_{m\times n}(\mathbb {C})\). Taking \(\theta _{lj}:=\frac{1}{\sqrt{2}}Q_{lj}\) and \(\theta '_{lj}:=\frac{1}{\sqrt{2}}Q'_{lj}\), for each \(l\in \{1,2,\dots ,\frac{m}{2}\}\) and \(j\in \{1,2,\dots ,n\}\), it follows that the rescaled matrices \(\theta _{lj}\) and \(\theta '_{lj}\) also span \(M_{m\times n}(\mathbb {C})\). Meanwhile, it is easy to prove that the set

$$\begin{aligned} {\mathfrak {B}}=\left\{ \theta _{lj},\theta '_{lj}:l=1,2,\dots ,\frac{m}{2};j=1,2,\dots ,n\right\} \end{aligned}$$

is linearly independent and orthonormal, and so, the set \({\mathfrak {B}}\) is an orthonormal basis for the Hilbert space \(M_{m\times n}(\mathbb {C})\).

Let the mappings \({\widehat{M}}:M_{m\times n}(\mathbb {C})\rightarrow 2^{M_{m\times n}(\mathbb {C})}\), \(\eta :M_{m\times n}(\mathbb {C})\times M_{m\times n}(\mathbb {C})\rightarrow M_{m\times n}(\mathbb {C})\) and \(P:M_{m\times n}(\mathbb {C})\rightarrow M_{m\times n}(\mathbb {C})\) be defined by

$$\begin{aligned} {\widehat{M}}(A)=\left\{ \begin{array}{ll} \Phi ,&{}\quad A=\theta '_{sk},\\ -A+\theta '_{sk},&{}\quad A\ne \theta '_{sk}, \end{array}\right. \\ \eta (A,B)=\left\{ \begin{array}{ll} \alpha (B-A),&{}\quad A,B\ne \theta '_{sk},\\ \textbf{0},&{}\quad \text{ otherwise, } \end{array}\right. \end{aligned}$$

and \(P(A)=\beta A+\gamma \theta '_{sk}\), for all \(A,B\in M_{m\times n}(\mathbb {C})\), where

$$\begin{aligned} \begin{aligned} \Phi&=\left\{ \theta _{lj}-\theta '_{sk},\theta '_{lj}-\theta '_{sk}:l=1,2,\dots ,\frac{m}{2};j=1,2,\dots ,n\right\} , \end{aligned} \end{aligned}$$

\(\alpha ,\beta ,\gamma \in \mathbb {R}\) are arbitrary constants, such that \(\beta<0<\alpha \), \(s\in \{1,2,\dots ,\frac{m}{2}\}\) and \(k\in \{1,2,\dots ,n\}\) are arbitrary but fixed natural numbers, and \(\textbf{0}\) is the zero vector of the space \(M_{m\times n}(\mathbb {C})\), that is, the zero \(m\times n\) matrix.

Since \(M_{m\times n}(\mathbb {C})\), equipped with the Hilbert–Schmidt inner product, is a finite-dimensional inner product space, it is a Hilbert space, and hence \((M_{m\times n}(\mathbb {C}),\Vert .\Vert )\) is a 2-uniformly smooth Banach space. Then, for all \(A,B\in M_{m\times n}(\mathbb {C})\) with \(A\ne B\) and \(A,B\ne \theta '_{sk}\), we obtain

$$\begin{aligned} \begin{aligned} \langle {\widehat{M}}(A)-{\widehat{M}}(B),J_2(A-B)\rangle&=\langle {\widehat{M}}(A)-{\widehat{M}}(B),A-B\rangle \\ {}&=\langle -A+\theta '_{sk}+B-\theta '_{sk},A-B\rangle \\ {}&=\langle B-A,A-B\rangle =-\Vert A-B\Vert ^2 =-\sum \limits _{l=1}^m\sum \limits _{j=1}^n|a_{lj}-b_{lj}|^2<0, \end{aligned} \end{aligned}$$

i.e., \({\widehat{M}}\) is not accretive, and so, it is not P-accretive. For any given \(A,B\in M_{m\times n}(\mathbb {C})\) with \(A\ne B\) and \(A,B\ne \theta '_{sk}\), we get

$$\begin{aligned} \begin{aligned} \langle {\widehat{M}}(A)-{\widehat{M}}(B),J_2(\eta (A,B))\rangle&=\langle {\widehat{M}}(A)-{\widehat{M}}(B),\eta (A,B)\rangle \\ {}&=\langle -A+\theta '_{sk}+B-\theta '_{sk},\alpha (B-A)\rangle \\ {}&=\alpha \langle B-A,B-A\rangle =\alpha \Vert B-A\Vert ^2 =\alpha \sum \limits _{l=1}^m\sum \limits _{j=1}^n|a_{lj}-b_{lj}|^2>0. \end{aligned} \end{aligned}$$

Since in each of the cases \(A\ne B=\theta '_{sk}\), \(B\ne A=\theta '_{sk}\) and \(A=B=\theta '_{sk}\) we have \(\eta (A,B)=\textbf{0}\), it follows that:

$$\begin{aligned} \langle u-v,J_2(\eta (A,B))\rangle =\langle u-v,\eta (A,B)\rangle =0,\quad \forall (A,u),(B,v)\in \mathop {{\textrm{Graph}}}({\widehat{M}}). \end{aligned}$$

Consequently, \({\widehat{M}}\) is an \(\eta \)-accretive mapping. Since, for any \(A\in M_{m\times n}(\mathbb {C})\) with \(A\ne \theta '_{sk}\),

$$\begin{aligned} \Vert (I+{\widehat{M}})(A)\Vert =\Vert \theta '_{sk}\Vert =1>0 \end{aligned}$$

and

$$\begin{aligned} \begin{aligned} (I+{\widehat{M}})(\theta '_{sk})&=\left\{ \theta _{lj},\theta '_{lj}: l=1,2,\dots ,\frac{m}{2};j=1,2,\dots ,n\right\} ={\mathfrak {B}}, \end{aligned} \end{aligned}$$

where I is the identity mapping on \(E=M_{m\times n}(\mathbb {C})\), we conclude that \(\textbf{0}\notin (I+{\widehat{M}})(M_{m\times n}(\mathbb {C}))\). Therefore, \(I+{\widehat{M}}\) is not surjective, and so, \({\widehat{M}}\) is not a generalized m-accretive mapping.

For any \(\lambda >0\) and \(A\in M_{m\times n}(\mathbb {C})\), taking \(B=\frac{1}{\beta -\lambda }A+\frac{\gamma +\lambda }{\lambda -\beta }\theta '_{sk}\) (\(\lambda \ne \beta \), because \(\beta <0\)), it yields

$$\begin{aligned} (P+\lambda {\widehat{M}})(B)=(P+\lambda {\widehat{M}}) \left( \frac{1}{\beta -\lambda }A+\frac{\gamma +\lambda }{\lambda -\beta }\theta '_{sk}\right) =A. \end{aligned}$$
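Indeed, whenever \(B=\frac{1}{\beta -\lambda }A+\frac{\gamma +\lambda }{\lambda -\beta }\theta '_{sk}\ne \theta '_{sk}\), the last display can be verified directly from the definitions of P and \({\widehat{M}}\):

$$\begin{aligned} (P+\lambda {\widehat{M}})(B)=\beta B+\gamma \theta '_{sk}+\lambda (-B+\theta '_{sk}) =(\beta -\lambda )B+(\gamma +\lambda )\theta '_{sk} =A-(\gamma +\lambda )\theta '_{sk}+(\gamma +\lambda )\theta '_{sk}=A. \end{aligned}$$

Note that \(B=\theta '_{sk}\) occurs only when \(A=(\beta +\gamma )\theta '_{sk}\); in this case, one may take \(B=\theta '_{sk}\) itself, since \(\textbf{0}=\theta '_{sk}-\theta '_{sk}\in \Phi \) and hence \(A=(\beta +\gamma )\theta '_{sk}+\lambda \textbf{0}\in (P+\lambda {\widehat{M}})(\theta '_{sk})\).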

Hence, for every \(\lambda >0\), \(P+\lambda {\widehat{M}}\) is surjective, and so, \({\widehat{M}}\) is a \((P,\eta )\)-accretive mapping.

The following example illustrates that, for given mappings \(P:E\rightarrow E\) and \(\eta :E\times E\rightarrow E\), a generalized m-accretive mapping need not be \((P,\eta )\)-accretive.

Example 2.12

Let \(H_2(\mathbb {C})\) denote the set of all \(2\times 2\) Hermitian matrices with complex entries, that is,

$$\begin{aligned} H_2(\mathbb {C})=\left\{ \left( \begin{array}{cc} z &{} x-iy \\ x+iy &{} w \\ \end{array} \right) |x,y,z,w\in \mathbb {R}\right\} . \end{aligned}$$

Then, \(H_2(\mathbb {C})\) is a subspace of \(M_2(\mathbb {C})\), the space of all \(2\times 2\) matrices with complex entries, with respect to the operations of addition and scalar multiplication defined on \(M_2(\mathbb {C})\), when \(M_2(\mathbb {C})\) is considered as a real vector space. By defining the scalar product on \(H_2(\mathbb {C})\) as \(\langle A,B\rangle :=\frac{1}{2}tr(AB)\), for all \(A,B\in H_2(\mathbb {C})\), it is easy to check that \(\langle .,.\rangle \) is an inner product, that is, \((H_2(\mathbb {C}),\langle .,.\rangle )\) is an inner product space. The inner product defined above induces a norm on \(H_2(\mathbb {C})\) as follows:

$$\begin{aligned} \Vert A\Vert =\sqrt{\langle A,A\rangle }=\sqrt{\frac{1}{2}tr(AA)} = \sqrt{x^2+y^2+\frac{1}{2}(z^2+w^2)}. \end{aligned}$$

Since \((H_2(\mathbb {C}),\langle .,.\rangle )\) is a finite-dimensional inner product space, it is a Hilbert space, and so \((H_2(\mathbb {C}),\Vert .\Vert )\) is a 2-uniformly smooth Banach space. Suppose that the mappings \(P,{\widehat{M}}:H_2(\mathbb {C})\rightarrow H_2(\mathbb {C})\) and \(\eta :H_2(\mathbb {C})\times H_2(\mathbb {C})\rightarrow H_2(\mathbb {C})\) are defined, respectively, by

$$\begin{aligned} \begin{aligned} P(A)=P\left( \left( \begin{array}{cc} z &{} x-iy \\ x+iy &{} w \\ \end{array} \right) \right) =\left( \begin{array}{cc} z^{2k} &{} x^{2q}-iy^{2q} \\ x^{2q}+iy^{2q} &{} w^{2p} \\ \end{array} \right) , \end{aligned} \\ \begin{aligned} {\widehat{M}}(A)={\widehat{M}}\left( \left( \begin{array}{cc} z &{} x-iy \\ x+iy &{} w \\ \end{array} \right) \right) =\left( \begin{array}{cc} \alpha z^k &{} x^q-iy^q \\ x^q+iy^q &{} \beta w^p \\ \end{array} \right) \end{aligned} \end{aligned}$$

and

$$\begin{aligned} \begin{aligned} \eta (A,B)&=\eta \left( \left( \begin{array}{cc} z &{} x-iy \\ x+iy &{} w \\ \end{array} \right) ,\left( \begin{array}{cc} {\hat{z}} &{} {\hat{x}}-i{\hat{y}} \\ {\hat{x}}+i{\hat{y}} &{} {\hat{w}} \\ \end{array} \right) \right) \\ {}&=\left( \begin{array}{cc} \gamma w^l{\hat{w}}^l(z^m-{\hat{z}}^m) &{} x^q-{\hat{x}}^q-i(y^q-{\hat{y}}^q) \\ x^q-{\hat{x}}^q+i(y^q-{\hat{y}}^q) &{} \theta e^{s(z+{\hat{z}})}(w^n-{\hat{w}}^n) \\ \end{array} \right) , \end{aligned} \end{aligned}$$

for all \(A=\left( \begin{array}{cc} z &{} x-iy \\ x+iy &{} w \\ \end{array} \right) ,B=\left( \begin{array}{cc} {\hat{z}} &{} {\hat{x}}-i{\hat{y}} \\ {\hat{x}}+i{\hat{y}} &{} {\hat{w}} \\ \end{array} \right) \in H_2(\mathbb {C})\), where s is an arbitrary real constant, l is an arbitrary but fixed even natural number, \(\alpha ,\beta ,\gamma \) and \(\theta \) are arbitrary positive real constants, and mnpqk are arbitrary but fixed odd natural numbers.

Then, for any \(A=\left( \begin{array}{cc} z &{} x-iy \\ x+iy &{} w \\ \end{array} \right) ,B=\left( \begin{array}{cc} {\hat{z}} &{} {\hat{x}}-i{\hat{y}} \\ {\hat{x}}+i{\hat{y}} &{} {\hat{w}} \\ \end{array} \right) \in H_2(\mathbb {C})\), we obtain

$$\begin{aligned} \begin{aligned}&\langle {\widehat{M}}(A)-{\widehat{M}}(B),J_2(\eta (A,B))\rangle = \langle {\widehat{M}}(A)-{\widehat{M}}(B),\eta (A,B)\rangle \\&=\left\langle \left( \begin{array}{cc} \alpha (z^k-{\hat{z}}^k) &{} x^q-{\hat{x}}^q-i(y^q-{\hat{y}}^q) \\ x^q-{\hat{x}}^q+i(y^q-{\hat{y}}^q) &{} \beta (w^p-{\hat{w}}^p) \\ \end{array} \right) , \left( \begin{array}{cc} \gamma w^l{\hat{w}}^l(z^m-{\hat{z}}^m) &{} x^q-{\hat{x}}^q-i(y^q-{\hat{y}}^q) \\ x^q-{\hat{x}}^q+i(y^q-{\hat{y}}^q) &{} \theta e^{s(z+{\hat{z}})}(w^n-{\hat{w}}^n) \\ \end{array} \right) \right\rangle \\&=\frac{\alpha \gamma }{2}w^l{\hat{w}}^l(z^k-{\hat{z}}^k)(z^m-{\hat{z}}^m) \quad +\frac{\beta \theta }{2}e^{s(z+{\hat{z}})}(w^p-{\hat{w}}^p)(w^n-{\hat{w}}^n) +(x^q-{\hat{x}}^q)^2+(y^q-{\hat{y}}^q)^2 \\&=\frac{\alpha \gamma }{2}w^l{\hat{w}}^l(z-{\hat{z}})^2\sum \limits _{j=1}^k z^{k-j}{\hat{z}}^{j-1}\sum \limits _{{\hat{j}}=1}^mz^{m-{\hat{j}}}{\hat{z}}^{{\hat{j}}-1} +\frac{\beta \theta }{2}e^{s(z+{\hat{z}})}(w-{\hat{w}})^2\sum \limits _{j'=1}^p w^{p-j'}{\hat{w}}^{j'-1}\sum \limits _{j''=1}^nw^{n-j''}{\hat{w}}^{j''-1} \\ {}&\quad + (x^q-{\hat{x}}^q)^2+(y^q-{\hat{y}}^q)^2.\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \\ \end{aligned} \end{aligned}$$

Taking into account that mnkp are odd natural numbers, it is easy to see that

$$\begin{aligned} \sum \limits _{j=1}^kz^{k-j}{\hat{z}}^{j-1},\sum \limits _{{\hat{j}}=1}^mz^{m-{\hat{j}}}{\hat{z}}^{{\hat{j}}-1}, \sum \limits _{j'=1}^pw^{p-j'}{\hat{w}}^{j'-1},\sum \limits _{j''=1}^nw^{n-j''}{\hat{w}}^{j''-1}\ge 0. \end{aligned}$$

Since \(\alpha ,\beta ,\gamma ,\theta >0\) and l is an even natural number, the preceding relation implies that

$$\begin{aligned} \langle {\widehat{M}}(A)-{\widehat{M}}(B),J_2(\eta (A,B))\rangle \ge 0,\quad \forall A,B\in H_2(\mathbb {C}), \end{aligned}$$

which means that \({\widehat{M}}\) is an \(\eta \)-accretive mapping. Let the functions \(f,g,h:\mathbb {R}\rightarrow \mathbb {R}\) be defined by \(f(x):=x^{2k}+\alpha x^k\), \(g(x):=x^{2p}+\beta x^p\) and \(h(x)=x^{2q}+x^q\) for all \(x\in \mathbb {R}\). Then, for any \(A=\left( \begin{array}{cc} z &{} x-iy \\ x+iy &{} w \\ \end{array} \right) \in H_2(\mathbb {C})\), it yields

$$\begin{aligned} \begin{aligned} (P+{\widehat{M}})(A)&=(P+{\widehat{M}})\left( \left( \begin{array}{cc} z &{} x-iy \\ x+iy &{} w \\ \end{array} \right) \right) \\ {}&=\left( \begin{array}{cc} z^{2k}+\alpha z^k &{} x^{2q}+x^q-i(y^{2q}+y^q) \\ x^{2q}+x^q+i(y^{2q}+y^q) &{} w^{2p}+\beta w^p\\ \end{array} \right) \\ {}&=\left( \begin{array}{cc} f(z) &{} h(x)-ih(y) \\ h(x)+ih(y) &{} g(w) \\ \end{array} \right) . \end{aligned} \end{aligned}$$

Considering the facts that for all \(x\in \mathbb {R}\)

$$\begin{aligned} \begin{aligned}&f(x)=x^{2k}+\alpha x^k=\left( x^k+\frac{\alpha }{2}\right) ^2-\frac{\alpha ^2}{4}\ge -\frac{\alpha ^2}{4},\\ {}&g(x)=x^{2p}+\beta x^p=\left( x^p+\frac{\beta }{2}\right) ^2-\frac{\beta ^2}{4}\ge -\frac{\beta ^2}{4} \end{aligned} \end{aligned}$$

and

$$\begin{aligned} h(x)=x^{2q}+x^q=\left( x^q+\frac{1}{2}\right) ^2-\frac{1}{4}\ge -\frac{1}{4}, \end{aligned}$$

it follows that: \(f(\mathbb {R})=[-\frac{\alpha ^2}{4},+\infty )\ne \mathbb {R}\), \(g(\mathbb {R})=[-\frac{\beta ^2}{4},+\infty )\ne \mathbb {R}\) and \(h(\mathbb {R})=[-\frac{1}{4},+\infty )\ne \mathbb {R}\). Hence, \((P+{\widehat{M}})(H_2(\mathbb {C}))\ne H_2(\mathbb {C})\), i.e., \(P+{\widehat{M}}\) is not surjective and so \({\widehat{M}}\) is not a \((P,\eta )\)-accretive mapping.

We now suppose that \(\lambda \) is an arbitrary positive real constant and let the functions \({\widetilde{f}},{\widetilde{g}},{\widetilde{h}}:\mathbb {R}\rightarrow \mathbb {R}\) be defined, respectively, by \({\widetilde{f}}(x):=x+\lambda \alpha x^k\), \({\widetilde{g}}(x):=x+\lambda \beta x^p\) and \({\widetilde{h}}(x):=(1+\lambda )x^q\), for all \(x\in \mathbb {R}\). Then, for any \(A=\left( \begin{array}{cc} z &{} x-iy \\ x+iy &{} w \\ \end{array} \right) \in H_2(\mathbb {C})\), we get

$$\begin{aligned} \begin{aligned} (I+\lambda {\widehat{M}})(A)&=(I+\lambda {\widehat{M}})\left( \left( \begin{array}{cc} z &{} x-iy \\ x+iy &{} w \\ \end{array} \right) \right) \\ {}&=\left( \begin{array}{cc} z+\lambda \alpha z^k&{} (1+\lambda )x^q-i(1+\lambda )y^q \\ (1+\lambda )x^q+i(1+\lambda )y^q &{} w+\lambda \beta w^p\\ \end{array} \right) \\ {}&=\left( \begin{array}{cc} {\widetilde{f}}(z) &{} {\widetilde{h}}(x)-i{\widetilde{h}}(y) \\ {\widetilde{h}}(x)+i{\widetilde{h}}(y) &{} {\widetilde{g}}(w) \\ \end{array} \right) , \end{aligned} \end{aligned}$$

where I is the identity mapping on \(H_2(\mathbb {C})\). Relying on the fact that kp and q are odd natural numbers, it follows that \({\widetilde{f}}(\mathbb {R})={\widetilde{g}}(\mathbb {R})={\widetilde{h}}(\mathbb {R})=\mathbb {R}\). This fact implies that \((I+\lambda {\widehat{M}})(H_2(\mathbb {C}))=H_2(\mathbb {C})\), that is, \(I+\lambda {\widehat{M}}\) is a surjective mapping. Taking into account the arbitrariness in the choice of \(\lambda >0\), we conclude that \({\widehat{M}}\) is a generalized m-accretive mapping.

Example 2.13

Let \(M_{m\times n}(\mathbb {F})\) be defined as in Example 2.11, with the same norm and inner product. Let us denote by \(D_n(\mathbb {R})\) the space of all diagonal \(n\times n\) matrices with real entries. In other words, the (ij)-entry is an arbitrary real number if \(i=j\), and is zero if \(i\ne j\). Then

$$\begin{aligned} D_n(\mathbb {R})=\{A=( \begin{array}{c} a_{ij} \\ \end{array} )|a_{ij}\in \mathbb {R}, a_{ij}=0 \text{ if } i\ne j;i,j=1,2,\dots ,n\} \end{aligned}$$

is a subspace of \( M_{n\times n}(\mathbb {R})=M_n(\mathbb {R})\) with respect to the operations of addition and scalar multiplication defined on \(M_n(\mathbb {R})\), and the Hilbert–Schmidt norm induced by it becomes as \( \Vert A\Vert = (\sum _{i=1}^n a_{ii}^2\ )^{\frac{1}{2}}. \) Now, define the mappings \(P_1,P_2,{\widehat{M}}:D_n(\mathbb {R})\rightarrow D_n(\mathbb {R})\) and \(\eta :D_n(\mathbb {R})\times D_n(\mathbb {R})\rightarrow D_n(\mathbb {R})\), respectively, as follows: \(P_1(A)=P_1(( \begin{array}{c} a_{ij} \\ \end{array} ))=( \begin{array}{c} a'_{ij} \\ \end{array} ),\) \(P_2(A)=P_2(( \begin{array}{c} a_{ij} \\ \end{array} ))=( \begin{array}{c} a''_{ij} \\ \end{array} ),\) \({\widehat{M}}(A)={\widehat{M}}(( \begin{array}{c} a_{ij} \\ \end{array} ))=( \begin{array}{c} a'''_{ij} \\ \end{array} )\) and \(\eta (A,B)=\eta (( \begin{array}{c} a_{ij} \\ \end{array} ),( \begin{array}{c} b_{ij} \\ \end{array} ))=( \begin{array}{c} c_{ij} \\ \end{array} )\) for all \(A=( \begin{array}{c} a_{ij} \\ \end{array} ), B=(\begin{array}{c} b_{ij} \\ \end{array} )\in D_n(\mathbb {R})\), where for each \(i,j\in \{1,2,\dots ,n\}\),

$$\begin{aligned} a'_{ij}=\left\{ \begin{array}{ll} (\frac{1}{2})^{|a_{ii}|-2}-\alpha a_{ii}^p, &{}\quad i=j,\\ 0,&{}\quad i\ne j, \end{array}\right. \quad \quad a''_{ij}=\left\{ \begin{array}{ll} \beta a_{ii}^l,&{}\quad i=j,\\ 0,&{}\quad i\ne j, \end{array}\right. \\ a'''_{ij}=\left\{ \begin{array}{ll} \alpha a_{ii}^p,&{}\quad i=j,\\ 0,&{}\quad i\ne j, \end{array}\right. \end{aligned}$$

and

$$\begin{aligned} c_{ij}=\left\{ \begin{array}{ll} \varsigma e^{\gamma a_{ii}b_{ii}}(a_{ii}^k-b_{ii}^k),&{}\quad i=j,\\ 0,&{}\quad i\ne j, \end{array}\right. \end{aligned}$$

where \(\alpha \) and \(\varsigma \) are two arbitrary positive real constants, \(\beta \) and \(\gamma \) are two arbitrary real constants, k and p are two arbitrary but fixed odd natural numbers, and l is an arbitrary but fixed even natural number such that \(p>l\). Clearly, \((D_n(\mathbb {R}),\Vert .\Vert )\) is a finite-dimensional Hilbert space and so a 2-uniformly smooth Banach space. Then, for any \(A=( \begin{array}{c} a_{ij} \\ \end{array} ), B=(\begin{array}{c} b_{ij} \\ \end{array} )\in D_n(\mathbb {R})\), we obtain

$$\begin{aligned} \begin{aligned} \langle {\widehat{M}}(A)-{\widehat{M}}(B),J_2(\eta (A,B))\rangle&=\langle {\widehat{M}}(A)-{\widehat{M}}(B),\eta (A,B)\rangle \\ {}&=tr\Big (\left( \begin{array}{c} a'''_{ij}-b'''_{ij}\\ \end{array} \right) \left( \begin{array}{c} c_{ij} \\ \end{array} \right) \Big )\\ {}&=\alpha \varsigma \sum \limits _{i=1}^n(a_{ii}^p-b_{ii}^p)e^{\gamma a_{ii}b_{ii}}(a_{ii}^k-b_{ii}^k)\\ {}&=\alpha \varsigma \sum \limits _{i=1}^n(a_{ii}-b_{ii})^2e^{\gamma a_{ii}b_{ii}}\sum \limits _{s=1}^pa_{ii}^{p-s}b_{ii}^{s-1} \sum \limits _{j=1}^ka_{ii}^{k-j}b_{ii}^{j-1}. \end{aligned} \end{aligned}$$

In the light of the fact that k and p are odd natural numbers, it can be easily observed that for each \(i\in \{1,2,\dots ,n\}\), \(\sum _{s=1}^pa_{ii}^{p-s}b_{ii}^{s-1},\sum _{j=1}^ka_{ii}^{k-j}b_{ii}^{j-1}\) are both greater than or equal to 0. With the help of these facts, the preceding relation implies that

$$\begin{aligned} \langle {\widehat{M}}(A)-{\widehat{M}}(B),J_2(\eta (A,B))\rangle \ge 0,\quad \forall A,B\in D_n(\mathbb {R}), \end{aligned}$$

which means that \({\widehat{M}}\) is an \(\eta \)-accretive mapping. Let \(f:\mathbb {R}\rightarrow \mathbb {R}\) be a function defined by \(f(x):=(\frac{1}{2})^{|x|-2}\), for all \(x\in \mathbb {R}\). Then, for any \(A=\left( \begin{array}{c} a_{ij} \\ \end{array} \right) \in D_n(\mathbb {R})\), we get

$$\begin{aligned} (P_1+{\widehat{M}})(A)=(P_1+{\widehat{M}})(\left( \begin{array}{c} a_{ij} \\ \end{array} \right) )=\left( \begin{array}{c} a'_{ij}+a'''_{ij} \\ \end{array} \right) =\left( \begin{array}{c} {\widehat{a}}_{ij} \\ \end{array} \right) , \end{aligned}$$

where for each \(i,j\in \{1,2,\dots ,n\}\)

$$\begin{aligned} {\widehat{a}}_{ij}=\left\{ \begin{array}{ll} (\frac{1}{2})^{|a_{ii}|-2} = f(a_{ii}),&{}\quad i=j,\\ 0,&{}\quad i\ne j. \end{array}\right. \end{aligned}$$

By virtue of the fact that \(f(\mathbb {R})=(0,4]\), we conclude that \((P_1+{\widehat{M}})(D_n(\mathbb {R}))\ne D_n(\mathbb {R})\), i.e., \(P_1+{\widehat{M}}\) is not surjective. Therefore, \({\widehat{M}}\) is not a \((P_1,\eta )\)-accretive mapping. Now, let \(\lambda >0\) be an arbitrary real constant and let the function \(g:\mathbb {R}\rightarrow \mathbb {R}\) be defined by \(g(x):=\lambda \alpha x^p+\beta x^l\), for all \(x\in \mathbb {R}\). Then, for any \(A=\left( \begin{array}{c} a_{ij} \\ \end{array} \right) \in D_n(\mathbb {R})\), we obtain

$$\begin{aligned} (P_2+\lambda {\widehat{M}})(A)=(P_2+\lambda {\widehat{M}})(\left( \begin{array}{c} a_{ij} \\ \end{array} \right) )=\left( \begin{array}{c} a''_{ij}+\lambda a'''_{ij} \\ \end{array} \right) =\left( \begin{array}{c} {\widetilde{a}}_{ij} \\ \end{array} \right) , \end{aligned}$$

where for each \(i,j\in \{1,2,\dots ,n\}\)

$$\begin{aligned} {\widetilde{a}}_{ij}=\left\{ \begin{array}{ll} \lambda \alpha a_{ii}^p+\beta a_{ii}^l =g(a_{ii}),&{}\quad i=j,\\ 0,&{}\quad i\ne j. \end{array}\right. \end{aligned}$$

Taking into account that l is an even natural number and p is an odd natural number such that \(p>l\), it can be easily seen that \(g(\mathbb {R})=\mathbb {R}\), which implies that \((P_2+\lambda {\widehat{M}})(D_n(\mathbb {R}))=D_n(\mathbb {R})\), that is, \(P_2+\lambda {\widehat{M}}\) is a surjective mapping. Since \(\lambda >0\) was arbitrary, it follows that \({\widehat{M}}\) is a \((P_2,\eta )\)-accretive mapping.

Note, in particular, that if \(P=I\), the identity mapping on E, then the definition of \((P,\eta )\)-accretive mapping is equivalent to the definition of a generalized m-accretive mapping. In fact, the class of \((P,\eta )\)-accretive mappings is closely related to the class of generalized m-accretive mappings in the framework of Banach spaces. However, it should be noted that, according to Example 2.11, for given single-valued mappings \(P: E \rightarrow E\) and \(\eta : E \times E \rightarrow E\), a \((P,\eta )\)-accretive mapping may not necessarily be a generalized m-accretive mapping. The following conclusion provides us with sufficient conditions for a \((P,\eta )\)-accretive mapping \({\widehat{M}}\) to be a generalized m-accretive mapping.

Lemma 2.14

[28, Theorem 3.1(a)] Let E be a real q-uniformly smooth Banach space, \(\eta :E\times E\rightarrow E\) be a vector-valued mapping, \(P:E\rightarrow E\) be a strictly \(\eta \)-accretive mapping, and \({\widehat{M}}:E\rightarrow 2^E\) be a \((P,\eta )\)-accretive mapping. Let \(x,u\in E\) be two given points. If \(\langle u-v,J_q(\eta (x,y))\rangle \ge 0\) holds for all \((y,v)\in \mathop {{\textrm{Graph}}}({\widehat{M}})\), then \((x,u)\in \mathop {{\textrm{Graph}}}({\widehat{M}})\).

The following assertion due to Fang and Huang [21] is a direct consequence of the last result.

Lemma 2.15

[39, Theorem 2.1] Let \(P:E\rightarrow E\) be a strictly accretive single-valued operator, \({\widehat{M}}:E\rightarrow 2^E\) be a P-accretive operator, and \(x,u\in E\) be given points. If \(\langle u-v,J_q(x-y)\rangle \ge 0\) holds, for all \((y,v)\in \mathop {{\textrm{Graph}}}({\widehat{M}})\), then \((x,u)\in \mathop {{\textrm{Graph}}}({\widehat{M}})\).

Lemma 2.16

[28, Theorem 3.1(b)] Let E be a real q-uniformly smooth Banach space, \(\eta :E\times E\rightarrow E\) be a vector-valued mapping, \(P:E\rightarrow E\) be a strictly \(\eta \)-accretive mapping, and \({\widehat{M}}:E\rightarrow 2^E\) be a \((P,\eta )\)-accretive mapping. Then, the mapping \((P+\lambda {\widehat{M}})^{-1}:E\rightarrow E\) is single-valued for every real constant \(\lambda >0\).

Taking \(\eta (x,y)=x-y\), for all \(x,y\in E\), we obtain the following result as a direct consequence of the previous lemma.

Lemma 2.17

[39, Theorem 2.2] Let \(P:E\rightarrow E\) be a strictly accretive mapping and \({\widehat{M}}:E\rightarrow 2^E\) be a P-accretive mapping. Then, the operator \((P+\lambda {\widehat{M}})^{-1}:E\rightarrow E\) is single-valued, where \(\lambda >0\) is a real constant.

Based on Lemma 2.16, the resolvent operator \(R^{P,\eta }_{{\widehat{M}},\lambda }\) associated with \(P,\eta ,{\widehat{M}}\) and \(\lambda >0\) is defined as follows.

Definition 2.18

[28, 39] Let E be a real q-uniformly smooth Banach space, \(\eta :E\times E\rightarrow E\) be a vector-valued mapping, \(P:E\rightarrow E\) be a strictly \(\eta \)-accretive mapping, \({\widehat{M}}:E\rightarrow 2^E\) be a \((P,\eta )\)-accretive mapping, and \(\lambda >0\) be an arbitrary real constant. The resolvent operator \(R^{P,\eta }_{{\widehat{M}},\lambda }:E\rightarrow E\) associated with \(P,\eta ,{\widehat{M}}\) and \(\lambda \) is defined by

$$\begin{aligned} R^{P,\eta }_{{\widehat{M}},\lambda }(u)=(P+\lambda {\widehat{M}})^{-1}(u),\quad \forall u\in E. \end{aligned}$$

When \(\eta (x,y)=x-y\), for all \(x,y\in E\), Definition 2.18 reduces to the following definition due to Fang and Huang [21].

Definition 2.19

[21, Definition 2.4] Let \(P:E\rightarrow E\) be a strictly accretive mapping, \({\widehat{M}}:E\rightarrow 2^E\) be a P-accretive mapping, and \(\lambda >0\) be an arbitrary real constant. The resolvent operator \(R^P_{{\widehat{M}},\lambda }:E\rightarrow E\) associated with \(P,{\widehat{M}}\) and \(\lambda \) is defined by

$$\begin{aligned} R^P_{{\widehat{M}},\lambda }(u)=(P+\lambda {\widehat{M}})^{-1}(u),\quad \forall u\in E. \end{aligned}$$

In the rest of the paper, we say that \({\widehat{M}}\) is \((P,\eta )\)-strongly (resp., P-strongly) accretive with constant \(\gamma \) if \({\widehat{M}}\) is a \(\gamma \)-strongly \(\eta \)-accretive (resp., \(\gamma \)-strongly accretive) mapping and \((P+\lambda {\widehat{M}})(E)=E\), for every \(\lambda >0\).

Before we proceed to our main result in this section, let us provide the following definition, which will be used in its proof.

Definition 2.20

A vector-valued mapping \(\eta :E\times E\rightarrow E\) is said to be \(\tau \)-Lipschitz continuous if and only if there exists a constant \(\tau >0\), such that \(\Vert \eta (x,y)\Vert \le \tau \Vert x-y\Vert \), for all \(x,y\in E\).

The next theorem states that the resolvent operator \(R^{P,\eta }_{{\widehat{M}},\lambda }\) associated with \(P,\eta ,{\widehat{M}}\) and \(\lambda >0\) is Lipschitz continuous and provides an estimate of its Lipschitz constant.

Theorem 2.21

Let E be a real q-uniformly smooth Banach space, \(\eta :E\times E\rightarrow E\) be a \(\tau \)-Lipschitz continuous mapping, \(P:E\rightarrow E\) be an r-strongly \(\eta \)-accretive mapping and let \({\widehat{M}}:E\rightarrow 2^E\) be a \((P,\eta )\)-strongly accretive mapping with constant \(\gamma \). Then, the resolvent operator \(R^{P,\eta }_{{\widehat{M}},\lambda }:E\rightarrow E\) is \(\frac{\tau ^{q-1}}{\lambda \gamma +r}\)-Lipschitz continuous. In other words, we have the following Lipschitz estimate:

$$\begin{aligned} \Vert R^{P,\eta }_{{\widehat{M}},\lambda }(u)-R^{P,\eta }_{{\widehat{M}},\lambda }(v)\Vert \le \frac{\tau ^{q-1}}{\lambda \gamma +r}\Vert u-v\Vert ,\quad \forall u,v\in E. \end{aligned}$$

Proof

Taking into consideration the fact that \({\widehat{M}}\) is a \((P,\eta )\)-accretive mapping, for any given points \(u,v\in E\) with \(\Big \Vert R^{P,\eta }_{{\widehat{M}},\lambda }(u)-R^{P,\eta }_{{\widehat{M}},\lambda }(v)\Big \Vert \ne 0\), we have

$$\begin{aligned} R^{P,\eta }_{{\widehat{M}},\lambda }(u)=(P+\lambda {\widehat{M}})^{-1}(u) \text{ and } R^{P,\eta }_{{\widehat{M}},\lambda }(v)=(P+\lambda {\widehat{M}})^{-1}(v), \end{aligned}$$

and so

$$\begin{aligned} \frac{1}{\lambda }\left( u-P\left( R^{P,\eta }_{{\widehat{M}},\lambda }(u)\right) \right) \in {\widehat{M}}\left( R^{P,\eta }_{{\widehat{M}},\lambda }(u)\right) \text{ and } \frac{1}{\lambda }\left( v-P\left( R^{P,\eta }_{{\widehat{M}},\lambda }(v)\right) \right) \in {\widehat{M}}(R^{P,\eta }_{{\widehat{M}},\lambda }(v)). \end{aligned}$$

Since \({\widehat{M}}\) is \(\gamma \)-strongly \(\eta \)-accretive, it follows that:

$$\begin{aligned} \begin{aligned}&\frac{1}{\lambda }\left\langle u-P\left( R^{P,\eta }_{{\widehat{M}},\lambda }(u)\right) -\left( v-P\left( R^{P,\eta }_{{\widehat{M}},\lambda }(v)\right) \right) , J_q\left( \eta \left( R^{P,\eta }_{{\widehat{M}},\lambda }(u),R^{P,\eta }_{{\widehat{M}},\lambda }(v)\right) \right) \right\rangle \\ {}&\ge \gamma \left\| R^{P,\eta }_{{\widehat{M}},\lambda }(u)-R^{P,\eta }_{{\widehat{M}},\lambda }(v)\right\| ^q, \end{aligned} \end{aligned}$$

which leads to

$$\begin{aligned} \begin{aligned}&\left\langle u-v,J_q\left( \eta \left( R^{P,\eta }_{{\widehat{M}},\lambda }(u),R^{P,\eta }_{{\widehat{M}},\lambda }(v)\right) \right) \right\rangle \ge \lambda \gamma \left\| R^{P,\eta }_{{\widehat{M}},\lambda }(u)-R^{P,\eta }_{{\widehat{M}},\lambda }(v)\right\| ^q\\&+\left\langle P\left( R^{P,\eta }_{{\widehat{M}},\lambda }(u)\right) -P\left( R^{P,\eta }_{{\widehat{M}},\lambda }(v)\right) , J_q\left( \eta \left( R^{P,\eta }_{{\widehat{M}},\lambda }(u),R^{P,\eta }_{{\widehat{M}},\lambda }(v)\right) \right) \right\rangle . \end{aligned} \end{aligned}$$

Employing the preceding inequality and in the light of the facts that \(\eta \) is \(\tau \)-Lipschitz continuous and P is r-strongly \(\eta \)-accretive, we can obtain the following:

$$\begin{aligned} \begin{aligned}&\tau ^{q-1}\Vert u-v\Vert \left\| R^{P,\eta }_{{\widehat{M}},\lambda }(u)-R^{P,\eta }_{{\widehat{M}},\lambda }(v)\right\| ^{q-1}\\&\ge \Vert u-v\Vert \left\| \eta \left( R^{P,\eta }_{{\widehat{M}},\lambda }(u),R^{P,\eta }_{{\widehat{M}},\lambda }(v)\right) \right\| ^{q-1}\\ {}&=\Vert u-v\Vert \left\| J_q\left( \eta \left( R^{P,\eta }_{{\widehat{M}},\lambda }(u),R^{P,\eta }_{{\widehat{M}},\lambda }(v)\right) \right) \right\| \\ {}&\ge \lambda \gamma \left\| R^{P,\eta }_{{\widehat{M}},\lambda }(u)-R^{P,\eta }_{{\widehat{M}},\lambda }(v)\right\| ^q\\ {}&\quad +\left\langle P\left( R^{P,\eta }_{{\widehat{M}},\lambda }(u)\right) -P\left( R^{P,\eta }_{{\widehat{M}},\lambda }(v)\right) , J_q\left( \eta \left( R^{P,\eta }_{{\widehat{M}},\lambda }(u),R^{P,\eta }_{{\widehat{M}},\lambda }(v)\right) \right) \right\rangle \\ {}&\ge \lambda \gamma \left\| R^{P,\eta }_{{\widehat{M}},\lambda }(u)-R^{P,\eta }_{{\widehat{M}},\lambda }(v)\right\| ^q +r\left\| R^{P,\eta }_{{\widehat{M}},\lambda }(u)-R^{P,\eta }_{{\widehat{M}},\lambda }(v)\right\| ^q\\ {}&=(\lambda \gamma +r)\left\| R^{P,\eta }_{{\widehat{M}},\lambda }(u)-R^{P,\eta }_{{\widehat{M}},\lambda }(v)\right\| ^q. \end{aligned} \end{aligned}$$

Relying on the fact that \(\left\| R^{P,\eta }_{{\widehat{M}},\lambda }(u)-R^{P,\eta }_{{\widehat{M}},\lambda }(v)\right\| \ne 0\), we conclude that

$$\begin{aligned} \left\| R^{P,\eta }_{{\widehat{M}},\lambda }(u) -R^{P,\eta }_{{\widehat{M}},\lambda }(v)\right\| \le \frac{\tau ^{q-1}}{\lambda \gamma +r}\Vert u-v\Vert . \end{aligned}$$

This gives us the desired result. \(\square \)

Corollary 2.22

Let E be a real q-uniformly smooth Banach space, \(P:E\rightarrow E\) be an r-strongly accretive mapping, and \({\widehat{M}}:E\rightarrow 2^E\) be a P-strongly accretive mapping with constant \(\gamma \). Then, the resolvent operator \(R^P_{{\widehat{M}},\lambda }:E\rightarrow E\) is \(\frac{1}{\lambda \gamma +r}\)-Lipschitz continuous. In other words, we have the following Lipschitz estimate:

$$\begin{aligned} \Vert R^P_{{\widehat{M}},\lambda }(u)-R^P_{{\widehat{M}},\lambda }(v)\Vert \le \frac{1}{\lambda \gamma +r}\Vert u-v\Vert ,\quad \forall u,v\in E. \end{aligned}$$

Proof

Taking \(\eta (x,y)=x-y\), for all \(x,y\in E\), the statement follows immediately from Theorem 2.21. \(\square \)
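As a simple one-dimensional illustration, let \(E=\mathbb {R}\) (which, being a Hilbert space, is 2-uniformly smooth), and for fixed constants \(r,\gamma >0\) define \(P(x)=rx\) and \({\widehat{M}}(x)=\{\gamma x\}\), for all \(x\in \mathbb {R}\). Then P is r-strongly accretive, \({\widehat{M}}\) is P-strongly accretive with constant \(\gamma \), and for every \(\lambda >0\)

$$\begin{aligned} R^P_{{\widehat{M}},\lambda }(u)=(P+\lambda {\widehat{M}})^{-1}(u)=\frac{u}{r+\lambda \gamma },\quad \text{ and } \text{ so }\quad \big |R^P_{{\widehat{M}},\lambda }(u)-R^P_{{\widehat{M}},\lambda }(v)\big |=\frac{1}{\lambda \gamma +r}|u-v|,\quad \forall u,v\in \mathbb {R}, \end{aligned}$$

which shows that the Lipschitz constant \(\frac{1}{\lambda \gamma +r}\) obtained in Corollary 2.22 cannot, in general, be improved.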

If \(E=\mathcal {H}\) is a real Hilbert space, then we obtain the following result as a direct consequence of the last conclusion. However, before presenting it, we need to recall the following assertion.

Lemma 2.23

[53, Theorem 2.1] Let \(P:\mathcal {H}\rightarrow \mathcal {H}\) be a strongly monotone, continuous and single-valued operator. Then, a multi-valued operator \({\widehat{M}}:\mathcal {H}\rightarrow 2^{\mathcal {H}}\) is P-monotone if and only if \({\widehat{M}}\) is maximal monotone.

Corollary 2.24

[53, Theorem 2.2] Let \(P:\mathcal {H}\rightarrow \mathcal {H}\) be a continuous and strongly monotone operator with constant \(\gamma \) and let \({\widehat{M}}:\mathcal {H}\rightarrow 2^{\mathcal {H}}\) be maximal strongly monotone with constant \(\eta \). Then, the resolvent operator \(R^P_{{\widehat{M}},\lambda }:\mathcal {H}\rightarrow \mathcal {H}\) is \(\frac{1}{\lambda \eta +\gamma }\)-Lipschitz continuous, that is

$$\begin{aligned} \Vert R^P_{{\widehat{M}},\lambda }(u)-R^P_{{\widehat{M}},\lambda }(v)\Vert \le \frac{1}{\lambda \eta +\gamma }\Vert u-v\Vert ,\quad \forall u,v\in \mathcal {H}. \end{aligned}$$

Proof

Since P is strongly monotone and continuous and \({\widehat{M}}\) is, in particular, maximal monotone, Lemma 2.23 (that is, [53, Theorem 2.1]) implies that \({\widehat{M}}\) is P-monotone. The desired result now follows immediately from Corollary 2.22. \(\square \)

3 System of generalized variational-like inclusions

Let \(p\in \mathbb {N}\backslash \{1\}\) be an arbitrary but fixed natural number and, for each \(i\in \{1,2,\dots ,p\}\), let \(E_i\) be a real \(q_i\)-uniformly smooth Banach space with a norm \(\Vert .\Vert _i\). Assume that \(\eta _i:E_i\times E_i\rightarrow E_i\), \(P_i:E_i\rightarrow E_i\) and \(F_i:E_1\times E_2\times \dots \times E_p=\prod _{k=1}^pE_k\rightarrow E_i\) \((i=1,2,\dots ,p)\) are single-valued mappings. Furthermore, let \({\widehat{M}}_i: E_i \rightarrow 2^{E_i}\) be a \((P_i, \eta _i)\)-accretive mapping for each \(i\in \{1,2,\dots ,p\}\). For given \(a_i\in E_i\) \((i=1,2,\dots ,p)\), we consider the problem of finding \((x_1,x_2,\dots ,x_p)\in \prod _{k=1}^pE_k\), such that for each \(i\in \{1,2,\dots ,p\}\)

$$\begin{aligned} a_i\in F_i(x_1,x_2,\dots ,x_p)+{\widehat{M}}_i(x_i). \end{aligned}$$
(3.1)

The problem (3.1) is called a system of generalized variational-like inclusions \((\mathop {{\textrm{SGVLI}}})\) with \((P,\eta )\)-accretive mappings in real q-uniformly smooth Banach spaces.

If \(p=2\), \(x_1=x\), \(x_2=y\), for each \(i\in \{1,2\}\) \(a_i=\theta _i\) is the zero vector of \(E_i\) and \(\eta _i(u_i,v_i)=u_i-v_i\) for all \(u_i,v_i\in E_i\), then the \(\mathop {{\textrm{SGVLI}}}\) (3.1) reduces to the problem of finding \((x,y)\in E_1\times E_2\), such that

$$\begin{aligned} \begin{aligned} \left\{ \begin{array}{ll} \theta _1\in F_1(x,y)+{\widehat{M}}_1(x),\\ \theta _2\in F_2(x,y)+{\widehat{M}}_2(y), \end{array}\right. \end{aligned} \end{aligned}$$
(3.2)

which is called a system of variational inclusions \((\mathop {{\textrm{SVI}}})\) with P-accretive mappings.

We note that for appropriate and suitable choices of the mappings \(F_i,P_i,\eta _i,{\widehat{M}}_i\), the elements \(a_i\in E_i\), and the spaces \(E_i\) \((i=1,2,\dots ,p)\), the \(\mathop {{\textrm{SGVLI}}}\) (3.1) reduces to various classes of variational inclusions and variational inequalities. These reductions have been studied in several works such as [20,21,22, 39, 45, 46, 53,54,55] and the references therein.

The following statement, which establishes the equivalence between the \(\mathop {{\textrm{SGVLI}}}\) (3.1) and a fixed point problem, provides a characterization of the solutions of \(\mathop {{\textrm{SGVLI}}}\) (3.1).

Lemma 3.1

Let \(E_i,F_i,\eta _i\) and \(a_i\) \((i=1,2,\dots ,p)\) be the same as in the \(\mathop {{\textrm{SGVLI}}}\) (3.1). Suppose that for each \(i\in \{1,2,\dots ,p\}\), \(P_i:E_i\rightarrow E_i\) is a strictly \(\eta _i\)-accretive mapping and \({\widehat{M}}_i:E_i\rightarrow 2^{E_i}\) is a \((P_i,\eta _i)\)-accretive mapping. Then, \((x_1,x_2,\dots ,x_p)\in \prod _{k=1}^pE_k\) is a solution of the \(\mathop {{\textrm{SGVLI}}}\) (3.1) if and only if \((x_1,x_2,\dots ,x_p)\) satisfies

$$\begin{aligned} x_i=R^{P_i,\eta _i}_{{\widehat{M}}_i,\lambda _i}[P_i(x_i)-\lambda _i(F_i(x_1,x_2,\dots ,x_p)-a_i)], \quad (i=1,2,\dots ,p), \end{aligned}$$
(3.3)

where \(\lambda _i>0\) \((i=1,2,\dots ,p)\) are arbitrary real constants, and for each \(i\in \{1,2,\dots ,p\}\), \(R^{P_i,\eta _i}_{{\widehat{M}}_i,\lambda _i}=(P_i+\lambda _i{\widehat{M}}_i)^{-1}\).

Proof

The conclusions follow directly from Definition 2.18 and some simple arguments. \(\square \)
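For completeness, the chain of equivalences underlying the proof can be written down explicitly; it uses only the definition \(R^{P_i,\eta _i}_{{\widehat{M}}_i,\lambda _i}=(P_i+\lambda _i{\widehat{M}}_i)^{-1}\) and the single-valuedness of the resolvent operator. For each \(i\in \{1,2,\dots ,p\}\),

$$\begin{aligned} a_i\in F_i(x_1,x_2,\dots ,x_p)+{\widehat{M}}_i(x_i)&\Longleftrightarrow P_i(x_i)-\lambda _i\big (F_i(x_1,x_2,\dots ,x_p)-a_i\big )\in (P_i+\lambda _i{\widehat{M}}_i)(x_i)\\&\Longleftrightarrow x_i=R^{P_i,\eta _i}_{{\widehat{M}}_i,\lambda _i}\big [P_i(x_i)-\lambda _i\big (F_i(x_1,x_2,\dots ,x_p)-a_i\big )\big ], \end{aligned}$$

which is exactly (3.3).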

As a direct consequence of the previous lemma, we obtain the following assertion which gives a characterization of the solution of \(\mathop {{\textrm{SVI}}}\) (3.2).

Lemma 3.2

Assume that for \(i=1,2\), \(E_i\) are real \(q_i\)-uniformly smooth Banach spaces, \(F_i:E_1\times E_2\rightarrow E_i\) are single-valued mappings, and let \(\theta _i\) be the zero vector of \(E_i\) for each \(i\in \{1,2\}\). Suppose further that for each \(i\in \{1,2\}\), \(P_i:E_i\rightarrow E_i\) is a strictly accretive mapping and \({\widehat{M}}_i:E_i\rightarrow 2^{E_i}\) is a \(P_i\)-accretive mapping. Then, \((x,y)\in E_1\times E_2\) is a solution of the \(\mathop {{\textrm{SVI}}}\) (3.2) if and only if \((x,y)\) satisfies the following conditions:

$$\begin{aligned} \begin{aligned} \left\{ \begin{array}{ll} x=R^{P_1}_{{\widehat{M}}_1,\lambda _1}[P_1(x)-\lambda _1F_1(x,y)],\\ y=R^{P_2}_{{\widehat{M}}_2,\lambda _2}[P_2(y)-\lambda _2F_2(x,y)], \end{array}\right. \end{aligned} \end{aligned}$$

where \(\lambda _1,\lambda _2>0\) are arbitrary real constants and for \(i=1,2\), \(R^{P_i}_{{\widehat{M}}_i,\lambda _i}=(P_i+\lambda _i{\widehat{M}}_i)^{-1}\).

Definition 3.3

For each \(i\in \{1,2,\dots ,p\}\), let \(E_i\) be a \(q_i\)-uniformly smooth Banach space with a norm \(\Vert .\Vert _i\). A mapping \(F:\prod _{j=1}^pE_j\rightarrow E_i\) is said to be

  1. (i)

    accretive in the ith argument if

    $$\begin{aligned} \begin{aligned}&\langle F(x_1,\dots ,x_{i-1},x_i,x_{i+1},\dots ,x_p) -F(x_1,\dots ,x_{i-1},{\widehat{x}}_i,x_{i+1},\dots ,x_p), J_{q_i}(x_i-{\widehat{x}}_i)\rangle \ge 0,\\ {}&\quad \forall x_i,{\widehat{x}}_i\in E_i,x_j\in E_j (j=1,2,\dots ,p; j\ne i); \end{aligned} \end{aligned}$$
  2. (ii)

    \(\mu _i\)-strongly accretive in the ith argument if there exists a constant \(\mu _i>0\), such that

    $$\begin{aligned} \begin{aligned}&\langle F(x_1,\dots ,x_{i-1},x_i,x_{i+1},\dots ,x_p)-F(x_1,\dots ,x_{i-1},{\widehat{x}}_i,x_{i+1},\dots ,x_p), J_{q_i}(x_i-{\widehat{x}}_i)\rangle \\ {}&\ge \mu _i\Vert x_i-{\widehat{x}}_i\Vert _i^{q_i},\quad \forall x_i,{\widehat{x}}_i\in E_i,x_j\in E_j (j=1,2,\dots ,p; j\ne i); \end{aligned} \end{aligned}$$
  3. (iii)

    \(\varsigma _i\)-Lipschitz continuous in the ith argument if there exists a constant \(\varsigma _i>0\), such that

    $$\begin{aligned} \begin{aligned}&\Vert F(x_1,\dots ,x_{i-1},x_i,x_{i+1},\dots ,x_p)-F(x_1,\dots ,x_{i-1},{\widehat{x}}_i,x_{i+1},\dots ,x_p)\Vert _i\\ {}&\le \varsigma _i\Vert x_i-{\widehat{x}}_i\Vert _i,\quad \forall x_i,{\widehat{x}}_i\in E_i,x_j\in E_j (j=1,2,\dots ,p; j\ne i). \end{aligned} \end{aligned}$$

Note, in particular, that for the special case when \(p=2\), we say that a mapping \(F:E_1\times E_2\rightarrow E_i\) \((i=1,2)\) is \((\alpha ,\beta )\)-mixed Lipschitz continuous if F is Lipschitz continuous in the first and second arguments with constants \(\alpha \) and \(\beta \), respectively.

In the next theorem, under appropriate and suitable conditions, the existence and uniqueness of a solution for the \(\mathop {{\textrm{SGVLI}}}\) (3.1) are established.

Theorem 3.4

Let \(E_i,F_i,P_i,{\widehat{M}}_i,\eta _i\) and \(a_i\) \((i=1,2,\dots ,p)\) be the same as in the \(\mathop {{\textrm{SGVLI}}}\) (3.1), such that for each \(i\in \Gamma =\{1,2,\dots ,p\}\),

  1. (i)

    \(\eta _i\) is a \(\tau _i\)-Lipschitz continuous mapping;

  2. (ii)

    \(P_i\) is an \(r_i\)-strongly \(\eta _i\)-accretive and \(\varrho _i\)-Lipschitz continuous mapping;

  3. (iii)

    \(F_i\) is \(\mu _i\)-strongly accretive and \(\xi _i\)-Lipschitz continuous in the ith argument and \(\varsigma _{i,j}\)-Lipschitz continuous in the jth argument for each \(j\in \Gamma \), \(j\ne i\);

  4. (iv)

    \({\widehat{M}}_i\) is a \((P_i,\eta _i)\)-strongly accretive mapping with constant \(\gamma _i\);

  5. (v)

    there exists a constant \(\lambda _i>0\), such that

    $$\begin{aligned} \begin{aligned}&\frac{\tau _i^{q_i-1}}{\lambda _i\gamma _i+r_i}\left( \root q_i \of {1-2q_ir_i+c_{q_i}\varrho _i^{q_i}} +\root q_i \of {1-2\lambda _iq_i\mu _i+c_{q_i}\lambda _i^{q_i}\xi _i^{q_i}}\right) \\ {}&\quad +\sum \limits _{k\in \Gamma ,k\ne i}\frac{\tau _k^{q_k-1}\lambda _k}{\lambda _k\gamma _k+r_k}\varsigma _{k,i}<1, \end{aligned}\nonumber \\ \end{aligned}$$
    (3.4)

    where \(c_{q_i}\) \((i=1,2,\dots ,p)\) are constants guaranteed by Lemma 2.5, and for the case when \(q_i\) \((i=1,2,\dots ,p)\) are even natural numbers, in addition to (3.4), the following conditions hold:

    $$\begin{aligned} 2q_ir_i<1+c_{q_i}\varrho _i^{q_i} \text{ and } 2\lambda _iq_i\mu _i<1+c_{q_i}\lambda _i^{q_i}\xi _i^{q_i}. \end{aligned}$$
    (3.5)

Then, the \(\mathop {{\textrm{SGVLI}}}\) (3.1) admits a unique solution.

Proof

Let us first define, for each \(i\in \Gamma \), the mapping \(\varphi _i:\prod \limits _{k=1}^pE_k\rightarrow E_i\) by

$$\begin{aligned} \begin{aligned} \varphi _i(x_1,x_2,\dots ,x_p)=R^{P_i,\eta _i}_{{\widehat{M}}_i,\lambda _i}[P_i(x_i)-\lambda _i(F_i(x_1,x_2,\dots ,x_p)-a_i)], \end{aligned} \end{aligned}$$
(3.6)

for all \((x_1,x_2,\dots ,x_p)\in \prod _{k=1}^pE_k\). Define \(\Vert .\Vert _*\) on \(\prod _{k=1}^pE_k\) by

$$\begin{aligned} \Vert (x_1,x_2,\dots ,x_p)\Vert _*=\sum \limits _{k=1}^p\Vert x_k\Vert _k,\quad \forall (x_1,x_2,\dots ,x_p)\in \prod \limits _{k=1}^pE_k. \end{aligned}$$
(3.7)

It can be easily observed that \((\prod _{k=1}^pE_k,\Vert .\Vert _*)\) is a Banach space. At the same time, suppose that the mapping \(\psi :\prod _{k=1}^pE_k\rightarrow \prod _{k=1}^pE_k\) is defined by

$$\begin{aligned} \psi (x_1,x_2,\dots ,x_p)=(\varphi _1(x_1,x_2,\dots ,x_p),\dots ,\varphi _p(x_1,x_2,\dots ,x_p)), \end{aligned}$$
(3.8)

for all \((x_1,x_2,\dots ,x_p)\in \prod _{k=1}^pE_k\). We will now show that \(\psi \) is a contraction mapping. To this end, let \((x_1,x_2,\dots ,x_p),({\widehat{x}}_1,{\widehat{x}}_2,\dots ,{\widehat{x}}_p)\in \prod _{k=1}^pE_k\) be arbitrary but fixed. Using (3.6) and Theorem 2.21, it follows that for each \(i\in \Gamma \):

$$\begin{aligned} \begin{aligned}&\Vert \varphi _i(x_1,x_2,\dots ,x_p)-\varphi _i({\widehat{x}}_1,{\widehat{x}}_2,\dots ,{\widehat{x}}_p)\Vert _i\\&\le \frac{\tau _i^{q_i-1}}{\lambda _i\gamma _i+r_i}\Vert P_i(x_i)-P_i({\widehat{x}}_i) -\lambda _i\big (F_i(x_1,x_2,\dots ,x_{i-1},x_i,x_{i+1},\dots ,x_p)\\&\quad -F_i(x_1,x_2,\dots ,x_{i-1},{\widehat{x}}_i,x_{i+1},\dots ,x_p)\big )\Vert _i\\ {}&\quad +\frac{\tau _i^{q_i-1}\lambda _i}{\lambda _i\gamma _i+r_i}\sum \limits _{j\in \Gamma ,j\ne i}\Vert F_i(x_1,x_2,\dots ,x_{j-1},x_j,x_{j+1},\dots ,x_p)\\ {}&\quad -F_i(x_1,x_2,\dots ,x_{j-1},{\widehat{x}}_j,x_{j+1},\dots ,x_p)\Vert _i\\ {}&\le \frac{\tau _i^{q_i-1}}{\lambda _i\gamma _i+r_i}\Big (\Vert x_i-{\widehat{x}}_i-(P_i(x_i)-P_i({\widehat{x}}_i))\Vert _i\\&\quad +\Vert x_i-{\widehat{x}}_i-\lambda _i\big (F_i(x_1,x_2,\dots ,x_{i-1},x_i,x_{i+1},\dots ,x_p)\\&\quad -F_i(x_1,x_2,\dots ,x_{i-1},{\widehat{x}}_i,x_{i+1},\dots ,x_p)\big )\Vert _i\Big )\\ {}&\quad +\frac{\tau _i^{q_i-1}\lambda _i}{\lambda _i\gamma _i+r_i}\sum \limits _{j\in \Gamma ,j\ne i}\Vert F_i(x_1,x_2,\dots ,x_{j-1},x_j,x_{j+1},\dots ,x_p)\\ {}&\quad -F_i(x_1,x_2,\dots ,x_{j-1},{\widehat{x}}_j,x_{j+1},\dots ,x_p)\Vert _i. \end{aligned} \end{aligned}$$
(3.9)

Taking into account that for each \(i\in \Gamma \), \(E_i\) is a real \(q_i\)-uniformly smooth Banach space, in the light of Lemma 2.5, there exists a constant \(c_{q_i}>0\), such that

$$\begin{aligned} \begin{aligned} \Vert x_i-{\widehat{x}}_i-(P_i(x_i)-P_i({\widehat{x}}_i))\Vert _i^{q_i}&\le \Vert x_i-{\widehat{x}}_i\Vert _i^{q_i} -2q_i\langle P_i(x_i)-P_i({\widehat{x}}_i),J_{q_i}(\eta _i(x_i,{\widehat{x}}_i))\rangle _i\\ {}&\quad +c_{q_i}\Vert P_i(x_i)-P_i({\widehat{x}}_i)\Vert _i^{q_i}. \end{aligned} \end{aligned}$$
(3.10)

Since for each \(i\in \Gamma \), \(P_i\) is an \(r_i\)-strongly \(\eta _i\)-accretive and \(\varrho _i\)-Lipschitz continuous mapping, from (3.10), we conclude that

$$\begin{aligned} \Vert x_i-{\widehat{x}}_i-(P_i(x_i)-P_i({\widehat{x}}_i))\Vert _i\le \root q_i \of {1-2q_ir_i+c_{q_i}\varrho _i^{q_i}}\Vert x_i-{\widehat{x}}_i\Vert _i. \end{aligned}$$
(3.11)

Similarly, utilizing Lemma 2.5 and the fact that for each \(i\in \Gamma \), the mapping \(F_i\) is \(\mu _i\)-strongly accretive and \(\xi _i\)-Lipschitz continuous in the ith argument, yields

$$\begin{aligned} \begin{aligned}&\Vert x_i-{\widehat{x}}_i-\lambda _i\big (F_i(x_1,x_2,\dots ,x_{i-1},x_i,x_{i+1},\dots ,x_p)\\ {}&\quad -F_i(x_1,x_2,\dots ,x_{i-1},{\widehat{x}}_i,x_{i+1},\dots ,x_p)\big )\Vert _i^{q_i}\\ {}&\le \Vert x_i-{\widehat{x}}_i\Vert _i^{q_i}-2\lambda _iq_i\langle F_i(x_1,x_2,\dots ,x_{i-1},x_i,x_{i+1},\dots ,x_p)\\&\quad -F_i(x_1,x_2,\dots ,x_{i-1},{\widehat{x}}_i,x_{i+1},\dots ,x_p)\big ),J_{q_i}(x_i-{\hat{x}}_i)\rangle _i\\&\quad +c_{q_i}\lambda _i^{q_i}\Vert F_i(x_1,x_2,\dots ,x_{i-1},x_i,x_{i+1},\dots ,x_p)\\ {}&\quad - F_i(x_1,x_2,\dots ,x_{i-1},{\widehat{x}}_i,x_{i+1},\dots ,x_p)\Vert _i^{q_i}\\ {}&\le (1-2\lambda _iq_i\mu _i+c_{q_i}\lambda _i^{q_i}\xi _i^{q_i})\Vert x_i-{\widehat{x}}_i\Vert _i^{q_i}, \end{aligned} \end{aligned}$$

from which we obtain

$$\begin{aligned} \begin{aligned}&\Vert x_i-{\widehat{x}}_i-\lambda _i\big (F_i(x_1,x_2,\dots ,x_{i-1},x_i,x_{i+1},\dots ,x_p)\\ {}&\quad -F_i(x_1,x_2,\dots ,x_{i-1},{\widehat{x}}_i,x_{i+1},\dots ,x_p)\big )\Vert _i\\ {}&\le \root q_i \of {1-2\lambda _iq_i\mu _i+c_{q_i}\lambda _i^{q_i}\xi _i^{q_i}}\Vert x_i-{\widehat{x}}_i\Vert _i. \end{aligned} \end{aligned}$$
(3.12)

Relying on the fact that for each \(i\in \Gamma \), the mapping \(F_i\) is \(\varsigma _{i,j}\)-Lipschitz continuous in the jth argument \((j\in \Gamma ,j\ne i)\), it follows that:

$$\begin{aligned} \begin{aligned}&\Vert F_i(x_1,x_2,\dots ,x_{j-1},x_j,x_{j+1},\dots ,x_p) -F_i(x_1,x_2,\dots ,x_{j-1},{\widehat{x}}_j,x_{j+1},\dots ,x_p)\Vert _i\\ {}&\le \varsigma _{i,j}\Vert x_j-{\widehat{x}}_j\Vert _j. \end{aligned} \end{aligned}$$
(3.13)

Substituting (3.11)–(3.13) into (3.9), for each \(i\in \Gamma \), we get

$$\begin{aligned} \begin{aligned}&\Vert \varphi _i(x_1,x_2,\dots ,x_p)-\varphi _i({\widehat{x}}_1,{\widehat{x}}_2,\dots ,{\widehat{x}}_p)\Vert _i\\ {}&\le \vartheta _i\Vert x_i-{\widehat{x}}_i\Vert _i+\frac{\tau _i^{q_i-1}\lambda _i}{\lambda _i\gamma _i+r_i}\sum \limits _{j\in \Gamma ,j\ne i}\varsigma _{i,j}\Vert x_j-{\widehat{x}}_j\Vert _j, \end{aligned} \end{aligned}$$
(3.14)

where for each \(i\in \Gamma \)

$$\begin{aligned} \vartheta _i=\frac{\tau _i^{q_i-1}}{\lambda _i\gamma _i+r_i}\left( \root q_i \of {1-2q_ir_i+c_{q_i}\varrho _i^{q_i}} +\root q_i \of {1-2\lambda _iq_i\mu _i+c_{q_i}\lambda _i^{q_i}\xi _i^{q_i}}\right) . \end{aligned}$$

Thereby, making use of (3.8) and (3.14), we obtain

$$\begin{aligned} \begin{aligned}&\Vert \psi (x_1,x_2,\dots ,x_p)-\psi ({\widehat{x}}_1,{\widehat{x}}_2,\dots ,{\widehat{x}}_p)\Vert _*\\ {}&=\sum \limits _{i=1}^p\Vert \varphi _i(x_1,x_2,\dots ,x_p)-\varphi _i({\widehat{x}}_1,{\widehat{x}}_2,\dots ,{\widehat{x}}_p)\Vert _i\\ {}&\le \sum \limits _{i=1}^p\left( \vartheta _i\Vert x_i-{\widehat{x}}_i\Vert _i+\frac{\tau ^{q_i-1}_i\lambda _i}{\lambda _i\gamma _i+r_i}\sum \limits _{j\in \Gamma ,j\ne i}\varsigma _{i,j}\Vert x_j-{\widehat{x}}_j\Vert _j\right) \\ {}&=\left( \vartheta _1+\sum \limits _{k=2}^p\frac{\tau _k^{q_k-1}\lambda _k}{\lambda _k\gamma _k+r_k}\varsigma _{k,1}\right) \Vert x_1-{\widehat{x}}_1\Vert _1\\ {}&\quad +\left( \vartheta _2+\sum \limits _{k\in \Gamma ,k\ne 2}\frac{\tau _k^{q_k-1}\lambda _k}{\lambda _k\gamma _k+r_k}\varsigma _{k,2}\right) \Vert x_2-{\widehat{x}}_2\Vert _2\\ {}&\quad +\dots +\left( \vartheta _p +\sum \limits _{k=1}^{p-1}\frac{\tau _k^{q_k-1}\lambda _k}{\lambda _k\gamma _k+r_k}\varsigma _{k,p}\right) \Vert x_p-{\widehat{x}}_p\Vert _p\\ {}&\le \theta \sum \limits _{i=1}^p\Vert x_i-{\widehat{x}}_i\Vert _i=\theta \Vert (x_1,x_2,\dots ,x_p)-({\widehat{x}}_1,{\widehat{x}}_2,\dots ,{\widehat{x}}_p)\Vert _*, \end{aligned} \end{aligned}$$
(3.15)

where

$$\begin{aligned} \theta =\max \left\{ \vartheta _i+\sum \limits _{k\in \Gamma ,k\ne i}\frac{\tau _k^{q_k-1}\lambda _k}{\lambda _k\gamma _k+r_k}\varsigma _{k,i}:i=1,2,\dots ,p\right\} . \end{aligned}$$

Clearly, (3.4) and (3.5) imply that \(0\le \theta <1\), and so (3.15) guarantees that \(\psi \) is a contraction mapping. In accordance with the Banach fixed point theorem, there exists a unique point \((x^*_1,x^*_2,\dots ,x^*_p)\in \prod _{i=1}^pE_i\), such that

$$\begin{aligned} \psi (x^*_1,x^*_2,\dots ,x^*_p)=(x^*_1,x^*_2,\dots ,x^*_p). \end{aligned}$$

From (3.6) and (3.8), we deduce that \((x^*_1,x^*_2,\dots ,x^*_p)\) satisfies Eq. (3.3), i.e., for each \(i\in \Gamma \):

$$\begin{aligned} x^*_i=R^{P_i,\eta _i}_{{\widehat{M}}_i,\lambda _i}[P_i(x^*_i)-\lambda _i(F_i(x^*_1,x^*_2,\dots ,x^*_p)-a_i)]. \end{aligned}$$

Now, Lemma 3.1 guarantees that \((x^*_1,x^*_2,\dots ,x^*_p)\in \prod _{i=1}^pE_i\) is the unique solution of the \(\mathop {{\textrm{SGVLI}}}\) (3.1). The proof is complete. \(\square \)
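For concrete data, hypothesis (3.4) and the contraction constant \(\theta \) from the proof can be evaluated numerically. The following is a minimal sketch; all constants are hypothetical placeholders rather than values attached to an actual problem, and the function merely evaluates the left-hand side of (3.4) for each index and reports whether the condition holds.

```python
# Minimal sketch for checking hypothesis (3.4) of Theorem 3.4 numerically.
# All data are hypothetical placeholders; sigma[k][i] plays the role of the
# Lipschitz constant varsigma_{k,i} of F_k in its i-th argument (k != i).
def contraction_factor(q, tau, lam, gamma, r, rho, mu, xi, c, sigma):
    p = len(q)
    def vartheta(i):
        rad1 = 1 - 2 * q[i] * r[i] + c[i] * rho[i] ** q[i]
        rad2 = 1 - 2 * lam[i] * q[i] * mu[i] + c[i] * lam[i] ** q[i] * xi[i] ** q[i]
        if rad1 < 0 or rad2 < 0:
            raise ValueError("negative radicand; compare with condition (3.5)")
        return (tau[i] ** (q[i] - 1) / (lam[i] * gamma[i] + r[i])
                * (rad1 ** (1.0 / q[i]) + rad2 ** (1.0 / q[i])))
    def cross(i):
        return sum(tau[k] ** (q[k] - 1) * lam[k] / (lam[k] * gamma[k] + r[k]) * sigma[k][i]
                   for k in range(p) if k != i)
    return max(vartheta(i) + cross(i) for i in range(p))

# hypothetical data for p = 2 and q_1 = q_2 = 2
theta = contraction_factor(q=[2, 2], tau=[1, 1], lam=[1, 1], gamma=[1, 1],
                           r=[0.49, 0.49], rho=[1, 1], mu=[0.49, 0.49],
                           xi=[1, 1], c=[1, 1], sigma=[[0, 0.1], [0.1, 0]])
print(theta, "condition (3.4) holds" if theta < 1 else "condition (3.4) fails")
```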

An immediate consequence of the previous theorem can be derived as follows.

Corollary 3.5

Suppose that for each \(i\in \{1,2\}\), \(E_i\) is a real \(q_i\)-uniformly smooth Banach space with a norm \(\Vert .\Vert _i\), \(P_i:E_i\rightarrow E_i\) is an \(r_i\)-strongly accretive and \(\varrho _i\)-Lipschitz continuous mapping, and \({\widehat{M}}_i:E_i\rightarrow 2^{E_i}\) is a \(P_i\)-strongly accretive mapping with constant \(\gamma _i\). Let, for each \(i\in \{1,2\}\), the mapping \(F_i:E_1\times E_2\rightarrow E_i\) be \(\mu _i\)-strongly accretive and \((L_{F_i},l_{F_i})\)-mixed Lipschitz continuous. Moreover, let there exist constants \(\lambda _1,\lambda _2>0\), such that

$$\begin{aligned} \begin{aligned} \left\{ \begin{array}{ll} \frac{1}{\lambda _1\gamma _1+r_1}\Big (\root q_1 \of {1-2q_1r_1+c_{q_1}\varrho _1^{q_1}} +\root q_1 \of {1-2\lambda _1q_1\mu _1+c_{q_1}\lambda _1^{q_1}L_{F_1}^{q_1}}\Big ) +\frac{\lambda _2L_{F_2}}{\lambda _2\gamma _2+r_2}<1,\\ \frac{1}{\lambda _2\gamma _2+r_2}\Big (\root q_2 \of {1-2q_2r_2+c_{q_2}\varrho _2^{q_2}} +\root q_2 \of {1-2\lambda _2q_2\mu _2+c_{q_2}\lambda _2^{q_2}l_{F_2}^{q_2}}\Big ) +\frac{\lambda _1l_{F_1}}{\lambda _1\gamma _1+r_1}<1, \end{array}\right. \end{aligned}\nonumber \\ \end{aligned}$$
(3.16)

where \(c_{q_i}\) \((i=1,2)\) are constants guaranteed by Lemma 2.5. In the case when \(q_i\) \((i=1,2)\) are even natural numbers, in addition to (3.16), the following conditions hold:

$$\begin{aligned} \begin{aligned} \left\{ \begin{array}{ll} 2q_ir_i<1+c_{q_i}\varrho _i^{q_i}, (i=1,2),\\ 2\lambda _1q_1\mu _1<1+c_{q_1}\lambda _1^{q_1}L_{F_1}^{q_1},\\ 2\lambda _2q_2\mu _2<1+c_{q_2}\lambda _2^{q_2}l_{F_2}^{q_2}. \end{array}\right. \end{aligned} \end{aligned}$$

Then, the \(\mathop {{\textrm{SVI}}}\) (3.2) has a unique solution.

4 Total asymptotically nonexpansive mappings and some iterative algorithms

We recall that a mapping \(T:E\rightarrow E\) is called nonexpansive if \(\Vert T(x)-T(y)\Vert \le \Vert x-y\Vert \) for all \(x,y\in E\). Because of the connection with the geometry of Banach spaces, along with the relevance of these mappings in the theory of monotone and accretive operators, the study of the class of nonexpansive mappings has been one of the major and most active research areas of nonlinear analysis since the 1960s. Due to their importance and applications in fixed point theory, during the last five decades, much attention has been given to developing the notion of nonexpansive mapping. As an extension of the class of nonexpansive mappings, in 1972, Goebel and Kirk [23] introduced the class of asymptotically nonexpansive mappings as follows.

Definition 4.1

[23] A mapping \(T:E\rightarrow E\) is said to be asymptotically nonexpansive if there exists a sequence \(\{a_n\}\subset (0,\infty )\) with \(\lim \limits _{n\rightarrow \infty }a_n=0\), such that for all \(x,y\in E\)

$$\begin{aligned} \Vert T^n(x)-T^n(y)\Vert \le (1+a_n)\Vert x-y\Vert ,\quad \forall n\in \mathbb {N}. \end{aligned}$$

Conditions under which a self-mapping T of a nonempty subset K of a real normed linear space E has a fixed point are also investigated in [23].

As another generalization of the class of nonexpansive mappings, the concept of nearly asymptotically nonexpansive mapping was introduced and studied by Sahu [40] as follows.

Definition 4.2

[40] A mapping \(T:E\rightarrow E\) is said to be nearly asymptotically nonexpansive with respect to the sequences \(\{a_n\}\) and \(\{b_n\}\) (or nearly \((\{a_n\},\{b_n\})\)-asymptotically nonexpansive) if there exist nonnegative real sequences \(\{a_n\}\) and \(\{b_n\}\) with \(a_n,b_n\rightarrow 0\), as \(n\rightarrow \infty \), such that for all \(x,y\in E\)

$$\begin{aligned} \Vert T^n(x)-T^n(y)\Vert \le (1+a_n)\Vert x-y\Vert +b_n,\quad \forall n\in \mathbb {N}. \end{aligned}$$

Recently, Alber et al. [1] introduced another generalized nonexpansive mapping, the so-called total asymptotically nonexpansive mapping, which is more general than asymptotically nonexpansive and nearly asymptotically nonexpansive mappings.

Definition 4.3

[1] A mapping \(T:E\rightarrow E\) is said to be total asymptotically nonexpansive (also referred to as \((\{a_n\},\{b_n\},\phi )\)-total asymptotically nonexpansive) if there exist nonnegative real sequences \(\{a_n\}\) and \(\{b_n\}\) with \(a_n,b_n\rightarrow 0\) as \(n\rightarrow \infty \) and a strictly increasing continuous function \(\phi :\mathbb {R}^+\rightarrow \mathbb {R}^+\) with \(\phi (0)=0\), such that for all \(x,y\in E\)

$$\begin{aligned} \Vert T^n(x)-T^n(y)\Vert \le \Vert x-y\Vert +a_n\phi (\Vert x-y\Vert )+b_n,\quad \forall n\in \mathbb {N}. \end{aligned}$$

Remark 4.4

It should be pointed out that if \(\phi =I\), the identity mapping on \(\mathbb {R}^+\), then the class of \((\{a_n\},\{b_n\},\phi =I)\)-total asymptotically nonexpansive mappings coincides with the class of nearly \((\{a_n\},\{b_n\})\)-asymptotically nonexpansive mappings [40]. For the special case when \(\phi =I\) and \(b_n=0\) for all \(n\in \mathbb {N}\), the class of \((\{a_n\},0,\phi =I)\)-total asymptotically nonexpansive mappings coincides exactly with the class of asymptotically nonexpansive mappings [23]. If \(a_n=0\) for all \(n\in \mathbb {N}\), then the class of \((0,\{b_n\},\phi )\)-total asymptotically nonexpansive mappings is exactly the class of nearly nonexpansive mappings [40].

To explore further generalizations of the notion of nonexpansive mapping and to find more information and details along with some illustrative examples, we refer to [1, 8, 14, 15, 23, 31, 37, 40] and the references therein.

Next, we provide an example that illustrates the fact that the class of total asymptotically nonexpansive mappings properly includes the class of asymptotically nonexpansive mappings.

Example 4.5

For \(1\le p<\infty \), consider the classical space

$$\begin{aligned} l^p=\left\{ x=(x_n)_{n\in \mathbb {N}}:\sum \limits _{n=1}^{\infty }|x_n|^p<\infty , x_n\in \mathbb {F}=\mathbb {R} \text{ or } \mathbb {C}\right\} , \end{aligned}$$

consisting of all p-power summable sequences, equipped with the p-norm \(\Vert .\Vert _p\) defined on it by

$$\begin{aligned} \Vert x\Vert _p=\left( \sum \limits _{n=1}^\infty |x_n|^p\right) ^{\frac{1}{p}},\quad \forall x=(x_n)_{n\in \mathbb {N}}\in l^p. \end{aligned}$$

Now, consider \(E:=(-\infty ,\alpha ]\times B\) with the norm \(\Vert .\Vert _E=|.|_{\mathbb {R}}+\Vert .\Vert _p\), where \(\alpha >0\) is an arbitrary real constant and B is the closed unit ball in \(l^p\). Suppose further that the self-mapping T of E is defined by

$$\begin{aligned} T(u,x)=\left\{ \begin{array}{ll} (u,{\widetilde{x}}), &{}\quad \text{ if } -\infty<u<0,\\ (\frac{u}{\gamma },{\widetilde{x}}), &{}\quad \text{ if } 0\le u\le \beta ,\\ (0,{\widetilde{x}}), &{}\quad \text{ if } \beta <u\le \alpha , \end{array}\right. \end{aligned}$$

where \({\widetilde{x}}=({\widetilde{x}}_n)_{n=1}^\infty \), with \({\widetilde{x}}_i={\widetilde{x}}_{q+2j}=0\) for all \(1\le i\le q\) and \(j\in \mathbb {N}\)

$$\begin{aligned} {\widetilde{x}}_{q+2i-1}=\left\{ \begin{array}{ll} \frac{\theta }{\root p \of {2^{p+1}}} \big (\sin ^{k_{\frac{i+2}{3}}}|x_i|-|x_i|^{\lambda _{\frac{i+2}{3}}}\big ), &{}\quad \text{ if } i\in \left\{ 3r-2|r=1,2,\dots ,\frac{t+2}{3}\right\} ,\\ \theta \sin |x_i|^{m_{\frac{i+1}{3}}}, &{}\quad \text{ if } i\in \left\{ 3r-1|r=1,2,\dots ,\frac{t+2}{3}\right\} ,\\ \frac{\theta }{\root p \of {2^{p+1}}}(|x_i|^{\alpha _{\frac{i}{3}}}-\sin |x_i|^{\beta _{\frac{i}{3}}}), &{}\quad \text{ if } i\in \left\{ 3r|r=1,2,\dots ,\frac{t+2}{3}\right\} , \end{array}\right. \end{aligned}$$

\({\widetilde{x}}_{q+2t+l}=\theta x_{t+\frac{l+1}{2}}\) for all \(l\in \{2j+3|j\in \mathbb {N}\}\), \(0<\theta<1<\gamma \) and \(\beta \in (0,\alpha )\) are arbitrary real constants, \(t\in \{3\,s-2|s\in \mathbb {N}\}\), \(q\ge t+2\) and \(\alpha _i,\beta _i,k_i,\lambda _i,m_i\in \mathbb {N}\backslash \{1\}\) \((i=1,2,\dots ,\frac{t+2}{3})\) are arbitrary but fixed natural numbers. It is known that every asymptotically nonexpansive mapping is Lipschitzian and every Lipschitzian mapping is continuous. Taking into account that the mapping T is discontinuous at the points \((\beta ,x)\) for all \(x\in B\), it follows that T is not Lipschitzian, and therefore, it is not an asymptotically nonexpansive mapping.

It is easy to show that for all \((u,x),(v,y)\in (-\infty ,0)\times B\)

$$\begin{aligned} \begin{aligned}&\Vert T(u,x)-T(v,y)\Vert _E=\Vert (u-v,{\widetilde{x}}-{\widetilde{y}})\Vert _E =\Big \Vert (u-v,\Big (\underbrace{0,0,\dots ,0}_{q~times},\frac{\theta }{\root p \of {2^{p+1}}} \Big (\sin ^{k_1}|x_1|-\sin ^{k_1}|y_1|\\ {}&\quad -(|x_1|^{\lambda _1}-|y_1|^{\lambda _1})\Big ),0,\theta (\sin |x_2|^{m_1}-\sin |y_2|^{m_1}),0, \frac{\theta }{\root p \of {2^{p+1}}}(|x_3|^{\alpha _1}-|y_3|^{\alpha _1} -(\sin |x_3|^{\beta _1}\\ {}&\quad -\sin |y_3|^{\beta _1})),0,,\dots ,0, \frac{\theta }{\root p \of {2^{p+1}}}\Big (\sin ^{k_{\frac{t+2}{3}}}|x_t|-\sin ^{k_{\frac{t+2}{3}}}|y_t| -\Big (|x_t|^{\lambda _{\frac{t+2}{3}}}-|y_t|^{\lambda _{\frac{t+2}{3}}}\Big )\Big ),0,\\&\quad \theta \Big (\sin |x_{t+1}|^{m_{\frac{t+2}{3}}}-\sin |y_{t+1}|^{m_{\frac{t+2}{3}}}\Big ),0,\frac{\theta }{\root p \of {2^{p+1}}}\Big (|x_{t+2}|^{\alpha _{\frac{t+2}{3}}}-|y_{t+2}|^{\alpha _{\frac{t+2}{3}}} -\Big (\sin |x_{t+2}|^{\beta _{\frac{t+2}{3}}}\\ {}&\quad -\sin |y_{t+2}|^{\beta _{\frac{t+2}{3}}}\Big )\Big ),0,\theta (x_{t+3}-y_{t+3}),0,\theta (x_{t+4}-y_{t+4}),\dots \Big )\Big )\Big \Vert _E\\ {}&\le |u-v|+\theta \Big (\max \Big \{\Big (\sum \limits _{j=1}^{k_i}|x_{3i-2}|^{k_i-j}|y_{3i-2}|^{j-1}\Big )^p, \Big (\sum \limits _{s'=1}^{\lambda _i}|x_{3i-2}|^{\lambda _i-s'}|y_{3i-2}|^{s'-1}\Big )^p,\\ {}&\quad \Big (\sum \limits _{r=1}^{m_i}|x_{3i-1}|^{m_i-r}|y_{3i-1}|^{r-1}\Big )^p, \Big (\sum \limits _{s''=1}^{\alpha _i}|x_{3i}|^{\alpha _i-s''}|y_{3i}|^{s''-1}\Big )^p,\\ {}&\quad \Big (\sum \limits _{r'=1}^{\beta _i}|x_{3i}|^{\beta _i-r'}|y_{3i}|^{r'-1}\Big )^p,1:i=1,2,\dots ,\frac{t+2}{3}\Big \}\sum \limits _{i=1}^\infty |x_i-y_i|^p\Big )^{\frac{1}{p}}\\ {}&=|u-v|+\theta \max \Big \{\sum \limits _{j=1}^{k_i}|x_{3i-2}|^{k_i-j}|y_{3i-2}|^{j-1}, \sum \limits _{s'=1}^{\lambda _i}|x_{3i-2}|^{\lambda _i-s'}|y_{3i-2}|^{s'-1},\\ {}&\quad \sum \limits _{r=1}^{m_i}|x_{3i-1}|^{m_i-r}|y_{3i-1}|^{r-1}, \sum \limits _{s''=1}^{\alpha _i}|x_{3i}|^{\alpha _i-s''}|y_{3i}|^{s''-1},\\ {}&\quad \sum \limits _{r'=1}^{\beta _i}|x_{3i}|^{\beta _i-r'}|y_{3i}|^{r'-1},1: i=1,2,\dots ,\frac{t+2}{3}\Big \}\Vert x-y\Vert _p. \end{aligned} \end{aligned}$$
(4.1)

Since \(x,y\in B\), it follows that: \(0\le |x_{3i-2}|^{k_i-j},|y_{3i-2}|^{j-1}\le 1\) for each \(j\in \{1,2,\dots ,k_i\}\), \(0\le |x_{3i-2}|^{\lambda _i-s'},|y_{3i-2}|^{s'-1}\le 1\) for each \(s'\in \{1,2,\dots ,\lambda _i\}\), \(0\le |x_{3i-1}|^{m_i-r},|y_{3i-1}|^{r-1}\le 1\) for each \(r\in \{1,2,\dots ,m_i\}\), \(0\le |x_{3i}|^{\alpha _i-s''},|y_{3i}|^{s''-1}\le 1\) for each \(s''\in \{1,2,\dots ,\alpha _i\}\), and \(0\le |x_{3i}|^{\beta _i-r'},|y_{3i}|^{r'-1}\le 1\) for each \(r'\in \{1,2,\dots ,\beta _i\}\) and \(i\in \{1,2,\dots ,\frac{t+2}{3}\}\). These facts imply that

$$\begin{aligned} \begin{aligned}&0\le \sum \limits _{j=1}^{k_i}|x_{3i-2}|^{k_i-j}|y_{3i-2}|^{j-1}\le k_i,\quad 0\le \sum \limits _{s'=1}^{\lambda _i}|x_{3i-2}|^{\lambda _i-s'}|y_{3i-2}|^{s'-1}\le \lambda _i,\\ \end{aligned} \\ \begin{aligned}&0\le \sum \limits _{r=1}^{m_i}|x_{3i-1}|^{m_i-r}|y_{3i-1}|^{r-1}\le m_i,\quad 0\le \sum \limits _{s''=1}^{\alpha _i}|x_{3i}|^{\alpha _i-s''}|y_{3i}|^{s''-1}\le \alpha _i \end{aligned} \end{aligned}$$

and \(0\le \sum \limits _{r'=1}^{\beta _i}|x_{3i}|^{\beta _i-r'}|y_{3i}|^{r'-1}\le \beta _i\), for each \(i\in \{1,2,\dots ,\frac{t+2}{3}\}\). Thanks to the last mentioned facts and by applying (4.1), we deduce that for all \((u,x),(v,y)\in (-\infty ,0)\times B\)

$$\begin{aligned} \Vert T(u,x)-T(v,y)\Vert _E\le |u-v|+\theta \varsigma \Vert x-y\Vert _p, \end{aligned}$$
(4.2)

where \(\varsigma =\max \{\alpha _i,\beta _i,k_i,\lambda _i,m_i:i=1,2,\dots ,\frac{t+2}{3}\}\). Using the arguments similar to those used in (4.1) and (4.2), one can prove that

  1. (i)

    for all \((u,x),(v,y)\in [0,\beta ]\times B\)

    $$\begin{aligned} \begin{aligned} \Vert T(u,x)-T(v,y)\Vert _E&=\Vert \left( \frac{u}{\gamma },{\widetilde{x}}\right) -\left( \frac{v}{\gamma },{\widetilde{y}}\right) \Vert _E\\ {}&\le \frac{1}{\gamma }|u-v|+\theta \varsigma \Vert x-y\Vert _p\\ {}&\le |u-v|+\theta \varsigma \Vert x-y\Vert _p+\frac{\beta }{\gamma }; \end{aligned} \end{aligned}$$
    (4.3)
  2. (ii)

    for all \((u,x),(v,y)\in (\beta ,\alpha ]\times B\)

    $$\begin{aligned} \begin{aligned} \Vert T(u,x)-T(v,y)\Vert _E&=\Vert (0,{\widetilde{x}}-{\widetilde{y}})\Vert _E\\ {}&\le \theta \varsigma \Vert x-y\Vert _p\\ {}&\le |u-v|+\theta \varsigma \Vert x-y\Vert _p; \end{aligned} \end{aligned}$$
    (4.4)
  3. (iii)

    for all \((u,x)\in (-\infty ,0)\times B\) and \((v,y)\in [0,\beta ]\times B\)

    $$\begin{aligned} \begin{aligned} \Vert T(u,x)-T(v,y)\Vert _E&=\Big \Vert (u,{\widetilde{x}})-\left( \frac{v}{\gamma },{\widetilde{y}}\right) \Big \Vert _E\\ {}&=\Big \Vert \left( u-\frac{v}{\gamma },{\widetilde{x}}-{\widetilde{y}}\right) \Big \Vert _E\\ {}&\le \Big |u-\frac{v}{\gamma }\Big |+\theta \varsigma \Vert x-y\Vert _p\\ {}&\le |u|+\frac{1}{\gamma }|v|+\theta \varsigma \Vert x-y\Vert _p\\ {}&\le |u-v|+\theta \varsigma \Vert x-y\Vert _p+\frac{\beta }{\gamma }; \end{aligned} \end{aligned}$$
    (4.5)
  4. (iv)

    for all \((u,x)\in (-\infty ,0)\times B\) and \((v,y)\in (\beta ,\alpha ]\times B\)

    $$\begin{aligned} \begin{aligned} \Vert T(u,x)-T(v,y)\Vert _E&=\Vert (u,{\widetilde{x}})-(0,{\widetilde{y}})\Vert _E\\ {}&=\Vert (u,{\widetilde{x}}-{\widetilde{y}})\Vert _E\\ {}&\le |u|+\theta \varsigma \Vert x-y\Vert _p\\ {}&\le |u-v|+\theta \varsigma \Vert x-y\Vert _p; \end{aligned} \end{aligned}$$
    (4.6)
  5. (v)

    for all \((u,x)\in [0,\beta ]\times B\) and \((v,y)\in (\beta ,\alpha ]\times B\)

    $$\begin{aligned} \begin{aligned} \Vert T(u,x)-T(v,y)\Vert _E&=\Big \Vert \Big (\frac{u}{\gamma },{\widetilde{x}}\Big ) -(0,{\widetilde{y}})\Big \Vert _E\\ {}&=\Big \Vert \Big (\frac{u}{\gamma },{\widetilde{x}}-{\widetilde{y}}\Big )\Big \Vert _E\\&\le \Big |\frac{u}{\gamma }\Big |+\theta \varsigma \Vert x-y\Vert _p\\ {}&\le |u-v|+\theta \varsigma \Vert x-y\Vert _p+\frac{1}{\gamma }|v|\\ {}&\le |u-v|+\theta \varsigma \Vert x-y\Vert _p+\frac{\alpha }{\gamma }. \end{aligned} \end{aligned}$$
    (4.7)

Making use of (4.2)–(4.7) and taking into account that \(0<\beta <\alpha \), it follows that for all \((u,x),(v,y)\in E\):

$$\begin{aligned} \begin{aligned} \Vert T(u,x)-T(v,y)\Vert _E&\le |u-v|+\theta \varsigma \Vert x-y\Vert _p+\frac{\alpha }{\gamma }\\ {}&\le |u-v|+\Vert x-y\Vert _p+\theta \varsigma (|u-v|+\Vert x-y\Vert _p)+\frac{\alpha }{\gamma }. \end{aligned} \end{aligned}$$
(4.8)

For all \((u,x)\in (-\infty ,0)\times B\) and \(n\ge 2\), we have \(T^n(u,x)=(u,{\widehat{x}})\), where

$$\begin{aligned} \begin{aligned} {\widehat{x}}&=\Big (\underbrace{0,0,\dots ,0}_{(2^n-1)q~times}, \Big (\frac{\theta }{\root p \of {2^{p+1}}}\Big )^n(\sin ^{k_1}|x_1|-|x_1|^{\lambda _1}), \underbrace{0,0,\dots ,0}_{(2^n-1)~times},\theta ^n\sin |x_2|^{m_1},\underbrace{0,0,\dots ,0}_{(2^n-1)~times},\\&\quad \Big (\frac{\theta }{\root p \of {2^{p+1}}}\Big )^n(|x_3|^{\alpha _1}-\sin |x_3|^{\beta _1}),\dots , \Big (\frac{\theta }{\root p \of {2^{p+1}}}\Big )^n\Big (\sin ^{k_{\frac{t+2}{3}}}|x_t|\\&\quad -|x_t|^{\lambda _{\frac{t+2}{3}}}\Big ), \underbrace{0,0,\dots ,0}_{(2^n-1)~times},\theta ^n\sin |x_{t+1}|^{m_{\frac{t+2}{3}}}, \underbrace{0,0,\dots ,0}_{(2^n-1)~times},\Big (\frac{\theta }{\root p \of {2^{p+1}}}\Big )^n \Big (|x_{t+2}|^{\alpha _{\frac{t+2}{3}}}\\ {}&\quad -\sin |x_{t+2}|^{\beta _{\frac{t+2}{3}}}\Big ), \underbrace{0,0,\dots ,0}_{(2^n-1)~times},\theta ^n x_{t+3}, \underbrace{0,0,\dots ,0}_{(2^n-1)~times},\theta ^n x_{t+4},\dots \Big ). \end{aligned} \end{aligned}$$

Then, by a similar way to the proofs of (4.1) and (4.2), for all \((u,x),(v,y)\in (-\infty ,0)\times B\) and \(n\ge 2\), one can show that

$$\begin{aligned} \begin{aligned}&\Vert T^n(u,x)-T^n(v,y)\Vert _E=\Vert (u,{\widehat{x}})-(v,{\widehat{y}})\Vert _E =\Vert (u-v,{\widehat{x}}-{\widehat{y}})\Vert _E\\&=\Big \Vert \Big (u-v,\Big (\underbrace{0,0,\dots ,0}_{(2^n-1)q~times}, \Big (\frac{\theta }{\root p \of {2^{p+1}}}\Big )^n\Big (\sin ^{k_1}|x_1| -\sin ^{k_1}|y_1|-(|x_1|^{\lambda _1}-|y_1|^{\lambda _1})\Big ), \underbrace{0,0,\dots ,0}_{(2^n-1)~times},\\&\quad \theta ^n(\sin |x_2|^{m_1}-\sin |y_2|^{m_1}),\underbrace{0,0,\dots ,0}_{(2^n-1)~times}, \Big (\frac{\theta }{\root p \of {2^{p+1}}}\Big )^n\big (|x_3|^{\alpha _1}-|y_3|^{\alpha _1}-(\sin |x_3|^{\beta _1}-\sin |y_3|^{\beta _1})\big ),\\ {}&\quad \ldots ,\Big (\frac{\theta }{\root p \of {2^{p+1}}}\Big )^n\Big (\sin ^{k_{\frac{t+2}{3}}}|x_t|-\sin ^{k_{\frac{t+2}{3}}}|y_t| -\Big (|x_t|^{\lambda _{\frac{t+2}{3}}}-|y_t|^{\lambda _{\frac{t+2}{3}}}\Big )\Big ),\\ {}&\quad \underbrace{0,0,\dots ,0}_{(2^n-1)~times}, \theta ^n(\sin |x_{t+1}|^{m_{\frac{t+2}{3}}} -\sin |y_{t+1}|^{m_{\frac{t+2}{3}}}), \underbrace{0,0,\dots ,0}_{(2^n-1)~times},\\&\quad \Big (\frac{\theta }{\root p \of {2^{p+1}}}\Big )^n \Big (|x_{t+2}|^{\alpha _{\frac{t+2}{3}}}-|y_{t+2}|^{\alpha _{\frac{t+2}{3}}} -\Big (\sin |x_{t+2}|^{\beta _{\frac{t+2}{3}}}-\sin |y_{t+2}|^{\beta _{\frac{t+2}{3}}}\Big )\Big ),\\&\quad \underbrace{0,0,\dots ,0}_{(2^n-1)~times},\theta ^n(x_{t+3}-y_{t+3}), \underbrace{0,0,\dots ,0}_{(2^n-1)~times},\theta ^n(x_{t+4}-y_{t+4}),\dots \Big )\Big )\Big \Vert _E\\ {}&\le |u-v|+\theta ^n\varsigma \Vert x-y\Vert _p. \end{aligned} \end{aligned}$$
(4.9)

Furthermore, for each \(n\in \mathbb {N}\), \(T^n(u,x)=(\frac{u}{\gamma ^n},{\widehat{x}})\) and \(T^n(u,x)=(0,{\widehat{x}})\) for all \((u,x)\in [0,\beta ]\times B\) and \((u,x)\in (\beta ,\alpha ]\times B\), respectively. Then, by arguments analogous to the previous inequalities (4.1) and (4.2), and using (4.9), it follows that for all \((u,x),(v,y)\in E\) and \(n\ge 2\):

$$\begin{aligned} \begin{aligned} \Vert T^n(u,x)-T^n(v,y)\Vert _E&\le |u-v|+\theta ^n\varsigma \Vert x-y\Vert _p+\frac{\alpha }{\gamma ^n}\\ {}&\le |u-v|+\Vert x-y\Vert _p+\theta ^n\varsigma (|u-v|+\Vert x-y\Vert _p)+\frac{\alpha }{\gamma ^n}. \end{aligned} \end{aligned}$$
(4.10)

Thereby, using (4.8) and (4.10), for all \((u,x),(v,y)\in E\) and \(n\in \mathbb {N}\), we deduce that

$$\begin{aligned} \begin{aligned} \Vert T^n(u,x)-T^n(v,y)\Vert _E&\le |u-v|+\Vert x-y\Vert _p+\theta ^n\varsigma (|u-v|+\Vert x-y\Vert _p)+\frac{\alpha }{\gamma ^n}\\ {}&=\Vert (u,x)-(v,y)\Vert _E+\theta ^n\varsigma \Vert (u,x)-(v,y)\Vert _E+\frac{\alpha }{\gamma ^n}. \end{aligned} \end{aligned}$$
(4.11)

Let us now take \(\mu _n=\theta ^n\) and \(b_n=\frac{\alpha }{\gamma ^n}\) for each \(n\in \mathbb {N}\). Then, in virtue of the fact that \(\theta<1<\gamma \), we infer that \(\mu _n,b_n\rightarrow 0\), as \(n\rightarrow \infty \). Defining the mapping \(\phi :[0,+\infty )\rightarrow [0,+\infty )\) as \(\phi (w)=\varsigma w\) for all \(w\in [0,+\infty )\), and employing (4.11), for all \((u,x),(v,y)\in E\) and \(n\in \mathbb {N}\), yields

$$\begin{aligned} \Vert T^n(u,x)-T^n(u,y)\Vert _E\le \Vert (u,x)-(v,y)\Vert _E+\mu _n\phi (\Vert (u,x)-(v,y)\Vert _E)+b_n, \end{aligned}$$

that is, T is a \((\{\mu _n\},\{b_n\},\phi )\)-total asymptotically nonexpansive mapping.

Lemma 4.6

Assume that, for each \(i\in \{1,2,\dots ,p\}\), \(E_i\) is a real Banach space with a norm \(\Vert .\Vert _i\), and \(S_i:E_i\rightarrow E_i\) is an \(\big (\{a_{n,i}\}_{n=1}^\infty ,\{b_{n,i}\}_{n=1}^\infty ,\phi _i\big )\)-total asymptotically nonexpansive mapping. Furthermore, let Q and \(\phi \) be self-mappings of \(\prod _{i=1}^pE_i\) and \(\mathbb {R}^+\), respectively, defined by

$$\begin{aligned} Q(x_1,x_2,\dots ,x_p)=(S_1x_1,S_2x_2,\dots ,S_px_p),\quad \forall (x_1,x_2,\dots ,x_p)\in \prod \limits _{i=1}^pE_i \end{aligned}$$
(4.12)

and

$$\begin{aligned} \phi (t)=\max \{\phi _i(t):i=1,2,\dots ,p\},\quad \forall t\in \mathbb {R}^+. \end{aligned}$$
(4.13)

Then, Q is a \(\big (\{\sum _{i=1}^pa_{n,i}\}_{n=1}^\infty , \{\sum _{i=1}^pb_{n,i}\}_{n=1}^\infty ,\phi \big )\)-total asymptotically nonexpansive mapping.

Proof

Taking into account that for each \(i\in \{1,2,\dots ,p\}\), \(S_i\) is an \(\big (\{a_{n,i}\}_{n=1}^\infty ,\{b_{n,i}\}_{n=1}^\infty ,\phi _i\big )\)-total asymptotically nonexpansive mapping and \(\phi _i:\mathbb {R}^+\rightarrow \mathbb {R}^+\) is a strictly increasing function, for all \((x_1,x_2,\dots ,x_p),({\widehat{x}}_1,{\widehat{x}}_2,\dots ,{\widehat{x}}_p)\in \prod _{i=1}^pE_i\) and \(n\in \mathbb {N}\), we obtain

$$\begin{aligned} \begin{aligned}&\Vert Q^n(x_1,x_2,\dots ,x_p)-Q^n({\widehat{x}}_1,{\widehat{x}}_2,\dots ,{\widehat{x}}_p)\Vert _*\\ {}&=\Vert (S_1^nx_1-S_1^n{\widehat{x}}_1,S_2^nx_2-S_2^n{\widehat{x}}_2,\dots ,S_p^nx_p-S_p^n{\widehat{x}}_p)\Vert _*\\&=\sum \limits _{i=1}^p\Vert S^n_ix_i-S^n_i{\widehat{x}}_i\Vert _i\\ {}&\le \sum \limits _{i=1}^p\big (\Vert x_i-{\widehat{x}}_i\Vert _i+a_{n,i}\phi _i(\Vert x_i-{\widehat{x}}_i\Vert _i)+b_{n,i}\big )\\ {}&\le \sum \limits _{i=1}^p\Vert x_i-{\widehat{x}}_i\Vert _i+\sum \limits _{i=1}^pa_{n,i}\phi (\Vert x_i-{\widehat{x}}_i\Vert _i)+\sum \limits _{i=1}^pb_{n,i}\\ {}&\le \sum \limits _{i=1}^p\Vert x_i-{\widehat{x}}_i\Vert _i+\sum \limits _{i=1}^pa_{n,i}\phi (\sum \limits _{j=1}^p\Vert x_j-{\widehat{x}}_j\Vert _j)+\sum \limits _{i=1}^pb_{n,i}\\ {}&=\Vert (x_1,x_2,\dots ,x_p)-({\widehat{x}}_1,{\widehat{x}}_2,\dots ,{\widehat{x}}_p)\Vert _*\\ {}&\quad +\sum \limits _{i=1}^pa_{n,i}\phi (\Vert (x_1,x_2,\dots ,x_p)-({\widehat{x}}_1,{\widehat{x}}_2,\dots ,{\widehat{x}}_p)\Vert _*) +\sum \limits _{i=1}^pb_{n,i}, \end{aligned} \end{aligned}$$
(4.14)

where \(\Vert .\Vert _*\) is a norm on \(\prod _{i=1}^pE_i\) defined by (3.7). From (4.14), it follows that the mapping Q is a \(\big (\{\sum _{i=1}^pa_{n,i}\}_{n=1}^\infty , \{\sum _{i=1}^pb_{n,i}\}_{n=1}^\infty ,\phi \big )\)-total asymptotically nonexpansive mapping. This completes the proof. \(\square \)

Let us now denote by \(\mathop {{\textrm{Fix}}}(S_i)\) and \(\mathop {{\textrm{Fix}}}(Q)\), respectively, the sets of all the fixed points of \(S_i\) \((i=1,2,\dots ,p)\) and Q. At the same time, we denote the set of all the solutions of \(\mathop {{\textrm{SGVLI}}}\) (3.1) by \(\Phi _{\mathop {{\textrm{SGVLI}}}}\). Thanks to (4.12), for any \((x_1,x_2,\dots ,x_p)\in \prod _{i=1}^pE_i\), \((x_1,x_2,\dots ,x_p)\in \mathop {{\textrm{Fix}}}(Q)\) if and only if, for each \(i\in \{1,2,\dots ,p\}\), \(x_i\in \mathop {{\textrm{Fix}}}(S_i)\), that is, \(\mathop {{\textrm{Fix}}}(Q)=\mathop {{\textrm{Fix}}}(S_1,S_2,\dots ,S_p)=\prod _{i=1}^p\mathop {{\textrm{Fix}}}(S_i)\). If \((x^*_1,x^*_2,\dots ,x^*_p)\in \mathop {{\textrm{Fix}}}(Q)\cap \Phi _{\mathop {{\textrm{SGVLI}}}}\), then utilizing Lemma 3.1, it can be easily seen that for each \(i\in \{1,2,\dots ,p\}\) and \(n\in \mathbb {N}\)

$$\begin{aligned} \begin{aligned} x_i^*=S^n_ix^*_i&=R^{P_i,\eta _i}_{{\widehat{M}}_i,\lambda _i}[P_i(x^*_i) -\lambda _i\big (F_i(x^*_1,x^*_2,\dots ,x^*_p)-a_i\big )]\\ {}&=S^n_iR^{P_i,\eta _i}_{{\widehat{M}}_i,\lambda _i}[P_i(x^*_i) -\lambda _i\big (F_i(x^*_1,x^*_2,\dots ,x^*_p)-a_i\big )]. \end{aligned} \end{aligned}$$
(4.15)

The fixed point formulation (4.15) allows us to construct an iterative algorithm with mixed errors based on the resolvent operator technique as follows.

Algorithm 4.7

Let \(E_i,{\widehat{M}}_i,P_i,F_i,\eta _i\) and \(a_i\) \((i=1,2,\dots ,p)\) be the same as in Lemma 3.1 and let for each \(i\in \{1,2,\dots ,p\}\), \(S_i:E_i\rightarrow E_i\) be a \((\{b_{i,n}\},\{c_{i,n}\},\phi _i)\)-total asymptotically nonexpansive mapping. For an arbitrarily chosen initial point \((x_{1,0},x_{2,0},\dots ,x_{p,0})\in \prod _{i=1}^pE_i\), compute the iterative sequence \(\{(x_{1,n},x_{2,n},\dots ,x_{p,n})\}_{n=0}^\infty \) in \(\prod _{i=1}^pE_i\) by the iterative schemes

$$\begin{aligned} \begin{aligned} x_{i,n+1}&=(1-\alpha _n-\beta _n)x_{i,n}+\alpha _nS^n_iR^{P_i,\eta _i}_{{\widehat{M}}_i,\lambda _i} [P_i(x_{i,n})-\lambda _i(F_i(x_{1,n},x_{2,n},\dots ,x_{p,n})-a_i)]\\&\quad +\alpha _ne_{i,n}+\beta _nt_{i,n}+s_{i,n}, \end{aligned} \end{aligned}$$
(4.16)

where \(i=1,2,\dots ,p\); \(n\ge 0\); \(\lambda _i>0\) are arbitrary real constants; \(\{\alpha _n\}_{n=0}^\infty \) and \(\{\beta _n\}_{n=0}^\infty \) are two sequences in the interval [0, 1] satisfying \(\sum _{n=0}^\infty \alpha _n=\infty \), \(\alpha _n+\beta _n<1\) for all \(n\ge 0\) and \(\sum _{n=0}^\infty \beta _n<\infty \); and for each \(i\in \{1,2,\dots ,p\}\), \(\{e_{i,n}\}_{n=0}^\infty \), \(\{t_{i,n}\}_{n=0}^\infty \), \(\{s_{i,n}\}_{n=0}^\infty \) are three sequences in \(E_i\), introduced to take into account a possible inexact computation of the resolvent operator point, satisfying the following conditions: for each \(i\in \{1,2,\dots ,p\}\), \(\{t_{i,n}\}_{n=0}^\infty \) is a bounded sequence in \(E_i\), and \(\{e_{i,n}\}_{n=0}^\infty \), \(\{s_{i,n}\}_{n=0}^\infty \) are two sequences in \(E_i\), such that for each \(i\in \{1,2,\dots ,p\}\) and for all \(n\ge 0\)

$$\begin{aligned} \begin{aligned} \left\{ \begin{array}{ll} e_{i,n}=e'_{i,n}+e''_{i,n};\\ \lim \limits _{n\rightarrow \infty }\Vert (e'_{1,n},e'_{2,n},\dots ,e'_{p,n})\Vert _*=0;\\ \sum \limits _{n=0}^\infty \Vert (e''_{1,n},e''_{2,n},\dots ,e''_{p,n})\Vert _*<\infty ;\\ \sum \limits _{n=0}^\infty \Vert (s_{1,n},s_{2,n},\dots ,s_{p,n})\Vert _*<\infty . \end{array}\right. \end{aligned} \end{aligned}$$
(4.17)
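To make the structure of the update rule (4.16) and the roles of \(\alpha _n\), \(\beta _n\) and the error terms transparent, the following schematic sketch implements a single iteration in plain Python. All arguments (the mappings \(S_i\), the resolvents, \(P_i\), \(F_i\), the constants and the error terms) are assumed to be supplied by the user as ordinary callables and numbers; the sketch is purely illustrative and does not attempt to reproduce the Banach-space setting.

```python
# Schematic sketch of one pass of the update rule (4.16) of Algorithm 4.7.
# All mappings are user-supplied callables acting on a common (hypothetical) vector type.
def algorithm_4_7_step(x, n, S, R, P, F, a, lam, alpha, beta, e, t, s):
    """x: current tuple (x_1,...,x_p); S[i], R[i], P[i]: callables on E_i;
    F[i]: callable mapping the whole tuple x to E_i; a[i], lam[i]: data;
    alpha, beta: the values alpha_n, beta_n; e[i], t[i], s[i]: error terms at step n."""
    p = len(x)
    x_new = []
    for i in range(p):
        # resolvent argument: P_i(x_{i,n}) - lam_i * (F_i(x_{1,n},...,x_{p,n}) - a_i)
        u = P[i](x[i]) - lam[i] * (F[i](x) - a[i])
        w = R[i](u)
        for _ in range(n):          # apply S_i n times, i.e. S_i^n
            w = S[i](w)
        x_new.append((1 - alpha - beta) * x[i] + alpha * w
                     + alpha * e[i] + beta * t[i] + s[i])
    return tuple(x_new)
```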

If for each \(i\in \{1,2,\dots ,p\}\), \(S_i\equiv I_i\), the identity mapping on \(E_i\), then Algorithm 4.7 reduces to the following algorithm.

Algorithm 4.8

Let \(E_i,{\widehat{M}}_i,P_i,F_i,\eta _i\) and \(a_i\) \((i=1,2,\dots ,p)\) be the same as in Lemma 3.1. For any given \((x_{1,0},x_{2,0},\dots ,x_{p,0})\in \prod _{i=1}^pE_i\), define the iterative sequence \(\{(x_{1,n},x_{2,n},\dots ,x_{p,n})\}_{n=0}^\infty \) in \(\prod _{i=1}^pE_i\) by the iterative processes

$$\begin{aligned} \begin{aligned} x_{i,n+1}&=(1-\alpha _n-\beta _n)x_{i,n}+\alpha _nR^{P_i,\eta _i}_{{\widehat{M}}_i,\lambda _i} [P_i(x_{i,n})-\lambda _i(F_i(x_{1,n},x_{2,n},\dots ,x_{p,n})-a_i)]\\ {}&\quad +\alpha _ne_{i,n}+\beta _nt_{i,n}+s_{i,n}, \end{aligned} \end{aligned}$$

where \(i=1,2,\dots ,p\); \(n\ge 0\); the constants \(\lambda _i>0\) and the sequences \(\{\alpha _n\}_{n=0}^\infty \), \(\{\beta _n\}_{n=0}^\infty \), \(\{e_{i,n}\}_{n=0}^\infty \), \(\{t_{i,n}\}_{n=0}^\infty \) and \(\{s_{i,n}\}_{n=0}^\infty \) \((i=1,2,\dots ,p)\) are the same as in Algorithm 4.7.

For the case when \(p=2\), \(\alpha _n=1\), \(\beta _n=0\), \(e_{i,n}=s_{i,n}=0\) and \(\eta _i(u_i,v_i)=u_i-v_i\) for all \(u_i,v_i\in E_i\), \(n\ge 0\) and \(i=1,2\), Algorithm 4.8 collapses to the following iterative algorithm.

Algorithm 4.9

Suppose that \(E_i,{\widehat{M}}_i,P_i\) and \(F_i\) are the same as in Lemma 3.2. For an arbitrarily chosen initial point \((x_0,y_0)\in E_1\times E_2\), compute the iterative sequence \(\{(x_n,y_n)\}_{n=0}^\infty \) in \(E_1\times E_2\) using the iterative schemes

$$\begin{aligned} \begin{aligned} \left\{ \begin{array}{ll} x_{n+1}=R^{P_1}_{{\widehat{M}}_1,\lambda _1}[P_1(x_n)-\lambda _1F_1(x_n,y_n)],\\ y_{n+1}=R^{P_2}_{{\widehat{M}}_2,\lambda _2}[P_2(y_n)-\lambda _2F_2(x_n,y_n)], \end{array}\right. \end{aligned} \end{aligned}$$

where \(n=0,1,2,\dots \); \(\lambda _1,\lambda _2>0\) are two arbitrary real constants.
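A concrete (purely illustrative) toy instance of Algorithm 4.9 can be run directly: take \(E_1=E_2=\mathbb {R}\), \(P_i(x)=x\), \({\widehat{M}}_i(x)=x\), so that \(R^{P_i}_{{\widehat{M}}_i,\lambda _i}(u)=u/(1+\lambda _i)\), and two hypothetical affine mappings \(F_1,F_2\). A fixed point of the scheme then solves the corresponding \(\mathop {{\textrm{SVI}}}\) (3.2), that is, \(F_1(x,y)+x=0\) and \(F_2(x,y)+y=0\), which the script checks through the residuals printed at the end.

```python
# Toy instance of Algorithm 4.9 with E_1 = E_2 = R, P_i(x) = x and M_i(x) = x,
# so that the resolvents are u -> u/(1+lam_i). F_1, F_2 are hypothetical affine maps.
lam1 = lam2 = 0.2
F1 = lambda x, y: 2.0 * x + 0.3 * y - 1.0
F2 = lambda x, y: 0.3 * x + 2.0 * y + 1.0
R1 = lambda u: u / (1.0 + lam1)   # resolvent of M_1 = I with respect to P_1 = I
R2 = lambda u: u / (1.0 + lam2)

x, y = 0.0, 0.0
for n in range(200):
    x, y = (R1(x - lam1 * F1(x, y)),   # x_{n+1} = R_1[P_1(x_n) - lam1*F_1(x_n, y_n)]
            R2(y - lam2 * F2(x, y)))   # y_{n+1} = R_2[P_2(y_n) - lam2*F_2(x_n, y_n)]

# at the limit, F_1(x,y) + x = 0 and F_2(x,y) + y = 0 (residuals printed below)
print(x, y, F1(x, y) + x, F2(x, y) + y)
```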

Remark 4.10

  1. (i)

    If \(e_{i,n}=s_{i,n}=0\) for all \(n\ge 0\) and \(i=1,2,\dots ,p\), then Algorithms 4.7 and 4.8 become the resolvent iterative processes with mean errors.

  2. (ii)

    When \(e_{i,n}=t_{i,n}=s_{i,n}=0\) for all \(n\ge 0\) and \(i=1,2,\dots ,p\), then Algorithms 4.7 and 4.8 reduce to the resolvent iterative processes without error.

5 An application

In this section, as an application of our proposed iterative algorithm in the previous section, we verify that the iterative sequence generated by Algorithm 4.7 converges strongly to a common element of the two sets \(\Phi _{\mathop {{\textrm{SGVLI}}}}\) and \(\mathop {{\textrm{Fix}}}(Q)\), where Q is defined as (4.12).

Before proceeding to our main result, we need to recall the following significant lemma which plays a crucial role in its proof.

Lemma 5.1

Let \(\{a_n\}\), \(\{b_n\}\) and \(\{c_n\}\) be three nonnegative real sequences satisfying the following conditions: there exists a natural number \(n_0\), such that:

$$\begin{aligned} a_{n+1}\le (1-\sigma _n)a_n+b_n\sigma _n+c_n,\quad \forall n\ge n_0, \end{aligned}$$

where \(\sigma _n\in [0,1]\), \(\sum _{n=0}^\infty \sigma _n=\infty \), \(\lim _{n\rightarrow \infty }b_n=0\) and \(\sum _{n=0}^\infty c_n<\infty \). Then, \(\lim _{n\rightarrow \infty }a_n=0\).

Proof

The proof follows directly from Lemma 2 in [36]. \(\square \)
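For illustration only, the recursion in Lemma 5.1 can be simulated with hypothetical choices of the sequences; the sketch below uses \(\sigma _n=1/(n+1)\) (so that \(\sum _n\sigma _n=\infty \)), \(b_n=1/n\rightarrow 0\) and the summable sequence \(c_n=1/n^2\), and the iterates indeed become small as n grows.

```python
# Quick numerical illustration (not a proof) of Lemma 5.1 with hypothetical sequences.
a = 5.0
for n in range(1, 100001):
    sigma = 1.0 / (n + 1)       # sum of sigma_n diverges
    b = 1.0 / n                 # b_n -> 0
    c = 1.0 / n ** 2            # sum of c_n converges
    a = (1 - sigma) * a + b * sigma + c
print(a)  # small: the iterates a_n tend to 0
```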

Theorem 5.2

Let \(E_i,{\widehat{M}}_i,P_i,F_i\) and \(\eta _i\) \((i=1,2,\dots ,p)\) be the same as in Theorem 3.4 and let all the conditions of Theorem 3.4 hold. Suppose further that for each \(i\in \{1,2,\dots ,p\}\), \(S_i:E_i\rightarrow E_i\) is a \((\{b_{i,n}\},\{c_{i,n}\},\phi _i)\)-total asymptotically nonexpansive mapping and Q is a self-mapping of \(\prod _{i=1}^pE_i\) defined by (4.12), such that \(\mathop {{\textrm{Fix}}}(Q)\cap \Phi _{\mathop {{\textrm{SGVLI}}}}\ne \emptyset \). Then, the iterative sequence \(\{(x_{1,n},x_{2,n},\dots ,x_{p,n})\}_{n=0}^\infty \) generated by Algorithm 4.7 converges strongly to the unique element of \(\mathop {{\textrm{Fix}}}(Q)\cap \Phi _{\mathop {{\textrm{SGVLI}}}}\).

Proof

In the light of Theorem 3.4, the \(\mathop {{\textrm{SGVLI}}}\) (3.1) admits a unique solution \((x^*_1,x^*_2,\dots ,x^*_p)\in \prod _{i=1}^pE_i\). Then, Lemma 3.1 implies that for each \(i\in \{1,2,\dots ,p\}\)

$$\begin{aligned} x^*_i=R^{P_i,\eta _i}_{{\widehat{M}}_i,\lambda _i}[P_i(x^*_i)-\lambda _i(F_i(x^*_1,x^*_2,\dots ,x^*_p)-a_i)]. \end{aligned}$$
(5.1)

Taking into account that \(\Phi _{\mathop {{\textrm{SGVLI}}}}\) is a singleton set and \(\mathop {{\textrm{Fix}}}(Q)\cap \Phi _{\mathop {{\textrm{SGVLI}}}}\ne \emptyset \), it follows that for each \(i\in \{1,2,\dots ,p\}\), \(x^*_i\in \mathop {{\textrm{Fix}}}(S_i)\). Therefore, using (5.1) and based on the above-mentioned facts, for each \(n\ge 0\) and \(i\in \{1,2,\dots ,p\}\), we have

$$\begin{aligned} x^*_i=(1-\alpha _n-\beta _n)x^*_i+\alpha _nS^n_iR^{P_i,\eta _i}_{{\widehat{M}}_i,\lambda _i}[P_i(x^*_i) -\lambda _i(F_i(x^*_1,x^*_2,\dots ,x^*_p)-a_i)]+\beta _nx^*_i, \end{aligned}$$
(5.2)

where the sequences \(\{\alpha _n\}_{n=0}^\infty \) and \(\{\beta _n\}_{n=0}^\infty \) are the same as in Algorithm 4.7. Invoking Theorem 2.21, it follows that for each \(i\in \Gamma \):

$$\begin{aligned} \begin{aligned}&\Vert R^{P_i,\eta _i}_{{\widehat{M}}_i,\lambda _i}[P_i(x_{i,n}) -\lambda _i(F_i(x_{1,n},x_{2,n},\dots ,x_{p,n})-a_i)]\\ {}&\quad -R^{P_i,\eta _i}_{{\widehat{M}}_i,\lambda _i}[P_i(x^*_i) -\lambda _i(F_i(x^*_1,x^*_2,\dots ,x^*_p)-a_i)]\Vert _i\\ {}&\le \frac{\tau _i^{q_i-1}}{\lambda _i\gamma _i+r_i}\Vert P_i(x_{i,n})-P_i(x^*_i)-\lambda _i\big (F_i(x_{1,n},x_{2,n},\dots ,x_{p,n})-F_i(x^*_1,x^*_2,\dots ,x^*_p)\big )\Vert _i\\&\le \frac{\tau _i^{q_i-1}}{\lambda _i\gamma _i+r_i}\Vert P_i(x_{i,n})-P_i(x^*_i)-\lambda _i\big (F_i(x_{1,n},x_{2,n},\dots ,x_{i-1,n},x_{i,n},x_{i+1,n},\dots ,x_{p,n})\\ {}&\quad -F_i(x_{1,n},x_{2,n},\dots ,x_{i-1,n},x^*_i,x_{i+1,n},\dots ,x_{p,n})\big )\Vert _i\\ {}&\quad +\frac{\tau _i^{q_i-1}\lambda _i}{\lambda _i\gamma _i+r_i}\sum \limits _{j\in \Gamma ,j\ne i}\Vert F_i(x_{1,n},x_{2,n},\dots ,x_{j-1,n},x_{j,n},x_{j+1,n},\dots ,x_{p,n})\\&\quad -F_i(x_{1,n},x_{2,n},\dots ,x_{j-1,n},x^*_j,x_{j+1,n},\dots ,x_{p,n})\Vert _i\\ {}&\le \frac{\tau _i^{q_i-1}}{\lambda _i\gamma _i+r_i}\big (\Vert x_{i,n}-x^*_i-\big (P_i(x_{i,n})-P_i(x^*_i)\big )\Vert _i\\ {}&\quad +\Vert x_{i,n}-x^*_i-\lambda _i\big (F_i(x_{1,n},x_{2,n},\dots ,x_{i-1,n},x_{i,n},x_{i+1,n},\dots ,x_{p,n})\\ {}&\quad -F_i(x_{1,n},x_{2,n},\dots ,x_{i-1,n},x^*_i,x_{i+1,n},\dots ,x_{p,n})\big )\Vert _i\big )\\ {}&\quad +\frac{\tau _i^{q_i-1}\lambda _i}{\lambda _i\gamma _i+r_i}\sum \limits _{j\in \Gamma ,j\ne i}\Vert F_i(x_{1,n},x_{2,n},\dots ,x_{j-1,n},x_{j,n},x_{j+1,n},\dots ,x_{p,n})\\ {}&\quad -F_i(x_{1,n},x_{2,n},\dots ,x_{j-1,n},x^*_j,x_{j+1,n},\dots ,x_{p,n})\Vert _i. \end{aligned} \end{aligned}$$
(5.3)

In light of the assumptions and using the same arguments as for (3.10)–(3.13), one can prove that for each \(i\in \Gamma \) and \(n\ge 0\)

$$\begin{aligned}&\Vert x_{i,n}-x^*_i-(P_i(x_{i,n})-P_i(x^*_i))\Vert _i\le \root q_i \of {1-2q_ir_i+c_{q_i}\varrho _i^{q_i}}\Vert x_{i,n}-x^*_i\Vert _i,\end{aligned}$$
(5.4)
$$\begin{aligned}&\Vert x_{i,n}-x^*_i-\lambda _i\big (F_i(x_{1,n},x_{2,n},\dots ,x_{i-1,n},x_{i,n},x_{i+1,n},\dots ,x_{p,n})\nonumber \\&-F_i(x_{1,n},x_{2,n},\dots ,x_{i-1,n},x^*_i,x_{i+1,n},\dots ,x_{p,n})\big )\Vert _i\nonumber \\&\le \root q_i \of {1-2\lambda _iq_i\mu _i+c_{q_i}\lambda _i^{q_i}\xi _i^{q_i}}\Vert x_{i,n}-x^*_i\Vert _i \end{aligned}$$
(5.5)

and

$$\begin{aligned} \begin{aligned}&\Vert F_i(x_{1,n},x_{2,n},\dots ,x_{j-1,n},x_{j,n},x_{j+1,n},\dots ,x_{p,n})\\ {}&-F_i(x_{1,n},x_{2,n},\dots ,x_{j-1,n},x^*_j,x_{j+1,n},\dots ,x_{p,n})\Vert _i\le \varsigma _{i,j}\Vert x_{j,n}-x^*_j\Vert _j. \end{aligned} \end{aligned}$$
(5.6)

Combining (5.3)–(5.6), for each \(i\in \Gamma \) and \(n\ge 0\), we obtain

$$\begin{aligned} \begin{aligned}&\Vert R^{P_i,\eta _i}_{{\widehat{M}}_i,\lambda _i}[P_i(x_{i,n}) -\lambda _i(F_i(x_{1,n},x_{2,n},\dots ,x_{p,n})-a_i)]\\ {}&-R^{P_i,\eta _i}_{{\widehat{M}}_i,\lambda _i}[P_i(x^*_i) -\lambda _i(F_i(x^*_1,x^*_2,\dots ,x^*_p)-a_i)]\Vert _i\\ {}&\le \vartheta _i\Vert x_{i,n}-x^*_i\Vert _i +\frac{\tau _i^{q_i-1}\lambda _i}{\lambda _i\gamma _i+r_i}\sum \limits _{j\in \Gamma ,j\ne i} \varsigma _{i,j}\Vert x_{j,n}-x^*_j\Vert _j, \end{aligned} \end{aligned}$$
(5.7)

where for each \(i\in \Gamma \), \(\vartheta _i\) is the same as in the proof of Theorem 3.4. Let us now set \(L=\max \{\sup _{n\ge 0}\Vert t_{i,n}-x^*_i\Vert _i: i=1,2,\dots ,p\}\). Using (4.16), (5.2), and (5.7), for each \(i\in \Gamma \) and \(n\ge 0\), we obtain

$$\begin{aligned} \begin{aligned} \Vert x_{i,n+1}-x^*_i\Vert _i&\le (1-\alpha _n-\beta _n)\Vert x_{i,n}-x^*_i\Vert _i +\alpha _n\Vert S^n_iR^{P_i,\eta _i}_{{\widehat{M}}_i,\lambda _i}[P_i(x_{i,n})\\&\quad -\lambda _i(F_i(x_{1,n},x_{2,n},\dots ,x_{p,n})-a_i)]\\ {}&\quad -S^n_iR^{P_i,\eta _i}_{{\widehat{M}}_i,\lambda _i}[P_i(x^*_i) -\lambda _i(F_i(x^*_1,x^*_2,\dots ,x^*_p)-a_i)]\Vert _i\\ {}&\quad +\beta _n\Vert t_{i,n}-x^*_i\Vert _i+\alpha _n\Vert e_{i,n}\Vert _i+\Vert s_{i,n}\Vert _i\\ {}&\le (1-\alpha _n-\beta _n)\Vert x_{i,n}-x^*_i\Vert _i+\alpha _n\Big (\Vert R^{P_i,\eta _i}_{{\widehat{M}}_i,\lambda _i}[P_i(x_{i,n})\\&\quad -\lambda _i(F_i(x_{1,n},x_{2,n},\dots ,x_{p,n})-a_i)]\\ {}&\quad -R^{P_i,\eta _i}_{{\widehat{M}}_i,\lambda _i}[P_i(x^*_i) -\lambda _i(F_i(x^*_1,x^*_2,\dots ,x^*_p)-a_i)]\Vert _i\\ {}&\quad +b_{i,n}\phi _i\big (\Vert R^{P_i,\eta _i}_{{\widehat{M}}_i,\lambda _i}[P_i(x_{i,n}) -\lambda _i(F_i(x_{1,n},x_{2,n},\dots ,x_{p,n})-a_i)]\\ {}&\quad -R^{P_i,\eta _i}_{{\widehat{M}}_i,\lambda _i}[P_i(x^*_i) -\lambda _i(F_i(x^*_1,x^*_2,\dots ,x^*_p)-a_i)]\Vert _i\big )\\ {}&\quad +c_{i,n}\Big )+\beta _n\Vert t_{i,n}-x^*_i\Vert _i+\alpha _n\big (\Vert e'_{i,n}\Vert _i+\Vert e''_{i,n}\Vert _i\big )+\Vert s_{i,n}\Vert _i\\ {}&\le (1-\alpha _n-\beta _n)\Vert x_{i,n}-x^*_i\Vert _i+\alpha _n\Big (\vartheta _i\Vert x_{i,n}-x^*_i\Vert _i\\ {}&\quad +\frac{\tau _i^{q_i-1}\lambda _i}{\lambda _i\gamma _i+r_i}\sum \limits _{j\in \Gamma ,j\ne i} \varsigma _{i,j}\Vert x_{j,n}-x^*_j\Vert _j+b_{i,n}\phi _i\big (\vartheta _i\Vert x_{i,n}-x^*_i\Vert _i\\ {}&\quad +\frac{\tau _i^{q_i-1}\lambda _i}{\lambda _i\gamma _i+r_i}\sum \limits _{j\in \Gamma ,j\ne i} \varsigma _{i,j}\Vert x_{j,n}-x^*_j\Vert _j\big )+c_{i,n}\Big )\\ {}&\quad +\alpha _n\Vert e'_{i,n}\Vert _i+\Vert e''_{i,n}\Vert _i+\Vert s_{i,n}\Vert _i+\beta _nL. \end{aligned} \end{aligned}$$
(5.8)

Then, using (5.8), it follows that for all \(n\ge 0\):

$$\begin{aligned} \begin{aligned} \Vert (x_{1,n+1},x_{2,n+1},\dots ,x_{p,n+1})-(x^*_1,x^*_2,\dots ,x^*_p)\Vert _*\\ \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned}&\le (1-\alpha _n-\beta _n)\Vert (x_{1,n},x_{2,n},\dots ,x_{p,n})-(x^*_1,x^*_2,\dots ,x^*_p)\Vert _*\\ {}&\quad +\alpha _n\sum \limits _{i=1}^p\Big (\vartheta _i\Vert x_{i,n}-x^*_i\Vert _i +\frac{\tau _i^{q_i-1}\lambda _i}{\lambda _i\gamma _i+r_i}\sum \limits _{j\in \Gamma ,j\ne i} \varsigma _{i,j}\Vert x_{j,n}-x^*_j\Vert _j\Big )\\ {}&\quad +\alpha _n\sum \limits _{i=1}^pb_{i,n}\phi _i\Big (\vartheta _i\Vert x_{i,n}-x^*_i\Vert _i +\frac{\tau _i^{q_i-1}\lambda _i}{\lambda _i\gamma _i+r_i}\sum \limits _{j\in \Gamma ,j\ne i} \varsigma _{i,j}\Vert x_{j,n}-x^*_j\Vert _j\Big )\\ {}&\quad +\alpha _n\sum \limits _{i=1}^pc_{i,n} +\alpha _n\Vert (e'_{1,n},e'_{2,n},\dots ,e'_{p,n})\Vert _*+\Vert (e''_{1,n},e''_{2,n},\dots ,e''_{p,n})\Vert _*\\ {}&\quad +\Vert (s_{1,n},s_{2,n},\dots ,s_{p,n})\Vert _*+p\beta _nL\\&\le (1-\alpha _n-\beta _n)\Vert (x_{1,n},x_{2,n},\dots ,x_{p,n})-(x^*_1,x^*_2,\dots ,x^*_p)\Vert _*\\ {}&\quad +\alpha _n\Big (\Big (\vartheta _1+\sum \limits _{k=2}^p \frac{\tau _k^{q_k-1}\lambda _k}{\lambda _k\gamma _k+r_k}\varsigma _{k,1}\Big )\Vert x_{1,n}-x^*_1\Vert _1\\&\quad +\Big (\vartheta _2+\sum \limits _{k\in \Gamma ,k\ne 2} \frac{\tau _k^{q_k-1}\lambda _k}{\lambda _k\gamma _k+r_k} \varsigma _{k,2}\Big )\Vert x_{2,n}-x^*_2\Vert _2+\dots \\ {}&\quad +\Big (\vartheta _p+\sum \limits _{k=1}^{p-1} \frac{\tau _k^{q_k-1}\lambda _k}{\lambda _k\gamma _k+r_k}\varsigma _{k,p}\Big )\Vert x_{p,n}-x^*_p\Vert _p\Big )\\ {}&\quad +\alpha _n\sum \limits _{i=1}^pb_{i,n} \phi _i\Big (\vartheta _i\Vert x_{i,n}-x^*_i\Vert _i +\frac{\tau _i^{q_i-1}\lambda _i}{\lambda _i\gamma _i+r_i}\sum \limits _{j\in \Gamma ,j\ne i} \varsigma _{i,j}\Vert x_{j,n}-x^*_j\Vert _j\Big )\\ {}&\quad +\alpha _n\sum \limits _{i=1}^pc_{i,n} +\alpha _n\Vert (e'_{1,n},e'_{2,n},\dots ,e'_{p,n})\Vert _*+\Vert (e''_{1,n},e''_{2,n},\dots ,e''_{p,n})\Vert _* \\&\quad +\Vert (s_{1,n},s_{2,n},\dots ,s_{p,n})\Vert _*+p\beta _nL\\ {}&\le (1-\alpha _n-\beta _n)\Vert (x_{1,n},x_{2,n},\dots ,x_{p,n})-(x^*_1,x^*_2,\dots ,x^*_p)\Vert _*\\ {}&\quad +\alpha _n\theta \Vert (x_{1,n},x_{2,n},\dots ,x_{p,n})-(x^*_1,x^*_2,\dots ,x^*_p)\Vert _*\\ {}&\quad +\alpha _n\sum \limits _{i=1}^pb_{i,n}\phi _i\big (\vartheta _i\Vert x_{i,n}-x^*_i\Vert _i +\frac{\tau _i^{q_i-1}\lambda _i}{\lambda _i\gamma _i+r_i}\sum \limits _{j\in \Gamma ,j\ne i} \varsigma _{i,j}\Vert x_{j,n}-x^*_j\Vert _j\big )\\ {}&\quad +\alpha _n\sum \limits _{i=1}^pc_{i,n} +\alpha _n\Vert (e'_{1,n},e'_{2,n},\dots ,e'_{p,n})\Vert _*+\Vert (e''_{1,n},e''_{2,n},\dots ,e''_{p,n})\Vert _*\\&\quad +\Vert (s_{1,n},s_{2,n},\dots ,s_{p,n})\Vert _*+p\beta _nL\\&\le \big (1-(1-\theta )\alpha _n\big )\Vert (x_{1,n},x_{2,n},\dots ,x_{p,n}) -(x^*_1,x^*_2,\dots ,x^*_p)\Vert _*\\ {}&\quad +\alpha _n(1-\theta )\frac{\sum _{i=1}^pb_{i,n}\phi _i(\vartheta _i\Vert x_{i,n}-x^*_i\Vert _i+\frac{\tau _i^{q_i-1}\lambda _i}{\lambda _i\gamma _i+r_i}\sum _{j\in \Gamma ,j\ne i}\varsigma _{i,j}\Vert x_{j,n}-x^*_j\Vert _j)+\Psi _n}{1-\theta }\\ {}&\quad +\Vert (e''_{1,n},e''_{2,n},\dots ,e''_{p,n})\Vert _*+ \Vert (s_{1,n},s_{2,n},\dots ,s_{p,n})\Vert _*+p\beta _nL, \end{aligned} \end{aligned}$$
(5.9)

where \(\Psi _n=\Vert (e'_{1,n},e'_{2,n},\dots ,e'_{p,n})\Vert _*+\sum _{i=1}^pc_{i,n}\) and \(\theta \) is the same as in (3.15).

Let us take for each \(n\ge 0\)

$$\begin{aligned}&{\widehat{a}}_n=\Vert (x_{1,n},x_{2,n},\dots ,x_{p,n})-(x^*_1,x^*_2,\dots ,x^*_p)\Vert _*,\\ {}&{\widehat{b}}_n=\frac{\sum _{i=1}^pb_{i,n}\phi _i(\vartheta _i\Vert x_{i,n}-x^*_i\Vert _i+\frac{\tau _i^{q_i-1}\lambda _i}{\lambda _i\gamma _i+r_i}\sum _{j\in \Gamma ,j\ne i}\varsigma _{i,j}\Vert x_{j,n}-x^*_j\Vert _j)+\Psi _n}{1-\theta }\\ {}&{\widehat{c}}_n=\Vert (e''_{1,n},e''_{2,n},\dots ,e''_{p,n})\Vert _* +\Vert (s_{1,n},s_{2,n},\dots ,s_{p,n})\Vert _*+p\beta _nL,\\ {}&\sigma _n=(1-\theta )\alpha _n. \end{aligned}$$

Taking into consideration the facts that \(\lim _{n\rightarrow \infty }b_{i,n}=\lim _{n\rightarrow \infty }c_{i,n}=0\) for each \(i\in \Gamma \) and \(\sum _{n=0}^\infty \beta _n<\infty \), in the light of (4.17) and (5.9), we observe that all the conditions of Lemma 5.1 are satisfied. Therefore, according to Lemma 5.1, \({\widehat{a}}_n\rightarrow 0\) as \(n\rightarrow \infty \), i.e., \((x_{1,n},x_{2,n},\dots ,x_{p,n})\rightarrow (x^*_1,x^*_2,\dots ,x^*_p)\), as \(n\rightarrow \infty \). Consequently, the sequence \(\{(x_{1,n},x_{2,n},\dots ,x_{p,n})\}_{n=0}^\infty \) generated by Algorithm 4.7 converges strongly to the unique element \((x^*_1,x^*_2,\dots ,x^*_p)\) of the singleton set \(\mathop {{\textrm{Fix}}}(Q)\cap \Phi _{\mathop {{\textrm{SGVLI}}}}\). This completes the proof. \(\square \)

As direct consequences of the above theorem, we have the following corollaries.

Corollary 5.3

Let \(E_i,{\widehat{M}}_i,P_i,F_i\) and \(\eta _i\) \((i=1,2,\dots ,p)\) be the same as in Theorem 3.4 and let all the conditions of Theorem 3.4 hold. Then, the iterative sequence \(\{(x_{1,n},x_{2,n},\dots ,x_{p,n})\}_{n=0}^\infty \) generated by Algorithm 4.8 converges strongly to the unique solution of \(\mathop {{\textrm{SGVLI}}}\) (3.1).

Corollary 5.4

Suppose that \(E_i,{\widehat{M}}_i,P_i\) and \(F_i\) are the same as in Corollary 3.5 and assume further that all the conditions of Corollary 3.5 hold. Then, the iterative sequence \(\{(x_n,y_n)\}_{n=0}^\infty \) generated by Algorithm 4.9 converges strongly to the unique solution of \(\mathop {{\textrm{SVI}}}\) (3.2).

6 Generalized H(., .)-accretive mappings

The main motivation of this section is to investigate and analyze the notion of generalized H(., .)-accretive mapping and the relevant results that appeared in [29] and to provide some comments regarding them. Additionally, we show that the results presented in [29] can be deduced as corollaries of our main results presented in the previous sections.

Throughout the rest of the paper, unless otherwise stated, we assume that E is a q-uniformly smooth Banach space.

Definition 6.1

[29, 54] Let \(A,B:E\rightarrow E\) and \(H:E\times E\rightarrow E\) be three single-valued mappings.

  1. (i)

    H(A, .) is said to be \(\alpha '\)-strongly accretive with respect to A if there exists a constant \(\alpha '>0\), such that

    $$\begin{aligned} \langle H(Ax,u)-H(Ay,u),J_q(x-y)\rangle \ge \alpha '\Vert x-y\Vert ^q,\quad \forall x,y,u\in E; \end{aligned}$$
  2. (ii)

    H(., B) is said to be \(\beta '\)-relaxed accretive with respect to B if there exists a constant \(\beta '>0\), such that

    $$\begin{aligned} \langle H(u,Bx)-H(u,By),J_q(x-y)\rangle \ge -\beta '\Vert x-y\Vert ^q,\quad \forall x,y,u\in E; \end{aligned}$$
  3. (iii)

    H(., .) is said to be \(\alpha '\beta '\)-symmetric accretive with respect to A and B, if H(A, .) is \(\alpha '\)-strongly accretive with respect to A and H(., B) is \(\beta '\)-relaxed accretive with respect to B with \(\alpha '\ge \beta '\) and \(\alpha '=\beta '\) if and only if \(x=y\), for all \(x,y\in E\);

  4. (iv)

    H(A, .) is said to be \(\xi \)-Lipschitz continuous with respect to A if there exists a constant \(\xi >0\), such that

    $$\begin{aligned} \Vert H(Ax,u)-H(Ay,u)\Vert \le \xi \Vert x-y\Vert ,\quad \forall x,y,u\in E; \end{aligned}$$
  5. (v)

    H(., B) is said to be \(\varsigma \)-Lipschitz continuous with respect to B if there exists a constant \(\varsigma >0\), such that

    $$\begin{aligned} \Vert H(u,Bx)-H(u,By)\Vert \le \varsigma \Vert x-y\Vert ,\quad \forall x,y,u\in E. \end{aligned}$$

Proposition 6.2

Let \(A,B:E\rightarrow E\) and \(H:E\times E\rightarrow E\) be three single-valued mappings. Suppose further that the mapping \(P:E\rightarrow E\) is defined by \(P(x):=H(Ax,Bx)\) for all \(x\in E\). Then, the following statements hold:

  1. (i)

    If H(., .) is \(\alpha '\beta '\)-symmetric accretive with respect to A and B, then P is \((\alpha '-\beta ')\)-strongly accretive, and hence strictly accretive, whenever \(\alpha '>\beta '\), and P is accretive whenever \(\alpha '=\beta '\).

  2. (ii)

    If H(., .) is \(\xi \)-Lipschitz continuous with respect to A and \(\varsigma \)-Lipschitz continuous with respect to B, then P is \((\xi +\varsigma )\)-Lipschitz continuous.

Proof

(i) Since H(., .) is \(\alpha '\beta '\)-symmetric accretive with respect to A and B, according to Definition 6.1(iii), H(A, .) is \(\alpha '\)-strongly accretive with respect to A and H(., B) is \(\beta '\)-relaxed accretive with respect to B. Then, for all \(x,y\in E\), it follows that:

$$\begin{aligned} \begin{aligned} \langle P(x)-P(y),J_q(x-y)\rangle&=\langle H(Ax,Bx)-H(Ay,By),J_q(x-y)\rangle \\ {}&=\langle H(Ax,Bx)-H(Ay,Bx),J_q(x-y)\rangle \\ {}&\quad +\langle H(Ay,Bx)-H(Ay,By),J_q(x-y)\rangle \\ {}&\ge \alpha '\Vert x-y\Vert ^q-\beta '\Vert x-y\Vert ^q\\ {}&=(\alpha '-\beta ')\Vert x-y\Vert ^q. \end{aligned} \end{aligned}$$

If \(\alpha '>\beta '\), the preceding inequality implies that P is \((\alpha '-\beta ')\)-strongly accretive. Hence, the fact that P is strictly accretive is straightforward. For the case when \(\alpha '=\beta '\), in the light of the last inequality, it follows that P is accretive.

(ii) Taking into account that H(., .) is \(\xi \)-Lipschitz continuous and \(\varsigma \)-Lipschitz continuous with respect to A and B, respectively, for all \(x,y\in E\), we obtain

$$\begin{aligned} \begin{aligned} \Vert P(x)-P(y)\Vert&=\Vert H(Ax,Bx)-H(Ay,By)\Vert \\ {}&\le \Vert H(Ax,Bx)-H(Ay,Bx)\Vert \\ {}&\quad +\Vert H(Ay,Bx)-H(Ay,By)\Vert \\ {}&\le (\xi +\varsigma )\Vert x-y\Vert ; \end{aligned} \end{aligned}$$

i.e., P is \((\xi +\varsigma )\)-Lipschitz continuous. The proof is complete. \(\square \)
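
As a simple illustration of Proposition 6.2 (our own example, not taken from [29]), let \(E={\mathbb {R}}\) with \(q=2\), so that \(J_2\) is the identity mapping, and take \(A=B=I\), the identity mapping on \({\mathbb {R}}\), together with \(H(a,b):=2a-b\) for all \(a,b\in {\mathbb {R}}\). Then, for all \(x,y,u\in {\mathbb {R}}\),

$$\begin{aligned} \langle H(Ax,u)-H(Ay,u),J_2(x-y)\rangle =2(x-y)^2\quad \text {and}\quad \langle H(u,Bx)-H(u,By),J_2(x-y)\rangle =-(x-y)^2, \end{aligned}$$

so that H(A, .) is 2-strongly accretive with respect to A and H(., B) is 1-relaxed accretive with respect to B, i.e., \(\alpha '=2\) and \(\beta '=1\). In accordance with Proposition 6.2(i), the mapping \(P(x)=H(x,x)=x\) satisfies \(\langle P(x)-P(y),J_2(x-y)\rangle =(x-y)^2=(\alpha '-\beta ')(x-y)^2\), i.e., P is \((\alpha '-\beta ')\)-strongly accretive. Moreover, H is Lipschitz continuous with respect to A and B with \(\xi =2\) and \(\varsigma =1\), and P is 1-Lipschitz continuous, which is consistent with the bound \(\xi +\varsigma =3\) provided by Proposition 6.2(ii).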

Remark 6.3

It is worthwhile to stress that, in the light of Proposition 6.2(i), every bifunction \(H:E\times E\rightarrow E\) that is \(\alpha '\beta '\)-symmetric accretive with respect to the mappings A and B gives rise to a univariate mapping \(P:=H(A,B)\) that is \((\alpha '-\beta ')\)-strongly accretive if \(\alpha '>\beta '\) and accretive if \(\alpha '=\beta '\); hence, this is not a new concept. Furthermore, invoking Proposition 6.2(ii), the concept of Lipschitz continuity of the bifunction \(H:E\times E\rightarrow E\) with respect to the mappings \(A,B:E\rightarrow E\) presented in parts (iv) and (v) of Definition 6.1 is exactly the same as the notion of Lipschitz continuity of the univariate mapping \(P:=H(A,B):E\rightarrow E\) that appeared in part (iv) of Definition 2.6 and is not a new concept.

Definition 6.4

[29] Let \(f,g:E\rightarrow E\) be single-valued mappings and \(M:E\times E\rightarrow 2^E\) be a multi-valued mapping. Then

  1. (i)

    M(f, .) is said to be \(\alpha \)-strongly accretive with respect to f if there exists a constant \(\alpha >0\), such that

    $$\begin{aligned} \langle u-v,J_q(x-y)\rangle \ge \alpha \Vert x-y\Vert ^q,\quad \forall x,y,w\in E, u\in M(f(x),w), v\in M(f(y),w); \end{aligned}$$
  2. (ii)

    M(., g) is said to be \(\beta \)-relaxed accretive with respect to g if there exists a constant \(\beta >0\), such that

    $$\begin{aligned} \langle u-v,J_q(x-y)\rangle \ge -\beta \Vert x-y\Vert ^q,\quad \forall x,y,w\in E, u\in M(w,g(x)), v\in M(w,g(y)); \end{aligned}$$
  3. (iii)

    M(., .) is said to be \(\alpha \beta \)-symmetric accretive with respect to f and g if M(f, .) is \(\alpha \)-strongly accretive with respect to f and M(., g) is \(\beta \)-relaxed accretive with respect to g with \(\alpha \ge \beta \) and \(\alpha =\beta \) if and only if \(x=y\).

Proposition 6.5

Let \(f,g:E\rightarrow E\) be single-valued mappings and \(M:E\times E\rightarrow 2^E\) be a multi-valued mapping. Suppose further that the multi-valued mapping \({\widehat{M}}:E\rightarrow 2^E\) is defined by \({\widehat{M}}(x):=M(f(x),g(x))\) for all \(x\in E\). If M(., .) is \(\alpha \beta \)-symmetric accretive with respect to f and g, then \({\widehat{M}}\) is \((\alpha -\beta )\)-strongly accretive if \(\alpha >\beta \) and accretive if \(\alpha =\beta \).

Proof

By virtue of the fact that M(., .) is \(\alpha \beta \)-symmetric accretive with respect to f and g, Definition 6.4(iii) implies that M(f, .) is \(\alpha \)-strongly accretive with respect to f and M(., g) is \(\beta \)-relaxed accretive with respect to g. Thus, for all \(x,y\in E\), \(u\in {\widehat{M}}(x)\) and \(v\in {\widehat{M}}(y)\), it follows that:

$$\begin{aligned} \begin{aligned} \langle u-v,J_q(x-y)\rangle&=\langle u-w+w-v,J_q(x-y)\rangle \\ {}&=\langle u-w,J_q(x-y)\rangle +\langle w-v,J_q(x-y)\rangle \\ {}&\ge \alpha \Vert x-y\Vert ^q-\beta \Vert x-y\Vert ^q\\ {}&=(\alpha -\beta )\Vert x-y\Vert ^q, \end{aligned} \end{aligned}$$

for all \(w\in M(f(y),g(x))\). For the case when \(\alpha >\beta \), the last inequality implies that the mapping \({\widehat{M}}\) is \((\alpha -\beta )\)-strongly accretive, and when \(\alpha =\beta \) it is accretive. This gives us the desired result. \(\square \)

Note, in particular, that in accordance with Proposition 6.5, every symmetric accretive mapping is actually a strongly accretive or accretive mapping. In fact, the concept of \(\alpha \beta \)-symmetric accretive mapping presented in Definition 6.4(iii) (that is, [29, Definition 3.3(vii)]) is exactly the same notion as that of an \(r\)-strongly accretive mapping with \(r=\alpha -\beta \) (respectively, an accretive mapping) given in part (ii) (respectively, part (i)) of Definition 2.7, provided that \(\alpha >\beta \) (respectively, \(\alpha =\beta \)), and is not a new concept.

In 2011, Kazmi et al. [29] introduced and studied another class of accretive mappings, called generalized H(., .)-accretive mappings. This class serves as a generalization of H-accretive mappings [21], H(., .)-accretive mappings [54], and several other classes of accretive and monotone operators found in the literature, as follows.

Definition 6.6

[29, Definition 3.4] Let \(A,B,f,g:E\rightarrow E\) and \(H:E\times E\rightarrow E\) be single-valued mappings. Let \(M:E\times E\rightarrow 2^E\) be a multi-valued mapping. The mapping M is said to be generalized \(\alpha \beta \)-H(., .)-accretive with respect to A, B, f and g, if M(f, g) is \(\alpha \beta \)-symmetric accretive with respect to f and g, and \((H(A,B)+\lambda M(f,g))(E)=E\) for every \(\lambda >0\).

Remark 6.7

It should be noticed that, in the light of the above-mentioned arguments, Definition 6.6 coincides exactly with Definition 2.9. In fact, by defining \({\widehat{M}}:E\rightarrow 2^E\) as \({\widehat{M}}(x):=M(f(x),g(x))\) for all \(x\in E\), by virtue of the fact that M(., .) is \(\alpha \beta \)-symmetric accretive with respect to the mappings f and g, Proposition 6.5 implies that \({\widehat{M}}\) is \((\alpha -\beta )\)-strongly accretive when \(\alpha >\beta \) and accretive when \(\alpha =\beta \), and so \({\widehat{M}}\) is an accretive mapping. Now, by defining the mapping \(P:E\rightarrow E\) as \(P(x):=H(Ax,Bx)\) for all \(x\in E\) and taking into account that M is a generalized \(\alpha \beta \)-H(., .)-accretive mapping with respect to A, B, f and g, thanks to Definition 6.6 we have \((P+\lambda {\widehat{M}})(E)=(H(A,B)+\lambda M(f,g))(E)=E\) for every \(\lambda >0\). Therefore, according to Definition 2.9, \({\widehat{M}}\) is a P-accretive mapping. Accordingly, for the case when \(\alpha >\beta \) (resp., \(\alpha =\beta \)), the class of generalized \(\alpha \beta \)-H(., .)-accretive mappings coincides exactly with the class of P-strongly accretive mappings with constant \(\alpha -\beta \) (resp., P-accretive mappings). In other words, the concept of generalized \(\alpha \beta \)-H(., .)-accretive mapping is actually the same notion as that of a P-strongly accretive mapping with constant \(\alpha -\beta \) when \(\alpha >\beta \) and of a P-accretive mapping when \(\alpha =\beta \), and hence, this is not a new concept.

Theorem 6.8

[29, Theorem 3.1] Let \(A,B,f,g:E\rightarrow E\) be single-valued mappings, let \(H:E\times E\rightarrow E\) be an \(\alpha '\beta '\)-symmetric accretive mapping with respect to A and B with \(\alpha '>\beta '\), and let \(M:E\times E\rightarrow 2^E\) be a generalized \(\alpha \beta \)-H(., .)-accretive mapping with respect to the mappings A, B, f and g. If the inequality \(\langle u-v,J_q(x-y)\rangle \ge 0\) holds for all \((y,v)\in \mathop {{\textrm{Graph}}}(M(f,g))\), then \((x,u)\in \mathop {{\textrm{Graph}}}(M(f,g))\), where \(\mathop {{\textrm{Graph}}}(M(f,g)):=\{(x,u)\in E\times E:u\in M(f(x),g(x))\}\).

It is significant to emphasize that there are some small flaws in the context of [29, Theorem 3.1]. In fact, in the statement of the mentioned theorem, \(M:E\times E\rightarrow E\), (u, x) and (v, y) should be replaced by \(M:E\times E\rightarrow 2^E\), (x, u) and (y, v), respectively, as we have done in the context of Theorem 6.8.

To define the resolvent operator associated with a generalized \(\alpha \beta \)-H(., .)-accretive mapping, Kazmi et al. [29] presented the following assertion, which gives sufficient conditions for the operator \((H(A,B)+\lambda M(f,g))^{-1}\) to be single-valued for every \(\lambda >0\).

Theorem 6.9

[29, Theorem 3.2] Let \(A,B,f,g:E\rightarrow E\) be single-valued mappings and let \(H:E\times E\rightarrow E\) be an \(\alpha '\beta '\)-symmetric accretive mapping with respect to A and B. Suppose that \(M:E\times E\rightarrow 2^E\) is a generalized \(\alpha \beta \)-H(., .)-accretive mapping with respect to A, B, f and g. Then, the mapping \((H(A,B)+\lambda M(f,g))^{-1}\) is single-valued for all \(\lambda >0\).

Based on Theorem 6.9 (that is, [29, Theorem 3.2]), the authors defined the resolvent operator \(R^{H(.,.)}_{M(.,.),\lambda }\) associated with a generalized \(\alpha \beta \)-H(., .)-accretive mapping M and an arbitrary real constant \(\lambda >0\) as follows.

Definition 6.10

[29, Definition 3.5] Let \(A,B,f,g:E\rightarrow E\) be single-valued mappings and let \(H:E\times E\rightarrow E\) be an \(\alpha '\beta '\)-symmetric accretive mapping with respect to A and B. Let \(M:E\times E\rightarrow 2^E\) be a generalized \(\alpha \beta \)-H(., .)-accretive mapping with respect to the mappings A, B, f and g. The proximal-point mapping \(R^{H(.,.)}_{M(.,.),\lambda }:E\rightarrow E\) is defined as follows:

$$\begin{aligned} R^{H(.,.)}_{M(.,.),\lambda }(x)=(H(A,B)+\lambda M(f,g))^{-1}(x),\quad \forall x\in E. \end{aligned}$$

It should be noted that there is a small mistake in the context of Definition 3.5 of [29]. In fact, in that definition, \(R^{H(.,.)}_{M(.,.),\lambda }(x)=(H(A,B)+\lambda M)^{-1}(x)\) must be replaced by \(R^{H(.,.)}_{M(.,.),\lambda }(x)=(H(A,B)+\lambda M(f,g))^{-1}(x)\), similar to Definition 6.10. Additionally, by defining the mappings \(P:E\rightarrow E\) and \({\widehat{M}}:E\rightarrow 2^E\) as \(P(x):=H(Ax,Bx)\) and \({\widehat{M}}(x):=M(f(x),g(x))\), for all \(x\in E\), in the light of the assumptions of Definition 6.10 and Propositions 6.2 and 6.5, P is a strictly accretive mapping and \({\widehat{M}}\) is a P-accretive mapping. Thus, invoking Definition 2.19, for any real constant \(\lambda >0\), the resolvent operator \(R^P_{{\widehat{M}},\lambda }=R^{H(.,.)}_{M(.,.),\lambda }:E\rightarrow E\) associated with the P-accretive mapping \({\widehat{M}}\) (generalized \(\alpha \beta \)-H(., .)-accretive mapping M) is defined by

$$\begin{aligned} R^{H(.,.)}_{M(.,.),\lambda }(u)=R^P_{{\widehat{M}},\lambda }(u)=(P+\lambda {\widehat{M}})^{-1}(u)=(H(A,B)+\lambda M(f,g))^{-1}(u),\quad \forall u\in E. \end{aligned}$$

In fact, based on the arguments mentioned above, the notion of the proximal-point mapping \(R^{H(.,.)}_{M(.,.),\lambda }\) associated with a generalized \(\alpha \beta \)-H(., .)-accretive mapping M and an arbitrary real constant \(\lambda >0\) given in Definition 6.10 is actually the same as the concept of the resolvent operator \(R^P_{{\widehat{M}},\lambda }\) associated with the P-accretive mapping \({\widehat{M}}\) and a real constant \(\lambda >0\) presented in Definition 2.19, and is not a new one.

With the goal of proving the Lipschitz continuity of the proximal-point mapping \(R^{H(.,.)}_{M(.,.),\lambda }\) and computing an estimate of its Lipschitz constant, Section 3 of [29] concludes with the following result.

Theorem 6.11

[29, Theorem 3.3] Let \(A,B,f,g:E\rightarrow E\) be single-valued mappings and let \(H:E\times E\rightarrow E\) be an \(\alpha '\beta '\)-symmetric accretive mapping with respect to A and B. Suppose that \(M:E\times E\rightarrow 2^E\) is a generalized \(\alpha \beta \)-H(., .)-accretive mapping with respect to the mappings A, B, f and g. Then, the proximal-point mapping \(R^{H(.,.)}_{M(.,.),\lambda }:E\rightarrow E\) is Lipschitz continuous with constant L, that is,

$$\begin{aligned} \Vert R^{H(.,.)}_{M(.,.),\lambda }(x^*)-R^{H(.,.)}_{M(.,.),\lambda }(y^*)\Vert \le L\Vert x^*-y^*\Vert ,\quad \forall x^*,y^*\in E, \end{aligned}$$

where \(L:=\frac{1}{\lambda (\alpha -\beta )+\alpha '-\beta '}\).
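
For the reader's convenience, we sketch the standard argument behind this estimate (a sketch in our notation, under the additional assumptions \(\alpha >\beta \) and \(\alpha '>\beta '\), which make the constant L positive). Given \(x^*,y^*\in E\), put \(x:=R^{H(.,.)}_{M(.,.),\lambda }(x^*)\) and \(y:=R^{H(.,.)}_{M(.,.),\lambda }(y^*)\), so that \(u:=\frac{1}{\lambda }(x^*-H(Ax,Bx))\in M(f(x),g(x))\) and \(v:=\frac{1}{\lambda }(y^*-H(Ay,By))\in M(f(y),g(y))\). Then, by Propositions 6.2(i) and 6.5,

$$\begin{aligned} \Vert x^*-y^*\Vert \Vert x-y\Vert ^{q-1}&\ge \langle x^*-y^*,J_q(x-y)\rangle \\&=\langle H(Ax,Bx)-H(Ay,By),J_q(x-y)\rangle +\lambda \langle u-v,J_q(x-y)\rangle \\&\ge (\alpha '-\beta ')\Vert x-y\Vert ^q+\lambda (\alpha -\beta )\Vert x-y\Vert ^q, \end{aligned}$$

which yields \(\Vert x-y\Vert \le L\Vert x^*-y^*\Vert \), as asserted.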

Let, for each \(i\in \{1,2\}\), \(E_i\) be a \(q_i\)-uniformly smooth Banach space with norm \(\Vert .\Vert _i\), let \(A_i,B_i,f_i,g_i:E_i\rightarrow E_i\), \(H_i:E_i\times E_i\rightarrow E_i\) and \(F_i:E_1\times E_2\rightarrow E_i\) be single-valued nonlinear mappings, and let \(M_i:E_i\times E_i\rightarrow 2^{E_i}\) be a generalized \(\alpha _i\beta _i\)-\(H_i(.,.)\)-accretive mapping. Kazmi et al. [29] investigated the problem of finding \((x,y) \in E_1 \times E_2\) satisfying the following conditions:

$$\begin{aligned} \begin{aligned} \left\{ \begin{array}{ll} \theta _1\in F_1(x,y)+M_1(f_1(x),g_1(x)),\\ \theta _2\in F_2(x,y)+M_2(f_2(y),g_2(y)), \end{array}\right. \end{aligned} \end{aligned}$$
(6.1)

where \(\theta _1\) and \(\theta _2\) are the zero vectors of \(E_1\) and \(E_2\), respectively. This problem is called a system of generalized variational inclusions (\(\mathop {{\textrm{SGVI}}}\)).

With the assistance of the proximal-point mappings \(R^{H_i(.,.)}_{M_i(.,.),\lambda _i}\) \((i=1,2)\), they presented a characterization of a solution to the \(\mathop {{\textrm{SGVI}}}\) (6.1) as follows.

Lemma 6.12

[29, Lemma 4.1] Let \(E_i,A_i,B_i,f_i,g_i,H_i,M_i\) and \(F_i\) \((i=1,2)\) be the same as in the \(\mathop {{\textrm{SGVI}}}\) (6.1), such that for each \(i\in \{1,2\}\), \(H_i\) is an \(\alpha '_i\beta '_i\)-symmetric accretive mapping with respect to the mappings \(A_i\) and \(B_i\). Then, for any given \((x,y)\in E_1\times E_2\), (xy) is a solution of the \(\mathop {{\textrm{SGVI}}}\) (6.1) if and only if (xy) satisfies

$$\begin{aligned} \begin{aligned} \left\{ \begin{array}{ll} x=R^{H_1(.,.)}_{M_1(.,.),\lambda _1}[H_1(A_1,B_1)(x)-\lambda _1F_1(x,y)],\\ y=R^{H_2(.,.)}_{M_2(.,.),\lambda _2}[H_2(A_2,B_2)(y)-\lambda _2F_2(x,y)], \end{array}\right. \end{aligned} \end{aligned}$$
(6.2)

where \(\lambda _1,\lambda _2>0\) are real constants; \(R^{H_1(.,.)}_{M_1(.,.),\lambda _1}(x)=(H_1(A_1,B_1)+\lambda _1 M_1(f_1,g_1))^{-1}(x)\) and \(R^{H_2(.,.)}_{M_2(.,.),\lambda _2}(y)=(H_2(A_2,B_2)+\lambda _2 M_2(f_2,g_2))^{-1}(y)\), for all \(x\in E_1\) and \(y\in E_2\).

Remark 6.13

  1. (i)

    In view of Theorem 6.9 and Definition 6.10, relation (6.2) is valid if, for each \(i\in \{1,2\}\), \(H_i:E_i\times E_i\rightarrow E_i\) is an \(\alpha '_i\beta '_i\)-symmetric accretive mapping with respect to \(A_i\) and \(B_i\), and \(M_i:E_i\times E_i\rightarrow 2^{E_i}\) is a generalized \(\alpha _i\beta _i\)-\(H_i(.,.)\)-accretive mapping with respect to the mappings \(A_i,B_i,f_i\) and \(g_i\). Hence, the condition of \(\alpha '_i\beta '_i\)-symmetric accretivity of the mapping \(H_i\) with respect to the mappings \(A_i\) and \(B_i\) \((i=1,2)\) must be added to the context of Lemma 4.1 in [29], as we have done in Lemma 6.12.

  2. (ii)

    Upon careful examination, we have observed that contrary to the claim made by the authors in [29], the characterization of the solution for the \(\mathop {{\textrm{SGVI}}}\) (6.1) involving generalized \(\alpha _i\beta _i\)-\(H_i(.,.)\)-accretive mappings \(M_i\) \((i=1,2)\) given in Lemma 6.12 (that is, [29, Lemma 4.1]) coincides exactly with the characterization of the solution for the system (3.2) involving \(P_i\)-accretive mapping \({\widehat{M}}_i\) \((i=1,2)\) presented in Lemma 3.2. Therefore, it is not a new characterization but rather the same one as previously established.

Under some appropriate conditions, Kazmi et al. [29] discussed the existence of a unique solution for the \(\mathop {{\textrm{SGVI}}}\) (6.1) as follows.

Theorem 6.14

[29, Theorem 5.1]

For \(i=1,2\), let \(E_i\) be a \(q_i\)-uniformly smooth Banach space and let \(A_i,B_i,f_i,g_i:E_i\rightarrow E_i\) be single-valued mappings. Furthermore, let \(H_i:E_i\times E_i\rightarrow E_i\) be an \(\alpha '_i\beta '_i\)-symmetric accretive mapping with respect to \(A_i\) and \(B_i\) that is \((\nu _i,\delta _i)\)-mixed Lipschitz continuous. Moreover, for each \(i\in \{1,2\}\), let \(F_i:E_1\times E_2\rightarrow E_i\) be a \(\mu _i\)-strongly accretive mapping in the \(i^{th}\) argument and \((L_{F_i},l_{F_i})\)-mixed Lipschitz continuous. Additionally, let \(M_1:E_1\times E_1\rightarrow 2^{E_1}\) be a generalized \(\alpha _1\beta _1\)-\(H_1(.,.)\)-accretive mapping with respect to the mappings \(A_1,B_1,f_1\) and \(g_1\), and let \(M_2:E_2\times E_2\rightarrow 2^{E_2}\) be a generalized \(\alpha _2\beta _2\)-\(H_2(.,.)\)-accretive mapping with respect to the mappings \(A_2,B_2,f_2\) and \(g_2\). Suppose that there are constants \(\lambda _1,\lambda _2>0\) satisfying the following conditions:

$$\begin{aligned} \begin{aligned} \left\{ \begin{array}{ll} k_1:=m_1+\lambda _2L_2L_{F_2}<1,\\ k_2:=m_2+\lambda _1L_1l_{F_1}<1, \end{array}\right. \end{aligned} \end{aligned}$$
(6.3)

where

$$\begin{aligned} \begin{aligned} \left\{ \begin{array}{ll} m_1:=L_1\big [(1-2q_1(\alpha '_1-\beta '_1)+c_{q_1}(\nu _1+\delta _1)^{q_1})^{\frac{1}{q_1}} +(1-2\lambda _1q_1\mu _1+c_{q_1}\lambda _1^{q_1}L^{q_1}_{F_1})^{\frac{1}{q_1}}\big ],\\ m_2:=L_2\big [(1-2q_2(\alpha '_2-\beta '_2)+c_{q_2}(\nu _2+\delta _2)^{q_2})^{\frac{1}{q_2}} +(1-2\lambda _2q_2\mu _2+c_{q_2}\lambda _2^{q_2}l^{q_2}_{F_2})^{\frac{1}{q_2}}\big ],\\ L_i:=\frac{1}{\lambda _i(\alpha _i-\beta _i)+\alpha '_i-\beta '_i},\quad (i=1,2), \end{array}\right. \end{aligned} \end{aligned}$$

\(c_{q_1}\) and \(c_{q_2}\) are two constants guaranteed by Lemma 2.5. Moreover, for the case when \(q_i\) \((i=1,2)\) are even natural numbers, suppose that, in addition to (6.3), the following conditions hold:

$$\begin{aligned} \begin{aligned} \left\{ \begin{array}{ll} 2q_i(\alpha '_i-\beta '_i)<1+c_{q_i}(\nu _i+\delta _i)^{q_i},\\ 2\lambda _1q_1\mu _1<1+c_{q_1}\lambda _1^{q_1}L_{F_1}^{q_1},\\ 2\lambda _2q_2\mu _2<1+c_{q_2}\lambda _2^{q_2}l_{F_2}^{q_2}. \end{array}\right. \end{aligned} \end{aligned}$$
(6.4)

Then, the \(\mathop {{\textrm{SGVI}}}\) (6.1) has a unique solution.

It is important to note that, by a careful reading of the proof of [29, Theorem 5.1], we found that if \(q_i\) \((i=1,2)\) are even natural numbers, then in addition to (6.3), condition (6.4) must also be satisfied; accordingly, we have incorporated these required conditions into the context of Theorem 6.14.

To find an approximate solution of the \(\mathop {{\textrm{SGVI}}}\) (6.1), Kazmi et al. [29] suggested an iterative algorithm based on Lemma 6.12 as follows.

Algorithm 6.15

[29, Iterative Algorithm 6.1] For \(i=1,2\), let \(E_i,A_i,B_i,f_i,g_i,H_i,F_i\) and \(M_i\) be the same as in Lemma 6.12. For any given \((x_0,y_0)\in E_1\times E_2\), compute \((x_n,y_n)\in E_1\times E_2\) by the iterative schemes

$$\begin{aligned} \begin{aligned} \left\{ \begin{array}{ll} x_{n+1}=R^{H_1(.,.)}_{M_1(.,.),\lambda _1}[H_1(A_1,B_1)(x_n)-\lambda _1F_1(x_n,y_n)],\\ y_{n+1}=R^{H_2(.,.)}_{M_2(.,.),\lambda _2}[H_2(A_2,B_2)(y_n)-\lambda _2F_2(x_n,y_n)], \end{array}\right. \end{aligned} \end{aligned}$$

where \(n=0,1,2,\dots \); \(\lambda _1,\lambda _2>0\) are constants.
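
For illustration only, the following minimal Python sketch mirrors the scheme of Algorithm 6.15. The callables R1 and R2 (playing the role of the proximal-point mappings \(R^{H_1(.,.)}_{M_1(.,.),\lambda _1}\) and \(R^{H_2(.,.)}_{M_2(.,.),\lambda _2}\)), H1AB and H2AB (the maps \(x\mapsto H_i(A_ix,B_ix)\)), F1 and F2, as well as the fixed iteration count, are our own placeholders and are not part of [29].

import numpy as np

def sgvi_iterates(x0, y0, R1, R2, H1AB, H2AB, F1, F2, lam1, lam2, n_steps=100):
    # Schematic realization of the iterative scheme of Algorithm 6.15:
    #   x_{n+1} = R1[H_1(A_1,B_1)(x_n) - lam1 * F_1(x_n, y_n)],
    #   y_{n+1} = R2[H_2(A_2,B_2)(y_n) - lam2 * F_2(x_n, y_n)].
    x, y = np.asarray(x0, dtype=float), np.asarray(y0, dtype=float)
    for _ in range(n_steps):
        x_next = R1(H1AB(x) - lam1 * F1(x, y))
        y_next = R2(H2AB(y) - lam2 * F2(x, y))
        x, y = x_next, y_next
    return x, y

Under the hypotheses of Theorem 6.14, the pair returned after sufficiently many steps is expected to approximate the unique solution of the \(\mathop {{\textrm{SGVI}}}\) (6.1).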

Remark 6.16

Note, in particular, that if, for \(i=1,2\), the mappings \(P_i:E_i\rightarrow E_i\) and \({\widehat{M}}_i:E_i\rightarrow 2^{E_i}\) are defined as in the proof of Lemma 6.12, then the assumptions together with Propositions 6.2 and 6.5 imply that, for each \(i\in \{1,2\}\), \(P_i\) is a strictly accretive mapping and \({\widehat{M}}_i\) is a \(P_i\)-accretive mapping. Therefore, Algorithm 6.15 is actually the same as Algorithm 4.9 and is not a new algorithm.