1 Introduction

Banach’s contraction principle is one of the most powerful tools in applied nonlinear analysis. Weak contractions (also called ϕ-contractions) are generalizations of Banach contraction mappings and have been studied by several authors. Let T be a self-map of a metric space \((X, d)\) and let \(\phi: [0,+\infty)\rightarrow[0,+\infty)\) be a function. We say that T is a ϕ-contraction if

$$d(Tx,Ty)\leq\phi\bigl(d(x,y)\bigr), \quad \forall x,y\in X. $$

In 1968, Browder [1] proved that if ϕ is non-decreasing and right continuous and \((X,d)\) is complete, then T has a unique fixed point \(x^{*}\) and \(\lim_{n\rightarrow\infty}T^{n}x_{0}=x^{*}\) for any given \(x_{0} \in X\). In 1969, Boyd and Wong [2] extended this result by weakening the hypothesis on ϕ: it suffices to assume that ϕ is right upper semi-continuous (not necessarily monotone). For a comprehensive study of the relations between several such contraction type conditions, see [3–6].

On the other hand, in 2015, Su and Yao [7] proved the following generalized contraction mapping principle.

Theorem SY

Let \((X,d)\) be a complete metric space. Let \(T:X\rightarrow X\) be a mapping such that

$$ \psi\bigl(d(Tx,Ty)\bigr)\leq\phi\bigl(d(x,y)\bigr), \quad\forall x, y \in X, $$
(1.1)

where \(\psi, \phi: [0, +\infty) \rightarrow[0, +\infty)\) are two functions satisfying the conditions:

$$\begin{aligned}& (1) \quad \psi(a)\leq\phi(b) \quad\Rightarrow\quad a \leq b; \\& (2)\quad \textstyle\begin{cases} \psi(a_{n})\leq\phi(b_{n}) \\ a_{n}\rightarrow\varepsilon,\qquad b_{n}\rightarrow\varepsilon \end{cases}\displaystyle \quad\Rightarrow\quad\varepsilon=0. \end{aligned}$$

Then T has a unique fixed point and, for any given \(x_{0} \in X\), the iterative sequence \(T^{n}x_{0}\) converges to this fixed point.

In particular, the study of fixed points for weak contractions and generalized contractions was extended to partially ordered metric spaces in [8–18]. Among these results, some involve altering distance functions. Such functions were introduced by Khan et al. in [19], where some fixed point theorems are also presented.

The first purpose of this paper is to prove an existence and uniqueness result for the multivariate fixed point of contraction type mappings in complete metric spaces. The proof is based on the idea of introducing a convenient metric space and an appropriate mapping, which converts the non-self-mapping setting into a self-mapping one. The main result is then applied to an initial-value problem for a class of first order differential equations. The second aim of this paper is to prove strong and weak convergence theorems for the multivariate fixed point of N-variable nonexpansive mappings. The results of this paper improve several important results recently published in the literature.

2 Contraction principle for multivariate mappings

We will start with some concepts and results which are useful in our approach.

Definition 2.1

A multiply metric function \(\triangle(a_{1},a_{2},\ldots,a_{N})\) is a continuous, non-negative real function of N variables with the domain

$$\bigl\{ (a_{1},a_{2},\ldots,a_{N})\in\mathbb{R}^{N}: a_{i}\geq0, i\in\{1,2,3, \ldots,N\} \bigr\} $$

which satisfies the following conditions:

  1. (1)

    \(\triangle(a_{1},a_{2},\ldots,a_{N})\) is non-decreasing for each variable \(a_{i}\), \(i\in\{1,2,3, \ldots,N \}\);

  2. (2)

    \(\triangle(a_{1}+b_{1},a_{2}+b_{2},\ldots,a_{N}+b_{N})\leq \triangle(a_{1},a_{2},\ldots,a_{N})+\triangle(b_{1},b_{2},\ldots,b_{N})\);

  3. (3)

    \(\triangle(a,a,\ldots,a)=a\);

  4. (4)

    \(\triangle(a_{1},a_{2},\ldots,a_{N})\rightarrow0 \Leftrightarrow a_{i}\rightarrow0\), \(i\in\{1,2,3,\ldots, N \}\). Here \(a_{i}, b_{i}, a\geq0\), \(i\in\{1,2,3, \ldots,N \}\), are arbitrary non-negative real numbers.

The following are some basic examples of multiply metric functions.

Example 2.2

(1) \(\triangle_{1}(a_{1},a_{2},\ldots,a_{N})=\frac{1}{N} \sum_{i=1}^{N}a_{i}\). (2) \(\triangle_{2}(a_{1},a_{2},\ldots,a_{N})=\frac{1}{h} \sum_{i=1}^{N}q_{i} a_{i}\), where \(q_{i}\in[0,1)\), \(i\in\{1,\ldots, N \}\), and \(0< h:= \sum_{i=1}^{N}q_{i}<1\).

Example 2.3

\(\triangle_{3}(a_{1},a_{2},\ldots,a_{N})=\sqrt{\frac{1}{N} \sum_{i=1}^{N}a_{i}^{2}}\).

Example 2.4

\(\triangle_{4}(a_{1},a_{2},\ldots,a_{N})=\max \{a_{1},a_{2},\ldots,a_{N}\}\).
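These four examples are easy to check numerically. The following plain-Python sketch (the function names `tri1`–`tri4` and the sample weights are ours, not from the text) implements \(\triangle_{1}\)–\(\triangle_{4}\) and spot-checks axioms (1)–(3) of Definition 2.1 on random non-negative tuples.

```python
import random

N = 4
q = [0.1, 0.2, 0.3, 0.2]            # q_i in [0,1) with 0 < h := sum(q) < 1
h = sum(q)

def tri1(a): return sum(a) / len(a)                          # arithmetic mean
def tri2(a): return sum(qi * ai for qi, ai in zip(q, a)) / h
def tri3(a): return (sum(x * x for x in a) / len(a)) ** 0.5  # quadratic mean
def tri4(a): return max(a)

random.seed(0)
for tri in (tri1, tri2, tri3, tri4):
    for _ in range(1000):
        a = [random.uniform(0, 10) for _ in range(N)]
        b = [random.uniform(0, 10) for _ in range(N)]
        c = random.uniform(0, 10)
        # axiom (2), subadditivity: tri(a+b) <= tri(a) + tri(b)
        assert tri([x + y for x, y in zip(a, b)]) <= tri(a) + tri(b) + 1e-12
        # axiom (3): tri(c, c, ..., c) = c
        assert abs(tri([c] * N) - c) < 1e-12
        # axiom (1), monotonicity in the first variable
        assert tri([a[0] + 1.0] + a[1:]) >= tri(a) - 1e-12
```

Axiom (4) is a limit statement and is not checked pointwise here; for these four examples it follows directly from continuity and axiom (3).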

An important concept is now presented.

Definition 2.5

Let \((X,d)\) be a metric space and let \(T: X^{N}\rightarrow X\) be an N-variable mapping. An element \(p\in X\) is called a multivariate fixed point (or a fixed point of order N; see [20]) of T if

$$p=T(p,p,\ldots,p). $$

We now prove the following theorem, which generalizes the Banach contraction principle.

Theorem 2.6

Let \((X,d)\) be a complete metric space and let \(T: X^{N}\rightarrow X\) be an N-variable mapping that satisfies the following condition:

$$d(Tx,Ty)\leq h \triangle\bigl(d(x_{1},y_{1}), d(x_{2},y_{2}), \ldots, d(x_{N},y_{N}) \bigr), \quad\forall x,y \in X^{N}, $$

where △ is a multiply metric function,

$$x=(x_{1},x_{2}, \ldots, x_{N}) \in X^{N},\qquad y=(y_{1},y_{2}, \ldots, y_{N}) \in X^{N}, $$

and \(h \in(0,1)\) is a constant.

Then T has a unique multivariate fixed point \(p\in X\) and, for any \(p_{0} \in X^{N}\), the iterative sequence \(\{p_{n}\}\subset X^{N}\) defined by

$$\begin{aligned} &p_{1}=(Tp_{0},Tp_{0},\ldots,Tp_{0}), \\ &p_{2}=(Tp_{1},Tp_{1},\ldots,Tp_{1}), \\ &p_{3}=(Tp_{2},Tp_{2},\ldots,Tp_{2}), \\ &\cdots \\ & p_{n+1}=(Tp_{n},Tp_{n},\ldots,Tp_{n}), \\ &\cdots \end{aligned}$$

converges, in the multiply metric △, to \((p,p,\ldots ,p)\in X^{N}\) and the iterative sequence \(\{Tp_{n}\}\subset X\) converges, with respect to d, to \(p \in X\).

Proof

We define a two-variable function D on \(X^{N}\times X^{N}\) by the following relation:

$$D\bigl((x_{1},x_{2},\ldots,x_{N}), (y_{1},y_{2},\ldots,y_{N})\bigr)=\triangle \bigl(d(x_{1},y_{1}),d(x_{2},y_{2}), \ldots,d(x_{N},y_{N}) \bigr) $$

for all \((x_{1},x_{2},\ldots,x_{N}), (y_{1},y_{2},\ldots,y_{N})\in X^{N}\). Next we show that D is a metric on \(X^{N}\). The following two conditions are obvious:

  1. (i)

    \(D((x_{1},x_{2},\ldots,x_{N}), (y_{1},y_{2},\ldots,y_{N}))=0 \Leftrightarrow (x_{1},x_{2},\ldots,x_{N})= (y_{1},y_{2},\ldots,y_{N})\);

  2. (ii)

    \(D((y_{1},y_{2},\ldots,y_{N}), (x_{1},x_{2},\ldots,x_{N}))=D((x_{1},x_{2},\ldots,x_{N}), (y_{1},y_{2},\ldots,y_{N}))\), for all \((x_{1},x_{2},\ldots,x_{N}), (y_{1},y_{2},\ldots,y_{N})\in X^{N}\).

Next we prove the triangle inequality. For all

$$(x_{1},x_{2},\ldots,x_{N}), (y_{1},y_{2}, \ldots,y_{N}), (z_{1},z_{2},\ldots,z_{N}) \in X^{N}, $$

from the definition of △, we have

$$\begin{aligned} &D\bigl((x_{1},x_{2},\ldots,x_{N}), (y_{1},y_{2},\ldots,y_{N})\bigr) \\ &\quad=\triangle\bigl(d(x_{1},y_{1}),d(x_{2},y_{2}), \ldots,d(x_{N},y_{N}) \bigr) \\ &\quad\leq\triangle\bigl(d(x_{1},z_{1})+d(z_{1},y_{1}),d(x_{2},z_{2})+d(z_{2},y_{2}), \ldots,d(x_{N},z_{N})+d(z_{N},y_{N}) \bigr) \\ &\quad\leq\triangle\bigl(d(x_{1},z_{1}),d(x_{2},z_{2}), \ldots,d(x_{N},z_{N}) \bigr) +\triangle\bigl(d(z_{1},y_{1}),d(z_{2},y_{2}), \ldots,d(z_{N},y_{N}) \bigr) \\ &\quad=D\bigl((x_{1},x_{2},\ldots,x_{N}), (z_{1},z_{2},\ldots,z_{N})\bigr)+D \bigl((z_{1},z_{2},\ldots,z_{N}), (y_{1},y_{2},\ldots,y_{N})\bigr). \end{aligned}$$

Next we prove that \((X^{N},D)\) is a complete metric space. Let \(\{p_{n}\}\subset X^{N}\) be a Cauchy sequence. Then we have

$$\lim_{n,m\rightarrow\infty} D(p_{n},p_{m})=\lim _{n,m\rightarrow\infty}\triangle \bigl(d(x_{1,n},x_{1,m}),d(x_{2,n},x_{2,m}), \ldots,d(x_{N,n},x_{N,m})\bigr)=0, $$

where

$$p_{n}=(x_{1,n},x_{2,n},x_{3,n}, \ldots,x_{N,n}),\qquad p_{m}=(x_{1,m},x_{2,m},x_{3,m}, \ldots,x_{N,m}). $$

From the definition of △, we have

$$\lim_{n,m\rightarrow\infty}d(x_{i,n},x_{i,m})=0, $$

for all \(i\in\{1,2,3,\ldots,N \}\). Hence each \(\{x_{i,n}\} \) (\(i\in\{1,2,3, \ldots,N \}\)) is a Cauchy sequence. Since \((X,d)\) is a complete metric space, there exist \(x_{1},x_{2},x_{3},\ldots,x_{N} \in X\) such that \(\lim_{n\rightarrow\infty}d(x_{i,n}, x_{i})=0\) for all \(i\in\{1,2,3,\ldots,N \}\). Therefore

$$\lim_{n\rightarrow\infty}D(p_{n},x)=0, $$

where

$$x=(x_{1},x_{2},x_{3},\ldots,x_{N})\in X^{N}, $$

which implies that \((X^{N},D)\) is a complete metric space.

We define a mapping \(T^{*}:X^{N}\rightarrow X^{N}\) by the following relation:

$$T^{*}(x_{1},x_{2},\ldots,x_{N})= \bigl(T(x_{1},x_{2},\ldots,x_{N}),T(x_{1},x_{2}, \ldots,x_{N}),\ldots,T(x_{1},x_{2}, \ldots,x_{N})\bigr), $$

for all \((x_{1},x_{2},\ldots,x_{N})\in X^{N}\). Next we prove that \(T^{*}\) is a contraction mapping from \((X^{N},D)\) into itself. Observe that, for any

$$x=(x_{1},x_{2},\ldots,x_{N}), y=(y_{1},y_{2}, \ldots,y_{N})\in X^{N}, $$

we have

$$\begin{aligned} D\bigl(T^{*}x,T^{*}y\bigr)&=\triangle\bigl(d(Tx,Ty),d(Tx,Ty),\ldots,d(Tx,Ty)\bigr) \\ &=d(Tx,Ty) \\ &\leq h \triangle\bigl(d(x_{1},y_{1}),d(x_{2},y_{2}), \ldots,d(x_{N},y_{N})\bigr) \\ &=h D(x,y). \end{aligned}$$

By the Banach contraction mapping principle, there exists a unique element \(u \in X^{N}\) such that \(u=T^{*}u=(Tu,Tu,\ldots,Tu)\) and, for any \(u_{0}=(x_{1},x_{2},\ldots,x_{N})\in X^{N}\), the iterative sequence \(u_{n+1}=T^{*}u_{n}\) converges to u. That is,

$$\begin{aligned} &u_{1}=(Tu_{0},Tu_{0},\ldots,Tu_{0}), \\ &u_{2}=(Tu_{1},Tu_{1},\ldots,Tu_{1}), \\ &u_{3}=(Tu_{2},Tu_{2},\ldots,Tu_{2}), \\ &\cdots \\ & u_{n+1}=(Tu_{n},Tu_{n},\ldots,Tu_{n}), \\ &\cdots \end{aligned}$$

converges to \(u\in X^{N}\). By the structure of \(\{u_{n}\}\), we know that there exists a unique element \(p\in X\) such that \(u=(p,p,\ldots, p)\) and hence the iterative sequence \(\{Tu_{n}\}\) converges to \(p \in X\). By

$$T^{*}u=u=(p,p,\ldots,p),\qquad Tu=T(p,p,\ldots,p), \qquad T^{*}u=(Tu,Tu,\ldots,Tu), $$

we obtain \(p=T(p,p,\ldots,p)\), that is, p is the unique multivariate fixed point of T. This completes the proof. □

Notice that, taking \(N=1\) and \(\triangle(a)=a\) in Theorem 2.6, we obtain Banach’s contraction principle.
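To see Theorem 2.6 at work, here is a minimal numerical sketch, assuming the toy data \(X=\mathbb{R}\), \(d(x,y)=|x-y|\), \(\triangle=\triangle_{1}\), \(N=3\), and a made-up contraction T (none of these choices come from the text). Since every iterate \(p_{n}\) has equal components, the scheme reduces to a scalar iteration.

```python
# T is a made-up 3-variable contraction on R: d(Tx, Ty) <= h * Delta_1(...)
# with h = 1/2, so the unique multivariate fixed point solves p = T(p, p, p).
def T(x1, x2, x3):
    return (x1 + x2 + x3) / 6.0 + 1.0

p = 0.0                      # starting point p_0 = (0, 0, 0)
for _ in range(60):
    p = T(p, p, p)           # p_{n+1} = (Tp_n, ..., Tp_n); all entries equal

# p = T(p, p, p) gives p = p/2 + 1, i.e. p = 2
assert abs(p - 2.0) < 1e-12
assert abs(T(p, p, p) - p) < 1e-12
```

The error halves at each step, matching the contraction constant \(h=1/2\) in the theorem.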

Some other consequences of the above general result are the following corollaries.

Corollary 2.7

Let \((X,d)\) be a complete metric space and let \(T: X^{N}\rightarrow X\) be an N-variable mapping satisfying the following condition:

$$d(Tx,Ty)\leq\frac{h}{N}\sum_{i=1}^{N}d(x_{i},y_{i}), \quad 0< h< 1, $$

where

$$x=(x_{1},x_{2}, \ldots, x_{N}) \in X^{N}, \qquad y=(y_{1},y_{2}, \ldots, y_{N}) \in X^{N}. $$

Then T has a unique multivariate fixed point \(p\in X\) and, for any \(p_{0} \in X^{N}\), the iterative sequence \(\{p_{n}\}\subset X^{N}\) defined by

$$\begin{aligned} &p_{1}=(Tp_{0},Tp_{0},\ldots,Tp_{0}), \\ &p_{2}=(Tp_{1},Tp_{1},\ldots,Tp_{1}), \\ &p_{3}=(Tp_{2},Tp_{2},\ldots,Tp_{2}), \\ &\cdots \\ & p_{n+1}=(Tp_{n},Tp_{n},\ldots,Tp_{n}), \\ &\cdots \end{aligned}$$

converges, in the multiply metric, to \((p,p,\ldots,p)\in X^{N}\) and the iterative sequence \(\{Tp_{n}\}\subset X\) converges, with respect to d, to \(p \in X\).

Notice that the above corollary is related to the well-known Prešić’s fixed point theorem (see [21]).

Prešić’s theorem

Let \((X,d)\) be a complete metric space, N be a given natural number, and \(T:X^{N}\to X\) be an operator, such that, for all \(x_{1}, \ldots, x_{N}, x_{N+1}\in X\), we have

$$d\bigl(T(x_{1},x_{2}, \ldots, x_{N}),T(x_{2}, \ldots, x_{N}, x_{N+1})\bigr)\le q_{1} d(x_{1},x_{2})+ \cdots+q_{N} d(x_{N},x_{N+1}), $$

where \(q_{1}, \ldots, q_{N}\in\mathbb{R}_{+}\) with \(q_{1}+ \cdots+ q_{N}<1\).

Then there exists a unique multivariate fixed point \(p\in X\) and p is the limit of the sequence \((x_{n})\) given by

$$x_{n+N}:=T(x_{n}, \ldots, x_{n+N-1}), \quad\textit{for } n\ge1, $$

independently of the initial N values.

Choosing \(\Delta:=\Delta_{2}\), \(h:= \sum_{i=1}^{N}q_{i}\), and \(x=(x_{1},x_{2},\ldots,x_{N}), y=(x_{2},x_{3},\ldots ,x_{N+1})\in X^{N}\), the contraction condition given in Theorem 2.6 leads to Prešić’s contraction type condition.
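The Prešić scheme itself can be illustrated with a small sketch (an assumed toy operator on \(X=\mathbb{R}\) with \(N=2\); the choice of T and of the starting values is ours):

```python
# Presic-type iteration on X = R, N = 2, with the made-up operator
# T(a, b) = (a + b)/4 + 1, which satisfies the Presic condition with
# q1 = q2 = 1/4 (so q1 + q2 = 1/2 < 1).  The unique p with p = T(p, p) is 2.
def T(a, b):
    return (a + b) / 4.0 + 1.0

xs = [10.0, -7.0]                    # arbitrary initial N = 2 values
for n in range(200):
    xs.append(T(xs[-2], xs[-1]))     # x_{n+2} = T(x_n, x_{n+1})

assert abs(xs[-1] - 2.0) < 1e-10
assert abs(T(2.0, 2.0) - 2.0) < 1e-15
```

The iterates obey the linear recurrence \(x_{n+2}=(x_{n}+x_{n+1})/4+1\), whose homogeneous roots lie inside the unit disc, so convergence is geometric regardless of the two starting values.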

Corollary 2.8

Let \((X,d)\) be a complete metric space and let \(T: X^{N}\rightarrow X\) be an N-variable mapping which satisfies the following condition:

$$d(Tx,Ty)\leq h \sqrt{\frac{1}{N}\sum _{i=1}^{N}d(x_{i},y_{i})^{2}}, \quad 0< h< 1, $$

where

$$x=(x_{1},x_{2}, \ldots, x_{N}) \in X^{N}, \qquad y=(y_{1},y_{2}, \ldots, y_{N}) \in X^{N}. $$

Then T has a unique multivariate fixed point \(p\in X\) and, for any \(p_{0} \in X^{N}\), the iterative sequence \(\{p_{n}\}\subset X^{N}\) defined by

$$\begin{aligned} &p_{1}=(Tp_{0},Tp_{0},\ldots,Tp_{0}), \\ &p_{2}=(Tp_{1},Tp_{1},\ldots,Tp_{1}), \\ &p_{3}=(Tp_{2},Tp_{2},\ldots,Tp_{2}), \\ &\cdots \\ & p_{n+1}=(Tp_{n},Tp_{n},\ldots,Tp_{n}), \\ &\cdots \end{aligned}$$

converges, in the multiply metric, to \((p,p,\ldots,p)\in X^{N}\) and the iterative sequence \(\{Tp_{n}\}\subset X\) converges, with respect to d, to \(p \in X\).

Corollary 2.9

Let \((X,d)\) be a complete metric space and let \(T: X^{N}\rightarrow X\) be an N-variable mapping which satisfies the following condition:

$$d(Tx,Ty)\leq h \max \bigl\{ d(x_{1},y_{1}),d(x_{2},y_{2}), \ldots,d(x_{N},y_{N})\bigr\} ,\quad 0< h< 1, $$

where

$$x=(x_{1},x_{2}, \ldots, x_{N}) \in X^{N}, \qquad y=(y_{1},y_{2}, \ldots, y_{N}) \in X^{N}. $$

Then T has a unique multivariate fixed point \(p\in X\) and, for any \(p_{0} \in X^{N}\), the iterative sequence \(\{p_{n}\}\subset X^{N}\) defined by

$$\begin{aligned} &p_{1}=(Tp_{0},Tp_{0},\ldots,Tp_{0}), \\ &p_{2}=(Tp_{1},Tp_{1},\ldots,Tp_{1}), \\ &\cdots \\ & p_{n+1}=(Tp_{n},Tp_{n},\ldots,Tp_{n}), \\ &\cdots \end{aligned}$$

converges, in the multiply metric, to \((p,p,\ldots,p)\in X^{N}\) and the iterative sequence \(\{Tp_{n}\}\subset X\) converges, with respect to d, to \(p \in X\).

Notice also here that the above corollary is related to a multivariate fixed point theorem of Ćirić and Prešić (see [22]), which reads as follows.

Ćirić-Prešić’s theorem

Let \((X,d)\) be a complete metric space, N be a given natural number, and \(T:X^{N}\to X\) be an operator, such that, for all \(x_{1}, \ldots, x_{N}, x_{N+1}\in X\), we have

$$d\bigl(T(x_{1},x_{2}, \ldots, x_{N}),T(x_{2}, \ldots, x_{N}, x_{N+1})\bigr)\le h \max\bigl\{ d(x_{1},x_{2}), d(x_{2},x_{3}), \ldots, d(x_{N},x_{N+1}) \bigr\} , $$

where \(0< h<1\).

Then there exists a multivariate fixed point \(p\in X\) and p is the limit of the sequence \((x_{n})\) given by

$$x_{n+N}:=T(x_{n}, \ldots, x_{n+N-1}), \quad\textit{for } n\ge1, $$

independently of the initial N values.

If, in addition, we suppose that on the diagonal \(\operatorname{Diag}\subset X^{N}\) we have

$$d\bigl(T(u,\ldots, u), T(v,\ldots, v)\bigr)< d(u,v), \quad\textit{for all } u,v\in X \textit{ with } u\ne v, $$

then the multivariate fixed point is unique.

Choosing \(\Delta:=\Delta_{4}\), \(h\in(0,1)\), and \(x=(x_{1},x_{2},\ldots,x_{N}), y=(x_{2},x_{3},\ldots ,x_{N+1})\in X^{N}\), the contraction condition given in Theorem 2.6 leads to the above Ćirić-Prešić’s contraction type condition.

It is worth mentioning that the above results are connected with a very interesting multivariate fixed point principle proved by Tasković in [23]. More precisely, Tasković’s result reads as follows.

Tasković’s theorem

Let \((X,d)\) be a complete metric space, N be a given natural number, \(f:\mathbb{R}^{N}\to\mathbb{R}\) be a continuous, increasing, and semi-homogeneous function (in the sense that \(f(\lambda a_{1}, \ldots, \lambda a_{N})\le\lambda f(a_{1}, \ldots, a_{N})\), for any \(\lambda, a_{1}, \ldots, a_{N}\in\mathbb{R}\)), and let \(T:X^{N}\to X\) be an operator such that, for all \(x=(x_{1}, \ldots , x_{N}), y=(y_{1}, \ldots, y_{N})\in X^{N}\), we have

$$d\bigl(T(x),T(y)\bigr)\le\bigl|f\bigl(a_{1} d(x_{1},y_{1}), \ldots, a_{N} d(x_{N},y_{N})\bigr)\bigr|, $$

where \(a_{1}, \ldots, a_{N}\in\mathbb{R}_{+}\) with \(|f(a_{1}, \ldots, a_{N})|<1\).

Then there exists a unique multivariate fixed point \(p\in X\) and p is the limit of the sequence \((x_{n})\) given by

$$x_{n+N}:=T(x_{n}, \ldots, x_{n+N-1}), \quad\textit{for } n\ge1, $$

independently of the initial N values.

Notice here that \(\triangle(a_{1},\ldots, a_{N}):=f(a_{1},\ldots, a_{N})\) satisfies part of the axioms of the multiply metric function. More connections with the above mentioned results will be given in a forthcoming paper.

The following result is another multivariate fixed point theorem, for a class of generalized contraction mappings related to Theorem SY. Its proof can be obtained from Theorem SY in the same way as in the proof of Theorem 2.6.

Theorem 2.10

Let \((X,d)\) be a complete metric space and let \(T: X^{N}\rightarrow X\) be an N-variable mapping which satisfies the following condition:

$$\psi\bigl( d(Tx,Ty)\bigr)\leq\phi\bigl( \triangle\bigl(d(x_{1},y_{1}), d(x_{2},y_{2}), \ldots, d(x_{N},y_{N}) \bigr)\bigr), $$

where △ is a multiply metric function,

$$x=(x_{1},x_{2}, \ldots, x_{N}) \in X^{N},\qquad y=(y_{1},y_{2}, \ldots, y_{N}) \in X^{N}, $$

and \(\psi, \phi: [0, +\infty) \rightarrow[0, +\infty)\) are two functions satisfying the conditions:

$$\begin{aligned}& (1)\quad \psi(a)\leq\phi(b) \quad\Rightarrow\quad a \leq b; \\& (2)\quad \textstyle\begin{cases} \psi(a_{n})\leq\phi(b_{n}) \\ a_{n}\rightarrow\varepsilon,\qquad b_{n}\rightarrow\varepsilon \end{cases}\displaystyle \quad\Rightarrow\quad\varepsilon=0. \end{aligned}$$

Then T has a unique multivariate fixed point \(p\in X\) and, for any \(p_{0} \in X^{N}\), the iterative sequence \(\{p_{n}\}\subset X^{N}\) defined by

$$\begin{aligned} &p_{1}=(Tp_{0},Tp_{0},\ldots,Tp_{0}), \\ &p_{2}=(Tp_{1},Tp_{1},\ldots,Tp_{1}), \\ &\cdots \\ & p_{n+1}=(Tp_{n},Tp_{n},\ldots,Tp_{n}), \\ &\cdots \end{aligned}$$

converges, in the multiply metric, to \((p,p,\ldots,p)\in X^{N}\) and the iterative sequence \(\{Tp_{n}\}\subset X\) converges, with respect to d, to \(p \in X\).

In [7], Su and Yao also gave some examples of functions \(\psi(t)\), \(\phi(t)\). Here we recall some of them.

Example 2.11

([7])

The following functions satisfy the conditions (1) and (2) of Theorem 2.10.

$$(\mathrm{a}) \quad \textstyle\begin{cases} \psi_{1}(t)=t, \\ \phi_{1}(t)=\alpha t, \end{cases} $$

where \(0<\alpha<1\) is a constant.

$$\begin{aligned}& (\mathrm{b})\quad \textstyle\begin{cases} \psi_{2}(t)=t^{2}, \\ \phi_{2}(t)=\ln(t^{2}+1), \end{cases}\displaystyle \\& (\mathrm{c})\quad \textstyle\begin{cases} \psi_{3}(t)=t, \\ \phi_{3}(t)= \textstyle\begin{cases} t^{2}, & 0\leq t\leq\frac{1}{2},\\ t-\frac{3}{8},& \frac{1}{2}< t< +\infty, \end{cases}\displaystyle \end{cases}\displaystyle \\& (\mathrm{d}) \quad \textstyle\begin{cases} \psi_{4}(t)= \textstyle\begin{cases} t, & 0\leq t\leq1,\\ t-\frac{1}{2}, & 1< t< +\infty, \end{cases}\displaystyle \\ \phi_{4}(t)= \textstyle\begin{cases} \frac{t}{2}, & 0\leq t\leq1,\\ t-\frac{4}{5}, & 1< t< +\infty, \end{cases}\displaystyle \end{cases}\displaystyle \\& (\mathrm{e}) \quad \textstyle\begin{cases} \psi_{5}(t)= \textstyle\begin{cases} t, & 0\leq t\leq1,\\ \alpha t^{2}, & 1\leq t< +\infty, \end{cases}\displaystyle \\ \phi_{5}(t)= \textstyle\begin{cases} t^{2}, & 0\leq t< 1,\\ \beta t, & 1< t< +\infty, \end{cases}\displaystyle \end{cases}\displaystyle \end{aligned}$$

where \(0<\beta< \alpha\) are constants.
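Conditions (1) and (2) can be spot-checked numerically, e.g. for the pair (b). The script below (our own sketch, not from [7]) relies on the elementary inequality \(\ln(t^{2}+1)\leq t^{2}\), with equality only at \(t=0\).

```python
import math
import random

# Spot-check of conditions (1) and (2) of Theorem SY for pair (b):
# psi2(t) = t^2, phi2(t) = ln(t^2 + 1).
def psi2(t): return t * t
def phi2(t): return math.log(t * t + 1.0)

random.seed(1)
# (1): psi2(a) <= phi2(b) must force a <= b (since phi2(b) <= psi2(b))
for _ in range(100000):
    a, b = random.uniform(0, 5), random.uniform(0, 5)
    if psi2(a) <= phi2(b):
        assert a <= b
# (2): for eps > 0 the gap psi2(eps) - phi2(eps) is strictly positive, so
# psi2(a_n) <= phi2(b_n) with a_n -> eps and b_n -> eps forces eps = 0
for eps in (0.01, 0.5, 1.0, 10.0):
    assert psi2(eps) > phi2(eps)
```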

For example, if we choose \(\psi_{5}(t)\), \(\phi_{5}(t)\) in Theorem 2.10, then we obtain the following result.

Theorem 2.12

Let \((X,d)\) be a complete metric space and let \(T: X^{N}\rightarrow X\) be an N-variable mapping such that

$$\begin{aligned}& 0\leq d(Tx,Ty)< 1 \quad\Rightarrow\quad d(Tx,Ty)\leq\bigl(\triangle \bigl(d(x_{1},y_{1}), d(x_{2},y_{2}), \ldots, d(x_{N},y_{N})\bigr)\bigr)^{2}, \\& d(Tx,Ty)\geq1 \quad\Rightarrow\quad\alpha\bigl(d(Tx,Ty)\bigr)^{2}\leq \beta\triangle \bigl(d(x_{1},y_{1}), d(x_{2},y_{2}), \ldots, d(x_{N},y_{N})\bigr), \end{aligned}$$

for any \(x=(x_{1},x_{2},x_{3}, \ldots,x_{N}), y=(y_{1},y_{2},y_{3}, \ldots,y_{N}) \in X^{N}\).

Then T has a unique multivariate fixed point \(p\in X\) and, for any \(p_{0} \in X^{N}\), the iterative sequence \(\{p_{n}\}\subset X^{N}\) defined by

$$\begin{aligned} &p_{1}=(Tp_{0},Tp_{0},\ldots,Tp_{0}), \\ &p_{2}=(Tp_{1},Tp_{1},\ldots,Tp_{1}), \\ &\cdots \\ & p_{n+1}=(Tp_{n},Tp_{n},\ldots,Tp_{n}), \\ &\cdots \end{aligned}$$

converges, in the multiply metric, to \((p,p,\ldots,p)\in X^{N}\) and the iterative sequence \(\{Tp_{n}\}\subset X\) converges, with respect to d, to \(p \in X\).

Using the following remark, it is easy to prove another consequence of our main results.

Remark 2.13

Let \(\psi, \phi: [0, +\infty) \rightarrow[0, +\infty)\) be two functions satisfying the conditions:

  1. (i)

    \(\psi(0)=\phi(0)\);

  2. (ii)

    \(\psi(t)>\phi(t)\), \(\forall t>0\);

  3. (iii)

    ψ is lower semi-continuous and ϕ is upper semi-continuous.

Then \(\psi(t)\), \(\phi(t)\) satisfy conditions (1) and (2) of Theorem 2.10.

Corollary 2.14

Let \((X,d)\) be a complete metric space and let \(T: X^{N}\rightarrow X\) be an N-variable mapping such that, for any \(x=(x_{1},x_{2},x_{3}, \ldots,x_{N}), y=(y_{1},y_{2},y_{3}, \ldots,y_{N}) \in X^{N}\), we have

$$\psi\bigl(d(Tx,Ty)\bigr)\leq\phi\bigl(\triangle\bigl(d(x_{1},y_{1}), d(x_{2},y_{2}), \ldots, d(x_{N},y_{N}) \bigr)\bigr), $$

where \(\psi, \phi: [0, +\infty) \rightarrow[0, +\infty)\) are two functions with the conditions (i), (ii), and (iii).

Then T has a unique multivariate fixed point \(p\in X\) and, for any \(p_{0} \in X^{N}\), the iterative sequence \(\{p_{n}\}\subset X^{N}\) defined by

$$\begin{aligned}[b] &p_{1}=(Tp_{0},Tp_{0},\ldots,Tp_{0}), \\ &p_{2}=(Tp_{1},Tp_{1},\ldots,Tp_{1}), \\ &\cdots \\ & p_{n+1}=(Tp_{n},Tp_{n},\ldots,Tp_{n}), \\ &\cdots \end{aligned} $$

converges, in the multiply metric, to \((p,p,\ldots,p)\in X^{N}\) and the iterative sequence \(\{Tp_{n}\}\subset X\) converges, with respect to d, to \(p \in X\).

3 An application to an initial-value problem related to a first order differential equation

We will give now an application of the above results to an initial-value problem related to a first order differential equation of the following form:

$$ \left \{ \textstyle\begin{array}{@{}l} \frac{dx}{dt}=f(x(t),x(t),\ldots, x(t),t), \quad t\in I:=[t_{0}-\delta, t_{0}+\delta],\\ x(t_{0})=x^{0} \quad (x^{0}\in\mathbb{R}), \end{array}\displaystyle \right . $$

where \(t_{0}\in\mathbb{R}\) and \(\delta>0\) are given real numbers and \(f:\mathbb{R}^{N}\times I\to\mathbb{R}\) is a continuous function of \((N+1)\) variables satisfying the following Lipschitz type condition:

$$\bigl|f(x_{1},x_{2},\ldots, x_{N}, t)-f(y_{1},y_{2}, \ldots, y_{N}, t)\bigr|\leq k(t) \sum_{i=1}^{N} |x_{i}-y_{i}|, $$

with \(k\in L^{1}(I,\mathbb{R}_{+})\).

For this purpose, we will consider first the following integral equation:

$$x(t)= \int_{t_{0}}^{t}f\bigl(x(\tau),x(\tau),\ldots,x(\tau), \tau\bigr)\,d\tau+g(t), \quad t \in[t_{0}-\delta,t_{0}+ \delta], $$

where \(g\in C(I)\) is a given function and f is as before.

Let \(X:=C[t_{0}-\delta,t_{0}+\delta]\) be the linear space of continuous real functions defined on the closed interval \(I:=[t_{0}-\delta,t_{0}+\delta]\). It is well known that \(C[t_{0}-\delta,t_{0}+\delta]\) is a complete metric space with respect to the Chebyshev metric

$$d(x,y):=\max_{t_{0}-\delta\leq t \leq t_{0}+\delta}\bigl|x(t)-y(t)\bigr|, $$

for \(x,y\in X\).

We can also introduce on X a Bielecki type metric (which is known to be Lipschitz (strongly) equivalent to d), by the relation

$$d_{B}(x,y):=\max_{t_{0}-\delta\leq t \leq t_{0}+\delta}\bigl|x(t)-y(t)\bigr|e^{-LK(t)}, $$

where \(K(t):=\int^{t}_{t_{0}}k(s)\,ds\) and L is a constant greater than N.

Let \(T: X \times X\times\cdots\times X \rightarrow X\), \(X^{N}\ni x=(x_{1},\ldots, x_{N})\longmapsto Tx\), be an N-variable mapping defined by

$$Tx(t):= \int_{t_{0}}^{t}f\bigl(x_{1}(\tau), x_{2}(\tau ),\ldots,x_{N}(\tau),\tau\bigr)\,d\tau+g(t), $$

for all \(x_{1}, x_{2},\ldots, x_{N} \in X\), where \(g\in X\) and \(f(x_{1},x_{2},\ldots, x_{N},t)\) is a continuous function of \((N+1)\) variables satisfying the following condition:

$$\bigl|f(x_{1},x_{2},\ldots, x_{N}, t)-f(y_{1},y_{2}, \ldots, y_{N}, t)\bigr|\leq k(t) \sum_{i=1}^{N} |x_{i}-y_{i}|, $$

with \(k\in L^{1}(I,\mathbb{R}_{+})\).

For any \(x=(x_{1}, x_{2},\ldots,x_{N}), y=(y_{1}, y_{2},\ldots,y_{N})\in X^{N}\), and \(t\in I\) we have

$$\begin{aligned} \bigl|Tx(t)-Ty(t)\bigr|&\le\biggl| \int_{t_{0}}^{t}\bigl|f\bigl(x(\tau),\tau\bigr)-f\bigl(y( \tau),\tau \bigr)\bigr|\,d\tau\biggr| \\ & \le\biggl| \int_{t_{0}}^{t}\sum_{i=1}^{N}k( \tau)\bigl|x_{i}(\tau)-y_{i}(\tau)\bigr|\,d\tau\biggr| \\ &= \Biggl| \int_{t_{0}}^{t}\sum_{i=1}^{N} k(\tau) \bigl|x_{i}(\tau)-y_{i}(\tau)\bigr|e^{-LK(\tau)} e^{LK(\tau)}\,d\tau\Biggr| \\ &\le\Biggl| \int_{t_{0}}^{t}\sum_{i=1}^{N} \max_{\tau\in I}\bigl[\bigl|x_{i}(\tau)-y_{i}( \tau)\bigr|e^{-LK(\tau)}\bigr] k(\tau) e^{LK(\tau)}\,d\tau\Biggr| \\ &= N \biggl| \int_{t_{0}}^{t} \Biggl(\frac{1}{N}\sum _{i=1}^{N} d_{B}(x_{i},y_{i}) \Biggr) k(\tau) e^{LK(\tau)}\,d\tau\biggr| \\ &= N \triangle_{1} \bigl(d_{B}(x_{1},y_{1}), \ldots, d_{B}(x_{N},y_{N})\bigr) \biggl| \int_{t_{0}}^{t} k(\tau) e^{LK(\tau)}\,d\tau\biggr| \\ &\le\frac{N}{L}\cdot\triangle_{1} \bigl(d_{B}(x_{1},y_{1}), \ldots, d_{B}(x_{N},y_{N})\bigr) e^{LK(t)}. \end{aligned}$$

Thus,

$$\bigl|Tx(t)-Ty(t)\bigr| e^{-LK(t)}\le\frac{N}{L}\cdot\triangle_{1} \bigl(d_{B}(x_{1},y_{1}),\ldots, d_{B}(x_{N},y_{N})\bigr), \quad\mbox{for all } t \in I. $$

Hence we get

$$d_{B}(Tx,Ty)\le\frac{N}{L}\cdot\triangle_{1} \bigl(d_{B}(x_{1},y_{1}),\ldots, d_{B}(x_{N},y_{N})\bigr), \quad\mbox{for all } x,y \in X^{N}. $$

Since \(h:= \frac{N}{L}<1\), we conclude, by using Theorem 2.6, that the N variable mapping T has a unique multivariate fixed point \(x^{*}\in X= C[t_{0}-\delta,t_{0}+\delta]\), i.e., such that

$$x^{*}(t)= \int_{t_{0}}^{t}f\bigl(x^{*}(\tau),x^{*}(\tau),\ldots,x^{*}( \tau ),\tau\bigr)\,d\tau+g(t), \quad t\in I, $$

and, for any \(x_{0}\in X\), the iterative sequence \(\{x_{n}(t)\}\) defined by

$$\begin{aligned} &x_{1}(t)= \int_{t_{0}}^{t}f\bigl(x_{0}( \tau),x_{0}(\tau),\ldots,x_{0}(\tau ),\tau\bigr)\,d\tau+g(t), \\ &x_{2}(t)= \int_{t_{0}}^{t}f\bigl(x_{1}( \tau),x_{1}(\tau),\ldots,x_{1}(\tau ),\tau\bigr)\,d\tau+g(t), \\ & \cdots \\ &x_{n+1}(t)= \int_{t_{0}}^{t}f\bigl(x_{n}( \tau),x_{n}(\tau),\ldots ,x_{n}(\tau),\tau\bigr)\,d\tau+g(t), \end{aligned}$$

converges to \(x^{*}\in X=C[t_{0}-\delta,t_{0}+\delta]\). The function \(x^{*}=x^{*}(t)\) is the unique solution of the integral equation

$$x(t)= \int_{t_{0}}^{t}f\bigl(x(\tau),x(\tau),\ldots,x(\tau), \tau\bigr)\,d\tau+g(t), \quad t \in[t_{0}-\delta,t_{0}+ \delta]. $$

In particular, if \(g(t):=x^{0}\) (where \(x^{0}\in\mathbb{R}\)) is a constant function, it is well known that the above integral equation is equivalent to the initial-value problem associated with a first order differential equation of the form

$$\frac{dx(t)}{dt}=f\bigl(x(t),x(t),\ldots,x(t),t\bigr), \qquad x(t_{0})=x^{0}. $$

Thus, our approach yields an existence and uniqueness result for the initial-value problem.
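As a concrete check of this construction, the sketch below runs the Picard iteration for an assumed toy instance (our own choices: \(N=2\), \(f(x_{1},x_{2},t)=(x_{1}+x_{2})/4+t\), so \(k(t)\equiv1/4\), with \(g(t)\equiv x^{0}=1\), \(t_{0}=0\), \(\delta=1\)) and compares it with the known closed-form solution of \(dx/dt=x/2+t\), \(x(0)=1\), namely \(x(t)=5e^{t/2}-2t-4\).

```python
import math

# Picard iteration for the toy problem (assumed data):
#   f(x1, x2, t) = (x1 + x2)/4 + t   (Lipschitz with k(t) = 1/4),
#   x(0) = 1 on I = [0, 1],  i.e.  dx/dt = x/2 + t.
M, h = 1000, 1.0 / 1000
ts = [i * h for i in range(M + 1)]

def f(x1, x2, t):
    return (x1 + x2) / 4.0 + t

x = [1.0] * (M + 1)                       # initial guess x_0(t) = 1
for _ in range(60):                       # x_{n+1}(t) = 1 + int_0^t f(x_n, x_n, s) ds
    new, acc = [1.0], 0.0
    for i in range(1, M + 1):
        acc += h * (f(x[i-1], x[i-1], ts[i-1]) + f(x[i], x[i], ts[i])) / 2.0
        new.append(1.0 + acc)             # trapezoidal rule for the integral
    x = new

exact = 5.0 * math.exp(0.5) - 6.0         # x(1) = 5 e^{1/2} - 2 - 4
assert abs(x[-1] - exact) < 1e-4
```

The remaining discrepancy comes from the trapezoidal quadrature, not from the Picard iteration, which contracts with factor roughly \(1/2\) per sweep on this interval.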

4 N-variable nonexpansive mappings in normed spaces

We will first introduce the concept of an N-variable nonexpansive mapping.

Definition 4.1

Let \((X,\|\cdot\|)\) be a normed space. An N-variable mapping \(T: X^{N}\rightarrow X\) is said to be nonexpansive if

$$\|Tx-Ty\|\leq\triangle\bigl(\|x_{1}-y_{1}\|, \|x_{2}-y_{2}\|, \ldots,\|x_{N}-y_{N}\|\bigr), $$

for all \(x=(x_{1},x_{2},x_{3},\ldots,x_{N}), y=(y_{1},y_{2},y_{3},\ldots,y_{N})\in X^{N}\), where △ is a multiply metric function.

Some useful results are the following.

Lemma 4.2

Let X be a Hilbert space with the inner product \(\langle\cdot,\cdot\rangle\). We consider on the Cartesian product space \(X^{N}=X\times X\times\cdots\times X\) the following functional:

$$\langle x, y\rangle^{*}=\frac{1}{N}\sum_{i=1}^{N} \langle x_{i},y_{i}\rangle,\quad \forall x=(x_{1},x_{2},\ldots,x_{N}), y=(y_{1},y_{2}, \ldots,y_{N}) \in X^{N}. $$

Then \((X^{N}, \langle\cdot,\cdot\rangle^{*})\) is a Hilbert space.

Proof

It is easy to prove that \(X^{N}\) is a linear space with the following linear operations:

$$\begin{aligned}& (x_{1},x_{2},\ldots,x_{N})+(y_{1},y_{2}, \ldots ,y_{N})=(x_{1}+y_{1},x_{2}+y_{2}, \ldots,x_{N}+y_{N}), \\& \lambda(x_{1},x_{2},\ldots,x_{N})=(\lambda x_{1},\lambda x_{2},\ldots,\lambda x_{N}), \end{aligned}$$

for all \(x=(x_{1},x_{2},\ldots,x_{N}), y=(y_{1},y_{2},\ldots,y_{N}) \in X^{N}\), and \(\lambda\in (-\infty,+\infty)\). Next we prove that \((X^{N}, \langle \cdot,\cdot\rangle^{*})\) is an inner product space. It is easy to see that the following relations hold:

  1. (1)

    \(\langle x,x\rangle^{*}=\frac{1}{N}\sum_{i=1}^{N}\langle x_{i},x_{i}\rangle\geq0\) and \(\langle x,x\rangle^{*}=0 \Leftrightarrow x=0\), \(\forall x=(x_{1},x_{2},\ldots,x_{N}) \in X^{N}\);

  2. (2)

    \(\langle x,y\rangle^{*}=\langle y,x\rangle^{*}\), \(\forall x,y \in X^{N}\);

  3. (3)

    \(\langle\lambda x,y\rangle^{*}=\frac{1}{N}\sum_{i=1}^{N}\langle \lambda x_{i},y_{i}\rangle= \lambda\frac{1}{N} \sum_{i=1}^{N}\langle x_{i},y_{i}\rangle=\lambda\langle x,y\rangle^{*} \), \(\forall x,y \in X^{N}\);

  4. (4)

    \(\langle x+y,z\rangle^{*}=\langle x,z\rangle^{*}+\langle y,z\rangle^{*}\), \(\forall x,y,z \in X^{N}\).

Hence \((X^{N}, \langle\cdot,\cdot\rangle^{*})\) is an inner product space.

The inner product \(\langle x,y \rangle^{*}\) generates the following norm:

$$\|x\|^{*}= \sqrt{\langle x,x\rangle^{*}}=\sqrt{\frac{1}{N}\sum _{i=1}^{N}\|x_{i} \|^{2}}, \quad \forall x=(x_{1},x_{2}, \ldots,x_{N}) \in X^{N}, $$

where \(\|x_{i}\|= \sqrt{\langle x_{i},x_{i}\rangle}\), \(\forall x_{i} \in X\), \(i=1,2,3,\ldots, N\). Since X is complete, we know that \((X^{N}, \|\cdot\|^{*})\) is also complete. So \((X^{N}, \|\cdot\|^{*})\) is a Hilbert space. □

Lemma 4.3

Let X be a Hilbert space with the inner product \(\langle\cdot,\cdot\rangle\) and let

$$\langle x, y\rangle^{*}=\frac{1}{N}\sum_{i=1}^{N} \langle x_{i},y_{i}\rangle, \quad \forall x=(x_{1},x_{2},\ldots,x_{N}), y=(y_{1},y_{2}, \ldots,y_{N}) \in X^{N} $$

be the inner product on the Cartesian product space \(X^{N}\). Then the following conclusions hold:

  1. (1)

    \((X^{N})^{*}=X^{*}\times X^{*}\times\cdots\times X^{*}\);

  2. (2)

    \(f \in(X^{N})^{*}\) if and only if there exist \(f_{i} \in X^{*}\), \(i\in\{1,2,3, \ldots,N \}\) such that

    $$f(x)=\frac{1}{N}\sum_{i=1}^{N}f_{i}(x_{i}), \quad \forall x=(x_{1},x_{2},\ldots,x_{N}) \in X^{N}. $$

(Here \((X^{N})^{*}\) and \(X^{*}\) denote the conjugate spaces of \(X^{N}\) and X, respectively.)

Proof

By Lemma 4.2, we obtain the conclusion (1). Next we prove the conclusion (2). Assume that \(f \in(X^{N})^{*}\). By Riesz’s theorem and by Lemma 4.2, there exists an element \(y=(y_{1},y_{2},\ldots,y_{N}) \in X^{N}\) such that

$$f(x)=\langle x, y \rangle^{*}=\frac{1}{N}\sum_{i=1}^{N} \langle x_{i},y_{i}\rangle,\quad \forall x=(x_{1},x_{2},\ldots,x_{N}) \in X^{N}. $$

Therefore there exist \(f_{i} \in X^{*}\), \(i\in\{1,2,3, \ldots,N \}\) such that

$$f(x)=\frac{1}{N}\sum_{i=1}^{N}f_{i}(x_{i}), \quad \forall x=(x_{1},x_{2},\ldots,x_{N}) \in X^{N}. $$

Conversely, assume that there exist \(f_{i} \in X^{*}\), \(i\in\{1,2,3, \ldots,N \}\), such that

$$f(x)=\frac{1}{N}\sum_{i=1}^{N}f_{i}(x_{i}), \quad \forall x=(x_{1},x_{2},\ldots,x_{N}) \in X^{N}. $$

It is easy to see that \(f \in(X^{N})^{*}\). This completes the proof. □

Theorem 4.4

Let X be a Hilbert space with the inner product \(\langle\cdot,\cdot\rangle\) and the norm \(\|\cdot\|\). Consider on \(X^{N}\) the norm

$$\|x\|^{*}=\sqrt{\frac{1}{N} \sum_{i=1}^{N} \|x_{i}\|^{2}},\quad \forall x=(x_{1},x_{2}, \ldots,x_{N}) \in X^{N}. $$

Let \(T:X^{N}\rightarrow X\) be an N-variable nonexpansive mapping such that the multivariate fixed point set \(F(T)\) is nonempty. Then, for any given \(x_{0}=(x_{1}^{0},x_{2}^{0},\ldots,x_{N}^{0}) \in X^{N}\), the iterative sequences

$$ x_{i}^{n+1}=\alpha_{n}u_{i}+(1- \alpha_{n})T\bigl(x_{1}^{n},x_{2}^{n}, \ldots,x_{N}^{n}\bigr),\quad i=1,2,3,\ldots,N , $$
(4.1)

converge strongly to a multivariate fixed point p of T, where \(u=(u_{1},u_{2},\ldots,u_{N}) \in X^{N}\) is a fixed element and the sequence \(\{\alpha_{n}\} \subset[0,1]\) satisfies the following conditions (C1), (C2), and (C3):

(C1):

\(\lim_{n\rightarrow\infty}\alpha_{n}=0\);

(C2):

\(\sum_{n=1}^{\infty}\alpha_{n}=+\infty\);

(C3):

\(\sum_{n=1}^{\infty}|\alpha_{n+1}-\alpha _{n}|<+\infty\).

Proof

We define a mapping \(T^{*}:X^{N}\rightarrow X^{N}\), \(x\mapsto T^{*}(x)\) by the following relation:

$$T^{*}(x_{1},x_{2},\ldots,x_{N}):= \bigl(T(x_{1},x_{2},\ldots,x_{N}),T(x_{1},x_{2}, \ldots,x_{N}),\ldots,T(x_{1},x_{2}, \ldots,x_{N})\bigr), $$

for all \((x_{1},x_{2},\ldots,x_{N})\in X^{N}\). Next we prove that \(T^{*}\) is a nonexpansive mapping from \((X^{N}, \|\cdot\|^{*})\) into itself. Observe that, for any

$$x=(x_{1},x_{2},\ldots,x_{N}), y=(y_{1},y_{2}, \ldots,y_{N})\in X^{N}, $$

we have

$$\begin{aligned} \bigl\| T^{*}x-T^{*}y\bigr\| ^{*}&=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\|Tx-Ty\|^{2}}\leq\sqrt{\frac{1}{N}\sum_{i=1}^{N}\Biggl(\frac{1}{N}\sum_{j=1}^{N}\|x_{j}-y_{j}\|\Biggr)^{2}} \\ &=\frac{1}{N}\sum_{j=1}^{N}\|x_{j}-y_{j}\|\leq\sqrt{\frac{1}{N}\sum_{j=1}^{N}\|x_{j}-y_{j}\|^{2}}=\|x-y\|^{*}, \end{aligned}$$

where the last inequality is the arithmetic–quadratic mean inequality (a consequence of the Cauchy–Schwarz inequality).

Hence \(T^{*}\) is a nonexpansive mapping from \((X^{N},\|\cdot\|^{*})\) into itself. For any \(p \in F(T)=\{x \in X: x=T(x,x,\ldots,x)\}\), we have

$$T^{*}(p,p,\ldots,p)=\bigl(T(p,p,\ldots,p),T(p,p,\ldots,p), \ldots,T(p,p,\ldots,p) \bigr)=(p,p,\ldots,p), $$

hence \(p^{*}=(p,p,\ldots,p) \in X^{N}\) is a fixed point of \(T^{*}\). Therefore, the mapping \(T^{*}: X^{N}\rightarrow X^{N}\) is a nonexpansive mapping with a nonempty fixed point set

$$F\bigl(T^{*}\bigr)=\bigl\{ (p,p,\ldots,p)\in X^{N}: p \in F(T)\bigr\} . $$

By using the result of Wittmann [24], we know that, for any given \(x_{0} \in X^{N}\), Halpern’s iterative sequence

$$ x_{n+1}=\alpha_{n}u+(1-\alpha_{n})T^{*}x_{n} $$
(4.2)

converges in the norm \(\|\cdot\|^{*}\) to a fixed point \(p^{*}=(p,p,\ldots,p)\) of \(T^{*}\), where \(u=(u_{1},u_{2},\ldots, u_{N}) \in X^{N}\). Let

$$x_{n}=\bigl(x_{1}^{n},x_{2}^{n}, \ldots,x_{N}^{n}\bigr), \quad n=0, 1,2,3, \ldots. $$

Then the iterative scheme (4.2) can be rewritten as (4.1). Since \(x_{n} \rightarrow p^{*}\) in the norm \(\|\cdot\|^{*}\), we have \(x_{i}^{n}\rightarrow p\) in the norm \(\|\cdot\|\) for all \(i=1,2,3,\ldots,N\). This completes the proof. □
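For a concrete illustration of the scheme (4.1), the sketch below runs the iteration for a hypothetical mapping on \(X=\mathbb{R}\); the mapping T, the anchor u, and the choice \(\alpha_{n}=1/(n+1)\) (which satisfies (C1)–(C3)) are assumptions for the example only. The chosen T is a contraction, hence N-variables nonexpansive, with unique multivariate fixed point \(p=2\), since \(p=p/2+1\).

```python
# Numerical sketch of the Halpern-type scheme (4.1) on X = R with the
# illustrative mapping T(x_1,...,x_N) = (1/(2N)) * sum_i x_i + 1, a contraction
# (hence N-variables nonexpansive) whose unique multivariate fixed point is p = 2.

N = 4
def T(x):                      # T: X^N -> X
    return sum(x) / (2 * N) + 1.0

u = [5.0] * N                  # fixed anchor element u in X^N
x = [0.0, 1.0, -2.0, 3.0]      # initial point x_0 in X^N

for n in range(1, 20001):
    alpha = 1.0 / (n + 1)      # satisfies (C1), (C2), and (C3)
    t = T(x)
    # scheme (4.1): x_i^{n+1} = alpha_n u_i + (1 - alpha_n) T(x_1^n,...,x_N^n)
    x = [alpha * u[i] + (1 - alpha) * t for i in range(N)]

print(all(abs(xi - 2.0) < 1e-3 for xi in x))  # each coordinate approaches p = 2
```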

If condition (C3) is replaced by condition (C4) of [25] or by condition (C5) of [26], then Theorem 4.4 still holds.

The construction of fixed points of nonexpansive mappings via Mann’s algorithm has been investigated extensively in the recent literature (see, e.g., [27] and the references therein). Related work can also be found in [28–45]. Starting from an arbitrary \(x_{0} \in C\), Mann’s algorithm generates a sequence according to the following recursive procedure:

$$ x_{n+1}=\alpha_{n}x_{n}+(1-\alpha_{n})Tx_{n}, \quad n\geq0, $$
(4.3)

where \(\{\alpha_{n}\}\) is a real control sequence in the interval \((0, 1)\).

If T is a nonexpansive mapping with at least one fixed point and the control sequence \(\{\alpha_{n}\}\) is chosen so that \(\sum_{n=0}^{\infty}\alpha_{n}(1-\alpha_{n})=+\infty\), then the sequence \(\{x_{n}\}\) generated by Mann’s algorithm (4.3) converges weakly to a fixed point of T in any uniformly convex Banach space with a Fréchet differentiable norm (see [27]).
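As a minimal sketch of Mann’s algorithm (4.3), assume (for illustration only) \(X=C=\mathbb{R}\) and \(T=\cos\), which is nonexpansive since \(|\cos'|\leq1\) and has a unique fixed point \(x^{*}\approx0.7390851332\) (the Dottie number); the constant choice \(\alpha_{n}=1/2\) satisfies \(\sum_{n=0}^{\infty}\alpha_{n}(1-\alpha_{n})=+\infty\).

```python
# Illustrative run of Mann's algorithm (4.3) with the assumed choices
# X = C = R, T = cos (nonexpansive), alpha_n = 1/2.
import math

alpha = 0.5          # constant control sequence: sum alpha_n (1 - alpha_n) = +inf
x = 0.0              # arbitrary starting point x_0
for n in range(100):
    # x_{n+1} = alpha_n x_n + (1 - alpha_n) T x_n
    x = alpha * x + (1 - alpha) * math.cos(x)

print(abs(x - 0.7390851332151607) < 1e-9)  # → True: x_n approaches the fixed point of cos
```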

Next we prove a weak convergence theorem for N-variables nonexpansive mappings in Hilbert spaces.

Theorem 4.5

Let X be a Hilbert space with the inner product \(\langle\cdot,\cdot\rangle\) and the norm \(\|\cdot\|\). Consider on the Cartesian product space \(X^{N}\) the norm

$$\|x\|^{*}=\sqrt{\frac{1}{N}\sum_{i=1}^{N} \|x_{i}\|^{2}}, \quad \forall x=(x_{1},x_{2}, \ldots,x_{N}) \in X^{N}. $$

Let \(T:X^{N}\rightarrow X\) be an N-variables nonexpansive mapping such that the multivariate fixed point set \(F(T)\) is nonempty. Consider, for any given \(x_{0}=(x_{1}^{0},x_{2}^{0},\ldots,x_{N}^{0}) \in X^{N}\), the following iterative sequences:

$$ x_{i}^{n+1}=\alpha_{n}x_{i}^{n}+(1- \alpha_{n})T\bigl(x_{1}^{n},x_{2}^{n}, \ldots,x_{N}^{n}\bigr),\quad i=1,2,3,\ldots,N, $$
(4.4)

where the sequence \(\{\alpha_{n}\} \subset[0,1]\) satisfies the condition \(\sum_{n=0}^{\infty}\alpha_{n}(1-\alpha_{n})=+\infty\).

Then the sequences \(\{x_{i}^{n}\}\), \(i=1,2,\ldots,N\), converge weakly to a multivariate fixed point p of T.

Proof

We define a mapping \(T^{*}:X^{N}\rightarrow X^{N}\) by the following relation:

$$T^{*}(x_{1},x_{2},\ldots,x_{N}):= \bigl(T(x_{1},x_{2},\ldots,x_{N}),T(x_{1},x_{2}, \ldots,x_{N}),\ldots,T(x_{1},x_{2}, \ldots,x_{N})\bigr). $$

As established in the proof of Theorem 4.4, \(T^{*}: X^{N}\rightarrow X^{N}\) is a nonexpansive mapping with a nonempty fixed point set

$$F\bigl(T^{*}\bigr)=\bigl\{ (p,p,\ldots,p)\in X^{N}: p \in F(T)\bigr\} . $$

By Reich’s result [27], for any given \(x_{0} \in X^{N}\), Mann’s iterative sequence

$$ x_{n+1}=\alpha_{n} x_{n}+(1-\alpha_{n})T^{*}x_{n}, \quad n\geq0, $$
(4.5)

converges weakly to a fixed point \(p^{*}=(p,p,\ldots,p) \in F(T^{*})\), where \(p \in F(T)\). Since \(X^{N}\) is a Hilbert space, for any \(y=(y_{1},y_{2},\ldots,y_{N}) \in X^{N}\), we have

$$\bigl\langle x_{n}-p^{*}, y\bigr\rangle ^{*}=\frac{1}{N}\sum _{i=1}^{N}\bigl\langle x^{n}_{i}-p,y_{i} \bigr\rangle \rightarrow0, \quad\mbox{as } n\rightarrow\infty. $$

Therefore, for any \(i\in\{1,2,3,\ldots,N \}\), choosing \(y=(0,\ldots, 0, y_{i},0,\ldots,0)\), with \(y_{i}\) in the ith position, we get

$$\bigl\langle x^{n}_{i}-p, y_{i}\bigr\rangle \rightarrow0 \quad\mbox{as } n\rightarrow \infty. $$

Hence \(\langle x^{n}_{i}, y_{i}\rangle\rightarrow\langle p, y_{i}\rangle\) as \(n\rightarrow\infty\), for any \(i\in\{1,2,3,\ldots,N \}\). This shows that the iterative sequences \(\{x^{n}_{i}\}\), \(i\in\{1,2,3,\ldots,N \}\), defined by (4.4) converge weakly to a multivariate fixed point p of T. This completes the proof. □
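As a concrete illustration of the scheme (4.4), the following sketch runs the iteration under assumptions made for the example only: \(X=\mathbb{R}\) and \(T(x_{1},\ldots,x_{N})=\frac{1}{2N}\sum_{i}x_{i}+1\), a contraction (hence N-variables nonexpansive) with unique multivariate fixed point \(p=2\); the constant choice \(\alpha_{n}=1/2\) satisfies \(\sum_{n=0}^{\infty}\alpha_{n}(1-\alpha_{n})=+\infty\).

```python
# Numerical sketch of the Mann-type scheme (4.4) on X = R with the illustrative
# mapping T(x_1,...,x_N) = (1/(2N)) * sum_i x_i + 1, whose unique multivariate
# fixed point is p = 2 (p = p/2 + 1).

N = 4
def T(x):                      # T: X^N -> X
    return sum(x) / (2 * N) + 1.0

alpha = 0.5                    # constant control sequence: sum alpha_n (1 - alpha_n) = +inf
x = [10.0, -3.0, 0.0, 7.0]     # initial point x_0 in X^N
for n in range(200):
    t = T(x)
    # scheme (4.4): x_i^{n+1} = alpha_n x_i^n + (1 - alpha_n) T(x_1^n,...,x_N^n)
    x = [alpha * x[i] + (1 - alpha) * t for i in range(N)]

print(all(abs(xi - 2.0) < 1e-8 for xi in x))  # each coordinate approaches p = 2
```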

Remark

The method presented above can be successfully applied to several other iterative schemes in order to prove weak and strong convergence theorems for the multivariate fixed points of N-variables nonexpansive type mappings.