1 Introduction

In 1975, Baillon [1] first proved the following nonlinear ergodic theorem.

Theorem 1.1

Suppose that C is a nonempty closed convex subset of a Hilbert space E and \(T:C\to C\) is a nonexpansive mapping such that \(F(T)\neq\emptyset\). Then, for every \(x\in C\), the Cesàro means

$$ T_{n}x=\frac{1}{n+1}\sum_{i=0}^{n}T^{i} x $$
(1.1)

converge weakly to a fixed point of T.
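A classical illustration of Theorem 1.1 is a rotation of the Hilbert space \(\mathbb{R}^{2}\): rotation by an angle that is not a multiple of 2π is nonexpansive with the origin as its unique fixed point, the orbit \(T^{i}x\) does not converge, yet the Cesàro means (1.1) do. The following minimal numerical sketch (illustrative only, not from the paper; the helper names `rotate` and `cesaro_mean` are ours) checks this:

```python
import math

def rotate(p, theta):
    """Apply the rotation T (a nonexpansive map on R^2) to the point p."""
    x, y = p
    return (math.cos(theta) * x - math.sin(theta) * y,
            math.sin(theta) * x + math.cos(theta) * y)

def cesaro_mean(x0, theta, n):
    """Compute T_n x = (1/(n+1)) * sum_{i=0}^{n} T^i x as in (1.1)."""
    sx = sy = 0.0
    p = x0
    for _ in range(n + 1):
        sx += p[0]
        sy += p[1]
        p = rotate(p, theta)
    return (sx / (n + 1), sy / (n + 1))

x0 = (1.0, 0.0)
theta = 1.0  # one radian: the only fixed point of T is the origin
mean = cesaro_mean(x0, theta, 10000)
# The orbit stays on the unit circle, but the Cesàro means approach 0.
print(math.hypot(*mean))
```

The geometric-sum bound \(\lvert\sum_{i=0}^{n}e^{\mathrm{i}i\theta}\rvert\le 1/\lvert\sin(\theta/2)\rvert\) shows the means decay like \(1/n\) in this example.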

In 1979, Bruck [2] extended Baillon's theorem on Cesàro's means from Hilbert spaces to uniformly convex Banach spaces with a Fréchet differentiable norm.

In 2007, Song and Chen [3] defined the following viscosity iteration \(\{x_{n}\}\) of Cesàro's means for a nonexpansive mapping T:

$$ x_{n+1}=\alpha_{n}f(x_{n})+(1-\alpha_{n}) \frac{1}{n+1}\sum_{i=0}^{n}T^{i}x_{n}, $$
(1.2)

and they proved that the sequence \(\{x_{n}\}\) converges strongly to some point of \(F(T)\) in a uniformly convex Banach space with a weakly sequentially continuous duality mapping.

In 2009, Yao et al. [4] introduced the following process \(\{x_{n}\}\) in a Banach space:

$$ x_{n+1}=\alpha_{n}u+\beta_{n}x_{n}+ \gamma_{n} Tx_{n},\quad n\geq0. $$
(1.3)

They proved that the sequence \(\{x_{n}\}\) converges strongly to a fixed point of T under suitable control conditions on the parameters.

In 2014, Zhu and Chen [5] proposed the following iterations with Cesàro’s means for nonexpansive mappings:

$$ x_{n+1}=\alpha_{n}u+\beta_{n}x_{n}+ \gamma_{n}\frac{1}{n+1}\sum_{i=0}^{n}T^{i}x_{n}, \quad n\geq0, $$
(1.4)

and the viscosity iteration:

$$ x_{n+1}=\alpha_{n}f(x_{n})+\beta_{n}x_{n}+ \gamma_{n}\frac{1}{n+1}\sum_{i=0}^{n}T^{i}x_{n}, \quad n\geq0. $$
(1.5)

They proved that both sequences \(\{x_{n}\}\) converge strongly to a fixed point of T in the framework of a uniformly smooth Banach space.

As is well known, extending a linear result of this kind (usually stated in Hilbert or Banach spaces) to metric spaces has its own importance. The iterative methods (1.1)-(1.5) above involve general convex combinations; to extend these results from Hilbert or Banach spaces to metric spaces, we need a convex structure on the metric space in order to investigate their convergence on a nonlinear domain.

On the other hand, the theory and applications of CAT(0) spaces have recently been studied extensively by many authors.

Recall that a metric space \((X, d)\) is called a CAT(0) space if it is geodesically connected and if every geodesic triangle in X is at least as ‘thin’ as its comparison triangle in the Euclidean plane. It is known that any complete, simply connected Riemannian manifold having non-positive sectional curvature is a CAT(0) space. Other examples of CAT(0) spaces include pre-Hilbert spaces (see [6]), R-trees (see [7]), Euclidean buildings (see [8]), the complex Hilbert ball with a hyperbolic metric (see [9]), and many others. A complete CAT(0) space is often called a Hadamard space. A subset K of a CAT(0) space X is convex if, for any \(x, y \in K\), we have \([x, y] \subset K\), where \([x, y]\) is the unique geodesic segment joining x and y. For a thorough discussion of CAT(0) spaces and of the fundamental role they play in geometry, we refer the reader to Bridson and Haefliger [6].

Fixed point theory in CAT(0) spaces was first studied by Kirk (see [10, 11]). He showed that every nonexpansive (single-valued) mapping defined on a bounded closed convex subset of a complete CAT(0) space always has a fixed point.

Motivated and inspired by the research going on in this direction, it is natural to pose the following.

Open question

Can we extend the above Cesàro nonlinear ergodic theorems for nonexpansive mappings in [1–5] from Hilbert or Banach spaces to CAT(0) spaces?

The purpose of this paper is to give an affirmative answer to this question. To solve this problem, we first propose the following new iterations with Cesàro's means for nonexpansive mappings in the setting of CAT(0) spaces:

$$ x_{n+1}=\alpha_{n} u \oplus\beta_{n}x_{n} \oplus\gamma_{n}\Biggl(\bigoplus_{i=0}^{n} \frac{1}{n+1}T^{i}x_{n}\Biggr), $$
(1.6)

and the viscosity iteration:

$$ x_{n+1}=\alpha_{n}f(x_{n})\oplus \beta_{n}x_{n}\oplus\gamma_{n}\Biggl(\bigoplus _{i=0}^{n}\frac{1}{n+1}T^{i}x_{n} \Biggr), $$
(1.7)

and then study the strong convergence of the iterative sequences (1.6) and (1.7). Under suitable conditions, some strong convergence theorems to a fixed point of the nonexpansive mapping are proved. We also prove that this fixed point is the unique solution of a certain variational inequality in CAT(0) spaces. The results presented in this paper are new; they extend and improve corresponding previous results.

Next we give some examples of the iterations (1.6) and (1.7) with Cesàro's means for nonexpansive mappings to illustrate the generality of our new iterations.

Example 1

If X is a real Hilbert space (or Banach space), and if \(\alpha_{n} = 0\), \(\beta_{n} = 0\), \(\forall n \ge1\), then the iteration (1.6) with Cesàro's means for nonexpansive mappings reduces to the iteration of Baillon [1] (or Bruck [2]).

Example 2

If X is a real Banach space and \(n =1\), then the iteration (1.6) with Cesàro's means for nonexpansive mappings reduces to the iteration of Yao et al. [4]. If \(n \ge2\), then the iteration (1.6) can be written as

$$x_{n+1}=\alpha_{n} u + \beta_{n}x_{n} + \gamma_{n}\Biggl(\sum_{i=0}^{n} \frac{1}{n+1}T^{i}x_{n}\Biggr), $$

which is a generalization of results due to Yao et al. [4].

Example 3

The iterations (1.6) and (1.7) are generalizations of the iterations of Zhu and Chen [5] from Banach spaces to CAT(0) spaces.

Example 4

Let \(E = \mathbb{R}\) with the usual metric, T be a nonexpansive mapping defined by \(Tx = \sin x\), and f be a contractive mapping defined by \(f(x) = \frac{x}{2}\). Let \(\alpha_{n} = \frac{1}{2n}\), \(\beta_{n} = \frac{(4n-3)}{4n}\), and \(\gamma_{n} = \frac{1}{4n}\), \(\forall n \ge1\). Then the iterations (1.6) and (1.7) with Cesàro's means for the nonexpansive mapping \(Tx = \sin x\) reduce to the following iterations, respectively:

$$\begin{aligned}& x_{n+1} = \frac{1}{2n}u + \frac{4n-3}{4n} x_{n} + \frac{1}{4n(n+1)} \bigl(x_{n} + Tx_{n} + T^{2} x_{n} + \cdots+ T^{n} x_{n}\bigr), \end{aligned}$$
(1.8)
$$\begin{aligned}& x_{n+1} = \frac{1}{2n}\cdot\frac{1}{2}x_{n} + \frac{4n-3}{4n} x_{n} + \frac {1}{4n(n+1)} \bigl(x_{n} + Tx_{n} + T^{2} x_{n} + \cdots+ T^{n} x_{n}\bigr). \end{aligned}$$
(1.9)
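The iterations (1.8) and (1.9) are straightforward to run numerically. The following sketch (illustrative code, not part of the original results; the helper names are ours) implements the viscosity iteration (1.9) with \(Tx=\sin x\) and \(f(x)=x/2\), starting from \(x_{1}=1\); the iterates decrease toward the unique fixed point \(x=0\) of T:

```python
import math

def cesaro_sum(x, n):
    """Return x + Tx + T^2 x + ... + T^n x for T = sin."""
    total, p = 0.0, x
    for _ in range(n + 1):
        total += p
        p = math.sin(p)
    return total

def viscosity_iteration(x1=1.0, steps=2000):
    """Run iteration (1.9): x_{n+1} = (1/(2n)) f(x_n) + ((4n-3)/(4n)) x_n
    + (1/(4n(n+1))) (x_n + Tx_n + ... + T^n x_n), with f(x) = x/2."""
    x = x1
    for n in range(1, steps + 1):
        x = (x / 2) / (2 * n) + (4 * n - 3) / (4 * n) * x \
            + cesaro_sum(x, n) / (4 * n * (n + 1))
    return x

print(viscosity_iteration())  # slowly approaches the fixed point 0
```

Since \(\sin x < x\) for \(x>0\), each step contracts by a factor of at most \(1-\frac{1}{4n}\), so convergence is slow (roughly like \(n^{-1/4}\) here) but monotone.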

Example 5

If \((X, d)\) is a CAT(0) space, \(n =1\), and \(\beta_{n} = 0\), then the iteration (1.7) with Cesàro's means for nonexpansive mappings reduces to the following iteration:

$$ x_{n+1}=\alpha_{n} f(x_{n}) \oplus \frac{\gamma_{n}}{2} x_{n} \oplus\frac {\gamma_{n}}{2} T x_{n}, $$
(1.10)

which is a generalization of the iterations in Wangkeeree and Preechasilp [12] from a Banach space to CAT(0) spaces.

2 Preliminaries and lemmas

In this paper, we write \((1-t)x\oplus ty\) for the unique point z in the geodesic segment joining from x to y such that

$$ d(x,z)=td(x,y), \qquad d(y,z)=(1-t)d(x,y). $$
(2.1)
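In a Euclidean (hence CAT(0)) space, \((1-t)x\oplus ty\) is simply the convex combination \((1-t)x+ty\). A quick sanity check of (2.1) in \(\mathbb{R}^{2}\) (illustrative only; the helper names are ours):

```python
import math

def combine(x, y, t):
    """The point (1-t)x ⊕ ty on the geodesic (straight segment) from x to y in R^2."""
    return tuple((1 - t) * xi + t * yi for xi, yi in zip(x, y))

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

x, y, t = (0.0, 0.0), (3.0, 4.0), 0.25
z = combine(x, y, t)
# (2.1): d(x, z) = t d(x, y) and d(y, z) = (1-t) d(x, y)
print(dist(x, z), t * dist(x, y))
```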

Lemma 2.1

[13]

A geodesic space \((X, d)\) is a CAT(0) space if and only if the inequality

$$ d^{2}\bigl((1-t)x\oplus ty,z\bigr)\leq(1-t)d^{2}(x,z)+td^{2}(y,z)-t(1-t)d^{2}(x,y) $$
(2.2)

is satisfied for all \(x,y,z\in X\) and \(t\in[0,1]\). In particular, if x, y, z are points in a CAT(0) space and \(t\in[0,1]\), then

$$d\bigl((1-t)x\oplus ty,z\bigr)\leq(1-t)d(x,z)+td(y,z). $$

Lemma 2.2

[6]

Let \((X, d)\) be a CAT(0) space, \(p,q,r,s\in X\), and \(\lambda\in[0,1]\). Then

$$ d\bigl(\lambda p\oplus(1-\lambda)q,\lambda r\oplus(1-\lambda)s\bigr)\leq\lambda d(p,r)+(1-\lambda)d(q,s). $$
(2.3)

By induction, we write

$$ \bigoplus_{m=1}^{n}\lambda_{m}x_{m} :=(1-\lambda_{n}) \biggl(\frac{\lambda_{1}}{1-\lambda_{n}}x_{1}\oplus \frac{\lambda_{2}}{1-\lambda_{n}}x_{2}\oplus\cdots\oplus \frac{\lambda_{n-1}}{1-\lambda_{n}}x_{n-1} \biggr)\oplus\lambda_{n}x_{n}. $$
(2.4)

Lemma 2.3

[14]

Let \((X, d)\) be a CAT(0) space. Then for any sequence \(\{\lambda_{i}\}_{i=1}^{n}\) in \([0,1]\) satisfying \(\sum_{i=1}^{n}\lambda_{i}=1\) and for any \(\{x_{i}\}_{i=1}^{n}\subset X\), the following conclusions hold:

$$ d\Biggl(\bigoplus_{i=1}^{n} \lambda_{i}x_{i},x\Biggr)\leq\sum_{i=1}^{n} \lambda_{i}d(x_{i},x),\quad x \in X; $$
(2.5)

and

$$d^{2}\Biggl(\bigoplus_{i=1}^{n} \lambda_{i}x_{i},x\Biggr) \leq\sum_{i=1}^{n} \lambda_{i}d^{2}(x_{i},x) -\lambda_{1} \lambda_{2}d^{2}(x_{1},x_{2}), \quad x \in X. $$

Lemma 2.4

Let \((X, d)\) be a CAT(0) space. Then for any sequence \(\{\lambda_{i}\}_{i=1}^{k}\) in \([0,1]\) satisfying \(\sum_{i=1}^{k}\lambda_{i}=1\) and for any \(\{x_{i}\}_{i=1}^{k}, \{y_{i}\}_{i=1}^{k}\subset X\), the following inequality holds:

$$ d\Biggl(\bigoplus_{i=1}^{k} \lambda_{i}x_{i},\bigoplus_{i=1}^{k} \lambda_{i}y_{i}\Biggr)\leq \sum_{i=1}^{k} \lambda_{i}d(x_{i},y_{i}). $$
(2.6)

Proof

It is obvious that (2.6) holds for \(k=2\). Suppose that (2.6) holds for some \(k\geq2\). Next we prove that (2.6) is also true for \(k+1\). From (2.3) and (2.4) we have

$$\begin{aligned} &d\Biggl(\bigoplus_{i=1}^{k+1} \lambda_{i}x_{i},\bigoplus_{i=1}^{k+1} \lambda _{i}y_{i}\Biggr) \\ &\quad=d\Biggl((1-\lambda_{k+1}) \Biggl(\bigoplus _{i=1}^{k}\frac{\lambda_{i}}{1-\lambda _{k+1}}x_{i}\Biggr) \oplus\lambda_{k+1}x_{k+1}, (1-\lambda_{k+1}) \Biggl( \bigoplus_{i=1}^{k}\frac{\lambda_{i}}{1-\lambda _{k+1}}y_{i} \Biggr)\oplus\lambda_{k+1}y_{k+1}\Biggr) \\ &\quad\leq(1-\lambda_{k+1})d\Biggl(\bigoplus _{i=1}^{k}\frac{\lambda_{i}}{1-\lambda_{k+1}}x_{i}, \bigoplus _{i=1}^{k}\frac{\lambda_{i}}{1-\lambda_{k+1}}y_{i} \Biggr)+\lambda _{k+1}d(x_{k+1},y_{k+1}) \\ &\quad\leq(1-\lambda_{k+1})\sum_{i=1}^{k} \frac{\lambda_{i}}{1-\lambda _{k+1}}d(x_{i},y_{i})+\lambda_{k+1}d(x_{k+1},y_{k+1}) \\ &\quad=\sum_{i=1}^{k+1}\lambda_{i}d(x_{i},y_{i}). \end{aligned}$$

This implies that (2.6) holds. □

Berg and Nikolaev [15] introduced the concept of quasilinearization as follows. Let us denote a pair \((a,b)\in X\times X\) by \(\overrightarrow{ab}\) and call it a vector. Then quasilinearization is defined as a map \(\langle\cdot,\cdot\rangle:(X\times X)\times(X\times X)\to\mathbb{R}\) defined by

$$ \langle\overrightarrow{ab},\overrightarrow{cd}\rangle =\frac{1}{2} \bigl(d^{2}(a,d)+d^{2}(b,c)-d^{2}(a,c)-d^{2}(b,d) \bigr)\quad (a,b,c,d\in X). $$
(2.7)

It is easy to see that \(\langle\overrightarrow{ab},\overrightarrow{cd}\rangle =\langle\overrightarrow{cd},\overrightarrow{ab}\rangle\), \(\langle\overrightarrow{ab},\overrightarrow{cd}\rangle =-\langle\overrightarrow{ba},\overrightarrow{cd}\rangle\), and \(\langle\overrightarrow{ax},\overrightarrow{cd}\rangle +\langle\overrightarrow{xb},\overrightarrow{cd}\rangle= \langle\overrightarrow{ab},\overrightarrow{cd}\rangle\) for all \(a,b,c,d\in X\). We say that X satisfies the Cauchy-Schwarz inequality if

$$ \langle\overrightarrow{ab},\overrightarrow{cd}\rangle\leq d(a,b)d(c,d) $$
(2.8)

for all \(a,b,c,d\in X\). It is well known [15] that a geodesically connected metric space is a CAT(0) space if and only if it satisfies the Cauchy-Schwarz inequality.
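In a Hilbert space the quasilinearization (2.7) reduces to the usual inner product: \(\langle\overrightarrow{ab},\overrightarrow{cd}\rangle=\langle b-a,d-c\rangle\). The following sketch (illustrative only; the helper names `quasi` and `dot` are ours) checks this identity and the Cauchy-Schwarz inequality (2.8) in \(\mathbb{R}^{2}\):

```python
import math
import random

def d2(p, q):
    """Squared Euclidean distance in R^2."""
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def quasi(a, b, c, d):
    """Quasilinearization (2.7): <ab, cd> = (d^2(a,d)+d^2(b,c)-d^2(a,c)-d^2(b,d))/2."""
    return 0.5 * (d2(a, d) + d2(b, c) - d2(a, c) - d2(b, d))

def dot(a, b, c, d):
    """In a Hilbert space, <ab, cd> equals the inner product <b-a, d-c>."""
    return (b[0] - a[0]) * (d[0] - c[0]) + (b[1] - a[1]) * (d[1] - c[1])

random.seed(0)
a, b, c, d = [tuple(random.uniform(-5, 5) for _ in range(2)) for _ in range(4)]
print(quasi(a, b, c, d), dot(a, b, c, d))  # the two values coincide
# Cauchy-Schwarz (2.8): <ab, cd> <= d(a, b) d(c, d)
print(quasi(a, b, c, d) <= math.sqrt(d2(a, b)) * math.sqrt(d2(c, d)))
```

Expanding the four squared distances cancels all squared-norm terms and leaves exactly \(2\langle b-a,d-c\rangle\), which is why the two functions agree.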

Let C be a nonempty closed convex subset of a complete CAT(0) space X. The metric projection \(P_{C}: X\to C\) is defined by

$$u=P_{C}(x)\quad\Leftrightarrow\quad d(u,x)=\inf\bigl\{ d(y,x):y\in C\bigr\} , \quad \forall x\in X. $$

Lemma 2.5

[16]

Let C be a nonempty convex subset of a complete CAT(0) space X, \(x\in X\), and \(u\in C\). Then \(u=P_{C}(x)\) if and only if

$$\langle\overrightarrow{yu},\overrightarrow{ux}\rangle\geq0,\quad \forall y\in C. $$

Lemma 2.6

([12])

Let C be a closed convex subset of a complete CAT(0) space X, and let \(T:C\to C\) be a nonexpansive mapping with \(F(T) \neq \emptyset\). Let f be a contraction on C with coefficient \(\alpha<1\). For each \(t\in[0,1]\), let \(\{x_{t}\}\) be the net defined by

$$ x_{t}=tf(x_{t})\oplus(1-t)Tx_{t}. $$
(2.9)

Then \(\lim_{t\to0}x_{t} =\tilde{x}\) exists, where \(\tilde{x}\in F(T)\) is the unique solution of the following variational inequality:

$$ \bigl\langle \overrightarrow{\tilde{x}f(\tilde{x})},\overrightarrow{x\tilde {x}} \bigr\rangle \geq0,\quad \forall x\in F(T). $$
(2.10)

Lemma 2.7

[12]

Let X be a complete CAT(0) space. Then, for any \(u,x,y\in X\), the following inequality holds:

$$d^{2}(x,u)\leq d^{2}(y,u)+2\langle\overrightarrow{xy}, \overrightarrow{xu}\rangle. $$

Lemma 2.8

[12]

Let X be a complete CAT(0) space. For any \(t\in[0,1]\) and \(u,v\in X\), let \(u_{t}=tu\oplus(1-t)v\). Then, for any \(x,y\in X\),

  1. (i)

    \(\langle\overrightarrow{u_{t}x},\overrightarrow{u_{t}y}\rangle\leq t\langle\overrightarrow{ux},\overrightarrow{u_{t}y}\rangle +(1-t)\langle\overrightarrow{vx},\overrightarrow{u_{t}y}\rangle\);

  2. (ii)

    \(\langle\overrightarrow{u_{t}x},\overrightarrow{uy}\rangle\leq t\langle\overrightarrow{ux},\overrightarrow{uy}\rangle +(1-t)\langle\overrightarrow{vx},\overrightarrow{uy}\rangle\), and \(\langle\overrightarrow{u_{t}x},\overrightarrow{vy}\rangle\leq t\langle\overrightarrow{ux},\overrightarrow{vy}\rangle +(1-t)\langle\overrightarrow{vx},\overrightarrow{vy}\rangle\).

Lemma 2.9

[17]

Let \(\{a_{n}\}\) be a sequence of non-negative real numbers satisfying the property \(a_{n+1}\leq(1-\alpha_{n})a_{n}+\alpha_{n}\beta_{n}\), \(n\geq0\), where \(\{\alpha_{n}\}\subset(0,1)\) and \(\{\beta_{n}\}\subset\mathbb{R}\) such that

  1. (i)

    \(\sum_{n=0}^{\infty}\alpha_{n}=\infty\);

  2. (ii)

    \(\limsup_{n\to\infty}\beta_{n}\leq0\) or \(\sum_{n=0}^{\infty}|\alpha_{n}\beta_{n}|<\infty\).

Then \(\{a_{n}\}\) converges to zero as \(n\to\infty\).
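Lemma 2.9 is the standard tool for deducing strong convergence in the proof of Theorem 3.1 below. A minimal numerical illustration (with sequences of our own choosing that satisfy (i) and (ii); not from the paper):

```python
def recursive_sequence(a0=1.0, steps=10000):
    """Simulate a_{n+1} = (1 - alpha_n) a_n + alpha_n beta_n with
    alpha_n = 1/(n+1), so sum alpha_n = infinity (condition (i)), and
    beta_n = 1/(n+1), so limsup beta_n = 0 <= 0 (condition (ii))."""
    a = a0
    for n in range(steps):
        alpha = 1.0 / (n + 1)
        beta = 1.0 / (n + 1)
        a = (1 - alpha) * a + alpha * beta
    return a

print(recursive_sequence())  # tends to 0, as Lemma 2.9 predicts
```

For these choices one can check \((n+1)a_{n+1} = na_{n} + \frac{1}{n+1}\), so \(a_{n}\approx(\ln n)/n\to0\).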

3 Approximative iterative algorithms

Throughout this section, we assume that C is a nonempty closed convex subset of a complete CAT(0) space X, and \(T:C\to C\) is a nonexpansive mapping with \(F(T)\neq\emptyset\). Let f be a contraction on C with coefficient \(k\in(0,1)\). Suppose \(\{\alpha_{n}\}\), \(\{\beta_{n}\}\), and \(\{\gamma_{n}\}\) are three real sequences in \((0,1)\) satisfying

  1. (i)

    \(\alpha_{n}+\beta_{n}+\gamma_{n}=1\), for all \(n\geq0\);

  2. (ii)

    \(\lim_{n\to\infty}\alpha_{n}=0\) and \(\sum_{n=0}^{\infty}\alpha_{n}=\infty\);

  3. (iii)

    \(\lim_{n\to\infty}\gamma_{n}=0\).

In the following, we first present two important results.

The first is to prove that the mapping \(S:=\bigoplus_{i=0}^{n}\frac{1}{n+1}T^{i} : C \to C\) is nonexpansive.

In fact, for any \(x,y\in C\), from Lemma 2.4 we get

$$\begin{aligned} d(Sx,Sy)&=d\Biggl(\bigoplus_{i=0}^{n} \frac{1}{n+1}T^{i}x,\bigoplus_{i=0}^{n} \frac{1}{n+1}T^{i}y\Biggr) \\ &\leq\sum_{i=0}^{n}\frac{1}{n+1}d \bigl(T^{i}x,T^{i}y\bigr) \\ &\leq d(x,y), \end{aligned}$$

i.e., S is nonexpansive.
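For instance, with \(X=\mathbb{R}\) and the nonexpansive map \(Tx=\sin x\) from Example 4, the nonexpansiveness of S can be observed numerically. A small illustrative sketch (the helper name `S` is ours):

```python
import math
import random

def S(x, n):
    """The Cesàro-mean mapping Sx = (1/(n+1)) sum_{i=0}^n T^i x with T = sin."""
    total, p = 0.0, x
    for _ in range(n + 1):
        total += p
        p = math.sin(p)
    return total / (n + 1)

random.seed(1)
n = 7
for _ in range(100):
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    # d(Sx, Sy) <= d(x, y), as guaranteed by Lemma 2.4
    assert abs(S(x, n) - S(y, n)) <= abs(x - y) + 1e-12
print("S is nonexpansive on all sampled pairs")
```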

The second is to prove that the following mapping \(T_{t,n}: C \to C\) is contractive:

$$T_{t,n}x=\frac{(1-\alpha_{n})t}{\gamma_{n}+t\beta_{n}}f(x) \oplus\frac{(1-t)\gamma_{n}}{\gamma_{n}+t\beta_{n}}\Biggl( \bigoplus_{i=0}^{n}\frac {1}{n+1}T^{i}x \Biggr). $$

In fact, the mapping \(T_{t,n}\) can be written as

$$T_{t,n}x = \lambda_{n}f(x)\oplus(1-\lambda_{n}) \Biggl(\bigoplus_{i=0}^{n} \frac {1}{n+1}T^{i} x\Biggr),\quad x \in C, $$

where \(\lambda_{n}=\frac{(1-\alpha_{n})t}{\gamma_{n}+t\beta_{n}}\). This mapping has a form similar to (2.9). Hence, for any \(x,y\in C\), from Lemma 2.4 we have

$$\begin{aligned} &d\bigl(T_{t,n}(x),T_{t,n}(y)\bigr) \\ &\quad=d\Biggl(\lambda_{n}f(x)\oplus(1-\lambda_{n})\bigoplus _{i=0}^{n}\frac {1}{n+1}T^{i}x, \lambda_{n}f(y)\oplus(1-\lambda_{n})\bigoplus _{i=0}^{n}\frac {1}{n+1}T^{i}y\Biggr) \\ &\quad\leq\lambda_{n}d\bigl(f(x),f(y)\bigr)+(1-\lambda_{n})d \Biggl(\bigoplus_{i=0}^{n} \frac {1}{n+1}T^{i}x,\bigoplus_{i=0}^{n} \frac{1}{n+1}T^{i}y\Biggr) \\ &\quad\leq\lambda_{n}k d(x,y)+\frac{1-\lambda_{n}}{n+1}\sum _{i=0}^{n}d\bigl(T^{i}x,T^{i}y \bigr) \\ &\quad\leq\lambda_{n}k d(x,y)+(1-\lambda_{n})d(x,y) \\ &\quad=\bigl(1-\lambda_{n}(1-k)\bigr)d(x,y), \end{aligned}$$

i.e., \(T_{t,n}\) is a contractive mapping. Hence it has a unique fixed point (denoted by \(z_{t,n}\)), i.e.,

$$z_{t,n}=\lambda_{n} f(z_{t,n}) \oplus(1- \lambda_{n}) \Biggl(\bigoplus_{i=0}^{n} \frac{1}{n+1}T^{i} (z_{t,n})\Biggr). $$

From Lemma 2.6, for each given \(n \ge1\), we have

$$ \lim_{t\to0}z_{t,n}=x_{n}\in F\Biggl( \bigoplus_{i=0}^{n}\frac{1}{n+1}T^{i} \Biggr), $$
(3.1)

and it is the unique solution of the following variational inequality:

$$ \bigl\langle \overrightarrow{x_{n} f(x_{n})}, \overrightarrow{x x_{n}}\bigr\rangle \geq 0,\quad \forall x\in F\Biggl( \bigoplus_{i=0}^{n}\frac{1}{n+1}T^{i} \Biggr). $$
(3.2)

Next we prove that for each \(n \ge1\)

$$ F(T) = F\Biggl(\bigoplus_{i=0}^{n} \frac{1}{n+1}T^{i}\Biggr). $$
(3.3)

In fact, if \(x \in F(T)\), it is obvious that \(x \in F(\bigoplus_{i=0}^{n}\frac{1}{n+1}T^{i})\) for each \(n\ge1\). This implies that \(F(T) \subset F(\bigoplus_{i=0}^{n}\frac{1}{n+1}T^{i})\) for each \(n \ge1\).

We now prove by induction that

$$ F\Biggl(\bigoplus_{i=0}^{n} \frac{1}{n+1}T^{i}\Biggr) \subset F(T) \quad\mbox{for each } n \ge1. $$
(3.4)

Indeed, if \(n =1\), and \(x \in F(\bigoplus_{i=0}^{1}\frac{1}{2}T^{i})\), i.e., \(x = \frac{1}{2} x \oplus\frac{1}{2}Tx\), then we have

$$ \frac{1}{2} x \oplus\frac{1}{2}x = \frac{1}{2} x \oplus\frac{1}{2}Tx. $$
(3.5)

Since the geodesic segment joining any two points in a CAT(0) space is unique, it follows from (3.5) that \(x=Tx\), i.e., \(x \in F(T)\).

Suppose that (3.4) is true for some \(n \ge1\); we now prove that (3.4) is also true for \(n +1\).

Indeed, suppose \(x \in F(\bigoplus_{i=0}^{n+1}\frac{1}{n+2}T^{i})\), i.e.,

$$ x = \bigoplus_{i=0}^{n+1} \frac{1}{n+2}T^{i} x. $$
(3.6)

It is easy to see that (3.6) can be rewritten as

$$ \frac{n+1}{n+2}x \oplus\frac{1}{n+2}x = \frac{n+1}{n+2} \biggl\{ \frac {x}{n+1}\oplus\frac{Tx}{n+1} \oplus \cdots \oplus \frac{T^{n} x}{n+1}\biggr\} \oplus \frac{1}{n+2}T^{n+1} x. $$
(3.7)

Since the geodesic segment joining any two points in a CAT(0) space is unique, from (3.7) we have

$$\frac{n+1}{n+2}x = \frac{n+1}{n+2}\biggl\{ \frac{x}{n+1}\oplus \frac {Tx}{n+1} \oplus\cdots \oplus \frac{T^{n} x}{n+1}\biggr\} \quad \mbox{and} \quad \frac{1}{n+2}x = \frac{1}{n+2}T^{n+1} x. $$

This implies that

$$ x = \frac{x}{n+1}\oplus\frac{Tx}{n+1} \oplus \cdots \oplus \frac{T^{n} x}{n+1} \quad \mbox{and} \quad x = T^{n+1} x. $$
(3.8)

By the induction hypothesis, from (3.8) we have \(x =Tx\). The conclusion (3.4) is proved. Therefore the conclusion (3.3) is also proved.
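The identity (3.3) can also be observed numerically in the setting of Example 4: with \(Tx=\sin x\) on \(\mathbb{R}\), both T and every Cesàro-mean mapping have 0 as their unique fixed point. A small illustrative check (the helper name `cesaro_map` is ours):

```python
import math

def cesaro_map(x, n):
    """S_n x = (1/(n+1)) sum_{i=0}^n T^i x with T = sin."""
    total, p = 0.0, x
    for _ in range(n + 1):
        total += p
        p = math.sin(p)
    return total / (n + 1)

# x = 0 is fixed by T and by every Cesàro-mean mapping S_n
for n in range(1, 6):
    assert cesaro_map(0.0, n) == 0.0
# For x > 0, sin x < x, so S_n x < x and S_n has no other positive fixed point
for n in range(1, 6):
    assert cesaro_map(0.5, n) < 0.5
print("F(T) and F(S_n) share the unique fixed point 0")
```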

By using (3.3), (3.1) can be written as

$$ \lim_{t\to0}z_{t,n}=x_{n}\in F(T), $$
(3.9)

and \(x_{n}\) is the unique solution of the following variational inequality:

$$ \bigl\langle \overrightarrow{x_{n} f(x_{n})}, \overrightarrow{x x_{n}}\bigr\rangle \geq 0,\quad \forall x\in F(T). $$
(3.10)

Next we prove that for any positive integers m, n, \(x_{n} = x_{m}\) (where \(x_{n}\) is the limit in (3.9)). Therefore (3.9) can be written as

$$ \lim_{t\to0}z_{t,n}=x_{n} = \tilde{x}\in F(T), $$
(3.11)

where \(\tilde{x}\) is some point, and it is the unique solution of the following variational inequality:

$$ \bigl\langle \overrightarrow{\tilde{x} f(\tilde{x})},\overrightarrow{x \tilde {x}}\bigr\rangle \geq0,\quad \forall x\in F(T). $$
(3.12)

In fact, it follows from (3.10) that

$$\begin{aligned}& \bigl\langle \overrightarrow{x_{n} f(x_{n})}, \overrightarrow{x x_{n}}\bigr\rangle \geq 0,\quad \forall x\in F(T), \\& \bigl\langle \overrightarrow{x_{m} f(x_{m})}, \overrightarrow{y x_{m}}\bigr\rangle \geq 0,\quad \forall y\in F(T). \end{aligned}$$

Taking \(x = x_{m}\) in the first inequality and \(y = x_{n}\) in the second, and then adding the two resulting inequalities, we have

$$\begin{aligned} 0\le{}&\bigl\langle \overrightarrow{x_{n} f(x_{n})}, \overrightarrow{x_{m} x_{n}}\bigr\rangle - \bigl\langle \overrightarrow{x_{m} f(x_{m})},\overrightarrow{x_{m} x_{n}}\bigr\rangle \\ \le{}&\bigl\langle \overrightarrow{x_{n} f(x_{m})}, \overrightarrow{x_{m} x_{n}}\bigr\rangle + \bigl\langle \overrightarrow{ f(x_{m})f(x_{n})},\overrightarrow {x_{m} x_{n}}\bigr\rangle \\ &{} - \langle\overrightarrow{x_{m} x_{n}}, \overrightarrow{x_{m} x_{n}}\rangle - \bigl\langle \overrightarrow{x_{n} f(x_{m})},\overrightarrow{x_{m} x_{n}}\bigr\rangle \\ ={}& \bigl\langle \overrightarrow{ f(x_{m})f(x_{n})}, \overrightarrow{x_{m} x_{n}}\bigr\rangle - \langle \overrightarrow{x_{m} x_{n}},\overrightarrow{x_{m} x_{n}}\rangle \\ \le{}& d\bigl( f(x_{m}), f(x_{n})\bigr) d(x_{m}, x_{n}) - d^{2}(x_{m}, x_{n}) \\ \le{}& k d^{2}(x_{m}, x_{n}) - d^{2}(x_{m}, x_{n}) = (k-1)d^{2}(x_{m}, x_{n}). \end{aligned}$$

Since \(k\in(0, 1)\), this implies that \(d(x_{m}, x_{n}) = 0\), i.e., \(x_{m} = x_{n}\), \(\forall m, n \ge1\).

The conclusion is proved.

We are now in a position to propose the iterative algorithms with Cesàro’s means for a nonexpansive mapping in a complete CAT(0) space X, and prove that the sequence generated by the algorithms converges strongly to a fixed point of the nonexpansive mapping which is also a unique solution of some kind of variational inequality.

Theorem 3.1

Let C be a closed convex subset of a complete CAT(0) space X, and \(T:C\to C\) be a nonexpansive mapping with \(F(T)\neq\emptyset\). Let f be a contraction on C with coefficient \(k\in(0,\frac{1}{2})\). Suppose \(x_{0}\in C\) and the sequence \(\{x_{n}\}\) is given by

$$ x_{n+1}=\alpha_{n}f(x_{n})\oplus \beta_{n}x_{n}\oplus\gamma_{n}\Biggl(\bigoplus _{i=0}^{n}\frac{1}{n+1}T^{i}x_{n} \Biggr), $$
(3.13)

where \(T^{0}=I\), \(\{\alpha_{n}\}\), \(\{\beta_{n}\}\), and \(\{\gamma_{n}\}\) are three real sequences in (0,1) satisfying conditions (i)-(iii). Then \(\{x_{n}\}\) converges strongly to \(\tilde{x}\) such that \(\tilde{x}=P_{F(T)}f(\tilde{x})\), which is equivalent to the following variational inequality:

$$ \bigl\langle \overrightarrow{\tilde{x}f(\tilde{x})},\overrightarrow{x\tilde {x}} \bigr\rangle \geq0,\quad \forall x\in F(T). $$
(3.14)

Proof

We first show that the sequence \(\{x_{n}\}\) is bounded. For any \(p\in F(T)\), we have

$$\begin{aligned} d(x_{n+1},p)&=d\Biggl(\alpha_{n}f(x_{n})\oplus \beta_{n}x_{n}\oplus\gamma_{n}\Biggl(\bigoplus _{i=0}^{n}\frac{1}{n+1}T^{i}x_{n} \Biggr),p\Biggr) \\ &\leq\alpha_{n}d\bigl(f(x_{n}),p\bigr)+\beta_{n}d(x_{n},p)+ \gamma_{n}d\Biggl(\bigoplus_{i=0}^{n} \frac{1}{n+1}T^{i}x_{n},p\Biggr) \\ &\leq\alpha_{n}\bigl(d\bigl(f(x_{n}),f(p)\bigr)+d \bigl(f(p),p\bigr)\bigr)+\beta_{n}d(x_{n},p)+ \gamma_{n}\sum_{i=0}^{n} \frac{1}{n+1}d\bigl(T^{i}x_{n},T^{i}p\bigr) \\ &\leq\alpha_{n}kd(x_{n},p)+\alpha_{n}d\bigl(f(p),p \bigr)+\beta_{n}d(x_{n},p)+\gamma _{n}d(x_{n},p) \\ &=\bigl(1-\alpha_{n}(1-k)\bigr)d(x_{n},p)+ \alpha_{n}(1-k)\cdot\frac{1}{1-k}d\bigl(f(p),p\bigr) \\ &\leq\max\biggl\{ d(x_{n},p),\frac{1}{1-k}d\bigl(f(p),p\bigr) \biggr\} . \end{aligned}$$

By induction, we have

$$d(x_{n},p)\leq\max\biggl\{ d(x_{0},p),\frac{1}{1-k}d \bigl(f(p),p\bigr)\biggr\} $$

for all \(n\geq0\). Hence \(\{x_{n}\}\) is bounded, and so are \(\{f(x_{n})\}\) and \(\{T^{i}(x_{n})\}\).

Let \(\{z_{t,n}\}\) be a sequence in C such that

$$z_{t,n}=\frac{(1-\alpha_{n})t}{\gamma_{n}+t\beta_{n}}f(z_{t,n}) \oplus\frac{(1-t)\gamma_{n}}{\gamma_{n}+t\beta_{n}} \Biggl(\bigoplus_{i=0}^{n} \frac {1}{n+1}T^{i}z_{t,n}\Biggr). $$

It follows from (3.11) that \(\{z_{t,n}\}\) converges strongly to a fixed point \(\tilde{x}\in F(T)\), which is also the unique solution of the variational inequality (3.14).

Now we claim that

$$ \limsup_{n\to\infty}\bigl\langle \overrightarrow{f(\tilde{x}) \tilde{x}}, \overrightarrow{x_{n+1}\tilde{x}}\bigr\rangle \leq0. $$
(3.15)

It follows from Lemma 2.8 and Lemma 2.4 that

$$\begin{aligned} &d^{2}(z_{t,n},x_{n+1}) \\ &\quad=\langle \overrightarrow{z_{t,n}x_{n+1}},\overrightarrow {z_{t,n}x_{n+1}}\rangle \\ &\quad\leq\frac{(1-\alpha_{n})t}{\gamma_{n}+t\beta_{n}}\bigl\langle \overrightarrow {f(z_{t,n})x_{n+1}}, \overrightarrow{z_{t,n}x_{n+1}}\bigr\rangle +\frac{(1-t)\gamma_{n}}{\gamma_{n}+t\beta_{n}} \Biggl\langle \overrightarrow{\bigoplus_{i=0}^{n} \frac{1}{n+1}T^{i}z_{t,n}x_{n+1}},\overrightarrow {z_{t,n}x_{n+1}}\Biggr\rangle \\ &\quad=\frac{(1-\alpha_{n})t}{\gamma_{n}+t\beta_{n}}\bigl[\bigl\langle \overrightarrow {f(z_{t,n})f( \tilde{x})},\overrightarrow{z_{t,n}x_{n+1}}\bigr\rangle +\bigl\langle \overrightarrow{f(\tilde{x})\tilde{x}},\overrightarrow {z_{t,n}x_{n+1}} \bigr\rangle \\ &\qquad{}+\langle\overrightarrow{\tilde{x}z_{t,n}},\overrightarrow {z_{t,n}x_{n+1}}\rangle+\langle\overrightarrow {z_{t,n}x_{n+1}},\overrightarrow{z_{t,n}x_{n+1}} \rangle\bigr] \\ &\qquad{}+\frac{(1-t)\gamma_{n}}{\gamma_{n}+t\beta_{n}} \Biggl[\Biggl\langle \overrightarrow{\bigoplus _{i=0}^{n}\frac {1}{n+1}T^{i}z_{t,n} \bigoplus_{i=0}^{n}\frac {1}{n+1}T^{i}x_{n+1}}, \overrightarrow{z_{t,n}x_{n+1}}\Biggr\rangle \\ &\qquad{}+\Biggl\langle \overrightarrow{\bigoplus_{i=0}^{n} \frac {1}{n+1}T^{i}x_{n+1}x_{n+1}}, \overrightarrow{z_{t,n}x_{n+1}}\Biggr\rangle \Biggr] \\ &\quad\leq\frac{(1-\alpha_{n})t}{\gamma_{n}+t\beta_{n}}\bigl[k d(z_{t,n},\tilde{x})d(z_{t,n},x_{n+1}) +\bigl\langle \overrightarrow{f(\tilde{x})\tilde{x}},\overrightarrow {z_{t,n}x_{n+1}}\bigr\rangle \\ &\qquad{}+d(\tilde{x},z_{t,n})d(z_{t,n},x_{n+1})+d^{2}(z_{t,n},x_{n+1})\bigr] \\ &\qquad{}+\frac{(1-t)\gamma_{n}}{\gamma_{n}+t\beta_{n}} \Biggl[d\Biggl(\bigoplus _{i=0}^{n}\frac{1}{n+1}T^{i}z_{t,n}, \bigoplus_{i=0}^{n}\frac {1}{n+1}T^{i}x_{n+1} \Biggr)d(z_{t,n},x_{n+1}) \\ &\qquad{}+d\Biggl(\bigoplus _{i=0}^{n}\frac {1}{n+1}T^{i}x_{n+1},x_{n+1} \Biggr)d(z_{t,n},x_{n+1})\Biggr] \\ &\quad\leq\frac{(1-\alpha_{n})t}{\gamma_{n}+t\beta_{n}} \bigl[k d(z_{t,n},\tilde{x})d(z_{t,n},x_{n+1}) +\bigl\langle 
\overrightarrow{f(\tilde{x})\tilde{x}},\overrightarrow {z_{t,n}x_{n+1}}\bigr\rangle \\ &\qquad{}+d(\tilde{x},z_{t,n})d(z_{t,n},x_{n+1}) +d^{2}(z_{t,n},x_{n+1})\bigr] \\ &\qquad{}+\frac{(1-t)\gamma_{n}}{\gamma_{n}+t\beta_{n}} \Biggl[d^{2}(z_{t,n},x_{n+1})+ \frac{1}{n+1}\sum_{i=0}^{n}d \bigl(T^{i}x_{n+1},x_{n+1}\bigr)d(z_{t,n},x_{n+1}) \Biggr] \\ &\quad\leq\frac{(1-\alpha_{n})t}{\gamma_{n}+t\beta_{n}}\bigl[M k d(z_{t,n},\tilde{x})+\bigl\langle \overrightarrow{f(\tilde{x})\tilde {x}},\overrightarrow{z_{t,n}x_{n+1}} \bigr\rangle +Md(\tilde{x},z_{t,n})\bigr]\\ &\qquad{}+d^{2}(z_{t,n},x_{n+1})+ \frac{(1-t)\gamma _{n}MN}{\gamma_{n}+t\beta_{n}}, \end{aligned}$$

where

$$M\geq\sup\bigl\{ d(z_{t,n},x_{n+1}),n\geq0,0< t< 1\bigr\} ,\qquad N\geq\sup\bigl\{ d\bigl(T^{i}x_{n+1},x_{n+1}\bigr),n \geq0,i\geq0\bigr\} . $$

This implies that

$$ \bigl\langle \overrightarrow{f(\tilde{x})\tilde{x}},\overrightarrow {x_{n+1}z_{t,n}}\bigr\rangle \leq(1+k)Md(z_{t,n}, \tilde{x})+\frac{(1-t)\gamma_{n}MN}{(1-\alpha_{n})t}. $$
(3.16)

Taking the upper limit as \(n\to\infty\) first, and then taking the upper limit as \(t\to0\), we get

$$ \limsup_{t\to0}\limsup_{n\to\infty} \bigl\langle \overrightarrow{f(\tilde{x})\tilde{x}},\overrightarrow {x_{n+1}z_{t,n}} \bigr\rangle \leq0. $$
(3.17)

Note that

$$\begin{aligned} \bigl\langle \overrightarrow{f(\tilde{x})\tilde{x}},\overrightarrow {x_{n+1}\tilde{x}}\bigr\rangle &=\bigl\langle \overrightarrow{f( \tilde{x})\tilde{x}},\overrightarrow {x_{n+1}z_{t,n}}\bigr\rangle +\bigl\langle \overrightarrow{f(\tilde{x})\tilde{x}},\overrightarrow {z_{t,n}\tilde{x}}\bigr\rangle \\ &\leq\bigl\langle \overrightarrow{f(\tilde{x})\tilde{x}},\overrightarrow {x_{n+1}z_{t,n}}\bigr\rangle +d\bigl(f(\tilde{x}),\tilde{x} \bigr)d(z_{t,n},\tilde{x}). \end{aligned}$$

Thus, by taking the upper limit as \(n\to\infty\) first, and then taking the upper limit as \(t\to0\), it follows from \(z_{t,n}\to\tilde{x}\) and (3.17) that

$$\limsup_{t\to0}\limsup_{n\to\infty} \bigl\langle \overrightarrow{f(\tilde{x})\tilde{x}},\overrightarrow {x_{n+1}\tilde{x}} \bigr\rangle \leq0. $$

Hence

$$\limsup_{n\to\infty} \bigl\langle \overrightarrow{f(\tilde{x}) \tilde{x}},\overrightarrow {x_{n+1}\tilde{x}}\bigr\rangle \leq0. $$

Finally, we prove that \(x_{n}\to\tilde{x}\) as \(n\to\infty\). In fact, for any \(n\geq0\), let

$$ \left \{ \textstyle\begin{array}{@{}l} u_{n}: = \alpha_{n}\tilde{x}\oplus(1-\alpha_{n})y_{n},\\ y_{n}: = \frac{\beta_{n}}{1-\alpha_{n}}x_{n}\oplus\frac{\gamma_{n}}{1-\alpha _{n}}(\bigoplus_{i=0}^{n}\frac{1}{n+1}T^{i}x_{n}). \end{array}\displaystyle \right . $$
(3.18)

From Lemma 2.7 and Lemma 2.8 we have

$$\begin{aligned} &d^{2}(x_{n+1},\tilde{x}) \\ &\quad\leq d^{2}(u_{n},\tilde{x}) +2\langle \overrightarrow{x_{n+1}u_{n}},\overrightarrow{x_{n+1} \tilde {x}}\rangle \\ &\quad\leq(1-\alpha_{n})^{2}d^{2}(y_{n}, \tilde{x}) +2\bigl[\alpha_{n}\bigl\langle \overrightarrow{f(x_{n})u_{n}}, \overrightarrow {x_{n+1}\tilde{x}}\bigr\rangle +(1-\alpha_{n})\langle\overrightarrow{y_{n}u_{n}}, \overrightarrow {x_{n+1}\tilde{x}}\rangle\bigr] \\ &\quad\leq(1-\alpha_{n})^{2}d^{2}(x_{n}, \tilde{x}) +2\bigl[\alpha_{n}^{2}\bigl\langle \overrightarrow{f(x_{n})\tilde{x}},\overrightarrow {x_{n+1} \tilde{x}}\bigr\rangle \\ &\qquad{}+\alpha_{n}(1-\alpha_{n})\bigl\langle \overrightarrow{f(x_{n})y_{n}},\overrightarrow {x_{n+1}\tilde{x}}\bigr\rangle +\alpha_{n}(1- \alpha_{n})\langle\overrightarrow{y_{n}\tilde {x}}, \overrightarrow{x_{n+1}\tilde{x}}\rangle +(1-\alpha_{n})^{2}\langle\overrightarrow{y_{n}y_{n}}, \overrightarrow {x_{n+1}\tilde{x}}\rangle\bigr] \\ &\quad= (1-\alpha_{n})^{2}d^{2}(x_{n}, \tilde{x}) +2\bigl[\alpha_{n}^{2}\bigl\langle \overrightarrow{f(x_{n})\tilde{x}},\overrightarrow {x_{n+1} \tilde{x}}\bigr\rangle +\alpha_{n}(1-\alpha_{n})\bigl\langle \overrightarrow{f(x_{n})\tilde {x}},\overrightarrow{x_{n+1} \tilde{x}}\bigr\rangle \bigr] \\ &\quad=(1-\alpha_{n})^{2}d^{2}(x_{n}, \tilde{x}) +2\alpha_{n}\bigl\langle \overrightarrow{f(x_{n}) \tilde{x}},\overrightarrow {x_{n+1}\tilde{x}}\bigr\rangle \\ &\quad=(1-\alpha_{n})^{2}d^{2}(x_{n}, \tilde{x}) +2\alpha_{n}\bigl\langle \overrightarrow{f(x_{n})f( \tilde{x})},\overrightarrow {x_{n+1}\tilde{x}}\bigr\rangle +2 \alpha_{n}\bigl\langle \overrightarrow{f(\tilde{x})\tilde{x}}, \overrightarrow {x_{n+1}\tilde{x}}\bigr\rangle \\ &\quad\leq(1-\alpha_{n})^{2}d^{2}(x_{n}, \tilde{x})+2\alpha_{n}kd(x_{n},\tilde {x})d(x_{n+1}, \tilde{x}) +2\alpha_{n}\bigl\langle \overrightarrow{f(\tilde{x}) \tilde{x}},\overrightarrow {x_{n+1}\tilde{x}}\bigr\rangle \\ &\quad\leq(1-\alpha_{n})^{2}d^{2}(x_{n}, \tilde{x})+\alpha_{n}k\bigl(d^{2}(x_{n},\tilde 
{x})+d^{2}(x_{n+1},\tilde{x})\bigr) +2\alpha_{n} \bigl\langle \overrightarrow{f(\tilde{x})\tilde{x}},\overrightarrow {x_{n+1}\tilde{x}}\bigr\rangle . \end{aligned}$$

This implies that

$$\begin{aligned} d^{2}(x_{n+1},\tilde{x}) &\leq\frac{1-(2-k)\alpha_{n}+\alpha_{n}^{2}}{1-\alpha_{n}k}d^{2}(x_{n}, \tilde{x}) +\frac{2\alpha_{n}}{1-\alpha_{n}k}\bigl\langle \overrightarrow{f(\tilde{x})\tilde{x}}, \overrightarrow{x_{n+1}\tilde{x}}\bigr\rangle \\ &=\biggl(1-\frac{\alpha_{n}(2-2k-\alpha_{n})}{1-\alpha_{n}k}\biggr)d^{2}(x_{n},\tilde{x}) + \frac{2\alpha_{n}}{1-\alpha_{n}k}\bigl\langle \overrightarrow{f(\tilde{x})\tilde{x}}, \overrightarrow{x_{n+1}\tilde{x}}\bigr\rangle . \end{aligned}$$

Then it follows that

$$d^{2}(x_{n+1},\tilde{x})\leq\bigl(1-\alpha_{n}^{\prime} \bigr)d^{2}(x_{n},\tilde{x})+\alpha _{n}^{\prime} \beta_{n}^{\prime}, $$

where

$$\alpha_{n}^{\prime}=\frac{\alpha_{n}(2-2k-\alpha_{n})}{1-\alpha_{n}k},\qquad \beta_{n}^{\prime}= \frac{2}{2-2k-\alpha_{n}} \bigl\langle \overrightarrow{f(\tilde{x})\tilde{x}}, \overrightarrow {x_{n+1}\tilde{x}}\bigr\rangle . $$

Applying Lemma 2.9, we can conclude that \(x_{n}\to\tilde{x}\) as \(n\to\infty\). This completes the proof. □

Letting \(f(x_{n})\equiv u\) for all \(n\in\mathbb{N}\), the following theorem can be obtained from Theorem 3.1 immediately.

Theorem 3.2

Let C be a closed convex subset of a complete CAT(0) space X, and let \(T:C\to C\) be a nonexpansive mapping satisfying \(F(T)\neq\emptyset\). For arbitrary \(x_{0}\in C\) and a fixed element \(u\in C\), the sequence \(\{x_{n}\}\) is given by

$$ x_{n+1}=\alpha_{n}u\oplus\beta_{n}x_{n} \oplus\gamma_{n}\Biggl(\bigoplus_{i=0}^{n} \frac {1}{n+1}T^{i}x_{n}\Biggr), $$
(3.19)

where \(T^{0}=I\), \(\{\alpha_{n}\}\), \(\{\beta_{n}\}\), and \(\{\gamma_{n}\}\) are three real sequences in (0,1) satisfying conditions (i)-(iii). Then \(\{x_{n}\}\) converges strongly to \(\tilde{x}\) such that \(\tilde{x}=P_{F(T)}u\), which is equivalent to the following variational inequality:

$$ \langle\overrightarrow{\tilde{x}u},\overrightarrow{x\tilde{x}}\rangle \geq0, \quad \forall x\in F(T). $$
(3.20)

Remark 3.3

Theorem 3.1 and Theorem 3.2 are new results. The proofs are simple and differ from many others; they extend the Cesàro nonlinear ergodic theorems for nonexpansive mappings of Baillon [1], Bruck [2], Song and Chen [3], Yao et al. [4], and Zhu and Chen [5] from Hilbert or Banach spaces to CAT(0) spaces. Theorem 3.1 is also a generalization of the results of Wangkeeree and Preechasilp [12].