1 Introduction

Let C be a nonempty subset of a metric space \((X,d)\). Suppose that, for each \(x \in X\), there exists a unique point \(P_{C}x \in C\) such that \(d(x,P_{C}x)=d(x,C)=\inf_{y \in C}d(x,y)\). Then, the mapping \(P_{C}\) of X onto C is called the metric projection.

The well-known Banach contraction principle is an important tool in the theory of metric spaces; it guarantees the existence and uniqueness of fixed points of certain self-mappings of complete metric spaces. One generalization of the contraction principle to weak contractions was obtained by Alber and Guerre-Delabriere [1] in Hilbert spaces. A mapping \(f:X \to X\) is called a φ-weak contraction if

$$ d\bigl(f(x),f(y)\bigr) \le d(x,y)-\varphi\bigl(d(x,y)\bigr),\quad x,y \in X, $$

where \(\varphi:[0,\infty) \to[0,\infty)\) is a continuous and nondecreasing function with \(\varphi(t)=0\) if and only if \(t=0\).
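
For example, taking \(\varphi(t)=(1-k)t\) for a fixed \(k \in[0,1)\) shows that every Banach contraction with constant k is a φ-weak contraction, since the inequality above then reads

$$ d\bigl(f(x),f(y)\bigr) \le d(x,y)-(1-k)d(x,y)=k\,d(x,y), \quad x,y \in X. $$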

Let \(T:C \to X\) be a mapping. If \(d(Tx,Ty) \le d(x,y)\) for all \(x,y \in C\), then T is said to be nonexpansive. We denote by \(\mathfrak{F}(T)\) the set of fixed points of T. The mapping T is quasinonexpansive if \(\mathfrak{F}(T)\) is nonempty and

$$d(Tx,y) \le d(x,y), \quad x \in C, y \in\mathfrak{F}(T). $$
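
Every nonexpansive mapping with a fixed point is clearly quasinonexpansive, but not conversely: a standard example is \(X=C=\mathbb{R}\) with \(Tx=\frac{x}{2}\sin\frac{1}{x}\) for \(x \ne0\) and \(T0=0\). Then \(\mathfrak{F}(T)=\{0\}\) and \(|Tx-0| \le\frac{|x|}{2} \le|x-0|\), so T is quasinonexpansive, whereas T is not nonexpansive (take, e.g., \(x=1/(2n\pi)\) and \(y=1/(2n\pi+\pi/2)\), for which \(|Tx-Ty|=2n|x-y|\)).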

A point \(p \in C\) is said to be a strongly asymptotic fixed point [2] of T if there exists a sequence \(\{x_{n}\}\) in C that converges strongly to p and \(\lim_{n \to\infty}d(x_{n},Tx_{n})=0\). We denote by \(\mathfrak{\widetilde{F}}(T)\) the set of strongly asymptotic fixed points of T. It is known that the fixed point set of a quasinonexpansive mapping defined on a \(\operatorname{CAT}(0)\) space (see Section 2 for the definition) is closed and convex.

Approximation methods for finding specific fixed points of a family of nonexpansive mappings in Hilbert, Banach, and geodesic metric spaces have been studied by many researchers; see, e.g., [3–9] and the references therein. One well-known method, called the shrinking projection method, was first proposed by Takahashi et al. [10] and has been applied to a variety of approximation problems; see, e.g., [11, 12]. In particular, Kimura and Takahashi [11] applied this method to the zero-point problem for a maximal monotone operator defined in a Banach space and obtained strong convergence theorems. To generate the iterative sequence by the shrinking projection method, one uses the metric projection onto a closed convex set \(C_{n}\) for each \(n \in\mathbb{N}\). Notice that the larger n is, the more complicated the shape of \(C_{n}\) becomes; hence, the computation of the projection becomes increasingly laborious as n grows. In 2011, Kimura et al. [2] overcame this difficulty and introduced the so-called averaged projection method of Halpern type for a family of quasinonexpansive mappings by combining it with the Halpern iteration. They still used the metric projection approach; nevertheless, the subsets onto which they project have simpler shapes than the classical ones. Let us denote by \(\mathfrak{F}(\mathfrak{T})\) the common fixed point set of all mappings in a family \(\mathfrak{T}\). Their theorem is stated as follows.

Theorem 1.1

(Kimura et al. [2], Theorem 3.1)

Let C be a closed convex subset of a Hilbert space H, \(\mathfrak{T}=\{T_{j}:j=1,\ldots,N\}\) a finite family of quasinonexpansive mappings of C into H with \(\mathfrak{F}(\mathfrak{T}) \ne\emptyset\) and \(\widetilde{\mathfrak{F}}(T_{j})=\mathfrak{F}(T_{j})\) for \(j=1,\ldots,N\). Let \(u,x_{1} \in C\) and define the sequence \(\{x_{n}\}\) by

$$\begin{aligned}& y^{j}_{n} =\alpha_{n}x_{n}+(1- \alpha_{n})T_{j}x_{n}, \\& C^{j}_{n} =\bigl\{ z \in C:\bigl\Vert y^{j}_{n}-z \bigr\Vert \le \Vert x_{n}-z\Vert \bigr\} , \quad j=1,\ldots,N, \\& v^{j}_{n,k} =P_{C^{j}_{k}}x_{n}, \quad k=1, \ldots,n, j=1,\ldots,N, \\& w_{n,k} =\sum_{j=1}^{N} \beta^{j}_{k}v^{j}_{n,k}, \quad k=1, \ldots,n, \\& x_{n+1} = \delta_{n}u+(1-\delta_{n})\sum _{k=1}^{n}\gamma_{n,k}w_{n,k}, \end{aligned}$$

where \(\{\alpha_{n}\}\), \(\{\beta^{j}_{n}:j=1,\ldots,N\}\), \(\{\gamma_{n,k}:k \le n\}\), and \(\{\delta_{n}\}\) are sequences in \([0,1]\) satisfying the following conditions:

  1. (i)

    \(\liminf_{n \to\infty}\alpha_{n}<1\),

  2. (ii)

    \(\beta^{j}_{n}>0\) for \(j=1,\ldots,N\), and \(\sum_{j=1}^{N}\beta^{j}_{n}=1\) for \(n \in\mathbb{N}\),

  3. (iii)

    \(\sum_{k=1}^{n}\gamma_{n,k}=1\) for \(n \in\mathbb{N}\), \(\lim_{n \to\infty}\gamma_{n,k}>0\) for \(k \in\mathbb{N}\), and \(\sum_{n=1}^{\infty}\sum_{k=1}^{n}|\gamma_{n+1,k}-\gamma _{n,k}|<\infty\),

  4. (iv)

    \(\lim_{n \to\infty}\delta_{n}=0\), \(\sum_{n=1}^{\infty}\delta_{n}=\infty\), and \(\sum_{n=1}^{\infty}|\delta_{n+1}-\delta_{n}|<\infty\).

Then \(\{x_{n}\}\) converges strongly to the point \(P_{\mathfrak{F}(\mathfrak{T})}u\).
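
Since Theorem 1.1 is set in a Hilbert space, its iteration can be sketched numerically in \(H=\mathbb{R}^{d}\), where each \(C^{j}_{n}\) is a half-space and the metric projection onto it has a closed form. The following Python sketch is only an illustration under these assumptions; the mappings \(T_{j}\), the anchor u, and the particular control sequences are chosen by us and are not part of the theorem.

```python
# A minimal numerical sketch of the iteration in Theorem 1.1 in H = R^d.
# There each set C_n^j = {z : ||y_n^j - z|| <= ||x_n - z||} is a half-space,
# so its metric projection has a closed form.  The mappings T_j, the anchor u,
# and the concrete control sequences below are illustrative assumptions.
import numpy as np

def project_bisector_halfspace(p, x, y):
    """Metric projection of p onto {z : ||y - z|| <= ||x - z||}, i.e. onto the
    half-space {z : <x - y, z> <= (||x||^2 - ||y||^2)/2}."""
    a = x - y
    b = 0.5 * (np.dot(x, x) - np.dot(y, y))
    violation = np.dot(a, p) - b
    if violation <= 0.0 or np.dot(a, a) == 0.0:
        return p
    return p - (violation / np.dot(a, a)) * a

def averaged_projection_halpern(T_list, u, x1, n_iter=100):
    N = len(T_list)
    u = np.asarray(u, dtype=float)
    x = np.asarray(x1, dtype=float)
    history = []                                   # stores (x_k, [y_k^1, ..., y_k^N])
    for n in range(1, n_iter + 1):
        alpha = 0.5                                # (i):  liminf alpha_n < 1
        delta = 1.0 / (n + 1)                      # (iv): delta_n -> 0, non-summable
        beta = np.full(N, 1.0 / N)                 # (ii): positive, sums to 1
        gamma = np.array([2.0 ** -(k + 1) for k in range(n)])
        gamma[-1] = 2.0 ** -(n - 1)                # (iii): gamma_{n,k} -> 2^{-k} > 0
        ys = [alpha * x + (1 - alpha) * T(x) for T in T_list]
        history.append((x.copy(), ys))
        w = []
        for x_k, ys_k in history:                  # k = 1, ..., n
            v = [project_bisector_halfspace(x, x_k, ys_k[j]) for j in range(N)]
            w.append(sum(beta[j] * v[j] for j in range(N)))
        x = delta * u + (1 - delta) * sum(g * w_k for g, w_k in zip(gamma, w))
    return x
```

For instance, each \(T_{j}\) may be taken to be the metric projection onto a closed ball or half-space; such mappings are nonexpansive, hence quasinonexpansive with \(\widetilde{\mathfrak{F}}(T_{j})=\mathfrak{F}(T_{j})\), and \(\mathfrak{F}(\mathfrak{T})\) is then the intersection of the corresponding sets.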

The problem of whether we can construct a shrinking projection method analogous to that given in Theorem 1.1 for solving a common fixed point problem for a finite family of quasinonexpansive mappings in a geodesic metric space is still open. The purpose of this paper is to study the feasibility of a Moudafi-type viscosity projection method with a weak contraction for a finite family of quasinonexpansive mappings in a complete \(\operatorname{CAT}(0)\) space, also known as a Hadamard space.

This paper is organized as follows. In Section 2 we recall the definition of geodesic metric spaces and summarize some useful lemmas and the main properties of \(\operatorname{CAT}(0)\) spaces. Moreover, since a \(\operatorname{CAT}(0)\) space lacks the vector addition available in a Banach space, we present an inequality estimating the distance between two elements defined by the finite convex combination ‘⊕’ in a \(\operatorname{CAT}(0)\) space; see Lemma 2.2. In Section 3 we construct a sequence of nonexpansive mappings satisfying the AKTT condition by choosing an appropriate control sequence under certain conditions; see Theorem 3.2. A convergence theorem for a new Moudafi-type viscosity approximation then follows from Theorem 3.2; see Theorem 3.3. Using Theorem 3.3, we also derive a strong convergence theorem by a Moudafi-type viscosity approximation with a weak contraction for a family of quasinonexpansive mappings; see Theorem 3.4. In the particular case where the weak contraction in Theorem 3.4 is constant, a strong convergence theorem by the averaged projection method of Halpern type is then obtained; see Theorem 3.5.

2 Preliminaries

Let \((X,d)\) be a metric space. For \(x,y \in X\), a geodesic path joining x to y (or a geodesic from x to y) is an isometric mapping \(c:[0,\ell] \subset\mathbb{R} \to X\) such that \(c(0)=x\) and \(c(\ell)=y\); being isometric means that \(d(c(t),c(t'))=|t-t'|\) for all \(t,t' \in[0,\ell]\). In particular, \(d(x,y)=\ell\). The space X is called a geodesic metric space if every two points of X are joined by a geodesic. The image of c is called a geodesic (segment) from x to y, and we shall denote a definite choice of this geodesic segment by \([x,y]\). A point \(z=c(t)\) in the geodesic \([x,y]\) will be written as \(z=(1-\lambda)x \oplus\lambda y\), where \(\lambda=t/\ell\), so that \(d(z,x)=\lambda d(x,y)\) and \(d(z,y)=(1-\lambda)d(x,y)\). A subset C of X is convex if every pair of points \(x,y \in C\) can be joined by a geodesic in X and the image of every such geodesic is contained in C.

A geodesic triangle \(\triangle(x_{1},x_{2},x_{3})\) in \((X,d)\) consists of three points \(x_{i} \in X\) (\(i=1,2,3\)), its vertices, and a geodesic segment between each pair of vertices, its sides. If a point \(x \in X\) lies in the union of \([x_{i},x_{j}]\), \(i,j \in\{ 1,2,3\}\), then we write \(x \in\triangle(x_{1},x_{2},x_{3})\). A comparison triangle for the geodesic triangle \(\triangle (x_{1},x_{2},x_{3})\) in X is a triangle \(\triangle(\bar{x}_{1},\bar{x}_{2},\bar{x}_{3})\) in the Euclidean plane \(\mathbb{E}^{2}\) such that \(d_{\mathbb{E}^{2}}(\bar{x}_{i},\bar{x}_{j})=d(x_{i},x_{j})\) for \(i,j \in\{1,2,3\}\).

A geodesic triangle △ in X is said to satisfy the \(\operatorname{CAT}(0)\) inequality if, given a comparison triangle △̅ in \(\mathbb{E}^{2}\) for △,

$$d(x,y) \le d_{\mathbb{E}^{2}}(\bar{x},\bar{y})\quad \mbox{for }x,y \in \triangle, $$

where \(\bar{x},\bar{y} \in\overline{\triangle}\) are the corresponding comparison points of x, y. The geodesic metric space X is called a \(\operatorname{CAT}(0)\) space if all geodesic triangles in X satisfy the \(\operatorname{CAT}(0)\) inequality. Note that Hilbert spaces are \(\operatorname{CAT}(0)\).

Lemma 2.1

Let \((X,d)\) be a \(\operatorname{CAT}(0)\) space, and let \(\alpha,\beta\in [0,1]\). Then:

  1. (i)

    For \(x,y \in X\), we have

    $$d\bigl(\alpha x \oplus(1-\alpha)y,\beta x \oplus(1-\beta)y\bigr)=|\alpha- \beta|d(x,y). $$
  2. (ii)

    ([13], Chapter II.2. Proposition 2.2) For \(x,y,p,q \in X\), we have

    $$d\bigl(\alpha x \oplus(1-\alpha)y,\alpha p \oplus(1-\alpha)q\bigr) \le\alpha d(x,p)+(1-\alpha) d(y,q). $$

    In particular, if \(p=q\), this reduces to

    $$d\bigl(\alpha x \oplus(1-\alpha)y,p\bigr) \le\alpha d(x,p)+(1-\alpha)d(y,p). $$
  3. (iii)

    ([14], Lemma 2.5) For \(x,y,z \in X\), we have

    $$d\bigl(\alpha x \oplus(1-\alpha)y,z\bigr)^{2} \le\alpha d(x,z)^{2}+(1-\alpha )d(y,z)^{2}-\alpha(1- \alpha)d(x,y)^{2}. $$
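
In a Hilbert space (and hence in \(\mathbb{E}^{2}\)) the inequality in Lemma 2.1(iii) holds with equality. The short check below is a sketch for the Euclidean case only and is not part of the lemma.

```python
# Numerical check of Lemma 2.1(iii) in the Euclidean plane, where the
# inequality is in fact an identity; there '⊕' reduces to convex combination.
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    x, y, z = rng.standard_normal((3, 2))
    a = rng.uniform()
    lhs = np.linalg.norm(a * x + (1 - a) * y - z) ** 2
    rhs = (a * np.linalg.norm(x - z) ** 2 + (1 - a) * np.linalg.norm(y - z) ** 2
           - a * (1 - a) * np.linalg.norm(x - y) ** 2)
    assert abs(lhs - rhs) < 1e-9
```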

We will extend the equality in Lemma 2.1(i) to any finitely many elements in X. First, we recall the notion of a finite sum ‘⊕’ defined by Butsan et al. [4]. Fix \(n \in\mathbb{N}\) with \(n \ge2\) and let \(\{\alpha_{1},\ldots,\alpha _{n}\} \subset(0,1)\) with \(\sum_{k=1}^{n}\alpha_{k}=1\) and \(\{x_{1},\ldots,x_{n}\} \subset X\). By induction we define

$$ \bigoplus_{k=1}^{n}\alpha_{k}x_{k} =(1-\alpha_{n}) \biggl(\frac{\alpha_{1}}{1-\alpha_{n}}x_{1}\oplus\cdots \oplus\frac {\alpha_{n-1}}{1-\alpha_{n}}x_{n-1} \biggr) \oplus\alpha_{n}x_{n}. $$
(2.1)

The definition of ⨁ in (2.1) is an ordered one in the sense that it depends on the order of points \(x_{1},\ldots,x_{n}\). However, we occasionally use the notation \(\alpha_{1}x_{1} \oplus\alpha_{2}x_{2} \oplus\cdots\oplus\alpha_{n}x_{n}\) for such a point. Lemma 2.1(ii) assures that, for \(y \in X\),

$$ d \Biggl(\bigoplus_{k=1}^{n} \alpha_{k}x_{k},y \Biggr) \le\sum _{k=1}^{n}\alpha _{k}d(x_{k},y). $$
(2.2)
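
In the Euclidean case the point \((1-\lambda)x \oplus\lambda y\) is just \((1-\lambda)x+\lambda y\), and the recursion (2.1) reproduces the ordinary convex combination \(\sum_{k=1}^{n}\alpha_{k}x_{k}\). The following sketch (our own helper names, Euclidean setting assumed) implements (2.1) literally and checks this.

```python
# Literal implementation of the recursion (2.1) in R^d, where
# (1 - t) x ⊕ t y is the point (1 - t) x + t y on the segment [x, y].
import numpy as np

def oplus(t, x, y):
    """The point (1 - t) x ⊕ t y on the geodesic [x, y] (Euclidean case)."""
    return (1 - t) * x + t * y

def finite_oplus(alphas, points):
    """⨁_{k=1}^n alpha_k x_k defined recursively as in (2.1)."""
    if len(points) == 1:
        return points[0]
    a_n = alphas[-1]
    inner = finite_oplus([a / (1 - a_n) for a in alphas[:-1]], points[:-1])
    return oplus(a_n, inner, points[-1])

# In R^d the result coincides with the ordinary convex combination:
rng = np.random.default_rng(1)
pts = list(rng.standard_normal((4, 3)))
w = rng.dirichlet(np.ones(4))
assert np.allclose(finite_oplus(list(w), pts), sum(a * p for a, p in zip(w, pts)))
```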

Lemma 2.2

Let \((X,d)\) be a \(\operatorname{CAT}(0)\) space, and for \(n \in\mathbb{N}\) with \(n \ge2\), let \(\{\alpha_{k}\}_{k=1}^{n}\) and \(\{\beta_{k}\}_{k=1}^{n} \subset(0,1)\) be two sequences such that \(\sum_{k=1}^{n}\alpha_{k}=\sum_{k=1}^{n}\beta_{k}=1\). Then, for \(x_{1},\ldots,x_{n} \in X\), we have

$$\begin{aligned}& d \Biggl(\bigoplus_{k=1}^{n} \alpha_{k}x_{k},\bigoplus_{k=1}^{n} \beta_{k}x_{k} \Biggr) \\& \quad \le\biggl\vert \frac{\alpha_{2}}{\alpha_{1}+\alpha_{2}}-\frac{\beta_{2}}{\beta_{1}+\beta_{2}}\biggr\vert (\alpha_{1}+\alpha_{2})d(x_{1},x_{2}) \\& \qquad {}+\biggl\vert \frac{\alpha_{3}}{\sum_{k=1}^{3}\alpha_{k}}-\frac{\beta_{3}}{\sum_{k=1}^{3}\beta_{k}}\biggr\vert \sum_{k=1}^{3}\alpha_{k}\cdot\sum_{k=1}^{2}\frac{\beta_{k}}{\beta_{1}+\beta_{2}}d(x_{k},x_{3}) \\& \qquad {}+\cdots+\biggl\vert \frac{\alpha_{j}}{\sum_{k=1}^{j}\alpha_{k}}-\frac{\beta_{j}}{\sum_{k=1}^{j}\beta_{k}}\biggr\vert \sum_{k=1}^{j}\alpha_{k}\cdot\sum_{k=1}^{j-1}\frac{\beta_{k}}{\beta_{1}+\cdots+\beta_{j-1}}d(x_{k},x_{j}) \\& \qquad {}+\cdots+\vert \alpha_{n}-\beta_{n}\vert \sum_{k=1}^{n-1}\frac{\beta_{k}}{1-\beta_{n}}d(x_{k},x_{n}). \end{aligned}$$

Proof

We will prove the result by induction.

Step 1. According to Lemma 2.1(ii), (2.1), and (2.2), we derive

$$\begin{aligned}& d \Biggl(\bigoplus_{k=1}^{n} \alpha_{k}x_{k},\bigoplus_{k=1}^{n} \beta_{k}x_{k} \Biggr) \\& \quad \le d \Biggl((1-\alpha_{n}) \Biggl(\bigoplus _{k=1}^{n-1}\frac{\alpha _{k}}{1-\alpha_{n}}x_{k} \Biggr) \oplus\alpha_{n}x_{n}, (1-\alpha_{n}) \Biggl( \bigoplus_{k=1}^{n-1}\frac{\beta_{k}}{1-\beta _{n}}x_{k} \Biggr)\oplus\alpha_{n}x_{n} \Biggr) \\& \qquad {}+d \Biggl((1-\alpha_{n}) \Biggl(\bigoplus _{k=1}^{n-1}\frac{\beta_{k}}{1-\beta _{n}}x_{k} \Biggr) \oplus\alpha_{n}x_{n}, (1-\beta_{n}) \Biggl( \bigoplus_{k=1}^{n-1}\frac{\beta_{k}}{1-\beta _{n}}x_{k} \Biggr)\oplus\beta_{n}x_{n} \Biggr) \\& \quad \le (1-\alpha_{n})d \Biggl(\bigoplus _{k=1}^{n-1}\frac{\alpha_{k}}{1-\alpha _{n}}x_{k},\bigoplus _{k=1}^{n-1}\frac{\beta_{k}}{1-\beta_{n}}x_{k} \Biggr) + |\alpha_{n}-\beta_{n}|d \Biggl(\bigoplus _{k=1}^{n-1}\frac{\beta_{k}}{1-\beta _{n}}x_{k},x_{n} \Biggr) \\& \quad \le (1-\alpha_{n})d \Biggl(\bigoplus _{k=1}^{n-1}\frac{\alpha_{k}}{1-\alpha _{n}}x_{k},\bigoplus _{k=1}^{n-1}\frac{\beta_{k}}{1-\beta_{n}}x_{k} \Biggr) \\& \qquad {}+ |\alpha_{n}-\beta_{n}|\sum _{k=1}^{n-1}\frac{\beta_{k}}{1-\beta_{n}}d(x_{k},x_{n}). \end{aligned}$$

Step 2. Apply the inequality in Step 1 for the case \(n-1\) to obtain

$$\begin{aligned}& d \Biggl(\bigoplus_{k=1}^{n-1} \frac{\alpha_{k}}{1-\alpha_{n}}x_{k},\bigoplus_{k=1}^{n-1} \frac{\beta_{k}}{1-\beta_{n}}x_{k} \Biggr) \\& \quad \le\frac{1-\alpha_{n-1}-\alpha_{n}}{1-\alpha_{n}} d \Biggl(\bigoplus_{k=1}^{n-2} \frac{\alpha_{k}}{1-\alpha_{n-1}-\alpha _{n}}x_{k},\bigoplus_{k=1}^{n-2} \frac{\beta_{k}}{1-\beta_{n-1}-\beta _{n}}x_{k} \Biggr) \\& \qquad {}+\biggl\vert \frac{\alpha_{n-1}}{1-\alpha_{n}}-\frac{\beta _{n-1}}{1-\beta_{n}}\biggr\vert \sum _{k=1}^{n-2}\frac{\beta_{k}}{1-\beta _{n-1}-\beta_{n}}d(x_{k},x_{n-1}). \end{aligned}$$

Step 3. Recall that \(\sum_{k=1}^{n}\alpha_{k}=\sum_{k=1}^{n}\beta_{k}=1\). Hence, the two inequalities in Step 1 and Step 2 imply that

$$\begin{aligned}& d \Biggl(\bigoplus_{k=1}^{n} \alpha_{k}x_{k},\bigoplus_{k=1}^{n} \beta_{k}x_{k} \Biggr) \\& \quad \le(1-\alpha_{n-1}-\alpha_{n})d \Biggl(\bigoplus _{k=1}^{n-2}\frac{\alpha _{k}}{1-\alpha_{n-1}-\alpha_{n}}x_{k}, \bigoplus_{k=1}^{n-2}\frac{\beta _{k}}{1-\beta_{n-1}-\beta_{n}}x_{k} \Biggr) \\& \qquad {}+\biggl\vert \frac{\alpha_{n-1}}{\sum_{k=1}^{n-1}\alpha_{k}}-\frac{\beta _{n-1}}{\sum_{k=1}^{n-1}\beta_{k}}\biggr\vert \sum _{k=1}^{n-1}\alpha_{k}\cdot\sum _{k=1}^{n-2}\frac{\beta_{k}}{1-\beta _{n-1}-\beta_{n}}d(x_{k},x_{n-1}) \\& \qquad {}+|\alpha_{n}-\beta_{n}|\sum _{k=1}^{n-1}\frac{\beta_{k}}{1-\beta_{n}}d(x_{k},x_{n}). \end{aligned}$$

Continuing the process in Step 1 to estimate the first term of this inequality on the right-hand side, after \(n-2\) steps, we have

$$\begin{aligned}& d \Biggl(\bigoplus_{k=1}^{n} \alpha_{k}x_{k},\bigoplus_{k=1}^{n} \beta_{k}x_{k} \Biggr) \\& \quad \le\biggl\vert \frac{\alpha_{2}}{\alpha_{1}+\alpha_{2}}-\frac{\beta_{2}}{\beta_{1}+\beta_{2}}\biggr\vert (\alpha_{1}+\alpha_{2})d(x_{1},x_{2}) \\& \qquad {}+\biggl\vert \frac{\alpha_{3}}{\sum_{k=1}^{3}\alpha_{k}}-\frac{\beta_{3}}{\sum_{k=1}^{3}\beta_{k}}\biggr\vert \sum_{k=1}^{3}\alpha_{k}\cdot\sum_{k=1}^{2}\frac{\beta_{k}}{\beta_{1}+\beta_{2}}d(x_{k},x_{3}) \\& \qquad {}+\cdots+\biggl\vert \frac{\alpha_{j}}{\sum_{k=1}^{j}\alpha_{k}}-\frac{\beta_{j}}{\sum_{k=1}^{j}\beta_{k}}\biggr\vert \sum_{k=1}^{j}\alpha_{k}\cdot\sum_{k=1}^{j-1}\frac{\beta_{k}}{\beta_{1}+\cdots+\beta_{j-1}}d(x_{k},x_{j}) \\& \qquad {}+\cdots+\vert \alpha_{n}-\beta_{n}\vert \sum_{k=1}^{n-1}\frac{\beta_{k}}{1-\beta_{n}}d(x_{k},x_{n}). \end{aligned}$$

 □

Let \(\{\alpha_{n}\}_{n=1}^{\infty}\) be a sequence in \((0,1)\) such that \(\sum_{n=1}^{\infty}\alpha_{n}=1\). For notational convenience, let

$$ \bar{\alpha}_{k}=\frac{\alpha_{k}}{\sum_{j=1}^{k}\alpha_{j}},\qquad \alpha'_{k}= \sum_{j=k+1}^{\infty}\alpha_{j} \quad \text{for }k \in \mathbb{N}. $$
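
For example, if \(\alpha_{n}=2^{-n}\) for \(n \in\mathbb{N}\), then \(\sum_{n=1}^{\infty}\alpha_{n}=1\) and

$$\bar{\alpha}_{k}=\frac{2^{-k}}{1-2^{-k}}, \qquad \alpha'_{k}=2^{-k}, \qquad \sum_{k=n}^{\infty}\alpha'_{k}=2^{-(n-1)} \to0 \quad \text{as }n \to\infty, $$

so this sequence satisfies the tail condition \(\lim_{n \to\infty}\sum_{k=n}^{\infty}\alpha'_{k}=0\) appearing in Lemma 2.4 below.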

The following result is an immediate consequence of Lemma 2.2.

Lemma 2.3

Let \((X,d)\) be a \(\operatorname{CAT}(0)\) space, and for \(n \in\mathbb{N}\) (\(n \ge2\)), let \(\{\alpha_{k}\}_{k=1}^{n},\{\beta _{k}\}_{k=1}^{n} \subset(0,1)\) be such that \(\sum_{k=1}^{n}\alpha_{k}=\sum_{k=1}^{n}\beta_{k}=1\). Then for \(x_{1},\ldots,x_{n} \in X\), we have

$$ d \Biggl(\bigoplus_{k=1}^{n} \alpha_{k}x_{k},\bigoplus_{k=1}^{n} \beta_{k}x_{k} \Biggr) \le M\sum_{k=1}^{n}| \bar{\alpha}_{k}-\bar{\beta}_{k}|, $$

where \(M=\max\{d(x_{i},x_{j}):i,j=1,\ldots,n\}\).
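
In the Euclidean case the estimate of Lemma 2.3 can be tested numerically, since there \(\bigoplus_{k=1}^{n}\alpha_{k}x_{k}=\sum_{k=1}^{n}\alpha_{k}x_{k}\). The following check is only an illustration of the bound.

```python
# Numerical check of the bound in Lemma 2.3 in R^d (Euclidean case).
import numpy as np

def partial_bar(w):
    """bar(w)_k = w_k / (w_1 + ... + w_k)."""
    return w / np.cumsum(w)

rng = np.random.default_rng(2)
n, d = 6, 3
x = rng.standard_normal((n, d))
alpha, beta = rng.dirichlet(np.ones(n)), rng.dirichlet(np.ones(n))
M = max(np.linalg.norm(x[i] - x[j]) for i in range(n) for j in range(n))
lhs = np.linalg.norm(alpha @ x - beta @ x)
rhs = M * np.abs(partial_bar(alpha) - partial_bar(beta)).sum()
assert lhs <= rhs + 1e-12
```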

We remark that Dhompongsa et al. [5] defined an infinite sum ‘⊕’ as follows. Let \(\{\alpha_{n}\} \subset(0,1)\) with \(\sum_{n=1}^{\infty}\alpha_{n}=1\), and let \(\{x_{n}\}\) be a bounded sequence in a complete metric space X. Choose an arbitrary \(u \in X\). Suppose that \(\lim_{n \to\infty}\sum_{k=n}^{\infty}\alpha'_{k}=0\). Define the sequence \(\{y_{n}\}\) in X by

$$y_{n}=\alpha_{1}x_{1} \oplus \alpha_{2}x_{2} \oplus\cdots\oplus\alpha_{n}x_{n} \oplus\alpha'_{n}u. $$

Then, according to (2.1),

$$ y_{n}= \Biggl(\sum_{k=1}^{n} \alpha_{k} \Biggr)z_{n} \oplus\alpha'_{n}u, $$
(2.3)

where

$$z_{n}=\frac{\alpha_{1}}{\sum_{k=1}^{n}\alpha_{k}}x_{1} \oplus\cdots\oplus \frac {\alpha_{n}}{\sum_{k=1}^{n}\alpha_{k}}x_{n}. $$

Recall that \(\{y_{n}\}\) is a Cauchy sequence [5] and therefore converges to some point \(x \in X\). We can write

$$x=\bigoplus_{n=1}^{\infty}\alpha_{n}x_{n}. $$

By (2.3), \(d(y_{n},z_{n})=\alpha'_{n}d(z_{n},u)\). Hence, \(\{z_{n}\}\) also converges to x, and the limit x is independent of the choice of u.

The following property is crucial for verifying our main results in Section 3.

Lemma 2.4

(Dhompongsa et al. [5], Lemma 3.8)

Let C be a closed convex subset of a complete \(\operatorname{CAT}(0)\) space X, \(\{T_{n}\}\) a sequence of nonexpansive mappings on C with \(\bigcap_{n=1}^{\infty}\mathfrak{F}(T_{n}) \ne\emptyset\), and \(\{\alpha_{n}\}\) a sequence in \((0,1)\) such that \(\sum_{n=1}^{\infty}\alpha_{n}=1\) and \(\lim_{n \to\infty}\sum_{k=n}^{\infty}\alpha'_{k}=0\). Define the mapping \(S:C \to C\) by \(Sx=\bigoplus_{n=1}^{\infty}\alpha _{n}T_{n}x\), \(x \in C\). Then S is nonexpansive, and \(\mathfrak{F}(S)=\bigcap_{n=1}^{\infty}\mathfrak{F}(T_{n})\).

3 Projection method

Let C be a closed convex subset of a complete metric space X. A family \(\{T_{n}\}\) of nonexpansive self-mappings of C is said to satisfy the AKTT condition [3] if, for every bounded subset B of C,

$$\sum_{n=1}^{\infty}\sup\bigl\{ d(T_{n+1}x,T_{n}x):x \in B\bigr\} < \infty. $$

In this case, the sequence \(\{T_{n}x\}\) is Cauchy for each \(x \in C\) and, since C is closed, converges in C. We recall the following convergence theorem with a weak contraction for a sequence of nonexpansive mappings satisfying the AKTT condition.

Theorem 3.1

(Huang [15], Theorem 4.11)

Let X be a complete \(\operatorname{CAT}(0)\) space, C a closed convex subset of X, \(\{T_{n}\}\) a family of nonexpansive mappings on C satisfying the AKTT condition such that \(\bigcap_{n=1}^{\infty}\mathfrak{F}(T_{n})\ne\emptyset\), f a φ-weak contraction on C, where φ is strictly increasing, and \(\{\alpha_{n}\}\) a sequence in \((0,1]\) satisfying

  1. (C1)

    \(\lim_{n \to\infty}\alpha_{n}=0\);

  2. (C2)

    \(\sum_{n=1}^{\infty}\alpha_{n}=\infty\);

  3. (C3)

    either \(\sum_{n=1}^{\infty}|\alpha_{n+1}-\alpha_{n}|<\infty\), or \(\lim_{n \to\infty}(\alpha_{n+1}/\alpha_{n})=1\).

Define the mapping \(S:C \to C\) by \(Sx=\lim_{n \to\infty}T_{n}x\) for \(x \in C\). Suppose that \(\mathfrak{F}(S)=\bigcap_{n=1}^{\infty}\mathfrak{F}(T_{n})\). Then the sequence \(\{x_{n}\}\) defined by \(x_{1} \in C\) and

$$ x_{n+1}=\alpha_{n}f(x_{n}) \oplus(1- \alpha_{n})T_{n}x_{n} $$

converges strongly to a point \(\hat{x} \in C\) such that \(\hat{x}=P_{\mathfrak{F}(S)}f(\hat{x})\).
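
To illustrate Theorem 3.1, the following sketch runs the iteration in \(X=\mathbb{R}^{d}\) (a complete \(\operatorname{CAT}(0)\) space). The particular family \(\{T_{n}\}\) (a constant sequence of projections, for which the AKTT condition holds trivially and \(S=T\)), the contraction f, and the control sequence \(\alpha_{n}=1/(n+1)\) are illustrative assumptions, not part of the theorem.

```python
# A minimal sketch of the viscosity iteration of Theorem 3.1 in X = R^d.
import numpy as np

def viscosity_iteration(T_seq, f, x1, n_iter=500):
    """x_{n+1} = alpha_n f(x_n) + (1 - alpha_n) T_n x_n  (Euclidean '⊕')."""
    x = np.asarray(x1, dtype=float)
    for n in range(1, n_iter + 1):
        alpha = 1.0 / (n + 1)                      # satisfies (C1)-(C3)
        x = alpha * f(x) + (1 - alpha) * T_seq(n)(x)
    return x

def proj_ball(center, radius):
    """Metric projection onto the closed ball B(center, radius) in R^d."""
    def P(z):
        v = z - center
        r = np.linalg.norm(v)
        return z if r <= radius else center + radius * v / r
    return P

T = proj_ball(np.zeros(2), 1.0)                    # F(T) = closed unit ball
f = lambda z: 0.5 * z + np.array([2.0, 0.0])       # phi-weak contraction, phi(t) = t/2
x_hat = viscosity_iteration(lambda n: T, f, np.array([5.0, 5.0]))
# x_hat approximates the point satisfying x = P_{F(S)} f(x), here close to (1, 0).
```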

We now construct a sequence of nonexpansive mappings satisfying the AKTT condition by choosing an appropriate control sequence under certain conditions.

Theorem 3.2

Let C be a closed convex subset of a complete \(\operatorname{CAT}(0)\) space X, \(\mathfrak{T}=\{T_{n}\}\) a family of nonexpansive mappings on C with \(\mathfrak{F}(\mathfrak{T}) \ne\emptyset\), and \(\{\gamma_{n,k}:k \le n\} \subset(0,1)\) a sequence satisfying

  1. (D1)

    \(\sum_{k=1}^{n}\gamma_{n,k}=1\), \(\forall n \in\mathbb{N}\);

  2. (D2)

    \(\lambda_{k}=\lim_{n \to\infty}\gamma_{n,k}>0\), \(\forall k \in\mathbb {N}\), and \(\lim_{n \to\infty}\sum_{k=n}^{\infty}\lambda'_{k}=0\);

  3. (D3)

    \(\sum_{n=1}^{\infty}\sum_{k=1}^{n+1}|\bar{\gamma}_{n+1,k}-\bar{\gamma }_{n,k}|<\infty\), where \(\gamma_{n,n+1}=0\) and

    $$\bar{\gamma}_{n,k}=\frac{\gamma_{n,k}}{\gamma_{n,1}+\cdots+\gamma _{n,k}}, \quad k=1,\ldots,n+1. $$

For each \(n \in\mathbb{N}\), define the mapping \(S_{n}:C \to C\) by

$$ S_{n}x=\bigoplus_{k=1}^{n} \gamma_{n,k}T_{k}x. $$

Then \(\{S_{n}\}\) is a family of nonexpansive mappings satisfying the AKTT condition and

$$\bigcap_{n=1}^{\infty}\mathfrak{F}(S_{n})=\bigcap_{n=1}^{\infty}\mathfrak {F}(T_{n}). $$

Moreover, the mapping \(S:C \to C\) defined by \(Sx=\lim_{n \to\infty}S_{n}x\) is also nonexpansive, and \(\mathfrak{F}(S)=\bigcap_{n=1}^{\infty}\mathfrak {F}(S_{n})\).

Proof

Fix any \(n \in\mathbb{N}\). We may assume that \(\gamma_{n,k}=0\) for all \(k>n\). Then Lemma 2.4 states that \(S_{n}\) is nonexpansive and \(\mathfrak {F}(S_{n})=\bigcap_{k=1}^{n}\mathfrak{F}(T_{k})\). Thus,

$$\bigcap_{n=1}^{\infty}\mathfrak{F}(S_{n})= \bigcap_{k=1}^{\infty}\mathfrak {F}(T_{k}) \ne\emptyset. $$

For every bounded subset B of C, the set \(\{T_{k}x:x \in B, k \in \mathbb{N}\}\) is bounded since \(\bigcap_{k=1}^{\infty}\mathfrak{F}(T_{k}) \ne\emptyset\). Let

$$M=\operatorname{diam}\{T_{k}x:x \in B, k \in\mathbb{N}\}, $$

so that by Lemma 2.3, for \(x \in B\) and \(n \in\mathbb{N}\), we have

$$\begin{aligned} d(S_{n+1}x,S_{n}x) \le& d \Biggl(\bigoplus _{k=1}^{n}\gamma_{n+1,k}T_{k}x, \bigoplus_{k=1}^{n}\gamma _{n,k}T_{k}x \Biggr) +\gamma_{n+1,n+1}d \Biggl(T_{n+1}x,\bigoplus _{k=1}^{n}\gamma _{n,k}T_{k}x \Biggr) \\ \le& M\sum_{k=1}^{n}|\bar{ \gamma}_{n+1,k}-\bar{\gamma}_{n,k}|+M\gamma _{n+1,n+1}\sum _{k=1}^{n}\gamma_{n,k} \\ =& M\sum_{k=1}^{n}|\bar{ \gamma}_{n+1,k}-\bar{\gamma}_{n,k}|+M\bar{\gamma }_{n+1,n+1} \\ =& M\sum_{k=1}^{n+1}|\bar{ \gamma}_{n+1,k}-\bar{\gamma}_{n,k}|. \end{aligned}$$

It follows that

$$\sum_{n=1}^{\infty}\sup\bigl\{ d(S_{n+1}x,S_{n}x):x \in B\bigr\} \le M\sum _{n=1}^{\infty}\sum_{k=1}^{n+1}| \bar{\gamma}_{n+1,k}-\bar{\gamma }_{n,k}|< \infty. $$

Therefore, \(\{S_{n}\}\) is a family of nonexpansive mappings on C satisfying the AKTT condition such that \(\bigcap_{n=1}^{\infty}\mathfrak {F}(S_{n}) \ne\emptyset\). It follows that \(\{S_{n}x\}\) converges for all \(x \in C\), and thus S is well defined.

If \(m,n \in\mathbb{N}\) and \(m > n\), then we get

$$\begin{aligned} \sum_{k=1}^{n}|\bar{\gamma}_{m,k}- \bar{\gamma}_{n,k}| & \le\sum_{k=1}^{n} \bigl(\vert \bar{\gamma}_{n+1,k}-\bar{\gamma}_{n,k}\vert +| \bar {\gamma}_{n+2,k}-\bar{\gamma}_{n+1,k}|+\cdots +|\bar{\gamma}_{m,k}-\bar{\gamma}_{m-1,k}|\bigr) \\ & = \sum_{j=n}^{m-1}\sum _{k=1}^{n}|\bar{\gamma}_{j+1,k}-\bar{\gamma }_{j,k}| \\ & \le\sum_{j=n}^{m-1}\sum _{k=1}^{j+1}|\bar{\gamma}_{j+1,k}-\bar{ \gamma}_{j,k}|. \end{aligned}$$

Note that, by (D2), \(\lim_{n \to\infty}\bar{\gamma}_{n,k}=\bar{\lambda}_{k}\) for each \(k \in\mathbb{N}\), where \(\bar{\lambda}_{k}=\lambda_{k}/\sum_{j=1}^{k}\lambda_{j}\). Letting \(m \to\infty\), we obtain

$$\sum_{k=1}^{n}|\bar{\lambda}_{k}- \bar{\gamma}_{n,k}| \le\sum_{j=n}^{\infty}\sum_{k=1}^{j+1}|\bar{\gamma}_{j+1,k}- \bar{\gamma}_{j,k}| $$

and then take the limit as \(n \to\infty\) to obtain

$$ \lim_{n \to\infty}\sum_{k=1}^{n}| \bar{\lambda}_{k}-\bar{\gamma}_{n,k}|=0. $$
(3.1)

On the other hand, the absolute convergence of the series

$$\sum_{n=1}^{\infty}\sum _{k=1}^{n+1}(\bar{\gamma}_{n+1,k}-\bar{ \gamma}_{n,k}) $$

implies the convergence of its partial sums

$$\sum_{n=1}^{m}\sum _{k=1}^{n+1}(\bar{\gamma}_{n+1,k}-\bar{ \gamma}_{n,k}) = \Biggl(\sum_{k=1}^{m+1} \bar{\gamma}_{m+1,k} \Biggr)-\bar{\gamma}_{1,1} = \Biggl(\sum _{k=1}^{m+1}\bar{\gamma}_{m+1,k} \Biggr)-1. $$

Hence, by (3.1), \(\sum_{k=1}^{\infty}\bar{\lambda}_{k}\) converges (in fact, to \(\lim_{n \to\infty}\sum_{k=1}^{n}\bar{\gamma}_{n,k}\)), and so does \(\sum_{k=1}^{\infty}\lambda_{k}\) because \(\lambda_{k} \le\bar {\lambda}_{k}\). Let \(\lambda=\sum_{k=1}^{\infty}\lambda_{k}\). Define the mapping \(W:C \to C\) by

$$Wx=\bigoplus_{n=1}^{\infty}\frac{\lambda_{n}}{\lambda}T_{n}x. $$

Then, by (D2), Lemma 2.4 guarantees that W is nonexpansive and \(\mathfrak{F}(W)=\bigcap_{n=1}^{\infty}\mathfrak{F}(T_{n})\). If

$$W_{n}x=\bigoplus_{k=1}^{n} \frac{\lambda_{k}}{\sum_{j=1}^{n}\lambda_{j}}T_{k}x,\quad x \in C, $$

then \(\{W_{n}x\}\) converges to Wx. Recall that

$$\overline{ \biggl(\frac{\lambda_{k}}{\sum_{j=1}^{n}\lambda_{j}} \biggr)}=\bar {\lambda}_{k}\quad \text{for }k=1,\ldots,n. $$

Fix any \(x \in C\). Then by Lemma 2.3 and (3.1) we get

$$ d(S_{n}x,W_{n}x) \le K\sum_{k=1}^{n}| \bar{\gamma}_{n,k}-\bar{\lambda}_{k}| \to0\quad \text{as }n \to \infty, $$

where \(K=\sup\{d(T_{i}x,T_{j}x):i,j \in\mathbb{N}\}\) is finite because \(\{T_{k}x:k \in\mathbb{N}\}\) is bounded. Since \(W_{n}x \to Wx\), it follows that \(Sx=\lim_{n \to\infty}S_{n}x=Wx\). Hence S is nonexpansive and \(\mathfrak{F}(S)=\mathfrak{F}(W)=\bigcap_{n=1}^{\infty}\mathfrak{F}(T_{n})=\bigcap_{n=1}^{\infty}\mathfrak{F}(S_{n})\), as required. □
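
For concreteness, the conditions (D1)-(D3) are satisfiable; for instance, for \(n \ge2\), the truncated geometric weights

$$\gamma_{n,k}=2^{-k} \quad (k=1,\ldots,n-1), \qquad \gamma_{n,n}=2^{-(n-1)} $$

satisfy them. Indeed, \(\sum_{k=1}^{n}\gamma_{n,k}=1\); \(\lambda_{k}=\lim_{n \to\infty}\gamma_{n,k}=2^{-k}>0\) and \(\sum_{k=n}^{\infty}\lambda'_{k}=2^{-(n-1)} \to0\); and \(\bar{\gamma}_{n+1,k}=\bar{\gamma}_{n,k}=2^{-k}/(1-2^{-k})\) for \(k \le n-1\), so that \(\sum_{k=1}^{n+1}|\bar{\gamma}_{n+1,k}-\bar{\gamma}_{n,k}| \le2^{-n+1}\), which is summable in n. This choice is only an example and is not required by Theorem 3.2.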

The following result follows immediately from Theorems 3.1 and 3.2.

Theorem 3.3

Let C be a closed convex subset of a complete \(\operatorname{CAT}(0)\) space X, \(\mathfrak{T}=\{T_{n}\}\) a family of nonexpansive mappings on C such that \(\mathfrak{F}(\mathfrak{T}) \ne\emptyset\), and f a φ-weak contraction on C, where φ is strictly increasing. Let \(\{\alpha_{n}\} \subset(0,1]\) and \(\{\gamma_{n,k}:k \le n\} \subset (0,1)\) be two sequences such that \(\{\alpha_{n}\}\) satisfies (C1)-(C3) and \(\{\gamma_{n,k}:k \le n\}\) satisfies (D1)-(D3). Let \(x_{1} \in C\) and define the sequence \(\{x_{n}\}\) by

$$x_{n+1}=\alpha_{n}f(x_{n}) \oplus(1- \alpha_{n})\bigoplus_{k=1}^{n} \gamma_{n,k}T_{k}x_{n}. $$

Then \(\{x_{n}\}\) converges strongly to a point \(\hat{x} \in C\) such that \(\hat{x}=P_{\mathfrak{F}(\mathfrak{T})}f(\hat{x})\).

Proof

For each \(n \in\mathbb{N}\), let \(S_{n}:C \to C\) be the mapping defined by

$$ S_{n}x=\bigoplus_{k=1}^{n} \gamma_{n,k}T_{k}x. $$

Then by Theorem 3.2, \(\{S_{n}\}\) is a family of nonexpansive mappings satisfying the AKTT condition and \(\bigcap_{n=1}^{\infty}\mathfrak{F}(S_{n})=\bigcap_{n=1}^{\infty}\mathfrak {F}(T_{n})\). We can write

$$x_{n+1}=\alpha_{n}f(x_{n}) \oplus(1- \alpha_{n})S_{n}x_{n}. $$

Define the mapping \(S:C \to C\) by \(Sx=\lim_{n \to\infty}S_{n}x\) for \(x \in C\), so that S is nonexpansive and \(\mathfrak{F}(S)=\bigcap_{n=1}^{\infty}\mathfrak{F}(S_{n})=\mathfrak{F}(\mathfrak{T})\). Consequently, Theorem 3.1 assures the strong convergence of \(\{x_{n}\}\) to a limit \(\hat{x}\), say, such that \(\hat{x}=P_{\mathfrak{F}(S)}f(\hat{x})=P_{\mathfrak{F}(\mathfrak{T})}f(\hat{x})\). □

Using Theorem 3.3, we establish a strong convergence theorem by a Moudafi-type shrinking projection method for a family of quasinonexpansive mappings as follows.

Theorem 3.4

Let C be a closed convex subset of a complete \(\operatorname{CAT}(0)\) space X such that \(\{z \in C: d(u, z) \leq d(v,z)\}\) is a convex subset of C for every \(u, v \in C\). Let \(\mathfrak{T}=\{T_{j}:j=1,\ldots,N\}\) be a finite family of quasinonexpansive mappings of C into X with \(\mathfrak{F}(\mathfrak{T}) \ne\emptyset\) and \(\widetilde{\mathfrak{F}}(T_{j})=\mathfrak{F}(T_{j})\) for \(j=1,\ldots,N\), and f a φ-weak contraction on C, where φ is strictly increasing. Let \(\{\alpha_{n}\}\), \(\{\delta_{n}\}\) be sequences in \((0,1]\), and \(\{\beta^{j}_{n}:j=1,\ldots,N\}\) and \(\{\gamma_{n,k}:k \le n\}\) be sequences in \((0,1)\). Let \(x_{1} \in C\) and define the sequence \(\{x_{n}\}\) by

$$\begin{aligned}& y^{j}_{n} =\delta_{n}x_{n} \oplus(1- \delta_{n})T_{j}x_{n}, \\& C^{j}_{n} =\bigl\{ z \in C:d\bigl(y^{j}_{n},z \bigr) \le d(x_{n},z)\bigr\} ,\quad j=1,\ldots,N, \\& v^{j}_{n,k} =P_{C^{j}_{k}}x_{n}, \quad k=1, \ldots,n, j=1,\ldots,N, \\& w_{n,k} =\bigoplus_{j=1}^{N} \beta^{j}_{k}v^{j}_{n,k}, \quad k=1, \ldots,n, \\& x_{n+1} = \alpha_{n}f(x_{n}) \oplus(1- \alpha_{n})\bigoplus_{k=1}^{n} \gamma _{n,k}w_{n,k}, \end{aligned}$$

where \(\{\alpha_{n}\}\) satisfies (C1)-(C3), \(\{\gamma_{n,k}:k \le n\}\) satisfies (D1)-(D3), and \(\{\delta_{n}\}\), \(\{\beta^{j}_{n}\}\) satisfy the following conditions:

  1. (i)

    \(\liminf_{n \to\infty}\delta_{n}<1\);

  2. (ii)

    \(\sum_{j=1}^{N}\beta^{j}_{n}=1\) for \(n \in\mathbb{N}\).

Then \(\{x_{n}\}\) converges strongly to a point \(\hat{x} \in C\) such that \(\hat{x}=P_{\mathfrak{F}(\mathfrak{T})}f(\hat{x})\).

Proof

First, we can see that every \(C^{j}_{n}\) is closed and convex by the assumption on the space. To prove that the metric projection \(P_{C^{j}_{k}}\) is well defined, let \(z \in\mathfrak{F}(\mathfrak{T})\). Since \(T_{j}\) is quasinonexpansive, we have

$$d\bigl(y^{j}_{n},z\bigr) \le\delta_{n}d(x_{n},z)+(1- \delta_{n})d(T_{j}x_{n},z) \le d(x_{n},z), $$

and so \(z \in C^{j}_{n}\). This implies that

$$\emptyset\ne\mathfrak{F}(\mathfrak{T}) \subset C^{j}_{n}, \quad j=1,\ldots,N, n \in \mathbb{N}. $$

Thus, the metric projection onto \(C^{j}_{n}\) is well defined. For \(n \in\mathbb{N}\), define \(Q_{n}:C \to C\) by

$$Q_{n}x=\bigoplus_{j=1}^{N} \beta^{j}_{n}P_{C^{j}_{n}}x, \quad x \in C. $$

Recall that the metric projection onto a nonempty closed convex subset of a complete \(\operatorname{CAT}(0)\) space is well defined and nonexpansive and that its fixed point set is the subset itself. Hence, it follows from Lemma 2.4 and condition (ii) that \(Q_{n}\) is nonexpansive and \(\mathfrak{F}(Q_{n})=\bigcap_{j=1}^{N}C^{j}_{n}\). According to our construction, we can write

$$\begin{aligned}& w_{n,k}=Q_{k}x_{n}, \quad k=1,\ldots,n, \\& x_{n+1}=\alpha_{n}f(x_{n}) \oplus(1- \alpha_{n})\bigoplus_{k=1}^{n} \gamma _{n,k}Q_{k}x_{n}, \quad n \in\mathbb{N}. \end{aligned}$$

Hence, Theorem 3.3 and conditions (C1)-(C3) and (D1)-(D3) assure the strong convergence of \(\{x_{n}\}\) to a point \(\hat{x} \in C\) such that \(\hat{x}=P_{F}f(\hat{x})\), where

$$F=\bigcap_{n=1}^{\infty}\mathfrak{F}(Q_{n})= \bigcap_{n=1}^{\infty}\bigcap _{j=1}^{N}C^{j}_{n}=\bigcap _{j=1}^{N}\bigcap _{n=1}^{\infty}C^{j}_{n}. $$

Notice that \(\mathfrak{F}(\mathfrak{T}) \subset F\). Condition (i) asserts that there exists a convergent subsequence \(\{ \delta_{n_{i}}\}\) of \(\{\delta_{n}\}\) such that \(\lim_{i \to\infty}\delta_{n_{i}}<1\). Since \(\hat{x} \in C^{j}_{n}\) for all \(j=1,\ldots,N\) and \(n \in\mathbb{N}\), we obtain

$$\begin{aligned} d(x_{n_{i}},\hat{x}) & \ge d\bigl(y^{j}_{n_{i}},\hat{x}\bigr) \\ & =d\bigl(\delta_{n_{i}}x_{n_{i}} \oplus(1-\delta_{n_{i}})T_{j}x_{n_{i}}, \hat{x}\bigr) \\ & \ge d\bigl(x_{n_{i}},\delta_{n_{i}}x_{n_{i}} \oplus(1- \delta _{n_{i}})T_{j}x_{n_{i}}\bigr)-d(x_{n_{i}}, \hat{x}) \\ & =(1-\delta_{n_{i}})d(x_{n_{i}},T_{j}x_{n_{i}})-d(x_{n_{i}}, \hat{x}), \end{aligned}$$

which yields

$$\frac{2}{1-\delta_{n_{i}}}d(x_{n_{i}},\hat{x}) \ge d(x_{n_{i}},T_{j}x_{n_{i}}). $$

We then take the limit as \(i \to\infty\) and get

$$\lim_{i \to\infty}d(x_{n_{i}},T_{j}x_{n_{i}})=0, \quad j=1,\ldots,N. $$

This shows that \(\hat{x} \in\widetilde{\mathfrak{F}}(T_{j})=\mathfrak {F}(T_{j})\) for \(j=1,\ldots,N\), that is, \(\hat{x} \in\mathfrak{F}(\mathfrak{T})\). Since \(\mathfrak{F}(\mathfrak{T}) \subset F\), we then have \(\hat{x}=P_{F}f(\hat{x})=P_{\mathfrak{F}(\mathfrak{T})}f(\hat{x})\), which completes the proof. □
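
As an illustration of Theorem 3.4, the following sketch runs the iteration in the Hadamard space \(X=\mathbb{R}^{d}\), where ‘⊕’ is the usual convex combination, each \(C^{j}_{n}\) is a half-space, and the convexity assumption on the sets \(\{z \in C:d(u,z) \le d(v,z)\}\) holds automatically. The mappings \(T_{j}\), the weak contraction f, and the concrete control sequences are illustrative assumptions only.

```python
# A minimal sketch of the iteration in Theorem 3.4, specialized to X = R^d,
# where each C_n^j is a half-space with a closed-form metric projection.
import numpy as np

def proj_halfspace(p, x, y):
    """Metric projection of p onto C = {z : ||y - z|| <= ||x - z||} in R^d."""
    a, b = x - y, 0.5 * (np.dot(x, x) - np.dot(y, y))
    viol = np.dot(a, p) - b
    return p if viol <= 0.0 or np.dot(a, a) == 0.0 else p - (viol / np.dot(a, a)) * a

def moudafi_viscosity_projection(T_list, f, x1, n_iter=100):
    N = len(T_list)
    x = np.asarray(x1, dtype=float)
    history = []                                    # stores (x_k, [y_k^1, ..., y_k^N])
    for n in range(1, n_iter + 1):
        alpha = 1.0 / (n + 1)                       # (C1)-(C3)
        delta = 0.5                                 # (i): liminf delta_n < 1
        beta = np.full(N, 1.0 / N)                  # (ii)
        gamma = np.array([2.0 ** -(k + 1) for k in range(n)])
        gamma[-1] = 2.0 ** -(n - 1)                 # (D1)-(D3)
        ys = [delta * x + (1 - delta) * T(x) for T in T_list]
        history.append((x.copy(), ys))
        w = [sum(beta[j] * proj_halfspace(x, x_k, ys_k[j]) for j in range(N))
             for x_k, ys_k in history]              # w_{n,k}, k = 1, ..., n
        x = alpha * f(x) + (1 - alpha) * sum(g * w_k for g, w_k in zip(gamma, w))
    return x
```

Taking f constant (\(f \equiv u\) for some \(u \in C\)) in this sketch corresponds to the Halpern-type scheme of Theorem 3.5 below.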

Consequently, when f is constant in Theorem 3.4 (note that every constant mapping is a φ-weak contraction, for instance, with \(\varphi(t)=t/2\)), we obtain the following strong convergence theorem by a new Halpern-type shrinking projection method.

Theorem 3.5

Let X, C, \(\mathfrak{T}=\{T_{j}:j=1, \ldots,N\}\), and the sequences \(\{\alpha_{n}\}\), \(\{\delta_{n}\}\), \(\{\beta^{j}_{n}:j=1,\ldots ,N\}\), \(\{\gamma_{n,k}:k \le n\}\) be as in Theorem  3.4. Let \(u,x_{1} \in C\) and define the sequence \(\{x_{n}\}\) by

$$\begin{aligned}& y^{j}_{n} =\delta_{n}x_{n} \oplus(1- \delta_{n})T_{j}x_{n}, \\& C^{j}_{n} =\bigl\{ z \in C:d\bigl(y^{j}_{n},z \bigr) \le d(x_{n},z)\bigr\} , \quad j=1,\ldots,N, \\& v^{j}_{n,k} =P_{C^{j}_{k}}x_{n}, \quad k=1, \ldots,n, j=1,\ldots,N, \\& w_{n,k} =\bigoplus_{j=1}^{N} \beta^{j}_{k}v^{j}_{n,k},\quad k=1, \ldots,n, \\& x_{n+1} = \alpha_{n}u \oplus(1-\alpha_{n})\bigoplus _{k=1}^{n}\gamma_{n,k}w_{n,k}. \end{aligned}$$

Then \(\{x_{n}\}\) converges strongly to the point \(P_{\mathfrak{F}(\mathfrak{T})}u\).