1 Introduction

Let C be a nonempty subset of a metric space \((X,d)\) and \(T:C\rightarrow C\) be a nonlinear mapping. The fixed point set of T is denoted by \(F(T)\), that is, \(F(T)=\{x\in C: x=Tx\}\). Recall that a mapping T is said to be nonexpansive if

$$d(Tx,Ty)\leq d(x,y) $$

for all \(x,y\in C\).

Many iterative schemes have been constructed and proposed to approximate fixed points of nonexpansive mappings. The Mann iteration process is defined as follows: \(x_{1}\in C\) and

$$ x_{n+1}=(1-\alpha_{n})x_{n}+ \alpha_{n}Tx_{n} $$
(1.1)

for each \(n\in\mathbb{N}\), where \(\{\alpha_{n}\}\) is a sequence in \((0,1)\). The Ishikawa iteration process is defined as follows: \(x_{1}\in C\) and

$$ \textstyle\begin{cases} x_{n+1}=(1-\alpha_{n})x_{n}+\alpha_{n}Ty_{n}, \\ y_{n}=(1-\beta_{n})x_{n}+\beta_{n}Tx_{n} \end{cases} $$
(1.2)

for each \(n\in\mathbb{N}\), where \(\{\alpha_{n}\}\) and \(\{\beta_{n}\}\) are sequences in \((0,1)\).

Recently, Agarwal et al. [1] introduced the following S-iteration process: \(x_{1}\in C\) and

$$ \textstyle\begin{cases} x_{n+1}=(1-\alpha_{n})Tx_{n}+\alpha_{n}Ty_{n}, \\ y_{n}=(1-\beta_{n})x_{n}+\beta_{n}Tx_{n} \end{cases} $$
(1.3)

for each \(n\in\mathbb{N}\), where \(\{\alpha_{n}\}\) and \(\{\beta_{n}\}\) are sequences in \((0,1)\).

It is noted in [1] that (1.3) is independent of (1.2) (and hence of (1.1)) and converges faster than (1.1) and (1.2) for contractions.
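
For illustration only, the following Python sketch runs the three schemes on the real line (a simple \(\operatorname{CAT}(0)\) space) for the mapping \(Tx=\cos x\); the mapping and the constant parameters \(\alpha_{n}=\beta_{n}=0.5\) are our own illustrative choices and are not taken from the references.

```python
import math

# Illustrative nonexpansive mapping on the real line; T(x) = cos(x) has a
# unique fixed point x* ~ 0.739085. The mapping and the constant step
# sizes below are our own choices, purely for demonstration.
T = math.cos

def mann(x, alpha=0.5, steps=50):                    # scheme (1.1)
    for _ in range(steps):
        x = (1 - alpha) * x + alpha * T(x)
    return x

def ishikawa(x, alpha=0.5, beta=0.5, steps=50):      # scheme (1.2)
    for _ in range(steps):
        y = (1 - beta) * x + beta * T(x)
        x = (1 - alpha) * x + alpha * T(y)
    return x

def s_iteration(x, alpha=0.5, beta=0.5, steps=50):   # scheme (1.3)
    for _ in range(steps):
        y = (1 - beta) * x + beta * T(x)
        x = (1 - alpha) * T(x) + alpha * T(y)
    return x

x0 = 2.0
print(mann(x0), ishikawa(x0), s_iteration(x0))  # all approach ~0.739085
```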

Fixed point theory in a \(\operatorname{CAT}(0)\) space was first studied by Kirk [2]. Since then, fixed point theory for various types of mappings in \(\operatorname{CAT}(0)\) spaces has developed rapidly. In 2008, Dhompongsa-Panyanak [3] studied the convergence of the processes (1.1) and (1.2) for nonexpansive mappings in \(\operatorname{CAT}(0)\) spaces. Subsequently, in 2011, Khan-Abbas [4] studied the convergence of an Ishikawa-type iteration process for two nonexpansive mappings in \(\operatorname{CAT}(0)\) spaces. They also modified the process (1.3) and studied the strong and Δ-convergence of the resulting S-iteration as follows: \(x_{1}\in C\) and

$$ \textstyle\begin{cases} x_{n+1}=(1-\alpha_{n})Tx_{n}\oplus\alpha_{n}Ty_{n}, \\ y_{n}=(1-\beta_{n})x_{n}\oplus\beta_{n}Tx_{n} \end{cases} $$

for each \(n\in\mathbb{N}\), where \(\{\alpha_{n}\}\) and \(\{\beta_{n}\}\) are sequences in \((0,1)\).

Some interesting results on fixed point problems for nonlinear mappings in the framework of \(\operatorname{CAT}(0)\) spaces can also be found, for example, in [5–14].

Let \((X,d)\) be a geodesic metric space and \(f:X\rightarrow(-\infty,\infty]\) be a proper and convex function. One of the major problems in optimization is to find \(x\in X\) such that

$$ f(x)=\min_{y\in X}f(y). $$

We denote by \(\operatorname{argmin}_{y\in X}f(y)\) the set of minimizers of f. A successful and powerful tool for solving this problem is the well-known proximal point algorithm (PPA for short), which was initiated by Martinet [15] in 1970. In 1976, Rockafellar [16] studied, via the PPA, the convergence to a solution of the convex minimization problem in the framework of Hilbert spaces.

Indeed, let f be a proper, convex, and lower semi-continuous function on a Hilbert space H which attains its minimum. The PPA is defined by \(x_{1}\in H\) and

$$ x_{n+1}=\mathop{\operatorname{argmin}}_{y\in H} \biggl(f(y)+ \frac{1}{2\lambda_{n}}\|y-x_{n}\|^{2} \biggr) $$

for each \(n\in\mathbb{N}\), where \(\lambda_{n}>0\) for all \(n\in\mathbb{N}\). It was proved that the sequence \(\{x_{n}\}\) converges weakly to a minimizer of f provided \(\sum_{n=1}^{\infty}\lambda_{n}=\infty\). However, as shown by Güler [17], the PPA does not necessarily converge strongly in general. In 2000, Kamimura-Takahashi [18] combined the PPA with Halpern’s algorithm [19] so that strong convergence is guaranteed (see also [20–23]).
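
As a concrete, purely illustrative instance of the PPA in the Euclidean case, the sketch below minimizes \(f(x)=|x|\), whose proximal step has the closed form of soft thresholding; the choice of f, of the starting point, and of the constant step sizes \(\lambda_{n}=1\) is ours.

```python
def prox_abs(x, lam):
    # Closed-form proximal map of f(y) = |y|:
    # argmin_y ( |y| + (1/(2*lam)) * (y - x)**2 )  (soft thresholding)
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def ppa(x, lambdas):
    # PPA step: x_{n+1} = argmin_y ( f(y) + 1/(2*lambda_n) * |y - x_n|^2 )
    for lam in lambdas:
        x = prox_abs(x, lam)
    return x

print(ppa(10.0, [1.0] * 20))  # converges to 0, the minimizer of f(x) = |x|
```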

In 2013, Bačák [24] introduced the PPA in a \(\operatorname{CAT}(0)\) space \((X,d)\) as follows: \(x_{1}\in X\) and

$$ x_{n+1}=\mathop{\operatorname{argmin}}_{y\in X} \biggl(f(y)+ \frac{1}{2\lambda_{n}}d^{2}(y,x_{n}) \biggr) $$

for each \(n\in\mathbb{N}\), where \(\lambda_{n}>0\) for all \(n\in\mathbb{N}\). Based on the concept of Fejér monotonicity, it was shown that, if f has a minimizer and \(\sum_{n=1}^{\infty}\lambda_{n}=\infty\), then the sequence \(\{x_{n}\}\) Δ-converges to a minimizer of f (see also [25]). In 2014, Bačák [26] employed a split version of the PPA for minimizing a sum of convex functions in complete \(\operatorname{CAT}(0)\) spaces. Other interesting results can also be found in [25, 27, 28].

Recently, many convergence results obtained by the PPA for solving optimization problems have been extended from classical linear spaces, such as Euclidean, Hilbert, and Banach spaces, to the setting of manifolds [27–30]. Minimizers of convex objective functionals in such nonlinear spaces play a crucial role in analysis and geometry. Numerous applications in computer vision, machine learning, electronic structure computation, system balancing, and robot manipulation can be formulated as optimization problems on manifolds (see [31–34]).

A question arises naturally:

Can we establish strong convergence of the sequence to minimizers of a convex function and to fixed points of nonexpansive mappings in \(\operatorname{CAT}(0)\) spaces?

Motivated by these works, we propose a modified proximal point algorithm based on the S-type iteration process for two nonexpansive mappings in \(\operatorname{CAT}(0)\) spaces and prove convergence theorems for the proposed process under mild conditions.

2 Preliminaries and lemmas

Let \((X,d)\) be a metric space and \(x,y\in X\) with \(d(x,y)=l\). A geodesic path from x to y is an isometry \(c:[0,l]\rightarrow X\) such that \(c(0)=x\) and \(c(l)=y\). The image of a geodesic path is called a geodesic segment. The space \((X,d)\) is said to be a geodesic space if every two points of X are joined by a geodesic. A space \((X,d)\) is a uniquely geodesic space if every two points of X are joined by exactly one geodesic. A geodesic triangle \(\triangle(x_{1},x_{2},x_{3})\) in a geodesic metric space \((X,d)\) consists of three points \(x_{1}\), \(x_{2}\), \(x_{3}\) in X and a geodesic segment between each pair of vertices. A comparison triangle for the geodesic triangle \(\triangle(x_{1},x_{2},x_{3})\) in \((X,d)\) is a triangle \(\bar{\triangle}(x_{1},x_{2},x_{3}):=\triangle(\bar{x}_{1},\bar{x}_{2},\bar{x}_{3})\) in Euclidean space \(\mathbb{R}^{2}\) such that \(d_{\mathbb{R}^{2}}(\bar{x}_{i},\bar{x}_{j})=d(x_{i},x_{j})\) for each \(i,j\in\{1,2,3\}\). A geodesic space is said to be a \(\operatorname{CAT}(0)\) space if, for each geodesic triangle \(\triangle(x_{1},x_{2},x_{3})\) in X and its comparison triangle \(\bar{\triangle}:=\triangle(\bar{x}_{1},\bar{x}_{2},\bar{x}_{3})\) in \(\mathbb{R}^{2}\), the \(\operatorname{CAT}(0)\) inequality

$$ d(x,y)\leq d_{\mathbb{R}^{2}}(\bar{x},\bar{y}) $$

is satisfied for all \(x,y\in\triangle\) and comparison points \(\bar{x},\bar{y}\in\bar{\triangle}\). We write \((1-t)x\oplus ty\) for the unique point z in the geodesic segment joining x to y such that \(d(x,z)=td(x,y)\) and \(d(y,z)=(1-t)d(x,y)\). We also denote by \([x,y]\) the geodesic segment joining x to y, that is, \([x,y]=\{(1-t)x\oplus ty: t\in[0,1]\}\). A subset C of a \(\operatorname{CAT}(0)\) space is said to be convex if \([x,y]\subset C\) for all \(x,y\in C\). For more details, the reader may consult [35]. A geodesic space X is a \(\operatorname{CAT}(0)\) space if and only if

$$ d^{2} \bigl((1-t)x\oplus ty,z \bigr)\leq (1-t)d^{2}(x,z)+td^{2}(y,z)-t(1-t)d^{2}(x,y) $$
(2.1)

for all \(x,y,z\in X\) and \(t\in[0,1]\) [36]. In particular, if x, y, z are points in X and \(t\in[0,1]\), then we have

$$ d \bigl((1-t)x\oplus ty,z \bigr)\leq(1-t)d(x,z)+td(y,z). $$
(2.2)
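
In a Hilbert space, which is a model \(\operatorname{CAT}(0)\) space, inequality (2.1) in fact holds with equality. The following sketch checks this numerically in \(\mathbb{R}^{2}\); the points and parameter values are our own arbitrary choices, for illustration only.

```python
import numpy as np

def lhs_rhs(x, y, z, t):
    # Left- and right-hand sides of inequality (2.1) in R^2, where
    # (1-t)x (+) t y is the usual convex combination.
    m = (1 - t) * x + t * y
    lhs = np.dot(m - z, m - z)
    rhs = ((1 - t) * np.dot(x - z, x - z)
           + t * np.dot(y - z, y - z)
           - t * (1 - t) * np.dot(x - y, x - y))
    return lhs, rhs

x, y, z = np.array([0.0, 0.0]), np.array([4.0, 1.0]), np.array([1.0, 3.0])
for t in (0.0, 0.25, 0.5, 0.9):
    lhs, rhs = lhs_rhs(x, y, z, t)
    print(t, lhs, rhs)  # equal (up to rounding) in the Euclidean case
```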

The following examples are \(\operatorname{CAT}(0)\) spaces:

(1) Euclidean spaces \(\mathbb{R}^{n}\);

(2) Hilbert spaces;

(3) simply connected Riemannian manifolds of nonpositive sectional curvature;

(4) hyperbolic spaces;

(5) trees.

Let \((X,d)\) be a complete \(\operatorname{CAT}(0)\) space and C be a nonempty closed and convex subset of X. Then, for each point \(x\in X\), there exists a unique point of C, denoted by \(P_{C}x\), such that

$$d(x,P_{C}x)=\inf_{y\in C}d(x,y) $$

(see [35]). Such a mapping \(P_{C}\) is called the metric projection from X onto C.
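
As a simple illustration, in \(\mathbb{R}^{n}\) the metric projection onto a closed ball has a closed form; the ball and the test point in the sketch below are our own illustrative choices.

```python
import numpy as np

def project_ball(x, center, radius):
    # Metric projection P_C x onto the closed ball
    # C = {y : ||y - center|| <= radius}, i.e. the nearest point of C to x.
    v = x - center
    norm = np.linalg.norm(v)
    if norm <= radius:
        return x.copy()
    return center + (radius / norm) * v

x = np.array([3.0, 4.0])
print(project_ball(x, np.array([0.0, 0.0]), 1.0))  # [0.6, 0.8]
```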

Let \(\{x_{n}\}\) be a bounded sequence in a closed convex subset C of a \(\operatorname{CAT}(0)\) space X. For any \(x\in X\), we set

$$ r\bigl(x,\{x_{n}\}\bigr)=\limsup_{n\rightarrow\infty}d(x,x_{n}). $$

The asymptotic radius \(r(\{x_{n}\})\) of \(\{x_{n}\}\) is given by

$$ r\bigl(\{x_{n}\}\bigr)=\inf \bigl\{ r\bigl(x,\{x_{n}\}\bigr): x\in X \bigr\} $$

and the asymptotic center \(A(\{x_{n}\})\) of \(\{x_{n}\}\) is the set

$$ A\bigl(\{x_{n}\}\bigr)= \bigl\{ x\in X: r\bigl(\{x_{n}\} \bigr)=r\bigl(x,\{x_{n}\}\bigr) \bigr\} . $$

It is well known that, in \(\operatorname{CAT}(0)\) spaces, \(A(\{x_{n}\})\) consists of exactly one point [37].

Definition 2.1

A sequence \(\{x_{n}\}\) in a \(\operatorname{CAT}(0)\) space X is said to Δ-converge to a point \(x\in X\) if x is the unique asymptotic center of \(\{u_{n}\}\) for every subsequence \(\{u_{n}\}\) of \(\{x_{n}\}\).

In this case, we write \(\Delta\mbox{-}\!\lim_{n\rightarrow\infty}x_{n}=x\) and call x the Δ-limit of \(\{x_{n}\}\). We denote \(w_{\Delta}(x_{n}):=\bigcup\{A(\{u_{n}\})\}\), where the union is taken over all subsequences \(\{u_{n}\}\) of \(\{x_{n}\}\).

Recall that a bounded sequence \(\{x_{n}\}\) in X is said to be regular if \(r(\{x_{n}\})=r(\{u_{n}\})\) for every subsequence \(\{u_{n}\}\) of \(\{x_{n}\}\). It is well known that every bounded sequence in X has a Δ-convergent subsequence [38].

Lemma 2.2

[3]

Let C be a closed and convex subset of a complete \(\operatorname{CAT}(0)\) space X and \(T:C\rightarrow C\) be a nonexpansive mapping. Let \(\{x_{n}\}\) be a bounded sequence in C such that \(\lim_{n\rightarrow\infty}d(x_{n},Tx_{n})=0\) and \(\Delta\mbox{-}\!\lim_{n\rightarrow\infty}x_{n}=x\). Then \(x=Tx\).

Lemma 2.3

[3]

If \(\{x_{n}\}\) is a bounded sequence in a complete \(\operatorname{CAT}(0)\) space with \(A(\{x_{n}\})=\{x\}\), \(\{u_{n}\}\) is a subsequence of \(\{x_{n}\}\) with \(A(\{u_{n}\})=\{u\}\) and the sequence \(\{d(x_{n},u)\}\) converges, then \(x=u\).

Recall that a function \(f : C\rightarrow(-\infty,\infty]\) defined on a convex subset C of a \(\operatorname{CAT}(0)\) space is convex if, for any geodesic \(\gamma: [a, b]\rightarrow C\), the function \(f\circ\gamma\) is convex. Some important examples can be found in [35]. We say that a function f defined on C is lower semi-continuous at a point \(x\in C\) if

$$f(x)\leq\liminf_{n\rightarrow\infty}f(x_{n}) $$

for each sequence \(x_{n}\rightarrow x\). A function f is said to be lower semi-continuous on C if it is lower semi-continuous at any point in C.

For any \(\lambda>0\), the Moreau-Yosida resolvent of f in a \(\operatorname{CAT}(0)\) space X is defined by

$$ J_{\lambda}(x)=\operatorname{argmin}_{y\in X} \biggl[f(y)+\frac{1}{2\lambda}d^{2}(y,x) \biggr] $$
(2.3)

for all \(x\in X\). The mapping \(J_{\lambda}\) is well defined for all \(\lambda>0\) (see [39, 40]).

Let \(f:X\rightarrow(-\infty,\infty]\) be a proper convex and lower semi-continuous function. It was shown in [25] that the set \(F(J_{\lambda})\) of fixed points of the resolvent associated with f coincides with the set \(\operatorname{argmin}_{y\in X}f(y)\) of minimizers of f.
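
To illustrate (2.3) and the identity \(F(J_{\lambda})=\operatorname{argmin}_{y\in X}f(y)\) in the Euclidean case, the sketch below uses the quadratic \(f(y)=\frac{1}{2}(y-3)^{2}\) (our illustrative choice), for which the resolvent has a closed form, and checks that iterating \(J_{\lambda}\) drives the iterates to the minimizer \(y=3\).

```python
def f(y):
    return 0.5 * (y - 3.0) ** 2  # illustrative convex function, minimizer y = 3

def J(lam, x):
    # Moreau-Yosida resolvent (2.3) for the quadratic f above (closed form):
    # argmin_y [ f(y) + 1/(2*lam) * (y - x)**2 ] = (3*lam + x) / (lam + 1)
    return (3.0 * lam + x) / (lam + 1.0)

x, lam = 10.0, 2.0
for _ in range(30):
    x = J(lam, x)    # iterating the resolvent drives x toward the minimizer
print(x)             # ~3.0, so the fixed point of J_lam is argmin f
```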

Lemma 2.4

[39]

Let \((X,d)\) be a complete \(\operatorname{CAT}(0)\) space and \(f:X\rightarrow(-\infty ,\infty]\) be proper convex and lower semi-continuous. For any \(\lambda>0\), the resolvent \(J_{\lambda}\) of f is nonexpansive.

Lemma 2.5

[41]

Let \((X,d)\) be a complete \(\operatorname{CAT}(0)\) space and \(f:X\rightarrow(-\infty,\infty]\) be proper convex and lower semi-continuous. Then, for all \(x,y\in X\) and \(\lambda>0\), we have

$$ \frac{1}{2\lambda}d^{2}(J_{\lambda}x,y)-\frac{1}{2\lambda}d^{2}(x,y) +\frac{1}{2\lambda}d^{2}(x,J_{\lambda}x)+f(J_{\lambda}x)\leq f(y). $$

Proposition 2.6

[39, 40] (The resolvent identity)

Let \((X,d)\) be a complete \(\operatorname{CAT}(0)\) space and \(f:X\rightarrow(-\infty,\infty]\) be proper convex and lower semi-continuous. Then the following identity holds:

$$ J_{\lambda}x=J_{\mu} \biggl(\frac{\lambda-\mu}{\lambda}J_{\lambda}x \oplus \frac{\mu}{\lambda}x \biggr) $$

for all \(x\in X\) and \(\lambda>\mu>0\).
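
The resolvent identity can be checked numerically in the Euclidean case. The sketch below uses \(f(y)=\frac{1}{2}ay^{2}\) (our illustrative choice), for which \(J_{\lambda}x=x/(1+\lambda a)\), and verifies the identity for one pair \(\lambda>\mu>0\).

```python
def J(lam, x, a=2.0):
    # Resolvent of f(y) = 0.5 * a * y**2 on R: J_lam(x) = x / (1 + lam * a)
    return x / (1.0 + lam * a)

x, lam, mu = 5.0, 3.0, 1.0     # the identity requires lam > mu > 0
lhs = J(lam, x)
rhs = J(mu, ((lam - mu) / lam) * J(lam, x) + (mu / lam) * x)
print(lhs, rhs)                # both sides agree, as the identity asserts
```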

For more results in \(\operatorname{CAT}(0)\) spaces, refer to [42].

3 Main results

We are now ready to prove our main results.

Theorem 3.1

Let \((X,d)\) be a complete \(\operatorname{CAT}(0)\) space and \(f:X\rightarrow(-\infty,\infty]\) be a proper convex and lower semi-continuous function. Let \(T_{1}\) and \(T_{2}\) be nonexpansive mappings on X such that \(\Omega=F(T_{1})\cap F(T_{2})\cap \operatorname{argmin}_{y\in X} f(y)\) is nonempty. Assume that \(\{\alpha_{n}\}\) and \(\{\beta_{n}\}\) are sequences such that \(0< a\leq\alpha_{n}\), \(\beta_{n}\leq b<1\) for all \(n\in\mathbb{N}\) and for some a, b, and \(\{\lambda_{n}\}\) is a sequence such that \(\lambda_{n}\geq\lambda>0\) for all \(n\in\mathbb{N}\) and for some λ. Let \(\{x_{n}\}\) be generated in the following manner:

$$ \textstyle\begin{cases} z_{n}=\operatorname{argmin}_{y\in X} [f(y)+\frac{1}{2\lambda_{n}}d^{2}(y,x_{n}) ], \\ y_{n}=(1-\beta_{n})x_{n}\oplus\beta_{n}T_{1}z_{n}, \\ x_{n+1}=(1-\alpha_{n})T_{1}x_{n}\oplus\alpha_{n}T_{2}y_{n} \end{cases} $$

for each \(n\in\mathbb{N}\). Then we have the following:

(1) \(\lim_{n\rightarrow\infty}d(x_{n},q)\) exists for all \(q\in\Omega\);

(2) \(\lim_{n\rightarrow\infty}d(x_{n},z_{n})=0\);

(3) \(\lim_{n\rightarrow\infty}d(x_{n},T_{1}x_{n})=\lim_{n\rightarrow\infty}d(x_{n},T_{2}x_{n})=0\).

Proof

Let \(q\in\Omega\). Then \(q=T_{1}q=T_{2}q\) and \(f(q)\leq f(y)\) for all \(y\in X\). It follows that

$$ f(q)+\frac{1}{2\lambda_{n}}d^{2}(q,q)\leq f(y)+\frac{1}{2\lambda_{n}}d^{2}(y,q) $$

for all \(y\in X\) and hence \(q=J_{\lambda_{n}}q\) for all \(n\in\mathbb{N}\).

(1) We show that \(\lim_{n\rightarrow\infty}d(x_{n},q)\) exists. Noting that \(z_{n}=J_{\lambda_{n}}x_{n}\) for all \(n\in\mathbb{N}\), we have, by Lemma 2.4,

$$ d(z_{n},q)=d(J_{\lambda_{n}}x_{n},J_{\lambda_{n}}q) \leq d(x_{n},q). $$
(3.1)

Also, we have, by (2.2) and (3.1),

$$\begin{aligned} d(y_{n},q) =&d\bigl((1-\beta_{n})x_{n} \oplus\beta_{n}T_{1}z_{n},q\bigr) \\ \leq&(1-\beta_{n})d(x_{n},q)+\beta_{n}d(T_{1}z_{n},q) \\ \leq&(1-\beta_{n})d(x_{n},q)+\beta_{n}d(z_{n},q) \\ \leq&d(x_{n},q). \end{aligned}$$
(3.2)

So, by (3.2), we obtain

$$\begin{aligned} d(x_{n+1},q) =&d\bigl((1-\alpha_{n})T_{1}x_{n} \oplus\alpha _{n}T_{2}y_{n},q\bigr) \\ \leq&(1-\alpha_{n})d(T_{1}x_{n},q)+ \alpha_{n}d(T_{2}y_{n},q) \\ \leq&(1-\alpha_{n})d(x_{n},q)+\alpha_{n}d(y_{n},q) \\ \leq&d(x_{n},q). \end{aligned}$$
(3.3)

This shows that \(\lim_{n\rightarrow\infty}d(x_{n},q)\) exists. Hence \(\lim_{n\rightarrow\infty}d(x_{n},q)=c\) for some c.

(2) We show that \(\lim_{n\rightarrow\infty}d(x_{n},z_{n})=0\). By Lemma 2.5, we see that

$$ \frac{1}{2\lambda_{n}}d^{2}(z_{n},q)-\frac{1}{2\lambda _{n}}d^{2}(x_{n},q)+ \frac{1}{2\lambda_{n}}d^{2}(x_{n},z_{n})\leq f(q)-f(z_{n}). $$

Since \(f(q)\leq f(z_{n})\) for all \(n\in\mathbb{N}\), it follows that

$$ d^{2}(x_{n},z_{n})\leq d^{2}(x_{n},q)-d^{2}(z_{n},q). $$

In order to show that \(\lim_{n\rightarrow\infty}d(x_{n},z_{n})=0\), it suffices to show that

$$\lim_{n\rightarrow\infty}d(z_{n},q)=c. $$

In fact, from (3.3), we have

$$ d(x_{n+1},q)\leq (1-\alpha_{n})d(x_{n},q)+ \alpha_{n}d(y_{n},q), $$

which is equivalent to

$$\begin{aligned} d(x_{n},q) \leq& \frac{1}{\alpha_{n}} \bigl(d(x_{n},q)-d(x_{n+1},q) \bigr)+d(y_{n},q) \\ \leq&\frac{1}{a} \bigl(d(x_{n},q)-d(x_{n+1},q) \bigr)+d(y_{n},q) \end{aligned}$$

since \(d(x_{n+1},q)\leq d(x_{n},q)\) and \(\alpha_{n}\geq a>0\) for all \(n\in\mathbb{N}\). Hence we have \(c=\liminf_{n\rightarrow\infty}d(x_{n}, q)\leq\liminf_{n\rightarrow\infty }d(y_{n},q)\).

On the other hand, by (3.2), we see that

$$ \limsup_{n\rightarrow\infty}d(y_{n},q)\leq\limsup _{n\rightarrow\infty }d(x_{n},q)=c. $$

Therefore, we have \(\lim_{n\rightarrow\infty}d(y_{n},q)=c\). Also, by (3.2), we have

$$\begin{aligned} d(x_{n},q) \leq& \frac{1}{\beta_{n}} \bigl(d(x_{n},q)-d(y_{n},q) \bigr)+d(z_{n},q) \\ \leq&\frac{1}{a} \bigl(d(x_{n},q)-d(y_{n},q) \bigr)+d(z_{n},q), \end{aligned}$$

which yields

$$ c=\liminf_{n\rightarrow\infty}d(x_{n},q)\leq\liminf _{n\rightarrow\infty }d(z_{n},q). $$

From this, together with (3.1), we conclude that

$$ \lim_{n\rightarrow\infty}d(z_{n},q)=c. $$

This shows that

$$ \lim_{n\rightarrow\infty}d(x_{n},z_{n})=0 $$
(3.4)

and so we prove (2).

(3) We show that

$$ \lim_{n\rightarrow\infty}d(x_{n},T_{1}x_{n})= \lim_{n\rightarrow\infty }d(x_{n},T_{2}x_{n})=0. $$
(3.5)

We observe that

$$\begin{aligned} d^{2}(y_{n},q) =&d^{2}\bigl((1- \beta_{n})x_{n}\oplus\beta _{n}T_{1}z_{n},q \bigr) \\ \leq&(1-\beta_{n})d^{2}(x_{n},q)+ \beta_{n}d^{2}(T_{1}z_{n},q)-\beta _{n}(1-\beta_{n})d^{2}(x_{n},T_{1}z_{n}) \\ \leq&d^{2}(x_{n},q)-a(1-b)d^{2}(x_{n},T_{1}z_{n}). \end{aligned}$$
(3.6)

This implies that

$$\begin{aligned} \begin{aligned} d^{2}(x_{n},T_{1}z_{n})&\leq \frac {1}{a(1-b)}\bigl(d^{2}(x_{n},q)-d^{2}(y_{n},q) \bigr) \\ &\rightarrow 0 \end{aligned} \end{aligned}$$

as \(n\rightarrow\infty\). Hence we have

$$ \lim_{n\rightarrow\infty}d(x_{n},T_{1}z_{n})=0. $$
(3.7)

It follows from (3.4) and (3.7) that

$$\begin{aligned} d(x_{n},T_{1}x_{n}) \leq& d(x_{n},T_{1}z_{n})+d(T_{1}z_{n},T_{1}x_{n}) \\ \leq& d(x_{n},T_{1}z_{n})+d(z_{n},x_{n}) \\ \rightarrow& 0 \end{aligned}$$
(3.8)

as \(n\rightarrow\infty\). Similarly, we obtain

$$\begin{aligned} d^{2}(x_{n+1},q) \leq&(1-\alpha_{n})d^{2}(T_{1}x_{n},q)+ \alpha_{n}d^{2}(T_{2}y_{n},q) - \alpha_{n}(1-\alpha_{n})d^{2}(T_{1}x_{n},T_{2}y_{n}) \\ \leq&d^{2}(x_{n},q)-a(1-b)d^{2}(T_{1}x_{n},T_{2}y_{n}), \end{aligned}$$

which implies

$$\begin{aligned} d^{2}(T_{1}x_{n},T_{2}y_{n}) \leq&\frac {1}{a(1-b)}\bigl(d^{2}(x_{n},q)-d^{2}(x_{n+1},q) \bigr) \\ \rightarrow& 0 \end{aligned}$$
(3.9)

as \(n\rightarrow\infty\). From (3.7), we have

$$ d(y_{n},x_{n})=\beta_{n}d(T_{1}z_{n},x_{n}) \rightarrow0 $$
(3.10)

as \(n\rightarrow\infty\). From (3.7), (3.9), and (3.10), it follows that

$$\begin{aligned} d(x_{n},T_{2}x_{n}) \leq &d(x_{n},T_{1}x_{n})+d(T_{1}x_{n},T_{2}y_{n})+d(T_{2}y_{n},T_{2}x_{n}) \\ \leq &d(x_{n},T_{1}x_{n})+d(T_{1}x_{n},T_{2}y_{n})+d(x_{n},y_{n}) \\ \rightarrow& 0 \end{aligned}$$
(3.11)

as \(n\rightarrow\infty\) and so we prove (3). This completes the proof. □

Next, we prove the Δ-convergence of our iteration.

Theorem 3.2

Let \((X,d)\) be a complete \(\operatorname{CAT}(0)\) space and \(f:X\rightarrow(-\infty ,\infty]\) be a proper convex and lower semi-continuous function. Let \(T_{1}\) and \(T_{2}\) be nonexpansive mappings on X such that \(\Omega=F(T_{1})\cap F(T_{2})\cap \operatorname{argmin}_{y\in X} f(y)\) is nonempty. Assume that \(\{\alpha_{n}\}\) and \(\{\beta_{n}\}\) are sequences such that \(0< a\leq\alpha_{n}\), \(\beta_{n}\leq b<1\) for all \(n\in\mathbb{N}\) and for some a, b, and \(\{\lambda_{n}\}\) is a sequence such that \(\lambda_{n}\geq\lambda>0\) for all \(n\in \mathbb{N}\) and for some λ. Let \(\{x_{n}\}\) be generated in the following manner:

$$ \textstyle\begin{cases} z_{n}=\operatorname{argmin}_{y\in X} [f(y)+\frac{1}{2\lambda_{n}}d^{2}(y,x_{n}) ], \\ y_{n}=(1-\beta_{n})x_{n}\oplus\beta_{n}T_{1}z_{n}, \\ x_{n+1}=(1-\alpha_{n})T_{1}x_{n}\oplus\alpha_{n}T_{2}y_{n} \end{cases} $$
(3.12)

for each \(n\in\mathbb{N}\). Then the sequence \(\{x_{n}\}\) Δ-converges to a common element of Ω.

Proof

Since \(\lambda_{n}\geq\lambda>0\), by Proposition 2.6 and Theorem 3.1(2),

$$\begin{aligned} d(J_{\lambda}x_{n},J_{\lambda_{n}}x_{n}) =& d \biggl(J_{\lambda}x_{n},J_{\lambda} \biggl( \frac{\lambda_{n}-\lambda }{\lambda_{n}}J_{\lambda_{n}}x_{n} \oplus\frac{\lambda}{\lambda_{n}}x_{n} \biggr) \biggr) \\ \leq&d \biggl(x_{n}, \biggl(1-\frac{\lambda}{\lambda_{n}} \biggr)J_{\lambda _{n}}x_{n}\oplus\frac{\lambda}{\lambda_{n}}x_{n} \biggr) \\ =& \biggl(1-\frac{\lambda}{\lambda_{n}} \biggr)d(x_{n},z_{n}) \\ \rightarrow&0 \end{aligned}$$

as \(n\rightarrow\infty\). So, we obtain

$$\begin{aligned} d(x_{n},J_{\lambda}x_{n}) \leq&d(x_{n},z_{n})+d(z_{n},J_{\lambda }x_{n}) \\ \rightarrow&0 \end{aligned}$$

as \(n\rightarrow\infty\). Theorem 3.1(1) shows that \(\lim_{n\rightarrow\infty}d(x_{n},q)\) exists for all \(q\in\Omega\) and Theorem 3.1(3) also implies that \(\lim_{n\rightarrow\infty}d(x_{n},T_{i}x_{n})=0\) for all \(i=1,2\).

Next, we show that \(w_{\Delta}(x_{n})\subset\Omega\). Let \(u\in w_{\Delta}(x_{n})\). Then there exists a subsequence \(\{u_{n}\}\) of \(\{x_{n}\}\) such that \(A(\{u_{n}\})=\{u\}\). Since \(\{u_{n}\}\) is bounded, it has a subsequence \(\{v_{n}\}\) such that \(\Delta\mbox{-}\!\lim_{n\rightarrow\infty}v_{n}=v\) for some \(v\in X\) [38]. Since \(\lim_{n\rightarrow\infty}d(v_{n},T_{i}v_{n})=0\) for \(i=1,2\) and \(\lim_{n\rightarrow\infty}d(v_{n},J_{\lambda}v_{n})=0\), Lemma 2.2 implies that \(v\in F(T_{1})\cap F(T_{2})\cap F(J_{\lambda})=\Omega\). Since \(v\in\Omega\), \(\lim_{n\rightarrow\infty}d(x_{n},v)\) exists, and so \(u=v\) by Lemma 2.3. This shows that \(w_{\Delta}(x_{n})\subset\Omega\).

Finally, we show that the sequence \(\{x_{n}\}\) Δ-converges to a point in Ω. To this end, it suffices to show that \(w_{\Delta}(x_{n})\) consists of exactly one point. Let \(\{u_{n}\}\) be a subsequence of \(\{x_{n}\}\) with \(A(\{u_{n}\})=\{u\}\) and let \(A(\{x_{n}\})=\{x\}\). Since \(u\in w_{\Delta}(x_{n})\subset\Omega\) and \(\{d(x_{n},u)\}\) converges, by Lemma 2.3, we have \(x=u\). Hence \(w_{\Delta}(x_{n})=\{x\}\). This completes the proof. □

If \(T_{1}=T_{2}=T\) in Theorem 3.2, then we obtain the following result.

Corollary 3.3

Let \((X,d)\) be a complete \(\operatorname{CAT}(0)\) space and \(f:X\rightarrow(-\infty ,\infty]\) be a proper convex and lower semi-continuous function. Let T be a nonexpansive mapping on X such that \(\Omega=F(T)\cap \operatorname{argmin}_{y\in X} f(y)\) is nonempty. Assume that \(\{\alpha_{n}\}\) and \(\{\beta_{n}\}\) are sequences such that \(0< a\leq\alpha_{n}\), \(\beta_{n}\leq b<1\) for all \(n\in\mathbb{N}\) and for some a, b, and \(\{\lambda_{n}\}\) is a sequence such that \(\lambda_{n}\geq\lambda>0\) for all \(n\in\mathbb{N}\) and for some λ. Let \(\{x_{n}\}\) be generated in the following manner:

$$ \textstyle\begin{cases} z_{n}=\operatorname{argmin}_{y\in X} [f(y)+\frac{1}{2\lambda_{n}}d^{2}(y,x_{n}) ], \\ y_{n}=(1-\beta_{n})x_{n}\oplus\beta_{n}Tz_{n}, \\ x_{n+1}=(1-\alpha_{n})Tx_{n}\oplus\alpha_{n}Ty_{n} \end{cases} $$

for each \(n\in\mathbb{N}\). Then the sequence \(\{x_{n}\}\) Δ-converges to a common element of Ω.

Since every Hilbert space is a complete \(\operatorname{CAT}(0)\) space, we obtain directly the following result.

Corollary 3.4

Let H be a Hilbert space and \(f:H\rightarrow(-\infty,\infty]\) be a proper convex and lower semi-continuous function. Let \(T_{1}\) and \(T_{2}\) be nonexpansive mappings on H such that \(\Omega=F(T_{1})\cap F(T_{2})\cap \operatorname{argmin}_{y\in H} f(y)\) is nonempty. Assume that \(\{\alpha_{n}\}\) and \(\{\beta_{n}\}\) are sequences such that \(0< a\leq\alpha_{n}\), \(\beta_{n}\leq b<1\) for all \(n\in\mathbb{N}\) and for some a, b, and \(\{\lambda_{n}\}\) is a sequence such that \(\lambda_{n}\geq\lambda>0\) for all \(n\in \mathbb{N}\) and for some λ. Let \(\{x_{n}\}\) be generated in the following manner:

$$ \textstyle\begin{cases} z_{n}=\operatorname{argmin}_{y\in H} [f(y)+\frac{1}{2\lambda_{n}}\|y-x_{n}\| ^{2} ], \\ y_{n}=(1-\beta_{n})x_{n}+\beta_{n}T_{1}z_{n}, \\ x_{n+1}=(1-\alpha_{n})T_{1}x_{n}+\alpha_{n}T_{2}y_{n} \end{cases} $$

for each \(n\in\mathbb{N}\). Then the sequence \(\{x_{n}\}\) weakly converges to a common element of Ω.
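
For illustration only, the following sketch runs this Hilbert-space version in \(\mathbb{R}^{2}\) with \(f(y)=\frac{1}{2}\|y-p\|^{2}\) and two nonexpansive mappings fixing the same point p; all of these choices, including \(\alpha_{n}=\beta_{n}=0.5\) and \(\lambda_{n}=1\), are ours and are made only so that the hypotheses of Corollary 3.4 hold.

```python
import numpy as np

p = np.array([1.0, 2.0])     # common element of Omega in this toy example

def prox(x, lam):
    # z_n = argmin_y [ 0.5*||y - p||^2 + 1/(2*lam)*||y - x||^2 ]
    #     = (lam*p + x) / (lam + 1)
    return (lam * p + x) / (lam + 1.0)

def T1(y):
    # Rotation by 30 degrees about p: an isometry (hence nonexpansive), F(T1) = {p}.
    c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
    R = np.array([[c, -s], [s, c]])
    return p + R @ (y - p)

def T2(y):
    # Averaging toward p: nonexpansive with F(T2) = {p}.
    return 0.5 * (y + p)

x = np.array([10.0, -4.0])
alpha = beta = 0.5
lam = 1.0
for n in range(200):
    z = prox(x, lam)
    y = (1 - beta) * x + beta * T1(z)
    x = (1 - alpha) * T1(x) + alpha * T2(y)
print(x)   # approaches p = [1, 2], the unique common element of Omega
```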

Next, we establish the strong convergence theorems of our iteration.

Theorem 3.5

Let X be a complete \(\operatorname{CAT}(0)\) space and \(f:X\rightarrow(-\infty,\infty]\) be a proper convex and lower semi-continuous function. Let \(T_{1}\) and \(T_{2}\) be nonexpansive mappings on X such that \(\Omega=F(T_{1})\cap F(T_{2})\cap \operatorname{argmin}_{y\in X} f(y)\) is nonempty. Assume that \(\{\alpha_{n}\}\) and \(\{\beta_{n}\}\) are sequences such that \(0< a\leq\alpha_{n}\), \(\beta_{n}\leq b<1\) for all \(n\in\mathbb{N}\) and for some a, b, and \(\{\lambda_{n}\}\) is a sequence such that \(\lambda_{n}\geq\lambda>0\) for all \(n\in\mathbb{N}\) and for some λ. Then the sequence \(\{x_{n}\}\) generated by (3.12) strongly converges to a common element of Ω if and only if \(\liminf_{n\rightarrow\infty}d(x_{n},\Omega)=0\), where \(d(x,\Omega)=\inf\{d(x,q):q\in\Omega\}\).

Proof

The necessity is obvious. Conversely, suppose that \(\liminf_{n\rightarrow\infty}d(x_{n},\Omega)=0\). Since

$$ d(x_{n+1},q) \leq d(x_{n},q) $$

for all \(q\in\Omega\), it follows that

$$ d(x_{n+1},\Omega)\leq d(x_{n},\Omega). $$

Hence \(\lim_{n\rightarrow\infty}d(x_{n},\Omega)\) exists and equals 0. Following the proof of Theorem 2 of [4], we can show that \(\{x_{n}\}\) is a Cauchy sequence in X. Since X is complete, \(\{x_{n}\}\) converges to a point \(x^{*}\in X\), and so \(d(x^{*},\Omega)=0\). Since Ω is closed, \(x^{*}\in\Omega\). This completes the proof. □

A family \(\{A, B, C\}\) of mappings is said to satisfy the condition \((\Omega)\) if there exists a nondecreasing function \(g:[0,\infty)\rightarrow[0,\infty)\) with \(g(0)=0\) and \(g(r)>0\) for all \(r\in(0,\infty)\) such that \(d(x,Ax)\geq g(d(x,F))\) or \(d(x,Bx)\geq g(d(x,F))\) or \(d(x,Cx)\geq g(d(x,F))\) for all \(x\in X\), where \(F=F(A)\cap F(B)\cap F(C)\).

Theorem 3.6

Let X be a complete \(\operatorname{CAT}(0)\) space and \(f:X\rightarrow(-\infty,\infty]\) be a proper convex and lower semi-continuous function. Let \(T_{1}\) and \(T_{2}\) be nonexpansive mappings on X such that \(\Omega=F(T_{1})\cap F(T_{2})\cap \operatorname{argmin}_{y\in X} f(y)\) is nonempty. Assume that \(\{\alpha_{n}\}\) and \(\{\beta_{n}\}\) are sequences such that \(0< a\leq\alpha_{n}\), \(\beta_{n}\leq b<1\) for all \(n\in\mathbb{N}\) and for some a, b, and \(\{\lambda_{n}\}\) is a sequence such that \(\lambda_{n}\geq\lambda>0\) for all \(n\in\mathbb{N}\) and for some λ. If \(\{J_{\lambda}, T_{1}, T_{2}\}\) satisfies the condition \((\Omega)\), then the sequence \(\{x_{n}\}\) generated by (3.12) strongly converges to a common element of Ω.

Proof

From Theorem 3.1(1), we know that \(\lim_{n\rightarrow\infty}d(x_{n},q)\) exists for all \(q\in\Omega\). This implies that \(\lim_{n\rightarrow\infty}d(x_{n},\Omega)\) exists. On the other hand, by the hypothesis, we see that

$$\lim_{n\rightarrow\infty}g\bigl(d(x_{n},\Omega)\bigr)\leq\lim _{n\rightarrow\infty }d(x_{n},T_{1}x_{n})=0 $$

or

$$\lim_{n\rightarrow\infty}g\bigl(d(x_{n},\Omega)\bigr)\leq \lim _{n\rightarrow\infty}d(x_{n},T_{2}x_{n})=0, $$

or

$$\lim_{n\rightarrow\infty}g\bigl(d(x_{n},\Omega)\bigr)\leq \lim _{n\rightarrow\infty}d(x_{n},J_{\lambda}x_{n})=0. $$

So, we have

$$ \lim_{n\rightarrow\infty}g\bigl(d(x_{n},\Omega)\bigr)=0. $$

Using the property of g, it follows that \(\lim_{n\rightarrow\infty}d(x_{n},\Omega)=0\). Following the proof of Theorem 3.5, we obtain the desired result. This completes the proof. □

A mapping \(T:C\rightarrow C\) is said to be semi-compact if any sequence \(\{x_{n}\}\) in C satisfying \(d(x_{n},Tx_{n})\rightarrow0\) has a convergent subsequence.

Theorem 3.7

Let X be a complete \(\operatorname{CAT}(0)\) space and \(f:X\rightarrow(-\infty,\infty]\) be a proper convex and lower semi-continuous function. Let \(T_{1}\) and \(T_{2}\) be nonexpansive mappings on X such that \(\Omega=F(T_{1})\cap F(T_{2})\cap \operatorname{argmin}_{y\in X} f(y)\) is nonempty. Assume that \(\{\alpha_{n}\}\) and \(\{\beta_{n}\}\) are sequences such that \(0< a\leq\alpha_{n}\), \(\beta_{n}\leq b<1\) for all \(n\in\mathbb{N}\) and for some a, b, and \(\{\lambda_{n}\}\) is a sequence such that \(\lambda_{n}\geq\lambda>0\) for all \(n\in\mathbb{N}\) and for some λ. If \(T_{1}\) or \(T_{2}\), or \(J_{\lambda}\) is semi-compact, then the sequence \(\{x_{n}\}\) generated by (3.12) strongly converges to a common element of Ω.

Proof

Suppose that \(T_{1}\) is semi-compact. By Theorem 3.1(3), we have

$$d(x_{n},T_{1}x_{n})\rightarrow0 $$

as \(n\to\infty\). So, there exists a subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) such that \(x_{n_{k}}\rightarrow x^{*}\) for some \(x^{*}\in X\). Since \(d(x_{n},J_{\lambda}x_{n})\rightarrow0\) (see the proof of Theorem 3.2) and \(d(x_{n},T_{i}x_{n})\rightarrow0\) for \(i\in\{1,2\}\), and since \(J_{\lambda}\), \(T_{1}\), and \(T_{2}\) are continuous, we obtain \(d(x^{*},J_{\lambda}x^{*})=d(x^{*},T_{1}x^{*})=d(x^{*},T_{2}x^{*})=0\), which shows that \(x^{*}\in\Omega\). Moreover, since \(\lim_{n\rightarrow\infty}d(x_{n},x^{*})\) exists by Theorem 3.1(1) and \(d(x_{n_{k}},x^{*})\rightarrow0\), the whole sequence \(\{x_{n}\}\) converges strongly to \(x^{*}\). The cases where \(T_{2}\) or \(J_{\lambda}\) is semi-compact are treated similarly. This completes the proof. □

Remark 3.8

(1) Our main results generalize Theorems 1, 2 and 3 of Khan-Abbas [4] from one nonexpansive mapping to two nonexpansive mappings, together with a proper convex and lower semi-continuous function, in \(\operatorname{CAT}(0)\) spaces.

(2) Theorem 3.1 extends the result of Bačák [24] in \(\operatorname{CAT}(0)\) spaces. In fact, we present a new modified proximal point algorithm for solving the convex minimization problem as well as the fixed point problem for nonexpansive mappings in \(\operatorname{CAT}(0)\) spaces.