1 Introduction

Throughout, H is a real Hilbert space with inner product 〈·,·〉 and induced norm ║·║. Let C be a nonempty closed convex subset of H. A mapping T from C into itself is said to be nonexpansive if

$$\|Tx - Ty\| \le \|x - y\|,$$

for all x, y ∈ C. Fix(T) denotes the fixed point set of T, that is, Fix(T) = {x ∈ C : Tx = x}. Iterative methods for finding fixed points of nonexpansive mappings are an important topic in the theory of nonexpansive mappings and have wide applications in a number of applied areas, such as image reconstruction in computerized tomography [1], optics and neural networks [2], collective sensing [3], and image denoising and deblurring [4]. However, the Picard sequence $\{T^n x\}_{n=0}^{\infty}$ often fails to converge, even in the weak topology. To overcome this difficulty, the Krasnoselskii-Mann iteration algorithm has become prevalent. This algorithm generates, from an arbitrary initial guess x_0 ∈ C, a sequence {x_n} by the recursive formula

$$x_{n+1} = \alpha_n x_n + (1 - \alpha_n) T x_n, \quad n \ge 0,$$
(1.1)

where {α_n} is a sequence in (0, 1). Reich [5] proved that if X is a uniformly convex Banach space with a Fréchet differentiable norm and if {α_n} is chosen such that $\sum_{n=0}^{\infty} \alpha_n (1 - \alpha_n) = +\infty$, then the sequence {x_n} defined by (1.1) converges weakly to a fixed point of T. On the other hand, Maingé [6] proposed the so-called inertial Krasnoselskii-Mann-type algorithm as follows:

$$x_{n+1} = \bigl((1 - \alpha_n) I + \alpha_n T\bigr) v_n, \qquad v_n = x_n + \theta_n (x_n - x_{n-1}), \quad n \ge 1,$$
(1.2)

where I : H → H is the identity operator, x_0, x_1 ∈ H, and {θ_n} ⊂ [0, 1], {α_n} ⊂ (0, 1) are relaxation factors. The proposed algorithm unifies the Krasnoselskii-Mann iteration and inertial-type extrapolation, and Maingé established weak convergence theorems for the sequence {x_n} generated by (1.2). Clearly, if θ_n = 0 for all n, then algorithm (1.2) reduces to the Krasnoselskii-Mann iteration (1.1). The sequence {v_n} is intended to speed up the convergence of the algorithm. As a matter of fact, algorithms (1.1) and (1.2) enjoy only weak convergence except in finite dimensional spaces. To obtain strong convergence in the setting of infinite dimensional Hilbert or Banach spaces, several iterative algorithms for nonexpansive mappings are available (e.g., the viscosity iteration algorithm [7], the hybrid projection algorithm [8], the hybrid steepest descent algorithm [9], Halpern-type iteration algorithms [10, 11], the shrinking projection algorithm [12], etc.). In general, a nonexpansive mapping may have more than one fixed point. Without loss of generality, we may assume that Fix(T) ≠ ∅ (this holds, for instance, when C is additionally bounded); then Fix(T) is closed and convex. (It is worth mentioning that Ferreira [13] proved that Fix(T) is closed and convex even in a strictly convex Banach space, a class that includes Hilbert spaces as a special case.) So there exists a unique x* ∈ Fix(T) satisfying:
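For illustration, here is a minimal numerical sketch of (1.1) and (1.2) in Python. The toy nonexpansive map (a planar rotation, whose Picard iterates never converge) and all parameter choices are our own assumptions, made only to show the mechanics of the two schemes.

```python
import numpy as np

def km_iteration(T, x0, alpha, n_iter=500):
    """Krasnoselskii-Mann iteration (1.1): x_{n+1} = a_n x_n + (1 - a_n) T x_n."""
    x = np.asarray(x0, dtype=float)
    for n in range(n_iter):
        a = alpha(n)
        x = a * x + (1.0 - a) * T(x)
    return x

def inertial_km_iteration(T, x0, x1, alpha, theta, n_iter=500):
    """Inertial Krasnoselskii-Mann iteration (1.2):
    v_n = x_n + theta_n (x_n - x_{n-1}), x_{n+1} = (1 - a_n) v_n + a_n T v_n."""
    x_prev, x = np.asarray(x0, dtype=float), np.asarray(x1, dtype=float)
    for n in range(1, n_iter + 1):
        v = x + theta(n) * (x - x_prev)
        x_prev, x = x, (1.0 - alpha(n)) * v + alpha(n) * T(v)
    return x

# Toy nonexpansive map: rotation by 90 degrees in R^2; its only fixed point is
# the origin, and the raw Picard iterates T^n x circle forever without converging.
R = np.array([[0.0, -1.0], [1.0, 0.0]])
T = lambda x: R @ x
print(km_iteration(T, [1.0, 1.0], alpha=lambda n: 0.5))                  # near [0, 0]
print(inertial_km_iteration(T, [1.0, 1.0], [1.0, 0.5],
                            alpha=lambda n: 0.5, theta=lambda n: 0.1))   # near [0, 0]
```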

$$\|x^*\| = \min\bigl\{\|x\| : x \in \mathrm{Fix}(T)\bigr\}.$$

That is, x* is the minimum-norm fixed point of T. In other words, x* is the metric projection of the origin onto Fix(T), i.e., x* = P_{Fix(T)}0. It is of interest to construct iterative sequences that find the minimum-norm fixed point of a nonexpansive mapping T, i.e., the minimum-norm solution of x = Tx. Recently, Yao and Xu [14] and Cui and Liu [15] independently introduced two iterative methods (one implicit and one explicit) for finding the minimum-norm fixed point of a nonexpansive mapping defined on a closed convex subset C of H. The proposed algorithms are based on the well-known Browder iterative method [16] and Halpern iterative method [17], which we briefly recall next. Browder [16] introduced an implicit scheme as follows. Let u ∈ C and t ∈ (0, 1), and let x_t be the unique fixed point in C of the contraction T_t from C into C:

$$T_t x = t u + (1 - t) T x, \quad x \in C.$$
(1.3)

Browder proved that the strong limit of {x_t} as t → 0+ is the point of Fix(T) nearest to u, i.e., lim_{t→0+} x_t = P_{Fix(T)}u. Besides, Halpern [17] introduced an explicit scheme: let x_0 ∈ C and define a sequence {x_n} by

$$x_{n+1} = \alpha_n u + (1 - \alpha_n) T x_n, \quad n \ge 0,$$
(1.4)

where {α_n} ⊂ (0, 1). It is known that the sequence {x_n} generated by (1.4) converges in norm to the same limit P_{Fix(T)}u as Browder's implicit scheme (1.3) if the sequence {α_n} satisfies the following conditions:

(C1) $\lim_{n\to\infty} \alpha_n = 0$;

(C2) $\sum_{n=0}^{\infty} \alpha_n = +\infty$;

(C3) either $\sum_{n=0}^{\infty} |\alpha_{n+1} - \alpha_n| < +\infty$ or $\lim_{n\to\infty} \alpha_n / \alpha_{n+1} = 1$.
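As a quick illustration, here is a minimal sketch of Halpern's scheme (1.4) in Python with the classical choice α_n = 1/(n+2), which satisfies (C1)-(C3). The toy map T (the projection onto a line, so that Fix(T) is that whole line) is our own illustrative assumption.

```python
import numpy as np

def halpern(T, u, x0, n_iter=5000):
    """Halpern iteration (1.4): x_{n+1} = a_n u + (1 - a_n) T x_n.
    With a_n = 1/(n+2), conditions (C1)-(C3) hold, so x_n -> P_{Fix(T)} u."""
    x, u = np.asarray(x0, dtype=float), np.asarray(u, dtype=float)
    for n in range(n_iter):
        a = 1.0 / (n + 2)
        x = a * u + (1.0 - a) * T(x)
    return x

# Toy map: T = orthogonal projection onto the line {x2 = 0} in R^2, which is
# nonexpansive with Fix(T) equal to that line.
T = lambda x: np.array([x[0], 0.0])
print(halpern(T, u=[2.0, 3.0], x0=[0.0, 0.0]))  # near [2, 0] = P_{Fix(T)} u
```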

Note that Browder's and Halpern's iterative methods do find the minimum-norm fixed point x* of T if 0 ∈ C. However, if 0 ∉ C, then neither method works to find the minimum-norm element x*. The reason is simple: if 0 ∉ C, we cannot take u = 0 in (1.3) or (1.4), since the contraction T_t x = (1 - t)Tx is no longer a self-mapping of C, and the point (1 - α_n)Tx_n may not belong to C, so {x_{n+1}} may be undefined. To overcome this difficulty, caused by the possible exclusion of the origin from C, Yao and Xu [14] and Cui and Liu [15] proposed imposing the metric projection P_C on the right-hand side of (1.3) and (1.4) when u = 0. The role of the metric projection P_C is to pull the iterates back into C, so that the iterative sequences are well defined.

Motivated and inspired by the above studies, the purpose of this article is to consider another way to ensure that the iterative sequence is well defined. Namely, we replace the closed convex subset C by a closed convex cone C (C is said to be a closed convex cone if (i) C is closed and convex; (ii) αx ∈ C for all α ≥ 0 and x ∈ C; (iii) C ≠ {0}). We present new strongly convergent methods for approximating the minimum-norm fixed point of nonexpansive mappings. The proposed algorithms are of two types, generated as follows. For each λ ∈ (0, 1): (i) the implicit method

$$x_t = (1 - t)\bigl(\lambda T x_t + (1 - \lambda) x_t\bigr).$$
(1.5)
(ii) the explicit method

$$x_{n+1} = (1 - \alpha_n)\bigl(\lambda T x_n + (1 - \lambda) x_n\bigr), \quad n \ge 0,$$
(1.6)

where {α n } ⊂ (0, 1).

We prove that the sequences generated by (1.5) and (1.6) converge strongly to the minimum-norm fixed point of the nonexpansive mapping T. As applications, we provide iterative processes for solving the constrained convex optimization problem, and we use them to solve split feasibility problems, which have attracted great attention in recent years. Our results improve and generalize the corresponding results of Cui and Liu [15], Yao and Xu [14], and Wang and Xu [18], among others.
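Before turning to the analysis, we record a minimal numerical sketch of the explicit method (1.6) in Python. The cone C = R²₊, the toy nonexpansive self-map of C, and the choice α_n = 1/(n+2) (which satisfies the conditions of Theorem 3.2 below) are our own illustrative assumptions.

```python
import numpy as np

def explicit_min_norm(T, x0, lam=0.5, n_iter=20000):
    """Explicit method (1.6): x_{n+1} = (1 - a_n)(lam*T(x_n) + (1 - lam)*x_n),
    with a_n = 1/(n+2). Since C is a closed convex cone, the damping factor
    (1 - a_n) keeps every iterate inside C, so no extra projection is needed."""
    x = np.asarray(x0, dtype=float)
    for n in range(n_iter):
        a = 1.0 / (n + 2)
        x = (1.0 - a) * (lam * T(x) + (1.0 - lam) * x)
    return x

# Toy map on the cone C = R_+^2: T(x) = (x1, 1) is nonexpansive and
# Fix(T) = {(s, 1) : s >= 0}, whose minimum-norm element is (0, 1).
T = lambda x: np.array([x[0], 1.0])
print(explicit_min_norm(T, x0=[3.0, 4.0]))  # near [0, 1]
```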

2 Preliminaries

Let H be a Hilbert space with inner product 〈·,·〉 and norm ║·║, and let C be a nonempty closed convex subset of H.

We use the following notations in the sequel:

(i) ⇀ denotes weak convergence and → denotes strong convergence;

(ii) $\omega_w(x_n) = \{x : x_{n_j} \rightharpoonup x \text{ for some subsequence } \{x_{n_j}\} \text{ of } \{x_n\}\}$ denotes the weak ω-limit set of {x_n}.

Recall that the orthogonal projection P C x of x onto C is defined by the following

$$P_C x = \arg\min_{y \in C} \|x - y\|.$$

The orthogonal projection has the following well-known properties. For a given x ∈ H,

(i) 〈x - P_C x, z - P_C x〉 ≤ 0, for all z ∈ C;

(ii) ║P_C x - P_C y║² ≤ 〈P_C x - P_C y, x - y〉, for all x, y ∈ H.
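For concreteness, here is a minimal Python sketch of metric projections onto two simple closed convex sets, with a numerical check of property (i). The particular sets (the nonnegative orthant, which is a closed convex cone, and a closed ball) are our own examples.

```python
import numpy as np

def proj_orthant(x):
    """Metric projection onto the closed convex cone C = R_+^n (clip at 0)."""
    return np.maximum(x, 0.0)

def proj_ball(x, r=1.0):
    """Metric projection onto the closed ball C = {y : ||y|| <= r}."""
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

x = np.array([1.5, -2.0])
p = proj_orthant(x)
z = np.array([0.3, 4.0])  # an arbitrary point of C = R_+^2
# Property (i): <x - P_C x, z - P_C x> <= 0 for all z in C.
print(np.dot(x - p, z - p) <= 0.0)  # True
```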

We shall make use of the following results.

Lemma 2.1. (Demiclosedness principle for nonexpansive mappings) Let T : C → C be a nonexpansive mapping with Fix(T) ≠ ∅. If x_n ⇀ x and (I - T)x_n → 0, then x = Tx.

Lemma 2.2. (see [19]) Let {x_n} and {y_n} be bounded sequences in a Banach space E and let {β_n} be a sequence in [0, 1] with 0 < lim inf_{n→∞} β_n ≤ lim sup_{n→∞} β_n < 1. Suppose x_{n+1} = β_n y_n + (1 - β_n)x_n for all n ≥ 0 and

$$\limsup_{n\to\infty}\bigl(\|y_{n+1} - y_n\| - \|x_{n+1} - x_n\|\bigr) \le 0.$$

Then limn→∞y n - x n ║ = 0.

Lemma 2.3. (see [20]) Let {a_n} be a nonnegative real sequence satisfying the following inequality:

$$a_{n+1} \le (1 - \gamma_n) a_n + \gamma_n \delta_n, \quad n \ge 0,$$

where {γ_n} ⊂ (0, 1) is such that $\sum_{n=0}^{\infty} \gamma_n = +\infty$ and lim sup_{n→∞} δ_n ≤ 0. Then lim_{n→∞} a_n = 0.

3 Main results

First, we prove the following strong convergence theorem by using the implicit method (1.5) for finding the minimum-norm fixed point of a nonexpansive mapping T.

Theorem 3.1. Let C be a closed convex cone of a real Hilbert space H. Let T : C → C be a nonexpansive mapping with Fix(T) ≠ ∅. For each t ∈ (0, 1), let x_t be the unique fixed point in C of the contraction $T_t^{\lambda} := (1 - t)(\lambda T + (1 - \lambda) I)$, where λ ∈ (0, 1) is a constant. Then x_t converges strongly to the minimum-norm fixed point of T as t → 0+.

Proof. Take p ∈ Fix(T). From (1.5), we have

$$\begin{aligned}
\|x_t - p\| &= \bigl\|(1 - t)\bigl(\lambda T x_t + (1 - \lambda) x_t\bigr) - p\bigr\| \\
&= \bigl\|(1 - t)\bigl[(1 - \lambda)(x_t - p) + \lambda (T x_t - p)\bigr] - t p\bigr\| \\
&\le (1 - t)\bigl[(1 - \lambda)\|x_t - p\| + \lambda \|x_t - p\|\bigr] + t \|p\| \\
&= (1 - t)\|x_t - p\| + t \|p\|,
\end{aligned}$$

that is,

$$\|x_t - p\| \le \|p\|, \quad \text{for all } t \in (0, 1).$$

Hence, {x t } is bounded and so is {Tx t }. Next, we prove that ║x t - Tx t ║→ 0 as t → 0+. In fact, from (1.5), we have

$$\begin{aligned}
\|x_t - T x_t\| &= \bigl\|(1 - t)\bigl(\lambda T x_t + (1 - \lambda) x_t\bigr) - T x_t\bigr\| \\
&= \bigl\|(1 - t)(1 - \lambda)(x_t - T x_t) - t T x_t\bigr\| \\
&\le (1 - t)(1 - \lambda)\|x_t - T x_t\| + t \|T x_t\|,
\end{aligned}$$

that is,

$$\|x_t - T x_t\| \le \frac{t}{1 - (1 - t)(1 - \lambda)}\|T x_t\| \to 0, \quad \text{as } t \to 0^+.$$
(3.1)

Next we show that {x_t} is relatively norm-compact as t → 0+. Since {x_t} is bounded, there exists a null sequence {t_n} ⊂ (0, 1) such that $x_{t_n} \rightharpoonup \bar{x}$. By Lemma 2.1 and (3.1), $\bar{x} \in \mathrm{Fix}(T)$.

For any $\tilde{x} \in \mathrm{Fix}(T)$, we deduce that

$$\begin{aligned}
\|x_t - \tilde{x}\|^2 &= \langle x_t - \tilde{x}, x_t - \tilde{x} \rangle \\
&= \bigl\langle (1 - t)\bigl(\lambda T x_t + (1 - \lambda) x_t\bigr) - \tilde{x}, x_t - \tilde{x} \bigr\rangle \\
&= \bigl\langle (1 - t)\bigl[(1 - \lambda)(x_t - \tilde{x}) + \lambda (T x_t - \tilde{x})\bigr] - t \tilde{x}, x_t - \tilde{x} \bigr\rangle \\
&= (1 - t)\bigl[(1 - \lambda)\|x_t - \tilde{x}\|^2 + \lambda \langle T x_t - \tilde{x}, x_t - \tilde{x} \rangle\bigr] + t \langle -\tilde{x}, x_t - \tilde{x} \rangle \\
&\le (1 - t)\|x_t - \tilde{x}\|^2 + t \langle -\tilde{x}, x_t - \tilde{x} \rangle.
\end{aligned}$$

It turns out that

$$\|x_t - \tilde{x}\|^2 \le \langle -\tilde{x}, x_t - \tilde{x} \rangle.$$
(3.2)

Since $\bar{x} \in \mathrm{Fix}(T)$, we may substitute $\bar{x}$ for $\tilde{x}$ and $t_n$ for $t$ in (3.2) to obtain $x_{t_n} \to \bar{x}$.

Hence, {x t } is indeed relatively compact (as t → 0+) in the norm topology.

Observe that (3.2) is equivalent to

$$\|x_t\|^2 \le \langle x_t, \tilde{x} \rangle.$$

Hence,

$$\|x_t\| \le \|\tilde{x}\|, \quad t \in (0, 1), \ \tilde{x} \in \mathrm{Fix}(T).$$

This implies that

$$\|\bar{x}\| \le \|\tilde{x}\|, \quad \text{for all } \tilde{x} \in \mathrm{Fix}(T).$$

Therefore, $\bar{x} = x^*$, where x* is the minimum-norm fixed point of T. Since the weak cluster point $\bar{x}$ was arbitrary, we conclude that x_t → x* as t → 0+. This completes the proof. □
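Numerically, the path {x_t} of Theorem 3.1 can be traced by solving (1.5) for a few values of t by Picard iteration, which converges because $T_t^{\lambda}$ is a (1 - t)-contraction. The sketch below, in Python, uses a toy nonexpansive self-map of the cone C = R²₊ of our own choosing.

```python
import numpy as np

def implicit_path_point(T, t, lam=0.5, x_init=(0.0, 0.0), tol=1e-12, max_inner=100000):
    """Solve x_t = (1 - t)(lam*T(x_t) + (1 - lam)*x_t) by Picard iteration;
    the right-hand side is a (1 - t)-contraction, so the inner loop converges."""
    x = np.asarray(x_init, dtype=float)
    for _ in range(max_inner):
        x_new = (1.0 - t) * (lam * T(x) + (1.0 - lam) * x)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

# Toy map on C = R_+^2: T(x) = (x1, 1), Fix(T) = {(s, 1) : s >= 0},
# with minimum-norm fixed point (0, 1).
T = lambda x: np.array([x[0], 1.0])
for t in [0.1, 0.01, 0.001]:
    print(t, implicit_path_point(T, t))  # approaches (0, 1) as t -> 0+
```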

Now we are in a position to prove the strong convergence of the explicit method (1.6). Our proof of the following theorem closely follows the proofs of related results given in [11].

Theorem 3.2. Let C be a closed convex cone of a real Hilbert space H. Let T : C → C be a nonexpansive mapping with Fix(T) ≠ ∅. Assume that the sequence {α_n} ⊂ (0, 1) satisfies the following conditions:

(i) $\lim_{n\to\infty} \alpha_n = 0$;

(ii) $\sum_{n=0}^{\infty} \alpha_n = +\infty$.

Then the sequence {x n } generated by the algorithm (1.6) strongly converges to a fixed point of T which is of minimal norm.

Proof. First we prove that the sequence {x n } is bounded. Let p ∈ Fix(T). By (1.6), we have

$$\begin{aligned}
\|x_{n+1} - p\| &= \bigl\|(1 - \alpha_n)\bigl(\lambda T x_n + (1 - \lambda) x_n\bigr) - p\bigr\| \\
&= \bigl\|(1 - \alpha_n)\bigl[(1 - \lambda)(x_n - p) + \lambda (T x_n - p)\bigr] - \alpha_n p\bigr\| \\
&\le (1 - \alpha_n)\|x_n - p\| + \alpha_n \|p\| \\
&\le \max\bigl\{\|x_n - p\|, \|p\|\bigr\}.
\end{aligned}$$

By induction,

$$\|x_n - p\| \le \max\bigl\{\|x_0 - p\|, \|p\|\bigr\},$$

for all n ≥ 0. Then {x n } is bounded. Therefore, {Tx n } is also bounded.

Let $y_n = \dfrac{(1 - \alpha_n)\lambda T x_n}{\alpha_n + (1 - \alpha_n)\lambda}$; then the iterative scheme (1.6) is equivalent to

$$x_{n+1} = \bigl(\alpha_n + (1 - \alpha_n)\lambda\bigr) y_n + \bigl(1 - \alpha_n - (1 - \alpha_n)\lambda\bigr) x_n.$$
(3.3)

Observe that $\lim_{n\to\infty}\bigl(\alpha_n + (1 - \alpha_n)\lambda\bigr) = \lambda$. Then

$$\begin{aligned}
\|y_n - p\| &= \frac{\bigl\|(1 - \alpha_n)\lambda (T x_n - p) - \alpha_n p\bigr\|}{\alpha_n + (1 - \alpha_n)\lambda} \\
&\le \frac{(1 - \alpha_n)\lambda \|x_n - p\| + \alpha_n \|p\|}{\alpha_n + (1 - \alpha_n)\lambda} \\
&= \frac{\alpha_n}{\alpha_n + (1 - \alpha_n)\lambda}\|p\| + \frac{(1 - \alpha_n)\lambda}{\alpha_n + (1 - \alpha_n)\lambda}\|x_n - p\| \\
&\le \max\bigl\{\|x_n - p\|, \|p\|\bigr\}.
\end{aligned}$$

Thus, {y n } is bounded. Consequently, we have

$$\begin{aligned}
\|y_{n+1} - y_n\| - \|x_{n+1} - x_n\| &= \left\|\frac{(1 - \alpha_{n+1})\lambda T x_{n+1}}{\alpha_{n+1} + (1 - \alpha_{n+1})\lambda} - \frac{(1 - \alpha_n)\lambda T x_n}{\alpha_n + (1 - \alpha_n)\lambda}\right\| - \|x_{n+1} - x_n\| \\
&\le \frac{(1 - \alpha_{n+1})\lambda}{\alpha_{n+1} + (1 - \alpha_{n+1})\lambda}\|T x_{n+1} - T x_n\| + \left|\frac{(1 - \alpha_{n+1})\lambda}{\alpha_{n+1} + (1 - \alpha_{n+1})\lambda} - \frac{(1 - \alpha_n)\lambda}{\alpha_n + (1 - \alpha_n)\lambda}\right| \|T x_n\| - \|x_{n+1} - x_n\| \\
&\le \left(\frac{(1 - \alpha_{n+1})\lambda}{\alpha_{n+1} + (1 - \alpha_{n+1})\lambda} - 1\right)\|x_{n+1} - x_n\| + \left|\frac{(1 - \alpha_{n+1})\lambda}{\alpha_{n+1} + (1 - \alpha_{n+1})\lambda} - \frac{(1 - \alpha_n)\lambda}{\alpha_n + (1 - \alpha_n)\lambda}\right| \|T x_n\|.
\end{aligned}$$

Since {x_n} and {Tx_n} are bounded and lim_{n→∞} α_n = 0, it follows that

$$\limsup_{n\to\infty}\bigl(\|y_{n+1} - y_n\| - \|x_{n+1} - x_n\|\bigr) \le 0.$$

With the help of Lemma 2.2, we obtain that limn→∞║y n - x n ║ = 0. Therefore,

$$\lim_{n\to\infty}\|x_{n+1} - x_n\| = \lim_{n\to\infty}\bigl(\alpha_n + (1 - \alpha_n)\lambda\bigr)\|y_n - x_n\| = 0.$$
(3.4)

On the other hand,

$$\begin{aligned}
\|x_n - T x_n\| &\le \|x_n - x_{n+1}\| + \|x_{n+1} - T x_n\| \\
&= \|x_n - x_{n+1}\| + \bigl\|(1 - \alpha_n)\bigl(\lambda T x_n + (1 - \lambda) x_n\bigr) - T x_n\bigr\| \\
&\le \|x_n - x_{n+1}\| + (1 - \alpha_n)(1 - \lambda)\|x_n - T x_n\| + \alpha_n \|T x_n\|.
\end{aligned}$$

From the above inequality and (3.4), we obtain

$$\|x_n - T x_n\| \le \frac{1}{1 - (1 - \alpha_n)(1 - \lambda)}\|x_n - x_{n+1}\| + \frac{\alpha_n}{1 - (1 - \alpha_n)(1 - \lambda)}\|T x_n\| \to 0 \quad \text{as } n \to \infty.$$

Next, we prove that lim sup_{n→∞} 〈x* - x_n, x*〉 ≤ 0. To this end, we take a subsequence $\{x_{n_i}\}$ of {x_n} such that

$$\limsup_{n\to\infty}\langle x^* - x_n, x^* \rangle = \lim_{i\to\infty}\langle x^* - x_{n_i}, x^* \rangle.$$

Since {x_n} is bounded, without loss of generality we may assume that $x_{n_i} \rightharpoonup x'$. Consequently,

$$\limsup_{n\to\infty}\langle x^* - x_n, x^* \rangle = \langle x^* - x', x^* \rangle.$$

Notice that limn→∞x n - Tx n ║ = 0. By the demiclosedness principle of nonexpansive mapping T, we have x' ∈ Fix(T). Since x* = PFix(T)0. It follows from the properties of Projection operator that

$$\limsup_{n\to\infty}\langle x^* - x_n, x^* \rangle = \langle x^* - x', x^* \rangle \le 0.$$
(3.5)

By (1.6), we have

$$\begin{aligned}
\|x_{n+1} - (1 - \alpha_n)x^*\|^2 &= \bigl\|(1 - \alpha_n)\bigl(\lambda T x_n + (1 - \lambda) x_n\bigr) - (1 - \alpha_n)x^*\bigr\|^2 \\
&= (1 - \alpha_n)^2\bigl\|\lambda T x_n + (1 - \lambda) x_n - x^*\bigr\|^2 \\
&\le (1 - \alpha_n)\|x_n - x^*\|^2.
\end{aligned}$$
(3.6)

Observe that

$$\begin{aligned}
\|x_{n+1} - (1 - \alpha_n)x^*\|^2 &= \|x_{n+1} - x^*\|^2 - 2\alpha_n \langle -x^*, x_{n+1} - x^* \rangle + \alpha_n^2 \|x^*\|^2 \\
&\ge \|x_{n+1} - x^*\|^2 - 2\alpha_n \langle -x^*, x_{n+1} - x^* \rangle.
\end{aligned}$$
(3.7)

Therefore, by (3.6) and (3.7), we get

$$\|x_{n+1} - x^*\|^2 \le (1 - \alpha_n)\|x_n - x^*\|^2 + 2\alpha_n \langle x^*, x^* - x_{n+1} \rangle.$$
(3.8)

By condition (ii) and inequality (3.5), we can apply Lemma 2.3 to (3.8) and conclude that {x_n} converges strongly, as n → ∞, to x*, the minimum-norm fixed point of T. This completes the proof. □

Remark 3.1. (i) If the closed convex cone C in Theorems 3.1 and 3.2 is replaced by a closed convex set C with 0 ∈ C, then Theorems 3.1 and 3.2 remain true, because the iterative sequences (1.5) and (1.6) are still well defined.

(ii) Theorem 3.2 also improves [[14], Theorem 3.2] and [[15], Theorem 3.3], in that the restrictions $\sum_{n=0}^{\infty} |\alpha_{n+1} - \alpha_n| < +\infty$ or $\lim_{n\to\infty} \alpha_n / \alpha_{n+1} = 1$ are removed.

4 Some applications

In this section, we apply the proposed methods to approximate the minimum-norm solutions of constrained convex optimization problems and of split feasibility problems. Recall the standard constrained convex optimization problem:

$$\text{find } x^* \in C \text{ such that } f(x^*) = \min_{x \in C} f(x),$$
(4.1)

where f : C → ℝ is a convex, Fréchet differentiable function and C is a closed convex subset of H.

It is known that the above optimization problem is equivalent to the following variational inequality:

$$\text{find } x^* \in C \text{ such that } \langle v - x^*, \nabla f(x^*) \rangle \ge 0, \quad \text{for all } v \in C,$$
(4.2)

where ∇f : HH is the gradient of f.

It is well-known that the optimality condition (4.2) is equivalent to the following fixed point problem:

$$x^* = P_C\bigl(x^* - \mu \nabla f(x^*)\bigr),$$

where P_C is the metric projection onto C and μ > 0 is a positive constant. Based on this fixed point formulation, we deduce the projected gradient method:

$$x_0 \in C, \qquad x_{n+1} = P_C\bigl(x_n - \mu \nabla f(x_n)\bigr), \quad n \ge 0.$$
(4.3)
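A minimal sketch of the projected gradient method (4.3) in Python follows; the quadratic objective and the set C = R²₊ are our own toy assumptions.

```python
import numpy as np

def projected_gradient(grad_f, proj_C, x0, mu, n_iter=1000):
    """Projected gradient method (4.3): x_{n+1} = P_C(x_n - mu * grad_f(x_n)).
    For mu in (0, 2/L), with L the Lipschitz constant of grad_f, the update
    map is nonexpansive; in general only weak convergence is guaranteed."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = proj_C(x - mu * grad_f(x))
    return x

# Toy problem: minimize f(x) = 0.5*||x - a||^2 over C = R_+^2 (here L = 1).
a = np.array([1.0, -2.0])
print(projected_gradient(lambda x: x - a, lambda x: np.maximum(x, 0.0),
                         x0=np.zeros(2), mu=1.0))  # [1, 0] = P_C(a)
```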

Using Theorems 3.1 and 3.2, we immediately obtain the following result.

Theorem 4.1. Suppose that the solution set of (4.1) is nonempty. Let the objective function f be convex and Fréchet differentiable, with a gradient ∇f that is Lipschitz continuous with Lipschitz constant L. In addition, assume that 0 ∈ C or that C is a closed convex cone. Let μ ∈ (0, 2/L).

(i) For each t ∈ (0, 1), let x t be the unique solution of the fixed point equation

$$x_t = (1 - t)\bigl(\lambda P_C(I - \mu \nabla f)x_t + (1 - \lambda) x_t\bigr).$$
(4.4)

Then {x_t} converges in norm, as t → 0+, to the minimum-norm solution of the minimization problem (4.1).

(ii) Define a sequence {x n } by the following

$$x_{n+1} = (1 - \alpha_n)\bigl(\lambda P_C(I - \mu \nabla f)x_n + (1 - \lambda) x_n\bigr), \quad n \ge 0,$$

where λ ∈ (0, 1) and the sequence {α_n} ⊂ (0, 1) satisfies the conditions of Theorem 3.2. Then the sequence {x_n} converges strongly to the minimum-norm solution of the minimization problem (4.1).

Proof. Since ∇f is Lipschitz continuous with Lipschitz constant L and μ ∈ (0, 2/L), the mapping P_C(I - μ∇f) is nonexpansive (see [[21], Sect. 4]). Replacing the mapping T in (1.5) and (1.6) with P_C(I - μ∇f), the conclusion of Theorem 4.1 follows immediately from Theorems 3.1 and 3.2. □
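The following Python sketch illustrates the iteration of Theorem 4.1(ii), again with α_n = 1/(n+2). The toy objective (chosen so that the solution set of (4.1) is a whole ray and the minimum-norm solution is visible) and the cone C = R²₊ are our own assumptions.

```python
import numpy as np

def min_norm_projected_gradient(grad_f, proj_C, x0, mu, lam=0.5, n_iter=50000):
    """Theorem 4.1(ii):
    x_{n+1} = (1 - a_n)(lam*P_C(x_n - mu*grad_f(x_n)) + (1 - lam)*x_n),
    a_n = 1/(n+2). The damping (1 - a_n) steers the iterates toward the
    minimum-norm solution rather than an arbitrary one."""
    x = np.asarray(x0, dtype=float)
    for n in range(n_iter):
        a = 1.0 / (n + 2)
        x = (1.0 - a) * (lam * proj_C(x - mu * grad_f(x)) + (1.0 - lam) * x)
    return x

# Toy problem on C = R_+^2: f(x) = 0.5*(x1 - 1)^2 ignores x2, so the solution
# set of (4.1) is the ray {(1, s) : s >= 0}; its minimum-norm element is (1, 0).
grad_f = lambda x: np.array([x[0] - 1.0, 0.0])
print(min_norm_projected_gradient(grad_f, lambda x: np.maximum(x, 0.0),
                                  x0=np.array([3.0, 5.0]), mu=1.0))  # near [1, 0]
```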

Next, we give an application of Theorem 4.1 to the split feasibility problem (SFP, for short), which was introduced by Censor and Elfving [22]:

$$\text{find } x \in C \text{ such that } A x \in Q,$$
(4.5)

where C and Q are nonempty closed convex subsets of the Hilbert spaces H_1 and H_2, respectively, and A : H_1 → H_2 is a bounded linear operator.

It is clear that x* is a solution to the split feasibility problem (4.5) if and only if x* ∈ C and Ax* - P Q Ax* = 0. We define the proximity function f by

$$f(x) = \frac{1}{2}\bigl\|A x - P_Q A x\bigr\|^2,$$

and consider the convex optimization problem

$$\min_{x \in C} f(x) = \min_{x \in C} \frac{1}{2}\bigl\|A x - P_Q A x\bigr\|^2.$$
(4.6)

Then x* solves the split feasibility problem (4.5) if and only if x* solves the minimization problem (4.6) with minimum value equal to 0. Byrne [21] introduced the so-called CQ algorithm to solve the SFP:

$$x_{n+1} = P_C\bigl(I - \mu A^*(I - P_Q)A\bigr)x_n, \quad n \ge 0,$$
(4.7)

where 0 < μ < 2/ρ(A*A), P_C denotes the projection onto C, and ρ(A*A) is the spectral radius of the self-adjoint operator A*A. He proved that the sequence {x_n} generated by (4.7) converges weakly to a solution of the SFP.
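For reference, here is a minimal Python sketch of the CQ algorithm (4.7); the matrix A, the cone C = R²₊, and the box Q are our own toy data.

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, n_iter=5000):
    """Byrne's CQ algorithm (4.7):
    x_{n+1} = P_C(x_n - mu * A^T (A x_n - P_Q(A x_n))),
    with mu = 1/||A||_2^2, which lies in (0, 2/rho(A^T A))."""
    mu = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        Ax = A @ x
        x = proj_C(x - mu * (A.T @ (Ax - proj_Q(Ax))))
    return x

# Toy SFP: find x in C = R_+^2 with A x in Q = [2,3] x [0,3]; the iterates
# converge to *some* solution, and which one depends on the starting point x0.
A = np.array([[1.0, 2.0], [0.0, 1.0]])
proj_C = lambda x: np.maximum(x, 0.0)
proj_Q = lambda y: np.clip(y, [2.0, 0.0], [3.0, 3.0])
print(cq_algorithm(A, proj_C, proj_Q, x0=np.array([5.0, 5.0])))
```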

In order to obtain a strongly convergent iterative sequence for solving the SFP, Xu [23] investigated the following algorithm:

$$x_{n+1} = \alpha_n u + (1 - \alpha_n) P_C\bigl(x_n - \mu A^*(I - P_Q)A x_n\bigr), \quad n \ge 0,$$
(4.8)

where 0 < μ < 2/ρ(A*A). He showed that if the sequence {α_n} satisfies conditions (C1)-(C3), then {x_n} converges strongly to the projection of u onto the solution set of the SFP. In particular, if u = 0 in algorithm (4.8), then the corresponding algorithm converges strongly to the minimum-norm solution of the SFP. Later, Wang and Xu [18] introduced a modification of the CQ algorithm (4.7) with strong convergence by introducing an approximating curve for the SFP in an infinite dimensional Hilbert space, and obtained the minimum-norm solution of the SFP as the strong limit of the approximating curve. Their sequence {x_n} is generated by the iterative algorithm

$$x_{n+1} = P_C\Bigl((1 - \alpha_n)\bigl(I - \mu A^*(I - P_Q)A\bigr)x_n\Bigr), \quad n \ge 0,$$
(4.9)

where {α_n} ⊂ (0, 1) satisfies (C1)-(C3).

Applying Theorem 4.1, we obtain the following result, which improves the corresponding results of Xu [23] and Wang and Xu [18].

Theorem 4.2. Assume that the split feasibility problem (4.5) is consistent. In addition, assume that 0 ∈ C or that C is a closed convex cone. Let the sequence {x_n} be generated by

$$x_{n+1} = (1 - \alpha_n)\Bigl(\lambda P_C\bigl(x_n - \mu A^*(I - P_Q)A x_n\bigr) + (1 - \lambda) x_n\Bigr), \quad n \ge 0,$$
(4.10)

where the sequence {α_n} ⊂ (0, 1) satisfies the conditions (i) $\lim_{n\to\infty} \alpha_n = 0$ and (ii) $\sum_{n=0}^{\infty} \alpha_n = +\infty$, λ ∈ (0, 1), and μ ∈ (0, 2/ρ(A*A)), where ρ(A*A) denotes the spectral radius of the self-adjoint operator A*A. Then the sequence {x_n} converges strongly to the minimum-norm solution of the split feasibility problem (4.5).

Proof. By the definition of the proximity function f, we have

$$\nabla f(x) = A^*(I - P_Q)A x,$$

and ∇f is Lipschitz continuous (Lemma 8.1 of [21]), i.e.,

$$\|\nabla f(x) - \nabla f(y)\| \le L \|x - y\|,$$

where L = ρ(A*A). Then the iterative scheme (4.10) is equivalent to

$$x_{n+1} = (1 - \alpha_n)\bigl(\lambda P_C(I - \mu \nabla f)x_n + (1 - \lambda) x_n\bigr).$$

The conclusion now follows immediately from Theorem 4.1. □
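Finally, a minimal Python sketch of algorithm (4.10), on the same toy split feasibility data as the CQ sketch above. For this data the minimum-norm solution is the projection of the origin onto the binding constraint x₁ + 2x₂ = 2 within R²₊, namely (0.4, 0.8); with the damping, the iterates single it out. The data and parameter choices are our own assumptions.

```python
import numpy as np

def min_norm_sfp(A, proj_C, proj_Q, x0, lam=0.5, n_iter=100000):
    """Algorithm (4.10): with a_n = 1/(n+2) and mu = 1/||A||_2^2,
    x_{n+1} = (1 - a_n)(lam*P_C(x_n - mu*A^T(Ax_n - P_Q(Ax_n))) + (1 - lam)*x_n);
    by Theorem 4.2 it converges strongly to the minimum-norm solution of (4.5)."""
    mu = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.asarray(x0, dtype=float)
    for n in range(n_iter):
        a = 1.0 / (n + 2)
        Ax = A @ x
        y = proj_C(x - mu * (A.T @ (Ax - proj_Q(Ax))))
        x = (1.0 - a) * (lam * y + (1.0 - lam) * x)
    return x

A = np.array([[1.0, 2.0], [0.0, 1.0]])
proj_C = lambda x: np.maximum(x, 0.0)                    # C = R_+^2
proj_Q = lambda y: np.clip(y, [2.0, 0.0], [3.0, 3.0])    # Q = [2,3] x [0,3]
print(min_norm_sfp(A, proj_C, proj_Q, x0=np.array([5.0, 5.0])))  # near [0.4, 0.8]
```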

Remark 4.1. Theorem 4.2 extends the corresponding results of Wang and Xu [18] and Xu [23] by discarding the assumption "$\sum_{n=0}^{\infty} |\alpha_{n+1} - \alpha_n| < +\infty$ or $\lim_{n\to\infty} (\alpha_n/\alpha_{n+1}) = 1$".