1 Introduction

Let C be a closed, convex and nonempty subset of a Hilbert space H and let \(T:C\to H\) be a non-expansive mapping such that the fixed point set \(\operatorname{Fix}(T):=\{x\in C:Tx=x\}\) is not empty.

For a real sequence \(\{\alpha_{n}\}\subset(0,1)\), we will consider the iteration

$$ \begin{cases} x_{0}\in C, \\ x_{n+1}=\alpha_{n}x_{n}+(1-\alpha_{n})Tx_{n}. \end{cases} $$
(1)

If T is a self-mapping, the iterative scheme above has been studied in an impressive number of papers over the last decades (see [1] and the references therein) and it is often called the ‘segmenting Mann’ [2–4] or ‘Krasnoselskii-Mann’ (e.g., [5, 6]) iteration.

A general result on algorithm (1) is due to Reich [7] and states that the sequence \(\{x_{n}\}\) weakly converges to a fixed point of the operator T under the following assumptions:

  1. (C1) T is a self-mapping, i.e., \(T:C\to C\), and

  2. (C2) \(\{\alpha_{n}\}\) is such that \(\sum_{n}\alpha_{n}(1-\alpha_{n})=+\infty\).

In this paper, we are interested in relaxing condition (C1) by allowing T to be non-self, at the price of strengthening the requirements on the sequence \(\{\alpha_{n}\}\) and on the set C. Indeed, we will assume that C is a strictly convex set and that the non-expansive map \(T:C\to H\) is inward.

Historically, the inward condition and its generalizations were widely used to prove convergence results for both implicit [8–11] and explicit (see, e.g., [1, 12–14]) algorithms. However, we point out that the explicit case was only studied in conjunction with processes involving the calculation of a projection or a retraction \(P:H\to C\) at each step.

As an example, in [12], the following algorithm is studied:

$$x_{n+1}=P\bigl(\alpha_{n}f(x_{n})+(1- \alpha_{n})Tx_{n}\bigr), $$

where \(T:C\to H\) satisfies the weakly inward condition, f is a contraction and \(P:H\to C\) is a non-expansive retraction.

We point out that in many real-world applications the computation of P can be a resource-consuming task and may itself require an approximation algorithm, even when P is the nearest-point projection.

To avoid the need for an auxiliary mapping P, we will introduce, for an inward and non-expansive mapping \(T:C\to H\), a new search strategy for the coefficients \(\{\alpha_{n}\}\) and we will prove that the Krasnoselskii-Mann algorithm

$$x_{n+1}=\alpha_{n}x_{n}+(1-\alpha_{n})Tx_{n} $$

is well defined for this particular choice of the sequence \(\{\alpha _{n}\}\). We will also prove both weak and strong convergence results for the above algorithm when C is a strictly convex set.

We stress that the main difference between the classical Krasnoselskii-Mann algorithm and ours is that, in the latter, the choice of the coefficient \(\alpha_{n}\) is not made a priori but is constructed step by step, determined by the values of the map T and the geometry of the set C.

2 Main result

We will make use of the following.

Definition 1

A map \(T:C\to H\) is said to be inward (or to satisfy the inward condition) if, for any \(x\in C\), it holds

$$ Tx\in I_{C}(x):=\bigl\{ x+c(u-x):c\geq1\mbox{ and }u\in C\bigr\} . $$
(2)

We refer to [15] for a comprehensive survey on the properties of the inward mappings.

Definition 2

A set \(C\subset H\) is said to be strictly convex if it is convex and has the property that \(x,y\in\partial C\) with \(x\neq y\) and \(t\in(0,1)\) imply that

$$tx+(1-t)y\in\mathring{C}. $$

In other words, the boundary ∂C does not contain any nontrivial segment.

Definition 3

A sequence \(\{y_{n}\}\subset C\) is Fejér-monotone with respect to a set \(D\subset C\) if, for any element \(y\in D\),

$$\|y_{n+1}-y\|\leq\|y_{n}-y\| \quad \forall n\in\mathbb{N}. $$

For a closed and convex set C and a map \(T:C\to H\), we define a mapping \(h:C\to\mathbb{R}\) as

$$ h(x):=\inf\bigl\{ \lambda\geq0:\lambda x+(1-\lambda)Tx\in C\bigr\} . $$
(3)

Note that, since C is closed, the above infimum is attained and is in fact a minimum. In the following lemma, we collect the properties of the function defined above.

Lemma 1

Let C be a nonempty, closed and convex set, let \(T:C\to H\) be a mapping and define \(h:C\to\mathbb{R}\) as in (3). Then the following properties hold:

  1. (P1) for any \(x\in C\), \(h(x)\in[0,1]\), and \(h(x)=0\) if and only if \(Tx\in C\);

  2. (P2) for any \(x\in C\) and any \(\alpha\in[h(x),1]\), \(\alpha x+(1-\alpha)Tx\in C\);

  3. (P3) if T is an inward mapping, then \(h(x)<1\) for any \(x\in C\);

  4. (P4) whenever \(Tx\notin C\), \(h(x)x+(1-h(x))Tx\in\partial C\).

Proof

Properties (P1) and (P2) follow directly from the definition of h and from the convexity and closedness of C. To prove (P3), observe that (2) implies

$$\frac{1}{c}Tx+\biggl(1-\frac{1}{c}\biggr)x\in C $$

for some \(c\geq1\). As a consequence,

$$h(x)=\inf\bigl\{ \lambda\geq0:\lambda x+(1-\lambda)Tx\in C\bigr\} \leq\biggl(1- \frac{1}{c}\biggr)< 1. $$

In order to verify (P4), we first note that \(h(x)>0\) by property (P1) and that \(h(x)x+(1-h(x))Tx\in C\). Let \(\{\eta_{n}\}\subset(0,h(x))\) be a sequence of real numbers converging to \(h(x)\) and note that, by the definition of h, it holds

$$z_{n}:=\eta_{n}x+(1-\eta_{n})Tx\notin C $$

for any \(n\in\mathbb{N}\). Since \(\eta_{n}\to h(x)\) and

$$\bigl\Vert z_{n}-h(x)x-\bigl(1-h(x)\bigr)Tx\bigr\Vert =\bigl\vert \eta_{n}-h(x)\bigr\vert \|x-Tx\|, $$

it follows that \(z_{n}\to h(x)x+(1-h(x))Tx\in C\); being the limit of points lying outside C, this point must belong to ∂C. □
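
As a purely illustrative aside (not part of the results), properties (P1) and (P2) also suggest a simple way to evaluate \(h(x)\) numerically: the admissible values of λ form the interval \([h(x),1]\), so a bisection on λ applies as soon as a membership oracle for C is available. The following Python sketch is hypothetical; the oracle `in_C` and the toy data are placeholders.

```python
import numpy as np

def approx_h(x, Tx, in_C, tol=1e-10):
    """Approximate h(x) = inf{lam >= 0 : lam*x + (1-lam)*Tx in C} by bisection.

    Justified by (P1)-(P2): the admissible lambdas form the interval [h(x), 1].
    """
    if in_C(Tx):                 # (P1): h(x) = 0 exactly when Tx already lies in C
        return 0.0
    lo, hi = 0.0, 1.0            # lam = 1 is always admissible because x is in C
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if in_C(mid * x + (1.0 - mid) * Tx):
            hi = mid             # mid is admissible, hence h(x) <= mid
        else:
            lo = mid             # mid is not admissible, hence h(x) > mid
    return hi                    # an upper estimate of h(x), accurate to tol

# Toy check in the plane: C is the closed unit disc, x = (1, 0) and Tx = (-2, 0);
# the point lam*x + (1-lam)*Tx belongs to C exactly for lam >= 1/3, so h(x) = 1/3.
in_disc = lambda v: float(v @ v) <= 1.0
print(approx_h(np.array([1.0, 0.0]), np.array([-2.0, 0.0]), in_disc))  # ~0.3333
```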

Our main result is the following.

Theorem 1

Let C be a convex, closed and nonempty subset of a Hilbert space H and let \(T:C\to H\) be a mapping. Then the algorithm

$$ \begin{cases} x_{0}\in C, \\ \alpha_{0}:=\max\{\frac{1}{2},h(x_{0})\}, \\ x_{n+1}:=\alpha_{n}x_{n}+(1-\alpha_{n})Tx_{n}, \\ \alpha_{n+1}:=\max\{\alpha_{n},h(x_{n+1})\} \end{cases} $$
(4)

is well defined.

If we further assume that

  1. C is strictly convex and

  2. T is a non-expansive mapping that satisfies the inward condition (2) and is such that \(\operatorname{Fix}(T)\neq\emptyset\),

then \(\{x_{n}\}\) weakly converges to a point \(p\in \operatorname{Fix}(T)\). Moreover, if \(\sum_{n=0}^{\infty}(1-\alpha_{n})<\infty\), then the convergence is strong.

Proof

To prove that the algorithm is well defined, it is sufficient to note that \(\alpha_{n}\in[h(x_{n}),1]\) for any \(n\in\mathbb{N}\); then, by recalling property (P2) from Lemma 1, it immediately follows that

$$x_{n+1}=\alpha_{n}x_{n}+(1-\alpha_{n})Tx_{n} \in C. $$

Assume now that T satisfies the inward condition. In this case, by property (P3) of the previous lemma, the non-decreasing sequence \(\{\alpha_{n}\}\) is contained in \([\frac{1}{2},1)\). Also, since T is non-expansive with at least one fixed point, for any \(p\in\operatorname{Fix}(T)\) we have

$$\|x_{n+1}-p\|\leq\alpha_{n}\|x_{n}-p\|+(1-\alpha_{n})\|Tx_{n}-Tp\|\leq\|x_{n}-p\|, $$

so that \(\{x_{n}\}\) is Fejér-monotone with respect to \(\operatorname{Fix}(T)\) and, as a consequence, both \(\{x_{n}\}\) and \(\{Tx_{n}\}\) are bounded.

Firstly, assume that \(\sum_{n=0}^{\infty}(1-\alpha_{n})=\infty\). Then, since \(\alpha_{n}\geq\frac{1}{2}\), we derive that \(\sum_{n=0}^{\infty}\alpha_{n}(1-\alpha_{n})=\infty\) and from Lemma 2 of [16] we obtain that

$$\|x_{n}-Tx_{n}\|\to0. $$

This fact, together with the Fejér-monotonicity of \(\{x_{n}\}\), proves that the sequence weakly converges to a point of \(\operatorname{Fix}(T)\) (see [17], Proposition 2.1).

Suppose now that

$$ \sum_{n=0}^{\infty}(1-\alpha_{n})< \infty. $$
(5)

Since

$$\|x_{n+1}-x_{n}\|=(1-\alpha_{n}) \|Tx_{n}-x_{n}\|, $$

and, by (5) together with the boundedness of \(\{x_{n}\}\) and \(\{Tx_{n}\}\), it is promptly obtained that

$$\sum_{n=0}^{\infty}\|x_{n+1}-x_{n} \|< \infty, $$

i.e., \(\{x_{n}\}\) is a Cauchy sequence in norm and hence \(x_{n}\to x^{*}\in C\).

Since T satisfies the inward condition, by applying properties (P2) and (P3) from Lemma 1 we obtain that \(h(x^{*})<1\) and that for any \(\mu\in(h(x^{*}),1)\) it holds

$$ \mu x^{*}+(1-\mu)Tx^{*}\in C. $$
(6)

On the other hand, since \(\lim_{n\to\infty}\alpha_{n}=1\) by (5) and since \(\alpha_{n}=\max\{\alpha _{n-1}, h(x_{n})\}\), we can choose a subsequence \(\{x_{n_{k}}\}\) with the property that \(\{h(x_{n_{k}})\}\) is non-decreasing and \(h(x_{n_{k}})\to1\). In particular, for any \(\mu<1\),

$$ \mu x_{n_{k}}+(1-\mu)Tx_{n_{k}}\notin C $$
(7)

eventually holds.

Choose \(\mu_{1},\mu_{2}\in(h(x^{*}),1)\) with \(\mu_{1}\neq\mu_{2}\) and set \(v_{1}:=\mu_{1}x^{*}+(1-\mu_{1})Tx^{*}\) and \(v_{2}:=\mu _{2}x^{*}+(1-\mu_{2})Tx^{*}\). Then, whenever \(\mu\in[\mu_{1},\mu_{2}]\), by (6) we have that \(v:=\mu x^{*}+(1-\mu)Tx^{*}\in C\). Moreover,

$$\mu x_{n_{k}}+(1-\mu)Tx_{n_{k}}\to v $$

since \(x_{n}\to x^{*}\). This, together with (7), implies that \(v\in\partial C\); since \(\mu\in[\mu_{1},\mu_{2}]\) is arbitrary, we conclude that \([v_{1},v_{2}]\subset\partial C\).

By the strict convexity of C, the boundary ∂C cannot contain a nontrivial segment, so that \(v_{1}=v_{2}\), that is,

$$\mu_{1}x^{*}+(1-\mu_{1})Tx^{*}= \mu_{2}x^{*}+(1-\mu_{2})Tx^{*}. $$

Since \(\mu_{1}\neq\mu_{2}\), this forces \(x^{*}=Tx^{*}\), i.e., \(\{x_{n}\}\) strongly converges to a fixed point of T. □

Remark 1

Following the same line of proof, it is easily seen that the same results hold true if the starting coefficient \(\alpha_{0}=\max\{\frac{1}{2},h(x_{0})\}\) is replaced by \(\alpha_{0}=\max\{b,h(x_{0})\}\), where \(b\in(0,1)\) is an arbitrary fixed value. In the statement of Theorem 1, the value \(b=\frac{1}{2}\) was chosen only to ease the notation.

We also note that the value \(h(x_{n})\) can be replaced, in practice, by \(h_{n}=1-\frac{1}{2^{j_{n}}}\), where \(j_{n}:=\min\{j\in\mathbb {N}:(1-\frac{1}{2^{j}})x_{n}+\frac{1}{2^{j}}Tx_{n}\in C\}\).
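
To make this remark concrete, here is a minimal Python sketch of algorithm (4) in which \(h(x_{n})\) is replaced by the dyadic surrogate \(h_{n}\) described above. It is a hypothetical illustration only: the membership oracle `in_C`, the operator `T` and the parameter names are placeholders for a concrete problem.

```python
import numpy as np

def dyadic_h(x, Tx, in_C, j_max=52):
    """Surrogate h_n = 1 - 1/2**j_n, with j_n the smallest j such that
    (1 - 1/2**j)*x + (1/2**j)*Tx belongs to C (cf. Remark 1)."""
    for j in range(j_max + 1):
        lam = 1.0 - 0.5 ** j
        if in_C(lam * x + (1.0 - lam) * Tx):
            return lam
    return 1.0  # for an inward map h(x) < 1, so this is reached only if h(x) is extremely close to 1

def km_non_self(x0, T, in_C, n_iter=100, b=0.5):
    """Scheme (4): alpha_0 = max{b, h(x_0)}, x_{n+1} = alpha_n*x_n + (1-alpha_n)*T(x_n),
    alpha_{n+1} = max{alpha_n, h(x_{n+1})}, with h replaced by its dyadic surrogate."""
    x = np.asarray(x0, dtype=float)
    Tx = T(x)
    alpha = max(b, dyadic_h(x, Tx, in_C))
    for _ in range(n_iter):
        x = alpha * x + (1.0 - alpha) * Tx
        Tx = T(x)
        alpha = max(alpha, dyadic_h(x, Tx, in_C))
    return x
```

Note that any λ smaller than \(h(x_{n})\) fails the membership test, so the surrogate automatically satisfies \(h_{n}\geq h(x_{n})\) and property (P2) still guarantees that every iterate remains in C.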

Remark 2

As it follows from the proof, the condition \(\sum_{n}(1-\alpha _{n})<\infty\) provides, as a by-product, a localization result for the fixed point \(x^{*}\). Indeed, in this case, \(x^{*}=v_{1}=v_{2}\) belongs to the boundary ∂C of the set C.

Remark 3

In [18], for a closed and convex set C, the map

$$f(x):=\inf\bigl\{ \lambda\in[0,1]:x\in\lambda C\bigr\} $$

was introduced and used in conjunction with an iterative scheme to approximate a fixed point of minimum norm (see also [19]). Indeed, in the above-mentioned paper, it is proved that the iterative scheme

$$\begin{cases} \lambda_{n}=\max\{f(x_{n}),\lambda_{n-1}\}, \\ y_{n}=\alpha_{n}x_{n}+(1-\alpha_{n})Tx_{n}, \\ x_{n+1}=\alpha_{n}\lambda_{n}x_{n}+(1-\alpha_{n})y_{n} \end{cases} $$

strongly converges under the assumptions that \(\{\alpha_{n}\}\) is a sequence in \((0,1)\) such that \(\lim_{n}\frac{\alpha_{n}}{1-\lambda_{n}}=0\) and that \(\sum_{n}(1-\lambda_{n})\alpha_{n}=\infty\). We point out that these conditions appear to be difficult to check, as they involve the geometry of the set C.

We illustrate the statement of our results with a brief example.

Example 1

Let \(H=l^{2}(\mathbb{R})\) and let \(C:=B_{1}\cap B_{2}\), where \(B_{1}:=\{ (t_{i})_{i\in\mathbb{N}}:(t_{1}-49.995)^{2}+\sum_{i=2}^{\infty }t_{i}^{2}\leq(50.005)^{2}\}\) and \(B_{2}:=\{(t_{i})_{i\in\mathbb{N}}:\sum_{i=1}^{\infty}t_{i}^{2}\leq 1\}\). Then C is a nonempty, closed and strictly convex subset of H. Let \(T:C\to H\) be the map defined by \(T(t_{1},t_{2},\ldots,t_{i},\ldots ):=(-t_{1},t_{2},\ldots,t_{i},\ldots)\). Then T is a non-expansive inward map with \(\operatorname{Fix}(T)=\{(0,t_{2},\ldots ,t_{i},\ldots):\sum_{i=2}^{\infty}t_{i}^{2}\leq1\}\). If we use the algorithm

$$\begin{cases} x_{0}=(t_{i})_{i\in\mathbb{N}}\in C, \\ \alpha_{0}:=\max\{\frac{1}{2},h(x_{0})\}, \\ x_{n+1}:=\alpha_{n}x_{n}+(1-\alpha_{n})Tx_{n}, \\ \alpha_{n+1}:=\max\{\alpha_{n},h(x_{n+1})\}, \end{cases} $$

then, by the natural symmetry of the problem, the midpoint \(\frac{1}{2}x_{0}+\frac{1}{2}Tx_{0}=(0,t_{2},\ldots,t_{i},\ldots)\) belongs to C, so that \(h(x_{0})\leq\frac{1}{2}\), \(\alpha_{0}=\frac{1}{2}\), and we obtain the constant sequence

$$x_{1}=\cdots=x_{n}=(0,t_{2}, \ldots,t_{i},\ldots)\in \operatorname{Fix}(T). $$

If we use the algorithm

$$\begin{cases} x_{0}=(t_{i})_{i\in\mathbb{N}}\in C, \\ \alpha_{0}:=\max\{0.01,h(x_{0})\}, \\ x_{n+1}:=\alpha_{n}x_{n}+(1-\alpha_{n})Tx_{n}, \\ \alpha_{n+1}:=\max\{\alpha_{n},h(x_{n+1})\}, \end{cases} $$

then \(\{x_{n}\}\) still converges to a point of \(\operatorname{Fix}(T)\), but \(\{x_{n}\}\cap \operatorname{Fix}(T)=\emptyset\) whenever \(t_{1}\neq0\).
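
For a numerical check of the two behaviours described above, one can truncate \(l^{2}\) to its first few coordinates. The Python sketch below is hypothetical: the truncation dimension and the starting point are arbitrary choices made only for illustration.

```python
import numpy as np

# Truncation of Example 1 to the first five coordinates of l^2.
def in_C(v):
    in_B1 = (v[0] - 49.995) ** 2 + np.sum(v[1:] ** 2) <= 50.005 ** 2
    in_B2 = np.sum(v ** 2) <= 1.0
    return bool(in_B1 and in_B2)

def T(v):
    w = v.copy()
    w[0] = -w[0]                           # T changes the sign of the first coordinate
    return w

def h(v, Tv, tol=1e-12):
    """h(v) by bisection (the admissible lambdas form the interval [h(v), 1])."""
    if in_C(Tv):
        return 0.0
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if in_C(mid * v + (1.0 - mid) * Tv) else (mid, hi)
    return hi

x0 = np.array([0.6, 0.5, 0.3, 0.2, 0.1])   # an arbitrary starting point in C
for b in (0.5, 0.01):                       # the two initializations of Example 1
    x, alpha = x0.copy(), max(b, h(x0, T(x0)))
    for _ in range(60):
        x = alpha * x + (1.0 - alpha) * T(x)
        alpha = max(alpha, h(x, T(x)))
    print(b, x[0])
# With b = 0.5 the first coordinate vanishes after one step (x_1 is already a fixed
# point of T); with b = 0.01 it only decays geometrically and never becomes zero.
```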

We conclude the paper with a few questions that, to the best of our knowledge, appear to be still open.

Question 1

It has been proved that the Krasnoselskii-Mann algorithm converges for more general classes of mappings (see, e.g., [20] and [21]). Maintaining the same assumptions on the set C and the inward condition on the involved map, it appears natural to ask for which classes of mappings the conclusion of Theorem 1 still holds.

Question 2

Under which assumptions can algorithm (4) be adapted to produce a sequence converging to a common fixed point of a family of mappings? In other words, does the algorithm

$$\begin{cases} x_{0}\in C, \\ \alpha_{0}:=\max\{\frac{1}{2},h_{n}(x_{0})\}, \\ x_{n+1}:=\alpha_{n}x_{n}+(1-\alpha_{n})T_{n}x_{n}, \\ \alpha_{n+1}:=\max\{\alpha_{n},h_{n+1}(x_{n+1})\} \end{cases} $$

converge, under suitable hypotheses, to a common fixed point of the family \(\{T_{n}\}\), where

$$h_{n}(x):=\inf\bigl\{ \lambda\geq0:\lambda x+(1-\lambda)T_{n}x \in C\bigr\} $$

for each \(n\in\mathbb{N}\)?

We refer to [22] and [23] for two examples regarding the classical Krasnoselskii-Mann algorithm.

Question 3

In the classical literature, it has been proved that the inward condition can often be replaced by a weaker one. For example, a mapping \(T:C\to H\) is said to be weakly inward (or to satisfy the weakly inward condition) if

$$Tx\in\overline{I_{C}(x)}\quad \forall x\in C. $$

Does Theorem 1 hold even for weakly inward mappings?

On the other hand, we observe that the strict convexity of the set C appears to be an unusual assumption for results regarding the convergence of Krasnoselskii-Mann iterations. We do not know whether our result still holds for a general convex and closed set C, even at the price of strengthening the requirements on the map T.