1 Introduction

In this paper, we show that, under some general conditions, the Lipschitz continuity of a mapping, as well as its nonlinear version, important for instance in fixed-point theory, is implied by much weaker conditions.

We say that a mapping T of a metric space \(\left( X,d\right) \) into a metric space \(\left( Y,\rho \right) \) is restrictive Lipschitz if there exist a positive sequence \(\left( t_{n}:n\in \mathbb {N}\right) \) decreasing to zero and a nonnegative sequence \(\left( L_{n}:n\in \mathbb {N}\right) ,\) with \(L:=\liminf _{n\rightarrow \infty }L_{n}<\infty ,\) such that, for all \(x,y\in X,\) \(n\in \mathbb {N},\) the implication

$$\begin{aligned} d\left( x,y\right) =t_{n}\Longrightarrow \rho \left( Tx,Ty\right) \le L_{n}t_{n} \end{aligned}$$

holds true.

In view of our main result (Theorem 1, Sect. 5), if a continuous mapping T of a complete metrically convex metric space \(\left( X,d\right) \) into a metric space \(\left( Y,\rho \right) \) is restrictive Lipschitz, then T is L-Lipschitz, that is

$$\begin{aligned} \rho \left( Tx,Ty\right) \le Ld\left( x,y\right) , \quad x,y\in X, \end{aligned}$$

(see [15] where it is assumed that X is a convex subset of a normed space and Y is a normed space) and, in the case when the set \( \left\{ n\in \mathbb {N}:L_{n}<L\right\} \) is infinite, even more, that

$$\begin{aligned} \rho \left( Tx,Ty\right) \le L\alpha \left( d\left( x,y\right) \right) , \quad x,y\in X, \end{aligned}$$

where the function \(\alpha :\left[ 0,\infty \right) \rightarrow \left[ 0,\infty \right) \) is continuous, increasing, concave (so subadditive) and such that

$$\begin{aligned} \alpha \left( t\right) <t , \quad t>0\text {.} \end{aligned}$$

The assumption of the continuity of T can be omitted by a strengthening of the restrictive Lipschitz condition (Theorem 2).

In the proof of this result, besides the metrical convexity and some properties of subadditive functions (Sect. 4), the key technical role is played by the following basis-type result: if \(\left( t_{n}:n\in \mathbb {N}\right) \) is a strictly decreasing sequence of real numbers and \(\lim _{n\rightarrow \infty }t_{n}=0\), then every positive real number t can be represented in a unique way in the form

$$\begin{aligned} t=\sum _{n=1}^{\infty }k_{n}\left( t\right) t_{n}, \end{aligned}$$

where \(\left( k_{n}\left( t\right) :n\in \mathbb {N}\right) \) is a sequence of nonnegative integers (Lemma 1 in Sect. 2).

In Sect. 6, we show that Theorem 1 leads to the following: every continuous selfmapping T of a nonempty metrically convex complete metric space \(\left( X,d\right) ,\) satisfying rather modest restrictive Lipschitz conditions with a sequence \(\left( L_{n}:n\in \mathbb {N}\right) ,\) such that \(0\le L_{n}<1\) \((n\in \mathbb {N})\) and \(\liminf _{n\rightarrow \infty }L_{n}\le 1,\) not only has a unique fixed-point, but either it must be a Banach contraction or there is an increasing concave function \(\alpha : \left[ 0,\infty \right) \rightarrow \left[ 0,\infty \right) \), \(\alpha \left( t\right) <t\) for \(t>0\), such that

$$\begin{aligned} d\left( Tx,Ty\right) \le \alpha \left( d\left( x,y\right) \right) , \quad x,y\in X, \end{aligned}$$

i.e., T must be a regular nonlinear \(\alpha \)-contraction with a concave function \(\alpha \). This result unifies the Banach principle and its nonlinear generalizations. In particular, it implies that, in a metrically convex space, every nonlinear contraction is a nonlinear contraction with a concave function. This fact improves the relevant result of Boyd and Wong (Th. 2 in [2]).

In Sect. 7, we present some applications (Theorems  5,  6). The advantages of the concavity of \(\alpha \) in Theorem 3 are illustrated by Theorem 6.

2 A decreasing zero sequence is a basis in the set of positive real numbers

The following result plays the crucial role in the proof of the main result.

Lemma 1

Let \(\left( t_{n}:n\in \mathbb {N}\right) \) be an arbitrary strictly decreasing sequence of real numbers, such that

$$\begin{aligned} \lim _{n\rightarrow \infty }t_{n}=0\text {.} \end{aligned}$$
(1)

Then,

(I) for every \(t\ge 0\) the sequence \(\left( k_{n}\left( t\right) :n\in \mathbb {N}\right) \) of nonnegative integers, such that

$$\begin{aligned} k_{1}\left( t\right) t_{1}\le & {} t<\left( k_{1}\left( t\right) +1\right) t_{1}, \end{aligned}$$
(2)
$$\begin{aligned} k_{n}\left( t\right) t_{n}\le & {} t-\sum _{i=1}^{n-1}k_{i}\left( t\right) t_{i}<\left( k_{n}\left( t\right) +1\right) t_{n}, \quad n\in \mathbb {N}\text {, \ }n\ge 2\text {,} \end{aligned}$$
(3)

exists and is unique, and the following hold true:

$$\begin{aligned} t=\sum _{n=1}^{\infty }k_{n}\left( t\right) t_{n}, \end{aligned}$$
(4)

for every \(n\in \mathbb {N}\)

$$\begin{aligned} k_{n}\left( t_{m}\right) =\left\{ \begin{array}{ccc} 1 &{} \text {if} &{} m=n \\ 0 &{} \text {if} &{} m\ne n \end{array} \right. , \quad m\in \mathbb {N}\text {,} \end{aligned}$$

for every \(m\in \mathbb {N}\), and for every t

$$\begin{aligned} 0\le t<t_{m}\Longrightarrow t=\sum _{n=m+1}^{\infty }k_{n}\left( t\right) t_{n}, \end{aligned}$$
(5)

for every \(m\in \mathbb {N}\), and for every t

$$\begin{aligned} t_{m+1}<t<t_{m}\Longrightarrow k_{m+1}\left( t\right) \ge 1; \end{aligned}$$
(6)

(II) for every real number L and for every sequence \(\left( L_{n}\right) \) of real numbers, such that

$$\begin{aligned} 0<L_{n}<L<\infty , \quad n\in \mathbb {N}\text {,} \end{aligned}$$
(7)

the function \(\gamma :\left[ 0,\infty \right) \rightarrow \) \(\left[ 0,\infty \right) \)

$$\begin{aligned} \gamma \left( t\right) :=\frac{1}{L}\sum _{n=1}^{\infty }L_{n}k_{n}\left( t\right) t_{n}, \quad t\ge 0\text {, } \end{aligned}$$
(8)

is correctly defined and has the following properties:

$$\begin{aligned} 0<\gamma \left( t\right) <t, \quad t>0, \quad \gamma \left( 0\right) =0, \end{aligned}$$
(9)

and

$$\begin{aligned} \limsup _{s\rightarrow t}\gamma \left( s\right) <t, \quad t>0. \end{aligned}$$
(10)

Proof

(I) Take an arbitrary \(t\ge 0\). The existence and uniqueness of a nonnegative integer satisfying (2) is obvious. Suppose we have already chosen nonnegative integers \(k_{1}\left( t\right) ,\ldots ,k_{n-1}\left( t\right) \), such that

$$\begin{aligned} k_{n-1}\left( t\right) t_{n-1}\le t-\sum _{i=1}^{n-2}k_{i}\left( t\right) t_{i}<\left( k_{n-1}\left( t\right) +1\right) t_{n-1}\text {,} \end{aligned}$$

for some \(n\in \mathbb {N}\), \(n\ge 2\). Then, there is a unique nonnegative integer \(k_{n}\left( t\right) \), such that (3) holds, and the correctness of the construction of the sequence \(\left( k_{n}\left( t\right) :n\in \mathbb {N}\right) \) follows by induction. From (2) and (3), we have

$$\begin{aligned} 0\le t-\sum _{i=1}^{n}k_{i}\left( t\right) t_{i}<t_{n}, \quad n\in \mathbb {N}\text {,} \end{aligned}$$

which together with (1) implies (4). The remaining properties of (I) are obvious.

(II) The correctness of the definition of the function (8) follows from (4) and the boundedness of the sequence \((L_{n})\).

To show (9) take an arbitrary \(t>0.\) If \(t\ge t_{1}\), then \(t\in \big [ k_{1}\left( t\right) t_{1},\big ( k_{1}\left( t\right) +1\big ) t_{1}\big ) ,\) and by (8), we have

$$\begin{aligned} \gamma \left( t\right) =\frac{1}{L}\sum _{n=1}^{\infty }L_{n}k_{n}\left( t\right) t_{n}<\frac{1}{L}\sum _{n=1}^{\infty }Lk_{n}\left( t\right) t_{n}=\sum _{n=1}^{\infty }k_{n}\left( t\right) t_{n}=t\text {.} \end{aligned}$$

If \(0<t<t_{1}\), then there is a unique \(m\in \mathbb {N}\), such that \( t_{m+1}\le t<t_{m}\). By (5), we have \(k_{1}\left( t\right) = \ldots =k_{m}\left( t\right) =0\). Now, (8), inequalities (7) and (5) imply that

$$\begin{aligned} \gamma \left( t\right) =\frac{1}{L}\sum _{n=m+1}^{\infty }L_{n}k_{n}\left( t\right) t_{n}<\sum _{n=m+1}^{\infty }k_{n}\left( t\right) t_{n}=t\text {,} \end{aligned}$$

which shows that \(\gamma \left( t\right) <t\) for all \(t>0.\) Since, obviously, \(\gamma \left( t\right) >0\) for all \(t>0\) and \(\gamma \left( 0\right) =0\), inequalities (9) are proved.

To prove (10) take an arbitrary \(t>0\). Then, either \(t\ge t_{1}\) or there is a unique \(m\in \mathbb {N}\), such that

$$\begin{aligned} t_{m+1}\le t<t_{m}. \end{aligned}$$

Assume that the first case holds, i.e., that \(t\ge t_{1}.\) Then, \(t\in \big [ k_{1}\left( t\right) t_{1},\big ( k_{1}\left( t\right) +1\big ) t_{1}\big ) \) and, clearly, for all \(s\in \left[ k_{1}\left( t\right) t_{1},\left( k_{1}\left( t\right) +1\right) t_{1}\right) \) we have \( k_{1}\left( s\right) =k_{1}\left( t\right) >0\). Hence, making use of (4), (8) and (7), we have

$$\begin{aligned} Ls-L\gamma \left( s\right) =\sum _{n=1}^{\infty }\left( L-L_{n}\right) k_{n}\left( s\right) t_{n}\ge \left( L-L_{1}\right) k_{1}\left( s\right) t_{1}=\left( L-L_{1}\right) k_{1}\left( t\right) t_{1} \end{aligned}$$

for all \(s\in \left[ k_{1}\left( t\right) t_{1},\left( k_{1}\left( t\right) +1\right) t_{1}\right) .\) If \(t>k_{1}\left( t\right) t_{1}\), it implies that

$$\begin{aligned} Lt-\limsup _{s\rightarrow t}L\gamma \left( s\right) =\liminf _{s\rightarrow t}\left( Ls-L\gamma \left( s\right) \right) >0, \end{aligned}$$

which shows that inequality (10) holds true. If \(t=k_{1}\left( t\right) t_{1}\), the above inequality, applied for \(s\in \left[ t,\left( k_{1}\left( t\right) +1\right) t_{1}\right) \), implies that

$$\begin{aligned} Lt-\limsup _{s\rightarrow t+}L\gamma \left( s\right) =\liminf _{s\rightarrow t+}\left( Ls-L\gamma \left( s\right) \right) >0, \end{aligned}$$

so we have

$$\begin{aligned} \limsup _{s\rightarrow t+}\gamma \left( s\right) <t\text {.} \end{aligned}$$

If \(k_{1}\left( t\right) \ge 2\), then, for all \(s<t\) close enough to t, we have \(k_{1}\left( s\right) =k_{1}\left( t\right) -1\ge 1\), and the same estimate gives \(\limsup _{s\rightarrow t-}\gamma \left( s\right) <t\). In the remaining case \(t=t_{1}\), in view of (5) and (6) of part (I), for all \(s\in \left( t_{2},t_{1}\right) \) close enough to \(t_{1}\), we have

$$\begin{aligned} Ls-L\gamma \left( s\right) =\sum _{n=2}^{\infty }\left( L-L_{n}\right) k_{n}\left( s\right) t_{n}\ge \left( L-L_{2}\right) k_{2}\left( s\right) t_{2}\ge \left( L-L_{2}\right) t_{2}>0, \end{aligned}$$

whence

$$\begin{aligned} Lt-\limsup _{s\rightarrow t-}L\gamma \left( s\right) =\liminf _{s\rightarrow t-}\left( Ls-L\gamma \left( s\right) \right) >0, \end{aligned}$$

that is

$$\begin{aligned} \limsup _{s\rightarrow t-}\gamma \left( s\right) <t\text {.} \end{aligned}$$

This shows that inequality (10) holds true if \(t=k_{1}\left( t\right) t_{1}\).

Now, assume that \(t_{m+1}\le t<t_{m}.\) In view of (5) of part (I), for \( s\in \) \(\left[ t_{m+1},t_{m}\right) \), we have

$$\begin{aligned} Ls-L\gamma \left( s\right) =\sum _{n=m+1}^{\infty }\left( L-L_{n}\right) k_{n}\left( s\right) t_{n}. \end{aligned}$$

Treating \(t_{m+1}\) as \(t_{1}\) in the previous reasoning and arguing similarly, we conclude that (10) holds true for all \(t\in \left[ t_{m+1},t_{m}\right) .\) This completes the proof. \(\square \)

Thus, every strictly decreasing sequence of positive real numbers converging to zero forms a basis in the cone of positive numbers \(\left( 0,\infty \right) \), which means that every \(t>0\) can be uniquely represented in the form (4), where \(\left( k_{n}\left( t\right) :n\in \mathbb {N}\right) \) is the unique sequence of nonnegative integer coefficients satisfying conditions (2) and (3).
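Since the coefficients \(k_{n}\left( t\right) \) are produced by a purely greedy procedure, the representation (4) and the function \(\gamma \) of (8) are easy to compute numerically. The following Python sketch is only an illustration of Lemma 1, under the hypothetical choices \(t_{n}=2^{-n}\) and \(L_{n}=1-2^{-n}\) (so that \(L=1\)); all names are ad hoc and not taken from the paper.

```python
def greedy_coefficients(t, t_seq):
    """Coefficients k_1(t), ..., k_N(t) from (2)-(3), for the finitely
    many basis terms supplied in t_seq (positive, strictly decreasing)."""
    ks, remainder = [], t
    for tn in t_seq:
        k = int(remainder // tn)      # largest integer k with k*tn <= remainder
        ks.append(k)
        remainder -= k * tn           # the remainder stays in [0, tn)
    return ks

N = 40
t_seq = [2.0 ** (-n) for n in range(1, N + 1)]
L_seq = [1.0 - 2.0 ** (-n) for n in range(1, N + 1)]
L = 1.0                               # here L = liminf L_n = 1

t = 0.7303
ks = greedy_coefficients(t, t_seq)
partial_sum = sum(k * tn for k, tn in zip(ks, t_seq))                  # truncation of (4)
gamma_t = sum(Ln * k * tn for Ln, k, tn in zip(L_seq, ks, t_seq)) / L  # truncation of (8)

print(abs(partial_sum - t) < t_seq[-1])   # True: the truncation error is below t_N
print(0.0 < gamma_t < t)                  # True: in accordance with (9)
```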

Remark 1

The uniqueness of the sequence \(\left( k_{n}\left( t\right) :n\in \mathbb {N}\right) \) for every t and the equality \(\sum _{m=1}^{\infty }k_{m}\left( t_{n}\right) t_{m}=t_{n}\) imply that \(k_{n}\left( t_{n}\right) =1\) and \(k_{m}\left( t_{n}\right) =0\) for all \(m\ne n\), whence

$$\begin{aligned} L\gamma \left( t_{n}\right) =L_{n}t_{n}, \quad n\in \mathbb {N}\text {.} \end{aligned}$$

3 Metrically convex space and a lemma

A metric space \(\left( X,d\right) \) is said to be metrically convex (or Menger convex), if, for all \(x,y\in X\), \(x\ne y\), there is a point \(z\in X\), \(x\ne z\ne y\), such that

$$\begin{aligned} d\left( x,y\right) =d\left( x,z\right) +d\left( z,y\right) \end{aligned}$$

([1, 6]). Clearly, every convex subset of a normed (or paranormed) space [13] is metrically convex.

If \(\left( X,d\right) \) is a complete metrically convex metric space, then the set \(I:=\left\{ d\left( x,y\right) :x,y\in X\right\} \) is an interval of the form

$$\begin{aligned} I=\left[ 0,a\right) \,\, \text { or } \,\, I=\left[ 0,a\right] \,\, \text { where } \,\, a:=\sup \left\{ d\left( x,y\right) :x,y\in X\right\} . \end{aligned}$$

Moreover, in view of Menger’s lemma (see Blumenthal [1], p. 41), for any \(\alpha \), \(0\le \alpha \le 1\), and any \(x,y\in X\), there exists \(z\in X\), such that

$$\begin{aligned} d\left( x,z\right) =\alpha d\left( x,y\right) \text {, \quad }d\left( z,y\right) =\left( 1-\alpha \right) d\left( x,y\right) . \end{aligned}$$

Hence, by induction, for any system of nonnegative numbers \(\alpha _{1}, \ldots ,\alpha _{k+1}\) satisfying \(\alpha _{1}+ \cdots +\alpha _{k+1}=1,\) and \( x,y\in X\), there exist \(x_{1}, \ldots ,x_{k}\in X\), such that

$$\begin{aligned} d\left( x,x_{1}\right)= & {} \alpha _{1}d\left( x,y\right) \text {, \ }d\left( x_{1},x_{2}\right) =\alpha _{2}d\left( x,y\right) ,\ldots ,d\left( x_{k-1},x_{k}\right) =\alpha _{k}d\left( x,y\right) \text {, } \\ \text {\ }d\left( x_{k},y\right)= & {} \alpha _{k+1}d\left( x,y\right) . \end{aligned}$$

This implies the following.

Lemma 2

If \(\left( X,d\right) \) is a complete metrically convex space, then, for every \(x,y\in X\), \(x\ne y\), and every \(t\in \left( 0,d\left( x,y\right) \right) ,\) there exist a unique \(k\in \mathbb {N}\) satisfying the inequality

$$\begin{aligned} kt\le d\left( x,y\right) <\left( k+1\right) t, \end{aligned}$$

and \(x_{1}, \ldots ,x_{k}\in X\), such that

$$\begin{aligned} d\left( x,x_{1}\right) =t, \quad d\left( x_{1},x_{2}\right) = \cdots =d\left( x_{k-1},x_{k}\right) =t, \quad d\left( x_{k},y\right) <t; \end{aligned}$$

moreover, setting \(x_{0}:=x\), we have

$$\begin{aligned} \sum \limits _{i=1}^{k}d\left( x_{i-1},x_{i}\right) +d\left( x_{k},y\right) =d\left( x,y\right) . \end{aligned}$$

Proof

Take arbitrary \(x,y\in X\), \(x\ne y,\) and \(t\in \left( 0,d\left( x,y\right) \right) \). There is a unique \(k\in \mathbb {N}\), such that \(kt\le d\left( x,y\right) <\left( k+1\right) t\). The numbers

$$\begin{aligned} \alpha _{1}= \cdots =\alpha _{k}:=\frac{t}{d\left( x,y\right) }, \quad \alpha _{k+1}:=1-\frac{kt}{d\left( x,y\right) } \end{aligned}$$

are nonnegative and \(\alpha _{1}+ \cdots +\alpha _{k+1}=1\). Thus, by the above consequence of the Menger lemma, there exist \(x_{1}, \ldots ,x_{k}\in X\), such that

$$\begin{aligned} d\left( x,x_{1}\right) =t, \,\, d\left( x_{1},x_{2}\right) = \cdots =d\left( x_{k-1},x_{k}\right) =t, \,\, d\left( x_{k},y\right) =\left( d\left( x,y\right) -kt\right) <t. \end{aligned}$$

Setting \(x_{0}:=x,\) we hence get

$$\begin{aligned} \sum \limits _{i=1}^{k}d\left( x_{i-1},x_{i}\right) +d\left( x_{k},y\right) =\sum \limits _{i=1}^{k}t+\left( d\left( x,y\right) -kt\right) =d\left( x,y\right) , \end{aligned}$$

which completes the proof. \(\square \)

Remark 2

The assumption of the completeness of the space \(\left( X,d\right) \) can be omitted if X is a convex subset of a normed space.
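In that normed-space situation, the points \(x_{1},\ldots ,x_{k}\) of Lemma 2 can simply be taken on the straight segment joining x and y. A minimal Python sketch with hypothetical data in the Euclidean plane, given only to illustrate the statement:

```python
import math

def subdivide(x, y, t):
    """Return k and the points x_1, ..., x_k of Lemma 2 on the segment [x, y]."""
    d = math.dist(x, y)
    assert 0 < t < d
    k = int(d // t)                               # unique k with k*t <= d < (k+1)*t
    direction = tuple((yi - xi) / d for xi, yi in zip(x, y))
    points = [tuple(xi + i * t * di for xi, di in zip(x, direction))
              for i in range(1, k + 1)]
    return k, points

x, y, t = (0.0, 0.0), (3.0, 4.0), 0.7             # here d(x, y) = 5
k, pts = subdivide(x, y, t)
print(k)                                          # 7, since 7*0.7 <= 5 < 8*0.7
print(math.dist(pts[-1], y) < t)                  # True: the last gap is shorter than t
```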

4 Some properties of subadditive functions

Lemma 3

If a function \(\lambda :\left[ 0,\infty \right) \rightarrow \left[ 0,\infty \right) \) is continuous at 0 with \(\lambda \left( 0\right) =0,\) and \( \lambda \) is subadditive, that is

$$\begin{aligned} \lambda \left( s+t\right) \le \lambda \left( s\right) +\lambda \left( t\right) , \quad s,t\ge 0\text {,} \end{aligned}$$

then

(i) [19] (see also [7], Th.7.8.3) for every \(t>0\), there exist the one-sided limits \(\lambda \left( t+\right) :=\lim _{r\rightarrow t+}\lambda \left( r\right) ,\) \(\lambda \left( t-\right) :=\lim _{r\rightarrow t-}\lambda \left( r\right) \) satisfying the inequality

$$\begin{aligned} \lambda \left( t+\right) \le \lambda \left( t\right) \le \lambda \left( t-\right) ; \end{aligned}$$

(ii) [19] (see also [7], Th. 7.6.1, [12], Lemma 3)

$$\begin{aligned} \lim _{t\rightarrow 0+}\frac{\lambda \left( t\right) }{t}=\sup \left\{ \frac{ \lambda \left( u\right) }{u}:u>0\right\} , \quad \lim _{t\rightarrow \infty }\frac{\lambda \left( t\right) }{t}=\inf \left\{ \frac{\lambda \left( u\right) }{u}:u>0\right\} ; \end{aligned}$$

(iii) the function \(\phi :\left[ 0,\infty \right) \rightarrow \left[ 0,\infty \right) \) defined by

$$\begin{aligned} \phi \left( t\right) :=\sup \left\{ \lambda \left( s\right) :s\in \left[ 0,t \right] \right\} , \quad t\ge 0, \end{aligned}$$

(the smallest increasing function bounding \(\lambda \) from above) is increasing, continuous, and subadditive.

Proof

(iii) Take \(s,t\ge 0\) and an arbitrary \(w\in \left[ 0,s+t\right] .\) Choosing \(u\in \left[ 0,s\right] \) and \(v\in \left[ 0,t\right] \), such that \( w=u+v\), by the subadditivity of \(\lambda \), and the definition of \(\phi \), we have

$$\begin{aligned} \lambda \left( w\right) =\lambda \left( u+v\right) \le \lambda \left( u\right) +\lambda \left( v\right) \le \phi \left( s\right) +\phi \left( t\right) , \end{aligned}$$

whence by the definition of \(\phi \)

$$\begin{aligned} \phi \left( s+t\right) \le \phi \left( s\right) +\phi \left( t\right) , \quad s,t\ge 0. \end{aligned}$$

Since \(\phi \) is continuous at 0 and \(\phi \left( 0\right) =0\), in view of (i), we get

$$\begin{aligned} \phi \left( t+\right) \le \phi \left( t\right) \le \phi \left( t-\right) , \quad t>0. \end{aligned}$$

Now, the increasing monotonicity of \(\phi \) implies that

$$\begin{aligned} \phi \left( t+\right) =\phi \left( t\right) =\phi \left( t-\right) , \quad t>0, \end{aligned}$$

so \(\phi \) is continuous in \(\left[ 0,\infty \right) .\) \(\square \)

Remark 3

A function \(\lambda :\left( 0,\infty \right) \rightarrow \mathbb {R}\) is subadditive if “it is concave at the point 0”, i.e., if the function

$$\begin{aligned} \left( 0,\infty \right) \ni t\longmapsto \frac{\lambda \left( t\right) }{t} \text { \ \ is \ decreasing}. \end{aligned}$$

A function \(\lambda :\left[ 0,\infty \right) \rightarrow \mathbb {R}\) is subadditive if it is concave at the point 0 and \(\lambda \left( 0\right) \ge 0\).

For some results involving more general linear inequalities than subadditivity, see Pycia [16].

Lemma 4

If \(\phi :\left[ 0,\infty \right) \rightarrow \left[ 0,\infty \right) \) is subadditive,  \(\phi \left( 0\right) =0\), \(\ \phi \left( t\right) <t\) for every \(t>0\,,\) and

$$\begin{aligned} \phi \left( t-\right) <t, \quad t>0, \end{aligned}$$

then there is an increasing concave function \(\alpha :\left[ 0,\infty \right) \rightarrow \left[ 0,\infty \right) \), such that for every \(t>0\)

$$\begin{aligned} \phi \left( t\right) \le \alpha \left( t\right) <t. \end{aligned}$$

Proof

By the subadditivity of \(\phi \), the inequality \(\phi \left( t-\right) <t\) for all \(t>0,\) and Lemma 3(i), we have

$$\begin{aligned} \limsup _{u\rightarrow t}\phi \left( u\right) <t , \quad t>0. \end{aligned}$$

Put

$$\begin{aligned} F:=\text{ cl }\left\{ \left( t,u\right) \in \left[ 0,\infty \right) ^{2}:0\le u\le \phi \left( t\right) \right\} \end{aligned}$$

and define a function \(\alpha :\left[ 0,\infty \right) \rightarrow \left[ 0,\infty \right) \) by the formula

$$\begin{aligned} \alpha \left( t\right) :=\max \left\{ u:\left( t,u\right) \in \text{ conv } F\right\} , \end{aligned}$$

where \(\text{ conv }F\) stands for the convex hull of the set F.

Take arbitrary \(t_{1},t_{2}\in \left( 0,\infty \right) \), \(\kappa \in \left[ 0,1\right] \) and choose \(u_{1},u_{2}\in \left[ 0,\infty \right) \), such that \( \left( t_{1},u_{1}\right) ,\) \(\left( t_{2},u_{2}\right) \in \text{ conv }F\). Then

$$\begin{aligned} \left( \kappa t_{1}+\left( 1-\kappa \right) t_{2},\kappa u_{1}+\left( 1-\kappa \right) u_{2}\right) \in \text{ conv }F \end{aligned}$$

and by the definition of \(\alpha \)

$$\begin{aligned} \alpha \left( \kappa t_{1}+\left( 1-\kappa \right) t_{2}\right) \ge \kappa u_{1}+\left( 1-\kappa \right) u_{2}, \end{aligned}$$

whence, passing to supremum, we get

$$\begin{aligned} \alpha \left( \kappa t_{1}+\left( 1-\kappa \right) t_{2}\right) \ge \kappa \alpha \left( t_{1}\right) +\left( 1-\kappa \right) \alpha \left( t_{2}\right) , \end{aligned}$$

which shows that \(\alpha \) is concave. The concavity of \(\alpha \) together with its nonnegativity and \(\alpha \left( 0+\right) =0\) imply that \(\alpha \) is increasing.

Since \(\frac{\phi \left( t\right) }{t}<1\) for every \(t>0,\) by Lemma 3(ii), we have

$$\begin{aligned} \lim _{t\rightarrow \infty }\frac{\phi \left( t\right) }{t}<1\text {, } \end{aligned}$$

and it follows that there is \(a>0\), such that \(\alpha \left( t\right) <t\) for all \(t>a.\)

Of course, we have \(\alpha \left( t\right) \le t\) for \(t\in \left[ 0,a \right] .\) To show that \(\alpha \left( t\right) <t\) for \(t\in \left( 0,a \right] \), assume, on the contrary, that \(\alpha \left( t_{0}\right) =t_{0}\) for some \(t_{0}\in \left( 0,a\right] \). Thus, the point \(\left( t_{0},t_{0}\right) \) belongs to the convex hull of the set \(F\cap \left\{ \left( t,u\right) :t\in \left[ 0,a\right] , \quad 0\le u\le t\right\} \). In view of Caratheodory’s theorem ([10], Cor. 17.4.2, p. 433), there is a two-dimensional simplex \(S\subset \text{ conv }F\) with vertices in the set F, such that \(\left( t_{0},t_{0}\right) \in S\). Since \(\left( t_{0},t_{0}\right) \) belongs to the boundary of S, it follows that for some \(\kappa \in \left[ 0,1\right] :\)

$$\begin{aligned} \left( t_{0},t_{0}\right) =\kappa \left( t_{1},u_{1}\right) +\left( 1-\kappa \right) \left( t_{2},u_{2}\right) , \end{aligned}$$

where \(\left( t_{1},u_{1}\right) \in F\) and \(\left( t_{2},u_{2}\right) \in F\) are two of the vertices of S. Hence

$$\begin{aligned} t_{0}=\kappa u_{1}+\left( 1-\kappa \right) u_{2}<\kappa t_{1}+\left( 1-\kappa \right) t_{2}=t_{0}, \end{aligned}$$

that is, \(t_{0}<t_{0},\) and this contradiction completes the proof. \(\square \)
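The convex-hull construction above can also be carried out numerically on a discretized domain: sample the graph of \(\phi \), take the upper boundary of the convex hull of the samples, and interpolate. The Python sketch below does this for a hypothetical subadditive \(\phi \) with \(\phi \left( t\right) <t\); it only illustrates the lemma and is not part of the proof.

```python
import math

def upper_concave_envelope(points):
    """Upper boundary of the convex hull of the points (monotone chain)."""
    pts = sorted(points)
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # pop hull[-1] if it lies on or below the chord from hull[-2] to p
            if (x2 - x1) * (p[1] - y1) - (p[0] - x1) * (y2 - y1) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

def phi(t):
    """A hypothetical example: phi(t)/t is decreasing and < 1, so phi is
    subadditive (Remark 3), yet phi is not concave (it is convex for t > 2)."""
    return t * (0.5 + 0.4 * math.exp(-t))

grid = [i / 100 for i in range(501)]                     # sample points of [0, 5]
hull = upper_concave_envelope([(t, phi(t)) for t in grid])

def alpha(t):
    """Piecewise-linear concave majorant read off the upper hull."""
    for (x1, y1), (x2, y2) in zip(hull, hull[1:]):
        if x1 <= t <= x2:
            return y1 + (y2 - y1) * (t - x1) / (x2 - x1)
    return hull[-1][1]

print(all(phi(t) <= alpha(t) + 1e-9 and alpha(t) < t for t in grid if t > 0))  # True
```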

5 Main result on the restrictive Lipschitz mappings

The key result of this paper reads as follows.

Theorem 1

Let \(\left( X,d\right) \) be a complete metrically convex metric space, and \(\left( Y,\rho \right) \) a metric space. Suppose that \(T:X\rightarrow Y\) is a continuous restrictive Lipschitz mapping, i.e., there are: a positive strictly decreasing sequence of real numbers \(\left( t_{n}:n\in \mathbb {N}\right) \) with

$$\begin{aligned} \lim _{n\rightarrow \infty }t_{n}=0; \end{aligned}$$

and a sequence \(\left( L_{n}:n\in \mathbb {N}\right) \) of nonnegative real numbers with

$$\begin{aligned} L:=\liminf _{n\rightarrow \infty }L_{n}<\infty , \end{aligned}$$

such that for every \(n\in \mathbb {N}\) and for all \(x,y\in X\)

$$\begin{aligned} d\left( x,y\right) =t_{n}\Longrightarrow \rho \left( Tx,Ty\right) \le L_{n}d\left( x,y\right) . \end{aligned}$$
(11)

Then

$$\begin{aligned} \rho \left( Tx,Ty\right) \le Ld\left( x,y\right) , \quad x,y\in X; \end{aligned}$$

moreover, if the set \(\left\{ n\in \mathbb {N}:L_{n}<L\right\} \) is infinite, then

$$\begin{aligned} \rho \left( Tx,Ty\right) \le L\alpha \left( d\left( x,y\right) \right) , \quad x,y\in X, \end{aligned}$$
(12)

for an increasing concave function \(\alpha :\left[ 0,\infty \right) \rightarrow \left[ 0,\infty \right) \), such that

$$\begin{aligned} \alpha \left( t\right) <t, \quad t>0\text {.} \end{aligned}$$

Proof

Choosing, if necessary, a subsequence of the sequence \(\left( \left( t_{n},L_{n}\right) :n\in \mathbb {N}\right) \), we can assume, without any loss of generality, that \(\left( L_{n}:n\in \mathbb {N}\right) \) is monotonic and

$$\begin{aligned} L=\lim _{n\rightarrow \infty }L_{n}\text {.} \end{aligned}$$

Take arbitrary \(x,y\in X,\) \(x\ne y.\) For every \(n\in \mathbb {N}\), there is a unique \(k_{n}\in \mathbb {N}\cup \left\{ 0\right\} \), such that

$$\begin{aligned} k_{n}t_{n}\le d\left( x,y\right) <\left( k_{n}+1\right) t_{n}. \end{aligned}$$

In view of Lemma 2, for every \(n\in \mathbb {N}\), there are points \(x_{0,n},x_{1,n}, \ldots ,x_{k_{n},n}\) in X, such that

$$\begin{aligned} x_{0,n}=x, \, \, d\left( x_{i-1,n},x_{i,n}\right) =t_{n} \, \, \text {for} \, \, i=1, \ldots ,k_{n}\text {, } \end{aligned}$$

and

$$\begin{aligned} d\left( x_{k_{n},n},y\right) <t_{n}. \end{aligned}$$
(13)

From (11), we have

$$\begin{aligned} \rho \left( Tx_{i-1,n},Tx_{i,n}\right) \le L_{n}t_{n}, \quad i=1, \ldots ,k_{n}. \end{aligned}$$
(14)

Hence, by the triangle inequality

$$\begin{aligned} \rho \left( Tx,Ty\right)\le & {} \sum _{i=1}^{k_{n}}\rho \left( Tx_{i-1,n},Tx_{i,n}\right) +\rho \left( Tx_{k_{n},n},Ty\right) \\\le & {} \sum _{i=1}^{k_{n}}L_{n}t_{n}+\rho \left( Tx_{k_{n},n},Ty\right) \\= & {} L_{n}k_{n}t_{n}+\rho \left( Tx_{k_{n},n},Ty\right) \\\le & {} L_{n}d\left( x,y\right) +\rho \left( Tx_{k_{n},n},Ty\right) . \end{aligned}$$

Letting \(n\rightarrow \infty \) in the resulting inequality, taking into account the continuity of T, the equality \(L=\lim _{n\rightarrow \infty }L_{n}\), and the relation

$$\begin{aligned} \lim _{n\rightarrow \infty }x_{k_{n},n}=y, \end{aligned}$$

following from (13) and \(\lim _{n\rightarrow \infty }t_{n}=0\), we conclude that:

$$\begin{aligned} \rho \left( Tx,Ty\right) \le Ld\left( x,y\right) , \end{aligned}$$

which proves the first result.

To prove the “moreover” result, note first that, since the set \(\left\{ n\in \mathbb {N}:L_{n}<L\right\} \) is infinite, the subsequence above can be chosen so that \(\left( L_{n}:n\in \mathbb {N}\right) \) is strictly increasing. By Lemma 1, for every \(t\ge 0\), equality (4) holds, where \(\left( k_{n}\left( t\right) \right) \) is the unique sequence of nonnegative integers satisfying conditions (2) and (3). The boundedness of the sequence \(\left( L_{n}\right) \) implies that the function \(\gamma :\left[ 0,\infty \right) \rightarrow \left[ 0,\infty \right) \) given by (8) is well defined and, as \(L_{n}<L\) for all \(n\in \mathbb {N}\), we have

$$\begin{aligned} \gamma \left( t\right) <t, \quad t>0. \end{aligned}$$

To prove (12), take arbitrary \(x,y\in X,\) \(x\ne y\), and put

$$\begin{aligned} t=d\left( x,y\right) . \end{aligned}$$

The metrical convexity of X, Lemma 2, and the continuity of the metric d imply that, in the metrical segment with endpoints x and y, there is a sequence of points \(\left( x_{n}:n=0,1,2, \ldots \right) \), such that

$$\begin{aligned} x_{0}=x, \quad d\left( x_{n-1},x_{n}\right) =k_{n}\left( t\right) t_{n}\text { \ for\ \ }n\in \mathbb {N}, \quad y=\lim _{n\rightarrow \infty }x_{n}\text {,} \end{aligned}$$

and, for every \(n\in \mathbb {N}\), there is a finite sequence \(\left( y_{n,0},y_{n,1}, \ldots ,y_{n,k_{n}}\right) \) of points in the metrical segment of the endpoints \(x_{n-1},x_{n}\), such that

$$\begin{aligned} y_{n,0}=x_{n-1}, \quad d\left( y_{n,j-1},y_{n,j}\right) =t_{n}\text { \ for\ \ }j=1, \ldots ,k_{n}\left( t\right) , \quad y_{n,k_{n}\left( t\right) }=x_{n}\text {.\ } \end{aligned}$$

Applying, in turn, the continuity of T, the definition of the sequence \(\left( x_{n}\right) \), the equality \(y=\lim _{n\rightarrow \infty }x_{n}\), the triangle inequality, the definition and the properties of the sequence \(\left( y_{n,0},y_{n,1}, \ldots ,y_{n,k_{n}\left( t\right) }\right) \), the assumed implication (11), the definition of the function \(\gamma ,\) and the equality \(t=d\left( x,y\right) \), we get

$$\begin{aligned} \rho \left( Tx,Ty\right)= & {} \lim _{n\rightarrow \infty }\rho \left( Tx_{0},Tx_{n}\right) \le \lim _{n\rightarrow \infty }\sum _{m=1}^{n}\rho \left( Tx_{m-1},Tx_{m}\right) \\= & {} \sum _{n=1}^{\infty }\rho \left( Tx_{n-1},Tx_{n}\right) \le \sum _{n=1}^{\infty }\sum _{j=1}^{k_{n}\left( t\right) }\rho \left( Ty_{n,j-1},Ty_{n,j}\right) \\\le & {} \sum _{n=1}^{\infty }\sum _{j=1}^{k_{n}\left( t\right) }L_{n}t_{n}=\sum _{n=1}^{\infty }k_{n}\left( t\right) L_{n}t_{n}=L\gamma \left( t\right) \\= & {} L\gamma \left( d\left( x,y\right) \right) . \end{aligned}$$

Since \(\gamma \left( 0\right) =0,\) we hence obtain

$$\begin{aligned} \rho \left( Tx,Ty\right) \le L\gamma \left( d\left( x,y\right) \right) , \quad x,y\in X. \end{aligned}$$

Now, consider the best upper bound of \(\frac{1}{L}\rho \left( Tx,Ty\right) \) expressed in terms of the distance \(d\left( x,y\right) \), i.e., the function \(\lambda :\left[ 0,\infty \right) \rightarrow \left[ 0,\infty \right) \) defined by

$$\begin{aligned} \lambda \left( t\right) :=\left\{ \begin{array}{lll} \sup \left\{ \frac{1}{L}\rho \left( Tx,Ty\right) :d\left( x,y\right) =t; \,\, x,y\in X\right\} &{} \text {if} &{} \quad t\in I \\ 0 &{} \text {if} &{} \quad t\in \left[ 0,\infty \right) \backslash I \end{array} \right. , \end{aligned}$$

(the interval I is defined in Sect. 3).

Take \(s,t\ge 0\) with \(s+t\in I\), and arbitrary \(x,y\in X\) with \(d\left( x,y\right) =s+t\) (then also \(s,t\in I\)). By the metrical convexity of X, there is \(z\in X\), such that \(s=d\left( x,z\right) \), \(t=d\left( z,y\right) .\) Then, by the triangle inequality and the definition of \(\lambda \)

$$\begin{aligned} \rho \left( Tx,Ty\right) \le \rho \left( Tx,Tz\right) +\rho \left( Tz,Ty\right) \le L\lambda \left( s\right) +L\lambda \left( t\right) , \end{aligned}$$

whence, taking the supremum over all such \(x,y\), we get \(L\lambda \left( s+t\right) \le L\lambda \left( s\right) +L\lambda \left( t\right) \) for \(s+t\in I\). The definition of \(\lambda \left( t\right) \) for \(t\in \left[ 0,\infty \right) \backslash I\) implies

$$\begin{aligned} \lambda \left( s+t\right) \le \lambda \left( s\right) +\lambda \left( t\right) , \quad s,t\in \left[ 0,\infty \right) \text {,} \end{aligned}$$

i.e., \(\lambda \) is subadditive in \(\left[ 0,\infty \right) \).

Since \(\lambda \left( 0\right) =0\) and

$$\begin{aligned} 0\le \lambda \left( t\right) \le \gamma \left( t\right) <t, \quad t>0\text {,} \end{aligned}$$

the function \(\lambda \) is (right) continuous at 0; moreover, by the inequality \(\lambda \le \gamma \) and (10), we have \(\lambda \left( t-\right) \le \limsup _{s\rightarrow t-}\gamma \left( s\right) <t\) for every \(t>0\). Hence, applying Lemma 4, we conclude that there is a concave increasing function \(\alpha :\left[ 0,\infty \right) \rightarrow \left[ 0,\infty \right) \), such that

$$\begin{aligned} 0\le \lambda \left( t\right) \le \alpha \left( t\right) <t , \quad t>0, \end{aligned}$$

whence \(\rho \left( Tx,Ty\right) \le L\lambda \left( d\left( x,y\right) \right) \le L\alpha \left( d\left( x,y\right) \right) \) for all \(x,y\in X\), which completes the proof. \(\square \)

The following fact is noteworthy:

Remark 4

Replacing in Theorem 1 the sequence \(\left( \left( t_{n},L_{n}\right) :n\in \mathbb {N}\right) \) by an arbitrary subsequence of it leads to a conclusion of the same first form; thus, the first result does not depend on the choice of a subsequence. The situation changes radically in the case of the “moreover” result: it occurs if and only if there is a strictly increasing subsequence of \(\left( L_{n}:n\in \mathbb {N}\right) \) converging to L. Therefore, without any loss of generality, we can assume that the sequence \(\left( L_{n}:n\in \mathbb {N}\right) \) is monotonic.

Remark 5

In the above result, for a mapping T and a sequence \(\left( t_{n}\right) \), one could choose the sequence \(\left( L_{n}\right) \) as follows:

$$\begin{aligned} L_{n}:=\sup \left\{ \frac{\rho \left( Tx,Ty\right) }{d\left( x,y\right) }:x,y\in X\text {, }d\left( x,y\right) =t_{n}\right\} , \quad n\in \mathbb {N}\text {.} \end{aligned}$$
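For a concrete mapping, the suprema above are easy to approximate by sampling pairs at the prescribed distances. A rough Python sketch, with a hypothetical map T on \(X=\left[ 0,1\right] \) and \(t_{n}=2^{-n}\), given only as an illustration of this choice of \(\left( L_{n}\right) \):

```python
import math

def T(x):
    """A hypothetical self-map of X = [0, 1]; here sup |T'| = 0.8."""
    return 0.5 * math.sin(x) + 0.3 * x

def approx_Ln(tn, samples=2000):
    """Approximate sup { |T(x) - T(y)| / |x - y| : |x - y| = t_n } by sampling."""
    best = 0.0
    for i in range(samples):
        x = i * (1.0 - tn) / (samples - 1)      # keep y = x + tn inside [0, 1]
        best = max(best, abs(T(x) - T(x + tn)) / tn)
    return best

for n in range(1, 8):
    tn = 2.0 ** (-n)
    print(n, round(approx_Ln(tn), 4))           # every value stays below sup |T'| = 0.8
```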

The following result allows us to avoid the assumption of the continuity of the mapping T.

Theorem 2

Let \(\left( X,d\right) \) be a complete metric space that is metrically convex and let \(\left( Y,\rho \right) \) be a metric space. Suppose that a mapping \(T:X\rightarrow Y\) and a real function \(\beta :\left( 0,\infty \right) \rightarrow \left[ 0,\infty \right) \) are such that

$$\begin{aligned} \rho \left( Tx,Ty\right) \le \beta \left( d\left( x,y\right) \right) , \quad x,y\in X, \quad x\ne y. \end{aligned}$$
(15)

If

$$\begin{aligned} \limsup _{t\rightarrow 0+}\frac{\beta \left( t\right) }{t}<\infty , \end{aligned}$$

then T is Lipschitz continuous, that is

$$\begin{aligned} \rho \left( Tx,Ty\right) \le Ld\left( x,y\right) , \quad x,y\in X, \end{aligned}$$

where

$$\begin{aligned} L:=\liminf _{t\rightarrow 0+}\frac{\beta \left( t\right) }{t}. \end{aligned}$$

Proof

The continuity of T follows from the inequality \(\limsup _{t\rightarrow 0+} \frac{\beta \left( t\right) }{t}<\infty \) and from (15). Choosing a positive strictly decreasing sequence \(\left( t_{n}:n\in \mathbb {N}\right) \) with \(\lim _{n\rightarrow \infty }t_{n}=0\) and \(\lim _{n\rightarrow \infty }\frac{\beta \left( t_{n}\right) }{t_{n}}=L\), and setting \(L_{n}:=\frac{\beta \left( t_{n}\right) }{t_{n}}\), we see from (15) that the remaining assumptions of the first part of Theorem 1 are satisfied. \(\square \)

6 Fixed-point theorems, consequences, and remarks

The following result generalizes the Banach fixed-point principle in metrically convex spaces:

Theorem 3

Let \(\left( X,d\right) \) be a nonempty complete metrically convex metric space and \(T:X\rightarrow X\) be a continuous mapping, such that there are: a positive decreasing sequence \(\left( t_{n}:n\in \mathbb {N}\right) \) with \( \lim _{n\rightarrow \infty }t_{n}=0\), and a sequence of real numbers \(\left( L_{n}:n\in \mathbb {N}\right) \), such that

$$\begin{aligned} L_{n}<1, \quad n\in \mathbb {N}\text {,} \end{aligned}$$

and for all \(n\in \mathbb {N}\) and \(x,y\in X\)

$$\begin{aligned} d\left( x,y\right) =t_{n}\Longrightarrow d\left( Tx,Ty\right) \le L_{n}d\left( x,y\right) . \end{aligned}$$

Then, T has a unique fixed-point. Moreover:

(i) if

$$\begin{aligned} L:=\liminf _{n\rightarrow \infty }L_{n}<1, \end{aligned}$$

then T is a (linear) Banach contraction with the constant L, that is

$$\begin{aligned} d\left( Tx,Ty\right) \le Ld\left( x,y\right) , \quad x,y\in X\text {;} \end{aligned}$$

(ii) if \(\ L=1\), then there is an increasing concave function \(\alpha : \left[ 0,\infty \right) \rightarrow \left[ 0,\infty \right) \)

$$\begin{aligned} \alpha \left( t\right) <t , \quad t>0\text {; \ }\alpha ^{\prime }\left( 0\right) =1, \end{aligned}$$

such that

$$\begin{aligned} d\left( Tx,Ty\right) \le \alpha \left( d\left( x,y\right) \right) , \quad x,y\in X\text {.} \end{aligned}$$

Proof

Assume \(Y=X\) and \(\rho =d\) in Theorem 1.

If \(L<1\), we get \(d\left( Tx,Ty\right) \le Ld\left( x,y\right) \) for all \( x,y\in X\), so T is a Banach contraction.

If \(L=1\) then, by the inequality \(L_{n}<1\) for all \(n\in \mathbb {N}\), the set \(\big \{ n\in \mathbb {N}:L_{n}<1\big \} \) is infinite. Consequently, T is a nonlinear \(\alpha \)-contraction with an increasing concave function \( \alpha .\) Since \(\lim _{n\rightarrow \infty }\alpha ^{n}\left( t\right) =0\) for every \(t>0,\) where \(\alpha ^{n}\) is the nth iterate of \(\alpha \), the existence and uniqueness of the fixed-point of T follows from Th. 1.2 in [14] (see also [11] and [6] p. 15). This completes the proof. \(\square \)

Let us note the following important result in applications.

Remark 6

Assume that the conditions of Theorem 3 are satisfied and denote by \(u\in X\) the unique fixed point of T. Then, the sequence of the iterates \(\left( T^{n}:n\in \mathbb {N}\right) \) of the mapping T converges (uniformly on bounded subsets of X) to the constant mapping \(X\ni x\mapsto u\). Moreover:

(i) if \(L<1\), then

$$\begin{aligned} d\left( T^{n}x,u\right) \le \frac{L^{n}}{1-L}d\left( Tx,x\right) , \quad x\in X,\text { }n\in \mathbb {N}\text {;} \end{aligned}$$

(ii) if \(L=1\), then

$$\begin{aligned} d\left( T^{n}x,u\right) \le \alpha ^{n}\left( d\left( x,u\right) \right) , \quad x\in X,\text { }n\in \mathbb {N}\text {,} \end{aligned}$$

where \(\left( \alpha ^{n}:n\in \mathbb {N}\right) \) is a sequence of iterates of the function \(\alpha \), and

$$\begin{aligned} \lim _{n\rightarrow \infty }\alpha ^{n}\left( t\right) =0{, \ \ \ \ }t>0. \end{aligned}$$
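The estimate in (i) is the usual a priori bound for Banach contractions and it is easy to check numerically for a concrete map. A minimal Python sketch, for the hypothetical contraction \(T\left( x\right) =\frac{1}{2}\cos x\) on \(X=\mathbb {R}\) (illustration only):

```python
import math

def T(x):
    """A hypothetical Banach contraction on R: |T'(x)| <= 0.5 = L."""
    return 0.5 * math.cos(x)

L = 0.5

u = 0.0
for _ in range(200):              # locate the fixed point by iterating long enough
    u = T(u)

x0 = 3.0
d0 = abs(T(x0) - x0)              # d(Tx, x) appearing in the bound of (i)
xn = x0
for n in range(1, 11):
    xn = T(xn)
    assert abs(xn - u) <= (L ** n / (1.0 - L)) * d0   # the estimate of Remark 6(i)
print("the bound of Remark 6(i) holds for n = 1, ..., 10")
```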

Remark 7

In Theorem 3, without any loss of generality, the sequence \(\big ( L_{n}:n\in \mathbb {N}\big ) \) can be assumed to be monotonic; moreover, the case (i) holds if \(\left( L_{n}:n\in \mathbb {N}\right) \) is decreasing, and (ii) holds if \(\left( L_{n}:n\in \mathbb {N}\right) \) is strictly increasing with \(\lim _{n\rightarrow \infty }L_{n}=1\).

Hence, using the monotonic sequences \(\left( L_{n}:n\in \mathbb {N}\right) ,\) we easily obtain the following two corollaries.

Corollary 1

Let \(\left( X,d\right) \) be a complete metrically convex metric space and \( T:X\rightarrow X\) be a continuous mapping. The following two conditions are equivalent:

(i) T is restrictive Lipschitz with a decreasing sequence \(\left( L_{n}:n\in \mathbb {N}\right) \), such that \(L=\lim _{n\rightarrow \infty }L_{n}<1\);

(ii) T is a Banach contraction with the constant L.

Corollary 2

Let \(\left( X,d\right) \) be a complete metrically convex metric space and \( T:X\rightarrow X\) be a continuous mapping. The following two conditions are equivalent:

(i) T is restrictive Lipschitz with a strictly increasing sequence \(\big ( L_{n}:n\in \mathbb {N}\big ) \), such that \(L=\lim _{n\rightarrow \infty }L_{n}=1\);

(ii) T is a nonlinear contraction with an increasing concave function \( \alpha \), such that \(\alpha \left( t\right) <t\) for all \(t>0\,\) and

$$\begin{aligned} \alpha ^{\prime }\left( 0+\right) =1. \end{aligned}$$

Remark 8

An example of a nonlinear contraction of a complete metric space, following Remark 1 in [2], shows that the metrical convexity of the metric space \(\left( X,d\right) \) is essential for the existence of a concave \(\alpha \) in the above result.

A mapping \(T:X\rightarrow X\) is called a strict contraction if \(d\left( Tx,Ty\right) <d\left( x,y\right) \) for all \(x,y\in X\), \(x\ne y.\) The following is a consequence of Theorem 1:

Remark 9

A continuous selfmapping T of a complete metrically convex metric space \(\left( X,d\right) \) is a strict contraction if and only if there is a positive sequence \(\left( t_{n}:n\in \mathbb {N}\right) \) converging to zero, such that for all \(n\in \mathbb {N}\) and \(x,y\in X\)

$$\begin{aligned} d\left( x,y\right) =t_{n}\Longrightarrow d\left( Tx,Ty\right) <d\left( x,y\right) . \end{aligned}$$

In Theorem 3, the mapping T is assumed to be continuous. The following result allows us to avoid this condition.

Theorem 4

Let \(\left( X,d\right) \) be a nonempty complete metrically convex space, T a selfmapping of X, and \(\beta :\left( 0,\infty \right) \rightarrow \left[ 0,\infty \right) \) a function, such that

$$\begin{aligned} \limsup _{t\rightarrow 0+}\frac{\beta \left( t\right) }{t}<\infty , \quad \liminf _{t\rightarrow 0+}\frac{\beta \left( t\right) }{t}=1, \end{aligned}$$

and \(0\,\ \)is an accumulation point of the set \(\left\{ t>0:\beta \left( t\right) <t\right\} \).

If

$$\begin{aligned} d\left( Tx,Ty\right) \le \beta \left( d\left( x,y\right) \right) , \quad x,y\in X,\text { }x\ne y, \end{aligned}$$

then T has a unique fixed point, and conclusions (i)–(ii) of Theorem 3 hold true.

Proof

Theorem 2 implies the continuity of T. The remaining assumptions of Theorem 3 are easy to verify. \(\square \)

This theorem improves the result of Boyd and Wong (see [2], Th. 2).

Remark 10

If the sequence \(\left( L_{n}:n\in \mathbb {N}\right) \) is such that \( \liminf _{n\rightarrow \infty }L_{n}=1\), but the set \(\left\{ n\in \mathbb {N} :L_{n}<1\right\} \) is finite, then in view of Theorem 1

$$\begin{aligned} d\left( Tx,Ty\right) \le d\left( x,y\right) , \quad x,y\in X, \end{aligned}$$

that is, T is nonexpansive. The metrical convexity of the space X is not a sufficient condition to guarantee the existence of a fixed-point of the mapping T.

The situation dramatically changes if the metrically convex space X is replaced by a bounded closed convex subset of a uniformly convex Banach space (Browder [3], Göhde [4], Kirk [8]; see also Reich [17, 18] and Section 5 of the book by Goebel and Reich [5]).

7 Some applications

In this section, we apply the main result to the theory of iterative functional equations (see for instance Kuczma [9]).

Let \(C\left( \left[ 0,1\right] \right) \) be the Banach space of continuous functions \(\varphi :\left[ 0,1\right] \rightarrow \mathbb {R}\) with the norm

$$\begin{aligned} \left\| \varphi \right\| =\max \left\{ \left| \varphi \left( u\right) \right| :u\in \left[ 0,1\right] \right\} . \end{aligned}$$

Theorem 5

Let the functions \(h:\mathbb {R}\rightarrow \mathbb {R}\), \(g:\left[ 0,1\right] \rightarrow \mathbb {R}\) and \(f:\left[ 0,1\right] \rightarrow \left[ 0,1 \right] \) be continuous. Assume that there exist: a positive sequence \(\left( t_{n}:n\in \mathbb {N}\right) \) decreasing to zero and a nonnegative sequence \(\left( L_{n}:n\in \mathbb {N}\right) ,\) with \(L:=\liminf _{n\rightarrow \infty }L_{n}<\infty ,\) such that for all \(u,v\in \mathbb {R},\) \(n\in \mathbb {N}\)

$$\begin{aligned} \left| u-v\right| =t_{n}\Longrightarrow \left| h\left( u\right) -h\left( v\right) \right| \le L_{n}t_{n}\text {.} \end{aligned}$$

If \(L<1\), or \(L=1\) and there is a strictly increasing subsequence of \(\left( L_{n}:n\in \mathbb {N}\right) \) converging to 1, then the functional equation

$$\begin{aligned} \varphi =h\circ \varphi \circ f+g \end{aligned}$$

has a unique continuous solution \(\varphi \in C\left( \left[ 0,1\right] \right) \);

moreover, for every \(\varphi _{0}\in \) \(C\left( \left[ 0,1\right] \right) ,\,\ \) the sequence \(\left( \varphi _{n}:n\in \mathbb {N}_{0}\right) \,\) defined by

$$\begin{aligned} \varphi _{n+1}:=h\circ \varphi _{n}\circ f+g, \quad n\in \mathbb {N }_{0}\text {,} \end{aligned}$$

converges uniformly to \(\varphi \).

Proof

Clearly, the mapping T given by

$$\begin{aligned} T\left( \varphi \right) :=h\circ \varphi \circ f+g, \quad \varphi \in C\left( \left[ 0,1\right] ,\mathbb {R}\right) \end{aligned}$$

maps \(C\left( \left[ 0,1\right] ,\mathbb {R}\right) \) into itself.

Since the interval \(\left[ 0,1\right] \) as well as \(\mathbb {R}\), with the Euclidean distances, are complete metrically convex metric spaces, in view of Theorem 1, we have either

$$\begin{aligned} \left| h\left( u\right) -h\left( v\right) \right| \le L\left| u-v\right| , \quad u,v\in \mathbb {R}, \end{aligned}$$

with \(0\le L<1,\) or

$$\begin{aligned} \left| h\left( u\right) -h\left( v\right) \right| \le \alpha \left( \left| u-v\right| \right) , \quad u,v\in \mathbb {R}, \end{aligned}$$

where \(\alpha :\left[ 0,\infty \right) \rightarrow \left[ 0,\infty \right) \) is an increasing concave (hence continuous) function, such that \(\alpha \left( t\right) <t\) for every \(t>0\).

In the first case, taking into account that f is a continuous selfmapping of \(\left[ 0,1\right] \), for all \(\varphi _{1},\varphi _{2}\in C\left( \left[ 0,1\right] ,\mathbb {R}\right) ,\) we have

$$\begin{aligned} \left\| T\left( \varphi _{1}\right) -T\left( \varphi _{2}\right) \right\|= & {} \max _{u\in \left[ 0,1\right] }\left| h\left( \varphi _{1}\left( f\left( u\right) \right) \right) -h\left( \varphi _{2}\left( f\left( u\right) \right) \right) \right| \\\le & {} L\max _{u\in \left[ 0,1\right] }\left| \varphi _{1}\left( f\left( u\right) \right) -\varphi _{2}\left( f\left( u\right) \right) \right| \\\le & {} L\max _{u\in \left[ 0,1\right] }\left| \varphi _{1}\left( u\right) -\varphi _{2}\left( u\right) \right| =L\left\| \varphi _{1}-\varphi _{2}\right\| , \end{aligned}$$

so T is a contraction mapping of \(C\left( \left[ 0,1\right] ,\mathbb {R} \right) \), and the result follows from the Banach principle.

In the second case, similarly, using the increasing monotonicity and continuity of \(\alpha \), we have

$$\begin{aligned}{} & {} \left\| T\left( \varphi _{1}\right) -T\left( \varphi _{2}\right) \right\| \\{} & {} \quad =\max _{u\in \left[ 0,1\right] }\left| h\left( \varphi _{1}\left( f\left( u\right) \right) \right) -h\left( \varphi _{2}\left( f\left( u\right) \right) \right) \right| \le \max _{u\in \left[ 0,1\right] }\alpha \left( \left| \varphi _{1}\left( f\left( u\right) \right) -\varphi _{2}\left( f\left( u\right) \right) \right| \right) \\{} & {} \quad \le \max _{u\in \left[ 0,1\right] }\alpha \left( \left| \varphi _{1}\left( u\right) -\varphi _{2}\left( u\right) \right| \right) =\alpha \left( \left\| \varphi _{1}-\varphi _{2}\right\| \right) \end{aligned}$$

for all \(\varphi _{1},\varphi _{2}\in C\left( \left[ 0,1\right] ,\mathbb {R} \right) ,\) and the result follows from Theorem 3. \(\square \)
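The solution in Theorem 5 is the uniform limit of the Picard iterates \(\varphi _{n+1}=h\circ \varphi _{n}\circ f+g\), so it can be approximated on a grid. The Python sketch below uses the hypothetical data \(h\left( u\right) =\frac{1}{2}\sin u\), \(f\left( u\right) =u^{2}\), \(g\left( u\right) =u\) (so the contractive case \(L=\frac{1}{2}<1\) applies); it only illustrates the iteration and is not a numerical method proposed in the paper.

```python
import math

M = 200
grid = [i / M for i in range(M + 1)]

h = lambda u: 0.5 * math.sin(u)     # |h(u) - h(v)| <= 0.5 |u - v|, so L = 0.5 < 1
f = lambda u: u * u                 # a continuous self-map of [0, 1]
g = lambda u: u

def interp(values, u):
    """Piecewise-linear interpolation of the grid values at u in [0, 1]."""
    pos = min(u * M, M - 1e-12)
    i = int(pos)
    return (1 - (pos - i)) * values[i] + (pos - i) * values[i + 1]

phi = [0.0] * (M + 1)                             # phi_0 = 0
for _ in range(60):                               # phi_{n+1} = h o phi_n o f + g
    phi = [h(interp(phi, f(u))) + g(u) for u in grid]

residual = max(abs(phi[i] - (h(interp(phi, f(u))) + g(u)))
               for i, u in enumerate(grid))
print(residual)                                   # tiny: phi nearly solves the equation
```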

Let \(\mathcal {L}^{1}\left( \left[ 0,1\right] \right) \) be the Banach space of Lebesgue integrable functions \(\varphi :\left[ 0,1\right] \rightarrow \mathbb {R}\) with the norm

$$\begin{aligned} \left\| \varphi \right\| =\int _{\left[ 0,1\right] }\left| \varphi \left( u\right) \right| \textrm{d}u. \end{aligned}$$

Theorem 6

Let \(h:\mathbb {R}\rightarrow \mathbb {R}\) be continuous, let \(g\in \mathcal {L} ^{1}\left( \left[ 0,1\right] \right) \), and \(f:\left[ 0,1\right] \rightarrow \left[ 0,1\right] \) be continuously differentiable and such that \(f^{\prime }>0\) in \(\left[ 0,1\right] \).

Assume that there exist: a positive sequence \(\left( t_{n}:n\in \mathbb {N}\right) \) decreasing to zero and a nonnegative sequence \(\left( L_{n}:n\in \mathbb {N}\right) ,\) with \(L:=\liminf _{n\rightarrow \infty }L_{n}<\infty ,\) such that for all \(u,v\in \mathbb {R},\) \(n\in \mathbb {N}\)

$$\begin{aligned} \left| u-v\right| =t_{n}\Longrightarrow \left| h\left( u\right) -h\left( v\right) \right| \le L_{n}t_{n}\text {.} \end{aligned}$$

If \(L<\inf \left\{ f^{\prime }\left( u\right) :u\in \left[ 0,1\right] \right\} ,\) or \(L=\inf \left\{ f^{\prime }\left( u\right) :u\in \left[ 0,1 \right] \right\} \) and there is a strictly increasing subsequence of \(\left( L_{n}:n\in \mathbb {N}\right) \) converging to L, then the functional equation

$$\begin{aligned} \varphi =h\circ \varphi \circ f+g \end{aligned}$$

has a unique solution \(\varphi \in \mathcal {L}^{1}\left( \left[ 0,1\right] \right) \);

moreover, for every \(\varphi _{0}\in \) \(\mathcal {L}^{1}\left( \left[ 0,1 \right] \right) ,\,\ \)the sequence \(\left( \varphi _{n}:n\in \mathbb {N} _{0}\right) \,\) defined by

$$\begin{aligned} \varphi _{n+1}:=h\circ \varphi _{n}\circ f+g, \quad n\in \mathbb {N }_{0}\text {,} \end{aligned}$$

converges in the \(\mathcal {L}^{1}\left( \left[ 0,1\right] \right) \)-norm to \( \varphi \).

Proof

As in the proof of the previous result, we have either (the first case)

$$\begin{aligned} \left| h\left( u\right) -h\left( v\right) \right| \le L\left| u-v\right| , \quad u,v\in \mathbb {R}, \end{aligned}$$

with \(0\le L<\inf f^{\prime },\) or (the second case)

$$\begin{aligned} \left| h\left( u\right) -h\left( v\right) \right| \le L\alpha \left( \left| u-v\right| \right) , \quad u,v\in \mathbb {R}, \end{aligned}$$

where \(L=\inf f^{\prime }\) and \(\alpha :\left[ 0,\infty \right) \rightarrow \left[ 0,\infty \right) \) is an increasing concave (hence continuous) function, such that \(\alpha \left( t\right) <t\) for every \(t>0\).

The mapping T given by

$$\begin{aligned} T\left( \varphi \right) :=h\circ \varphi \circ f+g, \quad \varphi \in \mathcal {L}^{1}\left( \left[ 0,1\right] \right) , \end{aligned}$$

maps \(\mathcal {L}^{1}\left( \left[ 0,1\right] \right) \) into itself, as we now verify.

Note that, for every \(\varphi \in \mathcal {L}^{1}\left( \left[ 0,1\right] \right) \), the function \(T\left( \varphi \right) \) is measurable, and

$$\begin{aligned}{} & {} \int _{\left[ 0,1\right] }\left| T\left( \varphi \right) \left( u\right) \right| \textrm{d}u \le \int _{\left[ 0,1\right] }\left| h\left( \varphi \left( f\left( u\right) \right) \right) \right| \textrm{d}u+\int _{\left[ 0,1\right] }\left| g\left( u\right) \right| \textrm{d}u \\{} & {} \quad \le L\int _{\left[ 0,1\right] }\left| \varphi \left( f\left( u\right) \right) \right| \textrm{d}u+\left| h\left( 0\right) \right| +\int _{\left[ 0,1\right] }\left| g\left( u\right) \right| \textrm{d}u \\{} & {} \quad \le \frac{L}{\inf f^{\prime }}\int _{f\left( \left[ 0,1\right] \right) }\left| \varphi \left( v\right) \right| \textrm{d}v+\left| h\left( 0\right) \right| +\int _{\left[ 0,1\right] }\left| g\left( u\right) \right| \textrm{d}u \\{} & {} \quad \le \frac{L}{\inf f^{\prime }}\left\| \varphi \right\| +\left| h\left( 0\right) \right| +\left\| g\right\| <\infty \text {,} \end{aligned}$$

so T maps \(\mathcal {L}^{1}\left( \left[ 0,1\right] \right) \) into itself.

In the first case, for all \(\varphi _{1},\varphi _{2}\in \mathcal {L} ^{1}\left( \left[ 0,1\right] \right) ,\) we have

$$\begin{aligned} \left\| T\left( \varphi _{1}\right) -T\left( \varphi _{2}\right) \right\|= & {} \int _{\left[ 0,1\right] }\left| h\left( \varphi _{1}\left( f\left( u\right) \right) \right) -h\left( \varphi _{2}\left( f\left( u\right) \right) \right) \right| \textrm{d}u \\\le & {} L\int _{\left[ 0,1\right] }\left| \varphi _{1}\left( f\left( u\right) \right) -\varphi _{2}\left( f\left( u\right) \right) \right| \textrm{d}u \\\le & {} \frac{L}{\inf f^{\prime }}\int _{f\left( \left[ 0,1\right] \right) }\left| \varphi _{1}\left( v\right) -\varphi _{2}\left( v\right) \right| \textrm{d}v \\\le & {} \frac{L}{\inf f^{\prime }}\left\| \varphi _{1}-\varphi _{2}\right\| , \end{aligned}$$

so T is a contraction mapping of \(\mathcal {L}^{1}\left( \left[ 0,1 \right] \right) \) with the constant \(L/\inf f^{\prime }<1\), and the result follows from the Banach principle.

In the second case, using the equality \(L=\inf f^{\prime }\) and the Jensen inequality for the concave function \(\alpha \) (together with \(\alpha \left( 0\right) =0\) and the monotonicity of \(\alpha \)), we have

$$\begin{aligned} \left\| T\left( \varphi _{1}\right) -T\left( \varphi _{2}\right) \right\|= & {} \int _{\left[ 0,1\right] }\left| h\left( \varphi _{1}\left( f\left( u\right) \right) \right) -h\left( \varphi _{2}\left( f\left( u\right) \right) \right) \right| \textrm{d}u \\\le & {} L\int _{\left[ 0,1\right] }\alpha \left( \left| \varphi _{1}\left( f\left( u\right) \right) -\varphi _{2}\left( f\left( u\right) \right) \right| \right) \textrm{d}u \\\le & {} \frac{L}{\inf f^{\prime }}\int _{f\left( \left[ 0,1\right] \right) }\alpha \left( \left| \varphi _{1}\left( v\right) -\varphi _{2}\left( v\right) \right| \right) \textrm{d}v \\\le & {} \alpha \left( \int _{f\left( \left[ 0,1 \right] \right) }\left| \varphi _{1}\left( v\right) -\varphi _{2}\left( v\right) \right| \textrm{d}v \right) \le \alpha \left( \left\| \varphi _{1}-\varphi _{2}\right\| \right) \end{aligned}$$

for all \(\varphi _{1},\varphi _{2}\in \mathcal {L}^{1}\left( \left[ 0,1\right] \right) ,\) and the result follows from Theorem 3. \(\square \)
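The key step in the estimates above is the change of variables \(v=f\left( u\right) \), which produces the \(\mathcal {L}^{1}\)-contraction factor \(L/\inf f^{\prime }\). A small Python sketch checking this factor numerically, with hypothetical data \(L=0.4\) and \(\inf f^{\prime }=0.5\) (illustration only):

```python
import math

M = 20000
du = 1.0 / M
nodes = [(i + 0.5) * du for i in range(M)]     # midpoint rule on [0, 1]

h = lambda u: 0.4 * math.sin(u)                # |h(u) - h(v)| <= 0.4 |u - v|, so L = 0.4
f = lambda u: 0.5 + 0.5 * u                    # inf f' = 0.5 > L: the first case
g = lambda u: u

def T(phi):                                    # phi is a callable on [0, 1]
    return lambda u: h(phi(f(u))) + g(u)

def l1_dist(p, q):
    return sum(abs(p(u) - q(u)) for u in nodes) * du

phi1 = lambda u: math.cos(7 * u)
phi2 = lambda u: u - 0.3
ratio = l1_dist(T(phi1), T(phi2)) / l1_dist(phi1, phi2)
print(ratio, ratio <= 0.4 / 0.5)               # the ratio does not exceed L / inf f' = 0.8
```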