1 Introduction and preliminaries

Probabilistic metric spaces were introduced in 1942 by Menger [1]. In such spaces, the notion of the distance between two points x and y is replaced by a distribution function $F_{x,y}(t)$. Thus one thinks of the distance between points as being probabilistic, with $F_{x,y}(t)$ representing the probability that the distance between x and y is less than t. Sehgal, in his Ph.D. thesis [2], extended the notion of a contraction mapping to the setting of Menger probabilistic metric spaces. For example, a mapping T is a probabilistic contraction if, for some constant $0<k<1$, the probability that the distance between the image points Tx and Ty is less than kt is at least as large as the probability that the distance between x and y is less than t.

In 1972, Sehgal and Bharucha-Reid proved the following result.

Theorem 1.1 (Sehgal and Bharucha-Reid [3], 1972)

Let $(E,F,\triangle)$ be a complete Menger probabilistic metric space for which the triangular norm $\triangle$ is continuous and satisfies $\triangle(a,b)=\min(a,b)$. If T is a mapping of E into itself such that for some $0<k<1$ and all $x,y\in E$,

$$F_{Tx,Ty}(t)\geq F_{x,y}\!\left(\frac{t}{k}\right),\quad t>0, \tag{1.1}$$

then T has a unique fixed point $x^{*}$ in E, and for any given $x_0\in E$, $T^{n}x_0$ converges to $x^{*}$.

The mapping T satisfying (1.1) is called a k-probabilistic contraction or a Sehgal contraction [3]. The fixed point theorem obtained by Sehgal and Bharucha-Reid is a generalization of the classical Banach contraction principle and has been further investigated by many authors [2, 4-18]. Some results in this theory have found applications in control theory, system theory, and optimization problems.
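
The deterministic special case makes condition (1.1) concrete: if $F_{x,y}(t)=H(t-d(x,y))$ for an ordinary metric d, a Sehgal contraction is exactly a Banach contraction $d(Tx,Ty)\leq k\,d(x,y)$. A minimal numerical sketch of this reduction (the mapping T, the constant k and the sample points are illustrative choices, not taken from the paper):

```python
# Hedged sketch: in the deterministic case F_{x,y}(t) = H(t - d(x, y)),
# the Sehgal condition F_{Tx,Ty}(t) >= F_{x,y}(t/k) reduces to the
# classical Banach condition d(Tx, Ty) <= k * d(x, y).

def H(t):
    """Heaviside-type distribution function: 0 for t <= 0, 1 for t > 0."""
    return 1.0 if t > 0 else 0.0

def F(x, y, t):
    """Distribution function induced by the ordinary distance |x - y|."""
    return H(t - abs(x - y))

def T(x):
    return x / 2.0  # an illustrative k-contraction on the reals, k = 1/2

k = 0.5
# Check F_{Tx,Ty}(t) >= F_{x,y}(t/k) on a grid of points and times.
points = [(-3.0, 1.5), (0.0, 2.0), (4.0, -4.0)]
ok = all(F(T(x), T(y), t) >= F(x, y, t / k)
         for x, y in points
         for t in [0.1, 0.5, 1.0, 2.0, 5.0])
print(ok)
```

Here the two sides in fact agree, because $H(t-|x-y|/2)$ and $H(2t-|x-y|)$ jump at the same point $t=|x-y|/2$.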

Next we shall recall some well-known definitions and results in the theory of probabilistic metric spaces which are used later on in this paper. For more details, we refer the reader to [8].

Definition 1.2 A triangular norm (briefly, a △-norm) is a binary operation △ on [0,1] which satisfies the following conditions:

(a) △ is associative and commutative;

(b) △ is continuous;

(c) $\triangle(a,1)=a$ for all $a\in[0,1]$;

(d) $\triangle(a,b)\leq\triangle(c,d)$ whenever $a\leq c$ and $b\leq d$, for all $a,b,c,d\in[0,1]$.

The following are the six basic △-norms:

$\triangle_1(a,b)=\max(a+b-1,0)$;

$\triangle_2(a,b)=ab$;

$\triangle_3(a,b)=\min(a,b)$;

$\triangle_4(a,b)=\max(a,b)$;

$\triangle_5(a,b)=a+b-ab$;

$\triangle_6(a,b)=\min(a+b,1)$.

It is easy to check that the above six △-norms satisfy the relations

$$\triangle_1(a,b)\leq\triangle_2(a,b)\leq\triangle_3(a,b)\leq\triangle_4(a,b)\leq\triangle_5(a,b)\leq\triangle_6(a,b)$$

for any $a,b\in[0,1]$.
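
These six operations and the stated chain of inequalities are easy to verify numerically; the following sketch (illustrative, with a simple grid check) implements them in the order listed:

```python
# Sketch of the six basic triangle norms listed above; the numbering
# follows the text (△1 is the Lukasiewicz norm, △2 the product, ...).
tnorms = [
    lambda a, b: max(a + b - 1.0, 0.0),  # △1
    lambda a, b: a * b,                  # △2
    lambda a, b: min(a, b),              # △3
    lambda a, b: max(a, b),              # △4
    lambda a, b: a + b - a * b,          # △5
    lambda a, b: min(a + b, 1.0),        # △6
]

# Verify the chain △1 <= △2 <= ... <= △6 on a grid over [0,1] x [0,1].
grid = [i / 10.0 for i in range(11)]
chain_holds = all(
    tnorms[k](a, b) <= tnorms[k + 1](a, b) + 1e-12
    for a in grid for b in grid for k in range(5)
)
print(chain_holds)
```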

Definition 1.3 A function $F:(-\infty,+\infty)\to[0,1]$ is called a distribution function if it is non-decreasing and left-continuous with $\lim_{t\to-\infty}F(t)=0$. If, in addition, $F(0)=0$, then F is called a distance distribution function.

Definition 1.4 A distance distribution function F satisfying $\lim_{t\to+\infty}F(t)=1$ is called a Menger distance distribution function. The set of all Menger distance distribution functions is denoted by $D^{+}$. A special Menger distance distribution function is given by

$$H(t)=\begin{cases}0, & t\leq 0,\\ 1, & t>0.\end{cases}$$

Definition 1.5 A probabilistic metric space is a pair (E,F), where E is a nonempty set and F is a mapping from $E\times E$ into $D^{+}$ such that, if $F_{x,y}$ denotes the value of F at the pair (x,y), the following conditions hold:

(PM-1) $F_{x,y}(t)=H(t)$ if and only if $x=y$;

(PM-2) $F_{x,y}(t)=F_{y,x}(t)$ for all $x,y\in E$ and $t\in(-\infty,+\infty)$;

(PM-3) $F_{x,z}(t)=1$ and $F_{z,y}(s)=1$ imply $F_{x,y}(t+s)=1$

for all $x,y,z\in E$ and $-\infty<t,s<+\infty$.

Definition 1.6 A Menger probabilistic metric space (abbreviated, Menger PM space) is a triple $(E,F,\triangle)$, where E is a nonempty set, △ is a continuous △-norm and F is a mapping from $E\times E$ into $D^{+}$ such that, if $F_{x,y}$ denotes the value of F at the pair (x,y), the following conditions hold:

(MPM-1) $F_{x,y}(t)=H(t)$ if and only if $x=y$;

(MPM-2) $F_{x,y}(t)=F_{y,x}(t)$ for all $x,y\in E$ and $t\in(-\infty,+\infty)$;

(MPM-3) $F_{x,y}(t+s)\geq\triangle(F_{x,z}(t),F_{z,y}(s))$ for all $x,y,z\in E$ and $t>0$, $s>0$.

Now we give a new definition of a probabilistic metric space, the so-called S-probabilistic metric space. This definition better reflects the probabilistic meaning and the probabilistic background; in it, the triangle inequality takes a new form.

Definition 1.7 An S-probabilistic metric space is a pair (E,F), where E is a nonempty set and F is a mapping from $E\times E$ into $D^{+}$ such that, if $F_{x,y}$ denotes the value of F at the pair (x,y), the following conditions hold:

(SPM-1) $F_{x,y}(t)=H(t)$ if and only if $x=y$;

(SPM-2) $F_{x,y}(t)=F_{y,x}(t)$ for all $x,y\in E$ and $t\in(-\infty,+\infty)$;

(SPM-3) $F_{x,y}(t)\geq F_{x,z}(t)*F_{z,y}(t)$ for all $x,y,z\in E$,

where $F_{x,z}(t)*F_{z,y}(t)$ is the convolution of $F_{x,z}(t)$ and $F_{z,y}(t)$ defined by

$$F_{x,z}(t)*F_{z,y}(t)=\int_{0}^{+\infty}F_{x,z}(t-u)\,dF_{z,y}(u).$$
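
For finitely supported distance distributions the Stieltjes integral in (SPM-3) reduces to a finite sum, which makes the convolution easy to evaluate. A hedged numerical sketch (the atom representation and the sample distances are illustrative choices):

```python
# Sketch of the convolution in (SPM-3) for discrete (finitely supported)
# distance distributions. An atom list [(u, p), ...] represents the
# left-continuous distribution function  F(t) = sum of p over atoms u < t.

def cdf(atoms, t):
    return sum(p for u, p in atoms if u < t)

def convolve(atoms_xz, atoms_zy, t):
    """Stieltjes integral  ∫ F_{x,z}(t - u) dF_{z,y}(u)  for atomic F_{z,y}."""
    return sum(p * cdf(atoms_xz, t - u) for u, p in atoms_zy)

# Point masses at d(x,z) = 1 and d(z,y) = 2: the convolution is the point
# mass at 3, i.e. H(t - 3).
Fxz = [(1.0, 1.0)]
Fzy = [(2.0, 1.0)]
print(convolve(Fxz, Fzy, 3.5), convolve(Fxz, Fzy, 2.5))
```

For point masses the convolution thus mirrors the ordinary triangle inequality $d(x,y)\leq d(x,z)+d(z,y)$.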

Example Let X be a nonempty set, let S be a measurable space consisting of some metrics on X, let $(\Omega,P)$ be a complete probability measure space, and let $f:\Omega\to S$ be a measurable mapping. One may regard S as a random metric on X, so that (X,S) is a random metric space. The following expressions for the distribution functions $F_{x,y}(t)$, $F_{x,z}(t)$, and $F_{z,y}(t)$ are then natural:

$$F_{x,y}(t)=P\bigl\{f^{-1}\{d\in S;\,d(x,y)<t\}\bigr\},\qquad F_{x,z}(t)=P\bigl\{f^{-1}\{d\in S;\,d(x,z)<t\}\bigr\},$$

and

$$F_{z,y}(t)=P\bigl\{f^{-1}\{d\in S;\,d(z,y)<t\}\bigr\}$$

for all $x,y,z\in X$. Since

$$P\bigl\{f^{-1}\{d\in S;\,d(x,y)<t\}\bigr\}\geq P\bigl\{f^{-1}\{d\in S;\,d(x,z)+d(z,y)<t\}\bigr\},$$

it follows from probability theory that

$$P\bigl\{f^{-1}\{d\in S;\,d(x,z)+d(z,y)<t\}\bigr\}=F_{x,z}(t)*F_{z,y}(t).$$

Therefore

$$F_{x,y}(t)\geq F_{x,z}(t)*F_{z,y}(t),\quad \forall x,y,z\in X.$$

In addition, the conditions (SPM-1) and (SPM-2) are obvious.

In this paper, both Menger probabilistic metric spaces and S-probabilistic metric spaces are subsumed under probabilistic metric spaces.

Many problems can be formulated as an equation of the form Tx=x, where T is a given self-mapping defined on a subset of a metric space, a normed linear space, a topological vector space or some other suitable space. However, if T is a non-self-mapping from A to B, then this equation does not necessarily admit a solution. In this case, it is natural to seek an approximate solution x in A for which the error d(x,Tx) is minimal, where d is the distance function. In view of the fact that d(x,Tx) is at least d(A,B), a best proximity point theorem guarantees the global minimization of d(x,Tx) by requiring that an approximate solution x satisfy the condition d(x,Tx)=d(A,B). Such optimal approximate solutions are called best proximity points of the mapping T. Interestingly, best proximity point theorems also serve as a natural generalization of fixed point theorems, since a best proximity point becomes a fixed point if the mapping under consideration is a self-mapping. Research on best proximity points is an important topic in nonlinear functional analysis and its applications (see [19-31]).

Let A, B be two nonempty subsets of a complete metric space and consider a mapping $T:A\to B$. The best proximity point problem asks whether we can find an element $x_0\in A$ such that $d(x_0,Tx_0)=\min\{d(x,Tx):x\in A\}$. Since $d(x,Tx)\geq d(A,B)$ for any $x\in A$, the optimal solution to this problem is in fact one for which the value d(A,B) is attained.

Let A, B be two nonempty subsets of a metric space (X,d). We denote by $A_0$ and $B_0$ the following sets:

$$A_0=\{x\in A: d(x,y)=d(A,B)\text{ for some }y\in B\},$$
$$B_0=\{y\in B: d(x,y)=d(A,B)\text{ for some }x\in A\},$$

where $d(A,B)=\inf\{d(x,y):x\in A\text{ and }y\in B\}$.
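
For finite sets, the infimum d(A,B) and the sets $A_0$, $B_0$ can be computed directly. A small illustrative sketch in the Euclidean plane (the sets A and B are arbitrary choices):

```python
# Sketch: computing d(A,B), A_0 and B_0 for finite subsets of the plane
# with the Euclidean metric.
import math

def d(p, q):
    return math.dist(p, q)

A = [(0.0, 0.0), (0.0, 1.0), (0.0, 2.0)]
B = [(1.0, 0.0), (2.0, 5.0)]

dAB = min(d(x, y) for x in A for y in B)
A0 = [x for x in A if any(abs(d(x, y) - dAB) < 1e-12 for y in B)]
B0 = [y for y in B if any(abs(d(x, y) - dAB) < 1e-12 for x in A)]
print(dAB, A0, B0)
```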

It is interesting to notice that $A_0$ and $B_0$ are contained in the boundaries of A and B, respectively, provided A and B are closed subsets of a normed linear space such that $d(A,B)>0$ [19].

In order to study best proximity point problems, we need the following notions.

Definition 1.8 ([30])

Let (A,B) be a pair of nonempty subsets of a metric space (X,d) with $A_0\neq\emptyset$. Then the pair (A,B) is said to have the P-property if and only if for any $x_1,x_2\in A_0$ and $y_1,y_2\in B_0$,

$$\begin{cases} d(x_1,y_1)=d(A,B),\\ d(x_2,y_2)=d(A,B)\end{cases}\implies d(x_1,x_2)=d(y_1,y_2).$$

In [31], the author proved that any pair (A,B) of nonempty closed convex subsets of a real Hilbert space H satisfies the P-property.

In [25, 26], the P-property was weakened to the weak P-property. An example that satisfies the weak P-property but not the P-property can be found there.

Definition 1.9 ([25, 26])

Let (A,B) be a pair of nonempty subsets of a metric space (X,d) with $A_0\neq\emptyset$. Then the pair (A,B) is said to have the weak P-property if and only if for any $x_1,x_2\in A_0$ and $y_1,y_2\in B_0$,

$$\begin{cases} d(x_1,y_1)=d(A,B),\\ d(x_2,y_2)=d(A,B)\end{cases}\implies d(x_1,x_2)\leq d(y_1,y_2).$$

Recently, many best proximity point problems with applications have been discussed and some best proximity point theorems have been proved. For more details, we refer the reader to [27].

In this paper, we establish some definitions and basic concepts of the best proximity point in the framework of probabilistic metric spaces.

Definition 1.10 Let (E,F) be a probabilistic metric space and let $A,B\subset E$ be two nonempty sets. Let

$$F_{A,B}(t)=\sup_{x\in A,\,y\in B}F_{x,y}(t),\quad t\in(-\infty,+\infty);$$

$F_{A,B}$ is called the probabilistic distance between A and B.

Example Let X be a nonempty set and let $d_1$, $d_2$ be two metrics defined on X, carried with probabilities $p_1=0.5$ and $p_2=0.5$, respectively. Assume that

$$d_1(x,y)\leq d_2(x,y),\quad \forall x,y\in X.$$

For any $x,y\in X$, the random distance between x and y, taking the value $d_1(x,y)$ with probability 0.5 and the value $d_2(x,y)$ with probability 0.5, is a discrete random variable with the distribution function

$$F_{x,y}(t)=\begin{cases}0, & t\leq d_1(x,y),\\ 0.5, & d_1(x,y)<t\leq d_2(x,y),\\ 1, & d_2(x,y)<t.\end{cases}$$

Let A, B be two nonempty subsets of X. Then the random distance between A and B, taking the value $d_1(A,B)$ with probability 0.5 and the value $d_2(A,B)$ with probability 0.5, is also a discrete random variable with the distribution function

$$F_{A,B}(t)=\begin{cases}0, & t\leq d_1(A,B),\\ 0.5, & d_1(A,B)<t\leq d_2(A,B),\\ 1, & d_2(A,B)<t,\end{cases}$$

where

$$d_i(A,B)=\inf_{x\in A,\,y\in B}d_i(x,y),\quad i=1,2.$$

It is easy to see that

$$F_{A,B}(t)=\sup_{x\in A,\,y\in B}F_{x,y}(t),\quad t\in(-\infty,+\infty).$$
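
The sup-identity above can be checked numerically in a concrete instance. In the sketch below, $d_2=2d_1$ on the reals (an illustrative choice for which the same pair of points attains both infima); the sets A and B are also illustrative:

```python
# Hedged numerical check of F_{A,B}(t) = sup_{x in A, y in B} F_{x,y}(t)
# for the two-metric example: d1 <= d2, each carried with probability 0.5.

def d1(x, y):
    return abs(x - y)

def d2(x, y):
    return 2.0 * abs(x - y)

def F_pair(x, y, t):
    return (0.5 if d1(x, y) < t else 0.0) + (0.5 if d2(x, y) < t else 0.0)

A, B = [0.0, 1.0], [3.0, 5.0]
d1AB = min(d1(x, y) for x in A for y in B)   # attained at the pair (1, 3)
d2AB = min(d2(x, y) for x in A for y in B)   # attained at the same pair

def F_AB(t):
    return (0.5 if d1AB < t else 0.0) + (0.5 if d2AB < t else 0.0)

ts = [0.5, 2.0, 3.0, 4.0, 4.5, 10.0]
agrees = all(abs(F_AB(t) - max(F_pair(x, y, t) for x in A for y in B)) < 1e-12
             for t in ts)
print(agrees)
```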

Definition 1.11 Let (E,F) be a probabilistic metric space, let $A,B\subset E$ be two nonempty subsets and let $T:A\to B$ be a mapping. We say that $x^{*}\in A$ is a best proximity point of the mapping T if the following equality holds:

$$F_{x^{*},Tx^{*}}(t)=F_{A,B}(t),\quad t\in(-\infty,+\infty).$$

Example Let X be a nonempty set and let $d_1$, $d_2$ be two metrics defined on X, carried with probabilities $p_1=0.5$ and $p_2=0.5$, respectively. Let A, B be two nonempty subsets of X and let $T:A\to B$ be a mapping. Assume

$$d_1(x,y)\leq d_2(x,y),\quad \forall x,y\in X.$$

If there exists a point $x^{*}\in A$ such that

$$d_1(x^{*},Tx^{*})=d_1(A,B),\qquad d_2(x^{*},Tx^{*})=d_2(A,B),$$

then the random distance between $x^{*}$ and $Tx^{*}$ is a discrete random variable with the distribution function

$$F_{x^{*},Tx^{*}}(t)=\begin{cases}0, & t\leq d_1(x^{*},Tx^{*}),\\ 0.5, & d_1(x^{*},Tx^{*})<t\leq d_2(x^{*},Tx^{*}),\\ 1, & d_2(x^{*},Tx^{*})<t.\end{cases}$$

It is obvious that $F_{x^{*},Tx^{*}}(t)=F_{A,B}(t)$.

It is clear that the notion of a fixed point coincides with the notion of a best proximity point when the underlying mapping is a self-mapping. Let (E,F) be a probabilistic metric space. Suppose that $A\subset E$ and $B\subset E$ are nonempty subsets. We define the following sets:

$$A_0=\{x\in A: F_{x,y}(t)=F_{A,B}(t)\text{ for some }y\in B\},$$
$$B_0=\{y\in B: F_{x,y}(t)=F_{A,B}(t)\text{ for some }x\in A\}.$$

Definition 1.12 Let (A,B) be a pair of nonempty subsets of a probabilistic metric space (E,F) with $A_0\neq\emptyset$. Then the pair (A,B) is said to have the P-property if and only if for any $x_1,x_2\in A_0$ and $y_1,y_2\in B_0$,

$$F_{x_1,y_1}(t)=F_{A,B}(t),\quad F_{x_2,y_2}(t)=F_{A,B}(t)\implies F_{x_1,x_2}(t)=F_{y_1,y_2}(t).$$

Definition 1.13 Let (A,B) be a pair of nonempty subsets of a probabilistic metric space (E,F) with $A_0\neq\emptyset$. Then the pair (A,B) is said to have the weak P-property if and only if for any $x_1,x_2\in A_0$ and $y_1,y_2\in B_0$,

$$F_{x_1,y_1}(t)=F_{A,B}(t),\quad F_{x_2,y_2}(t)=F_{A,B}(t)\implies F_{x_1,x_2}(t)\geq F_{y_1,y_2}(t).$$

Definition 1.14 Let (E,F) be a probabilistic metric space.

(1) A sequence $\{x_n\}$ in E is said to converge to $x\in E$ if for any given $\varepsilon>0$ and $\lambda>0$ there exists a positive integer $N=N(\varepsilon,\lambda)$ such that $F_{x_n,x}(\varepsilon)>1-\lambda$ whenever $n>N$.

(2) A sequence $\{x_n\}$ in E is called a Cauchy sequence if for any $\varepsilon>0$ and $\lambda>0$ there exists a positive integer $N=N(\varepsilon,\lambda)$ such that $F_{x_n,x_m}(\varepsilon)>1-\lambda$ whenever $n,m>N$.

(3) (E,F) is said to be complete if each Cauchy sequence in E converges to some point in E.

We write $x_n\to x$ when $\{x_n\}$ converges to x. It is easy to see that $x_n\to x$ if and only if $F_{x_n,x}(t)\to H(t)$ for any given $t\in(-\infty,+\infty)$ as $n\to\infty$.

2 Contraction mapping principle in S-probabilistic metric spaces

Let (E,F) be an S-probabilistic metric space. For any $x,y\in E$ we define

$$d_F(x,y)=\int_{0}^{+\infty}t\,dF_{x,y}(t).$$

Since t is a continuous function and $F_{x,y}$ is of bounded variation, the above integral is well defined. In fact, it is just the mathematical expectation of the random distance with distribution function $F_{x,y}(t)$. Throughout this paper we assume that

$$d_F(x,y)=\int_{0}^{+\infty}t\,dF_{x,y}(t)<+\infty,\quad \forall x,y\in E,$$

for all probabilistic metric spaces (E,F) considered in this paper.
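
For the discrete two-metric distribution used in the examples of Section 1, the integral reduces to a finite sum and $d_F(x,y)=0.5\,d_1(x,y)+0.5\,d_2(x,y)$. A minimal sketch (the numerical values are illustrative):

```python
# Sketch: d_F(x, y) = ∫ t dF_{x,y}(t) is the expectation of the random
# distance. For a finitely supported distribution it is a finite sum.

def d_F(atoms):
    """Expectation of a finitely supported distance distribution.
    atoms: list of (value, probability) pairs whose probabilities sum to 1."""
    return sum(t * p for t, p in atoms)

d1_xy, d2_xy = 2.0, 5.0   # d1(x,y) <= d2(x,y), each carried with prob. 0.5
print(d_F([(d1_xy, 0.5), (d2_xy, 0.5)]))
```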

Next we give a new notion of convergence.

(1) A sequence $\{x_n\}$ in E is said to converge averagely to $x\in E$ if

$$\lim_{n\to\infty}\int_{0}^{+\infty}t\,dF_{x_n,x}(t)=0.$$

(2) A sequence $\{x_n\}$ in E is called an average Cauchy sequence if

$$\lim_{n,m\to\infty}\int_{0}^{+\infty}t\,dF_{x_n,x_m}(t)=0.$$

(3) (E,F) is said to be average complete if each average Cauchy sequence in E converges averagely to some point in E.

We also write $x_n\to x$ when $\{x_n\}$ converges averagely to x.

Theorem 2.1 Let (E,F) be an S-probabilistic metric space. For any $x,y\in E$ define

$$d_F(x,y)=\int_{0}^{+\infty}t\,dF_{x,y}(t).$$

Then $d_F(x,y)$ is a metric on E.

Proof Since $F_{x,y}(t)=H(t)$ ($t\in\mathbb{R}$) if and only if $x=y$, and

$$\int_{0}^{+\infty}t\,dH(t)=0,$$

we see that the condition $d_F(x,y)=0\Leftrightarrow x=y$ holds. The condition $d_F(x,y)=d_F(y,x)$, for all $x,y\in E$, is obvious. Next we prove the triangle inequality. For any $x,y,z\in E$, from (SPM-3) we have

$$F_{x,y}(t)\geq\int_{0}^{+\infty}F_{x,z}(t-u)\,dF_{z,y}(u)=F_{x,z}(t)*F_{z,y}(t).$$

Since $F_{x,z}*F_{z,y}$ is the distribution function of the sum of two independent random variables with distributions $F_{x,z}$ and $F_{z,y}$, and a pointwise larger distribution function on $[0,+\infty)$ has a smaller expectation, probability theory gives

$$\int_{0}^{+\infty}t\,dF_{x,y}(t)\leq\int_{0}^{+\infty}t\,dF_{x,z}(t)+\int_{0}^{+\infty}t\,dF_{z,y}(t),$$

which implies that

$$d_F(x,y)\leq d_F(x,z)+d_F(z,y).$$

This completes the proof. □

Theorem 2.2 Let (E,F) be a complete S-probabilistic metric space. Let $T:E\to E$ be a mapping satisfying the following condition:

$$F_{Tx,Ty}(t)\geq F_{x,y}\!\left(\frac{t}{h}\right),\quad \forall x,y\in E,\ t\in\mathbb{R}=(-\infty,+\infty), \tag{2.1}$$

where $0<h<1$ is a constant. Then T has a unique fixed point $x^{*}\in E$, and for any given $x_0\in E$ the iterative sequence $x_{n+1}=Tx_n$ converges to $x^{*}$. Further, the error estimate inequality

$$\int_{0}^{+\infty}t\,dF_{T^{n}x_0,x^{*}}(t)\leq\frac{h^{n}}{1-h}\int_{0}^{+\infty}t\,dF_{Tx_0,x_0}(t)$$

holds for all $n\geq 1$.

Proof For any $x,y\in E$, from (2.1) we have

$$d_F(Tx,Ty)=\int_{0}^{+\infty}t\,dF_{Tx,Ty}(t)\leq\int_{0}^{+\infty}t\,dF_{x,y}\!\left(\frac{t}{h}\right)=h\int_{0}^{+\infty}\frac{t}{h}\,dF_{x,y}\!\left(\frac{t}{h}\right)=h\int_{0}^{+\infty}u\,dF_{x,y}(u)=h\,d_F(x,y).$$

For any given $x_0\in E$, define $x_{n+1}=Tx_n$ for all $n=0,1,2,\ldots$ . Observe that

$$d_F(x_n,x_{n+m})\leq d_F(x_n,x_{n+1})+d_F(x_{n+1},x_{n+m})\leq d_F(x_n,x_{n+1})+d_F(x_{n+1},x_{n+2})+d_F(x_{n+2},x_{n+m})\leq\cdots\leq\bigl(h^{n}+h^{n+1}+h^{n+2}+\cdots+h^{n+m-1}\bigr)d_F(x_0,x_1). \tag{2.2}$$

Since $0<h<1$, we have

$$\bigl(h^{n}+h^{n+1}+h^{n+2}+\cdots+h^{n+m-1}\bigr)d_F(x_0,x_1)\to 0$$

as $n\to\infty$. Hence

$$\int_{0}^{+\infty}t\,dF_{x_n,x_{n+m}}(t)=d_F(x_n,x_{n+m})\to 0$$

as $n\to\infty$. We claim that

$$\lim_{n\to\infty}F_{x_n,x_{n+m}}(t)=H(t). \tag{2.3}$$

If not, there exist numbers $t_0>0$, $0<\lambda_0<1$, and subsequences $\{n_k\}$, $\{m_k\}$ of $\{n\}$ such that $F_{x_{n_k},x_{n_k+m_k}}(t_0)\leq\lambda_0$ for all $k\geq 1$. In this case, we have

$$d_F(x_{n_k},x_{n_k+m_k})=\int_{0}^{+\infty}t\,dF_{x_{n_k},x_{n_k+m_k}}(t)=\int_{0}^{t_0}t\,dF_{x_{n_k},x_{n_k+m_k}}(t)+\int_{t_0}^{+\infty}t\,dF_{x_{n_k},x_{n_k+m_k}}(t)\geq\int_{t_0}^{+\infty}t\,dF_{x_{n_k},x_{n_k+m_k}}(t)\geq t_0\bigl(1-F_{x_{n_k},x_{n_k+m_k}}(t_0)\bigr)\geq t_0(1-\lambda_0)>0.$$

This is a contradiction. From (2.3) we know that $\{x_n\}$ is a Cauchy sequence in the complete S-probabilistic metric space (E,F). Hence there exists a point $x^{*}\in E$ such that $\{x_n\}$ converges to $x^{*}$ in the sense that

$$\lim_{n\to\infty}F_{x_n,x^{*}}(t)=H(t),\quad \forall t>0.$$

Therefore

$$\lim_{n\to\infty}F_{x_n,Tx^{*}}(t)\geq\lim_{n\to\infty}F_{x_{n-1},x^{*}}\!\left(\frac{t}{h}\right)=H(t),\quad \forall t>0.$$

We claim that $x^{*}$ is a fixed point of T. In fact, for any $t>0$, it follows from condition (SPM-3) that

$$F_{x^{*},Tx^{*}}(t)\geq\int_{0}^{+\infty}F_{x^{*},x_n}(t-u)\,dF_{x_n,Tx^{*}}(u)\geq\int_{0}^{t/2}F_{x^{*},x_n}(t-u)\,dF_{x_n,Tx^{*}}(u)\geq F_{x^{*},x_n}\!\left(\frac{t}{2}\right)\Bigl(F_{x_n,Tx^{*}}\!\left(\frac{t}{2}\right)-0\Bigr)\to 1$$

as $n\to\infty$, which implies $F_{x^{*},Tx^{*}}(t)=H(t)$, and hence $x^{*}=Tx^{*}$. Thus $x^{*}$ is a fixed point of T. If $\bar{x}$ is another fixed point of T, we observe that

$$F_{x^{*},\bar{x}}(t)=F_{Tx^{*},T\bar{x}}(t)\geq F_{x^{*},\bar{x}}\!\left(\frac{t}{h}\right),$$

which implies $F_{x^{*},\bar{x}}(t)=H(t)$ for all $t\in\mathbb{R}$, and hence $x^{*}=\bar{x}$. Thus the fixed point of T is unique. Meanwhile, for any given $x_0$, the iterative sequence $x_n=T^{n}x_0$ converges to $x^{*}$. Finally, we prove the error estimate formula. Letting $m\to\infty$ in inequality (2.2), we get

$$d_F(x_n,x^{*})\leq\frac{h^{n}}{1-h}\,d_F(x_0,x_1),$$

which can be rewritten as the error estimate formula

$$\int_{0}^{+\infty}t\,dF_{T^{n}x_0,x^{*}}(t)\leq\frac{h^{n}}{1-h}\int_{0}^{+\infty}t\,dF_{Tx_0,x_0}(t).$$

This completes the proof. □
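
In the deterministic case $F_{x,y}(t)=H(t-|x-y|)$ one has $d_F(x,y)=|x-y|$, and Theorem 2.2 reduces to the Banach contraction principle with its a priori error bound. A small sketch of the iteration and the estimate (the mapping T and the starting point are illustrative choices):

```python
# Sketch of the Picard iteration and the error estimate of Theorem 2.2 in
# the deterministic special case, where d_F(x, y) = |x - y|.

def T(x):
    return 0.5 * x + 1.0      # an h-contraction on the reals with h = 1/2

h = 0.5
x_star = 2.0                  # the unique fixed point: T(2) = 2
x0 = 10.0

x, errors_ok = x0, True
for n in range(1, 20):
    x = T(x)
    # a priori bound: d(x_n, x*) <= h^n / (1 - h) * d(x0, T x0)
    bound = h**n / (1.0 - h) * abs(T(x0) - x0)
    errors_ok = errors_ok and abs(x - x_star) <= bound + 1e-12
print(errors_ok, abs(x - x_star) < 1e-4)
```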

Theorem 2.3 Let $(E,F,\triangle)$ be a complete Menger probabilistic metric space. Assume that

$$\triangle\!\left(F_{x,z}\!\left(\frac{t}{2}\right),F_{z,y}\!\left(\frac{t}{2}\right)\right)\leq\int_{0}^{+\infty}F_{x,z}(t-u)\,dF_{z,y}(u) \tag{2.4}$$

for all $x,y,z\in E$, $t>0$. Let $T:E\to E$ be a mapping satisfying the following condition:

$$F_{Tx,Ty}(t)\geq F_{x,y}\!\left(\frac{t}{h}\right),\quad \forall x,y\in E,\ t>0, \tag{2.5}$$

where $0<h<1$ is a constant. Then T has a unique fixed point $x^{*}\in E$, and for any given $x_0\in E$ the iterative sequence $x_{n+1}=Tx_n$ converges to $x^{*}$. Further, the error estimate inequality

$$\int_{0}^{+\infty}t\,dF_{T^{n}x_0,x^{*}}(t)\leq\frac{h^{n}}{1-h}\int_{0}^{+\infty}t\,dF_{Tx_0,x_0}(t)$$

holds for all $n\geq 1$.

Proof From (MPM-3) and (2.4) we see that $(E,F,\triangle)$ is an S-probabilistic metric space. This, together with (2.5) and Theorem 2.2, proves the conclusion. □

3 Best proximity point theorems for contractions

We first define the P-operator $P:B_0\to A_0$, which is very useful for the proofs of the best proximity point theorems. From the definitions of $A_0$ and $B_0$, we know that for any given $y\in B_0$ there exists an element $x\in A_0$ such that $F_{x,y}(t)=F_{A,B}(t)$. Because (A,B) has the weak P-property, such an x is unique. We denote by $x=Py$ the P-operator from $B_0$ into $A_0$.

Theorem 3.1 Let (E,F) be a complete S-probabilistic metric space. Let (A,B) be a pair of nonempty subsets in E such that $A_0$ is nonempty and closed, and suppose that (A,B) satisfies the weak P-property. Let $T:A\to B$ be a mapping satisfying the following condition:

$$F_{Tx,Ty}(t)\geq F_{x,y}\!\left(\frac{t}{h}\right),\quad \forall x,y\in A,\ t>0,$$

where $0<h<1$ is a constant. Assume that $T(A_0)\subset B_0$. Then T has a unique best proximity point $x^{*}\in A$, and for any given $x_0\in A_0$ the iterative sequence $x_{n+1}=PTx_n$ converges to $x^{*}$. Further, the error estimate inequality

$$\int_{0}^{+\infty}t\,dF_{(PT)^{n}x_0,x^{*}}(t)\leq\frac{h^{n}}{1-h}\int_{0}^{+\infty}t\,dF_{PTx_0,x_0}(t)$$

holds for all $n\geq 1$.

Proof Since the pair (A,B) has the weak P-property, we have

$$F_{PTx_1,PTx_2}(t)\geq F_{Tx_1,Tx_2}(t)\geq F_{x_1,x_2}\!\left(\frac{t}{h}\right),\quad t>0,$$

for any $x_1,x_2\in A_0$. This shows that $PT:A_0\to A_0$ is a contraction from the complete S-probabilistic metric subspace $A_0$ into itself. Using Theorem 2.2, we know that PT has a unique fixed point $x^{*}$, and for any given $x_0\in A_0$ the iterative sequence $x_{n+1}=PTx_n$ converges to $x^{*}$. Further, the error estimate inequality

$$\int_{0}^{+\infty}t\,dF_{(PT)^{n}x_0,x^{*}}(t)\leq\frac{h^{n}}{1-h}\int_{0}^{+\infty}t\,dF_{PTx_0,x_0}(t)$$

holds for all $n\geq 1$. Since $PTx^{*}=x^{*}$ if and only if $F_{x^{*},Tx^{*}}(t)=F_{A,B}(t)$, the point $x^{*}$ is the unique best proximity point of $T:A\to B$. This completes the proof. □

Theorem 3.2 Let $(E,F,\triangle)$ be a complete Menger probabilistic metric space. Assume that

$$\triangle\!\left(F_{x,z}\!\left(\frac{t}{2}\right),F_{z,y}\!\left(\frac{t}{2}\right)\right)\leq\int_{0}^{+\infty}F_{x,z}(t-u)\,dF_{z,y}(u) \tag{3.1}$$

for all $x,y,z\in E$, $t>0$. Let (A,B) be a pair of nonempty subsets in E such that $A_0$ is nonempty and closed, and suppose that (A,B) satisfies the weak P-property. Let $T:A\to B$ be a mapping satisfying the following condition:

$$F_{Tx,Ty}(t)\geq F_{x,y}\!\left(\frac{t}{h}\right),\quad \forall x,y\in A,\ t>0,$$

where $0<h<1$ is a constant. Assume that $T(A_0)\subset B_0$. Then T has a unique best proximity point $x^{*}\in A$, and for any given $x_0\in A_0$ the iterative sequence $x_{n+1}=PTx_n$ converges to $x^{*}$. Further, the error estimate inequality

$$\int_{0}^{+\infty}t\,dF_{(PT)^{n}x_0,x^{*}}(t)\leq\frac{h^{n}}{1-h}\int_{0}^{+\infty}t\,dF_{PTx_0,x_0}(t)$$

holds for all $n\geq 1$.

Proof From (3.1) we know that $(E,F,\triangle)$ is an S-probabilistic metric space. By using Theorem 3.1, the conclusion is proved. □

4 Best proximity point theorem for Geraghty-contractions

First, we introduce the class Γ of those functions $\beta:[0,+\infty)\to[0,1)$ satisfying the following condition:

$$\beta(t_n)\to 1\quad\Longrightarrow\quad t_n\to 0.$$
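
A concrete member of Γ may help fix ideas. The following sketch uses $\beta(t)=1/(1+t)$ for $t>0$, patched at $t=0$ so that β maps into $[0,1)$ (an illustrative choice, not taken from the paper):

```python
def beta(t):
    # Geraghty-type function: 1/(1+t) for t > 0, patched at t = 0 so that
    # beta maps [0, inf) into [0, 1) (an illustrative choice).
    # If beta(t_n) -> 1 then eventually t_n > 0 and t_n = 1/beta(t_n) - 1 -> 0.
    return 1.0 / (1.0 + t) if t > 0 else 0.9

t_n = [1.0 / k for k in range(1, 1000)]   # t_n -> 0
b_n = [beta(t) for t in t_n]              # beta(t_n) -> 1
print(0.0 <= min(b_n) and max(b_n) < 1.0, abs(b_n[-1] - 1.0) < 1e-2)
```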

Definition 4.1 Let (E,F) be a probabilistic metric space and let (A,B) be a pair of nonempty subsets in E. A mapping $T:A\to B$ is said to be a Geraghty-contraction if there exists $\beta\in\Gamma$ such that

$$F_{Tx,Ty}(t)\geq F_{x,y}\!\left(\frac{t}{\beta(d_F(x,y))}\right),\quad \forall x,y\in A,\ t>0, \tag{4.1}$$

where

$$d_F(x,y)=\int_{0}^{+\infty}t\,dF_{x,y}(t).$$

Theorem 4.2 Let (E,F) be a complete S-probabilistic metric space. Let (A,B) be a pair of nonempty subsets in E such that $A_0$ is nonempty and closed, and suppose that (A,B) satisfies the weak P-property. Let $T:A\to B$ be a Geraghty-contraction with $T(A_0)\subset B_0$. Then T has a unique best proximity point $x^{*}\in A$, and for any given $x_0\in A_0$ the iterative sequence $x_{n+1}=PTx_n$ converges to $x^{*}$.

Proof From (4.1) and the weak P-property of (A,B), we get

$$d_F(PTx,PTy)=\int_{0}^{+\infty}t\,dF_{PTx,PTy}(t)\leq\int_{0}^{+\infty}t\,dF_{Tx,Ty}(t)\leq\int_{0}^{+\infty}t\,dF_{x,y}\!\left(\frac{t}{\beta(d_F(x,y))}\right)=\beta(d_F(x,y))\int_{0}^{+\infty}\frac{t}{\beta(d_F(x,y))}\,dF_{x,y}\!\left(\frac{t}{\beta(d_F(x,y))}\right)=\beta(d_F(x,y))\,d_F(x,y),\quad \forall x,y\in A_0. \tag{4.2}$$

We proved in Theorem 2.1 that $d_F(\cdot,\cdot)$ is a metric on E. For any given $x_0\in A_0$, define $x_{n+1}=PTx_n$, $n=0,1,2,\ldots$ . From (4.2) we have

$$d_F(x_n,x_{n+1})=d_F(PTx_{n-1},PTx_n)\leq d_F(Tx_{n-1},Tx_n)\leq\beta(d_F(x_{n-1},x_n))\,d_F(x_{n-1},x_n)<d_F(x_{n-1},x_n). \tag{4.3}$$

Suppose that there exists $n_0$ such that $d_F(x_{n_0},x_{n_0+1})=0$. In this case, $PTx_{n_0}=x_{n_0}$, which implies that $x_{n_0}$ is a best proximity point of T, and this is the desired result. In the contrary case, suppose that $d_F(x_n,x_{n+1})>0$ for all $n\geq 0$. By (4.3), $\{d_F(x_n,x_{n+1})\}$ is a decreasing sequence of nonnegative real numbers, and hence there exists $r\geq 0$ such that $\lim_{n\to\infty}d_F(x_n,x_{n+1})=r$. In the sequel, we prove that $r=0$. Assume $r>0$; then from (4.3) we have

$$0<\frac{d_F(x_n,x_{n+1})}{d_F(x_{n-1},x_n)}\leq\beta(d_F(x_{n-1},x_n))<1$$

for all $n\geq 1$. The last inequality implies that $\lim_{n\to\infty}\beta(d_F(x_{n-1},x_n))=1$, and since $\beta\in\Gamma$ we obtain $r=0$, which contradicts our assumption. Therefore

$$\lim_{n\to\infty}d_F(x_n,x_{n+1})=0. \tag{4.4}$$

In what follows, we prove that $\{x_n\}$ is a Cauchy sequence in the metric space $(E,d_F(\cdot,\cdot))$. In the contrary case, there exist two subsequences $\{x_{n_k}\}$, $\{x_{m_k}\}$ such that

$$\lim_{k\to\infty}d_F(x_{n_k},x_{m_k})>0. \tag{4.5}$$

Without loss of generality, we still denote these subsequences by $\{x_n\}$, $\{x_m\}$. By the triangle inequality,

$$d_F(x_n,x_m)\leq d_F(x_n,x_{n+1})+d_F(x_{n+1},x_{m+1})+d_F(x_{m+1},x_m)=d_F(x_n,x_{n+1})+d_F(PTx_n,PTx_m)+d_F(x_{m+1},x_m)\leq d_F(x_n,x_{n+1})+d_F(Tx_n,Tx_m)+d_F(x_{m+1},x_m)\leq d_F(x_n,x_{n+1})+\beta(d_F(x_n,x_m))\,d_F(x_n,x_m)+d_F(x_{m+1},x_m),$$

which implies

$$d_F(x_n,x_m)\leq\frac{1}{1-\beta(d_F(x_n,x_m))}\bigl(d_F(x_n,x_{n+1})+d_F(x_{m+1},x_m)\bigr).$$

The last inequality together with (4.4) and (4.5) gives

$$\lim_{n,m\to\infty}\frac{1}{1-\beta(d_F(x_n,x_m))}=\infty.$$

Therefore

$$\lim_{n,m\to\infty}\beta(d_F(x_n,x_m))=1.$$

Since $\beta\in\Gamma$, we get

$$\lim_{n,m\to\infty}d_F(x_n,x_m)=0.$$

This contradicts (4.5). Hence $\lim_{n,m\to\infty}d_F(x_n,x_m)=0$, and $\{x_n\}$ is a Cauchy sequence in the metric space $(E,d_F(\cdot,\cdot))$. By using the same method as in Theorem 2.2, we obtain

$$\lim_{n,m\to\infty}F_{x_n,x_m}(t)=H(t),\quad \forall t\in\mathbb{R}.$$

This shows that $\{x_n\}$ is also a Cauchy sequence in the S-probabilistic metric space (E,F). Since (E,F) is complete, there exists a point $x^{*}\in E$ such that $x_n\to x^{*}$ as $n\to\infty$. By using the same method as in Theorem 2.2, we know that $x^{*}$ is the unique fixed point of the mapping $PT:A_0\to A_0$; that is, $PTx^{*}=x^{*}$, which is equivalent to $x^{*}$ being the unique best proximity point of T. This completes the proof. □