1 Introduction

When a computational program uses recursion to find the solution to a problem, each step of the computation yields an approximation to that solution which is better than the approximations obtained in the preceding steps, and the final answer is always obtained as the 'limit' of the computing process. A mathematical model for this sort of situation is the so-called Scott model, which is based on ideas from order theory and topology (see [1, 2] for a detailed account of the Scott model and its applications). In particular, the order represents a notion of information: each step of the computation is identified with an element of the mathematical model which is greater than (or equal to) those associated with the preceding steps, since each approximation gives more information about the final solution than the ones computed before. The final output of the computational process is seen as the limit of the successive approximations. Thus recursion processes are modeled as increasing sequences in an ordered set that converge to their least upper bound with respect to the given topology.

In 1994, Matthews introduced the notion of a Scott-like topology as a mathematical framework for modeling, in the spirit of Scott, sequences of increasing information content arising in computer science [3].

Let us recall, with the aim of introducing the concept of Scott-like topology, that a pair $(X,\le)$ is said to be an ordered set provided that $X$ is a nonempty set and $\le$ is a reflexive, antisymmetric and transitive binary relation on $X$ [2]. Given a subset $Y\subseteq X$, an upper bound for $Y$ is an element $x\in X$ such that $y\le x$ for all $y\in Y$. A least element of $Y$ is an element $z\in Y$ such that $z\le y$ for all $y\in Y$. Moreover, a sequence $(x_n)_{n\in\mathbb{N}}$ in $(X,\le)$ is increasing if $x_n\le x_{n+1}$ for all $n\in\mathbb{N}$.

According to Matthews, a weakly order consistent topology over an ordered set $(X,\le)$ is a topology $\mathcal{T}$ over $X$ such that $x\le y \Leftrightarrow x\in\mathrm{cl}(\{y\})$ for all $x,y\in X$, where $\mathrm{cl}(\{y\})$ denotes the closure of $\{y\}$ with respect to $\mathcal{T}$. Furthermore, a Scott-like topology over an ordered set $(X,\le)$ is a weakly order consistent topology $\mathcal{T}$ over $X$ satisfying the following properties:

(1) every increasing sequence $(x_n)_{n\in\mathbb{N}}$ in $(X,\le)$ has a least upper bound, where $\mathbb{N}$ denotes the set of positive integers;

(2) for every $O\in\mathcal{T}$ containing the least upper bound of an increasing sequence $(x_n)_{n\in\mathbb{N}}$, there exists $n_0\in\mathbb{N}$ such that $x_n\in O$ for all $n>n_0$.

In the aforesaid reference [3], Matthews also introduced the notion of a partial metric. In order to recall this concept, let us denote by $\mathbb{R}^+$ the set of nonnegative real numbers.

Following [3], a partial metric on a nonempty set $X$ is a function $p:X\times X\to\mathbb{R}^+$ such that for all $x,y,z\in X$:

(i) $p(x,x)=p(x,y)=p(y,y) \Leftrightarrow x=y$;

(ii) $p(x,x)\le p(x,y)$;

(iii) $p(x,y)=p(y,x)$;

(iv) $p(x,y)\le p(x,z)+p(z,y)-p(z,z)$.

A partial metric space is, of course, a pair $(X,p)$ such that $X$ is a nonempty set and $p$ is a partial metric on $X$.

The concept of a partial metric, since its introduction by Matthews, has been widely accepted in computer science (see, for instance, [4–15]). This is due to the fact that partial metric spaces can be used as a mathematical tool to model computational processes in the spirit of Scott. Indeed, each partial metric $p$ on $X$ generates a $T_0$ topology $\mathcal{T}(p)$ on $X$ which has as a base the family of open $p$-balls $\{B_p(x,\varepsilon): x\in X,\ \varepsilon>0\}$, where $B_p(x,\varepsilon)=\{y\in X: p(x,y)<p(x,x)+\varepsilon\}$ for all $x\in X$ and $\varepsilon>0$. Moreover, every partial metric $p$ induces an order $\sqsubseteq_p$ on $X$ as follows: $x\sqsubseteq_p y \Leftrightarrow p(x,y)=p(x,x)$.
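To make these notions concrete, the following minimal sketch (ours, not from [3]) uses the 'max' partial metric $p(x,y)=\max\{x,y\}$ on the nonnegative reals, which reappears in Example 10 below; it checks ball membership and the induced order.

```python
# Minimal illustration (ours): the 'max' partial metric on the nonnegative
# reals, its open p-balls, and the induced order x ⊑_p y ⇔ p(x,y) = p(x,x).

def p(x: float, y: float) -> float:
    """Partial metric p(x,y) = max{x,y}; note that p(x,x) = x need not be 0."""
    return max(x, y)

def in_ball(y: float, x: float, eps: float) -> bool:
    """y ∈ B_p(x, eps) ⇔ p(x,y) < p(x,x) + eps."""
    return p(x, y) < p(x, x) + eps

def below(x: float, y: float) -> bool:
    """x ⊑_p y ⇔ p(x,y) = p(x,x); for p = max this reads y <= x."""
    return p(x, y) == p(x, x)

assert in_ball(2.0, 3.0, 1.5)                   # p(3,2) = 3 < 3 + 1.5
assert below(3.0, 2.0) and not below(2.0, 3.0)  # the order reverses the usual one
```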

The next result reveals why partial metric spaces are a suitable mathematical tool to describe Scott-like processes (see [3] for a deeper discussion).

Proposition 1 Let $(X,p)$ be a complete partial metric space. Then the partial metric topology $\mathcal{T}(p)$ is a Scott-like topology over $(X,\sqsubseteq_p)$, and thus every increasing sequence $(x_n)_{n\in\mathbb{N}}$ in $(X,\sqsubseteq_p)$ has a least upper bound and converges to it with respect to $\mathcal{T}(p)$.

Remark 2 Note that if the sequence $(x_n)_{n\in\mathbb{N}}$ is increasing with least upper bound $x$, then $x_n\sqsubseteq_p x$ for all $n\in\mathbb{N}$ and, thus, $p(x_n,x)-p(x_n,x_n)=0$. So, in the preceding proposition, the convergence of $(x_n)_{n\in\mathbb{N}}$ to $x$ actually holds with respect to $\mathcal{T}(p^s)$.

Inspired by the applications to program verification, Matthews extended Banach’s fixed point theorem to the framework of partial metric spaces in [3], and he used it to formulate a suitable test for lazy data flow deadlock in Kahn’s model of parallel computation in [4]. The aforementioned extension of Banach’s fixed point theorem can be stated as follows.

Theorem 3 Let $f$ be a mapping of a complete partial metric space $(X,p)$ into itself such that there is $s\in[0,1[$ with

$$p(f(x),f(y))\le s\,p(x,y)$$

for all $x,y\in X$. Then $f$ has a unique fixed point. Moreover, if $x^*\in X$ is the fixed point of $f$, then $p(x^*,x^*)=0$.

According to [3], a sequence $(x_n)_{n\in\mathbb{N}}$ in a partial metric space $(X,p)$ converges to a point $x\in X$ with respect to $\mathcal{T}(p)$ if and only if $p(x,x)=\lim_{n\to\infty}p(x,x_n)$. Moreover, a sequence $(x_n)_{n\in\mathbb{N}}$ in a partial metric space $(X,p)$ is called a Cauchy sequence if $\lim_{n,m\to\infty}p(x_n,x_m)$ exists and is finite. Furthermore, a partial metric space $(X,p)$ is said to be complete if every Cauchy sequence $(x_n)_{n\in\mathbb{N}}$ in $X$ converges, with respect to $\mathcal{T}(p)$, to a point $x\in X$ such that $p(x,x)=\lim_{n,m\to\infty}p(x_n,x_m)$.

According to [3], every partial metric induces a metric in a natural way. Indeed, given a partial metric space $(X,p)$, the function $p^s:X\times X\to\mathbb{R}^+$ defined by

$$p^s(x,y)=2p(x,y)-p(x,x)-p(y,y)$$

for all $x,y\in X$ is a metric on $X$. Moreover, a sequence $(x_n)_{n\in\mathbb{N}}$ in $(X,p)$ converges to $x\in X$ with respect to $\mathcal{T}(p^s)$ if and only if $\lim_{n\to\infty}p(x,x_n)=\lim_{n\to\infty}p(x_n,x_n)=p(x,x)$. Furthermore, a sequence in $(X,p)$ is Cauchy if and only if it is Cauchy in the metric space $(X,p^s)$, and $(X,p)$ is complete if and only if $(X,p^s)$ is complete.
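As a quick sanity check (our example again, with the 'max' partial metric), the induced metric $p^s$ is the ordinary Euclidean distance, so $\mathcal{T}(p^s)$-convergence is ordinary convergence:

```python
# Sketch (ours): the metric p^s induced by the 'max' partial metric, for which
# p^s(x,y) = 2*max{x,y} - x - y = |x - y|.

def p(x, y):
    return max(x, y)

def ps(x, y):
    """p^s(x,y) = 2 p(x,y) - p(x,x) - p(y,y)."""
    return 2 * p(x, y) - p(x, x) - p(y, y)

assert ps(2.0, 5.0) == abs(2.0 - 5.0)
# x_n = 1 + 1/n converges to 1 wrt T(p^s), and indeed
# lim p(1, x_n) = lim p(x_n, x_n) = p(1, 1) = 1:
x_n = 1 + 1 / 9999
assert abs(p(1.0, x_n) - p(1.0, 1.0)) < 1e-3
assert abs(p(x_n, x_n) - p(1.0, 1.0)) < 1e-3
```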

Since Matthews published Theorem 3, an intense research activity on fixed point results in partial metric spaces has developed, and a large number of fixed point results in the metric framework have been extended to the partial metric context [13, 16–41].

In the light of the facts that, on the one hand, partial metric spaces are a useful tool for solving practical problems that arise in several fields of computer science and, on the other hand, the scientific community has a growing interest in partial metric fixed point theory, in this paper our goal is to show that partial metric spaces can be used satisfactorily for the asymptotic complexity analysis of algorithms by means of fixed point techniques. To this end, we discuss whether Theorem 3 can be used for such a purpose. However, we show that, in principle, this question has a negative answer and, for this reason, we delve into the study of fixed point techniques in asymptotic complexity analysis of algorithms. Thus, we present a mathematical technique for discussing the complexity of algorithms whose foundation lies in the use of two new fixed point results that we provide for partial metric spaces. In order to show the potential applicability of the developed theory and to validate our new fixed point technique, we apply it to analyze formally the asymptotic complexity of two celebrated recursive algorithms, namely, Quicksort and Hanoi.

The remainder of the paper is organized as follows. In Section 2 we prove the announced fixed point theorems for self-mappings defined on complete partial metric spaces. Moreover, in the same section, we provide examples showing that the hypotheses in the statements of our results cannot be weakened. Section 3 is devoted to introducing the reader to fixed point techniques for asymptotic complexity analysis of algorithms. Concretely, general and fundamental aspects of asymptotic complexity analysis are recalled and, in addition, the reference fixed point technique for carrying out the complexity analysis of algorithms, due to Schellekens and on which our study is based, is presented in detail in order to motivate our subsequent work, developed in Section 4. In the latter section, we discuss the feasibility of using the Matthews fixed point theorem as a mathematical tool for the asymptotic complexity analysis of algorithms and, in particular, we show that the aforesaid result is not, in principle, appropriate for such a purpose. Accordingly, in the same section, we introduce a new mathematical technique for the asymptotic complexity analysis of algorithms whose basis is provided by the fixed point results proved in Section 2. We end the section, and thus the paper, by applying the developed fixed point method to analyze the asymptotic complexity of the aforesaid recursive algorithms.

2 The fixed point theorems

In this section we present our main results, which will play a central role in the application to asymptotic complexity analysis developed in Section 4.1. To this end, let us recall that a mapping $f$ from an ordered set $(X,\le)$ into itself is monotone if $f(x)\le f(y)$ whenever $x\le y$. Moreover, according to [42], a mapping $f$ from an ordered set $(X,\le)$ into itself is said to be $\le$-continuous provided that the least upper bound of $(f(x_n))_{n\in\mathbb{N}}$ is $f(x^*)$ for every increasing sequence $(x_n)_{n\in\mathbb{N}}$ whose least upper bound exists and is $x^*$. Of course every $\le$-continuous mapping is monotone.

Theorem 4 Let $(X,p)$ be a complete partial metric space and let $f:X\to X$ be a $\sqsubseteq_p$-continuous mapping. If there exists $x_0\in X$ such that $x_0\sqsubseteq_p f(x_0)$, then $f$ has a fixed point in $\uparrow x_0=\{x\in X: x_0\sqsubseteq_p x\}$.

Proof Let $x_0\in X$ such that $x_0\sqsubseteq_p f(x_0)$. Since $f$ is monotone, we have that

$$x_0\sqsubseteq_p f(x_0)\sqsubseteq_p f^2(x_0)\sqsubseteq_p\cdots\sqsubseteq_p f^n(x_0)\sqsubseteq_p f^{n+1}(x_0)\sqsubseteq_p\cdots.$$

Observe that we can assume, without loss of generality, that $x_0\neq f(x_0)$, since otherwise the existence of a fixed point in $\uparrow x_0$ is already guaranteed.

Since the sequence $(f^n(x_0))_{n\in\mathbb{N}}$ is increasing in $(X,\sqsubseteq_p)$, we have, by Proposition 1 and Remark 2, that there exists $x^*\in X$ such that $x^*$ is the least upper bound of $(f^n(x_0))_{n\in\mathbb{N}}$ and, in addition, that $\lim_{n\to\infty}f^n(x_0)=x^*$ with respect to $\mathcal{T}(p^s)$. Since $f$ is $\sqsubseteq_p$-continuous, $f(x^*)$ is the least upper bound of $(f^{n+1}(x_0))_{n\in\mathbb{N}}$, which coincides with the least upper bound of $(f^n(x_0))_{n\in\mathbb{N}}$. Whence we immediately obtain that $f(x^*)=x^*$ and that $x^*\in\uparrow x_0$. □

Remark 5 Observe that the proof of Theorem 4 follows by arguments similar to those given in the proofs of Kleene's theorem or the Tarski–Kantorovitch theorem for mappings from an ω-chain-complete ordered set into itself (see [42, 43] for more details). However, we have included a detailed proof of the aforementioned result for the sake of completeness and in order to help the reader.

The next example shows that the $\sqsubseteq_p$-continuity of the mapping cannot be omitted from the statement of Theorem 4.

Example 6 Let $p^*$ be the partial metric on $(0,\infty]$ given by

$$p^*(x,y)=\max\left\{\frac{1}{x},\frac{1}{y}\right\}$$

for all $x,y\in(0,\infty]$, where we adopt the convention that $\frac{1}{\infty}=0$. It is not hard to check that the partial metric space $((0,\infty],p^*)$ is complete and that $x\sqsubseteq_{p^*}y \Leftrightarrow x\le y$, where $\le$ stands for the usual order on the extended real line. Consider the subset $X=\{1,2\}$ of $(0,\infty]$. Then the partial metric space $(X,p^*)$ is also complete. Now, define the mapping $f:X\to X$ by $f(1)=2$ and $f(2)=1$. Clearly, $1\sqsubseteq_{p^*}f(1)=2$. Observe that $f$ is not $\sqsubseteq_{p^*}$-continuous because $f$ is not monotone. In fact, $1\sqsubseteq_{p^*}2$ but $f(1)=2\not\sqsubseteq_{p^*}f(2)=1$. It is clear that $f$ has no fixed points.

Let us recall that, given a partial metric space $(X,p)$, a mapping $f:X\to X$ is continuous provided that $f$ is continuous from $(X,\mathcal{T}(p))$ into itself.

Remark 7 Note that every monotone and continuous mapping $f$ from a complete partial metric space $(X,p)$ into itself is $\sqsubseteq_p$-continuous. Indeed, assume that $(x_n)_{n\in\mathbb{N}}$ is an increasing sequence in $(X,\sqsubseteq_p)$. Since $(X,p)$ is complete, we have guaranteed, by Proposition 1 and Remark 2, that there exists $x^*\in X$ such that $x^*$ is the least upper bound of $(x_n)_{n\in\mathbb{N}}$ and, in addition, that $\lim_{n\to\infty}x_n=x^*$ with respect to $\mathcal{T}(p^s)$. Since $f$ is continuous, we have that $\lim_{n\to\infty}f(x_n)=f(x^*)$ with respect to $\mathcal{T}(p)$. The monotonicity of $f$ provides that $f(x_n)\sqsubseteq_p f(x^*)$ and, thus, that $p(f(x^*),f(x_n))-p(f(x_n),f(x_n))=0$. Whence we deduce that $\lim_{n\to\infty}f(x_n)=f(x^*)$ with respect to $\mathcal{T}(p^s)$. Moreover, since the sequence $(f(x_n))_{n\in\mathbb{N}}$ is increasing and the partial metric space $(X,p)$ is complete, the least upper bound of $(f(x_n))_{n\in\mathbb{N}}$ exists, say $y^*\in X$, in such a way that $\lim_{n\to\infty}f(x_n)=y^*$ with respect to $\mathcal{T}(p^s)$. Therefore $f(x^*)=y^*$, as claimed.

In the light of Theorem 4 and Remark 7, we immediately obtain the following result.

Corollary 8 Let $(X,p)$ be a complete partial metric space and let $f:X\to X$ be a monotone and continuous mapping. If there exists $x_0\in X$ such that $x_0\sqsubseteq_p f(x_0)$, then $f$ has a fixed point in $\uparrow x_0=\{x\in X: x_0\sqsubseteq_p x\}$.

In the following, given a partial metric space $(X,p)$, we will say that a mapping $f:X\to X$ is $p^s$-continuous provided that $f$ is continuous from $(X,\mathcal{T}(p^s))$ into itself. In our subsequent result, $p^s$-continuity plays a central role.

Theorem 9 Let $(X,p)$ be a complete partial metric space and let $f:X\to X$ be a monotone and $p^s$-continuous mapping. If there exists $x_0\in X$ such that $f(x_0)\sqsubseteq_p x_0$, then $f$ has a fixed point in $\downarrow x_0=\{x\in X: x\sqsubseteq_p x_0\}$.

Proof Let $x_0\in X$ such that $f(x_0)\sqsubseteq_p x_0$. We assume, without loss of generality, that $x_0\neq f(x_0)$, since otherwise the existence of a fixed point in $\downarrow x_0$ is already guaranteed.

Since f is monotone, we have that

$$x_0\sqsupseteq_p f(x_0)\sqsupseteq_p f^2(x_0)\sqsupseteq_p\cdots\sqsupseteq_p f^n(x_0)\sqsupseteq_p f^{n+1}(x_0)\sqsupseteq_p\cdots.$$

It follows that

$$p\bigl(f^n(x_0),f^n(x_0)\bigr)\le p\bigl(f^n(x_0),f^{n+1}(x_0)\bigr)=p\bigl(f^{n+1}(x_0),f^{n+1}(x_0)\bigr)$$

for all $n\in\mathbb{N}$. Thus the sequence $(p(f^n(x_0),f^n(x_0)))_{n\in\mathbb{N}}$ is monotone in $\mathbb{R}^+$. So, there exists $r\in\mathbb{R}^+$ such that $\lim_{n\to\infty}p(f^n(x_0),f^n(x_0))=r$. Then $\lim_{n,m\to\infty}p(f^m(x_0),f^n(x_0))=r$, since $p(f^m(x_0),f^n(x_0))=p(f^n(x_0),f^n(x_0))$ for all $m,n\in\mathbb{N}$ with $n\ge m$. It follows that the sequence $(f^n(x_0))_{n\in\mathbb{N}}$ is Cauchy. The fact that $(X,p)$ is complete yields the existence of $x^*\in X$ such that $\lim_{n\to\infty}p(f^n(x_0),f^n(x_0))=\lim_{n\to\infty}p(f^n(x_0),x^*)=p(x^*,x^*)$.

Next we show that $x^*\in\downarrow x_0$. It is clear that

$$p(x^*,x_0)-p(x^*,x^*)\le p\bigl(x^*,f^n(x_0)\bigr)+p\bigl(f^n(x_0),x_0\bigr)-p\bigl(f^n(x_0),f^n(x_0)\bigr)-p(x^*,x^*)=p\bigl(x^*,f^n(x_0)\bigr)-p(x^*,x^*)$$

since $p(f^n(x_0),x_0)=p(f^n(x_0),f^n(x_0))$ for all $n\in\mathbb{N}$. By the fact that $\lim_{n\to\infty}p(f^n(x_0),x^*)=p(x^*,x^*)$, we conclude that $p(x^*,x_0)=p(x^*,x^*)$ and, thus, that $x^*\sqsubseteq_p x_0$.

Now we prove that $x^*$ is a fixed point of $f$. On the one hand, we have that

$$p\bigl(x^*,f(x^*)\bigr)-p\bigl(f(x^*),f(x^*)\bigr)\le p\bigl(x^*,f^n(x_0)\bigr)+p\bigl(f^n(x_0),f(x^*)\bigr)-p\bigl(f^n(x_0),f^n(x_0)\bigr)-p\bigl(f(x^*),f(x^*)\bigr)$$

for all $n\in\mathbb{N}$. On the other hand, we have that $\lim_{n\to\infty}p(f^n(x_0),f^n(x_0))=\lim_{n\to\infty}p(f^n(x_0),x^*)$ and, by the $p^s$-continuity of $f$, that $\lim_{n\to\infty}p(f^n(x_0),f(x^*))=p(f(x^*),f(x^*))$. Thus, we deduce that $p(x^*,f(x^*))=p(f(x^*),f(x^*))$. So, we obtain that $f(x^*)\sqsubseteq_p x^*$.

Moreover, we have that

$$p\bigl(x^*,f(x^*)\bigr)-p(x^*,x^*)\le p\bigl(x^*,f^n(x_0)\bigr)+p\bigl(f^n(x_0),f(x^*)\bigr)-p\bigl(f^n(x_0),f^n(x_0)\bigr)-p(x^*,x^*)$$

for all $n\in\mathbb{N}$. Again, we have that $\lim_{n\to\infty}p(f^n(x_0),x^*)=p(x^*,x^*)$ and, by the $p^s$-continuity of $f$, that $\lim_{n\to\infty}p(f^n(x_0),f^n(x_0))=\lim_{n\to\infty}p(f^n(x_0),f(x^*))$. Thus we deduce that $p(x^*,f(x^*))=p(x^*,x^*)$. Whence we obtain that $x^*\sqsubseteq_p f(x^*)$.

We conclude from $f(x^*)\sqsubseteq_p x^*\sqsubseteq_p f(x^*)$ that $f(x^*)=x^*$. □

The next examples show that neither the $p^s$-continuity nor the monotonicity of the mapping can be omitted from the statement of Theorem 9.

Example 10 Consider the partial metric $p:\mathbb{R}^+\times\mathbb{R}^+\to\mathbb{R}^+$ given by $p(x,y)=\max\{x,y\}$. It is clear that the partial metric space $(\mathbb{R}^+,p)$ is complete and that $p^s(x,y)=|y-x|$ for all $x,y\in\mathbb{R}^+$. Moreover, we have that $x\sqsubseteq_p y \Leftrightarrow y\le x$, where $\le$ stands for the usual order on $\mathbb{R}^+$. Define the mapping $f:\mathbb{R}^+\to\mathbb{R}^+$ by

$$f(x)=\begin{cases}2x+1 & \text{if } x>1,\\ x+1 & \text{if } x\le 1\end{cases}$$

for all $x\in\mathbb{R}^+$. Then we have that $f$ is monotone and that $1=f(0)\sqsubseteq_p 0$. It is easy to check that $f$ is not $p^s$-continuous: $\mathcal{T}(p^s)$ is the usual topology of $\mathbb{R}^+$, and $f$ has a jump discontinuity at $x=1$. Moreover, it is clear that $f$ has no fixed points.

Example 11 Let $X=\{0,1\}$. Consider the restriction of the partial metric $p$ defined in Example 10 to the set $X$, and denote it again by $p$. Of course the partial metric space $(X,p)$ is complete. Now, define the mapping $f:X\to X$ by $f(0)=1$ and $f(1)=0$. It follows easily that $f$ is $p^s$-continuous. Clearly, $1=f(0)\sqsubseteq_p 0$. Moreover, $f$ is not monotone, since $1\sqsubseteq_p 0$ but $f(1)=0\not\sqsubseteq_p f(0)=1$. Furthermore, $f$ has no fixed points.

3 Asymptotic complexity analysis of algorithms

3.1 Preliminaries

In computer science, the complexity analysis of an algorithm is based on determining mathematically the quantity of resources needed by the algorithm to solve the problem for which it has been designed. A typical resource, playing a central role in complexity analysis, is the running time of computing. Since there are often many algorithms to solve the same problem, one objective of complexity analysis is to assess which of them is faster when large inputs are considered. To this end, one compares their running times. This is usually done by means of asymptotic analysis, in which the running time of an algorithm is described by a function $T:\mathbb{N}\to(0,\infty]$ such that $T(n)$ represents the time taken by the algorithm to solve the problem under consideration when the input has size $n$. Let us denote by $\mathcal{RT}$ the set of all functions from $\mathbb{N}$ into $(0,\infty]$. Of course the running time of an algorithm depends not only on the input size $n$, but also on the particular input of size $n$ (and the distribution of the data). Thus the running time of an algorithm differs across instances of input data of the same size $n$. As a consequence, it is necessary to distinguish three possible behaviors in the complexity analysis of algorithms: the so-called best case, worst case and average case. The best case and the worst case for an input of size $n$ are defined by the minimum and the maximum running time over all inputs of size $n$, respectively. The average case for an input of size $n$ is defined by the expected value or average over all inputs of size $n$.

In general, to determine exactly the function which describes the running time of an algorithm is an arduous task. However, in most situations it is useful to know the running time of an algorithm in an 'approximate' way rather than in an exact one. For this reason, the asymptotic complexity analysis of algorithms is focused on obtaining the 'approximate' running time of an algorithm, and this is done by means of the $\Theta$-notation. Let us recall how the $\Theta$-notation allows one to achieve such a goal.

If $f\in\mathcal{RT}$ denotes the running time of an algorithm, then the statement $f\in O(g)$, where $g\in\mathcal{RT}$, means that there exist $n_0\in\mathbb{N}$ and $c\in\mathbb{R}^+$ such that $f(n)\le c\,g(n)$ for all $n\in\mathbb{N}$ with $n\ge n_0$ (here $\le$ stands for the usual order on the extended real line). So, when $g$ is an asymptotic upper bound of $f$ in this sense, $g$ gives 'approximate' information about the running time of the algorithm.

Sometimes, in the analysis of the complexity of an algorithm, it is useful to assess an asymptotic lower bound of the running time. In this case the $\Omega$-notation plays a central role: the statement $f\in\Omega(g)$ means that there exist $n_0\in\mathbb{N}$ and $c>0$ such that $c\,g(n)\le f(n)$ for all $n\in\mathbb{N}$ with $n\ge n_0$. Of course, and similarly to the $O$-notation case, when the exact running time $f$ is unknown, the function $g$ yields 'approximate' information about the running time of the algorithm, in the sense that the time the algorithm takes to solve the problem is bounded below by $g$.

It is clear that the best situation, when the complexity of an algorithm is discussed, matches the case in which we can find a function $g\in\mathcal{RT}$ such that the running time $f$ satisfies the condition $f\in O(g)\cap\Omega(g)$, denoted by $f\in\Theta(g)$, because in this case we obtain a 'tight' asymptotic bound of $f$ and, thus, complete asymptotic information about the time taken by the algorithm to solve the problem under consideration. From now on, we will say that $f$ belongs to the asymptotic complexity class of $g$ whenever $f\in\Theta(g)$.

In the light of the preceding discussion, determining the running time of an algorithm from an asymptotic complexity analysis viewpoint consists of obtaining its asymptotic complexity class.

For a fuller treatment of asymptotic complexity analysis of algorithms, we refer the reader to [44, 45].

3.2 The complexity space approach

In 1995, Schellekens introduced a topological foundation for the asymptotic complexity analysis of algorithms [46]. The aforementioned foundation is based on the notions of quasi-metric and complexity space.

Let us recall that, following [47], a quasi-metric on a nonempty set $X$ is a function $d:X\times X\to\mathbb{R}^+$ such that for all $x,y,z\in X$:

(i) $d(x,y)=d(y,x)=0 \Leftrightarrow x=y$;

(ii) $d(x,y)\le d(x,z)+d(z,y)$.

A quasi-metric space is a pair (X,d) such that X is a nonempty set and d is a quasi-metric on X.

Each quasi-metric $d$ on $X$ generates a $T_0$ topology $\mathcal{T}(d)$ on $X$ which has as a base the family of open $d$-balls $\{B_d(x,\varepsilon): x\in X,\ \varepsilon>0\}$, where $B_d(x,\varepsilon)=\{y\in X: d(x,y)<\varepsilon\}$ for all $x\in X$ and $\varepsilon>0$.

Given a quasi-metric $d$ on $X$, the function $d^s:X\times X\to\mathbb{R}^+$ defined by $d^s(x,y)=\max\{d(x,y),d(y,x)\}$ is a metric on $X$.

A quasi-metric space $(X,d)$ is called bicomplete if the metric space $(X,d^s)$ is complete.

Let us recall that the complexity space is the pair $(\mathcal{C},d_{\mathcal{C}})$, where

$$\mathcal{C}=\left\{f\in\mathcal{RT}: \sum_{n=1}^{\infty}2^{-n}\,\frac{1}{f(n)}<\infty\right\}$$

and $d_{\mathcal{C}}$ is the bicomplete quasi-metric on $\mathcal{C}$ defined by

$$d_{\mathcal{C}}(f,g)=\sum_{n=1}^{\infty}2^{-n}\max\left\{\frac{1}{g(n)}-\frac{1}{f(n)},\,0\right\}.$$

Obviously, we adopt the convention that $\frac{1}{\infty}=0$.
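For illustration, $d_{\mathcal{C}}$ is easy to approximate numerically; the sketch below (ours) truncates the rapidly convergent series at $N$ terms, with two hypothetical complexity functions.

```python
import math

def d_C(f, g, N=60):
    """Truncated d_C(f,g) = sum_{n>=1} 2^{-n} * max{1/g(n) - 1/f(n), 0}."""
    return sum(2.0 ** -n * max(1.0 / g(n) - 1.0 / f(n), 0.0)
               for n in range(1, N + 1))

f = lambda n: max(1.0, n * math.log2(n))  # running time of a program P (hypothetical)
g = lambda n: float(n * n)                # running time of a program Q (hypothetical)

print(d_C(f, g))  # 0.0: certifies f(n) <= g(n) for all n, hence f ∈ O(g)
print(d_C(g, f))  # > 0: g is not below f pointwise
```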

According to [46], from a complexity analysis point of view, it is possible to associate with each algorithm a function of $\mathcal{C}$ that represents, as a function of the size of the input data, its running time of computing. Because of this, the elements of $\mathcal{C}$ are called complexity functions. Moreover, given two functions $f,g\in\mathcal{C}$, the numerical value $d_{\mathcal{C}}(f,g)$ (the complexity distance from $f$ to $g$) can be interpreted as the relative progress made in lowering the complexity by replacing any program $P$ with complexity function $f$ by any program $Q$ with complexity function $g$. Therefore, if $f\neq g$, the condition $d_{\mathcal{C}}(f,g)=0$ can be read as '$f$ is at least as efficient as $g$ on all inputs'. In fact, we have that $d_{\mathcal{C}}(f,g)=0 \Leftrightarrow f(n)\le g(n)$ for all $n\in\mathbb{N}$ and, thus, $d_{\mathcal{C}}(f,g)=0$ implies $f\in O(g)$ (respectively, $d_{\mathcal{C}}(g,f)=0$ implies $f\in\Omega(g)$).

The applicability of the complexity space to the asymptotic complexity analysis of algorithms was illustrated by Schellekens in [46].

In particular, he introduced a method, based on a fixed point theorem for functionals from the complexity space into itself, to provide the asymptotic upper bound of those algorithms whose running time satisfies a recurrence equation of Divide and Conquer type. Let us recall the aforenamed method.

A Divide and Conquer algorithm solves a problem of size $n$ ($n\in\mathbb{N}$) by splitting it into $a$ subproblems of size $\frac{n}{b}$, for some constants $a,b\in\mathbb{N}$ with $a,b>1$, and solving them separately by the same algorithm. After obtaining the solutions of the subproblems, the algorithm combines them to give a global solution to the original problem. The recursive structure of a Divide and Conquer algorithm leads to a recurrence equation for its running time. In many cases the running time of a Divide and Conquer algorithm is the solution to a Divide and Conquer recurrence equation of the form

$$T(n)=\begin{cases}c & \text{if } n=1,\\ a\,T\!\left(\frac{n}{b}\right)+h(n) & \text{if } n\in\mathbb{N}_b,\end{cases}\tag{1}$$

where $\mathbb{N}_b=\{b^k: k\in\mathbb{N}\}$, $c>0$ denotes the complexity of the base case (i.e., the problem size is small enough that the solution takes constant time), and $h(n)$ represents the time taken by the algorithm to divide the original problem into $a$ subproblems and to combine their solutions into a unique one ($h\in\mathcal{C}$ and $h(n)<\infty$ for all $n\in\mathbb{N}$).

Notice that for Divide and Conquer algorithms with running time satisfying the recurrence equation (1), it is typically sufficient to obtain the complexity on inputs of size $n$, where $n$ ranges over the set $\mathbb{N}_b$ [45, 46].
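As a simple numerical companion (our sketch, with hypothetical constants), the recurrence (1) can be unfolded directly on $\mathbb{N}_b$; the instance below is Mergesort-like, anticipating the example discussed further on.

```python
def T(n, a=2, b=2, c=1.0, h=lambda n: n / 2):
    """Unfolds recurrence (1): T(1) = c and T(n) = a*T(n/b) + h(n) on N_b."""
    if n == 1:
        return c
    assert n % b == 0, "defined only on N_b = {b^k : k in N}"
    return a * T(n // b, a, b, c, h) + h(n)

# Mergesort-like instance (a = b = 2, h(n) = n/2, c = 1): on N_2 the unfolding
# gives T(n) = c*n + (n/2)*log2(n).
for k in range(1, 6):
    n = 2 ** k
    assert T(n) == 1.0 * n + (n / 2) * k
```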

Typical examples of algorithms whose running time can be obtained by means of the recurrence (1) are Quicksort (best case behavior) and Mergesort (all behaviors); see [45] for a detailed discussion of both algorithms.

The method introduced by Schellekens allows one to show that the Divide and Conquer recurrence equation (1) has a unique solution and, in addition, to provide an asymptotic upper bound of such a solution. To this end, denote by $\mathcal{C}_{b,c}$ the subset of $\mathcal{C}$ given by

$$\mathcal{C}_{b,c}=\{f\in\mathcal{C}: f(1)=c \text{ and } f(n)=\infty \text{ for all } n\in\mathbb{N}\setminus\mathbb{N}_b \text{ with } n>1\},$$

and by $d_{\mathcal{C}}|_{\mathcal{C}_{b,c}}$ the restriction of $d_{\mathcal{C}}$ to $\mathcal{C}_{b,c}$.

We have that the quasi-metric space $(\mathcal{C}_{b,c},d_{\mathcal{C}}|_{\mathcal{C}_{b,c}})$ is bicomplete, since the quasi-metric space $(\mathcal{C},d_{\mathcal{C}})$ is bicomplete [48] and the set $\mathcal{C}_{b,c}$ is closed in $(\mathcal{C},d_{\mathcal{C}}^s)$.

Next we associate with the recurrence equation (1) a functional $\Phi_T:\mathcal{C}_{b,c}\to\mathcal{C}_{b,c}$ defined as follows:

$$\Phi_T(f)(n)=\begin{cases}c & \text{if } n=1,\\ \infty & \text{if } n\notin\mathbb{N}_b \text{ and } n>1,\\ a\,f\!\left(\frac{n}{b}\right)+h(n) & \text{otherwise}.\end{cases}\tag{2}$$

Of course a complexity function in $\mathcal{C}_{b,c}$ is a solution to the recurrence equation (1) if and only if it is a fixed point of the functional $\Phi_T$. It was proved in [46] that

$$d_{\mathcal{C}}|_{\mathcal{C}_{b,c}}\bigl(\Phi_T(f),\Phi_T(g)\bigr)\le \frac{1}{a}\,d_{\mathcal{C}}|_{\mathcal{C}_{b,c}}(f,g)\tag{3}$$

for all f,g C b , c . So, by Banach’s fixed point theorem for metric spaces, we deduce that the functional Φ T : C b , c C b , c has a unique fixed point and, thus, the recurrence equation (1) has a unique solution.

In order to obtain the asymptotic upper bound of the solution to the recurrence equation (1), Schellekens introduced a special class of functionals known as improvers.

Given $\mathcal{C}'\subseteq\mathcal{C}$, a functional $\Phi:\mathcal{C}'\to\mathcal{C}'$ is called an improver with respect to a function $f\in\mathcal{C}'$ provided that $\Phi$ is monotone and $\Phi(f)(n)\le f(n)$ for all $n\in\mathbb{N}$.

Taking into account the exposed facts, the following result was stated in [46].

Theorem 12 The Divide and Conquer recurrence equation (1) has a unique solution $f_T$ in $\mathcal{C}_{b,c}$. Moreover, if the monotone functional $\Phi_T$ associated with (1), and given by (2), is an improver with respect to some function $g\in\mathcal{C}_{b,c}$, then the solution to the recurrence equation satisfies $f_T\in O(g)$.

In [46] Schellekens discussed the complexity class of Mergesort in order to illustrate the utility of Theorem 12. The Mergesort running time (average case behavior) satisfies the following particular case of the recurrence equation (1):

$$T(n)=\begin{cases}c & \text{if } n=1,\\ 2T\!\left(\frac{n}{2}\right)+\frac{n}{2} & \text{if } n\in\mathbb{N}_2.\end{cases}\tag{4}$$

Of course Theorem 12 provides that the recurrence equation (4) has a unique solution $f_{T_M}\in\mathcal{C}_{2,c}$. In addition, it is not hard to check that the functional $\Phi_T$ given by (2) and induced by the recurrence equation (4) is an improver with respect to the complexity function $g_k\in\mathcal{C}_{2,c}$, with $k>0$, if and only if $k\ge\frac{1}{2}$, where

$$g_k(n)=\begin{cases}c & \text{if } n=1,\\ k\,n\log_2(n) & \text{if } n\in\mathbb{N}_2.\end{cases}$$

Therefore, by Theorem 12, we conclude that $f_{T_M}\in O(g_{1/2})$. Hence the complexity function $g_{1/2}$, or, equivalently, $O(n\log_2(n))$, gives an asymptotic upper bound of the running time of the aforenamed algorithm.
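A quick numerical cross-check (ours, with a hypothetical value of $c$) confirms the fixed point behind this bound: the unique solution of (4) on $\mathbb{N}_2$ is $f_{T_M}(n)=c\,n+\frac{n}{2}\log_2(n)$, which indeed lies in $O(n\log_2 n)$.

```python
import math

def check_mergesort_solution(c=3.0, max_pow=15):
    # Candidate solution of (4) on N_2 (closed form obtained by unfolding).
    f = lambda n: c * n + 0.5 * n * math.log2(n)
    assert f(1) == c
    for j in range(1, max_pow + 1):
        n = 2 ** j
        assert abs(f(n) - (2 * f(n // 2) + n / 2)) < 1e-9  # recurrence (4) holds

check_mergesort_solution()
```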

Furthermore, it must be stressed that Schellekens provided the asymptotic lower bound, and thus the asymptotic complexity class, of the running time of Mergesort (average case behavior) in [46]. Concretely, it was obtained that such a running time belongs to $\Omega(n\log_2(n))$. Nevertheless, the asymptotic lower bound was obtained by applying standard arguments that are not based on the use of fixed point techniques. So Schellekens proved that the Mergesort running time (average case behavior) belongs to the asymptotic complexity class $\Theta(n\log_2(n))$, but the fixed point technique was used only to provide the asymptotic upper bound.

4 Asymptotic complexity analysis of algorithms and partial metric spaces

Motivated by the usefulness of partial metric spaces in program verification, we wonder whether this kind of generalized metric space is also useful in asymptotic complexity analysis in the spirit of Schellekens. Of course it seems natural to attempt to apply Theorem 3 (in Section 1) in order to obtain a fixed point technique for asymptotic complexity analysis of algorithms based on the use of partial metric spaces. However, the following reasoning shows that the aforementioned result cannot be applied for this purpose. To this end, let us recall some additional useful concepts about partial metrics and complexity spaces.

Following [49], the set $\mathcal{C}$ can be endowed with a partial metric $p_{\mathcal{C}}$ defined for all $f,g\in\mathcal{C}$ by

$$p_{\mathcal{C}}(f,g)=\sum_{n=1}^{\infty}2^{-n}\max\left\{\frac{1}{f(n)},\frac{1}{g(n)}\right\}.$$

Obviously, we again make the assumption that $\frac{1}{\infty}=0$.

Notice that although, in principle, several partial metrics could be defined on $\mathcal{C}$ with the aim of developing a mathematical foundation of asymptotic complexity analysis by means of partial metric spaces, it seems reasonable to consider $p_{\mathcal{C}}$ as a (partial metric) complexity distance for the following reasons.

On the one hand, it allows one to retrieve the Schellekens (quasi-metric) complexity distance $d_{\mathcal{C}}$. Indeed, the following correspondence between quasi-metric and partial metric spaces was stated in [3].

Proposition 13 If $(X,p)$ is a partial metric space, then the function $d_p:X\times X\to\mathbb{R}^+$ defined by $d_p(x,y)=p(x,y)-p(x,x)$ is a quasi-metric such that $\mathcal{T}(p)=\mathcal{T}(d_p)$.

In the light of Proposition 13, we have that the partial metric $p_{\mathcal{C}}$ induces the quasi-metric $d_{p_{\mathcal{C}}}$ on $\mathcal{C}$ given by

$$d_{p_{\mathcal{C}}}(f,g)=p_{\mathcal{C}}(f,g)-p_{\mathcal{C}}(f,f)$$

for all $f,g\in\mathcal{C}$. Moreover, it is not hard to see that

$$p_{\mathcal{C}}(f,g)-p_{\mathcal{C}}(f,f)=d_{\mathcal{C}}(f,g)$$

for all $f,g\in\mathcal{C}$ and, thus, that $d_{p_{\mathcal{C}}}(f,g)=d_{\mathcal{C}}(f,g)$ for all $f,g\in\mathcal{C}$.
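The identity follows termwise from $\max\{u,v\}-u=\max\{v-u,0\}$, and it is easy to probe numerically (our sketch, truncated series, hypothetical functions):

```python
def p_C(f, g, N=60):
    """Truncated p_C(f,g) = sum 2^{-n} max{1/f(n), 1/g(n)}."""
    return sum(2.0 ** -n * max(1.0 / f(n), 1.0 / g(n)) for n in range(1, N + 1))

def d_C(f, g, N=60):
    """Truncated d_C(f,g) = sum 2^{-n} max{1/g(n) - 1/f(n), 0}."""
    return sum(2.0 ** -n * max(1.0 / g(n) - 1.0 / f(n), 0.0) for n in range(1, N + 1))

f = lambda n: float(n)
g = lambda n: float(n * n)
assert abs((p_C(f, g) - p_C(f, f)) - d_C(f, g)) < 1e-12  # d_{p_C} = d_C
```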

On the other hand, the order $\sqsubseteq_{p_{\mathcal{C}}}$ induced by the partial metric $p_{\mathcal{C}}$ is exactly the pointwise order on $\mathcal{C}$. Indeed, given $f,g\in\mathcal{C}$, we have that

$$f\sqsubseteq_{p_{\mathcal{C}}}g \Leftrightarrow p_{\mathcal{C}}(f,g)=p_{\mathcal{C}}(f,f) \Leftrightarrow f(n)\le g(n)$$

for all $n\in\mathbb{N}$. Whence we obtain that $f\sqsubseteq_{p_{\mathcal{C}}}g$ implies $f\in O(g)$. Observe that the last implication recovers the information yielded by the condition $d_{\mathcal{C}}(f,g)=0$ in the Schellekens approach.

Furthermore, the completeness of the partial metric space (C, p C ) is guaranteed by the bicompleteness of the complexity space (C, d C ) and the following result which was proved in [49].

Proposition 14 If $(X,p)$ is a partial metric space, then the following assertions are equivalent:

(1) $(X,p)$ is complete.

(2) $(X,d_p)$ is bicomplete.

Since the partial metric space $(\mathcal{C},p_{\mathcal{C}})$ is complete, so is the partial metric space $(\mathcal{C}_{b,c},p_{\mathcal{C}}|_{\mathcal{C}_{b,c}})$.

Now, for the purpose of applying Theorem 3 to discuss the complexity of Divide and Conquer algorithms whose running time satisfies the Divide and Conquer recurrence equation (1), suppose that there exists $s\in[0,1[$ such that

$$p_{\mathcal{C}}|_{\mathcal{C}_{b,c}}\bigl(\Phi_T(f),\Phi_T(g)\bigr)\le s\,p_{\mathcal{C}}|_{\mathcal{C}_{b,c}}(f,g)$$

for all $f,g\in\mathcal{C}_{b,c}$, where $\Phi_T$ is the functional given by (2).

Of course, we only consider the case $s\in\,]0,1[$, because it is evident that the case $s=0$ gives a contradiction.

Take $f,g\in\mathcal{C}_{b,c}$ such that $f(n)=2c$ and $g(n)=2(c+1)$ for all $n\in\mathbb{N}_b$. It is clear that

$$p_{\mathcal{C}}|_{\mathcal{C}_{b,c}}\bigl(\Phi_T(f),\Phi_T(g)\bigr)=\frac{1}{2c}+\sum_{n=2}^{\infty}2^{-b^{n}}\max\left\{\frac{1}{2ac+h(b^{n})},\frac{1}{2a(c+1)+h(b^{n})}\right\}.$$

Moreover,

$$p_{\mathcal{C}}|_{\mathcal{C}_{b,c}}(f,g)=\frac{1}{2c}+\sum_{n=2}^{\infty}2^{-b^{n}}\max\left\{\frac{1}{f(b^{n})},\frac{1}{g(b^{n})}\right\}\le\sum_{n=1}^{\infty}2^{-n}\max\left\{\frac{1}{2c},\frac{1}{2(c+1)}\right\}=\frac{1}{2c}.$$

Applying our hypothesis, we obtain that

$$\frac{1}{2c}\le p_{\mathcal{C}}|_{\mathcal{C}_{b,c}}\bigl(\Phi_T(f),\Phi_T(g)\bigr)\le s\,p_{\mathcal{C}}|_{\mathcal{C}_{b,c}}(f,g)\le s\,\frac{1}{2c}.$$

As a result we deduce that $1\le s<1$, which is a contradiction.

Consequently, when we consider the partial metric $p_{\mathcal{C}}$ as a complexity distance instead of the original quasi-metric $d_{\mathcal{C}}$, Theorem 3 cannot be applied to the asymptotic complexity analysis of those algorithms whose running time leads to the Divide and Conquer recurrence equation (1).
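The obstruction can also be observed numerically. In the sketch below (ours, with the hypothetical instance $a=b=2$, $c=1$, $h(n)=n$), the self-distance terms of $p_{\mathcal{C}}$ never shrink under $\Phi_T$, so the ratio $p_{\mathcal{C}}|(\Phi_T(f),\Phi_T(f))/p_{\mathcal{C}}|(f,f)$ exceeds $1$ for slow enough $f$, ruling out any contraction constant $s<1$.

```python
def p_self(entries):
    """p_C(f,f) for f in C_{2,c}: sum of weight/value over the finite entries."""
    return sum(w / v for w, v in entries)

def f_entries(m, c=1.0, K=6):
    """f in C_{2,c} with f(1) = c and f = m on N_2 = {2, 4, 8, ...}."""
    return [(0.5, c)] + [(2.0 ** -(2 ** k), m) for k in range(1, K + 1)]

def phi_entries(m, a=2.0, c=1.0, h=lambda n: float(n), K=6):
    """Phi_T(f) for the same f: c at n = 1, a*c + h(2) at n = 2, a*m + h(2^k) beyond."""
    return ([(0.5, c), (0.25, a * c + h(2))] +
            [(2.0 ** -(2 ** k), a * m + h(2.0 ** k)) for k in range(2, K + 1)])

for m in (10.0, 1e3, 1e6):
    print(m, p_self(phi_entries(m)) / p_self(f_entries(m)))
# The ratio tends to 1.125 > 1: no s in [0,1[ can satisfy the contraction condition.
```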

We want to point out that the above reasoning was first introduced in [50] in order to show the impossibility of using Theorem 3 to analyze Divide and Conquer recurrence equations. As a consequence, in the aforesaid reference a fixed point technique was introduced to discuss the complexity of algorithms via partial quasi-metrics (not partial metrics) and a few aspects of language theory; that technique differs from the one we introduce in the remainder of this section.

4.1 The fixed point technique for asymptotic complexity analysis based on partial metric spaces

Inspired by the impossibility of developing a fixed point technique for the asymptotic complexity analysis of algorithms based on the use of Theorem 3, we present a new fixed point technique that respects the spirit of the original Schellekens technique and whose foundation lies in the use of Theorems 4 and 9 in Section 2.

In the complexity analysis of algorithms, Divide and Conquer algorithms belong to the wider class of recursive algorithms. In many cases the running time of computing of the latter is the solution to the following recurrence equation:

$$T(n)=\begin{cases}c & \text{if } n=1,\\ a\,T(n-1)+h(n) & \text{if } n\ge 2,\end{cases}\tag{5}$$

where $c>0$, $a\ge 1$ and $h\in\mathcal{C}$ with $h(n)<\infty$ for all $n\in\mathbb{N}$.

Observe that the discussion of the asymptotic complexity of the Divide and Conquer algorithms introduced in Section 3.2 can be carried out from the recurrence equation (5). In fact, as discussed in Section 3.2, the running time of computing of the aforesaid algorithms is the solution to the recurrence equation

$$T(n)=\begin{cases}c & \text{if } n=1,\\ a\,T\!\left(\frac{n}{b}\right)+h(n) & \text{if } n\in\mathbb{N}_b.\end{cases}$$

Clearly, the preceding Divide and Conquer recurrence equation can be retrieved as a particular case of the recurrence equation (5). Indeed, the Divide and Conquer recurrence equation can be transformed into the following one:

$$S(m)=\begin{cases}c & \text{if } m=1,\\ a\,S(m-1)+r(m) & \text{if } m>1,\end{cases}\tag{6}$$

where $S(m)=T(b^{m-1})$ and $r(m)=h(b^{m-1})$ for all $m\in\mathbb{N}$.
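This change of variable is easy to validate numerically (our sketch, with the hypothetical constants $a=3$, $b=2$, $h(n)=n$):

```python
def T(n, a=3, b=2, c=1.0, h=lambda n: float(n)):
    """Divide and Conquer recurrence on N_b together with the base case n = 1."""
    return c if n == 1 else a * T(n // b, a, b, c, h) + h(n)

def S(m, a=3, c=1.0, r=lambda m: float(2 ** (m - 1))):
    """Recurrence (6) with r(m) = h(b^{m-1}) = 2^{m-1} for h(n) = n, b = 2."""
    return c if m == 1 else a * S(m - 1, a, c, r) + r(m)

for m in range(1, 10):
    assert T(2 ** (m - 1)) == S(m)   # S(m) = T(b^{m-1})
```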

The remainder of this section is devoted to introducing, by means of the partial metric space $(\mathcal{C},p_{\mathcal{C}})$, a new fixed point technique in the spirit of Schellekens for obtaining the asymptotic complexity class of those recursive algorithms whose running time is the solution to the recurrence equation (5).

4.1.1 The existence and uniqueness of solution

Consider the subset $\mathcal{C}_c$ of $\mathcal{C}$ given by

$$\mathcal{C}_c=\{f\in\mathcal{C}: f(1)=c\}.$$

Define the functional $\Psi_T:\mathcal{C}_c\to\mathcal{C}_c$ by

$$\Psi_T(f)(n)=\begin{cases}c & \text{if } n=1,\\ a\,f(n-1)+h(n) & \text{if } n\ge 2\end{cases}\tag{7}$$

for all $f\in\mathcal{C}_c$. It is clear that a complexity function in $\mathcal{C}_c$ is a solution to the recurrence equation (5) if and only if it is a fixed point of the functional $\Psi_T$.
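Computationally, iterating $\Psi_T$ pins the solution down on ever longer prefixes, since $\Psi_T(f)(n)$ depends on $f$ only through $f(n-1)$: after $k$ iterations the values at $n\le k+1$ no longer depend on the starting function. The sketch below (ours, with the hypothetical instance $a=2$, $c=1$, $h(n)=n$) illustrates this.

```python
def psi(f, a=2.0, c=1.0, h=lambda n: float(n)):
    """The functional (7): Psi_T(f)(1) = c, Psi_T(f)(n) = a*f(n-1) + h(n)."""
    return lambda n: c if n == 1 else a * f(n - 1) + h(n)

g = lambda n: 1.0          # arbitrary starting function in C_c
fk = g
for _ in range(10):        # ten iterations fix the first eleven values
    fk = psi(fk)

print([fk(n) for n in range(1, 8)])   # [1.0, 4.0, 11.0, 26.0, 57.0, 120.0, 247.0]
# These agree with the unique solution of T(n) = 2*T(n-1) + n, T(1) = 1.
```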

In order to prove the announced existence and uniqueness of the solution to the recurrence equation (5), we will need the following auxiliary result.

Lemma 15 Let $\Psi_T:\mathcal{C}_c\to\mathcal{C}_c$ be the functional given by (7) and let $f_{c,\infty}\in\mathcal{C}_c$ be the complexity function given by

$$f_{c,\infty}(n)=\begin{cases}c & \text{if } n=1,\\ \infty & \text{if } n\ge 2.\end{cases}$$

Then the following statements hold:

(1) $\Psi_T$ is monotone.

(2) $\Psi_T$ is continuous and $p^s$-continuous.

(3) $\Psi_T(f_{c,\infty})\sqsubseteq_{p_{\mathcal{C}_c}}f_{c,\infty}$.

Proof (1) Consider $f,g\in\mathcal{C}_c$ such that $f\sqsubseteq_{p_{\mathcal{C}_c}}g$. Then $f(n)\le g(n)$ for all $n\in\mathbb{N}$. Thus we have that $\Psi_T(f)(1)=\Psi_T(g)(1)=c$ and

$$\Psi_T(f)(n)=a\,f(n-1)+h(n)\le a\,g(n-1)+h(n)=\Psi_T(g)(n)$$

for all $n\in\mathbb{N}$ with $n\ge 2$. It follows that $\Psi_T(f)\sqsubseteq_{p_{\mathcal{C}_c}}\Psi_T(g)$.

(2) First of all we prove that $\Psi_T$ is continuous from $(\mathcal{C}_c,\mathcal{T}(p_{\mathcal{C}_c}))$ into itself. Indeed, for the purpose of contradiction, assume that there exists a sequence $(f_k)_{k\in\mathbb{N}}$ in $\mathcal{C}_c$ which converges to $f\in\mathcal{C}_c$ with respect to $\mathcal{T}(p_{\mathcal{C}_c})$ and, in addition, the sequence $(\Psi_T(f_k))_{k\in\mathbb{N}}$ does not converge to $\Psi_T(f)$ with respect to $\mathcal{T}(p_{\mathcal{C}_c})$. Then there exists $\varepsilon>0$ such that for each $n\in\mathbb{N}$ there is $m\ge n$ with

$$\varepsilon\le p_{\mathcal{C}_c}\bigl(\Psi_T(f_m),\Psi_T(f)\bigr)-p_{\mathcal{C}_c}\bigl(\Psi_T(f),\Psi_T(f)\bigr)$$

and

$$p_{\mathcal{C}_c}(f,f_m)-p_{\mathcal{C}_c}(f,f)<\varepsilon.$$

Then we have that

$$\varepsilon\le\sum_{n=2}^{\infty}2^{-n}\max\left\{\frac{1}{a f_m(n-1)+h(n)},\frac{1}{a f(n-1)+h(n)}\right\}-\sum_{n=2}^{\infty}2^{-n}\frac{1}{a f(n-1)+h(n)}\le\frac{1}{2a}\sum_{n=1}^{\infty}2^{-n}\left(\max\left\{\frac{1}{f_m(n)},\frac{1}{f(n)}\right\}-\frac{1}{f(n)}\right)=\frac{1}{2a}\bigl(p_{\mathcal{C}_c}(f,f_m)-p_{\mathcal{C}_c}(f,f)\bigr)<\varepsilon,$$

which is a contradiction. It follows that $\Psi_T$ is continuous from $(\mathcal{C}_c,\mathcal{T}(p_{\mathcal{C}_c}))$ into itself.

Next we prove that $\Psi_T$ is continuous from $(\mathcal{C}_c,\mathcal{T}(p^s_{\mathcal{C}_c}))$ into itself. To this end, assume that there exists a sequence $(f_k)_{k\in\mathbb{N}}$ in $\mathcal{C}_c$ which converges to $f\in\mathcal{C}_c$ with respect to $\mathcal{T}(p^s_{\mathcal{C}_c})$ and, in addition, the sequence $(\Psi_T(f_k))_{k\in\mathbb{N}}$ does not converge to $\Psi_T(f)$ with respect to $\mathcal{T}(p^s_{\mathcal{C}_c})$. By the continuity of $\Psi_T$, we deduce that necessarily there exists $\varepsilon>0$ such that for each $n\in\mathbb{N}$ there is $m\ge n$ with

$$\varepsilon\le p_{\mathcal{C}_c}\bigl(\Psi_T(f_m),\Psi_T(f)\bigr)-p_{\mathcal{C}_c}\bigl(\Psi_T(f_m),\Psi_T(f_m)\bigr)$$

and

$$p^s_{\mathcal{C}_c}(f,f_m)<\varepsilon.$$

It follows that

$$\varepsilon\le\sum_{n=2}^{\infty}2^{-n}\max\left\{\frac{1}{a f_m(n-1)+h(n)},\frac{1}{a f(n-1)+h(n)}\right\}-\sum_{n=2}^{\infty}2^{-n}\frac{1}{a f_m(n-1)+h(n)}\le\frac{1}{2a}\sum_{n=1}^{\infty}2^{-n}\left(\max\left\{\frac{1}{f_m(n)},\frac{1}{f(n)}\right\}-\frac{1}{f_m(n)}\right)=\frac{1}{2a}\bigl(p_{\mathcal{C}_c}(f,f_m)-p_{\mathcal{C}_c}(f_m,f_m)\bigr)\le\frac{1}{2a}\,p^s_{\mathcal{C}_c}(f,f_m)<\varepsilon,$$

which is a contradiction. Whence we conclude that $\Psi_T$ is continuous from $(\mathcal{C}_c,\mathcal{T}(p^s_{\mathcal{C}_c}))$ into itself, i.e., $\Psi_T$ is $p^s$-continuous.

(3) It is clear that $\Psi_T(f_{c,\infty})(1)=c=f_{c,\infty}(1)$. Moreover,

$$\Psi_T(f_{c,\infty})(2)=a\,f_{c,\infty}(1)+h(2)=ac+h(2)\le f_{c,\infty}(2)=\infty.$$

Finally,

$$\Psi_T(f_{c,\infty})(n)=\infty=f_{c,\infty}(n)$$

for all $n\in\mathbb{N}$ with $n>2$. Consequently, we obtain that $\Psi_T(f_{c,\infty})(n)\le f_{c,\infty}(n)$ for all $n\in\mathbb{N}$. Therefore $\Psi_T(f_{c,\infty})\sqsubseteq_{p_{\mathcal{C}_c}}f_{c,\infty}$. □

Remark 16 It should be stressed that, given a partial metric space $(X,p)$, mappings $f:X\to X$ that are both continuous and $p^s$-continuous are called properly continuous in [5]. So the functional $\Psi_T$, by assertion (2) in the statement of Lemma 15, is properly continuous.

According to [51], the quasi-metric space $(\mathcal{C}_c,d_{\mathcal{C}}|_{\mathcal{C}_c})$ is bicomplete. Hence, by Proposition 14, the partial metric space $(\mathcal{C}_c,p_{\mathcal{C}_c})$ is complete, where $p_{\mathcal{C}_c}$ denotes the restriction of $p_{\mathcal{C}}$ to $\mathcal{C}_c$.

Theorem 17 The recurrence equation (5) has a unique solution $f_T$ in $\mathcal{C}_c$.

Proof The completeness of $(\mathcal{C}_c,p_{\mathcal{C}_c})$ and Lemma 15 provide the conditions in the statement of Theorem 9. So $\Psi_T$ has a fixed point in $\downarrow f_{c,\infty}$. Since $\downarrow f_{c,\infty}=\mathcal{C}_c$ (every $g\in\mathcal{C}_c$ satisfies $g\sqsubseteq_{p_{\mathcal{C}_c}}f_{c,\infty}$), we obtain the existence of a fixed point of $\Psi_T$ in $\mathcal{C}_c$.

It remains to prove the uniqueness of the fixed point. To this end, assume that $f_T,g_T\in\mathcal{C}_c$ are solutions to the recurrence equation (5); we will prove by induction over $n$ that $f_T=g_T$. Since $f_T,g_T$ are solutions to the recurrence equation (5), they are fixed points of the functional $\Psi_T$, i.e., $\Psi_T(f_T)=f_T$ and $\Psi_T(g_T)=g_T$. Hence we have that

$$f_T(1)=\Psi_T(f_T)(1)=c=\Psi_T(g_T)(1)=g_T(1).$$

Now assume that $f_T(n)=g_T(n)$ for some $n\ge 1$. Then

$$f_T(n+1)=\Psi_T(f_T)(n+1)=a\,f_T(n)+h(n+1)=a\,g_T(n)+h(n+1)=\Psi_T(g_T)(n+1)=g_T(n+1).$$

Consequently, f T = g T and, thus, Ψ T has a unique fixed point in C c . Therefore the recurrence equation (5) has a unique solution in C c . □

4.1.2 The asymptotic complexity class of the solution

In the next result we obtain the announced method to provide the complexity class of an algorithm whose running time satisfies the recurrence equation (5). To this end, let us recall that, given $\mathcal{C}'\subseteq\mathcal{C}$, a monotone functional $\Phi:\mathcal{C}'\to\mathcal{C}'$ is called an improver with respect to a function $f\in\mathcal{C}'$ provided that $\Phi(f)\sqsubseteq_{p_{\mathcal{C}}}f$ (see Section 3.2). Furthermore, on account of [51], a monotone functional $\Phi:\mathcal{C}'\to\mathcal{C}'$ is said to be a worsener with respect to a function $f\in\mathcal{C}'$ provided that $f\sqsubseteq_{p_{\mathcal{C}}}\Phi(f)$.

Observe that, as discussed in [46], improvers can be interpreted computationally as transformations on programs such that iterative application of the transformation yields, from a complexity point of view, an improved program at each step of the iteration. Similarly, worseners correspond to transformations on programs such that iterative application yields a program worsened, from a complexity point of view, at each step of the iteration.

Theorem 18 Let $f_T\in\mathcal{C}_c$ be the unique solution to the recurrence equation (5). Then the following assertions hold:

(1) If the functional $\Psi_T$ associated with (5), and given by (7), is a worsener with respect to some complexity function $g\in\mathcal{C}_c$, then $f_T\in\Omega(g)$.

(2) If the functional $\Psi_T$ associated with (5), and given by (7), is an improver with respect to some complexity function $g\in\mathcal{C}_c$, then $f_T\in O(g)$.

Proof (1) Suppose that there exists $g\in\mathcal{C}_c$ such that $g\sqsubseteq_{p_{\mathcal{C}_c}}\Psi_T(g)$. Then, by Corollary 8, we deduce the existence of a fixed point $g_T$ of $\Psi_T$ such that $g_T\in\uparrow g$, i.e., $g\sqsubseteq_{p_{\mathcal{C}_c}}g_T$ and, thus, $g(n)\le g_T(n)$ for all $n\in\mathbb{N}$. So $g_T\in\Omega(g)$. Since $f_T$ is the unique fixed point of $\Psi_T$ in $\mathcal{C}_c$, we deduce that $f_T=g_T$ and, hence, that $f_T\in\Omega(g)$.

(2) Assume that there exists $g\in\mathcal{C}_c$ such that $\Psi_T(g)\sqsubseteq_{p_{\mathcal{C}_c}}g$. Then Theorem 9 gives the existence of a fixed point $g_T$ of $\Psi_T$ such that $g_T\in\downarrow g$, i.e., $g_T\sqsubseteq_{p_{\mathcal{C}_c}}g$ and hence $g_T(n)\le g(n)$ for all $n\in\mathbb{N}$. So $g_T\in O(g)$. The uniqueness of the fixed point of $\Psi_T$ in $\mathcal{C}_c$ allows us to deduce that $f_T=g_T$ and, thus, that $f_T\in O(g)$. □

Remark 19 Notice that Theorem 18 indeed yields the complexity class of algorithms whose running time satisfies the recurrence equation (5): whenever there exist $l\in\mathcal{C}_c$, $r,t>0$ and $n_0\in\mathbb{N}$ such that $g(n)=r\,l(n)$ and $k(n)=t\,l(n)$ for all $n>n_0$ and, besides, $\Psi_T$ is an improver with respect to $g$ and a worsener with respect to $k$, then $f_T\in\Theta(l)$.

4.2 Analyzing the running time of computing of two examples

The aim of this section is twofold. On the one hand, we show that the method developed in Section 4.1 is useful for analyzing the asymptotic complexity of recursive algorithms. On the other hand, in order to validate the new results, we retrieve, by means of their application, the complexity classes of two well-known algorithms in the literature.

Typical examples of algorithms whose running time of computing is the solution to the recurrence equation (5) are Quicksort (worst case behavior) and Hanoi.

In particular, the running time of computing of Quicksort is the solution to the recurrence equation given as follows:

$$T(n)=\begin{cases}c & \text{if } n=1,\\ T(n-1)+j\,n & \text{if } n\ge 2,\end{cases}\tag{8}$$

with $j>0$ and where $c$ is the time taken by the algorithm in the base case.

Regarding Hanoi, under the uniform cost criterion assumption, its running time of computing is the solution to the recurrence equation given as follows:

$$T(n)=\begin{cases}c & \text{if } n=1,\\ 2T(n-1)+d & \text{if } n\ge 2,\end{cases}\tag{9}$$

with $c,d>0$ and where, again, $c$ represents the time taken by the algorithm to solve the base case. Note that it does not make sense to distinguish three possible running time behaviors for Hanoi, since the distribution of the input data is always the same for each size $n$.

For a deeper discussion about Quicksort and Hanoi, we refer the reader to [44, 45] and [44, 52], respectively.

Next we discuss the running time of computing of the aforesaid algorithms through our results.

Corollary 20 The running time of computing of Quicksort (worst case behavior) is in the complexity class $\Theta(n^2)$.

Proof The running time of computing of Quicksort (worst case behavior) is provided by the solution to the recurrence equation (8). Theorem 17 guarantees the existence and uniqueness of such a solution. Denote it by $f_{T_Q}$.

Consider the functional $\Psi_{T_Q}$ given in (7) and induced by the recurrence equation (8), i.e.,

$$\Psi_{T_Q}(f)(n)=\begin{cases}c & \text{if } n=1,\\ f(n-1)+j\,n & \text{if } n\ge 2\end{cases}\tag{10}$$

for all $f\in\mathcal{C}_c$.

Next we provide the asymptotic upper bound of $f_{T_Q}$. With this aim, define the complexity function $h_r$ as follows:

$$h_r(n)=\begin{cases}c & \text{if } n=1,\\ r\,n^2 & \text{if } n\ge 2,\end{cases}$$

where $r>0$. Then it is not hard to see that $\Psi_{T_Q}$ is an improver with respect to $h_r\in\mathcal{C}_c$ (i.e., $\Psi_{T_Q}(h_r)\sqsubseteq_{p_{\mathcal{C}_c}}h_r$) if and only if $r\ge\max\{\frac{3j}{5},\frac{c}{4}+\frac{j}{2}\}$. It follows, by statement (2) in Theorem 18, that $f_{T_Q}\in O(h_{\max\{\frac{3j}{5},\frac{c}{4}+\frac{j}{2}\}})$.

In order to provide the asymptotic complexity class, it remains to yield an asymptotic lower bound of $f_{T_Q}$. Now it is routine to check that $\Psi_{T_Q}$ is a worsener with respect to the complexity function $h_s$ (i.e., $h_s\sqsubseteq_{p_{\mathcal{C}_c}}\Psi_{T_Q}(h_s)$) if and only if $s\le\min\{\frac{j}{2},\frac{c}{4}+\frac{j}{2}\}$, whence we deduce, by statement (1) in Theorem 18, that $f_{T_Q}\in\Omega(h_{\min\{\frac{j}{2},\frac{c}{4}+\frac{j}{2}\}})$.

Therefore we obtain that $f_{T_Q}\in O(h_{\max\{\frac{3j}{5},\frac{c}{4}+\frac{j}{2}\}})\cap\Omega(h_{\min\{\frac{j}{2},\frac{c}{4}+\frac{j}{2}\}})$. Whence we deduce, by Remark 19, that $f_{T_Q}\in\Theta(n^2)$, which is in accordance with the Quicksort (worst case behavior) asymptotic complexity class that can be found in the literature [44, 45]. □
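The improver/worsener thresholds used in the proof above can be double-checked numerically (our sketch, with hypothetical values of $j$ and $c$):

```python
def check_quicksort_bounds(j=2.0, c=5.0, N=2000):
    h = lambda r, n: c if n == 1 else r * n * n   # the family h_r
    psi = lambda r, n: h(r, n - 1) + j * n        # Psi_{T_Q}(h_r)(n) for n >= 2
    r_hi = max(3 * j / 5, c / 4 + j / 2)          # improver threshold for r
    s_lo = min(j / 2, c / 4 + j / 2)              # worsener threshold for s
    assert all(psi(r_hi, n) <= h(r_hi, n) + 1e-9 for n in range(2, N))  # improver
    assert all(h(s_lo, n) <= psi(s_lo, n) + 1e-9 for n in range(2, N))  # worsener

check_quicksort_bounds()
```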

Corollary 21 The running time of computing of Hanoi, under the uniform cost criterion, is in the complexity class $\Theta(2^n)$.

Proof The running time of computing of Hanoi, under the uniform cost criterion, is provided by the solution to the recurrence equation (9). Theorem 17 guarantees the existence and uniqueness of such a solution. Denote it by $f_{T_H}$.

Consider the functional $\Psi_{T_H}$ given in (7) and induced by the recurrence equation (9), i.e.,

$$\Psi_{T_H}(f)(n)=\begin{cases}c & \text{if } n=1,\\ 2f(n-1)+d & \text{if } n\ge 2\end{cases}\tag{11}$$

for all $f\in\mathcal{C}_c$.

Next we provide the asymptotic upper bound of $f_{T_H}$. With this aim, define the complexity function $h_r$ as follows:

$$h_r(n)=\begin{cases}c & \text{if } n=1,\\ r\,(2^n-1) & \text{if } n\ge 2,\end{cases}$$

where $r>0$. It is not hard to check that $\Psi_{T_H}$ is an improver with respect to $h_r$ (i.e., $\Psi_{T_H}(h_r)\sqsubseteq_{p_{\mathcal{C}_c}}h_r$) if and only if $r\ge\max\{d,\frac{2c+d}{3}\}$. It follows, by statement (2) in Theorem 18, that $f_{T_H}\in O(h_{\max\{d,\frac{2c+d}{3}\}})$.

Next we provide an asymptotic lower bound of $f_{T_H}$. It is routine to check that $\Psi_{T_H}$ is a worsener with respect to the complexity function $h_s$ (i.e., $h_s\sqsubseteq_{p_{\mathcal{C}_c}}\Psi_{T_H}(h_s)$) if and only if $s\le\min\{d,\frac{2c+d}{3}\}$, whence we deduce, by statement (1) in Theorem 18, that $f_{T_H}\in\Omega(h_{\min\{d,\frac{2c+d}{3}\}})$.

Therefore we obtain that $f_{T_H}\in O(h_{\max\{d,\frac{2c+d}{3}\}})\cap\Omega(h_{\min\{d,\frac{2c+d}{3}\}})$. Thus, by Remark 19, we obtain that $f_{T_H}\in\Theta(2^n)$, which is in accordance with the Hanoi asymptotic complexity class that can be found in the literature [44, 52]. □
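Finally, a closed-form cross-check (ours): unfolding (9) gives $f_{T_H}(n)=(c+d)\,2^{n-1}-d$, visibly in $\Theta(2^n)$, in agreement with Corollary 21.

```python
def T(n, c=1.0, d=3.0):
    """Hanoi recurrence (9): T(1) = c, T(n) = 2*T(n-1) + d."""
    return c if n == 1 else 2 * T(n - 1, c, d) + d

c, d = 1.0, 3.0
for n in range(1, 15):
    assert T(n) == (c + d) * 2 ** (n - 1) - d   # closed form of the solution
```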