Non-polynomial Worst-Case Analysis of Recursive Programs
Abstract
We study the problem of developing efficient approaches for proving worst-case bounds of nondeterministic recursive programs. Ranking functions are sound and complete for proving termination and worst-case bounds of non-recursive programs. First, we apply ranking functions to recursion, resulting in measure functions, and show that they provide a sound and complete approach to prove worst-case bounds of nondeterministic recursive programs. Our second contribution is the synthesis of measure functions in non-polynomial forms. We show that non-polynomial measure functions with logarithm and exponentiation can be synthesized through abstraction of logarithmic or exponentiation terms, Farkas’ Lemma, and Handelman’s Theorem, using linear programming. While previous methods obtain worst-case polynomial bounds, our approach can synthesize bounds of the form \(\mathcal {O}(n \log n)\) as well as \(\mathcal {O}(n^r)\), where r is not an integer. We present experimental results to demonstrate that our approach can efficiently obtain worst-case bounds of classical recursive algorithms such as MergeSort, ClosestPair, Karatsuba’s algorithm and Strassen’s algorithm.
Keywords
Recursive programs · Worst-case bounds · Constraint triples · Template variables
1 Introduction
Automated analysis to obtain quantitative performance characteristics of programs is a key feature of static analysis. Obtaining precise worst-case complexity bounds is a topic of both wide theoretical and practical interest. The manual proof of such bounds can be cumbersome and requires mathematical ingenuity; e.g., the book The Art of Computer Programming by Knuth presents several mathematically involved methods to obtain such precise bounds [52]. The derivation of such worst-case bounds requires considerable mathematical skill and is not an automated method. However, the problem of deriving precise worst-case bounds is of huge interest in program analysis: (a) first, in applications such as hard real-time systems, guarantees of worst-case behavior are required; and (b) the bounds are useful in early detection of egregious performance problems in large code bases. Works such as [36, 37, 40, 41] provide an excellent motivation for the study of automatic methods to obtain worst-case bounds for programs.
Given the importance of deriving worst-case bounds, the problem has been studied from several different perspectives.
 1.
WCET Analysis. The problem of worst-case execution time (WCET) analysis is a large field of its own that focuses on (but is not limited to) sequential loop-free code with low-level hardware aspects [67].
 2.
Resource Analysis. The use of abstract interpretation and type systems to deal with loops, recursion, and data structures has also been considered [1, 37, 50], e.g., using linear invariant generation to obtain disjunctive and non-linear bounds [19], and potential-based methods for handling recursion and inductive data structures [40, 41].
 3.
Ranking Functions. The notion of ranking functions is a powerful technique for termination analysis of (recursive) programs [8, 9, 20, 25, 58, 61, 64, 68]. They serve as a sound and complete approach for proving termination of non-recursive programs [31], and they have also been extended as ranking supermartingales for the analysis of probabilistic programs [12, 14, 16, 29].
Despite the many results above, two aspects of the problem have not been addressed.
 1.
WCET Analysis of Recursive Programs Through Ranking Functions. The use of ranking functions has been limited mostly to non-recursive programs, and their use to obtain worst-case bounds for recursive programs has not been explored in depth.
 2.
Efficient Methods for Precise Bounds. While previous works present methods for disjunctive polynomial bounds [37] (such as \(\max (0,n) \cdot (1+\max (n,m))\)), or multivariate polynomial analysis [40], these works do not provide efficient methods to synthesize bounds such as \(\mathcal {O}(n \log n)\) or \(\mathcal {O}(n^r)\), where r is not an integer.
We address these two aspects, i.e., efficient methods for obtaining non-polynomial bounds such as \(\mathcal {O}(n \log n)\) and \(\mathcal {O}(n^r)\), where r is not an integer, for recursive programs.
 1.
First, we apply ranking functions to recursion, resulting in measure functions, and show that they provide a sound and complete method to prove termination and worstcase bounds of nondeterministic recursive programs.
 2.
Second, we present a sound approach for synthesizing measure functions of specific forms. More precisely, we show that non-polynomial measure functions involving logarithm and exponentiation can be synthesized using linear programming, through abstraction of logarithmic or exponentiation terms, Farkas’ Lemma, and Handelman’s Theorem.
 3.
A key application of our method is the worst-case analysis of recursive programs. Our procedure can synthesize non-polynomial bounds of the form \(\mathcal {O}(n \log n)\), as well as \(\mathcal {O}(n^r)\), where r is not an integer. We show the applicability of our technique to obtain worst-case complexity bounds for several classical recursive programs:

For MergeSort [24, Chap. 2] and the divide-and-conquer algorithm for the ClosestPair problem [24, Chap. 33], we obtain an \(\mathcal {O}(n \log n)\) worst-case bound, and the bounds we obtain are asymptotically optimal. Note that previous methods are either not applicable, or grossly overestimate the bounds as \(\mathcal {O}(n^2)\).

For Karatsuba’s algorithm for polynomial multiplication (cf. [52]) we obtain a bound of \(\mathcal {O}(n^{1.6})\), whereas the optimal bound is \(n^{\log _23}\approx \mathcal {O}(n^{1.585})\), and for the classical Strassen’s algorithm for fast matrix multiplication (cf. [24, Chap. 4]) we obtain a bound of \(\mathcal {O}(n^{2.9})\) whereas the optimal bound is \(n^{\log _27} \approx \mathcal {O}(n^{2.8074})\). Note that previous methods are either not applicable, or grossly overestimate the bounds as \(\mathcal {O}(n^2)\) and \(\mathcal {O}(n^3)\), respectively.

 4.
We present experimental results to demonstrate the effectiveness of our approach.
In general, our approach can be applied to (recursive) programs where the worst-case behaviour can be obtained by an analysis that involves only the structure of the program. For example, our approach cannot handle the Euclidean algorithm for computing the greatest common divisor of two given natural numbers, since the worst-case behaviour of this algorithm relies on Lamé’s Theorem [52]. The key novelty of our approach is that we show how non-trivial non-polynomial worst-case upper bounds such as \(\mathcal {O}(n \log n)\) and \(\mathcal {O}(n^r)\), where r is non-integral, can be soundly obtained, even for recursive programs, using linear programming. Moreover, as our computational tool is linear programming, our approach is also relatively scalable (see Remark 2). Due to the page limit, we omit the details of the syntax, semantics, proofs, experiments and other technical parts. They can be found in the full version [15].
2 Nondeterministic Recursive Programs
In this work, our main contributions involve a new approach for non-polynomial worst-case analysis of recursive programs. To focus on the new contributions, we consider a simple programming language for nondeterministic recursive programs. In our language, (a) all scalar variables hold integers, (b) all assignments to scalar variables are restricted to linear expressions with the floor operation, and (c) we do not consider return statements. The reason to consider such a simple language is that (i) non-polynomial worst-case running time often involves non-polynomial terms over integer-valued variables (such as array length) only, (ii) assignments to variables are often linear, with possible floored expressions (e.g., in divide-and-conquer programs), and (iii) the return value is often unrelated to the worst-case behaviour of programs.
For a set A, we denote by \(|A|\) the cardinality of A and by \(\mathbf {1}_A\) the indicator function on A. We denote by \(\mathbb {N}\), \(\mathbb {N}_0\), \(\mathbb {Z}\), and \(\mathbb {R}\) the sets of all positive integers, non-negative integers, integers, and real numbers, respectively. Below we fix a set \(\mathcal {X}\) of scalar variables.
Arithmetic Expressions, Valuations, and Predicates. The set of (linear) arithmetic expressions \(\mathfrak {e}\) over \(\mathcal {X}\) is generated by the following grammar: \(\mathfrak {e}\,\,{::}{=}\,\, c\mid x \mid \left\lfloor \frac{\mathfrak {e}}{c}\right\rfloor \mid \mathfrak {e}+\mathfrak {e}\mid \mathfrak {e}-\mathfrak {e}\mid c*\mathfrak {e}\) where \(c\in \mathbb {Z}\) and \(x\in \mathcal {X}\). Informally, (i) \(\frac{\centerdot }{c}\) refers to the division operation, (ii) \(\lfloor \centerdot \rfloor \) refers to the floor operation, and (iii) \(+,-,*\) refer to the addition, subtraction and multiplication operations over integers, respectively. To ensure that division is well-defined, we stipulate that every appearance of c in \(\frac{\mathfrak {e}}{c}\) is non-zero. A valuation over \(\mathcal {X}\) is a function \(\nu \) from \(\mathcal {X}\) into \(\mathbb {Z}\). Informally, a valuation assigns to each scalar variable an integer. Under a valuation \(\nu \) over \(\mathcal {X}\), an arithmetic expression \(\mathfrak {e}\) can be evaluated to an integer in the straightforward way. We denote by \(\mathfrak {e}(\nu )\) the evaluation of \(\mathfrak {e}\) under \(\nu \). The set of propositional arithmetic predicates \(\phi \) over \(\mathcal {X}\) is generated by the following grammar: \(\phi \,\,{::}{=}\,\, \mathfrak {e}\le \mathfrak {e} \mid \mathfrak {e}\ge \mathfrak {e} \mid \lnot \phi \mid \phi \wedge \phi \mid \phi \vee \phi \) where \(\mathfrak {e}\) represents an arithmetic expression. The satisfaction relation \(\models \) between valuations and propositional arithmetic predicates is defined in the straightforward way through evaluation of arithmetic expressions. For each propositional arithmetic predicate \(\phi \), \(\mathbf {1}_{\phi }\) is interpreted as the indicator function \(\nu \mapsto \mathbf {1}_{\nu \models \phi }\) on valuations, where \(\mathbf {1}_{\nu \models \phi }\) is 1 if \(\nu \,\models \,\phi \) and 0 otherwise.
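To make the evaluation of arithmetic expressions concrete, the following is a minimal interpreter sketch (our own encoding; the tuple-based AST and function names are assumptions, not part of the paper):

```python
# Minimal sketch (our own encoding): evaluating the linear arithmetic
# expressions e ::= c | x | floor(e/c) | e+e | e-e | c*e under a valuation
# (a map from scalar variables to integers).

def evaluate(expr, valuation):
    """Evaluate an expression given as a nested tuple AST."""
    tag = expr[0]
    if tag == "const":                      # c
        return expr[1]
    if tag == "var":                        # x
        return valuation[expr[1]]
    if tag == "floordiv":                   # floor(e / c), with c != 0
        assert expr[2] != 0, "divisor must be non-zero"
        # Python's // floors toward minus infinity, matching the floor operation
        return evaluate(expr[1], valuation) // expr[2]
    if tag == "add":
        return evaluate(expr[1], valuation) + evaluate(expr[2], valuation)
    if tag == "sub":
        return evaluate(expr[1], valuation) - evaluate(expr[2], valuation)
    if tag == "mul":                        # c * e, with c a constant
        return expr[1] * evaluate(expr[2], valuation)
    raise ValueError("unknown tag: " + str(tag))

# Example: floor((n + m) / 2), as would appear in divide-and-conquer code.
mid = ("floordiv", ("add", ("var", "n"), ("var", "m")), 2)
```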
Syntax of the Programming Language. Due to the page limit, we present only a brief description of our syntax. The syntax is a subset of the C programming language: in our setting, we have scalar variables, which hold integers, and function names, which correspond to functions (in the programming-language sense); assignment statements are indicated by ‘\(:=\)’, whose left-hand side is a scalar variable and whose right-hand side is a linear arithmetic expression; ‘skip’ is the statement which does nothing; while-loops and conditional if-branches are indicated by ‘while’ and ‘if’ respectively, together with a propositional arithmetic predicate indicating the relevant condition (or guard); demonic nondeterministic branches are indicated by ‘if’ and ‘\(\star \)’; function declarations are indicated by a function name followed by a bracketed list of distinct scalar variables, while function calls are indicated by a function name followed by a bracketed list of linear arithmetic expressions; each function declaration is followed by a curly-braced compound statement as the function body; finally, a program is a sequence of function declarations with their function bodies. Given a recursive program in our syntax, we assign a distinct natural number (called a label in our context) to every assignment/skip statement, function call, if/while-statement and terminal line in the program. Each label serves as a program counter which indicates the next statement to be executed.
Semantics Through CFGs. We use control-flow graphs (CFGs) to specify the semantics of recursive programs. Informally, a CFG specifies how the values of scalar variables and the program counter change in a program.
Definition 1
(Control-Flow Graphs). The control-flow graph (CFG) of a recursive program takes the form (\(\dag \)) and consists of the following components:

\( F \) is a finite set of function names;

each \( L ^{\mathsf {f}}\) is a finite set of labels attached to the function name \(\mathsf {f}\), which is partitioned into (i) the set \( L _{\mathrm {b}}^{\mathsf {f}}\) of branching labels, (ii) the set \( L _{\mathrm {a}}^{\mathsf {f}}\) of assignment labels, (iii) the set \( L _{\mathrm {c}}^{\mathsf {f}}\) of call labels and (iv) the set \( L _{\mathrm {d}}^{\mathsf {f}}\) of demonic nondeterministic labels;

each \(V^{\mathsf {f}}\) is the set of scalar variables attached to \(\mathsf {f}\);

each \(\ell _\mathrm {in}^{\mathsf {f}}\) (resp. \(\ell _\mathrm {out}^{\mathsf {f}}\)) is the initial label (resp. terminal label) in \( L ^{\mathsf {f}}\);

each \({\rightarrow }_{\mathsf {f}}\) is a relation each of whose members is a triple \((\ell ,\alpha ,\ell ')\), for which \(\ell \) (resp. \(\ell '\)) is the source label (resp. target label) of the triple such that \(\ell \in L ^{\mathsf {f}}\) (resp. \(\ell '\in L ^{\mathsf {f}}\)), and \(\alpha \) is (i) either a propositional arithmetic predicate \(\phi \) over \(V^{\mathsf {f}}\) (as the set of scalar variables) if \(\ell \in L _{\mathrm {b}}^{\mathsf {f}}\), (ii) or an update function from the set of valuations over \(V^{\mathsf {f}}\) into the set of valuations over \(V^{\mathsf {f}}\) if \(\ell \in L _{\mathrm {a}}^{\mathsf {f}}\), (iii) or a pair \((\mathsf {g}, h)\) with \(\mathsf {g}\in F \) and h being a value-passing function which maps every valuation over \(V^{\mathsf {f}}\) to a valuation over \(V^{\mathsf {g}}\) if \(\ell \in L _{\mathrm {c}}^{\mathsf {f}}\), (iv) or \(\star \) if \(\ell \in L _{\mathrm {d}}^{\mathsf {f}}\).
Example 1
We consider the running example in Fig. 1 which abstracts the running time of BinarySearch. The CFG for this example is depicted in Fig. 2.
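Since Fig. 1 is not reproduced here, the recursion it abstracts can be mimicked as follows (our own sketch; the assumption that the program halves n until n < 1 and the step-counting convention are ours, standing in for the figure):

```python
# Sketch (assumption: the running example abstracts BinarySearch as a
# recursion that halves n until n < 1; the counter below is our own
# illustration of counting recursive calls).

def bsearch_steps(n):
    """Return the number of recursive calls made on input n."""
    if n < 1:
        return 0
    return 1 + bsearch_steps(n // 2)
```

For n >= 1 this count equals floor(log2(n)) + 1, which is why a logarithmic measure function is the right target for this example.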
\(\square \)
It is intuitive that every recursive program in our setting can be transformed into a CFG. Based on CFGs, the semantics models executions of a recursive program as runs, and is defined through the standard notion of a call stack. Below we fix a recursive program P and its CFG taking the form (\(\dag \)). We first define the notions of stack elements and configurations, which capture all information within a function call.
Stack Elements and Configurations. A stack element \(\mathfrak {c}\) (of P) is a triple \((\mathsf {f},\ell ,\nu )\) (treated as a letter) where \(\mathsf {f}\in F \), \(\ell \in L ^{\mathsf {f}}\) and \(\nu \in \textit{Val}_{\mathsf {f}}\); \(\mathfrak {c}\) is nonterminal if \(\ell \in L ^{\mathsf {f}}\setminus \{\ell _\mathrm {out}^{\mathsf {f}}\}\). A configuration (of P) is a finite word of nonterminal stack elements (including the empty word \(\varepsilon \)). Thus, a stack element \((\mathsf {f},\ell ,\nu )\) specifies that the current function name is \(\mathsf {f}\), the next statement to be executed is the one labelled with \(\ell \) and the current valuation w.r.t \(\mathsf {f}\) is \(\nu \); a configuration captures the whole trace of the call stack.
Schedulers and Runs. To resolve the nondeterminism indicated by \(\star \), we consider the standard notion of schedulers, which may inspect the whole history to make their decisions. Formally, a scheduler \(\pi \) is a function that maps every sequence of configurations ending in a nondeterministic location to the next configuration. A stack element \(\mathfrak {c}\) (as the initial stack element) and a scheduler \(\pi \) define a unique infinite sequence \(\{w_j\}_{j\in \mathbb {N}_0}\) of configurations as the execution starting from \(\mathfrak {c}\) under \(\pi \), which is denoted as the run \(\rho (\mathfrak {c},\pi )\). This defines the semantics of recursive programs.
We now define the notion of termination time which corresponds directly to the running time of a recursive program. In our setting, execution of every step takes one time unit.
Definition 2
(Termination Time). For each stack element \(\mathfrak {c}\) and each scheduler \(\pi \), the termination time of the run \(\rho (\mathfrak {c},\pi )=\{w_j\}_{j\in \mathbb {N}_0}\), denoted by \(T(\mathfrak {c},\pi )\), is defined as \(T(\mathfrak {c},\pi ):= \min \{j\mid w_j=\varepsilon \}\) (i.e., the earliest time when the stack is empty), where \(\min \emptyset :=\infty \). The worst-case termination-time function \(\overline{T}\) is a function on the set of stack elements defined by \(\overline{T}(\mathfrak {c}):= \sup \{T(\mathfrak {c}, \pi )\mid \pi \text { is a scheduler for }P\}\).
Thus \(\overline{T}\) captures the worst-case behaviour of the recursive program P.
3 Measure Functions
In this section, we introduce the notion of measure functions for recursive programs. We show that measure functions are sound and complete for nondeterministic recursive programs and serve as upper bounds for the worst-case termination-time function. Throughout this section, we fix a recursive program P together with its CFG taking the form (\(\dag \)). We now present the standard notion of invariants, which represent reachable stack elements. Due to the page limit, we omit the intuitive notion of reachable stack elements. Informally, a stack element is reachable w.r.t an initial function name and initial valuations satisfying a prerequisite (as a propositional arithmetic predicate) if it can appear in the run under some scheduler.
Definition 3
(Invariants). A (linear) invariant \(I\) w.r.t a function name \(\mathsf {f}^*\) and a propositional arithmetic predicate \(\phi ^*\) over \(V^{\mathsf {f^*}}\) is a function that assigns to each pair \((\mathsf {f},\ell )\) satisfying \(\mathsf {f}\in F \) and \(\ell \in L ^{\mathsf {f}}\backslash \{\ell _\mathrm {out}^{\mathsf {f}}\}\) a propositional arithmetic predicate \(I(\mathsf {f},\ell )\) over \(V^{\mathsf {f}}\) such that (i) \(I(\mathsf {f},\ell )\) is without the appearance of floored expressions (i.e. \(\lfloor \centerdot \rfloor \)) and (ii) for all stack elements \((\mathsf {f},\ell ,\nu )\) reachable w.r.t \(\mathsf {f}^*,\phi ^*\), we have \(\nu \,\models \,I(\mathsf {f},\ell )\). The invariant I is in disjunctive normal form if every \(I(\mathsf {f},\ell )\) is in disjunctive normal form.
Obtaining invariants automatically is a standard problem in programming languages, and several techniques exist (such as abstract interpretation [26] or Farkas’ Lemma [19]). In the rest of the section we fix a(n initial) function name \(\mathsf {f}^*\in F \) and a(n initial) propositional arithmetic predicate \(\phi ^*\) over \(V^{\mathsf {f^*}}\). For each \(\mathsf {f}\in F \) and \(\ell \in L ^{\mathsf {f}}\backslash \{\ell _\mathrm {out}^{\mathsf {f}}\}\), we define \(D_{\mathsf {f},\ell }\) to be the set of all valuations \(\nu \) w.r.t \(\mathsf {f}\) such that \((\mathsf {f},\ell ,\nu )\) is reachable w.r.t \(\mathsf {f}^*,\phi ^*\). Below we introduce the notion of measure functions.
Definition 4
(Measure Functions). A measure function w.r.t \(\mathsf {f}^*,\phi ^*\) is a function g from the set of stack elements into \([0,\infty ]\) such that the following conditions hold:

C1: if \(\ell =\ell _\mathrm {out}^{\mathsf {f}}\), then \(g(\mathsf {f},\ell ,\nu )=0\);

C2: if \(\ell \in L _{\mathrm {a}}^{\mathsf {f}}\backslash \{\ell _\mathrm {out}^{\mathsf {f}}\}\), \(\nu \in D_{\mathsf {f},\ell }\) and \((\ell ,h,\ell ')\) is the only triple in \({\rightarrow }_{\mathsf {f}}\) with source label \(\ell \) and update function h, then \(g(\mathsf {f},\ell ',h(\nu ))+1\le g(\mathsf {f},\ell ,\nu )\);

C3: if \(\ell \in L _{\mathrm {c}}^{\mathsf {f}}\backslash \{\ell _\mathrm {out}^{\mathsf {f}}\}\), \(\nu \in D_{\mathsf {f},\ell }\) and \((\ell ,(\mathsf {g},h),\ell ')\) is the only triple in \({\rightarrow }_{\mathsf {f}}\) with source label \(\ell \) and valuepassing function h, then \(1+g(\mathsf {g},\ell _\mathrm {in}^{\mathsf {g}},h(\nu ))+g(\mathsf {f},\ell ',\nu )\le g(\mathsf {f},\ell ,\nu )\);

C4: if \(\ell \in L _{\mathrm {b}}^{\mathsf {f}}\backslash \{\ell _\mathrm {out}^{\mathsf {f}}\}\), \(\nu \in D_{\mathsf {f},\ell }\) and \((\ell , \phi ,\ell _1),(\ell , \lnot \phi ,\ell _2)\) are the only two triples in \({\rightarrow }_{\mathsf {f}}\) with source label \(\ell \), then \(\mathbf {1}_{\nu \models \phi }\cdot g(\mathsf {f},\ell _1,\nu )+\mathbf {1}_{\nu \models \lnot \phi }\cdot g(\mathsf {f},\ell _2,\nu )+1\le g(\mathsf {f},\ell ,\nu )\);

C5: if \(\ell \in L _{\mathrm {d}}^{\mathsf {f}}\backslash \{\ell _\mathrm {out}^{\mathsf {f}}\}\), \(\nu \in D_{\mathsf {f},\ell }\) and \((\ell , \star ,\ell _1),(\ell , \star ,\ell _2)\) are the only two triples in \({\rightarrow }_{\mathsf {f}}\) with source label \(\ell \), then \(\max \{g(\mathsf {f},\ell _1,\nu ), g(\mathsf {f},\ell _2,\nu )\}+1\le g(\mathsf {f},\ell ,\nu )\).
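As a sanity check of these conditions on a toy instance, one can verify numerically that a logarithmic candidate dominates the halving recursion of the running example (our own sketch; collapsing the CFG to a single recursive call per level and the particular candidate g are our assumptions, not the paper's full C1–C5 over the CFG):

```python
import math

# Sketch (our own): collapsing the CFG of a halving recursion
# f(n) -> f(floor(n/2)) to one call per level, conditions C1-C5 reduce to
#     g(n) >= 1 + g(n // 2)   for all n >= 1,   and   g >= 0.
# The logarithmic candidate below satisfies this decrease condition,
# mirroring how a synthesized measure function bounds termination time.

def g(n):
    """A candidate measure-style bound: 1 + 2*log2(n + 1)."""
    return 1.0 + 2.0 * math.log2(n + 1)

def satisfies_decrease(n_max):
    """Check the collapsed decrease condition for all n in 1..n_max."""
    return all(g(n) >= 1.0 + g(n // 2) for n in range(1, n_max + 1))
```

The inequality holds because (n+1)/(n//2+1) >= 1.5 > sqrt(2) for all n >= 1, so each halving step loses at least 2·log2(sqrt(2)) = 1 unit of g.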
Intuitively, a measure function is a nonnegative function whose values strictly decrease along the executions regardless of the choice of the demonic scheduler. By applying ranking functions to configurations, one can prove the following theorem stating that measure functions are sound and complete for the worstcase terminationtime function.
Theorem 1
(Soundness and Completeness). (1) (Soundness). For all measure functions g w.r.t \(\mathsf {f}^*,\phi ^*\), it holds that for all valuations \(\nu \in \textit{Val}_{\mathsf {\mathsf {f}^*}}\) such that \(\nu \,\models \,\phi ^*\), we have \(\overline{T}(\mathsf {f}^*,\ell _\mathrm {in}^{\mathsf {f}^*},\nu )\le g(\mathsf {f}^*,\ell _\mathrm {in}^{\mathsf {f}^*},\nu )\). (2) (Completeness). \(\overline{T}\) is a measure function w.r.t \(\mathsf {f}^*,\phi ^*\).
By Theorem 1, to obtain an upper bound on the worst-case termination-time function, it suffices to synthesize a measure function. Below we show that it suffices to synthesize measure functions at cut-points (which we refer to as significant labels).
Definition 5
(Significant Labels). Let \(\mathsf {f}\in F \). A label \(\ell \in L ^{\mathsf {f}}\) is significant if either \(\ell =\ell _\mathrm {in}^{\mathsf {f}}\) or \(\ell \) is the initial label to some whileloop appearing in the function body of \(\mathsf {f}\).
We denote by \( L _{\mathrm {s}}^{\mathsf {f}}\) the set of significant labels in \( L ^{\mathsf {f}}\). Informally, a significant label is a label where valuations cannot easily be deduced from those at other labels, namely valuations at the start of a function call and at the initial label of a while-loop.
The Expansion Construction (from g to \(\widehat{g}\) ). Let g be a function from \(\left\{ (\mathsf {f},\ell ,\nu )\mid \mathsf {f}\in F , \ell \in L _{\mathrm {s}}^{\mathsf {f}}, \nu \in \textit{Val}_{\mathsf {f}}\right\} \) into \([0,\infty ]\). One can obtain from g a function \(\widehat{g}\) from the set of all stack elements into \([0,\infty ]\) in a straightforward way through iterated application of the equality forms of C1–C5.
4 The Synthesis Algorithm
By Theorem 1, measure functions are a sound approach for upper bounds of the worst-case termination-time function, and hence synthesis of measure functions of specific forms provides upper bounds for the worst-case behaviour of recursive programs. We first define the synthesis problem of measure functions and then present the synthesis algorithm, where the initial stack element is integrated into the input invariant. Informally, the input is a recursive program, an invariant for the program and technical parameters for the specific form of a measure function; the output is a measure function if the algorithm finds one, and fail otherwise.

Input: a recursive program P, an invariant \(I\) in disjunctive normal form and a quadruple \((d, \mathrm {op}, r,k)\) of technical parameters;

Output: a measure function h w.r.t the quadruple \((d, \mathrm {op}, r,k)\).
The quadruple \((d, \mathrm {op}, r,k)\) specifies the form of a measure function as follows: \(d\in \mathbb {N}\) is the degree of the measure function to be synthesized; \(\mathrm {op}\in \{\mathrm {log}, \mathrm {exp}\}\) signals either logarithmic (when \(\mathrm {op}=\log \)) (e.g., \(n\ln {n}\)) or exponential (when \(\mathrm {op}=\mathrm {exp}\)) (e.g., \(n^{1.6}\)) measure functions; r is a rational number greater than 1 which specifies the exponent in the measure function (i.e., \(n^r\)) when \(\mathrm {op}=\mathrm {exp}\); and \(k\in \mathbb {N}\) is a technical parameter required by Theorem 3. In the input for RecTermBou we fix the exponent r when \(\mathrm {op}=\mathrm {exp}\). However, by iterating with binary search over a bounded input range, we can obtain a measure function with as precise an exponent as possible within the given range. Moreover, the invariants can be obtained automatically through, e.g., [19]. Below we present our algorithm SynAlgo for synthesizing measure functions for the RecTermBou problem. The algorithm is designed to synthesize one function over valuations at each function name and appropriate significant labels so that C1–C5 are fulfilled. Due to the page limit, we only illustrate the main conceptual details of our algorithm. In the following, we fix an input to our algorithm from the RecTermBou problem.
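The binary-search refinement of the exponent r can be sketched as follows (our own illustration; `synthesizable` is a hypothetical stand-in for running the synthesis with a fixed r, and the monotonicity of feasibility in r is an assumption of the sketch):

```python
# Sketch (our own): refining the exponent r by binary search.
# `synthesizable(r)` is a hypothetical oracle standing in for one run of
# the synthesis algorithm with op = exp and exponent r fixed; we assume
# feasibility is monotone in r (a larger exponent gives a weaker bound).

def refine_exponent(synthesizable, lo, hi, tol=1e-3):
    """Smallest r in [lo, hi] (up to tol) for which synthesis succeeds,
    or None if even r = hi fails."""
    if not synthesizable(hi):
        return None
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if synthesizable(mid):
            hi = mid
        else:
            lo = mid
    return hi
```

With a mock oracle that succeeds exactly for r above some threshold, the returned exponent lies within tol above that threshold.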
Overview. We present the overview of our solution which has the following five steps.
 1.
Step 1. Since one key aspect of our result is to obtain bounds of the form \(\mathcal {O}(n \log n)\) as well as \(\mathcal {O}(n^r)\), where r is not an integer, we first consider a general form of upper bounds that involves logarithm and exponentiation (Step 1(a)), and then consider templates in this general form for significant labels (Step 1(b)).
 2.
Step 2. The second step takes the templates generated in Step 1 for significant labels and generates templates for all labels. This step is relatively straightforward.
 3.
Step 3. The third step establishes constraint triples according to the invariant given by the input and the template obtained in Step 2. This step is also straightforward.
 4.
Step 4. The fourth step is the key step, which transforms the constraint triples generated in Step 3 into ones without logarithmic and exponentiation terms. The first sub-step (Step 4(a)) is to abstract logarithmic, exponentiation, and floored expressions as fresh variables. The second sub-step (Step 4(b)) obtains linear constraints over the abstracted variables; we use Farkas’ Lemma and Lagrange’s Mean-Value Theorem (LMVT) to obtain sound linear inequalities for those variables.
 5.
Step 5. The final step solves for the unknown coefficients of the template from the constraint triples (without logarithm or exponentiation) obtained in Step 4. This requires solving positivity of polynomials over polyhedra, which a sound form of Handelman’s Theorem (Theorem 3) transforms into a linear program.
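To illustrate the kind of linear inequality LMVT yields in Step 4(b): for 0 < u < v, the mean-value theorem gives (v − u)/v ≤ ln v − ln u ≤ (v − u)/u. A quick numerical check of this sandwich (our own sketch, not the paper's derivation):

```python
import math

# Sketch (our own): by Lagrange's Mean-Value Theorem, for 0 < u < v there
# is some xi in (u, v) with ln(v) - ln(u) = (v - u) / xi, hence
#     (v - u) / v  <=  ln(v) - ln(u)  <=  (v - u) / u.
# Such linear over/under-approximations let abstracted logarithmic
# variables be constrained by purely linear inequalities.

def lmvt_bounds_hold(u, v):
    """Check the LMVT sandwich for ln on a single pair 0 < u < v."""
    diff = math.log(v) - math.log(u)
    return (v - u) / v <= diff <= (v - u) / u
```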
We first present an informal illustration of the key ideas through a simple example.
Example 2
4.1 Step 1 of SynAlgo
Step 1(a): General Form of a Measure Function
Extended Terms. In order to capture the non-polynomial worst-case complexity of recursive programs, our algorithm incorporates two types of extensions of terms.
 1.
Logarithmic Terms. The first extension, which we call the \(\log \)-extension, is the extension with terms from \(\ln {x},\ln {(x-y+1)}\), where x, y are scalar variables appearing in the parameter list of some function name and \(\ln {(\centerdot )}\) refers to the natural logarithm function with base e. Our algorithm takes this extension when \(\mathrm {op}\) is \(\log \).
 2.
Exponentiation Terms. The second extension, which we call the \(\mathrm {exp}\)-extension, is the extension with terms from \(x^r,(x-y+1)^r\), where x, y are scalar variables appearing in the parameter list of some function name. The algorithm takes this extension when \(\mathrm {op}=\mathrm {exp}\).
The intuition is that x (resp. \(x-y+1\)) may represent a positive quantity to be halved iteratively (resp. the length between array indexes y and x).
Semantics of General Form. A finite sum \(\mathfrak {e}\) at \(\mathsf {f}\) and \(\ell \in L _{\mathrm {s}}^{\mathsf {f}}\) in the form (1) defines a function \([\![\mathfrak {e}]\!]\) on \(\textit{Val}_{\mathsf {f}}\) in the way that for each \(\nu \in \textit{Val}_{\mathsf {f}}\): \([\![\mathfrak {e}]\!](\nu ):=\mathfrak {e}(\nu )\) if \(\nu \,\models \,I(\mathsf {f},\ell )\), and \([\![\mathfrak {e}]\!](\nu ):=0\) otherwise. Note that in the definition of \([\![\mathfrak {e}]\!]\), we do not consider the case when \(\log \) or exponentiation is undefined. However, we will see in Step 1(b) below that \(\log {}\) or exponentiation will always be welldefined.
Step 1(b): Templates. As in all previous works (cf. [12, 16, 20, 25, 40, 58, 61, 68]), we consider a template for the measure function determined by the triple \((d,\mathrm {op},r)\) from the input parameters. Formally, the template determined by \((d,\mathrm {op},r)\) assigns to every function name \(\mathsf {f}\) and \(\ell \in L _{\mathrm {s}}^{\mathsf {f}}\) an expression in the form (1) (with degree d and extension option \(\mathrm {op}\)). Note that a template here only restricts (i) the degree and (ii) the \(\log \) or \(\mathrm {exp}\) extension for a measure function, rather than its specific form. In detail, the algorithm sets up a template \(\eta \) for a measure function by assigning to each function name \(\mathsf {f}\) and significant label \(\ell \in L _{\mathrm {s}}^{\mathsf {f}}\) an expression \(\eta (\mathsf {f},\ell )\) in a form similar to (1), except that the \(c_i\)’s in (1) are interpreted as distinct template variables whose actual values are to be synthesized. In order to ensure that logarithm and exponentiation are well-defined over each \(I(\mathsf {f},\ell )\), we impose the following restriction (\(\S \)) on our template: \(\ln {x},x^r\) (resp. \(\ln {(x-y+1)},(x-y+1)^r\)) appear in \(\eta (\mathsf {f},\ell )\) only when \(x-1\ge 0\) (resp. \(x-y\ge 0\)) can be inferred from the invariant \(I(\mathsf {f},\ell )\). To infer \(x-1\ge 0\) or \(x-y\ge 0\) from \(I(\mathsf {f},\ell )\), we utilize Farkas’ Lemma.
Theorem 2
(Farkas’ Lemma [28, 60]). Let \(\mathbf {A}\in \mathbb {R}^{m\times n}\), \(\mathbf {b}\in \mathbb {R}^m\), \(\mathbf {c}\in \mathbb {R}^{n}\) and \(d\in \mathbb {R}\). Assume that \(\{\mathbf {x}\mid \mathbf {A}\mathbf {x}\le \mathbf {b}\}\ne \emptyset \). Then \(\{\mathbf {x}\mid \mathbf {A}\mathbf {x}\le \mathbf {b}\}\subseteq \{\mathbf {x}\mid \mathbf {c}^{\mathrm {T}}\mathbf {x}\le d\}\) iff there exists \(\mathbf {y}\in \mathbb {R}^m\) such that \(\mathbf {y}\ge \mathbf {0}\), \(\mathbf {A}^\mathrm {T}\mathbf {y}=\mathbf {c}\) and \(\mathbf {b}^{\mathrm {T}}\mathbf {y}\le d\).
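Theorem 2 can be exercised on a tiny instance: the certificate y below witnesses that x1 ≥ 0, x2 ≥ 0, x1 + x2 ≤ 2 entails 2·x1 + x2 ≤ 4 (our own toy example, with the certificate found by hand; in general an LP solver would search for y):

```python
# Toy check of Farkas' Lemma (our own example): the certificate y proves
#   {(x1,x2) : x1 >= 0, x2 >= 0, x1 + x2 <= 2}  is a subset of
#   {(x1,x2) : 2*x1 + x2 <= 4}.
# Constraints in the form A x <= b:
A = [[-1, 0],   # -x1 <= 0
     [0, -1],   # -x2 <= 0
     [1,  1]]   #  x1 + x2 <= 2
b = [0, 0, 2]
c = [2, 1]      # target inequality: c^T x <= d
d = 4
y = [0, 1, 2]   # Farkas certificate, found by hand for this instance

def is_farkas_certificate(A, b, c, d, y):
    """y >= 0, A^T y = c, b^T y <= d  ==>  (A x <= b) entails (c^T x <= d)."""
    if any(yi < 0 for yi in y):
        return False
    At_y = [sum(A[i][j] * y[i] for i in range(len(A))) for j in range(len(c))]
    return At_y == c and sum(bi * yi for bi, yi in zip(b, y)) <= d
```

Checking a candidate certificate is just linear algebra; finding one is a linear-programming problem, which is what keeps the overall synthesis in LP.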
By Farkas’ Lemma, there exists an algorithm that infers whether \(x-1\ge 0\) (or \(x-y\ge 0\)) holds under \(I(\mathsf {f},\ell )\) in polynomial time through emptiness checking of polyhedra (cf. [59]), since \(I(\mathsf {f},\ell )\) involves only linear (degree-1) polynomials in our setting.
Then \(\eta \) naturally induces a function \([\![\eta ]\!]\) from \(\left\{ (\mathsf {f},\ell ,\nu )\mid \mathsf {f}\in F , \ell \in L _{\mathrm {s}}^{\mathsf {f}}, \nu \in \textit{Val}_{\mathsf {f}}\right\} \) into \([0,\infty ]\), parametric over the template variables, such that \([\![\eta ]\!](\mathsf {f},\ell ,\nu )={[\![}{\eta (\mathsf {f},\ell )}{]\!]}(\nu )\) for all appropriate stack elements \((\mathsf {f},\ell ,\nu )\). Note that \([\![\eta ]\!]\) is well-defined since logarithm and exponentiation are well-defined over the satisfaction sets given by I.
4.2 Step 2 of SynAlgo
Step 2: Computation of \(\widehat{[\![\eta ]\!]}\) . Let \(\eta \) be the template constructed in Step 1. This step computes \(\widehat{[\![\eta ]\!]}\) from \(\eta \) through the expansion construction from significant labels (Sect. 3), which transforms a function g into \(\widehat{g}\). (Recall that the function \([\![\mathfrak {e} ]\!]\) for an expression \(\mathfrak {e}\) is defined in Step 1(a).) Formally, based on the template \(\eta \) from Step 1, the algorithm computes \(\widehat{[\![\eta ]\!]}\), with the exception that template variables appearing in \(\eta \) are treated as undetermined constants. Then \(\widehat{[\![\eta ]\!]}\) is a function parametric over the template variables in \(\eta \).
4.3 Step 3 of SynAlgo
This step generates constraint triples from \(\widehat{[\![\eta ]\!]}\) computed in Step 2. By applying non-negativity and C2–C5 to \(\widehat{[\![\eta ]\!]}\), the algorithm establishes constraint triples which will later be interpreted as universally quantified logical formulae.
Constraint Triples. A constraint triple is a triple \((\mathsf {f}, \phi ,\mathfrak {e})\) where (i) \(\mathsf {f}\in F \), (ii) \(\phi \) is a propositional arithmetic predicate over \(V^{\mathsf {f}}\) which is a conjunction of atomic formulae of the form \(\mathfrak {e}'\ge 0\) with \(\mathfrak {e}'\) being an arithmetic expression, and (iii) \(\mathfrak {e}\) is an expression taking a form similar to (1), with the difference that (a) each \(c_i\) is either a scalar, a template variable c appearing in \(\eta \), or its negation \(-c\), and (b) each \(g_i\) is a finite product whose every multiplicand is either some \(x\in V^{\mathsf {f}}\), or some \(\lfloor \mathfrak {e}\rfloor \) with \(\mathfrak {e}\) being an instance of \(\langle expr \rangle \), or some \(\ln {\mathfrak {e}}\) (or \(\mathfrak {e}^r\), depending on \(\mathrm {op}\)) with \(\mathfrak {e}\) being an instance of \(\langle expr \rangle \). For each constraint triple \((\mathsf {f}, \phi ,\mathfrak {e})\), the function \([\![\mathfrak {e}]\!]\) on \(\textit{Val}_{\mathsf {f}}\) is defined such that each \([\![\mathfrak {e}]\!](\nu )\) is the result of evaluating \(\mathfrak {e}\) when assigning \(\nu (x)\) to each \(x\in V^{\mathsf {f}}\); under (§) (of Step 1(b)), logarithm and exponentiation are always well-defined.
Semantics of Constraint Triples. A constraint triple \((\mathsf {f}, \phi ,\mathfrak {e})\) encodes the following logical formula: \(\forall \nu \in \textit{Val}_{\mathsf {f}}.\left( \nu \,\models \,\phi \rightarrow [\![\mathfrak {e}]\!](\nu )\ge 0\right) \). Multiple constraint triples are grouped into a single logical formula through conjunction.
Step 3: Establishment of Constraint Triples. Based on \(\widehat{[\![\eta ]\!]}\) (computed in the previous step), the algorithm generates constraint triples at each significant label and then groups them together conjunctively. More precisely, at every significant label \(\ell \) of every function name \(\mathsf {f}\), the algorithm generates constraint triples through non-negativity of measure functions and conditions C2–C5; after processing all significant labels, the algorithm groups the generated constraint triples conjunctively into a single collection.
Example 3

\(\mathfrak {q}_1:=(\mathsf {f}, n-1\ge 0, c_1\cdot n+c_2\cdot \ln n+c_3)\) from the logical formula \(\forall n. (n\ge 1)\rightarrow c_1\cdot n+c_2\cdot \ln n+c_3\ge 0\) for non-negativity of measure functions;

\(\mathfrak {q}_2:=(\mathsf {f}, n-1\ge 0\wedge 1-n\ge 0, c_1\cdot n+c_2\cdot \ln n+c_3-2)\) and \(\mathfrak {q}_3:=(\mathsf {f}, n-2\ge 0, c_1\cdot (n-\left\lfloor {n}/{2}\right\rfloor ) +c_2\cdot (\ln n-\ln {\left\lfloor {n}/{2}\right\rfloor })-2)\) from the respective logical formulae

\(\forall n. (n\ge 1\wedge n\le 1)\rightarrow c_1\cdot n+c_2\cdot \ln n+c_3\ge 2\) and

\(\forall n. (n\ge 2)\rightarrow c_1\cdot n+c_2\cdot \ln n+c_3\ge c_1\cdot \left\lfloor {n}/{2}\right\rfloor + c_2\cdot \ln {\left\lfloor {n}/{2}\right\rfloor }+c_3+2\)

for C4 (at label 1). \(\square \)
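These universally quantified formulae can be sanity-checked numerically once concrete coefficients are available. The sketch below plugs in the values \(c_1=0\), \(c_2=2/\ln 2\), \(c_3=2\) that are eventually synthesized in Example 6; the check itself is our own illustration, not part of the algorithm:

```python
import math

# Coefficients synthesized later (Example 6): eta(f, 1, n) = (2 / ln 2) * ln n + 2
c1, c2, c3 = 0.0, 2.0 / math.log(2), 2.0
eps = 1e-9

def eta(n):
    return c1 * n + c2 * math.log(n) + c3

for n in range(1, 1001):
    assert eta(n) >= -eps                       # non-negativity (q_1)
    if n == 1:
        assert eta(n) >= 2 - eps                # the n = 1 case (q_2)
    if n >= 2:
        assert eta(n) >= eta(n // 2) + 2 - eps  # the recursive case (q_3)
```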

4.4 Step 4 of SynAlgo
Step 4: Solving Constraint Triples. To check whether the logical formula encoded by the generated constraint triples is valid, the algorithm follows a sound method which abstracts each multiplicand in the form (2), other than scalar variables, as a stand-alone variable, and transforms the validity of the formula into a system of linear inequalities over the template variables appearing in \(\eta \) through Handelman’s Theorem and linear programming. The main idea is that the algorithm establishes tight linear inequalities for those abstraction variables by investigating properties of the abstracted arithmetic expressions, and uses linear programming to solve the formula based on the linear inequalities for the abstraction variables. We note that the validity of such logical formulae is generally undecidable, since they involve non-polynomial terms such as logarithms [33].
Below we describe how the algorithm transforms a constraint triple into one without logarithmic or exponentiation terms. Given any finite set \(\varGamma \) of polynomials over n variables, we define \(\mathsf {Sat}(\varGamma ):=\left\{ \mathbf {x}\in \mathbb {R}^n\mid h(\mathbf {x})\ge 0\text { for all }h\in \varGamma \right\} \). In the whole step, we let \((\mathsf {f}, \phi ,\mathfrak {e}^*)\) be any constraint triple such that \(\phi =\bigwedge _{j}\mathfrak {e}_j\ge 0\); moreover, we maintain a finite set \(\varGamma \) of linear (degree-1) polynomials over scalar and freshly-added variables. Intuitively, \(\varGamma \) is related both to the set of all \(\mathfrak {e}_j\)’s (so that \(\mathsf {Sat}(\varGamma )\) is essentially the satisfaction set of \(\phi \)) and to the finite subset of polynomials in Theorem 3. Due to lack of space, we only illustrate the part of the algorithm for logarithmic terms (i.e., the case \(\mathrm {op}=\log \)); exponentiation terms can be treated in a similar fashion.
Step 4(a): Abstraction of Logarithmic, Exponentiation, and Floored Expressions. The first substep involves the following computational steps, where Items 2–4 handle variables for abstraction, Item 6 approximates floored expressions, and the other steps are straightforward.
 1.
Initialization. First, the algorithm maintains a finite set of linear (degree1) polynomials \(\varGamma \) and sets it initially to the empty set.
 2.Logarithmic and Floored Expressions. Next, the algorithm computes the following subsets of \(\langle expr \rangle \):

\(\mathcal {E}_L:=\{ \mathfrak {e}\mid {\ln {\mathfrak {e}}} \text { appears in } \mathfrak {e}^* \text { (as subexpression)} \} \text { upon } \mathrm {op}=\log .\)

\(\mathcal {E}_F:=\{ \mathfrak {e} \mid {{\mathfrak {e}} \text { appears in } \mathfrak {e}^* \text { and takes the form } \lfloor \frac{\centerdot }{c}\rfloor } \}\).
Let \(\mathcal {E}:=\mathcal {E}_L \cup \mathcal {E}_F\).

 3.Variables for Logarithmic and Floored Expressions. Next, for each \(\mathfrak {e}\in \mathcal {E}\), the algorithm establishes fresh variables as follows:

a fresh variable \(u_{\mathfrak {e}}\) which represents \(\ln {\mathfrak {e}}\) for \(\mathfrak {e}\in \mathcal {E}_L\);

a fresh variable \(w_{\mathfrak {e}}\) indicating \(\mathfrak {e}\) for \(\mathfrak {e}\in \mathcal {E}_F\).
After this step, the algorithm sets N to be the number of all variables (i.e., all scalar variables and all fresh variables added up to this point). In the rest of this section, we consider an implicit linear order over all scalar and freshlyadded variables so that a valuation of these variables can be treated as a vector in \(\mathbb {R}^{N}\).

 4.
Variable Substitution (from \({\mathfrak {e}}\) to \(\widetilde{\mathfrak {e}}\) ). Next, for each \(\mathfrak {e}\) which is either \(\mathfrak {e}^*\) or some \(\mathfrak {e}_j\) or some expression in \(\mathcal {E}\), the algorithm computes \(\widetilde{\mathfrak {e}}\) as the expression obtained from \(\mathfrak {e}\) by substituting (i) every possible \(u_{\mathfrak {e}'}\) for \(\ln {\mathfrak {e}'}\), and (ii) every possible \(w_{\mathfrak {e}'}\) for \(\mathfrak {e}'\) such that \(\mathfrak {e}'\) is a subexpression of \(\mathfrak {e}\) which does not appear as a subexpression in some other subexpression \(\mathfrak {e}''\in \mathcal {E}_F\) of \(\mathfrak {e}\). From now on, any \(\mathfrak {e}\) or \(\widetilde{\mathfrak {e}}\) is treated as a polynomial over scalar and freshly-added variables. Then any \(\mathfrak {e}(\mathbf {x})\) or \(\widetilde{\mathfrak {e}}(\mathbf {x})\) is the result of polynomial evaluation under the correspondence between variables and coordinates of \(\mathbf {x}\) specified by the linear order.
 5.
Importing \(\phi \) into \(\varGamma \). The algorithm adds all \(\widetilde{\mathfrak {e}_j}\) into \(\varGamma \).
 6.
Approximation of Floored Expressions. For each \(\mathfrak {e}\in \mathcal {E}_F\) such that \(\mathfrak {e}=\lfloor \frac{\mathfrak {e}'}{c}\rfloor \), the algorithm adds linear constraints for \(w_{\mathfrak {e}}\) recursively on the nesting depth of floor operation as follows.

Base Step. If \(\mathfrak {e}=\lfloor \frac{\mathfrak {e}'}{c}\rfloor \) and \(\mathfrak {e}'\) involves no nested floored expression, then the algorithm adds into \(\varGamma \) either (i) \(\widetilde{\mathfrak {e}'}-c\cdot w_{\mathfrak {e}}\) and \(c\cdot w_{\mathfrak {e}}-\widetilde{\mathfrak {e}'}+c-1\) when \(c\ge 1\), which is derived from \(\frac{\mathfrak {e}'}{c}-\frac{c-1}{c}\le \mathfrak {e}\le \frac{\mathfrak {e}'}{c}\), or (ii) \(c\cdot w_{\mathfrak {e}}-\widetilde{\mathfrak {e}'}\) and \(\widetilde{\mathfrak {e}'}-c\cdot w_{\mathfrak {e}}-c-1\) when \(c\le -1\), which follows from \(\frac{\mathfrak {e}'}{c}-\frac{c+1}{c}\le \mathfrak {e}\le \frac{\mathfrak {e}'}{c}\). Second, given the current \(\varGamma \), the algorithm finds the largest constant \(t_{\mathfrak {e}'}\) through Farkas’ Lemma such that \(\forall \mathbf {x}\in \mathbb {R}^N. \left( \mathbf {x}\in \mathsf {Sat}(\varGamma )\rightarrow \widetilde{\mathfrak {e}'}(\mathbf {x})\ge t_{\mathfrak {e}'}\right) \) holds; if such \(t_{\mathfrak {e}'}\) exists, the algorithm adds the constraint \(w_{\mathfrak {e}}\ge \left\lfloor \frac{t_{\mathfrak {e}'}}{c}\right\rfloor \) into \(\varGamma \).

Recursive Step. If \(\mathfrak {e}=\lfloor \frac{\mathfrak {e}'}{c}\rfloor \) and \(\mathfrak {e}'\) involves some nested floored expression, then the algorithm proceeds almost in the same way as in the Base Step, except that \(\widetilde{\mathfrak {e}'}\) takes the role of \(\mathfrak {e}'\). (Note that \(\widetilde{\mathfrak {e}'}\) does not involve nested floored expressions.)

 7.
Emptiness Checking. The algorithm checks whether \(\mathsf {Sat}(\varGamma )\) is empty or not in polynomial time in the size of \(\varGamma \) (cf. [59]). If \(\mathsf {Sat}(\varGamma )=\emptyset \), then the algorithm discards this constraint triple with no linear inequalities generated, and proceeds to other constraint triples; otherwise, the algorithm proceeds to the remaining steps.
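As a quick sanity check on the approximation in Item 6, the two polynomials added for a floored expression with \(c\ge 1\) are indeed non-negative when the fresh variable takes the exact floor value. A minimal sketch (the concrete constants are our own illustration; the choice \(c=2\) matches \(\varGamma _3\) in Example 4 below):

```python
# Item 6 (Base Step), for c >= 1: from e'/c - (c-1)/c <= floor(e'/c) <= e'/c,
# the polynomials  e' - c*w  and  c*w - e' + c - 1  are added to Gamma.
for c in [1, 2, 3, 7]:
    for n in range(-100, 101):
        w = n // c  # Python's floor division is the mathematical floor
        assert n - c * w >= 0
        assert c * w - n + c - 1 >= 0
```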
Example 4

\(\widetilde{\mathfrak {q}_1}=(\mathsf {f}, n-1\ge 0, c_1\cdot n+c_2\cdot u+c_3)\) and \(\varGamma _1=\{n-1\}\);

\(\widetilde{\mathfrak {q}_2}=(\mathsf {f}, n-1\ge 0\wedge 1-n\ge 0, c_1\cdot n+c_2\cdot u+c_3-2)\) and \(\varGamma _2=\{n-1,1-n\}\);

\(\widetilde{\mathfrak {q}_3}=(\mathsf {f}, n-2\ge 0, c_1\cdot (n-w) +c_2\cdot (u-v)-2)\) and \(\varGamma _3=\{n-2,n-2\cdot w,2\cdot w-n+1,w-1\}\). \(\square \)
For the next substep we will use Lagrange’s Mean-Value Theorem (LMVT) [6, Chap. 6] to approximate logarithmic and exponentiation terms.
Step 4(b): Linear Constraints for Abstracted Variables. The second substep consists of the following computational steps which establish into \(\varGamma \) linear constraints for logarithmic or exponentiation terms. Below we denote by \(\mathcal {E}'\) either the set \(\mathcal {E}_L\) when \(\mathrm {op}=\log \) or \(\mathcal {E}_E\) when \(\mathrm {op}=\mathrm {exp}\). Recall the \(\widetilde{\mathfrak {e}}\) notation is defined in the Variable Substitution (Item 4) of Step 4(a).
 1.
Lower Bound for Expressions in \(\mathcal {E}_L\) . For each \(\mathfrak {e}\in \mathcal {E}_L\), we find the largest constant \(t_{\mathfrak {e}}\in \mathbb {R}\) such that the logical formula \(\forall \mathbf {x}\in \mathbb {R}^N. \left( \mathbf {x}\in \mathsf {Sat}(\varGamma )\rightarrow \widetilde{\mathfrak {e}}(\mathbf {x})\ge t_{\mathfrak {e}}\right) \) holds. This can be solved by Farkas’ Lemma and linear programming, since \(\widetilde{\mathfrak {e}}\) is linear. Note that as long as \(\mathsf {Sat}(\varGamma )\ne \emptyset \), it follows from (§) (in Step 1(b)) that \(t_\mathfrak {e}\) is well-defined (since \(t_\mathfrak {e}\) cannot be arbitrarily large) and \(t_{\mathfrak {e}}\ge 1\).
2. Mutual No-Smaller-Than Inequalities over \(\mathcal {E}_L\). For each pair \((\mathfrak {e},\mathfrak {e}')\in \mathcal {E}_L\times \mathcal {E}_L\) such that \(\mathfrak {e}\ne \mathfrak {e}'\), the algorithm finds real numbers \(r_{\left( \mathfrak {e},\mathfrak {e}'\right) },b_{(\mathfrak {e},\mathfrak {e}')}\) through Farkas’ Lemma and linear programming such that (i) \(r_{(\mathfrak {e},\mathfrak {e}')}\ge 0\) and (ii) both the logical formulae$$ \forall \mathbf {x}\in \mathbb {R}^N. \left[ \mathbf {x}\in \mathsf {Sat}(\varGamma )\rightarrow \widetilde{\mathfrak {e}}(\mathbf {x})-\left( r_{\mathfrak {e},\mathfrak {e}'}\cdot \widetilde{\mathfrak {e}'}(\mathbf {x}) +b_{\mathfrak {e},\mathfrak {e}'}\right) \ge 0\right] \quad \text { and } \quad \forall \mathbf {x}\in \mathbb {R}^N. \left[ \mathbf {x}\in \mathsf {Sat}(\varGamma ) \rightarrow r_{\mathfrak {e},\mathfrak {e}'}\cdot \widetilde{\mathfrak {e}'}(\mathbf {x}) +b_{\mathfrak {e},\mathfrak {e}'}\ge 1\right] $$hold. The algorithm first finds the maximal value \(r^*_{\mathfrak {e},\mathfrak {e}'}\) over all feasible \((r_{\mathfrak {e},\mathfrak {e}'},b_{\mathfrak {e},\mathfrak {e}'})\)’s, then finds the maximal \(b^*_{\mathfrak {e},\mathfrak {e}'}\) over all feasible \((r^*_{\mathfrak {e},\mathfrak {e}'},b_{\mathfrak {e},\mathfrak {e}'})\)’s. If no such \(r^*_{\mathfrak {e},\mathfrak {e}'}\) exists, the algorithm simply leaves \(r^*_{\mathfrak {e},\mathfrak {e}'}\) undefined. Note that once \(r^*_{\mathfrak {e},\mathfrak {e}'}\) exists and \(\mathsf {Sat}(\varGamma )\ne \emptyset \), then \(b^*_{\mathfrak {e},\mathfrak {e}'}\) exists as well, since \(b_{\mathfrak {e},\mathfrak {e}'}\) cannot be arbitrarily large once \(r^*_{\mathfrak {e},\mathfrak {e}'}\) is fixed.
3. Mutual No-Greater-Than Inequalities over \(\mathcal {E}_L\). For each pair \((\mathfrak {e},\mathfrak {e}')\in \mathcal {E}_L\times \mathcal {E}_L\) such that \(\mathfrak {e}\ne \mathfrak {e}'\), the algorithm finds real numbers \(\mathsf {r}_{\left( \mathfrak {e},\mathfrak {e}'\right) },\mathsf {b}_{(\mathfrak {e},\mathfrak {e}')}\) through Farkas’ Lemma and linear programming such that (i) \(\mathsf {r}_{(\mathfrak {e},\mathfrak {e}')}\ge 0\) and (ii) the logical formula$$ \forall \mathbf {x}\in \mathbb {R}^N. \left[ \mathbf {x}\in \mathsf {Sat}(\varGamma )\rightarrow \left( \mathsf {r}_{\mathfrak {e},\mathfrak {e}'}\cdot \widetilde{\mathfrak {e}'}(\mathbf {x}) +\mathsf {b}_{\mathfrak {e},\mathfrak {e}'}\right) -\widetilde{\mathfrak {e}}(\mathbf {x})\ge 0\right] $$holds. The algorithm then finds the minimal values \((\mathsf {r}^*_{\mathfrak {e},\mathfrak {e}'},\mathsf {b}^*_{\mathfrak {e},\mathfrak {e}'})\) similarly as above.
 4.
Constraints from Logarithm. For each variable \(u_{\mathfrak {e}}\), the algorithm adds into \(\varGamma \) first the polynomial expression \(\widetilde{\mathfrak {e}}-\left( \mathbf {1}_{t_{\mathfrak {e}}\le e}\cdot e+\mathbf {1}_{t_{\mathfrak {e}}> e}\cdot \frac{t_{\mathfrak {e}}}{\ln t_{\mathfrak {e}}}\right) \cdot u_{\mathfrak {e}}\), which is sound because the function \(z\mapsto \frac{z}{\ln {z}}\) (\(z> 1\)) attains its global minimum at \(z=e\), and then the polynomial expression \(u_{\mathfrak {e}}-\ln {t_{\mathfrak {e}}}\), due to the definition of \(t_{\mathfrak {e}}\).
5. Mutual No-Smaller-Than Inequalities over \(u_{\mathfrak {e}}\)’s. For each pair \((\mathfrak {e},\mathfrak {e}')\in \mathcal {E}_L\times \mathcal {E}_L\) such that \(\mathfrak {e}\ne \mathfrak {e}'\) and \(r^*_{\mathfrak {e},\mathfrak {e}'},b^*_{\mathfrak {e},\mathfrak {e}'}\) are successfully found with \(r^*_{\mathfrak {e},\mathfrak {e}'}>0\), the algorithm adds \(u_{\mathfrak {e}}-\ln {r^*_{\mathfrak {e},\mathfrak {e}'}}-u_{\mathfrak {e}'}+\mathbf {1}_{b^*_{\mathfrak {e},\mathfrak {e}'}< 0}\cdot {\left( t_{\mathfrak {e}'}+\frac{b^*_{\mathfrak {e},\mathfrak {e}'}}{r^*_{\mathfrak {e},\mathfrak {e}'}}\right) }^{-1}\cdot \left( -\frac{b^*_{\mathfrak {e},\mathfrak {e}'}}{r^*_{\mathfrak {e},\mathfrak {e}'}}\right) \) into \(\varGamma \). This is due to the fact that \([\![\mathfrak {e}]\!]-\left( r^*_{\mathfrak {e},\mathfrak {e}'}\cdot [\![\mathfrak {e}']\!]+b^*_{\mathfrak {e},\mathfrak {e}'}\right) \ge 0\) implies the following:$$\begin{aligned} \ln {[\![\mathfrak {e}]\!]}\ge & {} \ln {r^*_{\mathfrak {e},\mathfrak {e}'}}+\ln \left( [\![\mathfrak {e}']\!]+({b^*_{\mathfrak {e},\mathfrak {e}'}}/{r^*_{\mathfrak {e},\mathfrak {e}'}})\right) \\= & {} \ln {r^*_{\mathfrak {e},\mathfrak {e}'}}+\ln {[\![\mathfrak {e}']\!]}+\left( \ln { \left( [\![\mathfrak {e}']\!]+({b^*_{\mathfrak {e},\mathfrak {e}'}}/{r^*_{\mathfrak {e},\mathfrak {e}'}})\right) }-\ln {[\![\mathfrak {e}']\!]}\right) \\\ge & {} \ln {r^*_{\mathfrak {e},\mathfrak {e}'}}+\ln {[\![\mathfrak {e}']\!]}-\mathbf {1}_{b^*_{\mathfrak {e},\mathfrak {e}'}< 0}\cdot {\left( t_{\mathfrak {e}'}+({b^*_{\mathfrak {e},\mathfrak {e}'}}/{r^*_{\mathfrak {e},\mathfrak {e}'}})\right) }^{-1}\cdot \left( -{b^*_{\mathfrak {e},\mathfrak {e}'}}/{r^*_{\mathfrak {e},\mathfrak {e}'}}\right) , \end{aligned}$$where the last step is obtained from LMVT by distinguishing whether \(b^*_{\mathfrak {e},\mathfrak {e}'}\ge 0\) or not, using the fact that the derivative of the natural logarithm is the reciprocal function. Note that one has \(t_{\mathfrak {e}'}+\frac{b^*_{\mathfrak {e},\mathfrak {e}'}}{r^*_{\mathfrak {e},\mathfrak {e}'}}\ge 1\) due to the maximal choice of \(t_{\mathfrak {e}'}\).
 6.
Mutual NoGreaterThan Inequalities over \(u_{\mathfrak {e}}'s\). Similar to the previous item, the algorithm establishes mutual nogreaterthan inequalities over \(u_{\mathfrak {e}}\)’s.
Although Items 4–6 above involve logarithmic terms such as \(\ln t_{\mathfrak {e}}\) and \(\ln {r^*_{\mathfrak {e},\mathfrak {e}'}}\), both \(t_{\mathfrak {e}}\) and \({r^*_{\mathfrak {e},\mathfrak {e}'}}\) are already determined constants, hence their numerical approximations can be used. After Step 4, the constraint triple \((\mathsf {f}, \phi ,\mathfrak {e}^*)\) is transformed into \((\mathsf {f}, \bigwedge _{h\in \varGamma }h\ge 0,\widetilde{\mathfrak {e}^*})\).
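The soundness of the logarithm constraints in Item 4 rests on the inequality \(z\ge \kappa \cdot \ln z\) for all \(z\ge t\), with \(\kappa =e\) when \(t\le e\) and \(\kappa =t/\ln t\) otherwise. A small numerical check (the function name and sample values are our own illustration):

```python
import math

def kappa(t):
    """Coefficient from Item 4: z / ln(z) (z > 1) attains its global minimum
    e at z = e and is increasing on [e, oo), so z >= kappa(t) * ln(z) for z >= t >= 1."""
    return math.e if t <= math.e else t / math.log(t)

for t in [1.0, 2.0, math.e, 5.0, 100.0]:
    k = kappa(t)
    for i in range(0, 100000, 7):
        z = t + i / 100.0
        assert z - k * math.log(z) >= -1e-9  # soundness of the added polynomial e~ - kappa * u_e
```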
Example 5

(From Items 2 and 3 in Step 4(b) for \(\widetilde{\mathfrak {q}_3}\)) \(w\ge 0.5\cdot n-0.5\), \(w\le 0.5\cdot n\) and \(n\ge 2\cdot w\), \(n\le 2\cdot w+1\);

(From Item 4 in Step 4(b) for \(\widetilde{\mathfrak {q}_1},\widetilde{\mathfrak {q}_2}\)) \(n-e\cdot u\ge 0\) and \(u\ge 0\);

(From Item 4 in Step 4(b) for \(\widetilde{\mathfrak {q}_3}\)) \(n-e\cdot u\ge 0\), \(u-\ln {2}\ge 0\) and \(w-e\cdot v\ge 0\), \(v\ge 0\);

(From Items 5 and 6 in Step 4(b) for \(\widetilde{\mathfrak {q}_3}\)) \(u-v-\ln {2}\ge 0\) and \(v-u+\ln {2}+\frac{1}{2}\ge 0\).

\(\varGamma _1=\{n-1, n-e\cdot u, u\}\) and \(\varGamma _2=\{n-1, 1-n, n-e\cdot u, u\}\);

\(\varGamma _3=\{n-2,n-2\cdot w,2\cdot w-n+1,w-1,n-e\cdot u, u-\ln {2}, w-e\cdot v, v, u-v-\ln {2}, v-u+\ln {2}+\frac{1}{2}\}\). \(\square \)
Remark 1
The key difficulty is to handle logarithmic and exponentiation terms. In Step 4(a) we abstract such terms with fresh variables and perform sound approximation of floored expressions. In Step 4(b) we use Farkas’ Lemma and LMVT to soundly transform logarithmic or exponentiation terms to polynomials.
\(\square \)
4.5 Step 5 of SynAlgo
This step solves for the template variables in the template established in Step 1, based on the sets \(\varGamma \) computed in Step 4. While Step 4 transforms logarithmic and exponentiation terms into polynomials, we still need a sound method for handling polynomial constraints with linear programming. We achieve this with Handelman’s Theorem.
Definition 6
(Monoid). Let \(\varGamma \) be a finite subset of some polynomial ring \({\mathfrak {R}}{\left[ x_1,\dots , x_m\right] }\) such that all elements of \(\varGamma \) are polynomials of degree 1. The monoid of \(\varGamma \) is defined by: \(\textit{Monoid}(\varGamma ):=\left\{ \prod _{i=1}^k h_i \mid k\in \mathbb {N}_0\text { and }h_1,\dots ,h_k\in \varGamma \right\} \).
Theorem 3
(Handelman’s Theorem). Let \(\varGamma \) be a finite subset of some polynomial ring \({\mathfrak {R}}{\left[ x_1,\dots , x_m\right] }\) such that all elements of \(\varGamma \) are polynomials of degree 1 and \(\mathsf {Sat}(\varGamma )\) is compact and non-empty. Then for any polynomial g with \(g(\mathbf {x})>0\) for all \(\mathbf {x}\in \mathsf {Sat}(\varGamma )\), there exist \(d\in \mathbb {N}\), non-negative real numbers \(\lambda _1,\dots ,\lambda _d\) and \(h_1,\dots ,h_d\in \textit{Monoid}(\varGamma )\) such that$$ g=\sum _{i=1}^{d}\lambda _i\cdot h_i. \qquad (3) $$
Basically, Handelman’s Theorem gives a characterization of positive polynomials over polytopes. In this paper, we concentrate on Eq. (3), which provides a sound form for a non-negative polynomial over a general (i.e., possibly unbounded) polyhedron.
Step 5: Solving Unknown Coefficients in the Template. Now we use the input parameter k as the maximal number of multiplicands in each summand on the right-hand side of Eq. (3). For any constraint triple \((\mathsf {f},\phi ,\mathfrak {e}^*)\) which is generated in Step 3 and passes the emptiness check in Item 7 of Step 4(a), the algorithm performs the following steps.
 1.
Preparation for Eq. ( 3 ). The algorithm reads the set \(\varGamma \) for \((\mathsf {f},\phi ,\mathfrak {e}^*)\) computed in Step 4, and computes \(\widetilde{\mathfrak {e}^*}\) from Item 4 of Step 4(a).
 2.
Application of Handelman’s Theorem. First, the algorithm establishes a fresh coefficient variable \(\lambda _h\) for each polynomial h in \(\text {Monoid}(\varGamma )\) with no more than k multiplicands from \(\varGamma \). Then, the algorithm establishes linear equalities over the coefficient variables \(\lambda _h\) and the template variables in the template \(\eta \) established in Step 1 by equating coefficients of the same monomials on the left- and right-hand sides of the polynomial equality \(\widetilde{\mathfrak {e}^*}=\sum _{h} \lambda _h\cdot h\). Second, the algorithm incorporates all constraints of the form \(\lambda _h\ge 0\).
Then the algorithm collects all linear equalities and inequalities established in Item 2 above conjunctively as a single system of linear inequalities and solves it through linear-programming algorithms; if no feasible solution exists, the algorithm fails without output; otherwise, the algorithm outputs the function \(\widehat{[\![\eta ]\!]}\) in which all template variables of the template \(\eta \) are replaced by their values in the solution. We now state the soundness of our approach for the synthesis of measure functions.
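The bounded monoid used in Item 2 can be enumerated directly. The sketch below represents degree-1 polynomials as monomial-to-coefficient dictionaries and lists all products of at most k factors from \(\varGamma \); the encoding and helper names are our own illustration:

```python
from itertools import combinations_with_replacement

def poly_mul(p, q):
    """Multiply two polynomials given as {monomial-tuple: coefficient} dicts."""
    r = {}
    for m1, a in p.items():
        for m2, b in q.items():
            m = tuple(sorted(m1 + m2))
            r[m] = r.get(m, 0.0) + a * b
    return r

def monoid_up_to(gamma, k):
    """All products of at most k polynomials from gamma; k = 0 contributes the constant 1."""
    out = [{(): 1.0}]  # the empty product
    for size in range(1, k + 1):
        for combo in combinations_with_replacement(range(len(gamma)), size):
            prod = {(): 1.0}
            for i in combo:
                prod = poly_mul(prod, gamma[i])
            out.append(prod)
    return out

# Gamma = {n - 1, u} (a fragment of Gamma_1 from Example 4), k = 2:
gamma = [{("n",): 1.0, (): -1.0}, {("u",): 1.0}]
elems = monoid_up_to(gamma, 2)
# elements: 1, n-1, u, (n-1)^2, (n-1)*u, u^2  -> six in total
assert len(elems) == 6
```

Equating coefficients of \(\widetilde{\mathfrak {e}^*}\) against \(\sum _h \lambda _h\cdot h\) over such dictionaries then yields the linear system handed to the LP solver.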
Theorem 4
Our algorithm SynAlgo is a sound approach to the RecTermBou problem: if SynAlgo succeeds in synthesizing a function g on \(\left\{ (\mathsf {f},\ell ,\nu )\mid \mathsf {f}\in F , \ell \in L _{\mathrm {s}}^{\mathsf {f}}, \nu \in \textit{Val}_{\mathsf {f}}\right\} \), then \(\widehat{g}\) is a measure function and hence an upper bound on the termination-time function.
Example 6

(From \(\widetilde{\mathfrak {q}_1}\)) \(c_2\cdot u+c_3=\lambda _1\cdot u+ \lambda _2\) where \(\lambda _1:=\frac{2}{\ln {2}}\) and \(\lambda _2:=2\);

(From \(\widetilde{\mathfrak {q}_2}\)) \(c_2\cdot u+c_3-2=\lambda _1\cdot u\);

(From \(\widetilde{\mathfrak {q}_3}\)) \(c_2\cdot (u-v)-2=\lambda _1\cdot (u-v-\ln {2})\).
Hence by Theorem 1, \(\overline{T}(\mathsf {f}, 1, n)\le \eta (\mathsf {f},1,n)=\frac{2}{\ln {2}}\cdot \ln {n}+2\). It follows that BinarySearch runs in \(\mathcal {O}(\log {n})\) time in the worst case. \(\square \)
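The synthesized identity and the resulting bound can be checked numerically. In the sketch below, the recurrence \(T(1)=2\), \(T(n)=T(\lfloor n/2\rfloor )+2\) is the worst-case cost implicit in the formulae of Example 3; the comparison against \(\frac{2}{\ln 2}\cdot \ln n+2\) is our own sanity check, not part of the algorithm:

```python
import math

lam1 = c2 = 2.0 / math.log(2)
# Identity from q~_3: c_2*(u - v) - 2 = lam_1*(u - v - ln 2), for all u, v:
for u in [0.0, 1.0, 3.5]:
    for v in [0.0, 0.7, 2.0]:
        assert abs((c2 * (u - v) - 2) - lam1 * (u - v - math.log(2))) < 1e-9

def T(n):
    """Worst-case cost recurrence implicit in Example 3 (BinarySearch)."""
    return 2 if n == 1 else T(n // 2) + 2

for n in range(1, 2049):
    assert T(n) <= (2.0 / math.log(2)) * math.log(n) + 2 + 1e-9
```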
Remark 2
We remark on two aspects of our algorithm. (i) Scalability. Our algorithm only requires solving linear inequalities. Since linear-programming solvers are well studied and mature in practice, the scalability of our approach depends directly on that of the underlying solver; hence our approach is relatively scalable. (ii) Novelty. A key novelty of our approach is to obtain non-polynomial bounds (such as \(\mathcal {O}(n \log n)\) and \(\mathcal {O}(n^r)\) with r non-integral) through linear programming. The novel technical steps are: (a) the use of abstraction variables; (b) the use of LMVT and Farkas’ Lemma to obtain sound linear constraints over the abstraction variables; and (c) the use of Handelman’s Theorem to solve for the unknown coefficients in polynomial time. \(\square \)
5 Experimental Results
Experimental results, where \(\eta (\ell _0)\) is the part of the measure function at the initial label.

Example  Time (in seconds)  \(\eta (\ell _0)\)
MergeSort  6  \(25.02 \cdot n\cdot \ln {n} + 21.68\cdot n - 20.68\)
ClosestPair  11  \(128.85\cdot n \cdot \ln {n} + 108.95\cdot n - 53.31\)
Karatsuba  3  \(2261.55\cdot n^{1.6} + 1\)
Strassen  7  \(954.20\cdot n^{2.9} + 1\)
6 Related Work
Termination and other temporal properties of recursive programs have been extensively studied [5, 21, 22, 23, 27, 53, 54, 55, 66]. Our work is most closely related to automatic amortized analysis [4, 32, 40, 42, 43, 44, 45, 46, 50, 51, 63], as well as the SPEED project [35, 36, 37]. There are two key differences between our method and previous works. First, our method is based on extending ranking functions to nondeterministic recursive programs, whereas previous works use potential functions, abstract interpretation, or size-change. Second, while none of the previous methods can derive non-polynomial bounds such as \(\mathcal {O}(n^r)\) with r non-integral, our approach can derive such bounds through linear programming.
Ranking functions for intraprocedural analysis have been widely studied [8, 9, 20, 25, 58, 61, 64, 68], and have been extended to ranking supermartingales [12, 13, 14, 16, 17, 29] for probabilistic programs. Most works focus on linear or polynomial ranking functions/supermartingales [12, 14, 16, 20, 25, 58, 61, 64, 68]. Polynomial ranking functions alone can only derive polynomial bounds, and need additional structures (e.g., evaluation trees) to derive non-polynomial bounds such as \(\mathcal {O}(2^n)\) (cf. [10]). In contrast, we directly synthesize non-polynomial ranking functions without additional structures. The approach of recurrence relations for worst-case analysis is explored in [1, 2, 3, 30, 34]. A related result is by Albert et al. [2], who consider using evaluation trees for solving recurrence relations, which can derive the worst-case bound for MergeSort. Their method relies on specific features such as the branching factor and height of an evaluation tree, and cannot derive bounds like \(\mathcal {O}(n^r)\) with r non-integral. Another approach, through theorem proving, is explored in [65]: control-flow paths are generated iteratively, and worst-case bounds over the generated paths are then obtained through theorem proving (with arithmetic theorems). Several other works present proof rules for deterministic programs [39] as well as for probabilistic programs [49, 57]. None of these works can be automated. Other related approaches are sized types [18, 47, 48] and polynomial resource bounds [62]. Again, none of these approaches can yield bounds like \(\mathcal {O}(n \log n)\) or \(\mathcal {O}(n^r)\) for non-integral r.
7 Conclusion
In this paper, we developed a ranking-function-based approach to obtain non-polynomial worst-case bounds for recursive programs through (i) abstraction of logarithmic and exponentiation terms and (ii) Farkas’ Lemma, LMVT, and Handelman’s Theorem. Moreover, our approach obtains such bounds using linear programming and is therefore efficient. Our approach obtains nontrivial worst-case complexity bounds for classical recursive programs: \(\mathcal {O}(n\log {n})\) for both MergeSort and the divide-and-conquer ClosestPair algorithm, \(\mathcal {O}(n^{1.6})\) for Karatsuba’s algorithm for polynomial multiplication, and \(\mathcal {O}(n^{2.9})\) for Strassen’s algorithm for matrix multiplication. The bounds we obtain for Karatsuba’s and Strassen’s algorithms are close to the optimal ones. An interesting future direction is to extend our technique to data structures. Other future directions include investigating the application of our approach to invariant generation and using integer linear programming instead in our approach.
Acknowledgements
We thank all reviewers for valuable comments. The research is partially supported by Vienna Science and Technology Fund (WWTF) ICT15003, Austrian Science Fund (FWF) NFN Grant No. S11407N23 (RiSE/SHiNE), ERC Start grant (279307: Graph Games), the Natural Science Foundation of China (NSFC) under Grant No. 61532019 and the CDZ project CAP (GZ 1023).
References
 1.Albert, E., Arenas, P., Genaim, S., GómezZamalloa, M., Puebla, G., RamírezDeantes, D.V., RománDíez, G., Zanardini, D.: Termination and cost analysis with COSTA and its user interfaces. Electr. Notes Theor. Comput. Sci. 258(1), 109–121 (2009)CrossRefGoogle Scholar
 2.Albert, E., Arenas, P., Genaim, S., Puebla, G.: Automatic inference of upper bounds for recurrence relations in cost analysis. In: Alpuente, M., Vidal, G. (eds.) SAS 2008. LNCS, vol. 5079, pp. 221–237. Springer, Heidelberg (2008). doi: 10.1007/9783540691662_15 CrossRefGoogle Scholar
 3.Albert, E., Arenas, P., Genaim, S., Puebla, G., Zanardini, D.: Cost analysis of Java bytecode. In: Nicola, R. (ed.) ESOP 2007. LNCS, vol. 4421, pp. 157–172. Springer, Heidelberg (2007). doi: 10.1007/9783540713166_12 CrossRefGoogle Scholar
 4.Alias, C., Darte, A., Feautrier, P., Gonnord, L.: Multidimensional rankings, program termination, and complexity bounds of flowchart programs. In: Cousot, R., Martel, M. (eds.) SAS 2010. LNCS, vol. 6337, pp. 117–133. Springer, Heidelberg (2010). doi: 10.1007/9783642157691_8 CrossRefGoogle Scholar
 5.Alur, R., Chaudhuri, S.: Temporal reasoning for procedural programs. In: Barthe, G., Hermenegildo, M. (eds.) VMCAI 2010. LNCS, vol. 5944, pp. 45–60. Springer, Heidelberg (2010). doi: 10.1007/9783642113192_7 CrossRefGoogle Scholar
 6.Bartle, R.G., Sherbert, D.R.: Introduction to Real Analysis, 4th edn. Wiley, Hoboken (2011)zbMATHGoogle Scholar
 7.Bodík, R., Majumdar, R. (eds.): POPL. ACM, New York (2016)zbMATHGoogle Scholar
 8.Bournez, O., Garnier, F.: Proving positive almostsure termination. In: Giesl, J. (ed.) RTA 2005. LNCS, vol. 3467, pp. 323–337. Springer, Heidelberg (2005). doi: 10.1007/9783540320333_24 CrossRefGoogle Scholar
 9.Bradley, A.R., Manna, Z., Sipma, H.B.: Linear ranking with reachability. In: Etessami, K., Rajamani, S.K. (eds.) CAV 2005. LNCS, vol. 3576, pp. 491–504. Springer, Heidelberg (2005). doi: 10.1007/11513988_48 CrossRefGoogle Scholar
 10.Brockschmidt, M., Emmes, F., Falke, S., Fuhs, C., Giesl, J.: Analyzing runtime and size complexity of integer programs. ACM Trans. Program. Lang. Syst. 38(4), 13:1–13:50 (2016)CrossRefGoogle Scholar
 11.Castagna, G., Gordon, A.D. (eds.): POPL. ACM, New York (2017)Google Scholar
 12.Chakarov, A., Sankaranarayanan, S.: Probabilistic program analysis with martingales. In: Sharygina, N., Veith, H. (eds.) CAV 2013. LNCS, vol. 8044, pp. 511–526. Springer, Heidelberg (2013). doi: 10.1007/9783642397998_34 CrossRefGoogle Scholar
 13.Chatterjee, K., Fu, H.: Termination of nondeterministic recursive probabilistic programs. CoRR abs/1701.02944 (2017). http://arxiv.org/abs/1701.02944
 14.Chatterjee, K., Fu, H., Goharshady, A.K.: Termination analysis of probabilistic programs through Positivstellensatzs. In: Chaudhuri, S., Farzan, A. (eds.) CAV 2016. LNCS, vol. 9779, pp. 3–22. Springer, Cham (2016). doi: 10.1007/9783319415284_1 Google Scholar
 15.Chatterjee, K., Fu, H., Goharshady, A.K.: Nonpolynomial worstcase analysis of recursive programs. CoRR abs/1705.00317 (2017). https://arxiv.org/abs/1705.00317
 16.Chatterjee, K., Fu, H., Novotný, P., Hasheminezhad, R.: Algorithmic analysis of qualitative and quantitative termination problems for affine probabilistic programs. In: Bodík and Majumdar [7], pp. 327–342Google Scholar
 17.Chatterjee, K., Novotný, P., Žikelić, Đ.: Stochastic invariants for probabilistic termination. In: Castagna and Gordon [11], pp. 145–160Google Scholar
 18.Chin, W., Khoo, S.: Calculating sized types. HigherOrder Symbolic Comput. 14(2–3), 261–300 (2001)CrossRefzbMATHGoogle Scholar
 19. Colón, M.A., Sankaranarayanan, S., Sipma, H.B.: Linear invariant generation using non-linear constraint solving. In: Hunt, W.A., Somenzi, F. (eds.) CAV 2003. LNCS, vol. 2725, pp. 420–432. Springer, Heidelberg (2003). doi: 10.1007/978-3-540-45069-6_39
 20. Colón, M.A., Sipma, H.B.: Synthesis of linear ranking functions. In: Margaria, T., Yi, W. (eds.) TACAS 2001. LNCS, vol. 2031, pp. 67–81. Springer, Heidelberg (2001). doi: 10.1007/3-540-45319-9_6
 21. Cook, B., Podelski, A., Rybalchenko, A.: Termination proofs for systems code. In: Schwartzbach, M.I., Ball, T. (eds.) PLDI, pp. 415–426. ACM (2006)
 22. Cook, B., Podelski, A., Rybalchenko, A.: Summarization for termination: no return! Form. Methods Syst. Des. 35(3), 369–387 (2009)
 23. Cook, B., See, A., Zuleger, F.: Ramsey vs. lexicographic termination proving. In: Piterman, N., Smolka, S.A. (eds.) TACAS 2013. LNCS, vol. 7795, pp. 47–61. Springer, Heidelberg (2013). doi: 10.1007/978-3-642-36742-7_4
 24. Cormen, T.H., Leiserson, C.E., Rivest, R.L., Stein, C.: Introduction to Algorithms, 3rd edn. MIT Press, Cambridge (2009)
 25. Cousot, P.: Proving program invariance and termination by parametric abstraction, Lagrangian relaxation and semidefinite programming. In: Cousot, R. (ed.) VMCAI 2005. LNCS, vol. 3385, pp. 1–24. Springer, Heidelberg (2005). doi: 10.1007/978-3-540-30579-8_1
 26. Cousot, P., Cousot, R.: Abstract interpretation: a unified lattice model for static analysis of programs by construction or approximation of fixpoints. In: Graham, R.M., Harrison, M.A., Sethi, R. (eds.) POPL, pp. 238–252. ACM (1977)
 27. Cousot, P., Cousot, R.: An abstract interpretation framework for termination. In: Field, J., Hicks, M. (eds.) POPL, pp. 245–258. ACM (2012)
 28. Farkas, J.: A Fourier-féle mechanikai elv alkalmazásai (Hungarian). Mathematikai és Természettudományi Értesítő 12, 457–472 (1894)
 29. Fioriti, L.M.F., Hermanns, H.: Probabilistic termination: soundness, completeness, and compositionality. In: Rajamani, S.K., Walker, D. (eds.) POPL, pp. 489–501. ACM (2015)
 30. Flajolet, P., Salvy, B., Zimmermann, P.: Automatic average-case analysis of algorithms. Theor. Comput. Sci. 79(1), 37–109 (1991)
 31. Floyd, R.W.: Assigning meanings to programs. Math. Aspects Comput. Sci. 19, 19–33 (1967)
 32. Gimenez, S., Moser, G.: The complexity of interaction. In: Bodík and Majumdar [7], pp. 243–255
 33. Gödel, K., Kleene, S.C., Rosser, J.B.: On undecidable propositions of formal mathematical systems. Institute for Advanced Study, Princeton, NJ (1934)
 34. Grobauer, B.: Cost recurrences for DML programs. In: Pierce, B.C. (ed.) ICFP, pp. 253–264. ACM (2001)
 35. Gulavani, B.S., Gulwani, S.: A numerical abstract domain based on expression abstraction and max operator with application in timing analysis. In: Gupta, A., Malik, S. (eds.) CAV 2008. LNCS, vol. 5123, pp. 370–384. Springer, Heidelberg (2008). doi: 10.1007/978-3-540-70545-1_35
 36. Gulwani, S.: SPEED: symbolic complexity bound analysis. In: Bouajjani, A., Maler, O. (eds.) CAV 2009. LNCS, vol. 5643, pp. 51–62. Springer, Heidelberg (2009). doi: 10.1007/978-3-642-02658-4_7
 37. Gulwani, S., Mehra, K.K., Chilimbi, T.M.: SPEED: precise and efficient static estimation of program computational complexity. In: Shao, Z., Pierce, B.C. (eds.) POPL, pp. 127–139. ACM (2009)
 38. Handelman, D.: Representing polynomials by positive linear functions on compact convex polyhedra. Pacific J. Math. 132, 35–62 (1988)
 39. Hesselink, W.H.: Proof rules for recursive procedures. Formal Asp. Comput. 5(6), 554–570 (1993)
 40. Hoffmann, J., Aehlig, K., Hofmann, M.: Multivariate amortized resource analysis. ACM Trans. Program. Lang. Syst. 34(3), 14 (2012)
 41. Hoffmann, J., Aehlig, K., Hofmann, M.: Resource aware ML. In: Madhusudan, P., Seshia, S.A. (eds.) CAV 2012. LNCS, vol. 7358, pp. 781–786. Springer, Heidelberg (2012). doi: 10.1007/978-3-642-31424-7_64
 42. Hoffmann, J., Hofmann, M.: Amortized resource analysis with polymorphic recursion and partial big-step operational semantics. In: Ueda, K. (ed.) APLAS 2010. LNCS, vol. 6461, pp. 172–187. Springer, Heidelberg (2010). doi: 10.1007/978-3-642-17164-2_13
 43. Hoffmann, J., Hofmann, M.: Amortized resource analysis with polynomial potential. In: Gordon, A.D. (ed.) ESOP 2010. LNCS, vol. 6012, pp. 287–306. Springer, Heidelberg (2010). doi: 10.1007/978-3-642-11957-6_16
 44. Hofmann, M., Jost, S.: Static prediction of heap space usage for first-order functional programs. In: Aiken, A., Morrisett, G. (eds.) POPL, pp. 185–197. ACM (2003)
 45. Hofmann, M., Jost, S.: Type-based amortised heap-space analysis. In: Sestoft, P. (ed.) ESOP 2006. LNCS, vol. 3924, pp. 22–37. Springer, Heidelberg (2006). doi: 10.1007/11693024_3
 46. Hofmann, M., Rodriguez, D.: Efficient type-checking for amortised heap-space analysis. In: Grädel, E., Kahle, R. (eds.) CSL 2009. LNCS, vol. 5771, pp. 317–331. Springer, Heidelberg (2009). doi: 10.1007/978-3-642-04027-6_24
 47. Hughes, J., Pareto, L.: Recursion and dynamic data-structures in bounded space: towards embedded ML programming. In: Rémy, D., Lee, P. (eds.) ICFP, pp. 70–81. ACM (1999)
 48. Hughes, J., Pareto, L., Sabry, A.: Proving the correctness of reactive systems using sized types. In: Boehm, H., Steele Jr., G.L. (eds.) POPL, pp. 410–423. ACM Press (1996)
 49. Jones, C.: Probabilistic non-determinism. Ph.D. thesis, The University of Edinburgh (1989)
 50. Jost, S., Hammond, K., Loidl, H., Hofmann, M.: Static determination of quantitative resource usage for higher-order programs. In: Hermenegildo, M.V., Palsberg, J. (eds.) POPL, pp. 223–236. ACM (2010)
 51. Jost, S., Loidl, H.W., Hammond, K., Scaife, N., Hofmann, M.: “Carbon Credits” for resource-bounded computations using amortised analysis. In: Cavalcanti, A., Dams, D.R. (eds.) FM 2009. LNCS, vol. 5850, pp. 354–369. Springer, Heidelberg (2009). doi: 10.1007/978-3-642-05089-3_23
 52. Knuth, D.E.: The Art of Computer Programming, vols. I–III. Addison-Wesley, Reading (1973)
 53. Kuwahara, T., Terauchi, T., Unno, H., Kobayashi, N.: Automatic termination verification for higher-order functional programs. In: Shao, Z. (ed.) ESOP 2014. LNCS, vol. 8410, pp. 392–411. Springer, Heidelberg (2014). doi: 10.1007/978-3-642-54833-8_21
 54. Lee, C.S.: Ranking functions for size-change termination. ACM Trans. Program. Lang. Syst. 31(3), 10:1–10:42 (2009)
 55. Lee, C.S., Jones, N.D., Ben-Amram, A.M.: The size-change principle for program termination. In: Hankin, C., Schmidt, D. (eds.) POPL, pp. 81–92. ACM (2001)
 56. lp_solve 5.5.2.3 (2016). http://lpsolve.sourceforge.net/5.5/
 57. Olmedo, F., Kaminski, B.L., Katoen, J., Matheja, C.: Reasoning about recursive probabilistic programs. In: Grohe, M., Koskinen, E., Shankar, N. (eds.) LICS, pp. 672–681. ACM (2016)
 58. Podelski, A., Rybalchenko, A.: A complete method for the synthesis of linear ranking functions. In: Steffen, B., Levi, G. (eds.) VMCAI 2004. LNCS, vol. 2937, pp. 239–251. Springer, Heidelberg (2004). doi: 10.1007/978-3-540-24622-0_20
 59. Schrijver, A.: Theory of Linear and Integer Programming. Wiley-Interscience Series in Discrete Mathematics and Optimization. Wiley, Hoboken (1999)
 60. Schrijver, A.: Combinatorial Optimization: Polyhedra and Efficiency. Springer, Heidelberg (2003)
 61. Shen, L., Wu, M., Yang, Z., Zeng, Z.: Generating exact nonlinear ranking functions by symbolic-numeric hybrid method. J. Syst. Sci. Complex. 26(2), 291–301 (2013)
 62. Shkaravska, O., Kesteren, R., Eekelen, M.: Polynomial size analysis of first-order functions. In: Rocca, S.R. (ed.) TLCA 2007. LNCS, vol. 4583, pp. 351–365. Springer, Heidelberg (2007). doi: 10.1007/978-3-540-73228-0_25
 63. Sinn, M., Zuleger, F., Veith, H.: A simple and scalable static analysis for bound analysis and amortized complexity analysis. In: Biere, A., Bloem, R. (eds.) CAV 2014. LNCS, vol. 8559, pp. 745–761. Springer, Cham (2014). doi: 10.1007/978-3-319-08867-9_50
 64. Sohn, K., Gelder, A.V.: Termination detection in logic programs using argument sizes. In: Rosenkrantz, D.J. (ed.) PODS, pp. 216–226. ACM Press (1991)
 65. Srikanth, A., Sahin, B., Harris, W.R.: Complexity verification using guided theorem enumeration. In: Castagna and Gordon [11], pp. 639–652
 66. Urban, C.: The abstract domain of segmented ranking functions. In: Logozzo, F., Fähndrich, M. (eds.) SAS 2013. LNCS, vol. 7935, pp. 43–62. Springer, Heidelberg (2013). doi: 10.1007/978-3-642-38856-9_5
 67. Wilhelm, R., et al.: The worst-case execution-time problem - overview of methods and survey of tools. ACM Trans. Embed. Comput. Syst. 7(3), 1–53 (2008)
 68. Yang, L., Zhou, C., Zhan, N., Xia, B.: Recent advances in program verification through computer algebra. Front. Comput. Sci. China 4(1), 1–16 (2010)