
TcT: Tyrolean Complexity Tool

  • Martin Avanzini
  • Georg Moser
  • Michael Schaper
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9636)

Abstract

In this paper we present \(\textsf {TcT}\) v3.0, the latest version of our fully automated complexity analyser. \(\textsf {TcT}\) implements our framework for automated complexity analysis and focuses on extensibility and automation. \(\textsf {TcT}\) is open with respect to the input problem under investigation and the resource metric in question. It is the most powerful tool in the realm of automated complexity analysis of term rewrite systems. Moreover, it provides an expressive problem-independent strategy language that facilitates proof search. We give insights into design choices and the implementation of the framework, and report on different case studies where we have applied \(\textsf {TcT}\) successfully.

1 Introduction

Automatically checking programs for correctness has attracted the attention of the computer science research community since the birth of the discipline. Not all properties of interest are functional, however; among the non-functional ones, notable cases are bounds on the amount of resources (like time, memory, and power) a program needs when executed. A variety of verification techniques have been employed in this context, like abstract interpretations, model checking, type systems, program logics, or interactive theorem provers; see [1, 2, 3, 12, 13, 14, 15, 16, 21, 25, 27] for some pointers.

In this paper, we present \(\textsf {TcT}\) v3.0, the latest version of our fully automated complexity analyser. \(\textsf {TcT}\) is open source, released under the BSD3 license, and available at

                              http://cl-informatik.uibk.ac.at/software/tct/.

\(\textsf {TcT}\) features a standard command line interface, an interactive interface, and a web interface. In the setup of the complexity analyser, \(\textsf {TcT}\) follows a transformational approach, depicted in Fig. 1. First, the input program, in relation to the resource of interest, is transformed into an abstract representation. We refer to the result of applying such a transformation as an abstract program. It has to be guaranteed that the employed transformations are complexity reflecting, that is, that a resource bound on the obtained abstract program reflects the resource usage of the input program. More precisely, the analysis deals with a general complexity problem that consists of a program together with the resource metric of interest. Second, we employ problem specific techniques to derive bounds on the given problem, and finally, the result of the analysis, i.e. a complexity bound or a notice of failure, is relayed back to the original program. We emphasise that \(\textsf {TcT}\) does not make use of a unique abstract representation, but is designed to employ a variety of different representations. Moreover, different representations may interact with each other. This improves the modularity of the approach and provides scalability and precision of the overall analysis. For now we make use of integer transition systems (ITSs for short) and various forms of term rewrite systems (TRSs for short), not necessarily first-order. Currently, we are in the process of developing dedicated techniques for the analysis of higher-order rewrite systems (HRSs for short), which should eventually become another abstract representation subject to resource analysis (depicted as \(\texttt {tct-hrs}\) in the figure).

Concretising this abstract setup, \(\textsf {TcT}\) currently provides a fully automated runtime complexity analysis of pure \(\texttt {OCaml}\) programs, as well as a runtime analysis of object-oriented bytecode programs. Furthermore, the tool provides runtime and size analysis of ITSs, as well as complexity analysis of first-order rewrite systems. With respect to the latter application, \(\textsf {TcT}\) is the most powerful complexity analyser of its kind. The latest version is a complete reimplementation of the tool that takes full advantage of the abstract complexity framework [6, 7] introduced by the first and second author. \(\textsf {TcT}\) is open with respect to the complexity problem under investigation and to problem specific techniques for the resource analysis. Moreover, it provides an expressive, problem independent strategy language that facilitates proof search. In this paper, we give insights into design choices and the implementation of the framework, and report on different case studies where we have applied \(\textsf {TcT}\) successfully.
Fig. 1. Complexity analyser \(\textsf {TcT}\).

Development Cycle. \(\textsf {TcT}\) was envisioned as a dedicated tool for the automated complexity analysis of first-order term rewrite systems. The first version was made available in 2008. Since then, \(\textsf {TcT}\) has successfully taken part in the complexity categories of TERMCOMP. The competition results show that \(\textsf {TcT}\) is the most powerful complexity solver for TRSs. The previous version [5] conceptually corresponds to the \(\texttt {tct-trs}\) component depicted in Fig. 1. The reimplementation of \(\textsf {TcT}\) was mainly motivated by the following observations:
  • automated resource analysis of programming languages is typically done by establishing complexity reflecting abstractions to formal systems

  • the complexity framework is general enough to integrate those abstractions as transformations of the original program

  • modularity and decomposition can be represented independently of the analysed complexity problem

We have rewritten the tool from scratch to integrate and extend all the ideas that were collected and implemented in previous versions in a clean and structured way. The new tool builds upon a small core (\(\texttt {tct-core}\)) that provides an expressive strategy language with a clearly defined semantics, and is, as envisioned, open with respect to the type of the complexity problem.

Structure. The remainder of the paper is structured as follows. In the next section, we provide an overview of the design choices behind the resource analysis in \(\textsf {TcT}\), that is, we inspect the middle part of Fig. 1. In Sect. 3 we revisit our abstract complexity framework, which is the theoretical foundation of the core of \(\textsf {TcT}\) (\(\texttt {tct-core}\)). Section 4 provides details about the implementation of the complexity framework, and Sect. 5 presents four different use cases that show how the complexity framework can be instantiated, among them the instantiation for higher-order programs (\(\texttt {tct-hoca}\)) as well as the instantiation for complexity analysis of TRSs (\(\texttt {tct-trs}\)). Finally, we conclude in Sect. 6.

2 Architectural Overview

In this section we give an overview of the architecture of our complexity analyser. All components of \(\textsf {TcT}\) are written in the strongly typed, lazy functional programming language Haskell and released open source under BSD3. Our current code base consists of approximately 12,000 lines of code, excluding external libraries. The core constitutes roughly 17% of our code base, 78% of the code is dedicated to complexity techniques, and the remaining 5% comprises interfaces to external tools, such as \(\texttt {CeTA}\) and SMT solvers, and common utility functions.
Fig. 2. Architectural overview of \(\textsf {TcT}\).

As depicted in Fig. 1, the implementation of \(\textsf {TcT}\) is divided into separate components for the different kinds of programs, and abstractions thereof, that are supported. These separate components are no islands, however. Rather, they instantiate our abstract framework for complexity analysis [6], from which \(\textsf {TcT}\) derives its power and modularity. In short, in this framework complexity techniques are modelled as complexity processors that give rise to a set of inferences over complexity proofs. From a completed complexity proof, a complexity bound can be inferred. The theoretical foundations of this framework are given in Sect. 3.

The abstract complexity framework is implemented in \(\textsf {TcT}\)'s core library, termed \(\texttt {tct-core}\), which is depicted at the bottom layer of Fig. 2. Centrally, it provides a common notion of a proof state, viz. proof trees, and an interface for specifying processors. Furthermore, \(\texttt {tct-core}\) complements the framework with a simple but powerful strategy language. Strategies play the role of tactics in interactive theorem provers like Isabelle or Coq. They allow us to turn a set of processors into a sophisticated complexity analyser. The implementation details of the core library are provided in Sect. 4.

The complexity framework implemented in our core library leaves the type of complexity problem, consisting of the analysed program together with the resource metric of interest, abstract. Rather, concrete complexity problems are provided by concrete instances, such as the two instances \(\texttt {tct-hoca}\) and \(\texttt {tct-trs}\) depicted in Fig. 2. We will look at some particular instances in detail in Sect. 5. Instances implement complexity techniques on defined problem types in the form of complexity processors, possibly relying on external libraries and tools such as SMT solvers. Optionally, instances may also specify strategies that compose the provided processors. Bridges between instances are easily specified as processors that implement conversions between problem types defined in different instances. For example, our instance \(\texttt {tct-hoca}\), which deals with the runtime analysis of pure \(\texttt {OCaml}\) programs, makes use of the instance \(\texttt {tct-trs}\). Thus our system is open to the seamless integration of alternative problem types through the specification of new instances. As an example, we mention the envisioned instance \(\texttt {tct-hrs}\) (see Fig. 1), which should incorporate dedicated techniques for the analysis of HRSs. We intend to use \(\texttt {tct-hrs}\) in future versions for the analysis of functional programs.

3 A Formal Framework for Complexity Analysis

We now briefly outline the theoretical framework upon which our complexity analyser \(\textsf {TcT}\) is based. As mentioned before, both the input language (e.g. \(\texttt {Java}\), \(\texttt {OCaml}\), ...) and the resource under consideration (e.g. execution time, heap usage, ...) are kept abstract in our framework. That is, we assume that we are dealing with an abstract class of complexity problems, where, however, each complexity problem \(\mathcal {P}\) from this class is associated with a complexity function \(\mathsf {cp}_{\mathcal {P}}\,:\,D \rightarrow D\), for a complexity domain D. Usually, the complexity domain D will be the set of natural numbers \(\mathbb {N}\); however, more sophisticated choices of complexity functions, such as those proposed by Danner et al. [11], fall into the realm of our framework.

In a concrete setting, the complexity problem \(\mathcal {P}\) could denote, for instance, a \(\texttt {Java}\) program. If we are interested in heap usage, then \(D = \mathbb {N}\) and \(\mathsf {cp}_{\mathcal {P}}\,:\,\mathbb {N}\rightarrow \mathbb {N}\) denotes the function that describes the maximal heap usage of \(\mathcal {P}\) in the sizes of the program inputs. As indicated in the introduction, any transformational solver converts concrete programs into abstract ones, if not already interfaced with an abstract program. Based on the possibly abstracted complexity problem \(\mathcal {P}\), the analysis continues using a set of complexity techniques. In particular, a reasonable solver will also integrate some form of decomposition technique, transforming an intermediate problem into various smaller sub-problems, and analyse these sub-problems separately, either again by some form of decomposition method, or eventually by some base technique which infers a suitable resource bound. Of course, at any stage in this transformation chain, a solver needs to keep track of computed complexity bounds, and relay these back to the initial problem.

To support this kind of reasoning, it is convenient to formalise the internals of a complexity analyser as an inference system over complexity judgements. In our framework, a complexity judgement has the shape \({}\vdash \mathcal {P}\mathrel {:}B\), where \(\mathcal {P}\) is a complexity problem and B is a set of bounding functions \(f\,:\,D \rightarrow D\) for a complexity domain D. Such a judgement is valid if the complexity function of \(\mathcal {P}\) lies in B, that is, \(\mathsf {cp}_{\mathcal {P}} \in B\). Complexity techniques are modelled as processors in our framework. A processor defines a transformation of the input problem \(\mathcal {P}\) into a list of sub-problems \(\mathcal {Q}_1,\dots ,\mathcal {Q}_n\) (if any), and it relates the complexity of the obtained sub-problems to the complexity of the input problem. Processors are given as inferences
$$ \frac{Pre(\mathcal {P})\quad {}\vdash \mathcal {Q}_{1}\mathrel {:}B_{1}\quad \cdots \quad {}\vdash \mathcal {Q}_{n}\mathrel {:}B_{n}}{{}\vdash \mathcal {P}\mathrel {:}B}, $$
where \(Pre(\mathcal {P})\) indicates some pre-conditions on \(\mathcal {P}\). The processor is sound if under \(Pre(\mathcal {P})\) the validity of judgements is preserved, i.e.
$$ Pre(\mathcal {P}) \wedge \mathsf {cp}_{\mathcal {Q}_1} \in B_1 \wedge \cdots \wedge \mathsf {cp}_{\mathcal {Q}_n} \in B_n \quad ~\Longrightarrow ~\quad \mathsf {cp}_{\mathcal {P}} \in B . $$
Dually, it is called complete if, under the assumptions \(Pre(\mathcal {P})\), validity of the judgement \({}\vdash \mathcal {P}\mathrel {:}B\) implies validity of the judgements \({}\vdash \mathcal {Q}_i\mathrel {:}B_i\).
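
For illustration (our example, not drawn from [6]), a base technique such as the polynomial interpretation processor used in Sect. 5.1 can be modelled as an inference without premises:
$$ \frac{\mathcal {P} \text { is compatible with a strictly monotone polynomial interpretation of degree } k}{{}\vdash \mathcal {P}\mathrel {:}{{\mathrm{\mathcal {O}}}}(n^k)} . $$
Soundness follows since every rewrite step strictly decreases the interpretation of the current term and, under suitable restrictions on the interpretation of constructors, the interpretation of a starting term of size n is bounded by a polynomial of degree k in n.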

A proof of a judgement \({}\vdash \mathcal {P}\mathrel {:}B\) from the assumptions \({}\vdash \mathcal {Q}_1\mathrel {:}B_1, \dots , {}\vdash \mathcal {Q}_n\mathrel {:}B_n\) is a deduction using sound processors only. The proof is closed if its set of assumptions is empty. Soundness of processors guarantees that our formal system is correct. Application of complete processors on a valid judgement ensures that no invalid assumptions are derived. In this sense, the application of a complete processor is always safe.

Proposition 1

If there exists a closed complexity proof \({}\vdash \mathcal {P}\mathrel {:}B\), then the judgement \({}\vdash \mathcal {P}\mathrel {:}B\) is valid.

4 Implementing the Complexity Framework

The formal complexity framework described in the last section is implemented in the core library, termed \(\texttt {tct-core}\). In the following we outline the two central components of this library: (i) the generation of complexity proofs, and (ii) common facilities for instantiating the framework to concrete tools, see Fig. 2.

4.1 Proof Trees, Processors, and Strategies

The library \(\texttt {tct-core}\) provides the means to verify a complexity judgement \({}\vdash \mathcal {P}\mathrel {:}B\) for a given input problem \(\mathcal {P}\). More precisely, the library provides the environment to construct a complexity proof witnessing the validity of \({}\vdash \mathcal {P}\mathrel {:}B\).

Since the class B of bounding functions is a result of the analysis, and not an input, the complexity proof can only be constructed once the analysis has finished successfully. For this reason, proofs are not directly represented as trees over complexity judgements. Rather, the library features proof trees. Conceptually, a proof tree is a tree whose leaves are labelled by open complexity problems, that is, problems which remain to be analysed, and whose internal nodes represent successful applications of processors. The complexity analysis of a problem \(\mathcal {P}\) then amounts to the expansion of the proof tree whose single node is labelled by the open problem \(\mathcal {P}\). Processors implement a single expansion step. To facilitate the expansion of proof trees, \(\texttt {tct-core}\) features a rich strategy language, similar to the tactic languages of interactive theorem provers like Isabelle or Coq. Once a proof tree has been completely expanded, a complexity judgement for \(\mathcal {P}\), together with the witnessing complexity proof, can be computed from the proof tree.

In the following, we detail the central notions of proof tree, processor and strategy, and elaborate on important design issues.

Proof Trees: The first design issue we face is the representation of complexity problems. In earlier versions of \(\textsf {TcT}\), we used a concrete problem type that captured various notions of complexity problems, all of which were based on term rewriting. With the addition of new kinds of complexity problems, such as the runtime of functional programs or the heap size of imperative programs, this approach soon became infeasible. In the present reimplementation, we therefore abstract over problem types, at the cost of slightly complicating central definitions. This allows concrete instantiations to precisely specify which problem types are supported. Consequently, proof trees are parameterised in the type of complexity problems.
Fig. 3. Data-type declaration of proof trees in \(\texttt {tct-core}\).

The corresponding (generalised) algebraic data-type \(\texttt {ProofTree}\) \(\alpha \) (from module Tct.Core.Data.ProofTree) is depicted in Fig. 3. The constructor \(\texttt {Open}\) represents a leaf labelled by an open problem of type \(\alpha \). The ternary constructor \(\texttt {Success}\) represents the successful application of a processor of type \(\beta \). Its first argument carries the applied processor, the complexity problem under investigation, as well as a proof-object. This information is useful for proof analysis, and allows a detailed textual representation of proof trees. Note that the proof-object's type is given by a type-level function; the concrete representation of a proof-object thus depends on the type of the applied processor. The second argument to \(\texttt {Success}\) is a certificate-function, which is used to relate the estimated complexity of the generated sub-problems to that of the analysed complexity problem. Thus currently, the set of bounding-functions B occurring in the final complexity proof is fixed to those expressed by the data-type \(\texttt {Certificate}\) (module Tct.Core.Data.Certificate). \(\texttt {Certificate}\) includes various representations of complexity classes, such as the classes of polynomial, exponential, primitive recursive and multiple recursive functions, but also the more fine-grained classes of bounding-functions \({{\mathrm{\mathcal {O}}}}(n^k)\) for all \(k \in \mathbb {N}\). The remaining argument to the constructor \(\texttt {Success}\) is a forest of proof trees, each individual proof tree representing the continuation of the analysis of a corresponding sub-problem generated by the applied processor. Finally, the constructor \(\texttt {Failure}\) indicates that the analysis failed. It results, for example, from the application of a processor to an open problem which does not satisfy the pre-conditions of the processor. Its argument allows a textual representation of the failure-condition. The analysis will always abort on proof trees containing such a failure node.
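
Since the figure itself is not reproduced in this rendering, the following sketch reconstructs the declaration from the description above; the constructor names and the auxiliary types (\(\texttt {ProofNode}\), \(\texttt {CertificateFn}\), \(\texttt {Reason}\), \(\texttt {Forest}\)) are our guesses and may deviate from the actual module.

    {-# LANGUAGE GADTs #-}
    -- Reconstruction of Fig. 3; names are guesses based on the prose,
    -- not the actual tct-core source.
    data ProofTree a where
      -- leaf labelled by an open problem of type a
      Open    :: a -> ProofTree a
      -- successful application of a processor p: the proof node carries the
      -- processor, the analysed problem and its proof-object; the certificate
      -- function relates bounds on the sub-problems to a bound on the problem;
      -- the forest holds the continuation of the analysis per sub-problem
      Success :: Processor p
              => ProofNode p -> CertificateFn -> Forest (ProofTree a) -> ProofTree a
      -- the analysis failed; the reason allows a textual representation
      Failure :: Reason -> ProofTree a

    type Forest a      = [a]
    type CertificateFn = [Certificate] -> Certificate   -- assumed shape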
Fig. 4. Data-type and class definitions related to processors in \(\texttt {tct-core}\).

Processors. The interface for processors is specified by the type-class \(\texttt {Processor}\), which is defined in module Tct.Core.Data.Processor and depicted in Fig. 4. The types of the input problem and of the generated sub-problems are defined for each processor individually, through the type-level functions \(\texttt {In}\) and \(\texttt {Out}\), respectively. This eliminates the need for a global problem type, and facilitates the seamless combination of different instantiations of the core library. Each processor instance additionally specifies the type of its proof-objects, the meta information provided in case of a successful application. The proof-object is constrained so that, among other things, a textual representation can be obtained. Each instance of \(\texttt {Processor}\) has to implement a method execute which, given an input problem of type \(\texttt {In}\) \(\alpha \), evaluates to a \(\texttt {TctM}\) action that produces the result of the application. The monad \(\texttt {TctM}\) (defined in module Tct.Core.Data.TctM) extends the \(\texttt {IO}\) monad with access to runtime information, such as command line parameters and execution time. The result data-type specifies the outcome of applying a processor to its input problem. In case of a successful application, the return value carries the proof-object, a certificate-function, which relates complexity bounds on the sub-problems to bounds on the input problem, and the list of generated sub-problems. In fact, the type is slightly more liberal and allows, for each generated sub-problem, a possibly open proof tree. This generalisation is useful in certain contexts, for example, when the processor makes use of a second processor.
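
Again, as the figure is not reproduced, here is a reconstruction from the prose; the names \(\texttt {ProofObject}\), \(\texttt {ProofData}\), \(\texttt {Return}\) and the constructors of the result type are our guesses.

    {-# LANGUAGE TypeFamilies #-}
    -- Reconstruction of Fig. 4; details may deviate from the actual source.
    class (Show p, ProofData (ProofObject p)) => Processor p where
      type ProofObject p   -- meta information for a successful application
      type In p            -- type of the input problem
      type Out p           -- type of the generated sub-problems
      execute :: p -> In p -> TctM (Return p)

    -- outcome of applying a processor to its input problem
    data Return p
      = Halt                                             -- pre-conditions violated
      | Yield (ProofObject p) CertificateFn [ProofTree (Out p)]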

Strategies. To facilitate the expansion of a proof tree, \(\texttt {tct-core}\) features a simple but expressive strategy language. The strategy language is deeply embedded, via the generalised algebraic data-type \(\texttt {Strategy}\) \(\alpha \) \(\beta \) defined in Fig. 5. Semantics over strategies are given by an evaluation function defined in module Tct.Core.Data.Strategy: a strategy of type \(\texttt {Strategy}\) \(\alpha \) \(\beta \) translates a proof tree with open problems of type \(\alpha \) to one with open problems of type \(\beta \).
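
Presumably, this evaluation function has a type along the following lines (the name and exact type are our guesses):

    -- Assumed shape of the evaluation function of Tct.Core.Data.Strategy.
    evaluate :: Strategy a b -> ProofTree a -> TctM (ProofTree b)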
Fig. 5. Deep embedding of our strategy language in \(\texttt {tct-core}\).

Fig. 6. Derived sequential strategy combinators.

Fig. 7. Some laws obeyed by the derived operators.

The first four primitives defined in Fig. 5 constitute our tool box for modelling the sequential application of processors. The strategy \(\texttt {Id}\) is implemented by the identity function on proof trees. The remaining three primitives traverse the given proof tree in-order, acting on all the open proof-nodes. The strategy \(\texttt {Apply}\) p replaces the given open proof-node with the proof tree resulting from an application of the processor p. The strategy \(\texttt {Abort}\) signals that the computation should be aborted, replacing the given proof-node by a failure node. Finally, the strategy \(\texttt {Cond}\) predicate s1 s2 s3 implements a very specific conditional: it sequences the application of strategies s1 and s2, provided the proof tree computed by s1 satisfies the predicate predicate; otherwise, the conditional acts like the third strategy s3.
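
The declaration of Fig. 5 is not reproduced in this rendering; the following reconstruction covers all primitives discussed in this subsection, including the parallel and state-dependent ones described below. All constructor names, as well as the name \(\texttt {TctStatus}\), are our guesses.

    -- Reconstruction of Fig. 5; constructor names are guesses.
    data Strategy a b where
      Id      :: Strategy a a                                  -- identity on proof trees
      Apply   :: Processor p => p -> Strategy (In p) (Out p)   -- expand open nodes with p
      Abort   :: Strategy a b                                  -- replace open nodes by failures
      Cond    :: (ProofTree c -> Bool)                         -- conditional sequencing
              -> Strategy a c -> Strategy c b -> Strategy a b -> Strategy a b
      Par     :: Strategy a b -> Strategy a b                  -- all open problems in parallel
      Race    :: Strategy a b -> Strategy a b -> Strategy a b  -- first non-failing tree wins
      Better  :: (ProofTree b -> ProofTree b -> Ordering)      -- pick a tree by comparison
              -> Strategy a b -> Strategy a b -> Strategy a b
      Timeout :: Int -> Strategy a b -> Strategy a b           -- abort when the budget expires
      WithState :: (TctStatus -> Strategy a b) -> Strategy a b -- dynamic strategy creation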

In Fig. 6 we showcase the definition of derived sequential strategy combinators. Sequencing s1 \(\ggg \) s2 of strategies s1 and s2, as well as a (left-biased) choice operator s1 \(<\!|\!>\) s2, are derived from the conditional primitive. The strategy try s behaves like s, except that when s fails, try s behaves as an identity. The combinator force complements the combinator try: the strategy force s enforces that strategy s produces a new proof-node. The combinator try brings backtracking to our strategy language: the strategy try s1 \(\ggg \) s2 first applies strategy s1, backtracks in case of failure, and applies s2 afterwards. Finally, the strategy exhaustive s applies s zero or more times, until s fails. The combinator exhaustive+ behaves similarly, but applies the given strategy at least once. The obtained combinators satisfy the expected laws; compare Fig. 7 for an excerpt.
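
Under the reconstruction above, the derived combinators of Fig. 6 admit definitions along the following lines (ours, chosen to match the described behaviour; the original definitions may differ):

    -- Possible definitions of the derived combinators of Fig. 6.
    (>>>) :: Strategy a c -> Strategy c b -> Strategy a b
    s1 >>> s2 = Cond (const True) s1 s2 Abort      -- always continue with s2

    (<|>) :: Strategy a b -> Strategy a b -> Strategy a b
    s1 <|> s2 = Cond (not . failed) s1 Id s2       -- fall back to s2 on failure

    try :: Strategy a a -> Strategy a a
    try s = s <|> Id                               -- identity when s fails

    force :: Strategy a a -> Strategy a a
    force s = Cond progressed s Id Abort           -- demand a new proof-node

    exhaustive, exhaustivePlus :: Strategy a a -> Strategy a a
    exhaustive s     = try (s >>> exhaustive s)    -- zero or more applications
    exhaustivePlus s = s >>> exhaustive s          -- at least one application

    -- predicates on the proof tree computed by the first strategy
    failed, progressed :: ProofTree a -> Bool
    failed (Failure _)      = True
    failed (Success _ _ ts) = any failed ts
    failed (Open _)         = False
    progressed (Open _)     = False                -- a bare open leaf is no progress
    progressed _            = True

Note that exhaustive unfolds lazily; thanks to Haskell's non-strict semantics, the infinite strategy term is only demanded step by step during evaluation.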

Our strategy language also features three dedicated primitives for parallel proof search. The strategy \(\texttt {Par}\) s implements a form of data-level parallelism, applying strategy s to all open problems in the given proof tree in parallel. In contrast, the strategies \(\texttt {Race}\) s1 s2 and \(\texttt {Better}\) comp s1 s2 apply the strategies s1 and s2 to each open problem concurrently, and can be seen as parallel versions of our choice operator. Whereas \(\texttt {Race}\) s1 s2 simply returns the (non-failing) proof tree of whichever strategy returns first, \(\texttt {Better}\) comp s1 s2 uses the provided comparison-function comp to decide which proof tree to return.

The final two strategies depicted in Fig. 5 implement timeouts, and the dynamic creation of strategies depending on the current state of the analyser. This state includes global information, such as command line flags and the execution time, but also proof-relevant information such as the current problem under investigation.

4.2 From the Core to Executables

The framework is instantiated by providing a set of sound processors, together with their corresponding input and output types. Ultimately, the complexity framework has to give rise to an executable tool which, given an initial problem, possibly provides a complexity certificate.

To ease the generation of such an executable, \(\texttt {tct-core}\) provides a default implementation of the main function, controlled by a configuration record (see module Tct.Core.Main); a minimal setup is sketched after the action list below. A minimal definition of the configuration just requires the specification of a default strategy, and a parser for the initial complexity problem. Optionally, one can, for example, specify additional command line parameters, or a list of declarations for custom strategies, which allow the user to control the proof search. Strategy declarations wrap strategies with additional meta information, such as a name, a description, and a list of parameters. Firstly, this information is used for documentation purposes: if we call the default implementation with the command line flag --list-strategies, it presents a documentation of the available processors and strategies to the user. Secondly, declarations facilitate the parser generation for custom strategies. Notably, declarations and the generated parsers are type safe and are checked at compile-time. Declarations, together with usage information, are defined in module Tct.Core.Data.Declaration. Given a path pointing to the file holding the initial complexity problem, the generated executable will perform the following actions in order:
  1. Parse the command line options given to the executable, and reflect these in the aforementioned configuration record.
  2. Parse the given file according to the parser specified in the configuration.
  3. Select a strategy based on the command line flags, and apply the selected strategy to the parsed input problem.
  4. Should the analysis succeed, a textual representation of the obtained complexity judgement and the corresponding proof tree is printed to the console; in case the analysis fails, the incomplete proof tree, including the reason for failure, is printed to the console.
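A minimal instantiation might then look as follows; the record and field names (\(\texttt {TctConfig}\), \(\texttt {defaultConfig}\), \(\texttt {parseProblem}\), \(\texttt {defaultStrategy}\), \(\texttt {tct}\)) are hypothetical, so consult Tct.Core.Main for the actual interface.

    -- Hypothetical minimal executable for a concrete instance; parseTrs and
    -- competition stand for an instance-specific parser and strategy.
    module Main (main) where

    import Tct.Core.Main (TctConfig (..), defaultConfig, tct)

    main :: IO ()
    main = tct defaultConfig
      { parseProblem    = parseTrs      -- parser for the initial problem
      , defaultStrategy = competition   -- strategy used when none is selected
      }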
Interactive. The library provides an interactive mode via the \(\texttt {GHCi}\) interpreter, similar to the one provided in \(\textsf {TcT}\) v2 [5]. The interactive mode is invoked via the command line flag --interactive. The implementation keeps track of a proof state, a list of proof trees that represents the history of the interactive session. We provide an interface to inspect and manipulate the proof state. Most notably, the user can select individual sub-problems and apply strategies to them; the proof state is updated accordingly.

5 Case Studies

In this section we discuss several instantiations of the framework that have been established so far. We keep the descriptions of the complexity problems informal and focus on the big picture. In the discussion we distinguish abstract programs from real-world programs.

5.1 Abstract Programs

Currently, \(\textsf {TcT}\) provides first-order term rewrite systems and integer transition systems as abstract representations. As mentioned above, the system is open to the seamless integration of alternative abstractions.
Fig. 8. Polynomial interpretation proof.

Term Rewrite Systems. Term rewriting forms an abstract model of computation, which underlies much of declarative programming. Our results on pure \(\texttt {OCaml}\), see below, show how we can make practical use of the clarity of the model. The \(\texttt {tct-trs}\) instance provides automated resource analysis of first-order term rewrite systems (TRSs for short) [8, 26]. Complexity analysis of TRSs has received significant attention in the last decade, see [19] for details. A TRS consists of a set of rewrite rules, i.e. directed equations that can be applied from left to right. Computation is performed by normalisation, i.e. by successively applying rewrite rules until no more rules apply. As an example, consider the following TRS \(\mathcal {R}_{\mathsf {sq}}\), which computes the squaring function on natural numbers in unary notation.
$$\begin{aligned}&\mathsf {sq}(x) \rightarrow x * x \qquad \quad x * 0 \rightarrow 0 \qquad \qquad \qquad x + 0 \rightarrow x\\&\qquad \qquad \qquad \quad \mathsf {s}(x) * y \rightarrow y + (x * y) \qquad \mathsf {s}(x) + y \rightarrow \mathsf {s}(x+y) . \end{aligned}$$
The runtime complexity of a TRS is naturally expressed as a function that measures the length of the longest reduction, in the sizes of (normalised) starting terms. Figure 8 depicts the proof output of \(\texttt {tct-trs}\) when applying a polynomial interpretation [18] processor of maximal degree 2 to \(\mathcal {R}_{\mathsf {sq}}\). The resulting proof tree consists of a single progress node and yields the (optimal) quadratic asymptotic upper bound on the runtime complexity of \(\mathcal {R}_{\mathsf {sq}}\).

The success of \(\textsf {TcT}\) as a complexity analyser, and in particular the strength of the \(\texttt {tct-trs}\) instance, is apparent from its performance at TERMCOMP. It is worth mentioning that at this year's competition \(\textsf {TcT}\) not only won the combined ranking, but also the certified category. Here only those techniques are admissible that have been machine-checked, so that soundness of the obtained resource bound is almost beyond doubt, cf. [7]. The \(\texttt {tct-trs}\) instance has many advantages over its predecessors. Many of them are subtle and are due to the redesign of the architecture and the reimplementation of the framework. However, the practical consequences are clear: the instance \(\texttt {tct-trs}\) is more powerful than its predecessor, cf. last year's TERMCOMP, where both the old and the new version competed against each other. Furthermore, the actual strength of the latest version of \(\textsf {TcT}\) shows when combining different modules into bigger ones, as we demonstrate in the subsequent case studies.
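
For illustration, one strictly monotone polynomial interpretation that orients all rules of \(\mathcal {R}_{\mathsf {sq}}\) is the following (ours, for exposition; the interpretation found by \(\texttt {tct-trs}\) may differ):
$$ [\mathsf {0}] = 1 \qquad [\mathsf {s}](x) = x + 1 \qquad [+](x,y) = 2x + y \qquad [*](x,y) = 2xy + x + y + 2 \qquad [\mathsf {sq}](x) = 2x^2 + 2x + 3 . $$
Every rule is strictly decreasing under this interpretation, e.g. \([\mathsf {s}(x) + y] = 2x + y + 2 > 2x + y + 1 = [\mathsf {s}(x + y)]\). Hence the length of any derivation starting from \(\mathsf {sq}(\mathsf {s}^n(\mathsf {0}))\) is bounded by \([\mathsf {sq}(\mathsf {s}^n(\mathsf {0}))] = 2(n+1)^2 + 2(n+1) + 3\), which lies in \({{\mathrm{\mathcal {O}}}}(n^2)\).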
Integer Transition Systems. The \(\texttt {tct-its}\) module deals with the analysis of integer transition systems (ITSs for short). An ITS can be seen as a TRS over terms \(\mathsf {f}(x_1,\dots ,x_n)\) whose variables \(x_i\) range over the integers, and whose rules are additionally equipped with a guard \([\![\cdot ]\!]\) that determines whether a rule triggers. The notion of runtime complexity extends straightforwardly from TRSs to ITSs. ITSs naturally arise from imperative programs using loops, conditionals and integer operations only, but can also be obtained from programs with user-defined data structures using suitable size-abstractions (see e.g. [22]). Consider a program that computes the remainder of a natural number m with respect to n by repeated subtraction, i.e. a loop that subtracts n from m as long as \(n > 0 \wedge m > n\) holds. This program is represented as the following ITS:
$$ \mathsf {r}(m, n) \rightarrow \mathsf {r}(m - n,n) \, {[\![n> 0 \wedge m> n]\!]} \quad \mathsf {r}(m, n) \rightarrow \mathsf {e}(m,n) \, {[\![\lnot (n> 0 \wedge m > n)]\!]} . $$
It is not difficult to see that the runtime complexity of the ITS, i.e. the maximal length of a computation starting from \(\mathsf {r}(m,n)\), is linear in m and n. The linear asymptotic bound is automatically derived by \(\texttt {tct-its}\), in a fraction of a second. The complexity analysis of ITSs implemented by \(\texttt {tct-its}\) follows closely the approach by Brockschmidt et al. [10].

5.2 Real World Programs

One major motivation for the complexity analysis of abstract programs is that these models are well equipped to abstract over real-world programs whilst remaining conceptually simple.
Fig. 9. Example run of the \(\texttt {HoCA}\) prototype on an \(\texttt {OCaml}\) program.

Fig. 10. \(\texttt {HoCA}\) transformation pipeline modelled in \(\texttt {tct-hoca}\).

Pure \(\texttt {OCaml}\). For the case of higher-order functional programs, a successful application of this approach has been demonstrated in recent work by the first and second author in collaboration with Dal Lago [4]. In [4], we study the runtime complexity of pure \(\texttt {OCaml}\) programs. A suitable adaption of Reynolds' defunctionalisation technique [24] translates the given program into a slight generalisation of a TRS, an applicative term rewrite system (ATRS for short). In ATRSs, closures are explicitly represented as first-order structures. Evaluation of these closures is defined via a global apply function (denoted by \({\texttt {@}}\)).

The structure of the defunctionalised program is necessarily intricate, even for simple programs. However, in conjunction with a sequence of sophisticated and, in particular, complexity reflecting transformations, one can bring the defunctionalised program into a form which can be effectively analysed by first-order complexity provers such as the \(\texttt {tct-trs}\) instance; see [4] for the details. An example run is depicted in Fig. 9. All of this has been implemented in a prototype, termed \(\texttt {HoCA}\). We have integrated the functionality of \(\texttt {HoCA}\) in the instance \(\texttt {tct-hoca}\). The individual transformations underlying this tool are seamlessly modelled as processors, and its transformation pipeline is naturally expressed in our strategy language. The corresponding strategy, termed hoca, is depicted in Fig. 10. It takes an \(\texttt {OCaml}\) source fragment and turns it into a term rewrite system as follows. First, via mlToAtrs the source code is parsed and desugared, and the resulting abstract syntax tree is turned into an expression of a typed \(\lambda \)-calculus with constants and fixpoints, akin to Plotkin's PCF [23]. All these steps are implemented via the strategy mlToPcf. The given parameter, an optional function name, can be used to select the analysed function. With defunctionalise, this program is then turned into an ATRS, which is simplified via the strategy simplifyAtrs, modelling the heuristics implemented in \(\texttt {HoCA}\). Second, the strategy atrsToTrs uses the control-flow analysis provided by \(\texttt {HoCA}\) to instantiate occurrences of higher-order variables [4]. The instantiated ATRS is then translated into a first-order rewrite system by uncurrying all function calls. Further simplifications, as foreseen by the \(\texttt {HoCA}\) prototype at this stage of the pipeline, are performed via the strategy simplifyTrs.

Currently, all involved processors are implemented via calls to the library shipped with the \(\texttt {HoCA}\) prototype, and operate on its exported data-types. The final strategy in the pipeline, toTctProblem, converts \(\texttt {HoCA}\)'s representation of a TRS to a complexity problem understood by \(\texttt {tct-trs}\). Due to the open structure of \(\textsf {TcT}\), the integration of the \(\texttt {HoCA}\) prototype was straightforward and was finalised in a couple of hours. Furthermore, essentially by construction, the strength of \(\texttt {tct-hoca}\) equals the strength of the dedicated prototype. An extensive experimental assessment can be found in [4].
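
Expressed with the combinators of Sect. 4.1, the hoca strategy reads roughly as follows (a sketch: the problem types are hypothetical, and whether the simplification steps are guarded by try is our guess):

    -- Sketch of the hoca strategy of Fig. 10.
    hoca :: Maybe FunctionName -> Strategy OCamlSource TrsProblem
    hoca mname =
          mlToPcf mname      -- parse, desugar, translate to a PCF-like calculus
      >>> defunctionalise    -- Reynolds-style defunctionalisation to an ATRS
      >>> try simplifyAtrs   -- HoCA simplification heuristics
      >>> atrsToTrs          -- instantiate higher-order variables, uncurry
      >>> try simplifyTrs    -- further first-order simplifications
      >>> toTctProblem       -- convert to a tct-trs complexity problem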
Fig. 11. \(\texttt {jat}\) transformation pipeline modelled in \(\texttt {tct-jbc}\).

Object-Oriented Bytecode Programs. The \(\texttt {tct-jbc}\) instance provides automated complexity analysis of object-oriented bytecode programs, in particular Jinja bytecode (JBC for short) programs [17]. Given a JBC program, we measure the maximal number of bytecode instructions executed in any evaluation of the program. We suitably employ techniques from data-flow analysis and abstract interpretation to obtain a term-based abstraction of JBC programs in terms of constraint term rewrite systems (cTRSs for short) [20]. cTRSs are a generalisation of TRSs and ITSs. More importantly, given a cTRS obtained from a JBC program, we can extract a TRS or an ITS fragment. All these abstractions are complexity reflecting. We have implemented this transformation in a dedicated tool termed \(\texttt {jat}\), and have integrated its functionality in \(\texttt {tct-jbc}\) in the same way we have integrated the functionality of \(\texttt {HoCA}\) in \(\texttt {tct-hoca}\). The corresponding strategy, termed jbc, is depicted in Fig. 11. We can then use \(\texttt {tct-trs}\) and \(\texttt {tct-its}\) to analyse the resulting problems; our framework is expressive enough to analyse the problems thus obtained in parallel. Note that \(\texttt {Race}\) s1 s2 requires that s1 and s2 have the same output problem type. We can model this with transformations to a dummy problem type. Nevertheless, as intended, any witness obtained by a successful application of its or trs is relayed back.
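
Expressed in our strategy language, jbc might read as follows (a sketch: the names toIts, toTrs, toDummy and the problem types are hypothetical):

    -- Sketch of the jbc strategy of Fig. 11.
    jbc :: Strategy JbcProblem Dummy
    jbc = jatToCTrs                           -- abstract the JBC program to a cTRS
      >>> Race (toIts >>> its >>> toDummy)    -- analyse the ITS fragment (tct-its)
               (toTrs >>> trs >>> toDummy)    -- analyse the TRS fragment (tct-trs)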

6 Conclusion

In this paper we have presented \(\textsf {TcT}\) v3.0, the latest version of our fully automated complexity analyser. \(\textsf {TcT}\) is open source, released under the BSD3 license, and all of its components are written in Haskell. \(\textsf {TcT}\) is open with respect to the complexity problem under investigation and problem specific techniques. It is the most powerful tool in the realm of automated complexity analysis of term rewrite systems, as verified, for example, at this year's TERMCOMP. Moreover, it provides an expressive, problem independent strategy language that facilitates proof search, extensibility and automation.

Further work will be concerned with the finalisation of the envisioned instance \(\texttt {tct-hrs}\), as well as the integration of current and future developments in the resource analysis of ITSs.

References

  1. Albert, E., Arenas, P., Genaim, S., Puebla, G., Román-Díez, G.: Conditional termination of loops over heap-allocated data. SCP 92, 2–24 (2014)
  2. Aspinall, D., Beringer, L., Hofmann, M., Loidl, H.W., Momigliano, A.: A program logic for resources. TCS 389(3), 411–445 (2007)
  3. Atkey, R.: Amortised resource analysis with separation logic. LMCS 7(2), 1–33 (2011)
  4. Avanzini, M., Dal Lago, U., Moser, G.: Analysing the complexity of functional programs: higher-order meets first-order. In: Proceedings of the 20th ICFP, pp. 152–164. ACM (2015)
  5. Avanzini, M., Moser, G.: Tyrolean complexity tool: features and usage. In: Proceedings of the 24th RTA, LIPIcs, vol. 21, pp. 71–80 (2013)
  6. Avanzini, M., Moser, G.: A combination framework for complexity. IC (to appear, 2016)
  7. Avanzini, M., Sternagel, C., Thiemann, R.: Certification of complexity proofs using CeTA. In: Proceedings of the 26th RTA, LIPIcs, vol. 36, pp. 23–39 (2015)
  8. Baader, F., Nipkow, T.: Term Rewriting and All That. Cambridge University Press, Cambridge (1998)
  9. Bird, R.: Introduction to Functional Programming using Haskell, 2nd edn. Prentice Hall, Upper Saddle River (1998)
  10. Brockschmidt, M., Emmes, F., Falke, S., Fuhs, C., Giesl, J.: Alternating runtime and size complexity analysis of integer programs. In: Ábrahám, E., Havelund, K. (eds.) TACAS 2014 (ETAPS). LNCS, vol. 8413, pp. 140–155. Springer, Heidelberg (2014)
  11. Danner, N., Paykin, J., Royer, J.S.: A static cost analysis for a higher-order language. In: Proceedings of the 7th PLPV, pp. 25–34. ACM (2013)
  12. Gimenez, S., Moser, G.: The complexity of interaction. In: Proceedings of the 40th POPL (to appear, 2016)
  13. Hirokawa, N., Moser, G.: Automated complexity analysis based on the dependency pair method. In: Armando, A., Baumgartner, P., Dowek, G. (eds.) IJCAR 2008. LNCS (LNAI), vol. 5195, pp. 364–379. Springer, Heidelberg (2008)
  14. Hoffmann, J., Aehlig, K., Hofmann, M.: Multivariate amortized resource analysis. TOPLAS 34(3), 14 (2012)
  15. Hofmann, M., Moser, G.: Multivariate amortised resource analysis for term rewrite systems. In: Proceedings of the 13th TLCA, LIPIcs, vol. 38, pp. 241–256 (2015)
  16. Jost, S., Hammond, K., Loidl, H.W., Hofmann, M.: Static determination of quantitative resource usage for higher-order programs. In: Proceedings of the 37th POPL, pp. 223–236. ACM (2010)
  17. Klein, G., Nipkow, T.: A machine-checked model for a Java-like language, virtual machine, and compiler. TOPLAS 28(4), 619–695 (2006)
  18. Lankford, D.: On Proving Term Rewriting Systems are Noetherian. Technical Report MTP-3, Louisiana Technical University (1979)
  19. Moser, G.: Proof Theory at Work: Complexity Analysis of Term Rewrite Systems. CoRR abs/0907.5527, Habilitation Thesis (2009)
  20. Moser, G., Schaper, M.: A Complexity Preserving Transformation from Jinja Bytecode to Rewrite Systems. CoRR abs/1204.1568 (2012), last revised 6 May 2014
  21. Noschinski, L., Emmes, F., Giesl, J.: Analyzing innermost runtime complexity of term rewriting by dependency pairs. JAR 51(1), 27–56 (2013)
  22. Hill, P.M., Payet, E., Spoto, F.: Path-length analysis of object-oriented programs. In: Proceedings of the 1st EAAI. Elsevier (2006)
  23. Plotkin, G.D.: LCF considered as a programming language. TCS 5(3), 223–255 (1977)
  24. Reynolds, J.C.: Definitional interpreters for higher-order programming languages. HOSC 11(4), 363–397 (1998)
  25. Sinn, M., Zuleger, F., Veith, H.: A simple and scalable static analysis for bound analysis and amortized complexity analysis. In: Biere, A., Bloem, R. (eds.) CAV 2014. LNCS, vol. 8559, pp. 745–761. Springer, Heidelberg (2014)
  26. TeReSe: Term Rewriting Systems. Cambridge Tracts in Theoretical Computer Science, vol. 55. Cambridge University Press, Cambridge (2003)
  27. Wilhelm, R., Engblom, J., Ermedahl, A., Holsti, N., Thesing, S., Whalley, D., Bernat, G., Ferdinand, C., Heckmann, R., Mitra, T., Mueller, F., Puaut, I., Puschner, P., Staschulat, J., Stenstrom, P.: The worst case execution time problem - overview of methods and survey of tools. TECS 7(3), 1–53 (2008)

Copyright information

© Springer-Verlag Berlin Heidelberg 2016

Authors and Affiliations

  • Martin Avanzini (1, 2)
  • Georg Moser (3)
  • Michael Schaper (3)

  1. Università di Bologna, Bologna, Italy
  2. INRIA, Sophia Antipolis, France
  3. Department of Computer Science, University of Innsbruck, Innsbruck, Austria
