TcT: Tyrolean Complexity Tool
Abstract
In this paper we present \(\textsf{TcT}\) v3.0, the latest version of our fully automated complexity analyser. \(\textsf{TcT}\) implements our framework for automated complexity analysis and focuses on extensibility and automation. \(\textsf{TcT}\) is open with respect to the input problem under investigation and the resource metric in question. It is the most powerful tool in the realm of automated complexity analysis of term rewrite systems. Moreover, it provides an expressive problem-independent strategy language that facilitates proof search. We give insights about design choices and the implementation of the framework, and report on different case studies where we have applied \(\textsf{TcT}\) successfully.
1 Introduction
Automatically checking programs for correctness has attracted the attention of the computer science research community since the birth of the discipline. Properties of interest are not necessarily functional, however, and among the non-functional ones, notable cases are bounds on the amount of resources (like time, memory and power) programs need when executed. A variety of verification techniques have been employed in this context, like abstract interpretations, model checking, type systems, program logics, or interactive theorem provers; see [1, 2, 3, 12, 13, 14, 15, 16, 21, 25, 27] for some pointers.
In this paper, we present \(\textsf{TcT}\) v3.0, the latest version of our fully automated complexity analyser. \(\textsf{TcT}\) is open source, released under the BSD3 license, and available at
http://cl-informatik.uibk.ac.at/software/tct/.

Our tool is designed around the following observations:

- automated resource analysis of programming languages is typically done by establishing complexity reflecting abstractions to formal systems;

- the complexity framework is general enough to integrate these abstractions as transformations of the original program;

- modularity and decomposition can be represented independently of the analysed complexity problem.
We have rewritten the tool from scratch to integrate and extend, in a clean and structured way, all the ideas that were collected and implemented in previous versions. The new tool builds upon a small core (\(\texttt{tct-core}\)) that provides an expressive strategy language with a clearly defined semantics, and is, as envisioned, open with respect to the type of the complexity problem.
Structure. The remainder of the paper is structured as follows. In the next section, we provide an overview of the design choices underlying the resource analysis in \(\textsf{TcT}\); that is, we inspect the middle part of Fig. 1. In Sect. 3 we revisit our abstract complexity framework, which is the theoretical foundation of the core of \(\textsf{TcT}\) (\(\texttt{tct-core}\)). Section 4 provides details about the implementation of the complexity framework, and Sect. 5 presents four different use cases that show how the complexity framework can be instantiated, among them the instantiation for higher-order programs (\(\texttt{tct-hoca}\)) as well as the instantiation for the complexity analysis of TRSs (\(\texttt{tct-trs}\)). Finally, we conclude in Sect. 6.
2 Architectural Overview
As depicted in Fig. 1, the implementation of \(\textsf{TcT}\) is divided into separate components for the different kinds of programs, and abstractions thereof, that are supported. These separate components are no islands, however. Rather, they instantiate our abstract framework for complexity analysis [6], from which \(\textsf{TcT}\) derives its power and modularity. In short, in this framework complexity techniques are modelled as complexity processors that give rise to a set of inferences over complexity proofs. From a completed complexity proof, a complexity bound can be inferred. The theoretical foundations of this framework are given in Sect. 3.
The abstract complexity framework is implemented in \(\textsf{TcT}\)'s core library, termed \(\texttt{tct-core}\), which is depicted at the bottom layer of Fig. 2. Centrally, it provides a common notion of a proof state, viz. proof trees, and an interface for specifying processors. Furthermore, \(\texttt{tct-core}\) complements the framework with a simple but powerful strategy language. Strategies play the role of tactics in interactive theorem provers like Isabelle or Coq. They allow us to turn a set of processors into a sophisticated complexity analyser. The implementation details of the core library are provided in Sect. 4.
The complexity framework implemented in our core library leaves the type of complexity problem, consisting of the analysed program together with the resource metric of interest, abstract. Rather, concrete complexity problems are provided by concrete instances, such as the two instances \(\texttt{tct-hoca}\) and \(\texttt{tct-trs}\) depicted in Fig. 2. We will look at some particular instances in detail in Sect. 5. Instances implement complexity techniques on defined problem types in the form of complexity processors, possibly relying on external libraries and tools such as SMT solvers. Optionally, instances may also specify strategies that compose the provided processors. Bridges between instances are easily specified as processors that implement conversions between problem types defined in different instances. For example, our instance \(\texttt{tct-hoca}\), which deals with the runtime analysis of pure \(\texttt{OCaml}\) programs, makes use of the instance \(\texttt{tct-trs}\). Thus our system is open to the seamless integration of alternative problem types through the specification of new instances. As an example, we mention the envisioned instance \(\texttt{tct-hrs}\) (see Fig. 1), which should incorporate dedicated techniques for the analysis of HRSs. We intend to use \(\texttt{tct-hrs}\) in future versions for the analysis of functional programs.
3 A Formal Framework for Complexity Analysis
We now briefly outline the theoretical framework upon which our complexity analyser \(\textsf{TcT}\) is based. As mentioned before, both the input language (e.g. \(\texttt {Java}\), \(\texttt {OCaml}\), ...) and the resource under consideration (e.g. execution time, heap usage, ...) are kept abstract in our framework. That is, we assume that we are dealing with an abstract class of complexity problems, where, however, each complexity problem \(\mathcal {P}\) from this class is associated with a complexity function \(\mathsf {cp}_{\mathcal {P}}\,:\,D \rightarrow D\), for a complexity domain D. Usually, the complexity domain D will be the set of natural numbers \(\mathbb {N}\); however, more sophisticated choices of complexity functions, such as those proposed by Danner et al. [11], fall into the realm of our framework.
In a concrete setting, the complexity problem \(\mathcal {P}\) could denote, for instance, a \(\texttt {Java}\) program. If we are interested in heap usage, then \(D = \mathbb {N}\) and \(\mathsf {cp}_{\mathcal {P}}\,:\,\mathbb {N}\rightarrow \mathbb {N}\) denotes the function that describes the maximal heap usage of \(\mathcal {P}\) in terms of the sizes of the program inputs. As indicated in the introduction, any transformational solver converts concrete programs into abstract ones, if not already interfaced with an abstract program. Based on the possibly abstracted complexity problem \(\mathcal {P}\), the analysis continues using a set of complexity techniques. In particular, a reasonable solver will also integrate some form of decomposition technique, transforming an intermediate problem into various smaller subproblems and analysing these subproblems separately, either again by some form of decomposition method or eventually by some base technique that infers a suitable resource bound. Of course, at any stage in this transformation chain, a solver needs to keep track of the computed complexity bounds and relay these back to the initial problem.
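This bound bookkeeping can be made concrete in a few lines of Haskell (the language \(\textsf{TcT}\) itself is written in). The names and the shape of Step below are our own illustration, not the tool's actual API: a transformation step yields subproblems together with a function that relays bounds on the subproblems back to a bound on the input problem.

```haskell
-- Illustrative sketch (not the tool's actual API): one transformation
-- step yields subproblems together with a function that relays bounds
-- on the subproblems back to a bound on the input problem.
type Bound = Integer -> Integer      -- complexity functions, D = N

data Step p = Step
  { subproblems :: [p]               -- generated subproblems
  , relay       :: [Bound] -> Bound  -- combines sub-bounds
  }

-- A decomposition into two parts analysed separately: the bound of
-- the whole is the pointwise sum of the bounds of the parts.
splitStep :: p -> p -> Step p
splitStep p1 p2 = Step [p1, p2] (\bs n -> sum [b n | b <- bs])
```

Composing such steps along the transformation chain then amounts to composing the relay functions, which is exactly how a final bound for the initial problem is recovered.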
A proof of a judgement \({}\vdash \mathcal {P}\mathrel {:}B\) from the assumptions \({}\vdash \mathcal {Q}_1\mathrel {:}B_1, \dots , {}\vdash \mathcal {Q}_n\mathrel {:}B_n\) is a deduction using sound processors only. The proof is closed if its set of assumptions is empty. Soundness of processors guarantees that our formal system is correct. Application of complete processors on a valid judgement ensures that no invalid assumptions are derived. In this sense, the application of a complete processor is always safe.
Proposition 1
If there exists a closed complexity proof \({}\vdash \mathcal {P}\mathrel {:}B\), then the judgement \({}\vdash \mathcal {P}\mathrel {:}B\) is valid.
4 Implementing the Complexity Framework
The formal complexity framework described in the previous section is implemented in the core library, termed \(\texttt{tct-core}\). In the following we outline the two central components of this library: (i) the generation of complexity proofs, and (ii) common facilities for instantiating the framework to concrete tools; see Fig. 2.
4.1 Proof Trees, Processors, and Strategies
The library \(\texttt{tct-core}\) supports the verification of a valid complexity judgement \({}\vdash \mathcal {P}\mathrel {:}B\) from a given input problem \(\mathcal {P}\). More precisely, the library provides the environment to construct a complexity proof witnessing the validity of \({}\vdash \mathcal {P}\mathrel {:}B\).
Since the class B of bounding functions is a result of the analysis, and not an input, the complexity proof can only be constructed once the analysis has finished successfully. For this reason, proofs are not directly represented as trees over complexity judgements. Rather, the library features proof trees. Conceptually, a proof tree is a tree whose leaves are labelled by open complexity problems, that is, problems which remain to be analysed, and whose internal nodes represent successful applications of processors. The complexity analysis of a problem \(\mathcal {P}\) then amounts to the expansion of the proof tree whose single node is labelled by the open problem \(\mathcal {P}\). Processors implement a single expansion step. To facilitate the expansion of proof trees, \(\texttt{tct-core}\) features a rich strategy language, similar to the tactics of interactive theorem provers like Isabelle or Coq. Once a proof tree has been completely expanded, a complexity judgement for \(\mathcal {P}\), together with the witnessing complexity proof, can be computed from the proof tree.
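As a minimal illustration of this representation (our own simplified model, not \(\texttt{tct-core}\)'s actual datatypes), a proof tree and the extraction of its open problems might look as follows:

```haskell
-- Simplified model of proof trees: leaves carry open problems, inner
-- nodes record a successful processor application by name.
data ProofTree p
  = Open p                      -- a problem that remains to be analysed
  | Node String [ProofTree p]   -- applied processor and its subtrees
  deriving (Eq, Show)

-- Collect the open problems at the leaves.
openProblems :: ProofTree p -> [p]
openProblems (Open p)    = [p]
openProblems (Node _ ts) = concatMap openProblems ts

-- A tree is completely expanded (closed) if no open problems remain;
-- only then can a complexity judgement be extracted.
closed :: ProofTree p -> Bool
closed = null . openProblems
```

A processor application then replaces one Open leaf by a Node whose children are the generated subproblems.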
In the following, we detail the central notions of proof tree, processor and strategy, and elaborate on important design issues.
Processors. The interface of processors is specified by the type class \(\texttt{Processor}\), which is defined in module Tct.Core.Data.Processor and depicted in Fig. 4. The types of the input problem and of the generated subproblems are defined for each processor individually, through the type-level functions \(\texttt{In}\) and \(\texttt{Out}\), respectively. This eliminates the need for a global problem type and facilitates the seamless combination of different instantiations of the core library. Each processor instance additionally specifies the type of proof objects \(\texttt{ProofObject}\,\alpha\), the meta-information provided in case of a successful application. The proof object is constrained to instances of \(\texttt{ProofData}\), which, among other things, ensures that a textual representation can be obtained. Each instance of \(\texttt{Processor}\) has to implement a method execute which, given an input problem of type \(\texttt{In}\,\alpha\), evaluates to a \(\texttt{TctM}\) action that produces a value of type \(\texttt{Return}\,\alpha\). The monad \(\texttt{TctM}\) (defined in module Tct.Core.Data.TctM) extends the \(\texttt{IO}\) monad with access to runtime information, such as command line parameters and execution time. The datatype \(\texttt{Return}\,\alpha\) specifies the result of the application of a processor to its given input problem. In case of a successful application, the return value carries the proof object, a certificate function that relates complexity bounds on the subproblems to bounds on the input problem, and the list of generated subproblems. In fact the type is slightly more liberal and allows, for each generated subproblem, a possibly open proof tree. This generalisation is useful in certain contexts, for example when the processor makes use of a second processor.
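A simplified rendering of this interface can be written with associated type families. The following is our own sketch, not the class from Tct.Core.Data.Processor: proof objects and the certificate function are collapsed into a textual justification, and the \(\texttt{TctM}\) monad is replaced by plain \(\texttt{IO}\).

```haskell
{-# LANGUAGE TypeFamilies #-}

-- Simplified processor interface (illustrative only). A processor maps
-- an input problem either to a textual justification plus a list of
-- subproblems, or fails with Nothing.
class Processor p where
  type In  p          -- type of the input problem
  type Out p          -- type of the generated subproblems
  execute :: p -> In p -> IO (Maybe (String, [Out p]))

-- A toy processor that splits a list-shaped problem into two halves.
data Split = Split

instance Processor Split where
  type In  Split = [Int]
  type Out Split = [Int]
  execute Split xs =
    let (l, r) = splitAt (length xs `div` 2) xs
    in pure (Just ("split in half", [l, r]))
```

Because In and Out are chosen per processor, processors over entirely different problem types coexist in one library, which is the property the text above emphasises.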
The first four primitives defined in Fig. 5 constitute our toolbox for modelling the sequential application of processors. The first strategy is implemented by the identity function on proof trees. The remaining three primitives traverse the given proof tree in-order, acting on all open proof nodes. The second strategy, parameterised by a processor p, replaces the given open proof node with the proof tree resulting from an application of p. The third signals that the computation should be aborted, replacing the given proof node by a failure node. Finally, the conditional strategy, given a predicate and strategies s1, s2 and s3, implements a very specific conditional: it sequences the application of s1 and s2, provided the proof tree computed by s1 satisfies the predicate; where the predicate is not satisfied, the conditional acts like the third strategy s3.
In Fig. 6 we showcase the definition of derived sequential strategy combinators. Sequencing s1 \(\ggg \) s2 of strategies s1 and s2, as well as a (left-biased) choice operator s1 \(<\!\!>\) s2, are derived from the conditional primitive. The strategy try s behaves like s, except that when s fails, try s behaves as the identity. The combinator force complements the combinator try: the strategy force s enforces that strategy s produces a new proof node. The combinator try brings backtracking to our strategy language, i.e. the strategy try s1 \(\ggg \) s2 first applies strategy s1, backtracks in case of failure, and applies s2 afterwards. Finally, the strategy exhaustive s applies s zero or more times, until strategy s fails. The combinator exhaustive+ behaves similarly, but applies the given strategy at least once. The obtained combinators satisfy the expected laws; compare Fig. 7 for an excerpt.
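The flavour of these combinators can be captured in a few lines by abstracting proof trees to an arbitrary state type. This is our own simplification for illustration; \(\texttt{tct-core}\) defines the combinators over proof trees and via the conditional primitive.

```haskell
-- Illustrative simplification: a strategy transforms a state and may
-- fail (Nothing). tct-core defines these combinators on proof trees.
type Strategy a = a -> Maybe a

identity, abort :: Strategy a
identity = Just
abort    = const Nothing

-- Sequencing and left-biased choice.
(.>>>.), (.<>.) :: Strategy a -> Strategy a -> Strategy a
(s1 .>>>. s2) x = s1 x >>= s2
(s1 .<>. s2) x  = maybe (s2 x) Just (s1 x)

-- try s behaves like s, but acts as the identity when s fails.
try :: Strategy a -> Strategy a
try s = s .<>. identity

-- exhaustive s applies s until it fails (it may diverge if s always
-- succeeds, a caveat shared by the paper's combinator).
exhaustive :: Strategy a -> Strategy a
exhaustive s x = maybe (Just x) (exhaustive s) (s x)
```

The expected laws, such as try (try s) = try s or abort acting as a left zero of sequencing, follow directly from these definitions.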
Our strategy language also features three dedicated primitives for parallel proof search. The first implements a form of data-level parallelism, applying a strategy s to all open problems in the given proof tree in parallel. In contrast, the other two apply strategies s1 and s2 concurrently to each open problem and can be seen as parallel versions of our choice operator: one simply returns the (non-failing) proof tree of whichever strategy returns first, whereas the other uses a provided comparison function comp to decide which proof tree to return.
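The underlying synchronisation of the racing combinators can be sketched with plain forkIO and an MVar. This is an illustrative reduction; the actual combinators act on proof trees and additionally support the comparison-based variant.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

-- Race two computations that may fail: the first non-failing result
-- wins; if the first finisher failed, wait for the second one.
race2 :: IO (Maybe a) -> IO (Maybe a) -> IO (Maybe a)
race2 s1 s2 = do
  box <- newEmptyMVar
  let worker s = s >>= putMVar box
  _ <- forkIO (worker s1)
  _ <- forkIO (worker s2)
  first <- takeMVar box
  case first of
    Just _  -> pure first       -- the first finisher succeeded
    Nothing -> takeMVar box     -- it failed; wait for the other one
```

Note that the losing worker may stay blocked on its putMVar; a production implementation would cancel it, which is one of the details the sketch elides.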
The final two strategies depicted in Fig. 5 implement timeouts and the dynamic creation of strategies depending on the current state. This state includes global information, such as command line flags and the execution time, but also proof-relevant state such as the current problem under investigation.
4.2 From the Core to Executables
The framework is instantiated by providing a set of sound processors, together with their corresponding input and output types. At the end of the day the complexity framework has to give rise to an executable tool which, given an initial problem, possibly provides a complexity certificate. The generated executable proceeds in four steps:
 1. Parse the command line options given to the executable, and reflect these in the aforementioned runtime state.
 2. Parse the given file according to the parser specified in the instance's configuration.
 3. Select a strategy based on the command line flags, and apply the selected strategy to the parsed input problem.
 4. Should the analysis succeed, a textual representation of the obtained complexity judgement and the corresponding proof tree is printed to the console; in case the analysis fails, the incomplete proof tree, including the reason for failure, is printed to the console.
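A skeleton of such a runner could look as follows. All names here are hypothetical; the actual entry point generated from \(\texttt{tct-core}\) is considerably richer.

```haskell
import System.Environment (getArgs)

-- Illustrative runner skeleton for the four steps above (hypothetical
-- names, not the actual tct-core entry point).
data Config = Config { strategyName :: String, inputFile :: FilePath }

-- Step 1: reflect command line options in a configuration value.
parseFlags :: [String] -> Config
parseFlags ("-s" : s : f : _) = Config s f
parseFlags (f : _)            = Config "default" f
parseFlags []                 = Config "default" "-"

-- Step 3 stand-in: a strategy either yields a certificate or fails.
runStrategy :: String -> String -> Maybe String
runStrategy name _problem = Just ("bound found by strategy " ++ name)

run :: [String] -> IO ()
run args = do
  let cfg = parseFlags args                       -- step 1
  problem <- readFile (inputFile cfg)             -- step 2
  case runStrategy (strategyName cfg) problem of  -- step 3
    Just cert -> putStrLn cert                    -- step 4: success
    Nothing   -> putStrLn "analysis incomplete"   -- step 4: failure
```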
5 Case Studies
In this section we discuss several instantiations of the framework that have been established so far. We keep the descriptions of the complexity problems informal and focus on the big picture. In the discussion we distinguish abstract programs from real-world programs.
5.1 Abstract Programs
5.2 Real World Programs
Pure \(\texttt {OCaml}\). For the case of higher-order functional programs, a successful application of this approach has been demonstrated in recent work by the first and second author in collaboration with Dal Lago [4]. In [4], we study the runtime complexity of pure \(\texttt {OCaml}\) programs. A suitable adaption of Reynolds' defunctionalisation technique [24] translates the given program into a slight generalisation of TRSs, an applicative term rewrite system (ATRS for short). In ATRSs, closures are explicitly represented as first-order structures. Evaluation of these closures is defined via a global apply function (denoted by \({\texttt {@}}\)).
The structure of the defunctionalised program is necessarily intricate, even for simple programs. However, in conjunction with a sequence of sophisticated and, in particular, complexity reflecting transformations, one can bring the defunctionalised program into a form that can be effectively analysed by first-order complexity provers such as the \(\texttt{tct-trs}\) instance; see [4] for the details. An example run is depicted in Fig. 9. All of this has been implemented in a prototype, termed \(\texttt {HoCA}\).^{4} We have integrated the functionality of \(\texttt {HoCA}\) in the instance \(\texttt{tct-hoca}\). The individual transformations underlying this tool are seamlessly modelled as processors; its transformation pipeline is naturally expressed in our strategy language. The corresponding strategy, termed hoca, is depicted in Fig. 10. It takes an \(\texttt {OCaml}\) source fragment and turns it into a term rewrite system as follows. First, via mlToAtrs the source code is parsed and desugared, and the resulting abstract syntax tree is turned into an expression of a typed \(\lambda \)-calculus with constants and fixpoints, akin to Plotkin's PCF [23]. All these steps are implemented via the strategy mlToPcf. The given parameter, an optional function name, can be used to select the analysed function. With defunctionalise this program is then turned into an ATRS, which is simplified via the strategy simplifyAtrs, modelling the heuristics implemented in \(\texttt {HoCA}\). Second, the strategy atrsToTrs uses the control-flow analysis provided by \(\texttt {HoCA}\) to instantiate occurrences of higher-order variables [4]. The instantiated ATRS is then translated into a first-order rewrite system by uncurrying all function calls.
Further simplifications, as foreseen by the \(\texttt {HoCA}\) prototype at this stage of the pipeline, are performed via the strategy simplifyTrs.
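To give a feel for defunctionalisation, consider the following toy illustration (our own, unrelated to HoCA's concrete output): every closure that can occur at runtime becomes a first-order constructor, and a single global apply function (the @ of the ATRS) dispatches on them.

```haskell
-- Toy defunctionalisation: a program using functional values such as
-- (+1) represents each possible closure as a first-order constructor,
-- and the global `apply` (written @ in the ATRS) dispatches on them.
data Closure = Succ | AddTo Int   -- all closures of this tiny program

apply :: Closure -> Int -> Int    -- the global @ of the ATRS
apply Succ      n = n + 1
apply (AddTo k) n = k + n

-- First-order map: the higher-order argument became a Closure value.
mapC :: Closure -> [Int] -> [Int]
mapC _ []       = []
mapC f (x : xs) = apply f x : mapC f xs
```

Read as rewrite rules, mapC and apply form exactly the kind of ATRS described above, with the higher-order structure of the source program reflected in the Closure constructors.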
Object-Oriented Bytecode Programs. The \(\texttt{tct-jbc}\) instance provides automated complexity analysis of object-oriented bytecode programs, in particular Jinja bytecode (JBC for short) programs [17]. Given a JBC program, we measure the maximal number of bytecode instructions executed in any evaluation of the program. We suitably employ techniques from data-flow analysis and abstract interpretation to obtain a term-based abstraction of JBC programs in terms of constraint term rewrite systems (cTRSs for short) [20]. cTRSs are a generalisation of TRSs and ITSs. More importantly, given a cTRS obtained from a JBC program, we can extract a TRS or an ITS fragment. All these abstractions are complexity reflecting. We have implemented this transformation in a dedicated tool termed \(\texttt {jat}\) and have integrated its functionality in \(\texttt{tct-jbc}\), in a similar way as we have integrated the functionality of \(\texttt {HoCA}\) in \(\texttt{tct-hoca}\). The corresponding strategy, termed jbc, is depicted in Fig. 11. We can then use \(\texttt{tct-trs}\) and \(\texttt{tct-its}\) to analyse the resulting problems. Our framework is expressive enough to analyse the obtained problems in parallel. Note that racing two strategies s1 and s2 requires that they have the same output problem type; we can model this with transformations to a dummy problem type. Nevertheless, as intended, any witness obtained by a successful application of its or trs will be relayed back.
6 Conclusion
In this paper we have presented \(\textsf{TcT}\) v3.0, the latest version of our fully automated complexity analyser. \(\textsf{TcT}\) is open source, released under the BSD3 license. All components of \(\textsf{TcT}\) are written in Haskell. \(\textsf{TcT}\) is open with respect to the complexity problem under investigation and problem-specific techniques. It is the most powerful tool in the realm of automated complexity analysis of term rewrite systems, as verified, for example, at this year's TERMCOMP. Moreover, it provides an expressive problem-independent strategy language that facilitates proof search, extensibility and automation.
Further work will be concerned with the finalisation of the envisioned instance \(\texttt{tct-hrs}\), as well as the integration of current and future developments in the resource analysis of ITSs.
Footnotes
 1. See for example the results of \(\textsf{TcT}\) at this year's TERMCOMP, available from http://termination-portal.org/wiki/Termination_Competition_2015/.
References
 1. Albert, E., Arenas, P., Genaim, S., Puebla, G., Román-Díez, G.: Conditional termination of loops over heap-allocated data. SCP 92, 2–24 (2014)
 2. Aspinall, D., Beringer, L., Hofmann, M., Loidl, H.W., Momigliano, A.: A program logic for resources. TCS 389(3), 411–445 (2007)
 3. Atkey, R.: Amortised resource analysis with separation logic. LMCS 7(2), 1–33 (2011)
 4. Avanzini, M., Dal Lago, U., Moser, G.: Analysing the complexity of functional programs: higher-order meets first-order. In: Proceedings of the 20th ICFP, pp. 152–164. ACM (2015)
 5. Avanzini, M., Moser, G.: Tyrolean complexity tool: features and usage. In: Proceedings of the 24th RTA, LIPIcs, vol. 21, pp. 71–80 (2013)
 6. Avanzini, M., Moser, G.: A combination framework for complexity. IC (to appear, 2016)
 7. Avanzini, M., Sternagel, C., Thiemann, R.: Certification of complexity proofs using CeTA. In: Proceedings of the 26th RTA, LIPIcs, vol. 36, pp. 23–39 (2015)
 8. Baader, F., Nipkow, T.: Term Rewriting and All That. Cambridge University Press, Cambridge (1998)
 9. Bird, R.: Introduction to Functional Programming using Haskell, 2nd edn. Prentice Hall, Upper Saddle River (1998)
 10. Brockschmidt, M., Emmes, F., Falke, S., Fuhs, C., Giesl, J.: Alternating runtime and size complexity analysis of integer programs. In: Ábrahám, E., Havelund, K. (eds.) TACAS 2014 (ETAPS). LNCS, vol. 8413, pp. 140–155. Springer, Heidelberg (2014)
 11. Danner, N., Paykin, J., Royer, J.S.: A static cost analysis for a higher-order language. In: Proceedings of the 7th PLPV, pp. 25–34. ACM (2013)
 12. Gimenez, S., Moser, G.: The complexity of interaction. In: Proceedings of the 40th POPL (to appear, 2016)
 13. Hirokawa, N., Moser, G.: Automated complexity analysis based on the dependency pair method. In: Armando, A., Baumgartner, P., Dowek, G. (eds.) IJCAR 2008. LNCS (LNAI), vol. 5195, pp. 364–379. Springer, Heidelberg (2008)
 14. Hoffmann, J., Aehlig, K., Hofmann, M.: Multivariate amortized resource analysis. TOPLAS 34(3), 14 (2012)
 15. Hofmann, M., Moser, G.: Multivariate amortised resource analysis for term rewrite systems. In: Proceedings of the 13th TLCA, LIPIcs, vol. 38, pp. 241–256 (2015)
 16. Jost, S., Hammond, K., Loidl, H.W., Hofmann, M.: Static determination of quantitative resource usage for higher-order programs. In: Proceedings of the 37th POPL, pp. 223–236. ACM (2010)
 17. Klein, G., Nipkow, T.: A machine-checked model for a Java-like language, virtual machine, and compiler. TOPLAS 28(4), 619–695 (2006)
 18. Lankford, D.: On Proving Term Rewriting Systems are Noetherian. Technical Report MTP-3, Louisiana Technical University (1979)
 19. Moser, G.: Proof Theory at Work: Complexity Analysis of Term Rewrite Systems. CoRR abs/0907.5527, Habilitation Thesis (2009)
 20. Moser, G., Schaper, M.: A Complexity Preserving Transformation from Jinja Bytecode to Rewrite Systems (2012). CoRR, cs/PL/1204.1568, last revision: 6 May 2014
 21. Noschinski, L., Emmes, F., Giesl, J.: Analyzing innermost runtime complexity of term rewriting by dependency pairs. JAR 51(1), 27–56 (2013)
 22. Hill, P.M., Payet, E., Spoto, F.: Path-length analysis of object-oriented programs. In: Proceedings of the 1st EAAI. Elsevier (2006)
 23. Plotkin, G.D.: LCF considered as a programming language. TCS 5(3), 223–255 (1977)
 24. Reynolds, J.C.: Definitional interpreters for higher-order programming languages. HOSC 11(4), 363–397 (1998)
 25. Sinn, M., Zuleger, F., Veith, H.: A simple and scalable static analysis for bound analysis and amortized complexity analysis. In: Biere, A., Bloem, R. (eds.) CAV 2014. LNCS, vol. 8559, pp. 745–761. Springer, Heidelberg (2014)
 26. TeReSe: Term Rewriting Systems. Cambridge Tracts in Theoretical Computer Science, vol. 55. Cambridge University Press, Cambridge (2003)
 27. Wilhelm, R., Engblom, J., Ermedahl, A., Holsti, N., Thesing, S., Whalley, D., Bernat, G., Ferdinand, C., Heckmann, R., Mitra, T., Mueller, F., Puaut, I., Puschner, P., Staschulat, J., Stenstrom, P.: The worst case execution time problem: overview of methods and survey of tools. TECS 7(3), 1–53 (2008)