Abstract
FuncTion is a static analyzer designed for proving conditional termination of C programs by means of abstract interpretation. Its underlying abstract domain is based on piecewise-defined functions, which provide an upper bound on the number of program execution steps until termination as a function of the program variables.
In this paper, we fully parameterize various aspects of the abstract domain, gaining a flexible balance between the precision and the cost of the analysis. We propose heuristics to improve the fixpoint extrapolation strategy (i.e., the widening operator) of the abstract domain. In particular, we identify new widening operators, which combine these heuristics to dramatically increase the precision of the analysis while offering good cost compromises. We also introduce a more precise, albeit costly, variable assignment operator and the support for choosing between integer and rational values for the piecewise-defined functions.
We combined these improvements to obtain an implementation of the abstract domain which subsumes the previous implementation. We provide experimental evidence in comparison with state-of-the-art tools showing a considerable improvement in precision at a minor cost in performance.
1 Introduction
Programming errors which cause nontermination can compromise software systems by making them unresponsive. Notorious examples are the Microsoft Zune Z2K bug^{Footnote 1} and the Microsoft Azure Storage service interruption^{Footnote 2}. Termination bugs can also be exploited in denial-of-service attacks^{Footnote 3}. Therefore, proving program termination is important for ensuring software reliability.
The traditional method for proving termination is based on the synthesis of a ranking function, a well-founded metric which strictly decreases during the program execution. FuncTion [36] is a static analyzer which automatically infers ranking functions and sufficient preconditions for program termination by means of abstract interpretation [13]. The tool is based on the abstract interpretation framework for termination introduced by Cousot and Cousot [14].
The underlying abstract domain of FuncTion is based on piecewise-defined ranking functions [40], which provide an upper bound on the number of program execution steps until termination as a function of the program variables. The piecewise-defined functions are represented by decision trees, where the decision nodes are labeled by linear constraints over the program variables, and the leaf nodes are labeled by functions of the program variables.
In this paper, we fully parameterize various aspects of the abstract domain, gaining a flexible balance between the precision and the cost of the analysis. We propose options to tune the representation of the domain and value of the ranking functions manipulated by the abstract domain. In particular, we introduce the support for using rational coefficients for the functions labeling the leaf nodes of the decision trees, all the while strengthening their decrease condition to still ensure termination. We also introduce a variable assignment operator which is very effective for programs with unbounded nondeterminism. Finally, we propose heuristics to improve the widening operator of the abstract domain. Specifically, we suggest a heuristic inspired by [1] to infer new linear constraints to add to a decision tree and two heuristics to infer a value for the leaf nodes on which the ranking function is not yet defined. We identify new widening operators, which combine these heuristics to dramatically increase the precision of the analysis while offering good cost compromises.
We combined these improvements to obtain an implementation of the abstract domain which subsumes the previous implementation. We provide experimental evidence in comparison with state-of-the-art tools [21, 22, 34] showing a considerable improvement in precision at a minor cost in performance.
Outline. Sect. 2 offers a glimpse into the theory behind proving termination by abstract interpretation. In Sect. 3, we recall the ranking functions abstract domain and we discuss options to tune the representation of the piecewise-defined functions manipulated by the abstract domain. We suggest new precise widening operators in Sect. 4. Section 5 presents the results of our experimental evaluation. We discuss related work in Sect. 6 and Sect. 7 concludes.
2 Termination and Ranking Functions
The traditional method for proving program termination dates back to Turing [35] and Floyd [17]. It consists in inferring a ranking function, namely a function from the program states to elements of a well-ordered set whose value decreases during program execution. The best known well-ordered sets are the natural numbers \(\langle \mathbb {N},\le \rangle \) and the ordinals \(\langle \mathbb {O},\le \rangle \), and the most obvious ranking function maps each program state to the number of program execution steps until termination, or some well-chosen upper bound on this number.
In [14], Cousot and Cousot formalize the notion of a most precise ranking function w for a program. Intuitively, it is a partial function defined starting from the program final states, where it has value zero, and retracing the program backwards while mapping each program state definitely leading to a final state (i.e., a program state such that all program execution traces to which it belongs are terminating) to an ordinal representing an upper bound on the number of program execution steps remaining to termination. The domain \(\mathrm {dom}(w)\) of w is the set of states from which the program execution must terminate: all traces branching from a state \(s \in \mathrm {dom}(w)\) terminate in at most w(s) execution steps, while at least one trace branching from a state \(s \not \in \mathrm {dom}(w)\) does not terminate.
Example 1
Let us consider the following execution traces of a given program:
The most precise ranking function for the program is iteratively defined as:
where unlabelled states are outside the domain of the function.
The most precise ranking function w is sound and complete for proving program termination [14]. However, it is usually not computable. In the following sections we recall and present various improvements on decidable approximations of w [40]. These overapproximate the value of w and underapproximate its domain of definition \(\mathrm {dom}(w)\). In this way, we infer sufficient preconditions for program termination: if the approximation is defined on a program state, then all execution traces branching from that state are terminating.
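To make this concrete, consider a small example of our own (not one of the paper's benchmarks): for the loop `while (x > 0) x = x - 1`, counting one execution step per loop test and one per assignment, the most precise ranking function is piecewise-defined:

```latex
w(x) =
\begin{cases}
2x + 1 & \text{if } x \ge 0 \quad \text{($x$ tests and $x$ assignments, plus the final failing test)}\\
1      & \text{if } x < 0 \quad \text{(the single failing test)}
\end{cases}
```

Here \(\mathrm{dom}(w)\) covers every state, since this loop terminates from every initial value of x.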
3 The Ranking Functions Abstract Domain
We use abstract interpretation [13] to approximate the most precise ranking function mentioned in the previous section. In [40], to this end, we introduce an abstract domain based on piecewisedefined ranking functions. We recall here (and in the next section) the features of the abstract domain that are relevant for our purposes and introduce various improvements and parameterizations to tune the precision of the abstract domain. We refer to [37] for an exhaustive presentation of the original ranking functions abstract domain.
The elements of the abstract domain are piecewise-defined partial functions. Their internal representation is inspired by the space partitioning trees [18] developed in the context of 3D computer graphics and the use of decision trees in program analysis and verification [3, 24]: the piecewise-defined partial functions are represented by decision trees, where the decision nodes are labeled by linear constraints over the program variables, and the leaf nodes are labeled by functions of the program variables. The decision nodes recursively partition the space of possible values of the program variables and the functions at the leaves provide the corresponding upper bounds on the number of program execution steps until termination. An example of decision tree representation of a piecewise-defined ranking function is shown in Fig. 1.
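As an illustration, the decision-tree representation can be sketched as follows (hypothetical types and encoding of our own, not FuncTion's actual OCaml implementation): decision nodes hold a linear constraint \(c_1 \cdot x_1 + \dots + c_k \cdot x_k \le c\), and leaves hold either an affine function \(m_1 \cdot x_1 + \dots + m_k \cdot x_k + q\) or an explicit "undefined" marker.

```python
# Minimal sketch of the decision-tree encoding (hypothetical, simplified).
BOTTOM, TOP = "bottom", "top"   # explicitly undefined leaves (TOP comes from widening)

def node(coeffs, const, yes, no):
    """Decision node: constraint sum(coeffs[i]*x[i]) <= const, with two subtrees."""
    return ("node", coeffs, const, yes, no)

def leaf(value):
    """Leaf: value is (m, q) for an affine function, or BOTTOM/TOP."""
    return ("leaf", value)

def evaluate(tree, env):
    """Evaluate the piecewise-defined ranking function on a concrete state."""
    if tree[0] == "leaf":
        value = tree[1]
        if value in (BOTTOM, TOP):
            return None                 # undefined: no termination bound
        m, q = value
        return sum(mi * xi for mi, xi in zip(m, env)) + q
    _, coeffs, const, yes, no = tree
    lhs = sum(ci * xi for ci, xi in zip(coeffs, env))
    return evaluate(yes if lhs <= const else no, env)
```

For instance, a tree testing \(x \le 0\) with an undefined true branch and the affine leaf \(2x + 1\) on the false branch represents the ranking function of a simple countdown loop.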
The partitioning is dynamic: during the analysis, partitions (resp. decision nodes and constraints) are split (resp. added) by tests, modified by variable assignments and joined (resp. removed) when merging control flows. In order to minimize the cost of the analysis, a widening limits the height of the decision trees and the number of maintained partitions.
The abstract domain is parameterized in various aspects. Figure 2 offers an overview of the various parameterizations currently available. We discuss here options to tune the representation of the domain and value of the ranking functions manipulated by the abstract domain. The discussion on options to tune the precision of the widening operator is postponed to the next section.
3.1 Domain Representation
The domain of a ranking function represented by a decision tree is partitioned into pieces which are determined by the linear constraints encountered along the paths to the leaves of the tree. The abstract domain supports linear constraints of different expressivity. In the following, we also propose an alternative strategy to modify the linear constraints as a result of a variable assignment. We plan to support nonlinear constraints as part of our future work.
Linear Constraints. We rely on existing numerical abstract domains for labeling the decision nodes with the corresponding linear constraints and for manipulating them. In order of expressivity, we support interval [12] constraints (i.e., of the form \(\pm x \le c\)), octagonal [30] constraints (i.e., of the form \(\pm x_i \pm x_j \le c\)), and polyhedral [15] constraints (i.e., of the form \(c_1 \cdot x_1 + \dots + c_k \cdot x_k \le c_{k+1}\)). As for efficiency, contrary to expectations, octagonal constraints are the costliest labeling in practice. The reason for this lies in how constraints are manipulated as a result of a variable assignment, which amplifies known performance drawbacks of octagons [19, 26]. We expand on this shortly.
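For illustration (an example of our own, with arbitrary coefficients), the three classes express increasingly general constraints:

```latex
\underbrace{x \le 9}_{\text{interval}}
\qquad
\underbrace{x - y \le -1}_{\text{octagonal}}
\qquad
\underbrace{2x - 3y \le 5}_{\text{polyhedral}}
```

Interval constraints cannot relate two variables (e.g., a guard x < y), and octagonal constraints cannot express non-unit coefficients.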
Assignment Operator. A variable assignment might impact some of the linear constraints within the decision nodes as well as some functions within the leaf nodes. The abstract domain now supports two strategies to modify the decision trees as a result of a variable assignment:

The default strategy [40] consists in carrying out a variable assignment independently on each linear constraint labeling a decision node and each function labeling a leaf of the decision tree. This strategy is cheap since it requires a single tree traversal. It is sometimes imprecise as shown in Fig. 3.

The new precise strategy consists in carrying out a variable assignment on each partition of a ranking function and then merging the resulting partitions. This strategy is costlier since it requires traversing the initial decision tree to identify the initial partitions, building a decision tree for each resulting partition, and traversing these decision trees to merge them. Note that building a decision tree requires sorting a number of linear constraints possibly higher than the height of the initial decision tree [37]. However, this strategy is much more precise as shown in Fig. 3.
Neither strategy works well with octagonal constraints. It is known that the original algorithms for manipulating octagons do not preserve their sparsity [19, 26]. An immediate consequence of this is that a variable assignment on a single octagonal constraint often yields multiple linear constraints. This effect is particularly amplified by the default assignment strategy described above. The precise assignment strategy suffers less from this, but the decision trees still tend to grow considerably in size. We plan to support sparsity-preserving algorithms for octagonal constraints as part of our future work.
3.2 Value Representation
The functions used for labeling the leaves of the decision trees are affine functions of the program variables (i.e., of the form \(m_1 \cdot x_1 + \dots + m_k \cdot x_k + q\)), plus the special elements \(\bot \) and \(\top \) which explicitly represent undefined functions (cf. Fig. 1b). The element \(\top \) shares the same meaning as \(\bot \) but is only introduced by the widening operator. We expand on this in the next section. More specifically, we support lexicographic affine functions \((f_k, \dots , f_1, f_0)\) in the isomorphic form of ordinals \(\omega ^k \cdot f_k + \dots + \omega \cdot f_1 + f_0\) [29, 39]. The maximum degree k of the polynomial is a parameter of the analysis. We leave nonlinear functions for future work.
The coefficients of the affine functions are by default integers [40] and we now also support rational coefficients. Note that, when using rational coefficients, the functions have to decrease by at least one at each program execution step to ensure termination. Indeed, a decreasing sequence of rational numbers is not necessarily finite. However, the integer parts of rational-valued functions which decrease by at least one at each program step yield a finite decreasing sequence.
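The following toy sketch (our own example, not FuncTion code) illustrates why the strengthened decrease condition suffices: rational coefficients alone do not certify termination, since 1, 1/2, 1/4, ... decreases forever; but for the loop `while (x > 0) x = x - 2`, the rational-valued rank f(x) = x/2 + 1 decreases by exactly one per iteration, so its integer part bounds the iteration count.

```python
from fractions import Fraction

def f(x):
    # Hypothetical rational-valued rank for "while x > 0: x -= 2".
    return Fraction(x, 2) + 1

def iterations(x):
    # Concretely execute the loop and count its iterations.
    n = 0
    while x > 0:
        x -= 2
        n += 1
    return n
```

Since f decreases by exactly 1 per iteration, the number of iterations from any initial state x is at most f(x).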
4 The Widening Operator on Ranking Functions
The widening operator \(\triangledown \) tries to predict a value for the ranking function over the states on which it is not yet defined. Thus, it has more freedom than traditional widening operators, in the sense that it is temporarily allowed to underapproximate the value of the most precise ranking function w (cf. Sect. 2) or overapproximate its domain of definition \(\mathrm {dom}(w)\), or both — in contrast with the observation made at the end of Sect. 2. The only requirement is that these discrepancies are resolved before the analysis stabilizes.
In more detail, given two decision trees \(t_1\) and \(t_2\), the widening operator goes through the following steps to compute \(t_1 \mathbin {\triangledown } t_2\) [40]:

Domain Widening. This step resolves a possible overapproximation of the domain \(\mathrm {dom}(w)\) of w following the inclusion of a program state from which a nonterminating program execution is reachable. This discrepancy manifests itself when a leaf in \(t_1\) is labeled by a function and its corresponding leaf in \(t_2\) is labeled by \(\bot \). The widening operator marks the offending leaf in \(t_2\) with \(\top \) to prevent successive iterates of the analysis from mistakenly including the same program state again in the domain of the ranking function.

Domain Extrapolation. This step extrapolates the domain of the ranking function over the states on which it is not yet defined. The default strategy consists in dropping the decision nodes that belong to \(t_2\) but not to \(t_1\) and merging the corresponding subtrees^{Footnote 4}. In this way we might lose information, but we ensure convergence by limiting the size of the decision trees.

Value Widening. This step resolves a possible underapproximation of the value of w and a possible overapproximation of the domain \(\mathrm {dom}(w)\) of w following the inclusion of a nonterminating program state. These discrepancies manifest themselves when the value of a function labeling a leaf in \(t_1\) is smaller than the value of the function labeling the corresponding leaf in \(t_2\). In this case, the default strategy consists again in marking the offending leaf in \(t_2\) with \(\top \) to exclude it from the rest of the analysis.

Value Extrapolation. This step extrapolates the value of the ranking function over the states that have been added to the domain of the ranking function in the last analysis iterate. These states are represented by the leaves in \(t_2\) that are labeled by a function while their corresponding leaves in \(t_1\) are labeled by \(\bot \). The default heuristic consists in increasing the gradient of the functions with respect to the functions labeling their adjacent leaves in the decision tree. Since programs often loop over consecutive values of a variable, we use the information available in adjacent partitions of the domain of the ranking function to infer the shape of the ranking function for the current partitions. An example is shown in Fig. 4.
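The treatment of corresponding leaves in the steps above can be sketched as follows (a simplified model of our own with plain integer values, not the actual implementation; the extrapolation of \(\bot\) leaves is handled separately by the value extrapolation heuristic):

```python
# Simplified per-leaf widening (names follow the steps above).
BOTTOM, TOP = "bottom", "top"

def widen_leaf(l1, l2):
    """Combine a leaf of t1 (l1) with its corresponding leaf of t2 (l2)."""
    if l1 == TOP:
        return TOP                  # an excluded state stays excluded
    defined1 = l1 not in (BOTTOM, TOP)
    defined2 = l2 not in (BOTTOM, TOP)
    if defined1 and l2 == BOTTOM:
        return TOP                  # domain widening
    if defined1 and defined2 and l1 < l2:
        return TOP                  # value widening (default strategy)
    return l2                       # otherwise keep the new leaf
```

For instance, a leaf whose value grew between iterates is marked with TOP under the default strategy, while a stable or decreasing value is kept.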
In the rest of the section, we suggest new heuristics to improve the default strategies used in the last three steps performed by the widening operator. Combining these heuristics yields new widening operators which dramatically increase the precision of the analysis while offering good cost compromises.
Note that, to improve precision, it is customary to avoid the use of the widening operator for a certain number of analysis iterates. In the following, we refer to this number as delay threshold.
4.1 Domain Extrapolation
The default strategy for the domain extrapolation never infers new linear constraints, and this hinders proving termination for some programs. In the following, we propose an alternative strategy which limits the number of decision nodes to be dropped during the analysis and labels them with new linear constraints. It is important to choose the newly added constraints carefully, to avoid slowing down the analysis unnecessarily and to make sure that the analysis still converges.
We suggest here a strategy inspired by the evolving rays heuristic presented in [1] to improve the widening operator of the polyhedra abstract domain [15]. The evolving strategy examines each linear constraint \(c_2\) in \(t_2\) (i.e., the decision tree corresponding to the last iterate of the analysis) as if it were generated by rotation of a linear constraint \(c_1\) in \(t_1\) (i.e., the decision tree corresponding to the previous iterate of the analysis). This rotation is formalized as follows [1]:
\[
\mathrm{evolve}(u, w) \overset{\text{def}}{=} v
\qquad
v_i =
\begin{cases}
0 & \text{if } \exists j.\ (u_i \cdot w_j - u_j \cdot w_i) \cdot u_i \cdot u_j < 0 \\
u_i & \text{otherwise}
\end{cases}
\]
where u and w are the vectors of coefficients of the linear constraints \(c_2\) in \(t_2\) and \(c_1\) in \(t_1\), respectively. In particular, evolve sets to zero the components of u that match the direction of rotation. Intuitively, the evolving strategy continues the rotation of \(c_2\) until one or more of the non-null coefficients of \(c_2\) become zero. The new constraint reaches one of the boundaries of the orthant where \(c_2\) lies without crossing it. This strategy is particularly useful in situations similar to the one depicted in Fig. 5a: the ranking function is defined over increasingly smaller pieces delimited by different rotations of a linear constraint. In such cases, the evolving strategy infers the linear constraints highlighted in red in Fig. 5b, thus extrapolating the domain of the ranking function up to the boundary of the orthant where the function is defined.
More specifically, the evolving strategy explores each pair of linear constraints on the same path in the decision tree \(t_2\) and modifies them as described above to obtain new constraints. The strategy then discards the less frequently obtained constraints. The relevant frequency is a parameter of the analysis which in the following we call the evolving threshold. In our experience, it is usually a good choice to set the evolving threshold to be equal to the delay threshold of the widening. The remaining constraints are used to substitute the linear constraints that appear in \(t_2\) but not in \(t_1\), possibly merging the corresponding subtrees.
Note that, by definition, the number of new linear constraints that can be added by the evolving strategy is finite. The evolving strategy then falls back to the default strategy, which guarantees the termination of the analysis.
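Following the cross-product formulation of the evolving rays heuristic [1], the evolve operation on coefficient vectors can be sketched as follows (a simplified integer-vector version of our own, not FuncTion's implementation):

```python
def evolve(u, w):
    """Zero out component i of u when some pair (i, j) shows the rotation
    from w to u moving that component toward zero."""
    n = len(u)
    v = []
    for i in range(n):
        zeroed = any((u[i] * w[j] - u[j] * w[i]) * u[i] * u[j] < 0
                     for j in range(n))
        v.append(0 if zeroed else u[i])
    return v
```

For example, evolving the constraint x - y <= c (coefficients [1, -1]) against x <= c (coefficients [1, 0]) continues the rotation until a component vanishes, yielding -y <= c (coefficients [0, -1]): the boundary of the orthant is reached but not crossed.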
4.2 Value Widening
The default strategy for the value widening marks with \(\top \) the leaves in \(t_2\) (i.e., the decision tree corresponding to the last iterate of the analysis) labeled with a larger value than their corresponding leaves in \(t_1\) (i.e., the decision tree corresponding to the previous iterate of the analysis). This resolves possible discrepancies in the approximation of the most precise ranking function w at the cost of losing precision in the analysis. As an example, consider the situation shown in Fig. 6: Fig. 6a depicts the most precise ranking function for a program and Fig. 6b depicts its approximation at the iterate immediately after widening. Note that one partition of the ranking function shown in Fig. 6b underapproximates the value of the ranking function shown in Fig. 6a. The default strategy would then label the offending partition with \(\top \), in essence giving up on trying to predict a value for the ranking function on that partition.
A simple and yet powerful improvement is to maintain the values of the offending leaves in \(t_2\) and continue the analysis. In this way, the analysis can make several attempts at predicting a stable value for the ranking function. Note that using this retrying strategy without caution would cause the analysis not to converge for a number of programs. Instead, we limit the number of attempts to a certain retrying threshold, and then revert to the default strategy.
The retrying strategy for ordinals of the form \(\omega ^k \cdot f_k + \dots + \omega \cdot f_1 + f_0\) (cf. Sect. 3.2) behaves analogously to the other abstract domain operators for manipulating ordinals [39]. It works in ascending powers of \(\omega \) carrying to the next higher degree when the retrying threshold has been reached (up to the maximum degree for the polynomial, in which case we default to \(\top \)).
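The retrying strategy for a single leaf can be sketched as follows (a hypothetical helper of our own with plain integer values; the carrying to higher powers of \(\omega\) is omitted):

```python
TOP = "top"

def value_widen(l1, l2, attempts, retry_threshold):
    """Value widening with retrying: keep an unstable (grown) value and retry,
    up to retry_threshold attempts, then revert to the default strategy."""
    if isinstance(l1, int) and isinstance(l2, int) and l1 < l2:
        if attempts < retry_threshold:
            return l2      # keep the larger value and try again
        return TOP         # threshold reached: give up on this leaf
    return l2
```

With a retrying threshold of zero this degenerates to the default strategy, which marks any grown leaf with TOP immediately.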
4.3 Value Extrapolation
The default heuristic for the value extrapolation consists in increasing the gradient of the ranking function with respect to its value in adjacent partitions of its domain. Note that many other heuristics are possible. In fact, this step only affects the precision of the analysis, and not its convergence or its soundness.
In this paper, we propose a selective extrapolation heuristic, which increases the gradient of the ranking function with respect to selected partitions of its domain. More specifically, the heuristic selects the partitions from which the current partition is reachable in one loop iteration. This strategy is particularly effective in combination with the evolving strategy described in Sect. 4.1. Indeed, the evolving strategy often splits partitions by adding new linear constraints and, in some cases, this affects the precision of the analysis since it alters the adjacency relationships between the pieces on which the ranking function is defined.
We plan to investigate other strategies as part of our future work.
5 Implementation and Experimental Evaluation
The ranking functions abstract domain and the new parameterizations introduced in this paper are implemented in FuncTion [36] and are available online^{Footnote 5}. The implementation is in OCaml and consists of around 3K lines of code. The current frontend of FuncTion accepts programs written in a (subset of) C, without struct and union types. It provides only limited support for arrays, pointers, and recursion. The only basic data type is mathematical integers, deviating from the standard semantics of C. The abstract domain builds on the numerical abstract domains provided by the APRON library [25].
The analysis proceeds by structural induction on the program syntax, iterating loops until a fixpoint is reached. In case of nested loops, a fixpoint on the inner loop is computed for each iteration of the outer loop, following [4, 31]. It is also possible to refine the analysis by only considering the reachable states.
Experimental Evaluation. The ranking functions abstract domain was evaluated on 242 terminating C programs collected from the 5th International Competition on Software Verification (SV-COMP 2016). Due to the limitations of the current frontend of FuncTion, we were not able to analyze 47% of the test cases. The experiments were performed on a system with a 3.20 GHz 64-bit Dual-Core CPU (Intel i5-3470) and 6 GB of RAM, running Ubuntu 16.04.1 LTS.
We compared multiple configurations of parameters for the abstract domain. We report here the result obtained with the most relevant configurations. Unless otherwise specified, the common configuration of parameters uses the default strategy for handling variable assignments (cf. Sect. 3.1), a maximum degree of two for ordinals using integer coefficients for affine functions (cf. Sect. 3.2), and a delay threshold of three for the widening (cf. Sect. 4). Figure 7 presents the results obtained using polyhedral constraints. Figure 8 shows the successful configurations for each test case. Using interval constraints yields fewer successful test cases (around 50% fewer successes) but it generally ensures better runtimes. The exception is a slight slowdown of the analysis when using rational coefficients, which is not observed when using polyhedral constraints. We did not evaluate the use of octagonal constraints due to the performance drawbacks discussed in Sect. 3.1. We used a time limit of 300 s for each test case.
We can observe that using the retrying strategy always improves the overall analysis result: configurations 3, 6, 7, and 9 are more successful than the corresponding configurations 1, 4, 5, and 8, which instead use the default strategy. In particular, configuration 3 is the best configuration in terms of number of successes (cf. Fig. 7). However, in general, improving the precision of the widening operator does not necessarily improve the overall analysis result. More specifically, configurations 4 to 9 seem to perform generally worse than configurations 1 and 3 both in terms of number of successes and running times. However, although these configurations are not effective for a number of programs for which configurations 1 and 3 are successful, they are not subsumed by them since they allow proving termination of many other programs (cf. Fig. 8).
Another interesting observation is that using rational coefficients in configuration 2 worsens the result of the analysis compared to configuration 1 which uses integer coefficients (cf. Fig. 8). Instead, using rational coefficients in configuration 10 allows proving termination for a number of programs for which configuration 9 (which uses integer coefficients) is unsuccessful.
The configurations using the evolving strategy (i.e., 4, 6, 8, 9, and 10) tend to be slower than the configurations which use the default strategy. As a consequence, they suffer from a higher number of timeouts (cf. Fig. 7). Even worse is the slowdown caused by the precise strategy to handle variable assignments (cf. configurations 11 and 12) and a higher delay threshold for the widening (cf. configuration 12). We observed that a delay threshold higher than six only marginally improves precision while significantly worsening running times.
Finally, we observed that there are some configurations for which decreasing the precision of the linear constraints (from polyhedral to interval constraints) allows proving termination of some more programs. In particular, this concerns configuration 2 as well as some of the other configurations when limiting the analysis to the reachable states. However, this happens very rarely: overall, only three programs can be proven terminating only using interval constraints.
We also compared FuncTion against the tools participating in SV-COMP 2016: AProVE [34], SeaHorn [21, 38] and UAutomizer [22]. We did not compare with other tools such as T2 [6] and 2LS [9] since FuncTion does not yet support the input format of T2 or the bit-precise integer semantics used by 2LS. As we observed that most of the parameter configurations of the abstract domain do not subsume each other, for the comparison we set up FuncTion to use multiple parameter combinations successively, each with a time limit of 25 s. More specifically, we first use configuration 3, which offers the best compromise between number of successes and running times. We then move on to configurations that use the evolving strategy and the selective strategy, which are successful for other programs at the cost of an increased running time. Finally, we try the even more costly configurations that use the precise strategy for handling variable assignments and a higher delay threshold for the widening.
We ran FuncTion on the same machine as above, while for the other tools we used the results of SV-COMP 2016 since our machine was not powerful enough to run them. The time limit per test case was again 300 s. Figure 9 shows the result of the comparison and Fig. 10 shows the successful tools for each test case. We can observe that, despite being less successful than AProVE or UAutomizer, FuncTion is able to prove termination of a substantial number of programs (i.e., 80% of the test cases, cf. Fig. 9). Moreover, FuncTion is generally faster than all other tools, despite the fact that these were run on more powerful machines. Finally, we can observe in Fig. 10 that for each tool there is a small subset of the test cases for which it is the only successful tool. The four tools together are able to prove termination for all the test cases.
6 Related Work
In the recent past, termination analysis has benefited from many research advances and powerful termination provers have emerged over the years.
AProVE [34] is probably the most mature tool in the field. Its underlying theory is the size-change termination approach [27], which originated in the context of term rewriting systems and consists in collecting a set of size-change graphs (representing function calls) and combining them into multipaths (representing program executions) in such a way that at least one variable is guaranteed to decrease. Compared to size-change termination, FuncTion avoids the exploration of the combinatorial space of multipaths by manipulating ordinals.
Terminator [10] is based on the transition invariants method introduced in [33]. More specifically, the tool iteratively constructs transition invariants by searching within a program for single paths representing potential counterexamples to termination, computing a ranking function for each one of them individually (as in [32]), and combining the obtained ranking functions into a single termination argument. Its successor, T2 [6], has abandoned the transition invariants approach in favor of lexicographic ranking functions [11] and has broadened its scope to a wide range of temporal properties [7].
UAutomizer [22] is a software model checker based on an automata-theoretic approach to software verification [23]. Similarly to Terminator, it reduces proving termination to proving that no program state is repeatedly visited (and it is not covered by the current termination argument), and composes termination arguments by repeatedly invoking a ranking function synthesis tool [28]. In contrast, the approach recently implemented in the software model checker SeaHorn [21] systematically samples terminating program executions and extrapolates from these a ranking function [38] using an approach which resembles the value extrapolation of the widening operator implemented in FuncTion.
Finally, another recent addition to the family of termination provers is 2LS [9], which implements a bit-precise interprocedural termination analysis. The analysis solves a series of second-order logic formulae by reducing them to first-order logic using polyhedral templates. In contrast with the tools mentioned above, both 2LS and FuncTion prove conditional termination.
7 Conclusion and Future Work
In this paper, we fully parameterized various aspects of the ranking functions abstract domain implemented in the static analyzer FuncTion. We identified new widening operators, which increase the precision of the analysis while offering good cost compromises. We also introduced options to tune the representation of the ranking functions manipulated by the abstract domain. By combining these improvements, we obtained an implementation which subsumes the previous implementation and is competitive with state-of-the-art termination provers.
In the future, we would like to extend the abstract domain to also support nonlinear constraints, such as congruences [20], and nonlinear functions, such as polynomials [5] or exponentials [16]. In addition, we plan to support sparsity-preserving algorithms for manipulating octagonal constraints [19, 26]. We would also like to investigate new strategies to predict a value for the ranking function during widening. Finally, we plan to work on proving termination of more complex programs, such as heap-manipulating programs. We would like to investigate the adaptability of existing methods [2] and existing abstract domains for heap analysis [8], and possibly design new techniques.
Notes
 4.
We require the decision nodes belonging to \(t_1\) to be a subset of those belonging to \(t_2\). This can always be ensured by computing \(t_1 \mathbin {\triangledown } (t_1 \sqcup t_2)\) instead of \(t_1 \mathbin {\triangledown } t_2\).
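This precondition mirrors the standard requirement that the arguments of a widening be ordered. As a minimal sketch (using a simple interval domain for illustration, not FuncTion's actual decision-tree domain), widening \(x\) with \(x \sqcup y\) rather than with \(y\) guarantees the ordering by construction:

```python
# Hypothetical interval abstract domain: an element is a pair (lo, hi).
# This only illustrates the join-before-widening trick from footnote 4.

def join(a, b):
    # Least upper bound of two intervals.
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(a, b):
    # Classic interval widening: unstable bounds jump to infinity.
    # Precondition (analogous to the footnote): a is included in b.
    lo = a[0] if a[0] <= b[0] else float("-inf")
    hi = a[1] if a[1] >= b[1] else float("inf")
    return (lo, hi)

# x and y need not be ordered, but x is always included in join(x, y),
# so widen(x, join(x, y)) satisfies the precondition by construction.
x, y = (0, 5), (3, 10)
print(widen(x, join(x, y)))  # (0, inf): only the unstable upper bound jumps
```

The same reasoning applies to the decision-tree domain: taking the join first only adds decision nodes, so the nodes of \(t_1\) become a subset of those of \(t_1 \sqcup t_2\).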
References
Bagnara, R., Hill, P.M., Ricci, E., Zaffanella, E.: Precise widening operators for convex polyhedra. Sci. Comput. Program. 58(1–2), 28–56 (2005)
Berdine, J., Cook, B., Distefano, D., O’Hearn, P.W.: Automatic termination proofs for programs with shape-shifting heaps. In: Ball, T., Jones, R.B. (eds.) CAV 2006. LNCS, vol. 4144, pp. 386–400. Springer, Heidelberg (2006). doi:10.1007/11817963_35
Bertrane, J., Cousot, P., Cousot, R., Feret, J., Mauborgne, L., Miné, A., Rival, X.: Static analysis and verification of aerospace software by abstract interpretation. In: AIAA (2010)
Bourdoncle, F.: Efficient chaotic iteration strategies with widenings. In: Bjørner, D., Broy, M., Pottosin, I.V. (eds.) Formal Methods in Programming and Their Applications. LNCS, vol. 735, pp. 128–141. Springer, Heidelberg (1993). doi:10.1007/BFb0039704
Bradley, A.R., Manna, Z., Sipma, H.B.: The polyranking principle. In: Caires, L., Italiano, G.F., Monteiro, L., Palamidessi, C., Yung, M. (eds.) ICALP 2005. LNCS, vol. 3580, pp. 1349–1361. Springer, Heidelberg (2005). doi:10.1007/11523468_109
Brockschmidt, M., Cook, B., Fuhs, C.: Better termination proving through cooperation. In: Sharygina, N., Veith, H. (eds.) CAV 2013. LNCS, vol. 8044, pp. 413–429. Springer, Heidelberg (2013). doi:10.1007/978-3-642-39799-8_28
Brockschmidt, M., Cook, B., Ishtiaq, S., Khlaaf, H., Piterman, N.: T2: temporal property verification. In: Chechik, M., Raskin, J.-F. (eds.) TACAS 2016. LNCS, vol. 9636, pp. 387–393. Springer, Heidelberg (2016). doi:10.1007/978-3-662-49674-9_22
Chang, B.-Y.E., Rival, X.: Modular construction of shape-numeric analyzers. In: Festschrift for Dave Schmidt, pp. 161–185 (2013)
Chen, H.-Y., David, C., Kroening, D., Schrammel, P., Wachter, B.: Synthesising interprocedural bit-precise termination proofs. In: ASE, pp. 53–64 (2015)
Cook, B., Podelski, A., Rybalchenko, A.: Terminator: beyond safety. In: Ball, T., Jones, R.B. (eds.) CAV 2006. LNCS, vol. 4144, pp. 415–418. Springer, Heidelberg (2006). doi:10.1007/11817963_37
Cook, B., See, A., Zuleger, F.: Ramsey vs. lexicographic termination proving. In: Piterman, N., Smolka, S.A. (eds.) TACAS 2013. LNCS, vol. 7795, pp. 47–61. Springer, Heidelberg (2013). doi:10.1007/978-3-642-36742-7_4
Cousot, P., Cousot, R.: Static determination of dynamic properties of programs. In: Symposium on Programming, pp. 106–130 (1976)
Cousot, P., Cousot, R.: Abstract interpretation: a unified lattice model for static analysis of programs by construction or approximation of fixpoints. In: POPL, pp. 238–252 (1977)
Cousot, P., Cousot, R.: An abstract interpretation framework for termination. In: POPL, pp. 245–258 (2012)
Cousot, P., Halbwachs, N.: Automatic discovery of linear restraints among variables of a program. In: POPL, pp. 84–96 (1978)
Feret, J.: The arithmetic-geometric progression abstract domain. In: Cousot, R. (ed.) VMCAI 2005. LNCS, vol. 3385, pp. 42–58. Springer, Heidelberg (2005). doi:10.1007/978-3-540-30579-8_3
Floyd, R.W.: Assigning meanings to programs. In: Proceedings of Symposium on Applied Mathematics, vol. 19, pp. 19–32 (1967)
Fuchs, H., Kedem, Z.M., Naylor, B.F.: On visible surface generation by a priori tree structures. SIGGRAPH Comput. Graph. 14(3), 124–133 (1980)
Gange, G., Navas, J.A., Schachte, P., Søndergaard, H., Stuckey, P.J.: Exploiting sparsity in difference-bound matrices. In: Rival, X. (ed.) SAS 2016. LNCS, vol. 9837, pp. 189–211. Springer, Heidelberg (2016). doi:10.1007/978-3-662-53413-7_10
Granger, P.: Static analysis of arithmetic congruences. Int. J. Comput. Math. 30, 165–199 (1989)
Gurfinkel, A., Kahsai, T., Navas, J.A.: SeaHorn: a framework for verifying C programs (competition contribution). In: Baier, C., Tinelli, C. (eds.) TACAS 2015. LNCS, vol. 9035, pp. 447–450. Springer, Heidelberg (2015). doi:10.1007/978-3-662-46681-0_41
Heizmann, M., Dietsch, D., Greitschus, M., Leike, J., Musa, B., Schätzle, C., Podelski, A.: Ultimate automizer with two-track proofs. In: Chechik, M., Raskin, J.-F. (eds.) TACAS 2016. LNCS, vol. 9636, pp. 950–953. Springer, Heidelberg (2016). doi:10.1007/978-3-662-49674-9_68
Heizmann, M., Hoenicke, J., Podelski, A.: Software model checking for people who love automata. In: Sharygina, N., Veith, H. (eds.) CAV 2013. LNCS, vol. 8044, pp. 36–52. Springer, Heidelberg (2013). doi:10.1007/978-3-642-39799-8_2
Jeannet, B.: Representing and approximating transfer functions in abstract interpretation of heterogeneous datatypes. In: Hermenegildo, M.V., Puebla, G. (eds.) SAS 2002. LNCS, vol. 2477, pp. 52–68. Springer, Heidelberg (2002). doi:10.1007/3-540-45789-5_7
Jeannet, B., Miné, A.: Apron: a library of numerical abstract domains for static analysis. In: Bouajjani, A., Maler, O. (eds.) CAV 2009. LNCS, vol. 5643, pp. 661–667. Springer, Heidelberg (2009). doi:10.1007/978-3-642-02658-4_52
Jourdan, J.-H.: Sparsity preserving algorithms for octagons. In: NSAD (2016)
Lee, C.S., Jones, N.D., Ben-Amram, A.M.: The size-change principle for program termination. In: POPL, pp. 81–92 (2001)
Leike, J., Heizmann, M.: Ranking templates for linear loops. In: Ábrahám, E., Havelund, K. (eds.) TACAS 2014. LNCS, vol. 8413, pp. 172–186. Springer, Heidelberg (2014). doi:10.1007/978-3-642-54862-8_12
Manna, Z., Pnueli, A.: The Temporal Verification of Reactive Systems: Progress (1996)
Miné, A.: The octagon abstract domain. High.-Order Symb. Comput. 19(1), 31–100 (2006)
Muthukumar, K., Hermenegildo, M.V.: Compile-time derivation of variable dependency using abstract interpretation. J. Log. Program. 13(2/3), 315–347 (1992)
Podelski, A., Rybalchenko, A.: A complete method for the synthesis of linear ranking functions. In: Steffen, B., Levi, G. (eds.) VMCAI 2004. LNCS, vol. 2937, pp. 239–251. Springer, Heidelberg (2004). doi:10.1007/978-3-540-24622-0_20
Podelski, A., Rybalchenko, A.: Transition invariants. In: LICS, pp. 32–41 (2004)
Ströder, T., Aschermann, C., Frohn, F., Hensel, J., Giesl, J.: AProVE: termination and memory safety of C programs. In: Baier, C., Tinelli, C. (eds.) TACAS 2015. LNCS, vol. 9035, pp. 417–419. Springer, Heidelberg (2015). doi:10.1007/978-3-662-46681-0_32
Turing, A.: Checking a large routine. In: Report of a Conference on High Speed Automatic Calculating Machines, pp. 67–69 (1949)
Urban, C.: FuncTion: an abstract domain functor for termination. In: Baier, C., Tinelli, C. (eds.) TACAS 2015. LNCS, vol. 9035, pp. 464–466. Springer, Heidelberg (2015). doi:10.1007/978-3-662-46681-0_46
Urban, C.: Static analysis by abstract interpretation of functional temporal properties of programs. Ph.D. thesis, École Normale Supérieure, July 2015
Urban, C., Gurfinkel, A., Kahsai, T.: Synthesizing ranking functions from bits and pieces. In: Chechik, M., Raskin, J.-F. (eds.) TACAS 2016. LNCS, vol. 9636, pp. 54–70. Springer, Heidelberg (2016). doi:10.1007/978-3-662-49674-9_4
Urban, C., Miné, A.: An abstract domain to infer ordinal-valued ranking functions. In: Shao, Z. (ed.) ESOP 2014. LNCS, vol. 8410, pp. 412–431. Springer, Heidelberg (2014). doi:10.1007/978-3-642-54833-8_22
Urban, C., Miné, A.: A decision tree abstract domain for proving conditional termination. In: Müller-Olm, M., Seidl, H. (eds.) SAS 2014. LNCS, vol. 8723, pp. 302–318. Springer, Cham (2014). doi:10.1007/978-3-319-10936-7_19
© 2017 Springer-Verlag GmbH Germany
Courant, N., Urban, C. (2017). Precise Widening Operators for Proving Termination by Abstract Interpretation. In: Legay, A., Margaria, T. (eds.) Tools and Algorithms for the Construction and Analysis of Systems. TACAS 2017. Lecture Notes in Computer Science, vol. 10205. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-54577-5_8
Print ISBN: 978-3-662-54576-8
Online ISBN: 978-3-662-54577-5